The Overview Pyramid describes and characterizes the structure of an object-oriented system by looking at three main areas: size and complexity, coupling, and inheritance. This visualization technique was introduced by Michele Lanza and Radu Marinescu in their book Object-Oriented Metrics in Practice. In this blog post we'll see how to compute all the necessary metrics with NDepend.
The Overview Pyramid
The purpose of the Overview Pyramid is to provide a high-level overview of any object-oriented system. It does this by gathering in one place some of the most important measurements about a software system: the left part describes size and complexity, the right part describes coupling, and the top part describes inheritance usage.
The Overview Pyramid uses two types of measurements:
- Direct metrics. These are absolute values (e.g. the number of packages). You can see them in the center of the pyramid.
- Computed proportions. These are computed by dividing each direct metric by the one above it. Being proportions, they are independent of one another and allow comparison between projects. You can see them at the left and right extremities of the pyramid.
Size and Complexity
These metrics measure how big and how complex the software is.
Direct metrics
- NOP – Number of Packages – the number of high level packages. Depending on how you define the logical components in your code base, this can be either assemblies or namespaces.
- NOC – Number of Classes – the number of classes defined in the system.
- NOM – Number of Operations – the number of methods defined in the system.
- LOC – Lines of Code – the number of lines of code of all methods.
- CYCLO – Cyclomatic Number – the sum of the Cyclomatic Complexity for all methods.
Computed Proportions
- High-level structuring (NOC/Package) – indicates whether packages tend to be coarse-grained or fine-grained.
- Class structuring (NOM/Class) – indicates whether classes tend to have too many methods.
- Operation structuring (LOC/Operation) – indicates how long, and hence how complex, the defined methods are.
- Intrinsic operation complexity (CYCLO/Code Line) – indicates how much conditional complexity we should expect per line of code.
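To make the arithmetic concrete, here is a small Python sketch with made-up direct metrics for an imaginary system (all the numbers are hypothetical); each proportion divides a direct metric by the one above it in the pyramid:

```python
# Hypothetical direct metrics for an imaginary system
NOP = 5       # packages
NOC = 120     # classes
NOM = 960     # methods
LOC = 6720    # lines of code
CYCLO = 1344  # total cyclomatic complexity

# Each proportion divides a direct metric by the one above it in the pyramid
classes_per_package = NOC / NOP  # high-level structuring
methods_per_class = NOM / NOC    # class structuring
lines_per_method = LOC / NOM     # operation structuring
cyclo_per_line = CYCLO / LOC     # intrinsic operation complexity

print(classes_per_package, methods_per_class, lines_per_method, cyclo_per_line)
# → 24.0 8.0 7.0 0.2
```

Because these are ratios, the same four numbers can be compared across projects of very different absolute sizes.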
CQLinq Query
```csharp
// ** Size & Complexity **

// * Direct Metrics *
let NOP = JustMyCode.Assemblies.Count()
// If you use namespaces as logical components,
// you can count the number of namespaces instead:
// let NOP = JustMyCode.Namespaces.Count()
let NOC = JustMyCode.Types
    .Where(t => t.IsClass)
    .Count()
let NOM = JustMyCode.Methods
    .Where(m => m.NbLinesOfCode.HasValue)
    .Count()
let LOC = JustMyCode.Methods
    .Select(m => (int)m.NbLinesOfCode.GetValueOrDefault())
    .Sum()
let Cyclo = JustMyCode.Methods
    .Select(m => (int)m.CyclomaticComplexity.GetValueOrDefault())
    .Sum()

// * Computed Proportions *
let ClassesPerPackage = (double)NOC / NOP
let MethodsPerClass = (double)NOM / NOC
let LinesPerMethod = (double)LOC / NOM
let OperationComplexity = (double)Cyclo / LOC
```
Coupling
These measurements try to characterize how intensive and how dispersed the system coupling is.
Direct Metrics
- CALLS – Number of Operation Calls – the number of method calls in the system, defined as the sum over all methods of the number of distinct methods each one calls.
- FANOUT – Number of Called Classes – the sum over all methods of the number of distinct classes whose methods each one calls.
Computed Proportions
- Coupling intensity (CALLS/Operation) – how many other methods are called on average from each method. High values indicate excessive coupling.
- Coupling dispersion (FANOUT/Operation Call) – how many classes the coupling involves, on average, per method call.
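The definitions above can be illustrated with a toy model (all class and method names are invented). Each method maps to the set of distinct methods it calls, with each called method identified as a (class, method) pair:

```python
# Toy model: each method maps to the distinct methods it calls,
# each called method identified as (class_name, method_name).
calls_by_method = {
    "OrderService.Place": {("Cart", "Total"), ("Cart", "Clear"), ("Mailer", "Send")},
    "OrderService.Cancel": {("Mailer", "Send")},
    "Cart.Total": set(),
}

NOM = len(calls_by_method)

# CALLS: sum over all methods of the number of distinct methods they call
CALLS = sum(len(called) for called in calls_by_method.values())

# FANOUT: sum over all methods of the number of distinct classes they call into
FANOUT = sum(len({cls for cls, _ in called}) for called in calls_by_method.values())

calls_per_operation = CALLS / NOM   # coupling intensity
fanout_per_call = FANOUT / CALLS    # coupling dispersion

print(CALLS, FANOUT)  # → 4 3
```

Note that a dispersion close to 1 means almost every call reaches into a different class, while lower values mean calls are concentrated on a few provider classes.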
CQLinq Query
```csharp
// ** Coupling **

// * Direct Metrics *
let Calls = JustMyCode.Methods
    .Select(m => (int)m.NbMethodsCalled.GetValueOrDefault())
    .Sum()
let Fanout = JustMyCode.Methods
    .Select(m => m.MethodsCalled
        .Select(called => called.ParentType)
        .ToHashSet()
        .Count())
    .Sum()

// * Computed Proportions *
let CallsPerOperation = (double)Calls / NOM
let FanoutPerCall = (double)Fanout / Calls
```
Inheritance
These measurements try to characterize how much inheritance is used throughout the code base.
Computed Proportions
- ANDC – Average Number of Derived Classes. This metric characterizes the width of the inheritance tree by computing the average number of direct subclasses of a class. It counts only classes defined in the system (interfaces are not counted).
- AHH – Average Hierarchy Height. This metric characterizes the depth of the inheritance tree. It’s computed as the average of the Height of the Inheritance Tree (HIT) for root classes. A class is a root if it is not derived from another class in the system. HIT for a class is the maximum path length from it to its deepest subclass.
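A small Python sketch may help clarify ANDC and AHH. The hierarchy below is invented for illustration: each class maps to its parent, with `None` marking a root class:

```python
# Toy class hierarchy: child -> parent (None marks a root class)
parent = {
    "Animal": None,
    "Dog": "Animal",
    "Cat": "Animal",
    "Puppy": "Dog",
    "Shape": None,
}

classes = list(parent)

# Direct subclasses of each class
direct_children = {c: [k for k, p in parent.items() if p == c] for c in classes}

# ANDC: average number of direct subclasses per class
ANDC = sum(len(kids) for kids in direct_children.values()) / len(classes)

# HIT: maximum path length from a class down to its deepest subclass
def hit(c):
    kids = direct_children[c]
    return 0 if not kids else 1 + max(hit(k) for k in kids)

# AHH: average HIT over root classes
roots = [c for c, p in parent.items() if p is None]
AHH = sum(hit(r) for r in roots) / len(roots)

print(ANDC, AHH)  # → 0.6 1.0
```

Here `Animal` has HIT 2 (via `Dog` → `Puppy`) and `Shape` has HIT 0, so AHH averages to 1.0: on average, hierarchies in this toy system are one level deep.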
CQLinq Query
```csharp
// ** Inheritance **

// * Computed Proportions *

// ANDC
let ANDC = JustMyCode.Types
    .Where(t => !t.IsInterface)
    .Average(t => t.DirectDerivedTypes.Count())

// AHH
let justMyTypes = JustMyCode.Types.ToHashSet()
let rootClasses = JustMyCode.Types
    .Where(t => !t.IsInterface &&
        // a root class cannot have its base class in the system
        t.BaseClasses.Intersect(justMyTypes).Count() == 0)
let hitSum = rootClasses.Sum(c =>
    // the maximum path from the root to its deepest subclass
    c.DerivedTypes.Max(d => d.DepthOfDeriveFrom(c)))
let rootCount = rootClasses.Count()
let AHH = (double?)hitSum / rootCount
```
Putting it all together
Here is a CQLinq query that computes all the required metrics:
```csharp
// ** Size & Complexity **

// * Direct Metrics *
let NOP = JustMyCode.Assemblies.Count()
let NOC = JustMyCode.Types
    .Where(t => t.IsClass)
    .Count()
let NOM = JustMyCode.Methods
    .Where(m => m.NbLinesOfCode.HasValue)
    .Count()
let LOC = JustMyCode.Methods
    .Select(m => (int)m.NbLinesOfCode.GetValueOrDefault())
    .Sum()
let Cyclo = JustMyCode.Methods
    .Select(m => (int)m.CyclomaticComplexity.GetValueOrDefault())
    .Sum()

// * Computed Proportions *
let ClassesPerPackage = (double)NOC / NOP
let MethodsPerClass = (double)NOM / NOC
let LinesPerMethod = (double)LOC / NOM
let OperationComplexity = (double)Cyclo / LOC

// ** Coupling **

// * Direct Metrics *
let Calls = JustMyCode.Methods
    .Select(m => (int)m.NbMethodsCalled.GetValueOrDefault())
    .Sum()
let Fanout = JustMyCode.Methods
    .Select(m => m.MethodsCalled
        .Select(called => called.ParentType)
        .ToHashSet()
        .Count())
    .Sum()

// * Computed Proportions *
let CallsPerOperation = (double)Calls / NOM
let FanoutPerCall = (double)Fanout / Calls

// ** Inheritance **

// * Computed Proportions *

// ANDC
let ANDC = JustMyCode.Types
    .Where(t => !t.IsInterface)
    .Average(t => t.DirectDerivedTypes.Count())

// AHH
let justMyTypes = JustMyCode.Types.ToHashSet()
let rootClasses = JustMyCode.Types
    .Where(t => !t.IsInterface &&
        // a root class cannot have its base class in the system
        t.BaseClasses.Intersect(justMyTypes).Count() == 0)
let hitSum = rootClasses.Sum(c =>
    // the maximum path from the root to its deepest subclass
    c.DerivedTypes.Max(d => d.DepthOfDeriveFrom(c)))
let rootCount = rootClasses.Count()
let AHH = (double?)hitSum / rootCount

// Only IMethod, IField, IType, INamespace or IAssembly
// are accepted as the first result argument.
let ignoreMe = Application.Assemblies.First()
select new {
    ignoreMe,
    NOP, NOC, NOM, LOC, Cyclo,
    ClassesPerPackage, MethodsPerClass, LinesPerMethod, OperationComplexity,
    Calls, Fanout, CallsPerOperation, FanoutPerCall,
    ANDC, AHH
}
```
Conclusion
The Overview Pyramid can help you get a first impression of the most important measurements of a software system. Object-Oriented Metrics in Practice describes how to interpret the pyramid by using statistical information. It defines statistical thresholds (low, medium, high) for each of the eight computed metrics that you can use as reference points. If you want a quick summary of Object-Oriented Metrics in Practice, you can read my review of the book.
I didn't find a tool that generates the Overview Pyramid for .NET projects. The good news is that it can be done with NDepend. Querying the code base and defining custom code metrics are two powerful features of NDepend, and with them at your disposal, writing the CQLinq queries to compute the required metrics is a simple task.
Thanks for the post – I’m currently researching this exact topic myself. Out of interest, are you using the same thresholds for C# as those described in the book for Java?
Your CQLinq approach looks like much less work than mine – I augmented the VS metrics report using Mono.Cecil
Hi, Geoffrey! Yes, I’m using the thresholds described in the book.
Thanks for your reply 🙂 Have you actually applied the metrics to your work?
Yes. As the name says, these metrics are good for getting an overview of a code base. On my current project we have multiple repositories, so I use them to get an overview of how big each one is and how they compare with one another.