Using code metrics to guide code reviews


I was reading an interesting article recently on The Register called "What Compsci textbooks don't tell you: Real world code sucks". Whilst the title is quite brash, I found myself nodding along in agreement as I read it. As a leader of developers, I care a lot about code quality, both in the legacy systems I have inherited and in the new systems we are developing. I thought I would comment on one of the points in the original post:

"A tremendous amount of source code written for real applications is not merely less perfect than the simple examples seen in school - it's outright terrible by any number of measures. Due to bad design, sloppy or opaque coding practices, non-scalability, and layers of ugly "temporary" patches, it's often difficult to maintain, harder still to modify or upgrade, painful or impossible for a new person joining the dev team to understand, or (a different kind of problem) slow and inefficient. In short, a mess. Of course there are many exceptions, but they're just that: exceptions. In my experience, software is, almost as a rule, bad in one way or another."

I have inherited systems throughout my career that are very challenging to work with. They contain code that is very hard to maintain, doesn't scale and has virtually no unit test coverage. Pretty much all of the original authors have moved on, but that still leaves a wasteland of code for my team to maintain.

I now have a decent team working under me, and they are trying as best as they can to tame this huge legacy beast. They are doing a pretty good job with what they have got, but we still find ourselves having to make compromises that make us uneasy due to time pressures, shifting priorities and many other reasons.

It can get pretty frustrating when looking at code that you know contains problems throughout. I would say that the best way to try and drive up the overall code quality isn't to criticize the actual code all the time, as there are not enough hours in the day to go over everything. The best tactic I have found is to guide the team with the tools and techniques that they use. If you get the working practices sorted out and ingrained in the team, then the code developed from that point should start to sort itself out.

Everyone has their own way of implementing code. The way things are done reflects the personal style and experience of the developer in question. But if the following practices are adhered to, then the particular style in which someone has implemented a class worries me much less, because the code will have been built to defined standards. The practices we introduced to the team include:

  • Introducing a continuous delivery pipeline - All systems have gated builds (we use TFS here), so every check-in gets built, every test is executed and the MSIs are generated. If the build fails for any reason, e.g. a compiler error or a failing test, then the code fails to check in.
  • Enforcing Unit Testing Discipline - All developers now routinely write unit tests for their code. This was an uphill battle, but we are very close to winning it. We monitor the build-over-time reports in TFS, which give a colour-coded view of test coverage. We encourage developers to aim for around 70-75% coverage on new code and on code they are maintaining.
  • Use of standard Visual Studio Code Metrics - We encourage developers to keep an eye on certain metrics like Cyclomatic Complexity and the Maintainability Index. These give a good high-level indicator of any code smells brewing and are aimed at helping the developer keep their code readable by reducing complexity.
  • Static Code Analysis - All new code and a lot of legacy systems have static code analysis rules enforced on local and TFS builds. For new projects we have a custom rule set that is a subset of the 'Microsoft All Rules' set. This caused a lot of heated debate in the teams when we started enforcing it, but once people got used to the rules, working with them became second nature. For old legacy systems we start off by applying the 'Microsoft Minimum Rules' set and work our way up from there.
  • Code Productivity Tools - We make all our developers use a code productivity tool. We settled on ReSharper, but tools like CodeRush and Telerik JustCode are just as good. The thing I like about these tools is the visual on-screen feedback they give you. You can enable a coloured bar at the side of the code window that informs you of any code issues. These issues are driven from a rule set, so if you get the team into the mindset of removing the colour blips whilst they are working (the tool even does most of the work for you), then you are on the road to better code. Generally the refactoring helpers provided by these tools are better than those provided in Visual Studio too.

I won't pretend that we now churn out systems so beautiful that angels weep tears of joy, but by enforcing these points we are driving up code quality standards, and the difference has been very noticeable. These tools can also be a great way to guide code reviews. Code reviews used to just be a bunch of developers sitting around the projector picking holes in code, and they were not very effective. Instead, I propose the following process for running a code review:

  • Get the code out of source control fresh.
  • Does it build? If yes, then continue. If no, stop the code review as the solution is broken.
  • Run the unit tests.
  • Do they run and all pass? If not, look at why with the team.
  • Check the unit test code coverage. Is the coverage above the agreed level (say, 60%)? If not, discuss with the team where the shortfall is.
  • Check the code metrics (Cyclomatic Complexity and Maintainability Index). Are the metrics within agreed boundaries? If not, dig deeper with the team to see where the problems are.
  • Run the static code analysis against the agreed rule set. Are there any warnings or errors? If there are, look at why with the team.
  • Once you get to this point, the development practices have been followed and you can proceed to review the actual code.

Unit Test Coverage

Whilst you are developing your software you should be writing tests to exercise that code. Whether you practice test driven development and write your tests first or write tests after the fact, you need a decent level of test coverage. This gives you a level of confidence that the code you are writing does what you expect it to. Also, it gives you a safety net when you need to refactor your code. If you make a change in one area, does it break something somewhere else? Unit tests should give you that answer.
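
As a simple illustration of the sort of test I mean, here is a minimal MSTest example; the DiscountCalculator class is made up for this sketch rather than taken from the screenshots in this article:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // A trivial, made-up class under test.
    public class DiscountCalculator
    {
        public decimal ApplyDiscount(decimal price, decimal rate)
        {
            return price - (price * rate);
        }
    }

    [TestClass]
    public class DiscountCalculatorTests
    {
        [TestMethod]
        public void ApplyDiscount_TenPercent_ReducesPriceByTenPercent()
        {
            var calculator = new DiscountCalculator();

            decimal discounted = calculator.ApplyDiscount(100m, 0.10m);

            Assert.AreEqual(90m, discounted);
        }
    }

Tests named like this, one behaviour per test, are what make the Test Explorer dashboard described below worth trusting.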

The screenshot below shows the Test Explorer view in Visual Studio. From this view you can run all of your unit tests. In Visual Studio you can group your tests based on pass outcome, length of execution and project. Think of this view as your project health dashboard: if you have a good level of coverage and all the tests are green, then you can carry on developing. If you have any red tests, then you need to work out why and fix them. Don't let this view lull you into a false sense of security, though; you still need to write tests to a decent level of coverage and ensure you are testing the right things.

[Screenshot: the Test Explorer view in Visual Studio]

You can check your test coverage very easily in Visual Studio. Either click the little drop-down 'Run' menu in the Test Explorer, or open the 'Test' menu in Visual Studio, then expand 'Analyze Test Coverage' and select 'All Tests'. This will give you a view similar to the one below.

[Screenshot: the code coverage results view in Visual Studio]

In the screenshot above you can see that the project overall has a coverage of 73%, which is a good number. It really is not worth chasing 100%, as you end up testing things that just don't need to be tested; 60-70% is a much more realistic goal, and 80-85% is great.

In the example above you can see each assembly in the project, and you can drill down into more detail to look at class and method coverage. The key metric here is the 'Covered % Blocks' column on the right. It makes sense to routinely check this view so you can keep an eye on your overall coverage. If anything creeps below 60%, then you can take a look. Sometimes you may feel that the area in question doesn't need extra tests, but you can only make that call when you see the stats in front of your eyes. Purists will argue you need to cover every last inch of code in tests, but we live in the real world and need to be more pragmatic about it, in my opinion.

If you do have any area of code where test coverage has slipped, then Visual Studio makes it very easy to find these areas.

[Screenshot: the code coverage window, with the 'Show Code Coverage Colouring' button highlighted]

In the code coverage window, click the highlighted button above (Show Code Coverage Colouring), and then double click on a method that has low coverage. You will be taken to the source code file and any uncovered blocks will be highlighted in red. In the example below, there are 3 items that are uncovered. The first and third examples are where an exception is being thrown based on a null argument check; I would add tests in for those. The middle example just has a public property returning a string. In my view there is no point adding a test for this as you are just testing that the language works at that point.

[Screenshot: source code with uncovered blocks highlighted in red]
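
To make that concrete, here is a sketch of the pattern described above; the ReportFormatter class and IReportWriter interface are hypothetical. The constructor's null-argument guard is the kind of block that stays red until a test exercises it, whereas the simple property is rarely worth a dedicated test.

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public interface IReportWriter { }

    public class ReportFormatter
    {
        private readonly IReportWriter writer;

        public ReportFormatter(IReportWriter writer)
        {
            // This guard clause stays uncovered until a test passes in null.
            if (writer == null)
            {
                throw new ArgumentNullException("writer");
            }

            this.writer = writer;
        }

        // A plain auto-property; testing it only proves the language works.
        public string Title { get; set; }
    }

    [TestClass]
    public class ReportFormatterTests
    {
        [TestMethod]
        [ExpectedException(typeof(ArgumentNullException))]
        public void Constructor_NullWriter_Throws()
        {
            // Covers the guard clause that would otherwise show as uncovered.
            new ReportFormatter(null);
        }
    }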

Let us now dig into some of the tools that are available to us to help increase code quality. Most of these are available in Visual Studio, and one of them is a 3rd party tool.

Code Metrics

Unit test coverage is only one part of determining the health of your code base. You can have high test coverage and still have code that is tangled, hard to read and hard to maintain. Visual Studio provides tools to help you look, at a glance, for smells in the structure of your code. To access this view, open the 'Analyze' menu in Visual Studio and select 'Calculate Code Metrics for Solution'. This will give you a view like the one below.

[Screenshot: the Code Metrics Results view in Visual Studio]

The metrics shown in the columns are:

Maintainability Index: The Maintainability Index calculates an index value between 0 and 100 that represents the relative ease of maintaining the code. A high value means better maintainability. Colour-coded ratings can be used to quickly identify trouble spots in your code. A green rating is between 20 and 100 and indicates that the code has good maintainability. A yellow rating is between 10 and 19 and indicates that the code is moderately maintainable. A red rating is a rating between 0 and 9 and indicates low maintainability.
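
If you are curious where the number comes from, Microsoft has documented the index as a rescaled version of the classic maintainability index formula, roughly MI = MAX(0, (171 - 5.2 * ln(Halstead volume) - 0.23 * cyclomatic complexity - 16.2 * ln(lines of code)) * 100 / 171), so code volume, branching and sheer size all drag the score down.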

Cyclomatic Complexity: Cyclomatic complexity (or conditional complexity) is a software measurement metric used to indicate the complexity of a program. It directly measures the number of linearly independent paths through a program's source code. Cyclomatic complexity may also be applied to individual functions, modules, methods or classes within a program. A higher number is bad. I generally direct my team to keep this value below 7. If the number creeps up higher, it means your method is getting too complex and could do with refactoring, generally by extracting code into separate, well-named methods. This will also increase the readability of your code.
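
To illustrate with a made-up shipping example (not code from this article): every if, loop or case branch adds a path, and extracting well-named methods keeps each individual method's complexity low.

    // Minimal made-up types for the illustration.
    public class Order
    {
        public bool IsInternational { get; set; }
        public bool IsPriority { get; set; }
        public double WeightKg { get; set; }
    }

    public static class ShippingCalculator
    {
        // Before: every branch lives in one method, so its cyclomatic
        // complexity climbs with each new rule.
        public static decimal CalculateShippingBefore(Order order)
        {
            decimal cost;
            if (order.IsInternational)
            {
                if (order.WeightKg > 20) { cost = 50m; }
                else if (order.WeightKg > 5) { cost = 30m; }
                else { cost = 15m; }
            }
            else
            {
                if (order.WeightKg > 20) { cost = 20m; }
                else { cost = 8m; }
            }
            if (order.IsPriority) { cost *= 1.5m; }
            return cost;
        }

        // After: the same rules, but each method stays small and simple,
        // and the top-level method reads as a summary.
        public static decimal CalculateShipping(Order order)
        {
            decimal baseCost = order.IsInternational
                ? InternationalRate(order.WeightKg)
                : DomesticRate(order.WeightKg);

            return order.IsPriority ? baseCost * 1.5m : baseCost;
        }

        private static decimal InternationalRate(double weightKg)
        {
            if (weightKg > 20) return 50m;
            if (weightKg > 5) return 30m;
            return 15m;
        }

        private static decimal DomesticRate(double weightKg)
        {
            return weightKg > 20 ? 20m : 8m;
        }
    }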

Depth of Inheritance: Depth of inheritance, also called depth of inheritance tree (DIT), is defined as "the maximum length from the node to the root of the tree". A low value implies less complexity, but also potentially less code reuse through inheritance; a high value indicates greater potential for reuse through inheritance, but also a greater potential for errors. Due to a lack of sufficient data, there is no currently accepted standard for DIT values, but I find that keeping this value below 5 is a good measure.

Class Coupling: Class coupling is a measure of how many classes a single class uses. A high number is bad and a low number is generally good with this metric. Class coupling has been shown to be an accurate predictor of software failure and recent studies have shown that an upper-limit value of 9 is the most efficient.

Lines of Code (LOC): Indicates the approximate number of lines in the code. The count is based on the IL code and is therefore not the exact number of lines in the source code file. A very high count might indicate that a type or method is trying to do too much work and should be split up. It might also indicate that the type or method might be hard to maintain.

Based on the metric descriptions above, you can use the code metrics view to drill down into your code and get a very quick view of the areas that are starting to break these guidelines. You can very quickly highlight smells in your code and respond to them sooner rather than later. I routinely stop coding and spend 30 minutes or so going through my code metrics to see if I have gone astray. These metrics will help to keep your code honest.

Static Code Analysis

The next quality tool I want to discuss is static code analysis. This has been around for Visual Studio for quite a while now; it used to be a separate tool called FxCop, but it is now directly integrated into Visual Studio. Static code analysis runs a set of rules over your code to look for common pitfalls and problems that arise from day-to-day development. You can change the rule set to turn on or off the rules that are relevant to you. You can also change the severity of a rule: for example, do you want it to produce a compiler warning, or actually break the build?

If you have a large code base and are thinking of introducing static code analysis, I recommend starting with the 'Microsoft Managed Minimum Rules' set and getting all of those passing first. If you try to jump straight into the 'Microsoft All Rules' set, you will quickly become swamped and will most likely end up turning code analysis off.
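
To give a feel for what the rules catch, here is a small illustrative example: the design rules include CA1062 (validate arguments of public methods), which would flag the first method below for dereferencing a parameter without checking it. The type and method names here are invented for the illustration.

    using System;

    public static class TextUtilities
    {
        // Flagged by code analysis: a publicly visible method uses a
        // reference-type parameter without validating it (CA1062).
        public static int CountWords(string text)
        {
            return text.Split(' ').Length;
        }

        // Compliant: the argument is validated before being used.
        public static int CountWordsSafely(string text)
        {
            if (text == null)
            {
                throw new ArgumentNullException("text");
            }

            return text.Split(' ').Length;
        }
    }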

Adding code analysis to your solution is easy; it is managed at the project level. Right-click on the project in Solution Explorer and select 'Properties'. When the properties window appears, select the 'Code Analysis' tab, as in the screenshot below.

[Screenshot: the 'Code Analysis' tab of the project properties window]

First you should select the 'Enable Code Analysis on Build' check box. This will make sure the rules are run every time you build your code. This forces you to see the issues with your code every time you build instead of relying on yourself to manually check.

In the drop-down box you can select which rule set to use. Generally the rule sets provided by Microsoft are enough to work with, but we found that some rules were not relevant to us, so I created a custom rule set. You can see it selected above; the rule set is called 'DFGUK2012'.

You can add your own rule set easily. In your solution, right click and select 'Add New Item'. When the dialog box appears, select 'Code Analysis Rule Set', as shown below.

[Screenshot: the 'Add New Item' dialog with 'Code Analysis Rule Set' selected]

Then in your 'project properties' rule set drop down box, instead of selecting one of the Microsoft rule sets, select '<Browse>', and then browse to your custom rule set. If you double click on the rule set added to your solution, you will be shown the rule set editor, as shown below.

[Screenshot: the rule set editor]

From here you can enable/disable rules to suit your project and team. You can also select whether you want a broken rule to show as a compiler warning or error.
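
As an aside, when a rule is right in general but wrong for one specific case, you can also suppress an individual violation in code rather than disabling the rule for the whole project. A rough sketch (the class, rule ID and justification here are purely illustrative):

    using System.Diagnostics.CodeAnalysis;

    public class LegacyImporter
    {
        // Suppresses one specific violation, with a recorded justification,
        // while leaving the rule switched on for the rest of the code base.
        [SuppressMessage("Microsoft.Design", "CA1062:ValidateArgumentsOfPublicMethods",
            Justification = "Input is validated by the calling framework.")]
        public void Import(string payload)
        {
            // ... import logic ...
        }
    }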

NDepend

Visual Studio contains an impressive arsenal of tools for monitoring code quality metrics, both to help you as a developer and to help guide a code review session, but sometimes you need to dig deeper into a code base. It is not uncommon to inherit a legacy system that is a big ball of mud, where you really need to dive in deep to understand all the dependencies. This is where a tool like NDepend comes in very useful.

[Screenshot: NDepend]

NDepend is an analysis tool that really allows you to dig deep into the structure and quality of your code, and you may want to do this for various reasons. You may want to routinely keep an eye on the quality of your project, or you may have to get under the skin of a large piece of legacy code so that you can refactor it.

Once you have installed NDepend and attached it to your solution, you will notice lots of windows pop up, and a report is generated in your browser. At first glance this can all feel like information overload; NDepend gives you a lot of detailed information, and it will feel daunting at first.

The main features of NDepend are Code Queries, the Dependency Matrix, the Dependency Graph, Code Metrics and Code Diffs. I will run through each of these below.

Code Query

[Screenshot: a code query in NDepend]

NDepend contains a very powerful querying system called CQLinq. This allows you to write custom queries over your code; for example, the query in the image above looks for all types that could have a lower visibility than they currently have. There are around 200 queries and rules provided that you can use, or, if you want to invest the time, you can define your own. The built-in queries, though, are very comprehensive. What I now try to get into the habit of doing is running the queries and fixing anything that is highlighted in red or yellow. If you feel any of the rules are not relevant to your project, or you disagree with them, you can disable them as needed.
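
As a flavour of what these queries look like, here is a sketch of a CQLinq rule in the style of NDepend's built-in ones; treat the exact collection and property names as an approximation rather than gospel:

    // Warn if any method is more complex than the agreed threshold.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CyclomaticComplexity > 20
    orderby m.CyclomaticComplexity descending
    select new { m, m.CyclomaticComplexity }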

Diagrams

NDepend outputs various types of visual diagrams to help you see at a glance the overall state of your code. They are great for breaking down a large project into something more visual.

Dependency Matrix

[Screenshot: the Dependency Structure Matrix in NDepend]

The Dependency Structure Matrix (DSM) is a compact way to represent and navigate dependencies and coupling between components. A DSM is an alternative to a dependency graph; in fact, a DSM is best used alongside one. A graph is great if you want to look at the dependencies of a small subsection of code, and it is more intuitive to look at, but graphs can become unmanageable when analysing a large code base. The DSM lets you observe the dependencies of a larger project at a glance, all in one view, although it is not as intuitive. Once you know what you are looking for, you can spot structural patterns at a glance.

What I really like about the DSM view in NDepend is that it offers context sensitive help to guide you through what you are seeing. This really helps break down the information presented to you.

NDepend's DSM relies on a simple three-colour scheme for DSM cells: blue, green and black. When hovering over a row or a column with the mouse, the context-sensitive help explains the meaning of this colouring scheme.

[Screenshot: the DSM context-sensitive help explaining the colour scheme]

If a cell in the DSM contains a number, this number represents the level of coupling between objects represented in a row and column. Once you have identified some coupling you wish to investigate further you can use the information in the context help. Also if you click on the cell you will be presented with a dependency graph.

You can tell a lot about the code structure by the sorts of patterns you can see in the DSM view. I won't cover all these now, but there is a good guide to these on the NDepend site.

Dependency Graph

If you click on a cell in the DSM you will be presented with a dependency graph showing the coupling identified in that cell. If you look at the example below, you can see there is a cell showing a coupling score of 5 between the RichTextBoxPrinter object and the Formatrange structure.

[Screenshot: a DSM cell showing a coupling score of 5 between RichTextBoxPrinter and Formatrange]

If you click on that cell, you will see a graph showing the relationship between those objects.

[Screenshot: the dependency graph for the selected DSM cell]

What is really useful is that if you right-click on one of the objects in the graph, you can use the 'Select Types', 'Select Methods' or 'Select Fields' menus to run a code query that highlights all the types the object uses, either directly or indirectly.

Treemap and Code Metrics

In the metric view, the code base is represented through a treemap. Tree mapping is a method for displaying tree-structured data by using nested rectangles. The tree structure used in NDepend represents the usual code hierarchy of .NET Assemblies containing namespaces, namespaces that contain types, and types that contain methods and fields.

[Screenshot: the NDepend metric view treemap]

You can use the 'Metric' drop down box to determine what you want to display. In the example above I have it set to Cyclomatic complexity. The 'Level' drop down box is set to Method, which represents the boxes as methods. This can be set to namespace, method, type, field, or assembly.

The purpose of the treemap is to show you methods relative to other methods. The example above shows the relative cyclomatic complexities. As you can see, the large blue box is a method called SetSelectionFontType(). This box is very large compared to the other boxes, and if you hover your mouse over it you can see that it has a score of 25 units. This means the method has a cyclomatic complexity of 25 and is a good candidate for refactoring. If you double-click on this box it will take you to the code.

Abstractness vs. Instability

NDepend can produce a report about your code base (more on this below). One of the charts in that report is the Abstractness vs. Instability chart.

[Screenshot: the Abstractness vs. Instability chart]

This chart is very useful for making a high-level quality judgement about your software. In its simplest form, if an assembly is plotted in the red, this is bad; if it is in the green, this is good.

The Y-axis shows how abstract your assembly is: in other words, can it be extended without recompiling? The X-axis is instability. This doesn't mean your application is unstable in the sense of being about to crash or cause havoc. A stable assembly is one whose public surface area can't easily change, because a lot of other assemblies depend on it; an unstable assembly has few, or no, dependent assemblies upstream, so it is free to change.
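
For reference, these axes come from Robert C. Martin's well-known package metrics, which is what NDepend is computing here as far as I understand it: instability is I = Ce / (Ca + Ce), where Ce (efferent coupling) counts the types outside the assembly that it depends on and Ca (afferent coupling) counts the outside types that depend on it, while abstractness A is the proportion of abstract types and interfaces in the assembly. The further an assembly sits from the diagonal A + I = 1, the closer it drifts towards one of the two problem zones described below.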

Here's how to read the chart:

  • If an assembly is very stable (that is, lots of assemblies depend on it) and it's not extensible (no abstract classes, no virtuals, etc.) then you're in the lower-left quadrant of the chart, in the Zone of Pain.
  • If an assembly is very abstract, very extensible, but no one depends on it (it's not really being used) then it moves towards the Zone of Uselessness.

Charts like this can give you a lot of insight into how hard it would be to replace a 3rd party component if you needed, or just to extend one.

82 Code Metrics

Earlier in this article I talked about using Microsoft's code metrics to keep an eye on the overall quality of your code. These metrics include Cyclomatic Complexity, the Maintainability Index, Depth of Inheritance and so on. NDepend provides all of these and many more; 82 in fact. You can see a comprehensive list and description of the metrics on the NDepend website.

HTML Report

When you fire up NDepend, it generates a convenient HTML report that pops up in your browser. This report is great as it runs through all the standard metrics, analyses and charts in NDepend and compiles them into one place for you. This makes it very easy to take a quick look at the quality of your code base without having to bury yourself in menus. If you do spot issues in the report, you can then go back into Visual Studio and dig deeper.

[Screenshot: the NDepend HTML report]

What I also like is that it runs lots of CQLinq queries and gives you instant feedback on the results, highlighting any issues.

NDepend is a very deep and complex tool that can really help you gain an understanding of your own code and of any legacy code you may have inherited. You do have to be careful with it, though, as it is very easy to fall into analysis paralysis, where you spend all your time analysing code and not actually delivering, so I perhaps wouldn't give it to every member of your team.

Summary

We have covered a lot in this article, but this is all information that can really help you improve the quality of your code, especially when working in a team. Code metrics are useful for individual developers or pairs who want to monitor their code quality as they work, and they also make a great way to guide code reviews.

If you have never done a code review using the process described in this article before, then you may want to introduce it gradually so as not to overwhelm the team.

Stephen Haunts is a Development Manager for a large healthcare retail company, Walgreens Boots Alliance, based in the UK. He is a frequent blogger at www.stephenhaunts.com, a Pluralsight author, a book author for Syncfusion, a passionate developer and a father. He can be contacted via Twitter using the handle @stephenhaunts.
