Since I started my new ‘architect’ role earlier this year (no, I really do write code… sometimes), I’ve been producing a regular code quality survey. It uses various tools to give an overview of our current codebase. A lot of the metrics come from NDepend, an excellent tool if you haven’t come across it before; check out its author Patrick Smacchia’s blog. We have a lot of generated code, so obviously you want to differentiate between that and the hand-written stuff.
Doing this is easy using CQL (Code Query Language), a kind of SQL for code. Here I’m looking for overly complex code and excluding anything that is attributed with our GeneratedCodeAttribute; I’m also excluding a project called ‘Shredder’, which is entirely generated. NDepend’s dependency analysis is famous, and well worth a look too, but that’s another blog post entirely.
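To give a flavour of what I mean, the query below is a rough sketch of the kind of thing I run. The complexity threshold, the attribute name and the ‘Shredder’ assembly are just examples from our setup, and the exact syntax may vary between NDepend versions:

```
// Sketch of a CQL query for overly complex, hand-written methods.
// "MyCompany.GeneratedCodeAttribute" is a placeholder for our own attribute.
SELECT METHODS OUT OF ASSEMBLIES "Shredder"
WHERE CyclomaticComplexity > 20
AND !HasAttribute "MyCompany.GeneratedCodeAttribute"
ORDER BY CyclomaticComplexity DESC
```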
The duplicate code metrics are given by Simian, a simple command-line tool that trawls through your source code looking for duplicated lines. I set the threshold at 6 lines of code (the default). It outputs a list of all the duplications it finds, and it’s nice to be able to run it regularly, put the results under source control, and diff versions to see where duplication is being introduced.
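For reference, a typical run looks something like the sketch below. The jar name, file glob and formatter switch are illustrative; check the Simian documentation for the exact options your version supports:

```
:: Flag any block of 6 or more duplicated lines across the C# source
:: and write the results to an XML file we can keep under source control.
java -jar simian.jar -threshold=6 -formatter=xml:duplication.xml "src/**/*.cs"
```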
It’s a great way of fighting the copy-and-paste code reuse pattern. The unit test metrics come directly from NCover. Since there were no unit tests when I joined the team, it’s not surprising how low the coverage figure is. The fact that we’ve been able to crank up the number of tests quite quickly is satisfying though. As you can see from the sample output, it’s a fairly cruddy old codebase where 27% of the code fails basic, very conservative, quality checks. Some of the worst offending methods would make great entries in ‘the daily WTF’.
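For context, we gather the coverage figures with something along these lines. This is a sketch from memory of the NCover 1.x console runner; the //x and //a switches, and the assembly and test names, are assumptions rather than a definitive recipe:

```
:: Profile an NUnit run and write coverage data to an XML file.
:: Switch names and file names here are illustrative placeholders.
NCover.Console.exe nunit-console.exe OurApp.Tests.dll //x coverage.xml //a OurApp
```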
But in my experience of working in a lot of corporate .NET development shops, this isn’t unusual; if anything it’s a little better than average. Since I joined the team, I’ve been very keen on promoting software quality. There hadn’t been any focus on this before I joined, and that’s reflected in the quality of the codebase.
I should also emphasize that these metrics are probably the least important of the many things you have to do to encourage quality. Certainly less important than code reviews, leading by example and regular training sessions. Indeed, the metrics by themselves are pretty meaningless, and it’s easy to game the results, but merely having some visibility on things such as duplicated code and excessively complex methods makes the point that we value such things. I was initially concerned that it might be badly received, but in fact the opposite seems to be the case.