Warnings as Errors

One thing that annoyed me when starting at Codeweavers was the number of warnings that would occur during a build of any of our projects. Seeing the build progress only to spew out a screenful of text was something that did not sit right with me. I was not the only one who felt this was wrong, but as there were so many warnings in some cases, it was easier just to pretend they were not there. After all, everything was working fine.

The broken window theory is very much in action here. During our last standards review we decided that there should ideally be zero warnings per project. It is worth mentioning that most of our warnings were just that: warnings about something that was not really a major issue. Warnings such as unused variables and so on fall into this area.
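
To make that concrete, here is a minimal sketch - the class and figures are invented purely for illustration - of the sort of warning we were happy to live with. The code compiles and behaves correctly; the compiler simply points out a variable that is declared but never used:

    // Builds and runs fine; the compiler just flags the unused variable.
    public class PriceCalculator
    {
        public decimal Calculate(decimal basePrice)
        {
            decimal discounted; // warning CS0168: variable declared but never used
            return basePrice * 1.2m;
        }
    }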

On the other hand, while 90% of our warnings were ignorable, a handful were rather important - for example, referencing different versions of required .dlls. Warnings like this are extremely helpful, and it would be wrong for them to be hidden among a block of less serious issues. Warnings such as these, once visible, can save hours of painful debugging.

Some of our projects had a fair few warnings - in the region of fifty plus. In order to begin tackling these larger projects we started slowly. If in a single day I removed a batch of warnings, that was a step in the right direction. After a week or so all our projects were free of warnings.

The next step was to make sure we did not go back to having larger projects with warnings galore. To prevent this I enabled "Treat warnings as errors" within Visual Studio. This is a per-project setting and can be found under the "Build" tab. Do note that you must enable this for "All Configurations", otherwise the setting will only apply to the currently selected Debug or Release configuration.
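
Behind the checkbox, this simply sets an MSBuild property in the project file. Below is a minimal sketch of the relevant .csproj fragment, assuming you prefer to edit the project file by hand - the rest of the file is omitted:

    <!-- Sketch of the relevant .csproj fragment only; the surrounding project
         file is omitted. Placing the property in an unconditional PropertyGroup
         means it applies to every configuration. -->
    <PropertyGroup>
      <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    </PropertyGroup>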

I like this feature of Visual Studio immensely. Having the compiler do as much work as possible - in this case, checking for warnings - is similar to a tip found in Working Effectively with Legacy Code, where the concept of "leaning on the compiler" is introduced. In other words, you introduce an error in order to show you the usages of a piece of code - this is in stark contrast to manually searching for the code in question.
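
As a rough illustration - the Invoice and ReportPrinter classes below are invented for the example - leaning on the compiler might look like this:

    // To find every usage of Total(), temporarily comment it out (or rename
    // it) and rebuild: the compiler reports an error at each call site,
    // giving an exact map of the dependent code rather than a manual search.
    public class Invoice
    {
        public decimal Total() => 100m; // comment this out to surface all callers
    }

    public class ReportPrinter
    {
        public void Print(Invoice invoice)
        {
            // With Total() removed, this line becomes a compile error that
            // pinpoints the dependency.
            System.Console.WriteLine(invoice.Total());
        }
    }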

The end result of this process is that now, during a build, if any warnings occur the build will fail. The build will report where the warning is, along with why there is a problem. While this is great in theory, it can cause some slight pain when developing, as you may comment out some code to experiment only to find the build failing due to unused variables. Despite this, treating warnings as errors has been a great help. Recently we have solved some pretty serious issues with regard to third party dependencies, all thanks to treating warnings as errors.

The idea of allowing the computer to do as much work as possible applies to all languages. Your compiler or interpreter will have an option to enable warnings and elevate them to errors - this is not a feature specific to any one language.
