So what is quality code?

So what is your interpretation of good quality code? I’ve been asked this on most technical interviews, to varying levels of grilling. I figured it’d be good to spend some time self-reflecting about how I answer this question, and researching to fill in the gaps of my knowledge. Just to give you a heads up, a lot of this will be heavily opinion based. I’d love to hear other points of view.

An important thing to reflect on first is what does not count as code quality. Code that just ‘looks nice’ is not necessarily quality code. This includes general styling such as whitespace, member and function naming conventions (underscores or not; camel, snake, or Pascal casing), brace placement, or anything else that can be resolved with a basic linter.

Quality code to me is one that has a solid, well tested foundation such that new features can be built upon it without falling over, or becoming needlessly complex. A good indicator is that you should be able to come back to the code in 6 months and be able to pick it up again fairly easily… preferably without cursing yourself. My primary driver to writing quality code is to make it readable. If it’s not readable then it’s not maintainable.

It’s also important to note that not all software engineers will subscribe to the same set of standards, and that’s ok. You just have to agree what your standards are within your team and stick with it. For me, personally, I really like Uncle Bob’s mnemonic SOLID and Alistair Cockburn’s hexagonal architecture.

I’m sure most people have heard of the acronyms SOLID, DRY, KISS, YAGNI, and TDD, and also the testing pyramid. You may or may not have heard of hexagonal architecture. I’ll go through my brief interpretations of these principles, what my takeaways are, and why I think they’re important.


Single Responsibility

Small classes that do one thing. Classes that are small and focused are much easier to work with and have only one reason to change. This makes them much easier to test, as they will usually have only one public function. The tests themselves will also be more focused, as they only have one function to target.

The problem with this, if you’re not careful, is that you can get an explosion of classes very quickly, so you need to strike a bit of a balance here.
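To make the idea concrete, here's a minimal sketch (the class and member names are hypothetical, not from any real codebase): each class below has exactly one reason to change, and each is trivially testable through its single public function.

```typescript
// Hypothetical sketch: one small class per responsibility, rather than
// a single "OrderManager" that does everything.

// Calculates an order total - its only reason to change is pricing rules.
class OrderTotalCalculator {
  calculate(prices: number[]): number {
    return prices.reduce((sum, p) => sum + p, 0);
  }
}

// Formats a receipt - its only reason to change is presentation.
class ReceiptFormatter {
  format(total: number): string {
    return `Total: ${total.toFixed(2)}`;
  }
}

const total = new OrderTotalCalculator().calculate([9.99, 5.01]);
console.log(new ReceiptFormatter().format(total)); // "Total: 15.00"
```

If pricing rules change, only the calculator changes; if the receipt layout changes, only the formatter does.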


Open/Closed

Classes should be written in a way that lets you extend the application without modifying the existing implementation too much.

Extend using IoC and create new classes to encapsulate your new behaviour.

As an added bonus, we can easily feature switch (with A/B testing if required) at IoC level to change old behaviour for new!
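A minimal sketch of that idea, with hypothetical names: the new behaviour arrives as a new class behind an existing interface, and the composition root (standing in for the IoC container) holds the feature switch.

```typescript
// Hypothetical sketch: new behaviour is added as a new class behind an
// existing interface; the old implementation is never modified.
interface PriceCalculator {
  price(base: number): number;
}

class StandardPricing implements PriceCalculator {
  price(base: number): number { return base; }
}

// New feature: added as a new class, not by editing StandardPricing.
class DiscountPricing implements PriceCalculator {
  price(base: number): number { return base * 0.9; }
}

// "IoC level" feature switch: the composition root decides which to wire in,
// which is also where an A/B test could pick one or the other.
function buildCalculator(discountEnabled: boolean): PriceCalculator {
  return discountEnabled ? new DiscountPricing() : new StandardPricing();
}

console.log(buildCalculator(false).price(100)); // 100
console.log(buildCalculator(true).price(100));  // 90
```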

Liskov Substitution

Classes in an inheritance chain should be interchangeable.

If a method takes a parameter of type BaseFoo, and Foo is a concrete implementation of BaseFoo, passing either a Foo or a BaseFoo to that method shouldn’t change the method’s behaviour.

You really shouldn’t have to think about this one if you’re favouring composition over inheritance.
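A minimal sketch of a well-behaved substitution, using the hypothetical BaseFoo/Foo names from above: the subtype returns a different value but honours the base type's contract, so any caller written against BaseFoo stays correct.

```typescript
// Hypothetical sketch: a subtype must honour its base type's contract,
// so callers can't be broken by substitution.
class BaseFoo {
  // Contract: never returns a negative number.
  itemCount(): number { return 0; }
}

class Foo extends BaseFoo {
  // A well-behaved subtype: different value, same contract.
  itemCount(): number { return 3; }
}

// This method relies only on the contract, so passing either a BaseFoo
// or a Foo can never make it misbehave.
function summarise(foo: BaseFoo): string {
  return `${foo.itemCount()} item(s)`;
}

console.log(summarise(new BaseFoo())); // "0 item(s)"
console.log(summarise(new Foo()));     // "3 item(s)"
```

A Foo that returned a negative count, or threw where BaseFoo didn't, would be the classic Liskov violation.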

Interface Segregation

Most classes are an implementation of a simple interface that has a single responsibility.

Personally I’d say most classes will have their own interface, though I’ve found this can be a very marmite opinion, as some would say these interfaces are pointless. I’d argue they allow you to change implementations at IoC level without modifying existing code, which would otherwise break the Open/Closed principle!
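As a minimal sketch with hypothetical names: two narrow interfaces instead of one fat one, so each implementer depends only on the methods it actually uses.

```typescript
// Hypothetical sketch: small, single-responsibility interfaces mean no
// class is forced to implement methods it doesn't need.
interface Reader {
  read(): string;
}

interface Writer {
  write(data: string): void;
}

// A log sink only needs to write; it isn't forced to implement read().
class InMemoryLog implements Writer {
  entries: string[] = [];
  write(data: string): void { this.entries.push(data); }
}

// A config source only needs to read.
class StaticConfig implements Reader {
  read(): string { return "mode=production"; }
}

const log = new InMemoryLog();
log.write("started");
console.log(log.entries.length);        // 1
console.log(new StaticConfig().read()); // "mode=production"
```

Had we defined one fat ReaderWriter interface, both classes would have to stub out a method they don't support.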

Dependency Inversion

This, to me, is plain old IoC. Dependencies are injected via a class’s constructor as interfaces, and the class shouldn’t care about the implementation of those interfaces.

Avoid the new keyword as much as possible and you’ll avoid unnecessary dependencies. Just let IoC instantiate classes for you!
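A minimal sketch of constructor injection, with hypothetical names: the class receives its collaborator as an interface and never calls new on it, so a test can substitute a fixed implementation without touching the class.

```typescript
// Hypothetical sketch: dependencies arrive through the constructor as
// interfaces; the class never calls `new` on its collaborators.
interface Clock {
  now(): number; // hour of the day, 0-23
}

class Greeter {
  // Greeter depends on the abstraction, not on a concrete clock.
  constructor(private clock: Clock) {}

  greet(): string {
    return this.clock.now() < 12 ? "Good morning" : "Good afternoon";
  }
}

// In production the IoC container would wire in a real clock; in tests
// we can substitute a fixed one without modifying Greeter at all.
const fixedMorning: Clock = { now: () => 9 };
console.log(new Greeter(fixedMorning).greet()); // "Good morning"
```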


If I were asked to choose one part of SOLID to talk passionately about, it would have to be Dependency Inversion, as all the other principles naturally follow from this one.

When you’re setting up your DI framework, you have the opportunity to give each injectable class an interface. You can sidestep Liskov Substitution concerns by avoiding inheritance altogether, coding by composition instead. If we’re adding functionality by creating its DI configuration first, we’re naturally building an open/closed system. And we can certainly keep our classes small and focused whilst we’re doing all this.

DRY - don’t repeat yourself

Code has not been needlessly duplicated.

Whilst I do actively avoid duplication, I also take this one with a pinch of salt, as there are plenty of cases where duplication makes perfect sense. In an Agile world, requirements change frequently, and protecting ourselves from that change is a perfectly healthy approach.

For instance, I would opt to duplicate DTOs for different API endpoints even if the models are currently the same. The key word there is currently. If we were to share a single model across multiple endpoints, and one of those endpoints later required an additional field to be returned, we’d then have to split those models out anyway.

There is, of course, an element of YAGNI here, but I’d rather avoid the pain later.
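A minimal sketch of the DTO argument, with hypothetical endpoint names: the two response types start out identical but are kept separate on purpose, so one can later gain a field without a breaking split.

```typescript
// Hypothetical sketch: two endpoints start with structurally identical
// DTOs, deliberately duplicated so they can diverge independently.
interface GetUserResponse {
  id: string;
  name: string;
}

interface SearchUserResponse {
  id: string;
  name: string;
  // Added later for the search endpoint only; GetUserResponse is untouched.
  matchScore: number;
}

const search: SearchUserResponse = { id: "1", name: "Ada", matchScore: 0.9 };
console.log(search.matchScore); // 0.9
```

Had both endpoints shared one model, adding matchScore would have forced a split (and a round of breaking changes) at the worst possible time.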

KISS - keep it simple, stupid

Amusingly rude, but the key takeaway for me here is that code should always be easy to read and understand. Cognitive load should be minimal. Functions and variables should be well named.

A reader of a function’s body should be able to get a quick understanding of what’s going on without having to dig into yet more code. Uncle Bob would say that code that causes you to dig to find meaning is rude code. He’s often quoted as saying “Clean code should read like well-written prose.” As developers we spend the vast majority of our time reading code, not writing it. So making it easy to read is not only doing other readers a favour, but also our future selves.

One of the benefits of putting some thought into naming things well is that we don’t need to spam our codebase with comments. This, in turn, makes it much easier to actually see and read the code. There is usually a correlation between highly commented code and poorly constructed code. Comments also lie: as code is changed, comments are often not updated to reflect those changes.

Appropriate design patterns and algorithms should be used, but not just for the sake of proving an academic point that we can indeed implement them (CV-DD, or Resume Driven Development). Do we really need to implement a strategy pattern here, or will a simple if statement do? Is this code likely to change?
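To make the trade-off concrete, here's a minimal sketch (hypothetical shipping rules, invented for illustration) of the same decision written both ways; if the rules are small and stable, the plain if statement carries less cognitive load.

```typescript
// Hypothetical sketch: the same pricing decision as a plain if statement
// and as a strategy pattern.

// Simple version - easy to read, nothing to dig into.
function shippingCostSimple(weightKg: number): number {
  if (weightKg <= 1) return 2.5;
  return 5.0;
}

// Strategy version - only justified if the rules multiply or change often.
type ShippingStrategy = (weightKg: number) => number;
const lightParcel: ShippingStrategy = () => 2.5;
const heavyParcel: ShippingStrategy = () => 5.0;

function shippingCostStrategy(weightKg: number): number {
  const strategy = weightKg <= 1 ? lightParcel : heavyParcel;
  return strategy(weightKg);
}

console.log(shippingCostSimple(0.5));   // 2.5
console.log(shippingCostStrategy(0.5)); // 2.5
```

Both produce the same answer; the question is whether the extra indirection is buying flexibility we'll actually use.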

YAGNI - you aren’t going to need it!

Don’t add code unless you need it right now. Don’t think too much about what you might need in the future; think about the bare minimum needed to implement what you need right now, whilst keeping the SOLID principles in mind.

It sounds like this contradicts some of the SOLID principles, and there is certainly a balancing act between over-engineering and future-proofing a codebase.

Should I spend time encapsulating a MongoDB implementation when it probably won’t ever be swapped for… MySQL? I think I absolutely should! I don’t want to have to rewrite a significant portion of an application to change out a database provider. More on this later.

Or what if the business needs to migrate from on-prem to Azure or AWS? Perhaps we should have encapsulated that message bus implementation so we could easily switch from RabbitMQ to Azure Service Bus.


TDD - test driven development

Test Driven Development, in brief, is the practice of writing your tests before you write your code, in such a way that the tests drive the development of your code architecture.

I’ve worked with some colleagues who preferred to write outside-in tests, where you’re essentially testing the entry point of an application, which in my limited experience is typically an API. You then assert on data leaving the boundary of the API, which could be an API response, or perhaps a mocked operation on an external device: a database, another API, disk, or what have you. Everything in between is seen as implementation detail.

I’ve also worked with some who prefer the classic inside-out approach. With this approach you’ll have a much fuller suite of tests, typically a set of tests per class. I personally prefer this one, as I find the tests I write drive the code architecture more than with the outside-in approach. Some argue that this approach leads to too many brittle tests.

If we bring it back to what we actually want to achieve with our tests, I’d like a suite of tests to essentially be the spec of the code. I should be able to see what the code is doing, just by reading the tests. I’d like to be told exactly what needs my attention when I make a code change, especially in an unfamiliar codebase. If I changed an == to a <= and turned a few tests red, should I be simply informed that the API is broken, or that the code I changed had intentions that I had overlooked?
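A minimal sketch of the tests-as-spec idea, with hypothetical names and no test framework (a bare assert helper stands in for one): each test names an intention, so a failure tells you which intention you broke, not just "the API is down".

```typescript
// Hypothetical sketch of the inside-out style: test names describe
// intent, so they read as the spec of the code.
function isEligibleForDiscount(orderTotal: number): boolean {
  // Written after the specs below, to make them pass.
  return orderTotal >= 50;
}

// A bare assert helper standing in for a real test framework.
function spec(condition: boolean, intention: string): void {
  if (!condition) throw new Error(`Spec broken: ${intention}`);
}

// The suite reads as the spec of the code:
spec(isEligibleForDiscount(50), "orders of exactly 50 get the discount");
spec(!isEligibleForDiscount(49.99), "orders under 50 do not");
console.log("all specs pass");
```

Change the >= to a > and the failing spec names exactly the overlooked intention: the boundary case at 50.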

The Testing Pyramid

There’s a lot written about this, but put quite simply, it’s about how much time and effort we should put into each kind of test, compared to how brittle those tests will typically be.

Unit tests should run fast and take up most of our effort; they should only need to change if we change their corresponding piece of code.

Integration and end-to-end tests come next. They’ll probably take a bit longer to run than the unit tests, they’ll take a bit longer to write, and the scope of breakage here is much larger: you could make any number of small changes and break any of these tests. We should have a less thorough set of tests here than at the unit level.

UI tests are next, and these are typically super brittle. We should spend the least amount of time on these, and they should be used as a simple sanity check that the whole site hasn’t fallen apart. We certainly shouldn’t be testing every dropdown box option and combination of inputs, as the test would take hours to run!

Manual tests are last. These require an actual person to run through them, and as such are expensive and require massive amounts of effort to do anything significant. Regression tests would typically need to be repeated after every small change. If these are done at all, it should just be to eyeball general front-end design, or to have a quick run through an application to get a feel for the UX.

Hexagonal Architecture

I really like hexagonal architecture. It’s a nice way to visualise the separation of concerns of an application. It has been around for a while, also under its original name of ports and adapters, but we’re all on a different journey of discovery.

I might not have it 100% right, but this messy drawing is what I picture in my head:
Domain is the centrepiece of the application. It contains all relevant business logic and defines interfaces for the external devices, but doesn’t actually implement those interfaces itself.

A device is defined as anything external: a database, network drive, tape drive, HTTP endpoint, service bus, or whatever. These should be concrete implementations of the interfaces defined in the domain, and should remain separate from it.

API is the application’s entry point and the piece that configures the dependency container. The DI container holds the mapping between the interfaces defined in the domain and their concrete device implementations. This is where feature switches should live: if we wish to switch logging from disk to database, we can do it here without messing with the domain implementation.

The way that this is structured defines a hard boundary between device implementation and domain logic. Domain doesn’t care about how we implement disk IO, or what database we choose to use. It only cares that it can use one, whichever one has been defined.
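A minimal sketch of the three hexagons squeezed into one file, with hypothetical names: the domain defines the port, a device implements it, and the entry point wires them together.

```typescript
// Hypothetical sketch of hexagonal architecture in miniature.

// --- Domain: business logic plus the port it needs; no implementation.
interface OrderStore {
  save(orderId: string): void;
}

class PlaceOrder {
  constructor(private store: OrderStore) {}
  execute(orderId: string): string {
    this.store.save(orderId); // domain doesn't know or care how
    return `placed ${orderId}`;
  }
}

// --- Device: a concrete adapter for the port (could be Mongo, disk, ...).
class InMemoryOrderStore implements OrderStore {
  saved: string[] = [];
  save(orderId: string): void { this.saved.push(orderId); }
}

// --- API / composition root: maps the port to an adapter; to swap
// databases, only this wiring changes.
const store = new InMemoryOrderStore();
const useCase = new PlaceOrder(store);
console.log(useCase.execute("order-42")); // "placed order-42"
```

In a real solution each section would be its own project, which is exactly what keeps the device dependencies out of the domain.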

All of these hexagons could live in the same C# project, but I prefer to keep them separate. My reasoning is that it keeps external NuGet dependencies separate for each device, which makes it easier to drop one implementation for another without having to unpick the dependencies from a monolith project.

And as most of us are, I’m a work in progress…

I can fairly easily use these principles in a C# application, and Uncle Bob shows he can use them in Java without issue in his Clean Code series.

Having said that… I’ve tried a few times to use SOLID principles within React applications and ended up with grumbling co-workers for my efforts. I found it hard to implement DI, and fairly cumbersome to use once implemented, as it goes against how a React application is typically constructed. Is there such a thing as a nice DI container for React?

Should we even be unit testing front-end applications so thoroughly? Should we be pushing more logic server-side and relying on integration, front-end, and manual tests? The latter feels like it really goes against the testing pyramid and would result in super brittle tests.

Another thought… should all of these principles apply to microservices? After all, a microservice should be small enough to be replaced in a few days or a week. I’ve heard mixed views on this and haven’t yet come to a conclusion. On one hand we want a microservice to be maintainable; on the other, we want to deliver these small applications quickly.

If anyone has any opposing views or answers to my musings, I’d love to hear them.
