In our development team, we do code review on every single merge request (aka a pull request, for GitHub users), without exception. Every MR is reviewed by two developers from the team. I personally love it, as I think it has a lot of advantages (which I’ll explain below). But code review can also quickly become a bottleneck in software development.
Some of us felt like we were spending a lot of time on code reviews, leaving precious little time to actually produce code. So we came up with the idea that maybe we needed a way to better distribute code reviews so that the workload would be more even (that is, a way for everybody to do the same “amount” of code review).
While looking for the best way to qualify and quantify a “difficult” code review, I found a ton of articles and discussions about code reviews, from ways to make it less painful to bots that distribute them automatically.
Why you should embrace code reviews
I’m a bit of a maniac about clean code and I love giving and getting feedback (which I consider part of mentoring, which I find extremely important), so I was amazed during my research to learn that most developers don’t like code reviews (yeah, le me can be a bit naïve sometimes).
Code review has a lot of advantages for everyone:
For developers (authors of the code being reviewed)
- Probably the best and most efficient way to improve your coding skills once you’re past the basics.
- Great way to know which topics you should focus your learning on.
- Learning opportunity: we cannot know everything, and the code reviewer may come up with an idea on how to implement a new functionality, using a tool or an API we didn’t know about yet.
For code reviewers
- Opportunity to develop your mentoring skills.
- Learning opportunity: yeah, the reviewer can also learn and discover new ideas from the code they read (if that never happens, it might be time to change the code review process in your team).
- Know what’s going on in the project, what new features were implemented…
- A nice way to notice your own evolution: realizing you can now teach or explain things you weren’t aware of or confident about some time ago.
- Can be an ego booster: in a job where it’s really easy to feel dumb af some days, it feels good to participate in others’ evolution and to be able to teach them a few things.
For the team
- Maintaining a clean code base and enforcing good practices, which eases coding for everyone on the team and also makes onboarding new team members easier.
- Team cohesion, discussion starter, collaboration.
- Ensure the diverse visions of the team are heard and benefited from: some will have an eye for functional problems, some for technical issues, some for UX, some for reusability and app architecture…
How to make code reviews less painful
Code reviews shouldn’t be a burden. If a lot of developers don’t like them (apart from those whose ego can’t stand it…), it might be because the processes are not on point. Here are a few ideas for easier code reviews:
Make code reviews short
I found a few sources stating that the code under review should be less than 400-500 LOC (lines of code) long, and that a code review should take less than one hour. Beyond that, the reviewer won’t be able to focus enough and will experience review fatigue. So, pure waste of time.
A first step is then to create tickets with the right scope. Besides, code review aside, it’s never good to have a developer work on a ticket for too long (hello merge conflicts and rebase nightmares). It’s also every developer’s responsibility to split their ticket if it turns out to be bigger than expected. Finally, some automated solutions exist to prevent developers from opening merge requests that are too big (like a bot integrated into your CI pipeline that warns you if your MR is too long).
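As a sketch, such a size check boils down to a few lines. The 500-LOC threshold, the function name, and the warning message here are illustrative placeholders of my own, not taken from any particular bot:

```python
# Toy sketch of a CI-style MR size check. It flags merge requests whose
# total diff exceeds the ~400-500 LOC limit discussed above.

def check_mr_size(lines_added: int, lines_deleted: int, max_loc: int = 500) -> str:
    """Return a warning when the total diff size exceeds max_loc."""
    total = lines_added + lines_deleted
    if total > max_loc:
        return f"warn: MR touches {total} lines (> {max_loc}); consider splitting it"
    return "ok"
```

In a real pipeline, the line counts would come from the diff statistics your platform already exposes for the merge request.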
Automate every dumb aspect of the code review
Everything that can be handled automatically shouldn’t be the reviewer’s task. Code style and conventions specific to the programming language can be checked automatically. Every developer on the team should work with a common linter locally, and the code style should be checked again automatically on push. This spares the reviewer the pain of hunting for missing white spaces or semicolons, and spares the author the pain of pushing new changes just to add a white space or a semicolon.
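To illustrate the kind of check that should never land on a reviewer’s plate, here is a toy stand-in for a real linter (a real team would run ESLint, flake8, RuboCop, or similar); the checks and names are made up for illustration:

```python
# Toy "linter" sketch: detects trailing whitespace and a missing final
# newline, the sort of nitpicks a machine should catch, not a human.

def style_issues(source: str) -> list[str]:
    """Return a list of style complaints for the given source text."""
    issues = []
    for lineno, line in enumerate(source.split("\n"), start=1):
        if line != line.rstrip():
            issues.append(f"line {lineno}: trailing whitespace")
    if source and not source.endswith("\n"):
        issues.append("missing final newline")
    return issues
```

Hooked into the pipeline (or a pre-push hook), a non-empty result would simply fail the push before a human ever looks at the code.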
Code inspection can go further (code repetition, security issues, errors not caught properly…), with tools like SonarQube, but the team should decide how much they feel confident to leave to an automated system. Which leads to the next point…
Decide together what’s a code review
The team members should agree on what a code review is, what’s expected from the reviewer and what’s not, and get a clear vision of the how-to (like a checklist). I learnt recently about the concept of a “Social contract” or “Team contract” via the Open Practice Library, and I think this is a perfect use case for it.
Code review expectations and processes should be documented on the team wiki after being discussed and agreed upon together, and should keep evolving (the needs will change if you integrate an automated tool for spell check, code style, and so on).
It’s also important to emphasize the why: people who understand why they have to perform a task (what’s in it for them, for the team…) are generally more inclined to do it, and to do it well.
Be a team-player, ease the reviewer’s job
If you’re the author of the code, just don’t be a jerk. Team spirit is really important in code review processes. If you push code without considering the job of the code reviewer, you’re just making it a pain for them… which will lead to the corrections being a pain for you in the end anyway.
Every developer pushes code with errors from time to time; that’s basically why we need code review. But it’s not an excuse for delivering dirty code without paying any attention, relying solely on the reviewer to make it good. You should review your own code before opening a merge request, the same way you’d review someone else’s code.
How to choose the best code reviewer for your merge request
I spent a lot of time on this topic, as it was our main concern in the team. And the question is not easy to answer. There is definitely no one-size-fits-all solution, but here’s some food for thought.
Should everybody share the same load of code review?
This is again a question worth bringing up while establishing the team contract. It really depends on your goal: if you just want code reviews to be done quickly, without any other concern, they would most probably be distributed unevenly, and fast reviewers would do more of them.
This raises several problems. First, “fast” doesn’t mean “fast and efficient”. You may also be preventing really good developers from contributing to the code base as much as the others. Then, you wouldn’t take advantage of the diversity of the team, both in authoring code and reviewing it. And of course, those who are not happy about doing reviews will just go slower and slower so they can do fewer and fewer of them.
So yes, to a certain extent, distributing the workload evenly seems important to me. Even though perfect balance is out of reach, it’s important to make it a team effort. But then comes the crucial question…
What defines the workload of a code review?
In other words, how can we tell that one code review is more time-consuming than another? Balancing the workload doesn’t only mean giving the same number of MRs to everyone, but also taking complexity into account.
So I started thinking about a way to assign a “score” to each developer and a “weight” to each MR based on its metadata. The score represents the developer’s current review workload; the weight represents how demanding a given MR is to review, so that recently reviewed MRs add to a developer’s score according to their weight. Here’s my first draft:
- Score – In our team, every MR is written by one developer and then reviewed by two developers, one after the other. For every MR assigned to a developer, they’d get +1 to their score. The one currently reviewing it would get an extra +1. That’s the quick-and-easy first version.
- Weight – To make the workload more even, the score should also take recent work into account. For every MR reviewed in the last month*, each developer would get additional score based on the weight of that MR. Here, I still have to decide which criteria the weight should be based on (and the relative weight of each): number of changes (files changed), number of LOC, number of LOC actually changed, maybe also how long it’s been open for review and/or the number of comments made on the MR and/or the number of comments made by the reviewer in question.
*In a few case studies about automated distribution of reviews, I found that a lot of big companies, like Microsoft, seem to work on a 30-day basis, which makes sense because it lowers the impact of events like short leaves or spending a few days focused on a project.
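To make the draft above concrete, here is a minimal sketch in Python. The weight formula and its coefficients are hypothetical placeholders (the actual criteria are still an open question above), and the function names are my own:

```python
# First-draft sketch of the score/weight idea. The coefficients in
# mr_weight are arbitrary placeholders, not a tested formula.

def mr_weight(files_changed: int, loc_changed: int) -> float:
    """Rough 'difficulty' of reviewing one MR, based on its metadata."""
    return files_changed * 1.0 + loc_changed * 0.1

def reviewer_score(open_assignments: int, currently_reviewing: bool,
                   recent_mrs: list[tuple[int, int]]) -> float:
    """Current + recent workload: +1 per assigned MR, +1 if actively
    reviewing, plus the weight of each MR reviewed in the last 30 days
    (given as (files_changed, loc_changed) pairs)."""
    score = open_assignments + (1 if currently_reviewing else 0)
    return score + sum(mr_weight(f, loc) for f, loc in recent_mrs)
```

For example, a reviewer with two assigned MRs, one of them in progress, who reviewed one 2-file/100-line MR this month would score 2 + 1 + 12 = 15.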
Should you auto-assign code reviewers?
As far as I could tell, the code review auto-assignment tools currently on the market are only capable of distributing tasks randomly, taking into account tags associated with coworkers (so they don’t take into account the difficulty of the MR or the work already done, for example).
The team members would generally be tagged with their field of expertise so that, for example, a merge request for a front-end development would only be distributed randomly among front-end developers (you can tag the members as you wish, by project, programming language, tier, seniority, whatever).
I’m not a big fan, as this can create knowledge silos (each team member holds the knowledge of a particular stack, project, aspect…) and because limiting the possible author-reviewer pairs doesn’t leave much room for challenging opinions or for diversity of visions. This seems to be used in large organisations where development teams are really big (like, hundreds of people, with potentially a lot of turnover) and it’s impossible to know everyone.
The other solution is plain random assignment among all team members. This skips the time and effort of choosing the reviewer(s). It also distributes the different types of code reviews more evenly: when you manually assign a code review, you might choose someone specific because you know they’ll make great comments and you’ll learn from them, or because they’re particularly good with this technology/language (or, on the contrary, you could be tempted to choose someone less picky if you want your ticket validated fast).
This means you’d often go for the same reviewer(s), not allowing juniors to grow as reviewers, or people less at ease in some fields to challenge themselves. You’d also miss out on the diversity of points of view and experiences.
I guess the perfect solution would be a hybrid between random distribution and workload estimation.
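Such a hybrid could look something like this sketch: restrict the random draw to the least-loaded reviewers, so assignment stays random while the workload evens out over time. The tie-breaking rule and all names here are my own assumptions, not an existing tool’s behavior:

```python
import random

# Hypothetical hybrid assignment: pick at random, but only among the
# reviewers with the lowest current score (excluding the MR's author).

def pick_reviewer(scores: dict[str, float], author: str, rng=None) -> str:
    """scores maps reviewer name -> current workload score."""
    rng = rng or random.Random()
    candidates = {name: s for name, s in scores.items() if name != author}
    lowest = min(candidates.values())
    least_loaded = [name for name, s in candidates.items() if s == lowest]
    return rng.choice(least_loaded)
```

With scores {"ana": 3, "ben": 1, "caro": 5} and "caro" as the author, "ben" would get the review, being the least loaded eligible reviewer.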
Can you quit code-reviewing if you’re pair programming?
Another interesting point of view I came across is that some development teams have decided to “replace” code reviews with pair programming. For example, a one-author, two-reviewers workflow would become a two-authors, one-reviewer workflow. In the end, there are still three pairs of eyes on your code, but the dynamic is different.
I’ve never done pair programming, and I’m not sure I would like it. I’m pretty sure I wouldn’t want to do it for every single ticket, but rather occasionally, to gain knowledge on a particular topic. The advantage of this approach is that big problems can be found earlier, whereas a code review may raise a big issue when the author thinks they’ve already done most of the work. On the other hand, having to rework your code like this might be a very efficient way to remember not to make the same mistake in the future.
Let’s not forget that pair programming is less challenging than reviewing code you haven’t worked on at all. When pair programming, one of the developers will probably take the lead (even if unconsciously), and the second one will be influenced by the direction taken. When you do a code review, you discover the code from scratch, which can raise different questions and ideas, without being influenced by the other person’s vision.
Any thoughts on code reviews or what you just read?
I’d be glad to read them in the comments section!