In every project I have worked on, I have observed variations of the same time-wasters, and their effects on the team, during the code review process. In this post, I will compile these, briefly explain them and their side effects, and describe practices I have fostered and implemented whenever possible.
I’ll start by summarizing a code review process:
After working on a new feature or fixing a bug, we ask our peers to review our work by opening a pull request (PR). Most of the time, our contribution meets our team’s quality criteria, but sometimes it does not. We engage in debates, receive feedback, and make adjustments. Eventually, our code passes the review and is released to production.
I know code reviews are invaluable for improving quality and identifying potential issues. However, specific time-wasters can hinder their effectiveness and efficiency, directly impacting delivery times and team morale. I’ve had the opportunity to raise these time-wasters as an issue in many retrospective meetings and to collaborate toward remediation.
Time-wasters
In no particular order, these are the top time-wasters during code reviews:
Lack of clarity in the code submission: When a submission is unclear or lacks sufficient context, reviewers may spend an unreasonable amount of time trying to understand the code’s purpose or how it fits into the system. Clear and concise explanations accompanying the code help reduce this.
Large or overly complex changesets: Reviewing large changesets or complex code can be time-consuming. Breaking down changes into smaller, manageable chunks helps reviewers focus on specific areas and provide more targeted feedback.
Incomplete or missing documentation: Insufficient documentation within the codebase can lead to reviewers spending extra time deciphering the code’s functionality or intent. Well-documented code and brief comments help reviewers understand the code faster and provide more relevant feedback.
Lack of adherence to coding standards: Code that deviates from established coding standards or best practices can lead to lengthy discussions and debates during code reviews. Consistently adhering to agreed-upon coding guidelines can minimize this.
Excessive back-and-forth discussions: If code reviews become debates without clear resolution, they consume valuable time. Encouraging concise, focused discussions, e.g., setting a time limit or involving a mediator, can help prevent excessive back-and-forth.
Non-actionable or subjective comments: Vague or subjective comments lead to confusion and additional clarification requests, which slow down the review process. Reviewers should provide specific, actionable feedback, suggesting concrete improvements or pointing out potential issues.
Reviewer overload: Reviewers often carry a backlog of assigned tasks, sometimes from outside their own team. Distributing the review workload among multiple reviewers, or adopting a rotation system, can prevent reviewer burnout and reduce the backlog.
Lack of automated tools and checks: Manual inspection of every line of code can be time-consuming and error-prone. Automated tools, such as static code analysis or linters, can help catch common issues and reduce the manual effort required.
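To make this last point concrete, here is a minimal sketch of a pre-commit Git hook, written in Python, that lints staged Python files before each commit. It assumes flake8 is installed and that the script is saved as .git/hooks/pre-commit and made executable; it is one possible setup among many, not a prescription.

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: lint staged Python files with flake8."""
import subprocess
import sys


def staged_python_files() -> list[str]:
    # Ask git for files that are added, copied, or modified in the index.
    output = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [path for path in output.splitlines() if path.endswith(".py")]


def main() -> int:
    files = staged_python_files()
    if not files:
        return 0  # Nothing staged to lint.
    # flake8 exits with a non-zero code when it finds issues.
    result = subprocess.run(["flake8", *files])
    if result.returncode != 0:
        print("Lint issues found; please fix them before committing.")
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```

Many teams achieve the same effect with the pre-commit framework or a CI step; the point is that a machine, not a human reviewer, catches the mechanical issues.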
Time-wasters’ Impact on Team Morale
I raised these time-wasters in our retrospectives, which gave my teammates and me a chance to talk about how we felt. It also allowed us to analyze the impact on the team and on our delivery throughput.
Frustration and demotivation: When code reviews became time-consuming due to these inefficiencies, some team members grew frustrated with the process.
Decreased productivity: Excessive time spent on code reviews takes valuable time and energy away from other essential tasks, like coding or monitoring. It delays project timelines, which impacts the team’s ability to deliver efficiently.
Lack of engagement and participation: Team members gradually disengage when code reviews become lengthy and unproductive; reviews come to be perceived as a burdensome activity rather than a valuable collaboration opportunity. This disengagement results in fewer individuals actively participating in code reviews, reducing the diversity of perspectives and the overall effectiveness of the review process.
Tension and conflict: Prolonged debates or unresolved discussions during code reviews build tension within the team. Disagreements over subjective matters or excessive nitpicking may lead to interpersonal conflicts, harming team dynamics and morale.
Retention and turnover: The prolonged frustration and disengagement caused by time-wasters in code reviews contributed to low team morale and ultimately impacted employee retention. Team members began seeking opportunities elsewhere after consistently feeling unproductive.
Optimizing for Code Review Efficiency
TL;DR: Efficiency is a matter of will
No matter how automated your code review process is, its weakest link is hoping people will do things conscientiously and thoroughly. This weakness lies with both authors and reviewers.
We can keep wasted time to a minimum by being conscious of our own time usage and that of others. Thoughts like “I will do all I can so Jane can give me actionable feedback in the least amount of time” or “First, I’ll make sure I understand what John is working on” are good food for thought to drive our actions toward collaboration.
We want to save time, for ourselves and for others, to enjoy activities beyond work. Being efficient and effective is essential not only for our own lives but for others’ as well.
Good Practices for the Pull Request (PR) Author
TL;DR: Provide context and narrow down the scope
The strategy is to maximize our reviewers’ output and minimize the time needed to review our code. It’s all about narrowing down the scope and providing context. Here are some recommended practices:
Give the PR a meaningful, short title: The title often provides the first impression, and first impressions matter.
Write a brief description of your PR: If your team does not have a PR template, you can start by answering three simple questions: what you are doing, why you are doing it, and how you did it.
Keep PRs under 400 lines of code (LOC): This is good for two reasons. First, reviewers’ attention is limited: the more code there is to read, the more attention fades over time, so keep it short or break it down. Second, error density: finding an error in an enormous changeset is like finding a needle in a haystack, so if there is too much code to read, defects are harder to spot.
Atomic PRs: Solve one problem at a time; your PR should address one, and only one, matter. You can break a feature down into several pieces and combine them later. A bug will usually require a single PR, unless the fix spans multiple repositories.
Annotate your code: Source code is for humans, not machines, so write for the future “you” or the colleague who will come back to this particular piece of code to refactor it.
Write tests: Writing tests is a best practice I can’t emphasize enough. In the context of a code review, tests explain the behavior of your code to the reviewer and show how thoroughly you thought it through; see the sketch after this list.
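To illustrate how tests double as documentation for the reviewer, here is a minimal sketch using pytest. The `apply_discount` function and its rules are hypothetical, invented purely for this example.

```python
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_applies_percentage_discount():
    # The reviewer can read the expected behavior directly from the assertion.
    assert apply_discount(100.0, 25.0) == 75.0


def test_rejects_out_of_range_percent():
    # Edge-case tests show the reviewer how deeply the code was considered.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```

A reviewer who scans these tests learns the intended behavior and the edge cases before reading a single line of the implementation.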
Good Practices for the Reviewer
TL;DR: Get requirements, be thorough and provide actionable feedback.
As a reviewer, it is your responsibility to ensure the quality of what is delivered. It is enriching for everyone to analyze the code with a reflective attitude, patience, and a spirit of healthy skepticism. My recommendations for healthy critique and collaboration are as follows:
Thoroughly understand the feature or bug: Although it may seem obvious, this understanding is often overlooked. It is tempting to validate an implementation without really knowing the bug or feature behind it, but you cannot judge correctness that way.
Time box your review sessions: You have many tasks to fulfill, and code review is one of them, so allocate time specifically for it. Your attention fades over time, so spend at most one hour per review session and try to stay focused during that time. Timeboxing also helps you organize your other tasks.
Validate that the contribution fixes the bug or delivers the feature: This is paramount. Again, you are a guardian of the delivered value. If it is a bug, you have to ensure the change actually solves it; if it is a feature, you must ensure it meets the requirements.
Provide meaningful and actionable feedback: You will find something that needs commenting, be it a code smell, a bad practice, or an unclear block of code you think is wrong. When that happens, make your comment a piece of actionable feedback. This is an opportunity to help a teammate learn something new about the technology involved, the business, and so on. The best way to learn is to teach, so if you feel like learning, seize the opportunity: gather some resources and references and give a micro-lecture.
Review on time: Reviewing promptly avoids generating anxiety in your collaborators.
Don’t be biased by seniority: You will find yourself reviewing the code of a more senior teammate. If you notice something wrong, say so, and communicate it with a good comment, a piece of actionable and meaningful feedback. Working together is a valuable skill, and sometimes even senior devs need to pay more attention to basic things, like writing a test.
Be an advocate of best practices and conventions: If there is a more idiomatic way or a widely accepted solution to a common problem, let the author know and ask them to follow the convention. Again, provide actionable and meaningful feedback, and take the opportunity to reinforce good knowledge by teaching others.
Example code reviews
I like exploring open-source projects to learn more.
These projects are core elements of products that reach millions of people worldwide. And guess what? These teams also do code reviews. Here is my personalized list for exploration:
In some cases, they write a full description in the PR; in others, they link to the complete discussion. Consider following the links in these PRs and MRs to learn how these teams collaborate.
Wrapping Up
There is always room for improvement in our processes. We experience different ways of implementing the Software Development Lifecycle (SDLC) in each of our projects.
In this particular stage of the SDLC, the code review, there is room for controversy, battles of egos, and impostor syndrome. We must be kind and humble; this builds a safe and fruitful space for collaboration.
I want to close this with a list of thoughts that guided the writing of this post:
- I want to avoid the frustration of senseless, never-ending back-and-forth in code reviews
- I want to stop wasting time, take it back for myself, and use it for other rewarding activities, such as studying music or spending time with my family
- I want to enable others to do the same
About the Author
Edwin Abot is a Senior Backend Engineer with a special interest in cybersecurity. He has extensive experience in software development, with 15+ years working in the industry.