
To test, or not to test

  • Writer: Nima Tadi
  • Apr 8
  • 4 min read

To test, or not to test, that is the question: whether ’tis nobler in the code to suffer the bugs and crashes of outrageous fortune, or to take arms against a sea of errors and, by testing, end them.



What you just read may sound old, and in many ways it reflects an outdated way of thinking about what quality assurance (QA) brings to the table. QA has often been described as a “gate,” and by definition, a gate is designed to stop movement. This imaginary gate, intended to catch bugs, has also been stopping other critical parts of the software delivery process.

Let’s go back to where it all started to better understand what the future might hold.


In the early days of computing, there was no formal testing, no business analysts, and no agile coaches. There was simply a person who could communicate with a machine in basic terms to perform repetitive tasks. The problems were simple, and the person writing the code could fully understand them. For example, in the early 1950s, UNIVAC I was used to perform straightforward calculations, things like payroll or census data processing based on relatively simple logic.

These were tasks that could even be done manually, and early programmers were comfortable understanding every aspect of them.

As time passed and computer science advanced, more complex problems were handed over to machines. Developers could still grasp the technical side, but they didn’t always understand the business context their software was meant to serve. For instance, in the 1980s, SAP rose to prominence, enabling large organizations to manage complex business processes. However, developers didn’t necessarily understand the intricacies of each business function.

This gap led to the emergence of the business analyst role, someone who could translate business needs into requirements that developers could understand and implement.


Textbooks often point to the 1970s as the birth of quality assurance, and that’s accurate: QA was introduced as a response to the “software crisis.” However, in my view, QA gained real momentum alongside the rise of business analysis, driven by a different force: complexity.

As systems grew more complex and teams expanded, the need for validation became clear. Organizations needed a way to ensure that what was being built was actually fit for purpose. QA, in many ways, became an audit function for software systems.

From there, processes evolved. Over time, they were refined, optimized, and expanded into various forms. For years, proactive QA approaches such as shift-left strategies and test-driven development became key selling points in both delivery and pre-sales conversations. And to a large extent, they still are.

But this raises an important question: why are we still struggling with the basics of QA implementation? Why are so many automated test suites fragile, difficult to maintain, and time-consuming?
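To make the fragility concrete, here is a minimal sketch in Python (all names hypothetical, not from any real test suite) contrasting two tests of the same function. The first asserts on presentation details and breaks whenever the output format changes; the second asserts on behavior and survives harmless formatting tweaks. Much of the maintenance burden in real suites comes from tests of the first kind.

```python
def format_payslip(name: str, gross: float, tax_rate: float) -> str:
    """Render a one-line payslip summary (hypothetical example)."""
    net = gross * (1 - tax_rate)
    return f"{name}: gross={gross:.2f}, net={net:.2f}"


def fragile_test() -> bool:
    # Pins the exact string: breaks if spacing, field order,
    # or decimal places ever change, even though the logic is fine.
    return format_payslip("Ada", 1000, 0.2) == "Ada: gross=1000.00, net=800.00"


def robust_test() -> bool:
    # Asserts on behavior: the computed net amount appears in the
    # output for the right person, regardless of exact layout.
    out = format_payslip("Ada", 1000, 0.2)
    return "800.00" in out and "Ada" in out


print(fragile_test(), robust_test())
```

Both tests pass today, but only the second keeps passing when someone reorders the fields or changes the separator, which is the kind of churn that makes large automated suites expensive to maintain.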

To answer that, we need to revisit the original purpose of QA.

In my view, QA was created to manage the complexity of software engineering and design, a role that made perfect sense 30 years ago. However, today’s development tools have advanced significantly, and many of these tools now address areas that QA once owned.

For a long time, this shift has been overlooked. Instead, we’ve seen an ongoing “arms race” between QA tools and development tools. More automation, more frameworks, more layers—often solving problems that may no longer exist in the same way.

So where does that leave us? Do we no longer need quality assurance? No testing at all? Are we ready to release flawless software into the world?



In April 2026, I would argue that more lines of code are written daily than were produced in an entire year in the early 1970s. This is a rough estimate, but it’s directionally true. Across the industry, we hear about AI agents writing code, automating releases, and deploying directly to production.

But are they really?

And if they are, we may be circling back to the early days of computing, just in a different form. Back then, humans translated problems into machine instructions. Today, machines are increasingly automating that translation. Yet the fundamental challenges that gave rise to QA and business analysis still remain.

At this point, it’s worth stepping back from the hype and focusing on those fundamentals.

Quality assurance is at a turning point. It is beginning to evolve from a gatekeeping function into a system-level validation mechanism: one that analyzes and assures the end-to-end flow of data across systems.

I recently spoke with a company working in this space, taking a deeply technical approach to QA. Their model dives to the lowest levels of software behavior to identify issues that may never surface in real-world scenarios but are still, technically, defects. It’s a fascinating step forward, but only the beginning.

Looking ahead, QA will continue to play a critical role, especially as AI agents take on more of the software development lifecycle. But its purpose may shift: from simply finding bugs to validating entire systems.

In my next article, I’ll explore this shift in more detail: where we are today and what’s coming next.

Stay tuned.