When developing software, there is a natural tension between “speed” and “quality”. They are often seen as opposing goals. Delivery teams are under pressure to ship new features as fast as possible, but the reliability and functionality of their systems cannot be sacrificed. In fields like data engineering, this tension is especially apparent. The need to quickly gain new insights is real, but sacrificing quality can produce the wrong insights, which is worse than no insights at all! Does this relationship between speed and quality have to be a dichotomy? What if we could move fast, but also make sure we are delivering the kind of work we can be proud of as professionals?

In my team, it became evident that we had a split between two camps. Some prioritised moving fast, pushing to deliver value as quickly as possible and accepting the consequences of doing so. Others prioritised quality, letting perfect be the enemy of good enough. Both approaches have their merits, but they also lead to friction. The speed-first folks felt like we weren’t delivering fast enough and were being held back by perfectionism and rigidity in process, while the quality-focused team members worried about the impact of cutting corners. To find a middle ground, we had an open discussion and came up with a working agreement:

We focus on delivering maximum business value with speed but not haste, care but not waste

I’ve written about some of our working agreements before. This one (I can take no credit for the wording, it was created by another excellent engineer in my team) was born out of a desire to define what we mean when we talk about “speed” and “quality”. No one wants to be slow, and no one wants poor quality, so having clear definitions around what those words mean allows us to talk rationally about our work. When someone says “I need to deliver this fast”, we can talk about what fast means, what can be left out to make that happen, and what is non-negotiable in terms of quality.


The Pressure of Speed

A popular saying in software engineering is “Move fast and break things”. This came from the early days of Facebook, and was an attitude adopted by a lot of Silicon Valley giants. Speed of delivery is prioritised over everything else: products are released quickly, and some mistakes (broken things) are accepted as the cost. Product feature development might benefit from this mentality (it certainly worked for Facebook!), but in data engineering, “breaking things” has a few problems:

Data Quality is Critical

In traditional software, a bug might break some feature. In a data pipeline, bugs in transformations or a faulty understanding of business logic can result in bad data. The consequences range from poor reports and operational insights to bad decisions made on the back of that information (a much worse outcome). Garbage in, garbage out. Cutting corners on quality validations can have significant, long-term impacts.

Collaboration Runs on Reliable Data

Data engineering is inherently collaborative, with (depending on your company) insights analysts, data scientists, product teams and others relying on the integrity of the data provided. Moving fast and breaking stuff is not a good mantra in this scenario, because the quality of their work depends on the quality of yours. If a marketing team needs real-time data to support a campaign, and your pipeline keeps breaking, that team no longer has the inputs they need to succeed. They will likely form a poor perception of the engineering team and its substandard systems (regardless of how fast those systems were delivered).

If your systems break often, your reputation will suffer. When people don’t trust your systems, they start building workarounds. A reputation for poor quality is easily gained, and difficult to change once formed.

Technical Neglect

Moving fast can mean skipping functionality important to the long-term support of a system. Maybe we do things via ClickOps (pressing buttons in a user interface) instead of doing it with Infrastructure as Code. Setting up automated tests and validations in a CI/CD pipeline is deferred to “later” (never). And we definitely didn’t write any documentation; the code is documentation enough.

Sometimes, when the decision not to do these things is explicit, it is referred to as technical debt. Like money, there is “good debt” (e.g., borrowing money to purchase a house) and “bad debt” (getting that PlayStation 5 with a payday loan seems like a sound financial decision). However, when this technical debt is not paid down (because the next feature is always prioritised), you end up not with debt, but neglect. You might be fast to start with, but over time your systems will become brittle, and difficult to maintain, change and scale.

How Can We Move Fast With Care?

Early in my software engineering career I read Robert C. Martin (aka Uncle Bob)’s seminal work Clean Code. It was my first introduction to the idea of software engineering as craftsmanship. There is a quote from the book that I remind myself of often:

“Writing clean code is what you must do in order to call yourself a professional. There is no reasonable excuse for doing anything less than your best.”

When speaking about quality, we are really talking about care. About being a professional. Care is about ensuring that systems are reliable, documented “well enough”, and built with solid foundations. Systems built with care are easy to understand, easy to change and easy to maintain.

With all of the dangers of moving too fast, what can we do to make sure we are still delivering software with these qualities, at the pace our customers desire? What are the attributes of quality work, delivered with “good speed”?

Focus on the Most Impactful Work

Do fewer things, but do them properly. Work with stakeholders to understand which data or features will have the most impact, and focus on that first. Start with why. It is better to deliver the most important thing “properly” than to deliver three less important things in the same amount of time by cutting corners.

Tighten the Scope

When the most impactful work has been identified, define the scope and stick to it. Break down the work, build the most important thing first, and leave the “nice to haves” until the core functionality is delivered.

Deploy Incrementally and Iteratively

Break work down into small components using agile practices. Each release is small enough to be tested quickly, but large enough to move the work forward. Remember, most of the skill in (good) software engineering is in the thinking phase, not the typing phase. Really understanding the problem being solved allows you to sequence the work and break it into chunks that still deliver value, but are much easier to reason about and test.

Use Proven Technologies

Choose well-documented and widely used components to build pipelines, rather than creating custom solutions from scratch. Avoid resume-driven development. Leverage existing systems, managed services and the cloud to avoid bespoke solutions. Don’t go custom when something already exists that solves your problem.

When Are We Cutting Corners?

When does appropriate urgency tip over into “bad” speed? We refer to this as haste.

Skipping Testing

Testing is not up for debate. We should unit test, and we should perform integration testing. We should test on a pre-prod or staging environment. Production must not be the first time code is run. Any short-term gain made by skipping testing is dwarfed, by orders of magnitude, when you have to fix it in production.
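As a minimal sketch of what this looks like in practice, here is a unit test for a hypothetical transformation function (the function, mapping and column semantics are illustrative, not from any real pipeline), written so it can run long before the code ever sees production data:

```python
# A hypothetical transformation: normalising free-text country values
# to ISO 3166 alpha-2 codes. Names and mappings are illustrative only.

def normalise_country_code(raw: str) -> str:
    """Map free-text country values to a canonical two-letter code."""
    aliases = {"united kingdom": "GB", "uk": "GB", "great britain": "GB"}
    cleaned = raw.strip().lower()
    return aliases.get(cleaned, cleaned.upper())

def test_normalise_country_code():
    # Known aliases collapse to one canonical code.
    assert normalise_country_code(" UK ") == "GB"
    assert normalise_country_code("United Kingdom") == "GB"
    # Unknown values pass through upper-cased rather than failing silently.
    assert normalise_country_code("de") == "DE"

test_normalise_country_code()
```

A test like this takes minutes to write, runs in a CI pipeline in milliseconds, and catches the class of bug that would otherwise surface as bad data downstream.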

Skipping Data Validation and Quality Checks

When defining data products and pipelines, we need to understand the applicable attributes of quality. Cardinality, optionality, freshness, among others. These things need to be documented, and baked into the pipeline.
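To make that concrete, here is one way such checks might be baked into a pipeline step. This is a sketch under assumptions: the column names, the freshness SLA and the dict-of-rows representation are all hypothetical, and a real pipeline would likely use a framework or the orchestrator’s own validation hooks instead:

```python
from datetime import datetime, timedelta, timezone

def run_quality_checks(rows, max_age_hours=24):
    """Return a list of quality violations for a batch of rows.

    Rows are dicts with (hypothetical) customer_id and loaded_at fields.
    """
    errors = []

    # Optionality: customer_id is a mandatory field.
    if any(r.get("customer_id") is None for r in rows):
        errors.append("null customer_id found")

    # Cardinality: customer_id must be unique within the batch.
    ids = [r["customer_id"] for r in rows if r.get("customer_id") is not None]
    if len(ids) != len(set(ids)):
        errors.append("duplicate customer_id found")

    # Freshness: the newest record must fall inside the agreed window.
    newest = max(r["loaded_at"] for r in rows)
    if datetime.now(timezone.utc) - newest > timedelta(hours=max_age_hours):
        errors.append("data older than freshness SLA")

    return errors
```

The pipeline would fail (or quarantine the batch) whenever the returned list is non-empty, so bad data never reaches consumers silently.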

If you are replacing existing data products, data regression testing should also happen so you understand the variances. Being unable to clearly articulate the impact of a change is a surefire way to introduce garbage into your codebase and data platform.
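A simple regression comparison might look like the sketch below. The shape of the report is an assumption on my part (row-count delta plus keys that appear or disappear); real regression testing would usually also compare values column by column:

```python
def regression_report(old_rows, new_rows, key):
    """Summarise variances between an existing data product and its replacement.

    Rows are dicts; `key` names the (hypothetical) business key column.
    """
    old_keys = {r[key] for r in old_rows}
    new_keys = {r[key] for r in new_rows}
    return {
        "row_count_delta": len(new_rows) - len(old_rows),
        "missing_in_new": sorted(old_keys - new_keys),
        "added_in_new": sorted(new_keys - old_keys),
    }

report = regression_report(
    old_rows=[{"id": 1}, {"id": 2}],
    new_rows=[{"id": 2}, {"id": 3}, {"id": 4}],
    key="id",
)
```

Even a report this small forces you to explain each variance before cutover, which is exactly the articulation of impact the paragraph above asks for.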

No Documentation

Memory is a funny thing. Something that makes complete sense to you while working on it is totally unfamiliar a few weeks later. And that’s for you as the author; for someone else, how the system is stitched together is even more opaque. Writing the essential documentation on the process, the code, and the expected inputs and outputs, so that software is supportable, easy to modify and easy to use, is mandatory when delivering production systems.

Neglecting Security and Compliance

In the rush to ship, we should not bypass security best practices or ignore compliance requirements (like GDPR) with the view that they can be done “later”. This mindset is dangerous. In the data space, for example, doing a select * and ingesting unnecessary Personally Identifiable Information (PII), just because it’s faster than properly filtering or redacting, is a clear case of cutting corners with serious consequences.
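The fix is usually cheap: select only the columns the use case needs. As an illustrative sketch (the table and column names are made up, and a real system would hold an approved-column list in config or a data contract rather than in code):

```python
# Explicit, reviewed column list for this extract: note the absence of
# PII such as names, emails or addresses. Names here are hypothetical.
APPROVED_COLUMNS = ["order_id", "order_total", "country_code"]

def build_extract_query(table: str) -> str:
    """Build an extract query from the approved columns, never select *."""
    cols = ", ".join(APPROVED_COLUMNS)
    return f"SELECT {cols} FROM {table}"

query = build_extract_query("orders")
# "SELECT order_id, order_total, country_code FROM orders"
```

Making the column list explicit also gives reviewers and compliance checks a single, obvious place to veto the ingestion of PII.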

Speed to Value, Not Haste to Finish

Haste is going fast at the expense of quality, maintainability and long-term stability. It results in technical debt (and ultimately neglect): more bugs, poor user experience and higher maintenance costs.

Interestingly, Facebook changed its mantra in 2014 to “Move fast with stable infrastructure”. The aim of “speed, not haste” is to reflect that we want the same: delivering value quickly, but with everything we would expect of robust, production-calibre software.


Delivering Quality, Avoiding Waste

We’ve spoken at length about why we should not sacrifice quality for the sake of speed, and have defined quality as really meaning care. Quality work done with care is stable, secure and functions as intended. When, then, does this goal start to have diminishing returns? Can chasing “perfect” turn from taking care into creating waste?

When we go too far in the pursuit of quality, we unnecessarily slow down delivery. Perfect becomes the enemy of good. This can happen for a number of reasons:

Premature Care

If a particular pipeline is a proof of concept, or an experiment, we don’t need the same expectations of quality that we would have for something long-lived. Be cautious here, as proofs of concept, once deployed, are sometimes hard to remove! Enforcing comprehensive test coverage and documentation on something still in an early stage can kill innovation. We need to be pragmatic: still expecting some level of testing and docs, but not the level we would expect of something marked as “production ready”.

Sayre’s Law

Engineers are passionate. Sometimes, something as trivial as the wording of a column description can inspire days of back and forth under the banner of consistency and quality. Sayre’s Law states: “In any dispute the intensity of feeling is inversely proportional to the value of the issues at stake”. In other words, some things just don’t matter. Save your efforts for the attributes of quality that are actually important (like proper testing), and don’t block a PR because someone has spelled façade without the ç. Know when good enough is enough.

Premature Optimisation and Over-Engineering

Understanding the scope is also about understanding what success looks like. If a daily batch pipeline needs to complete by 9am, and it does, then working on performance improvements to get it to land at 7am is wasted effort. Focus on the problems that need solving. This is also true of prematurely introducing abstractions or design patterns into code with the aim of “future proofing” (See also, YAGNI).


The Right Balance

The core of our working agreement is that effective speed is about clear priorities and execution. Delivering software with care means high-quality, reliable solutions without over-complicating things. When we focus on the business outcomes that need to be achieved, we will prioritise work correctly and move fast on the things that actually matter. Because we have selected the most important thing, we can prioritise testing and validation to make sure we are getting it right. We are aiming for good enough, without over-engineering or gold-plating, but still delivering work we can be proud of.

“Speed but not haste, care but not waste” is not just a catchy slogan, but a way to deliver reliable, quality software quickly. We use it to frame how we communicate amongst ourselves, but also with our stakeholders. It is important that everyone internal and external to the team understands the trade-offs between speed and quality, and what we are optimising for. The key to success is finding the right balance between the two. Don’t let the pressure to move quickly lead to mistakes, but also don’t get stuck refining when you could be delivering.

The next time you face a choice between speed and quality, remember: you don’t have to settle for one or the other. Following this guidance, it is possible to find that sweet spot where we can deliver maximum business value with speed, but not haste, care, but not waste.