Creating software for a living while keeping up with the industry’s latest trends can be overwhelming. Every day brings a new framework, a new technology, a new architectural pattern. One of my favourite parodies of this is Days Since Last JavaScript Framework (spoiler: the answer is 0 days, every day). In the data space, the latest thing appears to be the data lakehouse pattern, which, although not new, has had a resurgence thanks to Apache Iceberg and the idea of distributed compute. And anyone not living under a rock has seen the explosion of Generative AI across the industry, with everyone trying to shoehorn AI into their product, whether that’s what their users want or not. With this pace of change, it is hard not to feel that you’re being left behind. For some, this results in a practice I recently heard described as Resume Driven Development.

Resume driven development happens when engineers choose architectures, tools and technologies not because they are the best solution to the business or product need, but because they are currently in vogue and will look impressive on a resume. The result is overengineered, highly complex solutions that, in many cases, don’t actually solve the product or business requirements at hand. It occurs when engineers prioritise playing with new toys over what they are paid for: solving business problems. I think this happens for a number of reasons.

Good architecture can be boring

In most businesses, the problems you’re solving have been solved before. Data pipelines, CRUD apps, infrastructure scaling - these are not groundbreaking challenges, and they exist in most companies. Unless you’re dealing with tier 1 tech company hyper-scale challenges (if you aren’t sure whether that’s you, it isn’t), you’re likely not tackling anything novel. Engineering for solved problems can feel mundane, and building out yet another batch-based data pipeline can definitely be boring. This sends some engineers reaching for the shiny tools when they just aren’t needed.

Most of the time, the real complexity doesn’t come from the technology, but from understanding what the business actually needs, and then selecting the appropriate technology to solve for that need. In our rush to keep up with the technical Joneses, we often overlook the simplest answers.

Take data engineering, for example. At its core, data engineering is about making data accessible, clean, and ready for consumption by people or machines, whether that’s analytics or the old “AI” (machine learning and data science). There are well understood architectural patterns for building data pipelines, data warehouses (and by this I mean the conceptual model, not a specific technology) and analytics platforms. The complexity of the discipline is not necessarily in choosing the right toolset; it’s in figuring out what the business is actually asking for and then aligning the technology to those requirements.

But they need a real-time dashboard

Let’s talk about a common scenario. A data team ingests data from a bunch of different sources to service a marketing campaign report. Dave* from the team picks up the work and, having spoken to exactly 0 people from marketing, decides to build these pipelines to be real-time with sub-second latency. For ingestion, he builds out Kafka (on EKS of course, so that he can play with Kubernetes). Once the data lands in Kafka, real-time processing begins. For this, Dave uses Apache Flink (again, on his self-managed EKS cluster). After processing the data, Dave decides to store it in DynamoDB. He’s always wanted to learn NoSQL.

This solution uses the latest and greatest architectures for real-time processing. The Kafka brokers can scale dynamically based on demand; Kubernetes handles auto-scaling, self-healing and load balancing to ensure performance under any data load. Flink gives sub-second turnaround times, and DynamoDB offers millisecond latency and scales to massive volumes.

If Dave had spoken to anyone from the marketing team, he would know that this dashboard is used exclusively in their fortnightly status meeting with the executive team. As long as the report is fresh on Thursday at 2pm, it is fulfilling its purpose. No one on Dave’s team has any experience with Kafka, Flink, or DynamoDB. Dave himself has never used any of these technologies before either, and other than the tutorials he did as part of setup, he has no experience supporting or maintaining them.

Dave updates his LinkedIn profile. Skills: Kubernetes. Kafka. Flink. DynamoDB

Dave’s solution took several months to build. When it fails, getting it back up and running takes an extended period of time because no one really understands how it all works. During this time, the executive team has become increasingly frustrated about the lack of insight into their marketing investment during their fortnightly catch-ups. The marketing team starts taking spreadsheet exports from the dashboard on Wednesday afternoon and using those in the meeting, due to concerns about the reliability of the pipeline.

What does the business need?

The above (somewhat extreme) example sounds crazy, but it happens all the time. The marketing team had no need for real-time data; they just needed a report that was fresh for their meeting. A batch-based solution with “boring” architecture would have been completed significantly faster and would be infinitely easier to support and debug when it breaks. The decision makers are looking for insights and, quite frankly, couldn’t care less about your fancy high-maintenance streaming pipeline. Instead of overengineering, if Dave had just spoken to a human being, he would have figured that out in a 10-minute conversation.
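To make “boring” concrete, here’s a minimal sketch of what the batch version could look like. It assumes the marketing platforms can produce a daily CSV export and that one plain warehouse table is enough for the dashboard; every file, table and column name here is hypothetical, and SQLite is just a stand-in for whatever warehouse you already run.

```python
# A deliberately boring daily batch job: pull the campaign export, load it
# into a warehouse table, and let the existing BI tool read from there.
# All paths, table names and columns are illustrative placeholders.
import csv
import sqlite3
from datetime import date

def load_campaign_export(csv_path: str, db_path: str = "warehouse.db") -> int:
    """Load one day's marketing export into a plain warehouse table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS marketing_campaigns (
               load_date TEXT, campaign TEXT, spend REAL, clicks INTEGER)"""
    )
    with open(csv_path, newline="") as f:
        rows = [
            (date.today().isoformat(), r["campaign"], float(r["spend"]), int(r["clicks"]))
            for r in csv.DictReader(f)
        ]
    conn.executemany("INSERT INTO marketing_campaigns VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    conn.close()
    return len(rows)

if __name__ == "__main__":
    # Scheduled from cron, comfortably before the Thursday 2pm meeting, e.g.:
    # 0 6 * * 4 /usr/bin/python3 load_campaigns.py
    print(f"Loaded {load_campaign_export('campaigns.csv')} rows")
```

One script, one table, one cron entry. Nothing to page anyone about at 2am, and when it breaks, anyone on the team can read it top to bottom.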

The team I work on now has a working agreement: before starting any work, understand the why. (I’ve written about another working agreement I am fond of.) If Dave had done this, he would have understood what business purpose his work served. He would have figured out that a daily, or even weekly, batch process would have been sufficient for the actual need. His overengineered solution is a case of technology for the sake of it.

But that isn’t what Google/Meta/Netflix is doing

These shiny technologies and architectural patterns often come from the tier 1 hyper-scale companies, built to solve their own unique internal use cases. There is a mindset that software teams need to do what the giants are doing. After all, if it can work at FAANG scale, it can handle the demand of your organisation, right?

The answer is actually the opposite, for exactly that reason: you’re not them.

Unless you work at one of these hyper-scale companies, their architecture is probably not applicable to your use case. The scale of the problems you are solving is vastly different. A quick Google search shows Facebook has 3.07 billion monthly active users. I imagine a select * from users is not as straightforward at Facebook as it is at your small-to-mid-sized business or startup. Their architecture and technology choices are driven by the sheer size and scale of their systems: billions of users, petabytes of data, global infrastructure, lightning-fast performance. To solve for this, they do need distributed data storage, advanced real-time streaming and custom-built infrastructure. The problems they are solving are fundamentally different from the problems faced at the average eCommerce business.

With scale and complexity also come cost and maintenance. Hyper-scale businesses have massive budgets and hundreds of engineers to manage their complex architecture. Smaller businesses just don’t have those kinds of resources, and won’t survive long if they spend all their money on what effectively amounts to engineers playing with new toys.

You’ll have far more maintainable systems, lower costs and happier stakeholders if you stick with what works: mature, stable technologies that fit the scale and complexity of the problem you are actually solving. You’ll have more time to solve the problems that matter, rather than wrestling with overengineered infrastructure designed for a scale you don’t have.

So I can’t ever use anything new?

I’m not suggesting that at all. What I am suggesting is that to justify the new shiny thing, it needs to be the best fit to solve a real business problem or unlock a real opportunity. In most cases, the toolbelt we already have is sufficient to meet business requirements.

The trick is to be able to differentiate between the shiny thing and the right thing:

Business Alignment & Risk

Shiny Thing: Chosen based on hype. Not aligned to the actual needs of the business.

Right Thing: Directly solves a business problem, aligns to strategic goals and reduces risk.

Problem Fit & Complexity

Shiny Thing: Chosen because it’s trending or exciting, but introduces unnecessary complexity for the problem at hand.

Right Thing: Addresses a specific need, its features map directly to your requirements, and it matches the complexity level of the problem: simple for simple problems, more complex for complex problems.

I borrow here from the Zen of Python:

Simple is better than complex.

Complex is better than complicated.
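(Python literally ships with these lines; run the snippet below and the full list prints to your terminal.)

```python
# Prints the full Zen of Python, including the two lines quoted above.
import this
```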

Existing Tools & Maintenance

Shiny Thing: Chosen because it’s “better” than what you already have without actually evaluating if the current tool is sufficient. Potentially immature ecosystem and unproven track record.

Right Thing: Evaluated against existing tools to confirm the new technology brings an actual required improvement (e.g. performance, scalability) - required is the key word here. Well-established technology, widely adopted, with a mature ecosystem, used successfully in similar contexts.

Team Expertise & Time to Value

Shiny Thing: Requires significant ramp up time, specialised knowledge and development resources that delay the business value.

Right Thing: Matches the team’s current experience, delivers quick implementation and value without unnecessary learning curves.

Scalability & Cost

Shiny Thing: Chosen for “scalability” that doesn’t match actual requirements and scope. High cost of ownership.

Right Thing: Scales appropriately for the need, with a focus on cost-effectiveness and long-term maintainability.

Choose the right tool

It’s easy to get sucked into vendor and industry hype around new technology. But when it comes to solving actual business needs, novelty needs to be met with a healthy dose of scepticism. Pragmatism should trump resume padding when it comes to choosing how we build our systems. The job is about delivering working, maintainable software with the right tool for the job, not the right tool for your next interview.

Everyone wants to stay relevant, and new tech is cool to experiment with, but resume driven development at work results in poor solutions: hard to maintain, complex, and overengineered. When building software, the focus should always be on delivering real business impact, not impressing future employers with buzzword bingo.

Next time you reach for new tech to solve a solved problem, take a step back and ask: is this the most effective way to do this? Consider whether you are going down the path of the shiny thing or the right thing. If the shiny thing doesn’t provide a clear benefit over what you already have, don’t let the desire for novelty push you into architecture by CV. Your customers will thank you when the software you provide just works, and does what they actually need.


* Dave is a work of fiction. Any resemblance to persons living or dead is purely coincidental.

** Don’t be a Dave, please.