It’s 2025, and the last few years have seen an explosion of GenAI-driven tools in knowledge work like software engineering. Technologies like GitHub Copilot, Cursor and large language models like ChatGPT now allow developers to generate entire applications with just a few keystrokes. If you spend any time on LinkedIn or other haunts of tech influencers, you’ll see AI tools presented like a magic wand: type a prompt, hit enter, and tada! An entire application is built, fully integrated, with no need for you as a developer to ever do pesky things like read docs or write tests. Just ask the right question, and the code appears as if by magic. Gone are the days when engineering required knowledge and experience to do a good job. We have outsourced that to the machine.

In case my sarcasm isn’t obvious, let me be clear: I’m pretty sceptical about the power of these tools in the hands of developers who lack the skills necessary to produce quality software without them. While there is already evidence of downward pressure on software quality since the introduction of AI tooling, I don’t think we can solely blame the technology for this. The real issue isn’t just AI; it’s the use of these tools by those who don’t yet have the experience to understand or assess the code they generate. Relying on AI without that experience is a bold strategy, Cotton, but history suggests that shipping software you don’t understand doesn’t end well. This is not an “AI is bad” article—GenAI tools can definitely increase the speed of software development, but only if you already have the expertise to judge the quality of what the AI is producing. Without that, you’re outsourcing to something you don’t understand.

Worse still, this problem will only compound because of what these tools are doing to the craft of software engineering. If you’re relying on AI to write your software for you and can’t produce something without it, you are skipping the learning and skill development that comes from the process itself. The mistakes, the trial and error—this is what makes you a better engineer. Engineering is much more than typing code; it’s about the problem, the process, and continually improving your skills. By outsourcing to AI, you will never experience the growth that turns you into a highly competent professional. This forms a vicious cycle: you produce bad code because you use AI without experience, and you never gain experience because you are constantly using AI! To put it frankly, if your role is reduced to prompting AI for code you don’t understand, what value are you really adding?


The Magic of AI

As a child, I loved the Inheritance Cycle series by Christopher Paolini. One of the best parts of the books, in my opinion, is the magic system. In Paolini’s world of Alagaësia, magic is a function of language, but it also requires energy. You can cast a spell if you know the correct words in the “ancient language”, but you must also have the strength in your body to do the work yourself. For example, to lift a heavy stone with magic, you must first be able to lift it with your own strength.

It seems that “magic” is underpinned by two key components: knowledge and experience. To cast a spell, you need to know the correct words, and you need the strength (which we can think of as experience) to execute them. I think that AI tooling behaves exactly like this magic, and should be governed by the same rules. To produce great software, you need a grasp of the language, syntax, design patterns and architecture. But you also need the reps of having done it many times before: writing a lot of bad software in order to eventually write good software. All skills are like this; we learn at the speed of our mistakes. Using AI is a shortcut to produce quality work quickly only if you could already do the work yourself without these tools.

Quality is the key word here, because without this prerequisite knowledge and experience we have no idea if what we are producing (for it is not really us producing it) is good or not. When we outsource to the “magic” of the machine, but lack the experience to assess suitability, performance, maintainability, security and other attributes of good software, we are setting ourselves up for failure.

It’s worth noting that in Eragon’s world, the consequence for using magic when you don’t have the strength is simple: death. The consequences of using AI-written code without understanding it can sometimes be just as severe. If you write software for autonomous vehicles, life-saving medical devices like pacemakers, or critical infrastructure like air traffic control services, being inexperienced and not fully understanding the code you produce can quite literally be deadly. Even if you’re not working on something with such high stakes, you don’t have to look far to see examples of how poor quality software can cause significant harm through things like personal information breaches, or the loss of substantial amounts of money.


Rhunön’s Lesson

Like all good Tolkienesque high fantasy, the Inheritance Cycle has elves, who happen to speak the ancient language as their native tongue. They are also inhumanly strong; so strong, in fact, that they can do things with magic inconceivable for a human. At one point in the story, the protagonist Eragon encounters Rhunön, an elven blacksmith. She is making a shirt of mail, and to his surprise, Eragon notices that she has welded every individual link in the shirt by hand. When he asks her why she doesn’t simply use magic to save herself the time and effort, she responds, “When you can have anything you want by uttering a few words, the goal matters not, only the journey to it”.

This idea that the journey, not the destination, is the goal holds true for anyone who wants to truly master a craft, and software engineering is absolutely a craft. Of course, when creating software professionally, the end product is the goal, but the process is the thing that develops the knowledge and experience for you to create better quality software in the future. It’s what makes you a good engineer. If you use AI tools before you’ve acquired these skills, you will never develop them. The tools remove the journey. Without it, it’s unlikely that you will ever have the ability to determine if what is produced is good or not. I think a good analogy here is learning mathematics vs. using a calculator. If you never learn how to multiply two numbers together, and always use a calculator, how can you say that you are competent at multiplication? You are totally reliant on the machine. One only needs to look at social media during a ChatGPT outage to see how pervasive this is. Engineers have become utterly reliant on the technology and incapable of working without it. Personally, I want my tools to enhance my own abilities, not replace them entirely.


Good Engineering Is More Thinking Than Typing

We’ve discussed the dangers of blindly relying on AI-generated code, but what we haven’t touched on at all is how experience impacts your ability to even ask the right questions in the first place. Of course, software engineering involves writing code, but the much larger part of the skill set is problem solving. You need to think critically, evaluate trade-offs, and most importantly, understand the why. Without experience, it’s easy to focus only on the typing part of software engineering, and to neglect the thinking part: ensuring we are solving the right problems, and in the right way.

You need more than just technical skills to produce good software. You need to understand the systems you’re working on: the architecture, the infrastructure it is deployed to, how it interacts with other systems, and, critically, the business need it addresses. AI, for all its magic, doesn’t yet understand these nuances. It can generate code in response to a prompt, but it will always lack the larger context. It has no clue about the trade-offs you’re considering or the long-term implications of your decisions. And, without experience, neither will you.

AI is particularly prone to the XY problem: you ask it for help with your attempted solution (Y), and it will happily oblige, but it has no way of knowing that what you really need is a solution to the underlying problem (X). Without the right experience, you might not even know what the right question is!
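
To make this concrete, here is the classic illustration of the trap, as a minimal Python sketch (the filenames are hypothetical): you ask for “the last three characters of the filename” because you’ve already decided that’s how to get a file extension, and you get exactly what you asked for rather than what you needed.

    import os

    filenames = ["report.pdf", "archive.tar.gz", "notes.md", "README"]

    # Y (what was asked for): "give me the last three characters of each filename".
    # It looks right for report.pdf, but quietly breaks everywhere else.
    asked_for = [name[-3:] for name in filenames]  # ['pdf', '.gz', '.md', 'DME']

    # X (what was actually needed): the file extension, whatever its length.
    actually_needed = [os.path.splitext(name)[1] for name in filenames]  # ['.pdf', '.gz', '.md', '']

A model will cheerfully hand you the first version if that’s what you ask for; it takes experience to recognise that the question itself was wrong.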

AI can’t think for you. It can’t grasp the full context of your work, anticipate future requirements, or truly understand the business objectives driving your decisions. Sure, it is a tool that can speed up certain tasks and help with brainstorming and prototyping, but it isn’t a replacement for the judgement, reasoning, critical thinking and problem-solving of a skilled engineer.

If you’re relying on AI to code for you without the experience to understand the underlying problem and its nuance, you’re not really doing engineering—you’re just outsourcing the typing, and totally neglecting the thinking.


The Future is Now, Old Man

I can see how some might view my emphasis on having a baseline level of competence and experience as the opinion of some jaded “pre-AI” developer afraid of the future. But I am not that person at all: I think AI tools are great, and I use them regularly for specific tasks.

For me, AI is great for certain things: brainstorming ideas, prototyping quickly, obscure single-use code snippets, and drafting text that I don’t care too much about (writing that doesn’t need “me”). I sometimes use these tools as a rubber duck, and offload mundane things where my experience isn’t crucial. For example, my ChatGPT history is littered with things like:

  • “What are some options for deploying Apache Airflow?”
  • “Write me a command to rename every file in this directory that matches this pattern” (see the sketch after this list)
  • “I have to do a self performance review, these are the criteria, here are some dot points about what I’ve done this period”.
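
That second request, for instance, might come back as something like this minimal Python sketch (the directory, the pattern and the new naming scheme here are all hypothetical, just to show the shape of the thing):

    import os
    import re

    # Hypothetical example: rename "IMG_1234.jpeg" to "IMG_1234.jpg" in the current directory.
    pattern = re.compile(r"^(IMG_\d+)\.jpeg$")

    for name in os.listdir("."):
        match = pattern.match(name)
        if match:
            os.rename(name, f"{match.group(1)}.jpg")

It saves me a few minutes of fiddling, but there’s nothing in it I couldn’t have written myself.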

It’s important to note that I am using these tools in an exploratory manner. I never take the output at face value. It is always a place to start, not a finished product.

Where I think AI is dangerous is when you rely on it so heavily that you can’t do anything without it. Notice how with my examples above, I can still do all of these things without AI tools. They are shortcuts for knowledge and experience I already have. I can already lift the stone, so to speak. AI is not a substitute for the ability to do things from scratch.


Learn to Lift The Stone

AI isn’t going anywhere, and while it will continue to improve, I remain sceptical that it will replace the need for experienced engineers anytime soon. AI-based tooling is just the next step in a long evolution of tools designed to assist software professionals. Sure, it can be a great tool, and it might seem like magic, but I hope I’ve convinced you that magic has rules outside of fiction too.

The true value of a carpenter is not in their hammer, but in their knowledge and experience. Likewise, the value of a software engineer isn’t in the tools they use, but in their ability to solve problems, evaluate trade-offs, and make decisions that align with the broader goals of the business and its customers. A nail gun might make a carpenter faster, but it won’t make them a better craftsperson, and AI won’t make you a better engineer if you use it to outsource work you don’t understand.

So, use AI, but don’t get caught up in the hype. Remember: before you can use magic, you still need to be able to lift the stone on your own. Build your expertise, solve hard problems, put in the work, and then let AI enhance your abilities—not replace them.