Eight Lessons I’ve Learned Building Software with AI
- Ruwan Rajapakse
As I continue my layman’s exploratory journey into AI-assisted software development, I find myself increasingly excited—and, admittedly, a little hooked. This isn’t only because my visitor surveillance use case is finally taking shape, gratifying though that is. It’s also the sheer delight of discovering that working with large language models can feel a bit like having a competent robo-developer available around the clock: one with superhuman stamina, but who, like any human counterpart, is still fallible and prone to mistakes. And therefore one who benefits from partnership—constructive criticism, help with testing, and guidance on wider context, alternative approaches, priorities, and trade-offs.
Lately, I’ve begun to feel a little like a seasoned Dev Lead of old: getting a great deal done through my “minions” (the models), while still adding real value as the provider of requirements, the reviewer of solution options and outputs, and an active partner in debugging and troubleshooting.
Here are eight lessons I’ve learned from these AI-assisted development experiments. Many of these strengths and limitations may seem obvious when viewed through the lens of how LLMs and their derivatives work, but I’m sharing them from my perhaps naïve perspective of simply trying to get the job done, by hook or by crook.
Models like Gemini 3 are more than just code tinkerers. With the right guidance, it’s entirely possible to architect, design, and develop a complete epic or small solution—from high-level requirements through to working code. In some ways, the roles are inverted: the human provides direction, judgement, and reality checks, while the model handles much of the implementation and “head-scratching”.
They respond exceptionally well to evidence. Error logs, stack traces, bug reports, and other formal descriptions of faulty behaviour are powerful guides. Given such inputs, they can often identify the underlying issue quickly and with surprising accuracy.
They have short memories. Each step requires sufficient context, particularly when the next increment is non-trivial. It helps to recap the design, relevant code, known issues, and the immediate goal before asking for the next change.
They can also be “stubborn”, exhibiting a kind of inertia. Once they latch onto an incorrect interpretation, a gentle nudge is often insufficient. Clearer direction, stronger reasoning, or sometimes a clean restart is needed to move forward effectively.
They are frequently blindsided by their own structural limitations. This can range from minor quirks—such as insisting on producing artefacts that never materialise—to more serious issues, like misjudging how much information they can process or output in one go. They rarely suggest breaking work into smaller units themselves and may attempt to tackle scopes that are far too large, leading to repeated failures or incomplete results.
They often require multiple iterations to refine their thinking. Tasks such as performance tuning, bottleneck analysis, or complex debugging typically demand several cycles of dialogue. Feeding back observed behaviours, results, and the last known working version of the code is essential. With enough iteration—and with your own reasoning layered in at each step—they can eventually crack many of these “tricky” problems with impressive accuracy.
They also show surprising insight into user experience. Given a clear language description, they can often “picture” a GUI and generate code that aligns with common UX/UI best practices. While not flawless, their ability to translate conceptual descriptions into coherent, usable interfaces is genuinely impressive.
They are quite knowledgeable in architecture and design. Depending on a solution’s complexity, scalability requirements, and your own “taste”, they will often point out the simplest viable approach—and will alert you to better alternatives if you choose otherwise and later run into issues.
I’ve been able to make significant strides through this symbiosis, at speeds I would never have achieved working on my own a few hours a week.
None of this is to suggest that I’m blind to the common concerns surrounding AI and its adoption in the workplace—from its lack of genuine innovative capability to fears of it overtaking us and pushing us towards the singularity. What I am suggesting is that, when one engages with AI in a deeper-than-surface way to advance one’s IT solution-making goals, a constructive relationship begins to emerge. I believe there is a genuine sweet spot we could aim for, with appropriate safeguards in place. Reaching it will likely require a shift in the broader picture of where—and how—IT solution development takes place.
More on that later, once I’ve had time to reflect further.