"We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths." – Walt Disney
AI is the next big thing. Is it a wave, a bubble, or here to stay? I wish I knew. I will tell you what I have learned about and from AI over the last few months.
At the end of the day, it’s a tool, and as we all know, the result of using tools depends on the user. Hammers can drive nails or smash your finger. Read a tape measure correctly and your project can look good; do it wrong and nothing fits. I won’t even talk about knives and saws.
On the other hand, we hear all the time about “Vibe Coding” and how easy it is to just have AI build an app for you. I can tell you I’ve probably tried this 5 or 6 times with limited results. When it’s worked, it’s been amazing; when it hasn’t, it’s no fun to work your way through code you did not write yourself and debug it.
Even if you ask AI to document the code, it’s not always easy to figure out what is going on. One of the things that separates professional developers from amateurs (and I am most certainly not a professional developer) is the ability to quickly discern what a given piece of code is doing. (I’ll talk more about this in a moment.) It’s even more fun when you’re not as proficient in the particular language in use. For whatever reason, the AIs I have worked with prefer Python, which I’m learning (just not quickly enough).
So what makes a good AI coding experience? Well, for me, it’s something I have in abundance from my career: the ability to define requirements and specifications. Understanding how to code is not as important, but knowing how to define what needs to be coded is very important. As the old joke goes, it’s all too easy to miss on the requirements, as this picture shows:
This reminds me of an argument that I had with my father as I was preparing to enter college. Being from the first generation of computer scientists and engineers, he was dead set against me being a Computer Science major. He felt that it was more important to major in some aspect of business (accounting, finance, even marketing) and minor in CS. This way I would understand why things needed to be coded the way they were, not just how to code them. His experience from the early days of computing had taught him that. However, I was not really interested in business concepts, and countered with a new course of study called Management Information Systems, which combined aspects of business, computer science, and the actual applications someone in the business world might encounter. He didn’t think it was a good idea, so in youthful protest, I majored in Political Science.
OK, enough of the biographical tidbits: how does this all relate to AI and coding? When defining a task with AI, it’s the requirements and their details that matter more than anything else.
Want to design a system? OK, what does it need to do from start to finish? Anything left undefined, or anything you assume is generally known, is a potential spot for issues. One of the nicer things about AI is that the process can be iterative: as results are displayed and tested, it’s much easier to add a requirement like “assume all documents are located in the user’s Documents folder” than to leave it unspecified and then puzzle over code that guesses wrong. By the way, I’d also declare up front whether this is a Windows, Mac, or Linux application, since that will definitely affect where the documents are stored; and if it’s supposed to work in any environment, you might want to make the location a configurable parameter. If anything, it’s too easy for a novice AI developer to work themselves into a corner by chasing interesting features instead of just getting the thing working. When coding under this paradigm, it’s a good idea to establish basic functionality before adding extra features (something I’ve learned from hard experience).
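To make the “configurable parameter” idea concrete, here is a minimal Python sketch of the documents-folder requirement. The DOCS_DIR environment variable name is my own invention for illustration, not something from any particular spec:

```python
import os
from pathlib import Path

def get_documents_dir() -> Path:
    """Return the folder to scan for documents.

    Honors an optional DOCS_DIR environment variable (a made-up
    name for this sketch) so the location is configurable, and
    otherwise falls back to the per-user Documents folder. Using
    Path.home() keeps this working on Windows, Mac, and Linux,
    all of which conventionally place "Documents" under the
    user's home directory.
    """
    override = os.environ.get("DOCS_DIR")
    if override:
        return Path(override)
    return Path.home() / "Documents"

print(f"Looking for documents in: {get_documents_dir()}")
```

The point isn’t the code itself; it’s that a one-line requirement in the prompt (“make the folder configurable, default to the user’s Documents directory”) is what produces this shape instead of a hard-coded Windows path.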
As a best practice, I have been consistently adding a clause to my AI prompts along the lines of “Identify and address any issues or conflicts with best practices in this specification.” I find that the tool will typically surface things I hadn’t thought of, and sometimes dwell on things that are potentially important but not essential. For example, I was using AI to build some demonstration code that had security values hard-coded in plain text. That’s definitely a no-no in the professional world, but good enough for a simple one-off demonstration, so I added a note to the specification that this was for a demonstration and not for production usage.
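For readers who haven’t hit this one: the hard-coded-secrets issue above is exactly the kind of thing a best-practices clause flags, and the usual fix is small. A Python sketch of both versions (DEMO_API_KEY is a made-up variable name for this example):

```python
import os

# What my demo code looked like: a credential hard-coded in plain
# text. Fine for a throwaway demonstration, but a best-practices
# review will flag it immediately.
API_KEY = "demo-key-not-a-real-secret"

def get_api_key() -> str:
    """The production-style alternative: read the secret from the
    environment (DEMO_API_KEY is a name invented for this sketch)
    and fail loudly if it is missing, rather than shipping it in
    the source code."""
    key = os.environ.get("DEMO_API_KEY")
    if key is None:
        raise RuntimeError("Set the DEMO_API_KEY environment variable")
    return key
```

Telling the AI which of these two worlds you’re in, demo or production, is exactly the kind of requirement detail that saves a round of rework.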
At the end of the day, what does this all mean? I’m going to make a few guesses:
• Writing code manually will become less important, if not outright deprecated. As people get more familiar with AI prompting, hand-writing code will become unnecessary for creating basic applications and tools. I don’t think that major applications or operating systems will be built this way any time soon, so there’s no immediate worry for professional developers.
◦ This doesn’t mean that basic application development will be easy or seamless. Indeed, those who develop in this fashion will need to be disciplined when defining specifications and scope. I foresee new teaching methods that will develop and enhance these skills.
◦ There’s still going to be a need for professional programmers. Currently, AI works by drawing on large libraries of existing information. Since AI doesn’t truly create at this point, we still need developers who can invent new ways of approaching problems and new algorithms.
• We need to carefully examine the security models that govern how our AI tools interact with each other, whether they are acting on our behalf with other agents and systems to buy things, do research, achieve a goal, or create applications.
My thinking is that the overall acceptance of AI as something truly useful, and not a bubble, depends on how the tools develop and are embraced not only by professionals but by the average user. Part of this will be the design of the tools: is the head of the hammer too big, making it easy to hit one’s thumb? Can we easily read the tape measure to get correct measurements? And of course, can we adapt AI tools so that they are easy to understand and use? I’m pretty sure we will, but the road ahead could be somewhat rocky.
