Software Engineering Series
I've been playing with AI-based prompting in Microsoft 365 and Edge Copilot for SQL programming tasks, and even for simple requests I got wrong or suboptimal solutions. Some solutions weren't far from correct, yet the deviation was enough for the code not to work at all or to return curious answers. Other solutions were more complex than needed, which made troubleshooting them so challenging that it was easier to rewrite the code myself. Imagine what happens when such wrong solutions and lines of reasoning propagate uncontrolled within broader chains of reasoning!
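To make the failure mode concrete, here's a minimal sketch of the kind of subtly wrong answer I mean; the table names and data are hypothetical, not taken from an actual Copilot session. The NOT IN query looks plausible and runs without error, yet a single NULL in the subquery makes it return nothing:

```sql
-- Hypothetical illustration of a subtly wrong generated query.
CREATE TABLE #Customers (CustomerId INT, Name NVARCHAR(50));
CREATE TABLE #Orders (OrderId INT, CustomerId INT NULL);

INSERT INTO #Customers VALUES (1, N'Alice'), (2, N'Bob');
INSERT INTO #Orders VALUES (10, 1), (11, NULL); -- a NULL foreign key slipped in

-- Generated answer for "customers without orders" via NOT IN.
-- Returns no rows at all: the NULL in the subquery makes NOT IN evaluate to UNKNOWN.
SELECT c.CustomerId, c.Name
FROM #Customers c
WHERE c.CustomerId NOT IN (SELECT o.CustomerId FROM #Orders o);

-- Correct version: NOT EXISTS is NULL-safe and returns Bob as expected.
SELECT c.CustomerId, c.Name
FROM #Customers c
WHERE NOT EXISTS (SELECT 1 FROM #Orders o WHERE o.CustomerId = c.CustomerId);

DROP TABLE #Customers, #Orders;
```

The fix is mechanical once spotted, but in a longer generated script this kind of silent miss is exactly the sort of thing that propagates.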
Some of the answers we get from AI can be validated step by step, and the logic can be adjusted accordingly, though this provides no guarantee that the answers won't change as new data, information or knowledge is included in the models, or as the model itself changes, directly or indirectly. In software development there's a minimum set of tests that can and should be performed to assure that the generated output matches expectations; in AI-based solutions, however, there's no guarantee that what worked before will continue to work.
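That is an argument for pinning any generated code with at least a small regression test before trusting it. Below is a minimal sketch of what I mean, reusing the hypothetical tables and data from above: the result verified once by hand is stored, and any later drift in the query, or in a regenerated version of it, fails loudly instead of silently.

```sql
-- A minimal sketch of a pinned regression test (all names and data hypothetical).
CREATE TABLE #Customers (CustomerId INT, Name NVARCHAR(50));
CREATE TABLE #Orders (OrderId INT, CustomerId INT NULL);
CREATE TABLE #Expected (CustomerId INT, Name NVARCHAR(50));

INSERT INTO #Customers VALUES (1, N'Alice'), (2, N'Bob');
INSERT INTO #Orders VALUES (10, 1), (11, NULL);
INSERT INTO #Expected VALUES (2, N'Bob'); -- the answer we validated manually

-- Compare both directions with EXCEPT: a row on one side only means drift.
IF EXISTS (SELECT CustomerId, Name FROM #Expected
           EXCEPT
           SELECT c.CustomerId, c.Name
           FROM #Customers c
           WHERE NOT EXISTS (SELECT 1 FROM #Orders o WHERE o.CustomerId = c.CustomerId))
   OR EXISTS (SELECT c.CustomerId, c.Name
              FROM #Customers c
              WHERE NOT EXISTS (SELECT 1 FROM #Orders o WHERE o.CustomerId = c.CustomerId)
              EXCEPT
              SELECT CustomerId, Name FROM #Expected)
    THROW 50001, N'Regression: the generated query no longer matches the verified result.', 1;

DROP TABLE #Customers, #Orders, #Expected;
```

Nothing here is specific to AI, which is precisely the point: the discipline we apply to hand-written code is the least we owe to generated code.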
Moreover, small errors can propagate in a chain effect, producing curiously wrong solutions. AI acts, and probably will continue to act, as a Pandora's box. So, how much can we rely on AI, especially when the complexity of the problems and their ever-changing nature is confronted with a model highly sensitive to its initial or intermediary conditions?
Some of the answers may make sense, and some may even be better to a degree than the decisions made by experts, though how far do we want to go? Who is ready to let their own life be blindly driven by the answers of an AI machine just because it can handle certain facts better than we can? Moreover, the human brain is wired to cope with uncertainty, morality and other important aspects that can enhance the quality of decisions, even if those decisions are far from perfect.
It's important to understand how sensitive AI models and their outputs are to the initial and even intermediate conditions on which such models are based, that is, to what is used in their reasoning, and how slight changes can result in unexpected effects. Networks, whether AI-based or not, lead to behavior that can be explained to some degree as long as full transparency of the model and of the outcomes of its many iterations is provided. When AI models behave like black boxes, there's no guarantee of the outcomes, nor transparency about the jumps made from one state of the network to another, and surprises can appear more often than we expect or are prepared to accept.
Some of the successes rooted in AI-based reasoning might happen just because, in similar contexts, people are not ready to trust their own reasoning or to take a leap of faith. AI tends to bypass these aspects of human psychology and logic that are part of the overall process. The eventual successes are thus not an immediate effect of AI's capabilities, but merely of our taking a shortcut. Unfortunately, this can cut both ways, like a double-edged sword.
I want to believe that AI is the solution to humanity's problems, and there are probably many areas of applicability, though letting AI control our lives, and over-depending on it, can in the long term cause more problems than AI and our society can solve. The idea of AI acting as a Copilot that can be used to extrapolate beyond our capabilities is probably not wrong, though one should keep the risks and the various possible outcomes in sight!