Showing posts with label research. Show all posts

03 October 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 103: Building the Bedrock - What’s Needed for the Future of AI)

Prompt Engineering Series

Prompt: "write a post of 600 words on what is needed for creating a foundation for the further development of AI"

Introduction

Artificial Intelligence (AI) has rapidly evolved from a niche academic pursuit into a transformative force reshaping industries, societies, and everyday life. Yet, as AI systems grow more powerful and pervasive, the urgency to establish a robust foundation for their continued development becomes paramount. Much like thermodynamics emerged to explain the steam engine, we now need a scientific, ethical, and infrastructural framework to guide the future of intelligent systems.

1. Theoretical Understanding of Intelligence

At the heart of AI’s future lies a fundamental question: what is intelligence, and how can it be formalized? Despite the success of deep learning, we still lack a comprehensive theory that explains why certain architectures work, how generalization occurs, and what the limits of learning are. Researchers like Yann LeCun have called for an equivalent of thermodynamics for intelligence - a set of principles that can explain and predict the behavior of intelligent systems. This requires interdisciplinary collaboration across mathematics, neuroscience, cognitive science, and computer science to build a unified theory of learning and reasoning.

2. Robust and Transparent Infrastructure

AI development today is often fragmented, with tools, frameworks, and models scattered across platforms. To scale AI responsibly, we need standardized, interoperable infrastructure that supports experimentation and enterprise deployment. Initiatives like the Microsoft Agent Framework [1] aim to unify open-source orchestration with enterprise-grade stability, enabling developers to build multi-agent systems that are secure, observable, and scalable. Such frameworks are essential for moving from prototype to production without sacrificing trust or performance.

3. Trustworthy and Ethical Design

As AI systems increasingly influence decisions in healthcare, finance, and law, trustworthiness becomes non-negotiable. This includes:

  • Fairness: Ensuring models do not perpetuate bias or discrimination.
  • Explainability: Making decisions interpretable to users and regulators.
  • Safety: Preventing harmful outputs or unintended consequences.
  • Privacy: Respecting user data and complying with regulations.

The Fraunhofer IAIS White Paper [2] on Trustworthy AI outlines the importance of certified testing methods, ethical design principles, and human-centered development. Embedding these values into the foundation of AI ensures that innovation does not come at the cost of societal harm.

4. Global Collaboration and Regulation

AI is a global endeavor, but its governance is often fragmented. The European Union’s AI Act, for example, sets a precedent for regulating high-risk applications, but international alignment is still lacking. To create a stable foundation, nations must collaborate on shared standards, data governance, and ethical norms. This includes open dialogue between governments, academia, industry, and civil society to ensure that AI development reflects diverse values and priorities.

5. Investment in Research and Education

The future of AI depends on a pipeline of skilled researchers, engineers, and ethicists. Governments and institutions must invest in:

  • Basic research into learning theory, symbolic reasoning, and neuromorphic computing.
  • Applied research for domain-specific AI in climate science, medicine, and education.
  • Education and training programs to democratize AI literacy and empower the next generation.

Initiatives like the Helmholtz Foundation Model Initiative [3] exemplify how strategic funding and interdisciplinary collaboration can accelerate AI innovation while addressing societal challenges.

Conclusion

Creating a foundation for the further development of AI is not just a technical challenge - it’s a philosophical, ethical, and societal one. It requires a shift from building tools to building understanding, from isolated innovation to collaborative stewardship. If we succeed, AI can become not just a powerful technology, but a trusted partner in shaping a better future.

Just try the prompt on Copilot or your favorite AI-powered assistant! Have you got a different/similar result? How big or important is the difference? Any other thoughts?
Just share the link to the post with me and I'll add it to this post as a resource!

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

References
[1] Microsoft (2025) Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps [link]
[2] Sebastian Schmidt et al (2024) Developing trustworthy AI applications with foundation models [link]
[3] Helmholtz AI (2025) Helmholtz Foundation Model Initiative

16 March 2024

🧭Business Intelligence: A Software Engineer's Perspective (Part VII: Think for Yourself!)

Business Intelligence Series

After almost a quarter-century of professional experience, the best advice I could give younger professionals is to "gather information and think for yourselves" - and with this the reader can close the page and move on! Anyway, everybody seems to be looking for sudden enlightenment with minimal effort, as if the effort had no meaning in the process!

In whatever endeavor you are caught, it makes sense to do a bit of thinking for yourself upfront - what is the task, or more generally the problem; which are the main aspects and interpretations; which are the goals, respectively the objectives; what a solution might look like, respectively how it can be solved; how long it could take; etc. This exercise is important for familiarizing yourself with the problem and creating a skeleton on which you can build further. It can be just vague ideas or something more complex, though no matter the overall depth, it is important to do some thinking for yourself!

Then, you should do some research to identify how others approached and maybe solved the problem - what were the justifications, assumptions, heuristics, strategies, and other tools used in sense-making and problem solving. When doing research, one should not stop at the first answer and go with it. It makes sense to allocate a fair amount of time to gathering information, structuring the findings in a reusable way (e.g. tables, mind maps or other tools used for knowledge mapping), and looking at the problem from the multiple perspectives derived from them. It's important to gather several perspectives, otherwise the decisions have a high chance of being biased. Just because others preferred a certain approach, it doesn't mean one should follow it, at least not blindly!

The purpose of research is manifold. First, one should try not to reinvent the wheel. I know, it can be fun, and a lot can be learned in the process, though when time is an important commodity, it's important to be pragmatic! Secondly, new information can provide new perspectives - one can learn a lot from other people’s thinking. The pragmatism of problem solvers should be combined, when possible, with the idealism of theories. Thus, one can make connections between ideas that aren't connected at first sight.

Once a good share of facts has been gathered, you can review the new information with respect to the previous findings and devise from there several approaches worth attacking. Once the facts are reviewed, there are probably strong arguments made by others for following one approach over the others. However, one shows maturity when one is able to evaluate the information and make a decision based on it, even if the decision is far from perfect.

One should try to develop a feeling for decision making, even if this seems to be more of a gut feeling and stressful at times. When possible, one should attempt to collect and/or use data, though collecting data is often a luxury that tends to postpone the decision making, respectively to be misused by people just to confirm their biases. Conversely, if there's an important benefit associated with it, one can collect data to validate one's decision over time, though that's more of a scientist’s approach.

I know it's easier to go with the general opinion and do what others advise, especially when some ideas are popular and/or come from experts, though that would also mean following others' mistakes and biases. Occasionally, that can be acceptable, especially when the impact is negligible; however, each decision we are confronted with is an opportunity to learn something, to make a difference!

Previous Post <<||>> Next Post

22 August 2023

🔖Book Review: Laurent Bossavit's The Leprechauns of Software Engineering (2015)




Software Engineering should be the "establishment and use of sound engineering principles to obtain economically software that is reliable and works on real machines efficiently" [2]. Working for more than 20 years in the field, I sometimes feel that its foundation is a strange mix of sound and questionable ideas that take the form of methodologies, principles, standards, myths, folklore, statistics and other similar concepts that form its backbone.

I tend to look with critical eyes at the important numbers advanced in research and pseudo-scientific papers, especially when they’re related to my job, because I know that statistics are seldom what they appear to be - there are accidental and sometimes even intentional errors made to support the claims. Unfortunately, the missing raw data and, often, the information about the methodologies used in collecting and processing that data make understanding the numbers and/or graphics more challenging, not to mention the considerable amount of effort and time spent to uncover the evidence trail.
Fortunately, there are other professionals who went further down the path of bibliographical references and shared their findings in blogs, papers, books and other media. It’s also the case of Laurent Bossavit, who in his book "The Leprechauns of Software Engineering" (2015) looks behind some of the numbers that over time became part of the leprechaunish folklore of IT professionals, puts them into historical context, and provides in the appendix the evidence trails for the reader to validate his findings. Over several chapters the author focuses mainly on the cost of defects, Boehm’s cone of uncertainty, the differences in productivity among individual programmers (aka the 10x claim), respectively the relation between poor requirements and defects.

His most important finding is that the references used in most of the researched sources advancing the above numbers were secondary, while the actual sources provide no direct information about the empirical data or the methodology for its collection. The way the numbers are advanced and used makes one question the validity of the measurements performed, respectively the character of the mistakes the authors made. Many of the cited papers hardly match the academic requirements of other scientific fields, being a mix of false claims, improperly conducted research and improper citations.

Secondly, he argues that the small sample sizes used as a basis for the experiments, the populations usually formed of students, respectively the way numbers were mixed without any reliable scientific rigor, make him (and the reader as well) question even more how the experiments in the respective papers were performed. With this, it is likely that the larger body of research based on these sources should raise further concerns. The reader can thus ask himself/herself how deep the domino effect goes inside the Software Engineering field.

In the author’s opinion, Software Engineering as a social process "needs to be studied with tools that borrow as much from the social and cognitive sciences as they do from the mathematical theories of computation". How far the theories and models of the respective fields can be extended is an open topic. The bottom line: the field of Software Engineering needs better scientific, empirical experiments that are based on commonly agreed definitions, data collection and processing techniques, respectively higher standards for research publications. Without this, we’ll continue to compare apples with peaches and mix them in calculations so we can get stories that support our leprechaunish theories.

Overall, the book is a good read for software engineers as well as for other IT professionals. Even if it barely scratches the surface of software myths and folklore, there’s enough material for readers who want to dive deeper.

Previous Post <<||>> Next Post

References:
[1] Laurent Bossavit (2015) "The Leprechauns of Software Engineering"
[2] Friedrich Bauer (1972) "Software Engineering", Information Processing

15 December 2011

📉Graphical Representation: Research (Just the Quotes)

"One of the greatest values of the graphic chart is its use in the analysis of a problem. Ordinarily, the chart brings up many questions which require careful consideration and further research before a satisfactory conclusion can be reached. A properly drawn chart gives a cross-section picture of the situation. While charts may bring out. hidden facts in tables or masses of data, they cannot take the place of careful, analysis. In fact, charts may be dangerous devices when in the hands of those unwilling to base their interpretations upon careful study. This, however, does not detract from their value when they are properly used as aids in solving statistical problems." (John R Riggleman & Ira N Frisbee, "Business Statistics", 1938)

"Although flow charts are not used to portray or interpret statistical data, they possess definite utility for certain kinds of research and administrative problems. With a well-designed flow chart it is possible to present a large number of facts and relationships simply, clearly, and accurately, without resorting to extensive or involved verbal description." (Anna C Rogers, "Graphic Charts Handbook", 1961)

"Graphic representation constitutes one of the basic sign-systems conceived by the human mind for the purposes of storing, understanding, and communicating essential information. As a "language" for the eye, graphics benefits from the ubiquitous properties of visual perception. As a monosemic system, it forms the rational part of the world of images. […] Graphics owes its special significance to its double function as a storage mechanism and a research instrument."  (Jacques Bertin, "Semiology of graphics" ["Semiologie Graphique"], 1967)

"The great difference between the graphic representation of yesterday, which was poorly dissociated from the figurative image, and the graphics of tomorrow, is the disappearance of the congential fixity of the image. […] When one can superimpose, juxtapose, transpose, and permute graphic images in ways that lead to groupings and classings, the graphic image passes from the dead image, the 'illustration,' to the living image, the widely accessible research instrument it is now becoming. The graphic is no longer only the 'representation' of a final simplification, it is a point of departure for the discovery of these simplifications and the means for their justification. The graphic has become, by its manageability, an instrument for information processing." (Jacques Bertin, "Semiology of graphics" ["Semiologie Graphique"], 1967)

"[…] fitting lines to relationships between variables is often a useful and powerful method of summarizing a set of data. Regression analysis fits naturally with the development of causal explanations, simply because the research worker must, at a minimum, know what he or she is seeking to explain." (Edward R Tufte, "Data Analysis for Politics and Policy", 1974)

"Typically, data analysis is messy, and little details clutter it. Not only confounding factors, but also deviant cases, minor problems in measurement, and ambiguous results lead to frustration and discouragement, so that more data are collected than analyzed. Neglecting or hiding the messy details of the data reduces the researcher's chances of discovering something new." (Edward R Tufte, "Data Analysis for Politics and Policy", 1974)

"Of course, false graphics are still with us. Deception must always be confronted and demolished, even if lie detection is no longer at the forefront of research. Graphical excellence begins with telling the truth about the data." (Edward R Tufte, "The Visual Display of Quantitative Information", 1983)

"Data analysis is rarely as simple in practice as it appears in books. Like other statistical techniques, regression rests on certain assumptions and may produce unrealistic results if those assumptions are false. Furthermore it is not always obvious how to translate a research question into a regression model." (Lawrence C Hamilton, "Regression with Graphics: A second course in applied statistics", 1991)

"Data analysis [...] begins with a dataset in hand. Our purpose in data analysis is to learn what we can from those data, to help us draw conclusions about our broader research questions. Our research questions determine what sort of data we need in the first place, and how we ought to go about collecting them. Unless data collection has been done carefully, even a brilliant analyst may be unable to reach valid conclusions regarding the original research questions." (Lawrence C Hamilton, "Data Analysis for Social Scientists: A first course in applied statistics", 1995)

"Good design protects you from the need for too many highly accurate components in the system. But such design principles are still, to this date, ill-understood and need to be researched extensively. Not that good designers do not understand this intuitively, merely it is not easily incorporated into the design methods you were taught in school. Good minds are still needed in spite of all the computing tools we have developed." (Richard Hamming, "The Art of Doing Science and Engineering: Learning to Learn", 1997)

"Data visualization [...] expresses the idea that it involves more than just representing data in a graphical form" (instead of using a table). The information behind the data should also be revealed in a good display; the graphic should aid readers or viewers in seeing the structure in the data. The term data visualization is related to the new field of information visualization. This includes visualization of all kinds of information, not just of data, and is closely associated with research by computer scientists." (Antony Unwin et al, "Introduction" [in "Handbook of Data Visualization"], 2008)

"Presentation graphics face the challenge to depict a key message in - usually a single - graphic which needs to fit very many observers at a time, without the chance to give further explanations or context. Exploration graphics, in contrast, are mostly created and used only by a single researcher, who can use as many graphics as necessary to explore particular questions. In most cases none of these graphics alone gives a comprehensive answer to those questions, but must be seen as a whole in the context of the analysis." (Martin Theus & Simon Urbanek, "Interactive Graphics for Data Analysis: Principles and Examples", 2009)

"Being able to express, analyze, and report on the issues of design practice demands facts, data, and research. Understanding how to turn information into valuable strategic assets is one of the key talents the design writer and researcher must possess." (Steven Heller, "Writing and Research for Graphic Designers: A Designer's Manual to Strategic Communication and Presentation, 2012) 

"Great research is partly police work. You find clues that lead to more clues that lead to a hot trail that leads to conclusive evidence. Often you may instinctively 'feel' something exists somewhere, but finding it is the result of luck and serendipity. You stumble over a document that leads you to an archive that provides you with a key, and so on. Despite that thrill of discovery, having a plan that takes you from point A to point B is useful." (Steven Heller, "Writing and Research for Graphic Designers: A Designer's Manual to Strategic Communication and Presentation", 2012) 

"Diagrams furnish only approximate information. They do not add anything to the meaning of the data and, therefore, are not of much use to a statistician or research worker for further mathematical treatment or statistical analysis. On the other hand, graphs are more obvious, precise and accurate than the diagrams and are quite helpful to the statistician for the study of slopes, rates of change and estimation," (interpolation and extrapolation), wherever possible." (S C Gupta & Indra Gupta, "Business Statistics", 2013)

"Collecting data through sampling therefore becomes a never-ending battle to avoid sources of bias. [...] While trying to obtain a random sample, researchers sometimes make errors in judgment about whether every person or thing is equally likely to be sampled." (Daniel J Levitin, "Weaponized Lies", 2017)

"Samples give us estimates of something, and they will almost always deviate from the true number by some amount, large or small, and that is the margin of error. […] The margin of error does not address underlying flaws in the research, only the degree of error in the sampling procedure. But ignoring those deeper possible flaws for the moment, there is another measurement or statistic that accompanies any rigorously defined sample: the confidence interval." (Daniel J Levitin, "Weaponized Lies", 2017)

"To be any good, a sample has to be representative. A sample is representative if every person or thing in the group you’re studying has an equally likely chance of being chosen. If not, your sample is biased. […] The job of the statistician is to formulate an inventory of all those things that matter in order to obtain a representative sample. Researchers have to avoid the tendency to capture variables that are easy to identify or collect data on - sometimes the things that matter are not obvious or are difficult to measure." (Daniel J Levitin, "Weaponized Lies", 2017)





About Me

Koeln, NRW, Germany
IT Professional with more than 25 years' experience in the full life-cycle of Web/Desktop/Database application development, Software Engineering, Consultancy, Data Management, Data Quality, Data Migrations, Reporting, ERP implementations & support, Team/Project/IT Management, etc.