04 October 2025

🖍️Sinan Ozdemir - Collected Quotes

"Attention is a mechanism used in deep learning models (not just Transformers) that assigns different weights to different parts of the input, allowing the model to prioritize and emphasize the most important information while performing tasks like translation or summarization. Essentially, attention allows a model to 'focus' on different parts of the input dynamically, leading to improved performance and more accurate results. Before the popularization of attention, most neural networks processed all inputs equally and the models relied on a fixed representation of the input to make predictions. Modern LLMs that rely on attention can dynamically focus on different parts of input sequences, allowing them to weigh the importance of each part in making predictions." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"[...] building an effective LLM-based application can require more than just plugging in a pre-trained model and retrieving results - what if we want to parse them for a better user experience? We might also want to lean on the learnings of massively large language models to help complete the loop and create a useful end-to-end LLM-based application. This is where prompt engineering comes into the picture." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Different algorithms may perform better on different types of text data and will have different vector sizes. The choice of algorithm can have a significant impact on the quality of the resulting embeddings. Additionally, open-source alternatives may require more customization and finetuning than closed-source products, but they also provide greater flexibility and control over the embedding process." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Embeddings are the mathematical representations of words, phrases, or tokens in a largedimensional space. In NLP, embeddings are used to represent the words, phrases, or tokens in a way that captures their semantic meaning and relationships with other words. Several types of embeddings are possible, including position embeddings, which encode the position of a token in a sentence, and token embeddings, which encode the semantic meaning of a token." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Fine-tuning involves training the LLM on a smaller, task-specific dataset to adjust its parameters for the specific task at hand. This allows the LLM to leverage its pre-trained knowledge of the language to improve its accuracy for the specific task. Fine-tuning has been shown to drastically improve performance on domain-specific and task-specific tasks and lets LLMs adapt quickly to a wide variety of NLP applications." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Language modeling is a subfield of NLP that involves the creation of statistical/deep learning models for predicting the likelihood of a sequence of tokens in a specified vocabulary (a limited and known set of tokens). There are generally two kinds of language modeling tasks out there: autoencoding tasks and autoregressive tasks." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Large language models (LLMs) are AI models that are usually (but not necessarily) derived from the Transformer architecture and are designed to understand and generate human language, code, and much more. These models are trained on vast amounts of text data, allowing them to capture the complexities and nuances of human language. LLMs can perform a wide range of language-related tasks, from simple text classification to text generation, with high accuracy, fluency, and style." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"LLMs encode information directly into their parameters via pre-training and fine-tuning, but keeping them up to date with new information is tricky. We either have to further fine-tune the model on new data or run the pre-training steps again from scratch." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Prompt engineering involves crafting inputs to LLMs (prompts) that effectively communicate the task at hand to the LLM, leading it to return accurate and useful outputs. Prompt engineering is a skill that requires an understanding of the nuances of language, the specific domain being worked on, and the capabilities and limitations of the LLM being used." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Specific word choices in our prompts can greatly influence the output of the model. Even small changes to the prompt can lead to vastly different results. For example, adding or removing a single word can cause the LLM to shift its focus or change its interpretation of the task. In some cases, this may result in incorrect or irrelevant responses; in other cases, it may produce the exact output desired." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Text embeddings are a way to represent words or phrases as machine-readable numerical vectors in a multidimensional space, generally based on their contextual meaning. The idea is that if two phrases are similar, then the vectors that represent those phrases should be close together by some measure (like Euclidean distance), and vice versa." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"The idea behind transfer learning is that the pre-trained model has already learned a lot of information about the language and relationships between words, and this information can be used as a starting point to improve performance on a new task. Transfer learning allows LLMs to be fine-tuned for specific tasks with much smaller amounts of task-specific data than would be required if the model were trained from scratch. This greatly reduces the amount of time and resources needed to train LLMs." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024) 

"Transfer learning is a technique used in machine learning to leverage the knowledge gained from one task to improve performance on another related task. Transfer learning for LLMs involves taking an LLM that has been pre-trained on one corpus of text data and then fine-tuning it for a specific 'downstream' task, such as text classification or text generation, by updating themodel’s parameters with task-specific data." (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

"Transfer learning is a technique that leverages pre-trained models to build upon existing knowledge for new tasks or domains. In the case of LLMs, this involves utilizing the pre-training to transfer general language understanding, including grammar and general knowledge, to particular domain-specific tasks. However, the pre-training may not be sufficient to understand the nuances of certain closed or specialized topics [...]" (Sinan Ozdemir, "Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs", 2024)

📉Graphical Representation: Interpretation (Just the Quotes)

"To a very striking degree our culture has become a Statistical culture. Even a person who may never have heard of an index number is affected [...] by [...] of those index numbers which describe the cost of living. It is impossible to understand Psychology, Sociology, Economics, Finance or a Physical Science without some general idea of the meaning of an average, of variation, of concomitance, of sampling, of how to interpret charts and tables." (Carrol D Wright, 1887)

"Except in some of the simplest cases where the line connecting the plotted data is straight, it will generally be possible to fit a number of very different forms of equation to the same curve, none of them exactly, but all agreeing with the original about equally well. Interpolation on any of these curves will usually give results within the desired degree of accuracy. The greatest caution, however, should be observed in exterpolation, or the use of the equation outside of the limits of the observations." (John B Peddle, "The Construction of Graphical Charts", 1910)

"Most authors would greatly resent it if they were told that their writings contained great exaggerations, yet many of these same authors permit their work to be illustrated with charts which are so arranged as to cause an erroneous interpretation. If authors and editors will inspect their charts as carefully as they revise their written matter, we shall have, in a very short time, a standard of reliability in charts and illustrations just as high as now found in the average printed page." (Willard C Brinton, "Graphic Methods for Presenting Facts", 1919)

"The principles of charting and curve plotting are not at all complex, and it is surprising that many business men dodge the simplest charts as though they involved higher mathematics or contained some sort of black magic. [...] The trouble at present is that there are no standards by which graphic presentations can be prepared in accordance with definite rules so that their interpretation by the reader may be both rapid and accurate. It is certain that there will evolve for methods of graphic presentation a few useful and definite rules which will correspond with the rules of grammar for the spoken and written language." (Willard C Brinton, "Graphic Methods for Presenting Facts", 1919)

"Graphic methods are very commonly used in business correlation problems. On the whole, carefully handled and skillfully interpreted graphs have certain advantages over mathematical methods of determining correlation in the usual business problems. The elements of judgment and special knowledge of conditions can be more easily introduced in studying correlation graphically. Mathematical correlation is often much too rigid for the data at hand." (John R Riggleman & Ira N Frisbee, "Business Statistics", 1938)

"The use of two or more amount scales for comparisons of series in which the units are unlike and, therefore, not comparable [...] generally results in an ineffective and confusing presentation which is difficult to understand and to interpret. Comparisons of this nature can be much more clearly shown by reducing the components to a comparable basis as percentages or index numbers." (Rufus R Lutz, "Graphic Presentation Simplified", 1949)

"Charts and graphs represent an extremely useful and flexible medium for explaining, interpreting, and analyzing numerical facts largely by means of points, lines, areas, and other geometric forms and symbols. They make possible the presentation of quantitative data in a simple, clear, and effective manner and facilitate comparison of values, trends, and relationships. Moreover, charts and graphs possess certain qualities and values lacking in textual and tabular forms of presentation." (Calvin F Schmid, "Handbook of Graphic Presentation", 1954)

"In line charts the grid structure plays a controlling role in interpreting facts. The number of vertical rulings should be sufficient to indicate the frequency of the plottings, facilitate the reading of the time values on the horizontal scale. and indicate the interval or subdivision of time." (Anna C Rogers, "Graphic Charts Handbook", 1961)

"The logarithmic transformation serves several purposes:" (1) The resulting regression coefficients sometimes have a more useful theoretical interpretation compared to a regression based on unlogged variables." (2) Badly skewed distributions - in which many of the observations are clustered together combined with a few outlying values on the scale of measurement - are transformed by taking the logarithm of the measurements so that the clustered values are spread out and the large values pulled in more toward the middle of the distribution." (3) Some of the assumptions underlying the regression model and the associated significance tests are better met when the logarithm of the measured variables is taken." (Edward R Tufte, "Data Analysis for Politics and Policy", 1974)

"The time-series plot is the most frequently used form of graphic design. With one dimension marching along to the regular rhythm of seconds, minutes, hours, days, weeks, months, years, centuries, or millennia, the natural ordering of the time scale gives this design a strength and efficiency of interpretation found in no other graphic arrangement." (Edward R Tufte, "The Visual Display of Quantitative Information", 1983)

"The bar of a bar chart has two aspects that can be used to visually decode quantitative information-size" (length and area) and the relative position of the end of the bar along the common scale. The changing sizes of the bars is an important and imposing visual factor; thus it is important that size encode something meaningful. The sizes of bars encode the magnitudes of deviations from the baseline. If the deviations have no important interpretation, the changing sizes are wasted energy and even have the potential to mislead." (William S. Cleveland, "Graphical Methods for Data Presentation: Full Scale Breaks, Dot Charts, and Multibased Logging", The American Statistician Vol. 38" (4) 1984)

"Good graphics can be spoiled by bad annotation. Labels must always be subservient to the information to be conveyed, and legibility should never be sacrificed for style. All the information on the sheet should be easy to read, and more important, easy to interpret. The priorities of the information should be clearly expressed by the use of differing sizes, weights and character of letters." (Bruce Robertson, "How to Draw Charts & Diagrams", 1988)

"Statistics is a tool. In experimental science you plan and carry out experiments, and then analyse and interpret the results. To do this you use statistical arguments and calculations. Like any other tool - an oscilloscope, for example, or a spectrometer, or even a humble spanner - you can use it delicately or clumsily, skillfully or ineptly. The more you know about it and understand how it works, the better you will be able to use it and the more useful it will be." (Roger J Barlow, "Statistics: A guide to the use of statistical methods in the physical sciences", 1989)

"The fact that map is a fuzzy and radial, rather than a precisely defined, category is important because what a viewer interprets a display to be will influence her expectations about the display and how she interacts with it." (Alan MacEachren, "How Maps Work: Representation, Visualization, and Design", 1995)

"Graphic misrepresentation is a frequent misuse in presentations to the nonprofessional. The granddaddy of all graphical offenses is to omit the zero on the vertical axis. As a consequence, the chart is often interpreted as if its bottom axis were zero, even though it may be far removed. This can lead to attention-getting headlines about 'a soar' or 'a dramatic rise" (or fall)'. A modest, and possibly insignificant, change is amplified into a disastrous or inspirational trend." (Herbert F Spirer et al, "Misused Statistics" 2nd Ed, 1998)

"Without meaningful data there can be no meaningful analysis. The interpretation of any data set must be based upon the context of those data. Unfortunately, much of the data reported to executives today are aggregated and summed over so many different operating units and processes that they cannot be said to have any context except a historical one - they were all collected during the same time period. While this may be rational with monetary figures, it can be devastating to other types of data." (Donald J Wheeler, "Understanding Variation: The Key to Managing Chaos" 2nd Ed., 2000)

"The acquisition of information is a flow from noise to order - a process converting entropy to redundancy. During this process, the amount of information decreases but is compensated by constant re-coding. In the recoding the amount of information per unit increases by means of a new symbol which represents the total amount of the old. The maturing thus implies information condensation. Simultaneously, the redundance decreases, which render the information more difficult to interpret." (Lars Skyttner, "General Systems Theory: Ideas and Applications", 2001)

"Every statistical analysis is an interpretation of the data, and missingness affects the interpretation. The challenge is that when the reasons for the missingness cannot be determined there is basically no way to make appropriate statistical adjustments. Sensitivity analyses are designed to model and explore a reasonable range of explanations in order to assess the robustness of the results." (Gerald van Belle, "Statistical Rules of Thumb", 2002)

"Choose scales wisely, as they have a profound influence on the interpretation of graphs. Not all scales require that zero be included, but bar graphs and other graphs where area is judged do require it." (Naomi B Robbins, "Creating More effective Graphs", 2005)

"Data often arrive in raw form, as long lists of numbers. In this case your job is to summarize the data in a way that captures its essence and conveys its meaning. This can be done numerically, with measures such as the average and standard deviation, or graphically. At other times you find data already in summarized form; in this case you must understand what the summary is telling, and what it is not telling, and then interpret the information for your readers or viewers." (Charles Livingston & Paul Voakes, "Working with Numbers and Statistics: A handbook for journalists", 2005)

"The visual representation of a scale - an axis with ticks - looks like a ladder. Scales are the types of functions we use to map varsets to dimensions. At first glance, it would seem that constructing a scale is simply a matter of selecting a range for our numbers and intervals to mark ticks. There is more involved, however. Scales measure the contents of a frame. They determine how we perceive the size, shape, and location of graphics. Choosing a scale" (even a default decimal interval scale) requires us to think about what we are measuring and the meaning of our measurements. Ultimately, that choice determines how we interpret a graphic." (Leland Wilkinson, "The Grammar of Graphics" 2nd Ed., 2005)

"Generally pie charts are to be avoided, as they can be difficult to interpret particularly when the number of categories is greater than five. Small proportions can be very hard to discern […] In addition, unless the percentages in each of the individual categories are given as numbers it can be much more diff i cult to estimate them from a pie chart than from a bar chart […]." (Jenny Freeman et al, "How to Display Data", 2008)

"Color can tell us where to look, what to compare and contrast, and it can give us a visual scale of measure. Because color can be so effective, it is often used for multiple purposes in the same graphic - which can create graphics that are dazzling but difficult to interpret. Separating the roles that color can play makes it easier to apply color specifically for encouraging different kinds of visual thinking. [...] Choose colors to draw attention, to label, to show relationships" (compare and contrast), or to indicate a visual scale of measure." (Felice C Frankel & Angela H DePace, "Visual Strategies", 2012)

"Done well, annotation can help explain and facilitate the viewing and interpretive experience. It is the challenge of creating a layer of user assistance and user insight: how can you maximize the clarity and value of engaging with this visualization design?" (Andy Kirk, "Data Visualization: A successful design process", 2012)

"The big problems with statistics, say its best practitioners, have little to do with computations and formulas. They have to do with judgment - how to design a study, how to conduct it, then how to analyze and interpret the results. Journalists reporting on statistics have many chances to do harm by shaky reporting, and so are also called on to make sophisticated judgments. How, then, can we tell which studies seem credible, which we should report?" (Victor Cohn & Lewis Cope, "News & Numbers: A writer’s guide to statistics" 3rd Ed, 2012)

"The main difference between journalistic and artistic infographics is that, while in the first information must try to be as objective as possible, the second supports a complete subjectivity and can lend itself to different interpretations, all of them valid. That’s the concept of 'subjective infographic', something apparently contradictory." (Jaime Serra, [interviewed] 2012)

"The universal intelligibility of a pictogram is inversely proportional to its complexity and potential for interpretive ambiguity." (Joel Katz, "Designing Information: Human factors and common sense in information design", 2012)

"While the information is of the utmost importance when it comes to soundness, what is done with the information - essentially, how it is designed - is also important. With this in mind, there are two things to consider: format and design quality. If an inappropriate format is used, the outcome will be inferior. Similarly, if the design misrepresents or skews the information deliberately or due to user error, or if the design is inappropriate given the subject matter, it cannot be considered high quality, no matter how aesthetically appealing it appears at first glance." (Jason Lankow et al, "Infographics: The power of visual storytelling", 2012)

"Visualization can be appreciated purely from an aesthetic point of view, but it’s most interesting when it’s about data that’s worth looking at. That’s why you start with data, explore it, and then show results rather than start with a visual and try to squeeze a dataset into it. It’s like trying to use a hammer to bang in a bunch of screws. […] Aesthetics isn’t just a shiny veneer that you slap on at the last minute. It represents the thought you put into a visualization, which is tightly coupled with clarity and affects interpretation." (Nathan Yau, "Data Points: Visualization That Means Something", 2013)

"Graphs can help us interpret data and draw inferences. They can help us see tendencies, patterns, trends, and relationships. A picture can be worth not only a thousand words, but a thousand numbers. However, a graph is essentially descriptive - a picture meant to tell a story. As with any story, bumblers may mangle the punch line and the dishonest may lie." (Gary Smith, "Standard Deviations", 2014)

"Commonly, data do not make a clear and unambiguous statement about our world, often requiring tools and methods to provide such clarity. These methods, called statistical data analysis, involve collecting, manipulating, analyzing, interpreting, and presenting data in a form that can be used, understood, and communicated to others." (Forrest W Young et al, "Visual Statistics: Seeing data with dynamic interactive graphics", 2016)

"Confirmation bias can affect nearly every aspect of the way you look at data, from sampling and observation to forecasting - so it’s something to keep in mind anytime you’re interpreting data. When it comes to correlation versus causation, confirmation bias is one reason that some people ignore omitted variables - because they’re making the jump from correlation to causation based on preconceptions, not the actual evidence." (John H Johnson & Mike Gluck, "Everydata: The misinformation hidden in the little data you consume every day", 2016)

"The main differences between Bayesian networks and causal diagrams lie in how they are constructed and the uses to which they are put. A Bayesian network is literally nothing more than a compact representation of a huge probability table. The arrows mean only that the probabilities of child nodes are related to the values of parent nodes by a certain formula" (the conditional probability tables) and that this relation is sufficient. That is, knowing additional ancestors of the child will not change the formula. Likewise, a missing arrow between any two nodes means that they are independent, once we know the values of their parents. [...] If, however, the same diagram has been constructed as a causal diagram, then both the thinking that goes into the construction and the interpretation of the final diagram change." (Judea Pearl & Dana Mackenzie, "The Book of Why: The new science of cause and effect", 2018)

"Too many simultaneous encodings will be overwhelming to the reader; colors must be easily distinguishable, and of a small enough number that the reader can interpret them. " (Danyel Fisher & Miriah Meyer, "Making Data Visual", 2018)

"As a first principle, any visualization should convey its information quickly and easily, and with minimal scope for misunderstanding. Unnecessary visual clutter makes more work for the reader’s brain to do, slows down the understanding" (at which point they may give up) and may even allow some incorrect interpretations to creep in." (Robert Grant, "Data Visualization: Charts, Maps and Interactive Graphics", 2019)

"Even though data is being thrust on more people, it doesn’t mean everyone is prepared to consume and use it effectively. As our dependence on data for guidance and insights increases, the need for greater data literacy also grows. If literacy is defined as the ability to read and write, data literacy can be defined as the ability to understand and communicate data. Today’s advanced data tools can offer unparalleled insights, but they require capable operators who can understand and interpret data." (Brent Dykes, "Effective Data Storytelling: How to Drive Change with Data, Narrative and Visuals", 2019)

"When dealing with meaningful visual representation, aspects of a representation's meaning can be altered by modifying its visual characteristics; these characteristics are extensively explored in semiotics, the study of signs and symbols and their use or interpretation." (Vidya Setlur & Bridget Cogley, "Functional Aesthetics for data visualization", 2022)

"As beautiful as data can be, it’s not an al fresco painting that should be open to interpretation from anyone who walks by its section of the museum. Make bold, smart color choices that leave no doubt what the purpose of the data is." (Kate Strachnyi, "ColorWise: A Data Storyteller’s Guide to the Intentional Use of Color", 2023)

"But rules are open to interpretation and sometimes arbitrary or even counterproductive when it comes to producing good visualizations. They’re for responding to context, not setting it. Instead of worrying about whether a chart is right" or "wrong", focus on whether it’s good." (Scott Berinato, "Good Charts : the HBR guide to making smarter, more persuasive data visualizations", 2023)

"Charts used to confirm are less formal, and designed well enough to be interpreted, but they don’t always have to be presentation worthy. […] Or maybe you don’t know what you’re looking for […] This is exploratory work - rougher still in design, usually iterative, sometimes interactive. Most of us don’t do as much exploratory work as we do declarative and confirmatory; we should do more. It’s a kind of data brainstorming." (Scott Berinato, "Good Charts : the HBR guide to making smarter, more persuasive data visualizations", 2023)

See also: Misinterpretation 

03 October 2025

♟️Strategic Management: Context (Just the Quotes)

"Leadership is always dependent upon the context, but the context is established by the relationships." (Margaret J Wheatley, "Leadership and the New Science: Discovering Order in a Chaotic World", 1992)

"A process perspective sees not individual tasks in isolation, but the entire collection of tasks that contribute to a desired outcome. Narrow points of view are useless in a process context. It just won't do for each person to be concerned exclusively with his or her own limited responsibility, no matter how well these responsibilities are met. When that occurs, the inevitable result is working at cross–purpose, misunderstanding, and the optimization of the part at the expense of the whole. Process work requires that everyone involved be directed toward a common goal; otherwise, conflicting objectives and parochial agendas impair the effort. " (James A Champy & Michael M Hammer, "Reengineering the Corporation", 1993)

"It is within the purview of each context to define its own rules and techniques for deciding how the object-oriented mechanisms and principles are to be managed. And while the manager of a large information system might wish to impose some rules based on philosophical grounds, from the perspective of enterprise architecture, there is no reason to make decisions at this level. Each context should define its own objectivity." (Rob Mattison & Michael J Sipolt, "The object-oriented enterprise: making corporate information systems work", 1994)

"Senior management needed to step in and make some very tough moves. [...] we also realized then that there must be a better way to formulate strategy. What we needed was a balanced interaction between the middle managers, with their deep knowledge but narrow focus, and senior management, whose larger perspective could set a context." (Andrew Grove, Only the Paranoid Survive, 1998)

"[...] information feedback about the real world not only alters our decisions within the context of existing frames and decision rules but also feeds back to alter our mental models. As our mental models change we change the structure of our systems, creating different decision rules and new strategies. The same information, processed and interpreted by a different decision rule, now yields a different decision. Altering the structure of our systems then alters their patterns of behavior. The development of systems thinking is a double-loop learning process in which we replace a reductionist, narrow, short-run, static view of the world with a holistic, broad, long-term, dynamic view and then redesign our policies and institutions accordingly." (John D Sterman, "Business dynamics: Systems thinking and modeling for a complex world", 2000)

"Deep change in mental models, or double-loop learning, arises when evidence not only alters our decisions within the context of existing frames, but also feeds back to alter our mental models. As our mental models change, we change the structure of our systems, creating different decision rules and new strategies. The same information, interpreted by a different model, now yields a different decision. Systems thinking is an iterative learning process in which we replace a reductionist, narrow, short-run, static view of the world with a holistic, broad, long-term, dynamic view, reinventing our policies and institutions accordingly." (John D Sterman, "Learning in and about complex systems", Systems Thinking Vol. 3, 2003)

"Strategic planning can generally be thought of as a three stage process:" (i) carrying out analyses of the organisation’s external context and of its internal conditions and the resources at its disposal" (ii) identifying and developing different strategic choices" (scenarios) and evaluating their attractiveness to the organisation" (iii) implementing the preferred strategy." (Roger Jones & Neil Murra, "Change, Strategy and Projects at Work", 2008)

"It is hard to avoid the conclusion that while strategy is undoubtedly a good thing to have, it is a hard thing to get right. […] So what turns something that is not quite strategy into strategy is a sense of actual or imminent instability, a changing context that induces a sense of conflict. Strategy therefore starts with an existing state of affairs and only gains meaning by an awareness of how, for better or worse, it could be different." (Lawrence Freedman, “Strategy: A history”, 2013)

"Change strategy is, by this definition, the way a business (1) manages the portfolio of change to make sure that the parts deliver the whole business strategy, (2) creates the context for change, and (3) monitors change risk and change performance across the entire business." (Paul Gibbons, "The Science of Successful Organizational Change",  2015)

"In the context of an organization, to have autonomy is to be empowered, not just feel empowered. […] But it does not mean being a lone wolf or being siloed or cut off from the rest of the organization." (Sriram Narayan, "Agile IT Organization Design: For Digital Transformation and Continuous Delivery", 2015)

"However, in a highly collaborative context filled with uncertainty over outcomes, relying on the org chart as a principal mechanism of splitting the work to be done leads to unrealistic expectations." (Matthew Skelton & Manuel Pais, "Team Topologies: Organizing Business and Technology Teams for Fast Flow", 2019)

"Organizations that rely too heavily on org charts and matrixes to split and control work often fail to create the necessary conditions to embrace innovation while still delivering at a fast pace. In order to succeed at that, organizations need stable teams and effective team patterns and interactions. They need to invest in empowered, skilled teams as the foundation for agility and adaptability. To stay alive in ever more competitive markets, organizations need teams and people who are able to sense when context changes and evolve accordingly." (Matthew Skelton & Manuel Pais, "Team Topologies: Organizing Business and Technology Teams for Fast Flow", 2019)

"The second rule of communication is to know what you want to achieve. Hopefully the aim is to encourage open debate, and informed decision-making. But there seems no harm in repeating yet again that numbers do not speak for themselves; the context, language and graphic design all contribute to the way the communication is received. We have to acknowledge we are telling a story, and it is inevitable that people will make comparisons and judgements, no matter how much we only want to inform and not persuade. All we can do is try to pre-empt inappropriate gut reactions by design or warning." (David Spiegelhalter, "The Art of Statistics: Learning from Data", 2019)

"Data architects often turn to graphs because they are flexible enough to accommodate multiple heterogeneous representations of the same entities as described by each of the source systems. With a graph, it is possible to associate underlying records incrementally as data is discovered. There is no need for big, up-front design, which serves only to hamper business agility. This is important because data fabric integration is not a one-off effort and a graph model remains flexible over the lifetime of the data domains." (Jesús Barrasa et al, "Knowledge Graphs: Data in Context for Responsive Businesses", 2021)

See also the quotes in Graphical Representation, Data Science, Software Engineering 


⛩️Jeremy C Morgan - Collected Quotes

"Another problem that can be confusing is that LLMs seldom put out the same thing twice. [...] Traditional databases are straightforward - you ask for something specific, and you get back exactly what was stored. Search engines work similarly, finding existing information. LLMs work differently. They analyze massive amounts of text data to understand statistical patterns in language. The model processes information through multiple layers, each capturing different aspects - from simple word patterns to complex relationships between ideas." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"As the old saying goes, 'Garbage in, garbage out.' Generative AI tools are only as good as the data they’re trained on. They need high-quality, diverse, and extensive datasets to create great code as output. Unfortunately, you have no control over this input. You must trust the creators behind the product are using the best code possible for the corpus, or data used for training. Researching the tools lets you learn how each tool gathers data and decide based on that." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Context is crucial for how language models understand and generate code. The model processes your input by analyzing relationships between different parts of the code and documentation to determine meaning and intent. [...] The model evaluates context by calculating mathematical relationships between elements in your input. However, it may miss important domain knowledge, coding standards, or architectural patterns that experienced developers understand implicitly." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Context manipulation involves setting up an optimal environment within the prompt to help a model generate accurate and relevant responses. By controlling the context in which the model operates, users can influence the output’s quality, consistency, and specificity, especially in tasks requiring clarity and precision. Context manipulation involves priming the model with relevant information, presenting examples within the prompt, and utilizing system messages to maintain the desired behavior." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Creating software is like building a house. The foundation is the first step; you can’t start without it. Building the rest of the house will be a struggle if the foundation doesn’t meet the requirements. If you don’t have the time to be thoughtful and do it right, you won’t have the time to fix it later." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Design is key in software development, yet programmers often rush it. I’ve done this, too. Taking time to plan an app’s architecture leads to happy users and lower maintenance costs." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"First, training data is created by taking existing source code in many languages and feeding it into a model. This model is evaluated and has layers that look for specific things. One layer checks the type of syntax. Another checks for keywords and how they’re used. The final layer determines whether :this is most likely to be correct and functional source code'. There is a vast array of machine learning algorithms that use the model to run through these layers and draw conclusions. Then, the AI produces output that is a prediction of what the new software should look like. The tool says, 'based on what I know, this is the most statistically likely code you’re looking for'. Then you, the programmer, reach the evaluation point. If you give it a thumbs up, the feedback returns to the model (in many cases, not always) as a correct prediction. If you give it a thumbs  down and reject it, that is also tracked. With this continuous feedback, the tool learns what good code should look like." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Generative AI is a kind of statistical mimicry of the real world, where algorithms learn patterns and try to create things." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025) 

"Generative AI for coding and language tools is based on the LLM concept. A large language model is a type of neural network that processes and generates text in a humanlike way. It does this by being trained on a massive dataset of text, which allows it to learn human language patterns, as described previously. It lets LLMs translate, write, and answer questions with text. LLMs can contain natural language, source code, and  more." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Generative AI tools for coding are sometimes inaccurate. They can produce results that look good but are wrong. This is common with LLMs. They can write code or chat like a person. And sometimes, they share information that’s just plain wrong. Not just a bit off, but totally backwards or nonsense. And they say it so confidently! We call this 'hallucinating', which is a funny term, but it makes sense." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Great planning and initial setup are crucial for a successful project. Having an idea and immediately cracking open an IDE is rarely a good approach. Many developers find the planning process boring and tiresome. Generative AI tools make these tasks more efficient, accurate, and enjoyable. If you don’t like planning and setup, they can make the process smoother and faster. If you enjoy planning, you may find these tools make it even more fun." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"In machine learning, 'training' is when we teach models to understand language and code by analyzing massive amounts of data. During training, the model learns statistical patterns - how often certain words appear together, what code structures are common, andhow different parts of text relate to each other. The quality of training data directly affects how well the model performs." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"It’s a pattern-matching predictor, not a knowledge retriever. It’s great at what it does, but since it works by prediction, it can predict nonsense just as confidently as it predicts facts. So, when you use these tools, be curious and skeptical! Don’t just accept what it gives you. Ask, 'Is this just a likely sounding pattern, or is it actually right?' Understanding how generative AI works helps you know when to trust it and when to double-check. Keeping this skepticism in mind is crucial when working with these tools to produce code." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"It’s essentially a sophisticated prediction system. Instead of looking up stored answers, an LLM calculates probabilities to determine what text should come next. While these predictions are often accurate, they’re still predictions - which is why it’s crucial to verify any code or factual claims the model generates. This probabilistic nature makes LLMs powerful tools for generating text and code but also means they can make mistakes, even when seeming very confident. Understanding this helps set realistic expectations about what these tools can and cannot do reliably." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Professional software developers must know how to use AI tools strategically.  This involves mastering advanced prompting techniques and working with AI across various files and modules. We must also learn how to manage context wisely. This is a new concept for most, and it is vitally important with code generation. AI-generated code requires the same scrutiny and quality checks as any code written by humans." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Recursive prompting is a systematic approach to achieving higher-quality outputs through iterative refinement. Rather than accepting the first response, it uses a step-by-step process of evaluation and improvement, making it particularly valuable for complex tasks such as code development, writing, and problem-solving. Our example demonstrated how a basic factorial function evolved from a simple implementation to a robust, optimized solution through multiple iterations of targeted refinements." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Stubbing is a fundamental technique in software development where simplified placeholder versions of code components are created before implementing the full functionality. It is like building the frame of a house before adding the walls, plumbing, and electrical systems. The stubs provide a way to test the overall structure and flow of an application early on, without getting bogged down in the details of individual components." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Testing is like an investment. You spend time building tests now to strengthen your product. This approach saves time and frustration by catching problems early. As your software evolves, each passing test reaffirms that your product still works properly. However, in today’s fast-paced development world, testing often falls behind. This is where generative AI can aid developers as a valuable resource." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"Unlike traditional code completion, which operates on predefined rules, generative AI creates a continuous improvement cycle, which includes the following five basic steps: (1) Developer input: You provide source code, comments, or natural language requirements. (2) Context analysis: The model analyzes patterns in your existingcode and requirements. (3) Prediction: Based on training data and your specific context, the model generates probable code. (4) Developer feedback: You accept, modify, or reject suggestions. (5) Model adaptation: The system incorporates your feedback to improve future suggestions." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"This ability to zero in on important code is why modern AI coding assistants can offer meaningful suggestions for your specific needs. It’s similar to how skilled developers know which code sections affect a new implementation the most. Each transformer layer learns about various code patterns, ranging from syntax validation to understanding the relationships among functions, classes, and modules." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

"When building new software, the clarity and precision of project requirements are pivotal. Getting the requirements right is critical as they often determine whether a software project meets its deadlines or faces significant delays. Requirements always change. Also, they’re frequently misinterpreted because we tend to grab the requirements and get to work. There is a lot of room for error here, so if we rush, we can get in trouble. Because generative AI tools make the requirements gathering process easier and faster, we can spend more time working on those requirements and getting them right." (Jeremy C Morgan, "Coding with AI: Examples in Python", 2025)

🤖〽️Prompt Engineering: Copilot Unabridged (Part 103: Building the Bedrock - What’s Needed for the Future of AI)

Prompt Engineering Series

Prompt: "write a post of 600 words on what is needed for creating a foundation for the further development of AI"

Introduction

Artificial Intelligence (AI) has rapidly evolved from a niche academic pursuit into a transformative force reshaping industries, societies, and everyday life. Yet, as AI systems grow more powerful and pervasive, the urgency to establish a robust foundation for their continued development becomes paramount. Much like thermodynamics emerged to explain the steam engine, we now need a scientific, ethical, and infrastructural framework to guide the future of intelligent systems.

1. Theoretical Understanding of Intelligence

At the heart of AI’s future lies a fundamental question: what is intelligence, and how can it be formalized? Despite the success of deep learning, we still lack a comprehensive theory that explains why certain architectures work, how generalization occurs, and what the limits of learning are. Researchers like Yann LeCun have called for an equivalent of thermodynamics for intelligence - a set of principles that can explain and predict the behavior of intelligent systems. This requires interdisciplinary collaboration across mathematics, neuroscience, cognitive science, and computer science to build a unified theory of learning and reasoning.

2. Robust and Transparent Infrastructure

AI development today is often fragmented, with tools, frameworks, and models scattered across platforms. To scale AI responsibly, we need standardized, interoperable infrastructure that supports experimentation and enterprise deployment. Initiatives like the Microsoft Agent Framework [1] aim to unify open-source orchestration with enterprise-grade stability, enabling developers to build multi-agent systems that are secure, observable, and scalable. Such frameworks are essential for moving from prototype to production without sacrificing trust or performance.

3. Trustworthy and Ethical Design

As AI systems increasingly influence decisions in healthcare, finance, and law, trustworthiness becomes non-negotiable. This includes:

  • Fairness: Ensuring models do not perpetuate bias or discrimination.
  • Explainability: Making decisions interpretable to users and regulators.
  • Safety: Preventing harmful outputs or unintended consequences.
  • Privacy: Respecting user data and complying with regulations.

The Fraunhofer IAIS White Paper [2] on Trustworthy AI outlines the importance of certified testing methods, ethical design principles, and human-centered development. Embedding these values into the foundation of AI ensures that innovation does not come at the cost of societal harm.

4. Global Collaboration and Regulation

AI is a global endeavor, but its governance is often fragmented. The European Union’s AI Act, for example, sets a precedent for regulating high-risk applications, but international alignment is still lacking. To create a stable foundation, nations must collaborate on shared standards, data governance, and ethical norms. This includes open dialogue between governments, academia, industry, and civil society to ensure that AI development reflects diverse values and priorities.

5. Investment in Research and Education

The future of AI depends on a pipeline of skilled researchers, engineers, and ethicists. Governments and institutions must invest in:

  • Basic research into learning theory, symbolic reasoning, and neuromorphic computing.
  • Applied research for domain-specific AI in climate science, medicine, and education.
  • Education and training programs to democratize AI literacy and empower the next generation.

Initiatives like the Helmholtz Foundation Model Initiative [3] exemplify how strategic funding and interdisciplinary collaboration can accelerate AI innovation while addressing societal challenges.

Conclusion

Creating a foundation for the further development of AI is not just a technical challenge - it’s a philosophical, ethical, and societal one. It requires a shift from building tools to building understanding, from isolated innovation to collaborative stewardship. If we succeed, AI can become not just a powerful technology, but a trusted partner in shaping a better future.

Just try the prompt on Copilot or your favorite AI-powered assistant! Have you got a different/similar result? How big or important is the difference? Any other thoughts?
Just share the link to the post with me and I'll add it to this post as a resource!

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

References
[1] Microsoft (2025) Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps [link]
[2] Sebastian Schmidt et al (2024) Developing trustworthy AI applications with foundation models [link]
[3] Helmholtz AI (2025) Helmholtz Foundation Model Initiative
