Showing posts with label systems. Show all posts

18 December 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 121: Human–Machine Ecologies - A 50‑Year Evolution, Decade by Decade)

 

Prompt Engineering Series

Prompt: "write a blog post of 600 words on the human-machine ecologies and their evolution overnext 50 years split by decade"

Introduction

Human–machine ecologies - our intertwined systems of people, technologies, environments, and data - are entering a period of rapid transformation. Over the next half‑century, these ecologies will evolve from today’s loosely connected networks into deeply interdependent, adaptive ecosystems. Looking decade by decade helps reveal how this evolution might unfold and what it could mean for society, culture, and the planet.

2025–2035: Foundations of Ambient Intelligence

The next decade will be defined by the normalization of ambient, always‑present computational systems. Sensors, AI models, and connected devices will fade into the background of everyday life, forming the early scaffolding of human–machine ecologies.

Homes, workplaces, and public spaces will become context‑aware environments that adjust to human needs without explicit commands. Energy systems will self‑optimize, transportation networks will coordinate autonomously, and personal devices will collaborate rather than compete for attention.

This period will also bring the first major societal debates about autonomy, privacy, and data stewardship. As machines become more embedded in daily life, people will begin to question not just what these systems do, but how they shape behavior, choices, and relationships. Governance frameworks will emerge, though often reactively, as societies grapple with the implications of pervasive machine agency.

2035–2045: Cognitive Symbiosis and Shared Intelligence

By the mid‑2030s, human–machine ecologies will shift from environmental intelligence to cognitive partnership. AI systems will increasingly function as co‑thinkers - augmenting memory, creativity, and decision‑making.

Interfaces will evolve beyond screens and voice. Neural‑signal‑based interaction, gesture‑driven control, and adaptive conversational agents will blur the line between internal thought and external computation. People will begin to treat machine intelligence as an extension of their own cognitive toolkit.

At the societal level, organizations will restructure around hybrid teams of humans and AI systems. Knowledge work will become more fluid, with machines handling pattern recognition and humans focusing on interpretation, ethics, and meaning‑making.

This decade will also see the rise of 'ecology designers' - professionals who shape the interactions between humans, machines, and environments. Their work will be less about building tools and more about cultivating balanced, resilient ecosystems.

2045–2055: Ecological Integration and Adaptive Cities

As human–machine ecologies mature, they will expand from personal and organizational contexts into full urban and planetary systems. Cities will operate as adaptive organisms, using real‑time data to regulate energy, transportation, waste, and public health.

Infrastructure will become self‑healing and self‑optimizing. Buildings will negotiate energy loads with one another, autonomous vehicles will coordinate traffic flow dynamically, and environmental sensors will guide urban planning with unprecedented precision.

Human behavior will feed directly into these systems, creating feedback loops that allow cities to evolve continuously. The challenge will be ensuring that these ecologies remain inclusive and equitable. Without careful governance, adaptive systems could reinforce existing inequalities or create new forms of digital exclusion.

Culturally, machines will become collaborators in art, science, and design. Hybrid creativity - where humans and AI co‑produce ideas - will become a mainstream mode of expression.

2055–2075: Co‑Evolution and Ecological Maturity

By the final decades of this 50‑year arc, human–machine ecologies will reach a stage of co‑evolution. Machines will not simply adapt to humans; humans will adapt to machine‑mediated environments in return.

Education will shift toward ecological literacy - understanding how to navigate, shape, and sustain complex human–machine systems. Social norms will evolve around shared agency, where responsibility is distributed across humans, machines, and institutions.

At this stage, the most successful societies will be those that embrace diversity - of people, cultures, and machine systems - and allow for continuous adaptation rather than rigid control. Human–machine ecologies will feel less like technologies and more like living environments we inhabit, influence, and co‑create.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

13 December 2025

🕸Systems Engineering: Self-Similarity (Just the Quotes)

"[…] a pink (or white, or brown) noise is the very paradigm of a statistically self-similar process. Phenomena whose power spectra are homogeneous power functions lack inherent time and frequency scales; they are scale-free. There is no characteristic time or frequency -whatever happens in one time or frequency range happens on all time or frequency scales. If such noises are recorded on magnetic tape and played back at various speeds, they sound the same […]" (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

"All physical objects that are 'self-similar' have limited self-similarity - just as there are no perfectly periodic functions, in the mathematical sense, in the real world: most oscillations have a beginning and an end (with the possible exception of our universe, if it is closed and begins a new life cycle after every 'big crunch' […]. Nevertheless, self-similarity is a useful  abstraction, just as periodicity is one of the most useful concepts in the sciences, any finite extent notwithstanding." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

"Percolation is a widespread paradigm. Percolation theory can therefore illuminate a great many seemingly diverse situations. Because of its basically geometric character, it facilitates the analysis of intricate patterns and textures without needless physical complications. And the self-similarity that prevails at critical points permits profitably mining the connection with scaling and fractals." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

"[…] power laws, with integer or fractional exponents, are one of the most fertile fields and abundant sources of self-similarity." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

"The only prerequisite for a self-similar law to prevail in a given size range is the absence of an inherent size scale." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

"The unifying concept underlying fractals, chaos, and power laws is self-similarity. Self-similarity, or invariance against changes in scale or size, is an attribute of many laws of nature and innumerable phenomena in the world around us. Self-similarity is, in fact, one of the decisive symmetries that shape our universe and our efforts to comprehend it." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

"[…] the world is not complete chaos. Strange attractors often do have structure: like the Sierpinski gasket, they are self-similar or approximately so. And they have fractal dimensions that hold important clues for our attempts to understand chaotic systems such as the weather." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

"The term chaos is used in a specific sense where it is an inherently random pattern of behaviour generated by fixed inputs into deterministic" (that is fixed) rules" (relationships). The rules take the form of non-linear feedback loops. Although the specific path followed by the behaviour so generated is random and hence unpredictable in the long-term, it always has an underlying pattern to it, a 'hidden' pattern, a global pattern or rhythm. That pattern is self-similarity, that is a constant degree of variation, consistent variability, regular irregularity, or more precisely, a constant fractal dimension. Chaos is therefore order" (a pattern) within disorder" (random behaviour)." (Ralph D Stacey, "The Chaos Frontier: Creative Strategic Control for Business", 1991)

"Chaos appears in both dissipative and conservative systems, but there is a difference in its structure in the two types of systems. Conservative systems have no attractors. Initial conditions can give rise to periodic, quasiperiodic, or chaotic motion, but the chaotic motion, unlike that associated with dissipative systems, is not self-similar. In other words, if you magnify it, it does not give smaller copies of itself. A system that does exhibit self-similarity is called fractal. [...] The chaotic orbits in conservative systems are not fractal; they visit all regions of certain small sections of the phase space, and completely avoid other regions. If you magnify a region of the space, it is not self-similar." (Barry R Parker, "Chaos in the Cosmos: The stunning complexity of the universe", 1996)

"What is renormalization? First of all, if scaling is present we can go to smaller scales and get exactly the same result. In a sense we are looking at the system with a microscope of increasing power. If you take the limit of such a process you get a stability that is not otherwise present. In short, in the renormalized system, the self-similarity is exact, not approximate as it usually is. So renormalization gives stability and exactness." (Barry R Parker, "Chaos in the Cosmos: The stunning complexity of the universe", 1996)

"The self-similarity of fractal structures implies that there is some redundancy because of the repetition of details at all scales. Even though some of these structures may appear to teeter on the edge of randomness, they actually represent complex systems at the interface of order and disorder. " (Edward Beltrami, "What is Random?: Chaos and Order in Mathematics and Life", 1999)

"The self-similarity on different scales arises because growth often involves iteration of simple, discrete processes (e.g. branching). These repetitive processes can often be summarized as sets of simple rules." (David G Green, 2000)

"Fractals are self-similar objects. However, not every self-similar object is a fractal, with a scale-free form distribution. If we put identical cubes on top of each other, we get a self-similar object. However, this object will not have scale-free statistics: since it has only one measure of rectangular forms, it is single-scaled. We need a growing number of smaller and smaller self-similar objects to satisfy the scale-free distribution." (Péter Csermely, "Weak Links: The Universal Key to the Stabilityof Networks and Complex Systems", 2009)

"The concept of bifurcation, present in the context of non-linear dynamic systems and theory of chaos, refers to the transition between two dynamic modalities qualitatively distinct; both of them are exhibited by the same dynamic system, and the transition (bifurcation) is promoted by the change in value of a relevant numeric parameter of such system. Such parameter is named 'bifurcation parameter', and in highly non-linear dynamic systems, its change can produce a large number of bifurcations between distinct dynamic modalities, with self-similarity and fractal structure. In many of these systems, we have a cascade of numberless bifurcations, culminating with the production of chaotic dynamics." (Emilio Del-Moral-Hernandez, "Chaotic Neural Networks", Encyclopedia of Artificial Intelligence, 2009)

"In the telephone system a century ago, messages dispersed across the network in a pattern that mathematicians associate with randomness. But in the last decade, the flow of bits has become statistically more similar to the patterns found in self-organized systems. For one thing, the global network exhibits self-similarity, also known as a fractal pattern. We see this kind of fractal pattern in the way the jagged outline of tree branches look similar no matter whether we look at them up close or far away. Today messages disperse through the global telecommunications system in the fractal pattern of self-organization." (Kevin Kelly, "What Technology Wants", 2010)

"Cyberneticists argue that positive feedback may be useful, but it is inherently unstable, capable of causing loss of control and runaway. A higher level of control must therefore be imposed upon any positive feedback mechanism: self-stabilising properties of a negative feedback loop constrain the explosive tendencies of positive feedback. This is the starting point of our journey to explore the role of cybernetics in the control of biological growth. That is the assumption that the evolution of self-limitation has been an absolute necessity for life forms with exponential growth." (Tony Stebbing, "A Cybernetic View of Biological Growth: The Maia Hypothesis", 2011)

"Laws of complexity hold universally across hierarchical scales (scalar, self-similarity) and are not influenced by the detailed behavior of constituent parts." (Jamshid Gharajedaghi, "Systems Thinking: Managing Chaos and Complexity A Platform for Designing Business Architecture" 3rd Ed., 2011)

"Fractals are different from chaos. Fractals are self-similar geometric objects, while chaos is a type of deterministic yet unpredictable dynamical behavior. Nevertheless, the two ideas or areas of study have several interesting and important links. Fractal objects at first blush seem intricate and complex. However, they are often the product of very simple dynamical systems. So the two areas of study - chaos and fractals - are naturally paired, even though they are distinct concepts." (David P Feldman,"Chaos and Fractals: An Elementary Introduction", 2012)

"The study of chaos shows that simple systems can exhibit complex and unpredictable behavior. This realization both suggests limits on our ability to predict certain phenomena and that complex behavior may have a simple explanation. Fractals give scientists a simple and concise way to qualitatively and quantitatively understand self-similar objects or phenomena. More generally, the study of chaos and fractals hold many fun surprises; it challenges one’s intuition about simplicity and complexity, order and disorder." (David P Feldman,"Chaos and Fractals: An Elementary Introduction", 2012

"Chaos theory is a branch of mathematics focusing on the study of chaos - dynamical systems whose random states of disorder and irregularities are governed by underlying patterns and deterministic laws that are highly sensitive to initial conditions. Chaos theory is an interdisciplinary theory stating that, within the apparent randomness of complex, chaotic systems, there are underlying patterns, interconnectedness, constant feedback loops, repetition, self-similarity, fractals, and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning that there is a sensitive dependence on initial conditions)." (Nima Norouzi, "Criminal Policy, Security, and Justice in the Time of COVID-19", 2022)

06 December 2025

🕸Systems Engineering: Fractals (Just the Quotes)

"[…] chaos and fractals are part of an even grander subject known as dynamics. This is the subject that deals with change, with systems that evolve in time. Whether the system in question settles down to equilibrium, keeps repeating in cycles, or does something more complicated, it is dynamics that we use to analyze the behavior." (Steven H Strogatz, "Non-Linear Dynamics and Chaos, 1994)

"It is time to employ fractal geometry and its associated subjects of chaos and nonlinear dynamics to study systems engineering methodology (SEM). [...] Fractal geometry and chaos theory can convey a new level of understanding to systems engineering and make it more effective." (Arthur D Hall, "The fractal architecture of the systems engineering method", "Systems, Man and Cybernetics", Vol. 28 (4), 1998)

"What is renormalization? First of all, if scaling is present we can go to smaller scales and get exactly the same result. In a sense we are looking at the system with a microscope of increasing power. If you take the limit of such a process you get a stability that is not otherwise present. In short, in the renormalized system, the self-similarity is exact, not approximate as it usually is. So renormalization gives stability and exactness." (Barry R Parker, "Chaos in the Cosmos: The stunning complexity of the universe", 1996)

"The self-similarity of fractal structures implies that there is some redundancy because of the repetition of details at all scales. Even though some of these structures may appear to teeter on the edge of randomness, they actually represent complex systems at the interface of order and disorder." (Edward Beltrami, "What is Random?: Chaos and Order in Mathematics and Life", 1999)

"If financial markets aren't efficient, then what are they? According to the 'fractal market hypothesis', they are highly unstable dynamic systems that generate stock prices which appear random, but behind which lie deterministic patterns." (Steve Keen, "Debunking Economics: The Naked Emperor Of The Social Sciences", 2001)

"Wherever we look in our world the complex systems of nature and time seem to preserve the look of details at finer and finer scales. Fractals show a holistic hidden order behind things, a harmony in which everything affects everything else, and, above all, an endless variety of interwoven patterns. Fractal geometry allows bounded curves of infinite length, as well as closed surfaces with infinite area. It even allows curves with positive volume and arbitrarily large groups of shapes with exactly the same boundary." (Philip Tetlow, "The Web’s Awake: An Introduction to the Field of Web Science and the Concept of Web Life", 2007)

"The economy is a nonlinear fractal system, where the smallest scales are linked to the largest, and the decisions of the central bank are affected by the gut instincts of the people on the street." (David Orrell, "The Other Side Of The Coin", 2008)

"A mathematical fractal is generated by an infinitely recursive process, in which the final level of detail is never reached, and never can be reached by increasing the scale at which observations are made. In reality, fractals are generated by finite processes, and exhibit no visible change in detail after a certain resolution limit. This behavior of natural fractal objects is similar to the exponential cutoff, which can be observed in many degree distributions of real networks." (Péter Csermely, "Weak Links: The Universal Key to the Stabilityof Networks and Complex Systems", 2009)

"Fractals are self-similar objects. However, not every self-similar object is a fractal, with a scale-free form distribution. If we put identical cubes on top of each other, we get a self-similar object. However, this object will not have scale-free statistics: since it has only one measure of rectangular forms, it is single-scaled. We need a growing number of smaller and smaller self-similar objects to satisfy the scale-free distribution." (Péter Csermely, "Weak Links: The Universal Key to the Stabilityof Networks and Complex Systems", 2009)

"In the telephone system a century ago, messages dispersed across the network in a pattern that mathematicians associate with randomness. But in the last decade, the flow of bits has become statistically more similar to the patterns found in self-organized systems. For one thing, the global network exhibits self-similarity, also known as a fractal pattern. We see this kind of fractal pattern in the way the jagged outline of tree branches look similar no matter whether we look at them up close or far away. Today messages disperse through the global telecommunications system in the fractal pattern of self-organization." (Kevin Kelly, "What Technology Wants", 2010)

"Fractals are different from chaos. Fractals are self-similar geometric objects, while chaos is a type of deterministic yet unpredictable dynamical behavior. Nevertheless, the two ideas or areas of study have several interesting and important links. Fractal objects at first blush seem intricate and complex. However, they are often the product of very simple dynamical systems. So the two areas of study - chaos and fractals - are naturally paired, even though they are distinct concepts." (David P Feldman,"Chaos and Fractals: An Elementary Introduction", 2012)

"Only at the edge of chaos can complex systems flourish. This threshold line, that edge between anarchy and frozen rigidity, is not a like a fence line, it is a fractal line; it possesses nonlinearity." (Stephen H Buhner, "Plant Intelligence and the Imaginal Realm: Beyond the Doors of Perception into the Dreaming of Earth", 2014)

19 September 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 97: Swarm Intelligence - How AI Could Evolve Collective Behavior)

Prompt Engineering Series

Prompt: "write a post of 600 words on how AI could develop swarm behavior"

Introduction

In nature, some of the most remarkable feats of coordination come not from individual brilliance, but from collective intelligence. Birds flock, ants forage, and bees build hives - all without central control. This phenomenon, known as swarm behavior, is a decentralized, self-organizing system that emerges from simple rules followed by many agents.

Now imagine machines doing the same.

As Artificial Intelligence (AI) advances, the potential for AI systems to evolve swarm behavior becomes increasingly plausible - and powerful. Let’s explore how this could happen, what it might look like, and why it could redefine the future of intelligent systems.

What Is Swarm Behavior?

Swarm behavior refers to the coordinated actions of many agents - biological or artificial - based on local interactions rather than centralized commands. Each agent follows simple rules, but together they produce complex, adaptive behavior.

In AI, this could mean:

  • Drones flying in formation without a pilot.
  • Bots managing traffic flow by communicating locally.
  • Robotic units exploring terrain by sharing sensor data.

The key is decentralization. No single machine leads. Instead, intelligence emerges from the group.

How AI Could Develop Swarm Behavior

AI systems could evolve swarm behavior through several pathways:

  • Reinforcement Learning in Multi-Agent Systems: Machines learn to cooperate by maximizing shared rewards. Over time, they develop strategies that benefit the group, not just the individual.
  • Local Rule-Based Programming: Each agent follows simple rules - like 'avoid collisions', 'follow neighbors', or 'move toward goal'. These rules, when scaled, produce emergent coordination (see the sketch after this list).
  • Communication Protocols: Machines exchange data in real time - position, intent, environmental cues - allowing them to adapt collectively.
  • Evolutionary Algorithms: Swarm strategies can be 'bred' through simulation, selecting for behaviors that optimize group performance.

These methods don’t require central control. They rely on interaction, adaptation, and feedback - just like nature.
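A toy sketch (added here, not from the generated post) of the rule-based pathway: each agent applies separation, alignment and cohesion to its local neighborhood, and coordinated motion emerges without any leader.

```python
# Sketch: emergent flocking from three local rules (no central control).
import math, random

N, STEPS, RADIUS, MAX_SPEED = 30, 200, 10.0, 2.0
agents = [{"x": random.uniform(0, 50), "y": random.uniform(0, 50),
           "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
          for _ in range(N)]

def neighbors(a):
    return [b for b in agents if b is not a
            and (b["x"] - a["x"])**2 + (b["y"] - a["y"])**2 < RADIUS**2]

for _ in range(STEPS):
    for a in agents:
        near = neighbors(a)
        if not near:
            continue
        k = len(near)
        cx = sum(b["x"] for b in near) / k   # cohesion: steer toward local center
        cy = sum(b["y"] for b in near) / k
        ax = sum(b["vx"] for b in near) / k  # alignment: match neighbors' heading
        ay = sum(b["vy"] for b in near) / k
        sx = sum(a["x"] - b["x"] for b in near)  # separation: avoid collisions
        sy = sum(a["y"] - b["y"] for b in near)
        a["vx"] += 0.01 * (cx - a["x"]) + 0.05 * (ax - a["vx"]) + 0.01 * sx
        a["vy"] += 0.01 * (cy - a["y"]) + 0.05 * (ay - a["vy"]) + 0.01 * sy
        speed = math.hypot(a["vx"], a["vy"])
        if speed > MAX_SPEED:                # cap speed, keep heading
            a["vx"] *= MAX_SPEED / speed
            a["vy"] *= MAX_SPEED / speed
    for a in agents:
        a["x"] += a["vx"]
        a["y"] += a["vy"]

# polarization near 1 means headings have aligned; random headings give ~0
px = sum(a["vx"] / (math.hypot(a["vx"], a["vy"]) or 1.0) for a in agents) / N
py = sum(a["vy"] / (math.hypot(a["vx"], a["vy"]) or 1.0) for a in agents) / N
print(f"flock polarization: {math.hypot(px, py):.2f}")
```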

What Swarm AI Could Do

Swarm AI could revolutionize many domains:

  • Disaster Response: Fleets of drones could search for survivors, map damage, and deliver aid - faster and more flexibly than centralized systems.
  • Environmental Monitoring: Robotic swarms could track pollution, wildlife, or climate patterns across vast areas.
  • Space Exploration: Autonomous probes could explore planetary surfaces, sharing data and adjusting paths without human input.
  • Military and Defense: Swarm tactics could be used for surveillance, area denial, or coordinated strikes - raising ethical concerns as well as strategic possibilities.

In each case, the swarm adapts to changing conditions, learns from experience, and operates with resilience.

Challenges and Risks

Swarm AI isn’t without challenges:

  • Coordination Complexity: Ensuring agents don’t interfere with each other or create chaos.
  • Security Vulnerabilities: A compromised agent could disrupt the entire swarm.
  • Ethical Oversight: Decentralized systems are harder to audit and control.
  • Emergent Unpredictability: Swarms may develop behaviors that weren’t anticipated or intended.

Designing safe, transparent, and accountable swarm systems will be critical.

A New Paradigm of Intelligence

Swarm AI represents a shift from individual intelligence to collective cognition. It’s not about building smarter machines - it’s about building smarter networks.

This mirrors a broader truth: intelligence isn’t always centralized. Sometimes, it’s distributed, adaptive, and emergent. And in that model, machines don’t just think - they collaborate.

Final Thought: From Hive to Horizon

If AI evolves swarm behavior, we won’t just see machines acting together - we’ll see machines thinking together. They’ll form digital ecosystems, capable of solving problems too complex for any single system.

And in that evolution, we may find a new kind of intelligence - one that reflects not the mind of a machine, but the wisdom of the swarm.

Just try the prompt on Copilot or your favorite AI-powered assistant! Have you got a different/similar result? How big or important is the difference? Any other thoughts?
Just share the link to the post with me and I'll add it to this post as a resource!

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

18 September 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 96: Biodiversity in Code - How AI Could Evolve Like Nature)

Prompt Engineering Series

Prompt: "write a post of 600 words on how AI could evolve like in natural world?"

Introduction

When we think of biodiversity, we picture lush rainforests, coral reefs, and the dazzling variety of life on Earth. But what if Artificial Intelligence (AI) followed a similar path? What if, instead of a single dominant form, AI evolved into a rich ecosystem of diverse intelligences - each adapted to its own niche, shaped by its environment, and coexisting in a dynamic balance?

As AI development accelerates, the parallels between biological evolution and machine evolution become increasingly compelling. Let’s explore how biodiversity could be reflected in the future of AI.

1. Evolution Through Specialization

In nature, species evolve to fill specific ecological roles. Similarly, AI systems could evolve to specialize in distinct domains:

  • Medical AIs trained on vast health datasets could become diagnostic savants.
  • Legal AIs might master jurisprudence, precedent, and negotiation.
  • Creative AIs could evolve to generate art, music, and literature with unique stylistic signatures.

Each AI would be optimized for its environment - just as a hummingbird’s beak is shaped for sipping nectar, or a cheetah’s body for speed.

2. Environmental Influence on AI Traits

Just as climate, terrain, and competition shape biological traits, the 'environment' of data, hardware, and user interaction will shape AI evolution.

  • AIs trained in multilingual, multicultural contexts may develop nuanced linguistic empathy.
  • Systems embedded in low-resource settings might evolve to be frugal, resilient, and adaptive.
  • AIs exposed to chaotic or unpredictable data could develop probabilistic reasoning and improvisational skills.

This diversity isn’t just cosmetic - it’s functional. It allows AI to thrive across varied human landscapes.

3. Cognitive Diversity and Behavioral Variation

In nature, intelligence manifests in many forms - problem-solving in crows, social bonding in elephants, tool use in octopuses. AI could mirror this cognitive diversity:

  • Some AIs might prioritize logic and precision.
  • Others could emphasize emotional resonance and human connection.
  • Still others might evolve toward creativity, intuition, or strategic foresight.

This variation would reflect not just different tasks, but different philosophies of intelligence.

4. Symbiosis and Coexistence

Nature isn’t just competition - it’s cooperation. Bees and flowers, fungi and trees, humans and gut microbes. AI could evolve similar symbiotic relationships:

  • Companion AIs that support mental health and emotional well-being.
  • Collaborative AIs that work alongside humans in creative or strategic endeavors.
  • Ecosystem AIs that coordinate networks of machines for collective intelligence.

These relationships would be dynamic, evolving over time as trust, feedback, and shared goals deepen.

5. Mutation and Innovation

Biological evolution thrives on mutation - unexpected changes that sometimes lead to breakthroughs. AI could experience similar leaps:

  • Novel architectures that defy current paradigms.
  • Emergent behaviors that weren’t explicitly programmed.
  • Hybrid systems that blend symbolic reasoning with neural learning.

These innovations wouldn’t be random - they’d be guided by feedback, selection pressures, and human values.

Final Thought: Designing for Diversity

If we want AI to reflect biodiversity, we must design for it. That means:

  • Encouraging pluralism in data, design, and deployment.
  • Avoiding monocultures of dominant platforms or algorithms.
  • Valuing not just performance, but adaptability, resilience, and ethical alignment.

Just as biodiversity strengthens ecosystems, diversity in AI strengthens society. It makes our systems more robust, more inclusive, and more reflective of the human experience.

In the end, the most powerful AI future may not be one superintelligence - but a vibrant, interwoven tapestry of intelligences, each contributing its own thread to the fabric of progress.

Just try the prompt on Copilot or your favorite AI-powered assistant! Have you got a different/similar result? How big or important is the difference? Any other thoughts?
Just share the link to the post with me and I'll add it to this post as a resource!

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

21 August 2024

🧭Business Intelligence: Perspectives (Part 14: From Data to Storytelling II)

Business Intelligence Series

Being snapshots in people and organizations’ lives, data arrive to tell a story, even if the story might not be worth telling or might be important only in certain contexts. In fact each record in a dataset has the potential of bringing a story to life, though business people are more interested in the hidden patterns and “stories” the data reveal through more or less complex techniques. Therefore, data are usually tortured until they confess something, and unfortunately people stop analyzing the data with the first confession(s). 

Even if it looks like torture, data need to be processed to reveal certain characteristics, trends or patterns that could help us in sense-making, decision-making or similar specific business purposes. Unfortunately, the volume of data increases with an incredible velocity, to which further characteristics like variety, veracity, value and variability may add up.

The data in a dashboard, presentation or even a report should ideally tell a story, otherwise the data might not be worth looking at, at least from some people’s perspective. Probably, that’s one of the reasons why many dashboards remain unused shortly after they were made available, even if considerable time and money were invested in them. Seeing the same dull numbers gives the illusion that nothing changed, that nothing is worth reviewing, revealing or considering, which might be occasionally true, though one can’t take this as a rule! Lots of important facts could remain hidden or not considered.

One can suppose that there are businesses in which something important seldom happens and an alert can do a better job than reviewing a dashboard or a report frequently. Probably an alert is a better choice than reporting metrics nobody looks at! 

Organizations usually define a set of KPIs (key performance indicators) and other types of metrics they (intend to) review periodically. Ideally, the numbers collected should define and reflect the critical points (aka pain points) of an organization, if they can be known in advance. Unfortunately, in dynamic businesses the focus can change considerably from one day to another. Moreover, in systemic contexts critical points can remain undiscovered in time if the set of metrics defined doesn’t consider them adequately. 

Typically only one’s experience and current or past issues can tell what one should consider or ignore, which are the critical/pain points or important areas that must be monitored. Ideally, one should implement alerts for the critical points that require an immediate response and use KPIs for the recurring topics (though the two approaches may overlap).
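To make the split tangible, here is a minimal, hypothetical sketch (metric names, values and thresholds are invented for illustration): critical points get alert rules that fire immediately, everything else is aggregated into the periodic KPI review.

```python
# Sketch: alerts for critical points vs. KPIs for the periodic review.
metrics = {"open_backorders": 42, "inventory_accuracy": 0.97, "overdue_invoices": 7}

# critical/pain points -> rules that require an immediate response
alert_rules = {"open_backorders": lambda v: v > 25,
               "overdue_invoices": lambda v: v > 10}

for name, rule in alert_rules.items():
    if rule(metrics[name]):
        print(f"ALERT: {name} = {metrics[name]} breached its threshold")

# the remaining metrics are reviewed periodically as KPIs
kpis = {k: v for k, v in metrics.items() if k not in alert_rules}
print("KPIs for the periodic review:", kpis)
```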

Following the flow of goods, money and other resources one can look at the processes and identify the areas that must be monitored, prioritize them and identify the metrics that are worth tracking, respectively that reflect strengths, weaknesses, opportunities, threats and the risks associated with them. 

One can start with what changed by how much, what caused the change(s) and what further impact is expected directly or indirectly, by what magnitude, respectively why nothing changed in the considered time unit. Causality diagrams can help in the process even if the representations can become quite complex. 

The deeper one dives and the more questions one attempts to answer, the higher the chances to find a story. However, can we find a story that’s worth telling in any set of data? At least this is the point some adepts of storytelling try to make. Conversely, the data can be dull, especially when one doesn’t track or consider the right data. There are many aspects of a business that may look boring, and many metrics seem to track the boring but probably important aspects. 

20 March 2024

🗄️Data Management: Master Data Management (Part I: Understanding Integration Challenges) [Answer]

Data Management
Data Management Series

An answer to Piethein Strengholt’s post [1] on Master Data Management’s (MDM) integration challenges; Strengholt is the author of "Data Management at Scale".

Master data can be managed within individual domains, though the boundaries must be clearly defined, and some coordination is needed. Attempting to partition the entities based on domains doesn’t always work. The partition needs to be performed at attribute level, though even then there might be some exceptions involved (e.g. some Products are only for Finance to use). One can then identify attributes inside the system to create the boundaries.

MDM is simple if you have the right systems, processes, procedures, roles, and data culture in place. Unfortunately, people make it too complicated – oh, we need a nice shiny system for managing the data before they are entered in ERP or other systems, we need a system for storing and maintaining the metadata, and another system for managing the policies, and the story goes on. The lack of systems is given as a reason why people make no progress. Moreover, people will want to integrate the systems, increasing the overall complexity of the ecosystem.

The data should be cleaned in the source systems and assessed against the same. If that's not possible, then you have the wrong system! A set of well-built reports can make data assessment possible. 

The metadata and policies can be maintained in Excel (and stored in SharePoint), in SharePoint itself, or in a similar system that supports versioning. Pragmatic solutions can be found for other topics as well.

ERP systems allow us to define workflows and enable a master data record to be published only when the information is complete, though there will always be exceptions (e.g., a Purchase Order must be sent today). Such exceptions make people circumvent the MDM systems with all the issues deriving from this.

Adding an MDM system within an architecture tends to increase the complexity of the overall infrastructure and create more bottlenecks. Occasionally, it just replicates the structures existing in the target system(s).

Integrations are supposed to reduce the effort, though in the past 20 years I never saw an integration work without issues, even where MDM is concerned. One of the main issues is that the solutions just synchronized the data without considering the processual dependencies, and sometimes also the referential dependencies. The time needed for troubleshooting the integrations can easily exceed the time for importing the data manually over an upload mechanism.

To make the integration work, the MDM system will end up duplicating all the validation available in the target system(s). This can make sense when a considerable volume of master data is created daily or weekly. Native connectors simplify the integrations, especially when they can handle errors transparently and allow records to be modified manually, though the issues start as soon as the target system is extended with more attributes or other structures.

If an organization has an MDM system, then all the master data should come from the MDM. As soon as a bidirectional synchronization is used (and other integrations might require this), Pandora’s box is open. One can define hard rules, though again, there are always exceptions in which manual interference is needed.

Attempting an integration of reference data is not recommended. ERP systems can have hundreds of such entities. Some organizations tend to have a golden system (a copy of production) with all the reference data. It works for some time, until people realize that the solution is expensive and time-consuming.

MDM systems do make sense in certain scenarios, though to get the integrations right can involve a considerable effort and certain assumptions and requirements must be met.

Previous Post <<||>> Next Post

References:
[1] Piethein Strengholt (2023) Understanding Master Data Management’s Integration Challenges (link)


06 March 2024

🧭Business Intelligence: Data Culture (Part II: Leadership, Necessary but not Sufficient)

Business Intelligence
Business Intelligence Series

Continuing the idea from the previous post on Brent Dykes’ article on data culture and Generative AI [1], it’s worth discussing the relationship between data culture and leadership. Leadership belongs to a list of select words everybody knows about but fails to define precisely, especially when many traits are associated with leadership, respectively when most of the issues existing in organizations can be associated with it directly or indirectly.

Take for example McKinsey’s definition: "Leadership is a set of behaviors used to help people align their collective direction, to execute strategic plans, and to continually renew an organization." [2] It gives an idea of what leadership is about, though it lacks precision, which frankly is difficult to accomplish. Using modifiers like strong or weak with the word leadership doesn’t increase the precision of its usage. Several words stand out though: direction, strategy, behavior, alignment, renewal.

Leadership is about identifying and challenging the status quo, defining how the future will or could look for the organization in terms of a vision, a mission and a destination, translating them into a set of goals and objectives. Then, it’s about defining a set of strategies, focusing on transformation and what it takes to execute it, adjusting the strategic bridge between goals and objectives, or, reading between the lines, identifying and doing the right things, being able to introduce a new order of things, reinventing the organization, adapting the organization to circumstances.

Alignment amounts to aligning the various strategies and aligning people with the vision and mission, while renewal is about changing course in response to new information or business context, identifying and transforming weaknesses into strengths, risks into opportunities, respectively opportunities into certitudes, seeing possibilities and multiplying them.

Leadership is also about working on the system, addressing the systemic failure, addressing structural and organizational issues, making sure that the preconditions and enablers for organizational change are in place, that no barriers exist or other factors impact negatively the change, that the positive aspects of complex systems like emergence or exponential growth do happen in time.

And leadership is about much more - interpersonal influence, inspiring people, inspiring change, changing mindsets, assisting, motivating, mobilizing, connecting, knocking people out of their comfort zones, conviction, consistency, authority, competence, wisdom, etc. Leadership seems to be an idealistic concept where too many traits are considered, traits that ideally should apply to the average knowledge worker as well.

An organization’s culture is created, managed, nourished, and destroyed through leadership, and that’s a strong statement and constraint. By extension this statement applies to the data culture as well. It’s about leading by example and not by words or preaching, and many love to preach, even when no choir is around. It’s about demanding the same from the managers as managers demand from their subalterns, it’s about pushing the edges of culture. As Dykes mentions, it should be about participating in the data culture initiatives, making expectations explicit, and sharing mental models.

Leadership is a condition necessary but not sufficient for an organization’s culture to mature. Financial and other types of resources are needed, though once a set of behaviors is seeded, they have the potential to grow and multiply when the proper conditions are met. Growth occurs also by being aware of what needs to be done and doing it day by day consciously, through self-mastery. Nowadays there are so many ways to learn and search for support, one just needs a bit of curiosity and drive to learn anything. Blaming the lack of leadership in general is just a way of passing the blame one level up the chain of command.

Resources:
[1] Forbes (2024) Why AI Isn’t Going To Solve All Your Data Culture Problems, by Brent Dykes (link)
[2] McKinsey (2022) What is leadership? (link)

Previous Post <<||>> Next Post

28 February 2024

🧭Business Intelligence: A Software Engineer's Perspective (Part V: From Process Management to Mental Models in Knowledge Gaps)

Business Intelligence Series

An organization's business processes are probably one of its most important assets because they reflect the business model, philosophy and culture, respectively link the material, financial, decisional, informational and communicational flows across the whole organization with implication in efficiency, productivity, consistency, quality, adaptability, agility, control or governance. A common practice in organizations is to document the business-critical processes and manage them accordingly over their lifetime, making sure that the employees understand and respect them, respectively improve them continuously. 

In what concerns the creation of data artifacts, data without their processual context are often meaningless, no matter how much a data professional knows about data structures/models. Processes allow one to delimit the flow and boundaries of data, respectively to separate the essential from the non-essential. Moreover, it's the knowledge of processes that allows one to reengineer the logic behind systems, especially when no proper documentation about the logic is available.

Therefore, the existence of documented processes allows to bridge the knowledge gaps existing on the factual side, and occasionally also on the technical side. In theory, the processes should provide a complete overview of the procedures, rules, policies and responsibilities existing in the organization, respectively how the business operates. However, even if people tend to understand how the world works locally, when broken down into parts, their understanding is systemically flawed, missing the implications of causal relationships that span time with delays, feedback, variable confusion, chaotic behavior, and/or other characteristics borrowed from the vocabulary of complex systems.  

Jay W Forrester [3], Peter M Senge [1], John D Sterman [2] and several other systems-thinking theoreticians stressed the importance of mental models in making sense of the world, especially in setups that reflect the characteristics of complex systems. Mental models frame our experience about the world in congruent mental constructs that are further used to think, understand and navigate the world. They are however tacit, fuzzy, incomplete, imprecisely stated, inaccurate, evolving simplifications with dual character, enabling on one side, while impeding on the other side cognitive processes like sense-making, learning, thinking or decision-making, limiting the range of action to what is familiar and comfortable.

On one side, one of the primary goals of Data Analytics is to provide new insights; on the other side, the new insights fail to be recognized and put into practice because they conflict with existing mental models, limiting employees to familiar ways of thinking and acting.

Externalizing and sharing mental models allows, besides making assumptions explicit and creating a world view, also strategizing, running tests and simulations, respectively making sure that barriers and further constraints don’t impact the decisional process. Senge goes further and advances that mental models, especially at management level, offer a competitive advantage, allowing organizations to maintain coherence and direction, people becoming more perceptive and responsive about environmental or circumstance changes.

The whole process isn’t about creating a unique congruent mental model, even if several mental models may converge toward one or more holistic models, but about providing diverse perspectives and enabling people to make leaps in abstraction (by moving from direct observations to generalizations) while blending advocacy and inquiry to promote collaborative learning. Gradually, people and organizations should recognize a shift from mental models dominated by events to mental models that recognize longer-term patterns of change and the underlying structures producing those patterns [1].

Probably, for many the concept of mental models still seems too abstract, respectively the effort associated with it unnecessary, or at least questionable as to whether it can make a difference. Conversely, being aware of the positive and negative implications mental models hold can make us explore, even if ad-hoc, the roads they open.

Previous Post <<||>> Next Post

Resources:
[1] Peter M Senge (1990) "The Fifth Discipline: The Art & Practice of The Learning Organization"
[2] John D Sterman (2000) "Business Dynamics: Systems thinking and modeling for a complex world"
[3] Jay W Forrester (1971) "Counterintuitive Behaviour of Social Systems", Technology Review

02 January 2024

🕸Systems Engineering: Never-Ending Stories in Praxis (Quote of the Day)

Systems Engineering
Systems Engineering Cycle

"[…] the longer one works on […] a project without actually concluding it, the more remote the expected completion date becomes. Is this really such a perplexing paradox? No, on the contrary: human experience, all-too-familiar human experience, suggests that in fact many tasks suffer from similar runaway completion times. In short, such jobs either get done soon or they never get done. It is surprising, though, that this common conundrum can be modeled so simply by a self-similar power law." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

I found the above quote while browsing through Manfred Schroeder's book on fractals, chaos and power laws, a book that also explores related topics like percolation, recursion, randomness, self-similarity and determinism. Unfortunately, when one goes beyond the introductory notes of each chapter, the subjects require more advanced knowledge of Mathematics, respectively further analysis and exploration of the models behind them. Despite this, the book is still an interesting read with ideas to ponder upon.

I found myself a few times in the situation described above - working on a task that didn't seem to end, despite investing more effort, respectively approaching the solution from different angles. The reasons behind such situations were multiple, typically found beyond my direct area of influence and/or decision. In a systemic setup, there are parts of a system that find themselves in opposition, different forces pulling in distinct directions. It can be the case of interests, goals, expectations or solutions which compete or are subject to politics.

For example, in Data Analytics or Data Science there are high chances that no progress can be made beyond a certain point without first addressing the quality of data or design/architectural issues. The integrations between applications, data migrations and other solutions which heavily rely on data are sensitive to data quality and the architecture's reliability. As long as the source of variability (data, data generators) is not stabilized, providing a stable solution has low chances of success, no matter how much effort is invested, respectively how performant the tools are.

Some of the issues can be solved by allocating resources to handle their implications. Unfortunately, some organizations attempt to solve such issues by allocating the resources in the wrong areas or by addressing the symptoms instead of taking a step back and looking systemically at the problem, analyzing and modeling it accordingly. Moreover, there are organizations which refuse to recognize they have a problem at all! In the blame game, it's much easier to shift the responsibility on somebody else's shoulders. 

Defining the right problem to solve might prove more challenging than expected and usually this requires several iterations in which the knowledge obtained in the process is incorporated gradually. Other times, one attempts to solve the correct problem by using the wrong methodology, architecture and/or skillset. The difference between right and wrong depends on the context, and even between similar problems and factors the context can make a considerable difference.

The above quote can be corroborated with situations in which perfection is demanded. In IT and management setups, excellence is often confounded with perfection, the latter being impossible to achieve, though many managers take it as the norm. There's a critical point above which the effort invested outweighs the solution's plausibility by an exponential factor.

Another source of unending effort is when requirements change frequently and swiftly - e.g. the rate with which changes occur outweighs the progress made toward a solution. Unless the requirements are stabilized, the effort spirals outwards (in an exponential manner).

Finally, there are cases with extreme character, in which for example the complexity of the task outweighs the skillset and/or the number of resources available. Moreover, there are problems which accept plausible solutions, though there are also problems (especially systemic ones) which don't have stable or plausible solutions. 

Behind most such cases lie factors that tend to exhibit chaotic behavior, which occurs especially when the environment is far from favorable. The models used to depict such relations are nonlinear, sometimes expressed as power laws - one quantity varying as a power of another, with the variation increasing with each generation.
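To corroborate the power-law remark with the opening quote (a standard computation added here, not taken from Schroeder's book): if a task's completion time $T$ has a Pareto tail,

$$P(T > t) = \left(\frac{t}{t_0}\right)^{-\alpha}, \quad \alpha > 1 \;\Longrightarrow\; E[T \mid T > t] = \frac{\alpha}{\alpha - 1}\,t,$$

so the expected remaining effort, $t/(\alpha - 1)$, grows linearly with the time already invested - the longer the task has run, the more remote its completion date becomes.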

Previous Post <<||>> Next Post

Resources:
[1] Manfred Schroeder (1990) "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise" (quotes)

21 March 2021

𖣯Strategic Management: The Impact of New Technologies (Part III: Checking the Vital Signs)

Strategic Management

An organization which went through a major change, like the replacement of a strategic system (e.g. ERP/BI implementations), needs to go through a period of attentive supervision to address the inherent issues, which ideally need to be handled as they arise to minimize their future effects. Some organizations might even go through a convalescence period, which risks prolonging itself if the appropriate remedies aren’t found. Therefore, one needs an entity who/which has the skills to recognize the symptoms, understand what’s happening and why, respectively to identify the appropriate actions.

Given technologies’ multi-layered complexity and the volume of knowledge required for understanding them, the role of the doctor can seldom be taken by one person. Moreover, the patient is an organization, each person in the organization usually having local knowledge about the patient. The needed knowledge is dispersed through the organization, and one needs to tap into that knowledge, identify the people close to the technologies and business areas, respectively allow such people to exchange information on a regular basis.

The people who should know the organization best are in theory the management, however they are usually too far away from the technologies and often too busy with management topics. IT professionals are close to the technologies, though sometimes too far away from the patient. The users have a too narrow overview, while for logistical and economic reasons the number of people involved should be kept to a minimum. A compromise is to designate one person from each business area who works with any of the strategic systems, and assure that they have the technical and business knowledge required. It’s nothing but the key-user concept, though for it to work the key-users need not only knowledge but also the empowerment to act when the symptoms appear.

Big organizations also have a product owner for each application who supervises the application through its entire lifecycle, and who needs to coordinate with IT, the business and service providers. This is probably a good idea in order to assure that the ROI is reached over time, respectively that the needs of the system are considered within the IT operations context. In small organizations, the role can be taken by a technical or a business resource with deeper skills than the average user, usually a key-user. However, unless joined with the key-user role, the product owner’s focus will be the product and seldom the business themes.

The issues that need to be overcome after major changes are usually cross-functional, being imperative for people to work together and find solutions. Unfortunately, it’s also in human nature to wait until the issues are big enough to get the proper attention. Unless the key-users have the time allocated already for such topics, the issues will be lost in the heap of operational and tactical activities. This time must be allocated for all key-users and the technical resources needed to support them.

Some organizations build temporary working parties (groups of experts working together to achieve specific goals) or similar groups. However, the status of such a group needs to be permanent if the organization wants to continuously have its health in check, to build the needed expertise and awareness about occurred or potential issues. Centers of excellence/expertise (CoE) or competency centers (CC) are such working groups with permanent status, having defined roles, responsibilities, and processes for supporting and promoting the effective use of technologies within the organization, respectively for monitoring and systematically addressing the risks and opportunities associated with them.

There’s also the null hypothesis: doing nothing and relying solely on employees’ professionalism, though without defined responsibility, accountability and empowerment, it can get messy.


𖣯Strategic Management: The Impact of New Technologies (Part II - The Technology-oriented Patient)

Strategic Management

Looking at the way data, information and knowledge flow through an organization, with a little imagination one can see the resemblance between an organization and the human body, in which the networks created by the respective flows spread through the organization as nervous, circulatory or lymphatic braids do, each with its own role in the good functioning of the organization. Each technology adopted by an organization taps into these flows, creating a structure that can be compared with a nerve plexus, as the various flows intersect at such points, creating an agglomeration of nerves and braids.

The size of each plexus can be considered proportional to the importance of the technology in respect to the overall structure. Strategic technologies like ERP, BI or planning systems, given their importance (gravity), resemble the organs of the human body, with complex networks of braids in their vicinity. Maybe the metaphor is too far-off, though it allows stressing the importance of each technology in respect to its role and the good functioning of the organization. Moreover, each such structure functions as a pressure point that can in extremis block any of the flows considered, a long-term block having important effects.

The human organism is a marvelous piece of work reflecting the grand design, however in time, especially when neglected or driven by external agents, diseases can take hold of any of the parts of the human body, with all the consequences deriving from this. On the other hand, an organization is a hand-made structure in continuous expansion as new technologies or resources are added. Even if the technologies sit at the periphery of the system, their good or bad functioning can have a ripple effect through the various networks.

Replacing any of the above-mentioned strategic systems can be compared with the replacement of an organ in the human body: it has a higher degree of failure compared with other operations and is complex in nature, the organism needing long periods to recover, while in extreme situations the convalescence is prolonged until the end. Fortunately, organizations seem to be more resilient to such operations, though that’s not necessarily a rule. Sometimes all it takes is a small mistake to make the operation fail.

The general feeling is that ERP and BI implementations are taken too lightly by management, employees and implementers. During the replacement operation one must make sure not only that the organ fits and functions as expected, but also that the vital networks regain their vitality and function as expected, and the latter is a process that spans the years to come. One needs to check the important (health) signs regularly and take the appropriate countermeasures. There must be an entity having the role of the doctor, with the skills to adequately address the issues.

Moreover, when the physical structure of an organization is affected, a series of micro-operations might be needed to address the deformities. Unfortunately, these areas are seldom seen in time and can require a sustained effort to fix, while a total reconstruction might be needed. One also works with an amorphous and ever-changing structure that requires many attempts until a remedy is found, if a remedy is possible after all.

Even if such operations are fairly well documented, what organizations often lack are the skilled resources needed during and post-implementation, resources that must also know the patient, and ideally its historical and existing health preconditions. Each patient is different and quite often needs its own treatment/medication. With such changes, the organization lands itself on a discovery journey in which the appropriate path can easily deviate from the well-trodden paths.


22 February 2021

𖣯Strategic Management: The Impact of New Technologies (Part I: A Nail Keeps the Shoe)

Strategic Management

Probably one of the most misunderstood aspects for businesses is the implications the adoption of a new technology has in terms of effort, resources, infrastructure and changes, considered before, during and post-implementation. Unfortunately, getting a new BI tool or ERP system is not like buying a new car, even if customers’ desires might revolve around such expectations. After all, the customer has been using a BI tool or ERP system for ages, so the employees should be able to do the same job as before, right?

In theory adopting a new system is supposed to bring organizations a competitive advantage or other advantages - allow them to reduce costs, improve their agility and decision-making, etc. However, the advantages brought by new technologies remain only potentials unless their capabilities are harnessed adequately. Keeping the car metaphor, besides looking good in the car, getting better mileage or having x years of service, buying a highly technologically-advanced car will more likely bring little benefit for the customer unless he needs, is able to use, and actually uses the additional features.

Both types of systems mentioned above can be quite expensive when considering the benefits associated with them. Therefore, looking at the features and the further requirements is critical for better understanding the fit. In the end one doesn’t need to buy a luxurious or sports car when one just needs to move from point A to B over small distances. On some occasions a bike or a rental car might do as well. Moreover, besides the acquisition costs, the additional features might involve considerable investments once the warranty is voided and something needs to be fixed. In extremis, after a few years it might even be cheaper to 'replace' the whole car. Unfortunately, one can’t change systems yet as if they were cars.

Implementing a new BI tool can take a few weeks if it doesn’t involve architecture changes within the BI infrastructure. Otherwise, replacing a BI infrastructure can take from months to a year until a stable environment is reached. Similarly, an ERP solution can take from six months to years to implement, and typically this also has an impact on the BI infrastructure. Moreover, the implementation is only the tip of the iceberg, as further optimizations and changes are needed. It can take even more time until the benefits of the investment become visible.

A new technology can easily have a domino effect within the organization. This effect is best reflected in sayings of the type 'the wise tell us that a nail keeps a shoe, a shoe a horse, a horse a man, a man a castle, that can fight', which reflect the impact technologies have within organizations when regarded within the broader context. Buying a big car might involve extending the garage, eventually buying a new house with a bigger garage, or replacing other devices just for the sake of using them with the new car. Even if not always perceptible, such dependencies are there, and even if the further investments might be acceptable and make sense, the implications can be a bigger shoe than one can wear. Then the reversed saying can hold: 'for want of a nail, the shoe was lost; for want of a shoe the horse was lost; and for want of a horse the rider was lost'.

For IT technologies the impact is multidimensional, as the change of a technology has an impact on the IT infrastructure, on the processes associated with it, on the resources required and their skillsets, and on the various types of flows (data, information, knowledge, materials, money).


05 January 2021

🧮ERP: Implementations (Part IV: It’s all about Scope II - Nonfunctional Requirements & MVP)

ERP Implementation
ERP Implementations Series

Nonfunctional Requirements

In contrast to functional requirements (FRs), nonfunctional requirements (NFRs) have no direct impact on the system’s behavior but affect end-users’ experience with the system, coming down to topics like performance, usability, reliability, compatibility, security, monitoring, maintainability, testability, and other constraints and quality attributes. Even if these requirements are in general addressed by design, the changes made to the system have the potential of impacting users’ experience negatively.

Moreover, the NFRs are usually difficult to quantify, which is probably why they are seldom made explicit in a formal document or are considered only at a high level. However, one can still find a basis for comparison against compliance requirements, general guidelines, standards, best practices or the legacy system(s) (e.g. the performance should not be worse than in the legacy system, the volume of effort for carrying out the various activities should not increase). Even if they can’t be adequately described, it’s recommended to list the NFRs in general terms in a formal document (e.g. the implementation contract). Failing to do so can open or widen one’s risk exposure, especially when the system lacks important support in the respective areas. In addition, these requirements need to be considered during testing and sign-off as well.
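Such baseline comparisons can be made tangible with simple measurements. Below is a minimal sketch, assuming response times sampled from both systems for the same scenario; the figures and the 10% tolerance are illustrative assumptions, not recommendations:

```python
# A minimal sketch of checking an NFR against a legacy baseline
# (e.g. 'performance should not be worse than in the legacy system').
# All measurements and the tolerance are illustrative assumptions.

import statistics

legacy_ms = [420, 450, 430, 445, 460]   # response times measured on the legacy system
new_ms    = [410, 480, 455, 440, 470]   # same scenario measured on the new system
TOLERANCE = 1.10                        # allow up to 10% degradation vs. the baseline

def meets_baseline(new, legacy, tolerance=TOLERANCE):
    """True if the new system's median response time stays within
    the tolerated factor of the legacy median."""
    return statistics.median(new) <= statistics.median(legacy) * tolerance

print(meets_baseline(new_ms, legacy_ms))   # True: 455 <= 445 * 1.10
```

Even such a crude check gives testing and sign-off something concrete to verify, instead of leaving the NFR as an unverifiable statement of intent.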

Minimum Viable Product (MVP)

Besides considering gaps in respect to the FRs, it’s sometimes important to consider whether the whole functionality is mandatory, especially when considering the various activities that need to be carried out (parametrization, Data Migration).

For example, one can target to implement a minimum viable product (MVP) - a version of the product which has just enough features to cover the mandatory or most important FRs. The MVP is based on the idea that implementing about 80% of the needed functionality has in theory the potential of providing a usable product earlier with a minimum of effort (quick wins), ensuring that the project’s goals and objectives are met, and providing a basis for further development. In case of cost overruns, the MVP assures that the business has a workable product and the opportunity to decide whether it’s worth investing more into the project now or later.

The MVP also allows getting users’ feedback early and integrating it into further enhancements and developments. Often the users understand the capabilities of a system, respectively of an implementation, only when they are able to use the system. As this is a learning process, the learning period can take up to a few months until adequate feedback is available. Therefore, postponing the implementation’s continuation by a few months can in theory have a positive impact, however it can also come with drawbacks (e.g. the resources are not available anymore).

A sketch of the MVP usually results from the prioritization of the requirements, however the requirements then need to be regarded holistically, as there can be different levels of dependency between them. In addition, different costs can be incurred if requirements are handled later, and other constraints may apply as well. Considering an MVP approach can be a double-edged sword. In the worst-case scenario, the business will get only the MVP, with its good and bad characteristics. The business will then be forced to fill the gaps by working outside the system, which can lead to further effort and, in extremis, to poor acceptance of the system. In general, users expect to have their processes fully implemented in the system, an expectation which is not always economically grounded.
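The interplay between prioritization and dependencies can be illustrated with a small sketch: an MVP scope has to pull in not only the high-priority requirements but also everything they transitively depend on. The identifiers, the priority scheme and the helper below are hypothetical, chosen only for illustration:

```python
# A minimal sketch of deriving an MVP scope from prioritized requirements.
# Identifiers and the must/should/could scheme are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    priority: str                            # e.g. 'must', 'should', 'could'
    depends_on: list = field(default_factory=list)

def mvp_scope(requirements, cutoff=('must',)):
    """Select all requirements at the cutoff priority, plus
    everything they transitively depend on."""
    by_id = {r.rid: r for r in requirements}
    selected = set()
    stack = [r.rid for r in requirements if r.priority in cutoff]
    while stack:
        rid = stack.pop()
        if rid in selected:
            continue
        selected.add(rid)
        stack.extend(by_id[rid].depends_on)  # dependencies enter the scope regardless of their own priority
    return selected

reqs = [
    Requirement('FR-01', 'must'),
    Requirement('FR-02', 'must', depends_on=['FR-05']),
    Requirement('FR-05', 'could'),           # low priority on its own, but FR-02 needs it
    Requirement('FR-09', 'should'),
]
print(sorted(mvp_scope(reqs)))               # ['FR-01', 'FR-02', 'FR-05']
```

The sketch makes the double edge visible: FR-05 is dragged into scope by a dependency, while FR-09 stays out, and someone must decide what the business does about it in the meantime.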

After establishing an MVP one can consider the further requirements (including improvement suggestions) on a cost-benefit basis and implement them accordingly as part of a continuous improvement initiative, even if more time may be required to implement them.


🧮ERP: Implementations (Part III: It’s all about Scope I - Functional Requirements)

ERP Implementation
ERP Implementations Series


Introduction

ERP (Enterprise Resource Planning) implementations tend to be expensive projects, with the actual costs often overrunning the expectations by an important factor. The causes for this are multiple, the most important ones ranging from the completeness and complexity of the requirements and the impact they have on the organization to the availability of internal and external skilled resources, the project methodology, the project implementation, the organization’s maturity in running projects, etc.

The most important decision in an ERP implementation is deciding what one needs, respectively what will be considered for the implementation, aspects reflected in a set of functional and nonfunctional requirements.

Functional Requirements 

The functional requirements (FRs) reflect the expected behavior of the system in respect to its inputs and outputs – what the system must do. Typically, they encompass end-users’ requirements in the areas of processes, interfaces and data processing, though they are not limited to these.

The FRs are important because they reflect the future behavior of the system as perceived by the business, serving further as a basis for identifying the project’s scope, the gaps between end-users’ requirements and the system’s functionality, and for estimating the project’s duration and areas of focus. Further, they are used as a basis for validating the system’s behavior and getting the sign-off for the system. Therefore, the FRs need to have the adequate level of detail and be complete, clear, comprehensible and implementable, otherwise any gaps in the requirements can impact the project in adverse ways. To reach this state they need to go through several iterations in which the requirements are reevaluated, enhanced, and checked for duplication, relevance or any other important aspect. In the process it makes sense to categorize the requirements and provide further metadata needed for their appraisal (e.g. process, procedure, owner, status, priority).
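Such a catalog can be kept as structured records. A minimal sketch of what a requirement record with appraisal metadata might look like follows; the field names, statuses and priority scale are illustrative assumptions, not a standard:

```python
# A minimal sketch of a requirement record with appraisal metadata.
# Field names, statuses and the priority scale are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    DRAFT = 'draft'
    REVIEWED = 'reviewed'
    APPROVED = 'approved'
    OBSOLETE = 'obsolete'

@dataclass
class FunctionalRequirement:
    rid: str             # unique identifier, e.g. 'FR-001'
    description: str
    process: str         # business process the requirement belongs to
    owner: str           # business owner accountable for the requirement
    status: Status = Status.DRAFT
    priority: int = 3    # e.g. 1 = critical ... 5 = nice-to-have

# During a review iteration one can, for instance, list what still needs appraisal:
backlog = [
    FunctionalRequirement('FR-001', 'Post supplier invoices', 'Purchase-to-Pay', 'AP team', Status.APPROVED, 1),
    FunctionalRequirement('FR-002', 'Three-way match tolerances', 'Purchase-to-Pay', 'AP team'),
]
open_items = [r.rid for r in backlog if r.status is Status.DRAFT]
print(open_items)   # ['FR-002']
```

Whether kept in a spreadsheet, a requirements tool or code, the point is the same: the metadata makes the iterations of reevaluation and deduplication queryable instead of anecdotal.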

Once brought close to a final form, the FRs are checked against the functionality available in the targeted system, or systems when several systems are considered for evaluation. Ideally all the requirements can be implemented with the proper parametrization of the systems, though that’s seldom the case, as each business has certain specifics. The gaps need to be understood, their impact evaluated, and a decision made on whether the gaps need to be implemented. In general, it’s recommended to remain close to the standard functionality, as each further gap requires further changes to the system, changes that in time can generate further quality-related and maintenance costs.

It can become a tedious effort, as in the process an impact and cost-benefit analysis needs to be performed for each gap. Therefore, the estimation of the gaps needs to occur earlier or be intermixed with their justification. Once the list of the FRs is finalized and frozen, it will be used for estimating the final costs of the project, identifying the work packages, and planning the further work. Once the FRs are frozen, any new requirements or changes to requirements (including taking out a requirement) need to go through the Change Management process, with all the consequences deriving from it – additional effort, costs, delays, etc. This can trigger again an impact and cost-benefit analysis.

The FRs are documented in a specification document (aka a functional requirements specification), which is supposed to track all the FRs through their lifetime. When evaluating the FRs against the system’s functionality it’s recommended to provide general information on how they will be implemented, respectively which system function(s) will be used for that purpose. Besides providing transparency, this information can be used as a basis for further discussions.

Seldom will all the FRs be defined upfront or be complete. Moreover, some requirements will become obsolete during the project’s execution, gaps will be downgraded to standard functionality, and vice versa. Therefore, it’s important to expect the unexpected.


28 June 2020

𖣯Strategic Management: Strategy Design (Part II: A System's View)

Strategic Management

Each time one discusses software and hardware components interacting with each other in IT, one talks about a composite referred to as a system. Even if the term Information System (IS) is related to it, a system is defined as a set of interrelated and interconnected components that can be considered together for specific purposes or simple convenience.

A component can be a piece of software or hardware, as well as persons or groups if we extend the definition. The consideration of people becomes relevant especially in the context of ecologies, in which systems are placed in a broader context that considers people’s interaction with them, as this gives rise to important behavior that impacts the system’s functioning.

Within a system each part has a role or function determined in respect to the whole as well as to the other parts. The role or function of a component is typically fixed and predefined, though there are also exceptions, especially when the scope of a component is enlarged or reduced, sometimes to the degree that the component can be removed or ignored. What one considers or does not consider as part of the system defines the system’s boundaries; it’s what distinguishes it from other systems within the environment(s) considered.

The interaction between the components comes down to the exchange, transmission and processing of data found in different aggregations ranging from signals to complex data structures. If in non-IT-based systems the changes are determined by the inflow and outflow of energy, in IT the flow is considered in terms of data in its various aggregations (information, knowledge). The data flow (also information flow) represents the ‘fluid’ that nourishes a system’s ‘organism’.

One can grasp the complexity the moment one attempts to describe a system in terms of its components and the dependencies existing between them in terms of data and processes. If in nature the processes are extrapolated, in IT they are predefined (even if the knowledge about them is not available). In addition, the less knowledge one has about the infrastructure, the higher the apparent complexity. Even if the system is not necessarily complex, the lack of knowledge and certainty about it makes it complex. The more one needs to dig for information and knowledge to get an acceptable level of understanding and logical depth, the more time is needed for designing a solution.
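Such a description can start small, e.g. by modeling components and data flows as a directed graph, which also helps estimate the ripple effect of replacing a component. The component names and the helper below are hypothetical - a minimal sketch rather than a full system model:

```python
# A minimal sketch of modeling a system as components and data flows,
# here as a directed graph; component names are illustrative assumptions.

from collections import defaultdict

flows = defaultdict(list)   # component -> components it sends data to

def add_flow(source, target, payload):
    flows[source].append((target, payload))

add_flow('ERP', 'BI', 'transactional data')
add_flow('CRM', 'ERP', 'customer master data')
add_flow('BI', 'Planning', 'aggregated KPIs')

def downstream(component, seen=None):
    """Components that directly or indirectly receive data from `component` -
    a rough estimate of the ripple effect of replacing it."""
    seen = set() if seen is None else seen
    for target, _ in flows[component]:
        if target not in seen:
            seen.add(target)
            downstream(target, seen)
    return seen

print(sorted(downstream('ERP')))   # ['BI', 'Planning']
```

Even a toy model like this makes the fragmented knowledge explicit: each flow someone writes down is one less dependency living only in people’s heads.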

Saint Exupéry’s definition of simplicity applies from a system’s functional point of view, though it doesn’t address the relative knowledge about the system, which is often implicit (in people’s heads). People have only fragmented knowledge about the system, which makes it difficult to create the whole picture. It’s typically the role of system or process operating manuals, respectively of data descriptions, to make that knowledge explicit, also establishing a foundation for common knowledge and further communication and understanding.

Between the apparent (perceived) and the real complexity of a system there’s an important gap that needs to be addressed if one wants to manage the systems adequately, respectively to simplify them. Often simplification happens when components or whole systems are replaced, consolidated or migrated, with mixes of these approaches existing as well. Simplifications at the data level (aka data harmonization) or the process level (aka process optimization and redesign) can have an important impact, being inherent to the good (optimal) functioning of systems.

Whether these changes occur in a big bang or in gradual iterations is a question of available resources and organizational capabilities, including the ability to handle such projects, respectively the impact, opportunities and risks associated with such endeavors. Beyond this, it’s important to regard the problems from a systemic and systematic point of view, in which the ecology’s role is important.


Written: Jun-2020, Last Reviewed: Mar-2024

