
01 January 2026

🔭Data Science: Policies (Just the Quotes)

"Every economic and social situation or problem is now described in statistical terms, and we feel that it is such statistics which give us the real basis of fact for understanding and analysing problems and difficulties, and for suggesting remedies. In the main we use such statistics or figures without any elaborate theoretical analysis; little beyond totals, simple averages and perhaps index numbers. Figures have become the language in which we describe our economy or particular parts of it, and the language in which we argue about policy." (Ely Devons,Essays in Economics", 1961)

"There are, indeed, plenty of ways in which statistics can help in the process of decision-taking. But exaggerated claims for the role they can play merely serve to confuse rather than clarify issues of public policy, and lead those responsible for action to oscillate between over-confidence and over-scepticism in using them." (Ely Devons,Essays in Economics", 1961)

"The formal structure of a decision problem in any area can be put into four parts: (1) the choice of an objective function denning the relative desirability of different outcomes; (2) specification of the policy alternatives which are available to the agent, or decisionmaker, (3) specification of the model, that is, empirical relations that link the objective function, or the variables that enter into it, with the policy alternatives and possibly other variables; and (4) computational methods for choosing among the policy alternatives that one which performs best as measured by the objective function." (Kenneth Arrow,The Economics of Information", 1984)

"Often, though, a policy or systems analyst is stuck with a bad model, that is, one that appeals to the analyst as adequately realistic but which is either: 1) contradicted by some data or is grossly implausible in some aspect it purports to represent, or 2) conjectural, that is, neither supported nor contradicted by data, either because data do not exist or because they are equivocal. [...] A model may have component parts that are not bad, but if, taken as a whole, it meets one of these criteria, it is a bad model." (James S Hodges, "Six (or So) Things You Can Do with a Bad Model", 1991)

"Management is not founded on observation and experiment, but on a drive towards a set of outcomes. These aims are not altogether explicit; at one extreme they may amount to no more than an intention to preserve the status quo, at the other extreme they may embody an obsessional demand for power, profit or prestige. But the scientist's quest for insight, for understanding, for wanting to know what makes the system tick, rarely figures in the manager's motivation. Secondly, and therefore, management is not, even in intention, separable from its own intentions and desires: its policies express them. Thirdly, management is not normally aware of the conventional nature of its intellectual processes and control procedures. It is accustomed to confuse its conventions for recording information with truths-about-the-business, its subjective institutional languages for discussing the business with an objective language of fact and its models of reality with reality itself." (Stanford Beer,Decision and Control", 1994)

"Garbage in, garbage out' is a sound warning for those in the computer field; it is every bit as sound in the use of statistics. Even if the 'garbage' which comes out leads to a correct conclusion, this conclusion is still tainted, as it cannot be supported by logical reasoning. Therefore, it is a misuse of statistics. But obtaining a correct conclusion from faulty data is the exception, not the rule. Bad basic data" (the 'garbage in') almost always leads to incorrect conclusions" (the 'garbage out'). Unfortunately, incorrect conclusions can lead to bad policy or harmful actions." (Herbert F Spirer et al,Misused Statistics" 2nd Ed, 1998)

"A sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. Differently from supervised learning, in this case there is no target value for each input pattern, only a reward based of how good or bad was the action taken by the agent in the existent environment." (Marley Vellasco et al,Hierarchical Neuro-Fuzzy Systems" Part II, Encyclopedia of Artificial Intelligence, 2009)

"There are three possible reasons for [the] absence of predictive power. First, it is possible that the models are misspecified. Second, it is possible that the model’s explanatory factors are measured at too high a level of aggregation [...] Third, [...] the search for statistically significant relationships may not be the strategy best suited for evaluating our model’s ability to explain real world events [...] the lack of predictive power is the result of too much emphasis having been placed on finding statistically significant variables, which may be overdetermined. Statistical significance is generally a flawed way to prune variables in regression models [...] Statistically significant variables may actually degrade the predictive accuracy of a model [...] [By using] models that are constructed on the basis of pruning undertaken with the shears of statistical significance, it is quite possible that we are winnowing our models away from predictive accuracy." (Michael D Ward et al,The perils of policy by p-value: predicting civil conflicts" Journal of Peace Research 47, 2010)

"Using random processes in our models allows economists to capture the variability of time series data, but it also poses challenges to model builders. As model builders, we must understand the uncertainty from two different perspectives. Consider first that of the econometrician, standing outside an economic model, who must assess its congruence with reality, inclusive of its random perturbations. An econometrician’s role is to choose among different parameters that together describe a family of possible models to best mimic measured real world time series and to test the implications of these models. I refer to this as outside uncertainty. Second, agents inside our model, be it consumers, entrepreneurs, or policy makers, must also confront uncertainty as they make decisions. I refer to this as inside uncertainty, as it pertains to the decision-makers within the model. What do these agents know? From what information can they learn? With how much confidence do they forecast the future? The modeler’s choice regarding insiders’ perspectives on an uncertain future can have significant consequences for each model’s equilibrium outcomes." (Lars P Hansen,Uncertainty Outside and Inside Economic Models", [Nobel lecture] 2013)

"Comparisons are the lifeblood of empirical studies. We can’t determine if a medicine, treatment, policy, or strategy is effective unless we compare it to some alternative. But watch out for superficial comparisons: comparisons of percentage changes in big numbers and small numbers, comparisons of things that have nothing in common except that they increase over time, comparisons of irrelevant data. All of these are like comparing apples to prunes." (Gary Smith,Standard Deviations", 2014)

"it stands, in the context of computational learning, for a family of algorithms aimed at approximating the best policy to play in a certain environment" (without building an explicit model of it) by increasing the probability of playing actions that improve the rewards received by the agent." (Fernando S Oliveira,Reinforcement Learning for Business Modeling", 2014)

"We know what forecasting is: you start in the present and try to look into the future and imagine what it will be like. Backcasting is the opposite: you state your desired vision of the future as if it’s already happened, and then work backward to imagine the practices, policies, programs, tools, training, and people who worked in concert in a hypothetical past" (which takes place in the future) to get you there." (Eben Hewitt,Technology Strategy Patterns: Architecture as strategy" 2nd Ed., 2019)

"Once we know something is fat-tailed, we can use heuristics to see how an exposure there reacts to random events: how much is a given unit harmed by them. It is vastly more effective to focus on being insulated from the harm of random events than try to figure them out in the required details" (as we saw the inferential errors under thick tails are huge). So it is more solid, much wiser, more ethical, and more effective to focus on detection heuristics and policies rather than fabricate statistical properties." (Nassim N Taleb,Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications" 2nd Ed., 2022)

01 December 2025

🕸Systems Engineering: Policies (Just the Quotes)

"[System dynamics] is an approach that should help in important top-management problems [...] The solutions to small problems yield small rewards. Very often the most important problems are but little more difficult to handle than the unimportant. Many [people] predetermine mediocre results by setting initial goals too low. The attitude must be one of enterprise design. The expectation should be for major improvement [...] The attitude that the goal is to explain behavior; which is fairly common in academic circles, is not sufficient. The goal should be to find management policies and organizational structures that lead to greater success." (Jay W Forrester, "Industrial Dynamics", 1961)

"In complex systems cause and effect are often not closely related in either time or space. The structure of a complex system is not a simple feedback loop where one system state dominates the behavior. The complex system has a multiplicity of interacting feedback loops. Its internal rates of flow are controlled by nonlinear relationships. The complex system is of high order, meaning that there are many system states" (or levels). It usually contains positive-feedback loops describing growth processes as well as negative, goal-seeking loops. In the complex system the cause of a difficulty may lie far back in time from the symptoms, or in a completely different and remote part of the system. In fact, causes are usually found, not in prior events, but in the structure and policies of the system." (Jay W Forrester, "Urban dynamics", 1969)

"In complex systems cause and effect are often not closely related in either time or space. The structure of a complex system is not a simple feedback loop where one system state dominates the behavior. The complex system has a multiplicity of interacting feedback loops. Its internal rates of flow are controlled by nonlinear relationships. The complex system is of high order, meaning that there are many system states" (or levels). It usually contains positive-feedback loops describing growth processes as well as negative, goal-seeking loops. In the complex system the cause of a difficulty may lie far back in time from the symptoms, or in a completely different and remote part of the system. In fact, causes are usually found, not in prior events, but in the structure and policies of the system." (Jay Wright Forrester, "Urban dynamics", 1969)

"A model for simulating dynamic system behavior requires formal policy descriptions to specify how individual decisions are to be made. Flows of information are continuously converted into decisions and actions. No plea about the inadequacy of our understanding of the decision-making processes can excuse us from estimating decision-making criteria. To omit a decision point is to deny its presence - a mistake of far greater magnitude than any errors in our best estimate of the process." (Jay W Forrester,Policies, decisions and information sources for modeling", 1994)

"First, social systems are inherently insensitive to most policy changes that people choose in an effort to alter the behavior of systems. In fact, social systems draw attention to the very points at which an attempt to intervene will fail. Human intuition develops from exposure to simple systems. In simple systems, the cause of a trouble is close in both time and space to symptoms of the trouble. If one touches a hot stove, the burn occurs here and now; the cause is obvious. However, in complex dynamic systems, causes are often far removed in both time and space from the symptoms. True causes may lie far back in time and arise from an entirely different part of the system from when and where the symptoms occur. However, the complex system can mislead in devious ways by presenting an apparent cause that meets the expectations derived from simple systems." (Jay W Forrester, "Counterintuitive Behavior of Social Systems", 1995)

"Second, social systems seem to have a few sensitive influence points through which behavior can be changed. These high-influence points are not where most people expect. Furthermore, when a high-influence policy is identified, the chances are great that a person guided by intuition and judgment will alter the system in the wrong direction." (Jay W Forrester, "Counterintuitive Behavior of Social Systems", 1995)

"System dynamics models are not derived statistically from time-series data. Instead, they are statements about system structure and the policies that guide decisions. Models contain the assumptions being made about a system. A model is only as good as the expertise which lies behind its formulation. A good computer model is distinguished from a poor one by the degree to which it captures the essence of a system that it represents. Many other kinds of mathematical models are limited because they will not accept the multiple-feedback-loop and nonlinear nature of real systems." (Jay W Forrester, "Counterintuitive Behavior of Social Systems", 1995)

"Third, social systems exhibit a conflict between short-term and long-term consequences of a policy change. A policy that produces improvement in the short run is usually one that degrades a system in the long run. Likewise, policies that produce long-run improvement may initially depress behavior of a system. This is especially treacherous. The short run is more visible and more compelling. Short-run pressures speak loudly for immediate attention. However, sequences of actions all aimed at short-run improvement can eventually burden a system with long-run depressants so severe that even heroic short-run measures no longer suffice. Many problems being faced today are the cumulative result of short-run measures taken in prior decades." (Jay W Forrester, "Counterintuitive Behavior of Social Systems", 1995)

"[...] information feedback about the real world not only alters our decisions within the context of existing frames and decision rules but also feeds back to alter our mental models. As our mental models change we change the structure of our systems, creating different decision rules and new strategies. The same information, processed and interpreted by a different decision rule, now yields a different decision. Altering the structure of our systems then alters their patterns of behavior. The development of systems thinking is a double-loop learning process in which we replace a reductionist, narrow, short-run, static view of the world with a holistic, broad, long-term, dynamic view and then redesign our policies and institutions accordingly." (John D Sterman, "Business dynamics: Systems thinking and modeling for a complex world", 2000)

"To avoid policy resistance and find high leverage policies requires us to expand the boundaries of our mental models so that we become aware of and understand the implications of the feedbacks created by the decisions we make. That is, we must learn about the structure and dynamics of the increasingly complex systems in which we are embedded." (John D Sterman, "Business dynamics: Systems thinking and modeling for a complex world", 2000)

"Deep change in mental models, or double-loop learning, arises when evidence not only alters our decisions within the context of existing frames, but also feeds back to alter our mental models. As our mental models change, we change the structure of our systems, creating different decision rules and new strategies. The same information, interpreted by a different model, now yields a different decision. Systems thinking is an iterative learning process in which we replace a reductionist, narrow, short-run, static view of the world with a holistic, broad, long-term, dynamic view, reinventing our policies and institutions accordingly." (John D Sterman, "Learning in and about complex systems", Systems Thinking Vol. 3 2003)

"System dynamics is an approach to understanding the behaviour of over time. It deals with internal feedback loops and time delays that affect the behaviour of the entire system. It also helps the decision maker untangle the complexity of the connections between various policy variables by providing a new language and set of tools to describe. Then it does this by modeling the cause and effect relationships among these variables." (Raed M Al-Qirem & Saad G Yaseen, "Modelling a Small Firm in Jordan Using System Dynamics", 2010)

"Complex systems defy intuitive solutions. Even a third-order, linear differential equation is unsolvable by inspection. Yet, important situations in management, economics, medicine, and social behavior usually lose reality if simplified to less than fifth-order nonlinear dynamic systems. Attempts to deal with nonlinear dynamic systems using ordinary processes of description and debate lead to internal inconsistencies. Underlying assumptions may have been left unclear and contradictory, and mental models are often logically incomplete. Resulting behavior is likely to be contrary to that implied by the assumptions being made about' underlying system structure and governing policies." (Jay W. Forrester, "Modeling for What Purpose?", The Systems Thinker Vol. 24" (2), 2013)

"One of the remarkable features of these complex systems created by replicator dynamics is that infinitesimal differences in starting positions create vastly different patterns. This sensitive dependence on initial conditions is often called the butterfly - effect aspect of complex systems - small changes in the replicator dynamics or in the starting point can lead to enormous differences in outcome, and they change one’s view of how robust the current reality is. If it is complex, one small change could have led to a reality that is quite different." (David Colander & Roland Kupers, "Complexity and the art of public policy : solving society’s problems from the bottom up", 2014)

"The work around the complex systems map supported a concentration on causal mechanisms. This enabled poor system responses to be diagnosed as the unanticipated effects of previous policies as well as identification of the drivers of the sector. Understanding the feedback mechanisms in play then allowed experimentation with possible future policies and the creation of a coherent and mutually supporting package of recommendations for change. " (David C Lane et al, "Blending systems thinking approaches for organisational analysis: reviewing child protection", 2015)

09 October 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 107: The Rise of Autonomous AI: Learning, Reasoning, and Evolving)

Prompt Engineering Series

Prompt: "write a post of 600 words that depicts the evolution of AI over next years related to its autonomy around topics like reinforcement learning, causal reasoning, and self-improving systems" 

Introduction

As we look ahead to the next decade, Artificial Intelligence (AI) is poised to undergo a profound transformation - from a tool that executes predefined tasks to an autonomous system capable of learning, reasoning, and evolving on its own. This shift is being driven by three foundational pillars: reinforcement learning, causal reasoning, and self-improving systems. Together, they are shaping a future where AI doesn’t just follow instructions - it understands, adapts, and innovates.

Reinforcement Learning: The Engine of Adaptive Behavior

Reinforcement learning (RL) has already demonstrated its power in mastering complex games like Go and StarCraft. But its true potential lies in real-world applications where environments are dynamic, uncertain, and require continuous adaptation.

In the coming years, RL will be central to developing AI agents that can operate autonomously in high-stakes domains - think autonomous vehicles navigating unpredictable traffic, robotic surgeons adapting to patient-specific anatomy, or financial agents optimizing portfolios in volatile markets. These agents learn by trial and error, receiving feedback from their environment and adjusting their strategies accordingly.

What sets RL apart is its ability to optimize long-term outcomes, not just immediate rewards. This makes it ideal for tasks that require planning, exploration, and balancing short-term sacrifices for long-term gains - hallmarks of intelligent behavior.
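
To make the distinction between immediate and long-term rewards concrete, here is a minimal tabular Q-learning sketch in Python; the chain environment, reward values, and hyperparameters are invented for illustration and are not part of the original text:

import random

# Toy chain world (assumed for illustration): each step right costs -1,
# but reaching the final state pays +10, so a purely greedy agent that
# avoids the per-step cost never discovers the larger delayed reward.
N_STATES, GOAL_REWARD, STEP_REWARD = 5, 10.0, -1.0
ACTIONS = [0, 1]  # 0 = left, 1 = right
alpha, gamma, epsilon, episodes = 0.1, 0.9, 0.1, 2000

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    # returns (next_state, reward); the last state is terminal
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return (nxt, GOAL_REWARD) if nxt == N_STATES - 1 else (nxt, STEP_REWARD)

for _ in range(episodes):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: the gamma-discounted best next value is what
        # lets the agent trade short-term sacrifice for long-term gain
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# the learned policy maps each state to its highest-valued action
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})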

Causal Reasoning: From Correlation to Understanding

While traditional machine learning excels at identifying patterns, it often struggles with understanding why those patterns exist. This is where causal reasoning comes in. By modeling cause-and-effect relationships, AI can move beyond correlation to make more robust, generalizable decisions.

Causal AI will be critical in domains like healthcare, where understanding the root cause of a symptom can mean the difference between life and death. It will also play a pivotal role in policy-making, climate modeling, and scientific discovery - areas where interventions must be based on more than just statistical associations.

In the near future, we’ll see AI systems that can simulate counterfactuals ("What would happen if we changed X?"), identify hidden confounders, and make decisions that are not only data-driven but causally sound. This will lead to more trustworthy and explainable AI, capable of navigating complex, real-world scenarios with greater confidence.
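
As a minimal sketch of what "simulate counterfactuals" means in practice, the following Python snippet hard-codes a two-variable structural causal model; the equations and coefficients are invented solely for the example:

import random

# Assumed toy structural causal model:
#   X := U_x          (e.g. a treatment or policy dose)
#   Y := 2*X + U_y    (the outcome depends causally on X)
# A counterfactual keeps the same background noise (U_x, U_y) that
# produced the observed world, but replays it under do(X = x').

def simulate(u_x, u_y, do_x=None):
    x = u_x if do_x is None else do_x  # the intervention overrides X's mechanism
    y = 2 * x + u_y
    return x, y

random.seed(0)
u_x, u_y = random.gauss(0, 1), random.gauss(0, 1)

x_obs, y_obs = simulate(u_x, u_y)               # factual world
_, y_cf = simulate(u_x, u_y, do_x=x_obs + 1.0)  # "what if X had been one unit higher?"

print(f"observed Y: {y_obs:.2f}, counterfactual Y: {y_cf:.2f}")
# y_cf - y_obs recovers the causal effect (2.0) exactly, something a
# purely correlational fit cannot guarantee under confounding.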

Self-Improving Systems: AI That Evolves

Perhaps the most transformative development on the horizon is the emergence of self-improving AI systems. These are models that can iteratively refine themselves - learning from new data, adapting to novel situations, and even rewriting parts of their own code or architecture to become more efficient.

This concept, often referred to as recursive self-improvement, hints at a future where AI systems can autonomously enhance their capabilities without human intervention. While we are still in the early stages, foundational work in meta-learning (learning to learn) and neural architecture search is paving the way.

In practical terms, this means AI systems that become better over time at tasks like language understanding, robotics, or scientific research. Imagine a drug discovery AI that not only identifies promising compounds but also improves its own algorithms to accelerate future discoveries.
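
Recursive self-improvement is still speculative, but a toy self-tuning loop conveys the flavor: the program below improves not only its answer but also its own search procedure. Everything in it (the objective, the step rule, the constants) is an assumption made up for the illustration:

import random

def objective(x):
    return -(x - 3.0) ** 2  # assumed toy objective, maximum at x = 3

x, step, best = 0.0, 1.0, objective(0.0)
for _ in range(200):
    candidate = x + random.uniform(-step, step)
    score = objective(candidate)
    if score > best:
        x, best = candidate, score
        step *= 1.1   # success: the procedure widens its own search
    else:
        step *= 0.95  # failure: it narrows its own search instead

print(f"x = {x:.3f}, final step size = {step:.4f}")  # x converges near 3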

The Path Forward

The convergence of reinforcement learning, causal reasoning, and self-improvement is setting the stage for a new era of AI - one where machines are not just tools, but collaborators capable of independent thought and action. However, with great autonomy comes great responsibility.

As AI systems gain more control over decisions that impact human lives, ensuring alignment with human values, robust safety mechanisms, and transparent decision-making becomes paramount. We must invest in AI governance frameworks that prioritize ethical considerations, fairness, and accountability.

The next chapter of AI is not just about building smarter machines - it’s about building systems we can trust. Autonomous AI, when guided by thoughtful design and ethical foresight, has the potential to amplify human potential, solve complex global challenges, and unlock new frontiers of innovation.

Just try the prompt on Copilot or your favorite AI-powered assistant! Have you got a different/similar result? How big or important is the difference? Any other thoughts?
Just share the link to the post with me and I'll add it to this post as a resource!

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

05 July 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 42: How AI Can Help in Understanding Complex Systems)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on how AI can help in understanding complex systems"

Introduction

Artificial Intelligence (AI) is revolutionizing the way we analyze and interpret complex systems - from financial markets to biological ecosystems. These systems consist of interconnected components that interact in unpredictable ways, making them difficult to understand using traditional methods. AI’s ability to process vast amounts of data, recognize patterns, and simulate scenarios makes it an invaluable tool for deciphering complexity.

1. AI’s Role in Analyzing Complex Systems

Complex systems exist in various domains, including finance, healthcare, transportation, and environmental science. AI enhances our understanding by:

  • Identifying hidden patterns in large datasets.
  • Predicting system behavior based on historical trends.
  • Simulating different scenarios to assess potential outcomes.

For example, AI can analyze financial markets to predict economic trends or optimize traffic systems to reduce congestion.
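
As a toy illustration of the "simulating different scenarios" bullet above, the short Python sketch below runs a Monte Carlo projection of some quantity (a price, a traffic load) under random shocks; the drift and volatility numbers are invented for the example, not taken from any real dataset:

import random
import statistics

def simulate_path(start=100.0, drift=0.0005, vol=0.02, horizon=250):
    value = start
    for _ in range(horizon):
        value *= 1 + random.gauss(drift, vol)  # one random shock per period
    return value

random.seed(1)
outcomes = sorted(simulate_path() for _ in range(10_000))
print(f"median outcome : {statistics.median(outcomes):.1f}")
print(f"5th percentile : {outcomes[len(outcomes) // 20]:.1f}")  # downside scenario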

2. AI in Explainable Models for Complex Systems

One challenge in understanding complex systems is the black-box nature of AI models. Explainable AI (XAI) helps by:

  • Clarifying AI decision-making processes, making them more transparent.
  • Providing interpretable insights, ensuring users understand AI-generated conclusions.
  • Enhancing trust in AI-driven predictions, especially in critical sectors like healthcare and finance.

By making AI more explainable, researchers and policymakers can verify and refine AI-driven insights.

3. AI in Scientific Research and Discovery

AI accelerates scientific discovery by analyzing complex biological, chemical, and physical systems. Some applications include:

  • AI-driven drug discovery, identifying potential treatments faster.
  • Climate modeling, predicting environmental changes with greater accuracy.
  • Genomic analysis, uncovering genetic patterns linked to diseases.

AI’s ability to process massive datasets enables breakthroughs in fields that rely on complex system analysis.

4. AI in Decision-Making and Policy Development

Governments and organizations use AI to navigate complex policy decisions by:

  • Assessing economic impacts of policy changes.
  • Optimizing resource allocation in healthcare and infrastructure.
  • Enhancing cybersecurity, detecting threats in interconnected digital systems.

AI-driven insights help policymakers make informed decisions in dynamic environments.

Conclusion: AI as a Key to Understanding Complexity

AI’s ability to analyze, explain, and predict complex systems makes it an essential tool for scientific research, policy development, and industry innovation. By leveraging AI, humanity can better understand and manage intricate systems, leading to smarter decisions and groundbreaking discoveries.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

04 July 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 41: How AI Can Play Devil’s Advocate - Challenging Assumptions and Expanding Perspectives)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on how AI can play devil's advocate"

Introduction

Artificial Intelligence (AI) is often seen as a tool for efficiency, automation, and problem-solving. However, one of its most intriguing capabilities is its ability to play devil’s advocate - challenging assumptions, questioning biases, and presenting alternative viewpoints. By acting as a skeptical counterbalance, AI can help individuals and organizations think critically, refine arguments, and explore diverse perspectives.

1. What Does It Mean to Play Devil’s Advocate?

Playing devil’s advocate means arguing against a prevailing opinion or assumption, even if one does not personally agree with the opposing stance. This approach is valuable in:

  • Debates and discussions, where opposing viewpoints strengthen arguments.
  • Decision-making, ensuring all possibilities are considered.
  • Problem-solving, where unconventional perspectives lead to innovative solutions.

AI, with its ability to analyze vast amounts of data and generate counterarguments, is uniquely positioned to take on this role.

2. How AI Challenges Confirmation Bias

One of AI’s most important functions as a devil’s advocate is breaking the confirmation bias loop - the tendency for people to seek out information that supports their existing beliefs while ignoring contradictory evidence. AI can:

  • Identify logical inconsistencies in arguments.
  • Present alternative viewpoints, even if they challenge popular opinions.
  • Encourage critical thinking by questioning assumptions.

By disrupting confirmation bias, AI helps individuals and organizations make more informed and balanced decisions.

3. AI in Decision-Making and Policy Development

AI-driven devil’s advocacy is particularly useful in policy-making, business strategy, and ethical debates. Some applications include:

  • Corporate decision-making: AI can highlight risks and alternative strategies before executives finalize plans.
  • Legal and ethical discussions: AI can present opposing viewpoints in debates about regulations and governance.
  • Scientific research: AI can challenge hypotheses, ensuring rigorous testing and validation.

By forcing individuals to consider alternative perspectives, AI enhances objectivity and rational decision-making.

4. AI’s Role in Amplifying Minority Voices

AI can also serve as a mediator for underrepresented perspectives, ensuring that minority viewpoints are heard in group discussions. AI-driven devil’s advocate systems:

  • Reduce social influence biases, preventing dominant voices from overshadowing others.
  • Encourage diverse perspectives, fostering more inclusive decision-making.
  • Improve psychological safety, allowing individuals to express dissenting opinions anonymously.

This approach ensures that critical discussions remain balanced and representative.

5. The Challenges of AI as a Devil’s Advocate

While AI can challenge assumptions, it also faces limitations:

  • AI models may reinforce biases if trained on skewed data.
  • AI lacks true human intuition, making some counterarguments overly rigid.
  • AI’s responses depend on prompts, requiring careful input to generate meaningful opposition.

To maximize AI’s effectiveness as a devil’s advocate, developers must ensure diverse training data and refine AI’s ability to engage in nuanced discussions.

Conclusion: AI as a Catalyst for Critical Thinking

AI’s ability to challenge assumptions, disrupt biases, and amplify diverse perspectives makes it a powerful tool for critical thinking and informed decision-making. By playing devil’s advocate, AI helps individuals and organizations refine arguments, explore alternative viewpoints, and make more balanced choices.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

02 July 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 39: How Fast Does Humanity Move in Enforcing Policies to Cope with AI’s Rapid Growth?)

Prompt Engineering Series

Introduction

Artificial Intelligence (AI) is evolving at an unprecedented pace, reshaping industries, governance, and daily life. However, the speed at which AI advances often outpaces policy enforcement, leaving governments and institutions struggling to regulate its impact effectively. While AI’s rapid development demands swift policy responses, humanity’s ability to enforce regulations remains slow and fragmented due to bureaucratic hurdles, ethical dilemmas, and global inconsistencies.

1. The Lag Between AI Innovation and Policy Enforcement

AI’s growth is exponential, but policy enforcement follows a linear and bureaucratic process. Some key reasons for this lag include:

  • Complex legislative procedures: Governments require extensive debates, approvals, and revisions before implementing AI regulations.
  • Ethical uncertainties: Policymakers struggle to define AI’s ethical boundaries, delaying enforcement.
  • Corporate resistance: Tech companies often lobby against strict AI regulations, slowing policy adoption.

This delay creates a regulatory gap, where AI evolves faster than laws can govern its use.

2. Global AI Policy Enforcement: A Fragmented Approach

Different countries enforce AI policies at varying speeds, leading to inconsistent regulations worldwide. Some examples include:

  • The European Union (EU): The EU AI Act is one of the most comprehensive AI regulations, but its enforcement is slow due to legal complexities.
  • The United States: AI policy enforcement is largely decentralized, with states implementing different regulations.
  • China: AI governance is strict, but enforcement focuses on state control rather than ethical concerns.

This fragmented approach makes global AI regulation difficult, as different regions prioritize different aspects of AI governance.

3. The Challenges of AI Policy Enforcement

Even when AI policies are established, enforcement faces several challenges:

  • Lack of technical expertise: Policymakers often lack AI knowledge, making enforcement ineffective.
  • Corporate non-compliance: Companies may bypass regulations through loopholes or offshore operations.
  • Slow adaptation to AI advancements: Policies become outdated quickly as AI evolves.

Without continuous policy updates, AI enforcement remains inefficient and reactive rather than proactive.

4. The Need for Dynamic AI Regulations

To cope with AI’s fast pace, experts propose dynamic AI regulations, where policies evolve alongside technological advancements. Some solutions include:

  • AI governance frameworks that adapt to new AI developments.
  • International AI cooperation to create unified global standards.
  • Automated AI auditing systems to monitor compliance in real time.

By implementing flexible and adaptive policies, humanity can keep pace with AI’s rapid growth.

Conclusion: Humanity Must Accelerate AI Policy Enforcement

While AI evolves rapidly, policy enforcement remains slow and fragmented, creating regulatory gaps that could lead to ethical and security risks. Governments, businesses, and researchers must accelerate AI policy development, ensuring regulations keep pace with technological advancements. Without proactive enforcement, AI’s impact on society could become unmanageable, making responsible AI governance more urgent than ever.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

01 July 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 38: The Growing Backlog of AI Policies - How Much Needs to Be Done?)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on how big is the backlog of policies that need to be made" because of AI

Introduction

Artificial Intelligence (AI) is advancing at an unprecedented pace, reshaping industries, governance, and daily life. However, the rapid evolution of AI has outpaced regulatory frameworks, leaving governments, businesses, and institutions struggling to catch up with necessary policies. The backlog of AI-related policies is substantial, covering areas such as ethics, security, employment, privacy, and accountability.

1. The Urgent Need for AI Governance

AI’s widespread adoption has created legal and ethical gaps that require immediate policy intervention. Some of the most pressing areas include:

  • AI bias and fairness: Policies must ensure AI systems do not reinforce discrimination.
  • Data privacy and security: AI-driven data collection raises concerns about surveillance and personal privacy.
  • AI accountability: Governments must define who is responsible when AI makes errors or unethical decisions.
  • AI in employment: Regulations must address job displacement and workforce adaptation.

Without clear policies, AI’s impact on society could become unpredictable and potentially harmful.

2. The Backlog of AI Policies Across Industries

Different sectors require specific AI regulations, but many industries are lagging behind in policy development. Some examples include:

  • Healthcare: AI-powered diagnostics and treatment recommendations need strict oversight to prevent errors.
  • Finance: AI-driven trading algorithms require regulations to prevent market manipulation.
  • Education: AI-powered learning tools must be monitored to ensure ethical use and prevent misinformation.
  • Law enforcement: AI surveillance and facial recognition technologies raise concerns about civil liberties.

Governments and institutions must accelerate policy development to keep up with AI’s rapid integration into these fields.

3. Global AI Policy Challenges

AI regulation is not uniform across countries, leading to conflicting policies and enforcement difficulties. Some challenges include:

  • Different AI ethics standards: Countries have varying perspectives on AI’s role in governance and security.
  • Cross-border AI regulations: AI-driven cyber threats and misinformation require international cooperation.
  • Corporate AI policies: Businesses must align AI governance with global standards to ensure compliance.

The backlog of AI policies is not just national - it’s global, requiring collaboration between governments, corporations, and researchers.

4. The Role of AI Policy Frameworks

Organizations and governments are working to develop AI policy frameworks, but progress is slow. Some key initiatives include:

  • The EU AI Act, which aims to regulate AI applications based on risk levels.
  • ISO 42001, an AI management system standard for responsible AI governance.
  • Corporate AI policies, guiding ethical AI use within businesses.

Despite these efforts, the backlog of AI policies remains significant, requiring faster implementation and enforcement.

Conclusion: AI Policy Development Must Accelerate

The backlog of AI policies is large and growing, covering ethics, security, employment, privacy, and accountability. Governments, businesses, and institutions must prioritize AI governance, ensuring regulations keep pace with technological advancements. Without proactive policy development, AI’s impact on society could become unmanageable, making responsible AI regulation more urgent than ever.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

04 June 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 11: How the Danger of AI on Society Can Be Alleviated)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on how the danger of AI on society can be alleviated"

Introduction

Artificial Intelligence (AI) is rapidly transforming industries, improving efficiency, and driving innovation. However, as AI becomes more embedded in society, concerns about its risks grow. Issues such as bias, privacy invasion, job displacement, and security threats require proactive solutions. Rather than fearing AI’s expansion, society must focus on responsible development and regulation to alleviate potential dangers.

1. Implementing Ethical AI Frameworks

One of the most critical steps in reducing AI-related risks is the implementation of ethical guidelines. AI should be developed in ways that align with human values, fairness, and inclusivity. Strategies include:

  • Bias reduction: Ensuring AI models are trained on diverse and representative datasets to prevent discrimination.
  • Explainability: AI decisions must be transparent, allowing users to understand how conclusions are reached.
  • Accountability: Companies should audit AI systems regularly to detect ethical violations.

Establishing strong governance structures ensures AI remains a positive force rather than a tool for unchecked automation.

2. Strengthening Data Privacy Regulations

AI depends on vast amounts of data, raising concerns about misuse, surveillance, and privacy violations. Strengthening privacy laws ensures AI operates within ethical boundaries. Governments and organizations can:

  • Enforce data protection laws such as GDPR and CCPA to regulate AI-driven data collection.
  • Promote secure data storage to prevent breaches and cyberattacks.
  • Enhance consumer control over their personal information, allowing users to opt out of unnecessary AI-driven data tracking.

By prioritizing privacy and security, society can ensure AI is used ethically and responsibly.

3. Addressing AI-Driven Job Displacement

Automation powered by AI replaces human labor in various industries, creating concerns about mass unemployment and economic instability. Governments, businesses, and educational institutions must:

  • Invest in workforce reskilling to help displaced workers transition to AI-related roles.
  • Encourage AI augmentation rather than total automation - allowing AI to assist workers rather than replace them.
  • Develop policies that support AI-driven job creation while mitigating mass layoffs.

A balanced approach ensures that AI improves productivity without harming employment opportunities.

4. Regulating AI in Autonomous Systems

AI plays a key role in autonomous vehicles, robotics, and military applications, which can pose serious risks if improperly managed. To minimize dangers:

  • Governments must set strict regulations for self-driving technology to prevent accidents.
  • AI-powered weaponry should be internationally regulated to prevent autonomous conflicts.
  • AI-driven healthcare tools must be monitored by professionals to avoid errors in medical diagnostics.

Regulating AI applications in critical sectors prevents unintended harm to society.

5. Combating AI-Generated Misinformation

AI-driven misinformation - such as deepfakes, AI-generated fake news, and manipulated social media content - can harm democracy and trust in digital spaces. Solutions include:

  • Developing AI fact-checking systems to detect and flag misleading information.
  • Educating the public on recognizing AI-generated fake content.
  • Enforcing platform accountability, requiring tech companies to monitor and moderate AI-driven misinformation.

AI’s potential for deception must be controlled to preserve trust in digital communication.

Conclusion: Shaping AI for a Safe Future

AI holds enormous potential, but without ethical development and responsible policies, it can become a threat rather than a benefit. By implementing fairness guidelines, regulating data usage, addressing job displacement, controlling autonomous AI, and combating misinformation, society can steer AI toward progress rather than chaos.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

29 April 2025

🏭🗒️Microsoft Fabric: Data Loss Prevention (DLP) in Purview [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 10-Jun-2025

[Microsoft Purview] Data Loss Prevention (DLP)
  • {def} the practice of protecting sensitive data to reduce the risk from oversharing [2]
    • implemented by defining and applying DLP policies [2]
  • {benefit} helps to protect sensitive information with policies that automatically detect, monitor, and control the sharing or movement of sensitive data [1]
    • administrators can customize rules to block, restrict, or alert when sensitive data is transferred to prevent accidental or malicious data leaks [1]
  • {concept} DLP policies
    • allow to monitor the activities users take on sensitive items and then take protective actions [2]
      • applies to sensitive items 
        • at rest
        • in transit [2]
        • in use [2]
      • created and maintained in the Microsoft Purview portal [2]
    • {scope} only supported for Power BI semantic models [1]
    • {action} show a pop-up policy tip to the user that warns that they might be trying to share a sensitive item inappropriately [2]
    • {action} block the sharing and, via a policy tip, allow the user to override the block and capture the users' justification [2]
    • {action} block the sharing without the override option [2]
    • {action} [data at rest] sensitive items can be locked and moved to a secure quarantine location [2]
    • {action} sensitive information won't be displayed 
      • e.g. Teams chat
  • DLP reports
    • provides data from monitoring policy matches and actions, to user activities [2]
      • used as basis for tuning policies and triage actions taken on sensitive items [2]
    • telemetry uses M365 audit logs and processes the data for the different reporting tools [2]
      • M365 provides with visibility into risky user activities [2]
      • scans the audit logs for risky activities and runs them through a correlation engine to find activities that are occurring at a high volume [1]
        • no DLP policies are required [2]
  • {feature} detects sensitive items by using deep content analysis [2] (see the toy sketch after these notes)
    • ⇐ not by just a simple text scan [2]
    • based on
      • keywords matching [2]
      • evaluation of regular expressions [2] 
      • internal function validation [2]
      • secondary data matches that are in proximity to the primary data match [2]
      • ML algorithms and other methods to detect content that matches DLP policies
    • all DLP monitored activities are recorded to the Microsoft 365 Audit log [2]
  • DLP lifecycle
    • {phase} plan for DLP
      • train and acclimate users to DLP practices on well-planned and tuned policies [2]
      • {recommendation} use policy tips to raise awareness with users before changing the policy status from simulation mode to more restrictive modes [2]
    • {phase} prepare for DLP
    • {phase} deploy policies in production
      • {action} define control objectives, and how they apply across workloads [2]
      • {action} draft a policy that embodies the objectives
      • {action} start with one workload at a time, or across all workloads - there's no impact yet
      • {feature} implement policies in simulation mode
        • {benefit} allows to evaluate the impact of controls
          • the actions defined in a policy aren't applied yet
        • {benefit} allows to monitor the outcomes of the policy and fine-tune it so that it meets the control objectives while ensuring it doesn't adversely or inadvertently impacting valid user workflows and productivity [2]
          • e.g. adjusting the locations and people/places that are in or out of scope
          • e.g. tune the conditions that are used to determine if an item and what is being done with it matches the policy
          • e.g. the sensitive information definition/s
          • e.g. add new controls
          • e.g. add new people
          • e.g. add new restricted apps
          • e.g. add new restricted sites
        • {step} enable the control and tune policies [2]
          • policies take effect about an hour after being turned on [2]
      • {action} create DLP policy 
      • {action} deploy DLP policy 
  • DLP alerts 
    • alerts generated when a user performs an action that meets the criteria of a DLP policy [2]
      • there are incident reports configured to generate alerts [2]
      • {limitation} available in the alerts dashboard for 30 days [2]
    • DLP posts the alert for investigation in the DLP Alerts dashboard
    • {tool} DLP Alerts dashboard 
      • allows to view alerts, triage them, set investigation status, and track resolution
        • routed to Microsoft Defender portal 
        • {limitation} available for six months [2]
      • {constraint} administrative unit restricted admins see the DLP alerts for their administrative unit only [2]
  • {concept} egress activities (aka exfiltration)
    • {def} actions related to exiting or leaving a space, system or network [2]
  • {concept}[Microsoft Fabric] policy
    • when a DLP policy detects a supported item type containing sensitive information, the actions configured in the policy are triggered [3]
    • {feature} Activity explorer
      • allows to view Data from DLP for Fabric and Power BI
      • for accessing the data, user's account must be a member of any of the following roles or higher [3]
        • Compliance administrator
        • Security administrator
        • Compliance data administrator
        • Global Administrator 
          • {warning} a highly privileged role that should only be used in scenarios where a lesser privileged role can't be used [3]
        • {recommendation} use a role with the fewest permissions [3]
    • {warning} DLP evaluation workloads impact capacity consumption [3]
    • {action} define policy
      • in the data loss prevention section of the Microsoft Purview portal [3]
      • allows to specify 
        •  conditions 
          • e.g. sensitivity labels
        •  sensitive info types that should be detected [3]
      • [semantic model] evaluated against DLP policies 
        • whenever one of the following events occurs:
          • publish
          • republish
          • on-demand refresh
          • scheduled refresh
        •  the evaluation  doesn't occur if either of the following is true
          • the initiator of the event is an account using service principal authentication [3]
          • the semantic model owner is a service principal [3]
      • [lakehouse] evaluated against DLP policies when the data within a lakehouse undergoes a change
        • e.g. getting new data, connecting a new source, adding or updating existing tables, etc. [3]
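
As a toy illustration of the detection idea noted above (keyword matching, regular expressions, and secondary matches in proximity to the primary match), here is a short Python sketch; it is in no way the Purview engine, and the pattern, keyword list, and proximity window are assumptions made for the example:

import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude credit-card-like pattern
KEYWORDS = ("credit card", "card number", "visa", "mastercard")  # assumed list
PROXIMITY = 60  # characters of context inspected around the primary match

def detect(text):
    hits = []
    for m in CARD_RE.finditer(text):
        window = text[max(0, m.start() - PROXIMITY): m.end() + PROXIMITY].lower()
        if any(k in window for k in KEYWORDS):  # secondary match in proximity
            hits.append(m.group().strip())
    return hits

print(detect("Please charge my credit card 4111 1111 1111 1111 for the order."))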

References:
[1] Microsoft Learn (2025) Learn about data loss prevention [link]
[2] Microsoft Learn (2024) Purview: Learn about data loss prevention [link]
[3] Microsoft Learn (2025) Get started with Data loss prevention policies for Fabric and Power BI [link]

Resources:
[R1] Microsoft Fabric Updates Blog (2024) Secure Your Data from Day One: Best Practices for Success with Purview Data Loss Prevention (DLP) Policies in Microsoft Fabric [link]
[R2] 

Acronyms:
DLP - Data Loss Prevention
M365 - Microsoft 365

06 February 2025

🌌🏭KQL Reloaded: First Steps (Part V: Database Metadata)

When working with a new data repository, one of the first things to do is to look at the database's metadata, when available, and try to get a bird's-eye view of what's available: how big the database is in terms of size, tables and user-defined objects, how the schema was defined, how the data are stored, eventually how often backups are taken, what users have access and to what, etc.

So, after creating some queries in KQL and figuring out how things work, I tried to check what metadata are available, how they can be accessed, etc. The target is not to provide a full list of the available metadata, but to understand what information is available, in what format, how easy it is to extract the important metadata, etc.

So, the first set of metadata is related to database:

// get database metadata
.show databases (ContosoSales)

// get database metadata (multiple databases)
.show databases (ContosoSales, Samples)

// get database schema metadata
.show databases (ContosoSales) schema

// get database schema metadata (multiple databases) 
.show databases (ContosoSales, Samples) schema

// get database schema violations metadata
.show database ContosoSales schema violations

// get database entities metadata
.show databases entities with (showObfuscatedStrings=true)
| where DatabaseName == "ContosoSales"

// get database metadata 
.show databases entities with (resolveFunctionsSchema=true)
| where DatabaseName == "ContosoSales" and EntityType == "Table"
//| summarize count() // get the number of tables

// get a function's details
.show databases entities with (resolveFunctionsSchema=true)
| where DatabaseName == "ContosoSales" 
    and EntityType == "Function" 
    and EntityName == "SalesWithParams"

// get external tables metadata
.show external tables

// get materialized views metadata
.show materialized-views

// get query results metadata
.show stored_query_results

// get entities groups metadata
.show entity_groups

Then, it's useful to look at the database objects. 

// get all tables 
.show tables 
//| count

// get tables metadata
.show tables (Customers, NewSales)

// get tables schema
.show table Customers cslschema

// get schema as json
.show table Customers schema as json

// get table size: Customers
Customers
| extend sizeEstimateOfColumn = estimate_data_size(*)
| summarize totalSize_MB=round(sum(sizeEstimateOfColumn)/1024.00/1024.00,2)

Unfortunately, the public environment has restrictions on the creation of objects, and for some of the features one needs to create objects first before the corresponding metadata can be queried.

Furthermore, it would be interesting to understand who has access to the various repositories, what policies were defined, and so on. 

// get principal roles
.show database ContosoSales principal roles

// get principal roles for table
.show table Customers principal roles

// get principal roles for function:
.show function SalesWithParams principal roles

// get retention policies
.show table Customers policy retention

// get sharding policies
.show table Customers policy sharding

There are many more objects one can explore. It makes sense to document the features, respectively the objects used for the various purposes.

In addition, one should also check the best practices available for the data repository (see [2]).

Happy coding!

Previous Post <<||>> Next Post

References:
[1] Microsoft Learn (2024) Management commands overview [link]
[2] Microsoft Learn (2024) Kusto: Best practices for schema management [link]

19 March 2024

𖣯Strategic Management: Inflection Points and the Data Mesh (Quote of the Day)

Strategic Management
Strategic Management Series

"Data mesh is what comes after an inflection point, shifting our approach, attitude, and technology toward data. Mathematically, an inflection point is a magic moment at which a curve stops bending one way and starts curving in the other direction. It’s a point that the old picture dissolves, giving way to a new one. [...] The impacts affect business agility, the ability to get value from data, and resilience to change. In the center is the inflection point, where we have a choice to make: to continue with our existing approach and, at best, reach a plateau of impact or take the data mesh approach with the promise of reaching new heights." [1]

I tried to understand the "metaphor" behind the quote. As the author pinpoints through another quote, the metaphor is borrowed from Andrew Grove:

"An inflection point occurs where the old strategic picture dissolves and gives way to the new, allowing the business to ascend to new heights. However, if you don’t navigate your way through an inflection point, you go through a peak and after the peak the business declines. [...] Put another way, a strategic inflection point is when the balance of forces shifts from the old structure, from the old ways of doing business and the old ways of competing, to the new." [2]

The second part of the quote clarifies the role of the inflection point - the shift from one structure, respectively organization or system, to a new one. The inflection point is not when we make a decision, but when the decision we made, and its impact, shifts the balance. If the data mesh comes after the inflection point (see A), then there must be some kind of causality that converges uniquely toward the data mesh, which is questionable, if not illogical. A data mesh eventually makes sense once an organization has reached a certain scale and is thus unlikely to be adopted by small to medium businesses. Even for large organizations the data mesh may not be a viable solution if it doesn't have a proven track record of success.

I could understand if the author had said that the data mesh will lead to an inflection point after its adoption, as is the case with transformative/disruptive technologies. Unfortunately, the track record of BI and Data Analytics projects doesn't give much hope for such a magical moment to happen. Probably, becoming a data-driven organization could have such an effect, though for many organizations the effects are still far from expectations.

There's another point to consider. A curve with inflection points can contain both upward and downward concavities (see B), or there can be multiple curves passing through an inflection point (see C), and the continuation can follow any of them.

Examples of Inflection Points [3]

The change can be fast or slow (see D), and in the latter case it may take a long time for the change to be perceived. Also, [2] notes that the perception that something has changed can happen in stages. Moreover, an inflection point can be only local and doesn't describe the future evolution of the curve, which is to say that the curve can change its trajectory shortly afterwards. It happens in business processes and policy implementations that, after a change was made in extremis to alleviate an issue, a slight improvement is recognized, after which the performance decays sharply. It's the case of situations in which the symptoms, and not the root causes, were addressed.
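
As a footnote, the mathematical definition is less magical than the quote suggests. For a twice-differentiable function $f$, a point $x_0$ is an inflection point when the second derivative changes sign there (a standard calculus formulation, added here only for reference):

$$f''(x_0) = 0 \quad \text{and} \quad f'' \text{ changes sign at } x_0$$

For example, $f(x) = x^3$ has an inflection point at $x_0 = 0$ because $f''(x) = 6x$ switches from negative to positive, while $f(x) = x^4$ satisfies $f''(0) = 0$ and yet has no inflection point there, the curve remaining convex - a reminder that the condition is local and says nothing about the curve's further evolution.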

More appropriate for describing the change would be a tipping point, which can be defined as a critical threshold beyond which a system (here, the organization) reorganizes or changes, often abruptly and/or irreversibly.

Previous Post <<||>> Next Post

References:
[1] Zhamak Dehghani (2021) "Data Mesh: Delivering Data-Driven Value at Scale" (book review)
[2] Andrew S Grove (1996) "Only the Paranoid Survive: How to Exploit the Crisis Points that Challenge Every Company and Career"
[3] SQL Troubles (2024) "R Language: Drawing Function Plots (Part II - Basic Curves & Inflection Points)" [link]

07 March 2021

💼Project Management: Methodologies (Part I: Agile Manifesto Reloaded I - An Introduction)

 


There are many books written on agile methodologies, each attempting to depict the realities of software development projects. They consider many truths, though these seem to blend into a complex texture in which the writer usually takes the position of a preacher, contrasting the sins of the traditional methodologies with the agile principles. In extremis, everything done in the past seems to be wrong, while the agile methods seem to be a panacea, which is seldom the case.

It has already been 20 years since the agile manifesto was published, and the methodologies adhering to its principles don't seem to provide the expected success, suffering from the same chronic symptoms as their predecessors - they are poorly understood and implemented, tend to function after the hammer's principle (when all you have is a hammer, everything looks like a nail), and software development projects still deliver poor results. Moreover, more and more professionals raise their voices against agile practices.

Frankly, the principles behind the agile manifesto make sense. A project should by definition satisfy stakeholders' requirements, ideally through regular deliveries that incorporate the needed functionality, seeking early feedback from customers and involving the customer throughout the project's duration, working together to deliver a feasible product. Moreover, self-organizing teams, face-to-face meetings, constant pace and technical excellence should allow minimizing waste and maximizing efficiency in the project. Further aspects like simplicity, good design and architecture should establish a basis for success.

Re-reading the agile manifesto, even if each read pulls more and more pros and cons from experience, the manifesto continues to look like a Christmas wish-list. Even if the represented ideas make sense and satisfy a specific need, they are difficult to achieve in a project's context and setup. Each wish introduces a constraint that brings with it its own limitations. Unfortunately, each policy introduced by a methodology follows the same pattern, no matter the methodology considered. Moreover, the wishes cover only a small subset of a project's texture, are general and leave a lot of space for interpretation and implementation, though the same can be said about any principles that don't provide a coherent worldview or a conceptual model.

The software development industry needs a coherent worldview that reflects its assumptions, models, characteristics, laws and challenges. Software Engineering (SE) attempts to provide such a worldview, though unfortunately it is too complex for many, and there seems to be a big divide between it and the worldviews introduced by the various Project Management (PM) methodologies. Studying one or two PM methodologies, learning a few programming languages and even hands-on experience on a few projects won't fill the gaps in knowledge associated with the SE worldview.

Organizations don't seem to see the need for professionals to have a formal education in SE. On the other side, employees are expected to have by default some of the required skillset, which is often not the case. Besides understanding and implementing a technology, there is a set of knowledge areas in which IT professionals must have at least a high-level knowledge if they are expected to think critically about the respective areas. Unfortunately, the lack of such knowledge sometimes leads to situations which can negatively impact projects.

Almost each important word from the agile manifesto pulls with it a set of concepts from an SE worldview - customer satisfaction, software delivery, working software, requirements management, change management, cooperation, teamwork, trust, motivation, communication, metrics, stakeholders' management, good design, good architecture, lessons learned, performance management, etc. The manifesto needs to be regarded through SE's eyeglasses if one expects value from it.

Previous Post <<||>> Next Post

29 August 2019

🛡️Information Security: Data Leak/Loss Prevention (Definitions)

"Attempts to prevent the loss of confidentiality of sensitive information by limiting the use of confidential information only for authorized purposes." (David G Hill, "Data Protection: Governance, Risk Management, and Compliance", 2009)

"A feature that protects data on laptops by enabling file-level authentication and secure erase options in the event that a laptop is lost or stolen." (CommVault, "Documentation 11.20", 2018)

"A set of technologies and inspection techniques used to classify information content contained within an object—such as a file, an email, a packet, an application or a data store - while at rest (in storage), in use (during an operation), or in transit (across a network). DLP tools also have the ability to dynamically apply a policy—such as log, report, classify, relocate, tag, and encrypt - and/or apply enterprise data rights management protections." (William Stallings, "Effective Cybersecurity: A Guide to Using Best Practices and Standards", 2018)

"The actions that organizations take to prevent unauthorized external parties from gaining access to sensitive data." (Shon Harris & Fernando Maymi, "CISSP All-in-One Exam Guide" 8th Ed., 2018)

"Data loss prevention (DLP; also known as data leak prevention) is a computer security term referring to systems that identify, monitor, and protect data in use (e.g. endpoint actions), data in motion (e.g. network actions), and data at rest (e.g. data storage) through deep content inspection, contextual security analysis of transaction (attributes of originator, data object, medium, timing, recipient/destination, and so on) and with a centralized management framework. Systems are designed to detect and prevent unauthorized use and transmission of confidential information." (Robert F Smallwood, "Information Governance for Healthcare Professionals", 2018)

 "A capability that detects and prevents violations to corporate policies regarding the use, storage, and transmission of sensitive data. Its purpose is to enforce policies to prevent unwanted dissemination of sensitive information." (Forrester)

"A systems ability to identify, monitor, and protect data in use (e.g. endpoint actions), data in motion (e.g. network actions), and data at rest (e.g. data storage) through deep packet content inspection, contextual security analysis of transaction (attributes of originator, data object, medium, timing, recipient/destination, etc.), within a centralized management framework. Data loss prevention capabilities are designed to detect and prevent the unauthorized use and transmission of NSS information." (CNSSI 4009-2015 CNSSI 1011)

"Data loss protection (DLP) describes a set of technologies and inspection techniques used to classify information content contained within an object - such as a file, email, packet, application or data store - while at rest (in storage), in use (during an operation) or in transit (across a network). DLP tools are also have the ability to dynamically apply a policy - such as log, report, classify, relocate, tag and encrypt - and/or apply enterprise data rights management protections." (Gartner)

"Data loss prevention (DLP) is a strategy for making sure that end users do not send sensitive or critical information outside the corporate network. The term is also used to describe software products that help a network administrator control what data end users can transfer." (TechTarget) [source]

"Data loss prevention (DLP) makes sure that users do not send sensitive or critical information outside the corporate network. The term describes software products that help a network administrator control the data that users can transfer." (proofpoint) [source]
