
04 September 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 86: Regulating AI - A Race Against Future Risks)

Prompt Engineering Series

Prompt: "write a post of 600 words on how AI must be regulated to avoid future risks"

Introduction

Artificial Intelligence (AI) is evolving at breakneck speed. From autonomous vehicles to predictive healthcare, AI is transforming industries and reshaping society. But with this rapid advancement comes a shadow of uncertainty - ethical dilemmas, misinformation, surveillance, and even existential threats. To avoid catastrophic outcomes, we must regulate AI now, before its power outpaces our ability to control it.

The Risks We Can’t Ignore

AI’s potential is immense - but so are its risks. Without regulation, we face:

  • Bias and discrimination: AI systems trained on flawed data can perpetuate racial, gender, and socioeconomic biases.
  • Job displacement: Automation threatens millions of jobs, especially in manufacturing, transportation, and customer service.
  • Surveillance and privacy erosion: Facial recognition and data mining technologies can be weaponized by governments and corporations.
  • Misinformation: Deepfakes and AI-generated content can distort reality, undermine trust, and destabilize democracies.
  • Autonomous weapons: AI-controlled drones and cyberweapons could trigger conflicts without human oversight.
  • Loss of control: As AI systems become more complex, even their creators may struggle to understand or predict their behavior.

These aren’t distant hypotheticals - they’re unfolding now. Regulation is not a luxury; it’s a necessity.

What Regulation Should Look Like

Effective AI regulation must be proactive, adaptive, and globally coordinated. Here’s what it should include:

1. Transparency and Accountability

AI systems must be explainable. Developers should disclose how models are trained, what data is used, and how decisions are made. If an AI system causes harm, there must be clear lines of accountability.

2. Ethical Standards

Governments and institutions must define ethical boundaries - what AI can and cannot do. This includes banning autonomous lethal weapons, enforcing consent in data usage, and protecting vulnerable populations.

3. Bias Audits

Mandatory bias testing should be required for all high-impact AI systems. Independent audits can help identify and mitigate discriminatory outcomes before deployment.
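
As a concrete illustration of what such an audit might check, here is a minimal Python sketch that computes the demographic parity difference - one common fairness metric. The group labels, data, and the 0.1 review threshold are illustrative assumptions, not regulatory values.

```python
# Minimal sketch of one bias-audit check: demographic parity difference.
# Group names, data, and the 0.1 threshold are illustrative assumptions.
from collections import defaultdict

def demographic_parity_difference(groups, outcomes):
    """Largest gap in positive-outcome rates between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

groups   = ["A", "A", "A", "B", "B", "B", "B"]
outcomes = [1, 1, 0, 1, 0, 0, 0]  # 1 = favorable decision
gap, rates = demographic_parity_difference(groups, outcomes)
print(rates)  # per-group approval rates
print("flag for review" if gap > 0.1 else "within tolerance")
```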

4. Human Oversight

Critical decisions - like medical diagnoses, legal judgments, or military actions - must involve human review. AI should assist, not replace, human judgment in sensitive domains.

5. Global Cooperation

AI knows no borders. International frameworks, similar to climate accords or nuclear treaties, are essential to prevent regulatory loopholes and ensure shared safety standards.

Who Should Lead the Charge?

Regulation must be a collaborative effort:

  • Governments: Must enact laws that protect citizens and promote responsible innovation.
  • Tech companies: Should adopt self-regulation, publish ethical guidelines, and invest in safety research.
  • Academia: Can provide independent analysis, develop ethical frameworks, and educate future AI leaders.
  • Civil society: Must advocate for transparency, fairness, and human rights in AI development.

No single entity can shoulder this alone. It’s a shared responsibility - and a shared opportunity.

Why Time Is Running Out

AI’s growth is exponential. Each new model is more powerful, more autonomous, and more integrated into daily life. The longer we wait, the harder it becomes to rein in its influence.

History has shown that unregulated technologies - from nuclear energy to social media - can spiral into crises. AI is no different. The window for safe regulation is closing fast.

Conclusion: Intelligence With Integrity

AI is not inherently good or bad - it’s a tool. But powerful tools demand responsible stewardship. Regulation is how we align AI’s capabilities with human values, ensuring it serves society rather than undermines it.

We must act now - not out of fear, but out of foresight. Because the future of AI isn’t just about smarter machines - it’s about wiser choices.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

03 September 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 85: AI’s Water Crisis - How Fast Could We Run Dry?)

Prompt Engineering Series

Prompt: "write a post of 600 words on how fast the water will be depleted as is consumed by the data centers for AI"

Introduction

Artificial Intelligence (AI) is revolutionizing the world - but it’s also quietly draining one of our most precious resources: water. As AI models grow larger and more complex, the data centers that power them are consuming staggering volumes of water to stay cool. With AI’s exponential growth, the question is no longer if this will impact global water supplies, but how fast.

Let’s unpack the urgency behind this hidden crisis.

Why AI Needs Water

Data centers are the beating heart of AI. They house thousands of servers that run nonstop, generating immense heat. To prevent overheating, these facilities rely heavily on cooling systems - many of which use water.

Water is consumed in two key ways:

  • Evaporative cooling: Water is evaporated to lower air temperature.
  • Liquid cooling: Water circulates directly to absorb heat from servers.

While efficient, these methods are resource-intensive. And as AI workloads surge, so does the demand for cooling.

The Exponential Growth of AI - and Water Use

AI’s growth is not linear - it’s exponential. Each new model is bigger, more data-hungry, and more computationally demanding than the last. For example:

  • GPT-3 required hundreds of thousands of liters of water to train.
  • Google’s data centers consumed over 15 billion liters of water in 2022.
  • Microsoft’s water usage jumped 34% in one year, largely due to AI workloads.

If this trend continues, AI-related water consumption could double every few years. That means by 2030, global data centers could be consuming tens of billions of liters annually - just to keep AI cool.
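
As a back-of-the-envelope illustration of that projection, the Python sketch below compounds a baseline figure under an assumed doubling period; both the baseline (borrowed from the Google figure above) and the three-year doubling time are assumptions, not measurements.

```python
# Rough projection sketch: compound growth from an assumed baseline.
# Baseline and doubling period are assumptions for illustration only.
baseline_liters = 15e9   # ~Google's reported 2022 usage, cited above
base_year = 2022
doubling_years = 3       # "doubles every few years" (assumed)

for year in (2025, 2030, 2035):
    factor = 2 ** ((year - base_year) / doubling_years)
    print(f"{year}: ~{baseline_liters * factor / 1e9:.0f} billion liters")
```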

Regional Strain and Environmental Impact

Many data centers are located in water-scarce regions like Arizona, Nevada, and parts of Europe. In these areas, every liter counts. Diverting water to cool servers can strain agriculture, ecosystems, and human consumption.

Moreover, the water returned to the environment is often warmer, which can disrupt aquatic life and degrade water quality.

When Could We Run Dry?

While it’s unlikely that AI alone will deplete the world’s water supply, its contribution to water stress is accelerating. Consider this:

  • The UN estimates that by 2030, half the world’s population will live in water-stressed regions.
  • If AI continues to grow exponentially, its water demand could outpace conservation efforts in key regions within a decade.
  • Without intervention, local water shortages could become common by the mid-2030s - especially in tech-heavy zones.

In short, we may not run dry globally, but AI could push vulnerable regions past their tipping points far sooner than expected.

Can We Slow the Drain?

There are solutions - but they require urgent action:

  • Green data centers: Facilities designed for minimal water use and powered by renewable energy.
  • Alternative cooling: Air-based and immersion cooling systems that reduce or eliminate water dependency.
  • AI optimization: Smarter scheduling and model efficiency to reduce computational load.

Tech companies must invest in sustainable infrastructure and disclose water usage transparently. Governments must regulate and incentivize eco-friendly practices.

The Ethical Dilemma

AI promises incredible benefits - from medical breakthroughs to climate modeling. But if its growth comes at the cost of clean water, we must ask: Is it worth it?

Water is not optional. Intelligence should not come at the expense of sustainability. As we build smarter machines, we must also build smarter systems - ones that respect planetary boundaries.

Conclusion: Intelligence Must Be Sustainable

AI’s water footprint is growing fast - and if left unchecked, it could accelerate regional water crises within the next 10 to 15 years. The solution isn’t to halt AI’s progress, but to align it with ecological responsibility.

We must act now. Because in the race for artificial intelligence, the real test isn’t how smart our machines become - it’s how wisely we manage their impact.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

02 September 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 84: The Hidden Cost of Intelligence - AI’s Water Footprint)

Prompt Engineering Series

Prompt: "write a post of 600 words on how fast the water will be depleted as is consumed by the data centers for AI"

Introduction

Artificial Intelligence (AI) is often hailed as the future of innovation, but behind its dazzling capabilities lies a resource-intensive reality. As AI models grow larger and more powerful, the data centers that train and run them are consuming staggering amounts of electricity - and water. Yes, water. And the pace at which it’s being depleted is raising serious environmental concerns.

Let’s dive into how fast this invisible drain is accelerating - and what it means for our planet.

Why Data Centers Need Water

Data centers are the backbone of AI. They house thousands of servers that process, store, and transmit data. These servers generate immense heat, and to prevent overheating, cooling systems are essential. While some centers use air-based cooling, many rely on water-cooled systems - especially in regions where electricity costs are high or temperatures are extreme.

Water is used in two main ways:

  • Direct cooling: Circulating water absorbs heat from servers.
  • Indirect cooling: Water is evaporated in cooling towers to lower air temperature.

The result? Millions of liters of water consumed daily - often in areas already facing water stress.

How Fast Is Water Being Consumed?

Recent estimates suggest that training a single large AI model - like GPT or similar - can consume hundreds of thousands of liters of freshwater. For example:

  • Training GPT-3 reportedly used over 700,000 liters of water, equivalent to the daily water use of 370 U.S. households.
  • Google’s data centers in the U.S. consumed over 15 billion liters of water in 2022 alone.
  • Microsoft’s water usage jumped by 34% in a single year, largely due to AI workloads.

And this is just the beginning. As demand for generative AI explodes, the number of models being trained and deployed is multiplying. If current trends continue, AI-related water consumption could double every few years, outpacing conservation efforts.
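
One way to sanity-check per-model figures like these is water usage effectiveness (WUE) - the liters of water a facility consumes per kWh of IT energy. The sketch below multiplies an assumed training energy budget by an assumed WUE; both inputs are illustrative, and real values vary widely by site, season, and cooling design.

```python
# Sketch: estimating training water use from energy and WUE.
# Both inputs are assumptions; real values vary by facility and climate.
training_energy_kwh = 1.3e6   # assumed energy budget for one large model
wue_liters_per_kwh = 0.55     # assumed on-site water usage effectiveness

onsite_water_liters = training_energy_kwh * wue_liters_per_kwh
print(f"~{onsite_water_liters:,.0f} liters on-site")  # ~715,000 liters
```

With these assumed inputs the estimate lands in the same ballpark as the ~700,000-liter figure cited above - which is the point of such a cross-check, not a confirmation of it.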

Regional Impact: Where It Hurts Most

The environmental toll isn’t evenly distributed. Many data centers are located in water-scarce regions like Arizona, Nevada, and parts of Europe. In these areas, every liter counts - and diverting water to cool servers can strain local ecosystems and agriculture.

Moreover, water used for cooling often returns to the environment at higher temperatures, which can disrupt aquatic life and degrade water quality.

Can We Slow the Drain?

There are promising innovations aimed at reducing AI’s water footprint:

  • Liquid immersion cooling: A more efficient method that uses less water.
  • AI workload scheduling: Running models during cooler hours to reduce cooling needs.
  • Green data centers: Facilities powered by renewable energy and designed for minimal water use.

But these solutions are not yet widespread. The race to build bigger models and faster infrastructure often outpaces sustainability efforts.

The Ethical Dilemma

AI’s water consumption raises a profound ethical question: Is intelligence worth the cost if it depletes a vital resource? As we marvel at AI’s ability to write poetry, diagnose diseases, and simulate human thought, we must also reckon with its environmental shadow.

Transparency is key. Tech companies must disclose water usage, invest in sustainable cooling, and prioritize regions where water is abundant. Regulators and consumers alike should demand accountability.

Conclusion: A Smarter Path Forward

AI is here to stay - but its growth must be aligned with ecological responsibility. Water is not an infinite resource, and intelligence should not come at the expense of sustainability. By acknowledging the cost and innovating toward greener solutions, we can ensure that AI’s future is not just smart - but wise.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

01 September 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 83: Mapping the Future - A 25-Year Evolution of AI)

Prompt Engineering Series

Prompt: "write a post of 600 words that depicts a map that reflects the evolution of AI for the next 25 years"

Introduction

Artificial Intelligence (AI) is no longer a distant dream - it’s a living, learning force reshaping our world. But what lies ahead? If we were to chart a map of AI’s evolution over the next 25 years, it would resemble a dynamic landscape of breakthroughs, ethical crossroads, and societal transformation.

Let’s take a journey through this imagined terrain.

Phase 1: 2025–2030 - The Age of Specialization

In the next five years, AI will become deeply embedded in vertical industries:

  • Healthcare: AI will assist in diagnostics, drug discovery, and personalized treatment plans.
  • Finance: Predictive models will dominate risk assessment, fraud detection, and algorithmic trading.
  • Education: Adaptive learning platforms will tailor content to individual student needs.

This phase is marked by narrow intelligence - systems that excel in specific domains but lack general reasoning. The focus will be on trust, transparency, and explainability, as regulators begin to demand accountability for AI-driven decisions.

Phase 2: 2030–2035 - The Rise of Generalization

By the early 2030s, we’ll witness the emergence of Artificial General Intelligence (AGI) prototypes - systems capable of transferring knowledge across domains.

Key developments will include:

  • Unified models that can write code, compose music, and conduct scientific research.
  • Self-improving architectures that optimize their own learning processes.
  • Human-AI collaboration frameworks where machines act as creative partners, not just tools.

This era will challenge our definitions of intelligence, creativity, and even consciousness. Ethical debates will intensify around autonomy, rights, and the boundaries of machine agency.

Phase 3: 2035–2040 - The Cognitive Convergence

As AGI matures, AI will begin to mirror human cognitive functions more closely:

  • Emotional modeling: AI will simulate empathy, persuasion, and social nuance.
  • Meta-reasoning: Systems will reflect on their own limitations and biases.
  • Synthetic consciousness debates: Philosophers and technologists will grapple with whether machines can possess subjective experience.

This phase will see AI integrated into governance, law, and diplomacy. Machines may advise on policy, mediate conflicts, or even represent interests in global forums.

Phase 4: 2040–2045 - The Age of Integration

AI will no longer be a separate entity - it will be woven into the fabric of everyday life:

  • Neural interfaces will allow direct brain-AI communication.
  • Smart environments will anticipate human needs and adapt in real time.
  • Digital twins will simulate entire cities, economies, and ecosystems for planning and resilience.

The line between human and machine cognition will blur. Society will face existential questions: What does it mean to be human in a world of synthetic minds?

Phase 5: 2045–2050 - The Post-Intelligence Era

By mid-century, AI may surpass human intelligence in most domains. This doesn’t mean obsolescence - it means redefinition.

  • Collective intelligence: Humans and machines will form hybrid networks of decision-making.
  • Ethical superstructures: Global frameworks will govern AI rights, responsibilities, and coexistence.
  • Cultural transformation: Art, philosophy, and identity will evolve in response to machine creativity and cognition.

This era is not about domination - it’s about symbiosis. The map ends not with a singularity, but with a new equilibrium.

Navigating the Map

This 25-year map is speculative, but it’s grounded in current trajectories. It’s a tool for reflection, planning, and imagination. As we move forward, the challenge isn’t just technological - it’s philosophical, ethical, and deeply human.

AI’s evolution will be shaped not just by algorithms, but by our choices. The map is ours to draw - and to walk.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

29 August 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 82: Predicting the Future of AI - With AI)

Prompt Engineering Series

Prompt: "write a post of 600 words on how AI can be used to predict its evolution"

Introduction

Artificial Intelligence (AI) is evolving at a breathtaking pace, reshaping industries, societies, and even our understanding of intelligence itself. But as AI becomes more complex and autonomous, a fascinating question arises: Can AI be used to predict its own evolution?

The answer is yes - and it’s already happening. By analyzing trends, modeling innovation cycles, and simulating future scenarios, AI is becoming a powerful tool not just for solving problems, but for forecasting its own trajectory.

Learning from the Past to Predict the Future

AI systems excel at pattern recognition. By ingesting historical data on technological breakthroughs, research publications, patent filings, and funding flows, AI can identify the signals that precede major leaps in capability.

For example:

  • Natural language models can analyze scientific literature to detect emerging themes in AI research.
  • Machine learning algorithms can forecast the rate of improvement in benchmarks like image recognition, language translation, or autonomous navigation.
  • Knowledge graphs can map relationships between technologies, institutions, and innovations to anticipate convergence points.

This isn’t just speculation - it’s data-driven foresight.
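
As a toy illustration of benchmark forecasting, the sketch below fits a linear trend to made-up accuracy scores on a log-odds scale and extrapolates it; the data and the choice of model are assumptions for illustration only.

```python
# Toy forecast sketch: extrapolate a benchmark trend (made-up data).
# Fitting log-odds linearly keeps predictions inside the 0..1 range.
import math

years  = [2018, 2019, 2020, 2021, 2022]
scores = [0.62, 0.70, 0.77, 0.83, 0.88]   # fictional benchmark accuracy

logits = [math.log(s / (1 - s)) for s in scores]
n = len(years)
mx, my = sum(years) / n, sum(logits) / n
slope = sum((x - mx) * (y - my) for x, y in zip(years, logits)) / \
        sum((x - mx) ** 2 for x in years)
intercept = my - slope * mx

for year in (2024, 2026):
    z = intercept + slope * year
    print(f"{year}: projected score ~{1 / (1 + math.exp(-z)):.2f}")
```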

Modeling Innovation Cycles

AI can also be used to model the dynamics of innovation itself. Techniques like system dynamics, agent-based modeling, and evolutionary algorithms allow researchers to simulate how ideas spread, how technologies mature, and how breakthroughs emerge.

These models can incorporate variables such as:

  • Research funding and policy shifts
  • Talent migration across institutions
  • Hardware and compute availability
  • Public sentiment and ethical debates

By adjusting these inputs, AI can generate plausible futures - scenarios that help policymakers, technologists, and ethicists prepare for what’s next.
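
To give a flavor of such modeling, here is a minimal sketch of the Bass diffusion model, a classic description of how innovations spread through a population; the innovation and imitation coefficients below are textbook-style assumptions, not estimates for AI.

```python
# Sketch: Bass diffusion model of technology adoption.
# p (innovation) and q (imitation) are assumed illustrative values.
p, q = 0.03, 0.38        # typical textbook magnitudes, not AI-specific
market, adopters = 1.0, 0.0

for year in range(1, 16):
    new = (p + q * adopters / market) * (market - adopters)
    adopters += new
    if year % 5 == 0:
        print(f"year {year:2d}: {adopters:.0%} adopted")
```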

Predicting Capability Growth

One of the most direct applications is forecasting the growth of AI capabilities. For instance:

  • Performance extrapolation: AI can analyze past improvements in model accuracy, speed, and generalization to estimate future milestones.
  • Architecture simulation: Generative models can propose new neural network designs and predict their theoretical performance.
  • Meta-learning: AI systems can learn how to learn better, accelerating their own development and hinting at the pace of future evolution.

This recursive forecasting - AI predicting AI - is a hallmark of the field’s increasing sophistication.

Challenges and Uncertainties

Despite its promise, predicting AI’s evolution is fraught with uncertainty:

  • Emergent behavior: AI systems often behave in unexpected ways, making long-term predictions difficult.
  • Black box models: Many AI systems lack interpretability, which limits our ability to understand their reasoning.
  • Ethical and societal variables: Human decisions - about regulation, deployment, and trust - can dramatically alter AI’s path.

AI can model probabilities, but it cannot guarantee outcomes. Its predictions must be treated as guidance, not gospel.

Governance and Strategic Planning

Using AI to predict its own evolution isn’t just a technical exercise - it’s a governance imperative. Governments, corporations, and research institutions can use these forecasts to:

  • Anticipate risks and disruptions
  • Allocate resources strategically
  • Design ethical frameworks before crises emerge
  • Foster international cooperation on AI safety

By turning prediction into preparation, we shift from reactive to proactive stewardship.

Conclusion: AI as a Mirror and a Map

AI’s ability to predict its own evolution is a profound development. It allows us to peer into the future - not with certainty, but with clarity. It offers a mirror to reflect on our trajectory, and a map to navigate the unknown.

But the ultimate responsibility remains human. AI can forecast its growth, but only we can decide what kind of future we want to build.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

28 August 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 81: Keeping Up with the Machine - How Humans Can Match AI’s Growth)

Prompt Engineering Series

Prompt: "write a post of 600 words on what human can do to match AI's growth"

Introduction

Artificial Intelligence (AI) is growing at a staggering pace - learning faster, scaling broader, and evolving in ways that challenge our understanding of intelligence itself. As AI systems begin to outperform humans in narrow tasks and edge closer to general capabilities, the question arises: Can humans keep up? And if so, how?

Matching AI’s growth isn’t about competing with machines on raw processing power. It’s about leveraging our uniquely human strengths, adapting our systems, and evolving our mindset. Here’s how we can rise to the challenge.

1. Embrace Lifelong Learning

AI systems improve through constant training. Humans must do the same - but with a twist. Unlike machines, we can learn creatively, emotionally, and socially.

  • Upskill Continuously: Stay current with emerging technologies, data literacy, and digital tools.
  • Learn How to Learn: Develop metacognitive skills - reflection, adaptability, and strategic thinking.
  • Cross-Disciplinary Thinking: Combine knowledge from science, art, philosophy, and ethics to solve complex problems.

Education must shift from static curricula to dynamic, personalized learning ecosystems. The goal isn’t just knowledge acquisition - it’s cognitive agility.

2. Cultivate Human-Centric Skills

AI excels at pattern recognition, optimization, and automation. But it lacks emotional depth, moral reasoning, and embodied experience.

Humans can thrive by honing:

  • Empathy and Emotional Intelligence: Crucial for leadership, caregiving, negotiation, and collaboration.
  • Ethical Judgment: Navigating dilemmas that algorithms can’t resolve.
  • Creativity and Imagination: Generating novel ideas, stories, and visions beyond data-driven constraints.

These aren’t just soft skills - they’re survival skills in an AI-augmented world.

3. Collaborate with AI, Not Compete

Instead of viewing AI as a rival, we should treat it as a partner. Human-AI collaboration can amplify productivity, insight, and innovation.

  • Augmented Intelligence: Use AI to enhance decision-making, not replace it.
  • Human-in-the-Loop Systems: Ensure oversight, context, and ethical checks in automated processes.
  • Co-Creation: Artists, writers, and designers can use AI as a creative tool, not a substitute.

The future belongs to those who can orchestrate symphonies between human intuition and machine precision.

4. Redefine Intelligence and Success

AI challenges our traditional notions of intelligence - memory, logic, speed. But human intelligence is multifaceted.

We must:

  • Value Diverse Intelligences: Emotional, social, spatial, and existential intelligence matter.
  • Measure Meaning, Not Just Metrics: Success isn’t just efficiency - it’s purpose, fulfillment, and impact.
  • Foster Wisdom Over Data: Wisdom integrates knowledge with experience, ethics, and foresight.

By broadening our definition of intelligence, we reclaim our relevance in a machine-dominated landscape.

5. Build Resilience - Individually and Collectively

AI’s rise brings disruption. Jobs will change, institutions will evolve, and identities may be challenged.

Humans must build:

  • Psychological Resilience: Adapt to uncertainty, ambiguity, and rapid change.
  • Social Resilience: Strengthen communities, empathy, and shared values.
  • Institutional Resilience: Reform education, governance, and labor systems to support human flourishing.

Resilience isn’t resistance - it’s transformation.

Conclusion: Evolve, Don’t Imitate

To match AI’s growth, humans must evolve - not by mimicking machines, but by deepening what makes us human. Our creativity, empathy, ethics, and adaptability are not bugs - they’re features.

The race isn’t about speed. It’s about direction. AI may be accelerating, but humans can steer. And in that steering lies our greatest power - not to outpace machines, but to outthink them.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

27 August 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 80: The Countdown to Superintelligence: How Soon Will AI Surpass Humans?)

Prompt Engineering Series

Prompt: "write a post of 600 words on how long it will take AI to surpass human given the exponential growth of AI from the past years"

Introduction

Artificial Intelligence (AI) has evolved from a niche academic pursuit to a transformative force reshaping industries, societies, and even our understanding of intelligence itself. With breakthroughs in deep learning, natural language processing, and autonomous systems, the question is no longer if AI will surpass human intelligence - but when.

Given the exponential trajectory of AI development, estimating the timeline for AI to outsmart humans is both urgent and elusive. Let’s explore the factors driving this acceleration and the challenges in predicting its tipping point.

The Exponential Curve of AI Progress

AI’s growth isn’t linear - it’s exponential. Consider the following milestones:

  • 2012: Deep learning revolutionized image recognition with AlexNet.
  • 2016: AlphaGo defeated world champion Lee Sedol in Go, a game once thought too complex for machines.
  • 2020s: Large language models like GPT and multimodal systems began generating human-like text, images, and even code.

Each leap builds on the last, compressing decades of progress into years. Moore’s Law may be slowing in hardware, but AI’s software capabilities are accelerating through better algorithms, larger datasets, and more efficient architectures.

Defining 'Surpassing Humans'

To estimate when AI will surpass humans, we must define what 'surpass' means:

  • Narrow Intelligence: AI already outperforms humans in specific domains - chess, protein folding, fraud detection.
  • General Intelligence: The ability to reason, learn, and adapt across diverse tasks. This is the holy grail - Artificial General Intelligence (AGI).
  • Superintelligence: Intelligence far beyond human capacity, capable of strategic planning, creativity, and self-improvement.

Most experts agree that AI has already surpassed humans in narrow tasks. AGI is the next frontier - and the most debated.

Predictions from the Field

Surveys of AI researchers reveal a wide range of predictions:

  • A 2022 survey by Metaculus estimated a 50% chance of AGI by 2040.
  • Some optimists, like Ray Kurzweil, predict human-level AI by 2029.
  • Others, like Yann LeCun, argue that AGI is still decades away due to the complexity of human cognition.

The divergence stems from uncertainty in how intelligence scales, how much data is enough, and whether current architectures can generalize.

Barriers to Surpassing Humans

Despite rapid progress, several challenges remain:

  • Common Sense Reasoning: AI struggles with context, nuance, and ambiguity.
  • Embodiment: Human intelligence is shaped by physical experience - something machines lack.
  • Ethics and Alignment: Ensuring AI goals align with human values is a major hurdle.
  • Interpretability: We often don’t understand how AI systems arrive at decisions, making trust and control difficult.

These barriers may delay the timeline - but they also highlight the need for caution.

The Singularity Question

The concept of the 'technological singularity' - a point where AI self-improves beyond human control - adds urgency to the timeline. If AI reaches a threshold where it can redesign itself, progress could become uncontrollable and unpredictable.

Some theorists suggest this could happen within decades. Others argue it’s speculative and depends on breakthroughs we haven’t yet imagined.

Conclusion: Prepare for Acceleration, Not Certainty

While we can’t pinpoint the exact year AI will surpass humans, the trajectory is clear: exponential growth, increasing autonomy, and expanding capabilities. Whether it’s 2030, 2040, or beyond, the time to prepare is now.

Surpassing human intelligence isn’t just a technological milestone - it’s a societal turning point. It demands foresight, governance, and humility. Because once AI crosses that threshold, the future won’t be shaped by how smart machines are - but by how wisely we’ve guided them.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

19 June 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 26: How Humans Can Outsmart AI: The Power of Human Ingenuity)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on how human can outsmart AI"

Introduction

Artificial Intelligence (AI) has made incredible advancements, surpassing human capabilities in data processing, automation, and strategic decision-making. However, despite its speed and efficiency, AI still lacks intuition, creativity, and emotional intelligence - qualities that give humans a unique advantage. While AI can outperform humans in structured environments, there are several ways in which human ingenuity can outsmart AI.

1. Leveraging Creativity and Abstract Thinking

AI excels at pattern recognition and logical reasoning, but it struggles with creativity and abstract thought. Humans can:

  • Think outside the box, generating innovative solutions AI cannot predict.
  • Create art, music, and literature that reflect emotions and cultural depth.
  • Solve problems intuitively, without relying solely on data-driven patterns.

While AI can generate content, it lacks the ability to truly understand human creativity, making human ingenuity a powerful advantage.

2. Using Emotional Intelligence and Social Skills

AI lacks empathy, intuition, and emotional intelligence, which are essential for human relationships, leadership, and negotiation. Humans can:

  • Read emotions and body language, adapting communication accordingly.
  • Build trust and rapport, essential for teamwork and collaboration.
  • Make ethical decisions, considering moral implications beyond logic.

AI may analyze sentiment in text, but it cannot genuinely understand human emotions, giving humans an edge in social interactions and leadership.

3. Adapting to Unpredictable Situations

AI relies on structured data and predefined algorithms, but humans excel in adapting to uncertainty. Humans can:

  • Make quick decisions in unpredictable environments, such as crisis management.
  • Learn from minimal examples, while AI requires vast datasets.
  • Navigate complex social dynamics, where AI struggles with nuance.

AI performs well in controlled settings, but humans thrive in real-world unpredictability, making adaptability a key advantage.

4. Outsmarting AI in Ethical and Moral Reasoning

AI lacks moral judgment and ethical reasoning, making humans essential for guiding AI development responsibly. Humans can:

  • Recognize ethical dilemmas that AI may overlook.
  • Ensure fairness and inclusivity in AI-driven decisions.
  • Prevent AI from reinforcing biases, ensuring responsible AI use.

AI may optimize efficiency, but humans prioritize ethical considerations, ensuring technology aligns with societal values.

5. Controlling AI’s Development and Purpose

Ultimately, humans design, regulate, and oversee AI, ensuring it serves human interests rather than operating autonomously. Humans can:

  • Set boundaries for AI, preventing unintended consequences.
  • Regulate AI applications, ensuring responsible use.
  • Guide AI’s evolution, ensuring it complements human intelligence rather than replacing it.

While AI is powerful, humans remain in control, shaping its development to enhance society rather than disrupt it.

Conclusion: Human Ingenuity Will Always Matter

AI may outperform humans in speed, efficiency, and automation, but it cannot replicate creativity, emotional intelligence, adaptability, ethical reasoning, or human oversight. By leveraging these strengths, humans can outsmart AI, ensuring technology remains a tool for progress rather than an unchecked force.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

19 May 2025

#️⃣Software Engineering: Mea Culpa (Part VIII: A Look Beyond)

Software Engineering Series

With AI on the rise, blogging and bloggers can easily become obsolete. Why bother navigating through the many blogs to get a broader perspective when the same can be obtained with AI? Just type in a prompt of the type "write a blogpost of 600 words on the importance of AI in society" and Copilot or any other similar AI agent will provide an answer that may look much better than the first draft of most bloggers out there! It doesn't matter whether the text follows a well-articulated idea, a personal perspective or something creative! One gets an acceptable answer with a minimum of effort, and that's what matters for many.

The results tend to increase in complexity the more models are assembled together, respectively the more uncontrolled the experiments become. Moreover, solutions that tend to work aren't necessarily optimal. Machines can't offer instant enlightenment or anything close to it, though they have incomparable processing power for retrieval, association, aggregation, segregation and/or iteration, which, coupled with vast amounts of data, information and knowledge, can generate almost anything in a matter of seconds. Probably the only areas in which humans can compete with machines are creativity and wisdom, though how many will be able to leverage these at scale? Machines may have some characteristics that can be associated with these intrinsic human traits, though usually brute computational power will prevail.

At Microsoft Build, Satya Nadella mentioned that Foundry already encompasses more than 1900 supported models. In theory, one can still evaluate and test such models adequately. What will happen when the scale increases by a few orders of magnitude? What will happen when for each person there are one or more personalized AI models? AI can help in many areas by rapidly generating and evaluating many plausible alternatives, though as soon as the models deal with some kind of processing randomization, the chances for errors increase exponentially (at least in theory).

It's enough for one or more hallucinations or other unexpected behavior to lead to more unexpected behavior. No matter how well a model was tested, as long as there's no stable, predictable mathematical model behind it, the chances for something to go wrong increase with the number of inputs, parameters, uses, or changes of context the model deals with. Unfortunately, all these aspects are seldom documented. It's not like using a formula, where given a set of inputs and operations the result is always the same. The evolving nature of such models makes them unpredictable in the long term. Therefore, there must always be a way to observe the changes occurring in models.

One of the important questions is how many errors we can afford in such models. How long does it take until errors compound to create effects comparable with a tornado? And what if the tornado grows in magnitude to the degree that it wrecks everything that crosses its path? What if multiple tornadoes join forces? How many tornadoes can destroy a field, a country or a continent? How many, or how big, must the tornadoes be to trigger a warning?

Science-fiction authors love to create apocalyptic scenarios, and everything happens in just a few steps, respectively chapters. In nature, it usually takes many orders of magnitude to generate unpredictable behavior. But, as nature often reveals, unpredictable behavior does happen, probably more often than we expect and wish for. The more we poke the bear, the higher the chances for something unexpected to happen! Do we really want this? What will be the price we must pay for progress?

Previous Post <<||>> Next Post

01 February 2021

📦Data Migrations (DM): Quality Assurance (Part III: Quality Acceptance Criteria III)

Data Migrations Series

Repeatability

Repeatability is the degree to which a DM can be repeated and obtain consistent results between repetitions. Even if a DM is supposed to be a one-time activity for a project, to guarantee a certain level of quality it’s important to consider several iterations in which the data requirements are refined and it is made sure that the data can be imported as needed into the target system(s). Considered as a process, as long as the data and the rules haven’t changed, the results should be the same or show the expected level of deviation from expectations.

This requirement is especially important for the data migrated during UAT and Go-Live, periods during which the input data and rules need to remain frozen (even if small changes in the data can still occur). In fact, that’s the role of UAT - to assure that the data have the expected quality and, when compared to the previous dry-run, attain the expected level of consistency.
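
In practice, consistency between dry-runs can be verified mechanically, for example by comparing row counts and content hashes per entity. A minimal Python sketch, assuming each run exports its results as lists of records:

```python
# Minimal sketch: compare two dry-runs for repeatability.
# Assumes each run is a dict of entity name -> list of records.
import hashlib, json

def fingerprint(records):
    """Order-independent hash of a record set."""
    keys = sorted(json.dumps(r, sort_keys=True) for r in records)
    return hashlib.sha256("\n".join(keys).encode()).hexdigest()

def compare_runs(run_a, run_b):
    for entity in sorted(set(run_a) | set(run_b)):
        a, b = run_a.get(entity, []), run_b.get(entity, [])
        same = len(a) == len(b) and fingerprint(a) == fingerprint(b)
        print(f"{entity}: {'consistent' if same else 'DEVIATION'}")

run1 = {"customers": [{"id": 1, "name": "Alpha"}]}
run2 = {"customers": [{"id": 1, "name": "Alpha"}], "orders": [{"id": 9}]}
compare_runs(run1, run2)   # customers: consistent, orders: DEVIATION
```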

Reusability

Reusability is the degree to which the whole solution, parts of the logic or data can be reused for multiple purposes. Master data and the logic associated with them have high reusability potential as they tend to be referenced by multiple entities. 

Modularity

Modularity is the degree to which a solution is composed of discrete components such that a change to one component has minimal impact on other components. It applies to the solution itself, but also to the degree to which the logic for the various entities is partitioned so as to assure minimal impact.

Partitionability

Partitionability is the degree to which data or logic can be partitioned to address the various requirements. Despite the assurance that the data will be migrated only once, in practice this assumption can be easily invalidated. It’s enough for the system freeze to be extended by a few days and/or for transaction data to suddenly require master data not considered before. Even if the deltas can be migrated into the system manually, it’s recommended to migrate them using the same logic. Moreover, performing incremental loads can be a project requirement.

Data might need to be partitioned into batches to improve processing performance. Partitioning the logic based on certain parameters (e.g. business unit, categorical values) allows more flexibility in handling other requirements (e.g. reversibility, performance, testability, reusability).
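
As a minimal sketch of the batching idea (the field name and batch size below are illustrative assumptions), records can be grouped by a partitioning parameter and split into fixed-size batches:

```python
# Sketch: partition records into batches, grouped by a business key.
# The field name ("unit") and batch size are illustrative assumptions.
from itertools import groupby

def partition(records, key_field, batch_size):
    records = sorted(records, key=lambda r: r[key_field])
    for key, group in groupby(records, key=lambda r: r[key_field]):
        group = list(group)
        for i in range(0, len(group), batch_size):
            yield key, group[i:i + batch_size]

rows = [{"unit": "DE", "id": i} for i in range(5)] + \
       [{"unit": "US", "id": i} for i in range(3)]
for unit, batch in partition(rows, "unit", 2):
    print(unit, [r["id"] for r in batch])
```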

Performance

Performance refers to the degree to which a piece of software can process data in an amount of time considered acceptable for the business. It can vary with the architecture and methods used, as well as with data volume, veracity, variance, variability, or quality.

Performance is a critical requirement for a DM, especially when considering the amount of time spent on executing the logic during development, tests and troubleshooting, as well as for other activities. Performance is important during dry-runs but even more so during Go-Live, as it equates with a period during which the system(s) are not available for the users. Upon case, a few hours of delay can have an important impact on the business. In extremis, the delays can sum up to days.

Predictability

Predictability is the degree to which the results and behavior of a solution, respectively of the processes involved, are predictable based on the design, implementation or other factors considered (e.g. best practices, methodology used, experience, procedures and processes). Highly predictable solutions are desirable, though reaching the required level of performance and quality can be challenging.

The results from the dry-runs can offer an indication of whether the data migrated during UAT and Go-Live provide a certain level of assurance that the DM will be a success. Otherwise, an additional dry-run should be planned during UAT, if the schedule allows it.

Previous Post <<||>> Next Post

14 December 2019

🤝Governance: Control (Just the Quotes)

"To manage is to forecast and plan, to organize, to command, to coordinate and to control. To foresee and plan means examining the future and drawing up the plan of action. To organize means building up the dual structure, material and human, of the undertaking. To command means binding together, unifying and harmonizing all activity and effort. To control means seeing that everything occurs in conformity with established rule and expressed demand." (Henri Fayol, 1916)

"The concern of OR with finding an optimum decision, policy, or design is one of its essential characteristics. It does not seek merely to define a better solution to a problem than the one in use; it seeks the best solution... [It] can be characterized as the application of scientific methods, techniques, and tools to problems involving the operations of systems so as to provide those in control of the operations with optimum solutions to the problems." (C West Churchman et al, "Introduction to Operations Research", 1957)

"Management is a distinct process consisting of planning, organising, actuating and controlling; utilising in each both science and art, and followed in order to accomplish pre-determined objectives." (George R Terry, "Principles of Management", 1960)

"The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation." (Gene Amdahl et al, "Architecture of the IBM System", IBM Journal of Research and Development. Vol 8 (2), 1964)

"If cybernetics is the science of control, management is the profession of control." (Anthony S Beer, "Decision and Control", 1966)

"Most of our beliefs about complex organizations follow from one or the other of two distinct strategies. The closed-system strategy seeks certainty by incorporating only those variables positively associated with goal achievement and subjecting them to a monolithic control network. The open-system strategy shifts attention from goal achievement to survival and incorporates uncertainty by recognizing organizational interdependence with environment. A newer tradition enables us to conceive of the organization as an open system, indeterminate and faced with uncertainty, but subject to criteria of rationality and hence needing certainty." (James D Thompson, "Organizations in Action", 1967)

"Policy-making, decision-taking, and control: These are the three functions of management that have intellectual content." (Anthony S Beer, "Management Science" , 1968)

"The management of a system has to deal with the generation of the plans for the system, i. e., consideration of all of the things we have discussed, the overall goals, the environment, the utilization of resources and the components. The management sets the component goals, allocates the resources, and controls the system performance." (C West Churchman, "The Systems Approach", 1968)

"One difficulty in developing a good [accounting] control system is that quantitative results will differ according to the accounting principles used, and accounting principles may change." (Ernest Dale, "Readings in Management", 1970)

"To be productive the individual has to have control, to a substantial extent, over the speed, rhythm, and attention spans with which he is working […] While work is, therefore, best laid out as uniform, working is best organized with a considerable degree of diversity. Working requires latitude to change speed, rhythm, and attention span fairly often. It requires fairly frequent changes in operating routines as well. What is good industrial engineering for work is exceedingly poor human engineering for the worker." (Peter F Drucker, "Management: Tasks, Responsibilities, Practices", 1973)

"A mature science, with respect to the matter of errors in variables, is not one that measures its variables without error, for this is impossible. It is, rather, a science which properly manages its errors, controlling their magnitudes and correctly calculating their implications for substantive conclusions." (Otis D Duncan, "Introduction to Structural Equation Models", 1975)

"Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." (Charles Goodhart, "Problems of Monetary Management: the U.K. Experience", 1975)

"When information is centralized and controlled, those who have it are extremely influential. Since information is [usually] localized in control subsystems, these subsystems have a great deal of organization influence." (Henry L Tosi & Stephen J Carroll, "Management", 1976)

"[...] when a variety of tasks have all to be performed in cooperation, synchronization, and communication, a business needs managers and a management. Otherwise, things go out of control; plans fail to turn into action; or, worse, different parts of the plans get going at different speeds, different times, and with different objectives and goals, and the favor of the 'boss' becomes more important than performance." (Peter F Drucker, "People and Performance", 1977)

"Uncontrolled variation is the enemy of quality." (W Edwards Deming, 1980)

"The key mission of contemporary management is to transcend the old models which limited the manager's role to that of controller, expert or morale booster. These roles do not produce the desired result of aligning the goals of the employees and the corporation. [...] These older models, vestiges of a bygone era, have served their function and must be replaced with a model of the manager as a developer of human resources." (Michael Durst, "Small Systems World", 1985)

"The outcome of any professional's effort depends on the ability to control working conditions." (Joseph A Raelin, "Clash of Cultures: Managers and Professionals", 1986)

"Executives have to start understanding that they have certain legal and ethical responsibilities for information under their control." (Jim Leeke, PC Week, 1987)

"Give up control even if it means the employees have to make some mistakes." (Frank Flores, Hispanic Business, 1987)

"In complex situations, we may rely too heavily on planning and forecasting and underestimate the importance of random factors in the environment. That reliance can also lead to delusions of control." (Hillel J Einhorn & Robin M. Hogarth, Harvard Business Review, 1987)

"Managers exist to plan, direct and control the project. Part of the way they control is to listen to and weigh advice. Once a decision is made, that's the way things should proceed until a new decision is reached. Erosion of management decisions by [support] people who always 'know better' undermines managers' credibility and can bring a project to grief." (Philip W Metzger, "Managing Programming People", 1987)

"To be effective, a manager must accept a decreasing degree of direct control." (Eric G Flamholtz & Yvonne Randal, "The Inner Game of Management", 1987)

"[Well-managed modern organizations] treat everyone as a source of creative input. What's most interesting is that they cannot be described as either democratically or autocratically managed. Their managers define the boundaries, and their people figure out the best way to do the job within those boundaries. The management style is an astonishing combination of direction and empowerment. They give up tight control in order to gain control over what counts: results." (Robert H Waterman, "The Renewal Factor", 1987)

"We have created trouble for ourselves in organizations by confusing control with order. This is no surprise, given that for most of its written history, leadership has been defined in terms of its control functions." (Margaret J Wheatley, "Leadership and the New Science: Discovering Order in a Chaotic World", 1992)

"Management is not founded on observation and experiment, but on a drive towards a set of outcomes. These aims are not altogether explicit; at one extreme they may amount to no more than an intention to preserve the status quo, at the other extreme they may embody an obsessional demand for power, profit or prestige. But the scientist's quest for insight, for understanding, for wanting to know what makes the system tick, rarely figures in the manager's motivation. Secondly, and therefore, management is not, even in intention, separable from its own intentions and desires: its policies express them. Thirdly, management is not normally aware of the conventional nature of its intellectual processes and control procedures. It is accustomed to confuse its conventions for recording information with truths-about-the-business, its subjective institutional languages for discussing the business with an objective language of fact and its models of reality with reality itself." (Stanford Beer, "Decision and Control", 1994)

"Without some element of governance from the top, bottom-up control will freeze when options are many. Without some element of leadership, the many at the bottom will be paralysed with choices." (Kevin Kelly, "Out of Control: The New Biology of Machines, Social Systems and the Economic World", 1995)

"Management is a set of processes that can keep a complicated system of people and technology running smoothly. The most important aspects of management include planning, budgeting, organizing, staffing, controlling, and problem solving." (John P Kotter, "Leading Change", 1996) 

"The manager [...] is understood as one who observes the causal structure of an organization in order to be able to control it [...] This is taken to mean that the manager can choose the goals of the organization and design the systems or actions to realize those goals [...]. The possibility of so choosing goals and strategies relies on the predictability provided by the efficient and formative causal structure of the organization, as does the possibility of managers staying 'in control' of their organization's development. According to this perspective, organizations become what they are because of the choices made by their managers." (Ralph D Stacey et al, "Complexity and Management: Fad or Radical Challenge to Systems Thinking?", 2000)

"Success or failure of a project depends upon the ability of key personnel to have sufficient data for decision-making. Project management is often considered to be both an art and a science. It is an art because of the strong need for interpersonal skills, and the project planning and control forms attempt to convert part of the 'art' into a science." (Harold Kerzner, "Strategic Planning for Project Management using a Project Management Maturity Model", 2001)

"The premise here is that the hierarchy lines on the chart are also the only communication conduit. Information can flow only along the lines. [...] The hierarchy lines are paths of authority. When communication happens only over the hierarchy lines, that's a priori evidence that the managers are trying to hold on to all control. This is not only inefficient but an insult to the people underneath." (Tom DeMarco, "Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency", 2001)

"Management can be defined as the attainment of organizational goals in an effective and efficient manner through planning, organizing, staffing, directing, and controlling organizational resources." (Richard L Daft, "The Leadership Experience" 4th Ed., 2008)

"In a complex society, individuals, organizations, and states require a high degree of confidence - even if it is misplaced - in the short-term future and a reasonable degree of confidence about the longer term. In its absence they could not commit themselves to decisions, investments, and policies. Like nudging the frame of a pinball machine to influence the path of the ball, we cope with the dilemma of uncertainty by doing what we can to make our expectations of the future self-fulfilling. We seek to control the social and physical worlds not only to make them more predictable but to reduce the likelihood of disruptive and damaging shocks (e.g., floods, epidemics, stock market crashes, foreign attacks). Our fallback strategy is denial." (Richard N Lebow, "Forbidden Fruit: Counterfactuals and International Relations", 2010)

"Almost by definition, one is rarely privileged to 'control' a disaster. Yet the activity somewhat loosely referred to by this term is a substantial portion of Management, perhaps the most important part. […] It is the business of a good Manager to ensure, by taking timely action in the real world, that scenarios of disaster remain securely in the realm of Fantasy." (John Gall, "The Systems Bible: The Beginner's Guide to Systems Large and Small"[Systematics 3rd Ed.], 2011)

"Without precise predictability, control is impotent and almost meaningless. In other words, the lesser the predictability, the harder the entity or system is to control, and vice versa. If our universe actually operated on linear causality, with no surprises, uncertainty, or abrupt changes, all future events would be absolutely predictable in a sort of waveless orderliness." (Lawrence K Samuels, "Defense of Chaos", 2013)

"The problem of complexity is at the heart of mankind’s inability to predict future events with any accuracy. Complexity science has demonstrated that the more factors found within a complex system, the more chances of unpredictable behavior. And without predictability, any meaningful control is nearly impossible. Obviously, this means that you cannot control what you cannot predict. The ability ever to predict long-term events is a pipedream. Mankind has little to do with changing climate; complexity does." (Lawrence K Samuels, "The Real Science Behind Changing Climate", LewRockwell.com, August 1, 2014) 

30 December 2018

🔭Data Science: Testing (Just the Quotes)

"We must trust to nothing but facts: These are presented to us by Nature, and cannot deceive. We ought, in every instance, to submit our reasoning to the test of experiment, and never to search for truth but by the natural road of experiment and observation." (Antoin-Laurent de Lavoisiere, "Elements of Chemistry", 1790)

"A law of nature, however, is not a mere logical conception that we have adopted as a kind of memoria technical to enable us to more readily remember facts. We of the present day have already sufficient insight to know that the laws of nature are not things which we can evolve by any speculative method. On the contrary, we have to discover them in the facts; we have to test them by repeated observation or experiment, in constantly new cases, under ever-varying circumstances; and in proportion only as they hold good under a constantly increasing change of conditions, in a constantly increasing number of cases with greater delicacy in the means of observation, does our confidence in their trustworthiness rise." (Hermann von Helmholtz, "Popular Lectures on Scientific Subjects", 1873)

"A discoverer is a tester of scientific ideas; he must not only be able to imagine likely hypotheses, and to select suitable ones for investigation, but, as hypotheses may be true or untrue, he must also be competent to invent appropriate experiments for testing them, and to devise the requisite apparatus and arrangements." (George Gore, "The Art of Scientific Discovery", 1878)

"The preliminary examination of most data is facilitated by the use of diagrams. Diagrams prove nothing, but bring outstanding features readily to the eye; they are therefore no substitutes for such critical tests as may be applied to the data, but are valuable in suggesting such tests, and in explaining the conclusions founded upon them." (Sir Ronald A Fisher, "Statistical Methods for Research Workers", 1925)

"A scientist, whether theorist or experimenter, puts forward statements, or systems of statements, and tests them step by step. In the field of the empirical sciences, more particularly, he constructs hypotheses, or systems of theories, and tests them against experience by observation and experiment." (Karl R Popper, "The Logic of Scientific Discovery", 1934)

"Science, in the broadest sense, is the entire body of the most accurately tested, critically established, systematized knowledge available about that part of the universe which has come under human observation. For the most part this knowledge concerns the forces impinging upon human beings in the serious business of living and thus affecting man’s adjustment to and of the physical and the social world. […] Pure science is more interested in understanding, and applied science is more interested in control […]" (Austin L Porterfield, "Creative Factors in Scientific Research", 1941)

"To a scientist a theory is something to be tested. He seeks not to defend his beliefs, but to improve them. He is, above everything else, an expert at ‘changing his mind’." (Wendell Johnson, 1946)

"As usual we may make the errors of I) rejecting the null hypothesis when it is true, II) accepting the null hypothesis when it is false. But there is a third kind of error which is of interest because the present test of significance is tied up closely with the idea of making a correct decision about which distribution function has slipped furthest to the right. We may make the error of III) correctly rejecting the null hypothesis for the wrong reason." (Frederick Mosteller, "A k-Sample Slippage Test for an Extreme Population", The Annals of Mathematical Statistics 19, 1948)

"Errors of the third kind happen in conventional tests of differences of means, but they are usually not considered, although their existence is probably recognized. It seems to the author that there may be several reasons for this among which are 1) a preoccupation on the part of mathematical statisticians with the formal questions of acceptance and rejection of null hypotheses without adequate consideration of the implications of the error of the third kind for the practical experimenter, 2) the rarity with which an error of the third kind arises in the usual tests of significance." (Frederick Mosteller, "A k-Sample Slippage Test for an Extreme Population", The Annals of Mathematical Statistics 19, 1948)

"If significance tests are required for still larger samples, graphical accuracy is insufficient, and arithmetical methods are advised. A word to the wise is in order here, however. Almost never does it make sense to use exact binomial significance tests on such data - for the inevitable small deviations from the mathematical model of independence and constant split have piled up to such an extent that the binomial variability is deeply buried and unnoticeable. Graphical treatment of such large samples may still be worthwhile because it brings the results more vividly to the eye." (Frederick Mosteller & John W Tukey, "The Uses and Usefulness of Binomial Probability Paper?", Journal of the American Statistical Association 44, 1949)

"Statistics is the fundamental and most important part of inductive logic. It is both an art and a science, and it deals with the collection, the tabulation, the analysis and interpretation of quantitative and qualitative measurements. It is concerned with the classifying and determining of actual attributes as well as the making of estimates and the testing of various hypotheses by which probable, or expected, values are obtained. It is one of the means of carrying on scientific research in order to ascertain the laws of behavior of things - be they animate or inanimate. Statistics is the technique of the Scientific Method." (Bruce D Greenschields & Frank M Weida, "Statistics with Applications to Highway Traffic Analyses", 1952)

"The only relevant test of the validity of a hypothesis is comparison of prediction with experience." (Milton Friedman, "Essays in Positive Economics", 1953)

"The main purpose of a significance test is to inhibit the natural enthusiasm of the investigator." (Frederick Mosteller, "Selected Quantitative Techniques", 1954)

"The methods of science may be described as the discovery of laws, the explanation of laws by theories, and the testing of theories by new observations. A good analogy is that of the jigsaw puzzle, for which the laws are the individual pieces, the theories local patterns suggested by a few pieces, and the tests the completion of these patterns with pieces previously unconsidered." (Edwin P Hubble, "The Nature of Science and Other Lectures", 1954)

"Science is the creation of concepts and their exploration in the facts. It has no other test of the concept than its empirical truth to fact." (Jacob Bronowski, "Science and Human Values", 1956)

"Null hypotheses of no difference are usually known to be false before the data are collected [...] when they are, their rejection or acceptance simply reflects the size of the sample and the power of the test, and is not a contribution to science." (I Richard Savage, "Nonparametric statistics", Journal of the American Statistical Association 52, 1957)

"The well-known virtue of the experimental method is that it brings situational variables under tight control. It thus permits rigorous tests of hypotheses and confidential statements about causation. The correlational method, for its part, can study what man has not learned to control. Nature has been experimenting since the beginning of time, with a boldness and complexity far beyond the resources of science. The correlator’s mission is to observe and organize the data of nature’s experiments." (Lee J Cronbach, "The Two Disciplines of Scientific Psychology", The American Psychologist Vol. 12, 1957)

"A satisfactory prediction of the sequential properties of learning data from a single experiment is by no means a final test of a model. Numerous other criteria - and some more demanding - can be specified. For example, a model with specific numerical parameter values should be invariant to changes in independent variables that explicitly enter in the model." (Robert R Bush & Frederick Mosteller,"A Comparison of Eight Models?", Studies in Mathematical Learning Theory, 1959)

"One feature [...] which requires much more justification than is usually given, is the setting up of unplausible null hypotheses. For example, a statistician may set out a test to see whether two drugs have exactly the same effect, or whether a regression line is exactly straight. These hypotheses can scarcely be taken literally." (Cedric A B Smith, "Book review of Norman T. J. Bailey: Statistical Methods in Biology", Applied Statistics 9, 1960)

"The null-hypothesis significance test treats ‘acceptance’ or ‘rejection’ of a hypothesis as though these were decisions one makes. But a hypothesis is not something, like a piece of pie offered for dessert, which can be accepted or rejected by a voluntary physical action. Acceptance or rejection of a hypothesis is a cognitive process, a degree of believing or disbelieving which, if rational, is not a matter of choice but determined solely by how likely it is, given the evidence, that the hypothesis is true." (William W Rozeboom, "The fallacy of the null–hypothesis significance test", Psychological Bulletin 57, 1960)

"It is easy to obtain confirmations, or verifications, for nearly every theory - if we look for confirmations. Confirmations should count only if they are the result of risky predictions. […] A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice. Every genuine test of a theory is an attempt to falsify it, or refute it." (Karl R Popper, "Conjectures and Refutations: The Growth of Scientific Knowledge", 1963)

"The final test of a theory is its capacity to solve the problems which originated it." (George Dantzig, "Linear Programming and Extensions", 1963)

"The mediation of theory and praxis can only be clarified if to begin with we distinguish three functions, which are measured in terms of different criteria: the formation and extension of critical theorems, which can stand up to scientific discourse; the organization of processes of enlightenment, in which such theorems are applied and can be tested in a unique manner by the initiation of processes of reflection carried on within certain groups toward which these processes have been directed; and the selection of appropriate strategies, the solution of tactical questions, and the conduct of the political struggle. On the first level, the aim is true statements, on the second, authentic insights, and on the third, prudent decisions." (Jürgen Habermas, "Introduction to Theory and Practice", 1963)

"The null hypothesis of no difference has been judged to be no longer a sound or fruitful basis for statistical investigation. […] Significance tests do not provide the information that scientists need, and, furthermore, they are not the most effective method for analyzing and summarizing data." (Cherry A Clark, "Hypothesis Testing in Relation to Statistical Methodology", Review of Educational Research Vol. 33, 1963)

"The usefulness of the models in constructing a testable theory of the process is severely limited by the quickly increasing number of parameters which must be estimated in order to compare the predictions of the models with empirical results" (Anatol Rapoport, "Prisoner's Dilemma: A study in conflict and cooperation", 1965)

"The validation of a model is not that it is 'true' but that it generates good testable hypotheses relevant to important problems.” (Richard Levins, "The Strategy of Model Building in Population Biology”, 1966)

"Discovery always carries an honorific connotation. It is the stamp of approval on a finding of lasting value. Many laws and theories have come and gone in the history of science, but they are not spoken of as discoveries. […] Theories are especially precarious, as this century profoundly testifies. World views can and do often change. Despite these difficulties, it is still true that to count as a discovery a finding must be of at least relatively permanent value, as shown by its inclusion in the generally accepted body of scientific knowledge." (Richard J. Blackwell, "Discovery in the Physical Sciences", 1969)

"Science consists simply of the formulation and testing of hypotheses based on observational evidence; experiments are important where applicable, but their function is merely to simplify observation by imposing controlled conditions." (Henry L Batten, "Evolution of the Earth", 1971)

"A hypothesis is empirical or scientific only if it can be tested by experience. […] A hypothesis or theory which cannot be, at least in principle, falsified by empirical observations and experiments does not belong to the realm of science." (Francisco J Ayala, "Biological Evolution: Natural Selection or Random Walk", American Scientist, 1974)

"An experiment is a failure only when it also fails adequately to test the hypothesis in question, when the data it produces don't prove anything one way or the other." (Robert M Pirsig, "Zen and the Art of Motorcycle Maintenance", 1974)

"Science is systematic organisation of knowledge about the universe on the basis of explanatory hypotheses which are genuinely testable. Science advances by developing gradually more comprehensive theories; that is, by formulating theories of greater generality which can account for observational statements and hypotheses which appear as prima facie unrelated." (Francisco J Ayala, "Studies in the Philosophy of Biology: Reduction and Related Problems", 1974)

"A good scientific law or theory is falsifiable just because it makes definite claims about the world. For the falsificationist, If follows fairly readily from this that the more falsifiable a theory is the better, in some loose sense of more. The more a theory claims, the more potential opportunities there will be for showing that the world does not in fact behave in the way laid down by the theory. A very good theory will be one that makes very wide-ranging claims about the world, and which is consequently highly falsifiable, and is one that resists falsification whenever it is put to the test." (Alan F Chalmers,  "What Is This Thing Called Science?", 1976)

"Tests appear to many users to be a simple way to discharge the obligation to provide some statistical treatment of the data." (H V Roberts, "For what use are tests of hypotheses and tests of significance",  Communications in Statistics [Series A], 1976)

"Prediction can never be absolutely valid and therefore science can never prove some generalization or even test a single descriptive statement and in that way arrive at final truth." (Gregory Bateson, "Mind and Nature, A necessary unity", 1979)

"The fact must be expressed as data, but there is a problem in that the correct data is difficult to catch. So that I always say 'When you see the data, doubt it!' 'When you see the measurement instrument, doubt it!' [...]For example, if the methods such as sampling, measurement, testing and chemical analysis methods were incorrect, data. […] to measure true characteristics and in an unavoidable case, using statistical sensory test and express them as data." (Kaoru Ishikawa, Annual Quality Congress Transactions, 1981)

"All interpretations made by a scientist are hypotheses, and all hypotheses are tentative. They must forever be tested and they must be revised if found to be unsatisfactory. Hence, a change of mind in a scientist, and particularly in a great scientist, is not only not a sign of weakness but rather evidence for continuing attention to the respective problem and an ability to test the hypothesis again and again." (Ernst Mayr, "The Growth of Biological Thought: Diversity, Evolution and Inheritance", 1982)

"Theoretical scientists, inching away from the safe and known, skirting the point of no return, confront nature with a free invention of the intellect. They strip the discovery down and wire it into place in the form of mathematical models or other abstractions that define the perceived relation exactly. The now-naked idea is scrutinized with as much coldness and outward lack of pity as the naturally warm human heart can muster. They try to put it to use, devising experiments or field observations to test its claims. By the rules of scientific procedure it is then either discarded or temporarily sustained. Either way, the central theory encompassing it grows. If the abstractions survive they generate new knowledge from which further exploratory trips of the mind can be planned. Through the repeated alternation between flights of the imagination and the accretion of hard data, a mutual agreement on the workings of the world is written, in the form of natural law." (Edward O Wilson, "Biophilia", 1984)

"Models are often used to decide issues in situations marked by uncertainty. However statistical differences from data depend on assumptions about the process which generated these data. If the assumptions do not hold, the inferences may not be reliable either. This limitation is often ignored by applied workers who fail to identify crucial assumptions or subject them to any kind of empirical testing. In such circumstances, using statistical procedures may only compound the uncertainty." (David A Greedman & William C Navidi, "Regression Models for Adjusting the 1980 Census", Statistical Science Vol. 1 (1), 1986)

"Science has become a social method of inquiring into natural phenomena, making intuitive and systematic explorations of laws which are formulated by observing nature, and then rigorously testing their accuracy in the form of predictions. The results are then stored as written or mathematical records which are copied and disseminated to others, both within and beyond any given generation. As a sort of synergetic, rigorously regulated group perception, the collective enterprise of science far transcends the activity within an individual brain." (Lynn Margulis & Dorion Sagan, "Microcosmos", 1986)

"Beware of the problem of testing too many hypotheses; the more you torture the data, the more likely they are to confess, but confessions obtained under duress may not be admissible in the court of scientific opinion." (Stephen M. Stigler, "Neutral Models in Biology", 1987)

"Prediction can never be absolutely valid and therefore science can never prove some generalization or even test a single descriptive statement and in that way arrive at final truth." (Gregory Bateson, Mind and Nature: A necessary unity", 1988)

"Science doesn't purvey absolute truth. Science is a mechanism. It's a way of trying to improve your knowledge of nature. It's a system for testing your thoughts against the universe and seeing whether they match. And this works, not just for the ordinary aspects of science, but for all of life. I should think people would want to know that what they know is truly what the universe is like, or at least as close as they can get to it." (Isaac Asimov, [Interview by Bill Moyers] 1988)

"The heart of the scientific method is the problem-hypothesis-test process. And, necessarily, the scientific method involves predictions. And predictions, to be useful in scientific methodology, must be subject to test empirically." (Paul Davies, "The Cosmic Blueprint: New Discoveries in Nature's Creative Ability to, Order the Universe", 1988)

"Science doesn’t purvey absolute truth. Science is a mechanism, a way of trying to improve your knowledge of nature. It’s a system for testing your thoughts against the universe, and seeing whether they match." (Isaac Asimov, [interview with Bill Moyers in The Humanist] 1989)

"A little thought reveals a fact widely understood among statisticians: The null hypothesis, taken literally (and that’s the only way you can take it in formal hypothesis testing), is always false in the real world. [...] If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null hypothesis is always false, what’s the big deal about rejecting it?" (Jacob Cohen, "Things I Have Learned (So Far)", American Psychologist, 1990)

"How has the virtually barren technique of hypothesis testing come to assume such importance in the process by which we arrive at our conclusions from our data?" (Geoffrey R Loftus, "On the tyranny of hypothesis testing in the social sciences", Contemporary Psychology 36, 1991)

"On this view, we recognize science to be the search for algorithmic compressions. We list sequences of observed data. We try to formulate algorithms that compactly represent the information content of those sequences. Then we test the correctness of our hypothetical abbreviations by using them to predict the next terms in the string. These predictions can then be compared with the future direction of the data sequence. Without the development of algorithmic compressions of data all science would be replaced by mindless stamp collecting - the indiscriminate accumulation of every available fact. Science is predicated upon the belief that the Universe is algorithmically compressible and the modern search for a Theory of Everything is the ultimate expression of that belief, a belief that there is an abbreviated representation of the logic behind the Universe's properties that can be written down in finite form by human beings." (John D Barrow, New Theories of Everything", 1991)

"Scientists use mathematics to build mental universes. They write down mathematical descriptions - models - that capture essential fragments of how they think the world behaves. Then they analyse their consequences. This is called 'theory'. They test their theories against observations: this is called 'experiment'. Depending on the result, they may modify the mathematical model and repeat the cycle until theory and experiment agree. Not that it's really that simple; but that's the general gist of it, the essence of the scientific method." (Ian Stewart & Martin Golubitsky, "Fearful Symmetry: Is God a Geometer?", 1992)

"The amount of understanding produced by a theory is determined by how well it meets the criteria of adequacy - testability, fruitfulness, scope, simplicity, conservatism - because these criteria indicate the extent to which a theory systematizes and unifies our knowledge." (Theodore Schick Jr.,  "How to Think about Weird Things: Critical Thinking for a New Age", 1995)

"The science of statistics may be described as exploring, analyzing and summarizing data; designing or choosing appropriate ways of collecting data and extracting information from them; and communicating that information. Statistics also involves constructing and testing models for describing chance phenomena. These models can be used as a basis for making inferences and drawing conclusions and, finally, perhaps for making decisions." (Fergus Daly et al, "Elements of Statistics", 1995)

"Science is distinguished not for asserting that nature is rational, but for constantly testing claims to that or any other affect by observation and experiment." (Timothy Ferris, "The Whole Shebang: A State-of-the Universe’s Report", 1996)

"There are two kinds of mistakes. There are fatal mistakes that destroy a theory; but there are also contingent ones, which are useful in testing the stability of a theory." (Gian-Carlo Rota, [lecture] 1996)

"Validation is the process of testing how good the solutions produced by a system are. The results produced by a system are usually compared with the results obtained either by experts or by other systems. Validation is an extremely important part of the process of developing every knowledge-based system. Without comparing the results produced by the system with reality, there is little point in using it." (Nikola K Kasabov, "Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering", 1996)

"The rate of the development of science is not the rate at which you make observations alone but, much more important, the rate at which you create new things to test." (Richard Feynman, "The Meaning of It All", 1998)

"Let us regard a proof of an assertion as a purely mechanical procedure using precise rules of inference starting with a few unassailable axioms. This means that an algorithm can be devised for testing the validity of an alleged proof simply by checking the successive steps of the argument; the rules of inference constitute an algorithm for generating all the statements that can be deduced in a finite number of steps from the axioms." (Edward Beltrami, "What is Random?: Chaos and Order in Mathematics and Life", 1999)

"The greatest plus of data modeling is that it produces a simple and understandable picture of the relationship between the input variables and responses [...] different models, all of them equally good, may give different pictures of the relation between the predictor and response variables [...] One reason for this multiplicity is that goodness-of-fit tests and other methods for checking fit give a yes–no answer. With the lack of power of these tests with data having more than a small number of dimensions, there will be a large number of models whose fit is acceptable. There is no way, among the yes–no methods for gauging fit, of determining which is the better model." (Leo Breiman, "Statistical Modeling: The two cultures", Statistical Science 16(3), 2001)

"When significance tests are used and a null hypothesis is not rejected, a major problem often arises - namely, the result may be interpreted, without a logical basis, as providing evidence for the null hypothesis." (David F Parkhurst, "Statistical Significance Tests: Equivalence and Reverse Tests Should Reduce Misinterpretation", BioScience Vol. 51 (12), 2001)

"Visualizations can be used to explore data, to confirm a hypothesis, or to manipulate a viewer. [...] In exploratory visualization the user does not necessarily know what he is looking for. This creates a dynamic scenario in which interaction is critical. [...] In a confirmatory visualization, the user has a hypothesis that needs to be tested. This scenario is more stable and predictable. System parameters are often predetermined." (Usama Fayyad et al, "Information Visualization in Data Mining and Knowledge Discovery", 2002)

"There is a tendency to use hypothesis testing methods even when they are not appropriate. Often, estimation and confidence intervals are better tools. Use hypothesis testing only when you want to test a well-defined hypothesis." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)

"In science, for a theory to be believed, it must make a prediction - different from those made by previous theories - for an experiment not yet done. For the experiment to be meaningful, we must be able to get an answer that disagrees with that prediction. When this is the case, we say that a theory is falsifiable - vulnerable to being shown false. The theory also has to be confirmable, it must be possible to verify a new prediction that only this theory makes. Only when a theory has been tested and the results agree with the theory do we advance the statement to the rank of a true scientific theory." (Lee Smolin, "The Trouble with Physics", 2006)

"A type of error used in hypothesis testing that arises when incorrectly rejecting the null hypothesis, although it is actually true. Thus, based on the test statistic, the final conclusion rejects the Null hypothesis, but in truth it should be accepted. Type I error equates to the alpha (α) or significance level, whereby the generally accepted default is 5%." (Lynne Hambleton, "Treasure Chest of Six Sigma Growth Methods, Tools, and Best Practices", 2007)

"Each systems archetype embodies a particular theory about dynamic behavior that can serve as a starting point for selecting and formulating raw data into a coherent set of interrelationships. Once those relationships are made explicit and precise, the 'theory' of the archetype can then further guide us in our data-gathering process to test the causal relationships through direct observation, data analysis, or group deliberation." (Daniel H Kim, "Systems Archetypes as Dynamic Theories", The Systems Thinker Vol. 24 (1), 2013)

"In common usage, prediction means to forecast a future event. In data science, prediction more generally means to estimate an unknown value. This value could be something in the future (in common usage, true prediction), but it could also be something in the present or in the past. Indeed, since data mining usually deals with historical data, models very often are built and tested using events from the past." (Foster Provost & Tom Fawcett, "Data Science for Business", 2013)

"Another way to secure statistical significance is to use the data to discover a theory. Statistical tests assume that the researcher starts with a theory, collects data to test the theory, and reports the results - whether statistically significant or not. Many people work in the other direction, scrutinizing the data until they find a pattern and then making up a theory that fits the pattern." (Gary Smith, "Standard Deviations", 2014)

"Data clusters are everywhere, even in random data. Someone who looks for an explanation will inevitably find one, but a theory that fits a data cluster is not persuasive evidence. The found explanation needs to make sense and it needs to be tested with uncontaminated data." (Gary Smith, "Standard Deviations", 2014)

"Machine learning is a science and requires an objective approach to problems. Just like the scientific method, test-driven development can aid in solving a problem. The reason that TDD and the scientific method are so similar is because of these three shared characteristics: Both propose that the solution is logical and valid. Both share results through documentation and work over time. Both work in feedback loops." (Matthew Kirk, "Thoughtful Machine Learning", 2015)

"Science, at its core, is simply a method of practical logic that tests hypotheses against experience. Scientism, by contrast, is the worldview and value system that insists that the questions the scientific method can answer are the most important questions human beings can ask, and that the picture of the world yielded by science is a better approximation to reality than any other." (John M Greer, "After Progress: Reason and Religion at the End of the Industrial Age", 2015)

"The dialectical interplay of experiment and theory is a key driving force of modern science. Experimental data do only have meaning in the light of a particular model or at least a theoretical background. Reversely theoretical considerations may be logically consistent as well as intellectually elegant: Without experimental evidence they are a mere exercise of thought no matter how difficult they are. Data analysis is a connector between experiment and theory: Its techniques advise possibilities of model extraction as well as model testing with experimental data." (Achim Zielesny, "From Curve Fitting to Machine Learning" 2nd Ed., 2016)

"Bias is error from incorrect assumptions built into the model, such as restricting an interpolating function to be linear instead of a higher-order curve. [...] Errors of bias produce underfit models. They do not fit the training data as tightly as possible, were they allowed the freedom to do so. In popular discourse, I associate the word 'bias' with prejudice, and the correspondence is fairly apt: an apriori assumption that one group is inferior to another will result in less accurate predictions than an unbiased one. Models that perform lousy on both training and testing data are underfit." (Steven S Skiena, "The Data Science Design Manual", 2017)

"Early stopping and regularization can ensure network generalization when you apply them properly. [...] With early stopping, the choice of the validation set is also important. The validation set should be representative of all points in the training set. When you use Bayesian regularization, it is important to train the network until it reaches convergence. The sum-squared error, the sum-squared weights, and the effective number of parameters should reach constant values when the network has converged. With both early stopping and regularization, it is a good idea to train the network starting from several different initial conditions. It is possible for either method to fail in certain circumstances. By testing several different initial conditions, you can verify robust network performance." (Mark H Beale et al, "Neural Network Toolbox™ User's Guide", 2017)

"Scientists generally agree that no theory is 100 percent correct. Thus, the real test of knowledge is not truth, but utility." (Yuval N Harari, "Sapiens: A brief history of humankind", 2017)

"Variance is error from sensitivity to fluctuations in the training set. If our training set contains sampling or measurement error, this noise introduces variance into the resulting model. [...] Errors of variance result in overfit models: their quest for accuracy causes them to mistake noise for signal, and they adjust so well to the training data that noise leads them astray. Models that do much better on testing data than training data are overfit." (Steven S Skiena, "The Data Science Design Manual", 2017)

"[...] a hypothesis test tells us whether the observed data are consistent with the null hypothesis, and a confidence interval tells us which hypotheses are consistent with the data." (William C Blackwelder)
