Showing posts with label limitations.

17 July 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 54: Will AI Make Data Analysis Obsolete?)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on how AI may start making data analysis obsolete"

Introduction

Artificial intelligence (AI) is revolutionizing industries, automating tasks, and reshaping how businesses process information. One of the most debated topics in the data-driven world is whether AI will eventually make traditional data analysis obsolete. While AI is undeniably transforming the field, the reality is more nuanced than a simple replacement scenario.

The Rise of AI in Data Analysis

AI-powered analytics tools have demonstrated impressive capabilities in processing vast amounts of data, identifying patterns, and generating insights. These tools can:

  • Automate data cleaning and preprocessing, reducing the time analysts spend on tedious tasks.
  • Detect trends and anomalies faster than human analysts.
  • Generate predictive models that anticipate future outcomes based on historical data.
  • Provide real-time insights, allowing businesses to make quicker decisions.

AI-driven automation is particularly useful for repetitive tasks, such as sorting and structuring data, enabling analysts to focus on higher-level problem-solving.

How AI is Changing the Role of Data Analysts

Rather than making data analysts obsolete, AI is shifting their responsibilities. Analysts are increasingly becoming AI supervisors, guiding AI-generated insights, ensuring accuracy, and refining AI-driven solutions. Instead of manually analyzing every dataset, analysts are leveraging AI to enhance productivity and streamline workflows.

AI is also democratizing data analysis by enabling non-experts to generate insights using natural language queries. Low-code and no-code platforms powered by AI allow users to extract meaningful information without extensive technical knowledge. While this reduces the barrier to entry, it does not eliminate the need for skilled analysts who understand data integrity, business context, and strategic decision-making.

Limitations of AI in Data Analysis

Despite its advancements, AI still faces significant limitations in data analysis:

  • Lack of Contextual Understanding: AI can identify correlations, but it struggles with interpreting causation and business context. Human analysts bring intuition, industry expertise, and strategic thinking that AI cannot replicate.
  • Error-Prone Insights: AI-generated insights are not always reliable. Bias in training data, incorrect assumptions, and flawed algorithms can lead to misleading conclusions. Analysts play a crucial role in validating AI-generated findings.
  • Ethical and Security Concerns: AI-driven analytics must adhere to strict regulations regarding data privacy and security. Businesses need human oversight to ensure compliance with laws such as GDPR.
  • Complex Decision-Making: Large-scale business decisions require a combination of data-driven insights and human judgment. AI can assist in analysis, but human expertise is essential for interpreting results and making strategic choices.

The Future of Data Analysis in an AI-Driven World

While AI is transforming data analysis, it is unlikely to make analysts obsolete. Instead, the role of data professionals will evolve into a more strategic and supervisory position. Analysts will focus on guiding AI, ensuring data quality, and solving complex problems that require human intuition.

AI will continue to enhance productivity, automate repetitive tasks, and democratize data analysis, but human analysts will remain essential for innovation, security, and ethical decision-making. The future of data analysis is not about replacement - it’s about collaboration between AI and human intelligence.

Conclusion

AI is revolutionizing data analysis, but it is not eliminating the need for human analysts. Instead, it is reshaping the industry, making data processing more efficient and accessible while requiring human oversight for creativity, security, and complex problem-solving. Rather than fearing obsolescence, data analysts should embrace AI as a powerful tool that enhances their capabilities and expands the possibilities of data-driven decision-making.

Disclaimer: The whole text was generated by Copilot (under Windows 10) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, regardless of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.


13 July 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 51: Will AI Make Programmers Obsolete?)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on how AI may start making programmers obsolete"

Introduction

Artificial intelligence (AI) is rapidly transforming industries, automating tasks, and reshaping the way we interact with technology. One of the most debated topics in the tech world is whether AI will eventually replace programmers, making human coding skills obsolete. While AI-driven coding tools are becoming more advanced, the future of programming is more nuanced than a simple replacement scenario.

The Rise of AI in Software Development

AI-powered coding assistants, such as GitHub Copilot and OpenAI’s Codex, have demonstrated impressive capabilities in generating code, debugging, and optimizing software development workflows. These tools can analyze vast amounts of programming data, predict code structures, and even suggest solutions to complex problems.

AI-driven automation is particularly useful for repetitive coding tasks, such as writing boilerplate code, fixing syntax errors, and generating test cases. This efficiency allows developers to focus on higher-level problem-solving rather than spending time on routine coding tasks.

How AI is Changing the Role of Programmers

Rather than making programmers obsolete, AI is shifting the nature of programming. Developers are increasingly becoming AI supervisors, guiding AI-generated code, ensuring accuracy, and refining AI-driven solutions. Instead of writing every line of code manually, programmers are leveraging AI to enhance productivity and streamline development processes.

AI is also democratizing coding by enabling non-programmers to create software using natural language prompts. Low-code and no-code platforms powered by AI allow users to build applications without extensive programming knowledge. While this reduces the barrier to entry, it does not eliminate the need for skilled developers who understand system architecture, security, and optimization.

Limitations of AI in Programming

Despite its advancements, AI still faces significant limitations in software development:

  • Lack of Creativity and Contextual Understanding: AI can generate code based on patterns, but it struggles with creative problem-solving and understanding the broader context of a project. Human programmers bring intuition, innovation, and domain expertise that AI cannot replicate.
  • Error-Prone Code Generation: AI-generated code is not always reliable. It can introduce security vulnerabilities, logical errors, or inefficient solutions that require human oversight. Programmers play a crucial role in reviewing, testing, and refining AI-generated code.
  • Ethical and Security Concerns: AI-driven coding tools can inadvertently introduce biases, security risks, and compliance issues. Developers must ensure that AI-generated code aligns with ethical standards and industry regulations.
  • Complex Software Development: Large-scale software projects require architectural planning, debugging, and integration across multiple systems. AI can assist in coding, but human expertise is essential for designing robust and scalable solutions.

The Future of Programming in an AI-Driven World

While AI is transforming software development, it is unlikely to make programmers obsolete. Instead, the role of programmers will evolve into a more strategic and supervisory position. Developers will focus on guiding AI, ensuring code quality, and solving complex problems that require human intuition.

AI will continue to enhance productivity, automate repetitive tasks, and democratize coding, but human programmers will remain essential for innovation, security, and ethical decision-making. The future of programming is not about replacement - it’s about collaboration between AI and human intelligence.

Conclusion

AI is revolutionizing software development, but it is not eliminating the need for programmers. Instead, it is reshaping the industry, making coding more efficient and accessible while requiring human oversight for creativity, security, and complex problem-solving. Rather than fearing obsolescence, programmers should embrace AI as a powerful tool that enhances their capabilities and expands the possibilities of software development.

Disclaimer: The whole text was generated by Copilot (under Windows 10) at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, regardless of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.


04 July 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 41: How AI Can Play Devil’s Advocate - Challenging Assumptions and Expanding Perspectives)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on how AI can play devil's advocate"

Introduction

Artificial Intelligence (AI) is often seen as a tool for efficiency, automation, and problem-solving. However, one of its most intriguing capabilities is its ability to play devil’s advocate - challenging assumptions, questioning biases, and presenting alternative viewpoints. By acting as a skeptical counterbalance, AI can help individuals and organizations think critically, refine arguments, and explore diverse perspectives.

1. What Does It Mean to Play Devil’s Advocate?

Playing devil’s advocate means arguing against a prevailing opinion or assumption, even if one does not personally agree with the opposing stance. This approach is valuable in:

  • Debates and discussions, where opposing viewpoints strengthen arguments.
  • Decision-making, ensuring all possibilities are considered.
  • Problem-solving, where unconventional perspectives lead to innovative solutions.

AI, with its ability to analyze vast amounts of data and generate counterarguments, is uniquely positioned to take on this role.

2. How AI Challenges Confirmation Bias

One of AI’s most important functions as a devil’s advocate is breaking the confirmation bias loop - the tendency for people to seek out information that supports their existing beliefs while ignoring contradictory evidence. AI can:

  • Identify logical inconsistencies in arguments.
  • Present alternative viewpoints, even if they challenge popular opinions.
  • Encourage critical thinking by questioning assumptions.

By disrupting confirmation bias, AI helps individuals and organizations make more informed and balanced decisions.

3. AI in Decision-Making and Policy Development

AI-driven devil’s advocacy is particularly useful in policy-making, business strategy, and ethical debates. Some applications include:

  • Corporate decision-making: AI can highlight risks and alternative strategies before executives finalize plans.
  • Legal and ethical discussions: AI can present opposing viewpoints in debates about regulations and governance.
  • Scientific research: AI can challenge hypotheses, ensuring rigorous testing and validation.

By forcing individuals to consider alternative perspectives, AI enhances objectivity and rational decision-making.

4. AI’s Role in Amplifying Minority Voices

AI can also serve as a mediator for underrepresented perspectives, ensuring that minority viewpoints are heard in group discussions. AI-driven devil’s advocate systems:

  • Reduce social influence biases, preventing dominant voices from overshadowing others.
  • Encourage diverse perspectives, fostering more inclusive decision-making.
  • Improve psychological safety, allowing individuals to express dissenting opinions anonymously.

This approach ensures that critical discussions remain balanced and representative.

5. The Challenges of AI as a Devil’s Advocate

While AI can challenge assumptions, it also faces limitations:

  • AI models may reinforce biases if trained on skewed data.
  • AI lacks true human intuition, making some counterarguments overly rigid.
  • AI’s responses depend on prompts, requiring careful input to generate meaningful opposition.

To maximize AI’s effectiveness as a devil’s advocate, developers must ensure diverse training data and refine AI’s ability to engage in nuanced discussions.

Conclusion: AI as a Catalyst for Critical Thinking

AI’s ability to challenge assumptions, disrupt biases, and amplify diverse perspectives makes it a powerful tool for critical thinking and informed decision-making. By playing devil’s advocate, AI helps individuals and organizations refine arguments, explore alternative viewpoints, and make more balanced choices.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, regardless of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.


24 June 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 31: The Potential of AI to Reach Singularity)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on the potential of AI to reach singularity"

Introduction

Artificial Intelligence (AI) has evolved rapidly, transforming industries and reshaping human interactions. But one of the most debated questions in AI research is whether it will eventually reach singularity - a hypothetical point where AI surpasses human intelligence and becomes self-improving beyond human control. While some experts believe AI singularity is inevitable, others argue that fundamental limitations may prevent it from ever occurring.

1. What Is AI Singularity?

AI singularity refers to the moment when artificial intelligence exceeds human intelligence, leading to exponential advancements beyond human comprehension. At this stage, AI would be capable of:

  • Self-improvement: AI systems could rewrite their own code to enhance their capabilities.
  • Autonomous decision-making: AI could operate independently without human intervention.
  • Unprecedented problem-solving: AI could tackle complex global challenges faster than humans.

Some researchers predict AI singularity could occur as early as 2027, while others believe it may take decades or might never happen.

2. The Path to AI Singularity

Several technological advancements are driving AI toward singularity:

  • Artificial General Intelligence (AGI): Unlike today’s AI, which specializes in narrow tasks, AGI would be capable of reasoning, learning, and adapting across multiple domains.
  • Quantum Computing: AI powered by quantum processors could achieve computational speeds far beyond traditional computers.
  • Neural Networks and Deep Learning: AI models are becoming increasingly sophisticated, mimicking human brain functions more closely.

Experts suggest that once AGI is achieved, it could rapidly evolve into Artificial Super Intelligence (ASI) - a level of intelligence surpassing all human knowledge combined.

3. Challenges and Limitations

Despite AI’s rapid progress, several obstacles could delay or prevent singularity:

  • Lack of consciousness: AI lacks subjective experiences, emotions, and self-awareness, which are fundamental aspects of human intelligence.
  • Ethical and regulatory constraints: Governments and researchers may impose strict regulations to prevent AI from evolving uncontrollably.
  • Computational limitations: Even with advanced processors, AI may struggle to replicate the complexity of human cognition.

Some scientists argue that current AI models are a "dead end" for achieving human-level intelligence, suggesting that singularity may remain a theoretical concept.

4. The Risks and Implications of AI Singularity

If AI singularity were to occur, it could bring both unprecedented benefits and existential risks:

  • Positive outcomes: AI could revolutionize healthcare, solve climate change, and accelerate scientific discoveries.
  • Negative consequences: AI could become uncontrollable, leading to ethical dilemmas, economic disruption, and security threats.

The question remains: Will AI singularity be a technological utopia or an existential crisis?

Conclusion: Is AI Singularity Inevitable?

While AI continues to advance, reaching singularity remains uncertain. Some experts believe it could happen within the next few decades, while others argue that fundamental limitations will prevent AI from ever surpassing human intelligence. Regardless of the timeline, AI’s development must be guided by ethical considerations, regulatory oversight, and responsible innovation to ensure it benefits humanity rather than posing a threat.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, regardless of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.


30 May 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 6: Why There Are No Limits to AI Growth)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on why there are no limits to growth for AI"

Introduction

Artificial Intelligence (AI) has transcended its early stages of development and is now an integral part of business, science, and daily life. Unlike traditional technologies that have clear limits on scalability, AI operates under a unique framework - it evolves, learns, and refines itself over time. This adaptability ensures that AI's growth remains exponential, breaking past barriers that once seemed insurmountable.

AI’s Ability to Learn and Evolve

AI differs from other technologies because it isn’t bound by static programming - it learns and improves through machine learning (ML) and deep learning. These processes enable AI to:

  • Optimize itself over time, making operations more efficient.
  • Develop new patterns that even human researchers may not anticipate.
  • Enhance accuracy and predictive capabilities across industries.

Unlike traditional software, which requires manual updates and modifications, AI continuously refines itself based on real-world interactions. This ability ensures perpetual growth, keeping AI ahead of conventional limitations.

Computational Power Is Increasing Rapidly

One of the biggest factors enabling AI’s endless growth is the exponential increase in computational power. Advances in quantum computing, cloud-based GPUs, and edge computing allow AI models to process massive datasets faster than ever. The more computing resources AI has access to, the more complex problems it can solve.

  • Cloud computing democratizes AI, making it accessible to businesses worldwide.
  • Quantum computing will eventually enable AI to process unimaginable volumes of data, far surpassing current limitations.
  • Neuromorphic computing mimics the human brain, allowing AI models to improve efficiency while consuming less energy.

As technology progresses, AI's capabilities will continue expanding - faster, smarter, and more adaptable than ever before.

AI Has Infinite Applications Across Industries

AI's impact isn’t confined to a single sector; its growth is fueled by its versatility. From healthcare to finance, retail to space exploration, AI finds new ways to integrate into human progress.

  • In medicine, AI assists in diagnosing diseases, customizing treatments, and even predicting outbreaks.
  • In business, AI automates operations, reduces inefficiencies, and enhances data-driven decision-making.
  • In research, AI enables discoveries in fields like genetics, material science, and astrophysics.
  • In entertainment, AI creates realistic CGI, automates music composition, and personalizes media experiences.

Since AI isn’t restricted to one field, its expansion is self-perpetuating - each industry unlocks new AI-driven possibilities, further fueling innovation.

AI Growth Is Driven by Human Curiosity

AI’s evolution isn’t happening in isolation; human ingenuity fuels its progress. Researchers, engineers, and businesses continuously explore new ways to push AI beyond current capabilities.

  • Developers refine AI models to improve accuracy and efficiency.
  • Businesses invest billions in AI research, ensuring advancements continue.
  • Governments and educational institutions integrate AI into policies and learning frameworks to accelerate adoption.

As long as humans remain curious about solving problems, enhancing intelligence, and optimizing processes, AI will continue to grow indefinitely.

Ethical Challenges Won’t Halt AI Growth

While ethical concerns - such as AI bias, surveillance, and job displacement - must be addressed, they won’t stop AI’s progress. Instead, they shape its development, ensuring responsible AI integration.

  • Regulations will guide ethical AI use rather than stifle growth.
  • Bias reduction strategies will make AI more inclusive while improving accuracy.
  • AI augmentation will reshape workforces, enabling humans to focus on creative and strategic tasks.

Rather than presenting limitations, ethical considerations ensure that AI’s limitless growth remains aligned with human values and societal needs.

Conclusion: AI’s Infinite Expansion

Unlike previous technological revolutions that reached plateaus, AI’s adaptive nature, increasing computational power, and infinite applications ensure continued growth. With every breakthrough in machine learning, automation, and intelligent computing, AI inches closer to solving problems previously deemed impossible.

The future of AI is limitless - and as long as humans continue pushing boundaries, AI will evolve indefinitely.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, regardless of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.


29 May 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 5: Why There Are Limits to AI Growth)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on why there are limits to growth for AI"

Introduction

Artificial Intelligence (AI) is evolving rapidly, transforming industries and redefining how humans interact with technology. However, despite its advancements, AI is not without limitations. While many claim that AI’s potential is limitless, several fundamental constraints affect its development, usability, and ethical application. In this blog post, we explore why AI growth is not boundless and the factors that restrict its expansion.

Computational and Hardware Constraints

AI depends heavily on computational power to process large amounts of data efficiently. While improvements in processing speeds, cloud computing, and quantum computing are promising, hardware limitations remain a bottleneck.

  • Energy consumption: Training sophisticated AI models requires immense energy, leading to high costs and environmental concerns.
  • Hardware limitations: Even cutting-edge processors (GPUs, TPUs) struggle with increasingly complex AI models, limiting performance scalability.
  • Data storage needs: AI requires vast amounts of storage and memory, which can be difficult to scale beyond current technologies.

Despite ongoing advancements, these factors impose real-world limitations on AI expansion, preventing it from growing indefinitely.

Dependence on Data and Human Oversight

AI is fundamentally data-driven, meaning its accuracy and capabilities are only as good as the data it receives. However, data availability and quality present major challenges:

  • Data scarcity: Not all industries generate sufficient high-quality datasets, limiting AI effectiveness.
  • Bias in data: AI models can inherit human biases, leading to ethical concerns and flawed decision-making.
  • Human intervention: AI requires constant supervision for algorithm adjustments, data corrections, and error management.

Since AI models learn from existing human-created data, they are restricted by data inconsistencies, biases, and inaccuracies - creating natural growth barriers.

Ethical and Legal Restrictions

AI operates within a world governed by laws, regulations, and ethical constraints, which significantly limit its unrestricted development:

  • Privacy concerns: AI depends on personal data for optimization, but privacy regulations (GDPR, CCPA) restrict unchecked data usage.
  • Security risks: AI-driven automation increases the risk of hacking, misinformation, and deepfake fraud, requiring strict oversight.
  • Job displacement worries: Governments and institutions limit AI adoption to protect workers and avoid societal disruption.

As AI continues to advance, legal frameworks must adapt - but regulatory boundaries will always exist to prevent ethical breaches.

AI’s Lack of True Consciousness

Unlike humans, AI does not possess consciousness, intuition, or creativity in the way people do. While AI mimics intelligence, it lacks genuine understanding:

  • No self-awareness: AI does not truly “think”; it only processes patterns based on data inputs.
  • Limited creativity: AI can generate art and music, but it does not comprehend emotion, cultural context, or abstract thought.
  • Inability to make ethical judgments: AI lacks moral reasoning and cannot replace human values or ethical decision-making.

This inherent lack of cognitive ability prevents AI from achieving true autonomy, reinforcing its growth limitations.

Final Thoughts: AI’s Evolution Has Limits

While AI continues to expand, it is not limitless due to constraints in hardware, data, ethics, and intelligence itself.

AI can optimize processes, enhance decision-making, and revolutionize industries, but it will always require human oversight and face technical and ethical barriers.

Rather than seeing AI as boundless, we should focus on responsible development, ensuring it remains a powerful tool rather than an unchecked force.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, regardless of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.


26 April 2025

🏭🗒️Microsoft Fabric: Parameters in Dataflows Gen2 [Notes] 🆕

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 26-Apr-2025

[Microsoft Fabric] Dataflow Gen2 Parameters

  • {def} parameters that allow to dynamically control and customize Dataflows Gen2
    • makes them more flexible and reusable by enabling different inputs and scenarios without modifying the dataflow itself [1]
    • the dataflow is refreshed by passing parameter values outside of the Power Query editor through either
      • Fabric REST API [1]
      • native Fabric experiences [1]
    • parameter names are case sensitive [1]
    • {type} required parameters
      • {warning} the refresh fails if no value is passed for it [1]
    • {type} optional parameters
    • enabled via Parameters >> Enable parameters to be discovered and override for execution [1]
  • {limitation} dataflows with parameters can't be
    • scheduled for refresh through the Fabric scheduler [1]
    • manually triggered through the Fabric Workspace list or lineage view [1]
  • {limitation} parameters that affect the resource path of a data source or a destination are not supported [1]
    • ⇐ connections are linked to the exact data source path defined in the authored dataflow
      • can't currently be overridden to use other connections or resource paths [1]
  • {limitation} can't be leveraged by dataflows with incremental refresh [1]
  • {limitation} only parameters of type decimal number, whole number, text, and true/false can be passed for override
    • any other data types don't produce a refresh request in the refresh history but show in the monitoring hub [1]
  • {warning} parameters allow other users who have permissions to the dataflow to refresh the data with other values [1]
  • {limitation} refresh history does not display information about the parameters passed during the invocation of the dataflow [1]
  • {limitation} monitoring hub doesn't display information about the parameters passed during the invocation of the dataflow [1]
  • {limitation} staged queries only keep the last data refresh of a dataflow stored in the Staging Lakehouse [1]
  • {limitation} only the first request will be accepted from duplicated requests for the same parameter values [1]
    • subsequent requests are rejected until the first request finishes its evaluation [1]

References:
[1] Microsoft Learn (2025) Use public parameters in Dataflow Gen2 (Preview) [link]

Resources:
[R1] Microsoft Fabric Blog (2025) Passing parameter values to refresh a Dataflow Gen2 (Preview) [link]

Acronyms:
API - Application Programming Interface
REST - Representational State Transfer

13 March 2025

🏭🗒️Microsoft Fabric: Workspaces [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 25-Mar-2025

[Microsoft Fabric] Workspace

  • {def} a collection of items that brings together different functionality in a single environment designed for collaboration
  • {default} created in organization's shared capacity
    • workspaces can be assigned to other capacities
      • includes My Workspaces
      • via Workspace settings >> Premium
    • shared environment that holds live items
      • any changes made directly within the workspace immediately take effect and impact all users
  • components
    • header
      • contains
        • name 
        • brief description of the workspace
        • links to other functionality
    • toolbar 
      • contains 
        • controls for managing items to the workspace 
        • controls for managing files
    • view area
      • enables selecting a view
      • {type} list view
        • {subitem} task flow
          • area in which users can create or view a graphical representation of the data project [3]
            • ⇐ shows the logical flow of the project [3]
              • ⇐ it doesn't show the flow of data [3]
          • can be hidden via the Show/Hide arrows
        • {subitem} items list
          • area in which the users can see the items and folders in the workspace [3]
          • one can filter the items list by selecting the tasks, if any are defined [3]
        • {subitem} resize bar
          • elements that allow to resize the task flow and items list by dragging the resize bar up or down [3]
      • {type} lineage view
        • shows the flow of data between the items in the workspace [3]
  • {feature} workspace settings 
    • allows to manage and update the workspace [3]
  • {feature} contact list 
    • allows to specify which users receive notification about issues occurring in the workspace [3] 
    • {default} contains workspace's creator [3]
  • {feature} SharePoint integration 
    • allows to configure a M365 Group whose SharePoint document library is available to workspace users [3]
      • ⇐ the group is created outside of MF first [3]
      • restrictions may apply to the environment
    • {best practice} give access to the workspace to the same M365 Group whose file storage is configured [3]
      • MF doesn't synchronize permissions between users or groups with workspace access, and users or groups with M365 Group membership [3]
  • {feature} workspace identity
    • an automatically managed service principal that can be associated with a Fabric workspace [6]
      • workspaces with a workspace identity can securely read or write to firewall-enabled ADLS Gen2 accounts through trusted workspace access for OneLake shortcuts [6]
      • Fabric creates a service principal in Microsoft Entra ID to represent the identity [6]
        • ⇐ an accompanying app registration is also created [6]
        • Fabric automatically manages the credentials associated with workspace identities [6]
          • ⇒ prevents credential leaks and downtime due to improper credential handling [6]
    • used to obtain Microsoft Entra tokens without the customer having to manage any credentials [6]
      • Fabric items can use the identity when connecting to resources that support Microsoft Entra authentication [6]
    • can be created in the workspace settings of any workspace except My workspaces
    • automatically assigned to the workspace contributor role and has access to workspace items [6]
  • {feature} workspace roles
    • allows to manage who can do what in a workspace [4]
    • sit on top of OneLake and divide the data lake into separate containers that can be secured independently [4]
    • extend the Power BI workspace roles by associating new MF capabilities 
      • e.g. data integration, data exploration
    • can be assigned to 
      • individual users
      • security groups
      • Microsoft 365 groups
      • distribution lists
    • {role} Admin
    • {role} Member
    • {role} Contributor
    • {role} Viewer
    • user groups
      • members get the role(s) assigned
      • users existing in several groups get the highest level of permission that's provided by the roles they're assigned [4]
      • {concept} [nested group]
  • {concept} current workspace
    • the active open workspace
  • {action} create new workspace
  • {action} pin workspace
  • {action} delete workspace
    • everything contained within the workspace is deleted for all group members [3]
      • the associated app is also removed from AppSource [3]
    • {warning} if the workspace has a workspace identity, that workspace identity will be irretrievably lost [3]
      • this may cause Fabric items relying on the workspace identity for trusted workspace access or authentication to break [3]
    • only admins can perform the operation
  • {action} manage workspace
  • {action} take ownership of Fabric items
    • Fabric items may stop working correctly [5]
      • {scenario} the owner leaves the organization [5]
      • {scenario} the owner doesn't sign in for more than 90 days [5]
      • in such cases, anyone with read and write permissions on an item can take ownership of the item [5]
        • and become the owner of any child items the item might have
        • {limitation} one can't take over ownership of child items directly [5]
          • ⇐ one can take ownership only through the parent item [5]
  • {limitation} workspace retention
    • personal workspace
      • {default} 30 days [8]
    • collaborative workspace
      • {default} 7 days [8]
      • configurable between 7 to 90 days [8]
        • via Define workspace retention period setting [8]
    • during the retention period, Fabric administrators can restore or permanently delete the workspace [8]
      • after that, the workspace is deleted permanently and it and its contents are irretrievably lost [8]
  • {limitation} can contain a maximum of 1000 items [8]
    • includes both parent and child items [8]
      • refers to Fabric and Power BI items [8]
    • workspaces with more items have a 180-day extension period applied automatically as of 10-Apr-2025 [8]
  • {limitation} certain special characters aren't supported in workspace names when using an XMLA endpoint [3]
  • {limitation} a user or a service principal can be a member of up to 1000 workspaces [3]
  • {feature} auditing
    • several activities are audited for workspaces [3]
      • CreateFolder
      • DeleteFolder
      • UpdateFolder
      • UpdateFolderAccess
  • {feature} workspace monitoring 
    • a secure, read-only Eventhouse database that collects and organizes logs and metrics from a range of Fabric items in the workspace [1]
      • accessible only to workspace users with at least a contributor role [1]
      • users can access and analyze logs and metrics [1]
      • the data is aggregated or detailed [1]
      • can be queried via KQL or SQL [1]
      • supports both historical log analysis and real-time data streaming [1]
      • accessible from the workspace [1]
        • one can build and save query sets and dashboards to simplify data exploration [1]
      • use the workspace settings to delete the database [1]
        • wait about 15 minutes before recreating a deleted database [1]
    • {action} share the database
      • users need workspace member or admin role [1]
    • {limitation} one can enable either 
      • workspace monitoring 
      • log analytics
        • if enabled, the log analytics configuration must be deleted first before enabling workspace monitoring [1]
          • one should wait for a few hours before enabling workspace monitoring [1]
    • {limitation} retention period for monitoring data: 30 days [1]
    • {limitation} the ingestion can't be configured to filter for a specific log type or category [1]
      • e.g. error or workload type.
    • {limitation} user data operation logs aren't available even though the table is available in the monitoring database [1]
    • {prerequisite} Power BI Premium or Fabric capacity [1]
    • {prerequisite} the 'Workspace admins can turn on monitoring for their workspaces' tenant setting is enabled [1]
      • enabling the setting, requires Fabric administrator rights [1]
    • {prerequisite} admin role in the workspace [1]
  • workspace permissions
    • the first security boundary for data within OneLake [7]
      • each workspace represents a single domain or project area where teams can collaborate on data [7]
      • security is managed through Fabric workspace roles [7]
    • items can have permissions configured separately from the workspace roles [7]
      • permissions can be configured either by [7]
        • sharing an item 
        • managing the permissions of an item

References:
[1] Microsoft Learn (2024) Fabric: What is workspace monitoring (preview)? [link]
[2] Microsoft Fabric Update Blog (2024) Announcing preview of Workspace Monitoring? [link]
[3] Microsoft Learn (2024) Fabric: Workspaces in Microsoft Fabric and Power BI [link]
[4] Microsoft Learn (2024) Fabric: Roles in workspaces in Microsoft Fabric [link]
[5] Microsoft Learn (2024) Fabric: Take ownership of Fabric items [link]
[6] Microsoft Learn (2024) Fabric: Workspace identity [link]
[7] Microsoft Learn (2024) Fabric: Role-based access control (RBAC) [link]
[8] Microsoft Learn (2024) Fabric: Manage workspaces [link]

Resources:
[R1] Microsoft Learn (2025) Fabric: What's new in Microsoft Fabric? [link]

Acronyms:
ADLS Gen2 - Azure Data Lake Storage Gen2
KQL - Kusto Query Language
M365 - Microsoft 365
MF - Microsoft Fabric
SQL - Structured Query Language

03 March 2025

💎🤖SQL Reloaded: Copilot Stories (Part VIII: Recursive Functions)

In theory, recursive functions prove useful in generating a list of numbers based on a formula in which each result builds on the result of the previous step. In other words, a problem is solved by calculating the previous iteration of the same problem. In practice, they are limited by the number of iterations allowed - a hard limit in SQL Server - by the available resources (usually memory), and probably a few other factors. Of course, some algorithms can be implemented to overcome the limitations, while other times there are quite simple linear implementations based on a loop, as in the examples below. Therefore, recursive functions are more of a theoretical concept.

Disclaimer: The below code was generated by Copilot via prompts and is used for demonstration purposes!

This post evaluates whether Microsoft 365 and Edge Copilot can be used to generate the code for some recursive functions. 

Case 1: Factorials

Let's start with the factorial that calculates the product of the n consecutive numbers.

Prompt:
Generate a recursive function in SQL Server to calculate the factorial of a number.

Output (M365 Copilot):

-- factorial function in Microsoft 365 Copilot
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    IF @n <= 1
        RETURN 1
    ELSE
        RETURN @n * dbo.Factorial(@n - 1)
END

Output (Edge Copilot):

-- factorial function in Edge Copilot
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    IF @n = 0 OR @n = 1
        RETURN 1
    ELSE
        RETURN @n * dbo.Factorial(@n - 1)
END;

The structure seems correct, and the return type was correctly identified as bigint, which allows the largest range when working with integer values. Unfortunately, both functions lead to the same error in a SQL database:

"Msg 455, Level 16, State 2, Line 9, The last statement included within a function must be a return statement."

The formulas can be consolidated in a single statement as follows:

-- factorial function adapted (minimal)
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    RETURN 
        CASE 
            WHEN @n = 0 OR @n = 1 THEN 1
        ELSE
            @n * dbo.Factorial(@n - 1) END
END;

Especially when formulas become complex, it might be a good idea to store the result in a variable and use it in the RETURN statement.

-- factorial function adapted
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    DECLARE @retval as bigint;

    IF @n = 0 OR @n = 1
        SET @retval = 1
    ELSE
        SET @retval = @n * dbo.Factorial(@n - 1)

    RETURN @retval
END;

And, here are some tests for the function:

-- testing the functions for multiple values
SELECT dbo.Factorial (0) -- returns 1
SELECT dbo.Factorial (1) -- returns 1
SELECT dbo.Factorial (2) -- returns 2
SELECT dbo.Factorial (3) -- returns 6
SELECT dbo.Factorial (4) -- returns 24
SELECT dbo.Factorial (5) -- returns 120
SELECT dbo.Factorial (20) -- returns 2432902008176640000
SELECT dbo.Factorial (21) -- returns an error for all bigger values

As can be seen, the function returns an error for values bigger than 20, the output being limited by the data type used. One can build further constraints to test for the maximum value.

Use the following code to drop the function as needed:

-- dropping the function
DROP FUNCTION IF EXISTS dbo.Factorial 

The function's implementation has, however, another important caveat. If negative numbers are provided as parameters, the following error message is thrown:

"Msg 217, Level 16, State 1, Line 1, Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32)."

The final function can be defined as:

-- factorial function adapted (final)
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    DECLARE @retval as bigint;

    IF @n BETWEEN 0 AND 20
        IF @n = 0 OR @n = 1
            SET @retval = 1
        ELSE
            SET @retval = @n * dbo.Factorial(@n - 1)

    RETURN @retval
END;
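
With the range check in place, values outside the 0-20 interval never assign @retval, so the function returns NULL instead of recursing indefinitely or overflowing. A quick sanity check (a sketch, assuming the final version above was created):

-- testing the guarded function
SELECT dbo.Factorial (-1) -- returns NULL instead of the nesting error
SELECT dbo.Factorial (21) -- returns NULL instead of an arithmetic overflow
SELECT dbo.Factorial (10) -- returns 3628800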

One can linearize the function as a loop, though the limitation with the maximum value still exists:

Prompt:
Rewrite the factorial function using a loop.

Output (M365 Copilot):

-- non-recursive function to calculate the factorial of a number using a loop:
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS BIGINT
AS
BEGIN
    DECLARE @result BIGINT = 1;
    DECLARE @i INT = 1;

    WHILE @i <= @n
    BEGIN
        SET @result = @result * @i;
        SET @i = @i + 1;
    END

    RETURN @result;
END

-- test the code
SELECT dbo.Factorial(5) AS FactorialResult; -- result 120

One can obtain some improvement by changing the return type and @result from bigint to float, though the results are always approximations. Here are the approximated results:

-- test code
SELECT dbo.Factorial(20) AS FactorialResult; -- result 2432902008176640000
SELECT dbo.Factorial(21) AS FactorialResult; -- result 5.109094217170944E+19
SELECT dbo.Factorial(22) AS FactorialResult; -- result 1.1240007277776077E+21
SELECT dbo.Factorial(23) AS FactorialResult; -- result 2.585201673888498E+22
SELECT dbo.Factorial(24) AS FactorialResult; -- result 6.204484017332394E+23
SELECT dbo.Factorial(25) AS FactorialResult; -- result 1.5511210043330986E+25
SELECT dbo.Factorial(26) AS FactorialResult; -- result 4.0329146112660565E+26
SELECT dbo.Factorial(27) AS FactorialResult; -- result 1.0888869450418352E+28
SELECT dbo.Factorial(28) AS FactorialResult; -- result 3.0488834461171384E+29
SELECT dbo.Factorial(29) AS FactorialResult; -- result 8.841761993739701E+30
SELECT dbo.Factorial(30) AS FactorialResult; -- result 2.6525285981219103E+32
SELECT dbo.Factorial(31) AS FactorialResult; -- result 8.222838654177922E+33
SELECT dbo.Factorial(32) AS FactorialResult; -- result 2.631308369336935E+35
SELECT dbo.Factorial(170) AS FactorialResult; -- result 7.257415615307994E+306
SELECT dbo.Factorial(171) AS FactorialResult; -- results in arithmetic overflow
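
For reference, the float-based variant that produced the above numbers is not listed; here's a minimal sketch of the change (assuming the bigint version is dropped first, since the return type differs):

-- factorial function using float (sketch; drop the bigint version first)
CREATE FUNCTION dbo.Factorial (@n INT)
RETURNS FLOAT
AS
BEGIN
    DECLARE @result FLOAT = 1;
    DECLARE @i INT = 1;

    WHILE @i <= @n
    BEGIN
        SET @result = @result * @i;
        SET @i = @i + 1;
    END

    RETURN @result;
END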

Another structure that allows creating recursive logic is the common table expression (CTE).

Prompt:
Create a recursive CTE in SQL Server for calculating n factorial.

Output (M365 Copilot):

-- n factorial as CTE
WITH FactorialCTE AS (
    -- Anchor member: factorial of 0 is 1
    SELECT 0 AS n, 1 AS factorial
    UNION ALL
    -- Recursive member: factorial of n is n * factorial of (n-1)
    SELECT n + 1, (n + 1) * factorial
    FROM FactorialCTE
    WHERE n < 10 -- Change this value to calculate factorial for larger numbers
)
SELECT n, factorial
FROM FactorialCTE;

One can cast the value in the anchor member to bigint or float, as above, with all the constraints resulting from this:

-- n factorial as CTE
WITH FactorialCTE AS (
    -- Anchor member: factorial of 0 is 1
    SELECT 0 AS n, Cast(1 as bigint) AS factorial
    UNION ALL
    -- Recursive member: factorial of n is n * factorial of (n-1)
    SELECT n + 1, (n + 1) * factorial
    FROM FactorialCTE
    WHERE n < 20 -- Change this value to calculate factorial for larger numbers
)
SELECT n, factorial
FROM FactorialCTE;
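
Worth noting: SQL Server limits recursive CTEs to 100 recursion levels by default (Msg 530 when exceeded), so the WHERE condition can't simply be raised much further without also adding the MAXRECURSION query hint. Here's a sketch that combines the hint with the float cast to push the series to its numeric limit:

-- n factorial as CTE without the default recursion limit
WITH FactorialCTE AS (
    -- Anchor member: factorial of 0 is 1
    SELECT 0 AS n, CAST(1 AS float) AS factorial
    UNION ALL
    -- Recursive member: factorial of n is n * factorial of (n-1)
    SELECT n + 1, (n + 1) * factorial
    FROM FactorialCTE
    WHERE n < 170 -- float overflows for 171! and beyond
)
SELECT n, factorial
FROM FactorialCTE
OPTION (MAXRECURSION 0); -- 0 removes the 100-level cap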

Case 2: Approximation of π

Power laws can be used to approximate the value of π and other irrational constants. Implementing such a series via recursive functions has the same constraints as above, though there are alternatives (including storing the actual value as a constant).

Prompt:
Generate a recursive function in SQL Server to approximate the value of π using power laws.

Output (M365 Copilot):

-- approximate the value of π using this series:
CREATE FUNCTION dbo.ApproximatePi (@n INT)
RETURNS FLOAT
AS
BEGIN
    IF @n = 0
        RETURN 4.0
    ELSE
        RETURN 4.0 * POWER(-1, @n) / (2 * @n + 1) + dbo.ApproximatePi(@n - 1)
END

-- test the function
SELECT dbo.ApproximatePi(1000) AS PiApproximation;

The function's definition has the same issue as the previous example; it needs to be rewritten as a single RETURN statement:

-- approximate the value of π using this series (rewritten)
CREATE FUNCTION dbo.ApproximatePi (@n INT)
RETURNS FLOAT
AS
BEGIN
    RETURN
        CASE 
            WHEN @n = 0 THEN 4.0
            ELSE
                4.0 * POWER(-1, @n) / (2 * @n + 1) + dbo.ApproximatePi(@n - 1)
        END
END

-- testing the function
SELECT dbo.ApproximatePi (0) -- output: 4
SELECT dbo.ApproximatePi (1) -- output: 2.666666666667
SELECT dbo.ApproximatePi (2) -- output: 3.466666666667
SELECT dbo.ApproximatePi (3) -- output: 2.8952380952389998
SELECT dbo.ApproximatePi (10) -- output: 3.232315809407
SELECT dbo.ApproximatePi (20) -- output: 3.189184782279
SELECT dbo.ApproximatePi (30) -- output: 3.1738423371919993
SELECT dbo.ApproximatePi (31) -- output: 3.110350273699999
SELECT dbo.ApproximatePi (32) -- output: 3.171888735237999
SELECT dbo.ApproximatePi (33) -- throws error

After 32 iterations the call throws the same error as above. Fortunately, such functions can be (re)written using non-recursive implementations, which is the proper way to implement such functions:

Prompt:
Implement the function without using recursive functions.

Output (M365 Copilot):

-- approximation within loop
CREATE FUNCTION dbo.ApproximatePi (@terms INT)
RETURNS FLOAT
AS
BEGIN
    DECLARE @pi FLOAT = 0.0;
    DECLARE @i INT = 0;

    WHILE @i < @terms
    BEGIN
        SET @pi = @pi + 4.0 * POWER(-1, @i) / (2 * @i + 1);
        SET @i = @i + 1;
    END

    RETURN @pi;
END 

This implementation is more flexible, and a similar approach can be used for other series. And, here are the results for various parameters:

-- testing the function
SELECT dbo.ApproximatePi(1000) AS PiApproximation; -- result: 3.140592653840002
SELECT dbo.ApproximatePi(10000) AS PiApproximation; -- result: 3.141492653571029
SELECT dbo.ApproximatePi(100000) AS PiApproximation; -- result: 3.141582653466945
SELECT dbo.ApproximatePi(1000000) AS PiApproximation; -- result: 3.1415916535590713

The last call is close, but probably not close enough, especially when one needs to use the function in multiple places. There are more complex power series that converge faster. The direct use of the pi() function is usually recommended:

-- using the pi function
SELECT pi(); -- 3.141592653589793
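
The remaining gap can be quantified by comparing the two directly (assuming the loop-based function above hasn't been dropped yet):

-- measuring the approximation error
SELECT pi() - dbo.ApproximatePi(1000000) AS ApproximationError; -- about 1E-06 after a million terms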

Moreover, some series can be reduced to formulas based on constants like π, e or Φ.

And, a final piece of code:

-- drop the function
DROP FUNCTION IF EXISTS dbo.ApproximatePi

One can use a CTE also in this case:

Prompt:
Create a recursive CTE in SQL Server to approximate the value of π using power laws.

Output (M365 Copilot):

-- series via recursive CTE
WITH PiApproximationCTE AS (
    -- Anchor member: start with k = 0
    SELECT 0 AS k, CAST(4.0 AS FLOAT) AS term, CAST(4.0 AS FLOAT) AS pi_approx
    UNION ALL
    -- Recursive member: calculate the next term and add it to the approximation
    SELECT 
        k + 1,
        CAST(4.0 * POWER(-1, k + 1) / (2 * (k + 1) + 1) AS FLOAT),
        pi_approx + CAST(4.0 * POWER(-1, k + 1) / (2 * (k + 1) + 1) AS FLOAT)
    FROM PiApproximationCTE
    WHERE k < 1000 -- Change this value to increase the number of terms
)
SELECT k, pi_approx
FROM PiApproximationCTE
OPTION (MAXRECURSION 0); -- Allow for more recursion levels if needed

This time Copilot generated a different approach for calculating the same value. (The output is too long to include here, as one record is generated for each iteration.)
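
If only the final approximation is of interest, the same CTE can be made to return a single row (a sketch based on Copilot's query above; the MAXRECURSION hint is needed once the term count exceeds 100):

-- returning only the last iteration
WITH PiApproximationCTE AS (
    -- Anchor member: start with k = 0
    SELECT 0 AS k, CAST(4.0 AS FLOAT) AS term, CAST(4.0 AS FLOAT) AS pi_approx
    UNION ALL
    -- Recursive member: calculate the next term and add it to the approximation
    SELECT 
        k + 1,
        CAST(4.0 * POWER(-1, k + 1) / (2 * (k + 1) + 1) AS FLOAT),
        pi_approx + CAST(4.0 * POWER(-1, k + 1) / (2 * (k + 1) + 1) AS FLOAT)
    FROM PiApproximationCTE
    WHERE k < 1000 -- number of terms
)
SELECT TOP 1 k, pi_approx
FROM PiApproximationCTE
ORDER BY k DESC -- keep only the row for k = 1000
OPTION (MAXRECURSION 0);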

Happy coding!
