27 May 2025

#️⃣Software Engineering: Mea Culpa (Part X: A Look Beyond AI)

Software Engineering Series

What’s the point of blogging when AI can do a better job than the average blogger? Frankly, one can generate a huge volume of content comparable with that of the best bloggers. One just needs a set of well-suited prompts and a bit of formatting, though even the formatting can be handled through prompts. So, what’s the point of blogging anyway? The more AI takes over, the less space is left for bloggers! To use a trendy way of formulating titles: is blogging dead? I really hope not!

I’ve been blogging for almost 20 years. Even if I haven’t managed to blog consistently, given the volume of work and learning, I still made it over time! Often it was late in the night, in between other activities, or a bit of work while watching a football game. With AI, life changes more than we want it to. AI can currently write about 20-30% of the code, at least the initial structure to build upon, and it saves a lot of time!

So, does the average programmer need to adapt to the new reality or die?! Becoming unimportant in the programming equation is equivalent to a slow death witnessed on the job. Of course, when used wisely AI can provide higher productivity, at least for the moment, though what will happen when the average programmer is not able to keep pace? What will happen when we can’t review what AI does for us? Probably, we’ll be able to build a smarter AI that can evaluate the output of a less smart AI, though what will be the limit?

Just playing the devil’s advocate, though what happens when the devil outsmarts us? Cases of programmers who lost their jobs because of AI appear more frequently in the news. This is probably the new reality we need to get accustomed to. AI can write better code, better stories and probably make faster discoveries than the average programmer, scientist, or any other professional in whose field AI can be used.

Of course, we can learn to use AI to our advantage, though how many will be capable of doing that? Many programmers will lose their jobs because of AI. Probably, the ones who are still better problem solvers than AI will remain in the business, though until when?! The ones who change jobs early will probably be better adapted to the new paradigm, though at what price? Who will be able to pay the price?

In the first instance, programmers are the ones who’ll suffer, but the chances are high that AI will take over other jobs as well. In teaching and other professions the shift could be from knowledge transmission to supervision, from creation to simply standing by. Does it make sense? Does it help us live a decent life? Does it help us in the long term?

Probably, all the jobs that need a human touch will still be needed, though it’s challenging to predict how everything will evolve, in which directions, on what terms. The problem is not necessarily AI alone, but the way it is used and misused. Of course, the future doesn’t necessarily need to look that bleak, though the chances of evolving in this direction are high.

AI develops at a faster pace than the human mind can progress. Probably, we’ll be able to benefit from cognitive boosters, developed probably with the help of AI. Every gain is expected to come with a loss. In which areas should we expect to lose something? Is it worth the price we pay for the sake of progress?

Previous Post <<||>> Next Post

🤖〽️Prompt Engineering: Copilot Unabridged (Part 3: Why AI in Society Doesn’t Matter - Or Does It?)

Prompt Engineering Series

Prompt: "write a blogpost of 600 words on why AI in society doesn't matter"

Introduction

Artificial Intelligence (AI) has been heralded as one of the most transformative technologies of our time. It powers self-driving cars, shapes personalized recommendations, and revolutionizes healthcare and finance. Yet, despite its widespread adoption, one could argue that AI doesn’t truly matter in society. Is this a bold statement, or does it hold some truth? Let’s explore why AI might not be as significant as it seems.

AI Isn’t a Replacement for Human Intelligence

One of the biggest misconceptions surrounding AI is that it will soon surpass human intelligence and take over key roles in society. While AI excels at processing vast amounts of data quickly, it lacks creativity, emotional intelligence, and critical thinking - traits fundamental to human interaction and progress. No AI system truly understands context or the complexities of human emotions, which are vital in fields such as art, philosophy, and even day-to-day social interactions. AI can only simulate intelligence, but it doesn't possess consciousness or intuition, making it inherently limited.

AI Often Reinforces Existing Systems Rather Than Disrupting Them

For all the hype around AI revolutionizing industries, its role is often overstated. AI largely improves efficiencies within existing systems rather than reshaping them entirely. Companies use AI for automation, optimization, and prediction, but the fundamental nature of work and life hasn’t changed dramatically. For example, automated customer support chatbots may streamline responses, but they don’t replace the human touch required for complex problem-solving. Similarly, AI-driven medical diagnostics assist professionals, but final treatment decisions still rely on human expertise.

AI’s Societal Impact Is Unequal

While AI promises advancement, its benefits are not distributed equally across society. Large corporations and tech giants are the primary beneficiaries, leveraging AI for profit and efficiency while small businesses struggle with adoption costs and complexity. AI-driven automation disproportionately affects workers in manufacturing, transportation, and service industries, leading to job displacement rather than widespread social improvement. If AI were truly a game-changer, wouldn’t it create universally accessible advancements rather than deepening economic disparities?

AI Cannot Solve Humanity’s Biggest Challenges Alone

Despite AI’s problem-solving capabilities, it cannot single-handedly resolve pressing societal issues such as climate change, inequality, or ethical dilemmas. While AI can help analyze climate data, it cannot make policies or enforce meaningful global change - that remains a human responsibility. AI models reflect the biases embedded in their datasets, meaning they often reinforce rather than correct systemic societal issues. As a result, AI remains a tool rather than a revolutionary force in social justice, policy-making, or global ethics.

AI's Dependency on Data Creates Limitations

AI's effectiveness is entirely dependent on data quality, quantity, and interpretation. Without accurate and unbiased data, AI models can produce flawed results, leading to misinformation, unfair decision-making, and questionable conclusions. Consider AI-powered hiring systems: if trained on biased historical hiring data, they perpetuate discrimination rather than eliminate it. Similarly, AI in law enforcement can reinforce prejudices rather than providing objective analysis. The reality is, AI is only as good as the data it consumes, limiting its ability to act as a truly transformative force.

So, Does AI Matter?

Despite these arguments, AI has undoubtedly impacted various aspects of life, bringing efficiency and convenience to modern society. However, AI alone does not define societal progress - it is merely a supporting tool in our journey toward innovation. Its importance is often exaggerated, while its shortcomings remain overlooked.

Ultimately, AI’s role in society is what we make of it. If integrated responsibly and with human oversight, it can enhance our lives rather than dictate them. But in its current state, AI is far from being an autonomous force that replaces or fundamentally changes society.

So, does AI matter? Only as much as we allow it to.

Disclaimer: The whole text was generated by Copilot at the first attempt. This is just an experiment to evaluate the feature's ability to answer standard general questions, independently of whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

20 May 2025

#️⃣Software Engineering: Mea Culpa (Part IX: A Look Inward)

Software Engineering Series

Paraphrasing a saying attributed to Lao Tzu, "a journey starts with a look inward". Of course, there are also people walking just for the sake of walking, without a purpose or afterthought, though usually when we follow a path we ask ourselves: where are we? what's the destination? how long is the road? how long does it take to reach the intermediary and final goals? what do I need for the road? what can I find on the road? what else do I need to know? who can help me? with whom or what can I travel? does the journey make sense? am I missing something? am I fit for the road and obstacles ahead? what challenges might I meet?

The traveler's metaphor can easily be applied to professional life as well, especially when one reaches a junction or a point at which a decision must be made about the road ahead. Usually, the first step is being aware of where one is, and this applies from all points of view: geographical, physical, mental, spiritual, social and even ideological or religious. Of course, not everything is equally important, and it depends on each individual's values, state of mind, situation in life, etc. The more dimensions one considers in the equations of life, the more challenging it is to find a solution, even if not an optimal one. Even if some things can be ignored, the overall texture can become quite complex, more complex than our linear way of thinking allows us to cope with!

It would be great if one could always choose the destination! Unfortunately, the destination depends on a series of variables we are constrained by: our capabilities and knowledge, our experience, the work ahead, what others want to achieve, and this can change from day to day according to the priorities set. Ideally, we should be aware of all the strings attached to us, respectively which strings are pulled and who's pulling them; however, life is usually at least one step ahead of us! The lower we find ourselves in the texture that decides upon our life, the more challenging it becomes. Ideally, we should have no attachment to things, situations, people or other material aspects. However, we aren't hermits living in a cave!

Once we set a destination, it is important to consider what one needs for the road, respectively for reaching the intermediary steps. That's probably one of the most challenging aspects of the whole experience. How can one prepare for a road not yet taken? The past is probably the only thing that can offer a sense of the future, though what if the future doesn't resemble the past in many essential points? One can always make leaps of faith, though faith alone is not enough! Blind faith without knowledge or wisdom can take one in the opposite or other wrong directions. One can but try to map the territory and establish a plan; however, the contact with reality can be harsher than one can imagine. One can but try to adapt, though one must be built for this! Some foundation must already be there!

A good recommendation for the road ahead is to expect the unexpected. Planning is good for preparing the mind, though what one finds on the way might be surprising as well as challenging to overcome. Harsh long-distance journeys are not for everybody! Starting small and training the qualities needed for such journeys might be the best advice one can get. Though, the beginning of wisdom takes more steps than this, no matter the road, no matter the experience or one’s aims!

19 May 2025

#️⃣Software Engineering: Mea Culpa (Part VIII: A Look Beyond)

Software Engineering Series

With AI on the rise, blogging and bloggers can easily become obsolete. Why bother navigating through the many blogs to get a broader perspective when the same can be obtained with AI? Just type in a prompt of the type "write a blogpost of 600 words on the importance of AI in society" and Copilot or any other similar AI agent will provide an answer that may look much better than the first draft of most bloggers out there! It doesn't matter whether the text follows a well-articulated idea, a personal perspective or something creative! One gets an acceptable answer with a minimum of effort, and that's what matters for many.

The results tend to increase in complexity the more models are assembled together, respectively the more uncontrolled the experiments become. Moreover, solutions that tend to work aren't necessarily optimal. Machines can't offer instant enlightenment or anything close to it, though they have incomparable processing power for retrieval, association, aggregation, segregation and/or iteration, which, coupled with the vast amount of data, information and knowledge, can generate anything in just a matter of seconds. Probably, the only areas in which humans can compete with machines are creativity and wisdom, though how many will be able to leverage these at scale? Probably, machines have some characteristics that can be associated with these intrinsic human traits, though more likely brute computational power will usually prevail.

At Microsoft Build, Satya Nadella mentioned that Foundry already encompasses more than 1,900 supported models. In theory, one can still evaluate and test such models adequately. What will happen when the scale increases by a few orders of magnitude? What will happen when for each person there are one or more personalized AI models? AI can help in many areas by rapidly generating and evaluating many plausible alternatives, though as soon as the models deal with some kind of processing randomization, the chances for errors increase exponentially (at least in theory).
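
To give a rough sense of what "exponentially" means here, a minimal back-of-the-envelope sketch in Python (the per-step error rate is an assumed, illustrative figure, not a measured one):

```python
# If each step of a chained process has a small, independent chance p of going wrong,
# the chance that at least one of n steps goes wrong is 1 - (1 - p)^n,
# which approaches certainty surprisingly fast as n grows.
def chance_of_at_least_one_error(p_per_step: float, steps: int) -> float:
    return 1 - (1 - p_per_step) ** steps

for steps in (1, 10, 100, 1000):
    print(steps, round(chance_of_at_least_one_error(0.01, steps), 3))
# 1 -> 0.01, 10 -> 0.096, 100 -> 0.634, 1000 -> ~1.0
```

Under these simplifying independence assumptions, even a 1% per-step error rate makes at least one failure almost certain over a thousand chained steps.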

It's enough for one or more hallucinations or other unexpected behavior to lead to more unexpected behavior. No matter how well a model was tested, as long as there's no stable, predictable mathematical model behind it, the chances for something to go wrong increase with the number of inputs, parameters, uses, or changes of context the model deals with. Unfortunately, all these aspects are seldom documented. It's not like using a formula, where you know that, given a set of inputs and operations, the result is the same. The evolving nature of such models makes them unpredictable in the long term. Therefore, there must always be a way to observe the changes occurring in models.

One of the important questions is how many errors we can afford in such models. How long does it take until errors impact each other to create effects comparable with a tornado? And what if the tornado increases in magnitude to the degree that it wrecks everything that crosses its path? What if multiple tornadoes join forces? How many tornadoes can destroy a field, a country or a continent? How many, or how big, must the tornadoes be to trigger a warning?

Science-fiction authors love to create apocalyptic scenarios, and it all happens in just a few steps, respectively chapters. In nature, it usually takes many orders of magnitude to generate unpredictable behavior. But, as nature often reveals, unpredictable behavior does happen, probably more often than we expect and wish for. The more we poke the bear, the higher the chances for something unexpected to happen! Do we really want this? What will be the price we must pay for progress?

Previous Post <<||>> Next Post

18 May 2025

#️⃣Software Engineering: Mea Culpa (Part VII: A Look Forward)

Software Engineering Series

I worked for more than 20 years in various areas related to ERP systems - Data Migrations, Business Intelligence/Analytics, Data Warehousing, Data Management, Project Management, (data) integrations, Quality Assurance, and much more, having experience with IFS IV, Oracle e-Business Suite, MS Dynamics AX 2009 and, during the past 3-7 years, also with MS Dynamics 365 Finance, SCM & HR (in that order). Much earlier, I started to work with SQL Server (2000-2019) and Oracle, and more recently with Azure Synapse and MS Fabric, writing over time more than 800 ad-hoc queries and reports for the various stakeholders, covering all the important areas, respectively many more queries for monitoring the various environments.

In the areas where I couldn’t acquire experience on the job, I tried to address this by learning in my free time. I did it because I take my profession seriously, and I want to know how (some) things work. I thus put a lot of time into trying to keep current with what’s happening in the MS Fabric world, from Power BI to KQL, Python, dataflows, SQL databases and much more. These technologies are Microsoft’s bet, though at least from the German market’s perspective, all bets are off! Probably, many companies are circumspect or need more time to react to the political and economic impulses, or probably some companies are already in bad shape.

Unfortunately, the political context has a broad impact on the economy, on what’s happening in the job market right now! However, the two aspects are not the only problem. Between candidates and jobs, the distance seems to grow, a dense wall of opinion being built, multiple layers based on presumptions filtering out voices that (still) matter! Does my experience matter or does it become obsolete like the technologies I used to work with? But I continued to learn, to keep current… Or do I need to delete everything that reminds me of the old?

To succeed or at least be hired today one must fit a pattern that frankly doesn’t make sense! Yes, soft skills are important, though not all of them are capable of compensating for the lack of technical skills! There seems to be a tendency to exaggerate some of the qualities associated with skills, or better said, to hide behind big words. Sometimes it feels like an inaccurate Shakespearean adaptation of the stage on which we are merely players.

More likely, this lack of pragmatism will lead to suboptimal constructions that tend to succumb under their own structure. All the inefficiencies need to be corrected, or somebody (or something) must be able to bear their weight. I saw this happen too often in ERP implementations! Big words don’t compensate for the lack of pragmatism, skills, knowledge, effort or management! For many organizations the answer to today’s problems is more management, which occasionally might be the right approach, though it is not a universal solution for everything that crosses our path(s).

One of society’s answers to today’s problems seems to be the refuge in AI. So, I wonder: where am I going now? Jobless, without an acceptable perspective, with AI penetrating the markets and probably making many jobs obsolete. One must adapt, but adapt to what? AI is brainless even if it can mimic intelligence! Probably, it can do more in time, to the degree that many more jobs will become obsolete (and I’m wondering what will happen to all those people).

Conversely, some trends will probably be countered by other trends, however it’s challenging to depict in clear terms a future still in the making. Society seems to be at a crossroads, one more important than mine.

Previous Post <<||>> Next Post

27 March 2025

#️⃣Software Engineering: Programming (Part XVII: More Thoughts on AI)

Software Engineering Series

I've been playing with AI-based prompting in Microsoft 365 and Edge Copilot for SQL programming tasks, and even for simple requests I got wrong or suboptimal solutions. Some of the solutions weren’t far from correct, though that was enough for them to not work at all or to give curious answers. Some solutions were even more complex than needed, which made their troubleshooting more challenging, to the degree that it was easier to rewrite the code myself. Imagine when such wrong solutions and lines of reasoning propagate uncontrolled within broader chains of reasoning!

Some of the answers we get from AI can be validated step by step, and the logic can be changed accordingly, though this provides no guarantee that the answers won't change as new data, information or knowledge is included in the models, or as the model changes, directly or indirectly. In Software Development, there’s a minimum set of tests that can and should be performed to assure that the output generated matches the expectations; however, in AI-based solutions there’s no guarantee that what worked before will continue to work.
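
As a minimal sketch of what such a safety net could look like for an AI-generated query (the table, data and expected totals below are hypothetical stand-ins, not taken from an actual prompt):

```python
import sqlite3
import unittest

# Imagine this query came back from a Copilot/ChatGPT prompt.
GENERATED_QUERY = """
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id
ORDER BY customer_id;
"""

class GeneratedQueryRegressionTest(unittest.TestCase):
    def setUp(self):
        # Small in-memory fixture with hand-checked expectations.
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript("""
            CREATE TABLE orders (customer_id INTEGER, amount REAL);
            INSERT INTO orders VALUES (1, 10.0), (1, 15.0), (2, 7.5);
        """)

    def test_totals_match_expectations(self):
        rows = self.conn.execute(GENERATED_QUERY).fetchall()
        # If the generated query is later "improved", this pins the expected behavior.
        self.assertEqual(rows, [(1, 25.0), (2, 7.5)])

    def tearDown(self):
        self.conn.close()

if __name__ == "__main__":
    unittest.main()
```

Such a test doesn't prove the generated code is correct in general, but it at least flags when a regenerated or "refined" answer silently changes the results.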

Moreover, small errors can propagate in a chain effect, creating curious wrong solutions. AI acts, and probably will continue to act, as a Pandora's box. So, how much can we rely on AI, especially when the complexity of problems and their ever-changing nature is confronted with a model highly sensitive to the initial or intermediary conditions?

Some of the answers may make sense, and probably the answers can even be better to some degree than the decisions made by experts, though how far do we want to go? Who is ready to let their own life be blindly driven by the answers provided by an AI machine just because it can handle certain facts better than us? Moreover, the human brain is wired to cope with uncertainty, morality and other important aspects that can enhance the quality of decisions, even if the decisions aren't by far perfect.

It’s important to understand the sensitivity of AI models and outputs to the initial and even intermediate conditions on which such models are based, respectively what is used in their reasoning and how slight changes can result in unexpected effects. Networks, independently of whether they are AI-based or not, lead to behavior that can be explainable to some degree as long as full transparency of the model and of the outcomes of the many iterations is provided. When AI models behave like black boxes there’s no guarantee of the outcomes, respectively no transparency on the jumps made from one state of the network to the other, and surprises can appear more often than we expect or are prepared to accept.

Some of the successes rooted in AI-based reasoning might happen just because in similar contexts people are not ready to trust their own reasoning or take a leap of faith. AI tends to replace all these aspects that are part of human psychology, logic and whatever else is part of the overall process. The eventual successes are thus not an immediate effect of AI's capabilities, but just a sign that we took a shortcut. Unfortunately, this can act like a double-edged blade.

I want to believe that AI is the solution to humanity's problems, and probably there are many areas of applicability, though letting AI control our lives and the over-dependence on AI can in the long term cause more problems than AI and our society can solve. The idea of AI acting as a Copilot that can be used to extrapolate beyond our capabilities is probably not wrong, though one should keep the risks and the various outcomes in sight!

Previous Post <<||>> Next Post

08 March 2025

#️⃣Software Engineering: Programming (Part XVI: The Software Quality Perspective and AI)

Software Engineering Series

Organizations tend to complain about poor software quality developed in-house, by consultancy companies or third parties, without doing much in this direction. Unfortunately, this fits the bigger picture reflected by the quality standards adopted by organizations - people talk and complain about them, though they aren’t that eager to include them in the various strategies, or even if they are considered, they are seldom enforced adequately!

Moreover, even if quality standards are adopted, and a lot of effort may be spent in this direction (as everybody has strong opinions and there are many exceptions), as projects progress, all the good intentions come to an end, the rules fading on the way either because they are too strict, too general, not adequately prioritized or communicated, or because there’s no time to implement (all of) them. This applies in general to programming and to the domains that revolve around data - Business Intelligence, Data Analytics or Data Science.

The volume of good quality code and deliverables is not only a reflection of an organization’s maturity in dealing with best practices but also of its maturity in handling technical debt, Project Management, software and data quality challenges. All these aspects are strongly related to each other and therefore require a systemic approach rather than focusing on the issues locally. The systemic approach allows organizations to bridge the gaps between business areas, teams, projects and any other areas of focus.

There are many questionable studies on the effect of methodologies on software quality and data issues, proclaiming that one methodology is better than the other in addressing the multifold aspects of software quality. Besides methodologies, some studies attempt to correlate quality with organizations’ size, management or programmers’ experience, the size of software, or whatever characteristic might seem to affect quality.

Bad code is written independently of a company's size, programmers' experience, management or an organization’s maturity. Bad code doesn’t necessarily happen all at once; it can depend on circumstances, on repeated team, requirement and code changes. There are decisions and actions that sooner or later can affect the overall outcome negatively.

Rewriting the code from scratch might look like an approachable measure, though it’s seldom the cost-effective solution. Allocating resources for refactoring is usually a better approach, though this tends to increase considerably the cost of projects, and organizations might be tempted to face the risks instead, whatever they might be. Independently of the approaches used, sooner or later the complexity of projects, requirements or code tends to kick back.

There are many voices arguing that AI will help in addressing the problems of software development, quality assurance and probably other areas. It’s questionable how much AI will help to address the gaps, discrepancies and other mistakes in requirements, and how it will develop quality code when it has basic "understanding" issues. Even if step by step all the current issues revolving around AI get fixed, it will take time and multiple iterations until meaningful progress is made.

At least for now, AI tools like Copilot or ChatGPT can be used for learning a programming language or framework through predefined or ad-hoc prompts. Probably, they can also be used to identify deviations from best practices or other norms in scope. This doesn’t mean that AI will replace code reviews, testing and other practices used in assuring software quality for now, but it can be used as an additional method to check for what was missed by the other methods.

AI may also have hidden gems that when discovered, polished and sized, may have a qualitative impact on software development and software. Only time will tell what’s possible and achievable.

22 February 2025

🧩IT: The Annotated Laws that Govern IT Professionals' Lives - Part I

"A bad idea executed to perfection is still a bad idea." (Norman R Augustine) [Augustine's Law]

"Bad code executed by powerful machines is still bad code." [sql-troubles]

"A great many problems do not have accurate answers, but do have approximate answers, from which sensible decisions can be made." (Berkeley's Law)

"It's easier to take/sell approximations as accurate answers than to find accurate answers. In time people will see no difference in between." [sql-troubles]

"About the time you finish doing something, you know enough to start." (James C Kinser) [Kinser's Law]

"By the time you finish something, the problem changed." [sql-troubles]

"People will more likely repeat their known mistakes than trying something new." [sql-troubles]

"The ofter a method failed, the higher the chances for it to succeed when used by somebody else." [sql-troubles]

"People tend to reuse a method that previously failed (multiple times) than try something new." [sql-troubles]

"By the time we start something, somebody else solved already the problem." [sql-troubles]

"Adding manpower to a late software project makes it later." (Fred P Brooks, "The Mythical Man-Month: Essays", 1975) [Brook's Law]

"Adding manpower seldom solves a problem that requires intelligent effort." [sql-troubles]

"The easiest way to make a project on time is to to move the deadline as suited." [sql-troubles]

"An object will fall so as to do the most damage." [Law of selective gravity]

"A bug will appear to do the most damage." [sql-troubles]

"Anything can be made to work if you fiddle with it long enough." (Wyszkowski's second law)
"Some problems do require infinite time." [sql-troubles]

"Build a system that even a fool can use, and only a fool will want to use it." [Shaw's principle]

"Doing it the hard way is always easier." (Murphy's paradox)

"Doing it the easy way is always harder." [sql-troubles]

"Don't force it - get a bigger hammer." [Anthony's law of force]

"Don't optimize it, get a more powerful machine." [sql-troubles]

"Every solution breeds new problems." [Murphy's laws]

"Every new problem multiplies the possible solutions." [sql-troubles]

"It's easier to change the problem to fit the solution." [sql-troubles]

"Everyone has a scheme that will not work." [Howe's law]

"Any scheme can work by accident." [sql-troubles]

"It takes more than an accident for a scheme to work." [sql-troubles]

"Everything goes wrong all at once." (Quantized revision of Murphy's law)

"Small events converge toward bigger events." [sql-troubles]

"Things already went wrong before we observe them as such." [sql-troubles]

"If a problem causes many meetings, the meetings eventually become more important than the problem." (Arthur Bloch, "Murphy's Law (Price/Stern/Sloan", 1977) (Hendrickson’s Law)

"More meetings tend to create more problems." [sql-troubles]

 "Fewer meetings tend to create more problems." [sql-troubles]

"If a project is not worth doing at all, it's not worth doing well." (Gordon's first law)

"The more a project is not worth doing, the more attention will attract."  [sql-troubles]

"If an experiment works, something has gone wrong." [Finagle's first law]

"If anything can go wrong, it will." [Murphy's laws]

"Things go wrong at a faster pace than one can find solutions." [sql-troubles]

"If there are two or more ways to do something, and one of those ways can result in a catastrophe, then someone will do it." [Murphy's Laws]

"It's enough one way, for things to result in catastrophes." [sql-troubles]

"Sometimes it's better to do nothing than make things worse." [sql-troubles]

"Once all the known wrong solutions were exhausted, one discovers a new wrong solution." [sql-troubles]

"If they know nothing of what you are doing, they suspect you are doing nothing." (Robert J Graham et al, "The Complete Idiot's Guide to Project Management", 2007)  [Graham's Law]

"People are good at ignoring the obvious." [sql-troubles]

"The more one explains, the more one is misunderstood." [sql-troubles] 

"If you mess with a thing long enough, it'll break." [Schmidt's law]

"Things break by design." [sql-troubles]

"One can learn to break things, by simply playing with them." [sql-troubles] 

"It's easier to break than design things. One can find thousands ways on how to break the same thing." [sql-troubles] 

"In any collection of data, the figure most obviously correct, beyond all need of checking, is the mistake." (Finagle's third law)

"In any collection of data there's at least a mistake." [sql-troubles]

"In any given set of circumstances, the proper course of action is determined by subsequent events." [McDonald's corollary to Murphy's laws]

"In crises that force people to choose among alternative courses of action, most people will choose the worst one possible." (Rudin's law)

"People go wrong with confidence." [sql-troubles]

"The more alternatives, the higher the chances to go wrong." [sql-troubles] 

"Information necessitating a change of design will be conveyed to the designer after - and only after - the plans are complete." [First law of revision:]

"In simple cases, presenting one obvious right way versus one obvious wrong way, it is often wiser to choose the wrong way so as to expedite subsequent revision." (First corollary

"The designer will get ahead of the design." [sql-troubles] 

"It is impossible to make anything foolproof because fools are so ingenious." (Murphy's second corollary)

"It works better if you plug it in." (Sattinger's law)

"It works longer if you don't plug it in." [sql-troubles]

"It's not a question of IF the car will break down, but WHEN it will break down." (Murphy's theory of automobiles)

"It's not a question of IF a program will break down, but when the code will break down." [sql-troubles]

"The longer a program runs smoothly, the higher the chances that will break down soon." [sql-troubles]

"Left to themselves, things tend to go from bad to worse." (Murphy's first corollary)

"The more on tries to fix things, the faster everything goes worse." [sql-troubles]

"Logic is a systematic method of coming to the wrong conclusion with confidence." (Manly's maxim)

 "One doesn't need logic to arrive at the right conclusion." [sql-troubles]

"Matter will be damaged in direct proportion to its value." (Murphy's constant)

"Most problems have either many answers or no answer. Only a few problems have a single answer." [Berkeley's Law]

"It's better to have a multitude of approximate solutions than one correct solution." [sql-troubles]

"Negative expectations yield negative results. Positive expectations yield negative results." (Non-reciprocal law of expectations)

"Negative results yield when there are no expectations." [sql-troubles]

"No matter how many things have gone wrong, there remains at least one more thing that will go wrong." (Murphy's law of the infinite)

"Things can go wrong in a multitude of ways." [sql-troubles]

"No matter how minor the job is, it's still over $50." (Murphy's rule of auto repair)

"No matter what the experiment's result, there will always be someone eager to: (i) misinterpret it, (ii) fake it, or (c) believe it supports his own pet theory." (Finagle's second law)
"It's easier to fake the experiment to get the right results." [sql-troubles]
"Nothing ever goes away." (Commoner's second law of ecology)
"Things do go away, but tend to come back." [sql-troubles]

"Nothing is as easy as it looks." (Murphy's first corollary)

"All things look simple until one dives deeper." [sql-troubles]

"Nothing is ever so bad that it can't get worse." (Gattuso's extension of Murphy's Law)

"Once a job is fouled up, anything done to improve it only makes it worse." (Finagle's fourth law)

"Once a mistake is corrected, a second mistake will become apparent." (Murphy's law of revision)

"Correcting mistakes introduces other mistakes." [sql-troubles]

"The chief cause of problems is solutions." [Sevareid's Law]

"The more time you spend in reporting on what you are doing, the less time you have to do anything. Stability is achieved when you spend all your time doing nothing but reporting on the nothing you are doing." [Cohn's Law]

"Reporting increases the needs for more information. The less one reports, the lower the need for further information." [sql-troubles]

"The more innocuous the modification appears to be, the further its influence will extend and the more plans will have to be redrawn." [H B Fyfe's second law of revision]

"The only thing more costly than stretching the schedule of an established development program is accelerating it, which is itself the most costly action known to man." (Norman R Augustine, "Augustine's Laws", 1983) [Law of economic unipolarity]

"The other line moves faster." (Etorre's observation)

"The other team moves faster." [sql-troubles]

"If you change lines, the one you just left will start to move faster than the one you are now in." (O'Brien's variation

"If you change a line, the whole codes breaks." [sql-troubles]

"The longer you wait in line, the greater the likelihood that you are in the wrong line." (The Queue Principal)

"The longer you wait for a deliverable, the greater the likelihood that it contains bugs." [sql-troubles]

"The perceived usefulness of an article is inversely proportional to its actual usefulness once bought and paid for." (Glatum's law of materialistic acquisitiveness)

"The probability of anything happening is in inverse ratio to its desirability." (Gumperson's law)

"The solution to a problem changes the problem." [Peers's Law]

"A problem to a solution changes thr solution." [sql-troubles]

"The tasks to do immediately are the minor ones; otherwise, you’ll forget them. The major ones are often better to defer. They usually need more time for reflection. Besides, if you forget them, they’ll remind you." [Wolf ’s Law of Management]

"There are two states to any large project: Too early to tell and too late to stop." (Ernest Fitzgerald) [Fitzgerald's Law]

"There is a solution to every problem; the only difficulty is finding it." [Evvie Nef's Law]

"There is a solution to every problem we are not trying to solve." [sql-troubles]

"Finding problems is easier than finding solutions." [sql-troubles]

"One stumbles upon the same problen twice." [sql-troubles]

"There is no mechanical problem so difficult that it cannot be solved by brute strength and ignorance. [William's Law]

"There's no software problem so difficult that can't be solved by brute force and ignorance." [sql-troubles]

"There's always one more bug." (Lubarsky's law of cybernetic entomology)

"Software solutions diverge to a set of bugs." [sql-troubles

"Things get worse under pressure." [Murphy's law of thermodynamics]

"Things get worse also without pressure." [sql-troubles]

"Things go right gradually, but things go wrong all at once." (Murphy's asymmetry principle)

"Tolerances will accumulate unidirectionally toward maximum difficulty of assembly. (Klipstein's law)

"Two wrongs are only the beginning." (Kohn's corollary to Murphy's law)

"One wrong can be the beginning of another." [sql-troubles]

"When all else fails, read the instructions." [Cahn's axiom]

"Even if you read the instructions, things fall." [sql-troubles]

"When an error has been detected and corrected, it will be found to have been correct in the first place." [Scott's second law]

"Any two related problems may look the same when regarded from same perspective." [sql-troubles]

"When in doubt, use a bigger hammer." [Dobbins’ Law]

"When taking something apart to fix a minor malfunction, you will cause a major malfunction." (Murphy's second law of construction)

"Whenever you set out to do something, something else must be done first." (Murphy's sixth corollary)

"While the difficulties and dangers of problems tend to increase at a geometric rate, the knowledge and manpower qualified to deal with these problems tend to increase linearly." [Dror's First Law]

"Beyond a point, the problems are so complex that people can't differentiate between geometric and linear rates." [sql-troubles]

 Previous Post <<||>> Next Post

22 December 2024

#️⃣Software Engineering: Mea Culpa (Part VI: A Look Back)

Software Engineering Series

Looking back at my university years, I'd say that there were three teachers, respectively courses, that made a considerable impact on students' lives. In the second year I learned Category Algebra, which, despite the fact that it reflected past knowledge and the topics were too complex for most of us, provided an unprecedented layer of abstraction that showed us that Mathematics is not what we thought it to be!

The second course was related to Complex plane theory, a course in which the dean of the university at the time challenged our way of thinking about relatively basic concepts. There was a big gap between what we thought about Mathematics and what the subject proved to be. The course was taught in a post-university year together with a course on Relativity Theory, in which, even if we didn't understand much of the concepts and theories, it was the first time (except for Graph theory) that we saw Mathematics applied to a broader context. Please don't misunderstand me! There were many other valuable teachers and courses, though these were the three courses that made the most important impact on me!

During those times, we also attended courses on Fortran, Pascal, C++, HTML and even dBase, and, even if each programming language brought something new to the landscape, I can't say they changed how we thought about the world (some of us had similar courses during the lyceum years) and about problem solving. That's what, for example, SQL or, more generally, a database-related course brought, even if I had to wait for the first MOOC courses to appear. Equally important was Scott E Page's course on Model Thinking, which introduced the model-thinking approach, a structured way of thinking about models, with applicability to the theoretical and practical aspects of life.

These are the courses that anybody interested in programming and/or IT should attend! Of course, there are also courses on algorithms, optimization, linear and non-linear programming, and they bring an arsenal of concepts and techniques to think about, though, even if they might have a wide impact, I can't compare them with the courses mentioned above. A course should (ideally) change the way we think about the world to make a sensible difference! The same goes for programming and theoretical concepts too!...

Long after I graduated, I found many books and authors that I wished I had met earlier! Quotable Math reflects some of the writings I found useful, though now it seems already too late for those books to make a considerable impact! Conversely, it's never too late to find new ways to look at life, and this is what some books achieve! This is also a way of evaluating critically what we want to read or what is worth reading!

Of course, there are many courses, books or ideas out there, though if they haven't changed the way you think about life, directly or indirectly, are they worth attending, respectively reading? Conversely, if one hasn't found a new perspective brought by a topic, probably one has barely scratched the surface of the subject, independently of whether we talk here about students or teachers. For some topics, it's probably too much to ask, though pragmatically speaking, that's the intrinsic value of what we learn!

That's a way to think about life and select the books worth reading! I know, many love reading for the sake of reading, though the value of a book, theory, story or other similar artifact should be judged especially by the impact it has on our way of thinking, respectively on our lives. Just a few ideas that are maybe worth reflecting upon...

Previous Post <<||>> Next Post

17 September 2024

#️⃣Software Engineering: Mea Culpa (Part V: All-Knowing Developers are Back in Demand?)

Software Engineering Series

I’ve been reading many job descriptions lately related to my experience and, curiously or not, I observed that many organizations look for developers with Microsoft Dynamics experience in the CRM, respectively Finance and Operations (F&O) and Business Central (BC) areas. It’s a good sign that the adoption of Microsoft solutions for CRM and ERP increases, especially when one considers the progress made in the BI and AI areas with the introduction of Microsoft Fabric, which gives Microsoft a considerable boost. Conversely, it seems that the "developers are good for everything" syntagma is back, at least from what one reads in job descriptions.

Of course, it’s useful to have an in-house developer who can address all the aspects of an implementation, though that’s a lot to ask considering the different non-programming areas that need to be addressed. It’s true that a developer with experience can handle Requirements, Data and Process Management, respectively Data Migrations and Business Intelligence topics, though each of these topics can easily become a full-time job before, during and after project implementations. I’ve been there and I (hopefully) know what the jobs imply. Even if an experienced programmer can easily handle the different aspects, there will also be times when all the topics combined will be too much for one person!

It's not a novelty that job descriptions are treated like Christmas lists, but it’s difficult to differentiate between the essential and nonessential skillset. I read many job descriptions lately in which, among a huge list of demands, one of the requirements is to program in the F&O framework, a sign that D365 programmers are in high demand. I worked for many years as a programmer and Software Engineer, respectively in the BI area, where SQL and non-SQL code is needed. Even if I can understand the code in F&O, does it make sense to learn now to program in X++ and the whole framework?

It's never too late to learn new tricks, respectively another programming language and/or framework. It even helps to provide better solutions in the usual areas, though frankly I would invest my time in other areas, and AI-related topics like AI prompting or Data Science seem to be more interesting in the long run, especially when they are already in demand!

There seems to be a tendency for Data Science professionals to do everything, building their own solutions, ignoring the experience accumulated, respectively the data models built, in the BI and Data Analytics areas, as if the topics and data models were unrelated! It’s also true that AI modeling comes with its own requirements in what concerns data modeling (e.g. translating non-numeric to numeric values), though I believe that common ground can be found!

Similarly, notebook-based programming seems to replicate logic in each solution, which occasionally makes sense, though personally I wouldn’t recommend it as a practice! The other day, I was looking at code developed in Python to mimic the joining of tables, when a view with the same logic could be more easily (re)used, maintained and read, and would probably be more efficient, even if different engines are used. It will be interesting to see how the mix of spaghetti solutions will evolve over time. There are already developers complaining about the number of objects created by building logic for each layer of the medallion architecture! Even if it makes sense from architectural considerations, it will become a nightmare in time.
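
To illustrate the point, a minimal sketch (the table and column names are made up for the example): the same join once re-implemented in notebook code and once defined as a reusable view.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, name TEXT);
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Alpha'), (2, 'Beta');
    INSERT INTO orders VALUES (10, 1, 99.0), (11, 2, 45.5);
""")

# Option A: each notebook pulls the raw tables and re-implements the join in code,
# so the same logic ends up copied (and drifting) across solutions.
customers = pd.read_sql("SELECT * FROM customers", conn)
orders = pd.read_sql("SELECT * FROM orders", conn)
joined_in_code = customers.merge(orders, on="customer_id")[["name", "amount"]]

# Option B: define the join once as a view; any consumer (or engine) reuses it.
conn.executescript("""
    CREATE VIEW customer_orders AS
    SELECT c.name, o.amount
    FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id;
""")
joined_in_view = pd.read_sql("SELECT * FROM customer_orders", conn)

# Both paths return the same data; only one of them keeps the logic in a single place.
assert joined_in_code.sort_values("name").reset_index(drop=True).equals(
    joined_in_view.sort_values("name").reset_index(drop=True))
```

The point isn't that pandas is wrong, but that the view keeps the join logic in one place, where it can be maintained, documented and reused by any consumer.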

One can also wonder about the nomenclature used - Data Engineer or Prompt Engineer for the simple manipulation of data between structures in data transformations, respectively for structuring the prompts for AI. I believe that engineering involves more than this, no matter the context!

Previous Post <<||>> Next Post

13 June 2024

🧭🏭Business Intelligence: Microsoft Fabric (Part V: One Person Can’t Learn or Do Everything)

Business Intelligence Series

Today’s Explicit Measures webcast [1] considered an article written by Kurt Buhler (The Data Goblins): [Microsoft] "Fabric is a Team Sport: One Person Can’t Learn or Do Everything" [2]. It’s a well-written article that deserves some thought as there are several important points made. I can’t say I agree with the full extent of some statements, even if some disagreements are probably just a matter of semantics.

My main disagreement starts with the title “One Person Can’t Learn or Do Everything”. As clarified in the webcast's chat, the author defines “everything” as an umbrella for “all the capabilities and experiences that comprise Fabric including both technical (like Power BI) or non-technical (like adoption, data literacy) and everything in between” [1].

For me “everything” is relative and considers a domain's core set of knowledge, while "expertise" (≠ "mastery") refers to the degree to which a person can use the respective knowledge to build back-to-back solutions for a given area. I’d say that it becomes more and more challenging for beginners or average data professionals to cover the core features. Moreover, I’d separate the non-technical skills because then one will also need to consider topics like Data, Project, Information or Knowledge Management.

There are different levels of expertise, and they can vary in depth (specialization) or breadth (covering multiple areas), respectively depend on previous experience (whether one worked with similar technologies). Usually, there’s a minimum of requirements that need to be covered for being considered as expert (e.g. certification, building a solution from beginning to the end, troubleshooting, performance optimization, etc.). It’s also challenging to roughly define when one’s expertise starts (or ends), as there are different perspectives on the topics. 

Conversely, the term "expert" is in general misused extensively, sometimes even with mischievous intent. An "expert" is usually considered to be an external consultant or a person who got certified in an area, even if the person may not be able to build solutions that address a customer’s needs.

Even data professionals with many years of experience can be overwhelmed by the volume of knowledge, especially when one considers the different experiences available in MF, respectively the volume of new features released monthly. Conversely, expertise can be considered with respect to only one or more MF experiences, or to one area within a certain layer. A lot of the knowledge can be transferred from other areas - writing SQL and complex database objects, modelling (enterprise) semantic layers, programming in Python, R or Power Query, building data pipelines, managing SQL databases, etc.

Besides the standard documentation, training sessions, and some reference architectures, Microsoft has also made available some labs and other material, which help in discovering the features available, though they don’t teach people how to build complete solutions. I find the creation of learning paths for the various roles more important than explicitly declaring the role-based audience.

During the past 6-7 months I've spent on average 2 days per week learning MF topics. My problem is not the documentation but the lack of maturity of some features, the gaps in functionality, identifying the respective gaps, and knowing what new features will be made available and when. The fact that features are made available or changed while learning makes the process more challenging.

My goal is to be able to provide back-to-back solutions and I believe that’s possible, even if I might not consider all the experiences available. During the past 22 years, at least until MF, I could build complete BI solutions starting from requirements elicitation, data extraction, modeling and processing for data consumption, respectively data consumption for the various purposes. At least this was the journey of a Software Engineer into the world of data. 

References:
[1] Explicit Measures (2024) Power BI tips Ep.328: Microsoft Fabric is a Team Sport (link)
[2] Data Goblins (2024) Fabric is a Team Sport: One Person Can’t Learn or Do Everything (link)

16 March 2024

🧭Business Intelligence: A Software Engineer's Perspective (Part VII: Think for Yourself!)

Business Intelligence Series

After almost a quarter-century of professional experience, the best advice I could give to younger professionals is to "gather information and think for themselves", and with this the reader can close the page and move forward! Anyway, everybody seems to be looking for sudden enlightenment with minimal effort, as if the effort had no meaning in the process!

In whatever endeavor you are caught, it makes sense to do a bit of thinking for yourself upfront - what's the task, or more generally the problem, which are the main aspects and interpretations, which are the goals, respectively the objectives, what a solution might look like, respectively how it can be solved, how long it could take, etc. This exercise is important for familiarizing yourself with the problem and creating a skeleton on which you can build further. It can be just vague ideas or something more complex, though no matter the overall depth, it's important to do some thinking for yourself!

Then, you should do some research to identify how others approached and maybe solved the problem, what the justifications, assumptions, heuristics, strategies, and other tools used in sense-making and problem solving were. When doing research, one should not stop with the first answer and go with it. It makes sense to allocate a fair amount of time for information gathering, structuring the findings in a reusable way (e.g. tables, mind maps or other tools used for knowledge mapping), and looking at the problem from the multiple perspectives derived from them. It's important to gather several perspectives, otherwise the decisions have a high chance of being biased. Just because others preferred a certain approach, it doesn't mean one should follow it, at least not blindly!

The purpose of research is multifold. First, one should try not to reinvent the wheel. I know, it can be fun, and a lot can be learned in the process, though when time is an important commodity, it's important to be pragmatic! Secondly, new information can provide new perspectives - one can learn a lot from other people’s thinking. The pragmatism of problem solvers should be combined, when possible, with the idealism of theories. Thus, one can make connections between ideas that aren't connected at first sight.

Once a good share of facts has been gathered, you can review the new information with respect to the previous one and devise from there several approaches worth pursuing. Once the facts are reviewed, there are probably strong arguments made by others to follow one approach over the others. However, one shows that one has reached maturity when one is able to evaluate the information and make a decision based on it, even if the decision is not by far perfect.

One should try to develop a feeling for decision making, even if this seems to be more of a gut feeling and stressful at times. When possible, one should attempt to collect and/or use data, though collecting data is often a luxury that tends to postpone the decision making, respectively be misused by people just to confirm their biases. Conversely, if there's any important benefit associated with it, one can collect data to validate one's decision in time, though that's more of a scientist’s approach.

I know it's easier to go with the general opinion and do what others advise, especially when some ideas are popular and/or come from experts, though that would also mean following others' mistakes and biases. Occasionally, that can be acceptable, especially when the impact is negligible; however, each decision we are confronted with is an opportunity to learn something, to make a difference!

Previous Post <<||>> Next Post

04 March 2024

🧭Business Intelligence: A Software Engineer's Perspective (Part VI: The Data Citizen)

Business Intelligence Series

More than a century ago, Herbert G Wells wrote on mathematical literacy: "[...] the time may not be very remote when it will be understood that for complete initiation as an efficient citizen of one of the new great complex world-wide States that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and write" [1]. The quote is occasionally misquoted as referring to Statistics, though frankly the boundaries of mathematical, statistical, numerical and data literacy tend to melt into each other, with multiple dependencies existing between them.

In the age of big data, data citizens (business people able to use data, data processing and visualization tools for building solutions that enable their job) steadily become a necessity for businesses in their quest of making data-driven decisions, gaining insight and whatever other valuable use data might have for organizations. The need is not new: Microsoft Access and Excel were already used for similar purposes in the 90s, becoming a maintenance nightmare for IT, with data islands without proper backup or documentation existing throughout organizations, and diverse numbers being reported that contradicted each other.

Then IT took over, trying to find alternatives for the data islands, implementing concepts like single source(s) of truth, quality gates and supporting processes, designing data models and infrastructures for self-service, allowing users to tap into the data for exploration, discovery, reporting, etc. Getting all this right required redesigning existing infrastructures, taking one step forward and a few steps back; in the end, everything is a learning process. Such an effort can easily consume an organization's resources.

Microsoft and other vendors of data-driven solutions keep insisting on how much potential exists in their tools for the data citizen, and how data citizens can bring competitive advantage to organizations by automating business and supporting processes. The potential is not to be neglected, though it requires a considerable investment from organizations in training and mentoring data citizens, and in building data warehouses or data meshes that focus on end-user self-service needs. The data citizen needs time to learn, to play with the data, build solutions, test their usefulness in daily tasks, and then incorporate and disseminate the knowledge gained within the organization.

There are many scenarios in which results can be obtained with a minimum of effort; however, there are also hard limits. Besides the learning effort and the time available, there are cognitive, knowledge and ability limits that vary from person to person. Understanding what good architecture, design and techniques mean is unfortunately not for everybody, and here's where the concept of citizen data analyst or citizen scientist breaks down, independently of the tools used. There are also IT people who face similar challenges.

It must also be recognized that the solutions built by data citizens in the early stages are primarily personal solutions that need to be reviewed and brought to the standards adopted by the organization. In time, such effort is expected to decrease considerably as data citizens' knowledge and skillsets evolve. Without this further work, the solutions built will tend to display some of the shortcomings of the solutions built on MS Access or Excel.

The concept of the data citizen can work as long as the various assumptions and needs are adequately addressed; however, progress will not happen overnight. The effort needs to become part of the organization's long-term strategy, and it can be considerable for many organizations. Mentorship in terms of technical and non-technical support is needed. It's advisable to proceed in small iterative steps and to gradually integrate the lessons learned.


Resources:

[1] "Mankind in the Making", by Herbert G Wells, 1903 [Source]

28 February 2024

🧭Business Intelligence: A Software Engineer's Perspective (Part V: From Process Management to Mental Models in Knowledge Gaps)

Business Intelligence Series

An organization's business processes are probably among its most important assets because they reflect the business model, philosophy and culture, and link the material, financial, decisional, informational and communicational flows across the whole organization, with implications for efficiency, productivity, consistency, quality, adaptability, agility, control and governance. A common practice in organizations is to document the business-critical processes and manage them accordingly over their lifetime, making sure that employees understand and respect them, and improve them continuously.

When it comes to creating data artifacts, data without the processual context are often meaningless, no matter how much a data professional knows about data structures/models. Processes allow delimiting the flow and boundaries of data, and separating the essential from the non-essential. Moreover, it's the knowledge of processes that allows reengineering the logic behind systems, especially when no proper documentation about that logic is available.

Therefore, the existence of documented processes allows bridging the knowledge gaps existing on the factual side, and occasionally also on the technical side. In theory, the processes should provide a complete overview of the procedures, rules, policies and responsibilities existing in the organization, and thus of how the business operates. However, even if people tend to understand how the world works locally, when it is broken down into parts, their understanding is systemically flawed, missing the implications of causal relationships that span time with delays, feedback, variable confusion, chaotic behavior, and/or other characteristics borrowed from the vocabulary of complex systems.

Jay W Forrester [3], Peter M Senge [1], John D Sterman [2] and several other systems-thinking theoreticians stressed the importance of mental models in making sense of the world, especially in setups that reflect the characteristics of complex systems. Mental models frame our experience of the world in congruent mental constructs that are further used to think, understand and navigate the world. They are, however, tacit, fuzzy, incomplete, imprecisely stated, inaccurate and evolving simplifications with a dual character: on one side they enable, while on the other they impede cognitive processes like sense-making, learning, thinking or decision-making, limiting the range of action to what is familiar and comfortable.

On one side, one of the primary goals of Data Analytics is to provide new insights; on the other side, the new insights often fail to be recognized and put into practice because they conflict with existing mental models, limiting employees to familiar ways of thinking and acting.

Externalizing and sharing mental models allows, besides making assumptions explicit and creating a shared world view, also strategizing, running tests and simulations, and making sure that barriers and further constraints don't impact the decisional process. Senge goes further and argues that mental models, especially at the management level, offer a competitive advantage, allowing organizations to maintain coherence and direction, with people becoming more perceptive of and responsive to environmental or circumstantial changes.

The whole process isn't about creating a unique congruent mental model, even if several mental models may converge toward one or more holistic models, but about providing diverse perspectives and enabling people to make leaps in abstraction (by moving from direct observations to generalizations) while blending advocacy and inquiry to promote collaborative learning. Gradually, people and organizations should recognize a shift from mental models dominated by events to mental models that recognize longer-term patterns of change and the underlying structures producing those patterns [1].

Probably, for many the concept of mental models still seems too abstract, and the effort associated with it unnecessary, or at least questionable as to whether it can make a difference. Conversely, being aware of the positive and negative implications mental models hold can make us explore, even if only ad hoc, the roads they open.


Resources:
[1] Peter M Senge (1990) "The Fifth Discipline: The Art & Practice of The Learning Organization"
[2] John D Sterman (2000) "Business Dynamics: Systems thinking and modeling for a complex world"
[3] Jay W Forrester (1971) "Counterintuitive Behaviour of Social Systems", Technology Review

21 February 2024

🧭Business Intelligence: A Software Engineer's Perspective (Part IV: The Loom of Interactions)

Business Intelligence Series

The process of developing or creating a report is quite simple: there's a demand for data, usually a business problem; the user (aka requestor) defines a set of requirements; the data professional writes one or more queries to address the requirements, which are then used to build one or more reports. The report(s) is/are reviewed by the requestor, and with this the process should be over in most cases. However, this is rather the exception - a long series of changes over multiple iterations is usually necessary, the queries and the reports get modified and even rewritten until they reach their final form, and a lot of effort is wasted in the process on both sides.

Common practices for improving the underlying process amount to ensuring that the requirements are complete and understood upfront, that best practices are followed, that the user gets an early review of the work, that there's continuous communication, that the process's performance is monitored, that controls are in place, etc. Standardizing the process helps to reduce the number of iterations, but only by a factor. Unfortunately, the bigger issue - the knowledge gap - is often ignored.

There's a lot of literature on problem solving - what steps to follow, how to define the problem, what aspects should be considered, etc. Recipes are good when one knows how to follow them, that is, how to cook, and that can be a tedious process. It is said that framing the right problem is halfway to solving it, and that's so true. Part of the bigger issue is that users need data to better understand the problem, though the drivers can differ - sometimes it's the problem's complexity, while other times the need only becomes apparent once users see the first set of data and start thinking seriously about the problem.

So, the first major gap is between the problem and the user's knowledge about the problem. Experience and theory can help reduce the gap; however, the most important progress comes when the user understands the data behind the various processes that overlap with the problem. Sometimes it's enough to explore the data visually, while other times deeper explorations are needed. Data literacy is important, though more important are the exposure to data and to problems of varying variety and complexity, and having the time for this.
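To make the visual-exploration step a bit more tangible, below is a minimal sketch in Python using pandas and matplotlib; the file name orders.csv and the columns order_date, region and quantity are purely hypothetical placeholders, not taken from any concrete scenario discussed here.

```python
# Minimal sketch of a first look at a hypothetical extract (orders.csv)
import pandas as pd
import matplotlib.pyplot as plt

# Load the extract provided by the data professional
df = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Quick structural overview: columns, types, missing values, basic statistics
df.info()
print(df.describe(include="all"))

# Aggregate monthly quantities per region to spot trends and outliers
monthly = (df.groupby([df["order_date"].dt.to_period("M"), "region"])["quantity"]
             .sum()
             .unstack("region"))

monthly.plot(title="Monthly quantity by region (hypothetical data)")
plt.tight_layout()
plt.show()
```

Even such a rough first look - distributions, missing values, a trend per region - is often enough for the user to reformulate the initial question more precisely before the next iteration.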

The second gap concerns the data professional: building the data model and the logic for the report requires domain knowledge. The level of knowledge needed varies from case to case, and typically what one doesn't know has the biggest impact. A data professional can help only to the degree of the information and knowledge they have about the business. The expectation to provide a report based on a set of fields might be valid for simple requirements, though the more complex a problem, the more domain knowledge is needed. Moreover, the data professional might need to reengineer the logic from the source system, which can prove challenging when relying only on the data.

Ideally, the two parties should work together starting with the problem's framing and build common ground while covering the knowledge gaps on both sides. Of course, the user doesn't need to dive into the technical details unless the organization leverages this interaction further by adopting the data citizen mindset. Such interactions can help to build trust and a basis for further collaboration. Conversely, the more isolated the two parties, the higher the chances that more iterations will be needed.

Covering the knowledge gaps might look like a redistribution of the effort, though by keeping the status quo there is little chance for growth!


About Me

Koeln, NRW, Germany
IT Professional with more than 25 years of experience in IT across the full life-cycle of Web/Desktop/Database Applications Development, Software Engineering, Consultancy, Data Management, Data Quality, Data Migrations, Reporting, ERP implementations & support, Team/Project/IT Management, etc.