Showing posts with label failure.

15 October 2024

🗄️Data Management: Data Governance (Part III: Taming the Complexity)

Data Management Series

The Chief Data Officer (CDO) or the “Head of the Data Team” is one of the most challenging jobs because it is more of a "political" than a technical role. It requires the ideal candidate to be able to throw and catch curveballs almost all the time, and to play ball with all the parties having an interest in data (aka the stakeholders). It’s a full-time job that requires a combination of managerial and technical skillsets, and both are important! The focus shifts occasionally more in one direction than in the other, with important fluctuations. 

Moreover, even if one masters the technical and managerial aspects, the combination of the two gives birth to situations that require further expertise – applied systems thinking being probably the most important – not least because there are so many points of failure that it's challenging to address all the important causes. Therefore, it’s critical to be a systems thinker, to have an experienced team, and to make adequate use of its experience! 

We live in a complex world, in which even the smallest constraint or opportunity can have an important impact, especially when it appears in the early stages of the processes taking place in organizations. Success relies on the manager’s and team’s skillset, their inspiration, the way the business reacts to the tasks involved, and probably many other aspects that make things work. It takes considerable effort until the whole mechanism works, and even more time to make it work efficiently. The best metaphor is probably that of a small combat team in which everybody has their place and skillset in the mechanism, whether one talks about strategy, tactics, or operations. 

Unfortunately, building such teams takes time, and the more people are involved, the more complex the endeavor becomes. The manager and the team must meet somewhere in the middle on philosophy, on the execution of the various endeavors, and on the way of working together toward the same goals. There are multiple forces pulling in all directions, and it takes time until one can align the goals and, with them, the effort. 

The most challenging forces are the ones between the business and the data team, respectively between the business and the data requirements – forces that don’t necessarily converge. In small organizations the two parties have in theory more chances to overcome the challenges, and a team’s experience can weigh a lot in the process; as soon as the scale changes, though, the number of challenges to be overcome grows exponentially (with base and exponent differing from case to case, which makes the growth more or less rapid). 

In big organizations, other parties can appear that pull with the same force in one direction or another. Thus, the political aspects become more complex, to the degree that the technologies must follow the political decisions, with all the positive and negative implications deriving from this. As a comparison, think about what changes when moving from two to three or more bodies orbiting each other – the result is a chaotic dynamical system for most initial conditions. 

Of course, a business’ context doesn’t have to create such complexity, though when things go unchecked, when delays in decision-making and other typical events occur, when there’s no structure, strategy, coordinated effort, or other important components, the chances of chaotic behavior grow quite high with the passing of time. This is just a model to explain real-life situations that seem similar on the surface but prove to be quite complex when diving deeper. That’s probably why a CDO’s role as tamer of complexity is important and challenging!

Previous Post <<||>> Next Post

14 September 2024

🗄️Data Management: Data Culture (Part V: Quid nunc? [What now?])

Data Management Series

Despite the detailed planning and the concentrated, well-directed effort with which the various aspects of data culture are addressed, things don't necessarily turn out the way we want them to. There's seldom only one cause but a mix of factors that create a network of cause-and-effect relationships which tend to diminish or amplify the effect of certain events or decisions, and it can be just a butterfly's flutter that stirs a chain of reactions. The butterfly effect is usually an exaggeration, though, until the proper conditions for chaotic behavior appear!

The butterfly effect is made possible by the exponential divergence of two paths. Conversely, success probably needs multiple trajectories to converge toward a final point, or toward intermediary points or areas from which things move on the "right" path. Success doesn't necessarily mean reaching a point but reaching a favorable zone from which future behavior can follow a positive trend. For example, a sink or a cone-like structure allows water to accumulate and flow toward an area. A similar structure is needed for success to converge, and the structure results from what is built in the process. 
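
To make the divergence half of the metaphor concrete, here is a minimal sketch (illustrative only, not part of the original argument) of how two trajectories of a chaotic system that start almost identically drift apart exponentially; the logistic map in its chaotic regime is the classic toy example:

```python
# Illustrative only: two trajectories of the logistic map x -> r*x*(1-x)
# in its chaotic regime (r = 4), started 1e-10 apart. The gap grows
# roughly exponentially until it saturates at order 1 - the "exponential
# divergence of two paths" mentioned above.
r = 4.0
x, y = 0.2, 0.2 + 1e-10

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
# By around step 40 the difference is of order 1: the paths have fully
# diverged, even though the starting points were practically identical.
```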

Data culture needs a similar structure for the various points of interest to converge. Things don't happen by themselves unless the force of the overall structure is strong enough to move things toward the intended path(s). Even then the paths can be far from optimal, but they can be favorable. Probably that's what the general effort must do – bring the various aspects into the zone that allows things to unfold. It might still be a long road, though the basis is there!

A consequence of this metaphor is that one must identify the important aspects and factors that influence an organization's culture and drive them in the right direction(s) – the paths that converge toward the defined goal(s). (Depending on the area of focus, one can consider that there are successions of ever more refined goals.)

The structure that allows things to converge is based on the alignment of the various paths and, implicitly, forces. Misalignment can make a force pull in another direction, with all the consequences deriving from this behavior. If that force is weak, it will probably have no impact on the overall structure, though that's relative and can change in time. 

One may ask what this whole construct is needed for, given that it doesn’t fully reflect reality. Sometimes, even a not entirely correct model can help us navigate the unknown. The model's intent is to depict what's needed for an initiative to be successful. Moreover, success doesn’t mean hitting the bullseye, but getting into the zone first, until one's skillset enables performance.

Conversely, it's important to understand that things don't happen by themselves – at least this seems to be the impression some initiatives leave. One needs to build the whole structure and pull it in the right direction, and the alignment of the various forces can reduce the overall effort and increase the chances of success. Attempting to build something just because it’s written in the documentation, without understanding the whole picture (or something close to it), can easily lead to failure.

This doesn’t mean that all attempts that don’t follow a set of patterns are doomed to failure, but that the road will be more challenging and will probably take longer. Conversely, maybe these deviations from the optimal paths are exactly what an organization needs to grow, to solidify the foundation on which something else can be built. The whole path is an exploration that doesn’t necessarily match what is written in books, or the expectations!

Previous Post <<||>> Next Post

01 September 2024

🗄️Data Management: Data Governance (Part I: No Guild of Heroes)

Data Management Series

Data governance appeared as a topic around the 1980s, though it gained popularity only in the early 2000s [1]. Twenty years later, organizations still miss the mark and fail to understand and implement it in a consistent manner. As usual, the reasons for failure are multiple, and they vary from misunderstanding what governance is all about to poor implementation of methodologies and inadequate management or leadership. 

Moreover, methodologies tend to idealize the various aspects, while what organizations need is pragmatism. For example, data governance is not about heroes and heroism [2], even if the term can give the impression that heroic actions are involved – they are not! Actions for the sake of action don’t necessarily lead to change by themselves. Organizations, and big organizations especially, are in general good at creating meaningless action without results, particularly when people preoccupy themselves, miss, or ignore the mark. 

People do talk to each other, though they try to solve their own problems and optimize their own areas without necessarily thinking about the bigger picture. The problem is not necessarily communication or a lack of insight into business issues – people do communicate and know the issues even without a business impact assessment. The challenge usually lies in convincing the upper management that the effort needs to be consolidated and supported, and the needed resources made available. 

Probably one of the issues with data governance is the attempt to create another structure in the organization focused on quality, a structure that has good chances to fail – and unfortunately often does. Many issues appear when the structure gains weight and becomes a separate entity instead of being the backbone of the organization. 

As soon as organizations separate data governance from the key users, management, and the other important decision-makers, it takes on a life of its own and is likely to diverge from the initial construct. Then organizations need "alignment" and probably other big words to coordinate the effort. Such constructs can still work, but they are suboptimal because the forces will always pull in different directions.

Making each manager and the upper management responsible for governance is probably the way to go, though they’ll need the time for it. In theory, this can be achieved when many of the issues are solved at the lower levels, when automation and other improvements allow them to supervise things rather than hide behind every issue. 

When too much micromanagement is involved, people tend to busy themselves with topics rather than solve the issues they are confronted with. The actual actors need to be empowered to take decisions and optimize their work when needed. Kaizen, the philosophy of continuous improvement, has proven that it works when applied correctly. They’ll need the knowledge, skills, time, and support to do it, though. One of the dangers is, however, that this becomes a full-time responsibility, which tends to create a separate entity again.

The challenge for organizations lies probably in the friction between where they are and what they must do to move toward the various objectives. Moving in small, rapid steps is the way to go, though each person must notice when something doesn’t work as expected and react. That’s probably the most important aspect. 

So, the more functions are created that diverge from the actual organization, the higher the chances of failure. Unfortunately, failure becomes visible only in the later phases, and thus self-awareness, self-control, and other similar “qualities” are needed – small actors that keep the system in check and react whenever needed. Ideally, the employees are the best resources to react whenever something doesn’t work as designed. 

Previous Post <<||>> Next Post 

Resources:
[1] Wikipedia (2023) Data Management [link]
[2] Tiankai Feng (2023) How to Turn Your Data Team Into Governance Heroes [link]


08 April 2024

🧭Business Intelligence: Why Data Projects Fail to Deliver Real-Life Impact (Part III: Failure through the Looking Glass)

Business Intelligence Series

There’s a huge volume of material available on project failure – resources that document why individual projects failed, why projects fail in general, and why project members, managers, and/or executives think projects fail – and there seems to be no activity more rewarding at the end of a project than theorizing about why it failed, the topic culminating occasionally in the blame game. Success may generate applause, though it's failure that attracts and stirs the most waves (irony, disapproval, and similar behavior), and everybody seems to be an expert after the fact. 

The mere definition of project failure – not fulfilling the project’s objectives within the set budget and timeframe – is a misnomer, because budgets and timelines are estimated based on the information available at the beginning of the project, and the amount of uncertainty for many projects is considerable; data projects are no exception. The higher the uncertainty, the less probable the two estimates are. Even simple projects can reveal uncertainty, especially when their broader context is considered. 

Even if it’s not a common practice, one way to cope with uncertainty is to add a tolerance to the estimates, though even this will probably not accommodate the full extent of the unknown, as the tolerances are usually small. The general expectation is an accurate and precise landing, which for big or exploratory projects is seldom possible!
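
As a concrete illustration of attaching a tolerance to an estimate, here is a minimal sketch of the classic three-point (PERT) technique – one common approach from general project management, offered as an example rather than something this post prescribes; the task figures are made up:

```python
# Illustrative three-point (PERT) estimate: instead of committing to a
# single number, derive an expected value plus a tolerance from an
# optimistic, a most likely, and a pessimistic guess. Figures are made up.
def pert(optimistic: float, most_likely: float, pessimistic: float):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

e, sd = pert(optimistic=10, most_likely=15, pessimistic=40)  # person-days
print(f"estimate: {e:.1f} +/- {sd:.1f} person-days")  # -> 18.3 +/- 5.0
# A wide optimistic-pessimistic spread is itself a warning sign that a
# single "accurate and precise landing" should not be expected.
```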

Moreover, the assumptions under which the estimates hold are easily invalidated in practice – resources’ availability, first time right, executives' support in setting priorities, requirements’ quality, technologies’ maturity, etc. If one looks beyond the reasons why projects fail in general, quite often the issues are more organizational than technological, the lack of knowledge and experience being among the factors. 

Conversely, many projects will not get approved if the estimates don’t look positive, and therefore people are pressured in one way or another to make the numbers fit the expectations. Some projects, given their importance, need to be done even if the numbers don’t look good or can’t be quantified correctly. Other projects amount to people’s subsistence on the job, a self-occupation that creates motion, though they can occasionally have a positive impact on the organization as well. These kinds of aspects almost never make it into statistics or surveys. Neither do the big issues people are afraid to talk about. Add to this that, in the light of politics and the office grapevine, the facts get distorted!

Data projects show all the symptoms of failure projects have in general, though when words like AI, Statistics, or Machine Learning are used, the chances of failure are even higher, given that the respective fields require a higher level of expertise, the appropriate use of technologies, and adherence to the scientific process for the results to be valid. Even if projects can benefit from general recipes and established procedures and methods, their range of applicability decreases when the mentioned areas are involved. 

Many data projects have an exploratory nature – seeing what’s possible – and therefore a considerable percentage will not reach production. Moreover, even those that reach that far may still be stopped or discarded sooner or later if they don’t deliver the expected value, and probably many of the models created in the process are biased, irrelevant, or apply the theory incorrectly. Add to this that the mere use of tools and algorithms is not Data Science or Data Analysis. 

The challenge for many data projects is to identify which Project Management (PM) best practices to consider. Following all of them, or none at all, just increases the risk of failure!

Previous Post <<||>> Next Post

06 April 2024

🧭Business Intelligence: Why Data Projects Fail to Deliver Real-Life Impact (Part II: There's Value in Failure)

Business Intelligence Series

"Results are nothing; the energies which produce them
and which again spring from them are everything."
(Wilhelm von Humboldt,  "On Language", 1836)

When the data is not available but is needed on a continuous basis, the solution is usually to redesign the processes and make sure the data becomes available at the needed quality level. Redesign involves additional costs for the business; therefore, it might be tempting to cancel or postpone data projects, at least until they become feasible – though they seldom become so. 

Just because there’s a set of data, it doesn’t mean that there is important knowledge to be extracted from it, or that the investment is feasible. There is, however, value in building experience within the internal resources, in identifying the challenges and the opportunities, in identifying what needs to be changed for harnessing the data. Unfortunately, organizations expect that somebody else will do the work for them instead of making the jump by themselves, and this approach will more likely fail. It’s like expecting to get enlightened after a few theoretical sessions with a guru rather than walking the path oneself. 

This is also reflected in organizations’ readiness to undertake the endeavors required for making the jump on the maturity scale. If organizations can’t approach such topics systematically, address the assumptions, opportunities, and risks adequately, and manage the various aspects involved, it’s hard to believe that their data journey will be positive. 

A data journey shouldn’t be about politics, even if some minds need to be changed in the process, at the management as well as at the lower levels. If the leadership doesn’t recognize the importance of becoming an enabler for such initiatives, then the organization probably deserves to keep the status quo. The drive for change should come from the leadership, whether we talk about data culture, data strategy, decision-making, or any other critical aspect.

An organization will always need to find the balance between time, scope, cost, and quality, and this applies to operations, tactics, and strategies as well as to projects. There are hard limits and a lot of uncertainty associated with data projects and the tasks involved, limits reflected in cost and time estimations (which, frankly, are just experts' rough guesses that can change for the worse in the light of new information). Therefore, especially in data projects, one needs to be able to compromise, to change scope and timelines as seen fit, and, why not, to cancel projects if the objectives are no longer feasible or if compromises can’t be reached.

An organization must be able to take risks and invest in failure, otherwise the opportunities for growth won't materialize. Being able to split a roadmap into small iterative steps – which, besides breaking down the complexity, allow evaluating the progress and the knowledge gained, and incorporating the feedback into the next steps – can prove to be what organizations lack in coping with high uncertainty. Instead, organizations seem to be fascinated by the big bang, thinking that technology can automatically fill the organizational gaps.

Doing the same thing repeatedly and expecting different results is called insanity. Unfortunately, this is what organizations and service providers do in what concerns Project Management in general and data projects in particular. Building something without a foundation, without making sure that the employees have the skillset, maturity and culture to manage the data-related tasks, challenges and opportunities is pure insanity!

Bottom line: harnessing the data requires a certain maturity, and it starts with recognizing and pursuing opportunities, setting goals, following roadmaps, learning to fail and to extract value from failure, and thus controlling the failure. Growth or instant enlightenment without a fair amount of sweat may be possible, though it's an exception few will ever see!

Previous Post <<||>> Next Post

22 March 2024

🧭Business Intelligence: Perspectives (Part IX: Dashboards Are Dead & Other Crap)

Business Intelligence Series

I find annoying the posts that declare a technology dead, as they seem to seek the sensational and, in the end, don't offer enough arguments for the positions taken; it's all just surfing through a few random ideas. Almost every time I click on such a link I find myself disappointed. Maybe it's just me – having too high expectations of ad-hoc experts who haven't understood the role of technologies and their lifecycle.

At least until now, dashboards are the only visual tool that allows displaying related metrics in a consistent manner, reflecting business objectives, health, or other important perspectives on an organization's performance. More recently, notebooks seem to be getting closer, given their capability of presenting data visualizations together with some of the intermediary steps used to obtain the data, though they are still far from offering similar capabilities. So where could any justification against dashboards' utility come from? Even if I've heard one or two expert voices saying that they don't need KPIs for managing an organization, organizations still need metrics to understand how they are doing as a whole and in their parts. 

Many argue that the design of dashboards is poor, that they don't reflect data visualization best practices, or that they are too difficult to navigate. There are so many books on dashboard and/or graphic design that it is almost impossible not to find one in any big library, if one wants to learn more about design. There are many resources online as well, though it's tough to fight a mind's stubbornness when it shows no interest in the topic. Conversely, there's also a lot of crap on the social networks that passes in the mainstream as best practice. 

Frankly, design is important, though as long as the dashboards show the right data and the organization can guide itself by the respective numbers, the perfectionists can say whatever they want, even if they are right! Unfortunately, the numbers shown in dashboards raise justified questions, and the reasons are multiple. Do dashboards show the right numbers? Do they focus on the objectives or the important issues? Can the numbers be trusted? Do they reflect reality? Can we use them in decision-making? 

There are so many things that can go wrong when building a dashboard – so many transformations that need to be performed – that the chances of failure are high. To put it in perspective: if each of twenty transformations is 99% likely to be correct, the chance of an error-free end result is only about 82% (0.99^20). It's enough to have a few blunders in the code or the data visualizations for people to stop trusting the data shown.

Trust and quality are complex concepts, and there’s no standard path to address them because they are a matter of perception, which can vary and change dynamically with the situation. There are, however, approaches that minimize the risk. One can start, for example, by providing transparency: for each dashboard, provide also detailed reports that allow validating the numbers through drilldown (or by running the reports separately, if drilldown isn't possible). If users don’t trust the data or the report, they can then pinpoint what’s wrong. Of course, the two sources must be in sync, otherwise the validation becomes more complex.
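
A minimal sketch of such a validation, assuming a hypothetical reporting database with made-up table and column names (sales_summary holding the dashboard aggregates, sales_detail the drilldown rows):

```python
import sqlite3

# Reconcile a dashboard aggregate against the detail data behind it.
# All names and figures are invented for the example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_summary (region TEXT, total_amount REAL);
CREATE TABLE sales_detail  (region TEXT, amount REAL);
INSERT INTO sales_summary VALUES ('North', 300.0), ('South', 150.0);
INSERT INTO sales_detail  VALUES ('North', 100.0), ('North', 200.0),
                                 ('South', 100.0), ('South', 40.0);
""")

query = """
SELECT s.region, s.total_amount, SUM(d.amount)
FROM   sales_summary s
JOIN   sales_detail d ON d.region = s.region
GROUP  BY s.region, s.total_amount
"""
for region, dashboard_total, detail_total in conn.execute(query):
    status = "OK" if abs(dashboard_total - detail_total) < 0.01 else "MISMATCH"
    print(f"{region}: dashboard={dashboard_total} detail={detail_total} -> {status}")
# 'South' reports a MISMATCH (150 vs 140): the drilldown pinpoints exactly
# where the two sources are out of sync and where trust starts to erode.
```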

There are also issues related to the approach – the way a reporting tool was introduced, the way dashboards flooded the space, how people reacted, and so on. Introducing a reporting tool for dashboards is also a matter of strategy, tactics, and operations, and the various aspects related to them must be addressed. Few organizations do this properly. Many work by the principle "build it and they will come", even when they build the wrong thing!

Previous Post <<||>> Next Post

20 March 2021

🧭Business Intelligence: New Technologies, Old Challenges (Part I: An Introduction)

Business Intelligence

Each important technology has the potential of creating divides between the specialists of a given field. This aspect is most visible in data-driven fields like BI/Analytics or Data Warehousing. Data professionals (engineers, scientists, analysts, developers) skilled only in the new wave of technologies tend to disregard the former technologies and the role they played in the data landscape. The argumentation for such behavior is rooted in the belief that a new technology is better and can solve any problem better than previous technologies did. It’s a kind of mirage professionals and customers can easily fall for.

Being bigger and faster, or having new functionality, doesn’t make a tool the best choice by default. The choice must be rooted in the problem to be solved and the set of requirements it comes with. Just because a vibratory rammer is a newer technology, is faster, and applies more pressure, this doesn’t mean that it will replace the hammer. Where a certain type of power is needed, the vibratory rammer might be the best tool, while for situations in which minimal power and probably more precision are needed, like driving in a nail, an adequately sized hammer will prove the better choice.

A technology is to be used in certain (business/technological) contexts, and even if contexts often overlap, the finer details (aka the requirements) should lead to the proper choice of tools. It’s among a professional’s duties to be able to differentiate between contexts, requirements, and the capabilities of the tools appropriate for each context. In this resides part of a professional’s mastery over one's field of work and of providing adequate solutions for customers’ needs. Especially in IT, it’s not enough to master the new tools; one must also understand the preceding tools, their usage contexts, capabilities, and challenges.

From a historical perspective, each tool appeared to fill a demand, and even if it maybe didn’t manage to fill it adequately, the experience obtained can prove valuable in one way or another. Otherwise, one risks reinventing the wheel or, more dangerously, repeating the failures of the past. Each new technology seems to provide a déjà-vu from this perspective.

Moreover, a new technology provides new opportunities and may require changing our way of thinking about how the technology is used and about the processes or techniques associated with it. Knowledge of past technologies helps in identifying such opportunities more easily. How a tool is used is also a matter of skill, and its appropriate use and adoption imply an inherent learning curve. Previous experience with similar tools tends to reduce the learning curve considerably, though hands-on learning is still necessary, and appropriate learning materials or tutoring are, depending on the case, needed for a smoother transition.

In what concerns the implementation of mature technologies, the challenges were seldom the technologies themselves but mostly of a non-technical nature, ranging from a poor understanding of the tools, their role, and the implications they have for an organization, to an organization’s maturity in leading projects. Even the most advanced technology can fail in the hands of non-experts. Experience can’t be judged based only on the years spent in the field or the number of projects one worked on, but on the understanding acquired about the implementation and usage challenges. These latter aspects seem to be widely ignored, even though they can make the difference between success and failure in a technology’s implementation.

Ultimately, each technology is appropriate in certain contexts and a new technology doesn’t necessarily make another obsolete, at least not until the old contexts become obsolete.

Previous Post <<||>> Next Post

28 August 2019

🛡️Information Security: Data Breach (Definitions)

[data loss:] "Deprivation of something useful or valuable about a set of data, such as unplanned physical destruction of data or failure to preserve the confidentiality of data." (David G Hill, "Data Protection: Governance, Risk Management, and Compliance", 2009)

"The unauthorized disclosure of confidential information, notably that of identifying information about individuals." (David G Hill, "Data Protection: Governance, Risk Management, and Compliance", 2009)

"A failure of an obligation to protect against the release of secure data." (Janice M Roehl-Anderson, "IT Best Practices for Financial Managers", 2010)

"The release of secure information to an untrusted environment. Other terms for this occurrence include unintentional information disclosure, data leak, and data spill." (Craig S Mullins, "Database Administration", 2012)

"The unauthorized movement or disclosure of sensitive information to a party, usually outside the organization, that is not authorized to have or see the information." (Olivera Injac & Ramo Šendelj, "National Security Policy and Strategy and Cyber Security Risks", 2016)

"An incident in which sensitive, protected or confidential data has been viewed, stolen or used by an unauthorized body." (Güney Gürsel, "Patient Privacy and Security in E-Health", 2017)

[data leakage:] "The advertent or inadvertent sharing of private and/or confidential information." (Shalin Hai-Jew, "Beware!: A Multimodal Analysis of Cautionary Tales in Strategic Cybersecurity Messaging Online", 2018)

"A security incident involving unauthorized access to data." (Boaventura DaCosta & Soonhwa Seok, "Cybercrime in Online Gaming", 2020)

"An incident where information is accessed without authorization." (Nathan J Rodriguez, "Internet Privacy", 2020)

"A process where large amounts of private data, mostly about individuals, becomes illegally available to people who should not have access to the information." (Ananda Mitra & Yasmine Khosrowshahi, "The 2018 Facebook Data Controversy and Technological Alienation", 2021)

"This refers to any intentional or unintentional leak of secure or private or confidential data to any untrusted system. This is also referred to as information disclosure or data spill." (Srinivasan Vaidyanathan et al, "Challenges of Developing AI Applications in the Evolving Digital World and Recommendations to Mitigate Such Challenges: A Conceptual View", 2021) 

"When the information is stolen or used without consent of the system’s owner, the data stolen may cover confidential information like credit cards or passwords." (Kevser Z Meral, "Social Media Short Video-Sharing TikTok Application and Ethics: Data Privacy and Addiction Issues", 2021)

[data loss:] "The exposure of proprietary, sensitive, or classified information through either data theft or data leakage." (CNSSI 4009-2015)

30 July 2019

💻IT: False Negative (Definitions)

"Spam that is mistaken for legitimate email." (Andy Walker, "Absolute Beginner’s Guide To: Security, Spam, Spyware & Viruses", 2005)

"Failing to report an event that should have been reported." (W Roy Schulte & K Chandy, "Event Processing: Designing IT Systems for Agile Companies", 2009)

"A subject who is identified as failing to have experienced the event of interest (e.g., exposure, disease) but has truly experienced the event is termed a false negative." (Herbert I Weisberg, "Bias and Causation: Models and Judgment for Valid Comparisons", 2010)

"An incorrect result, which fails to detect a condition or return a result that is actually present." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"An incorrect result as reported by a detective device, such as an IDS, an antivirus program, or a biometric security device. For example, an antivirus program may not “catch” a virus-infected file, or a fingerprint reader may incorrectly fail the fingerprint of the true user." (Mark Rhodes-Ousley, "Information Security: The Complete Reference, Second Edition, 2nd Ed.", 2013)

"A test result that incorrectly reports that a condition being tested for is absent, when, in fact, it is present (e.g., an intrusion detection subsystem falsely reports no attacks in the attack space of an enterprise system)." (O Sami Saydjari, "Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time", 2018)

"A condition when using optimistic locking whereby a row that was not updated since it was selected cannot be updated without first being selected again. Optimistic locking support does not allow a false positive to happen, but a false negative might happen. See also false positive." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

[false-negative result:] "A test result which fails to identify the presence of a defect that is actually present in the test object." (Software Quality Assurance)
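
For a concrete illustration of the concept the definitions above circle around, here is a minimal sketch (the labels and predictions are made up) that counts false negatives alongside their false-positive counterparts:

```python
# A false negative: the condition is present (1) but the test says it isn't.
# A false positive: the condition is absent (0) but the test says it is.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth (made-up data)
predicted = [1, 0, 0, 1, 0, 1, 0, 0]   # what the detector reported

false_negatives = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
false_positives = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
print(f"false negatives: {false_negatives}, false positives: {false_positives}")
# -> false negatives: 2, false positives: 1
```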

09 December 2018

🔭Data Science: Failure (Just the Quotes)

"Every detection of what is false directs us towards what is true: every trial exhausts some tempting form of error. Not only so; but scarcely any attempt is entirely a failure; scarcely any theory, the result of steady thought, is altogether false; no tempting form of error is without some latent charm derived from truth." (William Whewell, "Lectures on the History of Moral Philosophy in England", 1852)

"Scarcely any attempt is entirely a failure; scarcely any theory, the result of steady thought, is altogether false; no tempting form of Error is without some latent charm derived from Truth." (William Whewell, "Lectures on the History of Moral Philosophy in England", 1852)

"We learn wisdom from failure much more than from success. We often discover what will do, by finding out what will not do; and probably he who never made a mistake never made a discovery." (Samuel Smiles, "Facilities and Difficulties", 1859)

"[…] the statistical prediction of the future from the past cannot be generally valid, because whatever is future to any given past, is in tum past for some future. That is, whoever continually revises his judgment of the probability of a statistical generalization by its successively observed verifications and failures, cannot fail to make more successful predictions than if he should disregard the past in his anticipation of the future. This might be called the ‘Principle of statistical accumulation’." (Clarence I Lewis, "Mind and the World-Order: Outline of a Theory of Knowledge", 1929)

"Science condemns itself to failure when, yielding to the infatuation of the serious, it aspires to attain being, to contain it, and to possess it; but it finds its truth if it considers itself as a free engagement of thought in the given, aiming, at each discovery, not at fusion with the thing, but at the possibility of new discoveries; what the mind then projects is the concrete accomplishment of its freedom." (Simone de Beauvoir, "The Ethics of Ambiguity", 1947)

"Common sense […] may be thought of as a series of concepts and conceptual schemes which have proved highly satisfactory for the practical uses of mankind. Some of those concepts and conceptual schemes were carried over into science with only a little pruning and whittling and for a long time proved useful. As the recent revolutions in physics indicate, however, many errors can be made by failure to examine carefully just how common sense ideas should be defined in terms of what the experimenter plans to do." (James B Conant, "Science and Common Sense", 1951)

"Catastrophes are often stimulated by the failure to feel the emergence of a domain, and so what cannot be felt in the imagination is experienced as embodied sensation in the catastrophe. (William I Thompson, "Gaia, a Way of Knowing: Political Implications of the New Biology", 1987)

"What about confusing clutter? Information overload? Doesn't data have to be ‘boiled down’ and  ‘simplified’? These common questions miss the point, for the quantity of detail is an issue completely separate from the difficulty of reading. Clutter and confusion are failures of design, not attributes of information." (Edward R Tufte, "Envisioning Information", 1990)

"When a system is predictable, it is already performing as consistently as possible. Looking for assignable causes is a waste of time and effort. Instead, you can meaningfully work on making improvements and modifications to the process. When a system is unpredictable, it will be futile to try and improve or modify the process. Instead you must seek to identify the assignable causes which affect the system. The failure to distinguish between these two different courses of action is a major source of confusion and wasted effort in business today." (Donald J Wheeler, "Understanding Variation: The Key to Managing Chaos" 2nd Ed., 2000)

"[…] in cybernetics, control is seen not as a function of one agent over something else, but as residing within circular causal networks, maintaining stabilities in a system. Circularities have no beginning, no end and no asymmetries. The control metaphor of communication, by contrast, punctuates this circularity unevenly. It privileges the conceptions and actions of a designated controller by distinguishing between messages sent in order to cause desired effects and feedback that informs the controller of successes or failures." (Klaus Krippendorff, "On Communicating: Otherness, Meaning, and Information", 2009)

"To get a true understanding of the work of mathematicians, and the need for proof, it is important for you to experiment with your own intuitions, to see where they lead, and then to experience the same failures and sense of accomplishment that mathematicians experienced when they obtained the correct results. Through this, it should become clear that, when doing any level of mathematics, the roads to correct solutions are rarely straight, can be quite different, and take patience and persistence to explore." (Alan Sultan & Alice F Artzt, "The Mathematics that every Secondary School Math Teacher Needs to Know", 2011)

"A very different - and very incorrect - argument is that successes must be balanced by failures (and failures by successes) so that things average out. Every coin flip that lands heads makes tails more likely. Every red at roulette makes black more likely. […] These beliefs are all incorrect. Good luck will certainly not continue indefinitely, but do not assume that good luck makes bad luck more likely, or vice versa." (Gary Smith, "Standard Deviations", 2014)

"We are seduced by patterns and we want explanations for these patterns. When we see a string of successes, we think that a hot hand has made success more likely. If we see a string of failures, we think a cold hand has made failure more likely. It is easy to dismiss such theories when they involve coin flips, but it is not so easy with humans. We surely have emotions and ailments that can cause our abilities to go up and down. The question is whether these fluctuations are important or trivial." (Gary Smith, "Standard Deviations", 2014)

"Although cascading failures may appear random and unpredictable, they follow reproducible laws that can be quantified and even predicted using the tools of network science. First, to avoid damaging cascades, we must understand the structure of the network on which the cascade propagates. Second, we must be able to model the dynamical processes taking place on these networks, like the flow of electricity. Finally, we need to uncover how the interplay between the network structure and dynamics affects the robustness of the whole system." (Albert-László Barabási, "Network Science", 2016)

More quotes in "Failure" at the-web-of-knowledge.blogspot.com.

29 November 2016

♟️Strategic Management: Failure (Just the Quotes)

"Failure to succeed greatly in management usually occurs not so much from lack of knowledge of the important principles of the science of management as from failure to apply them. Most of the principles of successful management are old, and many of them have received sufficient publicity to be well known, but managers are curiously prone to look upon managerial success as a personal attribute that is slightly dependent on principles or laws." (Allan C Haskell, "How to Make and Use Graphic Charts", 1919)

"Failure to delegate causes managers to be crushed and fail under the weight of accumulated duties that they do not know and have not learned to delegate." (James D Mooney, "Onward Industry!", 1931)

"The making of decisions, as everyone knows from personal experience, is a burdensome task. Offsetting the exhilaration that may result from correct and successful decision and the relief that follows the termination of a struggle to determine issues is the depression that comes from failure, or error of decision, and the frustration which ensues from uncertainty." (Chester I Barnard, "The Functions of the Executive", 1938)

"You can teach the rudiments of cooking, as of management, but you cannot make a great cook or a great manager. In both activities, you ignore fundamentals at grave risk  - but sometimes succeed. In both, science can be extremely useful but is no substitute for the art itself. In both, inspired amateurs can outdo professionals. In both, perfection is rarely achieved, and failure is more common than the customers realize. In both, practitioners don't need recipes that detail timing down to the last second, ingredients to the last fraction of an ounce, and procedures down to the Just flick of the wrist; they need reliable maxims, instructive anecdotes, and no dogmatism." (Robert Heller, "The Naked Manager: Games Executives Play", 1972)

"We never like to admit to ourselves that we have made a mistake. Organizational structures tend to accentuate this source of failure of information." (Kenneth E Boulding, "Toward a General Social Science", 1974)

"[...] when a variety of tasks have all to be performed in cooperation, synchronization, and communication, a business needs managers and a management. Otherwise, things go out of control; plans fail to turn into action; or, worse, different parts of the plans get going at different speeds, different times, and with different objectives and goals, and the favor of the 'boss' becomes more important than performance." (Peter F Drucker, "People and Performance", 1977)

"A competent manager can usually explain necessary planning changes in terms of specific facts which have contributed to the change. The existing fear, or attitude of failure, which results from missed completion dates should be replaced by a more constructive fear of failing to keep a plan updated." (Philip F Gehring Jr. & Udo W Pooch, "Advances in Computer Programming Management", 1980)

"All problems present themselves to the mind as threats of failure." (J. J. Gordon, "Creative Computing", 1983)

"One of the most important tasks of a manager is to eliminate his people's excuses for failure." (Robert Townsend, "Further Up the Organization", 1984)

"It seems to me that we too often focus on the inside aspects of the job of management, failing to give proper attention to the requirement for a good manager to maintain those relationships between his organization and the environment in which it must operate which permits it to move ahead and get the job done." (Breene Kerr, Giants in Management, 1985) 

"Most of us managers are prone to one failing: A tendency to manage people as though they were modular components." (Tom DeMarco & Timothy Lister, "Peopleware: Productive Projects and Teams", 1987)

"Setting goals can be the difference between success and failure. [...] Goals must not be defined so broadly that they cannot be quantified. Having quantifiable goals is an essential starting point if managers are to measure the results of their organization's activities. [...] Too often people mistake being busy for achieving goals." (Philip D Harvey & James D Snyder, Harvard Business Review, 1987)

"The tendency to hide unfavorable information often occurs in companies that are quick to reward success and equally quick to punish failure." (Robert M Tomasko, "Downsizing", 1987)

"The major fault in this process - and thus, in the way we were making decisions - is that it lacks an organizing framework. In pursuing a variety of goals and objectives, in whatever situation we manage, we often fail to see that some of them are in conflict and that the achievement of one might come at the expense of achieving another. In weighing up the actions we might take to reach our goals and objectives, we have no way to account for nature's complexity and only rarely factor it in." (Allan Savory & Jody Butterfield, "Holistic Management: A new framework for decision making", 1988)

"Failing organizations are usually overmanaged and under-led." (Warren G Bennis, 1988)

"Commonly, the threats to strategy are seen to emanate from outside a company because of changes in technology or the behavior of competitors. Although external changes can be the problem, the greater threat to strategy often comes from within. A sound strategy is undermined by a misguided view of competition, by organizational failures, and, especially, by the desire to grow." (Michael E Porter, "What is Strategy?", Harvard Business Review, 1996)

"Managers must clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational agenda involves continual improvement everywhere there are no trade-offs. Failure to do this creates vulnerability even for companies with a good strategy. The operational agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practice. In contrast, the strategic agenda is the right place for defining a unique position, making clear trade-offs, and tightening fit. It involves the continual search for ways to reinforce and extend the company’s position. The strategic agenda demands discipline and continuity; its enemies are distraction and compromise." (Michael E Porter, "What is Strategy?", Harvard Business Review, 1996)

"Managers are incurably susceptible to panacea peddlers. They are rooted in the belief that there are simple, if not simple-minded, solutions to even the most complex of problems. And they do not learn from bad experiences. Managers fail to diagnose the failures of the fads they adopt; they do not understand them. […] Those at the top feel obliged to pretend to omniscience, and therefore refuse to learn anything new even if the cost of doing so is success." (Russell L Ackoff, "A Lifetime Of Systems Thinking", Systems Thinker, 1999)

"The aim of leadership should be to improve the performance of man and machine, to improve quality, to increase output, and simultaneously to bring pride of workmanship to people. Put in a negative way, the aim of leadership is not merely to find and record failures of men, but to remove the causes of failure: to help people to do a better job with less effort." (W Edwards Deming, "Out of the Crisis", 2000)

"Process standardization from on high is disempowerment. It is a direct result of fearful management, allergic to failure. It tries to avoid all chance of failure by having key decisions made by a guru class (those who set the standards) and carried out mechanically by the regular folk. As defense against failure, standard process is a kind of armor. The more worried you are about failure, the heavier the armor you put on. But armor always has a side effect of reduced mobility. The overarmored organization has lost the ability to move and move quickly. When this happens, standard process is the cause of lost mobility. It is, however, not the root cause. The root cause is fear." (Tom DeMarco, "Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency", 2001)

"When unmeetable expectations are formed, failure is virtually assured, since we have defined failure as unmet expectations. This is called a planning failure and is the difference between what was planned to be accomplished and what was, in fact, achievable. The second component of failure is poor performance or actual failure. This is the difference between what was achievable and what was actually accomplished." (Harold Kerzner, "Strategic Planning for Project Management using a Project Management Maturity Model", 2001)

"When we fail to grasp the systemic source of problems, we are left to treat symptoms rather than eliminate underlying causes. Without systemic thinking, the best we can ever do is adapt or react. Systems thinking, powered by visual models, stimulates creative - rather than adaptive - behavior. [...] To benefit from systems thinking, the project team needs to extend that viewpoint upward to the bigger picture of the project’s overall environment."(Kevin Forsberg et al, "Visualizing Project Management: Models and frameworks for mastering complex systems" 3rd Ed., 2005)

"It’s tempting to view the multitude of monster projects gone bad as anomalies, excrescences of corporate and government bureaucracies run amok. But you will find similar tales of woe emerging from software projects big and small, public and private, old and new. Though details differ, the pattern is depressingly repetitive: Moving targets. Fluctuating goals. Unrealistic schedules. Missed deadlines. Ballooning costs. Despair. Chaos." (Scott Rosenberg, "Dreaming in Code", 2007)

"A bad strategy will fail no matter how good your information is and lame execution will stymie a good strategy. If you do enough things poorly, you will go out of business." (Bill Gates, "Business @ the Speed of Thought: Succeeding in the Digital Economy", 2009)

"Any strategy that involves crossing a valley - accepting short-term losses to reach a higher hill in the distance - will soon be brought to a halt by the demands of a system that celebrates short-term gains and tolerates stagnation, but condemns anything else as failure. In short, a world where big stuff can never get done." (Neal Stephenson, "Innovation Starvation," World Policy Journal, 2011)

"Experts in the 'Problem' area proceed to elaborate its complexity. They design complex Systems to attack it. This approach guarantees failure, at least for all but the most pedestrian tasks. The problem is a Problem precisely because it is incorrectly conceptualized in the first place, and a large System for studying and attacking the Problem merely locks in the erroneous conceptualization into the minds of everyone concerned. What is required is not a large System, but a different approach. Trying to design a System in the hope that the System will somehow solve the Problem, rather than simply solving the Problem in the first place, is to present oneself with two problems in place of one." (John Gall, "The Systems Bible: The Beginner's Guide to Systems Large and Small"[Systematics 3rd Ed.], 2011)

"Pragmatically, it is generally easier to aim at changing one or a few things at a time and then work out the unexpected effects, than to go to the opposite extreme. Attempting to correct everything in one grand design is appropriately designated as Grandiosity. […] A little Grandiosity goes a long way. […] The diagnosis of Grandiosity is quite elegantly and strictly made on a purely quantitative basis: How many features of the present System, and at what level, are to be corrected at once? If more than three, the plan is grandiose and will fail." (John Gall, "The Systems Bible: The Beginner's Guide to Systems Large and Small"[Systematics 3rd Ed.], 2011)

"Restructuring is a favorite tactic of antisocials who have reached a senior position in an organization. The chaos that results is an ideal smokescreen for dysfunctional leadership. Failure at the top goes unnoticed, while the process of restructuring creates the illusion of a strong, creative hand on the helm." (Manfred F R Kets de Vries, "The Leader on the Couch", 2011)

"Most leadership strategies are doomed to failure from the outset. As people have been noting for years, the majority of strategic initiatives that are driven from the top are marginally effective - at best." (Peter Senge, "The Dance of Change: The challenges to sustaining momentum in a learning organization", 2014)

"A strategy that doesn't take into account resources is doomed to failure." (John C Maxwell, "JumpStart Your Thinking: A 90-Day Improvement Plan", 2015)

"Culture is an emergent phenomenon produced by structures, practices, leadership behavior, incentives, symbols, rituals, and processes. All those levers have to be pulled to have any chance of success. However, one driver of culture change is more important than the others. Culture change fails when the most visible symbols of it fail to change. Those key symbols are almost always the top leader’​​​​​​s behavior, which speaks much louder than anything they might say." (Paul Gibbons, "The Science of Successful Organizational Change",  2015)

"[…] the practice of continuous integration helps a development team fail-fast in integrating code under development. A corollary of failing fast is to aim for fast feedback. The practice of regularly showcasing (demoing) features under development to product owners and business stakeholders helps them verify whether it is what they asked for and decide whether it is what they really want." (Sriram Narayan, "Agile IT Organization Design: For Digital Transformation and Continuous Delivery", 2015)

"Evidence is freely available which demonstrates a gap between what the company thinks is important to customers and what customers actually deem to be the most important when it comes to making their choices. The failure to understand what is really important leads to customers receiving a sub-optimal experience and the company sub-optimising its commercial position." (Alan Pennington, "The Customer Experience Book", 2016)

"Organizations that rely too heavily on org charts and matrixes to split and control work often fail to create the necessary conditions to embrace innovation while still delivering at a fast pace. In order to succeed at that, organizations need stable teams and effective team patterns and interactions. They need to invest in empowered, skilled teams as the foundation for agility and adaptability. To stay alive in ever more competitive markets, organizations need teams and people who are able to sense when context changes and evolve accordingly." (Matthew Skelton & Manuel Pais, "Team Topologies: Organizing Business and Technology Teams for Fast Flow", 2019)

02 October 2014

🕸Systems Engineering: Failure (Just the Quotes)

 "A complex system can fail in an infinite number of ways." (John Gall, "General Systemantics: How systems work, and especially how they fail", 1975)

"A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system." (John Gall, "General Systemantics: How systems work, and especially how they fail", 1975)

"A system represents someone's solution to a problem. The system doesn't solve the problem." (John Gall, "General Systemantics: How systems work, and especially how they fail", 1975)

"Systems Are Seductive. They promise to do a hard job faster, better, and more easily than you could do it by yourself. But if you set up a system, you are likely to find your time and effort now being consumed in the care and feeding of the system itself. New problems are created by its very presence. Once set up, it won't go away, it grows and encroaches. It begins to do strange and wonderful things. Breaks down in ways you never thought possible. It kicks back, gets in the way, and opposes its own proper function. Your own perspective becomes distorted by being in the system. You become anxious and push on it to make it work. Eventually you come to believe that the misbegotten product it so grudgingly delivers is what you really wanted all the time. At that point encroachment has become complete. You have become absorbed. You are now a systems person." (John Gall, "General Systemantics: How systems work, and especially how they fail", 1975)

"The failure of individual subsystems to be sufficiently adaptive to changing environments results in the subsystems forming a collective association that, as a unit, is better able to function in new circumstances. Formation of such an association is a structural change; the behavioral role of the new conglomerate is a junctional change; both types of change are characteristic of the formation of hierarchies." (John L Casti, "On System Complexity: Identification, Measurement, and Management" [in "Complexity, Language, and Life: Mathematical Approaches"] 1986)

"The system always kicks back. - Systems get in the way - or, in slightly more elegant language: Systems tend to oppose their own proper functions. Systems tend to malfunction conspicuously just after their greatest triumph." (John Gall, "Systemantics: The underground text of systems lore", 1986)

"Physical systems are subject to the force of entropy, which increases until eventually the entire system fails. The tendency toward maximum entropy is a movement to disorder, complete lack of resource transformation, and death." (Stephen G Haines, "The Managers Pocket Guide to Systems Thinking & Learning", 1998)

"Most systems displaying a high degree of tolerance against failures are a common feature: Their functionality is guaranteed by a highly interconnected complex network. A cell's robustness is hidden in its intricate regulatory and metabolic network; society's resilience is rooted in the interwoven social web; the economy's stability is maintained by a delicate network of financial and regulator organizations; an ecosystem's survivability is encoded in a carefully crafted web of species interactions. It seems that nature strives to achieve robustness through interconnectivity. Such universal choice of a network architecture is perhaps more than mere coincidences." (Albert-László Barabási, "Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life", 2002)

"A fundamental reason for the difficulties with modern engineering projects is their inherent complexity. The systems that these projects are working with or building have many interdependent parts, so that changes in one part often have effects on other parts of the system. These indirect effects are frequently unanticipated, as are collective behaviors that arise from the mutual interactions of multiple components. Both indirect and collective effects readily cause intolerable failures of the system. Moreover, when the task of the system is intrinsically complex, anticipating the many possible demands that can be placed upon the system, and designing a system that can respond in all of the necessary ways, is not feasible. This problem appears in the form of inadequate specifications, but the fundamental issue is whether it is even possible to generate adequate specifications for a complex system." (Yaneer Bar-Yam, "Making Things Work: Solving Complex Problems in a Complex World", 2004)

"It is no longer sufficient for engineers merely to design boxes such as computers with the expectation that they would become components of larger, more complex systems. That is wasteful because frequently the box component is a bad fit in the system and has to be redesigned or worse, can lead to system failure. We must learn how to design large-scale, complex systems from the top down so that the specification for each component is derivable from the requirements for the overall system. We must also take a much larger view of systems. We must design the man-machine interfaces and even the system-society interfaces. Systems engineers must be trained for the design of large-scale, complex, man-machine-social systems." (A Wayne Wymore, "Systems Movement: Autobiographical Retrospectives", 2004)

"[…] in cybernetics, control is seen not as a function of one agent over something else, but as residing within circular causal networks, maintaining stabilities in a system. Circularities have no beginning, no end and no asymmetries. The control metaphor of communication, by contrast, punctuates this circularity unevenly. It privileges the conceptions and actions of a designated controller by distinguishing between messages sent in order to cause desired effects and feedback that informs the controller of successes or failures." (Klaus Krippendorff, "On Communicating: Otherness, Meaning, and Information", 2009)

"Experts in the 'Problem' area proceed to elaborate its complexity. They design complex Systems to attack it. This approach guarantees failure, at least for all but the most pedestrian tasks. The problem is a Problem precisely because it is incorrectly conceptualized in the first place, and a large System for studying and attacking the Problem merely locks in the erroneous conceptualization into the minds of everyone concerned. What is required is not a large System, but a different approach. Trying to design a System in the hope that the System will somehow solve the Problem, rather than simply solving the Problem in the first place, is to present oneself with two problems in place of one." (John Gall, "The Systems Bible: The Beginner's Guide to Systems Large and Small"[Systematics 3rd Ed.], 2011)

"Pragmatically, it is generally easier to aim at changing one or a few things at a time and then work out the unexpected effects, than to go to the opposite extreme. Attempting to correct everything in one grand design is appropriately designated as Grandiosity. […] A little Grandiosity goes a long way. […] The diagnosis of Grandiosity is quite elegantly and strictly made on a purely quantitative basis: How many features of the present System, and at what level, are to be corrected at once? If more than three, the plan is grandiose and will fail." (John Gall, "The Systems Bible: The Beginner's Guide to Systems Large and Small"[Systematics 3rd Ed.], 2011)

"Complex systems seem to have this property, with large periods of apparent stasis marked by sudden and catastrophic failures. These processes may not literally be random, but they are so irreducibly complex (right down to the last grain of sand) that it just won’t be possible to predict them beyond a certain level. […] And yet complex processes produce order and beauty when you zoom out and look at them from enough distance." (Nate Silver, "The Signal and the Noise: Why So Many Predictions Fail-but Some Don't", 2012)

"If an emerging system is born complex, there is neither leeway to abandon it when it fails, nor the means to join another, successful one. Such a system would be caught in an immovable grip, congested at the top, and prevented, by a set of confusing but locked–in precepts, from changing." (Lawrence K Samuels, "Defense of Chaos: The Chaology of Politics, Economics and Human Action", 2013)

"Stability is often defined as a resilient system that keeps processing transactions, even if transient impulses (rapid shocks to the system), persistent stresses (force applied to the system over an extended period), or component failures disrupt normal processing." (Michael Hüttermann et al, "DevOps for Developers", 2013)

"Although cascading failures may appear random and unpredictable, they follow reproducible laws that can be quantified and even predicted using the tools of network science. First, to avoid damaging cascades, we must understand the structure of the network on which the cascade propagates. Second, we must be able to model the dynamical processes taking place on these networks, like the flow of electricity. Finally, we need to uncover how the interplay between the network structure and dynamics affects the robustness of the whole system." (Albert-László Barabási, "Network Science", 2016)

09 November 2011

📉Graphical Representation: Failure (Just the Quotes)

"The essential quality of graphic representations is clarity. If the diagram fails to give a clearer impression than the tables of figures it replaces, it is useless. To this end, we will avoid complicating the diagram by including too much data." (Armand Julin, "Summary for a Course of Statistics, General and Applied", 1910)

"Where the values of a series are such that a large part the grid would be superfluous, it is the practice to break the grid thus eliminating the unused portion of the scale, but at the same time indicating the zero line. Failure to include zero in the vertical scale is a very common omission which distorts the data and gives an erroneous visual impression." (Calvin F Schmid, "Handbook of Graphic Presentation", 1954)

"[…] the only worse design than a pie chart is several of them, for then the viewer is asked to compare quantities located in spatial disarray both within and between pies. […] Given their low data-density and failure to order numbers along a visual dimension, pie charts should never be used." (Edward R Tufte, "The Visual Display of Quantitative Information", 1983)

"[…] the partial scale break is a weak indicator that the reader can fail to appreciate fully; visually the graph is still a single panel that invites the viewer to see, inappropriately, patterns between the two scales. […] The partial scale break also invites authors to connect points across the break, a poor practice indeed; […]" (William S. Cleveland, "Graphical Methods for Data Presentation: Full Scale Breaks, Dot Charts, and Multibased Logging", The American Statistician Vol. 38" (4) 1984)

"When a graph is constructed, quantitative and categorical information is encoded, chiefly through position, size, symbols, and color. When a person looks at a graph, the information is visually decoded by the person's visual system. A graphical method is successful only if the decoding process is effective. No matter how clever and how technologically impressive the encoding, it is a failure if the decoding process is a failure. Informed decisions about how to encode data can be achieved only through an understanding of the visual decoding process, which is called graphical perception." (William S Cleveland, "The Elements of Graphing Data", 1985)

"Confusion and clutter are failures of design, not attributes of information. And so the point is to find design strategies that reveal detail and complexity - rather than to fault the data for an excess of complication. Or, worse, to fault viewers for a lack of understanding. Among the most powerful devices for reducing noise and enriching the content of displays is the technique of layering and separation, visually stratifying various aspects of the data." (Edward R Tufte, "Envisioning Information", 1990)

"What about confusing clutter? Information overload? Doesn't data have to be ‘boiled down’ and  ‘simplified’? These common questions miss the point, for the quantity of detail is an issue completely separate from the difficulty of reading. Clutter and confusion are failures of design, not attributes of information." (Edward R Tufte, "Envisioning Information", 1990)

"Audience boredom is usually a content failure, not a decoration failure." (Edward R Tufte, "The cognitive style of PowerPoint", 2003)

"Diagrams are a means of communication and explanation, and they facilitate brainstorming. They serve these ends best if they are minimal. Comprehensive diagrams of the entire object model fail to communicate or explain; they overwhelm the reader with detail and they lack meaning." (Eric Evans, "Domain-Driven Design: Tackling complexity in the heart of software", 2003)

"No matter how clever the choice of the information, and no matter how technologically impressive the encoding, a visualization fails if the decoding fails. Some display methods lead to efficient, accurate decoding, and others lead to inefficient, inaccurate decoding. It is only through scientific study of visual perception that informed judgments can be made about display methods." (William S Cleveland, "The Elements of Graphing Data", 1985)

"Most dashboards fail to communicate efficiently and effectively, not because of inadequate technology (at least not primarily), but because of poorly designed implementations. No matter how great the technology, a dashboard's success as a medium of communication is a product of design, a result of a display that speaks clearly and immediately. Dashboards can tap into the tremendous power of visual perception to communicate, but only if those who implement them understand visual perception and apply that understanding through design principles and practices that are aligned with the way people see and think." (Stephen Few, "Information Dashboard Design", 2006)

"The Sixth Principle for the analysis and display of data: 'Analytical presentations ultimately stand or fall depending on the quality, relevance, and integrity of their content.' This suggests that the most effective way to improve a presentation is to get better content. It also suggests that design devices and gimmicks cannot salvage failed content." (Edward R Tufte, "Beautiful Evidence", 2006)

"The main goal of data visualization is its ability to visualize data, communicating information clearly and effectively. It doesn’t mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex dataset by communicating its key aspects in a more intuitive way. Yet designers often tend to discard the balance between design and function, creating gorgeous data visualizations which fail to serve its main purpose - communicate information." (Vitaly Friedman, "Data Visualization and Infographics", Smashing Magazine, 2008)

"Designing good visual displays with an easy-to-use interactive system is difficult. The designer’s first attempts will usually fail, so it is critical that proposed systems be tested on at least several sets of typical users. These usability tests help the designer iterate to the best possible system." (Daniel B Carr & Linda W Pickle, "Visualizing Data Patterns with Micromaps", 2010)

"To be sure, data doesn’t always need to be visualized, and many data visualizations just plain suck. Look around you. It’s not hard to find truly awful representations of information. Some work in concept but fail because they are too busy; they confuse people more than they convey information [...]. Visualization for the sake of visualization is unlikely to produce desired results - and this goes double in an era of Big Data. Bad is still bad, even and especially at a larger scale." (Phil Simon, "The Visual Organization: Data Visualization, Big Data, and the Quest for Better Decisions", 2014)

"The goal of using data visualization to make better and faster decisions may lead people to think that any data visualization that is not immediately understood is a failure. Yes, a good visualization should allow you to see things that you might have missed, and to glean insights faster, but you still have to think." (Steve Wexler, "The Big Picture: How to use data visualization to make better decisions - faster", 2021)

"The rise of graphicacy and broader data literacy intersects with the technology that makes it possible and the critical need to understand information in ways current literacies fail. Like reading and writing, data literacy must become mainstream to fully democratize information access." (Vidya Setlur & Bridget Cogley, "Functional Aesthetics for data visualization", 2022)

"A perfectly relevant visualization that breaks a few presentation rules is far more valuable - it’s better - than a perfectly executed, beautiful chart that contains the wrong data, communicates the wrong message, or fails to engage its audience. [...] The more relevant a data visualization is to its context, the more forgiving, to a point, we can be about its execution" (Scott Berinato, "Good Charts : the HBR guide to making smarter, more persuasive data visualizations", 2023)

11 November 2008

🗄️Data Management: Data Quality (Part I: Information Systems' Perspective)

Data Management
Data Management Series

One LinkedIn user brought to attention that, according to top IT managers, the top two reasons why CRM investments fail are: (1) the difficulty of managing resistance within the organization; (2) poor data quality.

The two reasons are common not only to CRM or BI solutions but to other Information Systems as well, and of the two, data quality usually has the bigger impact. In ERP systems especially, data quality continues to be a problem, for a few reasons:
  • Processes span different functions and/or roles, each maintaining the data they are interested in, without any agreement or coordination on ownership. This lack of ownership is generally management's fault.
  • Within an enterprise, many systems end up being integrated, and the quality of the data depends on the quality and scope of the integrations - whether they were addressed fully or only superficially. Few integrations are stable and properly designed. Even if stability is achieved in time, the scope is seldom extended because that requires further investment, so the remaining data must be maintained manually and the resulting issues either troubleshot or left to accumulate in the backlog.
  • There are systems which are not integrated but use the same data, so users must duplicate their effort and therefore often focus only on their immediate needs. Moreover, the lack of mappings between systems makes data analysis and review difficult.
  • Knowledge about the systems in use - their processes, procedures, best practices, policies, etc. - is lacking. Users usually do their best with the knowledge they have, and despite their best intentions, the systems end up being misused just to get things done.
  • Validation of data entry is basic or nonexistent at the important entry points (UI, integration interfaces, bulk upload functionality), and systems are often too permissive (allowing workarounds) or lack stability and reliability (bugs/defects); a minimal validation sketch follows this list.
  • Data quality control mechanisms and quality methodologies are missing, as is a Data and/or Quality Management strategy. If data quality is not kept under review, it easily degrades over time.
  • A data culture and processes that support data quality are lacking.
  • People lack the consistency and/or self-discipline to follow the processes and to update the data as the processes require, rather than doing just enough for a record to move to the next or final step. As a result, the gap between reality and what the system presents is considerable.
  • People are not motivated to improve data quality, even when they recognize its importance.
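
To make the validation and quality-control points above concrete, here is a minimal sketch in Python; the record layout, rules, and reference data are hypothetical stand-ins, and a real system would enforce such rules at each entry point and in recurring quality checks:

```python
import re

# Hypothetical customer records as they might arrive at an entry point
# (UI form, integration interface, or bulk upload).
records = [
    {"id": 1, "name": "Alpha GmbH", "email": "office@alpha.example", "country": "DE"},
    {"id": 2, "name": "", "email": "not-an-email", "country": "XX"},
]

VALID_COUNTRIES = {"DE", "FR", "US"}  # assumed reference data
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(record):
    """Return the list of rule violations for a single record."""
    issues = []
    if not record["name"].strip():
        issues.append("name is mandatory")
    if not EMAIL_RE.match(record["email"]):
        issues.append("email is malformed")
    if record["country"] not in VALID_COUNTRIES:
        issues.append("country not in reference data")
    return issues

for rec in records:
    for issue in validate(rec):
        print(f"record {rec['id']}: {issue}")
```

The same rules can be run as a recurring control over existing data, which is the simplest form of the quality monitoring mentioned above.
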
Data quality is usually ignored in BI projects because few people go looking for the causes - it is easier to blame the BI solution or the technical team than to do something about it. This is one of the reasons users are reluctant to adopt a BI solution, alongside the solution's flexibility and the degree to which it satisfies their needs. On the other side, BI solutions are often abused, for example by including reports with OLTP characteristics or by providing too much unstructured or inadequate content that needs to be reworked further.

Data quality does come onto managers' agenda, especially during ERP implementations. Unfortunately, it disappears from it just as quickly, despite warnings about the consequences poor data quality can have on the implementation and on further data use. An ERP implementation is supposed to be an opportunity for improving data quality, though for many organizations it remains just that - a missed opportunity. Once it passes, organizations need considerably more financial and human resources to recover even a fraction of what was missed.

The above topics are complex and need further discussion (see [1], [2]).


Written: Nov-2008, Last Reviewed: Mar-2024

Resources:
[1] SQL-Troubles (2010) Data Management: Data Quality - An Introduction (link)
[2] SQL-Troubles (2012) Data Migration: Data Quality’s Perspective I - A Bird’s-Eye View (link)

30 December 2007

🏗️Software Engineering: Failure (Just the Quotes)

"A complex system can fail in an infinite number of ways." (John Gall, "General Systemantics: How systems work, and especially how they fail", 1975)

"Failure to allow enough time for system test, in particular, is peculiarly disastrous. Since the delay comes at the end of the schedule, no one is aware of schedule trouble until almost the delivery date. Bad news, late and without warning, is unsettling to customers and to managers." (Fred P Brooks, "The Mythical Man-Month: Essays", 1975)

"The fundamental problem with software maintenance is that fixing a defect has a substantial (20-50 percent) chance of introducing another. So the whole process is two steps forward and one step back. Why aren't defects fixed more cleanly? First, even a subtle defect shows itself as a local failure of some kind. In fact it often has system-wide ramifications, usually nonobvious. Any attempt to fix it with minimum effort will repair the local and obvious, but unless the structure is pure or the documentation very fine, the far-reaching effects of the repair will be overlooked. Second, the repairer is usually not the man who wrote the code, and often he is a junior programmer or trainee. (Frederick P. Brooks, "The Mythical Man-Month" , 1975)

"Systems with unknown behavioral properties require the implementation of iterations which are intrinsic to the design process but which are normally hidden from view. Certainly when a solution to a well-understood problem is synthesized, weak designs are mentally rejected by a competent designer in a matter of moments. On larger or more complicated efforts, alternative designs must be explicitly and iteratively implemented. The designers perhaps out of vanity, often are at pains to hide the many versions which were abandoned and if absolute failure occurs, of course one hears nothing. Thus the topic of design iteration is rarely discussed. Perhaps we should not be surprised to see this phenomenon with software, for it is a rare author indeed who publicizes the amount of editing or the number of drafts he took to produce a manuscript." (Fernando J Corbató, "A Managerial View of the Multics System Development", 1977)

"[...] when a variety of tasks have all to be performed in cooperation, synchronization, and communication, a business needs managers and a management. Otherwise, things go out of control; plans fail to turn into action; or, worse, different parts of the plans get going at different speeds, different times, and with different objectives and goals, and the favor of the 'boss' becomes more important than performance." (Peter F Drucker, "People and Performance", 1977)

"How do we convince people that in programming simplicity and clarity - in short: what mathematicians call 'elegance' - are not a dispensable luxury, but a crucial matter that decides between success and failure?" (Edsger W Dijkstra, "'Why Is Software So Expensive?' An Explanation to the Hardware Designer", [EWD648] 1982) 

"Leaders value learning and mastery, and so do people who work for leaders. Leaders make it clear that there is no failure, only mistakes that give us feedback and tell us what to do next." (Warren G Bennis, Training and Development Journal, 1984)

"Object-oriented programming languages support encapsulation, thereby improving the ability of software to be reused, refined, tested, maintained, and extended. The full benefit of this support can only be realized if encapsulation is maximized during the design process. […] design practices which take a data-driven approach fail to maximize encapsulation because they focus too quickly on the implementation of objects." (Rebecca Wirfs-Brock, "Object-oriented Design: A. responsibility-driven approach", 1989)

"Our experience with designing and analyzing large and complex software-intensive systems has led us to recognize the role of business and organization in the design of the system and in its ultimate success or failure. Systems are built to satisfy an organization's requirements (or assumed requirements in the case of shrink-wrapped products). These requirements dictate the system's performance, availability, security, compatibility with other systems, and the ability to accommodate change over its lifetime. The desire to satisfy these goals with software that has the requisite properties influences the design choices made by a software architect." (Len Bass et al, "Software Architecture in Practice", 1998)

"A test that reveals a bug has succeeded, not failed." (Boris Beizer, "Software Testing Techniques", 1990)

"Failure to initialize a shared object can lead to data-dependent bugs caused by residues from a previous use of that object by another transaction. Note that the culprit transaction is long gone when the bug's symptoms are discovered. Because the effect of corruption of dynamic data can be arbitrarily far removed from the cause, such bugs are among the most difficult to catch." (Boris Beizer, "Software Testing Techniques", 1990)

"Testing proves a programmer’s failure. Debugging is the programmer’s vindication." (Boris Beizer, "Software Testing Techniques", 1990)

"The picture of digital progress that so many ardent boosters paint ignores the painful record of actual programmers’ epic struggles to bend brittle code into functional shape. That record is of one disaster after another, marking the field’s historical time line like craters. Anyone contemplating the start of a big software development project today has to contend with this unfathomably discouraging burden of experience. It mocks any newcomer with ambitious plans, as if to say, What makes you think you’re any different?" (Scott Rosenberg, "Dreaming in Code", 2007)

"As a general rule, implementations do not just spontaneously combust. Failures tend to stem from the aggregation of many issues. Although some issues may have been known since the early stages of the project (for example, the sales cycle or system design), implementation teams discover the majority of problems during the middle of the implementation, typically during some form of testing." (Phil Simon, "Why New Systems Fail: An Insider’s Guide to Successful IT Projects", 2010)

"Understanding the causes of system failures may help organizations avoid them, although there are no guarantees." (Phil Simon, "Why New Systems Fail: An Insider’s Guide to Successful IT Projects", 2010)

"But the history of large systems demonstrates that, once the hurdle of stability has been cleared, a more subtle challenge appears. It is the challenge of remaining stable when the rules change. Machines, like organizations or organisms, that fail to meet this challenge find that their previous stability is no longer of any use. The responses that once were life-saving now just make things worse. What is needed now is the capacity to re-write the procedure manual on short notice, or even (most radical change of all) to change goals." (John Gall, "The Systems Bible: The Beginner's Guide to Systems Large and Small"[Systematics 3rd Ed.], 2011)

"Experts in the 'Problem' area proceed to elaborate its complexity. They design complex Systems to attack it. This approach guarantees failure, at least for all but the most pedestrian tasks. The problem is a Problem precisely because it is incorrectly conceptualized in the first place, and a large System for studying and attacking the Problem merely locks in the erroneous conceptualization into the minds of everyone concerned. What is required is not a large System, but a different approach. Trying to design a System in the hope that the System will somehow solve the Problem, rather than simply solving the Problem in the first place, is to present oneself with two problems in place of one." (John Gall, "The Systems Bible: The Beginner's Guide to Systems Large and Small"[Systematics 3rd Ed.], 2011)

"Pragmatically, it is generally easier to aim at changing one or a few things at a time and then work out the unexpected effects, than to go to the opposite extreme. Attempting to correct everything in one grand design is appropriately designated as Grandiosity. […] A little Grandiosity goes a long way. […] The diagnosis of Grandiosity is quite elegantly and strictly made on a purely quantitative basis: How many features of the present System, and at what level, are to be corrected at once? If more than three, the plan is grandiose and will fail." (John Gall, "The Systems Bible: The Beginner's Guide to Systems Large and Small"[Systematics 3rd Ed.], 2011)

"Systems with high risks must be tested more thoroughly than systems that do not generate big losses if they fail. The risk assessment must be done for the individual system parts, or even for single error possibilities. If there is a high risk for failures by a system or subsystem, there must be a greater testing effort than for less critical (sub)systems. International standards for production of safety-critical systems use this approach to require that different test techniques be applied for software of different integrity levels." (Andreas Spillner et al, "Software Testing Foundations: A Study Guide for the Certified Tester Exam" 4th Ed., 2014)

"The real bug here is that the design of the system even permits this class of bug. It is unconscionable that someone designing a critical piece of security infrastructure would design the system in such a way that it does not fail safe." (Jamie Zawinski, 2014)

"A fault is usually defined as one component of the system deviating from its spec, where - as a failure is when the system as a whole stops providing the required service to the user. It is impossible to reduce the probability of a fault to zero; therefore it is usually best to design fault-tolerance mechanisms that prevent faults from causing failures." (Martin Kleppmann, "Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems", 2015)

"A key contribution of DevOps was to raise awareness of the problems lingering in how teams interacted (or not) across the delivery chain, causing delays, rework, failures, and a lack of understanding and empathy toward other teams. It also became clear that such issues were not only happening between application development and operations teams but in interactions with many other teams involved in software delivery, like QA, InfoSec, networking, and more." (Matthew Skelton & Manuel Pais, "Team Topologies: Organizing Business and Technology Teams for Fast Flow", 2019)
