14 September 2024

Data Management: Data Governance (Part II: Heroes Die Young)

Data Management Series

In the call for action, some organizations tend to idealize and overload the purpose and image of the main actors in data governance by calling them heroes. Heroes are people who fight with all their being for a goal they believe in, and occasionally they pay the supreme tribute. Of course, the image of the hero is idealized and many other aspects are ignored, though such images sell ideas and ideals.

Organizations might need heroes and heroic deeds to change the status quo, but the heroism doesn't necessarily pay off for the "heroes"! Sometimes, changing the status quo requires a considerable effort. It can be people's resistance to the new, to the demands, to the ideas propagated, especially when these are not clearly explained and executed. It can be the incommensurable distance between the "AS IS" and the "TO BE" images, especially when no clear path is in sight. It can be the lack of resources (e.g., time, money, people, tools), knowledge, understanding or skillset that makes the effort difficult.

Unfortunately, such initiatives favor action over adequate strategy, planning and understanding of the overall context. The call to do something creates waves of actions and reactions which, in the organizational context, can lead to storms and even extreme behavior that ranges from resistance to the new to heroic deeds. Finding a few messages that support the call for action can help, though they can't replace the various factors critical for success.

Leading an organization onto a new path requires a well-defined, realistic strategy, respectively adequate tactical and operational planning that reflects the organization's specific needs, knowledge and capabilities. Just demanding that people do their best is not enough, and it's especially in this context that heroism tends to appear. Unfortunately, the whole weight falls on the shoulders of the people chosen as actors in the fight. Ideally, it should be possible to spread the weight over a broader basis that can be considered the foundation for the new.

The "heroes" metaphor is idealized and the negative outcome probably exaggerated, though extreme situations do occur in organizations when decisions, planning, execution and expectations are far from ideal. Ideal situations are found only in books and seldom in practice!

The management demands and the people execute, much like in the army, though by contrast people need to understand the reasoning behind what they are doing. Proper execution requires skillset, understanding, training, support, tools and the right resources for the right job. Just relying on people's professionalism and effort is not enough and is suboptimal, but this is what many organizations seem to do!

Organizations tend to respond to the various barriers or challenges with more resources or pressure instead of analyzing and depicting the situation adequately and eventually changing the strategy, tactics or operations accordingly. It's also difficult to do this as long as an organization doesn't have the capabilities and practices of self-checking, self-introspection, self-reflection, etc. Even if it sounds a bit exaggerated, an organization must know itself to overcome the various challenges. Regular meetings, KPIs and other metrics give the illusion of control when self-control is needed.

Things don't have to be that complex, even if managing data governance is a complex endeavor. Small or midsized organizations are in theory more capable of handling complexity because they can be more agile, have a more robust structure, and the flow of information and knowledge faces fewer barriers, respectively has a shorter distance to cover. One can probably appeal to the laws and characteristics of networks to understand more about the deeper implications and about how solutions can be implemented in more complex setups.

Previous Post <<||>> Next Post

01 September 2024

Data Management: Data Governance (Part I: No Guild of Heroes)

Data Management Series

Data governance appeared as a topic around the 1980s, though it gained popularity in the early 2000s [1]. Twenty years later, organizations still miss the mark, respectively fail to understand and implement it in a consistent manner. As usual, the reasons for failure are multiple, ranging from misunderstanding what governance is all about to poor implementation of methodologies and inadequate management or leadership.

Moreover, methodologies tend to idealize the various aspects, and that is not what organizations need; what they need is pragmatism. For example, data governance is not about heroes and heroism [2], even if such wording can give the impression that heroic actions are involved, which is not the case! Actions for the sake of action don't necessarily lead to change by themselves. Organizations are in general good at creating meaningless action without results, especially when people busy themselves with it, miss or ignore the mark. Big organizations are very good at generating actions without effects.

People do talk to each other, though they try to solve their own problems and optimize their own areas without necessarily thinking about the bigger picture. The problem is not necessarily communication or a lack of insight into business issues; people do communicate and they know the issues even without a business impact assessment. The challenge usually lies in convincing upper management that the effort needs to be consolidated and supported, respectively that the needed resources must be made available.

Probably one of the issues with data governance is the attempt to create yet another structure in the organization focused on quality, a structure which has good chances of failing, and unfortunately often does fail. Many issues appear when the structure gains weight and becomes a separate entity instead of being the backbone of the organization.

As soon as organizations separate data governance from the key users, management and the other important decision-makers in the organization, it takes on a life of its own and has good chances of diverging from the initial construct. Then organizations need "alignment" and probably other big words to coordinate the effort. Such constructs can work, but they are suboptimal because the forces will always pull in different directions.

Making each manager and the upper management responsible for governance is probably the way to go, though they’ll need the time for it. In theory, this can be achieved when many of the issues are solved at the lower level, when automation and further aspects allow them to supervise things, rather than hiding behind every issue. 

When too much micromanagement is involved, people tend to busy themselves with topics rather than solve the issues they are confronted with. The actual actors need to be empowered to make decisions and optimize their work when needed. Kaizen, the philosophy of continuous improvement, has proven that it works when applied correctly. People will need the knowledge, skills, time and support to do it, though. One of the dangers, however, is that this becomes a full-time responsibility, which tends to create a separate entity again.

The challenge for organizations lies probably in the friction between where they are and what they must do to move forward toward the various objectives. Moving in small rapid steps is probably the way to go, though each person must be aware when something doesn’t work as expected and react. That’s probably the most important aspect. 

So, the more functions are created that diverge from the actual organization, the higher the chances of failure. Unfortunately, failure becomes visible only in the later phases, and thus self-awareness, self-control and other similar "qualities" are needed, like small actors that keep the system in check and react whenever it is needed. Ideally, the employees are the best resources for reacting whenever something doesn't work as designed.

Previous Post <<||>> Next Post 

Resources:
[1] Wikipedia (2023) Data Management [link]
[2] Tiankai Feng (2023) How to Turn Your Data Team Into Governance Heroes [link]


13 June 2024

Business Intelligence: One Person Can’t Learn or Do Everything in Microsoft Fabric

Business Intelligence Series

Today’s Explicit Measures webcast [1] considered an article written by Kurt Buhler (The Data Goblins): [Microsoft] "Fabric is a Team Sport: One Person Can’t Learn or Do Everything" [2]. It’s a well-written article that deserves some thought as there are several important points made. I can’t say I agree with the full extent of some statements, even if some disagreements are probably just a matter of semantics.

My main disagreement starts with the title "One Person Can't Learn or Do Everything". As clarified in the webcast's chat, the author defines "everything" as an umbrella for "all the capabilities and experiences that comprise Fabric including both technical (like Power BI) or non-technical (like adoption data literacy) and everything in between" [1].

For me “everything” is relative and considers a domain's core set of knowledge, while "expertise" (≠ "mastery") refers to the degree to which a person can use the respective knowledge to build back-to-back solutions for a given area. I’d say that it becomes more and more challenging for beginners or average data professionals to cover the core features. Moreover, I’d separate the non-technical skills because then one will also need to consider topics like Data, Project, Information or Knowledge Management.

There are different levels of expertise, and they can vary in depth (specialization) or breadth (covering multiple areas), respectively depend on previous experience (whether one worked with similar technologies). Usually, there’s a minimum of requirements that need to be covered for being considered as expert (e.g. certification, building a solution from beginning to the end, troubleshooting, performance optimization, etc.). It’s also challenging to roughly define when one’s expertise starts (or ends), as there are different perspectives on the topics. 

Conversely, the term "expert" is in general misused extensively, sometimes even with mischievous intent. An "expert" is usually considered to be an external consultant or a person who got certified in an area, even if that person may not be able to build solutions that address a customer's needs.

Even data professionals with many years of experience can be overwhelmed by the volume of knowledge, especially when one considers the different experiences available in MF, respectively the volume of new features released monthly. Conversely, expertise can be considered with respect to only one or more MF experiences, or for one area within a certain layer. A lot of the knowledge can be transferred from other areas – writing SQL and complex database objects, modelling (enterprise) semantic layers, programming in Python, R or Power Query, building data pipelines, managing SQL databases, etc.

Besides the standard documentation, training sessions and some reference architectures, Microsoft has also made available some labs and other material, which helps in discovering the features available, though it doesn't teach people how to build complete solutions. More important than explicitly declaring the role-based audience, I find, is the creation of learning paths for the various roles.

During the past 6-7 months I've spent on average 2 days per week learning MF topics. My problem is not the documentation but the lack of maturity of some features, the gaps in functionality, identifying the respective gaps, knowing what and when new features will be made available. The fact that features are made available or changed while learning makes the process more challenging. 

My goal is to be able to provide back-to-back solutions and I believe that’s possible, even if I might not consider all the experiences available. During the past 22 years, at least until MF, I could build complete BI solutions starting from requirements elicitation, data extraction, modeling and processing for data consumption, respectively data consumption for the various purposes. At least this was the journey of a Software Engineer into the world of data. 

References:
[1] Explicit Measures (2024) Power BI tips Ep.328: Microsoft Fabric is a Team Sport (link)
[2] Data Goblins (2024) Fabric is a Team Sport: One Person Can’t Learn or Do Everything (link)

03 April 2024

Business Intelligence: The Top 5 Pains of a BI/Analytics Manager

Business Intelligence Series

1) Business Strategy

A business strategy is supposed to define an organization's mission, vision, values, direction, purpose, goals, objectives, respectively the roadmap, alternatives, capabilities considered to achieve them. All this information is needed by the BI manager to sketch the BI strategy needed to support the business strategy. 

Without them, the BI manager must extrapolate; it is one thing to base one's decisions on a clearly stated and communicated business strategy, and another to work with vague declarations full of uncertainty. In the latter case, it's like building castles in the air and expecting them to have a solid foundation. It may work, as many BI requirements are common across organizations, but it can also become a disaster.

2) BI/Data Strategy

Organizations usually differentiate between the BI and the data strategy because different driving forces and needs are involved, even if there are common goals, needs and opportunities that must be considered from both perspectives. When there's no data strategy available, the BI manager is forced either to address many data-related topics as well (e.g. data culture, data quality, metadata management, data governance), or to ignore them with all the consequences deriving from this.

A BI strategy is an extension of the business, data and IT strategies into the BI knowledge areas. Unfortunately, few organizations give it the required attention. Besides the fact that the BI strategy breaks down the business strategy from its perspective, it also adds its own goals and objectives which are ideally aligned with the ones from the other strategies. 

3) Data Culture

Data culture is "the collective beliefs, values, behaviors, and practices of an organization’s employees in harnessing the value of data for decision-making, operations, or insight". Therefore, data culture is an enabler which, when the many aspects are addressed adequately, can have a multiplier effect for the BI strategy and its execution. Conversely, when basic data culture assumptions and requirements aren't addressed, the interrelated issues resulting from this can prove to be a barrier for the BI projects, operations and strategy. 

As mentioned before, an organization’s (data) culture is created, managed, nourished, and destroyed through leadership. If the other leaders aren't playing along, each challenge related to data culture and BI will become a concern for the BI manager.

4) Managing Expectations 

A business has great expectations from the investment in its BI infrastructure, especially when the vendors promise competitive advantage, real-time access to data and insights, self-service capabilities, etc. Even if these promises are achievable, they represent a potential that needs to be harnessed and there are several premises that need to be addressed continuously. 

Some BI strategies and/or projects address these expectations from the beginning, though there are many organizations that ignore or don't give them the required importance. Unfortunately, these expectations (re)surface when people start using the infrastructure and this can easily become an acceptance issue. It's the BI manager's responsibility to ensure expectations are managed accordingly.

5) Building the Right BI Architecture

For the BI architecture, the main driving forces are the shifts in technologies from single servers to distributed environments, from relational tables and data warehouses to delta tables and delta lakes built with the data mesh's principles and product-orientation in mind, which increase the overall complexity considerably. Vendors' and data professionals' vision of what the architectures of the future will look like still has major milestones and challenges to surpass.

Therefore, organizations are forced to explore the new architectures and the opportunities they bring, however this involves considerable effort, skilled resources and more iterations. Conversely, ignoring these trends might prove to be a lost opportunity and, eventually, duplicated effort in the long term.

20 March 2024

Data Management: Master Data Management (Part I: Understanding Integration Challenges) [Answer]

Data Management Series

Answering Piethein Strengholt's post [1] on Master Data Management's (MDM) integration challenges; Strengholt is the author of "Data Management at Scale".

Master data can be managed within individual domains, though the boundaries must be clearly defined and some coordination is needed. Attempting to partition the entities based on domains doesn't always work. The partition needs to be performed at attribute level, though even then there might be exceptions involved (e.g. some Products are only for Finance to use). One can then identify attributes inside the system to create the boundaries.
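
As a rough illustration of attribute-level partitioning (the attribute names and owning domains below are hypothetical, not taken from the post or from any specific system), each attribute of a master data entity can be mapped to the domain that owns it, and domain-specific views derived from that mapping:

```python
# Hypothetical attribute-to-domain ownership for a Product master data record.
# Attribute names and owning domains are illustrative assumptions only.
ATTRIBUTE_OWNERSHIP = {
    "product_number": "Engineering",
    "description": "Engineering",
    "unit_of_measure": "Logistics",
    "safety_stock": "Logistics",
    "standard_cost": "Finance",
    "profit_center": "Finance",
}

def domain_view(record: dict, domain: str) -> dict:
    """Return only the attributes of a master data record owned by the given domain."""
    return {attr: value for attr, value in record.items()
            if ATTRIBUTE_OWNERSHIP.get(attr) == domain}

product = {
    "product_number": "P-1000",
    "description": "Hex bolt M8",
    "unit_of_measure": "PCS",
    "safety_stock": 500,
    "standard_cost": 0.12,
    "profit_center": "PC-10",
}

print(domain_view(product, "Finance"))    # {'standard_cost': 0.12, 'profit_center': 'PC-10'}
print(domain_view(product, "Logistics"))  # {'unit_of_measure': 'PCS', 'safety_stock': 500}
```

The exceptions mentioned above would then be handled as explicit overrides on top of such a mapping.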

MDM is simple if you have the right systems, processes, procedures, roles and data culture in place. Unfortunately, people make it too complicated – oh, we need a nice shiny system for managing the data before they are entered in the ERP or other systems, we need a system for storing and maintaining the metadata, and another system for managing the policies, and the story goes on. The lack of systems is given as a reason why people make no progress. Moreover, people will want to integrate the systems, increasing the overall complexity of the ecosystem.

The data should be cleaned in the source systems and assessed against the same. If that's not possible, then you have the wrong system! A set of well-built reports can make data assessment possible. 
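
As a minimal sketch of what such a data assessment report could look like (illustrative Python/pandas only; the entity, column names and completeness rules are assumptions, not taken from the post):

```python
import pandas as pd

# Hypothetical extract of customer master data from a source system.
customers = pd.DataFrame({
    "customer_id":  ["C001", "C002", "C003", "C004"],
    "name":         ["Alpha GmbH", "Beta AG", None, "Delta SRL"],
    "country":      ["DE", "DE", "US", None],
    "payment_term": ["NET30", None, "NET60", "NET30"],
})

# Simple completeness checks that could feed a data assessment report.
report = pd.DataFrame({
    "customer_id":          customers["customer_id"],
    "missing_name":         customers["name"].isna(),
    "missing_country":      customers["country"].isna(),
    "missing_payment_term": customers["payment_term"].isna(),
})
report["has_issue"] = report[["missing_name", "missing_country", "missing_payment_term"]].any(axis=1)

# Records that need to be cleaned directly in the source system.
print(report[report["has_issue"]])
```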

The metadata and policies can be maintained in Excel (and stored in SharePoint), in SharePoint itself, or in a similar system that supports versioning. Pragmatic solutions can also be found for other topics.

ERP systems allow us to define workflows and enable a master data record to be published only when the information is complete, though there will always be exceptions (e.g., a Purchase Order must be sent today). Such exceptions make people circumvent the MDM systems with all the issues deriving from this.

Adding an MDM system within an architecture tends to increase the complexity of the overall infrastructure and create more bottlenecks. Occasionally, it just replicates the structures existing in the target system(s).

Integrations are supposed to reduce the effort, though in the past 20 years I never saw an integration work without issues, not even where MDM is concerned. One of the main issues is that the solutions just synchronized the data without considering the processual dependencies, and sometimes not even the referential dependencies. The time needed for troubleshooting the integrations can easily exceed the time needed for importing the data manually over an upload mechanism.

To make the integration work, the MDM system will end up duplicating all the validation available in the target system(s). This can make sense when a considerable volume of master data is created daily or weekly. Native connectors simplify the integrations, especially when they can handle the errors transparently and allow records to be modified manually, though the issues start as soon as the target system is extended with more attributes or other structures.

If an organization has an MDM system, then all the master data should come from the MDM. As soon as a bidirectional synchronization is used (and other integrations might require this), Pandora's box is opened. One can define hard rules, though again, there are always exceptions in which manual intervention is needed.

Attempting an integration of reference data is not recommended. ERP systems can have hundreds of such entities. Some organizations tend to have a golden system (a copy of production) with all the reference data. It works for some time, until people realize that the solution is expensive and time-consuming.

MDM systems do make sense in certain scenarios, though to get the integrations right can involve a considerable effort and certain assumptions and requirements must be met.

Previous Post <<||>> Next Post

References:
[1] Piethein Strengholt (2023) Understanding Master Data Management’s Integration Challenges (link)


21 March 2021

Strategic Management: The Impact of New Technologies III (Checking the Vital Signs)

Strategic Management

An organization which went through a major change, like the replacement of a strategic system (e.g. an ERP/BI implementation), needs to go through a period of attentive supervision to address the inherent issues, which ideally need to be handled as they arise to minimize their future effects. Some organizations might even go through a convalescence period, which risks prolonging itself if the appropriate remedies aren't found. Therefore, one needs an entity who has the skills to recognize the symptoms, understand what's happening and why, respectively to identify the appropriate actions.

Given technologies' multi-layered complexity and the volume of knowledge needed to understand them, the role of the doctor can seldom be taken by one person. Moreover, the patient is an organization, and each person in the organization usually has only local knowledge about the patient. The needed knowledge is dispersed through the organization, and one needs to tap into that knowledge, identify the people close to the technologies and business areas, respectively allow such people to exchange information on a regular basis.

The people who should know the organization best are in theory the management, however they are usually too far away from the technologies and often too busy with management topics. IT professionals are close to the technologies, though sometimes too far away from the patient. The users have too narrow an overview, while for logistical and economic reasons the number of people involved should be kept to a minimum. A compromise is to designate one person from each business area who works with any of the strategic systems, and to ensure that they have the technical and business knowledge required. It's nothing but the key-user concept, though for it to work the key-users need not only knowledge but also the empowerment to act when the symptoms appear.

Big organizations also have a product owner for each application who supervises the application through its entire lifecycle, and who needs to coordinate with IT, the business and the service providers. This is probably a good idea in order to ensure that the ROI is reached over time, respectively that the needs of the system are considered within the IT operations context. In small organizations, the role can be taken by a technical or a business resource with deeper skills than the average user, usually a key-user. However, unless joined with the key-user role, the product owner's focus will be the product and seldom the business themes.

The issues that need to be overcome after major changes are usually cross-functional, making it imperative for people to work together and find solutions. Unfortunately, it's also in human nature to wait until the issues are big enough to get proper attention. Unless the key-users already have time allocated for such topics, the issues will be lost in the heap of operational and tactical activities. This time must be allocated for all key-users and for the technical resources needed to support them.

Some organizations build temporary working parties (groups of experts working together to achieve specific goals) or similar groups. However, the status of such a group needs to be permanent if the organization wants to keep its health continuously in check and to build the needed expertise and awareness about occurred or potential issues. Centers of excellence/expertise (CoE) or competency centers (CC) are such working groups with a permanent status, having defined roles, responsibilities and processes for supporting and promoting the effective use of technologies within the organization, respectively for monitoring and systematically addressing the risks and opportunities associated with them.

There’s also the null hypothesis, doing nothing, relying solely on employees’ professionalism, though without defined responsibility, accountability and empowerment, it can get messy.

Previous Post <<||>> Next Post

20 March 2021

Business Intelligence: New Technologies, Old Challenges II (ETL vs. ELT)

 

Business Intelligence

Data lakes and similar cloud-based repositories drove the requirement of loading the raw data before performing any transformations on it. At least that's the approach the new wave of ELT (Extract, Load, Transform) technologies use to handle analytical and data integration workloads, which is probably recommendable for the mentioned cloud-based contexts. However, ELT technologies are especially relevant when one needs to handle data with high velocity, variance, validity or different values of truth (aka big data). This is because they allow processing the workloads over architectures that can be scaled with the workloads' demands.
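
As a minimal sketch of the difference in ordering, in illustrative Python only (the functions are simple placeholders, not the API of any particular tool): ETL transforms the data before loading it into the target, while ELT loads the raw data first and leaves the transformations to the target platform:

```python
# Illustrative placeholders standing in for real connectors and processing engines.
def extract(source):
    """Read raw records from a source system."""
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "n/a"}]

def transform(rows):
    """Cleanse/convert the records (before or after loading, depending on the approach)."""
    return [row for row in rows if row["amount"] != "n/a"]

def load(rows, target):
    """Write the records into a warehouse, lake or similar repository."""
    print(f"loaded {len(rows)} rows into {target}")
    return rows

# ETL: transform first, then load the prepared subset into the warehouse.
load(transform(extract("erp")), target="warehouse")

# ELT: load everything raw into the lake, transform later inside the target platform.
raw = load(extract("erp"), target="data_lake")
curated = transform(raw)  # in practice executed by the target platform's own engine
```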

This is probably the most important aspect, even if there can be further advantages, like using built-in connectors to a wide range of sources or implementing complex data flow controls. The ETL (Extract, Transform, Load) tools have the same capabilities, maybe reduced to certain data sources, though their newer versions seem to bridge the gap.

One of the most stressed advantages of ELT is the possibility of having all the (business) data in the repository, though this is not a technological advantage. The same can be obtained via ETL tools, even if this might involve, depending on the case, a bigger effort, an effort that depends on the functionality existing in each tool. It's true that ETL solutions have a narrower scope by loading a subset of the available data, or that transformations are made before loading the data, though this depends on the scope considered while building the data warehouse or data mart, respectively on the design of the ETL packages, and both are a matter of choice, choices that can be traced back to business requirements or technical best practices.

Some of the advantages seen are context-dependent – the context in which the technologies are put, respectively the problems are solved. It is often imputed to ETL solutions that the available data are already prepared (aggregated, converted) and new requirements will drive additional effort. On the other side, in ELT-based solutions all the data are made available and eventually further transformed, but also here the level of transformations made depends on specific requirements. Independently of the approach used, the data are still available if needed, respectively involve certain effort for further processing.

Building usable and reliable data models is dependent on good design, and in the design process reside the most important challenges. In theory, some think that in ETL scenarios the design is done beforehand though that’s not necessarily true. One can pull the raw data from the source and build the data models in the target repositories.

Data conversion and cleaning are needed under both approaches. In some scenarios it is ideal to do this upfront, minimizing the effect these processes have on the data's usage, while in other scenarios it's helpful to address them later in the process, with the risk that each project will address them differently. This can become an issue and should ideally be addressed by design (e.g. by building an intermediate layer) or at least organizationally (e.g. by enforcing best practices).

Claiming that ELT is better just because the data are true (being in raw form) can be taken only as a marketing slogan. The degree of truth the data have depends on the way the data reflect the business' processes and the way the data are maintained, while their quality is judged entirely against their intended use. Even if raw data allow more flexibility in handling the various requests, the challenges involved in processing them can be neglected only by accepting the consequences that follow from this.

Looking at the analytics and data integration cloud-based technologies, they seem to allow both approaches, thus building optimal solutions relying on professionals’ wisdom of making appropriate choices.

Previous Post <<||>> Next Post

Business Intelligence: New Technologies, Old Challenges I (Introduction)

Business Intelligence

Each important technology has the potential of creating divides between the specialists in a given field. This aspect is more evident in data-driven fields like BI/Analytics or Data Warehousing. Data professionals (engineers, scientists, analysts, developers) skilled only in the new wave of technologies tend to disregard the former technologies and the role they play in the data landscape. The argument for such behavior is rooted in the belief that a new technology is better and can solve any problem better than previous technologies did. It's a kind of mirage professionals and customers can easily fall for.

Being bigger or faster, or having new functionality, doesn't make a tool the best choice by default. The choice must be rooted in the problem to be solved and the set of requirements it comes with. Just because a vibratory rammer is a newer technology, is faster and can apply more pressure, this doesn't mean that it will replace a hammer. Where a certain type of power is needed, the vibratory rammer might be the best tool, while for situations in which a minimum of power and probably more precision is needed, like driving in a nail, an adequately sized hammer will prove to be the better choice.

A technology is to be used in certain (business/technological) contexts, and even if contexts often overlap, the further details (aka requirements) should lead to the proper use of tools. It's among a professional's duties to be able to differentiate between contexts, requirements and the capabilities of the tools appropriate for each context. Therein partially resides a professional's mastery over his or her field of work and the ability to provide adequate solutions for customers' needs. Especially in IT, it's not enough to master the new tools; one must also have an understanding of the preceding tools, their usage contexts, capabilities and challenges.

From a historical perspective, each tool appeared to fill a demand, and even if it maybe didn't manage to fill it adequately, the experience obtained can prove to be valuable in one way or another. Otherwise, one risks reinventing the wheel or, more dangerously, repeating the failures of the past. Each new technology seems to provide a déjà-vu from this perspective.

Moreover, a new technology provides new opportunities and may require changing our way of thinking with respect to how the technology is used and to the processes or techniques associated with it. Knowledge of past technologies helps identify such opportunities more easily. How a tool is used is also a matter of skill, while its appropriate use and adoption imply an inherent learning curve. Having previous experience with similar tools tends to reduce the learning curve considerably, though hands-on learning is still necessary, and appropriate learning materials or tutoring are, depending on the case, needed for a smoother transition.

When it comes to the implementation of mature technologies, the challenges were seldom the technologies themselves but rather of a non-technical nature, ranging from poor understanding of or knowledge about the tools, their role and the implications they have for an organization, to an organization's maturity in leading projects. Even the most advanced technology can fail in the hands of non-experts. Experience can't be judged based only on the years spent in the field or the number of projects one worked on, but on the understanding acquired about implementation and usage challenges. These latter aspects seem to be widely ignored, even if they can make the difference between success and failure in a technology's implementation.

Ultimately, each technology is appropriate in certain contexts and a new technology doesn’t necessarily make another obsolete, at least not until the old contexts become obsolete.

Previous Post <<||>> Next Post

29 September 2020

Strategic Management: Simplicity V (ERP Implementations' Story I)

Strategic Management

ERP implementations are probably one of the most complex types of projects one deals with in the IT world, however their complexity seldom resides in the technologies themselves, but in the effort that needs to be made by organizations before, during and after the implementation. Through their transformative nature, ERP implementations have the potential of changing the whole organization if that potential is exploited accordingly, which is unfortunately not always the case. Therefore, the challenges aren't limited to managing a project or implementing a technology; they extend to managing change, and that usually happens or needs to happen at several levels.

Typically, the change is considered mainly at the IT infrastructure and processual levels, because at these levels most of the visible changes happen – that's what steals the show. For the whole project duration it's about replacing one or more legacy systems and making sure that the new infrastructure works as expected. The more an organization deviates from the standard, the more effort is needed, and this effort can exhaust an organization's resources to the degree that it will need some time to recover afterwards, financially, but maybe more importantly from a vital point of view.

Even if the technological and processual layers are important, as they form the foundation on which an organization builds, besides the financial and material flows there are also the data, informational and knowledge flows, which seem to be neglected. Quite often that's where the transformational potential resides. If an organization is not able to change these flows positively, in the long term the implementation will deal with problems people wished had been addressed much earlier, when the effort would have met the lowest resistance, respectively had the highest impact.

An ERP implementation involves the migration of data between source(s) and target(s), with the data requirements, including that of appropriate quality, being regarded with respect to the target system(s). As within the data migration steps the data are extracted from the various sources, enriched and prepared for import into the target system(s), there is the potential of bringing data quality to a level which would help the organization further. It's probably simplest to imagine the process as taking the data from one place, cleaning and enriching them to bring them to the needed form, and then putting them into the new system. It's a unique chance of improving data quality without touching the source or target system(s), while getting considerable value.

Unfortunately, many organizations' efforts to improve the quality of their data stop after the implementation. If there's no focus and there are no structures in place to continue the effort, sooner or later the data's quality will decrease despite the earlier efforts. Investing, for example, in a long-term data quality improvement or even a data management initiative might prove to be an exploratory and iterative process in which mistakes are made and the direction might need to be changed, though, as long as learning is involved, therein often resides the power of changing for the better.

When one talks about information there are two aspects to it: how an organization gets from data to actionable information that reaches the people who need it in time, respectively how information is further aggregated, recombined, shared and harnessed into knowledge. These are the first three layers of the knowledge (aka DIKW) pyramid, and an organization's real success story lies in how it can manage these flows together while increasing the value they provide for the organization. It's an effort that must start with the implementation itself, or even earlier, and continue after the implementation, as the organization sees fit.

Previous Post <<||>> Next Post

Written: Sep-2020, Last Reviewed: Mar-2024

21 May 2020

Project Management: Planning Correctly Misunderstood III

Mismanagement

One of the most misunderstood topics in Project Management seems to be that of planning, and this is probably because everyone has a good idea of what it means to plan an activity – we do it daily and most of the time (hopefully) we hit the bull's-eye (or have the impression we did). You must do this and that, you have that dependency, you must coordinate with a few people, you must first reach that milestone before going further, you do one step at a time, and so on. It's pretty easy, isn't it?

From a bird's eye view, project planning is like planning any other activity, though there are several important differences. The most important one is scale – the number of activities and resources involved, the level of coordination and communication as well as the quality with which these occur, the level of uncertainty and control, respectively manageability. All these create a complexity that is hardly manageable by just one person.

Another difference is the level of detail needed for the planning and the reachability of targets. Some believe that the plan needs to be done down to the lowest level of detail, which even if possible can prove to be an impediment to planning. A project's environment shares some important characteristics with a battlefield in terms of the complexity of interactions, their dynamics and the logistical requirements. Within an army's structure there are levels of organization that require different mindsets and levels of planning. A general thinks primarily at the strategic level, at which troops and actions are seen as aggregations at the level of abstraction that makes their organization and planning manageable. The strategy is devised, however, in collaboration with other generals and upper structures, while, having defined the strategic goals, the general must devise the tactics together with the immediate subalterns. In theory the project manager must regard the project from the same perspective. Thus result three levels of planning – strategic, done with the upper management; tactical, done with the team members; respectively logistical, done within the team. That's a way of breaking down the complexity and dividing the responsibilities within the project.

A project's final destination seems to have the character of a wish list more or less anchored in reality. From a technical point of view the target can be achievable, though in big projects the most important challenges are of an organizational nature – being able to allocate and coordinate effectively the resources needed by the project. The wish-like character is also reflected by the cost, scope and time triangle in respect to the expected quality – at some point in time one is forced to choose between two of them. On the other hand, there's the tendency to see the targets and milestones as fixed, with little room for deviation. One can easily forget that a strategic plan's purpose is to set the objectives and to identify the challenges and the possible lines of action, while a tactical plan's objective is to devise the means to reach those objectives. Bringing everything together can easily obscure the view and, in extremis, the plan loses its actuality as soon as it is created (and approved).

The most confusing aspect is probably the adherence of a plan to a given methodology – dicing a project, and thus a plan, to fit a methodology by blindly following the rules and principles imposed by it, instead of fitting the methodology to the project. Besides the fact that methodologies are best practices but not necessarily good practices, that is, not necessarily what fits an organization, they tend to be either too general, specifying the what and not the how, or too restrictive (as interpreted).

29 April 2019

Data Management: Data Integration (Part I: From Disintegration to Integration)

Data Management

No matter how tight the integration between the various systems or processes, there will always be gaps that need to be addressed in one way or another. The problems are in general caused by design errors rooted in the complexity of the logic in the integration layer or in the systems integrated. The errors can range from missing or incorrect validation rules, mappings and parameters to data quality issues.

A unidirectional integration involves distributing data from one system (aka publisher) to one or more systems (aka subscribers), while in bidirectional integrations systems can act as both publishers and subscribers, thus resulting in complex data flows with multiple endpoints. In the simplest integrations the records flow one-to-one between systems, though more complex scenarios can involve logic based on business rules, mappings and other types of transformations. The challenge is to reflect the states as needed by each system with minimal involvement from the users.

Typically it falls under the application/process owners' or key users' responsibility to make sure that the integration works smoothly. When the integration makes use of interface or staging tables, these can be used as a starting point for troubleshooting, however even then the troubleshooting can be troublesome and involve considerable manual effort. When possible, the data can be exported manually from the various systems and matched in Excel or similar solutions. This often leads to personal or departmental solutions that are hard to maintain, control and support.

A better approach is to automate the process by importing the data from the integrated systems at regular points in time into the same database (much like in a data warehouse), modeling the entities and the needed logic there, and reporting the differences. Even if this approach involves a small investment in the beginning and some optimization of the logic or performance over time, it can become a useful tool for troubleshooting the differences. Such solutions can be used successfully in multiple integration scenarios (e.g. web shop or ERP integrations).
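
A minimal sketch of such a difference report, assuming a hypothetical web shop/ERP order integration (illustrative Python/pandas only; the key and column names are assumptions, not from the post): the records imported from the two systems are matched on a common key and the mismatching or missing records are reported:

```python
import pandas as pd

# Hypothetical extracts of the same entity imported from two integrated systems.
webshop = pd.DataFrame({
    "order_no": ["1001", "1002", "1003"],
    "status":   ["shipped", "open", "open"],
    "amount":   [120.0, 75.5, 40.0],
})
erp = pd.DataFrame({
    "order_no": ["1001", "1002", "1004"],
    "status":   ["shipped", "invoiced", "open"],
    "amount":   [120.0, 75.5, 99.9],
})

# Match the records on the common key and keep rows that differ or exist only on one side.
merged = webshop.merge(erp, on="order_no", how="outer",
                       suffixes=("_webshop", "_erp"), indicator=True)
differences = merged[
    (merged["_merge"] != "both")
    | (merged["status_webshop"] != merged["status_erp"])
    | (merged["amount_webshop"] != merged["amount_erp"])
]

print(differences)  # basis for an entity-level difference report
```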

A set of reports for each entity can help identify the differences between the systems involved. Starting from the reported differences, the users can identify, categorize and devise specific countermeasures for the various issues. The best time to have such a solution is shortly before or during UAT. This allows making sure that the integration layer really works and helps correct the issues while they still have a small impact on the systems. Some integration issues might even lead to a postponement of the Go-Live. The second best time is when the first important issues are found, as these issues can be used as support for a business case for implementing this type of solution.

In general, it’s recommended to fix the problems in the integration layer and use the reports only for troubleshooting and for assuring that the integration runs smoothly. There are however situations in which the integration problems can’t be fixed without creating more issues. It’s the case in which multiple systems are involved and integrated over an integration bus.

One extreme approach, though not advisable, is to build a second integration to correct the issues of the first. This solution might work in theory, however the risk of multiplying the issues is really high, and the complexity of troubleshooting increases with the degree of dependency between the two integrations. It would be more advisable to rebuild the integration anew, however this approach also has its advantages and disadvantages.

The bottom line is that integration issues should be addressed while they are small, and that an automated solution for comparing the data can help in the process.

22 April 2019

Project Management: The Choice of Tools in Project Management

Mismanagement

"Beware the man of one book" (in Latin, "homo unius libri") is a warning generally attributed to Thomas Aquinas and has a twofold meaning. In its original interpretation it referred to people mastering a single chosen discipline, however the meaning degenerated into expressing the limitations of people who master just one book, and who thus have a limited toolset of perspectives, mental models or heuristics. This later meaning is better reflected in Abraham Maslow's adage: "If the only tool you have is a hammer, you tend to see every problem as a nail", as people tend to use the tools they are used to even in situations in which other tools are more appropriate.

People's and even organizations' stubbornness in using the same tools in totally different scenarios while expecting the same results, or in similar scenarios while expecting different results, is sometimes remarkable. It's true, Mathematics has proven that the same techniques can be used successfully in different areas, however a mathematician's universe and models are idealized and detached to a certain degree from reality, full of simplified patterns and never-ending approximations. In contrast, the universe of Software Development and Project Management has a texture of complex patterns with multiple levels of dependencies and constraints, constraints highly sensitive to the initial conditions.

Project Management has managed to successfully derive tools like methodologies, processes, procedures, best practices and guidelines to address the realities of projects, however their use in practice seems to be quite challenging. Probably the challenge resides in the stubbornness of not adapting the tools to the difficulties and tasks met. Even if the same phases and multiple similarities seem to exist, the process of building a house or another tangible artefact is quite different from the approaches used in the development and implementation of software.

Software projects have high variability and are often explorative in nature. The end product looks totally different from the initial scaffold. The technologies used come with opportunities and limitations that are difficult to predict in the planning phase. What seems to work on paper often doesn't work in practice, as the devil typically lies in the details. The challenges and limitations vary between industries, businesses and even projects within the same organization.

Even if for each project type there's a methodology more suitable than another, in the end project particularities might pull the choice in one direction or another. Business Intelligence projects, for example, can benefit from agile approaches, as these make it possible to better manage and deliver value by adapting the requirements to business needs as the project progresses. An agile approach almost always works better than a waterfall process. In contrast, ERP implementations seldom benefit from agile methodologies given the complexity of the project, which makes planning a real challenge, however this also depends on an organization's dynamics.

Especially when an organization has good experience with a methodology, there's the tendency to use the same methodology across all the projects run within the organization. This results in chopping a project down to fit an ideal form, which might be fine as long as the particularities of each project are adequately addressed. Even if a methodology is not appropriate for a given scenario, it doesn't mean it can't be used for it, however the cost, time, effort and the quality of the end results also enter the final equation.

In general, one can cope with complexity by leveraging a broader set of mental models, heuristics and tools, and this can be done only through experimentation, through training and exposing employees to new types of experience, through openness, and through adapting the tools to the challenges ahead.

28 November 2014

Systems Engineering: Issues (Just the Quotes)

"[Disorganized complexity] is a problem in which the number of variables is very large, and one in which each of the many variables has a behavior which is individually erratic, or perhaps totally unknown. However, in spite of this helter-skelter, or unknown, behavior of all the individual variables, the system as a whole possesses certain orderly and analyzable average properties. [...] [Organized complexity is] not problems of disorganized complexity, to which statistical methods hold the key. They are all problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole. They are all, in the language here proposed, problems of organized complexity." (Warren Weaver, "Science and Complexity", American Scientist Vol. 36, 1948)

"The fundamental problem today is that of organized complexity. Concepts like those of organization, wholeness, directiveness, teleology, and differentiation are alien to conventional physics. However, they pop up everywhere in the biological, behavioral and social sciences, and are, in fact, indispensable for dealing with living organisms or social groups. Thus a basic problem posed to modern science is a general theory of organization. General system theory is, in principle, capable of giving exact definitions for such concepts and, in suitable cases, of putting them to quantitative analysis." (Ludwig von Bertalanffy, "General System Theory", 1968)

"Technology can relieve the symptoms of a problem without affecting the underlying causes. Faith in technology as the ultimate solution to all problems can thus divert our attention from the most fundamental problem - the problem of growth in a finite system." (Donella A Meadows, "The Limits to Growth", 1972)

"When a mess, which is a system of problems, is taken apart, it loses its essential properties and so does each of its parts. The behavior of a mess depends more on how the treatment of its parts interact than how they act independently of each other. A partial solution to a whole system of problems is better than whole solutions of each of its parts taken separately." (Russell L Ackoff, "The future of operational research is past", The Journal of the Operational Research Society Vol. 30 (2), 1979)

"The world is a complex, interconnected, finite, ecological–social–psychological–economic system. We treat it as if it were not, as if it were divisible, separable, simple, and infinite. Our persistent, intractable global problems arise directly from this mismatch." (Donella Meadows,"Whole Earth Models and Systems", 1982)

"The real leverage in most management situations lies in understanding dynamic complexity, not detail complexity. […] Unfortunately, most 'systems analyses' focus on detail complexity not dynamic complexity. Simulations with thousands of variables and complex arrays of details can actually distract us from seeing patterns and major interrelationships. In fact, sadly, for most people 'systems thinking' means 'fighting complexity with complexity', devising increasingly 'complex' (we should really say 'detailed') solutions to increasingly 'complex' problems. In fact, this is the antithesis of real systems thinking." (Peter M Senge, "The Fifth Discipline: The Art and Practice of the Learning Organization", 1990)

"Systemic problems trace back in the end to worldviews. But worldviews themselves are in flux and flow. Our most creative opportunity of all may be to reshape those worldviews themselves. New ideas can change everything." (Anthony Weston, "How to Re-Imagine the World", 2007)

"All forms of complex causation, and especially nonlinear transformations, admittedly stack the deck against prediction. Linear describes an outcome produced by one or more variables where the effect is additive. Any other interaction is nonlinear. This would include outcomes that involve step functions or phase transitions. The hard sciences routinely describe nonlinear phenomena. Making predictions about them becomes increasingly problematic when multiple variables are involved that have complex interactions. Some simple nonlinear systems can quickly become unpredictable when small variations in their inputs are introduced." (Richard N Lebow, "Forbidden Fruit: Counterfactuals and International Relations", 2010)

"The problem of complexity is at the heart of mankind's inability to predict future events with any accuracy. Complexity science has demonstrated that the more factors found within a complex system, the more chances of unpredictable behavior. And without predictability, any meaningful control is nearly impossible. Obviously, this means that you cannot control what you cannot predict. The ability ever to predict long-term events is a pipedream. Mankind has little to do with changing climate; complexity does." (Lawrence K Samuels, "The Real Science Behind Changing Climate", 2014)

"Because the perfect system cannot be designed, there will always be weak spots that human ingenuity and resourcefulness can exploit." (Paul Gibbons, "The Science of Successful Organizational Change",  2015)


25 December 2012

Project Management: Project Failure (Just the Quotes)

"Rushing into action, you fail.
Trying to grasp things, you lose them.
Forcing a project to completion,
you ruin what was almost ripe."
(Lao Tzu, "Tao Te Ching", cca. 6th-century BC)

"In many ways, project management is similar to functional or traditional management. The project manager, however, may have to accomplish his ends through the efforts of individuals who are paid and promoted by someone else in the chain of command. The pacing factor in acquiring a new plant, in building a bridge, or in developing a new product is often not technology, but management. The technology to accomplish an ad hoc project may be in hand but cannot be put to proper use because the approach to the management is inadequate and unrealistic. Too often this failure can be attributed to an attempt to fit the project to an existing management organization, rather than molding the management to fit the needs of the project. The project manager, therefore, is somewhat of a maverick in the business world. No set pattern exists by which he can operate. His philosophy of management may depart radically from traditional theory." (David I Cleland & William R King, "Systems Analysis and Project Management", 1968)

"But the greater the primary risk, the safer and more careful your secondary assumptions must be. A project is only as sound as its weakest assumption, or its largest uncertainty." (Robert Heller, "The Naked Manager: Games Executives Play", 1972)

"Faced with a decision, always ask one implacable question: If this project fails, if the worst comes to the worst, what will be the result? If the answer is total corporate disaster, drop the project. If the worst possible outcome is tolerable, say, break-even, the executive has the foundation of all sound decision making - a fail-safe position." (Robert Heller, "The Naked Manager: Games Executives Play", 1972)

"The Eighth Truth of Management is: if you are doing something wrong, you will do it badly. The reverse of this truth is that, if your decision is blindingly right, you will execute it well - or appear to do so, which is much the same thing. But any executive can massacre his own nonsensical project." (Robert Heller, "The Naked Manager: Games Executives Play", 1972)

"Yet intelligent men plump for one project rather than another on the strength of a difference of a few decimal points in the rate of return calculated over the next decade. All such mind-stretching calculation comes under the lash of the Seventh Truth of Management: if you need sophisticated calculations to justify an action, it is probably wrong (the sophisticated calculations, anyway, are all too often based on simple false assumptions)." (Robert Heller, "The Naked Manager: Games Executives Play", 1972)

"Poor management can increase software costs more rapidly than any other factor. Particularly on large projects, each of the following mismanagement actions has often been responsible for doubling software development costs." (Barry Boehm, "Software Engineering Economics", 1981)

"[…] the longer one works on […] a project without actually concluding it, the more remote the expected completion date becomes. Is this really such a perplexing paradox? No, on the contrary: human experience, all-too-familiar human experience, suggests that in fact many tasks suffer from similar runaway completion times. In short, such jobs either get done soon or they never get done. It is surprising, though, that this common conundrum can be modeled so simply by a self-similar power law." (Manfred Schroeder, "Fractals, Chaos, Power Laws Minutes from an Infinite Paradise", 1990)

"Even when you have skilled, motivated, hard-working people, the wrong team structure can undercut their efforts instead of catapulting them to success. A poor team structure can increase development time, reduce quality, damage morale, increase turnover, and ultimately lead to project cancellation." (Steve McConnell, "Rapid Development", 1996)

"Software projects fail for one of two general reasons: the project team lacks the knowledge to conduct a software project successfully, or the project team lacks the resolve to conduct a project effectively." (Steve C McConnell, "Software Project Survival Guide", 1997)

"Today, excellent companies realize that project failures have more to do with behavioral shortcomings - poor employee morale, negative human relations, low productivity, and lack of commitment." (Harold Kerzner, "In search of excellence in project management", 1998)

"Success in all types of organization depends increasingly on the development of customized software solutions, yet more than half of software projects now in the works will exceed both their schedules and their budgets by more than 50%." (Barry Boehm, "Software Cost Estimation with Cocomo II", 2000)

"Choosing a proper project strategy can mean the difference between success and failure." (James P Lewis, "Project Planning, Scheduling, and Control" 3rd Ed., 2001)

"Note that a project always begins as a concept, and a concept is usually a bit fuzzy. Our job as a team is to clarify the concept, to turn it into a shared understanding that the entire team will accept. It is failure to do this that causes many project failures." (James P Lewis, "Project Planning, Scheduling, and Control" 3rd Ed., 2001)

"Project failures are not always the result of poor methodology; the problem may be poor implementation. Unrealistic objectives or poorly defined executive expectations are two common causes of poor implementation. Good methodologies do not guarantee success, but they do imply that the project will be managed correctly." (Harold Kerzner, "Strategic Planning for Project Management using a Project Management Maturity Model", 2001)

"Projects often fail at the beginning, not the end." (James P Lewis, "Project Planning, Scheduling, and Control" 3rd Ed., 2001)

"Success or failure of a project depends upon the ability of key personnel to have sufficient data for decision-making. Project management is often considered to be both an art and a science. It is an art because of the strong need for interpersonal skills, and the project planning and control forms attempt to convert part of the 'art' into a science." (Harold Kerzner, "Strategic Planning for Project Management using a Project Management Maturity Model", 2001)

"Today, most project management practitioners focus on planning failure. If this aspect of the project can be compressed, or even eliminated, then the magnitude of the actual failure, should it occur, would be diminished. A good project management methodology helps to reduce planning failure. Today, we believe that planning failure, when it occurs, is due in large part to the project manager’s inability to perform effective risk management." (Harold Kerzner, "Strategic Planning for Project Management using a Project Management Maturity Model", 2001)

"When unmeetable expectations are formed, failure is virtually assured, since we have defined failure as unmet expectations. This is called a planning failure and is the difference between what was planned to be accomplished and what was, in fact, achievable. The second component of failure is poor performance or actual failure. This is the difference between what was achievable and what was actually accomplished. […] Perceived failure is the net sum of actual failure and planning failure. […] Planning failure is again assured even if no actual failure occurs. In both of these situations (overplanning and underplanning), the actual failure is the same, but the perceived failure can vary considerably." (Harold Kerzner, "Strategic Planning for Project Management using a Project Management Maturity Model", 2001)

"You need to identify and terminate infeasible projects early. Sending a message to project managers that project termination threatens their career will tempt them to continue projects that should die” (Barry Bohem,"Project termination doesn't equal project failure", Computer, 34 (9),  2001)

"Projects fail because of context, not because of content.[...] the traditional emphasis in project management on the technical issues of the project (content) has led to a legacy of an extremely poor set of tools, techniques, and tips for managing the complex of people, political, and other 'softer' issues that make up the context of the project." (Rob Thomsett, "Radical Project Management", 2002)

"Agile development methodologies promise higher customer satisfaction, lower defect rates, faster development times and a solution to rapidly changing requirements. Plan-driven approaches promise predictability, stability, and high assurance. However, both approaches have shortcomings that, if left unaddressed, can lead to project failure. The challenge is to balance the two approaches to take advantage of their strengths and compensate for their weaknesses." (Barry Boehm & Richard Turner, "Observations on balancing discipline and agility", Agile Development Conference, 2003)

"A project is composed of a series of steps where all must be achieved for success. Each individual step has some probability of failure. We often underestimate the large number of things that may happen in the future or all opportunities for failure that may cause a project to go wrong. Humans make mistakes, equipment fails, technologies don't work as planned, unrealistic expectations, biases including sunk cost-syndrome, inexperience, wrong incentives, contractor failure, untested technology, delays, wrong deliveries, changing requirements, random events, ignoring early warning signals are reasons for delays, cost overruns and mistakes. Often we focus too much on the specific project case and ignore what normally happens in similar situations (base rate frequency of outcomes- personal and others)." (Peter Bevelin, "Seeking Wisdom: From Darwin to Munger", 2003)

"If you've been in the software business for any time at all, you know that there are certain common problems that plague one project after another. Missed schedules and creeping requirements are not things that just happen to you once and then go away, never to appear again. Rather, they are part of the territory. We all know that. What's odd is that we don't plan our projects as if we knew it. Instead, we plan as if our past problems are locked in the past and will never rear their ugly heads again. Of course, you know that isn't a reasonable expectation." (Tom DeMarco & Timothy Lister, "Waltzing with Bears: Managing Risk on Software Projects", 2003)

"Many things can put a project off course: bureaucracy, unclear objectives, and lack of resources, to name a few. But it is the approach to design that largely determines how complex software can become. When complexity gets out of hand, developers can no longer understand the software well enough to change or extend it easily and safely. On the other hand, a good design can create opportunities to exploit those complex features." (Eric Evans, "Domain-Driven Design: Tackling complexity in the heart of software", 2003)

"A fundamental reason for the difficulties with modern engineering projects is their inherent complexity. The systems that these projects are working with or building have many interdependent parts, so that changes in one part often have effects on other parts of the system. These indirect effects are frequently unanticipated, as are collective behaviors that arise from the mutual interactions of multiple components. Both indirect and collective effects readily cause intolerable failures of the system. Moreover, when the task of the system is intrinsically complex, anticipating the many possible demands that can be placed upon the system, and designing a system that can respond in all of the necessary ways, is not feasible. This problem appears in the form of inadequate specifications, but the fundamental issue is whether it is even possible to generate adequate specifications for a complex system." (Yaneer Bar-Yam, "Making Things Work: Solving Complex Problems in a Complex World", 2004)

"The collapse of a particular project may appear to have a specific cause, but an overly high intrinsic complexity of these systems is a problem common to many of them. A chain always breaks first in one particular link, but if the weight it is required to hold is too high, failure of the chain is guaranteed."  (Yaneer Bar-Yam, "Making Things Work: Solving Complex Problems in a Complex World", 2004)

"As hard as it is to find good ideas, it's even more difficult to manage them. While the project is humming along, vision document in place and a strong creative momentum moving forward, there is another level of thinking that has to occur: how will the designs and ideas translate into decisions? Even if good designs and ideas are being investigated, and people are excited about what they're working on, the challenge of convergence toward specifications remains. If a shift of momentum toward definitive design decisions doesn't happen at the right time and isn't managed in the right way, disaster waits. For many reasons, project failure begins here." (Scott Berkun, "The Art of Project Management", 2005)

"Failure usually results from a lack of a common approach to accomplish the work as a team. Inadequate leadership fails to create the environment in which teams can flourish. Furthermore, potential team members are seldom trained in how to share their efforts to accomplish team goals. The team may also assume they know more about teamwork than they actually do. So we need to be able to differentiate between superficial teamwork and the real thing." (Kevin Forsberg et al, "Visualizing Project Management: Models and frameworks for mastering complex systems" 3rd Ed., 2005)

"Project failures can frequently be traced to unrealistic technical, cost, or schedule targets. Such targets may be entirely arbitrary or based on bad assumptions - setting team members up for failure. Furthermore, the goals that motivate one team member may not motivate another member. All tasks don’t have to be inherently motivating - that’s not sensible. But there have to be motivating factors, if by nothing more than participating in goal determination. This also helps ensure adequate opportunity and risk identification, analysis, and management." (Kevin Forsberg et al, "Visualizing Project Management: Models and frameworks for mastering complex systems" 3rd Ed., 2005)

"Projects are complex non-linear systems and have significant inertia. If you wait to see acute problems before taking action, you will be too late and may make things worse." (Scott Berkun, "Making Things Happen: Mastering Project Management", 2005)

"The appropriate models help avoid costly errors that can lead to failure. One of the major sources of project failure is f lawed requirements and scope management. Models of the project environment, therefore, need to address the development and management of project requirements. Continuing to work on the project solution with an insufficient understanding of stakeholder requirements and a deficient requirements development process often leads to expensive time delays and redesigns. This doesn’t have to be the case. A strong requirements development and management process model can provide that ounce of prevention." (Kevin Forsberg et al, "Visualizing Project Management: Models and frameworks for mastering complex systems" 3rd Ed., 2005)

"Any effort at large-scale reorganization - that is, any project spanning more than two years and, more generally, anything that has not already been done - is inevitably doomed to failure." (Corinne Maier, "Bonjour Laziness: Why Hard Work Doesn't Pay", 2007)

"In software management, coordination is not an afterthought or an ancillary matter; it is the heart of the work, and deciding what tools and methods to use can make or break a project. But getting sidetracked in managing those tools is a potent temptation." (Scott Rosenberg, "Dreaming in Code", 2007)

"As a general rule, implementations do not just spontaneously combust. Failures tend to stem from the aggregation of many issues. Although some issues may have been known since the early stages of the project (for example, the sales cycle or system design), implementation teams discover the majority of problems during the middle of the implementation, typically during some form of testing." (Phil Simon, "Why New Systems Fail: An Insider’s Guide to Successful IT Projects", 2010)

"Implementing new systems is not like baking a cake. Organizations cannot follow a recipe with the following ingredients: three consultants, six weeks of testing, two training classes, and a healthy dose of project management. Nor do projects bake for six months until complete, after which time everyone enjoys a delicious piece of cake. For all sorts of reasons, a well-conceived and well-run project may fail, whereas a horribly managed project may come in under budget, ahead of schedule, and do everything that the vendor promised at the onset." (Phil Simon, "Why New Systems Fail: An Insider’s Guide to Successful IT Projects", 2010)

"Pre-implementation, post-implementation, and ongoing data audits are invaluable tools for organizations. Used judiciously by knowledgeable and impartial resources, audits can detect, avoid, and minimize issues that can derail an implementation or cause a live system to fail. Rather than view them as superfluous expenses, organizations would be wise to conduct them at key points throughout the system’s life cycle." (Phil Simon, "Why New Systems Fail: An Insider’s Guide to Successful IT Projects", 2010)

"The best managed project may fail, whereas a horribly managed project may come in under budget, ahead of schedule, and do everything that the vendor promised at the onset. In reality, however, organizations are unlikely to find themselves in one of these extreme scenarios. On a fundamental level, successfully activating and utilizing a new system is about minimizing risk from day one until the end of the project and beyond. The organization that can do this stands the best chance of averting failure." (Phil Simon, "Why New Systems Fail: An Insider’s Guide to Successful IT Projects", 2010)

"Understanding the causes of system failures may help organizations avoid them, although there are no guarantees." 
(Phil Simon, "Why New Systems Fail: An Insider’s Guide to Successful IT Projects", 2010)

"Stakeholder management to me is key, as success or failure is in the eye of the beholder. Time, cost and quality fall prey to the perceptions of the key stakeholders, who may have nothing to do with the running of the project." (Peter Parkes, "NLP for Project Managers", 2011)

"Projects fail from under-communicating, not over-communicating. Even if resource constraints preclude the dependency that you want from being delivered any sooner, clarifying priorities and expectations enables you to plan ahead and work through alternatives." (Edmond Lau, "The Effective Engineer: How to Leverage Your Efforts In Software Engineering to Make a Disproportionate and Meaningful Impact", 2015)

"Reading reviews of failure can be a dispiriting exercise. It can also create a distorted perception of reality. Reform of the implementation of large programs and projects should not just be based on a litany of what has gone wrong. Many things go right and, for that very reason, go unnoticed." (Peter Shergold, "Learning from Failure", 2015)

"Complexity is one of the causes of failing projects; failing to split the project into smaller tasks also causes software quality issues or project failure. Complexity can also be caused by the size of the  project; if the project is too big, there is a huge possibility that the project may become complex and complicated." (Abu S Mahfuz,"Software Quality Assurance", 2016)

"A correct and sanity-checked judgment of both the financial and logistic feasibility of a project is absolutely critical to its eventual outcome; get it wrong and a failed project is almost guaranteed. In support of this statement we can refer to the contents of a number of serious ­academic studies which support the idea that two of the chief causes of the failure, particularly the financial failure, of major infrastructure ­projects are optimism bias and strategic misrepresentations. […] Optimism bias means that the original project sponsors and planners fooled themselves that the project would be easy and could be completed within a 'back-of-the-envelope' budget, perhaps because they didn’t have the experience to know it would be difficult and expensive, or perhaps they didn’t want to know. Strategic misrepresentations mean that even if they did know it would be more difficult and expensive than their published estimate, they lied about it so that the Public, the Banks, and Politicians, would support the idea." (Tony Martyr, "Why Projects Fail", 2018)

"No project should be allowed to proceed without clear specification and acceptance  criteria, that are understood by all participants." (Tony Martyr, "Why Projects Fail", 2018)

"[...] consistently good project results are hard to come by, yet most organisations continue to think they’re doing a great job. It’s got to the stage where project failure has become so commonplace that we’ve started to see it as success, or we just aren’t seeing clearly at all." (Tony Martyr, "Why Projects Fail", 2018)

"Every year more than two-thirds of projects are considered failures, and most organisations would not be surprised by this statistic. In most cases, however, failure was the result of not making a hard decision."  (Colin D Ellis, "The Project Book", 2019)

"Organisations whose IT projects failed usually deployed recognisable project management methodologies; the reasons for failure were invariably to do with failures of project governance rather than simply of operational management." (Alan Calder, "ISO/IEC 38500: A pocket guide" 2nd Ed, 2019)

"Part of the problem is that we take project failure personally, seeing it as a stain on our reputation. It’s worth remembering that while a project may fail, this doesn’t make you a failure as a leader. In fact, the research shows that those who embrace failure become much more resilient and make better decisions as a result, so in that sense failure can only be a good thing." (Colin D Ellis, "The Project Book", 2019)

"Remember, though, there are only two reasons for project failure: poor project sponsorship and poor project management. And given that the buck stops with you, you could argue there’s only one reason for project failure." (Colin D Ellis, "The Project Book", 2019)

"A project is usually considered a failure if it is late, is over budget, or does not meet the customer’s expectations. Without the control that project management provides, a project is more likely to have problems with one of these areas. A problem with only one constraint (scope, schedule, cost, resources, quality, and risk) can jeopardize the entire project." (Sandra F Rowe, "Project Management for Small Projects" 3rd Ed., 2020)
