
08 April 2024

Business Intelligence: Why Data Projects Fail to Deliver Real-Life Impact (Part III: Failure through the Looking Glass)

Business Intelligence
Business Intelligence Series

There’s a huge volume of material available on project failure – resources that document why individual projects failed, why projects fail in general, why project members, managers and/or executives think projects fail – and there seems to be no more pleasant activity at the end of a project than theorizing about why it failed, the topic culminating occasionally in the blame game. Success may generate applause, though it is failure that attracts and stirs the most waves (irony, disapproval, and other similar behavior), and everybody seems to be an expert once the endeavor is over. 

The mere definition of project failure – not fulfilling a project’s objectives within the set budget and timeframe – is a misnomer because budgets and timelines are estimated based on the information available at the beginning of the project, the amount of uncertainty for many projects being considerable, and data projects are no exception. The higher the uncertainty, the less reliable the two estimates become. Even simple projects can reveal uncertainty, especially when the broader context of the project is considered. 

Even if it’s not a common practice, one way to cope with uncertainty is to add a tolerance to the estimates, though even this practice will probably not always accommodate the full extent of the unknown, as the tolerances are usually small. The general expectation is an accurate and precise landing, which for big or exploratory projects is seldom possible. 

Moreover, the assumptions under which the estimates hold are easily invalidated in practice – resources’ availability, first time right, executives’ support in setting priorities, requirements’ quality, technologies’ maturity, etc. If one looks beyond the reasons why projects fail in general, quite often the issues are more organizational than technological, the lack of knowledge and experience being one of the factors. 

On the other hand, many projects will not get approved if the estimates don’t look positive, and therefore people are pressured in one way or another to make the numbers fit the expectations. Some projects, given their importance, need to be done even if the numbers don’t look good or can’t be quantified correctly. Other projects represent people’s subsistence on the job, respectively people’s self-occupation to create motion, though they can occasionally have a positive impact for the organizations as well. These kinds of aspects almost never make it into statistics or surveys. Neither do the big issues people are afraid to talk about. Not to mention that, in the light of politics and the office grapevine, the facts get distorted.

Data projects reflect all the symptoms of failure that projects have in general, though when words like AI, Statistics or Machine Learning are used, the chances of failure are even higher, given that the respective fields require a higher level of expertise, the appropriate use of technologies and adherence to the scientific process for the results to be valid. If projects can benefit from general recipes, respectively established procedures and methods, their range of applicability decreases when the mentioned areas are involved. 

Many data projects have an exploratory nature – seeing what’s possible – and therefore a considerable percentage will not reach production. Moreover, even those that reach that far might end up being stopped or discarded sooner or later if they don’t deliver the expected value, and probably many of the models created in the process are biased, irrelevant, or apply the theory incorrectly. Not to mention that the mere use of tools and algorithms is not Data Science or Data Analysis. 

The challenge for many data projects is to identify which Project Management (PM) best practices to consider. Following all or no practices at all just increases the risks of failure!

Previous Post <<||>> Next Post

06 April 2024

Business Intelligence: Why Data Projects Fail to Deliver Real-Life Impact (Part II: There's Value in Failure)

Business Intelligence
Business Intelligence Series

"Results are nothing; the energies which produce them
and which again spring from them are everything."
(Wilhelm von Humboldt,  "On Language", 1836)

When the data is not available but is needed on a continuous basis, the usual solution is to redesign the processes and make sure the data becomes available at the needed quality level. Redesign involves additional costs for the business; therefore, it might be tempting to cancel or postpone data projects, at least until they become feasible, though they seldom become so. 

Just because there’s a set of data, this doesn’t mean that there is important knowledge to be extracted from it, respectively that the investment is feasible. There’s however value in building experience in the internal resources, in identifying the challenges and the opportunities, in identifying what needs to be changed to harness the data. Unfortunately, organizations expect that somebody else will do the work for them instead of making the jump themselves, and this approach will more likely fail. It’s like expecting to get enlightened after a few theoretical sessions with a guru instead of walking the path by oneself. 

This is reflected also in organizations’ readiness to undertake the efforts required to make the jump on the maturity scale. If organizations can’t approach such topics systematically, address the assumptions, opportunities and risks adequately, respectively manage the various aspects involved, it’s hard to believe that their data journey will be positive. 

A data journey shouldn’t be about politics, even if some minds need to be changed in the process, at management as well as at lower levels. If the leadership doesn’t recognize the importance of becoming an enabler for such initiatives, then the organization probably deserves to keep the status quo. The drive for change should come from the leadership, whether we talk about data culture, data strategy, decision-making, or any other critical aspect.

An organization will always need to find the balance between time, scope, cost, and quality, and this applies to operations, tactics, and strategies as well as to projects. There are hard limits and a lot of uncertainty associated with data projects and the tasks involved, limits reflected in cost and time estimations (which frankly are just experts’ rough guesses that can change for the worse in the light of new information). Therefore, especially in data projects, one needs to be able to compromise, to change scope and timelines as seen fit, and, why not, to cancel the projects if the objectives aren’t feasible anymore, respectively if compromises can’t be reached.

An organization must be able to take the risks and invest in failure, otherwise the opportunities for growth don’t materialize. Being able to split a roadmap into small iterative steps – which, besides breaking down the complexity and making progress, allow evaluating the progress and the knowledge gained, respectively incorporating the feedback and knowledge into the next steps – can prove to be what organizations lack in coping with the high uncertainty. Instead, organizations seem to be fascinated by the big bang, thinking that technology can automatically fill the organizational gaps.

Doing the same thing repeatedly and expecting different results is called insanity. Unfortunately, this is what organizations and service providers do in what concerns Project Management in general and data projects in particular. Building something without a foundation, without making sure that the employees have the skillset, maturity and culture to manage the data-related tasks, challenges and opportunities is pure insanity!

Bottom line, harnessing the data requires a certain maturity, and it starts with recognizing and pursuing opportunities, setting goals, following roadmaps, learning to fail and getting value from failure, respectively controlling the failure. Growth or instant enlightenment without a fair amount of sweat is possible, though that’s an exception reserved for a few!

Previous Post <<||>> Next Post

14 December 2023

ERP Systems: Microsoft Dynamics 365's Invoice Capture (Some Thoughts to Start with)

Enterprise Resource Planning

Introduction

It's almost the year end and it's time to review what went well and not so well during the year. On the "successful projects" list I can put the Invoice Capture implementation. I wrote in a previous post a short review of what the feature is about.

I had the chance of configuring Invoice Capture for Cost invoices (invoices without Purchase orders) while it was still in public preview, and we went live soon after the feature became generally available. The implementation had its challenges, though in the end it was a positive experience, learning a lot from my colleagues, from Microsoft, and from other consultants and business users who embarked on the same journey. 

Where to Start?

Usually, it's a good idea to start with the documentation and the standard training material, which provides a good overview of what Invoice Capture is about, the steps needed for configuration, the processes involved, permissions, etc. 

You should also check the "Invoice Capture for Dynamics 365" group on Yammer (aka Viva) because, besides the latest version of the Implementation Guide, the Release notes and the training videos associated with them are also published there, and other users provide (a lot of) feedback and questions on them. Some information is first available in the group and only much later made available in the documentation. If you're facing an error or a challenge, more likely there's already a conversation in there with the answer you're looking for. Otherwise, you can start a thread and the others will try to help. At least until now, Microsoft was quite active in helping.

Via FastTrack, Microsoft provided several sessions in Dec-2022 (preview) and Sep-2023 (GA) that can be used to get a good overview of the feature and its implementation. Frankly, I would start with the last session and then explore the other resources. In the process I found several other resources useful, mainly YouTube content - see Dan's Corner (link), DAFTD365 - and LinkedIn - see Hendrik M Larsen's posts.

You might also want to check the Release planner for Finance to see what features are in the pipeline, respectively the Ideas for Dynamics to get an idea of what kind of improvements others wish for. 

In parallel, one can start sketching the "AS IS" and "TO BE" processes, and eventually put together a business case for using Invoice Capture to digitize the processing of Vendor invoices. This isn't a simple Change request; therefore, it makes sense to start a project, even though its scope is relatively small. 

Bridging the Gap

One can look at the "TO BE" process based on the functionality provided, respectively planned, by Microsoft for Invoice capture, or look at the broader picture and sketch how an ideal digitized process should look. If the gap between the two pictures is big, then it might be a good idea to look at alternatives, which anyway should be done as part of the business case. There might be third-party tools out there (e.g. ExFlow) which provide similar functionality, however in the long term it makes sense to go with Microsoft, even if the full extent of the functionality might not be available. 

A review of other tools might be good - to understand how the ISVs approached the integration, what kind of features they provide, respectively whether the ideal digitized process makes sense. However, this will imply more effort.

The current version of Invoice capture provides a good basis to build upon. One can use Power Apps or Power Automate to address some of the gaps, some gaps can be discussed with Microsoft, stressing their importance, while other gaps are maybe not that important and can be dismissed. One way or another, one must be ready to compromise as long as this doesn't have an important impact on the business. 

The Project

The scope of the project might be relatively small, though one should follow the best practices of Project Management and make sure that all important stakeholders are involved, that the right resources are available when needed, that the requirements are managed adequately, that the changes are adequately tested, that the users are trained, that the process is documented, etc.

It's important to understand that the simple configuration of Invoice capture will not be the end of the effort. As Microsoft releases further features directly and indirectly related to Invoice Capture, additional effort might be involved after the implementation goes live to address the gaps and opportunities, as well as the risks. Moreover, Invoice Capture comes with a learning curve; addressing the lessons learned might involve further changes in the system's setup as well as in data management. Therefore, further effort must be planned accordingly. 

Whether we talk about a full implementation or the implementation of a single feature, the overall success tends to depend more on how the implementation is approached than on the technology involved. 

Closing Thoughts

Some of the points made here can be applied to similar feature implementations. Overall, it's important to gather enough information to start the project and, in time, to reach the level of depth required by it. Don't expect things to be perfect; start small and evolve, prioritize, cover the gaps, optimize!

Previous post <<||>> Next post

03 October 2023

ERP Implementations: It’s a Matter of Complexity

 

ERP Implementation

There are many factors to blame for the implementation process’ inefficiency, however many of them can be associated with the complexity of the project itself, respectively of the application(s) involved. The problem of complexity can be addressed either by answering complexity with complexity - building a complex team to handle the tasks, which is seldom feasible even if many organizations do it - or by simplifying the implementation process and/or the application.

In what concerns the project, the complexity starts with requirements’ elicitation, the iterative transformations they undergo until the final functional requirements document is finalized, their evaluation and mapping to features, respectively gaps’ identification. It’s a complex task because it involves understanding the business as well as the functionality available in the target system(s). Then comes the effort estimation, which, as the name suggests, is just a guess based on available historical numbers and/or experts’ opinion. High-level requirements are easier to manage than low-level requirements, however they allow for more gaps in understanding. The more detailed the specifications, the more they should help in the estimation process, though that’s the theory. A considerable number of factors can impact the process.

Even if there are standard activities in the implementation process, the number of resources involved from the customer’s as well as from the partner(s)’ side makes the whole planning process a nightmare for any Project Manager, no matter how experienced he/she is.

Ideally, each member of the team should behave like a trooper, knowing by instinct when and what needs to be done, what the expectations are, etc. This expectation might be nearly met on the partner side, as the resources have more likely participated in similar projects, though there’s always a mix between levels of expertise, with resources migrating between projects. Unfortunately, that’s seldom (never) the case on the customer side, where the gap between reality and expectation is considerable.

Each team member requires a minimum of information/knowledge so he/she can perform the activities assigned. Moreover, the volume of coordination and cooperation is considerably higher than in other projects, a complexity that increases with the organization’s size and is inversely proportional to the organization’s maturity in managing projects and implementation-related activities. There’s thus a minimum of initial communication needed, and further communication needs to occur between the parties involved. Moreover, the higher the lack of cohesion between the parties, the higher the need for communication, and this applies especially when multiple organizations are involved in the project.

The triple constraint of Project Management between scope, cost and time, respectively quality, has an important impact on the project. Resources need to be available when the project needs them and, especially on the partner side, only when they are needed. For the implementation project to be feasible for the partner, its resources must work on several projects in parallel, or the timing must be perfect so that no waiting times are involved, respectively the effort is concentrated only when needed. Such precision is maybe possible at a project’s beginning, though the further the project evolves, the more challenging the coordination of resources becomes. Similar considerations apply to the customer as well.

Thus, a more realistic expectation is to have resources available only at certain points in time, and the resources should be capable of juggling between projects, respectively between project and other activities. Prioritizing is a must, and sometimes the operations or other projects have higher priority. When the time is not available, resources need to compromise by reducing the level of quality.

On the other side, it would be great if most of the effort could be concentrated at the beginning of the project, the later interactions being minimal.  

Previous <<||>> Next

04 April 2021

Strategic Management: Between Value and Waste I (Introduction)

 Mismanagement

Independently of whether Lean Management is considered in the context of Manufacturing, Software Development (SD), Project Management (PM) or any other business-related area, there are three fundamental business concepts on which the whole scaffolding of the Lean philosophies is built, namely value, value stream and waste. 

From an economic standpoint, value refers to the monetary worth of a product, asset or service (further referred to as product) to an organization, while from a qualitative perspective, it refers to the perceived benefit associated with its usage. The value is thus reflected in the costs associated with a product’s delivery (producer’s perspective), respectively the price paid on acquiring it and the degree to which the product can fulfill a demand (customer’s perspective).

Without diving too deep into the theory of product valuation, the challenges revolve around reducing the costs associated with a product’s delivery, respectively selling it at a price the customer is willing to pay, typically to address a given set of needs. Moreover, the customer is willing to pay only for the functions that satisfy the needs the product is thought to cover. From this friction of opposing driving forces, a product is designed and valued.

The value stream is the sequence of activities (also steps or processes) needed to deliver a product to customers. This formulation includes value-added and non-value-added activities, internal and external customers, respectively covers the full lifecycle of products and/or services in whatever form it occurs, whether perceived by the customers or not.  

Waste is any activity that consumes resources but creates no value for the customers or, more generally, for the stakeholders, be they internal or external. Waste is typically associated with the non-value-added activities, activities that don’t produce value for stakeholders, and it can increase directly or indirectly the costs of products, especially when no attention is given to it and/or it is not recognized as such. Therefore, eliminating the waste can have an important impact on products’ costs and became one of the goals of Lean Management. Moreover, eliminating the waste is an incremental process that, when put in the context of continuous improvement, can lead to process redesign and re-engineering.

Taiichi Ohno, the ‘father’ of the Toyota Production System (TPS), originally identified seven forms of waste (Japanese: muda): overproduction, waiting, transporting, inappropriate processing, unnecessary inventory, unnecessary/excess motion, and defects. Within the context of SD and PM, Tom and Mary Poppendieck [1] translated the types of waste into concepts closer to the language of software developers: partially done work, extra processes, extra features, task switching, waiting, motion and, of course, defects. To this list further types of waste associated with resources, confusion and work conditions were added later.

Defects in the form of errors and bugs, ineffective communication, rework and overwork, waiting, repetitive activities like handoffs, or even unnecessary meetings are usually the visible part of products and projects and important from the perspective of stakeholders, who in extremis can become sensitive when their volume increases out of proportion.

Unfortunately, lurking in the deep waters of projects and wrecking everything that stands in their way are the other forms of waste, less perceivable from the stakeholders’ side: unclear requirements/goals, code not released or not tested, specifications not implemented, scrapped code, overutilized/underutilized resources, bureaucracy, suboptimal processes, unnecessary optimization, searching for information, mismanagement, task switching, improper work conditions, confusion, to mention just the important activities associated with waste.

Given their elusive nature, and independently of whether they are visible to stakeholders or not, they all impact the costs of projects and products when proper attention is not given to them and they are not handled accordingly.

Lean Management - The Waste Iceberg

References:
[1] Mary Poppendieck & Tom Poppendieck (2003) Lean Software Development: An Agile Toolkit, Addison Wesley, ISBN: 0-321-15078-3

07 March 2021

Project Management: Agile Manifesto Reloaded II (Requirements Management)

Project Management

Independently of its scope and the methodology used, each software development project is made of the same blocks/phases, eventually arranged differently. It starts with the Requirements Management (RM) subprocesses in which the functional and non-functional requirements are gathered, consolidated, prioritized and brought to a form which facilitates their understanding and estimation. It’s an iterative process as there can be overlaps in functionality, requirements that don’t bring any significant benefit when compared with the investment, respectively new aspects discovered during the internal discussions or with the implementer.

As output of this phase, it’s important to have a list of requirements that reflect the customer’s needs in respect to the product(s) to be implemented. Once frozen, the list defines the project’s scope and is used for estimating the costs, sketching a draft of the final solution, respectively reaching a contractual agreement with the implementer. Ideally, the set of requirements should be complete and coherent while reflecting the customer’s needs. This allows in theory agreeing upon the costs as well as upon an architecture and other important aspects (responsibilities/accountability).

Typically, each new requirement considered after this stage needs to go through a Change Management (CM) process in which it gets formulated to the needed level of detail, a cost, effort and impact analysis is performed, and the budget for it is approved or the change gets rejected. Ideally, small changes can be considered as part of a buffer budget set upfront, however in the end each change comes with a cost and project delays.

Some changes can come late in the project and can have an important impact on the whole architecture when important aspects were missed upfront. Moreover, when the number of changes goes beyond a certain limit it can lead to what is known as scope creep, with important consequences on the project’s costs, timeline and quality. Therefore, to minimize the impact on the project, the number of changes needs to be kept to a minimum, typically considering only the critical changes, while the others can still be implemented after the project’s end.

The agile manifesto’s principles impose an important constraint on the requirements – that changing requirements is welcome even late in the process – an assumption – that the best requirements emerge from self-organizing teams – and probably one implication – that the requirements need to be defined together with the implementer.

The way changing requirements are handled seems to provide more flexibility, though it’s actually a constraint imposed on the CM process, which interfaces with the RM processes. Without a proper CM in place, any requirement might end up being implemented, independently of whether it is feasible or not. This can easily make the project’s costs explode, sometimes unnecessarily, while accommodating extreme behavior like changing the same functionality frequently, handling exceptions extensively, etc.

It’s usually helpful to define the requirements together with the implementer, as this can bring more quality into the process, even if more time needs to be invested. However, starting from a solid set of requirements is a critical factor for the project’s success. The manifesto makes no direct statement about this; it just iterates that good requirements emerge from self-organizing teams, which is not necessarily the case. 

The users who in theory can define the requirements best are the ones who have the deepest knowledge about an organization’s processes and IT architecture, typically the key users and/or IT experts. Self-organization revolves around how a team organizes itself and handles the various activities, though there’s no guarantee that it will address the important aspects, no matter how motivated the team is, how constant the pace, how excellently the technical details were handled or how well the final product works.

Previous Post <<||>> Next Post

Project Management: Agile Manifesto Reloaded I (Introduction)

 

Project Management

There are so many books written on agile methodologies, each attempting to depict the realities of software development projects. There are many truths considered in them, though they seem to blend into a complex texture in which the writer usually takes the position of a preacher, the sins of the traditional methodologies being contrasted with the agile principles. In extremis, everything done in the past seems to be wrong, while the agile methods seem to be a panacea, which is seldom the case.

It’s already 20 years since the agile manifesto was published, and the methodologies adhering to the respective principles don’t seem to provide the expected success, suffering from the same chronic symptoms as their predecessors - they are poorly understood and implemented, tend to function after the hammer principle, respectively the software development projects still deliver poor results. Moreover, there are more and more professionals who raise their voice against agile practices.

Frankly, the principles behind the agile manifesto make sense. A project should by definition satisfy stakeholders’ requirements, ideally through regular deliveries that incorporate the needed functionality, while gradually seeking early feedback from customers, respectively involving the customer throughout the project’s duration, working together to deliver a feasible product. Moreover, self-organizing teams, face-to-face meetings, constant pace and technical excellence should allow minimizing the waste, respectively maximizing the efficiency in the project. Further aspects like simplicity, good design and architecture should establish a basis for success.

Re-reading the agile manifesto, even if each read pulls more and more pros and cons from experience, the manifesto continues to look like a Christmas wish-list. Even if the represented ideas make sense and satisfy a specific need, they are difficult to achieve in a project’s context and setup. Each wish introduces a constraint that brings with it its own limitations. Unfortunately, each policy introduced by a methodology follows the same pattern, no matter the methodology considered. Moreover, the wishes cover only a small subset of a project’s texture, are general and leave a lot of space for interpretation and implementation, though the same can be said about any principles that don’t provide a coherent worldview or a conceptual model.

The software development industry needs a coherent worldview that reflects its assumptions, models, characteristics, laws and challenges. Software Engineering (SE) attempts to provide such a worldview, though unfortunately it is too complex for many, and there seems to be a big divide when it is considered in respect to the worldviews introduced by the various Project Management (PM) methodologies. Studying one or two PM methodologies, learning a few programming languages and even the hands-on experience on a few projects won’t fill the gaps in knowledge associated with the SE worldview.

Organizations don’t seem to see the need for professionals to have a formal education in SE. On the other side, employees are expected to have by default some of the skillset required, which is not the case. Besides understanding and implementing a technology, there is a set of knowledge areas in which the IT professional must have at least high-level knowledge if he/she is expected to think critically about the respective areas. Unfortunately, the lack of such knowledge sometimes leads to situations which can negatively impact projects.

Almost each important word from the agile manifesto pulls with it a set of concepts from the SE worldview – customer satisfaction, software delivery, working software, requirements management, change management, cooperation, teamwork, trust, motivation, communication, metrics, stakeholders’ management, good design, good architecture, lessons learned, performance management, etc. The manifesto needs to be regarded through SE’s eyeglasses if one expects value from it.

Previous Post <<||>> Next Post

04 March 2021

Project Management: Projects' Dynamics II (Motion)

Project Management

Motion is the action or process of moving or being moved between an initial and a final or intermediate point. From the tiniest endeavors to the movement of the planets and beyond, everything is governed by motion. If the laws of nature seem to reveal an inner structural perfection, the activities people perform are quite often far from perfect, which is acceptable if we consider that (almost) everything is a learning process. What is probably less acceptable is the volume of inefficient motion, which we can sometimes easily categorize as waste.

The waste associated with motion can take many forms: sorting through a pile of tools to find the right one, searching for information, moving back and forth to reach a destination or achieve a goal, etc. Suboptimal motion can have important effects for an organization, resulting in reduced productivity, respectively higher costs.

If for repetitive activities that involve a certain degree of similarity one can typically find a way to optimize the motion, the higher the uncertainty of the steps involved, the more difficult it becomes to optimize it. It’s the case of discovery endeavors in which the path between start and destination can’t be traced beforehand, respectively when the destination or the path in between can’t be depicted to the needed level of detail. A strategy’s implementation, ERP implementations and other complex projects, especially the ones dealing with new technologies and/or incomplete knowledge, tend to be exploratory in nature and thus fall under this latter type of motion.

In other words, one must know at minimum the starting point, the destination, how to reach it and what it takes to reach it – resources, knowledge, skillset. When one has all this information, one can go on and estimate how long it will take to reach the destination, though the estimate reflects the information available as well as the estimator’s skills in translating the information into a realistic roadmap. Each new piece of information has the potential of impacting the whole process considerably, in extremis to the degree that one must start the journey anew. The complexity of such projects and the volume of uncertainty can make estimation difficult if not impossible, no matter how good the estimator’s skills are. At best an estimator can come up with a best- and a worst-case estimation, both however dependent on the assumptions made.

Moreover, complex projects are sensitive to the initial conditions or auspices under which they start. This sensitivity can turn a project in a totally different direction or pace, that can be reinforced positively or negatively as the project progresses. It’s a continuous interplay between internal and external factors and components that can create synergies or have adverse effects with the potential of reaching tipping points.

Related to the initial conditions, as praxis sometimes shows, for entities in continuous movement (like organizations) it’s also important to know where one is coming from (and at what speed), as the previous impulse (driving force) can be further used or steered as needed. Metaphorically, a project will need a certain time to find the right pace if it lacks the proper impulse.

Unless the team is trained to play, and plays, like an orchestra, the impact of deviations from expectations can hardly be quantified. To minimize the waste, ideally a project’s journey should deviate minimally from the optimal path, which can be challenging to achieve as a project’s mass can pull the project in one direction or the other. The more the project advances, the bigger the mass, a fact which can make a project unstoppable. When such high-mass projects are stopped, their impulse can continue to haunt the organization for years after.

Previous Post <<||>> Next Post

Project Management: Projects' Dynamics I (Introduction)

Despite the considerable collection of books on Project Management (PM) and related methodologies, and the fact that projects are inherent endeavors in professional as well as personal life (setups that would in theory give people the environment and exposure to different project types), people’s understanding of what it takes to plan and execute a project seems to be narrow and sometimes questionable. Moreover, their understanding diverges considerably from common sense. It’s also true that knowledge and common sense are relative when considering any human endeavor in which there are multiple roads to the same destination, or when learning requires time, effort, skills, and implies certain prerequisites, however the lack of such knowledge can hurt when the endeavor’s success is a must and a team effort. 

Even if the lack of understanding about PM can be considered as minor when compared with other challenges/problems faced by a project, when one’s running fast to finish a race, even a small pebble in one’s running shoes can hurt a lot, especially when one doesn’t have the luxury to stop and remove the stone, as it would make sense to do.

It lies in human nature to resist change, to seek information that only confirms one’s own opinions, to follow the same approach in handling challenges, even if the attempts are far from optimal, even if people who walked the same path tell you that there’s a better way and even sketch the path and provide information about what it takes to get there. As it seems, there’s a predisposition to learn the hard way, if there’s significant learning involved at all. Unfortunately, such situations occur in projects, and the solutions often overrun the boundaries of PM, where social and communication skills must be brought into play. 

On the other side, there’s still hope that change can be managed optimally once the facts are explained to a level that facilitates understanding. However, such an attempt can prove to be quite a challenge, given the various setups in which PM takes place. The intersection between technologies and organizational setups leads to complex scenarios which make such work more difficult, even if projects’ challenges are of an organizational rather than technological nature. 

When the knowledge we have about the world doesn’t fit our expectations, a simple heuristic is to return to the basics. A solid edifice can be built only on a solid foundation, and the best foundation in coping with reality is to establish common ground with other people. One can achieve this by identifying their suppositions and expectations, by closing the gap in perception and understanding, by establishing a basis for communication, in which feedback is a must if one wants to make significant progress.

Despite being explorative and time-consuming, establishing common ground can be challenging when addressing an imaginary audience, which is quite often the situation. Practice shows however that progress can be made by starting with a set of well-formulated definitions, simple models, principles, and heuristics that have the potential of helping in sense-making.

The goal is thus to identify first the definitions that reflect the basic concepts that need to be considered. Once the concepts are defined, they can be related to each other with the help of a few models. Even if fictitious, as simplifications of reality, the models should allow playing with the concepts, facilitating the concepts’ understanding. Principles (sets of rules for reasoning) can be used together with heuristics (rule-of-thumb methods or techniques) for explaining the ‘known’ and approaching the ‘unknown’. Even if maybe not perfect, these tools can help in building theories or explanatory constructs.

||>>Next Post

04 August 2020

Project Management: Redefining Projects' Success I

Mismanagement

A project is typically considered successful if it has met the objectives defined beforehand within the allocated budget, timeframe and expected quality levels. Any negative deviation from any of these equates with project failure. In other words, the success or failure of a project is judged as black or white with no grays in between, which is utopic, especially for mid to big software projects, typically associated with a lot of uncertainty. According to this definition, a project which had a delay of a few months, or whose budget was overrun by 10%, or whose users got only 90% of the planned functionality, or any combination of these negative deviations, can be considered as failed.

If a small project needed 6 instead of 3 months to complete, which is normal for projects with reduced priority, as long as the project costs haven’t changed, then the increase in duration can be ignored. Likewise, 3 months of delay for a 2-year project is normal, especially when the project is complex. Even if additional costs were incurred within this timeframe, as long as they are a small percentage of the overall project costs, the impact can be acceptable for the business. On the other side, when the delays grow exponentially with further implications, then the problem changes dramatically.

Big projects typically have a strategic importance. It’s the case of ERP implementations, which besides the technology changes have in theory the potential to transform an organization, pushing it to reach further performance levels. Such projects are estimated to take on average one to two years for a medium organization, however the delays can easily reach 50% to 100% of the initial estimation. Independently of what caused the delay, as long as the organization achieved the intended goals and can cover the project’s costs, one can say that the project made a (positive) difference.

Independently of a project’s size, if 90% of the important functionality is available, then more likely the remaining 10% can be covered in a first step with manual work, with further investment into the system following in time as part of a continuous improvement process. It’s maybe not ideal for the users, however the approach also incorporates the learning curve of working with the system and understanding its possibilities and limitations. Of course, when the percentage of the available functionality decreases below a given limit, the system’s acceptance is endangered, in which case users eventually start looking for alternatives.

There are also projects which opened the door to new possibilities and which require more investments to leverage the full capabilities. Some ERP implementations have this potential, despite overruns. Some of such investments are justified while others are not. Related to this last category, there are projects which are on time, on budget, and whose deliverables satisfy the quality criteria and objectives, however they make no difference for the organization despite the important investments made. Sure, some of the projects from this category are a must (e.g. updates, upgrades, technology changes), however there are also projects which can be considered as self-occupational hazard. In extremis, such projects run in the background and cost organizations a lot of energy and resources, while their effects are questionable.

At least from these examples, the definition of a project's success needs to be changed, or maybe standardized, to consider not only intrinsic but also extrinsic aspects. In theory, that is the role of a Project Management Office (PMO), however it’s challenging to find an evaluation methodology that fits all needs. Further on, for the same considerations, benchmarking projects across organizations and industries can prove to be a foolhardy attempt.

28 June 2020

Strategic Management: Simplicity II (A System's View)

Strategic Management

Each time one discusses in IT about software and hardware components interacting with each other, one talks about a composite referred to as a system. Even if the term Information System (IS) is related to it, a system is defined as a set of interrelated and interconnected components that can be considered together for specific purposes or simple convenience.

A component can be a piece of software or hardware, as well as persons or groups if we extend the definition. The consideration of people becomes relevant especially in the context of ecologies, in which systems are placed in a broader context that considers people’s interaction with them, as this gives rise to important behavior that impacts the systems’ functioning.

Within a system each part has a role or function determined in respect to the whole as well as to the other parts. The role or function of a component is typically fixed, predefined, though there are also exceptions, especially when the scope of a component is enlarged, respectively reduced to the degree that the component can be removed or ignored. What one considers or doesn’t consider as part of a system defines the system’s boundaries; it’s what distinguishes it from other systems within the environment(s) considered.

The interaction between the components comes down to the exchange, transmission and processing of data found in different aggregations ranging from signals to complex data structures. If in non-IT-based systems the changes are determined by the inflow, respectively outflow, of energy, in IT the flow is considered in terms of data in its various aggregations (information, knowledge). The data flow (also information flow) represents the ‘fluid’ that nourishes a system’s ‘organism’.

One can grasp the complexity the moment one attempts to describe a system in terms of components, respectively the dependencies existing between them in terms of data and processes. If in nature the processes are extrapolated, in IT they are predefined (even if the knowledge about them is not available). In addition, the less knowledge one has about the infrastructure, the higher the apparent complexity. Even if the system is not necessarily complex, the lack of knowledge and certainty about it makes it complex. The more one needs to dig for information and knowledge to get to an acceptable level of knowledge and logical depth, the more time is needed for designing a solution.

Saint Exupéry’s definition of simplicity applies from a system’s functional point of view, though it doesn’t address the relative knowledge about the system, which often is implicit (in people’s heads). People have only fragmented knowledge about the system, which makes it difficult to create the whole picture. It’s typically the role of system or process operational manuals, respectively of data descriptions, to make that knowledge explicit, also establishing a foundation for common knowledge and further communication and understanding.

Between the apparent (perceived) and the real complexity of a system there’s an important gap that needs to be addressed if one wants to manage the systems adequately, respectively to simplify them. Often simplification happens when components or whole systems are replaced, consolidated, or migrated, a mix of these approaches existing as well. Simplifications at data level (aka data harmonization) or process level (aka process optimization and redesign) can have an important impact, being inherent to the good (optimal) functioning of systems.

Whether these changes occur in a big bang or in gradual iterations is a question of available resources and organizational capabilities, including the ability to handle such projects, respectively of the impact, opportunities and risks associated with such endeavors. Beyond this, it’s important to regard the problems from a systemic and systematic point of view, in which ecology’s role is important.

Previous Post <<||>> Next Post

Written: Jun-2020, Last Reviewed: Mar-2024

16 June 2020

Project Management: Planning Correctly Misunderstood IV

Mismanagement

The relatively big number of Project Management (PM) methodologies considered nowadays makes it more and more difficult to understand the world of PM and make oneself understood, in a context in which terminology is used in explanations that defy logic, in which people stubbornly persist that their understanding is the ultimate truth and that between white and black there are no degrees of gray. Among all PM concepts, project planning seems to be the most misunderstood, and this probably because all the activities revolve around it, while each methodology brings its own planning philosophy. Each methodology comes with its own story, its own imaginative description of what a perfect plan is about.

Independently of the methodology used, there are three levels of planning. At the highest level, the strategic one, the project is put in the context of other strategic activities – other projects and initiatives, as well as business operations, all competing for the same financial and human resources. At this level the goals are identified and the basis is laid for the successful execution of the project, including establishing the ground and integrating the main aspects of a project – risk, quality and communication. Here it is decided which projects will be considered, in which sequence, how and when resources will be assigned. 

A project plan is typically written and further executed with the tactical horizon in mind – the individual engagement of resources and actions, the actual means to reach the objectives set at strategic level. It’s the level where the actual project plan is detailed, where activities are sequenced and prioritized. Here each methodology has its own approach – whether the planning is done per deliverable, work package or any other approach used to partition the activities. It’s the level at which the various teams are coordinated toward specific targets. Thus, the manageable unit is the team and not the individual, the deliverables or the work packages and not the individual tasks.

The operational level equates with the execution of a project’s activities. Even if the project manager oversees the activities, it’s the team’s duty to plan the activities with the set deliveries in mind. The project manager doesn’t need to know all the details, though he should be updated in a timely manner on the progress, the eventual risks and the opportunities that arise in each area. This requires continuous coordination at the vertical as well as the horizontal level.

The project manager typically oscillates between the strategic and tactical views of a project, while the operational level comes into view only when operational themes are escalated or further coordination is needed. Even if this delimitation is clear in big projects, in small projects the three levels melt into each other. Therefore, the jump from small to big projects and vice-versa can create issues when the approach is not tailored to the project’s size and its further characteristics.

Attempting to plan each activity in the project at the lowest level of detail obscures the view, the complexity of the project kicking back sooner or later. Maintaining such a detailed plan can become a waste of time in the long term. In extremis, a resource is used to update a plan which can easily become obsolete by the time all activities have been reviewed. This doesn’t mean that the project plan doesn’t need to be updated regularly, though the pace can be decided based on each project’s specifics.

Therefore, one of the most important challenges in projects is finding the appropriate level of detail for planning, and there’s no general rule that works for all projects. Typically the choices alternate between work packages and deliverables. 

21 May 2020

Project Management: Planning Correctly Misunderstood III

Mismanagement

One of the most misunderstood topics in Project Management seems to be the one of planning, and this probably because everyone has a good idea of what it means to plan an activity – we do it daily and most of the time (hopefully) we hit a bull’s-eye (or we have the impression we did). You must do this and that, you have that dependency, you must coordinate with a few people, you must first reach that milestone before going further, you do one step at a time, and so on. It’s pretty easy, isn’t it?

From a bird’s eye view, project planning is like planning every other activity, though there are several important differences. The most important one is of scale – the number of activities and resources involved, the level of coordination and communication, as well as the quality with which they occur, the level of uncertainty and control, respectively manageability. All these create a complexity that is hardly manageable by just one person. 

Another difference is the detail needed for the planning and the targets’ reachability. Some believe that the plan needs to be done down to the lowest level of detail, which even if possible can prove to be an impediment to planning. Projects’ environments share some important characteristics with a battlefield in terms of the complexity of interactions, their dynamics and logistical requirements. Within an army’s structure there are levels of organization that require different mindsets and levels of planning. A general thinks primarily at the strategic level, at which troops and actions are seen as aggregations at the level of abstraction that makes their organization and planning manageable. The strategy is made however in collaboration with other generals and upper structures, while, having defined the strategic goals, the general must devise the tactics together with the immediate subalterns. In theory, the project manager must regard the project from the same perspective. There result thus three levels of planning – strategic, done with the upper management; tactical, done with the team members; respectively logistical, done within the team. That’s a way of breaking down the complexity and dividing the responsibilities within the project. 

Projects’ final destination seems to have the character of a wish list more or less anchored in reality. From a technical point of view the target can be achievable, though in big projects the most important challenges are of organizational nature – being able to allocate and coordinate effectively the resources as needed by the project. The wish-like character is reflected also by the cost, scope, time triangle in respect to the expected quality – at some point in time one is forced to choose between two of them. On the other side, there’s the tendency to see the targets and milestones as fixed, with little room for deviation. One can easily forget that a strategic plan’s purpose is to set the objectives, identify the challenges and the possible lines of action, while a tactical plan’s objective is to devise the means to reach the objectives. Bringing everything together can easily obscure the view and, in extremis, the plan loses its actuality as soon as it was created (and approved). 

The most confusing aspect is probably the adherence of a plan to a given methodology, one forcing a project and thus a plan to fit a methodology by following blindly the rules and principles imposed by it, instead of fitting the methodology to the project. Besides the fact that methodologies are best practices but not necessarily good practices for what fits an organization, they tend to be either too general, specifying the what and not the how, or too restrictive (as interpreted). 

20 May 2020

Project Management: Some Thoughts on Planning II

Mismanagement

A project’s dependency on resources’ (average) utilization time (UT) and the quality expectations expressed as a quality factor (QF) doesn’t come as a surprise, as hopefully one is acquainted with the project triangle, which reflects the dependency between scope, cost and time in respect to quality. Even if this dependency is intuitive, it’s difficult to express it in numbers and study the way it affects the project. That was the purpose of the model built previously. 

From the respective model there are a few things to ponder. First, it’s a utopia to plan with 90% UT, unless one is really sure that the resources have enough work to bring the idle time close to zero. A single person can achieve maybe a 90% UT if he works alone on the project, though even then there are phases in which the input or feedback from other people is necessary. The more people involved in the project and the higher the dependency between their activities, the higher the chances that the (average) UT will decrease considerably.

When in addition there’s also a geographical or organizational boundary between team members, the UT will decrease even more. In consequence, in big projects like ERP implementations the team members from the customer and vendor side are allocated fully to the project; when this is not possible, then on the vendor side the consultants need to be involved in at least two projects to cover the idle time. Of course, with good planning, communication, and awareness of the work ahead one can try to minimize the idle time, though that’s less likely to happen.

Probably a better idea would be to plan with 75% or even 60% UT, though the values depend on the team's experience in handling similar projects. If the team members are involved also in operational activities or other projects, then a 50% UT is more realistic.

Secondly, in the previous post the 80%-20% rule was considered in respect to quality, a rule which applies to the various deliverables, though it has a punctual character. Taken on average, the rule is somehow attenuated. Therefore, in the model a spread of factors from 1 to 2 was considered, with a step of 0,25 for each 5% quality increase. It remains to be proven whether the values are realistic and how much they depend on a project's characteristics.

On the other side, quality is difficult to quantify, and 100% quality is hypothetical. One discusses in theory about 3 sigma (the equivalent of 93,3% accuracy) or 4 sigma (99,4% accuracy) in respect to the number of errors found in the code, though from there on everything is fuzzy. In software projects each decision has the potential of leading to an error, and there’s a lot of room for interpretation as long as there’s no fixed basis against which to compare the deviations. One needs precise and correct specifications for that.
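
As a side note, the 93,3 and 99,4 figures correspond to the yield at 3, respectively 4 sigma under the common Six Sigma convention of a 1,5 sigma long-term shift – the shift convention is an assumption on my side, added here only as a quick check:

```python
from statistics import NormalDist

# Yield at k sigma, assuming the usual 1.5-sigma long-term shift convention
for k in (3, 4):
    print(f"{k} sigma -> {NormalDist().cdf(k - 1.5):.1%}")
# 3 sigma -> 93.3%, 4 sigma -> 99.4%
```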
I think that one should target in a first phase 80% quality (on average) and further build from there, try to improve the quality iteratively as the project goes on and as lessons are learned. In other words, a project plan, a concept, a design document doesn’t need to be perfect from the beginning but should be good enough to allow working with it. One can detail them as progress is made into the project, and hopefully their quality should converge to a value that is acceptable for the business.
Thirdly, in case a planning tool was used, one can use the model backwards to roughly check a timeline’s feasibility, relating the estimated effort to the planned duration and the number of resources involved in order to identify the utilization time the plan implies.
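Expressed as a rough sketch, this is one possible reading of the backward check, assuming the model actual effort = estimated effort × QF / UT from the related post; the function name and the numbers are only illustrative:

```python
# Rough feasibility check of a planned timeline, assuming the simple model
# actual effort = estimated effort * QF / UT described in the related post.
def implied_utilization(estimated_effort: float, quality_factor: float,
                        planned_duration: float, resources: int) -> float:
    """Utilization time (UT) a plan implicitly assumes, given the effort estimate,
    the targeted quality factor, the planned duration and the number of resources."""
    return estimated_effort * quality_factor / (planned_duration * resources)

# Illustrative values: 100 days estimated, 85% quality target (QF = 1.25),
# a plan of 60 working days with 4 people involved:
ut = implied_utilization(100, 1.25, 60, 4)
print(f"implied UT: {ut:.0%}")   # ~52%, plausible only if people are partly allocated
```

If the implied UT comes out above what the team can realistically sustain (see the values discussed above), the timeline is probably too optimistic.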

19 May 2020

Project Management: Some Thoughts on Planning I

Mismanagement

One of the issues in Project Management (PM) planning is that the planner idealizes a resource and the activities performed by it, much like a machine. Unlike machines, whose uptime can approach 100%, a human resource can work at most 90% of the available time (aka utilization time), the remaining 10% being typically associated with interruptions – internal emails and meetings, casual communication, pauses, etc. For resources split between projects or operations the utilization time can be at most 70%, however a realistic value is in general between 40% and 60% on average. What does this mean for a project?
So, if a resource has a volume of work W, the amount of time needed to complete the work would be at best W/UT, where UT is the utilization time of the respective resource. “At best” because in each project there is additional idle time resulting from waste-related activities – waiting for sign-off, for information, for another resource to complete their work, etc.
The utilization time is not the only factor to consider. Depending on the case, the delivered work may reach on average maybe 80% of the expected quality. This applies to documentation and concepts as well as to written code, bug testing and other project activities. To get into the range of 100%, one will more likely need 4 times the effort associated with reaching 80% of the expected quality, though this value also depends on people’s professionalism and on the degree to which the requirements were understood and are achievable. Therefore, the values discussed can be regarded as “boundary” values.
Let’s consider a quality factor (QF) which has a value of 1 for 80% quality, increasing by 0,25 for each further 5% of quality. Thus, with an initial effort estimation of 100 days, this is how the resulting effort changes for various UT and QF values:
[Table: resulting effort (in days) for UT values between 60% and 95% and quality between 80% and 95%]
Considering that a project can target between 60% and 95% UT, and between 80% and 95% quality, for an initial estimation of 100 days the actual project duration can range between 117 and 292 days, where the values toward the upper bound tend to be the more realistic ones.
The model is simplistic, as it doesn’t reflect the nonlinear behavior of the factors involved nor the dependencies existing between them. It also doesn’t reflect an organization’s maturity in handling projects and the tasks involved. However, it can be used to increase the awareness of how the utilization time and the expected quality can affect a project’s timeline, and to check whether one’s planning is realistic.
For example, at a project’s start one can target a UT of 70% and a quality of 85%, which for 100 days of estimated effort results in about 178 days of actual effort. Dividing this value by the number of resources involved, e.g. 4, it results that the project could be finished in about 44,5 days. This value can then be compared with the actual plan in which the activities are listed.
During the project it would be useful to look at how the UT changed and by how much, to understand the impact the change has on the project. For example, a decrease of 5% in utilization time can delay the project by 2,5 days, which is not much, though for a project of 1000 days we talk already about one month. Similarly, it is helpful to check how much the quality deviated from the expectation, because a 5% deviation in quality can result in an extra effort of about 8 days, which for 1000 days would mean almost 4 months of delay.
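For reference, the whole model fits in a few lines of code. It is only a minimal sketch assuming the linear dependencies described above; the function names are illustrative, and the sensitivity checks read the 5% decrease in UT as a relative change, respectively the quality deviation as a 5% higher quality level to reach:

```python
# Minimal sketch of the planning model described above:
#   actual effort = estimated effort * QF / UT,  duration = actual effort / resources
def quality_factor(target_quality: float) -> float:
    """QF = 1 at 80% quality, +0.25 for each additional 5% (up to 2 at 100%)."""
    return 1 + 0.25 * (target_quality - 80) / 5

def actual_effort(estimate: float, ut: float, target_quality: float) -> float:
    return estimate * quality_factor(target_quality) / ut

def duration(estimate: float, ut: float, target_quality: float, resources: int) -> float:
    return actual_effort(estimate, ut, target_quality) / resources

# The example from above: 100 days estimated, 70% UT, 85% quality, 4 resources
print(round(actual_effort(100, 0.70, 85), 1))    # ~178.6 days of effort
print(round(duration(100, 0.70, 85, 4), 1))      # ~44.6 days of duration

# Sensitivity: a (relative) 5% drop in UT, respectively a 5% higher quality target
print(round(duration(100, 0.70 * 0.95, 85, 4) - duration(100, 0.70, 85, 4), 1))  # ~2.3 days
print(round(duration(100, 0.70, 90, 4) - duration(100, 0.70, 85, 4), 1))         # ~8.9 days
```

The results come close to the values quoted above, the small differences stemming from rounding and from how the 5% changes are read; for other situations one can simply plug in the project’s own estimates.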

30 January 2020

Project Management: Methodologies (The Good, the Bad and the Ugly)

Mismanagement

The Good: Nowadays there are several Project Management (PM) methodologies to choose from to address a project’s specifics and, when adapted and applied accordingly, a methodology can enable projects to be run effectively and brought under control.

The Bad: Even if the theoretical basis of PM methodologies has been proved and perfected over the years, projects continue to fail at a disturbing rate. Of course, the reasons behind their failure are multiple, though often they are rooted in how PM methodologies are taught, understood and implemented.

Just as a theoretical course in cooking won’t make one a good cook, a theoretical course in PM won’t make one a good Project Manager or a team member knowledgeable in applying the learned methodology. Surprisingly, the expectation is exactly that – the team member received a training and is good to go. Moreover, people believe that managing a software project is like coordinating the building of a small treehouse. To some degree there are many similarities, though the challenges typically lie in the details, and these details often escape a standard course.

To bridge the gap between theory and practice, the learner needs time to grow into the role, to learn the dos and don’ts and, most importantly, to learn how to use the tools at hand efficiently. The methodology is itself a tool that makes use of further tools in its processes – project plans, work breakdown structures, checklists, charters, reports, records, etc. These can be learned only through practice, hopefully with some help (aka mentoring) from a person experienced in the respective methodology, be it the Project Manager, a trainer or another team member. Just as one can’t be thrown into the water and expected to swim across the Channel, one can’t do that with a newbie either.

There’s a natural fallacy in thinking that we’ve understood more than we actually have. We can observe the limits of our understanding when we are confronted with the complexities involved in handling PM activities. A second fallacy is not believing other people’s warnings against using a tool or performing an activity in a certain way. A newbie’s mind sometimes has the predisposition of a child to touch the hot stove even when warned against it. It’s part of the learning process, though some persist in such behavior without learning much. What’s even more dangerous is a newbie pretending to be an expert, and this almost always ends badly.

The Ugly appears when the bad is taken to the extreme, when methodologies are misused for the wrong purposes to the degree that they destroy everything in their way. Of course, a pool can be dug with a spoon, but does it make sense to do that? Just because a tool can be used for something doesn’t mean it should be used for it as long as there are better tools for the same purpose. It seems a pretty logical thing, though the contrary happens more often than we’d like. It starts with the preconception that one should use the tool one knows best, ignoring in the process the fit-for-purpose condition. What’s even more deplorable is breaking down a project to fit a methodology while ignoring the technical and logistical aspects.

Any tool can lead to damage when used excessively, in the wrong places, at the wrong point in time or by the wrong person. Like in an orchestra, when one instrument plays a wrong note, it clashes with the rest. When more instruments play wrongly, the piece becomes unrecognizable. It’s the role of the conductor to make the players hit the right notes at the right time.

12 May 2019

Software Engineering: Programming (Misconceptions about Programming II)

Software Engineering

Continuation

One of the organizational stereotypes is having a big room full of cubicles filled with employees. Even if programmers can work in such settings, improperly designed environments restrict creativity and productivity to a certain degree and make employees’ collaboration and socialization more difficult. Despite dedicated meeting rooms being available, an important part of the communication occurs ad hoc. In open spaces each transient interruption can inadvertently lead to a loss of concentration, which leads to wasted time, as one needs to pick up the thread of thought again and review the last written code, and occasionally to bugs.

Programming is expected to be a 9-to-5 job with an effective working time of 8 hours. Subtracting the interruptions and the pauses one needs to take, the effective working time decreases to about 6 hours. In other words, to reach 8 hours of effective productivity one needs to work about 10 hours. Therefore, unless adequately planned, each project starts with about 20% overtime. Moreover, even if a task is planned to take 8 hours, given the need for information the allocated time gets split over multiple days. The higher the need for further clarifications, the higher the chances for the effort to expand. In extremis, the effort can double.

Spending extensive time in front of the computer can have adverse effects on programmers’ physical and mental health. Time pressure and some of the negative behavior occurring in working environments have the same effect. The communication skills can also suffer when they are not properly addressed. Unfortunately, few organizations give importance to these aspects, and few offer a work-life balance, even if a programmer’s job both fits and requires such an approach. What’s even more unfortunate is when organizations ignore the overtime, taking it as part of the job’s description. It’s also one of the main reasons why programmers leave, why a competent workforce is lost. In the end everyone’s replaceable, however what’s the price one must pay for it?

Trainings are typically offered within running projects because they can easily be billed. Besides the fact that this behavior takes time unnecessarily from a project’s schedule, it can easily make trainings ineffective when the programmers can’t immediately use the new knowledge. Moreover, considering that resources come and go, the unwillingness to invest in programmers can have incalculable effects on an organization’s performance, respectively on programmers’ personal development.

Organizations typically look for self-motivated resources, this requirement often encompassing an organization’s whole motivational strategy. Long projects feel like a marathon in which it is difficult to sustain the same rhythm for the whole duration. Managers and team leaders need to work on programmers’ motivation if they want sustained performance. They must act as mentors and leaders alike, not only control task statuses and rant and rave each time deviations occur. It’s easy to complain about the status quo without doing anything to address the existing issues (challenges).

Especially in dysfunctional teams, programmers believe that management can’t contribute much to a project’s technical aspects, while management sees little or no benefit in making developers an integral part of the project’s decision-making process. Moreover, the lack of transparency and communication leads to a wide range of frictions between the various parties.

Probably the most difficult thing to understand is people’s stubbornness in expecting different results from the same methods and in ignoring common sense. The ease with which people ignore technological and Project Management principles and best practices is bewildering. It lies in human nature to learn the hard way despite the warnings of the experienced; however, despite the negative effects, there’s often minimal learning in the process...

To be eventually continued…

07 May 2019

Strategic Management: Agile vs. Lean Organizations

Strategic Management

Agile and lean are two important concepts that have pervaded organizations over the past 20-30 years, though they continue to have little effect on organizations’ operations.

Agile is rooted in the need to respond promptly to the changing needs of an organization. The agile philosophy was primarily groomed in Software Development to reconcile changing customer requirements with disciplined project execution, however it can be applied to an organization’s processes as well. An agile process is in general a process designed to deliver the intended results in an effective and efficient manner by promptly addressing the changes in customers’ needs.

Lean is a systematic method for the minimization of waste, rooted as a philosophy in manufacturing. The lean mindset attempts to remove the non-value-added activities from processes because they bring no value to the customers. Thus a lean process is a process designed to deliver the intended results in an effective and efficient manner by focusing on the immediate needs of the customers, on what customers want and value (when they want it).

Effective means being successful in producing a desired or intended result, while efficient means achieving maximum productivity with minimum wasted effort or expense. The requirement for a process to be effective and efficient translates into delivering what’s intended by using a minimum of steps, designed in such a way that the quality of the end results is not affected, at least not in its essential characteristics. Efficiency also translates into the fact that the flows of information, material and resources suffer minimal delays.

Agile focuses on answering promptly the changes in customers’ needs, while lean focuses on what customers want and value while eliminating waste. Both mindsets imply iterative and adaptive approaches in which improvement happens gradually. By their nature the two mindsets seem to complement each other. Some even equate agile with lean; however, an agile process is not necessarily lean and vice versa.

To improve the effectiveness and efficiency of its operations, an organization should aim at developing processes that are both agile and lean, optimizing the information and material flows, focusing on its users’ changing needs, and continuously eliminating the activities that lead to waste. And waste can take so many forms – the unnecessary bureaucracy reflected in multiple and repetitive sign-offs and approvals, the lack of empowerment, not knowing what to do, etc.

Considerable time is wasted just because users don’t know or don’t understand an organization’s processes. If an organization can’t find rules that everyone understands, then a process is doomed, independently of the key area it belongs to. There’s also the tendency to attempt addressing each exception within a process, to the degree that multiple processes result. There’s no perfect process; however, one can define the basic flow and document the main exceptions, while giving users some guidelines for navigating the unknown and unpredictable.

As part of the same tendency, it makes sense to move requests that follow a standard procedure onto the list of standard requests instead of following futile steps just for the sake of it. This is the case for requests that can be fulfilled with internal resources, e.g. the development of reports or the extraction of data, the provisioning of SharePoint websites, some performance optimizations, etc. In addition, one can unify processes that seem to be disconnected, e.g. the handling of changes as part of Change Management, respectively Project Management, as they involve almost the same steps.

Probably it’s in each organization’s interest to discover and explore the benefits of applying the agile and lean mindsets to its operations and to integrate them into its culture.