Having participated in several ERP implementations, one expects things to change for the better when moving from one implementation to the next. Things do improve in certain areas as experience is integrated, though on average the overall performance seems to stay the same. Thus, one may wonder: how can this happen? Of course, there are many explanations - what went wrong, what could have been done better - and the list is usually quite long. However, history repeats itself in the next implementation. Something seems to be broken, or maybe this is just the way implementations work, though I doubt it!
An ERP implementation starts with a need, and the customer usually has an idea of what that need is about. It might even have a set of high-level or low-level requirements, which should be the case when starting on such a journey. Then the customer selects an implementation partner, followed by a period of discovery in which the partner learns more about the business, including the overall infrastructure, business processes, data and people. Once the requirements are available, the partner can evaluate them to identify the deviations from the standard functionality that translate into customizations, sketch solutions, and make a first estimate of the costs and resources needed.
Of course, there can be multiple iterations of this process in which the requirements are reviewed, reevaluated, justified and prioritized by all parties until a common understanding and an agreement on scope and expectations are reached. In the process some requirements are dropped, while others are modified or postponed to a later phase or phases. The whole process can take a few months, though it's mandatory for creating a workable estimate used as the basis for the statement of work and the overall contract.
In parallel, the parties can also work on a project plan and agree upon a project methodology, so that once the legal paperwork is signed, resources can be allocated to the project. A common practice is then for the functional consultants to generate, based on the requirements, a set of documents - functional design documents (FDDs), process diagrams - that serve as the basis for the setup, for programming the customizations, and for user acceptance testing (UAT). Of course, the documents need to be reviewed by the business, and gaps or misunderstandings addressed, which takes several iterations until the business can sign off on the respective documents. That's the point where the setup and programming can start, usually half a year, or even a year or more, after the initial steps.
Depending on the scope, in the best-case scenario the setup will take one to two months, at least until there is a system ready for UAT with the business data needed for Go-Live. The agreed customizations can translate into further months of effort, not only for programming, but also for testing, reviewing and further mitigations. This is often the time when many of the key users see a working version of the system for the first time, which frankly might be too late. Of course, they have read and reread the FDDs, though until this point everything was very abstract, and no matter how well such documents are written, they can't replace the hands-on experience of working with the system, discovering the functionality and understanding how it works.
In the best-case scenario, the key users are satisfied with the results, and the UAT and the Go-Live can proceed as planned; however, the expectation of getting it right the first time is seldom (if ever) met. Further iterations and delays follow. Overall, the process doesn't seem to be efficient!