
11 June 2024

🧭🏭Business Intelligence: Microsoft Fabric (Part IV: Is Microsoft Fabric Ready?)

Business Intelligence Series

When writing a Business Case, besides the high-level descriptions of the problem and solution(s), it is important to roughly estimate how much it costs, how long it takes, respectively how many resources are needed and for what activities. A proof-of-concept (PoC) might not need an explicit business case, though the same high-level information is needed, at least for the planning of resources and a formal approval.

Given that there are several analytical experiences in Microsoft Fabric (MF), it’s clear that there can no longer be a single reference architecture that can be recommended to customers. Frankly, that ship had already sailed with the introduction of Microsoft Synapse, if not earlier, with the move to the cloud. Also, there’s no one-size-fits-all, as certain building blocks make sense only in certain scenarios (e.g. organization scale, data volume or source type). Moreover, even if MF has been generally available for quite some time, customers and service providers ask themselves whether the available features are enough for building analytics solutions based on it.

“Is Fabric Ready?” was the topic of today’s Explicit Measures webcast [1]. The answer is probably, as usual, “it depends”, and the general recommendation is to do a PoC to check the solution’s feasibility. Conversely, MF may be the best approach to consider if integration with other systems (e.g. Dynamics 365, Dataverse) is needed.

What customers need are some rough but realistic estimates they can base any planning upon (at least for a PoC, if not for the whole project) in terms of making the data available in OneLake, building a semantic model, respectively processing and making the data available for consumption. Ideally, one needs a translation of the various steps, as was done for earlier architectures. For example, how long it takes to make the data available in OneLake, how long it takes to move the data physically or logically through the various layers, to build semantic models, etc.

Probably, some things can be achieved in a matter of days, at least if one knows what one’s doing. However, we are talking here about a new architecture that may resemble, for some, unknown territory. Even if old and new techniques can be mixed, there are further implications or improvements that can be considered. There are many webcasts, blog posts and other materials on how to do things and on what’s possible, though building a functioning solution from beginning to end, even as a PoC, requires more than putting all this together.

Just making the data flow from point A to B or C is not enough - data security, data governance and a few other topics like scalability and availability need to be considered as well. Security and governance are also the areas in which more features probably still need to be considered. For many customers starting now with MF, the hope is that most of these features will be available by the time their solutions are ready for production.

From a cost perspective, there’s the cost of data at rest and in transit, plus the licensing for MF and the other components involved. Ideally, one should start small and increase capacities as needed, though "small" can vary from case to case, and it’s important to find the optimum. Starting in the middle could be an alternative approach, even if it may involve higher costs. If one starts small, the costs for a PoC can be negligible, though sooner or later a compromise is needed to provide acceptable performance.

In terms of human resources, the topic is more complex (see [2]), and it depends largely on the nature of the project. The available pool of skillsets is the most important constraint or enabler such projects can have.

Previous Post <<||>> Next Post

References:
[1] Explicit Measures (2024) Power BI tips Ep.327: Is Fabric Ready? (link)
[2] Explicit Measures (2024) Power BI tips Ep.321: Building a BI Team (link)

09 April 2024

🧭Business Intelligence: Why Data Projects Fail to Deliver Real-Life Impact (Part IV: Making It in the Statistics)

Business Intelligence
Business Intelligence Series

Various sources (e.g., [1], [2], [3]) put the failure rates for data projects somewhere between 70% and 85%, rates that are a bit higher than those of standard projects, estimated at 60-75%, but not by much. This means that only 2-3 out of 10 projects will succeed, and that’s another reason to plan for failure, respectively to embrace it.

Unfortunately, the statistics advanced on project failure have no solid foundation and should be regarded with circumspection as long as the methodology and the information about the population used for the estimates aren’t shared, though they do reflect an important point – many data projects do fail! It would be foolish to think that your project will not fail just because you’re a big company, have the best resources, have a proven rate of success, and took all the precautions for the project not to fail.

Usually, at the end of a project the team meets to document the lessons learned, in the hope that the next projects will benefit from them. The team did learn something, though, as practice shows, even if the team managed to avoid some issues, other issues will impact the next similar project, leading to similar variances. One can summarize this as "on average, the impact of new issues and avoided known issues tends to zero out", or "on average, the plusses and minuses balance each other out across projects". It’s probably a question of focus – if organizations focus too much on certain aspects, other aspects are ignored and/or unseen.

So, your first data project will more than likely fail. The question is: what do you do about it? It’s important to be aware of why projects, and data projects in particular, fail, though starting to consider and monitor each possible issue can prove to be ineffective. One can, however, create a risk register from the list, estimate the rates for each of the potential failures, and focus only on the top 3-5 that carry the highest risk. Of course, one should reevaluate the estimates on a regular basis, though that’s Risk Management 101.
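
For illustration, below is a minimal sketch in Python of such a risk register; the risk names, probabilities and impact figures are made up for the example and would need to be replaced with the organization’s own estimates.

```python
# Minimal risk register sketch: rank candidate failure causes by exposure
# (probability x impact) and keep only the top few for active monitoring.
# The entries below are illustrative placeholders, not real estimates.

risks = [
    {"risk": "Unclear requirements",        "probability": 0.6, "impact": 8},
    {"risk": "Key resources unavailable",   "probability": 0.4, "impact": 7},
    {"risk": "Poor data quality",           "probability": 0.5, "impact": 9},
    {"risk": "Scope creep",                 "probability": 0.7, "impact": 5},
    {"risk": "Missing business commitment", "probability": 0.3, "impact": 9},
    {"risk": "Technology gaps",             "probability": 0.2, "impact": 6},
]

# Exposure = probability x impact; the higher the exposure, the higher the priority.
for r in risks:
    r["exposure"] = round(r["probability"] * r["impact"], 2)

top_risks = sorted(risks, key=lambda r: r["exposure"], reverse=True)[:5]

for r in top_risks:
    print(f'{r["risk"]:<30} exposure={r["exposure"]}')
```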

Besides this, one should focus on how the team can make the project succeed. When adopting a technology, methodology or set of processes, it’s recommended to start with a proof-of-concept (PoC). To make the PoC a helpful experience, it’s probably important to start with a topic that’s not too big to handle, but that also involves some complexity that would allow the organization to evaluate the targeted set of tools and technologies. It can also be a topic on which other organizations have made important progress, respectively succeeded. The temptation is big to approach the most stringent issues in the organization, respectively to build something big that can have an enormous impact on the organization. Jumping too soon into such topics can just increase the chances of failure.

One can also formulate the goals, objectives and further requirements in a form that allows the organization to build upon them even if the project fails. A PoC is about learning, building a foundation, doing the groundwork, exploring, mapping the unknown, and identifying what’s still missing to make progress, respectively closing the full circle. A PoC is less about overachievement and big impact, which can happen, though that’s a consequence of the good work done in the PoC.

The bottom line: no matter whether you succeed or fail, once you start a project, you’ll still make it into the statistics! More important is what you’ve learnt from the first data project, respectively how you can use that knowledge in further projects to make a difference!

Previous Post <<||>> Next Post

References:
[1] Harvard Business Review (2023) Keep Your AI Projects on Track, by Iavor Bojinov (link)
[2] Cognilytica (2023) The Shocking Truth: 70-80% of AI Projects Fail! (link)
[3] VentureBeat (2019) Why do 87% of data science projects never make it into production? (link)

03 October 2023

🧮ERP: Implementations (Part IV: Introducing an Upfront Proof-of-Concept Setup)

 

ERP Implementation

The standard phases of an ERP implementation are mandatory and inflexible, as there seems to exist an imposed succession of phases rooted in the customer’s need to have an upfront cost estimate for the project. Moreover, the concept-based approach reflected in the creation of a set of Functional Design Documents (FDDs), even if it’s supposed to increase an implementation’s accuracy, brings considerable challenges and an effort volume that could be spent in other areas. E.g., having a proof-of-concept setup subproject early in the project seems to bring more benefits.

Usually, before or during the requirements gathering phase, the functional consultants together with the key users look at the legacy system(s) and data, questions are asked on both sides, and the findings are hopefully documented, though the outputs are high-level ideas or process design sketches. The sessions are abstract, and besides diagrams there’s no feedback mechanism to make sure that the parties understood the customer’s processes and data structures, respectively that the key users understood what the future system is supposed to deliver. Some projects consider building 'AS-IS' diagrams and/or user stories during this phase, though their impact on the project’s outcomes is questionable.

Why not also include in this phase hands-on training sessions for the key users, during which a system is set up based on the available information? For example, one can start with an existing shell of the system reflecting the standard parameters used in the industry in which the customer operates. Starting from this shell, the key users and consultants go through the various processes and business scenarios, change parameters, add master data manually, sketch how the process could look, respectively understand the gaps from expectations, or maybe how the process can be changed to avoid customizations. That’s more effective than discussing the data structures and processes over and over!

Of course, this seems to increase the exploratory phase’s complexity, though the increase is only apparent. Allowing key users to understand how the target system works has the potential of simplifying the project’s planning and execution. Besides reaching a common understanding of the functionality, the key users can better evaluate whether the target system satisfies the high-level requirements, respectively better perform the various activities - requirements’ definition, reviews and user acceptance testing all benefiting. Moreover, they can train and involve other users earlier.

For this to work there are several assumptions. First, the functional consultants must know the target system(s), which is not necessarily needed in other approaches, where a person (e.g. a business analyst) who can understand how a system works and can document processes is enough. Second, the key users must have a good understanding of the legacy systems. Third, the shell should reflect the business needs as much as possible. Fourth, the necessary financial resources need to be made available upfront. Fifth, the business commitment must be there, and with it the key users should focus only on the project.

However, the most important aspect is that the parties involved need to buy into and support the idea! The FDDs bring a safety net and make sense for both parties, the setup being performed only after the sign-off. On the other hand, because of the considerable number of iterations, FDDs involve high costs. Performing the setup first, as described above, and writing the FDDs later, if still needed, should improve the FDDs’ quality and require fewer iterations.

This approach allows an important volume of work to be done upfront, and even if further effort is needed for customizations and testing, a lower level of coordination is needed later, thus reducing the complexity of the planning and of the overall project.

Previous <<||>> Next

03 February 2021

📦Data Migrations (DM): Conceptualization (Part III: Heuristics)

Data Migration

Probably one of the most difficult things to learn as a technical person is to use the right technology for a given purpose, mainly because one is inclined to use the tools one knows best. Moreover, the overlap between technologies makes the task more and more challenging, the difference between competing technologies often residing in the details. Thus, identifying the gaps comes down to understanding the details of the problem(s) or need(s), respectively the advantages or disadvantages of one technology over another. This is especially true for competing technologies, including the ones that replace other technologies.

There are simple heuristics that allow approaching such challenges. For example, heavy data processing usually belongs in databases, while import/export functionality belongs in an ETL tool. Therefore, one can start looking at the problems from these two perspectives. Would the solution benefit from these two approaches, or are there more appropriate technologies (e.g. data streaming, ELT, non-relational databases)? How much effort would building the solution involve?

Commercial Off-The-Shelf (COTS) tools provided by third-party vendors usually offer specialized functionality in each area. Gartner and Forrester provide regular analyses of the main players in the important areas, analyses which can in theory be used as a basis for further research. Even if COTS tools tend to be more expensive and can have some important functionality gaps, as long as they are extensible they can prove to be a good starting point for developing a solution.

Sometimes it helps to research on the web what other people or organizations did, how they approached the same aspects, what technologies, techniques and best practices they used to overcome the challenges. One doesn’t need to reinvent the wheel, even if it’s sometimes fun to do so. Moreover, a few hours of research can give one a basis of useful information and a better understanding of the work ahead.

On the other hand, sometimes it’s advisable to use the tools one knows best, however this can also lead to unusable and less performant solutions. For example, MS Excel and Access have been for years the tools of choice for building personal solutions that later grew into maintenance nightmares for the IT team. Ideally, they can still be used for data entry or data cleaning, though building solutions exclusively based on (one of) them can prove to be far from optimal.

When one doesn’t know whether a technology or mix of technologies can be used to provide a solution, it’s recommended to start a proof-of-concept (PoC) that would allow addressing the most important aspects of the needed solution. One can start small by focusing on the minimal functionality needed to check the main aspects, and evolve the PoC over several iterations as needed.

For example, in the case of a Data Migration (DM) this would involve building the data extraction layer for one entity, implementing several data transformations based on the defined mappings, building a few integrity rules for validation, respectively attempting to import the data into the target system. Once this is accomplished, one can start increasing the volume of data to check how the solution behaves under stress. The volume of data can be increased incrementally or by considering all the data available.
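
To make the skeleton more tangible, here is a minimal sketch in Python of such a PoC for a single entity; the file name, column mappings, validation rules and the SQLite stand-in for the target system are illustrative assumptions, not the actual migration tooling.

```python
import csv
import sqlite3

# Illustrative column mapping from the legacy extract to the target structure.
MAPPING = {"CustNo": "CustomerNumber", "Name1": "CustomerName", "Ctry": "Country"}

def extract(path):
    """Read the legacy extract (here a CSV file) into dictionaries."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Apply the defined mappings and a simple cleansing rule."""
    out = []
    for row in rows:
        rec = {target: row.get(source, "").strip() for source, target in MAPPING.items()}
        rec["Country"] = rec["Country"].upper()  # example of a simple transformation
        out.append(rec)
    return out

def validate(rows):
    """A few integrity rules; real projects will have many more."""
    errors = []
    for i, rec in enumerate(rows, start=1):
        if not rec["CustomerNumber"]:
            errors.append(f"row {i}: missing CustomerNumber")
        if len(rec["Country"]) != 2:
            errors.append(f"row {i}: invalid country code '{rec['Country']}'")
    return errors

def load(rows, db_path="target_poc.db"):
    """Stand-in for the import into the target system (here a local SQLite table)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS Customers (CustomerNumber TEXT, CustomerName TEXT, Country TEXT)")
    con.executemany("INSERT INTO Customers VALUES (:CustomerNumber, :CustomerName, :Country)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    # 'customers_legacy.csv' is a placeholder for the actual legacy extract.
    data = transform(extract("customers_legacy.csv"))
    issues = validate(data)
    if issues:
        print("\n".join(issues))
    else:
        load(data)
```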

As soon as the skeleton is built, one can consider all the mappings, respectively add several entities to cover the dependencies existing between them and other functionality. The prototype might not address all the requirements from the beginning, therefore consider the problems as they arise. For example, if the volume of data causes problems, then attempt splitting the data into batches during processing, or consider specific optimization techniques like indexing, or scaling techniques like increasing the computing resources.
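
Splitting the processing into batches can look, for example, like the following sketch; the batch size and the processing function are placeholders to be replaced by the actual logic.

```python
def process_in_batches(rows, process_batch, batch_size=10_000):
    """Split the data into fixed-size batches and process them one at a time,
    so that memory usage and transaction size stay under control."""
    for start in range(0, len(rows), batch_size):
        process_batch(rows[start:start + batch_size])

# Example usage with a dummy processing function:
if __name__ == "__main__":
    sample = list(range(25_000))
    process_in_batches(sample, lambda batch: print(f"processed {len(batch)} records"))
```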

Previous Post <<||>> Next Post

📦Data Migrations (DM): Conceptualization (Part II: Plan vs. Concept vs. Strategy)

Data Migration
Data Migrations Series

A concept is a document that describes at a high level the set of necessary steps and their implications in order to achieve a desired result, typically the object of a project. A concept is usually needed to provide more technical and nontechnical information about the desired solution, the context in which a set of steps is conducted, respectively the changes considered, how the changes will be implemented, and the further aspects that need to be considered. It can include a high-level plan and sometimes also information that typically belongs in a Business Case – goals, objectives, required resources, estimated effort and costs, risks and opportunities.

A concept is used primarily as a basis for sign-off, as well as for establishing common ground and understanding. Once approved, it’s used for the actual implementation and the solution’s validation. The concept should be updated as the project progresses, respectively as new information is discovered.

Creating a concept for a DM can be considered a best practice because it allows documenting the context, the technical and organizational requirements, the dependencies existing between the DM and other projects, and how they will be addressed. The concept can also include a high-level plan of the main activities (to be detailed in a separate document).

Especially when the concept has an exploratory nature (due to incomplete knowledge or other considerations), it can be validated with the help of a proof-of-concept (PoC), the realization of a high-level-design prototype that focuses on the main characteristics of the solution and thus allows identifying the challenges. Once the PoC is implemented, the feedback can be used to round out the concept.

Building a PoC for a DM should be considered an objective even when the project doesn’t seem to face any major challenges. The PoC should come down to addressing the most important DM requirements, ideally by implementing the whole or the most important aspects of functionality (e.g. data extraction, data transformations, integrity validation, respectively the import into the target system) for one or two data entities. Once the PoC is built, the team can use it as the basis for the evolutive development of the solution during the iterations considered.
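
As an illustration of the integrity validation part when two related entities are in scope, here is a minimal sketch in Python; the entity and field names are assumptions made for the example.

```python
# Minimal referential-integrity check between two illustrative entities,
# e.g. ensuring every order refers to a customer that is part of the migration scope.
customers = [{"CustomerNumber": "C001"}, {"CustomerNumber": "C002"}]
orders = [
    {"OrderNumber": "O100", "CustomerNumber": "C001"},
    {"OrderNumber": "O101", "CustomerNumber": "C999"},  # orphaned reference
]

known_customers = {c["CustomerNumber"] for c in customers}
orphans = [o for o in orders if o["CustomerNumber"] not in known_customers]

for o in orphans:
    print(f"Order {o['OrderNumber']} references unknown customer {o['CustomerNumber']}")
```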

A strategy is a set of coordinated and sustainable actions following a set of well-defined goals, actions devised into a plan and designed to create value and overcome further challenges. A strategy has the character of a concept, though with a broader scope, being usually considered when multiple projects or initiatives compete for the same resources, in order to provide a broader context and handle the challenges, risks and opportunities. Moreover, the strategy takes an inventory of the current issues and architecture – the 'AS-IS' perspective – and sketches the 'TO-BE' perspective by devising a roadmap that bridges the gap between the two.

In the case of a DM a strategy might be required when multiple DM projects need to be performed in parallel or sequentially, as it can help the organization to better manage the migrations.

A plan is a high-level document that describes the tasks, schedule and resources required to carry out an activity. Even if it typically refers to the work or product breakdown structure, it can cover other information usually available in a Business Case. A project plan is used to guide both project execution and project control, while in the context of Strategic Management the (strategic) plan provides a high-level roadmap on how the defined goals and objectives will be achieved during the period covered by the strategy.

For small DM projects a plan can in theory be enough. As both a strategy and a concept can include a high-level plan, the names are in practice often used interchangeably.

Previous Post <<||>> Next Post

15 May 2019

#️⃣Software Engineering: Programming (Part XV: Rapid Prototyping - Introduction)

Software Engineering
Software Engineering Series

Rapid (software) prototyping (RSP) is a group of techniques applied in Software Engineering to quickly build a prototype (aka mockup, wireframe) to verify the technical or factual realization and feasibility of an application architecture, process or business model. A similar notion is that of the Proof-of-Concept (PoC), which attempts to demonstrate, by building a prototype, starting an experiment or a pilot project, that a technical concept, business proposal or theory has practical potential. In other words, in Software Engineering RSP encompasses the techniques by which a PoC is led.

In industries that deal with physical products, a prototype is typically a small-scale object made from inexpensive material that resembles the final product to a certain degree, with some characteristics, details or features being completely ignored (e.g. the inner design, some components, the finishing, etc.). Building several prototypes is much easier and cheaper than building the end product, allowing one to play with a concept or idea until it gets close to the final product. Moreover, this approach reduces the risk of ending up with a product nobody wants.

A similar approach and reasoning are used in Software Engineering as well. Building a prototype allows focusing at the beginning on the essential characteristics or aspects of the application, process or (business) model under consideration. Depending on the case, one can focus on the user interface (UI), database access, the integration mechanism or any other feature that involves a challenge. As in the case of the UI, one can build several prototypes that demonstrate different designs or architectures. The initial prototype can go through a series of transformations until it reaches the desired form, after which more functionality is integrated and the end product refined gradually. This iterative and incremental approach is known as rapid evolutionary prototyping.

A prototype is especially useful when dealing with uncertainty, e.g. when adopting (new) technologies or methodologies, when mixing technologies within an architecture, when the details of the implementation are not known, when exploring an idea, when the requirements are expected to change often, etc. Rapidly building a prototype allows validating the requirements, responding agilely to change, getting the customers’ feedback and sign-off as early as possible, and showing them what’s possible and what the future application can look like, all without investing too much effort. It’s easier to change a design or an architecture in the concept and design phases than later.

In BI, prototyping usually comes down to building queries to identify the source of the data, reengineer the logic from the business application, and prove whether the logic is technically feasible, with feasibility translating into robustness, performance and flexibility. In projects that have a broader scope, one can attempt building the needed infrastructure for several reports, to make sure that the main requirements are met. Similarly, one can use prototyping to build a data warehouse or a data migration layer. Thus, one can build all or most of the logic for one or two entities, resolving the challenges for them, and once the challenges are solved, one can go ahead and gradually integrate the other entities.
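
As a small illustration of such query prototyping, the sketch below runs a candidate report query against an in-memory SQLite table and times it; the table, columns and query are made-up stand-ins for the actual source system’s structures.

```python
import sqlite3
import time

# Prototype of a single report query against an illustrative sales table;
# in practice the query would run against the actual source system or data warehouse.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sales (Region TEXT, Product TEXT, Amount REAL);
INSERT INTO Sales VALUES ('EMEA', 'A', 100.0), ('EMEA', 'B', 250.0), ('APAC', 'A', 75.0);
""")

query = "SELECT Region, SUM(Amount) AS Revenue FROM Sales GROUP BY Region ORDER BY Revenue DESC"

start = time.perf_counter()
rows = con.execute(query).fetchall()
elapsed = time.perf_counter() - start

for region, revenue in rows:
    print(region, revenue)
print(f"query returned {len(rows)} rows in {elapsed:.4f}s")
```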

Rapid prototyping can also be used in the implementation of a strategy or management system to prove the concepts behind it. One can thus start with a narrow focus and integrate more functions, processes and business segments gradually, in iterative and incremental steps, each step allowing one to integrate the lessons learned, address the risks and opportunities, check the progress and change direction as needed.

Rapid prototyping can prove to be a useful tool when given the chance to prove its benefits. Through its iterative and incremental approach it allows reaching the targets efficiently.



08 May 2018

📦Data Migrations (DM): Facts, Principles and Practices

Data Migration
Data Migrations Series

Introduction

Ask a person who is familiar with cars 'how a car works' - you’ll get an answer, even if it doesn’t entirely reflect reality. Of course, the deeper one’s knowledge about cars, the more elaborate or exact the answer. One doesn’t have to be a mechanic to give an acceptable explanation, though in order to design, repair or build a car one needs extensive knowledge about a car’s inner workings.

Keeping the proportions, the same statements are valid for the inner workings of Data Migrations (DM) – almost everybody in IT knows what a DM is, though to design or perform one you might need an expert.

The good news about DMs is that their inner workings are less complex than those of cars. Basically, a DM requires some understanding of data architecture, data modelling and data manipulation, and some knowledge of business data and processes. A data architect, a database developer, a data modeler or any other data specialist can approach such an endeavor. In theory, with some guidance, a person with knowledge about business data and processes can also do the work. Even if DMs imply a certain complexity, they are not rocket science! In fact, there are tools that can be used to do most of the work, and there are general principles and best practices about the architecture, planning and execution that can help in the process.

Principles and Best Practices

It might be useful to explain the difference between principles and best practices, because they’ll more likely lead you to success if you understand and incorporate them in your solutions. Principles, as patterns of advice, are general or fundamental ideas, truths or values stated in a context-independent manner. Practices, on the other hand, are specific actions or applications of these principles, stated in a context-dependent way. The difference between them is relatively thin and therefore easy to confound, though by looking at their generality one can easily identify which is which.

For example, in the ’60s the “keep it simple, stupid” (aka KISS) principle became known, which states that a simple solution works better than a complex one, and that therefore one should seek simplicity in design as a key goal. Even if somewhat pejorative, it’s a much simpler restatement of Occam’s razor – do something in the simplest manner possible, because simpler is usually better. To apply it one must understand what simplicity means and how it can be translated into designs. According to Hans Hofmann, “the ability to simplify means to eliminate the unnecessary so that the necessary may speak”, or, in a quote attributed to Einstein, “everything should be made as simple as possible, but not simpler”. This is the range within which the best practices derived from KISS can be defined.

There are multiple practices that allow reducing the complexity of DM solutions: start with a Proof-of-Concept (PoC), start small and build incrementally, use off-the-shelf software, use the best tool for the purpose, use incremental data loads, split big data files into smaller ones, and so on. As can be seen, all of them are direct actions that address specific aspects of the DM architecture or process.
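
To pick just one of these practices, splitting a big data file into smaller ones can be sketched as follows; the approach assumes a CSV input, and the file names and chunk size are chosen only for illustration.

```python
import csv

def split_csv(path, rows_per_file=100_000, prefix="chunk"):
    """Split a large CSV file into smaller files of at most rows_per_file rows,
    repeating the header in each chunk so the pieces can be processed independently."""
    with open(path, newline="", encoding="utf-8") as source:
        reader = csv.reader(source)
        header = next(reader)
        chunk, index = [], 1
        for row in reader:
            chunk.append(row)
            if len(chunk) == rows_per_file:
                _write_chunk(header, chunk, f"{prefix}_{index:03d}.csv")
                chunk, index = [], index + 1
        if chunk:
            _write_chunk(header, chunk, f"{prefix}_{index:03d}.csv")

def _write_chunk(header, rows, path):
    with open(path, "w", newline="", encoding="utf-8") as target:
        writer = csv.writer(target)
        writer.writerow(header)
        writer.writerows(rows)

# Example: split_csv("customers_big.csv", rows_per_file=50_000)
```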


Data Migration Truths

When looking at principles and best practices they seem to be further rooted in some basic truths or facts common to most DMs. When considered together, they offer a broader view and understanding of what a DM is about.  Here are some of the most important facts about DMs:

DM as a project:

  • A DM is a subproject with specific characteristics
  • A DM is typically a one-time activity before Go live
  • A DM’s success is entirely dependent on an organization’s capability of running projects
  • Responsibilities are not always clear
  • Requirements change as the project progresses
  • Resources aren't available when needed
  • Parallel migrations require a common strategy
  • A successful DM can be used as a recipe for further migrations
  • A DM's success is a matter of perception
  • The volume of work increases toward the end


DM Architecture:

  • A DM is more complex and messier than initially thought
  • A DM needs to be repeatable
  • A DM requires experts from various areas
  • There are several architectures to be considered
  • The migration approach is dependent on the future architecture
  • Management Systems have their own requirements
  • No matter how detailed the planning something is always forgotten
  • Knowledge of the source and target systems isn’t always available
  • DMs are too big to be performed manually
  • Some tasks are easier to perform manually
  • Steps in the migration need to be rerun
  • It takes several iterations before arriving at the final solution
  • Several data regulations apply
  • Fall-back is always an alternative
  • IT supports the migration project/processes
  • Technologies are enablers and not guarantees for success
  • Tools address only a set of the needed functionality
  • Troubleshooting needs to be performed before, during and after migrations
  • Failures/nonconformities need to be documented
  • A DM is an opportunity to improve the quality of the data
  • A DM needs to be transparent for the business


DM implications for the Business:

  • A DM requires a downtime for the system involved
  • The business has several expectations/assumptions
  • Some expectations are considered as self-evident
  • The initial assumptions are almost always wrong
  • A DM's success/failure depends on business' perception
  • Business' knowledge about the data and processes is relative
  • The business is involved for the whole project’s duration
  • The business needs continuous communication
  • Data migration is mainly a business rather than a technical challenge
  • Business’ expertise in every data area is needed
  • DM and Data Quality (DQ) need to be part of a Data Management strategy
  • Old legacy system data have further value for the business
  • Reporting requirements come with their own data requirements


DM and Data Quality:

  • Not all required data are available
  • Data don't match the expectations
  • Quality of the data needs to be judged based on the target system
  • DQ is usually performed as a separate project with different timelines
  • Data don't have the same importance for the business
  • Improving DQ is a collective effort
  • Data cleaning needs to be done at the source (when possible)
  • Data cleaning is a business activity
  • The business is responsible for the data
  • Quality improvement is governed by 80-20 rule
  • No organization is willing to pay for perfect data quality
  • If it can’t be counted, it isn’t visible

More to come, stay tuned…

02 February 2007

🌁Software Engineering: Proof-of-Concept [PoC] (Definitions)

"A software trial that allows a prospect to test the product before buying it, while at the same time delivering a realistic slice of functionality." (Jill Dyché & Evan Levy, "Customer Data Integration: Reaching a Single Version of the Truth", 2006)

"Originally a term referring to a prototype proving a technical solution’s feasibility. Recently, the term has become synonymous with early-stage revenue-proving market feasibility. For example, without proof of concept revenue for early adopters, the inventor was having difficulty proving anyone needed what he had built." (Brad Feld & Sean Wise, "Startup Opportunities" 2nd Ed., 2017)

"A minimal implementation or execution of a process that serves as a sample sufficient to prove the success of the whole implementation or process." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"Is a realization of a certain method or idea in order to demonstrate its feasibility or a demonstration in principle with the aim of verifying that some concept or theory has practical potential. PoC represents a stage during the development of a product when it is established that the product will function as intended." (Soraya Sedkaoui & SRY Consulting, "Big Data Applications in Business", 2019)

"A demonstration of the feasibility of a particular method. PoC is used to test theoretical calculations and hypotheses in practice." (Kaspersky)

"A proof of concept (POC) is a demonstration of a product, service or solution in a sales context. A POC should demonstrate that the product or concept will fulfill customer requirements while also providing a compelling business case for adoption." (Gartner)

"A proof of concept (POC) is a demonstration to verify that certain concepts or theories have the potential for real-world application. In a nutshell, a POC represents the evidence demonstrating that a project or product is feasible and worthy enough to justify the expenses needed to support and develop it." (Techopedia) [source]

"A proof of concept (POC) is an exercise in which work is focused on determining whether an idea can be turned into a reality. A proof of concept is meant to determine the feasibility of the idea or to verify that the idea will function as envisioned." (Search CIO) [source]

"A proof of concept (POC) or a proof of principle is a realization of a certain method or idea to demonstrate its feasibility, or a demonstration in principle, whose purpose is to verify that some concept or theory has the potential of being used. A proof of concept is usually small and may or may not be complete." (SensiML)

