15 May 2019

600 Words: Rapid Prototyping

Rapid (software) prototyping (RSP) is a group of techniques applied in Software Engineering to quickly build a prototype (aka mockup, wireframe) in order to verify the technical feasibility of an application architecture, process or business model. A similar notion is that of a Proof-of-Concept (PoC), which attempts to demonstrate, by building a prototype, starting an experiment or running a pilot project, that a technical concept, business proposal or theory has practical potential. In other words, in Software Engineering RSP encompasses the techniques by which a PoC is led.

In industries that deal with physical products, a prototype is typically a small-scale object made from inexpensive material that resembles the final product to a certain degree, with some characteristics, details or features being completely ignored (e.g. the inner design, some components, the finishing, etc.). Building several prototypes is much easier and cheaper than building the end product, allowing one to play with a concept or idea until it gets close to the final product. Moreover, this approach reduces the risk of ending up with a product nobody wants.

A similar approach and reasoning are used in Software Engineering as well. Building a prototype allows focusing at the beginning on the essential characteristics or aspects of the application, process or business model under consideration. Depending on the case, one can focus on the user interface (UI), database access, an integration mechanism or any other feature that involves a challenge. As in the case of the UI, one can build several prototypes that demonstrate different designs or architectures. The initial prototype can go through a series of transformations until it reaches the desired form, after which more functionality is integrated and the end product refined gradually. This iterative and incremental approach is known as rapid evolutionary prototyping.

A prototype is especially useful when dealing with uncertainty, e.g. when adopting (new) technologies or methodologies, when mixing technologies within an architecture, when the details of the implementation are not known, when exploring an idea, when the requirements are expected to change often, etc. Building a prototype rapidly allows validating the requirements, responding agilely to change, and getting customers’ feedback and sign-off as early as possible, showing them what’s possible and what the future application can look like, all without investing too much effort. It’s easier to change a design or an architecture in the concept and design phases than later.

In BI, prototyping usually comes down to building queries to identify the source of the data, reengineer the logic from the business application, and prove whether the logic is technically feasible, feasibility translating into robustness, performance and flexibility. In projects that have a broader scope one can attempt building the needed infrastructure for several reports, to make sure that the main requirements are met. Similarly, one can use prototyping to build a data warehouse or a data migration layer. Thus, one can build all or most of the logic for one or two entities, resolving the challenges for them, and once the challenges are solved one can go ahead and gradually integrate the other entities.
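The entity-by-entity approach can be sketched in a few lines. In the illustration below the entity (Customers), the field names and the cleansing rules are all hypothetical; the point is that the typical challenges (duplicates, messy values, missing values) get resolved on a small scale first:

```python
# A minimal sketch of prototyping the load logic for a single entity first.
# Entity, fields and rules are hypothetical illustrations, not a real schema.

def transform_customers(rows):
    """Prototype the transformation for one entity (Customers), resolving
    its challenges before generalizing the pattern to other entities."""
    seen, result = set(), []
    for row in rows:
        key = row["customer_id"]
        if key in seen:          # challenge 1: duplicates in the source
            continue
        seen.add(key)
        result.append({
            "customer_id": key,
            "name": row.get("name", "").strip().title(),  # challenge 2: messy names
            "country": row.get("country") or "unknown",   # challenge 3: missing values
        })
    return result

sample = [
    {"customer_id": 1, "name": " acme corp ", "country": "DE"},
    {"customer_id": 1, "name": "ACME Corp", "country": "DE"},  # duplicate
    {"customer_id": 2, "name": "contoso"},                     # missing country
]
print(transform_customers(sample))
```

Once the prototype proves robust for this one entity, the same skeleton can be parameterized and applied gradually to the remaining entities.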

Rapid prototyping can also be used in the implementation of a strategy or management system to prove the concepts behind it. One can thus start with a narrow focus and gradually integrate more functions, processes and business segments in iterative and incremental steps, each step allowing one to integrate the lessons learned, address the risks and opportunities, check the progress and change the direction as needed.

Rapid prototyping can prove to be a useful tool when given the chance to prove its benefits. Through its iterative and incremental approaches it allows reaching the targets efficiently.

13 May 2019

600 Words: Good Programmer, Bad Programmer

The use of denominations like “good” or “bad” related to programmers and programming carries with it a thin separation between two perceptional poles that represent the end results of the programming process, reflecting the quality of the code delivered, respectively the quality of a programmer’s effort and behavior as a whole. This means that the usage of the two denominations is often contextual, “good” and “bad” being moving points on an imaginary value scale with a wide range of values within and outside the interval determined by the two.

The “good programmer” label is an idealization of the traits associated with being a programmer – analyzing and understanding the requirements, filling the gaps when necessary, translating the requirements into robust designs, developing quality code with a minimum of overwork, delivering on time, being able to help others, to work as part of a (self-organizing) team as well as alone when the project requires it, to follow methodologies, processes or best practices, etc. The problem with such a definition is that there’s no fixed limit, considering that a programmer’s job description can include an extensive range of requirements.

The “bad programmer” label is used in general when programmers (repeatedly) fail to meet others’ expectations, occasionally the labeling being done independently of one’s experience in the field. The volume of bugs and mistakes, the fuzziness of designs and of the code written, the lack of comments and documentation, and the lack of adherence to methodologies, processes, best practices and naming conventions are often considered indicators for such labels. Sometimes even the smallest mistakes or wrong perceptions of one’s effort and abilities can trigger such labels.

Labeling people as “good” or “bad” has the tendency of reinforcing one’s initial perception, in extremis leading to self-fulfilling prophecies – predictions that directly or indirectly cause themselves to become true, by the very terms of how the predictions came into being. Thus, when somebody labels another as “good” or “bad”, he will more likely look for signs that reinforce his previous beliefs. This leads to situations in which “good” programmers’ mistakes are more easily overlooked than “bad” programmers’ mistakes, even if the mistakes are similar.

A good label can in theory motivate, while a bad label can easily demotivate, though their effects vary from person to person. Such labels can easily become a problem for beginners, because they can affect beginners’ perception of themselves. It’s so easy to forget that programming is a continuous learning process in which knowledge is relative and highly contextual, each person having strengths and weaknesses.

Each programmer has a particular set of skills that differentiates him from other programmers. Each programmer is unique, an aspect reflected in the code one writes. Expecting programmers to fit an ideal pattern is unrealistic. Instead of using labels one should attempt to strengthen the weaknesses and make adequate use of a person’s strengths. In this approach reside the seeds of personal growth and excellence.

There are also programmers who excel in certain areas – conceptual creativity, ability in problem identification, analysis and solving, speed, ingenuity of design and of making the best use of the available tools, etc. Such programmers, as Randall Stross formulates it, “are an order of magnitude better” than others. Experience and skills harnessed with intelligence have a transformational power that is achievable by each programmer in time.

Even if we can’t always avoid such labeling, it’s important to become aware of the latent force the labels carry with them, and of the effect they have on our colleagues and teammates. A label can easily act as a boomerang, hitting us back long after it was thrown.

12 May 2019

600 Words: Misconceptions about Programming - Part II

Continuation

One of the organizational stereotypes is having a big room full of cubicles filled with employees. Even if programmers can work in such settings, improperly designed environments restrict creativity and productivity to a certain degree, making employees’ collaboration and socialization more difficult. Despite dedicated meeting rooms being available, an important part of the communication occurs ad-hoc. In open spaces each transient interruption can easily and inadvertently lead to a loss of concentration, which leads to wasted time, as one needs to pick up the thread of one’s thoughts and review the last written code, and occasionally to bugs.

Programming is expected to be a 9-to-5 job with an effective working time of 8 hours. Subtracting the interruptions and the pauses one needs to take, the effective working time decreases to about 6 hours. In other words, to reach 8 hours of effective productivity one needs to work closer to 10-11 hours. Therefore, unless adequately planned, each project starts with roughly a quarter to a third of overtime built in. Moreover, even if a task is planned to take 8 hours, given the need for further information the allocated time is often split over multiple days. The higher the need for further clarifications, the higher the chances for the effort to expand. In extremis, the effort can double itself.
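The arithmetic behind these figures can be made explicit; the 8- and 6-hour values are the rough estimates from the text above, not measurements:

```python
# Back-of-the-envelope calculation of the overtime implied by the estimates above.

NOMINAL_HOURS = 8.0    # contracted working day
EFFECTIVE_HOURS = 6.0  # what remains after interruptions and pauses

efficiency = EFFECTIVE_HOURS / NOMINAL_HOURS              # 0.75
hours_needed = NOMINAL_HOURS / efficiency                 # nominal hours for 8 effective ones
overtime_pct = (hours_needed / NOMINAL_HOURS - 1) * 100   # implied overtime

print(f"~{hours_needed:.1f} hours/day, ~{overtime_pct:.0f}% overtime")
```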

Spending extensive time in front of the computer can have adverse effects on programmers’ physical and mental health. Time pressure and some of the negative behavior that occurs in working environments have the same effect. Also, communication skills can suffer when they are not properly addressed. Unfortunately, few organizations give importance to these aspects, and few offer a work-life balance, even if a programmer’s job best fits and requires such an approach. What’s even more unfortunate is when organizations ignore the overtime, taking it as part of the job’s description. It’s also one of the main reasons why programmers leave, and why competent workforce is lost. In the end everyone’s replaceable, however what’s the price one must pay for it?

Trainings are typically offered within running projects, as they can be easily billed. Besides the fact that this behavior takes time unnecessarily from a project’s schedule, it can easily make trainings ineffective when the programmers can’t immediately use the new knowledge. Moreover, considering resources that come and go, the unwillingness to invest in programmers can have incalculable effects on an organization’s performance, respectively on programmers’ personal development.

Organizations typically look for self-motivated resources, this request often encompassing the organization’s whole motivational strategy. Long projects feel like a marathon in which it is difficult to sustain the same rhythm for the whole duration of the project. Managers and team leaders need to work on programmers’ motivation if they need sustained performance. They must act as mentors and leaders altogether, not only controlling tasks’ status and raving and storming each time deviations occur. It’s easy to complain about the status quo without doing anything to address the existing issues (challenges).

Especially in dysfunctional teams, programmers believe that management can’t contribute much to a project’s technical aspects, while management sees little or no benefit in making developers an integral part of the project’s decisional process. Moreover, the lack of transparency and communication leads to a wide range of frictions between the various parties.

Probably the most difficult thing to understand is people’s stubbornness in expecting different outcomes while following the same methods, and in ignoring common sense. The ease with which people ignore technological and Project Management principles and best practices is bewildering. It resides in human nature to stubbornly learn the hard way despite the warnings of the experienced; however, despite the negative effects, there’s often minimal learning in the process...

To be eventually continued…

600 Words: Misconceptions about Programming - Part I

Besides equating the programming process with a programmer’s capabilities and minimizing the importance of programming and programmers’ skills in the whole process (see previous post), there are several other misconceptions about programming that influence the process’ outcomes.

Having a deep knowledge of a programming language allows programmers to more easily approach other programming languages; however, each language has its own learning curve, ranging from a few weeks to half a year or more. The learning curve depends on the complexity of the languages known and of the language to be learned, the same applying to frameworks and architectures, the scenarios in which the languages are used, etc. One unrealistic expectation is that programmers are capable of learning a new programming language or framework overnight, this expectation pushing more pressure on programmers’ shoulders as they need to compensate for the knowledge gap in a short time. No, programming languages are not the same, even if there’s a high resemblance between them!

There’s a lot of code available online, and many programming tasks involve writing similar code. This makes people assume that programming comes down to copy-paste activities and, in extremis, that there’s no creativity in the act of programming. Besides the fact that using others’ code comes with certain copyright limitations, copy-pasting code is in general a way of introducing bugs into software. One can learn a lot from others’ code, though programmers’ challenge resides in writing better code, in reusing code while finding the right level of abstraction.
 
There’s a tendency on the market to build whole applications using wizard-like functionality and to generate source code based on data or ontological models. Such approaches work in a range of (limited) scenarios, and even if the trend is to automate as much of the process as possible, this is not what programming is about. Each such tool comes with its own limitations that sooner or later will push back. Changing the generated code in order to build new functionality or to optimize it is often not a feasible solution, as it imposes further limitations.

Programming is not only about writing code. It also involves problem-solving abilities and a certain understanding of the business processes, in which conceptual creativity and ingenuity of design can prove to be valuable assets. Modelling and implementing processes help programmers gain a unique perspective within a business.

For a programmer the learning process never stops. The release cycle of the known tools keeps getting shorter, each release bringing a new set of functionalities. Moreover, there are always new frameworks, environments, architectures and methodologies to learn. There’s a considerable amount of effort in expanding one’s (necessary) knowledge, effort usually not planned within projects or outside of them. Trainings help in the process, though they hardly scratch the surface. Often the programmer is forced to fill the knowledge gap in his free time. This adds up to the volume of overtime one must do on projects. In the long run it becomes challenging to find the needed time for learning.

In resource planning there’s a tendency to add or replace resources on projects while neglecting the influence this might have on a project and its timeline. Each new resource needs some time to accommodate to the role, to understand the project requirements, and to take over the work of another. Moreover, resources are replaced on projects with minimal or even no knowledge transfer necessary for the job ahead. Unfortunately, the same behavior occurs in consultancy as well, consultants being moved from one known functional area into another unknown one, swapping resources like engines between different types of cars and expecting that everything will work like magic.

11 May 2019

600 Words: The Dark Side of Programming

As a member of programmers’ extended community, it’s hard to accept some of the views that disparage programmers and their work. In some contexts, maybe the critics reveal some truths. It’s in human nature to generalize some of the bad experiences people have or to oversimplify some of programmers’ traits into stereotypes; however, generalizations and simplifications with pejorative connotations do no service to the criticized group, nor to the critics.

The programmer finds himself at the end of the chain of command, and he’s therefore the easiest to blame for the problems existing in software development (SD). One of the reasoning fallacies is equating the process of programming with programmers’ capabilities when the problems reside in the organization itself – the way it handles each step of the processes involved, the way it manages projects, the way it’s organized, the way it addresses cultural challenges, etc.

The meaningful part of SD starts with requirements’ elicitation, the process of researching and discovering the requirements upon which a piece of software is built. The results of the programming process are only as good as the inputs provided – the level of detail, accuracy and completeness with which the requirements were defined. It’s the well-known GIGO (garbage in, garbage out) principle. Even if the programmer questions some of the requirements, for example when they are contradictory or incomplete, each question adds more delays to the process, because clarifying the open issues often involves several iterations. Thus, one must choose between being on time and delivering the expected quality. Another problem is that the pay-off and perception for the two differ from the managerial and customers’ perspectives.

A programmer’s work, the piece of software he developed, is seen late in the process, when it may be too late to change something in useful time. This happens especially in the waterfall methodology, this aspect being addressed by more modern methodologies by involving the customers and getting constructive feedback early in the process, and by developing the software in iterations.

Being at the end of the chain of command, programming is often seen as a lowly endeavor, its importance minimized, maybe because it seems so obvious. Some even consider that anybody can program, and it’s true that, as with any activity, anyone can learn to program, the same as anyone can learn another craft; however, as with any craft, it takes time and skills to master. The simple act of programming doesn’t make one a programmer, the same as the act of singing doesn’t make one a singer. A programmer needs on average several years to achieve an acceptable level of mastery and profoundness. This can be done only by mastering one or more programming languages and frameworks, getting a good understanding of the SD processes and of what the customers want, and getting hands-on experience on a range of projects that allow programmers to learn and grow.

There are also affirmations that contain some degree of truth. Overconfidence in one’s skills results in programmers not testing their own work adequately. Programmers attempt to achieve a task with a minimum of effort, with the development environments and frameworks, the methodologies and other tools playing an important part. In extremis, through the hobbies, philosophies, behaviors and quirks they have, not necessarily good or bad, programmers seem to isolate themselves.

In the end the various misconceptions about programmers have influence only to the degree they can pervade a community or an organization’s culture. The bottom line is, as Bjarne Stroustrup formulated it, “an organization that treats its programmers as morons will soon have programmers that are willing and able to act like morons only” [1].

References:
[1] Bjarne Stroustrup, "The C++ Programming Language", 2nd Ed., 1991

10 May 2019

600 Words: Data Warehousing and Microsoft Dynamics 365

With Dynamics 365 (D365) Online, Microsoft made an important strategic move on the ERP market; however, in what concerns the BI & Data Warehousing (BI/DW) area, Microsoft changed the rules of the game by allowing no direct SQL access to the production environment. This primarily means that it will become challenging for organizations to use the existing DW infrastructure to access the D365 data, and for vendors and service providers to provide BI/DW solutions integrated within the D365 platform.

D365 includes its own data warehouse (actually a data mart) designed for financial reporting; however, as of now it can’t be extended to support other business areas. The solution favored by Microsoft for DW seems to be the use of an Azure SQL Database, aka BYOD (Bring Your Own Database), to which entity-based data can be exported incrementally (aka incremental push) or fully (aka full push) via Data Management Framework (DMF) packages.

Because many of the D365 tables (e.g. Inventory Transactions, Products, Customers, Vendors) were overnormalized over the years and other tables were added as part of new functionality, Microsoft introduced, to hide this complexity, a new layer of abstraction formed of data entities organized within an entity store. Data entities are view-like encapsulations of the underlying D365 table schema, the data import/export from and to D365 being performed extensively over these data entities via the DMF, which extends the Data Import/Export Framework (DIXF).

One can thus use a BYOD as a direct source for other reporting tools as long as they support a connection to Azure; otherwise the data can be further loaded into a database in the cloud, which seems to be the best option so far, as long as the organization has other data that need to be consolidated for reporting. From here on, one deals with the traditional way of reporting, and the available infrastructure can be extended to use an additional data source.

The BYOD solution comes with several restrictions: a package needs to be created for each business unit, no composite data entities can be exported, data entities that don’t have a unique key can’t be exported via an incremental push, data entities can change over time (new versions becoming available), and during synchronization no active locks should be on the database. In addition, organizations which followed this path also report some bugs that needed to be addressed via Microsoft support. Even if the about 1700 available data entities facilitate data consumption to some degree, they seem to be more appropriate for data migrations and data integrations than for DW workloads.

In the absence of direct SQL connectivity, in theory organizations can still use SSIS or similar integration tools to connect to D365 production databases and consume data entities via the Open Data Protocol (OData), a standard that defines a set of best practices for building and consuming RESTful APIs. Besides some architectural challenges, loading big tables with transactional data is reportedly slow and impracticable for loading a data warehouse. Therefore, the usability of such an architecture becomes limited over time.
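As a sketch of what consuming a data entity via OData looks like, the snippet below builds a query URL using the standard OData query options. The host, entity and field names are hypothetical, and the OAuth authentication that a real D365 endpoint requires is omitted:

```python
# A sketch of building an OData query for a D365 data entity. The host, entity
# and field names are hypothetical illustrations, not a verified schema.
from urllib.parse import quote

def build_odata_url(base_url, entity, select=None, filter_=None, top=None):
    """Compose an OData URL using the standard $select/$filter/$top options."""
    params = []
    if select:
        params.append("$select=" + ",".join(select))
    if filter_:
        params.append("$filter=" + quote(filter_))  # URL-encode spaces and quotes
    if top is not None:
        params.append(f"$top={top}")
    query = "?" + "&".join(params) if params else ""
    return f"{base_url}/data/{entity}{query}"

url = build_odata_url(
    "https://contoso.operations.dynamics.com",
    "CustomersV3",
    select=["CustomerAccount", "OrganizationName"],
    filter_="dataAreaId eq 'usmf'",
    top=1000,
)
print(url)
```

In practice such requests are paged by the service, which is one reason why loading large transactional tables this way is slow.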

Microsoft imposed a hard limitation upon its D365 architecture by making its production database inaccessible. Of course, there’s still time for Microsoft to do some magic and pull new solutions from the technology-stack hat. Unfortunately, the constraints imposed on the production environments limit organizations’ choices in building a modern and flexible data warehouse. For the future it would be great if the DMF could be used directly with standard SQL Server databases, thus avoiding the need for the intermediary Azure database, or if a real-time operational solution could be provided out-of-the-box. We’ll see what the future brings...

07 May 2019

600 Words: How Big Is Your Report?

How big are your reports? How big do reports need to be? Do your reports really reflect your needs? Have they become too cluttered with data? Do you have too many reports on the same topic? How many is too many? These are a few of the questions BI developers and users alike should ask themselves from time to time.

A report is any document with textual and/or graphical formatted output of data from one or more data sources, (previously) designed to provide a basis for decision making or operational activities. A report is characterized by the amount of data it holds (the datasets), the amount of data it is based on (the source data), the number and complexity of the queries on which the report is based, the number of data sources, the manner in which the data are structured (tabular, matrix, graphical), the filtering and sorting possibilities, as well as the navigability possibilities (drilldown, drill-through, slice-and-dice, etc.).

On the other side, important characteristics for users are a report’s performance, the amount of useful information it conveys, the degree to which it helps address a business need, the quality of the data, the degree to which it satisfies the various policies, the look and feel, and the possibility of exporting the data to standard file formats.

A report’s size is typically defined by the product of the columns and records the report displays, plus the formatting and the various types of graphical content; however, this depends on the filter criteria used. Usually the average size of a report, based on the typical filters used, is considered. Nowadays networks and database-specific techniques allow displaying fairly big reports (20-50 MB) in a fair amount of time (10-20 seconds) without affecting the network’s or the database’s performance, which for most of the requests should be enough. When the users need bigger volumes of data, then a direct data dump (extract) from the database should be considered, when possible. (A data export is not a report, and they should be differentiated as such.)

The number of records that can be shown in a report depends on the reporting framework’s capabilities, e.g. there are reporting tools that cope well with showing a few thousand records but have difficulties showing or exporting tens of thousands of records. The best example in this respect is Excel, with its well-known limitation of 65,536 records (2^16) and 256 columns (2^8), which in the meantime has been addressed in Excel 2007 and enlarged to about 1 million records (2^20), respectively 16,384 columns (2^14). Even so, reporting tools that use older drivers can fail to export all the data to Excel when the former limitation is reached.
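The limits mentioned above can be captured in a small helper; the limits themselves are the documented Excel ones, while the function is just an illustrative sketch:

```python
# Excel worksheet limits per file format: (max rows, max columns).
EXCEL_LIMITS = {
    "xls":  (2**16, 2**8),    # Excel 97-2003: 65,536 rows x 256 columns
    "xlsx": (2**20, 2**14),   # Excel 2007+: 1,048,576 rows x 16,384 columns
}

def fits_in_excel(rows, columns, fmt="xlsx"):
    """Check whether a result set can be fully exported to the given Excel format."""
    max_rows, max_cols = EXCEL_LIMITS[fmt]
    return rows <= max_rows and columns <= max_cols

print(fits_in_excel(100_000, 30, "xls"))   # False - exceeds the 65,536-row limit
print(fits_in_excel(100_000, 30, "xlsx"))  # True
```

Running such a check before an export is cheaper than letting an old driver silently truncate the result set.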

In general, reports with too many columns tend to obfuscate the understanding of the data and are more difficult to navigate. The more the user needs to scroll horizontally, the higher in general the obfuscation. If the users really need 50 columns then they should be provided; however, in general 20-25 should be enough for an operational report. Tactical and strategic reports need a restrained focus, and the information should be provided in one screen without the need for scrolling.

When reports get too big, it is recommended to split them into two or more reports addressing specific requirements; however, this can lead to too many distinct reports, and further to the duplication of effort for creating and documenting them, and to the duplication of logic and data. Therefore, the challenge is to find the right balance between the volume of reports, their usability and the effort needed to manage them. In certain scenarios it even makes sense to consolidate similar reports.

600 Words: Agile vs. Lean Organizations

Agile and lean are two important concepts that have pervaded organizations in the past 20-30 years, though they continue to have little effect on organizations’ operations.

Agile is rooted in the need to respond promptly to the changing needs of an organization. The agile philosophy was primarily groomed in Software Development to reconcile changing customer requirements with disciplined project execution; however, it can be applied to an organization’s processes as well. An agile process is in general a process designed to deliver the intended results in an effective and efficient manner by promptly addressing customers’ changing needs.

Lean is a systematic method for the minimization of waste, rooted as a philosophy in manufacturing. The lean mindset attempts to remove the non-value-added activities from processes, because they bring no value to the customers. Thus, a lean process is a process designed to deliver the intended results in an effective and efficient manner by focusing on the immediate needs of the customers – what customers want and value (when they want it).

Effective means being successful in producing a desired or intended result, while efficient means achieving maximum productivity with minimum wasted effort or expense. The requirement for a process to be effective and efficient translates into delivering what’s intended by using a minimum of steps, designed in such a way that the quality of the end results is not affected, at least not in the essential characteristics. Efficiency also translates into the flow of information, material and resources suffering minimal delays.

Agile focuses on promptly answering the changing requirements of customers’ needs, while lean focuses on what customers want and value while eliminating waste. Both mindsets seem to imply iterative and adaptive approaches in which improvement happens gradually. Through their nature the two mindsets seem to complement each other. Some even equate agile with lean; however, an agile process is not necessarily lean, and vice versa.

To improve the effectiveness and efficiency of its operations, an organization should aim at developing processes that are agile and lean, optimizing the information and material flows while focusing on its users’ changing needs and continuously eliminating the activities that lead to waste. And waste can take so many forms – unnecessary bureaucracy reflected in multiple and repetitive sign-offs and approvals, the lack of empowerment, not knowing what to do, etc.

Considerable time is wasted just because users don’t know or don’t understand an organization’s processes. If an organization can’t find rules that everyone understands, then a process is doomed, independently of the key area the process belongs to. There’s also the tendency of attempting to address each exception within a process, to the degree that multiple processes result. There’s no perfect process; however, one can define the basic flow and document the main exceptions, while providing users some guidelines for navigating the unknown and unpredictable.

As part of the same tendency, it makes sense to move requests that follow a standard procedure onto the list of standard requests, instead of following futile steps just for the sake of it. It’s the case of requests that can be fulfilled with internal resources, e.g. the development of reports or extraction of data, the provisioning of SharePoint websites, some performance optimizations, etc. In addition, one can unify processes that seem to be disconnected, e.g. the handling of changes as part of Change Management, respectively Project Management, as they involve almost the same steps.

It’s probably in each organization’s interest to discover and explore the benefits of applying the agile and lean mindsets to its operations and to integrate them into its culture.

600 Words: Project Agility under Eyeglasses – Part II


Employees are used to following procedures and processes, and when these aren’t available, insecurity rules – each day another idea is advanced about how things are supposed to work. Practically, the Agile approaches (incl. Agile PRINCE2) focus on certain aspects and ignore specific Project Management activities that need to be performed within a project – releasing resources for the project, getting users on board, getting management’s buy-in, etc. Therefore, they need to be used with a methodology that offers the lacking processes. It becomes problematic when the Agile approaches are considered self-consistent and the Project Management practices and principles are assumed to no longer apply.

It’s true that the Agile methods attempt to reconcile disciplined project execution with creativity and innovation; however, innovation is typically needed in design (incl. prototyping), while in programming there isn’t a lot of room for creativity per se. The real innovation appears when the customer lists the functionality it needs from a system and the vendor, after analyzing all the related requirements, is capable of evaluating and proposing a solution based on the industry’s trending technologies. That’s innovation - not changing controls in user interfaces!

User stories are good for situations in which an organization doesn’t know what it’s doing or the tasks have a deep segmentation and specialization. Starting from user stories and building upwards to processes can prove to be a waste of time the customer pays for, while the approach leaves little room for innovation. In big projects it’s also difficult to sense the contradictions between user stories or their duplication. Even if user stories may (though not necessarily) allow a better effort estimate, the level of detail can become overwhelming for any skilled solution architect.

It’s also true that an agile approach needs a culture with certain characteristics. A culture can’t be changed with one project or several projects running in parallel. Typically, it’s recommended to start with a pilot test, assess the organization’s readiness, disseminate knowledge, start several small to medium projects and build from there. For sure, starting a big project with an agile methodology will involve more challenges, and the challenges will push back to the same extent.

One sign of agility is when self-organizing teams emerge within projects; however, it takes time and training to build such teams. The seeds must be planted long before for such teams to emerge. The key is being able to work in such teams. In extremis, conflicts appear when multiple self-organizing teams emerge, each with its own political agenda - agendas that don’t necessarily match the project manager’s or stakeholders’ agendas - and from here arises a large range of potential conflicts.

The psychological effect of tight sprints (iterations) and daily status meetings for the whole duration of a project shouldn’t be neglected. It builds unnecessary stress and, unless the planning reaches perfection, the programmer or consultant will often find himself on the defensive. The frequent meetings can easily become a source of nuisance and, in extremis, can lead to extreme behavior that can easily affect productivity and the involved persons’ health.

Personally, I wouldn’t recommend using an Agile methodology for a big project like an ERP implementation unless it was adequately adapted to the organization’s needs. This doesn’t necessarily mean that the Agile methods aren’t suitable for big projects; it means that the risks are high because in big projects there’s the chance for all the mentioned issues to occur.

Despite the weak points of the Agile methods, when adequately applied, they have the chance of performing better than the “traditional” approaches. Even if people tend to focus on the negative sides, there’s a lot of potential in being agile.

600 Words: Project Agility under Eyeglasses – Part I

There are more and more posts in cyberspace arguing against the agile practices and the way they are understood and implemented by organizations. Some try to be hilarious [5]; others keep a scholastic seriousness [1] [2] [3] [4], and all of them make some valid points. In each remark there are some seeds of truth, even if context-dependent.

Personally, I embrace an agile approach when possible; however, I find it difficult to choose between the agile methodologies available on the market because each of them introduces some concepts that contradict what it means to be agile - to respond promptly to business needs. It doesn’t mean that one must consider each requirement, but that it’s appropriate to consider those which have a business justification. Moreover, organizations need to adapt the methodologies to their needs, and seldom vice versa.
Considering the Agile Manifesto, it’s difficult to take seriously statements that lack precision; formulations like “we value something over something else” are more of a wish than principles. When people don’t understand what the agile “principles” mean, one occasionally hears statements like “we need no documentation”, “we need no project plan”, “the project plan is not important”, “Change Management doesn’t apply to agile projects” or “we need only high-level requirements because we’ll figure out where we’re going on the way”. Because of the lack of precision, a mocker can reduce the lesser-valued concept to null and still keep the validity of the agile “principles”.
The agile approaches seem to lack control. If you put the users in charge of the scope, then you risk having a product that offers a lot though misses the essential, and is thus unusable or usable only to a lower degree. Agile works well for prototyping something to show to the users, when the products are small enough to easily fit within an iteration, or when the vendor wants to gain a customer’s trust. Therefore, agile works well with BI projects, which in general combine all three aspects.
An abomination is the work in fixed sprints or iterations of one or a few weeks, and then chopping the functionality to fit the respective time intervals. If you have the luck of having sign-offs and other activities that steal your time, then the productive time shrinks by up to 50% (the smaller the iterations, the higher the percentage). What’s even more inconceivable is that people ignore the time spent on bureaucracy. If this way of working repeats in each iteration, then the project duration multiplies by a factor between 2 and 4, the time spent on Project Management increasing by the same factor. What’s not understandable is that, despite the bureaucracy, adherence to delivery dates, budget and quality is still required.
Sometimes one has the feeling that people think that software development and other IT projects work like building a house or manufacturing a mug. You choose the colors, the materials, the dimensions and voilà, the product is ready. IT projects involve a lot of the unforeseen, and one must react agilely to it. Here resides one of the most important challenges.
Communication is an important challenge in a project, especially when multiple interests are involved. Face-to-face conversation is one of the nice-to-have items on the wish list; however, in practice it isn’t always possible. One can’t expect all the resources to be available to meet and decide. In addition, one needs to document everything, from meeting minutes to Business Cases and requirements. A certain flexibility in changing the requirements is needed, though one can’t change them arbitrarily; there must be a concept behind the changes, otherwise the volume of rework can easily make a project’s budget explode.
Resources:
[1] Harvard Business Review (2018) Why Agile Goes Awry - and How to Fix It, by Lindsay McGregor & Neel Doshi (Online) Available from: https://hbr.org/2018/10/why-agile-goes-awry-and-how-to-fix-it
[2] Forbes (2012) The Case Against Agile: Ten Perennial Management Objections, by Steve Denning (Online) Available from: https://www.forbes.com/sites/stevedenning/2012/04/17/the-case-against-agile-ten-perennial-management-objections/#6df0e6ea3a95
[3] Springer (2018) Do Agile Methods Work for Large Software Projects?, by Magne Jørgensen (Online) Available from: https://link.springer.com/chapter/10.1007/978-3-319-91602-6_12
[4] Michael O Church (2015) Why “Agile” and especially Scrum are terrible (Online) Available from: https://michaelochurch.wordpress.com/2015/06/06/why-agile-and-especially-scrum-are-terrible/
[5] Dev.to (2019) Mockery of agile, by Artur Martsinkovskyi (Online) Available from: https://dev.to/arturmartsinkovskyi/mockery-of-agile-5bdf

06 May 2019

600 Words: Key Performance Indicators

Key Performance Indicators (KPIs) are quantifiable measurements (aka metrics) that reflect the critical success factors of an organization with respect to its strategic goals and objectives. They allow measuring the progress toward reaching the defined goals and, to some degree, forecasting further evolution. They help keep the focus on the goals, increase awareness of the goals and provide visibility into the business.

As they reflect an organization’s objectives, KPIs need to be anchored in and aligned with them. If there’s no association with an objective, then one doesn’t deal with a KPI but with another form of performance metric. Therefore, KPIs need to change with the objectives; they are not fixed.

One important requirement for a KPI is to be defined using the SMART (specific, measurable, attainable, relevant, time-bound) criteria. Thus a KPI needs to be clear and unambiguous (specific), needs to measure the progress against a goal (measurable), needs to be realistic (attainable), needs to be relevant for the business and its current strategy (relevant), and needs to specify when the result(s) can be achieved (time-bound). Beyond the SMART criteria, some also require a KPI to be periodically and consistently evaluated and reviewed (trackable) and agreed upon by the parties affected by it (agreed).
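The SMART criteria translate naturally into a small data structure. The following is a minimal sketch, not a standard model - all field names and the sample values are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Kpi:
    """Minimal KPI definition following the SMART criteria (illustrative)."""
    name: str       # specific: clear, unambiguous designation
    objective: str  # anchor: the organizational objective it measures
    unit: str       # measurable: unit of measure
    target: float   # attainable: realistic target value
    due: date       # time-bound: when the target should be reached
    owner: str      # agreed: the party responsible for the metric

    def progress(self, actual: float) -> float:
        """Progress toward the target as a ratio, capped at 1.0."""
        return min(actual / self.target, 1.0)

kpi = Kpi(name="On-time delivery rate",
          objective="Improve customer satisfaction",
          unit="%", target=95.0, due=date(2019, 12, 31),
          owner="Head of Logistics")
print(round(kpi.progress(76.0), 2))  # 0.8
```

Keeping the objective and owner as mandatory fields mirrors the requirements above: a measurement without an anchoring objective is not a KPI, and one without an agreed owner triggers no action.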

A KPI needs to be visible within an organization, understandable and non-redundant. Even if KPIs are a tool for the upper management, their definition and impact need to be visible to and understood by all the people working with them, even if this can lead to unexpected behavior. The requirement for non-redundancy implies a partition of the KPIs that limits the cases in which two or more KPIs provide the same information.

A KPI needs to be supported by actions and needs to trigger actions. It’s nice to have KPIs reported periodically to the upper management, though as long as no action is triggered, there’s no value in them. A KPI is a kind of reinforcement for questions like: “why are we doing well/badly?”. The negative variations must trigger some form of action; however, the positive variations too may warrant further analysis to understand what caused the improvement.

The variation of a KPI needs to be supported by facts - each variation needs to be explainable in one form or another. A number without a story remains a number that may or may not be trusted. Therefore, it might be necessary to have further metrics or reports that support the KPIs and can be used to identify the sources of variation, in order to understand the data.

Last but not least, KPIs need to be documented. The documentation needs to include at minimum a rough definition that covers the rationale, the boundaries as well as the critical values, the metric’s owner, the unit of measure, etc. In addition, one can add historical information about the KPI with respect to when and what caused variations, and how the variations were brought under control.

KPIs vary from one organization to another; the variation is influenced not only by the different goals organizations might have, but also by the fact that organizations tend to measure different things, often the wrong things. It’s generally recommended to have a small number of KPIs that reflect in one dashboard how the business is doing and what is important for the business.

KPIs provide a basis for change by providing insights into what needs to change to improve some aspects of the business. When adequately defined and measured, KPIs provide a good perspective on an organization’s effort in achieving its goals and objectives, and therefore a good tool for monitoring and steering the organization’s strategy.

05 May 2019

600 Words: Defining the Strategy

In a previous post an organization’s strategy was defined as a set of coordinated and sustainable actions following a set of well-defined goals, actions devised into a plan and designed to create value and overcome an organization’s challenges. In what follows, the components of the strategy are described succinctly.

A strategy’s definition should start with the identification of the organization’s vision - where the organization wants to be in the future; its mission statement - a precise description of what the organization does in turning the vision from concept to reality; its values - traits and qualities that are considered representative; and its principles - the guiding laws and truths for action. All these components have the purpose of defining at a high level the where (the vision), the why (the mission), the what (the core values) and by which means (the principles) of the strategy.

One of the next steps, which can be followed in parallel, is to take inventory of the available infrastructure: systems, processes, procedures, practices, policies, documentation, resources, roles and their responsibilities, KPIs and other metrics, ongoing projects and initiatives. Another step consists in identifying the problems (challenges), risks and opportunities existing in the organization as part of a SWOT analysis adjusted to the organization’s internal needs. One can extend the analysis to the market and geopolitical conditions and trends to identify further opportunities and risks. In another step, not necessarily disconnected from the previous ones, it is devised where the organization could be once the problems, risks, threats and opportunities are addressed.

Then the gathered facts are divided into two perspectives - the “IS” perspective encompasses the problems together with the opportunities and threats existing in the organization that define the status quo, while the “TO BE” perspective encompasses the desired state. A capability maturity model can be used to benchmark the organization’s current maturity with respect to industry practices and, based on the desired capabilities, to identify the organization’s future maturity.

Based on these, the organization can start formulating its strategic goals - a set of long-range aims for a specific time frame - from which a (hierarchical) set of objectives is derived, measurable steps the organization takes in order to achieve the goals. Each objective carries with it a rationale - why the objective exists, an impact - how the objective will change the organization once achieved, and a target - how much of the objective needs to be achieved. In addition, one can link the objectives to form a set of hypotheses - predictive statements of cause and effect that involve approaches to dealing with the uncertainty. To pursue each objective, methods and means are devised - the tactics (lines of action) that will be used to approach the various themes. It’s important to prioritize the tactics and differentiate between quick wins and long-term tactics, as well as to define alternative lines of action.

Then the tactics are aggregated into a strategic plan (roadmap) that typically covers a minimum of 3 to 5 years with intermediate milestones. Following the financial cycles, the strategy is split into yearly units, with intermediate targets assigned to each objective. Linked to the plan, the needed costs, effort and resources are estimated. Last but not least, the roles, management and competency structures needed to support the strategy’s implementation are defined, with their responsibilities, competencies and proper level of authority. Based on the set objectives, the KPIs used to measure the progress (success) and steer the strategy over its lifecycle are devised.

By addressing all these aspects, a first draft of the strategy is created; it will need several iterations to mature, with further changes deriving from contact with reality.

600 Words: The Reason behind a Strategy

Many of the efforts that go on in organizations are just castles built on thin air, and even if some of the architectures are wonderful, without a foundation they tend to crash under their own weight. For example, the investment in a modern BI solution, in an ERP or a CRM system, seldom meets an organization’s expectations; what’s even more unfortunate is that the potential introduced by the investments is harnessed only to a small degree, while the same old problems continue to exist, typically in new contexts.

An architect would more likely ask himself: What foundation is needed to support a castle, or the whole settlement the castle belongs to? What does it need to be made from? How should it be structured? How often does it need to be reconsolidated, and when? Who will participate in its building and maintenance? What is still needed to make the infrastructure self-reliant? What do other architects do? What’s best practice in the field? Many questions for which the architect needs to find optimal answers.

The strength of an edifice lies in its foundation. Its main purpose is to provide a solid, durable, self-reliant and maintainable structure to which the edifice can be anchored, that can support the current and future load of the edifice, and that keeps the edifice standing in the face of calamities. It must therefore address the core challenges faced by the edifice during its lifetime. When a group of edifices holds together as a settlement, a foundation is needed that supports the whole settlement, not only one edifice. Moreover, the foundation needs to be customized to address the environment’s characteristics and the owners’ plans for further development.

The foundation on which modern organizations build their edifice is a strategy rooted in the organization’s reason for existence (the mission), its wish of what to become (the vision), its beliefs (the core values) and its fundamental truths (the principles). A strategy - a term borrowed from the military - is a set of coordinated and sustainable actions following a set of well-defined goals, actions devised into a plan and designed to create value and overcome an organization’s challenges. Through its character, a strategy is the perfect tool for addressing holistically the problems, opportunities, strengths and weaknesses existing in an organization, for aligning the objectives toward the same goals, and for providing transparency and a common understanding of the status quo and the road ahead.

Having a strategy defined will not make things happen by itself; one also needs the capability of executing the strategy as a whole, and clear roles with responsibilities and proper authority. In addition, the strategy needs to be adapted in time to serve its purpose. This might mean changing the level of detail, or changing the strategy when opportunities or threats are identified or when goals become obsolete. To make this possible, one needs to define several processes that support the strategy through its whole lifecycle and a set of metrics that make the progress visible.

There are organizations that make it without having a written strategy; some go with the inertia provided by the adoption of tools and with the experience of individual workers who through their cooperation provide the needed improvement. To a higher or lower degree there’s a strategy fragmented across individuals or groups; however, these strategies don’t necessarily converge. The problem with such approaches is that the results are often suboptimal, especially because they are fragmented efforts, more likely with different, contradictory goals.

Like any other tool, a strategy has a potential that, when adequately harnessed, can help organizations achieve their (strategic) goals, though it’s up to each organization to harness that potential.

04 May 2019

600 Words: Push vs. Pull

In data integrations, data migrations and data warehousing there is the need to move data between two or more systems. In the simplest scenario there are only two systems involved, a source and a target system, though there can be complex scenarios in which data from multiple sources need to be available in a common target system (as in the case of data warehouses/marts or data migrations), or data from one source (e.g. ERP systems) need to be available in other systems (e.g. Web shops, planning systems), or there can be complex cases in which there is a many-to-many relationship (e.g. data from two ERP systems are consolidated in other systems). 

The data can flow in one direction, from the source systems to the target systems (aka unidirectional flow), though there are situations in which, once the data are modified in the target system, they need to flow back to the source system (aka bidirectional flow), as in the case of planning or product development systems. In complex scenarios the communication may occur multiple times within the same process until a final state is reached.

Independently of the number of systems and the type of communication involved, data need to flow between the systems as smoothly as possible, ensuring that the data are consistent across the various systems and available when needed. The architectures responsible for moving data between the systems are based on two simple mechanisms - push and pull - or combinations of them.

In a push mechanism, data are pushed from the source system into the target system(s), the source system being responsible for the operation. Typically the push happens as soon as an event occurs in the source system, an event that leads to or follows a change in the data. There are also cases in which it’s preferred to push the data at regular points in time (e.g. hourly, daily), especially when the changes aren’t needed immediately. This latter scenario still allows making changes to the data in the source until they are sent to the other system(s). When the ability to make changes is critical, this can be controlled via specific business rules.
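The event-driven variant of the push can be sketched in a few lines of Python - all class and record names here are illustrative, not a real integration API. The source keeps a list of subscribed targets and notifies each of them the moment a record changes:

```python
from typing import Callable, Dict, List

class SourceSystem:
    """Toy source system that pushes every change to its subscribers."""

    def __init__(self) -> None:
        self._records: Dict[str, dict] = {}
        self._subscribers: List[Callable[[str, dict], None]] = []

    def subscribe(self, callback: Callable[[str, dict], None]) -> None:
        """Register a target system to be notified on every change."""
        self._subscribers.append(callback)

    def update(self, key: str, record: dict) -> None:
        """Change a record and immediately push it to all targets."""
        self._records[key] = record
        for push in self._subscribers:
            push(key, record)

# The "target system" is just a dictionary receiving the pushed records.
target: Dict[str, dict] = {}
source = SourceSystem()
source.subscribe(lambda key, record: target.__setitem__(key, record))
source.update("1001", {"status": "shipped"})
print(target)  # {'1001': {'status': 'shipped'}}
```

The responsibility clearly sits with the source: the target does nothing but accept what arrives, which is why the data are timely but the target has no say in when they come.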

In a pull mechanism, data are pulled from the source system into the target system, the target system being responsible for the operation. This usually happens at regular points in time or on demand; however, the target system has to check whether the data have changed.
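The pull side can be sketched the same way (names again illustrative): the target polls the source and uses a modification timestamp as a watermark, copying only the records changed since the last pull - the change detection mentioned above:

```python
from typing import Dict, Tuple

def pull_changes(source: Dict[str, Tuple[int, dict]],
                 target: Dict[str, dict],
                 last_pull: int) -> int:
    """Copy records modified after last_pull; return the new watermark.

    Each source entry maps a key to (modified_at, record); the target
    is responsible for checking what changed - the essence of a pull.
    """
    watermark = last_pull
    for key, (modified_at, record) in source.items():
        if modified_at > last_pull:      # the target detects the change
            target[key] = record
            watermark = max(watermark, modified_at)
    return watermark

# 'A' was modified at time 5, 'B' at time 12; the last pull was at 10.
source = {"A": (5, {"qty": 10}), "B": (12, {"qty": 3})}
target: Dict[str, dict] = {}
watermark = pull_changes(source, target, last_pull=10)
print(target, watermark)  # {'B': {'qty': 3}} 12
```

In a real scheduled pull this function would run at each interval with the watermark persisted between runs; records without a reliable modification timestamp would need checksums or full comparison instead.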

Hybrid scenarios may involve a middleware that sits between the systems and is responsible for pulling the data from the source systems and pushing them into the target systems. Another hybrid scenario is when the source system pushes the data to an intermediary repository, the target system(s) pulling the data on a need basis. The repository can reside on the source side, on the target side or in between. A variation of it is when the source informs the target that a change happened and it’s up to the target to decide whether it needs the data or not.

The main differentiators between the various methods are the timeliness, completeness and consistency of the data. Timeliness refers to the urgency with which data need to be available in the target system(s), completeness to the degree to which the data are ready to be sent, and consistency to the degree to which the data from the source are consistent with the data from the target systems.

Based on their characteristics, integrations seem to favor the push methods, while data migrations and data warehousing favor the pull methods, though which method suits best depends entirely on the business needs under consideration.

600 Words: Programming as Art

Seeing programming as an art may be an idealistic thought, while attempting to describe programming as an art may seem a thankless task. However, one can talk about the art of programming the same way one can talk about the art of applying a craft. It’s a reflection of the mastery reached and of what it takes to master something. Some call it art, others mastery; in the end, it’s the drive that makes one surpass his own condition.

Besides an audience’s experience of a creative skill, art means the study, process and product of a creative skill. Learning the art of programming means primarily learning its vocabulary and grammar - the language; then one has to learn the rules, how and when to break them, and in the end how to transcend the rules to create new languages. The poet uses metaphors and rhythm to describe the world he sees; the programmer uses abstraction and patterns for the same. Programming is the art of using patterns to create new patterns, much like the poet does.

The drive of art is creativity, independently of whether one talks about music, painting, poetry, mathematics or any other science. A programmer’s creativity is reflected in the way he uses his tools and builds new ones. Despite the limits imposed by the programming languages he uses, the programmer can anytime borrow the knowledge of other sciences - mathematics, physics or biology - to describe the universe and make it understandable for machines. In fact, when we understand something well enough to explain it to a computer, we call it science [1].

Programming is both a science and an art. Paraphrasing Leonard Tippett [2], programming is a science in that its methods are basically systematic and have general application, and an art in that their successful application depends to a considerable degree on the skill and special experience of the programmer, and on his knowledge of the field of application. The programmer seems to borrow an engineer’s natural curiosity, attention to detail, thirst for knowledge and continual improvement, though these are already in the programmer’s DNA.

In programming, aesthetics is judged by the elegance with which one solves a problem and transcribes its implementation. The programmer is in a continuous struggle with simplicity, reusability, abstraction, elegance, time and complexity. Beauty resides in the simplicity of the code, the ease with which complexity is reduced to computability, the way everything fits together into a whole. Through reusability and abstraction, the whole becomes more than the sum of its parts.

Programming takes its rigor and logic from mathematics. Even if the programmer is not a mathematician, he borrows from a mathematician’s way of seeing the world in structures, patterns, order, models (approximations), connectedness and networks, the designs converging to create new paradigms. The programmer’s imagery conjures some part of the mathematician’s art.

In extremis, through the structures and thought patterns, the programmer is in a continuous search for meanings - creating a meaning to encompass other meanings, meanings which will hopefully converge to a greater good. It resembles the art of the philosopher, without the historical baggage.

Between the patterns of the mathematician and the philosopher’s search for truth, between the poet’s artistry of manipulating language to create new views and the engineer’s cold search for formalism and method, programming is a way to understand the world and create new worlds. The programmer becomes the creator of glimpses of universes which, when put together like the pieces of a puzzle, can create a new reality - not necessarily a better one, but a reality that reflects the programmer’s art. For the one who has learned to master a programming language, nothing is impossible.

Quotations used:
(1)“Learning the art of programming, like most other disciplines, consists of first learning the rules and then learning when to break them.” (Joshua Bloch, “Effective Java”, 2001)
(2)“[Statistics] is both a science and an art. It is a science in that its methods are basically systematic and have general application; and an art in that their successful application depends to a considerable degree on the skill and special experience of the statistician, and on his knowledge of the field of application, e.g. economics.” (Leonard Tippett, “Statistics”, 1943)

02 May 2019

600 Words: Programmer, Coder or Developer?

Programmer, coder and (software) developer are terms used interchangeably to denote a person who writes a set of instructions for a computer or any other electronic device. Looking at the intrinsic meaning of the three denominations, a programmer is a person who writes programs, a coder is a person who writes code, and a developer is one who develops (makes grow) a piece of software. They look like redundant definitions, don’t they?

A program is a stand-alone piece of code written for a given purpose - in general it’s used to transform inputs into outputs or specific actions, and involves a set of structures, libraries and other resources. Programming means primarily being able to write, understand, test and debug programs; however, there can be other activities like designing, refactoring and documenting the programs and the resources needed. It also involves the knowledge of a set of algorithms, libraries, architectures, methodologies and practices that can be used in the process.

Code may refer to a program as well as to parts of a program. Writing code means being able to use and understand a programming language’s instructions for a given result - validating input, acting on diverse events, formatting and transforming content, etc. The code doesn’t necessarily have to stand alone, often being incorporated inside documents like web pages, web parts or reports.

Development of software usually means more than programming, as the former is considered a process of conceiving, specifying, designing, programming, documenting, testing and maintaining software. In practice the gap between the two is negligible, as programming typically involves the other activities as well.

Programmer and coder are unfortunately often used with a pejorative connotation. Therefore, the denomination of developer seems fancier. An even fancier term is that of software engineer, software engineering being the application of engineering to the development of software in a systematic method.

In IT there are several other roles which tangentially involve the writing of instructions - database administrator, security engineer, IT analyst, tester, designer, modeler, technical writer, etc. It looks like a soup of fancy denominations chosen expressly to confuse nontechnical people. Thus a person who has covered many of the roles mentioned above sometimes finds it difficult to pick the most appropriate denomination.

A person who writes such code doesn’t have to be a programmer or even an IT professional. There are many tools on the market whose basic functionality can be extended with the help of scripts - Excel, Access, SSRS or SSIS. Many tools nowadays have basic drag-and-drop and wizard-based functionality which limits the need for coding, and the trend seems to move in this direction. Another trend is minimizing the need for writing code to the degree that full applications can be built with drag and drop, though some degree of coding is still needed. In demand seems to be the knowledge of one or two universal scripting languages and data-interchange formats.

Probably the main factor for naming somebody a programmer is whether he does this for a living. On the other hand, a person can identify himself as a programmer even if his role involves only a small degree of programming, or programming is more of a hobby. One can consider programming a way of living, a way of understanding and modeling life. This way of life borrows a little from the ways of being of the mathematician, the philosopher and the engineer.

In the end, it’s less important which denomination is proper. More important is what one identifies with and what one makes of one’s skills - the mental and machine-understandable universes one builds.

01 May 2019

600 Words: SQL Server Feature Bloat

In an old SSWUG editorial, “SQL Server Feature Bloat” by Ben Taylor, the author raises the question of whether SQL Server features like the support for Unicode, the increase in page size for data storage to 8 KB or the storage of additional metadata and statistics create a feature bloat. He further asks whether customers may consider other database solutions, and whether this aspect is important for customers.

A software or feature bloat is the “process whereby successive versions of a computer program become perceptibly slower, use more memory, disk space or processing power, or have higher hardware requirements than the previous version - whilst making only dubious user-perceptible improvements or suffering from feature creep” (Wikipedia).

Taylor’s question seems justified, especially considering the number of features added in the various releases of SQL Server. Regardless of whether they attempt to improve performance, extend existing functionality or provide new functionality, many of these features target special usage and are hardly used by average applications that use SQL Server only for data storage. After upgrading to a new release it may happen that customers see no performance improvement in the features they use, or that performance even decays while the new release needs more resources to perform the same tasks. This can make customers wonder whether all these new features bring any benefit for them.

It’s easy to neglect the fact that SQL Server is often used just as the storage layer in an architecture, and that some of the problems more likely reside in the business or presentation layers. In addition, not every solution is designed to take advantage of a database’s (latest) features. Besides, it may happen that the compatibility level is set to a lower value, so the latest functionality won’t be used at all.

Probably the customers hope that the magic will happen immediately after the upgrade. Some features, like the ones regarding the engine’s optimization, are enabled by default and a performance gain is expected; however, to take advantage of most new features the existing applications need to be redesigned. With each new edition it’s important to look at the opportunities provided by the upgrade and to analyze the performance benefit, as there’s often a trade-off between benefit and effort on one side, respectively between technical advantages and disadvantages on the other.

The examples used by Taylor aren’t necessarily representative because they refer to changes made prior to the SQL Server 2005 edition, and there are good arguments for their implementation. The storage of additional metadata and statistics is negligible in comparison with the size of the databases and the benefits, given that the database engine needs statistics to operate optimally. SQL Server moved from 2 KB pages to 8 KB pages between versions 6.5 and 7.0 probably because the latter offer good performance with efficient use of space. The use of the Unicode character set became a standard, given that databases needed to support multiple languages.

Feature bloat is not a problem that concerns only SQL Server, but also other database products like Oracle, DB2 or MySQL, and other types of software. Customers’ choice of one vendor’s products over another’s is often a strategic decision in which the database is just a piece of a bigger architecture. In the TPC-H benchmarks of recent years, SQL Server 2014 and 2016 scored higher than the competitors. It’s more likely that customers will move to SQL Server than vice versa, when possible. Customers expect performance, stability and security, and are willing to pay for them as long as the gain is visible.

29 April 2019

600 Words: Project Planning to Extremes

It is sometimes helpful to take a step back, observe, and then logically generalize the extremes of the observed facts; if possible, without judging people’s behavior, as there’s more to it than the eye can perceive. In some cases, however, one can feel that the observed situations are really close to the extreme. It’s the case of some tendencies met in project planning: not planning, planning for the sake of planning, expecting a plan to be perfect, setting a plan as fixed without the possibility of changing it in useful time, respectively changing the plan too often.

There are situations in which it’s better to be spontaneous and go with the flow. Managing a project isn’t one of these situations. As Lakein’s Law formulates it succinctly, “failing to plan is planning to fail”; or, paraphrasing Eisenhower (1) and Clausewitz (2), plans are useless, as no plan ever survived contact with the enemy (reality), but planning is indispensable, as a plan increases awareness of the project’s scope, actions, challenges, risks and opportunities, and allows devising the tactics and logistics needed to reach the set goals. Even if the plan no longer reflects reality, it can still be adapted to fit the new requirements. The more planning experience one has, the more natural it becomes to close the gap between the initial plan and reality, and to adapt the plan as needed.

There’s an important difference between doing something because one is forced to do it and doing it because one sees and understands the value of planning. There’s the tendency to plan for the sake of planning, because there’s the compulsion to do it. Besides documenting the what, when, why and who, and serving as a basis for action, the plan must reflect the project’s current status and the activities planned for the next reporting cycle. As soon as a plan is no longer able to reflect these aspects, it becomes in time unusable.

The enemy of a good plan can prove to be the dream of a perfect plan (3). Some may think that the holy grail of planning is the perfect plan, that the project can’t start until all the activities are listed to the lowest detail and the effort thoroughly assigned. Few plans actually survive the contact with reality, and a lot of energy can be lost working on the perfect plan.

Another similar behavior, rooted mainly in the methodologies used, is that of not allowing a plan to be changed for part or the whole duration of the project. Publilius Syrus recognized more than two millennia ago that a plan that admits no modification is a bad plan (4) per se. Methodologies and practices that don’t allow a flexible way of changing the plan do no service to projects. Often changes need to occur immediately and not at an ideal point in time, when maybe the effect is lost.

Modern Project Management tools allow building the dependencies between the various activities, and it’s inevitable that a change in one place will cause a chain reaction and lead to a contraction or dilation of the plan; this can happen with each planning iteration. In extremis, the end date will oscillate like the lines of a seismograph during an earthquake. It’s natural for this to happen in a project’s first phase; however, it’s the Project Manager’s duty to mitigate such variations.
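The chain reaction can be illustrated with a minimal forward-pass calculation over a toy activity network (activity names and durations are invented for the example): changing a single duration shifts every successor and, with it, the project’s end date.

```python
# Illustrative sketch only: a forward pass over a hypothetical activity
# network shows how changing one duration ripples through all successors.

def finish_day(activities, name, cache=None):
    """Earliest finish of an activity = max finish of its predecessors + own duration."""
    if cache is None:
        cache = {}
    if name not in cache:
        duration, predecessors = activities[name]
        start = max((finish_day(activities, p, cache) for p in predecessors), default=0)
        cache[name] = start + duration
    return cache[name]

plan = {
    "design": (10, []),
    "build":  (15, ["design"]),
    "test":   (5,  ["build"]),
    "golive": (2,  ["test"]),
}
print(finish_day(plan, "golive"))   # 32

plan["build"] = (25, ["design"])    # one delayed activity...
print(finish_day(plan, "golive"))   # 42 - the end date moved with it
```

Real scheduling engines do essentially this (plus backward passes, floats and calendars) on every recalculation, which is why a single change can move the whole plan.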

The project plan is a reflection of the project and of how it’s managed; therefore, one needs to give it the proper focus, as often and in as much detail as required.

Referenced quotes:
(1) “In preparing for battle I have always found that plans are useless, but planning is indispensable” (Eisenhower quoted by Nixon)
(2) “No plan ever survived contact with the enemy.” (Carl von Clausewitz)
(3) “The enemy of a good plan is the dream of a perfect plan.” (Carl von Clausewitz)
(4) "It's a bad plan that admits of no modification." (Publilius Syrus)

600 Words: From Disintegration to Integration

No matter how tight the integration between the various systems or processes, there will always be gaps that need to be addressed in one way or another. The problems are in general caused by design errors rooted in the complexity of the logic in the integration layer or in the systems integrated. The errors can range from missing or incorrect validation rules, mappings and parameters to data quality issues.

A unidirectional integration involves distributing data from one system (aka the publisher) to one or more other systems (aka the subscribers), while in bidirectional integrations systems can act as both publishers and subscribers, resulting in complex data flows with multiple endpoints. In the simplest integrations the records flow one-to-one between systems, though more complex scenarios can involve logic based on business rules, mappings and other types of transformations. The challenge is to reflect the states as needed by each system with minimal involvement from the users.
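As a rough sketch, a unidirectional flow can be pictured as a publisher pushing each record through a field mapping and a validation rule before it reaches the subscribers; all field and function names below are hypothetical, chosen only for illustration.

```python
# Illustrative sketch of a unidirectional integration: a publisher record
# passes through a field mapping and a minimal validation rule before being
# distributed to each subscriber. All field names are invented.

FIELD_MAP = {"CustNo": "customer_id", "CustName": "name"}  # publisher -> subscriber schema

def transform(record):
    """Rename publisher fields to the subscriber's schema, dropping unmapped ones."""
    return {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}

def validate(record):
    """A minimal business rule: the key field must be present and non-empty."""
    return bool(record.get("customer_id"))

def publish(record, subscribers):
    """Transform, validate, then deliver the record to every subscriber."""
    out = transform(record)
    if not validate(out):
        raise ValueError(f"invalid record: {out}")
    for deliver in subscribers:
        deliver(out)

received = []
publish({"CustNo": "C-1001", "CustName": "Contoso"}, [received.append])
print(received)   # [{'customer_id': 'C-1001', 'name': 'Contoso'}]
```

In a bidirectional setup each system would run both sides of this logic, which is exactly where mapping and validation errors start to multiply.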

Typically it falls within the application/process owners’ or key users’ responsibility to make sure that the integration works smoothly. When the integration makes use of interface or staging tables, they can be used as a starting point for the troubleshooting; however, even then the troubleshooting can be troublesome and involve considerable manual effort. When possible, the data can be exported manually from the various systems and matched in Excel or similar solutions. This often leads to personal or departmental solutions that are hard to maintain, control and support.

A better approach is to automate the process by importing the data from the integrated systems at regular points in time into the same database (much like in a data warehouse), model the entities and the needed logic there, and report the differences. Even if this approach involves a small investment in the beginning and some optimization of logic or performance over time, it can become a useful tool for troubleshooting the differences. Such solutions can be used successfully in multiple integration scenarios (e.g. web shop or ERP integrations).
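The core of such a comparison can be sketched in a few lines: extracts from the two systems, keyed by a shared identifier, are matched and the missing or diverging records reported. In practice the extracts would live in a common database and be compared with joins; the entity and field names below are invented for the example.

```python
# Minimal sketch of a difference report between two system extracts,
# keyed by a shared identifier. Entity and field names are invented.

def diff_report(source, target, key):
    """Return records missing on either side and records whose values differ."""
    src = {r[key]: r for r in source}
    tgt = {r[key]: r for r in target}
    return {
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        "missing_in_source": sorted(tgt.keys() - src.keys()),
        "mismatched": sorted(k for k in src.keys() & tgt.keys() if src[k] != tgt[k]),
    }

erp  = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
shop = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"},   {"id": 4, "status": "open"}]

print(diff_report(erp, shop, "id"))
# {'missing_in_target': [3], 'missing_in_source': [4], 'mismatched': [2]}
```

A SQL FULL OUTER JOIN over staged tables does the same matching at scale; the point is that the comparison logic is modeled once, centrally, instead of being redone ad hoc in Excel.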

A set of reports for each entity can help identify the differences between the systems. Starting from the reported differences, the users can identify, categorize and devise specific countermeasures for the various issues. The best time to have such a solution in place is shortly before or during UAT. This allows making sure that the integration layer really works, and helps correct the issues while they still have a small impact on the systems. Some integration issues might even lead to a postponement of the Go-Live. The second best time is when the first important issues are found, as the issues can be used as support for a Business Case for implementing this type of solution.

In general, it’s recommended to fix the problems in the integration layer and use the reports only for troubleshooting and for assuring that the integration runs smoothly. There are, however, situations in which the integration problems can’t be fixed without creating more issues – for example when multiple systems are involved and integrated over an integration bus.
One extreme approach, though not an advisable one, is to build a second integration to correct the issues of the first. This might work in theory; however, the risk of multiplying the issues is really high, and the complexity of troubleshooting increases with the degree of dependency between the two integrations. It would be more advisable to rebuild the integration anew, though this approach too has its advantages and disadvantages.

Bottom line: integration issues should be addressed while they are small, and an automated solution for comparing the data can help in the process.

24 April 2019

600 Words: The Butterflies of Project Management

Expressed metaphorically as “the flap of a butterfly’s wings in Brazil set off a tornado in Texas”, in Chaos Theory the “butterfly effect” is a hypothesis rooted in Edward N. Lorenz’s work on weather forecasting and used to depict the sensitive dependence on initial conditions in nonlinear processes – systems in which the change in input is not proportional to the change in output.

Even if overstated, the metaphor advances the idea that a small change (the flap of wings) in the initial conditions of a system can cascade into a chain of events leading to large-scale phenomena (the tornado). The chain of events is known as the domino effect and represents the cumulative effect produced when one event sets off a chain of similar events. If the butterfly metaphor doesn’t catch on, maybe it’s easier to visualize the impact as a big surfing wave – it starts small and increases in size to the degree that it can bring a boat to the shore or make an armada drown under its force.

Projects start as narrow activities; however, the longer they take and the broader they become, the more they tend to accumulate force and behave like a wave, having the force to push or drown an organization in the flood that comes with it. A project is not only a system but a complex ecosystem – an aggregation of living organisms and nonliving components with complex interactions, forming a unified whole with emergent behavior deriving from the structure rather than from its components. Groups of people tend to self-organize, to swarm in one direction or another much like birds do, while knowledge seems to converge from unrelated sources (aka consilience).

Quite often ignored, the context in which a project starts is very important, especially because the initial factors or conditions can have a considerable impact on people’s perception regarding the state or outcomes of the project, a perception reflected eventually also in the decisions made during the later phases of the project. The positive or negative auspices can be easily reinforced by similar events. Given the complex correlations and implications, aspects not always correctly perceived and understood can have a domino effect.

The preparations for the project start – the Business Case, setting up the project structure, communicating the project’s expectations and addressing the stakeholders’ expectations, the kick-off meeting, the approval of the needed resources, the knowledge available in the team – all these have a certain influence on the project. A bad start can haunt a project long after, even if the project is on the right track and makes a positive impact. In reverse, a good start can shade away some mishaps along the way; however, there’s also the danger that the mishaps are ignored and have a greater negative impact on the project. It may look like common sense, but the first image often counts and is kept in people’s memory for a long time.

As people are more perceptive to negative than to positive events, the chances are higher that a multitude of negative aspects will have a bigger impact on the project. It’s again something one can address as the project progresses. It’s not necessarily about control, but about being receptive to the messages around and about allowing people to give (constructive) feedback early in the project. It’s about using the positive force of a wave and turning a negative flow into a positive one.

Being aware of the importance of the initial context is just a first step toward harnessing the power of waves or winds; it takes action and leadership to pull the project in the right direction.

22 April 2019

600 Words: The Choice of Tools in Project Management

“Beware the man of one book” (in Latin, “homo unius libri”) is a warning generally attributed to Thomas Aquinas, and it has a twofold meaning. In its original interpretation it referred to people mastering a single chosen discipline; however, the meaning degenerated into expressing the limitations of people who master just one book and thus have a limited toolset of perspectives, mental models or heuristics. This latter meaning is better reflected in Abraham Maslow’s adage, “If the only tool you have is a hammer, you tend to see every problem as a nail”, as people tend to use the tools they are used to also in situations in which other tools are more appropriate.

The stubbornness of people and even organizations in using the same tools in totally different scenarios while expecting the same results, or in similar scenarios while expecting different results, is sometimes remarkable. It’s true, Mathematics has proven that the same techniques can be used successfully in different areas; however, a mathematician’s universe and models are idealistically detached to a certain degree from reality, full of simplified patterns and never-ending approximations. In contrast, the universe of Software Development and Project Management has a texture of complex patterns with multiple levels of dependencies and constraints, constraints highly sensitive to the initial conditions.

Project Management has successfully derived tools like methodologies, processes, procedures, best practices and guidelines to address the realities of projects; however, their use in practice seems to be quite challenging. Probably the challenge resides in the stubbornness of not adapting the tools to the difficulties and tasks met. Even if the same phases and multiple similarities seem to exist, the process of building a house or another tangible artefact is quite different from the approaches used in the development and implementation of software.

Software projects have high variability and are often explorative in nature. The end-product can look totally different from the initial scaffold. The technologies used come with opportunities and limitations that are difficult to predict in the planning phase. What seems to work on paper often doesn’t work in practice, as the devil typically lies in the details. The challenges and limitations vary between industries, businesses and even between projects within the same organization.

Even if for each project type there’s a methodology more suitable than another, in the end the project’s particularities might pull the choice in one direction or another. Business Intelligence projects, for example, can benefit from agile approaches, as these make it possible to better manage and deliver value by adapting the requirements to business needs as the project progresses; an agile approach works almost always better than a waterfall process there. In contrast, ERP implementations seldom benefit from agile methodologies, given the complexity of the project, which makes planning a real challenge; however, this depends also on an organization’s dynamics.
Especially when an organization has good experience with a methodology, there’s the tendency to use the same methodology across all the projects run within the organization. This results in chopping down a project to fit an ideal form, which might be fine as long as the particularities of each project are adequately addressed. Even if one methodology is not appropriate for a given scenario, it doesn’t mean it can’t be used; however, the cost, time, effort and the quality of the end-results enter the final equation as well.
In general, one can cope with complexity by leveraging a broader set of mental models, heuristics and tools, and this can be done only through experimentation, through training and exposing employees to new types of experiences, through openness, through adapting the tools to the challenges ahead.

21 April 2019

600 Words: Project Management Combat Planning

Even if planning is the most critical activity in Project Management, it seems to be also one of the most misunderstood concepts. Planning is critical because it charts the road ahead in terms of what, when, why and who, being used as a basis for action and communication, for determining the current status with respect to the initial plan, as well as the critical activities ahead.

The misunderstandings derive maybe also from the fact that each methodology introduces its own approach to planning. PMI, as a traditional approach, talks about baseline planning with respect to scope, schedule and costs, and about management plans which, besides the themes covered in the baseline, focus also on quality, human resources, risks, communication and procurement; separate plans can be developed for requirements, change and configuration management, respectively process improvement. To these one can add action and contingency planning.

In PRINCE2 the product-based planning is done at three levels – at project, stage, respectively team level – while separate exception plans are made in case of deviations from any of these plans; in addition there are plans for communication, quality and risk management. Scrum uses an agile approach, looking at the product and sprint backlogs, the progress being reviewed in stand-up meetings with the help of a burn-down chart. There are also other flavors of planning, like the rapid planning considered in Extreme Programming (XP), with an open, elastic and nondeterministic approach. In Lean planning the focus is on maximizing the value while minimizing the waste, by concentrating on the value stream – the complete list of activities involved in delivering the end-product – the value stream’s flow being mapped with the help of visualization techniques such as Kanban, flowcharts or spaghetti diagrams.
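The burn-down chart mentioned above is simple enough to sketch: the remaining work at the end of each day is plotted against an ideal linear trend from the sprint total down to zero. The sprint length and the daily numbers below are invented for illustration.

```python
# Illustrative burn-down data for a hypothetical 10-day sprint of 40 story
# points. 'remaining' is the work left at the end of each day (invented);
# the ideal line decreases linearly from the total to zero.

sprint_days, total_points = 10, 40
remaining = [40, 38, 35, 35, 30, 24, 20, 15, 9, 3, 0]   # day 0 .. day 10

ideal = [total_points * (1 - d / sprint_days) for d in range(sprint_days + 1)]

for day, (actual, plan) in enumerate(zip(remaining, ideal)):
    marker = "behind" if actual > plan else "on track"
    print(f"day {day:2}: remaining {actual:2} vs ideal {plan:4.1f} ({marker})")
```

The chart itself is just these two series drawn over time; the gap between them is what the stand-up meeting discusses.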

With so many types of planning, nothing can go wrong, can it? Just imagine the customers’ confusion when dealing with a change of methodology, especially when the concepts sound fuzzy and cryptic! Unfortunately, also the programmers and consultants seem to be bewildered by the various approaches and the philosophies supporting the methodologies used, their insecurity doing no service to the project and to the customers’ peace of mind. A military strategist would more likely look puzzled at the whole unnecessary plethora of techniques. On the field an army has to act with the utmost concentration and speed, to which add principles like directness, maneuver, unity, economy of effort, collaboration, flexibility, simplicity and sustainability. It’s what Project Management fails to deliver.

As in projects, the plan made before the battle seldom matches the reality in the field. Planning is an exercise needed to divide the strategy into steps, echelon and prioritize them, evaluate the needed resources and coordinate them, understand the possible outcomes and risks, evaluate solutions and devise actions for them. With good training, planning and coordination, each combatant knows his role in the battle and has a rough idea about the difficulties, targets and possible ways to achieve them, while a good combatant always knows his next action. At the same time, the leader must have visibility over how the fight unfolds, know the situation in the field and how much it has diverged from the initial plan; thus, when the variation is considerable, he must change the plan by changing the priorities and making better use of the resources available.

Even if there are multiple differences between the two battlefields, projects follow the same patterns of engagement at different scales. Probably Project Managers can learn a good deal by studying the classical combat strategists, and hopefully the management of projects would be more effective and efficient if the imperatives of planning, respectively management, were better understood and addressed.