
12 February 2024

🧭Business Intelligence: A One-Man Show (Part I: Some Personal Background and a Big Thanks!)

Business Intelligence Series

Over the past 24 years, I often found myself in the position of a "one-man show", doing almost everything in the data space from requirements gathering to development, testing, deployment, and maintenance/support (including troubleshooting and optimization), as well as Project Management, covering everything from operations to strategic management when the situation required it. Of course, the tasks involved vary in complexity! Developing an SSRS or Power BI report is less complex than also building, in the process, all or parts of the Data Warehouse (or Lakehouse, nowadays), respectively the whole infrastructure needed for reporting. All I can say is that "I've been there, I've done that!". 

Before SSRS became popular, I even built for a customer a whole reporting solution based on SQL Server, HTML & XML, with COM+ objects for database access. The UI's look-and-feel was similar to SSRS's, though there was no wizardry involved besides the creative use of programming and optimization techniques. Once I had written an SQL query, the volume of work needed to build a report was comparable to that in SSRS. It was a great opportunity to use my skillset, having previously worked as a web developer and VB/VBA programmer. I worked for many years as a Software Engineer, applying the knowledge acquired in the field whenever it made sense to do so, working alone or in a team, as the projects required.

During this time, I was involved in other types of projects and activities that had less to do with building reports and warehouses. Besides the development of various desktop, web, and data-processing solutions, I was involved in 6-8 ERP implementations, being responsible for the migration of data, building the architectures needed in the process, and supporting key users in areas like Data Quality or Data Management. I also did Project Management, Application Management, Release and Change Management, and even IT Management. Thus, there were often at least two components involved: one data-related, the other more diverse. It was a good experience, because the second component often needed knowledge of the first, and vice versa. 

For example, coming to understand the data model and business processes behind an ERP system by building ad-hoc and standardized reports allowed me to get a good understanding of what data is needed for a Data Migration, what the dependencies are, and what level of quality is needed. Similarly, the knowledge acquired by building ETL-based pipelines and data warehouses allowed me to design and build flexible Data Migration solutions, both architectures being quite similar from many perspectives. Knowledge of the data models and architectures involved can facilitate the overall process and is a premise for building reliable, performant solutions. 

Similar examples can also be given from Data Management, Data Operations, Data Governance, ERP support during and after implementation, etc. Reports and data are also needed in the Management areas - it starts with knowing what data is needed in the supporting processes to provide transparency, gain insights, and bring the processes under control when needed.

Working alone and being able to build a solution from beginning to end was often a job requirement. This doesn't imply that I was a "lone wolf". The nature of a data professional's or software engineer's job requires interacting with various businesspeople, from report requesters to key users, internal and external consultants, middle managers, and even upper management. There was also the knowledge of many data professionals involved indirectly – the resources I learned from: books, tutorials, blogs, webcasts, code, and training material. I'm thankful for their help over all these years!

Previous Post <<||>> Next Post

19 October 2022

🌡Performance Management: Mastery (Part II: First Time Right - The Aim toward Operational Excellence)

 

Performance Management Series

Rooted in the Six Sigma methodology as a step toward operational excellence, First Time Right (FTR) implies that any procedure is performed in the right manner the first time and every time. It equates to minimizing waste in its various forms (inventory, motion, overprocessing, overproduction, waiting, transportation, defects). Like many quality concepts from the manufacturing industry, the concept was transported into the software development process as a principle, process, goal, and/or metric. Thus, it became part of Software Engineering, Project Management, Data Science, and any other similar endeavor whose outcome results in software products. 

Besides the quality aspect, FTR is also rooted in the economic imperative – the need to achieve something in the minimum amount of time with the minimum of effort. It's about being efficient in delivering a product or achieving a given target. It can be associated with continuous improvement, learning, and mastery, the aim being to embed FTR in the organization's culture. 

Even if not explicitly declared, FTR lurks in each task planned. It seems to have become common practice to plan with FTR in mind; however, between this theoretical aim and practice there's, as usual, an important gap. Unfortunately, planners, managers, and even the people performing the tasks often forget that mistakes are made, that several iterations are needed to get the job done. It starts with the communication between people in clarifying the requirements and ends with the formal sign-off. All the deviations from FTR add up to the deviations between expected and actual effort, though probably more important are the deviations from the plan and all the consequences deriving from them. Especially in complex projects, this can compound into a spiral of issues that easily reinforce themselves. 

Many of the jobs that imply creativity, innovation, research, or exploration require at least several iterations to get the job done, and this is independent of the participants' professionalism and experience. Moreover, the higher the quality needed, the higher the effort, the 80/20 rule sometimes being a good approximation of the effort involved. In extremis, aiming for perfection instead of excellence can make certain tasks a never-ending story. 

Achieving FTR requires practice - the more novelty, complexity, communication, or synchronization involved, the more practice is needed. It starts with the individual mastering the individual tasks and ends with the team, where communication, synchronization, and other aspects need to be considered. Practice is usually gained through hands-on work as part of the daily duties, project work, and so on. Unfortunately, it's based primarily on individual experience, and seldom groomed in advance as preparation for future tasks. That's why, when efficiency is needed in performing critical complex tasks, one sometimes also needs to consider the learning curve required to achieve the expected quality. 

Of course, many organizations demand experience from job applicants and, when possible, hire people with experience; however, the diversity, complexity, and changing nature of tasks require further practice. This aspect is somewhat recognized in organizations' implementation of the various forms of DevOps, though how many organizations adopt it and enforce it on a regular basis? Moreover, a major requirement of today's businesses is to be agile, and beyond the mere application of methodologies, being agile also means having an FTR mindset. 

FTR starts with the wish for mastery at the individual and team level. With the right management attention - allocating time for learning and self-development in the important areas, providing relevant feedback, and building an infrastructure for knowledge sharing and harnessing - FTR can become part of the organization's culture. It's up to each of us to do it!

04 April 2021

💼Project Management: Lean Management (Part I: Between Value and Waste I - An Introduction)

 Mismanagement

Independently of whether Lean Management is considered in the context of Manufacturing, Software Development (SD), Project Management (PM), or any other business-related area, there are three fundamental business concepts on which the whole scaffolding of the Lean philosophies is built, namely value, value stream, and waste. 

From an economic standpoint, value refers to the monetary worth of a product, asset, or service (further referred to as product) to an organization, while from a qualitative perspective, it refers to the perceived benefit associated with its usage. The value is thus reflected in the costs associated with a product's delivery (producer's perspective), respectively in the price paid to acquire it and the degree to which the product can fulfill a demand (customer's perspective).

Without diving too deep into the theory of product valuation, the challenges revolve around reducing the costs associated with a product's delivery, respectively selling it at a price the customer is willing to pay, typically to address a given set of needs. Moreover, the customer is willing to pay only for the functions that satisfy the needs the product is thought to cover. From this friction of opposing driving forces, a product is designed and valued.

The value stream is the sequence of activities (also steps or processes) needed to deliver a product to customers. This formulation includes value-added and non-value-added activities, covers internal and external customers, and spans the full lifecycle of products and/or services in whatever form it occurs, whether perceived by the customers or not.  

Waste is any activity that consumes resources but creates no value for the customers or, more generally, for the stakeholders, be they internal or external. Waste is typically associated with the non-value-added activities and can increase, directly or indirectly, the costs of products, especially when no attention is given to it and/or it is not recognized as such. Therefore, eliminating waste can have an important impact on products' costs and has become one of the goals of Lean Management. Moreover, eliminating waste is an incremental process that, when put in the context of continuous improvement, can lead to process redesign and re-engineering.

Taiichi Ohno, the 'father' of the Toyota Production System (TPS), originally identified seven forms of waste (Japanese: muda): overproduction, waiting, transporting, inappropriate processing, unnecessary inventory, unnecessary/excess motion, and defects. Within the context of SD and PM, Mary and Tom Poppendieck [1] translated the types of waste into concepts closer to the language of software developers: partially done work, extra processes, extra features, task switching, waiting, motion and, of course, defects. Further types of waste associated with resources, confusion, and work conditions were later added to this list.

Defects in the form of errors and bugs, ineffective communication, rework and overwork, waiting, and repetitive activities like handoffs or even unnecessary meetings are usually the visible part of products and projects, and important from the stakeholders' perspective; in extremis, they can become a sensitive topic when their volume increases out of proportion.

Unfortunately, lurking in the deep waters of projects and wrecking everything that stands in their way are the other forms of waste, less perceivable from the stakeholders' side: unclear requirements/goals, code not released or not tested, specifications not implemented, scrapped code, overutilized/underutilized resources, bureaucracy, suboptimal processes, unnecessary optimization, searching for information, mismanagement, task switching, improper work conditions, and confusion, to mention just the important activities associated with waste.

Despite their elusive nature, and independently of whether they are visible to stakeholders or not, they all impact the costs of projects and products when proper attention is not given to them and they are not handled accordingly.

Lean Management - The Waste Iceberg

References:
[1] Mary Poppendieck & Tom Poppendieck (2003) Lean Software Development: An Agile Toolkit, Addison Wesley, ISBN: 0-321-15078-3

07 March 2021

💼Project Management: Methodologies (Part II: Agile Manifesto Reloaded II - Requirements Management)

Project Management

Independently of its scope and the methodology used, each software development project is made of the same blocks/phases, though possibly arranged differently. It starts with the Requirements Management (RM) subprocesses, in which the functional and non-functional requirements are gathered, consolidated, prioritized, and brought to a form that facilitates their understanding and estimation. It's an iterative process, as there can be overlapping functionality, requirements that don't bring any significant benefit compared with the investment, and new aspects discovered during internal discussions or with the implementer.

As output of this phase, it's important to have a list of requirements that reflects the customer's needs with respect to the product(s) to be implemented. Once frozen, the list defines the project's scope and is used for estimating the costs, sketching a draft of the final solution, and reaching a contractual agreement with the implementer. Ideally, the set of requirements should be complete and coherent while reflecting the customer's needs. In theory, it thus allows agreeing upon costs as well as upon the architecture and other important aspects (responsibilities/accountability).

Typically, each new requirement considered after this stage needs to go through a Change Management (CM) process, in which it gets formulated to the needed level of detail, a cost, effort, and impact analysis is performed, and the budget for it is approved or the change gets rejected. Ideally, small changes can be covered by a buffer budget planned upfront; however, in the end each change comes with a cost and with project delays.

Some changes can come late in the project and have an important impact on the whole architecture when important aspects were missed upfront. Moreover, when the number of changes goes beyond a certain limit, it can lead to what is known as scope creep, with important consequences on the project's costs, timeline, and quality. Therefore, to minimize the impact on the project, the number of changes needs to be kept to a minimum, typically considering only the critical ones, while the others can still be implemented after the project's end.

The agile manifesto's principles impose an important constraint on the requirements - welcoming changing requirements even late in the process; an assumption - that the best requirements emerge from self-organizing teams; and probably one implication - that the requirements need to be defined together with the implementer.

The way changing requirements are handled seems to provide more flexibility, though it's actually a constraint imposed on the CM process, which interfaces with the RM processes. Without a proper CM in place, any requirement might end up implemented, independently of whether it's feasible or not. This can easily make the project's costs explode, sometimes unnecessarily, while accommodating extreme behavior like changing the same functionality frequently, handling exceptions extensively, etc.

It's usually helpful to define the requirements together with the implementer, as this can bring more quality into the process, even if more time needs to be invested. However, starting from a solid set of requirements is a critical factor for a project's success. The manifesto makes no direct statement about this; it just iterates that the best requirements emerge from self-organizing teams, which is not necessarily the case. 

The users who in theory can define the requirements best are the ones with the deepest knowledge of an organization's processes and IT architecture, typically the key users and/or IT experts. Self-organization revolves around how a team organizes itself and handles the various activities, though there's no guarantee that it will address the important aspects, no matter how motivated the team is, how constant the pace, how excellently the technical details are handled, or how well the final product works.

Previous Post <<||>> Next Post

💼Project Management: Methodologies (Part I: Agile Manifesto Reloaded I - An Introduction)

 

Project Management

There are so many books written on agile methodologies, each attempting to depict the realities of software development projects. There are many truths considered in them, though they seem to blend into a complex texture in which the writer usually takes the position of a preacher, contrasting the sins of the traditional methodologies with the agile principles. In extremis, everything done in the past seems to be wrong, while the agile methods seem to be a panacea, which is seldom the case.

It's already 20 years since the agile manifesto was published, and the methodologies adhering to its principles don't seem to provide the expected success, suffering from the same chronic symptoms as their predecessors - they are poorly understood and implemented, they tend to function according to the hammer principle (when all you have is a hammer, everything looks like a nail), and software development projects still deliver poor results. Moreover, more and more professionals are raising their voices against agile practices.

Frankly, the principles behind the agile manifesto make sense. A project should by definition satisfy stakeholders' requirements, ideally through regular deliveries that incorporate the needed functionality, while gradually seeking early feedback from customers and involving the customer throughout the project's duration, working together to deliver a feasible product. Moreover, self-organizing teams, face-to-face meetings, constant pace, and technical excellence should allow minimizing the waste and maximizing the efficiency of the project. Further aspects like simplicity, good design, and architecture should establish a basis for success.

Re-reading the agile manifesto, even if each read pulls from experience more and more pros and cons, the manifesto continues to look like a Christmas wish list. Even if the ideas represented make sense and satisfy specific needs, they are difficult to achieve in a project's context and setup. Each wish introduces a constraint that brings its own limitations. Unfortunately, each policy introduced by a methodology follows the same pattern, no matter the methodology considered. Moreover, the wishes cover only a small subset of a project's texture, are general, and leave a lot of space for interpretation and implementation, though the same can be said about any principles that don't provide a coherent worldview or a conceptual model.

The software development industry needs a coherent worldview that reflects its assumptions, models, characteristics, laws, and challenges. Software Engineering (SE) attempts to provide such a worldview, though it is unfortunately too complex for many, and there seems to be a big divide between it and the worldviews introduced by the various Project Management (PM) methodologies. Studying one or two PM methodologies, learning a few programming languages, and even hands-on experience on a few projects won't fill the knowledge gaps associated with the SE worldview.

Organizations don't seem to see the need for professionals to have a formal education in SE. On the other side, employees are expected to have by default some of the required skillset, which is not the case. Besides understanding and implementing a technology, there is a set of knowledge areas in which the IT professional must have at least high-level knowledge if he/she is expected to think critically about those areas. Unfortunately, the lack of such knowledge sometimes leads to situations that can negatively impact projects.

Almost each important word from the agile manifesto pulls with it a set of concepts from the SE worldview – customer satisfaction, software delivery, working software, requirements management, change management, cooperation, teamwork, trust, motivation, communication, metrics, stakeholder management, good design, good architecture, lessons learned, performance management, etc. The manifesto needs to be regarded through SE eyeglasses if one expects value from it.

Previous Post <<||>> Next Post

04 February 2021

📦Data Migrations (DM): Conceptualization (Part VI: Data Migration Layer)

Data Migration
Data Migrations Series

Besides migrating the master and transactional data from the legacy systems, there are usually three additional important business requirements for a Data Migration (DM) – migrate the data within the expected timeline, with minimal disruption for the business, and within the expected quality levels. Hence, the DM's timeline must match and synchronize with the main project's timeline in terms of major milestones, though the DM typically needs to be executed within a small timeframe of a few days during the Go-Live. Concerning the third requirement, even if the data have high quality as available in the source systems or as provided by the business, aspects like integration and consistency rely primarily on the DM logic.

To address these requirements, the DM logic must reach a certain level of performance and quality that allows importing the data as expected. From the project's beginning until UAT, the DM team will integrate the various pieces of information iteratively, test the changes several times, and troubleshoot the deviations from expectations. The volume of effort required for these activities can be overwhelming. It's not only important for the whole solution to be performant; each step must be designed so that, besides fast execution, changes and troubleshooting involve a minimum of overhead.

For a better understanding of why this is important, imagine a quest game in which the character has to go through a labyrinth with traps. If the player makes a mistake, he needs to restart from a certain distant checkpoint or even from the beginning. Now imagine that for each mistake he has the possibility of going one step back, trying a new option, and moving forward. For some it may look like cheating, though in this way one can finish the game relatively quickly. It would be great if executing a DM allowed the same flexibility.

Unfortunately, unless the data are stored between steps or each step is a different package, an ETL solution doesn't provide the flexibility of changing the code, moving one step back, rerunning the step, and troubleshooting, over and over again like in the quest game. To better illustrate the impact of such an approach, let's consider that the DM has about 40 entities and one needs to perform on average 20 changes per entity. If one is able to move forwards and backwards, each change will probably take only a few minutes to execute. Otherwise, rerunning a whole package can take 5-10 times longer or even more, depending on the package's size and data volume. For 800 changes, only an additional minute per change equates to 800 minutes (about 13 hours).

In exchange, storing the data for an entity in a database at the important points of the processing and implementing the logic as a succession of SQL scripts allows this flexibility. The most important downside is that the steps need to be executed manually, though this is a small price to pay for the flexibility and control gained. Moreover, with a few tricks one can load deltas as in the case of a phased DM.
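
To make the idea more concrete, here is a minimal sketch of such a staged approach for a single entity; all table and column names are hypothetical. Each step persists its output, so any step can be corrected and rerun in isolation:

```sql
-- Step 1: copy the raw legacy data into a staging table
-- (LegacySys and the column names are hypothetical)
SELECT CustomerId, Name, CountryCode, CreditLimit
INTO dbo.Stg_Customers_Raw
FROM LegacySys.dbo.Customers;

-- Step 2: apply the transformations into a separate table;
-- if a rule changes, only this script needs to be rerun
SELECT CustomerId
     , UPPER(LTRIM(RTRIM(Name))) AS Name
     , COALESCE(CountryCode, 'XX') AS CountryCode
     , CASE WHEN CreditLimit < 0 THEN 0 ELSE CreditLimit END AS CreditLimit
INTO dbo.Stg_Customers_Clean
FROM dbo.Stg_Customers_Raw;

-- Step 3: bring the data into the structure expected by the target system,
-- the table from which the actual import is fed
SELECT CustomerId, Name, CountryCode, CreditLimit
INTO dbo.Stg_Customers_Load
FROM dbo.Stg_Customers_Clean;
```

If a transformation proves wrong during testing, one drops and rebuilds only the affected table and the ones after it, instead of rerunning the whole pipeline.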

To ensure that the consistency of the data is kept, one needs to build for each entity a set of validation queries that check for duplicates, special cases, data integrity, incorrect formats, etc. The queries can be included in the sequence of logic used for the DM; thus, one can react promptly to each unexpected value. When required, the validation rules can be built into reports and used by the users in the data cleaning process, or even logged periodically per entity for tracking the progress.
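
As an illustration, a few validation queries for the hypothetical customer entity sketched above could look as follows, one query per rule so that each can be run and interpreted independently; any returned rows signal an issue to investigate:

```sql
-- duplicates: more than one record per business key
SELECT CustomerId, COUNT(*) AS Records
FROM dbo.Stg_Customers_Load
GROUP BY CustomerId
HAVING COUNT(*) > 1;

-- referential integrity: country codes unknown to the target system
-- (Trg_Countries is a hypothetical reference table)
SELECT DISTINCT c.CountryCode
FROM dbo.Stg_Customers_Load c
     LEFT JOIN dbo.Trg_Countries t
       ON t.CountryCode = c.CountryCode
WHERE t.CountryCode IS NULL;

-- special cases/incorrect values: empty names or negative credit limits
SELECT CustomerId, Name, CreditLimit
FROM dbo.Stg_Customers_Load
WHERE LEN(Name) = 0 OR CreditLimit < 0;
```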

Previous Post <<||>> Next Post

13 May 2019

#️⃣Software Engineering: Programming (Part XIV: Good Programmer, Bad Programmer)

Software Engineering
Software Engineering Series

The use of denominations like 'good' or 'bad' in relation to programmers and programming draws a thin separation between two perceptional poles that represent the end results of the programming process, reflecting the quality of the code delivered, as well as the quality of a programmer's effort and behavior as a whole. The usage of the two denominations is thus often contextual, 'good' and 'bad' being moving points on an imaginary value scale, with a wide range of values within and outside the interval determined by the two.

The 'good programmer' label is an idealization of the traits associated with being a programmer – analyzing and understanding the requirements, filling the gaps when necessary, translating the requirements into robust designs, developing quality code with a minimum of overwork, delivering on time, being able to help others, to work as part of a (self-organizing) team and alone when the project requires it, to follow methodologies, processes, or best practices, etc. The problem with such a definition is that there's no fixed limit, considering that a programmer's job description can include an extensive range of requirements.

The 'bad programmer' label is used in general when programmers (repeatedly) fail to meet others' expectations, the labeling occasionally being done independently of one's experience in the field. The volume of bugs and mistakes, the fuzziness of the designs and of the code written, the lack of comments and documentation, and the lack of adherence to methodologies, processes, best practices, and naming conventions are often considered indicators for such labels. Sometimes even the smallest mistakes, or wrong perceptions of one's effort and abilities, can trigger such labels.

Labeling people as 'good' or 'bad' has the tendency of reinforcing one's initial perception, in extremis leading to self-fulfilling prophecies - predictions that directly or indirectly cause themselves to become true, by the very terms on which the predictions came into being. Thus, when somebody labels another as 'good' or 'bad', he will more likely look for signs that reinforce his previous beliefs. This leads to situations in which 'good' programmers' mistakes are more easily overlooked than 'bad' programmers' mistakes, even if the mistakes are similar.

A good label can in theory motivate, while a bad label can easily demotivate, though their effects vary from person to person. Such labels can become a problem for beginners, as they can easily affect beginners' perception of themselves. It's so easy to forget that programming is a continuous learning process in which knowledge is relative and highly contextual, each person having strengths and weaknesses.

Each programmer has a particular set of skills that differentiates him from other programmers. Each programmer is unique, an aspect reflected in the code one writes. Expecting programmers to fit an ideal pattern is unrealistic. Instead of using labels, one should attempt to strengthen the weaknesses and make adequate use of a person's strengths. In this approach reside the seeds of personal growth and excellence.

There are also programmers who excel in certain areas - conceptual creativity, ability in problem identification, analysis, and solving, speed, ingenuity of design, making the best use of the available tools, etc. Such programmers, as Randall Stross formulates it, "are an order of magnitude better" than others. Experience and skills harnessed with intelligence have this transformational power, which is achievable by each programmer in time.

Even if we can't always avoid such labeling, it's important to become aware of the latent force labels carry with them and the effect they have on our colleagues and teammates. A label can easily act like a boomerang, hitting us back long after it was thrown.


12 May 2019

#️⃣Software Engineering: Programming (Part XIII: Misconceptions about Programming II)

Software Engineering

Continuation

One of the organizational stereotypes is having a big room full of cubicles filled with employees. Even if programmers can work in such settings, improperly designed environments restrict creativity and productivity to a certain degree, making employees' collaboration and socialization more difficult. Despite dedicated meeting rooms, an important part of the communication occurs ad-hoc. In open spaces, each transient interruption can inadvertently lead to loss of concentration, which translates into wasted time, as one needs to pick up the thread of thought and review the last written code, and occasionally into bugs.

Programming is expected to be a 9-to-5 job with an effective working time of 8 hours. Subtracting the interruptions and the pauses one needs to take, the effective working time decreases to about 6 hours. In other words, to reach 8 hours of effective productivity one needs to work about 10 hours or so. Therefore, unless adequately planned, each project starts with about 20% overtime. Moreover, even if a task is planned to take 8 hours, given the need for information, the allocated time is split over multiple days. The higher the need for further clarification, the higher the chances for the effort to expand. In extremis, the effort can double itself.

Spending extensive time in front of the computer can have adverse effects on programmers' physical and mental health. The same effect comes from time pressure and some of the negative behavior that occurs in working environments. Also, communication skills can suffer when they are not properly addressed. Unfortunately, few organizations give importance to these aspects, few offer a work-life balance, even if a programmer's job best fits and requires such an approach. What's even more unfortunate is when organizations ignore the overtime, taking it as part of the job description. It's also one of the main reasons why programmers leave, why competent workforce is lost. In the end everyone's replaceable, however what's the price one must pay for it?

Trainings are typically offered within running projects, as they can easily be billed. Besides the fact that this takes time unnecessarily from a project's schedule, it can easily make trainings ineffective when the programmers can't immediately use the new knowledge. Moreover, considering resources that come and go, the unwillingness to invest in programmers can have incalculable effects on an organization's performance, respectively on their personal development.

Organizations typically look for self-motivated resources, this request often encompassing the organization's whole motivational strategy. Long projects feel like a marathon in which it is difficult to sustain the same rhythm for the whole duration. Managers and team leaders need to work on programmers' motivation if they want sustained performance. They must act as mentors and leaders altogether, not only control the status of tasks and rant and storm each time deviations occur. It's easy to complain about the status quo without doing anything to address the existing issues (challenges).

Especially in dysfunctional teams, programmers believe that management can't contribute much to a project's technical aspects, while management sees little or no benefit in making developers an integral part of the project's decision-making process. Moreover, the lack of transparency and communication leads to a wide range of frictions between the various parties.

Probably the most difficult thing to understand is people's stubbornness in expecting different results while following the same methods, and in ignoring common sense. The ease with which people ignore technological and Project Management principles and best practices is bewildering. It resides in human nature to learn the hard way despite the warnings of the experienced; however, despite the negative effects, there's often minimal learning in the process...

To be continued, perhaps…


#️⃣Software Engineering: Programming (Part XII: Misconceptions about Programming - Part I)

Software Engineering
Software Engineering Series

Besides equating the programming process with a programmer's capabilities and minimizing the importance of programming and programmers' skills in the whole process (see previous post), there are several other misconceptions about programming that influence the process' outcomes.


Having deep knowledge of a programming language allows programmers to approach other programming languages more easily; however, each language has its own learning curve, ranging from a few weeks to half a year or more. The learning curve depends on the complexity of the languages known and of the language to be learned, the same applying to frameworks and architectures, the scenarios in which the languages are used, etc. One unrealistic expectation is that programmers are capable of learning a new programming language or framework overnight, this expectation putting more pressure on programmers' shoulders as they need to compensate for the knowledge gap in a short time. No, programming languages are not the same, even if there's a high resemblance between them!

There's a lot of code available online, and many programming tasks involve writing similar code. This makes people assume that programming can be reduced to copy-paste activities and, in extremis, that there's no creativity in the act of programming. Besides the fact that using others' code comes with certain copyright limitations, copy-pasting code is in general a way of introducing bugs into software. One can learn a lot from others' code, though the programmer's challenge resides in writing better code, in reusing code while finding the right level of abstraction.  
 
There's a tendency on the market to build whole applications using wizard-like functionality and to generate source code based on data or ontological models. Such approaches work in a (limited) range of scenarios, and even if the trend is to automate as much as possible in the process, that's not what programming is about. Each such tool comes with its own limitations that sooner or later will push back. Changing the generated code in order to build new functionality or to optimize it is often not feasible, as it imposes further limitations.

Programming is not only about writing code. It also involves problem-solving abilities and a certain understanding of the business processes, in which conceptual creativity and ingenuity of design can prove to be good assets. Modelling and implementing processes help programmers gain a unique perspective within a business.

For a programmer, the learning process never stops. Release cycles for established tools become shorter, each release bringing a new set of functionalities. Moreover, there are always new frameworks, environments, architectures, and methodologies to learn. Expanding one's (necessary) knowledge takes a considerable amount of effort, usually not planned within projects or outside of them. Trainings help in the process, though they hardly scratch the surface. Often the programmer is forced to fill the knowledge gap in his free time. This adds up to the volume of overtime one must do on projects. In the long run, it becomes challenging to find the needed time for learning.

In resource planning there's a tendency to add or replace resources on projects while neglecting the influence this might have on the project and its timeline. Each new resource needs some time to accommodate himself to the role, understand the project requirements, and take over the work of another. Moreover, resources are replaced on projects with minimal or even no knowledge transfer for the job ahead. Unfortunately, the same behavior occurs in consultancy as well, consultants being moved from one known functional area into another unknown one, changing resources like the engines of different types of cars and expecting that everything will work like magic.



11 May 2019

#️⃣Software Engineering: Programming (Part XI: The Dark Side)

Software Engineering
Software Engineering Series

As a member of the programmers' extended community, it's hard to accept some of the views that belittle programmers and their work. In some contexts, maybe the critics reveal some truths. It's in human nature to generalize some of the bad experiences people have or to oversimplify some of programmers' traits into stereotypes; however, generalizations and simplifications with pejorative connotations do no service to the group criticized, nor to the critics.

The programmer finds himself at the end of the chain of command, and he's therefore the easiest to blame for the problems existing in software development (SD). Some of the reasoning fallacies equate the process of programming with programmers' capabilities, when the problems reside in the organization itself – the way it handles each step of the processes involved, the way it manages projects, the way it's organized, the way it addresses cultural challenges, etc.

The meaningful part of SD starts with requirements elicitation, the process of researching and discovering the requirements on which a piece of software is built. The results of the programming process are as good as the inputs provided – the level of detail, accuracy, and completeness with which the requirements were defined. It's the known GIGO (garbage in, garbage out) principle. Even if the programmer questions some of the requirements, for example when they are contradictory or incomplete, each question adds more delays to the process, because clarifying the open issues often involves several iterations. Thus, one must choose between being on time and delivering the expected quality. Another problem is that the pay-off and perception of the two differ between the managerial and the customers' perspectives.

A programmer's work, the piece of software he developed, is seen late in the process, when it may be too late to change something in useful time. This happens especially in the waterfall methodology, an aspect addressed by more modern methodologies through involving the customers and getting constructive feedback early in the process, and through developing the software in iterations.

Being at the end of the chain of command, programming is often seen as a lowly endeavor, its importance minimized, maybe because it seems so obvious. Some even consider that anybody can program, and it's true that anyone can learn to program, the same as anyone can learn another craft; however, like any craft, it takes time and skill to master. The simple act of programming doesn't make one a programmer, the same as the act of singing doesn't make one a singer. A programmer needs on average several years to achieve an acceptable level of mastery and depth. This can be done only by mastering one or more programming languages and frameworks, getting a good understanding of the SD processes and of what the customers want, and getting hands-on experience on a range of projects that allow one to learn and grow.

There are also affirmations that contain some degree of truth. Overconfidence in one's skills results in programmers not testing their own work adequately. Programmers attempt to use the minimum of effort in achieving a task, with the development environments and frameworks, the methodologies, and other tools playing an important part. In extremis, through the hobbies, philosophies, behaviors, and quirks they have, not necessarily good or bad, programmers seem to isolate themselves.

In the end, the various misconceptions about programmers have influence only to the degree they pervade a community or an organization's culture. The bottom line is, as Bjarne Stroustrup formulated it, "an organization that treats its programmers as morons will soon have programmers that are willing and able to act like morons only" [1].



References:
[1] "The C++ Programming Language" 2nd Ed., by Bjarne Stroustrup, 1991

07 May 2019

𖣯Strategic Management: Strategic Perspectives (Part I: Agile vs. Lean Organizations)

Strategic Management

Agile and lean are two important concepts that have pervaded organizations over the past 20-30 years, though they continue to have little effect on organizations' operations.

Agile is rooted in the need to respond promptly to the changing needs of an organization. The agile philosophy was primarily groomed in Software Development to reconcile changing customer requirements with disciplined project execution; however, it can be applied to an organization's processes as well. An agile process is in general a process designed to deliver the intended results in an effective and efficient manner by promptly addressing the changing customer needs.

Lean is a systematic method for the minimization of waste, rooted as a philosophy in manufacturing. The lean mindset attempts to remove the non-value-added activities from processes because they bring no value to the customers. Thus, a lean process is a process designed to deliver the intended results in an effective and efficient manner by focusing on the immediate needs of the customers - on what customers want and value (when they want it). 

Effective means being successful in producing a desired or intended result, while efficient means achieving maximum productivity with minimum wasted effort or expense. The requirement for a process to be effective and efficient translates into delivering what's intended using a minimum of steps, designed in such a way that the quality of the end results is not affected, at least not in the essential characteristics. Efficiency also translates into the flows of information, material, and resources suffering minimal delays.

Agile focuses on answering promptly the changing customer needs, while lean focuses on what customers want and value, eliminating waste in the process. Both mindsets seem to imply iterative and adaptive approaches in which improvement happens gradually. Through their nature, the two mindsets seem to complement each other. Some even equate agile with lean; however, an agile process is not necessarily lean and vice versa.

To improve the effectiveness and efficiency of its operations, an organization should aim to develop processes that are agile and lean, optimizing the information and material flows, focusing on its users' changing needs, and continuously eliminating the activities that lead to waste. And waste can take so many forms – the unnecessary bureaucracy reflected in multiple and repetitive sign-offs and approvals, the lack of empowerment, not knowing what to do, etc.

Important time is wasted just because the users don't know or don't understand an organization's processes. If an organization can't find rules that everyone understands, then a process is doomed, independently of the key area the process belongs to. There's also the tendency of attempting to address each exception within a process, to the degree that multiple processes result. There's no perfect process; however, one can define the basic flow and document the main exceptions, while providing users some guidelines for navigating the unknown and unpredictable.

As part of the same tendency, it makes sense to move requests that follow a standard procedure onto the list of standard requests instead of following futile steps just for the sake of it. It's the case of requests that can be fulfilled with internal resources, e.g. the development of reports or extraction of data, provisioning of SharePoint websites, some performance optimizations, etc. In addition, one can unify processes that seem to be disconnected, e.g. the handling of changes within Change Management and Project Management, as they involve almost the same steps.

It's probably in each organization's interest to discover and explore the benefits of applying the agile and lean mindsets to its operations and to integrate them into its culture.

Previous Post <<||>> Next Post

💼Project Management: Methodologies (Part III: Agility under Eyeglasses I)

Mismanagement

There are more and more posts in cyberspace voicing against the agile practices and the way they are understood and implemented by organizations. Some try to be hilarious [5]; others keep a scholastic seriousness [1][2][3][4], and all of them make some valid points. In each remark there are some seeds of truth, even if context-dependent.

Personally, I embrace an agile approach whenever possible; however, I find it difficult to choose between the agile methodologies available on the market, because each of them introduces concepts that contradict what it means to be agile – to respond promptly to business needs. It doesn't mean that one must consider each requirement, but that it's appropriate to consider those which have a business justification. Moreover, organizations need to adapt the methodologies to their needs, and seldom vice versa.

Considering the Agile Manifesto, it's difficult to take seriously statements that lack precision; formulations like "we value something over something else" are more of a wish than principles. When people don't understand what the agile "principles" mean, one occasionally hears statements like "we need no documentation", "we need no project plan", "the project plan is not important", "Change Management doesn't apply to agile projects", or "we need only high-level requirements because we'll figure out where we're going on the way". Because of this lack of precision, a mocker can reduce the lesser-valued concept to null and still keep the agile "principles" valid.

The agile approaches seem to lack control. If you're leaving the users in charge of the scope, then you risk ending up with a product that offers a lot yet misses the essential, and is thus unusable or usable only to a lower degree. Agile works well for prototyping something to show to the users, when the products are small enough to easily fit within an iteration, or when the vendor wants to gain a customer's trust. Therefore, agile works well with BI projects, which in general combine all three aspects.

An abomination is the work in fixed sprints or iterations of one or a few weeks, chopping the functionality to fit the respective time intervals. If you have the "luck" of sign-offs and other activities that steal your time, then the productive time can be reduced by up to 50% (the smaller the iterations, the higher the percentage). What's even more inconceivable is that people ignore the time spent on bureaucracy. If this way of working repeats in each iteration, then the project duration multiplies by a factor between 2 and 4, the time spent on Project Management increasing by the same factor. What's not understandable is that, despite the bureaucracy, adherence to delivery dates, budget, and quality is still required.

Sometimes one has the feeling that people think software development and other IT projects work like building a house or manufacturing a mug: you choose the colors, the materials, the dimensions, and voilà, the product is ready. IT projects involve a lot of the unforeseen, and one must react to it agilely. Herein resides one of the most important challenges.

Communication is an important challenge in a project, especially when multiple interests are involved. Face-to-face conversation is one of the nice-to-have items on the wish list; however, in practice it isn't always possible. One can't expect all the resources to be available to meet and decide. In addition, one needs to document everything, from meeting minutes to Business Cases and requirements. A certain flexibility in changing the requirements is needed, though one can't change them arbitrarily; there must be a concept behind them, otherwise the volume of overwork can easily make a project's budget explode exponentially.
||>> Next Post (continuation) 
Resources:
[1] Harvard Business Review (2018) Why Agile Goes Awry - and How to Fix It, by Lindsay McGregor & Neel Doshi (Online) Available from: https://hbr.org/2018/10/why-agile-goes-awry-and-how-to-fix-it
[2] Forbes (2012) The Case Against Agile: Ten Perennial Management Objections, by Steve Denning (Online) Available from: https://www.forbes.com/sites/stevedenning/2012/04/17/the-case-against-agile-ten-perennial-management-objections/#6df0e6ea3a95
[3] Springer (2018) Do Agile Methods Work for Large Software Projects?, by Magne Jørgensen (Online) Available from: https://link.springer.com/chapter/10.1007/978-3-319-91602-6_12
[4] Michael O Church (2015) Why “Agile” and especially Scrum are terrible (Online) Available from: https://michaelochurch.wordpress.com/2015/06/06/why-agile-and-especially-scrum-are-terrible/
[5] Dev.to (2019) Mockery of agile, by Artur Martsinkovskyi (Online) Available from: https://dev.to/arturmartsinkovskyi/mockery-of-agile-5bdf

02 May 2019

#️⃣Software Engineering: Programming (Part IX: Programmer, Coder or Developer?)

Software Engineering
Software Engineering Series

Programmer, coder, and (software) developer are terms used interchangeably to denote a person who writes a set of instructions for a computer or any other electronic device. Looking at the intrinsic meaning of the three denominations, a programmer is a person who writes programs, a coder is a person who writes code, and a developer is one who develops (grows) a piece of software. They look like redundant definitions, don't they?

A program is a stand-alone piece of code written for a given purpose – in general, it's used to transform inputs into outputs or perform specific actions, and involves a set of structures, libraries, and other resources. Programming means primarily being able to write, understand, test, and debug programs; however, there can be other activities like designing, refactoring, and documenting programs and the resources needed. It also involves knowledge of a set of algorithms, libraries, architectures, methodologies, and practices that can be used in the process.

Code may refer to a program, as well as to parts of a program. Writing code means being able to use and understand a programming language's instructions for a given result – validating input, acting on diverse events, formatting and transforming content, etc. The code doesn't necessarily have to stand alone, often being incorporated inside documents like web pages, web parts, or reports.

Development of software usually means more than programming, as the former is a process of conceiving, specifying, designing, programming, documenting, testing, and maintaining software. The gap between the two is negligible, as programming typically involves the other activities as well in practice.

Programmer and coder are unfortunately often used with a pejorative connotation; therefore, the denomination of developer seems fancier. An even fancier term is software engineer, software engineering being the application of engineering to the development of software in a systematic method.

In IT there are several other roles which tangentially involve the writing of instructions – database administrator, security engineer, IT analyst, tester, designer, modeler, technical writer, etc. It looks like a soup of fancy denominations chosen expressly to confuse nontechnical people. Thus, a person who has covered many of the roles mentioned above sometimes finds it difficult to pick the most appropriate denomination.

A person who writes such code doesn't have to be a programmer or even an IT professional. There are many tools on the market whose basic functionality can be extended with the help of scripts - Excel, Access, SSRS, or SSIS. Many tools nowadays have basic drag-and-drop and wizard-based functionality which limits the need for coding, and the trend seems to move in this direction. Another trend is minimizing the need for writing code to the degree that full applications can be built via drag and drop; however, some degree of coding is still needed. Knowledge of one or two universal scripting languages and data-interchange formats seems to be in demand.

Probably the main factor for calling somebody a programmer is whether he does this for a living. On the other side, a person can identify himself as a programmer even if his role involves only a small degree of programming, or programming is more of a hobby. One can consider programming as a way of living, as a way of understanding and modelling life. This way of life borrows a little from the ways of being of the mathematician, the philosopher, and the engineer.

In the end, the proper denomination is less important. More important is what one identifies with and what one makes of one's skills – the mental and machine-understandable universes one builds.


21 April 2019

#️⃣Software Engineering: Programming (Part VIII: Pair Programming)

Software Engineering
Software Engineering Series

“Two heads are better than one” – a proverb whose wisdom is embraced today in the various forms of harnessing collective intelligence. The use of groups in problem solving is based on principles like “the collective is more than the sum of its individuals” or “the crowd is better on average at estimations than the experts”. All well and good; based on the rationality of the same proverb, the idea has been advanced of having two developers working together on the same piece of code – one doing the programming while the other looks over his shoulder as an observer or navigator (whatever that means), reviewing each line of code as it is written, strategizing, or simply being there.

This approach is known as pair programming and is considered an agile software development technique, adhering thus to the agile principles (see the agile manifesto). Beyond some intangible benefits, its intent is to reduce the volume of defects in software and thus ensure an acceptable quality of the deliverables. It's also an extreme version of the peer review concept.

Without considering whether pair programming adheres to the agile principles, the concept has several big loopholes. The first time I read about pair programming, it took me some time to digest the idea – I was asking myself what programmer would do that on a daily basis, watching as other programmers code or being watched while coding, each line of code being followed by questions and affirmative or negative nodding… Beyond their reputation as lone wolves, programmers can cooperate when the tasks ahead require it; however, asking a programmer to actively watch as others program won't work in the long run!

Talking from my own experience as a programmer and as a professional working together with other programmers, I know that a programmer sees each task as a challenge, a way of learning, of reaching beyond his own condition. Programming is a way of living, with its pluses and minuses.

Moreover, the complexity of the tasks doesn't reduce to handling the programming language but to solving the right problem. Solving the right problem is not something one can overcome with brute force but with intelligence. If using the programming language is the challenge, then the problem lies somewhere else and other countermeasures must be taken!

Some studies have found that the use of pair programming led to a reduction of defects in software; however, the numbers are misleading as long as they compare apples with pears. To statistically conclude that one method is better than the other means performing the same experiment with the different methods on a representative population. Unless one addresses the requirements of statistics, the numbers advanced are just fiction!

Just think again about the main premise! One doubles the expenditure for a theoretical reduction of defects?! Actually, it's more than double, considering that different types of communication take place. Without a proven basis, the effort factor can be somewhere between 2.2 and 2.5, and for an average project this can be a lot! The costs might be bearable in situations in which labor is cheap; however, programmers' cooperation is a must.

The whole concept of pair programming seems like a bogus idea, just like two drivers driving the same car! The approach might work when the difference in experience and skills between developers is considerable, as found in universities or apprenticeship environments, where the accent is put on learning and forming. It might also work for handling complex tasks, as some proponents claim; however, even then it's less likely that the average programmer will willingly do it!


12 March 2019

🧭Business Intelligence: Enterprise Reporting (Part XII: Reports’ Lifecycle)

Business Intelligence

Introduction

A report's lifecycle is the sequence of stages through which a report goes during the timespan of its ownership. The main stages are essentially the report's definition, development, testing and deployment; however, a report's life occurs within the context of IT processes like Change, Incident/Problem, Access, Availability, Information Security and Knowledge Management. To these one can add Data Management processes like Data Governance, Data Quality and Metadata Management. Therefore, the extended report lifecycle could take the following form:


The processes can be easily tailored to an organization's needs, even if it may take several attempts until the best mix is found. The activities introduced by the supporting processes don't necessarily change the way reports are developed, as long as the processes integrate smoothly into report authoring.

Definition Phase

The lifecycle of a report starts with a series of steps that lead to the report's definition and the requirements associated with it:



The starting point is the identification of a need for data. It can be a business question that needs to be answered, a decision that needs to be made, data needed to keep an operational, tactical or strategic objective under control, and so on. Such business situations can be referred to simply as (business) problems.
Problem definition
Problem definition (statement) is the process by which a business issue or need is clearly and concisely stated. This step might seem trivial and implied; however, in practice it is correlated with the largest volume of rework.

The dictum “a problem well stated is a problem half-solved” applies in the BI field as well. Unfortunately, there are cases in which the users want something other than what they stated, or they leave important details out. Sometimes the users aren't sure what they need/want, and it falls to the developer to help clarify the problem and put it within a context.

There are cases in which the users just request a report without specifying the problem they need to solve. This can work when the user has a good understanding of the data and the problem; however, this approach does not always work. Personally, I find it useful to define for each report also the underlying problem. I see it as a “win-win” situation in which the user invests some knowledge into the developer; the developer will thus better understand the business and, in time, can provide better help. A thorough understanding of the business and knowledge of the users and their needs can help minimize the volume of rework involved in reports' development.
Requirements definition
Requirements definition is the process by which functional and non-functional expectations, targets and specifications are elicited and documented.

Functional requirements specify what the report must do – how the report is structured or formatted, how data need to be visualized or navigated, to what file formats it needs to be exported, whether it needs to be printed, how the data need to be grouped and in which order, in what currency/language they need to be displayed, what data sources need to be used, etc. The functional requirements are typically listed in the use case and test script.

Non-functional requirements cover the report's accessibility, availability, performance, compliance, documentation, quality, maintainability, security or testability.

The degree to which a requirement can be fulfilled depends entirely on the reporting platform. One can differentiate between soft and hard constraints. Soft constraints can be overcome by adding more processing power, memory or other types of resources, while hard constraints can't be easily overcome, or can't be overcome at all. Of course, not all requirements are equally important. Important unfulfilled requirements can make a report unusable and, in extremis, can lead to choosing one reporting platform over another.

The requirements can be elicited by a developer or an analyst/consultant, or defined by the business itself. Organizations can simplify the process by defining a set of guidelines and standards that need to be considered in reports' definition. Normally, it is enough to reference the document(s) where the guidelines and standards are found. In contrast to other software artifacts, the requirements for reports can be gathered in a simplified version of a document. Quite often a checklist can help identify these requirements upfront with a minimum of overhead.
Report definition
Report definition is the process by which the report's content, logic and layout are explicitly defined – what attributes are needed for output and from what source, what static/dynamic parameters are needed, how the data need to be displayed/formatted, and what formulas, aggregations or ordering apply.

A report's definition can be anything between a simple statement summarizing what the report is about and a complex structure (mainly in the form of a mapping) reflecting in detail each attribute, constraint, formula, grouping or sorting.

A good definition should allow a developer to create the report as needed by the users, eventually with minimal deviations caused by the developer's understanding. The holy grail in report definition is finding a structure flexible enough to cover all the aspects of a report. Even if some structures allow such flexibility, sometimes it's almost impossible not to provide additional descriptions in textual form. The less insight the developer has into the business, the more textual descriptions and visuals need to be included to bridge the knowledge gap.
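
For illustration, below is a minimal sketch of how such a definition, reduced to a mapping, could translate into a query. All table, column and parameter names are hypothetical, chosen only to show how the attributes, sources, aggregations and sorting from a definition carry over into code:

-- Hypothetical mapping behind the definition (all object names assumed):
--   Country <- Customers.Country (grouping attribute)
--   Year    <- derived from SalesOrders.OrderDate
--   Revenue <- SUM(SalesOrders.Amount)
DECLARE @StartDate date = '20190101';  -- dynamic parameter from the definition

SELECT CST.Country
     , YEAR(SOR.OrderDate) AS [Year]
     , SUM(SOR.Amount) AS Revenue
FROM dbo.SalesOrders SOR
     JOIN dbo.Customers CST
       ON SOR.CustomerId = CST.CustomerId
WHERE SOR.OrderDate >= @StartDate
GROUP BY CST.Country
       , YEAR(SOR.OrderDate)
ORDER BY CST.Country
       , [Year];

Even a skeleton like this makes the attributes, sources and aggregation level explicit, leaving the layout details to the reporting tool.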
GAP Analysis
GAP Analysis is the iterative process by which the current state of a software artifact or situation is compared with the potential or desired state. It has become such an integral part of professionals' thinking that its role as a separate process is quite often ignored. In the context of report authoring it can be used when comparing the requirements against the current infrastructure and the data available, as well as when comparing the developed report against the requirements.

It can happen that the technical and data constraints don't allow building the report as needed by the users. The differences need to be mitigated, and eventually the requirements need to be changed to accommodate the reality. In extremis, one must consider whether the report still makes sense in the light of the modified requirements.
Solution formulation
Solution formulation is the process by which a formal (technical) solution is defined for the given requirements. It's a conceptualization (aka concept) of the requirements, and in many cases it's just a short description of the means by which the report will be built and what data sources will be used. In more complex cases it can include details about the changes needed in the infrastructure to support the report (e.g. creation/extension of tables and other database objects, ETL jobs, components, etc.), about the data that need to be collected, etc.

Of course, the conceptualization must be considered together with the report's definition. In fact, the report's definition can be considered part of the conceptualization. A conceptualization can cover multiple reports, and two or more different solutions can be provided for different sets of reports. The infrastructure can also make a formal concept superfluous, either when there is a single reporting platform or when clear rules are in place.
Prototyping
Prototyping is the iterative process of building a simplified version of the report for demonstration and evaluation purposes, so that users can better define the requirements, or to prove the concept. The prototype is a preliminary version that can be refined successively until the user's requirements reach a final form. It can take the form of a mock-up query that verifies the report's technical and logical feasibility, and/or an Excel layout that depicts how the report will look. Prototypes can facilitate the communication between the parties involved and can be considered part of the requirements.

A prototype might be needed in roughly one case out of five, though this number depends also on the number of queries available and on the knowledge of the source and business processes. Because a prototype can involve additional work, it's important to identify those cases in which a prototype makes sense and to keep the effort to a minimum, especially when an approval is involved in the process. Therefore, one should consider the most important characteristics that need to be proved (e.g. whether the data can be aggregated, matched, displayed at the requested level of detail, or in the requested format).
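
Such a mock-up can stay deliberately rough. A sketch like the one below, built on hypothetical purchase-order tables, would already prove that the records can be matched and aggregated at the requested level of detail:

-- Hypothetical mock-up: proves only that order lines can be matched to
-- items and aggregated per product group; layout and formatting are
-- intentionally ignored. All object names are assumptions.
SELECT ITM.ProductGroup
     , COUNT(DISTINCT POL.PurchaseOrderId) AS Orders
     , SUM(POL.Quantity) AS TotalQuantity
FROM dbo.PurchaseOrderLines POL
     LEFT JOIN dbo.Items ITM
       ON POL.ItemId = ITM.ItemId
GROUP BY ITM.ProductGroup
ORDER BY ITM.ProductGroup;

A NULL product group in the output would immediately reveal unmatched records – exactly the kind of feasibility issue a prototype is meant to surface before the actual development starts.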

With the help of self-service tools, the business has the capability to play with the data and find answers by itself, being thus able to create a prototype version of the report. Once the report meets the business needs, it can be standardized so it can be used organization-wide. It's recommended to standardize the reports that are used as part of the organization's processes; otherwise self-service can become a bottleneck for the organization.
Change Management
Change Management is the process of ensuring that the changes performed to a system – in this case a BI tool or the whole BI infrastructure – are performed with minimal disruption for the business and that risks are kept under control. Changes can be requested via standard requests or change requests. A standard request (SR) is a pre-approved change that involves low risk, is relatively common and follows a predefined procedure. In contrast, a change request (CR) requires the authorization of a board, e.g. the Change Advisory Board (CAB); it often involves risks and an investment, and the approach is less common.

Both are hard-copy or electronic templates that capture information about the changes, document them and track their status. They typically include the problem definition together with the users' requirements, the report definition and the formulation of the solution. What differentiates them is thus the approval process, which can sometimes be time-consuming, and the volume of formalism needed to manage the requests (e.g. tracking status, writing status reports, handling risks, etc.).

Unless infrastructural changes are necessary, the risks involved with the creation of reports are relatively small, especially when the reports are developed in-house. Reports developed by vendors involve more risks and imply investments that, in one form or another, need to be approved. Considering the particularities of the two approaches, I personally think that reports that can be developed with internal resources should be handled via SRs, while reports developed externally should be handled via CRs. Even if this categorization has the potential of creating some confusion, the use of SRs allows reducing the volume of effort necessary to manage the requests. I suppose solutions can also be found to request external changes via SRs (e.g. by using contingents and a set of well-defined rules).

24 July 2010

#️⃣Software Engineering: Programming (Part III: Football and Software Development)

 
Software Engineering
Software Engineering Series

I wanted to write this post during the South Africa World Cup 2010; however, because of the lack of time and because I was waiting for some statistics I could use, here I am, two weeks after the final whistle of the game for the first place. Football and Software Development – two domains that seem to have nothing in common, even if many software developers like to play football, and many football players spend a lot of time in front of their laptops. There is actually an important coordinate the two share – teamwork. Of course, that's common to many other sports, though some characteristics matter mainly to soccer – the small rate of deliverables (goals) and the rate of failures (wrong passes) – and I bet there are other characteristics common to most team sports, like the division and specialization of work, migration of players, “project”-oriented work, flow of money, etc.

Looking back at the games of this World Cup, we have to notice that, with a few exceptions, there wasn't a big difference between the teams anymore – a trend that could be seen during the last championships too – and there were no more individual players winning one game after the other. Nowadays what counts is the collective work, the cohesion of a team, the way the players respect the tactical indications given to them, the way they communicate and feel each other on the field. It didn't matter anymore that you were playing against a Ronaldo, a Messi, a Lampard, a Drogba or a Rooney; small teams like Australia, Chile, New Zealand or South Korea fought as equals against the favorite teams of this tournament.

What is more important to notice is that teams whose players cost and make millions didn't function as well as expected, because they didn't play as a team, because the sense of individuality prevailed, because there was no adequate communication inside the team, or because the trainer didn't know how to make himself respected, how to select his team, how to get the best out of his players, or how to change the tactics to counteract those of the adversary. So the team with the best paid or most skilled players, the team that puts in more effort or controls the game, the team with the most dynamic, effective (in terms of goals scored), beautiful or pragmatic play doesn't necessarily win the game; in the same way, the best trainer can make mistakes too, can easily be misunderstood, or can become overnight a persona non grata for his team or the public.

The same observations can be applied to software development, and, with the risk of being criticized, I would say that the team with the best developers/professionals does not necessarily make a project successful – especially when the sense of individuality prevails over that of the team, when the team members don't play as a team, when there is no adequate communication, or when the manager doesn't make himself respected and doesn't know how to get the best out of his people and make a team successful regardless of its size and skill set. I tend to believe that in software development, same as in football, what matters is the joy of playing, the joy of playing in a team, being an example of professionalism, collaborating in achieving the purpose, helping each other become better – no matter if one is called player, trainer, masseur, doctor or federation member.

I think that trainers have to learn more about project management, given that building and leading a competitive team has a lot to do with projects and project management, being driven by similar goals, objectives, scope, etc. On the other side, I feel that managers have to learn more from the behavior and knowledge of a trainer – to know when to be authoritative and when to be a friend. And I expect that there are many other aspects the two types of professionals share.

IT professionals can also learn from football players, especially in what concerns the team spirit: what it takes to be and become a team, what it means to have your place in the field, to do your job right and for the best of the team. Of course, the self-sacrifice must not be taken to the extreme, as some players do. As for the football players, they could learn from developers the simplicity, the dedication to their work and to becoming better, to learning something new, to finding and knowing their place in life, and, overall, to being humble.

As for the executives dealing with IT projects, they must learn that a defender can't become overnight a goal-getter or a goalkeeper, that a newcomer to the team needs time to adapt and must be helped to become an integral part of it, and that the newcomer needs to find his pace and place in the team. Executives must learn that it takes time, effort and a good strategy to build successful teams, and, even if everything revolves around money (although it shouldn't), a balance should be kept between investments, effort and rewards, with some continuity, respect and support toward achieving successful and competitive teams.

Note:
If you liked this post or found it interesting, then you might also be interested in Yohasna's post What can we learn from SPAIN's World Cup Victory in the world of Software? and my answer to it, Satya's Football and Software teams.. How different are they?, or B. Dwolatzky's article Football and Software Development are both team sports.


