Showing posts with label rapid prototyping.

18 February 2024

🧭Business Intelligence: A Software Engineer's Perspective (Part III: More of a One-Man Show)

Business Intelligence Series

Some organizations probably still recount stories about a hero who knew the business so well and was so technically proficient that he or she could provide data-driven answers to most business questions. Unfortunately, the days of such solo acts are long gone - the world moves too fast, there are too many questions looking for an answer, many of them requiring a solution before the problem is even defined, a whole infrastructure is needed to harness the potential of technologies and data, and the volume of knowledge required grows exponentially.

One approach to handling the gap between the initial and the required knowledge for solving data-based problems is to build all the required knowledge in one person, either on the business or on the technical side. More commonly, an organization hires a data analyst and builds the knowledge in that person, an approach with good chances of working until the volume of work exceeds a person's limits. The data analyst is then forced to ask for the workload to be prioritized, which works on some occasions, while on others one has to compromise on quality and/or work overtime, with all the issues deriving from that.

There are also situations in which the complexity of the problem exceeds a person's ability to handle it, which is not necessarily a matter of intelligence but of know-how. Some organizations respond to complexity with complexity, while others are more creative and break the complexity into manageable pieces. In both cases, more resources are needed to cover the knowledge and resource gaps. Hiring more data analysts can get the work done, though it's no recipe for success. The more diverse the team, the higher the chances of succeeding, though again it's a matter of creativity and of covering the knowledge gaps. Sometimes it's more productive to use the resources already available in the organization, though this can involve other challenges.

Even if much of the knowledge gets documented, as soon as the data analyst leaves the organization a void is created until a similar resource can fill it. Organizations cope better with these challenges if they disseminate the knowledge among data professionals as well as within the business. The more people involved, the higher the level of retention and the higher the chances of reusing the knowledge. However, the more people involved, the higher the costs, especially those associated with wasted effort.

Organizations can compromise by choosing one or two resources from each department to be involved in knowledge dissemination, ideally people with an affinity for data and technology. They become data citizens - people who use data, data processing and visualization to build solutions that support their job. Data citizens are expected to act as showmen in their knowledge domain and do their magic whenever such requirements arise.

Having a whole team of data citizens opens new opportunities for organizations, though such resources need, besides domain knowledge and data literacy, technical knowledge as well. Unfortunately, many people reach their limitations in this area. Beyond the learning effort, understanding what good architecture, design and technique mean is not for everybody, and this is where the concept of citizen data analyst or citizen data scientist breaks down, independently of the tools used.

A data citizen's effort works best in data discovery, exploration and visualization scenarios, where the rapid creation of prototypes reduces the time from idea to solution. However, the results are personal solutions that need to be validated by a technical person; pieces of them may have to be redesigned and moved around until enterprise solutions result.


15 May 2019

#️⃣Software Engineering: Programming (Part XV: Rapid Prototyping - Introduction)

Software Engineering Series

Rapid (software) prototyping (RSP) is a group of techniques applied in Software Engineering to quickly build a prototype (aka mockup, wireframe) to verify the feasibility and the technical or factual realization of an application architecture, process or business model. A similar notion is that of the Proof-of-Concept (PoC), which attempts to demonstrate, by building a prototype, starting an experiment or running a pilot project, that a technical concept, business proposal or theory has practical potential. In other words, in Software Engineering RSP encompasses the techniques by which a PoC is conducted.

In industries that deal with physical products, a prototype is typically a small-scale object made from inexpensive material that resembles the final product to a certain degree, with some characteristics, details or features ignored entirely (e.g. the inner design, some components, the finishing). Building several prototypes is much easier and cheaper than building the end product, and they allow playing with a concept or idea until it gets close to the final product. Moreover, this approach reduces the risk of ending up with a product nobody wants.

A similar approach and reasoning are used in Software Engineering as well. Building a prototype allows focusing from the beginning on the essential characteristics or aspects of the application, process or (business) model under consideration. Depending on the case, one can focus on the user interface (UI), database access, an integration mechanism or any other feature that involves a challenge. As with the UI, one can build several prototypes that demonstrate different designs or architectures. The initial prototype can go through a series of transformations until it reaches the desired form, after which more functionality is integrated and the end product refined gradually. This iterative and incremental approach is known as rapid evolutionary prototyping.

A prototype is especially useful when dealing with uncertainty, e.g. when adopting (new) technologies or methodologies, when mixing technologies within an architecture, when the details of the implementation are not known, when exploring an idea, when the requirements are expected to change often, etc. Rapidly building a prototype allows validating the requirements, responding agilely to change, and getting customers' feedback and sign-off as early as possible, showing them what's possible and what the future application can look like, and all this without investing too much effort. It's easier to change a design or an architecture in the concept and design phases than later.

In BI, prototyping usually comes down to building queries that identify the source of the data, reengineer the logic from the business application, and prove whether the logic is technically feasible, feasibility translating into robustness, performance and flexibility. In projects with a broader scope one can attempt building the needed infrastructure for several reports, to make sure that the main requirements are met. Similarly, one can use prototyping to build a data warehouse or a data migration layer. Thus, one can build all or most of the logic for one or two entities, resolving the challenges for them, and once those challenges are solved one can go ahead and gradually integrate the other entities.
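
As an illustration, here's a minimal sketch of such a prototype query persisted from VBA, using MS Access as the host; the table, field and query names (Orders, Amount, qryOpenOrdersByCustomer) are hypothetical:

Sub PrototypeReportQuery()
    ' prototype the report logic as a saved query before hardening it into an enterprise solution
    Dim sql As String
    sql = "SELECT o.CustomerID, Sum(o.Amount) AS Total " & _
          "FROM Orders AS o " & _
          "WHERE o.Status = 'Open' " & _
          "GROUP BY o.CustomerID;"
    CurrentDb.CreateQueryDef "qryOpenOrdersByCustomer", sql  ' reusable, much like a view
End Sub

Once such a query proves robust and fast enough on real volumes, its logic can be ported to the target platform.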

Rapid prototyping can also be used in the implementation of a strategy or management system to prove the concepts behind it. One can thus start with a narrow focus and gradually integrate more functions, processes and business segments in iterative and incremental steps, each step allowing one to integrate the lessons learned, address risks and opportunities, check the progress and change direction as needed.

Rapid prototyping can prove to be a useful tool when given the chance to demonstrate its benefits. Through its iterative and incremental approach it allows reaching the targets efficiently.



07 May 2019

💼Project Management: Methodologies (Part IV: Agility under Eyeglasses II)

Mismanagement


Employees are used to following procedures and processes, and when these aren't available, insecurity rules - each day another idea is advanced about how things are supposed to work. In practice, the Agile approaches (incl. Agile PRINCE2) focus on certain aspects and ignore specific Project Management activities that need to be performed inside a project - releasing resources for the project, getting users on board, getting management's buy-in, etc. Therefore, they need to be used together with a methodology that provides the missing processes. It becomes problematic when the Agile approaches are considered self-sufficient and the Project Management practices and principles are assumed to no longer apply.

It's true that the Agile methods attempt to reconcile disciplined project execution with creativity and innovation; however, innovation is typically needed in design (incl. prototyping), while in programming there isn't a lot of room for creativity per se. The real innovation appears when the customer lists the functionality it needs from a system and the vendor, after analyzing all the related requirements, is capable of evaluating and proposing a solution built on the industry's trending technologies. That's innovation, not changing controls in user interfaces!

User stories are good for situations in which an organization doesn't know what it's doing, or in which the tasks are deeply segmented and specialized. Starting from user stories and building upwards to processes can prove to be a waste of time the customer pays for, while the approach leaves little room for innovation. In big projects it's also difficult to sense the contradictions between user stories or their duplication. Even if user stories perhaps (but not necessarily) allow a better effort estimate, the level of detail can become overwhelming for any skilled solution architect.

It's also true that an agile approach needs a culture with certain characteristics. A culture can't be changed within one project or several projects running in parallel. Typically, it's recommended to start with a pilot, assess the organization's readiness, disseminate knowledge, run several small to medium projects and build from there. Starting a big project with an agile methodology will certainly involve more challenges, to the point where the challenges push back.

One sign of agility is when self-organizing teams emerge within projects; however, it takes time and training to build such teams. The seeds must be planted long before for such teams to emerge, and the key is being able to work in them. In extremis, multiple self-organizing teams appear, each with its own political agenda, agendas that don't necessarily match the project manager's or the stakeholders' agendas, and from here arises a wide range of potential conflicts.

The psychological effect of tight sprints (iterations) and daily status meetings over the whole duration of a project is not to be neglected. It builds unnecessary stress and, unless the planning reaches perfection, the programmer or consultant will often find himself on the defensive. The frequent meetings can easily become a nuisance and in extremis can lead to extreme behavior that affects productivity and the health of those involved.

Personally, I wouldn't recommend using an Agile methodology for a big project like an ERP implementation unless it's adequately adapted to the organization's needs. This doesn't necessarily mean that the Agile methods aren't suitable for big projects; it means that the risks are high, because in big projects there's the chance for all the issues mentioned above to occur.

Despite their weak points, the Agile methods, when adequately applied, have the chance of performing better than the "traditional" approaches. Even if people tend to see mostly the negative sides, there's a lot of potential in being agile.

💼Project Management: Methodologies (Part III: Agility under Eyeglasses I)

Mismanagement

There are more and more posts in cyberspace speaking out against the agile practices and the way they are understood and implemented by organizations. Some try to be hilarious [5]; others keep a scholarly seriousness [1] [2] [3] [4], and all of them make some valid points. In each remark there are some seeds of truth, even if context-dependent.

Personally, I embrace an agile approach when possible; however, I find it difficult to choose between the agile methodologies available on the market, because each of them introduces concepts that contradict what it means to be agile - to respond promptly to business needs. That doesn't mean one must consider every requirement, but that it's appropriate to consider those which have a business justification. Moreover, organizations need to adapt the methodologies to their needs, and seldom vice versa.

Considering the Agile Manifesto, it's difficult to take seriously statements that lack precision; formulations like "we value something over something else" are more of a wish than principles. When people don't understand what the agile "principles" mean, one occasionally hears statements like "we need no documentation", "we need no project plan", "the project plan is not important", "Change Management doesn't apply to agile projects" or "we need only high-level requirements because we'll figure out where we're going on the way". Because of the lack of precision, a mocker can reduce the lesser concept to null and still keep the agile "principles" valid.

The agile approaches seem to lack control. If you put the users in charge of the scope, you risk ending up with a product that offers a lot but misses the essential, and is thus unusable or usable only to a lower degree. Agile works well for prototyping something to show to the users, when the products are small enough to easily fit within an iteration, or when the vendor wants to gain a customer's trust. Therefore, agile works well with BI projects, which in general combine all three aspects.

An abomination is working in fixed sprints or iterations of one or a few weeks and then chopping the functionality to fit those time intervals. If you have the luck of sign-offs and other activities that steal your time, the productive time shrinks by up to 50% (the smaller the iterations, the higher the percentage). What's inconceivable is that people ignore the time spent on bureaucracy. If this way of working repeats in each iteration, the project duration multiplies by a factor between 2 and 4, the time spent on Project Management increasing by the same factor. What's not understandable is that despite the bureaucracy, adherence to delivery dates, budget and quality is still required.

Sometimes one has the feeling that people think software development and other IT projects work like building a house or manufacturing a mug: you choose the colors, the materials, the dimensions, and voila, the product is ready. IT projects involve a lot of the unforeseen, and one must react to it agilely. Herein resides one of the most important challenges.

Communication is an important challenge in a project, especially when multiple interests are involved. Face-to-face conversation is one of the nice-to-have items on the wish list, however in practice it isn't always possible; one can't expect all the resources to be available to meet and decide. In addition, one needs to document everything, from meeting minutes to Business Cases and requirements. A certain flexibility in changing the requirements is needed, though one can't change them arbitrarily; there must be a concept behind the changes, otherwise the volume of extra work can easily make the project budget explode.
Resources:
[1] Harvard Business Review (2018) Why Agile Goes Awry - and How to Fix It, by Lindsay McGregor & Neel Doshi [Online] Available from: https://hbr.org/2018/10/why-agile-goes-awry-and-how-to-fix-it
[2] Forbes (2012) The Case Against Agile: Ten Perennial Management Objections, by Steve Denning [Online] Available from: https://www.forbes.com/sites/stevedenning/2012/04/17/the-case-against-agile-ten-perennial-management-objections/#6df0e6ea3a95
[3] Springer (2018) Do Agile Methods Work for Large Software Projects?, by Magne Jørgensen [Online] Available from: https://link.springer.com/chapter/10.1007/978-3-319-91602-6_12
[4] Michael O Church (2015) Why “Agile” and especially Scrum are terrible [Online] Available from: https://michaelochurch.wordpress.com/2015/06/06/why-agile-and-especially-scrum-are-terrible/
[5] Dev.to (2019) Mockery of agile, by Artur Martsinkovskyi [Online] Available from: https://dev.to/arturmartsinkovskyi/mockery-of-agile-5bdf

12 October 2010

🔏MS Office: Why I (dis)like MS Access

In the previous post, "The Limitations of MS Access Database", I highlighted a few of the limitations of MS Access as a database, ignoring the other two or three important aspects - Access as a development, reporting and data analysis platform. In this post I'll take up the topic again from a general and personal perspective, considering some of the features that I like and that make Access a useful and powerful tool, and attempting also to mention some of the usage limitations I've run into over time. At the risk of repeating myself, I can't say I'm an expert in MS Access even if I've provided several solutions based on it, its use in the various contexts not always being so inspired - one of the reasons why in the first post, "Is MS Access or MS Excel the Answer to Your Problem?", I insisted on this aspect.

Ad-Hoc Database

MS Access is a file server-based relational database and one of the most used databases, though it can't be compared with more mature RDBMS like Oracle, MySQL, SQL Server, Sybase, PostgreSQL, Teradata, Informix or DB2, which are richer in features, especially in what concerns administration, transactional and concurrent processing, scalability, stability, availability, performance, reliability, portability, replication, integration, security, manageability and extensibility, as well as the degree to which they fit into the overall architecture of an enterprise, where topics like Business Intelligence, Data Warehousing, SOA and Cloud Computing are relevant. These are some of the reasons why I categorized Access as a personal or ad-hoc database, being, at least from my point of view, more appropriate for small-size or personal solutions. In essence, Access has the characteristics of a relational database, though the lack of the mentioned features makes it less desirable. Nobody denies Access' usefulness; the point is that, compared with a full-featured RDBMS, Access has no chance, a fact also reflected in the following market share diagrams:

[Diagrams] DBMS Market 2006 (JoinVision e-Services, via [2]); DBMS Market 2008 (Gartner, via MySQL [1])

Even if the diagrams are a few years old, I think they are still representative of the state of the art in the world of databases, the first diagram providing a historical perspective, the second the "current" and "future" tendencies. It's not the first time I've seen MS Access and SQL Server represented together even if they belong to different technology stacks, Access' strength and weakness being deeply rooted in its affiliation with the MS Office set of tools. It would be interesting to know the ratio between the numbers of Access and SQL Server installations then, and what the ratio is now, with SQL Server Express taking over Access' role of personal or small-scale database.

The statistics are less representative when it comes to people, their interests and their immediate needs. The bottom line is that Access is an easy-to-use database with a pretty low learning curve; you don't need to know the fancy stuff about databases, you can experiment and learn it as an add-on to your job, making the consumption of data much easier, at least in theory. Do you have your data stored across several Excel files? You can import or copy-paste them into Access and there you have an ad-hoc database, then create several queries on top of them with the Query Designer or the Query Editor, and this without any knowledge of SQL. The saved queries can be reused much like views, they can be parameterized, the parameters can be bound much like in user-defined functions, and the results made available for further consumption. I can't say I've met any other software tool that simplifies the design and consumption of databases so much. The simplicity of Access query design comes at a price, though, especially when you want to achieve more from your database, the minimal feature set making it difficult to design complex queries, with Access requiring a different mindset in problem solving. In addition, those used to the rich features of an RDBMS won't feel too comfortable using the Query Designer or Editor, the ANSI syntax being inflexible and the troubleshooting quite painful.
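
As a small, hedged illustration of the workflow just described (the file path C:\Data\Sales.xlsx and the names Sales, SaleDate and qrySalesSince are made up for the example):

Sub BuildAdHocDatabase()
    ' import an Excel sheet as a table (True = the first row contains the field names)
    DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12Xml, _
        "Sales", "C:\Data\Sales.xlsx", True
    ' a parameterized saved query; the parameter is bound at execution time,
    ' much like an argument of a user-defined function
    CurrentDb.CreateQueryDef "qrySalesSince", _
        "PARAMETERS [StartDate] DateTime; " & _
        "SELECT * FROM Sales WHERE SaleDate >= [StartDate];"
End Sub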

I used Access as a database only when I had no other alternative, preferring to store the data in an RDBMS like SQL Server or Oracle. Instead, I used Access as a presentation layer, allowing users, for example, to access and analyze the data. On many occasions I worked with Access databases in small projects or when enhancing existing applications, spending many hours tweaking Access queries or porting such queries to other RDBMS. I had the occasion to work with several tools that used Access as a backend. One of them, IQ Insight, used to assess the quality of data, was an interesting tool to work with, though it paid tribute to the stability and speed of its database; in a subsequent implementation project we decided to take it out of the landscape, and the VB + SQL Server solution that replaced it improved the performance from a matter of hours to minutes. I know that many people out there love Access as a database, though once you've experienced the power, performance and flexibility of other databases, you don't feel like returning to the past.

Data Analysis Tool

When you have multiple Excel or other data sources, you don't need to store your data in Access itself; it's enough to link your text, Excel or any other ODBC data source, build a query on top of them, and there you have your data at your disposal on the fly, something that Data Warehousing and Business Intelligence tools hardly manage to do when considering all the people's needs. By importing the data into your Access database you can even correct some of the issues inherent in the data, use some mappings to translate the data, or use several queries to aggregate the data at the needed level of detail or get new insights. From a mapping table or a query to creating a whole data analysis framework is just a small step, and this without too much involvement of the IT guys. Even more, the framework can be used by your colleagues too; they can use it directly or indirectly by re-linking the results of your analysis with a minimum of effort, and they can even improve the character of your analysis or find other purposes for the data. The result is a complex network of interconnected Access databases, and it's only a matter of time until it gets out of control, for example by not knowing how a change in one of the queries impacts the other known and unknown users of your data, whether you are using the actual data, whether the data have been tampered with, and so on. There should be no wonder when people arrive at different numbers in their reports, or when the numbers don't tie together, though more modern reporting frameworks deal with these types of issues as well, don't they? In addition, you end up with multiple instances of the same data, or with data distributed and isolated in an uncontrolled fashion, not the best strategy for an enterprise though...
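
A sketch of the linking scenario described above; the DSN (ErpDsn), the file (Budget.xlsx) and the table and field names are assumptions:

Sub LinkExternalSources()
    ' link (not import) an Excel file and an ODBC table, then query across the two
    DoCmd.TransferSpreadsheet acLink, acSpreadsheetTypeExcel12Xml, _
        "BudgetXls", "C:\Data\Budget.xlsx", True
    DoCmd.TransferDatabase acLink, "ODBC Database", _
        "ODBC;DSN=ErpDsn", acTable, "dbo.Actuals", "Actuals"
    CurrentDb.CreateQueryDef "qryBudgetVsActuals", _
        "SELECT b.CostCenter, b.Amount AS Budget, a.Amount AS Actual " & _
        "FROM BudgetXls AS b INNER JOIN Actuals AS a ON b.CostCenter = a.CostCenter;"
End Sub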

I used Access as a data access endpoint for data available in various data sources, allowing users to analyze and recombine the data by themselves, though mainly to overcome the limitations of the standard reporting tools available. This was combined with an attempt to move most of the logic created by users into a standardized form, limiting the risks of running into the Access fallacies described above. Sure, more could be done to avoid such pitfalls, for example having adequate reporting and data analysis tools, having in place a Data Management Policy that addresses common data problems, training users, etc.

Reporting

The possibility of presenting data in a report-like fashion is one of the greatest advantages of Access, the tabular structure being easy to integrate with charting, paging, result breaks, formulas, filtering/parameterization, rich formatting, subreports and other types of report structures (e.g. footer, header), in other words the ingredients of a typical report. The combination of ad-hoc data analysis and reporting is quite an advantage, depending on the users' skills in making the most out of it. Reports' functionality can be extended using the report object model and VBA; the fact alone that a report can be entirely created and modified at runtime is quite a big deal.
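
To give a taste of runtime report creation, a minimal VBA sketch; the record source (qrySalesSince) and the bound field (Amount) are hypothetical:

Sub BuildReportAtRuntime()
    Dim rpt As Report, rptName As String
    Set rpt = CreateReport                       ' a new blank report, opened in design view
    rpt.RecordSource = "qrySalesSince"
    ' a bound text box in the detail section (positions are in twips)
    CreateReportControl rpt.Name, acTextBox, acDetail, , "Amount", 100, 100
    rptName = rpt.Name
    DoCmd.Close acReport, rptName, acSaveYes     ' persist the generated design
    DoCmd.OpenReport rptName, acViewPreview
End Sub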

I used Access reports only in applications built entirely on MS Access, preferring whenever possible to move the reports to more standardized platforms. Sometimes I find it more useful to export the data directly to Excel or to a more portable format like PDF (also possible with Access reports), thus eliminating the intermediary platform. In the end it depends on the users' preferences and the organization's infrastructure.

Rapid Prototyping

Access can be used as a frontend for various types of applications, and you don't need to put too much effort into your application. It's enough to drop a form and link it to a table, then link the screens together, and there you have an already functioning application, a fact that makes Access an ideal tool for rapid prototyping.
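
For the scripted equivalent of dropping a form and linking it to a table, a minimal sketch (the Sales table is a placeholder):

Sub PrototypeForm()
    Dim frm As Form, frmName As String
    Set frm = CreateForm                         ' a new blank form in design view
    frm.RecordSource = "Sales"                   ' bind the form to a table
    ' a command button that could open the next screen of the prototype
    CreateControl frm.Name, acCommandButton, acDetail, , , 100, 100
    frmName = frm.Name
    DoCmd.Close acForm, frmName, acSaveYes
    DoCmd.OpenForm frmName                       ' an already functioning screen
End Sub

In practice one rather drags the table onto a form in the designer; the point is how little stands between a table and a working screen.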

I used Access in several projects to build proof-of-concept prototypes, allowing customers to gather requirements and evaluate the concept and the available functionality. There were also cases in which the prototypes were comparable in performance with the applications that replaced them, in some respects even better, though that's a matter of architecture, skills and sometimes infrastructure.

Extensibility: VBA

A person can create in Access a data analysis framework, a report or a prototype without writing a single line of code, richer functionality being available through VBA, which is nothing more than old-fashioned VB hosted on Access' object model. VBA extensibility refers here to the possibility of going beyond the wizard and drag-and-drop functionality provided by Access, for example by adding complex validation to forms, linking forms, or altering and creating content at runtime. Not everybody needs to go that far; however, those who have used formulas or have some programming experience will find VBA easy to learn. Those wanting to change Access' default behavior or provide missing functionality will have to dig deeper into VBA's secrets and use built-in or third-party libraries. For example, to go beyond the "sequential" access to data that Access provides, a programmer has to use ADO or DAO, whose built-in transactional functionality can be used to cover the transactional processing that isn't built into Access itself. With some exceptions, in theory you can do with VBA anything you could do with old-fashioned VB, though with VB.Net the gap to VBA increased considerably (see Converting Code from VBA to Visual Basic .NET for the differences). There are also some limitations, for example around adding controls to Access forms at runtime, and I remember finding a few others over time, some of them deriving from bugs in the tool itself.
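
For example, a hedged sketch of wrapping updates in a DAO transaction, functionality that Access queries alone don't offer (the Sales table and the filters are made up):

Sub UpdateWithTransaction()
    Dim ws As DAO.Workspace, db As DAO.Database
    Set ws = DBEngine.Workspaces(0)
    Set db = ws.Databases(0)                     ' the current database in the default workspace
    ws.BeginTrans
    On Error GoTo RollbackAll
    db.Execute "UPDATE Sales SET Amount = Amount * 1.1 WHERE Region = 'EMEA'", dbFailOnError
    db.Execute "DELETE FROM Sales WHERE Amount IS NULL", dbFailOnError
    ws.CommitTrans
    Exit Sub
RollbackAll:
    ws.Rollback                                  ' undo both statements if either fails
End Sub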

Extensibility: .DLLs

I mentioned that it's possible to use third-party libraries in Access, this functionality relying on COM+ and its predecessors DCOM, COM and ActiveX, technologies that allow communication between components not only on the local computer but also across distributed networks or the internet, as in the case of ActiveX. In this way it's possible to encapsulate functionality in libraries saved as .dlls and distribute them with your applications or reuse them in other applications. Writing COM classes is a job for programming languages like C++, VB, VB.Net, C#, etc. The old-fashioned VB was great for creating and debugging COM components in a matter of minutes; in theory, any piece of code can be encapsulated in such a component. The possibility of extending MS Access' functionality with such libraries opens the door to an unlimited number of architectural scenarios.
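
As an illustration, consuming a COM component from VBA via late binding; Scripting.FileSystemObject is a standard Windows scripting component, while the second ProgID is purely hypothetical:

Sub UseComComponent()
    Dim fso As Object
    Set fso = CreateObject("Scripting.FileSystemObject")  ' resolved at runtime by its ProgID
    If fso.FileExists("C:\Data\Budget.xlsx") Then Debug.Print "source file found"
    ' an in-house .dll registered for COM would be consumed the same way, e.g.:
    ' Set calc = CreateObject("MyCompany.TaxCalculator")  ' hypothetical ProgID
End Sub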

Extensibility: Add-ins

Add-ins form a special type of component, rooted in OLE and later based on COM, that uses the MS Office object model architecture, their primary utility lying in the fact that they make it possible to provide new features for MS Office itself. Examples of such "bonus" features are the Save as PDF add-in for Access 2007 or the Open Database Connectivity add-in for Excel. I used add-ins only to extend Excel's UI-based functionality, therefore I can't say too much about their use in Access. For more see the Building COM Add-ins for Office Applications material available on MSDN.

Database Templates

I noticed that MS Access 2007 comes with several templates (e.g. Assets, Contacts, Sales pipeline) that can be extended or used to learn how an application is designed. Doing a little research, I found out that it's possible to create templates for whole databases, reports or forms. I haven't used templates in Access so far, but they could prove to be an interesting feature where common architectural or functional characteristics are found.

References:

[1] MySQL (2010) Market Share [Online] Available from: http://www.mysql.com/why-mysql/marketshare/ (Accessed: 10 October 2010)

[2] Creative System Design (2010) Databases [Online] Available from: http://online.creativesystemdesigns.com/projects/databases.asp (Accessed: 10 October 2010)

07 February 2007

🌁Software Engineering: Rapid Prototyping (Definitions)

"Rapid prototyping is a method that involves creating a series of prototypes in rapid, iterative cycles. Normally, a prototype is created quickly, presented to users in order to obtain feedback on the design, and then a new prototype is created that incorporates that feedback. This cycle is continued until a fairly stable, satisfactory design emerges, which informs the design of a production-scale system." (M Cameron Jones, "Patchwork Prototyping with Open Source Software", 2007)

"The use of rapid prototyping methodologies is to reduce the production time by using working models of the final product early in a project tends to eliminate time-consuming revisions later on, and by completing design tasks concurrently, rather than sequentially throughout the project. The steps are crunched together to reduce the amount of time needed to develop training or a product. The design and development phases are done simultaneously and the formative evaluation is done throughout the process." (Irene Chen, "Instructional Design Methodologies", 2008)

"A step in software development process that supports user’s involvement in design by allowing users to see and experience the final system before it is built." (Irina Kondratova & Ilia Goldfarb, "Culturally Appropriate Web User Interface Design Study: Research Methodology and Results", 2011)

"A family of technologies that enable the relatively fast and low-cost production of products for testing and evaluation purposes." (Rachel Heinen et al, "Tools for the Process: Technology to Support Creativity and Innovation", 2015)


