A Software Engineer and data professional's blog on SQL, data, databases, data architectures, data management, programming, Software Engineering, Project Management, ERP implementation and other IT related topics.
05 December 2010
🔦Process Management: Business Process Management (Definitions)
💎SQL Reloaded: A Look into Reverse
-- reversing a string using Reverse function
SELECT Reverse('string') Example1
, Reverse(Reverse('string')) Example2
, Reverse(1234567890) Example3
, Reverse(Reverse(1234567890)) Example4
Example1 | Example2 | Example3 | Example4 |
gnirts | string | 987654321 | 1234567890 |
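For example, one of the few situations in which Reverse really shines is extracting the text after the last occurrence of a delimiter, such as the file name from a path (the path below is just an illustrative value):
-- extracting the file name after the last backslash (illustrative value)
DECLARE @Path varchar(100)
SET @Path = 'C:\Temp\Reports\sales.xlsx'

SELECT Reverse(Left(Reverse(@Path), CharIndex('\', Reverse(@Path))-1)) [FileName]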
It seems there is not much to be said about the Reverse function; in the few situations in which you need it, it’s great to have it, otherwise you would have to build it yourself. Let’s imagine there were no Reverse function in SQL Server: how would we provide such functionality using the existing features? In general such an exercise might look like a waste of time, though it can be useful for presenting several basic techniques. In this case the Reverse functionality can be provided by a simple loop that iterates through the string’s characters, changing their order. The below piece of code should return the same output as above:
-- reversing a string using looping
DECLARE @String varchar(50)
DECLARE @Retval varchar(50)
DECLARE @Index int

SET @String = 'string'
SET @Index = 1
SET @Retval = ''

WHILE @Index <= Len(@String)
BEGIN
    SET @Retval = Substring(@String, @Index, 1) + @Retval
    SET @Index = @Index + 1
END

SELECT @Retval
Some types of problems whose solutions involve the sequential processing of values within loops can also be solved using recursive techniques (recursion), in which the output of a given intermediary step is based on the previous step. Recursion implies a slightly different view of the same problem; it could prove less efficient in some cases and quite a useful technique in others. SQL Server provides two types of recursion: one at object level, involving functions, stored procedures or triggers, and one at query level, implemented in common table expressions. Recursive functions, functions that call themselves, have been in SQL Server for quite a while, though few of the database books I’ve read really talk about them. Probably I will detail the topic in another post; for the moment I will show only the technique at work. In this case the recursive approach is quite simple:
-- reversing a string using a recursive function
CREATE FUNCTION dbo.ReverseString(
    @string varchar(50))
RETURNS varchar(50)
AS
BEGIN
    RETURN (CASE
                WHEN Len(@string)>1 THEN dbo.ReverseString(Right(@string, Len(@string)-1)) + Left(@string, 1)
                ELSE @string
            END)
END
-- testing
SELECT dbo.ReverseString('string') Example1
, dbo.ReverseString(dbo.ReverseString('string')) Example2
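One caveat worth mentioning: SQL Server limits the nesting of function calls to 32 levels, so the recursive version only works for strings of up to about 32 characters; an illustrative check (the Replicate length is an arbitrary value above the limit):
-- a 40-character input exceeds the maximum nesting level of 32,
-- so the call below fails with a nesting level error
SELECT dbo.ReverseString(Replicate('a', 40)) Example3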
Common table expressions (CTEs) have, since their introduction with SQL Server 2005, become quite an important technique for processing hierarchical structures like BOMs, which involve the traversal of the whole tree. A sequence is actually a tree with a width of one node, so it looks like a job for a CTE:
-- reversing a string using a common table expression
DECLARE @String varchar(50)
SET @String = 'string'

;WITH CTE(String, [Index])
AS (
    SELECT Cast('' as varchar(50)) String
    , 1 [Index]
    UNION ALL
    SELECT Cast(Substring(@String, [Index], 1) + String as varchar(50)) String
    , [Index] + 1 [Index]
    FROM CTE
    WHERE [Index]<=Len(@String)
)
SELECT String Reversed -- the accumulated string from the CTE, not the original @String
FROM CTE
WHERE [Index]-1=Len(@String)
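A small caveat: a recursive CTE is by default limited to 100 recursion levels, so for strings longer than about 100 characters the final SELECT above would need the MAXRECURSION query hint, something along these lines:
-- lifting the default recursion limit of 100 for longer strings
SELECT String Reversed
FROM CTE
WHERE [Index]-1=Len(@String)
OPTION (MAXRECURSION 0) -- 0 removes the limit; any value up to 32767 can be specified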
Unlike the previous two solutions, the CTE approach involves more work, work that maybe isn’t justified just for the sake of using the technique. When multiple alternative approaches exist, I prefer the (simplest) solution that offers the highest degree of flexibility. In this case that’s the loop, though in other cases it could be a recursive function or a CTE.
30 November 2010
🧭Business Intelligence: Enterprise Reporting (Part IX: How Many Reporting Systems Do You Need?)
Business Intelligence Series
For practical reasons, I’d like to reformulate the above question as follows:
1. Is one reporting system enough, or do we need more than one?
2. What’s the optimum number of reporting systems for an organization?
A blunt answer to the first question would be that an organization might need as many reporting systems as are needed to satisfy the existing reporting needs, in other words the demand. A straightforward answer, but not really acceptable for a Manager in particular, or an IT Professional in general. Another blunt answer, this time to the second question, would be that an organization needs only “one and good” reporting system, though that contradicts the various constraints existing in the IT world. So, what should it be?
Often, in order to understand the requirements of an organization and the value of its data, it’s easier to look at the various types of data it stores – the bottom-up approach. Typically we can make a distinction between master vs. transactional data, raw vs. aggregated data, structured vs. semi-structured vs. unstructured data, cleaned vs. uncleaned data, historical vs. live data, mined, distributed or heterogeneous data, to mention just a few of the important ways of categorizing data.
So we have a huge volume of data collected over time; how can we harness it in order to reveal its real value? Excepting operational use, the usefulness of raw data, data found in its unaltered form, usually stops at the point where its complexity goes beyond people’s limits of understanding it. Thus we came to aggregate, recombine or extract chunks from the data, bringing the data into a more comprehensible form. We even came to extract more information out of the data using specific techniques known as data mining methods, algorithms that allow us to identify associations, cluster or interpolate the data, the complexity of such algorithms having evolved considerably over the past two decades.
When it comes to managers and users, they want the data at the level of detail and in the form that allows the easiest understanding, and they want it “yesterday”, to use a word that reflects the urgency of requirements. In theory this could be done by coming up with a whole set of reports covering all possible requirements, though that’s not so efficient as an investment and not always possible at the click of a button, as probably we’d all like.
In the top-down approach, as the data remain relatively constant, it’s easier to focus on the set of requirements and the reporting tool(s) used. It makes sense to choose the reporting tool that covers at least the most important requirements, while for the others you have the choice of using workarounds or addressing them with two or more reporting solutions, attempting again to cover the most important requirements, and so on. The problem with having more than one reporting solution is that data, logic and reports often end up replicated, the whole reporting infrastructure becoming more difficult to manage. On the other hand, as long as there is a clear (partitioned) delimitation between the reporting frameworks and logic and data are replicated at a minimum, having two or more reporting solutions seems to be an acceptable trade-off. Examples of such delimitations are OLTP vs. OLAP solutions, to which the data mining reporting solution(s) can be added.
There are also attempts to extend an existing reporting framework beyond its initial functionality, though people often end up reinventing the wheel or creating little monsters they then have to live with. This too is an acceptable approach, as long as the reporting framework allows it. Some attempts might succeed in providing what they were designed for, others not. There is also some good news from this perspective: the appearance of mash-ups and semantic technologies could in the future allow the integration of reporting systems, making possible things that are nowadays quite challenging. The future is open to unlimited possibilities, but better to stick to the present! For that stay tuned for a second post!
12 October 2010
🔏MS Office: Why I (dis)like MS Access
In the previous post, “The Limitations of MS Access Database”, I highlighted a few of the limitations of MS Access as a database, ignoring the other two or three important aspects – Access as a development, reporting and data analysis platform. In this post I’ll revisit the topic from a general and personal perspective, considering some of the features that I like and that make Access a useful and powerful tool, and attempting also to mention some of the usage limitations I ran into over time. At the risk of repeating myself, I can’t say I’m an expert in MS Access even if I have provided several solutions based on it, its use in various contexts not always being so inspired, which is one of the reasons why in the first post, “Is MS Access or MS Excel the Answer to Your Problem?”, I insisted on this aspect.
Ad-Hoc Database
MS Access is a file server-based relational database and one of the most used databases, though it can’t be compared with more mature RDBMS like Oracle, MySQL, SQL Server, Sybase, PostgreSQL, Teradata, Informix or DB2, which are richer in features, especially in what concerns administration, transactional and concurrent processing, scalability, stability, availability, performance, reliability, portability, replication, integration, security, manageability, extensibility, and the degree to which they fit into the overall architecture of an enterprise, of relevance being topics like Business Intelligence, Data Warehousing, SOA, Cloud Computing, etc. These are some of the reasons for which I categorized Access as a Personal or Ad-Hoc database, being, at least from my point of view, more appropriate for small-size or personal solutions. In essence Access has the characteristics of a relational database, though the lack of the mentioned features makes it less desirable. Nobody denies Access’ usefulness; the point is that when compared with full-featured RDBMS Access has no chance, a fact reflected also in the following market share diagrams:
DBMS Market 2006 (JoinVision e-Services via [2]) | DBMS Market 2008 (Gartner via MySQL) [1]
Even if the diagrams are a few years old, I think they are still representative of the state of the art in the world of databases, the first diagram providing a historical perspective, the second the “current” and “future” tendencies. It’s not the first time I’ve seen MS Access and SQL Server represented together even if they belong to different technology stacks, Access’ strength and weakness being deeply rooted in its affiliation to the MS Office set of tools. It would be interesting to know what the ratio was then between the number of Access and SQL Server installations, and what the ratio is now, with SQL Server Express taking over Access’ role of personal or small-scale database.
The statistics are less representative when it comes to people, their interests and immediate needs. The bottom line is that Access is an easy-to-use database with a pretty low learning curve; you don’t need to know the fancy stuff about databases, you can experiment and learn it as an add-on to your job, making the consumption of data much easier, at least in theory. Is your data stored across several Excel files? You can import or copy-paste it into Access and there you have an ad-hoc database, then create several queries on top of it with the Query Designer, and this without any knowledge of SQL. The saved queries can be reused much like views, they can be parameterized, the parameters can be bound much like in user-defined functions, and made available for further consumption. I can’t say I’ve met any other software tool that simplifies the design and consumption of databases so much. The simplicity of Access query design comes at a price, especially when you want to achieve more from your database, the minimal feature set making it difficult to design complex queries, Access requiring a different mindset in problem solving. In addition, those used to the rich features of RDBMS won’t feel too comfortable using the Query Designer or Editor, the ANSI syntax is inflexible and the troubleshooting quite painful.
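To give an idea of the parameterization, here’s a minimal sketch of what such a saved query could look like in Access SQL (the table and column names are made up for illustration):
-- a parameterized saved query in Access SQL; [StartDate] is prompted for
-- (or bound from a form) when the query is run
PARAMETERS [StartDate] DateTime;
SELECT OrderID, OrderDate, Amount
FROM Orders
WHERE OrderDate >= [StartDate];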
I used Access as a database only when I had no other alternative, preferring to store the data in an RDBMS like SQL Server or Oracle. Instead, I used Access as a presentation layer, allowing users for example to access and analyze the data. On many occasions I worked with Access databases in small projects or when enhancing existing applications, spending many hours tweaking Access queries or porting such queries to other RDBMS. I had the occasion to work with several tools that used Access as a backend; one of them, IQ Insight, used to assess the quality of data, was an interesting tool to work with, though it paid tribute to the stability and speed of its database; in a subsequent implementation project we decided to take it out of the landscape, and the VB + SQL Server solution that replaced it brought the runtime down from a matter of hours to minutes. I know that many people out there love Access as a database, though once you’ve experienced the power and flexibility of other databases, you don’t feel like returning to the past.
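To give a flavor of the tweaking such porting involves, here’s a rough sketch contrasting an Access (Jet) expression with its T-SQL counterpart (the Orders table and Discount column are made up for illustration):
-- Access (Jet) SQL: IIf, Nz and & for concatenation
-- SELECT IIf(Nz(Discount, 0) > 0, 'discounted', 'full price') & ' order' FROM Orders
-- the rough T-SQL equivalent: CASE, IsNull and + for concatenation
SELECT CASE WHEN IsNull(Discount, 0) > 0 THEN 'discounted' ELSE 'full price' END + ' order'
FROM Orders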
Data Analysis Tool
When you have multiple Excel or other data sources, you don’t need to store your data in Access itself; it’s enough to link your text, Excel or any other ODBC data source, build a query on top of them, and there you have your data at your disposal on the fly, something that Data Warehousing and Business Intelligence tools hardly manage to do when considering all of people’s needs. By importing the data into your Access database, you could even correct some of the inherent issues existing in the data, use some mappings in order to translate the data, or use several queries in order to aggregate the data at the needed level of detail or get new insights. From a mapping table or a query to creating a whole data analysis framework is just a small step, and this without too much involvement of the IT guys. Even more, the framework could be used by your colleagues too; they could use it directly or indirectly by re-linking the results of your analysis with a minimum of effort, and they could even improve the character of your analysis or find other purposes for the data. The result is a complex network of interconnected Access databases, and it’s only a matter of time until it gets out of control, for example by not knowing how a change in one of the queries could impact the other known and unknown users of your data, whether you are using the actual data, whether the data have been tampered with, and so on. It should be no wonder when people end up reporting different numbers or when the numbers don’t tie together, though more modern reporting frameworks also deal with these types of issues, don’t they? In addition, you end up having multiple instances of the same data, or having data distributed and isolated in an uncontrolled fashion, not the best strategy for an enterprise though…
I used Access as a data access endpoint for data available in various data sources, allowing users to analyze and recombine the data by themselves, but mainly in order to overcome the limitations of the available standard reporting tools. This was combined with attempts to move most of the logic created by users into a standardized form, limiting the risks of running into Access fallacies. Sure, more could be done to avoid such pitfalls, for example having adequate reporting and data analysis tools, having in place a Data Management Policy which addresses common data problems, training users, etc.
Reporting
The possibility of presenting the data in a reporting-like fashion is one of the greatest advantages of Access, the tabular structure being easy to integrate with charting, paging, result breaks, formulas, filtering/parameterization, rich formatting, subreports and other types of report structures (e.g. footer, header), in other words the ingredients of a typical report. The combination of ad-hoc data analysis and reporting is quite an advantage, depending on the users’ skills in making the most out of it. Reports’ functionality can be extended using the Reports DOM and VBA; the mere fact that a report can be entirely created and modified at runtime is quite a big deal.
I used Access reports only in applications built entirely on MS Access, preferring whenever possible to move the reports to more standardized platforms. Sometimes I find it more useful to export the data directly to Excel or to a more portable format like PDF, which is also possible with Access reports, thus eliminating the intermediary platform. In the end it depends on the users’ preferences and the organization’s infrastructure.
Rapid Prototyping
Access can be used as a frontend for various types of applications, and you don’t need to put too much effort into your application. It’s enough to drop a form and link it to a table, then link the screens together, and there you have a functioning application, which makes Access an ideal tool for rapid prototyping.
I used Access in several projects for building proof-of-concept prototypes, allowing customers to gather requirements and evaluate the concept and the available functionality. There were also cases in which the prototypes were comparable in performance to the applications that replaced them, from some points of view even better, though that’s a matter of architecture, skills and sometimes infrastructure.
Extensibility: VBA
A person can create in Access a data analysis framework, a report or a prototype without writing a single line of code, richer functionality being available by using VBA, which is nothing more than old-fashioned VB based on Access’ DOM. VBA extensibility refers here to the possibility of going beyond the wizard and drag-and-drop functionality provided by Access, for example by adding complex validation to forms, linking forms, altering or creating content at runtime, etc. Not everybody needs to go so far; however, those who have used formulas or have some programming experience will find VBA easy to learn. Those wanting to change the default behavior of Access or provide missing functionality will have to go deeper into VBA’s secrets and into using built-in or third-party developed libraries. For example, in order to change the “sequential” access to data provided by Access, a programmer will have to use ADO or DAO; the built-in transactional functionality provided in the two libraries can be used to cover the transactional processing not built into Access. With some exceptions, in theory you can do with VBA anything you can do with old-fashioned VB, though with VB.Net the gap to VBA increased considerably (see Converting Code from VBA to Visual Basic .NET for differences). There are also some limitations, for example adding controls to Access forms at runtime, and I remember finding a few others over time, some of them deriving from bugs existing in the tool itself.
Extensibility: .DLLs
I was saying that it’s possible to use third-party developed libraries in Access, this functionality relying on COM+ and its predecessors DCOM, COM and ActiveX, technologies that allow communication between components not only on the local computer but also across distributed networks or the internet, as in the case of ActiveX. In this way it’s possible to encapsulate functionality in libraries saved as .dlls, distribute them with your applications or reuse them in other applications. Writing COM classes is a job for programming languages like C++, VB, VB.Net, C#, etc. Old-fashioned VB was great for creating and debugging COM components in just a matter of minutes; in theory any piece of code could be encapsulated in such a component. Having the possibility to extend the functionality of MS Access with such libraries opens the door to an unlimited number of architectural scenarios.
Extensibility: Add-ins
Add-ins form a special type of component rooted in OLE, later based on COM, that uses the MS Office DOM architecture, their primary utility lying in the fact that they make it possible to provide new features for MS Office itself. Examples of such “bonus” features are the Save as PDF add-in for Access 2007 or the Open Database Connectivity add-in for Excel. I used add-ins only to extend Excel’s UI-based functionality, therefore I can’t talk too much about their use in Access. For more, see the Building COM Add-ins for Office Applications material available on MSDN.
Database Templates
I noticed that several templates (e.g. Assets, Contacts, Sales pipeline, etc.) are available in MS Access 2007 that can be extended or used to learn how an application is designed. Doing a little research I found out that it is possible to create templates for whole databases, reports or forms. I haven’t used templates in Access so far, but it could prove to be an interesting feature when common architectural or functional characteristics are found.
References:
[1] MySQL. (2010). Market Share. [Online] Available from: http://www.mysql.com/why-mysql/marketshare/ (Accessed: 10 October 2010)
[2] Creative System Design. (2010). Databases. [Online] Available from: http://online.creativesystemdesigns.com/projects/databases.asp (Accessed: 10 October 2010)
05 October 2010
🔏MS Office: The Limitations of MS Access Database
Another important limitation regards the size of an Access database, capped at 2GB; this used to be more than sufficient a few years back, though nowadays it’s sometimes the equivalent of a month or a year of transactions. I never tried to count how many records an MS Access database could store, though if I remember correctly, a relatively small to average table of 1000000 (10^6) records occupies about 100MB; by this logic 2GB would equate to about 20000000 (2*10^7) records, the equivalent of a small to average database. Anyway, the numbers are relative; the actual size also depends on the number of objects the database stores and the size of the attributes stored, not to mention that even if Access is supposed to have a limit of 2GB, I have met cases in which a database of 1GB was crashing a lot and needed to be repaired or backed up regularly.
One of the most painful issues when dealing with concurrent data access is the transaction processing needed to guarantee the consistency and recoverability of operations. As Access does not handle transactions, the programmer has to do that using ADO or DAO transactions. As many applications still don’t need pessimistic concurrency, this issue too can be solved with some effort and good row versioning. The security-related issues can also be addressed programmatically by designing a role-based permission framework, though it could occasionally be breached when the user is aware of the few Access hacks and has direct access to the database.
Beyond the above theoretical limitations, when MS Access is part of your solution it’s always a good idea to know its maximum capacity specifications, something that applies to all types of databases or technologies. Most probably you won’t want to realize in the middle of your project, or even later, that you have reached one of these limitations. I tried to put together a comparison between the maximum capacity specifications for the 2000, 2007 and 2010 versions of MS Access and, for reference, the same specifications for SQL Server (2000, 2005, 2008 R2). The respective information comes mainly from Microsoft websites, with a few additions from [5] and [6].
Attribute | MS Access 2000 [1] | MS Access 2007/2010 [2] | SQL Server 2000 [7] | SQL Server 2005 [4] | SQL Server 2008 R2 [3] |
SQL statements size | 64kb | 64kb | 64kb | 64kb | 64kb |
# characters in Memo field | 65535 | 65535 | - | 2^30-1 | 2^31-1 |
# characters in Text field | 255 | 255 | 8000 | 8000 | 8000 |
# characters in object name | 64 | 64 | 128 | 128 | 128 |
# characters in record | 4000 | 4000 | 8000 | 8000 | 8000 |
# concurrent users | 255 | | | | |
[1] Microsoft. (2010). Microsoft Access database specifications. [Online] Available from: http://office.microsoft.com/en-us/access-help/access-specifications-HP005186808.aspx (Accessed: 04.10.2010)
[3] MSDN. (2010). Maximum Capacity Specifications for SQL Server: SQL Server 2008 R2. [Online] Available from: http://msdn.microsoft.com/en-us/library/ms143432.aspx (Accessed: 04.10.2010)
02 October 2010
#️⃣Software Engineering: Programming (Part V: Is MS Access or Excel the Answer to your Problems?)
That’s one of the topics that has followed me for years, as quite often customers have asked me to provide an MS Access or MS Excel solution as the answer to a business need. The beauty of this question is that there is no right answer and, as I have stressed on several occasions, there is not always a straightforward answer to such a question in IT, the feasibility of an IT solution relying on many variables typically formulated in terms of business and IT requirements.
- applications of basic to average complexity that don’t require complex design and could be developed by a developer with average skills.
- applications that target a small number of users, usually a small group of at most 10-20 concurrent users; it may occasionally be a whole department, or it could be cross-departmental as long as the previously mentioned conditions are met.
A Short Review
MS Excel is the perfect tool for storing non-relational tabular data, manipulating data manually or with the help of formulas, doing data analysis with pivoting and charting, or querying various data sources. Its extensibility based on its DOM (Document Object Model), VBA (Visual Basic for Applications) and its IDE (Integrated Development Environment), forms, add-ins, in-house or third-party developed libraries, and the template- and wizard-based approach make Excel a powerful development environment. I would say that Excel’s weakness resides in its intrinsic design, in the DOM which lacks a rich event model, and in the fact that Excel is mainly a tool for data entry, analysis and reporting, the other types of functionality coming second. Excepting a few new features built into Excel itself, the important new functionality comes as add-ons – the SQL Server-based data mining add-in, MS SharePoint Server-based Web Services features like multiuser collaboration, slicers and a few others.
Programming for the Masses/Citizens
The great thing about VBA is that non-developers too can successfully venture into developing Office-based applications, the possibility of learning from the code generated by the “Record Macro” functionality keeping the learning curve small. Enabling “non-developers” to build applications makes Office a powerful and altogether dangerous tool, because such applications can easily be misused. Misused here refers to the fact that people often attempt to build complex applications in Excel or Access that sooner or later break apart under their own complexity, and that organizations end up with a multitude of such applications with no control over their existence, maintenance, security, etc.
The Architectural Perspective
There is another important architectural perspective – separating the data storage, and eventually the data processing, from the presentation. Even when using Access or Excel the data storage can be separated from the presentation, though I’ve seen few solutions doing that, the three layers usually coexisting within the same tier. An Access solution can be split in two, one file for the database and another for the UI and processing, allowing more flexibility in what concerns architecture, security, version management, etc.
Excel is considered by many to be a (relational) database, but is it really so? It’s true that the data can be stored in tabular format, in which a sheet plays the role of a table and is queryable through the various drivers available, though no primary key is available, there is less control over the data entered, and many other features available in an RDBMS need to be provided programmatically, again reinventing the wheel. As in the case of Access, Excel could be considered for data storage and presentation, though its functionality is reduced compared with that of Access.
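As a side note on the mentioned drivers, a sheet can even be queried directly from T-SQL via the ACE OLE DB provider, along these lines (the file path and sheet name are made up, and the 'Ad Hoc Distributed Queries' option must be enabled on the server):
-- querying an Excel sheet as if it were a table (illustrative sketch)
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0'
   , 'Excel 12.0;Database=C:\Data\Sales.xlsx;HDR=YES'
   , 'SELECT * FROM [Sheet1$]')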
It’s all about Costs
In the past MS Excel and Access were quite cheap as “development platforms” when compared with purchasing an IDE, especially when we consider their extensibility through VBA and the availability of a built-in IDE, thus a favorable functionality vs. extensibility ratio. Recently, express (aka community) versions of powerful IDEs like Visual Studio were introduced, along with open source IDEs and development frameworks that provide rich capabilities, so the balance of forces has changed dramatically in favor of the latter.
The Strategic Context
Several times I’ve heard people talking about replacing a collection of Excel sheets with an Access solution. I know that in the absence of adequate solutions people come to store various types of data in Excel sheets, duplicating data, losing control over versions and data quality, making data insecure, unavailable or unprocessable. Without a good data management and infrastructure strategy, the situation doesn’t change significantly by using an Access solution.
Talking about data management strategy, it concerns mainly data quality as a function of its six main dimensions (accuracy, conformity, consistency, completeness, duplicates, referential integrity), to which are added data timeliness, accessibility, security, relevance, usability, and so on. The main problem with personal solutions is that they lead to data and logic duplication, and even when such solutions are consolidated in one form or another, their consolidation and integration is quite complex because you have to consider not only the various designs but also the overall requirements from a higher perspective. On the other hand, it’s difficult to satisfy the needs of all the people in an organization in one form or another, duplication of data being inevitable, with direct or indirect implications on data quality. Some effort and a good strategy are required in what concerns these aspects, finding the balance between the various requirements and the number of solutions to satisfy them.
Reformulating the Question
How can we determine which tool or set of tools is appropriate for our problem? Normally the answer to this question depends on the needed functionality. The hard road in answering it is to identify all the requirements and the features available in the various tools, weigh both of them, and decide what works best. Unfortunately that’s not an easy task; not only current but also future requirements need to be considered, along with the organization’s strategy and whatever else might come up.
Reports, best practices, lessons learned or other types of succinct content might help as well in making a decision without analyzing features and requirements too thoroughly. Sometimes a gut feeling might work too, especially when it comes from a person with experience in the field. Other times you don’t have too many options – time, resources, knowledge, IT infrastructure, philosophy or politics reducing your room for maneuver and decision. In the end we learn by doing, by fighting with the constraints and problems we have, and hopefully we also learn from our own or others’ mistakes…
PS: Even if I have several good years of cumulated experience in developing solutions based on Excel and Access, I can’t pretend that I know their full potential, especially when judged from the perspective of the new features introduced with Excel 2007 or 2010, and even more so when considering their integration with SharePoint, SQL Server or other similar platforms. The various software tools and platforms existing on the market allow people to mix functionality in theoretically unlimited ways, the separation of functionality between layers, SaaS (software as a service) and data meshes changing the way we program and perceive software development.
30 September 2010
🚧Project Management: Project Management [PM] (Definitions)
29 September 2010
💠🛠️SQL Server: Administration (Part VI: Troubleshooting Who is Active)
About Me
- Adrian
- Koeln, NRW, Germany
- IT Professional with more than 24 years of experience in IT in the area of full life-cycle Web/Desktop/Database Applications Development, Software Engineering, Consultancy, Data Management, Data Quality, Data Migrations, Reporting, ERP implementations & support, Team/Project/IT Management, etc.