
19 April 2024

⚡️Power BI: Preparatory Steps for Creating a Power BI Report

When creating a Power BI report, consider the following steps before starting the actual work. The first five steps can be saved to a "template" that can be reused as a starting point for each report.

Step 0: Check Power BI Desktop's version

Check whether you have the latest version; if not, you can download it from the Microsoft website.
Given that most of the documentation, books and other resources are in English, it might be a good idea to install the English version.

Step 1: Enable the recommended options

File >> Options and settings >> Options >> Global >> Data Load:
.>> Time intelligence >> Auto date/time for new files >> (uncheck)
File >> Options and settings >> Options >> Global >> Regional Settings:
.>> Application language >> set to English (United States)
.>> Model language >> set to English (United States)

Depending on the case, you can also consider the following options (e.g. when the relationships are more complex than the autodetect feature can handle):
File >> Options and settings >> Options >> Current File >> Data Load:
.>> Relationships >> Import relationships from data sources on first load >> (uncheck)
.>> Relationships >> Autodetect new relationships after data is loaded >> (uncheck)

Step 2: Enable the options needed by the report

For example, you can enable visual calculations:
File >> Options and settings >> Options >> Global >> Preview features >> Visual calculations >> (check)

Comment:
Given that not all preview features are stable, instead of activating several features at once, it might be a good idea to activate them individually and test whether they work as expected before relying on them.

Step 3: Add a table for managing the measures

Add a new table (e.g. "Results" with one column "dummy" holding the value "OK"):

Results = ROW("dummy", "OK")

Add a dummy measure that could be deleted later when there's at least one other measure:
Test = ""

Hide the "OK" column and with this the table is moved to the top. The measures can be further organized within folders for easier maintenance. 

Step 4: Add the Calendar if time analysis is needed

Add a new table (e.g. "Calendar" with a "Date" column):

Calendar = CALENDAR(DATE(YEAR(TODAY())-3,1,1), DATE(YEAR(TODAY())+1,12,31))

Add the columns:

Year = Year('Calendar'[Date])
YearQuarter = 'Calendar'[Year] & "-Q" & 'Calendar'[Quarter]
Quarter = Quarter('Calendar'[Date])
QuarterName = "Q" & Quarter('Calendar'[Date])
Month = Month('Calendar'[Date])
MonthName = FORMAT('Calendar'[Date], "mmm")

Even if errors appear at first (as the columns aren't listed in the order of their dependencies), create all the columns first. Format the Date column in a standard format (e.g. dd-mmm-yy), including Date/Time columns for which the time component is not needed.
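
Alternatively, the whole calendar can be built in a single expression, which avoids the dependency-order errors; a minimal sketch using ADDCOLUMNS, with the same column names and date range as above:

Calendar = ADDCOLUMNS(
    CALENDAR(DATE(YEAR(TODAY())-3,1,1), DATE(YEAR(TODAY())+1,12,31)),
    "Year", YEAR([Date]),
    "Quarter", QUARTER([Date]),
    "QuarterName", "Q" & QUARTER([Date]),
    "YearQuarter", YEAR([Date]) & "-Q" & QUARTER([Date]),
    "Month", MONTH([Date]),
    "MonthName", FORMAT([Date], "mmm")
)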

To get the values in the visual sorted by the MonthName:
Table view >> (select MonthName) >> Column tools >> Sort by column >> (select Month)

To get the values in the visual sorted by the QuarterName:
Table view >> (select QuarterName) >> Column tools >> Sort by column >> (select Quarter)


Step 5: Add the corporate/personal theme

Consider using a corporate/personal theme at this stage. Without this the volume of work that needs to be done later can increase considerably. 

There are also theme generators, e.g. powerbitips.com, a tool that simplifies the process of creating complex theme files. The tool is free to use; however, saving the theme files requires a subscription.

Set the canvas size (e.g. 1920 x 1080 pixels).

Step 6: Get the data

Consider the appropriate connectors for getting the data into the report. 

Step 7: Set/Validate the relationships

Check whether the relationships created by default between the tables are correct, and adjust or define the relationships accordingly.

Step 8: Optimize the data model

Look for ways to optimize the data model.

Step 9: Apply the formatting

Format numeric values to reflect their precision appropriately.
Format the dates in a standard format (e.g. "dd-mmm-yy"), including Date/Time columns for which the time component is not needed.

The formatting needs to be considered for the fields, measures and metrics added later as well. 
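
As an illustration of the standard date format mentioned above, a measure (hypothetical name, e.g. for a report footer) that renders today's date with the same format string:

Last Refresh = FORMAT(TODAY(), "dd-mmm-yy")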

Step 10: Define the filters

Identify the filters most likely to be used across pages and use Sync slicers to synchronize the filters between pages, when appropriate:
View >> Sync slicers >> (select Page name) >> (check Sync) >> (check Visible)

Step 11: Add the visuals

At least for the report's validation, consider using a visual that holds the detail data represented in the other visuals on the page. Besides allowing users to validate the report, it also provides transparency, which facilitates the report's adoption.

22 March 2024

🧭Business Intelligence: Perspectives (Part IX: Dashboards Are Dead & Other Crap)

Business Intelligence
Business Intelligence Series

I find annoying the posts that declare that a technology is dead, as they seem to seek the sensational and, in the end, don't offer enough arguments for the positions taken; all is just surfing through a few random ideas. Almost each time I click on such a link I find myself disappointed. Maybe it's just me - having too high expectations from ad-hoc experts who haven't understood the role of technologies and their lifecycle.

At least until now, dashboards are the only visual tool that allows displaying related metrics in a consistent manner, reflecting business objectives, health, or other important perspectives on an organization's performance. More recently notebooks seem to be getting closer given their capabilities of presenting data visualizations and some of the intermediary steps used to obtain the data, though they are still far away from offering similar capabilities. So where could any justification against dashboards' utility come from? Even if I heard one or two expert voices saying that they don't need KPIs for managing an organization, organizations still need metrics to understand how they are doing as a whole and in their parts.

Many argue that the design of dashboards is poor, that they don't reflect data visualization best practices, or that they are too difficult to navigate. There are so many books on dashboard and/or graphic design that it is almost impossible not to find such a book in any big library if one wants to learn more about design. There are many resources online as well, though it's tough to fight a mind's stubbornness in showing no interest in the topic. Conversely, there's also a lot of crap on social networks that passes in the mainstream as best practice.

Frankly, design is important, though as long as the dashboards show the right data and the organization can guide itself by the respective numbers, the perfectionists can say whatever they want, even if they are right! Unfortunately, the numbers shown in dashboards raise legitimate questions, and the reasons are multiple. Do dashboards show the right numbers? Do they focus on the objectives or important issues? Can the numbers be trusted? Do they reflect reality? Can we use them in decision-making?

There are so many things that can go wrong when building a dashboard - there are so many transformations that need to be performed, that the chances of failure are high. It's enough to have several blunders in the code or data visualizations for people to stop trusting the data shown.

Trust and quality are complex concepts and there's no standard path to address them because they are a matter of perception, which can vary and change dynamically based on the situation. There are, however, approaches that help minimize the risks. One can start, for example, by providing transparency. For each dashboard provide also detailed reports that allow validating the numbers through drilldown (or by running the reports separately if that's not possible). If users don't trust the data or the report, then they can pinpoint what's wrong. Of course, the two sources must be in sync, otherwise the validation becomes more complex.

There are also issues related to the approach - the way a reporting tool was introduced, the way dashboards flooded the space, how people reacted, etc. Introducing a reporting tool for dashboards is also a matter of strategy, tactics and operations, and the various aspects related to them must be addressed. Few organizations address this properly. Many organizations work on the principle "build it and they will come", even if they build the wrong thing!

Previous Post <<||>> Next Post

10 January 2024

📦Data Migrations (DM): Conceptualization (Part VIII: Create a Data Migration Strategy for Dynamics 365)

Data Migration
Data Migration Series

Microsoft made available a module on creating a Data Migration strategy (DMS) for Dynamics 365 solutions [1] and provides a template for the DMS workshop on GitHub. Despite the all-encompassing title, the module focuses mainly on the latter. Here are several points to consider in addition.

The workshop allows gathering the most important aspects related to the DM, however it's useful to make the strategy explicit as part of a document and share it with the team. The DMS document should follow the general elaboration of a strategy by providing information about the current and targeted state, goals and objectives, assumptions, dependencies, risks and opportunities, the architecture (including alternatives that need to be explored), direct and interfacing processes, roles involved and their responsibilities, communication paths (including escalation), etc.

Even if the template discusses general aspects, the treatment of best practices is minimal (see some technical and architectural aspects).

The DMS should describe when possible the high-level architecture planned for DM. The architecture section can be updated as the project progresses. 

One can plan for 3-4 dry-runs plus the UAT, though further iterations might be needed depending on the data quality, respectively the complexity of the logic needed. Each dry-run will have its own scope (in terms of entities and number of records) and objectives (in terms of passed tests), while the UAT attempts to mimic the same conditions as the DM for Go-Live.

Several iterations are usually needed to identify the entities in scope. Moreover, the scope will suffer several changes as new entities and migration areas are identified. There are high chances that some data entities will be forgotten or postponed until after Go-Live. Upon case, the business might approach a phased implementation, each phase with its own DM.

It's useful to have a list of the entities available in the organization and map them to the entities available in the target system(s). This allows identifying the gaps on the DM side. There will be entities for which data need to be provided, entities for which there's no counterpart in the target system, respectively entities with different levels of detail.

An important step of the process is defining the business rules that apply - what subset of data is migrated (e.g. active referenced master data, open transactions, etc.). The definitions will suffer further changes as business perception about what's needed changes.

As they'll go through multiple changes, it's useful to have such lists in Excel or similar repositories and reference them in the strategy and further documents. 

The strategy can also target the post-Go-Live migration activities, though the process and architecture might be slightly different, even if many of the requirements still apply. One can devise the further aspects as part of the DMS or consider them as part of the Data Management strategy and operations.

Despite the considerable overlap between DM and Data Management, at least until now the DM is not considered part of the latter. Big organizations with periodic DMs might need to consider a broader strategy, and then address the deviations of each individual project from it.

The DM is usually considered a subproject of the implementation and should be managed as such. Depending on its scope and characteristics (principles, acceptance criteria), the DM might use a different methodology and timeline than the main project. Defining the project's goals/objectives, outcomes, stakeholders, timeline, phases and milestones, respectively the high-level assumptions, dependencies, risks, opportunities and constraints is Project Management 101.

The DM is critical for an implementation's success. It would be useful to describe the architectural alternatives, as well as to provide a series of architectural recommendations.

Previous Post <<||>> Next Post

Resources:
[1] Microsoft learn (2023) Create a data migration strategy for Dynamics 365 solutions (link)

20 March 2021

🧭Business Intelligence: New Technologies, Old Challenges (Part II - ETL vs. ELT)

 

Business Intelligence

Data lakes and similar cloud-based repositories drove the requirement of loading the raw data before performing any transformations on it. At least that’s the approach the new wave of ELT (Extract, Load, Transform) technologies uses to handle analytical and data integration workloads, which is probably recommendable for the mentioned cloud-based contexts. However, ELT technologies are especially relevant when one needs to handle data with high velocity, variance, validity or different values of truth (aka big data). This is because they allow processing the workloads over architectures that can be scaled with the workloads’ demands.

This is probably the most important aspect, even if there can be further advantages, like using built-in connectors to a wide range of sources or implementing complex data flow controls. The ETL (Extract, Transform, Load) tools have the same capabilities, maybe reduced to certain data sources, though their newer versions seem to bridge the gap.

One of the most stressed advantages of ELT is the possibility of having all the (business) data in the repository, though these are not technological advantages. The same can be obtained via ETL tools, even if this might in some cases involve a bigger effort, depending on the functionality existing in each tool. It’s true that ETL solutions have a narrower scope by loading a subset of the available data, or that transformations are made before loading the data, though this depends on the scope considered while building the data warehouse or data mart, respectively the design of the ETL packages, and both are a matter of choice, choices that can be traced back to business requirements or technical best practices.

Some of the advantages seen are context-dependent – the context in which the technologies are put, respectively the problems are solved. It is often imputed to ETL solutions that the available data are already prepared (aggregated, converted) and new requirements will drive additional effort. On the other side, in ELT-based solutions all the data are made available and eventually further transformed, but also here the level of transformations made depends on specific requirements. Independently of the approach used, the data are still available if needed, respectively involve certain effort for further processing.

Building usable and reliable data models is dependent on good design, and in the design process reside the most important challenges. In theory, some think that in ETL scenarios the design is done beforehand though that’s not necessarily true. One can pull the raw data from the source and build the data models in the target repositories.

Data conversion and cleaning are needed under both approaches. In some scenarios it is ideal to do this upfront, minimizing the effect these processes have on data’s usage, while in other scenarios it’s helpful to address them later in the process, with the risk that each project will address them differently. This can become an issue and should ideally be addressed by design (e.g. by building an intermediate layer) or at least organizationally (e.g. by enforcing best practices).

Advancing that ELT is better just because the data are true (being in raw form) can be taken only as a marketing slogan. The degree of truth data has depends on the way data reflect the business’ processes and the way data are maintained, while their quality is judged entirely by their intended use. Even if raw data allow more flexibility in handling the various requests, the challenges involved in processing them can be neglected only at the cost of the consequences that follow.

Looking at the analytics and data integration cloud-based technologies, they seem to allow both approaches; thus, building optimal solutions relies on professionals’ wisdom in making appropriate choices.

Previous Post <<||>> Next Post

03 February 2021

📦Data Migrations (DM): Conceptualization (Part II: Plan vs. Concept vs. Strategy)

Data Migration
Data Migrations Series

A concept is a document that describes at a high level the set of necessary steps and their implications to achieve a desired result, typically making the object of a project. A concept is usually needed to provide more technical and nontechnical information about the desired solution, the context in which a set of steps is conducted, respectively the changes considered, how the changes will be implemented and the further aspects that need to be considered. It can include a high-level plan and sometimes also information that typically belongs in a Business Case - goals, objectives, required resources, estimated effort and costs, risks and opportunities.

A concept is used primarily as a basis for sign-off, as well as for establishing common ground and understanding. When approved, it’s used for the actual implementation and the solution’s validation. The concept should be updated as the project progresses, respectively as new information is discovered.

Creating a concept for a DM can be considered a best practice because it allows documenting the context, the technical and organizational requirements, the dependencies existing between the DM and other projects, and how they will be addressed. The concept can also include a high-level plan of the main activities (to be detailed in a separate document).

Especially when the concept has an exploratory nature (due to incomplete knowledge or other considerations), it can be validated with the help of a proof-of-concept (PoC), the realization of a high-level-design prototype that focuses on the main characteristics of the solution and thus allows identifying the challenges. Once the PoC is implemented, the feedback can be used to round out the concept.

Building a PoC for a DM should be considered an objective even when the project doesn’t seem to meet any major challenges. The PoC should be limited to addressing the most important DM requirements, ideally by implementing the whole or the most important aspects of the functionality (e.g. data extraction, data transformations, integrity validation, respectively the import into the target system) for one or two data entities. Once the PoC is built, the team can use it as a basis for the evolutive development of the solution during the iterations considered.

A strategy is a set of coordinated and sustainable actions following a set of well-defined goals, actions devised into a plan and designed to create value and overcome further challenges. A strategy has the character of a concept, though it has a broader scope, being usually considered when multiple projects or initiatives compete for the same resources, to provide a broader context and handle the challenges, risks and opportunities. Moreover, the strategy takes an inventory of the current issues and architecture - the 'AS-IS' perspective - and sketches the 'TO-BE' perspective by devising a roadmap that bridges the gap between the two.

In the case of a DM a strategy might be required when multiple DM projects need to be performed in parallel or sequentially, as it can help the organization to better manage the migrations.

A plan is a high-level document that describes the tasks, schedule and resources required to carry out an activity. Even if it typically refers to the work or product breakdown structure, it can cover other information usually available in a Business Case. A project plan is used to guide both project execution and project control, while in the context of Strategic Management the (strategic) plan provides a high-level roadmap on how the defined goals and objectives will be achieved during the period covered by the strategy.

For small DM projects a plan can in theory be enough. As both a strategy and a concept can include a high-level plan, in practice the names are often used interchangeably.

Previous Post <<||>> Next Post

09 January 2021

🧮ERP: Planning (Part I: It’s all about Partnership - An Introduction)

ERP Implementation
ERP Implementations Series

Unless the organization (customer) implementing an ERP system has a strong IT team and the knowledge required for the implementation is already available in-house, the resources need to be acquired from the market, and probably the right thing to do is to identify a certified implementer (partner) that can fill the knowledge and skillset gaps, respectively that can help split the risks associated with such an implementation.

In theory, the customer provides knowledge about its processes, while the partner comes with expertise about the system to be implemented and further technologies, industry best practices, project methodologies, etc. Further on, the mix is leveraged to harness the knowledge and reach project’s objectives. 

In practice, however, finding an implementer that can act as a partner might be more challenging than expected. This is because the implementer needs to understand the customer’s business and where it’s heading, bridge the gap between the functional requirements and the system’s functionality, advise on areas of improvement, prepare the customer for the project and lead the customer through the changes, respectively establish a basis for the future. Some of the implications are seldom made explicit even if they are implied by what the project needs.

Technology is seldom the issue in an ERP implementation, the challenges residing in handling the change and the logistics required. There are so many aspects to be considered and handled, and this can be challenging for any implementer no matter how long it has been on the market or how experienced its resources are. Somebody needs to lead the change, and the customer seldom has the knowledge to handle it. In some cases, the implementer must make the customer aware of the implications, while in others it needs to take the initiative and lead the change, though the customer needs to play along, which can be challenging as well.

Many aspects need to be handled at the management level from a strategic point of view on the customer’s side. It starts with assuring that the most important aspects of the business were considered, that the goals and objectives are clear, that the proper environment is created, and ends with timely decision-making, with assuring that the resources are available when needed, that the needed organization structures and roles are in place, that the required knowledge is available before, during and after the implementation, and that the potential brought by the ERP system is harnessed for the years to come.

A partnership allows in theory splitting the implementation risks, as ERP implementations have a high rate of failure. Quite often the outcomes of such projects don’t meet the expectations, the systems being in extremis unusable or a bottleneck for the organization. Ideally one should work with the partner(s) and attempt to solve the issues, eventually split the incurred cost overruns, find a middle way. Most of the time it’s recommended to find a solution together rather than coming to litigation.

Given the complex dependencies existing between the various parts of the project, the causes that lead to poor implementations are difficult to prove, as there are almost always grey areas. Moreover, litigation can require considerable time and resources to settle. These are just extreme situations, and as long as one has a good partner, there’s no need to think that far. On the other side, even if undesirable, one must be prepared for such outcomes as well, even if the countermeasures may involve additional effort. Therefore, one must address such issues in contracts by establishing the areas of accountability/responsibility for each party, document adequately the requirements and further (important) communication, make sure that the deliverables have the expected quality, etc.

Previous Post <<||>> Next Post

29 July 2019

💻IT: Best Practices (Definitions)

"A preferred and repeatable action or set of actions completed to fulfill a specific requirement or set of requirements during the phases within a product-development process." (Clyde M Creveling, "Six Sigma for Technical Processes: An Overview for R Executives, Technical Leaders, and Engineering Managers", 2006)

"A process or method that is generally recognized to produce superior results. The application of these should result in a positive, measurable change." (Tilak Mitra et al, "SOA Governance", 2008)

"A technique or methodology that, through past experience and research, has proven to reliably lead to a desired result. A commitment to using the best practices in any field (for example, in the domain of IT Architecture) ensures leveraging past experience and all of the knowledge and technology at one’s disposal to ensure success." (Allen Dreibelbis et al, "Enterprise Master Data Management", 2008)

"An effective way of doing something. It can relate to anything from writing program code to IT governance." (Judith Hurwitz et al, "Service Oriented Architecture For Dummies" 2nd Ed., 2009)

"A best practice is commonly understood to be a well-proven, repeatable, and established technique, method, tool, process, or activity that is more certain in delivering the desired results. This indicates that a best practice typically has been used by a large number of people or organizations and/or over a long time, with significant results that are clearly superior over other practices. Knowledge patterns can be used to formalize the description of a best practice." (Jörg Rech et al, "Knowledge Patterns" [in "Encyclopedia of Knowledge Management" 2nd Ed.], 2011)

"A specific method that improves the performance of a team or an organization and can be replicated or adapted elsewhere. Best practices often take the form of guidelines, principles, or ideas that are endorsed by a person or governing body that attests to the viability of the best practice." (Gina Abudi & Brandon Toropov, "The Complete Idiot's Guide to Best Practices for Small Business", 2011)

"A technique, method, process, discipline, incentive, or reward generally considered to be more effective at delivering a particular outcome than by other means." (Craig S Mullins, "Database Administration", 2012)

"In general, Best Practices refer to the methods, currently recognized within a given industry or discipline, to achieve a stated goal or objective. In the OPM3 context, Best Practices are achieved when an organization demonstrates consistent organizational project management processes evidenced by successful outcomes." (Project Management Institute, "Organizational Project Management Maturity Model (OPM3)" 3rd Ed, 2013)

"An effective way of doing something. It can relate to anything from writing program code to IT governance." (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"Those methods, processes, or procedures that have been proven to be the most effective, based on real-world experience and measured results." (Robert F Smallwood, "Information Governance: Concepts, Strategies, and Best Practices", 2014)

"Best practices are defined as commercial or professional procedures that are accepted or prescribed as being effective most of the time. It can also be considered a heuristic, in that is a rule of thumb that generally succeeds but is not guaranteed to always work in every instance." (Michael Winburn & Aaron Wheeler, "Cloud Storage Security", 2015)

"A 'benchmarking' approach where organisations determine who the leader in a particular practice is and then copy that approach. Useful for achieving efficiencies but may diminish differentiation if not used with caution at the strategic level." (Duncan Angwin & Stephen Cummings, "The Strategy Pathfinder" 3rd Ed., 2017)

"A proven activity or process that has been successfully used by multiple enterprises." (ISACA) 

"A superior method or innovative practice that contributes to the improved performance of an organization, usually recognized as best by other peer organizations." (American Society for Quality)

24 July 2019

💻IT: Information Technology Infrastructure Library [ITIL] (Definitions)

"A series of documents used to aid the implementation of a framework for IT service management (ITSM). This framework defines how service management is applied in specific organizations. Being a framework, it is completely customizable for an application within any type of business or organization that has a reliance on IT infrastructure." (Tilak Mitra et al, "SOA Governance", 2008)

"A framework and set of standards for IT governance based on best practices." (Judith Hurwitz et al, "Service Oriented Architecture For Dummies" 2nd Ed., 2009)

"A framework of supplier independent best practice management procedures for delivery of high quality IT services." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"a set of guidelines for developing and managing IT operations and services." (Bill Holtsnider & Brian D Jaffe, "IT Manager's Handbook" 3rd Ed., 2012)

"A framework and set of standards for IT governance based on best practices." (Marcia Kaufman et al, "Big Data For Dummies", 2013)

"A group of books written and released by the United Kingdom’s Office of Government and Commerce (OGC). ITIL documents best practices organizations can implement to provide consistent IT services. The library includes five books." (Darril Gibson, "Effective Help Desk Specialist Skills", 2014)

"A set of process-oriented best practices and guidance originally developed in the United Kingdom to standardize delivery of informational technology service management." (Robert F Smallwood, "Information Governance: Concepts, Strategies, and Best Practices", 2014)

"Best practices for information technology services management processes developed by the United Kingdom’s Office of Government Commerce." (Adam Gordon, "Official (ISC)2 Guide to the CISSP CBK" 4th Ed., 2015)

"The IT Infrastructure Library; a set of best practice publications for IT service management." (by Brian Johnson & Leon-Paul de Rouw, "Collaborative Business Design", 2017)

"The Information Technology Infrastructure Library (ITIL) presents pre-defined processes for IT service management. The fourth edition of ITIL depicts two key elements ITIL Service-Value-System (SVS) and a four dimensions model." (Anna Wiedemann et al, "Transforming Disciplined IT Functions: Guidelines for DevOps Integration", 2021)

"set of best practices guidance" (ITIL)

13 May 2019

#️⃣Software Engineering: Programming (Part XIV: Good Programmer, Bad Programmer)

Software Engineering
Software Engineering Series

The use of denominations like 'good' or 'bad' related to programmers and programming carries with it a thin separation between these two perceptional poles that represent the end results of the programming process, reflecting the quality of the code delivered, respectively the quality of a programmer’s effort and behavior as a whole. This means that the usage of the two denominations is often contextual, 'good' and 'bad' being moving points on an imaginary value scale with a wide range of values within and outside the interval determined by the two.

The 'good programmer' label is an idealization of the traits associated with being a programmer – analyzing and understanding the requirements, filling the gaps when necessary, translating the requirements into robust designs, developing quality code with a minimum of overwork, delivering on time, being able to help others, to work as part of a (self-organizing) team and alone when the project requires it, to follow methodologies, processes or best practices, etc. The problem with such a definition is that there's no fixed limit, considering that a programmer’s job description can include an extensive range of requirements.

The 'bad programmer' label is used in general when programmers (repeatedly) fail to reach others’ expectations, occasionally the labeling being done independently of one’s experience in the field. The volume of bugs and mistakes, the fuzziness of designs and of the code written, the lack of comments and documentation, the lack of adherence to methodologies, processes, best practices and naming conventions are often considered as indicators for such labels. Sometimes even the smallest mistakes or the wrong perceptions of one’s effort and abilities can trigger such labels.

Labeling people as 'good' or 'bad' has the tendency of reinforcing one's initial perception, in extremis leading to self-fulfilling prophecies - predictions that directly or indirectly cause themselves to become true, by the very terms on which the predictions came into being. Thus, when somebody labels another as 'good' or 'bad', he will more likely look for signs that reinforce his previous beliefs. This leads to situations in which 'good' programmers’ mistakes are more easily overlooked than 'bad' programmers' mistakes, even if the mistakes are similar.

A good label can in theory motivate, while a bad label can easily demotivate, though their effects vary from person to person. Such labels can easily become a problem for beginners, because they can easily affect beginners' perception about themselves. It’s so easy to forget that programming is a continuous learning process in which knowledge is relative and highly contextual, each person having strengths and weaknesses.

Each programmer has a particular set of skills that differentiate him from other programmers. Each programmer is unique, an aspect reflected in the code one writes. Expecting programmers to fit an ideal pattern is unrealistic. Instead of using labels one should attempt to strengthen the weaknesses and make adequate use of a person’s strengths. In this approach reside the seeds for personal growth and excellence.

There are also programmers who excel in certain areas - conceptual creativity, ability in problem identification, analysis and solving, speed, ingenuity of design and of making best use of the available tools, etc. Such programmers, as Randall Stross formulates it, “are an order of magnitude better” than others. The experience and skills harnessed with intelligence have this transformational power that is achievable by each programmer in time.

Even if we can’t always avoid such labeling, it’s important to become aware of the latent force the labels carry with them, the effect they have on our colleagues and teammates. A label can easily act as a boomerang, hitting us back long after it was thrown away.


12 May 2019

#️⃣Software Engineering: Programming (Part XIII: Misconceptions about Programming II)

Software Engineering

Continuation

One of the organizational stereotypes is having a big room full of cubicles filled with employees. Even if programmers can work in such settings, improperly designed environments restrict to a certain degree the creativity and productivity, making employees' collaboration and socialization more difficult. Despite having dedicated meeting rooms, an important part of the communication occurs ad-hoc. In open spaces each transient interruption can easily lead inadvertently to a loss of concentration, which leads to wasted time, as one needs to pick up the thread of thought again and review the last written code, and occasionally to bugs.

Programming is expected to be a 9 to 5 job with an effective working time of 8 hours. Subtracting the interruptions and the pauses one needs to take, the effective working time decreases to about 6 hours. In other words, to reach 8 hours of effective productivity one needs to work about 10 hours or so. Therefore, unless adequately planned, each project starts with about 20% overtime. Moreover, even if a task is planned to take 8 hours, given the need for information the allocated time is split over multiple days. The higher the need for further clarifications, the higher the chances for the effort to expand. In extremis, the effort can double itself.

Spending extensive time in front of the computer can have adverse effects on programmers’ physical and mental health. Time pressure and some of the negative behavior that occurs in working environments have the same effect. Also, communication skills can suffer when they are not properly addressed. Unfortunately, few organizations give importance to these aspects, few offer a work-life balance, even if a programmer’s job best fits and requires such an approach. What’s even more unfortunate is when organizations ignore the overtime, taking it as part of the job’s description. It’s also one of the main reasons why programmers leave, why competent workforce is lost. In the end everyone’s replaceable, however what’s the price one must pay for it?

Trainings are typically offered within running projects as they can be easily billed. Besides the fact that this behavior takes time unnecessarily from a project’s schedule, it can easily make trainings ineffective when the programmers can’t immediately use the new knowledge. Moreover, considering resources that come and go, the unwillingness to invest in programmers can have incalculable effects on an organization’s performance, respectively on their personal development.

Organizations typically look for self-motivated resources, this request often encompassing the organization’s whole motivational strategy. Long projects feel like a marathon in which it is difficult to sustain the same rhythm for the whole duration of the project. Managers and team leaders need to work on programmers’ motivation if they need sustained performance. They must act as mentors and leaders altogether, not only control the tasks’ status and rave and storm each time deviations occur. It’s easy to complain about the status quo without doing anything to address the existing issues (challenges).

Especially in dysfunctional teams, programmers believe that management can’t contribute much to the project’s technical aspects, while management sees little or no benefit in making developers an integral part of the project's decisional process. Moreover, the lack of transparency and communication leads to a wide range of frictions between the various parties.

Probably the most difficult to understand is people’s stubbornness in expecting different behavior while following the same methods, and in ignoring common sense. It’s bewildering the easiness with which people ignore technological and Project Management principles and best practices. The stubbornness of learning the hard way despite the warnings of the experienced resides in human nature; however, despite the negative effects there’s often minimal learning in the process...

To be eventually continued…


30 October 2018

💠🛠️SQL Server: Administration (Troubleshooting Login Failed for User)

Since the installation of SQL Server 2017 on a virtual machine (VM) in the Microsoft Cloud, records with the following message started to appear in the error log:

Login failed for user '<domain>\<computer>$'. Reason: Could not find a login matching the name provided. [CLIENT: <local machine>]
Error: 18456, Severity: 14, State: 5.


From the text it seemed like a permission problem, which was confirmed by the documentation (see [1]): the Error Number and State correspond to a "User Id is not valid" situation. In a first step I attempted to give permissions to the local account (dollar sign included). The account wasn’t found in Active Directory (AD), though by typing the account directly in the "Login name" field I managed to give the account sysadmin permissions temporarily. The error continued to appear in the error log. I looked then at the accounts under which the SQL Services run - nothing suspect in there.

Apart from the error message, which was appearing with an alarming frequency (a few seconds apart), everything seemed to be working on the server. The volume of records (a few hundred thousand over a few days) bloating the error log, as well as the fact that I didn’t know what was going on, made me take the time and further investigate the issue.

Looking today at the Windows Logs for Applications, I observed that the error is caused by an account used for the Microsoft SQL Server IaaS Agent and IaaS Query Service. Once I gave permissions to the account, the error disappeared.

The search for a best practice on what permissions to give to the IaaS Agent and IaaS Query Service led me to [2]. To quote, the "Agent Service needs Local System rights to be able to install and configure SQL Server, attach disks and enable storage pool and manage automated security patching of Windows and SQL server", while the "IaaS Query Service is started with an NT Service account which is a Sys Admin on the SQL Server". In fact, this was the only resource I found that made a reference to the IaaS Query Service.

This was just one of the many scenarios in which the above error appears. For more information see for example [3], [4] or [5].

References:
[1] Microsoft (2017) MSSQLSERVER_18456 [Online] Available from: https://docs.microsoft.com/en-us/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error?view=sql-server-2017
[2] SQL Database Engine Blog (2018) SQL Server IaaS Extension Query Service for SQL Server on Azure VM, by Mine Tokus Altug [Online] Available from:  https://blogs.msdn.microsoft.com/sqlserverstorageengine/2018/10/25/sql-server-iaas-extension-query-service-for-sql-server-on-azure-vm/
[3] Microsoft Support (2018) "Login failed for user" error message when you log on to SQL Server [Online] Available from: https://support.microsoft.com/en-sg/help/555332/login-failed-for-user-error-message-when-you-log-on-to-sql-server
[4] Microsoft Technet (2018) How to Troubleshoot Connecting to the SQL Server Database Engine [Online] Available from: https://social.technet.microsoft.com/wiki/contents/articles/2102.how-to-troubleshoot-connecting-to-the-sql-server-database-engine.aspx
[5] Microsoft Blogs (2011) Troubleshoot Connectivity/Login failures (18456 State x) with SQL Server, by Sakthivel Chidambaram [Online] Available from: https://blogs.msdn.microsoft.com/sqlsakthi/2011/02/06/troubleshoot-connectivitylogin-failures-18456-state-x-with-sql-server/

09 June 2018

📦Data Migrations (DM): Guiding Principles

Data Migration
Data Migrations Series

Introduction

“An army of principles can penetrate where an army of soldiers cannot."
Thomas Paine

In life as well as in IT, principles serve as patterns of advice in the form of general or fundamental ideas, truths or values stated in a context-independent manner. They can be used as guidelines in understanding and modeling the reality, the world we live in. With the invasion of technologies in our lives, principles serve as a solid ground on which we can build castles – solutions for our problems. Each technology comes with its own set of principles that defines in general terms its usage. That's why most of the IT books attempt to capture these sets of principles. Unfortunately, few of the technical writers manage to define some meaningful principles and showcase their usage.

Many of the ideas considered as principles in papers on Data Migration (DM) are at best just practices, and some can be considered best/good practices. Just because something worked well in a previous migration doesn’t automatically mean that the idea behind the respective decision turns into a principle. Some of the advice advanced is just lessons learned in disguise. Principles through their generality apply to a broad range of cases, while practices are more activity specific.

A DM through its nature finds its characteristics at the intersection of several areas - database-based architecture design, ETL workflows, data management, project management (PM) and services. From these areas one can pull a set of principles that can be used in building DM architectures.

Architecture Principles

“Architecture starts when you carefully put two bricks together.”
Ludwig Mies van der Rohe

There are several general principles that apply to the architecture of applications, independently of the technologies used or the industry, e.g. research first, keep it simple/small, start with the end in mind, model first, design to handle failure, secure by design (aka safety first), prototype, progress iteratively, focus on value, reuse (aka don't reinvent the wheel), test early, early feedback, refactor, govern, validate, document, right tool – right people, make it to last, make it sustainable, partition around limits, scale out, defensive coding, minimal intervention, use common sense, process orientation, follow the data, abstract, anticipate obsolescence, benchmark, single-responsibility, single dispatch, separation of concerns, right perspective.

To them add a range of application design characteristics that can be considered as principles as well: extensibility, modularity, adaptability, reusability, repeatability, performance, revocability, auditability, subject-orientation, traceability, robustness, locality, heterogeneity, consistency, atomicity, increased cohesion, reduced coupling, monitoring, usability, etc. There are several principles that can be transported from problem solving into design - divide and conquer, prioritize, system’s approach, take inventory, and so on.

A DM’s architecture has more to do with a data warehouse's, as it relies heavily on ETL tasks and data need to be stored for various purposes. Besides the principles of good database design, a few other principles apply: model (the domain) first, denormalize, design for performance, maintainability and security, validate continuously. From the ETL area the following principles can be considered: single point of processing, each step must have a purpose, minimize touch points, rest data for checkpoints, leverage existing knowledge, automate the steps, batch processing.

In addition, considering their data-specific character, a DM can be regarded as one or several data products, though in contrast with typical data products a DM typically has a limited purpose. From this area the following principles could be considered: build trust with transparency, blend in, visualize the complex.

Data Management Principles

Considering that a DM’s focus is an organization's data, some principles need to focus on the management and governance of data. Data Governance together with Data Quality, Data Architecture, Metadata Management and Master Data Management are functions of Data Management. The focus is on data, metadata and their lifecycle, on processes, ownership, and roles and their responsibilities. With this in mind, several principles can be defined to facilitate the functions of Data Management: manage data as an asset, manage the data lifecycle, the business owns the data, integration across the organization, make data/metadata accessible, transparent and auditable processes, one source of truth.

As part of a DM there is customer, employee and vendor information which falls under the General Data Protection Regulation (GDPR, EU 2016/679), the regulation that defines the legal framework for data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA), as well as the export of personal data outside the EU and EEA. The regulation defines a set of principles that make its backbone: fairness, lawfulness and transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, accountability [6].

Overseas, the US Federal Trade Commission (FTC) issued in 2012 a report recommending organizations design and implement their own privacy programs based on a set of best practices. The report reaffirms the FTC’s focus on Fair Information Processing Principles, which include notice/awareness, choice/consent, access/participation, integrity/security, enforcement/redress [6].


Project Management (PM) Principles

"Management is doing things right […]"
Peter Drucker

A DM through its characteristics is a project and, to increase the chances of success, it needs to be managed as a project. Managing the DM as a project is one of the most important principles to consider. The usage of a PM framework will further increase the chances of success, as long as the framework is adequate for the purpose and the organization's team is able to use it. PMI, Prince2, Agile/Scrum/Kanban are probably the most used PM methodologies and they come with their own sets of principles.

In general, all or some of the PM principles apply independently of whether a methodology is used alone or in combination with other PM methodologies: a single project manager, an informed and supportive management, a dedicated team of qualified people to do the work of the project, clearly defined goals addressing stakeholders’ priorities, an integrated plan and schedule, as well as a budget of costs and/or resources required [1].

On the other side, an agile approach could prove to be a better match for a DM given that requirements change a lot, frequent and continuous deliveries are needed, collaboration is necessary, and agile processes as well as self-organizing teams can facilitate the migration. These are just a few of the catchwords that make the backbone of the Agile Manifesto (see [3]).

An agile form of Prince2 could be something to consider as well, especially when Prince2 is used as the methodology for other projects. For Prince2, the following principles are to be considered: continued business justification, learn from experience, defined roles and responsibilities, manage by stages, management by exception, focus on products, tailor to suit the project environment [2].

All these PM principles reveal important aspects to ponder upon, and maybe with a few exceptions, all can be incorporated in the way the DM project is managed.


Service Principles

Considering the dependencies existing between the DM and Data Quality, as well as to the broader project, a DM can have the characteristics of a service. It’s not an IT Service per se, as IT only supports the project technically and eventually from a PM perspective. Even if a DM is not an ITSM service, some of the ITIL principles can still apply: focus on value, design for experience, start where you are, work holistically, progress iteratively, observe directly, be transparent, collaborate and keep it simple [4].


Conclusion

“Obey the principles without being bound by them.”
Bruce Lee

Within a DM all the above principles can be considered, though the network of implications they create can easily shift the focus from the solution to the philosophical aspects, and that’s a marshy road to follow. Even if all principles are noble, not all can be considered. It would be utopic to consider each possible principle. The trick is to identify the most “important” principles (principles that make sense) and prioritize them according to the existing requirements. In theory, this is a one-time process that involves establishing a “framework” of best/good practices for the DM, with subsequent migrations only needing to consider the new facts and aspects.

Previous Post <<||>> Next Post

References:
[1] “Principles of project management”, by J. A. Bing, PM Network, 1994 (link)
[2] Axelos (2018) What is PRINCE2? (link)
[3] Agile Manifesto (2001) Principles behind the Agile Manifesto (link)
[4] Axelos (2018) ITIL® Practitioner 9 Guiding Principles (link)
[5] The Data Governance Institute (2018) Goals and Principles for Data Governance (link) 
[6] Navigating the Labyrinth: An Executive Guide to Data Management, by Laura Sebastian-Coleman for DAMA International, Technics Publications, 2018 (link)  

08 May 2018

📦Data Migrations (DM): Facts, Principles and Practices

Data Migration
Data Migrations Series

Introduction

Ask a person who is familiar with cars ‘how a car works‘ - you’ll get an answer even if it doesn’t entirely reflect the reality. Of course, the deeper one's knowledge about cars, the more elaborate or exact is the answer. One doesn't have to be a mechanic to give an acceptable explanation, though in order to design, repair or build a car one needs extensive knowledge about a car’s inner workings.

Keeping the proportions, the same statements are valid for the inner workings of Data Migrations (DM) – almost everybody in IT knows what a DM is, though to design or perform one you might need an expert.

The good news about DMs is that their inner workings are less complex than the ones of cars. Basically, a DM requires some understanding of data architecture, data modelling and data manipulation, and some knowledge of business data and processes. A data architect, a database developer, a data modeler or any other data specialist can approach such an endeavor. In theory, with some guidance also a person with knowledge about business data and processes can do the work. Even if DMs imply a certain complexity, they are not rocket science! In fact, there are tools that can be used to do most of the work, and there are some general principles and best practices about the architecture, planning and execution that can help in the process.

Principles and Best Practices

It might be useful to explain the difference between principles and best practices, because they’ll more likely lead you to success if you understand and incorporate them in your solutions. Principles as patterns of advice are general or fundamental ideas, truths or values stated in a context-independent manner. Practices on the other side are specific actions or applications of these principles stated in a context-dependent way. The difference between them is relatively thin and, therefore, they are easy to confuse, though by looking at their generality one can easily identify which is which.

For example, in the 1960s the “keep it simple, stupid” (aka KISS) principle became known, which states that a simple solution works better than a complex one, and therefore as a key goal one should seek simplicity in design. Even if somewhat pejorative, it’s a much simpler restatement of Occam’s razor - do something in the simplest manner possible because simpler is usually better. To apply it one must understand what simplicity means and how it can be translated into designs. According to Hans Hofmann, “the ability to simplify means to eliminate the unnecessary so that the necessary may speak”, or in a quote attributed to Einstein: “everything should be made as simple as possible, but not simpler”. This is the range within which the best practices derived from KISS can be defined.

There are multiple practices that allow reducing the complexity of DM solutions: start with a Proof-of-Concept (PoC), start small and build incrementally, use off-the-shelf software, use the best tool for the purpose, use incremental data loads, split big data files into smaller ones, and so on. As can be seen all of them are direct actions that address specific aspects of the DM architecture or process.


Data Migration Truths

When looking at principles and best practices they seem to be further rooted in some basic truths or facts common to most DMs. When considered together, they offer a broader view and understanding of what a DM is about.  Here are some of the most important facts about DMs:

DM as a project:

  • A DM is a subproject with specific characteristics
  • A DM is typically a one-time activity before Go live
  • A DM’s success is entirely dependent on an organization’s capability of running projects
  • Responsibilities are not always clear
  • Requirements change as the project progresses
  • Resources aren't available when needed
  • Parallel migrations require a common strategy
  • A successful DM can be used as recipe for further migrations
  • A DM's success is a matter of perception
  • The volume of work increases toward the end


DM Architecture

  • A DM is more complex and messier than initially thought
  • A DM needs to be repeatable
  • A DM requires experts from various areas
  • There are several architectures to be considered
  • The migration approach is dependent on the future architecture
  • Management Systems have their own requirements
  • No matter how detailed the planning, something is always forgotten
  • Knowledge of the source and target systems isn't always available
  • DMs are too big to be performed manually
  • Some tasks are easier to perform manually
  • Steps in the migration need to be rerun
  • It takes several iterations before arriving at the final solution
  • Several data regulations apply
  • Fall-back is always an alternative
  • IT supports the migration project/processes
  • Technologies are enablers and not guarantees for success
  • Tools address only a set of the needed functionality
  • Troubleshooting needs to be performed before, during and after migrations
  • Failure/nonconformities need to be documented
  • A DM is an opportunity to improve the quality of the data
  • A DM needs to be transparent for the business


DM implications for the Business:

  • A DM requires downtime for the systems involved
  • The business has several expectations/assumptions
  • Some expectations are considered as self-evident
  • The initial assumptions are almost always wrong
  • A DM's success/failure depends on the business' perception
  • The business' knowledge about the data and processes is relative
  • The business is involved for the whole project's duration
  • The business needs continuous communication
  • Data migration is mainly a business rather than a technical challenge
  • Business expertise in every data area is needed
  • DM and Data Quality (DQ) need to be part of a Data Management strategy
  • Old legacy system data have further value for the business
  • Reporting requirements come with their own data requirements


DM and Data Quality:

  • Not all required data are available
  • Data don't match the expectations
  • Quality of the data needs to be judged based on the target system
  • DQ is usually performed as a separate project with different timelines
  • Data don't have the same importance for the business
  • Improving DQ is a collective effort
  • Data cleaning needs to be done at the source (when possible)
  • Data cleaning is a business activity
  • The business is responsible for the data
  • Quality improvement is governed by the 80-20 rule
  • No organization is willing to pay for perfect data quality
  • If it can't be counted, it isn't visible

More to come, stay tuned…

30 January 2011

💠🛠️ SQL Server: Learning Resources

During the past weeks I found several interesting learning resources on SQL Server 2008 topics, so here they are:

For those interested in upgrading their knowledge to the SQL Server 2008 R2 edition, check the Microsoft SQL Server 2008 R2 Update for Developers Training Course videos on Channel9. They cover topics like Location Awareness, CLR Integration, StreamInsight, Reporting Services, Application and Multi-Server Management, improvements on BLOBs, T-SQL, and tools for Excel 2010 or SharePoint 2010.

The ER diagrams of SQL Server's DMVs and DMFs are available for download as PDFs at SQL Server 2008 Systems Views Map, with similar maps for SQL Server 2005 and SQL Server 2008 R2. Even if not all objects and possible relations are shown, they can still save you a lot of effort, as the diagrams depict the relations between the most important entities, the cardinality existing between them, and the additional constraints participating in the joins. The entities represented are: Objects, Types and Indexes, Linked Servers, CLR, Database Mirroring, Service Broker, Resource Governor, Transactions, Databases and Storage, Traces and Events, Execution Environment, and Server Information.

There are several free SQL Server books on SQLServerCentral.com coming from Redgate: A. Kuznetsov’s Defensive Database Programming, B. McGehee’s SQL Server Maintenance Plans, How to become an Exceptional DBA, Mastering SQL Server Profiler and Brad’s Sure Guide to SQL Server 2008, J. Magnabosco’s Protecting SQL Server Data, G. Fritchey’s SQL Server Execution Plans and SQL Server Crib Sheet Compendium (coauthor with A. Prasanna), R. Landrum’s SQL Server Tacklebox, plus several volumes of ‘Best of SQL Server Central’ authors in Vol. 1-4, Vol. 5, Vol. 6 and Vol. 7, and several other books to come.

It seems there are also a few new books on the Scribd website: R. Colledge's SQL Server Administration in Action, R. Vieira's Professional Microsoft SQL Server 2008 Programming, M. Lee and M. Mansfield's SQL Server 2008 Administration Instant Reference, plus many other books that can be retrieved by searching for SQL Server 2008 on the respective site.


There are also several papers coming from Microsoft and from various professionals. It's worth checking them out:
-   K. Lassen's paper on Using Management Data Warehouse for Performance Monitoring, in which he provides several best practices for the performance management of SQL Server 2008. Besides the introduction to Management Data Warehouse features, he also provides an extension of the built-in functionality for indexes, plus several functions and queries.
-  D. Kiely's paper on SQL Server 2008 R2 Security Overview for Database Administrators covers some of the most important security features in SQL Server 2008. There is a similar paper targeting SQL Server 2005.
-  B. Beauchemin’s paper on SQL Server 2005 Security Best Practices - Operational and Administrative Tasks describes the best practices for setting up and maintaining security in SQL Server 2005.
-  SQL Server 2008 Compliance Guide, a paper written to help organizations and individuals understand how to use the features of the Microsoft® SQL Server® 2008 database software to address their compliance needs.
-  SQL Server 2008 Upgrade Technical Reference Guide, covering the essential phases, steps and best practices to upgrade existing instances of SQL Server 2000 and 2005 to SQL Server 2008.
-  Technet Wiki Checklists on Database Engine Security, Database Engine Connections and Data Access (probably more to come).
-   Microsoft also maintains lists of White Papers on SQL Server 2008 and on SQL Server 2005.

15 August 2010

Data Security: SQL Injection I - Introduction

Introduction

  If you are working in IT, most probably you've already heard about SQL Injection; if not, it might be a good idea to ask your colleagues and eventually your IT manager whether your company has any policies related to it. If you are working for a software vendor or a consultancy company, then SQL Injection countermeasure techniques might be quite well positioned in the list of best practices your company has adopted for the development of Web/Desktop Applications, Web Services or database-related logic. If you are working for any other kind of company and have various software projects planned or already in house, then most probably you'll have to ask whether the software vendors you are working with have taken the SQL Injection threats into consideration and proved their solutions against them. On the contrary, if you have nothing to do with IT at all, it might still be a good idea to ask your IT department if they have anything in place related to SQL Injection: a security policy, security best practices, etc.

Definition

  Wikipedia defines SQL Injection as "a code injection technique that exploits a security vulnerability occurring in the database layer of an application" [3], code injection being defined as "the exploitation of a computer bug that is caused by processing invalid data" [4]. For a programmer the definition is acceptable, though for other types of professionals it might not be so clear what it's about, especially when they are not familiar with IT terminology. I find clearer the definition provided by J. Clarke et al., who in the book SQL Injection Attacks and Defense defines SQL Injection as "the vulnerability that results when you give an attacker the ability to influence the Structured Query Language (SQL) queries that an application passes to a back-end database" [2]. I will slightly modify the last definition and say that SQL Injection is a security vulnerability residing in the possibility to alter the intended behavior of the SQL queries passed to the database.
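
   To make the definition more tangible, here is a minimal T-SQL sketch, not taken from any of the cited sources; the dbo.Users table and the injected value are hypothetical. It shows how a value concatenated into a query can alter its intended behavior, and how passing the same value as a parameter avoids it:

-- vulnerable: the user-supplied value is concatenated into the statement
DECLARE @UserName nvarchar(50)
DECLARE @SQL nvarchar(max)
SET @UserName = N'x'' OR ''1''=''1'  -- at runtime the value is: x' OR '1'='1
SET @SQL = N'SELECT * FROM dbo.Users WHERE UserName = ''' + @UserName + N''''
EXEC (@SQL) -- the injected OR condition makes the filter always true

-- safer: the value is passed as a parameter, not as part of the statement
SET @SQL = N'SELECT * FROM dbo.Users WHERE UserName = @UserName'
EXEC sp_executesql @SQL, N'@UserName nvarchar(50)', @UserName = @UserName

The same principle applies at the application layer: parameterized commands or stored procedures should be used instead of building SQL strings directly from user input.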

Some Background

    At the beginning of our century, with the increasing importance of Web Applications whose availability over WAN/Internet brought new security issues, SQL Injection became a really hot topic given the damage such techniques could do to an application: with just a few tricks the "hacker" could enter the application and even the machine hosting the database, thus coming into the possession of sensitive information and, above all, having the possibility of damaging the database. J. Clarke et al. remark that the first connection between web applications and SQL Injection is widely accredited to Rain Forest Puppy, who in an article titled "NT Web Technology Vulnerabilities" (see the "ODBC and MS SQL server 6.5" section), written in 1998 for Phrack, an e-zine written by and for hackers [2], described the behavior specific to SQL Injection in relation to MS SQL Server 6.5.
 
    I remember when my boss broke the news to us that we urgently had to protect our applications against SQL Injection, which meant redesigning some of the database objects and components. I was then in my first or second year of professional experience, so the topic was new and quite intriguing not only to me but also to my colleagues, some of them with a few more years of professional programming experience than I had and, if I'm not mistaken, none (or few) of them had actually heard about it. It was interesting to see how simple techniques could do so much damage. At that time there were few articles on SQL Injection and specific countermeasure techniques, not to mention best practices, so we were kind of groping in the dark in finding a countermeasure to the problem.

State of Art

    Since then, the number of search engine hits on the topic has become quite impressive: many professionals have approached the problem in their own way, vendors started to design their solutions and make programmers aware of best practices in order to minimize this type of security threat, books were written on the topic, and awareness increased among developers and other types of IT professionals. Even if considerable effort has been made in this direction, and the topic appears often on blogs, there are still many web sites not designed to address SQL Injection concerns. In 2007, WhiteHat Security placed SQL Injection in 5th position in its top of vulnerabilities, estimating that 1 out of 5 web sites is vulnerable to SQL Injection [1]. In the 8th Website Security Statistics Report, based on 2009 data provided by WhiteHat Security [5], and as it seems also in the 9th report [6], SQL Injection remains in the same position; what's interesting to remark is the split per scripting technology provided in [6]:
 
SQL Injection - Statistics WhiteHat

   In the Web Hacking Incident Database maintained by the Web Application Security Consortium, SQL Injection accounts for 17.97% of the total of 512 reported top attack methods. Even if the number of reported attacks is insignificant compared to the number of sites available on the web, the percentage of cases seems to be in agreement with the numbers provided in the WhiteHat Security reports.

Resources

   If the topic made you curious, you can find out more with just a simple search on the Web. There are many professionals who have written on this topic; however, it's a good idea to start directly with the resources provided by the RDBMS vendors. For example, Microsoft's Security Research & Defense blog has an interesting list of resources on this topic in its SQL Injection Attacks post, a nice document on 'How to write SQL injection proof PL/SQL' comes from Oracle, an interesting presentation on 'SQL Injection Myths and Fallacies' was given at the MySQL Conference & Expo, etc.

References:
[1] WhiteHat Security (2007). Website Security Statistics Report. [Online] Available from: http://www.whitehatsec.com/home/assets/WPStatsreport_100107.pdf (Accessed: 15 August 2010)
[2] J. Clarke et al. (2009). SQL Injection Attacks and Defense. Elsevier. ISBN: 978-1-59749-424-3
[3] Wikipedia (2010). SQL Injection. [Online] Available from: http://en.wikipedia.org/wiki/Sql_injection (Accessed: 15 August 2010)
[4] Wikipedia (2010). Code Injection. [Online] Available from: http://en.wikipedia.org/wiki/Code_injection (Accessed: 15 August 2010)
[5] WhiteHat Security (2009). Website Security Statistics Report, 8th Ed. [Online] Available from: http://www.whitehatsec.com/home/assets/WPstats_fall09_8th.pdf (Accessed: 15 August 2010)
[6] WhiteHat Security (2010). Website Security Statistics Report, 9th Ed. [Online] Available from: http://www.slideshare.net/jeremiahgrossman/whitehat-security-9th-website-security-statistics-report-3995771 (Accessed: 15 August 2010)

13 August 2010

💎SQL Reloaded: Temporary Tables vs. Table Variables and TempDB

Yesterday I started to read Ken Henderson's book, SQL Server 2005 Practical Troubleshooting: The Database Engine, diving directly into the tempdb topic (Chapter 9, Tempdb Issues). He mentions that metadata are created in system tables when a temporary table is created (see p. 415). This means that when a temporary table is created, records must be created in tempdb's sys.tables and sys.columns catalog views, the respective records being deleted when the table is dropped. As I had never looked at what the metadata of a temporary table look like, I thought it was time to do something in this direction, and here's the code created for this purpose:

-- creating the temporary table 
CREATE TABLE #temp ( 
   id int NOT NULL 
, value nvarchar(50) NOT NULL) 

-- retrieving the metadata 
SELECT t.name table_name 
, s.name column_name 
, t.type  
, t.type_desc  
, t.create_date  
FROM tempdb.sys.tables t 
    JOIN tempdb.sys.columns s 
      ON t.object_id = s.object_id 
WHERE t.name LIKE '%temp%' 

-- dropping the temporary table 
-- DROP TABLE #temp  -- see the 2nd note!

temp vs variable tables - temporary example 

Note:
By widening the table_name column it can be seen that the object name corresponding to the temporary table is a combination of the table's name and, according to K. Henderson, the number designating the connection that owns the table.
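
To check which row belongs to the current session's table without resizing the column, one can compare the stored name with the object id resolved for the session's #temp table; the following is just a small sketch that reuses the table created above:

-- compare the full stored name with the object id of the session's #temp table 
SELECT t.name table_name 
, t.object_id 
, OBJECT_ID('tempdb..#temp') session_object_id 
FROM tempdb.sys.tables t 
WHERE t.name LIKE '#temp%' 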

If the temporary table is stored in tempdb and metadata are stored about it, what's happening with a table variable? Here's the answer:

-- creating the table variable 
DECLARE @temp TABLE( 
  id int NOT NULL  
, value nvarchar(50) NOT NULL) 


-- retrieving the metadata 
SELECT t.name table_name 
, s.name column_name 
, t.type  
, t.type_desc  
, t.create_date  
FROM tempdb.sys.tables t 
     JOIN tempdb.sys.columns s 
      ON t.object_id = s.object_id 
WHERE t.name LIKE '%#%' 
   AND DateDiff(ss, t.create_date, GetDate()) BETWEEN -2 AND 2

temp vs variable tables - table variable example

As can be seen, I had to put a little more effort into seeing a table variable's metadata. As there is no name that could be used to identify the table, the object's name being stored as a hexadecimal value, I had to restrict the list of tables by using the creation timestamp. Except for the name, the metadata stored about the two types of tables are identical for the same table definition. Actually, their definition is similar to the one of a "standard" table:

-- creating a "standard" table 
CREATE TABLE temp( 
  id int NOT NULL 
, value nvarchar(50) NOT NULL) 

-- retrieving the metadata 
SELECT t.name table_name 
, s.name column_name 
, t.type  
, t.type_desc  
, t.create_date  
FROM sys.tables t 
     JOIN sys.columns s 
      ON t.object_id = s.object_id 
WHERE t.name LIKE '%temp%' 

-- dropping the table 
-- DROP TABLE temp -- see the 2nd note!

temp vs variable tables - standard table example

Notes: 
(1) For exemplification I used a restricted list of attributes; when comparing the various tables' metadata, a SELECT * statement could be used instead. The above examples also reflect the differences in declaring the three types of tables.
(2) Microsoft recommends not dropping temporary tables explicitly, but letting SQL Server handle the cleanup automatically and thus taking advantage of the Optimistic Latching Algorithm, which helps prevent contention on tempdb [1].

Last updated: Oct-2024

References:
[1] Haripriya SB (2024) Do NOT drop #temp tables (link)
