
04 February 2021

📦Data Migrations (DM): Conceptualization (Part VI: Data Migration Layer)

Data Migration
Data Migrations Series

Besides migrating the master and transactional data from the legacy systems, there are usually three additional important business requirements for a Data Migration (DM) – migrate the data within the expected timeline, with minimal disruption for the business, respectively within the expected quality levels. Hence, the DM's timeline must match and synchronize with the main project's timeline in terms of main milestones, though the DM typically needs to be executed within a small timeframe of a few days during the Go-Live. In what concerns the third requirement, even if the data have high quality as available in the source systems or as provided by the business, there are aspects like integration and consistency that rely primarily on the DM logic.

To address these requirements the DM logic must reach a certain level of performance and quality that allows importing the data as expected. From the project's beginning until UAT the DM team will integrate the various information iteratively, test the changes several times and troubleshoot the deviations from expectations. The volume of effort required for these activities can be overwhelming. It's not only important for the whole solution to be performant, but each step must be designed so that, besides fast execution, the changes and troubleshooting involve a minimum of overhead.

To better understand the importance, imagine a quest game in which the character has to go through a labyrinth with traps. If the player makes a mistake, he'll need to restart from a certain distant point or even from the beginning. Now imagine that for each mistake he has the possibility of going one step back, trying a new option and moving forward. For some it may look like cheating, though in this way one can finish the game relatively quickly. It would be great if executing a DM allowed the same flexibility.

Unfortunately, unless the data are stored between steps or each step is a different package, an ETL solution doesn't provide the flexibility of changing the code, moving one step back, rerunning the step and troubleshooting, over and over again like in the quest game. To better illustrate the impact of such an approach, let's consider that the DM has about 40 entities and one needs to perform on average 20 changes per entity. If one is able to move forwards and backwards, each change will probably take only a few minutes to execute. Otherwise, rerunning a whole package can take 5-10 times longer or even more, depending on the package's size and the data volume. For the resulting 800 changes, a single additional minute per change equates to 800 minutes (about 13 hours).

In contrast, storing the data for an entity in a database at the important points of the processing and implementing the logic as a succession of SQL scripts allows this flexibility. The most important downside is that the steps need to be executed manually, though this is a small price to pay for the flexibility and control gained. Moreover, with a few tricks one can load deltas, as in the case of a phased DM.

To assure that the consistency of the data is kept, one needs to build for each entity a set of validation queries that check for duplicates, special cases, data integrity, incorrect formats, etc. The queries can be included in the sequence of logic used for the DM. Thus, one can react promptly to each unexpected value. When required, the validation rules can be built into reports and used by the users in the data cleaning process, or even logged periodically per entity for tracking the progress.
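A minimal sketch of two such validation queries, assuming a hypothetical staging table dbo.Customers with CustomerNumber as business key and a hypothetical mapping table dbo.CountryMappings:

-- duplicates on the business key
SELECT CST.CustomerNumber
, count(*) NoRecords
FROM dbo.Customers CST
GROUP BY CST.CustomerNumber
HAVING count(*) > 1;

-- integrity: customers without a valid country mapping
SELECT CST.CustomerNumber
, CST.CountryCode
FROM dbo.Customers CST
     LEFT JOIN dbo.CountryMappings CMP
       ON CST.CountryCode = CMP.CountryCode
WHERE CMP.CountryCode IS NULL;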

Previous Post <<||>> Next Post

28 December 2020

🧊Data Warehousing: ETL (Part IV: The Load Subprocess)

Data Warehousing

As part of the ETL process, the Load subprocess is responsible for loading the data into the destination table(s). It covers in theory the final steps of the data pipeline and in most of the cases the destination's structure matches the definition of the query used for data extraction, though this depends also on the transformations used in the solution.

A commonly used approach is dumping the data into an intermediary table in the staging area – a table with no constraints that matches only the data types from the source. Once the data are loaded, they are further copied into the production table. This approach minimizes the unavailability of the production table, as the load from an external data source normally takes longer than copying the data within the same database or instance. That might not be the case when the data are available in the same data center, however loading the data first into a staging table facilitates troubleshooting and testing. This approach also allows dropping the indexes on the production table before loading the data and recreating them afterwards. In practice, this proves to be an efficient method for improving data loads' performance.
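A minimal sketch of the pattern, with hypothetical table and index names:

-- load into the staging table with no constraints
INSERT INTO dbo.StagingCustomers (CustomerNumber, CustomerName, CountryCode)
SELECT CustomerNumber
, CustomerName
, CountryCode
FROM ExternalSource.dbo.Customers;

-- drop the indexes on the production table, copy the data, recreate the indexes
DROP INDEX IX_Customers_CustomerNumber ON dbo.Customers;

INSERT INTO dbo.Customers (CustomerNumber, CustomerName, CountryCode)
SELECT CustomerNumber
, CustomerName
, CountryCode
FROM dbo.StagingCustomers;

CREATE INDEX IX_Customers_CustomerNumber ON dbo.Customers (CustomerNumber);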

In general, it’s recommended to import the data 1:1 compared with the source query, though the transformations used can increase or decrease the number of attributes considered. The recommendation applies as well to the cases in which data come from different sources, primarily to separate the pipelines, as systems can have different refreshing requirements and other constraints.

One can consider adding a timestamp reflecting the refresh date and, upon case, also additional metadata (e.g. an identifier for the source system, a unique identifier for the record). The timestamp is especially important when the data are imported incrementally - only the data created since the last load are loaded. Except for the unique identifier, these metadata can however also be saved in a separate table, either with the same granularity as the load table (1:1) or with one record per load, table and system, storing a reference to the respective record in the load table. There are seldom logical arguments for using the former approach, while the latter works well when the metadata are used only for auditing purposes. If the metadata are needed in further data processing and performance is important, then the metadata can be considered directly in the load table(s).
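A minimal sketch of an incremental load driven by such a timestamp, with hypothetical table and column names:

-- incremental load based on the refresh timestamp
DECLARE @LastLoad datetime = (SELECT Max(LoadDate) FROM dbo.Orders);

INSERT INTO dbo.Orders (OrderNumber, OrderDate, Amount, SourceSystem, LoadDate)
SELECT OrderNumber
, OrderDate
, Amount
, 'ERP1' SourceSystem   -- identifier for the source system
, GetDate() LoadDate    -- refresh timestamp
FROM dbo.StagingOrders
WHERE CreatedDate > IsNull(@LastLoad, '19000101');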

A special approach is taken by the Data Vault methodology for Data Warehousing, which seems to gain increasing acceptance, especially for addressing the various compliance requirements for tracking the changes in records at the most granular level. To achieve this, the fact and dimension tables are split into several tables – the hub tables store the business keys together with load metadata, the link tables store the relationships between business keys, the satellite tables store the descriptions of the business keys (the other attributes except the business key), while the reference tables store the dropdown values. Besides this splitting of the tables there are several other constraints that apply. Spreading the data over multiple tables can increase the overall complexity and come with performance penalties, as more tables need to be joined, however it might be the price to pay if traceability and auditability are a must.
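A minimal sketch of the three table types for a hypothetical Customer entity (names, keys and data types are only assumptions):

-- hub: business key plus load metadata
CREATE TABLE dbo.HubCustomer (
  CustomerHashKey char(32) PRIMARY KEY
, CustomerNumber nvarchar(20)
, LoadDate datetime
, RecordSource nvarchar(50));

-- satellite: descriptive attributes of the business key
CREATE TABLE dbo.SatCustomer (
  CustomerHashKey char(32)
, LoadDate datetime
, CustomerName nvarchar(100)
, CountryCode nvarchar(10)
, RecordSource nvarchar(50)
, PRIMARY KEY (CustomerHashKey, LoadDate));

-- link: relationship between two business keys
CREATE TABLE dbo.LinkCustomerOrder (
  CustomerOrderHashKey char(32) PRIMARY KEY
, CustomerHashKey char(32)
, OrderHashKey char(32)
, LoadDate datetime
, RecordSource nvarchar(50));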

There are scenarios in which the requirements for the ETL packages are driven by the target (load) tables – the format is already given – one thus needing to accommodate the data into the existing tables or to extend the respective tables to accommodate more attributes. It's the case for load tables storing data from multiple systems with a similar purpose (e.g. financial data from different ERP systems needed for consolidations).

27 December 2020

🧊Data Warehousing: ETL (Part III: The Extract Subprocess)

 

Data Warehousing
Data Warehousing Series

As part of the ETL process, with applicability to Data Warehousing, Data Migrations, Data Integrations or similar scenarios, the extraction subprocess is responsible for preparing and implementing the logic required to extract the data from the various source systems at the required level of detail. The extraction is typically done based on SQL queries, as long as one deals with relational databases or any OLEDB or ODBC-based data repositories, including flat or MS Office files.

One can consider the preparation of the extraction logic as a separate design subprocess of the targeted solution. Even if high-level design decisions are made at the respective level, the low-level design needs to be considered at ETL package level. As part of the process, the source of the data is identified in terms of systems, tables and attributes to be imported, as well as the joins, business and transformation rules that need to be applied on the data. This can involve reengineering the logic from the source system(s), as well as data profiling, discovery or exploration activities.

A common practice is to copy the source tables 1:1 into the solution, possibly considering only the needed attributes to minimize the necessary space, loading time and content's complexity, even if this adds more effort to the design phase to identify only the needed attributes. If further attributes are identified at a later stage, the packages need to be modified accordingly. If the data volume or the number of unnecessary attributes is negligible, copying the table 1:1 could prove to be the best strategy.

A second approach is to model within the extraction the (business) entity as designed within the source system. For example, the entity could be split over multiple tables for design or other purposes. Thus, the extraction query will attempt to model the entity. This approach reduces to some degree the number of tables in the targeted solution, as well as the number of ETL packages involved, while providing a clear depiction of the entities involved.
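A minimal sketch of such an extraction query, assuming a hypothetical Customer entity split over three source tables:

-- modeling the Customer entity within the extraction
SELECT CST.CustomerNumber
, CST.CustomerName
, ADR.Street
, ADR.City
, ADR.PostalCode
, CNT.CountryName
FROM SourceDb.dbo.Customers CST
     LEFT JOIN SourceDb.dbo.CustomerAddresses ADR
       ON CST.CustomerId = ADR.CustomerId
      AND ADR.IsPrimary = 1
     LEFT JOIN SourceDb.dbo.Countries CNT
       ON ADR.CountryId = CNT.CountryId;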

A third approach is to extract the data as needed by the target system, possibly as a mix between master and transactional data, which could easily lead to data redundancy with different timeliness and all the consequences resulting from this. This approach is usually met in solutions which require fast data availability to the detriment of design.

Unfortunately, there can be design constraints or choice considerations that lead to a mix between these approaches. While the impact caused by a mix between the first two approaches is minimal, the third approach can cause more challenges, though it might be a small price to pay as long as the considered data are disconnected from other data.

To reduce the redundancy of data, it's recommended to consider as a goal creating a unique source of facts, which can be obtained by minimizing as much as possible the overlaps between tables, respectively entities. Ideally there should be no overlaps. On the other side, overlaps can be acceptable when the same data are available in several systems and the solution requires all the data to be available.

While the above approaches consider the vertical partitioning of the data, there can also be horizontal partitioning needs, especially when only a subset of the data is needed or when the data must be partitioned based on a set of values. In addition, one might be forced to include transformation rules directly in the extraction logic, for example to handle conversion issues or to minimize certain design overhead early in the process. In practice it makes sense to link such choices to business rules and document them accordingly.
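A minimal sketch combining a horizontal partition filter with a conversion handled directly in the extraction (hypothetical names; TRY_CAST assumes SQL Server 2012+):

-- horizontal partitioning and an early conversion in the extraction logic
SELECT ORD.OrderNumber
, ORD.OrderDate
, TRY_CAST(ORD.Amount as numeric(18,2)) Amount  -- handle conversion issues early
, ORD.CompanyCode
FROM SourceDb.dbo.Orders ORD
WHERE ORD.OrderDate >= '20200101'               -- only the needed subset
  AND ORD.CompanyCode IN ('1000', '2000');      -- partitioning based on a set of values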

Previous Post <<||>> Next Post

27 November 2020

🧊Data Warehousing: ETL (Part II: An Introduction)

 


ETL (Extract, Transform, Load) processes, technologies or tools are about extracting data from one or more data sources via a set of queries, performing changes on the data via conversions, aggregations, mappings or other types of transformations, respectively loading the data into target tables or other types of repositories. Thus, an ETL process allows moving and transforming data between predefined data structures on an ad-hoc basis or as part of stable repetitive processes, which makes ETL ideal for data warehousing, data integrations, data migrations or similar scenarios.

ETL Data Flow

Extract: The extraction of data is typically done based on SQL queries from relational databases or any OLEDB or ODBC-based data repositories, including flat or MS Office files, though modern ETL tools can support other types of queries (CAML, XQuery, DAX) or even NoSQL architectures (Hadoop). This allows addressing a wide range of requirements, the complexity of the logic depending on the functionality provided by the query languages, respectively on the extraction functionality available.

Transform: The transformation logic can be implemented based on the functionality provided by the ETL tool, and can involve, upon case, any combination of aggregates, conditional splits, merges, lookups, multicasts, pivoting/unpivoting, cleansing, data conversions, sampling, mapping or any other transformations that can be performed on an in-transit dataset. On the other side, quite often the same can be achieved with the help of SQL-based manipulations directly in the extraction logic or later in the process. SQL can occasionally prove to be faster and more flexible than the transformations provided by the ETL tool, however, despite the overlaps, the two approaches can complement each other when used adequately.
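A minimal sketch of a few such transformations expressed directly in SQL (hypothetical table and column names):

-- lookup, conditional mapping and aggregation within the extraction query
SELECT ORD.CompanyCode
, IsNull(MAP.Region, 'Unknown') Region                               -- lookup/mapping
, CASE WHEN ORD.Amount < 0 THEN 'Credit' ELSE 'Debit' END EntryType  -- conditional split
, SUM(ORD.Amount) TotalAmount                                        -- aggregation
FROM dbo.Orders ORD
     LEFT JOIN dbo.RegionMappings MAP
       ON ORD.CompanyCode = MAP.CompanyCode
GROUP BY ORD.CompanyCode
, IsNull(MAP.Region, 'Unknown')
, CASE WHEN ORD.Amount < 0 THEN 'Credit' ELSE 'Debit' END;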

Load: The load is usually just a dump of the data into one or more final or intermediary tables with predefined structures. Unless the data don't match the data types, formats or further defined constraints, the load seldom involves further challenges, as long as the solution was designed adequately.

Within the logical model, extract, transform and load can be considered as processes by themselves. Within the object model provided by the ETL tool, they appear in the mentioned sequence within a data flow, which, within a set of workflow constraints, defines how the data move through the pipeline – the sequence of processing steps considered. The basic unit of work is the data flow and the workflow it belongs to, a unit that can be encapsulated in one container for easier management or simply for convenience. Several containers can be linked within a workflow to create more complex behavior.

The data flows and workflow constraints, together with the supporting connections and containers form an ETL package, the main unit of work for encapsulating and running ETL logic. ETL packages are scheduled and run as fit for the purpose.

With the right design, these building blocks allow enough flexibility for handling ad-hoc requests or for building complex solutions. This involves decisions on how to partition the ETL packages, respectively the data flows, in which order they should be run, where and in which sequence the data should be transformed, how to handle exceptions, whether and how to build intermediary data repositories, how to handle audit requirements, and so on. Each of these choices can prove to be important.

Knowledge of the ETL architecture and functionality is essential for providing the right solution to the problem considered, however, once the basics are understood, the challenges typically reside in understanding the source and/or target structures, the logical and physical entities available, identifying the way the data can be partitioned horizontally or vertically, respectively what types of transformations are required for moving the data, as required by the solution.

Previous Post <<||>> Next Post

20 May 2020

💎🏭SQL Reloaded: Query Patterns in SQL Server (Part V: Matrix Representation)

In a previous post on Project Management (PM) I used Excel to represent the data in a matrix format. The same output can be obtained by using SQL in a simple query based on a CROSS JOIN between two numeric intervals created via a CTE within a table-valued function. For this purpose I used an old piece of code which displays the values within an interval given a step - the typical loop with a specified step.
I made mainly two changes to the old example - I defined the start, end and step to be numeric, while also adding an index to the output table. Here's the modified function:

-- SQL Server 2005+: Numeric Interval 
CREATE FUNCTION dbo.fGetNInterval( 
  @Start numeric(18,2)
, @End numeric(18,2)
, @Step numeric(18,2)) 
RETURNS @Interval TABLE( 
  Ind int PRIMARY KEY
, Val numeric(18,2)) 
AS 
BEGIN 
 DECLARE @Sign smallint 
 SET @Sign = IsNull(NullIf(Sign(@End-@Start), 0), 1); 
 WITH CTE(Ind, Val) 
 AS( 
   SELECT Cast(1 as int) Ind
   , Cast(COALESCE(@Start, @End, 0) as numeric(18,2)) Val 
   UNION ALL 
   SELECT cast(Ind+1 as int) Ind
   , Cast(CTE.Val + @Step * @Sign as numeric(18,2)) Val 
   FROM CTE 
   WHERE (@End-CTE.Val)*@Sign-@Step>=0 
 ) 
 INSERT @Interval 
 SELECT Ind
 , Val 
 FROM CTE 
 OPTION(MAXRECURSION 0) 
 RETURN 
END 
As can be seen, the function can be used to step forwards as well as backwards:

--descending sequence 
SELECT UT.Val, 100/UT.Val
FROM dbo.fGetNInterval(100, 25, 5) UT

-- ascending sequence 
SELECT *
FROM dbo.fGetNInterval(1, 2, 0.25)
Based on these two sequences the matrix query can be written as follows:

-- matrix representation
DECLARE @EE as numeric(18,2) = 100
SELECT UT.Val [UT %]
, Max(CASE WHEN QF.Ind = 1 THEN Cast(QF.Val*@EE*100/UT.Val as numeric(18,1)) END) [80%]
, Max(CASE WHEN QF.Ind = 2 THEN Cast(QF.Val*@EE*100/UT.Val as numeric(18,1)) END) [85%]
, Max(CASE WHEN QF.Ind = 3 THEN Cast(QF.Val*@EE*100/UT.Val as numeric(18,1)) END) [90%]
, Max(CASE WHEN QF.Ind = 4 THEN Cast(QF.Val*@EE*100/UT.Val as numeric(18,1)) END) [95%]
, Max(CASE WHEN QF.Ind = 5 THEN Cast(QF.Val*@EE*100/UT.Val as numeric(18,1)) END) [100%]
FROM dbo.fGetNInterval(100, 25, 5) UT
, dbo.fGetNInterval(1, 2, 0.25) QF
GROUP BY UT.Ind
, UT.Val

Here’s the output:

Note:
The indexes were used to display the values in the needed format. The table-valued function could also be created without the respective indexes, however that would have complicated the query.

Previous Post <<||>> Next Post 

07 May 2019

🧭Business Intelligence: Perspectives (Part IV: How Big Is Your Report?)

Business Intelligence

How big are your reports? How big do reports need to be? Do your reports really reflect your needs? Have they become too cluttered with data? Do you have too many reports on the same topic? How many is too many? These are a few of the questions BI developers and users should ask themselves from time to time.

A report is any document with textual and/or graphical formatted output of data from one or more data sources, (previously) designed to convey a basis for decision making or operational activities. A report is characterized by the amount of data it holds (the datasets), the amount of data it is based on (the source data), the number and complexity of the queries on which the report is based, the number of data sources, the manner in which data are structured (tabular, matrix, graphical), the filtering and sorting possibilities, as well as by the navigability possibilities (drilldown, drill-through, slice-and-dice, etc.).

On the other side, for users important characteristics are the report's performance, the amount of useful information it conveys, the degree to which it helps address a business need, the quality of the data, the degree to which it satisfies the various policies, the look and feel, and the possibility of exporting the data to standard file formats.

A report’s size is typically defined by the product of the columns and records the report displays, plus the formatting and various types of graphical content, however this depends on the filter criteria used by the user. Usually the average size of a report based on the typical filters is considered. Nowadays networks and database-specific techniques allow displaying fairly big reports (20-50 MB) in a fair amount of time (10-20 seconds) without affecting the network’s, respectively the database’s performance, which for most of the requests should be enough. When the users need bigger volumes of data, then a direct data dump (extract) from the database should be considered, when possible. (A data export is not a report and they should be differentiated as such.)

The number of records that can be shown in a report depends on the reporting framework’s capabilities, e.g. there are reporting tools that cope well with showing a few thousand records but have difficulties in showing or exporting tens of thousands of records. The best example in this respect is Excel and its well-known limitation of 65536 records (2^16) and 256 columns (2^8), which in the meantime has been addressed in Excel 2007 and enlarged to about 1 million records (2^20), respectively 16384 columns (2^14). Even so, reporting tools that use older drivers can fail to export all the data to Excel when the former limitation is reached.

In general, reports with too many columns tend to obfuscate the understanding of the data and are more difficult to navigate. The more the user needs to scroll horizontally, the higher in general the obfuscation. If the users really need 50 columns then they should be provided, however in general 20-25 should be enough for an operational report. Tactical and strategic reports need a more restrained focus, and the information should fit in one screen without the need for scrolling.

When reports get too big, it is recommended to split them into two or more reports to address specific requirements, however this can lead to too many distinct reports, and further to the duplication of effort for creating and documenting them, and to the duplication of logic and data. Therefore, the challenge is to find the right balance between the volume of reports, their usability and the effort needed to manage them. In certain scenarios it even makes sense to consolidate similar reports.

04 February 2017

💠🛠️SQL Server: Administration (Killing Sessions - Killing ‘em Softly and other Snake Stories)

Introduction

There are many posts on the web advising succinctly how to resolve a blocking situation by terminating a session via the kill command, though few of them warn about its use and the several important aspects that need to be considered. The command is powerful and, to use an old adage, “with power comes great responsibility”, a responsibility not felt when reading between the lines. The easiness with which people treat the topic can be seen in questions like “is it possible to automate terminating sessions?” or in explicit recommendations to terminate the sessions when dealing with blockings.

A session is created when a client connects to an RDBMS (Relational Database Management System) like SQL Server, being nothing but an internal logical representation of the connection. It is used further on to perform work against the database(s) via (batches of) SQL statements. Along its lifetime, a session is uniquely identified by an SPID (Server Process ID) and addresses one SQL statement at a time. Therefore, when a problem with a session occurs, it can be traced back to a query, where the actual troubleshooting needs to be performed.

Even if each session has a defined scope and memory space, and cannot interact with other sessions, sessions can block each other when attempting to use the same data resources. Thus, a blocking occurs when one session holds a lock on a specific resource and a second session attempts to acquire a conflicting lock type on the same resource. In other words, the first session blocks the second session from acquiring a resource. It’s like a drive-through at a fast-food restaurant in which cars must line up in a queue to place an order. The second car can’t place an order until the first has received its order – it is blocked from placing an order. The third car must wait for the second, and so on. Similarly, sessions wait in line for a resource, fact that leads to a blocking chain, with a head (the head/lead blocking) and a tail (the sessions that follow). It’s a FIFO (first in, first out) queue and, using a little imagination, one can compare it metaphorically with a snake. Even if imperfect, the metaphor is appropriate for highlighting some important aspects that can be summed up as follows:

  • Snakes have their roles in the ecosystem
  • Not all snakes are dangerous
  • Grab the snake by its head
  • Killing ‘em Softly
  • Search for a snake’s nest
  • Snakes can kill you in sleep
  • Snake taming

Warning: snakes as well as blockings need to be handled by a specialist, so don’t do it by yourself unless you know what you are doing!

Snakes have their roles in the ecosystem

Snakes, as middle-order predators, have an important role in natural ecosystems, as they feed on prey species whose numbers would increase exponentially if not kept under control. Fortunately, natural ecosystems have mechanisms that tend to auto-regulate themselves. Artificially built ecosystems need such auto-regulation mechanisms as well. As a series of dynamic mechanisms and components that work together toward a purpose, SQL Server is an (artificial) ecosystem that tends to auto-regulate itself. When its environment is adequately sized to handle the volume of information or data it must process, the system will behave smoothly. As soon as it starts processing more data than it can handle, it starts misbehaving to the degree that one of its resources gets exhausted.

Just because a blocking occurs doesn’t mean that it is a bad thing and needs to be terminated. Temporary blockings occur all the time, as an unavoidable characteristic of any RDBMS with lock-based concurrency like SQL Server. They are however easier to observe in systems with heavy workload and concurrent access. The more users in the system touch the same data, the higher the chances for a block to occur. A well-designed database and application architecture typically minimize blockings’ occurrence and duration, making them almost unobservable. At the opposite extreme, poor database design combined with poor application design can turn blockings into a DBA’s nightmare. Persistent blockings can be a sign of poor database or application design or a sign that one of the environment’s limits was reached. It’s a sign that something must be done. Restarting the SQL Server, terminating sessions or adding more resources have only a temporary effect. The opportunity typically lies in addressing poor database and application design issues, though this can become costlier with time.

Not all snakes are dangerous

A snake’s size is the easiest characteristic for identifying whether a snake is dangerous or not. Big snakes inspire fear in any mortal. Similarly, “big” blockings (blockings consuming an important percentage of the available resources) are dangerous and have the potential of bringing the whole server down, eating its memory resources slowly until its life comes to a stop. It can be a slow as well as a fast death.

Independently of their size, poisonous snakes are a danger for any living creature. By studying snakes’ characteristics like the shape of their pupils and skin color patterns, folk wisdom devised simple general rules (with local applicability) for identifying whether snakes are poisonous or not. Thus, snakes with diamond-shaped pupils or with color patterns in which red touches yellow are likely/believed to be poisonous. By observing the behavior of blockings and learning about SQL Server’s internals, one can with time understand the impact of each blocking on the server’s performance.

Grab the snake by its head

Restraining a snake’s head assures that the snake is not able to bite, though it can be dangerous, as the snake might believe it is dealing with a predator that is trying to hurt it, and react accordingly. On the other side, troubleshooting blockings must start with the head, the blocking session, as it’s the one which created the blocking problem in the first place.

In SQL Server, sp_who and its alternative sp_who2 provide a list of all sessions, with their status, SPID and a reference to the SPID of the session blocking them. They display thus all the blocking pairs. When one deals with a few blockings, one can easily see whether the sessions form a blocking chain. Especially in environments under heavy load one can deal with so many blockings that it becomes difficult to identify all the formed blocking chains. Identifying blocking chains is necessary because identifying and directly terminating the head blocking will often make the whole blocking chain disappear. The other sessions in the chain will thus perform their work undisturbed.
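A minimal sketch that lists the blocked sessions together with the sessions blocking them, based on the dynamic management views available since SQL Server 2005:

-- blocked sessions and their blockers
SELECT REQ.session_id
, REQ.blocking_session_id
, REQ.wait_type
, REQ.wait_time
, TXT.text AS sql_text
FROM sys.dm_exec_requests REQ
     CROSS APPLY sys.dm_exec_sql_text(REQ.sql_handle) TXT
WHERE REQ.blocking_session_id <> 0;

-- the classic alternative
EXEC sp_who2;

A head blocker is a session that appears as blocking_session_id for other sessions without being blocked itself.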

Going and terminating each blocking session in pairs as displayed by sp_who is not recommended, as one terminates more sessions than needed, fact that could have unexpected repercussions. As a rule, one should restore the system’s health while doing minimal damage.

In many cases terminating the head session will make the blocking chain disperse, however there are cases in which the head session is replaced by another session (e.g. when the sessions involve the same or similar queries). One will need to repeat the needed steps until all blocking chains dissolve.

Killing ‘em Softly 

Killing a snake, no matter how blamable the act, is sometimes necessary. Therefore, it should be used as a last resort, when there is no other alternative and when needed to save one’s or others’ life. Similarly, killing a session should be done only in extremis, when necessary. For example, when the server’s performance has degraded considerably, affecting other users, or when the session is hanging indefinitely.


The kill command is powerful, having the power of a hammer. The problem is that when you have a hammer, every session looks like a nail. Despite all the aplomb one has when using a tool like a hammer, one needs to be careful in dealing with blockings. A blocking not addressed correspondingly can kick back, and in special cases the bite can be deadly, for the system as well as for one’s job. Killing the beast is the easiest approach. Kill one beast and another one will take its territory. It’s one of the laws of nature applicable also to database environments. The difference is that if one doesn’t address the primary cause that led to a blocking, the same type of snake will more likely appear repeatedly.


Unfortunately, the kill command is not bulletproof for terminating a session; it may only wound the snake. As the documentation warns, there can be cases in which the method won’t have any effect on the blocking, the blocking continuing to roam around. So, it might be a good idea to check whether the session disappeared and keep an eye on it until it has totally disappeared. Especially when dealing with a blocking chain, it can happen that the head session is replaced by another session, which was probably waiting for the same resources as the previous head session. It may also happen that one deals with two or more blocking chains independent of each other. Such cases appear seldom but are possible.


Killing the head session of a blocking without gathering some data provides fewer opportunities for learning, for understanding what’s happening in your system, for identifying what caused the blocking to occur. Therefore, before jumping to kill a session, collect the data you need for further troubleshooting.
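A minimal sketch of what could be collected before terminating a session, assuming the head blocker has SPID 53 (replace it with the actual SPID):

-- last statement sent by the session
DBCC INPUTBUFFER(53);

-- who and what is behind the session
SELECT SES.login_name
, SES.host_name
, SES.program_name
, SES.last_request_start_time
FROM sys.dm_exec_sessions SES
WHERE SES.session_id = 53;

-- only afterwards, if there is really no other alternative
KILL 53;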

Search for a snake’s nest 

With the warning that, unless one deals with small snakes, it might not be advisable to search for a snake’s nest, the idea behind this heuristic is that where a snake occurs there is more likely also a nest not far away, where several other snakes might hide. Similarly, a query that causes permanent blockings might be an indication of code that generates a range of misbehaving queries. It can be the same code or different pieces of code. One can attempt to improve the performance of a query that leads to blockings by adding more resources on the server or by optimizing SQL Server’s internals, though one can’t compensate for poor programming. When possible, one needs to tackle the problem at the source, otherwise performance improvements are only temporary.

Snakes can kill you in sleep 

When wandering into the wild, as well as when having snakes as pets, one must take all measures to assure that nobody’s health is endangered. The same principle should apply to databases as well, and the first line of defense resides in actively monitoring the blockings and addressing them in a timely manner as they occur. Being too confident that nothing happens and not taking the necessary precautions can prove to be a bad strategy when a problem occurs. In some situations, the damage might be acceptable in comparison with the effort and costs needed to build the monitoring infrastructure, though for critical systems it can come with important costs.

Snakes’ Taming 

Having snakes as pets doesn’t seem like a good idea, and there are so many reasons why one shouldn’t do it (see PETA’s reasons)! On the other side, there are also people with uncommon hobbies, who not only limit themselves to having a snake as a pet, but try to tame them, to have them behave like pets. There are people who breed snakes to harness their venom for various purposes, an occupation that requires handling snakes closely. There are also people who brought their relation with snakes to the level of an art; since ancient Egypt, snake charming has been a tradition in countries from Southeast Asia, the Middle East, and North Africa. Even if not all snakes are tameable, snake taming and charming is possible. In the process the tamer must deprogram or control the snake’s behavior, following a specific methodology in a safe environment.

No matter how much one tries to avoid persistent blockings, one can learn from troubleshooting blockings, about their sources and behavior, as well as about one’s own limitations. One complex blocking can be a good example with which one can test one’s knowledge about SQL Server internals as well as about applications’ architecture. Each blocking provides a scenario in which one can learn something.

When fighting with a blocking, it’s wise to do it within a safe environment, typically a test or development environment. Fighting with it in a production environment can cause unnecessary stress and damage. So, if you don’t have a safe environment in which to carry out the fight, then build one and try to keep the same essential characteristics as in the production environment!

There will also be situations in which one must fight with a blocking in the production environment. Then, be careful not to damage the data or the environment, and take all the needed precautions!


Conclusion

The comparison between snakes and blockings might not be perfect, though hopefully it will imprint in the reader’s mind the dangers of handling blockings inappropriately and increase the awareness of the related topics.

25 March 2011

🧭Business Intelligence: Troubleshooting (Part II: Approaching a Query)

Business Intelligence Series

Introduction

You received a (long) query for troubleshooting, reviewing, conversion or any similar task. In addition, you don’t know much about the underlying table structure or business logic. So, what do you do then? For sure two things are intuitively clear: you don’t need to panic, and understanding the query may help you in your task. Understanding the query – it seems such a simple statement, though there is more to it. Here are some points on how to approach a query.

State your problem
 
“A problem well stated is a problem half solved” (Charles F. Kettering). Before performing any work, check what’s requested from you, whether you have the information required for the task(s) ahead, for example documentation, valid examples, all the code, etc. If something is missing, don’t hesitate to request all the information you need. While waiting for information, you can continue with the next steps. As we don’t live in a perfect world, there will also be cases in which you’ll have to fill the gaps yourself by performing additional research/work. When troubleshooting, it is important to understand what’s wrong and, when possible, to have data against which to compare the query’s output.

Save the work

Even if you have a copy of the query somewhere on the server, save the previous version of the query and, when possible, use versioning. It might seem a redundant task, however the fact is that you never know when you’ll need to refer to it and, as you’ll see next, it can/should be used as a baseline for validating the changes. In case you haven’t saved the query, check whether your RDBMS tracks metadata about the queries run and, if the metadata were not reset in the meantime, you might be lucky enough to find a copy of your query.
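For example, in SQL Server a sketch like the one below can search the cached query texts for a fragment of the lost query, assuming the plan cache was not cleared in the meantime:

-- searching the cached query texts (SQL Server 2005+)
SELECT TOP 50 QST.last_execution_time
, TXT.text AS sql_text
FROM sys.dm_exec_query_stats QST
     CROSS APPLY sys.dm_exec_sql_text(QST.sql_handle) TXT
WHERE TXT.text LIKE '%PurchaseOrderHeader%'
ORDER BY QST.last_execution_time DESC;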

I found that it is important to save the daily work, the various analyses performed in order to understand a query, the various versions and even the data used for testing. All this work could help you later to review what you did and the steps you missed, to reuse one of the queries for further work, etc.

Break down

When the query is too complex, it could be useful to break it into chunks that can be run and understood in isolation. Typically such chunks derive from the query’s structure (e.g. inline queries, subqueries derived from unions). I found that often focusing on only a chunk of a query helps isolate issues.

Restructure

Many programmers still write queries using the old non-ANSI join syntax in which the join constraints appear in the WHERE clause, making the understanding and troubleshooting of a query more difficult. Often I found myself in the position of first transforming a query to ANSI SQL syntax before performing further work on it. It’s actually a good occasion to gain a first understanding of the query’s structure, but I’d prefer not to do it so often. In addition, during the restructuring phase it makes sense to differentiate between the join and filter constraints, which helps isolate the issue(s).
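A minimal sketch of such a restructuring, based on the AdventureWorks sample database (the filter value is only an assumption):

-- old-style syntax: join and filter constraints mixed in the WHERE clause
SELECT POH.PurchaseOrderID
, POD.ProductID
, POD.OrderQty
FROM Purchasing.PurchaseOrderHeader POH
, Purchasing.PurchaseOrderDetail POD
WHERE POH.PurchaseOrderID = POD.PurchaseOrderID
  AND POH.Status = 2;

-- the same query in ANSI syntax: join constraints in the ON clause, filters in the WHERE clause
SELECT POH.PurchaseOrderID
, POD.ProductID
, POD.OrderQty
FROM Purchasing.PurchaseOrderHeader POH
     JOIN Purchasing.PurchaseOrderDetail POD
       ON POH.PurchaseOrderID = POD.PurchaseOrderID
WHERE POH.Status = 2;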

Check cardinalities

Wrong join constraints lead to duplicates or fewer records than expected, such differences being difficult to track when the variances in the numbers of records are quite small. Even if RDBMS come to developers’ help by providing metadata about the join relations, the columns and predicates participating in a join are not always so easy to identify. Therefore, in order to address this issue, it’s necessary to check the constraints between any two tables participating in a join. Sometimes, when the query is based on the table with the lowest level of detail, it can be enough to check the variations in the number of records.
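A minimal sketch of such cardinality checks, again based on the AdventureWorks sample database:

-- is the join key unique in the table joined to?
SELECT count(*) NoRecords
, count(DISTINCT PurchaseOrderID) NoOrders
FROM Purchasing.PurchaseOrderHeader;

-- does the number of records change after adding the join?
SELECT count(*) NoRecords
FROM Purchasing.PurchaseOrderDetail POD
     JOIN Purchasing.PurchaseOrderHeader POH
       ON POD.PurchaseOrderID = POH.PurchaseOrderID;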

Check filter constraints

Filter constraints are maybe more difficult to identify, especially when it is needed to reengineer the logic built into applications. Many of the filter constraints are logical, though when you have no documentation about the schemas, it’s like rambling in the dark, having to check real examples and identify the various values and the impact they have on the behavior of your report.

Validate changes
 
So, you made the changes and everything looks perfect. Is it so? Often your intuition might tell you that the logic of a query is correct, though as software is not based on magic, at least not all the time, check some of the records to assure that the data are rendered as expected, check totals, compare the current with the previous version, identify variations, etc. You don’t need to use all the techniques you know, but choose the best and minimal set of tools that allows you to validate the query.
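A minimal sketch for comparing the output of the current version against the saved baseline, assuming both result sets were stored in hypothetical tables:

-- records from the baseline missing in the new output
SELECT PurchaseOrderID, TotalDue FROM dbo.BaselineResults
EXCEPT
SELECT PurchaseOrderID, TotalDue FROM dbo.NewResults;

-- records introduced by the change
SELECT PurchaseOrderID, TotalDue FROM dbo.NewResults
EXCEPT
SELECT PurchaseOrderID, TotalDue FROM dbo.BaselineResults;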

Perform refactoring
    
Refactoring, the way to (continuous) code improvement, should become part of each developer’s philosophy about programming. A query, like any other piece of code, is rarely perfect, as technical and factual knowledge is relative, features get deprecated and new techniques are introduced. On the other side, there is an old saying in IT – don’t change something that’s already working – so a balance should be kept between the two: the apparent stability and the need for change.

Document
    
I hope there’s no need to stress the importance of documentation. From versioning to logic description, it’s a good practice to document the important parts of your work, especially the parts that will facilitate later work.

20 June 2010

💎SQL Reloaded: Troubleshooting Query Issues II (Errors)

Some weeks ago I wrote a post on troubleshooting, looking at the topic from a problem solving perspective. In what concerns the troubleshooting of SQL scripts, it depends also on the type of DML statement used and on the type of issues the developer deals with – errors thrown by the database engine, wrong number of records returned by a query, performance, accessibility or reusability related issues.

Many of the errors normally come from wrong usage of syntax and functionality or from misspellings; in many situations the database query engine provides enough information to lead the developer closer to the issue’s resolution, at least by narrowing the range of search, and here the problem solving approach could be useful. In such situations it is useful to do a quick search in the local/online documentation or resources using the error number or the full/partial error message. I know that’s not rocket science, though I’ve seen people acting as if hypnotized by the error message, not knowing what to do next. A quick search using a powerful search engine like Google.com, Bing.com or Yahoo.com could lead you to the solution or help you gather more data about the possible causes. More likely there was somebody in the same situation who posted a question or even the answer somewhere in a forum or a blog. It’s always a good idea and time-effective to do a little research before posting a new question in a forum. Especially among beginners there are people who don’t know how to make use of the documentation, help tools or even a search engine, therefore it could be useful for them to take some time and look at some tips on how to make better use of such tools.

In general, when an error is thrown, an error number that has a specific meaning for the vendor is provided, together with a general description, and in fortunate cases also the line where it occurs. For example, in SQL Server the execution of the below query based on the AdventureWorks database will return the following error message:

SELECT * 
FROM dbo.Purchasing.PurchaseOrderHeader POH 

Msg 208, Level 16, State 1, Line 1
Invalid object name 'dbo.Purchasing.PurchaseOrderHeader'


Even if in this case the error was created deliberately by using the name of a schema (e.g. dbo) instead of the database’s name, this doesn’t mean that the cause/solution is always so simple. For example, the same error message could be thrown by an issue related to KB837970, KB310882, KB289551 and several other articles that can be found by using the following search text in Google:
site:support.microsoft.com Microsoft.com "Msg 208, Level 16"

Unfortunately, in what concerns the errors appearing in queries, the localization of errors is not so precise, depending also on the order in which the query is parsed by the engine; thus the issue might be shown as occurring at a line other than its actual location, namely at the line where the query parsing finds an issue.
  
When the error message is in a language other than English, it makes sense to try to find the corresponding error message in English and use it in the search because, as a quite important volume of documentation and other types of information are in English, the chances to find something are higher. Normally that can be done by using the error code, which in theory should have the same meaning independently of the language used.

Sometimes, when you didn’t know how to do something and just used your inspiration and tried to use some functionality without being fully aware of its implications and constraints, it makes sense to consult the documentation or do a search on the web on how to do a certain task (e.g. how to load data from Oracle to SQL Server). 10 minutes spent on research could save you a few hours later, so don’t jump blindly into coding! In many situations it could happen that a solution is already available, and if you find the code, please consider not using it blindly; try to understand the how and why, its limitations and how it could eventually be improved, and some thorough testing might be necessary too.

Don’t try to reinvent the wheel just for the sake of it; in what concerns programming, there are more likely others who had the same problem and found a neat solution to it. This doesn’t mean that you’ll have to do that for every problem you have, but rather for tasks of higher complexity than the tasks you are accustomed to, the need for it decreasing with the level of expertise. Doing some research on a given topic can be time consuming, but could prove to be useful, especially when other developers used other techniques. Actually, once you understood the idea, you could attempt to reconstruct the logic with your own coding and styling conventions, though of course it doesn’t always make sense to do that.

Note:
You’ll have to consider also copyright infringements, but in theory the developers who posted code on the web for further reuse want only some minimal credit for that (e.g. posting a link to the website where the code is found, mentioning the developer’s name, etc.), while in some situations you may reuse the code only if you add substantial changes to it. Both statements apply not only to code but also to other types of work (content) available freely on the web (e.g. articles, tutorials, etc.).

In case the search was unsuccessful, it makes sense to post your question on a forum, specifically in the area closest to your problem. Posting a question in the wrong forum has quite high chances of leading nowhere! Some forum members are quite critical of users who post questions without doing some research, especially when there is already an answer available on the same forum. On forums, users have to be aware that they are not only wasting their own time, but also the readers’ time. On the other side, the more places where the same question is posed, the higher the chances for others to reach them and eventually get an answer, this depending also on how search engines index the web pages.
 
When the same error appears for the second time, in theory the developer should already know how to solve it or its possible cause(s). Because the solutions are not always so simple to remember, as it is in human nature to forget, it makes sense to document the issues by describing the problem and the solutions (preferably with code) to such a level that they can be understood also by other professionals.

When the number of errors occurring in your SQL scripts is quite high, especially when the same type of error occurs repeatedly, then maybe it makes sense to take a step back and ask yourself why that happens. Sometimes it just means that it is time to take a break; though, more obviously, it is maybe needed to review the theory/documentation, make use of defensive programming techniques or use an editor with powerful IntelliSense functionality that checks the syntax and highlights the errors as you type. If you are a beginner, don’t worry, with time coding and troubleshooting become much easier.

All the above ideas are simple common knowledge among developers, though such knowledge comes from (prolonged) experience – a combination of learning by doing and solving issues – “the school of hard knocks”. I just hope the above lines and the coming posts will be helpful to you.

17 August 2009

🛢DBMS: Query Optimizer (Definitions)

"SQL Server code that analyzes queries and database objects and selects the appropriate query plan. The SQL Server optimizer is a cost-based optimizer. It estimates the cost of each permutation of table accesses in terms of CPU cost and I/O cost." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A SQL server tool that formulates an optimum execution plan for a query." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"The SQL Server component responsible for generating the optimum execution plan for a query." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"The SQL Server database engine component responsible for generating efficient execution plans for SQL statements." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A term applied to a process, within a database engine, that attempts to find the fastest method of executing a SQL command against a database." (Gavin Powell, "Beginning Database Design", 2006)

"This is the component in SQL Server that analyzes your queries, compares them with available indexes, and decides which index will return a result set the fastest." (Joseph L Jorden & Dandy Weyn, "MCTS Microsoft SQL Server 2005: Implementation and Maintenance Study Guide - Exam 70-431", 2006)

"An optimization process running within SQL Server. Any queries submitted to SQL Server are first processed by the query optimizer. It determines the best way to run the query, including what indexes to use and what types of joins to use. The output is a query execution plan, sometimes called a query plan or just a plan." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"A process that generates query plans. For each query, the optimizer generates a plan that matches the query to the index that will return results as efficiently as possible. The optimizer reuses the query plan each time the query runs. If a collection changes significantly, the optimizer creates a new query plan." (MongoDb, "Glossary", 2008)

"The Optimizer is an internal technology that is responsible for selecting the most efficient means to accessing or altering information. It uses detailed statistics about the database to make the right decision." (Robert D Schneider & Darril Gibson, "Microsoft SQL Server 2008 All-in-One Desk Reference For Dummies", 2008)

"A part of a DBMS that examines a nonprocedural data manipulation request and makes a determination of the most efficient way to process that request." (Jan L Harrington, "SQL Clearly Explained" 3rd Ed., 2010)

"The component of a relational database system responsible for analyzing SQL queries and producing optimal access paths for retrieving data from the database." (Craig S Mullins, "Database Administration", 2012)

"A component of the SQL and XQuery compiler that chooses an access plan for a data manipulation language statement by modeling the execution cost of many alternative access plans and choosing the one with the minimal estimated cost." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"Built-in database software that determines the most efficient way to execute a SQL statement by considering factors related to the objects referenced and the conditions specified in the statement." (Oracle)

"The MySQL component that determines the best indexes and join order to use for a query, based on characteristics and data distribution of the relevant tables." (MySQL)

28 June 2009

🛢DBMS: Pass-Through Query (Definitions)

"A query that is passed uninterpreted to an external server for evaluation. The result set returned by a pass-through query can be used in the FROM clause of a query like an ordinary base table." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A query that is passed through uninterrupted to an external database engine." (Marilyn Miller-White et al, "MCITP Administrator: Microsoft® SQL Server™ 2005 Optimization and Maintenance 70-444", 2007)

"An SQL-specific query you use to send commands directly to an ODBC database server." (Microsoft, "SQL Server 2012 Glossary", 2012)


10 June 2009

🛢DBMS: Correlated Subquery (Definitions)

 "A subquery that cannot be evaluated independently, but depends on the outer query for its results. Also called a repeating subquery, since the subquery is executed once for each row that might be selected by the outer query. See also nested query." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A repeating subquery. Many queries can be evaluated by executing the subquery once and substituting the resulting value or values into the WHERE clause of the outer query. In queries that include a correlated subquery, the subquery depends on the outer query for its values. This means that the subquery is executed repeatedly, one time for each row that is selected by the outer query." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A subquery that references a column in an outer statement. The inner query is executed for each candidate row in the outer statement." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A subquery that contains a reference to a column in the main, or parent, query." (Bob Bryla, "Oracle Database Foundations", 2004)

"A subquery that a DBMS cannot process completely before turning to the outer query. The DBMS must execute the subquery repeatedly for every row in the outer query." (Jan L Harrington, "SQL Clearly Explained 3rd Ed. ", 2010)

"A subquery that contains a correlated reference." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

🛢DBMS: Ad-hoc Query (Definitions)

"A query created for immediate execution. You can create an ad hoc query from scratch or by modifying an existing query that is saved in a text file." (Patrick Dalton, "Microsoft SQL Server Black Book", 1997)

"A query consisting of dynamically constructed SQL. Desktop query tools are often used to construct ad hoc queries. The opposite of a static query." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"An original or unplanned query that is used for in-depth analysis or to solve a specific problem." (Margaret Y Chu, "Blissful Data ", 2004)

"Any query that can’t be determined (isn’t stored and reused) prior to the moment the query is issued. It’s usually dynamically constructed Structured Query Language (SQL), often by desktop-resident query tools." (Sharon Allen & Evan Terry, "Beginning Relational Data Modeling" 2nd Ed., 2005)

"A query sent to a database by an end-user or power user, just trying to get some information quickly. Ad-hoc queries are subjected to a database where the content, structure, and performance of said query, are not necessarily catered for by the database model. The result could be a performance problem, and in extreme cases, even an apparent database halt." (Gavin Powell, "Beginning Database Design", 2006)

"A query issued infrequently, or on an as-needed basis." (Robert D Schneider & Darril Gibson, "Microsoft SQL Server 2008 All-in-One Desk Reference For Dummies", 2008)

"A query that is executed infrequently. Ad hoc queries can sometimes be troublesome because untrained users can create queries that are inefficient, causing the server to suffer significant performance degradation." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"A query issued infrequently, or on an as-needed basis. Typically ad hoc queries are issued against remote data sources from a SQL Server, but can also be issued against SQL Server from a wide variety of sources. If you have a need to issue queries against a remote data source more often, linked servers are created to simplify the syntax of the ad hoc query." (Robert D. Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"A query constructed and executed to answer an immediate and unanticipated question or need, in contrast to a planned query. For example, a dynamic SQL SELECT statement against a relational database, constructed by a knowledge worker using an English-like or point-and-click interface of a desktop-resident Business Intelligence tool. The data returned may drive further analysis and reporting." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A query constructed and executed to answer an immediate and unanticipated question or need." (Craig S Mullins, "Database Administration", 2012)

"Any spontaneous or unplanned question or query. It is a query that consists of dynamically constructed SQL and is one capability in a data-driven DSS." (Ciara Heavin & Daniel J Power, "Decision Support, Analytics, and Business Intelligence" 3rd Ed., 2017)

🛢DBMS: Query (Definitions)

"1. A request for the retrieval of data with a select statement. 2. Any SQL statement that manipulates data." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A specific request for data retrieval, modification, or deletion." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A SELECT statement." (Peter Gulutzan & Trudy Pelzer, "SQL Performance Tuning", 2002)

"A request for information based on the value of certain fields against a database or a data warehouse." (Margaret Y Chu, "Blissful Data ", 2004)

"A SQL SELECT statement that extracts data from a database." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"The mechanism to get data out of a database. A query is comprised of constraints used to filter data out of the results, and defines the data elements to be included in the result set and possibly some mathematical computations, grouping, or sorting of the data." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"A computing function that requests data from the database, stating the parameters and constraints for the request." (Paulraj Ponniah, "Data Warehousing Fundamentals for IT Professionals", 2010)

"(1) A SQL SELECT statement written and issued against a database. (2) To request data from a database." (Craig S Mullins, "Database Administration", 2012)

"An operation that retrieves data from tables or views. For example, SELECT * FROM employees is a query." (Oracle, "Database SQL Tuning Guide Glossary", 2013)

"A procedure executed by a computer program in search of qualified data" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"A request for information from a database that is based on specific conditions: for example, a request for a list of all customers in a customer table whose balances are greater than USD1000. A component of certain SQL or XQuery statements that specifies a result set." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"A type of question accepted by a database about the data it holds. A complex query may ask the database to select records according to some criteria, aggregate certain quantities across those records, etc. Many databases accept queries in the specialised language SQL or dialects of it." (Open Data Handbook)

30 May 2009

🛢DBMS: Simple Protocol and RDF Query Language (Definitions)

"A simple query language for accessing RDF structures. As the majority of the query languages developed within a Web context, SPARQL is based on a strict ‘pattern-matching’ approach, which means that no inference facilities are directly associated with SPARQL. As the majority of the Web query languages, SPARQL makes use of a SQL-like format, employing then operators in the style of SELECT and WHERE." (Gian P Zarri, "RDF and OWL for Knowledge Management", 2011)

"An RDF query language standardized by the World Wide Web Consortium (W3C)." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"SPARQL is an RDF query language standardized by the World Wide Web Consortium (W3C). The acronym stands for SPARQL Protocol and RDF Query Language." (Michael Fellmann et al, "Supporting Semantic Verification of Process Models", 2012)

"An RDF query language; its name is a recursive acronym that stands for SPARQL Protocol and RDF Query Language." (Mahdi Gueffaz, "ScaleSem Approach to Check and to Query Semantic Graphs", 2015)

"An SQL-like, RDF query language and a recommendation by W3C, developed to manipulate and query the data stored in RDF format." (T R Gopalakrishnan Nair, "Intelligent Knowledge Systems", 2015)

"Is an RDF query language, that is, a semantic query language for databases, able to retrieve and manipulate data stored in Resource Description Framework format." (Fu Zhang et al, "A Review of Answering Queries over Ontologies Based on Databases", 2016)

"Is an RDF query language, that is, a semantic query language for databases, able to retrieve and manipulate data stored in Resource Description Framework format." (Fu Zhang & Haitao Cheng, "A Review of Answering Queries over Ontologies Based on Databases", 2016)

"SPARQL (Simple Protocol and RDF Query Language) is an RDF query language which is a W3C recommendation. SPARQL contains capabilities for querying required and optional graph patterns along with their conjunctions and disjunctions." (Hairong Wang et al, "Fuzzy Querying of RDF with Bipolar Preference Conditions", 2016)

"SPARQL can be used to express queries across diverse data sources, whether the data is stored natively as RDF or viewed as RDF via middleware. SPARQL contains capabilities for querying required and optional graph patterns along with their conjunctions and disjunctions." (Jingwei Cheng et al, "RDF Storage and Querying: A Literature Review", 2016)

"SPARQL (pronounced 'sparkle', a recursive acronym for SPARQL protocol and RDF query language) is an RDF query language, that is, a semantic query language for databases, able to retrieve and manipulate data stored in resource description framework (RDF) format." (Senthil K Narayanasamy & Dinakaran Muruganantham, "Effective Entity Linking and Disambiguation Algorithms for User-Generated Content (UGC)", 2018)

"SPARQL (Simple Protocol and RDF Query Language) is an RDF query language which is a W3C recommendation. SPARQL contains capabilities for querying required and optional graph patterns along with their conjunctions and disjunctions." (Zongmin Ma & Li Yan, "Towards Massive RDF Storage in NoSQL Databases: A Survey", 2019)

"It is a query language on documents described in RDF." (Antonio Sarasa-Cabezuelo & José Luis Fernández-Vindel, "A Model for the Creation of Academic Activities Based on Visits", 2020)

"The SPARQL query language is a structured language for querying RDF data in a declarative fashion. Its core function is subgraph pattern matching, which corresponds to finding all graph homomorphism in the data graph for a query graph." (Kamalendu Pal, "Ontology-Assisted Enterprise Information Systems Integration in Manufacturing Supply Chain", 2020)

"Query language used to access and retrieve RDF data distributed in different geographical locations." (Janneth Chicaiza, "Leveraging Linked Data in Open Education", 2021)

"It is used for querying data in RDF format, in a similar way that SQL is used to query relational databases. SPARQL is a standard created and maintained by the World Wide Web Consortium. SPARQL is useful for getting data out of linked databases as an alternative to a more specific API." (Data.Gov.UK)

"A query language similar to SQL, used for queries to a linked-data triple store." ("Open Data Handbook")

01 May 2009

🛢DBMS: Structured Query Language (Definitions)

"A database query and programming language, commonly pronounced 'sequel'. SQL Server supports an ANSI-standard SQL definition for all computer systems, in addition to T-SQL (Transact-SQL), SQL's own version of the SQL language." (Owen Williams, "MCSE TestPrep: SQL Server 6.5 Design and Implementation", 1998)

"A database query and programming language originally developed by IBM for mainframe computers. It is widely used for accessing data and for querying, updating, and managing relational database systems. There is now an ANSI-standard SQL definition for all computer systems." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)

"A language used to insert, retrieve, modify, and delete data in a relational database. SQL also contains statements for defining and administering the objects in a database." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)

"A language used to insert, retrieve, modify, and delete data in a relational database. SQL also contains statements for defining and administering the objects in a database. SQL is the language supported by most relational databases, and is the subject of standards published by the International Standards Organization (ISO) and the American National Standards Institute (ANSI). SQL Server 2000 uses a version of the SQL language called T-SQL." (Thomas Moore, "EXAM CRAM™ 2: Designing and Implementing Databases with SQL Server 2000 Enterprise Edition", 2005)

 "An industry standard English-like language for building and manipulating relational databases." (Rod Stephens, "Beginning Database Design Solutions", 2008)

"Originally developed by IBM, this is a standards-based language that allows access to information stored in a relational database. See also Transact-SQL." (Robert D. Schneider and Darril Gibson, "Microsoft SQL Server 2008 All-In-One Desk Reference For Dummies", 2008)

"The language understood by RDBMSs and used to create and manage database objects and perform data manipulations (INSERT, UPDATE, DELETE, and SELECT queries). SQL Server uses a version of the SQL language called Transact-SQL." (Jim Joseph, "Microsoft SQL Server 2008 Reporting Services Unleashed", 2009)

"The programming language used to access data stored in a relational database." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"A standard computer language designed to query any OLTP database, regardless of operating system." (Ken Withee, "Microsoft® Business Intelligence For Dummies®", 2010)

"A database query and programming language widely used for accessing, querying, updating, and managing data in relational database systems." (Microsoft, "SQL Server 2012 Glossary", 2012)

"A standardized language for defining and manipulating data in a relational database." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

16 March 2009

🛢DBMS: Query Plan [QP] (Definitions)

"The ordered set of steps required to carry out a query, complete with the access methods chosen for each table." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)

"A portion of a DBMS that determines the most efficient sequence of relational algebra operations to use to satisfy a query." (Jan L Harrington, "Relational Database Design and Implementation" 3rd Ed., 2009)

"The plan produced by an optimizer for processing a query." (S. Sumathi & S. Esakkirajan, "Fundamentals of Relational Database Management Systems", 2007)

"A query plan is a sequence of logical and physical operators and data flows that the SQL query optimizer returns for use by the query processor to retrieve or modify data." (Michael Coles, "Pro T-SQL 2008 Programmer's Guide", 2008)

"Once the query optimizer determines the best way to execute a query, it creates a query plan. This identifies all the elements of the query, including what indexes are used, what types of joins are employed, and more." (Darril Gibson, "MCITP SQL Server 2005 Database Developer All-in-One Exam Guide", 2008)

"A sequence of logical and physical operators and data flows that the SQL query optimizer returns for use by the query processor to retrieve or modify data." (Miguel Cebollero et al, "Pro T-SQL Programmer’s Guide" 4th Ed., 2015)

[adaptive query plan:] "An execution plan that changes after optimization because run-time conditions indicate that optimizer estimates are inaccurate. An adaptive query plan has different built-in plan options. During the first execution, before a specific subplan becomes active, the optimizer makes a final decision about which option to use. The optimizer bases its choice on observations made during the execution up to this point. Thus, an adaptive query plan enables the final plan for a statement to differ from the default plan." (Oracle)

[default plan:] "For an adaptive plan, the execution plan initially chosen by the optimizer using the statistics from the data dictionary. The default plan can differ from the final plan." (Oracle)

[execution plan:] "The combination of steps used by the database to execute a SQL statement. Each step either retrieves rows of data physically from the database or prepares them for the session issuing the statement." (Oracle)

[query execution plan:] "The set of decisions made by the optimizer about how to perform a query most efficiently, including which index or indexes to use, and the order in which to join tables." (MySQL)

31 January 2009

🛢DBMS: Static SQL (Definitions)

[static query:] "A previously saved query that is optimized for a particular execution plan. The opposite of an ad hoc query." (Microsoft Corporation, "Microsoft SQL Server 7.0 Data Warehouse Training Kit", 2000)

"SQL that has been compiled prior to execution." (Ajay Gupta et al, "Informix Dynamic Server 11", 2007)

"SQL statements in an application that do not change at runtime and, therefore, can be hard-coded into the application." (John Goodson & Robert A Steward, "The Data Access Handbook", 2009)

[static embedded SQL:] "Embedded SQL in which the entire SQL statement can be specified when the program is written, allowing the statement to be precompiled before the program is executed." (Jan L Harrington, "SQL Clearly Explained" 3rd Ed., 2010)

[static query:] "A stored, precompiled SQL query, optimized for access against a particular database design." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

"A style of embedded SQL in which the SQL statements do not change while the application is running." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)

"SQL statements, embedded within a program, that are prepared during the program preparation process (before the program is executed). After being prepared, the SQL statement does not change (although values of host variables specified by the statement might change." (Kirsten A Larsen et al, "Extremely pureXML in DB2 10 for z/OS", 2011)

"SQL queries that are preprocessed and whose access paths are determined during the bind procedure, prior to execution." (Craig S Mullins, "Database Administration", 2012)

"SQL statements that are embedded within a program and are bound before the program is executed. After being bound, a static SQL statement does not change, although values of host variables specified by the statement can change. " (Sybase, "Open Server Server-Library/C Reference Manual", 2019)

"A program where access to the database has been predetermined by the programmer and any input from the user will not change this access pattern. Put another way, all SQL statements are already part of the program when it is executed." (Microfocus)

"A type of embedded SQL in which SQL statements are hard-coded and compiled when the rest of the program is compiled." (Microsoft) 

"It is called static SQL because the SQL statements in the program are static; that is, they do not change each time the program is run."  (Microsoft) [source]

