
05 November 2018

SQL Server Administration: End of Life for 2008 and 2008 R2 Versions

SQL Server 2008 and 2008 R2 are rapidly approaching their end of support - July 9, 2019. Besides an upgrade to a newer version, there is also the option of migrating to an Azure SQL Server Managed Instance, which offers near 100% compatibility with an on-premises SQL Server installation, or to an Azure VM (see Franck Mercier’s post).

If you aren’t sure which versions of SQL Server you have in your organization, here’s a script that can be run on all SQL Server 2005+ installations via SQL Server Management Studio:

SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') ComputerName
 , SERVERPROPERTY('Edition') Edition
 , SERVERPROPERTY('ProductVersion') ProductVersion
 -- match on the major version prefix, as the exact build number varies with the Service Pack/CU installed
 , CASE 
     WHEN Cast(SERVERPROPERTY('ProductVersion') as nvarchar(20)) LIKE '9.%' THEN 'SQL Server 2005'
     WHEN Cast(SERVERPROPERTY('ProductVersion') as nvarchar(20)) LIKE '10.0%' THEN 'SQL Server 2008' 
     WHEN Cast(SERVERPROPERTY('ProductVersion') as nvarchar(20)) LIKE '10.5%' THEN 'SQL Server 2008 R2' 
     WHEN Cast(SERVERPROPERTY('ProductVersion') as nvarchar(20)) LIKE '11.%' THEN 'SQL Server 2012' 
     WHEN Cast(SERVERPROPERTY('ProductVersion') as nvarchar(20)) LIKE '12.%' THEN 'SQL Server 2014' 
     WHEN Cast(SERVERPROPERTY('ProductVersion') as nvarchar(20)) LIKE '13.%' THEN 'SQL Server 2016'
     WHEN Cast(SERVERPROPERTY('ProductVersion') as nvarchar(20)) LIKE '14.%' THEN 'SQL Server 2017'
     ELSE 'unknown'
   END Product 


Resources:
[1] Microsoft (2018) How to determine the version, edition, and update level of SQL Server and its components [Online] Available from: https://support.microsoft.com/en-us/help/321185/how-to-determine-the-version-edition-and-update-level-of-sql-server-an
[2] Microsoft Blogs (2018) SQL Server 2008 end of support, by Franck Mercier [Online] Available from: https://blogs.technet.microsoft.com/franmer/2018/11/01/sql-server-2008-end-of-support-2/

18 June 2017

SQL Server Administration: Database Recovery on SQL Server 2017

Today I installed SQL Server 2017 CTP 2.1 on my Lab PC without any apparent problems. It was time to recreate some of the databases I used for testing. As I previously had an evaluation version of SQL Server 2016, it expired without my having a backup for one of the databases. I could have recreated the database from scripts and reloaded the data from various text files, a relatively laborious task (estimated time > 1 hour), though the chances were pretty high that everything would go smoothly. As the database is relatively small (about 2 GB) and the possible data loss was negligible, I thought it should be possible to recover the data from the database with minimal loss in less than half an hour. I knew this was possible, as I had been forced a few times in the past to recover data from damaged databases in SQL Server 2005, 2008 and 2012 environments, though being in a new environment I wasn’t sure how smoothly it would go and how long it would take.

Plan A - Create the database with the ATTACH_REBUILD_LOG option:

The option seems to be available in SQL Server 2017 as well, so I attempted to create the database via the following script:
 
CREATE DATABASE <database_name> ON 
(FILENAME='I:\Data\<database_name>.mdf') 
FOR ATTACH_REBUILD_LOG 

And as expected I ran into the first error:
Msg 5120, Level 16, State 101, Line 1
Unable to open the physical file "I:\Data\<database_name>.mdf". Operating system error 5: "5(Access is denied.)".
Msg 1802, Level 16, State 7, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

It looked like a permissions problem, though I wasn’t entirely sure which account was causing it. In the past I’d had problems with the Administrator account, so that was the first thing to try. Once I removed the Administrator account’s permissions to the folder containing the database and granted it full control again, I tried to create the database anew using the above script, running into the next error:

File activation failure. The physical file name "D:\Logs\<database_name>_log.ldf" may be incorrect. The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.
Msg 1813, Level 16, State 2, Line 1
Could not open new database '<database_name>'. CREATE DATABASE is aborted.

This approach seemed to lead nowhere, so it was time for Plan B.

Plan B - Recover the database into an empty database with the same name:

Step 1: Create a new database with the same name, stop the SQL Server service, copy the old .mdf file over the new one, and delete the new log file manually. Then restart the service. After the restart the database will appear in Management Studio in the SUSPECT state.
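For reference, a minimal sketch of this step, assuming a default instance and the file locations from above (the service commands are illustrative and run from an elevated command prompt, not from SSMS):

-- Step 1 sketch: create an empty shell database with the same name
CREATE DATABASE <database_name>
-- stop the SQL Server service (e.g. NET STOP MSSQLSERVER),
-- copy the old I:\Data\<database_name>.mdf over the newly created file,
-- delete the new D:\Logs\<database_name>_log.ldf, then restart the service
-- (e.g. NET START MSSQLSERVER); the database will come up as SUSPECT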

Step 2:
Set the database in EMERGENCY and SINGLE_USER mode:

ALTER DATABASE <database_name> SET EMERGENCY, SINGLE_USER

Step 3:
Rebuild the log file:

ALTER DATABASE <database_name> 
REBUILD LOG ON (Name='<database_name>_Log', 
FileName='D:\Logs\<database_name>_log.ldf')

The rebuild worked without problems.

Step 4: Set the database in MULTI_USER mode:

ALTER DATABASE <database_name> SET MULTI_USER 

Step 5:
Perform a consistency check:

DBCC CHECKDB ('<database_name>') WITH ALL_ERRORMSGS, NO_INFOMSGS 

After 15 minutes of work the database was back online.

Warnings:
Always attempt to recover the data for production databases from the backup files! Use the above steps only if there is no other alternative!
The consistency check might return errors. In that case one might need to run CHECKDB with the REPAIR_ALLOW_DATA_LOSS option several times [2], until the database is repaired (see the sketch after this list).
After recovery there can be problems with user access. It might be necessary to delete the users from the recovered database and reassign their permissions!
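As a reference, a minimal sketch of such a repair run, assuming the placeholder name used above; REPAIR_ALLOW_DATA_LOSS can discard data to restore consistency, so treat it strictly as a last resort:

-- last-resort repair (sketch): may discard data in order to restore consistency
ALTER DATABASE <database_name> SET SINGLE_USER
DBCC CHECKDB ('<database_name>', REPAIR_ALLOW_DATA_LOSS) WITH ALL_ERRORMSGS, NO_INFOMSGS
ALTER DATABASE <database_name> SET MULTI_USER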

Resources:
[1] In Recovery (2008) Creating, detaching, re-attaching, and fixing a SUSPECT database, by Paul S Randal [Online] Available from: https://www.sqlskills.com/blogs/paul/creating-detaching-re-attaching-and-fixing-a-suspect-database/ 
[2] In Recovery (2009) Misconceptions around database repair, by Paul S Randal [Online] Available from: https://www.sqlskills.com/blogs/paul/misconceptions-around-database-repair/
[3] Microsoft Blogs (2013) Recovering from Log File Corruption, by Glen Small [Online] Available from: https://blogs.msdn.microsoft.com/glsmall/2013/11/14/recovering-from-log-file-corruption/

09 April 2011

SQL Reloaded: Table Value Constructors at Work

    Paradoxically, the first new SQL Server 2008 feature I learned, soon after the first CTP was made available, was the use of table value constructors, yet I managed all this time without using them. This happened because I was either working on previous SQL Server versions or because other techniques were more appropriate for the problems I had to solve. So, a few days ago, the use of table value constructors came up again when I was weighing the best approach to a problem posed by a colleague. The problem is actually quite simple – given a set of pairs (id, new value), a table needs to be updated with the “new value” corresponding to the given “id”. Previously, in similar cases, I preferred to load the data into a staging table because this allowed me to quickly perform several checks on the set of values (e.g. check for duplicates, identify how many records will be updated). But what do you do when you are working directly in a production or any other type of environment in which you don’t have the possibility of using a staging table?

    For such cases you can either create a dataset on the fly with a set of UNIONs (sketched below) and reuse it in queries or create an UPDATE statement for each pair of values. You can even automate the creation of the UNION and UPDATE statements, though their use doesn’t seem so elegant, especially when you can use table value constructors. The feature seems to be hidden in the documentation; as a matter of fact, I found it only in the online documentation, the local help having no reference to it. Anyway, excepting a few posts on the same subject, I can’t say I have seen any code until now making use of it. What’s interesting to note, before going into details, is that ANSI SQL 94 (the earliest version of the documentation I have) mentions table value constructors, if vaguely (p. 221), therefore I expect they are part of the ANSI standard.
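For contrast, here is a minimal sketch of the pre-2008 alternative mentioned above, a derived table built with UNION ALL (the values are illustrative):

-- the older alternative: a dataset built on the fly with UNION ALL
SELECT EmployeeId 
, VacationHours 
FROM ( SELECT 101 EmployeeId, '23' VacationHours 
       UNION ALL SELECT 102, '82' 
       UNION ALL SELECT 103, '66') Data 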

   According to MSDN, a table value constructor allows a set of row value expressions to be constructed into a table within a single DML statement. As can be seen from the MSDN examples, the list of values can contain a combination of lists of values and queries, as long as the data types match and only a single scalar value is provided per row value expression. Another limitation concerns the maximum number of rows constructed, namely 1000. It’s important to know these limitations, because they might force you to use other approaches. Before sketching a solution for this post’s problem, let’s look at how a table can be constructed on the fly given a list of pairs of values:

-- example table value constructor 
SELECT EmployeeId 
, VacationHours 
FROM ( VALUES (101, '23') 
     , (102, '82') 
     , (103, '66') 
     , (104, '17') 
     , (105, '64') 
     , (106, '56') 
     , (107, '107') 
     , (108, '50') 
     , (109, '109') 
     , (110, '48')) AS Data(EmployeeId, VacationHours) 

    The table thus constructed can be further used in an UPDATE statement, the following example being based on the HumanResources.Employee table from the AdventureWorks database.

-- example table value constructor update 
UPDATE HumanResources.Employee 
SET VacationHours = Data.VacationHours 
FROM ( VALUES (101, '23') 
      , (102, '82') 
      , (103, '66') 
      , (104, '17') 
      , (105, '64') 
      , (106, '56') 
      , (107, '107') 
      , (108, '50') 
      , (109, '109') 
      , (110, '48')) AS Data(EmployeeId, VacationHours) 
WHERE HumanResources.Employee.EmployeeId = Data.EmployeeId 

    A similar construct can be used to perform a SELECT, UPDATE or DELETE in which multiple columns participate in the JOIN. For example, let’s consider a SELECT based on a set of pairs of values from the Sales.SalesOrderHeader table (a DELETE variant is sketched after it):

-- example table value constructor
SELECT SOH.* 
FROM Sales.SalesOrderHeader SOH 
     JOIN ( -- set of pairs 
        VALUES(10, 5069) 
         , (10, 6070) 
         , (10, 7060) 
         , (11, 407) 
         , (11, 1603) 
         , (11, 2737)) AS Data(CustomerId, CurrencyRateID) 
      ON SOH.CustomerId = Data.CustomerID 
    AND SOH.CurrencyRateID = Data.CurrencyRateID 
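And, for completeness, a minimal sketch of the DELETE variant mentioned above, using the same kind of pairs (run such statements only against disposable test data):

-- example table value constructor delete (sketch)
DELETE SOH 
FROM Sales.SalesOrderHeader SOH 
     JOIN ( VALUES(10, 5069) 
          , (10, 6070)) AS Data(CustomerId, CurrencyRateID) 
       ON SOH.CustomerId = Data.CustomerId 
      AND SOH.CurrencyRateID = Data.CurrencyRateID 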

    As can be seen, table value constructors are pretty simple to use, especially when dealing with a small number of values. Whether they are the best choice depends on the problem you are trying to solve and its specific constraints!

30 January 2011

Resources: SQL Server

    During the past weeks I found several interesting learning resources on SQL Server 2008 topics, so here they are:

    For those interested in upgrading their knowledge to the SQL Server 2008 R2 edition, check the Microsoft SQL Server 2008 R2 Update for Developers Training Course videos on Channel9. They cover topics like Location Awareness, CLR Integration, StreamInsight, Reporting Services, Application and Multi-Server Management, improvements for BLOBs and T-SQL, and tools for Excel 2010 or SharePoint 2010.

    The ER diagrams of SQL Server’s DMVs and DMFs are available for download as PDF at SQL Server 2008 Systems Views Map, and the same for SQL Server 2005, respectively SQL Server 2008 R2. Even if not all objects and possible relations are shown, they could still save you a lot of effort, as the diagrams depict the relations between the most important entities and the cardinality existing between them, plus the additional constraints participating in the joins. The entities represented: Objects, Types and Indexes, Linked Servers, CLR, Database Mirroring, Service Broker, Resource Governor, Transactions, Databases and Storage, Traces and Events, Execution Environment, Server information.
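As a tiny illustration of the kind of joins these diagrams document, here is a minimal sketch relating two of the mapped catalog views (sys.objects and sys.indexes):

-- indexes per user table, joining two of the mapped system views
SELECT OBJ.name TableName 
, IND.name IndexName 
FROM sys.indexes IND 
     JOIN sys.objects OBJ 
       ON IND.object_id = OBJ.object_id 
WHERE OBJ.type = 'U'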

    There are several free SQL Server books on SQLServerCentral.com coming from Redgate: A. Kuznetsov’s Defensive Database Programming, B. McGehee’s SQL Server Maintenance Plans, How to become an Exceptional DBA, Mastering SQL Server Profiler and Brad’s Sure Guide to SQL Server 2008, J. Magnabosco’s Protecting SQL Server Data, G. Fritchey’s SQL Server Execution Plans and SQL Server Crib Sheet Compendium (coauthored with A. Prasanna), R. Landrum’s SQL Server Tacklebox, plus several ‘Best of SQL Server Central’ volumes (Vol. 1-4, Vol. 5, Vol. 6 and Vol. 7), and several other books to come.

     It seems there are also a few new books on the Scribd website: R. Colledge’s SQL Server Administration in Action, R. Vieira’s Professional Microsoft SQL Server 2008 Programming, M. Lee and M. Mansfield’s SQL Server 2008 Administration Instant Reference, plus many other books that can be retrieved by searching for SQL Server 2008 on the respective site.


    There are also several papers coming from Microsoft and various professionals. It’s worth checking them:
-   K. Lassen’s paper Using Management Data Warehouse for Performance Monitoring, in which he provides several best practices for performance management of SQL Server 2008. Besides an introduction to the Management Data Warehouse features, he also provides an extension of the built-in functionality for indexes, plus several functions and queries.
-  D. Kiely's paper on SQL Server 2008 R2 Security Overview for Database Administrators covers some of the most important security features in SQL Server 2008. There is a similar paper targeting SQL Server 2005.
-  B. Beauchemin’s paper on SQL Server 2005 Security Best Practices - Operational and Administrative Tasks describes the best practices for setting up and maintaining security in SQL Server 2005.
-  SQL Server 2008 Compliance Guide, a paper written to help organizations and individuals understand how to use the features of the Microsoft SQL Server 2008 database software to address their compliance needs.
-  SQL Server 2008 Upgrade Technical Reference Guide, covering the essential phases, steps and best practices to upgrade existing instances of SQL Server 2000 and 2005 to SQL Server 2008.
-  Technet Wiki Checklists on Database Engine Security, Database Engine Connections and Data Access (probably more to come).
-   Also Microsoft maintains a list of White Papers on SQL Server 2008, respectively SQL Server 2005.

05 October 2010

MS Office: The Limitations of MS Access Database

In the previous post I highlighted some general considerations on the use of MS Access and Excel as frameworks for building applications. I left many things out for lack of time and space, therefore, as the title reveals, in this post I will focus simply on the limitations of MS Access considered as a database. I estimated then that Access is a fairly good database, recommending it for 10-20 concurrent users, which could equate, depending on the case, to a total number of users somewhere between 1 and 100. Of course, this doesn’t mean that MS Access can’t do more; it actually supports up to 255 concurrent users, and with a good design that limit could be approached.

Another important limitation regards the size of an Access database, capped at 2GB. This used to be more than sufficient a few years back, though nowadays it’s sometimes the equivalent of a month’s or year’s worth of transactions. I never tried to count how many records an MS Access database could store, though if I remember correctly, a relatively small-to-average table of 1000000 (10^6) records occupies about 100MB; by this logic 2GB could equate to about 20000000 (2*10^7) records, the equivalent of a small-to-average database. Anyway, the numbers are relative: the actual size depends also on the number of objects the database stores and the size of the attributes stored, and even if Access is supposed to have a limit of 2GB, I have met cases in which a database of 1GB crashed a lot and needed to be repaired or backed up regularly. 

Sometimes it could be repaired, other times not; unfortunately, the “recovery” built into MS Access can’t be compared with the recovery available in an RDBMS. That’s ok in the end, even mature databases crash from time to time, though their logs and transaction isolation models allow them to provide high recoverability and reliability, to which add scalability, availability, security and manageability. If none of these are essential for your database solution, then MS Access is ok, though you’ll have to invest effort in each of these areas when you have to raise your standards.

One of the most painful issues when dealing with concurrent data access is transaction processing, which needs to guarantee the consistency and recoverability of operations. As Access does not handle transactions itself, the programmer has to do that using ADO or DAO transactions. As many applications still don’t need pessimistic concurrency, with some effort and good row versioning this issue could also be solved. The security-related issues could likewise be solved programmatically by designing a role-based permission framework, though it could occasionally be breached when a user is aware of the few Access hacks and has direct access to the database. 

Manageability usually comes down to controlling resource utilization and monitoring the progress of the actions running on the database. If Access does a relatively good job in what concerns the manageability of its objects, it has no reliable way to control their utilization; when a query is running for too long, the easiest way to solve this is to simply kill the process belonging to Access. I’m not sure it makes sense to philosophize about Access’ scalability and availability; at least it can’t be compared from this point of view with an RDBMS, for which failover clustering, mirroring, log shipping, online backup and in general online maintenance have an important impact on the two.

Beyond the above theoretical limitations, when MS Access is part of your solution it’s always a good idea to know its maximum capacity specifications, and this applies to all types of databases and technologies. Most probably you don’t want to realize in the middle of your project, or even later, that you have reached one of these limitations. I tried to put together a comparison of the maximum capacity specifications for the 2000, 2007 and 2010 versions of MS Access and, for reference, the same specifications for SQL Server (2000, 2005, 2008 R2). The respective information comes mainly from Microsoft websites, with a few additions from [5] and [6].


Attribute                        | Access 2000 [1] | Access 2007/2010 [2] | SQL Server 2000 [7] | SQL Server 2005 [4] | SQL Server 2008 R2 [3]
SQL statement size               | 64kb            | 64kb                 | 64kb                | 64kb                | 64kb
# characters in Memo field       | 65535           | 65535                | -                   | 2^30-1              | 2^31-1
# characters in Text field       | 255             | 255                  | 8000                | 8000                | 8000
# characters in object name      | 64              | 64                   | 128                 | 128                 | 128
# characters in record           | 4000            | 4000                 | 8000                | 8000                | 8000
# concurrent users               | 255             | 255                  | -                   | -                   | 32767
# databases per instance         | 1               | 1                    | 32767               | 32767               | 32767
# fields in index                | 10              | 10                   | 16                  | 16                  | 16
# fields in recordset            | 255             | 255                  | 4096                | 4096                | 4096
# fields in table                | 255             | 255                  | 1024                | 1024                | 1024/30000
# files per database             | 1               | 1                    | 32767               | 32767               | 32767
# forced relationships per table | 32              | 32                   | 253                 | 253                 | 253
# indexes per table              | 32              | 32                   | 250 (1 clustered)   | 250 (1 clustered)   | 250 (1 clustered)
# instances                      | -               | -                    | 16                  | 50                  | 50
# joins in a query               | 16              | 16                   | 32                  | 32                  | 32
# levels nested queries          | 50              | 50                   | 32                  | 32                  | 32
# nested subqueries              | -               | -                    | 32                  | 32                  | 32
# objects                        | 32768           | 32768                | 2147483647          | 2147483647          | 2147483647
# open tables                    | 2048            | 2048                 | 2147483647          | 2147483647          | 2147483647
# roles per database             | n/a             | n/a                  | 16379               | 16379               | 16379
# tables in a query              | 32              | 32                   | 256                 | 256                 | 256
# users per database             | n/a             | n/a                  | 16379               | 16379               | 16379
database size                    | <2GB            | <2GB                 | 1048516 TB          | 524272 TB           | 524272 TB
file size (data)                 | 2GB             | 2GB                  | 32TB                | 16TB                | 16TB
file size (log)                  | n/a             | n/a                  | 32TB                | 2TB                 | 2TB


To my surprise, the maximum capacity specifications of Access are comparable with those of SQL Server for many of the above attributes. Sure, there is a huge difference in what concerns the number of databases, the database/file size and the number of supported objects, quite relevant in the architecture of applications. Several other differences, for example the number of indexes or relationships supported per table, are less important for the majority of solutions. Another fact not visible in the above table is that the number of records in a table is typically limited by storage. Please note that many important features not available in Access were left out, therefore, for a better overview it is advisable to check the referenced sources directly.

One more personal observation for this post: even if MS Access is great for non-SQL developers given its nice Designer, for SQL developers it lacks a rich editor, the initial formatting being lost; this, doubled by the poor support for later versions of the ANSI standard [8], makes Access a tool to avoid.

References:
[1] Microsoft. (2010). Microsoft Access database specifications. [Online] Available from: http://office.microsoft.com/en-us/access-help/access-specifications-HP005186808.aspx (Accessed: 04.10.2010)
[2] Microsoft. (2010). Access 2010 specifications. [Online] Available from: http://office.microsoft.com/en-us/access-help/access-2010-specifications-HA010341462.aspx (Accessed: 04.10.2010)
[3] MSDN. (2010). Maximum Capacity Specifications for SQL Server: SQL Server 2008 R2. [Online] Available from: http://msdn.microsoft.com/en-us/library/ms143432.aspx (Accessed: 04.10.2010)
[4] MSDN. (2010). Maximum Capacity Specifications for SQL Server: SQL Server 2005. [Online] Available from: http://msdn.microsoft.com/en-us/library/ms143432(SQL.90).aspx (Accessed: 04.10.2010)
[5] SQL Server Helper. (2005). SQL Server 2005: Maximum Capacity Specifications. [Online] Available from: http://www.sql-server-helper.com/sql-server-2005/maximum-capacity-specifications.aspx (Accessed: 04.10.2010)
[6] MSDN. (2008). SQL 2005 and SQL 2008 database volume capacity. [Online] Available from: http://social.msdn.microsoft.com/forums/en-US/sqlgetstarted/thread/4225734e-e480-4b21-8cd4-4228ca2abf55/ (Accessed: 04.10.2010)
[7] MSDN. (2010). Maximum Capacity Specifications for SQL Server: SQL Server 2000. [Online] Available from: http://technet.microsoft.com/en-us/library/aa274604(SQL.80).aspx (Accessed: 04.10.2010)
[8] MSDN. (2010). Comparison of Microsoft Access SQL and ANSI SQL. [Online] Available from: http://msdn.microsoft.com/en-us/library/bb208890.aspx (Accessed: 04.10.2010)

13 March 2010

SSIS: Percentage Sampling Data Flow Transformation

    One of the problems relatively difficult to address in the past was random sampling within a SQL Server data set. Of course, there is always Excel, which could be used for this purpose with just a few tricks, though what do you do when working with a huge source dataset? As we will see below, SSIS 2008’s Percentage Sampling Data Flow Transformation allows addressing this problem easily.
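For comparison, one common pure T-SQL workaround, not part of the SSIS approach described in this post, is sorting by NEWID(); a minimal sketch, assuming the AdventureWorks Production.Product table:

-- a classic T-SQL alternative: pseudo-random 50% sample via NEWID()
SELECT TOP 50 PERCENT * 
FROM Production.Product 
ORDER BY NEWID()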

    Using the template SSIS package defined in the Third Magic Class post, copy-paste the Package.dtsx in the project and rename it (e.g. Package Percentage Sampling.dtsx), then from the Toolbox add a Percentage Sampling Transformation and link it to the OLE DB Source. Access the Percentage Sampling Editor and modify the Percentage of rows value from 10 to 50. It doesn’t really make sense to rename the sample and unselected outputs, though you might need to do that when dealing with multiple Percentage Sampling Transformations.

SSIS - Percentage Sampling Transformation Editor
Note:
    The percentage of rows you’d like to work with depends entirely on the request; in many cases it’s advisable to determine the size of your sample statistically. Given that the number of records in this example is quite small, I preferred to use a medium sample size.

    Link the Percentage Sampling Transformation to the OLE DB Destination and in the Input Output Selection dialog select as Output the ‘Sampling Selected Output’, while in the OLE DB Destination Editor create a new table (e.g. Production.BikesSample). 
 
SSIS - Input Output Selection

   In the last step, before testing the whole package, in the Control Flow tab change the Execute SQL Task’s SQLStatement property to point to the current destination table: 
 
TRUNCATE TABLE [Production].[BikesSample]  

  
SSIS - Percentage Sampling Data Flow

   Save the project, test (debug) the package (twice) and don’t forget to validate the output data: 
  
SELECT * 
FROM [Production].[BikesSample]   
 

Note:
   I was actually expecting 48 or 49 records (97/2 = 48.5) in the output and not 45; I wonder where the difference comes from?! That’s a topic I still have to investigate. I also tried changing the percentage of rows to 25, resulting in an output of 23 records (23*4 = 92), then to 75, resulting in an output of 74 records, respectively to 100, all the records being selected this time. At least the algorithm used by Microsoft partitions the output into complementary datasets (a quick check is sketched below).
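A minimal sanity check for the sample size, assuming the destination table created above:

-- count the sampled records to compare against the expected percentage
SELECT COUNT(*) NumberRecords 
FROM [Production].[BikesSample]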

SSIS: Aggregate Data Flow Transformation

    At first look the SSIS Aggregate Data Flow Transformation doesn’t seem so useful, given that the same functionality could easily be obtained with an aggregate query, which actually has more flexibility in what concerns data manipulation. However, it must not be forgotten that the power of transformations resides in the way they are combined in order to solve a problem, not in their isolated use. On the other side, in order to master the basics it makes sense to create simple examples that include a minimum of information/tasks. It’s also the case of this post; for exemplification I will use the following query based on AdventureWorks’ Purchase Order Header/Detail tables:

SELECT POD.PurchaseOrderDetailID 
, POD.PurchaseOrderID 
, POD.ProductID 
, POD.OrderQty 
, POD.ReceivedQty 
, POD.UnitPrice 
, POD.OrderQty * POD.UnitPrice OrderValue 
, POD.ReceivedQty * POD.UnitPrice ReceiptValue 
, POH.OrderDate 
, POD.DueDate 
FROM Purchasing.PurchaseOrderDetail POD 
   JOIN Purchasing.PurchaseOrderHeader POH 
      ON POD.PurchaseOrderID = POH.PurchaseOrderID 
WHERE POH.Status IN (2, 4) -- 2-Approved, 4-Complete 

   Using the template SSIS package defined in the Third Magic Class post, copy-paste the Package.dtsx in the project and rename it (e.g. Package Aggregate.dtsx), then in the Data Flow Task rename the Data Source (e.g. OLE DB Source POs) and change the SQL command text using the above query. From the Toolbox add an Aggregate Transformation, link it to the OLE DB Source and access the Aggregate Transformation Editor in which, if everything went well until now, a “table” based on the above query will appear; in a first phase check the following attributes: ProductID, OrderQty, ReceivedQty, OrderValue and ReceiptValue. 

    Once an attribute is checked in the table, it will also appear as an Input Column in the Editor, the same name being used for the Output Alias. For ProductID the Editor chose ‘Group by’ as Operation, and ‘Sum’ for OrderValue. Change the Operation to Sum for the other three attributes as in the screenshot below. Also add the OrderDate and, once more, the ProductID as Input Columns; for the first choose ‘Maximum’ as Operation, while for the second ‘Count’, in order to get the Last OrderDate, respectively the Number of Records. 
 
SSIS - Aggregate Transformation Editor

     Once the above changes are completed, delete the existing OLE DB Destination and add another one, link the Aggregate Transformation to it, create a new table (e.g. Purchasing.PurchaseOrderProductAggregation) based on the Aggregate’s input, accept the mappings and test the Data Flow Task: 
 
SSIS - Aggregate Data Flow Task

     In the last step, before testing the whole package, in the Control Flow tab change the Execute SQL Task’s SQLStatement property to point to the current destination table:

TRUNCATE TABLE [Purchasing].[PurchaseOrderProductAggregation]   

   Save the project, test (debug) the package (twice) and don’t forget to validate the output data: 
 
SELECT * 
FROM [Purchasing].[PurchaseOrderProductAggregation]    

     The output from the above table should actually be the same as the output of the following aggregate query:

-- PO Product Aggregation 
SELECT POD.ProductID 
, SUM(POD.OrderQty) OrderQty 
, SUM(POD.ReceivedQty) ReceivedQty 
, SUM(POD.OrderQty * POD.UnitPrice) OrderValue 
, SUM(POD.ReceivedQty * POD.UnitPrice) ReceiptValue 
, Max(POH.OrderDate) LastOrderDate 
, COUNT(1) NumberRecords 
FROM Purchasing.PurchaseOrderDetail POD 
   JOIN Purchasing.PurchaseOrderHeader POH 
      ON POD.PurchaseOrderID = POH.PurchaseOrderID 
WHERE POH.Status IN (2, 4) -- 2-Approved, 4-Complete 
GROUP BY POD.ProductID 
ORDER BY POD.ProductID

Happy Coding!

07 February 2010

SSIS: SQL Server to Oracle Data Export (Second Magic Class)

    In a previous post I described the steps needed in order to export data from SQL Server to Excel; this time I will describe the steps needed in order to export data from SQL Server to Oracle. For this you need the Oracle database product installed on your computer or, in case you already have an Oracle database server on the network, only the Oracle Client installed locally, plus a SQL development tool like SQL Developer or TOAD, with a plus for the latter.

    Like in the previous tutorial, we’ll use the Production.Product table from the AdventureWorks database coming with SQL Server.

Step 1: Create an Oracle User

    If you already have an Oracle User created then you could skip this step, unless you want to create a User only for the current tutorial or for loading other AdventureWorks tables into Oracle. For this we will first create a permanent tablespace (e.g. AdventureWorks), an allocation of space in the database, then create the actual User (e.g. SQLServer) on the just created tablespace, an action that will also create a schema with the same name as the User, and finally grant the User the ‘create session’, ‘create materialized view’ and ‘create table’ privileges.

    Open your SQL development tool of choice and connect to the Oracle database, then type the following statements and run them one by one; do not forget to provide a strong password in the IDENTIFIED BY clause, then create a new connection using the just created User.

 
-- Create TableSpace
CREATE BIGFILE TABLESPACE AdventureWorks
DATAFILE 'AdventureWorks.dat'
SIZE 20M AUTOEXTEND ON; 

-- Create User 
CREATE USER SQLServer 
IDENTIFIED BY <your_password>
DEFAULT TABLESPACE AdventureWorks 
QUOTA 20M ON AdventureWorks; 

-- Grant privileges
GRANT CREATE SESSION
, CREATE MATERIALIZED VIEW
, CREATE TABLE TO SQLServer; 

Step 2
: Start SQL Server Import and Export Wizard

    From SQL Server Management Studio choose the database from which you want to export the data (e.g. AdventureWorks), right click on it and from the floating menu choose Tasks/Export Data.

Step 3: Choose a Data Source

    In the ‘Choose a Data Source’ step select the ‘Data Source’, SQL Server Native Client 10.0 for exporting data from SQL Server, choose the ‘Server name’ from the list of available SQL Servers, select the Authentication mode and the Database (e.g. AdventureWorks), then proceed to the next step by clicking ‘Next’.

Step 4: Choose a Destination

    In the ‘Choose a Destination’ step select the Destination, in this case ‘Oracle Provider for OLE DB’, and then click on ‘Properties’ in order to provide the connectivity details. In the ‘Data Link Properties’ dialog just opened, enter the Data Source (the SID of your Oracle database), the ‘User name’ (e.g. SQLServer) and the ‘Password’, and check the ‘Allow saving password’ checkbox; without this last step it won’t be possible to connect to the Oracle database. Then test the connection by clicking ‘Test Connection’, and if the test succeeded proceed to the next step. 

Note:
    Besides ‘Oracle Provider for OLE DB’ there are three other drivers that allow you to connect to an Oracle database: ‘Microsoft OLE DB Provider for Oracle’, ‘.Net Framework Data Provider for Oracle’, respectively ‘Oracle Data Provider for .Net’, each of them coming with its own downsides and benefits.

Step 5: Specify Table Copy Or Query

    In this step choose the ‘Copy data from one or more tables or views’ option.

Step 6: Select Source Tables and Views

    In the ‘Select Source Tables and Views’ step select the database objects (e.g. Production.vProducts) from which you’ll export the data. You could go with the provided Destination table, though if you use the current settings the Oracle table’s name will be “Product” (including the quotes) and not Product as we’d expect. There are two options: go with the current settings and change the table’s name and columns (they have the same problem) in Oracle afterwards, or create the table manually in Oracle. I prefer the second option because I can use the SQL script automatically generated by the wizard; for this, with the just chosen table selected, click on the ‘Edit Mappings’ button, which will bring up the ‘Column Mappings’ dialog, and click the ‘Edit SQL’ button, the query appearing as editable in the ‘Create Table SQL Statement’ dialog. 


    Copy the query and paste it into Oracle SQL Developer, then replace the quotes and delete the rowguid column as we don’t need it; moreover, when attempting to dump the data, given the incompatibility of data types, an error would be raised. There is a second change we’ll have to make: rename the Size column to ProductSize, because Size is a reserved keyword.

 
CREATE TABLE sqlserver.Product (
ProductID INTEGER NOT NULL,
Name NVARCHAR2(50) NOT NULL,
ProductNumber NVARCHAR2(25) NOT NULL,
MakeFlag NUMBER NOT NULL,
FinishedGoodsFlag NUMBER NOT NULL,
Color NVARCHAR2(15),
SafetyStockLevel INTEGER NOT NULL,
ReorderPoint INTEGER NOT NULL,
StandardCost NUMBER NOT NULL,
ListPrice NUMBER NOT NULL,
ProductSize NVARCHAR2(5),
SizeUnitMeasureCode NCHAR(3),
WeightUnitMeasureCode NCHAR(3),
Weight NUMBER,
DaysToManufacture INTEGER NOT NULL,
ProductLine NCHAR(2),
Class NCHAR(2),
Style NCHAR(2),
ProductSubcategoryID INTEGER,
ProductModelID INTEGER,
SellStartDate TIMESTAMP NOT NULL,
SellEndDate TIMESTAMP,
DiscontinuedDate TIMESTAMP,
ModifiedDate TIMESTAMP NOT NULL) 

   Unfortunately this approach has a downside: we have to go two steps back to ‘Choose a Destination’, click on ‘Properties’ again and, without doing any change, test the connection by clicking ‘Test Connection’, then proceed forward to the current step, and in the ‘Select Source Tables and Views’ step select again the database object (e.g. Production.vProducts) from which you’ll export the data, but this time in the Destination dropdown control search for the table just created (e.g. sqlserver.Product).

    Because we have removed the rowguid column from the table created in Oracle, we’ll have to change the mappings; for this click on ‘Edit Mappings’ and in the ‘Mapping’ list box from ‘Column Mappings’, in the Destination dropdown next to the rowguid column select ‘<ignore>’, the respective column thus being excluded from the mapping. Accept the change and proceed to the next step. 


Note:
    You could choose to export more than one table, though for this you’ll have to create each table manually in Oracle and then map the created tables in the ‘Select Source Tables and Views’ step.

Step 7: Review Data Type Mapping

Step 8: Save and Run Package

    In the ‘Save and Run Package’ step check ‘Run immediately’, respectively the ‘Save SSIS Package’ and ‘File System’ options, then proceed to the next step by clicking ‘Next’.

Step 9: Save SSIS Package

    In the ‘Save SSIS Package’ step provide the intended Name (e.g. Export Products To Oracle) and optionally a Description for the Package, choose the location where the package will be saved, then proceed to the next step by clicking ‘Next’.

Step 10: Complete the Wizard

Step 11: Executing the Package

Step 12: Checking the Data

    If the previous step completed successfully, then you can copy the following statement into Oracle SQL Developer and execute it; the table should already be populated.

SELECT * 
FROM sqlserver.Product 

Note:

    If you used another schema to connect, then you’ll have to replace the above schema (e.g. SQLServer) with your schema.
