A Software Engineer and data professional's blog on SQL, data, databases, data architectures, data management, programming, Software Engineering, Project Management, ERP implementations and other IT-related topics.
04 January 2009
🛢DBMS: User-defined Functions [UDFs] (Definitions)
03 January 2009
🛢DBMS: Stored Procedures [SP] (Definitions)
02 January 2009
🛢DBMS: Views (Definitions)
"A virtual table, defined as a SQL SELECT statement, to provide a subset of data from one or more tables." (Craig S Mullins, "Database Administration" 2nd Ed, 2012)
01 January 2009
🛢DBMS: Database Object (Definitions)
"One of the components of a database: table, view, index, procedure, trigger, column, default, or rule." (Karen Paulsell et al, "Sybase SQL Server: Performance and Tuning Guide", 1996)
"One of the components of a database: a table, index, trigger, view, key, constraint, default, rule, user-defined data type, or stored procedure." (Microsoft Corporation, "SQL Server 7.0 System Administration Training Kit", 1999)
"Any structure or entity that exists in an Oracle database, such as a table, index, PL/SQL program, or view. For a list of database objects owned by the current user, look in the data dictionary's USEROBJECTS view." (Bill Pribyl & Steven Feuerstein, "Learning Oracle PL/SQL", 2001)
"Any database component. It could be a table, index, trigger, view, key, constraint, default, rule, user-defined data type, or stored procedure in a database." (Anthony Sequeira & Brian Alderman, "The SQL Server 2000 Book", 2003)
"Any of the various items included in a database including tables, views, diagrams, and so on." (Victor Isakov et al, "MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (70-444) Study Guide", 2007)
"Any object in a database, such as a table, a view, an index, a stored procedure, or a trigger." (Carlos Coronel et al, "Database Systems: Design, Implementation, and Management" 9th Ed., 2011)
"An object that exists in an installation of a database system, such as an instance, a database, a database partition group, a buffer pool, a table, or an index." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)
02 December 2008
🧭Business Intelligence: Perspectives (Part I: General Issues)
Data Quality
The problem with data usually starts at the source - the ERP and other information systems (IS). In theory a system should cover all the basic reporting requirements existing in an enterprise, though that's seldom the case. Therefore, basic reporting needs end up being covered by ad-hoc developed tools, often MS Excel/Access solutions, which are difficult to integrate and manage across the organization.
Data Quality (DQ) is maybe the most ignored component in the attempt to build flexible, secure and reliable BI solutions. DQ depends on the validation implemented in the source systems and on the mechanisms used to cleanse the data before being reported, as well as on the efficiency and effectiveness of the existing business processes and best practices.
DQ must be guaranteed for accurate decisions. If the quality is not validated and reviewed periodically, users will be reluctant to use the reports! The reports must be validated as part of the UAT process. Aggregated BI reports need detailed reports that can be used for validation, with the logic and data kept synchronized between the two.
The quality of decisions depends on the degree to which data were understood and presented to the decision makers, though that's not enough; a complete perspective is also needed, and maybe that's why some business users prefer to prepare and aggregate the data by themselves, a process that in theory allows them to get a deeper understanding of what's happening.
Cooperation
BI implementations also depend on the consultants' skills and the degree to which they understood the business requirements, on the team's cohesion and other project (management) related prerequisites, as well as on knowledge transfer and training.
Tools
Most of the BI tools available on the market don't satisfy all business and user requirements. Even if they excel in some features, they lack in others. Usually, more than one BI tool is needed to cover (most of) the requirements. When features are not available, not mature enough, or difficult to learn, users will prefer the tools they already know.
Another important consideration is that BI tools rely on data models that are often inflexible in the data they provide and lack support for integrating additional datasets, algorithms and customizations. More recently, the overall requirements also need to be considered from the perspective of cloud computing technologies, which are steadily becoming a prerequisite for today's business dynamics.
11 November 2008
🗄️Data Management: Data Quality (Part I: Information Systems' Perspective)
Data Management Series
Poor data quality in information systems has multiple, often overlapping, root causes:
- Processes span different functions and/or roles, each of them maintaining the data they are interested in, without any agreement or coordination on the ownership. The lack of ownership is in general management’s fault.
- Within an enterprise many systems end up being integrated, the quality of the data depending on the quality and scope of the integrations, whether they were addressed fully or only superficially. Few integrations are stable and properly designed. Even if stability can be achieved in time, the scope is seldom extended, as that involves further investments, and thus the remaining data need to be maintained manually, while the resulting issues need to be troubleshot or left to accumulate in the backlog.
- There are systems which are not integrated but use the same data, so users need to duplicate their effort and therefore often focus only on their immediate needs. Moreover, the lack of mappings between systems makes data analysis and review difficult.
- There is a lack of knowledge about the systems used in terms of processes, procedures, best practices, policies, etc. Users usually try to do their best based on the knowledge they have, and despite their best intent, the systems end up being misused just to get things done.
- Validation for data entry is basic or nonexistent at the important entry points (UI, integration interfaces, bulk upload functionality), compounded by system permissiveness (allowing workarounds) and by issues of stability and reliability (bugs/defects).
- Data quality control mechanisms and quality methodologies are missing, as is a Data and/or Quality Management strategy. If data quality is not kept under review, it can easily decrease over time.
- The lack of a data culture and processes that support data quality.
- People lack consistency and/or the self-discipline to follow the processes and to update the data as the processes require, not just the data needed to move to the next or final step. Therefore, the gap between reality and the picture presented by the system is considerable.
- People are not motivated to improve data quality even if they may recognize the importance of doing that.
Data quality comes onto managers' agenda especially during ERP implementations. Unfortunately, as soon as the implementation is over it disappears again, despite the warnings about the consequences poor data quality might have on the implementation and on further data use. An ERP implementation is supposed to be an opportunity for improving data quality, though for many organizations it remains only that - an opportunity. Once it passes, organizations need considerably more financial and human resources to recover even a fraction of what was missed.
08 November 2008
💎SQL Reloaded: Dealing with data duplicates on SQL Server
Subject to duplication can be whole records, a group of attributes (fields) or only single attributes; it depends from case to case. Often duplicates are easy to identify - it's enough to let somebody with the proper knowledge look over them. But what do you do when the volume of data is too large or when the process needs to be automated as much as possible? Sometimes using the DISTINCT keyword in a SELECT statement does the trick, while other times more complicated validation is required, ranging from simple checks to Data Mining techniques.
I will try to exemplify the techniques I use to deal with duplicates with the help of a simple example based on a table that tracks information about Assets:
-- create test table
CREATE TABLE [dbo].[Assets](
    [ID] [int] NOT NULL,
    [CreationDate] smalldatetime NOT NULL,
    [Vendor] [varchar](50) NULL,
    [Asset] [varchar](50) NULL,
    [Model] [varchar](50) NULL,
    [Owner] [varchar](50) NULL,
    [Tag] [varchar](50) NULL,
    [Quantity] [decimal](13, 2) NULL
) ON [PRIMARY]
Here's some test data:
-- insert test data (SQL Server 2000+)
INSERT INTO dbo.Assets
VALUES ('1', DATEADD(d,-5, GetDate()), 'IBM','Laptop 1','Model 1','Owner 1','XX0001','1')
INSERT INTO dbo.Assets
VALUES ('2', DATEADD(d,-4, GetDate()),'IBM','Laptop 2','Model 2','Owner 2','XX0002','1')
INSERT INTO dbo.Assets
VALUES ('3', DATEADD(d,-3, GetDate()),'Microsoft','Laptop 3','Model 3','Owner 2','WX0001','1')
INSERT INTO dbo.Assets
VALUES ('4', DATEADD(d,-3, GetDate()),'Microsoft','Laptop 3','Model 3','Owner 2','WX0001','1')
INSERT INTO dbo.Assets
VALUES ('5', DATEADD(d,-3, GetDate()),'Dell','Laptop 4','Model 4','Owner 3','DD0001','1')
INSERT INTO dbo.Assets
VALUES ('6', DATEADD(d,-1, GetDate()),'Dell','Laptop 4','Model 4','Owner 4','DD0001','1')
-- review the data
SELECT ID, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
FROM dbo.Assets
Output:
ID | CreationDate | Vendor | Asset | Model | Owner | Tag | Quantity |
1 | 1/29/2024 10:46:00 PM | IBM | Laptop 1 | Model 1 | Owner 1 | XX0001 | 1 |
2 | 1/30/2024 10:46:00 PM | IBM | Laptop 2 | Model 2 | Owner 2 | XX0002 | 1 |
3 | 1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
4 | 1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
5 | 1/31/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 3 | DD0001 | 1 |
6 | 2/2/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 4 | DD0001 | 1 |
-- retrieve the duplicates
SELECT Vendor, Tag
FROM dbo.Assets A
GROUP BY Vendor, Tag
HAVING COUNT(*)>1
Output:
Vendor | Tag |
Dell | DD0001 |
Microsoft | WX0001 |
-- retrieve duplicates with details
SELECT A.Id, A.CreationDate, A.Vendor, A.Asset, A.Model, A.Owner, A.Tag, A.Quantity
FROM dbo.Assets A
    JOIN (-- duplicates
        SELECT Vendor, Tag
        FROM dbo.Assets A
        GROUP BY Vendor, Tag
        HAVING COUNT(*)>1
    ) B
      ON A.Vendor = B.Vendor
     AND A.Tag = B.Tag
Output:
Id | CreationDate | Vendor | Asset | Model | Owner | Tag | Quantity |
5 | 1/31/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 3 | DD0001 | 1 |
6 | 2/2/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 4 | DD0001 | 1 |
3 | 1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
4 | 1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
Normally it's enough to use the DISTINCT keyword in order to remove the duplicated rows from a result set:
-- select unique records
SELECT DISTINCT CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
FROM dbo.Assets
Output:
CreationDate | Vendor | Asset | Model | Owner | Tag | Quantity |
1/29/2024 10:46:00 PM | IBM | Laptop 1 | Model 1 | Owner 1 | XX0001 | 1 |
1/30/2024 10:46:00 PM | IBM | Laptop 2 | Model 2 | Owner 2 | XX0002 | 1 |
1/31/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 3 | DD0001 | 1 |
1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
2/2/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 4 | DD0001 | 1 |
In our example only some combinations are duplicated, while the other attributes might slightly differ, so another approach is needed. First of all, we need to identify which record is the most reliable one; in some cases the latest entered record should be the most accurate or the closest to reality, but that's not necessarily the truth. There are also cases in which we don't care which record is selected, but from experience those cases are few.
Oracle and SQL Server introduced the dense_rank() analytic function, which returns the rank of rows within the partition of a result set, without any gaps in the ranking. In our case the partition is determined by Vendor and Tag; what remains is to identify the logic used for ranking. Supposing that we are always interested in the last record entered, the query would look like this:
-- retrieve the last entry per Vendor/Tag via ranking functions
SELECT Id, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
FROM (--subquery
    SELECT Id, CreationDate, Vendor, Asset, Model, Owner, Tag, Quantity
    , dense_rank() OVER(PARTITION BY Vendor, Tag ORDER BY CreationDate DESC, Id DESC) RANKING
    FROM dbo.Assets
) A
WHERE RANKING = 1
Output:
Id | CreationDate | Vendor | Asset | Model | Owner | Tag | Quantity |
6 | 2/2/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 4 | DD0001 | 1 |
1 | 1/29/2024 10:46:00 PM | IBM | Laptop 1 | Model 1 | Owner 1 | XX0001 | 1 |
2 | 1/30/2024 10:46:00 PM | IBM | Laptop 2 | Model 2 | Owner 2 | XX0002 | 1 |
4 | 1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
Unfortunately, this technique doesn't work in SQL Server 2000, where a different approach is needed. In most cases the unique identifier for a record is a sequential unique number, the highest Id corresponding to the latest entered record. This allows selecting the latest entered record by using the MAX function:
-- nonduplicated records (SQL Server 2000+)
SELECT A.Id, A.CreationDate, A.Vendor, A.Asset, A.Model, A.Owner, A.Tag, A.Quantity
FROM dbo.Assets A
    JOIN (-- last entry
        SELECT Vendor, Tag, MAX(Id) MaxId
        FROM dbo.Assets A
        GROUP BY Vendor, Tag
        -- HAVING COUNT(*)>1
    ) B
      ON A.Vendor = B.Vendor
     AND A.Tag = B.Tag
     AND A.ID = B.MaxId
Output:
Id | CreationDate | Vendor | Asset | Model | Owner | Tag | Quantity |
4 | 1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
2 | 1/30/2024 10:46:00 PM | IBM | Laptop 2 | Model 2 | Owner 2 | XX0002 | 1 |
1 | 1/29/2024 10:46:00 PM | IBM | Laptop 1 | Model 1 | Owner 1 | XX0001 | 1 |
6 | 2/2/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 4 | DD0001 | 1 |
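When the goal is not only to identify the duplicates but also to remove them, keeping only the last entry per Vendor/Tag, the same MAX(Id) logic can drive a DELETE. Here's a minimal sketch, assuming the highest Id always marks the record to keep (prudence dictates running the subquery as a SELECT first):

-- remove duplicates, keeping the latest entry per Vendor/Tag
DELETE A
FROM dbo.Assets A
    JOIN (-- last entry per duplicated group
        SELECT Vendor, Tag, MAX(Id) MaxId
        FROM dbo.Assets
        GROUP BY Vendor, Tag
        HAVING COUNT(*)>1
    ) B
      ON A.Vendor = B.Vendor
     AND A.Tag = B.Tag
     AND A.Id <> B.MaxId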
Notes:
1. In other scenarios it's important to select all the records matching the extreme values (first, last), where the dense_rank function comes in handy again; however, for versions that don't support it, a creation date attribute saves the day, when it's available and unique:
-- nonduplicated records (SQL Server 2000+)
SELECT A.Id, A.CreationDate, A.Vendor, A.Asset, A.Model, A.Owner, A.Tag, A.Quantity
FROM dbo.Assets A
    JOIN (-- last entry
        SELECT Vendor, Tag, MAX(CreationDate) LastCreationDate
        FROM dbo.Assets A
        GROUP BY Vendor, Tag
        -- HAVING COUNT(*)>1
    ) B
      ON A.Vendor = B.Vendor
     AND A.Tag = B.Tag
     AND DateDiff(d, A.CreationDate, B.LastCreationDate)=0
Output:
Id | CreationDate | Vendor | Asset | Model | Owner | Tag | Quantity |
6 | 2/2/2024 10:46:00 PM | Dell | Laptop 4 | Model 4 | Owner 4 | DD0001 | 1 |
1 | 1/29/2024 10:46:00 PM | IBM | Laptop 1 | Model 1 | Owner 1 | XX0001 | 1 |
2 | 1/30/2024 10:46:00 PM | IBM | Laptop 2 | Model 2 | Owner 2 | XX0002 | 1 |
3 | 1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
4 | 1/31/2024 10:46:00 PM | Microsoft | Laptop 3 | Model 3 | Owner 2 | WX0001 | 1 |
2. Instead of a single multi-row INSERT I used multiple INSERT statements because I preferred to keep the tutorial usable also on SQL Server 2000. Here's the single multi-row INSERT statement:
-- insert test data (SQL Server 2008+, where multi-row VALUES lists are supported)
INSERT INTO dbo.Assets
VALUES ('1', DATEADD(d,-5, GetDate()), 'IBM','Laptop 1','Model 1','Owner 1','XX0001','1')
, ('2', DATEADD(d,-4, GetDate()),'IBM','Laptop 2','Model 2','Owner 2','XX0002','1')
, ('3', DATEADD(d,-3, GetDate()),'Microsoft','Laptop 3','Model 3','Owner 2','WX0001','1')
, ('4', DATEADD(d,-3, GetDate()),'Microsoft','Laptop 3','Model 3','Owner 2','WX0001','1')
, ('5', DATEADD(d,-3, GetDate()),'Dell','Laptop 4','Model 4','Owner 3','DD0001','1')
, ('6', DATEADD(d,-1, GetDate()),'Dell','Laptop 4','Model 4','Owner 4','DD0001','1')
3. The above techniques should also work in Oracle with two amendments: the attributes' types must be adapted to the Oracle ones, while the SQL Server GetDate() function must be replaced with the corresponding Oracle SYSDATE function, as below.
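A minimal sketch of the amendments (the type mappings to number, varchar2 and date are the usual equivalents, though they should be checked against the target version):

-- create test table (Oracle)
CREATE TABLE Assets(
    ID number NOT NULL,
    CreationDate date NOT NULL,
    Vendor varchar2(50),
    Asset varchar2(50),
    Model varchar2(50),
    Owner varchar2(50),
    Tag varchar2(50),
    Quantity number(13, 2)
);

-- insert test data (Oracle); SYSDATE-n subtracts n days
INSERT INTO Assets VALUES (1, SYSDATE-5, 'IBM','Laptop 1','Model 1','Owner 1','XX0001',1);
INSERT INTO Assets VALUES (2, SYSDATE-4, 'IBM','Laptop 2','Model 2','Owner 2','XX0002',1);
INSERT INTO Assets VALUES (3, SYSDATE-3, 'Microsoft','Laptop 3','Model 3','Owner 2','WX0001',1);
INSERT INTO Assets VALUES (4, SYSDATE-3, 'Microsoft','Laptop 3','Model 3','Owner 2','WX0001',1);
INSERT INTO Assets VALUES (5, SYSDATE-3, 'Dell','Laptop 4','Model 4','Owner 3','DD0001',1);
INSERT INTO Assets VALUES (6, SYSDATE-1, 'Dell','Laptop 4','Model 4','Owner 4','DD0001',1);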
ERP Systems: Learning about Oracle APPS internals I
Oracle makes documentation about their products available through the Oracle Technology Network and Metalink. The first source contains documents mainly as PDF files, while Metalink provides richer content and is easier to use; however, in order to access it, your company has to purchase an Oracle Support Identifier.
In Metalink, Oracle Applications' documentation is grouped under the eTRM (Electronic Technical Reference Manuals) section, while the PDF documents can be found under the Oracle 11i Documentation Library. Many of them, especially for older versions, can also be found on the web, revealed with a simple search using a table's or a file's name.
Both sources are by far incomplete and there are many gaps, not to forget that many Oracle implementations also involve some customization; information about those changes can be found, maybe, only in the documentation produced during the implementation/customization process.
Lately many blogs on Oracle Applications internals have appeared, and even if many of them amount to copying some material from Metalink or other documents, there are also professionals who respect themselves.
People can learn a lot by checking the objects that unveil the APPS internals: APPS.FND_TABLES provides the list of tables used, while APPS.FND_VIEWS provides the list of views. The problem with the latter is that a search can't be done directly on the field that stores the views' script, though the data can be exported to a text file and searched there (exporting the data completely to Excel won't work). In time developers come to intuit how the views could be named, so a search on the name can help narrow things down.
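A minimal sketch of such metadata queries (the 'PO%' pattern is only a hypothetical example for purchasing-related objects):

-- tables whose name matches a pattern
SELECT table_name
FROM APPS.FND_TABLES
WHERE table_name LIKE 'PO%'
ORDER BY table_name;

-- views whose name matches a pattern
-- (the column holding the view script is a LONG, hence not directly searchable with LIKE)
SELECT view_name
FROM APPS.FND_VIEWS
WHERE view_name LIKE 'PO%'
ORDER BY view_name;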
Other professionals might be willing to help, so it's often a good idea to post questions on blogs, forums or professional social networks. Not all questions get answered, so rather than waiting for indirect enlightenment, it's better to do some research in parallel too.
There will be cases in which none of the specified sources helps; most probably you'll have to reverse engineer Oracle Applications' internals by studying various business scenarios, and in such cases the experienced users can help a lot.
🧭Business Intelligence: Enterprise Reporting (Part I: An Introduction)
Business Intelligence Series
In general, there are 5 types of reporting needs:
- OLTP (On-Line Transaction Processing) system providing reports with actual (live) data;
- OLAP (On-Line Analytical Processing) reports with drill-down, roll-up, slice and dice or pivoting functionality, working with historical data, the data source(s) being refreshed periodically;
- ad-hoc reports - reports provided on request, often satisfying one-time or sporadic reporting needs;
- Data Mining tool(s) focusing on knowledge discovery (aka Data Science);
- direct data access and analysis (aka self-service BI).
OLAP solutions presume the existence of a data warehouse that reflects the business model; when intelligently built, it can satisfy an important percentage of the BI requirements. Building a data warehouse or a set of data marts is an expensive and time-consuming endeavor and rarely manages to satisfy everybody's needs. There are also vendors that provide commercial off-the-shelf data models and solutions; at first view they look like a good deal, however such models are inflexible and seldom cover all requirements. One can end up customizing and extending the model, running into all kinds of issues involving the model's design, flexibility, quality, resources and costs.
The need for ad-hoc reports will be there no matter how complete and flexible the existing reports are. There are always new requirements that must be fulfilled in a timely manner, without relying on the long cycle time needed for an OLTP/OLAP report. Actually, many reports start as ad-hoc reports and, once their scope and logic have stabilized, are moved to the reporting solution. Talking about new report requirements, it's worth mentioning that many users don't know exactly what they want, what is possible to get, and what information it makes sense to show, and at what level of detail, in order to have a report that reflects reality.
Data Mining tools and models are supposed to leverage the value of an ERP system beyond the functionality provided by analytic reports, by helping to find hidden patterns and trends in data and to elaborate predictions and estimates. Here I'll only say that DM makes sense once the business has reached a certain maturity, and I'm considering here mainly the cost/value ratio (the expected benefits need to be greater than the costs) and the effort required from the business side in pursuing such a project.
There are situations in which the functionality provided by reporting tools doesn't fulfill users' requirements, one such situation being when users (aka data citizens) need to analyze data by themselves and to link data from different sources, especially Excel sheets. It's true that vendors have tried to address such requirements, though I don't think the tools are mature enough, easy to use, or able to take users beyond their skills and knowledge.
29 October 2008
W3: Resource Description Framework (Definitions)
"A framework for constructing logical languages that can work together in the Semantic Web. A way of using XML for data rather than just documents." (Craig F Smith & H Peter Alesso, "Thinking on the Web: Berners-Lee, Gödel and Turing", 2008)
"An application of XML that enables the creation of rich, structured, machinereadable resource descriptions." (J P Getty Trust, "Introduction to Metadata" 2nd Ed., 2008)
"An example of ‘metadata’ language (metadata = data about data) used to describe generic ‘things’ (‘resources’, according to the RDF jargon) on the Web. An RDF document is a list of statements under the form of triples having the classical format: <object, property, value>, where the elements of the triples can be URIs (Universal Resource Identifiers), literals (mainly, free text) and variables. RDF statements are normally written into XML format (the so-called ‘RDF/XML syntax’)." (Gian P Zarri, "RDF and OWL for Knowledge Management", 2011)
"The basic technique for expressing knowledge on The Semantic Web." (DAMA International, "The DAMA Dictionary of Data Management", 2011)
"A graph model for describing formal Web resources and their metadata, to enable automatic processing of such descriptions." (Mahdi Gueffaz, "ScaleSem Approach to Check and to Query Semantic Graphs", 2015)
"Specified by W3C, is a conceptual data modeling framework. It is used to specify content over the World Wide Web, most commonly used by Semantic Web." (T R Gopalakrishnan Nair, "Intelligent Knowledge Systems", 2015)
"Resource Description Framework (RDF) is a framework for expressing information about resources. Resources can be anything, including documents, people, physical objects, and abstract concepts." (Fu Zhang & Haitao Cheng, "A Review of Answering Queries over Ontologies Based on Databases", 2016)
"Resource Description Framework (RDF) is a W3C (World Wide Web Consortium) recommendation which provides a generic mechanism for representing information about resources on the Web." (Hairong Wang et al, "Fuzzy Querying of RDF with Bipolar Preference Conditions", 2016)
"Resource Description Framework (RDF) is a W3C recommendation that provides a generic mechanism for giving machine readable semantics to resources. Resources can be anything we want to talk about on the Web, e.g., a single Web page, a person, a query, and so on." (Jingwei Cheng et al, "RDF Storage and Querying: A Literature Review", 2016)
"The Resource Description Framework (RDF) metamodel is a directed graph, so it identifies one node (the one from which the edge is pointing) as the subject of the triple, and the other node (the one to which the edge is pointing) as its object. The edge is referred to as the predicate of the triple." (Robert J Glushko, "The Discipline of Organizing: Professional Edition" 4th Ed., 2016)
"Resource description framework (RDF) is a family of world wide web consortium (W3C) specifications originally designed as a metadata data model." (Senthil K Narayanasamy & Dinakaran Muruganantham, "Effective Entity Linking and Disambiguation Algorithms for User-Generated Content (UGC)", 2018)
"A framework for representing information on the web." (Sybase, "Open Server Server-Library/C Reference Manual", 2019)
"Resource description framework (RDF) is a W3C (World Wide Web Consortium) recommendation which provides a generic mechanism for representing information about resources on the web." (Zongmin Ma & Li Yan, "Towards Massive RDF Storage in NoSQL Databases: A Survey", 2019)
"It is a language that allows to represent knowledge using triplets of the subject-predicate-object type." (Antonio Sarasa-Cabezuelo & José Luis Fernández-Vindel, "A Model for the Creation of Academic Activities Based on Visits", 2020)
"The RDF is a standard for representing knowledge on the web. It is primarily designed for building the semantic web and has been widely adopted in database and datamining communities. RDF models a fact as a triple which consists of a subject (s), a predicate (p), and an object (o)." (Kamalendu Pal, "Ontology-Assisted Enterprise Information Systems Integration in Manufacturing Supply Chain", 2020)
"It is a language that allows to represent knowledge using triplets of the subject-predicate-object type." (Antonio Sarasa-Cabezuelo, "Creation of Value-Added Services by Retrieving Information From Linked and Open Data Portals", 2021)
"Resource Description Framework, the native way of describing linked data. RDF is not exactly a data format; rather, there are a few equivalent formats in which RDF can be expressed, including an XML-based format. RDF data takes the form of ‘triples’ (each atomic piece of data has three parts, namely a subject, predicate and object), and can be stored in a specialised database called a triple store." ("Open Data Handbook")