As companies generate more and more data, the traditional approach to data warehousing becomes increasingly difficult and costly to maintain. The Data Vault, a relatively new approach to data warehousing, offers a solution to this problem by providing a scalable, agile, and cost-effective way to manage large volumes of data. 

In this post, we will explore how Data Vaults are the future of data warehousing and why more and more companies are adopting this approach. We will also provide learning resources for those who want to dive deeper into the topic!

What is Data Vault?

Data Vault is a data warehouse modeling technique particularly suited to agile data warehouses. It offers a high degree of flexibility for extensions, a complete, unitemporal historization of the data, and allows strong parallelization of the data loading processes. Dan Linstedt developed Data Vault modeling in the 1990s.

After the first publication in 2000, the approach gained greater attention in 2002 through a series of articles. In 2007, Linstedt won the endorsement of Bill Inmon, who described Data Vault as the “optimal choice” for modeling the enterprise data warehouse in his DW 2.0 architecture.

Anyone researching the term agile data warehouse will quickly come across Data Vault. What makes the technique special is its focus on business needs: it enables flexible, low-effort adjustments to a data warehouse.

Data Vault 2.0 covers the entire development process as well as the architecture, and consists of three components: methodology (implementation), architecture, and model. The advantage of this approach is that it considers all aspects of business intelligence, together with the underlying data warehouse, during development.

The Data Vault model offers a modern solution for overcoming the limitations of traditional data modeling approaches. With its scalability, flexibility, and agility, it provides a solid foundation for building a data platform that can accommodate the complexity and diversity of modern data environments.

The Data Vault’s hub-and-spoke architecture and separation of entities and attributes enable data integration and harmonization across multiple systems and domains, facilitating incremental and agile development.

A crucial role of the Data Vault in building a data platform is to establish a single source of truth for all data. Its unified view of data and its support for capturing and tracking historical data changes through satellite tables enable compliance with audit and regulatory requirements, as well as comprehensive analysis and reporting.

The Data Vault’s near real-time data integration capability via delta loading facilitates handling large volumes of data in fast-changing environments such as Big Data and IoT applications.

Data Vault vs. Traditional Data Warehouse Models

Third-Normal-Form (3NF) is one of the most renowned traditional data warehouse models, often preferred in many large implementations. Incidentally, this corresponds to the ideas of Bill Inmon, one of the “forefathers” of the data warehouse concept.

The Inmon architecture is based on the relational database model. It eliminates data redundancy by breaking data down into smaller, normalized tables interconnected using primary and foreign keys, and it ensures that data is consistent and accurate by enforcing referential integrity rules; downstream data marts then serve departmental reporting needs.

The goal of the normal form was to build a comprehensive, company-wide data model for the core data warehouse. However, it has scalability and flexibility issues due to highly coupled data marts, difficulties loading in near real-time mode, laborious queries, and its top-down design and implementation.

[Image: Inmon architecture diagram]

The Kimball model, used for OLAP (online analytical processing) and data marts, is another famous data warehouse model, in which fact tables contain measurements and dimension tables describe the stored data in a star schema or snowflake schema design. In this architecture, data is organized into fact and dimension tables that are denormalized to simplify querying and analysis.

Kimball is based on a dimensional model that is optimized for querying and reporting, making it ideal for business intelligence applications. However, it has issues with the isolation of subject-oriented information, data redundancy, incompatible query structures, scalability difficulties, inconsistent granularity of fact tables, synchronization issues, and its bottom-up design, which can make enterprise-wide integration difficult.

[Image: Kimball architecture diagram]

In contrast, the Data Vault architecture is a hybrid approach that combines aspects of both the 3NF and Kimball architectures. It is based on relational principles and data normalization, but it represents relationships between entities differently and structures table fields and timestamps differently.

In this architecture, all data is stored in a Raw Data Vault or data lake, whereas commonly used data is stored in normalized form in a Business Vault, which contains historical and context-specific data that can be used for reporting.

[Image: Data Vault architecture diagram]

Data Vault addresses the issues of traditional models by being more efficient, scalable, and flexible. It allows near real-time loading and better data integrity, and the model can be expanded without migrating or affecting existing tables.

Modeling Approach      Data Structure                Design Approach
3NF Modeling           Tables in 3NF                 Top-down
Kimball Modeling       Star or Snowflake Schema      Bottom-up
Data Vault Modeling    Hub-and-Spoke                 Hybrid

Architecture of Data Vault

Data Vault has a hub-and-spoke architecture and essentially consists of three layers:

Staging Layer: Collects the raw data from the source systems, such as CRM or ERP

Data Warehouse Layer: When modeled as a Data Vault model, this layer includes:

  • Raw Data Vault: stores the raw data.
  • Business Data Vault: includes harmonized and transformed data based on business rules (optional).
  • Metrics Vault: stores runtime information (optional).
  • Operational Vault: stores the data that flows directly from operational systems into the data warehouse (optional).

Data Mart Layer: This layer models data as star schema and/or other modeling techniques. It provides information for analysis and reporting.

[Image: Data Vault EDW architecture. Image Source: Lamia Yessad]

Extending a Data Vault does not require re-architecting. New functions can be built in parallel directly using the concepts and methods of Data Vault, and existing components are not lost. Frameworks can make the work significantly easier: they create a layer between the data warehouse and the developer, reducing the complexity of the implementation.

Components of Data Vault

In contrast to classic third-normal-form modeling, Data Vault divides all information belonging to an object into three categories during modeling and stores them strictly separated from each other. These functional areas are mapped in Data Vault as so-called hubs, links, and satellites:

#1. Hubs

Hubs represent core business concepts, such as customer, seller, sale, or product. The hub table is formed around the business key (e.g., a store name or location) when a new instance of that business key is first introduced into the data warehouse.

The hub contains no descriptive information and no foreign keys. It consists only of the business key, along with a warehouse-generated sequence ID or hash key, a load date/time stamp, and the record source.
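To make that structure concrete, here is a minimal sketch of a hub table and its loading logic, using SQLite as a stand-in warehouse. All names (hub_customer, customer_bk) and the MD5-based hash key are illustrative assumptions, not a fixed standard:

```python
import hashlib
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")

# A hub holds only the business key plus warehouse metadata:
# a hash key derived from the business key, a load timestamp, and the record source.
conn.execute("""
    CREATE TABLE hub_customer (
        hub_customer_hk TEXT PRIMARY KEY,      -- hash of the business key
        customer_bk     TEXT NOT NULL UNIQUE,  -- business key, e.g. a customer number
        load_dts        TEXT NOT NULL,
        record_source   TEXT NOT NULL
    )
""")

def hash_key(business_key: str) -> str:
    """Derive a deterministic hash key from the normalized business key."""
    return hashlib.md5(business_key.strip().upper().encode()).hexdigest()

def load_hub(business_key: str, source: str) -> None:
    # Insert only when this business key is new to the warehouse.
    conn.execute(
        "INSERT OR IGNORE INTO hub_customer VALUES (?, ?, ?, ?)",
        (hash_key(business_key), business_key,
         datetime.now(timezone.utc).isoformat(), source),
    )

load_hub("CUST-1001", "crm")
load_hub("CUST-1001", "erp")  # already known: ignored, the hub stays deduplicated
print(conn.execute("SELECT COUNT(*) FROM hub_customer").fetchone()[0])  # prints 1
```

Because the hash key is derived deterministically from the business key, the same customer arriving from two source systems produces one hub row.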

#2. Links

Links establish relationships between business keys. Each entry in a link models an n-to-m relationship between any number of hubs. This allows the Data Vault to react flexibly to changes in the business logic of the source systems, such as changes in the cardinality of relationships. Just like the hub, the link does not contain any descriptive information. It consists of the sequence IDs or hash keys of the hubs it references, a warehouse-generated sequence ID, a load date/time stamp, and the record source.
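A link can be sketched the same way: a narrow table carrying only the referenced hub keys and load metadata. Again, table and key names (link_purchase, hub_product) are hypothetical, and SQLite stands in for the warehouse:

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")

def hk(*parts: str) -> str:
    """Illustrative hash-key helper over one or more business-key parts."""
    return hashlib.md5("|".join(p.strip().upper() for p in parts).encode()).hexdigest()

# Two hubs (minimal columns shown) and the link that relates them.
# The link carries no descriptive data: only the referenced hub keys,
# its own hash key, a load timestamp, and the record source.
conn.executescript("""
    CREATE TABLE hub_customer (hub_customer_hk TEXT PRIMARY KEY, customer_bk TEXT);
    CREATE TABLE hub_product  (hub_product_hk  TEXT PRIMARY KEY, product_bk  TEXT);
    CREATE TABLE link_purchase (
        link_purchase_hk TEXT PRIMARY KEY,
        hub_customer_hk  TEXT REFERENCES hub_customer,
        hub_product_hk   TEXT REFERENCES hub_product,
        load_dts         TEXT,
        record_source    TEXT
    );
""")

conn.execute("INSERT INTO hub_customer VALUES (?, ?)", (hk("CUST-1001"), "CUST-1001"))
conn.execute("INSERT INTO hub_product VALUES (?, ?)", (hk("PROD-42"), "PROD-42"))

# One link row per distinct customer-product relationship (n-to-m capable):
conn.execute(
    "INSERT INTO link_purchase VALUES (?, ?, ?, ?, ?)",
    (hk("CUST-1001", "PROD-42"), hk("CUST-1001"), hk("PROD-42"),
     "2023-03-01T00:00:00Z", "pos"),
)
```

If the relationship's cardinality changes later (say, one purchase spanning many products), only new link rows are added; the hubs are untouched.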

#3. Satellites

Satellites contain the descriptive information (context) for a business key stored in a hub or a relationship stored in a link. Satellites work “insert only,” meaning the complete data history is stored in the satellite. Multiple satellites can describe a single business key (or relationship). However, a satellite can only describe one key (hub or link).
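The insert-only behavior can be sketched like this: each change to the descriptive attributes becomes a new satellite row keyed by the parent hub's hash key and the load timestamp. Names and values are illustrative, with SQLite standing in for the warehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A satellite hangs off exactly one hub (or link) and is insert-only:
# every attribute change becomes a new row, so the full history is preserved.
conn.executescript("""
    CREATE TABLE hub_customer (hub_customer_hk TEXT PRIMARY KEY, customer_bk TEXT);
    CREATE TABLE sat_customer_details (
        hub_customer_hk TEXT REFERENCES hub_customer,
        load_dts        TEXT,
        name            TEXT,
        city            TEXT,
        record_source   TEXT,
        PRIMARY KEY (hub_customer_hk, load_dts)
    );
""")

conn.execute("INSERT INTO hub_customer VALUES ('abc123', 'CUST-1001')")

# Initial load, then a later address change: both rows are kept.
conn.execute("INSERT INTO sat_customer_details VALUES "
             "('abc123', '2023-01-01T00:00:00Z', 'Ada', 'London', 'crm')")
conn.execute("INSERT INTO sat_customer_details VALUES "
             "('abc123', '2023-03-01T00:00:00Z', 'Ada', 'Paris', 'crm')")

# The complete history is queryable; the latest row is the current state.
rows = conn.execute(
    "SELECT city FROM sat_customer_details ORDER BY load_dts"
).fetchall()
print([r[0] for r in rows])  # prints ['London', 'Paris']
```

No UPDATE or DELETE ever touches the satellite, which is what makes the history auditable.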

[Image: Data Vault example model. Image Source: Carbidfischer]

How to build a Data Vault Model

Building a Data Vault model involves several steps, each of which is critical to ensuring the model is scalable, flexible, and able to meet the needs of the business:

#1. Identify Entities and Attributes

Identify the business entities and their corresponding attributes. This involves working closely with business stakeholders to understand their requirements and the data they need to capture. Once these entities and attributes have been identified, separate them into hubs, links, and satellites.

#2. Define Entity Relationships and Create Links

Once you have identified the entities and attributes, define the relationships between the entities and create links to represent them. Each link is assigned a business key that identifies the relationship between the entities. Satellites are then added to capture the attributes of the entities and relationships.

#3. Establish Rules and Standards

After creating links, a set of rules and data vault modeling standards should be established to ensure the model is flexible and can handle changes over time. These rules and standards should be reviewed and updated regularly to ensure they remain relevant and aligned with the business needs.

#4. Populate the Model

Once the model has been created, populate it with data using an incremental loading approach: load the data into the hubs, links, and satellites using delta loads. Delta loads ensure that only the changes made to the data are loaded, reducing the time and resources required for data integration.
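One common way to detect "only the changes" is to hash each source row's descriptive attributes and compare against the hash last loaded for that business key. This is a simplified sketch (the known-hash lookup would normally come from the satellite, and all field names are assumptions):

```python
import hashlib

def row_hash(row: dict) -> str:
    """Hash of the descriptive attributes, used to detect changed rows."""
    payload = "|".join(str(row[k]) for k in sorted(row) if k != "customer_bk")
    return hashlib.md5(payload.encode()).hexdigest()

def delta(source_rows: list, known_hashes: dict) -> list:
    """Return only rows whose attribute hash is new for their business key."""
    changed = []
    for row in source_rows:
        h = row_hash(row)
        if known_hashes.get(row["customer_bk"]) != h:
            changed.append(row)
            known_hashes[row["customer_bk"]] = h
    return changed

known = {}  # business key -> last loaded hash (would live in the satellite)
batch1 = [{"customer_bk": "CUST-1", "city": "London"},
          {"customer_bk": "CUST-2", "city": "Berlin"}]
batch2 = [{"customer_bk": "CUST-1", "city": "London"},  # unchanged
          {"customer_bk": "CUST-2", "city": "Paris"}]   # changed

print(len(delta(batch1, known)))  # prints 2 (initial load: everything is new)
print(len(delta(batch2, known)))  # prints 1 (only the changed row is loaded)
```

Unchanged rows are filtered out before they ever reach the satellites, which is what keeps incremental loads cheap.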

#5. Test and Validate the Model

Finally, the model should be tested and validated to ensure it meets the business requirements and is scalable and flexible enough to handle future changes. Regular maintenance and updates should be performed to ensure the model remains aligned with the business needs and continues to provide a unified view of the data.

Data Vault Learning Resources

Mastering Data Vault can provide valuable skills and knowledge that are highly sought after in today’s data-driven industries. Here is a comprehensive list of resources, including courses and books, that can aid in learning the intricacies of Data Vault:

#1. Modeling Data Warehouse with Data Vault 2.0


This Udemy course is a comprehensive introduction to the Data Vault 2.0 modeling approach, Agile project management, and Big Data integration. The course covers the basics and fundamentals of Data Vault 2.0, including its architecture and layers, business and information vaults, and advanced modeling techniques.

It teaches you how to design a Data Vault model from scratch, convert traditional models like 3NF and dimensional models to Data Vault, and understand the principles of dimensional modeling in Data Vault. The course requires basic knowledge of databases and SQL fundamentals.

With a high rating of 4.4 out of 5 and over 1,700 reviews, this best-selling course is suitable for anyone looking to build a strong foundation in Data Vault 2.0 and Big Data integration.

#2. Data Vault Modeling Explained With Use Case


This Udemy course is aimed at guiding you in building a Data Vault Model using a practical business example. It serves as a beginner’s guide to Data Vault Modeling, covering key concepts such as the appropriate scenarios to use Data Vault models, the limitations of conventional OLAP Modeling, and a systematic approach to constructing a Data Vault Model. The course is accessible to individuals with minimal database knowledge.

#3. The Data Vault Guru: a pragmatic guide

The Data Vault Guru by Patrick Cuba is a comprehensive guide to the Data Vault methodology, which offers a unique opportunity to model the enterprise data warehouse using automation principles similar to those used in software delivery.

The book provides an overview of modern architecture and then offers a thorough guide on how to deliver a flexible data model that adapts to changes in the enterprise, the data vault.

Additionally, the book extends the data vault methodology by providing automated timeline correction, audit trails, metadata control, and integration with agile delivery tools.

#4. Building a Scalable Data Warehouse with Data Vault 2.0

This book provides readers with a comprehensive guide to creating a scalable data warehouse from start to finish using the Data Vault 2.0 methodology.

It covers all the essential aspects of building a scalable data warehouse, including the Data Vault modeling technique, which is designed to prevent typical data warehousing failures.

The book features numerous examples to help readers understand the concepts clearly. With its practical insights and real-world examples, it is an essential resource for anyone interested in data warehousing.

#5. The Elephant in the Fridge: Guided Steps to Data Vault Success

The Elephant in the Fridge by John Giles is a practical guidebook that aims to help readers achieve Data Vault success by starting with the business and ending with the business.

The book focuses on the importance of enterprise ontology and business concept modeling and provides step-by-step guidance on how to apply these concepts to create a solid data model.

Through practical advice and sample patterns, the author offers a clear and uncomplicated explanation of complicated topics, making the book an excellent guide for those who are new to the Data Vault.

Final Words

Data Vault represents the future of data warehousing, offering companies significant advantages in terms of agility, scalability, and efficiency. It is particularly well-suited for businesses that need to load large volumes of data quickly and those that are looking to develop their business intelligence applications in an agile manner.

Furthermore, companies that have an existing silo architecture can benefit greatly from implementing an upstream core data warehouse using Data Vault.

You may also be interested in learning about data lineage.