DWDM(BCS058) 2nd UNIT NOTES
A data warehouse is information gathered from multiple sources and stored under a unified schema residing at a single site. It is built using several techniques, including the following processes:
1. Data Cleaning: Data cleaning is the process of preparing data for analysis by removing or correcting records that are incorrect, incomplete, irrelevant, duplicated, or improperly formatted. Such data is neither necessary nor helpful for analysis, because it can disrupt the process or produce false results.
2. Data Integration: Data integration is the process of combining data from different sources into a unified view. The integration pipeline begins with ingestion and includes steps such as cleansing, ETL mapping, and transformation. Data integration ultimately enables analytics tools to produce effective, actionable business intelligence. In a typical data integration procedure, the client sends a request for data to the master server. The master server gathers the needed data from internal and external sources, extracts it, and combines it into a single data set, which is then returned to the client for use.
3. Data Transformation: The process of converting data from one format or structure to another is referred to as data transformation. Data transformation is critical for activities such as data integration and data management. It serves several purposes: you can change data types based on the needs of your project, and enrich or aggregate the data by removing invalid or duplicate records. Generally, the technique consists of two stages.
In the first step, you should:
Perform data discovery to identify the sources and their data types.
Map the data to determine how individual fields are mapped, modified, joined, filtered, and stored.
In the second step, you must:
Extract the data from the original source. Sources can range from a connected device to a structured resource such as a database, or streaming sources such as telemetry or log files from clients of your web application.
Load the data into the target, which may be a database or a data warehouse that manages structured and unstructured records.
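To make the mapping stage concrete, here is a minimal Python sketch; the field names, the mapping table, and the filter rule are all hypothetical, invented for illustration:

```python
# Minimal field-mapping sketch: rename source fields, filter out invalid
# rows, and cast types. All field names and rules here are hypothetical.

source_rows = [
    {"cust_nm": "Alice", "amt": "120.50", "ctry": "IN"},
    {"cust_nm": "", "amt": "75.00", "ctry": "US"},  # missing name: filtered out
]

FIELD_MAP = {"cust_nm": "customer_name", "amt": "amount", "ctry": "country"}

def transform(row):
    """Map source field names to target names and cast the amount to a number."""
    out = {FIELD_MAP[k]: v for k, v in row.items()}
    out["amount"] = float(out["amount"])  # edit step: string -> number
    return out

target_rows = [transform(r) for r in source_rows if r["cust_nm"]]  # filter step
print(target_rows)
```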
4. Data Loading: Data loading is the process of copying and loading data from a file, folder, or application into a database or similar system. It is usually done by copying digital data from the source and loading it into a data warehouse or processing tool. Data loading is used in extract-and-load data movement methods. Typically, the data is loaded in a format different from the one at its original source location.
5. Data Refreshing: In this process, the data stored in the warehouse is periodically refreshed so that it maintains its integrity. A data warehouse models multidimensional data structures known as "data cubes", in which each dimension represents an attribute or set of attributes in the schema and each cell stores a measure value. Data is gathered from various sources, such as hospitals, banks, and other organizations, and goes through a process called ETL (Extract, Transform, Load):
Extract: This step reads the data from the databases of the various sources.
Transform: This step converts the extracted data into data cubes so that it can be loaded into the warehouse.
Load: This step writes the transformed data into the data warehouse.
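The following is a minimal end-to-end ETL sketch in Python, using the standard sqlite3 module so it runs without external dependencies; the sales table, its columns, and the aggregation are invented for illustration and are not taken from these notes:

```python
import sqlite3

# Extract: read rows from a hypothetical operational source database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE sales (product TEXT, qty INTEGER, price REAL)")
src.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("pen", 10, 2.5), ("book", 3, 12.0), ("pen", 5, 2.5)])
rows = src.execute("SELECT product, qty, price FROM sales").fetchall()

# Transform: aggregate to the grain stored in the warehouse (revenue per product).
revenue = {}
for product, qty, price in rows:
    revenue[product] = revenue.get(product, 0.0) + qty * price

# Load: write the transformed data into the warehouse table.
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE fact_sales (product TEXT, revenue REAL)")
dw.executemany("INSERT INTO fact_sales VALUES (?, ?)", revenue.items())
print(dw.execute("SELECT * FROM fact_sales").fetchall())
```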
Building and maintaining a data warehouse involves several challenges, including:
Data quality: Ensuring data quality in a data warehouse is a major challenge. The data coming from various
sources may have inconsistencies, duplications, and inaccuracies, which can affect the overall quality of the data
in the warehouse.
Data integration: Integrating data from various sources into a data warehouse can be challenging, especially
when dealing with data that is structured differently or has different formats.
Data consistency: Maintaining data consistency across various data sources and over time is a challenge.
Changes in the source systems can affect the consistency of the data in the warehouse.
Data governance: Managing the access, use, and security of the data in the warehouse is another challenge.
Ensuring compliance with legal and regulatory requirements can also be challenging.
Performance: Ensuring that the data warehouse performs efficiently and delivers fast query response times can
be a challenge, particularly as the volume of data increases over time.
Data modeling: Designing an effective data model that reflects the needs of the organization and optimizes
query performance can be a challenge.
Data security: Ensuring the security of the data in the warehouse is a critical challenge, particularly as the data
warehouse contains sensitive information.
Resource allocation: Building and maintaining a data warehouse requires significant resources, including
skilled personnel, hardware, and software, which can be a challenge to allocate and manage effectively.
Advantages:
1. Improved decision making: Data warehousing and data mining can help to improve decision making by
providing insights and information that would otherwise be difficult or impossible to obtain.
2. Increased efficiency: Data warehousing and data mining can help to increase efficiency by automating the
process of extracting, cleaning, and analyzing data.
3. Improved data quality: Data warehousing and data mining can help to improve the quality of data by
identifying and correcting errors, inconsistencies, and missing data.
4. Improved data security: Data warehousing and data mining can help to improve data security by providing
a central repository for storing data and controlling access to that data.
5. Improved scalability: Data warehousing and data mining can help to improve scalability by providing a
way to manage and analyze large amounts of data.
PARALLEL PROCESSORS
DEFINITION
The processing of large amounts of data is typical for data warehouse environments. Depending on the available hardware resources, sooner or later a point is reached where a job can no longer be processed on a single processor, or can no longer be represented by a single process. The reasons for this are:
Time requirements demand the use of multiple processors.
System resources (memory, disk space, temporary table space, rollback segments, etc.) are limited.
Recurrent errors require the repetition of the process.
Parallelization by RDBMS parallel processing
Modern database systems are capable of parallel query processing. Queries, and sometimes also changes to large amounts of data, can be parallelized within the database server and use multiple processors concurrently.
Advantages of this solution are:
Little or no development effort is needed.
Only a small overhead is produced by this kind of parallelization.
Parallel processing
Parallel processing is a method in computing of running two or more processors (CPUs) to handle separate parts of
an overall task. Breaking up different parts of a task among multiple processors will help reduce the amount of
time to run a program. Any system that has more than one CPU can perform parallel processing, as well as multi-
core processors which are commonly found on computers today.
Parallel processing is commonly used to perform complex tasks and computations. Data scientists will commonly
make use of parallel processing for compute and data-intensive tasks.
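As a small illustration, the sketch below uses Python's standard multiprocessing module to split a CPU-bound computation across worker processes; the sum-of-squares workload is an arbitrary stand-in for real work:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """CPU-bound work: sum of squares over a sub-range (illustrative workload)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    # Each chunk is handled by a separate process, potentially on its own core.
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```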
Clustered Systems
Clustered systems are similar to parallel systems in that both have multiple CPUs. However, a major difference is that clustered systems are created from two or more individual computer systems merged together. Basically, they are independent computer systems with common storage, and the systems work together.
Clustered systems are a combination of hardware clusters and software clusters. The hardware cluster helps in sharing high-performance disks between the systems, while the software cluster makes all the systems work together.
Each node in the clustered system contains the cluster software. This software monitors the cluster system and makes sure it is working as required. If any one of the nodes in the clustered system fails, then the rest of the nodes take control of its storage and resources and try to restart it.
Types of Clustered Systems
There are primarily two types of clustered systems: the asymmetric clustering system and the symmetric clustering system. Details about these are given as follows:
Asymmetric Clustering System
In this system, one of the nodes in the clustered system is in hot standby mode while all the others run the required applications. Hot standby mode is a failsafe: the hot-standby node is part of the system and continuously monitors the active server, and if that server fails, the hot-standby node takes its place.
Symmetric Clustering System
In a symmetric clustering system, two or more nodes all run applications as well as monitor each other. This is more efficient than an asymmetric system, as it uses all the hardware and does not keep a node merely as a hot standby.
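To make the hot-standby idea concrete, here is a minimal monitoring sketch in Python; the heartbeat interval, miss threshold, and the liveness check are all invented assumptions, and a real cluster would probe the primary over the network:

```python
import time

# Hypothetical heartbeat monitor: the standby node promotes itself
# if the primary misses too many consecutive heartbeats.

HEARTBEAT_INTERVAL = 1.0  # seconds between checks
MAX_MISSES = 3            # missed beats tolerated before failover

def primary_alive():
    """Placeholder check; a real cluster would ping the primary node."""
    return False  # simulate a failed primary for demonstration

def standby_loop():
    misses = 0
    while True:
        if primary_alive():
            misses = 0
        else:
            misses += 1
            if misses >= MAX_MISSES:
                print("Primary unreachable: standby taking over its resources")
                return
        time.sleep(HEARTBEAT_INTERVAL)

standby_loop()
```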
Attributes of Clustered Systems
A clustered system can be used for many different purposes, such as scientific calculations or web support. Clustered systems embody some major attributes:
Load Balancing Clusters
In this type of cluster, the nodes in the system share the workload to provide better performance. For example, a web-based cluster may assign different web queries to different nodes so that system performance is optimized. Some clustered systems use a round-robin mechanism to assign requests to different nodes in the system.
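A minimal sketch of such a round-robin dispatcher in Python (the node names and requests are hypothetical):

```python
from itertools import cycle

# Round-robin assignment: requests are handed to nodes in rotating order.
nodes = cycle(["node-1", "node-2", "node-3"])  # hypothetical cluster nodes

requests = ["GET /home", "GET /search", "POST /cart", "GET /help"]
for req in requests:
    print(f"{req} -> {next(nodes)}")
```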
High Availability Clusters
These clusters improve the availability of the clustered system. They have extra nodes which are only used
if some of the system components fail. So, high availability clusters remove single points of failure i.e.
nodes whose failure leads to the failure of the system. These types of clusters are also known as failover
clusters or HA clusters.
Benefits of Clustered Systems
The benefits of clustered systems are as follows:
Performance
Clustered systems result in high performance as they contain two or more individual computer systems
merged together. These work as a parallel unit and result in much better performance for the system.
Fault Tolerance
Clustered systems are quite fault tolerant and the loss of one node does not result in the loss of the system.
They may even contain one or more nodes in hot standby mode which allows them to take the place of
failed nodes.
Scalability
Clustered systems are quite scalable as it is easy to add a new node to the system. There is no need to take
the entire cluster down to add a new node.
In the past, data warehouse schemas were often strictly enforced across an enterprise, but in modern
implementations where storage is increasingly inexpensive, schemas have become less constrained. Despite this
loosening, or sometimes total abandonment, of data warehouse schemas, knowledge of the foundational schema designs is important both for maintaining legacy resources and for creating modern data warehouse designs that learn from the past.
The basic components of all data warehouse schemas are fact and dimension tables. Different combinations of these two central elements compose almost the entirety of data warehouse schema designs.
Fact Table
A fact table aggregates metrics, measurements, or facts about business processes. Fact tables are connected to dimension tables to form a schema architecture representing how data relates within the data warehouse. Fact tables store the primary keys of dimension tables as foreign keys.
Dimension Table
Dimension tables are used to store data attributes, or dimensions. As mentioned above, the primary key of a dimension table is stored as a foreign key in the fact table. Dimension tables are not joined to one another directly; instead, they are associated through the central fact table.
Star Schema
The star data warehouse schema creates a denormalized database that enables quick query responses.
The primary key in each dimension table is joined to the fact table by a foreign key.
Each dimension in the star schema maps to one dimension table.
Dimension tables within a star schema are not connected directly to one another.
The star schema creates denormalized dimension tables.
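A minimal star-schema sketch in Python using the standard sqlite3 module; the sales fact table and its dimensions are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Dimension tables: denormalized attributes, one table per dimension.
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);

-- Fact table: measures plus foreign keys to each dimension's primary key.
CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    units_sold INTEGER,
    revenue    REAL
);
""")
print("star schema created")
```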
Snowflake Schema
The snowflake schema is a data warehouse schema that encompasses a logical arrangement of dimension tables. It builds on the star schema by adding sub-dimension tables that relate to the first-order dimension tables joined to the fact table.
Just like the relationship between the foreign key in the fact table and the primary key in the dimension table, with
the snowflake schema approach, a primary key in a sub-dimension table will relate to a foreign key within the
higher order dimension table.
Snowflake schema creates normalized dimension tables – a database structuring strategy that organizes tables to
reduce redundancy. The purpose of normalization is to eliminate any redundant data to reduce overhead.
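Continuing the same hypothetical sales example, a snowflake variant would normalize the product dimension by splitting its category attributes into a sub-dimension table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Sub-dimension table: category attributes moved out of dim_product.
CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, category_name TEXT);

-- The first-order dimension now holds a foreign key to the sub-dimension,
-- eliminating the repeated category text (normalization).
CREATE TABLE dim_product (
    product_id  INTEGER PRIMARY KEY,
    name        TEXT,
    category_id INTEGER REFERENCES dim_category(category_id)
);

CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    units_sold INTEGER,
    revenue    REAL
);
""")
print("snowflake schema created")
```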
Galaxy Schema
The galaxy data warehouse schema, also known as a fact constellation schema, acts as the next iteration of the data warehouse schema. Unlike the star and snowflake schemas, the galaxy schema uses multiple fact tables connected with shared, normalized dimension tables. A galaxy schema can be thought of as interlinked star schemas that are completely normalized, avoiding any redundancy or inconsistency of data.
The galaxy schema is multidimensional, making it a strong design choice for complex database systems.
The galaxy schema reduces redundancy to near zero as a result of normalization.
The galaxy schema is known for high data quality and accuracy and lends itself to effective reporting and analytics.
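A minimal galaxy-schema sketch, again with invented tables: two fact tables (sales and shipments) share the same dimension tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, day TEXT);

-- Two fact tables share the same dimensions: a fact constellation.
CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    revenue    REAL
);
CREATE TABLE fact_shipments (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    units      INTEGER
);
""")
print("galaxy schema created")
```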
Data normalization
Organizing data into separate tables to eliminate redundancy and maintain data integrity. This can significantly
reduce disk footprint and improve performance.
5. Data Indexing: After the data is loaded into the data warehouse, it is indexed to make it easy to search and retrieve. This step also involves creating summary tables and materialized views to improve query performance (see the sketch after this list).
6. Data Maintenance: The final step in the data warehouse process is to maintain the data and ensure that it is
accurate and up-to-date. This may involve periodically refreshing the data, archiving old data, and
monitoring the data for errors or inconsistencies.
The data warehouse process is iterative and is repeated as new data is added to the warehouse. It is a crucial step for the data mining process, as it allows for the storage, management, and organization of the large amounts of data that need to be mined. Data mining can then be applied to the data in the warehouse to uncover hidden patterns, relationships, and insights that can be used to make informed business decisions.
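Here is a minimal indexing and summary-table sketch using sqlite3; the table, index, and column names are invented, and a plain summary table stands in for a materialized view, which SQLite does not support natively:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact_sales (product TEXT, region TEXT, revenue REAL)")
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [("pen", "north", 25.0), ("pen", "south", 10.0), ("book", "north", 36.0)])

# Index the column used in common query predicates.
con.execute("CREATE INDEX idx_sales_product ON fact_sales(product)")

# Pre-aggregated summary table, refreshed from the fact table.
con.execute("""CREATE TABLE summary_sales AS
               SELECT product, SUM(revenue) AS total_revenue
               FROM fact_sales GROUP BY product""")
print(con.execute("SELECT * FROM summary_sales").fetchall())
```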
Disadvantages:
1. High cost: Data warehousing and data mining can be expensive to implement and maintain, especially for
organizations with limited resources.
2. Complexity: Data warehousing and data mining can be complex and difficult to implement, especially for
organizations that lack the necessary expertise or resources.
3. Data privacy concerns: Data warehousing and data mining can raise concerns about data privacy, as large
amounts of data are collected, stored, and analyzed.
4. Limited flexibility: Data warehousing and data mining can be limited in terms of flexibility, as they are
designed to work with structured data and may not be suitable for unstructured data.
5. Limited scalability: Data warehousing and data mining can be limited in terms of scalability, as they may
not be able to handle very large amounts of data.
Overall, data warehousing and data mining can be powerful tools for organizations that need to extract insights from large amounts of data. However, they also come with their own set of challenges and limitations, and organizations need to carefully weigh the costs and benefits before implementing them.
Characteristics and Functions of Data warehouse
A data warehouse is a centralized repository for storing and managing large amounts of data from various
sources for analysis and reporting. It is optimized for fast querying and analysis, enabling organizations to make
informed decisions by providing a single source of truth for data. Data warehousing typically involves
transforming and integrating data from multiple sources into a unified, organized, and consistent format.
A data warehouse is easiest to manage when its users share a common way of describing the trends captured for a specific subject. The major characteristics of a data warehouse are:
1. Subject-oriented – A data warehouse is always subject-oriented, as it delivers information about a theme rather than the organization's current operations. It is organized around specific themes such as sales, distribution, or marketing.
A data warehouse never puts emphasis only on current operations. Instead, it focuses on presenting and analyzing data to support decision making. It delivers an easy and precise view of a particular theme by eliminating data that is not required for the decisions.
2. Integrated – Integration complements subject orientation: the data must be kept in a reliable, uniform format. Integration means establishing a shared representation for all the similar data drawn from different databases.
A data warehouse is built by integrating data from various sources, such as a mainframe and a relational database. It must have reliable naming conventions, formats, and codes; consistency in naming conventions, attribute measures, and encoding structure should be ensured. This integration benefits the effective analysis of data across the various subject-area warehouses.
3. Time-Variant – Data is maintained over different intervals of time, such as weekly, monthly, or annually. The time horizon of a data warehouse is much wider than that of operational systems (OLTP): data in the warehouse relates to specific intervals of time and delivers information from a historical perspective. It comprises an element of time, explicitly or implicitly.
Another feature of time variance is that once data is stored in the data warehouse, it cannot be modified, altered, or updated. Data is stored with a time dimension, allowing for analysis of data over time.
4. Non-Volatile – As the name suggests, the data residing in the data warehouse is permanent: data is not erased or deleted when new data is inserted. To preserve the historical record, data is not updated once it is stored in the warehouse.
The data is read-only and is refreshed at particular intervals. This is beneficial for analyzing historical data and understanding how the business has functioned. A data warehouse does not need transaction processing, recovery, or concurrency control mechanisms. Operations such as delete, update, and insert that are performed in an operational application are absent in the data warehouse environment. Only two types of data operations are performed in the data warehouse:
Data Loading
Data Access
In summary, the major characteristics of a data warehouse are:
1. Subject Oriented: Focuses on a specific area or subject such as sales, customers, or inventory.
2. Integrated: Integrates data from multiple sources into a single, consistent format.
3. Read-Optimized: Designed for fast querying and analysis, with indexing and aggregations to support
reporting.
4. Summary Data: Data is summarized and aggregated for faster querying and analysis.
5. Historical Data: Stores large amounts of historical data, making it possible to analyze trends and patterns
over time.
6. Schema-on-Write: Data is transformed and structured according to a predefined schema before it is loaded
into the data warehouse.
7. Query-Driven: Supports ad-hoc querying and reporting by business users, without the need for technical
support.
Functions of a data warehouse: A data warehouse works as an organized collection of data that supports the functions needed to store, manage, and retrieve it for analysis. The major functions involved are listed below:
1. Data Consolidation: The process of combining multiple data sources into a single data repository in a data
warehouse. This ensures a consistent and accurate view of the data.
2. Data Cleaning: The process of identifying and removing errors, inconsistencies, and irrelevant data from the
data sources before they are integrated into the data warehouse. This helps ensure the data is accurate and
trustworthy.
3. Data Integration: The process of combining data from multiple sources into a single, unified data repository
in a data warehouse. This involves transforming the data into a consistent format and resolving any conflicts
or discrepancies between the data sources. Data integration is an essential step in the data warehousing
process to ensure that the data is accurate and usable for analysis. Data from multiple sources can be
integrated into a single data repository for analysis.
4. Data Storage: A data warehouse can store large amounts of historical data and make it easily accessible for
analysis.
5. Data Transformation: Data can be transformed and cleaned to remove inconsistencies, duplicate data, or
irrelevant information.
6. Data Analysis: Data can be analyzed and visualized in various ways to gain insights and make informed
decisions.
7. Data Reporting: A data warehouse can provide various reports and dashboards for different departments and
stakeholders.
8. Data Mining: Data can be mined for patterns and trends to support decision-making and strategic planning.
9. Performance Optimization: Data warehouse systems are optimized for fast querying and analysis, providing
quick access to data.
Data warehouse planning involves:
Defining requirements: Understand the business requirements for the data warehouse, including the goals of the
company, department, and users
Identifying data sources: List all relevant data sources, and assess their volume, variety, velocity, and quality
Creating a schema: Design a preliminary schema to meet user requirements, and map source system fields to the
schema
Estimating costs and resources: Estimate the costs and resources required for the project
Identifying risks and dependencies: Identify any risks and dependencies for the project
Metadata
This "data about data" is used to build, maintain, and manage the data warehouse. It also provides interactive access to help users find data and understand its content.
Data integration
Data warehouses are often assembled from a variety of data sources with different formats and purposes.
ETL is a key process to bring all the data together in a standard environment.
Optimization
Collecting warehouse operations data during optimization can help identify inefficiencies and guide future
decisions.
Capacity planning
Data warehouses grow quickly and need to be managed to keep budgets under control.
Data Warehouse Implementation
There are various steps in implementing a data warehouse, as follows:
1. Requirements analysis and capacity planning: The first step involves defining enterprise needs, defining the architecture, carrying out capacity planning, and selecting the hardware and software tools. It includes consulting senior management as well as the other stakeholders.
2. Hardware integration: Once the hardware and software have been selected, they need to be put together by integrating the servers, the storage systems, and the user software tools.
3. Modeling: Modeling is a significant stage that involves designing the warehouse schema and views. This may require using a modeling tool if the data warehouse is sophisticated.
4. Physical modeling: For the data warehouse to perform efficiently, physical modeling is needed. This involves designing the physical organization of the warehouse, data placement, data partitioning, access methods, and indexing.
5. Sources: The data for the warehouse is likely to come from several data sources. This step involves identifying and connecting the sources using gateways, ODBC drivers, or other wrappers.
6. ETL: The data from the source systems will need to go through an ETL phase. Designing and implementing the ETL phase may involve selecting a suitable ETL tool vendor and purchasing and implementing the tools, and it may include customizing the tools to suit the needs of the enterprise.
7. Populate the data warehouse: Once the ETL tools have been agreed upon, they need to be tested, perhaps using a staging area. Once everything works adequately, the ETL tools may be used to populate the warehouse according to the schema and view definitions.
8. User applications: For the data warehouse to be helpful, there must be end-user applications. This step involves designing and implementing the applications required by the end users.
9. Roll out the warehouse and applications: Once the data warehouse has been populated and the end-user applications tested, the warehouse system and the applications may be rolled out for the user community to use.
Implementation Guidelines
1. Build incrementally: Data warehouses must be built incrementally. Generally, it is recommended that a data mart first be created for one particular project; once it is implemented, several other sections of the enterprise may also want to implement similar systems. An enterprise data warehouse can then be implemented in an iterative manner, allowing all data marts to extract information from it.
2. Need a champion: A data warehouse project must have a champion who is willing to carry out considerable research into the expected costs and benefits of the project. Data warehousing projects require input from many units in an enterprise and therefore need to be driven by someone who can interact with people across the enterprise and actively persuade colleagues.
3. Senior management support: A data warehouse project must be fully supported by senior management. Given the resource-intensive nature of such projects and the time they can take to implement, a warehouse project calls for a sustained commitment from senior management.
4. Ensure quality: Only data that has been cleaned, and is of a quality accepted by the organization, should be loaded into the data warehouse.
5. Corporate strategy: A data warehouse project must fit corporate strategy and business goals. The purpose of the project must be defined before the project begins.
6. Business plan: The financial costs (hardware, software, and peopleware), expected benefits, and a project plan for a data warehouse project must be clearly outlined and understood by all stakeholders. Without such understanding, rumors about expenditure and benefits can become the only source of information, undermining the project.
7. Training: Data warehouse projects must not overlook training requirements. For a data warehouse project to be successful, the users must be trained to use the warehouse and to understand its capabilities.
8. Adaptability: The project should build in flexibility so that changes can be made to the data warehouse if and when required. Like any system, a data warehouse will need to change as the needs of the enterprise change.
9. Joint management: The project must be managed by both IT and business professionals in the enterprise. To ensure proper communication with stakeholders, and to keep the project targeted at assisting the enterprise's business, business professionals must be involved along with technical professionals.
HARDWARE AND OPERATING SYSTEMS
Hardware and operating systems make up the computing environment for your data warehouse. All the data
extraction, transformation, integration, and staging jobs run on the selected hardware under the chosen operating
system. When you transport the consolidated and integrated data from the staging area to your data warehouse
repository, you make use of the server hardware and the operating system software. When the queries are initiated
from the client workstations, the server hardware, in conjunction with the database software, executes the queries
and produces the results.
Here are some general guidelines for hardware selection, not entirely specific to hardware for the data warehouse.
Scalability. When your data warehouse grows in terms of the number of users, the number of queries, and the complexity of the queries, ensure that your selected hardware can be scaled up.
Support. Vendor support is crucial for hardware maintenance. Make sure that the support from the hardware
vendor is at the highest possible level.
Vendor Reference. It is important to check vendor references with other sites using hardware from this vendor.
You do not want to be caught with your data warehouse being down because of hardware malfunctions when the
CEO wants some critical analysis to be completed.
Vendor Stability. Check on the stability and staying power of the vendor.
Client-Server Model
The client-server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and the service requesters, called clients. In client-server architecture, when the client computer sends a request for data to the server through the internet, the server accepts the request, processes it, and delivers the requested data packets back to the client. Clients do not share any of their resources. Examples of the client-server model are email and the World Wide Web.
How the Client-Server Model Works
To understand the client-server model, consider how the Internet works via web browsers; this gives a solid foundation of the web and helps in working with web technologies with ease.
Client: When we use the word client, we mean a person or an organization using a particular service. Similarly, in the digital world a client is a computer (host) capable of receiving information or using a particular service from the service providers (servers).
Server: Similarly, the word server means a person or medium that serves something. In the digital world, a server is a remote computer that provides information (data) or access to particular services.
So, it is basically the client requesting something and the server serving it, as long as the requested data is present in the database.
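A minimal request-response sketch of this model in Python with standard-library sockets; the address, port, and single-request protocol are invented for this demo:

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050  # hypothetical address and port for this demo

def server():
    """Serve one request: receive a key and reply with its value, if present."""
    data_store = {"greeting": "hello from the server"}
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            key = conn.recv(1024).decode()
            conn.sendall(data_store.get(key, "not found").encode())

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # crude wait so the server is listening before we connect

# Client: send a request and print the server's response.
with socket.socket() as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"greeting")
    print(cli.recv(1024).decode())
```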
What is parallel processing in a data warehouse?
Parallel processing is a method in computing of running two or more processors, or CPUs, to handle separate parts
of an overall task. Breaking up different parts of a task among multiple processors helps reduce the amount of time
it takes to run a program.
What is a cluster system in data warehousing?
Clustered systems are similar to parallel systems as they both have multiple CPUs. However, a major difference is that clustered systems are created from two or more individual computer systems merged together; they have independent computer systems with common storage, and the systems work together.
Some common parallel computing architectures include shared memory systems, distributed memory systems, and
hybrid systems. Shared memory systems allow multiple processors to access a common memory space, while
distributed memory systems use separate memory spaces for each processor.
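A small Python illustration of the distinction: threads share one memory space, while worker processes each get their own copy (the counter example is invented):

```python
import threading
import multiprocessing

counter = {"value": 0}

def bump(shared):
    shared["value"] += 1

if __name__ == "__main__":
    # Shared memory: the thread mutates the same dict, so the change is visible.
    t = threading.Thread(target=bump, args=(counter,))
    t.start(); t.join()
    print("after thread:", counter["value"])   # -> 1

    # Separate memory: the child process works on its own copy of the dict,
    # so the parent's value does not change.
    p = multiprocessing.Process(target=bump, args=(counter,))
    p.start(); p.join()
    print("after process:", counter["value"])  # still 1
```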
Parallel processing uses two or more processors or CPUs simultaneously to handle various components of a single
activity. Systems can slash a program's execution time by dividing a task's many parts among several processors.
Benefits of parallel computing. The advantages of parallel computing are that computers can execute code more
efficiently, which can save time and money by sorting through “big data” faster than ever. Parallel programming
can also solve more complex problems, bringing more resources to the table.
For example, when a person looks at a firetruck, they will see the red color, fire hose, and logo all at once to
quickly recognize it for what it is. Parallel processing allows people to make such observations quickly, rather than
analyzing each part of the object or situation separately.
In a computer system, a cluster is a group of servers and other resources that act like a single system and enable high availability, load balancing, and parallel processing. These systems can range from a two-node system of two personal computers (PCs) to a supercomputer with a cluster architecture.
The types of clustered operating systems can be divided into three types: Asymmetric Clustering Systems,
Symmetric Clustering Systems, and Parallel Cluster Systems.
A clustered operating system is somewhat similar to a parallel system because both use multiple CPUs, i.e. a group of computers. The difference between the two is that clustered systems are composed of two or more individual systems linked with each other, and in a clustered operating system the computers or CPUs share a single storage medium.
Clustered systems comprise two or more individual systems that use multiple CPUs, like parallel systems. The primary difference between the clustered operating system and the parallel system is that the systems are connected to each other in clustered systems, while in parallel systems they are not.
All the systems of the clustered operating system have independent processing power and capacity, i.e. they have their own CPUs and a shared storage medium. These systems work together through the shared storage medium to complete all the tasks.
A cluster operating system is a combination of hardware and software clusters. The hardware cluster helps in sharing high-performance disks between all the computer systems, or nodes, whereas the software cluster ensures and manages the working of all the systems together. Each node in the cluster system contains the cluster software. This software keeps an eye on the entire cluster system and ensures it works properly. If any of the nodes in the cluster system fails, then the rest of the nodes take control of its resources and try to restart.
There are mainly three types of clustered operating systems, described below.
In an asymmetric clustering system, one of the nodes is a hot-standby node, i.e. it continuously monitors the entire system, while all the other nodes run the required applications or tasks. The hot-standby node is a completely fail-safe component of the cluster system: it is used as a replacement if any of the nodes in the system fails.
In a symmetric clustering system, all the nodes run applications and at the same time monitor the other nodes. This type of clustering system is more effective than the asymmetric system, as it does not have a dedicated hot-standby node; instead, all the nodes monitor one another.
In a parallel cluster system, multiple users are allowed to access similar data on the common shared storage among the nodes. This is made possible using special software and other applications.
Clustered operating systems can also be classified based on functionality:
Load Balancing Clusters: The nodes share the computing load, which results in higher performance. For example, a web-based cluster may assign web queries to different nodes.
High Availability Clusters: Also called HA clusters, these have extra nodes in the system and ensure that the resources stay available in the cluster. If any of the system components fail, the high availability cluster isolates the failed nodes that would otherwise stop the system, so that the application keeps running smoothly.
Fail-over is the procedure of transferring applications and data resources from a malfunctioning system to another system in the cluster. This cluster system is used widely for crucial applications and tasks.
There are various advantages of cluster operating systems, as mentioned below:
Superior Availability: The failure of a single node does not mean the loss of the service or task in the system. Every node in the cluster runs on an individual CPU, so if any node fails it can be pulled down for maintenance while the remaining nodes take over its load.
Efficiency: Cluster operating systems are more cost-effective compared to a single, highly reliable large system.
Error Tolerance: If an error or fault occurs in any node of the system, the system does not halt, because the failed node can be swapped with the hot-standby node in such situations.
Performance: Cluster operating systems deliver high performance because two or more merged nodes are available. These nodes work as a parallel unit and hence produce better results.
Scalability: Cluster systems are scalable, as it is easy to add more nodes to the system. This can improve the system's fault tolerance and overall speed.
Speed of Processing: The cluster operating system offers great availability and processing speed.
There are also disadvantages:
High Cost: The major disadvantage of the cluster operating system is that it costs more to set up and maintain.
Maintenance: The cluster resources are challenging to maintain and manage and thus require high maintenance effort.
Conclusion
In a clustered operating system, the computers or CPUs share a single storage medium and work together to complete tasks.
The primary difference between the clustered operating system and the parallel system is that the systems are connected to one another in clustered systems, while in parallel systems they are not.
The types of clustered operating systems can be divided into three: Asymmetric, Symmetric, and Parallel Cluster Systems.
Cluster operating systems deliver high performance, because two or more merged nodes work as a parallel unit.
DISTRIBUTED DATABASES
Designing a distributed database system can be complex and requires careful consideration of factors such as data distribution, replication, and consistency.
Distributed databases duplicate data across multiple nodes or locations, ensuring data is available even during system failures. If one node fails, the system redirects requests to other operational nodes. This automatic and swift failover process minimizes downtime and data loss.
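A minimal sketch of this failover behavior in Python; the replica names, the simulated failure, and the read function are all hypothetical:

```python
# Hypothetical failover: try each replica in turn until one answers.

class NodeDown(Exception):
    pass

def read_from(node, key):
    """Stand-in for a network read; node-1 is simulated as failed."""
    replicas = {"node-2": {"balance": 100}, "node-3": {"balance": 100}}
    if node not in replicas:
        raise NodeDown(node)
    return replicas[node][key]

def failover_read(nodes, key):
    for node in nodes:
        try:
            value = read_from(node, key)
            print(f"served by {node}")
            return value
        except NodeDown:
            print(f"{node} unreachable, trying next replica")
    raise RuntimeError("all replicas down")

print(failover_read(["node-1", "node-2", "node-3"], "balance"))
```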
A distributed database, or DDBMS, is a database management system that stores data across
multiple interconnected sites or nodes spread across a network. This decentralized
architecture provides several benefits, including enhanced scalability, fault tolerance, and
improved performance.
A well-known example is Apache Cassandra: this open-source, distributed NoSQL database management system offers high availability without a single point of failure, ensuring robust scalability and consistent performance.
A distributed database system allows applications to access data from local and remote
databases. In a homogenous distributed database system, each database is an Oracle
Database. In a heterogeneous distributed database system, at least one of the databases is not
an Oracle Database.
Prepared By:
Dr. Anand Sharma
(H.O.D., CSE/IT DEPTT., ACET, Aligarh)