Unit I & II


Miracle Academy

Software Engineering:
Software engineering is an engineering branch associated with the development of software
products using well-defined scientific principles, methods and procedures. The outcome of
software engineering is an efficient and reliable software product.
IEEE defines software engineering as:
(1) The application of a systematic, disciplined, quantifiable approach to the development,
operation and maintenance of software; that is, the application of engineering to software.
(2) The study of approaches as in the above statement.

Need of Software Engineering


The need for software engineering arises because of the high rate of change in user
requirements and in the environment in which the software operates.

 Large software - It is easier to build a wall than a house or building; likewise, as
the size of software becomes large, engineering has to step in to give it a scientific
process.
 Scalability - If the software process were not based on scientific and engineering
concepts, it would be easier to re-create new software than to scale an existing one.
 Cost - The hardware industry has shown its skill, and huge manufacturing has lowered
the price of computer and electronic hardware. But the cost of software remains
high if a proper process is not adopted.
 Dynamic Nature - The always growing and adapting nature of software hugely
depends upon the environment in which the user works. If the nature of software is
always changing, new enhancements (improvements) need to be made to the existing
one. This is where software engineering plays a good role.
 Quality Management - A better process of software development provides a better,
quality software product.

Characteristics of good software


A software product can be judged by what it offers and how well it can be used. The
software must be satisfactory on the following grounds:

 Operational
 Transitional
 Maintenance


Well-engineered and crafted software is expected to have the following characteristics:

Operational
This tells us how well the software works in operations. It can be measured on:

 Budget, Usability, Efficiency, Correctness, Functionality, Dependability, Security,
Safety

Transitional
This aspect is important when the software is moved from one platform to another:

 Portability , Interoperability, Reusability, Adaptability

Maintenance
This aspect describes how well the software has the capability to maintain itself in an
ever-changing environment:

 Modularity, Maintainability, Flexibility, Scalability


In short, software engineering is a branch of computer science which uses well-defined
engineering concepts to produce efficient, durable, scalable, in-budget and on-time
software products.
SDLC
Software Development Life Cycle, SDLC for short, is a well-defined, structured sequence of
stages in software engineering to develop the intended (planned) software product.

SDLC Activities
SDLC provides a series of steps to be followed to design and develop a software product
efficiently. SDLC framework includes the following steps:


Communication
This is the first step, where the user initiates the request for a desired software product.
The user contacts the service provider and tries to negotiate (convey or consult) the terms,
and submits the request to the service-providing organization in writing.

Requirement Gathering
From this step onwards, the software development team works to carry the project forward.
The team holds discussions with various stakeholders (interested parties) from the problem
domain and tries to bring out as much information as possible on their requirements. The
requirements are contemplated and segregated into user requirements, system requirements
and functional requirements. The requirements are collected using a number of practices,
as given below -

 studying the existing or obsolete (outdated) system and software,
 conducting interviews of users and developers,
 referring to the database, or
 collecting answers from questionnaires.

Feasibility Study
After requirement gathering, the team comes up with a rough plan of the software process.
At this step the team analyzes whether software can be made to fulfill all the requirements
of the user, and whether there is any possibility of the software becoming no longer useful.
It is found out whether the project is financially (cost & benefit analysis covering
development, operating, hardware & software, personnel, facility and supply costs, tangible
or intangible, direct or indirect, etc.), practically, legally and technologically feasible
for the organization to take up. There are many algorithms available which help the
developers conclude the feasibility of a software project.

System Analysis

At this step the developers decide a roadmap of their plan and try to bring up the best
software model suitable for the project. System analysis includes Understanding of software
product limitations, learning system related problems or changes to be done in existing
systems beforehand, identifying and addressing the impact of project on organization and
personnel etc. The project team analyzes the scope of the project and plans the schedule and
resources accordingly.

Software Design
The next step is to bring together all knowledge of requirements and analysis and to design
the software product. The inputs from users and the information gathered in the
requirement-gathering phase are the inputs of this step. The output of this step comes in
the form of two designs: logical design and physical design. Engineers produce meta-data
and data dictionaries, logical diagrams, data-flow diagrams (DFD) and, in some cases,
pseudo code.

Coding
This step is also known as the programming phase. The implementation of the software
design starts in terms of writing program code in a suitable programming language and
developing error-free executable programs efficiently.

Testing
An estimate says that 50% of the whole software development process should be spent on
testing. Errors can ruin the software, anywhere from the critical level up to its complete
removal. Software testing is done while coding by the developers, and thorough testing is
conducted by testing experts at various levels of code such as module testing, program
testing, product testing, in-house testing and testing the product at the user’s end. Early
discovery of errors and their remedy (solution) is the key to reliable software.

Integration
Software may need to be integrated with the libraries, databases and other program(s). This
stage of SDLC is involved in the integration of software with outer world entities.

Implementation
This means installing the software on user machines. At times, software needs post-
installation configurations at user end. Software is tested for portability and adaptability and
integration related issues are solved during implementation.

Operation and Maintenance


This phase confirms the software operation in terms of more efficiency and fewer errors. If
required, the users are trained on, or aided with, documentation on how to operate the
software and how to keep it operational. The software is maintained in a timely manner by
updating the code according to the changes taking place in the user-end environment or
technology. This phase may face challenges from hidden bugs and real-world unidentified
problems.

Disposition (Disposal)
As time elapses (passes), the software may decline on the performance front. It may go
completely obsolete or may need a thorough upgrade. Hence a pressing need to eliminate
a major portion of the system arises. This phase includes archiving data and required
software components, closing down the system, planning disposition activity and
terminating the system at the appropriate end-of-system time.

Software Development Paradigm



The software development paradigm helps a developer to select a strategy to develop the
software. A software development paradigm has its own set of tools, methods and
procedures, which are expressed clearly and which define the software development life
cycle. A few software development paradigms or process models are defined as follows:

Waterfall Model/ Linear/ Traditional Model


The waterfall model is the simplest model of software development paradigm. It states that
all the phases of SDLC function one after another in a linear manner. That is, only when
the first phase is finished does the second phase start, and so on.

This model assumes that everything is carried out and takes place perfectly as planned in
the previous stage, and there is no need to think about past issues that may arise in the
next phase. This model does not work smoothly if there are some issues left over from the
previous step. The sequential nature of the model does not allow us to go back and undo or
redo our actions.
This model is best suited when the developers have already designed and developed similar
software in the past and are aware of all its domains.

Iterative Model
This model leads the software development process in iterations (repetitions). It projects
the process of development in a cyclic manner, repeating every step after every cycle of
the SDLC process.

The software is first developed on a very small scale, following all the steps that are
taken into consideration. Then, on every next iteration, more features and modules are
designed, coded, tested and added to the software. Every cycle produces software which is
complete in itself and has more features and capabilities than the previous one.
After each iteration, the management team can work on risk management and prepare for
the next iteration. Because a cycle includes a small portion of the whole software process,
it is easier to manage the development process, but it consumes more resources.

Spiral Model
The spiral model is a combination of the iterative model and one of the other SDLC models.
It can be seen as if we choose one SDLC model and combine it with a cyclic process (the
iterative model).

This model considers risk, which often goes unnoticed by most other models. The model
starts with determining the objectives and constraints of the software at the start of an
iteration. The next phase is prototyping the software, which includes risk analysis. Then
one standard SDLC model is used to build the software. In the fourth phase, the plan for
the next iteration is prepared.

Prototyping Model

The Prototyping Model has the following SDLC phases:

Step 1: Requirements gathering and analysis

A prototyping model starts with requirement analysis. In this phase, the requirements of the
system are defined in detail. During the process, the users of the system are interviewed to
learn their expectations from the system.

Step 2: Quick design



The second phase is a preliminary design or a quick design. In this stage, a simple design of
the system is created. However, it is not a complete design. It gives a brief idea of the system
to the user. The quick design helps in developing the prototype.

Step 3: Build a Prototype

In this phase, an actual prototype is designed based on the information gathered from quick
design. It is a small working model of the required system.

Step 4: Initial user evaluation

In this stage, the proposed system is presented to the client for an initial evaluation. It
helps to find out the strengths and weaknesses of the working model. Comments and
suggestions are collected from the customer and provided to the developer.

Step 5: Refining prototype

If the user is not happy with the current prototype, we need to refine the prototype according
to the user's feedback and suggestions.

This phase is not over until all the requirements specified by the user are met. Once the
user is satisfied with the developed prototype, a final system is developed based on the
approved final prototype.

Software Quality
Software Quality Management is a process that ensures the required level of software
quality is achieved when it reaches the users, so that they are satisfied by its performance.
The process involves quality assurance, quality planning, and quality control.
In the software engineering context, software quality reflects both functional quality as
well as structural quality.

McCall’s Factor Model


This model classifies all software requirements into 11 software quality factors. The 11
factors are grouped into three categories – product operation, product revision, and product
transition factors.
 Product operation factors − Correctness, Reliability, Efficiency, Integrity,
Usability.
 Product revision factors − Maintainability, Flexibility, Testability.
 Product transition factors − Portability, Reusability, Interoperability.

What is Software Requirement Specification - [SRS]?


A software requirements specification (SRS) is a document that captures a complete
description of how the system is expected to perform. It is usually signed off at the end
of the requirements engineering phase.

Qualities of SRS:
 Correct
 Unambiguous (Clear-cut)
 Complete


 Consistent
 Ranked for importance and/or stability
 Verifiable
 Modifiable
 Traceable

Types of Requirements:
The below diagram depicts the various types of requirements that are captured during SRS.


COCOMO Model
Boehm proposed COCOMO (COnstructive COst MOdel) in 1981. COCOMO is one of the most
widely used software estimation models in the world. COCOMO predicts the effort and
schedule of a software product based on the size of the software.

The necessary steps in this model are:


1. Get an initial estimate of the development effort from an evaluation of thousands of
delivered lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate by all the
multiplying factors, i.e., multiply the values in step 1 and step 2.
The initial estimate (also called the nominal estimate) is determined by an equation of the
form used in static single-variable models, using KDLOC as the measure of size. To
determine the initial effort Ei in person-months, the equation used is of the type shown
below:
Ei = a*(KDLOC)^b
The values of the constants a and b depend on the project type.

In COCOMO, projects are categorized into three types:


1. Organic
2. Semidetached
3. Embedded
1. Organic: A development project can be considered of the organic type if the project
deals with developing a well-understood application program, the size of the development
team is reasonably small, and the team members are experienced in developing similar kinds
of projects. Examples of this type of project are simple business systems, simple inventory
management systems, and data processing systems.
2. Semidetached: A development project can be considered of the semidetached type if the
development team consists of a mixture of experienced and inexperienced staff. Team members
may have limited experience with related systems but may be unfamiliar with some aspects of
the system being developed. Examples of semidetached systems include developing a new
operating system (OS), a Database Management System (DBMS), and a complex
inventory management system.
3. Embedded: A development project is considered to be of the embedded type if the software
being developed is strongly coupled to complex hardware, or if strict regulations on the
operational method exist. For example: ATM, Air Traffic Control.
For the three product categories, Boehm provides a different set of expressions to predict
effort (in units of person-months) and development time from the size estimate in KLOC
(Kilo Lines of Code). Effort estimation takes into account the productivity loss due to
holidays, weekly offs, coffee breaks, etc.
According to Boehm, software cost estimation should be done through three stages:
1. Basic Model
2. Intermediate Model
3. Detailed Model
1. Basic COCOMO Model: The basic COCOMO model gives an approximate estimate of the project
parameters. The following expressions give the basic COCOMO estimation model:
Effort = a*(KLOC)^b PM
Tdev = c*(Effort)^d Months
Avg Staff Size = Effort / Tdev Persons
Productivity = KLOC / Effort KLOC/PM
Where


KLOC is the estimated size of the software product, expressed in Kilo Lines of Code,
a, b, c, d are constants for each category of software product,
Tdev is the estimated time to develop the software, expressed in months,
Effort is the total effort required to develop the software product, expressed in person-
months (PMs).
Estimation of development effort
For the three classes of software products, the formulas for estimating the effort based on the
code size are shown below:
Mode            a     b      c     d
Organic         2.4   1.05   2.5   0.38
Semi-detached   3.0   1.12   2.5   0.35
Embedded        3.6   1.20   2.5   0.32

Organic: Effort = 2.4(KLOC)^1.05 PM

Semi-detached: Effort = 3.0(KLOC)^1.12 PM
Embedded: Effort = 3.6(KLOC)^1.20 PM
Estimation of development time
For the three classes of software products, the formulas for estimating the development time
based on the effort are given below:
Organic: Tdev = 2.5(Effort)^0.38 Months

Semi-detached: Tdev = 2.5(Effort)^0.35 Months

Embedded: Tdev = 2.5(Effort)^0.32 Months

Example 1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and
development time for each of the three modes, i.e., organic, semi-detached and embedded.
Solution: The basic COCOMO equations take the form:
Effort = a*(KLOC)^b PM
Tdev = c*(Effort)^d Months
Estimated size of project = 400 KLOC
(i) Organic Mode
E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 Months
(ii) Semidetached Mode
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 Months
(iii) Embedded Mode
E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.81)^0.32 = 38 Months
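The basic COCOMO equations and the 400-KLOC example can be checked with a short Python sketch (the coefficients are the a, b, c, d values from the coefficient table; the function name is only for illustration):

```python
# Basic COCOMO coefficients (a, b, c, d) for the three project modes.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b      # Effort = a * (KLOC)^b
    tdev = c * effort ** d      # Tdev   = c * (Effort)^d
    return effort, tdev

# Reproduce the worked example for a 400 KLOC project.
for mode in COEFFICIENTS:
    effort, tdev = basic_cocomo(400, mode)
    staff = effort / tdev        # Avg Staff Size = Effort / Tdev
    productivity = 400 / effort  # Productivity = KLOC / Effort
    print(f"{mode}: E = {effort:.2f} PM, D = {tdev:.2f} months, "
          f"staff = {staff:.1f}, productivity = {productivity:.3f} KLOC/PM")
```

Running the sketch reproduces the organic-mode figures of roughly 1295.31 PM and 38.07 months.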


Software Design Levels


Software design yields three levels of results:
 Architectural Design - The architectural design is the highest abstract version of the
system. It identifies the software as a system with many components interacting with
each other. At this level, the designers get the idea of the proposed solution domain.
 High-level Design - The high-level design breaks the ‘single entity-multiple
component’ concept of architectural design into a less-abstract view of sub-systems
and modules and shows their interaction with each other. High-level design focuses
on how the system, along with all of its components, can be implemented in the form
of modules. It recognizes the modular structure of each sub-system and their relation
and interaction with each other.
 Detailed Design - Detailed design deals with the implementation part of what is seen
as a system and its sub-systems in the previous two designs. It is more detailed
towards modules and their implementations. It defines the logical structure of each
module and their interfaces to communicate with other modules.
Modularization
Modularization
Modularization is a technique to divide a software system into multiple discrete and
independent modules, which are expected to be capable of carrying out task(s)
independently. These modules may work as basic constructs for the entire software.
Designers design modules such that they can be executed and/or compiled separately and
independently.
Modular design naturally follows the ‘divide and conquer’ problem-solving strategy, and
it brings many other benefits with it.
Advantages of modularization:
 Smaller components are easier to maintain
 The program can be divided based on functional parts
 The desired level of abstraction can be brought into the program
 Components with high cohesion can be re-used
 Concurrent execution can be made possible
 Desirable from a security aspect
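As a small illustration of the idea (a hypothetical invoicing program, not from the text), each concern below lives in its own module-like unit that can be tested, reused and executed independently:

```python
# Hypothetical example: an invoicing program split into independent modules.
# Each function has one responsibility and no hidden dependencies, so each
# can be tested on its own and reused elsewhere.

def calculate_subtotal(items):
    """Pricing module: sum of quantity * unit price."""
    return sum(qty * price for qty, price in items)

def apply_tax(subtotal, rate=0.18):
    """Tax module: add a flat tax rate (18% assumed for the example)."""
    return subtotal * (1 + rate)

def format_invoice(total):
    """Presentation module: turn a number into display text."""
    return f"Total due: {total:.2f}"

# The independent modules compose into the full program.
items = [(2, 100.0), (1, 50.0)]
print(format_invoice(apply_tax(calculate_subtotal(items))))
```

Because each unit depends only on its parameters, any of them can be replaced (say, a different tax module) without touching the others.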
Coupling and Cohesion
When a software program is modularized, its tasks are divided into several modules based
on some characteristics. As we know, modules are sets of instructions put together in order
to achieve some tasks. Though each module is considered a single entity, modules may refer
to each other to work together. There are measures by which the quality of the design of
modules and of their interaction among themselves can be measured. These measures are
called coupling and cohesion.

Cohesion
Cohesion is a measure that defines the degree of intra-dependability within elements of a
module. The greater the cohesion, the better is the program design.
There are seven types of cohesion, namely –
 Functional cohesion - It is considered to be the highest degree of cohesion, and it is
highly expected. Elements of module in functional cohesion are grouped because they
all contribute to a single well-defined function. It can also be reused.
 Sequential cohesion - When elements of module are grouped because the output of
one element serves as input to another and so on, it is called sequential cohesion.
 Communicational cohesion - When elements of module are grouped together, which
are executed sequentially and work on same data (information), it is called
communicational cohesion.


 Procedural cohesion - When elements of module are grouped together, which are
executed sequentially in order to perform a task, it is called procedural cohesion.
 Temporal Cohesion - When elements of module are organized such that they are
processed at a similar point in time, it is called temporal cohesion.
 Logical cohesion - When logically categorized elements are put together into a
module, it is called logical cohesion.
 Co-incidental cohesion - It is unplanned and random cohesion, which might be the
result of breaking the program into smaller modules for the sake of modularization.
Because it is unplanned, it may cause confusion for the programmers and is generally
not accepted.
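A minimal Python contrast (function names invented for illustration): the first module shows functional cohesion, the second coincidental cohesion:

```python
# Functional cohesion: every element contributes to one well-defined task
# (computing descriptive statistics), so the module is easy to reuse.
def compute_statistics(values):
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, variance

# Co-incidental cohesion: unrelated tasks grouped only "for modularization".
# The grouping is unplanned and confusing, and is generally not accepted.
def misc_utilities(values, text):
    mean = sum(values) / len(values)   # a bit of statistics...
    shouted = text.upper()             # ...and a bit of string formatting,
    return mean, shouted               # with no common purpose.
```

The cohesive module can be named after what it does; the coincidental one cannot, which is usually the tell-tale sign.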
Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a
program. It tells at what level the modules interfere and interact with each other. The lower
the coupling, the better the program.
There are five levels of coupling, namely –
 Data coupling- Data coupling is when two modules interact with each other by
means of passing data (as parameter). If a module passes data structure as parameter,
then the receiving module should use all its components.
 Stamp coupling- When multiple modules share common data structure and work on
different part of it, it is called stamp coupling.
 Control coupling- Two modules are called control-coupled if one of them decides the
function of the other module or changes its flow of execution.
 Common coupling- When multiple modules have read and write access to some
global data, it is called common or global coupling.
 Content coupling - When a module can directly access or modify or refer to the
content of another module, it is called content level coupling.
Ideally, no coupling is considered to be the best.
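A minimal Python sketch (invented names) contrasting data coupling, where modules communicate only through parameters, with common (global) coupling through shared state:

```python
# Data coupling (preferred): modules interact only by passing data
# as parameters, so each call is self-explanatory.
def net_price(gross, discount):
    return gross - discount

# Common (global) coupling (avoid): modules read and write shared global
# data, so they can interfere with each other invisibly.
DISCOUNT = 0.0   # shared global state

def set_discount(value):
    global DISCOUNT
    DISCOUNT = value

def net_price_global(gross):
    return gross - DISCOUNT   # behaviour depends on hidden global state
```

With the globally coupled version, the result of `net_price_global(100.0)` changes whenever any other module calls `set_discount`, which is exactly the interference common coupling warns about.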

Design Verification

The output of software design process is design documentation, pseudo codes, detailed logic
diagrams, process diagrams, and detailed description of all functional or non-functional
requirements.
The next phase, which is the implementation of software, depends on all outputs mentioned
above.
It then becomes necessary to verify the output before proceeding to the next phase. The
earlier a mistake is detected, the better it is; otherwise, it might not be detected until
testing of the product. If the outputs of the design phase are in formal notation, then the
associated tools for verification should be used; otherwise, a thorough design review can
be used for verification and validation.
Through a structured verification approach, reviewers can detect defects that might be
caused by overlooking some conditions. A good design review is important for good software
design, accuracy and quality.

Design Notation
Software analysis and design includes all activities, which help the transformation of
requirement specification into implementation. Requirement specifications specify all
functional and non-functional expectations from the software. These requirement


specifications come in the shape of human-readable and understandable documents, with which
a computer has nothing to do.
Software analysis and design is the intermediate stage, which helps human-readable
requirements to be transformed into actual code.
A few analysis and design tools used by software designers are:
Data Flow Diagram

A data flow diagram is a graphical representation of the flow of data in an information
system. It is capable of showing incoming data flow, outgoing data flow and stored data.
The DFD does not mention anything about how data flows through the system.
There is a prominent difference between a DFD and a flowchart. The flowchart depicts the
flow of control in program modules, whereas DFDs depict the flow of data in the system at
various levels. A DFD does not contain any control or branch elements.
It is also known as a Bubble Chart or Data Flow Graph.

Types of DFD

Data Flow Diagrams are either Logical or Physical.


 Logical DFD - This type of DFD concentrates on the system process and the flow of
data in the system. For example, in a banking software system, it shows how data
moves between different entities.
 Physical DFD - This type of DFD shows how the data flow is actually implemented
in the system. It is more specific and close to the implementation.
DFD Components

A DFD can represent the source, destination, storage and flow of data using the following
set of components -

 Entities - Entities are the source and destination of information data. Entities are
represented by rectangles with their respective names.
 Process - Activities and actions taken on the data are represented by circles or
round-edged rectangles.
 Data Storage - There are two variants of data storage: it can either be represented
as a rectangle with both smaller sides absent, or as an open-sided rectangle with
only one side missing.
 Data Flow - Movement of data is shown by pointed arrows. Data movement is shown
from the base of the arrow, as its source, towards the head of the arrow, as its
destination.
Levels of DFD

 Level 0 - Highest abstraction level DFD is known as Level 0 DFD, which displays the
entire information system as one diagram hiding all the underlying details. Level 0
DFDs are also known as context level DFDs.


 Level 1 - The Level 0 DFD is broken down into more specific, Level 1 DFD. Level 1
DFD displays basic modules in the system and flow of data among various modules.
Level 1 DFD also mentions basic processes and sources of information.

 Level 2 - At this level, DFD shows how data flows inside the modules mentioned in
Level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs with a
deeper level of understanding, until the desired level of specification is achieved.
Structure Charts

Structure chart is a chart derived from Data Flow Diagram. It represents the system in more
detail than DFD. It breaks down the entire system into lowest functional modules, describes
functions and sub-functions of each module of the system to a greater detail than DFD.
Structure chart represents hierarchical structure of modules. At each layer a specific task is
performed.
Here are the symbols used in construction of structure charts -


 Module - It represents a process, subroutine or task. A control module branches to
more than one sub-module. Library modules are re-usable and invokable from any
module.
 Condition - It is represented by a small diamond at the base of the module. It
depicts that the control module can select any of the sub-routines based on some
condition.

 Jump - An arrow shown pointing inside a module depicts that the control will
jump into the middle of the sub-module.

 Loop - A curved arrow represents a loop in the module. All sub-modules covered by
the loop repeat execution of the module.


 Data flow - A directed arrow with empty circle at the end represents data flow.

 Control flow - A directed arrow with filled circle at the end represents control flow.

HIPO Diagram

A HIPO (Hierarchical Input Process Output) diagram is a combination of two organized
methods to analyze the system and provide the means of documentation. The HIPO model was
developed by IBM in 1970.
The HIPO diagram represents the hierarchy of modules in the software system. Analysts use
HIPO diagrams to obtain a high-level view of system functions. It decomposes functions
into sub-functions in a hierarchical manner and depicts the functions performed by the
system.
HIPO diagrams are good for documentation purposes. Their graphical representation makes it
easier for designers and managers to get a pictorial idea of the system structure.

In contrast to IPO (Input Process Output) diagram, which depicts the flow of control and
data in a module, HIPO does not provide any information about data flow or control flow.


Example

Both parts of the HIPO diagram, the hierarchical presentation and the IPO chart, are used
for the structured design of a software program as well as for its documentation.

Structured English

Most programmers are unaware of the large picture of the software, so they rely only on
what their managers tell them to do. It is the responsibility of higher software management
to provide accurate information to the programmers so that they can develop accurate yet
fast code.
Other methods, which use graphs or diagrams, may sometimes be interpreted differently by
different people. Hence, analysts and designers of the software come up with tools such as
Structured English and Pseudo-Code, which try to reduce that understanding gap.
Structured English uses plain English words within the structured programming paradigm. It
is not the ultimate code, but a kind of description of what is required to code and how to
code it. Structured English helps the programmer to write error-free code.
The following are some tokens of structured programming.
IF-THEN-ELSE,
DO-WHILE-UNTIL
Analysts use the same variable and data names which are stored in the Data Dictionary,
making it much simpler to write and understand the code.
Example

We take the same example of Customer Authentication in the online shopping environment.
This procedure to authenticate customer can be written in Structured English as:
Enter Customer_Name
SEEK Customer_Name in Customer_Name_DB file
IF Customer_Name found THEN
    Call procedure USER_PASSWORD_AUTHENTICATE()
ELSE
    PRINT error message
    Call procedure NEW_CUSTOMER_REQUEST()
ENDIF


The code written in Structured English is more like day-to-day spoken English. It cannot be
implemented directly as software code. Structured English is independent of programming
language.
Pseudo-Code

Pseudo-code is written closer to a programming language. It may be considered an augmented
programming language, full of comments and descriptions.
Pseudo-code avoids variable declarations, but it is written using the constructs of some
actual programming language, like C, Fortran, Pascal etc.
Pseudo-code contains more programming details than Structured English. It provides a
method to perform the task, as if a computer were executing the code.
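For illustration, the customer-authentication procedure from the Structured English section could be rendered in this more detailed, pseudo-code-like style using Python; the database and the two called procedures are hypothetical stand-ins:

```python
# Hypothetical sketch: the customer-authentication procedure written
# close to real code. The database is faked with a set for illustration.
customer_name_db = {"alice", "bob"}

def user_password_authenticate(name):
    return f"authenticating {name}"            # stand-in procedure

def new_customer_request(name):
    return f"registration started for {name}"  # stand-in procedure

def authenticate(customer_name):
    # SEEK Customer_Name in Customer_Name_DB file
    if customer_name in customer_name_db:
        return user_password_authenticate(customer_name)
    else:
        print("error: customer not found")     # PRINT error message
        return new_customer_request(customer_name)
```

Unlike Structured English, this version fixes variable names, data structures and control flow precisely enough that a computer could execute it.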
Decision Tables

A Decision table represents conditions and the respective actions to be taken to address
them, in a structured tabular format.
It is a powerful tool to debug and prevent errors. It helps group similar information into a
single table and then by combining tables it delivers easy and convenient decision-making.
Creating Decision Table

To create a decision table, the developer must follow four basic steps:
 Identify all possible conditions to be addressed
 Determine actions for all identified conditions
 Create the maximum possible number of rules
 Define the action for each rule
Decision tables should be verified by end-users and can later be simplified by eliminating
duplicate rules and actions.
Example

Let us take a simple example of a day-to-day problem: our Internet connectivity. We begin by
identifying all problems that can arise while starting the Internet, along with their
respective possible solutions.
We list all possible problems under the Conditions column and the prospective actions under
the Actions column.

Conditions/Actions                          Rules
                              1   2   3   4   5   6   7   8
Conditions
   Shows Connected            N   N   N   N   Y   Y   Y   Y
   Ping is Working            N   N   Y   Y   N   N   Y   Y
   Opens Website              Y   N   Y   N   Y   N   Y   N
Actions
   Check network cable            X
   Check internet router      X   X   X   X
   Restart Web Browser                                    X
   Contact Service provider   X   X   X   X   X   X
   Do no action                                       X


Table: Decision Table – In-house Internet Troubleshooting
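A decision table translates naturally into a lookup structure in code. The following Python sketch encodes the troubleshooting table above as a dictionary keyed by the three condition values; the exact rule-to-action placements are an illustrative reconstruction of the table, not production troubleshooting logic:

```python
# Keys are (shows_connected, ping_works, opens_website); values are the
# actions marked for that rule in the decision table.
DECISION_TABLE = {
    (False, False, True):  ["Check internet router", "Contact service provider"],
    (False, False, False): ["Check network cable", "Check internet router",
                            "Contact service provider"],
    (False, True,  True):  ["Check internet router", "Contact service provider"],
    (False, True,  False): ["Check internet router", "Contact service provider"],
    (True,  False, True):  ["Contact service provider"],
    (True,  False, False): ["Contact service provider"],
    (True,  True,  True):  ["Do no action"],
    (True,  True,  False): ["Restart web browser"],
}

def actions_for(shows_connected, ping_works, opens_website):
    # Every combination of conditions maps to exactly one rule.
    return DECISION_TABLE[(shows_connected, ping_works, opens_website)]

print(actions_for(True, True, True))   # -> ['Do no action']
```

Because all 2^3 = 8 condition combinations are enumerated, the lookup can never fall through to an undefined case, which is exactly the error-prevention property the text describes.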

Entity-Relationship Model

Entity-Relationship model is a type of database model based on the notion of real-world
entities and the relationships among them. We can map a real-world scenario onto the ER
database model. The ER model creates a set of entities with their attributes, a set of
constraints, and the relations among them.
The ER model is best used for the conceptual design of a database. It can be represented as
follows:

 Entity - An entity in the ER model is a real-world object having some properties,
called attributes. Every attribute is defined by its corresponding set of values,
called its domain.
For example, consider a school database. Here, a student is an entity, with
attributes such as name, id, age and class.
 Relationship - The logical association among entities is called relationship.
Relationships are mapped with entities in various ways. Mapping cardinalities define
the number of associations between two entities.
Mapping cardinalities:
o one to one
o one to many
o many to one
o many to many
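The entity, attribute, and cardinality concepts above can be sketched directly in code. Below is a small Python illustration using the school-database example; the class names and the one-to-many pairing (one school class has many students) are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Student:          # an entity
    id: int             # attributes; each draws its values from a domain
    name: str
    age: int

@dataclass
class SchoolClass:      # another entity
    name: str
    # one-to-many relationship: one class is associated with many students
    students: list = field(default_factory=list)

grade5 = SchoolClass("Grade 5")
grade5.students.append(Student(id=1, name="Asha", age=10))
grade5.students.append(Student(id=2, name="Ravi", age=11))
print(len(grade5.students))   # -> 2
```

Each Student belongs to one SchoolClass while a SchoolClass holds many Students, which is the "one to many" mapping cardinality from the list above.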
Data Dictionary

Data dictionary is the centralized collection of information about data. It stores the
meaning and origin of data, its relationship with other data, its format for usage, etc. The
data dictionary defines all names rigorously, in order to assist users and software designers.
Data dictionary is often referenced as meta-data (data about data) source. It is created along
with DFD (Data Flow Diagram) model of software program and is expected to be updated
whenever DFD is changed or updated.
Requirement of Data Dictionary

The data is referenced via the data dictionary while designing and implementing software. The
data dictionary removes any chance of ambiguity. It helps keep the work of programmers and
designers synchronized, since the same object reference is used everywhere in the program.


Data dictionary provides a way of documentation for the complete database system in one
place. Validation of DFD is carried out using data dictionary.
Contents

Data dictionary should contain information about the following


 Data Flow
 Data Structure
 Data Elements
 Data Stores
 Data Processing
Data Flow is described by means of DFDs, as studied earlier, and is represented in algebraic
form using the following notation:

=    Composed of
{}   Repetition
()   Optional
+    And
[/]  Or
Example

Address = House No + (Street / Area) + City + State


Course ID = Course Number + Course Name + Course Level + Course Grades
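The composition notation can be mirrored in code. The sketch below composes an Address record per the algebraic form above, where "+" joins required parts and "(Street / Area)" is an optional choice between two alternatives; the function name and the comma-separated output format are illustrative assumptions:

```python
def compose_address(house_no, city, state, street=None, area=None):
    # House No + (Street / Area) + City + State
    parts = [house_no]
    optional = street or area      # (Street / Area): optional, one of the two
    if optional:
        parts.append(optional)
    parts += [city, state]         # City and State are always required
    return ", ".join(parts)

print(compose_address("42", "Pune", "MH", street="MG Road"))
# -> 42, MG Road, Pune, MH
print(compose_address("7", "Pune", "MH"))
# -> 7, Pune, MH
```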
Data Elements

Data elements consist of the name and description of data and control items, internal or
external data stores, etc., with the following details:

 Primary Name
 Secondary Name (Alias)
 Use-case (How and where to use)
 Content Description (Notation etc. )
 Supplementary Information (preset values, constraints etc.)
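One data-dictionary entry carrying the details listed above could be represented as a simple record; the field values below are illustrative, reusing the Customer_Name element from the earlier example:

```python
# One data-dictionary entry for a data element; field names follow the
# details listed above, field values are illustrative.
customer_name_entry = {
    "primary_name": "Customer_Name",
    "alias": "Cust_Name",                                   # secondary name
    "use_case": "Entered at login; looked up in Customer_Name_DB",
    "content_description": "string, 1-50 characters",
    "supplementary": {"constraints": "must be unique", "preset": None},
}

# The dictionary itself is a centralized map from primary names to entries,
# so every designer and programmer resolves the same name the same way.
data_dictionary = {customer_name_entry["primary_name"]: customer_name_entry}

print(data_dictionary["Customer_Name"]["alias"])   # -> Cust_Name
```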
Data Store

It stores the information about where the data enters into the system and exits out of it.
The Data Store may include -
 Files
o Internal to software.
o External to software but on the same machine.
o External to software and system, located on different machine.
 Tables
o Naming convention
o Indexing property
Data Processing


There are two types of Data Processing:

 Logical: As the user sees it
 Physical: As the software sees it