Advanced Digital Systems Design 01
Grading Criteria:
A ≥ 70%; B+ = 65-69%; B = 60-64%; B- = 55-59%; C+ = 50-54%; C = 46-49%; C- = 40-45%; D = 35-39%; E = 30-34%; F = 0-29%
Discrimination and Sexual Harassment Policy:
The University/Institution is committed to fostering an environment free from discrimination,
including sexual or gender-based harassment or misconduct. It is the policy of the University to maintain
an academic and work environment free of discrimination, including harassment. The University
prohibits discrimination and harassment against any person because of age, ancestry, color, disability or
handicap, national origin, race, religious creed, sex, sexual orientation or gender identity.
Behaviors including sexual harassment, sexual misconduct, dating violence and stalking, as well as retaliation for reporting any of these acts, are not tolerated and are punishable according to law. Any such incidents reported to the Dean of Faculty, Registrar or Deputy Registrar will be dealt with sternly under the University code of conduct.
Disability Policy:
The University/Institution is committed to: complying with all relevant legislation regarding access and
equity for people with disabilities; providing services and support for students with disabilities to enable
them to participate fully and independently in the academic, cultural and social life of the university;
providing access for staff with disabilities to enable them to participate fully and independently in all
aspects of their work and career development.
Course Outcomes
Outcome | Teaching/Learning Methods | Student Attributes (Problem Analysis, Concepts, Data, …) | Assessment Methods
1. To design digital circuits using programmable logic devices | Lecture, laboratory work | I, D, D, D | Classwork, Assignment, Final Exams, Lab Reports
2. To demonstrate the function of adder/subtracter circuits using gates | Lecture, laboratory work, problem solving | I, D, D, I | Classwork, Assignment, Final Exams, Lab Reports
3. To analyze and determine the reliability of circuit components | Lecture, problem solving, tutorials | D, D, D, I | Classwork, Assignment, Final Exams
4. To perform circuit fault diagnosis and testing | Lecture, problem solving, tutorials | I, D, D, A | Classwork, Assignment, Final Exams
INSTITUTION: FOURAH BAY COLLEGE
MODULE CODE: EENG424
References have been made to a number of textbooks, which are listed in the section for Further Reading, and modifications have been made for a coherent and reader-friendly presentation, with worked examples and assignments provided.
MODULE DESCRIPTION:
RELIABILITY: Failure Rate Curve – Burn-in, Useful Life and Wear-out; Systems Reliability (R(t) = exp(−λt), where λ is defined as the Failure Rate); Mean Time Between Failures (MTBF) and Mean Time To Failure (MTTF); Mean Time To Repair (MTTR); Availability and the factors that affect it; Types of failure; Reliability functions (series, parallel and series-parallel); Factors that affect reliability; Maintainability.
FAULT DIAGNOSIS AND TESTING: Fault detection and location; Gate sensitivity; Fault test for a 2-input AND gate; Undetectable faults; Bridging faults; Fault detection table; Fault library; Two-level circuit fault detection in AND/OR circuits; Two-level circuit fault detection in OR/AND circuits; Boolean difference; Testing techniques; Designing for testability.
REFERENCES / SUGGESTED READING (TEXTS/MANUALS/WEBSITES):
1. http://www.reliabilityeducation.com/ReliabilityPredictionBasics.pdf
2. http://www.mtl-inst.com/images/uploads/datasheets/App_Notes/AN9030.pdf
3. http://en.wikipedia.org/wiki/Failure_rate
4. G. C. Loveday, 1989. Electronic Testing and Fault Diagnosis. Longman Scientific & Technical. ISBN 0-582-03865-0
1.1 Reliability
The probability of survival, over a given period of observation, of the components in an electronic system, of the sub-systems that constitute the system, or of the system as a whole, is referred to as its Reliability, R(t). We shall now derive the relationship between this figure of merit and the failure rate, a factor usually provided by the manufacturers.
Derivation
Let us consider the degradation of N identical components under stress conditions, i.e. functioning under conditions of pressure,
temperature, humidity, etc.
Let S(t) denote the number of surviving components, i.e. the number of components still operating at time t, after the start of the ageing
experiment.
Let F(t) represent the number of components that have failed up to time t.
Thus the probability of survival of the components, also known as the Reliability R(t), is given as

R(t) = S(t)/N    ...1.1

The probability of failure of the components, also known as the Unreliability Q(t), is given by

Q(t) = F(t)/N    ...1.2
Now, the surviving and failed components together account for all N components, i.e.

S(t) + F(t) = N    ...1.3

Hence, dividing through by N,

R(t) + Q(t) = 1    ...1.4
The failure rate, also known as the hazard rate Z(t), is defined as the number of failures per unit time divided by the number of surviving components, and is given as

Z(t) = (1/S(t)) · dF(t)/dt    ...1.5
Let us now discuss the relationship known as the Bath-tub curve of Failure rate in practical systems by referring to Figure 1.0
[Figure 1.0: Bath-tub curve of failure rate Z(t) against time (hrs), showing the burn-in, useful life and wear-out regions]
‘Burn-In’ Period
At the end of a production line, some components may have inherent deficiencies that lead to early failures. Defective diodes or other electronic components may have entered production, since components are normally received in bulk purchases that are rarely 100% defect-free. The system is therefore subjected to an initial period of run-time, referred to as 'Burn-in', intended to expose such defective components so that they can be removed, thereby reducing the failure rate of the system. The presence of these components produces a high initial failure rate which falls rapidly during the early life of the system.
During the normal operating period of the system, referred to as its Useful Life, the failure rate is a constant, λ. This level is maintained until the onset of ageing, when the normal wear and tear of the system leads to a rise in the failure rate. This period is referred to as the 'Wear Out' period of the failure rate curve.
Returning to the derivation, let us assume the components are operating in the useful life period with a constant failure rate λ, hence let

Z(t) = λ    ...1.6

Since F(t) = N − S(t) = N(1 − R(t)), we obtain

dF(t)/dt = −N · dR(t)/dt    ...1.7
Substituting Eqns. (1.6) and (1.7) into (1.5), we obtain

λ = −(N/S(t)) · dR(t)/dt

Noting that R(t) = S(t)/N, this becomes

λ = −(1/R(t)) · dR(t)/dt

i.e.

λ dt = −dR(t)/R(t)
Integrating, we obtain

λ ∫ dt = −∫ dR(t)/R(t)

Noting that at t = 0, S(t) = N, giving R(t) = S(t)/N = 1, and that at time t the reliability is R(t), the limits of integration run from 0 to t on the left-hand side and from 1 to R(t) on the right-hand side,

i.e.

λ[t] (from 0 to t) = −[logₑ R(t)] (from 1 to R(t))

λt = −logₑ R(t)
Giving
𝑅(𝑡) = exp(−𝜆𝑡)
...1.8
This is the expression defining the exponential relationship between the Reliability and Failure Rate, where λ is usually expressed as
percentage failures per 1000 hours or as failures per hour.
Now, expanding the exponential as a series,

R(t) = 1 − λt + (λt)²/2! − (λt)³/3! + ⋯

so that for λt ≪ 1, R(t) ≈ 1 − λt.
For a system containing k types of components, with Ni components of type i each having failure rate λi (i = 1, 2, 3, ..., k), the overall system failure rate is given by

λoverall = Σ Ni λi (summed over i = 1 to k)    ...1.10
The average time a system will run or operate before a failure occurs is referred to as the Mean Time Between Failures, MTBF. This is a more useful figure of merit for describing a system. It has units of hours and is defined as follows:

MTBF = ∫ R(t) dt (from 0 to ∞)    ...1.11

This can be seen as the area under the Reliability curve with respect to time, t.
i.e.

MTBF = ∫ exp(−λt) dt (from 0 to ∞) = [−(1/λ) exp(−λt)] (from 0 to ∞) = 1/λ    ...1.12

i.e. the MTBF is the reciprocal of the failure rate of a system or component.
NB
If λ is the number of failures per hour then the MTBF has units of hours.
Example 1.1
Let the number of components in a system be 4000 and let the failure rate be given by 0.02% per 1000 hours. Calculate the MTBF of the
system.
Solution

The failure rate per component is 0.02% per 1000 hours, so the average number of failures per hour for the whole system is given by:

Nλ = (0.02/100) × (1/1000) × 4000 = 8×10⁻⁴ failures per hour

Hence MTBF = 1/(Nλ) = 1/(8×10⁻⁴) = 1250 hours.
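As a quick cross-check of this arithmetic, the calculation can be scripted. The following is a minimal sketch; the function name and variables are illustrative, not part of the module text.

```python
# Minimal sketch of Example 1.1: system failure rate and MTBF.

def failures_per_hour(percent_per_1000h: float) -> float:
    """Convert a failure rate quoted as '% per 1000 hours' into failures per hour."""
    return (percent_per_1000h / 100.0) / 1000.0

N = 4000                              # number of components in the system
lam = failures_per_hour(0.02)         # per-component failure rate: 2e-7 failures/hour
system_lambda = N * lam               # overall failure rate (Eqn 1.10, single component type)
mtbf = 1.0 / system_lambda            # Eqn 1.12: MTBF = 1/lambda

print(f"System failure rate = {system_lambda:.1e} failures/hour")  # 8.0e-04
print(f"MTBF = {mtbf:.0f} hours")                                  # 1250
```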
Returning to the analysis, we note from Eqns (1.8) and (1.12) that

R(t) = exp(−λt) = exp(−t/MTBF)    ...1.13
For t = MTBF, i.e. when the operating (or constraint) time equals the MTBF, we obtain
R(t) = exp(−1) = 36.8%.
i.e. a system with an MTBF of 100 hrs has only a 36.8% chance of running for 100 hrs without failure. In other words the system has a 63.2%
chance of failure in 100 hours.
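The 36.8% figure follows directly from Eqn (1.13); a short illustrative check (values assumed from the example above):

```python
import math

mtbf = 100.0                               # hours (example value from the text)
r = math.exp(-mtbf / mtbf)                 # Eqn 1.13 with t = MTBF
print(f"R(MTBF) = {r:.3f}")                # 0.368 -> 36.8% chance of no failure in 100 hours
print(f"Chance of failure = {1 - r:.3f}")  # 0.632
```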
This then explains the definition of Reliability as given in the BS 42000 Part 2 Specifications as the ability of an item to perform a
required function (without failure) under stated conditions for a stated period of time, where the item could be a component, an
instrument or a system.
Figure 1.2 provides a sketch of the graph of R(t) against time measured as a factor of the MTBF.
[Figure 1.2: R(t) plotted against time, falling from 1.0 at t = 0 to approximately 0.36 at t = MTBF, with the time axis marked at MTBF, 2MTBF and 3MTBF]
Figure 1.2. Reliability versus time (in multiples of the MTBF)
From the series expansion above, for t ≪ MTBF the reliability is approximately linear:

R(t) ≈ 1 − λt = 1 − t/MTBF
Example 1.2
A 1st generation computer contains 10,000 valves, each with a failure rate of 0.5% per 1000 hrs. Find the period of 99% reliability.
Solution
The failure rate per valve is

λ = 0.005/1000 = 5×10⁻⁶ failures per hour
We can define the Mean Time To Fail (MTTF) of the individual components, especially where these cannot be repaired, while the MTBF applies to the system as a whole:

MTTF = 1/λ = 1/(5×10⁻⁶) = 200,000 hours ≅ 23 years
Although this seems quite impressive, if the failure of any one valve causes failure of the whole system, then the overall failure rate is

λov = Nλ = 10,000 × 5×10⁻⁶ = 5×10⁻² failures per hour

and, using the linear approximation,

R(t) ≈ 1 − λov·t

Also we note

MTBF = 1/λov = 1/(5×10⁻²) = 20 hours

The period of 99% reliability is therefore found from 0.99 = 1 − λov·t, giving t = 0.01/0.05 = 0.2 hours, i.e. about 12 minutes.
If the number of such components constituting the system is further increased then the MTBF is further decreased.
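The whole of Example 1.2, including the period of 99% reliability, can be reproduced with a short script. This is a sketch only; the variable names are illustrative.

```python
import math

n_valves = 10_000
lam = 0.005 / 1000                      # 0.5% per 1000 h -> 5e-6 failures/hour per valve

mttf_valve = 1 / lam                    # 200,000 hours (about 23 years) for a single valve
lam_overall = n_valves * lam            # 0.05 failures/hour if any valve failure fails the system
mtbf_system = 1 / lam_overall           # 20 hours

# Period of 99% reliability: solve exp(-lam_overall * t) = 0.99
t_exact = -math.log(0.99) / lam_overall        # ~0.201 hours
t_linear = (1 - 0.99) / lam_overall            # 0.2 hours, using R(t) ~ 1 - lambda*t

print(f"MTTF per valve  = {mttf_valve:,.0f} hours")
print(f"System MTBF     = {mtbf_system:.0f} hours")
print(f"t for R = 99%   = {t_exact:.3f} h (linear approx {t_linear:.3f} h, about 12 minutes)")
```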
NB
1. The MTBF is an average value observed over a given period.
2. The term MTTF is usually applied to components that cannot be repaired, while the term MTBF applies to systems or instruments that can be repaired.
Example
A maintenance engineer observes the following periods (in hours) between failures of four subsystems in a plant given in Table Q1.
The Maintainability of a system is the probability that a failed system will be restored to its working condition within a specified time. This
figure of merit can be seen as the probability of isolating and repairing a fault in a system within a given time.
If the rate of repair is represented by μ, then we can define the Mean Time To Repair a system, MTTR, as the reciprocal of μ, i.e.

MTTR = 1/μ    ...1.15
We can therefore write the Maintainability M(t) as

M(t) = 1 − exp(−μt) = 1 − exp(−t/MTTR)    ...1.16
where t is defined as the permissible time constraint for the maintenance action to take place, i.e. the time allowed to restore the system.
This factor is really a function of the design of the equipment or system, the skill level of the maintenance staff, and the availability of spare parts, appropriate tools and test equipment.
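As a rough illustration of Eqn (1.16), assuming a hypothetical MTTR of 2 hours, the probability of completing a repair within various time constraints can be tabulated with a short sketch:

```python
import math

def maintainability(t_hours: float, mttr_hours: float) -> float:
    """Eqn 1.16: M(t) = 1 - exp(-t/MTTR), the probability of restoring the system within t."""
    return 1.0 - math.exp(-t_hours / mttr_hours)

mttr = 2.0                                   # assumed mean time to repair, hours
for t in (1, 2, 4, 8):
    print(f"M({t} h) = {maintainability(t, mttr):.2f}")
# M(2 h) is about 0.63: roughly a 63% chance of completing the repair within one MTTR.
```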
Maintainability is one of the three figures of merit of systems referred to as the RAM factors, i.e. Reliability, Availability and Maintainability. The Maintainability of a system can be viewed as a measure of the time required to restore a certain percentage of all system failures, or as the probability (P) of restoring a failed system to its working condition within a time T. The Availability will be discussed shortly. The Maintainability relationship is depicted in Fig. 1.3.
Fig. 1.3. P(t) versus t (time to repair)
Source: www.reliabilityanalytics.com, posted 3 September 2011.
There are a number of comparisons between R(t) and M(t). Let us now consider these factors as shown in Table 1.0
Table 1.0. Comparison of R(t) and M(t)
Reliability | Maintainability | Comments
R(t) = exp(−λt) | M(t) = 1 − exp(−μt) | M(t) is the probability of successfully completing a repair job within a time t. This formula is similar to that for the probability of failure Q(t), i.e. the system Unreliability discussed earlier, where Q(t) = 1 − R(t), with µ replacing λ in the Reliability expression.
t = time to failure | t = time to restore |
λ = failure rate | µ = repair rate |
In general, maintainable systems must have a predictable MTTR for the various fault conditions. The system's repair time consists of a passive repair time and an active repair time.
The passive repair time, T1, is determined by the time taken by the engineers to travel to the site, plus tea breaks and any other stoppage time incurred during the maintenance of the equipment.
The active repair time is influenced by the system's design and consists of the following:
I. T2 is defined as the time between the occurrence of a failure and the system user or operator becoming aware that the fault has occurred.
II. T3 is defined as the time needed to detect the fault and isolate the components responsible.
III. T4 is defined as the time required to replace the faulty component.
IV. T5 is defined as the time needed to verify that the fault has been removed and that the system is fully functioning.
i.e.
System repair time = f(T1, T2, T3, T4, T5)

System repair time = T1 + T2 + T3 + T4 + T5 = Σ Ti (summed over i = 1 to 5)    ...1.17
1.5 The Availability of a system
This is defined as the probability that the system will be functioning according to expectations at any time during its scheduled working
period.
Derivation

Availability = System up time / Total functional period
             = System up time / (System up time + System down time)
             = System up time / (System up time + (No. of failures × MTTR))
             = System up time / (System up time + (System up time × λ × MTTR))
             = 1 / (1 + λ × MTTR)

Availability = MTBF / (MTBF + MTTR)    ...1.18

where λ = 1/MTBF.
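A minimal sketch of Eqn (1.18), using assumed MTBF and MTTR figures for illustration only:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Eqn 1.18: Availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures: a reliable, quickly repaired system vs. a failure-prone, slowly repaired one.
print(f"MTBF 1000 h, MTTR 2 h  -> A = {availability(1000, 2):.4f}")   # 0.9980
print(f"MTBF   50 h, MTTR 10 h -> A = {availability(50, 10):.4f}")    # 0.8333
```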
Alternatively, the Availability can be expressed in terms of the MTBF and the system's Mean Down Time (MDT) as follows.
The Mean Up Time of a system is the Mean Time Between Failures, MTBF, and the Mean Down Time is represented by MDT. Hence the Availability of the system can be written as:

Availability = System up time / Total functional period
             = System up time / (System up time + System down time)
             = MTBF / (MTBF + MDT)
NB.
The MTTR may not always be the same as the MDT, for the following reasons:
1. The failure may not have been detected at the time it occurred.
2. The repair work may not start immediately after the failure is reported.
3. The equipment may not be put back into operation immediately after the repairs have been carried out.
The Availability can only be determined accurately when the true Mean Down Time of the system is factored into the equation; using the MTTR assumes that the system is repaired immediately it becomes faulty. Whichever expression is used, the true time for which the system is not in use (i.e. is unavailable) must be used in calculating the Availability.
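The difference matters numerically. The following sketch, with assumed figures, contrasts the optimistic value obtained with the MTTR against the value obtained with the true MDT:

```python
mtbf = 500.0   # hours (assumed)
mttr = 4.0     # active repair time only, hours (assumed)
mdt = 12.0     # true mean down time, including reporting and restart delays, hours (assumed)

print(f"Availability using MTTR: {mtbf / (mtbf + mttr):.4f}")  # 0.9921 (optimistic)
print(f"Availability using MDT : {mtbf / (mtbf + mdt):.4f}")   # 0.9766 (realistic)
```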
MTBF | MTTR | Availability | Comments
Low | Low | Low | Not too desirable, as faults occur frequently even though they are repaired relatively quickly. This will impact on the integrity and reliability of the system.
Low | High | Low | Very undesirable. Here the faults occur frequently and the time to repair is high. The system down time is therefore high.
High | Low | High | This is the most desirable case, as the failure rate is low and faults are cleared quickly when they appear; hence the Availability is high.
High | High | | An MTBF that is high implies a failure rate that is low, i.e. faults occur rarely. However, with a high average time to repair, the system down time when faults do occur will be high, even though they seldom occur. This will impact negatively on the system's availability. This is not very desirable, though a better option than Cases 1 and 2.
Assignment 1.0
By referring to the table above and to practical examples, discuss the implications of the different combinations of MTBF and MTTR levels in determining the resulting Availability of a system.