Unit-4 Probability


DIMS

BUSINESS STATISTICS & ANALYTICS

UNIT-04

Probability Meaning and Approaches of Probability Theory

In our day-to-day life the terms “probability” and “chance” are very commonly used. We often say
“Probably it may rain tomorrow”, “Probably Mr. X may come to take his class today”, or “Probably you are
right”. In everyday speech, possibility and probability convey much the same meaning, but in statistics
probability has a special connotation unlike the layman’s view.

The theory of probability was developed in the 17th century. It has its origin in games of chance such as
tossing coins, throwing dice and drawing a card from a pack. In 1654 Antoine Gombaud (the Chevalier de
Méré) took an initial interest in this area and posed the questions that set its study in motion.

After him, many writers on statistics refined and extended the idea, and “probability” has become one of
the basic tools of statistics. Statistical analysis is often paralysed without the theory of probability.
“Probability of a given event is defined as the expected frequency of occurrence of the event among
events of a like sort.” (Garrett)

The probability theory provides a means of getting an idea of the likelihood of occurrence of different
events resulting from a random experiment in terms of quantitative measures ranging between zero and
one. The probability is zero for an impossible event and one for an event which is certain to occur.

Approaches of Probability Theory

1. Classical Probability

The classical approach to probability is one of the oldest and simplest schools of thought. It originated in
the 18th century and explains probability in terms of games of chance such as tossing a coin, throwing
dice, drawing cards, etc.

The classical definition of probability was given by the French mathematician Laplace. According to him,
probability is the ratio of the number of favourable cases to the total number of equally likely cases.

Or in other words, the ratio suggested by classical approach is:

Pr. = Number of favourable cases/Number of equally likely cases

For example, if a coin is tossed and we ask for the probability of getting a head, then the number of
favourable cases = 1 and the number of equally likely cases = 2.

Pr. of head = 1/2

Symbolically it can be expressed as follows. If an event A can happen in a ways and fail to happen
(event B, or “not A”) in b ways out of n = a + b equally likely cases, then

p = Pr. (A) = a/n, q = Pr. (B) = Pr. (not A) = b/n

Since a + b = n, a/n + b/n = 1, and therefore p + q = 1,

i.e. p = 1 – q and q = 1 – p.

In this approach the probability varies from 0 to 1. A probability of 0 denotes that the event is impossible
to occur.

If the probability is 1 then there is certainty of occurrence, i.e. the event is bound to occur.

Example:

From a bag containing 20 black and 25 white balls, a ball is drawn at random. What is the probability that
it is black?

Pr. of a black ball = 20/45 = 4/9 = p, Pr. of a white ball = 25/45 = 5/9 = q

p = 4/9 and q = 5/9 (p + q = 4/9 + 5/9 = 1)
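
This arithmetic can be checked with a short Python sketch (a minimal illustration; the counts 20 and 25 are the ones given in the example above):

from fractions import Fraction

black, white = 20, 25           # favourable cases for each colour
total = black + white           # total number of equally likely cases

p = Fraction(black, total)      # Pr. of a black ball
q = Fraction(white, total)      # Pr. of a white ball
print(p, q, p + q)              # 4/9 5/9 1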

2. Relative Frequency Theory of Probability

This approach to probability arose as a protest against the classical approach. It defines probability as the
limiting value of the relative frequency of an event as the number of trials n is increased indefinitely
(n → ∞).

If an event A occurs a times out of n trials, its relative frequency is a/n. The value this ratio approaches as
n becomes ∞ is called the limit of the relative frequency, and it is taken as the probability of the event:

Pr. (A) = limit of a/n as n → ∞

Similarly, if the complementary event B occurs b times out of n trials,

Pr. (B) = limit of b/n as n → ∞

Example: in repeated tosses of a fair coin, as n → ∞ the relative frequency of heads tends to .5 and the
relative frequency of tails tends to .5.
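
The limiting idea can be illustrated by simulation: as the number of trials n grows, the relative frequency a/n settles near the underlying probability. A minimal Python sketch, assuming a fair coin (probability .5):

import random

random.seed(1)
for n in (100, 10_000, 1_000_000):
    a = sum(random.random() < 0.5 for _ in range(n))   # number of heads in n tosses
    print(n, a / n)                                     # relative frequency a/n drifts towards 0.5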

Axiomatic approach

An axiomatic approach defines probability as a set function in which the elements of the domain are sets
(events) and the elements of the range are real numbers. If event A is an element of the domain of this
function, P(A) is the customary notation used to designate the corresponding element in the range.

Probability Function

A probability function P(A) is a function mapping the event space of a random experiment into the
interval [0, 1] according to the following axioms:
Axiom 1. For any event A, 0 ≤ P(A) ≤ 1

Axiom 2. P(Ω) = 1

Axiom 3. If A and B are any two mutually exclusive events then,

P(A ∪ B) = P(A) + P(B)

As given in the third axiom the addition property of the probability can be extended to any number of
events as long as the events are mutually exclusive. If the events are not mutually exclusive then;

P(A ∪ B) = P(A) + P(B) – P(A∩B)

If the events are mutually exclusive, A∩B = Φ and hence P(A∩B) = 0.

For instance, if the objects are of just two kinds and the two kinds are equally likely to be drawn, then
Pr. of A = .5 and Pr. of B = .5.

Addition and Multiplication Theorems

Addition theorem on probability:

If A and B are any two events then the probability of happening of at least one of the events is defined
as P(AUB) = P(A) + P(B)- P(A∩B).

Proof:

Since events are nothing but sets,

From set theory, we have

n(AUB) = n(A) + n(B)- n(A∩B).

Dividing the above equation by n(S), (where S is the sample space)

n(AUB)/ n(S) = n(A)/ n(S) + n(B)/ n(S)- n(A∩B)/ n(S)

Then by the definition of probability,

P(AUB) = P(A) + P(B)- P(A∩B).

Example:

If the probabilities of solving a problem by two students, George and James, are 1/2 and 1/3 respectively,
then what is the probability that the problem is solved?

Solution:
Let A and B be the events that the problem is solved by George and James respectively.

Then P(A)=1/2 and P(B)=1/3.

The problem will be solved if at least one of them solves it.

So, we need to find P(AUB).

By addition theorem on probability, we have

P(AUB) = P(A) + P(B)- P(A∩B).

Since George and James work on the problem independently, P(A∩B) = P(A) * P(B) = 1/2 * 1/3 = 1/6, so

P(AUB) = 1/2 + 1/3 – 1/6 = (3+2-1)/6 = 4/6 = 2/3

Note:

If A and B are any two mutually exclusive events then P(A∩B)=0.

Then P(AUB) = P(A)+P(B).
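
The George and James example can be checked directly (a minimal Python sketch, assuming, as the solution above does, that the two students work independently):

from fractions import Fraction

p_A = Fraction(1, 2)           # George solves the problem
p_B = Fraction(1, 3)           # James solves the problem
p_AB = p_A * p_B               # independence assumed, so P(A∩B) = P(A)P(B)

p_union = p_A + p_B - p_AB     # addition theorem
print(p_union)                 # 2/3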

Multiplication theorem on probability

If A and B are any two events of a sample space such that P(A) ≠0 and P(B)≠0, then

P(A∩B) = P(A) * P(B|A) = P(B) *P(A|B).

Example: If P(A) = 1/5 P(B|A) = 1/3 then what is P(A∩B)?

Solution: P(A∩B) = P(A) * P(B|A) = 1/5 * 1/3 = 1/15

INDEPENDENT EVENTS:

Two events A and B are said to be independent if the occurrence of one does not affect the probability of
occurrence of the other.

i.e. Two events A and B are said to be independent if

P(A|B) = P(A) where P(B)≠0.

P(B|A) = P(B) where P(A)≠0.

i.e. Two events A and B are said to be independent if

P(A∩B) = P(A) * P(B).


Example:

When drawing a single card from a well-shuffled pack, let A be the event of drawing a diamond and B be
the event of drawing an ace.

Then P(A) = 13/52 = 1/4 and P(B) = 4/52=1/13

Now, A∩B = drawing the ace of diamonds.

Then P(A∩B) = 1/52

Now, P(A/B) = P(A∩B)/P(B) = (1/52)/(1/13) = 1/4 = P(A).

So, A and B are independent.

Here, P(A∩B) = 1/52 = 1/4 * 1/13 = P(A) * P(B).

Note:

(1) If three events A, B and C are independent, then

P(A∩B∩C) = P(A)*P(B)*P(C).

(2) If A and B are independent events, then P(AUB) = 1 – P(A’)P(B’).
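
The card example can be verified by exhaustive counting over the 52 equally likely cards (a minimal Python sketch; the deck representation is only for illustration):

from fractions import Fraction
from itertools import product

suits = ["hearts", "diamonds", "clubs", "spades"]
ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
deck = list(product(ranks, suits))              # 52 equally likely cards

A = {c for c in deck if c[1] == "diamonds"}     # event: card is a diamond
B = {c for c in deck if c[0] == "A"}            # event: card is an ace

P = lambda E: Fraction(len(E), len(deck))
print(P(A), P(B), P(A & B))                     # 1/4 1/13 1/52
print(P(A & B) == P(A) * P(B))                  # True, so A and B are independent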

Bayes’ Rule

Bayes’ theorem is a way to figure out conditional probability. Conditional probability is the probability of
an event happening, given that it has some relationship to one or more other events. For example, your
probability of getting a parking space is connected to the time of day you park, where you park, and
what conventions are going on at any time. Bayes’ theorem is slightly more nuanced. In a nutshell, it
gives you the actual probability of an event given information about tests.

“Events” are different from “tests.” For example, there is a test for liver disease, but that’s separate
from the event of actually having liver disease.

Tests are flawed: just because you have a positive test does not mean you actually have the disease.
Many tests have a high false positive rate. Rare events tend to have higher false positive rates than
more common events. We’re not just talking about medical tests here. For example, spam filtering can
have high false positive rates. Bayes’ theorem takes the test results and calculates your real
probability that the test has identified the event.

Bayes’ Theorem (also known as Bayes’ rule) is a deceptively simple formula used to calculate conditional
probability. The Theorem was named after English mathematician Thomas Bayes (1701-1761). The
formal definition for the rule is:

P(A|B) = P(B|A) * P(A) / P(B)

In most cases, you can’t just plug numbers into an equation; you have to figure out what your “tests”
and “events” are first. For two events, A and B, Bayes’ theorem allows you to figure out p(A|B) (the
probability that event A happened, given that test B was positive) from p(B|A) (the probability that test
B happened, given that event A happened). It can be a little tricky to wrap your head around as
technically you’re working backwards; you may have to switch your tests and events around, which can
get confusing. An example should clarify what I mean by “switch the tests and events around.”

Bayes’ Theorem Example

You might be interested in finding out a patient’s probability of having liver disease if they are an
alcoholic. “Being an alcoholic” is the test (kind of like a litmus test) for liver disease.

A could mean the event “Patient has liver disease.” Past data tells you that 10% of patients entering
your clinic have liver disease. P(A) = 0.10.

B could mean the litmus test that “Patient is an alcoholic.” Five percent of the clinic’s patients are
alcoholics. P(B) = 0.05.

You might also know that among those patients diagnosed with liver disease, 7% are alcoholics. This is
your B|A: the probability that a patient is alcoholic, given that they have liver disease, is 7%.

Bayes’ theorem tells you:


P(A|B) = (0.07 * 0.1)/0.05 = 0.14
In other words, if the patient is an alcoholic, their chances of having liver disease is 0.14 (14%). This is a
large increase from the 10% suggested by past data. But it’s still unlikely that any particular patient has
liver disease.
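
The liver-disease calculation is simply the Bayes formula applied to the three numbers given; a minimal Python sketch:

p_A = 0.10           # P(A): patient has liver disease
p_B = 0.05           # P(B): patient is an alcoholic
p_B_given_A = 0.07   # P(B|A): patient is an alcoholic, given liver disease

p_A_given_B = p_B_given_A * p_A / p_B   # Bayes' theorem
print(round(p_A_given_B, 2))            # 0.14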

Probability Distribution: Binomial, Poisson, Normal Distribution

Probability Distributions

Probability theory is the foundation for statistical inference. A probability distribution is a device for
indicating the values that a random variable may have. There are two categories of random
variables. These are discrete random variables and continuous random variables.

Discrete random variable

The probability distribution of a discrete random variable specifies all possible values of a discrete
random variable along with their respective probabilities.

Examples can be

 Frequency distribution
 Probability distribution (relative frequency distribution)
 Cumulative frequency

Examples of discrete probability distributions are the binomial distribution and the Poisson distribution.

Binomial Distribution

A binomial experiment is a probability experiment with the following properties.

1. Each trial can have only two outcomes which can be considered success or failure.
2. There must be a fixed number of trials.
3. The outcomes of each trial must be independent of each other.
4. The probability of success must remain the same in each trial.

The outcomes of a binomial experiment are called a binomial distribution.
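
The probabilities of a binomial experiment are given by P(X = r) = nCr p^r q^(n – r), where p is the probability of success, q = 1 – p, n is the number of trials and r the number of successes; this is the formula used in the worked questions later in this unit. A minimal Python sketch of the formula:

from math import comb

def binomial_pmf(n, r, p):
    # P(X = r) for a binomial experiment with n trials and success probability p
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

# e.g. the probability of exactly one six in three throws of a fair die
print(binomial_pmf(3, 1, 1/6))   # ≈ 0.3472 (= 25/72)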

Poisson Distribution

The Poisson distribution is based on the Poisson process.

1. The occurrences of the events are independent in an interval.


2. An infinite number of occurrences of the event are possible in the interval.
3. The probability of a single event in the interval is proportional to the length of the interval.
4. In an infinitely small portion of the interval, the probability of more than one occurrence of the event
is negligible.
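
For a Poisson process in which events occur at an average rate of λ per interval, the probability of observing k events in an interval is P(X = k) = e^(–λ) λ^k / k!. A minimal Python sketch (λ = 2 is only an illustrative value, not one taken from the text):

from math import exp, factorial

def poisson_pmf(k, lam):
    # P(X = k) for a Poisson distribution with mean lam
    return exp(-lam) * lam**k / factorial(k)

print([round(poisson_pmf(k, 2), 4) for k in range(5)])
# [0.1353, 0.2707, 0.2707, 0.1804, 0.0902]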

Continuous probability distributions

A continuous variable can assume any value within a specified interval. If a frequency distribution for such
a variable is built with a large number of class intervals, the frequency polygon begins to resemble a
smooth curve.

A continuous probability distribution is described by a probability density function. The total area under
the smooth curve is equal to 1, and the probability that the variable falls between any two points equals
the area under the curve between those two points and the x-axis.

The Normal Distribution

The normal distribution is the most important distribution in statistics. It is frequently called the
Gaussian distribution. The two parameters of the normal distribution are the mean (m) and the
standard deviation (s). The graph has a familiar bell-shaped curve.
Graph of a Normal Distribution

Characteristics of the normal distribution

1. It is symmetrical about m.

2. The mean, median and mode are all equal.

3. The total area under the curve above the x-axis is 1 square unit. Therefore 50% is to the right
of m and 50% is to the left of m.

4. Perpendiculars erected at:
m ± s contain about 68%;
m ± 2s contain about 95%;
m ± 3s contain about 99.7%
of the area under the curve.

The standard normal distribution

A normal distribution is determined by m and s. This creates a family of distributions depending on the
values of m and s. The standard normal distribution has m = 0 and s = 1.

Finding normal curve areas

1. The table gives the area between –∞ and a given value of z.

2. Find the z value in tenths in the column at left margin and locate its row. Find the hundredths place
in the appropriate column.

3. Read the value of the area (P) from the body of the table where the row and column intersect. Note
that P is the probability that the standard normal variable takes a value less than the given z. Values of P
are decimals with four places, which may be read as decimal percentages.

Finding probabilities
We find probabilities using the table and a four-step procedure as illustrated below.

a) What is the probability that z < -1.96?

(1) Sketch a normal curve


(2) Draw a line for z = -1.96
(3) Find the area in the table
(4) The answer is the area to the left of the line P(z < -1.96) = .0250

b) What is the probability that -1.96 < z < 1.96?

(1) Sketch a normal curve


(2) Draw lines for lower z = -1.96, and upper z = 1.96
(3) Find the area in the table corresponding to each value
(4) The answer is the area between the two values: subtract the lower area from the upper, P(-1.96 < z < 1.96) = .9750 –
.0250 = .9500

c) What is the probability that z > 1.96?


(1) Sketch a normal curve
(2) Draw a line for z = 1.96
(3) Find the area in the table
(4) The answer is the area to the right of the line; found by subtracting table value from 1.0000; P(z >
1.96) =1.0000 – .9750 = .0250
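
These table look-ups can be reproduced from the standard normal cumulative distribution function, Φ(z) = ½(1 + erf(z/√2)). A minimal Python sketch:

from math import erf, sqrt

def phi(z):
    # area under the standard normal curve to the left of z
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(phi(-1.96), 4))               # 0.0250 -> P(z < -1.96)
print(round(phi(1.96) - phi(-1.96), 4))   # 0.9500 -> P(-1.96 < z < 1.96)
print(round(1 - phi(1.96), 4))            # 0.0250 -> P(z > 1.96)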

Applications of the Normal distribution

The normal distribution is used as a model to study many different variables. We can use the normal
distribution to answer probability questions about random variables. Some examples of variables that
are normally distributed are human height and intelligence.

Solving normal distribution application problems

In this explanation we add an additional step. Following the model of the normal distribution, a given
value of x must be converted to a z score before it can be looked up in the z table.

(1) Write the given information


(2) Sketch a normal curve
(3) Convert x to a z score
(4) Find the appropriate value(s) in the table
(5) Complete the answer

Illustrative Example: Total fingerprint ridge count in humans is approximately normally distributed with
mean of 140 and standard deviation of 50. Find the probability that an individual picked at random will
have a ridge count less than 100. We follow the steps to find the solution.

(1) Write the given information

m = 140
s = 50
x = 100

(2) Sketch a normal curve

(3) Convert x to a z score: z = (x – m)/s = (100 – 140)/50 = –0.8

(4) Find the appropriate value(s) in the table

A value of z = -0.8 gives an area of .2119 which corresponds to the probability P (z < -0.8)

(5) Complete the answer

The probability that x is less than 100 is .2119.
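
The same steps can be carried out in a short Python sketch (m, s and x are the values given above; the area is computed from the error function rather than read from the table):

from math import erf, sqrt

m, s, x = 140, 50, 100            # mean, standard deviation, value of interest
z = (x - m) / s                   # convert x to a z score
p = 0.5 * (1 + erf(z / sqrt(2)))  # area to the left of z
print(z, round(p, 4))             # -0.8 0.2119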

Application of Binomial, Poisson and Normal distributions

The binomial distribution has its applications in experiments in probability subject to certain constraints.
These are:

1. There is a fixed number of trials – for example toss a coin 20 times.


2. The outcomes are independent and there are just two possible outcomes; in the coin example these
are head and tail.
3. The probability of a head plus the probability of a tail must equal 1.
4. The probability of 8 heads and 12 tails would be 20C8 x P(H)^8 x P(T)^12 (worked out in the sketch below).
5. Any experiment in which the outcomes are of just two kinds, with combined probability equal to 1,
can be regarded as binomial.
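
For item 4, the probability of exactly 8 heads (and hence 12 tails) in 20 tosses evaluates as follows (a minimal Python sketch, assuming a fair coin so that P(H) = P(T) = ½):

from math import comb

n, r = 20, 8
p_h = p_t = 0.5                              # fair coin assumed
prob = comb(n, r) * p_h**r * p_t**(n - r)    # 20C8 x P(H)^8 x P(T)^12
print(round(prob, 4))                        # ≈ 0.1201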

Data from the analyses of reference samples often must be used to determine the quality of the data
being produced by laboratories that routinely make chemical analyses of environmental samples. When
a laboratory analyzes many reference samples, binomial distributions can be used in evaluating
laboratory performance. The number of standard deviations (that is, the difference between the
reported value and most probable value divided by the theoretical standard deviation) is calculated for
each analysis. Individual values exceeding two standard deviations are considered unacceptable, and a
binomial distribution is used to determine if overall performance is satisfactory or unsatisfactory.
Similarly, analytical bias is examined by applying a binomial distribution to the number of positive and
negative standard deviations.

Binomial Distribution Questions with Solutions

Let us practice some important questions on binomial distribution in probability.

Question 1:

Find the binomial distribution of getting a six in three tosses of an unbiased dice.

Solution:

Let X be the random variable of getting six. Then X can be 0, 1, 2, 3.

Here, n = 3

p = Probability of getting a six in a toss = ⅙

q = Probability of not getting a six in a toss = 1 – ⅙ = ⅚

P(X = 0) = nCr p^r q^(n – r) = 3C0 (⅙)^0 (⅚)^(3 – 0)

= 1 × 1 × 125/216 = 125/216

P(X = 1) = nCr p^r q^(n – r) = 3C1 (⅙)^1 (⅚)^(3 – 1)

= 3 × ⅙ × 25/36 = 25/72

P(X = 2) = nCr p^r q^(n – r) = 3C2 (⅙)^2 (⅚)^(3 – 2)

= 3 × 1/36 × ⅚ = 5/72

P(X = 3) = nCr p^r q^(n – r) = 3C3 (⅙)^3 (⅚)^(3 – 3)

= 1 × 1/216 × 1 = 1/216

The required binomial distribution of X is:


X 0 1 2 3
p(X) 125/216 25/72 5/72 1/216
Question 2:

Find the probability distribution of the number of doublets in four throws of a pair of dice.

Solution:

There are 36 equally likely outcomes when a pair of dice is thrown, of which the following are doublets
(the successes of the experiment): {(1,1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)}

p = probability of getting doublets = 6/36 = ⅙

q = probability of not getting a doublet = 1 – ⅙ = ⅚

X: numbers of doublets, then X = 0, 1, 2, 3, and 4.

P(X = 0) = nCr p^r q^(n – r) = 4C0 (⅙)^0 (⅚)^(4 – 0)

= 1 × 1 × 625/1296 = 625/1296

P(X = 1) = nCr p^r q^(n – r) = 4C1 (⅙)^1 (⅚)^(4 – 1)

= 4 × ⅙ × 125/216 = 125/324

P(X = 2) = nCr p^r q^(n – r) = 4C2 (⅙)^2 (⅚)^(4 – 2)

= 6 × 1/36 × 25/36

= 25/216

P(X = 3) = nCr p^r q^(n – r) = 4C3 (⅙)^3 (⅚)^(4 – 3)

= 4 × 1/216 × ⅚ = 20/1296

P(X = 4) = nCr p^r q^(n – r) = 4C4 (⅙)^4 (⅚)^(4 – 4)

= 1 × 1/1296 × 1 = 1/1296.

∴ The required probability distribution is:


X 0 1 2 3 4
P(X) 625/1296 125/324 25/216 20/1296 1/1296
Question 3:

Find the probability of getting at least 5 heads when an unbiased coin is tossed 6 times, using the
binomial distribution.

Solution:

p = P(getting a head in a single toss) = ½

q = P(not getting a head in a single toss) = ½

X = successfully getting a head

P(X ≥ 5) = P(getting at least 5 heads) = P(X = 5) + P(X = 6)

= 6C5 (½)^5 (½)^(6 – 5) + 6C6 (½)^6 (½)^(6 – 6)

= 6 × (½)^6 + 1 × (½)^6 = 7/64.

Hence, the probability of getting at least 5 heads is 7/64.
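
A quick check of this result (a minimal Python sketch using the binomial formula):

from math import comb
from fractions import Fraction

p = q = Fraction(1, 2)
prob = sum(comb(6, r) * p**r * q**(6 - r) for r in (5, 6))
print(prob)   # 7/64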

Question 4:

A lot contains 4 fused bulbs along with 10 good bulbs. If three bulbs are drawn at random with
replacement, find the probability distribution of the number of fused bulbs drawn.

Solution:

This is a binomial distribution problem, since the bulbs are drawn with replacement and the draws are therefore independent.

p = P(drawing a fused bulb) = 4/(10 + 4) = 2/7

q = P(drawing a bulb which is not fused) = 1 – 2/7 = 5/7

X = event of drawing a fused bulb

X can take up the values 0, 1, 2, 3

P(X = 0) = P(getting zero fused bulbs in all draws)


= nCr p^r q^(n – r)

= 3C0 (2/7)^0 (5/7)^(3 – 0)

= 1 × 1 × (125/343) = 125/343

P(X = 1) = P(getting one time fused bulb)

= nCr p^r q^(n – r)

= 3C1 (2/7)^1 (5/7)^(3 – 1)

= 3 × (2/7) × (25/49) = 150/343

P(X = 2) = P(getting two times fused bulbs)

= nCr p^r q^(n – r)

= 3C2 (2/7)^2 (5/7)^(3 – 2)

= 3 × 4/49 × (5/7) = 60/343

P(X = 3) = P(getting three times fused bulb)

= nCr p^r q^(n – r)

= 3C3 (2/7)^3 (5/7)^(3 – 3)

= 1 × 8/343 × 1 = 8/343

The required probability distribution:

X 0 1 2 3

P(X) 125/343 150/343 60/343 8/343


Question 5:
On average, every one out of 10 telephones is found busy. Six telephone numbers are selected at
random. Find the probability that four of them will be busy.

Solution:

Let X: event of getting a busy phone number

p = P(probability of getting a phone number busy) = 1/10

q = P(probability of not getting a phone number busy) = 9/10

The required probability = P(X = 4) = 6C4 p^4 q^(6 – 4)

= 15 × (1/10)^4 × (9/10)^2

= 15 × 81/10^6

= 0.001215.

Question 6:

An unbiased dice is thrown until three sixes are obtained. Find the probability of obtaining the third six
in the sixth throw.

Solution:

Since each throw is independent of the previous throws, we can apply the binomial distribution formula
to find the probability.

p = P(getting a six in a throw) = ⅙

q = P(not getting a six in a throw) = 1 – ⅙ = ⅚

For the third six to come on the sixth throw, exactly two sixes must be obtained in the first five throws, and the sixth throw itself must be a six.

∴ Required probability = P(getting exactly two sixes in five throws) × P(getting a six in the sixth throw)

= 5C2 p^2 q^3 × 1C1 p^1 q^(1 – 1)

= 10 × (⅙)^2 × (⅚)^3 × 1 × (⅙)

= 10 × (⅙)^3 × (⅚)^3
= 625/23328.

Question 7:

The probability of a boy guessing a correct answer is ¼. How many questions must he answer so that the
probability of guessing the correct answer at least once is greater than ⅔?

Solution:

p = P(guessing a correct answer) = ¼

q = P(not guessing a correct answer) = ¾

Suppose he answers n questions. Then

P(X ≥ 1) = P(guessing at least one correct answer out of n questions) = 1 – P(no success) = 1 – q^n

Given, 1 – q^n > ⅔ ⇒ 1 – (¾)^n > ⅔

⇒ (¾)^n < ⅓

Now, let us check the above inequality for different values of n = 1, 2, 3, 4, …

When n = 1

¾ ≮ ⅓

When n = 2

(¾)^2 ≮ ⅓

When n = 3

(¾)^3 ≮ ⅓

When n = 4

(¾)^4 < ⅓.

Thus, he must answer at least 4 questions.

Question 8:
When a biased coin is tossed, the probability of getting a head is 3 times the probability of
getting a tail. Find the probability distribution of the number of tails if the coin is tossed twice.

Solution:

Let the probability of getting a tail be p, then the probability of getting a head will be 3p

Now, p + 3p = 1 ⇒ p = ¼

q = P(not getting a tail) = 1 – ¼ = ¾

X = event of getting a tail in a toss

Then, possible values of x will be 0, 1, 2

P(X = 0) = 2C0 p^0 q^(2 – 0)

= 1 × 1 × (¾)^2

= 9/16

P(X = 1) = 2C1 p^1 q^(2 – 1)

= 2 × (¼) × (¾)

= ⅜

P(X = 2) = 2C2 p^2 q^(2 – 2)

= 1 × (¼)^2 × 1

= 1/16

The probability distribution for getting the tail is:

X 0 1 2
P(X) 9/16 3/8 1/16

Question 9:
A bag contains 5 green balls and 3 red balls. If two balls are drawn from the bag randomly with
replacement, find the probability distribution of the number of green balls drawn.

Solution:

Let p = P(getting a green ball) = 5/(5 + 3) = 5/8

q = P(not getting a green ball) = 1 – 5/8 = 3/8

X = event of drawing the green ball, then the value of X could be 0, 1, 2

P(X = 0) = Probability of getting no green ball = 2C0 p^0 q^(2 – 0) = 1 × 1 × (3/8)^2 = 9/64

P(X = 1) = Probability of getting one green ball = 2C1 p^1 q^(2 – 1) = 2 × (⅝) × (⅜) = 15/32

P(X = 2) = Probability of getting 2 green balls = 2C2 p^2 q^(2 – 2) = 1 × (⅝)^2 × (⅜)^0 = 25/64

The required probability distribution is:

X 0 1 2
P(X) 9/64 15/32 25/64
Question 10:

Find the probability distribution of getting the number of fours in three throws of a dice. Also, find the
mean and variance of the distribution.

Solution:

Let, p = P(getting a four in a throw of dice) = ⅙

q = P(not getting a four in a throw of dice) = ⅚

X: number of four obtained, then the value of X could be 0, 1, 2, 3.

P(X = 0) = 3C0 p^0 q^(3 – 0) = 1 × (⅚)^3 = 125/216

P(X = 1) = 3C1 p^1 q^(3 – 1) = 3 × (⅙) × (⅚)^2 = 75/216

P(X = 2) = 3C2 p^2 q^(3 – 2) = 3 × (⅙)^2 × (⅚) = 15/216

P(X = 3) = 3C3 p^3 q^(3 – 3) = 1 × (⅙)^3 × 1 = 1/216


The required probability distribution

X 0 1 2 3
P(X) 125/216 75/216 15/216 1/216
Mean = np = 3 × ⅙ = ½ = 0.5

Variance = npq = 3 × ⅙ × ⅚ = 5/12.
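
The mean and variance can be confirmed directly from the distribution table (a minimal Python sketch):

from fractions import Fraction as F

dist = {0: F(125, 216), 1: F(75, 216), 2: F(15, 216), 3: F(1, 216)}

mean = sum(x * p for x, p in dist.items())
var = sum(x**2 * p for x, p in dist.items()) - mean**2
print(mean, var)   # 1/2 5/12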

Normal Distribution Problem and Solution

Problem 1: For some computers, the time period between charges of the battery is normally
distributed with a mean of 50 hours and a standard deviation of 15 hours. Rohan has one of these
computers and needs to know the probability that the time period will be between 50 and 70 hours.

Solution: Let x be the random variable that represents the time period.

Given Mean, μ= 50

and standard deviation, σ = 15

To find: the probability that x is between 50 and 70, i.e. P(50 < x < 70)

By using the transformation equation, we know;

z = (X – μ) / σ

For x = 50 , z = (50 – 50) / 15 = 0

For x = 70 , z = (70 – 50) / 15 = 1.33

P( 50< x < 70) = P( 0< z < 1.33) = [area to the left of z = 1.33] – [area to the left of z = 0]

From the table we get the value, such as;

P( 0< z < 1.33) = 0.9082 – 0.5 = 0.4082

The probability that Rohan’s computer has a time period between 50 and 70 hours is equal to 0.4082.

Problem 2: The speeds of cars are measured using a radar unit, on a motorway. The speeds are
normally distributed with a mean of 90 km/hr and a standard deviation of 10 km/hr. What is the
probability that a car selected at random is moving at more than 100 km/hr?
Solution: Let the speed of a car be represented by the random variable x.

Now, given mean, μ = 90 and standard deviation, σ = 10.

To find: Probability that x is higher than 100 or P(x > 100)

By using the transformation equation, we know;

z = (X – μ) / σ

Hence,

For x = 100 , z = (100 – 90) / 10 = 1

P(x > 100) = P(z > 1) = [total area] – [area to the left of z = 1]

P(z > 1) = 1 – 0.8413 = 0.1587
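
Both answers can be reproduced from the standard normal CDF, Φ(z) = ½(1 + erf(z/√2)); a minimal Python sketch:

from math import erf, sqrt

def phi(z):
    # area under the standard normal curve to the left of z
    return 0.5 * (1 + erf(z / sqrt(2)))

# Problem 1: mean 50, sd 15, P(50 < x < 70)
print(round(phi((70 - 50) / 15) - phi((50 - 50) / 15), 4))   # ≈ 0.4088 (0.4082 when z is rounded to 1.33 as in the table)

# Problem 2: mean 90, sd 10, P(x > 100)
print(round(1 - phi((100 - 90) / 10), 4))                    # ≈ 0.1587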

NOTE: Apart from this, questions are given in class and for practice; questions from the
book are also to be done.
