Probability addition and multiplication theorems. Dependent and independent events. Probability of two events occurring jointly

Theorem

The probability of the joint occurrence of two events is equal to the product of the probability of one of them and the conditional probability of the other, calculated under the condition that the first has occurred.

$P(A B)=P(A) \cdot P(B | A)$

Event $A$ is called independent of event $B$ if the probability of event $A$ does not depend on whether event $B$ has occurred. Event $A$ is called dependent on event $B$ if the probability of event $A$ changes depending on whether event $B$ has occurred.

The probability of event $A$, calculated given that another event $B$ has taken place, is called the conditional probability of event $A$ and is denoted by $P(A | B)$.

The condition for the independence of event $A$ from event $B$ can be written as:

$$P(A | B)=P(A)$$

and the dependence condition is in the form:

$$P(A | B) \neq P(A)$$

Corollary 1. If event $A$ does not depend on event $B$, then event $B$ does not depend on event $A$.

Corollary 2. The probability of the product of two independent events is equal to the product of the probabilities of these events:

$$P(A B)=P(A) \cdot P(B)$$

The probability multiplication theorem can be generalized to the case of an arbitrary number of events. In general terms, it is formulated as follows.

The probability of several events occurring is equal to the product of the probabilities of these events, and the probability of each subsequent event in order is calculated provided that all previous ones took place:

$$P\left(A_{1} A_{2} \ldots A_{n}\right)=P\left(A_{1}\right) \cdot P\left(A_{2} | A_{1}\right) \cdot P\left(A_{3} | A_{1} A_{2}\right) \cdot \ldots \cdot P\left(A_{n} | A_{1} A_{2} \ldots A_{n-1}\right)$$

In the case of independent events, the theorem simplifies and takes the form:

$$P\left(A_{1} A_{2} \ldots A_{n}\right)=P\left(A_{1}\right) \cdot P\left(A_{2}\right) \cdot P\left(A_{3}\right) \cdot \ldots \cdot P\left(A_{n}\right)$$

that is, the probability of the product of independent events is equal to the product of the probabilities of these events:

$$P\left(\prod_{i=1}^{n} A_{i}\right)=\prod_{i=1}^{n} P\left(A_{i}\right)$$
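The chain rule above can be checked numerically with exact fractions. The three-ace draw below is a hypothetical example (not taken from the text), using a standard 52-card deck:

```python
from fractions import Fraction

# Chain rule for three dependent draws: P(A1 A2 A3) = P(A1) P(A2|A1) P(A3|A1 A2).
# Hypothetical example: drawing 3 aces in a row from a 52-card deck without replacement.
p = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)
print(p)  # 1/5525
```

Each factor is the conditional probability of drawing an ace given that all previous draws were aces.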

Examples of problem solving

Example

Exercise. There are 2 white and 3 black balls in the urn. Two balls are taken out of the urn in a row and are not returned. Find the probability that both balls are white.

Solution. Let event $A$ be the appearance of two white balls. This event is the product of two events:

$$A=A_{1} A_{2}$$

where event $A_1$ is the appearance of a white ball during the first removal, $A_2$ is the appearance of a white ball during the second removal. Then, by the probability multiplication theorem

$$P(A)=P\left(A_{1} A_{2}\right)=P\left(A_{1}\right) \cdot P\left(A_{2} | A_{1}\right)=\frac{2}{5} \cdot \frac{1}{4}=\frac{1}{10}=0.1$$

Answer. $0.1$

Example

Exercise. There are 2 white and 3 black balls in the urn. Two balls are drawn in a row from the urn. After the first draw, the ball is returned to the urn and the balls in the urn are mixed. Find the probability that both balls are white.

Solution. In this case, the events $A_1$ and $A_2$ are independent, and then the required probability

$$P(A)=P\left(A_{1} A_{2}\right)=P\left(A_{1}\right) \cdot P\left(A_{2}\right)=\frac{2}{5} \cdot \frac{2}{5}=\frac{4}{25}=0.16$$
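Both urn examples are easy to verify by simulation; a minimal Monte Carlo sketch (drawing without replacement should give a frequency near 0.1, with replacement near 0.16):

```python
import random

random.seed(0)
urn = ["white"] * 2 + ["black"] * 3
N = 100_000

# Without replacement: random.sample draws two distinct balls from the urn
no_repl = sum(random.sample(urn, 2) == ["white", "white"] for _ in range(N)) / N
# With replacement: two independent draws from the full urn
with_repl = sum(random.choice(urn) == "white" and random.choice(urn) == "white"
                for _ in range(N)) / N
print(round(no_repl, 2), round(with_repl, 2))  # close to 0.1 and 0.16
```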

Independent events

In the practical application of probabilistic and statistical decision-making methods, the concept of independence is used constantly. For example, when statistical methods are applied to product quality management, one speaks of independent measurements of the values of the controlled parameters of the product units included in the sample, of the independence of the occurrence of defects of one type from the occurrence of defects of another type, and so on. The independence of random events is understood in probabilistic models in the following sense.

Definition 2. Events A and B are called independent if P(AB) = P(A)P(B). Several events A, B, C, ... are called independent if the probability of their joint occurrence is equal to the product of the probabilities of each of them occurring separately: P(ABC…) = P(A)P(B)P(C)…

This definition corresponds to the intuitive notion of independence: the occurrence or non-occurrence of one event should not affect the occurrence or non-occurrence of another. The relation P(AB) = P(A)P(B|A) = P(B)P(A|B), valid for P(A)P(B) > 0, is also sometimes called the probability multiplication theorem.

Statement 1. Let events A and B be independent. Then the events Ā and B are independent, the events A and B̄ are independent, and the events Ā and B̄ are independent (here Ā is the event opposite to A, and B̄ is the event opposite to B).

Indeed, from property c) in (3) it follows that for events C and D whose product is empty, P(C + D) = P(C) + P(D). The intersection of AB and ĀB is empty, while their union is B, so P(AB) + P(ĀB) = P(B). Since A and B are independent, P(ĀB) = P(B) − P(AB) = P(B) − P(A)P(B) = P(B)(1 − P(A)). Note now that from relations (1) and (2) it follows that P(Ā) = 1 − P(A). Hence P(ĀB) = P(Ā)P(B).

The derivation of the equality P(AB̄) = P(A)P(B̄) differs from the previous one only in that A and B are swapped everywhere.

To prove the independence of Ā and B̄, we use the fact that the events AB, ĀB, AB̄, ĀB̄ have no pairwise common elements, while together they make up the entire space of elementary events. Hence P(AB) + P(ĀB) + P(AB̄) + P(ĀB̄) = 1. Using the relations already proved, we obtain P(ĀB̄) = 1 − P(A)P(B) − P(B)(1 − P(A)) − P(A)(1 − P(B)) = (1 − P(A))(1 − P(B)) = P(Ā)P(B̄), which is what needed to be proved.

Example 3. Consider the experiment of throwing a die with the numbers 1, 2, 3, 4, 5, 6 written on its faces. We assume that all faces have the same chance of ending up on top. Let us construct the corresponding probability space and show that the events "an even number is on top" and "a number divisible by 3 is on top" are independent.

Analysis of the example. The space of elementary outcomes consists of 6 elements: "the face with 1 is on top", "the face with 2 is on top", ..., "the face with 6 is on top". The event "an even number is on top" consists of three elementary outcomes: 2, 4 or 6 on top. The event "a number divisible by 3 is on top" consists of two elementary outcomes: 3 or 6 on top. Since all faces have the same chance of ending up on top, all elementary outcomes must have the same probability; as there are 6 of them in total, each has probability 1/6. By definition, the event "an even number is on top" has probability 1/2, and the event "a number divisible by 3 is on top" has probability 1/3. The product of these events consists of the single elementary outcome "the face with 6 is on top" and therefore has probability 1/6. Since 1/6 = 1/2 · 1/3, the events in question are independent by the definition of independence.
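This enumeration can be reproduced in a few lines of exact-fraction arithmetic; a minimal sketch:

```python
from fractions import Fraction

# Direct enumeration of Example 3: a fair die, A = "even number", B = "divisible by 3".
outcomes = range(1, 7)
A = {n for n in outcomes if n % 2 == 0}   # {2, 4, 6}
B = {n for n in outcomes if n % 3 == 0}   # {3, 6}

P = lambda event: Fraction(len(event), 6)  # classical probability on 6 equal outcomes
print(P(A & B), P(A) * P(B))  # both 1/6, so A and B are independent
```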

Events A, B are called independent if the probability of each of them does not depend on whether the other event has occurred. The probabilities of independent events are called unconditional.

Events A, B are called dependent if the probability of each of them depends on whether the other event has occurred. The probability of event B, calculated under the assumption that another event A has already occurred, is called a conditional probability.

If two events A and B are independent, then the equalities are true:

P(B) = P(B/A), P(A) = P(A/B) or P(B/A) − P(B) = 0 (9)

The probability of the product of two dependent events A, B is equal to the product of the probability of one of them by the conditional probability of the other:

P(AB) = P(B) ∙ P(A/B) or P(AB) = P(A) ∙ P(B/A) (10)

The probability of event B given the occurrence of event A:

P(B/A) = P(AB) / P(A) (11)

Probability of the product of two independent events A, B is equal to the product of their probabilities:

P(AB) = P(A) ∙ P(B) (12)

If several events are pairwise independent, then it does not follow that they are independent in the aggregate.

Events A1, A2, ..., An (n > 2) are called independent in the aggregate if the probability of each of them does not depend on whether any of the other events occurred or not.

The probability of the joint occurrence of several events that are independent in the aggregate is equal to the product of the probabilities of these events:

P(A1∙A2∙A3∙…∙An) = P(A1)∙P(A2)∙P(A3)∙…∙P(An). (13)


Lecture notes: basic concepts of probability theory and statistics used in econometrics

Kazan State Financial and Economic Institute, Department of Statistics and Econometrics.


Let's start with independent events. Events are independent if the probability of the occurrence of any of them does not depend on the appearance or non-appearance of the other events of the set under consideration (in all possible combinations).

Theorem for multiplying the probabilities of independent events: the probability of the joint occurrence of independent events A and B is equal to the product of the probabilities of these events: P(AB) = P(A) × P(B)

Let's return to the simplest example of the 1st lesson, in which two coins are tossed and the following events are considered:

A1 – as a result of the toss, the 1st coin will land heads;
A2 – as a result of the toss, the 2nd coin will land heads.

Let's find the probability of the event A1A2 (heads will appear on the 1st coin and heads will appear on the 2nd coin – recall how a product of events is read!). The probability of heads on one coin does not depend in any way on the result of tossing the other coin; therefore, the events A1 and A2 are independent. By the theorem of multiplication of probabilities of independent events:

P(A1A2) = P(A1) × P(A2) = 1/2 × 1/2 = 1/4
Likewise:

1/2 × 1/2 = 1/4 – the probability that the 1st coin will land heads and the 2nd tails;

1/2 × 1/2 = 1/4 – the probability that the 1st coin will land tails and the 2nd heads;

1/2 × 1/2 = 1/4 – the probability that the 1st coin will land tails and the 2nd tails.

Note that these four events form a full group, and the sum of their probabilities equals one: 1/4 + 1/4 + 1/4 + 1/4 = 1
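The full group of two-coin outcomes can be checked by direct enumeration; a small sketch:

```python
from fractions import Fraction
from itertools import product

# The four outcomes of tossing two fair coins; each has probability 1/2 * 1/2 = 1/4.
outcomes = list(product(["heads", "tails"], repeat=2))
probs = {o: Fraction(1, 2) * Fraction(1, 2) for o in outcomes}
print(len(outcomes), sum(probs.values()))  # 4 outcomes, probabilities summing to 1
```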

The multiplication theorem obviously extends to any larger number of independent events; for example, if the events A, B, C are independent, then the probability of their joint occurrence is equal to: P(ABC) = P(A) × P(B) × P(C).

Problem 3

Each of the three boxes contains 10 parts. The first box contains 8 standard parts, the second – 7, the third – 9. One part is randomly removed from each box. Find the probability that all parts will be standard.

Solution: The probability of drawing a standard or non-standard part from any box does not depend on what parts are taken from other boxes, so the problem deals with independent events. Consider the following independent events:

S1 – a standard part is drawn from the 1st box;

S2 – a standard part is drawn from the 2nd box;

S3 – a standard part is drawn from the 3rd box.

According to the classical definition, the corresponding probabilities are: P(S1) = 8/10 = 0.8; P(S2) = 7/10 = 0.7; P(S3) = 9/10 = 0.9.

The event of interest to us (a standard part is drawn from the 1st box and a standard part from the 2nd and a standard part from the 3rd) is expressed by the product S1S2S3.

According to the theorem of multiplication of probabilities of independent events:

P(S1S2S3) = P(S1) × P(S2) × P(S3) = 0.8 × 0.7 × 0.9 = 0.504 – the probability that a standard part is drawn from each of the 3 boxes.

Answer: the probability that all parts will be standard is 0.504
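The computation for Problem 3 can be written as a one-pass product; a brief sketch:

```python
# Direct computation for Problem 3: one part is drawn from each of three boxes of 10 parts,
# containing 8, 7 and 9 standard parts respectively.
p_standard = [8 / 10, 7 / 10, 9 / 10]

p_all = 1.0
for p in p_standard:
    p_all *= p  # multiplication theorem for independent events
print(round(p_all, 3))  # 0.504
```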

Problem 4 (for independent solution)

Each of three urns contains 6 white and 4 black balls. One ball is drawn at random from each urn. Find the probability that: a) all three balls will be white; b) all three balls will be the same color.

Based on the information above, work out how to handle point b). An approximate sample solution, written in an academic style with a detailed description of all the events, is given at the end of the lesson.

Dependent events. An event X is called dependent if its probability P(X) depends on one or more events that have already occurred. You don't have to look far for examples – just go to the nearest store:

X – tomorrow at 19:00 fresh bread will be on sale.

The likelihood of this event depends on many other events: whether fresh bread will be delivered tomorrow, whether it sells out before 7 pm, and so on. Depending on the circumstances, this event may turn out to be certain, P(X) = 1, or impossible, P(X) = 0. Thus, the event X is dependent.

Another example: B – at the exam, the student will draw an easy ticket.

If you are not the very first in line, then the event B will be dependent, since its probability P(B) will depend on which tickets your classmates have already drawn.

Initially just a collection of information and empirical observations about the game of dice, probability theory became a rigorous science. The first to give it a mathematical framework were Fermat and Pascal.

From thinking about the eternal to the theory of probability

The two individuals to whom probability theory owes many of its fundamental formulas, Blaise Pascal and Thomas Bayes, are known as deeply religious people, the latter being a Presbyterian minister. Apparently, the desire of these two scientists to prove the fallacy of the opinion about a certain Fortune giving good luck to her favorites gave impetus to research in this area. After all, in fact, any gambling game with its winnings and losses is just a symphony of mathematical principles.

Thanks to the passion of the Chevalier de Mere, who was equally a gambler and a man not indifferent to science, Pascal was forced to find a way to calculate probability. De Mere was interested in the following question: "How many times must a pair of dice be thrown so that the probability of getting 12 points exceeds 50%?" The second question, which greatly interested the gentleman: "How should the stake be divided between the participants in an unfinished game?" Pascal, of course, successfully answered both of de Mere's questions, and de Mere thereby became the unwitting initiator of the development of probability theory. It is interesting that de Mere's name remained known in this field rather than in literature.

Previously, no mathematician had attempted to calculate the probabilities of events, since it was believed that this was only a matter of guesswork. Blaise Pascal gave the first definition of the probability of an event and showed that it is a specific number that can be justified mathematically. Probability theory has become the basis of statistics and is widely used in modern science.

What is randomness

If we consider an experiment that can be repeated an arbitrary number of times, then we can define a random event: one of the possible outcomes of the experiment.

An experiment is the performance of specific actions under constant conditions.

To be able to work with the results of the experiment, events are usually designated by the letters A, B, C, D, E...

Probability of a random event

In order to begin the mathematical part of probability, it is necessary to define all its components.

The probability of an event is a numerical measure of the possibility of some event (A or B) occurring as a result of an experiment. The probability is denoted as P(A) or P(B).

In probability theory they distinguish:

  • a certain event is guaranteed to occur as a result of the experiment: P(Ω) = 1;
  • an impossible event can never happen: P(Ø) = 0;
  • a random event lies between certain and impossible: its occurrence is possible but not guaranteed (the probability of a random event always satisfies 0 ≤ P(A) ≤ 1).

Relationships between events

One also considers the sum of events A+B: this event is counted as having occurred when at least one of the components, A or B, or both A and B, occurs.

In relation to each other, events can be:

  • Equally possible.
  • Compatible.
  • Incompatible.
  • Opposite (mutually exclusive).
  • Dependent.

If two events can happen with equal probability, they are equally possible.

If the occurrence of event A does not reduce to zero the probability of the occurrence of event B, they are compatible.

If events A and B never occur simultaneously in the same experiment, they are called incompatible. Tossing a coin is a good example: the appearance of heads is automatically the non-appearance of tails.

The probability for the sum of such incompatible events consists of the sum of the probabilities of each of the events:

P(A+B)=P(A)+P(B)

If the occurrence of one event makes the occurrence of another impossible, then they are called opposite. Then one of them is designated as A, and the other - Ā (read as “not A”). The occurrence of event A means that Ā did not occur. These two events form a complete group with a sum of probabilities equal to 1.

Dependent events have mutual influence, decreasing or increasing the probability of each other.

Relationships between events. Examples

Using examples it is much easier to understand the principles of probability theory and combinations of events.

The experiment that will be carried out consists of taking balls out of a box, and the result of each experiment is an elementary outcome.

An event is one of the possible outcomes of an experiment - a red ball, a blue ball, a ball with number six, etc.

Test No. 1. There are 6 balls involved, three of which are blue with odd numbers on them, and the other three are red with even numbers.

Test No. 2. There are 6 blue balls with numbers from one to six.

Based on this example, we can name combinations:

  • Certain event. In trial No. 2 the event "draw a blue ball" is certain: all the balls are blue, a miss is impossible, and the probability of the event equals 1. The event "draw the ball with the number 1", by contrast, is random.
  • Impossible event. In trial No. 1 with blue and red balls, the event "draw a purple ball" is impossible, since its probability is 0.
  • Equally possible events. In trial No. 1, the events "draw the ball with the number 2" and "draw the ball with the number 3" are equally possible, while the events "draw a ball with an even number" and "draw the ball with the number 2" have different probabilities.
  • Compatible events. Getting a six twice in a row while throwing a die is a pair of compatible events.
  • Incompatible events. In the same trial No. 1, the events "draw a red ball" and "draw a ball with an odd number" cannot occur in the same experiment.
  • Opposite events. The most striking example is coin tossing, where drawing heads is equivalent to not drawing tails, and the sum of their probabilities is always 1 (a full group).
  • Dependent events. In trial No. 1, one can set the goal of drawing a red ball twice in a row. Whether or not a red ball is drawn the first time affects the probability of drawing one the second time.

It can be seen that the first event significantly affects the probability of the second: if the first ball drawn is red, the probability that the second is also red is 2/5 = 40%; if it is not, the probability is 3/5 = 60%.

Event probability formula

The transition from guesswork to precise data occurs by translating the topic into mathematical terms: judgments about a random event such as "high probability" or "minimal probability" become specific numerical values that can be evaluated, compared and entered into more complex calculations.

From a computational point of view, the probability of an event is the ratio of the number of favorable elementary outcomes to the number of all possible outcomes of the experiment. Probability is denoted by P(A), where P stands for "probabilité", French for "probability".

So, the formula for the probability of an event is:

P(A) = m / n,

where m is the number of outcomes favorable to event A, and n is the number of all possible outcomes of the experiment. The probability of an event always lies between 0 and 1:

0 ≤ P(A)≤ 1.
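The classical formula P(A) = m/n can be wrapped in a tiny helper; a sketch (the function name is ours, not from the text):

```python
from fractions import Fraction

def classical_probability(favorable: int, total: int) -> Fraction:
    """Classical definition: P(A) = m / n for equally likely outcomes."""
    if not 0 <= favorable <= total or total == 0:
        raise ValueError("need 0 <= m <= n and n > 0")
    return Fraction(favorable, total)

# 3 favorable outcomes out of 6, e.g. drawing a red ball in trial No. 1
print(classical_probability(3, 6))  # 1/2
```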

Calculation of the probability of an event. Example

Let's take trial No. 1 with the balls described earlier: 3 blue balls with the numbers 1/3/5 and 3 red balls with the numbers 2/4/6.

Based on this test, several different problems can be considered:

  • A – drawing a red ball. There are 3 red balls out of 6 in total. This is the simplest case: P(A) = 3/6 = 0.5.
  • B – drawing a ball with an even number. There are 3 even numbers (2, 4, 6) out of 6 possible numbers. The probability of this event is P(B) = 3/6 = 0.5.
  • C – drawing a ball with a number greater than 2. There are 4 such numbers (3, 4, 5, 6) out of a total of 6 possible outcomes. The probability of event C is P(C) = 4/6 ≈ 0.67.

As the calculations show, event C has a higher probability, since its number of favorable outcomes is greater than for A and B.

Incompatible events

Such events cannot occur simultaneously in the same experiment. As in trial No. 1, it is impossible to draw a blue and a red ball at the same time: you can draw either a blue or a red ball. In the same way, a die cannot show an even and an odd number at the same time.

For two events one considers the probability of their sum or product. The sum A+B of such events is the event that consists in the occurrence of event A or event B, and their product AB is the occurrence of both, for example the appearance of two sixes at once on the faces of two dice in one throw.

The sum of several events is an event that presupposes the occurrence of at least one of them. The product of several events is the joint occurrence of all of them.

In probability theory, as a rule, the conjunction "or" corresponds to a sum and the conjunction "and" to a product. Formulas with examples will help you understand the logic of addition and multiplication in probability theory.

Probability of the sum of incompatible events

If the probability of incompatible events is considered, then the probability of the sum of events is equal to the addition of their probabilities:

P(A+B)=P(A)+P(B)

For example: let's calculate the probability that in trial No. 1 with blue and red balls a number between 1 and 4 will appear. We will calculate it not in one step but as the sum of the probabilities of the elementary components. In this experiment there are only 6 balls, that is, 6 possible outcomes. The numbers that satisfy the condition are 2 and 3. The probability of getting the number 2 is 1/6, and the probability of getting the number 3 is also 1/6. The probability of getting a number between 1 and 4 is: P = 1/6 + 1/6 = 1/3

The probability of the sum of incompatible events of a complete group is 1.

So, if in the experiment with a die we add up the probabilities of all the numbers appearing, the result will be one.

This is also true for opposite events, for example in the experiment with a coin, where one side is the event A, and the other is the opposite event Ā, as is known,

P(A) + P(Ā) = 1

Probability of independent events occurring jointly

Probability multiplication is used when considering the joint occurrence of two or more independent events in one observation. The probability that events A and B will appear in it simultaneously is equal to the product of their probabilities:

P(A·B) = P(A) · P(B)

For example, the probability that in trial No. 1, as a result of two attempts (the drawn ball being returned each time), a blue ball will appear twice is equal to:

P = 1/2 × 1/2 = 1/4

That is, the probability of the event in which only blue balls are drawn in two attempts is 25%. It is very easy to run practical experiments on this problem and check whether this is actually the case.
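This check is straightforward to run; a minimal Monte Carlo sketch of trial No. 1 with the ball returned after each draw (so the draws are independent):

```python
import random

random.seed(1)
balls = ["blue"] * 3 + ["red"] * 3   # 3 blue and 3 red balls
N = 100_000
# Two independent draws with replacement; count runs where both are blue
hits = sum(random.choice(balls) == "blue" and random.choice(balls) == "blue"
           for _ in range(N))
print(round(hits / N, 2))  # close to 0.25
```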

Joint events

Events are considered joint when the occurrence of one of them can coincide with the occurrence of the other. Even though they are joint, the probability of independent events is considered here. For example, throwing two dice can give a result where the number 6 appears on both of them. Although the events coincide and appear at the same time, they are independent of each other: only one six could have fallen, as the second die has no influence on the first.

The probability of joint events is considered as the probability of their sum.

Probability of the sum of joint events. Example

The probability of the sum of events A and B, which are joint with respect to each other, is equal to the sum of the probabilities of the events minus the probability of their product (that is, of their joint occurrence):

P(A+B) = P(A) + P(B) − P(AB)

Let's assume that the probability of hitting the target with one shot is 0.4. Then event A is hitting the target with the first shot, B with the second. These events are joint, since it is possible to hit the target with both the first and the second shot, but they are independent. What is the probability of hitting the target with two shots (at least one of them)? According to the formula:

0.4 + 0.4 − 0.4 × 0.4 = 0.64

The answer to the question is: “The probability of hitting the target with two shots is 64%.”

This formula for the probability of an event can also be applied to incompatible events, where the probability of the joint occurrence of an event P(AB) = 0. This means that the probability of the sum of incompatible events can be considered a special case of the proposed formula.
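The inclusion-exclusion computation for the shooting example takes one line; a brief sketch:

```python
# Inclusion-exclusion for the shooting example: independent shots, each hitting with 0.4.
p_a = p_b = 0.4
p_hit = p_a + p_b - p_a * p_b  # P(A + B) = P(A) + P(B) - P(AB)
print(round(p_hit, 2))  # 0.64
```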

Geometry of probability for clarity

Interestingly, the probability of the sum of joint events can be pictured as two regions A and B that intersect each other: the area of their union is equal to the total area of the two regions minus the area of their intersection. This geometric explanation makes the seemingly illogical formula more understandable. Note that geometric solutions are not uncommon in probability theory.

Determining the probability of the sum of many (more than two) joint events is quite cumbersome. To calculate it, you need to use the formulas that are provided for these cases.

Dependent Events

Events are called dependent if the occurrence of one of them (A) affects the probability of the occurrence of the other (B). The influence of both the occurrence and the non-occurrence of event A is taken into account. Although the events are called dependent by definition, only one of them (B) is actually dependent. Ordinary probability was denoted P(B), the probability of an independent event. For dependent events a new concept is introduced: the conditional probability P_A(B), which is the probability of the dependent event B given the occurrence of event A (the hypothesis) on which it depends.

But event A is also random, so it also has a probability that needs and can be taken into account in the calculations performed. The following example will show how to work with dependent events and a hypothesis.

An example of calculating the probability of dependent events

A good example for calculating dependent events would be a standard deck of cards.

Using a deck of 36 cards as an example, let’s look at dependent events. We need to determine the probability that the second card drawn from the deck will be of diamonds if the first card drawn is:

  1. Of diamonds.
  2. Of a different suit.

Obviously, the probability of the second event B depends on the first event A. If the first option holds, the deck contains one card fewer (35 cards) and one diamond fewer (8 diamonds), so the probability of event B is:

P_A(B) = 8/35 ≈ 0.23

If the second option holds, the deck contains 35 cards but the full number of diamonds (9) is still retained, so the probability of event B is:

P_A(B) = 9/35 ≈ 0.26

It can be seen that if event A is that the first card drawn is a diamond, then the probability of event B decreases, and vice versa.

Multiplying dependent events

Following the previous chapter, we accept the first event (A) as given, but in essence it too is random. The probability of this event, namely drawing a diamond from the deck, is equal to:

P(A) = 9/36=1/4

Since the theory does not exist for its own sake but is intended to serve practical purposes, it is fair to note that what is most often needed is the probability of a product of dependent events.

According to the theorem on the product of probabilities of dependent events, the probability of occurrence of jointly dependent events A and B is equal to the probability of one event A, multiplied by the conditional probability of event B (dependent on A):

P(AB) = P(A) · P_A(B)

Then, in the deck example, the probability of drawing two cards with the suit of diamonds is:

9/36 × 8/35 ≈ 0.057, or 5.7%

And the probability of drawing a non-diamond first and then a diamond is equal to:

27/36 × 9/35 ≈ 0.19, or 19%

It can be seen that the probability of event B occurring is greater provided that the first card drawn is of a suit other than diamonds. This result is quite logical and understandable.
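Both card scenarios can be computed exactly with fractions; a minimal sketch:

```python
from fractions import Fraction

# Deck of 36 cards with 9 diamonds; exact probabilities of the two scenarios above.
both_diamonds = Fraction(9, 36) * Fraction(8, 35)        # diamond, then diamond
other_then_diamond = Fraction(27, 36) * Fraction(9, 35)  # non-diamond, then diamond
print(float(both_diamonds), float(other_then_diamond))   # about 0.057 and 0.193
```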

Total probability of an event

When a problem with conditional probabilities becomes many-sided, it cannot be calculated by the usual methods alone. Suppose there are more than two hypotheses, namely A1, A2, ..., An, which form a complete group of events provided that:

  • P(Ai) > 0, i = 1, 2, …, n;
  • Ai ∩ Aj = Ø for i ≠ j;
  • A1 ∪ A2 ∪ … ∪ An = Ω.

So, the total probability formula for an event B with a complete group of random events A1, A2, ..., An is:

P(B) = P(A1)·P(B|A1) + P(A2)·P(B|A2) + … + P(An)·P(B|An)
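The total probability formula can be sketched with hypothetical numbers (the factory setup below is our illustration, not from the text):

```python
# Total probability sketch: a part comes from one of three factories with
# P(A1)=0.5, P(A2)=0.3, P(A3)=0.2, and the conditional probabilities of it
# being defective are P(B|A1)=0.02, P(B|A2)=0.03, P(B|A3)=0.05.
priors = [0.5, 0.3, 0.2]
conditionals = [0.02, 0.03, 0.05]

p_b = sum(p * c for p, c in zip(priors, conditionals))  # P(B) = sum of P(Ai) * P(B|Ai)
print(round(p_b, 3))  # 0.029
```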

A look into the future

The probability of a random event is extremely important in many areas of science: econometrics, statistics, physics, etc. Since some processes cannot be described deterministically, being probabilistic in nature, special methods of working with them are required. The theory of event probability can be used in any technological field as a way to determine the possibility of an error or malfunction.

We can say that by recognizing probability, we in some way take a theoretical step into the future, looking at it through the prism of formulas.
