# Probability Theory


Probability theory is the branch of mathematics that deals with the analysis of random phenomena. The actual outcome of a random event cannot be determined before it occurs; it may be any one of several possible outcomes, and which outcome actually occurs is a matter of chance.

Several scholars have given their own definitions of probability. The interpretation of probabilities as relative frequencies is particularly important for the development and application of the mathematical theory of probability. In this interpretation, simple games involving coins, dice, cards, and roulette wheels serve as examples. The distinctive feature of a game of chance is that the outcome of a given trial cannot be predicted with certainty, yet the collective results of a large number of trials display some regularity.

For example, when tossing a fair coin, the probability of getting “heads” equals one-half. This means that over a large number of tosses, the relative frequency with which “heads” occurs will be approximately one-half. It implies nothing, however, about the outcome of any given toss. Other examples involve groups of people, genes, molecules of a gas, and so on.
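The relative-frequency idea above can be seen directly in a short simulation. This is a minimal sketch; the random seed and the trial counts are arbitrary choices of ours, not part of the original text.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def heads_frequency(n_tosses):
    """Return the fraction of n_tosses simulated coin tosses that come up heads."""
    heads = sum(1 for _ in range(n_tosses) if random.random() < 0.5)
    return heads / n_tosses

# As the number of tosses grows, the relative frequency settles near 0.5,
# even though no single toss is predictable.
for n in (10, 1_000, 100_000):
    print(n, heads_frequency(n))
```

With only 10 tosses the frequency can be far from 0.5; with 100,000 it is very close, which is exactly the regularity in large numbers of trials that the paragraph describes.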

## Fundamental concepts related to probability theory that our online experts are well-versed with

• Random Experiment

A random experiment refers to a physical event or situation whose outcome cannot be predicted until it is observed.

• Sample Space

This is a set of all possible outcomes of a random experiment.

• Random variables

This is a variable whose possible values are numerical outcomes of a random experiment. Random variables fall into two classes. A discrete random variable may take on only a countable number of distinct values such as 0, 1, 2, 3, …; values of this class are usually (but not necessarily) counts. A continuous random variable can take an infinite number of possible values; these are usually measurements.
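The two classes can be illustrated by drawing one value of each kind. This is an illustrative sketch: the die roll and the uniform measurement are our own example choices.

```python
import random

random.seed(0)  # reproducible draws

# Discrete random variable: the outcome of a fair six-sided die,
# which takes one of the countable values 1, 2, ..., 6.
die_roll = random.randint(1, 6)

# Continuous random variable: a measurement drawn uniformly from [0, 1),
# which can take any of infinitely many values in that interval.
measurement = random.random()

print(die_roll, measurement)
```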

• Probability

Probability is the chance or likelihood that an event will occur when a random experiment is conducted. It is quantified as a number between 0 and 1, where 0 indicates impossibility and 1 indicates certainty. Saying that an event has a higher probability means it is more likely to occur. For example, a coin toss has only two possible results, “heads” or “tails”, and the two are equally probable: the probability of “heads” equals that of “tails”. Since there is no other outcome, the probability of either “heads” or “tails” is ½, which can also be written as 50% or 0.5.
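For equally likely outcomes, this probability is simply the number of favorable outcomes divided by the total number of outcomes. A small sketch of that calculation (the `probability` helper is our own illustrative name):

```python
from fractions import Fraction

def probability(favorable, total):
    """Probability of an event with equally likely outcomes:
    favorable outcomes divided by total outcomes, kept as an exact fraction."""
    return Fraction(favorable, total)

# Tossing a fair coin: one favorable outcome ("heads") out of two.
p_heads = probability(1, 2)
print(p_heads)          # 1/2
print(float(p_heads))   # 0.5
```

Using `Fraction` keeps the result exact, so ½ prints as `1/2` rather than a rounded decimal.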

• Conditional Probability

Conditional probability is the probability that one event occurs given that another event has already occurred. For example, suppose we have two events, A and B. If our event of interest is A and B is known to have occurred, the conditional probability of A given B is written P(A|B).
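P(A|B) can be computed by enumeration as P(A and B) / P(B). The events A and B below are our own illustrative examples on a fair die, not taken from the text.

```python
from fractions import Fraction

# Sample space of a fair six-sided die.
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # event A: the roll is even
B = {4, 5, 6}   # event B: the roll is greater than 3

def prob(event):
    """Probability of an event in a finite, equally likely sample space."""
    return Fraction(len(event), len(sample_space))

# P(A|B) = P(A and B) / P(B)
p_A_given_B = prob(A & B) / prob(B)
print(p_A_given_B)  # 2/3
```

Knowing the roll is greater than 3 leaves three outcomes {4, 5, 6}, of which two are even, so the conditional probability rises from ½ to ⅔.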

• Independence

If the probability of one event occurring does not in any way affect the probability of another event occurring, the two events are said to be independent. In other words, observing one event tells us nothing about the probability of the other. For example, suppose we roll a die and flip a coin: the probability of getting heads or tails on the coin is not in any way influenced by which face comes up on the die.
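For independent events, P(A and B) = P(A) × P(B). The die-and-coin example above can be checked by enumerating the joint sample space (the specific events chosen are illustrative):

```python
from fractions import Fraction
from itertools import product

# Joint sample space: every (die face, coin side) pair, all equally likely.
die = range(1, 7)
coin = ("heads", "tails")
outcomes = list(product(die, coin))  # 12 equally likely pairs

def prob(event):
    return Fraction(len(event), len(outcomes))

A = [(d, c) for d, c in outcomes if c == "heads"]  # coin shows heads
B = [(d, c) for d, c in outcomes if d == 6]        # die shows a six
A_and_B = [o for o in outcomes if o in A and o in B]

# Independence: P(A and B) equals P(A) * P(B).
print(prob(A_and_B) == prob(A) * prob(B))  # True
```

Here P(A) = ½, P(B) = ⅙, and P(A and B) = 1⁄12 = ½ × ⅙, confirming the product rule.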

• Conditional Independence

Two events A and B are said to be conditionally independent given a third event C if and only if, once we know that C has occurred, they are independent in their conditional probabilities: P(A and B | C) = P(A|C) · P(B|C). Given C, knowing whether A occurs offers no additional information about the likelihood of B occurring, and knowing whether B occurs offers no additional information about the likelihood of A occurring.
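A classic illustration: pick one of two coins at random (C identifies which coin was chosen), then flip it twice. Given the coin, the two flips are independent. The specific coins and bias values below are our own illustrative assumptions.

```python
from fractions import Fraction

half = Fraction(1, 2)
p_heads = {"fair": half, "biased": Fraction(3, 4)}  # P(heads | coin)
p_coin = {"fair": half, "biased": half}             # P(coin chosen)

def joint(coin, flip1, flip2):
    """P(coin, flip1, flip2), with the two flips independent given the coin."""
    p1 = p_heads[coin] if flip1 == "H" else 1 - p_heads[coin]
    p2 = p_heads[coin] if flip2 == "H" else 1 - p_heads[coin]
    return p_coin[coin] * p1 * p2

# A = "first flip heads", B = "second flip heads", C = which coin was chosen.
for coin in p_coin:
    p_c = p_coin[coin]
    p_A_given_C = sum(joint(coin, "H", f) for f in "HT") / p_c
    p_B_given_C = sum(joint(coin, f, "H") for f in "HT") / p_c
    p_AB_given_C = joint(coin, "H", "H") / p_c
    # Conditional independence: P(A and B | C) == P(A|C) * P(B|C).
    print(coin, p_AB_given_C == p_A_given_C * p_B_given_C)  # True
```

Note that A and B are *not* unconditionally independent here: seeing a first head makes the biased coin more likely, which raises the chance of a second head. Conditioning on C removes that link.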

• Expectation

If X is a random variable, its expectation is written E(X). If we observe N random values of X, the mean of those N values will be approximately equal to E(X) for large N. Put plainly, the expectation is, as the name suggests, what you would expect the outcome of an experiment to be on average if the experiment were repeated many times.
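A fair die makes this concrete: E(X) = (1 + 2 + … + 6)/6 = 3.5, and the mean of many simulated rolls approaches it. The seed and sample size in this sketch are arbitrary choices.

```python
import random

random.seed(1)  # reproducible rolls

# Expectation of a fair die: the probability-weighted average of its faces.
expected = sum(range(1, 7)) / 6  # 3.5

def sample_mean(n_rolls):
    """Mean of n_rolls simulated rolls of a fair six-sided die."""
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

print(expected)              # 3.5
print(sample_mean(100_000))  # close to 3.5 for large N
```

No single roll ever equals 3.5; the expectation describes the long-run average, not any individual outcome.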

• Variance

Variance measures how spread out the distribution of a random variable is around its mean: a small variance means the values cluster tightly about the mean, while a large variance means they are widely dispersed.
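Continuing the fair-die example, the variance can be computed exactly via the standard identity Var(X) = E(X²) − E(X)². This is a small illustrative calculation, not a library routine.

```python
from fractions import Fraction

values = range(1, 7)   # faces of a fair die
p = Fraction(1, 6)     # each face is equally likely

mean = sum(p * x for x in values)          # E(X)   = 7/2
mean_sq = sum(p * x * x for x in values)   # E(X^2) = 91/6
variance = mean_sq - mean**2               # 91/6 - 49/4 = 35/12

print(variance)  # 35/12
```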

Opt for our probability theory assignment help if you are struggling with your homework in any of these areas or any other topic.