Answer:
See explanation below.
Step-by-step explanation:
If our random variable X is discrete, the expected value is given by:
[tex] E(X) = \mu = \sum_{i=1}^n X_i P(X_i)[/tex]
Where [tex] X_i[/tex] represents the possible values of the random variable and [tex] P(X_i)[/tex] the respective probabilities, so it is like a weighted average. The only difference is that the mean is defined as:
[tex] \bar X = \frac{\sum_{i=1}^n X_i}{n}[/tex]
In this mean the weight for each observation is [tex] \frac{1}{n}[/tex], while in the expected value the weights can differ. When all outcomes are equally likely, the two formulas are equivalent.
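The discrete case can be sketched in a short script (a hypothetical example with a fair six-sided die, not taken from the original question), showing that the weighted-average form of E(X) matches the ordinary mean when every outcome has the same probability:

```python
# Hypothetical example: expected value of a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6  # equal weights, like the ordinary mean

# Weighted-average form: E(X) = sum of X_i * P(X_i)
expected = sum(x * p for x, p in zip(values, probs))

# Ordinary mean, where each observation carries weight 1/n
mean = sum(values) / len(values)

print(expected)  # 3.5
print(mean)      # 3.5
```

With unequal probabilities the two numbers would differ, which is exactly the distinction made above.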
If our random variable is continuous, the expected value is given by:
[tex] E(X) =\mu = \int_{a}^b x f(x) dx [/tex]
Where [tex] f(x)[/tex] represents the density function of the random variable, a is the lower limit and b the upper limit of the interval where the random variable is defined.
Again this is analogous to the mean, since we are averaging the values of x weighted by the density: the expected value is the area below the curve of [tex] x f(x)[/tex].
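As a sketch of the continuous case (assuming X is uniform on [0, 1], so the density is f(x) = 1 and the exact answer is 0.5), the integral of x·f(x) can be approximated with a simple midpoint Riemann sum:

```python
# Hypothetical sketch: approximate E(X) for X ~ Uniform(0, 1),
# where the density f(x) = 1 on [0, 1]. Exact answer: 0.5.
def density(x):
    return 1.0  # uniform density on [0, 1]

a, b, n = 0.0, 1.0, 100_000
dx = (b - a) / n

# Midpoint Riemann sum of x * f(x) over [a, b]
expected = sum(
    (a + (i + 0.5) * dx) * density(a + (i + 0.5) * dx) * dx
    for i in range(n)
)

print(round(expected, 6))  # 0.5
```

Any other density could be dropped into `density` (as long as it integrates to 1 on [a, b]) to approximate its expected value the same way.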
We assume it is called the mean because it is a measure of central tendency: it gives the first moment of a random variable. And since it takes into account the weights of all the possible values of the random variable, it makes sense to call it the mean.