To request a blog written on a specific topic, please email James@StatisticsSolutions.com with your suggestion. Thank you!

Wednesday, February 3, 2010

Probability

Probability is a value that measures how likely an event is to occur. The value of a probability always lies between zero and one. If the probability of an event is zero, then that event is impossible; if the probability of an event is one, then that event is certain to occur.
A few formal definitions are needed before going further.

Statistics Solutions is the country's leader in probability and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.

A sample space S in probability is a non-empty set whose elements are called outcomes. Events are simply subsets of the sample space.

A probability space consists of a sample space together with a probability function, which maps events to real numbers in the interval from zero to one in such a way that the probability of the whole sample space is one. If A0, A1, … is a sequence of pairwise disjoint events, then the probability of the union of the sequence equals the sum of the probabilities of the individual events.
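These definitions can be made concrete with a small Python sketch. The fair six-sided die, the particular events, and the helper function `prob` are illustrative assumptions introduced here, not part of the original post:

```python
from fractions import Fraction

# A finite sample space for one roll of a fair six-sided die.
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability function: maps an event (a subset of S) into [0, 1]."""
    return Fraction(len(event), len(sample_space))

# Two disjoint events: "roll is even" and "roll is a one".
A = {2, 4, 6}
B = {1}
assert A & B == set()                     # the events are disjoint
assert prob(A | B) == prob(A) + prob(B)   # additivity over disjoint events
assert prob(sample_space) == 1            # P(S) = 1
```

Using exact fractions rather than floats keeps the checks free of rounding error.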

Conditional probability denotes the probability of a particular event given that another event has occurred, and it is defined only when the probability of that other event is not zero.

There is a product rule in probability that states that the probability of the intersection of two events equals the probability of the second event multiplied by the conditional probability of the first event given the second.
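The product rule can be verified on the same kind of finite example. The die, the events, and the helpers below are illustrative assumptions, not from the post:

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}   # one roll of a fair die

def prob(event):
    return Fraction(len(event), len(sample_space))

def cond_prob(a, b):
    """P(A | B) = P(A and B) / P(B); defined only when P(B) > 0."""
    return prob(a & b) / prob(b)

A = {2, 4, 6}   # roll is even
B = {4, 5, 6}   # roll is greater than three
# Product rule: P(A and B) = P(B) * P(A | B)
assert prob(A & B) == prob(B) * cond_prob(A, B)
```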

The theorem of total probability states that if the sample space is the disjoint union of events B1, B2, …, then for any event A, the probability of A equals the sum of the probabilities of the intersections of A with each of the disjoint events Bi.
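The theorem of total probability can be checked directly on a finite sample space. The die roll and the particular partition below are illustrative assumptions:

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}   # one roll of a fair die

def prob(event):
    return Fraction(len(event), len(sample_space))

# A partition of S into disjoint events B1, B2, B3.
partition = [{1, 2}, {3, 4}, {5, 6}]
A = {2, 3, 5}
# Total probability: P(A) = sum over i of P(A and Bi)
assert prob(A) == sum(prob(A & Bi) for Bi in partition)
```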

Suppose two events, A and B, each have positive probability. Event A is independent of B if and only if the conditional probability of A given B equals the probability of A. It is important to remember that this definition of independence applies only when the probability of the event B is not equal to zero.

There is also an independence product rule in probability: two events A and B are independent exactly when the probability of their intersection equals the product of the probability of A and the probability of B. It is important to remember that in the theory of probability, disjoint events are not the same as independent events.
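The distinction between disjoint and independent events can be seen on a fair die. The events chosen below are illustrative assumptions:

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}   # one roll of a fair die

def prob(event):
    return Fraction(len(event), len(sample_space))

A = {2, 4, 6}   # roll is even
B = {1, 2}      # roll is at most two
# A and B are independent: P(A and B) = P(A) * P(B)
assert prob(A & B) == prob(A) * prob(B)

C = {1, 3, 5}   # roll is odd
# A and C are disjoint but NOT independent:
assert A & C == set()
assert prob(A & C) != prob(A) * prob(C)
```

Here P(A and B) = 1/6 = (1/2)(1/3), while P(A and C) = 0 even though both A and C have positive probability.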

The theory of probability is the logic of science. According to James Clerk Maxwell (1850), the true logic of this world is the calculus of probabilities, which takes into account the magnitude of the probability that is, or ought to be, reasonable.

The theory of probability can be described with a popular example: the tossing of a coin, with possible outcomes of “heads” or “tails.” Suppose “heads” is considered a success and “tails” a failure; a success can then be assigned the value one and a failure the value zero. For a fair coin, each outcome occurs with probability one half. Similarly, rolling dice is another popular example based on the theory of probability.
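The coin-tossing example can be simulated in a few lines of Python, encoding heads as 1 and tails as 0. The number of tosses and the fixed seed are illustrative choices; for a fair coin, the relative frequency of heads approaches one half as the number of tosses grows:

```python
import random

random.seed(42)
# Encode heads (success) as 1 and tails (failure) as 0.
tosses = [random.choice([0, 1]) for _ in range(100_000)]
freq_heads = sum(tosses) / len(tosses)
# For a fair coin, the relative frequency of heads is close to P(heads) = 0.5.
assert abs(freq_heads - 0.5) < 0.02
```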

Monday, February 1, 2010

F-test

An F-test is conducted by the researcher on the basis of the F statistic. The F statistic is defined as the ratio of two independent chi-square variates, each divided by its respective degrees of freedom. The F statistic follows Snedecor's F-distribution.
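This defining construction can be checked by simulation: build each chi-square variate as a sum of squared standard normals, divide by its degrees of freedom, and take the ratio. The degrees of freedom, sample count, and seed below are illustrative assumptions, and the check uses the known mean of the F(m, n) distribution, n / (n - 2):

```python
import random

random.seed(0)

def chi_square(df):
    """One chi-square variate: a sum of df squared standard normals."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(df))

m, n = 5, 10   # numerator and denominator degrees of freedom (illustrative)
# F statistic: ratio of independent chi-squares, each over its own df.
samples = [(chi_square(m) / m) / (chi_square(n) / n) for _ in range(50_000)]
mean_f = sum(samples) / len(samples)
# The theoretical mean of F(m, n) is n / (n - 2) = 1.25 here.
assert abs(mean_f - 1.25) < 0.05
```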


The F-test has several applications in statistical theory. This post details those applications.

The F-test is used by a researcher to test the equality of two population variances. If a researcher wants to test whether or not two independent samples have been drawn from normal populations with the same variability, then he or she generally employs the F-test.

The F-test is also used by the researcher to determine whether or not the two independent estimates of the population variances are homogeneous in nature.

As an example of this application, suppose two sets of pumpkins are grown under two different experimental conditions, and the researcher selects random samples of sizes 9 and 11. The standard deviations of their weights are 0.6 and 0.8 respectively. Assuming that the weights are normally distributed, the researcher conducts an F-test of the hypothesis that the true variances are equal.
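One way to carry out this test is sketched below, using the sample sizes and standard deviations from the example together with SciPy for the critical value. The 5% upper-tail significance level and the convention of placing the larger variance in the numerator are assumptions for illustration:

```python
from scipy import stats

n1, n2 = 9, 11        # sample sizes from the pumpkin example
s1, s2 = 0.6, 0.8     # sample standard deviations of the weights

# Put the larger sample variance in the numerator.
f_stat = s2**2 / s1**2                      # 0.64 / 0.36, about 1.78
df_num, df_den = n2 - 1, n1 - 1             # degrees of freedom (10, 8)
crit = stats.f.ppf(0.95, df_num, df_den)    # 5% upper-tail critical value

# The observed F is below the critical value, so the hypothesis of
# equal true variances is not rejected at this level.
assert f_stat < crit
```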

The researcher also uses the F-test to test the significance of an observed multiple correlation coefficient, and likewise to test the significance of an observed sample correlation ratio. The correlation ratio is a measure of association that compares the statistical dispersion within individual categories to the dispersion in the sample as a whole; its significance is tested with the F-test.

The researcher should note that there is an association between the t and F distributions. According to this association, if a statistic t follows a Student's t distribution with n degrees of freedom, then the square of this statistic follows Snedecor's F distribution with 1 and n degrees of freedom.
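This association implies that P(|T| <= t) equals P(F <= t^2) when F has (1, n) degrees of freedom, which can be checked numerically with SciPy. The degrees of freedom and the test points below are illustrative assumptions:

```python
from scipy import stats

n = 12   # degrees of freedom (illustrative)
for t_val in (0.5, 1.0, 2.0):
    # P(|T| <= t) for T ~ Student's t with n df ...
    p_t = stats.t.cdf(t_val, n) - stats.t.cdf(-t_val, n)
    # ... equals P(F <= t^2) for F ~ Snedecor's F with (1, n) df.
    p_f = stats.f.cdf(t_val**2, 1, n)
    assert abs(p_t - p_f) < 1e-9
```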

The F-test also has other associations, such as the relationship between the F distribution and the chi-square distribution.

Due to such relationships, the F distribution shares several properties with the chi-square distribution. F-values are all non-negative. The F-distribution is always asymmetric, skewed to the right. The mean of the F-distribution is approximately one. An F distribution has two independent degrees of freedom, one for the numerator and one for the denominator, and there is a different F distribution for every pair of degrees of freedom.

The F-test is a parametric test that helps the researcher draw inferences about the population from which the data are drawn. It is called a parametric test because of the presence of parameters in the underlying model, namely the mean and variance of the normal populations. The mode of the F-distribution, the value at which its density peaks, is always less than unity. According to Karl Pearson's coefficient of skewness, the F-distribution is highly positively skewed. The probability density of F increases steadily until it reaches its peak and then decreases, becoming tangential to the horizontal axis at infinity; thus, the axis of F is an asymptote to the right tail.
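These shape properties can be checked with SciPy. For numerator degrees of freedom m > 2, the mode of F(m, n) has the closed form ((m - 2) / m) * (n / (n + 2)), which is always below one; the particular degrees of freedom below are illustrative assumptions:

```python
from scipy import stats

m, n = 6, 20   # illustrative degrees of freedom, with m > 2
# Closed-form mode of F(m, n): ((m - 2) / m) * (n / (n + 2)).
mode = ((m - 2) / m) * (n / (n + 2))
assert mode < 1                      # the mode is always less than unity

# The density peaks at the mode: it is higher there than just to
# either side, consistent with the rise-then-fall shape described above.
pdf_left = stats.f.pdf(mode - 0.01, m, n)
pdf_peak = stats.f.pdf(mode, m, n)
pdf_right = stats.f.pdf(mode + 0.01, m, n)
assert pdf_peak > pdf_left and pdf_peak > pdf_right
```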