Thursday, December 17, 2009
Autocorrelation
Statistics Solutions is the country's leader in autocorrelation and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
Autocorrelation is defined as the correlation that exists between the members of a series of observations ordered in time.
Consider two types of data – cross-sectional data and time-series data. In cross-sectional data, if a change in the income of one person affects the consumption expenditure of a household other than his own, then autocorrelation is present in the data. Similarly, in time-series data, if output is low in one quarter due to a labor strike, and the low output persists into the next quarter as well, then autocorrelation is present in the data.
Autocorrelation can also be defined as the lag correlation of a given series with itself, lagged by a number of time units. By contrast, serial correlation is defined as the lag correlation between two different series in time series data.
Autocorrelation exhibits certain patterns among the residual errors; for example, it occurs when the errors show a cyclical pattern.
The major reason autocorrelation occurs is the inertia or sluggishness present in time series data.
Non-stationarity in time series data also gives rise to autocorrelation. Thus, to make the series nearly free of the problem of autocorrelation, the researcher should first make the data stationary.
The researcher should know that autocorrelation can be positive as well as negative. Economic time series generally exhibit positive autocorrelation, as the series moves in a sustained upward or downward pattern. If the series alternates constantly between upward and downward movements, the autocorrelation is negative.
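The positive and negative patterns just described can be checked numerically with the sample lag correlation coefficient. Below is a minimal sketch in Python; the two series are made-up illustrative values, not real data:

```python
# Sketch: estimating the lag-k autocorrelation coefficient of a series.

def autocorr(series, lag=1):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    # Denominator: total variation around the mean
    denom = sum((x - mean) ** 2 for x in series)
    # Numerator: covariation between the series and its lagged copy
    num = sum((series[i] - mean) * (series[i - lag] - mean)
              for i in range(lag, n))
    return num / denom

trend = [1.0, 2.1, 2.9, 4.2, 5.0, 6.1, 6.8, 8.0]   # steadily rising series
alternating = [1.0, -1.0, 1.2, -0.9, 1.1, -1.2]    # up-down-up pattern

print(autocorr(trend))        # positive: successive values move together
print(autocorr(alternating))  # negative: successive values flip sign
```

A trending economic series gives a clearly positive coefficient, while the alternating series gives a negative one, matching the two cases described above.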
The major consequence of using ordinary least squares (OLS) in the presence of autocorrelation is that the estimators, while still unbiased, are no longer efficient. As a result, the estimated standard errors are biased, and hypothesis testing procedures give inaccurate results.
There is a popular test called the Durbin-Watson test that detects the presence of autocorrelation. This test is conducted under the null hypothesis that there is no autocorrelation in the data. A test statistic called 'd' is computed, defined as the ratio of the sum of squared differences between successive residuals (at times i and i−1) to the sum of squared residuals. If the value of 'd' is greater than the upper critical value, there is no evidence of positive autocorrelation. If the value of 'd' is less than the lower critical value, positive autocorrelation is present; values between the two critical values are inconclusive.
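The 'd' statistic itself is simple to compute once the regression residuals are in hand. A minimal sketch, using made-up residuals for illustration:

```python
# Durbin-Watson statistic: d = sum((e_i - e_{i-1})^2) / sum(e_i^2),
# computed on the residuals of a fitted regression.

def durbin_watson(residuals):
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Residuals with no pattern give d near 2; strong positive autocorrelation
# (each residual close to the previous one) pushes d toward 0, and strong
# negative autocorrelation pushes it toward 4.
smooth = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]
print(durbin_watson(smooth))  # small value: positive autocorrelation
```

The computed value is then compared against the tabulated lower and upper critical values for the given sample size and number of regressors.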
If a researcher detects autocorrelation in the data, the first thing he should do is try to find out whether or not the autocorrelation is pure. If it is pure autocorrelation, then the original model can be transformed into one that is free from pure autocorrelation.
Wednesday, December 16, 2009
Canonical Correlation
Statistics Solutions is the country's leader in canonical correlation and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
Canonical correlation is considered a member of the family of multiple general linear hypotheses, and therefore the assumptions of multiple regression are assumed in canonical correlation as well.
There are concepts and terms associated with canonical correlation. These concepts and terms will help a researcher better understand canonical correlation. They are as follows:
1. Canonical variable or variate: A canonical variable in canonical correlation is defined as the linear combination of the set of original variables. These variables in canonical correlation are a form of latent variables.
2. Eigenvalues: The eigenvalues in canonical correlation are approximately equal to the square of the canonical correlation. The eigenvalues basically reflect the proportion of the variance in the canonical variate that is explained by the canonical correlation relating the two sets of variables.
3. Canonical weight: The other name for canonical weight is the canonical coefficient. The canonical weight in canonical correlation must first be standardized. It is then used to assess the relative importance of the contribution of each individual variable.
4. Canonical communality coefficient: This coefficient in canonical correlation is defined as the sum of the squared structure coefficients for the given type of variable.
5. Redundancy coefficient, d: This coefficient in canonical correlation basically measures the percent of the variance of the original variables of one set that is predicted from the other set through canonical variables.
6. Likelihood ratio test: This significance test in canonical correlation is used to carry out the significance test of all the sources of the linear relationship between the two canonical variables.
There are certain assumptions that are made by the researcher for conducting canonical correlation. They are as follows:
1. It is assumed that the interval type of data is used to carry out canonical correlation.
2. It is assumed in canonical correlation that the relationships should be linear in nature.
3. It is assumed that there should be low multicollinearity in the data while performing canonical correlation. If the two sets of data are highly inter-correlated, then the coefficient of the canonical correlation is unstable.
4. There should be unrestricted variance in canonical correlation. If the variance is not unrestricted, then this might make the canonical correlation look unstable.
Many researchers think that canonical correlation cannot be computed in SPSS. However, canonical correlation is obtained while computing MANOVA in SPSS. In MANOVA, canonical correlation is used on data sets where one set of variables is entered as the dependents and the other set as the covariates.
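For readers working outside SPSS, the canonical correlations themselves can be obtained with a short NumPy computation; the eigenvalues mentioned earlier are approximately the squares of these values. The data below are made-up illustrative numbers, not from any study:

```python
import numpy as np

# Sketch: canonical correlations between two variable sets. The singular
# values of Qx'Qy, where Qx and Qy are orthonormal bases of the centered
# column spaces of X and Y, are the canonical correlations.

def canonical_correlations(X, Y):
    # Center each variable
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Orthonormal bases for the two column spaces
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # Singular values (descending) are the canonical correlations
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = X @ rng.normal(size=(3, 2)) + 0.5 * rng.normal(size=(50, 2))
print(canonical_correlations(X, Y))  # values in [0, 1], largest first
```

Because Y here is built largely from X, the first canonical correlation comes out high, illustrating the instability warning above about highly inter-correlated sets.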
Tuesday, December 15, 2009
Chi Square test
Statistics Solutions is the country's leader in chi square tests and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
There are several varieties of chi square test used by the researcher: the chi square test of cross tabulation, the chi square test for goodness of fit, the likelihood ratio chi square test, etc.
The task of the chi square test is to test the statistical significance of the observed relationship with respect to the expected relationship. The chi square statistic is used by the researcher for determining whether or not a relationship exists.
In the chi square test, the null hypothesis is that there is no association between the two variables observed in the study. The chi square test is calculated by evaluating the cell frequencies against the frequencies that would be expected if there were no association between the variables. A comparison between the expected frequency and the actual observed frequency is then made. The expected frequency in the chi square test is calculated as the product of the row total and the column total, divided by the total size of the sample.
The calculation of the chi square type of statistic in the chi square test is done by computing the sum of the square of the deviation between the observed and the expected frequency, which is divided by the expected frequency.
The researcher should know that the greater the difference between the observed and expected cell frequency, the larger the value of the chi square statistic in the chi square test.
In order to conclude that an association between the two variables exists, the calculated value of the chi square statistic must exceed the critical value of the chi square distribution at the chosen level of significance.
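The computation described above — expected frequencies from row and column totals, then the sum of squared deviations over expected counts — can be sketched as follows, with made-up counts:

```python
# Chi square statistic for a contingency table:
# expected count = (row total * column total) / N,
# chi square = sum over cells of (observed - expected)^2 / expected.

def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

observed = [[20, 30],
            [30, 20]]
print(chi_square(observed))  # compare with the critical value at 1 df
```

For this 2x2 table every expected count is 25, so each cell contributes (5²)/25 = 1 and the statistic is 4.0, which exceeds the 0.05 critical value of 3.84 at one degree of freedom.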
There is one more popular test called the chi square test for goodness of fit.
This type of chi square test, called the chi square test for goodness of fit, helps the researcher to understand whether or not the sample drawn from a certain population actually follows a specified distribution. This type of chi square test is applicable to discrete distributions, like the Poisson, the binomial, etc. It is an alternative to the non parametric Kolmogorov-Smirnov goodness of fit test.
The null hypothesis assumed by the researcher in this type of chi square test is that the data drawn from the population follow the specified distribution. The chi square statistic in this test is defined in a manner similar to the definition in the test above. One important point to be noted by the researcher is that the expected frequency in each category should be at least five. This means that the chi square test will not be valid when any expected cell frequency is less than five.
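The goodness-of-fit version uses the same statistic, with expected counts taken from the hypothesized distribution. A minimal sketch with made-up die-roll counts (each expected count is at least five, as required):

```python
# Chi square goodness of fit: compare observed counts against the counts
# expected under a hypothesized distribution.

def chi_square_gof(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [8, 9, 10, 11, 12, 10]   # 60 rolls of a die
expected = [10] * 6                 # fair-die expectation, all >= 5
print(chi_square_gof(observed, expected))
```

The resulting statistic is compared with the chi square critical value at (number of categories − 1) degrees of freedom, here 5.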
There are certain assumptions in the chi square test.
The random sampling of data is assumed in the chi square test.
In the chi square test, a sample with a sufficiently large size is assumed. If the chi square test is conducted on a sample with a smaller size, then the chi square test will yield inaccurate inferences. The researcher, by using the chi square test on small samples, might end up committing a Type II error.
In the chi square test, the observations are always assumed to be independent of each other.
In the chi square test, the observations must come from the same underlying distribution.
Tuesday, November 24, 2009
Analysis Of Variance (ANOVA)
Statistics Solutions is the country's leader in Analysis of Variance (ANOVA) and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
The type of variable on which Analysis of Variance (ANOVA) is applicable is also an important issue. Analysis of Variance (ANOVA) is applicable in cases that involve an interval or ratio scaled dependent variable and one or more categorical independent variables. The researcher should also note that the categorical variables are considered the factors in Analysis of Variance (ANOVA). The combinations of factor levels, or categories, in Analysis of Variance (ANOVA) are generally termed treatments.
The Analysis of Variance (ANOVA) technique, which consists of only one categorical type of independent variable, or in other words a single factor, is called one way Analysis of Variance (ANOVA). On the other hand, if the Analysis of Variance (ANOVA) technique consists of two or more than two factors or categorical types of variables or independent variables, then it is called n way Analysis of Variance (ANOVA). In this, the term ‘n’ refers to the number of factors in the Analysis of Variance (ANOVA).
Like regression analysis, Analysis of Variance (ANOVA) also requires the calculation of multiple sums of squares for evaluating the test statistic used for testing the null and alternative hypotheses. One difference between Analysis of Variance (ANOVA) and regression analysis is that Analysis of Variance (ANOVA) uses the separate and combined means and variances of the samples while evaluating the sums of squares.
Often, the researcher questions what type of test statistic is used for testing the significant difference. The test statistic is nothing but the F statistic used in Analysis of Variance (ANOVA). The F statistic is defined as a ratio of sample variances: specifically, the ratio of the between-group mean square to the within-group mean square. The task of the F test in Analysis of Variance (ANOVA) is to carry out the test of significance of the variability of the components existing in the study.
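The F statistic just described can be sketched for one way ANOVA as follows; the group data are made up for illustration:

```python
# One way ANOVA F statistic: the ratio of the between-group mean square
# to the within-group mean square.

def one_way_f(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

treatments = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]
print(one_way_f(treatments))  # a large F suggests the group means differ
```

The computed F is then compared with the critical value of the F distribution at (k − 1, n − k) degrees of freedom.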
The most important question is about the assumptions in Analysis of Variance (ANOVA).
The first assumption of Analysis of Variance (ANOVA) is that each sample has been drawn from the population by the process of random sampling.
The second assumption of Analysis of Variance (ANOVA) is that the population from which each sample is randomly drawn follows a normal distribution. In other words, it is assumed in Analysis of Variance (ANOVA) that the error term is normally distributed with mean zero and variance σe².
The third assumption of Analysis of Variance (ANOVA) is that there is homogeneity within the variances of the populations from which the sample has been drawn.
The fourth assumption of Analysis of Variance (ANOVA) is that the population of random effects (A) is normally distributed with '0' as the mean and σa² as the variance.
Thursday, November 19, 2009
Validity
Statistics Solutions is the country's leader in validity and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
There are basically four major types of validity. These types are Internal Validity, External Validity, Statistical Conclusion Validity and Construct Validity.
Internal Validity refers to the extent to which a causal relationship between the dependent and the independent variable is established, free of other factors that could affect the dependent variable. This type of validity is strongest in designed experiments where the treatments are randomly assigned.
External Validity refers to the extent to which a causal relationship between the cause and the effect can be generalized, or transferred, to different people, different treatment variables, and different measurement variables.
Statistical Conclusion Validity refers to that type of validity in which the researcher is interested in the inference on the degree of association between two variables. For instance, in the study of the association between two variables, the researcher achieves statistical conclusion validity only if he has performed statistical significance tests upon the hypotheses he predicted. This type of validity is violated when the researcher commits either of two types of errors, namely Type I error and Type II error.
Type I error violates this type of validity because in this type of error, the researcher rejects a null hypothesis that is in fact true.
Type II error violates this type of validity because in this type of error, the researcher accepts a null hypothesis that is in fact false.
Construct Validity refers to that type of validity in which the construct of the test is involved in predicting the relationship for the dependent type of variable. For example, construct validity can be drawn with the help of Cronbach’s alpha. In Cronbach’s alpha, it is assumed that if its value is 0.80, then it is considered good for confirmation, and if its value is 0.70, then it is adequate. So, if the construct satisfies such conditions, then the validity holds. Otherwise, it does not.
Convergent/divergent validation and factor analysis is also used to test this type of validity.
There is a strong relationship between validity and reliability. A test that is not reliable cannot be valid. Reliability is a necessary property of the test, but it is not a sufficient condition for validity.
Thus, validity plays the significant role in making an accurate inference about the data.
There are certain things that act as a threat to validity. These are as follows:
If the researcher collects insufficient data, then valid inference is not feasible, because insufficient data will not represent the population as a whole.
If the researcher measures the sample of the population with too few measurement variables, then he also cannot achieve validity of that sample.
If the researcher selects the wrong type of sample, then he too cannot achieve validity in the inference about the population.
If the researcher selects an inaccurate measurement method during analysis, then the researcher would not be able to achieve validity.
Tuesday, November 17, 2009
Kaplan-Meier survival analysis (KMSA)
Statistics Solutions is the country's leader in Kaplan-Meier survival analysis (KMSA) and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
Kaplan-Meier survival analysis (KMSA) consists of certain terms that are very important to know and understand, as these terms form the basis of a strong understanding of Kaplan-Meier survival analysis (KMSA).
The censored cases in Kaplan-Meier survival analysis (KMSA) indicate those cases in which the event has not yet occurred. In this case of Kaplan-Meier survival analysis (KMSA), the event is considered as the variable of interest for the researcher. Kaplan-Meier survival analysis (KMSA) can efficiently compute the survival functions in those cases that are censored in nature.
The time is considered as the continuous variable in Kaplan-Meier survival analysis (KMSA). However, the researcher should note that in Kaplan-Meier survival analysis (KMSA), the initial time of the occurrence of the event must be clearly defined.
There is a variable called a status variable in Kaplan-Meier survival analysis (KMSA). This variable in Kaplan-Meier survival analysis (KMSA) defines the terminal event. This variable should be a categorical variable — typically a binary indicator of whether or not the event has occurred.
There is a variable called the stratification variable in Kaplan-Meier survival analysis (KMSA). As the name suggests, the stratification variable in Kaplan-Meier survival analysis (KMSA) should be a categorical type of variable. This variable in Kaplan-Meier survival analysis (KMSA) represents the grouping effect. In the medical field, the stratification variable in Kaplan-Meier survival analysis (KMSA) can be types of cancer, like lung cancer, blood cancer, etc.
The researcher should note that Kaplan-Meier survival analysis (KMSA) provides misleading results when covariates other than time are prominent determinants of the outcome under study.
There is a variable called a factor variable in Kaplan-Meier survival analysis (KMSA). The factor variable in Kaplan-Meier survival analysis (KMSA) should be of categorical type. This type of variable in Kaplan-Meier survival analysis (KMSA) is used by the researcher to indicate the causal effect of a particular consequence. For example, in the case of the previous example, the treatment applied to decrease the effect of the cancer in the body is considered to be the factor variable in Kaplan-Meier survival analysis (KMSA).
The factor variable in Kaplan-Meier survival analysis (KMSA) is the main grouping variable, whereas the stratification variable is the sub grouping variable in Kaplan-Meier survival analysis (KMSA).
Kaplan-Meier survival analysis (KMSA) can be carried out by the researcher with the help of SPSS software.
The log rank test in Kaplan-Meier survival analysis (KMSA) provided in SPSS allows the investigator to examine whether or not the survival functions are equivalent to each other, by measuring their individual time points.
There are certain assumptions that are made in Kaplan-Meier survival analysis (KMSA). For one, it is assumed in Kaplan-Meier survival analysis (KMSA) that the events that occur in the survival function depend only upon time, because survival in Kaplan-Meier survival analysis (KMSA) is always based upon time. This implies that in Kaplan-Meier survival analysis (KMSA), the censored and uncensored cases behave in the same manner.
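The ideas above — time, censoring, and the survival function — come together in the Kaplan-Meier product-limit estimate. The following is a minimal sketch with made-up times and event indicators, showing how censored cases leave the risk set without counting as events:

```python
# Kaplan-Meier product-limit estimate: at each event time t, multiply the
# running survival probability by (1 - deaths_t / at_risk_t). Censored
# cases (event flag 0) reduce the risk set but contribute no event.

def kaplan_meier(times, events):
    """times: observation times; events: 1 = event occurred, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_this_time = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            at_this_time += 1
            i += 1
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= at_this_time
    return curve

times  = [2, 3, 3, 5, 6, 8]
events = [1, 1, 0, 1, 0, 1]   # 0 marks a censored case
print(kaplan_meier(times, events))  # (time, survival probability) steps
```

SPSS reports the same step function, together with the log rank test for comparing the curves of different strata.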
Monday, November 16, 2009
Hierarchical Linear Modeling
Statistics Solutions is the country's leader in hierarchical linear modeling and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
Hierarchical Linear Modeling is generally used to determine the relationship among a dependent variable (like test scores) and one or more independent variables (like a student's background, his previous academic record, etc.).
In Hierarchical Linear Modeling, the assumption of classical regression theory that the observations of any one individual are not systematically related to the observations of any other individual is set aside, because applying that assumption to nested, multi-level data yields biased estimates.
Hierarchical Linear Modeling is also called the method of multi level modeling. Hierarchical Linear Modeling allows the researcher working on educational data to systematically ask questions about how policies can affect a student’s test scores.
The advantage of Hierarchical Linear Modeling is that Hierarchical Linear Modeling allows the researcher to openly examine the effects on student test scores when the policy relevant variables are used on it (like the class size, or the introduction of a particular reform etc.).
Hierarchical Linear Modeling is conducted by the researcher in two steps.
In the first step of Hierarchical Linear Modeling, the researcher must conduct the analyses individually for every school (in the case of educational data) or some other unit in the system.
The first step of Hierarchical Linear Modeling can be very well explained with the help of the following example. In the first step of Hierarchical Linear Modeling, the student’s academic scores in science are regressed on a set of student level predictor variables like a student’s background and a binary variable representing the student’s sex.
In the first step of Hierarchical Linear Modeling, the equation would be expressed mathematically as the following:
(Science)ij = β0j + β1j(SBG)ij + β2j(Male)ij + eij. In this first step of Hierarchical Linear Modeling, β0j signifies the level of performance for each school under consideration after controlling for SBG (student's background) and sex. In this first step of Hierarchical Linear Modeling, β1j and β2j indicate the extent to which inequalities exist among the students with respect to the two variables taken under consideration.
In the second step of Hierarchical Linear Modeling, the regression parameters that are obtained from the first step of Hierarchical Linear Modeling become the outcome variables of interest.
The second step of Hierarchical Linear Modeling can be very well explained with the help of the following example. In the second step of Hierarchical Linear Modeling, the outcome variables mean the estimate of the magnitude of consequence of the policy variable. In the second step of Hierarchical Linear Modeling, the β0j is given by the following formula:
β0j = Y00 + Y01(class size)j + Y02(Discipline)j + U0j.
In the second step of Hierarchical Linear Modeling, Y01 indicates the expected gain (or loss) in the test score of science due to an average reduction in the size of the class. In the second step of Hierarchical Linear Modeling, Y02 signifies the effect of the policy of the discipline implemented in the school.
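The two steps above can be sketched in NumPy. Everything below — the number of schools, the coefficients, and the noise levels — is a made-up illustration of the two-step procedure, not an estimate from real data:

```python
import numpy as np

# Step 1: one regression per school of science scores on student background
# (SBG) and sex. Step 2: regress the estimated school intercepts (beta_0j)
# on a school-level predictor (class size).

rng = np.random.default_rng(1)
schools = []
for j in range(20):
    class_size = rng.integers(15, 40)
    sbg = rng.normal(size=30)
    male = rng.integers(0, 2, size=30)
    # Simulated truth: smaller classes raise the school intercept
    intercept = 60 - 0.5 * class_size + rng.normal(scale=1.0)
    score = intercept + 2.0 * sbg + 1.0 * male + rng.normal(size=30)
    schools.append((class_size, sbg, male, score))

# Step 1: per-school OLS fits -> beta_0j, beta_1j, beta_2j
b0, sizes = [], []
for class_size, sbg, male, score in schools:
    X = np.column_stack([np.ones_like(sbg), sbg, male])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    b0.append(beta[0])
    sizes.append(class_size)

# Step 2: regress the school intercepts on class size -> Y00, Y01
Z = np.column_stack([np.ones(len(b0)), sizes])
gamma, *_ = np.linalg.lstsq(Z, np.array(b0), rcond=None)
print(gamma)  # gamma[1] should be near the simulated class-size effect
```

Dedicated HLM software estimates both levels jointly by iterative procedures rather than in two separate OLS passes, but the two-step sketch mirrors the logic described above.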
According to Goldstein (1995) and Raudenbush and Bryk (1986), Hierarchical Linear Modeling's statistical and computing techniques incorporate the multiple levels into a single model in which regression analysis is performed (as explained in the two steps above). Hierarchical Linear Modeling estimates the parameters specified in the model with the help of iterative procedures.
Friday, November 13, 2009
Fisher Exact test
Statistics Solutions is the country's leader in fisher exact test and dissertation consulting. Contact Statistics Solutions today for a free 30-minute consultation.
The Fisher Exact test gives the probability of obtaining a table as strong as the observed one purely by the chance of sampling. In the case of the Fisher Exact test, the word 'strong' is defined by the proportion of cases on the diagonal with the most cases.
The Fisher Exact test is generally used as a one tailed test. However, the Fisher Exact test can be used as a two tailed test as well. The Fisher Exact test is sometimes called the Fisher-Irwin test, because the test was developed at around the same time by Fisher, Irwin and Yates in the 1930s.
In SPSS, the Fisher Exact test is computed in addition to the chi square test for a 2X2 table when the table consists of a cell where the expected number of frequencies is fewer than 5.
There are certain terminologies that help in understanding the theory of Fisher Exact test.
The Fisher Exact test uses the following formula:
p = [ (a+b)! (c+d)! (a+c)! (b+d)! ] / [ a! b! c! d! N! ]
In this formula of the Fisher Exact test, the ‘a,’ ‘b,’ ‘c’ and ‘d’ are the individual frequencies of the 2X2 contingency table, and ‘N’ is the total frequency.
The Fisher Exact test uses this formula to obtain the probability of the combination of the frequencies that are actually obtained. The Fisher Exact test also involves the finding of the probability of every possible combination which indicates more evidence of association.
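The factorial formula above is equivalent to the hypergeometric probability of a 2x2 table with fixed margins. A minimal sketch of the one tailed computation, using Fisher's classic tea-tasting table as the example:

```python
from math import comb

# Probability of a single 2x2 table with fixed margins (hypergeometric
# form of the factorial formula), then the one tailed p value: the sum of
# the observed table's probability and every table more extreme in the
# same direction.

def table_prob(a, b, c, d):
    n = a + b + c + d
    return comb(a + b, a) * comb(c + d, c) / comb(n, a + c)

def fisher_exact_one_tailed(a, b, c, d):
    # "More extreme" tables shift counts toward the a/d diagonal
    p = 0.0
    while a >= 0 and b >= 0 and c >= 0 and d >= 0:
        p += table_prob(a, b, c, d)
        a, b, c, d = a + 1, b - 1, c - 1, d + 1
    return p

# Fisher's tea-tasting table: 3-of-4 correct in each row
print(fisher_exact_one_tailed(3, 1, 1, 3))  # 17/70, about 0.243
```

Because every probability is computed exactly from the margins, the test remains valid for the small expected frequencies that invalidate the chi square approximation.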
There are certain assumptions on which the Fisher Exact test is based.
In the Fisher Exact test, it is assumed that the sample that has been drawn from the population is done by the process of random sampling. This assumption of the Fisher Exact test is also assumed in general in all the significance tests.
In the Fisher Exact test, a directional hypothesis is assumed. The directional hypothesis assumed in the Fisher Exact test is nothing but the hypothesis based on the one tailed test. In other words, the directional hypothesis assumed in the Fisher Exact test is that type of hypothesis which predicts either a positive association or a negative association, but not both.
In the Fisher Exact test, it is assumed that the value of the first person or unit sampled is not affected by the value of the second person or unit sampled. This assumption of the Fisher Exact test would be violated if the data were pooled.
In the Fisher Exact test, mutual exclusivity within the observations is assumed. In other words, in the Fisher Exact test, the given case should fall in only one cell in the table.
In the Fisher Exact test, the dichotomous level of measurement of the variables is assumed.
Thursday, November 12, 2009
t-test
Statistics Solutions is the country's leader in t-test and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
This parametric test, called the t-test, is based on Student's t statistic. This statistic in the t-test is based upon the assumption that the samples are drawn from a normal population. It is assumed in the t-test that the mean of the normal population exists. The t distribution has a bell shaped appearance.
The t-test is especially applicable in those cases where the size of the sample is less than 30. If the sample size is more than 30, the t distribution and the normal distribution become nearly indistinguishable, so the normal-based z-test is generally used instead.
The parametric test called the t-test is called parametric because it consists of the parameters called the mean and the variance. There are chiefly three types of t-tests: one sample t-test, two independent sample t-tests, and paired sample t-test.
The first type of t-test is applicable in those cases where a single sample is tested. For example, if the researcher wants to test whether or not the students of a particular school would score at least 65% on average in their 10th standard board exam, he could use this test. To conduct this type of t-test, a suitable null and alternative hypothesis is created by the researcher. The next step for the researcher is to construct the test statistic, which in this case is the t statistic. An appropriate level of significance is then selected by the researcher to conduct the t-test of the null hypothesis. The appropriate level of significance for conducting the t-test is generally 0.05 (the same as in other significance tests). The level of significance refers to the probability of a false rejection of the null hypothesis on which the t-test is carried out.
Now, the comparison of the tabulated value and the calculated value of the t statistic is done by the researcher. If the calculated value of the t statistic is more than the tabulated value, then the null hypothesis is rejected at that level of significance. Otherwise, the null hypothesis is not rejected.
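The one sample procedure can be sketched as follows; the scores and the hypothesized mean are made up, and the critical value quoted in the comment is the usual two tailed t value at 7 degrees of freedom:

```python
from math import sqrt
from statistics import mean, stdev

# One sample t statistic: t = (sample mean - mu0) / (s / sqrt(n)),
# compared against the tabulated critical value at n - 1 degrees of freedom.

def one_sample_t(sample, mu0):
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / sqrt(n))

scores = [68, 72, 65, 70, 74, 66, 71, 69]   # made-up exam scores, n = 8
t = one_sample_t(scores, mu0=65)
print(t)
# Compare |t| with the two tailed critical value t(0.05, 7) ~ 2.365:
# reject the null hypothesis when |t| exceeds it.
```

The same statistic, with pooled or paired variants of the standard error, underlies the two independent sample and paired sample versions of the test.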
Similarly, in the case of the second type of t-test, two independent samples are tested by comparing their significances with the help of the t-test. So, all the steps carried out in the previous step would remain the same, except that the hypothesis assumed by the researcher in this case would be for two independent samples.
Similarly, in the case of the paired sample t-test, the paired type of categories are tested and all the steps would remain the same, except that the hypothesis on which the t-test would be conducted will now be formulated according to the third type of t-test.
Monday, November 9, 2009
Null hypothesis and Alternative Hypothesis
Statistics Solutions is the country's leader in dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
The research problem, in the form of a null hypothesis and an alternative hypothesis, should be expressed as a relationship between two or more variables. The criterion for the null hypothesis and alternative hypothesis is that the statements should express a relationship between two or more measurable variables. The null hypothesis and alternative hypothesis should carry clear implications for testing and stating relations.
The major difference between the null hypothesis and alternative hypothesis on the one hand and research problems on the other is that research problems are simple questions that cannot be tested. The null hypothesis and alternative hypothesis, however, can be tested.
The null hypothesis and alternative hypothesis must be properly formulated before the data collection and interpretation phase of the research. A well formulated null hypothesis and alternative hypothesis indicate that the researcher has adequate knowledge in that particular area and is thus able to take the investigation further, because a much more systematic approach can be used. The null hypothesis and alternative hypothesis give direction to the researcher's collection and interpretation of data.
The null hypothesis and alternative hypothesis are useful only if they state the expected relationship between the variables, are consistent with the existing body of knowledge, and have explanatory power. The null hypothesis and alternative hypothesis should be expressed as simply and concisely as possible.
The purpose and importance of the null hypothesis and alternative hypothesis are several. They provide an approximate description of the phenomena; they provide the researcher or investigator with a relational statement that is directly testable in a research study; they provide the framework for reporting the inferences of the study; they behave as a working instrument of the theory; and they show whether or not the prediction is supported, separated from the investigator's own values and decisions. The null hypothesis and alternative hypothesis also provide direction to the research.
The null hypothesis is generally denoted as H0. The null hypothesis states the exact opposite of what an investigator or experimenter predicts or expects: it states that there is no actual relationship between the variables.
The alternative hypothesis is generally denoted as H1. The alternative hypothesis makes a statement that suggests a potential result or outcome that the investigator or researcher may expect. The alternative hypothesis falls into two categories: the directional alternative hypothesis and the non-directional alternative hypothesis.
The directional hypothesis is a kind of alternative hypothesis that specifies the direction of the expected findings. Sometimes this type of alternative hypothesis is developed to examine the relationship among the variables rather than a comparison between the groups.
The non-directional hypothesis is a kind of alternative hypothesis in which no definite direction of the expected findings is specified.
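As a rough illustration of a directional versus a non-directional alternative, the sketch below (plain Python, with purely hypothetical exam-score data) computes a one-sample t statistic. The choice of alternative does not change the statistic itself; it only determines whether the rejection region lies in one tail or in both.

```python
import math

def one_sample_t(data, mu0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(data)
    mean = sum(data) / n
    # Sample variance, with n - 1 in the denominator
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return (mean - mu0) / math.sqrt(var / n)

# Hypothetical scores; H0: mean = 50
scores = [52, 55, 47, 60, 51, 49, 58, 53]
t = one_sample_t(scores, 50)

# Non-directional H1 (mean != 50): reject H0 when |t| exceeds the
# two-tailed critical value. Directional H1 (mean > 50): reject H0
# only when t exceeds the one-tailed critical value.
print(round(t, 3))  # → 2.014
```

The function and data here are illustrative only; in practice a statistical package would also report the p-value for the chosen alternative.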
Friday, November 6, 2009
LISREL
Statistics Solutions is the country's leader in LISREL consulting and dissertation consulting. Contact Statistics Solutions today for a free 30-minute consultation.
The following are some basic features of LISREL:
Starting LISREL: Select “LISREL” from the Start menu, or create a shortcut and start from the shortcut.
Importing data into LISREL: To enter data into LISREL, select the import option from the File menu.
Opening a new window: In LISREL, the “New” option on the File menu is used to open a new window. From the new option we can open a syntax, output, path diagram or data window as required.
Data manipulation: The “Data” option of LISREL offers options such as variable properties, select variable, sort cases, insert variable, delete variable, assign weight, etc.
Transform option: Like SPSS, LISREL also has an option to recode or compute a new variable by using the “Transform” option.
Statistics option: By using the statistics option in LISREL, we can fit all of the statistical models it supports. LISREL can handle a number of models, including measurement models, non-recursive models, hierarchical linear models, confirmatory factor analysis models, ordinal regression models, multiple-group comparison models, etc.
Graph option: Like many other statistical packages, LISREL also has an option for graphs. By using the “Graph” option in LISREL, we can produce high-quality univariate, bivariate and multivariate charts.
Advanced modeling: In LISREL, the multilevel option provides the flexibility to perform advanced modeling. By using the multilevel option, we can fit advanced linear and non-linear statistical models.
View and Window options: Like any other statistical software, LISREL also has View and Window options. The View option has basic features such as the toolbar, status bar, etc. By using the Window option, we can arrange the windows horizontally or vertically.
Advantages of LISREL:
1. This software provides full information about the model coefficients, which increases the power of the model.
2. It handles missing values well.
3. It provides significance testing for all of the coefficients.
4. It can impose restrictions on models if that is what is wanted.
Drawbacks of LISREL:
1. It is complicated to handle for a novice.
2. Interaction effects are hard to handle.
3. SEM uses a correlation matrix and assumes that the correlations are derived from a multivariate normal distribution; this assumption is often not valid.
Kolmogorov–Smirnov one-sample test
Statistics Solutions is the country's leader in the Kolmogorov–Smirnov one-sample test and dissertation consulting. Contact Statistics Solutions today for a free 30-minute consultation.
In the Kolmogorov–Smirnov one-sample test, it is assumed that the distribution of the underlying variable being tested is continuous in nature. The Kolmogorov–Smirnov one-sample test is appropriate for variables that are measured at least on an ordinal scale.
One usually conducts the Kolmogorov–Smirnov one-sample test in order to test the normality assumption in analysis of variance.
Suppose, for example, that F0(x) is a completely specified cumulative relative frequency distribution function. Under the null hypothesis, for any value of x, F0(x) is the proportion of cases that are expected to have values equal to or less than x.
Suppose Sn(x) is the observed cumulative relative frequency distribution function of a random sample of n observations. If xi is any possible value, then Sn(xi) = Fi/n, where Fi is the number of observations that are less than or equal to xi.
Under the null hypothesis, it is expected that for every value of xi, Sn(xi) should be fairly close to F0(xi). In other words, if the null hypothesis is true, the difference between Sn(xi) and F0(xi) should be small and within the limits of random error.
The Kolmogorov–Smirnov one-sample test focuses on the largest of these deviations, called the maximum deviation. The maximum deviation is the largest absolute difference between the cumulative observed proportion and the cumulative proportion expected on the basis of the hypothesized distribution. The sampling distribution of the maximum deviation under the null hypothesis is known.
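The maximum deviation can be sketched in a few lines of plain Python (an illustrative implementation, not any particular package's). Sn is the empirical cumulative distribution of the sample, the supplied `cdf` plays the role of the hypothesized F0, and the statistic D is the largest absolute gap between them, checked just before and just after each jump of the step function.

```python
def ks_statistic(sample, cdf):
    """Maximum deviation D between the empirical CDF of `sample`
    and the hypothesized continuous CDF `cdf`."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f0 = cdf(x)
        # The empirical CDF jumps from (i - 1)/n to i/n at x, so the
        # largest gap at x is attained on one of the two sides.
        d = max(d, abs(i / n - f0), abs((i - 1) / n - f0))
    return d

# Evenly spaced points tested against a uniform(0, 1) hypothesis
print(round(ks_statistic([0.1, 0.3, 0.5, 0.7, 0.9], lambda x: x), 6))  # → 0.1
```

In practice the computed D would be compared against its known sampling distribution (or a library routine would report the p-value directly).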
There are certain assumptions that are made in the Kolmogorov–Smirnov one-sample test.
It is assumed that the sample is drawn from the population by random sampling.
It is assumed that the variables are measured at a continuous interval or ratio level in order to get exact results. If approximate results are sufficient, the researcher can use ordinal or grouped interval-level data.
The Kolmogorov–Smirnov one-sample test is also used with ordinal data when the large-sample assumptions of the chi-square goodness-of-fit test are not met.
The hypothesized distribution must be specified in advance.
In the case of the normal distribution, the expected mean and standard deviation should always be specified in advance.
In the case of the Poisson distribution and the exponential distribution, the expected mean should always be specified in advance.
In the case of the uniform distribution, the expected range, which consists of the minimum and maximum values, should always be specified in advance.
Thursday, October 29, 2009
Descriptive measure
Statistics Solutions is the country's leader in descriptive measure and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
Data tend to vary about the descriptive measure of central tendency. This descriptive measure of deviation is also called a measure of variation or dispersion.
A measure of deviation describes the extent to which the individual items vary. Such a measure should satisfy certain properties which have been laid down by Prof. Yule. These properties are as follows:
The measure of deviation should be rigidly defined. It should be easy to calculate and easy to understand. It should be based on all the observations. It should be open to further mathematical treatment. It should not be unduly affected by the fluctuations of sampling.
Measures of dispersion have been classified into two broad categories.
The first category expresses the spread of the observations in terms of distance. Such measures include the range and the inter-quartile range (or quartile deviation).
The range is defined as the difference between the two extreme observations of the distribution. Suppose A and B are the greatest and the smallest observations respectively. Then the range is Range = A - B.
The inter-quartile range or quartile deviation, also called the semi-inter-quartile range, is defined mathematically as Q = (Q3 - Q1)/2, where Q3 is the third quartile and Q1 is the first quartile. This is definitely a better measure than the range, as it makes use of the middle 50% of the data. However, because it ignores the other 50% of the data, it cannot be regarded as a fully reliable measure.
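These two distance-based measures can be sketched with only the Python standard library. Note that `statistics.quantiles` uses an "exclusive" interpolation method by default, so its quartile values may differ slightly from other textbook conventions.

```python
import statistics

def value_range(data):
    """Range = greatest observation minus smallest observation."""
    return max(data) - min(data)

def quartile_deviation(data):
    """Semi-inter-quartile range: Q = (Q3 - Q1) / 2."""
    q1, _q2, q3 = statistics.quantiles(data, n=4)
    return (q3 - q1) / 2

data = [1, 2, 3, 4, 5, 6, 7]
print(value_range(data))         # → 6
print(quartile_deviation(data))  # → 2.0
```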
The second category expresses the spread of observations in terms of the average of the deviations of the observations from some central value. Such measures include the mean deviation and the standard deviation.
The mean deviation is based on all the observations and is a much better measure than the distance-based measures. However, since the sign of each deviation is ignored in its calculation, the mean deviation becomes useless for further mathematical treatment.
The standard deviation is generally denoted by the Greek letter σ. It is defined as the positive square root of the arithmetic mean of the squares of the deviations of the given values from their arithmetic mean. Because the deviations are squared rather than stripped of their signs, the standard deviation overcomes the drawback of the mean deviation.
The standard deviation is the only measure of deviation which satisfies almost all of the ideal properties laid down by Prof. Yule, except that the extraction of a square root is generally not readily comprehensible to a non-mathematical person. It should be observed that the standard deviation gives greater weight to extreme values, and for that reason it is less popular with economists and businessmen.
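The two average-based measures can be sketched in a few lines of plain Python. These are the population forms, dividing by n; sample versions would divide by n - 1 instead.

```python
import math

def mean_deviation(data):
    """Average absolute deviation from the arithmetic mean."""
    m = sum(data) / len(data)
    return sum(abs(x - m) for x in data) / len(data)

def standard_deviation(data):
    """Positive square root of the mean of the squared deviations."""
    m = sum(data) / len(data)
    return math.sqrt(sum((x - m) ** 2 for x in data) / len(data))

data = [2, 4, 4, 4, 5, 5, 7, 9]   # arithmetic mean is 5
print(mean_deviation(data))       # → 1.5
print(standard_deviation(data))   # → 2.0
```

Squaring the deviations, as the standard deviation does, keeps every term non-negative without discarding the signs outright, which is why it remains open to further mathematical treatment while the mean deviation is not.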
Monday, October 12, 2009
Statistics Analysis
Statistics Solutions is the country's leader in statistics analysis. Contact Statistics Solutions today for a free 30-minute consultation.
Because statistics analysis can be so difficult, it is important for doctoral degree seeking students to get help on the statistics analysis. One excellent place to look for help is a student’s advisor, and the student’s advisor should be the first step in getting help on the statistics analysis. However, not all advisors can provide the student with the help that the student needs to complete the statistics analysis. This is true for many reasons, not the least of which is that the student’s advisor is usually busy and only has a limited amount of time. Additionally, many advisors wait for the students to make mistakes before offering a helping hand, and those mistakes can be incredibly costly in terms of the time and energy expended by the degree seeking student.
When a student’s advisor is unable to provide the assistance, support, guidance and aid required for the statistics analysis, the doctoral degree seeking student should turn to experts. Those experts are dissertation consultants, and dissertation consultants can provide the student with everything that he or she needs to complete the difficult and lengthy statistics analysis.
Proper statistics analysis first starts with the gathering of data. For one must gather an extensive amount of data in order to obtain statistics on that data! But how does a student know how to gather that data? And how does a student know how many samples he or she should use when gathering that data?
In order for the statistics analysis to be accurate, precise and dependable, the student must follow accurate, precise and laid-out rules of statistics when it comes to gathering data. A dissertation consultant can teach the student these precise rules of statistics—and this, of course, will greatly help the student when it comes to the statistics analysis.
Additionally, there are rules and procedures that must be followed every step of the way when it comes to statistics analysis. Dissertation consultants know the rules, procedures, regulations and methodology of statistics, and thus, with the help of dissertation consultants, the dissertation writing student will know how to accurately perform statistics analysis for his or her dissertation. And when the student learns these methodologies of statistics, he or she will be guaranteed to have proper, accurate, dependable and precise statistics analysis.
Once the student has learned how to do proper statistics analysis, he or she can complete the dissertation on time and with success. Additionally, because the student actually understands the statistics analysis, he or she will be able to accurately defend his or her dissertation. This is extremely important as this oral defense is the very last thing that is standing in between the doctoral degree seeking student and the coveted diploma. With a dissertation consultant guiding a doctoral degree seeking student every single step of the way and with a dissertation consultant making sure that the statistics analysis is done accurately and correctly, the doctoral degree seeking student will finish on time and with the degree confidently attained.
Tuesday, October 6, 2009
Statistics Analysis
Statistics Solutions is the country's leader in statistics analysis. Contact Statistics Solutions today for a free 30-minute consultation.
Though statistics analysis is necessary for all dissertations and for all PhD candidates to complete, many PhD students have a very difficult time completing the statistics analysis properly and accurately. This struggle with the statistical analysis is quite common, and the reason for it is that PhD students have not had enough experience with statistics to know how to complete the complicated statistical procedures and statistical analysis necessary for the dissertation. Granted, PhD students have spent years and years in school, but they have not spent years and years studying statistics, and this is often what is necessary to complete the statistics analysis with ease and success.
It is obviously very important to complete the statistics analysis properly for all PhD students. What is not so obvious, however, is that a student can get help on the statistics analysis and this help can save the student much time and frustration. This help on statistics analysis usually comes in the form of a statistical consultant. A statistical consultant is a trained statistician who can help a PhD student with every single statistical process of the dissertation—including with the statistics analysis. A statistical consultant can guide the student through the statistics methodologies and with this guidance, a PhD student is sure to always be on the right track. And when a PhD student is always on the right track, the statistics analysis will be accurate and the PhD student will no longer have to struggle with the statistics analysis.
A student, then, can make sure that the statistics analysis is completed properly with the help of a statistical consultant. A statistical consultant is also very easily attainable and a statistical consultant can start at any point in the dissertation process. Obviously, a student should not waste time struggling through the statistical procedures and the statistics analysis alone when there is such help available—so a dissertation writing student should seek help on the statistics analysis and the statistical procedures early on. The sooner a PhD student seeks help from a statistical consultant, the more help can be provided.
It is also incredibly important for the PhD student to realize that a statistical consultant providing help on the statistics analysis will be able to instruct the student. In fact, this is often the most valuable part of getting help on the statistics analysis from a statistical consultant. One on one help with statistics and the statistics analysis can be incredibly useful because the statistical consultant can go at the pace of the PhD student. Unlike what happens in a room full of students when the PhD student is enrolled in a class, the instruction between a statistical consultant and a PhD student is one on one. And this can mean all the difference between a student struggling to keep up with a room full of students, or a student no longer feeling intimidated by peers around him/her. Thus, the help provided by a statistical consultant is absolutely unmatched as the statistical consultant is able to give individualized attention to the student and the statistical consultant is able to make sure that the PhD student actually understands the statistical procedures and the statistics analysis.
There is no better way to get help on the dissertation and on the statistics analysis than to seek professional help in the form of a statistical consultant. Once the PhD student does get this much needed help, he or she will see results immediately.
Friday, October 2, 2009
PhD Statistics Analyses
Statistics Solutions is the country's leader in PhD statistics analyses. Contact Statistics Solutions today for a free 30-minute consultation.
The PhD statistics analyses come after all of the data has been collected by the doctoral degree seeking student. And again, countless hours have already been spent by the doctoral student in the collection of the data, before the PhD statistics analyses have even begun. Thus, when a PhD student gets to the PhD statistics analyses, he or she is worn out and sick of the dissertation.
What’s more, the PhD statistics analyses take a student even more time than the gathering of data, especially if the PhD student has never performed PhD statistics analyses before. PhD statistics analyses require that the doctoral degree seeking student have extensive training in statistics and statistical procedures and methodologies. Many PhD students do not have this training and experience with statistics, so many students struggle mightily when it comes time to perform the PhD statistics analyses.
Doctoral degree seeking students do not have to struggle on their PhD statistics analyses, however, because doctoral degree seeking students have many options available when it comes to getting help on the PhD statistics analyses. The first place where many doctoral degree seeking students turn, and rightfully so, is to their advisor. The doctoral student’s advisor can indeed help the doctoral student, because that is what an advisor is there for. However, many advisors are not readily available and many advisors are not able to offer the doctoral student help when he or she needs it. Thus, while it is always advisable to go to an advisor, sometimes that advisor is not able to help the doctoral student.
The next place where many PhD students turn when it comes to getting help with PhD statistics analyses is the internet. This too is an okay place for students to turn, as there are many websites dedicated to offering help to PhD students when it comes to the PhD statistics analyses. However, if one were to go online and actually type PhD statistics analyses into a search engine, he or she would find that thousands and thousands of hits come up. Additionally, the information that is provided about the PhD statistics analyses is oftentimes contradictory. This is true because PhD statistics analyses require extensive knowledge of statistics, and this is not something that can be acquired online. In fact, people spend years and years studying statistics, so it would be impossible to have all of that information summed up online. The internet, then, can be somewhat helpful, but it is not the best place to turn when it comes to PhD statistics analyses.
The best place for a struggling doctoral student to turn for help on the PhD statistics analyses is a dissertation consulting firm. A dissertation consulting firm can provide the student with everything that he or she needs to finish the dissertation and to perform the PhD statistics analyses with accuracy and precision. A dissertation consulting firm will offer the PhD candidate valuable instruction when it comes to the PhD statistics analyses and a dissertation consulting firm will make sure that the student is on the right track in terms of the PhD statistics analyses. With the help of a dissertation consulting firm, the doctoral degree seeking student will be well on his or her way to completing the dissertation and will have absolutely no trouble with the complicated and difficult PhD statistics analyses.
Thursday, October 1, 2009
Dissertation Data Analysis
Statistics Solutions is the country's leader in dissertation data analysis and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
While it takes months and months and months to gather accurate and valid data, that time spent on gathering the data can be wasted if the data is not then used properly—if proper dissertation data analysis is not performed correctly. Dissertation data analysis is very difficult to perform, especially if the doctoral student is working on his or her first dissertation. Dissertation data analysis is especially difficult to perform because it requires that the doctoral student knows all there is to know about statistics, statistical procedures and statistical methodologies. Thus, without the proper expertise and know-how in statistics, doctoral students can flounder through the dissertation data analysis part of the dissertation, and essentially, all the hard work and energy spent on gathering accurate data can be wasted.
This does not have to happen, however, as there are dissertation consultants who can help any doctoral student with the dissertation data analysis. Indeed, dissertation consultants can help the student make sense of the dissertation data analysis and dissertation consultants can provide the knowhow and expertise that the PhD student lacks. This is especially helpful in the dissertation data analysis phase, as a dissertation consultant is trained in all things concerning statistics—including having extensive training in dissertation data analysis.
There is no sense, then, for a PhD student to “go it alone” and attempt to figure out the dissertation data analysis parts of the dissertation all by him/herself. Help on the dissertation data analysis is easily attainable because dissertation consultants are very easy to contact and to obtain. In fact, a simple internet search will yield thousands of hits for dissertation consultants, mainly because dissertation consultants are that good at helping students with their dissertations and with the dissertation data analysis portions of their dissertations. For help on the dissertation data analysis, there is no better solution than to seek the professional help of a dissertation consultant who can take any PhD student through the lengthy, difficult and challenging aspects of the dissertation data analysis.
Many students hesitate, however, before seeking help on the dissertation data analysis and before contacting a dissertation consultant. PhD students hesitate for several reasons, one of them being the fact that they are so used to doing everything alone. It is always good to get help, however, and this is equally true on the dissertation data analysis. Some students, while ready to get help, wonder if it is ethical to use a dissertation consultant to get help on the dissertation data analysis. While this is definitely worth thinking about, it is absolutely imperative that a PhD student understand that a dissertation consultant simply helps a student—simply offers assistance in the challenging aspects of the dissertation. A dissertation consultant, then, does NOT do the work for the student. Rather, a dissertation consultant instructs the student and provides the student with very valuable teachings. This instruction is perhaps one of the biggest benefits of getting a dissertation consultant—they do not do the work for the PhD student, they instruct the student and guide the student so that the student can do all of the statistical procedures and the dissertation data analysis on his or her own. And truly, there is no better help than this.
Wednesday, September 30, 2009
How do I do dissertation data analysis?
One of the most challenging and difficult parts of the already very challenging dissertation is to accurately and precisely analyze the data that you have collected for your dissertation. Dissertation data analysis involves compiling, understanding, collecting, and processing thousands of numbers, and this is by no means easy. It is made even more difficult by the fact that many doctoral students are not trained in dissertation data analysis. To do proper dissertation data analysis, a doctoral student must have a good deal of statistical know-how, experience, and understanding. With that statistical know-how, experience and understanding, a doctoral student can certainly complete the dissertation data analysis on his or her own; but if a student does not have statistical expertise, it is a good decision to get help on the dissertation data analysis.
Statistics Solutions is the country's leader in dissertation data analysis. Contact Statistics Solutions today for a free 30-minute consultation.
Why do I have to do dissertation data analysis?
If you are working on your dissertation and proving something, you have gathered and acquired facts, figures, numbers and statistics to prove your point. It is not enough to simply gather and accumulate these numbers and figures, however, as dissertation data analysis must be done so that you can apply these numbers to your dissertation. In performing proper dissertation data analysis, you will be providing scientific backing to your thesis and conclusion, thus dissertation data analysis must be completed in order for a doctoral student to receive his or her degree and finish the dissertation.
Who can help me with dissertation data analysis?
Usually, students turn to their dissertation advisor for help on all things concerning the dissertation. This can be extremely useful, as a student’s advisor knows how to point the student in the proper direction and knows what the approval panel will be looking for when it judges the student’s dissertation. Some advisors provide wonderful on-going support that helps the doctoral student throughout the entire task of the dissertation. Other advisors, however, are not always available when the student needs them, and this can be extremely frustrating. Still other advisors wait until the student makes a mistake somewhere along the way and only then provide the information and guidance that would have been helpful before the student actually made the mistake (and mistakes can take weeks or months to fix, depending on where the mistake happens and when the student realizes it). There is no need, however, to be frustrated with advisors who are not always available. Instead, doctoral students can now seek out help from a dissertation consultant, who can provide one on one, hands-on guidance, assistance and help on all things concerning the dissertation, including the dissertation data analysis, whenever the doctoral student needs that help. With the help of a dissertation consultant, the student will be able to accurately and precisely perform the dissertation data analysis, because dissertation consultants are trained in statistics and can easily “make sense” of both the statistical procedures and the thousands and thousands of numbers that are used in those procedures and in the dissertation data analysis. Thus, with the help of a dissertation consultant, the dissertation data analysis and every other aspect of the dissertation will be manageable, accurate and relatively easy to complete.
It is best, therefore, to get help on the dissertation data analysis. And if a student’s advisor is not available to offer that help on the dissertation data analysis, a dissertation consultant can more than fill in as the advisor, offering invaluable and unmatched help to the student.
Wednesday, September 23, 2009
What is a Statistical Analysis Consultant?
A statistical analysis consultant is a trained and professional statistician who can be hired to help anyone who is struggling with statistics. A statistical analysis consultant will provide one on one help to anyone who needs to use statistical procedures and gather statistical data and analysis.
Who uses a statistical analysis consultant?
Many people turn to a statistical analysis consultant. In fact, statistical analysis consultants are useful for many different people in many different fields. Doctors and nurses, for example, turn to a statistical analysis consultant when they want to find out the effectiveness of a particular drug or a particular dosage. Additionally, the government uses statistical analysis consultants all the time in order to measure the accuracy and effectiveness of certain government programs. Business owners also turn to a statistical analysis consultant so that they can get an accurate measurement of what works and what does not work in their business or company. And finally, doctoral students often turn to a statistical analysis consultant as doctoral students must use statistics heavily and often in their dissertations.
Why do doctoral students use a statistical analysis consultant?
Many doctoral students hire a statistical analysis consultant because many doctoral students struggle with the statistical aspects of their dissertations. This is true because many doctoral students do not have the training or background in statistics that they need to complete the sophisticated and complicated aspects of their dissertation. The dissertation relies very heavily on statistics because statistics are used to prove whatever it is that the student has set out to prove in his or her dissertation. Thus, doctoral degree seeking students often turn to a statistical analysis consultant so that the doctoral degree seeking student can get the help that he or she needs in order to finish his or her dissertation. With the help of a statistical analysis consultant, the doctoral student will be well on his or her way to obtaining his or her degree, and finishing all of his or her statistical analyses for the dissertation.
How specifically can a statistical analysis consultant help a doctoral degree seeking student?
Specifically, a statistical analysis consultant can help a doctoral student with every statistical process and procedure of the dissertation. This help includes the following:
• Phrasing the topic statistically, so that it makes statistical sense
• Working through the proposal phase, where the student must explain what statistical processes and procedures he or she will follow throughout the dissertation
• Determining the sample sizes for data collection
• Gathering the data in a way that is neither skewed nor biased
• Analyzing the data that has been collected
• Making sense of and interpreting all of the collected data
• Applying the results of the data and the statistics to the student's dissertation
Thus, a statistical analysis consultant can prove to be a tremendous help to all degree seeking students as they work through the statistical portions of their dissertations.
Friday, August 21, 2009
t-test
The t-test for a difference of means involves a single interval-level dependent variable and a dichotomous independent variable. The t-test can compare the means of two independent samples or two dependent samples. Additionally, it can test a sample mean against a known mean, which is called the one-sample t-test.
Statistics Solutions is the country's leader in t-test and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.
The t-test is a parametric test, and it makes a very familiar assumption: that of a normally distributed population. The researcher should note that when all of the assumptions of the t-test are met, it is at its most powerful—more powerful than any comparable two-sample non-parametric test.
The t-test is typically employed in cases where the sample size is less than 30. If the sample size is larger than 30, the researcher traditionally employs the z test instead, since the t distribution approaches the normal distribution as the sample size grows.
The t-test is based upon Student's t distribution. The calculation of the t-test differs for independent and dependent samples, but the inference drawn from the t-test is the same.
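The one-sample, independent-samples, and paired-samples variants can be sketched with SciPy; the arrays below are fabricated illustrative data, not results from any actual study.

```python
# Sketch of the three common t-test variants using SciPy.
# All data below are made-up example values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=20)   # independent sample 1
group_b = rng.normal(loc=11.0, scale=2.0, size=20)   # independent sample 2
before = rng.normal(loc=50.0, scale=5.0, size=15)    # paired: first measurement
after_ = before + rng.normal(loc=1.0, scale=1.0, size=15)  # paired: second

# One-sample t-test: sample mean versus a known (hypothesized) mean
t1, p1 = stats.ttest_1samp(group_a, popmean=10.0)

# Independent two-sample t-test: difference of means between two groups
t2, p2 = stats.ttest_ind(group_a, group_b)

# Paired (dependent) samples t-test: same subjects measured twice
t3, p3 = stats.ttest_rel(before, after_)

print(t1, p1)
print(t2, p2)
print(t3, p3)
```

Note that the calculation differs across the three calls, but each returns a t statistic and a p-value that are interpreted the same way.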
The critical value in the t-test is the value found in a table of the t distribution for a given level of significance and the appropriate degrees of freedom. If the calculated t value exceeds the critical t value (in absolute value, for a two-tailed test), then the null hypothesis assumed in the t-test is rejected. But if the calculated value is less than the critical t value, then the researcher fails to reject the null hypothesis.
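The decision rule above can be sketched as follows; the significance level, degrees of freedom, and calculated t statistic are assumed example values, not figures from any particular study.

```python
# Illustrative decision rule comparing a calculated t against the
# critical t value. The alpha, df, and t_calculated are example numbers.
from scipy import stats

alpha = 0.05          # level of significance
df = 24               # degrees of freedom (e.g., n - 1 for a sample of 25)
t_calculated = 2.30   # hypothetical calculated t statistic

# Two-tailed critical value from the t distribution
t_critical = stats.t.ppf(1 - alpha / 2, df)

if abs(t_calculated) > t_critical:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"

print(round(t_critical, 3), decision)
```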
The confidence limits in the t-test construct an upper bound and a lower bound on an estimate for a given level of significance. The confidence interval is the range between these bounds. Such limits are employed in the t-test because they provide additional information on the relative meaningfulness of the estimates.
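A minimal sketch of computing confidence limits around a sample mean at the 5% level of significance, using a made-up sample:

```python
# Confidence limits on a sample mean via the t distribution.
# The sample values are fabricated for illustration.
import math
from scipy import stats

sample = [9.8, 10.2, 10.5, 9.9, 10.1, 10.4, 9.7, 10.3]
n = len(sample)
mean = sum(sample) / n

# Sample standard deviation (with n - 1 in the denominator) and standard error
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

lower = mean - t_crit * se   # lower confidence limit
upper = mean + t_crit * se   # upper confidence limit
print(round(lower, 3), round(upper, 3))
```

The interval between `lower` and `upper` is the confidence interval described above; a narrower interval indicates a more precise estimate.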
In SPSS, the t-test is conducted by selecting "Compare Means" from the "Analyze" menu and then clicking the option that matches the type of t-test to be conducted. If two samples are involved, the researcher employs either an independent-samples t-test or a paired-samples t-test, depending on the type of data.
The following are the assumptions made in the t-test:
The first assumption of the t-test is that the population under consideration is normally distributed, and there are certain tests of normality for checking this assumption. The researcher should note that the t-test can draw invalid conclusions when the two samples come from distributions with widely different shapes. Some statisticians suggest that normality matters most when the sample size is small, roughly less than 15.
The second assumption made in the t-test is that of homogeneity of variances in the samples. SPSS tests the homoscedastic nature of the samples with "Levene's Test for Equality of Variances," which reports an F value and its corresponding significance. The researcher should note that the t-test can produce invalid inferences if the two samples are unequal in size and also have unequal variances.
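The homogeneity check that SPSS performs can be approximated in SciPy with Levene's test; the two groups below are fabricated example data, and falling back to the unequal-variance (Welch) form of the t-test is a standard remedy when the variances differ.

```python
# Checking the homogeneity-of-variances assumption with Levene's test
# before running an independent-samples t-test. Example data only.
from scipy import stats

group_1 = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2]
group_2 = [11.5, 13.0, 10.9, 13.4, 11.2, 13.1, 12.6]

f_stat, p_value = stats.levene(group_1, group_2)

# A small p-value suggests unequal variances; in that case use the
# Welch (unequal-variance) form of the independent-samples t-test.
if p_value < 0.05:
    t_stat, p_t = stats.ttest_ind(group_1, group_2, equal_var=False)
else:
    t_stat, p_t = stats.ttest_ind(group_1, group_2)

print(f_stat, p_value, t_stat, p_t)
```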
The third point to note is that it does not matter for the inference whether the samples are dependent or independent. This is because the inference drawn from the t-test remains the same in either case; only the calculation of the t-test differs.