
Thursday, August 20, 2009

Chi square test

In the chi square test, chi square is defined as the square of a standard normal variable; with n degrees of freedom, it is the sum of n such squared variables.

Statistics Solutions is the country's leader in chi square test and dissertation statistics. Contact Statistics Solutions today for a free 30-minute consultation.

The chi square test is basically an approximate test that is valid for large values of ‘n.’ Here ‘n’ is the number of observations under consideration.

There are different varieties of the chi square test where the chi square statistic finds its application. They are as follows:

A chi square test is used to test the hypothetical value of the population variance.

A chi square test is used to test the goodness of fit.

A chi square test is used to test the independence of attributes.

A chi square test is used to test the homogeneity of independent estimates of the population variance.

A chi square test is used to test the homogeneity of independent estimates of the population correlation coefficient.

The chi square distribution involved in the chi square test is a continuous distribution. Its range is from zero to infinity. The probability density function (pdf) of the chi square statistic is given by the following:

f(\chi^2) = \dfrac{e^{-\chi^2/2}\,(\chi^2)^{(n/2)-1}}{2^{n/2}\,\Gamma(n/2)}, \qquad 0 < \chi^2 < \infty

Among the chi square tests mentioned above, the most popular are the chi square test for goodness of fit and the chi square test for independence of attributes.

The chi square test for independence of attributes is conducted on observations arranged in a contingency table. It should be noted that this type of chi square test is carried out only on categorical variables.

Let us state an example in which the chi square test for independence of attributes is carried out. Suppose two sample polls of votes for two candidates, A and B, for a public office are taken, one from among the residents of rural areas and one from among the residents of urban areas. In this case there are two categorical variables: candidate preference (A or B) and area of residence (rural or urban). The chi square test is carried out here to examine whether the type of area is associated with voting preference in the election.
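A minimal sketch of this test follows, assuming hypothetical vote counts arranged in a two-by-two contingency table; SciPy’s chi2_contingency is used here only as one convenient implementation.

```python
# Hedged sketch: hypothetical vote counts; chi square test of independence.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: rural, urban; columns: votes for candidate A, votes for candidate B
table = np.array([[620, 380],
                  [550, 450]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(chi2, p_value, dof)
# A small p-value (for example, below 0.05) suggests that voting preference
# is associated with the type of area.
```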

The second popular test is the chi square test for goodness of fit. This is a powerful chi square test for testing the significance of the discrepancy between theory and experiment. It was introduced by Prof. Karl Pearson, and it enables the researcher to find out whether the deviation of the experiment from theory has occurred by chance or is due to the inadequacy of the theory.

Like the other chi square tests, it is an approximate test that is valid for large values of ‘n.’
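A minimal sketch of the goodness of fit test, assuming hypothetical counts of 120 die rolls tested against the uniform frequencies a fair die would produce:

```python
# Hedged sketch: observed die-roll counts vs. theoretical fair-die frequencies.
from scipy.stats import chisquare

observed = [18, 22, 16, 25, 20, 19]   # counts for each face of the die
expected = [20] * 6                   # theoretical (fair-die) frequencies
statistic, p_value = chisquare(f_obs=observed, f_exp=expected)
print(statistic, p_value)
# A large p-value indicates that the discrepancy between theory and
# experiment could plausibly have occurred by chance.
```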

There are certain conditions that must be satisfied while conducting the chi square test. They are as follows:

The sample observations in the chi square test must be independent from each other.

The constraints on the cell frequencies in the chi square test must be linear in nature. In other words, this means that in the chi square test, the sum of the observed frequencies must be equal to the sum of the expected frequencies.

The total frequency in the chi square test, which is ‘N,’ must be reasonably large, which means that it should be greater than 50.

The theoretical cell frequency in the chi square test must not be less than five.

Monday, August 17, 2009

Methodology

Methodology refers to the way of doing things in a given field. This document will detail the statistical methodology used in the fields of medicine and nursing. There is a methodology called the testing of hypotheses, and this methodology is extensively used in medicine and nursing. The testing of hypotheses is a kind of confirmatory procedure that helps the researcher understand whether or not the hypothesis he/she made is true. This methodology involves several terms that are often used by the researcher while making statistical inferences about the drug being tested.

Statistics Solutions is the country's leader in methodology and dissertation consulting. Contact Statistics Solutions today for a free 30-minute consultation.

There are many terms used in this methodology. The term null hypothesis represents the theory that there is no significant difference between the two products being tested. Thus, the null hypothesis in this case will be stated in the following way: there is no statistically significant difference between the new drug and the current drug. The alternative hypothesis, on the other hand, is the complement of the null hypothesis. So in the field of medicine and nursing, the alternative hypothesis will be stated in the following way: there is a statistically significant difference between the new drug and the current drug.

The methodology behind a Type I error involves rejecting the null hypothesis when it is actually true. In the field of medicine and nursing, a Type I error would mean concluding that the new drug and the current drug differ when in fact they do not. The methodology behind a Type II error, on the other hand, involves failing to reject the null hypothesis when it is actually false. In the field of medicine and nursing, a Type II error would mean accepting a defective drug as if it were an effective drug. According to this methodology, this error is one of the most serious errors in this field.

The methodology behind the test statistic is that it is the value that helps the researcher to decide whether a null hypothesis should be accepted or rejected.

The methodology behind the critical region is that it is the set of values of the test statistic for which the null hypothesis is rejected in tests of hypothesis. The critical region methodology is also called the region of rejection.

The methodology behind the level of significance is that it is the probability of a false rejection of the null hypothesis. Usually, this value is chosen by the researcher as 0.05.

The methodology behind the power in tests of hypothesis is that it measures the test’s ability to reject the null hypothesis when the null hypothesis is false. In other words, power helps the researcher make a correct decision. Its maximum value is one and its minimum value is zero.
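A minimal sketch of how power can be computed, assuming a two-sample t test with a hypothetical effect size of 0.5, 50 subjects per group, and a 0.05 level of significance; statsmodels is used here as one convenient implementation.

```python
# Hedged sketch: power of a two-sample t test under assumed study parameters.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5, nobs1=50, alpha=0.05)
print(round(power, 3))  # probability of rejecting a false null hypothesis
```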

The methodology behind the one sided test will be discussed with the help of an example. Suppose the researcher wants to test whether the new drug is more effective than the current drug. According to this methodology, the alternative hypothesis specifies a direction: for example, that the new drug is more effective than the current drug. According to the methodology behind the two sided test, on the other hand, the alternative hypothesis simply states that there is a statistically significant difference between the new drug and the current drug, in either direction.
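A minimal sketch contrasting the two sided and one sided alternatives, assuming hypothetical outcome scores for the current drug and the new drug; the `alternative` argument of ttest_ind requires a reasonably recent version of SciPy (1.6 or later).

```python
# Hedged sketch: two sided vs. one sided two-sample t test on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
current_drug = rng.normal(loc=50, scale=8, size=40)  # simulated outcomes
new_drug = rng.normal(loc=54, scale=8, size=40)

# Two sided test: is there any difference between the drugs?
t_two, p_two = stats.ttest_ind(new_drug, current_drug)

# One sided test: is the new drug more effective than the current drug?
t_one, p_one = stats.ttest_ind(new_drug, current_drug, alternative='greater')

print(p_two, p_one)
```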

Thursday, August 6, 2009

Path Analysis

Path analysis is an extended, generalized form of the regression model. Path analysis is used for comparing two or more causal models against the observed correlation matrix. A path model is drawn diagrammatically in the form of circles and arrows that indicate causation. The task of path analysis is to predict the regression weights; the regression weights predicted by the model are then compared to the observed correlation matrix. In path analysis, a goodness of fit test is conducted in order to show whether the model provides a good fit.

Statistics Solutions is the country's leader in statistical consulting and path analysis. Contact Statistics Solutions today for a free 30-minute consultation.

While conducting path analysis, a researcher comes across some key terminologies used during path analysis. The following terminologies are used during path analysis:

For researchers, the first thing to tackle is the question of which estimation method is to be used in path analysis. The ordinary least squares (OLS) method and the maximum likelihood method are used to estimate the paths.

Additionally, there is a term called path model in path analysis. Path model in path analysis is nothing but a diagram that indicates independent variables, intermediate variables and dependent variables. The arrows with a double head indicate that the covariance is being calculated between the two variables in path analysis.

The exogenous variables in path analysis are those variables with no arrows pointing towards them, except for measurement error. The endogenous variables in path analysis can have both incoming and outgoing arrows.

The path coefficient in path analysis is the same as that of the standardized regression coefficient. This coefficient in path analysis indicates the direct effects of an independent variable on the dependent variable.

When the estimation method is ordinary least squares (OLS), there is a term called disturbance terms in path analysis. These terms in path analysis are nothing but the residual error terms. They represent the unexplained variance and the errors that occurred during measurement (i.e. the measurement errors).

As discussed, a goodness of fit test is used in path analysis, and therefore the chi square statistic is also used in path analysis. A chi square value that is not significant indicates a model with a good fit.

Path analysis is generally conducted with the help of Analysis of Moment Structures (AMOS), which is an added module in SPSS. Other than AMOS, there is other statistical software, such as SAS and LISREL, that can be used to conduct path analysis. According to Kline (1998), an adequate sample size is 10 cases per parameter in path analysis, and the ideal sample size is 20 cases per parameter.
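As an illustration, here is a minimal sketch of path analysis estimated with ordinary least squares, assuming a hypothetical three-variable model (X → M, X → Y, M → Y) and simulated data; because the variables are standardized, the OLS coefficients can be read as path coefficients.

```python
# Hedged sketch: a hypothetical path model estimated by two OLS regressions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)               # assumed structural relation
Y = 0.4 * M + 0.3 * X + rng.normal(size=n)     # assumed structural relation

standardize = lambda v: (v - v.mean()) / v.std(ddof=1)
X, M, Y = standardize(X), standardize(M), standardize(Y)

m_model = sm.OLS(M, sm.add_constant(X)).fit()                        # path X -> M
y_model = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit()  # paths X, M -> Y

print("path X -> M:", m_model.params[1])
print("paths X -> Y and M -> Y:", y_model.params[1:])
```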

Since path analysis is a statistical method, it has assumptions. The following are the assumptions of path analysis:

In path analysis, the relationship between the variables should be linear in nature. The data used in path analysis should have an interval scale. In order to reduce disturbances in the data, the theory of path analysis assumes that the error terms should not be correlated with the variables.

Path Analysis, however, also has some limitations. Although path analysis can evaluate or test two or more causal hypotheses, path analysis cannot establish the direction of causality.

Path analysis is useful only in cases where a small number of hypotheses (that can be represented by a single path) are being tested.

Thursday, July 23, 2009

Methodology in Psychology

Methodology refers to the theoretical analysis of the methods appropriate to a particular field of study. The purpose of this paper is to discuss statistical methodology in the field of psychology.

Statistics Solutions is the country's leader in statistical consulting and methodology. Contact Statistics Solutions today for a free 30-minute consultation.

In the field of psychology, statistical methodology such as statistical significance testing is widely applied. The methodology consists of statistical significance tests, such as t-tests. The t-test is used to compare two samples under study for statistical significance. Suppose one wants to compare the literacy rates of two regions. In this case, the t-test methodology is useful. The null hypothesis will be that there is no statistically significant difference between the literacy rates of the two samples drawn from regions A and B. If the calculated t statistic exceeds the tabulated (critical) t statistic, the null hypothesis is rejected at the chosen level of significance.
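A minimal sketch of this comparison, assuming hypothetical literacy rates for ten areas in each region; the calculated t statistic is compared with the tabulated critical value at the 0.05 level.

```python
# Hedged sketch: calculated t statistic vs. tabulated critical value.
import numpy as np
from scipy import stats

region_a = np.array([72, 68, 75, 80, 66, 74, 71, 69, 77, 73])
region_b = np.array([65, 70, 62, 68, 64, 66, 61, 67, 63, 69])

t_calc, p_value = stats.ttest_ind(region_a, region_b)
df = len(region_a) + len(region_b) - 2
t_crit = stats.t.ppf(1 - 0.025, df)   # two-sided critical value, alpha = 0.05

# Reject the null hypothesis if the calculated t exceeds the tabulated t
print(abs(t_calc) > t_crit, p_value)
```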

A statistical methodology called ANOVA, i.e. Analysis of Variance, is used to examine the differences in the mean values of the dependent variable associated with the effect of the controlled independent variables after taking the influence of the uncontrolled independent variables into account.

One way ANOVA involves only one categorical variable, or a single factor. Similarly, if two or more factors are involved, the methodology is termed n way ANOVA. The following two assumptions are made in this methodology (a minimal one way sketch follows the list):
  • The samples drawn from a population in this methodology should be random in nature.
  • The variance in this methodology should be homogeneous in nature.
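A minimal one way ANOVA sketch, assuming three hypothetical groups of scores:

```python
# Hedged sketch: one way ANOVA on three hypothetical groups.
from scipy.stats import f_oneway

group_1 = [23, 20, 25, 22, 21]
group_2 = [30, 28, 27, 31, 29]
group_3 = [24, 26, 23, 25, 27]

f_stat, p_value = f_oneway(group_1, group_2, group_3)
print(f_stat, p_value)
# A small p-value suggests that at least one group mean differs from the others.
```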

A statistical methodology called partial correlation is used in the field of psychology. This methodology measures the relationship between two variables while controlling or adjusting for the effect of one or more additional variables. In psychology, this methodology is useful in behavioral studies. Since psychology is a branch of social science, quantitative methodology can be carried out in SPSS, a statistical software package for the social sciences.
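A minimal sketch of a first order partial correlation (a single control variable), computed by removing the linear effect of the control variable from both variables and then correlating the residuals; the variable names and data below are hypothetical.

```python
# Hedged sketch: first order partial correlation via residualized variables.
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after removing the linear effect of control."""
    Z = np.column_stack([np.ones_like(control), control])
    resid_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    resid_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(resid_x, resid_y)[0, 1]

rng = np.random.default_rng(7)
anxiety = rng.normal(size=100)                     # hypothetical measures
stress = 0.6 * anxiety + rng.normal(size=100)
performance = -0.5 * stress + rng.normal(size=100)

print(partial_corr(anxiety, performance, stress))
```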

There are certain terms used in this methodology that can help in understanding it in a precise manner. The term control variables refers to those variables whose variance is removed from the initially correlated variables. The order of the correlation in this methodology refers to the number of control variables; for example, a first order partial correlation is one that has a single control variable.

Other than quantitative methodology, there are two techniques of qualitative methodology that are used in this field, namely the Delphi Process and the Nominal Group Technique. The prime objective of the Delphi Process methodology is to create a reliable and creative investigation of ideas that yields suitable information for appropriate decision making. This methodology operates as a useful communication device which facilitates the formation of group judgments and helps in retrieving the appropriate response. The Nominal Group Technique is a balanced method involving overall participation. The term “balanced” is used because this methodology encourages equal participation of all the group respondents; it draws on the ideas and views of a group of people rather than an individual.

The idea behind the Nominal Group Technique methodology is its biggest advantage over the Delphi Process methodology. This advantage also points to the major difference between the two methodologies: the information obtained using the Nominal Group Technique is more reliable because responses are obtained from each and every participant.

Monday, June 29, 2009

Normal Curve Tests of Means and Proportions

The normal curve tests of means and proportions refer to those tests that are basic methods of testing the possible differences between two samples. The normal curve tests of means and proportions can also be referred to as parametric tests under the assumption that the population follows a normal distribution. Normal curve tests of means and proportions are used when the size of the sample is more than 29.

Statistics Solutions is the country's leader in statistical consulting and can assist with your dissertation, thesis or research statistics. Contact Statistics Solutions today for a free 30-minute consultation.

There are certain conceptual terms that are helpful to know to better understand the normal curve tests of means and proportions.

The deviation scores in normal curve tests of means and proportions are defined mathematically as the difference between an observed score and the mean for a particular variable. The deviation scores always sum to zero, since the deviations above the mean exactly offset the deviations below the mean.

The standard error in normal curve tests of means and proportions is used to estimate the variability of the sample means. Since there is only one sample in the normal curve tests of means and proportions, the estimated standard error can be computed as the ratio of the standard deviation to the square root of the sample size.
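A minimal sketch of a one sample normal curve (z) test of a mean, assuming hypothetical scores with a sample size above 29; the estimated standard error and 95% confidence limits described here are computed along the way.

```python
# Hedged sketch: one sample z test of a mean on simulated scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=52, scale=10, size=40)   # hypothetical scores, n > 29
mu_0 = 50                                        # hypothesized population mean

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(sample.size)   # estimated standard error
z = (mean - mu_0) / se
p_value = 2 * stats.norm.sf(abs(z))              # two-sided p-value
ci = (mean - 1.96 * se, mean + 1.96 * se)        # 95% confidence limits

print(z, p_value, ci)
```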

The confidence limits in the normal curve tests of means and proportions set the upper and the lower bounds on an estimate for a given level of significance. In the normal curve tests of means and proportions, researchers report these limits because they provide additional information about the estimates.

The normal curve tests of proportions are used to test differences in proportions or percentages rather than differences in means.
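A minimal sketch of a two sample test of proportions, assuming hypothetical counts of respondents favoring a candidate in each sample; statsmodels’ proportions_ztest is used as one convenient implementation.

```python
# Hedged sketch: two sample z test of proportions on hypothetical counts.
from statsmodels.stats.proportion import proportions_ztest

favoring = [45, 30]        # respondents favoring the candidate in each sample
sample_sizes = [100, 90]

z_stat, p_value = proportions_ztest(favoring, sample_sizes)
print(z_stat, p_value)
```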

When two independent samples are tested by the researcher in the normal curve tests of means and proportions, somewhat different formulas are used, although the main aim remains the same: the computed z value is compared with the critical value from the normal curve table.

When the correlated two samples test is used by the researcher in the normal curve tests of means and proportions, then the two correlated samples are factored into the formulas for the two sample means and proportions test. The notations that are in these samples of the normal curve tests of means and proportions are similar to the previous samples with an addition of the Pearsonian correlation.

There are also certain assumptions in the normal curve tests of means and proportions.
The first assumption of the normal curve tests of means and proportions is that as the name suggests, the variable of interest should be normally distributed in the population.

The second assumption of the normal curve tests of means and proportions is that the data should be of interval scale.

The third assumption of the normal curve tests of means and proportions is that the size of the sample should not be small.

The fourth assumption of the normal curve tests of means and proportions is that there should be homogeneity within the variances. This assumption of the normal curve tests of means and proportions is used in two sample testing cases.

Friday, June 26, 2009

Reliability Analysis

In Reliability analysis, the word reliability refers to the fact that a scale should consistently reflect the construct it is measuring. There are certain times and situations where Reliability analysis can be useful.

Statistics Solutions is the country's leader in statistical data analysis and can assist with reliability analysis for your dissertation, thesis or research project. Contact Statistics Solutions today for a free 30-minute consultation.

One way for the researcher to think about Reliability analysis is that two observations under study that are equivalent to each other in terms of the construct being measured should also have equivalent outcomes.

There is a popular technique of Reliability analysis called split half reliability. This method of Reliability analysis splits the data into two parts. The score for each participant in the Reliability analysis is then computed on the basis of each half of the scale. In this type of Reliability analysis, if the scale is very reliable, the value of a person’s score on one half of the scale will be equivalent to the score on the other half, and this should hold true for all the participants.

The major problem with this type of Reliability analysis is that there are several ways in which a set of data can be divided into two parts, and therefore the outcome could be numerous.

In order to overcome this problem in this type of Reliability analysis, Cronbach (1951) introduced a measure that has become a common measure in Reliability analysis. This measure of Reliability analysis is loosely equivalent to splitting the data into two halves in every possible way and computing the correlation coefficient for each split. The average of these values is approximately equal to the value of Cronbach’s alpha in Reliability analysis.
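A minimal sketch of Cronbach’s alpha computed directly from a respondents-by-items score matrix; the five-item scale and the scores below are hypothetical.

```python
# Hedged sketch: Cronbach's alpha from a hypothetical respondents-by-items matrix.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with one row per respondent and one column per item."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

scores = np.array([                                  # hypothetical 5-item scale
    [4, 5, 4, 4, 5],
    [2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
    [1, 2, 1, 2, 1],
])
print(round(cronbach_alpha(scores), 3))
```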

There are basically two versions of alpha in Reliability analysis. The first version of alpha in Reliability analysis is the normal version. The second version of alpha in Reliability analysis is the standardized version.

The normal version of alpha in Reliability analysis is applicable when the items on a scale are summed to produce a single score for that scale. The standardized version of alpha in Reliability analysis is applicable when the items on a scale are standardized before they are summed up.
According to Kline (1999), the acceptable value of alpha in Reliability analysis is 0.8 in the case of intelligence tests, and the acceptable value of alpha in Reliability analysis is 0.7 in the case of ability tests.

There are certain assumptions that are assumed in Reliability analysis.

While conducting Reliability analysis in SPSS, the researcher should click on “Tukey’s test of additivity” as additivity is assumed in Reliability analysis.

In Reliability analysis, independence of the observations is assumed. However, it should be noted by the researcher that the test-retest type of Reliability analysis involves correlated data between the observations, which does not pose a statistical problem in assessing reliability.

In Reliability analysis, it is assumed that the errors are uncorrelated with each other. This means that in Reliability analysis there is no association among the errors; each error term is independent of the others.

In Reliability analysis, to attain reliability in the data, the coding done by the researcher should be consistent. This means that in Reliability analysis, the high values must be coded consistently, such that they have the same meaning across the items.

In the split half type of Reliability analysis, the random assignment of the subjects is assumed. Generally, in this type of Reliability analysis, the odd numbered items fall in one category and the even numbered items fall in the other.

Monday, June 15, 2009

Dissertation Statistics Tutoring

Any student working on a dissertation can benefit from dissertation statistics help. Dissertation statistics help makes the process of writing a dissertation easy, manageable and understandable.

Statistics Solutions is the country's leader in dissertation statistics help. Contact Statistics Solutions today for a free 30-minute consultation.

Dissertation statistics help can provide invaluable help, guidance and assistance to anyone writing a dissertation. The dissertation is the hardest and most challenging part of any student’s academic career. For this reason, it is important to seek dissertation statistics help. Unfortunately, many students do not choose to seek dissertation statistics help until they have reached a ‘stopping point’ or a problem. And while dissertation statistics help can fix that problem, the student has already wasted valuable time as he or she did not seek dissertation statistics help sooner.

Dissertation statistics help provides help throughout the dissertation—from the very beginning, to the very end of the dissertation. Dissertation statistics help is provided by trained professionals who are well versed in all things regarding dissertations and statistics. Dissertation statistics help is also provided by people who have already received their doctoral degrees, and thus dissertation statistics help is provided by people who have gone through every single stress that the student has gone through. This can be important as it is important to be able to understand the needs of the student who is acquiring dissertation statistics help.

Dissertations rely heavily on statistics and unfortunately many students do not know enough about statistics to be successful in their research and design for their project. Here again dissertation statistics help can assist the student. Dissertation statistics help will step in at the very beginning of the project and will help the student design that project properly. Dissertation statistics help will go over every single thing that needs to be done and dissertation statistics help will make sure that the student has the proper tools to complete the project. There will be no “wrong turns” with dissertation statistics help, as dissertation statistics help will be there assisting the student every single step of the way.

One of the most important advantages of obtaining dissertation statistics help is the availability and accessibility of dissertation statistics help. While all students have advisors, these advisors are not always available. What’s more, most of the time these advisors tell students what is wrong AFTER the student has already made the mistake. Thus, the student must go back and continually redo aspects of his or her dissertation. Because dissertation statistics help is always available, however, the student will not make costly mistakes because he or she has someone looking over his/her shoulder and ensuring that he or she does not make any mistakes. Thus, much time will be saved simply by having dissertation statistics help.

Dissertation statistics help is well worth any cost associated with it. In fact, if a student is debating whether or not to use dissertation statistics help, a simple phone call can clear any misconceptions about the price of dissertation statistics help. Once the student inquires about the pricing, he or she will quickly see that the time and energy saved by having dissertation statistics help is well worth what the student must pay for the dissertation statistics help.

Dissertation statistics help will make the completion of any dissertation easier and more attainable. While other students who do not seek dissertation statistics help struggle with many aspects of their dissertation, the student who is smart enough to seek dissertation statistics help will finish on time and with much less stress. Because dissertation statistics help provides guidance, help, and assistance throughout the entire process, the student is sure to finish on time and with a great amount of success.