In reliability analysis, the word reliability refers to the fact that a scale should consistently reflect the construct it is measuring. There are certain situations in which reliability analysis is especially useful.
Statistics Solutions is the country's leader in statistical data analysis and can assist with reliability analysis for your dissertation, thesis or research project. Contact Statistics Solutions today for a free 30-minute consultation.
One way to think about reliability is this: two observations that are equivalent in terms of the construct being measured should also produce equivalent outcomes.
A popular technique of reliability analysis is split-half reliability. This method splits the scale's items into two halves, and a score is computed for each participant on each half. If the scale is reliable, a participant's score on one half should be equivalent to his or her score on the other half, and this should hold for all participants.
The major problem with this approach is that there are many ways in which a set of items can be divided into two halves, so different splits can produce different reliability estimates.
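As a minimal sketch of one such split (odd- versus even-numbered items) on simulated data, the Python code below computes the correlation between the two half-scores and applies the Spearman-Brown correction, the usual adjustment (not described above) for estimating full-scale reliability from two half-scales. The data, sample size, and scale length are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 50 participants, 8 items driven by one latent trait.
trait = rng.normal(size=(50, 1))
items = trait + 0.8 * rng.normal(size=(50, 8))  # correlated item scores

# One possible split: odd-numbered items vs. even-numbered items.
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

# Correlate the two half-scores.
r = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction: estimates the reliability of the
# full-length scale from the correlation between the two halves.
split_half = (2 * r) / (1 + r)
print(round(split_half, 3))
```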
To overcome this problem, Cronbach (1951) introduced what is now the most common measure in reliability analysis. This measure is loosely equivalent to splitting the data into two halves in every possible way, computing the correlation coefficient for each split, and averaging these values; that average is comparable to the value of Cronbach's alpha.
There are basically two versions of alpha. The normal version applies when the items on a scale are summed to produce a single score for that scale. The standardized version applies when the items are standardized before they are summed.
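The formula behind the normal version is alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score), where k is the number of items. A minimal sketch of both versions on simulated data (the function names and data are hypothetical) might look like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Normal (raw) alpha for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standardized_alpha(items: np.ndarray) -> float:
    """Standardized alpha: z-score each item before summing."""
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    return cronbach_alpha(z)

rng = np.random.default_rng(1)
trait = rng.normal(size=(50, 1))
items = trait + rng.normal(size=(50, 8))  # hypothetical correlated items
print(cronbach_alpha(items), standardized_alpha(items))
```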
According to Kline (1999), an acceptable value of alpha is 0.8 in the case of intelligence tests and 0.7 in the case of ability tests.
Reliability analysis rests on certain assumptions.
While conducting reliability analysis in SPSS, the researcher should select "Tukey's test of additivity," because additivity is assumed.
Independence of the observations is also assumed. The researcher should note, however, that test-retest reliability involves correlated observations by design, and this does not pose a statistical problem for assessing reliability.
The errors are assumed to be uncorrelated with each other: there is no association among the error terms.
To attain reliable data, the researcher's coding should be consistent: high values must be coded so that they have the same meaning across all items.
In split-half reliability, random assignment of items to the two halves is assumed. In practice, the odd-numbered items often form one half and the even-numbered items the other.
To request a blog written on a specific topic, please email James@StatisticsSolutions.com with your suggestion. Thank you!
Monday, June 15, 2009
Dissertation Statistics Tutoring
Any student working on a dissertation can benefit from dissertation statistics help. Dissertation statistics help makes the process of writing a dissertation easy, manageable and understandable.
Statistics Solutions is the country's leader in dissertation statistics help. Contact Statistics Solutions today for a free 30-minute consultation.
Dissertation statistics help can provide invaluable guidance and assistance to anyone writing a dissertation. The dissertation is the hardest and most challenging part of any student's academic career. For this reason, it is important to seek dissertation statistics help. Unfortunately, many students do not choose to seek it until they have reached a 'stopping point' or a problem. And while dissertation statistics help can fix that problem, the student has already wasted valuable time by not seeking it sooner.
Dissertation statistics help provides support throughout the dissertation—from the very beginning to the very end. It is provided by trained professionals who are well versed in all things regarding dissertations and statistics, and by people who have already received their doctoral degrees and have therefore gone through every stress the student is going through. This matters because it is crucial to understand the needs of the student who is acquiring dissertation statistics help.
Dissertations rely heavily on statistics, and unfortunately many students do not know enough about statistics to be successful in the research and design of their project. Here again dissertation statistics help can assist. It steps in at the very beginning of the project, helps the student design the project properly, goes over everything that needs to be done, and makes sure the student has the proper tools to complete the project. There will be no 'wrong turns' with dissertation statistics help, as it is there assisting the student every single step of the way.
One of the most important advantages of obtaining dissertation statistics help is its availability and accessibility. While all students have advisors, these advisors are not always available. What's more, most of the time these advisors tell students what is wrong AFTER the student has already made the mistake. Thus, the student must go back and continually redo aspects of his or her dissertation. Because dissertation statistics help is always available, however, the student will avoid costly mistakes, as he or she has someone looking over his or her shoulder and ensuring that no mistakes are made. Thus, much time will be saved simply by having dissertation statistics help.
Dissertation statistics help is well worth any cost associated with it. In fact, if a student is debating whether or not to use dissertation statistics help, a simple phone call can clear any misconceptions about the price of dissertation statistics help. Once the student inquires about the pricing, he or she will quickly see that the time and energy saved by having dissertation statistics help is well worth what the student must pay for the dissertation statistics help.
Dissertation statistics help will make the completion of any dissertation easier and more attainable. While other students who do not seek dissertation statistics help struggle with many aspects of their dissertation, the student who is smart enough to seek it will finish on time and with much less stress. Because dissertation statistics help provides guidance and assistance throughout the entire process, the student is sure to finish on time and with a great amount of success.
Thursday, April 9, 2009
Statistical Data Analysis
Statistics is a science that involves data collection, data interpretation and, finally, data validation. Statistical data analysis is the procedure of performing various statistical operations. It is a form of quantitative research, which seeks to quantify the data and typically applies some form of statistical analysis. Quantitative data in statistical data analysis is usually descriptive data, such as survey data and observational data.
Statistical data analysis generally involves statistical tools that a layman cannot apply without statistical knowledge. There are various software packages for performing statistical data analysis, including Statistical Analysis System (SAS), the Statistical Package for the Social Sciences (SPSS), StatSoft's STATISTICA, etc.
The data in statistical data analysis consist of one or more variables; they may be univariate or multivariate, and the researcher's choice of technique depends on the number of variables. When there are multiple variables, multivariate analyses such as factor analysis and discriminant analysis can be performed. When there is a single variable, univariate analyses such as the t-test, the z-test, the F-test and one-way ANOVA are used.
Data are basically of two types: continuous and discrete. Continuous data cannot be counted, only measured; for example, the intensity of a light can be measured but not counted. Discrete data can be counted; for example, the number of bulbs can be counted.
Continuous data follow a continuous distribution function, described by a probability density function, or simply the pdf.
Discrete data follow a discrete distribution function, described by a probability mass function, or simply the pmf.
We use the word 'density' for continuous data because probability is spread over a continuum and measured per unit of the variable rather than counted; we use the word 'mass' for discrete data because probability is concentrated at countable points.
There are various pdf’s and pmf’s in statistical data analysis. For example, Poisson distribution is the commonly known pmf, and normal distribution is the commonly known pdf in statistical data analysis.
These distributions help us understand which kind of data falls under which model. Count data, such as the number of defective bulbs, might follow a Poisson distribution, while a measured quantity, such as the intensity of a bulb, might follow a normal distribution.
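As a brief illustration, assuming SciPy is available, the pmf and pdf of these two distributions can be evaluated directly; the parameter values below are arbitrary examples, not values from any particular study:

```python
from scipy import stats

# Discrete data: Poisson pmf, e.g. P(exactly 3 defective bulbs)
# when the average defect count per batch is 2.
print(stats.poisson.pmf(3, mu=2))

# Continuous data: normal pdf, e.g. the density of a measured
# intensity at x = 0.5 under a standard normal model.
print(stats.norm.pdf(0.5, loc=0, scale=1))
```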
A major task in statistical data analysis is statistical inference, which consists of two parts: estimation and tests of hypothesis.
Estimation uses sample data to approximate unknown population quantities (parameters), while a test of hypothesis assesses whether the data are consistent with a stated claim about those quantities.
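A minimal sketch of both tasks on simulated data (all values are hypothetical), using a one-sample t-test as the hypothesis test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(loc=5.2, scale=1.0, size=30)  # hypothetical measurements

# Estimation: the sample mean estimates the unknown population mean.
print(sample.mean())

# Test of hypothesis: are the data consistent with a population mean of 5?
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(t_stat, p_value)
```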
For more information on statistical consulting, click here.
Monday, April 6, 2009
Data Analysis
Data analysis is the procedure of collecting raw data and drawing inferences from it. It is one of the most important aspects of an analyst's work, and it plays a crucial role in deciding whether or not the retrieved data are reliable.
Data analysis is basically a two-step procedure that involves collecting and analyzing data. Data analysis can be explained with the help of the following example:
Suppose a researcher has conducted a survey to learn whether more auto parts are manufactured in Pune or in Chennai. The first step of data analysis is to collect the data through primary or secondary research. The next step is to make an inference about the collected data; in this case, the second step might involve a SWOT analysis, which examines the Strengths, Weaknesses, Opportunities and Threats revealed by the data under study.
Primary research involves collecting data directly, for example through questionnaires or telephone interviews. Secondary research involves collecting data from existing sources, such as published reports or material found on the internet.
There are basically two types of data analysis. These two types are as follows:
Qualitative data analysis: This kind of data analysis is the one that consists of an unstructured, exploratory research methodology based on small samples intended to provide an insight into the problem being solved.
Quantitative data analysis: On the other hand, this kind of data analysis seeks to quantify the data and typically involves some form of statistical data analysis.
Quantitative data analysis is performed when one needs statistical inferences about the data. In such cases, the analysis uses statistical techniques such as factor analysis and discriminant analysis.
A technical analyst, for example, performs data analysis by interpreting charts with time-series techniques to forecast the price trends of a particular commodity or share. Thus, data analysis can also be used for forecasting.
Data analysis is an integral part of every research work. The validity of data can be known only through data analysis.
In statistics, data analysis is done on quantitative data. Data analysis in relation to quantitative data analysis can be divided into descriptive statistics, exploratory data analysis and confirmatory data analysis.
Descriptive statistics in data analysis involves techniques such as the mean, median, mode, variance and standard deviation.
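As a quick sketch using Python's standard library (the scores are hypothetical survey responses):

```python
import statistics

scores = [4, 5, 5, 3, 4, 5, 2, 4]  # hypothetical survey responses

print(statistics.mean(scores))      # arithmetic mean
print(statistics.median(scores))    # middle value
print(statistics.mode(scores))      # most frequent value
print(statistics.variance(scores))  # sample variance
print(statistics.stdev(scores))     # sample standard deviation
```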
Exploratory data analysis involves the following steps:
· Formulation of a problem in data analysis.
· Identifying alternative courses of action in data analysis.
· Developing hypotheses in data analysis.
· Isolating key variables and relationships for further examination in data analysis.
· Gaining insights for developing an approach to the formulated problem in data analysis.
Sometimes, qualitative data analysis is undertaken to explain the findings obtained from quantitative data analysis. Thus, one can say that both qualitative data analysis and quantitative data analysis are interrelated with each other.
Data analysis is also closely related to data modeling, the process of fitting a model that represents the data as a whole.
For information on statistical consulting, click here.
Wednesday, April 1, 2009
Statistics Help
Fact and verity go hand-in-hand in today's world of constant change and practicality. Statistics has become a necessary facet of daily activity. With statistics as a core subject in the proper functioning of organizations and firms, help and assistance are made accessible to all those in need. Statistics help has become a compulsory need for businesses, as it ensures functionality and efficiency within organizations. Statistical help guides and assists clients such as students, researchers and members of the business or government communities in analyzing complicated statistical problems. Backed by strong statistical facts, organizations with statistical help use these surveys and analyses to create constructive findings and conclusions.
Statistics forms a core element in the proper execution of the activities and functions of organizations and plays a crucial part in determining business activities and behavior. In business, applications such as risk assessment, data analysis, data mining and decision support can all be carried out through statistics. Statistical help and analysis go a long way toward determining business processes, and they help set a proper course through research and findings.
Statistics is also applied by students when they write theses, dissertations, reports and term papers, where statistical help and validation may be required. Statistics help also aids organizations in expediting growth and progress. It should come from a well-trained workforce that is highly skilled in statistics and experienced and knowledgeable in the field. Such help can be acquired from experts such as professors, business consultants, researchers and specialized statistical consultants. The consultants should have good communication skills for interacting with clients, a good scientific and analytical mind, statistical understanding, and computer proficiency. Statisticians should be able to comprehend the needs of the client and fulfill them as per the client's requirements. Statistics help is necessarily centered on the needs of the clients, be it analysis, research or a survey. Hence, budget should be taken into consideration even as quality is emphasized.
When it comes to statistics, the bounds are endless. Much of the world today depends on statistical help and assistance, and almost every domain of life relies on statistics in some form. While statistics help reaches milestones in the field of business, it also achieves goals in medicine. Statistics help has become such a prevalent feature that the need for firms providing it is on the rise. Many people have skills and qualifications in fields other than statistics, yet statistics is required to complete their work or reports; consequently, they depend on statistics help to achieve their desired results.
Given that competition between firms is on the rise, statistics help assists organizations in winning more prospects and gaining leverage over other contenders. Some organizations, especially smaller firms that lack the skill and capacity to perform statistical analyses in-house, rely on statistical help to further their interests.
Statistics help is a very relevant instrument in today's world. It promotes efficiency and accuracy and is applicable to almost every sphere of life. The possibilities that statistics help provides go beyond measure. Putting an end to ball-park figures and estimates, statistics help has become an indispensable part of everyday activities thanks to its precision and exactness.
For help with your statistical analysis, click here.
Exploratory Factor Analysis
Factor Analysis is a general name denoting a class of procedures primarily used for data reduction and summarization. In research, there are a large number of variables which are extensively correlated and must be reduced to a manageable level. Relationships among sets of many interrelated variables are examined and represented in terms of a few underlying factors.
There are basically two approaches to Factor Analysis:
· Exploratory Factor Analysis (EFA) seeks to uncover the underlying structure of a relatively large set of variables. The researcher's a priori assumption is that any indicator may be associated with any factor. This is the most common form of factor analysis: there is no prior theory, and factor loadings are used to intuit the factor structure of the data.
· Confirmatory Factor Analysis (CFA) seeks to determine whether the number of factors and the loadings of measured (indicator) variables on them conform to what is expected on the basis of pre-established theory. Indicator variables are selected on the basis of prior theory, and factor analysis is used to see whether they load, as predicted, on the expected number of factors.
The basic difference between Exploratory Factor Analysis and CFA is that in CFA the researcher's a priori assumption is that each factor (the number and labels of which may be specified in advance) is associated with a specified subset of indicator variables. The major limitation of Exploratory Factor Analysis is its simplicity: it does not support confirmatory inference. For this reason, Exploratory Factor Analysis is used less than Confirmatory Factor Analysis when the goal is to test a pre-established theory.
The following techniques are used in both approaches, Exploratory Factor Analysis and CFA:
· Principal Component Technique: In this technique, used in Exploratory Factor Analysis, the total variance in the data is considered. The diagonal of the correlation matrix consists of unities, and the full variance is brought into the factor matrix. The principal component technique is recommended when the primary concern is to determine the minimum number of factors that will account for the maximum variance in the data for use in subsequent multivariate analysis.
In addition to the Principal Component Technique, there are more complex techniques used in exploratory and confirmatory factor analysis. These are also called extraction methods and are as follows:
· Image factoring: This technique is based on the correlation matrix of predicted variables rather than actual variables; each variable is predicted from the others using multiple regression.
· Maximum likelihood factoring (MLF): This technique forms factors as linear combinations of variables, with parameter estimates chosen to be those most likely to have produced the observed correlation matrix, using Maximum Likelihood Estimation (MLE) and assuming multivariate normality. Correlations are weighted by each variable's uniqueness, where uniqueness refers to the difference between a variable's variability and its communality. MLF generates a chi-square goodness-of-fit test, and the researcher can increase the number of factors one at a time until a satisfactory fit is obtained.
· Alpha factoring: This technique is based on maximizing the reliability of the factors, assuming that the variables are randomly sampled from a universe of variables. Unlike other methods, which treat cases as sampled and variables as fixed, this method treats the variables as sampled.
· Unweighted least squares (ULS) factoring: This technique in Exploratory Factor Analysis is based upon minimizing the sum of squared differences between the observed and estimated correlation matrices, without counting the diagonal.
· Generalized least squares (GLS) factoring: This technique adjusts ULS by weighting the correlations inversely by the variables' uniqueness (more unique variables are weighted less). Like MLF, GLS generates a chi-square goodness-of-fit test, and the researcher can increase the number of factors one at a time until a satisfactory fit is obtained.
The major disadvantage of these techniques is that they are quite complex and are not recommended for an inexperienced user; hence they are used less often than the principal component technique. For help with these techniques, click here. A small illustration of the principal component technique appears below.
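As a minimal, hedged sketch of the principal component technique on simulated data: the correlation matrix (with unities on the diagonal) is eigendecomposed, factors are retained using the common eigenvalue-greater-than-one rule (an assumed retention criterion, not mandated above), and loadings are computed. For maximum likelihood extraction, a library routine such as scikit-learn's FactorAnalysis could be used instead.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data: 100 respondents, 6 observed variables driven
# by 2 latent factors plus noise.
latent = rng.normal(size=(100, 2))
true_loadings = rng.normal(size=(2, 6))
data = latent @ true_loadings + 0.5 * rng.normal(size=(100, 6))

# Principal component extraction: eigendecompose the correlation
# matrix, whose diagonal consists of unities (total variance analysed).
corr = np.corrcoef(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]  # largest eigenvalue first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Retain factors with eigenvalue > 1 (Kaiser criterion, assumed here)
# and compute the factor loadings.
n_factors = int((eigenvalues > 1).sum())
loadings = eigenvectors[:, :n_factors] * np.sqrt(eigenvalues[:n_factors])
print(n_factors)
print(np.round(loadings, 2))
```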
Tuesday, March 31, 2009
Screening of the Data
Careful analysis of data applicability after collection and before analysis is probably the most time-consuming part of data analysis (Tabachnick & Fidell, 2001). This step is, however, of utmost importance, as it provides the foundation for any subsequent analysis and for decision-making that rests on the accuracy of the data. Incorrect treatment of the data during purification, including EFA, and before conducting confirmatory SEM analysis may result in poor-fitting models or, worse, models that are inadmissible.
Data screening is important when employing covariance-based techniques such as structural equation modelling, where assumptions are stricter than for the standard t-test. Many of the parametric statistical tests (based on probability distribution theory) involved in this study assume the following: (a) normally distributed data – the data come from a normally distributed population; (b) homogeneity of variance – the variances in correlational designs should be the same for each level of each variable; (c) interval data – data where the distance between any two adjacent points is the same, which is assumed in this study for Likert data; and (d) independence – the data from each respondent have no effect on any other respondent's scores.
Many of the common estimation methods in SEM (such as maximum-likelihood estimation) assume: (a) “all univariate distributions are normal, (b) joint distribution of any pair of the variables is bivariate normal, and (c) all bivariate scatterplots are linear and homoscedastic” (Kline, 2005, p. 49). Unfortunately, SPSS does not offer an assessment of multivariate normality but Field (2005) and others (Kline, 2005; Tabachnick & Fidell, 2001) recommend first assessing univariate normality. The data were checked for plausible ranges and examination was satisfactory. There were no data out of range.
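A minimal sketch of such univariate screening in Python, on hypothetical Likert-type data (the scale range and the use of the Shapiro-Wilk test are assumptions, not prescriptions from the sources cited above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical Likert-type responses on a 1-5 scale.
responses = rng.integers(1, 6, size=200).astype(float)

# Plausible-range check: count any values outside the 1-5 scale.
print(int(((responses < 1) | (responses > 5)).sum()))

# Univariate normality: skewness and excess kurtosis near zero
# are consistent with an approximately normal distribution.
print(stats.skew(responses), stats.kurtosis(responses))

# Shapiro-Wilk test: a small p-value indicates departure from normality.
w_stat, p_value = stats.shapiro(responses)
print(w_stat, p_value)
```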