To request a blog written on a specific topic, please email James@StatisticsSolutions.com with your suggestion. Thank you!

Friday, November 30, 2012

Scientific Merit Review (SMR): Constructs, Variables and Operational Definitions



We were recently working with a dissertation client, helping them understand SMR sections 3.3 Constructs, 3.4 Variables, and 3.5 Operational Definitions.  As we read it, it seemed that the SMR writers got this one correct.  Essentially, the pattern of 3.3-3.5 is to describe the theoretical constructs (3.3), discuss each of the variables (3.4), and tie together how the constructs are measured by the variables (3.5).
The Constructs section (3.3) is the place to talk about the theoretical constructs.  For example, self-efficacy is a construct that can be looked at from a social learning theory perspective, an attribution theory perspective, a motivation theory perspective, etc.  This section should describe just that: constructs from a theoretical perspective.  Every construct in your research questions should be described here.
The Variables section (3.4) of the SMR is where you discuss the variables and the levels of the variables.  For example, you could be assessing participants’ locus of control, stability, and controllability, and each of these measures could range on a continuous scale from 1-10, or be scored on an ordinal scale of low-medium-high.  Every variable used in your study needs to be discussed here.
The Operational Definitions section (3.5) puts 3.3 and 3.4 together: once you have talked about the constructs and explained the scales, the operational definition is simply how the constructs in 3.3 are measured by the scales in section 3.4. 
The bottom line is that your language matters: if you don’t use the correct language, the section will get kicked back to you, costing you even more time and tuition dollars, and it’s frustrating.  At Statistics Solutions, we can help with these latter two sections; alternatively, see if you can get hold of your advisor’s previous students to see how they wrote theirs. 
Remember one thing: you only have to do this once!  You will get through it and you will succeed!

Monday, November 12, 2012

Bonferroni Correction



When the same dependent variable is used multiple times in an analysis, the likelihood of committing a Type I error increases.  A Type I error occurs when a researcher incorrectly rejects a true null hypothesis.  To correct for this, a Bonferroni-type adjustment is typically made.  This is done by dividing the alpha level (typically set at .05) by the number of tests (n).  In this example we will assume three analyses use the same dependent variable.  The standard alpha level of .05 would be divided by three (the number of analyses for each DV), and the new alpha level would be set at .017. This level would then be used to determine statistical significance for the corresponding analyses (Tabachnick & Fidell, 2012). 
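The arithmetic above is simple enough to sketch in a few lines of Python (the helper name and the p-values are illustrative, not part of any particular study):

```python
def bonferroni_alpha(alpha, num_tests):
    """Return the per-test significance level under a Bonferroni correction."""
    return alpha / num_tests

# Three analyses share the same dependent variable, as in the example above.
adjusted = bonferroni_alpha(0.05, 3)
print(round(adjusted, 3))  # 0.017

# A result is significant only if its p-value falls below the adjusted level.
p_values = [0.012, 0.030, 0.004]   # hypothetical p-values from the three tests
significant = [p < adjusted for p in p_values]
print(significant)  # [True, False, True]
```

Note that the middle p-value (.030) would have passed the unadjusted .05 threshold but fails the corrected .017 level; that is exactly the protection against Type I error the adjustment is meant to provide.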

Tuesday, November 6, 2012

Capella University and the Scientific Merit Review



For the past 20 years, Statistics Solutions’ mission has been to help graduate students graduate.  Whether you go to Berkeley or Capella, students need help.  Students (“learner” always reminded me of Milgram’s 1960s Obedience to Authority study) at Capella, however, have a couple of things working against them.  First, they’re not on campus to get the help they need, and second, they’re paying tuition as the process continues.  One of the places students get stuck is writing aspects of Capella’s Scientific Merit Review (SMR).

My staff and I have worked with over 2,000 graduate students, and despite the resources at the universities, some still need help from an objective, non-evaluative professional.  We are such professionals!   When we work with students, they typically get stuck in the same few places:  research questions, the proposed data analysis, and the target population and participant selection. 
Research questions are easy to handle: make sure the constructs (your measures) are obtainable and measure what you want to measure, AND that you arrange these constructs in statistical language.  For example, if you have constructs A and B and want to relate them (read “correlate” them), then say that.  If you are assessing whether A predicts B (read “regression”), then say that A predicts, impacts, or accounts for variability in B.  

Capella’s Scientific Merit Review also asks for a data analysis plan.  These plans are based on two things: the statistical language you used in the research questions and the level of measurement of your variables.  We have resources on our website, or if you need more one-to-one help, you can click here.  By the way, Capella will send you back for a round of revisions (tuition not included) if you don’t have this correct.  When I went to school 100 years ago, the IRB, to which we would have sent our SMR, made sure that we didn’t hurt our participants; now they look at everything.  And let’s face it, the revisions cost you both time and money. 

Sample size is typically trickier still (even with the help of G*Power).  There are two tricks: selecting the right analysis (see the data plan above) and selecting the effect size.  An effect size can be derived by looking at past research that uses your constructs and analyses, then calculating or noting the effect sizes reported.  There’s a realistic aspect too:  for dissertations, and I’ve seen thousands of them, large and medium effect sizes, requiring relatively small samples (under 100 participants), are the norm.  Requesting a small effect size (small effects take a lot of people to detect) typically requires 300-500 participants, and this is just not reasonable for a dissertation student to obtain.  Here are a couple of resources (sample size tool; power analysis) to get you started.  I should note that the exception is when you are conducting EFA, CFA, path analysis, or structural equation modeling; these techniques typically require 150 or more participants. 
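To make the effect-size trade-off concrete, here is a rough a-priori sample-size sketch for a two-group comparison using the normal-approximation formula. This is not what G*Power computes internally (it uses exact noncentral distributions), and the alpha, power, and effect-size values are illustrative defaults, but the pattern it shows is the point: halving the effect size roughly quadruples the required sample.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison,
    using the normal approximation with Cohen's d as the effect size."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect (d = .5): a modest per-group sample suffices.
print(n_per_group(0.5))   # 63 per group

# Small effect (d = .2): the required sample balloons.
print(n_per_group(0.2))   # 393 per group
```

Exact numbers will differ by analysis and by tool, which is why the choice of effect size (and the literature backing it) matters so much in the SMR.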

I’m going to leave you with a Dissertation Template to look at.  It’s free, and you may find some of the definitions of terms helpful.  

Good luck with your Scientific Merit Review, and contact us if you run into trouble: http://www.statisticssolutions.com/contact or (877) 437-8622 (M-F, 9-5 EST).


PS:  A Stanford Ph.D. student just called; their private stats consultant just took another job.  See, everybody needs help sometimes, even at schools with lots of resources!