RES601 MODULE 2 CASE, SLP AND DISCUSSION

Module 2 – Home

MODELS, MEDIATION, AND MODERATION

Modular Learning Outcomes

Upon successful completion of this module, the student will be able to satisfy the following outcomes:

  • Case
    • Distinguish between a conceptual definition of a variable and an operational definition.
    • Explain the process of operationalizing variables by constructing appropriate definitions and measures.
    • Explain the uses and limitations of multi-item scales for measuring complex constructs.
    • Distinguish among several definitions of validity, with particular emphasis on construct validity, and describe the relationship between validity and reliability.
  • SLP
    • Use various statistical techniques for estimating reliability and validity in different contexts.
    • Construct and use multi-item scales in data analysis, and interpret the results.
    • Clean a data set by defining missing values and correcting erroneous data entries.
  • Discussion
    • Weigh the costs and benefits of alternative methods for sampling in the context of Internet-based research.

Module Overview

By now you have acquired familiarity and basic facility with a number of statistical procedures, including basic hypothesis testing and regression modeling. These tools allow us to interpret the covariation of real-world phenomena in terms of relationships between those phenomena.

But in the behavioral science domains, it’s often not so easy to determine what it is we are in fact analyzing, since phenomena are often defined only by their measurements. Intelligence is what is measured by intelligence tests; attitudes are what are measured by attitude scales. Even behavior is hard to categorize; is that man over there a terrorist or a freedom fighter? Or just a common criminal? The same number of people may be dead in any event, but if we are to interpret the behavior, which is what behavioral science is all about, we need to be able to apply labels. And it is at this point of labeling that we have to face the question of just how we define the meaning of our data and how much confidence we have that it will stand up to the kinds of assumptions required by the statistical tests we propose to apply.

The process of translating constructs into measurable variables is called “operational definition”, or “operationalization”. Sometimes the correspondence between the theoretical constructs and the operational variables is quite close. For example, if you are looking at how humans gain weight as they get older, you probably will be using a construct called Age, and another called Weight. These can be operationalized (that is, turned into variables called “Age” and “Weight”—or “Ralph” and “Jane”, for that matter; the names don’t matter as long as you can remember what they mean) by recording the number of years that have passed since each person was born, and measuring his/her poundage on a scale. This gives you two numbers—and when you have turned your phenomena into numbers, all the bounties of statistics become available to you.

Sometimes the correspondence is more tenuous. For example, you might have a theory that the performance of a top management team is a function of the members’ ability to communicate with each other. Performance can be measured by profitability, among other things. But you may be unable or unwilling to try to directly measure this capacity for communication, or maybe they won’t let you ask the kind of sensitive questions that you know would be needed for a real measurement. So instead, you measure the number of years that they have worked together, on the assumption that over time they learn to read and understand each other, so they communicate better. So to test the theory, you correlate profitability with number of years the team has worked together. But of course in your published article, what you’ll stress is the connection you’ve found between performance and communication.

The Operational Definition is the detailed description of how a concept or variable will be measured and how values will be assigned. Suppose that we’re studying criminal behavior. One operational definition of prior criminal behavior may use reported arrests for felony offenses based on an FBI fingerprint search, while another operational definition may involve self-reported criminal history obtained by response to a short list of questions on a standardized questionnaire. Both can legitimately be argued to be appropriate, yet they will probably yield very different empirical results.

The situation becomes even more complicated when we introduce the use of multi-item scales or indices as composite measures of complex constructs. The more complicated the construct around which the proposed measure is organized, the less likely it is that any single item can adequately represent it. And since most of the models we like feature fairly complex constructs, the use of scaling and other forms of composite variable analysis has become almost universal among researchers studying phenomena involving attitudes and behaviors. Scales are most typically constructed by taking a set of similar items with similar response categories, with each item presumed to represent some part of the total construct, and then either averaging or summing the individual values to produce a single composite value. When back in the 1930s Rensis Likert first introduced the composite scale that has come to be associated with his name, he could hardly have foreseen all of the variations on a theme that this has introduced, some legitimate and some suspect.
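
To make that last step concrete, here is a minimal SPSS syntax sketch. The item names (sat1 to sat4) and the composite name are hypothetical placeholders, not drawn from any course dataset; the composite is simply the mean of the items (SUM could be substituted for MEAN if a summed score is preferred).

  * Hypothetical items sat1 to sat4 with identical response categories.
  COMPUTE Satisfaction = MEAN(sat1, sat2, sat3, sat4).
  VARIABLE LABELS Satisfaction 'Composite scale: mean of sat1 to sat4'.
  EXECUTE.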

Like anything else in statistics, scaling has an underlying mathematical rationale based on a set of assumptions about the nature of the data—assumptions that are easy to overlook and are often dealt with rather lightly. The ubiquity of scaling in business and management research has both desensitized researchers to its finer points and at the same time rendered scales more suspect than the researchers would like. There’s certainly nothing wrong with scaling—it’s critical to constructing operational definitions that really mean something in many of the studies in business, including a high proportion of dissertation research projects. This makes it all the more important to do it right. We need to understand very thoroughly what we are doing when we create and interpret scales, whether we adopt and use scales created and tested by others or engage in the even more daunting exercise of trying to create our own scales.

In this module, we’ll begin our discussion and treatment of scales and indices, a topic to which we’ll return before the end of your research training. It’s a source of considerable frustration for many students at the dissertation stage, and we think it deserves fairly careful discussion before you get there. The actual construction of scales is pretty simple, as we noted—the key problems are deciding what items to include, and assessing the critical properties of reliability and validity. Together, these two properties, each capable of being defined and understood in a number of different ways, establish for most readers the legitimacy of the use of a scale. Scales used in your dissertation must be reliable and valid.
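
For reference, the reliability most often reported for such scales is internal consistency, summarized by Cronbach’s alpha. For a scale of k items, the standard formula (the general definition, not something specific to this course) is

  \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{i}}{\sigma^{2}_{\text{total}}}\right)

where \sigma^{2}_{i} is the variance of item i and \sigma^{2}_{\text{total}} is the variance of the summed scale score. By convention, values of .7 or higher are usually treated as acceptable, which is the threshold applied in this module’s SLP.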

Thanks to the Inter-Nomological Network (INN), it is now easier to find reliable and valid scales for popular constructs at http://inn.theorizeit.org/. This site contains over 80,000 constructs manually extracted from top journals in ten disciplines. Each construct comes with a name, a definition (if provided by the original author), measures, and citations. Once you find a scale, go to the research paper in which it was used to verify the reliability and validity values reported for that scale.

Module 2 – Background

MODELS, MEDIATION, AND MODERATION

Required Reading

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182. Retrieved December 15, 2008 from http://www.public.asu.edu/~davidpm/classes/psy536/Baron.pdf

Sullivan, G. M., & Feinn, R. (2012). Using effect size—or why the P value is not enough. Journal of Graduate Medical Education, 4(3), 279–282. Available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3444174/

Frazier, P. A., Tix, A. P., & Barron, K. E. (2004). Testing moderator and mediator effects in counseling psychology research. Journal of Counseling Psychology, 51(1), 115–134.

Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40, 879-891. Retrieved from http://quantpsy.org/pubs/preacher_hayes_2008b.pdf

Required Videos

Cromer, K. (2020, August 16). RES601 mod 2 SLP factor analysis, reliability testing, creating variables and constructs [Video]. YouTube. https://youtu.be/B0RpawZfjYo

Grande, T. (2014, August 19). Factor analysis using SPSS [Video]. YouTube. https://www.youtube.com/watch?v=pRA3Wapx7fY

how2stats. (2011, September 15). Cronbach’s Alpha – SPSS (part 1) [Video]. YouTube. https://www.youtube.com/watch?v=2gHvHm2SE5s

how2stats. (2011, September 15). Cronbach’s Alpha – SPSS (part 2) [Video]. YouTube. https://www.youtube.com/watch?v=9rS49o1rdnk

Optional Reading

David A. Kenny website. View the mediation and moderation links at http://davidakenny.net/

Eveland, J. D. (2004). Diagramming theoretical models. PowerPoint presentation.

Giere, R. N. (2006). Using models to represent reality. University of Minnesota, Workshop presentation. Retrieved from http://www.tc.umn.edu/~giere/UMRR.pdf

Module 2 – Case

MODELS, MEDIATION, AND MODERATION

Assignment Overview

Mediation and moderation are two concepts that many students find confusing. Please take a few minutes to read these two articles before beginning the assignment.

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182. Retrieved December 15, 2008 from http://www.public.asu.edu/~davidpm/classes/psy536/Baron.pdf

Frazier, P. A., Tix, A. P., & Barron, K. E. (2004). Testing moderator and mediator effects in counseling psychology research. Journal of Counseling Psychology, 51(1), 115–134.

Case Assignment

Prepare an essay in which you define, provide examples of, and explain the testing of hypotheses positing mediation and moderation. Remember to cite your sources.

Assignment Expectations

Your assignment will be graded using the following criteria:

Assignment-driven Criteria: Student demonstrates mastery covering all key elements of the assignment.

Critical Thinking: Student demonstrates mastery conceptualizing the problem. Viewpoints and assumptions of experts are analyzed, synthesized, and evaluated. Conclusions are logically presented with appropriate rationale.

Scholarly Writing: Student demonstrates mastery and proficiency in scholarly writing, following required structure and organization of the assignment; using relevant and quality sources to support ideas; using in-text citations of sources; and properly formatting references (bibliography).

Professionalism and Timeliness: Student demonstrates excellence in taking responsibility for learning and adhering to course requirement policies and expectations. Assignment submitted on time or with the professor’s pre-approved extension.

Module 2 – SLP

MODELS, MEDIATION, AND MODERATION

Now you have a dataset ready for further analysis. Since each of you probably made different decisions in the data cleaning, I would like you to start with a fresh dataset so that we are all on the same page. Load RES601 Module 2 SPSS Dataset.sav in SPSS. This explanatory video will walk you through the assignment:

RES601 Mod 2 SLP Factor Analysis, Reliability Testing, Creating Variables and Constructs: https://youtu.be/B0RpawZfjYo

  1. Conduct a factor analysis for each construct. Select Direct Oblimin rotation. Remove items that do not load cleanly on the correct factor at a loading above .3 (a syntax sketch follows this list). For an example, please see: Grande, T. (2014). Factor analysis using SPSS. https://www.youtube.com/watch?v=pRA3Wapx7fY
  2. Test each variable for reliability. Exclude additional items if required to achieve a Cronbach’s alpha of .7 or higher. For examples, please see: how2stats (2011). Cronbach’s Alpha – SPSS (part 1). https://www.youtube.com/watch?v=2gHvHm2SE5s and how2stats (2011). Cronbach’s Alpha – SPSS (part 2). https://www.youtube.com/watch?v=9rS49o1rdnk
  3. Report your findings in an essay. Include properly formatted tables reflecting the loadings and the reliability summary for each variable.
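
The SPSS menus will paste syntax similar to the sketch below, which is a minimal illustration of steps 1 and 2 rather than the official solution for this assignment. The item names (comm1 to comm5) and the scale label ('Communication') are hypothetical placeholders; substitute the items that belong to each construct in RES601 Module 2 SPSS Dataset.sav. The FORMAT BLANK(.3) subcommand simply suppresses loadings below .3 in the output, which makes weak and cross-loading items easier to spot.

  * Step 1: principal components extraction with Direct Oblimin rotation (hypothetical item names).
  FACTOR
    /VARIABLES comm1 comm2 comm3 comm4 comm5
    /MISSING LISTWISE
    /PRINT INITIAL EXTRACTION ROTATION
    /FORMAT SORT BLANK(.3)
    /CRITERIA MINEIGEN(1) ITERATE(25)
    /EXTRACTION PC
    /CRITERIA ITERATE(25) DELTA(0)
    /ROTATION OBLIMIN.

  * Step 2: internal-consistency reliability for the items retained after step 1.
  RELIABILITY
    /VARIABLES=comm1 comm2 comm3 comm4 comm5
    /SCALE('Communication') ALL
    /MODEL=ALPHA
    /SUMMARY=TOTAL.

The /SUMMARY=TOTAL subcommand adds the “Cronbach’s Alpha if Item Deleted” column to the output, a convenient guide when deciding whether dropping one more item would lift a scale above the .7 threshold.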

SLP Assignment Expectations

Your assignment will be graded using the following criteria:

Assignment-driven Criteria: Expressing quantitative analysis of data to support the discussion, showing what evidence is used and how it is contextualized.

Interpretation: Explaining information presented in mathematical terms (e.g., equations, graphs, diagrams, tables, words).

Presentation: Ability to convert relevant information into various mathematical terms (e.g., equations, graphs, diagrams, tables, words).

Conclusions: Drawing appropriate conclusions based on the analysis of data.

Timeliness and Professionalism: Student demonstrates excellence in taking responsibility for learning and adhering to course requirement policies and expectations. Assignment submitted on time, or an extension was approved by the professor before the due date.

Module 2 – Outcomes

MODELS, MEDIATION, AND MODERATION

  • Module
    • Explain the process of operationalizing variables by constructing appropriate definitions and measures.
    • Explain the uses and limitations of multi-item scales for measuring complex constructs.
    • Distinguish among several definitions of validity, with particular emphasis on construct validity, and describe the relationship between validity and reliability.
    • Use various statistical techniques for estimating reliability and validity in different contexts.
    • Distinguish between a conceptual definition of a variable and an operational definition.
  • Case
    • Distinguish between a conceptual definition of a variable and an operational definition.
    • Explain the process of operationalizing variables by constructing appropriate definitions and measures.
    • Explain the uses and limitations of multi-item scales for measuring complex constructs.
    • Distinguish among several definitions of validity, with particular emphasis on construct validity, and describe the relationship between validity and reliability.
  • SLP
    • Use various statistical techniques for estimating reliability and validity in different contexts.
    • Construct and use multi-item scales in data analysis, and interpret the results.
    • Clean a data set by defining missing values and correcting erroneous data entries.
  • Discussion
    • Weigh the costs and benefits of alternative methods for sampling in the context of Internet-based research.

Discussion 2

 


How can moderation and mediation be used to address specific research questions within your area of interest? Give an example of potential associations that could be examined with moderation and mediation analyses.

 
