CARMA Instructors Network

We are pleased to announce the creation of our CARMA Instructors Network as a new benefit available to our Institutional Members. This Network is established to support collaboration and the sharing of instructional resources among those who teach doctoral-level research methods courses in management/business at our Institutional Member schools. Special emphasis will be given to helping instructors of research methods courses learn how they can incorporate CARMA programs and events, which are freely available to all students (and faculty) at member schools, into their course design and classroom experience.

These free resources for instructors teaching research methods courses include:

  • Invitations for instructors to meet the CARMA Team during three live-online sessions in the spring semester. These sessions will cover teaching approaches and resources related to topics common to an introductory research methods course: measurement, design, analysis, and qualitative approaches. The sessions will be held on February 17, March 31, and April 28 (all sessions from 10:00 to 11:00 am ET).
  • Access to instructor-only receptions after each of four live Webcast Lectures in Spring 2023.

These free resources supplement the CARMA resources available to all faculty and students at member schools, including 250+ recordings in the CARMA Video Library and CARMA Live On-Line Short Courses. More information on CARMA and our events/programs can be found on our website: https://carmattu.com

Registration for the CARMA Instructors Network:

Faculty from CARMA Institutional Member schools are eligible to participate in our CARMA Instructors Network. A list of member schools can be found at https://carmattu.com/22-23-inst-members/

To register for the Instructors Network:

  • Log in as a CARMA Website User. (If you are not yet a Website User, please sign up as one.)
  • Once logged in, in the CARMA User Area, under the “Register/Purchase” tab, select the “Register for Special Collections” link.
  • Select the collection “CARMA Instructors Network 22-23”.
  • Enter the discount code “INSNET2223” and then check out.

CARMA Instructors Collection – Presenters & Topics

The Instructors Collection recordings have been selected to cover the breadth of topics typically addressed in an introductory doctoral research methods course. The topics are general in nature, and the material is presented at a level appropriate for those without advanced statistical or research methods training. These recordings can be easily accessed by students at Institutional Member schools; students and instructors pay no additional fee for CARMA’s on-demand access.

Presenter – Topic
Dr. Andreas Schwab – Better Methods for Statistical Significance Tests
Dr. Sang Eun Woo – Big Data Concepts
Dr. Adam Meade – Careless Responding in Survey Research
Dr. Nathan Podsakoff – Common Method Biases
Dr. Lisa Harlow – Confirmatory Factor Analysis Application
Dr. Rhonda Reger – Content Analysis for Macro Research
Dr. Gilad Chen – Design Considerations
Dr. Justin DeSimone – Dirty Data in Survey Research
Dr. Allison Gabriel – Event Sampling Methods
Dr. Daniel Beal – Experience Sampling Methods
Dr. Scott Turner – Mixed Methods in Strategy and Organizations Research
Dr. Dan Putka – Modern Prediction Methods
Dr. Mike Howard – Network Analysis
Dr. Mike Withers – Omitted Variable Bias
Dr. Samantha Anderson – Power Analysis with Regression Models
Dr. Jason Colquitt – Quantifying Content Validity
Dr. George Banks – Questionable Research Practices
Dr. Jose Cortina – Restricted Variance Interactions
Dr. Elaine Hollensbe – Rigor/Trustworthiness in Qualitative Research
Dr. Tim Pollock – Roles of the Methods and Results Sections
Dr. Eric Heggestad – Scale Adaptation
Dr. Fred Oswald – Statistical Analysis with Big Data
Dr. Fred Oswald – Multiple Linear Regression
Dr. Lisa Lambert – Storytelling Through Statistics
Dr. Donald Bergh – Verifying Empirical Research Findings

CARMA Instructors Collection – Abstracts

Dr. Samantha Anderson – Power Analysis with Regression Models: Sample Size Planning for Power and Accuracy – Fall 2021

This presentation will center on planning the appropriate sample size for studies using linear regression analyses. Just as researcher goals and intentions can vary across studies, different approaches to sample size planning have been developed to help design studies that will be successful at reaching these goals. I will begin with some background on sample size and statistical power in the behavioral sciences and how power has an impact on the field. Then, given that deciding upon an appropriate value for the effect size can be especially challenging when sample size planning for regression, I will describe two approaches to sample size planning for desired statistical power, focusing in particular on an approach that circumvents some of the complexity in parameterizing the effect size. After demonstrating sample size planning in practice with freely available software, I will describe and briefly demonstrate sample size planning for accurate estimation.
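
For illustration, a minimal sketch of a power-based sample size calculation for multiple regression is below, using the freely available pwr package in R; the assumed R² and number of predictors are placeholders, and this is not necessarily the software demonstrated in the talk.

```r
# Minimal sketch of sample size planning for power in multiple regression.
# The effect size is Cohen's f2 = R^2 / (1 - R^2); the R^2 value here is
# a hypothetical placeholder.
# install.packages("pwr")
library(pwr)

r2 <- 0.10                  # assumed R^2 for the predictors of interest
f2 <- r2 / (1 - r2)         # Cohen's f2
k  <- 4                     # number of predictors

# Solve for the denominator df (v) giving 80% power at alpha = .05
res <- pwr.f2.test(u = k, f2 = f2, sig.level = 0.05, power = 0.80)
n <- ceiling(res$v) + k + 1 # required N = v + u + 1
n
```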

Dr. George Banks – Questionable Research Practices – Fall 2017

Questionable research or reporting practices (QRPs) contribute to a growing concern regarding the credibility of research in the social and natural sciences. Such practices include design, analytic, or reporting practices that may introduce biased evidence, which can have harmful implications for evidence-based practice, theory development, and perceptions of the rigor of science. To assess the extent to which QRPs are actually a concern, we conducted a systematic review to consider the evidence on QRPs. Using a triangulation approach (e.g., by reviewing data from observations, sensitivity analyses, and surveys), we identified the good, the bad, and the ugly. Drawing upon the findings, a series of studies have been designed and executed to evaluate the effectiveness of various types of interventions (e.g., open data policies, results-blind reviews, study pre-registration, reviewer training). The goal of this program of research is to encourage a systematic, collegial, and constructive dialogue regarding QRPs in social and natural science research.

Dr. Daniel Beal – Experience Sampling Methods – Spring 2018

Experience sampling methods (ESMs) attempt to capture a representative sample of an individual’s experiences in their natural environment. Since their inception over 40 years ago, many variations of ESMs have been developed, given new names, and fitted to a wide variety of purposes. The goal of this lecture is to describe some of the most prominent forms of these methods and the contexts in which they are most useful. Rather than providing a silver bullet method for each occasion, one might instead think of these methods as a suite of offerings to be used in tandem to help uncover important insights about temporal patterns and knowledge about how constructs emerge at various time points. As I discuss how these methods link together, I will also point out their advantages, challenges, and shortcomings. Finally, I will discuss ways in which all of these methods can inform higher levels of analysis.

Dr. Donald Bergh – Verifying Empirical Research Findings – Spring 2018

Reproducibility means obtaining the same results when re-analyzing the same data. It can be used to confirm the findings reported in a focal study and serves as a preliminary step in the replication process. This presentation will discuss why reproducibility has become an important part of the research process, describe how to test it, identify when results are robust, report findings from applying those tests to published work, relate their relevance to replication, and offer recommendations for the publication process.

Dr. Gilad Chen – Design Considerations for Simple and Complex Multilevel Studies – Spring 2021

With the proliferation of simple and complex multilevel studies in organizational research, study design considerations have taken a backseat to the application of complex statistical methods. And yet, study design features — such as sample and sampling considerations — play a major role in both the effects detected and the inferences drawn from multilevel studies. In this webcast, focusing on micro and meso organizational studies, I will discuss various aspects of design consideration in different multilevel studies. I will particularly focus on 2-level and 3-level studies of individuals nested in groups, longitudinal studies (experience sampling methods and growth modeling), and more complex studies of individuals nested under different units simultaneously (e.g., multiple team membership studies).
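
As a rough illustration of the 2-level case described above, the sketch below fits a random-intercept model of individuals nested in groups with the lme4 package in R; the simulated data and variable names are hypothetical placeholders.

```r
# Minimal 2-level random-intercept model: individuals nested in groups.
# install.packages("lme4")
library(lme4)

set.seed(1)
dat <- data.frame(group = rep(1:30, each = 10), x = rnorm(300))
dat$outcome <- 0.3 * dat$x + rnorm(30, sd = 0.7)[dat$group] + rnorm(300)

fit <- lmer(outcome ~ x + (1 | group), data = dat)
summary(fit)

# Intraclass correlation from the variance components: the share of
# outcome variance that sits between groups (a key design consideration)
vc  <- as.data.frame(VarCorr(fit))
icc <- vc$vcov[1] / sum(vc$vcov)
icc
```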

Dr. Jason Colquitt – Quantifying Content Validity – Spring 2021

Articles that introduce new measures tend to focus their attention on factor structure, convergent validity, and discriminant validity, with less attention paid to content validity. Moreover, when content validity is examined, there is often ambiguity surrounding the appropriate tests and the standards for acceptability. In this talk, I’ll guide the audience through Colquitt, Sabey, Rodell, and Hill’s (2019) extension of Hinkin and Tracey’s (1999) quantitative approach to content validation. Using a running example, the talk will focus on creating construct definitions, generating items, choosing appropriate orbiting measures, gathering validation data, calculating definitional correspondence, calculating definitional distinctiveness, and applying evaluation criteria. The talk will include all the specific details needed to apply the method, along with a discussion of the judgment calls encountered when doing so.

Dr. Jose Cortina – Restricted Variance Interactions – Fall 2018

Restricted variance (RV) interactions are interactions that exist because the values of one variable are compressed at certain levels of the moderator, thus changing the relationship between the compressed variable and other variables. These interactions can be found in every topic area and at every level of analysis, within individuals and between firms. The great advantage of RV interactions is that they are relatively simple to justify. The purpose of this presentation is to explain and illustrate the RV interaction.
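
The compression logic can be illustrated with a short simulation (not drawn from the talk): when the variance of x is restricted at one level of a moderator, the observed x–y correlation is attenuated at that level.

```r
# Simulated restricted-variance pattern: x has compressed variance when
# the binary moderator m equals 1, attenuating the observed correlation.
set.seed(42)
n <- 5000
m <- rbinom(n, 1, 0.5)                    # binary moderator
x <- ifelse(m == 1, rnorm(n, sd = 0.3),   # compressed variance when m = 1
                    rnorm(n, sd = 1.0))
y <- 0.5 * x + rnorm(n)                   # same true slope at both levels

cor(x[m == 0], y[m == 0])  # stronger observed correlation
cor(x[m == 1], y[m == 1])  # attenuated by the restricted variance of x
```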

Dr. Justin DeSimone – Recommendations for Discouraging, Identifying, and Removing Dirty Data in Survey Research – Fall 2019

This presentation focuses on problematic, undesirable, and low-quality data (or “dirty data”) in survey research. After defining and discussing the potential influence of dirty data, the presentation suggests strategies for minimizing the impact of low-quality data in an effort to enhance the trustworthiness of survey data. Specifically, this presentation covers (a) ways to discourage participants from providing low-quality or low-effort data, (b) methods of identifying dirty data in a dataset, and (c) the debate about when to retain and when to remove dirty data. This presentation also discusses the potential influence of survey administration format (e.g., pencil-and-paper, computer, online) on dirty data.
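
As one small, hypothetical illustration of the identification step, the base-R sketch below flags respondents with implausibly fast completion times or failed attention checks; the column names and cutoffs are placeholders, not recommendations from the talk.

```r
# Hypothetical screening sketch: flag respondents for inspection based on
# completion time and an instructed attention-check item.
survey <- data.frame(
  id         = 1:6,
  seconds    = c(420, 95, 610, 80, 300, 515),  # total completion time
  attn_check = c(1, 1, 1, 0, 1, 1)             # 1 = passed instructed item
)

too_fast <- survey$seconds < 120   # below a plausible minimum duration
failed   <- survey$attn_check == 0
survey[too_fast | failed, ]        # rows to inspect before any removal
```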

Dr. Allison Gabriel – Event Sampling Methods – Spring 2019

Scholars are increasingly using experience sampling methods (ESM) within the organizational sciences in order to better examine questions tied to intra-individual, dynamic phenomenon. Yet, when designing and conceptualizing ESM studies, scholars often face several critical issues that must be delineated regarding: (1) how the method can help inform and/or develop within-person theory; (2) how intra-individual constructs should be conceptualized and captured; and (3) how, analytically and methodologically, the data should be modeled to account for interdependent assessments and other confounding factors. The current presentation provides an overview of the types of ESM studies that can be conducted, and presents new ideas that researchers should begin considering when designing—and, ideally, publishing—ESM research.

Dr. Lisa Harlow – Confirmatory Factor Analysis Application with R-lavaan – Spring 2021

Despite some initial startup time to learn R, it is gaining popularity because it is an open-source program with no direct costs, has wide access around the world, runs on Windows, MacOS, and other platforms, can import data from different kinds of files (e.g., Excel, SPSS, SAS, etc.), offers more than 2,000 statistical packages, and provides easy access to information, tutorials, and other input through online resources. The current talk presents a confirmatory factor analysis (CFA) application using lavaan, an open-source R package. In the talk, I’ll feature a number of steps in the application, including: (1) a theoretical model drawn from Risman (2004), who emphasizes that individual, interactional, and institutional factors are needed to understand the nature of structures or constructs such as Career Satisfaction; (2) preliminary data analyses to check the data and the internal consistency of the constructs; (3) analyzing three CFA models of four Work-Environment constructs (i.e., Career Influence, Work Climate, Work Respect, and Career Satisfaction); and (4) interpreting the results and briefly suggesting possible next steps. Work-environment data from 265 faculty at a New England university will be made available, as will a copy of the R code and output, for any who would like to gain practice analyzing this CFA application with lavaan.
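
A minimal sketch of what such a four-factor CFA might look like in lavaan is below; the item names (ci1, wc1, ...) are hypothetical placeholders for the actual survey items, and `dat` stands in for the faculty dataset mentioned above.

```r
# Minimal four-factor CFA sketch in lavaan, mirroring the four
# Work-Environment constructs named in the talk; item names are
# hypothetical placeholders.
# install.packages("lavaan")
library(lavaan)

model <- '
  CareerInfluence    =~ ci1 + ci2 + ci3
  WorkClimate        =~ wc1 + wc2 + wc3
  WorkRespect        =~ wr1 + wr2 + wr3
  CareerSatisfaction =~ cs1 + cs2 + cs3
'

fit <- cfa(model, data = dat)   # dat: placeholder for the survey data
summary(fit, fit.measures = TRUE, standardized = TRUE)
```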

Dr. Eric Heggestad – Scale Adaptation in the Organizational Sciences – Spring 2020

Take a quick look at some of your favorite empirical research and you will almost certainly find a case where the authors have “adapted” one or more of the measures used in their research. Scale adaptation is a catch-all term indicating that the authors changed something about the scale: the number of items, the situational context of the items, the organizational level of the construct, the response scale, etc. It is such a common practice that it doesn’t even seem to rise to the level of awareness when we read (or review) an article. But we should be aware and, in fact, concerned; scale adaptation can have important consequences for the validity of the scales we use. In this presentation I will talk about our research to document how common scale adaptation is and to identify the key ways authors are adapting scales. I will also talk through the results of our survey of journal reviewers and psychometrics experts, documenting their levels of concern regarding various forms of scale adaptation (e.g., shortening a scale, changing the context, changing the time frame, etc.). I will provide a demonstration of an application we have constructed to help authors shorten scales (a very common form of adaptation) for their research.

Dr. Elaine Hollensbe – Rigor/Trustworthiness in Qualitative Research – Spring 2022

One of the challenges associated with qualitative research is making sure that it is done in a rigorous way that ensures readers and reviewers “trust” the study’s findings and theoretical contributions.  In this webcast, I will be discussing ways to build trustworthiness into qualitative work throughout the research process.  In addition to going through tactics and tools to establish trustworthiness, I will be providing examples to show how these tactics and tools have been used effectively by others.  The goal is to increase understanding of what qualitative rigor means, build confidence in designing and conducting trustworthy qualitative research, and share techniques for communicating rigor and trustworthiness in qualitative papers.  Conducting qualitative research that is rigorous and worthy of trust not only increases the likelihood that this work will be published but also that it will contribute in a valuable and enduring way to scholarship.  The webcast is targeted toward researchers doing or interested in doing qualitative research, as well as those involved in evaluating and reviewing qualitative work.

Dr. Mike Howard – Network Analysis – Spring 2021

Network analysis has become increasingly popular in management research. It allows scholars to explore the formation and evolution of social ties at many levels of analysis, from advice-giving networks among coworkers to the formation of alliances or affiliations between organizations. Among many other applications, network analysis enables us to study social status, the dynamics of competitive rivalry, or the diffusion of innovations and new strategies between firms. This CARMA webcast provides an introduction to the types of research questions that can be pursued through the analysis of network tie formation and evolution. It will cover the basic approach to data structure and design, along with examples and information on developing studies using exponential random graph models and stochastic actor-oriented models.
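
For a flavor of the exponential random graph approach mentioned above, here is a minimal sketch using the classic Florentine marriage network that ships with the ergm package; the model terms are illustrative only, not a recommendation from the talk.

```r
# Minimal ERGM sketch on the Florentine marriage network bundled with
# the ergm package.
# install.packages(c("ergm", "network"))
library(ergm)

data(florentine)                  # loads flomarriage and flobusiness
fit <- ergm(flomarriage ~ edges + triangle)
summary(fit)                      # edge density and triadic closure terms
```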

Dr. Lisa Lambert – Storytelling Through Statistics – Spring 2018

Writers are often encouraged to develop the story line of their manuscripts and to challenge assumed knowledge by shifting or creating consensus around knowledge in a domain, complete with a compelling “hook” to capture readers’ attention. A great deal of attention is typically devoted to the verbal story of a manuscript, but authors often neglect to craft the statistical story in their Methods & Results sections. I identify four principles for writing effective Methods & Results sections and illustrate each principle with both violations and positive examples of adherence. If you follow these four principles, your statistical story will improve, which will increase the likelihood that you will publish your story, enter your chosen academic conversation, and contribute your bit to the body of knowledge.

Dr. Adam Meade – Understanding and Detecting Careless Responding in Survey Research – Fall 2016

Careless responding on surveys introduces error into datasets and can affect estimates of reliability, factor structure, as well as results of hypothesis testing. As such, screening data for careless responses prior to analysis has become a necessary step. This talk will address (1) potential ramifications of failing to address careless responding, (2) causes of careless responding, (3) methods of identifying careless responding, and (4) recommendations for preventing careless responses.
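
Two of the most common identification methods can be sketched in a few lines of base R (an illustration, not the talk's materials): the longest run of identical consecutive responses ("longstring") and Mahalanobis distance.

```r
# Base-R sketch of two careless-responding screens on simulated
# placeholder data: longstring and multivariate outlyingness.
set.seed(7)
resp <- as.data.frame(matrix(sample(1:5, 200 * 20, replace = TRUE), 200, 20))
resp[1, ] <- 3                               # plant one straightliner

# Longest string of identical consecutive responses per person
longstring <- apply(resp, 1, function(r) max(rle(as.numeric(r))$lengths))

# Mahalanobis distance of each response pattern from the centroid
md <- mahalanobis(resp, colMeans(resp), cov(resp))

head(order(longstring, decreasing = TRUE))   # row 1 should surface
head(order(md, decreasing = TRUE))           # candidates for inspection
```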

Dr. Fred Oswald – Statistical Analysis with Big Data – Fall 2015

This one-hour webcast will provide demonstrations of several statistical methods associated with analyzing big data, methods that are relatively new in organizational research and practice. In addition to these examples, three further topics will be discussed: general concepts, communicating findings, and educational needs for graduate students.

Dr. Fred Oswald – Multiple Linear Regression: Strengthening Conceptual Knowledge and Practical Skills – Fall 2022

Multiple linear regression (MLR) is one of the most useful and widely used methodological tools in the social sciences. This webinar will further strengthen attendees’ conceptual knowledge and practical skills in MLR through a series of topics: (a) relationships between descriptive statistics, visualization, and MLR; (b) conducting MLR with interaction and quadratic terms; (c) useful MLR reporting standards; and (d) MLR as the basis for path analysis, factor analysis (EFA, CFA, SEM), and multilevel modeling.
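
As a small illustration of topic (b), the base-R sketch below fits an MLR model with an interaction and a quadratic term on simulated placeholder data.

```r
# MLR with an interaction and a quadratic term in base R.
set.seed(123)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 0.4 * x1 + 0.3 * x2 + 0.25 * x1 * x2 - 0.2 * x1^2 + rnorm(n)

fit <- lm(y ~ x1 * x2 + I(x1^2))  # x1 * x2 expands to x1 + x2 + x1:x2
summary(fit)
confint(fit)                      # report CIs alongside point estimates
```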

Dr. Nathan Podsakoff – A Tutorial on the Causes, Consequences, and Remedies for Common Method Biases – Spring 2017

Despite considerable growth in our knowledge about the potential detrimental effects that method biases can have on the reliability and validity of our measures and on the relationships between constructs, there is still a lack of understanding about the causes, consequences, and remedies for dealing with these forms of bias. Our experiences indicate that common errors include: (a) treating method biases as if they come from a single source and (b) the overreliance on post hoc statistical remedies, which have important limitations. In an effort to address these issues, the purpose of this tutorial is to help clarify the potential effects that method biases can have on research findings, briefly describe the sources of these biases, and recommend remedies that researchers can incorporate a priori into the study design process to minimize the effects of specific sources of method bias.

Dr. Tim Pollock – How to Use Storytelling in Academic Writing: The Roles of the Methods and Results Sections – Fall 2021

In this webinar, author and former Academy of Management Journal Associate Editor Tim Pollock introduces the concept of storytelling in academic writing. He applies the five-act storytelling structure from drama, captured in Freytag’s pyramid, to structuring your research story, with a particular focus on writing effective Methods and Results sections. He will briefly review the four major types of validity that the Methods and Results sections address and the tradeoffs facing all empirical research, and then discuss each section’s purpose, the challenges in writing these sections, and how to overcome them.

Dr. Dan Putka – Modern Prediction Methods – Spring 2019

In this presentation, I’ll provide attendees with a schema for understanding developments in modern predictive modeling, and the potential value modern methods offer over traditional methods such as OLS and logistic regression. I will present a strategy for selecting modeling methods appropriate for one’s prediction problem, and key considerations that go into making such decisions. I will close with an overview of key resources for learning more. A supplemental dataset, R code, and annotated R output will be provided for further study post-session.
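
For a flavor of the contrast with traditional OLS, the hedged sketch below fits a cross-validated lasso with the glmnet package on simulated placeholder data; the talk surveys a broader toolbox than this single example.

```r
# OLS vs. a penalized alternative (lasso) when predictors outnumber
# the true signals.
# install.packages("glmnet")
library(glmnet)

set.seed(99)
n <- 200; p <- 50                             # many predictors, modest n
X <- matrix(rnorm(n * p), n, p)
y <- 0.8 * X[, 1] + 0.5 * X[, 2] + rnorm(n)   # only two true signals

ols   <- lm(y ~ X)                   # estimates all 50 coefficients
lasso <- cv.glmnet(X, y, alpha = 1)  # cross-validated penalty choice
coef(lasso, s = "lambda.min")        # most coefficients shrunk to zero
```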

Dr. Rhonda Reger – Content Analysis for Macro Research – Spring 2021

The use of content analysis is burgeoning in macro management research. The term encompasses a variety of types of analysis: content analysis is to words (and other unstructured data) as statistics is to numbers (also called structured data). It includes a range of analytic approaches, from the purely qualitative analyses typically found in grounded theorizing and case-based research to highly quantitative analyses that convert words and other unstructured data into numerical tables for further quantitative analysis. Common examples of qualitative content analysis include grounded theorizing, narrative analysis, discourse analysis, rhetorical analysis, semiotic analysis, interpretative phenomenological analysis, and conversation analysis. Major types of quantitative content analysis include dictionary-based approaches, topic modeling, and natural language processing. This CARMA webcast provides a brief tutorial on these methods and suggests key criteria to help researchers choose which type of content analytic method may be useful for answering their research questions while leveraging the research team’s strengths.
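
As a minimal illustration of the dictionary-based approach, the base-R sketch below counts occurrences of a small hypothetical "risk" dictionary in a few toy documents.

```r
# Dictionary-based content analysis sketch: per-document counts of terms
# from a small, hypothetical dictionary.
docs <- c("The merger poses serious risk and uncertainty",
          "Growth was strong and the outlook is positive",
          "Litigation risk and regulatory uncertainty remain")

dictionary <- c("risk", "uncertainty", "litigation")

tokens <- lapply(tolower(docs), function(d) strsplit(d, "\\W+")[[1]])
scores <- sapply(tokens, function(t) sum(t %in% dictionary))
scores  # numeric scores ready for further quantitative analysis
```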

Dr. Andreas Schwab – Why and How to Replace Statistical Significance Tests with Better Methods – Fall 2018

The purpose of this presentation is to increase awareness among management researchers of the severe limitations of statistical significance tests and to introduce the benefits that alternative approaches can provide. Statistical significance tests have been criticized by methodologists on various grounds. Their criticism suggests that in the management literature, the extensive use of statistical significance in quantitative research has led to the accumulation of deceptive findings. Consequently, management journals are full of “statistically significant” results that are so small that they are unlikely to be replicated by other studies and too small to be practically relevant. In a field that aspires to provide useful advice to managers, we need to focus on practically important effects that are robust across a wide variety of settings. This presentation provides a comprehensive critique of the limitations of statistical significance tests and introduces alternative approaches that promise to address these limitations, such as effect size evaluations, graphs of effect distributions, baseline models, and Bayesian statistics.
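
One of the alternatives named above, effect size evaluation with interval estimates, can be illustrated with a short simulation (not from the talk): with a large sample, a trivially small effect is "statistically significant," while the coefficient and its confidence interval convey its practical (ir)relevance.

```r
# Large N makes a tiny true effect "significant"; the estimate and its
# CI show how small it actually is.
set.seed(2024)
n <- 10000
x <- rnorm(n)
y <- 0.03 * x + rnorm(n)           # trivially small true effect

fit <- lm(y ~ x)
summary(fit)$coefficients["x", "Pr(>|t|)"]  # likely p < .05
coef(fit)["x"]                              # effect size itself: ~0.03
confint(fit)["x", ]                         # interval conveys magnitude
```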

Dr. Scott Turner – Mixed Methods in Strategy and Organizations Research – Spring 2017

All methods individually are flawed, but these limitations can be mitigated through mixed methods research, which combines methodologies to provide better answers to our research questions. This presentation discusses a research design framework for mixed methods work that is based on the principles of triangulation. Core elements for the research design framework include theoretical purpose, i.e., theory development and/or theory testing; and methodological purpose, i.e., prioritizing generalizability, precision in control and measurement, and authenticity of context. From this foundation, consideration is given to how the multiple methodologies are linked together to accomplish the theoretical purpose, focusing on three types of linking processes: convergent triangulation, holistic triangulation, and convergent and holistic triangulation. The implications of these linking processes for the theory at hand are discussed, taking into account the following theoretical attributes: generality/specificity, simplicity/complexity, and accuracy/inaccuracy. These ideas are drawn together into a roadmap that can serve as a design guide for organizational scholars conducting mixed methods studies.

Dr. Mike Withers – Omitted Variable Bias – Spring 2022

The omission of relevant explanatory variables in a regression model generally causes its estimators to be biased. This issue is referred to as omitted variable bias (OVB) and is recognized as one of the primary sources of endogeneity. In turn, the concern of OVB is often a key motivating reason for adopting instrumental variable techniques. These techniques typically involve a two-step procedure that constructs a version of the independent variable that does not feature variance due to the omitted variable. While these techniques can help alleviate the OVB concern, they also have critical assumptions that must be met regarding the instrumental variables employed (i.e., relevance and exogeneity). Even when these assumptions are met, instrumental variable techniques are often less efficient than ordinary least squares regression. Recently, the impact threshold of a confounding variable (ITCV) has been introduced in organizational research. The ITCV can be used to understand whether a statistical inference is changed because of the potential for an omitted variable. In this talk, the issue of OVB will be formally defined, and both instrumental variable techniques and the ITCV will be discussed as ways to help alleviate this concern.
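
The core OVB mechanism can be shown in a short simulation (an illustration, not the talk's materials): when z drives both x and y, omitting z biases the estimated effect of x.

```r
# Omitted variable bias in miniature: z is correlated with x and also
# affects y, so leaving z out inflates the coefficient on x.
set.seed(11)
n <- 10000
z <- rnorm(n)
x <- 0.6 * z + rnorm(n)            # x is correlated with the omitted z
y <- 0.5 * x + 0.7 * z + rnorm(n)  # true effect of x is 0.5

coef(lm(y ~ x))["x"]      # biased upward (z's effect loads onto x)
coef(lm(y ~ x + z))["x"]  # close to the true 0.5 once z is included
```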

Dr. Sang Eun Woo – Big Data Concepts – Fall 2019

The goal of this lecture is to provide a broad overview of conceptual issues related to big data. First, I will discuss big data as a ‘phenomenon’ from etymological, ontological, and epistemological angles. After that, I will briefly go over how big data has been and/or can be applied to various workplace HR solutions, highlighting a few promising directions (i.e., big data ‘applications’). Then, I will devote the rest of the talk to discussing big data as a ‘research method’ within the psychological/organizational sciences. I will emphasize the need for greater openness toward data-driven (inductive and abductive) modes of science while cautioning against premature claims of causality and measurement validity. Some tangible examples will be provided to elaborate on these two points. Lastly, I will conclude with a brief discussion of different perspectives on theory and causation that may shape future dialogues around big data science within our field.