Session 1: June 7-10, Six Course Options | Session 2: June 14-17, Six Course Options | Session 3: June 21-24, Six Course Options
Short Course Sessions and Groupings
We offer three sessions, giving participants the opportunity to take up to three back-to-back courses that complement one another. All courses within a session are taught concurrently, so a participant can take only one course per session.
CARMA Workshop: Basics of R (included free with the short course registration)
This four-hour workshop introduces R to prepare attendees for follow-up training in CARMA Short Courses that use R. By attending this online workshop, participants will learn basic skills for using the RStudio interface to load and activate R packages, import and manage data, and create and execute syntax. These basic skills will allow Short Course participants to learn more easily how R is used for data analysis and will enable Short Course instructors to better plan and deliver their content. The workshop is free of charge and available only to those attending one of the CARMA Short Courses. It will be delivered online.
During this Basics of R Workshop, attendees will learn:
1. Using R through the RStudio interface
2. Importing data into R
3. R data sets (a.k.a. data frames and tibbles)
4. Data types
5. Subsetting columns of data and selecting cases
6. Recoding data and dealing with missing data
7. Merging data (columns and rows)
8. Output objects
9. User-defined functions
10. Getting help
Session 1: June 7-10, Six Course Options (Choose One)
This course will provide a gentle introduction to the R computing platform and the RStudio interface. We will cover the basics of R, such as importing and exporting data, understanding R data structures, and working with R packages. You will also learn strategies for data manipulation within R (computing, recoding, selecting cases, etc.) and best practices for data management. We will work through examples of how to conduct basic statistical analyses in R (descriptives, correlation, regression, t-tests, ANOVA) and graph the results. Finally, we will explore user-defined functions in R and lay the groundwork for understanding how to perform the more complex analyses presented in other CARMA short courses.
The CARMA Introduction to Multilevel Analysis short course provides both (1) the theoretical foundation, and (2) the resources and skills necessary to conduct basic multilevel analyses. Emphasis will be placed on techniques for traditional, hierarchically nested data (e.g., children in classrooms; employees in teams). The first part of the course introduces issues related to multilevel theory (e.g., multilevel constructs; principles of multilevel theory building; cross-level inferences and cross-level biases). The second part of the course discusses issues related to multilevel measurement (e.g., aggregation; aggregation bias; composition and compilation models of emergence; estimating within-group agreement). The last part of the course focuses on the specification of basic 2-level models (e.g., children nested in classrooms; soldiers nested in platoons; employees nested within work teams) analyzed via multilevel regression (i.e., random coefficient regression; hierarchical linear model; mixed effects model). The R software package will be introduced, explained, and emphasized during this short course in preparation for the advanced short course offered in Session II. Participants who prefer HLM, SAS, SPSS, or MPlus (and have expertise with these programs) have the option of completing some assignments with these programs. Participants are encouraged to also bring datasets to the course and apply the principles to their specific areas of research. The course is best suited for faculty and graduate students who are familiar with traditional (i.e., single-level) multiple regression analysis, but have little (if any) expertise related to conducting multilevel analyses.
Module 1: Multilevel Theory: Constructs, Inferences, and Composition Models
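One aggregation statistic commonly discussed in multilevel measurement is ICC(1), which can be estimated from a one-way ANOVA. The course itself works in R; the following is a minimal, illustrative pure-Python sketch for balanced groups, with invented function and variable names.

```python
def icc1(groups):
    """ICC(1) from a one-way ANOVA, assuming balanced groups.

    groups: list of equal-sized lists, one per higher-level unit
    (e.g., employees nested in teams).
    """
    k = len(groups[0])                                  # group size
    n_groups = len(groups)
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    # Between-group mean square
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n_groups - 1)
    # Within-group mean square
    msw = sum((x - sum(g) / k) ** 2 for g in groups for x in g) / (n_groups * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect within-group agreement yields ICC(1) = 1
print(icc1([[1, 1, 1], [2, 2, 2], [3, 3, 3]]))  # 1.0
```

A value near 1 indicates that group membership explains most of the variance in the outcome, which is the kind of evidence used to justify aggregating individual responses to the group level.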
This introductory course requires no previous knowledge of structural equation modeling (SEM), but participants should possess a strong understanding of regression and a working knowledge of basic data handling in R. All illustrations and in-class exercises will use the R package lavaan, and participants will be expected to have lavaan installed on their laptops before the course begins. No course time will be spent on basic R data handling or on installing lavaan. The course will start with an overview of the principles underlying SEM. We then move into measurement model evaluation, including confirmatory factor analysis (CFA). Time will be spent on interpreting the parameter estimates and comparing competing measurement models for correlated constructs. We will then move on to path model evaluation, where paths representing “causal” relations are placed between the latent variables. Again, time will be spent on interpreting the various parameter estimates and determining whether the path models add anything above their underlying measurement models. If time permits, longitudinal models will be introduced.
Required Software: R with the lavaan package installed
Text mining refers to the discovery of patterns in natural language text – particularly in large data sets of short text segments, such as one might find in social media, web pages, or news articles. Text mining techniques can support theory development by uncovering patterns that would be challenging to find with traditional techniques. Text mining can also be used alongside standard confirmatory statistical techniques such as regression and classification. In this CARMA short course, we will use R and R-Studio to get started with text mining and various related methods for statistical analysis of text.
We will begin by briefly reviewing the basics of R, add-on packages, and text analysis essentials. I recommend taking CARMA’s introductory R course if you have no prior familiarity with programming languages. We will discuss the conceptual steps involved in text mining and then use R to put some of those concepts to work on open data sets I will provide. Students are welcome to bring their own data sets as well, but this is not required. We will examine the creation of document-feature matrices, dictionary-based sentiment analysis, exploratory topic modeling, structural topic modeling, and word embeddings. We will test some predictive techniques, using features of text documents as predictors. Time permitting, we will examine the processing steps in traditional natural language processing as well as some newer methods such as BERT.
Students who participate successfully in this short course can expect to learn enough about text mining to begin experimenting with these tools in research. The ideal participant will have an interest in improving their skill with R, knowledge of basic descriptive and inferential statistics, and curiosity about exploring empirically driven strategies for analysis of large data sets containing text. No prior knowledge of text mining or natural language processing is needed.
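The document-feature matrix mentioned above is the foundational data structure of text mining: one row per document, one column per term, cells holding term counts. The course builds these in R; here is an illustrative pure-Python sketch (toy documents invented for the example) showing the idea.

```python
from collections import Counter

def doc_feature_matrix(docs):
    """Build a document-feature (term-count) matrix from raw text.

    Returns (vocabulary, matrix): vocabulary is a sorted list of terms,
    and matrix[i][j] is the count of term j in document i.
    """
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({term for tokens in tokenized for term in tokens})
    matrix = []
    for tokens in tokenized:
        counts = Counter(tokens)
        matrix.append([counts.get(term, 0) for term in vocab])
    return vocab, matrix

docs = ["great product great service", "poor service"]
vocab, dfm = doc_feature_matrix(docs)
print(vocab)  # ['great', 'poor', 'product', 'service']
print(dfm)    # [[2, 0, 1, 1], [0, 1, 0, 1]]
```

Real text-mining pipelines add preprocessing (stop-word removal, stemming, weighting such as tf-idf) on top of this basic counting step, but every downstream method in the course description, from sentiment scoring to topic modeling, starts from a representation like this one.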
This introductory course will help you develop your model, develop and select measures, design survey instruments and execute your data collection. Topics include designing your project (developing a model, selecting variables, sampling requirements). Because it is necessary to establish adequate construct validity before testing hypotheses, we cover a wide variety of procedures for assessing construct validity (including EFA/CFA). Then we will apply this understanding of up-to-date construct validity practices to scale development techniques by creating new measures or revising existing measures that can pass the hurdles posed by tests of construct validity. We draw from research on how respondents interpret surveys to reveal principles for how to design your questionnaire to obtain high quality data. Finally, we will cover procedures for managing the data collection and for cleaning your data (missing data, outliers, identifying careless responders). If you wish, bring your research ideas because there will be opportunities to advance your own project within the workshop.
Meta-analyses have now become a staple of research in the organizational sciences. Their purpose is to summarize and clarify the extant literature through systematic and transparent means. Meta-analyses help answer long-standing questions, address existing debates, and highlight opportunities for future research. Despite their prominence, knowledge and expertise in meta-analysis are still restricted to a relatively small group of scholars. This short course is intended to expand that group by familiarizing individuals with the key concepts and procedures of meta-analysis with a practical focus. Specifically, the goal is to provide the necessary tools to conduct and publish a meta-analysis/systematic review using best practices. We will cover how to: (a) develop research questions that can be addressed with meta-analysis, (b) conduct a thorough search of the literature, (c) provide accurate and reliable coding, (d) correct for various statistical artifacts, and (e) analyze bivariate relationships (e.g., correlations, mean differences) as well as multivariate ones using meta-regression and meta-SEM. The course is introductory, so no formal training in meta-analysis is needed. Familiarity with some basic statistical concepts such as sampling error, correlation, and variation is sufficient.
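At its core, a bare-bones meta-analysis of correlations pools effect sizes weighted by sample size and examines how much the observed effects vary across studies. The following is an illustrative Python sketch of that first step (the correlations and sample sizes are invented toy values); artifact corrections of the kind covered in the course would build on these quantities.

```python
def bare_bones_meta(correlations, sample_sizes):
    """Sample-size-weighted mean correlation and observed variance,
    in the spirit of a bare-bones psychometric meta-analysis.
    """
    total_n = sum(sample_sizes)
    r_bar = sum(n * r for r, n in zip(correlations, sample_sizes)) / total_n
    var_obs = sum(n * (r - r_bar) ** 2
                  for r, n in zip(correlations, sample_sizes)) / total_n
    return r_bar, var_obs

# Toy example: three studies reporting the same bivariate relationship
rs = [0.30, 0.10, 0.20]
ns = [100, 100, 200]
r_bar, var_obs = bare_bones_meta(rs, ns)
print(round(r_bar, 3))    # 0.2
print(round(var_obs, 4))  # 0.005
```

Comparing the observed variance to the variance expected from sampling error alone is what tells the meta-analyst whether moderators are worth searching for.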
Session 2: June 14-17, Six Course Options (Choose One)
This short course will begin with an introduction to linear regression analysis with R, including models with single and multiple predictors and model comparison techniques. Particular attention will be paid to using regression to test models involving mediation and moderation, followed by consideration of advanced topics including multivariate regression, polynomial regression, logistic regression, and the general linear model. Exploratory factor analysis and MANOVA will also be covered. For all topics, examples will be discussed and assignments completed using data provided either by the instructor or by the short course participants.
This CARMA Advanced Multilevel Analysis short course provides both (1) the theoretical foundation, and (2) the resources and skills necessary to conduct basic and advanced multilevel analyses. The course covers both basic models (e.g., 2-level mixed and growth models), and more advanced topics (e.g., 3-level models, multilevel moderated-mediation models, and multiple-unit multilevel models). Practical exercises, with real-world research data, are conducted in R and Mplus. Participants are encouraged to bring datasets to the course and apply the principles to their specific areas of research. The course is best suited for faculty and graduate students who have at least some foundational understanding of conducting multilevel analyses.
Module 1: Basic mixed effects (2-level) models, testing in R and Mplus
Module 2: Longitudinal studies in R and Mplus: within-person experience sampling methods and growth models
Module 3: Complex multilevel models, part 1: 3-level models in R and moderated-mediation models in Mplus
Module 4: Complex multilevel models, part 2 (plus open discussion and consultations): multiple unit memberships in R (using the lme4 package)
This course is aimed at faculty and students with an introductory understanding of structural equation methods who seek a better understanding of the challenging process of making judgments about the adequacy of their models. Those who attend should have experience fitting structural equation models with software such as LISREL, Mplus, EQS, AMOS, or lavaan. This experience requirement can be met through graduate coursework or the Introduction to SEM Short Course. Attendees will be expected to use their own laptops with their SEM software installed, and they should also know how to import data from an SPSS, Excel, or CSV file into their SEM software. Attendees will learn how to interpret and report results from SEM analyses, how to conduct model comparisons to obtain information relevant to inferences about their models, and the advantages and disadvantages of different approaches to model evaluation. Attendees are encouraged to bring their own data for use during parts of the short course.
The course will consist of five sections, with each section having a lecture and lab component using exercises and data provided by the instructor:
• Review of model specification and parameter estimation
• Overview of model evaluation
• Logic and computations for goodness-of-fit measures
• Analysis of residuals and latent variables
• Model comparison strategies
Required Software: Your preferred SEM software package
Organizational research and practice have made great advances in understanding the world of work by applying traditional statistical methods to research questions, whether through simple predictive modeling (e.g., linear regression and ANOVA), factor analysis (e.g., EFA and CFA), or more extensive modeling (e.g., SEM and latent transition analysis). However, these models can be so simple that complex relationships underlying the data get overlooked; alternatively, they can be too complex for the data, appearing to fit well while actually capitalizing on chance (i.e., the models would not work well if applied to an independent data set).
To overcome these limitations, a variety of machine learning methods have been developed that make serious attempts at neither underfitting the data (e.g., by mining for complexity where it exists) nor overfitting it (e.g., by cross-validating models on new data). Machine learning methods generally comprise clustering approaches (e.g., k-means, DBSCAN, agglomerative clustering) and predictive approaches (e.g., random forests, LASSO regression, and support vector machines). These models can usefully supplement traditional approaches in organizational research, or in some cases supplant them, even out of necessity (e.g., when the number of variables exceeds the number of cases).
This CARMA short course is a hands-on experience in which you will use R and RStudio to fit and interpret these clustering and predictive models. [If you are not familiar with the basics of navigating and using R, you are strongly encouraged to take CARMA’s introductory R course.] We will use openly available data sets and R code that has already been developed, and together we will discuss, run, and interpret a wide variety of important machine learning models. Time permitting, we will explore methods for comparing the performance of these statistical learning models against one another.
This course will equip workshop attendees with the skills to perform their own clustering and predictive modeling using machine learning, where they can then apply these skills in their research, practice, and teaching.
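To give a flavor of the clustering approaches named above, here is an illustrative pure-Python sketch of k-means in one dimension (the course uses R and established packages; this toy version, with invented data and a deliberately naive initialization, exists only to show the alternating assign-and-update logic).

```python
def kmeans_1d(points, k, iters=10):
    """Minimal 1-D k-means: alternate between assigning each point to
    its nearest centroid and moving each centroid to its cluster mean."""
    centroids = sorted(points)[:k]  # naive deterministic init, for illustration
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated clumps of toy data
cents = kmeans_1d([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], k=2)
print([round(c, 6) for c in cents])  # [1.0, 10.0]
```

Production implementations add smarter initialization (e.g., k-means++), convergence checks, and support for many dimensions, but the core loop is exactly this.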
Multi-level research in the organizational sciences (e.g., OB, strategy, entrepreneurship) has become fairly mainstream in the last few decades. Despite this attention to levels issues in general, dyads and the dyadic level of analysis remain a “forgotten level” (Kenny, Kashy, & Cook, 2006) relative to individuals, teams, and organizations. This trend is unfortunate, as relationships (one-to-one associations in organizations, such as supervisor-subordinate, coworker, and firm-firm) are the building blocks of the phenomena that pervade organizational life. This course introduces (1) the importance of dyadic research, (2) the pitfalls of ignoring the dyadic level (both conceptually and statistically), and (3) a six-step model building exercise for dyads as a unique level of analysis, conceptually and empirically. This last component includes a focus on how to build dyad-level theories, conceptualizing constructs and their emergence at this level, research design choices with a focus on nesting vs. cross-classification, and data analysis. Students who participate successfully in this short course can expect to leave with a toolbox of conceptual and empirical knowledge and hands-on skills to develop and test dyadic models in their research. The presenter will demonstrate cross-classified modeling via HCM (available in the HLM software), but the same principles can be applied in R as well.
Python is a general purpose programming language that includes a robust ecosystem of data science tools. These tools allow for fast, flexible, reusable, and reproducible data processes that make researchers more efficient and rigorous with existing study designs, while transparently scaling up to big data designs. This short course focuses on the foundational skills of identifying, collecting, and preparing data using Python. We will begin with an overview, emphasizing the specific skills that have a high return on investment for researchers. Then, we will walk through foundational Python skills for working with data. Using those skills, we will cover collecting data at scale using several techniques, including programmatic interfaces for obtaining data from WRDS, application programming interfaces (APIs) for a wide range of academic and popular data (e.g., The New York Times), web scraping for quantitative and text data, and computer-assisted manual data collections. From there, we will assemble and transform data to produce a ready-for-analysis dataset that is authoritatively documented in both code and comments, and which maintains those qualities through the variable additions, alternative measure construction, and robustness checks common to real projects.
By the end of the course, you will have the skills, and many hands-on code examples, to conduct a rigorous and efficient pilot study, and to understand the work needed to scale it up. The course design does not assume any prior training, though reasonable spreadsheet skills and some familiarity with one of the commonly used commercial statistical systems are helpful. In particular, no prior knowledge of Python is required; a general introduction to Python is covered at the beginning of the course.
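The assemble-and-transform step described above, joining records from multiple sources into one documented, analysis-ready file, can be sketched with nothing but the Python standard library. The firm records below are invented for illustration; a real project would substitute data pulled from WRDS, an API, or a scrape.

```python
import csv
import io

# Two illustrative sources: firm identifiers and firm-year outcomes.
firms = [{"firm_id": "A", "industry": "retail"},
         {"firm_id": "B", "industry": "tech"}]
outcomes = [{"firm_id": "A", "year": 2020, "roa": 0.05},
            {"firm_id": "B", "year": 2020, "roa": 0.12}]

# Assemble a ready-for-analysis dataset by joining on firm_id.
by_id = {f["firm_id"]: f for f in firms}
dataset = [{**by_id[o["firm_id"]], **o} for o in outcomes]

# Document the result in code: write a CSV with an explicit column order.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["firm_id", "year", "industry", "roa"])
writer.writeheader()
writer.writerows(dataset)
print(buf.getvalue().splitlines()[1])  # A,2020,retail,0.05
```

Because the join keys, column order, and output format all live in code rather than in manual spreadsheet steps, the dataset can be rebuilt exactly when variables are added or measures are reconstructed, which is the reproducibility payoff the course emphasizes.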
Session 3: June 21-24, Six Course Options (Choose One)
This short course introduces the concepts and methodology of Bayesian statistics. Topics include Bayes’ rule, likelihood functions, prior and posterior distributions, Bayesian point estimates and intervals, Bayesian hypothesis testing, and prior specification. Additional topics include Bayesian regression, model selection, prediction, diagnostics, Bayes factors, and exploratory factor analysis. We also review practical implementations of Markov chain Monte Carlo and hierarchical models using R and JAGS and discuss conceptual differences between the Bayesian and frequentist paradigms.
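The relationship among prior, likelihood, and posterior that anchors the course can be shown with a tiny grid approximation, computing the posterior for a success probability under a uniform prior. This is an illustrative Python sketch (the course works with R and JAGS); the data values are invented.

```python
def grid_posterior(successes, trials, grid_size=1001):
    """Posterior over a success probability via grid approximation.

    Bayes' rule: posterior is proportional to prior times likelihood,
    here with a flat prior over [0, 1].
    """
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    prior = [1.0] * grid_size                       # flat (uniform) prior
    likelihood = [p ** successes * (1 - p) ** (trials - successes)
                  for p in grid]
    unnorm = [pr * li for pr, li in zip(prior, likelihood)]
    total = sum(unnorm)
    return grid, [u / total for u in unnorm]

# 7 successes in 10 trials; the exact posterior is Beta(8, 4)
grid, post = grid_posterior(successes=7, trials=10)
mean = sum(p * w for p, w in zip(grid, post))
print(round(mean, 3))  # 0.667
```

Grid approximation breaks down as models grow, which is why the course moves on to Markov chain Monte Carlo for realistic regression and hierarchical models; the underlying logic, though, is the same prior-times-likelihood computation shown here.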
The CARMA “Advanced Multilevel and Longitudinal Analyses using R Mixed-Effects Models” short course provides the (1) theoretical foundation, and (2) resources and skills necessary to conduct a variety of advanced multilevel and longitudinal analyses using the R mixed-effects modeling packages nlme and lme4. The course briefly reviews basic models (e.g., 2-level mixed and growth models) before addressing more advanced topics (econometric fixed-effect models for panel data, discontinuous growth models, consensus emergence models, and multilevel models for dichotomous outcomes). Practical exercises with real-world research data are provided. Participants are encouraged to bring datasets to the course and apply the principles to their specific areas of research. The course is best suited for faculty and graduate students who have a foundational understanding of mixed-effects models.
Module 1: Two-Level Mixed-Effect and Growth Model Review (exercises and examples using lme in R)
Module 2: Econometric Fixed-Effects Models vs. Mixed-Effects Models (exercises and examples using lme in R)
Module 3: Discontinuous Growth Models for More Complex Longitudinal Data (exercises and examples using lme in R)
Module 4: Bayes Estimates (exercises and examples using lme in R)
Module 5: Three-Level Models and Consensus Emergence Models (exercises and examples using lme and lmer from lme4 in R)
Module 6: Generalized Linear Mixed-Effects Models for Dichotomous Outcomes (exercises and examples using glmer from lme4 in R)
The short course covers three advanced structural equation modeling (SEM) topics: (a) testing measurement invariance; (b) latent growth modeling; and (c) evaluating reciprocal relationships in SEM. The instructor lectures about half of the time, with the remaining time devoted to having participants run examples with actual data provided by the instructor. Participants go home with usable examples and syntax. The measurement invariance section focuses on the procedures outlined in the Vandenberg and Lance (2000) Organizational Research Methods article. Namely, we will cover the nine invariance tests, starting with tests of equal variance-covariance matrices and ending with tests of latent mean differences. We will use a multi-sample approach to the invariance tests, and you will be shown how to test latent mean differences using the latent means of the latent variables within each group. The workshop then advances to operationalizing latent growth models within the SEM framework; essentially, this is how to use one’s longitudinal data to capture the dynamic processes in one’s theory by creating vectors of change across time. Participants will also be exposed to modeling how change in one variable impacts change in another. We will also use mixed modeling, and at the end the instructor introduces latent profile modeling with latent growth curves. The final piece is the testing of models with feedback loops, following an article by Edward Rigdon (1995) in the journal Structural Equation Modeling. We will go through his four models and what they mean. In doing so, we will extensively cover model identification, as it is particularly important to testing reciprocal effects.
While the instruction will be carried out using the R package lavaan, participants are welcome to use another SEM package. If you do so, you should have strong familiarity with that package and its functionality, as the instructor will not be able to provide assistance in its use.
In this course, you will learn how to create novel datasets from information found for free on the internet using only R and your own computer. First, after a brief introduction to data source theory, web architecture, and web design, we will explore the collection of unstructured data by scraping web pages directly through several small hands-on projects. Second, we will explore the collection of structured data by learning how to send queries directly to service providers like Google, Facebook, and Twitter via their APIs. Third, we will briefly explore natural language processing as an approach for analyzing scraped data. Finally, we will walk through the various ethical and legal issues to be navigated whenever launching a web scraping project.
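The core of scraping unstructured data is parsing retrieved HTML into the fields you care about. The course does this in R; the sketch below uses Python's standard-library parser on an invented HTML snippet so the example is fully self-contained (in practice the HTML would come from an HTTP request).

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every anchor tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Illustrative static snippet standing in for a downloaded page
html = ('<ul><li><a href="/jobs/1">Posting 1</a></li>'
        '<li><a href="/jobs/2">Posting 2</a></li></ul>')
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/jobs/1', '/jobs/2']
```

Collecting the links on a listing page and then visiting each one is the typical two-stage pattern of a scraping project, and it is also where the rate-limiting, terms-of-service, and ethics questions covered at the end of the course come into play.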
For decades, difference scores have been used in studies of fit, similarity, and agreement in organizational research. Despite their widespread use, difference scores have numerous methodological problems. These problems can be overcome by using polynomial regression and response surface methodology to test hypotheses that motivate the use of difference scores. These methods avoid problems with difference scores, capture the effects difference scores are intended to represent, and can examine relationships that are more complex than those implied by difference scores.
This short course will review problems with difference scores, introduce polynomial regression and response surface methodology, and illustrate the application of these methods using empirical examples. Specific topics to be addressed include: (a) types of difference scores; (b) questions that difference scores are intended to address; (c) problems with difference scores; (d) polynomial regression as an alternative to difference scores; (e) testing constraints imposed by difference scores; (f) analyzing quadratic regression equations using response surface methodology; (g) difference scores as dependent variables; and (h) answers to frequently asked questions.
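Once a quadratic regression Z = b0 + b1*X + b2*Y + b3*X^2 + b4*X*Y + b5*Y^2 has been estimated, response surface methodology summarizes it with the slope and curvature along the congruence line (X = Y) and the incongruence line (X = -Y). A minimal Python sketch of those standard surface features, with an invented function name and toy coefficients:

```python
def surface_features(b1, b2, b3, b4, b5):
    """Response-surface features for the quadratic regression
    Z = b0 + b1*X + b2*Y + b3*X^2 + b4*X*Y + b5*Y^2.

    a1, a2: slope and curvature along the congruence line (X = Y)
    a3, a4: slope and curvature along the incongruence line (X = -Y)
    """
    a1 = b1 + b2        # substitute Y = X: linear term
    a2 = b3 + b4 + b5   # substitute Y = X: quadratic term
    a3 = b1 - b2        # substitute Y = -X: linear term
    a4 = b3 - b4 + b5   # substitute Y = -X: quadratic term
    return a1, a2, a3, a4

# A squared-difference model -(X - Y)^2 expands to -X^2 + 2XY - Y^2,
# i.e., b1 = b2 = 0, b3 = -1, b4 = 2, b5 = -1:
print(surface_features(0, 0, -1, 2, -1))  # (0, 0, 0, -4)
```

The output illustrates the constraint a difference score imposes: a perfectly flat surface along the congruence line and downward curvature along the incongruence line. Testing whether estimated coefficients actually satisfy such constraints, rather than assuming them, is a central theme of the course.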
* – To receive these prices, you must complete your registration during the dates specified.
** – These prices reflect a 50% discount that you receive if you are a student or faculty member at an organization that belongs to the CARMA Institutional Premium Membership or the CARMA Institutional Basic Membership Program.
*** – These prices reflect a discount in which you register for 2 courses and receive $100 off.
**** – These prices reflect a discount in which you register for 3 courses and receive $150 off. For this discount, please contact us (firstname.lastname@example.org) before you register.
***** – These prices reflect a 25% discount for members of the following associations: Academy of Management (AOM), Southern Management Association (SMA), Society for Industrial and Organizational Psychology (SIOP), Asia Academy of Management (AAOM), International Association for Chinese Management Research (IACMR), European Academy of Management (EURAM), European Association of Work and Organizational Psychology (EAWOP), Academy of International Business (AIB), Australian and New Zealand Academy of Management (ANZAM), Indian Academy of Management (INDAM), Midwest Academy of Management (MAM), Iberoamerican Academy of Management (IAOM), the PhD Project, and Women in Research Methods (WRM).
If you are a member of AOM, SIOP, SMA, AAOM, IACMR, EURAM, EAWOP, AIB, ANZAM, INDAM, MAM, IAOM, the PhD Project, or WRM, you can use one of the following discount codes when registering for these short courses:
Faculty Code: d495-4415
Student Code: 6cfd-062b
Refund Policy: A full refund will be provided up to 2 weeks before the first day of the session. After that date, a partial refund (50%) will be provided.