PROFESSIONAL DEVELOPMENT INSTITUTES

GENERATING EVIDENCE

Summer Research Training Institute
on Cluster-randomized Trials (CRTs)
for Established Researchers

https://www.ipr.northwestern.edu/workshops/annual-summer-workshops/2018-crt/

The Center runs an annual 10-day training institute on CRTs, focused on providing participants with the knowledge and skills to conduct a Goal 3 or Goal 4 randomized experiment in education.

The purpose of this training is to prepare current education researchers to plan, design, conduct, analyze, and interpret the results of cluster-randomized trials.

The course will enable participants to:

  • Describe the principles underlying randomized experiments and their advantages for making causal inferences.
  • Understand the hierarchical structure of populations in education (students nested in classes nested in schools) and its implications for study design and analysis of data.
  • Select appropriate measures for assessing outcomes, describing implementation fidelity, and capturing process variables.
  • Acquire knowledge and strategies for designing and conducting a cluster-randomized trial.
  • Recruit and retain a sample of schools and students for the duration of the trial and manage trial operations.
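The hierarchical structure noted above (students nested in classes nested in schools) affects statistical power through the design effect. As a point of reference only (this formulation is standard in the CRT literature, not drawn from the course materials), with equal cluster sizes it is:

```latex
% Design effect for a cluster-randomized trial with equal cluster sizes:
% m = number of students per cluster, \rho = intraclass correlation (ICC)
\mathrm{DEFF} = 1 + (m - 1)\,\rho

% The effective sample size is the actual sample size deflated by DEFF:
n_{\mathrm{eff}} = \frac{n}{1 + (m - 1)\,\rho}
```

Even a modest ICC (e.g., ρ = 0.05 with classes of 25 students gives DEFF = 2.2) more than doubles the sample required relative to simple random assignment of students, which is why power analyses for CRTs are planned at the cluster level.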

SYNTHESIZING EVIDENCE

Meta-analysis Training Institute (MATI)

https://www.meta-analysis-training-institute.com/

Loyola University Chicago, in collaboration with the Stepp Center, runs an annual 6-day training institute on advanced meta-analysis methods, providing researchers with the knowledge and skills they need to conduct a Goal 1 study in education. MATI is funded by the Institute of Education Sciences.

The purpose of this training institute is to provide researchers who have prior training or experience in meta-analysis with instruction in the use of R for analysis, covering advanced methods in:

  • Advanced effect sizes (e.g., pre-post, regression coefficients, clustered)
  • Power analysis for main effects and moderators
  • Methods for handling dependent effect sizes (robust variance estimation)
  • Meta-regression
  • Missing data (both outcomes and moderators)
  • Publication bias
  • Meta-SEM
  • Single-case designs
  • Interpretation of results (particularly with heterogeneity)
  • Choosing a database management system (e.g., FileMaker vs Excel)
  • Setting up a database
  • Training coders and assessing reliability
  • Budgeting
  • Scoping and coding
  • Timelines
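Several of the analytic topics above (meta-regression, heterogeneity, dependent effect sizes) build on the random-effects model. For reference only (this is the standard formulation, not a summary of the course materials), the pooled estimate is:

```latex
% Random-effects pooled estimate over k studies with effect sizes \hat{\theta}_i,
% within-study variances v_i, and between-study variance \tau^2:
w_i = \frac{1}{v_i + \tau^2},
\qquad
\hat{\mu} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
\mathrm{SE}(\hat{\mu}) = \sqrt{\frac{1}{\sum_{i=1}^{k} w_i}}
```

When effect sizes are clustered within studies, the independence assumption behind these weights fails, which is the motivation for the robust variance estimation methods listed above.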