SEMINARS

Seminar Series 2018-19

LEAH WELTY

Associate Professor of Preventive Medicine (Biostatistics) and Psychiatry and Behavioral Sciences, Feinberg School of Medicine

Title: Facilitating reproducible research through direct connection of data analysis with manuscript preparation in Microsoft Word
Abstract: This talk will introduce a free, open-source program for conducting reproducible research and creating dynamic documents using Microsoft Word together with Stata, SAS, or R. Called StatTag, this program was recently developed to address a critical need in the research community: there were no broadly accessible tools to integrate document preparation in Word with statistical code, results, and data. Popular tools such as knitr and R Markdown use plain-text editors for document preparation. Despite the merits of these programs, Microsoft Word is ubiquitous for manuscript preparation in many fields, such as medicine, in which conducting reproducible research is increasingly important. Furthermore, current tools are one-directional: downstream changes to the rendered RTF/Word documents are not reflected back in the source code. We developed StatTag to fill this void. StatTag provides an interface for editing statistical code directly from Word and allows users to embed statistical output from that code (estimates, tables, figures) within Word. Output can be updated individually or collectively in one click with a behind-the-scenes call to the statistical program. With StatTag, modifying a dataset or analysis no longer entails transcribing results into Word. The talk will include worked examples and will be accessible to a broad audience.
Date: November 28, 2018
Location: IPR Conference Room, 617 Library Place

LUKE MIRATRIX

Assistant Professor of Education, Harvard Graduate School of Education

Title: Simulating for uncertainty with interrupted time series designs
Abstract: Despite our best efforts, sometimes we are forced to use the interrupted time series (ITS) design as an identification strategy for evaluating a policy change when we have only a single treated unit and no comparable controls. For example, in recent county- and state-wide criminal justice reform efforts, where judicial bodies have changed bail-setting practices for everyone in order to reduce rates of pre-trial detention while maintaining court order and public safety, we have no natural or plausible comparison group other than the past. In these contexts, it is imperative to model pre-policy trends with a light touch, allowing for structures such as autoregressive departures from any pre-existing trend, so as to accurately assess the true uncertainty of our projections given our modeling assumptions. One way forward is simulation: generating a distribution of plausible counterfactual trajectories to compare to the observed series. This approach naturally accommodates seasonality and other time-varying covariates, and it provides confidence intervals along with point estimates for the potential impacts of the policy change.
Date: February 13, 2019
Location: IPR Conference Room, 617 Library Place
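The simulation approach Miratrix describes can be sketched in a few lines. The example below is a minimal illustration with synthetic data, not the speaker's implementation: it fits a linear trend with AR(1) residuals to the pre-policy window only, then simulates counterfactual post-policy trajectories to form a percentile envelope and a point estimate of the impact. All numbers (series length, true effect, AR coefficient) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly series: linear trend + AR(1) noise, policy change at t0
T, t0 = 72, 60
t = np.arange(T)
eps = np.zeros(T)
for i in range(1, T):
    eps[i] = 0.5 * eps[i - 1] + rng.normal(0, 2)
y = 50 + 0.2 * t + eps
y[t0:] -= 8  # assumed "true" policy effect, for illustration only

# Fit trend + AR(1) on the pre-policy window with a light touch
pre_t, pre_y = t[:t0], y[:t0]
X = np.column_stack([np.ones(t0), pre_t])
beta, *_ = np.linalg.lstsq(X, pre_y, rcond=None)
resid = pre_y - X @ beta
rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
sigma = np.std(resid[1:] - rho * resid[:-1], ddof=2)

# Simulate plausible counterfactual trajectories for the post-policy period
n_sims, horizon = 1000, T - t0
sims = np.empty((n_sims, horizon))
for s in range(n_sims):
    e = resid[-1]  # continue the AR process from the last pre-policy residual
    for h in range(horizon):
        e = rho * e + rng.normal(0, sigma)
        sims[s, h] = beta[0] + beta[1] * (t0 + h) + e

# Envelope of counterfactuals and point estimate of the policy impact
lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)
impact = y[t0:] - sims.mean(axis=0)
```

Seasonality or other time-varying covariates would enter as extra columns of the design matrix, with the same simulate-and-compare logic.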

MICHAEL WEISS

Senior Associate, MDRC

Title: An applied researcher’s guide to intent-to-treat effects from multisite (blocked) individually randomized trials: Estimands, estimators, and estimates
Abstract: Researchers face many choices when designing, analyzing data from, and interpreting results from multisite (blocked) individually randomized controlled trials (multisite RCTs). The most common parameter of interest in multisite RCTs is the overall average intent-to-treat (ITT) effect. But even this parameter is not simple; even defining it requires decisions. The researcher needs to determine whether to estimate the average effect across individuals or the average effect across sites. Furthermore, the researcher can target the average effect for the experimental sample or, by viewing those units as a sample from a larger population, target the average effect for a broader population. If treatment effects vary across sites, these estimands can differ. Once an estimand is selected, the researcher must choose among estimators for that estimand. Weiss and his colleagues describe 13 common estimators, consider which estimands they are most appropriate for, and discuss their properties in the face of treatment effect heterogeneity. Using data from 12 large multisite RCTs of educational and job training programs, they estimate the ITT effect and its associated standard error using each estimator and compare and contrast the results. This allows researchers to assess the extent to which each of these decisions matters in practice. Guidance for applied researchers is provided.
Date: February 27, 2019
Location: IPR Conference Room, 617 Library Place
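The distinction between the two estimands in Weiss's abstract, and why effect heterogeneity makes them diverge, can be illustrated with a toy simulation. This is a hedged sketch, not one of the 13 estimators from the talk: three hypothetical sites with different sizes and different true effects, comparing the site-weighted average (each site counts equally) against the individual-weighted average (sites count in proportion to size).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multisite trial: sites differ in size AND in true effect,
# so the two estimands below genuinely differ (all numbers invented)
site_sizes = np.array([40, 80, 400])
site_effects = np.array([2.0, 2.0, 8.0])

site_est = []
for n, tau in zip(site_sizes, site_effects):
    # Randomize half of each site to treatment within the block
    z = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n - n // 2)])
    y = rng.normal(10, 1, n) + tau * z
    # Site-level ITT estimate: difference in means within the site
    site_est.append(y[z == 1].mean() - y[z == 0].mean())

site_est = np.array(site_est)
w = site_sizes.astype(float)

# Estimand 1: average effect across sites (each site weighted equally)
avg_across_sites = site_est.mean()

# Estimand 2: average effect across individuals (sites weighted by size)
avg_across_individuals = np.sum(w * site_est) / w.sum()
```

Here the large site has the large effect, so the individual-weighted average sits well above the site-weighted one; with homogeneous effects the two would coincide in expectation, which is the heterogeneity point the abstract makes.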