
BSU seminar: ‘Adapting Real-World Experimentation To Balance Enhancement of User Experiences with Statistically Robust Scientific Discovery’

May 13, 2021 @ 2:00 pm - 3:00 pm


Speaker: Professor Joseph Jay Williams, University of Toronto

Short Title: Adapting Real-World Experimentation To Balance Enhancement of User Experiences with Statistically Robust Scientific Discovery

Long Title: Perpetually Enhancing User Interfaces in Tandem with Advancing Scientific Research in Education & Mental Health: Enabling Reliable Statistical Analysis of the Data Collected by Algorithms that Trade Off Exploration & Exploitation

Abstract: How can we transform the everyday technology people use into intelligent, self-improving systems? For example, how can we perpetually enhance text messages for managing stress, or personalize explanations in online courses? Our work explores the use of randomized adaptive experiments that test alternative actions (e.g. text messages, explanations), aiming to gain greater statistical confidence about the value of actions, in tandem with rapidly using this data to give better actions to future users.

To help characterize the problems that arise in statistical analysis of data collected while trading off exploration and exploitation, we present a real-world case study of applying the multi-armed bandit algorithm TS (Thompson Sampling) to adaptive experiments. TS aims to assign people to actions in proportion to the probability those actions are optimal. We present empirical results on how the reliability of statistical analysis is impacted by Thompson Sampling, compared to a traditional experiment using uniform random assignment. This helps characterize a substantial problem to be solved – using a reward-maximizing algorithm can cause substantial issues in statistical analysis of the data. More precisely, an adaptive algorithm can increase both false positives (believing actions have different effects when they do not) and false negatives (failing to detect differences between actions). We show how statistical analyses can be modified to take into account properties of the algorithm, but that these do not fully address the problem raised.
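The assignment rule described above – allocating participants to actions in proportion to the posterior probability each action is optimal – can be sketched for Bernoulli-reward arms as follows. This is an illustrative Beta-Bernoulli sketch, not code from the talk; the uniform Beta(1, 1) prior is an assumption.

```python
import random

def thompson_sample_assign(successes, failures):
    """Beta-Bernoulli Thompson Sampling: draw one sample from each
    arm's posterior and assign the participant to the arm with the
    largest draw. Assumes a uniform Beta(1, 1) prior on each arm."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda i: draws[i])

# Usage: after each assignment, observe the reward and update the
# chosen arm's success/failure counts before the next participant.
```

Because the arm is chosen via a posterior draw rather than a fixed rule, the long-run assignment frequency of each arm matches its posterior probability of being optimal, which is exactly what concentrates participants on the apparently better action and distorts standard uniform-assignment analyses.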

We therefore introduce an algorithm which assigns a proportion of participants uniformly randomly and the remaining participants via Thompson sampling. The probability that a participant is assigned using Uniform Random (UR) allocation is set to the posterior probability that the difference between two arms is ‘small’ (below a certain threshold), allowing for more UR exploration when there is little or no reward to be gained by exploiting. The resulting data can enable more accurate statistical inferences from hypothesis testing by detecting small effects when they exist (reducing false negatives), and reducing false positives.
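The hybrid allocation rule above can be sketched for two Bernoulli arms: estimate the posterior probability that the arms differ by less than a threshold, use uniform random (UR) assignment with that probability, and otherwise fall back to Thompson Sampling. The threshold, Monte Carlo draw count, and uniform priors below are illustrative assumptions, not values from the talk.

```python
import random

def hybrid_assign(successes, failures, threshold=0.02, n_draws=2000):
    """Mix UR and TS allocation for two Bernoulli arms.
    P(assign via UR) = posterior probability that |p0 - p1| < threshold,
    estimated by Monte Carlo from independent Beta posteriors
    (uniform priors). `threshold` and `n_draws` are illustrative."""
    close = 0
    for _ in range(n_draws):
        p0 = random.betavariate(successes[0] + 1, failures[0] + 1)
        p1 = random.betavariate(successes[1] + 1, failures[1] + 1)
        close += abs(p0 - p1) < threshold
    p_ur = close / n_draws

    if random.random() < p_ur:
        return random.randrange(2)  # uniform random exploration
    # Otherwise a Thompson Sampling step: larger posterior draw wins.
    d0 = random.betavariate(successes[0] + 1, failures[0] + 1)
    d1 = random.betavariate(successes[1] + 1, failures[1] + 1)
    return 0 if d0 > d1 else 1
```

When the posterior concentrates on a near-zero difference, `p_ur` approaches 1 and assignment becomes essentially uniform, preserving power to detect small effects; when one arm is clearly better, `p_ur` approaches 0 and the algorithm exploits as ordinary Thompson Sampling would.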

The work we present aims to surface the underappreciated complexity of using adaptive experimentation to both enable scientific/statistical discovery and help real-world users. The current work takes a first step towards computationally characterizing some of the problems that arise, and what potential solutions might look like, in order to inform and invite multidisciplinary collaboration between researchers in machine learning, statistics, and the social-behavioral sciences.

Bio: Joseph Jay Williams is an Assistant Professor in Computer Science (and a Vector Institute Faculty Affiliate, with courtesy appointments in Statistics & Psychology) at the University of Toronto, leading the Intelligent Adaptive Interventions research group. He was previously an Assistant Professor at the National University of Singapore’s School of Computing in the department of Information Systems & Analytics, a Research Fellow at Harvard’s Office of the Vice Provost for Advances in Learning, and a member of the Intelligent Interactive Systems Group in Computer Science. He completed a postdoc at Stanford University in Summer 2014, working with the Office of the Vice Provost for Online Learning and the Open Learning Initiative. He received his PhD from UC Berkeley in Computational Cognitive Science (with Tom Griffiths and Tania Lombrozo), where he applied Bayesian statistics and machine learning to model how people learn and reason. He received his B.Sc. from University of Toronto in Cognitive Science, Artificial Intelligence and Mathematics, and is originally from Trinidad and Tobago. More information about the Intelligent Adaptive Intervention group’s research and papers is at

This will be a virtual seminar. If you would like to attend the live seminar, please email: 




Virtual Seminar via MS Teams


MRC Biostatistics Unit