## INFORMS Applied Probability Conference

The 18th INFORMS Applied Probability Conference will take place between the 5th and 8th of July on the campus of Koç University in Istanbul, Turkey.

Plenary speakers will include Halil Mete Soner, ETH Zurich, Switzerland; John N. Tsitsiklis, Massachusetts Institute of Technology, US; and Kavita Ramanan, Brown University, US.

The Applied Probability Society is a subdivision of the Institute for Operations Research and the Management Sciences (INFORMS). The Society is concerned with the application of probability theory to systems that involve random phenomena, for example, manufacturing, communication network, computer network, service, and financial systems. The Society promotes the development and use of methods for the improvement of evaluation, control, and design of these systems.

For details visit the conference website at http://home.ku.edu.tr/~aps2015/

## Greek Stochastics Meeting - Sequential and On-line Learning

The seventh edition of the Greek Stochastics meeting will take place in Chania, Crete, Greece, between the 11th and 13th of July at the Mediterranean Agronomic Institute of Chania.

The meeting's primary aim is to facilitate a broad discussion of current research themes related to Sequential and On-line Learning. It will consist of three short courses by Gabor Lugosi (Universitat Pompeu Fabra), Phil Dawid (University of Cambridge) and Nicolas Chopin (ENSAE, Paris). There will also be a few contributed talks and poster presentations.

For details visit the meeting website at http://www.stochastics.gr/meetings/eta/

Sofia Villar, postdoctoral research fellow in clinical trials methodology at the MRC Biostatistics Unit, will participate in both events and give contributed talks.

**Title:** *“Novel bandit-based solutions for practical stochastic scheduling problems”*

**Summary:** The multi-armed bandit problem describes a sequential experiment in which the goal is to achieve the largest possible mean reward by choosing among different reward distributions with unknown parameters. This problem has become a paradigmatic framework for the dilemma between exploration (learning about the distributions' parameters) and exploitation (earning from distributions that look superior based on limited data), which characterises any data-based learning process.
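The exploration/exploitation trade-off described above can be illustrated with a small simulation. The sketch below is not from the talk; it is a minimal, hypothetical example using Thompson sampling on two Bernoulli arms, where each round the policy samples a plausible mean from each arm's posterior and plays the arm whose sample is largest:

```python
import random

def thompson_bandit(true_means, horizon, seed=0):
    """Thompson sampling on Bernoulli arms.

    Returns the total reward earned and how often each arm was pulled.
    """
    rng = random.Random(seed)
    k = len(true_means)
    successes = [1] * k  # Beta(1, 1) prior on each arm's unknown mean
    failures = [1] * k
    total_reward = 0
    pulls = [0] * k
    for _ in range(horizon):
        # Exploration and exploitation in one step: draw a sample from each
        # arm's posterior and play the arm whose sampled mean is largest.
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
        pulls[arm] += 1
    return total_reward, pulls

reward, pulls = thompson_bandit([0.3, 0.6], horizon=2000)
```

Early on the policy samples both arms; as data accumulate, the posterior for the better arm concentrates and it is played almost exclusively.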

Over the past 50 years, bandit-based solutions, and particularly the concept of an index policy introduced by Gittins and Jones, have been fruitfully deployed to address a wide variety of stochastic scheduling problems arising in practice. Deriving such solutions poses various research challenges, but offers significant computational and performance advantages. In this talk I will illustrate this point by presenting recent results from two Bayesian bandit models: the optimal allocation of patients in a clinical trial, and the scheduling of sensors in a network to detect smart targets. In both, either the derivation of an index policy or the practical implementation of existing index policies poses complex research questions.

Part of this talk is based on recent joint work with Jack Bowden and James Wason.