

To be Announced Soon
Information
Artificial intelligence has become an integral part of scientific operations at large-scale experimental facilities, including light, neutron, and laser sources. These facilities generate substantial volumes of data daily, making artificial intelligence essential not only for analysing complex datasets and accelerating scientific discovery, but also for experimental planning, instrument control, and the design of future experimental configurations.
Nonetheless, several fundamental challenges persist. In particular, experimental science frequently lacks annotated data or well-defined ground truths, necessitating artificial intelligence techniques that remain effective under low supervision and robust to variation across scientific settings. Addressing these challenges requires novel approaches, including resilient feature learning, differentiable simulation-based design frameworks, reinforcement learning for optimisation, and the systematic integration of these components.
This talk will examine a selection of these challenges and present recent work from the Rutherford Appleton Laboratory, United Kingdom. This includes scientific benchmarking initiatives that prioritise trustworthiness and scientific relevance over raw computational performance, resilient feature-learning techniques that enable unsupervised analysis of scientific datasets, and preliminary investigations into using large language and code-generation models to automate elements of experimental workflows.
