UKRI Trustworthy Autonomous Systems

Node on Trust

Investigating how to build, maintain and manage trust in robotic and autonomous systems

Project Description

Engineered systems are increasingly being used autonomously, making decisions and taking actions without human intervention. These Autonomous Systems are already deployed in industrial sectors, but only in controlled scenarios (e.g. static automated production lines, fixed sensors). They begin to struggle when the task grows more complex, when the environment is uncontrolled (e.g. drones inspecting offshore wind farms), when they interact closely with people and other entities in the world (e.g. self-driving cars), or when they have to work as a team (e.g. cobots working in a factory).

Our Vision is that these systems learn to recognise situations in which trust is typically lost unnecessarily, and adapt this prediction to specific people and contexts. Trust will be managed through transparent interaction, increasing stakeholders' confidence in using Autonomous Systems so that they can be adopted in scenarios never before thought possible, such as taking on jobs that endanger humans (e.g. first-responder or pandemic-related tasks).

The Node will create a UK research centre of excellence for trust that will inform the design of future Autonomous Systems, ensuring that they are widely used and accepted across a variety of applications. This cross-cutting, multidisciplinary approach is grounded in Psychology and Cognitive Science and consists of three "pillars of trust": 1) computational models of human trust in Autonomous Systems, including Theory of Mind; 2) adaptation of these models in the face of errors and uncontrolled environments; and 3) user validation and evaluation across a broad range of sectors in realistic scenarios.

This Node will explore how best to establish, maintain and repair trust by incorporating humans' subjective view of trust in Autonomous Systems, thereby maximising their societal and economic benefits.

Model

Develop cognitive models of trust

Adapt

Adapt to different tasks and users, and over time

Validate

Validate in real, industry-relevant applications