AI, Decision Making & Law
SAILS Workshop III on AI, Decision Making & Law:
Roadblocks for AI, Decision Making & Law and how to overcome them
Workshop
Introduction
AI systems are increasingly used in decision making processes in which they are granted varying degrees of autonomy. The degree of autonomy depends, first, on the human contribution to the decision making process itself: the involvement of humans in designing the AI architecture and providing its input information, as well as the interaction between the AI system and humans during the process. Second, it depends on the intermediary role humans may have in the actions that follow up on the decision, or in the (direct) legal effects of the decision. One area in which AI systems are used is corporate governance and board decision making. Another important field is health care, where professionals such as doctors use AI for diagnosis and treatment options. A third area where AI systems are utilised is policy making and governmental decisions. This raises novel doctrinal and practical questions about responsibility, representation, validity and, ultimately, liability. It also challenges the axioms and behavioural preconceptions on which several rules are based, making it a topic ripe for multidisciplinary research with scientists in the fields of artificial intelligence, health care, social and organisational psychology, public policy and ethics.
During and after the conference call of 30 September 2019 the following specifics were added. The interdepartmental Cairelab has been formed to determine where AI is and can be used in practice, which roadblocks (including technical, legal and ethical ones) prevent or hinder such use, and how these roadblocks can be overcome (including through discussions with legislators and health care insurers). There are plenty of use cases within the LUMC that can be discussed, including NeLL (Niels Chavannes and Douwe Atsma), AI-driven health care path navigation (Daan Hommes) and radiology (Mark van Buchem). Preliminary questions that arise include:
- to what extent may a doctor use AI to advise him/her on a decision to be taken, or even have AI take the decision for him/her?
- to what extent is he/she liable for following or not following that advice?
- can the doctor check how the AI came to its advice and, if so, to what extent?
- if the doctor increasingly relies on AI, to what extent does his/her resulting lack of training influence his/her dependence on AI?
From a company law perspective, similar questions arise in the context of board decision making advised by AI, and with respect to algorithmic decision making by government.
The research questions of the LUMC and company law use cases appear to overlap to a large extent and provide considerable common ground for further research. At a more granular level, sector-specific legislation and its application may differ.
Workshop aims
This workshop:
- seeks to facilitate collaboration between the SAILS parties interested in the field of human-AI decision making and its legal consequences
- aims to identify overarching research questions and collaborations, as well as specific projects and action points for those projects