Ethics

Integreat will pursue a dedicated research activity on the ethical aspects of knowledge-driven machine learning (ML), analysing its fundamental ethical dilemmas. We will employ the methods of analytical philosophy, in conjunction with experimental philosophy, to examine three interrelated themes that pertain to all other research themes and define the ethical ground of our four objectives.

Respect for persons 

Integreat will go beyond the standard person-centred approach to encompass a broader ethical spectrum that seeks to identify and avoid reification, a reductive, exploitative tendency. Using ML to manipulate people’s choices is reifying, whereas “nudging” people in directions they could autonomously agree to is less so; determining the boundary between nudging and manipulation is key here. Technology often alienates us from responsibility, which again leads to reification through the loss of (moral) agency. Integreat will therefore incorporate researchers themselves as moral agents from the outset of the research, as well as addressing issues of moral responsibility on a broader level.

Transparency vs. reliabilism (the “black box” problem) 

Current literature on ethics and AI suggests a dichotomy between transparency and reliabilism. Advocates of transparency hold that it is essential for trust in AI systems; some go so far as to argue that there is a “right to an explanation”, while others claim that transparency is simply not achievable. Conversely, “reliabilists” assert that if systems are accurate enough, transparency matters less. Designers of ML systems thus face a dilemma: should they favour accuracy over explicability? Yet trust in decisions is given and received on the basis of more than access to information or transparency alone. Integreat will ascertain what role transparency can and should play in our understanding of trust, and our findings will feed into our researchers’ choices and work priorities.

Justice

Justice concerns the distribution of burdens and benefits. For ethically sound ML, we need “responsible design”. One way of addressing this is to develop a code of ethics for ML researchers. We will determine, together with Integreat’s researchers, whether such a code is feasible and, if so, what principles it should be based on. More generally, ethics research will involve a continuous, dynamic interaction with researchers at Integreat, to identify and tackle ethical challenges as they arise.


Key researchers in this research theme:

Published July 3, 2023 10:55 - Last modified Sep. 6, 2023 18:24