An in-depth understanding of the main claims made by each of the theories we consider and the EU AI Act. The ability to identify the structure of arguments and theories. The ability to present focused objections to arguments and theories. The ability to rationally defend a point of view, possibly original, and to communicate effectively.
Prerequisites
The student should have basic knowledge of the main tools and techniques used in AI, and an introductory knowledge of formal methods.
Teaching methods
Lectures. Discussion sessions. Seminars. Guided readings of research papers. Talks by invited experts.
Assessment
Multiple-choice written test. Sample questions will be discussed during the course. The test will include questions from all modules, and a single overall grade will be awarded.
Texts
Law and AI safety
Federico L.G. Faroldi, Lecture Notes, Ethics, Law and AI, Normative Risk Lab, 2024.
EU AI Act (Regulation (EU) 2024/1689): https://eur-lex.europa.eu/eli/reg/2024/1689/oj Chapters I, II, III (Sections 1, 2, 3), IV, V and Annexes II, III.
Additional papers may be shared via Google Drive.
Ethics of AI
Coeckelbergh, M. (2010). Moral appearances: emotions, robots, and human morality. Ethics Inf Technol 12, 235–241.
Gunkel, D.J. (2020). Mind the gap: responsible robotics and the problem of responsibility. Ethics Inf Technol 22, 307–320.
Johnson, D.G. (2006). Computer systems: Moral entities but not moral agents. Ethics Inf Technol 8, 195–204.
Pitt, J.C. (2014). “Guns don’t kill, people kill”; values in and/or around technologies. In: Kroes, P., Verbeek, P.P., eds. The Moral Status of Technical Artefacts. Dordrecht: Springer, pp. 89–101.
Redaelli, R. (2023). Different approaches to the moral status of AI: a comparative analysis of paradigmatic trends in Science and Technology Studies. Discov Artif Intell 3, 25.
Redaelli, R. (2024). Slides.
Santoni de Sio, F., Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philos Technol 34, 1057–1084.
Sullins, J.P. (2006). When is a robot a moral agent? Int Rev Inform Ethics 6, 23–30.
van de Poel, I. (2020). Embedding Values in Artificial Intelligence (AI) Systems. Minds & Machines 30, 385–409.
AI and Society
C. Larese, Lecture Notes (Ethics, Law and AI, 2024–2025).
C.B. Frey (2019). The Technology Trap: Capital, Labor, and Power in the Age of Automation. Princeton, NJ: Princeton University Press. Introduction (pp. 1–28) and Part IV (pp. 223–295).
Contents
The course aims to introduce and discuss some of the main current problems and approaches in the ethics and law of artificial intelligence. The Law and AI safety module addresses the problem of defining AI techniques in legal texts, actual and projected uses of AI in the civil and criminal domains, the EU AI Act, the control and alignment problems, normative uncertainty and normative risk, and the human-compatible approach. The Ethics of AI module is devoted to some of the main ethical issues raised by artificial intelligence, including the incorporation of biases into AI systems and the questions of the moral status and moral responsibility of AI. The AI and Society module deals with the economic and political impact of AI: in particular, it explores how technological change affects inequality in society, by transforming the demand for labour and skills and disrupting hiring practices, and analyses the challenges and opportunities that AI creates for democracy.
Language of instruction
ENGLISH
Other information
Inclusive teaching: individual and group tutoring, extra material (including videos) for selected topics.