How to deal with risks of AI suffering
Resource type
Journal Article
Author/contributor
- Dung, Leonard (Author)
Title
How to deal with risks of AI suffering
Abstract
We might create artificial systems which can suffer. Since AI suffering could be astronomical in scale, the moral stakes are huge. Thus, we need an approach which tells us what to do about the risk of AI suffering. I argue that such an approach should ideally satisfy four desiderata: beneficence, action-guidance, feasibility and consistency with our epistemic situation. Scientific approaches to AI suffering risk hold that we can improve our scientific understanding of AI, and AI suffering in particular, to decrease AI suffering risks. However, such approaches tend to conflict with either the desideratum of consistency with our epistemic situation or with feasibility. Thus, we also need an explicitly ethical approach to AI suffering risk. Such an approach tells us what to do in the light of profound scientific uncertainty about AI suffering. After discussing multiple views, I express support for a hybrid approach. This approach is partly based on the maximization of expected value and partly on a deliberative approach to decision-making.
Publication
Inquiry
Volume
0
Issue
0
Pages
1-29
ISSN
0020-174X
Accessed
3/18/25, 6:55 PM
Library Catalog
Taylor and Francis+NEJM
Extra
Publisher: Routledge
_eprint: https://doi.org/10.1080/0020174X.2023.2238287
Citation
Dung, L. (2023). How to deal with risks of AI suffering. Inquiry, 0(0), 1–29. https://doi.org/10.1080/0020174X.2023.2238287
Link to this record