Does the use of large language models (such as ChatGPT) entail a 'credit-blame asymmetry', as Mann and colleagues argue?
- Milena Knight
- Nov 5, 2023
- 4 min read
The advancement of large language models (LLMs) does entail a ‘credit-blame asymmetry’. As Mann and colleagues explain, the use of LLMs (e.g., ChatGPT) involves an asymmetry in which users are held to an elevated bar when claiming credit, while the standards for assigning blame to those users remain unchanged.[1] To clarify, the credit-blame asymmetry stipulates that the use of generative AI makes it harder for the user to earn credit for their outputs, but does not change the standards for assigning blame for any harms or errors. This asymmetry raises many ethical concerns around legitimacy and authenticity, as well as legal and moral liability, and it results in both negative and positive responsibility gaps. Negative responsibility gaps refer to situations where there is uncertainty about who will carry the blame for an LLM’s outputs. By contrast, positive responsibility gaps refer to the “analogous situations regarding the taking of credit.”[2] Additionally, the use of LLMs can result in under-credit. Under-credit occurs when the user of an LLM receives less credit for their AI-generated work, such as art and writing (‘positive outputs’), because they have used or copied the LLM’s output. Doing so requires less thought and effort than if the user had created the art or writing themselves, and so they are given less credit for the work they produced with the AI tool.
Overall, Mann and colleagues focus on the moral implications of using LLM-generated content, in particular the assignment of blame and credit to its users. They detail how this asymmetry can produce positive and negative responsibility gaps, and hence result in under-credit. Further, the credit-blame asymmetry arises from the lack of transparency and legal reform around LLMs. To narrow these responsibility gaps and lessen the asymmetry’s effects, Mann and colleagues call for greater expectations, regulations, and legislation around the use of generative AI.
While I agree with Mann and colleagues’ arguments, I would like to develop the assignment of blame further. I believe that greater blame should be placed on users and creators of LLMs if they used or created the AI with ill intent. Under this assignment of blame, creators and users of LLMs face less blame for the AI’s negative outputs (e.g., misinformation) if they caused them by accident. However, if these individuals used or created the AI with ill intentions, to bring about negative outputs, they in turn face a greater degree of blame. This is because it is worse to intend a poor outcome than to bring it about accidentally. Certainly, it is worse to intend harm than to cause harm accidentally: for instance, it is worse to kick another person intentionally than to kick them by accident. Another example would be a creator of an LLM intending for the AI to be racist and to instil (potentially their own) racial stereotypes; it is another thing entirely if these outcomes were brought about accidentally, owing to the LLM’s opaque operations and its use of worldly data. In this way, users and creators of LLMs are assigned blame accordingly and justly. Overall, I believe that establishing this criterion can act as an extension of Mann and colleagues’ argument(s) and lessen the negative responsibility gaps which arise from the credit-blame asymmetry.
In the creation of this paper, I felt a loss of pride in my work. The writing of this paper is not wholly my own, which is why I feel a loss of pride when submitting it for grading. If I had not used an LLM, I believe I would feel a greater sense of accomplishment in this paper. Additionally, I found that I have to be careful when using an LLM, for it can produce misinterpretations. Certainly, when using an LLM for this assignment, I found that the AI generated many misinterpretations of Mann and colleagues’ arguments. It even created its own argument of ‘over-blaming’, a concept that is not explored within this paper. Given this, LLMs are seemingly unreliable when summarising scholarly papers. However, while I dislike LLMs for the above reasons, I still enjoyed using an LLM when writing this paper, as it relieved me of stress. Having the reassurance that I could use AI as a tool to guide, edit, and shorten my writing made the writing experience less stressful and more enjoyable. Additionally, it was very efficient at creating my citations, which is normally a tedious task, providing me with greater leisure time.
Ultimately, the use of LLMs does entail a credit-blame asymmetry. However, the standard of blame it places on users and creators can be developed further, with more blame placed on individuals with ill intent. Moreover, while the use of LLMs can result in misinterpretations and under-crediting, it is nonetheless beneficial, as it can reduce stress and increase leisure time.
Words: 825
Appendix:
AI has been used to form and construct certain aspects of this paper. I have exclusively used the LLM developed by Bing. Aspects of this paper which have been produced by this LLM have been highlighted in yellow. Additionally, I utilised this LLM to form my footnotes and citation(s).
References:
Porsdam Mann, Sebastian, Brian D. Earp, Sven Nyholm, John Danaher, Nikolaj Møller, Hilary Bowman-Smart, Joshua Hatherley, et al. “Generative AI Entails a Credit–blame Asymmetry.” Nature Machine Intelligence 5, no. 5 (2023): 472–75. https://doi.org/10.1038/s42256-023-0047-8.
[1] Sebastian Porsdam Mann et al., “Generative AI Entails a Credit–blame Asymmetry,” Nature Machine Intelligence 5, no. 5 (2023): 472, https://doi.org/10.1038/s42256-023-0047-8.
[2] Ibid.