Why artificial intelligence is not an author

Keywords: botfo; generative AI; chatbots; authorship. Subject classification: AS1-945 (Academies and learned societies); Z (Bibliography. Library science. Information resources)
DOI: 10.3897/ese.2025.e142904 Publication Date: 2025-02-13T08:21:06Z
ABSTRACT
Generative AI/chatbots provide a valuable new writing tool, but they are just software products, and software does not have a legal persona. You cannot sue, arraign, fine, imprison or otherwise punish a chatbot. This is one reason why many journals, as well as COPE, ICMJE and WAME, among other practitioners’ organisations, advise against identifying AI as an author. Furthermore, chatbots produce a statistically generated language, or botfo, by applying probability to the materials they have scanned. It is a strangely dehumanised language, lacking intentionality and containing conscious and unconscious bias. Ultimately, this paper argues that we should not call chatbots authors, since they are unaccountable and cannot think, judge or be jailed.
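
ILLUSTRATIVE SKETCH
The abstract's claim that chatbot output is "statistically generated language" can be illustrated with a toy next-word model. This sketch is not taken from the paper; the corpus, function names and the simple bigram approach are illustrative assumptions only. It generates text purely by applying probability to previously scanned words, with no intention behind the output.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the "materials they have scanned".
corpus = "the chatbot writes text the chatbot writes prose the author writes text".split()

# Count which word follows which (a bigram model): the simplest form of
# applying probability to previously seen text.
following = defaultdict(Counter)
for current_word, next_word_seen in zip(corpus, corpus[1:]):
    following[current_word][next_word_seen] += 1

def next_word(word):
    """Pick the next word in proportion to how often it followed `word` in the corpus."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short run of statistically likely text.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
    if word not in following:  # dead end: no observed continuation
        break
print(" ".join(output))
```

Scaled up from bigrams over a dozen words to neural networks over billions of documents, the principle is the same: the output is chosen because it is probable, not because anyone meant to say it.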