Investigating marker accuracy in differentiating between university scripts written by students and those produced using ChatGPT
DOI:
10.37074/jalt.2023.6.2.13
Publication Date:
2023-07-24T23:26:52Z
AUTHORS (6)
ABSTRACT
The introduction of OpenAI's ChatGPT has widely been considered a turning point for assessment in higher education. Whilst we find ourselves on the precipice of a profoundly disruptive technology, generative artificial intelligence (AI) is here to stay. At present, institutions around the world are considering how best to respond to such new and emerging tools, with responses ranging from outright bans to re-evaluating assessment strategies. To evaluate the extent of the problem that these tools pose for the marking of assessments, this study was designed to investigate marker accuracy in differentiating between scripts prepared by students and those produced using AI. A survey containing undergraduate reflective writing pieces and postgraduate extended essays was administered to markers at a medical school in Wales, UK. Markers were asked to assess style and content, and to indicate whether they believed each script to have been written by a student or by ChatGPT. Of the 34 markers recruited, only 23% and 19% were able to correctly identify the undergraduate and postgraduate scripts, respectively. A significant effect of suspected script authorship was found, X²(4, n=34) = 10.41, p<0.05, suggesting that written content holds clues as to how markers assign authorship. We recommend that consideration be given to how AI can be responsibly integrated into assessment strategies, and to expanding our definition of what constitutes academic misconduct in light of this technology.
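The abstract reports a chi-square result, X²(4, n=34) = 10.41, p<0.05, but not the underlying counts. As a minimal sketch of how such a statistic is computed, the snippet below runs a chi-square goodness-of-fit calculation over five suspected-authorship categories; the observed counts are hypothetical, chosen only so that df = 4 and n = 34 match the reported test, not to reproduce the study's data.

```python
# Hypothetical counts of 34 markers across five suspected-authorship
# categories (e.g. "definitely student" ... "definitely ChatGPT").
# These are ILLUSTRATIVE values, not the paper's actual data.
observed = [14, 8, 6, 4, 2]
n = sum(observed)                      # 34 markers
expected = n / len(observed)           # uniform expectation: 6.8 per category

# Pearson's chi-square statistic: sum of (O - E)^2 / E over all categories.
# Degrees of freedom = number of categories - 1 = 4, matching the abstract.
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Critical value for df = 4 at alpha = 0.05, from a standard chi-square table.
CRITICAL_05_DF4 = 9.488

print(f"chi2 = {chi2:.2f}")
print("significant at p < 0.05:", chi2 > CRITICAL_05_DF4)
```

With real data one would typically use `scipy.stats.chisquare` (or `chi2_contingency` for a two-way table) rather than computing the statistic by hand; the manual version is shown here only to make the formula explicit.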