Spotting generative AI in school: can ChatGPT help a teacher distinguish between authored and artificially generated text?
DOI: https://doi.org/10.51707/2618-0529-2024-30-07

Keywords: artificial intelligence, ChatGPT, academic integrity, AI policies

Abstract
The article addresses the incorporation of generative artificial intelligence models into science and education, examining their various applications in writing scientific papers, teaching, and learning, and considering the potential impact of generative AI on academic integrity. In the context of school education, students' widespread use of AI for writing texts of various kinds raises justified concerns about its detrimental impact on the quality of education and on the development of writing and communication skills. It is therefore important for teachers to be able to differentiate between texts written by students and those generated by AI. Although specialized text-authorship detection programs exist, one of the most common methods teachers use to distinguish authored from generated texts is verification with ChatGPT itself. The main part of the article is accordingly devoted to investigating the accuracy of ChatGPT 3.5 in identifying text authorship. It is shown that the model performs such tasks with low accuracy and is highly likely to produce both false positive and false negative results. In particular, texts that lack numerical data and references and are written in a formal style are highly likely to be attributed to AI regardless of their actual authorship. Moreover, the accuracy of the response depends strongly on how the prompt is formulated. ChatGPT 3.5 therefore cannot be recommended to educators as a primary tool for determining the authorship of students' texts. Instead, teachers should draw on their prior experience of interacting with the student and modify assignments to require the inclusion of references, numerical data, and similar elements. The article concludes with reflections on further developing strategies to prevent the inappropriate use of artificial intelligence in violation of academic integrity in schools.
Specifically, it is recommended to focus on cultivating a culture of responsibility for the authorship of one's work and to seek ways to integrate ChatGPT and other generative models into the educational process, rather than prohibiting their use.
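The abstract notes that the accuracy of ChatGPT's verdict depends strongly on how the verification prompt is formulated. As a minimal sketch of what such a teacher-side check looks like in practice, the snippet below composes an authorship-verification prompt and maps a free-text model reply to a coarse verdict. The function names, prompt wording, and parsing heuristic are illustrative assumptions, not the study's protocol; the article's own finding is that verdicts obtained this way are unreliable, with frequent false positives and false negatives.

```python
def build_authorship_prompt(text: str) -> str:
    """Compose a verification prompt for a chat model.
    The study found that the answer's accuracy depends heavily
    on exactly how this question is phrased."""
    return (
        "Was the following text written by a human student or generated "
        "by an AI language model? Answer 'human' or 'AI' and explain briefly.\n\n"
        f"Text:\n{text}"
    )

def parse_verdict(reply: str) -> str:
    """Map a free-text reply to a coarse verdict; ambiguous replies are
    labeled 'uncertain' rather than forced into a binary decision."""
    lowered = reply.lower()
    if "ai" in lowered.split() or "generated" in lowered:
        return "AI"
    if "human" in lowered:
        return "human"
    return "uncertain"
```

The prompt string returned by `build_authorship_prompt` would be sent to the model via whatever chat interface or API the teacher uses; only the prompt construction and reply parsing are shown here, since those are the steps whose formulation the article identifies as decisive.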
Copyright (c) 2024 Scientific notes of Junior Academy of Sciences of Ukraine

This work is licensed under a Creative Commons Attribution 4.0 International License.