The concept of AI-generated texts has been the subject of ongoing debate in academic circles. Some argue that these systems enable cheating by allowing students to quickly generate essays on any topic, while others, often the creators of these AI systems themselves, suggest that they help democratize education by assisting students of varying abilities.
However, setting aside the issue of cheating, it is essential to consider the broader implications of these systems for educational practice. Software that can produce academic writing forces us to confront a central question: what makes academic writing unique, and which of its elements are truly essential?
These questions, however, are difficult to answer. The creativity and originality of computers are highly subjective metrics, dependent on the perceptions of whoever receives the work. The phenomenon in which people view the behavior or output of a computer as creative is called the “Lovelace Effect”, named after the computer pioneer Ada Lovelace.
Research has shown that the Lovelace Effect is influenced by a combination of cultural ideas of creativity, the actual functionality of the software or hardware, and the context of its presentation. Social conditions, such as historical and geographical backgrounds, influence people’s reactions to computer creativity.
Applied to the discussion of computer-generated essays, this implies that as essay-writing systems advance and evolve, so will our standards for what constitutes an excellent academic essay. The progression of these systems means that we must regularly reassess our criteria for academic writing and ensure that educators do not fall prey to the Lovelace Effect.
While the discussion about essay generation has introduced many to AI authorship, these systems have been used in journalism for quite some time. Commonly automated categories of journalistic writing were sports results and financial reports. Because these texts were based on facts (e.g., football scores or stock prices), the difference between a journalist and a computer was not so apparent. Compare this with editorial writing, and the result would have been vastly different.
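Much of this early news automation amounted to template filling: structured facts slotted into prewritten sentence patterns. A minimal sketch of the idea (the team names, scores, and templates here are invented for illustration, not drawn from any actual newsroom system) might look like this:

```python
# Sketch of template-based sports reporting with hypothetical data.
# Structured facts (teams, goals) are slotted into prewritten sentence
# templates; no language model is involved.

def report_match(home: str, away: str, home_goals: int, away_goals: int) -> str:
    """Turn a structured match result into a one-sentence news item."""
    if home_goals > away_goals:
        template = "{home} beat {away} {hg}-{ag}."
    elif home_goals < away_goals:
        template = "{home} lost to {away} {hg}-{ag}."
    else:
        template = "{home} and {away} drew {hg}-{ag}."
    return template.format(home=home, away=away, hg=home_goals, ag=away_goals)

print(report_match("Ajax", "Feyenoord", 2, 1))  # Ajax beat Feyenoord 2-1.
```

Because the output is fully determined by the input facts, a reader has little basis for telling the machine's sentence apart from a reporter's, which is precisely why these fact-driven genres were automated first.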
Tools such as GPT-3 require us to reconsider the skills that students need to demonstrate in their assignments. For instance, academic writing that is highly structured may be more easily replicated by a computer. However, these systems still struggle to maintain a consistent train of thought over extended passages, which could make them less effective in more “free-form” disciplines such as English literature or history. Examining how academic writing can be creative and original can help us understand when and where automation is viable.
Automating academic writing should encourage us to rethink our writing goals and how we teach students to write. Further reflection on this topic may also help train our eyes and minds to distinguish between writing produced by humans and writing generated by computers. Automated writing will affect various fields, including academia, journalism, and marketing, if it has not already. Instead of fighting this, we can learn to evolve with the technology and study how to write distinctively, helping students and educators better navigate the future.
How do you think AI will affect academic writing? Will it be beneficial or detrimental? Share with us your thoughts in the comments below!
This article is published in collaboration with Social Science Space, an online social network that brings social scientists together to explore, share, and shape the significant issues in social science – from funding to impact. To read the full original text, click here.