
The English Literature essay question in the age of AI: Unlocking the black box of AI responses

While recent years have seen increasingly diverse modes of assessment in Higher Education English, the advent of generative AI has prompted fresh concern that the essay remains the default practice in the subject, and one that – whether for administrative, workload, or pedagogical reasons – cannot suddenly be reimagined even though it is now heavily exposed to academic misconduct. In this paper, we try to unlock the ‘black box’ of generative AI by exploring how ChatGPT responds to the different forms of wording commonly used in subject exam questions, taking as our case study a suite of questions used in Level 1 and Level 3 English Literature modules at a UK university. We know that, pre-AI, the precise wording of essay questions could significantly affect the learning outcomes, formal structures, and methods that students were expected to adopt in response. Drawing on computational and inductive analysis, we identify that slight changes in framing can elicit small but identifiably different outputs from generative AI. This allows us to recommend how questions might be set either (potentially) to expose AI use where it has been employed, or (idealistically) to encourage students to engage in some degree of independent writing and thinking that mitigates obvious AI limitations. It also informs a basic human-led heuristic by which the most egregious AI use may, albeit unreliably and provisionally, be detected at either an individual or a cohort level.
