When an intriguing call for papers appeared exploring AI co-creation, I felt compelled to test the boundaries despite having few academic publishing credentials. The concept resonated instantly, though self-doubt crept in as I studied the full details. Could conversational technology collaborate on speculative scholarly work? Curiosity won out over uncertainty's paralysis. If nothing else, illuminating the ethical application of these tools seemed worth investigating.
It was a Special Issue Call by the Irish Journal of Technology Enhanced Learning (IJTEL) titled: The Games People Play: Exploring Technology Enhanced Learning Scholarship & Generative Artificial Intelligence.
I chose Claude, an AI assistant from Anthropic, and entered an intensive weekend of iteration. There were three options to choose from: 1. Position Paper, 2. Short Report or 3. Book Review. I went with the book review. I fed Claude the 1884 novel Flatland: A Romance of Many Dimensions by Edwin Abbott. Claude rapidly generated an abstract and a book review excerpt about Flatland's dimensional metaphors. However, these hurried passes produced plain explanation without the critical analysis needed to create cohesion. Through clear prompting, I pushed Claude to incorporate additional theories, doubling the length of certain passages. It relied completely on my explicit redirects to shape fragments into a cogent framing. After ten iterations I felt confident we had a useful book review.
Our accepted article examined generative AI's promise and pitfalls, affirming Claude's usefulness in accelerating drafting under firm direction. But expecting it to truly comprehend nuance and context without significant human oversight appears premature. Still, well-defined augmentation roles offer a productivity upside over total autonomy today. In other words, the current sweet spot for AI writing tools lies in utilising their ability to rapidly generate content under a researcher's close direction and oversight, rather than granting them high levels of autonomy to complete complex tasks alone from start to finish.
More pressingly, this collaboration underscored the ethical questions arising as generative models gain sophistication. If AI one day meaningfully impacts literature reviews, translation work or even initial thesis drafting, how can scholars utilise those productivity benefits responsibly? Tools that excel at prose introduce complex challenges around attribution and usage monitoring that threaten academic integrity.
Rather than reactively restricting the technology based on its risks, proactive pedagogical experiments can illuminate sensible guardrails for integration. Insights from transparent experiments that clarify current versus aspirational capabilities will inform ethical development ahead.
Forward-thinking educators can guide this age of invention toward positive ends by spearheading ethical explorations today. Our thoughtful efforts now, probing human-AI collaboration’s realities versus ambitions, construct vital foundations upholding academic integrity as new tools progress from speculative potential to educational reality.
We have the power to shape what comes next by asking tough questions in times of uncertainty. As educators we shoulder the responsibility to model how inquiry protects core values even amidst rapid change. And through ethical leadership, we just might uncover new sustainable and inclusive ways to progress.
Want further reading on this topic?
Ethics of Artificial Intelligence – UNESCO
Ethics guidelines for trustworthy AI – by the EU
Ethics of AI – a MOOC by the University of Helsinki
If you are interested in reading a more personal account of this project, you can do so here.