AI-Powered Course Material Production

Introduction: The Global Campus of the University of Helsinki is committed to exploring innovative methods for enhancing educational experiences. As part of this ongoing mission, our recent “AI methods in course material production” presentation at the university’s Learning Adventure showcased the potential of cutting-edge AI technologies in creating engaging and dynamic course materials. While our primary audience was the university community, we believe these insights hold value for all EdTech enthusiasts.

In this blog post, we’ll share key takeaways from our presentation, which encompassed five sections: Text, Images, Audio, Video, and Q&A.

  1. Text: Harnessing ChatGPT’s Potential. Kicking off our presentation, we introduced ChatGPT, an AI language model developed by OpenAI. By delving into the concept of prompting, we unveiled various techniques, including Chain of Thought (CoT) methods. Highlighting the effectiveness of role prompting, we showcased ChatGPT’s self-criticism and self-evaluation features as a means to generate meaningful responses.
  2. Images: Visualising Ideas with Midjourney. Transitioning to text-to-image (T2I) generation, we presented Midjourney as an exemplary case. Demonstrating seamless integration between Discord and Midjourney, we revealed the process of creating images through prompting in Discord. For a deeper understanding of the Midjourney case, we invite you to read our earlier blog post here.
    It’s worth noting that in addition to Midjourney, there are several other AI-based applications that allow for the creation of images from text. One notable example is DALL-E, which uses a transformer-based neural network to generate images from textual descriptions. And let’s not forget Stable Diffusion, a newer AI-based technology that generates high-quality, realistic images by leveraging the power of diffusion models. With so many exciting applications available, the possibilities for creating images through text are truly endless.
  3. Audio: Bringing Text to Life through AI. Our third segment explored the realm of text-to-audio conversion. We shed light on AI tools and techniques capable of generating lifelike speech from written text, making course materials more engaging and accessible to auditory learners.
  4. Video: Creating Dynamic Learning Experiences with AI. In the penultimate section, we investigated AI’s potential in video production. Discussing the role of artificial intelligence in crafting compelling and informative videos, we emphasised the importance of delivering course content in a visually engaging manner. In addition to Synthesia and Colossyan, there are several other noteworthy applications worth exploring. One such application is D-ID, a deep learning-based technology that can anonymise faces in videos or replace them with natural or fantastical-looking alternatives using AI-generated imagery. With the increasing demand for video content in today’s digital landscape, these and other AI-based text-to-video applications offer opportunities for teachers and students to create high-quality videos quickly and easily.
  5. Q&A: Encouraging Audience Interaction. To wrap up our presentation, we engaged the audience in group discussions, addressing questions and concerns stemming from the event. This interactive session fostered a deeper understanding of AI’s role in education and promoted collaboration within our university community. Participants asked, for example, whether it is possible to produce materials in Finnish with these new tools; usually, the answer is yes.
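The prompting techniques from the Text section above, role prompting combined with a simple chain-of-thought nudge, can be sketched in a few lines of code. The following is a minimal, hypothetical Python illustration of how one might assemble such prompts for a chat-style model like ChatGPT; the helper function, its name, and the example prompt wording are our own illustration, not part of any specific API.

```python
# Sketch of role prompting combined with zero-shot chain-of-thought (CoT)
# prompting for a chat-style language model. The function and prompt
# wording are illustrative only.

def build_prompt(role_description, question, use_cot=True):
    """Assemble a chat message list: a system message sets the persona
    (role prompting), and an optional suffix asks the model to reason
    step by step (a simple zero-shot CoT technique)."""
    user_content = question
    if use_cot:
        user_content += "\n\nLet's think step by step."
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_content},
    ]

messages = build_prompt(
    role_description=(
        "You are an experienced university lecturer who writes clear, "
        "concise course material."
    ),
    question="Explain photosynthesis for a first-year biology course.",
)

for message in messages:
    print(f"{message['role']}: {message['content']}")
```

The resulting message list would then be sent to whichever chat model you use; the point is simply that the role lives in the system message while the step-by-step instruction is appended to the learner-facing question.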

Conclusion: Embracing AI-powered tools like ChatGPT, Midjourney, and other text-to-audio and video production solutions is revolutionising the way we develop and deliver course materials. By adopting these innovations, we can create more engaging, accessible, and dynamic learning experiences for students across the globe.

AI is not taking away your job; the person who adopts AI is!

ThingLink basics, tags

View of a laboratory in ThingLink.


Greetings, avid reader! Allow me to introduce you to the delightful world of ThingLink, where the only limit is your own imagination! If you’re looking to add a touch of finesse to your images and videos, look no further than the humble tag. These interactive buttons bring your multimedia content to life, and they’re the secret ingredient of ThingLink. In this blog post, I’ll give you a quick rundown of ThingLink in the form of a video.
BTW we used ThingLink for our very first project. You can read about it in our blog entry called Message from the future.

First of all, let’s define what ThingLink is. Simply put, ThingLink is an online platform that allows you to add interactive tags to your images and videos. These tags can include text, images, audio, and video, making your multimedia content much more engaging and interactive. Whether you’re a teacher, a student, or simply someone looking to add a touch of sophistication to your social media posts, ThingLink is the tool for you. Check out the following Miro board, where I have put together a very simple yet effective sequence of slides to highlight what ThingLink is.

Now, why is this relevant to our context of higher education, EdTech and, at the end of the day, the learners? Well, for one, it is a very intuitive tool, and it gives students control over their own learning journey. No more dull lectures or tedious presentations. With ThingLink, students can interact with the material in multiple ways, truly grasping and internalising the information. And let’s be honest, who doesn’t love a bit of control? According to Yarbrough (2019), “more visuals are not only learner preferred, training and educational professionals have identified that visuals support efficient teaching.”

A ThingLink can be considered a version of an infographic. There are ample studies supporting the claim that infographics are very powerful tools. What makes infographics, and by extension ThingLink, so useful? Visuals tend to stick in long-term memory, transmit messages faster, and improve comprehension, to name a few benefits (Shiftlearning).

Here is a roughly 7 min video walking you through all the tags and how to create them. In Edit mode, tags can be dragged around the base image – you can even pull a line from under a tag and anchor it to a specific point.

In the next tutorial blog post with video, we’ll have a look at the settings and dive into the immersive world of 360° images and videos in ThingLink.

In conclusion, ThingLink is the tool you didn’t know you needed. With its interactive tags and multimedia-rich approach, ThingLink empowers students to take charge of their studies and reach their full potential. So what are you waiting for? Give it a go and see the magic unfold!

BTW when storyboarding the video I had a clear vision of how to implement text-to-speech (TTS) with an AI voice – little did I know how NOT easy this would be! Stay tuned, as at some point I will write a how-to post about the process of producing the above video.


Yarbrough, J. R. (2019). Infographics: In support of online visual learning. Academy of Educational Leadership Journal, 23(2), 1–15.

Shiftlearning. Studies Confirm the Power of Visuals in eLearning [blog post].