UniPID collaboration

Empowering Education through Artificial Intelligence

Earlier this year, I had the privilege of conducting an online workshop on the use of artificial intelligence (AI). The event, organised by UniPID, was in line with the broader vision of Global Campus: to harness the power of AI and bring it to professionals in higher education.

I introduced the attendees to innovative tools like ChatGPT and Midjourney. ChatGPT, for instance, offers a conversational approach that can assist teachers in course design, from creating exercises and tasks to developing syllabi. On the other hand, Midjourney stands out with its unique ability to generate images that closely represent real-life objects, enabling teachers to bring their imaginative ideas to life in visual formats.

We delved into the potential of AI in creating personalised learning experiences, ensuring that students from diverse backgrounds can receive quality education tailored to their needs. Furthermore, we touched on the ethical implications of the use of AI in education. Global Campus emphasises the importance of responsible AI use, and it was enlightening to engage with educators and stakeholders on this critical topic.

This workshop was a reminder of the incredible impact we can achieve when we collaborate, share knowledge, and drive innovation.

In conclusion, my experience at the workshop was both enlightening and inspiring. We’re not just envisioning the future of education; we’re actively shaping it through workshops like this. I’m excited about the possibilities that lie ahead.

Read UniPID’s take on the workshop: Teachers’ workshop: the use of Artificial Intelligence in virtual education.

Laval Virtual

My Journey Through Laval Virtual: The Quest for Immersion in the World of Virtual Reality

To give you some context, you will find a brief summary about me at the end of this post.

As an enthusiast of the concept of virtual realities, I was eager to attend Laval Virtual, the premier event showcasing the latest advancements in virtual reality (VR), augmented reality (AR), and mixed reality (MR) – in short, XR – in what someone humbly called the capital of VR! I couldn’t wait to dive into the world of cutting-edge technology, engaging discussions, and artistic creations. Throughout the event, I found myself constantly questioning: “What truly defines immersive experiences in XR?”
This is my personal journey to and through Laval Virtual as I explored innovative brands, participated in thought-provoking discussions, and found inspiration in the arts, all in the quest for immersion. But before I let you read the text itself, I offer you the possibility to immerse yourself in a few 360° photos from various stages of my trip.

You can freely navigate with your mouse within the 360° image; there is even a hotspot you can click on to move to the next image.

Note: While writing this on Friday 14 April, Laval Virtual is still ongoing, but unfortunately I have to catch a train and then a plane to make it home today. Greetings from Paris Charles de Gaulle airport.

Discoveries

During my first day on Wednesday 12 April, I was thrilled to explore the innovative brands and technologies being showcased. As a newish VR enthusiast, I was particularly impressed by Movella’s Xsens, found the approach L.A.P.S. is taking interesting, and thought Olfy had taken the next logical step in bringing one more of our senses to the XR table – very refreshing. Seeing Xsens’s groundbreaking work in motion capture, live avatar performance solutions, and sensory integration live expanded my understanding of what’s possible in VR and AR experiences.
L.A.P.S.’s solutions, on the other hand, enable real-time facial expression tracking, which allows avatars to mirror the movements and emotions of the performers in real time. Finally, Olfy is a virtual reality system that simulates smells to create a more immersive experience. In their own words: “The sense of smell allows virtual reality experiences to be more engaging and immersive. Our goal is to enhance the emotions and effectiveness of virtual experiences by allowing you to experience them 100%” (Olfy).

Thought-Provoking Conversations

As I attended the various discussions and keynotes, the Immersive Digital Learning topic stood out as a highlight for me, particularly the engaging panel discussion featuring Anaïs Pierre, Bogdan Constantinescu, Jayesh Pillai, and Thierry Koscielniak. Anaïs passionately emphasised that technology serves as a tool, and we must prioritise learning goals before seeking the appropriate technological solution. The use of tech tools must be purposeful and meaningful.
This is exactly how I feel about the use of technology in general. Content and (in my case usually) learning goals first; otherwise your course or product will not reach its full potential.

I listened to Kent Bye’s fast-paced talk on the topic of XR moral dilemmas and ethical considerations. He discussed several crucial concerns that we should all be pondering, including the digital divide in access to XR technology and threats to privacy (e.g., biometric data, for which there apparently is no legislation yet), among similar issues.

Furthermore, I came across a new concept I hadn’t heard before: mental privacy, which is part of a proposed set of rights called neurorights. Mental privacy refers to the protection of an individual’s thoughts, emotions, and mental processes from unauthorised storage and access, particularly in the context of novel technologies that can potentially monitor or manipulate these aspects of the human experience. This really jump-started my brain, and I am still processing all the possible implications and reflecting on the ethical and practical considerations of using XR technologies in education and beyond.

During another panel discussion, I was introduced to the concept of eco-design, which involves reducing the energy footprint of devices such as VR headsets. It became clear that as VR technology evolves, we must be aware of and address the energy-intensive nature of these devices to create a more sustainable and ethical future. By incorporating eco-design principles, we can minimise the environmental impact of VR and ensure a more responsible approach to technology.

Finding Inspiration in the Arts

Now of course Recto VRso cannot go unmentioned in this blog post. Recto VRso is a component of Laval Virtual, which this year took place partially at Le Quarante, a cultural centre in Laval, and partially at L’Espace Mayenne. It showcases innovative uses of XR in the field of art and culture. Attendees can interact with some of the installations that push the boundaries of what is possible in XR art, providing a platform for networking and inspiration in the field – further fuelling my passion for the intersection of technology and creativity.

I’d like to single out one installation called “Memory House” by artist Jiahe Zhao.
In the artist’s own words, in my rough translation into English: “Numerous items hold cherished memories, ranging from beloved family toys to treasured travel souvenirs. ‘Memory House’ offers a solution for storing these memories in a virtual realm. By using 3D scanning technology, memory objects are brought into the virtual space, and through transformation by AI (artificial intelligence), they are manifested into a one-of-a-kind virtual edifice that is ideal for exploration.”

And yet Recto VRso seems to be an exception. Over and over I have noticed that XR-related applications are heavily concentrated in the engineering, training/onboarding, and skills-learning sectors; very little is to be found in the liberal arts world, sadly. Having an academic background in the field of liberal arts myself, I was excited to discover the creative applications of XR technology at Laval Virtual. I found inspiration in BavARt, an AR-specialised firm that combines art and technology in a Pokémon Go-style app.

Immersion?

As I left Laval Virtual, I couldn’t help but reflect on my initial question of what defines immersive experiences in VR. Throughout my journey, I discovered that immersion is not just about cutting-edge technology and realistic visuals. It also involves connecting with our emotions, bridging the gap between the digital and physical worlds, and finding inspiration in the creative fusion of art and technology. The quest for immersion is a never-ending journey, one that continuously ignites my enthusiasm for discovering the boundless opportunities that await in the world of virtual reality.

Final thoughts

This was the first conference I attended on my own. I knew no one at Laval, and because I am not really an extrovert, but an observer if you will, I struggle to connect with strangers. Most attendees were there in larger parties and would therefore communicate and entertain themselves among each other, which makes it hard for outsiders like myself to mingle. I am not complaining, just stating what I noticed.
Yet at the hotel I couldn’t help but get involved in socialising with the French during breakfast. As people arrived for breakfast, they greeted everybody and were greeted by those already there. People would ask each other where they were from, what they did, and so forth. There was active communication throughout breakfast. I loved it.

And finally, note that the venues for Laval Virtual are kilometres away from each other, and not just quelques pas (a few steps) as officials repeated a few times =) Nonetheless, I took this as an opportunity to walk, and boy did I do some walking: a total of 27 km over two and a half days between my hotel and the three venues.

Thank you, Laval Virtual, et à la prochaine (until next time)!

***

Background

I am fairly new to the XR world; in fact, it was only last October (2022) that I had a VR headset on for the first time, imagine that! Having studied languages, folkloristics, and other liberal arts subjects at the University of Helsinki (UH), I have noticed that I have a somewhat different approach to technology than most of my peers. I have been working in several positions at UH over the last almost two decades, including as an international exchange expert and EdTech specialist, before joining the Global Campus team. As a lifelong learner, I started studying university pedagogy last year at UH, believing it would give me a strong understanding of educational technology. Furthermore, I am the chair of Una Europa’s Educational design and technology cluster. For more coherent info on me, see the Us section.

AI-Powered Course Material Production

Introduction: The Global Campus of the University of Helsinki is committed to exploring innovative methods for enhancing educational experiences. As part of this ongoing mission, our recent “AI methods in course material production” presentation at the university’s Learning Adventure showcased the potential of cutting-edge AI technologies in creating engaging and dynamic course materials. While our primary audience was the university community, we believe these insights hold value for all EdTech enthusiasts.

In this blog post, we’ll share key takeaways from our presentation, which encompassed five sections: Text, Images, Audio, Video, and Q&A.

  1. Text: Harnessing ChatGPT’s Potential. Kicking off our presentation, we introduced ChatGPT, an AI language model developed by OpenAI. By delving into the concept of prompting, we unveiled various techniques, including Chain of Thought (CoT) methods. Highlighting the effectiveness of role prompting, we showcased ChatGPT’s self-criticism and self-evaluation features as a means to generate meaningful responses (a minimal code sketch follows after this list).
  2. Images: Visualising Ideas with Midjourney. Transitioning to text-to-image (T2I) generation, we presented Midjourney as an exemplary case. Demonstrating seamless integration between Discord and Midjourney, we revealed the process of creating images through prompting in Discord. For a deeper understanding of the Midjourney case, we invite you to read our earlier blog post here.
    It’s worth noting that in addition to Midjourney, there are several other AI-based applications that allow for the creation of images through text. One notable example is DALL-E, which uses a transformer-based neural network to generate images from textual descriptions. And let’s not forget about Stable Diffusion, a new AI-based technology that allows for the generation of high-quality, realistic images by leveraging the power of diffusion models. With so many exciting applications available, the possibilities for creating images through text are truly endless.
  3. Audio: Bringing Text to Life through AI. Our third segment explored the realm of text-to-audio conversion. We shed light on AI tools and techniques capable of generating lifelike speech from written text, making course materials more engaging and accessible to auditory learners (see the second sketch after this list).
  4. Video: Creating Dynamic Learning Experiences with AI. In the penultimate section, we investigated AI’s potential in video production. Discussing the role of artificial intelligence in crafting compelling and informative videos, we emphasised the importance of delivering course content in a visually engaging manner. In addition to Synthesia and Colossyan, there are several other noteworthy applications worth exploring. One such application is D-ID, a deep learning-based technology that allows for the anonymisation and replacement of faces in videos with natural- or fantastical-looking options using AI-generated imagery. With the increasing demand for video content in today’s digital landscape, these and other AI-based text-to-video applications offer opportunities for teachers and students to create high-quality videos quickly and easily.
  5. Q&A: Encouraging Audience Interaction. To wrap up our presentation, we engaged the audience in group discussions, addressing questions and concerns stemming from the event. This interactive session fostered a deeper understanding of AI’s role in education and promoted collaboration within our university community. Participants were interested in, for example, whether it was possible to produce materials in Finnish with these new tools – and yes, usually that is also possible.
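To make the role-prompting and self-evaluation ideas from section 1 a little more concrete, here is a minimal Python sketch of how a teacher might ask an OpenAI model to draft course exercises. This is our own illustration rather than something shown at the event, and the model name and prompt wording are assumptions you would adapt to your own course.

    # pip install openai  (and set the OPENAI_API_KEY environment variable)
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Role prompting: the system message assigns the model a persona;
    # the user message states the concrete course-design task and asks
    # the model to reason step by step and critique its own draft.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
        messages=[
            {"role": "system",
             "content": "You are an experienced university lecturer in pharmacy."},
            {"role": "user",
             "content": ("Draft three short exercises on sustainable medicine use "
                         "for an introductory course. Think step by step, then "
                         "review your own draft and point out one weakness in it.")},
        ],
    )

    print(response.choices[0].message.content)

The same pattern works in the ChatGPT web interface, too: assign a role first, state the task, and then ask the model to evaluate its own answer.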
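Text-to-audio (section 3) is just as easy to try at home. The second sketch below uses gTTS, a small open-source Python wrapper around Google Translate’s text-to-speech endpoint – one illustrative option among many, not necessarily a tool shown at the event. Conveniently, it also handles Finnish, which touches on one of the Q&A questions above.

    # pip install gTTS
    from gtts import gTTS

    # Generate spoken audio from course text; lang="fi" produces Finnish speech.
    text = "Tervetuloa kurssille!"  # "Welcome to the course!"
    tts = gTTS(text, lang="fi")
    tts.save("welcome.mp3")  # a playable audio file for the course page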

Conclusion: Embracing AI-powered tools like ChatGPT, Midjourney, and other text-to-audio and video production solutions is revolutionising the way we develop and deliver course materials. By adopting these innovations, we can create more engaging, accessible, and dynamic learning experiences for students across the globe.

AI is not taking away your job; it’s the person that adopts AI that is taking away your job!

ThingLink basics, tags

View of a laboratory in ThingLink.


Greetings, avid reader! Allow me to introduce you to the delightful world of ThingLink, where the only limit is your own imagination! If you’re looking to add a touch of finesse to your images and videos, look no further than the humble tag. These interactive buttons bring your multimedia content to life, and they’re the secret ingredient of ThingLink. In this blog post, I’ll give you a quick rundown of ThingLink in the form of a video.
BTW we used ThingLink for our very first project. You can read about it in our blog entry called Message from the future.

First of all, let’s define what ThingLink is. Simply put, ThingLink is an online platform that allows you to add interactive tags to your images and videos. These tags can include text, images, audio, and video, making your multimedia content much more engaging and interactive. Whether you’re a teacher, a student, or simply someone looking to add a touch of sophistication to your social media posts, ThingLink is the tool for you. Check out the following Miro board where I have put together a very simple yet effective sequence of slides to highlight what ThingLink is.

Now, why is this relevant to our context, which is higher education and EdTech – and, at the end of the day, the learners? Well, for one, it is a very intuitive tool, and it gives students control over their own learning journey. No more dull lectures or tedious presentations. With ThingLink, students can interact with the material in multiple ways, truly grasping and internalising the information. And let’s be honest, who doesn’t love a bit of control? According to Yarbrough (2019), “more visuals are not only learner preferred, training and educational professionals have identified that visuals support efficient teaching.”

A ThingLink can be considered a version of an infographic. There are ample studies supporting the claim that infographics are very powerful tools. What makes infographics, and by extension ThingLink too, so useful? Visuals tend to stick in long-term memory, they transmit messages faster, and they improve comprehension, to name a few benefits (Shiftlearning).

Here is a roughly 7-minute video walking you through all the tags and how to create them. In Edit mode, tags can be dragged around the base image – you can even pull a line from under a tag and anchor it to a specific point.

In the next tutorial blog post with video, we’ll have a look at the settings and dive into the immersive world of 360° images and videos in ThingLink.

In conclusion, ThingLink is the tool you didn’t know you needed. With its interactive tags and multimedia-rich approach, ThingLink empowers students to take charge of their studies and reach their full potential. So what are you waiting for? Give it a go and see the magic unfold!

BTW, when storyboarding the video I had a clear vision of how to implement text-to-speech (TTS) with an AI voice – little did I know how NOT easy this was. Stay tuned, as at some point I will write a how-to post about the process of producing the above video.

Source:

Yarbrough, J. R. (2019). Infographics: In support of online visual learning. Academy of Educational Leadership Journal, 23(2), 1–15.

Shiftlearning. Blog post: Studies confirm the Power of Visuals in eLearning.


Harnessing AI – the Midjourney case

AI generated landscape

Let’s discuss AI generated imagery for a second.

What is it?

As an example I use Midjourney. Generating images using artificial intelligence (AI) tools such as Midjourney on Discord has the potential to revolutionise the field of visual content creation. Midjourney, a proprietary platform accessed through Discord, utilises machine learning models to generate images based on user input. An earlier family of techniques uses so-called Convolutional Neural Networks (CNNs) to create artistic imagery by separating and recombining image content and style; this process of rendering a content image in different styles is referred to as Neural Style Transfer (NST). Today’s text-to-image generators, Midjourney among them, are generally understood to rely on diffusion models instead. This technology has many practical applications, such as in graphic design, digital art, and scientific research. However, the ethical implications of AI-generated images must also be considered – more on this in a bit.

When using Midjourney on Discord, users can input a variety of parameters to generate images. These can include text, numbers, or even other images. The algorithm then processes this input and creates a unique image based on the parameters provided. This allows for a high degree of customisation and creativity when generating images. Additionally, Midjourney allows the user to generate new versions of those images, thus enabling a set of variations of the base image.

Here is a short video on how to use Midjourney via Discord.

One of the key benefits of using Midjourney on Discord is the community aspect of the tool. Users can share their input and generated images with others in real time, allowing for a collaborative and interactive experience. This is particularly useful for designers and artists working on a project together, as it allows them to quickly and easily exchange ideas and feedback. Additionally, the Discord integration makes sharing the final output with others straightforward.

Are there any issues?

One major advantage of using AI to generate images is its ability to produce a high volume of unique and varied content. This is particularly beneficial in fields such as advertising and graphic design where a steady stream of fresh and engaging visuals is essential. Furthermore, the use of machine learning algorithms in image generation allows for a high degree of customisation and creativity in the final output.

However, there are also valid concerns regarding the ethical implications of AI-generated images. One of the main concerns is the potential for AI-generated images to perpetuate harmful stereotypes and biases. For example, if an AI model is trained on a dataset that contains a disproportionate number of images of a certain race or gender, it may produce images that reinforce these stereotypes. Additionally, the use of AI-generated images in fields such as journalism and news reporting raises concerns about the authenticity and accuracy of the content.

A good example of the consequences that training on a specific dataset can have came up in a recent class action lawsuit filed by a group of artists in federal court in San Francisco, USA – the case is still ongoing. Apparently,

“text image generators were trained off of a data set called LAION, and they basically are billions of images that help to train the generators. And where artists take issue with it is that our images were put into these data sets and then used to create the generators without our consent.”

Source: The New York Times podcast Hard Fork, 20 January 2023.

It is important to note that these concerns are not unique to AI-generated images, but rather are issues that have long been present in the field of visual content creation. However, the use of AI does amplify these concerns, and it is crucial that proper measures are taken to mitigate these risks. This can include using diverse and representative training datasets (with consent?), implementing robust ethical guidelines, and providing transparency about the source and authenticity of AI-generated images. In conclusion, the use of AI to generate images has the potential to greatly benefit various fields if implemented correctly.

Overall, Midjourney is a powerful tool for generating images on Discord. Its ability to process input from users and generate unique images, along with its editing tools and collaborative features, make it a valuable tool for a wide range of fields. Whether you’re a designer, artist, or researcher, Midjourney can help you create stunning visual content quickly and easily.

Prompts

Midjourney uses prompts to tell the model what the image is supposed to look like. A prompt always starts with a forward slash and IMAGINE (/IMAGINE), followed by your descriptive text. For example, I used the following prompt line for the owl in the right-hand-side column:

[/IMAGINE logo, funky, scifi, bioluminescence, owl on transparent background]

which resulted in this four-image grid (below). I then chose Upscale #1 and Variation #2 and ended up with a 1024×1024 px image of the owl I wanted.
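Midjourney itself is only reachable through Discord and offers no public API, but if you would like to experiment with text-to-image generation programmatically, the open-source Stable Diffusion model can be driven from Python with Hugging Face’s diffusers library. Here is a minimal sketch, assuming a machine with a CUDA-capable GPU and the diffusers, transformers, and torch packages installed; the owl prompt is reused from above:

    # pip install diffusers transformers accelerate torch
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a publicly available Stable Diffusion checkpoint
    # (the model id may change; use any checkpoint you have access to).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The same descriptive text as in the Midjourney prompt above,
    # minus the /IMAGINE command, which is Discord-specific.
    prompt = "logo, funky, scifi, bioluminescence, owl on transparent background"
    image = pipe(prompt).images[0]
    image.save("owl.png")  # 512x512 px by default for this checkpoint

The results will differ from Midjourney’s house style, but the prompt-to-image workflow is the same idea.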

For further reading on how to perfect your prompts to get a result you are happy with, I suggest you head over to Midjourney’s Documentation page or check out PromptHero – and while you are at it, have a look at the Midjourney Community Showcase.

AI generated owl

Message from the future

Person looking through a stand-alone window in a dystopian landscape.

Sustainable health course

This is about Global Campus’s first project, the Sustainable health course by the Faculty of Pharmacy of the University of Helsinki.

Conceived and orchestrated by Ilkka Miettinen, PhD (Pharm.), the course is based on the 17 Sustainable Development Goals as formulated by the United Nations. Sasa Tkalcan, on behalf of Global Campus, took the lead in crafting a lightly gamified storyline, and after a few tests using drone footage, 360° images, and a VR headset, he came up with a few mockups, which served as a basis for creating a visually striking concept for the course.

Ilkka and Sasa formed a dynamic working partnership as they collaborated on the project. Sasa, having previously worked extensively with ThingLink, was well versed in the tool’s capabilities and thus selected it as the platform for the VR components of the course. The core concept was to build a VR environment – which would nonetheless also function on a plain screen – for an introduction in which the learner encounters a hologram and is presented with an assignment. Upon successful completion of the course, the learner is reunited with the hologram in a modified environment, where they are presented with a certificate.

Hologram of a person

The project involved filming a professor in a studio to be transformed into the hologram delivering the assignment. However, in the interest of preserving the element of surprise for prospective learners, details shall remain undisclosed. The project was completed on schedule and the course was made available online on the 10th of January, 2023.