Text to Image – a look at Midjourney’s parameters

The field of GenAI technologies is rapidly evolving, regularly introducing exciting new applications, features and updates. In this blog post, I’ll explore a particular text-to-image (T2I) tool that has seen some cool updates in the past couple of months.

So we are returning to Midjourney to see what the new features are, but also to give you an overview of the most important parameters – the ones that make all the difference when prompting for something more specific. You might want to read up on the basics in my previous blog post, Harnessing AI – the Midjourney case.

I’ll be using the latest version available, which is currently Midjourney version 6 Alpha (April 5, 2024). My preferred method of prompting is through the web browser, which is now possible in version 6 for users who have generated more than 1000 images in Midjourney (MJ). The Discord interface is too overwhelming with all its servers and emojis, and you need to remember the names of the parameters, which in my case led to a lot of typos. Now, with the clean browser interface, you have the prompt line at the top of the page and the parameters as buttons and sliders – very easy to use.

All prompting that can be done through the web browser can be done through the Discord interface too – just remember to start your prompt with /imagine. When using parameters, you indicate them with two dashes in front of the parameter name. For instance, if you want to instruct MJ to create an image with an aspect ratio of 5:7, meaning the image is 5 units wide and 7 units high, you write: --ar 5:7. In the browser you just move the slider to the desired ratio, though if you prefer typing dashes and parameters, you can still do that in the web browser too.
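To illustrate, a full Discord command combining these could look like the following (the scene description is just an example of my own, not from MJ’s documentation):

/imagine a cosy cabin in a snowy forest --ar 5:7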

Having a common baseline is important, so my prompting will use the default settings, which you can see in the second image with the parameter options open. There is one exception: under Model, the Mode should be Standard, not Raw – but we’ll come back to this. If I use Raw, I will let you know in the prompt.

MJ has quite a few parameters and I won’t discuss them all here, just the ones I think need explaining and are likely the most impactful. You can familiarise yourself with the rest at MJ’s own Parameter list. Here are the ones I will cover, some in more depth, some less:

  • No
  • Tile
  • Character Reference
  • Style Reference
  • Stylize

Raw vs. Standard

Let’s go back to the Raw vs. Standard Mode question. Raw can be considered another parameter, expressed as --style raw. In MJ’s own words, it is used:

to reduce the Midjourney default aesthetic. 

If you compare the two sets of images above, you will notice the difference, although both sets have the exact same prompt: a woman sitting in a café, frontal view. The first four use the Standard Mode and the last four Raw. For me, Raw is the equivalent of photo-realistic; in other words, if you aim for life-like, photo-realistic images (of people), you will most likely succeed using Raw rather than the MJ default aesthetic, Standard. It’s not perfect, mind you – always check for mistakes like extra fingers, limbs, etc.
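If you want to try the comparison yourself, the two variants look roughly like this in Discord – Standard is the default, so only Raw needs the parameter:

/imagine a woman sitting in a café, frontal view

/imagine a woman sitting in a café, frontal view --style raw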

No

Ever wanted to create an image but couldn’t get rid of an element? The parameter --no works quite well in cases where you have something specific you want to exclude. Just add the parameter – and in this case you need to add it manually even in the browser version, since this one, like some others, is not available as a button. The prompt for the following images is: a delicious hamburger, lush, vibrant colours, french fries --no cheese --style raw.
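In Discord, the full command for this would be:

/imagine a delicious hamburger, lush, vibrant colours, french fries --no cheese --style raw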

Tile

This parameter is more of an artistic one. It helps you to create a never-ending pattern.
The image may stand out on its own, yet its full potential unfolds when it is used repeatedly to craft a larger composition, integrated onto an object’s surface, or utilised as a desktop background. To get a seamless pattern you need a tool such as Photoshop, where you can create a pattern from the image and apply it to whatever you want.
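As a sketch of the syntax, a tileable pattern prompt could look like this (the snowflake description is my own invented example):

/imagine ornamental snowflake pattern, blue and white --tile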

Character Reference

Now we are getting to the newer and more interesting stuff. You may or may not have noticed that character reference (--cref + image URL) is not on MJ’s list of parameters. In short, --cref allows you to use a reference image and to tell Midjourney you want, for instance, this same person with different clothing and maybe in a different environment. To achieve better results it pays off to define the character more thoroughly; a mere a woman sitting in a café, frontal view will not necessarily yield good results. To highlight this, I first used as the reference image the same woman from the café series above – it’s the second image from the left. After this you will see my second test run with a more elaborate prompt.


And here is my second --cref run with a more elaborate prompt: a 30 years Caucasian woman with black hair standing in a street corner. The more you define the character, the less room you give MJ to come up with variations; for instance, having Caucasian in the prompt eliminates characters from other ethnic backgrounds, and the same goes for the hair colour, etc. As you can see, it’s not nearly perfect, but it is something you can now work with. Try a couple of Reruns and pick the best-matching image from the iterations.
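Put together as a single Discord command – with a placeholder URL standing in for the link to your actual reference image – the prompt would look roughly like this:

/imagine a 30 years Caucasian woman with black hair standing in a street corner --cref https://example.com/reference-woman.png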

Even with a specific prompt, it’s good to understand that prompting, re-prompting and doing variations are essential to achieving good results – especially when you prompt for something particular. Some expectation management is in order here, I believe. It’s illusory to expect all four images to depict the very same character as the one referenced – it’s actually impossible, if you think about it: no such person exists, after all!

Style Reference

Similar to character reference, style reference (--sref + image URL) allows you to use a reference image to tell Midjourney you want to transfer the style of the reference image to the new image. Now, style here includes things like the overall feeling, colour scheme and so on, but not necessarily the artistic style itself and certainly NOT the objects or subjects.
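The syntax mirrors --cref. As a sketch, with a placeholder URL standing in for your actual reference image:

/imagine a delicious hamburger, lush, vibrant colours, french fries --sref https://example.com/snowflake-tile.png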

Style as a term is problematic in this particular case, as there is the possibility of confusing two different usages of style when working with MJ. One is prompting for a specific art style, as in the very first image in this blog post: Caravaggio’s painting depicting the cutest white mouse ever, eating cheese on a kitchen table, soft light coming in from a window. You could also prompt: The cutest white mouse ever, eating cheese on a kitchen table, soft light coming in from a window in the style of Caravaggio’s painting. In my (granted, not so extensive) testing I have not been able to transfer Caravaggio’s style as such to a new image using --sref. Instead, the new image would receive the overall feeling and colours of the reference image. I say this – again – to curb expectations. Furthermore, --sref doesn’t work too convincingly when referencing from Standard mode to Raw mode or vice versa.

My reference image is the snowflake from the Tile section above, and I applied it to the second case (the woman in her thirties) in the character reference section. As you can see, some of the colours and ornamental elements are clearly depicted in the new image.

The second image I applied the reference to is the burger. Here too you can see the influence of the reference image, albeit in a creative way – for instance in the last one 🫐.

Stylize

And finally, we come to Stylize (--stylize or just --s). In the browser version you have it as a slider called Stylization under Aesthetics. According to MJ, “Low stylization values produce images that closely match the prompt but are less artistic” (https://docs.midjourney.com/docs/en/stylize-1). My interpretation of this is that the lower the value, the “more raw” the image becomes, and the higher the value, the more freedom MJ is given to apply an artistic touch to the image. Compare the following two series.
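As a sketch of the syntax – the values here are illustrative, so check MJ’s Stylize documentation for the exact range and default in your version:

/imagine a woman sitting in a café, frontal view --s 50

/imagine a woman sitting in a café, frontal view --s 750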

How do you interpret the clear difference between these two series? I think it is a significant one. The ones with low stylize values look more ordinary, like you and me, as opposed to the ones with a high stylize value, which could easily be models used in commercials, TV ads or on a runway. It seems that our society’s current beauty standards would consider the stylised ones more beautiful.

Before I let you go, I want to make one final point. It’s crucial to reflect on an important issue within GenAI: bias. Midjourney has made notable progress in recognising and addressing diversity, including but not limited to gender, ethnicity and age. However, true inclusivity extends far beyond these categories. It encompasses understanding and actively working against biases related to generational differences, sexual orientations, religious and spiritual beliefs, disabilities, and socioeconomic backgrounds. When utilising any image creation tool, it’s important to apply the same level of critical scrutiny and questioning to the images and texts we get as we use with text-to-text GenAI systems (ChatGPT, Claude etc.).

Thank you for reading, happy prompting!

Hyytiälä forest station – workshop retreat

Global campus at Hyytiälä forest station

A quick update on the Global campus team.

We visited our university’s Hyytiälä Forest Station for several intensive workshop sessions on March 18 and 19. We concentrated our efforts on fine-tuning Serendip‘s first episode, called Boreal Forest. New to Serendip, our immersive virtual adventure? Make sure to view the trailer and read about it on the site. In addition to our work, we also dedicated time to team-building and bonding activities.

Unfortunately, due to the image aspect ratio, two of our team members were “cut out” from the header image – here you have everyone!

The location and facilities are ideal for this type of work, where the team remains together for an extended period, and all services, including accommodation, food and social activities (sauna!), are organised by Hyytiälä forest station.

I created a scenario-type Thinglink called Workshop retreat at Hyytiälä forest station, based on images I had taken with a drone, a 360° camera and a smartphone.

If you’re new to Thinglink, please take a moment to familiarise yourself with the following guidelines: For the best viewing experience, I recommend using a large screen. The 360° images allow rotation, offering a comprehensive view of the surroundings. Some images include tags with detailed information. To navigate to the next image, click the ‘Proceed’ button located in the upper right-hand corner.

You can access the material through the following links according to your preferences. In any case, remember to go full screen for a maximally immersive experience – and if you have a VR headset handy, use it!

With an Accessibility player

View the Thinglink in the web browser

or embedded below, in this post:

Enjoy!

Lake Kuivajärvi, Hyytiälä forest station

Kwizie – passive video watching bye bye

EDIT 22.2.2024 – DISCLAIMER – Global campus has received full access to Kwizie for testing purposes. The findings have not been influenced by Kwizie.

The Interactive Journey with Kwizie

In an era where digital innovation is at the forefront of educational transformation, Kwizie has the potential to redefine the way we engage with video content for learning. This platform bridges the gap between passive viewing and active, gamified learning experiences. Here’s a quick look at how Kwizie could reshape educational engagement through its innovative features and user-centric approach.

Transforming Passive Videos into Interactive Learning Experiences

Kwizie makes learning more dynamic and interactive. By converting (in theory) any video into a comprehensive quiz, it introduces a novel way to learn, catering to diverse subjects and languages. This flexibility is a testament to Kwizie’s commitment to making learning accessible and engaging.

Who is it for? Obviously, teachers will benefit from this handy and user-friendly tool. In as few as eight mouse clicks you can prepare an Instant Quiz – just like that. It takes a few more clicks if you want to customise the quiz and have more control over the number of chapters and questions.

But a lifelong learner can benefit from this tool just as well. Say there is a concept you have always wanted to learn properly. In my case it was the internal combustion engine – how does it work? I never really cared about this, but I think it is part of general knowledge. Now, with Kwizie, I can learn this in minutes. Care to test it yourself? For this purpose I chose a different topic, in the true spirit of sustainability – Global campus’ main theme: How does composting work? Have a go!

Kwizie, Live tab

Core Features Unveiled:

  • Multilingual and Multifaceted: With support for numerous languages, Kwizie ensures that learners can access content in their preferred language, breaking down barriers to education.
  • Optimised for Mobile Learning: Recognising the importance of mobile accessibility, Kwizie delivers a pleasant experience across devices, ensuring learners can engage anytime, anywhere.
  • Customisation at Your Fingertips: The platform offers a variety of customisation options, allowing educators (the Quiz Master) to tailor quizzes to their audience’s age and learning objectives, providing a personalised learning experience.
  • Effortless Sharing Mechanisms: Sharing knowledge has never been easier – Kwizie uses QR codes for quiz distribution, fostering a collaborative learning environment.

 

Kwizie – How to, phase 1

Kwizie – How to, phase 2

Kwizie – How to, phase 3

Kwizie – How to, phase 4

Uncovering Kwizie’s Potential

To truly understand Kwizie’s impact, I embarked on a comprehensive testing journey, exploring its capabilities across a spectrum of videos and subjects. From environmental science to theoretical physics, the platform’s versatility was put to the test, revealing insightful nuances about its functionality and user experience. Here are my takeaways:

Insights from the Field:

  • Ease of Use: Creating quizzes is a breeze, thanks to Kwizie’s user-friendly interface that guides you through the process, from video selection to finalising quiz questions.
  • Interactive Learning: The platform’s use of timers adds an element of excitement to quizzes, though the option to pause between questions would enhance user control.
  • Educational Value: Kwizie excels in reinforcing learning objectives, with automated chapter summaries and key concepts highlighting its utility as a robust educational tool.
  • Room for Improvement: While Kwizie is already an impressive tool, I encountered a few things that in my opinion need attention. Some YouTube videos did not work as expected, e.g. the Analysing video phase never finished, or creating a new Quiz from the Live tab was not possible. Accessibility-wise, there is an issue with failing WCAG AAA – a minor colour-contrast issue, I admit, but something easily corrected. I have reported my findings back to Kwizie.

Reflections on Kwizie’s Educational Impact

Through testing, Kwizie’s role as an innovative educational platform became evident. Its LMS integration (via API) and the ability for learners to contest answers exemplify its potential not just to educate but also to engage learners in meaningful ways. The call for a broader array of question types and improved accessibility features presents an opportunity for Kwizie to further refine its offerings.

Envisioning the Future of Digital Education with Kwizie

As we look towards the future, Kwizie offers a platform that not only enhances the learning experience but also empowers educators and learners alike. Its ongoing evolution and adaptation to user feedback will undoubtedly continue to shape the landscape of digital education.

Wrapping Up: Kwizie as a Catalyst for Educational Evolution

Kwizie’s ability to transform video content into interactive quizzes represents a meaningful step forward in educational technology, offering new pathways for learning that are both engaging and accessible. As digital education continues to evolve, platforms like Kwizie will play a central role in shaping the future of learning, making it more dynamic, inclusive, and effective for everyone involved.

A short walk-through

Finally, I recorded the making of one quiz, so you can see how easy it really is.

Do you want to take this quiz too? No problem, here is the quiz about sauna in English.

 

Pushing the boundaries

Imaginative landscape from Abbott’s Flatland. Image generated in Midjourney.

When an intriguing call for papers appeared exploring AI co-creation, I felt compelled to test boundaries despite having slim-to-none academic publishing credentials. The concept resonated instantly, though self-doubt crept in as I studied the full details. Could conversational technology collaborate on speculative scholarly work? Curiosity won out over uncertainty’s paralysis. If nothing else, illuminating ethical application merits investigation.

It was a Special Issue Call by the Irish Journal of Technology Enhanced Learning (IJTEL) with the title: The Games People Play: Exploring Technology Enhanced Learning Scholarship & Generative Artificial Intelligence.

I chose Claude, an AI assistant from Anthropic, and entered an intensive weekend of iteration. There were three options to choose from: 1. Position Paper, 2. Short Report or 3. Book Review. I went with the book review. I fed Claude an 1884 novel called Flatland: A Romance of Many Dimensions by Edwin Abbott. Claude rapidly generated an abstract and a book review excerpt about Flatland’s dimensional metaphors. However, hurried passes produced explanations without the critical analysis needed to create cohesion. It relied completely on my explicit redirects to shape fragments into cogent framing. Through clear prompting, I pushed Claude to incorporate additional theories, doubling the length of certain passages. After ten iterations I felt confident we had a useful book review.

Our accepted article examined generative AI’s promise and pitfalls, affirming Claude’s usefulness in accelerating drafting under firm direction. But expecting it to truly comprehend nuance and context without significant human oversight appears premature. Still, well-defined augmentation roles provide a productivity upside compared with total autonomy today. In other words, the current sweet spot for AI writing tools involves utilising their ability to rapidly generate content under a researcher’s close direction and oversight, rather than granting them high levels of autonomy to complete complex tasks alone from start to finish.

More pressingly, this collaboration underscored the ethical questions arising as generative models gain sophistication. If AI one day meaningfully impacts literature reviews, translation work or even initial thesis drafting, how can scholars utilise those productivity benefits responsibly? Tools excelling at prose introduce complex attribution and usage-monitoring challenges that threaten integrity.

Rather than reactively restricting technology based on risks, proactive pedagogical probes can illuminate wise guardrails for integration. Insights from transparent experiments that clarify current versus aspirational capabilities inform ethical development ahead.

Imaginative landscape from Abbott’s Flatland. Image generated in Midjourney.

Forward-thinking educators can guide this age of invention toward positive ends by spearheading ethical explorations today. Our thoughtful efforts now, probing human-AI collaboration’s realities versus ambitions, construct vital foundations upholding academic integrity as new tools progress from speculative potential to educational reality.

We have the power to shape what comes through asking tough questions in times of uncertainty. As educators we shoulder the responsibility to model how inquiry protects core values even amidst rapid change. And through ethical leadership, we just might uncover new sustainable and inclusive ways to progress.

Want further reading on this topic?

Ethics of Artificial Intelligence – UNESCO

Ethics guidelines for trustworthy AI – by the EU

Ethics of AI – a MOOC by the University of Helsinki

If you are interested in reading my more personal account about this project you can do so here.

Aiming to support creative work of students: The ChatGPT application as part of a Master’s level online course

Computer on the table with beautiful scenery seen in the window

This blog post is written by professor Kalle Juuti and university lecturer Vilhelmiina Harju from the Faculty of Educational Sciences.

Currently, the hot topic in the field of education is generative AI applications and how they impact learning and teaching. Amid rapid technological change, we need to discuss how we understand new generative AI applications and their possibilities and potential drawbacks in education. Further, it is important to consider learning and studying with these tools. For example, do we understand new applications as tools for producing essays and other learning assignments, or do we see them as an opportunity to ideate and develop our own thinking? Do we use these tools in a way that actually excludes us from the learning process, or can we use them in a way in which we actively seek to develop our understanding and self-regulate our learning process? In higher education, where students have traditionally written a lot of texts to prove what they have learned, the development of generative AI tools means we need to rethink what and how we teach, as well as how we evaluate students’ learning. In particular, it is important to develop pedagogical approaches that exploit new tools in a way that supports students’ creative work and development of understanding.

In this blog post, we describe how “CurreChat”, a ChatGPT 3.5 application operated by the University of Helsinki, was used in a Master’s-level online course in education in spring 2023. One aim was to integrate the use of a generative AI application into course assignments in a pedagogically meaningful way; ChatGPT was used as a tool to support students’ creative work. Another aim was to practice reporting on the use of the AI application according to university guidelines.

In the course, students were asked to construct a solution concept for a problem they had identified in the field of education. Weekly course assignments were linked to different phases of the concept construction process, and finally the whole process was documented and reflected on in a portfolio. Students were given the opportunity to use the AI application in doing the assignments (e.g., identifying problems, ideating solutions, getting feedback on ideas, and reflecting on impact). For each assignment, students were given tips on how to use the tool in a way that would support their work. Students were also asked to write a short description each week of how they used the tool. The main principle was that students were asked to send their text first to ChatGPT and only then to a human reader.

The university’s own interface, CurreChat, connected to OpenAI’s ChatGPT 3.5. Using the university’s own interface was seen as important because we did not want students to have to log in to services external to the university. In addition, the interface ensured a more secure connection to the language model. The assumption was that the material students entered into the application would not be reused elsewhere.

Students reported that they used the tool in a variety of ways as part of their course assignments. Some tried the application in a wide range of ways, while others were more cautious. Some reported that they benefited from using the generative AI tool at different phases of their work, while others found it rather useless. A key observation we made from the teaching experiment is that joint practice and instruction in the use of a generative AI tool are important if the tool is to best support students’ creative work and learning.

Vilhelmiina Harju, University Lecturer, Faculty of Educational Sciences

Kalle Juuti, Professor, Title of Docent (pedagogy of science), Faculty of Educational Sciences

Introducing the Visual Consistent Character Creator: A New Era with GPT Builder

Introduction

Welcome to an exciting exploration of the innovative new GPT Builder tool (by OpenAI) and my ambitious first project with it – the Visual Consistent Character Creator, an attempt to unlock the holy grail of the text-to-image sphere. This pioneering tool represents a major leap forward in AI-assisted creativity, combining GPT Builder’s capabilities with the user’s unique imagination.

What exactly does it do? In a nutshell, the GPT Builder assists you in creating your own personal AI assistant tailored to your needs.

In this case, my aim was not just to create a character generator, but to come up with a tool that allows me to construct aesthetically pleasing and, above all, consistent-looking AI-generated characters. As a cherry on top, I had the GPT Builder formulate the prompt in a Midjourney-readable syntax.

John, a fictional character generated using a tailored GPT with OpenAI’s GPT Builder. The initial generation of John by DALL-E.

The Innovative Process: Enabling Detail and Consistency

My Visual Consistent Character Creator enables this through three key capabilities:

  1. Comprehensive trait selection — this allows for diverse and highly customised characters.
  2. Sequential, step-by-step trait selection — this ensures (or at least strives to achieve) coherence and precision in line with GPT Builder’s innovative approach.
  3. Quality check through AI-enhanced portraits using DALL-E — as a first step to ensure some level of consistency has been achieved before moving to Midjourney (or other text to image generators).

By combining these strengths, my tool can cater to a wide spectrum of creative needs while maintaining visual consistency and artistic flair. Let’s have a look at the process.

The Innovative Process: Detail and Consistency

1. Comprehensive Trait Selection

At the outset, I focused on defining a wide array of character traits, mainly physical attributes (and some secret ingredients I am not revealing). I created a template for this, a matrix of sorts. This was done keeping in mind the need to match the high standard of character portrayal seen in Midjourney. Every trait was carefully chosen to ensure that my GPT could cater to diverse creative needs while maintaining a high level of detail and visual consistency.

2. Sequential Interactivity for Enhanced Precision

A standout feature of the Visual Consistent Character Creator is its methodical, step-by-step trait definition process. Reflecting the innovative approach of the GPT Builder, this process ensures that each character trait is not only distinct but also contributes to a coherent overall portrait. This phase was somewhat tricky, as the GPT Builder, although always complying, did not always “remember” my instructions and occasionally showed signs of hallucination.

3. AI-Enhanced Portrait Prompt Meeting Midjourney Syntax

In the final step, after the Visual Consistent Character Creator has summarised the character’s traits and the user has confirmed them, I instructed the tool to generate two types of portraits – a detailed close-up and a full-body image – with DALL-E, which conveniently sits in this workflow as it is part of OpenAI’s ecosystem, so the user never has to leave the browser window. At the very end, the Visual Consistent Character Creator creates a prompt the user can copy and paste into Midjourney.

After some testing, however, I have to say that the consistency of the characters is quite impressive – but only when generating with DALL-E. When exporting the prompts to Midjourney, the consistency is less evident. A major advantage of the integrated DALL-E in ChatGPT is the possibility to discuss the result with GPT and ask for modifications to the generated image. This is huge!

John, a fictional character generated using a tailored GPT with OpenAI’s GPT Builder. The initial generation of John by DALL-E.

The Potential Impacts: Opportunities and Challenges

By significantly enhancing the character design process, a visual character designer assistant like the one I just built with GPT Builder could revolutionise creative sectors like gaming, animation and graphic novels. The ability to quickly generate consistent-looking, detailed and high-quality characters could greatly accelerate production and encourage more experimentation.

However, this also risks reducing human input in creative roles. As AI becomes increasingly capable of mimicking human artistry, important questions around originality and authenticity arise. While AI art tools offer exciting new opportunities, maintaining a balanced perspective regarding their applications will be vital so that we can benefit from their potential while responsibly managing their risks.

Overall, as an innovative new frontier in AI-assisted creativity, this tool promises to take character design into an exciting new era. By harnessing its capabilities thoughtfully, we can unlock immense creative potential.

Screenshot of ChatGPT interface creating iterations of John, a fictional character generated using my tailored GPT with OpenAI’s GPT Builder. Generated by DALL-E.

Serendip – an Immersive Sustainability Learning Adventure

Launching the Serendip project on November 9, 2023!

When we started the Global campus project in 2022, we were given the freedom to experiment with the limits of online learning. We were expected to do really bold, even risky EdTech experiments. So, we thought very carefully about how we could use our time wisely. What does this university want or need? What could be something bold that would benefit all members of the university community, regardless of faculty, and beyond?

One of the strategic goals of the University of Helsinki is to advance ecological sustainability and responsibility. The University is dedicated to integrating the themes of sustainability into all education programmes.

Well-designed digital and physical environments for work, teaching and learning will enhance ecological sustainability and promote encounters with others, support creativity, renew forms of collaboration and improve accessibility.

(University of Helsinki Strategy 2021-2023)

Following this mission, sustainability became the topic that would be the glue of our work. In the design process, we asked teachers and students what they were missing regarding sustainability education. We learned that a virtual space where students from around the world would gather to solve sustainability challenges was the secret wish of the sustainability teachers.

Students, on the other hand, wanted to travel in 3D worlds and learn how to influence stakeholders. They wished to improve their skills in finding the intervention points in decision-making processes. Students also desired to see hope and to use all their senses. We knew we wanted to do this. And this was the foundation for a bold EdTech experiment, the project called Serendip*.

Based on our pedagogical framework, we believe that learning should be engaging and fun but at the same time personalized and efficient. By offering students a visually appealing virtual reality learning environment with diverse, multidisciplinary learning content and a chance to actually train sustainability competencies, we can help students become the change agents this world needs.

The learning content has been developed together with researchers, teachers and students from different disciplines. The research-based content, together with state-of-the-art technologies, makes for an engaging learning experience. In virtual reality we can make the impossible possible, travel in time and place, and practice empathy.

We also identified that by taking AI tools to the next level, we could increase the interaction between a student and the learning content. Therefore, we designed virtual AI-powered characters for different pedagogical purposes in the game. Each discussion is different and personalized, based on the student’s own interests.

The first game episode, the Boreal Forest – one of the tipping elements in Earth’s climate system – is an adventure through snow and woods. It combines forest economy, forest ecology and well-being with Indigenous studies. It helps students practice their systems-thinking, values-thinking and intrapersonal skills.

We see that you have a role to play in sustainability, so we are happy to invite you to participate as a teacher, a student or a subject-matter expert and co-create the further episodes with us. Learn more on serendip.fi and join the adventure by telling us how you would like to take part – just fill in the form. Could a learning environment for sustainability education look like this?

* Serendip = The word serendipity, originating from an old Persian fairytale, “The Three Princes of Serendip”, means unplanned fortunate discoveries. The Serendip Learning Adventure is based on a serendipitous learning approach where, through exploration, learners might discover unexpected and interesting connections among phenomena, which can lead to meaningful learning. Serendipity, as a valuable unexplored source for learning, can be fostered through engagement and interaction. We see that sustainability challenges need innovations, which can be the results of serendipitous events.

AWEXR 2023, Vienna

AWEXR 2023 - Vienna

 

Yesterday and today (24 and 25 October 2023) I attended AWE XR Europe in Vienna. As usual when I travel, I chose to walk as much as possible and avoid public transportation. This time too, I took the train from the airport to Wien Mitte and from there made my way to the event venue, a distance of about 6 km. Walking allows me to truly experience a city – its beat, vibe, smells and soundscape. It’s the best way to get a feel for the local culture, and exploring on foot ensures at least a minimum of movement during conferences and fairs. Ironically, after deliberately immersing myself in the sights, sounds and smells of Vienna, I then spent much of the event dealing with virtual and augmented worlds.

AWEXR 2023 - Vienna AWEXR 2023 - Vienna

The exhibition area at AWE XR Europe had a different feel compared to Laval Virtual earlier this year, with fewer exhibitors overall. I felt that at AWE XR the focus was primarily on technical and engineering XR applications, with exceptions such as hixr’s Time Travel Berlin and Chronopolis.

AWEXR 2023 - Vienna

Some notable observations

The playground area featured different XR experiences like Artivive and Nettle VR. Interesting exhibitors included the Copresence app for virtual collaboration, Cognitive 3D for 3D modelling, and Ikarus 3D’s 3D modelling software and product visualisation in general.

For me, one of the standout exhibits at AWE XR Europe was Time Travel Berlin, an immersive XR experience that virtually transports users back in time to 1920s Berlin. Unlike most other exhibits, which focused on technical demonstrations, both Time Travel Berlin and Chronopolis highlighted the humanities applications of XR – in the case of Time Travel Berlin, through an incredibly detailed historical recreation. Participants are immersed in a vivid simulation of the vibrant Pariser Platz outside the Brandenburg Gate, populated with period vehicles, hotels and pedestrian crowds. The meticulous attention to accuracy and human-centred storytelling made it feel like walking through history. It showed that while XR enables engineering feats like 3D modelling, it can also profoundly enhance fields like education, heritage preservation and narrative experiences. This innovative humanities-focused use of immersive technology was a refreshing change from the predominant tech demos. As one of the few exhibits bridging STEM and the humanities, it was undoubtedly a highlight of AWE XR Europe.

Another interesting exhibitor was Holonet, a Croatian startup offering a VR collaboration platform that uses realistic hologram avatars based on user photos. Their system supports see-through capabilities on glasses that have the feature, creating a much more immersive experience than conventional VR avatars.

Delta Reality, another Croatian company, focused on delivering exceptional, meticulously crafted XR experiences. Their talented team brings together experts across technology, design and creativity to push the boundaries of immersive content.

AWEXR 2023 - Vienna

Overall, AWE XR Europe provided a great opportunity to see the latest innovations in XR from both startups and established players. The smaller scale enabled more intimate networking and discovery compared to other, larger events. I was able to connect with key players in the European XR ecosystem and bring back valuable insights for our team. A highlight was reconnecting with James Mifsud of ArborXR, whom I had met earlier this year at Laval Virtual. It was great to catch up with James and attend the afterparty together. The impromptu dinner with other attendees, especially the ones from Croatia, was particularly enjoyable and inspirational. Events like AWE XR enable these valuable personal connections within the close-knit XR community.

AWEXR 2023 - Vienna AWEXR 2023 - Vienna

AWEXR 2023 - Vienna AWEXR 2023 - Vienna

UniPID collaboration

Empowering Education through Artificial Intelligence

Earlier this year, I had the privilege of conducting an online workshop on the use of artificial intelligence (AI). The event, organised by UniPID, was in line with the broader vision of Global Campus: to harness the power of AI and bring it to professionals in higher education.

I introduced the attendees to innovative tools like ChatGPT and Midjourney. ChatGPT, for instance, offers a conversational approach that can assist teachers in course design, from creating exercises and tasks to developing syllabi. On the other hand, Midjourney stands out with its unique ability to generate images that closely represent real-life objects, enabling teachers to bring their imaginative ideas to life in visual formats.

We delved into the potential of AI in creating personalised learning experiences, ensuring that students from diverse backgrounds can receive quality education tailored to their needs. Furthermore, we touched on the ethical implications of the use of AI in education. Global Campus emphasises the importance of responsible AI use, and it was enlightening to engage with educators and stakeholders on this critical topic.

This workshop was a reminder of the incredible impact we can achieve when we collaborate, share knowledge, and drive innovation.

In conclusion, my experience at the workshop was both enlightening and inspiring. We’re not just envisioning the future of education; we’re actively shaping it through workshops like this. I’m excited about the possibilities that lie ahead.

Read UniPID’s take on the workshop: Teachers’ workshop: the use of Artificial Intelligence in virtual education.

Uncover Finnish Education MOOC

The University of Helsinki’s first Massive Open Online Course (MOOC) on the topic of education is now available for local and global audiences. The Uncover Finnish Education MOOC presents the current situation of Finnish education from a systemic perspective. Designed around students’ interests and needs, the course covers topics such as underlying values, the educational ecosystem, administrative aspects, curriculum development, quality enhancement, teacher education and current challenges. The content is presented in a wide variety of formats, such as text, podcasts, videos and VR resources.

The course has been developed by the Faculty of Educational Sciences in collaboration with the Global Innovation Network for Teaching and Learning (GINTL), a network of 20 Finnish higher education institutions (universities and universities of applied sciences) funded by the Ministry of Education and Culture. The vision of the development team was to create a course that is captivating, meets the needs of the learners, promotes personalized learning, brings creative approaches to online environments, has a modern and stylish UI, and is available to everyone. Sharing a similar vision for online learning, Global Campus joined this course as a development partner. Interested in experimenting with VR tools in online environments, Global Campus supported the design of several features for the Uncover Finnish Education MOOC, as described below.

AI videos 

The course includes two AI videos. One presents the structure of the Finnish education system and is based on an infographic from the resource library of the Ministry of Education and Culture. The second video presents the learning experiences of Ella Kämper, a student at the University of Helsinki. Ella wrote the script and provided the photos and videos. The process of creating the videos was very smooth – faster and less effort than recording videos in a studio would have been. The videos were edited in Premiere Pro. You can see Ella as an avatar in the video below.

Simulations 

At the suggestion of the Global Campus team, we developed two simulation exercises for the course. And I must confess that it has been one of the best choices we have made for this course. Shortly into the design process of the simulations, I understood the high value these types of exercises can have in supporting students’ learning. Being able to immerse yourself in a specific situation, practice different skills and make decisions is an opportunity that cannot usually be provided during courses.

To develop the simulations, we worked with two experts from 3DBear, a company which provides service solutions for AR and VR learning. Both experts had a pedagogical background, which was very useful when developing educational content. Together with them and a couple of the course content authors, we developed one simulation about outdoor learning, which can also be used as a professional development tool, and another video simulation in which course students can experience being a Finnish teacher in a teacher–student–guardian meeting. You can spot the simulations in Chapters 4 and 5 of the course.

Immersive content 

We knew early in the course design that we would like to include immersive and interactive content. We wanted to create possibilities for students to learn by discovery and by doing. Therefore, on the advice of the Global Campus team, we used Thinglink to develop several interactive resources. Thinglink proved to be a very handy and versatile tool, which catered very well to our need for immersive content. We created interactive resources using 360° photos and infographics.

Thanks to the involvement of Global Campus in this course, the variety of content formats has increased considerably. Including AI and VR resources in online learning environments can make a difference to students’ learning. We hope that the Uncover Finnish Education MOOC will bring a holistic learning experience to everyone studying it. Take the course and let us know what you think about the use of emerging technologies in online learning environments.

Mihaela Nyyssönen

Uncover Finnish Education MOOC project planner/Faculty of Educational Sciences

E-Learning Designer/ Global Campus

University of Helsinki