Internship with the Global Campus team

At the end of 2023, the Global Campus team was looking for an intern to work on Serendip – I took my chance and applied. I started at the end of January, and now my internship is coming to an end.

I am currently studying in the Forest Sciences Master’s programme at the University of Helsinki and have completed my master’s courses alongside this part-time internship. I have a bachelor’s degree in Forest Sciences (forest ecology and management), which has given me a solid background for working on the first episode of Serendip, Boreal Forest.

My four-month internship has been interesting and educational. This was my first time working as a forest expert in so-called knowledge work, and my first time working in an international team using English as the working language. My colleagues have been supportive all the way, and it has been fun to work together.

This internship has supported my development as a future forest expert well, as I have been able to take on many kinds of tasks: participating in meetings with other experts, working as part of the team, and working independently on the scientific content of the Boreal Forest episode. I also got to establish a new professional contact and collaboration.

One of my favourite things during my internship was our team trip to Hyytiälä Forest Station. I got to take part in planning the trip, and I led an introductory tour in the forest, presenting some of the research done at the station. During this outdoor activity I also got to teach GC team members how to use a relascope for forest measurements – a moment our team leader Ulla captured on camera.

It feels good that through Serendip, not only our team members but also people from many other fields will get the chance to learn more about boreal forests and sustainability. I am thankful for this opportunity to do an internship in this multicultural team.

Best, Helmi Lilleberg

GLOBAL CAMPUS GOES TO FUTURE CAMP! 

A Future Camp was arranged at Hyytiälä Forest Station at the end of April as part of the PUUSTAUS project, which brings together key stakeholders in the forestry sector and the education network. Building a sustainable future, particularly in the forestry field, requires open-minded, innovative thinking. The aim of the camp was therefore to strengthen students’ futures-thinking skills and to provide a platform for networking. Themes such as foresight in the forest sector, futures studies and the factors of change were discussed. Additionally, activities to foster interpersonal communication and entrepreneurial skills were organised.

The camp was a perfect occasion to test the latest version of the Boreal Forest episode of Serendip. The participants, all of whom had expertise in the forestry field, were able to explore the Serendip environment in a dedicated testing session during the first day of the camp. Useful feedback and fruitful ideas were collected, and interesting conversations and interactions continued throughout the camp.  

Overall, the camp was a great opportunity to receive external feedback, while also reflecting on the future of the boreal forest and its implications for society and the economy at large. It also provided a space to reflect on forestry education and on how it can be reimagined and innovated, all while being immersed in the ideal setting: the Finnish boreal forest. This included engaging in traditional Finnish activities, such as staying at the kota and having a relaxing sauna.  

Helmi and Letizia 

Knowing wood species in an immersive way – forest science students co-creating a virtual arboretum

We launched a new exercise for the Master’s-level course Wood Science (FOR-267), where students both explore the real Arboretum in Viikki and co-create a virtual, immersive learning forest of several tree species.

Together with Global Campus, Viikki arboretum experts and a collaborator from the University of Eastern Finland, we picked the browser-based platform Thinglink, which allows the use of immersive 360° images and videos. Sasa Tkalcan of Global Campus created an elaborate base structure in Thinglink from drone, 360° and smartphone imagery, which served as the platform for the students to work on. A short introduction to using Thinglink was given to the students at the beginning. The students were divided into groups, and each group was assigned a tree species, which they then photographed and filmed, also adding texts based on literature reviews, following a matrix given in the course.

Teacher’s opinion: “The outcomes by the students are amazing – they had the freedom to implement the tasks by using all the technical possibilities Thinglink offers, and their own creativity. The results really encourage us to continue the joint development of creative learning tasks in the coming years. Also, working with Global Campus has been very eye-opening and inspiring. As a new teacher at the university, I have enjoyed having fruitful discussions and receiving guidance in applying new educational methods and digital tools in the courses I am responsible for.”

Screenshot from one of the students’ work (Jacob Payne)

New tools such as Thinglink can provide our students with up-to-date, competitive virtual learning spaces in which to immerse themselves in material science education. When virtual learning spaces that are freely accessible at any time are combined with contact teaching exercises on campus, we reach a well-balanced course structure that engages students with the content through multiple methods.

The next course starts in the first period of fall 2024! Join the fun and fruitful way of learning in the field of wood science!
Tuula Jyske
Wood, Science and Well-being research group

Screenshot from one of the students’ work (Jacob Payne)

Student comments: 

Jake: “I really enjoyed this part of the course. It was unusual to be able to use such intuitive multimedia (especially the tagging system and the ability to “move” between areas), and in my opinion it’s a much more interesting way to engage with the content than traditional approaches. It was also nice to be able to spend time outdoors and make my own observations of plants in nature as part of the course. Thinglink does have some usability issues, but these can be worked around and overall, it’s easy enough to use”. 

Screenshot from one of the students’ work (Jacob Payne)

Text to Image – a look at Midjourney’s parameters

The field of GenAI technologies is rapidly evolving, regularly introducing exciting new applications, features and updates. In this blog post, I’ll explore a particular text-to-image (T2I) tool that has seen some cool updates in the past couple of months.

So we are returning to Midjourney to see what the new features are, but also to give you an overview of the most important parameters, which make all the difference when prompting for something more specific. You might want to read up on the basics in my previous blog post, Harnessing AI – the Midjourney case.

I’ll be using the latest version available, which is currently Midjourney version 6 Alpha (April 5, 2024). My preferred method of prompting is through the web browser, which is now possible in version 6 for users who have generated more than 1,000 images in Midjourney (MJ). The Discord interface is overwhelming, with all the servers and emojis, and you needed to remember the names of the parameters, which in my case led to a lot of typos. With the clean browser interface, you have the prompt line at the top of the page and the parameters as buttons and sliders – very easy to use.

All prompting that is done through the web browser can be done through the Discord interface too – don’t worry – just remember to start your prompt with /imagine. When using parameters, you indicate them with two dashes in front of the parameter name. For instance, if you want to instruct MJ to create an image with an aspect ratio of 5:7, meaning the image is 5 units wide and 7 units high, you write: --ar 5:7. In the browser you just move the slider to the desired ratio, though if you prefer typing dashes and parameters, you can still do that in the web browser too. Please note that macOS has a habit of combining the two dashes into one, so don’t worry if you sometimes see a long dash instead of a double dash.
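To make the syntax concrete, here is what a complete prompt could look like in Discord (the parameter syntax is MJ’s; the subject line is just an example of my own):

/imagine a woman sitting in a café, frontal view --ar 5:7

In the browser you would type only the text part and set the aspect ratio with the slider.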

Having a common baseline is important, so my prompting will use the default settings, which you can see in the second image with the parameter options open. There is one exception: under Model, the Mode should be Standard, not Raw, but we’ll come back to this. If I use Raw, I will let you know in the prompt.

MJ has quite a few parameters and I won’t discuss them all here, just the ones I think need explaining and that may be the most impactful. You can familiarise yourself with the ones I am not discussing in MJ’s own Parameter list. Here are the ones I will cover, some in more depth, some less:

  • No
  • Tile
  • Character Reference
  • Style Reference
  • Stylize

Raw vs. Standard

Let’s go back to the Raw vs. Standard Mode question. Raw can be considered another parameter, expressed as --style raw. In MJ’s own words, it is used:

to reduce the Midjourney default aesthetic. 

If you compare the two sets of images above, you notice the difference, although both sets have the exact same prompt: a woman sitting in a café, frontal view. The first four use Standard Mode and the last four Raw. For me, Raw is the equivalent of photo-realistic; in other words, if you aim for life-like, photo-realistic images (of people), you will most likely succeed with Raw rather than the MJ default aesthetic, Standard. It’s not perfect, mind you – always check for mistakes like extra fingers, limbs etc.
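To be explicit, the only difference between the two sets is the mode parameter appended to the prompt:

a woman sitting in a café, frontal view
a woman sitting in a café, frontal view --style raw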

No

Ever wanted to create an image but couldn’t get rid of an element? Well, the parameter --no works quite well in cases where there is something specific you want to exclude. Just add the parameter – and in this case you need to type it manually even in the browser version, since this one, like some others, is not available as a button. The prompt for the following images is: a delicious hamburger, lush, vibrant colours, french fries --no cheese --style raw.

Tile

This parameter is more of an artistic one: it helps you create a never-ending pattern. The image may stand on its own, yet its full potential unfolds when it is used repeatedly to craft a larger composition, integrated onto an object’s surface, or used as a desktop background. To get a seamless pattern you need a tool such as Photoshop, where you can create a pattern from the image and apply it to whatever you want.
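The parameter itself could not be simpler: you just append it to the prompt. The subject below is an illustrative stand-in for the snowflake pattern shown here, not the exact prompt I used:

an ornamental snowflake, watercolour style --tile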

Character Reference

Now we are getting to the newer and more interesting stuff. You may or may not have noticed that character reference (--cref + image URL) is not on MJ’s list of parameters. In short, --cref allows you to use a reference image to tell Midjourney that you want, for instance, this same person with different clothing and perhaps in a different environment. To achieve better results it pays off to define the character more thoroughly; a mere a woman sitting in a café, frontal view will not necessarily yield good results. To highlight this, I first used as the reference image the same woman from the café series above – the second image from the left. After this you will see my second test run with a more elaborate prompt.


And here is my second --cref run with a more elaborate prompt: a 30-year-old Caucasian woman with black hair standing on a street corner. The more you define the character, the less room you give MJ to come up with variations; for instance, having Caucasian in the prompt eliminates characters from other ethnic backgrounds, and the same goes for the hair colour etc. Well, as you can see, it’s not nearly perfect, but it is something you can now work with. Try a couple of Reruns and pick the best-matching image from the iterations.
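For reference, the full prompt pattern looks like this – the URL is only a placeholder, to be replaced with the address of your own reference image:

a 30-year-old Caucasian woman with black hair standing on a street corner --cref https://example.com/reference.png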

Even with a specific prompt, it’s good to understand that prompting and re-prompting, and making variations, are essential to achieving good results – especially when you prompt for something particular. Some expectation management is in order here, I believe. It’s illusory to expect all four images to depict the very same character as the reference – and actually impossible, if you think about it: no such person exists, after all!

Style Reference

Similar to character reference, style reference (--sref + image URL) allows you to use a reference image to tell Midjourney that you want to transfer the style of the reference image to the new image. Now, style here includes things like the overall feeling, the colour scheme and so on, but not necessarily the artistic style itself, and certainly NOT the objects or subjects.

Style as a term is problematic in this particular case, as there is a possibility of confusing two different usages of style when working with MJ. You can prompt for a specific art style, as in the very first image of this blog post: Caravaggio’s painting depicting the cutest white mouse ever, eating cheese on a kitchen table, soft light coming in from a window. You could also prompt: The cutest white mouse ever, eating cheese on a kitchen table, soft light coming in from a window in the style of Caravaggio’s painting. In my (granted, not so extensive) testing I have not been able to transfer Caravaggio’s style as such to a new image using --sref. Instead, the new image receives the overall feeling and colours of the reference image. I say this – again – to curb expectations. Furthermore, --sref doesn’t work too convincingly when referencing from Standard mode to Raw mode or vice versa.
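The syntax itself mirrors character reference – again, the URL below is only a placeholder for your own reference image:

a delicious hamburger, lush, vibrant colours --sref https://example.com/style-reference.png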

My reference image is the snowflake from the Tile section above, and I applied it to the second case (the woman in her thirties) from the character reference section. As you can see, some of the colours and ornamental elements are clearly carried over into the new image.

The second image I applied the reference to is the burger. Here too you can see the influence of the reference image, albeit in a creative way – for instance in the last one 🫐.

Stylize

And finally, we come to Stylize (--stylize or just --s). In the browser version you have it as a slider called Stylization under Aesthetics. According to MJ, “Low stylization values produce images that closely match the prompt but are less artistic” (https://docs.midjourney.com/docs/en/stylize-1). My interpretation is that the lower the value, the “more raw” the image becomes, and the higher the value, the more freedom MJ is given to apply an artistic touch. Compare the following two series.
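The pattern for running the same prompt at both ends of the scale looks like this, using the familiar café prompt as an example (the values are simply illustrative low and high picks, not the exact ones behind the series below):

a woman sitting in a café, frontal view --s 50
a woman sitting in a café, frontal view --s 750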

How do you interpret the clear difference between these two series? I think it is a significant one. The ones with low stylize values look more ordinary – like you and me – as opposed to the ones with a high stylize value, which could easily be models in commercials, TV ads or on a runway. It seems that our society’s current beauty standards would consider the stylised ones the more beautiful.

Before I let you go, I want to make one final point. It’s crucial to reflect on an important issue within GenAI: bias. Midjourney has made notable progress in recognising and addressing diversity, including but not limited to gender, ethnicity and age. However, true inclusivity extends far beyond these categories. It encompasses understanding and actively working against biases related to generational differences, sexual orientation, religious and spiritual beliefs, disabilities and socioeconomic backgrounds. When using any image creation tool, it’s important to apply to the images and texts we get the same level of critical scrutiny and questioning that we use with text-to-text GenAI systems (ChatGPT, Claude etc.).

Thank you for reading, happy prompting!

Hyytiälä forest station – workshop retreat

Global campus at Hyytiälä forest station

A quick update on the Global campus team.

We visited our university’s Hyytiälä Forest Station for several intensive workshop sessions on March 18 and 19. We concentrated our efforts on fine-tuning Serendip’s first episode, Boreal Forest. New to Serendip, our immersive virtual adventure? Make sure to watch the trailer and read about it on the site. In addition to our work, we also dedicated time to team-building and bonding activities.

Global campus at Hyytiälä forest station
Unfortunately, due to the image aspect ratio, two of our team members were “cut out” of the header image – here you have us all!

The location and facilities are ideal for this type of work, where the team remains together for an extended period, and all services, including accommodation, food and social activities (sauna!), are organised by Hyytiälä forest station.

I created a scenario-type Thinglink called Workshop retreat at Hyytiälä forest station, based on images I had taken with a drone, a 360° camera and a smartphone.

If you’re new to Thinglink, please take a moment to familiarise yourself with the following guidelines: For the best viewing experience, I recommend using a large screen. The 360° images allow rotation, offering a comprehensive view of the surroundings. Some images include tags with detailed information. To navigate to the next image, click the ‘Proceed’ button located in the upper right-hand corner.

You can access the material through the following links, according to your preferences. In any case, remember to go full screen for the most immersive experience – and if you have a VR headset handy, use it!

With an Accessibility player

View the Thinglink in the web browser

or embedded below, in this post:

Enjoy!

Lake Kuivajärvi, Hyytiälä Forest Station

Kwizie – bye-bye, passive video watching

EDIT 22.2.2024 – DISCLAIMER – Global campus has received full access to Kwizie for testing purposes. The findings have not been influenced by Kwizie.

The Interactive Journey with Kwizie

In an era where digital innovation is at the forefront of educational transformation, Kwizie has the potential to redefine the way we engage with video content for learning. This platform bridges the gap between passive viewing and active, gamified learning experiences. Here’s a quick look at how Kwizie could reshape educational engagement through its innovative features and user-centric approach.

Transforming Passive Videos into Interactive Learning Experiences

Kwizie makes learning more dynamic and interactive. By converting (in theory) any video into a comprehensive quiz, it introduces a novel way to learn, catering to diverse subjects and languages. This flexibility is a testament to Kwizie’s commitment to making learning accessible and engaging.

Who is it for? Obviously, teachers will benefit from this handy and user-friendly tool. In as few as 8 mouse clicks you can prepare an Instant Quiz – just like that. It takes a few more clicks if you want to customise the quiz and have more control over the number of chapters and questions.

But a lifelong learner can benefit from this tool just as well. Say there is a concept you have always wanted to learn properly. In my case it was the internal combustion engine – how does it work? I had never really cared about this, but I think it is part of general knowledge. With Kwizie, I can learn it in minutes. Care to test it yourself? For this purpose I chose a different topic, in the true spirit of sustainability – Global Campus’ main theme: how does composting work? Have a go!

Kwizie, Live tab

Core Features Unveiled:

  • Multilingual and Multifaceted: With support for numerous languages, Kwizie ensures that learners can access content in their preferred language, breaking down barriers to education.
  • Optimised for Mobile Learning: Recognising the importance of mobile accessibility, Kwizie delivers a pleasant experience across devices, ensuring learners can engage anytime, anywhere.
  • Customisation at Your Fingertips: The platform offers a variety of customisation options, allowing educators (the Quiz Master) to tailor quizzes to their audience’s age and learning objectives, providing a personalised learning experience.
  • Effortless Sharing Mechanisms: Sharing knowledge has never been easier – Kwizie uses QR codes for quiz distribution, fostering a collaborative learning environment.

Kwizie – How to, phase 1

Kwizie – How to, phase 2

Kwizie – How to, phase 3

Kwizie – How to, phase 4

Uncovering Kwizie’s Potential

To truly understand Kwizie’s impact, I embarked on a comprehensive testing journey, exploring its capabilities across a spectrum of videos and subjects. From environmental science to theoretical physics, the platform’s versatility was put to the test, revealing insightful nuances about its functionality and user experience. Here are my takeaways:

Insights from the Field:

  • Ease of Use: Creating quizzes is a breeze, thanks to Kwizie’s user-friendly interface that guides you through the process, from video selection to finalising quiz questions.
  • Interactive Learning: The platform’s use of timers adds an element of excitement to quizzes, though the option to pause between questions would enhance user control.
  • Educational Value: Kwizie excels in reinforcing learning objectives, with automated chapter summaries and key concepts highlighting its utility as a robust educational tool.
  • Room for Improvement: While Kwizie is already an impressive tool, I encountered a few things that in my opinion need attention. I found some YouTube videos that did not work as expected, e.g. the Analysing video phase never finished, or creating a new Quiz in the Live tab was not possible. Accessibility-wise, there is an issue with failing WCAG AAA – a minor colour-contrast issue, I admit, but something easily corrected. I have reported my findings back to Kwizie.

Reflections on Kwizie’s Educational Impact

Through testing, Kwizie’s role as an innovative educational platform became evident. Its LMS integration (via an API) and the ability for learners to contest answers exemplify its potential not just to educate but also to engage learners in meaningful ways. The call for a broader array of question types and improved accessibility features presents an opportunity for Kwizie to further refine its offering.

Envisioning the Future of Digital Education with Kwizie

As we look towards the future, Kwizie offers a platform that not only enhances the learning experience but also empowers educators and learners alike. Its ongoing evolution and adaptation to user feedback will undoubtedly continue to shape the landscape of digital education.

Wrapping Up: Kwizie as a Catalyst for Educational Evolution

Kwizie’s digital education platform, with its ability to transform video content into interactive quizzes, represents a meaningful step forward in educational technology, offering new pathways for learning that are both engaging and accessible. As digital education continues to evolve, platforms like Kwizie will play a central role in shaping the future of learning, making it more dynamic, inclusive and effective for everyone involved.

A short walk-through

Finally, I recorded the making of one quiz so you can see how easy it really is.

Do you want to take this quiz too? No problem, here is the quiz about sauna in English.

 

Pushing the boundaries

Imaginative landscape from Abbott's Flatland. Image generated in Midjourney

When an intriguing call for papers exploring AI co-creation appeared, I felt compelled to test boundaries, despite having slim-to-none academic publishing credentials. The concept resonated instantly, though self-doubt crept in as I studied the full details. Could conversational technology collaborate on speculative scholarly work? Curiosity won out over uncertainty’s paralysis. If nothing else, illuminating ethical application merited investigation.

It was a Special Issue Call by the Irish Journal of Technology Enhanced Learning (IJTEL) with the title: The Games People Play: Exploring Technology Enhanced Learning Scholarship & Generative Artificial Intelligence.

I chose Claude, an AI assistant from Anthropic, and entered an intensive weekend of iteration. There were three options to choose from: 1. Position Paper, 2. Short Report or 3. Book Review. I went with the book review. I fed Claude the 1884 novel Flatland: A Romance of Many Dimensions by Edwin Abbott. Claude rapidly generated an abstract and a book review excerpt about Flatland’s dimensional metaphors. However, these hurried passes produced explanation without the critical analysis needed for cohesion. Through clear prompting, I pushed Claude to incorporate additional theories, doubling the length of certain passages. It relied completely on my explicit redirects to shape the fragments into a cogent frame. After ten iterations, I felt confident we had a useful book review.

Our accepted article examined generative AI’s promise and pitfalls, affirming Claude’s usefulness in accelerating drafting under firm direction. But expecting it to truly comprehend nuance and context without significant human oversight appears premature. Still, well-defined augmentation roles provide a productivity upside compared with total autonomy today. In other words, the current sweet spot for AI writing tools involves utilising their ability to rapidly generate content under a researcher’s close direction and oversight, rather than granting them high levels of autonomy to complete complex tasks alone from start to finish.

More pressingly, this collaboration underscored the ethical questions that arise as generative models gain sophistication. If AI one day meaningfully impacts literature reviews, translation work or even initial thesis drafting, how can scholars utilise those productivity benefits responsibly? Tools that excel at prose introduce complex attribution and usage-monitoring challenges that threaten integrity.

Rather than reactively restricting technology based on its risks, proactive pedagogical probes can illuminate wise guardrails for integration. Insights from transparent experiments that clarify current versus aspirational capabilities will inform ethical development ahead.

Imaginative landscape from Abbott’s Flatland. Image generated in Midjourney.

Forward-thinking educators can guide this age of invention toward positive ends by spearheading ethical explorations today. Our thoughtful efforts now, probing human-AI collaboration’s realities versus ambitions, construct vital foundations upholding academic integrity as new tools progress from speculative potential to educational reality.

We have the power to shape what comes through asking tough questions in times of uncertainty. As educators we shoulder the responsibility to model how inquiry protects core values even amidst rapid change. And through ethical leadership, we just might uncover new sustainable and inclusive ways to progress.

Want further reading on this topic?

Ethics of Artificial Intelligence – UNESCO

Ethics guidelines for trustworthy AI – by the EU

Ethics of AI – a MOOC by the University of Helsinki

If you are interested in reading my more personal account about this project you can do so here.

Aiming to support the creative work of students: the ChatGPT application as part of a Master’s-level online course

A computer on a table, with beautiful scenery seen through the window

This blog post was written by Professor Kalle Juuti and University Lecturer Vilhelmiina Harju from the Faculty of Educational Sciences.

Currently, the hot topic in the field of education is generative AI applications and how they impact learning and teaching. Amid rapid technological change, we need to discuss how we understand the new generative AI applications and their possibilities and potential drawbacks in education. Further, it is important to consider what learning and studying with these tools look like. For example, do we understand the new applications as tools for producing essays and other learning assignments, or do we see them as an opportunity to ideate and develop our own thinking? Do we use these tools in a way that actually excludes us from the learning process, or can we use them in a way in which we actively seek to develop our understanding and self-regulate our learning process? In higher education, where students have traditionally written a lot of text to demonstrate what they have learned, the development of generative AI tools means we need to rethink what and how we teach, as well as how we evaluate students’ learning. In particular, it is important to develop pedagogical approaches that exploit the new tools in a way that supports students’ creative work and the development of their understanding.

In this blog post, we describe how “CurreChat”, a ChatGPT 3.5 application operated by the University of Helsinki, was used in a Master’s-level online education course in spring 2023. One aim was to integrate the use of a generative AI application into course assignments in a pedagogically meaningful way; ChatGPT was treated as a tool to support students’ creative work. Another aim was to practise reporting on the use of the AI application according to university guidelines.

In the course, students were asked to construct a solution concept for a problem they had identified in the field of education. Weekly course assignments were linked to different phases of the concept construction process, and finally the whole process was documented and reflected on in a portfolio. Students were given the opportunity to use the AI application in doing the assignments (e.g., identifying problems, ideating solutions, getting feedback on ideas, and reflecting on impact). For each assignment, students were given tips on how to use the tool in a way that would support their work. Students were also asked to write a short description each week of how they had used the tool. The main principle was that students were asked to send their text to ChatGPT first, and only then to a human reader.

The university’s own interface, “CurreChat”, connected to OpenAI’s ChatGPT 3.5. Using the university’s own interface was seen as important because we did not want students to have to log in to services external to the university with their IDs. In addition, the interface ensured a more secure connection to the language model. The assumption was that the material students entered into the application would not be reused elsewhere.

Students reported that they used the tool in a variety of ways as part of their course assignments. Some tried the application in a wide range of ways, while others were more cautious. Some reported that they benefited from using the generative AI tool at different phases of their work, while others found it rather useless. A key observation from the teaching experiment is that joint practice and instruction in the use of a generative AI tool are important if the tool is to best support students’ creative work and learning.

Vilhelmiina Harju, University Lecturer, Faculty of Educational Sciences

Kalle Juuti, Professor, Title of Docent (pedagogy of science), Faculty of Educational Sciences

Introducing the Visual Consistent Character Creator: A New Era with GPT Builder

Introduction

Welcome to an exciting exploration of the innovative new GPT Builder tool (by OpenAI) and my ambitious first project with it – the Visual Consistent Character Creator, an attempt to unlock the holy grail of the text-to-image sphere: visually consistent characters. This pioneering tool represents a major leap forward in AI-assisted creativity, combining GPT Builder’s capabilities with the user’s unique imagination.

What exactly does it do? In a nutshell, the GPT Builder assists you in creating your own personal AI assistant, tailored to your needs.

In this case, my aim was not just to create a character generator, but to come up with a tool that allows me to construct aesthetically pleasing and, above all, consistent-looking AI-generated characters. As a cherry on top, I had the GPT Builder formulate the prompt in a Midjourney-readable syntax.

John, a fictional character generated using a tailored GPT with OpenAI’s GPT Builder. The initial generation of John by DALL-E.

The Innovative Process: Enabling Detail and Consistency

My Visual Consistent Character Creator enables this through three key capabilities:

  1. Comprehensive trait selection – this allows for diverse and highly customised characters.
  2. Sequential, step-by-step trait selection – this ensures (or at least strives for) coherence and precision, in line with GPT Builder’s innovative approach.
  3. Quality checks through AI-enhanced portraits using DALL-E – a first step to ensure some level of consistency has been achieved before moving to Midjourney (or another text-to-image generator).

By combining these strengths, my tool can cater to a wide spectrum of creative needs while maintaining visual consistency and artistic flair. Let’s have a look at the process.

The Process Step by Step

1. Comprehensive Trait Selection. At the outset, I focused on defining a wide array of character traits, mainly physical attributes (and some secret ingredients I am not revealing). I created a template for this – a matrix of sorts – keeping in mind the need to match the high standard of character portrayal seen in Midjourney. Every trait was carefully chosen to ensure that my GPT could cater to diverse creative needs while maintaining a high level of detail and visual consistency.
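To give you an idea – without revealing the actual template or its secret ingredients – a heavily simplified trait matrix could look something like this (all the categories below are illustrative stand-ins, not the real ones):

Hair: colour, length, texture
Eyes: colour, shape
Face: shape, skin tone, distinguishing marks
Build: height, body type
Clothing: style, colour palette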

2. Sequential Interactivity for Enhanced Precision. A standout feature of the Visual Consistent Character Creator is its methodical, step-by-step trait definition process. Reflecting the innovative approach of the GPT Builder, this process ensures that each character trait is not only distinct but also contributes to a coherent overall portrait. This phase was somewhat tricky, as the GPT Builder, although always complying, did not always “remember” my instructions and occasionally showed signs of hallucination.

3. AI-Enhanced Portrait Prompt Meeting Midjourney Syntax. In the final step, after the Visual Consistent Character Creator has summarised the character’s traits and the user has confirmed them, I instructed the tool to generate two types of portraits – a detailed close-up and a full-body image – with DALL-E, which sits conveniently in this workflow as part of OpenAI’s ecosystem, so the user never has to leave the browser window. At the very end, the Visual Consistent Character Creator produces a prompt the user can copy and paste into Midjourney.
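Purely as an illustration of the kind of output I mean – this is a made-up example, not an actual prompt produced by my GPT:

close-up portrait of John, a middle-aged man with short grey hair, green eyes and a weathered face, wearing a dark wool coat, soft natural light --ar 5:7 --style raw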

After some testing, however, I have to say that the consistency of the characters is quite impressive – but only when generating with DALL-E. When exporting the prompts to Midjourney, the consistency is less evident. A major advantage of the integrated DALL-E in ChatGPT is the possibility to discuss the result with the GPT and ask for modifications to the generated image. This is huge!

John, a fictional character generated using a tailored GPT with OpenAI’s GPT Builder. The initial generation of John by DALL-E.

The Potential Impacts: Opportunities and Challenges

By significantly enhancing the character design process, a visual character designer assistant like the one I just built with GPT Builder could revolutionise creative sectors such as gaming, animation and graphic novels. The ability to quickly generate consistent-looking, detailed, high-quality characters could greatly accelerate production and encourage more experimentation.

However, this also risks reducing human input in creative roles. As AI becomes increasingly capable of mimicking human artistry, important questions around originality and authenticity arise. While AI art tools offer exciting new opportunities, maintaining a balanced perspective regarding their applications will be vital so that we can benefit from their potential while responsibly managing their risks.

Overall, as an innovative new frontier in AI-assisted creativity, this tool promises to take character design into an exciting new era. By harnessing its capabilities thoughtfully, we can unlock immense creative potential.

Screenshot of ChatGPT interface creating iterations of John, a fictional character generated using my tailored GPT with OpenAI’s GPT Builder. Generated by DALL-E.

Serendip – an Immersive Sustainability Learning Adventure

Launching the Serendip project on November 9, 2023!

When we started the Global Campus project in 2022, we were given the freedom to experiment with the limits of online learning. We were expected to do really bold, even risky, EdTech experiments. So we thought very carefully about how to use our time wisely. What does this university want or need? What bold undertaking would benefit all members of the university community, regardless of faculty, and beyond?

One of the strategic goals of the University of Helsinki is to advance ecological sustainability and responsibility. The University is dedicated to integrating the themes of sustainability into all education programmes.

Well-designed digital and physical environments for work, teaching and learning will enhance ecological sustainability and promote encounters with others, support creativity, renew forms of collaboration and improve accessibility.

(University of Helsinki Strategy 2021–2030)

Following this mission, sustainability became the glue of our work. In the design process, we asked teachers and students what they were missing in sustainability education. We learned that the secret wish of sustainability teachers was a virtual space where students from around the world would gather to solve sustainability challenges together.

Students, on the other hand, wanted to travel in 3D worlds and learn how to influence stakeholders. They wished to improve their skills in finding the intervention points in decision-making processes. Students also desired to see hope and to use all their senses. We knew we wanted to do this. And this became the foundation for a bold EdTech experiment: the project called Serendip*.

Based on our pedagogical framework, we believe that learning should be engaging and fun, but at the same time personalised and efficient. By offering students a visually appealing virtual-reality learning environment with diverse, multidisciplinary learning content and a chance to actually train sustainability competencies, we can help students become the change agents this world needs.

The learning content has been developed together with researchers, teachers and students from different disciplines. The research-based content, together with state-of-the-art technologies, makes for an engaging learning experience. In virtual reality we can make the impossible possible, travel in time and place, and practise empathy.

We also identified that by taking AI tools to the next level, we could increase the interaction between the student and the learning content. Therefore, we designed virtual AI-powered characters for different pedagogical purposes in the game. Each discussion is different and personalised, based on the student’s own interests.

The first game episode, Boreal Forest – one of the tipping elements in Earth’s climate system – is an adventure through snow and woods. It combines forest economics, forest ecology and well-being with Indigenous studies, and it helps students practise their systems-thinking, values-thinking and intrapersonal skills.

We see that you have a role to play in sustainability, so we are happy to invite you to participate as a teacher, a student or a subject-matter expert and to co-create the further episodes with us. Learn more at serendip.fi and join the adventure by telling us how you would like to take part – just fill in the form. Could a learning environment for sustainability education look like this?

* Serendip: the word serendipity, originating from an old Persian fairy tale, “The Three Princes of Serendip”, means unplanned fortunate discoveries. The Serendip Learning Adventure is based on a serendipitous learning approach where, through exploration, learners may discover unexpected and interesting connections among phenomena, which can lead to meaningful learning. Serendipity, as a valuable unexplored source of learning, can be fostered through engagement and interaction. We see that sustainability challenges need innovations, which can be the results of serendipitous events.