The molecular flavour of chocolate

What are the chemical compounds responsible for the refined flavours of high-quality chocolate? Researchers have already started working on the molecular recipe for this irresistible treat.

Chocolate might be the most universally favoured treat in the world, with tons upon tons indulged in every year thanks to its unique flavour profile and versatility. Despite that, we know little about the molecular development of its sensory qualities, or about what makes chocolates of different origins and varieties distinct from one another.

A group of researchers has attempted to analyse six specialty chocolates, divided into pairs of sensory attributes: acidic-fruity, cocoa-like/roasty and floral-astringent. The samples went through various extraction processes to isolate odour-active and taste-active compounds. These compounds, also known as odourants and tastants, are directly responsible for our perceptions of smell and taste. One of the methods used even mimics the activity of the human nose during consumption, which is crucial for the release of odourants. The researchers then drew on the previously established sensory effects of individual molecules to compare the amounts of odourants and tastants found in each flavour pair.

Everyone who prefers Granny Smith apples to Honey Crunch knows the apparent distinction between origins and varieties of apples. The same insight applies to cocoa beans, whose taste varies from delicate floral notes to unpleasant bitterness. This flavour diversity, however, does not guarantee the consistency required for mass-produced chocolate – so big corporations resort to blending beans from different sources. While past analyses were conducted with industrially produced chocolate, this study stands out for its novel use of single-origin chocolate in coordination with the Cocoa of Excellence (CoEx) program. CoEx recognises cocoa quality and endorses the diversity neglected by large businesses. Its biennial award promises global recognition to top-quality cocoa producers worldwide, and the winning chocolate is chosen based on an official set of quality and flavour guidelines. Consequently, the results of this study can provide an essential basis for a standard sensory assessment of superior cocoa and chocolate.

The origins and varieties of cacao beans are behind the vast diversity in chocolate.

The hypothesis was that the presence, or absence, of known flavour-active compounds reveals differences in the molecular compositions of chocolates with different flavour profiles. As a rule of thumb, the higher a compound's concentration, the more intense the sensation we perceive. The minimum detectable amounts are called odour and taste thresholds. This phenomenon has a biological advantage: sensory cells react only to large amounts of essential substances while being much more sensitive to potentially dangerous compounds (which, more often than not, come with an unpleasant flavour). Thus, to determine the full sensory effect of the chocolate samples, the researchers calculated the ratio of each compound's dose over its corresponding threshold (DoT). Each molecule was subsequently matched to its aroma extract dilution analysis (AEDA) – a procedure for evaluating the potency of odourants.
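The DoT calculation itself is just a ratio: concentration divided by sensory threshold. A minimal sketch, with invented compound concentrations and thresholds (not the study's data):

```python
# Toy illustration of dose-over-threshold (DoT) factors.
# The concentrations and thresholds below are invented for the example,
# not taken from the study.
concentrations = {            # mg per kg of chocolate (made up)
    "acetic acid": 2400.0,
    "citric acid": 900.0,
    "lactic acid": 650.0,
}
thresholds = {                # taste thresholds, mg per kg (made up)
    "acetic acid": 300.0,
    "citric acid": 450.0,
    "lactic acid": 700.0,
}

def dot_factor(compound: str) -> float:
    """DoT = concentration divided by the compound's sensory threshold."""
    return concentrations[compound] / thresholds[compound]

for c in concentrations:
    flag = "above threshold" if dot_factor(c) >= 1 else "below threshold"
    print(f"{c}: DoT = {dot_factor(c):.2f} ({flag})")
```

A DoT above 1 means the compound alone sits above what our senses can detect; below 1, it may still matter, but only in combination with others – which is exactly the kind of collective effect the study keeps running into.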

The impact of molecules on sensory perception turns out not to be so linear.

Looking deep into the molecular clusters of the acidic-fruity chocolate, it soon became evident that no single fruity-smelling compound is responsible for the sensory distinction. The samples show visible variation in the concentrations of known esters despite sharing the same flavour description. What baffled the researchers even more was the presence of roasty odour compounds in this group. With some of the acidic-fruity samples further divided into specific notes of brown fruits or dried fruits, the initial goal of interpreting chocolate flavours on a molecular level becomes even more tangled. According to the DoT parameters, one ester may have an additive effect on the brown-fruit odour even at concentrations below its threshold.

Moreover, acetic acid, known for its pungent scent and sour taste, seemed to be mainly responsible for the acidity of this group of samples, given its highest DoT factors. On the other hand, the presence of citric acid and lactic acid, two other sour-tasting acids, implies that it is the combination of all these acids that evokes the acidic flavour perception – collectively rather than individually.

Similar results were observed in the two other groups. The most abundant odourants in the cocoa-like/roasty group weren't described as such during AEDA. Some floral-smelling odourants also appear more in this group than in the supposedly floral-astringent one. This eccentricity suggests possible interactions between flavour- and odour-active compounds. It is a fine-tuned collaboration – key molecular compounds dancing hand in hand gracefully to the melody of fine flavours.

The researchers believe that their study demonstrates for the first time how diversity in dark chocolate flavour develops under a microscopic gaze. Using the results, producers can optimise quality based on the selection of raw materials and processing. Nevertheless, we are far from grasping the nuances of chocolate flavours and even further from recreating its magic in our kitchen laboratory.

Hân Đỗ



How Virtual Reality and a Talking Dragon can Help to Diagnose Your Child

     Do you have a child, or someone close to you in their early childhood, who you think may have attention deficit hyperactivity disorder (ADHD)? Have you ever thought about what goes into diagnosing ADHD in children? A parent might be surprised if, upon bringing their child in to be tested for ADHD, their child were handed a VR headset and then instructed, by a talking dragon, to complete sets of tasks around a virtual apartment. But in the not-so-distant future, this could become commonplace for testing children who display early symptoms of ADHD.


      ADHD, one of the most common neurodevelopmental disorders (affecting about 5.9% of children worldwide), is associated with many difficulties, including but not limited to: impairments in quality of life, impairments in multiple cognitive domains, emotional and/or social impairments, and educational underachievement. The methods used to measure these cognitive impairments, while often effective, may fail to capture exactly how symptoms of ADHD manifest themselves in the unpredictable setting of day-to-day life. These methods are oftentimes highly structured, monotonous and lacking external stimuli. Think about all of the distractions that you encounter every single day. I'm sure that if you tried to keep track of them, you would lose count within an hour of leaving your bedroom. Think also of the variety of goals, tasks or objectives which you set out to do every day. Even if you only have one goal, there is nobody telling you exactly how to reach it. You have to create a series of sub-tasks to take steps toward that goal. These things cannot be accounted for in the highly structured nature of the standard methods used to identify symptoms, because they do not simulate everyday life. So what could be a solution? Who can help? Maybe a VR game named EPELI can.

     Neuroscience Meets … Videogames?

      Video games are loved by many, but also very much disliked by nearly as many. They have a bad reputation in some circles. Most who oppose video games view them as a waste of time or a distraction from the real world. This can be true, but video games have also shown that they can be educational, can help young and old people alike learn new things, and can exercise the brain in an entertaining way. But can video games also be a tool? That's what the developers of EPELI believe. EPELI, an acronym for Executive Performance in Everyday Living, is a VR video game for children developed by researchers in Helsinki. Its purpose is to assess the symptoms of children with ADHD, as well as to distinguish their behavior from that of typically developing children, meaning children without any neurodevelopmental disorders.

Virtual Reality has already been used before in assessing children with ADHD, most notably in the Continuous Performance Test (CPT). What makes EPELI different from the CPT, however, is the environment and the tasks given. The CPT is set in a virtual classroom (kinda boring) and its tasks are usually something like watching and waiting for a certain character or image to show up on screen. EPELI places kids in a virtual apartment with several rooms, interactable objects and interactable distractors (e.g. a television that can be switched off). Then kids are basically asked to do chores (also kinda boring).

      So how do I play the game?

      So you decide that you'd like to give EPELI a try. If you're worried about learning the controls, don't be. Once you put on your Oculus Go headset and enter your new apartment, you will be greeted by a friendly little dragon. This little dragon will walk you through a practice session, in which you'll be shown how to move around the apartment by pointing to your desired waypoint, and how to interact with objects of your choosing. Once you're ready for the tasks, this same dragon will give you your first quest. Each task is composed of several smaller sub-tasks, most of which you are free to do in any order; some of them are timed. Tasks are given orally, and could be something like “get dressed”, “brush your teeth”, or “make your bed”. After you know everything you need to know, the little dragon will disappear, but don't worry, he will be back to give you your next task once you're finished.


     So, why?

      I'm not sure why they chose to do certain things, such as having a cartoon dragon host the game, but the purpose of the apartment setting is to observe how children with ADHD perform in a setting other than a classroom – to see how their symptoms arise while they are doing things as mundane as household chores. While the child is playing the game, everything is monitored: head movements, number of interactions, the amount of time spent interacting with planted distractors, efficiency of movement around the apartment, and controller movements. All very important data. The child also has a watch, which they can view by looking down at their controller. This is important because it lets the observers see how the child manages their time, giving insight into how they perceive it – children with ADHD are known to perceive time differently.
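To give a feel for what that monitoring data might look like, here is a hypothetical sketch; the field names and the derived metric are invented for illustration and do not reflect EPELI's actual logging format:

```python
from dataclasses import dataclass

# Hypothetical session record. Every field name here is invented for
# illustration; EPELI's real logging format is not public in this post.
@dataclass
class SessionLog:
    task_seconds: float          # total time spent on the assigned tasks
    distractor_seconds: float    # time spent with planted distractors
    interactions: int            # number of object interactions
    clock_checks: int            # how often the child looked at the watch

    def distraction_ratio(self) -> float:
        """Fraction of active time spent on distractors rather than tasks."""
        total = self.task_seconds + self.distractor_seconds
        return self.distractor_seconds / total if total else 0.0

log = SessionLog(task_seconds=240.0, distractor_seconds=60.0,
                 interactions=35, clock_checks=4)
print(f"distraction ratio: {log.distraction_ratio():.2f}")
```

A metric like this, aggregated over many children, is the kind of signal researchers can then compare between groups with and without ADHD.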


      In short, EPELI is another way to use VR for assessing children with ADHD, in a different fashion than the already widely used CPT. This is an example of video games and virtual reality furthering the effectiveness of unbiased computerized testing, helping it towards its full potential.



Seesjärvi, E., Puhakka, J., Aronen, E. T., Lipsanen, J., Mannerkoski, M., Hering, A., Zuber, S., Kliegel, M., Laine, M., & Salmi, J. (2022). Quantifying ADHD Symptoms in Open-Ended Everyday Life Contexts With a New Virtual Reality Task. Journal of Attention Disorders, 26(11), 1394–1411.

The Sun could send us back to the Middle Ages at any moment. What can we do about it?

It is already a matter of fact that most of our lives today fully rely on technology. Almost our entire identities are tied to a small electric brick, a few banks in China have announced that they won't accept cash anymore and will do all their business in digital currency, and nowadays it seems that almost every system has some technological side to it. It has gone so far that even everyday devices that don't need to be connected to the internet, or even to have computers in them at all, have been absorbed into this movement. For example, a few days ago while I was browsing a social media site, I found a picture of a grill that refused to open because it needed “a software update”. Can you imagine the irritation of having to wait EVEN MORE for that delicious food to be cooked on your expensive grill?

Now, what if I were to tell you that all of that is in great danger? Almost every system that current society runs on has a big threat in front of its eyes, and there is even a possibility that it could collapse because of an event that we are unable to predict with 100% accuracy – one that seems to be getting more frequent. The event in question: the Coronal Mass Ejection.

You see, when we talk about the Sun, there are mainly two “solar weather” events that we are preoccupied with: Solar Flares and, as mentioned earlier, Coronal Mass Ejections.

They are both caused by roughly the same thing: the contortion of the Sun's magnetic field lines. How does this happen? Well, the Sun randomly develops so-called “sunspots”. These are areas with a lower temperature than the matter surrounding them, but far denser in magnetic energy. If you were to point a telescope straight at the Sun, you could see these little dark dots on it (if you wouldn't go blind in the process, of course). One more remark: they might seem tiny, but they are usually as big as the Earth, or a few times its size. Not that “tiny” when you think about it.

When two sunspots appear near one another, they “connect magnetically”, which basically means that they start to hold hands. Very tightly. And the hands move chaotically, at high speeds and sometimes with tremendous energy. But when the hold breaks… that's when we run into problems, and these two weather events are created by the energy released in that rupture. The main differences between the two lie in what they emit, their energy levels and their travel time to Earth. When compared, a solar flare is more like the muzzle flash of a blank gun, spread across its vicinity (so the energy is dispersed over a larger area), while a C.M.E. is more like a cannonball, propelled in a single direction (a smaller area of impact, but far more power when it hits an unlucky target).

Coronal Mass Ejection. Image: UNH Earth, Oceans, & Space

So, everything is nice and informative, they seem cool and all, but the real questions are: Are we in any danger? Should we care about them at all? Can we use C.M.E.s as actual cannonballs if we have a large enough medieval cannon? Kind of, yes, and… with great sadness, no.

Thankfully, we have a really great magnetic shield around the Earth that keeps us safe from more radiation than you can imagine. Really, without it, we would be hit with so much cosmic radiation and so many charged particles from the Sun that these meat packs called “Humans” would not stand a chance of surviving. This fella is really nice; give it a beer if you ever can, or some cookies.

Now, if we are not in any real danger, why should we care about all of this at all? Well, while our soft tissue cases will be safe, everything you could call an electronic device – laptops, phones, electronic key-locks, car systems, bank systems, YouTube servers, aeroplane instruments and autopilots, almost every somewhat complex system of our current society – is not that happy about it.

We are concerned because records show that, in the past, Earth has been hit a considerable number of times by some powerful C.M.E.s. For example, you may have heard of the so-called “Carrington Event”, the most intense geomagnetic storm we have ever received. It happened in 1859, when telegraph lines had already been around for about 20 years and were established as a main means of communication. When the rush of electromagnetic waves hit, it was so strong that telegraph lines snapped and caught fire, and there were also significant power outages. The waves also painted some pretty auroras that reached as far south as the Caribbean. If something like this were to happen today, almost every single piece of technology we have would just get fried and become practically useless. Maybe you could use your phone as a literal brick this time, but nothing more.

Carrington Solar Flare of 1859

Since the “technological revolution”, solar activity has been in one of its lowest periods, which meant a very low number of sunspots, and no electronics manufacturer had to take into account the possibility of the Sun getting angry and shooting cannonballs at us. As a result, almost all of today's technology is vulnerable to such an attack, without any sort of protection against it. Recent research suggests that the Sun's “low activity” period has been over for 2 or 3 years already, and that it might actually be starting one of its most active periods. That means a significantly higher number of sunspots, and a much higher risk of an actual C.M.E. hitting Earth.

Luckily, for a device to become nothing more than a brick, it needs to be on, or connected to a system that is on. So all we need to keep our cat-video servers safe is a warning from some smart scientists, and yay! All is fine! Well… even though we are not 100% C.M.E.-proof, we are decently safe. In recent years there have been significant improvements in how we detect Coronal Mass Ejections, and even though these methods are not perfect, we can protect ourselves against the risk of going back to the dark ages. Hopefully, the progress will keep pace with the Sun's future level of activity.


Made by: Daniel-Ioan Mlesnita

Sources for this blog post:

  • Chapman, S. C., Horne, R. B., & Watkins, N. W. (2020). Using the aa index over the last 14 solar cycles to characterize extreme geomagnetic activity. Geophysical Research Letters, 47, e2019GL086524.
  • Sangeetha Abdu Jyothi. 2021. Solar Superstorms: Planning for an Internet Apocalypse. In ACM SIGCOMM 2021 Conference (SIGCOMM ’21), August 23–27, 2021, Virtual Event, USA. ACM, New York, NY, USA, 13 pages.

Can a computer predict cancer outcomes?

Cancer is a serious illness that, unfortunately, is not as simple to treat as most conditions one commonly seeks medical help for. Scientists have yet to discover an effective cure, which means that currently doctors and unfortunate patients have to rely on complicated and often lengthy treatments, in which every day counts. This is why an accurate prognosis at the very start is extremely important in deciding on the course of action, both for the doctor and the patient. In some cases, having a good prediction of the outcome may help in choosing a more effective treatment; in other cases, where the chances of survival are slim, a less aggressive treatment or no treatment at all can be opted for, in order to make the last days of a terminal patient as easy as possible. So what if computers could make such a prognosis?

At the moment, the most common approach to predicting survival chances for oncology patients relies on a doctor's educated guess after looking at various test results. The accuracy of such a guess depends on the doctor's methodology and experience, which also means that two different doctors might disagree on how likely a patient is to survive. But what if this decision-making could be delegated to a computer? This is the question scientists from Finland and Sweden tried to answer in a recent study looking to find out whether machine learning could be used to accurately predict breast cancer outcomes.

In the study, the researchers tried to create a smart model – consider it just a computer program – that would take in only an image of tissue from a tumour and make a prediction for the patient by assigning them a low or high risk of mortality. To train the model, the scientists used historical data available on numerous breast cancer cases throughout Finland.

The first layer of the model was trained to analyse the cancer tissue images and identify their noteworthy features – the kind a human doctor would look for with their own eyes to make a prediction. These features were then converted into a digital form that is easier for a computer algorithm to work with. The second layer was trained to take data from the first and use it to predict a low or high risk of mortality. Since the historical data used for training contained the actual survival outcome for each case, it was possible to give the learning algorithm a set of inputs and a set of expected outputs, so as to create a model that can somehow make sense of the connections between the two. After training was completed, the model could take in previously unseen images of tumour tissue and make its own educated guesses.
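To make the two-layer idea concrete, here is a deliberately tiny sketch: stage one boils an "image" down to two numbers, and stage two classifies by comparing against the average features of past low- and high-risk cases. The real study used deep neural networks on actual tissue images; everything below, including the training cases, is invented for illustration:

```python
# Miniature stand-in for the two-stage pipeline described above.
# An "image" here is just a small grid of pixel intensities, and the
# classifier is a nearest-centroid rule. Purely illustrative; not the
# study's actual model.

def extract_features(image):
    """Stage 1: reduce an image to a couple of simple numeric features."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return (mean, spread)

def train_centroids(cases):
    """Stage 2 training: average the features of each outcome group."""
    centroids = {}
    for label in ("low risk", "high risk"):
        feats = [extract_features(img) for img, lab in cases if lab == label]
        centroids[label] = tuple(sum(f[i] for f in feats) / len(feats)
                                 for i in range(2))
    return centroids

def predict(image, centroids):
    """Stage 2 prediction: pick the outcome group with the nearest centroid."""
    f = extract_features(image)
    return min(centroids, key=lambda lab: sum((a - b) ** 2
               for a, b in zip(f, centroids[lab])))

# Invented training "cases": dark, uniform tissue maps to low risk here.
cases = [
    ([[1, 1], [1, 2]], "low risk"),
    ([[2, 1], [1, 1]], "low risk"),
    ([[8, 2], [9, 1]], "high risk"),
    ([[9, 1], [8, 3]], "high risk"),
]
centroids = train_centroids(cases)
print(predict([[1, 2], [2, 1]], centroids))   # a previously unseen image
```

The point of the toy is the division of labour: one stage turns pixels into features, the other turns features into a prediction, and the training labels are what glue the two together.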

The next challenge was to verify exactly how accurate the model's guesses were. To tackle that problem, the researchers assembled a panel of oncology experts to compete with the model. A new set of breast cancer cases was pulled from the historical data; this set did not contain any cases the model had seen before. Both the experts and the model gave their survival prediction for each case, and the predictions were then checked against the actual outcomes. It turned out that the model gave a correct prediction for 60% of the cases, while the experts hit the bull's eye for 58% of them – a tight win for the model.

While the scientists state that their study has important limitations and doesn’t analyse all variables available to doctors in real life, they think that their approach to outcome prediction has real potential and is worth looking into and developing further. They also discovered that in some cases their model would give accurate predictions based on factors unknown to science – this means that sometimes a machine learning model can see more than a human doctor, which only strengthens the potential of this technology. Who knows, maybe some day all doctors will consult with smart models before giving a prognosis?

Anton Matveev

Turkki R, Byckhov D, Lundin M, et al. Breast cancer outcome prediction with tumour tissue images and machine learning. Breast Cancer Res Treat. 2019;177(1):41-52. doi:10.1007/s10549-019-05281-1

Can this algorithm outcompete all its competitors?

You would be surprised to know how much difference a simple idea can make when it comes to the efficiency of an algorithm.

By Gvendolin Fonyó

Horizontal DNA Double Helix
Image: MR.Cole_Photographer / Moment / Getty Images

From computer networks to biological data analysis, the graphs named after the Dutch mathematician N. G. de Bruijn are used in a wide variety of scientific fields. But as widespread as they are, until now there was no space- and time-efficient way of building such De Bruijn graphs. Just last year, a group of scientists set out to devise a method that accomplishes the task with an efficiency never seen before. Their answer is simple: buffering, also known as batch-adding data when updating a graph.

Suppose that you are working on a project that briefly touches on genomes. Writing a small paragraph shouldn't take too long, right? But then you discover that simply zooming into a graph to find the information you are looking for isn't possible. Instead, you have to scroll through tens or potentially hundreds of graphs, because every time new data became available, a new graph had to be created.

The issue

One of the main problems bioinformatics experts have been facing in recent years is the rapidly growing amount of data being collected. Assembling such volumes of nucleotide sequences poses significant algorithmic challenges. You can imagine nucleotides as pieces of Lego in a game of building DNA and RNA. At the same time, the demand for dynamic solutions is also rising: being able to add, change and delete parts of a visual representation of one's findings is highly sought after. The biggest challenge, however, is combining this mutability with space and time efficiency, since mutability and compressibility are contradictory by nature.

The science behind it

A graph works by representing bits of information as circles or rectangles, also known as vertices (singular: vertex), and the connections between them as lines (or arrows in the case of a directed graph), also known as edges. It is similar to how your house is connected to the building down the road by the street that you live on (two vertices connected by an edge), while reaching the nearest grocery store takes a few, or a lot, more turns (going from 1 to 6 on the graph below).

Graph, created in Neato
Image: AzaToth, public domain, via Wikimedia Commons

A De Bruijn graph works similarly, representing overlaps between sequences in a genome. In biology, a genome is all the genetic information of an organism. While this type of graph gets its name from Nicolaas Govert de Bruijn, it was actually discovered independently by both the British mathematician Irving John Good and De Bruijn.

The De Bruijn graph as a line graph.
Image: David Eppstein, public domain, via Wikimedia Commons
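For the curious, here is roughly how such a graph is built from a DNA sequence: slide a window of length k along the sequence, and let each k-mer connect its prefix to its suffix. A minimal Python sketch (a real assembler, including the one in this study, uses compressed data structures rather than plain dictionaries):

```python
from collections import defaultdict

def de_bruijn_edges(sequence: str, k: int):
    """Build De Bruijn graph edges: each k-mer links its (k-1)-mer
    prefix vertex to its (k-1)-mer suffix vertex."""
    graph = defaultdict(list)
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        graph[kmer[:-1]].append(kmer[1:])
    return dict(graph)

graph = de_bruijn_edges("ACGTACG", k=3)
# k-mers of ACGTACG: ACG, CGT, GTA, TAC, ACG -> 2-mers become vertices
for prefix, suffixes in sorted(graph.items()):
    print(prefix, "->", suffixes)
```

Repeated k-mers (here ACG appears twice) become parallel edges, which is exactly how overlaps between reads show up when assembling a genome.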

Line of attack

The researchers' approach to implementing mutability is to create a space-efficient De Bruijn graph with two supporting structures. As requests to add or delete information come in, only the corresponding support element is updated. This is where you can see the magic of buffering happen. Buffering data means that the information waiting to be added to the graph is first collected in a virtual pool, and only once that pool is full is the static, or main, data structure updated. This eliminates lots of unnecessary computations and saves computing power. Testing happens against criteria like the memory and time needed to complete the requests.
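The buffering pattern itself can be shown in miniature. The sketch below only illustrates the general idea, not BufBOSS's actual implementation: additions pile up in a small buffer, and the expensive rebuild of the main structure happens only once the buffer is full:

```python
class BufferedGraph:
    """Toy sketch of buffered updates. The 'static' structure stands in
    for an expensive-to-rebuild compressed graph; additions accumulate
    in a buffer and are absorbed in batches. Not BufBOSS's real code."""

    def __init__(self, buffer_limit: int = 3):
        self.static_edges = set()   # stands in for the compressed structure
        self.buffer = []            # the virtual pool of pending additions
        self.buffer_limit = buffer_limit
        self.rebuilds = 0           # how often we pay the big rebuild cost

    def add_edge(self, edge):
        self.buffer.append(edge)
        if len(self.buffer) >= self.buffer_limit:
            self.flush()

    def flush(self):
        """One expensive rebuild absorbs the whole buffer at once."""
        self.static_edges.update(self.buffer)
        self.buffer.clear()
        self.rebuilds += 1

g = BufferedGraph(buffer_limit=3)
for e in ["AC-CG", "CG-GT", "GT-TA", "TA-AC"]:
    g.add_edge(e)
print(len(g.static_edges), g.rebuilds, len(g.buffer))
```

With a buffer of three, four additions trigger only one rebuild instead of four; scale the numbers up and the savings become the space-time trade-off the paper measures.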

Testing and conclusion

After rigorous testing, the group found that their method, named BufBOSS, is up to five times faster than its closest competitor, Bifrost. When it comes to the time required for adding new sequences, BufBOSS is a strong second, outcompeted by Bifrost by only a factor of two, but beating all other contestants by a factor of ten or more.

Their conclusion was that BufBOSS offers attractive trade-offs in memory, time and data compared to its competitors. They did, however, emphasise that some of the other available methods could be greatly improved by further development. This means that you do not necessarily have to develop a whole new method if keeping the existing one(s) up to date is a realistic option.

Alanko, J., Alipanahi, B., Settle, J., Boucher, C. & Gagie, T. (2021). 'Buffering updates enables efficient dynamic de Bruijn graphs', Computational and Structural Biotechnology Journal, vol. 19, pp. 4067–4078.

Computer speech-to-text just got even better!

If you have felt hesitant to implement or learn about speech recognition software due to the complicated names most of the utilities have, fear not: Whisper is here to save the day.

The new machine learning model does not only score a win in the names department; its performance is also incredibly good. If your machine learning (ML) terminology has got rusty: an ML model is the output of an ML algorithm. The algorithm takes in some training data and produces and adjusts the internal properties of a model so that it is able to (in this case) classify words. That's just a very brief, high-level overview, but you can think of the parent algorithm as a potter who makes the pot (the model), shaping every way it will work in the future (except that pots don't recognize words, or at least not as well as Whisper). As mentioned before, Whisper does its job very well, but the new model does not outperform its competitors when the audio is clear and free of background noise. As soon as the recording conditions get unfavourable, though, that's where the new model really shines: its performance is up by 25% in such noisy environments. The following graph presents the findings.

The horizontal axis shows how bad the audio quality is, and the vertical axis shows the WER measure, i.e. the word error rate. The higher the percentage, the worse a job the algorithm does. Thus we see the aforementioned disadvantage compared to other state-of-the-art models: Nvidia's stt_en_conformer_transducer_xlarge pulls out a small but noticeable lead in the 40-to-0 range, but as soon as the audio gets worse, Whisper takes the lead. However, Whisper has another ace up its sleeve: it is multilingual, working on 99 languages (with varying performance)! Nvidia's aforementioned model, meanwhile, has never even heard of other languages.
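For the curious, WER is computed as the minimum number of word insertions, deletions and substitutions needed to turn the transcript into the reference text, divided by the number of words in the reference. A small self-contained sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER: edit distance between word sequences, divided by the
    number of words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, over words not characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat",
                      "the cat sat on a mat"))  # one substitution in six words
```

So a WER of 25% means roughly one word in four had to be corrected, which is why the metric climbs so steeply as audio quality drops.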

So how have the scientists at OpenAI achieved such great results? Whisper is based on the machine learning transformer architecture, trained on 680,000 hours of speech – around the same number of hours as that one friend spent convincing you to buy crypto. For everyone who wants to learn more about the former, I highly suggest this video by IBM; for everyone who wants to know more about the other transformers, I recommend watching some Michael Bay movies. The training data is the more interesting part, though: a huge amount of audio aggregated from many sources and put together to create this model.

The best part of it all is that it is open source, meaning everyone can use it and implement it wherever they want. Not only that, but the way programmers can use it and adapt it in their apps is super easy. As a result, we might start seeing more speech-to-text functionality implemented in websites and digital applications. This in turn will make our digital world more accessible. This model is a big milestone for the AI world, what a time to be alive!


The paper is easily digestible even for people who are not AI experts, though some web searches will probably be needed.

Radford, A., Kim, J. W., Xu, T., et al. (2022). Robust Speech Recognition via Large-Scale Weak Supervision.

Life with software bots

Imagine a world where you talk to a bot to have various tasks done. Oh wait! We are already in that world. When you visit some website, a chatbot may pop up and greet you saying “Hello, how may I help you?”; and then you can have a conversation with it to find the information you need or get assistance. On your phones, there are AIs that can understand your speech, have actual dialogues with you and take care of various tasks for you. These are software bots or bots for short.

Software bots come in many shapes and sizes. Some are just simple chatbots that have conversations with you through texts and have limited capabilities. Others, like Apple’s Siri or Amazon’s Alexa, can have verbal conversations with you, read your email, call people, book tickets etc. The authors of Software Bots (2018, IEEE software. [Online] 35 (1), 18–23) describe different types of bots by their purpose, by their intelligence and by how users can interact with them.

Bots are created for a purpose. Bots such as Siri or Cortana can perform a wide range of tasks with very advanced intelligence. A large number of existing software bots are not generalists like Siri or Cortana. Some bots only perform specific actions on behalf of the user (transactional bots), some only fetch information and answer questions (information bots). There are also bots with more work-oriented purposes like productivity bots and collaboration bots. These bots’ main purpose is to facilitate individual human work life or collaboration between many parties (not all of them necessarily being humans).

All bots have some form of intelligence, from simple logic to advanced artificial intelligence. This greatly influences how they interact and integrate with human life. Some bots can only understand simple textual commands and perform rigid tasks accordingly. Other, more advanced bots can actually understand human speech. To cooperate with humans, bots not only have to understand humans but also have to understand their environments. Many advanced bots, like Siri and Alexa, are aware of ongoing situations through your email, calendar, the weather data etc. Other simple bots, like the usual support chatbots, actively greet users, prompting them for a conversation, and are only aware of the information the user inputs. Different bots may also have different work environments, which greatly influences how the bot-human interaction is initiated. In some situations bots are only activated by special commands such as “Hey Siri!”. In other situations, bots can become active based on previous interactions; for example, the AI on your phone can wake you up with a greeting or remind you of your upcoming appointments.
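To illustrate the simplest end of this spectrum, here is a toy bot that stays silent until it hears a wake phrase and then answers by keyword matching; the wake phrase, commands and replies are all invented for the example:

```python
from typing import Optional

# Toy illustration of the simplest kind of bot: a wake phrase plus
# keyword matching. Everything here is invented for the example.
WAKE_PHRASE = "hey bot"
RESPONSES = {
    "weather": "It looks sunny today.",
    "calendar": "You have one appointment at 14:00.",
}

def handle(utterance: str) -> Optional[str]:
    """Stay silent unless addressed by the wake phrase; then answer by keyword."""
    text = utterance.lower()
    if not text.startswith(WAKE_PHRASE):
        return None                      # bot is not activated
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    return "Sorry, I did not understand that."

print(handle("Hey bot, what's the weather like?"))
print(handle("what's the weather like?"))  # ignored: no wake phrase
```

Everything a real assistant adds on top of this skeleton, such as speech recognition, context from your email and calendar, or proactive reminders, is what the authors mean by more advanced intelligence.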

Bots are present almost anywhere software can be applied. But they did not just come to life on their own; someone created them. They are, after all, just software. Today, anyone with adequate knowledge can create a software bot, and there are software libraries aimed specifically at the creation of new bots. This is very typical of the software development world: you can make software to help you make software. Usually, a newly created bot is not automatically integrated into human life; users have to choose to use it. Siri is only available on Apple phones, Cortana only on Windows machines. And even when they are available, most bots have to be installed, enabled, and given permission before they can start working with humans.
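As a toy illustration (not any particular library or product), the simplest kind of bot described above, one that only understands a few fixed textual commands, can be sketched in a handful of lines. The command names and canned answers here are invented for the example:

```python
# A toy "information bot": it maps fixed textual commands to handlers.
# Real bot frameworks add natural-language understanding, platform
# integration, and conversation state on top of this basic pattern.

def weather(_args: str) -> str:
    # A real bot would query a weather service here.
    return "It is sunny today."

def greet(args: str) -> str:
    name = args or "stranger"
    return f"Hello, {name}!"

COMMANDS = {"weather": weather, "hello": greet}

def handle(message: str) -> str:
    command, _, args = message.strip().partition(" ")
    handler = COMMANDS.get(command.lower())
    if handler is None:
        # Rigid behaviour: anything outside the command list is rejected.
        return "Sorry, I don't understand that."
    return handler(args)

print(handle("hello Alice"))     # Hello, Alice!
print(handle("book a flight"))   # Sorry, I don't understand that.
```

Everything a bot like this “knows” is hard-coded in its command table, which is exactly why such bots feel rigid compared to assistants that understand free-form speech.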

Software bots are a very specific type of software that blurs the “hard boundary” previously present in human–machine interaction. They are already doing this every day, albeit at an early and developing stage. Some computer scientists offer their insights in the face of the increasing adoption of software bots. We should use bots carefully: bots should not replace but facilitate and enhance human-to-human interaction. We should also be careful in making bots. Since bots blur the human–machine boundary, users should always know what to expect when interacting with one; bots should always make users aware of whether they are talking to a bot or to other humans. And depending on their purpose, bots should have the appropriate design and personality to best facilitate user interaction. Finally, bots should do no harm. It is wise to consider the safety and ethics of software bots before we adopt them too pervasively.

Lebeuf, C. et al. (2018) Software Bots. IEEE software. [Online] 35 (1), 18–23.

Can the bots beat the pros?

Artificial intelligence and machine learning have lately been the biggest buzzwords in tech media, and for good reason. Every time you use a search engine, talk to a chatbot, or scroll through a social media app, there is most likely some application of artificial intelligence at work. What I want to focus on here, however, is AI applied to games. From simple games such as tic-tac-toe to harder ones on the level of chess and Go, all have been cracked by machine learning algorithms. Recently this was pushed to an even higher level with the introduction of OpenAI Five, which took on Dota 2.

Now what is Dota 2?

Dota 2 (Defence of the Ancients 2) is a video game in which a team of 5 players battles it out against another team of 5. The goal of the game is to destroy the enemy team’s Ancient, a structure that lies in the centre of that team’s base, which can be seen at the top right of the map below.

Fig. 1: Map of Dota 2

Each player pilots their own character from a top-down view of this map, working together as a team.

Fig. 2: A team fight in Dota 2

The screenshot above shows an example of such a fight.

What you might immediately realise is the inherent complexity of this game compared to something like chess, and it gets worse. Not only does a player need to consider all of their possible actions, such as abilities, movement mechanics, and targeting; they also need to decide when to do what and why, and on top of it all think about the overarching macro strategy. The game also relies on incomplete information. As can be seen in Fig. 1, the map has shaded areas beyond the diagonal: if a player pans their camera to these regions, they cannot find out what heroes or actions are present there, and must deduce this from the information they do have. This makes you wonder: how could an AI ever handle such a complex task?

A freakish amount of smart computing

As done by the OpenAI Five team, the trained neural network received 16,000 inputs every 4 frames of the game being played and chose among up to a whopping 80,000 outputs to control its characters in the same time, based on the information received. These raw numbers are impressive in themselves, but the way the research team weighted rewards for different actions, based on their time of effect etc., was also state of the art. The team used 256 GPUs and 128,000 CPU cores to play 180 years of in-game time per day to train the AI to play Dota 2.
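To get a feel for the scale of that training setup, a quick back-of-the-envelope calculation using the figures above shows the speed-up of the parallel simulations over real-time play:

```python
# Rough arithmetic on the training scale quoted above.
years_per_day = 180                  # in-game years played per real day
days_per_year = 365
game_days_per_real_day = years_per_day * days_per_year
print(game_days_per_real_day)        # 65700
# i.e. the simulations together run roughly 65,700x faster than real time
```

In other words, every real day of training squeezed in more Dota 2 playtime than a human could accumulate in hundreds of lifetimes.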

What were the results?

Believe it or not, OpenAI Five was able to beat the world champions of Dota 2 in a show match after less than a year’s training. It then flexed an insane win rate of 99.4% in public matches against human teams. Below is a graph of the TrueSkill rating of OpenAI Five over training.

Fig. 3: TrueSkill rating of OpenAI Five

As you can see, the last team it played against was the world champions. It is also shocking how quickly the AI surpassed semi-professional teams. So finally, to answer our question: yes, the bots can beat the pros.


[1] Berner, Christopher, et al. “Dota 2 with large scale deep reinforcement learning.” arXiv preprint arXiv:1912.06680 (2019).

The Collatz Conjecture – Deceptively Simple

Let’s do a magic trick.

Pick a number, any positive whole number. Now if it’s even, divide it by 2. Otherwise, if it’s odd, multiply it by 3 and add 1. Repeat this process over and over until you get to 1.

For example, if we start with the number 6, we would perform the following steps:

  1. 6 is even, so divide by 2: 6 / 2 = 3
  2. 3 is odd, so multiply by 3 and add 1: 3 * 3 + 1 = 10
  3. 10 is even, so divide by 2: 10 / 2 = 5
  4. 5 is odd, so multiply by 3 and add 1: 5 * 3 + 1 = 16
  5. 16 is even, so divide by 2: 16 / 2 = 8
  6. 8 is even, so divide by 2: 8 / 2 = 4
  7. 4 is even, so divide by 2: 4 / 2 = 2
  8. 2 is even, so divide by 2: 2 / 2 = 1

I can guarantee that no matter the number you pick, you’ll end up at the number 1 after a while.
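The steps above are easy to turn into a few lines of code; here is one way to write them in Python:

```python
def collatz_sequence(n: int) -> list[int]:
    """Return the Collatz sequence starting at n, ending at 1."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    sequence = [n]
    while n != 1:
        if n % 2 == 0:        # even: divide by 2
            n //= 2
        else:                 # odd: multiply by 3 and add 1
            n = 3 * n + 1
        sequence.append(n)
    return sequence

print(collatz_sequence(6))    # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

Running it on 6 reproduces exactly the eight steps listed above. Try 27: it takes over a hundred steps and climbs past 9,000 before finally coming back down to 1.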

You may not realise why this is surprising. But if you think about it, there is no obvious reason why we should always end up at 1: after all, multiplying a number by three and adding one increases it more than dividing by two decreases it. Shouldn’t we expect at least some number to blow up to infinity?

It turns out that if we run this on a computer, every number we test ends up at one. However, this doesn’t prove anything. Mathematicians search for formal proofs that hold for every number, and are not happy with a trial-and-error method. After all, if you test a bazillion numbers, how do you know that the bazillion-and-first doesn’t break the rule?
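Such a computer check is itself just a short loop. The sketch below verifies every starting value up to a limit; the step cap is there only as a safety net, since we cannot assume in advance that the loop terminates:

```python
# Check that every starting value up to a limit reaches 1.
# This is evidence, not proof: it says nothing about numbers
# beyond the limit we happened to test.
def reaches_one(n: int, max_steps: int = 10_000) -> bool:
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False   # gave up: suspiciously long sequence

print(all(reaches_one(n) for n in range(1, 100_000)))   # True
```

No matter how high we raise the limit, a loop like this can only ever confirm the conjecture for finitely many numbers, which is precisely the gap between computation and proof.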

This little mystery is what’s known as the Collatz Conjecture, or the 3n + 1 conjecture. It is a mathematical problem that has puzzled mathematicians for over 80 years. The conjecture is named after the German mathematician Lothar Collatz, who first posed the problem in 1937. It’s a simple problem with no known solution, and a great example of how maths can be both fascinating and frustrating.

Despite its simplicity, no one has been able to prove or disprove the conjecture. In fact, it’s been tested on millions of numbers, and it always seems to hold true. But without a proof, mathematicians can’t be certain that it’s true for all numbers.

So, why is this problem so difficult? Partly because it is recursive: each step depends on the previous one, and the sequence jumps around unpredictably, which makes it hard to analyse and to predict what will happen next.

There have been many attempts to prove the conjecture, but none have been successful. Computer searches have verified it for an astronomically large range of starting numbers, yet no finite search, however vast, can rule out a counterexample further along.

Despite its simplicity, the Collatz Conjecture has many interesting properties and connections to other areas of mathematics. For example, it is closely related to the field of dynamical systems, which studies the behaviour of systems over time, and to the study of numbers and their distribution among the naturals.

Through the years, the Collatz Conjecture continues to fascinate mathematicians and puzzle enthusiasts alike. It’s such a legendary problem that young mathematicians are repeatedly told not to waste their time on it, as it’s easy to be deceived by its apparent simplicity! After all, the conjecture has confounded some of the greatest minds in mathematics since its conception. Whether or not it will ever be solved remains to be seen, but it continues to captivate and challenge those who dare to take it on.

Do not put lemons in your cereal!

This popular science blog post is about vitamin C, also known as ascorbic acid. We will use the research by Hassan Alamari [1] from the University of Jordan, titled “Ascorbic acid (vitamin C) and its effect on 18 minerals’ bioavailability in human nutrition”, from the year 2020. The aim of the study was to see whether certain minerals would be absorbed less in the presence of vitamin C.

The study found that in the presence of vitamin C there is less absorption of minerals such as iron, zinc, and magnesium. The researchers checked this by supplementing ascorbic acid together with other minerals and comparing the resulting mineral concentration in the blood with that from taking the minerals orally on their own. The implication is that the next time you eat food containing iron or other minerals, such as iron-fortified cereal, you should not have ascorbic-acid-containing foods with it; this allows full mineral absorption from the grain, so all the iron is absorbed. A further benefit is that this can help prevent anaemia, which has several causes, including B12 and iron deficiencies.

However, it has also been shown that the absorption of certain kinds of iron is actually increased by vitamin C, and the absorption of some other minerals is increased too. Therefore, one should not totally discard their lemons, but rather let this research inform the composition of the meals you eat: you would not have cereal and lemon juice together anyway, as that is naturally a bad combination. Iron-containing foods that are commonly eaten with lemon, such as fish, may have other effects as well. Vitamin C is necessary for the growth, development, and repair of all body tissues. It is involved in many body functions, including the formation of collagen, the absorption of iron, the proper functioning of the immune system, wound healing, and the maintenance of cartilage, bones, and teeth. Ascorbic acid is therefore vital for humans, as seen in Fig. 2, but with certain minerals it inhibits their absorption.



Figure 1: Vitamin C molecule. Source:

Figure 2: Benefits of vitamin C. Source: