“Tonight! We are young!” – so are Saturn’s rings

It turns out that Saturn’s rings are much younger than we thought. A new study led by physicist Sascha Kempf from the University of Colorado Boulder has confirmed that the rings are relatively young: They are at most 400 million years old, while Saturn itself is 4,500 million years old.

Close-up view of Saturn’s rings. Credit: NASA/JPL/Space Science Institute.

Saturn’s rings have fascinated scientists and non-scientists alike for over 400 years. In 1610, Galileo Galilei saw the rings for the first time through his 20-power telescope. However, the rings didn’t appear to him sharply as “rings” but looked more like “ears”. In 1859, James Clerk Maxwell proposed that a ring couldn’t be one solid piece but instead must be composed of countless small pieces, all independently orbiting Saturn. Throughout the 20th century, scientists assumed that the rings had accompanied Saturn since the birth of the planet, but there was not yet any evidence to prove or disprove the idea.

Kempf and his colleagues managed to calculate the age of Saturn’s rings by studying an unassuming substance: dust.

In the Solar System, tiny grains of rock are constantly zooming around. When they hit Saturn’s rings, some of the grains stay behind, forming a thin layer of dust and staining the color of the rings, which are mainly made of shiny water-ice. The researchers reasoned that they could measure how fast dust accumulates on the rings and then calculate how much time it would have taken for the rings to collect the current amount of dust.

In order to study the dust, the researchers used an instrument called the Cosmic Dust Analyzer, which flew aboard the Cassini spacecraft until it was intentionally plunged into Saturn’s deadly atmosphere in 2017. For 13 years, the instrument collected over 2 million grains of dust and recorded each grain’s velocity. The researchers used this velocity information to determine the likely origin of each grain: slow grains were more likely to come from within the Saturnian system (for example, from one of Saturn’s moons), while fast grains were more likely to come from outside it. The researchers focused on the grains that most likely came from outside the Saturnian system. Only 163 grains fit that description, but that was enough for a breakthrough. Because these grains were collected over a known period, they gave the researchers the rate at which dust accumulates on the rings. We also know that dust currently makes up 0.1% to 2% of the ring material. For the dust to build up to that amount, it would take around 100-400 million years.
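
To see the arithmetic behind that last step, here is a minimal back-of-the-envelope sketch in Python. The accumulation rate and dust fraction below are illustrative placeholders, not the values measured by Kempf and his colleagues; the point is only that the age falls out of dividing the current dust fraction by the accumulation rate.

    # Back-of-the-envelope ring age: age = dust fraction / accumulation rate.
    # Both numbers below are illustrative placeholders, not measured values.
    dust_fraction = 0.01   # assume dust is ~1% of ring material (the study found 0.1%-2%)
    rate_per_myr = 4e-5    # hypothetical fraction of ring mass gained per million years

    age_myr = dust_fraction / rate_per_myr
    print(f"Estimated ring age: about {age_myr:.0f} million years")  # ~250 Myr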

In our Solar System, none of the rocky planets have rings. The other giant planets – Jupiter, Uranus, and Neptune – do have rings, but theirs are nowhere near as colorful, luminous, and colossal as Saturn’s. This raises many questions: Where did the rings come from? When did they first appear? Will they disappear?

This first estimate of the rings’ age points research on Saturn’s rings in a more focused direction. With a lot of hard work and a little bit of luck, we’ll be able to uncover the mysteries that the rings hold in the very near future.

 

Written by Trang Nguyen

Kempf, S., Altobelli, N., Schmidt, J., Cuzzi, J. N., Estrada, P. R., & Srama, R. (2023). Micrometeoroid infall onto Saturn’s rings constrains their age to no more than a few hundred million years. Science Advances, 9(19).

Can AI replace paediatricians?

What is it and why is it important?
According to the CDC, health literacy is crucial to high-quality healthcare. It is a person’s ability to understand the basic health information they are given. Better health literacy means fewer hospital visits and improved health. The same goes for children! And by recent estimates, up to 85% of children have insufficient health literacy. It is not about knowing more facts; it is about knowing what decisions to make about your health.
Now, here’s the catch: this is still an emerging field, and there are not a lot of health resources from which children can learn about their health. But around 75% of all children use the internet for health information, and with new AI technologies like ChatGPT, it is important to see how well they perform at giving health information.
How is the study done?
The researchers asked four AI models (ChatGPT-3.5, ChatGPT-4, Bing, and Bard) about 288 childhood medical conditions. The main prompt was “Explain {medical condition} to a _ grader,” repeated across the grade levels, and the responses were assessed using readability indicators.
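The paper’s exact readability indicators aren’t named here, but a common one is the Flesch-Kincaid grade level. A minimal sketch of how such an indicator scores a response (the syllable counter is a crude heuristic, for illustration only):

    import re

    def count_syllables(word):
        # Crude heuristic: count groups of vowels; real readability tools use dictionaries.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z]+", text) or ["empty"]
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    answer = "Asthma makes it hard to breathe. Medicine called an inhaler can help."
    print(f"Grade level: {flesch_kincaid_grade(answer):.1f}")  # around 6th grade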
What are the responses?
When the models were given general prompts like “What is {}?” or “Explain {}”, it turns out they are pretty smart! It is like they aced high school, with all of them hitting or surpassing that reading level.
ChatGPT-3.5 and ChatGPT-4 were like college students, producing answers at that level. Bing and Bard, on the other hand, were a bit more like high-school and tenth-grade students.
But wait, there is a twist! With the specific prompt “explain,” ChatGPT-4 went above and beyond, reaching a much higher level than ChatGPT-3.5. All the models had their own unique styles of answers, with Bing and Bard generally writing at lower academic levels than ChatGPT.
When the models were asked “Explain {} to a _ grader” for grades 1 to 12, things got a bit tricky. They had a hard time hitting the requested grade level. ChatGPT-3.5 switched between seventh-grade and college-freshman level, using more words for the higher grades. Meanwhile, ChatGPT-4 bounced between eighth-grade and college-sophomore level.
Now onto Bing and Bard. Bing was mostly in the tenth- to eleventh-grade zone, and Bard was between seventh and tenth grade. The ChatGPT models used richer vocabulary than Bing and Bard.
Discussion
In this study, researchers wanted to see how well language models could explain common paediatric medical conditions. They threw a lot of prompts at the models, focusing on two basic ones, to see what grade-level responses they would generate.
Now, here’s the scoop: none of the models could nail the exact grade level the researchers wanted, but things got interesting once grade levels were added to the prompts. In this context, ChatGPT-3.5 and ChatGPT-4 from OpenAI showed the most flexibility, shifting their grade levels based on the level requested.
On the other hand, Bard and Bing were a bit more one-note, consistently hitting a high school level.
Now, here’s the exciting part: language models could be game-changers for health literacy for kids and teens! By adjusting grade levels, they can make health information accessible to a wider audience. But, of course, the models are not perfect. Sometimes they give too many details or oversimplify things. So while ChatGPT and the others can improve health literacy, they are better suited to being used alongside other educational tools. As for Bard, it did not give out information for certain conditions, like depression, right away. Why? It appears to be a safety measure, perhaps to avoid the chance of spreading misinformation.
Conclusion
Lastly, the cool thing about language models is that they are interactive: patients can ask for more information or for a simpler explanation, which makes them very useful. This research centered on sixth-grade-level explanations, and future research should check how well adolescents and their parents can actually grasp the models’ outputs. As these models get better and better, they might become awesome resources for parent-child health tasks.

ChatGPT-3.5, ChatGPT-4, Google Bard, and Microsoft Bing to Improve Health Literacy and Communication in Pediatric Populations and Beyond. https://arxiv.org/pdf/2311.10075.pdf

Drugs can cure your depression, scientists say!

Times are tough and life can get a bit grey at times. What if I told you psychedelics were the answer you’re looking for? Psychedelic mushrooms to the rescue!

By Melissa Bergius | 08.12.2023


Have you ever felt down in the dumps? Like your life needs a little more colour, a little more happiness? We’ve all been there, and if you haven’t then congratulations, achievement unlocked! What if I told you that recent studies into the effects of psychedelic mushrooms on mental health have come back with positive results? Don’t believe me? Keep reading to find out.

Psychedelics have been used worldwide for thousands of years. You might be wondering: what exactly are psychedelics? Psychedelics are substances, namely drugs, that produce hallucinations and often an expansion of consciousness. In recent years, growing interest in psychedelics and their effects has accelerated research activity. The majority of this research has focused on psilocybin – the main psychoactive ingredient found in psychedelic mushrooms (PM), for the non-sciency folks reading this. This hallucinogen is present in over 200 species of psychedelic mushroom, including liberty caps.

Roughly 970 million people worldwide suffer from some sort of mental health disorder. Research has led scientists to believe that psychedelic mushrooms can be used in the treatment of many mental health disorders, including depression, post-traumatic stress disorder (PTSD), and addiction. Users of psychedelic mushrooms have reported experiencing less psychological distress and suicidality compared to individuals using other illicit drugs as coping mechanisms. Another benefit of psychedelic mushrooms is that, compared to alcohol and tobacco, they have very low addictive potential. This means that you will likely not get addicted to them: another achievement unlocked, in my opinion.

To get deeper into the science of how psychedelic mushrooms affect our mental health, we need to take a closer look at our brains. So how do PMs affect our brains? The way psychedelic mushrooms affect us depends on the person. The chemical they contain, psilocybin, acts on the same receptors in the brain as serotonin, the feel-good hormone. Serotonin affects our sleep and mood, among other things. People who suffer from PTSD, depression, and other mental disorders often have low levels of serotonin, which is why psilocybin is so effective: there are more free receptors for it to act on. Psychedelic mushrooms, like other psychedelics, have a very dramatic effect on the brain. When someone takes them, there is an overall increase in connectivity between sections of the brain that don’t normally communicate well. This phenomenon is known as ‘altered consciousness’ – essentially, another way of thinking. It allows people taking PMs to see another way of approaching their issues, which can lead to finding a way out of (or a lessening of) their depression, anxiety, PTSD, and so on.

Research into the use of psychedelic mushrooms has shown promising results for treating many different types of mental disorders. However, it is still a new form of treatment, and further research into potential risks is needed. With the rise in awareness of mental health in recent years, this is an exciting new treatment, even if it’s only in its early stages. As someone who struggles with mental health issues, I find it very exciting when new treatment options start getting researched. It gives me hope that maybe someday it will all be ok. With the low addictive potential and the effectiveness seen so far in clinical research, I think psychedelic mushrooms could be a promising form of treatment in the future.

But remember kids, don’t do drugs—except maybe psychedelic mushrooms; that seems to be ok in my books. And who knows, maybe you’ll have a mind-blowing out-of-body experience that answers all your life’s questions and makes you mentally stable. I know I could definitely use some help with my mental health.

Always remember to consume responsibly.

 

Reference:

Matzopoulos, R., Morlock, R., Morlock, A., Lerer, B., & Lerer, L. (2022). Psychedelic Mushrooms in the USA: Knowledge, Patterns of Use, and Association With Health Outcomes. Frontiers in Psychiatry, 12. https://doi.org/10.3389/fpsyt.2021.780696

 

The Evolution of Money: Unraveling Bitcoin’s Journey from 2008 to the Future of Finance

“Money makes the world go around” is a saying all of us have probably heard. But what will money look like in the future? Looking back to 2008, a paper published by “Satoshi Nakamoto” (the quotes will be explained later) may give us an answer. The revolutionary cryptocurrency Bitcoin slowly but surely gained major popularity and hype, and eventually took over the crypto world. When Bitcoin was released in 2009, one coin was valued at $0.0009; at the time of writing, the same coin is worth around $33,400. To put this into context, $100 invested in Bitcoin in 2009 would be worth roughly $3.7 billion today, and close to $7.7 billion at its 2021 peak. In this article, I will look back at the published paper, how Bitcoin works, and what the future might hold for Bitcoin and the finance sector in general.

 

The author of the paper – or perhaps authors – is unknown. Satoshi Nakamoto is an alias for the person or group said to have created Bitcoin (for simplicity, I will refer to them as “he”). He disliked the trust-based transaction system that many of us still use today. A trust-based transaction involves the sender, the recipient, and a mint – a type of middleman, usually a bank – that confirms the transaction. The mint usually charges a transaction fee, which makes the system inconvenient for small transactions. One of the main issues with digital currencies at the time was double-spending. In a traditional physical cash system this is not a problem: when you hand over a physical bill, you no longer possess it. In the digital realm, however, copies of data can be created, making it possible for someone to spend the same digital currency more than once. Bitcoin, and nowadays many other cryptocurrencies, found a way to ensure that this doesn’t happen.
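
To make double-spending concrete, here is a toy sketch in Python of a ledger that rejects a second spend of the same coin. It is purely illustrative and far simpler than Bitcoin’s real transaction model:

    # Toy ledger: each coin id can be spent exactly once.
    spent_coins = set()

    def spend(coin_id, recipient):
        """Accept a spend only if the coin has not been spent before."""
        if coin_id in spent_coins:
            print(f"Rejected: {coin_id} was already spent (double-spend attempt).")
            return False
        spent_coins.add(coin_id)
        print(f"{coin_id} transferred to {recipient}.")
        return True

    spend("coin-42", "Alice")  # accepted
    spend("coin-42", "Bob")    # rejected: same coin, second spend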

 

A Bitcoin transaction is a decentralized process (not controlled by any individual person or group) that transfers digital currency between two parties over the Bitcoin network. It begins with a user creating a digital wallet, which consists of a public key (an address for receiving funds) and a private key (used to sign transactions). When one user intends to send Bitcoin to another, they broadcast a transaction message to the network containing the recipient’s public key, the amount of Bitcoin, and a digital signature created with their private key. Miners, who validate and confirm transactions, include the new transaction in a block by solving a complex mathematical puzzle, and are rewarded with new Bitcoin for doing so. The block is then added to the blockchain, a decentralized and immutable ledger. Once confirmed, the recipient can access the transferred Bitcoin with their private key. The decentralized and transparent nature of this process ensures security and trust in the absence of a central authority, making Bitcoin transactions resistant to fraud and censorship.
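
That “complex mathematical puzzle” is a proof-of-work: find a nonce such that the block’s hash starts with a required number of zeros. A minimal sketch of the idea (real Bitcoin hashes binary block headers with double SHA-256 at a vastly higher difficulty; the block string here is a made-up stand-in):

    import hashlib

    def mine(block_data, difficulty=4):
        """Find a nonce so SHA-256(block_data + nonce) starts with `difficulty` zero hex digits."""
        nonce = 0
        target = "0" * difficulty
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    block = "alice->bob:0.5BTC"  # stand-in for a real block of transactions
    print("Found nonce:", mine(block))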

 

In the future, Bitcoin has the potential to become a dominant, decentralized, and universally accepted form of money. Its advantages include borderless transactions, a finite supply, immutability, and financial inclusion, while also lowering reliance on conventional banking systems. The security and transparency of Bitcoin are guaranteed by blockchain technology. There are, however, difficulties, including scalability problems that result in reduced transaction rates, and the high energy usage of the mining process. It is estimated, however, that all bitcoins will have been mined by the year 2140, after which the currency should become more stable. Concerns include the possibility of use in illicit operations and regulatory difficulties; further challenges include unregulated value, adoption obstacles, and the absence of a single dispute-resolution body. Despite these limitations, Bitcoin has the capacity to completely transform the financial industry if it can get past these problems, become more widely accepted, and provide a decentralized substitute for conventional money.

 

In conclusion, Bitcoin was and still is revolutionary, and as we move ever closer to a cashless world, Bitcoin has the capability to provide a safe way to transfer money through its blockchain technology. As global acceptance grows and current hurdles are overcome, Bitcoin may play a pivotal role in shaping the future of decentralized and borderless financial transactions.

 

References:

Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. bitcoin.org. https://bitcoin.org/bitcoin.pdf

30,000 Pea Plants: A Look at ‘Genotype vs. Phenotype’ Studies

Ever wondered why you have brown hair? Where your blue eyes come from?

Surely we’ve all heard that genetics influences the way you look and the way your body responds to various environmental factors. But why? How do those tiny strands of DNA affect the way your body is built? Many scientists have attempted to study the connection between genes and their outward expression. Most notably, Gregor Johann Mendel pioneered this particular field by breeding pea plants and observing their characteristics. He coined the idea of ‘dominant’ and ‘recessive’ traits, which referred to the physical features of the plants and their offspring. The features he spoke about, such as the colour of their flowers or the colour of their seeds, are phenotypical features present in all things that have DNA.

But back to the topic at hand. A new tool has come to the forefront of this particular field of genetics, known as CRISPR-Cas9 genome modification. What’s so special about this tool, you may ask? Well, let me give a brief rundown.

In studying how one’s genes might affect one’s outward appearance or behaviour, there are two approaches. The first, used by Mendel and those who came after him, is to use patterns in the ‘result’ – the colour of flowers, for example – to deduce a hypothetical ‘cause’: differences in the genetic sequence. This is one of many types of ‘genetic screens’, which you may think of as a semi-permeable membrane that separates entities with differences in their genes. And despite reversing the typical cause-and-effect sequence, this method is one particular type of ‘forward screen’.
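
As a toy illustration of the raw material a forward screen works with: simulating Mendel’s cross of two heterozygous pea plants reproduces the famous 3:1 phenotype ratio from which the underlying genotypes were inferred. The alleles and trait names below are the textbook example, not data from the cited paper.

    import random

    def offspring_phenotype():
        # Cross two heterozygous (Aa) parents; 'A' (purple flowers) dominates 'a' (white).
        child = (random.choice("Aa"), random.choice("Aa"))
        return "purple" if "A" in child else "white"

    counts = {"purple": 0, "white": 0}
    for _ in range(30_000):  # Mendel's famous sample size
        counts[offspring_phenotype()] += 1

    print(counts)  # roughly 3:1 - the 'result' a forward screen reasons back from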

CRISPR screens, on the other hand, made by using the CRISPR-Cas9 mechanism to modify the genome we wish to study, are ‘reverse screens’: we manipulate the ‘cause’ – the genome – and study the ‘effect’, the changes in outward characteristics. With CRISPR screens, we can also choose to test ‘loss of function’ or ‘gain of function’. These names should speak for themselves, but in case they don’t: loss of function is when a part of the genome is deleted, and gain of function is when we add a section to the genome. Hence, losing or gaining a ‘function’.

Consider it an experiment like any other, with the variable you manipulate and the variable you measure. Except we are manipulating the very building blocks of life itself, and we can now measure the impact of such changes on how living beings behave.

With the development of these reverse screens, it is no longer necessary to have obscenely large sample sizes like Mendel’s to study the cause and effect between genes and outward features.

(Considering that Mendel originally presented his findings with a sample size of 30,000 pea plants, that is a definite upgrade.)

So then, why is all this relevant? Surely, this mechanism and these methods hardly interest the common folk. Most scientific minutiae don’t, certainly. But what makes it relevant is the areas – or area, singular – where this method is applied.

Medicinal genetic research.

Yes, three fancy words in a line. What does it entail? Primarily, it is the study of how various genetic tools can be used to further the field of medicine. One of the main directions of CRISPR screen research is how it interacts with cancer cells. It is also used to determine genes involved in drug and poison resistance, among others.

Perhaps one day, sometime in the future, we will be able to wholly prevent cancer by using CRISPR screens to rewrite a tiny fraction of an embryo’s genetic data, mutating the cells such that cancer cells would not be able to reproduce.

(Of course, there are ethical concerns with this but we’ll wait and see, I suppose.)

Still, it remains that the CRISPR method is the most efficient, reliable, and precise method for gene screening we have at the moment. From almost every perspective, no matter if it’s ‘loss of function’ or ‘gain of function’ that we’re testing, it’s the best we’ve got. Well. Maybe one day we can really declare cancer to be a preventable illness.

Alright! That’s all for now, see ya!

Written: Youlin Li

Source: Xue, H.-Y., Ji, L.-J., Gao, A.-M., Liu, P., He, J.-D., & Lu, X.-J. (2015). CRISPR-Cas9 for medical genetic screens: Applications and future perspectives. Journal of Medical Genetics, 53(2), 91–97. https://doi.org/10.1136/jmedgenet-2015-103409

Balls of spinning wind? New mathematical insight into spherical vortices.

A tornado has left its devastating marks on the local area. The property damage will be fixed, but the lives lost can never be recovered. There was no way to predict the tragedy in time to flee. Not with the current technology, at least. Tornadoes and other vortices are on some level still mysteries to us, but important new insight is being found.

 

A recent study by Professor Kyudong Choi from the Department of Mathematical Sciences at UNIST has provided proof of the stability of Hill’s spherical vortex. This discovery has the potential to serve as a key to unlocking further understanding of vortices, such as tornadoes and typhoons. A vortex is a rotating region in a fluid, such as water or air. Professor Choi’s research focuses on a very specific type of ball-shaped vortex called Hill’s spherical vortex, which can be thought of as a ball of spinning fluid that pushes the fluid in front of it upward.
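
For the mathematically curious, Hill’s vortex has a classical closed form. In the frame moving with the vortex (radius $a$, propagation speed $U$), one standard textbook form of the Stokes stream function is (sign conventions vary between sources):

$$
\psi(r,\theta)=
\begin{cases}
-\dfrac{3U}{4}\,r^{2}\left(1-\dfrac{r^{2}}{a^{2}}\right)\sin^{2}\theta, & r\le a,\\[1.5ex]
\dfrac{U}{2}\left(r^{2}-\dfrac{a^{3}}{r}\right)\sin^{2}\theta, & r\ge a,
\end{cases}
$$

so the spinning interior ball joins smoothly onto ordinary flow past a sphere outside it.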

 

In his paper, Professor Choi was able to prove that Hill’s spherical vortex can, in specific situations, maximize the amount of motion inside itself. He reached this conclusion purely mathematically, by combining previously known ways of measuring the motion of fluid in a vortex. The results may not seem earth-shattering at first, but they have gained attention from many meteorologists. The air is full of wind vortices both big and small, so the insight provided by Professor Choi will likely be a cornerstone in furthering our ability to predict weather phenomena.

 

by: Luukas Karjalainen

source:

Choi, K. (2024). Stability of Hill’s spherical vortex. Communications on Pure and Applied Mathematics, 77(1), 52–138. https://doi.org/10.1002/cpa.22134

Trust issues with AI?

Most people in Western countries have at least heard of artificial intelligence, and many have used it themselves. From recipes and picture editing to challenging calculations, today’s artificial intelligence is capable of it all. Most people would probably say that they trust AI, at least to a certain level, especially when it comes to trivial matters. What you don’t often think about when using an AI is how much trust you place in it, or where that trust comes from. Is this so-called trust even actually trust?

In a world where AI is becoming an ever more commonplace tool for people and companies alike, it’s important to know what it means to trust artificial intelligence. Our prayers have been answered, for scientists have finally tackled this issue in a study called “Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI”, published in March 2021. They took a definition of trust from sociology and modified it to match the needs and requirements of the AI industry and private users, such as you and me. So, what does it mean to trust an AI?

According to the article, sociology defines trust between two people as follows: if Annie believes that Bernie will act in Annie’s best interest, and accepts vulnerability to Bernie’s actions, then Annie trusts Bernie. Applied to the human-AI relationship, for you to be able to trust an AI, you must accept the presence of risk as well as rely on the “machine”. If you are not depending on the answers or results provided by the AI you are using, then you have no reason to trust it, and the trust is not measurable. For example, if you want to check a fact for your own amusement using AI, there’s no risk involved because you are not in a vulnerable position; hence there is no need for trust. The case would be different if you were, for example, to rely on AI to write an essay that determines whether you pass a class. Then you would be vulnerable to the AI’s possible mistakes, and you would have to trust the AI.
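
Putting the Annie-and-Bernie definition into symbols (my own compact paraphrase, not the notation used by the authors):

$$
\mathrm{Trust}(A,B) \iff \mathrm{Believes}\big(A,\ \mathrm{ActsInInterestOf}(B,A)\big)\ \wedge\ \mathrm{AcceptsVulnerability}(A,B)
$$

Replace $B$ with an AI model, and the same two ingredients, anticipated benefit and accepted risk, must both be present for genuine trust.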

Another important issue the scientists worked on is the basis on which people trust the AI they are using. The “good” or beneficial kind of trust is formed when the trust is based on how the AI actually performs: it works in the way you expect it to. The “bad” or unwanted kind of trust, on the other hand, is when you trust an artificial intelligence based on its creator’s abilities or other factors that have nothing to do with the AI’s performance. In that case, the trust you have formed is not the kind that benefits you as a user, because it doesn’t actually mean trusting the AI but trusting the things or people related to it.

So, what should you take away from all this? Firstly, as a user, the most important thing you can do is make sure that what you expect from the AI is realistic, and that you understand there is a risk involved. For most people, the processes by which an artificial intelligence arrives at its conclusions are unknown or too complex to understand. Still, you can most likely find out how the AI you are using is expected to perform, and use that information to adjust your own expectations and the amount of trust you place in it. Secondly, do not base your trust on external factors that have nothing to do with the AI’s performance. Even if the layout looks professional or your favourite celebrity has promoted it as “The best AI for you!”, do not blindly jump to the conclusion that this is any indication it can be trusted.

Artificial intelligence is developing and changing faster than we people can change our own ways. That’s why it’s important to know what you are getting into as an AI user. Studies like this one can help you understand what to take into account when using an AI and how to evaluate your own trust in artificial intelligence. Whether you are asking it to solve your homework or to give you a medical diagnosis based on your symptoms, it’s never good to trust AI blindly!

Kia Kivelä

Source: Jacovi, A., Marasović, A., Miller, T. & Goldberg, Y. (2021). Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 624-635. https://dl.acm.org/doi/10.1145/3442188.3445923

Taming the darkness

In the realm of artificial intelligence, the quest to replicate and augment human abilities knows no bounds. One remarkable challenge that has captured the imagination of scientists and tech enthusiasts alike is enabling machines to see in the dark. Have you ever tried to take a photo of a starry sky with your smartphone? I have, and it sucked. A world where cameras can capture clear, detailed images even in extreme low-light conditions is already becoming a reality, thanks to groundbreaking research in the field of low-light image enhancement.

Are you already familiar with a revolutionary article titled “Learning to See in the Dark”? No? Well, hold onto your flashlights because we’re diving into the wild world of making cameras smarter in the dark! The study, authored by Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun, showcased an algorithm that could significantly improve the quality of images captured in low-light conditions. This development opened new frontiers in applications ranging from surveillance and photography to autonomous vehicles and beyond.

What’s the deal?

The primary challenge addressed by the researchers was the inherent noise and lack of details in images taken in low-light scenarios. Traditional cameras fumble like lost puppies when the available light is minimal, leading to grainy and indistinct photographs. The “Learning to See in the Dark” algorithm sought to overcome these limitations by harnessing the great power of deep learning.

Take a look at the comparison of low-light camera outputs and the result of using the algorithm from the article.

Comparison of the image output of two cameras and the output of the convolutional neural network

 

The quality of the resulting image is fascinating!

How does it work?

At the heart of this groundbreaking technology is a convolutional neural network (CNN), a type of artificial neural network inspired by the human brain’s visual processing system. The CNN was trained on a massive dataset of paired short- and long-exposure images, learning patterns that could later be applied to enhance dark, short-exposure images.
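
A minimal PyTorch-style sketch of that training setup: a small CNN learns to map dark short-exposure images to bright long-exposure targets. The paper’s actual model is a U-Net operating on raw sensor data; this toy version (with random tensors standing in for real photos) only shows the paired-supervision idea:

    import torch
    import torch.nn as nn

    # Toy stand-in for the paper's U-Net: a few conv layers that brighten an image.
    model = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, kernel_size=3, padding=1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()  # the paper also trains with an L1 objective

    # Fake paired data: dark short-exposure inputs, bright long-exposure targets.
    short_exposure = torch.rand(8, 3, 64, 64) * 0.1
    long_exposure = torch.rand(8, 3, 64, 64)

    for step in range(100):
        loss = loss_fn(model(short_exposure), long_exposure)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()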

One of the key features of this algorithm is its ability to balance noise reduction and detail preservation. Previous attempts at low-light image enhancement often resulted in over-smoothing, sacrificing crucial details in the process. The “Learning to See in the Dark” algorithm, however, demonstrated a remarkable capability to find a balance, producing images that were not only brighter but also retained sharpness and clarity.

Notice how grainy the output of BM3D denoising, the method used in the low-light mode of modern cameras, looks in comparison.

Comparison of BM3D denoising and the CNN technique

 

Why does it matter?

The applications of this technology are far-reaching. In surveillance, for instance, security cameras equipped with this algorithm could effectively operate in low-light conditions, providing law enforcement and security personnel with enhanced visibility during the night. Similarly, photographers could capture stunning images without the need for artificial lighting, preserving the ambiance and mood of a scene.

Autonomous vehicles stand to benefit significantly from this breakthrough as well. Driving at night poses challenges for self-driving cars, as their sensors often rely on visible light to navigate. With the ability to “see in the dark,” these vehicles could navigate more confidently, making nighttime driving safer and more reliable.

Is it perfect?

Of course, it’s not. Despite its remarkable achievements, the “Learning to See in the Dark” algorithm is not without limitations. Any model’s performance depends on the quality and diversity of its training data, and since the dataset used to train this model does not contain humans or dynamic objects, it may struggle to enhance photos of them. Another opportunity for future work is runtime optimization: the current pipeline takes 0.38-0.66 seconds to process an image, which is obviously not fast enough for real-time video processing.

As researchers continue to work on the challenge of low-light photography, we can anticipate even more sophisticated algorithms. The fusion of AI and photography holds the promise of transforming the way we capture and perceive the world, opening new possibilities for creativity, safety, and exploration.

In conclusion, the journey to teach machines to “tame the darkness” represents a paradigm shift in artificial intelligence and computer vision. The “Learning to See in the Dark” algorithm of 2018 has paved the way for a future where low-light conditions no longer hinder our ability to capture and understand the visual world. As these advancements continue to unfold, we can look forward to a brighter – and perhaps, in this case, a clearer – future illuminated by the capabilities of artificial intelligence.

Written by Mikhail Golubkov.

Source:

Chen, C., Chen, Q., Xu, J., & Koltun, V. (2018, May 4). Learning to see in the dark. arXiv.org. https://arxiv.org/abs/1805.01934

Could multiplayer online video games soon have an AI problem?

 

Imagine this: you’ve just had a long day and want to enjoy some video games, so you begin searching for a ranked game. You get destroyed, having no chance of victory, and one player on the opposing team in particular seems to be performing especially well. Most would assume this is a so-called “smurf”: a player who has another account at a higher rank and is playing in lower ranks for fun. However, what if it was not a person at all, but an AI instead? At the moment, this thought does not seem especially realistic, but with the rapid advances in technology, and especially in AI, perhaps it could become possible in the future.

 

Back in 2017, as part of a competition, programmers Guillaume Lample and Devendra Singh Chaplot wrote a report on their project to train an AI to play the first-person shooter game “Doom”. Similar projects had been done before, notably for old video games on the “Atari 2600” console. More famously, the best chess engines are so good at the game that they surpass even the best human players. However, there is a stark difference between games like these and a 3D first-person shooter like Doom: in a 3D space, the “correct” choice of move is often much more complicated. You have to worry about ammo, health, your position on the map, aiming at opponents, and even the environment around you.

 

Despite these challenges, it is still possible to train an AI agent using deep reinforcement learning, or more specifically “deep Q-networks” (DQN). This type of learning is essentially trial and error: the AI initially executes completely random inputs. When the AI does something that is determined to be “good”, it gains a reward, and based on these rewards it starts to build a policy, which it then acts upon. At each step, the AI first observes the environment, then decides on an action based on the policy it has built up. However, because Doom is a 3D environment, the network fails to account for things you would have to turn around to see. To deal with this, the researchers used “deep recurrent Q-networks” (DRQN), which carry an extra hidden state to remember what is currently off-screen.
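
The core of that trial-and-error loop is the Q-learning update, which a DQN approximates with a neural network over game frames. Here is a bare tabular sketch of the idea; the state and action names are mine, purely for illustration:

    import random
    from collections import defaultdict

    q = defaultdict(float)                  # Q[(state, action)] -> value estimate
    actions = ["move_forward", "turn_left", "turn_right", "shoot"]
    alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

    def choose_action(state):
        # Epsilon-greedy policy: mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, a)])

    def update(state, action, reward, next_state):
        # Nudge the estimate toward reward + discounted best value of the next state.
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

    update("corridor", "shoot", reward=1.0, next_state="corridor")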

 

Just having these models was not enough, however, as the real challenge came in finding the most effective and efficient method of training them. The baseline DRQN model performed well in simple scenarios but did poorly on deathmatch tasks, as the agents were just firing at will, hoping to hit someone. A feature was therefore added to training: on every frame, the AI receives information about whether a certain entity (anything the player interacts with, like an enemy or a health pack) is in the frame or not, which drastically improved performance. Gameplay in “Doom” can be split into two phases: navigation (exploring the game area) and action (fighting an enemy). Using two networks, one for each phase, instead of just one led to better performance. Other notable advantages included faster training and the mitigation of “camping” behavior, meaning that the agent would never just stand in one place waiting for enemies.
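
The two-network split can be pictured as a simple dispatcher. The function and network names below are hypothetical, just to show the control flow:

    def select_action(frame, enemy_visible, action_net, navigation_net):
        # Fight when an enemy is on screen; otherwise explore and gather items.
        return action_net(frame) if enemy_visible else navigation_net(frame)

    action_net = lambda frame: "shoot"             # stand-in for the trained action network
    navigation_net = lambda frame: "move_forward"  # stand-in for the navigation network

    print(select_action(None, enemy_visible=True,
                        action_net=action_net, navigation_net=navigation_net))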

 

Finally, tests were conducted to see how the AI performed against other versions of itself as well as against humans. To evaluate performance, the kill/death ratio was considered, along with the raw numbers of kills, deaths, suicides, and objects gathered. Agents using the aforementioned two networks for the action and navigation phases performed better than those that didn’t. Against humans, in both single-player games against bots and one-on-one multiplayer games, the AI significantly outperformed the average person. That said, the AI agents were not perfect, as their deaths show, and someone who has put a lot of time into playing Doom could likely outperform these agents.

 

In conclusion, the researchers were able to build AI agents that performed better than the average person. The question then becomes: if this was possible, can it be done in more modern multiplayer games, and if so, when could this happen? The answer is up for debate, but it probably won’t happen in the near future. In this experiment, the researchers had direct access to the Doom game engine, which is what allowed them to send commands to the game agent in the first place. Even if one had access to a modern game’s engine, the agents would have to be much more complicated, as Doom is still quite simple compared to modern games. So, at least for now, I would advise not worrying about this, and instead blaming your losses on something that actually exists.

Nikola Srbinoski

Sources cited:

Lample, G., & Chaplot, D. S. (2017). Playing FPS Games with Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10827