What if I tell you that Simba is going to die soon?

Imagine for a second being in the Amazon rainforest. You can hear the river flowing, smell some flowers whose names you probably can’t even pronounce, and there’s a mosquito buzzing around your ear. And then you see it – a majestic creature all covered in dark spots, a jaguar. Now let me take you for a moment to the frozen land of the Arctic Circle. The snow is dazzling, and you’ve never been so cold in your life, but it doesn’t really matter because the only thing you can focus on is a beautiful polar bear (whose fur is surprisingly more yellow than you thought it would be). 


Now let’s take a step back and return to the brutal reality. Did you know that over the course of the next century these animals may become only a memory? This means that your great-great-grandchildren may never know what they look like. Maybe you’ll even be alive then, and you’ll have a chance to show them an old-fashioned selfie you took at the zoo in 2022.

Needless to say, it’s a topic of great importance. That’s what biodiversity loss is all about: the extinction of species in particular regions as well as worldwide. The topic is obviously not new, and research on it has been conducted for years. But it’s not enough. How can a policy fighting animal extinction really make a difference if it misses the bigger picture and doesn’t tackle the most important causes of the problem? That’s why the article published this November in the journal Science Advances is a game changer.

Now, if you’re imagining a bunch of scientists in white lab coats mixing up some colourful substances and making things blow up, that’s not what happened here. Thirteen dedicated researchers screened 45,162 studies, reading 575 of them in full! Can you even imagine going through almost 50,000 studies? If you think of them as books, it’s like half the number of books in the Helsinki Oodi library! And not even all of the 575 studies read in full made it into the final data set; only the 163 most relevant ones did. Of course, that’s not the end, because in order to be compared in the general ranking, all this information needed to be converted first. I’m feeling tired even thinking about it, and we haven’t got to the maths part yet!

So, after all that research, what’s the most important factor in animal extinction? Can you take a guess? If your answer was “Well, obviously climate change, I hear about it on TV all the time, duh!”, then you’re very, very wrong. Our all-time winner is land/sea use change, which is basically a fancy way of saying that people take a piece of forest and decide to turn it into farmland (or a factory, or… well, you get the point). Then, going head-to-head, come pollution and the direct exploitation of natural resources. Climate change is far behind them. But don’t let that fool you, because it’s just as dangerous as the others! First and foremost, it intensifies so rapidly that we can’t easily get a hold of it. Another thing to consider is that all the factors are connected, so focusing on just one when making policies against biodiversity loss is like trying to pull a card from the very bottom of the house of cards you’ve just built – it’s going to fall. And, if things weren’t complicated enough, the hierarchy of these factors varies across continents and changes significantly for the oceans!

Ok, but what’s next? What’s the conclusion here? Well, above all we need to realise how complicated the problem is, and that it can’t be solved by combating climate change alone. In this sense, current policies that try to take care of the factors one at a time can end up overlooking the bigger picture. For example, one suggested way of helping with climate change is to develop croplands that can be used to produce biomass and then bio-energy. But what about all the animals that called this land, the land we’ve just changed into cropland, home? That’s why we need to think about it smartly. After all, what is the use of stopping climate change if there are no animals left? We may as well move to Mars and give it a rest. But that’s a topic for another post. Until next time!

 

Source:

Jaureguiberry, Pedro, et al. “The Direct Drivers of Recent Global Anthropogenic Biodiversity Loss.” Science Advances, vol. 8, no. 45, 2022, eabm9982. https://doi.org/10.1126/sciadv.abm9982

 

Author:

Ala Żukowska

Could life exist all across the universe?

There are many questions that we rarely think about in everyday life, such as: why are the stars shining, why are people in horror movies so stupid, or are there any aliens? While we can answer the first question, the other two are much harder. Let’s skip the second one and go straight to the third. Whether aliens exist is not something we know right now, but we can still estimate whether they are out there or not. For these estimations we use the mighty power of science, more specifically astrochemistry. Yes, astrochemistry does exist, and believe me, it is not some kind of dark magic.

Despite many people believing chemistry to be the mysterious thing that happens in the lab, or when the chemistry teacher detonates some strange chemicals during a painfully long chemistry lesson, it is actually happening in places where you wouldn’t expect it. Chemistry happens in the sea, in the ground, in your dog (don’t hate the poor animal for it), and even in outer space. The chemistry that happens in space is actually quite important. When scientists looked into space with their big-ass telescopes, they found that there are actually organic molecules out there. Why is this important? Because these organic molecules might be very important in the origin of life.

Unfortunately, these molecules can’t just magically appear, like Jehovah’s Witnesses at your door. They need to form from the gas clouds that are in space. However, forming these molecules requires energy. Unfortunately, they don’t have a FRANK app with a discount on electricity.

 

When you look up at the night sky (or even the day sky), what you see is actually the energy source for the poor molecules: stars. Certain big molecules need to react with water to become the building blocks of life. By this I mean they need to take the oxygen from the water, like a bully taking the nerds’ lunch, to become more complicated, more ‘lively’ molecules. For this to happen, the light needs to be really energetic, and by energetic I mean UV. This means that around most stars this kind of reaction couldn’t happen, because they are too dim and don’t produce enough UV light.

 

But recently, scientists actually showed that even visible light can start the reaction, if the starting molecules are inside ice crystals. This is actually quite possible, since space is extremely cold, even colder than your freezer or your ex. So, if the big organic molecule is surrounded by smaller water molecules, they can help it out a bit: when the light hits the molecule, it blasts one of its electrons away. Once the electron is away from the molecule, the surrounding ice can hold on to the tiny particle, keeping the molecule excited and ready to react with water.

 

That is actually beneficial for another reason. Since UV light is ‘stronger’ than visible light, other molecules absorb it more easily. Because of that, visible light can penetrate deeper into the clouds of matter that exist in space.

 

This means that even around dimmer stars, the so-called “red dwarfs” or “orange dwarfs”, complex organic molecules could form and enable the birth of life around them. We already have some candidates, like Proxima Centauri and TRAPPIST-1, which have planets just the right distance away from them to potentially harbor life.

Source:

Lignell, A., Tenelanda-Osorio, L. & Gudipati, M. S. (2021). ‘Visible-light photoionization of aromatic molecules in water-ice: Organic chemistry across the universe with less energy’. Chemical Physics Letters, vol. 778, 138814. https://doi.org/10.1016/j.cplett.2021.138814

Author:

Roland Vadász

Tracking Everything but Ants: How We’ll Navigate the Future

How often do you get lost because your navigation placed you on a nearby street or showed you facing in the wrong direction? If you’re anything like me, the reliance on phone navigation has frequently led you astray. For all these instances, here’s a ready-made excuse for why you’re late again: our current navigation systems are actually pretty unreliable. 

Our current positioning is based on a technology called Global Navigation Satellite Systems (GNSS for short). GNSS determines your position by measuring your distance to several satellites and, based on this data, finding the only location that would produce such results. However, it’s the technique for measuring the distance that is really incredible. The satellites send radio waves from space to your device, and the system measures how long they take to get there. It’s pretty astonishing if you think about it in detail. The change in position from your couch to the grocery store is a drop in the bucket compared to the cosmic distance between the Earth and the satellite. Nevertheless, they are able to detect it.
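
To get a feel for the idea, here is a minimal toy sketch in Python (my own illustration with made-up satellite positions, not the actual GNSS algorithm, which also has to solve for the receiver’s clock error): turn measured travel times into distances, then find the one point consistent with all of them.

```python
# Toy trilateration sketch: made-up satellite positions, no noise, no clock error.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light in m/s

# Assumed satellite positions (metres) and the receiver position we pretend not to know.
sats = np.array([
    [15e6,   5e6, 20e6],
    [-10e6, 12e6, 21e6],
    [ 5e6, -14e6, 19e6],
    [ 0.0,   0.0, 22e6],
])
receiver_true = np.array([1.2e6, 2.3e6, 5.9e6])

# What the device actually measures: how long each signal took to arrive.
travel_times = np.linalg.norm(sats - receiver_true, axis=1) / C
ranges = travel_times * C  # travel time converted back into distance

# The only location consistent with all the measured distances.
def residuals(pos):
    return np.linalg.norm(sats - pos, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([0.0, 0.0, 6.4e6])).x
print(estimate)  # ≈ [1.2e6, 2.3e6, 5.9e6]
```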

Considering the tremendous task faced by these satellite systems, it’s easy to forgive them a few meters of inaccuracy. Adding to this error is the obstruction of the signal by clouds and other weather conditions. Moreover, in cities the signal waves can bounce off buildings and interfere with themselves. Although band-aid solutions exist to mitigate these problems, all these compounded errors make GNSS unreliable.

At this point, you might be wondering what all this fuss is about. Sure, GNSS has problems, but it works well enough. However, navigation systems are not only important for getting you to the closest Alepa. GNSS is also the foundation of time synchronization that enables, for example, internet communication. Moreover, many emerging technologies like automated driving or quantum communication rely on accurate time and position. In these cases, reliable navigation can be a matter of life and death. These problems created a strong need for a more accurate positioning system.

A new type of navigation, nicknamed the terrestrial networked positioning system (TNPS), was developed by scientists at the Delft University of Technology in the Netherlands. The determination of position is based on the same concept as in GNSS. However, in the case of TNPS, the signal is sent and received by terrestrial transmitters.

The measurements of the time difference between when the signal is sent and when it is received need to be unimaginably accurate. Light travels incredibly fast (exactly 299,792,458 meters per second). Therefore, a 1 nanosecond error in time measurement corresponds to a 3 decimeter error in position.

The Delft scientists achieve subnanosecond accuracy by employing an already existing technology known as White Rabbit. White Rabbit is a protocol, a way of communicating over Ethernet cables. Yes, the inconspicuous Ethernet cable that’s gathering dust in your drawer is able to carry time data with subnanosecond accuracy. Think twice about labeling it as “old.”

Using TNPS, scientists are able to determine the position of a device to within 2.5 cm. Although that’s an incredible result, the technology doesn’t come without its shortcomings. Its operation requires extensive terrestrial infrastructure, so it is only suitable for urban environments. Moreover, the more accurate the tracking, the higher the chance of its abuse. Using such advanced technologies is always a trade-off between our comfort and the possibility of infringement upon our privacy. It’s a sensitive balance we all must find for ourselves.

References:

Koelemeij, J.C.J., Dun, H., Diouf, C.E.V., et al. A hybrid optical–wireless network for decimetre-level terrestrial positioning. Nature 611, 473–478 (2022). https://doi.org/10.1038/s41586-022-05315-7

Inflammations have never been easier to treat!

You may have heard of Nurofen, a very common medicine used to treat pain and swelling, or aspirin, which also helps relieve pain but is not nearly as reliable as Nurofen for this purpose. They are a part of our lives, and many of us take them for granted without knowing how they are made. The problem with these kinds of medicine, called anti-inflammatories, is that they require very specific conditions in order to be produced, which can make them rather rare to find in some pharmacies. However, this is about to change for the better.

Would you believe a bunch of researchers have discovered a much more effective way to make treatments for inflammation? You might not know it, but anti-inflammatory medicine used to be made through very limited means. That has just changed: chemists will finally be able to relax and not have to worry that their reaction won’t work (at least in this domain).

Lots of experiments have been done and lots of materials have been used in order to achieve this result. So much has been used that half of the researchers’ article solely lists every material and its composition. This alone should say something about how much effort and time the researchers had to put into this extensive experiment.

It’s like going to the store, trying every t-shirt available until you find the perfect one for you, and then buying lots of them. The same three steps apply to the researchers’ experiments as well: 1) trying out every possible combination of reactants, 2) finding the ideal conditions, and 3) obtaining the largest quantity of product thanks to those conditions.

This is exactly why their method is applicable to almost any kind of this so-called “pyridine”, the main building block of the product. This compound might seem like a big deal (and in general it is), but it’s probably a good idea to give a bit more information on what it actually is and what else it’s used for.

Pyridine is an aromatic compound, which means its atoms are bonded together in a flat ring – in pyridine’s case, five carbon atoms and one nitrogen atom. It is widely used in medicine thanks to this property, and not just for anti-inflammatories. It’s highly possible the dental care products you use contain pyridine derivatives, since they are effective at killing bacteria.
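
If you like poking at molecules on a computer, here is a tiny sketch using the RDKit cheminformatics library (my choice of tool for illustration, nothing to do with the paper) that shows pyridine’s six-membered aromatic ring of five carbons and one nitrogen:

```python
# Pyridine in SMILES notation: a six-membered aromatic ring, five carbons + one nitrogen.
from rdkit import Chem

pyridine = Chem.MolFromSmiles("c1ccncc1")
print([atom.GetSymbol() for atom in pyridine.GetAtoms()])          # ['C', 'C', 'C', 'N', 'C', 'C']
print(all(atom.GetIsAromatic() for atom in pyridine.GetAtoms()))   # True: every ring atom is aromatic
```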

Maybe you’ve worked in a lab before and used pyridine to eliminate a chlorine atom, for example. It makes a very good base in these kinds of reactions (elimination reactions) or when it reacts with an acid.

Carrying on with the scientific article: the researchers tried many types of pyridines and realised that most of them, if not all, yielded a certain amount of product. This really makes them stand out from their predecessors in this domain, who could only obtain a yield from one specific type of pyridine, making their procedures very limiting.

As for us, the general public: if you’ve ever found yourself searching for anti-inflammatory medicine without success, this discovery should “treat” the issue and help make these medicines readily available!

Source:

Zhou, S., Hou, X., Yang, K., Guo, M., Zhao, W., Tang, X., & Wang, G. (2021). Direct Synthesis of N‑Difluoromethyl-2-pyridones from Pyridines. Journal of Organic Chemistry, 86(9), 6879–6887. https://doi.org/10.1021/acs.joc.1c00228

Author:

Andrei Costea

The neglected carbon “bank” on Earth: the ocean

Carbon plays a critical role in Earth’s life and climate system: it is central to life on Earth and, in the form of carbon dioxide, an important greenhouse gas that regulates the temperature of the atmosphere.

However, the amount of carbon in the atmosphere is surprisingly small. The largest carbon reservoir is the solid Earth, which holds nearly 100,000 times as much carbon as the atmosphere, and the second-largest is the often-ignored ocean, with roughly 60 times as much carbon as the atmosphere. The exchange of carbon between these major reservoirs and the atmosphere and biosphere constitutes the global carbon cycle. Understanding and quantifying this cycle are among the primary aims of modern Earth system science.

The ocean pulls carbon from the atmosphere through two pump-like mechanisms: the solubility pump and the biological pump.

First, the solubility pump works because cold water can hold more dissolved CO2 than warm water. The ocean’s deep and bottom waters typically range in temperature from 2–4 °C, and CO2 is almost twice as soluble in them as in typical surface waters of about 20 °C. This causes the amount of dissolved carbon to rise with depth.
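
To see roughly where that “almost twice as soluble” figure comes from, here is a back-of-the-envelope sketch in Python using typical textbook Henry’s-law values for CO2 (my own assumed constants, not numbers taken from the review):

```python
# Rough Henry's-law estimate of CO2 solubility in cold deep water vs warm surface water.
# Assumed textbook constants: k_H ~ 0.034 mol/(L*atm) at 25 C, van 't Hoff coefficient ~ 2400 K.
import math

def co2_solubility(temp_celsius, k_ref=0.034, t_ref=298.15, vant_hoff=2400.0):
    """Henry's-law constant for CO2 in water, in mol/(L*atm), at the given temperature."""
    T = temp_celsius + 273.15
    return k_ref * math.exp(vant_hoff * (1.0 / T - 1.0 / t_ref))

cold = co2_solubility(3.0)    # typical deep/bottom water, ~2-4 C
warm = co2_solubility(20.0)   # typical surface water
print(f"cold water dissolves about {cold / warm:.1f}x more CO2")  # roughly 1.7x
```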

Second, the biological pump is driven by living organisms. Numerous microscopic marine algae live at depths that sunlight can reach, where they take up carbon dioxide through photosynthesis (a process that uses sunlight, water, and carbon dioxide to create oxygen and energy in the form of sugar). Fish and other marine animals that live at greater depths, where there is no sunshine, swim up to the upper layers of the water at night to feed, and they release carbon through feces that sink towards the bottom of the sea, which also carries carbon into deeper water.

The third and smallest factor that changes dissolved carbon is anthropogenic carbon (the carbon produced directly by human activities). Although the impact of anthropogenic factors on carbon uptake in the ocean is small compared to natural causes, the ocean is extremely important for the uptake of anthropogenic CO2.

Since the start of the industrial revolution, human activities like industrial production and logging have emitted significant amounts of CO2, but about 60% of the CO2 has been absorbed by the oceans and terrestrial biosphere.

But there is a limit to how much carbon the ocean can absorb, and absorbing large amounts of carbon causes ocean water to acidify, leading to the death of corals and other marine life. The average pH of surface seawater has declined by 0.1 during the industrial era, most of which has occurred in the last 50 years.
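
Because pH is a logarithmic scale, a drop of 0.1 is a bigger change than it sounds. A quick sanity check:

```python
# pH is a base-10 logarithmic scale, so a 0.1 drop means the hydrogen-ion
# concentration has grown by a factor of 10**0.1.
drop = 0.1
print(f"H+ concentration increase: {(10 ** drop - 1) * 100:.0f}%")  # ~26%
```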

The greatest declines in pH occurred in the surface ocean, where the concentration of anthropogenic carbon is the highest. Because of the slow ventilation times of the deep ocean, anthropogenic carbon has not penetrated into the deep layers of many parts of the ocean, which limits the rate of oceanic CO2 uptake.

In the future, to improve the ocean’s ability to absorb carbon to alleviate CO2-induced warming, we also need to further understand the ocean carbon cycle and enhance the rate at which carbon is transferred from the surface ocean or atmosphere to the deep ocean.

Citation:

DeVries, Tim. “The Ocean Carbon Cycle.” Annual Review of Environment and Resources 47 (2022): 317-341.

Author:

Chengran Di

Can gene therapy be the cure for childhood blindness?

Among every 100,000 babies born, approximately 3 will be affected by Leber’s Congenital Amaurosis, or LCA. These babies often suffer from a severe loss of vision at birth, affecting their quality of life right from the very beginning. LCA is an inherited form of blindness, meaning that both parents must be carriers of the gene in order to pass it down to their child. Presently, there is no cure for LCA. However, the genetic nature of the condition makes it a perfect candidate for gene therapy trials. In a study conducted at University College London, researchers took on this task.

All new treatments need to undergo clinical trials in order to ensure their efficacy. For this study, the researchers conducted what is known as a “phase 1–2 open-label trial”. This might seem a bit complicated at first, but it’s really quite a simple and efficient way to categorize clinical trials. There are up to 5 different phases in a trial, but we only need to focus on the first 2 for now. In phase 1, the treatment is administered in a very small dose to healthy volunteers. They are closely monitored for any adverse side effects, and if few are observed, the trial proceeds with a higher dose. This phase mostly lets researchers become familiar with the treatment and how it affects the body, find the best possible dosage and length of treatment, and determine its general safety. In phase 2, the treatment is administered to a larger group of actual patients using the predetermined safe dosages. In this phase, less common side effects will often show themselves. As for the “open-label” part, this just means that no information is withheld from the participants, and they are well informed about what treatments they are receiving.

The study was conducted on 12 participants aged 6–23. All patients suffered from early-onset LCA, specifically with a mutation in a gene responsible for ensuring normal vision in humans. Officially, this gene encodes the retinal pigment epithelium-specific 65 kDa protein, but we’ll call it the RPE65 gene. The participants were injected with viral vectors, which are biological tools used to deliver genetic information to cells. These vectors carried a functional copy of the RPE65 gene. 4 of the patients received a low dose of the treatment, while the other 8 received a higher dose. As all good studies need a control, the eye with lower vision was treated while the better eye was used as a control and therefore left untreated. Over a 3-year period, the participants were monitored by means of electroretinography, or ERG, which simply means that the electrical responses of different types of cells in the retina were measured. The retina is the layer of cells at the back of the eye responsible for detecting light and sending signals to your brain so that you can see.

The study found that 6 of the participants showed significant improvement in retinal sensitivity, to varying degrees. This improvement lasted only about 3 years, peaking anywhere between 6 and 12 months before decreasing again. In terms of actual retinal function, there was no real improvement. 3 of the participants experienced intraocular inflammation, and 2 others experienced some other immune response; all 5 of these individuals had received the higher dose. 6 patients experienced a reduction in retinal thickness, which could potentially lead to even lower vision. The older participants were more responsive to treatment, and the reasons for this are still unknown.

Although the results may seem inconclusive in terms of the effectiveness of the treatment in humans, I failed to mention earlier that the researchers conducted a parallel trial on dogs. Although highly dose-dependent, the dogs experienced substantial improvement in their retinal function. If you look at the improvement the patients experienced at the beginning of the trial together with the improvement in the dogs, you might see that there is true potential in this treatment. Gene therapy in general is just in its beginning stages, and we should remain hopeful that one day it can be the cure for LCA and many other diseases like it.

Source:

James W.B. Bainbridge, Ph.D., F.R.C.Ophth., et al. Long-Term Effect of Gene Therapy on Leber’s Congenital Amaurosis. New England Journal of Medicine 372, no. 20, 1887-1897 (2015) DOI: 10.1056/NEJMoa1414221

Images:

https://www.allaboutvision.com/resources/retina.htm

How a chess AI discovered new algorithms

 

IBM’s Deep Blue chess engine beat Garry Kasparov in 1997, becoming the first computer to beat a world champion. Deep Blue’s “intelligence” was of the kind computers excel at  — calculation. It could search all possible sets of moves up to a certain depth and choose the best option. 

With the advancements in artificial neural networks over the last decades, we have witnessed the birth of a new generation of chess engines that mimic the way humans learn chess: by trial and error and repeated practice. 

This technique, known as reinforcement learning, was used by DeepMind 5 years ago to develop AlphaZero, a chess AI. Humans were no longer a challenge, as advancements in raw calculation power have made smartphones more capable than the supercomputer used by Deep Blue. Instead, AlphaZero would compete against other algorithms. It beat the reigning champion at the time, Stockfish 8, and was praised for its groundbreaking strategies. The highest-rated human player, Magnus Carlsen, commented on AlphaZero in an interview, saying that its play could be “mistaken for creativity”. How might this “creativity” be useful elsewhere?

DeepMind’s researchers recently published a paper in the journal Nature, titled “Discovering faster matrix multiplication algorithms with reinforcement learning”, where they outlined a new application of AlphaZero for algorithmic discovery. The algorithm in question?

Matrix Multiplication

Below is an example of the traditional matrix multiplication process. To calculate the value of an element in the new matrix, you just need to take the sum of the element-wise products of the corresponding row of the first matrix and column of the second.
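
In code, the schoolbook method is just a triple loop; here is a plain Python sketch of that textbook procedure (my own illustration, not taken from the paper):

```python
# Schoolbook matrix multiplication: entry (i, j) of the result is the sum of
# element-wise products of row i of A with column j of B.
def matmul_naive(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(m))
    return C

print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

For two 2 x 2 matrices, each of the four entries needs two products, so the schoolbook method spends 8 multiplications in total.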

It might seem impossible to do this any faster than what we just saw; after all, we need to calculate each value individually. Mathematicians agreed, and for over a century the consensus was that the method shown above was the best possible.

A 1969 landmark paper by Volker Strassen proved that this was not the case. How did he manage to do this? It is actually remarkably simple. As did he, we can examine a small matrix such as 2 x 2 to understand the method.
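
Here are Strassen’s seven products and the four combinations that rebuild the result, written out for the 2 x 2 case as a small Python function (a direct transcription of his published formulas):

```python
# Strassen's 2 x 2 multiplication: 7 products instead of the schoolbook 8.
def strassen_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2(1, 2, 3, 4, 5, 6, 7, 8))  # [[19, 22], [43, 50]], same result as before
```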

As we can see, Strassen’s algorithm uses 7 multiplications, 1 less than the traditional method. This is the minimum possible for a 2 x 2 matrix.

Now you might ask: is this reduction of 1 multiplication worth all the extra additions that come with this method? Indeed, it is not, for matrices of size 2 x 2. But the best part about Strassen’s algorithm is that it works for matrices of any size (even non-square ones, though that requires some extra work). In the example above, we treated the elements of A and B as real numbers; however, they can also be submatrices of A and B. Suppose the original matrix is of size N x N; then each element would represent a matrix of size N/2 x N/2, one corner of the original. This division can be done recursively until the submatrices are small enough (depending on the implementation). Then we can switch to standard matrix multiplication and compute the final result, as in the sketch below.
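
Here is a sketch of that recursive idea using NumPy, assuming square matrices whose side length is a power of two (real implementations pad the matrices and tune the cutoff):

```python
# Recursive Strassen multiplication: apply the seven-product rule to quadrants,
# and fall back to ordinary multiplication once the blocks are small enough.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B  # small block: plain multiplication is faster here
    h = n // 2
    a11, a12, a21, a22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    b11, b12, b21, b22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    m1 = strassen(a11 + a22, b11 + b22, cutoff)
    m2 = strassen(a21 + a22, b11, cutoff)
    m3 = strassen(a11, b12 - b22, cutoff)
    m4 = strassen(a22, b21 - b11, cutoff)
    m5 = strassen(a11 + a12, b22, cutoff)
    m6 = strassen(a21 - a11, b11 + b12, cutoff)
    m7 = strassen(a12 - a22, b21 + b22, cutoff)
    top = np.hstack([m1 + m4 - m5 + m7, m3 + m5])
    bottom = np.hstack([m2 + m4, m1 - m2 + m3 + m6])
    return np.vstack([top, bottom])
```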

Beyond the 2 x 2 case, Strassen’s algorithm is not optimal, and many faster algorithms are known. The minimum possible number of multiplications is unknown for all sizes other than 2 x 2. In fact, one of the most important open problems in computer science is how efficiently the product of two arbitrary finite-dimensional matrices can be computed in general. AlphaTensor cannot settle this question, so mathematicians can rest easy knowing their jobs are safe. Nevertheless, the program is capable of finding ways to multiply small matrices more efficiently. It rediscovered algorithms like Strassen’s, and also found novel algorithms that surpassed the best human-designed ones.

How?

In simple terms, the researchers turned matrix multiplication into a single-player game, where AlphaTensor would try to solve a puzzle in as few legal moves as possible. The resulting sequence of moves represents a provably correct algorithm that can be used to multiply any two matrices of the given dimensions. The game is incredibly hard, since the number of possible algorithms for even a 3 x 3 matrix is greater than the number of atoms in the universe.

Applications

We have now talked at length about matrix multiplication, so one might wonder what it is actually useful for. Matrix multiplication is fundamental to linear algebra, which has many applications. For example, a matrix can represent a geometric transformation such as the rotation of an object, which makes matrix multiplication very useful for 3D graphics. Other applications include speech recognition, weather forecasting and data compression. Matrix multiplication is so widely used that even minor improvements have major impacts. Organizations and businesses around the world spend billions of dollars on developing computing hardware for efficient matrix multiplication, so it can have real economic impact as well.

The future

The algorithms found by AlphaTensor are both useful and interesting, but even more so, the approach of using reinforcement learning for algorithmic discovery is very promising in terms of its possible future applications. It demonstrates the power of neural network based approaches for optimizing pretty much anything, whether it be matrix multiplication or chess, or any of the many more applications yet to be developed.

Sources:

Fawzi, A., Balog, M., Huang, A. et al. Discovering faster matrix multiplication algorithms with reinforcement learning. Nature 610, 47–53 (2022). https://doi.org/10.1038/s41586-022-05172-4

Fridman, L., Carlsen, M. Magnus Carlsen: Greatest Chess Player of All Time | Lex Fridman Podcast #315 [Video] (2022). YouTube.  https://www.youtube.com/watch?v=0ZO28NtkwwQ&t=1634s

Kasparov, G. Chess, a drosophila of reasoning. Science, 362(6419), 1087–1087 (2018). https://doi.org/10.1126/science.aaw2221 

Pandolfini, B. Kasparov vs. Deep Blue: The historic chess match between man and Machine. Fireside (1997).

Strassen, V. Gaussian elimination is not optimal. Numer. Math. 13, 354–356 (1969). https://doi.org/10.1007/BF02165411

Author: 

Samuli Näppi

Could suncream for solar panels increase their efficiency?

By Eva Kastrinos

Throughout years of hearing about climate change and feeling worried, scared and sometimes just plain useless, one group of scientists went out looking for solutions. They have created Al2O3-Ta2O5-Al2O3 (aluminum oxide-tantalum pentoxide-aluminum oxide), a fittingly long and complex name, so for our ease let’s call it Solarpaint.

Where did this all come from?

They homed in on a type of green energy: solar energy produced by solar panels.

Although this is a great form of energy, solar panels have an efficiency of 15-20%. This means only 15-20% of the sunlight panels absorb is turned into power. So, these scientists focused on finding a cheap and simple way to improve solar panels, a way that governments could not ignore. With this they created Solarpaint, which increases solar panel efficiency by 14%, almost doubling their current efficiency.

So, what is it?

Solarpaint is a coating, a paint, that is applied on top of solar panels to stop light reflecting off them. This may sound strange, but a lot of a solar panel’s efficiency is lost because of its reflective surface. Instead of absorbing all the sunlight that reaches them, panels reflect over 35% of it back, like a mirror. So the scientists set out to solve this issue, so that more of the light reaching the panels can be used for power.

But how did they create it?

To find Solarpaint, the researchers went through a bunch of compounds and settled on two, Ta2O5 and Al2O3, which have great properties.

Ta2O5 has a high dielectric strength. Now this may seem too jargon-y, but it’s simple to understand with a little story. Last week I noticed one of the hinges on a window at home had popped off. Although houses in Finland have good insulation, the loose hinge was letting a breeze through the house. So the insulation wasn’t really useful, because the house was still cold. We could say my house has a low protective strength. If my house had a high protective strength, it would have withstood the wind and the insulation would have remained useful and intact. So, in terms of dielectric strength, saying Ta2O5 has a high dielectric strength means it remains a great insulator and stops electricity (the wind) from flowing even when placed in strong electric fields (very windy places). This also means it doesn’t heat up very much, because it isn’t conducting electricity (like a lightbulb that’s switched off: it’s cold because it’s not conducting electricity).

Now you may be asking yourself why this matters. All solar panels degrade over time, and one of the main causes is the temperatures they reach; they’re like people: if you’re under the sun a long time, you’ll burn. So using Ta2O5 ensures that the solar panel doesn’t heat up even more and doesn’t speed up its own degradation. It acts like a suncream, protecting the solar panel from heating up.


Finally, the two compounds were chosen because they have no smell or colour, are not toxic, and, as explained, resist high temperatures. After coating a solar cell (a small part of a solar panel) with Solarpaint, the researchers took its temperature and then did the same with an uncoated cell. The results showed that the coated cell had a lower temperature – their idea had worked! The solar panels worked more efficiently and produced more power because of their Solarpaint.

This discovery could change the world of solar energy, and help progress the use of green energy immensely. By increasing solar panel efficiency cheaply and easily, they present governments with yet another great reason to invest in solar energy.

Rajvikram, M., & Leoponraj, S. (2018). A method to attain power optimality and efficiency in solar panel. Beni-Suef University Journal of Basic and Applied Sciences, 7(4), 705–708. https://doi.org/10.1016/j.bjbas.2018.08.004

 

 

 

How Science Uses Lasers for Your Health

This is about the effect of lasers on the diagnosis of common diseases.

This study from 2018 shows how lasers can help with the diagnosis of a few common diseases. The study focuses on the possible replacement of current diagnosis methods that utilize breath analysis. Breath analysis is a common mode of diagnosis for diseases such as type 1 diabetes, lung cancer, obesity and many more, and with new upcoming laser technologies, experts expect breath analysis to become far more accessible and easier than it already is.

11.3% of the American population (approximately 38 million people) had diabetes back in 2019, according to official government reports. Obesity and lung cancer also show high diagnosis rates, so developing new, faster and more accurate methods to diagnose these diseases will certainly do good. According to the CDC, even though millions of cases of these diseases get diagnosed each year, statisticians estimate that about a quarter of the total cases go unnoticed and thus undiagnosed. New techniques for breath analysis are expected to also reduce the number of undiagnosed cases.

Breath analysis currently utilizes a few different kinds of chemical ionization mass spectrometry, which is basically a process of vapour analysis: the gas being analysed is ionized, and the resulting ions are sorted by the mass-to-charge ratio that the experts need. To put it in simpler terms, think of a village where the villagers (the analyte gas) live their lives completely ordinarily. Let’s say a band of rebels comes to the village to turn them against their king (ionization). The villagers then rebel (ionize) and come together against the king. However, some families in the village seem more rebellious than others.

Mass spectrometry basically inspects how many villagers there are and how “rebellious” they are, and tells us how many villagers from each family are in the rebellion in total. Now, imagine the same situation where the rebels use more advanced propaganda techniques and LASERS!

That is where optical absorption spectroscopy comes in (beware the difference from spectrometry). Spectrometry uses electrons to break molecules apart, whereas spectroscopy uses electromagnetic radiation. This new technique uses lasers to pinpoint not which family each villager belongs to, but who each villager is. Even though the use of lasers is currently not as available and accurate as mass spectrometry, the optical frequency comb shows that the future has plans for optical absorption spectroscopy.

Optical frequency combs, or OFCs for short, are a mid-infrared laser technology that made lasers relevant in the breath analysis field once more, after lasers had proved less efficient than their competitors in the early 2010s. The development of OFCs brought a great boost in the accuracy and availability of laser-based breath analysis methods, particularly optical absorption spectroscopy. With OFC lasers, the whole method was upgraded from what ordinary lasers could offer.

An ordinary laser emits light in a narrow wavelength region, whereas an OFC produces equally spaced, very precise lines that together cover a much wider range of the spectrum. This is what allows us to identify each villager individually instead of only by the family they belong to (which is not as precise).
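
The defining property of a frequency comb is that its “teeth” sit at evenly spaced frequencies, f_n = f_ceo + n × f_rep. A tiny sketch with purely illustrative numbers (not values from the study) shows what that looks like:

```python
# Comb teeth: evenly spaced optical frequencies f_n = f_ceo + n * f_rep.
# The numbers below are illustrative only, chosen to land near 50 THz (mid-infrared).
f_rep = 100e6   # repetition rate: spacing between neighbouring teeth, in Hz
f_ceo = 35e6    # carrier-envelope offset frequency, in Hz
teeth = [f_ceo + n * f_rep for n in range(500_000, 500_005)]
print([f"{f / 1e12:.6f} THz" for f in teeth])  # five neighbouring, perfectly spaced comb lines
```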

OFCs are not only used for breath analysis; they are also useful in areas like astronomy and quantum research. In astronomy they are used as “astro-combs”, a type of frequency comb specialized for astronomy. They can also be found in atomic clocks, the standard for timekeeping, which measure time using the different energy levels of atoms.

Researchers suggest that if such a small advance in OFC technology gives this much of an upgrade to laser-based breath analysis, then only time will tell how much more can be accomplished with further advances in OFC technology.

Citation:

Metsälä, M. (2018). Optical techniques for breath analysis: from single to multi-species detection. Journal of Breath Research, 12(2), 027104. https://doi.org/10.1088/1752-7163/aa8a31

Author:

Egemen TÜRKER

Evidence for New Densest State of Matter Found

The Standard Model of particle physics describes quarks, the building blocks of protons and neutrons, as very sociable creatures. In almost all environments they are only ever observable in pairs or triplets. Within these groups it is impossible to separate them, since the energy required to pull the quarks apart is high enough to create a new pair of quarks from the quantum foam. Where they cannot form these small, separate groups, either because they possess too much thermal energy to stay permanently bound to each other or because an external force is squeezing them together, they form what is called quark matter. This exotic state of matter somewhat resembles a liquid, in which none of the individual quarks are discernible from each other.

One fairly well-studied occurrence of quark matter is the quark-gluon plasma formed in the extreme environments of high-energy particle collisions, such as those in the Large Hadron Collider. Here the quarks are thrown away from each other by intense kinetic energy. Similarly to how electrons can’t hold on to atomic nuclei in a traditional plasma, here the quarks cannot hold on to each other and move freely through space. Another kind of quark matter, which physicists have long hypothesized to exist, is so-called “cold and dense quark matter”. This type would form under the enormous gravitational pressure inside stellar remnants. Traditionally these objects have been called neutron stars, as the gravitational force acting on protons and electrons is extreme enough to squish them together into neutrons. However, under intense enough pressure, these neutrons get compressed further and form quark matter, in which the boundaries between individual quarks no longer exist.
Proving the existence of this hypothetical densest state of matter was thought to be impossible, since it would hide inside large neutron stars, one of the most extreme environments in the universe. It is unknown whether the pressure inside a stellar remnant can be high enough to form quark matter without being so high as to collapse the neutron star into a black hole. But researchers from the University of Helsinki may have found a way to show its existence by observing its structural effects on the gravitational behaviour of the star.
They use mathematical tricks from string theory, which have already helped describe the quark-gluon plasma, to build a mathematical model of compact stellar remnants. This model has one free parameter they can vary, which corresponds roughly to the constituent masses of the quarks from which the quark matter is formed. Varying this parameter, they get four different solutions for the structure of a compact star: a traditional neutron star, a star made entirely of quark matter, and two kinds of hybrid stars, one with a crust made of quark matter and one with a mantle made of quark matter.
Gravitational-wave measurements from LIGO and Virgo have put certain constraints on the tidal deformability of neutron stars. These gravitational-wave detectors can pick up the ripples in spacetime left by the merger of two neutron stars. How much a neutron star squishes and stretches under the gravitational influence of the other star has a subtle effect on the amplitude and frequency spectrum of the resulting gravitational wave. The researchers can use their mathematical models to calculate this tidal deformability for each choice of quark mass. They find that models representing hybrid stars fit the observations more closely than models describing traditional neutron stars, although traditional neutron stars are still well within the error bars of the gravitational-wave data.
While the scientists admit that their models are based on heavy simplifications of the underlying theory of quantum chromodynamics, their results may prove to be the first sign that we need to rethink our understanding of neutron stars.

Paper:

Annala, E., Ecker, C., Hoyos, C., Jokela, N., Rodriguez Fernandez, D. & Vuorinen, A. (2018). ‘Holographic compact stars meet gravitational wave constraints’. Journal of High Energy Physics, vol. 2018, no. 12, 078. https://doi.org/10.1007/JHEP12(2018)078

Author: Jeff Schymiczek