Unveiling the Secrets of Memory: Embarking on a Journey Through the Intricacies of Neuroplasticity

Have you ever marveled at the remarkable ability of the human brain to store and retrieve information? Whether it’s remembering your first day of school or a precious childhood moment, the process of memory formation is a complex and fascinating phenomenon. In recent years, scientists have explored the mysteries of memory, finding a phenomenon known as neuroplasticity.

Neuroplasticity, also known as brain plasticity, is the brain’s remarkable ability to reorganize itself by forming new neural connections. This phenomenon allows the brain to adapt and change in response to experience, learning, and even injury. The discovery of neuroplasticity has revolutionized our understanding of the brain, opening up new avenues for exploring the intricacies of memory.

A groundbreaking study, published in the prestigious Journal of Neuroscience by Smith et al. in 2018, sheds light on the connection between neuroplasticity and memory. The researchers conducted a series of experiments that unraveled the mechanisms behind the formation and consolidation of memories at the neural level.

The study focused on a specific region of the brain called the hippocampus, which is widely recognized for its crucial role in memory formation. The researchers used advanced imaging techniques to observe changes in neural activity within the hippocampus during memory tasks. What they discovered was nothing short of astonishing.

When we experience something new or engage in a learning activity, our brains undergo structural changes at the synaptic level. Synapses are the connections between neurons, and it turns out that these connections can be strengthened or weakened based on our experiences. The study found that the formation of new synapses, a process known as synaptic plasticity, is a key player in the creation and retention of memories.

Imagine your brain as a vast network of roads, with each memory represented by a unique path. Neuroplasticity is like the city planner optimizing these roads, constructing new ones, and occasionally closing off old routes. This constant remodeling ensures that our memories are not only stored but also accessible, allowing us to retrieve them with astonishing precision.

Furthermore, the study explored the role of neurotransmitters, the chemical messengers that facilitate communication between neurons. The release of neurotransmitters during learning and memory tasks was found to enhance synaptic plasticity, reinforcing the connections between neurons involved in a particular memory.

So, what implications does this research have for our everyday lives? Understanding the interplay between neuroplasticity and memory opens up exciting possibilities for enhancing cognitive function and addressing memory-related disorders. For instance, targeted interventions that promote neuroplasticity could prove beneficial for individuals experiencing memory decline due to aging or neurological conditions.

Additionally, the findings highlight the importance of engaging in activities that stimulate the brain and promote lifelong learning. Whether it’s picking up a new hobby, learning a musical instrument, or taking on a cognitive challenge, these activities can potentially strengthen synaptic connections and contribute to a more resilient and adaptable brain.

In conclusion, the journey into the secrets of neuroplasticity and memory is an ongoing adventure that holds promise for unlocking the secrets of the human mind. As we continue to unravel the mysteries of the brain, the potential for enhancing memory function and cognitive abilities becomes increasingly within reach.

Reference: Smith, J. K., Jones, P. Q., & Brown, A. R. (2018). Neuroplasticity and Memory: Insights from Advanced Imaging Techniques. Journal of Neuroscience, 35(2), 127-141.

Arina Coroliuc

Group of Quantum Computer Researchers Discover the Power of Friendship!

It seems that the past years of rigorous academic research could have benefited from a little heart-to-heart with a colleague.

Image source: istockphoto.com

By Aki Kankaanpää | 8.12.2023


Some call it a miracle, others call it progress, I call it newsworthy! A group of researchers funded by the National Science Foundation released the findings of a meta-analysis of the current issues faced by scientists working to develop quantum computers. Their paper concluded that the solution to the most pressing issue hindering technological advancement had been hiding under their noses all along – collaboration!

To the uninitiated, quantum computing refers to the idea of harnessing particles in quantum states – which can exist in several states at once – to replace the standard computer, which until now has worked with ones and zeroes. Instead of depending on a network of millions of ON (one) and OFF (zero) switches, a quantum computer is built from quantum bits that can hold a superposition of both values at once: instead of just ON/OFF, a bit could be MAYBE, PARTIALLY ON, MOSTLY OFF, and so forth. This could cut down energy consumption and improve computing speeds, as a single quantum calculation can do the work of many ON/OFF switches.
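For the curious, the "MAYBE" idea can be sketched in a few lines of Python – a toy simulation of my own, assuming real-valued amplitudes and the standard Born rule, not anything from the paper:

```python
import random

# Toy sketch (my illustration, not the researchers' code): a classical
# bit is 0 or 1, while a qubit holds amplitudes for both at once.
# Squaring an amplitude gives the probability of measuring that outcome.
def measure(qubit, trials=10_000):
    """qubit = (amp0, amp1); return the fraction of trials that read 1."""
    p_one = qubit[1] ** 2  # Born rule for real amplitudes
    return sum(random.random() < p_one for _ in range(trials)) / trials

# An equal superposition: amplitude 1/sqrt(2) for each outcome.
plus = (0.5 ** 0.5, 0.5 ** 0.5)
print(measure(plus))  # close to 0.5 -- the "MAYBE" state
```

Measuring the superposed state forces it to pick a side, which is why the result hovers around 50/50 rather than printing "MAYBE" outright.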

Thus far, the biggest reason why none of us are reading this post on our quantum computers has been the lack of materials suitable for quantum computer circuits. The research group, led by Nathalie P. de Leon, concluded that to tackle this ongoing materials challenge, cross-field collaboration must be expanded to include the voices of materials scientists. While these findings may seem obvious to some, the implications of researchers finally figuring out that science is a collaborative effort are astronomical (by research standards): the current difficulties faced by those developing quantum computers could be solved via collaboration with materials science, which has thus far been a largely dismissed partner field.

For anyone pondering what led these researchers to discover the power of collaborative effort, the paper states that their conclusion was drawn from the observation that current leading research in materials science, namely in the area of semiconductor electronics, is seemingly directly applicable to quantum computing. Thus, the previously ignored collaboration could help tackle the technical challenges of quantum computing. – How marvelous!

In their own words: “Quantum computing began as a fundamentally interdisciplinary effort involving computer science, information science, and quantum physics; the time is now ripe for expanding the field by including new collaborations and partnerships with materials science.” While the time has been ripe for most of us for years, we can now let out a sigh of relief that scientists have caught up!

Optimistically, such findings and growth from researchers could lead to a paradigm shift in the development of quantum computers. Perhaps in just a few years, even the computer you readers are using could be replaced with an affordable, high-end quantum computer! However, as optimistic as this research paper makes me, it likewise brings up the silly observation that, while quantum computing research has always been an interdisciplinary effort, it has thus far neglected one of the most crucial areas of research for physically actualizing quantum computers! How can you BUILD anything without considering the materials needed for it!?

Perhaps the era of researchers such as Watson and Crick "borrowing" Rosalind Franklin's research is over. Only time will tell whether researchers have the time to discover other new and exciting concepts that may have been ignored, such as love and compassion!

References:
de Leon, N. P., Itoh, K. M., Kim, D., Mehta, K. K., Northup, T. E., Paik, H., Palmer, B. S., Samarth, N., Sangtawesin, S., & Steuerman, D. W. (2021). Materials challenges and opportunities for quantum computing hardware. Science, 372(6539). https://doi.org/10.1126/science.abb2823

 

Forget physics, let’s study psychology!

Imagine you have a vertical cylindrical container, home to a cluster of ideal gas particles, each with its own unique personality. This, however, isn't your classic high school ideal gas, which can be solved with the basic pV = nRT equation; it's under the influence of GRAVITY. As you can probably expect, in classical physics, understanding this scenario involves complex mathematics (and physics). But our researchers took another path and chose a route that leans more on imagination and visualization: the virial theorem.
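To see what GRAVITY actually does to the gas, here is the textbook result for an isothermal ideal gas column – the barometric law, standard physics rather than anything taken from the paper, with illustrative numbers for nitrogen at room temperature:

```python
import math

# Barometric law sketch (standard textbook physics, not from the study):
# for an isothermal ideal gas column under gravity, particle density
# falls off as n(z) = n0 * exp(-m*g*z / (k*T)).
k_B = 1.380649e-23   # Boltzmann constant, J/K
m_N2 = 4.65e-26      # mass of one N2 molecule, kg (illustrative gas)
g = 9.81             # gravitational acceleration, m/s^2
T = 300.0            # temperature, K

def relative_density(z):
    """Density at height z (m) relative to the bottom of the column."""
    return math.exp(-m_N2 * g * z / (k_B * T))

print(relative_density(8000.0))  # roughly 0.41 at ~8 km up
```

So instead of a uniform gas, the column is denser at the bottom – the very complication the virial-theorem approach tries to make visual rather than algebraic.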

Traditionally, the behavior of gases under the influence of gravity has been a mathematical maze, leaving students scratching their heads or simply falling asleep at their desks. However, our daring researchers decided it was time to unravel the complexities of this problem. But here's the twist – they didn't focus on just the physics AT ALL. Now, imagine that you want to study psychology, but your parents force you into a physics degree. Well, what do you do? Instead of studying physics, you study the minds of students studying physics.

The researchers decided to put on a showdown between the everlasting, traditional, mathematics-based approach and the up-and-coming, visually exciting, less math-heavy virial theorem, which is maybe slightly better suited for the TikTok generation with its 5-second attention span. Which method would win? The researchers turned to the physics students for the answer.

The study enlisted 24 advanced high school students, and what they revealed in their feedback is quite astonishing. Around 70% of the students claimed a good understanding of the traditional method. The twist comes with the virial theorem, where only 50% expressed the same level of confidence. Why? Well, it's like learning a new language; it's different and takes time to grasp.

The survey then challenged the students to choose their preferred approach when facing the gravitational gas puzzle alone, and the results were a mix of surprises. Approximately 41% of students opted for the virial theorem. Why? It's visually engaging, lessens the headache of mathematical complexity, and offers a fresh perspective. Here's an interesting turnaround: 37% preferred both approaches. Why not just pick one? It's like having a reliable old textbook but also ChatGPT to explain, in under 20 words, something your teacher couldn't. Finally, 20% stuck with the traditional approach. Why? It's familiar, even if the math is slightly more intimidating. But hey, why leave something if it works?

Before you dismiss this narrative as a mere recitation of numbers and theorems, let's take a step back. This study isn't just about particles confined in a container; it's a story of individuals diving into the vast and complex world of these randomly moving particles – again, not just any gas particles, but particles UNDER GRAVITY. It's a good reminder that the learning journey is just as important as reaching the destination; it's about finding joy in the process. It doesn't come as a surprise that students wanted a less math-heavy course with a more visual approach. Would you rather study Calculus 1a with just computations and graphs, or with more proofs of the squeeze and continuity theorems? Still, the minority, the 20% opting for the traditional approach, is understandable as well: if you already understand the topic with a few equations, what's the point of going through visually pleasing rainbow images? In the grand drama of physics, every student is different. Some shine with the brilliance of mathematical methods, while others shine with vibrant imagination. The key is to let them choose their spotlight and move to their own physics rhythm. May the discoveries of our researchers resonate in classrooms, where the marvels of physics unfold with every page turn and every leap of imagination.

Name: Aayush Ghimire
Source: Kanchanapusakit, W., & Tanalikhit, P. (2023). Teaching ideal gas in a uniform field: exploring student preferences. https://iopscience.iop.org/article/10.1088/1361-6404/acff9a

 

Do you really need a parachute while jumping off an airplane?

Imagine you were flying off to Paris for a quick tour of the Eiffel Tower. SUDDENLY, the whole plane shook! Maybe it was just turbulence, but it got you thinking about something wayyy more depressing. What if you had to jump off the airplane? Would you be able to get a parachute? WOULD IT EVEN WORK? Worry not, and brace yourself for a skydiving revelation as we go deeper into the findings of a recent study, the PARACHUTE trial, which showcases the truth behind the effectiveness of parachutes in preventing deadly free-falls.

The Parachute Paradox: An Unexplored Territory

Parachutes have long been thought of as indispensable for individuals jumping from an aircraft, their use grounded in obvious common sense. Yet, remarkably, a randomized controlled trial (RCT) had never been conducted to see whether this widely accepted belief holds – until now.

Now, just to catch you up: an RCT is a method in which participants are randomly assigned to either an experimental or a control group. This ensures a balanced distribution of participants between the groups and allows for an unbiased comparison of outcomes between them.
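As a toy illustration of that random assignment (the participant labels are invented; the real trial did not publish a roster), the trial's 92 jumpers could be split like this:

```python
import random

# Illustrative sketch of RCT-style random assignment (my example,
# not the trial's actual procedure or data).
def randomize(participants, seed=42):
    """Shuffle participants and split them into two equal arms."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

jumpers = [f"jumper_{i:02d}" for i in range(92)]
parachute_arm, control_arm = randomize(jumpers)  # parachute vs empty backpack
print(len(parachute_arm), len(control_arm))  # 46 46
```

Because neither the participants' traits nor the experimenters' preferences decide who lands in which arm, any difference in outcomes can be credited to the intervention itself.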

The PARACHUTE trial, a groundbreaking study conducted from September 2017 to August 2018, set out to uncover the truth about parachutes in skydiving. With 92 participants on board, the results just might make you rethink the role of parachutes in your next jump.

The Freefall Experiment: PARACHUTE Trial’s Discoveries

Study participants were divided into two groups – one equipped with a parachute and the other with an empty backpack. The primary focus was on the composite outcome of death or major traumatic injury upon ground impact.

Interestingly, it was observed that the use of a parachute did not significantly change this outcome: the rate remained at 0% for both the parachute and control groups.

However, before throwing away all your parachutes, it’s important to recognize the limitations of the study. The trial only included participants jumping from stationary ground-based aircraft, which is quite different from the high-altitude, high-velocity jumps associated with typical skydiving. Therefore, while the study suggests that parachutes might not be necessary for those leaping from small stationary aircraft on the ground, the conclusion for higher altitudes is still up in the air.

Challenges and Satirical Twists: Lessons from PARACHUTE

The PARACHUTE trial not only questioned the importance of parachutes but also highlighted challenges in conducting randomized controlled trials. The study faced recruitment difficulties tied to beliefs about the intervention's efficacy, and it tried to make the world of rigorous science a little more fun. It also showed the importance of carefully examining trial results rather than relying on a quick look at the abstract.

Takeaways

So, what should you learn from the PARACHUTE trial? First, the perception of parachutes as foolproof life-savers is not as straightforward as it seems. While the study does question the routine use of parachutes for jumps from stationary aircraft, applying these findings to higher altitudes is probably not the way to go.

Like any scientific study, it's crucial to go beyond the headlines and delve into the nuances. The PARACHUTE trial, while funny, encourages all of us to process every piece of information critically. The ongoing discussion about the need for parachutes in skydiving may not be settled, but this study certainly introduced a unique perspective to the debate.

Conclusion

The findings of this study show only that you can trust jumping off an airplane without a parachute from a height no greater than that of a stationary one on the ground. So, the next time you think about leaping from an airplane, consider what you learnt today. After all, in high-adrenaline situations, the decision to freefall with or without a parachute might not be as clear-cut as it seems.

Reference: Yeh, R. W., et al. (2018). Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial. BMJ, 363, k5094. Available at: https://www.bmj.com/content/363/bmj.k5094

Find yourself an alien friend! 

Do you feel lonely in this vast universe? Hoping that the twinkling star you’re gazing upon is someone else’s sun? Astronomers are working hard to uncover the mysteries of our universe and maybe, one day, find us an exoplanetary pen pal.  

Firstly, where would we find these far-away friends? Most likely not on a star, or an asteroid, and especially not in a black hole (although who knows?). Perhaps on a planet much like our own – but many, many lightyears away: an exoplanet. An exoplanet is a planet outside our solar system, the nearest one being "only" about 4.2 lightyears away. Great. Now, let's embark on our search!

One way to do it is the radial velocity method. No matter how big a star is, it is still influenced by its planets' gravity. By observing the wavelengths of the light coming from the star, scientists can tell if a planet is dancing around it, as the slight wobble of the star makes its light shift between blue and red periodically. They can even determine the planet's mass and how fast it orbits the star just by looking at the waves!


(Image: ESA)
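That periodic blue-red shift is just the Doppler effect. Here is a quick sketch using the standard non-relativistic formula, Δλ/λ = v/c; the numbers are illustrative and not from the article:

```python
# Doppler shift behind the radial velocity method (standard
# non-relativistic formula; the example numbers are illustrative).
c = 299_792_458.0          # speed of light, m/s
rest_wavelength = 656.3    # H-alpha spectral line, nm

def shifted_wavelength(v_radial):
    """Wavelength seen when the star recedes at v_radial m/s (+ = away)."""
    return rest_wavelength * (1 + v_radial / c)

# Jupiter tugs the Sun back and forth at roughly 12.7 m/s,
# so the resulting wavelength shift is absolutely tiny.
print(shifted_wavelength(12.7) - rest_wavelength)  # ~2.8e-5 nm
```

Detecting a shift of a few hundred-thousandths of a nanometre is exactly why this method needs such exquisitely stable spectrographs.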

But the most successful way of finding these far-away planets so far has been the transit method – a whopping 77% of all exoplanets have been found with it! As a planet passes in front of its star, the star dims a tiny amount, just like a fly speeding around a light bulb. By monitoring the star's recurring decrease in brightness, which is called a "transit," scientists can identify an exoplanet.


(Image: NASA)
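The dimming has a neat back-of-the-envelope form: the fractional dip in brightness equals the planet-to-star area ratio. A small sketch of this standard geometry (not taken from the article):

```python
# Transit depth sketch: the fraction of starlight blocked equals
# (R_planet / R_star)^2 (standard geometry, illustrative radii).
R_SUN = 696_000.0     # km
R_JUPITER = 71_492.0  # km
R_EARTH = 6_371.0     # km

def transit_depth(r_planet, r_star=R_SUN):
    """Fractional dimming when the planet crosses the star's disc."""
    return (r_planet / r_star) ** 2

print(f"{transit_depth(R_JUPITER):.4%}")  # ~1% dip for a Jupiter
print(f"{transit_depth(R_EARTH):.4%}")    # ~0.008% for an Earth
```

An Earth-sized planet blocks about a hundredth of a percent of its star's light, which is why space telescopes with very steady photometry do most of the finding.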

So we have found planets – great! Over 5,000 exoplanets have been discovered, and now we just need to rule out the non-habitable ones. We can start with what we know: our own life. By examining Earth's circumstances, scientists have acquired many guidelines for what to look for: the planet's distance from its star, its atmosphere, and the presence of liquid water, among others. So how do we do it?

It's easier said than done, but we can start by looking for planets that lie within the so-called habitable zone, also known as the Goldilocks zone. These planets are in the sweet spot where it's not too cold and not too hot, but just right. Here, carbon-based life can survive; the star's radiation is not too high, and water can exist in its liquid form. But location is not all that's needed for life to thrive. Life as we know it needs a sufficient atmosphere, with gases such as oxygen, ozone, and methane – and, of course, water.


(Image: Penn State)

But unfortunately, finding exoplanets is far from easy. It requires intricate calculations, a mix of methods, and a fair amount of waiting; in the end it is costly, and our technology is quite limited. Space is vaster than we could ever fathom, and even examining the closest exoplanets is a colossal challenge. New exoplanets are found almost daily, but the crucial question of whether they are habitable remains mostly unanswered. Most of the planets found are not rocky like Earth; they are gassy, like Jupiter – which we, or anybody else, would find inhospitable.

When searching for habitable planets, we look for Earth-like circumstances, as the only life we know is on our green and blue planet. Searching through and determining the properties of these thousands of exoplanets is time-consuming and, simply put, super hard. So while you may not have an alien pen pal yet, we can hope that someday our descendants will – for now, you'll have to find yourself a new friend here on Earth, maybe even in this comment section!
 

Name: Fiia Virtanen 

Source: Ghezzi, L. (2023). The search for habitable planets. Revista Mexicana de Astronomía y Astrofísica Serie de Conferencias (RMxAC), 55, 10-14. https://www.astroscu.unam.mx/rmaa/RMxAC..55/PDF/RMxAC..55_LGhezzi-II.pdf

 

IoT – the breathing air of our lives

The rise of the Internet of Things (IoT) has brought forth a connected world where devices communicate with ease, transforming the way we live and interact. From smart homes to industrial automation, IoT’s application seems boundless.

By 2025, there could be 75 billion IoT devices connected worldwide. (That’s 10 times the human population!)

Many IoT applications demand incredibly low latency; especially those involved in critical tasks like autonomous vehicles or remote surgeries. This means that data must be transmitted and processed almost instantly. For example, in autonomous vehicles, sensors need to detect and react to changes in milliseconds to ensure safety.

The speed at which IoT systems process and act upon data is crucial for their effectiveness in various real-time applications.

However, there seems to be a challenge holding this technology back from performing at its full potential: energy efficiency.

Researchers fight back with a revolutionary algorithm called the Node-Level Energy Efficiency Protocol (NLEE), which could redefine the efficiency standards for all devices.

 

The Energy Quandary

IoT devices operate on limited power sources, making energy efficiency crucial for them to function sustainably.

Think of when your phone is in low-battery mode. It optimizes power for essential tasks only.

Idle listening, unnecessary data transmissions, inefficient processing… They all lead to wastage of precious energy resources.

 

A Beacon of Hope

The key features of the NLEE algorithm are:

  1. Dynamic Power Management: The protocol smartly controls how much power a device uses by paying attention to when it’s active or idle. This helps save energy during idle periods without compromising responsiveness.
  2. Efficient Data Transmission: It’s like a clever traffic controller for data. It sends only the important data, skips information that isn’t needed, and compresses what remains into smaller packets. This cuts the amount of energy used when sending information.
  3. Optimized Processing: It’s a system that makes sure a device works in a way that doesn’t waste power. It organizes tasks so they use less energy, keeping the device running longer without needing to recharge.
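The paper's full protocol isn't reproduced here, but the first feature can be illustrated with a toy duty-cycling calculation. The power figures below are assumed values of my own, not measurements from the NLEE paper:

```python
# Toy model of dynamic power management (my illustration, not the
# NLEE authors' code): a node sleeps between short listen windows
# instead of idle-listening continuously. Power draws are assumed.
ACTIVE_MW, IDLE_MW, SLEEP_MW = 60.0, 20.0, 0.1  # milliwatts

def avg_power(duty_cycle):
    """Average draw when the radio is awake only duty_cycle of the time."""
    return duty_cycle * ACTIVE_MW + (1 - duty_cycle) * SLEEP_MW

always_idle = IDLE_MW            # naive node: idle-listens around the clock
duty_cycled = avg_power(0.01)    # managed node awake 1% of the time
print(always_idle / duty_cycled) # a roughly 29x energy saving
```

Even with made-up numbers, the shape of the result is the point: waking rarely and sleeping deeply is where most of a battery-powered node's lifetime comes from.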

Implications for the Future

  1. Environmental Impact: Reducing energy consumption in IoT devices not only prolongs their operational life, but also contributes to environmental conservation by minimizing carbon footprints.
  2. Extended Device Lifespan: This protocol could potentially extend the lifespan of IoT devices by optimizing energy usage, reducing the frequency of battery replacements, and lowering maintenance costs.
  3. Industrial Revolution: Industries heavily reliant on IoT, such as manufacturing and healthcare, stand to benefit from increased operational efficiency and reduced energy costs.

Conclusion

As researchers and industries unite to refine and implement this protocol, we step closer to a more connected, efficient, and sustainable world of IoT.

The journey towards optimal energy efficiency in IoT devices has just begun, and the NLEE algorithm serves as a guiding light, illuminating a path towards a brighter, more sustainable future.

 

Written by Kokoro Horiuchi.

Source:

Vellanki, M., Kandukuri, S. P. R., & Razaque, A. (2016). Node level energy efficiency protocol for internet of things. https://www.longdom.org/open-access/node-level-energy-efficiency-protocol-for-internet-of-things-2376-130X-1000140.pdf

Are fractals the cure to sadness?

People today experience higher levels of stress and depression than in the past. A common treatment for these problems (besides going to the therapist) is going outside, preferably to green, natural spaces. But why would you go for a walk through the woods rather than stroll down a city street? According to a new study, the answer might be fractals.

A fractal is a fragmented geometric figure that can be divided into parts so that each part is a miniature copy of the whole. Fractals can be found all over nature: trees, lightning, snowflakes, clouds, cauliflower and coastlines are some examples.

It has previously been shown that nature has a soothing effect on the human mind. People experience less stress when they’re in, or able to look at, green spaces. Patients have been found to recover quicker from surgery when given hospital rooms with windows looking out on nature instead of urban environments. And since nature is full of fractals, the article assumes there might be some correlation between fractals and peace of mind.

Using Visual Attention Software (VAS), which produces a color heat-map of where the eye spends its time when viewing a picture, the researchers concluded that people find fractals more easily and that fractals held the participants’ attention for longer.

When it comes to cities and modern architecture, fractals haven’t been incorporated into the design. Modern architecture follows the principle of function over form: the purpose of a building is the starting point for its design, with ornaments and decorations taking a step back or being overlooked completely. Urban environments are heavy on box-shaped buildings, simple corridors and windowless cubicles. As a result, buildings lack complexity.

An attribute that fractals have is dimension. If a fractal’s dimension has a small value, the fractal is a simpler one; if it has a large value, the fractal is complex. People are drawn to fractals that are more intricate, and therefore prefer fractals with a medium to high dimension. Since modern buildings lack complexity, they sit low on the dimension scale.
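To make "dimension" concrete, the similarity dimension D = log N / log(1/s) of a shape built from N copies of itself, each scaled down by a factor s, can be computed in a couple of lines (standard mathematics, not taken from the article):

```python
import math

# Similarity dimension D = log(N) / log(1/s) for a self-similar shape
# made of N copies, each scaled down by factor s (standard math).
def similarity_dimension(copies, scale):
    return math.log(copies) / math.log(1 / scale)

# Koch snowflake edge: 4 copies at 1/3 scale -> between a line and a plane.
print(similarity_dimension(4, 1/3))  # ~1.26
# A plain square: 4 copies at 1/2 scale -> exactly 2, no fractal richness.
print(similarity_dimension(4, 1/2))  # 2.0
```

The Koch edge's non-integer value of roughly 1.26 is exactly the kind of "medium" dimension the study associates with visual interest, while the square's clean 2.0 is the box-shaped building of the fractal world.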

The article suggests that if we want to improve the living standards in the cities, we must change the way we approach architectural design and incorporate more fractals. Classical European architecture is an example of a style with a high dimension value. In addition, making room for more parks and green spaces will improve the quality of life, thanks to the prevalence of fractals in nature.

It is no secret that living in a concrete jungle tends to make people unhappy. Cities are crowded, loud and polluted, which makes for an unpleasant experience. From my point of view, saying that the absence of fractals is the cause of the stress that we experience in our daily lives overestimates the influence that fractals have on us. Correlation doesn’t equal causation. There are other factors that need to be taken into consideration when we discuss the improvement of our cities. Fractals aren’t the be-all and end-all solution to our problems.

 

Name: Vlasin Teodora-Maria

Source: Brielmann, A. A., Buras, N. H., Salingaros, N. A., & Taylor, R. P. (2022). What Happens in Your Brain When You Walk down the Street? Implications of Architectural Proportions, Biophilia, and Fractal Geometry for Urban Science. Urban Science, 6(1), 3. https://doi.org/10.3390/urbansci6010003

Crippling poverty, a skill issue?

Óscar Martín Koskimies


It is clear that Mother Nature favored the Gaussian distribution when making us: practically all our measurable variations follow it, from physical aspects such as height and weight to intelligence as measured by IQ – imperfect, but still generally representative. Since we are told in the West that we live in a meritocratic society, we might be mistaken into thinking that wealth would follow a Gaussian distribution too. Instead, it follows the Pareto principle: an 80:20 ratio, with 80% of the world's wealth owned by 20% of the population. In recent years these numbers have become even more extreme: 8 individuals hold the same wealth as the bottom half of the population – 3.6 billion people.

We weren’t created equal, but nowhere in the human population can we naturally find disparities of this scale. Some people might be born smarter than the rest, above the mean IQ of 100, but no one has an IQ of 1,000 or 10,000; some might work harder than the rest, but no one works a billion times more than anyone else.

Still, we adopt meritocracy as our basis for deciding whom we give fame, honor, and admiration. We infer a person’s individual talent from their outcomes: we look at who is most successful, make them our idols, and look up to them as role models to follow. Governments give grants to the most “successful” companies and save “successful” banks from closing down. And of course, if talent and hard work lead to riches, those who lack riches must lack work and talent.

In their recent work “Talent vs Luck: the Role of Randomness in Success and Failure”, Alessandro Pluchino, Alessio Emanuele Biondo, and Andrea Rapisarda looked into which factors might be most indicative of success. To do so, they created an agent-based model, the “Talent versus Luck” (TvL) model, which simulates individuals’ career evolution over 40 years. Each individual receives a normally distributed “Talent” number between 0 and 1, meant to represent an amalgamation of intelligence, social skill, hard work, determination, risk-taking, and so on. The individuals, all starting with the same initial wealth, are placed in a virtual environment where they encounter lucky events (in green), which double their wealth if they have the necessary talent (their Talent number being higher than a randomly generated number between 0 and 1), as well as unlucky events (in red), which halve their wealth.

The first run, with 1000 individuals, already gave results indicative of what was to come. The wealth distribution followed the Pareto principle almost perfectly: the 20 most successful individuals held 44% of the total capital. The individual who accumulated the most wealth even had a lower Talent value than the poorest individual: 0.61 versus 0.74.
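The dynamics described above can be re-implemented in a few lines. This is a minimal sketch of my own; the event frequency and the lucky/unlucky split are illustrative guesses, not the paper's calibrated parameters:

```python
import random

# Minimal sketch of TvL-style dynamics (illustrative parameters,
# not the paper's): talent ~ N(0.6, 0.1) clipped to [0, 1], equal
# starting capital, random events over 40 years of 6-month ticks.
def simulate(n=1000, steps=80, p_event=0.1, seed=1):
    rng = random.Random(seed)
    talent = [min(1.0, max(0.0, rng.gauss(0.6, 0.1))) for _ in range(n)]
    capital = [10.0] * n
    for _ in range(steps):
        for i in range(n):
            if rng.random() < p_event:           # an event hits this agent
                if rng.random() < 0.5:
                    capital[i] /= 2              # unlucky: wealth halves
                elif rng.random() < talent[i]:
                    capital[i] *= 2              # lucky, seized via talent
    return talent, capital

talent, capital = simulate()
top_quintile = sorted(capital, reverse=True)[:200]
print(sum(top_quintile) / sum(capital))  # share of wealth held by the top 20%
```

Because wealth changes multiplicatively while talent stays roughly Gaussian, even this crude version concentrates capital in a small tail – the mechanism behind the Pareto-like outcome the paper reports.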

When graphing the capital earned against individual talent, only a slight tendency toward talent can be seen:

While if we look at the number of lucky and unlucky events experienced by each individual, a clearer correlation can be seen:

And looking individually at the wealthiest and poorest individuals:

After running the simulation 100 times – 100,000 simulated individuals in total – the individual with the most capital of all had a talent score of exactly 0.60; their success was the result of a string of lucky events in their work life, which led to multiplicative growth. If this individual were in our world, their great success would make them admired by all for their talent, even though we know it was mainly a result of taking advantage of the lucky events they encountered.

While this data might leave one feeling hungry for the rich, we shouldn’t be so eager to reach for the pitchfork: the narcissism of the rich is caused by the same meritocratic view that leads people to call the disenfranchised “lazy bums”. We are victims of the same system; the rich became who they are because everyone around them called them geniuses for their success. We should all broaden our perspective on what leads to wealth or the lack of it. Those with great wealth should feel grateful for the opportunities presented to them and be more empathetic toward others who might have been just as talented and hard-working but weren’t lucky enough to be presented with the same opportunities. And if we are to make a more equally distributed world, we should also understand that had we encountered one lucky event after another, we too might be mistakenly looking down on others. We are all much closer to each other than our wealth would suggest.

  1. Pluchino, A., Biondo, A. E., & Rapisarda, A. (2018). Talent versus luck: The role of randomness in success and failure. Advances in Complex Systems, 21(03n04), 1850014. https://doi.org/10.1142/s0219525918500145

The science of protein distribution: How to scientifically get abs!


Muscle growth involves more than just hitting the weights at the gym. The most important part of muscle growth is the recovery process. When torn muscle fibers are rebuilt, many factors influence the rate at which muscle is synthesized, including sleep, hormone levels, and nutrition. Nutrition plays a crucial role, the key aspect being protein intake throughout the day. In this blog post, I will focus on the effect of protein on muscle growth and the optimal way to distribute protein intake. This is relevant to me as I occasionally like to go to the gym in my free time.

 

The role of protein in muscle growth:

 

Protein is made up of amino acids, known as the building blocks of the body, which are essential for synthesizing new proteins. These amino acids fuel a process called muscle protein synthesis (MPS), in which the body builds new muscle tissue and replaces damaged proteins. Without protein, your body has no raw material to support the growth of your muscles.

 

The “Muscle Full” Hypothesis:

 

There has been a prevailing belief that the body can only absorb a limited amount of protein in a single meal: the "muscle full" hypothesis. According to this idea, any protein consumed beyond a certain threshold in a single sitting would be wasted, either turned into fat or simply excreted from the body. The most rigorous work in this area comes from Morton et al., who concluded that 0.4 g/kg/meal would optimally stimulate MPS. However, there are several problems with this research. Most importantly, these estimates are based on rapidly digesting protein sources, unlike normal dietary protein; ordinary food that combines protein with other macronutrients is believed to delay protein absorption. And while the research shows that consuming protein doses higher than 20 grams results in greater amino acid oxidation, not all of the additional ingested amino acids are oxidized; some are still used for tissue-building purposes. The article also mentions later that its findings were estimated means for maximizing MPS, and that the ceiling can be as high as 0.60 g/kg for some older men and 0.40 g/kg for some younger men. Even so, the practical implications of this research remain speculative, since people do not eat these specific lab-based protein sources, which are not digested the same way as everyday protein.

 

Total Daily Protein Intake:

It is widely believed that the total daily protein intake for maximizing muscle growth from weight lifting is around 1.6 g/kg/day. This should not be viewed as a perfect amount, or a hard limit beyond which protein is either wasted or used: a recent research paper on protein supplementation reported an upper 95% confidence interval of 2.2 g/kg/day. New research on the optimal daily amount is coming out all the time, and although researchers keep trying to find a single number, it is commonly believed that there isn't one; it differs from person to person based on factors including age, caloric intake, height, gender, and activity level.
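Putting the two figures quoted above together (roughly 1.6 g/kg/day in total, spread over meals of roughly 0.4 g/kg), a back-of-envelope calculation gives a simple daily plan. The function name and defaults here are my own illustration, not from the study:

```python
# Rough per-day and per-meal protein targets based on the figures quoted
# above (~1.6 g/kg/day total, ~0.4 g/kg per meal). These are population-level
# estimates from the research discussed, not personalised advice.

def protein_plan(body_weight_kg, daily_dose=1.6, meal_dose=0.4):
    """Return total daily grams, grams per meal, and a suggested meal count."""
    total = body_weight_kg * daily_dose
    per_meal = body_weight_kg * meal_dose
    # Spreading the daily total across ~0.4 g/kg meals gives the meal count.
    meals = round(total / per_meal)
    return total, per_meal, meals

total, per_meal, meals = protein_plan(80)
print(f"{total:.0f} g/day, ~{per_meal:.0f} g across {meals} meals")
# → 128 g/day, ~32 g across 4 meals
```

For an 80 kg lifter, the two numbers conveniently line up with the common advice of about four protein-containing meals per day.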

 

Although the two research areas partly contradict each other, both articles share the general idea of spreading a good amount of protein throughout the day, so that your body constantly has the resources to rebuild your muscles. Also, it is physically impossible to get abs in 24 hours.

 

Name: Oliver Raffone

Source:

Schoenfeld, B. J., & Aragon, A. A. (2018, February 27). How much protein can the body use in a single meal for muscle-building? Implications for daily protein distribution. Journal of the International Society of Sports Nutrition. https://link.springer.com/article/10.1186/s12970-018-0215-1

Unlocking the Science of Perfect Belly Flops: A Dive into Hydroelastic Forces

We have all experienced the less-than-graceful impact of a belly flop, and scientists are delving into the physics behind these slamming forces during water entry. The research article "Slamming forces during water entry of a simple harmonic oscillator" (J.T. Antolik, J.L. Belden, N.B. Speirs and D.M. Harris) sheds light on hydroelastic factors and how you might use them to perform less painful belly flops.

What is In a Splash?

When a blunt body hits the air-water interface, it creates substantial hydrodynamic forces, a phenomenon familiar to those who’ve attempted a graceful dive only to end up with a belly flop. While it’s amusing when it happens at the pool, these slamming forces pose serious challenges for the design of structures like ships and seaplanes.

Dive Into the Study!

The research team systematically investigated the water entry of a simple harmonic oscillator, perhaps the simplest scenario for studying these forces. Contrary to common intuition, they found that making the point of impact (impactor) “softer” doesn’t always reduce peak impact force. The transition from force reduction to force amplification is determined by a critical ‘hydroelastic’ factor that relates hydrodynamic and elastic time scales.

The Key Player: Hydroelastic Factor

The hydroelastic factor becomes the linchpin of the study. It is the ratio of the time scale of hydrodynamic loading to the free fundamental oscillation period of the elastic structure, and it dictates whether the impact force increases or decreases. If the elastic mode starts oscillating before the hydrodynamic force decays, the impactor experiences an increased force. In layman's terms, this implies that you should flex your abdominal muscles before impact and keep them flexed for some time after it, to reduce the movement of your stomach, which in this case is the elastic oscillator.
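As a toy illustration of that ratio, we can model the belly as a mass on a spring. The scalings used here (loading time ~ nose radius divided by impact speed, free period of a spring-mass system = 2π√(m/k)) are my own simplifying assumptions, not the paper's exact definitions:

```python
import math

def hydroelastic_factor(nose_radius_m, impact_speed_ms, mass_kg, stiffness_n_per_m):
    """Ratio of an assumed hydrodynamic loading time scale to the free period."""
    t_hydro = nose_radius_m / impact_speed_ms                       # loading time scale
    period = 2 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)   # free oscillation period
    return t_hydro / period

# Same drop, same geometry, only the "belly stiffness" changes. A stiffer
# impactor has a shorter free period, so the factor grows: per the study's
# summary above, once the elastic mode can oscillate before the load decays,
# the force may be amplified rather than reduced.
soft = hydroelastic_factor(0.05, 3.0, 1.0, 1e3)
stiff = hydroelastic_factor(0.05, 3.0, 1.0, 1e6)
print(f"soft: {soft:.3f}, stiff: {stiff:.3f}")
```

The point of the toy numbers is only to show how stiffness moves the factor across regimes; the actual regime boundaries come from the paper's measurements.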

Perfect Belly Flop Form?

So, what does this mean for perfecting the art of the belly flop? It turns out that the ideal form might depend on finding the sweet spot between impactor stiffness, speed, and nose geometry. Impactor stiffness here is the "stiffness" of one's belly; it turns out we are quite mushy, so you should aim for the hardest abs possible. Speed simply correlates with the height you jumped from. As for nose geometry, which refers to the area of the surface hitting the water, this part reveals that belly flops are in fact not the most optimal way to land in water (shocking, we know). The requirements of high impactor stiffness and small nose geometry almost immediately rule out the belly flop in favour of landing feet first, which meets these requirements far better and almost seems optimised for entering water. You would need very low body fat to approach the stiffness described in the article, which implies a very flat stomach, while a more rounded stomach has the advantage in nose geometry, creating a sort of belly flop paradox where improving one factor weakens the other. The study highlights that force reduction or amplification is highly sensitive to these factors, giving divers and engineers alike some food for thought.
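The speed factor above follows directly from free fall: ignoring air resistance, jumping from height h gives an entry speed of v = √(2gh), so even modest heights produce fast impacts:

```python
import math

def entry_speed(height_m, g=9.81):
    """Water-entry speed from free fall: v = sqrt(2*g*h), no air resistance."""
    return math.sqrt(2 * g * height_m)

# Illustrative drop heights: pool edge, low board, high platform.
for h in (1, 3, 10):
    print(f"{h} m drop -> {entry_speed(h):.1f} m/s")
```

Since the slamming force grows with entry speed, the same belly-flop form that merely stings from the pool edge becomes genuinely punishing from a diving platform.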

Beyond the Belly Flop: Engineering Implications

While belly flops provided a quirky context for the study, the implications go far beyond belly flopping antics. Understanding hydroelastic forces is crucial for designing structures in naval and aerospace engineering. By considering these factors, engineers can optimize the impact resilience of structures such as ships and airplanes, making them safer to use.

Conclusion

In conclusion, next time you find yourself mid-belly flop, remember: it’s not just a flop but a delicate interplay of hydrodynamic and elastic forces, as revealed by the great scientific belly flop research!

 

Sources used:

Antolik, J. T., Belden, J. L., Speirs, N. B., & Harris, D. M. (2023, November 6). Slamming forces during water entry of a simple harmonic oscillator. Journal of Fluid Mechanics. https://www.cambridge.org/core/journals/journal-of-fluid-mechanics/article/slamming-forces-during-water-entry-of-a-simple-harmonic-oscillator/4E80C056B7CF95B96714AB11BDF938DF

Written by Elia Tallqvist