Behind every successful neuron there are nine glia

Let me take you on a backstage tour of the complex and dynamic machinery we call the brain. In neuroscience we tend to shine the spotlight on neurons, but what is a neuron without friends? By volume, only about 10% of the brain consists of neurons; the rest is glia. Increasing interest is being turned towards glia, since looking at neuronal synapses and recording minis cannot alone explain neuronal function. As the system gets more complicated, we need to take more (f)actors into account.

Indeed, the elements of an excitatory synapse are, in addition to the pre- and postsynaptic terminals, one part astrocyte wrapping and one part visiting microglial process. This marriage of four was coined the “quad-partite” synapse by Dorothy P. Schafer and colleagues in their 2013 review in Glia. We now know that astrocytes not only remove and recycle neurotransmitters but also actively modulate the function of synapses. In June this year, at the Cold Spring Harbor Asia glia meeting, Pierre J. Magistretti gave a very interesting talk on this topic. Astrocytes nurse and protect neurons by maintaining the blood-brain barrier and providing neurons with glucose and lactate, a fuel kept at a steady-state level in the parenchyma and produced from glycogen under, for example, acetylcholine stimulation. Lactate has recently been shown to be more than just fuel: it evokes signalling in neurons that modulates NMDA receptors and turns on plasticity genes. On the topic of brain metabolism, Magistretti also pointed out that fMRI measures the activity of astrocytes, not directly that of neurons.

Astrocytes buffer the environment that surrounds neurons. This is of pivotal importance for the neuronal membrane potential to function normally, but the buffering has further implications as well. Neuronal release of potassium is a sign of activity that does not pass astrocytes unnoticed. Astrocytes maintain homeostasis and osmotic pressure, but potassium also functions as a signal to astrocytes that energy expenditure is increased in hungry neurons. In Magistretti’s lab, Suzuki et al. have previously shown that lactate is required for LTP and long-term memory consolidation.

As opposed to astrocytes, which originate from the same stem cells as neurons, microglia colonize the mouse brain from the periphery. This colonization starts as early as embryonic day 9.5, around the time when neuron precursors begin to differentiate and the whole architecture of the brain takes shape. It has been general knowledge for decades that during development a huge excess of synapses is formed and later removed. The mechanism, however, remained a mystery until focus turned from neurons to the neighbouring cells. Previously considered quite boring and dormant, microglia were seen in a new, two-photon light. The view of these little cells in the intact brain was game-changing. In 2005, Davalos et al. and Nimmerjahn et al. saw that instead of being dormant, microglia actively scan the unchallenged brain, their processes browsing through synapses at impressive speed. This raised an obvious point: they must be doing something important, since this flamenco is energetically very expensive. In the lab of Beth Stevens, a few brave and elegant studies showed synaptic elements in the bellies of microglia. Since microglia are phagocytic cells, they are uniquely positioned to harness this machinery for the targeted eating of weak synapses.

This was the inception of intense research, rapidly laying down the pieces of how microglia inspect synapses and, in a complement-dependent manner, eat excess synapses with a little help from astrocytes, which throw TGF-beta into the equation. Suddenly the door was open to a novel and very exciting field of neuroscience, offering a buffet of big discoveries to be made. Parkhurst et al. continued on this accelerating train and shed light on the other side of the story: microglia also strengthen learning-induced synapses by rewarding them with BDNF.

Oligodendrocytes deserve some credit as well. The brain of a newborn lacks myelin almost completely, while up to 50% of the adult brain consists of myelinated fibres. In humans, the amount of myelin peaks at around the age of 30, when the forebrain receives its final sheaths. “Oligodendrocytes are specialized in producing internodal myelin sheaths and they are very good at it”, says the charismatic Charles ffrench-Constant, one of the big oligo-sharks at the European glia meeting in Bilbao earlier this summer. These cells can independently match the thickness of the sheath to the axon to an impressive degree, close to the theoretical optimum for energy and space efficiency. By varying the length of the internodal myelin, the conduction velocity of the action potential is strongly regulated. Internodal number and length therefore have a critical role in regulating neuronal signalling at the network level. This is implicated, for instance, in coincidence-detector systems such as the processing of auditory inputs.

Like all other glial cells, oligodendrocytes are plastic and respond dynamically to changes in their surroundings. Various factors affecting myelin are regulated by the activity of the neuronal system, as demonstrated by myelin loss in mice housed in isolation and a myelin increase in mice housed in an enriched environment (Liu et al.). Oligodendrocytes are certainly more interesting than they might seem at first glance. Hot topics in the field are, unsurprisingly, the regulation of adaptive myelination and myelin plasticity. I, at least, am personally curious to hear more of the story about learning-induced myelination.

With this post, I hope I have managed to highlight the importance of considering the brain as a tight collaboration of all its components. This collaboration spans the whole hierarchy, from molecular interactions in the synapse up to the circuit level. When President Obama launched the BRAIN Initiative, the challenge was grand and the goal was to “improve the lives of […] billions on this planet”. However, maybe we also need to consider a few more players in the brain in order to gain a complete understanding of how things really work, in both health and disease.

Lastly, I would like to thank my supervisor, Prof. Carl G. Gahmberg, for giving me the opportunity to work with the challenging but very beautiful microglia.



Sonja Paetau, M.Sc., PhD student
Doctoral Program Brain & Mind (B&M),
University of Helsinki, Department of Biosciences


Alzheimer’s disease – the challenges of finding a cure

When my friends and relatives ask me about the topic of my research, I first mention that I work in a research group specializing in the preclinical study of Alzheimer’s disease. Most of them have heard of Alzheimer’s disease or are affected by it, and I often face the same questions: How can I prevent getting it myself? Why are so many elderly people affected by it? Is there a cure for Alzheimer’s disease?

With slight unease I usually provide an answer listing the factors that either lower or increase the risk of developing Alzheimer’s, explain the basic view of the disease progression, and finally end my answer with the rather blunt but honest statement that there is currently no cure. This realization often leaves my audience, myself included, somewhat deflated. But why is it that over a hundred years after the first description of the disease by Dr. Alois Alzheimer, and thousands of subsequent studies later, there still isn’t a disease-modifying treatment? Maybe a small review is in order.

Alzheimer’s disease is the most common cause of dementia in the elderly, and age is the primary risk factor. The risk of developing the disease doubles every five years after the age of 65, being roughly 5% at 65 years and 40% at 80 years. The disease is thought to start near the medial temporal lobe, where the memory center, the hippocampus, is located, and then to spread outwards to the perimeter of the brain, the cortex (see Picture 1). As a result, the first detectable behavioral symptoms of the disease are often memory-related problems. The death of neurons, the brain cells responsible for receiving, processing and transmitting the signals needed for our thought processes and bodily functions, and the loss of the connections between them correlate with the spread of the pathology and symptoms.
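The doubling rule above amounts to a simple exponential model. As a quick sanity check of the quoted figures (an illustration only, not an epidemiological model; the function name and parameters are my own):

```python
# Illustrative sketch of the doubling rule quoted above:
# risk ~5% at age 65, doubling every five years thereafter.
def alzheimers_risk_percent(age, base_risk=5.0, base_age=65, doubling_years=5):
    """Rough risk (%) implied by the doubling rule, for age >= 65."""
    if age < base_age:
        raise ValueError("the rule applies from age 65 onwards")
    return base_risk * 2 ** ((age - base_age) / doubling_years)

for age in (65, 70, 75, 80):
    print(f"age {age}: ~{alzheimers_risk_percent(age):.0f}% risk")
```

Consistent with the figures above: the 5% risk at 65 doubles three times to reach 40% by 80.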


Picture 1: The progression of Alzheimer’s disease pathology and symptoms

And what causes this progressive brain damage? Currently there are mostly just educated guesses about when and what exactly starts the pathology, but the main culprits are thought to be the proteins amyloid-beta (Ab) and Tau. Ab is a fragment of the amyloid precursor protein (APP) and is normally cleared from the brain, but in Alzheimer’s disease it accumulates to form deposits called amyloid plaques (not to be confused with plaque formation on the teeth!). Even before forming plaques, Ab disrupts neuronal signaling and causes toxicity. Tau, on the other hand, is a structural component of neurons that normally stabilizes microtubules, which form part of the cell’s backbone called the cytoskeleton. In Alzheimer’s disease Tau is modified by enzymes called kinases, causing it to dissociate from microtubules and form disruptive tangles that lead to further neuronal toxicity and dysfunction. Ultimately the affected neurons die due to the accumulation of stress and damage. Familial mutations in the Ab, Tau and associated genes are known to cause an aggressive Alzheimer’s pathology and an earlier onset of the disease (before 65 years of age).

Here comes the major dilemma of Alzheimer’s disease: the deposition of the aforementioned plaques and tangles begins years, even decades, before the onset of the first detectable symptoms (see Picture 2). At this stage over 50% of the brain cells in specific regions might already be dead or beyond rescue. The regeneration of brain cells, especially of neurons, is very limited, but the brain has an enormous adaptive capability to cope with damage by rewiring. This can efficiently mask the pathological onset of Alzheimer’s disease, which may precede the first symptoms by as much as 20 years! MCI, or mild cognitive impairment, is considered a stepping-stone to Alzheimer’s, but not everyone with MCI will develop the disease, since memory also suffers during the normal aging process. This presents the first major hurdle in curing Alzheimer’s disease: how to find future Alzheimer’s patients before any visible symptoms or widespread pathology have developed?



Picture 2: Progression of the pathology and biomarker magnitude in Alzheimer’s disease (Jack CR Jr, Knopman DS, Jagust WJ, et al, 2010 Lancet)

Luckily, the search for clues, or biomarkers, of early Alzheimer’s pathology is well underway. Techniques to detect Ab and Tau in blood and CSF (cerebrospinal fluid) and by brain imaging are developing fast, but they are costly and the reliability of pre-diagnosis is still low. Several nationwide initiatives have been launched to find reliable markers of early pathology in blood. One such endeavour is the Australian Imaging, Biomarker and Lifestyle (AIBL) study, which has yielded valuable insight into the correlation between Ab accumulation in the brain and biomarkers in blood. Such studies could in the future provide an inexpensive and accurate diagnostic tool for assessing Alzheimer’s risk.

The second major challenge is to find a disease-modifying treatment. Even the most recent clinical trials aiming to reduce Ab with an antibody have failed to produce a significant effect in patients suffering from mild-to-moderate Alzheimer’s disease. However, earlier intervention, targeted at the first stages of Ab and Tau pathology, might offer improved efficacy. The conclusions from these studies are fairly uniform: targeting or preventing the early pathological changes holds the best chance for an efficient disease-modifying treatment.

But until potent preventive treatments are developed, it is safest to follow the common formula for a healthy lifestyle: eat your nutrient-rich vegetables, pick the omega-3 rich fatty fish over the red meat, steer clear of excessive drinking and keep socially and physically active. To further decrease your risk of Alzheimer’s, educate yourself and challenge your brain with crosswords and Sudoku, even on holidays. And if you’re worried about your parents or grandparents, then simply make more time to see them. Perhaps treat them to a nice meal with salmon and a good bottle of resveratrol-rich New Zealand Pinot Noir, or maybe challenge them in a weekly round of golf. For those moments are surely ones that are hard to forget.

About Me:


Kai Kysenius, M.Sc., PhD student
Neuroscience Center, University of Helsinki

What’s real and what’s not

Neuroscience is today considered one of the hottest fields to work in. A simple PubMed search indicates that more than 70,000 neuroscience papers are published every year. The Times Higher Education placed neuroscience among the top 3 fields in terms of citation averages over the years 2000-2010. Neuroscience beats Space Science and (hold your breath) Computer Science by a big margin. This settles the issue for neurogeeks as to “What’s hot and what’s not?”

However, in this post I pose a different question to the reader: “What’s real and what’s not?”

I begin with a snippet from an article in The New Yorker, published about two years ago:

The craziness of the hypothesis was the point: Schooler knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect. “At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”—a standard statistical measure—“kept on getting smaller and smaller.”

The article goes on to explore the possible reasons why scientific findings get invalidated with time – a phenomenon with the rather innocuous name of the decline effect. One possible reason cited is publication bias, a preference among journal editors for articles with positive results.

While this has been recognized as a problem for some time now, the author concludes that the issue goes deeper than that. A recent, rather alarming trend among many researchers is what has been termed “significance chasing”. Quoting the article:

According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,”

Thus, while high-impact journals have a tendency to publish the findings that are least expected, the customary significance threshold remains the same even for these highly unlikely results. This has often resulted in highly publicized flukes which have negatively affected the field until proven wrong (which is itself difficult due to publication bias).
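The 5% threshold itself guarantees a steady trickle of false positives, even with perfectly honest analyses. A minimal sketch (pure noise with no real effect anywhere; the |z| > 1.96 cutoff approximates the familiar p < 0.05 boundary, and the sample and trial counts are arbitrary choices of mine):

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def looks_significant(n=30, z_cutoff=1.96):
    """One 'experiment' on pure noise: does the sample mean pass |z| > cutoff?"""
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    z = mean / (sd / math.sqrt(n))  # one-sample z statistic
    return abs(z) > z_cutoff

trials = 2000
false_positives = sum(looks_significant() for _ in range(trials))
print(f"{false_positives / trials:.1%} of null experiments reached 'significance'")
```

Roughly one experiment in twenty passes the test despite there being nothing to find; analyse the same data enough different ways and something “worthy” will eventually appear.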

On a related note, I find it extremely hard to believe studies which rely heavily on correlations to support their findings. Remember the age-old saying? Correlation does not imply causation. But there is more to it. In a witty blog post, Dean Abbott describes what is popularly known as the Redskins rule. In a nutshell, the outcome of a particular American football game was able to predict the outcome of the US presidential election with close to a hundred percent accuracy. This was even before the era of Nate Silver and his advanced predictive models. It is obvious that the correlation is spurious. Here, the problem lies not with the statistics but with the quality of the data itself.
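This is easy to reproduce: with only a handful of binary outcomes and many candidate predictors to mine, a near-perfect “rule” is expected by chance alone. A sketch (random coin-flip predictors against random election outcomes; the 18 elections and 10,000 predictors are hypothetical numbers chosen for illustration):

```python
import random

random.seed(0)  # fixed seed for reproducibility

N_ELECTIONS = 18       # a handful of binary outcomes, as in the Redskins rule
N_PREDICTORS = 10_000  # candidate 'rules' mined from pure noise

outcomes = [random.randint(0, 1) for _ in range(N_ELECTIONS)]

def matches(predictor, outcomes):
    """How many outcomes does a candidate predictor get right?"""
    return sum(p == o for p, o in zip(predictor, outcomes))

best = max(
    matches([random.randint(0, 1) for _ in range(N_ELECTIONS)], outcomes)
    for _ in range(N_PREDICTORS)
)
print(f"best random predictor matched {best}/{N_ELECTIONS} elections")
```

Search through enough meaningless predictors and one of them will “predict” nearly every election; the flaw lies in the data-mining, not in the arithmetic.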

If the Redskins rule sounds like a far cry from your everyday research, consider this article recently published in Nature. This puts a big question mark on many connectivity studies (largely based on correlations!) that have studied autism and related brain disorders. Quoting from the article:

But three studies published in 2012 have come to the same conclusion: head motion leads to systematic biases in fMRI-based analyses of functional connectivity [2, 3, 4]. Specifically, motion makes it appear as if long-range connections are weaker than they really are, and that short-range connections are stronger than they really are.

The authors further explain that since autistic children who are put into fMRI scanners are more likely to move, the findings from the connectivity studies may well be an artefact.

Adding another twist to the already known publication bias, a recent study concluded that certain brain areas receive more attention in higher-impact journals than other regions do. In an almost comical turn, the authors coin a term for the brain regions that receive less attention – “low impact voxels”. Quantifying their claims, they state:

Leading the way in ignominy was the secondary somatosensory area (Z = –4.4, P < 5 × 10⁻⁶), but the supplementary motor area was almost equally disgraced (Z = –4.25, P < 10⁻⁶). Researchers unfortunate enough to find activity in these regions can expect to be published in a journal with approximately half the impact of their most celebrated colleagues (mean impact factors of approximately 5 compared with approximately 9).

As the authors conclude, one may have simpler explanations for this trend. However, it is almost natural to think that this could indeed have influenced many researchers to focus on specific brain areas.

Brain regions that correlate positively (red) and negatively (blue) with journal impact factor

As I conclude, I return to my question. How do I, as a young researcher, know which of the 70,000 articles have found something real? Is a given finding real, or just the fantasy of a creative mind? Media attention is hardly something to go by, as a recent debate indicates. Data and code sharing can mitigate these shortcomings to some extent. At the end of the day, the onus lies with the individual investigator to critically question the findings and carefully examine the premise of an article before rushing to publication. I would suggest a simple talisman to refer to when in doubt – “Is this real or is it not?”

Many thanks to my supervisor, Prof. Lauri Parkkonen, for providing useful feedback and helpful comments.

About Me:

Mainak Jas, Masters Student
Machine Learning and Data Mining
Department of Information and Computer Sciences
Aalto University, School of Science

More about me:


Doctoral Program Brain & Mind (B&M) is a joint program between the University of Helsinki and Aalto University.

B&M is a multidisciplinary doctoral program which includes the following fields of neuroscience: cellular, molecular and developmental neuroscience, electrophysiology of cells and networks, neurobiophysics, neuropharmacology, neuroendocrinology, systems neuroscience, cognitive and social neuroscience, brain imaging, neuropsychology, computational neuroscience, complex networks, preclinical and translational neuroscience, neuropathology and brain diseases.

More information at

B&M doctoral students will post news and comments on neuroscience here.
In addition, a series of blog posts (in Finnish) has been started on a website administered by the Academy of Finland.