The Optimisation Adventure: Simulated Annealing Unveiled

Hello, algorithm enthusiasts! If you’re up against a big, complex optimisation problem, Simulated Annealing (SA) may be just the tool for you. Visualise SA as a huge bouncing ball on a quest for the most efficient solution, navigating a landscape of peaks and valleys to reach the treasure at the lowest point.

Our trusty sidekick SA, now 40 years old, was born in 1983 to tackle nonlinear problems, inspired by the annealing process in metallurgy. At the start of every journey to the lowest point, SA, the huge bouncing ball, leaps across different altitudes and obstacles. Everything is high at the beginning for our daring adventurer: the altitude, the energy, and the expectations. At high temperature it has enough energy to leap over immense mountains. As the temperature gradually drops, the intense jumps shrink and our adventurer becomes more selective, following the clearer path that emerges in the valleys…

The annealing schedule controls the temperature and so determines how much uphill movement SA allows. Choosing the right schedule is crucial: start with a high initial temperature that “melts” the system, and gradually reduce it as the search progresses. This balance lets SA explore extensively at the start while narrowing down to the optimal solution at the end.

The algorithm involves four key components: a representation of possible solutions, a generator of random changes, an evaluator for the problem functions, and an annealing schedule, which is a road map for temperature reduction during the search.
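To make the four components concrete, here is a minimal sketch in Python. It is my own illustration, not code from the referenced overview: the objective function, the geometric cooling rate `alpha`, and the step size are all illustrative choices.

```python
import math
import random

def simulated_annealing(objective, x0, t0=10.0, alpha=0.95, steps=5000, step_size=0.5):
    """Minimise `objective` from start x0 using a geometric cooling schedule."""
    x, fx = x0, objective(x0)          # representation of the current solution
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        # Generator of random changes: perturb the current solution.
        candidate = x + random.uniform(-step_size, step_size)
        fc = objective(candidate)      # evaluator of the problem function
        delta = fc - fx
        # Always accept improvements; accept uphill moves with prob exp(-delta/T).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= alpha                     # annealing schedule: reduce the temperature
    return best_x, best_f

# A bumpy one-dimensional landscape with many local minima.
random.seed(1)
f = lambda x: x * x + 10 * math.sin(3 * x) + 10
x_best, f_best = simulated_annealing(f, x0=8.0)
```

With the high starting temperature the ball roams widely; as `t` shrinks, only downhill moves survive.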

SA shows its brilliance on chaotic data because of its random search capability. One of the biggest issues that SA’s more traditional friends face is getting stuck in a local minimum, which means falling into a hole other than the lowest point and failing to get out of it. Our daring adventurer, however, randomly accepts challenges and doesn’t shy away from climbing uphill. In other words, since SA doesn’t rely on strict model properties, it has the power to bypass local minima.
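The uphill-climbing behaviour comes down to a single rule, the Metropolis acceptance criterion: a move that worsens the objective by ΔE is still accepted with probability exp(−ΔE/T). A tiny illustration (the numbers are arbitrary):

```python
import math

def acceptance_probability(delta_e, temperature):
    """Metropolis criterion: worsening moves accepted with prob exp(-dE/T)."""
    if delta_e <= 0:
        return 1.0  # improving moves are always accepted
    return math.exp(-delta_e / temperature)

# The same uphill move is likely early on (hot) and rare late on (cold).
p_hot = acceptance_probability(delta_e=1.0, temperature=10.0)
p_cold = acceptance_probability(delta_e=1.0, temperature=0.1)
```

This is exactly why SA can hop out of a local minimum while a pure descent method stays stuck.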

Even with these impressive features, SA has some ambitious competitors in the optimisation arena, like Neural Nets and Genetic Algorithms. Unlike Neural Nets, which learn by fitting a fixed model function, SA is a smart random searcher, which gives it an advantage when it comes to local minima. When pitted against Genetic Algorithms, SA often emerges victorious, offering a guarantee of global convergence.

SA is a probabilistic optimisation algorithm, which is what gives it its versatility in problem solving. On the other hand, this means that a lot of time and precision in the inputs is required for a quality solution. Implementing SA requires defining solutions, generating random changes, evaluating the problem functions, and setting up the annealing schedule. Another vital part of these phases is the use of covariance matrices, which show the magnitude and direction of the spread of multivariate data in a multidimensional space. (Multivariate data refers to datasets that involve more than one variable.)
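For multivariate problems, the random changes can be drawn from a multivariate normal distribution whose covariance matrix sets the magnitude and direction of the steps. A brief sketch (the covariance values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative covariance: larger spread along the first variable,
# positively correlated with the second.
cov = np.array([[2.0, 0.8],
                [0.8, 1.0]])

def propose(x, cov, rng):
    """Generate a correlated random change around the current solution x."""
    return x + rng.multivariate_normal(mean=np.zeros(len(x)), cov=cov)

x = np.array([0.0, 0.0])
candidate = propose(x, cov, rng)
```

Over many proposals, the steps reproduce the prescribed spread and correlation, steering the search along the directions the covariance encodes.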

Now, you might wonder, “Why does SA matter to me?” Well, if you’re dealing with financial instruments, SA is becoming the go-to algorithm for hybrid securities and intricate trading strategies: its flexibility stands out, and it can navigate the complexities of multivariate systems, blending continuous and discrete variables seamlessly.

In a nutshell, SA is a flexible and robust optimisation tool that excels in navigating complex landscapes. While it may be computation-intensive, its prowess in tackling nonlinear problems makes it a hero in the world of optimisation.

So, optimisation enthusiasts, whether you’re crunching numbers in finance or exploring intricate models, consider adding Simulated Annealing to your toolkit. It might be just the adventure you need to overcome your optimisation challenges by diving to the lowest point.

Stay optimised, stay curious!

Prepared By: Ali Onur Özkan

Article Reference: Busetti, F. (2018). Simulated Annealing Overview. ResearchGate.

Antimatter falls

The ALPHA experiment at CERN has found that antimatter particles fall down in gravity just like matter particles. An open question had been whether antimatter particles, the opposites of matter particles, would fall up instead of down in gravity. This experiment answered that question.

Antimatter is the mysterious opposite form of matter; antiparticles have the opposite electric charge to normal particles. The antiparticle of the proton is the antiproton, the antiparticle of the electron is the positron, and so on. All fermions we know of have a corresponding antiparticle. There should be an equal amount of matter and antimatter in the universe, but as far as we can tell the universe consists mainly of matter rather than antimatter. This discrepancy is called the matter-antimatter asymmetry, and it’s one of the biggest unsolved problems in physics.

According to our current understanding, the big bang should have created equal amounts of matter and antimatter, but it seems like all the antimatter has disappeared. Scientists have long been looking for differences between matter and antimatter to figure out why matter dominates the universe. When antimatter meets matter, the two annihilate, which makes antiparticles hard to study: they disappear very quickly.

According to Einstein’s general theory of relativity, all particles should interact in the same way with gravity. However, Einstein’s theory was published before we even knew about antimatter, so an open question has been whether antimatter interacts gravitationally in the same way as matter.

Scientists have wondered whether antiparticles could fall up instead of down, meaning that if you held a ball made of antiparticles on Earth and dropped it, it might start falling upwards instead of downwards. This idea may seem quite absurd. There’s no reason to think that antihydrogen atoms should fall up, but we also had no proof that they don’t. There might be a possibility that antimatter is affected by repulsive gravity. If that were the case, it would obviously be a groundbreaking discovery, with many practical implications. So, the idea had to be tested, which is what scientists at ALPHA have now done. The ALPHA-g experiment dropped some antihydrogen and saw what happened.

The problem for a long time was producing and containing antimatter long enough to conduct an experiment like this. But now these scientists were able to contain antihydrogen atoms and test the effect of gravity on them. First, antiprotons and positrons are produced and trapped together with the goal of making antihydrogen atoms. Antihydrogen, which consists of an antiproton and a positron, is the antimatter equivalent of hydrogen. The experiment traps these antihydrogen atoms, then opens the top and bottom barriers of the trap and sees whether the particles travel up or down due to gravity. Simulations predicted that about 80% of ordinary hydrogen atoms would escape downward under these conditions, and the antihydrogen atoms behaved the same way: antihydrogen does in fact behave like hydrogen in gravity.

This means that antimatter is affected by the Earth’s gravitational force in the same way as matter. So, if you drop a ball of antimatter, it does not fall up. The result also means that Einstein’s theory on the behaviour of matter under gravity holds even for matter that was discovered long after his theory was published.

The ALPHA-g result brings us closer to understanding antimatter, and subsequently understanding the origin and the structure of our universe.

Jonny Montonen

Anderson, E. K., Baker, C. J., Bertsche, W., et al. (2023). Observation of the effect of gravity on the motion of antimatter. Nature, 621, 716–722.

Unraveling Heisenberg’s quantum mystery: Are trajectories real?

When thinking about some scientific principle or equation, many would imagine Einstein’s E=mc² or Newton’s F=ma, but some of you might picture the mysterious Heisenberg uncertainty principle (UP for short). Heisenberg’s famous relation

(1)                                           ∆x∆p ≥ ħ/2

changed the understanding of what the real world, deep inside, really is. However, it is often poorly understood and even more often poorly presented. What does it really mean that position and momentum cannot be known simultaneously? And more importantly, if we can’t determine the position of a particle precisely, can we describe its trajectory?


The quantum world is very different from ours. Many of its concepts can’t be understood from a classical point of view. Young Heisenberg, in the times when quantum physics was in its cradle, understood the strange nature of this world and proposed the principle which, in fact, protects quantum physics from being inconsistent. In this blog, we will explore his argument and try to answer the question of whether the trajectories of particles are indeed real.



Heisenberg’s argument in 1927/1929

Heisenberg’s original argument wasn’t in the form we all know today. In fact, the famous relation (1) was proven in the same year by Kennard, but we can’t underestimate the importance of what Heisenberg argued.

Imagine we want to know the position and momentum of a particle simultaneously. This data would provide us with enough initial conditions to predict where the particle will be in the future and what momentum it will have. Now take one electron at rest. If we try to measure its position to inaccuracy ∆q, we are in fact shining γ-rays (photons) on the electron. The resolution of this microscope is proportional to the inaccuracy ∆q of our measurement. What this really means is that we shine a photon on the electron, thanks to which we determine its position up to order λ, but at the same time we disturb the electron: Compton scattering takes place and the electron recoils. The scattered photon comes back to our eye as light, which is the “seeing” of the electron’s position.

It was known to Heisenberg that the photon appears both as a wave with wavelength λ and as a particle with momentum p = h/λ (thanks to de Broglie). This λ is the resolution of the microscope, because λ is what we see in the microscope. But it is related to the inaccuracy of position ∆q. If we equate the change of the electron’s momentum with the elastic transfer, then we can denote p as ∆p, and our relation becomes

∆q∆p ≈ h,

which is the original Heisenberg uncertainty relation. This means that the more accurately we measure the position, the larger the uncertainty ∆p in momentum, thanks to the momentum kick from the scattering. By observing, we destroy the particle’s state.




Thanks to this argument, he believed that the concepts of position, velocity, and momentum must be redefined in quantum physics. There is no such thing as the exact location of a particle or the exact velocity with which it moves. Rather, there is a probabilistic Gaussian distribution which determines how likely the particle is to be at a certain position with a certain momentum. This implies that the classical notion of a trajectory, a set of points in space which the particle passes through continuously, is irrelevant.
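This probabilistic picture can be checked numerically. The sketch below (my own illustration, not from the referenced paper) puts a Gaussian wave packet on a grid, computes ∆x directly and ∆p from the Fourier transform, and finds their product equal to ħ/2, the minimum that a Gaussian allows (in units where ħ = 1):

```python
import numpy as np

hbar = 1.0
N = 4096
x = np.linspace(-20.0, 20.0, N, endpoint=False)
dx = x[1] - x[0]

# A Gaussian wave packet with width sigma, so that Delta x = sigma.
sigma = 1.0
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalise to unit probability

# Position uncertainty from the probability density |psi(x)|^2.
prob_x = np.abs(psi)**2 * dx
mean_x = np.sum(x * prob_x)
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

# Momentum uncertainty from the Fourier transform |phi(p)|^2.
p = 2 * np.pi * np.fft.fftfreq(N, d=dx) * hbar
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= prob_p.sum()
mean_p = np.sum(p * prob_p)
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))

product = delta_x * delta_p  # a Gaussian saturates the bound: hbar / 2
```

Squeezing the packet (smaller sigma) shrinks ∆x but widens ∆p by exactly the same factor, so the product never dips below ħ/2.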

With experiments we might more precisely determine one quantity (position or momentum), but that would destroy our knowledge of the other. Heisenberg further associated particles with wavefunctions; at present, the wavefunction formulation is the most widely accepted. Wave packets form the particles, but they behave according to quantum rules, that is, according to the uncertainty principle. So, speaking about trajectories doesn’t really make sense for Heisenberg: we cannot determine a trajectory, since we can’t know the position and momentum exactly.

It might be the case that we can reconstruct the position and momentum of particles in our experiment after the fact, meaning that the uncertainty relation is broken for the past. This, however, doesn’t seem like a problem: even with this knowledge we can’t determine the future behaviour of the trajectory, since the observation itself introduces uncertainty in the momentum. The uncertainty relation may be broken for the past, but the past is not what it is meant to describe. Therefore, if we don’t know the exact momentum p, which means the exact velocity v (p = mv), we don’t know exactly where the particle will be in the future until we measure it. Speaking about a trajectory here doesn’t make sense. We might do so anyway, but we can’t experimentally verify it, so it is a matter of taste, as Heisenberg mentions. We should think in terms of wave distributions, and in that sense trajectories are not well defined.


In conclusion, we cannot really use the word “trajectory” for a particle in the classical sense, but nor are we absolutely sure what it really means. As quantum physics evolves, the meaning of what is real and what is not evolves with it, so we might expect that one day we will be able to resolve the problem completely. What we do know, however, is that the uncertainty relation holds, and that it has played a crucial role in the development of modern physics, and will for years to come.


 Michal Bires

Aristarhov, S. (2023). Heisenberg’s Uncertainty Principle and Particle Trajectories. Physics of Particles and Nuclei, 54(5), 984–990.


Please, No More Set Theory

Set theory keeps me up at night. Not in a good way. Don’t you also feel weird about how a thing can be a member of something else, at the same time a part of everything else, and at the same time absolutely nothing? Thaaat’s right, I’m talking about the empty set, dingdingdingdingdingdingding!

Vague notions always come first; mathematics has always worked this way. In ancient Greek times, philosophers thought that all numbers were rational, until they were proven wrong. Newton had to argue for an “infinitely close to 0” quantity. Most mathematicians would use the concept of sets in their work, but it wasn’t until the beginning of the 20th century that a rigorous understanding of set theory developed. It was a classic, a masterpiece, a beam of light shining into the crisis of inconsistency haunting the mansion of mathematics. But what’s the price of it? Now every freshman must learn how {{a}, {a, b}} actually behaves like an ordered pair, and if you get enough of them together you can even make things that look like functions!
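That freshman fact is easy to demonstrate in a few lines of Python, with frozensets standing in for sets (a toy illustration, not a serious formalisation):

```python
def kpair(a, b):
    """Kuratowski encoding of an ordered pair: (a, b) := {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

def kfirst(p):
    """The first component is the element common to every member of the pair."""
    return next(iter(frozenset.intersection(*p)))

# The defining property: pairs are equal iff they are equal componentwise.
assert kpair(1, 2) == kpair(1, 2)
assert kpair(1, 2) != kpair(2, 1)
```

Note that when a = b the two inner sets collapse into one, and the decoding still works, which is precisely the kind of incidental cleverness this post is complaining about.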

But seriously, this needs to stop. Axiomatic set theory is very “Hilbertian” in that the axioms are pretty and simple, but the constructions using them are hell. And the derivations do not always describe a construction. When you get a set as the setting (huh) of a problem, it’s never entirely clear what you’re supposed to do with it. When you read a proof, there is nothing obvious about what fits where, or how the shapes morph into each other. To me, it’s like asking someone to hand you a screwdriver, and they hand you a popsicle stick and say, “Now imagine this popsicle stick is stuck in another popsicle stick, both sides, at a 45-degree angle.” Maybe I’m biased, but I think a screwdriver should have a handle.

You might ask: what then? What are we going to use? And why the heck did we only have set theory as our foundation for college mathematics?

Here comes the good news: we actually have other systems! One of them is called type theory, first conceived while Whitehead and Russell were working on the Principia Mathematica, a grandiose project to write down literally all of the mathematics of the time. That project was toast when Gödel came along with the cold hard truth that math cannot prove itself right: not back then, not now, and never after. Sent the project six feet under, and the early type theory went down with it.

But the gist of it remained: a language that actually makes sense, where a natural number is just a natural number, not also a subset of a non-natural non-number. The cinders reignited when the theory of computation was born and a new generation of logicians like Turing and Church ruled. Lambda calculus appeared, along with all its variants: simply-typed lambda calculus, and later System F. They brought back the concept of one mathematical object having only one type. The neatness and expressiveness remain to this day, and you are still using a descendant of them whenever you code in Java or C++. That’s also one of the reasons why type theory is so great: since all kids are learning coding today, there is a lot of familiarity with one form of it.

Even better: type theory has already been used in mathematics, with flying colors. The computer theorem prover Coq, famously used in the proof of the four-color theorem, is based on type-theoretic systems. Agda, another theorem prover, is already used in domains of higher topology, with the brand-new homotopy type theory as its foundation.

The crux of type theory is this: in set theory, you build theorems using the rules of first-order (classical) logic; the rule applications form a tree, and only the leaves are set theory axioms. In type theory – get this – the types themselves are also the logic! This is called propositions-as-types. A type can be understood as a proposition; or rather, a proposition is just defined as a type! Logical truth is denoted by the existence of an element of a type. Types with at least one element are called “inhabited”, reflecting the concept of a true proposition. We also get a huge chunk of proof theory for free, since the elements of a type correspond to proofs of that proposition; for example, we get cut-elimination from computation on those elements.
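Here is what propositions-as-types looks like in practice, sketched in Lean 4 (a theorem prover in the same family as the Coq and Agda mentioned above): a proposition is a type, and writing down an element of that type is proving it.

```lean
-- "A implies (B implies A)" is a type; this function is an inhabitant of it,
-- i.e. a proof. Inhabited type = true proposition.
def k {A B : Prop} : A → (B → A) :=
  fun a => fun _ => a

-- A proof of a conjunction A ∧ B is literally a pair of proofs.
def swap {A B : Prop} : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩
```

No encodings, no guessing: the shape of the type tells you the shape of the proof.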

Type theory solves this huge pain of set theory because it is structural: just as it is its own logic, the fundamental rules always show you clearly what you need to do at a given point, so you never need to guess. If you need to construct a pair, the rule says you give the two elements. If you have a natural number, the rule says you can do mathematical induction on it.

I truly believe the future of mathematics, or at least a huge part of it, will be built on a new foundation like type theory. It sure is that much more intuitive for me, and I believe that, one day, all of you could also be saved from the horrors of set-theoretic constructions.



Jurij Vega – The only Slovene whose name can be found on the moon

If I told you he worked in mathematics, physics, astronomy, and ballistics; if I told you his logarithmic tables were a staple in all fields of science for over a century and that he held the world record for calculating the most digits of pi for over 50 years – would you know who I’m talking about?

Most likely, the answer to that question would be no. The answer would be negative from the majority of Slovenians too, despite him arguably being the greatest Slovenian mathematician. And if I’m honest, even after having competed in a few competitions named after him, I am not entirely sure I could answer yes myself.

So, who is this mysterious man whose name and achievements seem to be known only by mathematics enthusiasts or those whose knowledge approaches an encyclopedic level?

His name was Jurij Vega (1754-1802), also known as Baron Jurij Bartolomej Vega. A Slovenian mathematician and artillery officer, he graduated as the top student in his class from the grammar school in Ljubljana. After working as an engineer, he joined the army and, thanks to his talent, quickly rose to the rank of “Unterleutnant” in Vienna’s artillery. Dissatisfied with the available textbooks and equipment, he started writing his own mathematical literature. In 1782, he published his first book, followed a year later by his first set of logarithm tables.

Vega later published more books, including the notable “Thesaurus logarithmorum completus ex arithmetica logarithmica et ex trigonometria,” released in Leipzig, which is also the topic of this blog. Even during wartime, he continued his mathematical work; one witness even testified that Vega was calculating logarithms while cannonballs were flying above his head. Further, he authored at least six scientific papers and achieved a world record on August 20, 1789, by calculating pi to 140 decimal places, with the first 126 and later 136 being correct. For his contributions to weapon improvement through his calculations, he was honoured with the Knight’s Cross of the Order of Maria Theresa.

Unfortunately, Vega went missing in mid-September 1802. His body was found on the 26th of that month in the Danube River. The circumstances of his death remain unclear, with some speculating suicide or murder; officially, however, his death was pronounced an accidental drowning.

Vega’s logarithmic tables

The first book of logarithmic tables was published in 1783, containing 7-digit base-10 logarithms that surpassed all previously available tables in correctness. Many more editions followed due to its great success.

The Logarithmic-Trigonometric Handbook, published in 1793, covered the logarithms of the natural numbers from 1 to 101,000, as well as the logarithms of the trigonometric functions for angles between 0° and 45° with a step size of 10 arc seconds (equivalent to 1/129,600 of a full circle).

In later editions there were also lists of prime numbers and much more. The third edition, the Thesaurus logarithmorum completus, contained 10-digit logarithms intended especially for calculations in astronomy.

It is said that Gauss criticized the incorrectness of several values in the 10th decimal place, a criticism that was justified, since many mistakes were later discovered and corrected in subsequent editions.

However, the tremendous progress in engineering and astronomy would have been impossible without Vega’s tables. Let’s see how they were calculated.

Calculating logarithms was revolutionized when James Gregory (1638-1675) and Brook Taylor (1685-1731) discovered that functions could be represented through their derivatives (Taylor’s theorem).

For natural logarithms (with base e), the following series expansion applies:

ln(1 + y) = y − y²/2 + y³/3 − y⁴/4 + …

From the formula:

ln((1 + x)/(1 − x)) = ln(1 + x) − ln(1 − x)

We get a converging series:

ln((1 + x)/(1 − x)) = 2(x + x³/3 + x⁵/5 + x⁷/7 + …)

For example, to calculate ln(2), we first determine the value of x: setting (1 + x)/(1 − x) = 2 gives x = 1/3. Adding up the first ten summands yields a 10-digit value:

x + x³/3 + x⁵/5 + … + x¹⁹/19 ≈ 0.3465735903

and then we calculate

ln(2) ≈ 2 · 0.3465735903 = 0.6931471806

The base 10 and the natural logarithms follow this formula:

log₁₀(x) = ln(x)/ln(10)

To calculate the base 10 logarithm of x, we first determine ln(x) and multiply it by the reciprocal of ln(10):

log₁₀(x) ≈ 0.4342944819 · ln(x)
Vega calculated this for all prime numbers up to 100,000 and deduced the logarithms of the rest with the formula log(ab) = log(a) + log(b). This was a big undertaking at the time, especially considering that all the calculations were made by hand.
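Vega’s procedure can be replayed in a few lines of Python (a modern sketch of the method described above, not his actual workflow): compute ln(n) from the series with x = (n − 1)/(n + 1), then convert to base 10 using the reciprocal of ln(10).

```python
import math

def ln_series(n, terms=10):
    """ln(n) via the series ln((1+x)/(1-x)) = 2*(x + x^3/3 + x^5/5 + ...)."""
    x = (n - 1) / (n + 1)  # chosen so that (1 + x) / (1 - x) = n
    return 2 * sum(x**(2 * k + 1) / (2 * k + 1) for k in range(terms))

def log10_from_ln(ln_n):
    """Base-10 logarithm: multiply ln(n) by the reciprocal of ln(10)."""
    return ln_n * 0.4342944819

ln2 = ln_series(2)  # with x = 1/3, ten summands give about ten correct digits

# Logarithms of composites follow from log(ab) = log(a) + log(b):
log10_6 = log10_from_ln(ln_series(2)) + log10_from_ln(ln_series(3))
```

The series converges fastest for small x, which is why it was worth computing it only for primes and assembling everything else by addition.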

Vega and pi

On August 20, 1789, Vega achieved a world record by calculating pi to 140 places, of which the first 126 were correct in the first estimation; the second estimation achieved 136 correct places. With his method of calculation, he found an error in the 113th place of Thomas Fantet de Lagny’s estimation of 127 places from 1719. Vega retained his record for 52 years, until 1841, and his method is still mentioned today. He managed this by improving John Machin’s formula from 1706:

π/4 = 4 arctan(1/5) − arctan(1/239)

with his equivalent of Euler’s formula from 1755:

π/4 = 5 arctan(1/7) + 2 arctan(3/79)

which converges faster than Machin’s formula. He checked his result with the similar Hutton’s formula:

π/4 = 2 arctan(1/3) + arctan(1/7)

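These arctangent identities are easy to verify numerically. A quick check in Python, assuming the standard published forms of Machin’s, Euler’s, and Hutton’s formulas:

```python
import math

# Each expression is an identity for pi/4.
machin = 4 * math.atan(1 / 5) - math.atan(1 / 239)    # Machin, 1706
euler = 5 * math.atan(1 / 7) + 2 * math.atan(3 / 79)  # Euler, 1755 (used by Vega)
hutton = 2 * math.atan(1 / 3) + math.atan(1 / 7)      # Hutton's similar formula
```

Each line agrees with π/4 to machine precision; for hand calculation the differences lie in how quickly the corresponding arctangent series converge.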
In conclusion, these calculations allowed the development of all the scientific areas of the time that benefited from precise computation. Vega’s work was still praised later on, even after his death, for its precision and convenience (with a few exceptions, like Gauss). He was able to achieve this through diligent checking and with the tremendous help of his pupils and the soldiers under his command. Although he may not be as recognizable now as he deserves, he was, and still is, well-respected in the scientific community. So much so that a crater on the moon was named after him, called Vega. I realise that his name on the crater is what might have brought you here, but I sincerely hope you stayed for his work and outstanding achievements (partially also because there isn’t much more to the lunar story than that). Still, perhaps we should all look up more lunar craters to discover famous scientists that history doesn’t put in the spotlight, wouldn’t you agree?



Faustmann, Gerlinde, “Jurij Vega – the Most Internationally Distributed Logarithm Tables.” Accessed November 27, 2023.

Vega, “Détermination de la demi-circonférence d’un cercle, dont le diamètre est = 1,” Nova Acta Academiae Scientiarum Imperialis Petropolitanae for 1790, Vol. 9, 1795, pp. 41-44.

Vega, Thesaurus Logarithmorum Completus (logaritmisch-trigonometrischer Tafeln), Leipzig, 1794, p. 633.



The Return of the Ninth Planet

Would you believe it if I told you that scientists have found evidence that there might be a ninth planet lurking in the far reaches of our solar system? Well, they have, and spoiler alert: it’s not Pluto. In fact, the predicted size of this unknown planet is 1.5 to 3 times the size of the Earth! Hard to believe that a planet of that size could’ve escaped scientists’ telescopes, right? But alas, our solar system and the universe are so much bigger than the meager human mind can fathom, and this hidden planet may very possibly be out there.

Researchers Patryk Sofia Lykawka and Takashi Ito decided to investigate the possibility of a ninth planet. “Why?” you ask. Well, it would explain three major properties that have been observed in the Kuiper belt, a ring of “space rocks” beyond Neptune that is home to many dwarf planets, including Pluto. The three properties are as follows:

  1. There is a group of objects beyond Neptune, fancily dubbed as “transneptunian objects” (TNOs), whose orbits are not under the gravitational influence of Neptune.
  2. These same TNOs have somewhat peculiar orbital inclinations (and we don’t know why, yet).
  3. There are some “extreme objects” (like the distant object Sedna) with weird orbits, which is also puzzling.

ALL of these could simply be explained by the existence of a planet bigger than the Earth, and this is what Lykawka and Ito set out to investigate.

But how did they figure this out? The answer: they performed a bunch of simulations, namely N-body computer simulations. N-body simulation is widely used in astrophysics; simply put, it simulates how objects in space, such as planets, act under physical forces like gravity. These simulations were used to investigate the effects of hypothetical planets of different sizes on the orbits of the aforementioned TNOs. To build these simulations, the researchers first needed to determine the properties of the potential new planet, more familiarly called the KBP (Kuiper Belt planet).
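To give a flavour of what an N-body code does, here is a minimal sketch in Python: a toy Sun-plus-one-planet setup with a leapfrog integrator (nothing like the scale of the study’s actual simulations, and all parameter choices here are mine):

```python
import numpy as np

# Units: AU, years, solar masses; in these units G * M_sun = 4 * pi^2.
GM = 4 * np.pi**2

def leapfrog_orbit(pos, vel, dt, steps):
    """Integrate one body around the Sun with the kick-drift-kick leapfrog scheme."""
    pos, vel = np.array(pos, dtype=float), np.array(vel, dtype=float)
    for _ in range(steps):
        acc = -GM * pos / np.linalg.norm(pos)**3   # gravitational acceleration
        vel += 0.5 * dt * acc                      # half kick
        pos += dt * vel                            # drift
        acc = -GM * pos / np.linalg.norm(pos)**3
        vel += 0.5 * dt * acc                      # half kick
    return pos, vel

# An Earth-like orbit: 1 AU from the Sun, circular speed 2*pi AU/year,
# integrated for exactly one year; the body should return to its start.
final_pos, final_vel = leapfrog_orbit([1.0, 0.0], [0.0, 2 * np.pi], dt=1e-3, steps=1000)
```

A study like this one integrates many test particles plus the giant planets over enormous timescales, but the principle, pairwise gravity stepped forward in time, is the same.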

But first, they made a model that included the four giant planets (Jupiter, Saturn, Uranus, and Neptune) and their current orbits to use as a control. Then, by comparing simulations of different KBP models with the control system, they could figure out the best fit. The KBP models differed in that the hypothetical planet was given a new mass and orbit each time, to cover more ground. To investigate the potential changes caused by the KBP, the researchers compared the simulations using the Outer Solar System Origins Survey (OSSOS) Survey Simulator. Basically, OSSOS is an astronomical survey that observes transneptunian objects with actual telescopes, so its survey simulator enabled comparison of the simulation results with real-life observations.

Now, the results are indeed interesting, as they seem to support the existence of a planet in the Kuiper belt. But before you get too excited and start searching for pictures of this planet, remember that it is still only a hypothetical explanation for certain observed properties and would need actual observations to back up its existence. I know, it’s sad, but alas, that is how science works, and trust me: it’s a good thing. Now we can just sit back and wait to see if we will get this nice new addition to our big, universal family of round-ish objects floating in vast, endless darkness.


Sinianna Paukkunen

Lykawka, P. S., & Ito, T. (2023). Is there an Earth-like planet in the distant Kuiper Belt? The Astronomical Journal, 166, 118.