Where Do Random Walks Lead

Rami Luisto asked the following question:

Pick randomly n unit vectors from the plane, [tex]x_1,\dots,x_n\in \mathbb{R}^2[/tex]. What is the probability that the sum of these vectors has magnitude less than or equal to one, [tex]|x_1+\cdots+x_n|\leqslant 1[/tex]?

(magnitude = norm = distance from the origin)

I will present here a brute force solution to this problem, but unfortunately the formula for the probability is a complicated iterated integral. I am able to calculate the answer for n=1, n=2 and n=3. For n=4 it seems complicated. The corresponding values are

1, [tex]\frac{1}{3}[/tex] and [tex]\frac{1}{4}.[/tex]

This actually inspired me to make my computer run some random walks. It made [tex]10^4[/tex] random walks of each of the lengths 3, 4, 5, 6, 7, 8, 9, 19 and 39. On the basis of that and the calculations mentioned above, I conjecture:

The probability for the sum of n>1 random unit vectors to be in the unit disc is [tex]\displaystyle\, \frac{1}{n+1}.[/tex]

This is a bit surprising, since one would expect the probability to be proportional to [tex]\frac{1}{n^2}[/tex], not to [tex]\frac{1}{n}[/tex], because the area of the region of possible outcomes (end points) grows proportionally to [tex]n^2[/tex].

If the probability is indeed 1/(n+1), its simplicity suggests that there could be a simple solution, as there are for similar problems. If you have one, I am interested in it!
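For the curious, here is a minimal Monte Carlo sketch in the spirit of the experiment described above (a re-creation in Python with NumPy, not the original script; the walk lengths and the number of walks below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def endpoint_norms(n, walks=10**5):
    """Norms of the end points of `walks` random walks of n unit steps."""
    angles = rng.uniform(0.0, 2.0 * np.pi, size=(walks, n))
    return np.hypot(np.cos(angles).sum(axis=1), np.sin(angles).sum(axis=1))

for n in (2, 3, 4, 5, 9, 19, 39):
    estimate = np.mean(endpoint_norms(n) <= 1.0)
    print(f"n = {n:2d}: estimate {estimate:.4f},  1/(n+1) = {1/(n+1):.4f}")
```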

A random walk of length n (an n-r.w.) will mean n randomly picked unit vectors in the plane, and the end point of that r.w. will mean the vector that is their sum.

Let [tex]P_n(a\leqslant r\leqslant b)[/tex] denote the probability that the magnitude of the end point of an n-r.w. lies in the interval [a,b]. If a=0 and b=1, this is the probability that the end point is in the unit disc. [tex]J_n[/tex] will be the corresponding probability density function, i.e.

[tex]\displaystyle J_n(x) = \frac{d}{dx}P_n(0 \leqslant r\leqslant x)[/tex].

To begin with, let us define the following function f. Given non-negative x and y, consider a circle Y with radius y and a point p whose distance from the center of Y is x. There are several cases. First, suppose that the circle with center p and radius 1 intersects Y in two points, and call these points a and b. The function f at (x,y) is then defined to be half of the angle [tex]\angle apb[/tex]. Here is a picture to clarify:

[Figure: definition of the function f]

In this case the value of the function is

[tex]\displaystyle f(x,y)=\arccos\left(\frac{-x^2+y^2-1}{2x}\right).[/tex]

We can ignore the cases where Y intersects the p-centered unit circle in only one point, as well as the case [tex]x=0, y=1[/tex]; they occur with probability zero. If the p-centered unit circle lies entirely outside Y, that is if x>y+1 or if x<1-y (the latter is possible only when y<1), let us define

[tex]f(x,y)=\pi,[/tex]

and if Y contains the p-centered unit circle, that is if x<y-1 (possible only when y>1), we set

[tex]f(x,y)=0.[/tex]

Summary:
If [tex]|y-1|\,\textless\, x\, \textless\, y+1[/tex], then [tex]f(x,y)=\arccos\left(\frac{-x^2+y^2-1}{2x}\right),[/tex]
if [tex]y+1\,\textless \, x[/tex] or [tex]x\,\textless\, 1-y[/tex], then [tex]f(x,y)=\pi,[/tex]
if [tex]x\,\textless\, y-1[/tex], then [tex]f(x,y)=0.[/tex]

Note that, because x and y are non-negative, the above cases are mutually exclusive.
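For later experimentation, this case analysis translates directly into code; here is a sketch (a Python/NumPy rendering of the definition above, with the boundary and tangency cases, which occur with probability zero, absorbed by non-strict inequalities):

```python
import numpy as np

def f(x, y):
    """The function f(x, y) defined above, for non-negative x and y."""
    if x >= y + 1.0 or x <= 1.0 - y:
        # the p-centered unit circle lies outside the circle Y
        return np.pi
    if x <= y - 1.0:
        # Y contains the p-centered unit circle
        return 0.0
    # the circles intersect in two points: |y - 1| < x < y + 1
    return np.arccos((-x * x + y * y - 1.0) / (2.0 * x))
```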

The important feature of this function is the following: given a point p with norm |p| = x and a random unit vector v, the probability that the norm of p+v lies between a and b (a<b) equals

[tex]\displaystyle \frac{2(f(x,a)-f(x,b))}{2\pi}=\frac{1}{\pi}(f(x,a)-f(x,b)).[/tex]

Here [tex]2(f(x,a)-f(x,b))[/tex] is the total angular measure of the directions of v for which [tex]a\leqslant |p+v|\leqslant b[/tex] (these directions form two arcs, symmetric with respect to the line through the origin and p), and [tex]2\pi[/tex] is the measure of all directions.

Suppose now that the end point of an n-r.w. is the point p with norm r. Suppose also that we continue this random walk by one step, i.e. add a random unit vector v to this end point. The vector v has a random direction [tex]\alpha\in [-\pi,\pi)[/tex]. For some [tex]\alpha[/tex]'s the value |p+v| lies between a and b. By the above, the probability for [tex]\alpha[/tex] to be such equals

[tex]\displaystyle\frac{f(r,a)-f(r,b)}{\pi}.[/tex]
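Continuing the sketch above (it reuses the function f defined there), the one-step formula can be checked against a direct simulation; the values of x, a and b below are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x, a, b = 1.7, 0.5, 2.0                          # arbitrary test case

alpha = rng.uniform(-np.pi, np.pi, 10**6)        # random direction of the unit step
r = np.hypot(x + np.cos(alpha), np.sin(alpha))   # norm of the point plus the unit vector

print("simulated:", np.mean((a <= r) & (r <= b)))
print("formula  :", (f(x, a) - f(x, b)) / np.pi)
```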

This one-step formula makes it possible to proceed by induction. Intuitively, if for each s we know the probability [tex]P_n(s\leqslant r\leqslant s+\varepsilon)[/tex], then we get the approximate probability [tex]Q_s[/tex] that, in an (n+1)-r.w., the n:th point has magnitude around s and the final end point has magnitude between a and b. Namely

[tex]\displaystyle Q_s=P_n(s\leqslant r\leqslant s+\varepsilon)\cdot \frac{f(s,a)-f(s,b)}{\pi}[/tex]

Now summing over s gives (approximately) the probability that the end point of an (n+1)-r.w. has magnitude between a and b. As [tex]\varepsilon[/tex] approaches zero,

[tex]\displaystyle\frac{P_n(s\leqslant r\leqslant s+\varepsilon)}{\varepsilon}[/tex]

approaches the derivative of [tex]P_n(0\leqslant r\leqslant x)[/tex] at s, namely the corresponding probability density function [tex]J_n(s)[/tex], and the sum approaches the integral

[tex]\displaystyle\int_{\max\{a-1,0\}}^{b+1} J_n(x)\frac{f(x,a)-f(x,b)}{\pi}dx.[/tex]

The limits of the integral are chosen taking into account that the integrand is zero elsewhere by the definition of f.

In the following I could have defined only [tex]P_1[/tex] and [tex]J_1[/tex] and then continued by induction, but [tex]J_1[/tex] would not be a function: it would be a point mass (a Dirac delta) at r=1. To avoid that, I define [tex]P_2[/tex] explicitly.

[tex] P_1(a\leqslant r\leqslant b)=1\text{ when }a\leqslant 1\leqslant b,\text{ and }0\text{ otherwise.} [/tex]

[tex] P_2(a\leqslant r\leqslant b)=\displaystyle\frac{1}{\pi}(f(1,a)-f(1,b))[/tex]
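As a quick check, this already reproduces the value 1/3 mentioned at the beginning: [tex]\displaystyle P_2(0\leqslant r\leqslant 1)=\frac{1}{\pi}(f(1,0)-f(1,1))=\frac{1}{\pi}\left(\pi-\arccos\left(-\tfrac{1}{2}\right)\right)=\frac{1}{\pi}\left(\pi-\frac{2\pi}{3}\right)=\frac{1}{3}.[/tex]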

Then for [tex]n \geqslant 2[/tex]:

[tex]J_{n}(x)=\displaystyle\frac{d}{dx}P_{n}(0\leqslant r\leqslant x).[/tex]

[tex] \displaystyle P_{n+1}(a\leqslant r\leqslant b)=\frac{1}{\pi}\int_{\max\{a-1,0\}}^{b+1} J_n(x)(f(x,a)-f(x,b))dx[/tex]
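As a numerical sanity check of this recursion, one can recover the value P_3(0≤r≤1) ≈ 1/4 computed below; the following sketch reuses the function f from above, approximates [tex]J_2[/tex] by a finite difference, and integrates with SciPy:

```python
import numpy as np
from scipy.integrate import quad

def P2(x):
    """P_2(0 <= r <= x) = (f(1, 0) - f(1, x)) / pi."""
    return (f(1.0, 0.0) - f(1.0, x)) / np.pi

def J2(x, h=1e-6):
    """Density J_2(x), approximated by a central difference of P_2."""
    return (P2(x + h) - P2(x - h)) / (2.0 * h)

P3, _ = quad(lambda x: J2(x) * (f(x, 0.0) - f(x, 1.0)) / np.pi, 0.0, 2.0)
print(P3)   # approximately 0.25
```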

Example: We shall calculate [tex]P=P_{3}(0\leqslant r\leqslant 1)[/tex]. First we make some technical observations (D and d/dx denote the derivative with respect to x):

1. [tex]f(x,0)=\pi[/tex] for all [tex]x>0[/tex].
2. [tex]D\arccos(x)=-\frac{1}{\sqrt{1-x^2}}[/tex]
3. [tex]\frac{d}{dx}\arccos\left(\frac{x^2-2}{2}\right)=2\,\frac{d}{dx}\arccos\left(\frac{x}{2}\right)[/tex]
4. [tex]\frac{d}{dx}\arccos\left(\frac{x}{2}\right)=-\frac{d}{dx}\arccos\left(-\frac{x}{2}\right)[/tex] (since [tex]D\arccos[/tex] takes the same value at [tex]\frac{x}{2}[/tex] and [tex]-\frac{x}{2}[/tex])
5. [tex]\int g(x)\, Dg(x)\,dx = \frac{1}{2}g(x)^2 + C[/tex] for any differentiable g

Now,

[tex]\displaystyle P=\frac{1}{\pi}\int_{0}^{2} J_2(x)(f(x,0)-f(x,1))dx[/tex]

[tex]\displaystyle=\frac{1}{\pi}\int_{0}^{2} \left[\frac{d}{dx}P_{2}(0\leqslant r\leqslant x)\right]\cdot(f(x,0)-f(x,1))dx[/tex]

[tex]\displaystyle=\frac{1}{\pi}\int_{0}^{2} \left[\frac{d}{dx}\frac{1}{\pi}(f(1,0)-f(1,x))\right]\cdot(f(x,0)-f(x,1))dx[/tex]

[tex]\displaystyle=-\frac{1}{\pi^2}\int_{0}^{2} \left[\frac{d}{dx}f(1,x)\right]\cdot(\underbrace{f(x,0)}_{=\pi}-f(x,1))dx[/tex]

[tex]\displaystyle=-\frac{1}{\pi}\underbrace{\int_{0}^{2} \frac{d}{dx}f(1,x)dx}_{=f(1,2)-f(1,0)=-\pi}+\frac{1}{\pi^2}\int_{0}^{2}\left[\frac{d}{dx}f(1,x)\right]f(x,1)dx[/tex]

[tex]\displaystyle=1+\frac{1}{\pi^2}\int_{0}^{2} \left[\frac{d}{dx}f(1,x)\right]f(x,1)dx[/tex]

[tex]\displaystyle=1+\frac{1}{\pi^2}\int_{0}^{2} \left[\frac{d}{dx}\arccos\left(\frac{x^2-2}{2}\right)\right]\arccos\left(-\frac{x}{2}\right)dx[/tex]

Now use the observations above:

[tex]\displaystyle=1-\frac{2}{\pi^2}\int_{0}^{2} \left[\frac{d}{dx}\arccos\left(-\frac{x}{2}\right)\right]\arccos\left(-\frac{x}{2}\right)dx[/tex]

[tex]\displaystyle=1-\frac{1}{\pi^2}\left[\arccos\left(-\frac{2}{2}\right)^2-\arccos\left(-\frac{0}{2}\right)^2\right][/tex]

[tex]\displaystyle=1-\frac{1}{\pi^2}\left[\pi^2-\frac{\pi^2}{4}\right]=\frac{1}{4}[/tex]
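For completeness, the last steps can be double-checked symbolically; here is a small sketch using SymPy that verifies observation 5 for this integrand and then evaluates the final expression:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.acos(-x / 2)
F = g**2 / 2                                   # antiderivative of g'(x)*g(x), as in observation 5
assert sp.simplify(sp.diff(F, x) - sp.diff(g, x) * g) == 0

I = F.subs(x, 2) - F.subs(x, 0)                # the remaining integral: 3*pi**2/8
print(sp.simplify(1 - 2 * I / sp.pi**2))       # prints 1/4
```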


Responses to Where Do Random Walks Lead

  1. Yes. The conjecture is valid. Use characteristic functions and the identity $rf(r)=-g'(r)$ where $g(|x|)$ is the Fourier transform of the Haar measure on the unit circle and $f(|x|)$ is the Fourier transform of the characteristic function of the unit disk. Then integrate by parts once.

