
Noise and structure

In a previous post, I contrasted the distribution of matter in the universe at scales above and below a clustering distance, r_0, of about 0.002 times the Hubble distance (the Hubble distance itself being about 14 billion light years). I mentioned that at scales below that distance, the two-point correlation function, defined through the joint probability of finding galaxies in volume elements \delta V_1 and \delta V_2 separated by a distance r, falls off as a power law with an exponent \gamma = 1.77 \pm 0.04.
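For concreteness, that definition can be written out explicitly. In the standard convention (I am introducing \bar{n}, the mean number density of galaxies, which did not appear above), the joint probability and the power law read:

 \delta P = \bar{n}^2 \left[ 1 + \xi(r) \right] \delta V_1 \, \delta V_2 , \qquad \xi(r) = \left( \frac{r}{r_0} \right)^{-\gamma}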

I want to expand on this idea in the present post. To change the focus slightly, let us imagine that we are given a sequence of numbers drawn from a Gaussian probability distribution with mean 0 and variance 1: \{x_1, x_2, x_3, \ldots\}. Assume that these are independent of one another and identically distributed; in other words, there is no correlation between samples. If we build a sampling histogram of these numbers, then we should find, after accumulating enough samples, some finite number of them at any arbitrary distance from the origin. It might take a while, but in principle the distribution is non-zero over any finite interval [x, x + \Delta x] of the real-number line.
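Here is a minimal numerical sketch of that claim (the sample count, the interval, and the random seed are all arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)

# Draw i.i.d. samples from a standard Gaussian (mean 0, variance 1).
samples = rng.standard_normal(1_000_000)

# Count the samples landing in a far-out interval [x, x + dx). The
# expected count is small but strictly positive for any finite x, so
# with enough samples every such interval is eventually hit.
x, dx = 4.0, 0.1
hits = np.count_nonzero((samples >= x) & (samples < x + dx))
print(f"samples in [{x}, {x + dx}): {hits}")   # typically a dozen or so here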

Let’s now take pairs of numbers from this set, \{(x_1, x_2), (x_3, x_4), (x_5, x_6), \ldots\}. We now have a probability distribution in a plane defined by these pairs of numbers. Again, with a sufficient accumulation of samples, we should find that a sampling histogram will eventually fill the entire plane; that is, there is a finite, non-zero probability of finding some sample in any arbitrary element of area of the plane, \Delta A.

We can go up another notch by taking ordered triples, \{(x_1, x_2, x_3), (x_4, x_5, x_6), (x_7, x_8, x_9), \ldots\}. We now have a Gaussian distribution in 3 dimensions, but the same logic applies. Accumulate enough samples and any volume element, \Delta V, no matter how far away from the origin, will eventually include a sample of this random process, because there is a finite and non-zero probability of the occurrence of whatever combination of 3 random numbers might fall in that volume element.

We can continue to do this up to any arbitrary dimensionality that we choose. However we segment the n-dimensional space in which we are embedding the random process, we will eventually find an element of the sample set within that segment. To make matters simple, we divide the n-dimensional space into n-dimensional boxes of some length \Delta x_0 on a side, so that each “hyper-box” has volume (\Delta x_0)^n. If we count boxes simply on the basis of whether or not they contain at least one element of the random set, without any concern for the density of elements in any box, then every possible box, at any choice of scale, will eventually be counted. It might take a while, and longer in higher dimensions; but in principle a space of any arbitrary dimension will be filled by a truly random process of the sort I have described.
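Here is a sketch of that bookkeeping, identifying each hyper-box by the integer coordinates of its corner (the dimension, box size, and sample count are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(1)

n_dim = 3          # embedding dimension (arbitrary)
box_size = 0.5     # the side length \Delta x_0 of each hyper-box
n_samples = 100_000

# Group the Gaussian stream into n-tuples, one point per row.
points = rng.standard_normal((n_samples, n_dim))

# A box counts once if it contains at least one point; density is ignored.
occupied = {tuple(idx) for idx in np.floor(points / box_size).astype(int)}
print(f"{len(occupied)} boxes of volume {box_size**n_dim} are occupied")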

The picture would be quite different for a uniform random process with equal likelihood on the interval 0\leq x\leq 1. Going up in dimensionality in a similar fashion would simply fill a hyper-cube of length 1 on a side. We could never get an n-tuple like (1.1, 1.01, 1.5, ...) out of such a procedure since any individual component could never be greater than 1.

Now consider the following system of differential equations:

 \dot{x} = \sigma (y - x)

 \dot{y} = x (\rho - z) - y

 \dot{z} = x y - \beta z

which is known as the Lorenz system. For \sigma = 10, \beta = 8/3, and \rho = 28, this system of equations yields solutions that settle onto a dynamical attractor independent of the initial conditions. The concept of a dynamical attractor is not wildly complicated in its essence. For example, a damped harmonic oscillator has a point attractor: every trajectory spirals in to the state of zero energy. (An undamped harmonic oscillator, by contrast, traces a closed orbit in which kinetic and potential energy are continuously exchanged between the two state variables; strictly speaking that orbit is not an attractor, since neighboring trajectories are not drawn toward it.) The Lorenz attractor can be diagrammed in the 3-dimensional space of its state variables x, y, z.
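Here is a minimal sketch of how such a solution can be generated numerically with scipy’s solve_ivp (the time span, initial condition, and tolerances are arbitrary choices):

import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, state):
    x, y, z = state
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

# Integrate from an arbitrary starting point; after a brief transient
# the trajectory settles onto the attractor regardless of where it began.
t = np.linspace(0, 100, 10_000)
sol = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0], t_eval=t,
                rtol=1e-9, atol=1e-9)
x, y, z = sol.y   # 10,000 points on (or converging to) the attractor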

Here is a gallery of different views of 10,000 points of a numerical solution of this set of equations, with the parameters as specified in the last paragraph:

[Figure: several 3-dimensional views of the resulting Lorenz attractor]

The correlation dimension of this system is about 2.05. Topologically, the attractor appears to be very nearly composed of two sheets of spiraling flow; that the dimension is slightly greater than 2 can be seen in the slight depth of each sheet.

Suppose now that we generate an “observer” of the internal state of the Lorenz system by simply adding up the state variables at any time; that is, we define

 o(t) = x(t) + y(t) + z(t)

for such a solution sequence as I’ve graphed above. Suppose we accumulated a long sequence of such samples and then repeated our process of embedding them in spaces of higher and higher dimension. Would we find that this sequence fills the embedding space in the same way a truly random Gaussian process does? Would we find it filling up just some subset of the space, like the uniform probability process? Or would we discover some other behavior?

The answer is that the correlation dimension of this observer sequence turns out to be the same as that of the original state variables. This was first noted by Grassberger and Procaccia in 1982 (see “Measuring the Strangeness of Strange Attractors,” Physica D 9 (1983) 189–208). We could in fact have taken almost any linear combination of the state variables for this exercise, including any one of the state variables by itself.
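To make that concrete, here is a sketch of the Grassberger–Procaccia procedure applied to the observer sequence: delay-embed the scalar o(t), compute the correlation sum C(r) (the fraction of point pairs closer than r), and read the dimension off the slope of \log C(r) versus \log r. The delay, embedding dimension, and fit range below are illustrative rather than tuned values:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.spatial.distance import pdist

# Regenerate the Lorenz trajectory as in the earlier sketch.
def lorenz(t, s):
    x, y, z = s
    return [10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z]

t = np.linspace(0, 100, 10_000)
x, y, z = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0], t_eval=t).y

o = x + y + z   # the scalar observer o(t) = x(t) + y(t) + z(t)

# Time-delay embedding: rebuild a state space from the scalar alone.
delay, m = 10, 5
rows = len(o) - (m - 1) * delay
embedded = np.column_stack([o[i * delay : i * delay + rows] for i in range(m)])
embedded = embedded[::5]   # subsample to keep the pairwise distances manageable

# Correlation sum C(r) over a range of radii within the scaling region.
dists = pdist(embedded)
r_lo, r_hi = np.percentile(dists, [1, 50])
radii = np.logspace(np.log10(r_lo), np.log10(r_hi), 12)
C = np.array([np.mean(dists < r) for r in radii])

# A crude global fit of the slope; a careful estimate would pick out the
# scaling region by inspecting the log-log plot.
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print(f"estimated correlation dimension ~ {slope:.2f}")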

This makes the observation about the scaling exponent of the correlation function of matter in the universe at lengths below about 0.002 H_d (H_d being the Hubble distance) very interesting. It points to an almost sheet-like structure of galaxies, galactic clusters, and super-clusters, which “dissolves” into a uniform (random) distribution of mass at larger scales.

A reasonably compelling explanation for this difference in the distribution of observable matter is the effect of gravitation, acting since the Big Bang on relatively small initial fluctuations in energy density. One process known to yield fractal dimensions of around 1.7 is diffusion-limited aggregation (DLA). A working hypothesis, then, is that the correlation dimension of about 1.77 of observable matter at distances below r_0 reflects a DLA-like process, in which matter diffusing under the local force of gravity aggregated onto initial regions of higher density, producing the observed patterns of galaxies and their clusters and super-clusters. At scales above this clustering distance, a combination of other effects must have prevented DLA from operating, at least so far in time.
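For flavor, here is a minimal sketch of DLA in its classic two-dimensional lattice form, where walkers diffuse at random and stick on first contact with the growing cluster (gravity plays no role in this bare version, and the lattice size, particle count, and launch/escape radii are all arbitrary choices):

import numpy as np

rng = np.random.default_rng(2)

N = 301                          # lattice side (arbitrary)
grid = np.zeros((N, N), dtype=bool)
c = N // 2
grid[c, c] = True                # seed particle at the center
r_max = 1.0                      # current cluster radius

steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

for _ in range(1500):            # particles to aggregate (arbitrary)
    # Launch each walker on a circle just outside the cluster.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    i = int(round(c + (r_max + 5) * np.cos(theta)))
    j = int(round(c + (r_max + 5) * np.sin(theta)))
    while True:
        di, dj = steps[rng.integers(4)]
        i, j = i + di, j + dj
        r = np.hypot(i - c, j - c)
        if r > r_max + 20 or not (0 < i < N - 1 and 0 < j < N - 1):
            break                # walker escaped; move on to the next one
        # Stick on first contact with any occupied neighboring site.
        if grid[i - 1:i + 2, j - 1:j + 2].any():
            grid[i, j] = True
            r_max = max(r_max, r)
            break

Analyzing the occupied sites with the box-counting or correlation-sum machinery sketched earlier should recover a fractal dimension near the 1.7 mentioned above.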

Interesting, no?
