Thanks to the advent of relativity theory, and string theory in recent decades, there's a lot of talk in physics about space having extra, unseen dimensions -- up to 11 spacetime dimensions in one version of string theory! These days, the word "dimension" in physics immediately evokes *Twilight Zone* imagery:

There is a fifth dimension, beyond that which is known to man. It is a dimension as vast as space and as timeless as infinity. It is the middle ground between light and shadow, between science and superstition, and it lies between the pit of man's fears and the summit of his knowledge. This is the dimension of imagination. It is an area which we call the Twilight Zone.

(Fun fact: the introduction to the show changed pretty much every year it was on. See Wikipedia for the text of all the intros!)

The term "dimension", however, has another meaning in physics: a more mundane one, but equally important. This other type of dimension, used in what is known as *dimensional analysis*, has been used to gain surprising insight into difficult physical problems.

In mathematics, one typically studies pure numbers and the rules and relationships that govern them. In physics, however, these numbers typically have *units*. Distance is measured in units such as *meters*, *miles*, or *kilometers*. An interval of time is measured in units of *seconds*, *minutes* and *hours*. Speed is measured in units such as *miles per hour* or *meters per second*.

Though we may measure a quantity in a variety of different units, the overall interpretation of the quantity doesn't change. Meters, miles and kilometers are all measures of length; we say that a distance has the *dimensions of length*, which we denote [L]. Seconds, minutes and hours are all measures of time intervals; we say that time has *dimensions of time*, denoted [T].

This may seem somewhat trivial at first, but becomes significant when more complicated quantities are considered. Speed is always represented as a length divided by a time; the *dimensions of speed* are [L]/[T]. Acceleration *a* is measured in units of meters/second^2; the dimensions of acceleration are

\(a = [L]/[T]^2\).

According to Newton's second law, force is related to acceleration by \(F=ma\), where *m* is mass, with dimension [M]. The dimensions of force are therefore

\(F=[M][L]/[T]^2\).

Work, or energy, is typically expressed as Force times distance; the dimensions of energy are therefore

\(E = [M][L]^2/[T]^2\).
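Dimensional bookkeeping like this is easy to mechanize. As an aside (my own sketch, not part of the original discussion), here is a minimal Python fragment that represents a dimension as a tuple of (mass, length, time) exponents and rebuilds the dimensions of force and energy:

```python
# A dimension as a tuple of exponents: [M]^a [L]^b [T]^c.
MASS   = (1, 0, 0)
LENGTH = (0, 1, 0)
TIME   = (0, 0, 1)

def times(d1, d2):
    """Multiplying two quantities adds their dimension exponents."""
    return tuple(a + b for a, b in zip(d1, d2))

def power(d, n):
    """Raising a quantity to a power scales its exponents."""
    return tuple(n * a for a in d)

speed        = times(LENGTH, power(TIME, -1))  # [L]/[T]
acceleration = times(speed, power(TIME, -1))   # [L]/[T]^2
force        = times(MASS, acceleration)       # [M][L]/[T]^2
energy       = times(force, LENGTH)            # [M][L]^2/[T]^2

print(force)   # (1, 1, -2)
print(energy)  # (1, 2, -2)
```

The exponent tuples are exactly the [M], [L], [T] powers written above; multiplication of quantities becomes addition of tuples.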

So what can we do with these expressions?

First, we note that any physical equation must have the same dimensions on either side of the 'equals' sign. It makes no sense, for instance, to say that 'meter = second', or [L]=[T]. We can use this observation to check our work any time we finish a calculation. For example, suppose we derive an equation for the position *x* of an object that starts from rest at \( x=0\) and is subject to a constant acceleration, and find that

\(\displaystyle x = \frac{1}{2}at^3\),

where *a* is the acceleration and *t* is the time elapsed. Is this equation correct? The left-hand side of the equation has dimensions [L]; the right-hand side of the equation has dimensions

\(\displaystyle at^3 = \frac{[L]}{[T]^2}[T]^3\),

where the factor 1/2 has no effect on the dimension of the expression. Canceling terms, we find that

\(\displaystyle at^3 = [L][T]\).

The right-hand side of the equation has one too many factors of [T]! This isn't necessarily enough information to find the correct equation -- after all, we don't know if we determined the factor 1/2 correctly, either -- but it is a good indicator of where things went wrong.

(The correct equation is \(x = \frac{1}{2}at^2\).)
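This kind of consistency check can even be automated. A quick Python sketch (my own aside, using exponent tuples for the dimensions) that flags the bad equation and passes the good one:

```python
# Dimensions as (mass, length, time) exponent tuples.
POSITION = (0, 1, 0)    # [L]
TIME     = (0, 0, 1)    # [T]
ACCEL    = (0, 1, -2)   # [L]/[T]^2

def times(d1, d2):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(a + b for a, b in zip(d1, d2))

def power(d, n):
    """Raising a quantity to a power scales its exponents."""
    return tuple(n * a for a in d)

# Dimensionless factors like 1/2 drop out of the check entirely.
bad  = times(ACCEL, power(TIME, 3))   # dims of (1/2) a t^3
good = times(ACCEL, power(TIME, 2))   # dims of (1/2) a t^2

assert bad != POSITION    # one too many factors of [T]
assert good == POSITION   # dimensionally consistent
```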

The use of dimensional analysis to check one's math is a very useful, but mundane application. What is surprising and even spectacular, however, is that dimensional arguments can be used in some cases to gain a basic understanding of extremely complicated and otherwise intractable physical problems!

Let's consider a relatively simple example*: determining a formula for the lift force of a wing, be it the wing of an airplane or the wing of a bird! In general, the physics of flight is a rather complicated subject: broadly speaking, air flowing over the top and the bottom of the wing moves at different speeds, producing an unequal pressure over the surfaces which results in an upward force, known as the lift.

Broadly speaking, the lift \(L\) of a wing depends upon the surface area \(A\) of the wing, the airspeed \(v\) of the craft, and the average density \(\rho\) of the fluid. These quantities have dimensions

\(\displaystyle L = \frac{[M][L]}{[T]^2}\),

\(\displaystyle A = [L]^2\),

\(\displaystyle v = [L]/[T]\),

\(\displaystyle \rho = [M]/[L]^3\).

If \(A\), \(v\), and \(\rho\) are the only quantities which the lift depends upon, it is reasonable to expect that the lift must be expressible by an equation of the form,

\(\displaystyle L = A^\alpha v^\beta \rho^\gamma\),

where \(\alpha, \beta, \gamma\) are constants to be determined. If we substitute our dimensional forms into this equation, we find that

\(\displaystyle \frac{[M][L]}{[T]^2}=[L]^{2\alpha+\beta-3\gamma}[T]^{-\beta}[M]^\gamma\).

To make the dimensions the same on both sides of this equation, we must have

\(\displaystyle 2\alpha+\beta-3\gamma = 1\),

\(\displaystyle \beta = 2\),

\(\displaystyle \gamma = 1\),

which results in \(\alpha = 1\).
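The three conditions above form a small linear system, and solving it can be left to a computer. A Python sketch (my own aside; `solve_exponents` is a hypothetical helper, not from the original post) that solves for the exponents exactly:

```python
from fractions import Fraction

def solve_exponents(A, b):
    """Solve the square linear system A x = b exactly by Gaussian elimination."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)   # find a pivot row
        M[i], M[p] = M[p], M[i]
        M[i] = [v / M[i][i] for v in M[i]]                 # normalize the pivot
        for r in range(n):
            if r != i:
                M[r] = [v - M[r][i] * w for v, w in zip(M[r], M[i])]
    return [row[-1] for row in M]

# Rows are the [L], [T], [M] conditions on (alpha, beta, gamma):
A = [[2,  1, -3],   # [L]: 2*alpha + beta - 3*gamma = 1
     [0, -1,  0],   # [T]:          -beta           = -2
     [0,  0,  1]]   # [M]:                  gamma   = 1
b = [1, -2, 1]

exponents = solve_exponents(A, b)  # alpha = 1, beta = 2, gamma = 1
```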

This implies that the lift force on a wing must have the functional form,

\(\displaystyle L = C_L A \rho v^2\),

which is exactly what one finds in practice! The number \(C_L\) is generally referred to as the lift coefficient, which depends upon the finer details of the wing and the fluid. There is a wonderful plot which shows that this relationship between airspeed and lift holds for a wide variety of aircraft and animals, and can be seen on page 14 of this book.

Such dimensional analysis doesn't always work; it is possible that the "true" formula for an effect may be far more complicated than these simple techniques can provide. It is also possible that dimensional analysis may not provide a unique equation to determine a phenomenon. When it does work, however, it produces results that are quite amazing.

Another example that I am very familiar with involves the properties of atmospheric turbulence. We've all seen an extreme version of this turbulence on hot days in the form of "heat shimmer", when heat off the pavement creates inhomogeneities in the atmosphere.

A simple model of atmospheric turbulence is as follows.

A mass *M* of air, confined to a box, is illuminated by the sun, which adds a certain amount of energy/second to the system. This energy creates 'turbulence cells', regions of more or less uniform atmospheric properties. As time evolves, these cells break down due to viscous forces into smaller and smaller cells, until they reach a size at which they break up completely and their energy is dissipated; the rate of energy dissipation (energy/second lost) is called *P*.

Clearly this is a very complicated problem, and it is in fact very difficult to predict useful properties of turbulence. A fundamental breakthrough was made in 1941 by Andrey Kolmogorov, who used dimensional analysis to estimate the average behavior of the velocity in such a medium. The quantity of interest is the velocity correlation function, which may be written as

\(\displaystyle C_v = \langle |{\bf v}(r)-{\bf v}_0|^2\rangle\).

To a non-mathematician, this is a very cryptic-looking formula, but what it quantifies is in principle really quite simple. Suppose one measures the average velocity vector \({\bf v}_0\) of the atmosphere at a central point, and measures the velocity vector \({\bf v}(r)\) for points located at a distance \(r\) away from that central point. We square the magnitude of the difference of these two vectors, and look at the average value (averaging denoted by \(\langle\,\rangle\)) over time. The result is a quantity with dimensions of velocity squared,

\(C_v = [L]^2/[T]^2\),

and which is a measure of how the velocity at the central point is related, on average, to the velocity at a point a distance \(r\) away.

Kolmogorov viewed the problem as follows (though I'm modifying the argument a bit for clarity). To a good approximation, the velocity correlation function depends upon three things: the spatial difference \(r\) between points, the mass of the atmosphere \(M\) in our 'box of air', and the amount of energy/second dissipated from the turbulence, \(P\). These three quantities have dimensions,

\( r = [L]\),

\(M = [M]\),

\(\displaystyle P = \frac{[M][L]^2}{[T]^3}\).

We should be able to construct a function with the dimensions of \(C_v\) from \(r\), \(M\), \(P\), of the form

\(\displaystyle C_v = r^\alpha M^\beta P^\gamma\).

By comparing dimensions, one can show that the only possible combination is

\(\displaystyle C_v = r^{2/3} M^{-2/3} P^{2/3}\).
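The exponent system here can be checked the same way; here is a quick Python sketch (my own aside, with a small hand-rolled exact solver) matching dimensions of \(r^\alpha M^\beta P^\gamma\) against \([L]^2/[T]^2\):

```python
from fractions import Fraction

def solve_exponents(A, b):
    """Solve the square linear system A x = b exactly by Gaussian elimination."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)   # find a pivot row
        M[i], M[p] = M[p], M[i]
        M[i] = [v / M[i][i] for v in M[i]]                 # normalize the pivot
        for r in range(n):
            if r != i:
                M[r] = [v - M[r][i] * w for v, w in zip(M[r], M[i])]
    return [row[-1] for row in M]

# Rows are the [L], [M], [T] conditions on (alpha, beta, gamma):
A = [[1, 0,  2],   # [L]: alpha + 2*gamma = 2
     [0, 1,  1],   # [M]:  beta +   gamma = 0
     [0, 0, -3]]   # [T]:        -3*gamma = -2
b = [2, 0, -2]

exponents = solve_exponents(A, b)  # alpha = 2/3, beta = -2/3, gamma = 2/3
```

Note the exponents are fractional here, which is why the solver works with exact `Fraction` arithmetic rather than floats.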

The factor \(r^{2/3}\) is the important one: apparently the velocity correlation increases as \(r\) to the 2/3 power! This result, which may be written in a number of forms, has been confirmed by experiment and forms the basis of theories involving light propagation through atmospheric turbulence. Though the result is by no means perfect, it is amazing how well it actually works: one author, in reviewing the topic, noted**, "Kolmogorov's 1941 theory has achieved an embarrassment of success."

The final example is one that is little known, to my knowledge, but perhaps one of the most amazing: the use of dimensional analysis in 1906 to anticipate the need for a new theory of physics to explain the structure of the atom!

By the early 1900s, scientists had realized that something strange was happening inside the atom. Measurements of the spectrum of sunlight, as originally done by Joseph Fraunhofer, showed a continuous spectrum punctuated by isolated discrete spectral lines where no light is present.

These are *absorption* lines of atoms; a close look at the *emission* spectrum of atoms such as hydrogen shows that the atoms also only emit light at such special, discrete frequencies; a spectrograph of the visible lines of hydrogen is shown below (via Wikipedia).

In 1884, Johann Balmer matched a mathematical equation to the known spectrum of hydrogen, and the series of spectral lines given by this formula became known as the *Balmer series*. Somehow, the structure of the atom results in very specific frequencies of light being emitted by atoms; as you may have read somewhere, many researchers tried to use existing electromagnetic and mechanical theories to explain the origin of these frequencies.

In 1906, James Hopwood Jeans published an article in *Philosophical Magazine* entitled, "On the constitution of the atom." Jeans noted the following:

Lord Rayleigh states an objection against regarding the atom as a system in steady orbital motion, rather than as one performing small oscillations about a position of statical equilibrium, -- namely, that the sharpness of spectral lines indicates a definiteness of structure such as it is difficult to imagine associated with a system of electrons in orbital motion. He goes on to say: "It is possible, however, that the conditions of stability or of exemption from radiation may after all really demand this definiteness... The frequencies observed in the spectrum may... form an essential part of the original constitution of the atom as determined by conditions of stability."

If this were so, these frequencies would depend only on the constituents of the atom and not on the actual type of motion taking place in the atom. Thus if we regard the atom as made up of point-charges influencing one another according to the usual electrodynamical laws, the frequencies could depend only on the number, masses, and charges of the point-charges and on the aether-constant V. What I wish to point out is that it is impossible, by combining these quantities in any way, to obtain a quantity of the physical dimensions of frequency.

In short, Jeans noted that in existing electromagnetic theory, the only fundamental properties of the atom are mass, charge, and the "aether-constant" V (what we now call the permittivity \(\epsilon\)). These quantities have dimensions

\(m = [M]\),

\(q = [Q]\),

\(\epsilon = \frac{[Q]^2[T]^2}{[M][L]^3}\).

Jeans noted that there isn't any way to combine these quantities to get a frequency \(\omega\), with dimensions

\(\omega = 1/[T]\)!

This strongly suggests that the discrete spectral lines require some sort of new physics. Jeans himself ended up going in the wrong direction:

It seems, then, that we must somehow introduce new quantities -- electrons must be regarded as something more complex than point-charges. And when we have once been driven to surrendering the simplicity of the point-charge view of the electron, is there any longer any objection to putting the most obvious interpretation on the line-spectrum, and regarding its frequencies as those of isochronous vibrations about a position of statical equilibrium?

Jeans suggested that, by introducing a length [L] which represents the size of the electron, one could then derive a frequency \(omega\) from mass, charge, aether constant, and electron size. This works, but is not the right resolution of the missing frequency.

What is the missing piece of the puzzle? Quantum mechanics is generally characterized by the presence of the fundamental constant known as Planck's constant, written \(\hbar\), and which has dimensions

\(\displaystyle \hbar = \frac{[M][L]^2}{[T]}\).

Let's see if we can derive a frequency from mass, charge, aether constant, and Planck's constant! We are looking for a formula of the form

\(\displaystyle \frac{1}{\omega} = \epsilon^\alpha \hbar^\beta q^\gamma m^\delta\).

On substituting the dimensions of all the relevant quantities, we find that we must satisfy

\(\displaystyle [T] = [Q]^{2\alpha +\gamma} [T]^{2\alpha-\beta}[M]^{-\alpha+\beta+\delta}[L]^{-3\alpha+2\beta}\).

Solving for the exponents, we find \(\alpha = 2\), \(\beta = 3\), \(\gamma = -4\), and \(\delta = -1\). Our frequency must be expressible in the form

\(\displaystyle \omega = \frac{q^4 m}{\epsilon^2\hbar^3}\).
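The four exponent conditions can again be handed to a tiny exact solver; a Python sketch (my own aside, with a hand-rolled Gaussian elimination helper):

```python
from fractions import Fraction

def solve_exponents(A, b):
    """Solve the square linear system A x = b exactly by Gaussian elimination."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)   # find a pivot row
        M[i], M[p] = M[p], M[i]
        M[i] = [v / M[i][i] for v in M[i]]                 # normalize the pivot
        for r in range(n):
            if r != i:
                M[r] = [v - M[r][i] * w for v, w in zip(M[r], M[i])]
    return [row[-1] for row in M]

# Rows are the [Q], [T], [M], [L] conditions on (alpha, beta, gamma, delta):
A = [[ 2,  0, 1, 0],   # [Q]:  2*alpha + gamma          = 0
     [ 2, -1, 0, 0],   # [T]:  2*alpha - beta           = 1
     [-1,  1, 0, 1],   # [M]: -alpha + beta + delta     = 0
     [-3,  2, 0, 0]]   # [L]: -3*alpha + 2*beta         = 0
b = [0, 1, 0, 0]

exponents = solve_exponents(A, b)  # alpha = 2, beta = 3, gamma = -4, delta = -1
```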

According to quantum mechanics, the frequency of oscillation of an electron in the hydrogen atom in its ground state is given by

\(\displaystyle \omega = -\frac{e^4 m_e}{8\epsilon^2\hbar^3}\),

where \(e\) is the charge of the electron and \(m_e\) is the mass of the electron. This formula is almost exactly of the form of the frequency which we derived by dimensional analysis, only differing by a constant factor 8!*** Though Jeans drew the wrong conclusion from his dimensional analysis, it is remarkable that it can be used, with the proper constants, to derive the fundamental frequency of the hydrogen atom!
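As a rough numerical aside (my own, not in Jeans's paper): plugging modern SI values into the dimensionally-derived combination shows that it does indeed set the atomic frequency scale, up to dimensionless prefactors like the 8 above and powers of \(2\pi\) from the \(h\)-versus-\(\hbar\) convention:

```python
# CODATA SI values (an assumption of this sketch, not part of the derivation).
e    = 1.602176634e-19   # electron charge, C
m_e  = 9.1093837015e-31  # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s

# The combination q^4 m / (eps^2 hbar^3) fixed by dimensional analysis.
omega_scale = e**4 * m_e / (eps0**2 * hbar**3)
print(f"{omega_scale:.2e} 1/s")  # ~6.5e18 1/s: the right ballpark for atomic
                                 # frequencies once the dimensionless prefactors
                                 # are accounted for
```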

Hopefully these examples illustrate that powerful and general results in physics can sometimes be derived with a minimal amount of mathematical effort!

*A final thought*: Planck's constant was first determined in 1901, and in 1905 Einstein used it in his explanation of the photoelectric effect. It was entirely possible in 1906 for Jeans to perform the analysis mentioned above, which would have resulted in him deriving a characteristic frequency for the atom very close to the experimentally observed value (within a factor of 8)! This may be considered a fascinating "what if?" of physics: if Jeans had applied his dimensional analysis and included Planck's constant, the history of quantum mechanics might have looked very different.

**************************************

* This example is derived from the nice textbook *A Guided Tour of Mathematical Methods*.

** From R.H. Kraichnan, "On Kolmogorov's inertial-range theories," *J. Fluid Mech.* 62 (1974), 305-330.

*** I just derived this result for the first time while writing this post, and even I'm quite stunned that it worked!

I'm actually shocked that a theorist would like dimensions so much, considering that, in my experience, as a group they tend to make everything unitless or let's make everything have the same units. Let's not even start on the abomination that is the cgs electromagnetic units. Doesn't anyone have a feel for a dyne?

In conclusion, I think my favorite bit of units magic is when calculating the thermal frequency noise of a harmonic oscillator, where by units analysis it appears to be just Hz, but it's really Hz^2/Hz.

"...they tend to make everything unitless or let's make everything have the same units." That comment made me smile 🙂 I agree that the feel of the physics involved may be lost when trying to put everything adimensional; but still as a theorist myself, once I understand what's going on I find it useful at times to use some dimensionless parameter to make a model more general.

IM: Yep; as Mike notes below, dimensionless parameters can be used to not only make a model more general but also give a unitless measure of what constitutes 'large' and 'small' in a problem. This can be very important in making simplifying approximations...

Grad: Dimensional analysis can often be the theorist's best friend, as these examples hopefully illustrate. The problems described here can't readily be solved by any rigorous mathematical formalism, but nice results can be found by dimensional arguments.

I'm not really against any particular system of units -- each of them has evolved for the convenience of a specific collection of researchers. In high-energy physics, where much of the time the end result of the calculation is a dimensionless branching fraction, dimensionless units save a lot of headaches. In theoretical optics, I find cgs units to be sanity-saving for long calculations. As long as a set is well-defined, it isn't that much of a headache.

You're an *optical* theorist and not a high energy theorist, otherwise you'd only work in units where c=hbar=1 and time has dimensions of length. Or something. I got so confused by the different "simplifying" unit conventions used in my different classes that I gave up on figuring out which simplifications were consistent with one another and decided to stick to MKS units. Sure, it makes for some uglier numerical calculations, but that's why God (oops, I mean Wolfram) gave us Mathematica. Plus, MKS are my initials.

Incidentally, I can remember being a senior in high school and suddenly deciding to do a unit analysis on E=mc^2. When I realized it worked, I wondered why no one came up with that equation *before* Einstein. It was so obvious! Later it dawned on me that they were perfectly happy with E=.5mv^2, which does, after all, have exactly the same units...

Mary: What's with the anger at high-energy these days? 🙂 I actually jump between MKS and CGS from time to time, depending on what I'm trying to calculate. If I'm interested only in electromagnetic wave propagation, then CGS is most elegant; if I'm trying to actually calculate physical forces/momenta with my results, MKS works.

Incidentally, I'll have more to say on units in the future; the subject is more fascinating than one might think. Unfortunately, I first have about 500 pages of background reading to do before I feel comfortable writing on the subject...

I'll also point out that making things unitless gives you an unambiguous definition of "small." Once everything is unitless, "small" means much much less than one, and that's that. This can be useful in figuring out what effects might be important or not, and this information might not be immediately clear from an equation with units.

Mike: Very good point! As I'm sure you're aware, such definitions of 'small' and 'large' are significant in using perturbation theory and asymptotic analysis.

This technique recently helped me find an expression for the "speed of sound" in a system of nonlinear conservation equations. Since I knew the primitive variables' dimensionalities, I could show that the speed of sound was probably proportional to sqrt(pressure / density).

Of course, right after deriving this, I realized I could have looked up the exact same expression in any fluid dynamics textbook, since my primitive variables (density, velocity, and pressure) were identical 🙂

Wade: Your story reminds me of a number of times I've "rediscovered" important results. I tried to console myself with the idea that, although I hadn't found anything new, I was at least as clever as the person who first made that result!

Just curious here. Are there any experimental proofs for the existence of one- and two-dimensional 'systems' in our 3D + time's arrow?

Naively imagining a two-dimensional object as observed in SpaceTime, I come to the conclusion that from some angles it will exist, from others it won't, as it will have, for example, a length and a width but no height.

And if it doesn't exist as proof-able experiments would it be wrong looking at it as 'emergences' instead? Which then won't exclude different 'dimensionalities' as there easily could be 'bubbles' consisting of more and possibly even fewer dimensions too, but not of the 'copy & paste' variety. They would all be 'self consistent' systems if so as I see it, and 'whole objects', like our SpaceTime seems to be to me?

Any views of that?

Rereading myself, I mean that as I see it mathematics today seems to treat 'dimensions' as singular objects. I haven't seen any real proof for that view though. If one considered them emergences they would 'appear' as objects interwoven, which to me would suit the SpaceTime I see, very 'plastic'. And then perhaps the real objective wouldn't be to define further 'forces' and 'smaller constituents' as much as to explain the 'symmetries' coming into existence.

And I can't see how it would invalidate what we already are doing and 'knowing' either. It's not as much a trial to exchange parameters as to 'reinvent them', if you see what I mean? As everything would be a 'mirror explanation', it seems to me, not so much invalidating what we already know as trying to look at it from a new aspect.

Don't know if this made any sense, one can try though 🙂