The New Physics
Entries by Doug (79)
Dimensions, Units and Coordinate Systems
In the previous post, I considered deriving a natural unit of motion from the Rydberg frequency for hydrogen, as a basis for clarifying the distinction between dimensionless and dimensionful constants that John Baez has been discussing on his blog. Of course, this is exactly what Larson did. Using his approach, we can define a “natural” unit of length and time in terms of a selected system of units, whether it is a system of scientific units, or a system of bananas.
However, it’s also clear that the dimensions of the space aspect of the natural unit of motion defined in this manner are not fixed at 1; that is, we can think of an increase of area, or of volume, over time, as motion, just as easily as we can think of an increase of length, as motion.
Logically, therefore, it’s just one more small step to formulate the equation of motion as the ratio of two dimensionless scalars. However, while we can conceive of squared or cubed meters, we can’t conceive of squared or cubed bananas, even though we can square, or cube, the NUMBER of any quantity of bananas. So, in a sense, there is an isomorphism between the dimensions of space and the dimensions of numbers, even though we are not accustomed to thinking of it that way.
The problem we run into is in dealing with unity as a number. While (2 meters)² and (2 bananas)² are both equal to 4, four square meters and four bananas mean different things, because the dimensions of one square meter are two, but the dimensions of one “square” banana are zero. Yet, if space is an “amount of substance”, why are the dimensions of square space (in meters, yards, or furlongs) two and not zero? In other words, if we can’t conceive of square bananas, how can we conceive of square meters, if we are regarding space as an “amount of substance?”
The reason is reflected in one of John’s statements, posted in the discussion on his blog, explaining the concept of passive coordinate transformations:
We also use [passive coordinate transformations] when we switch from a system of units to a simplified system of units. For example, the SI system has seven units, which measure length, mass, time, current, temperature, amount of substance, and luminous intensity. So, any physical quantity gives a point in - the “dimensions” of this quantity in the SI system. But, we may choose to work with fewer units. For example, we may decide not to treat “amount of substance” as a dimensionful quantity. SI measures amount of substance in moles, but we can say a mole is just 6.0221415(10) × 10²³ - a dimensionless number, Avogadro’s number. Our new system of units assigns to each physical quantity a point in , and we have a “change of units”
which is not an isomorphism. The ultimate extreme is to work in a system where all physical quantities are treated as dimensionless, so any physical quantity has dimensions living in . This is actually very popular in fundamental theoretical physics.
Therefore, even if it’s the ultimate extreme, if we regard the new natural unit of motion as a physical entity, John is saying that we can treat it with dimensions in , which is what we have also concluded, though not as formally and certainly not for the same reason!
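Baez’s “change of units” can be made concrete by representing a quantity’s dimensions as integer exponents over the base units, with the passage to fewer units amounting to dropping components. A minimal sketch in Python, with names of my own choosing:

```python
# Dimensions as integer exponent vectors over the SI base units.
# Dropping a unit (treating it as dimensionless) is Baez's non-isomorphic
# "change of units": distinct dimensions can collapse to the same point.
force = {"M": 1, "L": 1, "T": -2}      # dimensions of force, MLT^-2
amount = {"N": 1}                      # amount of substance, in moles

def drop_unit(dims, unit):
    # Treat `unit` as a pure number, as with moles -> Avogadro's number
    return {u: p for u, p in dims.items() if u != unit}

assert drop_unit(amount, "N") == {}    # now dimensionless: the map loses info
assert drop_unit(force, "N") == force  # force is untouched by this choice
```

The “ultimate extreme” drops every unit, sending all dimensions to the empty, dimensionless point.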
But there is another “non-isomorphism” (sorry John, there’s just no way I can speak in the language of categories yet) that is found between the left and right side of the binomial expansion. On the left side, all the ones are in , but on the right side, all the ones are in , and when n = 3, the octonions, with dimensions , the binomial expansion is a map from a point to a volume, or from a zero-dimensional scalar to a three-dimensional pseudoscalar.
Indeed, whenever n > 0, is mapped to , and, I guess, when n = 0, is mapped to itself. Interestingly, however, the arrow in John’s passive transformation goes in the other direction, from larger to smaller.
So, if we start with the larger, assuming n = 3, the dimensions of the natural unit of motion that we have defined would not be zero, but three, the dimensions of the pseudoscalar in the octonions. Thus, the equation of motion would be
as strange as this might seem to many, at first.
As John has explained in “This Week’s Finds” (I forget which one), the octonions are considered the “crazy uncles” of the math world and kept in the attic, because they are non-associative, yet they are mysteriously related to the Bott periodicity theorem, and some physicists are convinced that they play a key role in physics.
Well, if we define a natural unit of motion that has the dimensions of the octonions, and if we recall that the definition of motion is the heart and soul of physics, that should provide some motivation for taking the concept of a natural unit of motion seriously. However, it’s the mathematics that will have to be our guide, and here is the way we can think about it:
1) In general, the passive transformation that we are interested in is from to . Specifically, at n = 3, because BPT proves that there are no new phenomena beyond n = 3.
2) When we do this, we have to incorporate a view of unity that embraces the passive transformation; that is, the natural unit of motion takes center stage. This means that we replace the quantity one, with the ratio 1, and then the expression:
in the binomial expansion, changes meaning, and, consequently, the ones down the left side of the expansion (the scalars) are
,
and the ones down the right side (the pseudoscalars) are
which seems to be a mathematically meaningless distinction, but the point is, it is far from meaningless, as we will see below.
3) The displacement of direction (n-dimensional rotation) that normally forms the coordinate system of vectorial units, which we can map to “all possible coordinate charts” (i.e. higher, lower dimensional units), at a given value of n, is replaced by the displacement of “direction” (“n-dimensional” expansion) that forms a new coordinate system of higher, or lower, “n-dimensional” scalar units, which we want to map to all possible coordinate charts, at a given value of n.
How does this work?
At n = 0, there are no coordinate charts to work with (), but at n = 1 there are two (); at n = 2 there are four (), and at n = 3, there are eight ().
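These chart counts are just the rows of the binomial expansion, summing to 2ⁿ, which a few lines of Python can confirm:

```python
from math import comb

# The number of k-dimensional units at each n is a binomial coefficient;
# the total number of "coordinate charts" at n is their sum, 2**n.
for n in range(4):
    row = [comb(n, k) for k in range(n + 1)]
    assert sum(row) == 2 ** n
    print(n, row, sum(row))   # e.g. n = 3 gives [1, 3, 3, 1], total 8
```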
The binomial mapping is possible, because, by defining the unit value as a ratio of proportions, there are two “directions” in each dimension, n/1 and 1/n, which together, relative to 1/1, form a “bidirectional” field of scalar values.
So, applying the expansion of this scalar “bidirection” at n = 1, instead of the rotation of a vectorial direction, which we normally use imaginary numbers to define, we have a new coordinate chart, the 1/n and n/1 scalars, which is isomorphic to the positive and negative integers.
However, n=1 also includes n=0, because . So we have two charts, the original scalar and the coordinate chart (signed integers). At n=2, we have four charts, the original scalar, two sets of signed integers, and … what? How do we describe the pseudoscalar at n = 2? For that matter, how do we talk about two sets of signed integers?
The key is in understanding the role of the scalar, which we have now placed in the middle of our 1D coordinate system. The scalar coordinate system of rational units makes no more sense without the reference point of unity, than the signed integers do without the reference point of zero.
So, the n=1 expansion must include the unit scalar in the pseudoscalar; that is, it is isomorphic to a one-dimensional “line,” described not from point to point, but from the center to one point and from the center to the opposite point. Numerically, the expression
becomes a “line” magnitude, where unity is “displaced” in two “directions,” which we can designate positive and negative, but together they form a composite scalar number similar to the way in which the reals and imaginaries are combined into a composite number that is a number of higher dimension than the reals alone.
Now, let me draw the obvious conclusion, and then I’ll end this: The square of the composite scalar number above, which is composed of and , is in , and its cube is in , which means that all three, , , and are contained in . Yet, it doesn’t matter as long as
It’s not like those nested groups and n-spheres that Baez and company talk about, because the dimensions of scalars are always zero. Yet, while that’s true in the usual sense, it’s also obvious that the square of a scalar “line” magnitude, as defined in the composite number above, is isomorphic to an “area” magnitude, and such a “line” magnitude cubed is isomorphic to a “volume” magnitude, in this new sense. In other words, given
,
then,
,
which is now a value in , and if we cube it,
,
we get a value in , which contains copies of all the lower dimensional values, just as the octonions contain points, lines, and areas in the pseudoscalar. Only now, the algebra of these “multi-dimensional” pseudoscalars has to have the ordered, commutative, and associative properties of the zero dimensional scalars!
In other words, because we’ve defined the scalar value of 4/4 as the natural unit of motion, but in the form of a 1D magnitude (a “line” magnitude), the square of this unit is (4/4)² = 16/16 natural units of motion, or four of the 1D magnitudes (4 “line” magnitudes), which are the minimum required to form a square around a center point. Its cube is (4/4)³ = 64/64, or 16 of the 1D magnitudes (16 “line” magnitudes) of motion, which are the minimum required to form a cube with a center plane.
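The arithmetic here can be checked with a short sketch, keeping the numerator and denominator of the ratio separate (the function names are mine, purely for illustration):

```python
from fractions import Fraction  # only for the sanity check that the ratio stays 1

def square_ratio(n, d):
    # Square numerator and denominator separately, keeping both "directions"
    return (n * n, d * d)

def cube_ratio(n, d):
    return (n ** 3, d ** 3)

unit = (4, 4)                    # the natural unit of motion, as a 1D "line" magnitude
sq = square_ratio(*unit)         # (16, 16)
cb = cube_ratio(*unit)           # (64, 64)

# The scalar value never changes...
assert Fraction(*unit) == Fraction(*sq) == Fraction(*cb) == 1

# ...but the number of 1D "line" magnitudes (4/4 units) grows:
lines_in_square = sq[0] // unit[0]   # 16/4 = 4 "line" magnitudes
lines_in_cube = cb[0] // unit[0]     # 64/4 = 16 "line" magnitudes
print(lines_in_square, lines_in_cube)
```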
There is much more to say about this idea, but the point I wish to make here is that this is a system of units that is like a system of coordinates, but it is not a system of coordinates. It’s a system that lets us describe the reciprocal aspects of the reality of motion, space and time, using numbers, but without depending on our arbitrary choices of units.
(more later)
Dimensional Analysis
There was a discussion last month, on John Baez’s blog, entitled dimensional analysis, which Baez characterizes as one of several “gnarly issues” he wants to discuss. “They’re ‘gnarly’,” he says, “not because they’re technical, but because they involve slippery concepts. Their clarification may require not so much hard calculations as patient, careful thought.” He begins by observing:
It’s common in physics to assign quantities “dimensions” built by multiplying powers of mass (M), length (L) and time (T). For example, force has dimensions MLT⁻². Keeping track of these dimensions can be a powerful tool for avoiding mistakes and even solving problems.
This raises some questions:
* What’s so special about mass, length and time? Do we have to use three dimensions? No - we often use fewer, and sometimes it’s good to use more. But is there something inherent in physics that makes this choice useful?
* What’s the special role of dimensionless quantities - those with dimensions M⁰L⁰T⁰? In what sense is a dimensionless quantity like the fine structure constant more fundamental than a dimensionful one like the speed of light?

I thought I had these pretty much figured out, until Vera Kehrli pointed out two things that surprised me:
* Dimensionless constants often depend on our choice of units.
* Dimensionful constants often don’t depend on our choice of units.
For example, the speed of light is
c=299,792,458m/s
Here a meter, m, has dimension L. A second, s, has dimension T. The speed of light, c, has dimensions LT⁻¹. To make the dimensions match, it follows that the number 299,792,458 must be dimensionless.

Now suppose someone comes and changes our units. Say they redefine the meter to be twice as long as it had been. Then m doubles and the number 299,792,458 gets halved, keeping c the same. So we see:
* The dimensionless constant C depends on our choice of units. Of course this number is what it is, regardless of our units. But if we say
c=Cm/s
then the dimensionless quantity C depends on the definition of m and s.
* The dimensionful constant c does not depend on our choice of units. If we double m, we halve C, but c stays the same.
All perfectly trivial - yet physicists like to run around saying the fine structure constant is more fundamental than the speed of light because it’s dimensionless and therefore doesn’t depend on our choice of units! They mean something sensible by this, but what they mean is not what they’re saying.
It’s good to compare two examples:
The fine structure constant:

α = e²/4πε₀ℏc ≃ 1/137.036

is a dimensionless quantity built from quantities that seem very fundamental - the electron charge −e, the permittivity of the vacuum ε₀, Planck’s constant ℏ and the speed of light c. (Ultimately, Benjamin Franklin is responsible for the conventions that make the electron charge be called −e instead of e. But that’s another story.)

The speed of light in meters per second:

C = c/(m/s) = 299,792,458

is also dimensionless, but it’s built from quantities that seem less fundamental. c seems fundamental, but m and s seem less so. After all, the definition of a second is “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of a caesium-133 atom at rest”. Like the speed of light, the value of this quantity is a fact about physics - but a more complicated fact. The length of the standard meter rod in Paris is an even more complicated fact, which has the disadvantage of being tied to a specific artifact! With this definition of m, the dimensionless quantity C tells us something funny about the universe: something about how the speed of light, the frequency of a specific kind of light emitted by caesium, and the length of the meter rod in Paris are related. It’s a bit like how α tells us some relationship between the electron charge, the permittivity of the vacuum, Planck’s constant and the speed of light - but it seems less “fundamental”, whatever that means.

But the definition of a meter no longer involves a rod in Paris - that’s obsolete; I mentioned it just to illustrate a point. The current definition says a meter is “1/299,792,458 times the distance light in a vacuum travels in one second”. And this makes a different point. Again the value of this quantity is a fact about physics - we could radio an alien civilization the definition of a meter, and if they knew enough physics, including the definition of a second, they could build a rod the right length. But with this definition of m, the dimensionless quantity C = c/(m/s) seems to tell us nothing about our universe!

(Actually it tells us some funny blend of information about the speed of light and the definition of m and s.)
One might argue that C is less fundamental than α because we could get any value of C by changing our definitions of m and s. But that can’t be the whole point, since we could also get any value of α by changing our definitions of e,ε0,ℏ and c. So, there must be some other reason why α seems important and C seems completely silly. What’s going on, exactly?
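Baez’s meter-doubling example can be checked with a few lines of Python (the function name and setup are mine, purely for illustration):

```python
# Sketch of Baez's example: the physical speed c is invariant, while the
# dimensionless number C = c/(m/s) depends on how the meter is defined.
c_in_old_units = 299_792_458.0     # numeric part of c with the standard meter

def C(meter_scale):
    # Numeric part of c when the meter is redefined to be
    # `meter_scale` old meters long (hypothetical redefinition).
    return c_in_old_units / meter_scale

assert C(1.0) == 299_792_458.0     # standard meter
assert C(2.0) == 149_896_229.0     # double the meter: C halves, c unchanged
print(C(2.0))
```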
Most of the ensuing discussion devolved into a discussion of the meaning of units, and the confusion that results from assigning units to coordinates per se, when it’s the change of coordinates, or the difference between them, that is the only meaningful concept of multi-dimensional magnitudes. The trouble is, it seems necessary to assign units to the coordinates in order to work in a meaningful way with the vectorial concepts, but the dimensions of the units assigned to the coordinates get confused with the dimensions of the physical magnitudes involved. In one post, Baez described the confusion in a detailed example. Here’s part of that comment:
I always get confused about this when I try to solve a GR problem with the help of dimensional analysis. I wind up spending so much time analysing the basic issues that I get bogged down before I reap the rewards of this work! So then I give up and act like a mathematician where everything is dimensionless… and then the next time this situation comes up, I’ve forgotten what I learned before.
So in fact, I need to start from scratch again here. Let me relive my blunderings in public - it could be educational.
When I see a coordinate like xⁱ my instant gut feeling is to assign this units of length. But then I think “diffeomorphism invariance” and imagine a general coordinate transformation

yⁱ = fⁱ(x¹, …, xⁿ)
so I think “oh-oh, it’s forbidden to apply an arbitrary smooth function to a dimensionful quantity!” So then I start wanting the coordinates to be dimensionless.
Then this desire gets heightened when I remember that the metric is what takes tangent vectors and spits out lengths (actually squares of lengths). So, I want to pack units of length² into my metric tensor somehow.
But then I think: the metric tensor is an element of S²T*M, the symmetric square of the cotangent bundle. If this has units of length², I must want cotangent vectors to have units of length.
And then I think: no, it’s not the metric tensor g that has units of length², it’s when we apply this tensor to a pair of tangent vectors, say v and w, that we get something with dimensions of length², namely g(v,w).
So where do I put the units of length? Do I put two of them in g, or one in v and one in w? If I do the latter, I’m saying tangent vectors have units of length. But a minute ago I was wanting cotangent vectors to have units of length!
How can I be so confused? I’m supposed to know something about physics, but apparently I don’t even know if tangent vectors or cotangent vectors have units of length!
Although applying the concepts of units, dimensions, and directions, used in scalar science, would seem to be useful in vectorial science, it is not possible for LST physicists to see it clearly, unless they grasp the idea that all physical entities consist of motion, combinations of motion, or relations between motion, and that motion exists in three dimensions, with two, reciprocal, aspects, space and time.
If I had the chance to converse with these guys, I would start by asking an unusual question: “Is it possible to measure the dimensions of length or time independently?” I’ve asked this question many times, and have never received a positive answer to date, because to measure length always requires motion; that is, we have to move a measuring rod into place, counting the units of measure as we do so, or after we do so; or else we have to send a sound wave, or a light wave, of known velocity, and measure the time of travel along the distance to derive the length, or devise some other way, but, the point is, any method of measuring length or time that we can conceive involves measuring motion.
Since this is the case, then, logically, when we measure distance, or time, we are actually measuring the past motion that previously separated given locations in space and time, in a sense. If this is true, then is it not also true that we cannot regard length, or time, as independent physical entities? That is to say, if we are to be logically consistent in our reasoning, space and time should have no physical meaning outside the meaning they have as the reciprocal aspects of motion.
Hence, when we select a system of units of length and time, selecting the value of one fixes the value of the other in that system of units, if we specify the relative value, or the velocity, that they must have for our purposes.
For instance, when we consider the velocity c, and choose a unit of measure based on an observed physical constant, say on the Rydberg frequency for hydrogen, regarding this constant as a natural unit of motion, measured in units of cycles per second, we can then determine the corresponding natural unit of time, in terms of seconds, a derived unit in a system of units we select.
Since velocity, expressed as a frequency, is an oscillation, assuming that the value of the Rydberg frequency, expressed in cycles per second, is a natural unit of oscillation, then its motion includes two natural units of length. Therefore, we need to double the value of the Rydberg frequency to determine the natural unit of time in this constant, expressed in seconds, which is simply the reciprocal of the doubled frequency.
Now, of course, equipped with a natural unit of time, expressed in units of seconds, we can calculate a natural unit of length, in meters, the corresponding unit of space in our selected system of units, by multiplying c by the natural unit of time, expressed in our system’s unit of time, seconds.
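As a sanity check on this procedure, here is a sketch in Python, assuming a Rydberg frequency for hydrogen of about 3.2880 × 10^15 cycles per second (an older measured value, chosen here because it reproduces the figures quoted later in this post; the variable names are mine):

```python
# Hedged sketch: deriving natural units of time and length from the
# Rydberg frequency for hydrogen, per the procedure described above.
rydberg_frequency = 3.2880e15        # cycles per second (assumed value)
c = 2.99792458e10                    # speed of light, cm per second

# One cycle spans two natural units, so double the frequency, then take
# the reciprocal to get the natural unit of time in seconds.
t_natural = 1.0 / (2.0 * rydberg_frequency)   # ~1.5207e-16 sec

# The natural unit of length is c times the natural unit of time.
s_natural = c * t_natural                     # ~4.5589e-6 cm

print(t_natural, s_natural)
```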
Does this not give us a beginning to finding a way out of the dilemma being discussed, at least in terms of the meaning of units, whether dimensionless units, or dimensionful units? By assuming motion as our fundamental unit of measure, we gain a new perspective on the issue articulated by Baez:
A system of units gives a coordinate system on some space of quantities we’re trying to measure. So, if we understand coordinate systems thoroughly, we should understand systems of units.
We can easily understand coordinate systems in terms of 1D units of motion, because this is the domain of the reals, the basis of Euclidean geometry. If we take Hestenes’s suggestion, and stick to the reals in Cl3, we can define vectors in terms of Cl3’s four, independent, linear spaces, and describe “space” in terms of units of points, lines, areas, and volumes, a coordinate system of relative space locations, conforming to Euclidean geometry.
However, if the locations of “space” and “time” within this system of multi-dimensional units have no independent physical meaning, as indicated by our inability to measure them independently, but only have meaning when considered together, in terms of motion, and we can determine a natural unit of motion as described above, doesn’t that imply that the units to use in this system of units, which we want to exploit in exploring invariance principles, should be a system of units of motion, rather than a system of units of space and time (spacetime)?
This thought just bends the mind when you think about it, because the dimensions of motion are space and time. Yet velocity is a pure number too, the reciprocal relation between two real numbers. The fact that we give dimensions to these two reciprocal numbers does not mean that velocity also has these dimensions. For instance, the velocity of an expanding gas or liquid does not have the dimensions of length and time, but the dimensions of volume and time. The velocity of an expanding planar wave does not have dimensions of length and time, but the dimensions of area and time.
If this is so, then why can’t the space aspect of c-speed have dimension zero, as well as dimension 1, 2, or 3? That is to say, why should we think of the dimensions of space, in the equation of motion, as length, area, or volume? Isn’t it just as logical to regard motion as the relation of two changing scalar values, as it is to require that space have dimensions of length, area, or volume?
Indeed, if we were to describe the outward expanding motion of the universe as a whole, not from a particular point of reference, wouldn’t the dimensions of both the space and time aspects of the equation have to be scalar, i.e. dimensionless numbers? The answer is obvious, but what might not be so obvious is that, if we do choose a particular point of reference, then we can see that the expanding motion is motion in all three dimensions, depending on the dimensions of measurement.
(to be continued)
It takes Guts to Do Physics
I think I mentioned before that I picked up on a statement that John Baez made in discussing octonions, and the efforts of some physicist friends of his, Tevian Dray and Corinne Manogue, to pursue them, as possible solutions to advanced challenges in quantum mechanics, as it relates to divisional algebras and group theory: “It takes guts to do physics,” he said.
Well, I know what he means because I’ve found that, if you want to share your thoughts in developing new ideas, you have to be willing to look like a dummy sometimes, and you have to be willing to look like a babbler sometimes, and you have to be willing to take chances and stick your neck way out there, where your opponents (and there are always plenty of those) can cut it off happily.
On the other hand, as they say, opposition comes with the territory. Opposition is always an implication of proposing a solution to a problem, especially if the proposer happens also to be the one identifying the problem, as in this case. Not everyone agrees that Larson’s RSt has the problems with SHM and the other properties of the photon. Nor do they believe that the answers to the questions of how to derive the atomic spectra won’t come out of Larson’s development, but there doesn’t seem to be much progress along that line so far.
Identifying Larson’s decision to logically follow “another possibility” rather than the net zero possibility, as a mistake in his development, takes courage, because this decision is so early in the development that its consequences have fundamental impacts on the RSt. For example, the exposure of “scalar rotation” as an oxymoron is frighteningly fundamental, since scalar rotation is the basis of Larson’s development of the RSt. Other consequences follow as well. For instance, the basis for photon propagation, the so-called “vacant dimension,” is no longer tenable. The basis for gravitation, the inward rotation, is no longer tenable, and the basis for the universal expansion, the expansion of space responsible for the recession of the galaxies, seems to be in doubt as well. All of this, and undoubtedly much more, are implications of suggesting that the solution to the problems in the RSt is to back up and reconsider the first “possibility,” the net zero possibility, first.
Clearly, the central hypothesis of the new program is that the entire structure of the physical universe is determined by the necessary consequences of the fundamental postulates, and these postulates lead to the net zero option first, as Larson himself concluded in the Preliminary Edition, before he revised this conclusion in the Revised Edition. So the question to be addressed is, “Why did Larson change his earlier conclusion?” One look at the PAs and the answer is obvious: the continuous “direction” reversals of the net zero option cannot be a photon, because the frequency is fixed; that is, the frequency of the continuous reversals, expressed as a speed-displacement, is one unit, or the velocity of c relative to the unit datum. The space/time ratio of the net zero option is fixed at ds/dt = 1/2, and it therefore corresponds to a fixed velocity of
ds/dt = 4.558816 x 10^-6 cm / (2 x 1.520655 x 10^-16 sec) = 1.498965 x 10^10 cm/sec, or
half the speed of light, .5c, as measured from zero. How can this be used as the basis of the photon? Perhaps Larson was originally thinking that combinations of this basic velocity would produce the variability in the frequencies of photons, but adding units of 1/2, to other discrete units of 1/2, can never change the ratio, from 1/2 to something else. Hence, he found “another possibility,” where the space/time ratio could vary, the periodic “direction” reversal pattern, which he promptly introduces in Chapter IV of NBM, without fanfare. That the new possibility is genuine cannot be disputed. The only question is, “How does it (the new pattern of periodic “direction” reversals) arise?” Ironically, the answer is, by combining the two, inverse, net zero patterns. Had Larson realized this, we might have had an entirely different RSt, than the one we have now. More than anyone else, I wish he had realized it, but there is no turning back now, my ____ is caught in the wringer, to put it crassly.
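The point about adding discrete units can be checked directly: if speed-displacements combine by adding space units and time units separately (as the discussion above assumes), then stacking 1/2 units never changes the ratio. A minimal sketch:

```python
from fractions import Fraction

# Combine speed-displacements by adding numerators (space units) and
# denominators (time units) separately, per the discussion above.
n, d = 1, 2                      # one net-zero unit: ds/dt = 1/2
for _ in range(10):
    n, d = n + 1, d + 2          # add another discrete 1/2 unit
    assert Fraction(n, d) == Fraction(1, 2)   # the ratio never budges
print(n, d)                      # still one half, however many units we add
```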
The implications of the LRC solution, the SUDR and TUDR as 3D vibrations, and their combination as the SUDR+UDR+TUDR (S|T), in the form of the RN
ds/dt = 1/2 + 1/1 + 2/1 = 4/4
are that the theoretical concept of photons is much more complex than Larson’s concept, that the propagation is not due to the lack of reversals in a vacant dimension, but due to the combination of the unit time propagation of the SUDR and the unit space propagation of the TUDR, which produces a unit space/time propagation in the S|T combo. At first it was natural to assume that the middle term, 1/1, represents the outward progression, the UPR, but then further analysis of the RN clearly reveals that this term is the inward component of the combination, not the outward, and that the two outward units are found in the left and right terms, a net unit of outward time progression in the SUDR contribution on the left side of the number, and a net unit of outward space progression in the TUDR contribution on the right side of the number.
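The RN arithmetic can be sketched numerically, treating each term as a (space, time) pair that sums component-wise (my notation, for illustration only):

```python
# RN sum: ds/dt = 1/2 + 1/1 + 2/1, with space and time units added separately
terms = [(1, 2), (1, 1), (2, 1)]   # (space units, time units) per term
num = sum(s for s, t in terms)     # 1 + 1 + 2 = 4
den = sum(t for s, t in terms)     # 2 + 1 + 1 = 4
assert (num, den) == (4, 4)        # the S|T combo: 4/4
print(num, den)
```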
However, the fact that the net total of inward and outward motion is zero in this combo, doesn’t seem to mean that the outward and inward propagations necessarily cancel each other, that they have to be summed, like -1 + 1 = 0, but rather that they coexist simultaneously in the same number, much like the real and imaginary parts of a complex number don’t cancel each other, but coexist simultaneously in one number to express a new type of number. The implications of this line of thought were very interesting from the beginning, but only now is the full impact of what this implies starting to emerge.
The implications of what is being discovered are truly amazing. What we have discovered so far implies that the whole of LST physics is misled by the failure to recognize that the nature of the “direction” of scalar magnitudes is analogous to, but distinct from, the nature of direction in vectorial magnitudes.
This is clearly visible in the development of quantum mechanics, where the role of the nature of direction in vectorial motion was employed to explain the role of the nature of “direction” in scalar motion, albeit unawares. Given this point of view, a breathtaking vision of what actually happened in the history of the LST theory development begins to emerge from the darkness. For example, recall that the first progress in the LST understanding of atomic spectra was Bohr’s idea of an electron orbiting a proton nucleus, implied by Rutherford’s results, that could only take discrete values of a given wavelength, the emission/absorption values being the delta of these levels. However, while these levels were regarded as values of angular momentum, the particle nature of an electron, orbiting the nucleus in the manner in which Bohr first imagined it, led to problems, which were solved by deBroglie’s wave idea; that is, that the angular momentum, or its energy equivalent, of the electron could also be regarded as the energy in a wave, the wavelength of which determines the number of cycles that can fit in the circumference of the orbit. As the frequency increases, the wavelength decreases, and a greater number of wavelengths fit into a given orbit; or, holding the frequency constant, the larger circumference of an outside, or higher, orbit accommodates more sets of the number of cycles corresponding to an electron’s energy. In other words, in Bohr’s model, the larger the radius of the orbit, the higher the energy it can accommodate, and, thus, the more electrons that can fit into a given orbit.
In the usual account of this part of the story, the narrative quickly moves on, describing quantum numbers, Heisenberg’s uncertainty principle, Pauli’s exclusion principle, etc., but let’s freeze the frame of the movie at this point and take a closer look at what’s actually happening here, in light of our new understanding. The problem was that Bohr’s model only worked for hydrogen. When the higher frequencies in the higher orbits of other elements were calculated, the spectra calculations were incorrect. The usual story of how this problem was eventually solved recounts Heisenberg’s breakthrough discovery of the necessity of using non-commutative multiplication in calculating the frequency terms in the Fourier expansion of the atomic spectra, an approach that worked, but which he distrusted, and which he initially considered to be a “significant difficulty” with his approach.
However, it was Dirac who saw it not as a significant difficulty, but as a significant discovery, when he realized that Heisenberg’s non-commutative product, which was actually only a reflection of the fact that a frequency transition has two possible “directions,” up to a higher frequency and down to a lower frequency, could be assumed to represent the difference in a frequency transition “equal to [tex]\small \frac{ih}{2\pi}[/tex] times their Poisson Bracket expression.” In N.A. McCubbin’s account of this, which I’ve referred to before, he explains the excitement of Dirac, when he saw the connection between the “significant difficulty” of the non-commutative product in Heisenberg’s draft paper, or what today would be considered a “preprint,” were it on the Internet, and the Poisson Brackets of Hamilton’s mechanics:
This was just the kind of connection that Dirac was looking for: in place of a strange looking multiplication rule and the mathematically somewhat fuzzy Correspondence Principle, the Hamiltonian formalism was mathematically precise, elegant, and powerful. Of course he had only proved the connection in a particular limit, using, ironically, the Correspondence Principle. So he made a leap. In his paper ‘The Fundamental Equations of Quantum Mechanics’ [6] he wrote: ‘We make the fundamental assumption that [i]the difference between the Heisenberg products of two quantum quantities is equal to [tex]\small \frac{ih}{2\pi}[/tex] times their Poisson bracket expression[/i].’ (Dirac’s italics) So he assumed the equality not just in some limit of large quantum numbers, but always! With this assumption results simply pour out.
When I first read this, I wasn’t sure what it meant. Not only do I not know what Poisson Brackets are, but I didn’t understand what McCubbin was talking about when he referred to Dirac’s “leap,” which both he and Dirac characterize as a “fundamental assumption.” McCubbin states that the reason for the assumption is that Dirac had only proved the connection (between the commutator and [tex]\small \frac{ih}{2\pi}[/tex] times its Poisson Bracket expression) “in a particular limit, using, ironically, [Bohr’s] Correspondence Principle,” but “assuming the equality not just in some limit of large quantum numbers, but always!” What does this statement of McCubbin’s mean? I don’t understand the point that he’s making here clearly enough to appreciate the exclamation point at the end of it. I truly wish I could find someone who could explain it to me.
Nevertheless, in my struggle to understand it on my own, I think that I’ve discovered something significant about the relation between the two systems of physical theory, and I think, not unexpectedly, in hindsight, that it has to do with this confusion, in LST physics, of the direction of vectors with the “direction” of scalars. The first thing I needed to understand in McCubbin’s statement was why Dirac’s use of Bohr’s Correspondence Principle (CP), in proving the connection, is “ironic.” The CP is a way to explain how classical physics relates to quantum physics, but, according to McCubbin, this principle is “mathematically fuzzy” compared to the “precise, elegant, and powerful” formalism of Hamilton:
To recapitulate: in the Correspondence Principle limit, in which classical and quantum descriptions should coincide, the difference between the Heisenberg products of two quantum quantities X,Y becomes equal to [tex]\small i\hbar\left(X,Y\right)_{PB}[/tex]…This was just the kind of connection that Dirac was looking for: in place of a strange looking multiplication rule and the mathematically somewhat fuzzy Correspondence Principle, the Hamiltonian formalism was mathematically precise, elegant, and powerful.
Bohr describes his CP in a 1925 paper, as follows:
Nevertheless, the visualization of the stationary states by mechanical pictures has brought to light a far-reaching analogy between the quantum theory and the mechanical theory. This analogy was traced by investigating the conditions in the initial stages of the binding process described, where the motions corresponding to successive stationary states differ comparatively little from each other. Here it was possible to demonstrate an asymptotic agreement between spectrum and motion. This agreement establishes a quantitative relation by which the constant appearing in Balmer’s formula for the hydrogen spectrum is expressed in terms of Planck’s constant and the values of the charge and mass of the electron. The essential validity of this relation was clearly illustrated by the subsequent test of the predictions of the theory regarding the dependence of the spectrum on the nuclear charge. …
The demonstration of the asymptotic agreement between spectrum and motion gave rise to the formulation of the “correspondence principle”, according to which the possibility of every transition process connected with emission of radiation is conditioned by the presence of a corresponding harmonic component in the motion of the atom. Not only do the frequencies of the corresponding harmonic components agree asymptotically with the values obtained from the frequency condition in the limit where the energies of the stationary states converge, but also give in this limit an asymptotic measure for the probabilities of the transition processes on which the intensities of the observable spectral lines depend.
We can see from this that the CP relates to the “probability of the transition,” as well as to its energy, or frequency, per se. In fact, we can say that the intensity of the transition frequency is as important in the calculations as the frequency itself, and that the CP allows us to say that, as the number of quantum states grows, the quantum calculations, based on quantum numbers, will converge with the classical calculations, based on Fourier series expansion, like those that Heisenberg obtained. In other words, the probability for each stable state to change into another stable state is altered by adding or removing energy from the system. The idea of probability in a discrete system is tied to the idea of intensity in a classical system by the CP, because, while the intensity of the radiation of a given transition can be calculated classically as the sum of a certain number of terms, if there are enough of them, “in the [selected] limit,” the intensity of the radiation, in the same transition, corresponds to the number of times the emission event occurs in the quantum calculation.
Thus, a correspondence between the “rate of fire” of the quantum state transitions and the number of stationary classical states that are summed in order to make up the same level of intensity in the transition establishes a “fuzzy” correspondence between the two concepts. Soshichi Uchii, in his “Seminar on Bohr,” illustrates this relationship graphically (see: http://www.bun.kyoto-u.ac.jp/~suchii/Bohr/correspond.html).
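The asymptotic agreement Bohr describes can be checked with a short calculation. The sketch below is my own, using the standard hydrogen formulas (not taken from McCubbin or Uchii): the quantum frequency of the transition n → n−1 approaches the classical orbital frequency 2R/n³ as n grows large:

```python
# A minimal sketch of Bohr's Correspondence Principle for hydrogen, assuming
# the textbook formulas: photon frequency R*(1/(n-1)^2 - 1/n^2) for the
# transition n -> n-1, versus the classical orbital frequency 2*R/n^3.
R = 3.2898419603e15  # Rydberg frequency for hydrogen, Hz

for n in (2, 10, 100, 1000):
    f_quantum = R * (1.0 / (n - 1)**2 - 1.0 / n**2)  # emitted photon frequency
    f_classical = 2.0 * R / n**3                     # classical orbit frequency
    print(n, round(f_quantum / f_classical, 4))      # ratio approaches 1
```

At n = 2 the two disagree by a factor of three, but by n = 1000 they agree to better than a fifth of a percent, which is exactly the “asymptotic agreement between spectrum and motion” in Bohr’s quote.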
However, Dirac’s assumption, characterized by McCubbin as a “leap,” is ironic, I guess, because it led to a new concept that replaced the CP, even though, without Heisenberg’s reliance on the CP, it likely never would have been discovered by Dirac. Ok, so that explains the “ironic” aspect, but now we want to know exactly why Dirac’s new, more “precise, elegant, and powerful,” concept, based on the Hamiltonian formalism and the assumption that the commutator is [tex]\small i\hbar[/tex] times its Poisson Bracket expression, makes it possible for quantum mechanical results to “simply pour out.”
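Although the full answer is deferred, the non-commutative product itself can be illustrated numerically. The sketch below is a toy example of my own, not drawn from McCubbin’s paper: it builds truncated harmonic-oscillator matrices, in assumed units with ħ = m = ω = 1, and checks Dirac’s relation in its simplest instance, [X, P] = iħ, the quantum counterpart of the classical Poisson bracket {x, p} = 1:

```python
# A minimal sketch, assuming truncated harmonic-oscillator matrices and
# units with hbar = m = omega = 1, of the non-commutative "Heisenberg
# product": X*P - P*X equals i*hbar, matching Dirac's Poisson Bracket rule.
import numpy as np

N = 8                                  # matrix truncation size
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)           # lowering operator
X = (a + a.T) / np.sqrt(2)             # position operator
P = 1j * (a.T - a) / np.sqrt(2)        # momentum operator

comm = X @ P - P @ X                   # the commutator [X, P]
# equals i (i.e. i*hbar) on every state below the truncation edge
print(np.round(comm.diagonal()[:-1], 10))
```

The last diagonal entry is spoiled by the truncation, but everywhere else the product difference is exactly iħ, never zero: position-times-momentum and momentum-times-position really are different quantities.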
We’ll discuss that topic later.
Why Does the RST Constitute "New Physics?"
As discussed in our “The Trouble with Physics” blog, Ira Flatow, the host of the NPR show Science Friday, asked his guest Lee Smolin, the author of the new book The Trouble with Physics:
Are we at a point now, where you just have to sit and scratch your head and think, “We need some revolution, don’t we?” I mean, we need a revolution in physics, maybe we need a new physics!
Normally, however, when physicists talk about “new physics,” they don’t mean it in the way Flatow meant it, as a “revolution” in the theoretical concepts of current physics, but rather as the discovery of new forces, particles, or dimensions. In the latter sense, the acceleration of the expansion of the universe, tentatively attributed to “dark” energy, constitutes new physics, because it appears to be a new force. Similarly, the discovery of anomalous rotation speeds of stars in galaxies, attributed to “dark” matter, constitutes new physics, because it appears to be the result of a new particle, and the search for Supersymmetry particles is a search for new physics, because, if Supersymmetry is discovered, it would appear to confirm extra dimensions of space.
In this sense, then, new physics constitutes new clues that might be helpful in solving the mystery of nature, using the existing concepts of physics. However, Flatow, Witten, Gross, Green, and others, have referred to a revolution in the concepts of physics itself, meaning a revolution in the ideas that constitute the science of physics, distinct from the ideas of the normal science of modern physics. Of course, a change in ideas so monumental as to revolutionize the science of physics itself is difficult to comprehend, and I think Smolin, and many other professional physicists like him, find it just too much to contemplate.
Nevertheless, it appears, as Flatow says, that “we need some revolution,” a change in the fundamental concepts upon which the current practice of theoretical physics depends. Most physicists who recognize this also recognize that the nature of the required change will have to do with the concepts of space and time. I quoted Brian Greene’s comments to this effect in our Trouble With Physics blog, but David Gross, in another NPR interview, has essentially said the same thing:
In string theory I think we’re in sort of a pre-revolutionary stage. We have hit upon, somewhat accidentally, an incredible theoretical structure…but we still haven’t made a very radical break with conventional physics. We’ve replaced particles with strings—that in a sense is the most revolutionary aspect of the theory. But all of the other concepts of physics have been left untouched…many of us believe that that will be insufficient…That at some point, a much more drastic revolution or discontinuity in our system of beliefs will be required. And that this revolution will likely change the way we think about space and time.
It is our conviction that the time of drastic change predicted by Gross has arrived. Indeed, we are convinced that the new scalar physics under investigation here at the LRC, based on Larson’s Reciprocal System of Physical Theory (RST), definitely constitutes the “drastic revolution or discontinuity in our system of beliefs” that Gross believes is required. The drastic change that the RST introduces can best be understood in light of the impact it has on three concepts:
1) The concept of motion
2) The concept of the space and time reference system
3) The concept of the three properties of magnitude: quantity, direction, and dimension
The Concept of Motion
In the vectorial system of motion, or what we refer to as the legacy system of physical theory (LST), motion is defined as a change in an object’s location, x, over time t. It is the physics program that Newton inaugurated, and, borrowing from the words of David Hestenes, it is the “grand goal” of this program “to describe and to explain all properties of all physical objects” (see: New Foundations for Classical Mechanics.) The program’s approach is determined by two general assumptions:
1) Every physical object consists of a composite of particles, and
2) A particle’s behavior is determined by its interaction with other particles.
The objective of the program then is to reduce the description of the structure of the physical universe to “a few interactions among a few particles,” according to Hestenes.
The standard model of elementary particles is the result of this program, and although it’s not fully satisfactory from many standpoints, it is generally looked upon as the finest intellectual achievement of the 20th century, and regarded as the pinnacle of success for modern theoretical physics. The standard model represents the power of the mathematical formalism of the Newtonian program’s general assumptions, even though the clear formulation of its key concepts of particle and interaction, as an object with a definite orbit in space and time, had to be modified somewhat in order to accommodate the discovery of the discrete nature of atomic phenomena.
Nevertheless, Hestenes writes that the foundation of the program to this day is that the orbit of a particle is represented by the continuous function x = x(t), which specifies each object’s position x, at time t, and, thus, expresses the continuous existence of a particle, by expressing its motion, as a continuous function. According to Hestenes, if we assume that variations in a particle’s motion are completely determined by its interactions with other particles, then the equation of motion becomes

[tex]\small m\frac{d^2x}{dt^2} = f[/tex]

and since m, the mass of the particle, is a constant, or scalar, and [tex]\small \frac{d^2x}{dt^2}[/tex] is the variation in the motion, or acceleration, of the particle, the equation becomes “a definite differential equation, when f is expressed as a specific function of x(t) and its derivatives.”
In Newton’s program, this idea led to a focus on the forces of interaction sufficient to determine the motion, or existence, of a particle, and became a means for classifying elementary particles. Thus, the standard model is the classification of elementary particles, according to the kinds of interactions in which they participate.
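To make Hestenes’ point concrete: once f is specified as a function of x, the motion can be computed step by step. The sketch below is my own illustration, not Hestenes’, assuming a simple spring force f = −kx and a standard velocity-Verlet integrator:

```python
# A minimal sketch, assuming a spring force f(x) = -k*x: the equation of
# motion m * x'' = f(x) becomes a definite differential equation, integrated
# here with the velocity-Verlet method.
import math

def integrate(x, v, m=1.0, k=1.0, dt=0.001, steps=1000):
    a = -k * x / m                        # acceleration from the force law
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # advance position
        a_new = -k * x / m                # force evaluated at new position
        v += 0.5 * (a + a_new) * dt       # advance velocity
        a = a_new
    return x, v

x, v = integrate(1.0, 0.0)                # start at x = 1, at rest
print(round(x, 4), round(math.cos(1.0), 4))  # numerical x(1) vs exact cos(1)
```

The exact solution of this particular force law is x(t) = cos(t), and the integrator reproduces it to four decimal places, which is the sense in which specifying f “determines the motion.”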
Clearly, then, the redefinition of motion, from a vectorial definition, depending on the changing location of an object over time, to a scalar definition, where no object is involved, and no location is changed, has a drastic impact on modern theoretical physics. Instead of a focus on forces of interaction, as a means for classifying particles of matter, an entirely different approach will be necessary, one that will seek to classify particles of matter, according to the scalar motions that constitute them. Thus, it is the scalar motions that are elementary, not the particles of matter. In scalar physics, particles of matter are either atomic, or subatomic, units of motion, or combinations of units of motion, and the interactions between them are relations between units of motion, or combinations of units of motion.
Absolute or Relative?
The changes in the fundamental concepts of theoretical physics that the RST makes don’t stop with the definition of the new scalar motion. One of the most important concepts of LST physics involves reference systems. The famous debate between Leibniz and Newton over whether the nature of space is absolute, or relative, which goes directly to the heart of the fundamental crisis facing theoretical physics today, ultimately has to do with the reference systems, which are needed to define vectorial motion.
It is the invariance of physical laws, through translation and rotation of the reference system, that is the key determination in formulating the laws of LST physics. These laws are based on the continuum concept of space and they are always invariant under the transformations of space and time, which lead to the important conservation laws of physics:
1) The invariance under translations in space conserves momentum.
2) The invariance under translations in time conserves energy.
3) The invariance under rotations in space conserves angular momentum.
In addition, the motion between reference systems must be taken into account; that is, there is no preferred reference system that can be used to determine the function x(t) in any absolute sense, because vectorial motion is defined by the change of an object’s location with respect to other, fixed, locations in a selected frame of reference. Therefore, the magnitude of a given motion, or velocity, in LST physics will change if the reference system in which it is defined is changed from a fixed frame of reference to a moving frame of reference. Hence, it is impossible to assert that a given frame of reference is the absolute frame of reference for defining the function x = x(t).
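The frame-dependence of vectorial velocity can be shown with a toy calculation (the trajectory and frame speed below are invented purely for illustration):

```python
# A minimal sketch, assuming a Galilean change of frame: the same trajectory
# x(t) yields a different velocity when measured from a frame moving at
# speed u, since x'(t) = x(t) - u*t.
def velocity(x, t0, t1):
    """Average velocity of trajectory x(t) between t0 and t1."""
    return (x(t1) - x(t0)) / (t1 - t0)

x = lambda t: 3.0 * t + 2.0              # object moving at 3 units/s
u = 1.0                                  # speed of the moving frame
x_moving = lambda t: x(t) - u * t        # same object, seen from that frame

print(velocity(x, 0.0, 5.0))             # 3.0 in the fixed frame
print(velocity(x_moving, 0.0, 5.0))      # 2.0 in the moving frame
```

The same physical motion has two different magnitudes, 3 and 2, depending only on the chosen reference system, which is why no frame can claim to define x = x(t) absolutely.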
Scalar physics changes all of this, not in a sense that invalidates LST physics, but in one that subordinates it; that is, LST physics is the physics of vectorial motion, and, therefore, it does not apply to scalar physics, but it depends upon scalar physics, because we are assuming that without scalar physics no physical entity can exist, and, therefore, no fixed frame of reference can exist in which to define vectorial motion. In other words, if the geometry of space is defined by the relation of the positions of existing objects, then it doesn’t exist unless the objects defining the relative positions exist first. This was Leibniz’s argument that space is only a relative concept: space does not exist in an absolute sense, as Newton asserted.
Besides the obvious impact that scalar physics therefore has on the fundamental laws of vectorial physics, there is another important facet that is not so obvious. It is a result of the concept of the relational view of space that is noted by Lee Smolin, in his paper “The Case for Background Independence.” He writes:
…a physics where space and time are absolute can be developed one particle at a time, while a relational view requires that the properties of any one particle are determined self-consistently by the whole universe.
Thus, the development of the universe of motion upon the principles of scalar physics must meet this requirement, and it does so by beginning with the unit progression ratio (UPR) of the uniform progression, where ds/dt = 1/1, as the absolute reference for magnitudes of scalar motion. It is the quantum displacement from this reference speed that determines the properties of any one particle in the universe of motion.
Quantity, Direction, and Dimension
Finally, the recognition of scalar motion, as a redefinition of vectorial motion that eliminates the motion of an object with respect to a background reference system of space and time as a necessary part of the definition of motion, leads to another drastic change in the accepted concepts of LST physics: the concepts of quantity, direction, and dimension. Indeed, the change in the meaning of these accepted concepts is arguably the most drastic of all the revolutionary changes that scalar physics introduces.
In vector physics, since scalar magnitudes, by definition, have no direction, they are treated separately from the concept of direction; that is, a vector magnitude has two properties: the quantity of the magnitude and the direction of its path, defined in terms of dimensions. Thus, the velocity of an object is defined in terms of the coordinates, in the three dimensions x, y, and z, of its location, the time rate of change of which constitutes the magnitude of the velocity, a scalar value, while the history of those changes describes the direction of its velocity. The time rate of change part of vectorial velocity is scalar; that is, it is a value that specifies no direction, while the history of its coordinate changes describes its path, or direction.
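A minimal sketch of this split between scalar quantity and vectorial direction (the sample components below are arbitrary):

```python
# A minimal sketch, assuming arbitrary sample components: a velocity vector
# separates into a scalar speed (its magnitude, with no direction) and a
# unit vector carrying the direction alone.
import math

v = (3.0, 4.0, 0.0)                          # velocity components in x, y, z
speed = math.sqrt(sum(c * c for c in v))     # scalar part: magnitude only
direction = tuple(c / speed for c in v)      # directional part: unit vector
print(speed)        # 5.0
print(direction)    # (0.6, 0.8, 0.0)
```

Multiplying the two parts back together recovers the original vector, so nothing is lost in treating quantity and direction as separate properties.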
In contrast, scalar motion is defined without employing the changing location of an object in the definition of the time rate of change of its spatial aspect. Therefore, it has no history of a one-dimensional direction, specified by changing coordinates, describing a path of the motion. For instance, the scalar motion of the galactic recession has no specific direction. This observed motion is clearly scalar, because it has magnitude only; that is, the distance between the distant galaxies is simply increasing, it is not increasing in a given direction, but in all directions simultaneously.
Clearly, however, there is a “direction” to the outward motion of the galactic expansion of the universe, relative to the inward “direction” of the galactic contraction of the universe, if such a contraction actually existed. This outward versus inward “direction” of scalar motion cannot be differentiated in terms of three orthogonal dimensions, as the directions of vectorial motions are commonly differentiated in a fixed coordinate system, but it can be differentiated in another, analogous, manner: it can be outward/inward in space, or in time.
In fact, scalar motion can exist in both the outward and inward “directions” simultaneously. For instance, the collection of distant galaxies is expanding outward in all directions at the same time that the galaxies themselves are individually contracting inward in all directions, due to gravity. If it were not so, the galaxies would be torn apart by the expansion, but, clearly, they are not. Therefore, in the universe of motion, where it is assumed that matter consists of nothing but units of motion, and combinations of units of motion, the scalar motion of matter must exist in both its inward and outward forms simultaneously.
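A toy calculation makes the magnitude-only character of the recession concrete (the galaxy coordinates and scale factor below are invented): uniformly scaling all positions increases every pairwise distance by the same factor, with no direction singled out:

```python
# A minimal sketch, assuming invented toy coordinates: scaling every
# position by a common factor multiplies every pairwise distance by that
# factor, regardless of which pair, or which direction, is examined.
import math

galaxies = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (-3.0, 1.0)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

factor = 2.0
expanded = [(factor * x, factor * y) for x, y in galaxies]

ratios = []
for (p, q) in [(0, 1), (0, 2), (1, 3)]:
    ratios.append(dist(expanded[p], expanded[q]) / dist(galaxies[p], galaxies[q]))
print(ratios)  # every separation grows by the same factor
```

Since the same result holds no matter which galaxy is taken as the origin, the expansion has magnitude but no preferred direction, which is the defining mark of a scalar motion.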
In the Scalar Mathematics blog, we will see how these “directions” of the scalar motion can actually be described in terms of three “dimensions” of scalar motion:
1) Outward space motion
2) Outward time motion
3) Inward space or inward time motion
Again, however, scalar motion can exist in these three scalar “dimensions” simultaneously, whereas the vectorial motion of objects in three dimensions can only exist in one, resultant, vector at a time. This too has important consequences.
Conclusion
It should be clear now why the scalar physics of the RST constitutes new physics. The revolutionary changes in the accepted concepts that constitute scalar physics are a drastic departure from the concepts of vector physics, but they do not invalidate the concepts of vector physics, they simply expand the concept of physics itself, and thereby inaugurate a new, expanded, RST program of research that is better equipped “to describe and to explain all properties of all physical objects” than is the current LST program, restricted as it is to vector physics.