I think I mentioned before that I picked up on a statement John Baez made in discussing octonions, and the efforts of some physicist friends of his, Tevian Dray and Corinne Manogue, to pursue them as possible solutions to advanced challenges in quantum mechanics, as it relates to division algebras and group theory: “It takes guts to do physics,” he said.
Well, I know what he means, because I’ve found that, if you want to share your thoughts while developing new ideas, you have to be willing to look like a dummy sometimes, you have to be willing to look like a babbler sometimes, and you have to be willing to take chances and stick your neck way out there, where your opponents (and there are always plenty of those) can happily cut it off.
On the other hand, as they say, opposition comes with the territory. Opposition is always an implication of proposing a solution to a problem, especially if the proposer happens also to be the one identifying the problem, as in this case. Not everyone agrees that Larson’s RSt has the problems with SHM and the other properties of the photon. They also believe that the answers to the questions of how to derive the atomic spectra will come out of Larson’s development, but there doesn’t seem to be much progress along that line so far.
Identifying Larson’s decision to logically follow “another possibility,” rather than the net zero possibility, as a mistake in his development takes courage, because this decision comes so early in the development that its consequences have fundamental impacts on the RSt. For example, the exposure of “scalar rotation” as an oxymoron is frighteningly fundamental, since scalar rotation is the basis of Larson’s development of the RSt. Other consequences follow as well. For instance, the basis for photon propagation, the so-called “vacant dimension,” is no longer tenable. The basis for gravitation, the inward rotation, is no longer tenable, and the basis for the universal expansion, the expansion of space responsible for the recession of the galaxies, seems to be in doubt as well. All of this, and undoubtedly much more, follows from suggesting that the solution to the problems in the RSt is to back up and reconsider the first “possibility,” the net zero possibility.
Clearly, the central hypothesis of the new program is that the entire structure of the physical universe is determined by the necessary consequences of the fundamental postulates, and these postulates lead to the net zero option first, as Larson himself concluded in the Preliminary Edition, before he revised this conclusion in the Revised Edition. So the question to be addressed is, “Why did Larson change his earlier conclusion?” One look at the PAs and the answer is obvious: the continuous “direction” reversals of the net zero option cannot be a photon, because the frequency is fixed; that is, the frequency of the continuous reversals, expressed as a speed-displacement, is one unit, or the velocity of c relative to the unit datum. The space/time ratio of the net zero option is fixed at ds/dt = 1/2, and it therefore corresponds to a fixed velocity of
ds/dt = 4.558816 x 10^-6 cm / (2 x 1.520655 x 10^-16 sec) = 1.498965 x 10^10 cm/sec, or
half the speed of light, .5c, as measured from zero. How can this be used as the basis of the photon? Perhaps Larson was originally thinking that combinations of this basic velocity would produce the variability in the frequencies of photons, but adding units of 1/2 to other discrete units of 1/2 can never change the ratio from 1/2 to something else. Hence, he found “another possibility,” where the space/time ratio could vary, the periodic “direction” reversal pattern, which he promptly introduced in Chapter IV of NBM, without fanfare. That the new possibility is genuine cannot be disputed. The only question is, “How does it (the new pattern of periodic “direction” reversals) arise?” Ironically, the answer is: by combining the two, inverse, net zero patterns. Had Larson realized this, we might have had an entirely different RSt than the one we have now. More than anyone else, I wish he had realized it, but there is no turning back now; my ____ is caught in the wringer, to put it crassly.
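To check the arithmetic, here is a quick sketch. The natural unit values are the ones quoted above; the rule for adding discrete speed-displacement units, numerators to numerators and denominators to denominators, is my inference from the RN notation ds/dt = 1/2 + 1/1 + 2/1 = 4/4 used further down:

```python
from fractions import Fraction

# Larson's natural units, as quoted in the text above:
SPACE_UNIT = 4.558816e-6   # cm, one natural unit of space
TIME_UNIT = 1.520655e-16   # sec, one natural unit of time

c = SPACE_UNIT / TIME_UNIT             # one unit of space per ONE unit of time
half_c = SPACE_UNIT / (2 * TIME_UNIT)  # one unit of space per TWO units of time
print(f"c    = {c:.6e} cm/sec")        # ~2.997930e10 cm/sec
print(f"0.5c = {half_c:.6e} cm/sec")   # ~1.498965e10 cm/sec

def add_displacements(*ratios):
    """Add discrete units term by term: (n1+n2+..., d1+d2+...)."""
    return (sum(n for n, d in ratios), sum(d for n, d in ratios))

# Any number of 1/2 units still reduces to the ratio 1/2:
for k in (2, 3, 10):
    n, d = add_displacements(*([(1, 2)] * k))
    print(k, (n, d), Fraction(n, d))   # always reduces to 1/2
```

The point the code makes is the one in the text: no matter how many discrete 1/2 units are combined this way, the ratio never budges from 1/2.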
The implications of the LRC solution, the SUDR and TUDR as 3D vibrations, and their combination as the SUDR+UDR+TUDR (S|T), in the form of the RN
ds/dt = 1/2 + 1/1 + 2/1 = 4/4
are that the theoretical concept of the photon is much more complex than Larson’s concept, and that the propagation is not due to the lack of reversals in a vacant dimension, but to the combination of the unit time propagation of the SUDR and the unit space propagation of the TUDR, which produces a unit space/time propagation in the S|T combo. At first, it was natural to assume that the middle term, 1/1, represents the outward progression, the UPR, but further analysis of the RN clearly reveals that this term is the inward component of the combination, not the outward one, and that the two outward units are found in the left and right terms: a net unit of outward time progression in the SUDR contribution, on the left side of the number, and a net unit of outward space progression in the TUDR contribution, on the right side of the number.
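As an illustrative sketch only (my own representation, not canonical LRC notation), the RN can be treated as three (space, time) terms summed term by term:

```python
# The three terms of the RN ds/dt = 1/2 + 1/1 + 2/1, as (space, time) pairs:
SUDR = (1, 2)    # left term: net unit of outward time progression
INNER = (1, 1)   # middle term: the inward component of the combination
TUDR = (2, 1)    # right term: net unit of outward space progression

# Term-by-term sum of the combination:
total = tuple(sum(t) for t in zip(SUDR, INNER, TUDR))
print(total)                # (4, 4): unit space/time progression
print(total[0] - total[1])  # 0: net of inward and outward motion is zero
```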
However, the fact that the net total of inward and outward motion is zero in this combo doesn’t mean that the outward and inward propagations necessarily cancel each other, that they have to be summed, like -1 + 1 = 0. Rather, they coexist simultaneously in the same number, much as the real and imaginary parts of a complex number don’t cancel each other, but coexist simultaneously in one number to express a new type of number. The implications of this line of thought were very interesting from the beginning, but only now is the full impact of what it implies starting to emerge.
The implications of what is being discovered are truly amazing. What we have discovered so far implies that the whole of LST physics is misled by the failure to recognize that the nature of the “direction” of scalar magnitudes is analogous to, but distinct from, the nature of direction in vectorial magnitudes.
This is clearly visible in the development of quantum mechanics, where the role of the nature of direction in vectorial motion was employed to explain the role of the nature of “direction” in scalar motion, albeit unawares. Given this point of view, a breathtaking vision of what actually happened in the history of the LST theory development begins to emerge from the darkness. For example, recall that the first progress in the LST understanding of atomic spectra was Bohr’s idea, implied by Rutherford’s results, of an electron orbiting a proton nucleus that could only take discrete values of angular momentum, the emission/absorption values being the differences between these levels. However, the particle nature of an electron, orbiting the nucleus in the manner in which Bohr first imagined it, led to problems, which were solved by de Broglie’s wave idea; that is, that the angular momentum, or its energy equivalent, of the electron could also be regarded as the energy in a wave, the wavelength of which determines the number of cycles that can fit in the circumference of the orbit. As the frequency increases, the wavelength decreases, and a greater number of wavelengths fit into a given orbit; or, holding the frequency constant, the larger circumference of an outside, or higher, orbit accommodates more sets of the number of cycles corresponding to an electron’s energy. In other words, in Bohr’s model, the larger the radius of the orbit, the higher the energy it can accommodate, and, thus, the higher the number of electrons that can fit into a given orbit.
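De Broglie’s standing-wave picture of the Bohr orbits can be checked numerically. The sketch below uses the standard textbook constants and Bohr-model relations (r_n = n²a₀ and v_n = αc/n); nothing here comes from Larson’s system:

```python
import math

# How many de Broglie wavelengths fit into the n-th Bohr orbit?
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
alpha = 7.2973525693e-3  # fine-structure constant
a0 = 5.29177210903e-11   # Bohr radius, m

def wavelengths_per_orbit(n):
    """Circumference of the n-th Bohr orbit, in de Broglie wavelengths."""
    r_n = n**2 * a0        # orbit radius
    v_n = alpha * c / n    # orbital speed
    lam = h / (m_e * v_n)  # de Broglie wavelength of the electron
    return 2 * math.pi * r_n / lam

for n in (1, 2, 3, 4):
    print(n, round(wavelengths_per_orbit(n), 6))  # exactly n wavelengths fit
```

This is the sense in which the discrete levels and the wave idea coincide: the n-th orbit accommodates exactly n whole cycles.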
In the usual account of this part of the story, the narrative quickly moves on, describing quantum numbers, Heisenberg’s uncertainty principle, Pauli’s exclusion principle, etc., but let’s freeze the frame in the movie at this point and take a closer look at what’s actually happening here, in light of our new understanding. The problem was that Bohr’s model only worked for Hydrogen. When the higher frequencies in the higher orbits of other elements were calculated, the calculated spectra were incorrect. The usual story of how this problem was eventually solved recounts Heisenberg’s breakthrough discovery of the necessity of using non-commutative multiplication in calculating the frequency terms in the Fourier expansion of the atomic spectra, an approach that worked, but which he distrusted, initially considering it a “significant difficulty” with his approach.
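Heisenberg’s “strange looking multiplication rule” is just matrix multiplication, and matrix products do not commute in general. A two-by-two example (my own, purely for illustration) is enough to see it:

```python
import numpy as np

# Two small matrices standing in for Heisenberg's arrays of
# transition amplitudes:
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
Y = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(X @ Y)                      # [[ 1.  0.] [ 0. -1.]]
print(Y @ X)                      # [[-1.  0.] [ 0.  1.]]
print(np.allclose(X @ Y, Y @ X))  # False: XY != YX
```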
However, it was Dirac who saw it not as a significant difficulty, but as a significant discovery, when he realized that Heisenberg’s non-commutative product, which was actually only a reflection of the fact that a frequency transition has two possible “directions,” up to a higher frequency and down to a lower frequency, could be assumed to represent the difference in a frequency transition “equal to [tex]\small \frac{ih}{2\pi}[/tex] times their Poisson Bracket expression.” In N.A. McCubbin’s account of this, which I’ve referred to before, he explains Dirac’s excitement when he saw the connection between the “significant difficulty” of the non-commutative product in Heisenberg’s draft paper, or what today would be considered a “preprint,” were it on the Internet, and the Poisson Brackets of Hamilton’s mechanics:
This was just the kind of connection that Dirac was looking for: in place of a strange looking multiplication rule and the mathematically somewhat fuzzy Correspondence Principle, the Hamiltonian formalism was mathematically precise, elegant, and powerful. Of course he had only proved the connection in a particular limit, using, ironically, the Correspondence Principle. So he made a leap. In his paper ‘The Fundamental Equations of Quantum Mechanics’ [6] he wrote: ‘We make the fundamental assumption that [i]the difference between the Heisenberg products of two quantum quantities is equal to [tex]\small \frac{ih}{2\pi}[/tex] times their Poisson bracket expression[/i].’ (Dirac’s italics) So he assumed the equality not just in some limit of large quantum numbers, but always! With this assumption results simply pour out.
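Dirac’s assumed equality can actually be checked numerically in a finite truncation. The sketch below is a standard illustration (not Dirac’s own calculation): it builds position and momentum matrices from the harmonic-oscillator ladder operator, in units where hbar = 1. Since the classical Poisson Bracket is {x, p} = 1, the assumption says XP - PX should be i times the identity:

```python
import numpy as np

# Position and momentum as N x N matrices, built from the ladder
# operator of the harmonic oscillator (units with hbar = 1):
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
X = (a + a.T) / np.sqrt(2)                  # position matrix
P = 1j * (a.T - a) / np.sqrt(2)             # momentum matrix

comm = X @ P - P @ X
print(np.diag(comm).round(6))
# Every diagonal entry is i, except the last, which the finite truncation
# spoils; in the infinite-dimensional limit, XP - PX = i * identity,
# i.e. i*hbar times the Poisson Bracket {x, p} = 1.
```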
When I first read this, I wasn’t sure what it meant. Not only did I not know what Poisson Brackets are, but I didn’t understand what McCubbin was talking about when he referred to Dirac’s “leap,” which both he and Dirac characterize as a “fundamental assumption.” McCubbin states that the reason for the assumption is that Dirac had only proved the connection (between the commutator and [tex]\small \frac{ih}{2\pi}[/tex] times its Poisson Bracket expression) “in a particular limit, using, ironically, [Bohr’s] Correspondence Principle,” but “assuming the equality not just in some limit of large quantum numbers, but always!” What does this statement of McCubbin’s mean? I don’t understand the point he’s making here clearly enough to appreciate the exclamation point at the end of it. I truly wish I could find someone who could explain it to me.
Nevertheless, in my struggle to understand it on my own, I think I’ve discovered something significant about the relation between the two systems of physical theory, and I think, not unexpectedly in hindsight, that it has to do with this confusion, in LST physics, of the direction of vectors with the “direction” of scalars. The first thing I needed to understand in McCubbin’s statement was: why is Dirac’s use of Bohr’s Correspondence Principle (CP), in proving the connection, “ironic”? The CP is a way to explain how classical physics relates to quantum physics, but, according to McCubbin, this principle is “mathematically fuzzy” compared to the “precise, elegant, and powerful” formalism of Hamilton:
To recapitulate: in the Correspondence Principle limit, in which classical and quantum descriptions should coincide, the difference between the Heisenberg products of two quantum quantities X,Y becomes equal to [tex]\small i\hbar\left(X,Y\right)_{PB}[/tex]…This was just the kind of connection that Dirac was looking for: in place of a strange looking multiplication rule and the mathematically somewhat fuzzy Correspondence Principle, the Hamiltonian formalism was mathematically precise, elegant, and powerful.
Bohr describes his CP in a 1925 paper, as follows:
Nevertheless, the visualization of the stationary states by mechanical pictures has brought to light a far-reaching analogy between the quantum theory and the mechanical theory. This analogy was traced by investigating the conditions in the initial stages of the binding process described, where the motions corresponding to successive stationary states differ comparatively little from each other. Here it was possible to demonstrate an asymptotic agreement between spectrum and motion. This agreement establishes a quantitative relation by which the constant appearing in Balmer’s formula for the hydrogen spectrum is expressed in terms of Planck’s constant and the values of the charge and mass of the electron. The essential validity of this relation was clearly illustrated by the subsequent test of the predictions of the theory regarding the dependence of the spectrum on the nuclear charge. …
The demonstration of the asymptotic agreement between spectrum and motion gave rise to the formulation of the “correspondence principle”, according to which the possibility of every transition process connected with emission of radiation is conditioned by the presence of a corresponding harmonic component in the motion of the atom. Not only do the frequencies of the corresponding harmonic components agree asymptotically with the values obtained from the frequency condition in the limit where the energies of the stationary states converge, but also give in this limit an asymptotic measure for the probabilities of the transition processes on which the intensities of the observable spectral lines depend.
We can see from this that the CP relates to the “probability of the transition,” as well as to its energy, or frequency, per se. In fact, we can say that the intensity of the transition frequency is as important in the calculations as the frequency itself, and that the CP allows us to say that, as the number of quantum states grows, the quantum calculations, based on quantum numbers, will converge with the classical calculations, based on Fourier series expansion, like those that Heisenberg obtained. In other words, the probability for each stable state to change into another stable state is altered by adding or removing energy from the system. The idea of probability in a discrete system is tied to the idea of intensity in a classical system by the CP because, while the intensity of the radiation of a given transition can be calculated classically as the sum of a certain number of terms, if there are enough of them, “in the [selected] limit,” the intensity of the radiation in the same transition corresponds, in the quantum calculation, to the number of times the emission event occurs.
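Bohr’s “asymptotic agreement between spectrum and motion” can be put in numbers. In this hedged sketch (standard hydrogen formulas, not the CP derivation itself), the quantum frequency of the n to n-1 transition converges, for large n, to the classical orbital frequency of the electron in the n-th Bohr orbit, which works out to twice the Rydberg frequency divided by n cubed:

```python
# The Rydberg frequency R*c for hydrogen, in Hz:
RYD_FREQ = 3.2898419603e15

def quantum_freq(n):
    """Photon frequency for the hydrogen n -> n-1 transition."""
    return RYD_FREQ * (1.0 / (n - 1)**2 - 1.0 / n**2)

def classical_freq(n):
    """Orbital frequency of a classical electron in the n-th Bohr orbit."""
    return 2.0 * RYD_FREQ / n**3

for n in (2, 10, 100, 1000):
    print(n, quantum_freq(n) / classical_freq(n))  # ratio -> 1 as n grows
```

For n = 2 the two disagree by a factor of three, but by n = 1000 they agree to better than a fifth of a percent, which is exactly the “limit of large quantum numbers” in which Dirac had proved his connection.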
Thus, a correspondence between the “rate of fire” in the quantum state transitions and the number of stationary classical states that are summed in order to make up the same level of intensity in the transition establishes a “fuzzy” correspondence between the two concepts. Soshichi Uchii, in his “Seminar on Bohr,” illustrates this relationship graphically (see: http://www.bun.kyoto-u.ac.jp/~suchii/Bohr/correspond.html).
However, Dirac’s assumption, characterized by McCubbin as a “leap,” is ironic, I guess, because it led to a new concept that replaced the CP, even though, without Heisenberg’s reliance on the CP, it likely never would have been discovered by Dirac. Ok, so that explains the “ironic” aspect, but now we want to know exactly why Dirac’s new, more “precise, elegant, and powerful” concept, based on the Hamiltonian formalism and the assumption that the commutator is [tex]\small i\hbar[/tex] times its Poisson Bracket expression, makes it possible for quantum mechanical results to “simply pour out.”
We’ll discuss that topic later.