
INSTANT CONTINUUM

A philosophical rebuttal of contemporary scientific theories concerning forces and
energies, light and matter

by Francisco Damián Folch Torres

© Copyright by Francisco Damián Folch Torres, 2004, 2006. All Rights Reserved.

TABLE OF CONTENTS

CHAPTER I: CAUSE

CHAPTER II: REALITY

CHAPTER III: ORDER

CHAPTER IV: MOTION

CHAPTER V: TIME

CHAPTER VI: FORCE

CHAPTER VII: WAVE

CHAPTER VIII: PARTICLE

CHAPTER IX: SPACE

CHAPTER X: EFFECT

CHAPTER XI: THEORY

CHAPTER XII: IMAGINATION


CHAPTER I: Cause

“Science advances only by making all possible mistakes;
the main thing is to make the mistakes as fast as possible
– and recognize them.” –John Archibald Wheeler

“One of the hardest tests for a scientific mind is to discern
the limits of the legitimate application of the scientific
methods.” –James Clerk Maxwell

I am not a Columbus but more so a Vespucci. I am not a discoverer navigating
uncharted territory but rather a cartographer, demarcating colonized terrain.

I am not a scientist, presenting a paper on new findings, nor a theoretical physicist
speculating on possible new findings. Nothing new is discovered here, but there are
many new explanations. What I write here, I write as a philosopher, a cartographer of logic.

Science teaches that truth is revealed by observation and that observation is verified by
repetition and further observation. Philosophy seeks truth through contemplation, a
rather more subjective appreciation of reality.

Knowledge is gained through observation and reasoning, written down for others to learn
and analyze—a distillation of Truth—Humanity’s process towards Understanding.

As we transcend our perpetual Age of Darkness, Science should enlighten us towards
the way of Truth. And so we study the writings of our ancient fathers and the article by
the next Nobel-aspiring scientist in the latest magazine in order to learn. But the total
acceptance of Science will not come until attitudes and philosophies change to
accommodate the ignored truth about our mind’s imaginary realm. Historically, any
explanation of Reality had to reflect an accepted belief in gods and their spiritual world,
which commanded human acceptance of reality. It is a biased interpretation of reality by
the fancies of the human intellect, reflecting emotions and encompassing our abysmal
ignorance. For the European mind this meant an acceptance that observing and learning
the way things behave did not take credit away from God’s command over the physical
world. Observing the orbits of planets did not affect God’s hand in propelling them.
These old beliefs persist still today, even as the explanations of Science have loosened
God’s hand in swirling hurricanes and moving tectonic plates.

Understandably! There are far too many things that still remain unknown, which the
infinite powers of gods can simply explain. What else can so easily explain the
appearance in nature of a molecule as complex as deoxyribonucleic acid? Or, what
can explain the containment of the seemingly eternal universe? God suffices for many as
one appeasing answer. My answer is the all-encompassing Universe itself.

But this is not a book on the debate of God versus Science. My position is that of a
realist: that the nature of things is revealed through observation (measurements). We
only need to understand what we observe, thus we experiment.


Currently, scientific observation has peered into Nature like never before, but current
theories lack congruency. It has been repeatedly recognized in scientific books and
magazine articles that current theories on the behavior of light and its effect on matter at
the atomic scale fall short of being congruous and consistent with new observations. The
solution has been to modify standing theories in order to accommodate new findings,
regardless of whether these are presented as exceptions or based on theoretical phenomena.

This book comes as a synthesis of many current texts on Relativity, Quantum, and
cosmological theories. It concerns the definition of a new model that explains
experimental observations in much simpler terms. But such a proposed model stands
against formidable convictions supporting the standing mainstream theories, regardless
of how complex and unintelligible these have become.

In order for the proposed model to be accepted, the reader must first analyze the basic
premises upon which Relativity, Quantum, and cosmological theories are based.
This requires the reevaluation of old experiments from which these premises were
established. Furthermore, it should then be indicated how these premises complicate
current models, and how their elimination simplifies things.

The main difficulty in accepting the model proposed here is that it starts from the premise
that is most unpalatable given currently accepted theories: the subjectivity of time. Time is
a descriptive term so innate to our way of thinking that it is difficult to conceptualize it as
anything other than objectively real.

Time is not real, at least not in the sense Albert Einstein prescribes it, as a fourth
physical dimension of the structure of Nature, but exists more so as an abstract
conceptualization of our mind, a logical term employed in the understanding of motions
and their relationship with other motions and the environment. This definition of time is
not new, but after Special Relativity such a classical argument would seem groundless,
and few scientists would risk adopting so old a premise. Time became a dimension of
the physical world, affected by motion or gravity. However, something very interesting
(logically speaking) happens when one explains the phenomena of Relativity without
time: it resolves certain inconsistencies, especially of congruency with other fields of
Physics.

Without time, the universe becomes one continuous three-dimensional field, the limits of
which are determined by the extent of light and matter. Regardless of dimension or
shape, the universe changes as everything tends towards a balance of conditions. But
as these changes take effect, the universe remains one. No infinite parallel universes or
endless chains of past and future realities. We are always at present; the past is only
recorded or remembered, and the future will be a consequence of current conditions,
only for us to speculate upon or affect. Subjective time falls within the domain of
Philosophy but is a view Science should embrace as a truly objective perspective.

It is the purpose of Science to define the conditions of existence and determine how
changes occur. If we understand, we can thus explain—somewhat more.

This work scarcely goes as far as presenting, with few detailed explanations and brief
discussions, the various experiments by which basic scientific discoveries were made
and from which fundamental logic was deduced at the beginning of the Twentieth
Century for the construction of Relativity and Quantum physics. It is desirable that the
reader have some prior knowledge of the fields of Relativity and Quantum,
though the style and language used are geared to the general audience. Regrettably, the
inherent nature of the topic makes this text a rather dense reading.

A quest to find a theory that will unify all forces and integrate Relativity and Quantum
physics has been in procession ever since these theories were presented, driven by
a portentous overview—an extension of the unification of magnetism and electricity into
one force, the electromagnetic. Yet I am troubled by the current attempts at such a
unification theory, as have been presented by even the most respected scientists in the
world. Many of these ideas use such unorthodox thinking that they border on the
metaphysical, among the most infamous of these, superstrings and their ten or twenty-
six dimensions of space.

I use my license as a philosopher to write a book on science and present some
inconsistencies I believe exist in the way some experiments have been
interpreted—experiments which are cornerstones for modern physics. These have been
left unquestioned as further experimentation revealed no major discrepancy to the
general theories; only minor adjustments were conceived in order to concur with
each observation.

I do not claim to be a scientist. To be a scientist implies conducting experiments, a
procedure I have not employed here. Every argument I lay before the reader is strictly
based on experimental results that have been published by various scientists. This is not
a publication of new findings; it merely takes existing experiments and reinterprets the
data accumulated. Like differentiating between a “half full” and a “half empty” glass. The
purpose of this book is to present an alternative model using the same information other
scientists have had at hand.

Interested in the sciences and as a strong advocate of scientific thinking, I pursue an
understanding of the world in the most factual and logical manner. Yet the views given in
this work are phenomenological; that is to say, they depict Nature not as scientists do,
through numbers, but as philosophers do, through arguments. I believe in a physical world
of causes and effects, and in our ability as humans to recognize through our senses and
our minds both the direct cause and the indirect effect of every phenomenon in the
physical world. That the world in which we exist is strictly physical in all its
characteristics, but that our cognitive ability, our mind, is capable of representing our
surroundings as an abstract conceptualization.

By that same mechanism by which we rationalize, the mind is also capable of making
relationships and associations that, although logical, are outside the realm of the physical
world (i.e. love, beauty, coincidence). This ability to imagine can be so detailed and so
full of realistic correlation with our complex world that the results can become convincingly
real. This ability to imagine makes us sometimes susceptible to conceptualizing
erroneously. So science must stand as a beacon of light, a lighthouse to indicate the
way to the land of truth in a sea of confusion. Science screens facts from fantasies, to
find out and distinguish between what is actually real and what seems to be. Science is
the way towards Truth, regardless of how hard this process may be; for it will take the
genius of enlightened individuals and the powerful tools employed in scientific methodology
to formulate the true relationship of things. It is a process that will prevail over all ideas,
beliefs and opinions, contradictory or not, towards the gain of knowledge.

How does philosophy play into all this? Concocting a theory is, in a narrow sense,
philosophizing. I use the word “narrow” to give credit to the specificity and
standardization employed in scientific theorizing, strictly elaborated from experimental
data and mathematical logic. Philosophy, however, is at liberty to be more flexible and
encompass the metaphysical. Yet this flexibility lies solely in the definitions of terms:
while proper logic is generally maintained, by accepting fictitious definitions the whole
logical construct becomes metaphysical. For this work, however, the employment of
sound logic and scientific facts has been the primary concern, as an attempt to develop
a scientific logical construct.

In order for a theoretical model which takes time as a subjective attribute to be
accepted, regardless of individual convictions, every physical phenomenon relating to the
field must be explained consistently. This requirement stands foremost in the
development of the alternate model. But the presentation of such a model could not be a
straightforward act, since some erroneous preconceptions derived from the objectivity of
time must be clarified first. For example, the Second Law of Thermodynamics, which
supports the so-called “arrow of time”, would have to be scrutinized.

Three precepts have been adopted along with timelessness: (a) that dynamic systems
reduce their energy state, following the path of least resistance; (b) that dynamic
systems are intrinsically chaotic and tend towards balance; and (c) that forces and
energies propagate only as fields. None of these ideas is new, though historically
speaking Chaos is the newest aphorism. The older statements are rejected by modern,
well-established theories. The “path of least resistance” seems to be contradicted by
chaotic behavior and most potently by the Second Law of Thermodynamics, while
quantum physics strongly advocates a particle/wave (a force/force-carrying-particle)
duality, which contradicts the last.

Basing the model on such propositions creates a most precarious position: the
likelihood of not being accepted by the scientific community, as it concerns the revision of
the major theories in Physics of the past century. Though it is not an all-encompassing
refutation, it does ask for the re-evaluation of many fundamental interpretations of
experimental data. Thus, it presents itself as heresy to the established Science.

My studies in the fields of Quantum and Relativity started in early high school, when I
became interested in knowing about these two Sciences. So, independently of the school’s
curriculum, I picked up some relevant books and started reading. Right away, the authors
and fathers of these sciences, particularly of Quantum, presented their theories as the
best alternative, while not quite revealing full understanding of the behavior of the atomic
system. The authors present Quantum as the most advanced of theories but not
altogether logical or all-conclusive. Their reasoning is that the underlying physics of the
atom is precisely indeterminable: that one cannot know with exactitude both the
position and the velocity of a particle simultaneously, since these two indeterminables,
the position and velocity of the atomic particle, are affected by observation (e.g. by light
itself). This is called the Heisenberg Uncertainty Principle.

The model I present here is an elaboration that has grown and evolved during the past
twenty years. I cannot say that my work is fully complete, as I am still polishing it to a high
luster. As it is, I see it is shiny enough. It is my hope that the reader finds this model a
new splendor in the view of the world.


CHAPTER II: Reality

“Reality, what a concept!” –Robin Williams, comedian

“If you would be a real seeker after truth, it is necessary
that at least once in your life you doubt, as far as possible,
all things.” –René Descartes

I define “opinion” as: a personal conceptualization of reality, which may or may not follow
proper logic and non-subjective observations. Those opinions that do concur with these
specifics I denote as “reason”, that is: an idea expressed precisely in a manner that
properly reflects reality in a logical and objective way. For the most part, individuals
will go to great lengths defending their opinion as reason. This is understandable, since
opinions are constructed from the “logical” associations of acquired knowledge, and form
the basis by which the mind and the individual will respond towards the environment.
The opinion a person gives is a reflection of how reality is conceptualized by him or her.
Thus, an opinion is taken by its author to stand as reason, since it stands outside the
capability of the individual to reach another opinion without knowledge of whatever
alternative or additional information might exist. And even then, we have the free choice
of choosing from various opinions, and in a sense choose what seems reasonable,
whether logical or not.

An opinion should depend on three factors to be considered as reason. First, that
adequate (and sufficient) information has been gathered. Second, that correct logic has
been used in correlating or associating the information. And third, that all information is
factual. Any compromise of these factors should disqualify the opinion as just that: an
opinion.

Of course, if information is altogether lacking, it will be impossible to discern what
information is required to constitute an opinion as reason; and inevitably any opinion will
stand as reason until contradicted by further studies which provide new information. A
reason becomes an opinion when confronted by an opposing idea. For any conflicting
ideas to exist, there must be a discrepancy in some, if not all, of the factors comprising
“proper” reason. Since each of the factors mentioned previously is progressively
harder to prove true, it becomes a very difficult process to scrutinize any opinionated idea
as reason.

Through discussion any differences can be identified. This process is not always easy
and in some cases it becomes almost impossible to fulfill. Opinions are generally based
on a large number of logical relationships, so the first attempt is to verify that both sides
share equal information. If a discrepancy is found, then it is just a matter of agreeing that
the information not shared can in fact be properly included as part of the set of factual
information for use towards the formation of a logical construct. Agreeing to such terms
can sometimes be difficult but it is the simplest of the discrepancies to be specified if not
corrected.

Otherwise, if in fact both sides share all corresponding information, then the discrepancy
may lie in the logic used. Correcting such problems is achieved by
continued discussion, and not only requires the resourcefulness to decide on what is
correct logic but the willingness to accept mistakes and learn from others. Usually this
means the acceptance or rejection of a particular premise.

The last factor, which in most cases is just an implication of either of the first two factors,
is by far the hardest to clarify once identified, since the origin of such disagreement is
not a “lack of knowledge” or a “wrong interpretation” but a disagreement on the validity
of a particular piece of information. Upon disagreement on the validity of any particular
piece of information, both the set of knowledge and its interpretation are consequently
affected. It is the grand purpose of Science to test claims and scrutinize the information
on which they are based. Science pertains to physical phenomena, which only the Natural
seems to exhibit. So this discrepancy is easily enough resolved if it is experimentally
feasible. Yet for Science as elsewhere, premises exist which are never easily discernible
experimentally—a premise being particular information that is taken as truth by general
reasoning and not from experimental findings. The classical example is God, whose
existence can neither be proven nor disproven experimentally. Science dismisses the idea
since it lacks physical expression (a debatable argument, since many believers
attribute both natural events and mental inspirations as influenced by such an outside-the-
physical being). Whether a god is included in a theory or not, in either case a logical and
sound opinion can be reached. Metaphysics regrettably claims there is more than the
physical aspect, which alone Science corresponds to (thus the name).

Even when much is yet to be understood, there is sufficient knowledge of Nature to
establish Science as veracious. Yet Science will always be faced with conflicting views,
serving only to discredit it among the believers of Metaphysics. While our inquiry of
Nature continues, there will be conflicting arguments, many of which might be
metaphysical. Truly there is little physical support for the metaphysical, and most of
these claims have been disproved for that same reason. It is the respectability and
reliability of a robust science that will ultimately dismiss the metaphysics as imaginary,
when no experimental test exists to refute a claim. The believers of myth and religion
could hardly admit such status, claiming that not all Nature is necessarily logical and void
of intentions, which are a thing of the will. It is the difficulty in confirming the objectiveness
of certain premises that makes the third discrepancy in the reason-defining logic factors
almost an unreachable resolution.♣


♣ An analogy: a child’s fear of monsters hiding inside the closet in his room, believing
that they could come out at night to give him bad dreams. The parents of the child, as
rational adults, hear their son’s fear, and naturally tell him that monsters are nothing but
a figment of his imagination. The parents know very well monsters do not exist; this is
known through experience and past-surmounted fears. Only if the child accepts the
word of his father and mother, or dares to open the closet door and be convinced, will the
fear cease. But, as long as the child continues believing in monsters, he will have a
reason to fear them. This analogy applies to the general concept of reality, the monsters
being debatable premises and the fear a metaphysical opinion.

Regardless of what our opinions are, Reality is external to our appreciation of it.
Monsters do not come into existence because we believe them to be real. Yet the fear of
monsters can certainly be real as long as the belief is held. Though it will be
maintained only up until the moment when the closet door is opened and it is realized
that they do not exist.

Through our perceptions we form an appreciation of reality. Perception of our
environment is inferred through our visual, auditory, olfactory, taste, and tactile senses.
Logic is, in a metaphorical way, a sixth sense, since it provides an enhanced appreciation
of the physical world by conceptualizing relationships—but it is also capable of
associating various physical effects in ways that have no physical correlation, like:
simplicity, usefulness, efficiency, rarity, or beauty. Such terms are not very scientific but
properly emphasize a particular characteristic and enhance the description of the world.
Through the logical structure we construct in our minds, a reflection of reality is projected
as our thoughts.

What follows is an explanation of a schematic diagram on the relationship between
reality, perception, and appreciation/rationalization.

Figure 1: Perceptions.

The row of squares at the bottom is comprised of objects, forces, actions or anything
that is either empirical, hypothetical, believed or conceptualized as real (information).
The rectangles at the top are major theories (thoughts). Each theory is divided by a
dotted line to denote the idea at the top and the logic underneath. Intercrossing between
the top and bottom elements are perceptions, regardless of whether they are gathered
through instrumentation, our senses, or faith. The dotted vertical lines separate the
domains of Religion, Science, and Metaphysics. Obviously, not every possible premise,
theory or perception is presented here, but an attempt to name most major elements,
with some economy in design, has been made.

It is not my intention to either dismiss or enforce any of the “realities”, but just to keep in
accordance with the definition put forth, so that every alternative is included. The
separation between Religion and Science is founded on the basic premise of the gods
or any other supernatural entity (to include souls), which through faith are believed to be
true and real, but cannot be experienced either through our senses (aside from logic) or
through instrumentation. Faith is not accepted as a scientific perception, as one that can
be verified and reproduced; thus the division between Religion and Science (Metaphysics
holds a similar distinction as with Religion). Religion and Metaphysics have been
separated and placed on opposite ends of Science, not only to symbolize the narrow
and specific criteria of Science, but also to give credit to both the gods and souls which,
as most believers would have it, are not among the mystical and magical superstitions.

For Science, real is everything that can be sensed and measured; for Religion, Science
lacks faith, to sense other things that are real but not palpable by physical means. For
the metaphysical, the universe holds a will and creates through imagination. Science and
Religion are in principle truth finder and truth provider, respectively. Metaphysics is a
truth expander and a truth decipherer. This is stated here only to indicate the extent of
their similarities; there is no intention of presenting a history of both harmony and conflict
between the two. Religion provides truth from the wisdom of a conceptualized supreme
being, learned through faith and guidance of the clergy, and left to be interpreted by the
believers to the best of their abilities, as the diversity and ramifications of religions
indicate. Science, on the other hand, searches for truth in the wonders of Nature,
learned through observation and the scrutiny of experimentation, to be analyzed by
investigators with the best logical explanation, as the unity of the sciences demonstrates.
Metaphysics must find both these ideas insufficient.

In Science, the distinction in the terminology of opinion and reason does not necessarily
exist. Hypotheses and theories are generally presented as reason since they deal with
physical phenomena: a theory being a mathematical and logical description of
observable phenomena, and a hypothesis being an assumption, based on existing
theories, to be refuted or supported by experimentation. However, it is occasionally the
case that avant-garde scientists break the sequence of steps in an attempt to stipulate
additional observations not yet made experimentally but reached through imaginary
experimentation; guesswork on what other secrets Nature keeps. And then they build
theories from untested hypotheses. This sort of mental work is not science but more so
philosophy. And though it is a very powerful tool for the development of Science, it can
also prove to be all too fantastic. This fantasy approaches the metaphysical.

It is possible that a theory be logical in structure and mathematically feasible (which is in
itself an experimental test) but still be physically impossible. For instance, in hypothesis a
magnetic monopole can exist; in reality no such thing is ever likely to be found. Or take,
for instance, a particle devised from the mathematical possibilities of a theory, which in
itself becomes the basis for further devised particles—but such a particle will be
discussed later in this book.


Science is a system by which reason is attained, hypotheses merely raised to be
investigated. Both theory and hypothesis are logical constructs attained by a process of
observation and analysis that is continually put to the test and enhanced by successive
findings and rethinking of its implementation. The methodology of experimentation is
employed as a discrete yet stately projection of our cognitive ability. With the utilization
of more precise, sensitive, and powerful tools we are capable of perceiving, discovering,
and understanding the world in ways more enhanced than our natural abilities would
normally permit. As perception expands, so do appreciation and rationalization,
guiding us thus forth to a fuller understanding of Nature. Naturally, if the experiments are
done correctly, then the observations acquired are reliable; and if the interpretations are
sound and mathematically feasible, then the opinion set forth as hypothesis stands as a
theory of reason. Science’s experimental process of verification through
reproduction or reconfirmation guards against misinformation; yet misconceptions can
still percolate, regardless of whether the information collected is duplicated again and
again, when failures to interpret the observations correctly continue due to the
incorporation of erroneous premises. Erroneous in the sense of not being true to reality,
something pertaining to Nature, which is what Science upholds. So in addition, Science
employs a mechanism to safeguard against misinterpretation by demanding that theories
not only explain in the simplest form all the available observations, but also make
predictions and stand the scrutiny of further tests.

Yet Science is still haunted by another monster more sublime than
metaphysics—ignorance; a circumstance which hinders the faculty to discern fact from
fiction, on whether the idea presented is at fault over knowledge, interpretation or validity
factors. Faced with the problem that Nature is in itself an extremely complex system,
there is so much still to be learned. This gives room to variations in logical constructs
which, for all practical purposes (for lack of information), can constitute theories. What
is always at stake is the question of a premise’s factualness, which will exist even when
there is no supporting experiment or opposing hypothesis. Again, premises are
assumptions that form the basis of a theory, and are generally unverifiable by scientific
methods, thus left to our cognitive intuition for scrutiny; an intuition on perception and
rationalization.

Subjective relationships like tendencies or coincidence are useful in explaining the
physical world, but they can become confused with true perceptions and derail our
ability to reason. Historically, this has been the case, as it was with the observed motion
of celestial bodies.

The retrograde motion of the planets as they appear to move against the background of
stars under the Aristotelian geocentric model could only be explained with eccentrics
and epicycles. So it was with Kepler’s model of the universe, in which he made some
connection between the harmony of all celestial motions and the number of perfect
solids; a connection that had no physical correlation and that he later dismissed. Modern
Science is not exempt from perceptual confusion, and for such a reason experiments are
continually being developed to verify and certify any claim. Science, nevertheless,
provides a very aesthetic tool to resolve conflicting theories. Provided that two or more
ideas attempt to explain a particular concept or phenomenon, Occam’s (Ockham’s) razor
dismisses the more complex. This, however, is not always a simple criterion to apply, nor
does it necessarily justify or warrant the least complex as the correct one. But while no
experimental information exists to determine a correct choice, it is a reliable weapon.


What is being conveyed here is that a theory that would seem very logical and certainly
mathematically feasible may in fact be wrong. This not only gives philosophy a place in
assisting in the advancement of science but also a justification for the work here presented.

Many inconsistencies have been raised in recent years that jeopardize contemporary
theories concerning light, atoms and the formation of the Universe. So I propose an
alternate description, based on the same experiments published by scientists.


CHAPTER III: Order

“In essence, nature is simple.” –Hideki Yukawa

“Here then is this principle, so wise, so worthy of the
Supreme Being: Whenever any change takes place in
Nature, the amount of action expended in this change is
always the smallest possible.” –Pierre Louis Moreau de
Maupertuis, 1732

Many philosophers and theorists have seen the tendency of an isolated system towards
disorder as support for the irreversibility of the arrow of time (which will be discussed in a
later chapter). The problem with the Second Law of Thermodynamics, which formulates
such a tendency, is that it is always in need of a new excuse. In every field of Science,
organized structures and systems form. From the formation of galaxies, and the
evolution of genes, to the grandest demonstration of order of all structures (as regarded
by some), the human brain—the Second Law is confronted with an onslaught of
violations of its policy. Surely this Second Law should be left only for the study of heat
engines, for which the principle was originally deduced, to explain the unavoidable loss
of energy to friction.

Isolated systems per se do not inherently have this incessant tendency towards
disorder; on the contrary, external influences are the cause of disruption on most
occasions. For instance, a building left isolated, since it is a static structure, will maintain
its composure indefinitely; it must suffer weathering from rain, wind and use
in order to erode. As for dynamic isolated systems, what is intrinsic is an impetus toward
a least energetic state through the path of least resistance; whichever configuration such
a state leads to, it is a process by which the least amount of energy is utilized (or the
largest amount of energy liberated). It is also a chaotic process.

From the surface of least area of bubbles to the tendency for high pressure to expand
turbulently, Nature demonstrates an economy in energy expenditure. Dissolution and
homogenization are inevitable processes while they are the least resistant route;
otherwise precipitation or dispersion would occur. Some compounds are soluble in
water, while the polarity of other molecules, like oils, makes them hydrophobic. It should
be noticed that under the terms of least resistance, at no moment are order or disorder
presented as factors in the dynamics of these systems. What drives a dynamic system is
strictly its energy components and how each object or particle within it relates to the
others and to its surroundings.

Reluctantly, scientists and philosophers cling to this Second Law of Thermodynamics,
the intrinsic increase of Entropy, as the inseparable sibling of the First Law of
Thermodynamics, the Conservation of Energy Law, which truly remains inviolable
towards infinity. This stubbornness to maintain the validity of the Second Law forces
philosophers and theorists to develop tangent loops of reasoning to accommodate into
the process under investigation a violating decrease in entropy whenever order is
observed. These tangent loops of logic go by the names of: Rayleigh-Bénard
hydrodynamic instability, Turing instability, Hopf instability, etc. The occurrences of order
in Nature, though very recurrent, are regarded as anomalies, for they contravene the
Second Law. So, for instance, Coveney and Highfield indicated in their book The Arrow
of Time that systems which are far from chaotic must be explained through Glansdorff
and Prigogine’s minimum entropy production theorem, which is an “approximation which
makes systems far from equilibrium [that is, ordered systems], look and behave locally
as a good-natured patchwork of equilibrium,” and “the remarkable fact is that while far
from equilibrium systems the global entropy production increases at a furious rate,
consistent with the Second Law, we can nevertheless observe exquisitely ordered
behavior. Thus we must revise the received wisdom of associating the arrow of time
[their main theme] with uniform degeneration into randomness.” But they add, “the only
reason why the criterion is not a universal rule is that there is an enormous range of
possible behaviors available far from equilibrium [referring again to ordered systems].”i

To further illustrate the sort of tangent reasoning that attests in defense of the Second
Law, theorists have concocted a state of “super-symmetry” to have existed prior to or at
the origin of formation of the finitely conceived infinite Universe. Such a state of perfect
symmetry is made necessary in order to allow the primordial cosmos to be much more
organized than it is presently observed, all the way up to its most perfectly symmetrical
origin of what is eternally present and truly has no origin, no beginning. The logical
extension of this proposition is the eventual death of the cosmos at maximum disorder
[at a perfect entropic equilibrium]. Our present state is somewhere in between. When it
is generally agreed that organized galaxies and solar systems [far from entropic
equilibrium] surge from the condensation of giant gas nurseries [at maximum
equilibrium], it seems absurd that Nature must follow such a sinusoidal evolution in the
formation of galaxies when they finally should die a chaotic death [at maximum
entropic equilibrium]. The following graph illustrates this irrationally proposed
behavior, which could in fact be extended to continue its sine-wave evolution back to
perfect symmetry if the Big Crunch is taken into consideration.


Figure 2: Apparent Sinusoidal Entropic Evolution of the Cosmos in the Galactic
Formations.

Another example is genetic evolution, which is generally accepted as primarily driven by
chance mutations. Yet the continual enhancement observed in the evolutionary process
cannot be dismissed as merely an increase of disorder in the genetic code. On chance
alone, the rise of a creature as complex as a human being would definitely seem
impossible; and in fact it has been observed by many religious advocates that such
improbability supports their rightful belief in the occurrence of life by design of a
supernatural creator. Yes! The Universe.

Even for the atom-splitting realistic philosopher or theorist, the process of evolution down
to the descent of Homo sapiens could be explained by sheer numbers: billions of years
of continuous enhanced reproduction. Considering the prolonged period and sheer
volume of such an evolutionary process as it has taken place on our living planet, while
at the same time recognizing the feasibility with which basic organic constituents for life
are formed in the laboratory (Stanley L. Miller and Harold C. Urey♣), the opportunity for
deoxyribonucleotides to have formed seems likely. And as long as conditions for
the existence of life are maintained, life will continue to enhance herself.

These continual elaborations towards complexity could still be perceived as an increase
in entropy. But chemical reactions between chance encounters of two molecules do not
constitute an increase in entropy; entropy is not probability; chemical reactions follow
strict physical laws which are the same regardless of where they take place (given, of
course, similar conditions). Given an ocean full of simple genetic building material, the
large volume increases the chances for enzymes to link the appropriate proteins into the
reproduction of large ribonucleic chains. This mechanism of reproduction allows for
errors in the process, which are generally due to external influences (e.g. ultra-violet
light); but by the same token, variety proliferates. This could again justify an increase in
entropy, but it is a very narrow appreciation. There is a will to the nature of the genetic
code that drives all living organisms to be with purpose and reason of form, giving to all
the will to live. Infinite in forms and functions, but each creature a design nonetheless.
Variations may be seen as errors giving way to more elaborate configurations as
well as some dead ends.♣♣

♣ http://www.accessexcellence.org/WN/NM/miller.html &
http://www.amnh.org/education/resources/rfl/web/essaybooks/earth/p_urey.html

These last two examples might be a little too much for the stomach, so I include a more
palpable example concerning a recently developed experiment, where precisely the
Second Law of Thermodynamics is most violated by the ordered formation of highly
symmetrical molecules. This regards buckminsterfullerene and the generation of the rest
of its consorts. (Harold Kroto from the University of Sussex, and James Heath, Sean
O'Brien, Robert Curl and Richard Smalley from Rice University, discovered C60 and the
other fullerenes, earning them the 1996 Nobel Prize in Chemistry♣♣♣). These magnificent
carbon molecules are created either by vaporizing carbon films with a laser or by setting
an electric arc between two graphite electrodes, either of which takes place inside a
pressurized helium chamber. These most symmetrical molecules emanate with surprising
propensity from the very disordered state of the freely suspended carbon atoms within the
inert helium. How this is so can be explained by molecular physics alone, without the
need of including some sort of strange thermodynamic explanation to support the
compelling state of “least entropy”. As it is universally observed, free carbon atoms attract
with a proficiency to form aromatic links. The geometric imposition of an occasional
pentagonal arrangement is to curve adjoining hexagons into spheroids. Under the
prescribed circumstances, the tendency towards a path of least resistance is manifested
not only by the proficient formation of aromatic rings and an instability of adjacent
pentagonal rings (thus resisting malformations), but also in that these pentagonal rings
form in order to reduce the surface area of the molecule to a minimum, increasing in turn
its stability.ii

I shall make reference to one last article in which the authors’ main topic is natural order,
but again in conflict with their conviction that things should inexorably tend towards
disorder. “Order in nature would appear to be the exception, not the rule. The regularity
of the solar system, the complex organization of living things and the lattice of crystals
are all transient patterns in a grand dissolution into chaos. The prevailing theme of the
universe is one of increasing entropy,” and they go on presenting a “particularly
fascinating” order in nature: patterned ground. Krantz, Gleason, and Caine conclude by reflecting

“the wonder of patterned ground is not so much how it happens but that it happens at all.
The same can be said of the more familiar regularity of snowflakes. An element of
mystery will always attend phenomena that unite the precision of geometry with the
vagaries of change.”iii♣♣♣♣

♣♣ So discomforting—that the existence of life is a clear violation of the Second Law of
Thermodynamics. “It is important to note that the entropy may decrease in a system that
can exchange energy with its environment. The emergence of life on earth represents a
decrease of entropy, which is allowed by thermodynamics because the earth receives
energy from the sun and loses energy to outer space.” [Dreams of a Final Theory, S.
Weinberg, First Vintage Books, 1994, p. 286].

♣♣♣ http://euch3i.chem.emory.edu/proposal/www.inetarena.com/~pdx4d/synergetica/eja1.html &
http://en.wikipedia.org/wiki/Fullerenes

Chemistry demonstrates a different violation of the Second Law of Thermodynamics with
an array of endothermic reactions♣♣♣♣♣; coldness and freezing are regarded as “far from
equilibrium”. Instead of attributing such reactions to further exceptions, all chemical
reactions, regardless of whether they are endothermic or exothermic, can be explained
without any inconsistencies by the tendency to reach the least energetic state along the
path of least resistance.

This tendency for dynamic systems to change through the path of least resistance can in
many events seem like an “increase in disorder”. For example, adding red dye to water
will ultimately end as a homogeneous coloration of the solution, which is generally
regarded as an increase in the disorder of the system, since it is taken that the dye, when
conglomerated in a small drop, is at its most ordered state. Such an argument
might be considered true while the dye is isolated, but once it is mixed with the solvent it
no longer holds. Homogeneousness should be regarded as the most ordered state for
the solution. After all, it is at this stage that the system has reached “equilibrium”
—evenly throughout the entire volume, the solution is normalized equally; I see no
reason why this should not constitute uniformity, order or symmetry rather than
maximum entropy.

Le Chatelier's Principle♣♣♣♣♣♣: When a stress is applied to a system at equilibrium, the
system will adjust to relieve the stress—a clear demonstration of Nature’s insistence
towards balance.

♣♣♣♣ Example: http://www.sciam.com/article.cfm?articleID=0005BEE2-3FE9-1E28-8B3B809EC588EEDF&sc=I100322
♣♣♣♣♣ Example: http://www.newton.dep.anl.gov/askasci/chem99/chem99634.htm
♣♣♣♣♣♣ Reference: http://scidiv.bcc.ctc.edu/wv/7/0007-008-le_chatelier.html

Where some would see infinite diversity, others would recognize variations of particular
themes, as clouds are infinitely varied in shape yet categorized by form: cumulus, cirrus,
cirrostratus, mammatus, etc. Order and disorder are properties of pure appreciation, and
their degree or magnitude has no physical constituency. That is to say, Nature does not
discern the state of “order” of a system, which “compels” a change towards an increase
in “disorder”.

Disorder has a direct relationship with the term entropy, which is, as defined by Rudolf
Clausius, the capacity of change. Regrettably, scientists interchange these terms
indiscriminately, since in the majority of the examples used to describe entropy, the
appreciable increase in disorder goes in accordance with an entropic increase. Yet one
must be careful with such terms; that is, entropy is the capacity of change, which is
mathematically described as a ratio of expelled heat over temperature. When and where
there are no differences between the ratios at two different states of the same system,
then such a process of change is considered reversible. Contrariwise, if a difference is
measured, the process is defined as irreversible and tending towards thermodynamic
equilibrium. But there need not necessarily exist a correlation between the energy
transformation of a system and the level of disorder. Take for instance the solar system,
with Earth slowly being dragged towards the Sun as the gravitational tug converts Earth’s
orbital kinetic energy to the Sun’s gain in rotational velocity. The system is completely
isolated from external influences, and losing energy as the Sun radiates. Yet, if and when
Earth is finally swallowed up by the Sun, entropy need not have changed, even though the
system appears a little more organized without us around. Or, take a box containing a
dismantled jigsaw puzzle, where disorder is considered at its maximum, which is in turn
shaken; regardless of how long this takes place, disorder will remain constant. Shaking
the jigsaw puzzle would not necessarily constitute a reversible process, though certainly
some scientists would argue such. Nevertheless, whether the box spontaneously combusts
to hell or the person tires to death from shaking it, disorder will be maintained
constant.
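
For reference, Clausius’s ratio mentioned above has a standard textbook form (the notation below is the usual one, not anything particular to this book): S is entropy, δQ the expelled heat, and T the absolute temperature,

    dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \oint \frac{\delta Q}{T} \le 0,

with equality around a closed cycle only for a reversible process (the Clausius inequality). This is the criterion the paragraph above appeals to: no net difference in the ratio between two states of the same system means the change is reversible.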

Nature thrives on simplicity, even when humans starve at the apparent complexity. The
Universe is certainly chaotic in that it is a non-linear system, but this is not to imply that it
must be mystical and perplexing. The complexity of many systems overwhelms us, and
figuring out its secrets is a challenging process for the endeavor of Science. Beneath it
are unveiled, through observation and logic, the fundamental laws of Nature. As we
progressively expand our knowledge of what interactions are being manifested in
Nature, the very complicated is simplified. This knowledge has been capitalized upon by
the recent discovery that chaos itself is an intrinsic characteristic of systems that evolve
in orderly fashion. The mathematics of chaos reveals that some orderly factor is
responsible for the disordered state of the function. These are dynamic systems that
depend on their own proper configuration to define their consequent state—a constant
recursiveness whose immediacy of response destroys its history into a flux of causes. It
is this self-reference-cause effect that contributes to chaos. But this is how Nature is. It is,
in a single word: evolution. Regardless of whether it involves astronomic, atomic, biologic,
or social systems, all will be appreciated as organized, since their formations, however
complex, are the continuous results of actions taking place in a system through the least
resistant path (governed by specific forces and invariable laws)—developing into
indeterminate variations of common patterns. These are historical processes that will
hardly be seen as causal.

It is not coincidental that mathematical functions like the Mandelbrot set and the Lorenz
strange attractor surprisingly echo systems in Nature. Both the development of a
chaotic set of equations and a physical dynamic system involve a multitude of
variables that change according to the just-prior state of the system. Again, it is the
development of new states, as determined by the system’s own internal structure and
circumstance, that evolves or transforms chaotically. The state and circumstance
of every element involved within a system determines (regionally) how changes are to
occur; hence minute fluctuations can evolve into great differences. Yet this does not
imply that because a butterfly flutters its wings over Florida a typhoon in Malaysia could
result—the presence of the butterfly alters slightly the wind through which it flies, while
the typhoon which ruffles the seas is generated independently, from atmospheric
disturbances of thousands or millions of times more energy. All the butterfly
proves is that we will never determine meteorological events exactly, only be very
proximate to them—just flutters away.
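
As an aside, the Lorenz system alluded to above is simple enough to sketch in a few lines of Python. The following is a minimal illustration, not anything from this book: it integrates the classic Lorenz equations, with Lorenz’s own parameter values, for two starting points that differ by one part in a billion, and prints how quickly they separate.

    import math

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Right-hand side of the Lorenz equations: dx/dt, dy/dt, dz/dt.
        x, y, z = state
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def rk4_step(f, state, dt):
        # One fourth-order Runge-Kutta step for a system of ODEs.
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
        return tuple(s + dt * (a + 2 * b + 2 * c + d) / 6.0
                     for s, a, b, c, d in zip(state, k1, k2, k3, k4))

    def separation(a, b):
        # Euclidean distance between two states.
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-9, 1.0, 1.0)  # differs from a by one part in a billion
    dt = 0.01
    for step in range(1, 3001):
        a = rk4_step(lorenz, a, dt)
        b = rk4_step(lorenz, b, dt)
        if step % 500 == 0:
            print(f"t = {step * dt:5.1f}   separation = {separation(a, b):.9f}")

Both trajectories stay on the same butterfly-shaped attractor, yet the initially negligible offset grows by many orders of magnitude within a few dozen time units; this is the sensitive dependence on initial conditions described above.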


The fact that natural systems can be emulated through mathematical formulations does
not in any way imply that such natural systems can then be precisely extrapolated, as
if deterministic. Though an approximation of the most likely state is possible with
computers, simulations must be kept in continual re-calibration if they are not to diverge
far from the real system being mimicked. The intention behind a simulation is not to
determine the exact future state of a system but more to understand its behavior or the
manner in which it evolves. Naturally, there is an element of
unpredictability in any chaotic system, but patterns that reveal some sort of organization
do develop. This provides some degree of predictability as well as understanding,
though intrinsically with some limitations. Pattern recurrence in chaotic systems is no
substitute for determinism but certainly helps to prescribe ranges within which the
system is most likely to develop—in other words, to be very statistically correct.
Regardless of its uncertainty, chaos is a great support to the irreversibility of events, and
in some ways more so than a deterministic view ever was.

The method of prescribing ranges within which a system is most likely to develop was
first postulated by James Clerk Maxwell, who described the behavior of gases not by the
individual motions of their constituent particles but through the statistical mechanics of
all of these, so as to define the entire gas as a unit. To this, Ludwig Boltzmann’s
enhancement of statistical mechanics serves to discredit the Second Law of
Thermodynamics. Boltzmann’s H-theorem describes the dispersion of gases on account
of the statistical mechanics of the internal collisions of the molecules or atoms in the gas,
reflecting the natural tendency of a system toward a least energetic state—entropy, a
term imported only as an appreciative term to quantize the energy within a gas which
increases dispersion, the capacity of change.
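
For the curious reader, the H-theorem invoked here is usually written as follows; this is the standard textbook statement, with f the velocity distribution of the gas, not a formula original to this book:

    H(t) = \int f(\vec{v}, t) \, \ln f(\vec{v}, t) \, d^3v, \qquad \frac{dH}{dt} \le 0.

Since Boltzmann’s entropy is, up to constants, S = -k_B H, a never-increasing H corresponds to a never-decreasing entropy for a dilute gas of colliding molecules.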

Understanding the underlying laws that govern a system will not render the system
predictable. Knowing that molecules in a gas speed and collide will not determine the
trajectory that a feather will take when it falls through the gas. Nor will knowing genetics
exactly determine heredity. From sunspots to the formation of vortices, to predict
occurrences Science can deal only through probability. Even in chaotic systems, there
will be some degree of predictability through probability. This depends deeply on our
understanding of the system in question. Probability will indicate the size, origin and
trajectory a hurricane should take upon formation, and this knowledge will result greatly
from the understanding of the atmospheric state that generates these systems prior to
their formation.

I propose the adoption of a theorem (postulated as early as the eighteenth century) that
will be more in accordance with observation, a sort of Economy of Energy Law, in that a
dynamic isolated system (static isolated systems do not change) will tend to minimize its
expenditure of energy through a path of least resistance. This will reflect the
overwhelming tendency for systems to organize (i.e., stability), yet keeping in mind that
frictional forces reduce that same efficiency. And, as a result of the self-reference-cause
effect, chaos will result in the evolution of these systems. Chaos is not an impediment to
order.
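
The nearest standard formulation to such an Economy of Energy Law is the principle of least (strictly, stationary) action of Maupertuis and Hamilton, quoted at the head of this chapter; in its modern textbook form, with L the Lagrangian of the system,

    \delta \int_{t_1}^{t_2} L \, dt = 0.

The formula is offered only for comparison; whether the proposed law coincides with or extends beyond this principle is a question the text itself argues.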

Must we stubbornly continue to support the archaic Second Law of Thermodynamics with
its strange loops of reasoning, or can we finally accept Nature as it is? Simple!

Two final notes on this tendency towards a path of least resistance: first, it does not
violate the certain First Law of Thermodynamics, the Conservation of Energy (and
Mass); and secondly, it is in accordance with Newton’s First Law: “every body continues
in its state of rest, or of uniform motion in a straight line, unless it is compelled to change
that state by forces impressed on it.” It should be pointed out that if the Second Law of
Thermodynamics held a congruent relationship with the First Law of Motion, then
systems isolated from external influences would tend toward disorder as the Second
Law of Thermodynamics declares, and inertia would not be maintained but would rather
lose energy to the rest of space.


CHAPTER IV: Motion

“[Physics] shows us the illusion that lies behind
reality—and the reality that lies behind illusion.” –John
Archibald Wheeler

“Henceforth space by itself, and time by itself, are doomed
to fade away into mere shadows, and only a kind of union
of the two will preserve an independent reality.” –Hermann
Minkowski

Motion is a change in position; a relative property determined from an arbitrary point of
reference. With such a definition, it can be deduced that as two bodies approach each
other, deprived of any other reference, it will be indeterminable which among the
two is moving, if not both. What can be discerned is acceleration, that is, forces exerted
upon each object.

The classic studies of Galileo Galilei and Isaac Newton on the mechanics of motion
demonstrated that velocity is not absolute but rather relative to a frame of reference.
For instance, a ball placed inside the car of a moving train seems stationary in
reference to the car but speeds greatly in relation to the railroad. I do not refute this;
nevertheless, it must be emphasized that the components of motion, what constitute
the inertia of the ball, remain the same regardless of any frame of reference, and this is
true even in the simplest of motions. The fact that some motions can be canceled out
by adjusting the reference frame does not eliminate the constituent inertia of that body.
The ball will remain motionless inside the car of the moving train as long as the train
maintains a straight trajectory at constant speed. Any change in the direction or
magnitude of the train’s velocity will cause the ball to continue under its own inertia until
otherwise affected by opposing forces—like the walls of the car.

Newton’s First Law of Motion, which states this tendency for bodies to remain at rest or
in uniform motion unless acted upon by an external force, implies that Nature has no
distinction over velocity but does so over acceleration—acceleration being the numerical
representation of the expression of a force, determined by the change of velocity over a
set duration (time). What is distinct is inertia, which is precisely the intended definition
behind the First Law. Inertia remains unchanged regardless of reference frame. A ball
which remains still to one observer can appear to move to another, yet both observe the
same object, and it maintains the same inertia—regardless of reference frame.

Imagine for instance two children playing pass ball inside a fast-moving spaceship—fast in relation to a very distant planet they might be approaching. The ball appears to have a small inertia, changing direction with easy tosses, yet the same ball, should it continue with the same speed towards the distant planet and consequently collide with it, would express a tremendous release of kinetic energy. Inertia is a direct result of the mass and energy accumulated by the system from impinging forces and that which is intrinsic to it. The ball in this case has indeed gathered additional energy from the acceleration of the spaceship, yet the larger inertia of the ball remains undetectable to the kids, as they too have increased their inertia inside the ship. The two kids were only changing the ball’s inertia within the system slightly. Since we need to quantify inertia for descriptive necessity, it seems then that an absolute origin is needed if we are to maintain the numerical value of the inertia regardless of reference frame. This is arbitrary, as an origin may be established anywhere for any system or trajectory. Maybe a more mundane example will help visualize the distinction.

An automobile traveling at ninety-one kilometers an hour, were it to impact another vehicle traveling in the same direction at ninety kilometers an hour, would exert a force equal to that of a one-kilometer-an-hour impact. That same vehicle, were it to collide against a stationary vehicle, would cause a disastrous accident. In both cases the automobile certainly would have the same inertia. But the difference in inertia between the two vehicles manifests as the energy difference between the two collisions. This is consistent whether or not there is a ground to reference against. Likewise, for the collision with the stationary vehicle, it is irrelevant whether the ground helps to reference the motion of either vehicle. Even if the scenario lacked any reference frame (to determine velocity), the energy exerted in the impact is determined by the difference of inertia between the two bodies involved.
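
To make the arithmetic explicit, here is a minimal sketch (Python), assuming equal, hypothetical masses of 1,000 kilograms for both vehicles; the point is that the impact energy follows from the relative velocity of the bodies involved alone, with no external frame required:

    def impact_energy(v1_kmh, v2_kmh, mass_kg=1000.0):
        """Kinetic energy of the collision, computed from relative velocity alone."""
        v_rel = abs(v1_kmh - v2_kmh) / 3.6   # convert km/h to m/s
        return 0.5 * mass_kg * v_rel ** 2    # joules

    print(impact_energy(91, 90))   # about 39 J: a gentle one-km/h bump
    print(impact_energy(91, 0))    # about 319,000 J: a disastrous impact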

Forces exerted on the system change the inertia (mass and energy) of the system. This is a physical change that manifests independently of reference frame. (True, acceleration is also measured against an arbitrary reference frame in order to quantify the change in velocity, but an unequivocal difference exists; a person can distinguish acceleration with eyes closed but not so velocity—and that is as sensible as science might get.)

The independence between inertia and reference frame generally stands unperceived
since collisions normally considered for study involve few bodies. When many objects
are involved, as is the case when dealing with molecules in a gas, this ambiguity
disappears. A gas will always exert the same pressure against “a flask” regardless of the
reference frame chosen for its definition of velocity. What is taken into consideration is
the kinetic energy of the myriad of molecules as they vibrate and collide against one
another and “the flask”. It is irrelevant if “the flask” is perceived by a moving observer, who—hypothetically speaking, as one might presume—would expect a slight difference in pressure (proportional to velocity) between the front and rear of “the flask”. This could only be true if “the flask” were accelerating. To imply that the motion of the observer affects the pressure the gas makes against the container is absurd even when the observer is accelerating. What is of concern here are the energetic collisions of molecules exerted upon the container; the reference frame should be reduced to a minimum, that is, between molecule and container or between molecule and molecule. The general rule is that only the objects involved in a collision are relevant in defining the impact force, regardless of the reference frame of the observer; so the reference frame can always be reduced to include only those bodies involved in the collision, independent of the observer.

Reducing the reference frame to a minimum destroys absolute space and motion,
deeming them inconsequential. This notion goes in accordance with Ernst Mach’s belief
that the two properties are “pure mental constructs”. (Space here is defined from a particular arbitrary point in space which is common to the observed motion, that is, a point in space or origin which does not move, by definition. Such a point is absolute only for that definition.) Only the fact that this notion is so counter-intuitive makes it seem so objectionable.

The fact seems even less intuitive when dealing with waves. Given a sounding siren atop a stationary tower, a Doppler effect is observed in the wave propagation if there is a difference in velocity between the siren and the listener—say, a pilot aboard a fast airplane. Since frequency defines the energy of a wave, the observed shift suggests that by choosing a reference frame this intrinsic property is affected. This is contrary to the maxim that observation does not denote reality (aside from Quantum physics, which refutes it). Despite this, while a frequency shift is
perceived, the wave emanates at a certain frequency from its source, evenly radiating,
regardless of any observation. The appreciated shift is due to the natural definition of a
wave as a field (a series of continuous waves). One must keep in mind that it is
irrelevant whether the observer or the source is actually moving, since the reference
frame should be reduced to a minimum to determine the energy of the wave. Physically,
the wave does not change, but the motion of the observer (or of a moving source, whatever the case might be) produces a shift in the rate of impingement of the sound waves on the ear or microphone, effectively altering the energy impressed. (This is analogous to the energy difference between the collision of an automobile with another moving vehicle compared to a collision with a stationary vehicle.) Those objects
involved in a collision, in this case the sound waves and the ear, are relevant to the
definition of the impingement rate, or energy. So motion between listener and siren does
affect the energy perceived, a pitch change. The apparent paradox with the frequency
shift of waves occurs as a result of referencing the speed of the sound wave as well as
that of the listener, whether moving or not, against the environment instead of minimizing
the reference frame independent of the environment.

Enter light!

The speed of light is measured by multiplying wavelength times frequency—wavelength determines the distance between each wave crest; frequency determines how many waves pass a specific point in space per second; the product is thus a velocity. For instance, x-rays have a wavelength of about 5 nanometers (5.0×10^-9 m) and a frequency of about 60,000 terahertz (6.0×10^16 Hz); the product of course equals 300,000 kilometers per second, the speed of light in vacuum. Congruently, shifting the frequency by observation affects the appreciated wavelength. For light, since these properties are determined by reducing the reference frame to that between the electromagnetic wave and the detector, the apparent velocity remains constant. Say, for instance, that an observer moves sufficiently fast so as to perceive a blue-shift of x-rays to gamma-rays (a purely hypothetical spaceship): if the frequency of the wave is perceived at 300 exahertz (3.0×10^20 Hz), then its wavelength has been reduced accordingly to 1 picometer (1.0×10^-12 m). Again, the velocity of light remains constant and, most importantly, what permits this “particularly special relativity” of waves is that they are fields, not particles.
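
Written out, the arithmetic of the example confirms the constant product:

    c = \lambda\nu = (5.0\times10^{-9}\,\mathrm{m})(6.0\times10^{16}\,\mathrm{Hz})
      = (1.0\times10^{-12}\,\mathrm{m})(3.0\times10^{20}\,\mathrm{Hz})
      = 3.0\times10^{8}\,\mathrm{m/s}.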

To help the reader visualize this difference when a medium is involved, I return to the previous example of the tower siren and the airplane for a more common analogy.

Sound travels through air at about 330 meters per second (740 mph) at standard temperature and pressure (we will set the standards aside). The pilot travels towards the siren, which sounds off at a frequency of 10,000 Hz. The velocity of the airplane by no means
alters the physical characteristics of the sound waves propagating from the tower, yet
the listening pilot perceives a Doppler shift in the frequency of the sound. Relatively, the frequency increases, but the pilot realizes that the wavelength does not change, only apparently so. The pilot realizes it is an effect caused by the airplane moving towards the waves as they move through the air, which is responsible for the apparent increase in frequency (and its relative velocity, taking the air into consideration).♣ If the airplane is traveling at 200 meters per second, the relative velocity of sound will be 530 meters per second. At such speed, the frequency of the sound wave has increased to about 16,000 Hz. Again, this is consistent in that the wavelength remains constant, about 3 centimeters long. So, because the medium was referenced, a relative change in velocity was observed, which for a wave translates into an increase in frequency.
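
A minimal sketch (Python) of this moving-listener arithmetic, using the chapter’s own numbers (330 m/s sound speed, a 10,000 Hz siren, a 200 m/s airplane):

    def doppler_moving_listener(f_source_hz, v_sound=330.0, v_listener=200.0):
        """Frequency heard by a listener closing on a stationary source."""
        return f_source_hz * (v_sound + v_listener) / v_sound

    print(330.0 / 10_000)                   # wavelength: 0.033 m, unchanged by the motion
    print(doppler_moving_listener(10_000))  # about 16,061 Hz, roughly 16,000 Hz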

Twilight!

There is a problem with current definitions of light: Einstein’s theory of relativity argues that light maintains a constant velocity regardless of the velocity the observer has with respect to the light source, which is true for all fields of waves, but he found it necessary to formulate his Special Relativity once he visualized light as a particle.♣♣ This creates a dilemma, which remains unresolved even with his General Relativity.

Again, the velocity of any wave is the computed product of frequency and wavelength. Notice, too, that together these two properties result in the energy definition of the wave. For light, these properties are measured by means of a photodetector. A relative change between the wave and detector causes a frequency shift, but congruently so shifts its perceived wavelength, thus maintaining a constant product, which is why light is always measured at the same constant speed, regardless of reference frame, regardless of energy shifts.

With light there was an inaccurate assumption made: that, like other waves, it too must travel through a medium, thus the æther was proposed. To prove the hypothesis of the æther, the Michelson-Morley experiment was first conducted in 1887. The two Americans, to their surprise, discovered that such a medium, if it existed, did not affect the relative speed of light; hence the medium need not be there. Light can travel through empty space and does so at a constant speed.

Nevertheless, it was perturbing that light violated classical relativity. This was because the æther never really disappeared from the minds of scientists. Classical relativity remains consistent for particles or waves, regardless. Even so, what seems for light so counterintuitive, a constant speed regardless of reference frame, is itself resolved by reducing the reference frame to a minimum and realizing that waves are fields, not particles.


♣
If it might help the reader visualize this relativity better, imagine instead trains of carts: the frequency at which the carts pass by depends on velocity, not length—which remains constant.
♣♣
Light is perceived by modern physics as both a wave and a particle, which leads to an ambiguity as to the reference of its velocity. If light were a particle, then a change in velocity would be appreciated. Given that no change is appreciated, there is certainly a need for Special Relativity to explain the irrelativeness of such a particle. Although the discussion here, of wave relativity, does help to refute the particularity of light, it will be the objective of Chapter VII to emphasize such.


Exit light!


CHAPTER V: Time

“I am conscious of being only an individual struggling weakly against the stream of time.” –Ludwig Boltzmann

“Our present picture of physical reality, particularly in relation to the nature of time, is due for a grand shake-up—even greater, perhaps, than that which has already been provided by present-day relativity and quantum mechanics.” –Roger Penrose

At this point I would like to bring to the reader’s attention a small detail found in the schematic diagram named Figure 1: Perceptions, in CHAPTER II: Reality. It is a detail that happens to be the basis upon which this work is founded. Among the realities of science, the premise of time has been singled out by a dotted border for the reason that I consider time to be an appreciation which, just as beauty and disorder, considered previously, is not a physical constituent of the physical nature of the Universe.

N. David Mermin indicates that “the mystery of relativity fades, however, when one firmly recognizes that clocks do not measure some pre-existing thing called ‘time’, but that our concept of time is simply a convenient way to abstract the common behavior of all those objects we call ‘clocks’. While such a distinction may sound like splitting hairs, it is remarkably liberating to realize that time in itself does not exist except as an abstraction to free us from having always to talk about this clock or that.”iv

The reason time seems so very real in our minds is that we appreciate the changes in the world through memory and logic. Without recollection there would be no past. Without prefiguration there would be no future. Erroneously, this definition could be extended to state that without any awareness there would not even be a present—yet the present, unlike past and future, is real. It must be recognized that conceiving a future does not create the future, and likewise with the present. The future is only a speculation; we recognize that there is a continuity in Reality, hence we speculate. We can only guess at what the future holds through inference and deduction, and it always comes as a surprise. In whole, our mental ability creates the concept of time; this does not imply that we create the reality we sense. Every instant, impulses from the five senses are integrated into a sense of present, into awareness. So awareness is a consequence of the correlation made between the five senses and logic (which I previously considered as a hypothetical sixth sense). Awareness is our inference of Reality, lying between Perception and Rationalization. It is in our awareness where time resides. But for all we can appreciate from Reality, we cannot prescribe the future. It never is; only the present, forever—real and independent of our thoughts. Reality cannot be created by observation, like Schrödinger’s cat.


The idea conveyed here assimilates what the German philosopher Immanuel Kant attested with his definition of phenomena and noumena, a distinction between Reality and the phenomena we sense.♣

There is a distinction between mental and physical attributes, which must be recognized.
We perceive reality through our senses, and create appreciations with our minds. Mental
attributes that we create from appreciation are only subjective. Failure to realize this will
result in confusion and erroneous theories. So we must distinguish between our sense of
time and the continuum of instances, which our senses perceive. Reality is a flux of
eternal presents, and Time is an invention of the human mind.

Humans have measured time since early history. Keeping track of climatic changes in relation to the heavens was essential for the survival of a society dependent on agriculture. The pursuit of knowledge and understanding of celestial motions provided appropriate solutions to the question of the optimum time to cultivate and harvest.

Time, in essence, is a system for the measurement of rates, and like other systems of measurement it needs a standard. Such a definition can be achieved by the selection of two natural harmonious phenomena, like the celestial motions of our planet. By “natural harmonious phenomena” I refer to cyclic actions whose changes in position (or any other characteristic) return to their original state only to repeat the change once again. Formulating a ratio between the revolution of Earth around the Sun and its daily rotation about its own axis, both cycles that repeat with virtually no loss in precision, sets the standard of measurement—a ratio that is in fact the basis for all our timekeepers. Defining a revolution as a year and a rotation as a day, the ratio of approximately 365.24 days per year is derived. From there, a further division of the day, into hours and seconds, reduces the prolonged cycle of the passing days to a more appropriate scale.

With the advancement of science arises a need for higher precision by which to measure shorter periods (i.e., faster events). From the swinging of a pendulum to the vibration of quartz to the undulation of light-waves, all are harmonious motions by which to reference other, not necessarily harmonious, motions. In cases where non-harmonious motions or events are referenced, time is calculated by counting how many repetitions of the harmonious motion were observed during the completion of the non-harmonious action. Ideally, harmonious motions are selected as standard units for reference and the computation of time. The physics of waves gives light an apparent motion even at the most reduced reference frame, and it is well suited as a standard of measure, as it is used in modern instrumentation.♣♣


♣
“Duration in time is the only thing we can measure (however imperfectly) by thought
alone, with no input from our senses, so it is natural to imagine that we can learn
something about the dimension of time by pure reason. Kant taught that space and time
are not part of external reality but are rather preexisting structures in our minds that
allow us to relate objects and events. To a Kantian the most shocking thing about
Einstein’s theories was that they demoted space and time to the status of ordinary
aspects of the physical universe, aspects that could be affected by motion (in special
relativity) or gravitation (in general relativity). Even now, almost a century after the
advent of special relativity, some physicists still think that there are things that can be
said about space and time on the basis of pure thought.” [Dream of a Final Theory, S.
Weinberg, p. 173].


Since in order to detect motion it must be referenced, it can be deduced that any change
whatsoever in the universe defines time. Even though this is the case, our notion of time
involves cross-referencing. Let us first start with a simple example. Imagine if you will,
staring at a motionless particle amidst the infinity of space. Neglecting heartbeats,
breathing and every other living cycle for the effectiveness of this imaginary experiment,
there would be in essence no possible reference of motion—sure, light, but we shall
neglect this reference too. One could be staring at a still picture and it could not be discerned. Turning to a somewhat more complicated environment, imagine living in a windowless room, having exclusively a perpetual pendulum as a reference of motion inside the enclosed room (again neglecting the living cycles). The room by itself provides adequate reference against the pendulum, so that motion is apparent without the need of any additional action. In effect, time is defined by the repeated cycles of the swinging pendulum. But a swing is a swing is a swing. Outside the room, time, defined by the days and the routines of life, passes unreferenced. A cycle of the pendulum, regardless of whether it lasts a second or a day, will still be only a full swing. Here, although there is the perception of a harmonious cycle and time can be defined by counting swings, it lacks the cross-reference that would provide relevant dimension. A final example that illustrates this need for cross-referencing is the use of interlinked gears. Taking into consideration an arrangement of gears, each particular gear moves as a consequence of the others, always maintaining their ratio of rotation according to relative diameter. It is irrelevant how fast the gears are spinning; the ratio remains constant, so it lacks any significance as a timekeeper if it cannot be cross-referenced by other actions. One will count rotations without relevant meaning, that is, without dimension, thus lacking a sense of time.
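
A minimal sketch (Python, with hypothetical gear diameters) of the point: however fast the gear train is driven, the ratio of rotations is fixed by geometry alone, so counting turns carries no duration without an outside cross-reference.

    def driven_turns(driver_turns, driver_diameter_cm, driven_diameter_cm):
        """Turns of the driven gear, fixed purely by the ratio of diameters."""
        return driver_turns * driver_diameter_cm / driven_diameter_cm

    for rate in (1, 10, 1000):                      # driver turns per unspecified interval
        print(rate, driven_turns(rate, 10.0, 5.0))  # always the same 2:1 ratio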

In reality, it will never be possible to experience any motion, however simple, without additional cross-references; we even take our life-cycles as references, which makes the sense of time so determinative.

The description of the physical world has always been associated with time. In no other way could motion and dynamic attributes be described so simply than by a unit ratio of duration. This abstract appreciation of our world has been developed so profoundly in our minds and in our way of speaking that it transcends every other appreciation, even those for words and numbers. This is so much so that it seems there is no other way to define time but as an a priori.

So then, why do living organisms age and die, if there is no time but in our minds? Since the whole of the Universe progresses irreversibly, we suffer such processes. The reality is that the Universe is a dynamic system that progresses in a flux of causes; a chaotic process of self-reference and self-definition where the subsequent state of any system derives from its present state and circumstance. Biological processes likewise suffer changes through their regeneration and the effect of the environment.

♣♣
“The duration of 9,192,631,770 periods of the radiation corresponding to the transition
between the two hyperfine levels of the ground state of the cesium-133 atom,” is the
definition of a second adopted by the General Conference of Weights and Measures
(1967).

Cells regenerate to promote healing where there is injury, replacing old or dead cells with new ones. This process, known as mitosis, is not the only form of cell division. A more complex process known as meiosis leads to the generation of offspring, allowing for the substitution of the entire organism, which will continually suffer from exogenous degenerative forces.
Life is perpetuated because subsequent generations are entrusted with surviving traits,
even as the unsuccessful (through the process of natural selection) fail to reproduce. We
age and die because our bodies are machines in process. And we need not reproduce,
but we have been entrusted to do so.

The heart beats to circulate blood, to enrich it with oxygen and nutrients, replenishing the body with energy. This process is not inexorably propelled by time; rather, in its action of living it has a rhythm, a duration. A beat follows another to sustain life, as designed by evolutionary processes. A sort of innate fear arises after realizing that our heart palpitates causally and inextricably until death. Like wire-puppets no longer animated by strings, our lives are liberated from the all too instinctive feeling of time.

The concept “time” is indispensable to the understanding and expression of the physical world. It is not my intention to dismiss time altogether; it is a very important appreciation, one that is deeply rooted in the way we communicate, rationalize, and live. But if science is to represent reality more objectively, we must scrutinize current theories and redefine those hypotheses that have been formulated from the premise that time is a physical aspect of the universe. As our understanding of the Universe has become more elaborate, the standing concept of time as a physical property has become a very cumbersome premise. Nevertheless, as strictly an appreciation, time serves extraordinarily well as a means of computation. Time is a most appropriate scalar unit by which to relate the changes in our world as well as record duration of motion and speed.

The ticking of a clock, the sprouting of a seed, and the weathering of mountains are
consequential effects, caused by the internal state of the system in relation to the
external circumstances of the environment. I must stress the words ‘cause’ and ‘effect’
as these are the motives for every interaction and change. Reality is a continuous flux of
interactions. Every change, from quantum mechanics to the dynamics of galaxies, is
defined by the state and circumstance of the system. Even if as a consequence of a
system’s complexity our effective understanding of every interaction is limited, it does
not alter the fact that these remain causal events.

There is, regardless of our limitations to predict, an irreversibility of events. By the statement of irreversibility I do not mean to exclude reversible events; like the pendulum’s swing, these would not be retrogressing in time but merely compelled to retrace their route in the opposite direction, then once again and again. There are no such things as time reversal or time-symmetry violation, if not as an appreciation. It is against all observation and logic that were the universe to suffer the Big Crunch, we would retrace our lives once again but backwards, rising from our graves to finally die in our mother’s womb. This is totally absurd. If the Earth, for some fictitious reason, were all of a sudden to trace its orbit in the opposite direction, all it would mean is that synagogues would have to face the other way, their entrances looking to the west, to greet the sun of a new day as it rises in the new direction of dawn. Just because projectile motion traces the same symmetrical parabolic path were it to be seen in reverse (as in a photographic film strip) does not mean the same extends to every other event in nature. Apples do not un-rot, so that they may jump up from the ground and stick themselves at the precise location where the stem happens to match perfectly with a twig in the tree. Nature as a whole (and in all particulars) is impossible in reverse. A collapsed building would not rebuild itself from rubble and floating dust to expel an imploded bullet, which would travel backwards to insert itself inside a long barrel along with hot gases and fires from nowhere, to implode and end up inert and cold, braced to a cartridge of gunpowder, with silly humans walking backwards to extract it from the cannon.

This idea, or any other that accounts for time’s reversibility, is purely imaginative. For instance, Stephen Hawking’s “imaginary time” might just be “mathematical subterfuge” and not intended as a realistic representation. So even for theoretical scientists formulating new theories, the concept of time can be over-conceptualized, if I may be allowed to state it as such. As Hawking himself puts it: “all one can ask is whether imaginary time is useful in formulating mathematical models that describe what we observe.”v By such a view, so would serve a god, who could make a much simpler mathematical model—but that would not be too scientific!

Patterns at the astronomical scale demonstrate that the Universe as a whole acts with a unidirectional flow of events. This has been regarded as “the arrow of time”, on which various recent books have gone to considerable length to demonstrate that such an “arrow” in fact points in one direction only—that time, as a physical dimension of the Universe, inexorably develops forwards by some unspecified law of Nature, which also supports Thermodynamics’ Second Law. Some of these books even reach intangible metaphysical implications as conclusions: that the Universe has come to be as it is by design.

Various other authors readily recognize the unidirectional flow of events in chaotic processes, by stating how these appear unnatural or outright impossible if they were to be observed in reverse (e.g., the shattering of glass). Most processes, if observed in rewind, will be perceived as reversed, and impossible under the natural progress of “time”. Only for the rare exceptions involving so-called “reversible processes of zero entropy” are the rewound views indistinguishable from the forward play (e.g., projectile trajectory again, without considering the differences between the energies of the firing and the hit). However, if one is to retrograde time for an irreversible process within a computer simulation, instead of employing a video recording, then it will be observed that the system does not revert back to its original state but rather develops further towards a chaotic state (because its consequent state depends on its present condition; even if the logic is to calculate what state must have existed prior to the present, it still depends on the present state).
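
A minimal sketch (Python) of this backward-simulation argument, using the chaotic doubling map as a stand-in of my choosing for an irreversible process: every state has two possible predecessors, so a simulation stepped in reverse from the present state must guess at each step, and in general it develops its own path rather than retracing history.

    def forward(x):
        """Chaotic doubling map: x -> 2x mod 1."""
        return (2.0 * x) % 1.0

    def backward(x, branch=0):
        """One of the two preimages of x: x/2 or (x + 1)/2."""
        return (x + branch) / 2.0

    x0 = 0.123456789
    x = x0
    for _ in range(20):
        x = forward(x)             # evolve the system to its "present" state
    y = x
    for _ in range(20):
        y = backward(y, branch=0)  # reverse without a record of which branch occurred
    print(abs(y - x0))             # generally far from zero: history is not recovered

Only an external record of which branch was actually taken at each step—something beyond the present state itself—would recover the original condition.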

So then, the laws that govern the evolution of chaotic systems do not depend on the “arrow of time”, so that forwards or backwards, “order” is maintained. It comes as a surprise, then, that processes regarded as irreversible (those that are chaotic) are virtually “reversible”; that is, whether the “arrow of time” points forward or in reverse, a system will develop in chaotic patterns according to the order characteristics of the system. This illustrates that the laws that govern these processes not only do not depend on the supposed “arrow of time” but regard no such concept at all. This indifference of natural phenomena towards the “arrow of time” supports the conceptualization of time. Likewise, Newtonian mechanics is defined independent of any direction of the “arrow of time” and will appear natural in reverse. By dismissing time as purely an appreciative concept, and supporting the case of irreversibility by noting the chaotic evolution of the Universe, a simpler picture arises: a simplistic reactionary universe through the path of least resistance of effect by self-referenced cause. The grandeur of the Cosmos; the reflection of itself.


CHAPTER VI: Force

“…it takes a generation or two until it becomes obvious that there’s no real problem. I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.” –Richard Feynman

As I begin divulging my ideas, I find it necessary to present the reader with some inconsistencies of contemporary physics, thus justifying my concerns in the matter. These are inconsistencies that have prevailed for over a century, as well as additional ones that continue to surface even to this day. As these inconsistencies arise, scientists and theorists fine-tune standing theories to accommodate them. It is generally the case that only minor adjustments are needed without jeopardizing the integrity of the unsettled theory. However, I argue that concerning recent advances in Nuclear Physics and Astronomy, the process of reformulation has developed into extravagant and outlandish explanations.

I contend that the flaws amid Nuclear Physics and Astronomy are collectively large enough to warrant a redefinition of the theories of Relativity, Quantum, and the Big Bang. As a brief example: neutrinos, though defined as undetectable, are believed to exist by reason of missing mass in some nuclear reactions. From this premise, a series of speculations can be concocted, for instance the infliction of antineutrinos into atoms, inducing them to undergo reverse beta decay. Even when antimatter is extremely scarce in the universe, experiments have been conceptualized to detect the byproduct of such an exceedingly elusive interaction between an antineutrino and a nucleus. And when the number of reactions detected disagrees with “expected results”, scientists argue over the “expected results” rather than consider a different interpretation of the main theory from which the experiment was developed. Maybe something else is responsible for the nuclear decay observed, aside from the undetectable-by-definition antineutrinos (this will be reanalyzed in a later chapter). The current theory holds up because it is reasoned that it explains too many things to be readily discarded. Thus, scientists argue that maybe the Sun, “our main source of antineutrinos”, might not produce as many of these feeble particles as calculated, or that the antineutrinos transform themselves on their way towards the detector in such a way that they are less likely to be detected—instead of even disputing the validity of neutrinos in the first place.♣vi


♣
60 tons of gallium metal contained in large underground tanks for the Soviet-American Gallium Experiment (SAGE) is monitored for the detection of radioactive germanium, theoretically a byproduct of the neutrino’s interaction with gallium atoms. “An observation in a gallium experiment of a strong suppression of the [expected] low-energy neutrino flux requires the invocation of new neutrino properties,” writes Peterson, physics correspondent for SCIENCE NEWS. He adds, “the SAGE measurements pose a serious threat to conventional theory because a gallium detector picks up low-energy neutrinos generated inside the sun during proton-proton fusion reaction. Shortfall in high-energy neutrinos, seen in previous experiments, had prompted theorists to speculate that either the sun’s core temperature is lower than expected, or neutrinos somehow change their identities before they reach the detector, and thus fail to interact with Earth-based detectors in the expected manner. Because the rate of proton-proton fusion reaction does not depend strongly on the sun’s core temperature, the data from the gallium detector favors the change-of-identity explanation… Some theorists argue that an electron-neutrino (one of the three known types of neutrinos) created in a proton-proton fusion reaction can, on its way to Earth, transform itself into another type of neutrino. Such a transformation is possible only if neutrinos have mass. The current standard model of particle physics envisions neutrinos as massless.”

Incidentally, when I started conceptualizing time as an imaginary phenomenon, thus eliminating it from being an objective premise, it became obligatory to reformulate the theories of Relativity, Quantum, and the Big Bang.

I have conceptualized such a reformulation, but in order for the reformulation to have any sort of validity it is imperative that every experimental result be explained, without any inconsistencies. So far as my logic permits, I have not found any discrepancy. I publish these reformulations in the belief that they positively resolve these contemporary inconsistencies, which will be presented shortly hereafter. This work, however, is by no means intended as a resolution to every question in physics, like some sort of Answer to Everything.

I will first present inconsistencies in current physics and set the stage for the presentation of the reformulation. I ask the reader to keep an open mind, as these concepts may at first seem awkward, just as the idea of time as a subjective term might seem intellectually insipid to most.

Henri Poincaré suggested in 1895 “the principle of relativity, according to which the laws of physical phenomena [electromagnetic and optical] should be the same, whether for an observer fixed, or for an observer carried along in a uniform motion of translation; so that we have not and could not have any means of discerning whether or not we are carried along in such a motion.”vii This Hendrik Antoon Lorentz incorporated in 1904 to explain the behavior of moving electric charges, to which he applied the Lorentz-FitzGerald contraction of space and time.

A year later, Albert Einstein, independently and unaware of these notions made by Lorentz, explained the idea in what became the Theory of Special Relativity. Einstein reached the idea from the explanation of electricity and magnetism, which differed, according to contemporary views, on whether the motion was by the conductor or the magnetic field. For Einstein these explanations were inconsistent in that they suggested absolute motion, thus he proposed Special Relativity, which in principle rejects the idea of absolute space and time in any reference frame. The theory proposes that the motion of a body deforms its own space and time so that a particular event never corresponds from one reference frame to another—except when referencing against an electromagnetic field. This diverges greatly from classical relativity by the notion that an object can distort space and time (thus defining it) by its uniform motion. The intended implication of Special Relativity was to maintain, as Poincaré emphasized, that electric or magnetic fields observed as moving should conserve their definition, so that observation would not denote the physical manifestation of electromagnetism. Yet there should not have been any space-time deformation in order to resolve the apparent paradox. In fact, there was no paradox, since one effect is the converse of the other, and
the common reference frame was the conductor, the medium through which electrons traveled. It does not depend on observation, that is, whether it is the conductor or the magnetic field that is perceived as moving, since it is only the motion of electrons that determines the expression of the field. In brief, an electric current generates a magnetic field around it (Ørsted) while moving a conductor through a magnetic field generates an electrical current (Faraday). But even if the seemingly paradoxical manifestation of the electromagnetic field is resolved here, I shall present some inconsistencies which Special and General Relativity entail.

Special Relativity specifies that regardless of the frame of reference, light speed remains constant, but it does so by a distortion of space and time; since, as I have stated previously, the æther was never eliminated from the minds of scientists—its nonexistence was accepted, but unconsciously light continued being referenced to it, light being regarded as a particle. The reasoning follows that if an object traveling at close to the speed of light approaches another object moving at the same speed but in the opposite direction, the two would appear to approach one another at slightly less than the speed of light, even if logic dictates that simply adding both velocities would exceed the speed of light. The effect, according to the Theory of Special Relativity, is a consequence of each object suffering space and time deformations as it approaches the speed of light. These deformations are strictly intrinsic to the objects’ motion, which is defined by velocity, regardless of the reference frame elected.♣♣
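
For reference, the composition rule Special Relativity applies to such closing speeds is the standard velocity-addition formula; a worked instance, with hypothetical speeds of 0.9c each:

    w = \frac{u + v}{1 + uv/c^2}, \qquad u = v = 0.9c \;\Rightarrow\; w = \frac{1.8c}{1.81} \approx 0.994c.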

The classical anecdote that demonstrates the effect of space and time deformations is that of a relativistic spaceship capable of approaching very nearly the speed of light. The Lorentz-FitzGerald transformation formulas are employed to mathematically illustrate the reduction of space in the dimension of travel and the deformation of time. Theoretically, the relativistic ship is capable of traveling great distances while suffering a reduced time lapse, so it could traverse what seems a transient flight when in fact it was transcendental. This occurs since for the spaceship time has slowed down, and it has deformed its own dimensions of space so as to reduce its distance traveled. This implies that information from an outside source, say for instance from Earth, will be received in a sped-up manner, while signals from the spaceship would appear retarded. The effect becomes very much real as the ship returns to its point of departure to find that history has fled without regard to the ship’s crew. A similar deformation occurs with the dimension of travel, where the ship supposedly becomes shorter, even if this deformation remains unapparent to the ship’s crew. It should be pointed out that there is a reciprocal change in the distance to be traveled: as space shrinks for the traveler, the outside observer sees the relativistic ship itself contract.
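
The Lorentz-FitzGerald formulas invoked by the anecdote are, in modern notation (with v the relative velocity and c the speed of light):

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t' = \gamma\,\Delta t, \qquad L' = \frac{L}{\gamma}.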

♣♣
It is strongly emphasized in Special Relativity that events are observed according to the elected reference frame, which would suggest that space and time are merely appreciative terms. However, General Relativity appoints the four-dimensional space-time continuum as physical in order to define gravity, a very objective force. The reader should remember that Nature does not concern itself with relative motion, so that the deformation of space and time should be subjective, when in fact Einstein’s Relativity regards these as objective deformations, with relative motion affecting and defining the physical dimensions of space and time.


A prime bafflegab in Special Relativity is that, with two reference systems defined which differ in both space and time dimensions, there is an ambiguous definition as to what constitutes velocity.

Velocity, in classical physics, is not an intrinsic characteristic of the inert body but one that is defined by reference to the environment. Special Relativity disregards this definition, attributing velocity to matter as an inherent characteristic which affects the definition of space and time on its own, disregarding both the environment and its observer, regardless of the observer’s own intrinsic definitions.

The cross-referencing of space and time of both traveler and observer does lead to the
paradoxes that are so confusedly illustrated and championed by Special Relativity. Such
ambiguous cross-referencing is not only misleading but also erroneous. It presents
contradictions and inconsistencies. By discriminating space deformation in order to
justify or accentuate time deformation, Special Relativity provides loopholes of reasoning
concerning proper and relative motion. Even when Special Relativity is applied to the
moving object, while neglecting the observer, it fails to be consistent.

Special Relativity indicates that two beams of light, traveling towards one another along the same path, will appear to approach at the speed of light regardless of from which beam one references the other, and not twice that speed as logic would dictate (this is, of course, regarding light as a particle). As waves they would interfere as two frequencies and propagate at the speed of light, without any dilemma.

Since all photons (as particles) are presumably identical in all their properties (disregarding polarization), consequently both must suffer the same time and space transformation as a function of the photon’s (particle’s) velocity. One beam does not affect in any way the rest of the space-time continuum, and vice versa: nothing in the entire space-time continuum affects its motion, nor thus its relativity. Special Relativity dictates the transformation suffered by light as an intrinsic characteristic defined by its velocity alone, which remains constant regardless of the environment and of the reference frame from which it is observed within that environment, so the same transformation is suffered whether a photon approaches another photon or a stationary object. That is to say, regardless of any reference frame, light is referenced as approaching any other object at the speed of light. This might seem consistent, respectively, but it falls apart when both oppositely directed light rays plus any other object are put together in the same reference frame. To one photon, a stationary object as well as the other speeding photon both approach at light speed, as if the entire environment remained fixed in space and time. Scientists could in blind faith close their eyes and accept that light moves through space without detecting motion, but it is irrational to have no reference of motion and yet move.

An expansion of this scenario demonstrates this inconsistency much more clearly: as light travels at constant speed, its environment, however complex, will appear to be moving at the same speed but in the opposite direction. (Incidentally, does the Universe shine from the point of view of a photon?) So any motion in the direction opposite to the photon by any other object amid that environment will appear to supersede the speed of light; but according to Special Relativity this does not happen, since each object suffers its own transformation so as not to violate its apparent velocity relative to that and any other ray of light. Keeping in mind that the propagation of a single photon has no effect on the rest of the Universe and that the transformation of space-time is solely due to its intrinsic velocity, it is impossible to accommodate all the cross-referencing consistently, but Einstein was convinced it all made sense in a special relative way.

A possible solution to this inconsistency would be that the Lorentz-FitzGerald transformation not be applicable to particles moving at the speed of light. This could certainly be justified mathematically, since the Lorentz-FitzGerald formulas become undefined (the factor goes to infinity) at the speed of light. The four dimensions of space-time are reduced to two, like a flat picture, perpendicular to the direction of motion. To a “particle” that very well might lack any dimension outside the wave function in which it behaves, it is beyond my reasoning to attribute any properties to a “particle” which suffers no time, lacks any dimensions and perceives no motion as it moves. I assume that those who apply these formulas have realized this dilemma, but I cannot imagine why it is neglected. That light is exempt from these formulas does not deny the fact that light, too, suffers from some transformation; otherwise there would be no Theory of Special Relativity. It could be argued in defense that Special Relativity does not apply to light but only to particles moving very close to the speed of light, but unfortunately for Special Relativity the inconsistencies do not end there.
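
The breakdown appealed to above is immediate from the factor itself, which diverges as v approaches c:

    \lim_{v \to c}\; \frac{1}{\sqrt{1 - v^2/c^2}} = \infty;

at v = c the denominator vanishes, and the transformed time and length are left undefined.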

Other violations of the “light-speed limit” come from astronomy, in the observation of some active galaxies. Active galaxies have a perpendicular jet emitting from the center of the accretion disk, at which location the galaxy is suspected of housing a super-massive “black hole”. Particles accelerated in the jet generate photons with energies in the gamma range. Some of these galaxies are also known as superluminal quasars, as radio images show parts of them to move faster than light speed. “The superluminal situation is thought to be an illusion caused by relativistic effects in a pointer beam traveling close to the speed of light”viii—where exactly the formulas for illusion are derived in Special Relativity is beyond me (okay, so I am not being fair with this comment, maybe).

Additionally, there is the case of the Cherenkov detector, named after Pavel Cherenkov, who first observed that charged particles produce light when moving faster than light through certain transparent media. It is known that light slows down in dense media, so that what is referred to as c is the speed of light in vacuum. But must it now be that Special Relativity be subject to further excuses, ergo that the reference frames or the transformation formulas vary depending on the density of the medium, where the speed of light is governed by the ability of the associated electric and magnetic fluxes to charge and discharge the capacitive and inductive reactance of the medium?

All in all, it is not just the definition of velocity, especially that of light, which is at question
in Einstein’s Relativity.

According to Henri Poincaré’s statement, relativity must keep with the indistinguishability of forces regardless of reference frame. Einstein extrapolated this, although the intended phenomenon was different, indicating indistinctness between the pull of acceleration and that of gravity, into what later became the Theory of General Relativity. This implied that the time dilation suffered by “approaching” the speed of light could equally be experienced in a gravity field.


Figure 3: Indistinctness between Acceleration and Gravity.

A concern as to whether this deformation of space and time is due to relative velocity or to acceleration (such as that due to gravity) must be addressed. Special Relativity deals with velocity, not acceleration; otherwise the transformation formulas need not concern themselves with velocity nor be applicable to light, which does not accelerate.

General Relativity states that the stronger a gravitational field is, the more the rate of time slows down. Accordingly, the more mass an object has, the stronger the acceleration of gravity and the slower time becomes. It is an experimentally tested effect that time advances more slowly near the surface of a massive body than farther away, where gravity is much weaker (astronauts get older faster as they orbit in space).
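
For reference, the gravitational time dilation being described is conventionally written, for a clock at distance r from a mass M (a standard result, added here for illustration):

    \Delta\tau = \Delta t\,\sqrt{1 - \frac{2GM}{rc^2}}.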

Gravity, according to the Theory of General Relativity, is the result of the distortion of space in the dimension of time by the mass of an object.♣♣♣ In other words, a mass distorts space in the dimension of time, which is expressed as gravity. The severity of the distortion of space-time by celestial bodies is proportional to the density of mass.

♣♣♣
A representation of this four-dimensional space-time continuum is generally made by
the use of a flat grid that is elastic in the third dimension, representing time. This
representation helps visualize how a massive body curves space so as to affect the
trajectory of other bodies by the indentation created in the elastic fabric in the form of
funnels with the massive body resting at its bottom.


If acceleration, not velocity, is the defining factor, it would be possible to keep acceleration low enough that the transformations are kept small, and so permit the speed of light to be attained and exceeded, provided a long enough period of acceleration. If Special and General Relativity’s indiscriminate effects are to extend to accelerating bodies, all is just as well. But Special Relativity is not concerned with a body’s acceleration, only its velocity, which it defines as an intrinsic property, independent of reference frame. Even in combination it is inconsistent: since a moving object alters the curvature of space, a single light beam could not simultaneously satisfy its intrinsic relativeness with that of a massive body and remain unaltered (a circumstance similar to the scenario of the two light beams and a stationary object presented previously). Light, having self-defined relativistic transformations, upon falling within a gravity well must additionally alter its own space-time so as not to violate its own speed as it is being affected by gravity.

General Relativity fails to explain why free-moving bodies must curve the way they do, along the curvature of space-time, which is defined as gravity. Since an object must follow the geodesic path of the space-time it is found in, which is defined as the gravitational field, the trajectory the object takes is restricted by the structure of space-time itself. But in contrast, let us say that three-dimensional space is straight and gravity is a mere force that affects both matter and energy; this would still bend objects’ trajectories into parabolic paths or elliptical orbits, as we observe in Newtonian physics, but would do so as the effect of a force, not as a restriction of the structure of space. For Einsteinian physics, objects which travel within a distorted space-time will bend their inertial trajectory by the restriction imposed by the structure of space-time, a manifestation of gravitation. Likewise, the straight path of light would bend according to the geodesic path, acting as gravitation. There would not be a force, per se, in the space-time model.

When it is convenient, General Relativity pretends that it regards the hypothetical four-dimensional distortion of space-time as a potentiometric representation of the force of gravity, but it so strongly claims space-time to be an objective characteristic of the universe that such pretensions are refuted. Space-time distortion, which manifests as gravitation, is solely responsible for the curved trajectory not only of light but of planets and projectiles as well. Since gravity is really the deformation of space-time by the mass of an object, objects should be restricted to geodesic paths.

To illustrate how this mystery force arises, I propose as an example that the reader take a sheet of paper and draw a line a few centimeters away from, and parallel to, one of the edges of the paper. Lifting then the paper by one of the corners where the line ends, steeply enough to make the corner’s surface close to perpendicular to the rest of the paper, the line will twist accordingly. Imagine then a tiny sphere following along the line towards the raised corner; naturally affected by gravity and the inclination of the paper, it would start diverting off the line into a curved trajectory in relation to the true straight path along the curved surface, which represents the curvature of space-time. Since gravity is defined as the curvature of space-time itself, an additional mystery force (expressed in the example by actual gravity) is needed so that the passing body diverts away from the curvature of space-time, or geodesic trajectory. This additional unspecified force will be in the direction of time, perpendicular to all dimensions of space, which in the example is the vertical dimension. So that, by whatever mysterious force objects fall, they neither follow a straight geodesic path along the curved space-time nor is gravity, by definition, responsible for the deviation. This is undeniably incoherent, and although it could easily be defended with muddled four-dimensional talk, it should be enough to discredit General Relativity.

Figure 4: Geodesic and Affected Path in Curved Space-Time.

Although it seems appropriate to attribute General Relativity to an object’s mass and acceleration (agreeing with Newtonian physics), unfortunately the Lorentz-FitzGerald transformation formulas for space and time regard these distortions as a function of velocity. Quite disturbing! Velocity is a relative quantity, while the speed of light has no regard for any frame of reference—coupled with the inapplicability of light itself to these formulas. Additionally, the transformation formulas fail to reflect the time dilation suffered through the effects of gravity in static systems. I realize that Special Relativity does not reduce exclusively to these equations, but they are a mathematical support to it.

Finally, it should be pointed out that there is a variation of the transformation formulas that applies to the mass of moving objects, which explains why so much energy can be expended attempting to accelerate a subatomic particle and yet it is impossible to reach the velocity of light. Theoretically, as the velocity of the object increases, so does its mass, so that the more energy is inserted, the larger the particle’s inertia and the harder it becomes to accelerate. Surely, this transformation could be applied somehow to justify many discrepancies in General Relativity, but the overall model will remain flawed.
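
The variation referred to is the relativistic-mass relation, restated here for reference (m_0 being the rest mass):

    m = \gamma\,m_0 = \frac{m_0}{\sqrt{1 - v^2/c^2}};

as v approaches c, m grows without bound, which is why no finite energy input suffices.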

Later on I will attempt to resolve these inconsistencies, explaining every phenomenon that has led to and come from Special and General Relativity, and do so without time; but I must first ponder upon other matters.


CHAPTER VII: Wave

“In quantum mechanics we speak of mathematical constructs called wave functions that give us information only about the probabilities of various possible positions and velocities.” –Steven Weinberg

“The hindrances met with on this path originate above all in the fact that, so to say, every word in the language refers to our ordinary perception. In the quantum theory we meet this difficulty of the feature of irrationality characterizing the quantum postulate. I hope, however, that the idea of complementarity is suited to characterize the situation, which bears a deep-going analogy to the general difficulty in the formation of human ideas, inherent in the distinction between subject and object.” –Niels Bohr

I find no other structure in Nature as beautiful as that of an atom as described by Niels Bohr, Erwin Schrödinger and Max Born, amongst others: the orbitals-of-probability model♦; subtle in shape but complex in definition, simple in faculty but perplexing in function. A negatively charged electron attracted around a nucleus, an oppositely charged proton, forms an atom in its simplest arrangement. The structure, however, could not follow the classic rules of electromagnetism, or the electron would spiral down into the nucleus; it holds a balance of distance and proximity—this, coupled with other strange behaviors, set the stage for Quantum mechanics.

The properties of orbitals follow a few simple rules on spherical harmonic shapes, allotment size and the coupling of electrons to explain the whole range and diversity of chemistry, from the remarkable organization of Mendeleïev's periodic chart to spectrum lines (Balmer). I could hardly dispute that which works so well at explaining natural behavior, whatever it may be; but one can never be certain, just as Niels Bohr indicated: “there is no quantum world. There is only an abstract quantum description.”

The model prescribes that the orbiting electron does not follow a specific orbit but moves about an orbital region in a manner described by a wave function. This wave function indicates the electron's position and momentum, with the limitation that it is not possible to measure one attribute precisely without compromising the accuracy of the other, causing an uncertainty in the exact behavior of the electron at any particular time; thus the orbital designates the region of highest probability where the electron can be. Even though I am an advocate of Cause and Effect, quantum mechanics, which lacks predictability, is acceptable in this scheme. For as in Chaos, there are limitations on our appreciation and predictability of reality due to the overwhelming degree of complexity.


♦ See: http://www.orbitals.com/orb/orbtable.htm & http://www.shef.ac.uk/chemistry/orbitron/


In quantum physics, this uncertainty lies beyond the interference caused by observation and is intrinsic to the sensitive physics of subatomic particles.♣

Werner Heisenberg's Uncertainty Principle is an understandable expression of the limitation on our means to measure the position and momentum of atomic particles. Bohr points out that “a discontinuous change of energy and momentum [Heisenberg's Uncertainty Principle] during observation could not prevent us from ascribing accurate values to the space-time coordinates, as well as to the momentum-energy components before and after the process,” so that, he adds, uncertainty is “the limited accuracy with which changes in energy and momentum can be defined.”ix Heisenberg himself, however, did not subscribe exactly to the notion that the uncertainty was a mere limitation of our instrumentation imposed by the great sensitivity of the observed particle itself. For Heisenberg, it was rather an intrinsic characteristic of subatomic particle behavior, so that classical mechanics did not and could not apply to such particles. Quantum, as it were, carries a contract-abridgement clause permitting it to violate classical knowledge.

Upon reading on the indeterminable mechanics of quantum particles, one continuously stumbles upon the suggestion that proper classical mechanics do not satisfactorily describe the behavior of electrons. This in turn is presented with a sort of frustration: were quantum behavior at all possible to explain in classical terms, the two realms would finally be compatible. So I am tempted to achieve this integration, although ungainly, since the complications of Quantum Physics go well beyond the uncertainty principle. (Quantum physics involves a series of implications, many of which are based on experimentation, that also have to be resolved in order for any explanation to be significant.) So I begin, if only with a mediocre attempt, by demonstrating uncertainty at the macroscopic scale.

The imaginary experiment consists of measuring the momentum of a large solid ball, having only two means by which this can be measured: a spinning wheel to measure velocity, and a set of bowling pegs to measure trajectory. The ball will be restricted to roll along a horizontal flat surface.

The first experiment consists of a paddle wheel whose axis is perpendicular to the flat surface, so that when the ball hits the wheel some momentum is transferred. Ideally, the more momentum transferred from the ball to the wheel, the better the measurement; however, the reading will always be limited by the momentum remaining in the ball, which has nevertheless changed both velocity and direction after the collision. From the spin of the wheel, the speed at which the ball collided with it can be determined. Regardless, it is impossible to determine the direction of travel by observing the spin of the wheel alone.

The second experiment consists of an array of pegs, the principle being similar to that of the game of bowling. As the ball strikes the pegs, it marks a track through which it has traveled, thus revealing the direction of travel; but in doing so, not only is some momentum lost to the pegs, the direction is slightly altered as well. And the idea here, too, is that from the pattern left by the fallen pegs alone, velocity would be immeasurable.

♣ However, these are particular events with virtually inconsequential results when describing the whole system. Like molecular motion in gases, we need not concern ourselves with each molecule to describe the general characteristics of the gas as a whole and be able to predict its behavior with certainty. So too, the chemical behavior of molecules can be described with certainty, even if atomic bonding within can only be explained through probabilities; likewise for nuclear reactions.

Figure 5: Uncertainty at a Macro Scale.

These two experiments are analogous to actual meteorological instrumentation. It is possible to measure either direction with a wind vane or velocity with an anemometer, but were an experiment restricted to measuring a minute gust of wind, it would be impossible to use both instruments without altering the wind.
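
The trade-off in these imaginary experiments can be given a toy quantification. In the sketch below the error scales are invented purely for illustration, not derived from any physical law: the fraction of the ball's momentum surrendered to the instrument sharpens one reading while disturbing the other.

    def measurement_errors(fraction_transferred):
        # Toy model of the ball experiments: more momentum given up
        # to the wheel means a better speed reading, but a larger
        # deflection of the ball, and vice versa.
        f = fraction_transferred
        speed_error = 1.0 - f    # wheel reads better with more transfer
        direction_error = f      # but the trajectory is disturbed more
        return speed_error, direction_error

    for f in (0.1, 0.3, 0.5, 0.7, 0.9):
        s, d = measurement_errors(f)
        print(f"transfer {f:.1f}: speed error {s:.1f}, direction error {d:.1f}")

However the transfer is chosen, neither error can be driven to zero without the other growing, which is the macroscopic analogue intended here.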


In quantum physics, a more accentuated limitation is imposed by the fact that electromagnetic energy is the means of observation, while it is at the same time the energy that rules the physics of atomic particles.

A paper written in 1905 by Albert Einstein demonstrated that light comes in small packages of energy called quanta. It was this work which earned him the Nobel Prize in Physics. The experiment employed a diminutive light source, a photoelectric plate, an amplifier and a light bulb, and with it Einstein helped explain the photoelectric effect by the absorption of photons, or quanta of light, which helped propel the emerging concept of the dual nature of light as both particle and wave.

The experiment utilized a photoelectric plate, which provided a demonstration of a phenomenon known as photo-excitation, in which light would excite an electron in the plate. This excited electron, liberated by molecules in the photoelectric plate, was then reflected through a series of electrically charged metal plates, where it would excite additional electrons, consequently causing a cascade of electrons with which a strong electrical signal was made (an amplifier). A quantum was defined as the minimum amount of energy that would cause the first electron to be expelled out of the domain of the photoelectric plate's molecule and into the amplifier to form a signal. I shall return to this experiment shortly after the next discussion.
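
For the record, the quantitative content of the photoelectric effect is the relation K = hf - φ, where φ is the work function of the plate. A minimal sketch, with an illustrative work function roughly that of sodium:

    PLANCK_H = 6.626e-34   # Planck constant, J*s
    EV = 1.602e-19         # joules per electron-volt

    def ejected_energy_ev(frequency_hz, work_function_ev):
        # Einstein's photoelectric relation: K = h*f - phi.
        # Returns the ejected electron's kinetic energy in eV,
        # or None if one quantum falls below the threshold.
        k = PLANCK_H * frequency_hz / EV - work_function_ev
        return k if k > 0 else None

    phi = 2.3  # illustrative work function, eV (roughly sodium)
    for f in (4.0e14, 6.0e14, 8.0e14):   # red light up to near-UV
        print(f, "Hz ->", ejected_energy_ev(f, phi), "eV")

Below the threshold frequency no electron is ejected however intense the light, which is the observation the quantum hypothesis was invoked to explain.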

Quanta are equivalent to the energy emitted when an electron is excited and falls once again to a lesser orbital, achieving a less energetic state. It is the physics of atoms as discovered by Niels Henrik David Bohr that only specific energies are permitted for electrons to jump to higher orbitals. His third postulate says that the angular momentum of electrons exists only as a natural multiple of h/2π (h is Planck's constant, 6.625×10^-34 joule-seconds). This discrete amount of energy explains phenomena from photo-excitation to spectroscopy. The model depicts the electron not as behaving electromagnetically in continuous orbits around the nucleus but rather in stepped or distinct spherically harmonious orbitals. (The energy required for electron excitation and de-excitation varies according to the level difference and the type of orbitals being dealt with.)
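
A small worked example of these discrete jumps, using the familiar hydrogen level formula E_n = -13.6/n² eV (the constants are standard, the script itself merely illustrative), reproduces the Balmer lines mentioned above:

    RYDBERG_EV = 13.6     # hydrogen ground-state binding energy, eV
    HC_EV_NM = 1239.84    # h*c in eV*nanometers

    def transition_energy_ev(n_high, n_low):
        # Energy released falling between levels E_n = -13.6/n^2 eV:
        # dE = 13.6 * (1/n_low^2 - 1/n_high^2)
        return RYDBERG_EV * (1.0 / n_low**2 - 1.0 / n_high**2)

    # Balmer series: transitions down to n = 2 give the visible lines.
    for n in (3, 4, 5, 6):
        e = transition_energy_ev(n, 2)
        print(f"n={n} -> 2: {e:5.3f} eV, wavelength {HC_EV_NM / e:6.1f} nm")

The first line lands at about 656 nm, the well-known red hydrogen line.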

I present a rather unconventional analogy to this most eloquent theory to illustrate such behavior: residential apartments represent orbitals; quanta would be the different rents; and photonic energy the currency. The electron, or resident, must compromise between comfort and expense for its accommodation. If the electron happens to gain energy, its motion becomes larger and more erratic, so naturally a larger space to move about would be more comforting. But it can only move to the next higher apartment if it can meet the rent agreement of the landlord, the nucleus. If not, it must make do with the current accommodations of a less comfortable apartment. The natural tendency, which is to be at the lowest energy level possible, means that the electron must compromise by sacrificing comfort for economy of expense. This does not altogether agree with the actual physics of atoms, but gives an anthropomorphic approach to describing the quantum model. According to the model, the electron does not collect energy until it has gained sufficient to allow it to jump to the next higher orbital; rather, it receives that specific amount of energy and jumps. These packets of energy are quanta, and these are defined as particles of light.

Yet the corpuscular or particulate theory of light demonstrated by Einstein's photo-excitation experiment is not altogether complete proof, since light is also described as a wave. Thomas Young, around the year 1801, performed an experiment that definitively demonstrated light as waves, producing an interference pattern.♣♣ It consisted of coherent light beamed through a plate with two narrow slits to illuminate a screen on the other side. The logic is that if light is composed of particles, then two bands of light would appear on the projection screen; instead, light shines through as a series of bands, a consequence of the interference pattern emerging from the two slits. If, however, one of the slits is covered, the pattern disappears (since interference requires the presence of at least two waves). A similar experiment performed by Sir George Biddell Airy in 1835 uses a single pinhole instead of two slits, consequently creating concentric circles of light and darkness.♣♣♣ This experiment has been demonstrated to produce diffraction patterns with any type of wave (e.g. sound).
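
The idealized two-slit pattern is easy to sketch numerically: with slit separation d and wavelength λ, the intensity on the screen goes as cos²(π·d·sinθ/λ). The slit spacing and wavelength below are merely illustrative:

    import math

    def two_slit_intensity(theta_rad, d_m, wavelength_m):
        # Waves from the two slits arrive with path difference d*sin(theta);
        # the resulting intensity is I0 * cos^2(pi * d * sin(theta) / lambda).
        phase = math.pi * d_m * math.sin(theta_rad) / wavelength_m
        return math.cos(phase) ** 2

    d, lam = 1.0e-4, 500e-9   # 0.1 mm slit spacing, 500 nm (green) light
    for mrad in (0, 1, 2, 3, 4, 5):
        theta = mrad * 1.0e-3
        print(f"theta = {mrad} mrad: I/I0 = {two_slit_intensity(theta, d, lam):.3f}")

The intensity swings between bright and dark bands; covering one slit removes the cosine term and the bands with it.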

Modern experimentation has enhanced these photoelectric experiments by employing sensitive photodetectors instead of a simple reflecting screen. Utilizing sources capable of producing very low intensity light, individual points of light are observed in the detector or photographic plate. The molecules in the detector or photographic plate make these individual points, where only very few of them photo-excite given the very low level of energy employed. These points of light fall along the bands of brightness within the interference pattern. Such experiments seem to suggest a schizophrenic personality for the photon, which acts as a wave as long as it is permitted, without any definite position (and able to interfere with its re-directed half-self to collapse as a particle); but if observed in such a way as to attempt to detect the photon's position or velocity, the wave collapses and becomes only a particle.

The particle/wave duality is a concept that many have attempted to simplify in order to represent the phenomena observed in terms of classical physics. David Bohm's novel attempt provides for an invisible pilot wave that dictates the behavior of the particle. The complication is that the wave allows for a non-local universe, one in which faster-than-light communication permits relationships at a distance to determine how the wave should collapse upon being observed. Years later, John Stewart Bell proved mathematically that in order for a model to explain quantum facts it must be non-local, which is to say that distant events are connected via superluminal information. But Bell's model, though regarded as an accurate description of the quantum world, stands among many other theorems, which are not all necessarily logically concurrent. I argue in opposition to the particle/wave duality, also as a misinterpretation of experiments, specifically those involving photo-excitement or photodetectors. My claim is that it is the quantum physical characteristics of electron excitation by atoms (Bohr's atomic model) in the phosphor or metallic plate being employed as photodetector that are solely responsible for the particularity effect. Being an unrestricted wave, energy propagates continuously through space, but atoms behave quantitatively. Electrons occupy different orbitals in discrete amounts of energy. The influx of energy disrupts the stability of the atom, but only after an adequate amount of energy has been absorbed, a quantum, will an electron skip to a higher orbital. When the source of energy is very low, only a few atoms at a time become excited as sufficient energy is collected from the impinging light for photo-excitation. A good analogy would be the popping of corn in a microwave oven.

♣♣ See: http://micro.magnet.fsu.edu/primer/java/interference/doubleslit/

♣♣♣ See: http://www.u.arizona.edu/~mccorkel/airy.html


The source of energy is continuous and uniform, but kernels pop at random, mostly one at a time. Since the duality is in fact raised as a consequence of observation techniques, it would be justifiable to attribute the duality to a limitation of such techniques. My arguments are radically different from conventional interpretations. Instead of being quantized, light is continuous, existing only as wave. Bohr's third postulate is understood as harmonic states, and it is the behavior of atoms that accounts for the apparent particularity observed in photodetectors when low intensities of light are used. It is erroneous to project such physical characteristics of atoms onto waves of light. The photon's particularity should remain a virtual object of a wave function.
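
The popcorn analogy can be rendered as a toy simulation (every number here is invented for illustration): a continuous, uniform influx trickles energy into many atoms, yet discrete “clicks” emerge roughly one at a time as individual atoms cross the one-quantum threshold.

    import random

    def simulate_detector(n_atoms=1000, quantum=1.0,
                          influx_per_step=0.002, steps=2000, seed=1):
        # Toy model: a continuous wave delivers a small, randomized
        # share of energy to every atom each step; an atom "pops"
        # (photo-excites) only when its accumulated share crosses
        # one quantum. Initial states are varied, like real kernels.
        random.seed(seed)
        stored = [quantum * random.random() for _ in range(n_atoms)]
        clicks = []
        for t in range(steps):
            for i in range(n_atoms):
                stored[i] += influx_per_step * random.random()
                if stored[i] >= quantum:
                    clicks.append(t)   # discrete event from continuous input
                    stored[i] = 0.0
        return clicks

    clicks = simulate_detector()
    print(len(clicks), "discrete clicks; first few at steps", clicks[:5])

The input is an unbroken trickle, yet the output is a sequence of point-like events, which is precisely the claim being made for photodetectors.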

Although waves can be “quantized”, such does not occur when propagating through space. Nick Herbert describes this characteristic: “Quantized attributes correspond to confined waveforms like spherical harmonics, whose vibrations are restricted to the surface of spheres.”x Like waves on a string of fixed length, only waves that are “harmonic” to the length are permissible. By “harmonic” is meant those waves whose wavelengths are integer fractions of the string's length. Two attributes that can change the waveform confined to a string of fixed length are the wave's amplitude and frequency (both of which are affected by the density and tension of the string or medium through which a wave travels). Light, as it propagates through space, is unconfined, so every energy is permissible, as we see in the range of electromagnetic waves; but light exists in specific colors or harmonies only when restricted to atoms (depending on their orbital configuration).
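
The string case admits a one-line computation: for a string of length L carrying waves of speed v, the permissible modes are λ_n = 2L/n and f_n = n·v/(2L). The length and wave speed below are illustrative:

    def string_harmonics(length_m, wave_speed_m_s, n_modes=5):
        # Confined waveforms: only wavelengths that fit the string,
        # lambda_n = 2L/n, are allowed, giving f_n = n * v / (2L).
        return [(n, 2 * length_m / n, n * wave_speed_m_s / (2 * length_m))
                for n in range(1, n_modes + 1)]

    for n, lam, f in string_harmonics(0.65, 329.0):
        print(f"mode {n}: wavelength {lam:.3f} m, frequency {f:6.1f} Hz")

Unconfined light has no such constraint; the discrete set appears only once a boundary (the string's ends, or an atom's orbital configuration) is imposed.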

Leaving the particle/wave duality to stand creates many controversies. Theories like Richard Feynman's sum-over-histories, Hugh Everett's many-worlds interpretation or David Finkelstein's non-Boolean quantum logic derive as logical implications of the particle/wave duality of subatomic particles.

An experiment can be set to receive light from an object, say a quasar many millions of light-years away, provided that two light beams emitted from that same extraordinarily distant source have converged once again at Earth, thanks to the gravity lens of some intervening massive galactic cluster. One can then play with half-silvered mirrors within the telescope to determine whether the beams are to interfere as waves or select a particular route. The interpretation suggests that the photon knew beforehand (eons in advance) whether or not it was to interfere with itself at a telescope on Earth. Such an argument presents itself, if anything, as sarcasm, but the paradox has been stated by John A. Wheeler as a serious extension of the non-local universe that light seems to demonstrate. He describes this as an observer-participancy universe, an astronomical extension of the delayed-choice experiment he devised. Consequently, by choosing how to observe the quasar, either by including a beam splitter in the telescopic instrumentation or not, the observer has in effect participated in determining which characteristic (particle or wave) the photon has taken after being emitted from so distant a stellar object. “A strange inversion of the normal order of time,” that can be resolved by re-defining the past as non-existent “except as it is recorded in the present.”xi In turn, the idealistic experiment could only defeat its purpose by supporting the far-reaching idea that observation has a participating role in the history of the Universe. In logical context, it does serve as a strong disproof of the wave/particle duality theory, but it is excused into acceptance by regrettably claiming quantum logic, inaccessible to classical mechanics and classical logic.

In my interpretation, the experiment strongly suggests that it is in fact the manner in which light is observed that determines light's behavior. In other words, it is the setting of the half-silvered mirrors and detectors that determines whether the light is to interfere or take a particular path. (This fact need not be tied to the observer-participant model.) Light acts accordingly with each setting, leading scientists to suppose that both cases must in fact be true.

The diagram below illustrates how the setting of mirrors, half-mirrors (and polarized lenses) determines whether two light waves are to interact with each other to produce an interference pattern or pass unhindered to two photodetectors, which would reveal their “particularity”. Light from a common source is separated by a beam splitter, and the two beams are made to intersect each other again by mirrors. It is at this point that the observer has a choice on how the observation is to be made: either another half-mirror is placed, which would create an interference pattern so that only one detector produces a signal, or both beams are allowed to reach the detectors by not placing the half-mirror.

Figure 6: Mirror and Half-mirror Experiment.
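
The arithmetic of this arrangement can be sketched with ordinary wave amplitudes, under the common textbook convention that a half-mirror transmits with amplitude 1/√2 and reflects with amplitude i/√2 (the 90-degree phase is the conventional choice, an assumption of this sketch). Note that nothing here requires a particle picture; it is plain wave interference:

    from math import sqrt

    T = 1 / sqrt(2)    # half-mirror transmission amplitude
    R = 1j / sqrt(2)   # half-mirror reflection amplitude (90-degree phase)

    # Two beams leave the first half-mirror; the full mirrors add the
    # same phase to both paths, so that common factor is omitted.
    path_a = T         # beam transmitted at the first half-mirror
    path_b = R         # beam reflected at the first half-mirror

    # With the second half-mirror in place, the beams recombine:
    det1 = path_a * T + path_b * R     # A transmits, B reflects
    det2 = path_a * R + path_b * T     # A reflects, B transmits
    print(abs(det1) ** 2, abs(det2) ** 2)   # ~0 and ~1: one detector stays dark

    # Without it, each beam lands on its own detector:
    print(abs(path_a) ** 2, abs(path_b) ** 2)   # 0.5 and 0.5

The same wave arithmetic yields both outcomes, depending solely on whether the second half-mirror is present, which is exactly the point argued above.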

It is important at this point to explain the effect polarized lenses have on light, in order to illustrate just how mirrors and half-mirrors can drive light into destructive interference. Without explaining what the polarity of light is in itself, the behavior can be ascertained: polarized lenses interfere with the “passage” of light according to the angle of alignment at which it impinges, thus affecting the intensity of the light.


A drastic effect occurs when two lenses are utilized, so that variations in the angle of alignment between the two lenses affect the “passage” of light from transparency to complete opacity. The accepted interpretation of this effect is that photons have an intrinsic polarity and that the lenses block their “passage” according to the angle of impingement. The lenses are pictured as if they were molecularly composed of parallel slits, which trap photons of one polarity while letting those with adequate angles pass through. Yet this blocking effect is actually an error of interpretation.

The behavior of polarized lenses depends on the face side from which the light passes through, so that between two lenses there are three possible combinations of face alignment. To differentiate between one face and the other, slanted bars have been placed at each end of the lens symbols in the diagram below. Only the configuration in the middle of the diagram can, at a particular angle, “block” completely the passage of light (the other two face arrangements can only vary the color; opaqueness is never achieved at any angle). This strongly invites an alternative interpretation to the blocking-slits phenomenon.

Figure 7: Possible Face Alignment of Polarize Lenses.

Accepting the blocking effect of polarized lenses, one faces a few conflicting observations that can only be resolved if photons are given either a complex polarity or an ability to adjust before observation.

Photons have a definite polarity. An experiment supporting the intrinsic polarity of photons involves calcium atoms, which are photo-excited, releasing in turn photons of a specific polarization. So regardless of the position of a polarized lens, as long as spherical alignment is maintained, the polarity of the photons emitted is always “measured” to be the same. Defining the angle of alignment as angle α, shifting the lens to an arbitrary angle β produces a decrease in translucency, just as if the same photon were to pass through two lenses aligned at angles α and β.


Figure 8: Polarity of Photons from Calcium Atoms.

The interpretation that polarized lenses block the light becomes even more untenable when a third lens is introduced into the arrangement. With two lenses arranged so as to “block” completely the passage of light, placing a third lens between these two will ironically allow light to pass through all three lenses. This completely discredits the interpretation that polarized lenses discriminate the passage of light by “blocking”, as if by parallel slits.

Explaining such behavior solely by photonic attributes forces a description of reality with non-local properties, as indicated by Wheeler and championed by Bell. Such behavior implies a sort of foresight in photons as to the manner in which they will be observed. It was this sort of reasoning which inspired the Albert Einstein, Boris Podolsky, Nathan Rosen argument, which points out Quantum Theory's inability to fully describe photonic physics. The Einstein-Podolsky-Rosen (EPR) argument indicates an inconsistency regarding Heisenberg's Uncertainty Principle for measuring identical but distantly occurring events, so that the precision of one measurement could not affect the outcome of the other if taken far apart. David Bohm later supported the EPR argument experimentally. However, it was John Bell's argument, attributing not two but three properties to the photon (i.e. position, momentum, and polarity), that dismissed the experiment, implying that “action at a distance” was the only solution. And so photons would behave accordingly, by superluminal communication.

Figure 9: Effect of Lens Alignment to Light's Intensity.

Non-locality presents a photon capable of adjusting itself prior to crossing through a given configuration of lenses. All this skewed logic that leads to non-locality is derived from misunderstanding the effects of polarity. This can be avoided if the phenomenon is not left entirely as an intrinsic attribute of photons, but is instead explained as an effect of the lens, so that the incidence of translucency depends on the polarity of incoming light. That is, the intensity of light after passing through is determined by the squared cosine of the difference in angle between successive lenses, times the intensity of light before it passed through (see Figure 9, above). The first lens receives light of arbitrary polarity; the preponderance of photons reduces intensity according to their angle of incidence. Polarized lenses alter the polarity of light by phase realignment or shifting. A second lens, when aligned orthogonally against the first, will realign all passing light to obscurity.
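
This squared-cosine rule (Malus's law) is easy to tabulate, and it reproduces the three-lens effect described earlier: a crossed pair passes nothing, yet a 45-degree lens slipped between them restores a quarter of the intensity. The sketch below assumes, for simplicity, that the light is already aligned with the first lens:

    import math

    def transmitted_intensity(angles_deg, i0=1.0):
        # Each lens passes intensity cos^2(delta) of what arrives,
        # where delta is its angle minus the previous lens's angle,
        # and re-aligns the light's polarity to its own axis.
        intensity = i0
        for prev, curr in zip(angles_deg, angles_deg[1:]):
            intensity *= math.cos(math.radians(curr - prev)) ** 2
        return intensity

    print(transmitted_intensity([0, 90]))       # crossed pair: ~0 (opaque)
    print(transmitted_intensity([0, 45, 90]))   # 45-degree lens between: 0.25

Nothing in this arithmetic requires the photon to know anything in advance; each lens acts only on the polarity handed to it by the previous one.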


Interestingly enough, a mirror, without regard to any angle relative to the polarized lens, always reflects an image orthogonally opposed, so that the intensity of light is always reduced completely between the polarized lens and its mirror image: a phase-out. Curiously, this happens with a face combination between real and virtual lenses which would allow light through between two real polarized lenses (notice the slanted bars at the end of the lens symbols in the figure below, comparing them with those in Figure 7).


Figure 10: Polarize Lens and Mirror.

This experiment, probably the only one I have ever performed regarding the physics of light, demonstrates how the mirror image of a polarized lens appears black when seen through the lens. The lens performs the same phase realignment, but on orthogonally-reflected light, making it opaque. If polarity were due to a “blocking” effect of the lens, it would seem that the light should pass twice unhindered, as the alignment is conserved.

In turn, every other experiment performed in order to determine the behavior of light must be reconsidered in light of such effects imposed by polarized lenses, mirrors and half-mirrors. By studying the various experimental arrangements, it becomes evident that an even-numbered combination of mirrors, half-mirrors, polarized lenses and detectors permits an interference pattern, while odd-numbered settings destroy or impede such possibilities.

To continue supporting the blocking effect of polarized lenses, depicted as slits, gives support to Bell's non-local interpretation, where the photon adjusts itself before observation. But the same phenomena can be explained as alterations to the phase of the electromagnetic wave by the lens.

Unfortunately, things are usually more complex, so I cannot simply dismiss the particle/wave duality as a misinterpretation of the photoelectric effect or of the phase alteration of mirrors and polarized lenses. For instance, the particle/wave duality has been extended to subatomic particles other than photons.

Prince Louis Victor Pierre Raymond de Broglie, being the first to raise the argument, contended that just as Einstein had shown how light waves had corpuscular properties, so too would other particles of matter have wave attributes. An experiment which reputedly favors his conjecture was performed by the Americans Clinton J. Davisson and Lester H. Germer, who measured the “de Broglie wavelength” of an electron.
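
For reference, the quantity in question is λ = h/p. A minimal sketch with standard constants, evaluated at the 54 eV electron energy used by Davisson and Germer, lands on the scale of atomic spacing in the nickel crystal:

    import math

    H = 6.626e-34     # Planck constant, J*s
    M_E = 9.109e-31   # electron rest mass, kg
    EV = 1.602e-19    # joules per electron-volt

    def de_broglie_wavelength_m(kinetic_energy_ev):
        # lambda = h / p, with p = sqrt(2*m*K) for a non-relativistic electron
        p = math.sqrt(2 * M_E * kinetic_energy_ev * EV)
        return H / p

    print(de_broglie_wavelength_m(54.0))   # ~1.67e-10 m, atomic-lattice scale

That coincidence of scales is what allowed the crystal to act as a diffraction grating for the electron beam.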

First, de Broglie's intention was to explain quanta with wave harmonies, though electrons neither have fixed harmonious motion nor propagate like a wave. Bohr's atomic model could very well justify the behavior of electrons through wavelike motion; only their nondescript motion is made undeterminable through observation.

Second, de Broglie did not suspect that Einstein's photoelectric experiment could have been interpreted differently: that instead of assigning the quantum-sized energy package to an actual corpuscular photon at its minimum energy level, it could be ascribed to the molecular reaction according to Bohr's third postulate concerning the atomic excitation phenomenon.

Third, what Quantum Mechanics refers to as a wave is two different things, which certain models interpret as representing the same thing. These two references are: the physical disturbance of an energy field; and the mathematical curve of possibility for a particle. The second reference describes the manner in which particles exist and can be observed, thus providing the means of measuring the probability of finding various particle attributes, such as position and momentum, at a particular moment. For example, alpha radiation, the nuclear decomposition by release of a helium ion, as presented by Gamow's theory of radioactive disintegration, is described as a spherical wave that continuously emanates in all directions from the nucleus. But this spherical wave provides the means to measure the probability of disintegration, that the alpha particle can emerge in any direction. The highly energized helium atom will thus be emitted in total randomness and detected only as a particle, though described mathematically as a wave. As S. Weinberg said: “In other words, [particle] waves are not waves of anything; their significance is simply that the value of the wave function at any point tells us the probability that the [particle] is at or near that point.”xii This wave function has even been attributed to whole atoms.♣♣♣♣

Fourth, electrons and other subatomic particles, unlike photons, have mass, so they are restricted to a space. Chemical reactions and molecular configurations should be ample reason to reject wave attributes, at least for atoms. As for electrons, as demonstrated by experiments where they are made to impact a nucleoid, their particularity is always maintained.

Fifth, waves can be restricted, such as by magnetic fields, to manifest themselves at discrete areas. So too can objects such as lenses and magnets, which focus or diffuse, alter the electromagnetic waves, thus restricting any affected electron accordingly.

And sixth, electrons' motion is governed by the electromagnetic waves through which they travel; though the raison d'être of the force field is the energy and position of electrons and protons, the wave dominates the particles' behavior. So the Davisson and Germer experiment, which produced the Airy pattern, can be alternatively explained by the particularity of electrons whose motion is governed by the electromagnetic waves on which they travel. The concentric rings observed are a consequence of the atoms within the nickel crystal (or gold metal foil) used to reflect and diffract the ray of electrons.♣♣♣♣♣

I nevertheless think it is rather poetic to regard atoms as a composition of waves enclosed in themselves, like a little sound that harmoniously vibrates in a spherical domain restricted by the force that has produced it.

Since I figure this does not yet entirely dismiss the particle/wave duality, I additionally present another natural phenomenon which supports the wave nature of light and rejects its particularity: holography.

Holography is the recording of light interference patterns on a photographic plate, produced by splitting laser light so that one beam reflects off an object to interfere with the other, undisturbed beam, both of which are focused over the surface of the plate. The interesting phenomenon is that while the entire surface of the plate is illuminated, some areas in the picture will remain dark. This effect is easily explained by destructive interference of waves. Otherwise, as particles, photons must become of virtual existence so as to disappear wherever the carrier electromagnetic wave destructively interferes. Although this virtuality of photons is suggested in other theories, it is not with the same context or intention (this will be discussed in a later chapter).♣♣♣♣♣♣

♣♣♣♣ A wave function for the entire Universe has also been proposed (Hugh Everett's many-worlds interpretation). (See DISCOHERENT HISTORIES in CHAPTER XI: Theory.)

♣♣♣♣♣ http://hyperphysics.phy-astr.gsu.edu/hbase/davger.html & http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/davger2.html

♣♣♣♣♣♣ I have not put much thought into the actual set-up of full color holography, but it seems well possible to split the image in ordinary white light, scrutinizing each beam with polarized lenses.

It should be pointed out that, under the current quantum interpretation, the object being photographed should pose as a detector, adequate enough to collapse the photon-wave into a particle, so that it should no longer interfere or act as a wave.

A disturbing controversy concerning the propagation of photon waves should be clarified as well. The intensity (or square of the amplitude) of light diminishes with distance, while its energy remains constant. Energy is equivalent to the frequency of light times Planck's constant, and it is always the same in every direction and at any distance traveled from its source. Light of, for instance, blue color from a faraway luminous source is still observed as blue regardless of how distant; its frequency remains unchanged. If it were to lose energy it would shift to red; this can be observed in cosmological observations, but for an entirely different reason of no concern here. Only the intensity of the light is reduced by distance. Since in fact the frequency of light defines its energy, this seems to suggest a violation of the law of conservation of energy: if one photon (a quantum wave of energy) is emitted spherically from an atom, it could consequently excite innumerable other atoms by the same amount of energy. The paradox is largely due to the corpuscular definition of the photon. Imagine a series of infinitely elastic bubbles with no surface tension, expanding at regular intervals one within the other from a common point. The surface of each is traveling at a constant velocity, concentrically. Imagine then a small needle-like probe, manufactured solely for the purpose of measuring the passing of each bubble at its tip. The bubbles will not explode upon impacting the probe, but will continue their expansion unhindered. What the probe measures is the interval between consequent bubbles. A low measurement means large separations between bubbles. Energy is the frequency at which the bubbles come in contact with the probe and are detected. It is perfectly logical to conclude that the measurement would remain constant regardless of the position of the probe in space; what is more, a thousand or a million probes will all measure the same frequency, the same energy. Theoretically it would be possible to accommodate an infinite number of probes in space to measure the energy exerted by the expanding concentric bubbles. It is not the summation of all the measurements but the equivalence of all those measurements that matters, since it is frequency that this experiment measures. In the real sense, these infinite expanding bubbles are light waves and the probes are any detector or eye.

There are two factors in determining the amount of energy needed to excite an electron: its appreciated frequency and its intensity. Concerning wave physics, frequency would be relative to the reference frame, while intensity would not. The intensity of light diminishes with the square of the distance, so it would require longer exposures for individual atoms to accumulate sufficient energy to photo-excite. Thus a quantum, or photon package, is the discrete amount of energy that upon absorption by an atom causes one of its electrons to jump to a higher orbital shell. Neither a single crest nor a specific wave count constitutes a photon. It makes as much sense to define a wave as a particle as it does to define the length of a point.
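
The bubble picture reduces to two formulas: intensity thins out as I = P/(4πr²), while the frequency, and hence the energy hf any detector would ascribe to a “photon”, is the same at every probe. A minimal sketch (the source power and frequency are illustrative):

    import math

    H = 6.626e-34   # Planck constant, J*s

    def at_distance(source_power_w, frequency_hz, r_m):
        # A spherical wave's intensity thins out as 1/(4*pi*r^2),
        # but the frequency (and so h*f) never changes with distance.
        intensity = source_power_w / (4 * math.pi * r_m ** 2)
        return intensity, H * frequency_hz

    for r in (1.0, 10.0, 100.0):
        i, e = at_distance(100.0, 6.0e14, r)   # 100 W source, blue-green light
        print(f"r = {r:6.1f} m: intensity {i:10.4f} W/m^2, h*f = {e:.3e} J")

Only the first column changes with distance, which is why longer exposures, not redder light, are what remoteness demands of a detector.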

I cannot completely ignore Heisenberg's warning that the uncertainty in determining a particle's momentum and position was not to be interpreted solely in terms of measurement disturbances. This ingenious man had arrived at his conclusion from first-hand experimentation, knowing exactly the sort of behavior subatomic particles took.♣♣♣♣♣♣♣ But in view of the new interpretation given for the “particularity manifestation of the photon” produced by photodetectors (that such is a consequence of the quantized behavior of atoms within the metallic or phosphor screen and not of the continuous energy wave that impinges on it), coupled with the general acceptance of chaotic indeterminism in contemporary Science, a re-analysis of the entire theory is demanded. This is not to imply that quantum mechanics is to be dismissed altogether. Quantum mechanics continues to adequately represent and predict experimental results through a statistical approach, presenting the underlying causal reality with chaotic fields of probability (as described by Schrödinger's wave function). The uncertainty principle remains an unavoidable limitation; since subatomic particles are governed by energy waves, they can never be observed without affecting either their position or momentum, if not both. The subjective description must be complemented with an objective computation.

In light of this, I find it pertinent to discuss Richard P. Feynman's Quantum Electrodynamics theory, so as to take what is good and dismiss what is perceived as inconsistent with this new interpretation of light's interaction. His main argument is that light, a photon particle, does not necessarily travel in a straight line, but rather takes any arbitrary route, curved or what have you; but the summation of all those possible paths always adds up to the straight, “least time” trajectory between source and detector. This must be clarified, since it might appear that the intended argument refers to the spherical propagation of light waves. Here, however, the intended description is for an individual photon particle, and it imposes an unnecessary complexity on photons by describing an obscure trajectory.

Feynman's method of figuring light's amplitude, the probability of it reflecting on a given surface, is very effective, though very complicated to carry out as a mathematical exercise. The amplitude is calculated by the addition of numerous minute arrows which change direction through time. (The manner in which these vectors change direction reflects the sinusoidal behavior of light.) This is further complicated when interaction between photon and electron is concerned. So light not only takes various paths but also has variations of speed. “It may surprise you,” Feynman points out, “that there is an amplitude for a photon to go at speeds faster or slower than the conventional speed, c. The amplitudes for these possibilities are very small compared to the contribution from speed c; in fact, they cancel out when light travels over long distances.”xiii Yet this idea is needed in order to explain a slight imperfection in measurement of what was first theoretically calculated by Paul Dirac to be the “magnetic moment” of an electron.
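
The arrow-summation itself is easily imitated (the detour lengths below are arbitrary): each path contributes a unit arrow rotated in proportion to its extra length, so paths near the shortest one reinforce while wild detours cancel.

    import cmath

    def sum_over_paths(detour_lengths, wavelength):
        # Feynman's little arrows: each path contributes a unit arrow
        # rotated in proportion to its length. Arrows from paths near
        # the shortest one point nearly the same way and add up; long
        # detours spin rapidly and cancel one another.
        total = 0
        for extra in detour_lengths:
            phase = 2 * cmath.pi * extra / wavelength
            total += cmath.exp(1j * phase)
        return abs(total) / len(detour_lengths)

    near = [i * 0.001 for i in range(100)]   # detours well under a wavelength
    far = [i * 0.37 for i in range(100)]     # detours of many wavelengths
    print("nearly straight paths:", sum_over_paths(near, wavelength=1.0))  # ~1
    print("wild detours:         ", sum_over_paths(far, wavelength=1.0))   # ~0

This is, of course, exactly the behavior a propagating wave exhibits on its own, which is the point of contention here.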

♣♣♣♣♣♣♣ Though the Uncertainty Principle forms a logical mental construct, it lacks objectivity. Steven Weinberg's simplified explanation might prove helpful in understanding the logical relationship of terms: “If we happen to know that the particle's position is definitely here then the there value of the wave function must vanish and so the stop and go values of the wave must be equal, which means that we know nothing about the particle's momentum; both possibilities have 50% probability. Conversely, if we know that the particle is definitely in the state stop with zero momentum then the go value of the wave function vanishes, and, because the go value is the difference of the here and there values, the here and there values must then be equal, which means that we know nothing about whether the particle is here or there; the probability of either is 50%.” [Dreams of a Final Theory, S. Weinberg, p. 77].


Feynman further explains that “this correction was worked out for the first time in 1948 by Schwinger as j×j divided by 2π, and was due to an alternative way the electron can go from place to place: instead of going directly from one point to another, the electron goes along for a while and suddenly emits a photon; then it absorbs its own photon.”

Figure 11: Electron Virtual Photon Emission and Absorption.

So the electron's “magnetic moment” is no longer 1 (as it is by definition) but 1.0011596522 ± 10^-11 (the minute imprecision is due to the uncertainty as to what exactly is the value of j). What all this “most precise of all physical measurements” adjusts for is simply the slight interim period between absorption and emission of a photon by an electron; not being immediate, thus arises the discrepancy. Yet in order to justify this behavior of electrons, it calls upon action at a distance, since the electron must release a virtual photon just before it reacts, implying that some other information reaches the electron before the force signal itself.
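
The quoted figure can be checked to first order: with the fine-structure constant α standing in for Feynman's j×j, Schwinger's correction gives 1 + α/2π, already within a few parts per million of the value above (higher-order terms account for the rest).

    import math

    ALPHA = 1 / 137.035999   # fine-structure constant (j*j in Feynman's terms)

    # Schwinger's first-order correction to the electron's magnetic moment:
    moment = 1 + ALPHA / (2 * math.pi)
    print(f"magnetic moment to first order: {moment:.10f}")   # ~1.0011614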

Instead, if light is accepted as a wave, the electron will require a certain amount of energy before it reacts; and even then, a slight adjustment to account for the electron's inertial mass could further justify the delayed response. Again, this interpretation is in accordance with quantum measurements, without the need for particle/wave duality or action at a distance.

In reality, light simply propagates in spherical radiance (or however restricted by the configuration of the source, e.g. a conical projection). Light will spread through space, and the only means by which light can be detected is by direct impingement. It is like the reflection of the moon upon a lake: that of one and only one moon, but the same image can be seen from any direction along a properly reflective angle (physically changing the location of reflection over the surface of the lake); we see only those light waves that our eyes intersect.


CHAPTER VIII: Particle

“When I rest my head on a quantum pillow I would like it to be fat and firm; the recently available pillows have been a little too lumpy to soothe me back to sleep.” –Norman Mermin

“Not clear physics!” –my own silly pun

Quantum physics might not prescribe the exact mechanism of electrons and photons in an electromagnetic field, but it has precisely described their physical relationship, holding true and unaltered since 1913. Unfortunately, the same cannot be said of Particle Physics, which every few years must be reformulated, corrected, adjusted and complicated ever more since the day Victor F. Hess discovered cosmic rays. Nobel laureate Sheldon L. Glashow wrote about the field in which he endeavors: “quantum field theory is a science unto itself that does not necessarily deal with things that exist in the real world.”xiv Maybe so! Here is a science that has used the Uncertainty Principle of Quantum to allow itself the luxury of implementing the most unorthodox and deranged forms of reasoning. A license to kill Reason! Just to name a few, there is Gauge Theory, space- or time-reversal symmetry violations, and the intermediate vector gauge invariance of the weak interaction, whose exotic names subtly disguise their insipid logic.

Heisenberg's Matrix Mechanics, Schrödinger's Waveform Mechanics and Dirac's Transformation Theory provide unsurpassable predictive properties to Quantum Mechanics. All, in fact, describe indeterminism with probability. Because of the nature of probabilities, these exercises are so unrestrictive that each allows for over-implication, that is, non-real attributes that are mathematically permissible. For instance, Heisenberg's matrix theorem could allow within its mathematics imaginary time, a concept strongly advocated by the leading theoretical physicist Stephen Hawking.

A perfect example of over-implication is Sheldon Glashow's conjecture, the charm quark. Taking Schrödinger's wave of probability to explain the non-occurrence of the decay of the K-particle into two muons (subatomic particles will be discussed shortly), the charm quark could be presented as a wave out of phase with the strange quark's wave, so that their combined waveform would cancel the possibility, reducing the probability that the K-particle will decay into two muons, as theorized, to almost never.xv Before continuing with the discussion of Particle physics, as a prelude to the kind of nit-picking that will be conducted in these pages, I would like to skip back to the idea of time in the face of quantum physics, to demonstrate the type of irrationality that, although it has not surfaced, can occur in this crazy field of Quantum Field Theory for subatomic particles. As a matter of fact, what I am just about to present had in fact surfaced before, but no one recognized the tip of that iceberg. Niels Bohr wrote: “In the conception of stationary states we are, as mentioned, concerned with a characteristic application of the quantum postulate. By its very nature this conception means a complete renunciation as regards a time description. From the point of view taken here, just this renunciation forms the necessary condition for an unambiguous definition of the energy of the atom… In connection with the discussion of paradoxes of the kind mentioned, Campbell suggests the view that the conception of time itself may be essentially statistical in nature… according to which the foundation of space-time description is offered by the abstraction of free individuals, a fundamental distinction between time and space, however, would seem to be excluded by the Relativity requirement.”xvi Quantum mechanics provides a tolerance for inconsistencies under the banner of uncertainty and probability. The position and momentum, which cannot be precisely known simultaneously, are represented by the mathematical operations of quantum theory as a statistical relationship. It can almost be expected that this problem will tend to be resolved by the devising of quantum-time. Since the particle's position and momentum cannot be precisely measured simultaneously, by quantizing time, simultaneity will be evaded with non-causal events. This would almost tie Quantum with Relativity, which lacks simultaneity. It will then seem possible to measure exactly both position and momentum at the exact instant, by employing two different reference frames that would register the instant at rather different times and then create an interference between the two time-probability wave functions of the event. This will most expectably open the door for a complete new array of possibilities that could even include time particles, or chronons, whose virtuality determines time. I suspect an equation of the type,

Et − tE = −ih,

will creep up, although I cannot imagine all the implications and applications it would have beyond those just mentioned. The fact is that it surprises me that quantum time has not appeared before, even more so after Heisenberg stated that the electromagnetic force is visualized by the time part of an “anti-symmetric tensor located in space-time.”xvii

Particle physics branches off from Quantum physics, so whatever misinterpretation originated in Quantum is naturally inherited by Particle physics. Unfortunately, a series of mutations have taken place, and Particle physics has now become a monstrous theory of half-unreal particles which have no relevance in Nature. For the last fifty years, from the Cyclotron♠ to the thousand-times more powerful Tevatron (1.8 TeV), the collection of “elementary” particles has increased to numbers in the hundreds. These machines were originally intended to simulate the nuclear reactions caused by highly energized protons (better known as cosmic rays) when they collide with the atmosphere. They have progressively continued to exceed their energy levels, and their original intentions, in hope of finding the smallest constituent of matter. Such collisions fragment the nucleus (or whatever particles are used) into streams of energy. The higher the energy put into the collision, it is reasoned, the greater the fragmentation, forming smaller particles, with the eventuality that the smallest indivisible constituent be observed. (Higher energy denotes a smaller wavelength, which in turn provides better resolution. So by association, the higher the energy the smaller the particle, and the larger the mass of that particle, since mass equates to energy.) Collisions of successively higher energy create heavier “particles”, these “particles” being many times more massive than the original particles they came from. This could go on indefinitely: the higher the energy level attained, the more mass involved, and thus a bigger bang produced. “It soon became clear that the number of [subatomic] particles in the universe was open-ended, and depended on the amount of energy used to break apart the nucleus. There are over four hundred such particles at present.”xviii


♠ http://www.phy.ntnu.edu.tw/java/cyclotron/cyclotron.html & http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/cyclot.html


Regrettably, not all scientists agree on such a clear exposition. As new “elementary” particles are created, Particle physics must change in order to accommodate their existence. Many of the more important particles (in terms of theoretical support), like the W-boson, have been created very recently (Carlo Rubbia, European Center for Nuclear Research, 1983), after colliders had achieved adequate energies of 300 GeV. The heaviest and hardest to detect of these necessary “elementary” particles to date has been the top quark. The European Center for Particle Physics in Geneva has determined a new lower mass limit for the top quark, exceeding 170 GeV.xix This would require energy levels (theoretically speculated) achieved by the now defunct Superconducting Super Collider ring.♣
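
The resolution argument in parentheses above can be made numeric: the length scale a collision probes is roughly λ = hc/E. A minimal sketch, with the energy milestones taken from the text:

    HC_EV_M = 1.23984e-6   # h*c in eV*meters

    def probe_scale_m(energy_ev):
        # lambda = h*c / E: the higher the energy, the finer the scale resolved
        return HC_EV_M / energy_ev

    for label, e in (("300 GeV (W-boson era)", 300e9),
                     ("1.8 TeV (Tevatron)", 1.8e12)):
        print(f"{label}: ~{probe_scale_m(e):.1e} m")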

Colliders can accelerate an electron to energy levels in excess of a thousand million electron volts, but under such conditions it can no longer behave as an electron, and just because it maintains its negative charge should not constitute it as an electron either. An analogy would be energizing radio waves to frequencies at X-ray levels and still classifying them as radio waves, or comparing a small slug of lead in one's hand with a fired bullet.

Most particles created have a life span of a mere billionth of a second and bear no relation to the structure of matter; these can be considered disorganized bundles of energy, or deformed matter, that rapidly decay through the least resistant route towards a permissible state of matter. True, in pursuing higher energy levels the conditions created approximate those that theoretically existed in the initial moments of the formation of the Universe (while scientists still try to reach cosmic rays' levels of energy). In doing so, the experiments might no longer pertain to elementary particles per se, but they do reveal the rules, or better yet, the path of least resistance taken by such extraordinarily high energies towards the formation of naturally stable mass.

Physicists carrying out these experiments must discern among the myriad of “particles” and identify their characteristics, such as energy and charge, in order to catalog them. Since some few hundred “elementary particles” have been created, it naively leads to the conclusion that these are no longer true elementary particles. So, arbitrarily, the various attributes used in cataloguing these “particles” were in turn adopted as constituents of matter. In a real sense, the idea of quarks arose as a way of grouping by constituent characteristics (Murray Gell-Mann), like decay time or energy size (mass). These attributes are taken as real elementary constituents and are even defined by a wave function.

Quarks make up hadrons. (The common taxonomy is: fermions, which are particles that follow Pauli's Exclusion Principle, and bosons, which do not. Both categories are further subdivided into particles affected by the strong nuclear force, hadrons, and those that are not. A fermion that is also a hadron is denominated a baryon; a non-hadron is a lepton.) Altogether quarks are very numerous, and allowing them to exist requires in turn an additional force-carrying particle responsible for holding them together (gluons). Quarks are the epitaph of years and trillions upon trillions of joules spent in the creation of hundreds of malformed energies that could not hold themselves together naturally.♣♣

♣ Shortly after the U.S. Congress denied additional funding to the Superconducting Super Collider did other labs claim to have produced the Top quark.
I personally feel physicists are privately laughing, with uneasiness and marvel, over these quarks they have concocted. For even their naming reflects the sort of nonsensicality they have summoned: strange, charm, up, down, bottom, top. For some obscure reason it reminds me of the seven dwarves in the story of Snow White.

What definitely can be said about the collection of malformed beasts is that they reveal some patterns: the allowable terms and restrictions by which mass is formed. For instance, electric charge is one basic characteristic that energy suffers when becoming massive. In all known particles, deformed et al., chargeness comes in integer multiples of the electron charge (i.e., -1, 0, 1, 2).♣♣♣ Another important characteristic concerning charge is that it exacts a heavy toll on a particle's mass. The table below indicates the charge and mass of various particles.

♣♣ I christen all these malformations terions (for monstrosity particles): particles naturally unstable, which seek to reduce their state of mass/energy. The least resistant path taken often shows up in beautiful spirals, testifying to the sort of forces involved.

♣♣♣ Quarks, nevertheless, are assigned fractional charges (e.g. down -1/3, up 2/3). Excusable, since the charge of an electron was first defined as -1. So a proton of charge 1 is made of 2 up's and 1 down (2/3 + 2/3 - 1/3 = 1) and a neutron of 1 up and 2 down's (2/3 + -1/3 + -1/3 = 0). But if redefined, down as -1 and up as +2, the electron thus carries a -3 charge. Now take the spontaneous decay where a neutron turns into a proton and an electron; then there is a discrepancy for the negativity of the electron, since electrons by definition are not made of quarks: neutron {2 + -1 + -1} >< proton {2 + 2 + -1} + electron {(-3)}. Even if electrons are disregarded, a symmetry transformation proposed since the late thirties, which indicates the probability for nucleoids of being either a proton or a neutron, would confute the idea of quarks.


Table 1: Particle Chargeness.♠

PARTICLE    SYMBOL   MASS (electron masses; e = 0.511 MeV)   CHARGE
photon      γ        0           0
neutrino    ν        0           0
electron    e-       1           -
muon        µ        207         -
pion        π0       264         0
            π+       273         +
            π-       273         -
kaon        Κ+       966         +
            Κ0       974±1       0
eta         η        1074        0
proton      p+       1836        +
neutron     n        1839        0
lambda      Λ0       2183        0
sigma       Σ+       2328        +
            Σ0       2334        0
            Σ-       2343        -
xi          Ξ0       2573±2      0
            Ξ-       2586±1      -
delta       Δ++      2413±6      ++
            Δ+       ?           +
            Δ0       2415±4      0
            Δ-       2429±10     -
sigma*      Σ*0      2703±8      0
            Σ*+      2706±1      +
            Σ*-      2712±4      -
xi*         Ξ*0      2992±2      0
            Ξ*-      3002±6      -
omega       Ω-       3273±1      -

Notice that as the negative charge increases for a type of particle, so does its mass. Exceptions to the policy are the π0, whose lifespan is about three hundred million times shorter (8.28 × 10^-17 s) than that of its short-lived, 30-nanosecond (2.603 × 10^-8 s) brothers, and the Σ*0, with a mass uncertainty measured at plus or minus eight units, which could place it properly between its two siblings. There is a variation in the mass increase due to chargeness within each set of particles, from (quite interestingly) 3 units between the nucleons to 13 between the Ξ and Ξ* particles, implying that malformed beasts have permissible existence if there is excess energy. The surplus energy is thus released in the decay process.

These malformed particles could essentially be over-energized protons, an interpretation totally opposite to that of the current majority. Protons are not sliced into more energetic substructures but rather injected with excessive energy, which tends to break and dismember in order to decay back into a less energetic state, that is, back into a naturally permissible structure. Electrons and protons would be true “elementary” particles from which other unnatural malformations could be assembled, provided sufficient energy.

♠ From Physics, R. Resnick & D. Halliday, John Wiley and Sons, Inc., 1977.
The idea of electrons and protons as true elementary particles is not only simpler but also experimentally sound. Such a conjecture might be my weakest proposition, but it presents a much simpler model than Quantum Chromodynamics.

Figure 12: Proton Neutron Gauge.♦

The quarks' charge is balanced out:

(0)neutron ⇒ (1)proton + (-1)electron,

An electron has no quarks, unless it is in itself one: a “down-minus-up” or “up-less down”. This expands to,

(2 + -1 + -1)/3 ⇒ (2 + 2 + -1)/3 + (-1 - 2)/3,

which simplifies to,

0/3 ⇒ 3/3 + (-3/3) electron
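
A trivial check of this bookkeeping, with the quark charges expressed in thirds of the electron charge as in the footnote above:

    # Quark charges in units of e/3, following the text's rescaling:
    UP, DOWN = 2, -1               # up = +2/3, down = -1/3 (times 1/3)

    def charge_thirds(quarks):
        # Total charge of a quark combination, in units of e/3.
        return sum(quarks)

    neutron = [UP, DOWN, DOWN]     # udd
    proton = [UP, UP, DOWN]        # uud
    electron_thirds = -3           # -1 in ordinary units

    # Beta decay, n -> p + e, balances in units of e/3:
    assert charge_thirds(neutron) == charge_thirds(proton) + electron_thirds
    print(charge_thirds(neutron), "->", charge_thirds(proton), "+", electron_thirds)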

The electron/proton constituency model suggests that energy tends naturally to become massive in one configuration, namely the neutron (an electron-proton bonding). The neutron, being a rather unstable configuration if left isolated, decays into an electron and a proton, producing in turn an electromagnetic field.

Adding the constituent masses reveals a relatively large mass defect, somewhat larger than the mass of an electron, or about 0.782 MeV.
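
A quick check against the standard rest masses (in MeV) confirms a defect of roughly one and a half electron masses:

    # Standard rest masses, MeV:
    M_NEUTRON = 939.565
    M_PROTON = 938.272
    M_ELECTRON = 0.511

    defect = M_NEUTRON - (M_PROTON + M_ELECTRON)
    print(f"mass defect: {defect:.3f} MeV")                   # ~0.782 MeV
    print(f"in electron masses: {defect / M_ELECTRON:.2f}")   # ~1.5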


♦ http://hyperphysics.phy-astr.gsu.edu/hbase/particles/proton.html


This is exceedingly more energy than that permissible for the electron to orbit the produced proton, so it is violently emitted with the surplus energy. Being such a large exothermal reaction, this explains the tendency of neutrons to be so short-lived (918 seconds on average) while isolated. In the equation below, better known as beta decay, I utilize the symbol for the neutrino loosely, as the surplus energy. But in the subjective reality of particle physicists this missing mass accounts for an “objective” though “undetectable” massless particle, the neutrino.

n ⇒ p+ + e- + νe

I shall turn my attention for a brief moment to the neutrino. Wolfgang Pauli first thought of the neutrino “particle” (thought being perhaps the only means by which such a particle could ever be “observed”), intended to carry away the mass defect in beta decay so as not to violate the Conservation of Energy Principle. The neutrino is theoretically a massless, chargeless and virtually undetectable particle.♣♣♣♣ Neutrinos are theoretically capable of passing through a lead block as thick as the solar system without suffering any collision. The neutrino was deemed “observed” in an experiment by Cowan and Reines, which comprised a large underwater cage filled with pure water and wired with very sensitive instruments that could detect a positron and a neutron produced by reverse beta decay. The theory in play is that an antineutrino strikes a proton within a water molecule and causes it to change into a neutron and a positron, the latter in turn annihilating with an electron to produce light. This is the inverse process of beta decay, violating the PCT (parity, charge, time) symmetry, which is why antineutrinos are held responsible. Such a process was indeed observed. The problem is that for each reverse beta decay observed, the experiment must count on about a trillion of these ghostly antineutrinos. As I have mentioned in CHAPTER VI: Force, lower-than-expected solar neutrino counts conflict with the current model, which hopelessly evokes these ghosts.♣♣♣♣♣xx An alternative explanation, which excludes neutrinos, could be found using solely quantum mechanics: the spontaneous entrapment of an electron by a proton. I realize that such a process requires at least a “neutrino” of energy in order to occur. I also realize that such a process does not clearly correspond to reverse beta decay, but shortly I will explain how the electron-proton constituency model clarifies this question. For now it can be said that the mechanisms within the nucleus could create imbalances for which the path of least resistance towards stability would be reverse beta decay, thus needing to entrap an electron.♣♣♣♣♣♣

♣♣♣♣
An experiment performed by Wolfgang Stoeffl recorded energy variations of electrons emitted by tritium nuclei (radioactive hydrogen with two extra neutrons); a statistical analysis of the neutrino’s mass reveals it to be a negative number.
♣♣♣♣♣
The SAGE measurements were further supported by the GALLEX research, which also found fewer neutrinos than expected by the standard theory. “[GALLEX] obtained an average value for the capture rate of 83 solar neutrino units (SNU), where 1 SNU equals 10^-36 neutrinos captured per atom per second. Theoretical models of neutrino production within the sun predict capture rates from 124 to 132 SNU.”



I am aware of the significant distinction between electromagnetism and the weak nuclear force, where expelling a nucleon is altogether different from electron excitation. So the idea of radioactivity resulting from processes similar to electromagnetism might seem an unacceptable proposition if expressed solely in terms of the path of least resistance. Presented, however, under a different name, the unification of these forces becomes an attractive proposition. But why must one employ such nonsensical models as Gauge Mechanics, the Electroweak Unification, the Spontaneous Symmetry Breaking Theorem, and the Grand Unification Theory in attempting this, when these concoctions pull logic arbitrarily from incongruent theorems and mathematical operations, permitting any hypothetical conjecture to stand as feasible? The electron and proton constituency model is not an attempt to synthesize a hypothetical particle responsible for modulating the electromagnetic and weak nuclear forces, so as to become the electroweak force; rather, it presents, or leads to, an explanation in terms of the field of least resistance that works for both electromagnetism and nuclear reactions of the weak sort.

I should like to add at this point, since it is the least inappropriate place to mention it, that the Coleman-Glashow mass formulas can be better accounted for by the electron-proton constituency model:

2n + 2Ξ = 3Λ + Σ

and,

Σ− − Σ+ = (Ξ− − Ξ0) + (n − p).

Of course, such a model is still inconclusive. Other characteristics, such as intrinsic spin, lifespan, mass and the particles’ associations amongst themselves, must also be explained (which, I might add, standing models can only explain with strictly hypothetical quarks).

Just as the particle/wave duality has been the poltergeist of Quantum Physics, the concept of force/force-carrying-particle duality has been the downfall of Particle Physics’ integrity. This additional double role of matter is taken as a natural extension of the quantum wave/particle duality—a real aspect being the particle, and a virtual aspect the force manifestation. Electromagnetism is carried by a virtual photon, gravity by a virtual graviton and nuclear forces by an array of virtual bosons. This provides a particularity to the seemingly ghostly and far-reaching interactions of forces. So Earth is attracted to the Sun by an exchange of gravitons. But such reasoning has accumulated into a collection of spin-off theories employed solely for the purpose of this real/virtual duality. For this reason quarks have been invented, as a face-saver to the duality. In the words of Glashow himself, “in this [virtual] role the photon is not observed as a real particle, for it does not emerge from the region of interaction to impinge on a detector—it is consumed in the act of producing the electromagnetic force.”xxi

♣♣♣♣♣♣
Radon-222 decays to Polonium-218 and again to Lead-214 by consecutive alpha decays one hour apart, then back up to Bismuth-214 and Polonium-214—a process known as “neutrino-less double-beta decay”. Two neutrons consecutively turn into two protons, emitting two electrons. The fact that it is neutrino-less tends to support the electron-proton constituency model, and that the undetectable neutrinos may not exist after all.


It is very hard to apply this comment when one is playing with magnets: they interact at great distances, with enough time and space for virtual photons to manifest themselves and impinge on our eyes (or be diverted, say, by mirrors).

Attributing a dual role of particle/force was how pions were rationalized by Hideki Yukawa, to express the strong nuclear force as a virtual manifestation of a particle. For Glashow, such constant interaction between nucleons violates the law of conservation of energy, but it can be excused through quantum mechanics. With its license of inexplicability, such disruptions are allowed since “the intrinsic uncertainty in time corresponding to the energy of a virtual pion is about equal to the time it takes light to traverse the nucleon, about a trillionth of a second. Thus the virtual pion has an exceedingly fleeting existence.”xxii This statement illustrates the sort of irrationality physicists are willing to accept just to make room for virtual particles that could account for the existence of forces.

Because the hypothetical pion acts across the extremely minute diameter of the atomic nucleus, it was attributed a mass several hundred times larger than an electron’s, and an exceedingly short expected lifespan. Thus, to observe such a particle experimentally, enough energy had to be generated to produce so heavy a ‘particle’. Cecil Frank Powell eventually attained this in 1947, and so Yukawa’s virtual particle came to be.

The existence of the fleeting pion was dubious regardless, as it was observed for a mere
trillionth of a second in the natural collision of cosmic rays with the atmosphere. Its quick
path towards stability produces a muon, an unstable beast in itself, with a lifespan of two
microseconds. This implies that the impact causes a great amount of energy that is
quickly released and consumed by surrounding atoms in the atmosphere.

Recent developments have added more controversy to the pion. At the Los Alamos National Laboratory, atomic nuclei were bombarded with protons in order to observe pion interactions, but the pions were found not to be involved at close proximity between nucleons. “Pions carry nuclear force only over distances of 0.5 fermi or more,” indicates the article.xxiii Speculation arises that gluons are thus “directly involved” with the strong nuclear force. Gluons are responsible for holding quarks together, so now the pions are left without a purpose to be.

Another spin-off theory that grew out of this particle/force duality was the positron, the antimatter counterpart of the electron. The idea of antimatter arose from Paul Dirac’s (1931) explanation of vacuum as a field of negative-energy electrons by which light could travel; a substitute for the unforgotten æther that Albert Einstein, three decades before, had so much wanted to reject but could not intellectually do so, as if light’s ability to propagate through empty space were hard to accept. By inference, a positive-charge particle, originally believed to be a proton, canceled out the negative-charge electron into a stable medium through which the virtual photon propagates. But for the sake of symmetry the particle must have the same mass as the electron, thus a positron; discovered by Carl David Anderson in his studies of cosmic ray collisions—surely promoting the credibility of Dirac’s equation. I can’t deny the veracity of antimatter, as it is not only observed as a byproduct of some radioactive processes but is also repeatedly produced inside particle accelerators, and spewed out in vast jet streams from the centers of galaxies.♣♣♣♣♣♣♣

In general, antimatter does not exist in amounts equal to matter, as it would if made from a well-balanced act of cosmic creation. The study of electromagnetism shows that the field has a spontaneous right-handedness in formation. (Or at least, if the direction of the electric field taken by the current of charged particles happens to be random, the first of these would induce other charged particles to follow accordingly.) Nevertheless, the idea of space comprising virtual electron-positron pairs strikes me as metaphysical, since for one thing they would annihilate each other upon contact. And even though Dirac’s equation was subsequently altered by renormalization and over again by quantum electrodynamics, the reason for its present dismissal was not the obscurity on which the argument was originally based, but some measurable inconsistencies among further experimental findings. The discovery of the positron, though serendipitous, is still regarded as a consequence of Dirac’s “prediction”. I must add at this point that charge and mass are basic characteristics of matter. The fact that a positron exists and is a stable particle while maintained isolated demonstrates a possible state or combination of these two independent characteristics. That a positron and an electron annihilate each other into two gammas indicates that positrons cannot exist naturally, that the positron is wrongly assembled within this universe of matter. The idea I am trying to convey is that the positron has nothing to do with Dirac’s negative electron sea, that it is simply a misbehaved, or better yet a misassembled, particle. It has been Dirac’s fortune that the conjecture came before the discovery, and that his inference was attained by presumptuously conceptualizing photons as virtual particles responsible for the electromagnetic force, which would propagate through space by jumping from electron-positron pair to electron-positron pair.

The force/force-carrying-particle duality has been further strengthened by a hypothetical term known as intrinsic spin. It implies that, by the rotational symmetry a particle possesses, it will either comply with Pauli’s Exclusion Principle or not. This does not stay loyal to the original idea of the intrinsic spin of an electron, which restricted to two the number that could occupy a given orbital, but it attained its purpose of supporting the duality: half-integer spin marks a matter particle, integer spin a force carrier.

Alternatively, the electron-proton constituency model accounts for all the relevant phenomena explained by current models, and also for many other effects and behaviors of matter which current models do not. For instance, the narrow band of allowable isotopes called the Band of Stability can be defined by the electron-proton constituency model in ways quarks never can.

A neutron, which is rather unstable by itself, finds stability by coupling with a proton. This relationship is very striking in its order, and furthermore permits the conglomeration of proton-neutron couplings. Although the ratio is not exactly 1 to 1, tending towards 1:1.5 for heavier isotopes,♣♣♣♣♣♣♣♣ this coupling stability is such that it is impossible to hold two protons together without a neutron.
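The drift of this ratio can be illustrated with a handful of familiar nuclides (a tabulation of my own, using standard proton and neutron counts):

    isotopes = {                 # (protons Z, neutrons N)
        "helium-4": (2, 2),
        "oxygen-16": (8, 8),
        "iron-56": (26, 30),
        "lead-208": (82, 126),
        "uranium-238": (92, 146),
    }
    for name, (z, n) in isotopes.items():
        print(f"{name:>11}: N/Z = {n / z:.2f}")
    # the ratio climbs from 1.00 for the lightest nuclei towards ~1.5-1.6 for the heaviest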

♣♣♣♣♣♣♣
See http://osse-www.nrl.navy.mil/dermer/annih.html
♣♣♣♣♣♣♣♣
The stability of a 1:1.5 proton to neutron ratio is demonstrated in the recent production of very heavy isotopes. Usual isotopes of seaborgium, atomic number 106, decay in fractions of a second. Instead, seaborgium-265 (produced by fusing neon-22 with curium-248) has decay times between 2 and 30 seconds. “Theorists attribute the enhanced stability of these isotopes to a slight deformation of the nuclei from a spherical to an oval shape. They predict a stronger deformation and even greater stability for a nucleus with atomic number 108 [hassium] and mass number 270.” Again, the proton to neutron ratio is 1 to 1.5. [SCIENCE NEWS, Sept. 24, 1994, Vol. 146, No. 13; p. 206].

These facts reveal a very important interrelationship of nucleons: the coupling alleviates exuberance by defining a balanced field. The neutrons act as joining mechanisms between protons by alleviating or virtually eliminating their repulsion, acting also as a buffer in the varying electronic field that surrounds them. There is clearly a limitation on the effectiveness of the neutron in facilitating the gathering of protons, given the growing need to enlarge the ratio as the proton number increases. So there is a small proton field instability that requires additional neutrons as this discrepancy accumulates.

Figure 13: Isotope Curve.

Nuclear stability is not maximized at a 1 to 1 ratio, as demonstrated by the extensive range of isotopes, the deviation away from the N=P line, and the continuous fusion of elements into higher configurations. Again, this is all due to a slight imbalance in the coupling between proton and neutron. Were the coupling perfect, it would all still be hydrogen; but the supremely exothermic character of the fusion of light atoms into heavier ones demonstrates the feasibility, and provides for the tendency, of nuclei to gather into larger and more stable configurations. True, it takes a large amount of energy to fuse two atoms together, a fact that has restricted this process to none but the very massive bodies in the cosmos, or so it seems. These processes are then only permissible under specific conditions where nuclear reactions present themselves as the path of least resistance.

What is the most stable configuration? If a table is constructed plotting the binding energy per nucleon for the isotopes, a curve of the manner depicted below emerges. At the peak of this curve lies iron (Fe), one of the more stable nuclei. That is to say, when the constituents of the isotope (26 protons, 30 neutrons and 26 electrons) are added together, the sum differs from the observed mass of iron-56 by the largest amount—just about half a proton’s mass.xxiv So the configuration alleviates its internal energy through a less massive state. This should not be taken arbitrarily—under current standard theories it does require considerable stress in the environment, such as that of ancient stars, for such nuclear states to be reached.
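The arithmetic can be made explicit, assuming standard atomic-mass-table values (in unified atomic mass units); the result agrees with the half-a-proton figure:

    m_H1 = 1.007825     # u, hydrogen-1 atom (one proton plus one electron)
    m_n = 1.008665      # u, free neutron
    m_Fe56 = 55.934937  # u, iron-56 atom

    Z, N = 26, 30
    defect = Z * m_H1 + N * m_n - m_Fe56   # ~0.528 u
    print(f"defect = {defect:.4f} u, ~{defect / 1.007276:.2f} proton masses")
    print(f"binding energy per nucleon = {defect * 931.494 / (Z + N):.2f} MeV")  # ~8.79, the peak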

Figure 14: Isotope Stability Curve.

By expressing nuclear force through a state of least resistance, a very simple synthesis of the strong and weak nuclear forces is achieved, one that goes beyond the mere adoption of common terms. If an imbalance in the nucleus is enforced, the energy and mass will seek the path of least resistance to alleviate their state to one of lesser energy. There should be no restriction as to how the release of energy/mass is effectuated. It can take the form of electromagnetic energy, such as gamma emission, or the decay of a neutron into a proton and an electron, or the reverse, depending solely on the present configuration. Radioactivity is that process of least resistance by which the atom reaches a more stable nucleic state.


Visualize the atomic nucleus as a conglomeration of rapidly vibrating protons and neutrons. Quantum mechanics alone should account for the spontaneous jump by which a nucleon is released from the set—the larger the nucleus, the greater the likelihood. Yet this could very well occur in a relatively stable and simple isotope, say for instance an oxygen atom that has lost a neutron. In order to reach a more stable nucleic ratio after losing a neutron, one proton could entrap an electron to become a neutron, releasing large amounts of energy in the process. The oxygen unavoidably becomes a nitrogen atom. What I am describing here is “isomorphic” to the reverse beta decay, a reaction used to testify to the existence of antineutrinos. The two diagrams below illustrate the correlation.

Figure 15: Isomorphic Reaction to the Antineutrino Reverse Beta Decay.

It should be noted that the antineutrino represents a gain in mass—the neutron being heavier than the proton and electron added together. Though both reactions gain mass, the antineutrino represents the greater discrepancy, since it in itself lacks any mass and, additionally, a positron is needed for the liberation of light. The apparent violation of the Conservation of Energy Law in both schematics is resolved by a mass defect resulting from any proton-neutron coupling.♣♣♣♣♣♣♣♣♣xxv So neutrons within a nucleus are lighter than when isolated. Even when some electromagnetic energy is released in the reverse beta decay reaction, a mass balance is maintained. But not all the electromagnetic energy accounts for the mass defect. To balance the mass-energy equation for all nuclear reactions, the nuclear binding energy must be accounted for. So, for instance, in beta decay, where a neutron decays into a proton that remains in the nucleus while emitting an electron, instead of attributing the mass defect to an undetectable neutrino, some of the neutron’s mass is converted into binding force, forming an electromagnetic field. This produces a stronger, more stable nuclear field. Since nuclear stability depends on the proton-neutron ratio, variations away from the Band of Stability denote the propensity for nuclear readjustment through the path of least resistance. All in all, there are no neutrinos, much less antineutrinos.

Under the field of least resistance model, it should be asked what then prevents the electron from falling into the nucleus and coupling itself to a proton. A neutron, after all, could be seen as the lowest state of electron excitation in the electron-proton constituency model. It would then seem reasonable that atoms would tend towards this more stable state by reducing their electromagnetic field. This is in fact so, but only to an extent: it should not affect nuclear stability, so a proton would not readily absorb an electron to become a neutron; such a proton-electron coupling occurs only when, for the sake of nuclear stability, it is the path of least resistance towards a greater atomic balance. The exothermic nature of beta decay demonstrates that nuclear balance overrules. Were an electron to couple with a proton, it would unbalance the proton-neutron ratio, which requires much more energy to alleviate. It is less strenuous to repel an electron than to change the nucleon ratio—make the neutron responsible for this. Hence two contrary fields restrict the motion of the electron, which thus follows a path of least resistance: nuclear stability prevents the electron from falling into the nucleus, while electromagnetic stability lures it into proximity. This implies that for a simple atom such as hydrogen, a balance between the two fields will describe a quite simple spherical field of probability, within a specific range of radii, between the electron and the proton. This is in accordance with the s orbital of the Bohr-Schrödinger-Born atomic model.

Accordingly, chemistry is just as simply explained through fields of least resistance. Atoms tend towards the more stable chemical state, the noble configurations, where every orbital is occupied by two electrons; thus the ionic variations. In order to maintain both electromagnetic balance and noble configuration, atoms either gain or lose electrons, which in most cases is best achieved by the sharing of electrons with other atoms—covalent bonding.

Finally, the most elementary “particle” of all, the photon—the wave that started it all. It might very well be the building block of matter; after all, “subatomic particles” tend propitiously to decay into light. Take an electron and a positron, annihilate them by contact, and the resulting energy is two very energetic (not virtual) photons in the gamma range (1.23 x 10^20 Hz or 0.511 MeV each).10♣xxvi The equivalence of energy and mass demonstrates that some mechanism in nature exists to define one from the other. The difference seems to lie in motion. For energy to become massive it must turn unto itself, losing its field attributes; in losing its wavelike propagation it gains particle exclusiveness. My belief is that here too is locked the secret to gravity.

♣♣♣♣♣♣♣♣♣
“When a proton captures a neutron to create a nucleus of deuterium, the interaction releases energy in the form of gamma rays… This mass loss is presumed equivalent to the energy released.” David E. Pritchard and Frank DiFilippo have measured this mass defect to great precision, 0.002388178 amu (=2.22 MeV).
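The quoted frequency follows directly from Planck’s relation E = hf; a short check, assuming the standard constants, also gives the threshold frequency needed to supply a neutron’s rest energy, anticipating the figure quoted below:

    h = 6.626e-34    # J*s, Planck constant
    eV = 1.602e-19   # J per electron-volt

    f_gamma = 0.511e6 * eV / h     # one annihilation photon
    f_neutron = 939.6e6 * eV / h   # a neutron's worth of rest energy
    print(f"annihilation photon: {f_gamma:.3e} Hz")   # ~1.235e20 Hz, as quoted
    print(f"neutron threshold:  {f_neutron:.2e} Hz")  # ~2.3e23 Hz, near the highest gammas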

If the laws of Nature permit energy to become massive in one particular configuration, namely the neutron, it takes photons at the energy levels of the highest gammas (3 x 10^23 Hz) to count with sufficient energy to create such a particle. A photonic wave cannot gain energy by itself from nowhere; likewise, conversely, it cannot lose or get rid of its energy (an x-ray cannot naturally become ultraviolet light). However, given the conditions believed to have existed at the beginning of the Universe, photonic waves of such high energy would have existed. The physics of the electromagnetic wave is that of a sinusoidal spherical vibration. At very high energies it becomes very erratic, especially when dealing with higher amplitudes. Under such conditions the wave folds unto itself, becoming massive, easing this high-energy field. What exactly is involved in this folding of a wave is too abstract for me, but I speculate that within the mathematics of the quantum wave function such a transformation, from sinusoidal to impact or special wave, is possible.11♣xxvii The formation of matter seems a rather futile attempt by an energetic photon to reduce its field state to a particle state, as matter creates various disturbances of greater impact on space. These disturbances are exclusiveness, electromagnetism and gravity. As it turns out, it was a rather fruitful compromise that Nature has allowed to occur some 10^80 times (the supposed number of atoms in the Universe).

What remains is the question of gravity, the answer to which will first require a redefinition of Einstein’s relativistic space, ridding it of its fourth “time” dimension and instead seeking a solution through paths of least resistance.

10♣
The French-Soviet satellite GRANAT recorded a brief pulse of gamma radiation from a source known as 1E1740.7-2942, “The Great Annihilator”. The photons had energies near 0.511 MeV, the amount of energy generated by electron-positron annihilation. It is believed to be the result of the enormous stress that surrounds the [black] hole as matter falls in. Eventually the positron interacts with surrounding gas and gets annihilated. (See Gamma-Ray Burst and Radiating Black Holes in CHAPTER XI: Theory.)
11♣
The mysterious manner in which energy would turn to particle could come as no surprise to particle physicists, since they already believe in the hypothetical Higgs field, which by definition is responsible for the mass of all particles in its virtual state. It describes particularity through the symmetry breaking of the force/force-carrying-particle duality, with a nonzero strength at the lowest energy state. In other words, it stops being energy because at its least energetic state the field is something other than zero. This explanation could be expressed in quantum terms to the point of incomprehension, and still mask the truth that the formation of particles is a mystery to scientists, and could only be excused by the detection of the Higgs particle in some future super-powerful accelerator.


CHAPTER IX: Space

“[Gravity and electromagnetism]… all ye know on earth, and all ye need to know.” –Albert Einstein

“This sudden success on a grand scale, after a generation of desperate striving by great minds, lends a heroic, even mythic, quality to the history of [quantum theory]. But, inevitably, it has also led to a sensitivity on the part of physicists, a kind of defensiveness, ultimately arising from the fear that the whole delicate structure, so painstakingly put together, might crumble if touched. This has tended to produce a ‘Let’s leave well enough alone’ attitude, which I believe contributes to the great reluctance most physicists have to tinker with, or even critically examine, the foundations of quantum theory. However, fifty years have gone by and the structure appears stronger than ever.” –Daniel Greenberger

So far I have commented disapprovingly on contemporary physics: dismissed time as a mental construct rather than a physical dimension, revoked the Second Law of Thermodynamics with the diametrically opposite observation that all things tend towards balance and order, disregarded the particle/wave duality of light as a misinterpretation of a century-old experiment, and reduced Quantum’s insipid logic, which explains these uncertain phenomena by statistical manifestations, to a more palpable one, the proton-electron coupling. Now my task is mainly to construct a theory that will explain the Special and General Relativity theories with timelessness, employing the various concepts previously presented. Those who have approved of my previous views will find this model a fulfilling complement. It is my hope that those who disagree with what I have presented so far may, after finally reading this chapter through, give me their concordance.

In the discussion concerning the inconsistencies of Relativity an alternative formulation was not given, for not until the previous chapter had I presented sufficient basis to provide one. With no more undue preamble, I introduce the idea of event-space.

The term event-space is synonymous with space-time but for the time factor. Since the basic premise of this work is the non-objectiveness of time, Einstein’s Relativity, or rather every experimental observation, must be accounted for in terms of time as a scalar in order for this proposed model to be accepted.

The word “event” refers to action taking effect, be it the motion of a particle or the propagation of energy. The word “space” refers to the boundaries established for a described system. Either may vary in the density of matter or the intensity of energy. It is the density of matter and the intensity of energy that define the event-ness of that space.


Event-ness represents the “change of ratio” within a space (congruous with the definition of time as a standard ratio of harmonious motions). In a system within a larger event-space, compared to an arbitrary standard event-space, motion will be “swifter”, while in a smaller event-space, space is compressed and motion is “retarded”. “Shorter” or “smaller” refers to greater compression, akin to vibrations, where a shorter wave means a higher frequency.

Higher densities of matter create stronger gravitational forces that retard motion by increasing inertia. It is for this reason that clocks on Earth tick (or vibrate) more slowly than they would in space over the same time period (it should be clarified that the time scale is ultimately defined by Earth’s rotation—a day). This retardation is a result of the increase in the inertia of matter which gravity effects. Likewise, lower intensities of energy reduce the capacity for motion—less inertia and less acceleration suffered. In general, the density of matter and the intensity of energy have a reciprocal effect on the event-ness of space.

While Einstein’s General Relativity describes gravity as distortions of space-time, this alternate model describes gravity as a force that distorts events in space. In synopsis: not gravity as distortion, but distortion by gravity. Since an increase in gravity increases the inertia of matter, motions require more; which is why high-density matter (akin to black holes) has small event-ness (resulting in the shrinking of space and the slowing down of time). However, higher force fields impinge greater accelerations, so that motions speed up faster in shorter event-space.

The distinction might seem irrelevant, but the implications are considerable. Space is the volume within which a dynamic system is described, defined by the propagation of matter and energy (the events themselves). The amount of matter and energy within the volume defines the intensity of both the gravitational and electromagnetic fields. As fields, these forces affect the manner of interaction of every particle and its energies. Just as larger electromagnetic fields increase inductances, which in turn increase flow, so larger gravitational fields increase inertia and thus increase acceleration. Systems acting under larger gravity fields will have greater inertial resistance than those in gravity fields of lesser potency; but things will accelerate faster.

The distortions caused by gravity may no longer be seen as physical alterations in the hypothetical four-dimensional curvature of space-time, but as alterations in the rate at which events take effect. Time is a scalar by which the periodicity or swiftness of the events (motions) in a system is gauged, and may be referenced against similar events under different gravitational and electromagnetic fields to discern a difference.

As a consequence of these distortions, relative measurements are also affected. Electromagnetic energies may shift in frequency or suffer gravitational lensing as a result of a gravitational field. Since both the distance of space and the rate of time are scalar properties defined by measurements of light, both space and time consequently vary according to the severity of gravity in an event-space.

The term event-space itself is employed as a means of differentiating the severity of gravity from one region of space to another. It has an additional significance in that it attests to the dynamism of the Universe. The rule is simple: a space with a high density of matter will have a more compressed event-space.


Likewise, there should be no appreciable difference for matter between the effect of gravity and acceleration by any other force. This should be intuitive. Gravity, being a force, accelerates matter—the degree of acceleration depending on the amplitude of that force (which is proportional to the density of matter). But other forces may accelerate matter too, so it can be deduced that all forces alter event-ness as well.

It is certainly appropriate at this point to represent event-space graphically. The figure below illustrates two light waves of the same wavelength traversing the same distance; the only difference between the two spaces is their event-ness, the shorter one being compressed by the greater mass density of the defined space. It would certainly appear as if light traveling in a longer event-space travels faster than light in a shorter event-space, but this is not so. Both rays traverse the same “distance” at the same “speed”. The logical resolution to this apparent paradox is that gravity (proportional to the density of matter) alters the wavelength and frequency of light. Since the velocity is determined as the product of these two properties (see CHAPTER IV: Motion), constancy is maintained. Relatively, both frequency and wavelength remain constant; distance and time are defined in reference to these properties.
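A minimal sketch of this constancy, assuming an arbitrary compression factor k for the event-ness, shows the product surviving any such distortion:

    c = 299_792_458.0              # m/s
    wavelength = 500e-9            # m, a reference wave
    frequency = c / wavelength     # Hz

    for k in (0.5, 1.0, 2.0, 1e6):  # arbitrary event-ness factors
        # gravity scales the wavelength by k and the frequency by 1/k
        assert abs((wavelength * k) * (frequency / k) - c) < 1e-6
    print("lambda x f = c in every event-space")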

Figure 16: Alteration of Event-ness.

Again, distance and time are scalar, established by an arbitrary coordinate system from which to reference. Notice that dimensions are determined by measuring the properties of light. But as light moves from one event-space to another it suffers a deformation, the transition being undetectable (the difference is established only by inference, not by direct reference).

For Relativity’s sake, suppose that two traversing lights in different event-spaces could be referenced by superluminal information. It would indeed be demonstrated that the light in the longer event-space moves farther. But this is only apparent if both event-spaces are referenced from another, non-event-space, such as a mental image.

But taken separately, there should be no discernible difference in the definition of “time” between two different event-spaces, since time is a scalar. I explain: suppose time is defined by co-referencing a clock with a quartz crystal, and two identical sets of clocks and crystals are placed in the two event-spaces. In both event-spaces the ratio of the clock’s mark to the wave count of the crystal remains the same. A second is a second is a second. That is, a person will perceive the passing of “time” (by observing a clock) at the same apparent rate regardless of how strong or weak the gravity field is, even though the event-spaces are different and things do take place at different rates. This difference in rate is only apparent through outside reference.

But not quite! Take, for instance, the classical experiment of two identical clocks. Both clocks are set at the same time. One is taken aboard a rocket to orbit Earth, while the other stays on the ground. The geometry of astronomical objects imposes an onion-shell distortion on space, of gradually longer event-spaces. The orbiting clock suffers a lesser effect from Earth’s gravity field, so its atoms move much “swifter” by comparison. Twenty-four hours registered on the ground-based clock might be measured by the orbiting clock as a day and a second (a hypothetical difference). In reality, it was one time period, a day—a single rotation of Earth around its own axis. The larger event-space of lesser gravity simply permits chronometers to tick more swiftly, thus marking more “time”. But what time is it, Earth’s time or Space’s time?
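A toy rendering of this bookkeeping (my own sketch, using an arbitrary hypothetical offset of one part in 86,400) makes the point that each clock counts its own ticks while the period itself, one rotation of Earth, is the same:

    TICKS_PER_DAY = 86_400                # a clock marks one tick per second

    ground_eventness = 1.0                # reference event-space at the surface
    orbit_eventness = 1.0 + 1.0 / 86_400  # hypothetical: swifter by one part in 86,400

    ground_ticks = TICKS_PER_DAY * ground_eventness
    orbit_ticks = TICKS_PER_DAY * orbit_eventness
    print(f"ground clock: {ground_ticks:.0f} ticks")  # 86400, "a day"
    print(f"orbit clock:  {orbit_ticks:.0f} ticks")   # 86401, "a day and a second"
    # locally, each clock still reads one second per second of its own crystal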

Event-space does have its simplifications, which directly contradict Special Relativity’s complications. There are no particles of light, the photons, with which to deal in relative motion, only light as a continuous field of energy. According to the field definition of waves, velocity is the product of the apparent wavelength and its reciprocal, the frequency; regardless of whether these are affected by the relative motion of the observer, the resulting velocity will remain a measurable constant, since light is a field of waves. Two lights approaching each other would result in a relative interference of frequencies, even though they are still moving at light speed. They interfere according to phase alignment. And as the two interfering waves continue their separate propagation, each re-emerges unaltered as if they had never concurred at all (such is the physics of all waves).

Light, being the standard reference frame, continues traveling through whatever event-ness at the same velocity. Super-compressed event-space does not imply that light would hardly get through, or that it could traverse such event-space as if it were a Hawking wormhole. A light travels in a second what a light travels in a second, regardless of whether its event-space is stretched or compressed a millionfold.

Interestingly, the model does not impose limits on the velocity an object can achieve. Though it remains extremely difficult for a mass to accelerate to velocities like those of light waves, there seems to be no physical limitation. As for what keeps particle accelerators from attaining such speeds: as the magnetic field increases, the event-space shortens and the inertia increases; the trajectory being circular, the deformation appears to an outside observer as a Relativistic-like limitation.

The model also implies that as a massive object moves through space it compresses event-ness ahead of it while stretching it behind its trail. If a massive accelerated object approaches an observer, its image would be severely blue-shifted as a result of the accumulated compression (more on this accumulated effect in the next chapter).

The event-space model resolves a major controversy of Special Relativity concerning the definition of the rest or inertial state. According to Relativity, motion is a consequence of the reference frame chosen relative to the objects being observed, so that an object is considered at rest if the reference frame elected is fixed on it. And this must be so in order for the Relativity transformations to take effect. In Relativity, there should be no relativity effect if no motion is observed (the velocity being zero). This Relativistic definition is inconsistent with the Galilean or Newtonian definition of the state of rest, which holds that a body at rest is one exempt from any force that could avert its uniform motion, regardless of whether this uniform motion is observed from the chosen reference frame. Simply stated, an inertial motion does not suffer acceleration.

Special Relativity carries an aspect of observer-participant reality. Choose any frame of reference and light strangely adjusts itself so as to maintain constant speed. But again, this is a result of visualizing light as a photon “particle”, which must suffer the relativeness of position and speed. With waves, we deal only with apparent frequency and wavelength.

Instead, event-space is concerned only with accelerations, regardless of whether gravitational or electromagnetic forces effect them; these in turn alter the event-space by which the definition of distance and time is made. This is congruent with Newtonian mechanics and Maxwellian electromagnetic relativeness in describing motion.

If gravity is not the curvature of space-time but rather something that alters event-space, gravity must consequently be a force akin to electromagnetism. As gravity waves propagate through space, they shorten the event-ness of that space. A mass follows the path of least resistance by tending towards shorter event-space. As a wave, gravity’s amplitude augments with the amount of mass—its intensity being proportional to the density of matter; and, as with any wave, its amplitude diminishes quadratically with distance.

Einstein suspected gravity waves would never be observed. Detecting gravity waves means detecting the deformation of space; as indicated by Russell Ruthen of the magazine Scientific American: “Detectors one kilometer long… would change by less than a billionth of a meter. That is a distance about 1,000 times smaller than the diameter of an atomic nucleus.”xxviii On account of quantum uncertainty alone the whole experiment can be thrown out the window. Under such conditions, Einstein’s suspicion seems almost irrefutable. Yet such machines are being constructed at the present time, with costs reaching over two hundred million dollars.♣ The scientists constructing such apparatus expect to observe distant black holes as well. How they are to focus gravity waves is a mystery to me.

However, an experiment performed by Samuel A. Werner (1977) found that changing the interferometer’s orientation relative to the Earth’s gravitational field could alter the interference pattern formed by neutron diffraction.xxix Such an experiment seems to confirm the existence of gravity waves.

What produces gravity waves? If matter is not a satisfactory answer, I cannot find a better explanation. If the idea of fields that define a path of least resistance does not suffice, being disputed by a virtual force-carrying particle, then the question remains equally unanswerable for electromagnetism, nucleic decay, and Pauli’s exclusiveness.

I emphasize that, from what I can recognize, without employing extravagant particles and impenetrable logic, the event-space model agrees with every experimental observation of Quantum, Relativistic and Classical physics alike. Unfortunately, again, my mathematical abilities betray me, and the model risks standing without mathematical support, which would not only establish the model as logical but also provide some tools for predictability. I must admit that this concept of event-space is a little muddled and can lead to relativistic-like discrepancies.

♣
Laser Interferometer Gravitational-Wave Observatory (LIGO).

It should be simple: time is a scalar. Time is an established ratio between two harmonious motions. This is referenced against the rest of the environment, as defined by the position and propagation of its energy and matter. Since energy and matter interact according to their positions relative to other particles, the force fields generated effectively determine the scale of both distance and time.


CHAPTER X: Effect
“I’d much rather see a theory forbidding wormholes.”
–Stephen Hawking

“It’s impossible that the Big Bang is wrong. Perhaps we’ll have to make it more complicated to cover the observations, but it’s hard to think of what observations could ever refute the theory itself.” –Joseph Silk

The model of Event-space was in fact what initiated this work’s entire thought process. Concocted as an alternative explanation to Einstein’s relativity theories, with the premise that time is not a physical constituent of the Universe, the model implicated the many other ideas I have presented so far, most importantly the antithesis of the Second Law of Thermodynamics: the tendency towards balance through the path of least resistance. This Economy of Energy Principle explains gravity not as a curvature of space-time, but as a field in the full classical sense. It was beyond all my expectations that the same principle could explain the physics of atoms, in view of all the controversial phenomena described by Quantum. The model might be so simplistic, after all, that I expect physicists will reject it on the grounds that though it is much simpler, it is not deterministic, since there is still plenty of room to accommodate Chaos and its chancy nature. I do not know how else to emphasize how this one law, the tendency of the system to respond dynamically to its own configuration, allows for the development of intrinsically chaotic yet organized structures.

There are five principal definitions for the Event-Space Model:

1. The Universe is a dynamic, self-configuring, non-linear [chaotic] system.
2. Space is defined by the propagation of matter and energy. The density of matter and energy defines time.
3. The presence of matter and energy generates fields of forces.
4. Fields of forces are causes for change.
5. And changes take place through the most economic means.

So the Laws of Nature, as simplistic as they are, created the wonderful formations
throughout the cosmos, with interminable variations.
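The fifth definition, in particular, lends itself to a minimal computational sketch (mine, and deliberately naive): a system that always takes the locally most economic step settles into the nearest state of balance:

    def relax(energy, state, step=0.01, iterations=1_000):
        """Greedy descent: at each turn, move to whichever neighboring
        state has the lowest energy -- the path of least resistance."""
        for _ in range(iterations):
            state = min((state - step, state, state + step), key=energy)
        return state

    # example: a particle in a simple quadratic "field" with its minimum at x = 3
    final = relax(lambda x: (x - 3.0) ** 2, state=0.0)
    print(f"settled at x = {final:.2f}")   # ~3.00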

In the style of Feynman’s quantum hieroglyphs, the following schematic representation of the interaction between energy, matter and force is given. I do not want to place much emphasis on the diagram; it is merely a pretty picture created without much intricate thought.


Figure 17: Interaction Diagram between Force, Energy and Matter.

The Universe is commonly defined as an ever-expanding volume outside of which nothing exists; that is, the Universe is everything that exists and nothing is outside.

The idea that the Universe suffers a dynamic expansion came from George Gamow’s interpretation of Edwin P. Hubble’s discovery of galactic spectral red shift, stating that the farther away galaxies are, the greater the shift, thus the faster they are receding. Since the recession is observed in every direction, it is inductive to believe that the Universe, at its origin, must have been immensely packed—we are conveniently observing from the center, from the origin, the point from which all is expanding away. A self-impregnated universal ovum from which all the cosmos formed—an egg of infinitely dense light is the origin of all. Such a state could be extended to the extreme, compressing the Universe to an infinitely dense point, which would be so energetic that all matter would have been converted to energy. This state is known as supersymmetry, and it agrees with the Second Law of Thermodynamics, in that the present-state universe is more entropic. Additionally, it can be stated that at the moment this supersymmetry was violated, time and space were defined. This created an explosion that set the universe’s expansion (at rates which vary according to the model adopted), during which time some mysterious influence created variations that led to the formation of the Cosmos. Edward Harrison and Yakov B. Zel’dovich are among many independent researchers looking for possible explanations as to how galaxies appear from the super-symmetric primordial universe, with loops of reasoning supporting contradictions to the Second Law as the result of minute fluctuations in the density of matter (as has recently been ‘observed’ in the background radiation). Should the Nature of Things be completely entropic, could there exist order?

As for the future state of the Universe, scientists hope that sufficient matter exists to eventually slow the expansion down and cause it to contract. Let us assume that the Universe is to contract. They find it reasonable that all would end in the hypothetical “singularity” state of the Big Bang model; everything would fall back to its original location—this is known as the Big Crunch Theory. Consequently, from the moment the Universe stops expanding and commences its reduction, entropy would decrease, violating the Second Law of Thermodynamics. If such were the case, it would be very interesting to hear from theorists what would happen at the moment expansion becomes zero. Could it be the end of causality and the beginning of a new universe? It would be absurd to suppose that history would reverse itself.

Contrariwise, if the contraction is to avoid any violation of the Second Law (in agreement with the Big Bang Theory), the final conglomeration would have to be of infinite disorder (whatever that means); “singularity” would then hardly be an adequate term to describe such a state. It could not be presumed that such disorder, a maximum entropy of infinite scarcity, would be organized. Consequently, if a model of the Universe’s end is not to defy the Second Law, it should be that of an open universe (endless expansion).

An eternal universe, being a rather distressing idea, apparently rids us of a purpose for our existence. A bounded universe pleases the faithfulness of scientists, appeasing their irrational fear of the infinite. These wishes for a closed universe, although not expressed too strongly, have been excited by the proposition of hypothetical “dark matter”. If such a particle exists, it could account for the exorbitant deficiency of mass now observed, allowing a contraction. This dark matter—which has been speculatively attributed to neutrinos, axions, WIMPs and photinos (maybe other particles have been concocted too)—is made responsible for holding galaxies together as well, since there appears to be insufficient mass for them to be, or to be the way they are, and to form the way they form.

Observations demonstrate that the stars revolving in the galactic disk possess escape velocities (relative to the mass observed), so that galaxies should have dismantled long ago. So too is dark matter held responsible in another astronomical mystery, the formation of galactic clusters. It has been calculated that our galaxy belongs to a group of galaxies that are approaching one another, towards a so-called Great Attractor. Not enough mass is observed amongst all our neighboring galaxies for them to pull each other close; thus dark matter.

To my relief, Hannes Alfvén has proposed an alternative theory to the Big Bang, known as Plasma Cosmology. I say relief because, until I read the book by Eric J. Lerner that popularizes this theory, there was no alternate solution to the dark matter dilemma.

When electric currents pass through plasma, dancing filaments of light are formed. Such a phenomenon is responsible for auroras as well as for the formation of solar flares and nebular gossamers. As electric currents flow through plasma, cylindrical magnetic fields are formed. These attract other currents flowing in the same direction, twisting and converging into larger vortices. “Given enough time, currents and filaments of any magnitude, up to and including supercluster complexes, could form—in fact, must form.”xxx These provide sufficient force to shape galaxies into the characteristic spiral. “If the speed of gas rotating around the galactic center is plotted against its distance from the center, the curve first rises rapidly but then levels off. However, if the disk-shape galaxy is held together by gravity alone, the speed should fall steadily as distance increases. As in the solar system, outer planets move more slowly than planets close to the sun… the flat rotation curve emerges quite naturally in a galaxy wholly governed by electromagnetic fields.”xxxi Filaments have been observed emanating from the center of our galaxy, proving that currents the size of galaxies do form. Additionally, “in 1989, a team of Italian and Canadian radio astronomers detected a filament of radio emissions stretched along a supercluster, coming from the region between two clusters of galaxies.

Electrons trapped in a magnetic field emit radio radiation, so their finding provided indirect evidence of a river of electricity flowing through the empty space.” If electromagnetism, being so much stronger than gravity, is taken into consideration, the formation of galaxies and superclusters is possible without the need for dark matter.

It is this intergalactic plasma, as well, that is responsible for the smoothness of the microwave background radiation. “High-energy electrons spiraling around magnetic field
lines within filaments, like any accelerated particle, generate synchrotron radiation—in
this case, of radio frequencies. And Kirchhoff’s law, a fundamental law of radiation,
states that any object emitting radiation of a given frequency absorbs photons from the
background and then re-radiate them in another, random direction, they will in effect
scatter the radiation into a smooth isotropic bath, just as fog droplets scatter light into a
featureless gray.” Observation on the apparent radio brightness of galaxies suggests
that waves at this frequency are being absorbed or scattered by the intergalactic
medium. “Astronomers had observed that as one looks farther out into space, the
number of radio sources increases much more slowly than the number of optical
sources, and thus the ratio of radio-bright to optically bright sources decreases sharply.
For example, a distant quasar is only one-tenth as likely to be radio-bright as a nearby
one. Cosmologists have attributed this to some unknown mysterious process that
somehow caused the early, distant quasar, to be less efficient at producing radio
radiation, even though their optical and X-ray radiation is no different than that from
present-day, nearby quasars.” (Though on page 148 of the same book, the author
states: “As one looked outward in space and backward in time, there were more and
more radio sources…” Though I might be taking the phrase out of context, it is certainly
confusing.) Nevertheless, intervening “thickets” of magnetic force-free filaments resolve
the mystery of such an isotropic radiation. Martin Reed has postulated that the currently observed production of helium is sufficient to account for all the microwave background energy. Electrons in the galactic filaments would repeatedly absorb and emit this microwave radiation, effectively producing the isotropic radiation observed.xxxii

With all this in mind, it may be concluded that the current model of the evolution of the Universe must change drastically.

Under the model of event-space, the volume of the Universe would expand to reduce the density of energy. An accumulative distortion of event-ness would result, perceived as an increased spectral shift as the distance of the objects observed increases. No need here for four-dimensional non-Euclidean space or Hawking’s imaginary time. The increase in the shift would be the accumulated result of the continual distortion suffered by light as it crosses ever-longer event-space (plus the Doppler shift caused by the motion of emitter and observer). The more elongations suffered, the larger the difference of event-ness the light has traveled through from origin to destination. Under such a configuration, Hubble’s equation of distance would still hold, but with a slight difference in interpretation. A greater spectral shift to red would not be a consequence of a greater velocity of the more distant object, but a result of the distance alone. The argument is not geared towards the disproval of Hubble’s equation, but rather towards presenting a much simpler dynamic of the cosmos, one of more inertial expansion. Consequently, the universe is dynamically smaller and statically older than previously thought by Hubble’s equation.
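A hedged sketch of this reinterpretation (the per-distance rate below is an arbitrary illustration, not a fitted constant) shows the shift compounding with path length alone, with no recession velocity involved:

    import math

    RATE = 2.3e-4   # assumed fractional elongation per megaparsec of event-space crossed

    def accumulated_redshift(distance_mpc):
        # each stretch of path elongates the wave by the same small fraction,
        # so the distortion compounds exponentially with distance
        return math.exp(RATE * distance_mpc) - 1.0

    for d in (100, 1000, 4000):   # Mpc
        print(f"{d:>5} Mpc -> z = {accumulated_redshift(d):.3f}")
    # at small distances z ~ RATE x d, reproducing a Hubble-like linear law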

The mapping of millions of galaxies by Peebles, and of clusters of galaxies by Tully and Fisher, shows how they form filaments, which contradicts the homogeneity assumed by the Big Bang model. The formation of such filaments requires at least four times longer than the age of the universe as determined by the Hubble Expansion. This is but one of many observations that calculate the age of the Universe to be much greater than that computed from the Hubble Constant. For example, two diametrically opposed galaxies as observed from our own galaxy, each at more than five thousand million light-years away, would have taken almost eight times longer than the age of the Universe to separate themselves from a common origin, since galaxies apparently do not move much faster than a thousand kilometers per second (which might be the reason behind the INFLATIONARY UNIVERSE SCENARIO; see CHAPTER XI: Theory).

It is certainly a disconsolation to some that the reasonable implication of the event-space model is that the Universe needs no beginning and no end.

I will even go as far as to question the conjecture that gravity would overwhelm the forces of electromagnetism (referring to the exclusive definition of whole atoms) and nuclear exclusivity. Chandrasekhar’s black hole model is based on the assumption that the attraction of all nuclei in a stellar object over two and a half times the mass of the Sun would collectively overpower the exclusiveness of individual atoms. I would agree that upon thermonuclear deficiency a star might collapse, but only to the extent permissible by atomic exclusiveness; such a collapse could generate sufficient energy to allow for a nova explosion, for instance, thus evading the path towards black hole compression.

Theoretically, an object could become so massive that electromagnetism and nucleon
exclusivity could no longer hold back the force of gravity. Such objects
are named black holes. This is, by far, the best mathematically described
hypothetical object in physics; supported by indirect observations, it has a
probability of existence held so high that it is virtually accepted by the entire
scientific community. There are, however, some troubling facts that I have thought of
which should discourage this unanimity.
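
For scale, the critical size this definition invokes is given by the standard Schwarzschild formula, r_s = 2GM/c^2; a minimal sketch for one solar mass:

```python
# Schwarzschild radius r_s = 2GM/c^2: the size below which not even
# light escapes, computed here for one solar mass.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

print(f"one solar mass: r_s = {schwarzschild_radius(M_SUN) / 1000:.2f} km")
# ~2.95 km, so the 1-km sphere considered below lies well inside this limit.
```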

It is generally assumed that black holes are non-rotating objects; Roy Kerr is credited
with a set of solutions describing rotating black holes.xxxiii I was perplexed by what this
implied. Take the Sun's current spin and hypothetically collapse it into a
black hole: with its angular momentum conserved, its angular velocity would increase
enormously. It is as if, sitting in a rotating chair, one sets oneself spinning with legs
stretched out; by pulling them in, the angular velocity increases considerably. And that is
only at human scale; imagine then contracting a body as big as the Sun into a diminutive
sphere. To resolve my suspicions, I performed the calculations. Contracting the Sun
to the density of a neutron star, that is 5x10^26 kg per km^3 (about a billion tons per cubic
inch), or about a 10 km radius (six and a half miles), the rotational velocity at the equator
would be 8.99x10^4 kilometers per second. Contracting the Sun even further, to a radius of
1 km, at which its density would be that of a black hole, the equatorial speed (which scales
as the inverse of the radius) would have multiplied tenfold, to roughly 300% the speed of
light. I was shocked by this result and figured my calculations wrong; however, I have since
checked my work and confirm it as correct. Consequently, I had to reread the source of my
information concerning Roy Kerr's solutions, the book A Brief History of Time by Stephen
Hawking, and realized that Kerr had employed relativistic formulas. So I can only assume
that physicists are aware of this superluminal violation, but apply Lorentz's transformation
formulas to the severely curved space-time within the black hole to accommodate the
numbers within the law. How exactly this is performed, I cannot imagine, for the
transformation formulas become undefined (imaginary numbers) for speeds greater than
light's.
This superluminal rotation is a tremendous flaw in the model of black holes. It is
reasonable to believe that the Sun was created under circumstances very similar to the
trillions upon trillions of other stars in the Universe, so that hardly any would have zero
spin. What is more, the larger a celestial body is, the faster it rotates, so stars large
enough to become black holes would spin considerably faster than that calculated here for
our Sun. Unless an explanation is given that would allow a spinning object to lose
angular momentum, the theory of stellar collapse into dense black holes should be
revised. A possible solution might be that, as the density of matter increases, the
increase in inertia due to the intensified gravitational field would slow the
rotation of the star; but this comes as an unlikely hypothesis.
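
The spin-up follows from conservation of angular momentum alone. A minimal sketch, with assumed round solar inputs (a 25.4-day rotation period; the text's own inputs evidently differ slightly, hence its 8.99x10^4 km/s at 10 km), reproduces the superluminal conclusion:

```python
import math

# Spin-up under conservation of angular momentum.  For a uniform sphere
# L = (2/5)*M*R^2*w, so w grows as 1/R^2 and the equatorial speed v = w*R
# grows as 1/R.  Solar inputs are assumed round values; slightly different
# inputs yield the text's 8.99x10^4 km/s figure at the 10 km radius.
R_SUN_KM = 6.96e5               # solar radius, km
PERIOD_S = 25.4 * 86400.0       # assumed solar rotation period, s
C_KM_S = 2.998e5                # speed of light, km/s

v_sun = 2 * math.pi * R_SUN_KM / PERIOD_S   # ~2 km/s at the equator today

for r_km in (10.0, 1.0):        # neutron-star radius, then the text's 1 km
    v = v_sun * (R_SUN_KM / r_km)           # v scales as 1/R
    print(f"R = {r_km:4.1f} km: v = {v:.2e} km/s ({v / C_KM_S:.1f} c)")
```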

The contraction of any gas raises its temperature and pressure, naturally tending to
resist compression. Extended to the extreme, the state of matter at high
temperature and pressure is plasma. There are, however, various other naturally opposing
forces which tend to impede the increase in pressure by gravity: the release of energy
from thermonuclear reactions, which raises temperature, and the rotational momentum,
which enlarges volume.

Hypothetically, provided sufficient mass to overpower even these forces, there is still the
particularity of atoms to resist further reduction of volume through exclusivity.

Gravity is very weak; the intensity ratio per atom between gravity and electromagnetism
is 1:10^40.xxxiv Yet the accumulation of matter can exert so much pressure that it could
theoretically heat plasma beyond particularity, thus converting mass into energy. In such
a case it would only increase temperature, so as to resist the collapse of matter under
gravity even more. In any case, such a state of energy would hardly constitute a "black
hole".
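
The quoted 1:10^40 ratio is easy to verify, since both forces obey an inverse-square law and their ratio is therefore independent of distance; a minimal sketch:

```python
# Electric versus gravitational attraction between a proton and an electron.
# Both forces fall off as 1/r^2, so the ratio is independent of distance.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
K_E = 8.988e9      # Coulomb constant, N m^2 C^-2
E = 1.602e-19      # elementary charge, C
M_P = 1.673e-27    # proton mass, kg
M_E = 9.109e-31    # electron mass, kg

ratio = (K_E * E**2) / (G * M_P * M_E)
print(f"F_electric / F_gravity = {ratio:.2e}")   # ~2.3e39, of order 10^40
```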

However, the collapse of matter in astronomical objects cannot be entirely avoided.
Since matter tends to conglomerate, a galaxy could conceivably collapse into a billion-
solar-mass object. Such objects would certainly push the limits of exclusiveness towards
collapse, turning mass into energy. I have named such a state of liberation from
particularity luciferous: the compromise boundary of over-energized and super-compressed
matter.

The ultra-gamma energy would radiate, if only to facilitate the conversion of more mass
into energy, thus reducing the gravity pressure. Another path would be for the energy
to assemble into antimatter, which would annihilate with matter back into energy,
effectively reducing the amount of matter in the system as well, only to become light.

All in all, the collapse of stellar objects into "black holes", which so strongly supports the
space-time curvature of General Relativity, can be abandoned.

The predominant theory of galaxy formation has it that, from an ellipsoid cloud of proto-
suns, bands form in a spiral pattern as an expression of gravity waves alone.

An alternate theory, proposed by Stephen R. Holland, places the spiral arms before the
ellipsoid cloud. That is, it reverses the order of formation, so that an ellipsoid galaxy is
not the earliest manifestation but the most mature. The sequence of pictures below
illustrates this novel approach. Current standing theories have no explanation for the
almost linear galaxies, since in them spiral arms come only after an ellipsoidal formation,
by a phenomenon known as gravity pressure.

Figure 18: Galactic Development.

After the last picture to the right in the above series, a galaxy reaches a state of
electromagnetic homogeneity; at that point its prominent inertial vector points to the
center from any point in the sphere, so it begins to collapse under gravity alone.

At some point in its core, matter begins to transform into energy. That energy (very high
gamma) cannot easily escape from all the infalling matter (billions of stars). The energy
may be expressed as antimatter, which would reduce the mass pressure. Eventually an
eruption will occur, liberating a long stream of matter, like that observed in M87 (NGC
4486). This jet could again begin to twist in the electromagnetic current it creates.

This in turn agrees with Hannes Alfvén's galactic model, so elegantly presented (in
scientific terms) by Eric J. Lerner in his book The Big Bang Never Happened (Vintage
Books, 1992, ISBN 0-679-74049-X).

To make the Universe truly eternal, its end must be its beginning. If and when the
Universe reaches a state where all energy is homogeneously distributed and all matter
rests at the lowest energy state, the presence of ortho-matter would result in the
alleviation of that state, and ortho-integration stands as the path of least
resistance towards universal balance. Change does not start from a single point but
happens everywhere as one.

So an energy field, regardless of how weak, may fold in order to form ortho-matter
corresponding to the neutron. Static and isolated, it would decay into a simple
energized and ionized pair of ortho-proton and ortho-electron. Their presence will in
turn generate two forces: ortho-gravity and ortho-electromagnetism. The ortho-cosmos
would eventually evolve into a new universe; an eternal cycle like yin and yang, a
balance of opposites.

In the words of Hannes Alfvén: "There is no rational reason to doubt that the universe
has existed indefinitely, for an infinite time. It is only myth that attempts to say how the
universe came to be, either four thousand or twenty billion years ago." Additionally:
"Since religion intrinsically rejects empirical methods, there should never be any attempt
to reconcile scientific theories with religion. An infinitely old universe, always evolving,
may not be compatible with the Book of Genesis. However, religions such as Buddhism
get along without having any explicit creation mythology and are in no way contradicted
by a universe without a beginning or end. Creatio ex nihilo, even as religious doctrine,
only dates to around AD 200. The key is not to confuse myth and empirical results, or
religion and science."

And so this has been my attempted explanation of the nature of the universe. By taking
time as an abstraction, the Second Law of Thermodynamics likewise as an appreciation
and not an inherent tendency, and by explaining gravity and quantum mechanics as fields
of least resistance, in equivalence to electromagnetism, Nature as a whole can be
explained in classical yet chaotic terms, using statistics as a tool to determine the most
probable behavior.

Reviewing what has been stated, four forces account for every phenomenon: gravity,
electromagnetism, nucleic stability, and Pauli's Exclusion Principle (the particularity of
matter). Each interacts with the others in a play of gods, forming the Cosmos as it is.

My Theory of Everything:
-1. Force is the result of differences
0. Matter and energy—the same
1. A tendency for balance


CHAPTER XI: Theory

“It’s disturbing to see that there is a new theory every time
there’s a new observation.” –Brent Tully

Here follows a series of proposed theories that I find metaphysical. The concoction of
so many outrageous theories during the last few decades is indicative of the
uncertainties and inconsistencies that established theories suffer in the face of recent
discoveries.

ANTINEUTRINOS
While most of the antimatter produced in that well-balanced explosion that formed the
Cosmos out of a “super-symmetric” state has since been annihilated, much of the
antineutrinos remain, as they, like their counterparts, interact very weakly among
themselves and with other particles (plus the fact that stars produce antineutrinos,
theoretically as a byproduct of helium fusion). What exactly an antineutrino is, is hard
to say. Originally, the idea of antimatter was that of particles which held the same
characteristics as normal particles except for charge, which was reversed. Neutrinos
interact only through the weak nuclear force, not electromagnetism, so they lack charge.
The definition of antimatter was therefore broadened to encompass other particles, so
that they too possess corresponding antiparticles. This was achieved by proposing
additional symmetries: parity and time. Parity describes the left- or right-handedness
of an object’s shape or rotation, while time accounts for the direction of its action in
chronological order. If neutrinos are thus the byproduct of beta decay, antineutrinos
are consequently responsible for reverse beta decay.

BABY UNIVERSES
(See also INFLATIONARY UNIVERSE SCENARIO, below.) For this I will quote E.
Lerner: “From [the efforts of postulating a quantum gravitational theory] came the most
bizarre theoretical innovation of the eighties—baby universes—pioneered by Stephen
Hawking. At the scale of 10^-33 cm, less than one-million-trillionth of a proton’s diameter,
space itself is, according to this idea, a sort of quantum foam, randomly shaping and un-
shaping itself; from this, tiny bubbles of space-time form, connected to the rest by narrow
umbilical cords called wormholes. These bubbles, once formed, then undergo their own
Big Bangs, producing complete universes, connected to our own only by wormholes
10^-33 cm across. Thus from every cubic centimeter of our space some 10^143 or so
universes come into existence every second, all connected to ours by tiny wormholes—as
our own universe itself emerged from a parent universe. It is a vision that seems to beg
for some form of cosmic birth control.”xxxv Apparently the model is very attractive to
theoretical physicists despite its absurdities, because they keep working at it, even after
fifteen years without a single contribution to science. Oh well, except of course for the
COBE results, for which everyone takes credit. And what could make this theory more
conceptually insipid? Theoretical physicists should try some fractal mathematics to truly
reach the ludicrous.

CENTRIFUGAL FORCE PARADOX
Black holes are particularly attractive to the minds of theorists, as they tend to support
General Relativity very nicely by demonstrating space-time curvature at its severest. They
also provide a series of implications, due to the extremity of their relativistic state, which
border on the metaphysical. For instance: since gravity affects light, the field around a
black hole could be so intense that at a particular distance, known as the event horizon,
light would curve so much it could not escape; thus the name black hole. According to
Abramowicz and Lasota, light traveling parallel to the surface of the black hole at three
times (3X) the radius of the event horizon will bend 45 degrees, while at one and a half
times (1.5X) it would orbit. “[We] realized that motion along the path of a circular ray
appears to be so acutely paradoxical because it is difficult to accept the fact that although
this light ray is really circular, it is also, in a certain sense, perfectly straight.”xxxvi The
authors go on to claim that such optical geometry denotes physical geometry, so much
so that a ship traveling along such an orbit would not feel “centrifugal force” as it is “in a
certain sense” traveling straight. Below this orbit the “centrifugal force” would be
reversed in direction.
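
For concreteness, the circular-light orbit quoted at one and a half times the event-horizon radius is the standard photon sphere, r_ph = 3GM/c^2; a minimal sketch for a one-solar-mass hole:

```python
# The circular light orbit ("photon sphere") sits at r_ph = 3GM/c^2,
# i.e. 1.5 times the Schwarzschild radius r_s = 2GM/c^2; one solar mass here.
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

r_s = 2 * G * M_SUN / c**2
r_ph = 1.5 * r_s
print(f"r_s = {r_s / 1000:.2f} km, circular light orbit at {r_ph / 1000:.2f} km")
```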

DECOHERENT HISTORIES
A readjustment of Hugh Everett’s “many worlds” interpretation, originally devised to
describe the universe through quantum mechanics by assigning it a wave function and,
with it, a probability of occurrence. Personally, I see no other probability than unity. “For
practical purposes, it does not matter whether one thinks of all or just one of them
[universes] as actually happening.”xxxvii (Murray Gell-Mann and James B. Hartle)

GAMMA-RAY BURST
To date, some few hundred gamma-ray bursts have been observed by the recently
launched Gamma Ray Observatory (GRO), for which no satisfactory theory
exists.xxxviii These bursts, lasting a few seconds in duration, have been detected from
indiscriminate directions in the sky. The idea that the bursts are produced by
astronomical objects, such as pulsars, which beam their energy so that those that
happen to radiate their beam towards Earth can be detected, falls short in that the
sources themselves have not been identified, nor does it account for the relative
consistency in their intensity. “The shortfall of faint bursts,” says Bohdan Paczynski,
“may indicate that bursts did not exist in the early universe or that faint bursts are so
distant that their radiation has shifted to wavelengths longer than those of gamma
rays.”xxxix But no x-ray bursts holding the same characteristics have been recorded. If
the sources are so recent, more problems arise; presumably only large objects such as
galactic centers or black holes could stress matter so much as to radiate in gamma
energy. Such sources could not be too proximate, and regardless of distance, the
beaming produced by such monstrosities would certainly be prolonged for more than a
few seconds, if not at least periodically repeated. Paczynski has proposed that the
bursts could be the result of encounters between neutron stars and black holes.xl Yet
the burst count could have already surpassed the neutron star and black hole
population, especially if the rate at which the two hundred or so have been observed so
far were extrapolated through the age of our galaxy. In other words, the rate at which
these bursts appear exceeds the theoretical rate of neutron star-black hole encounters
billions of times over.

So an alternative theory should be welcomed. I have placed mine here among the other
theories for lack of a better place to mention it. I speculate that these bursts of energy
are the result of spontaneous nucleic decay. Free neutrons have a half-life of about
eleven minutes and naturally would not be expected to exist as such in space, yet there
is a process by which they can form: the electron entrapment of a proton. The
arrangement that would permit such a formation in space would be a very unstable
configuration. The greater stability of helium would propel the conversion of one proton
into a neutron, permitting a conglomeration with the other two protons (see CHAPTER
VIII: Particle, on reverse beta decay). So one of the protons absorbs an electron and
becomes a neutron. Yet this does not necessarily lead towards a helium arrangement of
the nuclei; enough energy must be readily available for the helium configuration to
result. The greater probability is that the neutron loses its hold on the molecule (the path
of least resistance). The remnant hydrogen molecule remains as a stable configuration,
and the neutron decays shortly thereafter into a free proton, a free electron and a pair of
gamma photons, which is what is observed. If one of these bursts were eventually
recorded as being emitted from Earth or the Moon itself, there would be sufficient proof
for the claim.

A more roman-tech possibility (as I name the subject of extraterrestrial life) would be that
these bursts are unnatural.

INFLATIONARY UNIVERSE SCENARIO
(Alan H. Guth, 1980.) The Universe began with an exceedingly rapid expansion, 10^-30
seconds in duration, growing from 10^-28 centimeters to about a meter (an average rate
enormously greater than the speed of light). This “hyper-expansion” was a
consequence of gravity, which for some unexplained reason was repulsive, creating
a “negative pressure”. This was a cold stage, but thanks to the “decay of the scalar-field
matter producing the expansion” the universe heated up.xli Hawking and Hartle further
support this model with the concept of imaginary time. It is supported further still by
Andrei D. Linde and Alexander Vilenkin with a “tunneling” proposal: that the universe must
have tunneled (similar to quantum tunneling) through nothing during this period.
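
That parenthetical can be checked with one division, using only the numbers quoted above:

```python
# Average expansion rate implied by the quoted figures:
# from 10^-28 cm (i.e. 1e-30 m, negligible) to about 1 m in 10^-30 s.
size_m = 1.0          # final size, meters
duration_s = 1e-30    # quoted duration, seconds
c = 2.998e8           # speed of light, m/s

rate = size_m / duration_s
print(f"average rate ~ {rate:.0e} m/s, about {rate / c:.0e} times c")
# ~3e21 times the speed of light: in the standard telling, only space itself,
# not objects within it, is allowed to "expand" that fast.
```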

PROTON DECAY
The Standard Theory predicts the proton’s lifespan to be virtually eternal. The Gauge
Theory attributes to the proton a half-life of more than 10^30 years, but calls for a
collection of twice as many bosons as the Standard Theory, bringing forth a great many
complications. One in particular, the X-boson, responsible for the stability of matter,
would mass over 10^15 GeV.

PULSARS
Not that these are hypothetical objects, but the idea that they are spinning neutron
stars whose powerful magnetic field, askew from the axis of rotation, beams radio waves
that sweep past an observer once per rotation, strikes me as hard to believe. Rather
more acceptable would be to attribute the variation in the radio emissions to spherical
vibrations of the star’s surface. This stellar vibrato could account for the regularity and
the periodicity of pulsars, as well as for some quasi-periodic pulsars recently
observed.xlii (Discovered by Antony Hewish and S. Jocelyn Bell, 1967.)

QUANTUM FOAM
A state of space-time that might exist where the curvature is so high that relativity
becomes wedded to quantum mechanics and wormholes are created. Such regions
would have dimensions of about 10^-33 centimeters; the diameter of even a minuscule
proton, at 10^-13 centimeters, is twenty orders of magnitude larger, so such a region
would be far smaller than even an electron. My belief is that the person who came up
with such a theory definitely has quantum foam in his/her brain. Just kidding, of course!


RADIATING BLACK HOLES
“The Second Law of Thermodynamics has a rather different status than that of other laws
of science… because it does not hold always, just in the vast majority of cases.” This was
Stephen Hawking arguing that black holes tend to violate the second law by reducing
the entropy outside of them, since it is “impossible to see the entropy inside a black hole”
(paraphrased).xliii But Jacob Bekenstein argues that such an increase in entropy inside the
black hole could in fact be observed indirectly: as matter carrying entropy falls into a
black hole, the area of its event horizon goes up.

First, such an argument does not explain how such matter, compressed denser than the
nucleus of atoms, has any entropy at all; it no longer has any capacity for change: there
is no longer heat from atomic vibration, nor electromagnetism for resistance.

Secondly, it does not explain how compressing matter further accounts for an increase in
entropy in the first place.

Nevertheless, Hawking argued against Bekenstein’s supposition in that entropy implies a
temperature, and hence that the black hole must radiate, “in order to prevent violation of
the second law.” Eventually Hawking himself devised a model, in collaboration with two
Russian physicists, which places this radiation not within the black hole but just above the
event horizon. The model takes into consideration Dirac’s negative-energy space, with its
continuous creation and annihilation of virtual particle-antiparticle pairs, of which one
(generally the negative-energy particle, the explanation being that these involve
negative kinetic energy) happens to fall below the event horizon while its partner
remains liberated. Hawking comments at the end: “But what finally convinced me that
the [Zel’dovich-Starobinsky] emission was real was that the spectrum of the emitted
particles was exactly that which would be emitted by a hot body, and that the black hole
was emitting particles at exactly the correct rate to prevent violations of the second law.”

STRANGE STAR
The strange quark, as proposed by Arnold R. Bodmer, allows the formation of stable multi-
quark clusters, beyond the three-quark limit that defines the hadron, thanks to
a slight positiveness in charge (somewhat more than 1/2 that of a proton). A conglomeration
of the sort could unleash a chain reaction if it were to collide with a massive neutron
star, converting down quarks to strange quarks, as these would provide a lower energy state.
Such consumption of a neutron star into a strange star could take less than one minute.
This strange state would occupy less space, since intrinsic quark forces would bind the
whole body. This meta-hypothetical state of matter could be revealed by a half-
millisecond pulsar, since ordinary neutron stars cannot spin that rapidly.xliv

SUPERSTRINGS
Twenty-six dimensions: four for Einstein, twenty-two others curled up into infinitesimal
size; or, for the more economical theorist, make it only six extra dimensions, to which
incomprehensible mathematics is conveniently applied to describe everything. The
underlying idea is that, as Einstein had explained gravity as distortions of the four-
dimensional fabric of space-time, likewise nuclear and chemical forces could have their
own dimensions…

Well, an ancient sage eight millennia past was even more economical in his theory,
proposing the existence of an entity which lacked any dimension yet was all-
encompassing. Maybe Edward Witten should reconsider, or approach the problem from
simpler equations, possibly employing, say, something like a “superdot”; that is, if he
does not like to accept the sage’s proposition.xlv


CHAPTER XII: Imagination

Sometimes in confusion
I felt so lost and disillusioned
Innocence gave me confidence
To go up against reality
-Rush, Circumstances

Now, I close my eyes to dream of improbable things.

If time is but a reference system for motions, and gravity, or any other force acting on a
mass, retards or encumbers motion as a consequence of inertia, then, were it at all
possible to construct a machine that could accelerate for a long “time”, it could then be
possible to extend life beyond what the mass of the planet could provide; that is, a
machine to retard the eventfulness of space further than the effects of Earth’s gravity
do.

What sort of machine could this be? Maybe a spinning carousel, but most likely a rocket.
A spinning carousel would have little use; a rocket instead would accelerate in one
direction, serving as a transport toward any arbitrary destination. Pick the nearest star!
Given sufficient distance, one could accelerate comfortably for an extended period and
reach or exceed Einstein’s Luminal Limitation, while still slowing down “time” due to the
shorter eventfulness.

One year accelerating at one Earth gravity (1g) yields 0.516 light-years of distance and a
velocity of 309,264.5 km/s, which violates Einstein’s luminal limit. Two years at 1g equals
2 light-years at twice the speed of light. Three years equals 4.6 light-years at three times
the speed of light. Four years gets up to 8 light-years of travel. Ten years, and you would
travel over 51 light-years. There are plenty of stars within those limits. It would be much
better to increase the acceleration to at least 2g, which would reduce eventfulness as
well.
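
These figures are plain Newtonian kinematics, v = gt and d = gt^2/2, applied deliberately without the relativistic correction that would keep the coordinate speed below c; the sketch below reproduces them:

```python
# Reproducing the chapter's deliberately Newtonian figures for constant
# 1 g acceleration: v = g*t and d = g*t^2/2, with no relativistic cap.
G_ACC = 9.8           # m/s^2
YEAR_S = 3.156e7      # seconds per year
LY_M = 9.461e15       # meters per light-year
C_M_S = 2.998e8       # speed of light, m/s

for years in (1, 2, 3, 4, 10):
    t = years * YEAR_S
    v = G_ACC * t
    d = 0.5 * G_ACC * t**2
    print(f"{years:2d} yr at 1g: v = {v / 1000:9.1f} km/s "
          f"({v / C_M_S:4.1f} c), d = {d / LY_M:5.2f} ly")
```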


INDEX

A
acceleration, 25
Alan H. Guth, 95
Albert Einstein, 28, 38, 50
Arnold R. Bodmer, 96
awareness, 31
æther, 28

B
Big Bang, 86
Bohdan Paczynski, 94

C
Carl David Anderson, 72
Carlo Rubbia, 66
chaotic systems, 35
charm quark, 64
Conservation of Energy Law, 76

D
David Bohm, 51, 56
Doppler effect, 27
Doppler shift, 28

E
Edward Harrison, 86
Edward Witten, 96
Edwin P. Hubble, 86
electron-proton constituency model, 73
entropy (Chaos), 21
Ernst Mach, 26
Erwin Schrödinger, 47
event-ness, 80
event-space, 79, 85
Exclusion Principle, 66, 73

F
First Law of Motion, 25
force/force-carrying-particle duality, 71

G
Galileo Galilei, 25
General Relativity, 80
George Gamow, 86
gravity, 90
gravity waves, 83

H
Hannes Alfvén, 87
Hendrik Antoon Lorentz, 38
Henri Poincaré, 38
Hideki Yukawa, 72
holography, 60
Hugh Everett, 94

I
imaginary time, 35, 64
Immanuel Kant, 32
inertia, 25
interference patterns, 51, 60
Isaac Newton, 25

J
Jacob Bekenstein, 96
James Clerk Maxwell, 23
John A. Wheeler, 52
John Stewart Bell, 51, 56

L
Le Chatelier's Principle, 21
Lorentz Strange Attractor (Chaos), 22
Lorentz-FitzGerald transformation formulas, 39
Ludwig Boltzmann, 23

M
Mandelbrot set (Chaos), 22
Martin Rees, 88
Max Born, 47
motion, 25

N
neutrinos, 37, 70
Niels Bohr, 47, 50
Norman Mermin, 31

O
opinion, 11
orbitals, 47

P
particle/wave duality, 51, 71
path of least resistance, 23
Paul Dirac, 62, 72
Pavel Cherenkov, 41
photons, 40
pions, 72
Planck's constant, 61
Plasma Cosmology, 87
polarity of light, 53
polarized lenses, 54
positron, 72
Prince de Broglie, 59

Q
Quantum, 47, 76
Quantum Electrodynamics Theory, 62
quarks, 66

R
reason, 11
reverse beta-decay, 76
Richard P. Feynman, 62
Roy Kerr, 89
Rudolf Clausius, 21

S
S. Weinberg, 59
Samuel A. Werner, 83
Schrödinger's cat, 31
Second Law of Thermodynamics, 35
  Chaos, 8, 17, 20, 21, 24, 79, 85, 86, 92, 96
Sheldon L. Glashow, 64
speed of light, 27, 41
Stephen Hawking, 35, 89, 93, 96
Stephen R. Holland, 90
strange quark, 96
strong nuclear force, 72

T
Theory of General Relativity, 41
Theory of Special Relativity, 38
Thomas Young, 51
time, 31

U
Uncertainty Principle, 48, 56

V
velocity, 40
Victor F. Hess, 64
virtual photon, 71

W
waves, 27
Werner Heisenberg, 48
Wolfgang Pauli, 70

X
X-boson, 95

Y
Yakov B. Zel'dovich, 86


i. The Arrow of Time, P. Coveney and R. Highfield, BALLANTINE BOOKS, 1990, pp. 163-165.
ii. Fullerenes, R. Curl and R. Smalley, SCIENTIFIC AMERICAN, Oct. 1991, Vol. 265, No. 4, pp. 54-63.
iii. Patterned Ground, W. B. Krantz, K. J. Gleason and N. Caine, SCIENTIFIC AMERICAN, Dec. 1988, p. 44.
iv. Boojums All the Way Through, N. D. Mermin, CAMBRIDGE UNIV. PRESS, 1990, p. 111.
v. Coveney & Highfield, p. 145.
vi. More Evidence of a Solar Neutrino Shortfall, I. Peterson, SCIENCE NEWS, Vol. 140, Dec. 21 & 28, 1991, p. 406.
vii. Revolution in Physics, B. M. Casper and R. J. Noer, p. 312.
viii. Inconstant Cosmos, SCIENTIFIC AMERICAN, May 1993, pp. 110-118.
ix. Quantum Theory and Measurement, eds. J. A. Wheeler and W. H. Zurek, PRINCETON UNIV. PRESS, 1983, p. 5.
x. N. Herbert, p. 103.
xi. Wheeler & Zurek.
xii. Dreams of a Final Theory, S. Weinberg, VINTAGE BOOKS, 1993, p. 72.
xiii. QED: The Strange Theory of Light and Matter, R. P. Feynman, PRINCETON, 1985, pp. 89-90.
xiv. Interactions, S. L. Glashow, WARNER BOOKS, 1988, p. 148.
xv. Quantum Reality, N. Herbert, ANCHOR BOOKS, 1985, pp. 97-98.
xvi. The Quantum Postulate and the Recent Development of Atomic Theory, NATURE 121, 580-90 (1928); reprinted in N. Bohr, Atomic Theory and the Description of Nature, CAMBRIDGE UNIV. PRESS, 1934, pp. 52-91.
xvii. The Physical Content of Quantum Kinematics and Mechanics, Heisenberg.
xviii. Feynman, p. 132.
xix. SCIENCE NEWS, Mar. 21, 1992, p. 189.
xx. At Last, Neutrino Results from GALLEX, I. Peterson, SCIENCE NEWS, Vol. 141, June 13, 1992, p. 388.
xxi. Glashow, p. 87.
xxii. ibidem, p. 89.
xxiii. SCIENTIFIC AMERICAN, May 1993, p. 30.
xxiv. General Chemistry, H. F. Holtzclaw et al., D. C. HEATH AND COMPANY, 1984.
xxv. SCIENCE NEWS, Sept. 24, 1994, Vol. 146, No. 13, p. 199.
xxvi. SCIENTIFIC AMERICAN, Jul. 1991, p. 32.
xxvii. Textures and Cosmic Structures, D. N. Spergel and N. G. Turok, SCIENTIFIC AMERICAN, Mar. 1992, pp. 52-59.
xxviii. Catching the Wave, R. Ruthen, SCIENTIFIC AMERICAN, Mar. 1992, Vol. 266, No. 3, pp. 90-99.
xxix. Quantum Philosophy, J. Horgan, SCIENTIFIC AMERICAN, Jul. 1992, pp. 94-104.
xxx. The Big Bang Never Happened, E. J. Lerner, VINTAGE BOOKS, 1992, p. 49.
xxxi. ibidem, p. 240.
xxxii. ibidem, pp. 266-277.
xxxiii. A Brief History of Time, S. Hawking, BANTAM BOOKS, 1988, p. 96.
xxxiv. Feynman, p. 151.
xxxv. Lerner, p. 161.
xxxvi. Black Holes and the Centrifugal Force Paradox, M. A. Abramowicz and J. Lasota, SCIENTIFIC AMERICAN, pp. 74-81.
xxxvii. Quantum Cosmology and the Creation of the Universe, J. Halliwell, SCIENTIFIC AMERICAN, Dec. 1991, pp. 76-85.
xxxviii. A Shower of Gamma-Ray Findings, R. Cowen, reporting from a meeting of the American Astronomical Society, SCIENCE NEWS, Vol. 141, p. 60.
xxxix. SCIENCE NEWS, May 15, 1993, Vol. 143, No. 20, p. 319.
xl. Star Burst, C. S. Powell, SCIENTIFIC AMERICAN, Dec. 1991, p. 32.
xli. J. Halliwell, p. 77.
xlii. Radio Pulses Hint at Unseen Planets, R. Cowen, SCIENCE NEWS, Vol. 141, p. 20.
xliii. Hawking, pp. 109-111.
xliv. The Search for Strange Matter, H. J. Crawford and C. H. Greiner, SCIENTIFIC AMERICAN, Jan. 1994, Vol. 270, No. 1, pp. 72-77.
xlv. Profile: Edward Witten, J. Horgan, SCIENTIFIC AMERICAN, Nov. 1991, pp. 42-47.
