Explaining the EPR Paradox

For anyone in their first year of quantum mechanics, the Einstein-Podolsky-Rosen (EPR) paradox is required study.  Its significance is underlined because it marked the final confrontation between the classical way of viewing the world and the emerging quantum way.  At its heart: is the universe deterministic, predictable, and classical?  Or is it random, indeterministic, and quantum?  Models describing particles with wave functions had emerged and been tested, and there was no question QM worked (and worked well).  The question was: is QM fundamental?  Or are all these probabilities and uncertainties an artifact of our particular model, which papered over a misunderstanding of deeper principles with probability descriptions?

Einstein, father of the beautifully deterministic worldview given to us in general relativity, refused to believe the universe could operate, at its lowest levels, with uncertainty.  This is what he meant when he famously said, “God does not play dice with the universe”.  The quote is often misused to suggest Einstein meant a literal god, but his habit of speaking pantheistically is well documented.  In any event, he felt strongly that a deeper understanding of quantum principles might someday remove the uncertainty and show us what was really going on under the surface.

Along with Podolsky and Rosen, he proposed a paradox which highlighted some concerns he had in the early formulations of quantum theory.  Without going into the specifics, which Wikipedia can cover better than I, here is the important assertion raised by the paradox:

If we produce two particles in a twin state (entangled) and fire them off towards detectors, their spin is described by QM as being in a superposition of states, and is thus effectively undefined until measured.  EPR suggested the spin might be random insofar as it was assigned randomly at the time of entanglement, but while in flight and prior to being detected both particles must have (the same) determined (x, y, z) spin.  It was not, fundamentally, uncertain.

Nobody could test this, so the question sat unresolved until Bell came along with an experiment (still untestable at the time) that could set the record straight, based on what is now known as the Bell inequality.  Here is what Bell figured out:

Produce your entangled particles, then set up two detectors, one to measure each particle.  The detectors are set up to RANDOMLY measure only one axis each time: either x, y, or z.

If a particle has a determined spin, it can be described as (A, B, C), where A, B, and C are either U (up-spin) or D (down-spin) for the x, y, and z axes respectively.  (U,U,D) would be a particle spinning up on the x and y axes, and down on the z axis.

So in Bell’s setup, each time we run the experiment, each detector reports either “UP” or “DOWN” for whatever it measured on the axis it randomly chose that time.  The two detectors pick their axes at random and need not report which they used, so each run of the experiment corresponds to one of 9 possible combinations of axis choices.

Detector 1   Detector 2
    x            x
    x            y
    x            z
    y            x
    y            y
    y            z
    z            x
    z            y
    z            z

In each of those 9 cases, we will just see UP or DOWN from each detector, not knowing which of those 9 specific combinations led to that result.

Quantum mechanics tells us that when the particle is measured, no matter what axis we choose, the result will be U 50% of the time, and D 50% of the time.  Therefore, half the time the two detectors will agree, half the time they will not, as you can see below:

D1: U, D2: U => Agree

D1: U, D2: D => Disagree

D1: D, D2: U => Disagree

D1: D, D2: D => Agree

The quantum case is easy.  If EPR were wrong, and we run Bell’s experiment, we should see the two detectors agree 50% of the time.

The classical case is a little longer, though not complicated.  Here is the point that will become the key: if we have 3-axis spin, such as (U,D,U), at least two of the axes will ALWAYS be the same.  Think about it.  You can only assign U or D to each axis, and you have 3 to fill, so no matter how you do it, at least 2 will match.  Keep this in mind because it is the foundation of Bell’s inequality and breakthrough.  Here are the 8 possible combinations for particle states, classically speaking:

(U,U,U), (D,D,D), (U,U,D), (U,D,U), (U,D,D), (D,U,U), (D,U,D), (D,D,U)
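To make the counting concrete, here is a minimal Python sketch (my illustration, not part of the original post) that enumerates all 8 classical states and verifies the pigeonhole claim that at least two axes always carry the same value:

```python
from itertools import product

# All 8 classical spin assignments: U or D on each of the x, y, z axes
states = list(product("UD", repeat=3))
assert len(states) == 8

# Pigeonhole check: with only two values to spread over three axes,
# every state must have at least one pair of matching axes
for state in states:
    matching_pairs = sum(
        state[i] == state[j] for i in range(3) for j in range(i + 1, 3)
    )
    assert matching_pairs >= 1

print("all 8 states have at least two matching axes")
```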

Let’s dig into this a bit.  The first two I listed above will cause our detectors to ALWAYS agree, no matter what axis they pick.  If the particle is in state (U,U,U), then it doesn’t matter if I look at x, y, or z… I am going to see a U, and so will the other detector (remember, for our simplified example entangled particles have the same state).  This means out of the 8 possible particle configurations, we can expect 100% agreement 2 times out of 8 (no matter which of the 9 axis-combinations chosen by our detectors).

The other 6 cases (U,U,D), (U,D,U), (U,D,D), (D,U,U), (D,U,D), (D,D,U) will always agree 5/9ths of the time.  How did I figure this out?  We can see it visually.  Every particle in these 6 cases has a majority spin and a minority spin, represented by the solid and open dots respectively in the diagram below (the three on the left are the particle hitting detector 1, and the three on the right are the particle hitting detector 2).  I’ve drawn in the 5 combinations where the two detectors will agree, e.g., if detector 1 reads the first majority axis and detector 2 reads the first majority axis, they will agree, and I’ve drawn the top-most horizontal arrow to show this:

[Figure: epr-match, showing the 5 detector-axis combinations where the two detectors agree]

Out of the 9 possible combinations of detector-axis-choice shown in the table further up, the 5 shown right here will match, and the 4 I didn’t draw will not.

So in total, we have 2 cases out of 8 where the detectors will agree 100% of the time, and 6 cases out of 8 where the detectors will agree 5/9ths of the time, or:

(2/8) × 1 + (6/8) × (5/9) = 1/4 + 5/12 = 2/3
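That arithmetic can also be brute-forced directly.  This short Python sketch (my own check, not part of the original argument) runs all 8 classical particle states against all 9 detector-axis combinations, assuming both entangled particles share the same state as in the simplified model above:

```python
from fractions import Fraction
from itertools import product

states = list(product("UD", repeat=3))    # 8 classical particle states
axes = list(product(range(3), repeat=2))  # 9 detector axis-choice pairs

# The detectors agree whenever the shared state has matching values
# on the two axes the detectors happened to choose
agreements = sum(
    state[a1] == state[a2] for state in states for a1, a2 in axes
)
fraction = Fraction(agreements, len(states) * len(axes))
print(fraction)  # prints 2/3
```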

And there we have it.  If QM is right, our detectors will spit out matching reports of UP and DOWN exactly 50% of the time.  If EPR is right, our detectors will spit out matching reports of UP and DOWN two-thirds (about 67%) of the time.

Later, the experiment was conducted, and the results agreed spectacularly with the quantum expectation.  This means particles really don’t carry hidden, predetermined properties as they flit around; they exist in a true state of uncertainty.

String Theory (2007 paper)

This post comes from an independent study term paper I wrote in 2007 as an undergraduate, on the need for a new physical theory, the emergence of String Theory, an examination of the theory’s development, and an assessment of the future of string theory.

Introduction

The scientific process has a long-standing history of theoretical investigation, followed by experimental verification.  Since the time of Newton, science has been expressed through mathematical representations of measurable events.  This is a very comfortable and reasonable approach to science: the models we use are empirically known to work, so we know we can trust the predictions they make about reality.  For the past 40 years, however, science has stood at the verge of a new threshold (or at least a new trend): some modern theories are so completely theoretical that there is no experimental evidence against which to test them.  These theories are instead considered “valid” because their theoretical structure is in agreement with a number of conceptual ideals, such as elegance, unification, and symmetry.  A serious question in modern science is how to treat these kinds of theories: how much commitment should we give them?  How do we know when we are on to something and when we are going awry?

Super-symmetric string theory (string theory from here on) is a prime example of an entirely theoretical construct that has, in addition to devouring the careers of thousands of scientists over the last 45 years, gained increasing support despite the complete lack of any experimental verification.  The appeal of the theory is not a secret: by allowing for subtle and creative modifications of our definition of particles, string theory stands poised to completely redefine our understanding of reality in a single swoop.  It claims to be able to unify all of the forces of nature, as well as all of the particles, to explain all of the fundamental constants in nature, and to identify why the laws of physics we observe are the way they are.  However, development of the theory has not come without its roadblocks.  In fact, after decades of roadblocks, a reasonable question to consider is: might string theory be nothing more than a clever mathematical construction that actually has nothing to say about reality?  Is the theory being kept alive simply because of its appeal as a potential theory of everything?  These are not questions with clear answers, so the best approach is probably to consider the potential string theory has to offer, and weigh it against the cost of accepting the theory.  Before developing a discussion of string theory, it is necessary to take a step back and discuss modern physics as it sits today.

The State of Modern Physics

Modern physics refers to the set of principles and formulations that were refined and finalized during the twentieth century, including Einstein’s general relativity, the standard model of particle physics, and quantum mechanics.  Many other developments emerged, and many older ideas were reinforced.  In the mid-1900s, physics appeared to have almost finished its job of explaining how everything works.  General relativity was confirmed beyond a doubt, and it provided an especially clean description of how gravity and acceleration affect space-time.  Quantum mechanics continues to be one of the most accurate theories ever constructed: its alarming predictions have never been wrong, and have never failed to explain an observed phenomenon.  Lastly, the standard model contains all of the experimentally observed particles, and completely details their properties and families.  Essentially every prediction made by the standard model about the subatomic regime has been verified down to about 10^-12 meters.  Despite the success of these three major pillars of modern physics, there are problems lurking beneath them all.

Despite its success, the standard model (for example) cannot be a complete theory because it does not include gravity.  It requires 19 constant parameters that describe the forces and particles in the model, but it offers no explanation as to why these parameters take the values they do – they are strictly experimentally determined values.  In physics, constants are generally used to blanket over phenomena that we do not understand.  A complete theory would take a single parameter, and all other ‘constants’ would be derived from basic principles.  Furthermore, the model does not explain what any of the 61 particles it lists actually are, and any attempt we make creates problems.  For example, if we decide to treat particles as solid balls, we quickly run into relativity issues: imagine exposing the ball to some force and causing it to move.  If it were perfectly rigid, all parts of the ball would have to begin moving at once, even before the force reached all parts of the ball.  This kind of faster-than-light communication is not allowed.  Similar issues arise if we consider the ball to be soft or to contain substructure.  The worst issues arise when we treat particles as points: quantum mechanics predicts that a point particle should be surrounded by a cloud of infinite energy arising from virtual particles.  Additionally, if we consider a graviton colliding with any other massive particle, if the two are points then the gravitational force should go to infinity as the distance separating them goes to zero.  Lastly, the standard model has missing particles: there are no candidates for dark matter – a nonluminous material believed to exist in huge quantities in the galaxy – and there are no super-partner particles, which are believed to exist from symmetry arguments.

Quantum mechanics and general relativity, while alarmingly successful within their own domains, are completely incompatible with one another.  The two theories have a different background dependence, namely in the limit as a distance ∆x goes to zero: general relativity requires space to be completely smooth, flat, and continuous.  Quantum mechanics, however, disagrees.  In the same limit, as ∆x → 0, uncertainty dictates that the contour of space-time becomes violently frothing and discontinuous (a phenomenon called quantum foam).  The two theories require that space-time behave differently, but space-time cannot be both smooth and discontinuous.  Presently, there is no quantum treatment of gravity, and no convincing way to merge these two major theories.  When it became clear that the problems lurking beneath physics were not easily solved, people began searching for anything that had the creativity necessary to offer a possible solution.  String theory emerged as a candidate almost 40 years ago, and has gained tremendous support since.

String Theory Emerges

String theory has surfaced as a prominent contender to resolve the conflict between relativity and quantum mechanics, and to patch the gaps in the standard model.  In addition to this feat, and part of the reason for the attraction of the theory, string theory claims to be able to unify all of the forces of nature, as well as all of the particles.  The fundamental idea behind string theory is that particles are actually tiny vibrating filaments (strings) which are themselves fundamental.  While they have a finite size, it is some 20 orders of magnitude smaller than an atomic nucleus, on the order of the Planck length, and held at a tension of nearly 10^39 tons.  The strings are free to vibrate along their length in a number of patterns (like musical notes).  The pattern of vibration dictates the properties of the particle.  For example, the higher the vibration, the more energy in the string – and from Einstein, the more energy in the string the higher the mass of the particle it describes.  Proceeding in similar fashion, the charge, spin, and strength of force (for force carriers) are determined from the vibration pattern the string undergoes.  We find that one pattern matches the properties of a photon, another the gluon, and another the graviton.

The equations of string theory define how strings should interact and scatter, and they do so with a single parameter: the coupling constant.  String theory as it is currently proposed requires only 2 parameters, one of which we expect to derive eventually: the first is the string coupling constant, which sets the strength of string interactions.  The second is the geometric configuration, taken as a background to the theory.  String theorists believe that this background can be derived, although they have not yet succeeded in doing so.  Also, string theory predicts gravity in a way no other theory does, in that gravitation and general relativity emerge naturally from the theory.  There is a required closed-string configuration in the theory that corresponds to a zero-mass, spin-2 graviton.  Furthermore, the theory offers a reasonable explanation of why gravity is substantially weaker than the other forces.

String theory would not be of any value if it suffered from the same incompatibilities as the theories it is seeking to replace.  Fortunately, it does not, and it offers several clever resolutions to the issues in modern physics.  String theory resolves the issues with the standard model, for example, by replacing it: solving for the allowed resonant patterns of the strings gives rise to all of the particles in existence, along with their theoretical properties.  This includes the graviton and a number of candidates for dark matter.  String theory also explains the relative strengths of the forces, namely why gravity is so weak.  There are also a number of interesting problems that string theory may provide insight into, including why there are three particle families, and why and how they decay.

Some of the issues between general relativity and quantum mechanics are corrected simply by virtue of the fact that strings are spatially extended objects.  Point particles can penetrate infinitesimally small distances, so if a mismatch between general relativity and quantum mechanics arises over those small distances, the theories are forced to confront it.  String theory resolves this issue in a clever (albeit simple) way: because strings are extended objects, it becomes meaningless to discuss distances that are smaller than they are.  That is, distance scales smaller than the strings cannot affect the physics or interactions of strings, or of anything made of strings.  In this way, strings avoid the inconsistencies that arise in the theory at extremely small distances (smaller than the Planck length) by saying the inconsistency arises from our misunderstanding of reality, rather than from the theories themselves.  More specifically, string interactions do not occur at points; because strings are extended objects, observers in different frames of reference cannot agree on an exact location where an interaction took place (they could if the interaction were between point particles).  In a sense, the point of interaction is smeared out over space-time in such a way that phenomena at points in space (quantum foam) do not come into play.

Getting Off the Ground

The first hint of what would one day become string theory came in 1968, while Gabriele Veneziano was studying the strong nuclear force at CERN.  Completely by chance, he observed that a function called Euler’s beta-function seemed to describe a number of the strong force’s properties.  Nobody understood why this would be, and it was left a mystery.  Then in 1970, Nielsen and Nambu realized that if particles were treated like strings, the resulting vibration (governed by the beta-function) described the strong force exactly.  This earliest form of string theory was called Bosonic string theory, and it suffered from a number of problems.  The most prominent is that it did not include any fermions; additionally, it predicted a number of particles we do not see, including particles with imaginary mass called tachyons.  Furthermore, it produced negative probabilities when combined with quantum mechanics (which makes no sense, as probabilities must lie between 0 and 1).  In 1971, Pierre Ramond of the University of Florida started incorporating super-symmetry; by 1973, string theory was modified to include super-symmetry, and became what is called super-string theory.  Then in 1974, Schwarz and Scherk realized that one of the extra particles from the theory matched the expected configuration of the graviton, meaning string theory might be a doorway to quantum gravity.  These results and others got very little attention from anyone outside the scientific community.

After almost a half-decade of work, theorists finally resolved the negative probabilities that were spilling out of even basic quantum mechanics calculations.  They realized that by adding degrees of freedom to the vibrating strings, the negative probabilities canceled out, and the community shifted to accept that there may be more “hidden” dimensions in our universe.  Despite the growing potential to solve so many problems, the failure to show any real signs of delivering on its promise kept many scientists away in string theory’s early years.  Then in 1984, Green and Schwarz announced that string theory could explain all four forces of nature, as well as each of the particles of the standard model.  This was the first time the suspicion that string theory actually unified nature was confirmed, and the announcement created the explosion they had expected ten years before, when they first announced the presence of the graviton in string theory.  Almost overnight, everybody dropped what they were doing to investigate string theory.  The years 1984 to 1986 are known as the first string revolution, marked by multiple daily publications from scientists all over the world; it turned out string theory naturally predicted many aspects of the standard model that had taken years to derive independently.

A Multiplicity of String Theories

While string theory has many benefits to offer, accepting the theory comes at a very high price.  In its present form, the theory requires our universe to have nine space dimensions and one time dimension – ten dimensions in total at each point in space (the later M-Theory adds an eleventh).  It turned out the strings needed nine space dimensions in which to vibrate in order to produce valid results: three extended dimensions plus six curled-up “hidden” dimensions.  This is very odd because we are only aware of three independent ‘directions’ of motion, so where are the rest the theory requires?  There are also a number of extra particles not accounted for, such as the super-partner particles, which should have the same mass as their partners.  The theory also allows for what are called fractionally charged particles, another phenomenon we never see in reality.  Lastly, string theory has a string configuration that seems to correspond to a fifth force, which in the absence of any such force is very troubling.

While super-symmetry resolved many of the issues with Bosonic string theory, it introduced a number of issues of its own.  Particularly, there is more than one way symmetry can be worked into the theory: depending on how one groups what are called symmetry generators, and on how one defines the “boundary conditions”, five different theories emerge.  They each share the common attributes of a string theory, but in their specifics they differ substantially.  Potential of the theory aside, why would there be multiple flavors of string theory?  How does it make sense for a grand unifying theory to not exist alone?  It was not clear which, if any, was correct – and without the ability to solve specific calculations in each theory, there was no clear way to isolate one over another.  The five theories are called: Type I, Type IIA, Type IIB, Heterotic E8xE8, and Heterotic SO(32).

Each flavor has several basic features in common: they each require 10 dimensions of space-time (and accept the same geometric configurations), and they each produce a set of massless states including the spin-2 graviton, plus many massive states of the order 10^19 GeV.  There are a number of differences, however: in Type IIA, strings are permitted to vibrate in only one direction (clockwise or counter-clockwise), and they all vibrate in that one direction.  Type IIB allows strings to vibrate in both a clockwise and a counter-clockwise direction.  Type I looks a lot like Type IIB except that it allows for open-string configurations.  The Heterotic theories both have clockwise vibrations that look like Type IIA/B, but also counter-clockwise vibrations akin to Bosonic string theory.  In Bosonic string theory there are 26 dimensions, 16 of which are curled into one of two donut shapes – depending which of the two donuts you use, Heterotic SO(32) or Heterotic E8xE8 emerges.  There are some specific terms used to define a string theory, including “world-sheet”, which describes the surface swept over time by a string in motion.  Following is a description of what exactly changes from theory to theory.

Type II: The world-sheet in this formulation is a free-field theory containing 8 scalar fields (corresponding to the 8 transverse directions in a 9-dimensional space) and 8 Majorana fermions.  The scalar fields satisfy periodic boundary conditions; the fermions can be either periodic or anti-periodic, called Ramond (R) and Neveu-Schwarz (NS) boundary conditions respectively.  The definition of the ground state gives us a choice of one boundary condition or the other; this produces the two subsets of Type II theory: Type IIB (chiral N=2) and Type IIA (non-chiral N=2).

Heterotic: The world-sheet of this formulation consists of 8 scalar fields, 8 right-moving Majorana fermions, and 32 left-moving Majorana fermions.  The major difference between the Heterotic theories and the Type II theories is that in Heterotic theories, bosonic states arise from an NS boundary condition on the right-moving fermions and fermionic states arise from an R boundary condition on the right-moving fermions.  This leaves two choices for the left-moving boundary conditions, which gives rise to two more theories: SO(32) requires that all 32 left-moving states share a single boundary condition, either periodic or anti-periodic – not a combination of both.  Alternately, E8xE8 splits the 32 states into two groups of 16, and allows each group its own boundary condition, allowing for a combination of periodic and anti-periodic.

Type I: The world-sheet in this formulation is identical to that of Type IIB, except that it has a parity transformation invariance and it incorporates open-string configurations.

Following the first super-string revolution, the details of these 5 theories were investigated as thoroughly as possible – but as before, the mathematics quickly became insurmountable.  Approximate equations made each theory appear to describe a different universe… so which one was ours, and whose were the others?  Further complicating the math of string theory was the background dependence of the theory: the physics predicted by each theory depends on the shape of the dimensions the strings occupy, specifically the compacted extra dimensions.  This is because it is strictly string vibrations that determine particle properties, and vibration patterns are heavily influenced by the surface they reside on.  There are several classes of shapes that produce the desired physics, including Calabi-Yau manifolds.  Studies into the relationship between string vibrations and the shape of the geometry began to offer insight into the standard model: geometry explained why there were families of particles.  Sometimes a geometry can have something like a “hole” in some of its dimensions, allowing strings to wrap through the hole.  The number of holes (and therefore the number of unique string orientations on the surface) creates the families of lowest-energy particles we see.  A fair amount can be deduced from knowing the general properties of the geometry, but if we knew the exact shape, it would tell us a great deal more.  Once again, the mathematics makes any sensible derivation impossible.

What exactly is the nature of the mathematics that prevents us from making headway?  String theory is analyzed using perturbation theory.  With perturbation, you essentially approximate to get a rough answer, then refine that answer by including more and more details that were initially omitted.  This assumes the initial estimate is good, and that the overlooked details converge toward zero overall.  This is sometimes the case: for example, to model the orbit of the Earth around the Sun, a first approximation would ignore all other influences.  Then, to refine the approximation, you could account for the other planets, the debris in the path, the electromagnetic forces at work, etc.  Each of these modifications is almost negligible.  However, there are configurations where perturbation does not work well.  Imagine a gravitational system of 3 large objects.  In a first-order approximation you might ignore one entirely to get the approximate effect of the other two.  When you then add in the third, it produces a substantial change, not a refinement.  These situations are analogous to the situation in string theory.

All of the equations explicitly allow that at a point of interaction between two strings, an arbitrary number of string/anti-string pairs may appear and annihilate before the resulting strings scatter off.  An interaction can produce a virtual pair which then annihilates, possibly producing an additional virtual pair, and so on, until the final strings emerge and scatter.  An exact calculation requires us to sum the contributions of a plain string interaction, plus a string interaction with 1 virtual loop, plus interactions with 2, 3, 4, … loops, to infinity.  As in the planet example, if we assume each additional loop contributes less to the overall solution, we can safely ignore the higher numbers.  However, if the contribution is not decreasing – or worse, is increasing – then the math, by its very nature, is useless.

The coupling constant in string theory defines the likelihood of strings splitting into virtual pairs during an interaction.  This constant has some freedom in what values it might take, and we don’t know exactly what value to use.  Generally, when the coupling constant c is less than 1, perturbation works (an increasing number of virtual loops contributes less and less to the final solution).  However, if c > 1 the opposite is true, and perturbation fails dismally.  Currently the string equations tell us only that the coupling constant satisfies the relation 0 · c = 0.  Of course this is useless, because c can take any value, and there is no way to tell whether it is greater than or less than one.  The initial burst of excitement surrounding string theory began to wane, and the momentum and support evaporated.  It was too difficult to make any serious progress because of mathematical limitations.  The first string revolution came to a dead halt by 1986, and nearly everybody abandoned string theory once more.
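As a toy numerical illustration (my own analogy, not an actual string-theory calculation), treat each extra virtual loop as contributing one more power of the coupling constant c, so the total is a geometric series.  The partial sums behave very differently on either side of c = 1:

```python
# Toy model: each additional virtual loop multiplies in another factor of c,
# so the perturbative sum looks like 1 + c + c^2 + ... (a geometric series).
def partial_sums(c, terms=30):
    total, sums = 0.0, []
    for n in range(terms):
        total += c ** n
        sums.append(total)
    return sums

weak = partial_sums(0.5)    # c < 1: each loop matters less; sum settles near 1/(1-c) = 2
strong = partial_sums(1.5)  # c > 1: each loop matters MORE; the sums blow up

print(round(weak[-1], 6))   # prints 2.0
print(strong[-1] > 1e4)     # prints True: truncating this series tells us nothing
```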

The Second Super-string Revolution

String theory sat on the back-burner for another decade, until 1995, when Edward Witten initiated what is now called the second super-string revolution.  Witten identified a set of dualities that essentially allowed him to transform one string theory into another.  He suggested that all of the string theories might in fact be reduced forms of a larger 11-dimensional theory he called M-Theory.  Additionally, he demonstrated for the first time a non-perturbative approach to solving some string theory calculations.  Witten found that the weakly-coupled description of any of the 5 string theories (c < 1, where perturbation works) has a duality with the strongly-coupled description of another string theory (c > 1, where perturbation does not work).  This means that when you need to perform a calculation beyond the range of perturbation, you transform into a different string theory’s weakly-coupled description and perform the calculation there.  This was a stunning and brilliant breakthrough, and it caused a storm much like the one a decade earlier.  Physicists dropped what they were doing to reconsider string theory, with hopes of uncovering the secrets of the mysterious M-Theory.

[Figure: ST_mtheory]

Depiction of the relationship between the 5 string theories, and the M-Theory.
Only a few duality lines are drawn; many more exist.

M-Theory has 1-, 2-, 3-, and up to 9-dimensional fundamental objects, not just strings, but the energy of the various objects ensures we are only likely to encounter strings.  As the coupling constant gets larger, a “string” seems to become either an extruded string or a tube (depending on the theory), either way adding an extra dimension to the mix.  This dimension is one into which a string can extend, but not vibrate, so it does not alter the expected predictions about force unification, etc.  M-Theory in a reduced-energy form describes an 11-dimensional super-gravity theory.  Witten made his connections between the different theories by highlighting a number of dualities, which use symmetry or transformation arguments to make different subsets of string theory equivalent.  Each string theory is parameterized by a moduli space, which contains the string coupling constant, the shape and size of the space-time dimensions, and the background fields.  Each moduli space has a region where the string coupling is low, and a region where it is high.  Dualities often link theories in a chain, rather than just connecting two; this reinforces the idea that they are all interconnected, and provides the added benefit of reducing the non-uniqueness of the theories.

ST_mtrel

Relationship between M-Theory and other major theories

One duality that connects the two Heterotic string theories, as well as the two Type II theories, is a duality of distances.  This duality comes from the winding mode of a string: the total energy of a string is the sum of its vibrational energy and its winding energy, which depends on the “winding number”, the number of times the string is wound around a piece of space-time.  Notice this wound configuration is meaningless for point particles; strings, being extended objects, can actually form a closed loop around a curled-up piece of space.  If we imagine such a string wrapped around a cylindrical piece of space that is shrinking, we can illustrate this duality: as the space loop shrinks, the string’s winding energy decreases, but its vibrational energy increases.  Likewise, if the piece of space is expanding, the string’s vibrational energy decreases while its winding energy increases.  It is only the total energy of a string that identifies its properties and interactions, not what kind of energy it is; therefore the physics defined by a string of vibrational energy X and winding energy Y acts identically to a string with vibrational energy Y and winding energy X.  As these two energies are related to the radius of the curled-up dimension, we find that a shrinking dimension of radius R is exactly equivalent to a growing dimension of radius 1/R.  This in effect eliminates the meaning of sub-Planck distances: if an extra dimension with a wound string around it were to shrink, upon reaching a size of 1/R it would be as if the dimension “bounced” and was now growing.  The boundary conditions and physics described by Type IIA string theory with radius R, for example, are exactly identical to those described by Type IIB string theory if we take the radius as 1/R.
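The exchange just described can be made concrete with a toy calculation.  The snippet below is a deliberately simplified sketch, not the full string mass formula: it keeps only the momentum (vibration) term n/R and the winding term wR of a closed string’s energy, dropping oscillator contributions and constants.  Swapping the two mode numbers while inverting the radius reproduces exactly the same spectrum.

```python
# Toy model of the R <-> 1/R duality: keep only the momentum piece n/R
# and the winding piece w*R of a closed-string state's energy.
def energy(n, w, R):
    """Momentum mode n, winding mode w, compactification radius R."""
    return n / R + w * R

R = 0.37  # arbitrary radius in string units (illustrative assumption)
states = [(n, w) for n in range(4) for w in range(4)]

# Original theory at radius R vs. dual theory at radius 1/R with n <-> w.
spectrum_R = sorted(energy(n, w, R) for (n, w) in states)
spectrum_dual = sorted(energy(w, n, 1.0 / R) for (n, w) in states)

# The two spectra are identical state-for-state: the duality in action.
assert all(abs(a - b) < 1e-12 for a, b in zip(spectrum_R, spectrum_dual))
print("spectra identical under R -> 1/R with vibration and winding exchanged")
```

Because only the total energy matters, no experiment in this toy world could distinguish the radius-R theory from the radius-1/R one.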

Another duality Witten identified was T-duality, which maps the weakly-coupled region of one theory to the weakly-coupled region of another.  This does not improve our mathematics, but it provides insight that the theories are interconnected in several ways.  An example of this kind of connection is the Type IIB theory in 10 dimensions, which is self-dual.  Also, Type IIA on a circle of radius R has a T-duality to Type IIB on a circle of radius 1/R.  Witten provided strong evidence for his dualities by making use of “non-renormalized” quantities that emerge from symmetry but are independent of the coupling; these are referred to as BPS states.
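As a small illustration of how these dualities knit the five theories together, the sketch below encodes the standard T- and S-duality links (plus the strong-coupling limits leading to M-Theory) as a graph and checks that every theory is reachable from every other.  The edge list is a simplified summary for illustration, not an exhaustive map of the duality web.

```python
from collections import deque

# A simplified duality web: Type IIA <-T-> Type IIB, Het-E8xE8 <-T-> Het-SO(32),
# Type I <-S-> Het-SO(32), Type IIB self-S-dual, and the strong-coupling
# limits of Type IIA and Het-E8xE8 giving M-Theory.
dualities = {
    ("Type IIA", "Type IIB"): "T",
    ("Het-E8xE8", "Het-SO(32)"): "T",
    ("Type I", "Het-SO(32)"): "S",
    ("Type IIB", "Type IIB"): "S (self-dual)",
    ("Type IIA", "M-Theory"): "strong coupling limit",
    ("Het-E8xE8", "M-Theory"): "strong coupling limit",
}

# Build an undirected adjacency map from the edge list.
neighbors = {}
for (a, b) in dualities:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

def reachable(start):
    """Breadth-first search: every theory reachable by chaining dualities."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in neighbors[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

# One connected component: chains of dualities link all the theories.
assert reachable("Type I") == set(neighbors)
print("all", len(neighbors), "theories connected through duality chains")
```

The point of the breadth-first check is exactly the “chain” observation above: even theories with no direct duality between them are linked through intermediaries.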

M-Theory is considered by some to be the ultimate unifying theory of everything, and by others to be a mathematical curiosity with little relevance to the real world.  Regardless, the tools presently do not exist to explore the theory in detail, so its exact nature remains largely unknown.  While Witten and others are sure it exists, nobody has yet written down the equations for it.  One of the keys that makes M-Theory special is that it is Lorentz invariant, which implies compatibility with general relativity.  Other unifying theories exist if we start at a different string theory and use similar logic, including F-Theory, but these other theories suffer the shortcoming of not being Lorentz invariant.  The big question in the string theory community is: what will it take to successfully merge the 5 theories into M-Theory?  Of course no answer exists yet, but there are a few areas that would make good starting points.  All string theories, and M-Theory in particular, are background-dependent.  It seems only reasonable that a background-independent formulation might shed light on which of the many possible geometries we should choose for our hidden dimensions.  Others have thought of this, but generally the math gets even more complicated when we try to form background-independent theories.  What comes next is anyone’s guess.

String Theory Stumbles

Having looked in some detail at the mathematical and physical structure of string theory, it now makes sense to take a step back and consider what string theory really means as a scientific theory.  Many people do not like the theory, and are concerned that it is completely contrived, filled with arbitrary tricks and turns to make it work.  The fact that string theory is presently untestable certainly doesn’t help the situation, but is it fair to dismiss the theory purely on the grounds that it cannot be tested?  There is no doubt string theory represents a new kind of science, where predictions about reality can exist completely on their own, without ever being verified.  But without the limitation of checking against reality, it might be too easy for theorists to take the theory too far.  The issue at hand is whether string theory has been contrived or otherwise deliberately engineered, or whether the revisions and changes to our understanding are due to our ignorance of the nature of reality.

Every theory in science requires some ‘reverse-engineering’.  Einstein’s general relativity, and even Kepler’s planetary motion, suffered from serious mistakes in their days of conception; changes were made to make the theories reflect observation.  That alone is not an indication that a theoretical construction is invalid.  The first “tweak” applied to string theory came in 1974, when Schwarz and Scherk were considering the extra particles that Bosonic string theory produced.  They realized that if they played with the coupling constant, they could reverse-engineer its value to produce a graviton, which is what they were looking for.  Around this same time, theorists decided to incorporate super-symmetry into Bosonic string theory which, as we have discussed, suffered from a large number of issues.  While this step seems perfectly reasonable, we can’t ignore the fact that the merger of string theory and super-symmetry was only attempted in order to “fix” the many errors the theory had predicted previously.  Even after these two modifications, string theory began to predict negative probabilities, which did not make sense.  Researchers found that if they gave the strings more dimensions of freedom in which to vibrate, they could eliminate the negative probabilities.  So again, with no fundamental reason for doing so, they reverse-engineered the number of dimensions the universe must have for the probabilities to work.  The questionable development of the theory did not end there; when string theory hit a dead end because of the multiplicity of theories (5 flavors), the parameters of the theory were tweaked further until Edward Witten found that adding another dimension, moving from 10 to 11, had the potential to unify the flavors and fix some outstanding issues.  He did not develop a proof to show why it should be 11; rather, he was trying to find a relationship between what is called “Type IIB” string theory and “11D Supergravity”.
When he found a relationship, he used it as justification for moving string theory to 11 dimensions as well.  Again, the logic seems perfectly sound, but it represents another change made to keep the theory from drowning under its own errors.

Furthermore, while string theory presents a glamorous façade of grand unification, it actually has very little to tell us as of right now.  The equations of the theories are so complicated that nobody today knows how to calculate so much as a simple interaction between two strings (again a result of the limitations of perturbation which, while lessened by Witten’s discoveries, are still in place).  Despite almost 40 years of effort, and thousands of careers, string theory has not yet provided any of the answers its pursuers hoped to find.  Unfortunately, the magnitude of error we acquire when doing approximate calculations is far too large to check any of the 19 constants against prediction.  While the theoretical ability to derive them exists, we can’t actually check whether the equations give us the experimental values.  It turns out the exact equations are required to solve anything that would definitively show whether the equations are correct, and we do not have access to the exact equations.  This prevents us from either eliminating or verifying any of the 5 theories.  Furthermore, nobody has been able to write down the form of the equations for the so-called M-Theory, making it more of a wishfully-inspired theory than a well-grounded formulation.

This is a difficult issue to rule on; all theories, as indicated, require a level of reverse engineering; that is how we learn about nature.  We know where we are trying to get, so we use that as a hint to derive the path there.  It is hard to tell when this natural process is underway, and when scientists are allowing any modification required just to get a theory to work.  The magnitude of the changes string theorists require is very large, but in fairness it is not substantially more abstract than the changes quantum mechanics has required of our understanding.  It turns out there is a school of reasoning physicists use to decide whether their modifications are acceptable, and it has to do with the conceptual ideas about nature that modern scientists agree upon.

Reconsidering the Basis for Theories

Scientists tend to call upon conceptual understandings of “how reality should be” when considering physical theories; these ideas have not been proven, and may not even have any fundamental basis, but they make sense to us and so we try to work them into everything we do.  A prime example of this is conservation of energy.  Nowhere in physics is conservation of energy written down as a requirement, but we observe it to be true, and on many levels it just makes sense.  Another example of a conceptual idea about reality is symmetry.  String theory is firmly grounded in symmetry, and many of the modifications to the theory have been made to respect one form of symmetry or another.  Symmetry describes how similar systems, related by various transformations, evolve.  For example, if I set up two identical systems and then translate one of them 1 meter to the right, I expect them to still evolve the same way.  One meter to my right is the same as right in front of me; this is an example of translational symmetry.  There are rotational symmetries, parity symmetries, time symmetries, gauge symmetries, and dozens more.  There is absolutely no reason why symmetry must hold, but it is something we put a lot of faith in, and we will select theories based on their adherence to symmetry.

A third example is the idea of mathematical elegance.  This can sometimes take on an almost religious meaning, but it basically describes how concisely and cleverly formulated a theoretical model is.  Scientists heavily favor elegance because it seems consistent with the idea that nature is well constructed (something many of us believe), and should therefore be described in a (mathematically) clean way.  Relationships should emerge naturally, and principles should fit together like clockwork.  This kind of thinking has played a huge role in physics; Einstein himself was completely convinced his general relativity was correct purely on the grounds that its formulation is so beautiful.  Why do people put so much weight in these kinds of beliefs?  That is a very difficult question to answer, but at the end of the day we are all humans with feelings and outlooks on reality; we need to satisfy the human element when we work with things like physics.  The human equation is either gifted or condemned (depending on how you look at it) by its need to internalize things, and because of this, ideas that seem well organized to our brain (such as elegance) tend to settle better.

It is not easy to tell whether these ideas appeal to us only because we are human, or whether they appeal to us because they are foundational to nature, and we are built from nature as well.  We can, however, answer this question: are the facts we use at least reasonable, even if not proven?  Returning to the context at hand: are the various revisions made to string theory on the basis of these kinds of ideas revisions that we can trust?  I think it is safe to say yes, they are very reasonable.  String theory is highly promoted because it is such an elegant formulation of reality.  In many ways it is very simple, and almost beautiful, to think of the universe as “akin to a cosmic symphony” (The Elegant Universe, 146), with each string playing its own notes and interacting accordingly.  Things like extra dimensions have their own subtlety that presents a certain appeal to the imagination.  And the idea that one theory has all of modern physics bundled into it seems a little too perfect to be wrong, regardless of the means used to develop it.  Whatever conclusion we draw here, it is with the understanding that only through experimental verification can we ever truly say what is right and wrong.

Verification of String Theory

(Note: This section contains an error.  See my post on calculating N-Spheres for the corrected math)

There is no doubt the history of string theory has not been a smooth one.  Despite the decades of silence, the seemingly insurmountable mathematical issues, and the various revisions made to “make the theory work”, it is still a widely adopted theory.  The next question is whether we can, or ever will be able to, verify the theory experimentally.  The sizes of the strings make any direct observation absolutely impossible.  The approximate equations make any prediction and test of a particle’s properties impossible.  Instead, modern experiments will look for other attributes that would speak for or against the likelihood of the theory, although without proving it one way or the other.  If any such evidence is going to arise, it will be at CERN, where the LHC is scheduled to come online later this year (so we shouldn’t expect it until next summer, I am sure…).

One example of an experimental piece of evidence that could be seen is the presence of extra dimensions.  While this would certainly not indicate string theory was correct, it would be a step in that direction.  Considering a Newtonian gravitational system, everyone is familiar with the inverse relationship between the strength of the force and the square of the distance.  The R^2 term is an implicit statement that there are three spatial dimensions in the universe (the gravitational flux divides over the surface of a sphere, which is a function of R^2):

$$F = \frac{4\pi G\, m_1 m_2}{S}, \qquad S = 4\pi R^2$$

Notice in the above equation, S is the surface area of a three-dimensional sphere (called a 2-sphere).  If string theory is correct and there are seven additional spatial dimensions, physicists expect to observe a deviation from the R^-2 drop-off of gravity over very small distances (distances on the size scale of the extra dimensions).  If we calculate the flux through a 9-Sphere (the surface around a 10-dimensional sphere), we find:

ST_form2

This calculation uses the following pattern to find the surface of an N-Sphere:

ST_form3
The first integral gives the surface of a 0-Sphere, then a 1-Sphere, a 2-Sphere, and a 3-Sphere.  If string theory is correct, we should expect to see the force of gravity obey an R^-9 relationship over very small distances.  So far, the inverse-square relationship has been confirmed down to one-tenth of a centimeter.  The size scale needed to observe a variation depends on the size of the curled-up dimensions, which could be as large as a millimeter, but as small as the Planck length.  If the hidden dimensions are much closer to a millimeter, there is a possibility we will observe an inverse-square violation in upcoming years at CERN.  When particles collide at full energy, we may observe a unique phenomenon: the particles will collapse into a minuscule black hole, which will then evaporate.  To form a black hole, the strength of gravity would need to be substantially stronger than it is over long distances.  However, if the dimensions are smaller, or even close to the Planck length, we will never be able to probe small enough to see their effect on gravity.
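For readers who want to experiment with these drop-off rates, here is a short check using the standard closed-form surface area of an n-sphere (equivalent to the integral pattern described above): S_n(r) = 2 π^((n+1)/2) / Γ((n+1)/2) · r^n.

```python
import math

def nsphere_surface(n, r=1.0):
    """Surface 'area' of an n-sphere of radius r (the boundary of an
    (n+1)-dimensional ball): S_n(r) = 2*pi**((n+1)/2)/Gamma((n+1)/2) * r**n."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2) * r ** n

# Ordinary space: flux spreads over a 2-sphere of area 4*pi*r^2, so F ~ 1/r^2.
assert abs(nsphere_surface(2, 3.0) - 4 * math.pi * 9.0) < 1e-9

# Ten spatial dimensions: flux spreads over a 9-sphere whose area grows as r^9,
# so at short range gravity would fall off as 1/r^9.
for r in (1.0, 2.0):
    ratio = nsphere_surface(9, 2 * r) / nsphere_surface(9, r)
    assert abs(ratio - 2 ** 9) < 1e-9  # doubling r multiplies the area by 512
print("2-sphere gives 1/r^2 dilution; 9-sphere gives 1/r^9 dilution")
```

Doubling the radius dilutes the flux 4-fold in three dimensions but 512-fold in ten, which is why the short-range force law is such a sharp signature of hidden dimensions.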

In addition to the extra dimensions, string theory predicts super-partner particles and fractionally charged particles, neither of which fit in the standard model.  It is possible some of these particles may be created in collisions when the LHC comes online.  The LHC at CERN is capable of creating 14 TeV collisions, far more energetic than anything possible today, and perhaps enough to reveal some of these predicted particles.  If we were to see a fractionally charged particle, or a super-partner, it would be a good argument in favor of string theory.  Unfortunately, many of these particles are expected to be very heavy, and even outside the range of the LHC’s energy.  Experiments will no doubt search for the presence of miniature black holes and super-partners, but to bluntly answer the question posed at the beginning of this section, we will probably never be able to verify string theory directly.  The strings and energies are simply too far out of range.

The only prediction made using string theory that has been accepted by the scientific community regards the entropy of black holes.  A major problem in black hole physics, one that eluded physicists for nearly 25 years, is: what is the source of their enormous entropy?  A black hole is entirely defined by its mass, its spin, and the force charges it carries, so what is there to be in a state of disorder?  String theory provided a framework that was exploited in 1995 to explain how black holes acquire entropy; no conventional theory has offered an explanation.  Again, this by no means says string theory is true, but it does represent a vote of confidence that the theory must have something right.

The State of String Theory

The topics presented above regarding the state of string theory are still hotly debated, so there is no correct answer as to where things fall.  I can, however, offer my personal opinions.  Despite the total lack of evidence and the slow (even engineered) growth of the theory, there is a strong sense of totality about what it represents.  The amount it has to offer, and the amount of complex physics that emerges entirely naturally from the theory, is really compelling; the fact that a single parameter can fully qualify our reality suggests to me that string theory is either correct, or very close to a real fundamental theory of everything.  I also believe it is not a problem if the theory allows for an infinite number of permutations (variations in the geometry), as long as it provides an explanation for why a particular geometry should emerge instead of the others (probably something regarding a lowest-energy configuration).  Moving forward will no doubt require identifying the nature of M-Theory, and possibly inventing some new mathematics in the process.  The clever tricks presented by Witten help in some instances, but it seems clear that the perturbative approach needs to be replaced entirely if string theory is to really break free.  Just as the second super-string revolution followed ten years after the first, we are now, ten years later still, due for our third super-string revolution, which I believe will need to do exactly this.

String theory is of a particularly complex nature, and we stumbled across it accidentally.  It is unsurprising that a lot of our perspectives about reality should be found in error, when we already know our two most successful physical theories are incorrect (or at least incompatible), and a number of phenomena remain unexplained.  To assume we have a valid idea about the nature of reality at this stage in the game seems unjustifiably arrogant to me.  If nothing else, string theory has been an inspiring view into the possibilities our universe has to offer.  The success or failure of string theory, I believe, will set the stage for how people treat theories of this purely theoretical nature in the future.  If string theory comes to proper fruition, it will be a monumental achievement for humanity; we will be able to derive, from basic principles, why everything we see is the way it is: why there are three large dimensions, and why particles carry a certain charge and have a particular mass (and are not slightly heavier or lighter).  We will be able to explain exactly why each force has the strength it does, and why it could not be any other way.  It would be like finding the blueprint to nature, and it would open up all kinds of doors, both intellectual and technological.  The next several years, as CERN comes online, are no doubt going to be exciting for the physics community.  I personally hope we get lucky and find evidence that there are in fact extra hidden dimensions, and that the full richness of nature has not yet been fully revealed.

References

Excepting the conclusion and the work on gravitation in higher dimensions, this paper is a compiled summary of materials from the following sources:

DeWitt, Richard.  Worldviews.  Malden, MA: Blackwell Publishing Ltd, 2004.

Greene, Brian. The Elegant Universe.  New York: W. W. Norton & Company, 1999.

Greene, Brian. The Fabric of the Cosmos. 1st ed. New York: Random House, 2004.

Sen, Ashoke. “An Introduction to Non-Perturbative String Theory.” Mehta Research Institute of Mathematics and Mathematical Physics, 12 Feb 1998.

Woit, Peter. Not Even Wrong. New York: Basic Books, 2006.

A Brief History of String Theory. (Provided by Dr. Beal; author unknown)

Davies, Paul. Cosmic Jackpot. Houghton Mifflin, 2007.

Calculating N-Spheres

As a result of my years studying string theory at the undergraduate level, I eventually developed a method for determining the properties of N-Spheres, with the ambition, and then-enthusiastic hope, of finding the testing distance over which the strength of gravity would need to be measured to determine the number of physical dimensions there are in space.  While I did eventually come up with an equality that might produce an experimental setup, the choices I made with my career after my undergraduate degree obviated the need for further investigation.  In any event, I thought I would share what original research I did develop, so here it goes.

Why Spheres?

Gravity is weak.  This is well known and endlessly recited, but the explanations for why it might be weak are often too esoteric to delve into.  One proposal comes out of string theory research (now several decades old), and revolves around the properties of the graviton.  Although presently undetected, the graviton is a proposed member of the complete standard model, and serves as the carrier of the force of gravity.  These particles are theoretically exchanged by massive objects, thereby guiding their attraction to one another.

But why is the strength of gravity so far out of whack with the other forces?  Employing an example of this weakness from Brian Greene, think about a common refrigerator magnet.  Here we have a competition: a battle of strength between a few ounces of iron in the magnet, and the gravitational pull of the entire planet Earth.  Were gravity and electromagnetism on equal footing, the magnet would fall straight to the ground (and weigh much more than a few ounces, for that matter).  But that is not what we actually see.  In fact, the few ounces of iron have no trouble at all resisting the pull of the Earth’s gravity, and the magnet sticks resolutely to your refrigerator.

In string theory, there are more than three dimensions in real space, but those extra degrees of freedom are only accessible to certain types of string configurations, one of which is the graviton, and only over very small distance scales.  In normal 3-space, a force emanating from a point will expand as a spherical wave, its flux density dropping in proportion to 1/R^2.  Why squared?  Because the total flux emitted by this point must spread itself over the surface of a sphere growing with time, so the density is the total flux divided by the surface area of a sphere.  This can be clearly seen built into the common equations of force.  Take for instance gravitation:

$$F = \frac{G\, m_1 m_2}{R^2}$$

‘G’ is the gravitational constant, and being a constant it can accurately be rewritten as any combination of constants, leading to the unusual but entirely accurate formulation:

$$F = \frac{4\pi G\, m_1 m_2}{4\pi R^2}$$

And there we have it: a flux (the numerator) divided over the surface area of a sphere.  This is visible again in Coulomb’s Law, where the constant is often defined in terms of 4π:

$$F = \frac{q_1 q_2}{4\pi \epsilon_0 R^2}$$

Again the flux (the numerator) is divided by the surface of a three-dimensional sphere.  So if you are dealing with these forces, you are dealing with spheres, and if you are dealing with three dimensions, you are dealing with ordinary-looking “2-spheres” described by their usual properties.
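A quick numerical sanity check of the rewriting above: the flux form (total flux divided by the surface of a 2-sphere) reproduces Newton’s law exactly, and the product of flux density and sphere area stays constant as the sphere grows.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton(m1, m2, r):
    """Newton's law in its familiar form."""
    return G * m1 * m2 / r**2

def flux_form(m1, m2, r):
    """The same force written as (total flux) / (surface of a 2-sphere)."""
    total_flux = 4 * math.pi * G * m1 * m2
    return total_flux / (4 * math.pi * r**2)

# The two forms agree at every distance.
for r in (0.1, 1.0, 50.0):
    assert math.isclose(newton(1.0, 2.0, r), flux_form(1.0, 2.0, r))

# Density falls as 1/r^2 precisely because the same total flux spreads over
# a surface growing as r^2: density * area is constant.
density = lambda r: flux_form(1.0, 2.0, r)
assert math.isclose(density(1.0) * (4 * math.pi * 1.0**2),
                    density(7.0) * (4 * math.pi * 7.0**2))
print("flux / (4*pi*r^2) reproduces the inverse-square law")
```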

Moving to N-Spheres

Without going into too much detail, the string theory explanation is pretty simple.  If string theory is correct about the true makeup of space-time, then over very small distances the force of gravity is much stronger.  As gravitons are emitted from a source, many of them drift off into the extra dimensions.  The ones that stay in our coordinates move along and obey normal 2-sphere propagation attenuation, but appear as if they were very weak to begin with, since so many gravitons have already disappeared.  What I sought to accomplish was to determine what the force of gravity should actually look like at small scales, if space-time has more than three dimensions.  Clearly a 2-Sphere no longer describes the real “surface” the net flux is distributed over.  The theoretical test, once an n-sphere formulation of gravity is identified, would be to find a means of balancing gravity against electromagnetism (a force that cannot see the extra dimensions, and is therefore 2-spherical all the way down) and see where they balance.

So one last time: the gravitational FLUX is described by the numerator in the equation above, and we are searching for the new surface to distribute it over.  Let psi-g represent the flux, and define the constant k-g as follows (merely the gravitational constant divided by 4π):

g_flux_num

Spheres

When trying to imagine N-Spheres, like many before me, I work by analogy starting from lower dimensions.  Let’s bear in mind the definition of a circle (a 1-Sphere) to help extrapolate: the set of all points an equal distance from one center point.  In the case of a 0-Sphere (a “sphere” that exists on a single line), the “set of all points” refers to only two points: the point a distance R away along the line in one direction, and the point R away along the line in the other direction.  The “volume” of this sphere can be calculated by doing a “shadow integral” over an identity function running between R and -R.  As with any sphere, the surface area is then the derivative of the volume.

$$V = \int_{-r}^{r} 1 \, dx = 2r, \qquad S = \frac{dV}{dr} = 2$$

These answers make some intuitive sense.  If you have two points separated by 2r, then the “volume” of your 1-d enclosure is simply the length of the line between them, or 2r.  The “surface” is really the sum of two points, which is harder to visualize, but it is expressed in the math.  Let’s move ahead to 1-Spheres… commonly called circles.  We already know what to expect from these results, but let’s test it.  Now I am going to do a “shadow” integral over an identity function, first describing a circle, and then integrating from -r to r once again:

$$V = \int_{-r}^{r} 2\sqrt{r^2 - x^2}\, dx = \pi r^2, \qquad S = \frac{dV}{dr} = 2\pi r$$

Let’s do the last familiar case, so the pattern in the equations can emerge.  Here is the 2-Sphere:

$$V = \int_{-r}^{r} \pi\left(r^2 - x^2\right) dx = \frac{4}{3}\pi r^3, \qquad S = \frac{dV}{dr} = 4\pi r^2$$

Whenever you want to determine the “volume” of the next-dimensional sphere, you first integrate over a circle that contains all of the degrees of freedom in question, and then treat that as your “shadow” and integrate up each dimension in the chain.  As expected, each dimension adds an extra power to the radius, each time extending into a new degree of freedom.  Here are the results for the next several N-Spheres, arriving ultimately at a 9-Sphere whose surface can describe the flux distribution in an 11-dimensional string theory (special thanks to the TI-89 Titanium, and several sets of batteries!):

$$V_3 = \frac{\pi^2}{2} r^4,\; S_3 = 2\pi^2 r^3; \qquad V_4 = \frac{8\pi^2}{15} r^5,\; S_4 = \frac{8\pi^2}{3} r^4; \qquad V_5 = \frac{\pi^3}{6} r^6,\; S_5 = \pi^3 r^5$$

Here are the final ones that matter for string theory.  These took hours to verify on my TI-89, although I originally determined them using an expansion I derived from the previous five.

$$V_6 = \frac{16\pi^3}{105} r^7,\; S_6 = \frac{16\pi^3}{15} r^6; \qquad V_7 = \frac{\pi^4}{24} r^8,\; S_7 = \frac{\pi^4}{3} r^7; \qquad V_8 = \frac{32\pi^4}{945} r^9,\; S_8 = \frac{32\pi^4}{105} r^8; \qquad V_9 = \frac{\pi^5}{120} r^{10},\; S_9 = \frac{\pi^5}{12} r^9$$

(Here V_n denotes the enclosed volume of the n-Sphere, and S_n its surface.)

And there we have it.  String theory calls for at least 11 dimensions, meaning one time dimension and 10 spatial dimensions.  A ten-dimensional sphere, or a 9-Sphere (9 because the numbering indicates the number of dimensions that make up the bounding surface), therefore describes the needed volume and surface.  Of course I make no claim of having invented the concept of shadow integrals, but I can take credit for deciding to use them to solve for N-Spheres in this way.  Just to cap this off, here is my formula (fully original, deduced only from my previous calculations) for finding the volume of an N-Sphere.  I have not proven this formula mathematically, but I have verified its predictions up to 10-Spheres:

general_formula
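As a cross-check, the shadow-integral recursion is easy to run numerically (no TI-89 required).  In the sketch below, scaling the radius out of each integral reduces every step to a one-dimensional integral over the unit interval; the resulting unit-ball coefficients match the standard Gamma-function closed form π^(n/2)/Γ(n/2 + 1), including π^5/120 for the 10-dimensional ball.

```python
import math

def shadow_coefficients(n_max, steps=100_000):
    """Unit-ball volumes C_n (so V_n(r) = C_n * r**n), computed via the
    shadow-integral recursion V_n(r) = ∫_{-r}^{r} V_{n-1}(sqrt(r^2-x^2)) dx.
    Scaling out r reduces each step to the 1-D midpoint-rule integral
    C_n = C_{n-1} * ∫_{-1}^{1} (1 - u^2)^((n-1)/2) du."""
    coeffs = [1.0]  # C_0 = 1: a single point integrates to "volume" 1
    du = 2.0 / steps
    for n in range(1, n_max + 1):
        integral = sum((1.0 - u * u) ** ((n - 1) / 2) * du
                       for u in (-1.0 + (i + 0.5) * du for i in range(steps)))
        coeffs.append(coeffs[-1] * integral)
    return coeffs

C = shadow_coefficients(10)

# The familiar low-dimensional cases fall right out of the recursion.
assert abs(C[1] - 2.0) < 1e-5            # line segment: 2r
assert abs(C[2] - math.pi) < 1e-5        # disk: pi r^2
assert abs(C[3] - 4 * math.pi / 3) < 1e-5  # ball: 4/3 pi r^3

# And every coefficient matches the Gamma-function closed form.
for n in range(11):
    exact = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    assert abs(C[n] - exact) < 1e-4
print("10-ball coefficient:", C[10])  # pi^5/120, about 2.550
```

Any candidate general formula for N-Sphere volumes should reproduce these same coefficients, which makes this recursion a handy independent check.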

New equivalence

If string theory is correct and gravity is very strong at small scales, the relative strength of gravity compared to electromagnetism should change drastically on those scales.  I surmised that a test could be undertaken, attempting to find an equilibrium between the attractive force of an electron’s gravity and the repulsive force of its electromagnetic charge.  Let’s see on what scale such an equilibrium could be found:

e_g_equation2

Now we can solve this equation with the known values for an electron, and we find:

e_g_solve2

So when two electrons are brought to a halt about one-thousandth of a millimeter from one another, we should be able to get them to balance if (a) string theory’s proposition of 11 dimensions is correct, and (b) the wrapped-up dimensions described in the theory are on the order of thousandths of millimeters or more.

In other formulations of string theory that require more dimensions, this equilibrium distance shrinks.  Unfortunately, current experiments have only been able to probe the strength of gravity on scales two orders of magnitude larger (tenths of millimeters), which suggests an answer will not be forthcoming for at least a couple of years.  In any case, a lack of findings would be insufficient to falsify string theory, because either of the two premises leading to the above equivalence may be wrong without string theory itself necessarily being wrong.
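For comparison, here is the ordinary three-dimensional version of the electron balance.  Because both forces fall off as 1/r^2 in 3-space, their ratio is independent of distance (about 2.4 × 10^-43), which is precisely why an equilibrium distance can only exist if gravity switches to a steeper power law at short range.

```python
# Classical 3-D comparison of the two forces between a pair of electrons.
# Both scale as 1/r^2, so the ratio below holds at every separation.
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31   # electron mass, kg
q_e = 1.602e-19   # elementary charge, C

ratio = (G * m_e**2) / (k_e * q_e**2)
print(f"gravity / electromagnetism between two electrons = {ratio:.2e}")

# The famous ~10^-43 disparity: gravity never catches up in plain 3-space.
assert 1e-43 < ratio < 1e-42
```

Only if gravity steepens to something like the 1/r^9 law discussed earlier does the gravitational side grow fast enough at short range for the two curves to cross.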

Avoiding the slippery slope that is my personal dislike of string theory, let me leave this entry alone as only a commentary on N-Spheres.  Well, now I can’t say my undergraduate degree in physics was a complete waste!  At least I got one blog entry out of it.

Collective Consciousness

In the dampened wake of the Holidays, I found myself once again drifting aimlessly into my own mind, an activity that almost inevitably leads to a blog entry or at least mild insomnia. In this case the former; in particular, I became absorbed with the concept of a Collective “hive” mind, and how it might affect a species such as humans.

The common portrayal of such a paradigm is never positive, exemplified most vividly by the Star Trek: The Next Generation antagonists, the Borg.

Borg Drone

The Borg are a cybernetic species that specializes in the indiscriminate assimilation of foreign biology and technology. The Borg are also pivotally characterized by a collective mind… the members of the Borg are merely drones, without any personal awareness or sense of individuality. Indeed, the horror of assimilation, and the compulsive replacement of your individuality with the collective, are recurring themes in Star Trek, as well as in other sci-fi stories that touch on the concept.

I take issue with several of these portrayals, and ultimately assume the unpopular perspective that a collective mind would be a huge opportunity for humanity, and a sign of its maturity. It would also represent a fundamental paradigm shift of unprecedented proportions in the “human experience”.

Nodes in the Network

The key to keep in mind is that joining a “collective” does not alter the way individual brains process; it simply interconnects each brain with the others. What a connection to a collective is supposed to entail is an instant and unfiltered exchange of all thoughts and experiences between all members of the hive. Each connected human (or node) remains an individual processing center, meaning they continue to have their own consciousness and their own interface with experiences. The difference is that after the instant of initial experience, the event becomes public and known to all, and free for everyone to individually react to.

This is where the idea of losing one’s self enters the picture. Of course it is a matter of speculation, but I don’t subscribe to this model. It seems reasonable that people in a collective might arrive at interpretations or beliefs that they would not have held individually. From this deviation, we might deduce that the node is no longer an individual as it was unable to hold its own opinion. In other words, it may seem the individual’s opinion was forcibly overwritten by the collective. To the contrary, however, I would expect this sort of deviation. The change in a node’s “personal” opinion is not because the individual is unable to hold their own thoughts, but because their own thoughts mingle with every other person’s thoughts and a massive averaging takes place whereby every node individually aggregates the diversity of opinions and knowledge and arrives inexorably at the same conclusion. The key to remember is that the nodes share everything, so any differences of perspective or personality of individual nodes are subjected to every opposing opinion and perspective, allowing each node to personally agree with the “collective” personality and perspective.

This difference may seem subtle, but I insist it is not. Consider the elements that prevent people from agreeing on fundamental principles; take, for instance, an Evangelist and an Atheist. These two groups have entirely incompatible world views, and no amount of arguing could ever get them to agree. If they were connected to a collective, however, they would suddenly be able to exchange the feelings associated with experiences and the inherent instincts that cannot be explained, and they would be exposed to each other’s actual beliefs. Since they cannot hold both beliefs at once, the points of conflict would surface, and all of the internal reasoning behind each position would be shared in full.

With the extra information and understanding, they would each likely arrive at some middle ground based on the various points one group or the other was unable or unwilling to internalize previously. In essence, each would be so well informed and have such common experiences (personal or learned) that they nearly inevitably arrive at similar conclusions. The end result is that their opinions may have changed, but not because they had to: only because each individual grew beyond their original perspective and actually chose to agree with the collective. If disconnected from the collective, I would expect each individual to genuinely continue to believe whatever middle ground they had previously discovered.

Averaging Knowledge

The ability to exchange information on the level of our “inner voice” opens the door to this idea of true knowledge averaging. When we all have the same pieces, and the same feedback on the best and worst ways to use those pieces, then our interpretation of information is likely to average out to the “most globally reasonable” interpretation. This is not a loss by any means; it is a huge gain. It enables the enhancement of human understanding and influence to unthinkable levels. Nor does it require us to lose anything that we value in our current method of individual contributions; those contributions simply occur at a lower level. For instance, an individual whose perspective is very innovative and new can still redirect the whole collective. But in a collective mind, that innovation can be leveraged to a greater capacity, because as soon as it is discovered by a single node, it becomes available to all nodes to leverage.
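As a toy illustration of this averaging dynamic (my own sketch, not anything from the post; all names and numbers are invented), the process resembles a simple consensus model: every node repeatedly folds the group mean into its own opinion, and everyone converges on the same conclusion.

```python
# Toy sketch of "knowledge averaging": each node repeatedly moves its
# opinion toward the mean of all shared opinions, a simple consensus
# model. Names and numbers are purely illustrative.

def share_round(opinions):
    """One round of total sharing: every node sees every opinion and
    moves halfway toward the group mean of its own accord."""
    mean = sum(opinions) / len(opinions)
    return [(o + mean) / 2 for o in opinions]

opinions = [0.0, 0.2, 0.9, 1.0]  # wildly differing starting views
for _ in range(20):
    opinions = share_round(opinions)

# After enough rounds, every node has individually converged on the
# same shared conclusion: the mean of the starting views (0.525).
```

Note that no node is overwritten by fiat; each applies the same update to its own state, which is the distinction the paragraph above draws.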

Because the processing of information still happens within the brains of the nodes, it makes sense that certain nodes would have particular strengths: some more likely to innovate and some more likely to make abstract connections, much as in our world. Again, the key difference is that all nodes instantly understand how and why an innovation was realized, and can hopefully retrace the thought process.

This dispersal also allows humans to optimize themselves in ways previously unimaginable. Technology as it stands now (Wikipedia, social networking, televised entertainment, music) would no longer be required when culture and enrichment are available on demand. We would not lose these facets of our culture; we would simply be able to experience them without the technology middleman. I imagine a collective culture relying very little on technology or surroundings for happiness or entertainment.

Portability of Consciousness

I will close with a curious afterthought on this subject. If the individual consciousnesses in a collective were so interconnected that they could actually distribute their existence over multiple brains, a very sci-fi opportunity appears. Up until now, I have described a node as its own person who is fully connected to every other person. In many ways, this allows the group to control the group, because every decision (where to walk, what to say) is influenced and planned by the whole collective. However, to execute the actual action, the host of that particular body must agree with the collective, and their brain must control their body. In this new sense of shared consciousness, individuals could actually move their consciousness between particular nodes, or even share control of nodes while living primarily in the cloud. For physical tasks, a strong body might be occupied by an individual; then, for solving a problem, a node better suited to mental work might be occupied. In addition, several individuals might share control of multiple nodes at once.

This kind of collective allows humans to break the 1:1 connection that exists between a body and a mind. In fact, it opens up the ability for n:m control, where n minds control m bodies and n >= m. Now if a body is lost, it does not necessarily pull its host out of the collective; the host may exist redundantly across the network. And any consciousness can actually control any body, a subtle difference from before, where only one mind could control a body, even though its decision to do so was largely the decision of the collective.
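A minimal sketch of that n:m relationship, with class and entity names entirely of my own invention, might look like this:

```python
# Hypothetical sketch of an n:m mapping between minds and bodies.
# A mind may occupy several bodies, a body may host several minds,
# and losing a body does not remove a redundantly hosted mind.

class Collective:
    def __init__(self):
        self.links = set()  # set of (mind, body) pairs

    def attach(self, mind, body):
        self.links.add((mind, body))

    def lose_body(self, body):
        # The destroyed body drops out of every pairing; minds
        # hosted elsewhere survive in the network.
        self.links = {(m, b) for (m, b) in self.links if b != body}

    def minds(self):
        return {m for (m, _) in self.links}


hive = Collective()
hive.attach("ada", "body1")
hive.attach("ada", "body2")  # ada exists redundantly across two bodies
hive.attach("ben", "body2")  # body2 is shared by two minds
hive.lose_body("body1")
# ada survives the loss of body1 because she also occupies body2
```

The many-to-many set of pairs, rather than a dictionary keyed by body, is what captures the broken 1:1 connection described above.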

Enlisting in the Borg

The technology to achieve the kind of interconnectivity a collective requires is nowhere near the horizon, and may be permanently relegated to the sci-fi realm. If it does ever make it to reality, however, I think we stand to benefit greatly from its potential. The changes it represents to our way of life are small compared to the amazing opportunity for peace, advancement, and growth as a species. If it ever comes to be, I envision that plugging in will be a major point of contention, but inevitably everyone would seek its refuge and comfort, and be much happier for having done so.

Humans & Transporters

Ever since I was a child watching Star Trek: The Next Generation with my father, the concept of technology-driven teleportation (“transporters”) has captured and provoked my curiosity. With implications for communication, global unification, health care, and general convenience, ‘provoking’ is plainly an understatement of the true magnitude of the matter.

Despite harboring these thoughts and questions for many years, it was only very recently that I began to consider the philosophical connotations of teleportation, in particular for the user of the hardware. I sought to answer the question, “What emerges on the other side of a transporter?” Of course I don’t have the answer, but I have an answer, and I wanted to write it all out.

Transporters

Before I can get into this too much, it is worth pursuing a quick tangent and discussing how transporters work in the Star Trek series. The concept is fairly simple: leveraging Einstein’s E = mc², a computer scans a user and dematerializes their matter into an energy stream, along with data about their original configuration. There ensues a handful of semi-relevant albeit esoteric techno-babble, including the likes of “pattern buffer”, “confinement beam”, and “Heisenberg compensator”. When it is all said and done, the computer delivers the energy stream up to 40,000 km away and reverts it back into its original matter state… i.e., the person being transported. In the Star Trek story line, the computer scanner is able to resolve the quantum uncertainty that should otherwise exist between the position and momentum of the particles (or any other pair of observables with a non-zero commutator in QM). This stage is the only part of the transport process that rests on fundamentally impossible science, so I will ignore it in my discussion. Here, I am curious about what might actually happen if one of these transporters were built, and under no circumstances could we build something with functional “Heisenberg compensators”.
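As a back-of-the-envelope aside (my own numbers, not anything from the show; the 70 kg mass is an assumed example), E = mc² puts a staggering figure on the energy stream for a typical adult:

```python
# Energy equivalent of a human body via E = m * c**2.
# The 70 kg mass is an assumed example value.

c = 299_792_458.0   # speed of light in m/s (exact by definition)
mass = 70.0         # kg, a typical adult

energy_joules = mass * c ** 2
# Roughly 6.3e18 J: on the order of thirty times the yield of the
# largest nuclear device ever detonated (~2.1e17 J).
```

Whatever the philosophical outcome, the engineering problem of containing that energy stream is formidable in its own right.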

Consciousness

Perhaps predictably, the real question at the heart of this whole thing is whether human consciousness can be duplicated in the same manner that matter can. While arduously avoiding the word “soul”, I wish to follow the method of René Descartes and suggest a few fundamental truths about consciousness to serve as a starting point for subsequent deductions. While Descartes’ first-principles approach to philosophy only got as far as “cogito, ergo sum”, I propose granting the assumption that what applies to one individual must also apply to every individual, thereby extending the foundation: you think, therefore you are. Thus combined, we can agree that we each do exist, and that we each are separately sentient.

In granting the supposition that we all exist, we have acknowledged that consciousness is something real and distinct from person to person. It seems obvious, but clearly my consciousness is not the same one as yours. There is some mechanism that makes sure my consciousness stays with my body, and does not leak into others or vanish altogether… in other words, it seems quite reasonable to conclude that a particular consciousness is mapped immutably to one instance of a human. Everything in our experience suggests that this is the case. I believe this conclusion still holds when we start to look at more unusual or even hypothetical situations, although it becomes less obvious and definitely arguable. Here are a couple of cases I have thought about in an attempt to better define my own perception of the boundaries of a particular consciousness.

1. Monozygotic Twins

Okay, this one is not so hard. Identical twins (let’s take two as an example) have nearly the same biological construction, but clearly there is “somebody home” inside each twin independently. At the point when consciousness is likely to have manifested (prior to sentience), variation between the two twins would be confined to errors during mitosis and the minuscule differences in personal experiences while in the womb. Despite the differences being essentially immeasurable at first, each twin still gets assigned a separate consciousness.

2. Clones

We have to employ our imaginations a little harder now. Suppose you go to the doctor’s office and you are cloned. While you watch, the scientists grow a copy of you at a rapid pace. It seems unlikely that, when the clone reaches the point of being able to support consciousness, you would suddenly be affected. The idea that your awareness of self might suddenly span two bodies is unreasonable. Again, we are likely dealing with a new, separate consciousness despite the mirrored biological construction. It seems to follow from these two examples that consciousness emerges independently of the particular brain construction. That is to say, the “person” who sits behind one’s eyes is not a function of biological construction.

3. Replacement Clone

Now let’s say you are cloned through a process that necessarily kills you. The doctors take your blood, multiple samples, and the end result is your death. Then they use the materials they acquired to create your clone. Does the exit of your consciousness have any effect on “who” wakes up inside that clone? Given that your original consciousness is gone, could the new one actually be your original consciousness again, or is this case really the same as the one above? We are past the point where I can offer any certain answers, but my hunch tells me that there should be no relationship between the existence of one consciousness and that of another. If we agree that the particular mind to emerge is not a function of the biological construction, then I believe that the clone in this example, just like before, would be a brand new consciousness: one that thinks it is you, that acts, talks, and behaves like you, but would actually be different. This case is very close to the central discussion about transportation, so I will hold further thoughts until we get back to that.

Let me present one last thought about consciousness before moving on. The line between “you” and “your consciousness” is very vague. In general, the things that define who you are, are all bodily. Your personality comes from your experiences, your sense of accomplishment comes from your memories; your purpose, your self-worth, all of the facets of your temperament… they are all the result of years of experiences, memories, thoughts, and interactions. Of course there is an innate component to many of these things, but I argue that the items that really define us (the people we love, the people who loved us, our proudest moments, our deepest understanding of life) are entirely contained in our physically stored memories. Experiments with animals, as well as studies of humans after accidents and with certain memory-related diseases, have well established that personality and memories can come and go with alterations to the brain. In other words, the common concept of who you are does not depend on your consciousness. I propose that consciousness provides nothing more or less than the “self” who is able to experience what the brain processes.

This apparent tangent is very important. It means that “your consciousness” is not synonymous with “you”. Who you appear to be to others is defined by the makeup of your brain; two people with the same predispositions and the exact same experiences would likely act as if they were the same person. Quite contrary to this, we have established here that consciousness is not tied to the physical makeup (or else clones would be controlled by a single consciousness). When I talk about “you” in regard to transporters, I really mean the combination of your physical identity (memories, feelings) and your particular self-awareness. Either one without the other is not the entire you.

Teleporting Humans

Alright, so 1000 words later and still the question remains unanswered: what would happen if a human were transported? From the physical perspective, we know that the original human is decomposed into energy and a copy is created at a distance. Note that, sans the Heisenberg compensator, we cannot truthfully state that the same physical particles are moved to the new location… but nonetheless, we undoubtedly have a better copy than in our “Replacement Clone” example above. Let’s further clarify that the transport process need not kill the transportee (in practice this might make little sense, but the point is that the same relationship that existed between clones and replacement clones exists here). We found previously that a clone and a replacement clone were really the same phenomenon, each independent of the exit or entrance of other entities. Likewise in this case, I doubt that the same “self” that existed before the transport somehow moves into or shares the new “structure” created by the transporter. It seems inevitable that we are dealing with a new consciousness.

“Beam Me Up”, or “Count Me Out”?

So if a human enters a transporter, they are not in fact transported; rather, a duplicate is created elsewhere while the original is killed. That forces us to wrap up on a final philosophical curiosity: would it really matter to society as a whole?

In every quantifiable respect, the copy would be the original person. We have already discussed how personality, memories, experiences, and even temperament are parts of the physical body, and would therefore operate in the copy precisely as they did in the original. The copy would walk out of the receiving end of the transporter with a perfect memory of getting in at the other end moments before. In fact nothing about them could give any indication that anything had changed (since we know nothing physical did change), so for all intents and purposes, it would be the same person.

But the “self” inside their head would actually be only moments old, and completely distinct from the original.

My guess is that if transporters are ever invented, many, many people will use them without worry, and apparently without cause for worry. Myself? I’ll just call for a shuttle.