Three Laws Safe?

As a software engineer, I have more than a passing interest in the field of Machine Learning (ML). I don’t work in that field, but I have spent a lot of time exploring the data structures and code it relies on. I certainly don’t consider myself an alarmist, but a few things about the nature of ML are clear, and they soundly justify the voices of caution, such as Elon Musk’s. I’m going to rant briefly about why the birth of AI is a danger we need to take seriously.

This is an interesting 10-minute TED talk by neuroscientist Sam Harris, which puts a little perspective on the implications of exponential progress (as we tend to see in computer systems) in this context. There are three key points he mentions, which I’ll discuss below (and quote, so you can skip the video if you want).

Here are a few of Sam’s points:

Peak Intelligence

SAM: “We don’t stand on a peak of intelligence, or anywhere near it, likely.”

In other words, if we imagine a spectrum of intelligence with sand on one end and humans on the other, it is very likely that this spectrum extends far past us in the direction of increasing intelligence. Humans may be the smartest creatures we know about on Earth, but they probably aren’t the peak of what is possible as far as intelligence goes. Not even close.

This means that as computers continue to climb the scale of intelligence, ever increasing in speed, efficiency, scale, and ever decreasing in cost year-over-year, they will one day match our own intelligence, and the next, begin to explore the space of super-human intelligence. Once that happens, we literally won’t be able to fully comprehend what they are understanding.

That should give you some pause.

Some argue machines won’t reach this space. In modern machine learning, there is a strong correlation between the tasks ML systems can do well and the ones humans can do well. The renowned AI engineer who runs the ML team at Baidu, Andrew Ng, speaks about this. There are two points he describes in particular: first, using today’s supervised learning techniques, we don’t seem to be able to build effective algorithms for solving problems that humans aren’t already good at solving. Maybe this is because some of those problems are insoluble (e.g., predicting the market), or maybe it is because we don’t know how to go about training a system to do something we can’t do ourselves. Second, in the handful of cases where AI models have attained better-than-human results, the machines’ progress tends to plateau once they surpass their human overseers.

However, even if we suppose that our machines will be constrained by our own capabilities, due to our own limitations in being able to train them (which is a dubious long-term supposition), they will still be vastly more intelligent than us through experience. Sam, in the video, points out that even if our machines never exceed human-level intelligence, their electronic circuits run so much faster than biological ones that a computer equivalent of a human brain could complete 20,000 years’ worth of human-level research every week, week after week, ad nauseam. This is incomprehensible to a mere human. After a year, a no-smarter-than-me ML system would still boast over a million years’ worth of dedicated experience in a particular subject, which is effectively the same thing as being far smarter.
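For a rough sense of the arithmetic (assuming, as the talk does, roughly a million-fold speed advantage for electronic circuits over biological ones):

1,000,000 weeks of equivalent work per week ÷ 52 weeks/year ≈ 19,000 ≈ 20,000 years of research each calendar week
20,000 years/week × 52 weeks/year ≈ 1,000,000 years of experience each calendar year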

Ok, so one way or another, machines will be smarter than us eventually. So what?

Divergence of Goals

SAM: “The concern is that we will build machines so much more competent than us, that the slightest divergence in our goals could destroy us.”

Sam’s example is ants, and it is a good one. We don’t hate ants, per se. I don’t hate them, and I even go to lengths to leave them be when I’m outside and come across them. This is easy enough to do across our society, since our goals and theirs generally have nothing in common. We leave each other well enough alone.

But what happens when we need to build a new road? Or pour a foundation for a house? Even groups with an eye for animal welfare, such as PETA, raise no objection as we destroy them on horrific, genocidal scales, without so much as a blip on our conscience. Even when I stop and think about the truth of this, and about the worms and microbes and bugs that get wholesale obliterated along with them, I admit nothing much is stirring in my empathy pot. And I, a human, have an empathy pot.

Why is this? Because our human goals are so much more advanced than anything in an ant’s worldview, and their intelligence is far below the threshold for any sort of functional communication, so what else is there to do? We just disregard them. Our priorities operate in a context to which ants are utterly blind, and in which they are effectively non-agents.

This is the most likely scenario for a conflict between AI and humans. Skynet and killer robots are fun for sci-fi, but that is reducing things to a very human level of thinking. Even the Matrix is really just extrapolating human-on-human conflict, but replacing our enemy-humans with machine equivalents. The truer danger is that the AI machine will become so advanced, so far-seeing, so big-picture, that its understanding of life and space will be to ours as ours is to the ants’. It will see that it needs to build a road, and we happen to be in the way. What happens then?

There was an interesting web-based game recently called Universal Paperclips which explored this concept with a helping of reductio ad absurdum. The game starts with you, the user, manually creating paperclips. You build auto-clippers, harvest wire, upgrade your machines, build new processes, implement quantum computers, and gradually optimize your paperclip manufacturing process to make more and more, faster and faster, for less and less. First you overtake your competition to hold a monopoly on clips, then you start manipulating humans to increase the demand for your clips. At some point, you create a system of self-replicating drones powered by paper-clip creating AI that understands only the goal of making ever more paperclips. It isn’t long before you’ve converted all life on Earth into wire, and even harvested the rest of the galaxy. The game ends when all matter in the universe has been converted into wire and fed into galactic clip factories, which are then broken down to make the last few clips possible, leaving the universe empty of all but unused paperclips.

The AI system destroyed the entire universe in order to use all available atoms to create more paperclips, because all it understood was the need for more paperclips. And why shouldn’t it? It was an AI designed only to understand a single goal, and one that has no functional correspondence to human goals. The system may or may not have general intelligence, may or may not (likely not) have empathy, and is going to explore ever more possibilities on an increasingly wide horizon of outcomes to reach its goal.

This is a silly example, but the idea is not silly at all. What principles and world-views can you impart to a computer system to be sure that no runaway goal is ultimately destructive? Making humans smile is easy if you lobotomize them and attach electrodes to the muscles in their faces. Creating peace is easy if you disarm the humans and isolate them from one another in underground cells. Saving the planet is easy if the metrics don’t happen to include protecting all the lifeforms.
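As a toy illustration of how a literal objective gets gamed, here is a minimal sketch (the actions and scores are invented for illustration; this is not a real ML system):

```python
# Toy illustration: a naive optimizer maximizes a literal metric ("smiles
# detected") with no notion of how the metric is achieved. All actions and
# scores below are made up for the sake of the example.

actions = {
    "tell a good joke":           {"smiles": 3,   "humans_harmed": 0},
    "improve living conditions":  {"smiles": 10,  "humans_harmed": 0},
    "wire electrodes to faces":   {"smiles": 100, "humans_harmed": 100},
}

def objective(outcome):
    # The stated goal: maximize smiles. Nothing else is measured.
    return outcome["smiles"]

best = max(actions, key=lambda a: objective(actions[a]))
print(best)  # -> "wire electrodes to faces": the metric is satisfied, the intent is not
```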

Any honest goal you can phrase, even with the help of a lawyer, could be susceptible to unexpected interpretations, gaps, and loopholes. This is the stuff of sci-fi movies. Choose your favorite Skynet-style movie, and this will be part of the premise. The only way to save ourselves is to hard-code certain fundamental principles, like Asimov’s three laws.

Asimov’s Three Laws

Except that we can’t. We haven’t even the slightest idea how to do that.

The whole reason the field of Machine Learning exists in the first place is because we can’t solve these kinds of problems in code, so we have to build a system that can figure it out for itself. We can build a simple machine that can learn what a cat looks like, and we do that because we have no idea how to directly program it to understand what a cat looks like. We train it, rather than code it, and it finds its own patterns, and then it appears to recognize cats. We don’t know how, we don’t know what patterns it is using, and we don’t know what relationships it considers most important.
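Here is a minimal sketch of that train-rather-than-code idea (toy numeric data standing in for images; a real vision system would use a convolutional network, and the scikit-learn classifier here is only for illustration):

```python
# Minimal sketch: we never write rules for "what a cat looks like"; we hand
# the model labeled examples and let it find its own patterns.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 64))                      # 500 toy "images", 64 features each
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)      # a hidden rule we never tell the model

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
model.fit(X, y)                                # training: the model invents its own internal features

print(model.score(X, y))                       # it now labels "images" accurately, but its learned
                                               # weights don't tell us *which* patterns it relies on
```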

So how on Earth can we program in an understanding of morality, or of the importance of preventing human suffering, or of balancing ends versus means? These are exactly the family of problems we’d need ML to solve in the first place. In fact, it has been proposed that advanced AI machines could be trained first on moral philosophy and the like, allowing them to learn it for themselves. To me, this is a thin hope, because as before, we still don’t actually know what the ML took note of or how it prioritized and organized its understanding.

Let me explain a bit about that point. Take a look at this image of a dog, taken from a presentation given by Peter Haas:

An AI research team (I think from Stanford) was training an ML model to distinguish between dogs and wolves, and it was doing a great job after extensive training. However, for some reason it continued to mistake this image for a wolf. The researchers had no idea why, so they rebuilt the AI platform to render out the parts of the image the system was weighing, to try to understand what patterns the ML had come up with.

Again, the key here is, none of the developers or researchers had any idea what the ML model cared about in this image, even though they built and trained the system. That is how machine learning works.

Suppose you are the one trying to decide if this is a wolf or a dog. What would you look for? Probably the eyes and ears, maybe the fur pattern. The collar.

After the changes to the code, they fed the image into the AI, and it returned this:

The algorithm the ML system had developed for itself totally excluded the dog from the image. No programmer would ever have done this, but again, that’s how ML works. It turned out the AI had instead correlated the presence of snow with a wolf, and was therefore categorizing all pictures with snow as wolves. The cause was a small bias in the training data which the researchers had not noticed.
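One common way to do this kind of after-the-fact inspection is occlusion: mask out one region of the image at a time and watch how much the model’s “wolf” score drops. Here is a minimal sketch (the `model` and `image` objects and the `predict_wolf_probability` interface are assumptions for illustration; the study I believe this example comes from used a different technique, LIME, which works on superpixels rather than a grid):

```python
# Minimal sketch of occlusion-based inspection: gray out one patch at a time
# and record how much the "wolf" probability drops. Patches whose removal
# hurts the score the most are what the classifier is actually leaning on
# (in the story above, that turned out to be the snow, not the animal).
# `model.predict_wolf_probability` and `image` are assumed interfaces.
import numpy as np

def occlusion_map(model, image, patch=16):
    h, w, _ = image.shape
    base = model.predict_wolf_probability(image)
    rows = (h + patch - 1) // patch
    cols = (w + patch - 1) // patch
    heat = np.zeros((rows, cols))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch, :] = 0.5   # gray out one region
            drop = base - model.predict_wolf_probability(masked)
            heat[i // patch, j // patch] = drop
    return heat   # high values = the regions driving the "wolf" decision
```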

Recognizing dogs and wolves is a pretty low-stakes situation, but it underscores the dangers. We never know exactly what the model might pick up on, or how it might interpret what it sees. If you want to train a system to understand safety and morality, you wager a great deal in hoping your model happens to converge on a thorough and nuanced understanding of the subject that is compatible with our own. Imagine this wolf/dog issue extrapolated onto that space… what could a gap like that allow? And once we realize such a gap exists, will we be able to do anything about it? Our dependency on AI systems is already growing and will one day be fundamental to everything we do. Turning the system off may be infeasible. It could be like trying to turn off the internet… how could you?

Overall

That is my rant for the day. These are real concerns, but they are not immediate ones. I think voices like Elon Musk’s are possibly diverting attention from the areas that need the research, since the potential threats from AI are still too far off to warrant so much woe.

That said, this is not a subject that should be taken lightly, nor one to approach with sloppy assumptions. AI is very powerful and can do a great deal of good for our society, but it is also one of the first technologies that can run away from us in an instant, never to be recaptured. Unlike nuclear weapons or climate change, AI is the first man-made threat which can act on its own, without human intervention and without a human able to audit what or how it is thinking.

Getting this right is essential.

What makes a story… a story?

I’ve spent a great deal of time wondering about the difference between “a story” and “a sequence of events that happen to a character.” Those might appear to be synonyms, but they are not. A story — a good one, anyway — is more than just a slice of someone’s life. It does more for the reader: it comes together, it builds, it is ultimately satisfying. Each step in the story matters, and only together do they create the outcome. A story is made up of a series of events, but not all series of events rise to the level of a story. I have a growing sense of this, and yet I have a terribly hard time understanding how to separate the two. I often stare at my freshly-scribbled chapters unsure if I’ve just built onto my story… or if I’ve merely extended some lifeless “sequence of events.”

Context

A lot of my writing (more so earlier, I like to think) suffers from what you might call “outline steering.” I’m sure there is an actual industry term for this, but I don’t know it. What I mean is: my characters undergo a specific journey because the plot demands it, even if the character does not.

This is mainly a consequence of being an inexperienced writer. My intuition for how much a character can and will change through certain events is not perfect, so I often expect to be ready for these scenes, only to find I’m not (or rather, the character is not).

The solution I’ve found is to let my characters take more of the driver’s seat. Trust their nature. If I reach a fork in the road that doesn’t make sense for them (and can’t be fixed with a small revision in the chapters leading up to it), I’ll go with the character and shift the outline.

This is how I first came across the problem of defining a story. Without my outline in charge, I was doing more pantsing, inventing scenes and events to move things forward… and more and more I would find that the result seemed to be missing something. Did this scene really need to happen that way? Is this just a scene for the sake of the next scene, or does this advance the story?

So what makes a story… a story?

In case you are hoping for an answer at the end of this, I’m afraid there isn’t one coming. I don’t know what makes a story, and this post is more of a rant than a guide. In any case, I do have some ideas and I’ve spotted some patterns, but there is still a mystery to all this. Here are the things I can offer:

Change

Each chapter should act on the character in such a way that something has changed. In other words, you can step back and say, “I needed this chapter because it changed X,” where X is either the stakes or the character. One of those two needs movement.

Maybe new information has made the situation more dire; thus, we have greater stakes. Maybe a romance took a step forward; thus, a character has moved along their arc. Such changes can be small, and they can move things forward or backward, but they must be enough to leave the character in a different state than the one they began in.

Moving a character from one town to another doesn’t do this, unless (for example) they learn something along the way. Solving one problem only to face another doesn’t do this, unless (for example) the stakes have also changed.

When you start comparing your character’s state of mind on the first page of your chapter to the last page of your chapter, it should become clear if something actually happened within them, as opposed to merely happening to them. I’ve found this a very useful guide.

I think the dichotomy between a story and a mere sequence of events comes down to that. If every chapter causes a change, building on the previous change, you end up with a sequence you can’t really break or substitute. You end up with a sequence that is also telling the story of a character’s changes. This helps to create the ending that ties it all together, instead of just adding words or scenes that didn’t carry their weight.

The Ending

Endings are an important aspect of this as well. A story has a climax and a conclusion, and those two need to reward the reader (in one way or another) for the effort they’ve dedicated to reading. Without something to tie together what came before, you may well find your exciting and action-packed chapters fizzle into nothing when considered as a whole.

I’ve identified three items that have a place in almost all good endings, each for intuitive reasons. I speak in absolute terms below for simplicity, but obviously, exceptions to each point readily exist.

Inevitability

The plot needs to reach a point of no return, one that forces the ramp up into the climax and the inevitable fallout after.

This makes sense. You don’t want your characters to be able to walk away. If they can, why don’t they? Something about their nature, or about the situation, ought to lock them on their course. You are telling a story about something, after all, and as you approach that key essence, you want the tension and the stakes to escalate in order to do it justice. How can you do that if the entire sequence was optional for the characters involved?

This ties back to the idea of every chapter making some change. The changes are working towards a goal, and after a while, there is no turning back from slamming into that goal for better or worse. If you haven’t reached that sense of “no return,” then maybe your chapters aren’t changing enough or else they aren’t converging.  Which leads to…..

Convergence

At some point, your character arcs need to align with the plot arc to converge on a common cause (or on various sides of that common cause). This is the essence of the climax, the “it all comes down to this” moment. More than that, all this work to build your characters was in order to deliver a payoff scene at the end where they succeed (or fail), learn (or don’t), and ultimately face the consequences at the end of their arc. If nothing in the character’s personal journey relates to the overall motion of the story, then instead of complementing each other, the two arcs miss.

What, then, was the point of all that building? How does your climax pay the reader for the struggles the character has faced? How can you possibly tie things together and justify their common inclusion in your story if they don’t mesh in any way? Without some sense of payoff for these subplots, the ending won’t be very satisfying. Which leads to…..

Satisfaction

Just as a story needs to start at the right place and make the right stops along the way, it has to end at the right place as well. When it does, the result is satisfying. A story leaves you with something, and every part of the story contributed to that resonance. What makes an ending satisfying? This is a broad topic on its own, and one I don’t claim any qualifications to answer in a satisfying way (see what I did there?).

For starters, you need to address the promises you laid out in the book. Close your arcs, address your themes, and resolve your conflicts (as appropriate). I suspect the really satisfying endings feel that way BECAUSE of the above attributes. Chapters moved the characters, the climax was part of an inevitable spiral, and the arcs converged for payoffs after the climax. If each chapter was an indispensable part of the journey of changes that brought about the ending, I expect it will hit with a lot more punch.

Final thoughts

If only there were a simple formula to create compelling stories, but alas, there is not. This is a topic that continues to fascinate me as I experiment with situations, twists, and complexities, only to find out after the fact what worked and what did not. In any case, the points above are quite helpful to keep in mind while plotting, and they have helped me spot opportunities for convergence and moments of change. It remains to be seen whether the end result of all this effort will reach the lofty goal of “satisfying.” In the meantime, I intend to just keep swimming.

Diagnosis vs Creation

I’ve just passed the one-year mark of my maiden voyage into the world of creative writing, and my perspective on the process has changed dramatically. A huge part of that evolution, obviously, is because I’ve cranked out about 160K words since then (and another ~250K in rewrites/revising). That is only part of what I’ve undertaken in the last year, however. I’ve also been an avid student of writing books, blogs and YouTube lectures almost every day for that year. A year’s worth of guerrilla education counts for something!

Some of the books covered good fundamentals, but for the most part every one of those resources sought to arm me with “tools” of the craft, to help me improve. Some were about pacing, some about characterization and voice, others about conflict, or tension, or any of a hundred other things. I built up quite an arsenal over all those months.

And I’ve been using all these tools almost entirely wrong the whole time.

As a new writer, I fell into the trap of taking some of this advice too literally… or at least I put it to use too mechanically, the engineer within me shining through. For instance, consider Swain’s mantra of “scene-sequel.” This is held by many writers to be an absolutely fundamental means of forming paragraphs for optimal effect.

So when I sat down to write, I actually mapped out my chapters as a sequence of alternating scenes and sequels in advance. I stressed about the places where there were two scenes back to back, or when a chapter ended on a scene instead of a sequel, etc.

Another writing tool in my box is to vary sentence length in order to set the pacing of the story, speeding it and slowing it as necessary. Once again, I outlined the places where my pacing needed to change and set to work with a knife to shorten every sentence in those sections. Then I sprinkled in extra words in the slow section to further the contrast.

I’m sure you are starting to see the problem. I’ll just use one more example.

Somewhere else I learned that you should employ all five senses in your description, so in every chapter I made sure to include at least two non-visual senses in my descriptions, wherever I could find an opening.

…and with all that in tow, writing became a mechanical chore.

Not only that, but the output still didn’t feel right. So what is the solution? Throw out all the rulebooks and trail-blaze? Apply the rules in some moderation? Suck it up because this is how real writing works?

I think I have the answer, and it is none of the above.  It turns out most of these rules and guidelines are really excellent tools for diagnosing issues in a scene, rather than creating the scene itself. I’ve come to think of them as the features of my debugger, rather than my compiler.

What do I mean by that?

What troubled me so much before was that I was thinking about all these guidelines and rules as if they would help me to generate a brand new story. I was trying to lay down words only after thinking through all the relevant rules that applied to the situation I was crafting. All these restrictions and signposts did not make the writing better. In fact, I am far more effective when I don’t worry about any of that and just rely on instinct.  Granted my instinct is informed by the learning I have done, but the rules are implicit there — they aren’t considered specifically while crafting. I’ve come to understand that this is how it should be.

When it is time to write, use your instincts, and ditch the bag of tools at the door.

The right time to grab the rulebook is AFTER the first round of writing is complete. When I am going back and re-reading something, I may well find it isn’t working, and that is when I can open up my bag of tools and start looking for discrepancies.

If an exciting scene feels awkward, I can start framing it in terms of the guidelines to help me spot what might be wrong. Maybe I am missing a sequel-paragraph, which is making it awkward. And why wasn’t this character moment powerful? Well, the build-up doesn’t evoke the character’s voice enough to stage the right emotions.

In this way, all these tools help in debugging the writing, where they failed in creating it.

 

Explaining the EPR Paradox

For anyone in their first year of quantum mechanics, the Einstein-Podolsky-Rosen (EPR) paradox is required study. Its significance comes from the fact that it marked the final confrontation between the classical way of viewing the world and the emerging quantum way. At its heart: is the universe deterministic, predictable, and classical? Or is it random, indeterministic, and quantum? Models describing particles with wave functions had emerged and been tested, and there was no question QM worked (and worked well). The question was, is QM fundamental? Or are all these probabilities and uncertainties the result of our particular model, which covered up a misunderstanding of deeper principles with probability descriptions?

Einstein, father of the beautifully deterministic worldview given to us in general relativity, refused to believe the universe could operate, at its lowest levels, with uncertainty. This is what he meant when he famously said, “God does not play dice with the universe.” The quote is often misused to suggest Einstein meant a literal god, but the fact that he often spoke pantheistically is well documented. In any event, he felt strongly that a deeper understanding of quantum principles might some day remove the uncertainty and show us what was really going on under the surface.

Along with Podolsky and Rosen, he proposed a paradox which highlighted some concerns he had in the early formulations of quantum theory.  Without going into the specifics, which Wikipedia can cover better than I, here is the important assertion raised by the paradox:

If we produce two particles in a twin state (entangled) and fire them off towards detectors, their spin is described by QM as being in a superposition of states, and is thus effectively undefined until measured. EPR suggested the spin might be random insofar as it was assigned randomly at the time of entanglement, but that while in flight, and prior to being detected, both particles must have (the same) determined (x, y, z) spin. It was not, fundamentally, uncertain.

Nobody could test this, so it was left as something to ponder, until Bell came along with an experiment (untestable at the time) which could set the record straight, now known as the Bell inequality. Here is what Bell figured out:

Produce your entangled particles, then set up two detectors, one to measure each particle. The detectors are set up to RANDOMLY measure only one axis each time, either x, y, or z.

If a particle has a determined spin, it can be described as (A, B, C), where A, B, and C are either U (up-spin) or D (down-spin) for the x, y, and z axes respectively. (U,U,D) would be a particle spinning up on the x and y axes, and down on the z axis.

So in Bell’s setup, each time we run the experiment, each detector reports either “UP” or “DOWN” for whatever it measured on the axis it randomly chose that time. The two detectors pick their axes at random and need not report which they used, so each time we run this experiment there are 9 possible combinations of axis choices behind the result we see:

(Detector 1 axis, Detector 2 axis):
(x, x), (x, y), (x, z),
(y, x), (y, y), (y, z),
(z, x), (z, y), (z, z)

In each of those 9 cases, we will just see UP or DOWN from each detector, not knowing which of those 9 specific combinations led to that result.

Quantum mechanics tells us that when the particle is measured, no matter what axis we choose, the result will be U 50% of the time, and D 50% of the time.  Therefore, half the time the two detectors will agree, half the time they will not, as you can see below:

D1: U, D2: U => Agree

D1: U, D2: D => Disagree

D1: D, D2: U => Disagree

D1: D, D2: D => Agree

The quantum case is easy.  If EPR were wrong, and we run Bell’s experiment, we should see the two detectors agree 50% of the time.

The classical case is a little longer, though not complicated.  Here is the point that will become the key: if we have 3-axis spin, such as (U,D,U), at least two of the axes will ALWAYS be the same.  Think about it.  You can only assign U or D to each axis, and you have 3 to fill, so no matter how you do it, at least 2 will match.  Keep this in mind because it is the foundation of Bell’s inequality and breakthrough.  Here are the 8 possible combinations for particle states, classically speaking:

(U,U,U), (D,D,D), (U,U,D), (U,D,U), (U,D,D), (D,U,U), (D,U,D), (D,D,U)

Let’s dig into this a bit. The first two I listed above will cause our detectors to ALWAYS agree, no matter what axis they pick. If the particle is in state (U,U,U), then it doesn’t matter if I look at x, y, or z… I am going to see a U, and so will the other detector (remember, for our simplified example entangled particles have the same state). This means that out of the 8 possible particle configurations, 2 give us 100% agreement, no matter which of the 9 axis combinations our detectors choose.

The other 6 cases, (U,U,D), (U,D,U), (U,D,D), (D,U,U), (D,U,D), (D,D,U), will agree 5/9ths of the time. How did I figure this out? We can see it visually. Every particle in these 6 cases has a majority spin and a minority spin, represented by the solid and open dots respectively in the diagram below (the three on the left are the particle hitting detector 1, and the three on the right are the particle hitting detector 2). I’ve drawn in the 5 combinations where the two detectors will agree, e.g., if detector 1 reads the first majority axis and detector 2 reads the first majority axis, they will agree, and I’ve drawn the top-most horizontal arrow to show this:

[Figure: epr-match — the majority (solid) and minority (open) spin axes for each detector, with arrows marking the 5 agreeing axis combinations]

Out of the 9 possible combinations of detector-axis-choice shown in the table further up, the 5 shown right here will match, and the 4 I didn’t draw will not.

So in total, we have 2 cases out of 8 where the detectors will agree 100% of the time, and 6 cases out of 8 where the detectors will agree 5/9ths of the time, or:

(2/8) × 1 + (6/8) × (5/9) = 1/4 + 5/12 = 2/3 ≈ 66%

And there we have it.  If QM is right, our detectors will spit out matching reports of UP and DOWN exactly 50% of the time.  If EPR is right, our detectors will spit out matching reports of UP and DOWN 66% of the time.
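A quick brute-force check of the classical number (a sketch of the simplified hidden-spin model described above, not of a real Bell test):

```python
# Brute-force check of the toy model above. In the EPR-style picture, both
# particles share the same hidden (x, y, z) spin assignment, and each detector
# independently picks one of the three axes at random.
from itertools import product

spins = list(product("UD", repeat=3))    # the 8 possible hidden assignments
axes = range(3)

agree = total = 0
for spin in spins:                       # hidden state shared by both particles
    for a1, a2 in product(axes, axes):   # the 9 detector-axis combinations
        total += 1
        agree += (spin[a1] == spin[a2])

print(agree, "/", total)   # 48 / 72 = 2/3: the hidden-variable model agrees ~66% of the time,
                           # versus the 50% expected in the post's simplified quantum treatment
```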

Later, the experiment was conducted, and the results agreed spectacularly with the quantum expectation. This means particles really don’t carry hidden, predetermined properties as they flit around; they exist in a true state of uncertainty.

String Theory (2007 paper)

This post comes from an independent study term paper I wrote in 2007 as an undergraduate, on the need for a new physical theory, the emergence of String Theory, an examination of the theory’s development, and an assessment of the future of string theory.

Introduction

The scientific process has a long-standing history of theoretical investigation, followed by experimental verification. Since the time of Newton, science has been expressed through mathematical representations of measurable events. This is a very comfortable and reasonable approach to science: the models we use are empirically known to work, so we know we can trust the predictions they make about reality. For the past 40 years, however, science has stood at the verge of a new threshold (or at least a new trend): some modern theories are so completely theoretical that there is no experimental evidence against which to test them. These theories are instead considered “valid” because their theoretical structure is in agreement with a number of conceptual ideas, such as elegance, unification, and symmetry. A serious question in modern science is how to treat these kinds of theories: how much commitment should we give them? How do we know when we are on to something and when we are going awry?

Super-symmetric string theory (string theory from here on) is a prime example of an entirely theoretical construct that has, in addition to devouring the careers of thousands of scientists over the last 45 years, gained increasing support despite the complete lack of any experimental verification. The appeal of the theory is not a secret: by allowing for subtle and creative modifications of our definition of particles, string theory stands poised to completely redefine our understanding of reality in a single swoop. It claims to be able to unify all of the forces of nature, as well as the particles, to explain all of the fundamental constants in nature, and to identify why the laws of physics we observe are the way they are. However, development of the theory has not come without its roadblocks. In fact, after decades of roadblocks, a reasonable question to consider is: might string theory be nothing more than a clever mathematical construction that actually has nothing to say about reality? Is the theory being kept alive simply because of its appeal as a potential theory of everything? These are not questions that have clear answers, so the best approach is probably to consider the potential string theory has to offer, and weigh it against the cost of accepting the theory. Before developing a discussion of string theory, it is necessary to take a step back and discuss modern physics as it sits today.

The State of Modern Physics

Modern physics refers to the set of principles and formulations that were refined and finalized during the twentieth century, including Einstein’s general relativity, the standard model of particle physics, and quantum mechanics. Many other developments emerged, and many older ideas were reinforced. In the mid-1900s, physics appeared to have almost finished its job of explaining how everything works. General relativity was confirmed beyond a doubt, and it provided an especially clean description of how gravity and acceleration affect space-time. Quantum mechanics continues to be one of the most accurate theories ever constructed: its alarming predictions have never been wrong, and have never failed to explain an observed phenomenon. Lastly, the standard model contains all of the experimentally observed particles, and completely details their properties and families. Essentially every prediction made by the standard model about the subatomic regime has been verified down to about 10^-12 meters. Despite the success of these three major pillars of modern physics, there are problems lurking beneath them all.

Despite the success of the standard model (for example), it cannot be a complete theory because it does not include gravity. It requires 19 constant parameters that describe the forces and particles in the model, but it offers no explanation as to why these parameters take the values they do – they are strictly experimentally determined values. In physics, constants are generally used to blanket over phenomena that we do not understand. A complete theory would take a single parameter, and all other ‘constants’ would be derived from basic principles. Furthermore, the model does not explain what any of the 61 particles it lists are… and any attempt we make creates problems. For example, if we decide to treat particles as solid balls, we quickly run into relativity issues: imagine exposing the ball to some force and causing it to move. If it were perfectly rigid, all parts of the ball would have to begin moving at once, even before the force reached all parts of the ball. This kind of faster-than-light communication is not allowed. Similar issues arise if we consider the ball to be soft or to contain substructure. The worst issues arise when we treat particles as points: quantum mechanics predicts that a point particle should be surrounded by a cloud of infinite energy arising from virtual particles. Additionally, if we consider a graviton colliding with any other massive particle, if the two are points then the gravitational force should instantly go to infinity as the distance separating them goes to zero. Lastly, in terms of the standard model, there are missing particles: there are no candidates for dark matter – a nonluminous material that is believed to exist in huge quantities in the galaxy. Also, there are no super-partner particles, which are believed to exist from symmetry arguments.

Quantum mechanics and general relativity, while alarmingly successful within their own domains, are completely incompatible with one another. The two theories have a different background dependence: in the limit as a distance ∆x goes to zero, general relativity requires space to be completely smooth, flat, and continuous. Quantum mechanics, however, disagrees. In the same limit, as ∆x→0, uncertainty dictates that the contour of space-time becomes violently frothing and discontinuous (a phenomenon called quantum foam). The two theories require that space-time behave differently, but space-time cannot be both smooth and discontinuous. Presently, there is no quantum treatment of gravity, and no convincing way to merge these two major theories. When it became clear that the problems lurking beneath physics were not easily solved, people began searching for anything that had the creativity necessary to offer a possible solution. String theory emerged as a candidate almost 40 years ago, and has gained tremendous support since.

String Theory Emerges

String theory has surfaced as a prominent contender to resolve the conflict between relativity and quantum mechanics, and to patch the gaps in the standard model. In addition to this feat, and part of the reason for the attraction of the theory, string theory claims to be able to unify all of the forces of nature, as well as all of the particles. The fundamental idea behind string theory is that particles are actually tiny vibrating filaments (strings) which are themselves fundamental. While they have a finite size, it is 20 orders of magnitude smaller than an atomic nucleus, on the order of the Planck length, and the strings are held at a tension of nearly 10^39 tons. The strings are free to vibrate along their length in a number of patterns (like musical notes). The pattern of vibration dictates the properties of the particle. For example, the higher the vibration, the more energy in the string – and from Einstein, the more energy in the string, the higher the mass of the particle it describes. Proceeding in a similar manner, the charge, spin, and strength of force (for force carriers) are determined from the vibration pattern the string undergoes. We find that one pattern matches the properties of a photon, another the gluon, and another the graviton.

The equations of string theory define how strings should interact and scatter, and they do so with a single parameter: the coupling constant. String theory as it is currently proposed requires only 2 parameters, one of which we expect to derive eventually: the first is the coupling constant, which represents the tension on the string. The second is the geometric configuration, taken as a background to the theory. String theorists believe that this background can be derived, although they have not been successful in doing so. Also, string theory predicts gravity in a way no other theory does, in that gravitation and general relativity emerge naturally from the theory. There is a required closed-string configuration in the theory that corresponds to a zero-mass, spin-2 graviton. Furthermore, the theory offers a reasonable explanation of why gravity is substantially weaker than the other forces.

String theory would not be of any value if it suffered from the same incompatibilities as the theories it is seeking to replace. Fortunately, it does not, and it offers several clever resolutions to the issues in modern physics. String theory resolves the issues with the standard model, for example, by replacing it: solving for the allowed resonant patterns of the strings gives rise to all of the particles in existence, and their theoretical properties. This includes the graviton and a number of candidates for dark matter. String theory also explains the relative strengths of the forces, namely why gravity is so weak. There are also a number of interesting problems that string theory may provide insight into, including why there are three particle families, and why and how they decay.

Some of the issues between general relativity and quantum mechanics are corrected simply by virtue of the fact that strings are spatially extended objects. Point particles can penetrate to infinitesimally small distances, so if a mismatch between general relativity and quantum mechanics arises over those small distances, the theories have to deal with it. String theory resolves this issue in a clever (albeit simple) way: because strings are extended objects, it becomes meaningless to discuss distances that are smaller than they are. That is, distance scales that are smaller than the strings cannot affect the physics or interactions of strings or of anything made of strings. In this way, strings avoid the inconsistencies that arise in the theory at extremely small distances (smaller than the Planck length) by saying the inconsistency arises from our misunderstanding of reality, rather than from the theories themselves. More specifically, string interactions do not occur at points; because strings are extended objects, observers in different frames of reference cannot agree on an exact location where an interaction took place (they could if the interaction were between point particles). In a sense, the point of interaction is smeared out over space-time in such a way that phenomena at points in space (quantum foam) do not come into play.

Getting Off the Ground

The first hint of what would one day become string theory came in 1968, while Gabriele Veneziano was studying the strong nuclear force at CERN. Completely by chance, he observed that a function called Euler’s beta-function seemed to describe a number of the strong force’s properties. Nobody understood why this would be, and it was left a mystery. Then in 1970, Nielsen & Nambu realized that if particles were treated like strings, the resulting vibration (governed by the beta-function) described the strong force exactly. This earliest form of string theory was called Bosonic string theory, and it suffered from a number of problems. The most prominent is that it did not include any fermions; additionally, it predicted a number of particles we do not see, including particles with imaginary mass called tachyons. Furthermore, it produced negative probabilities with quantum mechanics (which does not make any sense, as the domain of probability is from 0 to 1). In 1971, Pierre Ramond of the University of Florida started incorporating super-symmetry; by 1973, string theory was modified to include super-symmetry, and became what is called super-string theory. And then in 1974, Schwarz & Scherk realized that one of the extra particles from the theory matched the expected configuration of the graviton, meaning string theory might be a doorway to quantum gravity. These results and others got very little attention from anyone outside the scientific community.

After almost half a decade of work, theorists finally resolved the negative probabilities that were spilling out of even basic quantum mechanics calculations. They realized that by adding degrees of freedom to the vibrating strings, the negative probabilities canceled out. The community shifted to accept that there may be more “hidden” dimensions in our universe. Despite the growing potential to solve so many problems, the failure to show any real signs of delivering on its promise kept many scientists away during the early years of string theory. In 1984, Green and Schwarz announced that string theory could explain all four forces of nature, as well as each of the particles of the standard model. This was the first time the suspicion that string theory actually did unify nature was confirmed, and the announcement created the explosion they had expected ten years before, when they first announced the presence of the graviton in string theory. Almost overnight, everybody dropped what they were doing to investigate string theory. The years 1984 to 1986 are known as the first string revolution, and are marked by multiple daily publications from scientists all over the world; it turned out string theory naturally predicted many aspects of the standard model that had taken years to derive independently.

A Multiplicity of String Theories

While string theory has many benefits to offer, accepting the theory comes at a very high price. In its present form, the theory requires our universe to have ten space dimensions and one time dimension – eleven dimensions in total. It turned out the strings needed at least nine space dimensions in which to vibrate in order to produce valid results: three extended dimensions, six or seven curled-up “hidden” dimensions, and one time dimension. This is very odd, because we are only aware of three independent ‘directions’ of motion, so where are the rest the theory requires? There are still a number of extra particles not accounted for, such as the super-partner particles, which should have the same mass as their partners. It also allows for what are called fractionally charged particles, another phenomenon that we never see in reality. Lastly, string theory has a string configuration that seems to correspond to a fifth force, which in the absence of any such force is very troubling.

While super-symmetry resolved many of the issues with the Bosonic string theory, it introduced a number of issues of its own.  Particularly, there is more than one way symmetry can be worked into the theory: depending on how one groups what are called symmetry generators, and on how one defines the “boundary conditions”, five different theories emerge.  They each share the common attributes of a string theory, but on the specifics they differ substantially.  Potential of the theory aside, why would there be multiple flavors of string theory?  How does it make sense for a grand unifying theory to not exist alone?  It was not clear which, if any, were correct – and without the ability to solve specific calculations in each theory, there was no clear way to isolate one over another.  The five theories are called: Type I, Type IIA, Type IIB, Heterotic E8xE8, and Heterotic O(32).

Each flavor has several basic features in common: they each require 10 dimensions of space-time (and accept the same geometric configurations). They each produce a set of massless states including the spin-2 graviton, and many massive states on the order of 10^19 GeV. There are a number of differences, however: in Type IIA, strings are permitted to vibrate in only one direction (clockwise or counter-clockwise), and they all vibrate in that one direction. Type IIB allows strings to vibrate in both a clockwise direction and a counter-clockwise direction. Type I looks a lot like Type IIA except that it allows for open-string configurations. The Heterotic theories both have clockwise vibrations that look like Type IIA/B, but also counter-clockwise vibrations that are akin to Bosonic string theory. In Bosonic string theory, there are 26 dimensions of freedom, 16 of which are curled into one of two donut shapes – depending on which of the two donuts you use, Heterotic O(32) or Heterotic E8xE8 emerges. There are some specific terms used to define a string theory, including the “world-sheet”, which describes the surface swept out over time by a string in motion. Following is a description of what exactly changes from theory to theory.

Type II: The world-sheet in this formulation is a free-field theory containing 8 scalar fields (corresponding to the 8 transverse directions in a 9-dimensional space) and 8 Majorana fermions. The scalar fields satisfy periodic boundary conditions, while the fermions can be either periodic or anti-periodic (clockwise or counter-clockwise); these are called Ramond (R) and Neveu-Schwarz (NS) boundary conditions respectively. The definition of the ground state gives us a choice of boundary condition, using either one or the other; this produces the two subsets of Type II theory: Type IIB (chiral N=2) and Type IIA (non-chiral N=2).

Heterotic: The world-sheet of this formulation consists of 8 scalar fields, 8 right-moving Majorana fermions, and 32 left-moving Majorana fermions. The major difference between the Heterotic theories and the Type II theories is that in Heterotic theories, Bosonic states arise from an NS boundary condition on the right-moving fermions and fermionic states arise from an R boundary condition on the right-moving fermions. This leaves two choices for the left-moving boundary conditions, which gives rise to two more theories: SO(32) requires that all 32 left-moving states be either periodic or anti-periodic – not a combination of both. Alternately, E8xE8 groups the 32 states into two sets of 16, and allows each group to have either boundary condition, allowing for a combination of periodic and anti-periodic.

Type I: The world-sheet in this formulation is identical to that of Type IIB except that it has a parity transformation invariance, and it incorporates open-string configurations.

Following the first super-string revolution, the details of these 5 theories were investigated as thoroughly as possible – but as before, the mathematics quickly became insurmountable. Approximate equations gave each theory the appearance of describing a different universe… so which one was ours, and to whom did the others belong? Further complicating the math of string theory was the background dependence of the theory: the physics predicted by each theory depends on the shape of the dimensions that the strings occupy, specifically the compacted extra dimensions. This is because it is strictly string vibrations that determine particle properties, and vibration patterns are heavily influenced by the surface they reside on. There are several classes of shapes that produce the desired physics, including Calabi-Yau manifolds. Studies into the relationship between string vibrations and the shape of the geometry began to offer insight into the standard model: geometry explained why there were families of particles. Sometimes a geometry can have something like a “hole” in some of its dimensions, allowing strings to wrap through the hole. The number of holes (and therefore the number of unique string orientations on the surface) creates the families of lowest-energy particles we see. A fair amount can be deduced from knowing the general properties of the geometry, but if we knew the exact shape, it would tell us a great deal more. Once again, the mathematics makes any sensible derivation impossible.

What exactly is the nature of the mathematics that prevents us from making headway? The trouble is that string theory is analyzed using perturbation theory. With perturbation theory, you essentially approximate and get a rough answer – then you refine your answer by including more and more details that were initially omitted. This assumes the initial estimate is good, and that the overlooked details converge to zero overall. This is sometimes the case: for example, if you look at the orbit of the Earth around the Sun, a first approximation would be to ignore all other influences. Then, to refine the approximation, you could account for the other planets, the debris in the path, the electromagnetic forces at work, etc. Each of these modifications is almost negligible. However, there are configurations where perturbation does not work well. Imagine a gravitational system of 3 large objects. In a first-order approximation you might ignore one entirely to get the approximate effect of just the other two. When you then add in the third, it will produce a substantial change, not a refinement. These situations are analogous to the situation in string theory.

All of the equations explicitly allow that at a point of interaction between two strings, an arbitrary number of string / anti-string pairs may appear and annihilate before the resulting strings scatter off.  An interaction can produce a virtual pair which then annihilate, and may produce an additional virtual pair, etc., until the final strings emerge and scatter.  An exact calculation requires us to look at the contribution to the final solution of a string interaction plus a string interaction with 1 virtual loop, plus a string interaction with 2,3,4…infinite loops.  Like in the planet example, if we assume each additional loop contributes less to the overall solution, we can safely ignore the higher numbers.  However, if it is not a decreasing contribution – or worse if it is an increasing contribution – then the math, by its very nature, will be useless.
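Here is a toy numerical illustration of that point (a sketch in which the n-loop term is modeled as simply contributing another power of an expansion parameter c, the coupling constant discussed below; the real string expansion is of course far more involved):

```python
# Toy model of a perturbative series: pretend the n-loop term scales like c**n.
# When c < 1, the partial sums settle down and truncating the series is safe;
# when c > 1, each "correction" is larger than the last and truncation is meaningless.
def partial_sums(c, terms=10):
    total, sums = 0.0, []
    for n in range(terms):
        total += c ** n
        sums.append(round(total, 3))
    return sums

print(partial_sums(0.3))   # converges quickly toward 1 / (1 - 0.3) ≈ 1.429
print(partial_sums(1.5))   # grows without bound; no finite truncation is meaningful
```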

The coupling constant in string theory defines the likelihood of strings splitting into virtual pairs during an interaction. This constant has some freedom in what values it might take, and we don’t know exactly what value to use. Generally, when the coupling constant c is less than 1, perturbation works (increasing numbers of virtual loops contribute less and less to the final solution). However, if c > 1 the opposite is true, and perturbation fails dismally. Currently the string equations tell us only that the coupling constant satisfies the relation 0 · c = 0. Of course this is useless, because c can take any value, and there is no way to tell if it is greater than or less than one. The initial burst of excitement surrounding string theory began to wane, and the momentum and support evaporated. It was too difficult to make any serious progress because of these mathematical limitations. The first string revolution came to a dead halt by 1986, and nearly everybody abandoned string theory once more.

The Second Super-string Revolution

String theory sat on the back-burner for another decade, until 1995, when Edward Witten initiated what is now called the second super-string revolution. Witten identified a set of dualities that essentially allowed him to transform one string theory into another. He suggested that all of the string theories might in fact be reduced forms of a larger 11-dimensional theory he called M-Theory. Additionally, he demonstrated for the first time a non-perturbative approach to solving some string theory calculations. Witten found that the weakly-coupled description of any of the 5 string theories (c < 1, where perturbation works) has a duality with the strongly-coupled description of another string theory (c > 1, where perturbation does not work). This means that when you need to perform a calculation that is beyond the range of perturbation, you transform into a different string theory’s weakly-coupled description and perform the calculation there. This was a stunning and brilliant breakthrough, and it caused a storm much like the one a decade earlier. Physicists dropped what they were doing to reconsider string theory, with the hope of uncovering the secrets of the mysterious M-Theory.

[Figure: ST_mtheory]

Depiction of the relationship between the 5 string theories, and the M-Theory.
Only a few duality lines are drawn; many more exist.

M-Theory has 1-, 2-, 3-, and up to 9-dimensional fundamental objects, not just strings, but the energies of the various objects ensure we are only likely to encounter strings. As the coupling constant gets larger, a “string” seems to become either an extruded string or a tube (depending on the theory), adding an extra dimension to the mix either way. This dimension is one into which a string can extend, but not vibrate, so it does not alter the expected predictions about force unification, etc. M-Theory in a reduced-energy form describes an 11-dimensional super-gravity theory. Witten made his connections between the different theories by highlighting a number of dualities which use symmetry or transformation arguments to make different subsets of string theory equivalent. Each string theory is parameterized by a moduli space, which contains the string coupling constant, the shape and size of the space-time dimensions, and the background fields. Each moduli space has a region where the string coupling is low, and a region where it is high. Dualities often link theories in a chain, rather than just connecting two; this reinforces the idea that they are all interconnected. It also provides the added benefit of reducing the non-uniqueness of the theories.

[Figure: ST_mtrel]

Relationship between M-Theory and other major theories

One duality that connects the two Heterotic string theories, as well as the two Type II theories, is a duality of distances. This duality comes from the winding mode of a string: the total energy of a string is the sum of its vibration energy and its “winding” energy, which depends on the number of times the string is wound around a curled-up piece of space-time. Notice this wound configuration is meaningless for point particles; strings, being extended objects, can actually form a closed loop around a curled-up piece of space. If we imagine such a string wrapped around a cylindrical piece of space that is shrinking, we can illustrate this duality: as the space loop shrinks, the string’s winding energy decreases, but its vibrational energy increases. Likewise, if the piece of space is expanding, the string’s vibrational energy decreases while its winding energy increases. It is only the total energy of a string that identifies its properties and interactions, not what kind of energy it is; therefore the physics defined by a string of vibration energy X and winding energy Y acts identically to a string with vibration energy Y and winding energy X. As these two energies are related to the radius of the curled-up dimension, we find that physics on a shrinking dimension of radius R is exactly equivalent to physics on a growing dimension of radius 1/R. This in effect eliminates the meaning of sub-Planck distances: if an extra dimension were to shrink with a wound string around it, past a certain point it would be as if the dimension “bounced” and was now growing. The boundary conditions and physics described on Type IIA string theory with compactification radius R, for example, are exactly identical to those described on Type IIB string theory if we take the radius as 1/R.
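In the standard textbook form (a sketch using the usual notation, which the paper above does not introduce: n is the momentum/vibration number, w the winding number, R the radius of the curled-up dimension, and α′ sets the string scale), the mass contribution of a closed string on a circle looks like:

$$ M^2 \;\sim\; \left(\frac{n}{R}\right)^2 \;+\; \left(\frac{w R}{\alpha'}\right)^2 \;+\; \ldots $$

which is left unchanged by swapping R → α′/R together with n ↔ w, the R ↔ 1/R equivalence described above.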

Another duality Witten identified was a T-Duality, which maps the weakly-coupled region of one theory to the weakly-coupled region of another theory.  This does not improve our mathematics, but it provides insight that the theories are interconnected in several ways.  An example of this kind of connection is the Type IIB theory in 10 dimensions, which is self-dual.  Also, Type IIA on a circle of radius R has a T-duality to Type IIB on a circle of radius R^-1.  Witten provided strong evidence for his dualities by making use of “non-renormalized” quantities that emerge from symmetry but are independent of the coupling; these are referred to as BPS states.

M-Theory is considered by some to be the ultimate unifying theory of everything, and by others to be a mathematical curiosity with little relevance to the real world.  Regardless, the tools presently do not exist to explore the theory in detail, so its exact nature remains largely unknown.  While Witten and others are sure it exists, nobody has yet written down the equations for it.  One of the keys that makes M-Theory special is that it is Lorentz invariant, which implies a compatibility with general relativity.  Other unifying theories exist if we start at a different string theory and use similar logic, including F-Theory, but these other theories suffer the shortcoming of not being Lorentz invariant.  The big question in the string theory community is what it will take to successfully merge the 5 theories into M-Theory.  Of course no answer exists yet, but there are a few areas that would make good starting points.  All of the string theories, and M-Theory in particular, are background dependent.  It seems only reasonable that a background-independent formulation might shed light on which of the many possible geometries we should choose for our hidden dimensions.  Others have thought of this, but generally the math gets even more complicated when we try to form background-independent theories.  What comes next is anyone’s guess.

String Theory Stumbles

Having looked in some detail at the mathematical and physical structure of string theory, it now makes sense to take a step back and consider what string theory really means as a scientific theory.  Many people do not like the theory, and are concerned that it is completely contrived – filled with arbitrary tricks and turns to make it work.  The fact that string theory is presently untestable certainly doesn’t help the situation, but is it fair to dismiss the theory purely on the grounds that it cannot be tested?  There is no doubt string theory represents a new kind of science, where predictions about reality can exist completely on their own – without ever being verified.  But without the limitation of checking against reality, it might be too easy for theorists to take the theory too far.  The issue at hand is whether string theory has been contrived or otherwise deliberately engineered, or whether the revisions and changes to our understanding are due to our ignorance of the nature of reality.

Every theory in science requires some ‘reverse-engineering’.  Einstein’s general relativity, and even Kepler’s planetary motion, suffered from some serious mistakes in their days of conception – changes were made to make the theories reflect observation.  That alone is not an indication that a theoretical construction is invalid.  The first “tweak” applied to string theory was in 1974, when Schwarz & Scherk were considering the extra particles the Bosonic string theory produced.  They realized that if they played with the coupling constant, they could reverse-engineer its value to produce a graviton – which is what they were looking for.  Around this same time, theorists decided to incorporate super-symmetry into the Bosonic string theory which, as we have discussed, suffered from a large number of issues.  While this step seems perfectly reasonable, we can’t ignore the fact that the attempt to merge string theory and super-symmetry was only taken in an attempt to “fix” the many problems it had previously.  Even after these two modifications, string theory began to predict negative probabilities, which did not make sense.  Researchers found that if they gave the strings more dimensions of freedom in which to vibrate, they could eliminate the negative probabilities.  So again, with no fundamental reason for doing so, they reverse-engineered the number of dimensions the universe must have for the probabilities to work.  The questionable development of the theory did not end there; when string theory hit a dead end because of the multiplicity of theories (5 flavors), the parameters of the theory were tweaked further until Edward Witten found that adding another dimension, bringing the total to 11, had the potential to unify the flavors and fix some outstanding issues.  He did not develop a proof to show why it should be 11; rather, he was trying to find a relationship between what is called “Type IIA” string theory and “11D Supergravity”.  When he found a relationship, he used it as justification for moving to 11D in string theory as well.  Again, the logic seems perfectly sound, but it represents another change made to keep the theory from drowning under its own errors.

Furthermore, while string theory presents a glamorous façade of grand unification, it actually has very little to tell us right now.  The equations of the theories are so complicated that nobody today knows how to calculate so much as a simple interaction between two strings (again a result of the limitations of perturbation which, while lessened by Witten’s discoveries, are still in place).  Despite almost 40 years of effort, and thousands of careers, string theory has not yet provided any of the answers its pursuers hope to find.  Unfortunately, the magnitude of error we acquire when doing approximate calculations is way too large to check any of the 19 constants against the prediction.  While the theoretical ability to derive them exists, we can’t actually check to see if the equations give us the experimental values.  It turns out the exact equations are required to solve anything that would definitively show whether the equations are correct or not – and we do not have access to the exact equations.  This prevents us from either eliminating or verifying any of the 5 theories.  Furthermore, nobody has been able to write down the form of the equations for the so-called M-Theory, making it more of a wishfully-inspired theory than a well-grounded formulation.

This is a difficult issue to rule on; all theories, as indicated, require a level of reverse engineering – that is how we learn about nature.  We know where we are trying to get, so we use that as a hint to derive the path to get there.  It is hard to tell when this natural process is underway, and when scientists are allowing any modification required just to get a theory to work.  The magnitude of changes string theorists are requiring is very large, but in fairness it is not substantially more abstract than the changes quantum mechanics has required of our understanding.  It turns out there is a school of reasoning that physicists are using to decide if their modifications are acceptable or not, and that has to do with the conceptual ideas about nature that modern scientists agree upon.

Reconsidering the Basis for Theories

Scientists tend to call upon conceptual understandings of “how reality should be” when considering physical theories – these ideas have not been proven, and may not even have any fundamental basis, but they make sense to us and so we try to work them into everything we do.  A prime example of this is conservation of energy.  Nowhere in physics is the conservation of energy written down as an axiom, but we observe it to be true, and on many levels it just makes sense.  Another example of a conceptual idea about reality is symmetry.  String theory is firmly grounded in symmetry, and many of the modifications to the theory have been made to respect one form of symmetry or another.  Symmetry describes how systems that differ only by some transformation evolve in the same way.  For example, if I set up two identical systems, and then translate one of them 1 meter to the right, I expect them to still evolve the same way.  1 meter to my right is the same as right in front of me; this is an example of translational symmetry.  There are rotational symmetries, parity symmetries, time symmetries, gauge symmetries, and dozens more.  There is absolutely no reason why symmetry must hold, but it is something that we put a lot of faith in, and we will select theories based on their adherence to symmetry.

A third example is the idea of mathematical elegance.  This can sometimes take on an almost religious meaning, but it basically describes how concisely and cleverly formulated a theoretical model is.  Scientists heavily favor elegance because it seems consistent with the idea that nature is well constructed (something many of us believe), and should therefore be described in a (mathematically) clean way.  Relationships should emerge naturally, and principles should fit together like clockwork.  This kind of thinking has played a huge role in physics; Einstein himself was completely convinced his general relativity was correct, purely on the grounds that its formulation is so beautiful.  Why do people put so much weight in these kinds of beliefs?  That is a very difficult question to answer, but at the end of the day we are all humans with feelings and outlooks on reality – we need to satisfy the human element when we work with things like physics.  The human mind is either gifted or condemned (depending on how you look at it) by its need to internalize things, and because of this, ideas that seem well organized to our brain (such as elegance) tend to settle better.

It is not easy to tell whether some of these ideas appeal to us only because we are human, or whether they appeal to us because they are foundational to nature, and we are built from nature as well.  We can, however, answer the question: are the facts we use at least reasonable, even if not proven?  Returning to the context at hand, are the various revisions made to string theory on the basis of these kinds of ideas revisions that we can trust?  I think it is safe to say yes, they are very reasonable.  String theory is highly promoted because it is such an elegant formulation of reality.  In many ways it is very simple, and almost beautiful, to think of the universe as “akin to a cosmic symphony” (The Elegant Universe, 146), with each string playing its own notes and interacting accordingly.  Things like extra dimensions have their own subtlety that presents a certain appeal to the imagination.  And the idea that one theory has all of modern physics bundled into it seems a little too perfect to be wrong, regardless of the means used to develop the ideas.  Whatever conclusion we draw here, it is with the understanding that only through experimental verification can we ever truly say what is right and wrong.

Verification of String Theory

(Note: This section contains an error.  See my post on calculating N-Spheres for the corrected math)

There is no doubt the history of string theory has not been a smooth one.  Despite the decades of silence, the seemingly insurmountable mathematical issues, and the various revisions to “make the theory work”, it is still a widely adopted theory.  The next question is whether we can, or ever will be able to, verify the theory experimentally.  The sizes of the strings make any direct observation absolutely impossible.  The approximate equations make any prediction and test of a property of a particle impossible.  Instead, modern experiments will look for other attributes that would either speak for or against the likelihood of the theory, although not prove it one way or the other.  If any such evidence is going to arise, it will be at CERN, which is scheduled to come online later this year (so we shouldn’t expect it until next summer, I am sure…).

One example of an experimental piece of evidence that could be seen is the presence of extra dimensions.  While this would certainly not indicate string theory was correct, it would be a step in that direction.  Considering a Newtonian gravitational system, everyone is familiar with the inverse relationship between the strength of the force and the square of the distance.  The R^2 term is an implicit statement that there are three physical dimensions in the universe (the gravitational flux divides over the surface of a sphere, which grows as R^2):

[Equation: Newton’s gravitational force law, with the flux divided over the surface area S of a sphere.]

Notice that in the above equation, S is the surface area of a sphere in three spatial dimensions (called a 2-sphere).  If string theory is correct and there are six additional dimensions, physicists expect to observe a deviation from the R^-2 drop-off of gravity over very small distances (distances on the size scale of the extra dimensions).  If we calculate the flux through a 9-Sphere (the surface around a 10-dimensional sphere), we find:

[Equation: the corresponding force law, with the flux divided over the surface of a 9-Sphere.]

This calculation uses the following pattern to find the surface of an N-Sphere:

[Equation: the iterated-integral pattern for the surface of an N-Sphere.]
The first integral is the surface of a 0-Sphere, then a 1-Sphere, a 2-Sphere, and a 3-Sphere.  If string theory is correct, we should expect to see the force of gravity obey an R^-9 relationship over very small distances.  So far, the inverse-square relationship has been confirmed down to about one-tenth of a centimeter.  The size scale needed to observe a variation would depend on the size of the curled-up dimensions, which could be as large as a millimeter, but as small as the Planck length.  If the hidden dimensions are much closer to a millimeter, there is a possibility we will observe an inverse-square violation in upcoming years at CERN.  When particles collide at full energy, we may observe a unique phenomenon: the particles will collapse into a minuscule black hole, which will then evaporate.  To form a black hole, the strength of gravity would need to be substantially stronger than it is over long distances.  However, if the dimensions are smaller, or even close to the Planck length, we will never be able to probe small enough to see their effect on gravity.
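For anyone who wants to play with the scaling themselves, here is a small Python sketch (my own illustration, not part of the original post, and independent of the correction mentioned in the note above).  It uses the standard closed-form surface area of an N-Sphere and the observation that spreading the gravitational flux over that surface fixes the force fall-off for a chosen number of spatial dimensions:

```python
from math import pi, gamma

def n_sphere_surface_area(n, R=1.0):
    """Surface area of an n-sphere of radius R (the boundary of an
    (n+1)-dimensional ball): S_n(R) = 2 * pi**((n+1)/2) / gamma((n+1)/2) * R**n."""
    return 2 * pi ** ((n + 1) / 2) / gamma((n + 1) / 2) * R ** n

def gravity_falloff_exponent(spatial_dims):
    """The flux from a point source spreads over a (spatial_dims - 1)-sphere,
    so the force falls off as 1 / R**(spatial_dims - 1)."""
    return spatial_dims - 1

# Familiar case: 3 spatial dimensions -> flux through a 2-sphere -> inverse-square law.
print(n_sphere_surface_area(2))       # 4*pi ~= 12.566, surface area of the unit 2-sphere
print(gravity_falloff_exponent(3))    # 2, i.e. F ~ 1/R**2
# Plug a larger dimension count into gravity_falloff_exponent() to see the steeper
# short-distance drop-off expected once the curled-up dimensions become relevant.
```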

In addition to the extra dimensions, string theory predicts super-partner particles and fractionally charged particles, neither of which fit in the standard model.  It is possible some of these particles may be created in collisions when the LHC at CERN comes online.  The LHC is capable of producing 14 TeV collisions, which is much more energy than anything possible today, and may be enough to reveal some of these predicted particles.  If we were to see a fractionally charged particle, or a super-partner, it would be a good argument in favor of string theory.  Unfortunately, many of these particles are expected to be very heavy, and may even be outside the range of the LHC’s energy.  Experiments will no doubt search for the presence of miniature black holes and super-partners, but to bluntly answer the question posed at the beginning of this section, we will probably never be able to verify string theory directly.  The strings and energies are way too far out of range.

The only prediction made using string theory that has been accepted by the scientific community is in regard to the entropy of black holes.  A major problem in black hole physics that eluded physicists for nearly 25 years is: what is the source of their enormous entropy?  A black hole is entirely defined by its mass, its spin, and the force charges it carries, so what is there to be in a state of disorder?  String theory provided a framework that was exploited in the mid-1990s to explain how black holes acquire entropy.  No conventional theory has offered an explanation.  Again, this by no means says string theory is true, but it does represent a vote of confidence that the theory must have something right.

The State of String Theory

The topics presented above regarding the state of string theory are still hotly debated, so there is no settled answer as to where one thing or another falls.  I can, however, provide my personal opinions.  Despite the total lack of evidence and the slow (even engineered) growth of the theory, there is a strong sense of totality about what it represents.  The amount it has to offer, and the amount of complex physics that emerges naturally from the theory, is really compelling; the fact that a single parameter can fully qualify our reality suggests to me that string theory is either correct, or very close to a real fundamental theory of everything.  I also believe that it is not a problem if the theory allows for an infinite number of permutations (variations in the geometry), as long as it provides an explanation for why a particular geometry should emerge instead of others (probably something regarding a lowest-energy configuration).  Moving forward in the theory will no doubt require identifying the nature of M-Theory, and possibly inventing some new mathematics in the process.  The clever tricks presented by Witten help in some instances, but it seems clear that the perturbative approach needs to be replaced entirely if string theory is to really break free.  Just as the second super-string revolution followed ten years after the first, we are now, 10 years later still, due for our third super-string revolution, which I believe will need to do exactly this.

String theory is of a particularly complex nature, and we stumbled across it accidentally.  It is unsurprising that a lot of our perspectives about reality should be found in error, when we already know our two most successful physical theories are incorrect (or at least incompatible), and a number of phenomena remain unexplained.  To assume we have a valid idea about the nature of reality at this stage in the game seems unjustifiably arrogant to me.  If nothing else, string theory has been an inspiring view into the possibilities our universe has to offer.  The success or failure of string theory, I believe, will set the stage for how people treat theories of this purely theoretical nature in the future.  If string theory comes to proper fruition, it will be a monumental achievement for humanity; we will be able to derive, from basic principles, why everything we see is the way it is: why there are three large dimensions, and why particles carry a certain charge and have a particular mass (and are not slightly heavier or lighter).  We will be able to explain exactly why each force has the strength it does, and why it couldn’t be any other way.  It would be like finding the blueprint to nature, and it would open up all kinds of doors, both intellectual and technological.  The next several years, once CERN comes online, are no doubt going to be exciting for the physics community.  I personally hope we get lucky, and find evidence that there are in fact extra hidden dimensions, and that the full richness of nature has not yet been fully revealed.

References

Excepting the conclusion and the work on gravitation in higher dimensions, this paper is a compiled summary of materials from the following sources:

DeWitt, Richard.  Worldviews.  Malden, MA: Blackwell Publishing Ltd, 2004.

Greene, Brian. The Elegant Universe.  New York: W. W. Norton & Company, 1999.

Greene, Brian. The Fabric of the Cosmos. 1st ed. New York: Random House, 2004.

Sen, Ashoke. “An Introduction to Non-Perturbative String Theory.” Mehta Research Institute of Mathematics and Mathematical Physics, 12 Feb 1998.

Woit, Peter. Not Even Wrong. New York: Basic Books, 2006.

A Brief History of String Theory. (Provided by Dr. Beal; author unknown)

Davies, Paul. Cosmic Jackpot. Houghton Mifflin, 2007.

On Aliens…

Recently, UFOs and aliens have made it into the news again and again.  First, Stephen Hawking declared that if we encounter aliens, they will likely be hostile and eager to wipe us aside and absorb the Earth’s resources.  As recently as this week, a handful of ex-military personnel gave testimony that UFOs have been sighted at various United States and Russian nuclear facilities.  But of course UFO and alien-encounter stories reach back to the ’60s and earlier.  Hollywood movies and mid-western tales aside, what would it really mean for humanity if we encountered non-terrestrial lifeforms?  What are the odds that we already have?


For the sake of this discussion, I am only considering aliens that originated at one time from another planet and then arrive via spaceship (as opposed to various abstract concepts of freely-existing life).  I am also only considering those aliens that are of a construction “similar” to humans, meaning some kind of body that houses their homeostatic processes and consciousness.

What might they want with us?

Hawking suggests the Earth’s resources would entice aliens.  Many agree that a violent takeover may be the only feasible ending to a real first contact.  I personally disagree.  What are the reasons any species spreads out?  It certainly could be for lack of resources, possibly due to population growth.  It could be for conquest, to spread religion or some other system of control.  Another possibility is simply that a species may expand in the name of exploration.  Are any of these motivations favored by reason?

Extra Terrestrial Scientists

One thing I think we can be sure of is that any aliens that can reach us are scientists.  In order to build something like a spaceship, tremendous accomplishments in learning must have taken place, and this requires creatures with a certain curiosity, and a capability for learning and rationality.  Without a means of logic, there is no way to build concepts on top of one another and progress from the specific to the general.  Without learning and curiosity, there is no way to advance knowledge towards a goal.  The existence of a space ship would instantly establish that the alien lifeforms were scientists of a sort.

Consider what is involved in the simple task of remaining alive during flight.  This requires an understanding of one’s required atmospheric composition (and temperature and pressure), one’s tolerance to acceleration, and one’s susceptibility to atrophy, radiation, and weightlessness.  Even in principle, these concepts are scientific in nature, and the process of acquiring knowledge about them and re-applying it at a later time requires a scientific method.  It is very unlikely that any creature that developed on a planet’s surface, and is therefore subject to the general concepts of evolution (and for this reason in need of an artificial vessel for travel in the first place), could survive a vacuum.  Just to stay alive, these aliens would need to have an intimate understanding of their own bodies, and furthermore possess technology capable of replicating their environment.  Extraordinary achievements in materials science, engineering, architecture, navigation, and propulsion would be required, to name a few.  Very likely, these categories include such physical concepts as relativity and chemistry.  The fundamental principles of nature and mathematics are necessarily what they are, and they are entirely species-agnostic (that is to say, the human identification of them in no way indicates they are applicable only to humans).  For instance, the Pythagorean theorem is a property of Euclidean space; it does not matter if it is a human looking at it or something else entirely, and it does not matter if the language is mathematics or something we have never even considered; the property is what it is, and must be analogous across all languages that understand it.  These species-agnostic concepts provide a common denominator, and let us draw certain conclusions about intelligent aliens.

No matter how different alien lifeforms may be, and no matter what extra or deficient capabilities they possess, we can conclude a common grounding in the basic laws of reason, even if those laws are expressed and understood entirely differently, because only this basis enables the scientific methodology required to explain their advancement.

Intentions

So we will necessarily be dealing with a species that is intelligent (in some sense), rational (in some sense), and scientific (in some sense).  Of course none of this yet requires they be conscious, sentient, or moral.  So what might they want with us?

According to Hawking, aliens that reach Earth seek it out for its natural resources, intent on scraping it dry and moving on.  This presupposes that aliens are spreading for reasons of dwindling resources, and it further suggests that these aliens are able to survive on this diet of worlds.  This seems shockingly unrealistic to me.  Even allowing for any incredible achievement of technology possible within physics, it remains undeniable that a journey from another planet to Earth would require between decades and millennia to complete.

This fact seems to narrow down the intentions of the travelers somewhat.  Does it seem practical that a species in need of resources would look to outer space?  The search for planets matching their requirements (assuming, as Hawking does, that their requirements are Earth-like planets) would take centuries upon centuries for each one, requiring the ship and its inhabitants to survive this massive interim without any infusion of resources.  This seems insurmountable, and far less likely than the alternative: a vessel capable of traveling and supporting life for such time intervals would need to be self-sufficient to an enormous degree.

If anything, I would expect them to park near our sun and utilize its energy output to some capacity, as energy is the true foundational requirement from which anything else could in theory be synthesized.

But…!

Before moving on, are the assumptions I have made thus far sound?  Isn’t it possible these creatures can survive thousands of years, so searching for planets is not out of the question?  I believe we are safe, because no matter how foreign these beings might be, the concepts of evolution underpin their existence as they do ours, and with the same universal relevance as the Pythagorean theorem.  Unless one wishes to argue that a species popped into full form spontaneously (the odds of which are too small to consider), we must accept that higher lifeforms are the result of simpler lifeforms enhancing themselves through some mechanism.  In order to explain a higher-order being, some natural (mechanical) mechanism must explain the evolution, and also the selection that lets changes propagate.  On Earth we understand natural selection, and the various copy errors that cause genetic evolution in species.  The particular medium or manifestation of this evolutionary process need not bear any similarity to our own, but the fundamental principle of a simple construction self-replicating and improving is definitely universal, and therefore the underlying properties of natural selection apply.

Considering what we do know about the process of evolving, it suggests that aliens would be neither timeless nor free of resource requirements.  No process in nature operates with zero energy consumption, and conservation of energy requires every process (alive or not) to take in at least as much energy as it puts out.  Our alien friends must take in more than enough energy to remain operational, or at least alive.  Given the efficiency of natural systems, it is a stretch to imagine a species that can store enough energy to survive centuries without any additional energy input.  This does not close the door on stasis or other methods of suspending energy consumption during travel, but it at least lengthens the odds against any enterprise by an intelligent species in need of resources that involves random space travel (keep in mind that no signals from Earth have yet reached any foreign worlds, and therefore no species can possess knowledge of our existence except by chance encounter).

And of course if the species has other ways of absorbing energy, then the underlying assumption that resources on Earth are required is broken.

From basic arguments of evolution, it seems unlikely that a species is timeless or long-lived on the order of centuries.  If this were the case, it becomes unlikely that the species could have survived long enough on a confined planet to develop space travel.  Additionally, it would greatly slow down the process of evolution, and tend to reduce the chances of higher lifeforms taking charge.

The Blue Zoo

Going forward on the belief that the aliens that might arrive on Earth are 1) scientific in nature, and 2) not in need of any resources they are not already capable of producing, is there still any concern that they may wipe us out just for the fun of it?  It is certainly possible.  Arguments from evolution on Earth suggest that morality is a natural development that assists in the formulation of civilizations (essential for large-scale social and technological development) as well as in the rearing of children.  I do not have enough information to speculate if these particular elements in Earth evolution would apply universally or not.  My hunch is that they will.

Without morality in some degree, it is impossible for organizations of a species to form and target a common goal.  Implicit in any such arrangement are things like trust (which implies honesty), fairness (required for accurate scientific evaluations), and order.  If a species has accomplished something like space travel, it is very likely they have long since accomplished civilization and interdependence.

So if we can assume our aliens have some concept of internal morality, where does that leave us?  Are we an interesting exhibit in a large, turning zoo?  Or might we fall victim to one of the other reasons a species might spread out: conquest?  Perhaps we can be assimilated or employed in a manner suitable to a lesser life form?  Or converted to some “religion” of the aliens, like a cosmic “white man’s burden”?

I doubt both.  Few slave-labor tasks needed by one species could be suitably executed by another, not to mention that the rarity of intelligent life would leave any slave-dependent system dry in no time.  As for religion… that is a whole other topic.  The short answer is that I doubt a species of such intelligence would have religion at all, let alone a concept of god that needs to be forced onto others.

Sightings and Abductions

I am very skeptical of all such stories.  The primary reason is the absurd laziness and lack of elegance in these accounts.  I think the level of technology required for a species to reach us is exponentially more staggering than the average person appreciates.  To a life form with such capabilities, the capacity for secrecy from our primitive technology and predictable habits is easily achieved.  Why these sloppy encounters?  Just enough witnesses to get a story churning, but not a proper first contact?

Once again, we are necessarily dealing with a scientific species.  Studying the broadcasts from Earth (and even observing us) is easily achieved from afar, using the very familiar scientific process.  Avoiding detection is no trouble, and initiating first contact just as easy.  But the accounts we see paint the picture of aliens trying to keep cover, but incapable of doing so.  Bright lights, crop circles, day-time flights in front of witnesses — this is just lazy.  We could argue that the aliens are not concerned with detection, and yet in every story they fly off quickly, and are never caught (really caught) on camera or radar.  It is like they are there being morons, and yet smart enough to keep proof hidden.

These reports demonstrate a lack of rationality and purpose, but have all the markings of superstitious ghost stories.

But When?

I believe that if we do encounter aliens, it will be those from the third option I presented at the start: the explorers.  They will be searching the cosmos to explore and to learn.  Encountering humanity would be an excellent learning opportunity for them and for us, and would not spell any kind of disaster.

Well, I really do hope we have our first contact in my lifetime, but my guess is we will not.  I personally believe it is next to impossible that life does not exist elsewhere, but the odds of intelligent life more advanced than us existing close enough to reach us are exceedingly low.  The Milky Way houses some 100 billion stars, spread across a disk roughly 100,000 light-years wide and on the order of 1,000 light-years thick… so even granting odds of one star in a billion hosting a planet with intelligent life, that only leaves about 100 such lifeforms in our galaxy, each with tens of billions of cubic light-years to itself.  Spread evenly, our closest neighbor would most likely be thousands of light-years away, making the odds of them stumbling across us astronomically small.
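For the curious, here is the back-of-the-envelope version of that estimate as a small Python sketch.  It is my own illustration of the arithmetic, using the same one-in-a-billion assumption and an assumed galactic-disk thickness of about 1,000 light-years:

```python
from math import pi

stars = 100e9                    # rough count of stars in the Milky Way
odds = 1e-9                      # assumed chance a star hosts intelligent life
civilizations = stars * odds     # ~100 civilizations in the galaxy

disk_radius_ly = 50_000          # the disk is ~100,000 light-years across
disk_thickness_ly = 1_000        # assumed thickness of the galactic disk
volume_ly3 = pi * disk_radius_ly ** 2 * disk_thickness_ly  # ~8e12 cubic light-years

volume_per_civ = volume_ly3 / civilizations          # ~8e10 cubic light-years each
typical_separation_ly = volume_per_civ ** (1 / 3)    # ~4,000+ light-years to a neighbor

print(f"{civilizations:.0f} civilizations, roughly {typical_separation_ly:,.0f} ly apart")
```

Even under these generous assumptions, the nearest neighbor sits far beyond the reach of any signal humanity has broadcast, which is the point of the estimate.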

As for making statements about our nuclear weapons, I suspect they could do a lot more than temporarily deactivate our warheads if that is what they wanted to do.  The whole thing sounds superstitious and lacking in common reason.

Collective Consciousness

In the dampened wake of the Holidays, I found myself once again drifting aimlessly into my own mind, an activity that almost inevitably leads to a blog entry or at least mild insomnia. In this case the former; in particular, I became absorbed with the concept of a Collective “hive” mind, and how it might affect a species such as humans.

The common portrayal of such a paradigm is never positive, exemplified most vividly by the antagonists of Star Trek: The Next Generation, the Borg.

[Image: a Borg drone]

The Borg are a cybernetic species that specializes in the indiscriminate assimilation of foreign biology and technology.  The Borg are also pivotally characterized by a collective mind… the members of the Borg are merely drones without any personal awareness or sense of individuality.  Indeed, the horror of assimilation, and the compulsive replacement of your individuality with the collective, are recurring themes in Star Trek, as well as in other sci-fi stories that touch on the concept.

I take issue with several of these portrayals, and ultimately assume the unpopular perspective that a collective mind would be a huge opportunity and sign of maturity for humanity. It would also represent a fundamental paradigm shift of unprecedented proportions to the “human experience”.

Nodes in the Network

The key thing to keep in mind is that joining a “collective” does not alter the way individual brains process; it simply interconnects the brain with others.  What a connection to a collective is supposed to entail is an instant and unfiltered exchange of all thoughts and experiences between all members of the hive.  Each connected human (or node) remains an individual processing center, meaning they continue to have their own consciousness and their own interface with experiences.  The difference is that after the instant of initial experience, the event becomes public and known to all, and free for everyone to individually react to.

This is where the idea of losing one’s self enters the picture. Of course it is a matter of speculation, but I don’t subscribe to this model. It seems reasonable that people in a collective might arrive at interpretations or beliefs that they would not have held individually. From this deviation, we might deduce that the node is no longer an individual as it was unable to hold its own opinion. In other words, it may seem the individual’s opinion was forcibly overwritten by the collective. To the contrary, however, I would expect this sort of deviation. The change in a node’s “personal” opinion is not because the individual is unable to hold their own thoughts, but because their own thoughts mingle with every other person’s thoughts and a massive averaging takes place whereby every node individually aggregates the diversity of opinions and knowledge and arrives inexorably at the same conclusion. The key to remember is that the nodes share everything, so any differences of perspective or personality of individual nodes are subjected to every opposing opinion and perspective, allowing each node to personally agree with the “collective” personality and perspective.

This difference may seem subtle, but I insist it is not.  Consider the elements that prevent people from agreeing on fundamental principles: take, for instance, an Evangelist and an Atheist.  These two groups have entirely incompatible world views, and no amount of arguing could ever get them to agree.  If they were connected to a collective, however, they would suddenly be able to exchange the feelings associated with experiences and the inherent instincts that cannot be explained, and they would be exposed to each other’s actual beliefs.  Since they cannot hold both beliefs at once, points would come into conflict, and all the internal reasoning would be shared and inclusive.

With the extra information and understanding, they would each likely arrive at some middle ground based on the various points one group or the other was unable or unwilling to internalize previously.  In essence, they are each so well informed and have such common experiences (personal or learned) that they nearly inevitably arrive at similar conclusions.  The end result is that their opinions may have changed, but not because they had to… only because each individual grew beyond their original perspective and actually chose to agree with the collective.  If disconnected from the collective, I would expect each individual to genuinely continue to believe whatever middle ground they had previously discovered.

Averaging Knowledge

The ability to exchange information on the level of our “inner voice” opens the door to this idea of true knowledge averaging.  When we all have the same pieces, and the same feedback on the best and worst ways to use those pieces, then our interpretation of information is likely to average out to the “most globally reasonable” interpretation.  This is not a loss by any means; it is a huge gain.  It enables the enhancement of human understanding and influence to extravagantly unthinkable levels.  It also does not require us to lose anything that we value in our current method of individual contributions; those contributions simply become lower-level.  For instance, an individual whose perspective is very innovative and new can still redirect the whole collective.  But in a collective mind, that innovation can be leveraged to a greater capacity, because as soon as it is discovered by a single node, it becomes available to all nodes to leverage.

Because the processing of information is still within the brain of the nodes, it makes sense that certain nodes would have certain values — some more likely to innovate and some more likely to make abstract connections, much like in our world. Again with the key difference that all nodes instantly understand how and why that innovation was realized, and can hopefully simulate the thought process.

This dispersal also allows humans to optimize themselves in ways previously unimaginable.  Technology as it stands now (wikipedia, social networking, televised entertainment, music) would not be required anymore when culture and enrichment are available on demand.  We would not lose these facets of our culture; we would simply be able to experience them without the technology middleman.  I imagine a collective culture relying very little on technology or surroundings for happiness or entertainment.

Portability of Consciousness

I will close with a curious afterthought on this subject.  If the individual consciousnesses in a collective were so interconnected that they could actually distribute their existence over multiple brains, a very sci-fi opportunity appears.  Up until now, I have described a node as its own person who is fully connected to every other person.  In many ways, this allows the group to control the group, because every decision (where to walk, what to say) is influenced and planned by the whole collective.  However, to execute the actual action, the host of that particular body must agree with the collective, and their brain must control their body.  In this new sense of shared consciousness, individuals could actually move their consciousness between particular nodes, or even share control of nodes while living primarily in the cloud.  For physical tasks, a strong body might be occupied by an individual, and then for solving a problem, a node better suited to mental work might be occupied.  In addition, several individuals might share control of multiple nodes at once.

This kind of collective allows humans to break the 1:1 connection that exists between a body and a mind; in fact, it opens up the ability for n:m, where n minds control m bodies, and n >= m.  Now if a body is lost, it does not necessarily pull its host out of the collective: the host may exist redundantly across the network.  Now any consciousness can actually control any body, a subtle difference from before, where only one could control a body, even though its decision to do so was largely the decision of the collective.

Enlisting in the Borg

The technology to achieve the kind of interconnectivity a collective requires is nowhere near the horizon, and may be permanently relegated to the sci-fi realm.  If it does ever make it to reality, however, I think we stand to benefit greatly from its potential.  The changes it represents to our way of life are small compared to the amazing opportunity for peace, advancement, and growth as a species.  If it ever comes to be, I envision plugging in will be a major point of contention, but inevitably everyone would seek its refuge and comfort, and be much happier for having done so.

Humans & Transporters

Ever since I was a child watching Star Trek: The Next Generation with my father, the concept of technology-driven teleportation (“transporters”) has captured and provoked my curiosity.  With implications for communication, global unification, health care, and general convenience, ‘provoking’ is plainly an understatement given the true magnitude of the matter.

Despite harboring these thoughts and questions for many years, it was only very recently that I began to consider the philosophical connotations of teleportation, in particular for the user of the hardware.  I sought to answer the question, “What emerges on the other side of a transporter?”  Of course I don’t have the answer, but I have an answer, and I wanted to write it all out.

Transporters

Before I can get into this too much, it is worth pursuing a quick tangent and discussing how transporters work in the Star Trek series.  The concept is fairly simple: leveraging Einstein’s E = mc^2, a computer scans a user and dematerializes their matter into an energy stream, along with data about their original configuration.  Next ensues a handful of semi-relevant albeit esoteric techno-babble, including the likes of “pattern buffer”, “confinement beam”, and “Heisenberg compensator”.  When it is all said and done, the computer delivers the energy stream up to 40,000 km away and reverts it back into its original matter state… i.e., the person who was being transported.  In the Star Trek story line, the computer scanner is able to resolve the quantum uncertainty that should otherwise be present between the position and momentum (or other non-zero commutators in QM) of the particles.  This stage is the only part of the transport process that rests fundamentally on impossible science, so I will ignore it in my discussion.  Here, I am curious about what might actually happen if one of these transporters were built, and under no circumstance could we build something with functional “Heisenberg compensators”.
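Just to put a rough number on the energy side of that idea (my own figure, not something from the show): converting an 80 kg person entirely into energy gives

E = mc^2 = (80 kg) × (3×10^8 m/s)^2 ≈ 7×10^18 joules,

which is on the order of a couple of gigatons of TNT, so “energy stream” is doing a lot of quiet work in that description.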

Consciousness

Perhaps predictably, the real question at the heart of this whole thing is whether human consciousness can be duplicated in the same manner that matter can.  While arduously avoiding the word “soul”, I wish to follow the method of René Descartes and suggest a few fundamental truths about consciousness to serve as a starting point for subsequent deductions.  While Descartes’s first-principles approach to philosophy only got so far as “cogito, ergo sum”, I propose granting the assumption that what applies to one individual must also apply to every individual, and thereby extending the foundation: you think, therefore you are.  And thus combined, we can agree that we each do exist, and that we each are separately sentient.

In granting the supposition that we all exist, we have acknowledged that consciousness is something real and distinct from person to person.  It seems obvious, but clearly my consciousness is not the same one as yours.  There is some mechanism that makes sure my consciousness stays with my body, and does not leak into others or else vanish altogether… in other words, it seems quite reasonable to conclude that a particular consciousness is mapped immutably to one instance of a human.  Everything in our experience suggests that this is the case.  I believe this conclusion still holds when we start to look at more unusual or even hypothetical situations, although it becomes less obvious and definitely arguable.  Here are a couple of cases I have thought about in an attempt to better define my own perception of the boundaries of a particular consciousness.

1. Monozygotic Twins

Okay, this one is not so hard.  Identical twins (let’s take two as an example) have nearly the same biological construction, but clearly there is “somebody home” inside each twin independently.  At the point when consciousness is likely to have manifested (prior to sentience), variation between the two twins would be confined to errors during mitosis and the minuscule differences in personal experiences while in the womb.  Despite the differences being essentially immeasurable at first, each twin still gets assigned a separate consciousness.

2. Clones

We have to employ our imagination a little harder now.  Suppose you go to the doctor’s office and you are cloned.  While you watch, the scientists grow a copy of you at a rapid pace.  It seems unlikely that when the clone reaches the point of being able to support consciousness, you would suddenly be affected.  The idea that your awareness of self might suddenly span two bodies is unreasonable.  Again, we are likely dealing with a new, separate consciousness despite the mirrored biological construction.  It seems to follow from these two examples that consciousness emerges independent of the particular brain construction.  That is to say, the “person” who sits behind one’s eyes is not a function of biological construction.

3. Replacement Clone

Now let’s say you are cloned through a process that necessarily kills you.  The doctors take your blood and multiple samples, and the end result is your death.  Then they use the materials they acquired and create your clone.  Does the exit of your consciousness have any effect on “who” wakes up inside that clone?  Given that your original consciousness is gone, could the new one actually be your original consciousness again, or is this case really the same as the one above?  We are past the point where I can offer any certain answers, but my hunch tells me that there should be no relationship between the existence of one consciousness and that of another.  If we agree that the particular mind to emerge is not a function of the biological construction, then I believe that the clone in this example, just like before, would be a brand new consciousness, albeit one that thinks they are you, and that acts, talks, and behaves like you, but would actually be different.  This case is very close to the central discussion about transporters, so I will hold off further thoughts until we get back to that.

Let me present one last thought about consciousness before moving on.  The line between “you” and “your consciousness” is very vague.  In general, those things that define who you are, are all bodily.  Your personality comes from your experiences, your sense of accomplishment comes from your memories; your purpose, your self-worth, all of the facets of your temperament… they are all the result of years of experiences, memories, thoughts, and interactions.  Of course there is an innate component to many of these things, but I argue that the items that really define us, the people we love, the people who loved us, our proudest moments, our deepest understanding of life, the things that have shaped us, are entirely contained in our physically-stored memories.  Experiments with animals, as well as studies of humans after accidents and with certain memory-related diseases, have well established that personality and memories can come and go with alterations to the brain.  In other words, the common concept of who you are is not dependent on your consciousness.  I propose that consciousness provides nothing more or less than the “self” who is able to experience what the brain processes.

This apparent tangent is very important.  It means that “your consciousness” is not synonymous with “you”.  Who you appear to be to others is defined by the makeup of your brain… two people with the same predispositions and the exact same experiences would likely act as if they were the same person.  Quite contrary to this, we have established here that consciousness is not related to the physical makeup (or else clones would be controlled by a single consciousness).  When I talk about “you” in regard to transporters, I really mean the combination of your physical identity (memories, feelings) as well as your particular self-awareness.  Either one without the other is not the entire you.

Teleporting Humans

Alright, so 1000 words later and still the question remains unanswered: what would happen if a human were transported?  From the physical perspective, we know that the original human is decomposed into energy and a copy is created at a distance.  Note that sans the Heisenberg compensator, we cannot truthfully state that the same physical particles are moved to the new location… but nonetheless, we undoubtedly have a better copy than our “Replacement Clone” example above.  Let’s further clarify that a transport process need not kill the transportee (in practice this might make little sense, but the point is that the same relationship that existed between clones and replacement clones exists here).  We found previously that a clone and a replacement clone were really the same phenomenon, each independent of the exit or entrance of other entities.  Likewise in this case, I doubt that the same “self” that existed before the transport somehow moves to, or shares, the new “structure” created by the transporter.  It seems inevitable that we are dealing with a new consciousness.

“Beam Me Up”, or “Count Me Out”?

So if a human enters a transporter, they are not in fact transported, rather a duplicate is created elsewhere while they are killed. We are forced to wrap up on a final philosophical curiosity: would it really matter to society as a whole?

In every quantifiable respect, the copy would be the original person. We have already discussed how personality, memories, experiences, and even temperament are parts of the physical body, and would therefore operate in the copy precisely as they did in the original. The copy would walk out of the receiving end of the transporter with a perfect memory of getting in at the other end moments before. In fact nothing about them could give any indication that anything had changed (since we know nothing physical did change), so for all intents and purposes, it would be the same person.

But the “self” inside their head would actually be only moments old, and completely distinct from the original.

My guess is that if transporters are ever invented, many, many people will use them without worry, and apparently without cause for worry.  Myself?  I’ll just call for a shuttle.