On Interstellar Travel: Does the Warp Stare Back at You?

On Interstellar Travel
Part 3 of 3: Does the Warp Stare Back at You?

“When you stare into the Warp, the Warp stares back into you.”
– Warhammer 40k

As part of the 45th anniversary of the Moon landing, I’ve written a series of articles on interstellar travel. In the last two installments I covered slower-than-light interstellar travel and Star Trek’s warp drive. Slow travel is plagued by the problem of travel duration. After all, spending 150 years on a ship is not very appealing, and what happens if you run out of supplies? Warp drive sounds plausible, and even has a real-world mathematical analogue, but it has a high probability of causing space-time paradoxes.

So maybe we’re thinking too small. Maybe we don’t need interstellar travel schemes that are “plausible with existing physics”; after all, they really aren’t all that plausible. Now we know that the Earth, Sun, and Interwebs are governed by Einsteinian physics. So if we want to break Einstein’s rules, we need to travel to a different dimension altogether! For maximum traveling comfort, this dimension should be immediately adjacent to normal space-time, and we should be able to bring along enough space-time to keep our physical bodies intact.

As it turns out, tons and tons of science fiction universes make use of a high-speed dimension immediately adjacent to regular space. This parallel dimension is called Hyperspace in most fictional universes, Slipspace in the Halo universe, and Ultraspace in Iain Banks’s Cultureverse. For the sake of convenience I’ve grouped them all under the umbrella of “Hyperspace”.

Because hyperspace exists outside of normal space-time, it doesn’t have to follow any of the laws of physics. However, if you want to give hyperspace a pseudo-scientific veneer, you can always invoke string theory. Unproven variants of string theory suggest that there are many extra dimensions adjacent to our own, rolled up into incredibly small spaces that we can’t access. If you could somehow squeeze into these alternate dimensions, you could move just a tiny bit and find yourself halfway across the universe. Ta-da, realistic hyperspace!

There’s one big problem with string theory hyperspace: the extra dimensions are very, very small. Not just regular small, incomprehensibly small: on the order of a Planck length. This is so small that if a proton were enlarged to the size of the Earth’s orbit, a Planck length would be the size of a DNA double helix. Passing a camel through the eye of a needle is downright trivial compared to traveling through such a tiny dimension. Also, physicists aren’t sure that string theory is real, and the string theorists aren’t sure that the extra dimensions are real. So scientifically speaking, string-theory hyperspace seems much less plausible than warp bubbles or even time travel.

I guess the scientist with a comical lisp was right for once – it was never supposed to be “hyperspace”, it’s “hypothspace” – a hypothetical space.

So let’s forget reality, and get into some different fictional concepts of hyperspace.


A Taxonomy of Hyperspaces
Not to be confused with a Hyperspace Tax

At a very basic level, hyperspace concepts can be split into “safe” and “dangerous” versions. Let’s start with safe hyperspaces, as they are much more common. In safe-hyperspace universes, hyperspace is pretty darned boring. You could kill yourself by dropping out of hyperspace on top of a star, as alluded to by Han Solo, but you’re unlikely to die in hyperspace itself. Depending on how hyperspace works, it may not be possible to fight a battle in hyperspace.

Safe hyperspaces may be further divided based on method of hyperspace entry. In Star Wars and Halo, ships can enter and exit Hyperspace/Slipspace at arbitrary locations. Less advanced ships may suffer from restrictions on where they can jump, while high-tech ships can enter and exit Slipspace at will.

In terms of their narrative impact, these “go-anywhere” hyperspaces are really not much different from warp-bubble drive. You could replace every warp core in the Federation with Corellian Hyperdrives of equal speed and reliability, and no one would really notice. Of course, canonical Star Wars velocities are much higher than Trek velocities, but they’re probably the same now.

One big problem with “go-anywhere” drives is that they tend to give space combat an offensive bias. With no spaceborne equivalent to terrain or chokepoints, the attacker will enjoy advantages in mobility, initiative, and surprise. It’s no coincidence that Star Trek, Star Wars, and Halo all place some emphasis on the idea of “don’t let the enemy find our fleet / superweapon / homeworld.” Once they discover your point of vulnerability, it’s awfully hard to defend – even when you set a trap!

This leads to the next category: “restricted hyperspace”. Maybe unassisted interstellar travel is extremely slow, expensive, or dangerous, but most travel occurs with the help of jump points, wormholes, mass relays or other fixed devices. These “jump paths” make interstellar travel downright easy, but your movement becomes predictable. Babylon 5, Mass Effect, and Honor Harrington all use variants of this hyperjump concept.

The restricted-hyperspace concept is highly appealing to writers because from a plot perspective it behaves much like terrestrial geography. Well-charted jump lanes are like major roads, while low-quality jump lanes are like back-country roads. Governments, bandits, and invading baddies all want to seize control of the jump paths, as they are the most economically valuable part of the star system. On the other hand, unassisted hyperspace is like a spooky forest that you can hide in, diving into “uncharted jumps” to evade pursuit. Just watch your back; hyperspace may be dark and full of terrors.

Restricted hyperspace also allows military forces to set up strong defensive chokepoints, slagging invading forces as they funnel through a wormhole or mass relay.

In some universes, hyperjumps may be “hard-restricted”, making FTL utterly impossible outside of spacelanes. This “hyperspace on a rail” concept removes all possibilities of escaping into uncharted space. It is pretty unpopular in fiction, but very common in gaming. The Freespace, Master of Orion, and Sins of a Solar Empire series all use hard-restricted jump geometry, as does Every Space Board Game Ever. Games prefer hyperspace-on-a-rail for its simplicity, as true 3-dimensional movement is very difficult to pull off in videogames and frankly impossible in boardgames.

In a minority of hyperspace systems, it is impossible to stay in hyperspace for any measurable amount of time. Instead, ships rapidly jump in and out of hyperspace in “stutter warp”. This is a relatively rare form of warp drive, originally published in the tabletop RPG 2300 AD and popularized by the novel A Fire Upon the Deep. Because each stutter-jump is instantaneous, you don’t need to worry about how time flows while you’re traveling faster than light: it doesn’t. Otherwise it’s not too different from go-anywhere hyperdrive.

From a realism perspective, all of these hyperspace concepts are purely speculative. You could say that hyperspace is “further out there” than Trek warp because there’s not as much supporting math, but you could also argue that the math proves that Trek warp is impossible. Until physicists discover radically new branches of physics, FTL travel will remain impossible under our existing scientific understanding.


Here There be Dragons
Go ahead and take that Step, just remember to Walk Without Rhythm

Back in the Age of Sail, exploration was so dangerous that many explorers never returned. Human imagination concluded that there must be an endless number of monsters in the sea, from seductive sirens to terrifying dragons. Of course, the deep seas of Earth never contained any sirens or dragons, but the danger was real and the body count high.

Just as the ocean was terribly hostile to flimsy ancient ships, hyperspace may be a very hostile place for future starships. Perhaps hyperspace is simply so bizarre that people go crazy from staring into it, as in the Ringworld series. Or maybe foldspace is filled with subtle hazards that can only be perceived by highly specialized individuals, such as the Spice-addicted Guild Navigators in the Dune series.

Or maybe the sirens and dragons aren’t just figurative… The Trope Namer, Warhammer 40k, describes starships traveling through The Warp, a dimension full of immeasurably horrific Daemons and Chaos Gods. This is similar to the dimensional gates used by H.P. Lovecraft’s Elder Gods, and is almost certainly inspired by Lovecraft to some extent.

In a universe where FTL travel is extremely dangerous, interstellar trade and travel would be difficult and expensive. Anyone willing to travel a long distance through Chaos would have to be desperate, crazy, or seeking a large payoff. Any substance that makes travel safer would be incredibly valuable, sought after and hoarded by every military force in the galaxy.

Hazardous-FTL universes tend to be more violent and militaristic than gentler universes. Part of this is narrative bias; someone who would write a completely peaceful story is unlikely to make hyperspace a violent place. However, it’s also a logical consequence. If hyperspace is highly dangerous, only highly dangerous people would feel comfortable with it. The hazards of space travel would discourage merchants and schoolchildren to a much larger extent than pirates, mercenaries, terrorists or madmen.

Now it’s interesting to speculate what might happen if a hazardous-FTL society became much less hazardous. Intercontinental travel was near-suicidally dangerous in the 16th century, but routine in the 21st. If the same thing happened in a WH40k-esque setting, would the galaxy become more peaceful? Probably (although not in WH40k, that’s just ridiculous).

How much of the violence in pre-industrial human civilization was caused by the fact that everyday life was so deadly that there was less of a taboo on killing? I’ll leave that question to the anthropologists, historians, and philosophers, but it seems plausible that it accounts for some share of human violence. I think it would be very interesting to set a sci-fi novel in the midst of a cultural transition between an ultraviolent “Warhammer” setting and a peaceful “Star Trek” galaxy.

* * *

So that’s it for my semi-systematic ramblings on interstellar travel. The fact is that with our current understanding of physics and outer space, mankind is not going to take any small steps under an alien star. Someone will need to discover the next domain of physics, whether he’s a brilliant academic mind or a half-crazy drunk. And on the day that his work is publicized, all of us sci-fi enthusiasts will cry. Half of our tears will be shed in joy at the advancement of mankind and space travel, and the other half will be shed in mourning over all the pseudoscience that’s suddenly as dated as Jules Verne’s moon cannon.

Post your comments if you got any!


On Interstellar Travel: Can we Break the Light Barrier?

On Interstellar Travel
Part 2 of 3: Can we Break the Light Barrier?

Zefram Cochrane, Is That You?
Protoss Corsair

This is Part 2 of the three-part series “On Interstellar Travel”, written to celebrate the 45th anniversary of the Moon landing. In the previous installment I discussed slower-than-light interstellar flight. Today we make the faster-than-light (FTL) plunge!

Under Einsteinian physics, nothing can move faster than light with respect to spacetime. However, spacetime itself can move as fast as it wants to. Shortly after the Big Bang, the universe expanded much faster than light. Therefore, even with “realistic” physics, FTL travel is at least somewhat plausible.

The Star Trek style “warp bubble” is one of the most enduring faster-than-light concepts in science fiction. A starship doesn’t move inside its warp bubble, so it doesn’t need to worry about time dilation or other relativistic effects. The warp bubble itself moves at speeds much faster than the speed of light.

In the 1990s, Miguel Alcubierre developed a mathematical theory that supports faster-than-light bubbles in Einsteinian space-time. Dr. Alcubierre’s academic paper refers to “the warp drive of science fiction” as inspiration. In fact, it’s directly based on Star Trek. Interestingly enough, ever since Alcubierre’s rise to fame, many modern sci-fi authors have equipped their starships with “Alcubierre drives”. This places the Alcubierre drive in the same hallowed position as cyberspace, a science fiction concept that inspires a real-world concept that inspires more science fiction. This image may be the ultimate circular reference: NASA’s concept art of a “USS Enterprise” powered by Alcubierre drives based on warp drives based on Star Trek.

Now, Alcubierre’s original theory was explicitly impossible. Generating the warp field required obscene quantities of “exotic matter” and “negative energy”, and there was no way to steer the warp field. However, since the Alcubierre drive is purely theoretical, it’s possible that tweaks to the math could greatly decrease its energy requirements.

Alcubierre and Star Trek disagree in one major respect: what happens to matter (or light) entering and exiting the warp field? Trekkie ships routinely engage in warp-speed combat, slinging phasers, disruptors, and photon torpedoes without dropping out of warp. That wouldn’t work with a “realistic” Alcubierre field – the edge of the warp bubble is an area of severely distorted space, much like the event horizon of a black hole. Any energy or matter passing through the edge would be severely distorted if not destroyed. This should affect communications as well, unless your communications signals exist in a parallel dimension (i.e., subspace communicators).

The characteristics of a warp-based interstellar civilization would depend on just how fast their ships, and their communications signals, could travel. In pre-JJ Abrams Star Trek, ships took days to weeks to travel around Federation or Klingon space, and much longer than that to cross the galaxy. However, you could have a real-time conversation with a Starfleet admiral from very far away. This allowed the major Trek powers (Feds, Klingons, Romulans, Cardassians etc) to build well-coordinated interstellar empires, while still preserving a sense of distance. Isolated backwater worlds could exist in pre-Abrams Trek because distance was actually meaningful. Unfortunately, in the post-Abrams universe the Moon appears to be in low Earth orbit, and Qo’noS is just a few miles further. Most illogical.


Does a Warp Drill Pierce the Heavens?
When does the fantasy stop making sense?

“Hard science fiction sticklers” are very intelligent people who rank higher on the evolutionary tree than the rest of us. You know this is true because they have highly sophisticated brainstem reflexes. After all, they roll their eyes as soon as they hear “faster than light travel”.

So why is FTL such nonsense? Well, it contradicts our current understanding of science, but that shouldn’t be a game-breaker. After all, the whole point of sci-fi is to show speculative technologies. However, one of the persistent complaints about FTL travel is that it seems to require the existence of time travel. Hard sci-fi fans recoil in horror at the thought of time travel, as it inevitably leads to silly logical inconsistencies.

Actually, indiscriminate use of FTL travel could cause logical problems even worse than time travel. Let’s try out some science-fanfiction: a thought experiment in the setting of Star Trek (The Original Series).

*   *   *

Warning: Physics Ahead!

* * *

Captain Bob of the USS Paradox leaves the Earth at 8:00AM traveling on maximum impulse power, a speed of 99.5%c. At 8:20AM, Captain Bob suddenly realizes that he forgot to lock his space-car door. So he orders Commander Spock to turn around and head back to Earth at Warp 9.

Traveling at 99.5%c causes 10-fold time dilation, so when Captain Bob turns the starship around at 8:20AM (by his own clock), only 2 minutes have passed on Earth. Since it takes time for Earth’s light to reach Bob, if he looks at an Earth clock it will show “8:01AM”.

“We have re-oriented and are ready to engage warp drive,” says Spock.

“Engage.”

Warp 9 is around a thousand times lightspeed, so it takes just over a second for Captain Bob to get back to Earth. The Earth clock now reads “8:02AM”.

“That’s funny,” says Captain Bob, “If only two minutes have elapsed, then I can’t possibly have traveled further than 2 light-minutes.”

Commander Spock points at something behind Captain Bob’s back. “Look behind you.”

Captain Bob looks back in the direction he came from. Sure enough, the USS Paradox appears just one light-minute away. “That’s odd. If my ship is out there, and I’m also right here, that means we’ve duplicated ourselves.”

Spock nods. He knows the feeling.

“But if that’s so, then at some point in warp flight we must have gone straight through a past version of ourselves.”

Spock slowly raises a single eyebrow. “That is most illogical. Two objects cannot occupy the same space at the same time.”

“Oh, you’re rig…” The Captain faces the camera with the wide eyes of a cartoon coyote who’s just realized he’s standing on thin air. Then his entire starship explodes in a glorious blast of illogic.

Less than three milliparsecs later, the Millennium Falcon sails gracefully out of the still-glowing fireball. “Whew, that was a close one, Chewie!”

“Arrhrhrhhrhrh!”

This paradox is illustrated below:

[Figure: diagram of the Trek FTL paradox]
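If you want to check Bob’s numbers for yourself, here’s a short sketch. It assumes warp 9 is roughly 1000 times lightspeed (as above) and uses the relativistic Doppler factor for what Bob actually sees on his viewscreen:

```python
import math

beta = 0.995  # impulse speed as a fraction of c

# Lorentz factor: the "10-fold time dilation" in the story
gamma = 1 / math.sqrt(1 - beta**2)

# Relativistic Doppler factor for a receding clock: what Bob *sees*
doppler = math.sqrt((1 - beta) / (1 + beta))

bob_minutes = 20                    # elapsed on Bob's clock
earth_seen = bob_minutes * doppler  # Earth-clock minutes Bob watches tick by

print(f"gamma = {gamma:.2f}")                            # ~10
print(f"Bob sees the Earth clock at 8:0{earth_seen:.0f} AM")  # ~8:01 AM

# Return trip: ~19.9 light-minutes (in Bob's frame) at warp 9 ~ 1000c
warp_seconds = 19.9 * 60 / 1000
print(f"warp transit: {warp_seconds:.1f} s")  # just over a second
```

Note that the “8:01AM” reading is exactly what the Doppler factor predicts: over 20 minutes of ship time, Bob watches the Earth clock advance by only about one minute.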


Solving the FTL Paradox
Cutting the Einsteinian knot

So from a naive perspective, FTL seems completely impossible. The existence of a warp drive would cause collisions throughout space and time, logic-eating paradoxes that could fundamentally alter the rules of the universe in crazy and unpredictable ways. For example, the travel time between Earth and Qo’noS could inexplicably decrease from several weeks to a few minutes. Oh wait, I already mentioned that one.

That said, there is one easy way to immediately banish all FTL paradoxes: Do away with Einsteinian relativity.

Relativistic paradoxes only occur because there is no “correct” (aka “absolute”) frame of reference. If an absolute frame of reference exists on some cosmic level, then you can easily prevent any time travel or paradoxes. Let’s go back to our previous example, using a cosmic background frame that is stationary with respect to the Earth.

* * *

Since Captain Bob is moving with respect to the cosmic background, he experiences time dilation and the background does not. So when Bob’s clock reads 8:20 AM, the cosmic clock has advanced by 200 minutes and reads 11:20AM. Bob has traveled 199 light-minutes in the cosmic reference frame, but due to time and length contraction this is only 19.9 light-minutes in Bob’s reference frame. In Bob’s reference frame, the Earth clock only reads 8:01AM, the same as in the first example.

Bob slaps himself in the forehead. “Oh crap, I forgot to lock my space-car door.” He reaches for the space-fob on his space-keys. “Commander Spock, turn this thing around. Maximum warp, engage.”

When the USS Paradox engages warp drive, it travels 199 cosmic light-minutes in 12 cosmic seconds. Since Bob is still under 1:10 time dilation, his clock only advances 1.2 seconds. It reads 8:20:01 by the time he reaches Earth. However, the Earth clock says 11:20:12AM – 12 seconds later than when Bob entered warp.

“Look behind you,” says Spock.

Bob looks over his shoulder and sees an image of the USS Paradox 100 light-minutes away. “Wait a second,” he says. “I’m still seeing a duplicate image of our ship. I thought that meant we could collide with ourselves?”

Spock shakes his head. “No, Captain. 201 minutes have elapsed here on Earth, and our trip only took 200 Earth minutes. It will take another 198 minutes for our light to catch up to us, but it’s only light. There is no duplicate of our ship out there.”

Bob visibly relaxes. “So there’s no way that we could run into our past selves?”

“Of course not, Captain. That would just be ridiculous.” Spock keeps a straight Vulcan face, but his human half is laughing on the inside.

* * *

As long as faster-than-light travel exists within an absolute frame of reference, individual people and ships can experience all the time dilation they want, but the universe will never see two copies of the same object in the same place at the same time.
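To make the cosmic-frame bookkeeping concrete, here is a small sketch of Bob’s round trip. It treats the dilation as an even 10-fold (the exact factor at 99.5%c is closer to 10.01) and assumes warp 9 is about 1000 times lightspeed, as in the story:

```python
# Bob's round trip, tallied in the cosmic (Earth-rest) frame.
# Assumptions: even 10-fold dilation, warp 9 ~ 1000c.
beta, gamma, warp = 0.995, 10.0, 1000.0

# Outbound leg at impulse
bob_min = 20.0                 # elapsed on Bob's clock
cosmic_min = bob_min * gamma   # 200 cosmic minutes pass on Earth
dist_lmin = beta * cosmic_min  # 199 light-minutes from Earth

# Warp return
warp_sec = dist_lmin * 60 / warp  # ~12 cosmic seconds
bob_warp_sec = warp_sec / gamma   # ~1.2 s on Bob's still-dilated clock

# Light from the turnaround point still has to chase the ship home
light_lag_min = dist_lmin - warp_sec / 60  # ~199 more Earth minutes

print(cosmic_min, dist_lmin, round(warp_sec, 1), round(light_lag_min, 1))
```

Every clock advances monotonically in the cosmic frame, which is exactly why no duplicate ships ever appear: the trailing image of the USS Paradox is just old light, not old matter.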

There’s one big obstacle to getting rid of relativity: if an absolute frame of reference exists, it should be fairly easy to observe. Whatever direction the Earth is moving during the spring, it’s moving the opposite direction in autumn. If there is a fundamental cosmic background frame, we should be able to detect our motion relative to this background. In fact, the absence of a seasonal difference in physics is exactly what drove Einstein to invent the theory of relativity in the first place.

This non-observation can be “solved” by assuming that the absolute frame of reference only applies to objects in warp space. After all, if Bob returned to Earth under impulse drive, he’d experience the “normal” time dilation effects described by Einstein.

Of course, if an FTL starship has to follow totally weird laws of physics just to exist, it may require a more fundamental change in space-time than an Alcubierran warp bubble. Instead of trying to create a bubble of exotic space in the ocean of realspace, it may make more sense to throw your entire starship into a different dimension.

This concept is best described as Hyperspace, and will be the subject of Part 3 of this article. (Thanks for reading!)

On Interstellar Travel: Can We Reach For The Stars?

On Interstellar Travel
Part 1 of 3: Can we Reach for the Stars?

“You feel so lost, so cut off, so alone. Only you’re not.”
Contact

45 years ago, Neil Armstrong took one small step for (a) man, one giant leap for mankind. Over the intervening decades, it’s interesting to see the progress that mankind has made in outer space. As a species we have continued to leap forward, placing thousands of satellites into Earth orbit and sending probes all over our Solar System. Yet we have not taken any more small steps for man, or woman. There are no more bootprints on the Moon or any other celestial body than there were in 1972.

Manned space travel is difficult and perilous, and at the moment low-reward. Earth is the only Earth-like world in the Solar System; any colony we put on the Moon or Mars would require supplies from Earth just to survive. If you’re looking for a comfortable extraterrestrial world to live on, you’ll have to go interstellar. There are a lot of ideas for how mankind could one day walk on an exoplanet – some realistic, some less so.

As part of the celebration of the 45th anniversary of the Moon landing, I’ve written up a series of three articles on interstellar travel. Today’s article will stick to (mostly) realistic slower-than-light travel options, while the next two pieces will delve into increasingly (but not infinitely) improbable modes of propulsion.


When I’ve Been There Ten Thousand Years
Traveling Much Slower than Light

Einsteinian space-time has three space-like dimensions, one time-like dimension, and an absolute speed limit of c, approximately 300,000 kilometers per second (kps). Nothing can move faster than c without also traveling backward in time. And since arbitrary time travel causes all sorts of logic-destroying stupidity, most scientists assume that time travel is impossible. Therefore, nothing can go faster than the speed of light.

In a realistic universe, it takes an awfully long time to get anywhere. The Apollo moon missions maxed out at around 11 kps relative to the Earth. Traveling to the nearest star would take 115,000 years at this pace. Actually, you’d never get that far. Starting from the Earth, the escape velocity of the Solar System is ~42kps. You’d need a considerably faster craft to ever exit the Solar System.

In the 1960s, the Orion nuclear pulse-rocket was “designed” as a deep space exploration concept. This starship would have used repeated thermonuclear explosions to push it at extremely high velocities (compared to conventional rockets). Such a craft could accelerate up to velocities of around 3%c. This would get you to Proxima Centauri in 142 years.
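Both travel-time figures are easy to verify with a quick sketch (the constants and the constant-speed simplification are mine; Proxima Centauri sits about 4.24 light-years away):

```python
C_KM_S = 299_792.458  # speed of light in km/s
LY_KM = 9.4607e12     # kilometers in one light-year
PROXIMA_LY = 4.24     # distance to Proxima Centauri

def years_to_proxima(speed_km_s: float) -> float:
    """Coast time to Proxima at a constant speed, ignoring acceleration."""
    seconds = PROXIMA_LY * LY_KM / speed_km_s
    return seconds / (365.25 * 24 * 3600)

print(f"Apollo pace (~11 kps): {years_to_proxima(11):,.0f} years")         # ~115,000
print(f"Orion pace (3% c):     {years_to_proxima(0.03 * C_KM_S):,.0f} years")  # ~141
```

Even a hundredfold speedup over chemical rockets only shrinks the trip from geological time to mere centuries.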

With much-slower-than-light travel, a journey between the stars will either require many lifetimes, or prolonged cryogenic freezing. Either way, all of your friends at home will be long dead by the time you reach your destination. And if people live on a starship for too many generations, they may eventually forget that they are on a starship.

*   *   *

Very-slow interstellar travel faces one major problem: Resource consumption. Where do you get fuel, water, and other materials while spending centuries between the stars? Every ecosystem requires light and heat, which means you have to generate energy, and energy is in short supply in interstellar space. Even nuclear reactors will run out of fuel during a thousand-year journey.

A Bussard ramscoop could gather interstellar gas for fusion power, but there’s not a lot of gas out there and it would be plain H-1. This is a much dirtier fusion fuel than He-3, and over the years would cause radiation damage to your fusion drive. You’ll burn most of the hydrogen that you collect just to create enough thrust to offset the ramscoop’s drag. And the ramscoop won’t collect any metals – if anything on your starship breaks, you can only hope that your ancestors brought a spare.

Some slow-starship designs completely bypass the energy problem by relying on laser energy beamed from Earth. This energy could be used both to propel the ship and to power its ecosystem. It’s certainly an elegant solution, as you could rely on an extremely large energy-producing infrastructure that doesn’t have to travel with your starship. But what happens when your benefactors run out of funding, are killed in a war, or are wiped out by climate change or natural disasters?

The fact is that based on a present-day understanding of physics and engineering, a slower-than-light “generation ship” is really not much more realistic than faster-than-light travel. If we ignore the difficulties of energy generation and resource collection in interstellar space, we might as well ignore the rest of physics.

And let’s say someone develops a technology that allows a civilization to live forever without an external energy source – why would you even want to live on a planet at that point? Just stay in interstellar space.

In a universe where human civilization is limited to much-slower-than-light travel, there would be no such thing as an interstellar civilization. Humanity might eventually spread out to a bunch of stars, but each solar system would have its own unique way of life. The human colonies might communicate with each other, but they really couldn’t trade effectively, and no one could travel back and forth between different stars. There could be countless alien civilizations in the galaxy, but we might never encounter them because they are too far away.


Oh my God, it’s full of stars!
Traveling at near the speed of light

According to Einstein, funny things happen when you get near the speed of light. Time slows down. Distances get shorter. Mass gets more massive. A traveller moving at 99.5%c will experience 10-fold time dilation, length contraction, and mass increase. That means he experiences time passing 10 times slower than someone at rest. Relativity may sound funny, but it isn’t just empty theory – our entire telecom and GPS system is programmed with relativity in mind. If Einstein was wrong, then none of the technology you’re using to read this blog article would work.

Science fiction authors have played with the concept of time dilation for many decades, because it’s fun. An interstellar traveller may live for a normal human lifespan but witness thousands of years of galactic civilization in fast-forward.

There’s one massive problem with near-lightspeed travel, and it’s mass. Well, it’s really energy, but we all know that’s the same thing. If you’re using time dilation to age 10x slower, that means you are also 10x as massive as you were at rest. If you were to stop moving, you’d need to shed kinetic energy equal to 9x your rest mass-energy, a truly absurd amount. In order to get moving again, you need to gain an equally ridiculous amount of kinetic energy.

How ridiculous is this? Well, the rest mass of a 70-kg (154#) human is 6.3 exajoules. That’s equivalent to 1,500 megatons of TNT, or 3 times the total energy of every nuclear bomb ever detonated. Now imagine spending nine times that energy just to accelerate a single person to near-lightspeed. We haven’t even considered the mass of the starship yet!
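These figures check out with a back-of-the-envelope script, using E = mc² for the rest energy and KE = (γ − 1)mc² for the relativistic kinetic energy:

```python
import math

C = 299_792_458.0  # speed of light, m/s
MASS = 70.0        # kg, one unlucky near-lightspeed traveller
MT_TNT = 4.184e15  # joules per megaton of TNT

rest_energy = MASS * C**2  # E = mc^2, ~6.3 exajoules

beta = 0.995
gamma = 1 / math.sqrt(1 - beta**2)   # ~10
kinetic = (gamma - 1) * rest_energy  # KE = (gamma - 1) mc^2, ~9x rest energy

print(f"rest energy: {rest_energy:.1e} J = {rest_energy / MT_TNT:,.0f} Mt of TNT")
print(f"KE at 0.995c: {kinetic / rest_energy:.1f}x rest energy")
```

The kinetic-to-rest-energy ratio is just γ − 1, which is where the “9x your rest mass-energy” figure comes from.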

Even with antimatter or black holes, it is very difficult (and highly dangerous) to come up with this kind of energy. Science fiction writers have either ignored the energy problem, or circumvented it with handwaving pseudophysics. (“It’s an inertialess drive!”) In Speaker for the Dead, Ender Wiggin wondered if a star winked out every time a starship started moving, since the ship picked up a vast amount of energy without visibly spending any.

Astute readers might wonder, if a starship can pick up energy ex nihilo, could it harness that energy to some other cause? At the very least, with enough energy you could completely destroy any planet you crashed into. Of course, if you had the technology to generate “free” energy, you may already have much more efficient ways to destroy a planet.

In a universe where travel occurs at near-lightspeed, there could be something resembling interstellar trade and travel, it would just be very difficult. If faster-than-light communication exists, it’s plausible that far-flung human colonies would stay in touch with each other, sharing the same Internet and the same entertainment and a similar culture. However, travelling to see another star system for yourself would require a major time commitment. Anyone you left behind at home would be much older by the time you reached your destination, or dead if your journey was too long.

Unless, of course, your interstellar civilization managed to dramatically extend their lifespans. Simple anti-aging and regenerative medicine techniques could keep human-like bodies alive for many hundreds of years, long enough to reach nearby stars.

However, if you wanted to tour the hundreds of billions of stars in the Galaxy, at 4 years per star you’d have to live a trillion years. Neither medicine nor mechanical prowess could keep a physical body functioning for that long. You could repeatedly switch bodies, but it’s better to transubstantiate into an energy being. An energy being might think and act on a totally different timescale compared to biologicals. If your consciousness was slow enough, or your memory long enough, you could hold a conversation with your friends across the galaxy despite a 20,000 year lightspeed delay. At that point, you would definitely not resemble a human in any meaningful way.

Oh, what was that sound? I guess it was the rumbling boom that happens when you break the plausibility barrier. I believe that brings today’s episode to a close!

*   *   *

Come back later for Parts 2 and 3, where I will delve into interstellar propulsion ideas less constrained by reality.

Paging Dr. Hologram: Artificial Intelligence or Stupidity?


The Doctor (Star Trek: Voyager)

“Doctors Turn to Artificial Intelligence When They’re Stumped,” reports PBS. A dermatologist uses the Modernizing Medicine app to search for a drug to prescribe. A Microsoft researcher describes electronic health records as “large quarries where there’s lots of gold, and we’re just beginning to mine them”. Vanderbilt pharmacists build a computer system to “predict which patients were likely to need certain medications in the future”. CEOs, venture capitalists, and PhD researchers all agree: artificial intelligence is the future of medicine.

In the article, IBM’s Watson is even described as an “artificially intelligent supercomputer”, which sounds far more brilliant than its intended level of expertise of a “nurse” or “second year med student”. (This makes no sense either. A nurse is way smarter than a 2nd year med student unless your patient desperately needs to know about the Krebs cycle. Unless it’s a brand new nurse.)

A simple read-through of the PBS article might convince you that artificial intelligence really is on the cusp of taking over medicine. By the last few paragraphs, the PBS writers are questioning whether computers might be altogether more intelligent than humans, making “decisions” rather than “recommendations”. You’d be forgiven for believing that electronic health records (EHR) software is on the verge of becoming an Elysium Med-Pod, a Prometheus Auto-Surgeon, or, if you prefer the classics, a Nivenian AutoDoc.

“Machines will be capable, within twenty years, of doing any work that a man can do.”

Herbert A. Simon, The Shape of Automation for Men and Management, 1965

Reading between the lines gives a much clearer picture of the state of electronic clinical decision support (CDS) algorithms:

  • Dr. Kavita Mariwalla, an MD dermatologist treating real patients, uses AI to figure out what drugs to prescribe.
  • Dr. Joshua Denny, a PharmD treating real patients, uses AI to receive prescriptions and to anticipate what drugs may be prescribed.
  • Dr. Eric Horvitz, a PhD computer scientist at Microsoft, talks about mining your medical records for profit. Of course he would do it in a totally privacy-respecting, non-creepy, non-exploitative way.
  • Daniel Cane, an MBA CEO who sells software, suggests that it is easier for physicians to learn “what’s happening in the medical journals” by buying his software. (Because reading medical journals is just too difficult.)
  • Euan Thompson, a partner at a venture capital firm, suggests that artificial intelligence will make “the biggest quality improvements”, but only if people are willing to pay the “tremendous expense” involved.
  • Dr. Peter Szolovits, a PhD computer scientist, is optimistic about computers learning to make medical decisions, and his biggest concern is that the FDA would come down on them “like a ton of bricks” for “claiming to practice medicine.”

It isn’t hard to tell that the clinicians and the non-clinicians have very different views of medical AI.

 


Are Computers Really That Smart?
I’m sorry Dave, but I cannot do that.

The most useful programs in current-day medical practice are pharmacy-related. So when PBS wrote their article about AI, they latched on to two pharmacy-related examples of direct patient care. Computers can search through vast amounts of information very quickly, telling us the correct dosing for a drug, second-line drugs you can switch to, or whether X patient is more likely to have a bleeding event with Plavix based on the data in their EHR.

Even then, computers can sometimes be more of a hassle than a help. Most physicians practicing nowadays have run into annoying pharmacy auto-messages in the vein of, “Mrs. Smith is 81 years old, and you just ordered Benadryl. Patients over the age of 70 are more likely to have adverse effects from Benadryl. Please confirm that you still want to order the Benadryl.” (You can replace “Benadryl” with just about any imaginable medication.)
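Under the hood, these alerts are usually nothing more than a lookup table of rules. A minimal sketch of that style of check, in Python, might look like the following; the drug names, age cutoffs, and messages are purely illustrative, not real clinical criteria or any vendor’s actual logic:

```python
# A toy rule-based pharmacy alert, in the spirit of the age-based warnings
# described above. Rules here are illustrative, not clinical guidance.

ALERT_RULES = [
    # (drug, minimum patient age that triggers a warning, message)
    ("diphenhydramine", 70, "Anticholinergic effects are riskier in older adults."),
    ("warfarin", 75, "Consider increased bleeding risk; review other medications."),
]

def check_order(drug: str, patient_age: int) -> list[str]:
    """Return any alert messages triggered by this medication order."""
    warnings = []
    for rule_drug, min_age, message in ALERT_RULES:
        if drug.lower() == rule_drug and patient_age >= min_age:
            warnings.append(f"Patient is {patient_age}: {message}")
    return warnings

# An 81-year-old ordered diphenhydramine (Benadryl) trips the rule;
# a 45-year-old ordering the same drug does not.
print(check_order("Diphenhydramine", 81))
print(check_order("Diphenhydramine", 45))
```

Real CDS engines add severity tiers and override tracking on top, but the core logic really is this simple, which is exactly why the alerts fire so indiscriminately.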

However, one thing that computers definitely can’t do is pick up on subtle cues. The PBS article suggests that a computer could tell that a patient is lying when he says he’s not smoking even though there are “nicotine stains” on his teeth and fingers. A computer would need incredibly good machine vision just to see those stains, and how would it know the teeth weren’t stained by coffee, chronic antibiotic use, or just poor dental care? The same goes for fingers: your patient could be a mechanic wearing a Ford dealership hat and coveralls, so how do you know his fingers aren’t stained with motor oil?

For all the recent advances in machine-vision, self-driving cars and all, a computer can only do what it is programmed to do. A Googlemobile can only drive itself because Google has spent years collecting immense amounts of data, correcting errors as they pop up. “Rather than having to figure out what the world looks like and what it means,” Google says, “we tell it what the world is expected to look like when it’s empty. And then the job of the software is to figure out how the world is different from that expectation.”

A wannabe Hologram Doctor can’t rely on having an ultra-precise map of what to expect from a human body, because every single human is different. This is a vastly more difficult problem than figuring out that a slowly moving human-sized object is a pedestrian.


 

The Perils of Excessive Hype
Daisy, Daisy, Daisy…

So what’s the harm? If medical-AI researchers want to suggest that computers are on the verge of telling lies from truth, diagnosing complex diseases, and “practicing medicine” like trained professionals, can we really blame them? After all, they’re just hyping up their field.

Well, the fact is that AI publicity has always been the greatest enemy of AI research. Ever since the 1960s, every time an incremental improvement is made in AI, people hype it up to ridiculous levels, and the hype ends up discrediting the actual technology. Real machine-learning technologies have only improved over time (after all, Moore’s Law is still in effect) but the perception of AI has whiplashed back and forth through the decades.

Perception is a very big deal in healthcare, just ask pediatricians about vaccines. If large healthcare institutions implement (or mandate) half-assed AI programs that end up hurting some patients (even if relatively few), the ensuing public mistrust of medical AI may never go away. You can bet your ass that the FDA would turn hostile to AI if that happened.

Machine-learning technology has a lot of potential for improving healthcare, but unless you’re a venture capitalist or software CEO it’s irresponsible to suggest that decision-support software will rapidly change medical decision-making for the better.

What’s even more irresponsible is suggesting that commercial software should replace reading as a way for physicians to keep up with the medical literature. Anyone who’s worked with “Clinical Pathways”-type software knows that it doesn’t always give you a “board exam safe” answer. While it may hew to some consensus guideline, which guideline it uses is entirely up to the MD consultants hired by the software company. It’s our professional responsibility as physicians to go to meetings, keep up with the evidence, and use our brains to decide which papers to believe and which guidelines to follow. If we can’t be trusted with that much, then why do MDs go through 4 years of med school and 3–8+ years of postgraduate training?

As a physician and technophile, I think that EHR and CDS are greatly beneficial when done correctly and when they don’t take away from the physician’s medical judgement. Rushing new medical software into practice, whether to comply with a poorly-thought-out government mandate or to generate free publicity, has the potential to do much more harm than good. Like many other medical advances, it is much better to be right than to be first.

Science Or Nonsense: Did Humans Evolve into Weaklings?

Rise of the Planet of the Apes

Science or Nonsense:
Did Humans Evolve into Weaklings?

“Humans Evolved Weak Muscles to Feed Brain’s Growth,” says the National Geographic headline. The idea that humans are the cloistered wimps of the animal kingdom is an old and commonly repeated meme. I was intrigued by the promise of scientific evidence in its favor, so I clicked through to the article. What I found was even more interesting but less straightforward.

The NatGeo article is based on a scholarly article from the Max Planck Institute in Germany, with the much less catchy name of “Exceptional Evolutionary Divergence of Human Muscle and Brain Metabolomes Parallels Human Cognitive and Physical Uniqueness”. The full text is available at PLOS ONE, along with an accompanying commentary article. (Yay for open-access publication!)

Based on its coverage in NatGeo as well as other media coverage, you would be forgiven for thinking that the entire research paper was a literal tug-of-war between humans and apes. “All participants had to lift weights by pulling a handle,” states the NatGeo article. It quotes the editorial commentary: “Amazingly, untrained chimps and macaques raised in captivity easily outperformed university-level basketball players and professional mountain climbers.”


 

Dexter’s Lab

Digging Through The Science
Reading between the Press Releases

If you read through the original article, you’ll find that pull strength really wasn’t the point. The actual scientific study was an opus of molecular biology, specifically metabolomics. Using a combination of liquid chromatography and mass spectrometry, the authors could examine the patterns of metabolites in each tissue of the body. By comparing humans to chimps, macaques, and mice, they could figure out which metabolic pathways had the most human-specific differences. Not surprisingly, the human brain was very different from animal brains: around 4x as many human-specific changes as the kidneys (their “control” tissue). However, muscle actually managed to outdo brain: it had 8x as many human-specific changes!

The authors were intrigued by this large difference in human muscles, so they embarked on additional studies. First, they performed a genomic (mRNA) analysis to confirm that their metabolomics weren’t totally crazy. They found that the gene expression analysis matched very closely with their metabolomic analyses. Then, they did a less scientific but more newsworthy confirmatory study: the “pulling strength” experiment.

This metaphorical human-versus-ape tug-of-war accounted for a single paragraph in a very long paper. If you asked any of the authors what experiment they were most proud of, I’m pretty sure none of them would say “Pulling Strength!” Unlike their biochemical and statistical analyses, which they described in great detail, the authors gave very little detail on the physical setup of their pulling-strength experiment. All we know is that humans, chimps, and macaques had to pull a handle to get food, the weight on the machine was progressively increased until the subject couldn’t pull it, and the heaviest weight pulled was recorded as a datapoint.

The final conclusion that “apes are stronger than humans” came from the endpoint of “pull strength per kilogram of body weight”. This metric is inherently biased toward smaller animals, and with chimps weighing in at 40 kg (88 lbs), they certainly had an advantage. A similar analysis done on a smaller weight machine would have proven the supernatural strength and agility of spiders.
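The small-animal bias isn’t just intuition; it follows from simple geometric scaling. Muscle force scales with cross-sectional area, roughly mass^(2/3), so force per kilogram shrinks as body mass grows. A quick back-of-the-envelope sketch (the body masses below are round illustrative figures, not the paper’s data):

```python
# Geometric (allometric) scaling: if force ~ muscle cross-section ~ mass**(2/3),
# then force per kilogram ~ mass**(-1/3), so lighter animals automatically
# score better on "pull strength per kg" before any physiology enters into it.

def relative_strength_per_kg(mass_kg: float) -> float:
    """Force per unit mass under pure geometric scaling (arbitrary units)."""
    return mass_kg ** (2 / 3) / mass_kg  # equivalent to mass_kg ** (-1/3)

animals = {"macaque": 8.0, "chimpanzee": 40.0, "human": 80.0}
for name, mass in animals.items():
    print(f"{name:10s} {relative_strength_per_kg(mass):.3f}")

# A 40 kg chimp beats an 80 kg human by a factor of 2**(1/3), about 26%,
# on this metric purely because of body size.
```

In other words, a chunk of the reported gap is baked into the choice of endpoint before a single muscle fiber is compared.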

 

The authors and the editor all made comments about how they did not control for biomechanics. Yes, biomechanics is a valid criticism – everyone knows that the same amount of weight can feel a lot heavier or lighter depending on which weight machine it’s on. Maybe the geometry of their experimental weight machine was bad for humans. However, biomechanics is really the least of the “pull strength” experiment’s problems.


 

Bipedalism as the key insight
Are you pulling my leg?

The problem with a human-ape tug-of-war should have been much more obvious: humans are bipedal. Apes and monkeys are not. A chimpanzee walks on its knuckles and hangs from tree branches. Ape shoulders are angled toward the head, and ape scapulae are narrow and elongated. This gives apes a more solid muscle attachment for knuckle-walking, hanging, and swinging motions, at the cost of restricted range of motion. For example, apes cannot scratch their own backs, one of many reasons that they spend a lot of time grooming each other. Ape arms are significantly longer than their legs, as is necessary for their walking posture.

On the other hand, human arms are not meant for locomotion; they are meant for manipulation. Our arms are 30% shorter than our legs. Our hands are much smaller than ape hands, trading raw grip strength for dexterity and opposable thumbs. We can cross our arms behind our backs, something that apes cannot do. While our arms are strong enough to hang from monkey bars, it takes us an awful lot of effort to do so (much as a chimp can walk upright with effort). And humans really can’t knuckle-walk; our arms are too short and our knuckles too small.

Given that ape locomotion uses a lot of pulling motions and human locomotion doesn’t, the fact that apes can out-pull humans shouldn’t surprise anyone. As long as we use a tug-of-war to judge strength, humans just don’t stand a chance. Change the strength test to throwing speed and now the ape seems much weaker.

The idea that humans are the pathetic weaklings of the animal kingdom flies in the face of everything we know about primitive humans. Human biology evolved hundreds of thousands of years before effective weapons like spear-throwing slings or bows. Armed with rocks and sharp sticks, there’s no way we could have survived if our muscles were 2-3x weaker than any other animal.

It is quite likely that human muscles are weaker in short bursts but better at prolonged exertion when compared to other animals. The practice of persistence hunting, primitive tribesmen catching prey by running at it until it collapses from exhaustion, gave a spark to the barefoot running movement. If our muscles aren’t able to produce as much peak pulling force, it’s only because they are optimized for endurance and heat tolerance instead.

 


So, Are Our Muscles Different?
Get your hands off me!

So if I don’t buy the “human muscles are useless” theory, then why are there so many biochemical differences in human muscle compared to our kidneys and brains? The answer, of course, is that we don’t know. I’m sure there are plenty of researchers trying to figure this out – every good study needs follow-up studies!

That said, you could make a few guesses based on simple metabolic facts:

The Real Paleo Diet: The great apes are omnivores, but they eat meat very rarely. It’s estimated that wild chimpanzees get ~3% of their calories from meat. The overwhelming majority of their calories come from fruits; there’s a reason why monkeys and apes are portrayed with bananas! On the other hand, ancient humans ate a lot of meat. Variations of the Paleo Diet tell you to get 50-70% of your calories from meat and seafood. While the Paleo Diet is of questionable prehistoric accuracy, humans definitely eat way more meat than the great apes that we evolved from. After all, our bipedal gait and heat tolerance were really good for running down prey!

On a broad scale, human metabolism can be described as two different modes: glucose metabolism and ketone metabolism. In well-fed humans with an unrestricted diet, glucose is the basic energy source. Glucose produces a small amount of energy through anaerobic glycolysis, then feeds into the citric acid cycle, which produces a large amount of aerobic energy. During a meal, we digest carbohydrates into very large amounts of glucose. We produce insulin in order to allow our cells to take up the glucose and use it for energy. Excess glucose is stored as glycogen in the liver and muscle. When fasting, we digest the glycogen back into glucose, keeping our blood sugar stable. Glycogen is such a good energy source that athletes often practice “carbohydrate loading” to increase their own glycogen stores.

When we become carbohydrate-deficient, either due to starvation or due to a low-carb diet, the human body switches track completely. The liver breaks down fatty acids into ketone bodies, which become the primary energy source for the body. Ketone bodies enter the citric acid cycle as acetyl-CoA, producing aerobic energy in the same fashion as glucose. However, insulin-mediated glucose transport and glycolysis are completely left out of the picture. Low-carb diet advocates claim numerous benefits of ketosis, some of them more plausible than others.

So what’s my point? The human diet is sufficiently different from that of monkeys and apes that it’s likely that our metabolism has changed in response to diet. A chimp never has to worry about ketosis; its diet is way too high-carb. And since the majority of our carbohydrates are stored as glycogen in the muscles, it only makes sense that our muscle metabolome would change more than any other organ.

Well, that’s my theory at least.


Hacking the Mind Epilogue: Psychosurgery

Hacking the Brain Epilogue: Psychosurgery

While we’re on the subject of “hacking the human mind“, it looks like there is renewed interest in psychosurgery. The link goes to an article about deep brain stimulation for alcoholic cravings, PTSD, and depression!

People have been trying to control psychiatric conditions with surgery since the days of the prefrontal lobotomy. Electrical stimulation has the advantages of precision and reversibility. However, as with any neurosurgical procedure, it relies upon localizing an unwanted symptom to a specific location in the brain. For example, deep brain stimulation works for Parkinson’s because the disease is localized to the basal ganglia.

No matter how much funding you throw at electroneurology, it won’t do any good if an unwanted emotion or compulsion is spread out over a large area of the brain. It remains to be seen how well localized things like alcoholism and PTSD are.

Hacking the Human Mind: Enter Reality

Image courtesy of Emotiv/ExtremeTech.

Hacking the Human Mind, Pt. 2: Enter Reality
In the first part of this post, I discussed the concept of “hacking the human mind” in mythology and fiction. Ever since antiquity, many people have tried to improve the human mind and body. The information era has contributed the term “hacking” to the idea of human improvement. More recently, pop culture has adopted the idea of hacking humanity and turned it into a ubiquitous plot device.

 


Snap Back to Reality
Whoops there goes Gravity

Hollywood has portrayed hacker-like characters as superhumans, shadowy villains or even honest-to-goodness sorcerers. However, hacker culture in real life is a far cry from its fictional portrayal. While wizards and sorcerers jealously guard their knowledge, real-world hackers are famous for sharing knowledge. (especially when they’re not supposed to)

Possibly thanks to the popularity of “hacking the human mind” as an idea, medical researchers have started to promote the so-called hacker ethic. This philosophy holds that decentralized, open-source use of technology can improve the world. Traditional medical research goes through multiple cycles of proposal, review and revision before anything happens. Successes are often published in closed-access journals while failures are often buried. The hacker ethos encourages freewheeling experimentation and open-source sharing among the scientific community.

Among its many innovations, hacker culture has given birth to the idea of medical hackathons. A “hackathon” is defined as a short-duration (often just a weekend), high-intensity, multidisciplinary collaboration. During the event, participants make “60 second pitches” to attract other people who might have special skills. For example, a physician with a good idea for telemedicine might go around trying to find a coder who knows about Internet security. Then they could come across a hacker with machine-vision expertise and recruit him to improve their cameras.

Although they occur too quickly to really polish a product or conduct clinical trials, hackathons generate numerous bright ideas that can be worked on later. In a way they are the ultimate brainstorm.


Heroes of the Brainstorm
Harder, Better, Faster, Stronger

Hackathons are undoubtedly coming up with lots of very good ideas. However, even the best medical ideas take a long time to implement. The only ideas that can be implemented immediately are very small pieces of provider-side software (e.g., enhanced changeover sheets for hospitalists). Anything that touches a patient requires a lengthy process of requests, reviews, and consents before it is ever used… and only then can you figure out whether it is effective.

As of 2014, the medical hackathon simply hasn’t been around long enough to show much of an effect. It’s a bit like a drug in Phase I-Phase II studies: everyone has great hope that it will improve things, but you can’t point to a major innovation that would not have been possible without the hackathon.

Integrating small-scale hackathon products into larger suites of medical software is a much tougher problem. Even the large-vendor EHRs (Epic, Meditech, Cerner) have difficulty communicating with each other, let alone with smaller pieces of software. The greatest problem in healthcare IT is that the so-called “HL7 Standard” isn’t really a standard.

Standard file formats exist so that they can be consistently read by everyone. A PDF looks the same on a PC, Mac, iPhone, or Google Glass. A Kindle file (.AZW) is the same on a Kindle, PC, or phone. Even medical imaging has a true standard format. Whether your CT scanner is a GE, Philips, or Siemens, when you export DICOM images to another physician, the CT slices will show up exactly the same.

HL7 is not like that at all. In my personal experience, naively transferring documents between two pieces of “HL7-compliant” software results in loss or misinterpretation of some of the data. In order to fix this, you need a highly trained IT expert to create a specialized “connectivity interface”, or sometimes you pay big bucks to purchase such an interface. I am amazed that things are still so difficult in the year 2014.
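Part of the problem is visible in the wire format itself. An HL7 v2 message is just pipe-delimited text; which field means what is a convention layered on top. A toy parser (standard library only, with a fabricated sample message) shows how little the format itself enforces:

```python
# A toy HL7 v2 segment parser. The sample PID segment below is fabricated
# for illustration. Real interfaces need per-vendor mapping tables precisely
# because field usage is loosely specified by the standard.

SAMPLE = "PID|1||12345^^^HOSP^MR||SMITH^JANE||19330214|F"

def parse_segment(segment: str) -> dict[int, str]:
    """Split one HL7 v2 segment into numbered fields."""
    fields = segment.split("|")
    return {i: value for i, value in enumerate(fields)}

pid = parse_segment(SAMPLE)
# Field 5 holds the patient name *by convention* -- nothing in the syntax
# says so, and its internal '^' components (family^given) are yet another
# layer of convention that senders implement inconsistently.
family, given = pid[5].split("^")[:2]
print(family, given)  # SMITH JANE
```

Two “HL7-compliant” systems can both emit perfectly valid pipe-delimited segments while disagreeing about which optional fields they populate and how they structure the components inside them – hence the expensive custom connectivity interfaces.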

In the field of traditional software design, hackers have benefited from the uniform interoperability of Unix (Linux) for many decades. As of today, healthcare lacks this important feature.

Maybe the hackers could come up with a solution for interoperability?


Big Data: The Rise of the Machines
Thank God it’s not Big Lore, Big Bishop, or Big Terminator

One of the promises of “medical hacking” has been the application of “Big Data” techniques to healthcare. Data analysis in healthcare has always been difficult and often inconsistently performed. Many medical students and residents can tell you about painstaking research hours spent on manual data entry. Big Data techniques could turn ten thousand med student hours into five minutes of computer script runtime. Unfortunately, to date Big Data has been much less successful in real life.

So far, the two Biggest Data medical innovations have been Google Flu Trends and 23andMe. GFT purports to forecast the severity of the flu season, region by region, based on statistics on flu-related Google searches. 23andMe was originally supposed to predict your risk of numerous diseases and conditions using a $99 DNA microarray (SNP) analysis. Far from being a home run for Big Data, both of these tools are more reminiscent of a strikeout, if not a pick-six.

GFT was billed as a Big Data tool that would vastly improve the accuracy and granularity of infectious disease forecasting. When first introduced in 2008, GFT’s flu predictions were more accurate than any existing source. However, every year it became less and less accurate, until it became worse than simply measuring how many flu cases happened two weeks ago. GFT’s performance degraded so badly that it was described as a “parable of traps in data analysis” by Harvard researchers.

23andMe offered SNP testing of the entire genome, used both for ancestry analysis and disease prediction. Prior to November 2013, the website offered a vast number of predictors ranging from lung cancer to erectile dysfunction to Alzheimer’s dementia to drug side effects. It was held up as an exemplar of 21st-century genomic empowerment, giving individuals access to unprecedented information about themselves for the low, low price of $99.

The problem was, 23andMe never bothered to submit any scientific evidence of accuracy or reproducibility to the Food and Drug Administration. The FDA sent a cease and desist letter, forcing them to stop marketing their product as a predictive tool. They’re still selling their gene test, but they are only allowed to tell you about your ancestry. (not any health predictions) This move launched a firestorm, with some people arguing that the FDA was overstepping or even following “outdated laws“.

However, the bulk of the evidence suggested that 23andMe simply didn’t give accurate genetic info. Some molecular biologists pointed out the inherent flaws in SNP testing, which make it impossible for 23AndMe to be usably accurate. Others pointed out that even if accurate, most of the correlations were too weak to have any effect on lifestyle or healthcare. The New England Journal of Medicine concluded that the FDA was justified in issuing a warning, and that “serious dialogue” is required to set standards in the industry. Other commentators were “terrified” by 23andMe’s ability to use your genetic info for secondary studies. After all, how can 23andMe sell genetic tests for $99 when other companies charge thousands? Obviously they didn’t plan to make money from the consumers; instead, 23andMe hoped to make money selling genetic data to drug companies and the rest of the healthcare industry.

In the end, that is my biggest misgiving about medical Big Data. Thanks to social media (this blog included), we have already commoditized our browsing habits, our buying habits, our hobbies and fandoms. Do we really want to commoditize our DNA as well? If so, count me out.


Doctoring the Doctor
Damnit Jim, I’m a doctor, not a hologram!

Another big promise of the “hacker ethos” in medicine is that it could improve physician engagement and enthusiasm for technology. Small decentralized teams of hackers could communicate directly with physicians, skipping the multi-layered bureaucracy of larger healthcare companies.

Many healthcare commentators have (falsely) framed the issue of physician buy-in as a matter of technophobia. Doctors are “stuck in the past“, “Luddites in white coats”, and generally terrified of change. The thing is, it’s just not true. Just look at the speed at which new medical devices are popularized – everything from 4DCTs to surgical robots to neuronavigation units, insulin pumps, AICDs and deep brain stimulators. If physicians saw as much of a benefit from electronic health records (EHRs) as we were supposed to, we would be enthusiastic instead of skeptical.

I believe that EHR would be in much better shape today if there had never been an Obamacare EHR mandate. No one ever improved the state of the art by throwing a 158-page menu of mandates at it. Present-day EHRs care much more about Medicare and other billing rules than they do about doctor or nurse usability.

Back on subject, I do believe that medical hacking has the potential to get physicians more involved in technological innovation. So long as physicians are stuck dealing with massive corporate entities, we can provide feedback and suggestions but they are very unlikely to be implemented. Small-scale collaborations empower doctors with the ability to really change the direction of a project.

Now, not every medical hack will result in something useful. In fact, a lot of hacks will amount to little more than cool party tricks, but some of these hacks will evolve into more useful applications. Some easily-hackable projects may involve documents or files produced by older medical technology. During residency I worked on a research project involving radiation treatment plans from a very old, non-DICOM-compliant system. We quickly discovered that the old CTs were not usable by modern treatment planning software. Fortunately, one of the physicists on our research team was familiar with DICOM. He coded a computer program that inserted the missing DICOM headers into the old CT images, allowing us to import old CTs without any problems.
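I don’t have the physicist’s actual code, but one classic defect in files from pre-standard systems is a missing 128-byte preamble and “DICM” marker, which modern software looks for before parsing. A standard-library-only sketch of that flavor of repair (illustrative only – a real fix, like ours, also has to supply proper file-meta header elements):

```python
# Sketch: prepend the DICOM Part 10 preamble (128 null bytes + b"DICM")
# to a legacy file that lacks it, so modern tools recognize it as DICOM.
# The legacy bytes below are fabricated; real repairs also need proper
# file-meta elements, which this sketch deliberately omits.

DICOM_MAGIC = b"DICM"
PREAMBLE_LEN = 128

def has_part10_preamble(data: bytes) -> bool:
    """True if the file already carries the Part 10 preamble and magic."""
    return data[PREAMBLE_LEN:PREAMBLE_LEN + 4] == DICOM_MAGIC

def add_preamble(data: bytes) -> bytes:
    """Return the data with a Part 10 preamble prepended if it's missing."""
    if has_part10_preamble(data):
        return data  # already compliant; leave untouched
    return b"\x00" * PREAMBLE_LEN + DICOM_MAGIC + data

legacy = b"\x08\x00\x05\x00"          # fabricated legacy header bytes
fixed = add_preamble(legacy)
print(has_part10_preamble(fixed))     # True
```

The function is idempotent, so running the repair twice over a folder of mixed old and new files does no harm – the kind of small, targeted fix a weekend of hacking can realistically deliver.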

Introducing more hackers to medicine can only increase the number of problems solved by astute coding.


What Happened to Superpowers?
Paging Dr. Manhattan…

The addition of hacker culture to medicine certainly has a lot of potential to improve the everyday practice of medicine. But what happened to the idea of “hacking the human mind” in order to develop super-strength and speed?

On a very rudimentary level, “hacking the mind” improves physical performance every time an athlete grows a beard for the playoffs or wears his college shorts under his NBA uniform. But true hacking should be more sophisticated than mere superstition!

Biofeedback is a common pre-game ritual for various athletes that could be construed as a minor form of “hacking the mind/body”. Dietary habits such as carb loading could also be considered a mild form of hacking. For less legal mind-body hacking you could always turn to performance enhancing drugs.

Speaking of drugs, there’s a long-held belief that people high on drugs (mostly PCP, sometimes meth or bath salts) gain superhuman strength. While the evidence is mostly anecdotal, there’s a plausible medical explanation. The Golgi tendon reflex normally prevents muscles from over-exerting themselves, and it can be suppressed in desperate situations (the “mother lifts a car off her child” scenario). It’s reasonable to assume that some drugs could have a similar effect.

It’s also reasonable to assume that military physicians have spent decades (the entire Cold War for sure) trying to produce a super-strength drug with fewer side effects than PCP. The fact that our entire army doesn’t have the physique of Captain America suggests that those efforts were unsuccessful. Granted, this doesn’t rule out the existence of a super serum that only worked on one guy ever.

Evolutionarily speaking, it is highly implausible that humans would have tremendous physiological potential locked behind some mental gate. If the human body had such great power, our prehistoric ancestors would have needed every ounce of it to outrun or outfight angry lions and hippos and crocs. It would make no sense for humans to have a mental block on our strength. Unless removing that mental block led to instant death or infertility, the first caveman to lose his mental block would be evolutionarily favored over the rest of proto-humanity. Therefore, it’s very unlikely that human performance can be “magically” improved with drugs, meditation, or other techniques.


 

So let’s cap off this long ramble with a little teaser on evolution and human strength. This National Geographic feature suggests that early humans directly traded muscle strength for brain power.

http://news.nationalgeographic.com/news/2014/05/140527-brain-muscle-metabolism-genes-apes-science/

What is wrong with this argument?