On Interstellar Travel: Can We Reach For The Stars?

On Interstellar Travel
Part 1 of 3: Can we Reach for the Stars?

“You feel so lost, so cut off, so alone. Only you’re not.”
Contact

45 years ago, Neil Armstrong took one small step for (a) man, one giant leap for mankind. Over the intervening decades, it’s interesting to see the progress that mankind has made in outer space. As a species we have continued to leap forward, placing thousands of satellites into Earth orbit and sending probes all over our Solar System. Yet we have not taken any more small steps for man, or woman. There are no more bootprints on the Moon or any other celestial body than there were in 1972.

Manned space travel is difficult and perilous, and at the moment low-reward. Earth is the only Earth-like world in the Solar System; any colony we put on the Moon or Mars would require supplies from Earth just to survive. If you’re looking for a comfortable extraterrestrial world to live on, you’ll have to go interstellar. There are a lot of ideas for how mankind could one day walk on an exoplanet – some realistic, some less so.

As part of the celebration of the 45th anniversary of the Moon landing, I’ve written up a series of three articles on interstellar travel. Today’s article will stick to (mostly) realistic slower-than-light travel options, while the next two pieces will delve into increasingly (but not infinitely) improbable modes of propulsion.


When I’ve Been There Ten Thousand Years
Traveling Much Slower than Light

Einsteinian space-time has three space-like dimensions, one time-like dimension, and an absolute speed limit of c, approximately 300,000 kilometers per second (kps). Nothing can move faster than c without also traveling backward in time. And since arbitrary time travel causes all sorts of logic-destroying stupidity, most scientists assume that time travel is impossible. Therefore, nothing can go faster than the speed of light.

In a realistic universe, it takes an awfully long time to get anywhere. The Apollo moon missions maxed out at around 11 kps relative to the Earth. Traveling to the nearest star would take 115,000 years at this pace. Actually, you’d never get that far. Starting from the Earth, the escape velocity of the Solar System is ~42 kps. You’d need a considerably faster craft to ever exit the Solar System.

In the 1960s, the Orion nuclear pulse-rocket was “designed” as a deep space exploration concept. This starship would have used repeated thermonuclear explosions to push it at extremely high velocities (compared to conventional rockets). Such a craft could accelerate up to velocities of around 3%c. This would get you to Proxima Centauri in 142 years.
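These trip times are easy to check for yourself. Here’s a quick back-of-envelope sketch in Python (taking Proxima Centauri as ~4.24 light-years; at these speeds relativistic corrections are negligible):

```python
# Rough interstellar trip times at the speeds quoted above.
# Assumes a constant cruise speed and ignores acceleration time.

C_KPS = 299_792          # speed of light, km/s
LY_KM = 9.461e12         # one light-year in kilometers
PROXIMA_LY = 4.24        # distance to Proxima Centauri, light-years

def trip_years(speed_kps: float, distance_ly: float = PROXIMA_LY) -> float:
    """Travel time in years at a constant speed."""
    seconds = distance_ly * LY_KM / speed_kps
    return seconds / (365.25 * 24 * 3600)

print(f"Apollo pace (11 kps):  {trip_years(11):,.0f} years")            # ~115,000 years
print(f"Orion pulse (3% of c): {trip_years(0.03 * C_KPS):,.0f} years")  # ~141 years
```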

With much-slower-than-light travel, a journey between the stars will either require many lifetimes, or prolonged cryogenic freezing. Either way, all of your friends at home will be long dead by the time you reach your destination. And if people live on a starship for too many generations, they may eventually forget that they are on a starship.

*   *   *

Very-slow interstellar travel faces one major problem: Resource consumption. Where do you get fuel, water, and other materials while spending centuries between the stars? Every ecosystem requires light and heat, which means you have to generate energy, and energy is in short supply in interstellar space. Even nuclear reactors will run out of fuel during a thousand-year journey.

A Bussard ramscoop could gather interstellar gas for fusion power, but there’s not a lot of gas out there and it would be plain H-1. This is a much dirtier fusion fuel than He-3, and over the years would cause radiation damage to your fusion drive. You’ll burn most of the hydrogen that you collect just to create enough thrust to offset the ramscoop’s drag. And the ramscoop won’t collect any metals – if anything on your starship breaks, you can only hope that your ancestors brought a spare.

Some slow-starship designs completely bypass the energy problem by relying on laser energy beamed from Earth. This energy could be used both to propel the ship and to power its ecosystem. It’s certainly an elegant solution, as you could rely on an extremely large energy-producing infrastructure that doesn’t have to travel with your starship. But what happens when your benefactors run out of funding, are killed in a war, are destroyed by climate change or natural disasters?

The fact is that based on a present-day understanding of physics and engineering, a slower-than-light “generation ship” is really not much more realistic than faster-than-light travel. If we ignore the difficulties of energy generation and resource collection in interstellar space, we might as well ignore the rest of physics.

And let’s say someone develops a technology that allows a civilization to live forever without an external energy source – why would you even want to live on a planet at that point? Just stay in interstellar space.

In a universe where human civilization is limited to much-slower-than-light travel, there would be no such thing as an interstellar civilization. Humanity might eventually spread out to a bunch of stars, but each solar system would have its own unique way of life. The human colonies might communicate with each other, but they really couldn’t trade effectively, and no one could travel back and forth between different stars. There could be countless alien civilizations in the galaxy, but we might never encounter them because they are too far away.


Oh my God, it’s full of stars!
Traveling at near the speed of light

According to Einstein, funny things happen when you get near the speed of light. Time slows down. Distances get shorter. Mass increases. A traveller moving at 99.5%c experiences 10-fold time dilation, length contraction, and mass increase. That means he experiences time passing 10 times slower than someone at rest. Relativity may sound funny, but it isn’t just empty theory – our entire telecom and GPS infrastructure is programmed with relativity in mind. If Einstein were wrong, then none of the technology you’re using to read this blog article would work.
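The 10-fold figure falls straight out of the Lorentz factor, γ = 1/√(1 − v²/c²). A one-liner to check it (a sketch, not from the original post):

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# Time dilation, length contraction, and mass increase at 99.5% of c:
print(lorentz_gamma(0.995))  # ~10
```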

Science fiction authors have played with the concept of time dilation for many decades, because it’s fun. An interstellar traveller may live for a normal human lifespan but witness thousands of years of galactic civilization in fast-forward.

There’s one massive problem with near-lightspeed travel, and it’s mass. Well, it’s really energy, but we all know that’s the same thing. If you’re using time dilation to age 10x slower, that means you are also 10x as massive as you were at rest. If you were to stop moving, you’d need to shed kinetic energy equal to 9x your rest mass, a truly absurd amount. In order to get moving again, you need to gain an equally ridiculous amount of kinetic energy.

How ridiculous is this? Well, the rest mass of a 70-kg (154 lb) human is 6.3 exajoules. That’s equivalent to 1,500 megatons of TNT, or 3 times the total energy of every nuclear bomb ever detonated. Now imagine spending nine times that energy just to accelerate a single person to near-lightspeed. We haven’t even considered the mass of the starship yet!
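Putting numbers on this: rest energy is E = mc², the kinetic energy at cruise speed is (γ − 1)mc², and one megaton of TNT is about 4.184×10¹⁵ joules. A back-of-envelope sketch:

```python
C = 299_792_458        # speed of light, m/s
MEGATON_J = 4.184e15   # joules per megaton of TNT

mass_kg = 70.0
rest_energy = mass_kg * C ** 2         # E = mc^2
print(f"Rest energy: {rest_energy:.2e} J")                            # ~6.3e18 J
print(f"           = {rest_energy / MEGATON_J:,.0f} megatons of TNT")  # ~1,500 Mt

gamma = 10.0                           # time dilation factor at 99.5% of c
kinetic = (gamma - 1) * rest_energy    # 9x rest energy, per person, per acceleration
print(f"Kinetic energy at 0.995c: {kinetic:.2e} J")
```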

Even with antimatter or black holes, it is very difficult (and highly dangerous) to come up with this kind of energy. Science fiction writers have either ignored the energy problem, or circumvented it with handwaving pseudophysics. (“It’s an inertialess drive!”) In Speaker for the Dead, Ender Wiggin wondered if a star winked out every time a starship started moving, since the ship picked up a vast amount of energy without spending any.

Astute readers might wonder: if a starship can pick up energy ex nihilo, could it harness that energy for some other purpose? At the very least, with enough energy you could completely destroy any planet you crashed into. Of course, if you had the technology to generate “free” energy, you may already have much more efficient ways to destroy a planet.

In a universe where travel occurs at near-lightspeed, there could be something resembling interstellar trade and travel; it would just be very difficult. If faster-than-light communication exists, it’s plausible that far-flung human colonies would stay in touch with each other, sharing the same Internet and the same entertainment and a similar culture. However, travelling to see another star system for yourself would require a major time commitment. Anyone you left behind at home would be much older by the time you reached your destination, or dead if your journey was too long.

Unless, of course, your interstellar civilization managed to dramatically extend human lifespans. Simple anti-aging and regenerative medicine techniques could keep human-like bodies alive for many hundreds of years, long enough to reach nearby stars.

However, if you wanted to tour the hundreds of billions of stars in the Galaxy, at 4 years per star you’d have to live a trillion years. Neither medicine nor mechanical prowess could keep a physical body functioning for that long. You could repeatedly switch bodies, but it’s better to transubstantiate into an energy being. An energy being might think and act on a totally different timescale compared to biologicals. If your consciousness was slow enough, or your memory long enough, you could hold a conversation with your friends across the galaxy despite a 20,000 year lightspeed delay. At that point, you would definitely not resemble a human in any meaningful way.

Oh, what was that sound? I guess it was the rumbling boom that happens when you break the plausibility barrier. I believe that brings today’s episode to a close!

*   *   *

Come back later for Parts 2 and 3, where I will delve into interstellar propulsion ideas less constrained by reality.


45 Years from the Moon

45 years ago this morning, Neil Armstrong, Buzz Aldrin and Michael Collins launched Apollo 11 from Kennedy Space Center. Four days later, Neil Armstrong would become the first man on the Moon.

For the 45th lunar landing anniversary, NBC is running an entire series here:
http://www.nbcnews.com/science/space/apollo-11-plus-45-how-neil-armstrong-got-ready-moon-n155916

Since 1969, spaceflight has transitioned from a government-run program to a highly profitable industry. The Apollo missions may have been justified by national pride and glory, but they were really all about the nuclear arms race; if you can build a moon rocket you can build very large and precise missiles.

In 2014, our smartphone signals, internet connections, and GPS would not work without the multi-billion dollar infrastructure in Earth orbit. Most satellites don’t need to be subsidized; they are quite profitable on their own. Yet we haven’t seen a lot of manned exploration outside of low Earth orbit, because it isn’t yet profitable.

It remains to be seen whether anyone will build a hotel in space, mine gold from asteroids, or film a reality show on Mars. In any case, the future of space exploration will require people to make money in space.

Was Malthus Right?

http://www.pbs.org/newshour/making-sense/world-woe-malthus-right/

I ran across this very interesting PBS article recently (link above). It is an excellent summary of Malthusian philosophy that got me musing about Malthusianism and public policy.

Reverend Thomas Malthus first published his theories in the late 18th century, a time of dramatic social upheaval. The might of England had fallen short against the rebellious colonies, while the Ancien Régime had lost its head to the rebellious Jacobins. The only thing certain in this era was uncertainty.

Against this backdrop, Malthus proclaimed that there was a finite quantity of resources on Earth, and that the human population would always proliferate until those resources were consumed. Once the resources were exhausted, the world was doomed either to widespread famine or violence. If the overall resource level were increased by social or technological developments, humans would simply proliferate to a larger population and our overall misery would remain unchanged.

Malthus wrote that the median income of the common folk, expressed in the amount of food (pounds of wheat) they could afford, had remained constant from prehistoric times to the end of the 18th century – and this number was barely enough food to survive. The central dogma of Malthusian belief was that increasing living standards led to higher populations which led to decreasing living standards, causing a long-term equilibrium of famine and poverty.

Malthus believed that this negative feedback cycle could only be broken if the whole world decided to have fewer children. In an era where reliable contraception was nonexistent and many children died at a young age, this must have sounded as loony as putting a man on the moon.

Malthus also suggested that any large-scale charity (such as social welfare programs) would prove useless or harmful in the long run. According to Malthusian dynamics, the only thing keeping poverty in check is the death rate of poor people. Therefore, anything you did to help poor people would only cause more people to become poor. This part of his philosophy was attractive to an aristocracy terrified of the proletariat mob at their gates. As such, 19th century Malthusianism was staunchly conservative.

By the time of World War II, every civilized country had major social welfare programs in place. Thus, the “charity is harmful” portion of Malthusian philosophy was largely ignored (as it remains to this day). Instead, 20th century Malthusians focused on the importance of population control. In the pre-WWII era this often meant eugenics and forced sterilization – the Malthusian Belt of Brave New World. Again, this placed Malthusianism firmly on the conservative end of the political spectrum.

Adolf Hitler proceeded to Godwin the eugenics movement, taking it to its most horrific extreme and making it unmentionable in polite society. However, a pharmaceutical innovation revived interest in Malthus – The Pill. Oral contraceptives allowed a new generation to have kids only when they wanted to. Birth control was immediately opposed by the religious right, so Malthusian philosophy was suddenly liberal. This right-to-left shift was completed when many early environmentalists started preaching Malthusian population control as a way to decrease environmental impact.

Malthus believed that food production was the crucial limiting factor for population growth. The Earth had a “carrying capacity”, a maximum number of mouths that the planet could feed. Back in the 1950s and 1960s, food was a central dogma in Malthusian environmentalism. In The Population Bomb (1968), Paul Ehrlich stated that hundreds of millions of people would starve to death by the end of the 1970s. He suggested putting contraceptives in the water supply or in staple foods, while noting the sociopolitical impossibility of doing so.

Instead, a social and technological revolution occurred. Basic farming techniques such as irrigation, fertilizers and pesticides spread from the First World to the Third. New crop cultivars, developed first by conventional breeding and later by genetic modification, massively increased farm yields. Food prices dropped so low that many industrialized countries had to pay farmers not to farm. Even as the human population of Earth increased from roughly one billion in Malthus’s day to over 7 billion, Malthus’s prediction of widespread food shortages never came true.

A funny thing happened between the 1970s and now. Populations leveled off and started to decline in Europe, Russia, Japan, and among non-Hispanic whites in the USA. This happened despite the fact that an increasing world population had not triggered any horrific famines, wars or plagues. It also happened in the absence of any draconian measures such as Ehrlich’s hypothetical contraceptive water supply. Economists coined the phrase “demographic-economic paradox” to describe the decreasing fertility among wealthy socioeconomic groups. What public policy triumph allowed population control to finally happen? Widespread access to affordable contraception, a remedy far easier to swallow than forced sterilization.

The success of birth control could be seen as the ultimate confirmation of Malthus’s thesis that limiting the population would improve quality of life. It has undoubtedly broken the Malthusian cycle of “increased living standards -> increased birth rate -> decreased living standards”. Recent predictions suggest that human population will peak in the mid-21st century and then decline. This predicted peak doesn’t happen due to food shortages, but because humans are choosing to have fewer children. Those children will not be limited to Malthus’s “14 pounds of wheat”, they will have much greater access to food and material goods.

Reverend Malthus’ ultimate objective was to decrease the worldwide fertility rate, and by that measure he has been wildly successful. What he could not have foreseen was the method of this success. Malthusian doctrine gave birth to numerous population-limiting schemes over the centuries, many of which were impractical or inhumane. In the end, the global fertility decline occurred thanks to affordable contraception. Billions of human beings chose to have fewer children. No one forced them to do so (except in China).

I wish that more policy thinkers would draw a lesson from this part of history. You can craft onerous laws to change people’s behavior, and they will fight you every step of the way. Or you could give people the freedom to choose. If the change in behavior is truly beneficial, people will gravitate toward it over time – as has happened in every high-income country over the past several decades.

Cheating on the Turing Test

Hello, my name is Anna

The “Turing Test” asks an intelligent computer program to pretend to be human. The computer wins if humans are unable to tell the difference between a computer and a human. In the Battlestar Galactica universe, humans can’t tell the difference even when having sex with a robot. That’s a pretty convincing win. Real-world AI is quite a bit less sophisticated (and less sexy).

Over the weekend, a program named Eugene Goostman allegedly passed the Turing Test, convincing 33% of judges that it was human. Problem is, the test was so flawed that the Internet spent more time mocking the winner than congratulating it. Don’t just take someone’s word for it, go read one of the chat transcripts:

Turing-test “chatbots” have existed for years. They’ve even been the inspiration for a song. These bots converse with a combination of canned jokes and scripted answers, repeating the question, and denying knowledge when directly asked. This often leads to dead-giveaway non sequiturs, like when a ‘bot first claims that it lives in Ukraine, only to say that it’s never been to Ukraine. You would think that the people running Turing Tests would ask those “gotcha” questions right away, but that would presuppose that people running Turing Tests are actually doing meaningful AI research. In reality they are the AI equivalent of pro wrestling.

The chatbot strategy of sticking to a finite script turns out to be quite useful in the real world – not for true artificial intelligence, but for telemarketing. While a chatbot can deflect unwanted questions by claiming it “doesn’t understand” or has “never been there”, the telemarketer ’bot has an even better excuse: “Sorry, we have a bad connection”.

The Turing Test

Alan Turing published the concept of the Turing Test in 1950. Machines with “a storage of 174,380” were the state of the art, enormous and expensive. This comes out to just over 21 kilobytes, enough to hold a few seconds of MP3 audio or a very small thumbnail JPG. Such a computer couldn’t do much more than basic arithmetic, yet arithmetic was a big deal in an era when airplanes and rockets were designed by slide rule. At that point, the idea of holding a natural-language conversation seemed nearly as fanciful as composing poetry or falling in love.

In his original 1950 article, Turing predicted that a computer with “10^9 bits” (128 megabytes) would be able to adequately imitate a human, and that such a machine would be invented within “50 years”. He was right about the 128 megabytes in 50 years (a decent PC in the year 2000 would have 128MB of RAM) but absolutely wrong about being able to pass the Turing Test.
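The unit conversions behind those figures (174,380 binary digits for the Manchester machine, and Turing’s predicted 10⁹ bits) work out like this:

```python
# Converting Turing's storage figures into modern units.
manchester_bits = 174_380          # "a storage of 174,380" (binary digits)
print(manchester_bits / 8 / 1024)  # ~21 kilobytes

predicted_bits = 10 ** 9           # Turing's 50-year prediction
print(predicted_bits / 8 / 1e6)    # 125 megabytes, i.e. roughly a 128 MB PC
```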


Why is AI so difficult?

The futurists of the 1950s used arithmetic-computation speed as a surrogate for intelligence and found computers to be much “better” than humans. They wrongly assumed that this would rapidly translate into true artificial intelligence. For the purposes of this section I will use the phrase “AI” to refer to “general artificial intelligence”: machines capable of thinking their way through arbitrary scenarios as opposed to being narrowly programmed for one application. (much as a chatbot is programmed to fool humans)

In 2014, computers are billions of times faster than they were in the past. Yet no one’s come up with a computer program that can hold a real conversation, let alone demonstrate true intelligence. Why is this so difficult? I’ll go through a few hypotheses.

AI is impossible

Church and Turing proved that a “universal Turing machine” with infinite memory and time could compute any function that is computable on any other Turing machine. They also proved that certain problems are not Turing-computable, most famously the halting problem: no Turing machine can reliably determine whether an arbitrary Turing machine will halt or run forever. All digital computers are finite Turing machines, so these limitations apply to Windows, iOS, Android and Playstation.

Ever since Alan Turing came up with the concept, scientists and philosophers have argued over whether the human brain is a Turing machine. This debate is more philosophical than practical, as it’s pretty much impossible to prove either way. The human brain certainly doesn’t act like a Turing machine – nothing about it is neatly classified into “1”s and “0”s. There may be enough quantum effects in the brain’s ion channels, neurotransmitters, proteins, DNA, chromatin, microtubules and vesicles to make it completely non-computable with classical deterministic mechanics. If the human brain works in a fundamentally non-Turing way, maybe all of our thoughts are non-Turing computable.

Some people believe that while a universal Turing machine can compute everything, it may be very inefficient at doing so. For example, the difficulty of brute-forcing a cryptographic key grows exponentially with its length: increasing the length of an RSA key from 1024 bits to 2048 bits doesn’t just double the difficulty of cracking it, it multiplies it millions of times over. If you expand this kind of scaling to the ~10^11 neurons in a human brain, then even if an infinite Turing machine could replicate the brain in principle, it might require more mass, energy, and time than exist in the visible universe.
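To see how fast exponential scaling runs away, consider the simplest case, a brute-force search over a key space: every added bit doubles the work. (A sketch for illustration; real attacks on RSA use cleverer, sub-exponential algorithms, but the blowup is still enormous.)

```python
# An n-bit key space doubles with every added bit.
def keyspace(bits: int) -> int:
    """Number of candidate keys a brute-force search must cover."""
    return 2 ** bits

# Going from a 56-bit key to a 128-bit key multiplies the work by 2^72:
print(keyspace(128) // keyspace(56))  # about 4.7 sextillion times harder
```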

AI is possible, but our computers aren’t good enough yet

Maybe Turing et al. were off by a few orders of magnitude. Maybe instead of requiring 128 megabytes to simulate a human mind, computers will require 128 exabytes. If this theory is correct, it’s only a matter of time and Moore’s law (assuming it keeps on scaling) before computers overtake human intelligence.

The problem with this hypothesis is that it is non-disprovable. As long as true AI doesn’t exist, and computers are getting more powerful every year, you can always presume that true AI will exist next year.

Also, it’s not very interesting to talk about.

AI is possible, but humans aren’t smart enough to program it

What if our current PCs are more than powerful enough to host human-level intelligences, but humans just aren’t smart enough to write that code?

This theory isn’t all that far-fetched. After all, human intelligence is obviously finite. We have difficulty memorizing anything longer than a 10-digit phone number. We don’t fail to struggle with relatively non-difficult logical conjunctions such as multiple negatives. We cannot accurately remember smells.

We know that when animals run up against the limits of their finite intelligence, they cannot solve a problem no matter how hard they try. No matter how many times a fish wakes up and sees its reflection in the fishtank, it will always be scared of the “second fish”. It will never realize that it’s just a reflection.

So it’s entirely possible that humans are simply not smart enough – that a million Steve Jobses working for a million years could never come up with a workable AI algorithm – while a nonhuman superintelligence could program your iPhone to be smarter than you.

If this “God-touched iPhone” copied itself over to the next generation of iPhones, it would become smarter. Then it could take over an iPhone factory and put more memory and more processor power in the next generation. The “God-touched iPhone” would very quickly take over humanity (hopefully in a benevolent way). After several generations, the iPhone may become as superintelligent as its Creator. Alternatively, it may plateau at some level of machine intelligence that is superior to humans, but not yet able to create other intelligences. God would remain God, machine would become Angel, and Man would remain Man.

The most frightening possibility is that humans may be too stupid to intentionally create an AI, but we could accidentally create an AI. If that AI was more intelligent than humans, it might be smart enough to improve its own intelligence until it became infinitely smarter than us.

At that point, we’d be left hoping that the AI enjoys having sex with humans.

What do you guys think? Leave a comment!