Peter Thiel’s CS183: Startup - Class 19 Notes Essay

Here is an essay version of my class notes from the last class of CS183: Startup, class 19. Errors and omissions are mine.

The following three guests joined the class for a discussion:

  1. Sonia Arrison, tech analyst, author of 100 Plus: How the Coming Age of Longevity will Change Everything, and Associate Founder of Singularity University
  2. Michael Vassar, futurist and past President of the Singularity Institute for Artificial Intelligence (SIAI)
  3. Dr. Aubrey de Grey, gerontology expert and Chief Science Officer at the SENS Foundation

Credit for good stuff goes to them and Peter, who gave the closing remarks. I have tried to be accurate. But note that this is not an exact transcript.

Class 19 Notes Essay—Stagnation or Singularity?

I. Perspectives 

Peter Thiel: Let’s start by having each of you outline your vision of what kinds of technological change we might see over the next 30 or 40 years.

Michael Vassar: It’s a lot easier to talk about what the world will look like 30 years from now than 40 years from now. Thirty seems tractable. Today, we’ve gone from knowing how to sequence a gene or two to thousand-dollar whole genome sequencing. Paul Allen is running a $500 million experiment that seems to be going very well. This technological trajectory is both exciting and terrifying. Suppose, after 30 years, we have a million times today’s computing power and achieve a hundred times today’s algorithmic efficiency. At that point we’d be in a place to simulate brains and such. And after that, anything goes.

But this kind of progress over the next 30 years is by no means something we can take for granted. Getting around bottlenecks—energy constraints, for example—is going to be hard. If we can do it, we’re at the very end. But I expect that there will be a lot of turmoil along the way.

Aubrey de Grey: We have a fair idea of what technology might be developed, but a much weaker idea of the timeline for development. It is possible that we are about 25 years away from escape velocity. But there are two caveats to this supposition: first, it is obviously subject to sufficient resources being deployed toward the technological development, and second, even then, it’s 50-50; we probably have a 50% chance of getting there. But there would seem to be at least a 10% chance of not getting there for another 100 years or so.  

In a sense, none of this matters. The uncertainty of the timeline should not affect prioritization. We should be doing the same things regardless. 

If you look at certain AI approaches, you conclude that you need both a great understanding of how the world works and a lot more computing power to pull them off. But they are worth pursuing even at a 10% chance of success in the next 30 years. We should be sympathetic toward giving very difficult approaches the time of day. Orchestrating the development of technology is not easy. It’s a process of sidestepping ignorance and planning to manipulate nature based on an incomplete picture of it. Whether and when we achieve full-blown uploading is so speculative that it’s probably not worth talking about in real probabilistic terms. But our priorities should be the same: develop radical technology in biotech, computation, hardware, etc.

Sonia Arrison: I spend most of my time looking at biotech, so I’ll talk about the biotech slice first. It is clear that biology is quickly becoming an engineering problem. I got interested in biotech several years ago when my CS friends started picking up biology books. They thought, probably accurately, that the next big thing in coding would be bio, not computers. This is now a mainstream view. Bill Gates has said something like this, along with several others. Great hackers go into biotech. In 30 or 40 years, the bio-as-engineering paradigm could make the world look radically different. There is a sense in which genomics is moving faster than Moore’s law. Prices are falling; sequencing the first human genome cost around $3 billion. Now it can be done for around $1,000. There is work being done on a genomic compiler, which would make it easier to hack all sorts of organisms’ genomes, which would in turn open up all kinds of possibilities.
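To put the “faster than Moore’s law” point in rough numbers, here is a back-of-the-envelope sketch. The $3 billion and $1,000 figures are the ones quoted above; the 12-year span and the 2-year doubling period are assumptions added for illustration, not figures from the discussion.

```python
# Rough comparison of the quoted sequencing-cost drop against what
# Moore's-law-style doubling alone would predict over the same period.
first_genome_cost = 3_000_000_000   # ~$3 billion (figure quoted above)
current_cost = 1_000                # ~$1,000 (figure quoted above)
years = 12                          # assumed span (draft genome to ~2012)
doubling_period = 2                 # assumed Moore's-law doubling, in years

cost_drop = first_genome_cost / current_cost     # ~3,000,000x cheaper
moore_factor = 2 ** (years / doubling_period)    # ~64x improvement

print(f"Sequencing cost fell ~{cost_drop:,.0f}x")
print(f"Moore's law alone would suggest ~{moore_factor:.0f}x")
```

A roughly 3,000,000x cost drop in about a decade dwarfs the ~64x improvement that transistor-style doubling would predict, which is the sense in which genomics has outpaced Moore’s law.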

The big complaint right now is that, despite the fact that the first draft of the human genome was sequenced in 2000, twelve years later not that much has actually happened in terms of new treatments or cures based on the technology. This criticism is weak because it misses an important point: for most of those 12 years genomic sequencing was so expensive that very few scientists could do the work they wanted to do using genomes. Of course, now that prices have fallen substantially, barriers are falling in a serious way. Things will happen—people are working on radical new things. Gene therapy promises to cure diseases. It’s possible that we can develop new kinds of fuels. There is a Kickstarter project that involves taking an oak tree and splicing firefly genes into it. The end result would be trees that glow. More than just cool in its own right, maybe you could use those firefly trees to illuminate roads instead of streetlamps. That’s awesome. And there is so much more that we can’t even fathom right now. A lot can, and will, happen at the nexus of bio and engineering.

In the short run and outside of biotech, the shift to online education seems like it will dramatically change how people learn. Things like the Stanford AI class, Udacity, and Khan Academy—we don’t know exactly how it will all play out, but it’s safe to say that there are a lot of things to look forward to on this front.

Peter Thiel: Let’s engage on the culture question: why do most people think you’re crazy?  

Michael Vassar: For whatever reason, having opinions about the future is seen as strange. Only a small minority of people forms opinions about the future—even the near future. Perhaps this is because thinking about the future is uncomfortable and kind of difficult. People prefer to work with models in which one variable changes along a linear trajectory while everything else stays the same. We know that that’s nonsense, of course; the world doesn’t work like that. But it makes for easy conversation. Keeping the discourse at that simplistic level allows us to focus on one thing and work together today. Factoring in 100 variables would in some sense break that dynamic. But thinking about the future is very important, and right now that can be isolating. Diverging from people means that there are fewer people you can talk to. There is less shared context; people tend not to understand where you’re coming from.

But this is not to say that people just have different beliefs than we do. Usually, they don’t. You don’t usually encounter anti-singularity views. Maybe some global warming people or apocalypse people are affirmatively anti-singularity. But most people aren’t substantively engaging. What is perceived as crazy isn’t the substance of the belief itself, but rather having the belief in the first place. 

Aubrey de Grey: I disagree a bit. People do tend to have some view of the future. They usually project relative stagnation. People tend to believe that not only will most things not change, but that what does change won’t change very quickly. People who criticize my views on biotech and aging, for instance, do not identify bad logical steps or seize on anything substantive. Rather, they choose not to believe what I’m saying because it conflicts with their bias toward stagnation. They walk away quite sure that the rate of progress in anti-aging and longevity technology will never accelerate. That is pretty striking.

I try to dispose of this by pointing out that if you were to ask someone in 1900 how long it would take to cross the Atlantic in 1950, they would make a prediction drawing from ocean liner speed trajectories up to that point. They wouldn’t be able to foresee the airplane. And so their calculation would be off by orders of magnitude.

Of course, everyone knows how much technological change has happened in the past few centuries and decades. Everyone knows what the Internet did in recent years. But there is a huge reluctance to apply any of this as precedent for what might or is likely to happen in the future.

There’s also a desirability aspect to it. Fear of the unknown is such a deep-seated emotion. When people encounter a radical new proposition, they are biased to think that things will go way wrong. It is very hard for people to consider the reasonable likelihood of those scenarios unfolding, so they exaggerate risks. More rational aspects to the conversation go out the window.

Sonia Arrison: For the record, no one thinks that I’m crazy. 

Peter Thiel: You’re the best disguised…

Sonia Arrison: Well, “crazy” is a hard claim to make since I focus on actual technology that is grounded in reality. I write about tissue engineering, regenerative medicine, and biohacking, for instance. That exists now. And it’s going to continue to develop and, I think, really change the world. There are three reasons that people sometimes have a problem with this stuff. First, they don’t understand it. Second, they don’t believe it. Third, they fear it.

Think about the firefly/oak tree street lamps for a second. Just the idea of that terrifies some people. It’s completely different from how things are now. Some people respond with knee-jerk reactions: “Don’t mess with nature!” “Don’t play God!” This reaction is understandable, but it stands in the way of progress. It’s not the best reaction. In a lot of ways it doesn’t really make sense.

Peter Thiel: Is the best approach to ignore those people, then?

Sonia Arrison: Better than ignoring them is trying to educate them. It is important to explain things clearly. Technology that people do not understand looks a lot like magic sometimes. And magic is scary. But if you distill and explain—“this is x, this is what it does”—you can sell them on it. It’s just a matter of clearly communicating the benefits versus the costs. “This will drive out dirty fossil fuels,” for example, might be one persuasive line of argument in favor of the firefly/tree hybrid street lamps.

Peter Thiel: There’s a compelling case that we’ll very likely see extraordinary or accelerated progress in the decades ahead. So why not just sit back, grab some popcorn, and enjoy the show?

Another cut at the question is this: In Kurzweil’s The Singularity Is Near, progress follows an exponential growth curve. It’s a law of nature. In a sense, the singularity is happening regardless of what individual people actually do to make it happen. The assumption is that there will always be enough people who try things, so you, as an individual, don’t actually have to do anything; you can just wait for things to happen. Is there anything wrong with that argument?

Aubrey de Grey: Yes, there is. It doesn’t only matter that these technologies are developed. When they are developed is hugely important as well. Take anti-aging science, for instance. Very close to 150,000 people die every day. About 100,000 of these daily deaths are aging-related. (Probably about 90% of deaths in Western countries are aging-related.) So every day that the solution arrives sooner saves about 100,000 lives. From this perspective, it doesn’t matter how inevitable the singularity is. Inevitability is cold comfort to the people losing their lives or loved ones now. We want the defeat of aging by medicine as soon as possible, for the simple reason that more suffering is alleviated the sooner we achieve it.

Michael Vassar: I strongly agree. It is important to work toward making good effects happen, and avoiding bad things. Inevitability can cut both ways; sometimes you want it to happen, if the effects are good, but sometimes you don’t want certain things to happen. Focusing just on inevitability misses other important pieces. If death is or seems inevitable and we are basically dead in the long run, there is still some chance at survival, and we should give it a damn good fight. 

Besides, popcorn is bad for you. Though I guess Aubrey might figure out a way to make it not so bad for you… 

Sonia Arrison: Focusing on inevitability alone is dangerous because it allows people to get complacent about bad systems in place. People might ignore the many perverse incentives that often thwart or frustrate the many scientists working on radical technologies. Too few people are thinking about how the FDA might be blocking very important developments. If it’s all going to happen anyway, there’s less of a sense that it is important to reform what we have now so we can better realize our goals. But of course that kind of reform is terribly important, and it won’t happen if we don’t work towards it.

Peter Thiel: So who do you think is going to do this? Who is going to forge the technological future?

Michael Vassar: You.  [laughter…] 

Peter Thiel: [pause] Michael… you’re supposed to be motivating the people in this class.

Michael Vassar: But I’m serious. It’s a short list of people. You, Elon, Sean…

Sonia Arrison: My take is that innovation comes from two places: top-down and bottom-up. There’s a huge DIY community in biology. These hobbyists are working in labs they set up in their kitchens and basements. On the other end of the spectrum you have DARPA spending tons of money trying to engineer new organisms. Scientists are talking to each other from different countries, collaborating on synthetic bio projects. All this interconnectedness matters. All these interactions in the aggregate will bring the change.

Aubrey de Grey: I disagree. My answer is Oprah Winfrey.

Yes, there are a few people like Peter. There are a very few visionary people who can make a real difference at the formative early stage. But there are also many people with Peter’s net worth who aren’t doing this. It’s not that these people don’t understand the issues or the value of technology. They understand these things very well. But they are held back by social opinion. They probably can’t articulate this well to themselves, let alone to others. But they face viscerally emotional blockades that the people around them erect. Just because you’re rich doesn’t mean you don’t fear people laughing at you. Many potential visionaries are held back by little more than social pressure to conform.

This is why mainstream opinion formers are absolutely pivotal. Perhaps no other subset of people could do more to further radical technology. By overpowering public reluctance and influencing the discourse, these people can enable everyone else to build the technology. If we change public thinking, the big benefactors can drive the gears.

Michael Vassar: I do not think that progress will come from the top down or from the bottom up, really. Individual benefactors who focus on one thing, like Paul Allen, are certainly doing good. But they’re not really pushing on the future; they’re pushing on individual threads in the hope that the future will come faster. The sense is that these people are not really coordinating with each other. Historically, the big top-down approaches haven’t worked. And the bottom-up approach doesn’t usually work either. It’s the middle that makes change—tribes like the Quakers, the Founding Fathers, or the Royal Society. These effective groups were dozens or small hundreds in size. It’s almost never lone geniuses working solo. And it’s almost never defense departments or big institutions. You need interdependency and trust. Those traits cannot exist in one person or amongst thousands.

Peter Thiel: That’s three different opinions on who makes the future: a top-down bottom-up combo, social opinion molders, and tribes. Let’s run with some version of Michael’s tribe theory. Suppose it’s just a small cabal of tech people that drives it.

Aubrey de Grey: I think the tribe argument is right. Michael is right that single people don’t make the difference. There is too much infrastructure. Working in biology costs a fair bit of money. Developing algorithms can be quite costly too. Individuals have to fit themselves into the network of money flow, whether that network is entrepreneurial, philanthropic, or public funding. But the truly radical technology discussed in this class is so early that philanthropic support will probably play the largest role for a while longer. That can change fast as these technologies advance and more people start to see the commercial viability. When public opinion changes, the people who want to get elected will fund the things that people want, and we’ll start to see more funding for these things.

Sonia Arrison: In some sense, asking for a single source of progress is asking the wrong question. Progress can come, and almost always does come, from lots of places. Things are interconnected. Ideas build on top of each other, and often ideas that once seemed unrelated can come together later on.


Question from the audience: We know that progress has happened in the past. But fairly rarely did that progress look like what people were expecting beforehand. So how do you know that your claims as to how progress is going to happen in the future are right? What do you make of the line that “most discussion about the future is either fantasy or bullshit”?

Michael Vassar: People used to predict the future in a pretty determinate way. Suppose you’re looking for oil. That involves making fairly concrete predictions: there is x amount of oil at y place, and it will last z number of years.

People have largely stopped doing that. Recent science fiction is a bit more on point than the science fiction of old. It used to be hard to predict the distant future; relative to that, it may actually be quite easy to predict what the late 2020s will look like. But it is unusually hard to make any statement about 2040.

People were much better at predicting the future before movies and mass media. The tools were logic and trend analysis, not what looked cool on the big screen. Modern forecasts of the future are often more about looking credible than about making reasonably accurate predictions. 

Consider things like Neal Stephenson’s Snow Crash—some very good abstraction there, somewhat satirical. There are lots of details that probably aren’t going to play out like that in the actual 2020s. But we can think of them as being about as reasonable as Kurzweil’s descriptions of possible future technology.

Sonia Arrison: The question basically says, “Well, a lot of people were wrong about the future in the past, so we shouldn’t talk about it now.” That’s nonsense. Yes, people will be wrong. But we’re not talking pie-in-the-sky guesses about the future. We’re talking about what is here now, and reasonably extrapolating from that. This isn’t science fiction. Gene splicing and gene therapy exist. We can create living code, as Craig Venter demonstrated. The questions are how long will this take and how fast can we go. These are difficult questions to answer. But that doesn’t mean we can’t think about them. We should think about them. That people have various perspectives doesn’t invalidate the project.

Question from the audience: Will the future be a science problem or an engineering problem?

Aubrey de Grey: We are right in the middle at this point. In medicine and computation, for instance, we are seeing a shift from inherently exploration-based, science-based perspectives to engineering perspectives. 

Michael Vassar: Science matters much more than engineering does. But it’s easier to talk about engineering. So one should use engineering to discard the 99.9% of people who have no clue what’s going on. But then one should get into the science with the remnant. That is where the upside will come from.

Sonia Arrison: There is also a knowledge aggregation problem. It is hard or impossible for one human brain to know everything. So people don’t know what other people are doing, and they sometimes work on overlapping or redundant things. To the extent computers can better organize knowledge, people’s efforts will be further streamlined, whether they are scientific or engineering-focused.


Question from the audience: On the hardware side, Moore’s Law seems like it’s going to continue to hold. But on the software side, the process of software engineering and collaboration seems to be improving only linearly. Is there a leveragability problem or some hidden limit there?

Michael Vassar: Linear growth in capabilities can get you over key hurdles. There is a feedback loop. Linear growth can be enough for you to nail down a process, leverage it, and get the positive feedback that drives phase transitions with exponential growth arcs. And then you’re back to growing linearly.

This is true for probably all of psychology and for AI (which is essentially psychology-as-engineering).

Peter Thiel: We know that, in practice, timing is very important. So while we don’t know exactly when radical technology of the future will come to be, the timing does make a great deal of difference. If it’s all crazy science fiction that’s barely plausible, it might not make sense to work on it now. That would be like the Chinese man who tried to launch a rocket into space in the 11th century. No one was or should have been working on supersonic flight in the Middle Ages. That would be paddling way too far in advance of the wave.

Aubrey de Grey: I’m not sure the timing question is so critical. There must always be stepping-stones to an eventual goal. In the 11th century, the goal may have been to travel to the moon. But the technology then only permitted, say, a prospective space traveler to get one foot off the ground. So at that time, you’d get the equivalent of your PhD if you could make a system that got you 10 feet off the ground.

The question is thus which trajectories will lead toward the ultimate goal and which ones will fail. We must identify the good trajectories and prioritize them. But without the long-term goal, you can’t organize competing trajectories, and you’ll never get there.

Peter Thiel: So perhaps a 20-year goal with lots of milestones along the way would be a good approach. The problem there is that too many milestones make the achievability of the end goal rather speculative.

Aubrey de Grey: You have to see that coming, and avoid the wrong turns.

There are also humanitarian reasons to set our sights high. We must remember that roughly 100,000 lives are saved for each day sooner that the solution to aging arrives. In that light, 20 years is dramatically better than 21.

Sonia Arrison: People usually become deterred if a goal seems too hard or impossible. We can’t expect everyone to be a tireless visionary. So showing traction is key. We can grow blood vessels and tracheas and bladders in the lab. So maybe we can get to hearts. The stepping-stones are key, since without them, fewer people will be as excited about the prospects of engineering new hearts.

Michael Vassar: The Apollo project was a tremendous 10-year project with lots of technological convergence. That was more than 40 years ago. At this point we probably can’t even go to the moon anymore.

Framing the U.S. Constitution was an incredible accomplishment. The Founders had the knowledge to do that. They wrote for a particular socioeconomic and technological context. They didn’t intend to write the end-all governing document for the entire world for all eternity. And yet, when we take over a Middle Eastern country today, we basically copy our Constitution. We have no idea how to do what our Framers did some 200 years ago. We’ve lost the ability to make such a culturally nuanced system. Applied history is underrated.

Question from the audience: No trend can run forever without running into limits. Where does the future become asymptotic? When do we reach the limits of the physical world? How long does the exponential part go, and when does it stop?

Michael Vassar: It’s hard to say where it stops. Probably not for a good while; there’s much more to be done. If something happens x times in a row, and no other variable is at play, one way to think about the chance of it happening again is to estimate it at (x+1)/(x+2). It’s a really crude technique, but it can be quite useful.
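The (x+1)/(x+2) estimate is, in effect, Laplace’s rule of succession. Here is a minimal sketch of it; the function name and the sample numbers are mine for illustration, not from the discussion.

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimated probability that the next
    independent trial succeeds, given `successes` successes in `trials` trials."""
    return (successes + 1) / (trials + 2)

# Vassar's case: something has happened x times in a row (x successes in
# x trials), so the estimate that it happens again is (x + 1) / (x + 2).
print(rule_of_succession(10, 10))  # 11/12 ~= 0.917
```

The estimate climbs toward 1 as the streak grows but never reaches it, which matches the intuition that a long run of progress makes continued progress likely without guaranteeing it.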

Aubrey de Grey: Kurzweil acknowledges that you get S-curves. But those curves tend to be replaced by new S-curves with each paradigm shift. Merge all those curves into one and you get a mega S-curve. Obviously there’s only so far you can go within physical laws. But we’re not hitting those problems yet.

Sonia Arrison: At some point, things decelerate. But that’s okay. Necessity is the mother of invention. There will be other things to tackle. There will always be a new exponential curve.

Question from the audience: We at the Stanford Transhumanist Association are interested in open dialogue about the consequences of technological change, so we do a lot of research on how core emotions like fear or empathy come into play when people evaluate technology.

What do you think are the most effective ways to get people interested in and comfortable with transhumanist ideas?

Sonia Arrison: Sometimes it’s possible to just appeal to the humanistic side. Certain aspects of transhumanism would, fully realized, alleviate lots of suffering. Some issues fit in that category pretty well. So if you frame it right, the conclusion becomes a no-brainer. No one wants net suffering.

Other things don’t fit in that category as well. These are the things that just look radically different from the status quo—we might think they’re cool, but that’s not others’ default. The emotional argument on these things is that people should be free to be individuals. But there can be a serious fear factor on freedom. Some people are generally scared of it. So the problem is much harder.

Michael Vassar: You could appeal to people’s sense of wonder. If you’ve ever interacted with an Alzheimer’s sufferer or someone who has a mental disability, you might have gotten a sense that they were missing something. Well, so are we. The gap between them and us is practically nothing in the space of possibilities. We’re probably missing out on a great many things. Shouldn’t we try to fix things so we’re missing less?



II. Closing Thoughts (from Peter Thiel)

This course has largely been about going from 0 to 1. We’ve talked a lot about how to create new technology, and how radically better technology may build toward singularity. But we can apply the 0 to 1 framework more broadly than that. There is something importantly singular about each new thing in the world. There is a mini singularity whenever you start a company or make a key life decision. In a very real sense, the life of every person is a singularity.

The obvious question is what you should do with your singularity. The obvious answer, unfortunately, has been to follow the well-trodden path. You are constantly encouraged to play it safe and be conventional. The future, we are told, is just probabilities and statistics. You are a statistic. 

But the obvious answer is wrong. That is selling yourself short. Statistical processes, the law of large numbers, and globalization—these things are timeless, probabilistic, and maybe random. But, like technology, your life is a story of one-time events. 

By their nature, singular events are hard to teach or generalize about. But the big secret is that there are many secrets left to uncover. There are still many large white spaces on the map of human knowledge. You can go discover them. So do it. Get out there and fill in the blank spaces. Every single moment is a possibility to go to these new places and explore them. 

There is perhaps no specific time that is necessarily right to start your company or start your life. But some times and some moments seem more auspicious than others. Now is such a moment. If we don’t take charge and usher in the future—if you don’t take charge of your life—there is the sense that no one else will.

So go find a frontier and go for it. Choose to do something important and different. Don’t be deterred by notions of luck, impossibility, or futility. Use your power to shape your own life and go and do new things. 

Tags: cs183