Elon Musk Reveals HUGE SpaceX Announcement in Exclusive Interview
In this exclusive interview from January 2023, Elon Musk reveals some exciting SpaceX news.
Well, thanks for doing this. We really, really appreciate it.

Sure.

Let me just sort of set the stage. I think we share your concern about AI, about, one, people not really understanding what it is, but also understanding the potential consequences for humanity.

No problem. I might occasionally just stop and check for any emergencies coming through regarding Tesla.

That's understood. This had to happen, fortunately. I've got Bolero stuck in my head.

What do you got? Sorry? What's that?

I have Bolero stuck in my head.

Good. Let's plunge in. Okay. I mean, at the very basic level, when you think, like, how should people think about artificial intelligence? Like, if you were to explain it to one of your younger children, you would say artificial intelligence is what?

It's just digital intelligence. And as the algorithms and the hardware improve, that digital intelligence will exceed biological intelligence by a substantial margin. It's obvious.

When you say that it'll exceed human intelligence, at some point soon the machine is going to be smart, not just smarter, like, exponentially smarter than any of us. Ensuring that the advent of AI is good, or at least that we try to make it good, seems like a smart move. But we're way behind on that.

Yes, we're not paying attention. We're worrying more about what name somebody called someone else than whether AI will destroy humanity. That's insane.

Before we get to, like, solutions...

And we're like children in a playground.

This could be a huge problem for society. What are the scenarios that scare you most?

Humanity really has not evolved to think of existential threats in general. We've evolved to think about things that are very close to us, near term, to be upset with other humans, and not really to think about things that could destroy humanity as a whole. But then in recent decades, really just in the last century, we had nuclear bombs, which could potentially destroy civilization, obviously.
We have AI, which could destroy civilization. We have global warming, which could destroy civilization, or at least severely disrupt civilization.

Excuse me, how could AI destroy civilization?

You know, it would be something the same way that humans destroyed the habitat of primates. I mean, it wouldn't necessarily be destroyed, but it might be relegated to a small corner of the world. When homo sapiens became much smarter than other primates, it pushed all the other ones into small habitats. They're just in the way.

Couldn't AI, even in this moment, just with the technology that we have before us, be used in some fairly destructive ways?

You could make a swarm of assassin drones for very little money, by just taking the face ID chip that's used in cell phones, and having a small explosive charge and a standard drone, and have them just do a grid sweep of the building until they find the person they're looking for, ram into them, and explode. You could do that right now. No new technology is needed.

People just think this stuff is out of sci-fi novels and movies, and it's so far away. But every time I hear you speak, it's like, no, this stuff isn't far away, it's right here.

Probably a bigger risk than being hunted down by a drone is that AI would be used to make incredibly effective propaganda that would not seem like propaganda.

So these are deepfakes?

Yeah. Influence the direction of society, influence elections. Artificial intelligence just hones the message, looks at the feedback, makes the message slightly better. Within milliseconds it can adapt the message, shift, and react to news. And there are so many social media accounts out there that are not people. How do you know whether it's a person or not a person?

One reason that regulators and others are a little bit in denial about this is the speed, the pace of change. What is the consequence of that speed of change?
The way in which regulation is put in place is slow and linear, and we are facing an exponential threat. If you have a linear response to an exponential threat, it's quite likely the exponential threat will win. That, in a nutshell, is the issue.

You have a neuroscience company, and you're working to build basically an interface to the brain.

Yeah. An electrode-to-neuron interface at a micro level.

Okay, what is it? I'm going to have a plug in my head that's going to fit into a hard drive?

Yeah, a chip and a bunch of tiny wires.

This would be implanted surgically, and it would do what? Could you input? Could you download, Jim?

Yes. The long-term aspiration of Neuralink would be to achieve a symbiosis with artificial intelligence, and to achieve a sort of democratization of intelligence, such that it is not monopolistically held in a purely digital form by governments and large corporations.
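Musk's "linear response to an exponential threat" framing is really a claim about growth rates: anything that doubles on a fixed schedule eventually overtakes anything that grows by a fixed amount per step. A minimal numerical sketch makes the point; the specific rates and the doubling period below are made-up values for illustration only, not figures from the interview.

```python
# Illustrative only: a "linear response" (fixed capacity added per year)
# versus an "exponential threat" (capability doubling on a fixed schedule).
# All rates here are invented for the sake of the example.

def linear_response(t, rate=10.0):
    """Capacity built up at a constant rate per year."""
    return rate * t

def exponential_threat(t, start=1.0, doubling_years=2.0):
    """Capability that doubles every `doubling_years` years."""
    return start * 2 ** (t / doubling_years)

# The linear side starts out far ahead, but the exponential side
# always crosses over eventually. Find the first whole year it does.
crossover = next(t for t in range(1, 100)
                 if exponential_threat(t) > linear_response(t))
print(crossover)  # prints 15 with these illustrative parameters
```

With these numbers the response looks comfortably ahead for over a decade, then is overtaken within a couple of doubling periods, which is the dynamic the quote is pointing at.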