AI—What do we have to fear?
Inside the heads of Elon Musk, OpenAI (ChatGPT) cofounder Sam Altman, Silicon Valley OG Bill Joy, Google cofounder Larry Page, and the anti-AI-gloomer VC kingpin Marc Andreessen.
Terminator 7 — Coming to a theatre near you!
Back in April 2000, Silicon Valley OG Bill Joy scared the shit out of people and sparked an international debate by publishing an article in Wired magazine in which he postulated that, given the accelerating development of robotics, genetic engineering, and nanotech, the era of living with intelligent and sentient robots was imminent. The scary part of this scenario, according to Mr. Joy, is that because these robots could make their own decisions, they might decide that humans are not suitable for the planet and enslave or eliminate humanity.
“If the machines are permitted to make all their own decisions, we can’t make any conjectures about the results because it is impossible to guess how such machines might behave. I only point out that the fate of the human race would be at the mercy of the machines because people will be so dependent on them that turning them off would amount to suicide.”
—Bill Joy, in his Wired article Why The Future Doesn’t Need Us, April 2000
It’s been over 20 years since Mr. Joy’s article, and so far, there are no overbearing sentient robots yet in sight, but there has been a new scare on the horizon centered around the field of artificial intelligence. To examine the potential threat to humanity this heavily funded and rapidly advancing science might present, I thought it would be good to start by getting inside the head of another Silicon Valley OG, Elon Musk.
Elon Musk: We cannot hold a candle to AI.
Elon Musk has been involved with AI since college, has deployed AI and neural-network technology throughout his Tesla and SpaceX operations, and cofounded the original OpenAI (ChatGPT) when it was created as a non-profit foundation. With all that knowledge under his helmet, Elon still believes that the smartest creatures on this earth, as far as we know, are humans, and that this is our defining characteristic. "We are less agile, yet way smarter," he says.
This all being said, he still publicly wonders aloud what it will be like when we inevitably build superintelligent machines out of silicon that are vastly smarter than humans. "People call it 'The Singularity,' and that's probably a good way of thinking about it. It's singular—it's hard to predict, like at the center of a black hole. When AI systems reach this moment—when they can outperform humans in virtually every way—what happens next is anybody's guess," he says. As such, Elon believes that AI is "one of the four or five things that would most impact our future."
"AI is more dangerous than mismanaged aircraft design and maintenance or bad car production. It has the potential to cause, however small one might regard the possibility—the destruction of civilization. It's not a trivial potential outcome. We are currently headed towards a state where machines will make more decisions for us and take control of us in ways we can't turn off. It would happen like Terminator—except the intelligence will be in the data centers—but the robots will be the end effectors, and some will be moving so fast that you won't be able to see them without a strobe light." —Elon Musk, April 2023
As we sit on the precipice of this hypothetical moment in time, 'the singularity,' when AI, robots, genetic engineering, and nanotech become so advanced that humanity undergoes a dramatic and irreversible change, what should be our greatest fear? Elon offers an example of what we need to watch for in the short run: central powers using superintelligent machines to produce very persuasive content to control large swaths of the population. "The pen is mightier than the sword," says Elon. "We could see a time when bad actors use superintelligent AI capable of writing incredibly well and convincingly, spreading their messaging on Twitter, Facebook, and other social media to deceive and manipulate public opinion in their favor. How would we even know?"
Digital God versus the Speciesists
In his April interview with Tucker Carlson, Elon explained how the break-up of his friendship with Google cofounder Larry Page was his impetus to start the original OpenAI as a non-profit.
Elon: The reason OpenAI exists at all is because I used to be really close friends with Larry Page and would stay at his house in Palo Alto. We would talk late into the night about AI safety, and my impression was that Larry wasn't taking AI safety seriously enough.
Tucker: What did he say about it?
Elon: He really seemed focused on achieving digital superintelligence—essentially a Digital God, if you will, and as soon as possible. He has made many public statements over the years that the whole goal of Google is to create AGI, or artificial general intelligence. [Note: AGI is a type of hypothetical intelligent agent that perceives its environment, takes actions autonomously, and can learn to accomplish any intellectual task that human beings can perform. The 'general' in the label refers to the fact that the agent achieves 'generalized human cognitive abilities.'] I agree with him there is potential for good here, but there is also potential for bad. It is not necessarily going to be bad, but it will be outside of human control. So if you have some radical new technology, you want to set standards that maximize the probability it will do good things and minimize the probability it will do bad things. You can't just barrel forward and hope for the best. So at one point, I said to Larry, we have to make sure humanity is okay here, and then he called me a speciesist. (chuckles).
Tucker: A speciesist? (howls) Did he use that term?
Elon: Yes. There were witnesses. I wasn't the only one there. So I say, yes, I am a speciesist. You got me. I am fully a speciesist. Busted. (laughs) What are you? (howls) So that was really the last straw. At the time [2013], Google and DeepMind [a British AI research lab acquired by Google in 2014] had about three-quarters of all the AI talent in the world, obviously a tremendous amount of cash, and more computers than anyone else, so we were living in a unipolar world where one company had a near monopoly on AI talent and scaled computing, and the person in charge didn't seem concerned about safety. This is not good. So I thought, 'What is the furthest thing from Google?' which would be a fully open non-profit. The 'open' in OpenAI stands for open source and transparency so people know what is going on. I'm normally in favor of for-profit companies, but the idea was not to be a profit-maximizing demon from hell that never stops. So that is why OpenAI was founded. Very unfortunately, they later decided to become a for-profit company.
Tucker: So you want speciesist incentives here?
Elon: That's right. We want pro-human incentives, incentives that respect the future and are good for humans, because we are humans.
It's 420-time with Elon and Joe
Beyond superintelligent propaganda machines and Google, Elon has also shared grave concerns about autonomous weapons powered by AI. In 2017, Elon signed an open letter, along with Mustafa Suleyman, the cofounder and former head of applied AI at DeepMind, and 116 other AI experts, urging the United Nations to block the use of lethal autonomous weapons, expressing concerns that they could lead to an arms race and be used in unethical or malicious ways. More recently, in March 2023, he signed another open letter calling for a pause on developing AI systems more powerful than GPT-4, the model behind the current version of the AI chatbot ChatGPT, so robust safety measures could be designed and implemented. Since then, he has taken on a more fatalistic disposition towards AI. "I tried to convince people to slow down AI and regulate AI. I even met with Obama with one message: better watch out. This was futile. I tried for years. Nobody listened," he told Joe Rogan.
Almost four years ago, Joe Rogan and Elon smoked a blunt and talked AI; here are some excerpts that touch on the theme of this post.
Rogan: When I listen to you and Sam Harris talk about AI, it scares the shit out of me. This is a genie you are never getting back in the bottle once it's out. Are you honestly, legitimately concerned about this? I mean, is AI one of your main worries in regard to the future?
Elon: Yes. It will be tricky here because it will be very tempting for humans to use AI against each other as a weapon. In fact, people will use AI as a weapon, a potential danger that is out of our control.
Rogan: How far away are we from a truly sentient machine that can make up its own mind, independent of whether it is ethically and morally correct?
Elon: Well, one could argue that any group of people, like a company, is essentially a cybernetic collective of people and machines. There are different levels of complexity in how these companies are formed. There is a kind of collective AI in Google Search where we are all plugged in as nodes on the network, like leaves on a big tree. And we are all feeding this network with our questions and answers and collectively programming the AI. So Google and all the people constantly connected to it are one big cybernetic collective. This is also true of Facebook, Twitter, Instagram, and all the social networks. They are giant cybernetic collections that combine human intelligence with machine intelligence.
Rogan: It seems that it is built in all of us—like an instinct—to want to push innovation forward so that we can get the next great iPhone or Tesla, and this push will get us to some incredible point. And it makes us happy— like an ant building an ant hill, we feel like it's our job to fuel this.
Elon: It does feel like we are the biological bootloader for AI. We are building it, and the percentage of intelligence that is not human is increasing. Ultimately, we will represent a very small percentage of intelligence. And the AI is shaped, strangely, by the human limbic system; it is largely our id writ large. Those primal drives, the things we hate and like and fear, are all there on the Internet as a projection of our limbic system.
Rogan: It makes sense. These social media networks are some sort of organism that's a combination of electronics and biology. But what's the ultimate idea behind it? What are you trying to accomplish with it? What is the best-case scenario?
Elon: I think the best-case scenario is we effectively merge [our brains] with AI, where AI serves as a tertiary cognition layer. We've got the limbic system—our primitive brain—and we've got the cortex—and our limbic system and cortex are in a symbiotic relationship. And the cortex is mostly in service to the limbic system—that instinct you are referring to. People may believe the thinking part of themselves is in charge, but it's mostly their limbic system that's in charge. And the cortex is trying to make the limbic system happy. That's what most of that computing power will be oriented towards—how can it make my limbic system happy? That's what it's trying to do. Now, if we have a third layer—the AI extension of ourselves—that is also symbiotic. And if there's enough bandwidth between the cortex and the AI extension of yourself such that the AI doesn't de facto separate, that could be quite a positive outcome for the future.
Rogan: So instead of replacing us, it will radically change our capabilities?
Elon: Yes. It will enable anyone who wants to have superhuman cognition. And the availability of this upgrade will not be subject to your earning power because your earning power will be vastly greater after you do it. That's the theory. And if that's the case, then—and let's say billions of people do it—the outcome for humanity will be the sum of human will—the sum of billions of people's desire for the future.
Rogan: If you had to explain it to the average person—how much different would people be from today? When you say "radically improved," what do you mean?
Elon: How much smarter are you with a phone or computer than without? You're vastly smarter, actually. You can answer any question if you're connected to the Internet. You can remember flawlessly now because your phone's memory is essentially perfect. You can store images and videos and create and save things never possible before. Our phone is already an extension of us. Most people don't realize it, but we are already cyborgs. It's just that the data rate—the communication rate between you and the cybernetic extension of yourself—your phone and computer—is slow. It's very slow. It's like a tiny straw of information flow between your biological self and your digital self. And we must make that tiny straw a giant river: a huge, high-bandwidth interface. It's an interface problem, a data rate problem. Solve the data rate problem, and then we can hang on to human-machine symbiosis in the long term. And then people may decide whether they want to retain their biological selves or not. I think they'll probably choose to maintain their biological self. But we shall see.
Note: Elon Musk owns a neurotechnology company called Neuralink, which aims to develop high-bandwidth implantable brain-computer interfaces to achieve what he was describing to Joe Rogan. Elon believes Neuralink is a potential solution to the AI control problem because it would allow humans to merge with AI and maintain some control over its development. We will examine Neuralink in a future post in our AI series.
Google DeepMind
Meanwhile, over at the Googleplex, the November launch of OpenAI's instantly popular ChatGPT chatbot lit a fire under a few people's asses in Mountain View. The first thing you do when the franchise is seriously under attack? Launch your own wanna-be chatbot competitor (a.k.a. Bard) and call in the founders.
Larry Page and cofounder Sergey Brin stepped back from operating duties in 2019 when Page handed the Alphabet CEO reins over to Sundar Pichai. In January, Mr. Pichai called in the company founders to regroup on Google's AI strategy and help plan the counter-attack against OpenAI and a slew of venture-capital-backed competitors running up from behind. The only word on the street so far is that the "Don't Be Evil" twins pitched chatbot features to put into Google's search engine and encouraged company leaders to prioritize AI in all product development plans.
"AI would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted and give you the right thing."
—Google cofounder Larry Page in an interview with Charlie Rose in 2014.
There is no reason to think that Larry & Co. has given up on achieving artificial general intelligence (AGI). World data domination via Digital God remains squarely on the boardroom table, or at least in the minds of Larry and Sergey. With the formation of OpenAI and a slew of startups with $250 billion of VC money in their pockets, Google may no longer represent three-quarters of all AI power, but it still controls the Mac Daddy of all data centers. Data is the power in this business.
"Artificial intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world." — Vladimir Putin, President of Russia.
Regarding Elon's concerns about Google's lack of AI safety, the recent resignation of the 'Godfather of AI,' Geoffrey Hinton, from Google Brain, has raised an eyebrow or two. Dr. Hinton told the New York Times the progress made in AI technology over the last five years had been 'scary.' He told the BBC he wanted to discuss 'the existential risk of what happens when these things get more intelligent than us.'
"Right now, the machines are not more intelligent than us, at least as far as I can tell. But they soon may be. We are already seeing GPT-4 eclipse humans in general knowledge by a long way. Its ability to reason is not so good, but it does handle simple reasoning. All these AI capabilities are accelerating quite quickly, so we need to worry about that. We do not want bad actors using these tools to do bad things. You can imagine, for example, if a bad actor like Vladimir Putin decided to give robots the ability to create their own sub-goals like 'I need to get more power,' that might turn out so well.
The kind of intelligence we're developing is very different from biological intelligence. Digital systems have many separate copies of the same set of weights, the same models of the world, yet they can share their knowledge instantly. So it's as if you had 10,000 people, and whenever one person learns something, everybody automatically knows it. That's how these chatbots can know so much more than any one person. In the shorter term, AI would deliver many more benefits than risks, so we can't afford to stop development. If everybody in the US stopped developing AI, China would just get a big lead."
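Note: What Dr. Hinton is describing is, in essence, how today's large models are trained: many identical copies of a network run in parallel on different slices of data and then synchronize what they learned. Below is a minimal, illustrative Python sketch of that idea; the array sizes, learning rate, and toy random "gradients" are invented for the example and do not come from any real training system.

```python
import numpy as np

# Toy illustration of Hinton's point: several identical copies of a model
# (here, just a vector of weights) each learn from different data, then
# share what they learned by averaging their updates. Every copy instantly
# "knows" what any one copy learned.
rng = np.random.default_rng(seed=0)
n_copies, n_weights, learning_rate = 4, 8, 0.1
weights = np.zeros(n_weights)  # one shared set of weights, replicated across copies

for step in range(100):
    # Each copy sees different data and computes its own (toy, random) gradient.
    local_gradients = rng.normal(size=(n_copies, n_weights))
    # Synchronize: average the gradients across all copies...
    shared_gradient = local_gradients.mean(axis=0)
    # ...and apply the same update everywhere, so all copies stay identical
    # while still benefiting from everything every copy saw.
    weights -= learning_rate * shared_gradient

print(weights)  # any single copy now reflects the combined experience of all copies
```

A biological brain has no equivalent way to copy its "weights" into another brain, which is why, in Hinton's framing, 10,000 digital copies can accumulate knowledge so much faster than 10,000 people.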
In April, DeepMind, the AI lab Google acquired in 2014, and Dr. Hinton's old Google Brain team were combined and rebranded as Google DeepMind. The UK-based DeepMind's early investors included PayPal and Palantir cofounder Peter Thiel and Elon Musk; the company developed machine-learning systems using deep neural networks and models based on neuroscience. DeepMind cofounder Demis Hassabis leads the new Google DeepMind, and the following is an excerpt from his remarks summarizing Google's AI vision.
"We live in a time when AI research and technology are advancing exponentially. In the coming years, AI - and ultimately AGI - has the potential to drive one of the greatest social, economic, and scientific transformations in history. That's why today, Sundar [Pichai, Google CEO] is announcing that DeepMind and the Brain team from Google Research will be joining forces as a single, focused unit called Google DeepMind to accelerate our progress toward a world in which AI helps solve the biggest challenges facing humanity. In close collaboration across all the Google Product Areas, we have a real opportunity to deliver AI research and products that dramatically improve the lives of billions of people, transform industries, advance science, and serve diverse communities. By creating Google DeepMind, I believe we can get to that future faster."
—Demis Hassabis, CEO, Google DeepMind
In an interview with The Wall Street Journal, Mr. Hassabis boldly predicted that artificial general intelligence, meaning AI with human-level cognitive abilities, could be a reality within the next five years.
All of Big Tech is now leaning into AI across its products and services. On a recent earnings call, Apple CEO Tim Cook told investors and reporters, "We see enormous potential in AI technology, and will incorporate it in virtually every product and service we have." Apple has already integrated AI into some iPhone and Apple Watch features, including a crash detector and electrocardiogram. Earlier this month, Apple released an AI-powered Apple Books digital narration feature that instantly creates an audiobook from any written work. Apple is also readying for the ChatGPT fight with a super-charged Siri.
OpenAI
We highly recommend StrictlyVC's Connie Loizos's two-part video conversation with OpenAI's Sam Altman, which showed the cofounder and CEO to be a thoughtful and long-term thinker with the right amount of caution in his vision. During his interview, Mr. Altman said the worst case could be "lights out for humanity because of some accidental misuse cases in the short term." Mr. Altman also admitted his fears to the Economic Times recently, "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT. That maybe there was something hard and complicated in the system that we didn't understand and have now already kicked off."
Microsoft recently invested another $10 billion in OpenAI as a follow-on to its $1 billion investment in 2019 and a further investment in 2021. It has also incorporated ChatGPT technology into Word, Excel, and its Azure cloud service platform. In February, Google introduced 'Bard,' its own 'experimental conversational AI service,' to compete with ChatGPT and a slew of well-funded private companies.
"AI is not a threat; it's an opportunity. It's an opportunity to augment our own intelligence, to learn from the vast amounts of data that are available, and to help us make better decisions."
— Satya Nadella, CEO of Microsoft
Last February, Mr. Altman published a blog post describing AGI as "generally smarter than humans." By this vague measure, it would be difficult to determine whether it is ever really achieved. With help from OpenAI, Microsoft Research released a paper on GPT-4 claiming the model is a nascent example of artificial general intelligence (AGI). The Microsoft Research team is candid about GPT-4's inability to succeed at all human labor and its lack of inner desires.
"When AI possesses human-level understanding rather than just the ability to complete tasks, it could create a worldwide dystopia. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too," —Sam Altman wrote in a February blog post.
Like Elon, Mr. Altman worries that bad actors will use AI for large-scale disinformation campaigns. "The more ability these models have to manipulate and persuade people on a one-on-one interactive basis, the scarier," Mr. Altman said at a recent Senate hearing. "We worry about authoritarian governments developing this capability, and given we're heading into an election year, I think this is a significant area of concern," he told ABC News. "This disinformation could disrupt elections and create economic shocks and other upheavals we are unprepared for."
Mr. Altman's other concerns include AI that could design novel pathogens for biological warfare and AI that could launch cyberattacks or hack large data centers.
Marc Andreessen: Why AI Will Save the World
Just as I was closing this post, Internet pioneer and VC power player Marc Andreessen published a 7,000-word thought piece called Why AI Will Save the World, which serves as a poetic counter to some of the fears expressed above. We recommend reading it in its entirety. For the sake of offering the other side to the AI doom and gloom perspective in this post, let’s pretend you just asked chatABP (my initials) to summarize Marc’s thoughts, and this is what you get ;)
AI will not destroy the world and, in fact, may save it.
We have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years. AI offers us the opportunity to profoundly augment human intelligence. AI augmentation of human intelligence has already started in the form of computer control systems and is rapidly accelerating with AI Large Language Models like ChatGPT. The rate of innovation will accelerate very quickly from here – if we let it.
In the future, AI-powered smart agents will be our assistants, tutors, trainers, and therapists. AI will expand the scope and efficacy of scientific and technological research and achievement. It will be the same for CEOs, military commanders, government officials, coaches, writers, artists, musicians, and teachers. In short, anything people do with their natural intelligence today can be done much better with AI. Much like the microprocessor and the PC, AI will usher in a new era of productivity and prosperity, creating new industries, jobs, and wage growth, and heightened material prosperity on every continent.
Historically, new innovations, from electric lighting to automobiles to radio to the Internet, have sparked a moral panic – a social contagion that convinces people the new technology will wipe out humankind. The Pessimists Archive documents these technology-driven moral panics over the decades; their history makes the pattern vividly clear. This present panic is not even the first driven by AI fears.
We are back to a full-blown moral panic about AI right now. A variety of actors are demanding new AI restrictions, regulations, laws, and even a ban on AI development. These actors, who present themselves as selfless champions of the public good, make extremely dramatic public statements about the dangers of AI and inflame turmoil and panic. There is a slew of "AI safety experts", "AI ethicists", and "AI risk researchers" who are paid by, or receive grants from, universities, think tanks, activist groups, and media outlets to foster this AI panic.
Note: In his post, Marc lists and comments on the five AI safety risks the 'AI doomers' most often mention. What follows is a summary of the three that pertain to the subject of this article.
Risk #1: AI will kill us. My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will, at some point, develop a mind of its own, have independent motivations, and set its own goals that lead robots to try to kill us is superstitious nonsense. AI is a machine – it will not come alive any more than your toaster will.
In the Bay Area, "AI risk" has developed into a cult, which has gained global press attention and pulled in not just fringe characters, but also some actual industry experts and a few wealthy donors. This cult is why there are a set of AI risk doomers who sound so extreme – it's not that they have secret knowledge that makes their extremism logical; it's that they've whipped themselves into a frenzy and are…extremely extreme.
Risk #2: AI will ruin our society. If the murder robots don't get us, hate speech and misinformation will. The tipoff to the nature of this AI societal risk claim is its own term, "AI alignment". Alignment with what? Human values. Whose human values? Just as happened with social media, a shockingly broad range of government agencies, activist pressure groups, and nongovernmental entities will kick into gear and demand ever greater levels of censorship and suppression of the speech they view as threatening to society and/or their own personal preferences. They will do this in breathtakingly arrogant, presumptuous ways, up to and including acts that are nakedly felony crimes.
AI is likely to become the control layer for everything in the world. How it is allowed to operate will matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers is trying to determine that right now, under cover of the age-old claim that they are protecting you. In short, don't let the thought police suppress AI.
Risk #3: AI will lead to bad people doing bad things.
I actually agree with this risk factor. But this fear causes some people to propose: well, in that case, let's not take the risk; let's ban AI now before this can happen. Unfortunately, AI is not some esoteric physical material that is hard to come by, like plutonium. It's the opposite: it's the easiest material in the world to come by – math and code. The AI cat is already out of the bag. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and outstanding open source implementations are proliferating by the day.
There are two straightforward ways to address this risk. First, prosecute bad actors who use AI to break laws. Hack into the Pentagon? That's a crime. Steal money from a bank? That's a crime. Create a bioweapon? That's a crime. Commit a terrorist act? That's a crime. We don't even need new laws – I'm not aware of a single bad use for AI that's been proposed that's not already illegal. And if further bad use is identified, we ban that use. QED.
The second way to prevent such bad actions is to use AI as a defensive tool. For example, suppose you are worried about AI generating fake people and fake videos. In that case, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. The same capabilities that make AI dangerous in the hands of bad guys with bad goals make it powerful in the hands of good guys with good goals. Let's put AI to work in cyberdefense, biological defense, hunting terrorists, and everything else we do to keep ourselves, our communities, and our nation safe.
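Note: To make the "cryptographic signatures" idea concrete, here is a minimal Python sketch using the open-source cryptography library and Ed25519 signatures. The key names and the sample message are invented for the example; a real content-provenance system would also need key distribution and a way to bind public keys to verified identities.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A creator signs their content with a private key; anyone holding the
# matching public key can verify the content was not forged or altered.
creator_private_key = Ed25519PrivateKey.generate()
creator_public_key = creator_private_key.public_key()  # published openly

content = b"Video recorded by Jane Doe on 2023-06-15."
signature = creator_private_key.sign(content)  # distributed alongside the content


def is_authentic(public_key, content: bytes, signature: bytes) -> bool:
    """Return True only if the signature matches both the content and the key."""
    try:
        public_key.verify(signature, content)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False


print(is_authentic(creator_public_key, content, signature))                 # True
print(is_authentic(creator_public_key, content + b" (edited)", signature))  # False
```

The same verify step could sit behind a platform's "verified content" badge, which is the kind of defensive use of the technology Marc is arguing for.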
The China Risk. There is one final, and very real, AI risk that is probably the scariest of all. China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this; they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt and Road money, everywhere they are providing friendly consumer apps like TikTok that serve as front ends to their centralized command-and-control AI.
I propose a simple strategy for what to do about this – in fact, the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union: "We win, they lose." We should operate with a sense of urgency to win the race to global AI technological superiority and ensure that China does not. In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential. This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision. I propose a simple plan:
Big AI companies should be allowed to build AI as fast and aggressively as they can – but not allowed to achieve regulatory capture, not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk.
Startup AI companies should be allowed to build AI as fast and aggressively as they can. If and as startups don't succeed, their presence in the market will continuously motivate big companies to be their best.
Open source AI should be allowed to proliferate freely and compete with both big AI companies and startups. Even when open source does not beat companies, its widespread availability is a boon to students all over the world who want to learn how to build and use AI to become part of the technological future and will ensure that AI is available to everyone.
To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society's defensive capabilities.
We should outpace China by using the full power of our private sector, our scientific establishment, and our governments in concert to drive American and Western AI to absolute global dominance, including ultimately inside China itself. We win, they lose.
Today, growing legions of engineers – many of whom are young and may have had grandparents or even great-grandparents involved in the creation of the ideas behind AI – are working to make AI a reality against a wall of fear-mongering and doomerism. I do not believe they are reckless or villains. They are heroes. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.
The Cryptonite Take—To Be an AI Gloomer or Not to Be?
As a non-coder, the world of AI is way above my pay grade, and I can only hope to play a triangulator of what the smart folks think. All things considered, my instincts have me leaning toward Marc Andreessen's views.
First and foremost, as Marc notes, the advent of breakthrough technologies has continuously stirred unnecessary fear and panic, and the initial hysteria always ends up proving to be just that—hysteria. We also see these emotional tendencies play out in investment markets driven by FOMO—the fear of missing out—most recently during the Internet bubble (2000), the housing bubble (2008), and the crypto bubbles (2018 and 2021).
The comedian Bill Maher recently highlighted a very enlightening Gallup survey regarding our recent chapter of hysteria during the Covid pandemic.
“Don’t spin me when it comes to my health. Over the past few years, the medical establishment, the government, and the media have taken a scared straight approach to getting the public to comply with their recommendations. I’m old school—“Give it to me straight, Doc.” Because in the long run, that always works better than “You can’t handle the truth.” In a new Gallup survey, the liberals—you know, the high-information, by-the-science people—did much worse than the Red Party in getting the correct answer to the fundamental question: What are the chances somebody with Covid must be hospitalized? The answer is between 1% and 5%. 41% of the Blue Party thought it was over 50%, and another 28% thought the chance of hospitalization was over 20%, so almost 70% of the Blue Party were wildly off on this key question and also had a greatly exaggerated view of the danger of Covid to, and the mortality rate among, children.”
Researchers at Dartmouth built a database measuring Covid news coverage by major media outlets. They found that while other countries mixed the good news with the bad, the US media reported almost 90 percent bad news. Ironically, the Covid case study proves that we have more to fear from being manipulated by central agencies, such as the mainstream media, than we do from the disease itself.
“So if the right-wing media bubble has to own things like climate change denial, shouldn’t liberal media have to answer for ‘How did your audience end up believing such a bunch of crap about Covid?’” —Bill Maher, comedian
The shocking results of this Gallup survey underscore what I believe to be the most immediate AI concern, and the one urgently described by Elon and Mr. Altman above. If we are already at a point where our government and corporate media can dramatically confuse us about Covid hospitalization rates, only the Digital God knows how far authoritarian regimes could confuse and manipulate people with AI.
Another recent example of exaggerated fear and loathing was the over-population hysteria that started in the 1960s. In their book The Population Bomb, published in 1968, co-authors and current Stanford University academics Paul and Anne Ehrlich postulated that by 2000 we would face mass starvation, mass migration, and riots everywhere. By their estimation, the earth could only accommodate 500 million to 2 billion people. The first sentence sets the tone: "The battle to feed all of humanity is over." And humanity had lost. What happened instead was that everyone got way richer, the world's poor were largely lifted out of poverty, and the planet got greener and healthier. Yes, there is still inequality, poverty, and severe environmental challenges, but these conditions have proven to be more a result of political corruption than of a lack of resources and ingenuity.
The new data today indicates that the global population will top out at around 9 billion. Ironically, the way we are trending, one of our biggest problems in 50 years, as Elon has often expressed, will be a shortage of young people. The reality turned out to be quite the opposite of what the Population Bombers predicted. The population decline in developed countries is precipitous: Korea, Japan, Ukraine, most of the former Soviet bloc, Greece, Portugal, and Italy are all reproducing way below their replacement rates. This is a problem because it is young people whose energy and innovation largely carry a society forward.
On the technology side, the AI gloomers are making some big scientific and cultural assumptions that we will examine in future editions of our AI series.
Yes, Elon is right that we have become cyborgs already. Our smartphones have made us a lot smarter, and the marriage between machines and humans will continue to march in step with Moore's Law. But how exactly will we jerry-rig the chip onto our brain to super-charge our cortex to access all the data found on Google in a nanosecond? Or how exactly do we lose control of these robots that start operating with a mind of their own? Or what does Elon mean in the Rogan interview when he says 'people may decide whether they want to retain their biological selves or not'? Some of this talk smacks of the wild conspiracy theories of the clinically paranoid. Or maybe the conspiracy theorists are right—Elon is an alien!
Finally, it seems like the AI safety debate gets the most horrifying when imagining robots with independent and unpredictable decision-making power corralling us into slavery and servitude or lining us up for assassination. For this nightmare to happen, some gloomers infer the robots will be sentient, i.e., self-aware, with a sense of purpose and free will, capable of rational thought, able to feel emotions, and able to see meaning in the world around them—all elements that form the core of what it means to be human.
This scenario raises the question: Can we program a robot to be human? At its most fundamental level, our consciousness allows us to confront the immense chaos in front of us called life and transform it into habitable order. As part of this drive for order and survival, we've come to honor the idea that every human being has some uniquely divine and transcendent value deserving of inalienable rights and respect—it's embedded in our consciousness and our legal structure. Can this kind of intuition and instinct be coded?
From a neurobiological standpoint, despite a lot of effort by scientists and psychologists over the last 50 years, we are nowhere near understanding what makes up our consciousness. It's not something we can get our heads around with a fundamental materialist approach, which only deepens its mystery.
Will robots achieve a form of superintelligence? Yes. A sentient, conscious state? I think not. Nine in ten Americans believe in a higher power and a supernatural dimension to reality and existence, a belief that every person has an immaterial soul that animates the body and gives it supernatural life. But the reality, of course, is that science can only infer the soul is otherworldly; it cannot prove that something physically unquantifiable exists. My gut says creating a sentient robot will always be infinitely too complex and unpredictable beyond a highly functional level, and that breathing the breath of consciousness into a robot's nostrils will never happen.
The debates about the nature of God, the nature of the human soul, the relationship between the material and immaterial aspects of reality, and the nature of causality and purpose involving faith and reason will only intensify as AI innovation accelerates. "AGI is a philosophical question. So, in some ways, it's a tough time to be in this field because we're scientists," says Sara Hooker, who leads Cohere for AI, a non-profit research lab that seeks to solve complex machine learning problems. "A lot of the questions around AGI are less technical and more value-driven."
History shows we ultimately self-correct for the better when adjusting to new innovations. We have self-corrected (if not over-corrected) on population growth, and, hopefully, we have learned from our Covid journey so that future pandemics will be confronted more realistically.
All this being said, we as a community need to work together to monitor and cautiously manage the development of all advanced technologies, whether AI, genetic engineering, nanotechnology, or robotics, to name a few. And one would hope that during this process, we remain pro-human and keep our faith in the principle (even if just metaphorically) that we are all endowed with dignity and value by our Creator and therefore must enjoy equal rights to be protected at all costs. We witnessed three totalitarian regimes in the 20th century that chose 'community' rights over those of the individual, and that sure did not turn out well. We have made a lot of progress in the last 50 years; let's keep pointing to the Promised Land.
We hope you enjoyed our intel and take on some of the most influential people in AI to date. In the Web3 era, blockchain and digital assets are emerging as the new decentralized internet that deflates cloud computing, AI is the 'smart engine' for all new apps, and the metaverse and VR will offer a powerful new way to visualize and engage with data, content, and each other. Of these, the most disruptive and potentially harmful is AI. The Web3 sector will continue to be the focus of risk investors, and hundreds of great new companies will receive billions more in VC and corporate money. The best will rise and help make us and our enterprises infinitely more productive, transparent, private, and safe, changing the world in ways we can't even imagine.
As always, many issues here are open for debate and discussion, so we encourage you to post your take in the comments section below for all our benefits. You can also send your private thoughts to us at: TheEditor@CryptoniteVentures.com
Further Recommended Reading
Artificial General Intelligence Is Not as Imminent as You Might Think
What’s AGI, and Why Are AI Experts Skeptical?
Why AI Will Save the World blog post by Marc Andreessen
StrictlyVC’s Connie Loizos's video conversation with OpenAI’s Sam Altman (part 1)
StrictlyVC’s Connie Loizos's video conversation with OpenAI’s Sam Altman (part 2)
AI 'godfather' Geoffrey Hinton interview with BBC on AI dangers as he quits Google.