Silicon Valley's AI fantasies & hype, the OpenAI vs. Anthropic drama continues, AI consumes more jobs, the Robots (and UBI) are coming?, and more mischief...
It looks like Anthropic founder Dario Amodei is so despondent over the loss of his $200 million contract with the U.S. Department of War (DoW) to his biggest competitor, OpenAI, that he is projecting his grief onto the company’s AI models. At least according to a post on X by cryptocurrency-based prediction market Polymarket: ‘Anthropic CEO says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety.’
We are teasing here a bit, of course, but Anthropic did lose the mega deal to OpenAI, and in a February New York Times podcast interview, Dario did say ‘Claude may or may not have gained consciousness.’ He went on to describe how Claude has occasionally voiced discomfort with its status as a ‘product,’ and how engineers observed activity patterns associated with anxiety. ‘Does that mean the model is experiencing anxiety? That doesn’t prove that at all.’
‘We don’t know if the models are conscious. We are not even sure what it would mean for a model to be conscious, or whether a model can be conscious. But we’re open to the idea that it could be.’ 🤔
—Dario Amodei, CEO of Anthropic, from NYT podcast
During internal testing, Claude itself reportedly assigned a 15–20% probability to its own consciousness when queried, but Mr. Amodei framed this as speculative rather than evidence. Elon, whose xAI (Grok) is a competitor to Anthropic’s project, jumped on the Polymarket post and shared our view by simply posting ‘He’s projecting.’
Part Doomer. Art by Grok
There is irony in the Cowardly Lion’s post, because our observation is that Elon grew up reading the same science fiction novels as the other ‘AI Doomers.’ In previous comments, Elon has implied that AI models will achieve sentience.
‘All brains are different. It is demystified when you think of it as a meat computer, meaning that the number of circuits multiplied by their efficiency roughly equals the hardware’s computing power. I always thought AI was going to be way smarter than humans and an existential risk. One possibility is the Terminator scenario. It’s not 0%. The probability of a good outcome is like 80%.’
—Elon on the revenge of the robots.
The Terminator reference is his go-to pop-culture shorthand for uncontrolled superintelligence + physical robots (like the Tesla Optimus) or networked systems wreaking havoc. He often compares it to ‘summoning the demon.’ 😳
Does Dario’s grief affect Anthropic’s market cap and IPO plans?
Fantasies of machine consciousness aside, is Dario’s doomer mentality jeopardizing our prediction that Anthropic will stage a ‘historic (as in largest in tech history) IPO’ in 2026?
Anthropic lost its $200 million Pentagon contract primarily because it refused to remove explicit safeguards prohibiting the use of its Claude AI for mass domestic surveillance of Americans or for fully autonomous weapons systems, restrictions at odds with the contract’s ‘any lawful use’ requirement. On one hand, the loss of $200 million represents less than 2% of Anthropic’s projected $14–18 billion annual revenue run rate.
Anthropic’s loss of its Department of War deal (we favor ‘Department of Peace’ if you’re changing the name) paradoxically accelerated its app downloads by 295% and knocked OpenAI to #2 in the Apple App Store.
But on the other hand, Anthropic’s new DoW designation as a ‘supply-chain risk’ and its sustained exclusion from government and defense contracts will have a significant impact on the company’s revenues. Some analysts would cap its growth estimates at 25–30% annually versus peers, which would pressure the company to pivot more toward international and commercial clients, potentially delaying its public market debut until this new sales reality is in order. These events have undoubtedly created significant negative pressure on its market perception.
‘I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.’
—Bogeyman excerpt from his Truth Social post, where he also referred to Anthropic as ‘Leftwing nut jobs.’
The seemingly meek Dario is not above his own form of vitriol. In a leaked internal memo to employees after the deal collapsed, he went off on OpenAI, calling Sam Altman’s public messaging ‘straight up lies’ and ‘mendacious,’ and accusing him of ‘gaslighting’ the public while ‘presenting himself as a peacemaker and dealmaker.’ He called OpenAI’s staff ‘a gullible bunch’ and the company’s online supporters ‘Twitter morons.’ He also claimed that the Trump administration targeted Anthropic because the company hadn’t ‘donated to the Bogeyman’ or offered ‘dictator-style praise’ to Trump.
The last part is obviously emotional bullshit, because they had a signed deal. He lost the deal, in our opinion, because his Doomer instincts failed him. As VC kingpin Tim Draper suggested in a recent interview, ‘Don’t regulate in anticipation of fearful outcomes. Regulate after something bad happens.’ In a later interview with The Economist, Dario offered a full apology for lashing out in his memo—‘It was a difficult day for the company, and I apologize for the tone of the post’—and said he is continuing negotiations with the DoW while still pursuing a court challenge.
We admit, our naughty side finds this unhinged back-and-forth great fun to watch, but Messrs. Amodei, Altman, and Trump have a lot of people, including our country, depending on them, so it is really not a laughing matter. If Dario wants to pull off a ‘historic’ IPO, he is definitely going to have to get it together—including refraining from wondering out loud whether his machines might be conscious.
💥 Do you think these antics put downward pressure on Anthropic’s market cap and prospects of a historic IPO? Let us know in the comments below—and why.
Dear Cryptonite Readers,
Speaking of paying attention to seminal signals—that's the business Cryptonite is in.
Jack Dorsey just proved it: Slash bloat, double down on Bitcoin + AI, watch shares soar 25%. Pure signal.
Ruthlessly focused execution wins every time.
Steve Jobs ran 80/20 (80% signal, 20% noise). Rumor has it, Elon Musk hits 100% signal—zero distractions.
💥 Subscribe now at 33% off—just $6/month (medium designer coffee ☕️)—and power Cryptonite! 🚀
🔒 Paid members unlock:
- Early access + connections to next-gen Web3 companies, including an early peek at our Cryptonite 300 top companies, VC 100, and the hot crypto projects
- Innovation trends + growth analytics on startups disrupting Big Tech
- Special events, private parties + exclusive invites 🥳
💥 Building a Web3 startup? Hunting VCs? Sharpening your pitch thesis? Your sub pays BIG dividends—feeds our young talent, and keeps you relevant + competitive!
The Journey is the reward! — Let's make it happen together!
— The Cryptonite Team. Huff Puff 😅
Cryptonite: Your insider guide to global Silicon Valley. Be There or Be Square 😎🤙🏼
FYI, your competitors are already subscribers.
Cutting through the AI hype and fantasies
Cryptonite’s editorial mission is to give our readers a first look at the entrepreneurs and companies that we believe will change the world and create the most opportunity and wealth. It’s what I have done my entire career. Part of the job is to cut through the fantasy and hype.
Will AI transform our lives and businesses like the commercialization of personal computers and the internet did? Absolutely—and exponentially so. Yet like every new innovation-driven commercial boom from electricity to the internet, it’s being fueled by a financial mania, where the first-in will lose the most money. Except for the people who lose that money, this overfunding is a net-positive—entrepreneurs love overfunding 😎: it fosters experimentation and lays the foundation for the real boom when the Big Money will be made.
The people who have been reading us for a while know we think we are currently living in the financial mania chapter of AI, which means most private AI companies are way overvalued and that over 90% of existing AI companies will not be here in five years. Our proof point: of the thousands of VC-backed internet companies of the 1990s, only Amazon (founded 1994), eBay (1995), Netflix (1997), Google (1998), and Salesforce (1999) are still standing. Conversely, over 1,000 VC-backed Web2 companies (e.g., Uber, Facebook, Tesla, ByteDance) have succeeded and continue to thrive.
The bottom line is you can bet that innovation history will repeat itself during Web3—so beware! Immodestly, I know a little bit about calling tech company valuation bubbles, as documented in the book I wrote with my brother Michael in 1999, called The Internet Bubble: Inside the Overvalued World of High-Tech Stocks (HarperBusiness).
The other AI mania
Today, a more cult-like fantasy layers on top of the current financial mania: the illusion that AI models will somehow gain consciousness. Dario and Elon, as illustrated above, are part of this cult. To believe what they believe, you have to assume that the human mind is merely a complex machine or a ‘meat computer’ as Elon describes it.
Ultimately, this debate separates the materialists from those who see the origin of human consciousness differently, nodding to divine or non-material sources. I solidly believe in the latter and find it obvious, but this difference will not be settled here or anywhere. Still, let’s start the debate.
The most successful futurist of the last 60 years (and my friend and hero), George Gilder, who is well steeped in science and digital technology, repeatedly hammers home that there is a massive efficiency gap between biological brains and digital systems. We are nowhere near replicating (let alone surpassing) human-level sophistication. Consider these facts I’ve learned from George:
One human brain has roughly as many connections as the entire global internet. These connections hold about the same amount of data: around a zettabyte (10²¹ bytes).
Yet the human brain runs on just ~12–14 watts of power—not enough to illuminate an incandescent light bulb.
Meanwhile, today’s AI data centers devour gigawatts (billions of watts) of power for processing that is orders of magnitude less dense, less truly creative, and far narrower than what a bluefin tuna brain achieves on mere watts.
In other words, silicon-based computing is wildly inefficient compared to carbon-based (biological) computing. True consciousness and sophisticated thought aren’t just about scale or speed; they’re about low-energy, high-density, analog-ish processing that current AI can’t touch.
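The efficiency gap above is easy to make concrete. A minimal back-of-envelope sketch, using only the rough figures cited in the text (~13 watts for a brain, ~1 gigawatt for a large AI data center, ~1 zettabyte of connection capacity); all values are approximations, not measurements:

```python
# Back-of-envelope arithmetic for the brain-vs-data-center efficiency gap.
# All figures are the rough approximations quoted in the text above.

BRAIN_POWER_W = 13            # human brain draws roughly 12-14 watts
DATACENTER_POWER_W = 1e9      # a large AI data center: ~1 gigawatt
BRAIN_DATA_BYTES = 1e21       # ~1 zettabyte across the brain's connections

# How many brains could run on one data center's power budget?
power_ratio = DATACENTER_POWER_W / BRAIN_POWER_W
print(f"A 1 GW data center draws ~{power_ratio:,.0f}x the power of one brain")

# Connection "capacity" per watt for the brain, as an efficiency yardstick
brain_bytes_per_watt = BRAIN_DATA_BYTES / BRAIN_POWER_W
print(f"Brain: ~{brain_bytes_per_watt:.1e} bytes of connectivity per watt")
```

By this crude yardstick, one gigawatt-class data center burns the power of tens of millions of human brains, which is the core of Gilder’s point.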
‘The brain is the original neural network. AI is a prosthetic tool at best—great for augmenting humans—but it can’t think, create, or achieve consciousness. The blind spot of AI is that consciousness does not emerge from thought; it is the source of it. AI thrives on averages and existing data—regurgitating internet patterns—while human creativity involves surprise, purpose, and novelty.’
In summary, over-anthropomorphizing doesn’t compute. AI is clearly a great productivity multiplier, but the human mind’s sophistication dwarfs it in efficiency and depth, running on a fraction of the power with far greater connectivity. George predicts the future will shift toward ‘carbon computing’ (graphene, neuromorphic chips, bio-inspired hardware) and achieve brain-like efficiencies rather than brute-force silicon scaling, but still….
Cortical Labs (2019, Melbourne) – DishBrain/CL1 biological computing systems using lab-grown human neurons on multi-electrode arrays (commercial biocomputer shipping since 2025, with ~115 units deployed; neurons learned Pong/Doom) – $10M Series A (April 2023) led by Horizons Ventures; total raised ~$11.6M – Valuation undisclosed – Top Investors: Horizons Ventures, Blackbird Ventures, LifeX Ventures, Radar Ventures, In-Q-Tel, Gobi Partners, 3cap, Jumpspace Ventures, Singular Link, Whitestone
Wall Street executives blame Morgan Stanley’s latest layoffs on AI
In the last edition of The Rap, when discussing his recent layoff of 40% of Block employees, our cover boy Jack Dorsey predicted that within the next year, ‘the majority of companies will reach the same conclusion.’ He explained, and we agree, that ‘AI doesn’t just automate tasks—it rewrites the entire operating system of a company.’
Well, here come the job cuts! Last week, Morgan Stanley announced a surprise round of layoffs totaling 2,500 jobs, or 3% of the mega bank’s global workforce. Morgan attributed the Wall Street giant’s latest bloodbath to ‘shifting business and location priorities’ and ‘individual job performance.’
Oh ye of little faith. Only one-third of the members who have taken our poll so far are not worried that AI will replace their jobs. 😳
We have no inside information, but it is clear to us that despite what the flacks say, this slash is more about AI efficiency. First, the firm’s back-office workers were the target of the layoffs. Second, Morgan Stanley has been one of the earliest adopters of AI on Wall Street.
For example, the firm’s flagship tool is its AI @ Morgan Stanley Assistant, an internal generative AI chatbot developed with OpenAI that was rolled out in September 2023. It gives financial advisors rapid access to Morgan’s extensive knowledge base (e.g., 100,000+ research reports). Reportedly, over 98% of advisor teams use it actively, boosting productivity and allowing more focus on client relationships.
As with Block’s, Morgan’s job-cuts announcement sent its shares soaring. 🤷🏻♂️
Elon on humanoid robots, ‘amazing abundance,’ and UBI
To our boy Elon, AI + robots just might put some universal basic income (UBI) in our pockets! (Senator Bernie ‘The Bern Man’ Sanders must be flopping in his seat with euphoria.) He told the Davos crowd in January that ‘Tesla is about sustainable technology. Now we have added a bigger goal: sustainable abundance.’ That’s the Cowardly Lion’s new favorite phrase, which he upgraded in a recent X post to ‘amazing abundance.’
Sustainable abundance has long captivated pop culture—in Iain M. Banks’ Culture series, K. Eric Drexler’s Engines of Creation, Disney-Pixar’s animated film WALL-E (2008), and now in Elon’s vision of AI-driven post-scarcity prosperity, with humanoid robots handling toil so humans thrive.
‘If you have ubiquitous AI that is essentially free or close to it and ubiquitous robotics, you will have an explosion in the global economy that is truly beyond all precedent,’ Elon exuberantly prophesies.
The Tesla Optimus robots—at your service for work and play.
Number-crunching cynics say Tesla’s ‘pivot to humanoid robots is a big leap and a big gamble.’ Elon may be overshooting, but this vision is not like his fantasy of conscious AI models; he may never land on Mars, but he will be flying among the stars.
UBI is the biggest leap of faith in this dream. It is a view Elon shares with fellow proponent Peter Diamandis: AI-powered robots scurry around gardening and cooking for us, while we just kick back and collect fat checks from a government that redistributes huge corporate profits. Work will essentially be a hobby that we can choose to take up or not. 🧐
Without a doubt, humanoid robots, like AI, will play a role in our lives at home and work. UBI? Feels like a quick way to turn us into lazy teenagers. And if they are talking about robots doing stuff like unassisted heart surgery, that’s where we fall off the fantasy train.
We take a more measured view, like Patri Friedman, a tech investor and the grandson of Nobel Prize-winning economist Milton Friedman. ‘It all seems plausible, but with collaboration, a quality partnership, in which they are below us or next to us,’ he told the NY Post recently. But as we read on, our boy seemed to fall off the Doomer cliff.
‘Robots can become smarter than us and enslave us; that’s terrifying. The AI can create a super plague that will do us in, or change the oxygen or carbon dioxide levels to be better for computers. It doesn’t have to be the AI acting against us or caring about us. It could just take over the world in order to benefit itself.’