# Stratechery by Ben Thompson

On the business, strategy, and impact of technology.

**Latest Podcast**

#### Can’t Spell Eviction Without N-I-C-O, Emptying the Notebook After Nuggets-Clips, The GOAT Names a Child

Greatest Of All Talk | Nov 14

**Stratechery Plus Update**

## ChatGPT Group Chats, Meta and the Encryption Trade-off, Network Effects and Ad Models

Monday, November 17, 2025

This Week in Stratechery

## 2025.46: Satellites and Strategy

(Photo by Miguel J. Rodriguez Carrillo/Getty Images)

Welcome back to This Week in Stratechery! As a reminder, every Friday we send out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week.

- **SpaceX Buys Spectrum — and Apple Should Be Interested**. I’ve been taking a lot of interest in space recently, particularly SpaceX’s recent moves to buy wireless spectrum. What is particularly interesting are the comparisons and contrasts to the early years of the iPhone and Apple’s relations with traditional cellular companies; in this week’s episode of Sharp Tech — triggered by Tuesday’s Update — Andrew and I discuss the history of Apple and phone carriers, and why satellites are different. Some of those differences give reason for optimism, others for skepticism; the best way to achieve the optimistic outcome would be for Apple and SpaceX to work together. — *Ben Thompson*
- **Apple and Google, Together Again.** I’m happy to endorse any business analysis that compares a trillion dollar company to an alcoholic making promises, to itself and others, that it almost certainly won’t keep. Monday’s Daily Update checked that box, as Ben unpacked the short term logic and long term questions surrounding another Google-Apple partnership and the news that Apple is partnering with Friendly Gemini to remake Siri for the AI era. And which of these trillion dollar companies is the alcoholic, you ask? I don’t want to spoil it, but we do know one of them became notorious earlier this year for some promises that weren’t kept. — *Andrew Sharp*
- **When Will America Catch Cup Fever?** We are three years into the NBA’s experiment with an in-season tournament, now called “the NBA Cup,” and it’s still mostly ignored by the mainstream. I wrote on Sharp Text about the ways in which that event is a keystone to understanding the NBA’s broader modern problems.
Come for that story, and stay for my admittedly radical proposal to make the Cup itself worth watching — awarding first round picks to every team that makes the Final Four, including the number one pick to the winner — and why that solution could also be healthy for the league’s overall business. — *AS*

* * *

### Stratechery Articles and Updates

- Apple Earnings, Siri White-Labels Gemini, Short-Term Gains and Long-Term Risk — *Apple is already benefitting from AI via the App Store. Meanwhile, Siri will white-label Gemini; the long-term implications are significant.*
- SpaceX Buys More Spectrum, SpaceX’s Pivot, Why Apple and SpaceX Should Partner — *SpaceX buys the spectrum it needs to be a standalone mobile carrier; the company should partner with Apple to deliver truly differentiated experiences.*
- Microsoft Earnings, CoreAI/MantleAI, Additional Notes — *Microsoft declares independence from OpenAI and sketches out its future role building scaffolding for AI. Plus, Windows is tiny now.*
- An Interview with Unity CEO Matthew Bromberg About Turnarounds — *An interview with Unity CEO Matthew Bromberg about a career focused on turnarounds, from EA’s KOTOR to Zynga and now to Unity.*

### Sharp Text by Andrew Sharp

- The NBA Cup Doesn’t Have to Be Terrible — *A closer look at the NBA Cup in Year 3, including the case for using the NBA Draft to save the event and walk back the league’s misguided push for parity.*

### Dithering with Ben Thompson and Daring Fireball’s John Gruber

- Cold Weather and EU Regulations
- iPhone Pocket and ChatGPT Prompts

### Asianometry with Jon Yu

- Why the Original Apple Silicon Failed
- Singapore Tried to Grow More of Its Own Food…

### Sharp China with Andrew Sharp and Sinocism’s Bill Bishop

- US-China Follow-Through; New Xi Textbooks and a New Aircraft Carrier; A Wolf Warrior Greets Japan’s PM; More Setbacks for Nvidia

### Greatest of All Talk with Andrew Sharp and WaPo’s Ben Golliver

- Wemby Goes to Hollywood, Mark Daigneault Moving Different, The Mavs Are the War on Drugs
- Re-Drafting Paolo, Chet and the 2022 Draft Class, The Pistons are Electric, Changes Afoot in Dallas?

### Sharp Tech with Andrew Sharp and Ben Thompson

- How Apple Changed the Cellular Economy, What SpaceX Wants to Do With Spectrum, Airlines and Carriers, Yann LeCun Departs Meta

This week’s Stratechery video is on The Benefits of Bubbles.

* * *

## The Benefits of Bubbles

Wednesday, November 5, 2025

It’s funny to remember that a decade ago there were enough people convinced we were in a bubble that I felt compelled to write an Article entitled It’s Not 1999; that was right then, and it’s obviously right now, when we have a clear counter-example: *this* is a bubble. How else to describe a single company — OpenAI — making $1.4 trillion worth of deals (and counting!) with an extremely impressive but commensurately tiny $13 billion of reported revenue? Sure, the actual number may be higher, but that is still two orders of magnitude less than the amount of infrastructure OpenAI has publicly committed to buy over the coming years, and they are not the only big spenders. Over the past week every big tech company (except Apple) has significantly expanded their capital expenditure plans, and there is no sign of anyone slowing down.
This does, understandably, have people wringing their hands. What goes up must come down, which is to say bubbles that inflate eventually pop, with the end result being a recession and lots of bankrupt companies. And, not to spoil the story, that will almost certainly happen to the AI bubble as well. What is important to keep in mind, however, is that that is not the end of the story, at least in the best case. Bubbles have real benefits.

### Financial Speculation and Physical Capacity

The definitive book on bubbles has long been Carlota Perez’s *Technological Revolutions and Financial Capital*.1 Bubbles were — are — thought to be something negative and to be avoided, particularly at the time Perez published her book. The year was 2002 and much of the world was in a recession coming off the puncturing of the dot-com bubble. Perez didn’t deny the pain: in fact, she noted that similar crashes marked previous revolutions, including the Industrial Revolution, railways, electricity, and the automobile. In each case the bubbles were not regrettable, but necessary: the speculative mania enabled what Perez called the “Installation Phase”, where necessary but not necessarily financially wise investments laid the groundwork for the “Deployment Period”. What marked the shift to the deployment period was the popping of the bubble; what enabled the deployment period were the money-losing investments.

In the case of the dotcom bubble, the money-losing investments that mattered were not actually the dotcom companies that mark that era in Silicon Valley lore: yes, a lot of people lost money on insane IPOs, but the loss was mostly equity, not debt. Where debt was a problem was in telecom, where a host of companies went bankrupt after a frenzied period of building far more fiber than could ever be justified by current usage, fast though it may have been growing. That fiber, however, became the backbone of today’s Internet; the fact that it basically existed for free — because the companies who built it went bankrupt — enabled the effectively free nature of the Internet today.

### The Conditions for Cognitive Capacity

Late last year Byrne Hobart and Tobias Huber made a new contribution to our understanding of bubbles with their book *Boom: Bubbles and the End of Stagnation*. While Perez focused on the benefits that came from financial speculation leading to long-term infrastructure, Hobart and Huber identified another important feature of what they called “Inflection Bubbles” — the good kind of bubbles, as opposed to the much more damaging “Mean-reversion Bubbles” like the 2000s subprime mortgage bubble. First, here is Hobart and Huber’s definition of an inflection bubble:

> Inflection-driven bubbles have fewer harmful side effects and more beneficial long-term effects. In an inflection-driven bubble, investors decide that the future will be meaningfully different from the past and trade accordingly. Amazon was not a better Barnes & Noble; it was a store with unlimited shelf space and the data necessary to make personalized recommendations to every reader. Yahoo wasn’t a bigger library; it was a directory and search engine that made online information accessible to anyone. Priceline didn’t want to be a travel agent; it aspired to change the way people bought everything, starting with plane tickets.
>
> If a mean-reversion bubble is about the numbers after the decimal point, an inflection bubble is about orders of magnitude.
> A website, a PC, a car, a smartphone — these aren’t five percent better than the nearest alternative. On some dimensions, they’re incomparably better. A smartphone is a slightly more convenient tool than a PC for taking a photo and quickly uploading it to the internet, but it’s infinitely better at navigation. A car is not just slightly faster and more reliable than a horse (although in the early days of the automobile industry, it was apparently common for pedestrians to yell “Get a horse!” at passing motorists); cars transformed American cities. Modern-day Los Angeles is inconceivable on horseback. The manure problem alone beggars the imagination.

This is what makes inflection bubbles valuable:

> The fundamental utility of inflection bubbles comes from their role as coordinating mechanisms. When one group makes investments predicated on a particular vision of the future, it reduces the risk for others seeking to build parts of that vision. For instance, the existence of internet service providers and search engines made e-commerce sites a better idea; e-commerce sites then encouraged more ad-dependent business models that could profit from directing consumers. Ad-dependent businesses then created more free content, which gave the ISPs a better product to sell. Each sector grew as part of a virtuous circle.

What I love about this formulation from a tech perspective is that it captures the other side of the dotcom era: no, Silicon Valley didn’t produce any lasting infrastructure (unless you count a surplus of Aeron chairs), but what the mania did produce were a huge number of innovations, invented in parallel, that unlocked the following two decades of growth.

First, the dotcom era brought nearly the entire U.S. population online, thanks to that virtuous cycle that Hobart and Huber described in the above excerpt. This not only provided the market for the consumer Internet giants that followed, but also prepared an entire generation of future workers to work on the web, unlocking the SaaS enterprise market.

Second, the intense competition of the dotcom era led to one of my favorite inventions of all time, both because of its impact and because of its provenance. Microsoft famously saw Netscape, the OpenAI of the dotcom era, as a massive threat; the company responded with Internet Explorer, and a host of legally questionable tactics to spur its adoption. What is forgotten, however, is that Microsoft was at that time actually quite innovative in terms of pushing browser technology forward, driven by the need to beat Netscape, and one of those innovations was `XMLHttpRequest`.

`XMLHttpRequest`, introduced with Internet Explorer 5 in 1999, allowed Javascript to make asynchronous HTTP requests without reloading the page; previously, changing anything on a webpage meant reloading the entire thing. Now, however, you could interact with a page and have it update in place, without a reload. What makes this invention ironic is that this was the key capability that transformed the browser from a media consumption app to a productive one, and it was the productivity capabilities that began the long breakdown of Microsoft’s application moat. Once work could be done in a browser, it would be done everywhere, not just on Windows; this, in the long run, created the conditions for the smartphone revolution and the end of Windows’ dominance.
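To make the mechanics concrete, here is a minimal sketch of the pattern `XMLHttpRequest` enabled, in modern TypeScript; the `/api/inbox` endpoint and the `inbox` element id are invented for illustration:

```typescript
// A minimal sketch of the pattern XMLHttpRequest enabled: fetch data in the
// background and update one part of the page in place, with no full reload.
// The "/api/inbox" endpoint and "inbox" element id are hypothetical.
function refreshInbox(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/inbox", true); // third argument: asynchronous
  xhr.onload = () => {
    if (xhr.status === 200) {
      const inbox = document.getElementById("inbox");
      if (inbox) inbox.innerHTML = xhr.responseText; // only this element changes
    }
  };
  xhr.send();
}
```

This pattern came to be known as Ajax; the modern `fetch` API has since replaced `XMLHttpRequest`, but the update-in-place model is unchanged.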
This was, to be clear, but one of a multitude of new protocols and innovations that made today’s tech stack possible; what is important is how many of them were invented at once thanks to the bubble.

Third, the cost and complexity of serving all of these new use cases drove tremendous innovation on the backend. The Nvidia of the dotcom era was arguably not Cisco, but rather Sun: a huge percentage of venture capital went to buying Sun SPARC/Solaris servers to run these new-fangled companies. Solaris was the most advanced operating system in terms of running large websites, with the most mature TCP/IP stack, multithreading, symmetric multiprocessing, etc. Moreover, the dominance of Solaris meant that it had the largest pool of developers, which meant it was easier to hire if you ran Solaris. The problem, however, is that SPARC servers were extremely expensive, to the point of being nearly financially impractical for the largest-scale web applications like Hotmail or Yahoo. That’s why the former (in its startup days) ran its front-end on free software (FreeBSD) on commodity x86 hardware from the beginning, and why the latter made the same shift as it exploded in popularity. Both, however, had custom-built back-ends; it was Google, founded in 1998, that built its entire stack on commodity x86 hardware and Linux, unlocking the scalability that was critical to the huge growth in the Internet that followed.

This entire stack was the product of a massive amount of uncoordinated coordination: people came online for better applications that ran on hardware powered by software built by a massive array of companies and individuals; that all of this innovation and invention happened at the same time was because of the bubble. Oh, and to return to Perez: all of this ran over fiber laid by bankrupt companies. What Perez got right is that bubbles install physical capacity; what Hobart and Huber added is that they also create cognitive capacity, thanks to everyone pulling in the same direction at the exact same time, based not on fiat, but on a shared belief that this time is different.

### Is AI Different?

This question — or statement — is usually made optimistically. In this case, the optimistic take would be that AI is already delivering tangible benefits, that those benefits are leading to real demand from companies and consumers, and that all of the money being spent on AI will not be wasted but put to productive use. That may still be the case today — all of the hyperscalers claim that demand for their offerings exceeds supply — but if history is any indication we will eventually overshoot. There is, however, a pessimistic way to ask that question: will the AI bubble be beneficial like the positive bubbles chronicled by Perez and Hobart and Huber, or is it different? There have been reasons to be worried about both the physical buildout and the cognitive one.

Start with the physical: a huge amount of the money being spent on AI has gone to GPUs, particularly Nvidia’s, rocketing the fabless design company to a nearly $5 trillion valuation and the title of most valuable company in the world. The problem from a Perez perspective is that all of this spending on chips is, relative to the sort of infrastructure she wrote about — railroads, factories, fiber, etc. — short-lived. Chips break down and get superseded by better ones; most hyperscalers depreciate them over five years, and that may be generous.
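To put rough numbers on that contrast, here is a toy straight-line depreciation schedule (all dollar figures and lifetimes below are invented for illustration, not taken from this Article):

```typescript
// Toy straight-line depreciation. The amounts and asset lives are
// illustrative assumptions, not figures from the Article.
function bookValue(cost: number, lifeYears: number, afterYears: number): number {
  const annual = cost / lifeYears;
  return Math.max(cost - annual * afterYears, 0);
}

// A $1B GPU fleet on a five-year schedule is worth $0 on the books at year
// five; a $1B fiber plant on a twenty-year schedule still carries $750M.
console.log(bookValue(1_000_000_000, 5, 5));  // 0
console.log(bookValue(1_000_000_000, 20, 5)); // 750000000
```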
Whatever the correct number is, chips don’t live on as fully-depreciated assets that can be used cheaply for years, which means that the extent to which speculative spending goes towards GPUs is the extent to which this bubble might turn out to be a disappointing one. Fortunately, however, there are two big areas of investment that promise to have much more long-term utility, even if the bubble pops.

The first is fabs — the places where the chips are made. I’ve been fretting for years about declining U.S. capacity in this area, and the attendant dependence on Taiwan, the most fraught geopolitical location in the world, and for much of that time it wasn’t clear that anything would be done about it. Fast forward to today, and not only are foundries like TSMC and Samsung building fabs in the U.S., but the U.S. government is now a shareholder in Intel. There is still a long path to foundry independence for the U.S., particularly once you consider the trailing edge as well, but there is no question that the rise of AI has had a tremendous effect in focusing minds and directing investment towards solving a problem that might never have been solved otherwise.

The second is power. Microsoft CFO Amy Hood said on the company’s earnings call:

> As you know, we’ve spent the past few years not actually being short GPUs and CPUs per se, we were short the space or the power, is the language we use, to put them in. We spent a lot of time building out that infrastructure. Now, we’re continuing to do that, also using leases. Those are very long-lived assets, as we’ve talked about, 15 to 20 years. And over that period of time, do I have confidence that we’ll need to use all of that? It is very high.

Amazon CEO Andy Jassy made a similar comment on his company’s earnings call:

> On the capacity side, we brought in quite a bit of capacity, as I mentioned in my opening comments, 3.8 gigawatts of capacity in the last year with another gigawatt plus coming in the fourth quarter and we expect to double our overall capacity by the end of 2027. So we’re bringing in quite a bit of capacity today, overall in the industry, maybe the bottleneck is power. I think at some point, it may move to chips, but we’re bringing in quite a bit of capacity. And as fast as we’re bringing in right now, we are monetizing it.

As I noted yesterday, this actually surprised me: I assumed that chips were in short supply, and the power shortage was looming, but actually power is already the limiter. This is both disappointing and unsurprising, given how power generation capacity growth has stagnated over the last two decades.

At the same time, this is also encouraging: the fastest way to restart growth — and hopefully at an even higher rate than in the fifty years that preceded this stagnation — is to have massive economic incentives to build, combined with massive government incentives to eliminate red tape. AI provides both, and my hope is that the fact we are already hitting the power wall means that growth gets started that much sooner. It’s hard to think of a more useful and productive example of a Perez-style infrastructure buildout than power. It’s sobering to think about how many things have never been invented because power has never been considered a negligible input from a cost perspective; if AI does nothing more than spur the creation of massive amounts of new power generation it will have done tremendous good for humanity.
Indeed, if you really want to push on the bubble benefit point, wiping away the cost of building new power via bankruptcy of speculative investors — particularly if a lot of that power has low marginal fuel costs, like solar or nuclear — could be transformative in terms of what might be invented in the future. To that end, I’m more optimistic today than I was even a week ago about the AI bubble achieving Perez-style benefits: power generation is exactly the sort of long-term payoff that might only be achievable through the mania and eventual pain of a bubble, and the sooner we start feeling the financial pressure — and the excitement of the opportunity — to build more power, the better.

* * *

I’ve been less worried about the cognitive capacity payoff of the AI bubble for a while: while there might have been concern about OpenAI having an insurmountable lead, or before that Google being impregnable, nearly everyone in Silicon Valley is now working on AI, and so is China. Innovations don’t stay secret for long, and the time leading edge models stay in the lead is often measured in weeks, not years. Meanwhile, consumer uptake of AI is faster than any other tech product by far.

What is exciting about the last few weeks, however, is that there is attention being paid to other parts of the stack, beyond LLMs. For example, last week I interviewed Substrate founder James Proud about his attempt to build a new kind of lithography machine as the center of a new American foundry. I don’t know if Proud will succeed, but the likelihood of anyone even trying — and of getting funding — is dramatically higher in the middle of this bubble than it would have been a decade ago. It was also last week that Extropic announced a completely new kind of chip, one based not on binary 1s and 0s, but on probabilistic entropy measurements, that could completely transform diffusion models. Again, I don’t know if it will succeed, but I love that the effort exists, and is getting funding. And meanwhile, there are massive investments by every hyperscaler and a host of startups to make new chips for AI that promise to be cheaper, faster, more efficient, etc. All of these efforts are getting funding in a way they wouldn’t if we weren’t in a bubble. Hobart and Huber write in *Boom*:

> Not all bubbles destroy wealth and value. Some can be understood as important catalysts for techno-scientific progress. Most novel technology doesn’t just appear ex nihilo, entering the world fully formed and all at once. Rather, it builds on previous false starts, failures, iterations, and historical path dependencies. Bubbles create opportunities to deploy the capital necessary to fund and speed up such large-scale experimentation — which includes lots of trial and error done in parallel — thereby accelerating the rate of potentially disruptive technologies and breakthroughs.
>
> By generating positive feedback cycles of enthusiasm and investment, bubbles can be net beneficial. Optimism can be a self-fulfilling prophecy. Speculation provides the massive financing needed to fund highly risky and exploratory projects; what appears in the short term to be excessive enthusiasm or just bad investing turns out to be essential for bootstrapping social and technological innovations…A bubble can be a collective delusion, but it can also be an expression of collective vision. That vision becomes a site of coordination for people and capital and for the parallelization of innovation.
> Instead of happening over time, bursts of progress happen simultaneously across different domains. And with mounting enthusiasm…comes increased risk tolerance and strong network effects. The fear of missing out, or FOMO, attracts even more participants, entrepreneurs, and speculators, further reinforcing this positive feedback loop. Like bubbles, FOMO tends to have a bad reputation, but it’s sometimes a healthy instinct. After all, none of us wants to miss out on a once-in-a-lifetime chance to build the future.

This is why I’m excited to talk about new technologies, the prospect for which *I don’t know.* The more *I don’t know* projects there are, the more likely there is to be one that succeeds. And, if you want an investment that pays off not for a few years, and not even for a few decades, but literally forever, then your greatest hope should be invention and innovation.

### Stagnation: The Bubble Alternative

Hobart and Huber actually begin their book not by talking about ancient history, but about this century, and stagnation.

> The symptoms of technological, economic, and cultural stagnation can be detected everywhere. Some of the evidence is hard to quantify, but it can perhaps best be summarized by a simple thought experiment: Will children born today experience as much change as children born a century ago — a time when cars, electrical appliances, synthetic materials, and telephones were still in their infancy? Futurists and science-fiction authors once prophesied an era of abundant energy due to nuclear fission, the arrival of full automation, the colonization of the solar system, the end of poverty, and the attainment of immortality. In contrast, futurists today ask questions about how soon and how catastrophically civilization will collapse.

There is a science-fiction innovation that has been hovering around the edges of the tech industry for the last decade: virtual and augmented reality. It hasn’t gotten far. Meta has, since it started breaking out Reality Labs financials in Q4 2020, recognized $10.8 billion in revenue against $83.2 billion in costs; the total losses are far higher when you consider that the company bought Oculus VR for $2 billion six years before that breakout. Apple, meanwhile, announced the Vision Pro in 2023, launched it in 2024, and has barely talked about it since — and certainly not on earnings calls. Both companies would argue that the technology just isn’t there yet, and to the extent AR and VR are compelling, it’s because of the money and time they have spent developing it. I wonder, however, about a counter-factual where AR and VR were developed by a constellation of startups, not big companies: how much more innovation might there have been? Or, perhaps the bigger problem is that there was not — and, given that all of the investment is a line item in large company budgets, could not be — a bubble around AR and VR.

More generally, tech simply wasn’t much fun by the time 2020 rolled around. You had your big five tech companies who had each carved out their share of the market, unassailable in their respective domains, and the startup industry was basically itself another big tech company: Silicon Valley Inc., churning out cookie-cutter SaaS companies with a proven formula and low risk. In fact, it’s the absence of risk that Hobart and Huber identify as the hallmark of stagnation:

> Of course, the causes of stagnation are complex.
> But what these symptoms of stagnation and decline have in common is that they result from a societal aversion to risk, which has been on the rise for the past few decades. Societal risk intolerance expresses itself almost everywhere — in finance, culture, politics, education, science, and technology. Broadly, there seems to be a collective desire to suppress and control all risks and conserve what is at the expense of breaking the terminal horizon of the present and accelerating toward what could be.

This is why Hobart told me in a Stratechery Interview that *Boom* was ultimately an exhortation:

> **What I took away is your book was much more of a sociological exposé, like spiritual almost, and I shouldn’t say almost because you were actually quite explicit about it. It’s like you were seeking — the goal of this book, it feels like — is to call forth the spirit of the bubble as opposed to have some sort of technocratic overview. You give us useful history, but there’s not really charts or microeconomics, it’s an exhortation. Is that what you were going for?**
>
> **Byrne Hobart:** Yes, it is an exhortation. We do want people to pick up a copy and quit their job halfway through reading it or drop out of school and start something crazy. I don’t want to be legally liable if you do something sufficiently crazy, and I think that the spiritual element is something that we did want to talk about in the book, because I think if you — you can apply this totally secular framework to it, and it’s perfectly valid. Of course, if it is a mostly materialist framing of things, then it has a lot more real world data because it’s all reliant on that real world data, but if you have the belief or at least suspicion that all of us are unique and special, and there is something that we are, if not put on this Earth to do, at least there are things that we are able to do that other people wouldn’t do as well, that part of our job is to find those things and do them really well. Bubbles play into that in an interesting way because they tell you it’s time, it’s like you wanted to do this kind of thing.

What is fascinating about the AI bubble is that there is at its core a quasi-spiritual element. There are people working at these labs who believe they are building God; that is how they justify the massive investment in leading edge models that never have the chance to earn back their costs before they are superseded by someone else. That’s why they push for policies that I think are bad for innovation and bad for national security. I don’t like these side effects, to be clear, but I appreciate the importance of the belief and the motivation. And, I must say, it certainly is fun and compelling in a way that tech was not a few years ago. Bubbles may end badly, but history does not end: there are benefits from bubbles that pay out for decades, and the best we can do now is pray that the mania results in infrastructure and innovation that make this bubble worth it.

* * *

1. I highly recommend this overview if you are not familiar. ↩

* * *

## Resiliency and Scale

Wednesday, October 22, 2025

There seems, at first glance, to be little in common between the two big stories of the last two weeks.
On October 9, China announced expansive export controls on rare earths, which are critical to nearly all tech products; then, on October 20, US-East-1, the oldest and largest region of Amazon Web Services, suffered a DNS issue that impacted cloud services that people didn’t even know they used, until they were no longer available. There is, however, a commonality, one that cuts to the heart of accepted wisdom about both the Internet and international trade, and serves as a reminder that what actually happens in reality matters more than what should happen in theory.

### US-East-1 and the End of Resiliency

The Internet story is easier to tell. While the initial motivation for ARPANET, the progenitor of the Internet, was to share remote computing resources, the more famous motivation of surviving a nuclear attack did undergird a critical Internet technology: packet switching. Knocking out one node of the Internet should not break the whole thing, and, technically, it doesn’t. And yet we have what happened this week: US-East-1 is but one node on the Internet, but it is so critical to so many applications that it effectively felt like the Internet was broken.

The reasoning is straightforward: scale and inertia. Start with the latter: Northern Virginia was a place that, in the 1990s, had relatively cheap and reliable power, land, and a fairly benign natural-disaster profile; it also had one of the first major Internet exchange points, thanks to its proximity to Washington D.C., and was centrally located between the west coast and Europe. That drew AOL, the largest Internet Service Provider of the 1990s, which established the region as data center central, leading to an even larger buildout of critical infrastructure, and making it the obvious location to place AWS’s first data center in 2006. That data center became what is known as US-East-1, and from the beginning it has been the location with the most capacity, the widest variety of instance types, and the first region to get AWS’s newest features. It’s so critical that AWS itself has repeatedly been shown to have dependencies on US-East-1; it’s also the default location in tutorials and templates used by developers around the world. You might make the case that “no one got fired for using US-East-1”, at least until now.

Amazon, meanwhile, has invested billions of dollars into AWS over the last two decades, making the case that enterprises ought not waste their time and money building out and maintaining their own servers: even if the costs penciled out similarly, the flexibility of being able to scale up and scale down instantly was worth shifting capital costs to operational ones. The fact that this was not only a winning argument but an immensely profitable one became clear with The AWS IPO, which is how I described the Amazon earnings report where the company first broke out AWS’s financials. For the first decade of AWS’s existence the conventional wisdom was that only Amazon, with its famous appetite for tiny margins, would be able to stomach similarly narrow margins in the cloud; in fact it turned out that AWS was extremely profitable, and that profitability increased with scale. And, nestled within AWS’s scale, was US-East-1: that was the place with the cheapest instances, because it had the most, and that is where both startups and established businesses started as they moved to the cloud.
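The pull of that default is visible in nearly any getting-started snippet; here is a hedged illustration of the typical boilerplate (representative, not quoted from any particular tutorial), using the AWS SDK for JavaScript v3:

```typescript
// Typical tutorial boilerplate: the region is hard-coded to us-east-1, the
// Northern Virginia region described above. Copy-pasted defaults like this
// are one reason so many workloads end up concentrated there.
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

async function main(): Promise<void> {
  const { Buckets } = await s3.send(new ListBucketsCommand({}));
  console.log(Buckets?.map((bucket) => bucket.Name));
}

main().catch(console.error);
```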
Sure, best practices meant you had redundancy, but best practices are not always followed practices, and when it comes to networking, things can break in weird ways, particularly if DNS is involved. The larger lesson, however, is that while the Internet provided resiliency in theory, it also dramatically reduced the costs of putting your data and applications anywhere; then, once you could put your data and applications anywhere, everyone put their data and applications in the place that was both the easiest and the cheapest. That, by extension, only increased the scale of the place where everyone put their data and applications, making it even cheaper and easier. The end result is that, as we saw this week, the Internet in practice is less resilient than it was 20 years ago. Back then data centers went down all of the time, but if that data center served a single customer in an office park it didn’t really matter; now one data center in Northern Virginia is a failure point that affects nearly everyone.

### Rare Earths and China Dependency

Rare earths are very different from packets that move at the speed of light. You have to build massive mines, separate trace minerals from mounds of dirt, then process and refine them to get something useful. It’s a similar story for physical goods generally: you have to get the raw materials, refine and process them, manufacture components, do final assembly, and then ship them to stores and warehouses until they reach their final destinations in workplaces and homes. This process was so onerous that, midway through the last century, only a portion of the world’s countries had ever managed to industrialize, and those that did trod similar paths and developed similar capabilities. Geography mattered tremendously, which is why, to take some classic examples, every country had its own car companies, its own chemical companies, etc. Yes, countries did search and colonize the planet in pursuit of raw materials, but the industrial base was firmly established in the homeland.

Technology of another sort changed this equation; from 2016’s The Brexit Possibility:

> In the years leading up to the 1970s, three technological advances completely transformed the meaning of globalization:
>
> - In 1963, Boeing produced the 707-320B, the first jet airliner capable of non-stop service from the continental United States to Asia; in 1970 the 747 made this routine.
> - In 1964, the first transpacific telephone cable between the United States and Japan was completed; over the next several years it would be extended throughout Asia.
> - In 1968, ISO 668 standardized shipping containers, dramatically increasing the efficiency with which goods could be shipped over the ocean in particular.
>
> These three factors in combination, for the first time, enabled a new kind of trade. Instead of manufacturing products in the United States (or Europe or Japan or anywhere else) and trading them to other countries, multinational corporations could invert themselves: design products in their home markets, then communicate those designs to factories in other countries, and ship finished products back to their domestic market. And, thanks to the dramatically lower wages in Asia (supercharged by China’s opening in 1978), it was immensely profitable to do just that.

What followed over the last several decades was the same establishment of scale and inertia that led to a dependency on US-East-1 on the Internet, only in this case the center of gravity was China.
Once the cost of communication and transportation plummeted it suddenly became viable to shift industry across the globe in pursuit of lower labor costs, looser environmental laws, and governments eager to support factory build-outs. Then, over time, scale and inertia took over: if everyone else was building a factory in China, it was easier to build your factory there; if all of the factories for your components were there, it was easier to do final assembly there.

This pattern applied to rare earths just as much as anything else. China identified rare earths as a strategic priority even as the United States made it increasingly untenable to maintain, much less expand, rare earth mining and processing here; over time nearly every part of the rare earth production chain, from separation to processing to refining to actual usage in final products became centered in China, and any attempts to build out an alternative saw their markets flooded by Chinese supply, driving down prices and dooming projects. Not that end users cared: they could just buy from China, just like everyone everywhere increasingly bought everything from China.

One of the critiques I’ve previously leveled at classical free trade arguments is that they ignore the importance of learning curves; from A Chance to Build:

> The story to me seems straightforward: the big loser in the post World War 2 reconfiguration I described above was the American worker; yes, we have all of those service jobs, but what we have much less of are traditional manufacturing jobs. What happened to chips in the 1960s happened to manufacturing of all kinds over the ensuing decades. Countries like China started with labor cost advantages, and, over time, moved up learning curves that the U.S. dismantled; that is how you end up with this from Walter Isaacson in his Steve Jobs biography about a dinner with then-President Obama:
>
> > When Jobs’s turn came, he stressed the need for more trained engineers and suggested that any foreign students who earned an engineering degree in the United States should be given a visa to stay in the country. Obama said that could be done only in the context of the “Dream Act,” which would allow illegal aliens who arrived as minors and finished high school to become legal residents — something that the Republicans had blocked. Jobs found this an annoying example of how politics can lead to paralysis. “The president is very smart, but he kept explaining to us reasons why things can’t get done,” he recalled. “It infuriates me.”
> >
> > Jobs went on to urge that a way be found to train more American engineers. Apple had 700,000 factory workers employed in China, he said, and that was because it needed 30,000 engineers on-site to support those workers. “You can’t find that many in America to hire,” he said. These factory engineers did not have to be PhDs or geniuses; they simply needed to have basic engineering skills for manufacturing. Tech schools, community colleges, or trade schools could train them. “If you could educate these engineers,” he said, “we could move more manufacturing plants here.” The argument made a strong impression on the president. Two or three times over the next month he told his aides, “We’ve got to find ways to train those 30,000 manufacturing engineers that Jobs told us about.”
>
> I think that Jobs had cause-and-effect backwards: there are not 30,000 manufacturing engineers in the U.S. because there are not 30,000 manufacturing engineering jobs to be filled.
> That is because the structure of the world economy — choices made starting with Bretton Woods in particular, and cemented by the removal of tariffs over time — made them nonviable. Say what you will about the viability or wisdom of Trump’s tariffs, the motivation — to undo eighty years of structural changes — is pretty straightforward!
>
> The other thing about Jobs’ answer is how ultimately self-serving it was. This is not to say it was wrong: Apple could not only not manufacture an iPhone in the U.S. because of cost, it also can’t do so because of capability; that capability is downstream of an ecosystem that has developed in Asia and a long learning curve that China has traveled and that the U.S. has abandoned. Ultimately, though, the benefit to Apple has been profound: the company has the best supply chain in the world, centered in China, that gives it the capability to build computers on an unimaginable scale with maximum quality for not that much money at all.

The Apple-China story is so compelling because it is so representative of how the U.S. has become dependent on China. What is notable, however, is that this dependency points to another flaw in classic free trade formulations: while in theory free trade and globalization make supply chains more resilient because you can source from anywhere, in practice free trade has destroyed resiliency. Apple CEO Tim Cook famously said in what became known as The Tim Cook Doctrine:

> We believe that we need to own and control the primary technologies behind the products we make, and participate only in markets where we can make a significant contribution.

The fact of the matter, however, is that Apple’s most important technology — the one architected by Cook himself — is its unmatched capability to make the most sophisticated and profitable devices at astronomical scale, and Apple ultimately does not own and control it: China does. So it goes for nearly everything else in the industrial supply chain, including rare earths. Rare earths are not, in fact, rare, but China’s scale and the inertia of the last forty years have led to total dependence on a country that is a geopolitical foe of the United States. And so, once again, removing or reducing the costs of transportation and communication — this time for atoms — did not increase resiliency but rather, thanks to the pursuit of lower costs enabled by scale, destroyed it.

### COVID and Information Resiliency

There is a happier story to be told about overcoming resiliency collapse, but I should warn you up front: this might be a controversial take. It has to do with the current state of information, the earliest and most popular form of Internet content. Back in March 2020 I wrote an Article entitled Zero Trust Information that made the case that the Internet was under-appreciated as a medium for conveying information that cut against the prevailing wisdom; my go-to example was the Seattle Flu Study, which heroically traced the spread of COVID in the United States at the beginning of 2020, making the (correct) case that the virus was far more widespread in the U.S. than the CDC in particular was willing to admit.

In truth, however, my optimism was misplaced, or at least early. What followed in the weeks and months and even years afterwards was one of the greatest failures in information discovery and propagation maybe ever.
I actually wrote an Update on March 2, 2020 that included a huge amount of relevant COVID information — including the fact that the virus was going to infect everyone, and that it was much less fatal than initially assumed — that only became widely accepted years later (and still isn’t accepted by a big chunk of the population). It’s hard not to think about how differently we might have handled the ensuing months and years if just those two facts were widely accepted, much less other banal observations like the fact that of course natural immunity is a real thing, or that airborne viruses are all-but-inescapable indoors, but much less of an issue outdoors. Unfortunately what happened is that by 2020 information distribution was highly centralized on Facebook, Twitter, and YouTube, and all three companies went to extraordinary lengths to limit the aperture of acceptable discourse on topics that contained a great deal of unknowns; indeed, it’s possible that that March 2 Update, had it been posted on one of those platforms, would have at one point earned me a ban. In short, our resiliency in terms of information propagation was by 2020 completely destroyed, and we all suffered the consequences.

Then Elon Musk bought Twitter. What is fascinating about what has happened in the years since Musk’s purchase is not that Twitter has become a fountain of truth, even if it did in some respects become considerably freer. More importantly, Musk’s purchase and ensuing political advocacy provided the impetus for a number of Twitter alternatives, including Threads, Mastodon, and BlueSky. Each of these networks has its own focus and mores and overall culture. What is critical about their existence, however, is not that any one of them has a monopoly on the truth: rather, given that such a monopoly is impossible, it’s heartening that there is more than one forum. To that end, should a COVID-like episode arise today, there may be an easily distinguishable and widely-held-on-the-platform X truth, and Threads truth, and Mastodon truth, and BlueSky truth; the fact that none of those truths will be completely right — and in some cases completely at odds — is not a bug but a feature: that’s actual resiliency, because it increases the likelihood that we collectively arrive at the right answer sooner than we did in the COVID era.

### The Costs of Resiliency

What is worth noting is that the only way we arrived at this point is through a fair bit of value destruction: Musk overpaid for Twitter, and losing a monopoly on short-form text communication diminished the value further. I think, however, that the collective outcome was positive. Unwinding US-East-1 dependencies will also take a similar sort of pain: businesses will need to spend money to truly understand their stack, and build actual resiliency into their systems, such that one region on one cloud provider going down doesn’t screw up their business; it can be done, it just needs the budget. And, in the end, we can do something similar with China. There, though, the difference between atoms and bits is very profound, and exceptionally costly. Overcoming the advantages of scale and decades-long learning curves will be very painful and very expensive; the only solution to the inevitable destruction of resiliency that comes from decreased transportation and communications costs is to increase costs elsewhere, even if those costs are artificial and lead to deadweight loss.
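What that budget buys can be mundane. Here is a minimal sketch of the idea, assuming per-region replicas whose bucket names and regions are invented for illustration (an assumption for the example, not code from this Article):

```typescript
// A minimal sketch of region-level failover: read from the primary region,
// and fall back to a replica elsewhere if it is unreachable. Bucket names
// and regions are hypothetical; a real setup also needs replication,
// health checks, and a failover story for writes.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const replicas = [
  { region: "us-east-1", bucket: "example-assets-use1" }, // primary
  { region: "us-west-2", bucket: "example-assets-usw2" }, // fallback
];

async function getWithFailover(key: string) {
  let lastError: unknown;
  for (const { region, bucket } of replicas) {
    try {
      const client = new S3Client({ region });
      return await client.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    } catch (error) {
      lastError = error; // region down or unreachable; try the next replica
    }
  }
  throw lastError;
}
```

Every line of that sketch is cost: duplicated storage, duplicated configuration, duplicated testing, which is exactly the point.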
I am, needless to say, much more optimistic about our willingness to accept the costs of moving some bits around than I am about the willingness to accept the drastically larger and longer-lasting costs of moving atoms. If we don’t, however, then we need to be clear that the true price being paid for global efficiency is national resiliency. Pursuing the former led to the destruction of the latter; there’s no way back other than destroying some value along the way.

* * *

## OpenAI’s Windows Play

Tuesday, October 7, 2025

OpenAI’s flood of announcements is getting hard to keep up with. A selection — not exhaustive! — from just the last month:

- A massive data center buildout in partnership with Oracle
- A $100 billion investment from Nvidia and associated deal to acquire 10 GW worth of Nvidia chips
- A new Instant Checkout offering for the long tail of e-commerce
- A partnership with Samsung and SK hynix for memory for AI chips
- The Sora 2 video generation model and Sora the app
- A deal with AMD for 6 GW worth of AMD chips and an associated OpenAI stake in the chipmaker
- A slew of DevDay announcements, including apps in ChatGPT, AgentKit, Sora 2 and GPT-5 Pro in the API, the GA release of Codex, and more

The last two announcements just dropped yesterday, and actually bring clarity and coherence to the entire list. In short, OpenAI is making a play to be the Windows of AI.

For nearly two decades smartphones, and in particular iOS, have been the touchstones in terms of discussing platforms. It’s important to note, however, that while Apple’s strategy of integrating hardware and software was immensely profitable, it entailed leaving the door open for a competing platform to emerge. The challenge of being a hardware company is that by virtue of needing to actually create devices you can’t serve everyone; Apple in particular didn’t have the capacity or desire to go downmarket, which created the opportunity for Android to not only establish a competing platform but to actually significantly exceed iOS in market share. That means that if we want a historical analogy for total platform dominance — which increasingly appears to be OpenAI’s goal — we have to go back further, to the PC era and Windows.

### Platform Establishment

Before there was Windows there was DOS; before DOS, however, there was a fast-talking deal-making entrepreneur named Bill Gates. From The Truth About Windows Versus the Mac:

> In the late 1970s and very early 1980s, a new breed of personal computers were appearing on the scene, including the Commodore, MITS Altair, Apple II, and more. Some employees were bringing them into the workplace, which major corporations found unacceptable, so IT departments asked IBM for something similar. After all, “No one ever got fired for buying IBM.”
>
> IBM spun up a separate team in Florida to put together something they could sell IT departments. Pressed for time, the Florida team put together a minicomputer using mostly off-the-shelf components; IBM’s RISC processors and the OS they had under development were technically superior, but Intel had a CISC processor for sale immediately, and a new company called Microsoft said their OS — DOS, which they acquired from another company — could be ready in six months.
> For the sake of expediency, IBM decided to go with Intel and Microsoft.
>
> The rest, as they say, is history. The demand from corporations for IBM PCs was overwhelming, and DOS — and applications written for it — became entrenched. By the time the Mac appeared in 1984, the die had long since been cast. Ultimately, it would take Microsoft a decade to approach the Mac’s ease-of-use, but Windows’ DOS underpinnings and associated application library meant the Microsoft position was secure regardless.

There is nothing like IBM and its dominant position in enterprise today; rather, the route to becoming a platform is to first be a massively popular product. Acquiring developers and users is not a chicken-and-egg problem: it is clear that you must get users first, which attracts developers, enhancing your platform in a virtuous cycle; to put it another way, first a product must Aggregate users, and then it gets developers for free. ChatGPT is exactly that sort of product, and at yesterday’s DevDay 2025 keynote CEO Sam Altman and team demonstrated exactly that sort of pull; from The Verge:

> OpenAI is introducing a way to work with apps right inside ChatGPT. The idea is that, from within a conversation with the chatbot, you can essentially tag in apps to help you complete a task while ChatGPT offers context and advice. The company showed off a few different ways this can work. In a live demo, an OpenAI employee launched ChatGPT and then asked Canva to create a poster of a name for a dog-walking business; after a bit of waiting, Canva came back with a few different examples, and the presenter followed up by asking for a generated pitch deck based on the poster. The employee also asked Zillow via ChatGPT to show homes for sale in Pittsburgh, and it created an interactive Zillow map — which the employee then asked follow-up questions about.
>
> Apps available inside ChatGPT starting today will include Booking.com, Canva, Coursera, Expedia, Figma, Spotify, and Zillow. In the “weeks ahead,” OpenAI will add more apps, such as DoorDash, OpenTable, Target, and Uber. OpenAI recently started allowing ChatGPT users to make purchases on Etsy through the chatbot, part of its overall push to integrate it with the rest of the web.

It’s fair to wonder if these app experiences will measure up to these companies’ self-built apps or websites, just as there are questions about just how well the company’s Instant Checkout will convert; what is notable, however, is that I disagree that this represents a “push to integrate…with the rest of the web”. This is the opposite: this is a push to make ChatGPT the operating system of the future. Apps won’t be on your phone or in a browser; they’ll be in ChatGPT, and if they aren’t, they simply will not exist for ChatGPT users. That, by extension, means the burden of making these integrations work — and those conversions performant — will be on third party developers, not OpenAI. This is the power that comes from owning users, and OpenAI is flexing that power in a major way.

### Second Sourcing

There is a second aspect to the IBM PC strategy, and that is the role of AMD.
From a 2024 Update:

> While IBM chose Intel to provide the PC’s processor, they were wary of being reliant on a single supplier (it’s notable that IBM didn’t demand the same of the operating system, which was probably a combination of not fully appreciating operating systems as a point of integration and lock-in for 3rd-party software, which barely existed at that point, and a recognition that software is just bits and not a physical good that has to be manufactured). To that end IBM demanded that Intel license its processor to another chip firm, and AMD was the obvious choice: the firm was founded by Jerry Sanders, a Fairchild Semiconductor alum who had worked with Intel’s founders, and specialized in manufacturing licensed chips.

The relationship between Intel and AMD ended up being incredibly fraught and largely documented by endless lawsuits (you can read a brief history in that Update); the key point to understand, however, is that (1) IBM wanted to have dual suppliers to avoid being captive to an essential component provider and (2) IBM had the power to make that happen because they had the customers who were going to provide Intel so much volume. The true beneficiary of IBM’s foresight, of course, was Microsoft, which controlled the operating system; IBM’s mandate is why it is appropriate that “Windows” comes first in the “Wintel” characterization of the PC era. Intel reaped tremendous profits from its position in the PC value chain, but more value accrued to Microsoft than anyone else.

This question of who will capture the most profit from the AI value chain remains an open one. There’s no question that the early winner is Nvidia: the company has become the most valuable in the world by virtue of its combination of best-in-class GPUs, superior networking, and CUDA software layer that locks people into Nvidia’s own platform. And, as long as power is the limiting factor, Nvidia is well-placed to maintain its position. What Nvidia is not shy about is capturing its share of value, and that is a powerful incentive for other companies in the value chain to look for alternatives. Google is the furthest along in this regard thanks to its decade-old investment in TPUs, while Amazon is seeking to mimic their strategy with Trainium; Microsoft and Meta are both working to design and build their own chips, and Apple is upscaling Apple Silicon for use in the data center.

Once again, however, the most obvious and most immediately available alternative to Nvidia is AMD, and I think the parallels between yesterday’s announcement of an OpenAI-AMD deal and IBM’s strong-arming of Intel are very clear; from the Wall Street Journal:

> OpenAI and chip-designer Advanced Micro Devices announced a multibillion-dollar partnership to collaborate on AI data centers that will run on AMD processors, one of the most direct challenges yet to industry leader Nvidia. Under the terms of the deal, OpenAI committed to purchasing 6 gigawatts worth of AMD’s chips, starting with the MI450 chip next year. The ChatGPT maker will buy the chips either directly or through its cloud computing partners.
>
> AMD chief Lisa Su said in an interview Sunday that the deal would result in tens of billions of dollars in new revenue for the chip company over the next half-decade. The two companies didn’t disclose the plan’s expected overall cost, but AMD said it costs tens of billions of dollars per gigawatt of computing capacity.
>
> OpenAI will receive warrants for up to 160 million AMD shares, roughly 10% of the chip company, at 1 cent per share, awarded in phases if OpenAI hits certain milestones for deployment. AMD’s stock price also has to increase for the warrants to be exercised.

If OpenAI is the software layer that matters to the ecosystem, then Nvidia’s long-term pricing power will be diminished; the company, like Intel, may still take the lion’s share of chip profits through sheer performance and low-level lock-in, but I believe the most important reason OpenAI is making this deal is to lock in its own dominant position in the stack. It is pretty notable that this announcement comes only weeks after Nvidia’s investment in OpenAI; that, though, is another affirmation that the company that has the users has the ultimate power.

There is one other part of the stack to keep an eye on: TSMC. Both Nvidia and AMD make their chips with the Taiwanese giant, and while TSMC is famously reluctant to take price, they are positioned to do so in the long run. Altman surely knows this as well, which means that I wouldn’t be surprised if there is an Intel announcement sooner rather than later; maybe there is fire behind that recent smoke about AMD talking with Intel?

### The AI Linchpin

When I started writing Stratechery, Windows was a platform in decline, superseded by mobile and, surprisingly enough, increasingly challenged by its all-but-vanquished ancient foe, the Mac. To that end, one of my first pieces about Microsoft was about then-CEO Steve Ballmer’s misguided attempt to focus on devices instead of services. I wrote a few years later in Microsoft’s Monopoly Hangover:

> The truth is that both [IBM and Microsoft] were victims of their own monopolistic success: Windows, like the System/360 before it, was a platform that enabled Microsoft to make money in all directions. Both companies made money on the device itself and by selling many of the most important apps (and in the case of Microsoft, back-room services) that ran on it. There was no need to distinguish between a vertical strategy, in which apps and services served to differentiate the device, or a horizontal one, in which the device served to provide access to apps and services. When you are a monopoly, the answer to strategic choices can always be “Yes.”

Microsoft at that point in time no longer had that luxury: the company needed to make a choice — the days of doing everything were over — and that choice should be services (which is exactly what Satya Nadella did). Ever since the emergence of ChatGPT made OpenAI The Accidental Consumer Tech Company, I have been making similar arguments about OpenAI: they need to focus on the consumer opportunity and leave the enterprise API market to Microsoft. Not only would focus help the company capture the consumer opportunity; there was also the opportunity cost of GPUs used for the API that couldn’t be used to deliver consumers a better experience across every tier.

I now have much more appreciation for OpenAI’s insistence on doing it all, for two reasons. First, this is a company in pure growth mode, not in decline. Tradeoffs are in the long run inevitable, but why make them before you need to? It would have been a mistake for Microsoft to restrict Windows to only the enterprise in the 1980s, even if the company had to low-key retreat from the consumer market over the last fifteen years; there was a lot of money to make before that retreat needed to happen!
OpenAI, meanwhile, is the hottest brand in AI, so why not make a play to own it all, from consumer touchpoint to API to everything in-between? Second, we’ve obviously crossed the line into bubble territory, which always was inevitable. The question now is whether or not this is a productive bubble: what durable infrastructure, built by eventually bankrupt companies, will we benefit from for years to come? GPUs are not that durable infrastructure; data centers are more long-lasting, but not worth the financial pain of a bubble burst. The real payoff would be a massive build-out in power generation, which would be a benefit for the next half-century. Another potential payoff would be the renewed viability of Intel, and as I noted above, OpenAI may be uniquely positioned and motivated to make that happen.

More broadly, this play to be the Windows of AI effectively positions OpenAI as the linchpin of the entire AI buildout. Just look at what the mere announcement of partnerships with OpenAI has done for the stocks of Oracle and AMD. OpenAI is creating the conditions such that it is the primary manifestation of the AI bubble, which ensures the company is the primary beneficiary of all of the speculative capital flooding into the space. Were the company more focused, as I have previously advised, they might not have the leverage to get enough funding to meet those more modest (but still incredible) goals; now it’s hard to see them not getting whatever money they want, at least until the bubble bursts.

* * *

What’s amazing about this overview is that I only scratched the surface of what OpenAI announced both yesterday and over the last month — and I haven’t even mentioned Sora (although I covered that topic yesterday). What the company is seeking to achieve is incredibly audacious, but also logical, and something we’ve seen before:

And, interestingly enough, there is an Apple to OpenAI’s Microsoft: it’s Google, with their fully integrated stack, from chips to data centers to models to end-user distribution channels. Instead of taking on a menagerie of competitors, however, Google is facing an increasingly unified ecosystem, organized, whether they wish to be or not, around OpenAI. Such is the power of aggregating demand and the phenomenon that is ChatGPT.

* * *

## Sora, AI Bicycles, and Meta Disruption

Monday, October 6, 2025

The App Store charts tell the story, at least for the first week of AI-generated video apps:

This doesn’t, somewhat embarrassingly, match my initial impressions: I liked the Vibes addition to the Meta AI app and was somewhat cool on Sora. I spent much of last week’s episode of Sharp Tech exploring why my initial impressions were so off base, and I think M.G. Siegler — who was sucked into Sora immediately — captures a few of them in Sora’s Slop Hits Different:

> Anyway, what’s different, and what I underestimated about Sora, is that the AI content here is not just randomly generated things. It’s content that’s either loaded with “cameos” from your connections or it’s “real” world content that’s, well, hilarious. Not all of it, of course. But a lot of it! In this regard, it’s really not too dissimilar from TikTok — and back in the day, Vine!
> This is a lot more like those social networks but with the main difference being that it’s a lot easier to create such content thanks to AI.
>
> I think that’s the real revelation here. It’s less about consumption and more about creation. I previously wrote about how I was an early investor in Vine in part because it felt like it could be analogous to Instagram. Thanks in large part to filters, that app made it easy for anyone to think they were good enough to be a photographer. It didn’t matter if they were or not, they thought they were — I was one of them — so everyone posted their photos. Vine felt like it could have been that for video thanks to its clever tap-to-record mechanism. But actually, it became a network for a lot of really talented amateurs to figure out a new format for funny videos on the internet. When Twitter acquired the company and dropped the ball, TikTok took that idea and scaled it (thanks to ByteDance paying, um, Meta billions of dollars for distribution, and their own very smart algorithms).
>
> In a way, Sora feels like enabling everyone to be a TikTok creator.

I feel blessed for a whole host of reasons, many of them related to the fact I’ve been able to carve out a career as a creator. Sure, I call myself an analyst, and I write primarily about big tech companies, but one thing I realized over the years is that the success of Stratechery is tied to it being a creative endeavor; there have been a lot of analysts over the years who have launched similar sites, but what was often missing was the narrative element. The best Articles on Stratechery tell a story, with a beginning, middle, and end, and the analysis is along for the ride; analysis alone doesn’t move the needle.

That I tell stories is itself a function of the way I think: I have a larger meta story in my head about how the world works, and I’m always adding to and augmenting that story; that’s why, in various interviews, I’ve noted that being wrong is often the most inspiring (albeit painful) place to be. That means my story is incomplete, and I need to deepen my understanding of the world I’m seeking to chronicle. I certainly have that opportunity right now.

### My Creativity Blindspot

This is what I wrote in my Update about Sora:

> Indeed, it feels like each company has an entirely different target audience: YouTube is making tools for creators, Meta is building the ultimate lean-back dream-like experience, and OpenAI is making an app that is, in my estimation, the easiest for normal people to use.
>
> In this new competition, I prefer the Meta experience, by a significant margin, and the reason why goes back to one of the oldest axioms in technology: the 90/9/1 rule.
>
> - 90% of users consume
> - 9% of users edit/distribute
> - 1% of users create
>
> If you were to categorize the target market of these three AI video entrants, you might say that YouTube is focused on the 1% of creators; OpenAI is focused on the 9% of editors/distributors; Meta is focused on the 90% of users who consume. Speaking as someone who is, at least for now, more interested in consuming AI content than in distributing or creating it, I find Meta’s Vibes app genuinely compelling; the Sora app feels like a parlor trick, if I’m being honest, and I tired of my feed pretty quickly. I’m going to refrain from passing judgment on YouTube, given that my current primary YouTube use case is watching vocal coaches break down songs from KPop Demon Hunters.
>
> I honestly have no idea if my evaluation of these apps is broadly applicable; as I’ve noted repeatedly, I’m hesitant to make any pronouncements about what resonates with society broadly given that I am the weirdo in the room. Still, I do think it’s striking how this target market evaluation tracks with the companies themselves: YouTube has always prioritized creators, while OpenAI’s business model is predicated on people actively using AI; it’s Meta that has stayed focused on the silent majority that simply consumes, and as a silent consumer, I still like Vibes!

As I noted at the beginning, the verdict is in, and my evaluation of these apps is *not* broadly applicable. Way more people like Sora than Vibes, and OpenAI has another viral hit. What I hear from people who love the app, however, is very much in line with what Siegler wrote: yes, they are browsing the feed, but the real lure is losing surprisingly large amounts of time making content — Sora lets them be a content creator.

This was a blind spot for me because I don’t have that itch! I’m creating content constantly — three Articles/Updates, an Interview, and three podcast episodes a week is enough for me, thank you very much. When I am vegging out on my phone, I want to passively consume, and I personally found the Vibes mix of fantastical environments and beautiful visages calming and inspiring; almost everyone else feels differently:

I had to laugh at this because I’ve spent way too much time watching Apple’s Aerial Video screensavers; apparently my tastes are consistent! Beyond that, however, is a second blind spot: how much of the 90/9/1 rule is a law of the universe, versus a manifestation of barriers when it comes to creation? At the risk of sounding like a snob, have I become the sort of 1%-er who is totally out of touch?

### The AI Bicycle

Back in 2022, when AI image generation was just starting to get good, I wrote about The AI Unbundling and the idea propagation chain:

> The evolution of human communication has been about removing whatever bottleneck is in this value chain. Before humans could write, information could only be conveyed orally; that meant that the creation, vocalization, delivery, and consumption of an idea were all one-and-the-same. Writing, though, unbundled consumption, increasing the number of people who could consume an idea.
>
> Now the new bottleneck was duplication: to reach more people whatever was written had to be painstakingly duplicated by hand, which dramatically limited what ideas were recorded and preserved. The printing press removed this bottleneck, dramatically increasing the number of ideas that could be economically distributed:
>
> The new bottleneck was distribution, which is to say this was the new place to make money; thus the aforementioned profitability of newspapers. That bottleneck, though, was removed by the Internet, which made distribution free and available to anyone.
>
> What remains is one final bundle: the creation and substantiation of an idea. To use myself as an example, I have plenty of ideas, and thanks to the Internet, the ability to distribute them around the globe; however, I still need to write them down, just as an artist needs to create an image, or a musician needs to write a song. What is becoming increasingly clear, though, is that this too is a bottleneck that is on the verge of being removed.
This is what was unlocked by Sora: all sorts of people without the time or inclination or skills or equipment to make videos could suddenly do just that — and they absolutely *loved* it. And why wouldn’t they? To be creative is to be truly human — to actually think of something yourself, instead of simply passively consuming — and AI makes creativity as accessible as a simple prompt.

I think this is pretty remarkable, so much so that I’ve done a complete 180 on Sora: this new app from OpenAI may be the single most exciting manifestation of AI yet, and the most encouraging in terms of AI’s impact on humans. Everyone — including lots of people in my Sora feed — is leaning into the concept of AI slop, which I get: we are looking at a world of infinite machine-generated content, and a lot of it is going to be terrible. At the same time, how incredible is it to give everyone with an iPhone a creative outlet? It reminds me of one of my favorite Steve Jobs moments, just before he died, at the introduction of the iPad 2; I wrote about it in 2024’s The Great Flattening:

> My favorite moment in that keynote — one of my favorite Steve Jobs keynote moments ever, in fact — was the introduction of GarageBand. You can watch the entire introduction and demo, but the part that stands out in my memory is Jobs — clearly sick, in retrospect — being moved by what the company had just produced:
>
> > I’m blown away with this stuff. Playing your own instruments, or using the smart instruments, anyone can make music now, in something that’s this thick and weighs 1.3 pounds. It’s unbelievable. GarageBand for iPad. Great set of features — again, this is no toy. This is something you can really use for real work. This is something that, I cannot tell you, how many hours teenagers are going to spend making music with this, and teaching themselves about music with this.
>
> Jobs wasn’t wrong: global hits have originated on GarageBand, and undoubtedly many more hours of (mostly terrible, if my personal experience is any indication) amateur experimentation. Why I think this demo was so personally meaningful for Jobs, though, is that not only was GarageBand about music, one of his deepest passions, but it was also a manifestation of his life’s work: creating a bicycle for the mind.
>
> > I remember reading an Article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet earth. How many kilocalories did they expend to get from point A to point B, and the condor won: it came in at the top of the list, surpassed everything else. And humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.
> >
> > But somebody there had the imagination to test the efficiency of a human riding a bicycle. Human riding a bicycle blew away the condor, all the way off the top of the list. And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes, and so for me a computer has always been a bicycle of the mind, something that takes us far beyond our inherent abilities.
> >
> > I think we’re just at the early stages of this tool, very early stages, and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, but I think that’s nothing compared to what’s coming in the next 100 years.
>
> In Jobs’ view of the world, teenagers the world over are potential musicians, who might not be able to afford a piano or guitar or trumpet; if, though, they can get an iPad — now even thinner and lighter! — they can have access to everything they need. In this view, “There’s an app for that” is profoundly empowering.

Well, now there’s an AI for that, and it’s accessible to everyone. And yes, I get the objections. I slave over these posts, thinking carefully about the structure and every word choice; it seems cheap to ask an LLM to generate the same. I’m certain that artists feel the same about AI images, or musicians about AI music, or YouTube and TikTok creators about Sora videos; what about the craft? That, though, is an easy concern to have when you already have a creative outlet; it’s also easy to make the case that more content means more compelling content to consume, even if the percentage of what is great is very small. What I didn’t fully appreciate, however, is what falls in the middle: the fact that so many more people get to be creators, and what a blessing that is. How many people have had ideas in their head, yet were incapable of substantiating them, and now can? I myself benefited greatly from the last unbundling — the ability for anyone to distribute content; why should I begrudge the latest unbundling, and the many more people who will benefit from AI substantiation of their creative impulses? Bicycles for all!

### Instagram’s Social Umbrella

Siegler, in his post, discussed how he once thought Vine could be like Instagram, whose filters made it easy to feel like a good photographer, but that was only step one; Chris Dixon described Instagram’s evolution as Come for the Tool, Stay for the Network:

> A popular strategy for bootstrapping networks is what I like to call “come for the tool, stay for the network.” The idea is to initially attract users with a single-player tool and then, over time, get them to participate in a network. The tool helps get to initial critical mass. The network creates the long term value for users, and defensibility for the company.
>
> Here are two historical examples: 1) Delicious. The single-player tool was a cloud service for your bookmarks. The multiplayer network was a tagging system for discovering and sharing links. 2) Instagram. Instagram’s initial hook was the innovative photo filters. At the time some other apps like Hipstamatic had filters but you had to pay for them. Instagram also made it easy to share your photos on other networks like Facebook and Twitter. But you could also share on Instagram’s network, which of course became the preferred way to use Instagram over time.

Dixon wrote that post in 2015, and Instagram has since gone much further than that, as I documented in 2021’s Instagram’s Evolution:

- There was the tool-to-network evolution that Dixon talked about.
- The second evolution was the addition of video.
- The third evolution was the introduction of the algorithmic feed.
- The fourth evolution was Stories, driven by competition with Snapchat.
- The fifth evolution was what I was writing about in that Article: the commitment to short-form video, driven by competition with TikTok.

That last evolution is fully baked in at this point; late last month Instagram announced that it was changing its navigation to focus on private messaging and Reels; I didn’t explicitly cover the 2013 addition of Instagram Direct, but it certainly is the case that messaging is where social networking happens today.
What is public is pure entertainment, where the content you see is pulled from across the network and tailored for you specifically. I think this evolution was both necessary and inevitable; I first wrote that Facebook needed to move in this direction in 2015’s Facebook and the Feed:

> Consider Facebook’s smartest acquisition, Instagram. The photo-sharing service is valuable because it is a network, but it initially got traction because of filters. Sometimes what gets you started is only a lever to what makes you valuable. What, though, lies beyond the network? That was Facebook’s starting point, and I think the answer to what lies beyond is clear: the entire online experience of over a billion people. Will Facebook seek to protect its network — and Zuckerberg’s vision — or make a play to be the television of mobile?

It wasn’t until TikTok peeled off a huge amount of attention that Facebook finally realized that viewing itself as a social network was actually limiting its potential. If the goal was to monopolize user attention — the only scarce resource on the Internet — then artificially limiting what people saw to their social network was to fight with one hand tied behind your back; TikTok was taking share not just because of its format, but also because it wasn’t really a social network at all.

This is all interesting context for how OpenAI characterized Sora in their introductory post: it’s a *social* app.

> Today, we’re launching a new social iOS app just called “Sora,” powered by Sora 2. Inside the app, you can create, remix each other’s generations, discover new videos in a customizable Sora feed, and bring yourself or your friends in via cameos. With cameos, you can drop yourself straight into any Sora scene with remarkable fidelity after a short one-time video-and-audio recording in the app to verify your identity and capture your likeness…
>
> This app is made to be used with your friends. Overwhelming feedback from testers is that cameos are what make this feel different and fun to use — you have to try it to really get it, but it is a new and unique way to communicate with people. We’re rolling this out as an invite-based app to make sure you come in with your friends. At a time when all major platforms are moving away from the social graph, we think cameos will reinforce community.

First, just because Meta needed to move beyond the social network doesn’t mean social networking isn’t still valuable, or appealing. As an analogy, consider the concept of a pricing umbrella: when something becomes more expensive, it opens up the market for a lower-priced competitor. In this case Instagram’s evolution has created a social umbrella: sure, Instagram content may be “better” by virtue of being pulled from anywhere, but that means there is now a space for a content app that is organized around friends.

Second, remember the creativity point above: one of the challenges of restricting Instagram content to just what your social network posted is that your social network may not post very many interesting things. That gap was initially filled by following influencers, but now Instagram simply goes out and finds what you are interested in without you having to do anything. In Sora, however, your network is uniquely empowered to be creative, increasing the amount of interesting content in a network-mediated context (and, of course, Sora is also pulling from elsewhere to populate your feed).
What you’re seeing, if you squint, is disruption: Instagram has gone “up-market” in terms of content, leaving space for a new entrant; that new entrant, meanwhile, is not simply cheaper/smaller. Rather, it’s enabled by a new technological paradigm that lets it compete orthogonally with the incumbent. Granted, that new paradigm is very expensive, particularly compared to the content that Instagram gets for free, but the extent to which it restores value to your social network is notable.

### Meta Concerns

I am on the record as being very bullish about the impact of AI on Meta’s business:

- It’s good for their ad business in the short, medium, and long-term (and YouTube’s as well).
- More content benefits the company with the most popular distribution channels.
- AI will be the key to unlocking both AR and VR.

The key to everything, however, is maintaining the hold Meta has on user attention, and the release of both Vibes and Sora has me seriously questioning point number two. What I appreciate about both of these apps is the fact that they are explicitly AI content; I said in my Update about Vibes:

> One of the reasons why AI slop is so annoying is — paradoxically — the fact that a lot of it has gotten quite good. That means that when consuming content you have to continually be ascertaining if what you see is real or AI-generated; to put it in the terms of the Article I just quoted, you might want to lean back, but if you don’t want to be taken in or make a fool of yourself then you have to constantly be leaning forward to figure out what is or isn’t AI.
>
> What this means for Vibes is that the fact it is unapologetically and explicitly all AI is quite profound: it’s a true lean-back experience, where the fact none of it is real is a point of interest and — if Holz is right — inspiration and imagination. I find it quite relaxing to consume, in a way I don’t find almost any other feed on my phone.

The reason this is problematic for Meta (and YouTube) is that I’m not sure the company can counter Sora — or any other AI-generated content app that appears — in the same way they countered Snapchat and TikTok. Both challengers introduced new formats — Stories in the case of Snapchat, and short-form video in the case of TikTok — but the content was still produced by humans; that made it much more palatable to stuff those formats into Instagram. AI might be different: Meta certainly has data on this question, but I could imagine a scenario where users are actually annoyed and turned off by mixing AI-generated content with human content — and because Instagram isn’t really a social network anymore, the fact that that content might be made by or include your friends might not be enough. Implicit in this observation is the fact that I don’t think that human content is going anywhere; there just might be a smaller percentage of time devoted to it, and that’s a problem for a company predicated on marshaling attention.

The second issue for Meta is that their AI capabilities simply don’t match OpenAI’s, or Google’s for that matter. It’s clear that Meta knows this is the case — look no further than this summer’s hiring spree and total overhaul of their AI approach — but creating something like Sora is a lot more difficult than copying Stories or short-form video. I imagine this shortcoming will be rectified, but Sora is in the market now.

I also think that it is fair to raise some questions about point three.
I have been a vocal proponent of AI being the key to the Metaverse, but my tastes in content may not be very broadly applicable! I loved Vibes because to me it felt like virtual reality, but if it was virtual reality, and no one liked it, maybe the concept actually isn’t that appealing? Time will tell, but I do keep coming back to the social aspects of Sora: people like the real world, and they like people they know, and virtual reality in particular just might not be that broadly popular.

And, while I’m here, I continue to think that Meta’s recent financial success is not entirely organic:

> It turns out I was right last quarter that Meta had a lot of room to increase Reels monetization, but not just because they could target ads better (that was a part of it, as I noted above): rather, it turns out that short-form video is so addictive that Meta can simply drive more engagement — and thus more ad inventory — by pushing more of it. That’s impression driver number one — and the most important one. The second one is even more explicit: Meta simply started showing more ads to people (i.e. “ad load optimization”).
>
> All of this ties back to where I started, about how Meta learned that you have to give investors short-term results to get permission for long-term investments. I don’t think it’s a coincidence that, in the same quarter where Meta decided to very publicly up its investment in the speculative “Superintelligence”, users got pushed more Reels and Facebook users in particular got shown more ads.

The positive spin on this is that Meta has dials to turn; by the same token, investors who have flipped from intrinsically doubting Meta to intrinsically trusting them should realize that it was the pre-2022 Meta, the one that regularly voiced the importance of not pushing too many ads in order to preserve the user experience, that actually deserved the benefit of the doubt for growth that was purely organic. This last quarter is, to my mind, a bit more pre-determined.

CEO Mark Zuckerberg framed the company’s new Personal Superintelligence like this:

> As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.
>
> Meta’s vision is to bring personal superintelligence to everyone. We believe in putting this power in people’s hands to direct it towards what they value in their own lives.
>
> This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well.

I agree with the sentiment, but it’s worth being honest about today’s reality: Meta’s financial fortunes, at least for now, are in fact tied up in a centralized content engine that gives users “a dole of its output”; it’s nice from an investor perspective that Meta can turn the dials and get people to spend that much more time in Instagram.
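To make the “dials” concrete, here is a minimal sketch of the decomposition that Update implies: ad impressions as the product of audience, time spent, and ad load. Every number below is hypothetical, chosen only to show how the levers compound; Meta does not disclose these inputs at this granularity.

```python
# A minimal sketch of the ad "dials": impressions scale with
# audience size, engagement (time spent), and ad load. All numbers
# are invented for illustration; none are Meta disclosures.

def daily_impressions(users: int, minutes_per_user: float, ads_per_minute: float) -> float:
    """Daily ad impressions as the product of the three dials."""
    return users * minutes_per_user * ads_per_minute

baseline = daily_impressions(users=1_000_000, minutes_per_user=30, ads_per_minute=0.8)

# Push more Reels (time spent +10%) and raise ad load (+10%):
tuned = daily_impressions(users=1_000_000, minutes_per_user=33, ads_per_minute=0.88)

print(f"baseline: {baseline:,.0f} impressions/day")
print(f"tuned:    {tuned:,.0f} impressions/day (+{tuned / baseline - 1:.0%})")
```

Two ten-percent turns of the dials compound into roughly twenty percent more inventory before a single new user shows up, which is why “ad load optimization” can flatter a quarter without any organic growth.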
I for one can’t say that I feel particularly great when I’m done watching Reels for longer than I planned, and it’s certainly not a creative endeavor on my part — that’s for the content creators. OpenAI, meanwhile, with both ChatGPT and Sora, is in fact placing easily accessible tools in people’s hands today, first with text and now with video. And, as I noted above, I actually find it exciting precisely because of the possibility that many more people are on the verge of discovering a creativity streak they didn’t even know they had, now that AI is available to substantiate it. So much Meta optimism is, paradoxically, pessimistic about the human condition; it may be the case that the extent to which AI makes humans better is the extent to which Meta faces disruption.

* * *

## The YouTube Tip of the Google Spear

Tuesday, September 23, 2025

Action is happening up and down the LLM stack: Nvidia is making deals with Intel, OpenAI is making deals with Oracle, and Nvidia and OpenAI are making deals with each other. Nine years after Nvidia CEO Jensen Huang hand-delivered the first Nvidia DGX-1 AI computer to OpenAI, the chip giant is investing up to $100 billion in the AI lab, which OpenAI will, of course, spend on Nvidia AI systems.

This ouroboros of a deal certainly does feel a bit frothy, but there is a certain logic to it: Nvidia is uniquely dominant in AI thanks to the company’s multi-year investment in not just superior chips but also an entire ecosystem from networking to software, and has the cash flow and stock price befitting its position in the AI value chain. Doing a deal like this at this point in time not only secures the company’s largest customer — and rumored ASIC maker — but also gives Nvidia equity upside beyond the number of chips it can manufacture. More broadly, lots of public investors would like the chance to invest in OpenAI; I don’t think Nvidia’s public market investors mind having now acquired that stake indirectly.

The interconnectedness of these investments reflects the interconnectedness of the OpenAI and Nvidia stories in particular: Huang may have delivered OpenAI their first AI computer, but it was OpenAI that delivered Nvidia the catalyst for becoming the most valuable company in the world, with the November 2022 launch of ChatGPT. Ever since, the assumption of many in tech has been that the consumer market in particular has been OpenAI’s to lose, or perhaps more accurately, to monetize; no company has ever grown faster in terms of users and revenue, and that’s before they had an advertising model! And beyond the numbers, have you used ChatGPT? It’s so useful. You can look up information, or format text, and best of all you can code! Of course there are other models like Anthropic’s Claude, which has excelled at coding in particular, but surely the sheer usefulness makes ultimate success inevitable!

### A Brief History of Social Media

If a lot of those takes sound familiar, it’s because I’ve made some version of most of them; I also, perhaps relatedly, took to Twitter like a fish to water. Just imagine: an app that was the nearly perfect mixture of content I was interested in and people I wanted to hear from, and interact with.
Best of all, it was text: the efficiency of information acquisition was unmatched, and it was just as easy to say my piece. It took me much longer to warm up to Facebook, and, frankly, I never was much of a user; I’ve never been one to image-dump episodes of my life, nor have I had much inclination to wade through others’. I wasn’t interested in party photos; I lusted after ideas and arguments, and Twitter — a preference shared by much of both tech and media — was much more up my alley.

Despite that personal predilection, however, and perhaps because of my background in small-town Wisconsin and subsequently living abroad, I retained a strong sense of the importance of Facebook. Sure, the people who I was most interested in hearing from and interacting with may have been the types to leave their friends and family for the big city, but for most people, friends and family were the entire point of life generally, and by extension, social media specifically. To that end, I was convinced from the beginning that Facebook was going to be a huge deal, and argued so multiple times on Stratechery; social media was ultimately a matter of network effects and scale, and Facebook was clearly on the path to domination, even as much of the Twitterati were convinced the company was the next MySpace. I was similarly bullish about Instagram: no, I wasn’t one to post a lot of personal pictures, but while I personally loved text, most people liked photos.

What people really liked most of all, however — and not even Facebook saw this coming — was video. TikTok grew into a behemoth with the insight that social media was only ever a stepping stone to personal entertainment, of which video was the pinnacle. There were no network effects of the sort that everyone — including regulators — assumed would lead to eternal Facebook dominance; rather, TikTok realized that Paul Krugman’s infamous dismissal of the Internet actually was somewhat right: most people actually don’t have anything to say that is particularly compelling, which means that limiting the content you see to your social network dramatically decreases the possibility you’ll be entertained every time you open your social networking app. TikTok dispensed with this artificial limitation, simply showing you compelling videos, period, no matter where they came from.

### The Giant in Plain Sight

Of course TikTok wasn’t the first company to figure this out: YouTube was the first video platform, and from the beginning it focused on building an algorithm that prioritized giving you videos you were interested in over showing you what you claimed to want to see. YouTube, however, was and probably always has been my biggest blind spot: I’m just not a big video watcher in general, and YouTube seemed like more work than short-form video, which married the most compelling medium with the most addictive delivery method — the feed. Sure, YouTube was a great acquisition for Google — certainly in line with the charge to “organize the world’s information and make it universally accessible and useful” — but I — and Google’s moneymaker, Search — were much more interested in text, and pictures if I must.
More than that, YouTube is not just the center of culture, but the nurturer of it: the company just announced that it has paid out more than $100 billion to creators over the last four years; given that many creators earn more from brand deals than they do from YouTube ads, that actually understates the size of the YouTube economy. Yes, TikTok is a big deal, but TikTok stars hope to make it on YouTube, where they can actually make a living.

And yet, YouTube sometimes seems like an afterthought, at least to people like me and others immersed in the text-based Internet. Last week I was in New York for YouTube’s annual “Made on YouTube” event, but the night before I couldn’t remember the name; I turned to Google, natch, and couldn’t figure it out. The reason is that talk about YouTube mostly happens on YouTube; I, and Google itself, still live in a text-based world.

That is the world that was rocked by ChatGPT, especially Google. The company’s February 2023 introduction of Bard in Paris remains one of the most surreal keynotes I’ve ever watched: most of the content was rehashed, the presenters talked as if they were seeing their slides for the first time, and for one demo of a phone-based feature the presenter neglected to have a phone on hand. This was a company facing a frontal assault on their most obvious and profitable area of dominance — text-based information retrieval — and they were completely flat-footed.

Google has, in the intervening years, made tremendous strides to come back, including dumping the Bard name in favor of Gemini, itself based on vastly improved underlying models. I’m also impressed by how the company has incorporated AI into search; not only are AI Overviews generally useful, they’re also incredibly fast, and as a bonus have the links I sometimes prefer already at hand. Ironically, however, you could make the case that the biggest impact LLMs have had on Search is giving a federal judge an excuse to let Google continue paying its biggest would-be competitors (like Apple) to simply offer their customers Google instead. The biggest reason to be skeptical of the company’s fortunes in AI was that they had the most to lose; the company is doing an excellent job of minimizing the losses.

What I would submit, however, is that Google’s most important and most compelling AI announcements actually don’t have anything to do with Search, at least not yet. These announcements start, as you might expect, with Google’s DeepMind research lab; where they hit the real world, however, is on YouTube — and that, like the user-generated streaming service itself, is a really big deal.

### The DeepMind-to-YouTube Pipeline

A perfect example of the DeepMind-to-YouTube pipeline was last week’s announcement of Veo 3-based features for making YouTube Shorts. From the company’s blog post:

> We’ve partnered with Google DeepMind to bring a custom version of their most powerful video generation model, Veo 3, to YouTube. Veo 3 Fast is designed to work seamlessly in YouTube Shorts for millions of creators and users, for free. It generates outputs with lower latency at 480p so you can easily create video clips – and for the first time, with sound – from any idea, all from your phone. This initial launch will allow you to not only generate videos, but also use one video to animate another (or a photo), stylize your video with a single touch, and add objects. You can also create an entire video — complete with voiceover — from a collection of clips, or convert speech to song. All of these features are