Public Post 04.11.24

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Nibh sed pulvinar proin gravida hendrerit lectus a. Egestas diam in arcu cursus euismod quis viverra nibh cras. Faucibus turpis in eu mi bibendum neque egestas. Lorem sed risus ultricies tristique. Eget egestas purus viverra accumsan in. Eget est lorem ipsum dolor sit amet consectetur adipiscing elit. Ut sem nulla pharetra diam sit. Nunc sed id semper risus in hendrerit. Pellentesque massa placerat duis ultricies lacus sed turpis. Etiam dignissim diam quis enim lobortis. Sit amet nisl purus in mollis. Quam quisque id diam vel quam elementum pulvinar etiam non. Duis ultricies lacus sed turpis tincidunt id aliquet risus. Dolor sit amet consectetur adipiscing elit duis. Amet venenatis urna cursus eget nunc. Iaculis nunc sed augue lacus viverra vitae. Volutpat lacus laoreet non curabitur gravida.

Massa massa ultricies mi quis hendrerit dolor. Placerat duis ultricies lacus sed. Pellentesque habitant morbi tristique senectus et netus et malesuada fames. Feugiat vivamus at augue eget arcu dictum varius duis. Id consectetur purus ut faucibus pulvinar elementum integer. Urna et pharetra pharetra massa. Vivamus arcu felis bibendum ut tristique et egestas. At consectetur lorem donec massa sapien. Nulla porttitor massa id neque aliquam vestibulum morbi blandit cursus. Sed pulvinar proin gravida hendrerit lectus. Elit ut aliquam purus sit amet luctus. Mi bibendum neque egestas congue quisque egestas diam in. Amet massa vitae tortor condimentum lacinia quis vel. Aliquet eget sit amet tellus cras adipiscing enim eu. Vitae sapien pellentesque habitant morbi.

Gravida neque convallis a cras semper. Pellentesque nec nam aliquam sem et tortor. Suspendisse potenti nullam ac tortor vitae purus. Nunc id cursus metus aliquam eleifend mi. Tortor pretium viverra suspendisse potenti. Dignissim enim sit amet venenatis urna. Quis lectus nulla at volutpat diam ut venenatis tellus. Pulvinar mattis nunc sed blandit. Ut enim blandit volutpat maecenas volutpat blandit aliquam etiam erat. Auctor elit sed vulputate mi. Nibh nisl condimentum id venenatis a condimentum vitae sapien. Auctor elit sed vulputate mi sit amet mauris commodo quis. Aliquam malesuada bibendum arcu vitae elementum curabitur vitae.

Enim tortor at auctor urna nunc id. Quisque id diam vel quam elementum pulvinar etiam non quam. Blandit cursus risus at ultrices mi. Sed cras ornare arcu dui vivamus arcu felis. Netus et malesuada fames ac turpis egestas. Etiam erat velit scelerisque in dictum non. Imperdiet massa tincidunt nunc pulvinar. Ultrices neque ornare aenean euismod elementum nisi quis. Amet volutpat consequat mauris nunc congue nisi. Tristique senectus et netus et malesuada fames ac.

Urna porttitor rhoncus dolor purus non enim. Ut sem viverra aliquet eget sit. Feugiat vivamus at augue eget arcu dictum varius duis. Eu scelerisque felis imperdiet proin fermentum leo vel. Tellus elementum sagittis vitae et leo duis ut diam. Adipiscing elit pellentesque habitant morbi tristique senectus et netus. Sapien pellentesque habitant morbi tristique senectus et netus. Velit sed ullamcorper morbi tincidunt ornare massa eget. Ornare lectus sit amet est placerat. Id consectetur purus ut faucibus. Blandit turpis cursus in hac habitasse platea. Risus in hendrerit gravida rutrum quisque non.

Nec feugiat in fermentum posuere. Risus at ultrices mi tempus imperdiet nulla malesuada pellentesque. Nulla facilisi nullam vehicula ipsum a arcu. Porttitor lacus luctus accumsan tortor posuere ac. Ultricies tristique nulla aliquet enim. Consectetur libero id faucibus nisl tincidunt eget. Eu tincidunt tortor aliquam nulla. Vitae semper quis lectus nulla. Nec feugiat nisl pretium fusce id velit ut. Sodales ut eu sem integer vitae justo eget magna fermentum. Adipiscing elit duis tristique sollicitudin. Sed egestas egestas fringilla phasellus faucibus scelerisque eleifend donec. Maecenas pharetra convallis posuere morbi leo urna molestie at elementum. Mus mauris vitae ultricies leo integer malesuada nunc.

Ultricies leo integer malesuada nunc vel risus commodo. Sapien et ligula ullamcorper malesuada proin. Ultricies tristique nulla aliquet enim tortor at. Diam donec adipiscing tristique risus nec feugiat in. Vel fringilla est ullamcorper eget nulla facilisi etiam dignissim. Metus aliquam eleifend mi in nulla posuere sollicitudin. In arcu cursus euismod quis viverra nibh cras pulvinar. Nisi vitae suscipit tellus mauris a diam maecenas. At lectus urna duis convallis convallis tellus id interdum. Etiam dignissim diam quis enim lobortis scelerisque fermentum dui. Amet tellus cras adipiscing enim eu. Cras adipiscing enim eu turpis egestas pretium aenean pharetra.

Lacus luctus accumsan tortor posuere ac ut consequat semper viverra. Vitae justo eget magna fermentum iaculis eu. In vitae turpis massa sed elementum tempus egestas sed. Dictum sit amet justo donec enim diam vulputate. Vulputate mi sit amet mauris commodo quis imperdiet massa tincidunt. Non pulvinar neque laoreet suspendisse interdum consectetur libero. Sodales ut etiam sit amet nisl purus in. Mi sit amet mauris commodo quis imperdiet massa tincidunt. Nisl nunc mi ipsum faucibus. Habitant morbi tristique senectus et netus et malesuada fames ac. Non odio euismod lacinia at quis risus. Sit amet consectetur adipiscing elit. Facilisis sed odio morbi quis commodo odio aenean sed adipiscing. Amet consectetur adipiscing elit duis tristique sollicitudin nibh sit amet. Nunc mattis enim ut tellus. Dapibus ultrices in iaculis nunc. Dolor sit amet consectetur adipiscing.

Elements Test 50K

Block Quote

Airbnb, on the other hand, has been building something truly unique; the company explains in its S-1:

Travel is one of the world’s largest industries, and its approach has become commoditized. The travel industry has scaled by offering standardized accommodations in crowded hotel districts and frequently-visited landmarks and attractions. This one-size-fits-all approach has limited how much of the world a person can access, and as a result, guests are often left feeling like outsiders in the places they visit.

Airbnb has enabled home sharing at a global scale and created a new category of travel. Instead of traveling like tourists and feeling like outsiders, guests on Airbnb can stay in neighborhoods where people live, have authentic experiences, live like locals, and spend time with locals in approximately 100,000 cities around the world. In our early days, we described this new type of travel with the tagline “Travel like a human.” Today, people simply refer to it with a single word: “Airbnb.”

Image within Block Quote

The problem with the vast majority of antitrust complaints about big tech generally, and online services specifically, is that Page is right. You may only have one choice of cable company or phone service or any number of physical goods and real-world services, but on the Internet everything is just zero marginal bits.

That, though, means there is an abundance of data, and Google helps consumers manage that abundance better than anyone. This, in turn, leads Google’s suppliers to work to make Google better — what is SEO but a collective effort by basically the entire Internet to ensure that Google’s search engine is as good as possible? — which attracts more consumers, which drives suppliers to work even harder in a virtuous cycle. Meanwhile, Google is collecting information from all of those consumers, particularly what results they click on for which searches, to continuously hone its accuracy and relevance, making the product that much better, attracting that many more end users, in another virtuous cycle:

Google benefits from two virtuous cycles

One of the central ongoing projects of this site has been to argue that this phenomenon, which I call Aggregation Theory, is endemic to digital markets. From the original Aggregation Theory Article:

The value chain for any given consumer market is divided into three parts: suppliers, distributors, and consumers/users. The best way to make outsize profits in any of these markets is to either gain a horizontal monopoly in one of the three parts or to integrate two of the parts such that you have a competitive advantage in delivering a vertical solution. In the pre-Internet era the latter depended on controlling distribution…Note how the distributors in all of these industries integrated backwards into supply: there have always been far more users/consumers than suppliers, which means that in a world where transactions are costly owning the supplier relationship provides significantly more leverage.

The fundamental disruption of the Internet has been to turn this dynamic on its head. First, the Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be commoditized leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

In short, increased digitization leads to increased centralization (the opposite of what many originally assumed about the Internet). It also provides a lot of consumer benefit — again, Aggregators win by building ever better products for consumers — which is why Aggregators are broadly popular in a way that traditional monopolists are not.

Unfortunately, too many antitrust-focused critiques of tech have missed this essential difference. I wrote about this mistake in Where Warren’s Wrong:

Perhaps it is best for Senator Warren’s argument that her article never does explain how these companies became so big, because the reason cuts at the core of her argument: Google, Facebook, Amazon, and Apple dominate because consumers like them. Each of them leveraged technology to solve unique user needs, acquired users, then leveraged those users to attract suppliers onto their platforms by choice, which attracted more users, creating a virtuous cycle that I have christened Aggregation Theory.

Aggregation Theory is the reason why all of these companies have escaped antitrust scrutiny to date in the U.S.: here antitrust law rests on the consumer welfare standard, and the entire reason why these companies succeed is because they deliver consumer benefit.

The European Union does have a different standard, rooted in a drive to preserve competition; given that the virtuous cycle described by Aggregation Theory does tend towards winner-take-all effects, it is not a surprise that Google in particular has faced multiple antitrust actions from the European Commission. Even the EU standard, though, struggles with the real consumer benefits delivered by Aggregators.

Consider the Google Shopping case: Google was found guilty of antitrust violations in a case brought by a shopping comparison site called Foundem, which complained about their site being buried when consumers were searching for items to buy. This complaint made no sense, as I explained in Ends, Means, and Antitrust:

If I search for a specific product, why would I not want to be shown that specific product? It frankly seems bizarre to argue that I would prefer to see links to shopping comparison sites; if that is what I wanted I would search for “Shopping Comparison Sites”, a request that Google is more than happy to fulfill:

Screen Shot 2017-06-28 at 6.40.22 PM

The European Commission is effectively arguing that Google is wrong by virtue of fulfilling my search request explicitly; apparently they should read my mind and serve up an answer (a shopping comparison site) that is in fact different from what I am requesting (a product)?

There is certainly an argument to be made that Google, not only in Shopping but also in verticals like local search, is choking off the websites on which Search relies by increasingly offering its own results. At the same time, there is absolutely nothing stopping customers from visiting those websites directly, or downloading their apps, bypassing Google completely. That consumers choose not to is not because Google is somehow restricting them — that is impossible! — but because they don’t want to. Is it really the purview of regulators to correct consumer choices willingly made?

Not only is that answer “no” for philosophical reasons, it should be “no” for pragmatic reasons, as the ongoing Google Shopping saga in Europe demonstrates. As I noted last December, the European Commission keeps changing its mind about remedies in that case, not because Google is being impertinent, but because seeking to undo an Aggregator by changing consumer preferences is like pushing on a string.

Block Quote within Block Quote

After the House Antitrust Committee released its report on the tech industry I wrote that it was important to distinguish between Anti-monopoly and Antitrust:

That, though, is why it is a mistake to read the report as some sort of technocratic document. There are, to be sure, a lot of interesting facts that were dug up by the committee, and some bad behavior, which may or may not be anticompetitive in the legal sense. Certainly the companies would prefer to have a legalistic antitrust debate, for good reason: it is exceptionally difficult to make the case that any of these companies are causing consumer harm, which is the de facto standard for antitrust in the United States. Indeed, what makes Google’s contention that “The competition is only a click away” so infuriating is the fact it is true.

What matters more is the context laid out by Letwin: there is a strain of political thought in America, independent of political party (although traditionally associated with Democrats), that is inherently allergic to concentrated power — monopoly in the populist sense, if not the legal one.

Hatred of monopoly is one of the oldest American political habits and like most profound traditions, it consisted of an essentially permanent idea expressed differently at different times. “Monopoly”, as the word was used in America, meant at first a special legal privilege granted by the state; later it came more often to mean exclusive control that a few persons achieved by their own efforts; but it always meant some sort of unjustified power, especially one that raised obstacles to equality of opportunity.

In other words, this subcommittee report is simply a new expression of an old idea; the details matter less than the fact it exists.

This is going to be a critical sentiment to keep in mind as this case unfolds. If I had to bet on an outcome, I would bet on Google winning. Apple and everyone else are free to enter into whatever contracts they wish, and consumers are free to undo the defaults that flow from those contracts. Where is the harm?

Images in Body and Block Quote

I wrote last month about the New York Times’s traditional role in setting the national news agenda; headlines in the New York Times in the morning were lead stories on national newscasts in the evening, and headlines in regional papers the following day. If you map this dynamic to Moore’s model, it might look like this:

The Idea Adoption Curve

Where, though, did the New York Times get its ideas? Obviously a lot of stories came from its own reporting, or that of its peers, but the “enthusiast” part of the curve was mostly centered in academia. Visionaries, meanwhile, were a collection of think tanks, journals, and specialty magazines that operated at a loss, which was fine because making money was never the point: getting ideas into publications like the New York Times was.

Idea generation in the analog age

What happened in the 2000s, when Yglesias and Klein first burst on the scene as part of the original generation of political bloggers, was the development of a new genre of “enthusiasts” who were creating and debating new ideas mostly for free. Sure, most of these bloggers found work with publications like American Prospect (Yglesias) or Washington Monthly (Klein), but those publications were political projects, not economic ones, with the goal of influencing the mass market, not monetizing it.

Vox, on the other hand, has been something much different, both in terms of mission statement and business model. “Explaining the news” is, from a certain perspective, about crossing the chasm on the idea curve; enthusiasts have created new ideas, and visionaries have refined them, and now the challenge is to spread those ideas to the population generally. Vox’s business model, meanwhile, is firmly on the right hand side of the curve. Advertising is all about scale, and the vast majority of the market falls to the right of the chasm.

The problem with this approach, though, is that publications simply aren’t as good at advertising as Facebook and Google. I wrote five years ago in Popping the Publishing Bubble:

Publishers and ad networks are locked in a dysfunctional relationship that doesn’t serve readers or advertisers, and it’s only a matter of time until advertisers — which again, care only about reaching potential customers, wherever they may be — desert the whole mess entirely for new, more efficient and effective advertising options that put them directly in front of the people they care about. That, first and foremost, is Facebook, but other social networks like Twitter, Snapchat, Instagram, Pinterest, and others will benefit as well:

A drawing of Facebook As a More Efficient Advertising Option

I don’t know the specifics of how Vox’s business is doing, although it is notable that the site’s previous ad inventory is now mostly filled with requests for donations; meanwhile, there is no confusion about the business models of either Substack or the New York Times.

VideoPress Test (Both in Body and Block Quote)

The single most compelling line of questioning came from Louisiana junior Senator John Kennedy;1 first, he exposed the company’s implied claims that they could actually fix the problem as a sham:

This is the exact issue I discussed in The Super Aggregators and the Russians:

Super-aggregators not only have zero transaction costs when it comes to users and content, but also when it comes to making money. This is at the very core of why Google and Facebook are so much more powerful than any of the other purely information-centric networks. The vast majority of advertisers on both networks never deal with a human (and if they do, it’s in customer support functionality, not sales and account management): they simply use the self-serve ad products like the one pictured above (or a more comprehensive tool built on the companies’ self-serve API).

I added up the numbers in Trustworthy Networking, estimating that Facebook served 276 million unique ads per quarter, and my entire point was the same as Kennedy’s: there is no way that Facebook could ever review every ad, much less investigate who is behind them, without completely ruining their revenue model.

Kennedy wasn’t done, though: he went on to press Stretch in particular about just how much data Facebook has about, well, everyone:

Stretch was insistent that Facebook would never look up the data on any one individual, both because of internal policies as well as the way the company’s data store was engineered. What Kennedy was driving at, though, is that Facebook could; here is the transcription:

Lina Khan, who rose to prominence with her 2017 law review article Amazon’s Antitrust Paradox, and who served as counsel for the antitrust subcommittee over the course of the investigation that culminated in Wednesday’s hearings, summarized the New Brandeis Movement of antitrust in 2018:

As the name suggests, this new movement traces its intellectual roots to Justice Louis Brandeis, who served on the Supreme Court between 1916 and 1939. Brandeis was a strong proponent of America’s Madisonian traditions—which aim at a democratic distribution of power and opportunity in the political economy. Early in the twentieth century, Brandeis successfully updated America’s antimonopoly regime, along Madisonian lines, for the industrial era, and his philosophy held sway well into the 1970s. As the ‘New Brandeis School’ gains prominence — even prompting two floor speeches by Senator Orrin Hatch (a Republican from Utah) — it’s worth understanding what this vision of antimonopoly does and does not represent.

While the article is worth reading in full, it is no accident that Khan started with “democracy”:

Brandeis and many of his contemporaries feared that concentration of economic power aids the concentration of political power, and that such private power can itself undermine and overwhelm public government. Dominant corporations wield outsized influence over political processes and outcomes, be it through lobbying, financing elections, staffing government, funding research, or establishing systemic importance that they can leverage. They use these strategies to win favorable policies, further entrenching their dominance.

Brandeis also believed that the structure of our markets and of our economy can determine how much real liberty individuals experience in their daily lives. Most people’s day-to-day experience of power comes not from interacting with public officials, but through relationships in their economic lives — negotiating pay with an employer, for example, or wrangling the terms of business with a trading partner. Brandeis feared that autocratic structures in the commercial sphere — such as when one or a few private corporations call all the shots — can preclude the experience of liberty, threatening democracy in our civic sphere.

Chairman David Cicilline, in the conclusion to his opening statement, made a clear gesture to the New Brandeis Movement:2

Because concentrated economic power also leads to concentrated political power, this investigation also goes to the heart of whether we as a people govern ourselves, or whether we let ourselves be governed by private monopolies. American democracy has always been at war against monopoly power. Throughout our history, we’ve recognized that concentrated markets and concentrated political control are incompatible with democratic ideals. When the American people confronted monopolies in the past, be it the railroads or the oil tycoons or AT&T and Microsoft, we took action to ensure no private corporation controls our economy or our democracy.

What was notable to me is that in the hearing Cicilline and the other Democrats never really focused on this point; the rest of Cicilline’s opening statement, and much of the Democratic questioning, focused on what they perceived as illegal activities that caused economic harm, without necessarily tying that to political harm. Again, that’s not to say they were ignoring the linkage, but it wasn’t a priority.

What made this stand out was that Republicans were focused almost entirely on the politics of monopoly, specifically their contention that large platforms, particularly Google and Facebook (and Twitter), were censoring conservative viewpoints and working to elect Democratic candidates to office, particularly the presidency, and that this was a problem because of their dominance. Leave aside, if you can, your opinion about the veracity of these complaints (and remember my note at the beginning about focusing on analysis): while the Republicans were certainly not endorsing the “New Brandeis School” — Ranking Member James Sensenbrenner in particular took care to highlight support for the consumer welfare standard, also known as the “Chicago School” — their concern with the size of tech companies had nothing to do with economic effects and everything to do with political effects. It was a striking contrast.

Table Test (In body, in block quote)

I am tempted to claim vindication, but the truth is that we very well may see a 12S next year; note the pattern for top-of-the-line models:

                 Industrial Design    Cameras
iPhone           New                  1
iPhone 3G        New                  1
iPhone 3GS       Old                  1
iPhone 4         New                  1
iPhone 4S        Old                  1
iPhone 5         New                  1
iPhone 5S        Old                  1
iPhone 6         New                  1
iPhone 6S        Old                  1
iPhone 7         Old                  2
iPhone X         New                  2
iPhone XS        Old                  2
iPhone 11 Pro    Old                  3
iPhone 12 Pro    New                  3

With the iPhone 7 Apple kept the same industrial design for a third generation, but added a second camera; the company did the same thing last year with the iPhone 11 Pro, but in this case added a third camera. If the pattern holds, next year will be a quiet one on the industrial design and camera front — an ‘S’ year, in other words.
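As a quick sanity check, the pattern in the table above can be encoded and tested directly; this is just a sketch over the transcribed data, not anything from Apple:

```python
# Encode the top-of-the-line history from the table above and verify the
# pattern: since the iPhone 4, every redesign ("New") has been followed
# by at least one same-design ("Old") year -- an 'S' year.
history = [
    ("iPhone", "New", 1),        ("iPhone 3G", "New", 1),
    ("iPhone 3GS", "Old", 1),    ("iPhone 4", "New", 1),
    ("iPhone 4S", "Old", 1),     ("iPhone 5", "New", 1),
    ("iPhone 5S", "Old", 1),     ("iPhone 6", "New", 1),
    ("iPhone 6S", "Old", 1),     ("iPhone 7", "Old", 2),
    ("iPhone X", "New", 2),      ("iPhone XS", "Old", 2),
    ("iPhone 11 Pro", "Old", 3), ("iPhone 12 Pro", "New", 3),
]

designs = [design for _, design, _ in history]
# Check each adjacent pair from the iPhone 4 onward (index 3).
for prev, nxt in zip(designs[3:], designs[4:]):
    if prev == "New":
        assert nxt == "Old", "a redesign was not followed by an 'S' year"

# The iPhone 12 Pro is a "New" year, so the pattern predicts a
# same-design model next year.
print("Pattern holds; the table predicts an 'S' year next")
```

The iPhone 3G is the one exception (two “New” years in a row), which is why the check starts at the iPhone 4.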

Of course Apple’s line has also expanded over the last four years to include a lower-cost model (priced, in fact, at what flagship iPhones used to cost):

             Industrial Design    Cameras
iPhone 8     Old                  1
iPhone XR    New                  1
iPhone 11    Old                  2
iPhone 12    New                  2

If this pattern holds, the iPhone 12S, like the 12S Pro, will keep the current design, but perhaps add a third camera.

iPhone Pricing

This is the bit that is more interesting; from Gruber:

Here’s a matrix with the new lineup, organized the way I think makes the most sense. In the bottom row, I compare all storage tiers to 128 GB as a baseline. Storage is priced very consistently this year: across all iPhone models, every 64 GB of additional storage costs $50.

                      64 GB    128 GB    256 GB    512 GB
iPhone 12 Pro Max              $1,100    $1,200    $1,400
iPhone 12 Pro                   1,000     1,100     1,300
iPhone 12              830        880       980
iPhone 12 Mini         730        780       880
iPhone 11              600        650       750
iPhone XR              500        550
iPhone SE              400        450       550
Δ from 128 GB          -50                 +100      +300

I think it’s useful to include last year’s prices for the then-new iPhone 11 lineup for comparison:

                      64 GB    128 GB    256 GB    512 GB
(2019) 11 Pro Max    $1,100              $1,250    $1,450
(2019) 11 Pro         1,000               1,150     1,350
(2019) iPhone 11        700       750       850

Zeroth, all prices I’ve listed are for unlocked phones. Apple’s promotion of the iPhone 12 and 12 Mini as starting (respectively) at $799/699 rather than $829/729 just because of some sort of marketing deals they cut with AT&T and Verizon for existing AT&T and Verizon customers is $30 worth of bullshit. (And now Apple has let T-Mobile join the club.)
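Gruber’s $50-per-64-GB observation can be checked against the 2020 lineup with a quick sketch; the prices and storage tiers are transcribed from the table above, and this is purely an arithmetic sanity check:

```python
# Verify that within the 2020 lineup, every step between storage tiers
# costs $50 per additional 64 GB. Prices are USD, unlocked.
lineup = {
    "iPhone 12 Pro Max": {128: 1100, 256: 1200, 512: 1400},
    "iPhone 12 Pro":     {128: 1000, 256: 1100, 512: 1300},
    "iPhone 12":         {64: 830, 128: 880, 256: 980},
    "iPhone 12 Mini":    {64: 730, 128: 780, 256: 880},
    "iPhone 11":         {64: 600, 128: 650, 256: 750},
    "iPhone XR":         {64: 500, 128: 550},
    "iPhone SE":         {64: 400, 128: 450, 256: 550},
}

for model, tiers in lineup.items():
    sizes = sorted(tiers)
    for lo, hi in zip(sizes, sizes[1:]):
        # Dollars per 64 GB step between adjacent tiers.
        per_64gb = (tiers[hi] - tiers[lo]) / ((hi - lo) / 64)
        assert per_64gb == 50, (model, lo, hi, per_64gb)

print("Every storage step in the 2020 lineup costs $50 per 64 GB")
```

The same check fails on the 2019 table, where the jump from the iPhone 11 Pro’s 64 GB tier to its 256 GB tier works out to $50 per 64 GB but the 256 GB to 512 GB jump costs $200 — which is Gruber’s point about this year’s pricing being unusually consistent.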

Footnote Test (normal and block quote within footnote)

To put it another way, I was too focused on demand — the key to Aggregation Theory — and didn’t think deeply enough about the evolution of supply. User-generated content didn’t have to be simply pictures of pets and political rants from people in one’s network; it could be the foundation of a new kind of network, where the payoff from Metcalfe’s Law is not the number of connections available to any one node, but rather the number of inputs into a customized feed.3

Moreover, a lot of commentary felt stuck in 2012: Facebook forever competing against MySpace (but Instagram being a bargain was totally predictable!), Amazon against no one (Shopify was mentioned once), Google versus ten blue links, and Apple, well, they are in good shape: despite having arguably the most egregious practices under traditional antitrust law, the iPhone maker was the only company in the Executive Summary to be praised for its impact on society.4 In the committee’s telling, these companies are bad actors that do bad things, case closed.

Twitter Test (in body, in block quote, with video)

On November 2, 2016, Microsoft announced Teams at a special event in New York City. Slack decided to mark the occasion:

Last night’s Academy Awards ceremony provided drama that was itself worthy of an award:

I opened The Super-Aggregators and the Russians with this anecdote:

In August 2011, just a day or two into my career at Microsoft, I sat in on a monthly review meeting for Hotmail (now known as Outlook.com); the product manager running the meeting was going through the various geographies and their relevant metrics — new users, churn, revenue, etc. — and it was, well, pretty boring. It was only later that I realized just how astounding “boring” was; a small group of people in a conference room going over numbers that represented hundreds of millions of people and dollars in revenue, and most of us cared far more about what was on the menu for lunch.

I’ve reflected on that meeting often over the years, particularly when it comes to Facebook and controversies like censoring too much, censoring too little, or “fake news”, and I was reminded of it again with this tweet:

Mark Warner, the senior Senator from Virginia, is referring to a Russian company, thought to be linked to the Kremlin’s propaganda efforts, having bought $100,000 worth of political ads on Facebook, some number of which directly mentioned 2016 presidential candidates Donald Trump and Hillary Clinton. Facebook has released limited details about the ads, likely due to its 2012 consent decree with the FTC, which bars the company from unilaterally making private information public, as well as the problematic precedent of releasing information without a clear order compelling said release. To that end, it was reported over the weekend that special counsel Robert Mueller received a much more comprehensive set of data from Facebook after obtaining a search warrant.

Even with all that context, though, I found Senator Warner’s tweet puzzling: how else would the propaganda group have paid? Facebook’s self-service ad portal lets you buy ads in 55 different currencies, including the Russian Ruble:5

That, though, brought me back to that Hotmail meeting: that I, and probably many more in the tech industry, find the idea of Facebook selling ads in rubles to strangers to be utterly unremarkable, even as thousands find it equally outrageous and damning, is a reminder of just how unprecedented and misunderstood aggregators like Facebook continue to be, and what a challenge it will be to regulate them.

IG Test

However, the first lesson from Dave Chappelle’s latest release on Instagram, Unforgiven, is that one best not compete with Chappelle when it comes to story-telling; the way in which the comedian weaves together multiple stories from his childhood on up to the present to make his argument about why he should be paid for the rights to stream Chappelle’s Show is truly extraordinary.

 

A post shared by Dave Chappelle (@davechappelle)

Overall Embed Test (Social)

 

A post shared by Dithering (@ditheringfm)


  1. No relation to the Massachusetts political family 

  2. That is Khan sitting behind Cicilline 

  3. This, by the way, is actually a much more accurate manifestation of Metcalfe’s Law, which is about potential contacts in a network, not actual contacts; a long-standing criticism of using Metcalfe’s Law to describe social networks is that the attractiveness of most social networks is a function of how many people you know that are on the network, not how many you might know. That is why, for example, LINE is a much more valuable chat app for me in Taiwan than is WeChat, even though WeChat has vastly more users; more people I know are on LINE. TikTok, though, surfaces content from anyone, which is to say its value hews much more closely to Metcalfe’s Law. 

  4. From the report:

    Apple’s mobile ecosystem has produced significant benefits to app developers and consumers. Launched in 2008, the App Store revolutionized software distribution on mobile devices, reducing barriers to entry for app developers and increasing the choices available to consumers.

    Again, there is not a single positive word about the other three companies in the executive summary. 

  5. For what it’s worth, Stratechery has never actually taken out a Facebook ad, or any ad for that matter 

Back to the Future of Twitter

Here’s a Facebook Post:

[Facebook embed]

Here’s another:

[Facebook embed]

Elon Musk wrote in a letter to Twitter’s board:

I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.

However, since making my investment I now realize the company will neither thrive nor serve this societal imperative in its current form. Twitter needs to be transformed as a private company.

As a result, I am offering to buy 100% of Twitter for $54.20 per share in cash, a 54% premium over the day before I began investing in Twitter and a 38% premium over the day before my investment was publicly announced. My offer is my best and final offer and if it is not accepted, I would need to reconsider my position as a shareholder.

Twitter has extraordinary potential. I will unlock it.

The vast majority of commentary about the Musk-Twitter saga has focused on the first three paragraphs: what does Musk mean by making Twitter more free speech oriented? Why doesn’t Musk believe he can work with the current board and management? Does Musk have the cash available to buy Twitter, and would the Twitter board accept his offer (no on the latter, but more on this below)?

The most interesting question of all, though, is the last paragraph: what potential does Musk see, and could he unlock it? For my part, not only do I agree the potential is vast, but I do think Musk could unlock it — and that itself has implications for the preceding paragraphs.

What is Twitter?

It’s popular on Twitter to point to a funny tweet or exchange and marvel that “This website is free”, or alternatively, “This app is free”. That raises the question, though: what is Twitter? Is it a website, an app, or something different?

The answer, of course, is “All of the above”, but it’s worth being clear about the different pieces that make up Twitter; “Jin” made this illustration on Medium:

Twitter's architecture

Twitter is actually a host of microservices, including a user service (for user account data), a graph service (for tracking your network), a posting service (for posting new tweets), a profile service (for user profiles), a timeline service (for presenting your timeline), etc.; the architecture that ties all of these together and operates at scale all around the world is suitably complex.

The key thing to note, though, is that only the green boxes in the diagram above are actually user-facing; a dramatically simplified version of Twitter, that condenses all of those internal services to a big blue “Twitter” box and focuses on the green interfaces might look something like this:

A dramatic over-simplification of Twitter's architecture

Again, this is a dramatic oversimplification, but the important takeaway is that the user-facing parts of Twitter are distinct from — and, frankly, not very pertinent to — the core Twitter service.
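To make the services picture concrete, here is a toy sketch in Python; every name and interface below is invented for illustration and bears no relation to Twitter’s actual implementation:

```python
# Toy sketch of a microservices architecture, Twitter-style.
# All names are hypothetical; real services communicate over RPC at scale.

class GraphService:
    """Tracks who follows whom."""
    def __init__(self):
        self.following = {}  # user -> set of accounts they follow

    def follow(self, user, target):
        self.following.setdefault(user, set()).add(target)

class PostingService:
    """Accepts and stores new tweets."""
    def __init__(self):
        self.posts = []  # (author, text), oldest first

    def post(self, author, text):
        self.posts.append((author, text))

class TimelineService:
    """Assembles a timeline by composing the other services."""
    def __init__(self, graph, posting):
        self.graph, self.posting = graph, posting

    def timeline(self, user):
        followed = self.graph.following.get(user, set())
        # Newest first, restricted to accounts the user follows
        return [p for p in reversed(self.posting.posts) if p[0] in followed]

# The user-facing clients (the green boxes) only ever talk to these interfaces:
graph, posting = GraphService(), PostingService()
timelines = TimelineService(graph, posting)
graph.follow("alice", "bob")
posting.post("bob", "hello world")
assert timelines.timeline("alice") == [("bob", "hello world")]
```

The internals behind those interfaces can change freely, or be replaced wholesale, which is exactly why the user-facing clients are a distinct and separable layer.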

Twitter’s API Drama

The general idea behind a services architecture is that various functionalities are exposed via application programming interfaces, more commonly known as APIs; a “client” will leverage these APIs to build an end user experience. There is no requirement that these clients be owned or managed by the centralized service, and for the first several years of Twitter’s existence, that is exactly how the service operated: Twitter the company ran the service and the Twitter.com website, while third parties built clients that let you access Twitter first on the desktop and then on smartphones.

Mobile was an absolute boon for Twitter: the public messaging service, modeled on SMS, was a natural fit for a smartphone screen, and the immediacy of Twitter updates was perfectly suited to a device that was always connected to the Internet. The explosion in mobile usage, though, led to a situation where Twitter didn’t actually control the user experience for a huge portion of its users. This actually led to a ton of innovation: Twitterrific, for example, the earliest third-party client, came up with the Twitter bird, the term “tweet” for a Twitter message, and early paradigms around replies and conversations. It also led to problems, the solutions to which fundamentally changed Twitter’s potential as a business.

The first problem that came from Twitter the service relying on third party clients is that the company, which descended into politics and backstabbing from the board level on down almost immediately, was drifting along without a business model; the obvious candidate was advertising, but the easiest way to implement advertising was to control the user interface (and thus insert ads — ads, including promoted tweets, are another distinct service from Twitter itself). Thus Twitter bought Tweetie, widely regarded as the best Twitter mobile client (I was a user), in April 2010, and rebranded and relaunched it as the official Twitter for iPhone app a month later.

The second problem is that starting in 2010, a Silicon Valley entrepreneur named Bill Gross (who invented search advertising) started trying to build his own Twitter monetization product called TweetUp; when Twitter acquired Tweetie and made clear they were going to monetize it via advertising, Gross started buying up multiple other third-party Twitter clients with the idea of creating a competing network of clients that would monetize independently. Twitter responded in the short term by kicking several of Gross’s clients off of the platform for dubious terms-of-service violations, and in the long term by capping the third-party API for everyone: clients could keep the users they had but could only add 100,000 more users — ever.
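That cap amounted to a hard ceiling on client growth; a toy model of the rule as described above (the function and numbers are illustrative, not Twitter’s actual enforcement code):

```python
# Toy model of the third-party token cap: clients kept existing users
# but could only ever add 100,000 more (per the text; logic invented).

CAP_HEADROOM = 100_000

def can_add_user(users_at_cutoff, current_users):
    """True while the client is still under its frozen ceiling."""
    return current_users < users_at_cutoff + CAP_HEADROOM

assert can_add_user(users_at_cutoff=500_000, current_users=599_999)
assert not can_add_user(users_at_cutoff=500_000, current_users=600_000)
```

Because the ceiling never moves, any successful client inevitably hits it, which is the point: third-party clients could survive, but never grow into competitors.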

The net result of these two decisions was that Twitter, its architecture notwithstanding, would be a unified entity where Twitter the company controlled every aspect of the experience, and that that experience would be monetized via advertising.

Twitter’s Reality

Twitter has, over 19 different funding rounds (pre-IPO, IPO, and post-IPO), raised $4.4 billion; meanwhile the company has lost a cumulative $861 million in its lifetime as a public company (i.e. excluding pre-IPO losses). Over that period the company has held 33 earnings calls, and reported a profit in only 14 of them.

Given this financial performance it is kind of amazing that the company was valued at $30 billion the day before Musk’s investment was revealed; such is the value of Twitter’s social graph and its cultural impact: despite there being no evidence that Twitter can even be sustainably profitable, much less return billions of dollars to shareholders, hope springs eternal that the company is on the verge of unlocking its potential. At the same time, these three factors — Twitter’s financials, its social graph, and its cultural impact — get at why Musk’s offer to take Twitter private is so intriguing.

Start with the financials: Twitter’s business stinks. Yes, you can make an argument that this is due to mismanagement and poor execution — who enjoys seeing a stale promoted tweet about something that happened weeks ago?1 — but I have also made the argument that Twitter just isn’t well suited to direct response advertising in particular:

Think about the contrast between Twitter and Instagram; both are unique amongst social networks in that they follow a broadcast model: tweets on Twitter and photos on Instagram are public by default, and anyone can follow anyone. The default medium, though, is fundamentally different: Twitter has photos and videos, but the heart of the service is text (and links). Instagram, on the other hand, is nothing but photos and video (and link in bio).

The implications of this are vast. Sure, you may follow your friends on both, but on Twitter you will also follow news breakers, analysts, insightful anons, joke tellers, and shit posters. The goal is to mainline information, and Twitter’s speed and information density are unparalleled by anything in the world. On Instagram, though, you might follow brands and influencers, and your chief interaction with your friends is stories about their Turkey Day exploits. It’s about aspiration, not information, and the former makes a lot more sense for effective advertising.

It’s more than just the medium though; it’s about the user’s mental state as well. Instagram is leisurely and an escape, something you do when you’re procrastinating; Twitter is intense and combative, and far more likely to be tied to something happening in the physical world, whether that be watching sports or politics or doing work:

Instagram is a lean-back experience; on Twitter you lean forward

This matters for advertising, particularly advertising that depends on a direct response: when you are leaning back and relaxed why not click through to that Shopify site to buy that knick-knack you didn’t even know you needed, or try out that mobile game? When you are leaning forward, though, you don’t have either the time or the inclination.

That article made the argument for Twitter to move towards more of a subscription offering; that may be the wrong idea, but the bigger takeaway is that what Twitter has been trying to build for years just isn’t working, and the challenges aren’t just bad management. To put it another way, when it comes to Twitter’s business, there really isn’t much to lose, but Twitter could only risk losing what there is if it were a private company, free from the glare of public markets, which, for very justifiable reasons, give Twitter’s management a very short leash.

What is valuable is that social graph: while Facebook understands who you know, Twitter, more than any other company, understands what its users are interested in. That is, in theory, much more valuable; said value is diminished by the fact that Twitter just doesn’t have that many users, relatively speaking; the users it has, though, are extremely influential, particularly given the importance of Twitter in media, tech, and finance. For this group Twitter is completely irreplaceable: there is no other medium with a similar density of information or interest-driven network effects.

This, by extension, drives Twitter’s cultural impact: no, most people don’t get their news off of Twitter; the places they get their news, though, are driven by Twitter. Moreover, Twitter not only sets the agenda for media organizations, it also harmonizes coverage, thanks to a dynamic where writers, unmoored from geographic constraints or underlying business realities of their publications, end up writing for other writers on Twitter, oftentimes radicalizing each other in plain sight of their readership. Twitter itself is part of this harmonization, going so far as to censor politically impactful stories in the weeks before an election; it’s no surprise that when Musk says he wants to impose a stronger free speech ethos that the reaction is fierce and littered with motte-and-baileys (“actually we just care about limiting abuse and spam!”).

Back to the Future

This is all build-up to my proposal for what Musk — or any other bidder for Twitter, for that matter — ought to do with a newly private Twitter.

  • First, Twitter’s current fully integrated model is a financial failure.
  • Second, Twitter’s social graph is extremely valuable.
  • Third, Twitter’s cultural impact is very large, and very controversial.

Given this, Musk (who I will use as a stand-in for any future CEO of Twitter) should start by splitting Twitter into two companies.

  • One company would be the core Twitter service, including the social graph.
  • The other company would be all of the Twitter apps and the advertising business.

TwitterAppCo would contract with TwitterServiceCo to continue to receive access to the Twitter service and social graph; currently Twitter earns around $13/user/year in advertising, so you could imagine a price of, say, $7.50/user/year, or about $0.63/user/month. TwitterAppCo would be free to pursue the same business model and moderation policies that Twitter is pursuing today (I can imagine Musk sticking with TwitterServiceCo, and the employees upset about said control being a part of TwitterAppCo).
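As a quick sanity check on those (entirely hypothetical) figures:

```python
# Back-of-the-envelope check on the proposed licensing fee (figures from the text)
ad_revenue_per_user_year = 13.00  # roughly what Twitter earns per user today
license_per_user_year = 7.50      # hypothetical TwitterServiceCo fee

per_month = license_per_user_year / 12                    # $0.625/user/month
share = license_per_user_year / ad_revenue_per_user_year  # ~58% of ad revenue

assert per_month == 0.625
assert 0.57 < share < 0.58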

However, that relationship would not be exclusive: TwitterServiceCo would open up its API to any other company that might be interested in building their own client experience; each company would:

  • Pay for the right to get access to the Twitter service and social graph.
  • Monetize in whatever way they see fit (i.e. they could pursue a subscription model).
  • Implement their own moderation policy.

This last point would cut a whole host of Gordian Knots:

  • Market competition would settle the question about whether or not stringent moderation is an important factor in success; some client experiences would be heavily moderated, and some wouldn’t be moderated at all.
  • The fact that everyone gets access to the same Twitter service and social graph solves the cold start problem for alternative networks; the reason Twitter alternatives have always failed is that Twitter’s network effect is so strong.
  • TwitterServiceCo could wash its hands of difficult moderation decisions and tricky cultural issues; the U.S. might have a whole host of Twitter client options, while Europe might be more stringent, and India more stringent still. Heck, this model could even accommodate a highly-censored China client (however unlikely that may be).

I strongly suspect that a dramatic increase in competition amongst Twitter client services would benefit TwitterServiceCo, growing its market in a way that hasn’t happened in years. What is most exciting, though, is the potential development of new kinds of services that don’t look like Twitter at all.

Step back a moment and think about the fundamental infrastructure of the Internet: we have a media protocol in HTTP/the web, and a communications protocol in SMTP/email; what is missing is a notifications protocol. And yet, if there is one lesson from mobile, it is just how important notifications are; a secondary lesson is how important identity is. If you know how to reach someone, and have the means to do so, you are set, whether you be a critical service, an advertiser, or anything in-between. Twitter has the potential to fill that role: the ability to route short messages to a knowable endpoint accessible via a centralized directory has far more utility than political signaling and infighting. And yet, thanks to Twitter’s early decisions and lack of leadership, the latter is all the service is good for; no wonder user growth and financial results have stagnated!
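A minimal sketch of what such a directory-plus-routing layer might look like; everything here is hypothetical:

```python
# Sketch of a notifications-and-identity protocol: a central directory maps
# a stable handle to an endpoint, and anyone can route a short message to it.

directory = {}  # handle -> callable endpoint (stand-in for a push channel)

def register(handle, endpoint):
    """Claim an identity in the directory."""
    directory[handle] = endpoint

def notify(handle, message):
    """Route a short message to whoever owns the handle, if anyone does."""
    endpoint = directory.get(handle)
    if endpoint is None:
        return False
    endpoint(message)
    return True

inbox = []
register("@stratechery", inbox.append)
assert notify("@stratechery", "Your package has shipped")
assert inbox == ["Your package has shipped"]
assert not notify("@unknown", "hello")  # unregistered handles are unreachable
```

The identity (the handle) and the routing (the endpoint) are the whole protocol; everything a client does on top, from timelines to moderation, is a presentation choice.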

A truly open TwitterServiceCo has the potential to be a new protocol for the Internet — the notifications and identity protocol; unlike every other protocol, though, this one would be owned by a private company. That would be insanely valuable, but it is a value that will never be realized as long as Twitter is a public company led by a weak CEO and ineffective board driving an integrated business predicated on a business model that doesn’t work.

Twitter’s Reluctance

The surest evidence of the Twitter board’s lack of imagination and ineffectiveness is that their response to Musk’s proposal is to further dilute existing shareholders as a means of denying Musk control. This is, in my estimation, clearly against the interest of Twitter shareholders (who, for what it’s worth, don’t in any meaningful way include Twitter’s board members); given Twitter’s performance over the last decade, though, this isn’t really a surprise.

Indeed, when you consider the fact that Twitter’s board members not only don’t own much of Twitter, but famously, barely use Twitter at all, it is easy to wonder if the actual goal is not financial results but rather harnessing that immense cultural impact. This suspicion only intensifies when you consider that the bidder in this case is one of the most successful entrepreneurs of all time: if there was one person in the world who could realize Twitter’s latent value, wouldn’t Musk be at the top of the list? And yet he is anathema, not for his business acumen, but despite it.

This, more than anything, makes me even more sure that my proposal for competition amongst Twitter client companies is essential: not only do I think that more competition would lead to dramatically more innovation, but it would also solve the problem of who decides what we see by undoing the centralization of that power and subjecting decisions to market forces. That this is unacceptable to some says more about their ultimate motivations than anything else.

I wrote a follow-up to this Article in this Daily Update.


  1. A friend sent me this promoted tweet on April 15, almost a full month since Kentucky had been eliminated from March Madness, and well over a week after the entire tournament had ended 

DALL-E, the Metaverse, and Zero Marginal Content

Last week OpenAI released DALL-E 2, which produces (or edits) images based on textual prompts; this Twitter thread from @BecomingCritter has a whole host of example output, including Teddy bears working on new AI research on the moon in the 1980s:

Teddy bears working on new AI research on the moon in the 1980s

A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window:

A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window

And, in the most on-the-nose example possible, A human basking in the sun of AGI utopia:

A human basking in the sun of AGI utopia

OpenAI has a video describing DALL-E on its website:

While the video does mention a couple of DALL-E’s shortcomings, it is quite upbeat about the possibilities; some excerpts:

Dall-E 2 is a new AI system from OpenAI that can take simple text descriptions like “A koala dunking a basketball” and turn them into photorealistic images that have never existed before. DALL-E 2 can also realistically edit and re-touch photos…

DALL-E was created by training a neural network on images and their text descriptions. Through deep learning it not only understands individual objects like koala bears and motorcycles, but learns from relationships between objects, and when you ask DALL-E for an image of a “koala bear riding a motorcycle”, it knows how to create that or anything else with a relationship to another object or action.

The DALL-E research has three main outcomes: first, it can help people express themselves visually in ways they may not have been able to before. Second, an AI-generated image can tell us a lot about whether the system understands us, or is just repeating what it’s been taught. Third, DALL-E helps humans understand how AI systems see and understand our world. This is a critical part of developing AI that’s useful and safe…

What’s exciting about the approach used to train DALL-E is that it can take what it learned from a variety of other labeled images and then apply it to a new image. Given a picture of a monkey, DALL-E can infer what it would look like doing something it has never done before, like paying its taxes while wearing a funny hat. DALL-E is an example of how imaginative humans and clever systems can work together to make new things, amplifying our creative potential.

That last line may raise some eyebrows: at first glance DALL-E looks poised to compete with artists and illustrators; there is another point of view, though, where DALL-E points towards a major missing piece in a metaverse future.

Games and Medium Evolution

Games have long been on the forefront of technological development, and that is certainly the case in terms of medium. The first computer games were little more than text:

A screenshot from Oregon Trail

Images followed, usually of the bitmap variety; I remember playing a lot of “Where in the World is Carmen Sandiego” at the library:

A screenshot from “Where in the World is Carmen Sandiego”

Soon games included motion as you navigated a sprite through a 2D world; 3D followed, and most of the last 25 years has been about making 3D games ever more realistic. Nearly all of those games, though, are 3D images on 2D screens; virtual reality offers the illusion of being inside the game itself.

Still, this evolution has had challenges: creating ever more realistic 3D games means creating ever more realistic image textures to decorate all of those polygons; this problem is only magnified in virtual reality. This is one of the reasons even open-world games are ultimately limited in scope, and gameplay is largely deterministic: it is through knowing where you are going, and all of your options to get there, that developers can create all of the assets necessary to deliver an immersive experience.

That’s not to say that games can’t have random elements, above and beyond roguelike games that are procedurally generated: the most obvious way to deliver an element of unpredictability is for humans to play each other, albeit in well-defined and controlled environments.

Social and User-Generated Content

Social networking has undergone a medium evolution similar to that of games, with a two-decade delay. The earliest forms of social networking on the web were text-based bulletin boards and USENET groups; then came widespread e-mail, AOL chatrooms, and forums. Facebook arrived on the scene in the mid-2000s; one of the things that helped it explode in popularity was the addition of images. Instagram was an image-only social network that soon added video, which is all that TikTok is. And, over the last couple of years in particular, video conferencing through apps like Zoom or FaceTime has delivered 3D images on 2D screens.

Still, medium has always mattered less for social networking, just because the social part of it was so inherently interesting. Humans like communicating with other humans, even if that requires dialing up a random BBS to download messages, composing a reply, and dialing back in to send it. Games may be mostly deterministic, but humans are full of surprises.

Moreover, this means that social networking is much cheaper: instead of the platform having to generate all of the content, users generate all of the content themselves. This makes it harder to get a new platform off of the ground, because you need users to attract users, but it also makes said platform far stickier than any game (or, to put it another way, the stickiest games have a network effect of their own).

Feeds and Algorithms

The first iterations of social networking had no particular algorithmic component other than time: newer posts were at the top (or bottom). That changed with Facebook’s introduction of the News Feed in 2006. Now instead of visiting all of your friends’ pages you could simply browse the feed, which from the very beginning made decisions about what content to include, and in what order.

Over time the News Feed evolved from a relatively straightforward algorithm to one driven by machine learning, with results so inscrutable that it took Facebook six months to fix a recent rankings bug. The impact has been massive: not just Facebook but also Instagram saw huge increases in engagement and increased growth the better their algorithmically-driven feeds became; it was also great for monetization, as the same sort of signals that decided what content you saw also influenced what ads you were presented.

However, the reason why this discussion of algorithmically-driven feeds is in a different section than social networking is because the ultimate example of their power isn’t a social network at all: it’s TikTok. TikTok, of course, is all user-generated content, but the crucial distinction from Facebook is that you aren’t limited to content from your network: TikTok pulls in the videos it thinks you specifically are most interested in from across its entire network. I explained why this was a blindspot for Facebook in 2020:

What is interesting to point out is why it was inevitable that Facebook missed this: first, Facebook views itself first-and-foremost as a social network, so it is disinclined to see that as a liability. Second, that view was reinforced by the way in which Facebook took on Snapchat. The point of The Audacity of Copying Well is that Facebook leveraged Instagram’s social network to halt Snapchat’s growth, which only reinforced that the network was Facebook’s greatest asset, making the TikTok blindspot even larger.

TikTok combines the zero cost nature of user-generated content with a purely algorithmic feed that is divorced from your network; there is a network effect, in that TikTok needs lots of content to choose from, but it doesn’t need your specific network.
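The structural difference is small in code but large in consequence; the data and scoring function below are invented for illustration:

```python
# A network-bound feed ranks only posts from accounts you follow;
# a TikTok-style feed ranks the entire corpus. (Illustrative data only.)

def network_feed(posts, following, score):
    candidates = [p for p in posts if p["author"] in following]
    return sorted(candidates, key=score, reverse=True)

def algorithmic_feed(posts, score):
    return sorted(posts, key=score, reverse=True)  # anyone's post is eligible

posts = [
    {"author": "friend", "predicted_interest": 0.3},
    {"author": "stranger", "predicted_interest": 0.9},
]
score = lambda p: p["predicted_interest"]

# Your network caps the quality of a network-bound feed...
assert network_feed(posts, {"friend"}, score)[0]["author"] == "friend"
# ...while the global feed surfaces the best content from anyone
assert algorithmic_feed(posts, score)[0]["author"] == "stranger"
```

The only change is which posts are candidates, but that change is what frees the feed from the quality ceiling imposed by any one user’s graph.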

The Machine Learning Metaverse

I get that metaverses were so 2021, but it strikes me that the examples from science fiction, including Snow Crash and Ready Player One, were very game-like in their implementation. Their virtual worlds were created by visionary corporations or, in the case of the latter, a visionary developer who also included a deterministic game for ultimate ownership of the virtual world. Yes, third parties could and did build experiences with strong social components, most famously Da5id’s Black Sun club in Snow Crash, but the core mechanic — and the core economics — were closer to a multi-player game than anything else.

That, though, is exceptionally challenging in the real world: remember, creating games, particularly their art, is expensive, and the expense increases the more immersive the experience is. Social media, on the other hand, is cheap because it uses user-generated content, but that content is generally stuck on more basic mediums — text, pictures, and only recently video. Of course that content doesn’t necessarily need to be limited to your network — an algorithm can deliver anything on the network to any user.

What is fascinating about DALL-E is that it points to a future where these three trends can be combined. DALL-E, at the end of the day, is ultimately a product of human-generated content, just like its GPT-3 cousin. The latter, of course, is about text, while DALL-E is about images. Notice, though, that progression from text to images; it follows that machine learning-generated video is next. This will likely take several years, of course; video is a much more difficult problem, and responsive 3D environments more difficult yet, but this is a path the industry has trod before:

  • Game developers pushed the limits on text, then images, then video, then 3D
  • Social media drives content creation costs to zero first on text, then images, then video
  • Machine learning models can now create text and images for zero marginal cost

In the very long run this points to a metaverse vision that is much less deterministic than your typical video game, yet much richer than what is generated on social media. Imagine environments that are not drawn by artists but rather created by AI: this not only increases the possibilities, but crucially, decreases the costs.

Zero Marginal Content

There is another way to think about DALL-E and GPT and similar machine learning models, and it goes back to my longstanding contention that the Internet is a transformational technology matched only by the printing press. What made the latter revolutionary was that it drastically reduced the marginal cost of consumption; from The Internet and the Third Estate:

Meanwhile, the economics of printing books was fundamentally different from the economics of copying by hand. The latter was purely an operational expense: output was strictly determined by the input of labor. The former, though, was mostly a capital expense: first, to construct the printing press, and second, to set the type for a book. The best way to pay for these significant up-front expenses was to produce as many copies of a particular book that could be sold.

How, then, to maximize the number of copies that could be sold? The answer was to print using the most widely used dialect of a particular language, which in turn incentivized people to adopt that dialect, standardizing languages across Europe. That, by extension, deepened the affinities between city-states with shared languages, particularly over decades as a shared culture developed around books and later newspapers. This consolidation occurred at varying rates — England and France several hundred years before Germany and Italy — but in nearly every case the First Estate became not the clergy of the Catholic Church but a national monarch, even as the monarch gave up power to a new kind of meritocratic nobility epitomized by Burke.

The Internet has had two effects: the first is to bring the marginal cost of consumption down to zero. Even with the printing press you still needed to print a physical object and distribute it, and that costs money; meanwhile it costs effectively nothing to send this post to anyone in the world who is interested. This has completely upended the publishing industry and destroyed the power of gatekeepers.
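The economics the quote describes can be sketched with a toy cost model (all figures invented):

```python
# Hand-copying is pure marginal cost; printing is mostly up-front capital.
# Figures are invented to illustrate the crossover.

def hand_copy_cost(copies, labor_per_copy=50.0):
    return copies * labor_per_copy

def print_cost(copies, press_and_typesetting=5000.0, paper_per_copy=1.0):
    return press_and_typesetting + copies * paper_per_copy

# At small volumes the scribe wins; at scale the press dominates,
# hence the incentive to sell as many copies as possible.
assert hand_copy_cost(10) < print_cost(10)
assert print_cost(1000) < hand_copy_cost(1000)
```

The larger the up-front cost relative to the per-copy cost, the stronger the pull toward whatever maximizes the addressable audience: in the book’s case, the most widely used dialect.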

The other impact, though, has been on the production side; I wrote about TikTok in Mistakes and Memes:

That phrase, “Facebook is compelling for the content it surfaces, regardless of who surfaces it”, is oh-so-close to describing TikTok; the error is that the latter is compelling for the content it surfaces, regardless of who creates it…To put it another way, I was too focused on demand — the key to Aggregation Theory — and didn’t think deeply enough about the evolution of supply. User-generated content didn’t have to be simply pictures of pets and political rants from people in one’s network; it could be the foundation of a new kind of network, where the payoff from Metcalfe’s Law is not the number of connections available to any one node, but rather the number of inputs into a customized feed.

Machine learning generated content is just the next step beyond TikTok: instead of pulling content from anywhere on the network, GPT and DALL-E and other similar models generate new content from content, at zero marginal cost. This is how the economics of the metaverse will ultimately make sense: virtual worlds need virtual content created at virtually zero cost, fully customizable to the individual.

Of course there are many other issues raised by DALL-E, many of them philosophical in nature; there has already been a lot of discussion of that over the last week, and there should be a lot more. Still, the economic implications matter as well, and after last week’s announcement the future of the Internet is closer, and weirder, than ever.

App Store Arguments

Arguments in Epic Games, Inc. v. Apple Inc. wrapped up yesterday; Judge Yvonne Gonzalez Rogers noted she had thousands of documents to pore over, but hoped to issue a decision within the next few months. I think there is a strong chance that Apple prevails, for reasons I’ll explain below, but that doesn’t mean the trial has been a waste of time: it has cast into stark relief the different arguments that pertain to the App Store, and not all of them have to do with the law.

The Legal Argument

Apple came into the trial with a strong hand rooted in Supreme Court precedent.1

First, while it is possible to define the App Store for iPhones as a distinct aftermarket (see Kodak v. Image Technical Services), appellate courts have significantly narrowed that decision: a company whose product gives rise to an aftermarket is only barred from changing the rules after the fact to foreclose competition in that aftermarket; if the rules foreclosing competition are consistent from the start, there is no harm, because customers know what they are getting into. In the case of the iPhone, this means that Apple can control the market for iPhone apps, because customers already know that Apple controls the App Store; if they don’t like it they can buy a different phone. This is why Apple spent time in the trial establishing that its control of the App Store was in fact a selling point of the iPhone, and a reason why customers chose to enter iOS’s more restrictive ecosystem.

Second, Apple also made the case that there is a competitive market for developers. This was an especially effective line of reasoning with regard to Fortnite, which makes more money from other platforms than it does from iOS; moreover, those platforms have rules that are similar to iOS’s, including exclusive payment platforms, no-steering provisions, and 30% commissions.

The most important case for Apple’s defense, though, is 2004’s Verizon v. Trinko, which established and/or reiterated several important precedents that support Apple’s position, even if Apple were held to be a monopolist.

First, a monopolist has a right to monetize its intellectual property; the opinion states:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices — at least for a short period — is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

This is why CEO Tim Cook in his testimony kept insisting that Apple had a right to monetize its intellectual property, and why the company has emphasized the cost not simply of running the App Store but also of developing APIs and developer tooling.2

Second, a monopolist has no duty to deal with any other company; the opinion states:

Firms may acquire monopoly power by establishing an infrastructure that renders them uniquely suited to serve their customers. Compelling such firms to share the source of their advantage is in some tension with the underlying purpose of antitrust law, since it may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities. Enforced sharing also requires antitrust courts to act as central planners, identifying the proper price, quantity, and other terms of dealing — a role for which they are ill suited. Moreover, compelling negotiation between competitors may facilitate the supreme evil of antitrust: collusion. Thus, as a general matter, the Sherman Act “does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.”

What this means for this case is that Apple has no duty to provide access to those APIs and development tools to companies that do not abide by its terms. There is a very limited exception to this provision (see Aspen Skiing v. Aspen Highlands Skiing), but like Kodak, it only applies to situations where the monopolist changes the rules; Epic, on the other hand, has always operated under the same set of in-app purchase rules.3

Third, the Court stressed that judges should carefully weigh the costs of enforcement against the benefits of an injunction:

Against the slight benefits of antitrust intervention here, we must weigh a realistic assessment of its costs. Under the best of circumstances, applying the requirements of §2 “can be difficult” because “the means of illicit exclusion, like the means of legitimate competition, are myriad”…Mistaken inferences and the resulting false condemnations “are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” The cost of false positives counsels against an undue expansion of §2 liability. One false-positive risk is that an incumbent LEC’s failure to provide a service with sufficient alacrity might have nothing to do with exclusion. Allegations of violations of §251(c)(3) duties are difficult for antitrust courts to evaluate, not only because they are highly technical, but also because they are likely to be extremely numerous, given the incessant, complex, and constantly changing interaction of competitive and incumbent LECs implementing the sharing and interconnection obligations. Amici States have filed a brief asserting that competitive LECs are threatened with “death by a thousand cuts,” the identification of which would surely be a daunting task for a generalist antitrust court. Judicial oversight under the Sherman Act would seem destined to distort investment and lead to a new layer of interminable litigation, atop the variety of litigation routes already available to and actively pursued by competitive LECs.

All of that mumbo-jumbo in the second part of that paragraph is referring to the specifics of the 1996 Communications Act and the complexities of Verizon’s obligations as a local exchange carrier (LEC). In another sense, though, the mumbo-jumbo is the point: the Court’s argument is that judges are ill-equipped to understand technical specifics and weigh important trade-offs, particularly absent any egregious anti-competitive behavior (as defined by the Court).

The Pragmatic Argument

This matters because the reality is that the App Store is complicated. One of the arguments Epic raised in the case is that there are scams in the App Store, and that effectiveness-at-rooting-out-scams should be a plane of competition; Apple, for its part, pushed out a post arguing that the App Store in fact stopped many more scams than it allowed, and that the fact there is only one way to get apps on your phone is a feature, not a bug.

As John Gruber noted, the sudden appearance of this post suggested that Apple felt it was losing the PR war:

There’s nothing curious about the timing of this post — it’s in response to some embarrassing stories about fraud apps in the App Store, revealed through discovery in the Epic v. Apple trial, and through the news in recent weeks. The fact that Apple would post this now is pretty telling — to me at least — about how they see the trial going. I think Apple clearly sees itself on solid ground legally, and their biggest concern is this relatively minor public relations issue around scam apps continuing to slip through the App Store reviewing process.

Another reason to think this is true is that I actually think Apple underplayed its case: there is a whole category of transactions that are not explicit scams of the type documented in that post, but which are clearly designed to separate customers from their money as efficiently as possible. Now Apple’s incentives here are not pure: the company does take 30% of these purchases. But Epic is compromised as well, particularly once you realize that most of these problematic apps are games.

I’m not, to be clear, arguing that Fortnite is a problem; I’m sure Epic would be the first to say that they have a reputation to uphold — as, of course, would Apple. Not all developers, though, would be so scrupulous, and a world where any app could collect credit card information directly is one where it seems more likely than not that consumers will find themselves losing much more money than they anticipated, with no obvious recourse to get it back or make it stop.

This is where the malware discussion in the trial was relevant; Apple argued that iOS had less of it, while Epic attributed that to iOS’s sandbox architecture, which keeps apps isolated from each other. The other malware takeaway, though, is that malware massively suppressed the market for third-party applications on Windows. Consumers didn’t suddenly get smart about apps, thanks to the pressure of competition; they simply stopped downloading and installing apps at all. One of the great triumphs of the App Store is the fact that it made consumers feel safe and secure about installing apps, dramatically expanding the market for developers — including Epic.

At the same time, there is a practical cost to Apple’s approach: there are entire classes of apps that for all practical purposes can’t exist on the iPhone, particularly those that entail paying licensing fees to third parties on a percentage-of-total-sales basis. Apple, in one of the more damning emails to emerge over the last year, admitted as much:

Steve Jobs stating that iBooks will be the only viable bookstore on the iPhone

This does, I would note, put Apple’s antitrust conviction in the ebooks case in a considerably more dubious light: Apple was trying to shift the industry away from a wholesale model to an agency model, which is the exact sort of model that doesn’t work with the App Store. That the company was offering its own alternative — iBooks — makes it worse, just as the introduction of Apple Music made the application of App Store rules to Spotify particularly problematic.

What is also worth acknowledging, though, are the kinds of businesses that never get off the ground. During the pandemic, for example, Apple originally sought to take a cut of person-to-person businesses like counseling and training; the company did change the rules to allow one-to-one interactions to be paid for via an alternative payment method, but still demands a cut of one-to-many offerings like classes. There is also the impact on the burgeoning creator economy; I tweeted over the weekend:

Ben's tweet about how little a creator makes with Twitter's new ticket feature

In this world you don’t need 1,000 true fans to make a living; you need 1,786 — 536 fans to pay Apple, 250 fans to pay Twitter, and only then the 1,000 that make it possible to create something new. It is inevitable that some number of businesses never get started because of this deadweight loss.
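The arithmetic behind those figures is easy to sketch; this assumes a 30% Apple commission taken off the top and a 20% Twitter commission taken from the remainder, which are the rates implied by the numbers above:

```python
# Sketch of the stacked-commission math; the 30% and 20% rates are the
# ones implied above, not official figures from either company.
import math

def fans_needed(target_fans, cuts):
    """Paying fans required for the creator to net `target_fans` fans'
    worth of revenue after each platform takes its cut in sequence."""
    share = 1.0
    for cut in cuts:
        share *= 1.0 - cut
    return math.ceil(target_fans / share)

total = fans_needed(1000, [0.30, 0.20])     # Apple's 30%, then Twitter's 20%
apple = round(total * 0.30)                 # fans effectively paying Apple
twitter = round(total * (1 - 0.30) * 0.20)  # fans effectively paying Twitter
```

Here `total` works out to 1,786, of which roughly 536 fans’ payments go to Apple and 250 to Twitter, leaving the creator with the equivalent of 1,000 true fans.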

The Duopoly Argument

That tweet, I will admit, wasn’t entirely fair. While I noted that Twitter earned its cut by creating a market where one did not previously exist, the same argument can absolutely be made about Apple. As I noted above, Apple not only created the iPhone, it also created a willingness to download and experiment and pay that vastly expanded the market for developers. If Twitter deserves a cut, why not Apple? But then again, how far does it go?

Ben asking why AT&T and Verizon shouldn't get 30%

Where does the “made something previously impossible possible” chain of causation and entitlement end? With the phone? With carriers? With TSMC? Obviously Apple believes it ends with the iPhone, but it’s worth exploring why that isn’t simply a wish but a reality; after all, the carriers actually did take a share of all transactions on their networks 14 years ago. The company that changed the status quo was Apple; I explained How Apple Creates Leverage in 2014:

While Apple in 2006 (in the runup to the iPhone) was in a much stronger position than 2003, they were still much smaller ($60.6 billion market cap) than AT&T ($102.3 billion) or Verizon ($93.8 billion) on an individual basis, much less the carrier industry as a whole. More importantly, carriers weren’t facing a collective existential threat like piracy, which significantly increased their BATNA [Best Alternative to a Negotiated Agreement] relative to the music labels.

The music labels, though, benefitted from a relatively low elasticity of substitution: if I wanted one particular band that wasn’t on the iTunes Music Store, I wouldn’t be easily satisfied by the fact another band happened to be available. The carriers, on the other hand, largely offered the same service: voice, SMS, and data, all of which was interoperable. This increased elasticity of substitution gave Apple an opportunity to pursue a divide-and-conquer strategy: they just needed one carrier.

Apple reportedly started iPhone negotiations with Verizon, but it turned out that Verizon was already kicking AT&T’s (then Cingular’s) butt through aggressive investment and technology choices, resulting in increasing subscriber numbers largely at AT&T’s expense. Verizon saw no need to change their strategy, which included strong branding and total control over the experience on phones on their network. AT&T, meanwhile, was on the opposite side of the coin: they were losing, and that in turn had a significant effect on their BATNA – they were a lot more willing to compromise when it came to branding and the user experience, and so the iPhone launched on AT&T to Apple’s specifications.

That is when Apple’s user experience advantage and corresponding customer loyalty took over: for the first time ever customers were willing to endure the hassle and expense of changing phone carriers just so they could have access to a specific device. Over the next several years Verizon began to bleed customers to AT&T even though their service levels were not only better, but actually widening the gap thanks to the iPhone’s impact on AT&T. Four years after launch the iPhone did finally arrive on Verizon with the same lack of carrier branding and control over the user experience; in other words, Verizon eventually accepted the exact same deal they rejected in 2006 because the loyalty of Apple customers gave them no choice.

Apple followed the same playbook in country after country: insistence on total control (and over time, significant marketing investments and a guaranteed number of units sold) with a willingness to launch on second or third-place carriers if necessary. Probably the starkest example of the success of this strategy was in Japan. Softbank was in a distant third place in the Japanese market when they began selling the iPhone in 2008; finally, after four years, second-place KDDI added the iPhone, but only after Softbank had increased its subscriber base from 19 million to 30 million. NTT DoCoMo, long the dominant carrier and a pioneer in carrier-branded services, finally caved last year after seeing its share of the market slide from 52% in 2008 to 46%.

Forgive the long excerpt, but the specifics of what happened between Apple and the carriers are essential to understanding what makes the App Store question so challenging. What is critical to note is that Apple was able to break the carriers’ hold on what happened on their networks because competition (1) existed at the carrier level and (2) occurred in multiple markets. That meant that Apple had a place to leverage its ability to make something better.

The App Store market, on the other hand, is a worldwide duopoly, which dramatically reduces the leverage available to any one app. Suppose Twitter wanted to push back on Apple’s policies and go exclusively to Android; that would entail giving up around 20% of the market worldwide, and about 50% of the market in the U.S. It’s simply not viable to pursue a divide-and-conquer approach when there are only two alternatives in a worldwide market. That is why Apple and Google mostly copy each other’s policies, comfortable in the knowledge that no one app really matters.

I suppose Twitter could make its own phone, just as Apple could have built its own phone carrier, but given how essential the app ecosystem is, the former is actually a far more intractable challenge than the latter. One app may not matter to an ecosystem, but as a collective nothing matters more.

The Moral Argument

This gets at the part of this case that is, to be perfectly honest, kind of infuriating. Back in 2009 Apple came up with a memorable campaign for the iPhone: “There’s an app for that.”

I admit it — the first tweet above was wrong. Apple absolutely did a tremendous amount for developers. It invented the iPhone, it brought the concept of an App Store to the mass market, and it has iterated on both. And yes, it spends a lot of money on APIs and developer tools.

What the company no longer admits, though, is that developers did a lot for Apple too. They made the iPhone far more powerful and useful than Apple ever would have on its own, they pushed the limits on performance so that customers had reasons to upgrade, and even when Android came along iPhone developers never abandoned the platform. Sure, they had good economic reasons for their actions, but that’s the point: Apple had good economic reasons for building out all of those APIs and developer tools as well.

Apple said as much when the App Store first launched; Steve Jobs said in 2008 that “Our purpose in the App Store is to add value to the iPhone”, even as he admitted that “the App Store is much larger than we ever imagined.” When Apple introduced the iPhone SDK, Jobs had analogized the company’s 70/30 split to iTunes, but it was already clear a few months later that the opportunity was far larger than music.

This worried longtime Apple executive Phil Schiller; in one of the most striking emails to emerge from the trial, he suggested that Apple might consider voluntarily capping its App Store profit at $1 billion, which was far more than the break-even amount Jobs hoped for at launch.

Slide from Apple v Epic

Schiller argued that Apple ought to make such a move from “a position of strength rather than weakness”; one can certainly make the argument that he was gravely mistaken, given that Apple makes somewhere in the neighborhood of $15 billion from the App Store and, for all of the reasons I just explained, is as secure in its competitive position as it has ever been.

And yet, it’s worth remembering that Apple did $294 billion in sales last year; the App Store is not and never will be its main business. Is it strong to continue tarnishing the customer experience with popular apps and its reputation with developers, and to provoke criticism from Congress, simply because it can? How much might it have been worth to remember that while Apple will always have the leverage with individual developers, developers as a collective — along with creators and authors and musicians and everyone else who wants to build a business on the iPhone — are exactly what makes the iPhone so compelling?

The Anti-Steering Argument

The argument that Judge Gonzalez Rogers seemed the most interested in pursuing was one that Epic de-emphasized: Apple’s anti-steering provisions, which prevent an app from telling customers that they can go elsewhere to make a purchase. Apple’s argument, in this case presented by Cook, goes like this:

Tim Cook's Best Buy argument

This analogy doesn’t work for all kinds of reasons; Apple’s ban is like Best Buy prohibiting the products it sells from listing, in their instruction manuals, a website that happens to sell the same products. In fact, as Nilay Patel noted, Apple does exactly this!

Nilay Patel refutes the Best Buy argument

The point of this Article, though, is not necessarily to refute arguments, but rather to highlight them, and for me this was the most illuminating part of this case. The only way that this analogy makes sense is if Apple believes that it owns every app on the iPhone, or, to be more precise, that the iPhone is the store, and apps in the store can never leave.

Judge Gonzalez Rogers, meanwhile, is not the only one who finds Apple’s entitlement to apps problematic; the European Commission specifically cited the App Store anti-steering provision in its Statement of Objections about Apple’s approach to competition “in the market for the distribution of music streaming apps through its App Store”. That position of strength is starting to weaken.


After the European Commission’s announcement, Benedict Evans wrote in Resetting the App Store:

We’ve been arguing about this for a decade, but now something is going to change, partly because of Epic’s lawsuit (which it might or might not win) but much more importantly because the EU has a whole set of competition investigations into the sandbox, the store and the payment system, and is highly unlikely to accept the status quo. When Apple launched the store in July 2008 it had only sold 6m iPhones ever, but now a billion people have one, and competition laws apply, whether you like it or not. Epic might lose, but Spotify will win. What will that mean, though? What rules will change, and how, and what will that mean for anyone who isn’t an Apple or Spotify shareholder?…

It’s possible to believe that the sandboxed App Store model has been a hugely good thing, and also that Apple has too often made the wrong decisions in running it, and also that unwinding those decisions while preserving the underlying model won’t actually change much for many companies or consumers, and won’t really be a significant structural change in how tech works.

What Evans highlights is that all of these arguments about the App Store are good ones. Apple has good ones, Epic has good ones, Spotify has good ones, the European Commission has good ones, and I’d like to think I have good ones as well. As the Supreme Court has noted, though, a realm with lots of complexity and lots of good arguments about every single trade-off is one that is extremely poorly suited to judicial oversight. Congress is certainly an option — there is a utility sort of argument to be made about the App Store — but that comes with massive risks, given how infrequently the law changes relative to technology (the Epic case is being argued under a law passed 131 years ago).

What I wish would happen — and yes, I know this is naive and stupid and probably fruitless — is that Apple would just give the slightest bit of ground. Yes, the company has the right to earn a profit from its IP, and yes, it created the market that developers want to take advantage of, and yes, the new generation of creators experimenting with new kinds of monetization only make sense in an iPhone world, but must Apple claim it all?

Let developers own their apps, including telling users about their websites, and let creatives build relationships with their fans instead of intermediating everything.4 And, for what it’s worth, continue controlling games: I do think the App Store is a safer model, particularly for kids, and the fact of the matter is that consoles have the same rules. The entire economy, though, is more than a game, and the real position of strength is unlocking the full potential of the iPhone ecosystem, instead of demanding every dime, deadweight loss be damned.


  1. Obvious caveat: I am not a lawyer, although I have consulted with both an antitrust lawyer and an antitrust economist about legal precedents surrounding the App Store; this is also about U.S. law only 

  2. There is a question as to whether Apple’s approach is the least restrictive way to achieve legitimate business ends; this is a very difficult argument to win in court, though, as any proposed solution has to be better in all regards, including appropriately compensating the IP holder. 

  3. Some “reader” apps like Kindle did have the rules changed on them — Apple used to let them link out to the web for payments — but that was years ago, before the iPhone was as dominant as it is today; parental control apps may have a more compelling case. 

  4. Customers, of course, could decline; that would be a reason for developers and creatives to opt into Apple’s model 

Distribution and Demand

Felix Salmon asked a question on Twitter:

Felix Salmon's question about AT&T and Amazon

While Salmon’s question was driven by the news that AT&T would spin out WarnerMedia and merge it with Discovery, under Discovery’s management, it was also prescient given a report in The Information a few hours later that Amazon is considering buying MGM.

It remains to be seen whether Amazon will actually follow through with a purchase, and it will take even longer to know whether or not it was the right play; there are reasonable arguments both for and against. What has always been clear, though, is that AT&T’s foray into media was a bad idea that made zero strategic sense.

Being open to Amazon’s purchase of a studio, and opposed to AT&T’s acquisition of a media conglomerate (which I was from the beginning) isn’t based on some sort of bias against old-line firms versus tech firms; rather, it stems from an understanding that the Internet really is different from the analog world.

AT&T and the Economics of Distribution

Distribution is one of the most frequently used words in both tech and media, and in the vast majority of cases it is used wrongly; explaining why the marriage of AT&T and Time Warner was destined for divorce is not one of those cases. Take it from a legend of the industry, in the Wall Street Journal:

Cable mogul John Malone, a major Discovery shareholder, said that although he believes Time Warner is doing fine, merging content and distribution usually doesn’t make sense. “I think that the technology of connectivity and digital technologies are one focus, and creating content that people get addicted to is another focus,” he said. “And you seldom would find both of those in the same management team.”

Look no further than Jeff Bezos and Amazon to see why Hollywood can be a distraction, but in my estimation the problems with merging content and distribution are much more fundamental.

Start with the latter: distribution, properly understood, entails the build-out of physical infrastructure. Comcast needs to lay cable and fiber; DirecTV, another AT&T misadventure, needs to launch satellites and install dishes; AT&T’s core mobile business requires buying spectrum and installing cell phone towers and base stations. These are not businesses for the faint of heart, given the massive capital costs.

The payoff for this investment, though, is a competitive moat. Comcast, for example, may only compete with satellite for TV service, and the local phone company for Internet access; no other cable company is going to lay competitive cable lines, because Comcast could simply lower prices and drive them out of the market. This isn’t even predatory pricing, because the actual utilization of its infrastructure is effectively free on a marginal basis. Laying cable is expensive, but using it is cheap.

That said, to the extent competition exists, it is quite brutal because it is zero sum; an AT&T customer is not a Verizon customer is not a T-Mobile customer. The combination of scarcity in customers and zero marginal cost for incremental usage can lead to price wars, which T-Mobile used over the last decade to take a good amount of share from Verizon and AT&T; perhaps that is why both tried their hand at content.

Differentiation and Commoditization in Content

Verizon’s approach was to get into the commoditized content space, acquiring and eventually merging AOL and Yahoo into a company bizarrely named “Oath”. The problem for Verizon is that Oath had the opposite problem of its core business: digital content requires effectively zero fixed costs to get started, but never-ending marginal costs to produce, which means it had effectively infinite competition for both eyeballs and advertising dollars, and no differentiation in either market.
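The contrast between those two cost structures can be made concrete with a toy model; the numbers below are purely illustrative and are not drawn from Verizon’s or Oath’s actual financials:

```python
# Toy model of the two cost structures described above.
# All figures are invented for illustration.

def profit(customers, price, fixed_cost, marginal_cost):
    """Profit = revenue, minus up-front fixed costs, minus per-customer costs."""
    return customers * price - fixed_cost - customers * marginal_cost

# A "real" distribution business: huge fixed cost to build the network,
# near-zero marginal cost to use it — each new customer is almost pure profit.
carrier_1m = profit(1_000_000, 50, 40_000_000, 1)
carrier_2m = profit(2_000_000, 50, 40_000_000, 1)

# Commoditized digital content: almost no fixed cost to get started, but
# never-ending marginal costs to keep producing content for the audience.
content_1m = profit(1_000_000, 50, 1_000_000, 45)
content_2m = profit(2_000_000, 50, 1_000_000, 45)
```

Under these made-up numbers, doubling the customer base transforms the distribution business’s profit (the fixed cost is amortized over twice as many customers), while the content business improves only modestly, because its costs grow with its audience.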

AT&T, on the other hand, acquired highly differentiated content with its acquisition of Time Warner (which it renamed WarnerMedia). The problem for AT&T is that differentiated content has a business model that is orthogonal to AT&T’s core business. Whereas AT&T competes for customers in a zero sum game, content is best leveraged by reaching as many customers across as many distributors as possible. That means that what would have been best for AT&T’s core business — being the exclusive way to get access to WarnerMedia content, thus giving a reason for customers from Verizon or T-Mobile to switch carriers — would have been value destructive to WarnerMedia, because the cost of producing its differentiated content would have been amortized across fewer customers.

AT&T instead went in the opposite direction: by creating HBO Max, WarnerMedia stopped selling content to the highest bidder and instead started bidding for content itself, which was the worst of all possible worlds, at least in the short run. HBO Max content was limited in reach — it was only available to HBO Max subscribers — which meant that its content was leveraged against an even smaller base than, say, AT&T’s customers. The potential payoff, of course, was a service like Netflix, which reaches everyone everywhere, no matter their carrier or cable provider (which, of course, raises the question as to what strategic benefit would have accrued to AT&T’s core business even in the best case scenario).

This is where it is important to be precise about the meaning of “distribution.” For Netflix, distribution is the Internet, which is to say it is completely commoditized. You can watch Netflix on your phone, on your TV, on your console, on your computer, on your set-top box, basically through any device that has an Internet connection. This distribution isn’t technically free — Netflix has huge bandwidth bills, and if you watch enough of it you may hit bandwidth caps — but from a strategic perspective it might as well be.

In fact, everything that is distributed via the Internet is effectively free. Netflix, Oath, HBO Max, Google, Facebook, Wikipedia, Spotify, YouTube, Stratechery — all are effectively free to access for anyone from anywhere. This is why I get so confused when companies or regulators complain about Google or Facebook controlling distribution; neither company controls the cables or routers or switches that deliver content. They have zero control of distribution. Rather, what those companies control is demand.

Distribution Versus Demand

What are the ways one might read this Article? My subscribers are reading an email, or perhaps an RSS feed, or listening to a podcast. New readers may have had this Article forwarded to them from a friend, followed a link on social media, or searched for an article distinguishing between “Distribution and Demand.”

The sheer scale of the Internet is such that the last two channels have many orders of magnitude more potential readers than the ones I directly control. After all, people use social media to connect to basically everyone on the planet, and post basically anything; a link to my site is one link among an infinite number of other links. It’s the same thing with search: everyone uses search to find information about anything you might think of; the scale of the Internet — thanks in part to the never-ending content creation that characterizes businesses like Oath — is such that there are almost always scads of results for your query.

What makes Google and Facebook so successful is that they are the linchpin upon which these massive markets pivot, in large part because both services increase in functionality with scale: more sites and more links mean better results in Google, which drives increased usage, which provides a feedback function allowing Google to refine its results; similarly, more people and more content mean stronger networks and more engagement on Facebook, which drives increased usage, which provides a feedback function improving the recommendation algorithm.

Both services built monetization engines that perfectly align with and benefit from their core feedback function: Google gives you search ads that are more relevant with more advertisers, and which improve in relevance with more customer clicks; Facebook shows you ads that you are interested in because they are similar to the content you already enjoy, and like Google, more ads mean better ads for the marginal user, which improve in relevance with more customer clicks.

The problem for a company like Oath is that its content was simply grist for Google and Facebook, and to the extent it earned page views, the accompanying ad inventory wasn’t nearly as differentiated as ads sold by Google or Facebook. Oath had distribution — anyone anywhere could access its sites and apps for free — but it didn’t have demand, from either users or advertisers.

HBO Max, meanwhile, did have demand for its content, but because that content required an HBO Max subscription, its distribution was artificially capped at whoever was willing to subscribe. This can work — again, look at Netflix — but it takes billions of dollars, both in actual spending and, just as importantly, in foregone content sales, to pull it off. That may make sense for a startup, but it’s nigh-on suicidal for a (real) distribution company like AT&T.

Amazon Prime

There is another important difference between a company like Oath and a company like Google: while the former has minimal fixed costs and ongoing marginal costs, the latter has massive fixed costs and minimal marginal costs. Google spends a massive amount of money on both R&D and in capital costs to dot its servers all over the world, and link them together with an expansive private network; Google leverages these costs across the virtuous cycle of users, content suppliers, and advertisers.

Another company that expends massive amounts of money on R&D and capital costs is Amazon;1 the company spends billions of dollars on everything from distribution centers to delivery drivers, paying for it with an ever-expanding virtuous cycle of suppliers, customers, and, increasingly, advertisers. Because these costs are fixed, Amazon wants to both expand its customer base and also increase its share of wallet with its existing customers; one of the most effective ways to do this is to convince customers to subscribe to Prime, which, among other benefits, offers two-day (increasingly one-day) shipping. This encourages shoppers to start their searches for products on Amazon, where they may click on sponsored search results and ultimately buy on Amazon, without even considering alternatives.

This is not, to be clear, because Amazon controls distribution. Other e-commerce sites are, as they say, “only a click away” — Google in particular is eager to direct shoppers to alternatives, which are easily accessed from your phone or computer. Rather, Amazon’s goal is to control demand: its best customers go to Amazon because Amazon has what they need and gets it to them quickly, and because they already paid for shipping with their Prime subscription.

Notice how different this kind of competition is from that between true distribution companies: AT&T and Verizon spend a lot up-front, but the payoff is physical lock-in; Google and Amazon, meanwhile, spend massive amounts of money on fixed costs, but they don’t have any lock-in at all: they win not by limiting customer choice, but by being the top choice in a world where alternatives are easily accessible. At the same time, while a dependency on physical infrastructure limits AT&T and Verizon’s scale, companies like Google, Amazon, and Facebook have free distribution — remember, they’re on the Internet! — which means they can serve anybody.

This is why Amazon’s long-running investment in Prime Video, and potential acquisition of MGM, makes far more strategic sense than AT&T’s gambit ever did. Amazon has to work to win and retain customers on a continual basis, ideally to its Prime subscription service, and bundling differentiated content is a great way to do that. Moreover, Amazon isn’t trying to build a subscription service from scratch, thus drastically limiting its leverage on that content; Prime has 147 million members in the United States, and 200 million worldwide, and none of those customers are making a zero-sum choice between Prime and, say, Netflix. After all, distribution is free.

Aggregators and Platforms

You can probably guess my vote in Salmon’s poll:

I voted that AT&T was being stupid and Amazon was being smart

I do get the argument that Prime Video is a waste of money for Amazon; Brad Stone notes in his new book Amazon Unbound that “there was little evidence of a connection between viewing and purchasing behavior” and that “any correlation was also obfuscated by the fact that Prime was growing rapidly on its own.” I have no beef with people who voted “No” on both — what is important is that we all agree that AT&T’s approach didn’t make any sense at all.

This is why getting definitions right is so important: if you conflate controlling distribution with controlling demand, you are liable to waste billions of dollars on acquisitions that make no sense, or, in the case of regulators, spend years pursuing court actions against companies like Google that result in zero change in the relevant markets. I wrote in a 2019 article:

There is a Sisyphean aspect to regulating power predicated on consumer choice: look no further than the European Union, where regulators are frustrated that remedies for the Google shopping case aren’t working, even though those same regulators were happy with the remedies in theory; the problem was trying to regulate consumer choice in the first place.

The premise of that article is that effective regulation means distinguishing between Aggregators and Platforms; drawing the distinction between delivery and demand is making the same point in a different way. Aggregators win by consolidating demand; platforms exert dominance by controlling distribution. Search results and social networks are the former; App Stores and access-control are the latter. Not understanding the difference is, as both AT&T and Verizon have learned, exceptionally costly.


  1. I am, for purposes of this article, going to focus on Amazon.com, not AWS, although all of the same principles apply. 

Cloudflare on the Edge

Matthew Prince, at the end of his prepared remarks after Cloudflare’s recent earnings report, related a story from the company’s earliest days:

Back in 2010, right before Cloudflare’s first Board meeting and our launch, I got some advice from one of our early investors. He said running a company is a bit like flying an airplane. You want to make sure it’s well maintained at all times. And that when you’re flying, you keep the wheel steady and the nose 10 degrees above the horizon. That’s stuck with me, and we’ve designed Cloudflare for consistent and disciplined execution. That shows in quarters like the one we just had.

What is most important of all, though, is the destination that airplane is headed for.

TechCrunch Disrupt

The launch Prince referred to happened at TechCrunch Disrupt 2010; the entire video is worth a watch, but there are three highlights in particular. First, Prince — despite a three-minute technical delay — did an excellent job of laying out Cloudflare’s core value proposition:

Second, Prince, a graduate of Harvard Business School, explicitly invoked HBS Professor Clayton Christensen while answering a question about competition:

The most memorable moment of the presentation, though, was Prince’s response to a seemingly anodyne question about when companies might grow out of Cloudflare’s offering:

Despite the audacity of Prince’s answer — Our vision is that we’re going to power the Internet — the company’s list of competitors in its 2019 S-1 seemed rather aspirational, in both breadth and scale:

Our current and potential future competitors include a number of different types of companies, including:

  • On-premise hardware network vendors, such as Cisco Systems Inc., F5 Networks, Inc., Check Point Software Technologies Ltd., FireEye, Inc., Imperva, Inc., Palo Alto Networks, Inc., Juniper Networks, Inc., and Riverbed Technology, Inc.;
  • Point-cloud solution vendors, including cloud security vendors such as Zscaler, Inc. and Cisco Systems Inc. through Umbrella (formerly known as OpenDNS), content delivery network vendors such as Akamai Technologies, Inc., Limelight Networks, Inc., Fastly, Inc., and Verizon Communications Inc. through Edgecast, domain name system service vendors such as Oracle Corporation through DYN, NeuStar, Inc., and UltraDNS Corporation, and cloud SD-WAN vendors; and
  • Traditional public cloud vendors, such as Amazon.com, Inc. through Amazon Web Services, Alphabet Inc. through Google Cloud Platform, Microsoft Corporation through Azure, and Alibaba Group Holding Limited through Alibaba Cloud.

The first two categories make sense; after all, Cloudflare’s value proposition from the beginning was speed and security, so of course they would grow up to compete with network and security vendors. It was that last bullet point, though, that even now leads to raised eyebrows: Cloudflare’s big quarter entailed $138 million in revenue; AWS, over the same period, made $150 million a day.

Cloudflare Disrupt

To understand why Cloudflare sees public cloud vendors as competitors it helps to go back to what made Cloudflare disruptive; Christensen wrote in The Innovator’s Dilemma:

Occasionally, however, disruptive technologies emerge: innovations that result in worse product performance, at least in the near-term. Ironically, in each of the instances studied in this book, it was disruptive technology that precipitated the leading firms’ failure. Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

That was basically Prince’s value proposition: Cloudflare’s CDN would be cheaper (free), simpler (just change DNS servers), smaller (only 5 servers to start), and more convenient (ridiculously easy!). And Cloudflare’s customers were definitely fringe:

What Cloudflare had in its favor, though, was the most potent advantage on the Internet: the service, much like Google a decade earlier with its link-based ranking system, got better with use. This was because Cloudflare paired its content delivery network with DDoS protection; the latter was extremely attractive to websites, gave Cloudflare an in with ISPs who valued the protection to build point-of-presence servers around the world, and, critically, gave Cloudflare better-and-better data about how data flowed around the world (improving its service) even as it improved its CDN capabilities.

Cloudflare’s focus on security-for-free also meant its CDN was built on general-purpose hardware from the beginning; from the S-1:

To achieve the level of efficiency needed to compete with hardware appliances required us to invent a new type of platform. That platform needed to be built on commodity hardware. It needed to be architected so any server in any city that made up Cloudflare’s network could run every one of our services. It also needed the flexibility to move traffic around to serve our highest paying customers from the most performant locations while serving customers who paid us less, or even nothing at all, from wherever there was excess capacity.

As time went on those general purpose machines were used for more-and-more offerings beyond a CDN and DDoS protection; HHHypergrowth has a fantastic overview of everything Cloudflare is working on, and the article is daunting in length because Cloudflare’s portfolio is so vast. It is Cloudflare Workers, though, that are responsible for the big cloud players being in Cloudflare’s competitive set.

Cloudflare Workers

Cloudflare launched Workers seven years after the company’s launch at Disrupt; from the introductory blog post:

Cloudflare is about to go through a similar transition [as programmable CPUs]. At its most basic level, Cloudflare is an HTTP cache that runs in 117 locations worldwide (and growing). The HTTP standard defines a fixed feature set for HTTP caches. Cloudflare, of course, does much more, such as providing DNS and SSL, shielding your site against attacks, load balancing across your origin servers, and so much else.

But, these are all fixed functions. What if you want to load balance with a custom affinity algorithm? What if standard HTTP caching rules aren’t quite right, and you need some custom logic to boost your cache hit rate? What if you want to write custom WAF rules tailored for your application?

You want to write code.

We can keep adding features forever, but we’ll never cover every possible use case this way. Instead, we’re making Cloudflare’s edge network programmable. We provide servers in 117+ locations around the world — you decide how to use them.

Workers were extremely limited in functionality to start: just a bit of stateless JavaScript code running in a V8 isolate, but as close to users as possible. In 2018 Cloudflare added a key-value store, giving Workers access to highly distributed eventually-consistent data storage; in 2020 the company introduced Workers Unbound, which dramatically expanded Workers’ capabilities, and Durable Objects, which store not just data but state, providing a single source of truth. Once again Cloudflare’s network comes to the rescue:
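To make the model concrete, here is a toy sketch of a Worker: a single stateless fetch handler making a routing decision at the edge. The handler shape mirrors Cloudflare’s documented module syntax, but the route and responses are made up for illustration.

```javascript
// Toy sketch of a Cloudflare Worker: a stateless fetch handler that runs
// in a V8 isolate at the edge. In a real Worker this object would be the
// module's default export; the route and responses here are hypothetical.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      // Custom logic answered at the edge, with no round-trip to an origin
      return new Response("Hello from the edge!", {
        headers: { "content-type": "text/plain" },
      });
    }
    // Anything else would normally fall through to the cache or origin
    return new Response("Not found", { status: 404 });
  },
};
```

The point of the model is that this handful of lines, deployed once, runs in every Cloudflare location simultaneously.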

When using Durable Objects, Cloudflare automatically determines the Cloudflare datacenter that each object will live in, and can transparently migrate objects between locations as needed. Traditional databases and stateful infrastructure usually require you to think about geographical “regions”, so that you can be sure to store data close to where it is used.

Thinking about regions can often be an unnatural burden, especially for applications that are not inherently geographical. With Durable Objects, you instead design your storage model to match your application’s logical data model. For example, a document editor would have an object for each document, while a chat app would have an object for each chat. There is no problem creating millions or billions of objects, as each object has minimal overhead.

In Cloudflare’s example of a chat app, every individual conversation is an object, and that object is moved as close to the participants as possible; two people chatting in the U.S. would utilize a Durable Object in a U.S. data center, for example, while two in Europe would use one there. There is a bit of additional latency, but less than there might be with a centralized cloud provider. That’s ok, though, because the real advantage of Workers isn’t what Cloudflare thought it was.
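The data model is easier to grasp in code; what follows is a plain-JavaScript simulation, not Cloudflare’s actual API, of the one-object-per-entity idea, with `ChatRoom` and `Namespace` as hypothetical names.

```javascript
// Plain-JavaScript simulation (not Cloudflare's API) of the Durable Objects
// data model: one object per logical entity, and the namespace guarantees a
// single instance per name, i.e. a single source of truth.
class ChatRoom {
  constructor(name) {
    this.name = name;
    this.messages = []; // every write for this room lands here, and only here
  }
  post(user, text) {
    this.messages.push({ user, text });
  }
}

class Namespace {
  constructor() {
    this.objects = new Map();
  }
  // Every caller asking for the same name gets the same object, no matter
  // which "datacenter" the request arrived through.
  get(name) {
    if (!this.objects.has(name)) this.objects.set(name, new ChatRoom(name));
    return this.objects.get(name);
  }
}
```

In the real system Cloudflare, rather than the developer, decides where each instance physically lives and migrates it as usage shifts; the storage model stays matched to the application’s logical model.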

Public Cloud Economics

The economics of public clouds are very straightforward: it makes far more sense for Amazon or Microsoft or Google to build and maintain data centers all over the world and rent out capacity than it does for companies for whom data centers are not their core competency to duplicate their efforts at a sub-scale level. It’s so compelling I labeled the current state The End of the Beginning:

This last point gets at why the cloud and mobile, which are often thought of as two distinct paradigm shifts, are very much connected: the cloud meant applications and data could be accessed from anywhere; mobile made the I/O layer available anywhere. The combination of the two make computing continuous.

A drawing of The Evolution of Computing

What is notable is that the current environment appears to be the logical endpoint of all of these changes: from batch-processing to continuous computing, from a terminal in a different room to a phone in your pocket, from a tape drive to data centers all over the globe. In this view the personal computer/on-premises server era was simply a stepping stone between two ends of a clearly defined range.

While this view of the omnipresent cloud is true for end users, the story is a bit more complicated for developers; if you want to set up a new instance you need to first select a region. AWS, for example, has twenty-five regions around the world:

AWS regions

Once you choose a region, your app is geographically contained in that region. In theory that limitation gives an advantage to Cloudflare Workers; Prince wrote in a blog post:

Since we’re unlikely to make the speed of light any faster, the ability for any developer to write code and have it run across our entire network means we will always have a performance advantage over legacy, centralized computing solutions — even those that run in the “cloud.” If you have to pick an “availability zone” for where to run your application, you’re always going to be at a performance disadvantage to an application built on a platform like Workers that runs everywhere Cloudflare’s network extends.

The truth, though, is that this performance doesn’t matter very much for most applications. Stratechery’s podcast service runs in the US East (Ohio) region, for example, and it doesn’t really make a difference for me, despite the fact I’m halfway around the world. Prince admitted as much:

But let’s be real a second. Only a limited set of applications are sensitive to network latency of a few hundred milliseconds. That’s not to say under the model of a modern major serverless platform network latency doesn’t matter, it’s just that the applications that require that extra performance are niche…People who talk a lot about edge computing quickly start talking about IoT and driverless cars. Embarrassingly, when we first launched the Workers platform, I caught myself doing that all the time.

Indeed, for almost all applications the public clouds were good enough, and again, the economics made any other choice a bad idea.

The Edge Opportunity

Earlier this year, in the wake of January 6, I wrote Internet 3.0 and the Beginning of (Tech) History; after raising the arguments from The End of the Beginning I noted:

In the case of the Internet, we are at the logical endpoint of technological development; here, though, the impasse is not the nature of man, but the question of sovereignty, and the potential re-liberation of megalothymia is the likely refusal by people, companies, and countries around the world to be lorded over by a handful of American giants.

As long as economics were all that mattered, we would only ever have the centralized cloud providers; the “limited set of applications” that needed minimal latency could pay a bit more to run on those blue AWS edge locations in the map above. The point of that article, though, is that economics weren’t the only thing that mattered: going forward politics would be even more important.

Prince had the same realization; the blog post I have been quoting is entitled The Edge Computing Opportunity: It’s Not What You Think, and the chief benefits Prince cites are very much about politics:

Most computing resources that run on cloud computing platforms, including serverless platforms, are created by developers who work at companies where compliance is a foundational requirement. And, up until now, that’s meant ensuring that platforms follow government regulations like GDPR (European privacy guidelines) or have certifications proving that they follow industry regulations such as PCI DSS (required if you accept credit cards), FedRamp (US government procurement requirements), ISO27001 (security risk management), SOC 1/2/3 (Security, Confidentiality, and Availability controls), and many more.

But there’s a looming new risk of regulatory requirements that legacy cloud computing solutions are ill-equipped to satisfy. Increasingly, countries are pursuing regulations that ensure that their laws apply to their citizens’ personal data. One way to ensure you’re in compliance with these laws is to store and process data of a country’s citizens entirely within the country’s borders.

The EU, India, and Brazil are all major markets that have or are currently considering regulations that assert legal sovereignty over their citizens’ personal data. China has already imposed data localization regulations on many types of data. Whether you think that regulations that appear to require local data storage and processing are a good idea or not — and I personally think they are bad policies that will stifle innovation — my sense is the momentum behind them is significant enough that they are, at this point, likely inevitable. And, once a few countries begin requiring data sovereignty, it will be hard to stop nearly every country from following suit.

This potential reality presents a big problem for Amazon, Microsoft, and Google: what scales on their side is the cloud as a whole, from management to interface to purchasing; individual developers are meant to stay in their regions. Yes, all three companies guarantee that data in one region won’t go elsewhere, but it’s a development nightmare: you have to maintain different apps with different data stores in different regions.

Cloudflare, meanwhile, can use the same capabilities that seamlessly transfer Durable Objects to the nearest data center to comply with local data sovereignty laws at a granular level; from an announcement for Jurisdictional Restrictions for Durable Objects:

Durable Objects, currently in limited beta, already make it easy for customers to manage state on Cloudflare Workers without worrying about provisioning infrastructure. Today, we’re announcing Jurisdictional Restrictions for Durable Objects, which ensure that a Durable Object only stores and processes data in a given geographical region. Jurisdictional Restrictions make it easy for developers to build serverless, stateful applications that not only comply with today’s regulations, but can handle new and updated policies as new regulations are added…

By setting restrictions at a per-object level, it becomes easy to ensure compliance without sacrificing developer productivity. Applications running on Durable Objects just need to identify the jurisdictional rules a given Object should follow and set the corresponding rule at creation time. Gone is the need to run multiple clusters of infrastructure across cloud provider regions to stay compliant — Durable Objects are both globally accessible and capable of partitioning state with no infrastructure overhead.
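A toy sketch makes the placement logic clear; the datacenter list and the `placeObject` function below are hypothetical stand-ins, not Cloudflare’s implementation.

```javascript
// Illustrative sketch (hypothetical datacenters and API, not Cloudflare's):
// a jurisdictional restriction shrinks the candidate set of locations
// before any latency-based placement happens.
const DATACENTERS = [
  { city: "Frankfurt", jurisdiction: "eu" },
  { city: "Paris", jurisdiction: "eu" },
  { city: "Ashburn", jurisdiction: "us" },
  { city: "Singapore", jurisdiction: "apac" },
];

function placeObject({ jurisdiction, userRegion }) {
  // A restricted object may only live inside its legal region
  const candidates = jurisdiction
    ? DATACENTERS.filter((dc) => dc.jurisdiction === jurisdiction)
    : DATACENTERS;
  // Stand-in for "closest to the user": prefer a candidate in the user's
  // own region, otherwise take the first legal option
  return candidates.find((dc) => dc.jurisdiction === userRegion) ?? candidates[0];
}
```

The rule is set once, per object, at creation time; everything else about placement stays automatic, which is exactly the contrast with maintaining separate per-region clusters on a centralized cloud.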

Durable Objects are not, in-and-of-themselves, going to kill the public clouds; what they represent, though, is an entirely new way of building infrastructure — from the edge in, as opposed to the data center out — that is perfectly suited to a world where politics matters more than economics.

Internet 3.0

I actually already covered Cloudflare’s differentiated approach, albeit in passing, and by accident. Back in March, I interviewed Prince in the process of writing Moderation in Infrastructure; one thing that stood out to me was how his response to Internet fragmentation differed from those of Microsoft President Brad Smith and Google Cloud CEO Thomas Kurian:

Smith:

I think, it’s a reflection of the fact that if you’re a global technology business, most of the time, it is far more efficient and legally compliant to operate a global model than to have different practices and standards in different countries, especially when you get to things that are so complicated. It’s very hard to have content moderators make decisions about individual pieces of content under one standard, but to try to do it and say, “Well, okay, we’ve evaluated this piece of content and it can stay up in the US but go down in France.” Then you add these additional layers of complexity that add both cost and the risk of non-compliance which creates reputational risk.

Kurian:

So far, we have tried to get to what’s common, and the reality is, Ben, it’s super hard on a global basis to design software that behaves differently in different countries. It is super difficult. And at the scale at which we’re operating and the need for privacy, for example, it has to be software and systems that do the monitoring. You cannot assume that the way you’re going to enforce ToS and AUPs is by having humans monitor everything, I mean we have so many customers at such a large scale. And so that’s probably the most difficult thing is saying virtual machines behave one way in Canada, and a different way in the United States, and a third way…I mean that’s super complicated.

Prince:

Everywhere in the world, governments have some political legitimacy, and they certainly have a lot more political legitimacy than I do…It’s important that we comply with the laws in each jurisdiction in which we operate. We should help our customers comply with the laws in each jurisdiction we operate…Germany can set whatever rules they want for Germany, but it has to be the rules inside of Germany.

And you can manage that okay. You can manage on a per country basis. You feel good about that?

Sure. I mean, for us, that’s easy. And then we can provide that to our customers as a function of what we’re doing. But I think that if you could say, German rules don’t extend beyond Germany and French rules don’t extend beyond France and Chinese rules don’t extend beyond China and that you have some human rights floor that’s in there.

Right. But given the nature of the internet, isn’t that the whole problem? Because, anyone in Germany can go to any website outside of Germany.

That’s the way it used to be, I’m not sure that’s going to be the way it’s going to be in the future. Because, there’s a lot of atoms under all these bits and there’s an ISP somewhere, or there’s a network provider somewhere that’s controlling how that flows and so I think that, that we have to follow the law in all the places that are around the world and then we have to hold governments responsible to the rule of law, which is transparency, consistency, accountability. And so, it’s not okay to just say something disappears from the internet, but it is okay to say due to German law it disappeared from the internet. And if you don’t like it, here’s who you complain to, or here’s who you kick out of office so you do whatever you do. And if we can hold that, we can let every country have their own rules inside of that, I think that’s what keeps us from slipping to the lowest common denominator.

The quotes aren’t perfectly comparable — you can read the full interviews to get the context — but it makes sense that Microsoft and Google (and presumably Amazon) would be very concerned about a world where individual countries make their own laws about what can be put on the Internet, or even seen. Theirs are services predicated on the superior economics that come from centralization; Cloudflare, on the other hand, is already doing all of its computing on the edge — data sovereignty rules are simply a variable. It’s “easy”.

This is why the direction of Cloudflare’s metaphorical plane is so fascinating: Cloudflare’s current addressable market of enterprise security and networking is significant, particularly as remote work has laid bare the problems with traditional approaches; the destination with outsized upside, though, is Internet 3.0, and the resultant need for a service that routes around obstacles erected not by nuclear war, but by sovereign governments.1


  1. As I explained last week, I use “Internet 3.0” instead of Web3 because the world beyond centralization isn’t only going to be about crypto; Cloudflare’s potential is a great example. 

Market-Making on the Internet

Two months ago I bemoaned how Web 2.0 had failed to live up to its promise in one crucial area; from The Web’s Missing Interoperability:

[From] the Wikipedia…definition:

Web 2.0…refers to websites that emphasize user-generated content, ease of use, participatory culture and interoperability (i.e., compatible with other products, systems, and devices) for end users.

Seventeen years on and there is more user-generated content than ever, in part because it is so easy to generate: you can type an update on Facebook, post a photo on Instagram, make a video on TikTok, or have a conversation on Clubhouse. That, though, points to Web 2.0’s failure: interoperability is nowhere to be found.

The context for that piece was Twitter’s preview of Super Follows, acquisition of Revue, and imminent launch of Spaces, all of which helped explain how centralization had won the web. Twitter’s latest acquisition, though, leverages centralization in a way that bodes well for realizing the “thousand flowers bloom” vision of Web 2.0.

Web 2.0 and the Aggregators

The first page of O’Reilly’s definition of Web 2.0 had a “Web 2.0 Meme Map”:

The Web 2.0 Meme Map from O'Reilly

Don’t worry about the low resolution and tiny type; what should feel familiar is this idea of an interlocking web of properties and services between which a user could flit to and fro, tying their content and data together with links. It turned out, though, that centralized services offered better user experiences that gained them critical masses of users, drawing suppliers onto the service on the service’s terms, attracting more users in a virtuous cycle; meanwhile, by virtue of being centralized, the service could gather better data about its users and offer a one-stop shop for advertisers. Aggregation Theory, in other words.

Aggregation was the antithesis of the Web 2.0 promise; the best suppliers could do was either subject themselves to the Aggregator’s terms and try and make the best of it (call it the BuzzFeed strategy) or work to build a direct connection with customers that went around the Aggregators (the New York Times strategy); Twitter, though, may be on the verge of offering a middle path: market-making.

Ad Agencies

Market-making in media isn’t a new concept; perhaps the best example is the traditional advertising agency. I explained back in 2017’s Ad Agencies and Accountability:

Few advertisers actually buy ads, at least not directly. Way back in 1841, Volney B. Palmer, the first ad agency, was opened in Philadelphia. In place of having to take out ads with multiple newspapers, an advertiser could deal directly with the ad agency, vastly simplifying the process of taking out ads. The ad agency, meanwhile, could leverage its relationships with all of those newspapers by serving multiple clients:

A drawing of The Pre-Internet Ad Agency Structure

It’s a classic example of how being in the middle can be a really great business opportunity, and the utility of ad agencies only increased as more advertising formats like radio and TV became available. Particularly in the case of TV, advertisers not only needed to place ads, but also needed a lot more help in making ads; ad agencies invested in ad-making expertise because they could scale said expertise across multiple clients.

In this case the ad agencies provided a single point of contact for advertisers on one side, and ad inventory sellers on the other, creating a market.

That was then, though; aggregation has been terrible for ad agencies for the same reason it has been bad for publishers: the more that advertising becomes centralized on Facebook and Google, whether on their sites or on programmatic exchanges, the fewer advertising dollars are available for the inventory that ad agencies used to abstract away for clients. And, as I noted in that article:

A drawing of The Post-Internet Ad Agency Structure

That’s a problem for the ad agencies: when there are only two places an advertiser might want to buy ads, the fees paid to agencies to abstract complexity become a lot harder to justify.

In this world, the more effective of the two strategies I noted above has clearly been the New York Times model, at least for the New York Times and its 7.5 million subscribers; Visual Capitalist put together this striking infographic last week showing how the New York Times dominated the subscription market (click through to see a vertical version of the infographic that shows every publication listed):

An infographic of news websites ranked by subscriptions
Licensed from Visual Capitalist

The problem is that there are a lot of publications in the world that would like to be supported by subscriptions, and a lot of readers in the world that would prefer to pay for ad-free content, but nobody is making a market. This is where Twitter is making its play.

Twitter Buys Scroll

From the Scroll blog:

Twitter is acquiring Scroll. The service will be going into private beta as we integrate into a broader Twitter subscription later in the year. We’re very excited!

Since launch last year, Scroll has proven that there’s a model that gives consumers a better experience and journalists a better future. Today when Scroll members visit hundreds of top sites like The Atlantic, The Verge, USA Today, The Sacramento Bee, The Philadelphia Inquirer or The Daily Beast, they get a site that feels built solely for them: a blazing fast experience that loads with no ads, no dodgy trackers and no chumboxes of clickbait. At the same time, publishers get to deliver a site that increases engagement and makes them more money than they would make from ads. It’s a better internet and we’ve proven the model works.

I’ve been a subscriber to Scroll for some time now, and it’s a great experience: while ad-blockers are oppositional to publishers, blocking their trackers and advertisements while users take content they refuse to help monetize, Scroll partnered with publications directly, such that Scroll subscribers were never served ads in the first place. Publishers would instead get paid their usage-based share of the Scroll user’s subscription fee, which Scroll claimed was more than a publication would have made by serving that user ads.
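Scroll never published its exact formula, but the usage-based split described above is straightforward to sketch: a subscriber’s fee, minus the platform’s cut, is divided across partner publications in proportion to that subscriber’s engagement. A minimal illustration, in which the fee, the 30% platform cut, and the reading-time figures are all hypothetical:

```python
# Hypothetical sketch of a usage-based revenue split in the style Scroll
# describes: a subscriber's monthly fee, minus a platform cut, is divided
# across publishers in proportion to that subscriber's reading time.
# All numbers (fee, cut, engagement minutes) are invented for illustration.

def split_subscription(fee, platform_cut, minutes_by_publisher):
    """Return each publisher's share of one subscriber's monthly fee."""
    pool = fee * (1 - platform_cut)          # what publishers divide up
    total_minutes = sum(minutes_by_publisher.values())
    if total_minutes == 0:
        return {pub: 0.0 for pub in minutes_by_publisher}
    return {
        pub: pool * minutes / total_minutes
        for pub, minutes in minutes_by_publisher.items()
    }

# One subscriber paying $5/month who read three partner sites:
shares = split_subscription(
    fee=5.00,
    platform_cut=0.30,                       # hypothetical cut
    minutes_by_publisher={"The Atlantic": 60, "The Verge": 30, "USA Today": 10},
)
for pub, amount in shares.items():
    print(f"{pub}: ${amount:.2f}")
```

The key property is that the subscriber’s dollars follow the subscriber’s attention, rather than being auctioned off impression by impression.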

Scroll’s list of partner sites has been scrubbed from their website, but I recall it being pretty impressive; the real challenge for the startup, though, was acquiring users, which is the first reason why Twitter is a great match: Twitter not only has 199 million monetizable daily active users, but is also one of the places where users are most likely to click through to publications.

Twitter between publishers and readers

Now imagine this model, but with Scroll’s business model on top:

Twitter using Scroll to monetize publishers

This is a great example of market-making in action: Twitter is taking its user base, which no one publication could realistically reach or monetize on its own, and redistributing each user’s subscription fee across publications that no one user could ever support individually.

This doesn’t necessarily mean that every publication will buy in; the New York Times, to take an obvious example, is doing quite well on its own, as are sovereign writers. Twitter, though, because of its outsized role in driving traffic to sites across the web, is uniquely positioned to bundle everything else together. Moreover, there are, as they say, levels to this: Twitter could offer one price tier for no ads, and another price tier for getting past paywalls, and perhaps even individual site subscriptions above that.

Multi-Medium Creators

Spotify’s announcement last week, meanwhile, shows that markets need not be exclusive. Start with creators: today someone might build a following on Instagram, before branching out to YouTube or TikTok, much as an artist may have started out singing and ended up making movies. This ability to move across mediums now extends down to even the smallest creators: even Stratechery is multi-medium, with text available on the web or via email, and podcasts on your phone.

What is new is that the business model can be more than fame (and, naturally, advertising). Subscriptions work for creators just as well as they do for publications (arguably better given an individual creator’s cost structure), but to date that has meant that creators were limited to mediums that they could fully control — i.e. text and podcasts — and only on the open web; what Spotify made clear is that they want into this world. From Spotify’s Surprise:

For full disclosure, I have been briefed on the Open Access Platform, and Spotify has addressed all of my concerns; no, they won’t support arbitrary RSS feeds, but instead another open technology — OAuth. Some time soon Stratechery and Dithering subscribers will be able to link their subscriptions to their Spotify accounts, and Spotify isn’t going to charge a dime — they will be my customers from email address to credit card. Spotify Chief R&D Officer Gustav Söderström told me, “Having all of audio on Spotify means meeting independent creators on their terms, not ours.”
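Spotify has not detailed the Open Access API publicly, so the following is only a generic sketch of the OAuth 2.0 authorization-code flow that this kind of account linking typically uses: the listener approves the link on the publisher’s site, and the player exchanges the returned code for a token proving the subscription. Every endpoint, client ID, and scope below is hypothetical, not Spotify’s real API.

```python
# Generic sketch of OAuth 2.0 authorization-code linking, the open
# technology Spotify says it will support instead of arbitrary RSS feeds.
# Endpoint URLs, client IDs, and scopes are hypothetical.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://auth.example-publisher.com/authorize"  # hypothetical

def build_authorization_url(client_id, redirect_uri, scope, state):
    """Step 1: the player sends the listener to the publisher's auth page."""
    params = {
        "response_type": "code",     # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,              # CSRF protection
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

def build_token_request(client_id, client_secret, code, redirect_uri):
    """Step 2: after the listener approves, the player exchanges the
    returned code for an access token it can use to fetch private feeds."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }

url = build_authorization_url(
    client_id="example-player",                      # hypothetical
    redirect_uri="https://player.example.com/callback",
    scope="read:subscriber-feed",
    state="xyz123",
)
```

The important design point is exactly what the excerpt describes: the publisher remains the merchant of record, from email address to credit card, and the player only holds a revocable token.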

In this case Spotify isn’t market-making: rather, it is recognizing that creators are going to want to make their own markets, pulling their fans from medium to medium and service to service, and they want to make sure they are plugged in to that:

The creator carrying audiences to different mediums

What is neat about markets is that they create the conditions for win-win outcomes; Spotify aligning with creators doesn’t hurt Spotify’s core business, it enhances it by making sure Spotify’s podcast service is as complete as it can be. Critically, it does this not by fighting over users, but rather by linking them.

Spotify existing in harmony with creators

This is the part that Web 2.0 got wrong; much like the Facebook model of social networking emphasized being your whole self, Web 2.0 assumed that your one identity would connect together the different pieces of your web existence. However, just as the future of social networking is about different identities for different contexts, interoperability via markets is about linking together distinct user bases in a way that is appropriate for different services, all under the control of the user who is paying for the privilege.

Shopify and Platforms

The other place where market-making might seem familiar is that most classic of tech concepts: platforms. I wrote in 2019’s Shopify and the Power of Platforms, in the context of the e-commerce service’s planned foray into logistics:

Notice, though, that Shopify is not doing everything on their own: there is an entire world of third-party logistics companies (known as “3PLs”) that offer warehousing and shipping services. What Shopify is doing is what platforms do best: act as an interface between two modularized pieces of a value chain.

A drawing of Shopify as an Interface

On one side are all of Shopify’s hundreds of thousands of merchants: interfacing with all of them on an individual basis is not scalable for those 3PL companies; now, though, they only need to interface with Shopify.

The same benefit applies in the opposite direction: merchants don’t have the means to negotiate with multiple 3PLs such that their inventory is optimally placed to offer fast and inexpensive delivery to customers; worse, the small-scale sellers I discussed above often can’t even get an audience with these logistics companies. Now, though, Shopify customers need only interface with Shopify.

What makes the Shopify platform so fascinating is that over time more and more of the e-commerce it enables happens somewhere other than a Shopify website. Shopify, for example, can help you sell on Amazon, and in what will be an increasingly important channel, Facebook Shops. In the latter case Facebook and Shopify are partnering to create a fully-integrated market: Facebook’s user base and advertising tools on one side, and Shopify’s e-commerce management and seller base on the other. The broader takeaway, though, is that Shopify’s real value proposition is working across markets, not creating an exclusive one.
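The “interface” role described in that excerpt can be expressed abstractly: each 3PL implements one fulfillment interface exactly once, merchants call only the platform, and N merchants times M providers collapses into N plus M integrations. A toy sketch, with every class and method name invented for illustration:

```python
# Toy sketch of the platform-as-interface idea: merchants call one
# fulfillment API, each 3PL implements it once, and the platform routes
# between the two sides. All names are invented.
from abc import ABC, abstractmethod

class FulfillmentProvider(ABC):
    """What every 3PL implements exactly once, for the whole platform."""
    @abstractmethod
    def ship(self, order_id: str, address: str) -> str:
        """Accept a shipment and return a tracking reference."""

class AcmeLogistics(FulfillmentProvider):        # hypothetical 3PL
    def ship(self, order_id, address):
        return f"ACME-{order_id}"

class Platform:
    """The Shopify-like layer: neither side integrates with the other
    directly, so N merchants x M providers becomes N + M integrations."""
    def __init__(self):
        self.providers = {}

    def register_provider(self, name, provider):
        self.providers[name] = provider

    def fulfill(self, merchant, order_id, address, provider_name):
        # Merchants never talk to the 3PL directly; the platform routes.
        tracking = self.providers[provider_name].ship(order_id, address)
        return {"merchant": merchant, "tracking": tracking}

platform = Platform()
platform.register_provider("acme", AcmeLogistics())
result = platform.fulfill("tiny-store", "order-42", "123 Main St", "acme")
```

Adding a new 3PL means one new `FulfillmentProvider` subclass; no merchant has to change anything, which is the scalability argument in the quoted passage.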

Market-Making and Aggregators

Market-making is certainly a characteristic of Aggregators; Google, for example, is a one-stop shop for users, advertisers, and content suppliers. What makes Aggregators unique, though, is their infinite scalability, driven by the effectively zero marginal and transactional costs necessary to serve one more user, advertiser, or supplier. This characteristic, by necessity, reduces everything to a commodity. In contrast, the commonality between what Twitter appears to be building, the phenomenon that Spotify is seeking to plug into, and Shopify and e-commerce is the inherent friction of transferring money (usually via Stripe) for something that is not flattened but differentiated.

Some of these plays are certainly more Aggregator-like — Twitter/Scroll appears likely to (non-exclusively) abstract away publishers from subscribers — but I think the distinction from advertiser-driven Super Aggregators is a notable development, and an exciting one. The web started with no economy, then built a commoditized advertising-only one, and now is increasingly a market for all sorts of goods and services — the more differentiated the better. That doesn’t just mean interoperability, in the purest, most fungible form of money, but also opportunity.