The Closing of the Frontier

The Anthropic Mythos announcement is the first time in my life I’ve felt truly poor. Maybe because I grew up on the internet, and it was the one permissionless place where you could have leverage and a shot at uncapped exploration and ambition. That is now changing, with a growing gap between the models that are publicly available and those reserved for the already wealthy and pre-established.

In 1893, Frederick Jackson Turner argued that much that is distinctive about America was shaped by the existence of free land to the West where anyone could start over, and that this condition infused America with its characteristic liberty, egalitarianism, rejection of feudalistic hierarchy, self-sufficiency, and ambition.

Since the days when the fleet of Columbus sailed into the waters of the New World, America has been another name for opportunity... But never again will such gifts of free land offer themselves... each frontier did indeed furnish a new field of opportunity, a gate of escape from the bondage of the past... And now, four centuries from the discovery of America, at the end of a hundred years of life under the Constitution, the frontier has gone, and with its going has closed the first period of American history. – Frederick Jackson Turner, The Significance of the Frontier in American History, 1893

We are witnessing the closing of yet another frontier in history. Even with the American dream nearly dead, the one somewhat accessible escape hatch that offered economic mobility and cherished individual agency was the wired. Perhaps you would never own a house, but when it came to technology, a poor person and the wealthiest person in the world had access to the same internet, the same phone, the same encryption protocols (my TLS connection wasn’t downgraded to some quantized 8-bit cipher while yours got AES-256-GCM).

A 16-year-old with no credentials and no capital could just do things. The world of bits offered the freedom to build without being drowned in arbitrary constraints, without first assembling vast capital or prestige or connections; your creativity and work could speak for themselves, and you had agency. This is a precious thing, and we should seek to preserve it for as long as possible, because there is still much possibility left. We’ve only just begun scratching the surface of what can be built and how best to harness the intelligence of powerful models.

I feel this most acutely in the cordoning off of frontier models from public access, though the logic also applies to the general replacement of labor and intelligence with capital. Rudolf Laine articulates this well in his essay, Capital, AGI and Ambition.

Those with significant capital when labour-replacing AI started have a permanent advantage. Upstarts will not defeat them, since capital now trivially converts into superhuman labour in any field. – Rudolf Laine, 2024

George Hotz more bluntly calls it neofeudalism.

This isn’t like nuclear weapons, this is intelligence itself. A nuclear weapon can only destroy; intelligence is the greatest creative force in the world. If a small group of people have a monopoly on it, you are the permanent underclass in the same way animals are. – George Hotz, 2026

The Manhattan Project comparison that the labs reach for again and again has long been a pet peeve of mine. Nuclear non-proliferation worked, to the extent it did, because nukes are instruments of mass destruction and laws are written in blood. Intelligence is economically valuable in a wholly different way. Every country will pursue it as far as it can, and given the multipolar world we are back in, and our recent record with treaties and commitments, I do not believe there will be global alignment on risk reduction. Not before there is blood, at the very least.


Anthropic has said that it does not plan to make Mythos generally available. It’s one thing not to release the model at all and keep it under full containment; it’s even defensible to impose an embargo period after which the model is released for public use with some vetting.

Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. – Anthropic

But it is another thing entirely to share access only with enterprise partners such as CrowdStrike, Cisco, and Microsoft, companies that regularly suffer massive security incidents of their own. How dangerous would it be, from a safety standpoint, if the private capability gap grew exponentially (already happening with recursive self-improvement) before the world had any time to price it in, and then there were a security breach at one of these labs or their partners? Or if a foreign lab dropped a near-equivalent model with minimal access restrictions? To be fair, the limited availability of compute has a sizable hand in the restriction calculus here as well.

Those are not the only organizations with security concerns. I am not arguing that the model should be made publicly available to anyone via API. But structurally speaking, a private company has built the most capable AI model in the world and decided unilaterally who gets access and who is worth protecting. They and their established partners are now sitting on a zero-day generator, accumulating private knowledge of exploits in everyone else’s infrastructure: capabilities that once belonged to nation states are now being privatized to a handful of well-connected organizations. These are state-scale capabilities without state-scale accountability. If you believe in democracy, we built three branches of government for a reason. Anthropic is simultaneously the manufacturer, the regulator, and the appeals court, with no on-ramp even for someone willing to pay and undergo strong KYC.


API access may not be full ownership, but at least it is a programmable surface that doesn’t foreclose possibility. Locking it down in the name of safety and against “unapproved” use certainly helps prevent abuse, but it also stifles innovation. Public access also forces latent capabilities into the open, which, given how eval-aware models are (the Mythos alignment report calls eval awareness “a key challenge”) and the limits of artificial red-teaming, is better from a safety standpoint. Fail fast and fix, as opposed to accumulating a capabilities overhang that has never been tested in the real world. The world is already struggling to adjust to and make sense of AI capabilities when half the American population thinks AI is worthless because they are forced to use Copilot at work.

The reaction to AIs finding security vulnerabilities also feels overstated. Security has always been an arms race. A decade ago, fuzzers like American Fuzzy Lop looked like a gift to attackers, but many security-first projects instead built fuzzing into their CI pipelines and now catch most such bugs before release. I wrote about this symmetry in my post on the death of security through obscurity. Here again, frontier model access would let more people build defensive systems and help the world upskill its security. For too long, organizations have been cavalier about security and risked their customers’ data with poor practices. The transition will be rough, but this is a period of great upheaval in many dimensions; why would we expect security to get through unscathed?
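To make the CI-fuzzing point concrete, here is a toy sketch of the pattern. The target `parse_header` is a made-up stand-in (real projects fuzz with AFL or libFuzzer against their actual parsers); the point is that documented failures are tolerated while any undocumented crash class fails the build before release.

```python
import random
import string


def parse_header(line: str) -> tuple[str, str]:
    """Hypothetical fuzz target: parse a 'Key: Value' header line."""
    key, sep, value = line.partition(":")
    if not sep or not key.strip():
        raise ValueError("malformed header")
    return key.strip(), value.strip()


def fuzz(target, iterations: int = 10_000, seed: int = 0) -> int:
    """Throw random printable strings at `target`; count undocumented crashes.

    ValueError is the parser's documented failure mode, so it is ignored.
    Any other exception is the kind of surprise a CI fuzz job exists to
    catch before an attacker does.
    """
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        line = "".join(
            rng.choice(string.printable) for _ in range(rng.randrange(0, 40))
        )
        try:
            target(line)
        except ValueError:
            pass  # expected, documented rejection of bad input
        except Exception:
            crashes += 1  # undocumented crash: would fail the build
    return crashes


if __name__ == "__main__":
    assert fuzz(parse_header) == 0
    print("fuzz pass: no undocumented crashes")
```

In a real pipeline this loop is replaced by a coverage-guided engine and a growing corpus, but the economics are the same: the defender runs the attacker’s tool continuously, so the tool stops being an asymmetric advantage.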

And the people who would actually do rigorous safety research on these models can’t get access to them. A couple weekends ago I was at the MATS research symposium. MATS is one of the most serious AI safety programs out there, and about two-thirds of the posters involved a Chinese open source model. Many experiments require white-box access, and these researchers can’t get it anywhere else. Meanwhile, the mainstream AI safety position is that open source models are dangerous. Most projects were also restricted to tiny models due to compute limitations, leaving open whether their results would survive at frontier scale. Thank god for open source models, because if meaningful safety research depends on the benevolence of the labs, or on being hired by one, that would be disappointing.


You can generate your own electricity with a solar panel (think local models), but most people would rather pay a utility bill. And the power company doesn’t decide, on the basis of pedigree, who is worthy of electricity. Intelligence should work similarly: the capabilities you can access may scale with vetting and due process, but the presumption should be access. Add safety guardrails to restrict dangerous use; make them overly trigger-happy at first if you must, and calibrate over time. But the default should be to allow entry.

If you have government-level capabilities, it’s time to start acting like a government. There should be due process, publicly disclosed criteria for who gets access and why, and a clear appeals mechanism that isn’t “email the trust and safety team and pray.” And when you cut someone off, you should be required to say why, because having your frontier model access revoked is akin to being unbanked. From an audit perspective, there should be FOIA-style obligations to show your work in safety-critical areas.


There is something special about training a model on all of humanity’s data and then locking it up for the benefit of a few well-connected organizations you have relationships with. You may notice another historical pattern here: extract value from a population that can’t meaningfully consent, concentrate the returns within a small inner circle, then offer some version of charity to the people you extracted from as moral cover for the arrangement. The pattern repeats with labs promising post-AGI UBI or encouraging EA philanthropy while continuing to concentrate frontier capability. I’m not saying the intent is malicious; I think many are trying to do the best they can. I’m simply noticing.

If we are lucky, none of this will matter. This might just be the mainframe era of AI, a waypoint on the way to personal computing. When the Apple II came out it was woefully underpowered compared to mainframes, and most adoption was driven by hobbyists and aesthetics. Compared to that gap, open source models already pack quite a punch, running 3-12 months behind the frontier depending on the dimension. So perhaps hardware supply chains will scale, a glut of chips and energy will become available, and intelligence will be too cheap to meter.

The city is cutting down twenty-year-old ficus trees in my neighborhood because they could fall on someone during a hurricane and the city doesn’t want to get sued. San Francisco gets maybe one thunderstorm a year. I hope we don’t snuff out the wired in a similar way.