Good morning!
This is The Daily Grind for Wednesday, August 6.
Yesterday was an Era-Defining News Day that captured the state of technology and our world. I try to tie it all together in our One Headline.
Then we’ll follow up with a One Page that gives us perspective on just how big these technological changes are.
Finally, I get a bit sentimental with our One Question, and I hope you do too.
Let’s get into it!
Yesterday, a flurry of news stories dropped that perfectly encapsulate the new era of technology we are in: AI-Driven Deep Tech.
Let’s start with the AI side of things.
Three of the top Frontier Model makers released new versions yesterday:
Google DeepMind released Genie 3, a world model that simulates interactive 3D environments users can move through like a video game. World models are obviously significant for game design and virtual reality, but they can also be used in robotics and agentic AI training. DeepMind says they are also a key part of developing Artificial General Intelligence (AGI):
World models are also a key stepping stone on the path to AGI, since they make it possible to train AI agents in an unlimited curriculum of rich simulation environments.
OpenAI released gpt-oss-20b and gpt-oss-120b, two open-weight models designed to run locally on consumer devices. Open-weight means the models’ trained parameters are published, so anyone can download, inspect, and fine-tune them. These are OpenAI’s first open-weight models since GPT-2.
The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU. The gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks and can run on edge devices with just 16 GB of memory, making it ideal for on-device use cases, local inference, or rapid iteration without costly infrastructure.
Anthropic released Claude Opus 4.1, an incremental update to Opus 4 with improved agentic capabilities, coding, and reasoning. Anthropic is also promising more significant updates “in the coming weeks.” Coding is the real story of this upgrade:
GitHub notes that Claude Opus 4.1 improves across most capabilities relative to Opus 4, with particularly notable performance gains in multi-file code refactoring. Rakuten Group finds that Opus 4.1 excels at pinpointing exact corrections within large codebases without making unnecessary adjustments or introducing bugs, with their team preferring this precision for everyday debugging tasks. Windsurf reports Opus 4.1 delivers a one standard deviation improvement over Opus 4 on their junior developer benchmark, showing roughly the same performance leap as the jump from Sonnet 3.7 to Sonnet 4.
Overseas competitors continue to show up too:
Alibaba’s Qwen released Qwen-Image, a new image generation model that rivals the best from US labs, like DALL·E (OpenAI) and Grok (xAI). According to Reddit, Qwen-Image is “[e]specially strong at creating stunning graphic posters with native text.”
🔍 Key Highlights:
🔹 SOTA text rendering — rivals GPT-4o in English, best-in-class for Chinese
🔹 In-pixel text generation — no overlays, fully integrated
🔹 Bilingual support, diverse fonts, complex layouts
While some of these releases feel incremental, it’s also possible that we’ve simply grown accustomed to the incredible pace of innovation in LLMs. Any one of these releases would have been revolutionary just a few years ago. Today, they barely make headlines.
These AI model releases are just one half of the news day that defines our era. The other half is deep tech investment and innovation.
Yesterday, energy startup General Matter announced it would open the first new uranium enrichment plant in the United States in decades, on the very site where the last one was shut down in 2013: Paducah, Kentucky.
General Matter, a privately funded company based in California, announced on Tuesday that it plans to invest $1.5 billion to build the first commercial uranium enrichment facility in the United States on the Department of Energy site. The project would effectively restart enrichment on property that fueled America’s nuclear energy and defense for over sixty years.
General Matter was founded in 2024 by Scott Nolan (formerly of Founders Fund), Lee Robinson (formerly of the Department of Defense), and Ashby Bridges (formerly of Los Alamos National Laboratory) with a singular goal: rebuilding US enrichment capacity.
Mike Solana of Pirate Wires shared the stakes of such a venture in his feature on the startup:
America no longer had the capacity to enrich uranium — I only learned myself this year — which meant it could no longer fuel itself without the help of foreign governments. Mostly, that placed us at the mercy of Europe, which refused to fuel our military bases. But we were also buying enriched uranium from Russia. In fact, we were buying it that very afternoon in November 2023, as war raged in Ukraine. Our government hadn’t included enriched uranium in its initial sanctions against Russia on account of it really couldn’t. Fuel-dependence was not only a risk to our grid, but a risk to our national security.
Energy is upstream of all innovation, not to mention our day-to-day quality of life. It’s the foundation of the Abundance movement, a bipartisan push to rebuild America’s capacity to construct infrastructure and provide services without bloat.
Most crucially (to this story, anyway), energy is the main bottleneck for AI development and proliferation. Generative AI is driving up energy costs, and AI tools like Cursor and Claude are having to rate limit their power users.
Several tech giants, including Google, Microsoft, and OpenAI, have been in talks to partner directly with nuclear energy sites to power their LLMs. Microsoft has struck a deal to restart Three Mile Island, Google is funding three new nuclear sites in the US, and OpenAI recently floated the idea of 5-gigawatt, nuclear-powered data centers.
Adding nuclear capacity is great, but as Scott Nolan said in the Pirate Wires feature, “How are you going to fuel them?” General Matter aims to produce that fuel, as well as re-ignite the fuel enrichment industry in the US.
This is also an incredible boon for Midwest and Southern cities like Paducah, which earned the moniker “Atomic City” for its role in US energy development over more than 60 years. Other startups hope to bring back lost industries, such as steelmaking, which could mean more jobs in places left behind by globalization.
Yesterday’s headlines—the release of new, world-changing AI models (and the collective yawn of the public at such progress), and the re-development of American power—perfectly encapsulate this unique era of technology.
It also raises questions.
Could AI’s need for power actually spur a green energy revolution?
Could we finally achieve the promise of energy “too cheap to meter?”
If so, what would that mean for our lives?
Will these innovations lead us to a green Utopia?
Or will we choose to live in an expansive virtual world and leave reality behind?
Here are more stories to explore today:
There are several striking lines in Mike Solana’s feature on General Matter, which is well worth a full read.
This one line is especially shocking, and reminded me of Stripe Press’s newest book, Boom: Bubbles and the End of Stagnation by Byrne Hobart and Tobias Huber.
Here is the passage from Solana:
Lee had no idea how his colleagues in the DOD were thinking about enrichment. So he flew home and asked everyone he knew. Some didn’t understand the problem, or disagreed with the premise of the question. Most assumed that everything would work itself out. Money was occasionally funneled into this or that science project. Results were rarely interrogated. Lee pressed a colleague working in the Department of Energy.
“When it’s a really big problem,” the man said, “we’ll just do another Manhattan Project.”
Lee was speechless. He suspected things weren’t great, but this was an incomprehensible divorce from reality. Many years earlier, Scott learned the term at play here from his mentor Peter Thiel — “indeterminate optimism.” We’ll be fine, we just have no idea how, and no interest in actually doing anything.
Now here is the page from Boom, in its chapter dedicated to the Manhattan Project. It emphasizes just how much effort, resources, and consensus we would need to “just do another Manhattan Project.”
Imagine a group of physicists exploring a new body of theoretical knowledge, which they suspect might enable the development of weapons possessing unprecedented power. The scientists don’t know if it will work. They don’t even know if their goal is physically achievable. Just attempting it will require a $250 billion investment.
That’s roughly the decision the US government faced in 1939 when confronted with the $2 billion price tag for the Manhattan Project.¹⁵⁴ True, the government today routinely spends far more on Medicare, but that’s a relatively sure thing—we know what we’re getting. The same was not true of the atomic bomb. Building it required an immense investment in untested, specialized physical infrastructure, and the request for that investment came in the middle of an economic depression and a world war.
The Manhattan Project ends up looking like a bubble: Physicists convinced one another that it was possible, then convinced politicians it was worth attempting. Substantial resources had to be marshaled in order to build the physical infrastructure necessary to construct a bomb, and this had to happen in parallel with the bomb’s design. All of this required new methods and scarce inputs—someone puzzling over atomic reactions couldn’t devote that time and brainpower to more pragmatic and measurably rewarding tasks like designing conventional weapons, and every pound of steel used for an isotope separation facility was a pound that wasn’t going into a battleship or tank.
Designing the bomb meant sequestering a few hundred of the world’s smartest physicists, mathematicians, and engineers in the remote New Mexico town of Los Alamos. It also required systems whose exclusive purpose was enriching uranium, a major sacrifice of industrial capacity during wartime. The scientists needed a new computer—a hybrid of machine and human operators (who were, in fact, called “computers”)—to approximate diffusion rates under conditions different from those in the theoretical models. This simulation, which was used to calculate the physical and chemical chain reactions following a thermonuclear detonation, was the most ambitious mathematical project ever undertaken. It was, according to Stanislaw Ulam, a key actor in the Manhattan Project,
¹⁵⁴ GDP in 1939 through 1945 averaged $163 billion. In today’s dollars, as a percentage of GDP, the Manhattan Project would cost about $250 billion.
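The footnote’s inflation-by-GDP-share arithmetic works out as follows (the book’s figures for the project cost and wartime GDP; the modern GDP value of roughly $20 trillion is my assumption, chosen to illustrate how the ~$250 billion figure is reached):

```latex
\frac{\$2\text{B (project cost)}}{\$163\text{B (avg.\ 1939--45 GDP)}} \approx 1.23\%\ \text{of GDP}
\qquad\Longrightarrow\qquad
1.23\% \times \$20.4\text{T (modern GDP)} \approx \$250\text{B}
```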
The difference between the Manhattan Project and a government-led uranium enrichment program is that the latter is (I believe) a relatively well-understood process. I’m sure General Matter is working to improve it, but I think it would fall into the “sure thing” category if the government wanted to pursue it.
Still, the fact that the government was not pursuing it at all is a concern, especially given the current administration’s tendency to alienate rivals and allies alike.
I am expecting my firstborn in 5 weeks. Naturally, I’m experiencing a daily roller coaster of emotions: excitement, nerves, dread, and joy.
I have also thought more than ever about how I want to shape the world, if I could. And whether or not you have (or plan to have) kids, this is an important question. Arguably, it’s a required question:
What type of world do you want to leave for your children?
Here was the answer I shared with a friend recently:
I want to see deep, intentional, human-oriented knowledge/storytelling thrive in this world of short attention spans. I've been thinking about this a lot with my firstborn on the way—what type of world do I want him to live in? I hope it's full of books, rabbit holes, the pursuit of knowledge, and a love of art.
What’s your answer?
That’s it for this Wednesday, August 6 edition of The Daily Grind!
Thank you for the continued readership and feedback.
A special thank you to for re-stacking our One Question from yesterday’s newsletter!