
And So It Goes
The Pentagon banned a chatbot. The Treasury Department would like to use it. The Commerce Department is already using it. And so it goes.
An AI model named Claude Mythos was blacklisted this year by the United States government. The Pentagon declared Anthropic a "supply chain risk" and removed the company from every Department of Defense contract. Then, as Politico reported on April 14, the Treasury Department — through its CIO, Sam Corcos — asked for access to the model to hunt for vulnerabilities. The Commerce Department's Center for AI Standards and Innovation was already testing it. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell urged the biggest Wall Street banks to run their own evaluations on the model their own government had just banned. The United Kingdom's AI Security Institute got early access and reported that Mythos solved seventy-three percent of expert cybersecurity tasks and completed, for the first time, a simulated thirty-two-step attack on a corporate network — a scenario estimated at twenty hours of expert human work. The machine, it seems, is dangerous. The machine, it also seems, is useful. Both camps are splitting the same check.
Apple is sending nearly two hundred Siri engineers to an AI coding bootcamp. The bootcamp runs for several weeks. It ends two months before the major Siri revamp, scheduled for WWDC on June 8. The Information broke the story on April 15. Apple has not named an executive to take credit. The idea, per the leaks, is to teach the engineers to use the AI coding tools they themselves were supposed to build two years ago. They are the same engineers who will do the revamp. After the revamp, Siri will be better. The improvement will come from code those engineers write with help from Claude, because Apple does not yet have a model of its own that is good enough. Nobody in Cupertino is saying this out loud.
OpenAI, a company whose valuation closed at eight hundred fifty-two billion dollars in April, is now facing questions from its own investors. The Financial Times and Reuters report that some backers argue the math only works if an eventual IPO prices the company at one point two trillion or more. Meanwhile, Anthropic is declining venture offers at eight hundred billion, more than double the three hundred fifty billion pre-money valuation it accepted in February. The firm booked thirty billion dollars in annualized revenue at the end of March, up from nine billion at the end of 2025. Goldman Sachs, JPMorgan, and Morgan Stanley are circling a possible IPO in October. The numbers are no longer descriptive. The numbers are adjectives.
A man named Bradley Heppner — charged with securities fraud — asked Claude to help him draft thirty-one documents. He gave them to his lawyers. Federal Judge Jed S. Rakoff, of the Southern District of New York, ruled last month that the documents are not protected by attorney-client privilege. "Claude is not an attorney," he wrote. "That alone disposes of the claim." He added that Anthropic's privacy policy allows inputs to be used for training and shared with third parties, so there was no reasonable expectation of confidentiality. The irony, which Rakoff noted in a footnote, is that Claude itself refused to give legal advice when Heppner asked for it. The machine followed the rules. The human did not. Bar associations across the country are now circulating warnings: if your client confides his crimes to a chatbot, those secrets are no longer his.
OpenAI announced on April 15 that ChatGPT's gender gap — eighty percent male first names at launch in late 2022 — has closed. The current split: fifty-two percent feminine, forty-eight percent masculine. The firm inferred gender by matching anonymized first names against the World Gender Name Dictionary. In January 2024, women were thirty-seven percent of users. By July 2025, they were the majority. This is the usual pattern. Any technology that goes truly general-purpose stops looking like technology. First it is a toy for engineers. Then it is an office utility. Then it is a spoon. A spoon does not have a gender.