TLDR
- OpenAI published a 13-page policy blueprint for a world with superintelligent AI
- CEO Sam Altman proposes a public wealth fund giving every American a stake in AI growth
- The plan floats taxes on companies that replace workers with automation
- OpenAI suggests piloting a four-day workweek at full pay
- Altman warns cyberattacks and bioweapons are the most immediate AI threats
OpenAI has published a 13-page policy document outlining how governments should respond to the rise of superintelligent AI. The report, titled “Industrial Policy for the Intelligence Age,” was released as Congress prepares to debate AI legislation.
CEO Sam Altman described the document as a starting point for debate, not a fixed prescription. He said the scale of change coming from AI is comparable to that of the Progressive Era and the New Deal.
The document covers taxes, worker benefits, safety nets, and what to do if AI systems become impossible to control.
One of the most discussed proposals is a national public wealth fund. OpenAI suggests seeding it partly with contributions from AI companies. The fund would invest in AI firms and other businesses adopting the technology, then distribute returns directly to American citizens.
The idea is similar to Alaska’s Permanent Fund, which pays annual dividends to state residents from oil revenues.
Robot Taxes and Worker Protections
OpenAI also floats the idea of taxing businesses that replace human workers with automated systems. The reasoning is straightforward: if AI reduces payroll, it also reduces the tax revenue that funds programs like Social Security, Medicaid, and food assistance.
To make up the shortfall, the document suggests shifting more of the tax burden toward corporate income and capital gains.
On worker benefits, OpenAI proposes stronger unemployment insurance, expanded Medicaid, and portable benefits that follow workers from job to job rather than being tied to one employer.
The company also suggests running pilots of a 32-hour workweek at full pay, framing it as an “efficiency dividend” from AI-driven productivity gains.
Threats Altman Says Are Already Close
Altman told Axios that the two most immediate dangers from advanced AI are cyberattacks and bioweapons.
He said it is “totally possible” that major cyber threats emerge within the next year. He also acknowledged that AI models could be used by bad actors to create novel pathogens, calling it something that is “no longer theoretical.”
The blueprint includes a section on “containment playbooks” for scenarios where dangerous AI systems become autonomous and capable of replicating themselves.
OpenAI’s proposed response involves government coordination rather than industry action alone.
The document also envisions automatic safety net triggers. If AI-driven job displacement hits preset thresholds, benefits like unemployment payments and wage insurance would increase automatically, then phase out when conditions improve.
OpenAI says it is opening a new Washington office and funding research grants to support these policy conversations.
Chris Lehane, OpenAI’s chief global affairs officer, said both Democrats and Republicans are hearing from constituents worried about job losses from AI.
The company has aligned with the Trump administration's position that regulation should be kept limited so the United States stays ahead of China in AI development.