Last week, OpenAI released a 13-page policy paper called "Industrial Policy for the Intelligence Age: Ideas to Keep People First."
It's not a product announcement.
It's a proposal to redesign the economic and social systems around work, wealth, safety, and benefits in response to what they believe is coming: AI systems that outperform the smartest humans across every domain.
They're comparing this moment to the Industrial Revolution, arguing that just as the combustion engine and mass production eventually required the New Deal, the Intelligence Age requires a new social contract.
Whether you agree with that framing or not, the proposals are worth understanding.
Because if even a fraction of this plays out, it touches your career, your compensation, and how your practice operates.
Let me walk you through the key proposals and what they actually mean when you think them through.
A Public Wealth Fund
A government-managed fund that distributes returns from AI-driven economic growth directly to citizens. Think sovereign wealth fund, seeded by AI-era profits.
The goal is to prevent AI's gains from concentrating in a handful of tech companies.
The challenge? Building something like this that actually reaches a veterinary technician in North Carolina or a practice owner in rural Kansas.
The Four-Day Work Week
AI handles routine work. Humans work less. Same pay.
Sounds great on paper.
But here's where it gets complicated.
In production-based compensation models like ProSal in veterinary medicine, working fewer days means seeing fewer patients, which means earning less. The efficiency gain goes to the practice, not necessarily the associate.
And then there's emergency medicine.
ERs are open 24/7/365. They already struggle to staff those hours and pay premiums to get people to work them. If everyone is truly working a 32-hour week, that doesn't reduce the need for coverage. It increases the number of bodies you need to fill the same schedule.
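To make that concrete, here's the back-of-envelope staffing math (my numbers, not the paper's), sketched in Python:

```python
# Back-of-envelope math for round-the-clock ER coverage.
# Ignores shift overlap, PTO, and surge demand; illustrative only.

HOURS_PER_WEEK = 24 * 7  # an ER is open all 168 of them

def min_staff(hours_per_person):
    """Minimum number of people needed just to cover every hour once."""
    return HOURS_PER_WEEK / hours_per_person

print(min_staff(40))  # 4.2 people on a 40-hour week
print(min_staff(32))  # 5.25 people on a 32-hour week
```

Same schedule, roughly 25% more people. That's the hiring problem a 32-hour week creates for any 24/7 service.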
And even when your shift technically ends, medicine doesn't care.
You're still intubating a patient, still finishing a surgery, still writing up medical notes, still calling back a client whose lab results just came in. That's why most veterinarians aren't paid hourly or even straight salary. The work doesn't fit neatly into a time block.
Maybe AI reduces administrative overhead enough that you need fewer support staff handling phones, scheduling, and records. But that doesn't change the schedule for the clinician or the technician in the treatment room. Their work is tied to living patients, not spreadsheets.
So what does the efficiency dividend actually look like for these roles?
Does the government pay a bonus for hours worked beyond 32? Are those hours tracked and compensated separately even when you're not on an hourly model?
The paper doesn't answer that.
Here's what I think is more likely to happen. And it's not necessarily bad.
AI frees up 30 to 45 minutes of charting and administrative work, and that time gets redirected into the parts of the job that actually require a human. More time with a client in the exam room explaining a diagnosis. Less rushing through a discharge. More attention on a patient instead of on a keyboard.
That's a meaningful improvement in the quality of your work and your day. And it's the kind of change that might actually reduce burnout in a way that a policy paper can't capture in a spreadsheet.
But let's be clear about what that is and what it isn't.
That's a better version of the same hours. It's not what OpenAI is proposing. They're proposing time back. Those are two very different outcomes, and I think it's important to name that honestly.
So if efficiency gains in clinical medicine can't easily convert into time off, the remaining path is compensation.
If AI reduces overhead and practice profitability improves, that should translate into raises, better benefits, or better working conditions. When the practice does well, gains trickle down to employees through higher pay.
And in the current structure, that's the most realistic version of the efficiency dividend for veterinary professionals.
But here's the tension I keep coming back to.
That trickle-down is discretionary. There is no mechanism today that requires a practice or a corporate group to pass AI-driven savings to staff. If a group saves $200K annually on administrative costs through automation, that money can just as easily go to shareholder returns or expansion as it can to associate raises.
The proposals in this paper, the efficiency dividends, the wealth funds, the wage-linked tax incentives, exist precisely because trickle-down historically doesn't trickle reliably without structural incentives.
So the honest position might be this: practice-level profitability is the most visible path to compensation gains right now, but without formal mechanisms, there's no guarantee those gains reach the people doing the clinical work.
And there's another layer most people aren't talking about.
Right now, adopting AI is not cheap. Subscriptions, implementation costs, training time, and workflow disruption during adoption all carry a real price. For a two-doctor practice deciding whether to invest in an AI documentation tool or a scheduling assistant, the question isn't whether AI is the future.
It's whether this $300-a-month subscription generates more than $300 a month in measurable value.
Some of that ROI is easy to measure. More booked appointments from after-hours AI support? You can count those. Fewer missed callbacks? Trackable. Increased new client acquisition from better marketing automation? You can see it in the numbers.
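For the measurable side, the math is simple enough to sketch. The figures below are hypothetical, not benchmarks from any real practice:

```python
# Hypothetical monthly ROI check for an AI tool subscription.
# All dollar figures are illustrative assumptions.

def monthly_roi(subscription_cost, gains):
    """Net monthly value: sum of measurable gains minus the subscription."""
    return sum(gains.values()) - subscription_cost

# Example: a $300/month tool for a small practice.
gains = {
    "after_hours_bookings": 4 * 180,  # 4 extra visits at ~$180 average transaction
    "recovered_callbacks": 2 * 180,   # 2 visits saved from missed callbacks
    "admin_time_saved": 5 * 25,       # 5 staff hours at $25/hour
}

net = monthly_roi(300, gains)
print(f"Net monthly value: ${net}")  # positive means the tool pays for itself
```

If the trackable gains alone clear the subscription cost, the decision is easy. It's when they don't that the intangibles start doing the heavy lifting in the justification.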
But "my team seems less burned out" or "turnover dropped 15% this year"?
Those take months to show up in the data, and most practices don't have the infrastructure to measure them.
If practices want to justify AI investment on intangible benefits, they need to start tracking them. Quarterly staff satisfaction surveys, retention rates, time-to-fill for open positions. Those become your leading indicators for whether AI is actually improving your working environment or just reshuffling the same pressure.
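Tracking those indicators doesn't require fancy infrastructure. A minimal sketch of the two retention metrics mentioned above, with made-up numbers for a 12-person practice:

```python
# Minimal leading-indicator tracking; inputs are hypothetical.

def annualized_turnover(departures_this_quarter, avg_headcount):
    """Quarterly departures scaled to an annual rate (%)."""
    return departures_this_quarter / avg_headcount * 4 * 100

def avg_time_to_fill(days_open_per_position):
    """Mean number of days each open position took to fill."""
    return sum(days_open_per_position) / len(days_open_per_position)

# Example quarter: 1 departure from a 12-person team,
# three positions filled in 45, 62, and 30 days.
turnover = annualized_turnover(departures_this_quarter=1, avg_headcount=12)
fill = avg_time_to_fill([45, 62, 30])

print(f"Annualized turnover: {turnover:.1f}%")
print(f"Average time-to-fill: {fill:.1f} days")
```

Run quarterly, a handful of numbers like these become the trend line you compare before and after an AI rollout.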
None of this is simple.
And I don't think anyone, including OpenAI, has a clean answer for how efficiency dividends work in fields where the work is physical, unpredictable, and tied to living beings that don't care what time your shift ends.
But asking the question honestly is where the real conversation starts.
Taxing AI Instead of Human Labor
This might be the most important proposal in the paper.
Social Security, Medicare, and SNAP are funded by payroll taxes on human labor. If AI displaces large portions of the workforce, that revenue disappears. The entire safety net erodes.
OpenAI proposes shifting the tax base to AI-generated productivity and corporate capital gains.
The structural problem is real. Whether the political will exists to solve it is another question.
Portable Benefits
Your healthcare, retirement, and training follow you. Not your employer.
This one matters for veterinary medicine.
Smaller independent practices have never been able to compete with corporate groups on benefits packages. If benefits are portable and publicly backed, that leverage disappears.
People choose where to work based on culture and compensation, not because they're locked in by health insurance.
That changes everything about where professionals decide to build their careers.
The Adaptive Safety Net
Instead of waiting for mass unemployment, real-time public metrics track AI-driven disruption at the local level. When indicators cross predefined thresholds, support activates automatically: cash assistance, wage insurance, retraining funds.
Think adverse event reporting, but for your career.
The concept is sound. The government's track record on responsive deployment is not.
The AI Trust Stack
Layered safety: containment playbooks for when AI behaves dangerously, independent auditing before deployment, and technical safeguards to verify AI outputs and protect privacy.
For any field where AI touches clinical decisions, this matters.
AI medical advice must be verifiable. Patient and client data must be protected. And there need to be clear protocols for when AI gives bad guidance.
Who Decides the Values?
OpenAI says the values guiding AI shouldn't be decided solely by engineers in Silicon Valley.
They propose structured ways for professionals and citizens to provide input.
For veterinary professionals, that means engaging through AVMA, state VMAs, and professional networks.
Because if you don't participate in shaping these policies, the decisions get made without you.
Where Veterinary Medicine Stands Right Now
Here's the reality.
Our profession has almost no formal framework for AI use in practice or research.
The closest thing we have is the AVMA's telemedicine and teletriage guidelines, which address some boundaries around remote care but were not designed with AI-driven tools in mind. There are no published standards for how AI should be used in clinical decision-making, documentation, diagnostic support, or client communication within veterinary medicine.
But there are real gaps that AI can genuinely help solve.
Client communication is one of them. Missed callbacks, unanswered questions after hours, appointment requests that slip through the cracks. These are operational problems that cost practices revenue and cost clients a good experience.
That's why I built AI Agent Vet, a tool that improves client engagement and helps practices capture appointments that would otherwise be lost.
It's one example of what responsible AI adoption can look like when it's built by someone who actually practices medicine, not just someone who builds software.
But one tool isn't a framework.
The profession needs formal guidance from its institutions, developed by veterinarians, for veterinarians. Until that exists, individual practitioners and practice owners are navigating this transition on their own, making decisions about AI adoption without a professional standard to reference.
That should change. And it needs to change before the technology outpaces the profession's ability to govern it.
The Bottom Line
The scope of these proposals is hard to fully comprehend.
Tax restructuring, portable benefits, adaptive safety nets, public wealth funds, universal AI access, layered safety systems. Every one of these is a major policy undertaking on its own. Together, they represent a complete redesign of the relationship between work, technology, and the social contract.
No one, including OpenAI, knows exactly how this transition unfolds. The proposals range from incremental to radical, and many face enormous political, economic, and practical hurdles.
But do not wait for the government to figure this out for you.
Historically, bureaucracies move slowly. There's too much red tape, too many steps between a policy proposal and an actual safety net landing in your hands. And there is going to be too much disruption happening too fast for the government to catch everyone who slips through the cracks.
What isn't going away is the technology itself. And the need for people who understand it, who know how to use it, and who can apply it within their industry and their area of expertise.
If you feel like your field is at risk of being disrupted, get ahead of it. Learn AI as best you can now so you can catch the wave instead of being pulled under by it.
The people building this technology have been warning everyone for a while now. That's what this paper is.
A warning, dressed up as a policy proposal.
Here's your action this week: Read the paper yourself. It's 13 pages. Link to OpenAI paper. Then ask yourself one question — if these proposals were implemented tomorrow, how would it change how your practice operates? You don't need the answer yet. You just need to start thinking about it.
OpenAI is accepting feedback at [email protected]. Research grants up to $1 million are available for work building on these ideas. A new OpenAI Workshop opens in May in Washington, DC.
Written By
Dr. Katie Jackson · DVM
Veterinarian · AI Educator · Builder

