There’s a lot of money in AI. That’s not just something that startup founders rushing to cash in on the latest fad believe; some very reputable economists are predicting a massive boom in productivity as AI use takes off, buoyed by empirical research showing tools like ChatGPT boost worker output.
But while previous tech founders such as Larry Page or Mark Zuckerberg schemed furiously to secure as much control over the companies they created as possible — and with it, the financial upside — AI founders are taking a different tack, and experimenting with novel corporate governance structures meant to force themselves to take nonmonetary considerations into account.
Demis Hassabis, the founder of DeepMind, sold his company to Google in 2014 only after the latter agreed to an independent ethics board that would govern how Google uses DeepMind’s research. (How much bite the board has had in practice is debatable.)
ChatGPT maker OpenAI is structured as a nonprofit that owns a for-profit arm with “capped” profits: First-round investors would stop earning after their shares multiply in value a hundredfold, with profits beyond that going into OpenAI’s nonprofit. A 100x return may seem ridiculous, but consider that venture capitalist Peter Thiel invested $500,000 in Facebook and earned over $1 billion when the company went public, an over 2,000x return. If OpenAI is even a 10th that successful, the excess profits returning to the nonprofit would be huge.
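The arithmetic behind the cap is simple enough to sketch. This is purely illustrative: the 100x cap and the Thiel/Facebook figures come from the paragraph above, but the scenario and function names are hypothetical, not OpenAI’s actual investment terms.

```python
# Illustrative arithmetic for a capped-profit structure. The 100x cap and the
# Thiel/Facebook numbers come from the article; the scenario is hypothetical.

def investor_take(return_multiple: float, cap: float = 100.0) -> float:
    """Portion of the return multiple the first-round investor keeps."""
    return min(return_multiple, cap)

def nonprofit_take(return_multiple: float, cap: float = 100.0) -> float:
    """Excess beyond the cap, which flows to the nonprofit."""
    return max(return_multiple - cap, 0.0)

# Thiel's Facebook investment: $500,000 became over $1 billion, a 2,000x return.
thiel_multiple = 1_000_000_000 / 500_000  # 2,000x

# If OpenAI were even a tenth that successful, a $500,000 stake would hit the cap:
multiple = thiel_multiple / 10  # 200x
investment = 500_000
print(investor_take(multiple) * investment)   # investor keeps 100x: $50,000,000
print(nonprofit_take(multiple) * investment)  # the other 100x: $50,000,000 to the nonprofit
```

At a 200x return, fully half the gains would flow past the cap into the nonprofit.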
Meanwhile, Anthropic, which makes the chatbot Claude, is ceding control over a majority of its board seats to a trust composed not of shareholders but of independent trustees, meant to enforce a focus on safety ahead of profits.
Those three companies, plus Microsoft, got together on Wednesday to start a new organization meant to self-regulate the AI industry.
I don’t know which of these models, if any, will work — meaning produce advanced AI that is safe and reliable. But I have hope that the hunger for new governance models from AI founders could maybe, possibly, if we’re very lucky, result in many of the potentially enormous and needed economic gains from the technology being broadly distributed.
Where does the AI windfall go?
There are three broad ways the profits reaped by AI companies could make their way to a more general public. The first, and most important over the long term, is taxes: There are a whole lot of ways to tax capital income, like AI company profits, and then redistribute the proceeds through social programs. The second, considerably less important, is charity. Anthropic in particular is big on encouraging this, offering a 3-1 match on donations of shares in the company, up to 50 percent of an employee’s shares. That means that if an employee who earns 10,000 shares a year donates half of them, the company will donate another 15,000 shares on top of that.
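The match works out as follows. A minimal sketch, assuming the 50 percent figure is a cap on how many donated shares are eligible for matching; the function name is mine, not Anthropic’s.

```python
# Sketch of a 3-1 share-donation match, as described in the article:
# the company adds three shares for every share donated, with donations
# eligible for matching up to 50 percent of the employee's shares.
# (The cap interpretation is an assumption; details are Anthropic's.)

def company_match(shares_earned: int, shares_donated: int) -> int:
    """Shares the company donates on top of the employee's donation."""
    eligible = min(shares_donated, shares_earned // 2)  # match up to half of shares
    return 3 * eligible

# The article's example: earn 10,000 shares, donate half (5,000);
# the company donates another 15,000 on top.
print(company_match(10_000, 5_000))  # 15000
```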
The third is if the companies themselves decide to donate a large share of their profits. This was the key proposal of a landmark 2020 paper called “The Windfall Clause,” released by the Centre for the Governance of AI in Oxford. The six authors notably include a number of figures who are now senior governance officials at leading labs: Cullen O’Keefe and Jade Leung are at OpenAI, and Allan Dafoe is at Google DeepMind (the other three are Peter Cihon, Ben Garfinkel, and Carrick Flynn).
The idea is simple: The clause is a voluntary but binding commitment that AI firms could make to donate a set percentage of their profits in excess of a certain threshold to a charitable entity. They suggest the thresholds be based on profits as a share of the gross world product (the entire world’s economic output).
If AI is a truly transformative technology, then profits of this scale are not inconceivable. The tech industry has already been able to generate massive profits with a fraction of the workforce of past industrial giants like General Motors; AI promises to repeat that success but also completely substitute for some forms of labor, turning what would have been wages in those jobs into revenue for AI companies. If that revenue is not shared somehow, the result could be a surge in inequality.
In an illustrative example, not meant as a firm proposal, the authors of “The Windfall Clause” suggest donating 1 percent of profits between 0.1 percent and 1 percent of the world’s economy; 20 percent of profits between 1 and 10 percent; and 50 percent of profits above that. Out of all the companies in the world today — up to and including firms with trillion-dollar valuations like Apple — none have high enough profits to reach 0.1 percent of gross world product. Of course, the specifics require much more thought, but the point is for this not to replace taxes for normal-scale companies, but to set up obligations for companies that are uniquely and spectacularly successful.
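The illustrative schedule can be sketched as a marginal-bracket calculation, like income tax brackets. Treating the tiers as marginal is my reading of the example, not a detail fixed by the paper, and the dollar figures below are invented toy numbers.

```python
# Sketch of the illustrative windfall schedule, read as marginal brackets:
# 1% of profits in the 0.1%-1% band of gross world product (GWP),
# 20% in the 1%-10% band, 50% above that.

def windfall_obligation(profits: float, gross_world_product: float) -> float:
    """Donation owed under the illustrative marginal schedule."""
    # (lower bound, upper bound, rate), bounds as fractions of GWP
    brackets = [
        (0.001, 0.01, 0.01),
        (0.01, 0.10, 0.20),
        (0.10, float("inf"), 0.50),
    ]
    share = profits / gross_world_product
    owed = 0.0
    for lo, hi, rate in brackets:
        if share > lo:
            owed += (min(share, hi) - lo) * gross_world_product * rate
    return owed

# Toy numbers: a $100 trillion world economy and a firm earning $2 trillion (2% of GWP).
gwp = 100e12
print(windfall_obligation(2e12, gwp))  # $9B from the first band + $200B from the second
```

Even at profits of 2 percent of the world economy, a scale no company has ever approached, the marginal structure keeps the obligation well below half of total profits.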
The proposal also doesn’t specify where the money would actually go. Choosing the wrong way to distribute would be very bad, the authors note, and the questions of how to distribute are innumerable: “For example, in a global scheme, do all states get equal shares of windfall? Should windfall be allocated per capita? Should poorer states get more or quicker aid?”
A global UBI
I won’t pretend to have given the setup of windfall clauses nearly as much thought as these authors, and when the paper was published in early 2020, OpenAI’s GPT-3 hadn’t even been released. But I think their idea has a lot of promise, and the time to act on it is soon.
If AI really is a transformative technology, and there are companies with profits on the order of 1 percent or more of the world economy, then the cat will be far out of the bag already. That company would presumably fight like hell against any proposals to distribute its windfall equitably across the world, and would have the resources and influence to win. But right now, when such benefits are purely speculative, they’d be giving up little. And if AI isn’t that big a deal, then at worst those of us advocating these measures will look foolish. That seems like a small price to pay.
My suggestion for distribution would be not to attempt to find hyper-specific high-impact opportunities, like donating malaria bednets or giving money to anti-factory farming measures. We don’t know enough about the world in which transformative AI develops for these to reliably make sense; maybe we’ll have cured malaria already (I certainly hope so). Nor would I suggest outsourcing the task to a handful of foundation managers appointed by the AI firm. That’s too much power in the hands of an unaccountable group, too tied to the source of the profits.
Instead, let’s keep it simple. The windfall should be distributed to as many individuals on earth as possible as a universal basic income every month. The company should be committed to working with host country governments to supply funds for that express purpose, and commit to audits to ensure the money is actually used that way. If there’s need to triage and only fund measures in certain places, start with the poorest countries possible that still have decent financial infrastructure. (M-Pesa, the mobile payments software used in East Africa, is more than good enough.)
Direct cash distributions to individuals reduce the risk of fraud and abuse by local governments, and avoid intractable disputes about values at the level of the AI company making the donations. They also have an attractive quality relative to taxes by rich countries. If Congress were to pass a law imposing a corporate profits surtax along the lines laid out above, the share of the proceeds going to people in poverty abroad would be vanishingly small, at most 1 percent of the money. A global UBI program would be a huge win for people in developing countries relative to that option.
Of course, it’s easy for me to sit here and say “set up a global UBI program” from my perch as a writer. It will take a lot of work to get going. But it’s work worth doing, and a remarkably non-dystopian vision of a world with transformative AI.