OpenAI says it wants to implement ideas from the public about how to ensure its future AI models “align to the values of humanity.”
To that end, the AI startup is forming a new Collective Alignment team of researchers and engineers to build a system for collecting public input on its models’ behaviors and “encoding” it into OpenAI’s products and services, the company announced today.
“We’ll continue to work with external advisors and grant teams, including running pilots to incorporate … prototypes into steering our models,” OpenAI writes in a blog post. “We’re recruiting … research engineers from diverse technical backgrounds to help build this work with us.”
The Collective Alignment team is an outgrowth of OpenAI’s public program, launched last May, to award grants to fund experiments in setting up a “democratic process” for deciding what rules AI systems should follow. The goal of the program, OpenAI said at its debut, was to fund individuals, teams and organizations to develop proofs of concept that could answer questions about guardrails and governance for AI.
In its blog post today, OpenAI recapped the work of the grant recipients, which ran the gamut from video chat interfaces to platforms for crowdsourced audits of AI models and “approaches to map beliefs to dimensions that can be used to fine-tune model behavior.” All of the code used in the grantees’ work was made public this morning, along with brief summaries of each proposal and high-level takeaways.
OpenAI has attempted to cast the program as divorced from its commercial interests. But that’s a bit of a tough pill to swallow, given OpenAI CEO Sam Altman’s criticisms of regulation in the EU and elsewhere. Altman, along with OpenAI president Greg Brockman and chief scientist Ilya Sutskever, has repeatedly argued that the pace of innovation in AI is so fast that we can’t expect existing authorities to adequately rein in the tech — hence the need to crowdsource the work.
Some OpenAI rivals, including Meta, have accused OpenAI (among others) of trying to secure “regulatory capture of the AI industry” by lobbying against open AI R&D. OpenAI unsurprisingly denies this — and would likely point to the grant program (and Collective Alignment team) as an example of its “openness.”
OpenAI is under increasing scrutiny from policymakers in any case, facing a probe in the U.K. over its relationship with close partner and investor Microsoft. The startup recently sought to shrink its data privacy regulatory risk in the EU, leveraging a Dublin-based subsidiary to reduce the ability of certain privacy watchdogs in the bloc to act unilaterally on concerns.
Yesterday — partly to allay regulators’ concerns, no doubt — OpenAI announced that it’s working with organizations to attempt to limit the ways in which its technology could be used to sway elections through malicious means. The startup’s efforts include making it more obvious when images are AI-generated using its tools and developing approaches to identify generated content even after images have been modified.