
AI Needs Rules, but Who Will Get to Make Them?


About 150 government and industry leaders from around the world, including Vice President Kamala Harris and billionaire Elon Musk, descended on England this week for the U.K.’s AI Safety Summit. The meeting acted as the focal point for a global conversation about how to regulate artificial intelligence. But for some experts, it also highlighted the outsize role that AI companies are playing in that conversation—at the expense of many who stand to be affected but lack a financial stake in AI’s success.

On November 1 representatives from 28 countries and the European Union signed a pact called the Bletchley Declaration (named after the summit’s venue, Bletchley Park in Bletchley, England), in which they agreed to keep deliberating on how to safely deploy AI. But for roughly one in 10 of the forum’s participants, many of whom represented civil society organizations, the conversation taking place in the U.K. didn’t go far enough.

Following the Bletchley Declaration, 11 organizations in attendance released an open letter saying that the summit was doing a disservice to the world by focusing on potential future risks—such as terrorists or cybercriminals co-opting generative AI or the more science-fictional idea that AI could become sentient, wriggle free of human control and enslave us all. The letter said the summit overlooked the real and present risks AI already poses, including discrimination, economic displacement, exploitation and other kinds of bias.

“We worried that the summit’s narrow focus on long-term safety harms might distract from the urgent need for policymakers and companies to address ways that AI systems are already impacting people’s rights,” says Alexandra Reeve Givens, one of the statement’s signatories and CEO of the nonprofit Center for Democracy & Technology (CDT). With AI developing so quickly, she says, focusing on rules to avoid theoretical future risks takes up effort that many feel could be better spent writing legislation that addresses the dangers in the here and now.

Some of these harms arise because generative AI models are trained on data sourced from the Internet, and those data often contain bias. As a result, such models produce outputs that favor certain groups and disadvantage others. Ask an image-generating AI to produce depictions of CEOs or business leaders, for instance, and it will tend to show images of middle-aged white men. The CDT’s own research, meanwhile, highlights how non-English speakers are disadvantaged by generative AI because the majority of models’ training data are in English.

More distant future-risk scenarios are clearly a priority, however, for some powerful AI companies, including OpenAI, which developed ChatGPT. And many who signed the open letter think the AI industry has an outsize influence in shaping major relevant events such as the Bletchley Park summit. For instance, the summit’s official schedule described the current raft of generative AI tools with the phrase “frontier AI,” which echoes the terminology used by the AI industry in naming its self-policing watchdog, the Frontier Model Forum.

By exerting influence on such events, powerful companies also play a disproportionate role in shaping official AI policy—a type of situation called “regulatory capture.” As a result, those policies tend to prioritize company interests. “In the interest of having a democratic process, this process should be independent and not an opportunity for capture by companies,” says Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center.

For example, most private companies do not prioritize open-source AI (although there are exceptions, such as Meta’s LLaMA model). In the U.S., two days before the start of the U.K. summit, President Joe Biden issued an executive order that included provisions some in academia saw as favoring private-sector players at the expense of open-source AI developers. “It could have huge repercussions for open-source [AI], open science and the democratization of AI,” says Mark Riedl, an associate professor of computing at the Georgia Institute of Technology. On October 31 the nonprofit Mozilla Foundation issued a separate open letter that emphasized the need for openness and safety in AI models. Its signatories included Yann LeCun, a professor of AI at New York University and Meta’s chief AI scientist.

Some experts are asking regulators only to extend the conversation beyond AI companies’ primary worry—existential risk at the hands of some future artificial general intelligence (AGI)—to a broader catalog of potential harms. For others, even this broader scope isn’t good enough.

“While I completely appreciate the point about AGI risks being a distraction and the concern about corporate co-option, I’m starting to worry that even trying to focus on risks is overly helpful to corporations at the expense of people,” says Margaret Mitchell, chief ethics scientist at AI company Hugging Face. (The company was represented at the Bletchley Park summit, but Mitchell herself was in the U.S., attending a concurrent forum held by Senator Chuck Schumer of New York State.)

“AI regulation should focus on people, not technology,” Mitchell says. “And that means [having] less of a focus on ‘What might this technology do badly, and how do we categorize that?’ and more of a focus on ‘How should we protect people?’” Mitchell’s circumspection toward the risk-based approach arose in part because so many companies were so willing to sign up to that approach at the U.K. summit and other similar events this week. “It immediately set off red flags for me,” she says, adding that she made a similar point at Schumer’s forum.

Mitchell advocates for taking a rights-based approach to AI regulation rather than a risk-based one. So does Chinasa T. Okolo, a fellow at the Brookings Institution, who attended the U.K. event. “Primary conversations at the summit revolve around the risks that ‘frontier models’ pose to society,” she says, “but leave out the harms that AI causes to data labelers, the workers who are arguably the most essential to AI development.”

Focusing specifically on human rights situates the conversation in an area where politicians and regulators may feel more comfortable. Mitchell believes this will help lawmakers confidently craft legislation to protect more people who are at risk of harm from AI. It could also provide a compromise for the tech companies that are so keen to protect their incumbent positions—and their billions of dollars of investments. “By government focusing on rights and goals, you can mix top-down regulation, where government is most qualified,” she says, “with bottom-up regulation, where developers are most qualified.”
