
The Fight to Define When AI Is ‘High Risk’


Everyone from tech companies to churches wants a say in how the EU regulates AI that could harm people.

People should not be slaves to machines, a coalition of evangelical church congregations from more than 30 countries preached to leaders of the European Union earlier this summer.

The European Evangelical Alliance believes all forms of AI with the potential to harm people should be evaluated, and that AI with the power to harm the environment should be labeled high risk, as should AI for transhumanism, the alteration of people with technology like computers or machinery. It urged members of the European Commission to further discuss what's "considered safe and morally acceptable" when it comes to augmented humans and computer-brain interfaces.

The evangelical group is one of more than 300 organizations to weigh in on the EU’s Artificial Intelligence Act, which lawmakers and regulators introduced in April. The comment period on the proposal ended August 8, and it will now be considered by the European Parliament and European Council, made up of heads of state from EU member nations. The AI Act is one of the first major policy initiatives worldwide focused on protecting people from harmful AI. If enacted, it will classify AI systems according to risk, more strictly regulate AI that’s deemed high risk to humans, and ban some forms of AI entirely, including real-time facial recognition in some instances. In the meantime, corporations and interest groups are publicly lobbying lawmakers to amend the proposal according to their interests.


At the heart of much of that commentary is a debate over which kinds of AI should be considered high risk. The bill defines high risk as AI that can harm a person’s health or safety or infringe on fundamental rights guaranteed to EU citizens, like the right to life, the right to live free from discrimination, and the right to a fair trial. News headlines in the past few years demonstrate how these technologies, which have been largely unregulated, can cause harm. AI systems can lead to false arrests, negative health care outcomes, and mass surveillance, particularly for marginalized groups like Black people, women, religious minority groups, the LGBTQ community, people with disabilities, and those from lower economic classes. Without a legal mandate for businesses or governments to disclose when AI is used, individuals may not even realize the impact the technology is having on their lives.

The EU has often been at the forefront of regulating technology companies, such as on issues of competition and digital privacy. Like the EU’s General Data Protection Regulation, the AI Act has the potential to shape policy beyond Europe’s borders. Democratic governments are beginning to create legal frameworks to govern how AI is used based on risk and rights. The question of what regulators define as high risk is sure to spark lobbying efforts from Brussels to London to Washington for years to come.

Introducing the AI Act

Within the EU, some legal structures do exist to address how algorithms are used in society. In 2020, a Dutch court declared an algorithm used to identify fraud among recipients of public benefits a violation of human rights. In January, an Italian court declared a Deliveroo algorithm discriminatory for assigning gig workers reliability scores that penalize people for things like personal emergencies or being sick.

But the AI Act would create a common regulatory and legal framework for 27 countries in the European Union. The draft proposes the creation of a searchable public database of high-risk AI that’s maintained by the European Commission. It also creates a European AI Board tasked with some yet-to-be-decided forms of oversight. Significantly, the AI Act hinges upon defining what forms of AI deserve the label “high risk.”

Work on the AI Act started in 2018 and was preceded by a number of reports on building trustworthy AI systems, including work with a 52-member expert group and consultations involving thousands of business, government, and civil-society stakeholders. Their feedback helped inform which forms of AI are listed as high risk in the draft proposal.

