The artificial intelligence market is evolving rapidly, and with it the debate over AI's regulatory framework is gaining prominence. As more people and businesses adopt intelligent systems, it becomes inevitable to establish standards that keep their use responsible and safe.
As a result, governments, companies, and organizations that promote artificial intelligence are moving globally to propose laws on the subject and to adapt existing legislation to the current state of the technology.
Earlier this year, the research consultancy Cognilytica published a report mapping artificial intelligence laws and regulations around the world.
According to the survey, most legislative proposals concern facial recognition technologies, autonomous vehicles, the use of personal data in AI, and aspects such as unconscious bias and the degree of human supervision over decisions made by AI.
The challenge, however, is to ensure that emerging regulations do not slow the pace of technological evolution, and that they preserve the competitive advantage of companies investing in the area without introducing new risks.
Some of the concerns that motivate the creation of laws for artificial intelligence are:
- Data protection laws
- Ethical implications of AI
- Erroneous decisions by facial recognition systems, autonomous cars, and weapons
- Use of AI for malicious purposes
Artificial Intelligence laws are at different stages of discussion and legislation in countries that have dealt with the topic.
Read on for an overview of the global regulatory landscape, how it relates to data protection laws, the advantages and risks of artificial intelligence regulation, and where technology companies stand on the subject.
What motivates proposals to regulate AI around the world?
Before exploring the main milestones and proposals on the regulation of AI around the world, it is worth revisiting the reasons that motivate this type of legislation.
In 2016, the Organisation for Economic Co-operation and Development (OECD) had already released a document listing concerns about the use of artificial intelligence. They were:
- Increase in unemployment due to automation
- Greater imbalance in income distribution
- Results skewed by the absence of human supervision
With these and other aspects in mind, in May 2019 the OECD published a set of recommendations for the development of artificial intelligence.
Artificial Intelligence regulation proposals
The OECD document on principles and responsibilities influenced the discussion on the topic in European Union countries, China, the United States, and Brazil.
In June 2019, the Ministry of Science and Technology of China, one of the leading countries in the race for artificial intelligence, released governance rules to be followed in the development of AI in the country.
In January 2020, the United States government published a proposal to regulate artificial intelligence. Then, in February, the OECD launched the AI Policy Observatory, whose mission is to support the responsible development of the technology.
Also in February 2020, the European Commission opened a public consultation to gather society's views on the topic. Based on this input, the Commission will draft a legislative proposal to be debated by the European Parliament and national governments.
Before that, in 2019, the EU Commission had published its Ethics Guidelines for Trustworthy AI, which set out pillars such as human oversight, technical safety and robustness, privacy and data governance, transparency, and non-discrimination.
France and Canada, in turn, lead The Global Partnership on AI (GPAI), an initiative with the participation of governments from several countries, including Australia, Germany, Mexico, Singapore, and the United States.
The main objective of the initiative is to guide the development of artificial intelligence that respects:
- human rights
- economic growth
The project has the support of the OECD and UNESCO and was made official in June 2020.
However, according to data from the Cognilytica study, also published in Forbes, although discussions about regulation around the world are advancing, no country has yet implemented comprehensive legislation on ethical use or unconscious bias in AI.
Regulation of Artificial Intelligence in Brazil
Brazil is following the same path as other countries that have already proposed regulatory measures aligned with the OECD guidelines on the development of artificial intelligence.
In 2019, two bills were proposed:
- Bill 5051, which aims to establish principles and regulate the use of Artificial Intelligence in Brazil
- Bill 5691, to create the National Artificial Intelligence Policy
The main articles of the bills stipulate that AI-based decision systems shall always serve as an aid to human decision-making, and they suggest creating specific policies for the protection and retraining of workers in the face of automation.
Both texts are inspired by the European recommendations and, so far, are still pending in the Senate, specifically in the Commission for Science, Technology, Innovation, Communication and Informatics (CCT).
In February 2020, this same commission opened a public consultation to discuss legislation and the ethical use of AI, covering areas such as the workforce, research and development, application in the public and private sectors, and public security.
To regulate or not to regulate? That is the question
Despite the many regulatory proposals underway around the world, one of the industry's open questions is the extent to which artificial intelligence regulation will limit or expand innovation in the area.
After all, when it comes to artificial intelligence, there is no global consensus among companies and countries on ethical and privacy parameters. For example, using data for a particular purpose may be acceptable in China, but not in the United States or Europe.
Establishing good practices for AI is important, but at the same time, it is a very subjective process and depends on the applications and consequences of its use.
Therefore, regulations must balance the technology's potential for exponential evolution against its risks, reflecting both sides. It is also important that AI legislation be created with the involvement of the scientific community and the technology industry, since governments do not always keep up with the speed and complexity of market transformations.
The positioning of AI companies
Besides governments and industry bodies dedicated to artificial intelligence, technology giants have also taken a stand on regulation of the area, in addition to creating good-practice guides for applying AI within their own organizations.
In January of this year, during the World Economic Forum in Davos, Switzerland, Microsoft CEO Satya Nadella described the regulation of Artificial Intelligence as "crucial". He warned of the risk of leaving decisions 100% at the mercy of machines, without human supervision.
Google CEO Sundar Pichai wrote in the Financial Times that "there is no doubt that Artificial Intelligence needs to be regulated", stressing that legislation on the topic needs to balance AI's benefits to society against its potential harms, such as those related to autonomous cars and deepfakes (computer-generated images).
Other companies, such as Apple and IBM, have also called on the United States government to set standards for the use of AI, as suggested by the European Union.
Some of the big techs, including Google, Amazon, and IBM, have even announced that they would stop collaborating with government agencies on the development of mass surveillance technologies, such as facial recognition for public use, due to concerns about the consequences of the technology.
Data Protection Laws and Artificial Intelligence
One of the concerns raised by advocates of AI regulation has to do with the use of users' personal and sensitive data. It is no coincidence that data protection regulations are on the rise around the world, such as the European GDPR and Brazil's LGPD.
According to the Cognilytica report, at least 31 countries already have prohibitive laws regarding the use and sharing of data without user consent.
Since data fuels AI systems, which require high volumes of quality data, one of the risks is the use of data without users' knowledge and consent, hence the importance of data protection laws. Increasingly, consumers want to know how smart systems are using their data.
Future of AI regulation
The discussion on the regulation of artificial intelligence is still largely theoretical. At the moment, governments and companies are focused on creating manuals and best practice guides for the sector, before moving forward with the enactment of laws.
However, in the wake of data protection laws and as the technology advances, it is likely that countries will be forced to consolidate legislation in the area, preferably taking into account the local context of artificial intelligence development.
Want to learn more about Artificial Intelligence concepts and applications? Listen to the Inside Alana Podcast episodes!