AI laws are changing fast, and many countries still don’t have full regulations. But some regions already have rules in place, and new laws are being introduced. Here’s a snapshot of where things stand and how you can keep up.
Why should you care? Because these new rules will change how AI is developed and used, affecting everything from your social media feed to new technologies at work.
Here’s a look at the most important current and upcoming AI laws in the European Union (EU) and the United States (U.S.), and what they mean.
The European Union: The world’s first comprehensive AI law
In December 2023, the EU reached political agreement on the Artificial Intelligence Act (AI Act), a landmark law aimed at making AI safe and trustworthy across Europe; it was formally adopted in 2024 as Regulation (EU) 2024/1689. The law takes a "risk-based" approach: the greater the potential for an AI system to cause harm, the stricter the rules. Key points include:
- Key Changes:
- Stricter Rules for High-Risk AI: Think AI used in healthcare or critical infrastructure.
- New Rules for Powerful AI Models: Special attention is given to very large AI models (like the ones behind ChatGPT) that could pose a “systemic risk” in the future.
- Ban on Certain Uses: The law bans some highly invasive uses of AI, though it allows exceptions for law enforcement, like using remote biometric identification (e.g., facial recognition) in public spaces under strict safety rules.
- Protecting Rights: Before deploying high-risk AI, companies must check its impact on people’s fundamental rights.
- Timeline: The Act entered into force on 1 August 2024, but its obligations phase in gradually: bans on prohibited practices apply from February 2025, rules for general-purpose AI models from August 2025, and most remaining requirements from August 2026.
The United States: Guidance and executive action
The U.S. hasn’t passed a comprehensive AI law yet, but it’s using several other methods to set rules.
- Presidential Action: In October 2023, the White House issued a major Executive Order on AI. This order is meant to protect Americans from AI risks and does the following:
- Requires AI developers to share safety test results with the government.
- Promotes new standards and tools to make sure AI is safe, secure, and trustworthy.
- Aims to protect citizens from AI-fueled fraud and other dangers.
- Laws in Congress: Several smaller laws and proposed bills are also on the table:
- AI Training Act (passed 2022): Requires an AI training program for federal employees involved in purchasing technology, so they understand the capabilities and risks of the AI systems they buy.
- Algorithmic Accountability Act (Proposed): Would require companies that use AI for critical decisions (like loan applications) to study and report on how those systems impact consumers.
- Data Privacy: Existing data privacy laws, like Europe’s GDPR and similar laws in U.S. states such as California, Virginia, and Colorado, are also important. These laws already have rules that affect how AI and automated systems handle your personal data.
Beyond the big players
It’s not just the EU and U.S. getting involved. Other countries are also developing their own AI regulations, including Brazil, Canada, China, and South Korea.
As AI technology keeps evolving, the legal landscape will change. Staying informed about these developments is key for businesses, developers, and everyday citizens alike.
Resources to follow
Here are additional resources for keeping current on AI policies and regulations:
- OECD AI Policy Observatory (OECD.AI)
- Global AI Legislation Tracker
- EU AI Act Resources: The European Commission’s EU AI Act page and the official EUR-Lex entry for Regulation (EU) 2024/1689 contain the text of the law and updates on its implementation timeline. An unofficial EU AI Act tracker also posts the Act’s legislative history, key documents, and timelines.
- NCSL Artificial Intelligence Legislation Tracker: The U.S. National Conference of State Legislatures maintains an updated tracker summarizing AI-related bills across the states, including enacted laws (like those in Colorado and California) and pending bills.