Abstract
The European Union (“EU”) has taken the remarkable step of reaching agreement on the AI Act, the first-of-its-kind comprehensive legislation regulating artificial intelligence (“AI”). The AI Act adopts a risk-based approach to address the diverse applications of AI systems and the potential harms associated with AI technology, categorizing AI systems by risk level, ranging from unacceptable to minimal, with corresponding regulatory requirements tailored to mitigate the associated risks. Efforts are underway in the United States to establish AI regulatory frameworks, as demonstrated by the Bipartisan Framework for U.S. AI Act, proposed by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT); the proposed No Section 230 Immunity for AI Act; and President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, inter alia. This Article provides a synopsis of the final provisions adopted in the AI Act. The Article then explores United States policymakers’ most recent efforts to establish a regulatory path that balances principles of innovation with risk mitigation.
Recommended Citation
Renee Henson, Bridging the Divide: Does the EU's AI Act Offer Code for Regulating Emergent Technologies in America?, 89 Mo. L. Rev. (2024).
Available at: https://scholarship.law.missouri.edu/mlr/vol89/iss3/6