The US White House Office of Science and Technology Policy (OSTP) has recently unveiled a significant initiative: the “Blueprint for an AI Bill of Rights.” This development is accompanied by a series of related agency actions, all aimed at shaping the landscape of artificial intelligence (AI) governance.
The blueprint serves as a crucial framework for fostering collaboration among government, technology companies, and citizens to promote accountable AI practices. It introduces new guidelines for how AI systems should be designed, deployed, and overseen.
It’s also worth noting that while these guidelines are a key step, work on AI accountability is still in progress.
I’ll explain what these guidelines cover and, equally importantly, what areas they do not address. The question that naturally arises is: why do we need an AI Bill of Rights in the first place?
What Is the AI Bill of Rights & Why Is It Needed?
The White House’s Blueprint for an AI Bill of Rights outlines a set of principles to help responsibly guide the design and use of artificial intelligence. Until now, Washington has struggled to keep pace with the swiftly evolving AI sector.
However, the emergence of the AI Bill of Rights, which is indicative of the Biden administration’s stance on AI regulation, may herald an impending wave of increased government involvement.
The Responsible Use of Artificial Intelligence (AI) has taken center stage over the last eight years, capturing the attention of countries, citizens, and businesses alike. Today, approximately 60 countries have laid out their National AI Strategies, demonstrating their commitment to harnessing the potential of AI while ensuring it is used responsibly.
This shift reflects the growing awareness that while AI can deliver significant benefits, it also poses the risk of causing substantial harm to individuals and society if left unchecked.
As a result, many nations are in the process of formulating policies that strike a balance, allowing for innovation and progress while safeguarding against potential pitfalls.
Since 2018, the European Union (EU) has been working to shape how AI is designed, developed, and deployed, with the goal of keeping its people safe from AI-related harms. Fast forward to 2024, and we see the arrival of the EU AI Act.
Everyone’s wondering how companies and other governments will react to this new set of rules from the EU. And China is also setting up its own AI rules and regulations.
The USA and China are in a tech tug-of-war that’s leading to technological decoupling: they’re heading in different tech directions. But the real question is, are we seeing the start of totally different tech paths and ways of handling artificial intelligence around the world?
Here’s where the EU-US Trade and Technology Council comes in. They’re trying to get the EU and the USA to sync up better. Meanwhile, the EU and China are keen on tightening the reins on AI, like getting a better handle on algorithms. This makes it look like the USA might be a bit behind in setting up digital rules.
There are many global efforts to nail down the best way to use AI. Take the Global Partnership on AI, which started in 2020. Both the EU and the USA are in on this.
Then there are UNESCO and the OECD, setting out guidance on AI use. And let’s not forget the World Economic Forum, which has been working since 2017 to create frameworks that everyone can use, focusing on making sure governments and businesses manage AI the right way.
AI Bill of Rights – A Closer Look
The Blueprint lays out five main principles and comes with a technical companion for the responsible implementation of AI:
- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation
- Alternative Options
The Blueprint intends to guide the design, use, and deployment of automated systems and to protect the American public. But it’s non-regulatory and non-binding. Think of it as guidance, not a strict set of rules.
The blueprint is 76 pages long and includes examples of AI use cases. The White House OSTP scopes it to automated systems that could meaningfully impact the American public’s rights, opportunities, or access to critical resources and services.
One thing to note: it doesn’t cover every kind of AI system. It skips over many industrial and operational AI applications. But it does give examples of AI in areas like lending, human resources, and surveillance, showing how this tech is becoming a big part of our lives.
The 5 Main Principles of the AI Bill of Rights, Explained
Each principle comes with illustrative examples and actionable measures that companies, governments, and other organizations can adopt to build these safeguards into their policies and operations, alongside overarching technology design recommendations.
Safe and Effective Systems
This principle highlights the necessity for safeguarding individuals against the use of automated systems that pose safety risks or prove ineffective.
It also underscores the importance of shielding individuals from the inappropriate or irrelevant use of data during the design, development, and deployment phases of these systems while also preventing the compounding of harm through data reuse.
To achieve this, the OSTP suggests involving diverse groups of independent parties and domain experts in the development of AI systems.
These AI systems should undergo comprehensive processes, including pre-deployment testing, risk assessment, and mitigation strategies.
Ongoing monitoring is crucial to ensure adherence to domain-specific standards and prevent unauthorized use.
The option to either discontinue the use of a system or refrain from using it in the first place is also presented as a viable course of action.
Furthermore, all information gathered during independent evaluations of a system’s safety and effectiveness should, whenever feasible, be made publicly accessible.
Algorithmic Discrimination Protections
Algorithmic discrimination refers to situations where automated systems treat specific individuals unfairly or unfavorably due to biases in their training data.
AI models rely on data as their foundation, and if this data is prejudiced, distorted, or incomplete, it can result in outputs that reflect and potentially amplify these biases.
This can lead to discrimination against particular individuals or groups, impacting areas such as employment, housing, and access to public services. In some cases, algorithmic discrimination may even contravene legal standards.
This principle proposes that those responsible for the development and deployment of AI systems should take proactive and continuous steps to ensure fairness in their design.
These measures encompass equity assessments, the utilization of representative data from diverse groups and perspectives, ensuring accessibility for individuals with disabilities, addressing any identified biases through testing and remediation, and establishing clear organizational oversight.
The principle recommends presenting all evaluations and reports on algorithmic impact in a clear and comprehensible manner to maintain protective measures.
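To make the testing-and-remediation step concrete, here’s a minimal sketch in Python of one common disparity check an equity assessment might include: comparing selection rates across demographic groups and flagging a low disparate-impact ratio. The group labels and toy data are hypothetical, and this is just one illustrative metric, not the Blueprint’s prescribed method.

```python
# Minimal sketch of a disparate-impact check on a binary automated decision.
# Group labels and toy data are hypothetical; a real equity assessment is broader.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest; below 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

# Toy audit log: (demographic group, did the system approve?)
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(decisions)
print(rates)                          # group_a ~0.67, group_b ~0.33
print(disparate_impact_ratio(rates))  # 0.5 -> would warrant investigation
```

The 0.8 threshold echoes the “four-fifths rule” from US employment-selection guidance; a real audit would also examine error rates, intersectional subgroups, and the underlying causes of any gap.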
Data Privacy
This principle states that everyone should have control over the personal data they generate online and how it is collected and used by companies. It also says designers and developers should ask for permission in a clear and understandable way regarding the collection, use, access, transfer, and deletion of people’s personal data, and respect those wishes “to the greatest extent possible.” Where that is not possible, organizations need “alternative privacy by design” safeguards that prevent abusive data practices.
While all data should be protected, the OSTP says there should be more “enhanced” protections for information in more sensitive areas like health, work, finance, criminal justice, and education.
By extension, the principle states that continuous surveillance should not be used in education, work, housing, or any other context where it could likely lead to limiting one’s rights or opportunities. And, if surveillance must be used, these systems should be subject to “heightened oversight.”
Notice and Explanation
This principle underscores the importance of notifying individuals when an automated system is utilized in a manner that could have an impact on them.
It further mandates that individuals should receive comprehensive explanations regarding the functioning of the system, the role of automation in the process, the rationale behind the system’s decisions, and the parties responsible for these decisions.
Developers and operators of the system are tasked with the responsibility of conveying all this information in straightforward, easily understandable language.
Timely, clear, and accessible communication of this information is imperative. Additionally, users should be promptly informed of any substantial alterations in the system’s usage.
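As a toy illustration of what such a notice could look like in practice, here’s a small hypothetical Python sketch that renders an automated decision as plain language, naming the outcome, the main factors, and a point of contact. The field names, template wording, and contact address are all invented for the example, not taken from the Blueprint.

```python
# Hypothetical sketch: rendering an automated decision as a plain-language notice.
# Factor names, template wording, and the contact address are invented.

def explain_decision(recipient, outcome, top_factors, contact):
    factors = "; ".join(f"{name} ({effect})" for name, effect in top_factors)
    return (
        f"Dear {recipient},\n"
        f"An automated system was used in reaching this decision.\n"
        f"Outcome: {outcome}.\n"
        f"Main factors: {factors}.\n"
        f"A named human reviewer is accountable for this outcome. "
        f"To ask questions or contest it, contact {contact}."
    )

print(explain_decision(
    "J. Doe",
    "loan application declined",
    [("credit utilization", "raised the risk score"),
     ("payment history", "lowered the risk score")],
    "appeals@example.org",
))
```

A production system would also log when each notice was sent and version the explanation alongside the model, so that substantial alterations in the system’s usage can trigger updated notices.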
Alternative Options
In situations where individuals prefer opting out of automated systems in favor of human alternatives, the OSTP asserts that this choice should be available “where appropriate.”
The determination of appropriateness should be context-specific and prioritize considerations of reasonableness, accessibility, and protection against particularly harmful consequences.
In certain instances, legal requirements may mandate the use of a human or alternative approach.
Additionally, the principle emphasizes that users should have prompt access to human resources in the event of AI system failures, errors, or outcomes that require appeal or contestation.
This process should be accessible, fair, effective, and properly maintained, backed by adequate operator training, and should not impose an unreasonable burden on the user or the general public.
Enforceability of the AI Bill of Rights
Can the AI Bill of Rights be enforced? The short answer is no. The AI Bill of Rights, as outlined in the White House’s Blueprint, is neither legally binding nor enforceable. It’s more of a collection of recommendations, as Patrick Lin, a technology law researcher and the author of “Machine See, Machine Do,” has described it.
“It’s on the right track, but lacks legal force,” explained Manasi Vartak, founder and CEO of the operational AI company Verta, in a conversation with Built In. “I see it as an initial step towards legislation. Now that we’ve identified the principles we need to safeguard, we can start crafting a law based on these principles.”
The Blueprint received a mixed response from the media, industry professionals, and academic circles. Some proponents of governmental oversight feel it falls short and doubt its effectiveness.
They hoped for a document with more robust checks and balances, akin to the EU AI Act. Additionally, publications like The Wall Street Journal pointed out concerns among tech executives who worry that regulation might hinder AI innovation.
Conversely, there’s significant support for avoiding immediate regulation to nurture beneficial AI innovation and competition. Policy experts have emphasized the potential benefits of the document, particularly for groups such as Black and Latino Americans.
As the head of the nonprofit Center for AI and Digital Policy told the MIT Technology Review, the AI Bill of Rights is an essential starting point, and an “impressive” one at that.
AI Accountability?
The World Economic Forum has been proactive in helping businesses, governments, and the public navigate the responsible use of AI in areas like Human Resources and Law Enforcement. Last year, the Forum launched a practical toolkit aimed at guiding the ethical use of AI in human resources.
Furthermore, a framework they developed provides essential advice on using facial recognition in law enforcement.
Although the Blueprint is advisory in nature, the White House has indicated that various federal agencies will introduce actions and guidelines related to AI use, including new procurement policies.
The extent of these agencies’ involvement with the Blueprint varies, and it’s still uncertain how these new guidelines will integrate with or enhance existing AI directives (such as Executive Order 13960 on Promoting Trustworthy AI in the Federal Government or the National Institute of Standards and Technology’s AI Risk Management Framework) and statements from entities like the FTC, EEOC, CFPB, and HHS.
Nevertheless, the “Blueprint for an AI Bill of Rights” might reinforce existing standards and laws, adding normative weight to other proposed legal frameworks like the revised Algorithmic Accountability Act, reintroduced in the Senate in 2022.
These legal frameworks could give more force to the AI Bill of Rights and potentially lead to a closer alignment of regulatory best practices between the US and the EU’s forthcoming AI Act.
In the broader context of international AI governance and best practices, the AI Bill of Rights is a significant step. However, it must be viewed alongside other upcoming initiatives in the US and abroad. The EU and China are advancing their regulatory regimes, which will influence global best practices.
To maintain its influence over international AI standards and protect its citizens’ interests and beneficial innovation, the US must continue to develop and implement effective policies and practices. These efforts are likely to impact the evolving global best practices in AI governance.
Existing Artificial Intelligence Laws, Legislation & Regulations
When it comes to AI laws in the U.S., there’s no federal legislation yet that directly limits Artificial Intelligence use or guards citizens against AI-related harm. But things are moving.
Last year, President Joe Biden rolled out the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. This order is a big deal: it requires AI developers to share safety test results with the U.S. government, especially where there’s a national security risk. Plus, federal agencies have to draw up guidelines and standards for AI safety and security.
Even before this order, departments like the Department of Defense, the U.S. Agency for International Development, and the Equal Employment Opportunity Commission were already setting their own AI rules.
More than a dozen agencies, including heavy-hitters like the Food and Drug Administration, Federal Trade Commission, and the U.S. Copyright Office, have issued joint statements. Yet, all these guidelines from government bodies? They’re just suggestions, not something you can enforce by law.
Now, states and cities across the U.S. are getting in on the action, too. Colorado has laws for the fair use of AI by insurers, while California said a big no to undisclosed chatbots selling stuff or swaying votes.
Illinois stepped up as the first state to limit AI in hiring practices. New York City went ahead and made companies tell job applicants if they’re using hiring algorithms and get those algorithms checked for bias every year.
Also, cities are trying to put limits on police use of facial recognition tech. Twenty-five more states, plus Puerto Rico and Washington, D.C., introduced AI bills in 2023.
Internationally, the scene’s a bit different. China is way ahead in AI laws, covering everything from development to usage. The European Union is busy crafting the AI Act, aiming for what they call “trustworthy AI.”
This act puts stricter checks on riskier AI like facial recognition and bans some uses outright, like emotion recognition. Other countries like France, Italy, and Japan are also reacting to AI developments, like OpenAI’s ChatGPT, with their own rules and bans.
Since the AI Bill of Rights came out, there’s been a real buzz among politicians, human rights groups, and tech experts. They’re all pushing for clearer AI rules in the U.S. This could mean we’re heading towards a stronger legal framework for AI pretty soon.
Over in the states, places like California have kicked off a trend with new laws on consumer data privacy. These laws let people access their data, delete it, or say no to it being sold. And there’s talk of new rules for how AI is used in things like sports betting and mental health.
On the federal side, there’s the American Data Protection and Privacy Act (ADPPA). It’s meant to control how businesses handle and use our data, which is a big deal for AI. The thing is, even though it got some support, it’s stuck in Congress. And another bill, this one for checking up on AI, didn’t make the cut.
As mentioned earlier, President Biden’s executive order from last year sets up guidelines to make AI safe and secure, aiming to protect our rights and keep pace with AI laws elsewhere.
But regulating AI? It’s tricky. These AI systems are complex, and even the pros struggle to figure out why they make certain choices. It’s tough to regulate something when you’re not quite sure how it ticks.