I recently stumbled upon this article on the AI Bill of Rights, and I thought I’d share some of my highlights and thoughts on it. It is an exciting piece and one you should read if you are interested in conversations about how AI can be regulated in the interests of humans.
The article outlines a very realistic framework called the “Blueprint for an AI Bill of Rights,” whose goal is to protect the rights, opportunities, and access of the American public in the age of artificial intelligence (AI).
Even though it is focused on the USA, I think there is a lot that other countries can borrow from it.
The framework discussed in that article is based on five core principles: Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback.
Each of these principles is as important as the others. But let’s get to my take.
Safe and Effective Systems
This principle took my mind back to a piece from the AI Now Institute I read about how some tech giants, racing against their competitors, might be releasing AI products that are not entirely ready or safe for public use. But let’s not digress. The Safe and Effective Systems principle in this bill advocates that automated systems undergo pre-deployment testing and risk mitigation to ensure they do not endanger individuals’ safety or the safety of their communities.
In detail, it suggests that it should be illegal to develop an automated system without consulting “diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.”
To put this simply: if you want to build an AI robot that would speed up construction, you cannot do it without consulting the stakeholders, including the builders and everyone else involved on a typical construction site. This is a principle I subscribe to 100%. Why?
Because it is easy to get caught up in the excitement of creating the next big product and overlook small details about how it might endanger the people involved. True, we want AI to help us do things faster and cheaper, but we must also begin to consider “safer.” And who understands safety in an industry better than its stakeholders?
Don’t design from the outside. Consult with the people in the system, and identify and mitigate potential risks even before the product is out. Endeavour to take proactive safety measures, not just around usage but also around outcomes. Your safety measures should include the possibility of not deploying the system, or of removing a system from use. Even if this were not a law, it should be the responsibility of any AI entrepreneur.
Human Alternatives, Consideration, and Fallback
Always keep an opt-out button active. If you are designing a product that is supposed to automate the processes of a particular department, for instance, there should be an opt-out option and a provision for human alternatives. Your product should also have seamless fallback and escalation processes for users to seek human consideration and remedy when automated systems fail or produce errors. These processes should be accessible, equitable, and effective.
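To make the idea concrete, here is a minimal sketch of that pattern in Python. Everything in it (the class names, the queue, the handler) is hypothetical and mine, not from the blueprint; it only illustrates the three behaviours above: honouring an up-front opt-out, falling back to a human when automation fails, and never leaving the user at a dead end.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A unit of work submitted to the automated system (hypothetical)."""
    user_id: str
    payload: str
    opted_out: bool = False  # the user chose the human alternative up front

class HumanQueue:
    """Stand-in for a human review and remedy workflow."""
    def __init__(self):
        self.pending = []

    def escalate(self, request, reason):
        # Record the request so a person actually picks it up.
        self.pending.append((request, reason))
        return f"escalated to human review: {reason}"

def handle(request, automated_step, humans):
    # Honour the opt-out before any automation runs.
    if request.opted_out:
        return humans.escalate(request, "user opted out")
    try:
        return automated_step(request)
    except Exception as err:
        # Fallback: a failure routes to a human, never a dead end.
        return humans.escalate(request, f"automation failed: {err}")
```

The design choice worth noting is that escalation is part of the normal control flow, not an afterthought: every path out of `handle` either succeeds or lands in the human queue.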
I mean, if you are trying to make a product that makes my work easier, the product should not make my life more difficult. I should always have the opt-out option or a fast and seamless means to resolve every possible error. As entrepreneurs, we understand better than anyone that time is money. And if the technology fails, a human should be able to step in while the error is addressed.
Data, Data, Data!
Wherever the conversation turns to AI, there will be concerns about data. What I like about the data privacy propositions in this blueprint is that they are simple: let users choose what will be done with their data, and respect their choice.
Abusive data practices must be avoided. If this blueprint becomes law across several countries, then you, as a user, will have total control over your data. You decide whether and how your data is collected, used, and transferred, and you even get to say when it should be deleted. I think this is already the case with some technologies, but the catch is that there are no clear consent systems: you click Yes to one thing, oblivious that the Yes now applies to several other things. There should be clear consent practices and protections against unchecked surveillance, especially in sensitive data domains.
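One way to picture “clear consent” is consent recorded per purpose rather than as one blanket Yes. The sketch below is my own hypothetical illustration, not anything prescribed by the blueprint: a ledger where a grant for one purpose never silently extends to another, and where the default answer is no.

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Hypothetical per-purpose consent store: a Yes to one use
    never silently applies to another use."""
    def __init__(self):
        # (user_id, purpose) -> timestamp when consent was given
        self._grants = {}

    def grant(self, user_id, purpose):
        self._grants[(user_id, purpose)] = datetime.now(timezone.utc)

    def revoke(self, user_id, purpose):
        # The user gets to say when consent (and the data use) ends.
        self._grants.pop((user_id, purpose), None)

    def allowed(self, user_id, purpose):
        # Default-deny: no explicit record means no consent.
        return (user_id, purpose) in self._grants
```

Because `allowed` checks an exact (user, purpose) pair, consenting to analytics tells you nothing about ad targeting; each new use of the data needs its own Yes.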
If you follow trends on Twitter, you’ll probably have seen that a certain bank had its customer database hacked and some customers’ funds moved without their permission. You would also know that this is not the first such occurrence, even this year alone. One can never be too careful with data.
This could become a blueprint for several countries to adopt. What do you think? Are there other principles that you think should be a part of an AI bill of rights? What do you think about the five principles outlined in this article? Please share your thoughts.