Decision-makers are concerned that the use of artificial intelligence (AI) technology can negatively impact their brand and erode stakeholder and customer trust. In fact, research has shown that 56% of executives globally have slowed their adoption of AI because of such fears.
These concerns have given rise to the concept of Responsible AI (RAI): the way organisations use AI technologies, and their adherence to principles that serve the greater good, protect individuals and their fundamental rights, and ensure the trustworthiness of the AI application.
While everybody has a role to play in RAI, business and public leaders are ultimately accountable for ensuring that AI technologies are used responsibly. Leadership must contextualise RAI principles and translate them into actionable guidelines.
“It is all well and good to say that the use of AI must be fair and impartial, but how does that translate into actionable guidelines for teams to implement? Safeguards, standards, and best practices should be carefully defined so all involved know what is expected from them,” says Olivier Penel, SAS Data & Analytics Strategic Advisor (pictured).
In practical terms, even if AI makes virtually anything technically possible, it does not follow that there should be no boundaries. It comes down to the familiar ‘can’ versus ‘should’ discussion. To this end, the rapidly evolving regulatory environment is providing businesses with the legal parameters for what they can do with AI.
Companies must therefore act responsibly and establish guidelines and principles for what they can and cannot do with their data. This is also where bias comes into play.
“Bias talks to the impartiality of the decisions being made and is something that must be considered across the lifecycle of the data. Companies must therefore mitigate the risk of bias taking place. They must be proactive in selecting training data sets that are representative of the population that the AI system will be used for,” says Penel.
For instance, when building a recruitment tool, is the aim to find the best possible job candidates, or to find people like the ones the business already has in place? This means that the problem and business goals must be defined, and any sensitive variables and their proxies must be removed.
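Removing sensitive variables is not enough on its own, because other features can act as stand-ins for them. As a rough illustration of the idea, the sketch below drops a sensitive column and flags remaining features that correlate strongly with it; the data, feature names, and the 0.8 threshold are all hypothetical, and this is not a description of any particular SAS tooling.

```python
# Illustrative proxy check (hypothetical data): before training a
# recruitment model, drop sensitive variables and flag features that
# correlate strongly with them, since such proxies can reintroduce bias.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical candidate records: feature name -> value per candidate.
features = {
    "years_experience": [2, 5, 8, 3, 10, 7],
    "postcode_index":   [1, 0, 1, 1, 0, 1],   # may proxy for a protected group
}
sensitive = {"gender_encoded": [1, 0, 1, 1, 0, 1]}  # dropped from training entirely

PROXY_THRESHOLD = 0.8  # assumed cut-off for flagging a proxy
for s_name, s_vals in sensitive.items():
    for f_name, f_vals in features.items():
        r = pearson(f_vals, s_vals)
        if abs(r) >= PROXY_THRESHOLD:
            print(f"{f_name} may proxy for {s_name} (r = {r:.2f})")
```

In this toy example, `postcode_index` tracks the sensitive variable exactly and would be flagged for review, while `years_experience` would not.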
Furthermore, companies can check whether the model behaves consistently across different groups of people. Throughout the process, the organisation can examine the impact its decisions have had on people and address any bias accordingly.
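One simple way to make such a cross-group check concrete is to compare the rate of positive decisions between groups. The sketch below uses hypothetical recruitment decisions and the "disparate impact ratio" (lowest group rate divided by the highest), a common fairness heuristic; it is an assumption-laden illustration, not the method described by Penel.

```python
# Illustrative consistency check (hypothetical data): compare positive
# decision rates across groups via the disparate impact ratio.

def selection_rates(decisions):
    """decisions: list of (group, outcome) pairs, outcome 1 = positive."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical shortlisting decisions: (group label, 1 = shortlisted).
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

ratio = disparate_impact_ratio(decisions)
# A widely used heuristic (the "four-fifths rule") treats ratios below
# 0.8 as a sign the model may be treating groups inconsistently.
print(f"Disparate impact ratio: {ratio:.2f}")
```

Running such a check periodically on live decisions, not just at training time, is one way to monitor the impact of the system over its lifecycle.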
Penel says: “When it comes to RAI, the important thing is to put the human back into the equation. There is a difference between automated decision-making and aiding the decision-making process. Companies must therefore structure the use, deployment, and implementation of AI technology with a people-centric approach in mind.”
It is critical not to treat RAI as an afterthought: its principles should be embedded into the company’s AI initiatives from the outset. Measures need to be put in place to monitor bias throughout the process, with companies implementing specific tests to continually evaluate how decisions are being made.
“Even so, one of the most significant risks is to only think of what could go wrong with AI and not consider all the benefits the technology can deliver. And people should not be blinded against the technology,” says Penel.
It is clear that human-centricity is a key component; without human oversight, outcomes can go unchecked. Of course, there are areas where AI can function without human intervention, such as personalising website navigation or making product recommendations in an online store, so one should not assume that every AI application must be monitored closely.
“RAI is about building trust, with employees, partners, customers, stakeholders, and without trust, there is no adoption, and without adoption, there is no value delivered. AI can bring tremendous value to people, to the environment, and to society at large, but it cannot go unchecked. Ultimately, AI should serve our needs and humans should be part of the equation,” concludes Penel.