Rolls-Royce releases new data bias detector, revealing music and oncology AI collaborations

Rolls-Royce has added a simple and effective new data bias tool to its pioneering artificial intelligence (AI) ethics and trustworthiness toolkit, The Aletheia Framework. The company has also announced AI ethics collaborations with music cataloguing start-up Musiio and with international AI oncology experts.

Bias in the requirements, algorithms and data used to train AIs undermines the effectiveness and trustworthiness of AI and is one of the hardest challenges to overcome. It skews the way the AI analyses data and subsequently makes decisions, eroding trust in a technology that should be a valuable partner in our daily lives at home or at work.

Sitting as part of The Aletheia Framework 2.0 ecosystem, released today, the new tool is based on a tried and tested method of identifying and managing risk in very complex and novel systems. It has been adapted to perform the same role in AI, helping developers and organisations achieve highly accurate and fairer outcomes from their use of the technology.

Caroline Gorski, Group Director for Rolls-Royce’s data innovation unit, R2 Data Labs, said: “We’re excited to be adding even greater practicality to The Aletheia Framework, which is uniquely concise and focused on navigating the day-to-day intricacies of applying AI in an ethical and trustworthy way, such as bias in data.

“In the year since we first published the framework, we’ve been humbled by the level of interest, feedback and enthusiasm for something that started out as an answer to an internal challenge – crucially in a business-critical context.

“To enhance its effectiveness, not only are we adding this new AI bias tool, but we’ve also sought out collaborations with Musiio and with international AI oncology experts to test how the framework performs and to hear how it can be more user-friendly and flexible. All these lessons have been included in The Aletheia Framework v2.0, which is released today, and we believe it can be applied to any use of AI, either as a template or a general guide for organisations to structure their thinking on this complex topic.”

The new data bias tool also extends the ability of The Aletheia Framework to enable organisations to apply rigour across the entire life of their AI product: from pre-development ethical considerations; to training data bias mitigation; and then the trustworthiness check on the decisions an AI makes after it has been deployed.

Crucially, The Aletheia Framework does not scrutinise algorithms themselves, which are highly complex, often commercially sensitive and always evolving. Instead, it focuses on the inputs to and continuously checks the outputs from those algorithms. This makes it simple and fast to use, as well as being applicable in any AI context.
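The inputs-and-outputs approach described above can be illustrated with a minimal sketch. The example below is hypothetical and is not taken from the Rolls-Royce tool (which is a pre-configured Excel spreadsheet): it compares an AI's favourable-decision rates across groups using only the system's outputs, flagging a disparity without ever inspecting the algorithm itself.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the favourable-outcome rate for each group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    True for a favourable decision. Only the model's outputs are used;
    the model itself is never inspected.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def flag_output_bias(decisions, tolerance=0.2):
    """Flag groups whose selection rate falls below (1 - tolerance)
    times the best-served group's rate (the 'four-fifths' heuristic
    when tolerance is 0.2)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < (1 - tolerance) * best]

# Illustrative outputs: group A is approved 80% of the time, group B 40%.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 4 + [("B", False)] * 6)
print(selection_rates(decisions))   # {'A': 0.8, 'B': 0.4}
print(flag_output_bias(decisions))  # ['B']
```

Because a check like this consumes only deployed outputs, it can run continuously alongside any model, which mirrors why the framework's output-focused stance keeps it simple, fast, and applicable in any AI context.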

Examples of how The Aletheia Framework has been used:


Hazel Savage, co-founder and CEO, Musiio, said: “There are more than 60,000 songs being released onto streaming services every day, which is an unmanageable amount to process manually. We’ve trained an AI that can listen to music. I was already having many of these thoughts and ideas around ethical AI, and when I saw The Aletheia Framework I realised someone had already put them into practice. We’re now using The Aletheia Framework to guide our product strategy in terms of how we think about using AI from an ethical perspective.”


Massachusetts, USA-based Matthew Katz MD, Partner in Radiation Oncology Associates, said: “As a doctor, my purpose is to help people with decision-making, and those are often difficult decisions in cancer care. We have to trust the tools that we have, including artificial intelligence. The data in healthcare mostly relies on clinical trials and research, often published by elite institutions. The selection bias in that process means many people may not be included in those data sets, so what resonated for me about The Aletheia Framework was the potential for transparency in how data works; in making sure that the data available applies fully to the person in front of me, even if it’s incomplete data, or if it requires my clinical judgement to be included. I am accountable to patients, and artificial intelligence systems should be too; the framework captures that.”

Dr. Marianne Aznar, Senior Lecturer in Adaptive Radiotherapy at The University of Manchester, UK, said: “When I heard about the Aletheia Framework and that Rolls-Royce was working with artificial intelligence and ethics, I thought this was another chance for us in the radiotherapy community to learn from that field and to apply their work to our own processes. So far, a lot of our research has been around the accuracy of the solutions. But what The Aletheia Framework is going to help us do is to start discussions in the other areas, so that we can bring AI solutions from research and really into the daily clinic workflow.”

The AI in oncology working group also included Dr. Raj Jena from Cambridge University Hospitals NHS Foundation Trust; Dr. Matthew Williams, Imperial College Healthcare NHS Trust; Dr. Issam El Naqa, Moffitt Cancer Center; and Clifton David Fuller MD from The University of Texas MD Anderson Cancer Center.


Lord Tim Clement-Jones, chair of The Institute for Ethical AI in Education, said: “I commented on the original version of The Aletheia Framework, and it deals with many of the same areas in education as it does for Rolls-Royce in manufacturing – ethics, impact, compliance, data protection. So, I saw an equivalence there and we adapted The Aletheia Framework for our needs.”

The Aletheia Framework v2.0 can be downloaded from the Rolls-Royce website, along with the data bias tool, which is a pre-configured Excel spreadsheet, as well as an FAQ. User guides and other case studies can also be accessed.

