Sony AI has launched a new tool to combat bias in artificial intelligence. The dataset, called the Fair Human-Centric Image Benchmark (FHIBE, pronounced “Fee-bee”), lets developers check AI models for fairness.
It aims to solve a persistent industry problem: many AI systems are trained on images scraped from the web without permission. These models often perform worse for certain groups of people, which can lead to unfair and even harmful outcomes.
This new ethical AI benchmark is built on a foundation of consent and global diversity. The collection contains more than 10,000 images contributed by nearly 2,000 volunteers from more than 80 countries.
Everyone who took part agreed to be included and can withdraw their images at any time. This approach raises the standard for responsible AI development, showing that high-quality data can be gathered without unconsented web scraping.
The need for a tool like FHIBE is clear. AI models can absorb, and even amplify, stereotypes that people hold about one another. For instance, they might wrongly associate certain demographic groups with specific jobs or criminal activity.

One of the lead researchers, Alice Xiang, explained that a common misconception is that computer vision objectively reflects the people it depicts. In reality, it reproduces and can distort the biases in the data it was trained on. Sony’s ethical AI project gives developers a much-needed way to check their systems for this kind of algorithmic bias.
How does Sony AI’s FHIBE work, and why does it matter for the future?
FHIBE is more than a collection of pictures; it is a detailed diagnostic tool. Each image carries extensive annotations, including self-reported demographics such as age, pronouns, and ancestry, as well as physical and environmental attributes.
This depth lets researchers pinpoint exactly where and why a model fails. Tests with FHIBE, for instance, showed that some models are less accurate for people who use “she/her” pronouns, and the annotations helped trace the gap to greater variability in hairstyles among those subjects.
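To illustrate the kind of disaggregated analysis such annotations make possible, here is a minimal sketch in Python. The annotation fields (“pronouns”, “true_label”) and data structures are hypothetical placeholders for illustration only; they are not FHIBE’s actual schema or API.

```python
from collections import defaultdict

def accuracy_by_group(predictions, annotations, group_key="pronouns"):
    """Compute a model's accuracy separately for each annotated subgroup.

    predictions: dict mapping image_id -> predicted label
    annotations: dict mapping image_id -> annotation record, assumed to
                 contain the ground-truth label and demographic fields
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for image_id, predicted_label in predictions.items():
        record = annotations[image_id]      # hypothetical per-image record
        group = record[group_key]           # e.g. "she/her", "he/him"
        total[group] += 1
        if predicted_label == record["true_label"]:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Example usage: flag subgroups whose accuracy trails the best-performing group.
# scores = accuracy_by_group(model_predictions, benchmark_annotations)
# best = max(scores.values())
# gaps = {g: best - s for g, s in scores.items() if best - s > 0.05}
```

Because each image also carries physical and environmental attributes, the same disaggregation could in principle be repeated over those fields to help explain why a gap appears, not just where.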
The public release of Sony AI’s FHIBE, together with a research paper in the journal Nature, gives the tech industry a common standard. Previously, developers had to choose between using ethically questionable datasets or not checking for bias at all.
Now there is a free, public resource that makes the responsible choice practical. Alice Xiang, Global Head of AI Governance at Sony, said, “FHIBE is proof that fair and responsible practices can be achieved when ethical AI is a priority.” This step pushes AI development toward greater openness and accountability.
Sony’s commitment to responsible AI is not new. The company has maintained AI ethics guidelines and governance frameworks since 2018, and the launch of FHIBE is a direct result of that long-term commitment. It demonstrates a practical path forward for building trustworthy AI: by addressing algorithmic bias at the dataset level, FHIBE helps ensure that the AI systems of the future are fair and reliable for everyone, everywhere.





