If only all algorithmic bias were as easy to spot as this: FaceApp, a photo-editing app that uses a neural network to edit selfies photorealistically, has apologized for building a racist algorithm.
The app lets users upload a selfie or a photo of a face, then offers a series of filters that can be applied to subtly or radically alter its appearance — effects include aging a face and even changing its apparent gender.
The problem is the app also included a so-called “hotness” filter, and this filter was racist. As users pointed out, the filter lightened skin tones to achieve its purported ‘beautifying’ effect. You can see the filter pictured above in a before-and-after shot of President Obama.
In an emailed statement apologizing for the racist algorithm, FaceApp’s founder and CEO Yaroslav Goncharov told us: “We are deeply sorry for this unquestionably serious issue. It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour. To mitigate the issue, we have renamed the effect to exclude any positive connotation associated with it. We are also working on the complete fix that should arrive soon.”
FaceApp has temporarily renamed the offending filter from “hotness” to “spark”, although it would have been smarter to pull it from the app entirely until a non-racist replacement was ready to ship. Perhaps the team is distracted by the app’s moment of viral popularity (it’s apparently adding around 700,000 users per day).
While the underlying AI tech powering FaceApp’s effects includes code from some open source libraries, such as Google’s TensorFlow, Goncharov confirmed to us that the data set used to train the “hotness” filter is the company’s own, not a public data set. So there’s no getting away from where the blame lies here.
Frankly, it would be hard to come up with a better (visual) example of the risks of biases being embedded within algorithms. A machine learning model is only as good as the data it’s fed — and in FaceApp’s case the Moscow-based team clearly did not train their algorithm on a diverse enough dataset. We can at least thank them for illustrating the lurking problem of algorithmic bias in such a visually impactful way.
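To make that point concrete, here is a minimal, purely illustrative sketch — not FaceApp’s code, and with invented numbers — of how an unbalanced training set skews what a model learns to treat as the “norm”:

```python
# Toy illustration of training-set bias. All labels and proportions
# below are hypothetical; this is not based on any real dataset.
from collections import Counter

# Imagine each training image is tagged with its dominant skin-tone label.
# A 90/10 split like this means one group defines the model's "average" face.
training_labels = ["light"] * 900 + ["dark"] * 100

counts = Counter(training_labels)
total = sum(counts.values())

for tone, n in counts.items():
    print(f"{tone}: {n / total:.0%} of training data")

# A model fit to this set learns that "light" features are typical,
# so a "beautify" transform that nudges faces toward the learned average
# ends up lightening skin tones for everyone.
```

The fix is not in the model architecture but in the data: a training set that reflects the diversity of the app’s actual users.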
With AI being handed control of more and more systems, there’s a pressing need for algorithmic accountability to be fully interrogated, and for robust processes to be developed to avoid embedding human biases into our machines. Autonomous tech does not mean ‘immune to human flaws’, and any developer who tries to claim otherwise is trying to sell a lie.