
4 Oct, 2023

While previous efforts have focused on the lightness or darkness of skin tones, new research from Sony AI argues that red and yellow skin hues should also be included in AI bias tests to ensure more diverse and representative outcomes.

The paper, authored by William Thong and Alice Xiang of Sony AI, together with Przemyslaw Joniak of the University of Tokyo, argues for a "multidimensional" measure of skin color. The goal is to identify and address biases that may be overlooked when AI systems are evaluated solely along the light-to-dark spectrum.

Existing approaches, such as the Monk Skin Tone Scale and the Fitzpatrick scale, primarily consider how light or dark skin is. Sony's research highlights that these scales fail to capture biases against people whose skin varies mainly in hue rather than lightness, including East Asians, South Asians, Hispanics, and Middle Eastern individuals, among others.

Sony's proposal involves an automated approach based on the CIELAB color standard that provides a more refined assessment of skin color. This approach is intended to replace the manual categorization used in the Monk scale and provide a more complete understanding of skin color diversity.
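To make the idea concrete, here is a minimal sketch of what a two-dimensional, CIELAB-based skin color measurement could look like: perceptual lightness (L*) captures the light-to-dark axis, and a hue angle computed from the a* (red-green) and b* (yellow-blue) channels captures the red-to-yellow axis. This is an illustrative snippet, not the paper's published code; the function name, the assumption of an already-cropped skin patch, and the example values are all hypothetical.

```python
# Illustrative sketch of a two-dimensional skin color descriptor in CIELAB.
# Assumes the input is an already-cropped skin patch given as an RGB array
# with values in [0, 1]; names and thresholds here are assumptions, not
# Sony's published implementation.
import numpy as np
from skimage import color

def skin_color_descriptor(skin_patch_rgb: np.ndarray) -> tuple[float, float]:
    """Return (mean L*, mean hue angle in degrees) for an RGB skin patch."""
    lab = color.rgb2lab(skin_patch_rgb)       # shape (H, W, 3): L*, a*, b*
    l_star = lab[..., 0].mean()               # light-to-dark dimension
    a_star = lab[..., 1].mean()               # red-green axis
    b_star = lab[..., 2].mean()               # yellow-blue axis
    hue_angle = np.degrees(np.arctan2(b_star, a_star))  # red-to-yellow dimension
    return float(l_star), float(hue_angle)

# Example: a flat patch of a light, reddish tone.
patch = np.full((8, 8, 3), [0.87, 0.62, 0.54])
print(skin_color_descriptor(patch))
```

Because both numbers are computed directly from pixel values, such a descriptor can be applied automatically across large datasets, rather than relying on annotators assigning each face to a discrete category.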

The study found that common image databases overrepresent people with lighter and redder skin tones, leading to less accurate AI systems. For example, Twitter's image-cropping algorithm and two image-generation algorithms favored redder skin tones, and some AI systems mistakenly perceived people with redder skin as "more smiley."