
19 Jan, 2023

Workers manually filtered the texts used for AI training.

OpenAI used Kenyan workers, paid less than $2 an hour, to filter traumatic content for ChatGPT. According to TIME’s investigation, the developers sought to make the AI less toxic.

The developers of ChatGPT partnered with the San Francisco-based firm Sama to help with AI content filtering. Since November 2021, OpenAI has sent the partner thousands of text passages that needed to be screened for child sexual abuse, bestiality, murder, incest, and other sensitive content. Sama assigned a team of workers from Kenya to process the texts. According to its employees, they had to read 150 to 250 passages daily in a nine-hour shift. Interviewed employees described their work as “morally traumatizing” and compared it to “torture”.

According to the contract, OpenAI would pay Sama an hourly rate of $12.50 for the job, but employees were paid six to nine times less. According to sources, depending on position and performance, employees received $1 to $2 per hour. They said junior data labelers were paid a base salary of $170 per month, not including bonuses.

Sama denied the workers' claims. According to the company’s statement, employees' base salary was higher ($1.46 to $3.74 per hour) and the daily quotas were lower (70 passages of text per day). In addition, counseling services and group therapy were provided to employees. Workers, however, called the sessions with counselors useless, and one employee's requests for individual consultations instead of group therapy were repeatedly denied.

Moreover, the Kenyan team was also used to collect and label images for an undisclosed OpenAI project. According to a document obtained by TIME, Sama delivered a sample set of images containing child sexual abuse, bestiality, rape, and bodily harm. In conversations with TIME, both parties stated that the collection of images in some categories happened by mistake, after which they ended the cooperation ahead of schedule.

The deal between OpenAI and Sama was worth $200,000, but because of the early termination of the contract, the terms changed: the contractor ultimately received only $150,000 from OpenAI. Employees named an earlier investigation into how Sama hired Facebook content moderators for $1.50 an hour as the reason for the termination.

In January 2023, Sama announced that it would be phasing out content moderation. The company canceled a $3.9 million contract with Facebook, which resulted in the dismissal of approximately 200 workers in Nairobi. “After numerous discussions with our global team, Sama made the strategic decision to exit all [natural language processing] and content moderation work to focus on computer vision data annotation solutions,” the company said in a statement.

The investigation raises another question about the ethical issues associated with AI development. Andrew Strait, an AI ethics specialist, stressed that content labeling is a necessity for artificial intelligence development. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent. These are serious, foundational problems that I do not see OpenAI addressing,” Strait wrote on Twitter.