It is a vast, vital aspect of artificial intelligence that is hidden from the public

Of all the stories about artificial intelligence to come out recently, a new one from Josh Dzieza, published in collaboration with New York and The Verge, is equal parts compelling and astonishing. It explores a seemingly simple premise: for AI models to work, they need to be fed data, vast and almost unimaginable amounts of it. Enter the "annotators": millions of people around the world working for generally low wages at monotonous tasks like tagging photos of clothing, all so that AI models can get smarter. Dzieza writes that behind "even the most impressive AI systems are people — huge numbers of people labeling data to train it and annotating data when it gets confused."
In what he calls a "rising global industry," these annotators work for companies that sell the data to the big players at exorbitant prices, all while maintaining a culture of secrecy.

In fact, annotators are usually forbidden to talk about their work, though they are generally kept in the dark about the bigger picture anyway. (One of the big players is Scale AI, Silicon Valley's data vendor.) "The upshot is that, with few exceptions, little is known about the information that shapes the behavior of these systems, and much less is known about the people who are shaping it." Dzieza interviewed twenty annotators around the world and even worked as one himself to get the full picture. At one point, describing the human-machine feedback loop, he offers this mind-boggling gem: "ChatGPT seemed so human because it was trained by an AI that was simulating humans who were rating a human-simulated AI that was pretending to be a better version of an AI that was trained on human writing." (The full story is worth reading.)
