A gaming CEO shares a nightmare scenario of using AI to spy on developers

At least one video game company has considered using a large language model to spy on its developers. The CEO of TinyBuild, which publishes Hello Neighbor 2 and Tinykin, discussed the idea during a talk at this month’s Develop:Brighton conference, showing how ChatGPT could be used to try to monitor employees who are toxic, at risk of burnout, or who simply talk about themselves too much.

“This was very strange, Black Mirror-esque stuff for me,” TinyBuild CEO Alex Nichiporchik admitted, according to a new report from WhyNowGaming. He detailed how text from Slack, Zoom, and various task managers could be fed into ChatGPT with identifying information stripped out, in order to surface patterns. From there, the AI chatbot would ostensibly scan the information for warning signs that could help identify “potential problem players on the team.”
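To make the described pipeline concrete, here is a minimal, hypothetical Python sketch of how such a system might be wired together; the `anonymize` helper, the model choice, and the prompt wording are all illustrative assumptions, not anything TinyBuild has confirmed using:

```python
# Hypothetical sketch only -- not TinyBuild's actual tooling.
# Scrubs employee names from chat logs, then asks ChatGPT to flag patterns.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def anonymize(messages: list[str], employee_names: list[str]) -> list[str]:
    """Replace known employee names with numbered placeholders."""
    scrubbed = []
    for text in messages:
        for i, name in enumerate(employee_names):
            text = text.replace(name, f"Person_{i}")
        scrubbed.append(text)
    return scrubbed


def scan_for_warning_signs(messages: list[str], employee_names: list[str]) -> str:
    """Send the anonymized transcript to ChatGPT and return its assessment."""
    transcript = "\n".join(anonymize(messages, employee_names))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Identify possible signs of burnout or toxicity "
                        "in this anonymized team chat transcript."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```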

Nichiporchik took issue with how WhyNowGaming framed the presentation, claiming in an email to Kotaku that he was discussing a thought experiment, not describing practices the company currently uses. “This part of the presentation is hypothetical. No one is actively monitoring the staff,” he wrote. “I talked about a situation where we were in the middle of a critical situation at a studio, where one of the leads was suffering from burnout, and we were able to step in quickly and find a solution.”

While the presentation may have been aimed at the overarching goal of predicting employee burnout before it happens, thereby improving conditions for both developers and the projects they work on, Nichiporchik also appears to hold some controversial opinions about what causes these kinds of problem behavior and how HR can best address them.


In Nichiporchik’s hypothetical, one of the things ChatGPT would monitor is how often people refer to themselves using “I” or “me” in office communications. Nichiporchik referred to employees who talk too much in meetings or about themselves as “vampires.” “Once that person is no longer with the company or with the team, the meeting takes 20 minutes and we get five times more done,” he suggested during his presentation, according to WhyNowGaming.
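As a rough illustration of that heuristic (again, hypothetical, and not anything the studio has confirmed deploying), the first-person pronoun tally could be as simple as the sketch below; the `messages_by_author` mapping is an assumed input format:

```python
# Hypothetical sketch of the "I"/"me" frequency heuristic described above.
import re

FIRST_PERSON = re.compile(r"\b(i|me)\b", re.IGNORECASE)


def self_reference_rate(messages_by_author: dict[str, list[str]]) -> dict[str, float]:
    """Return each author's first-person pronoun count per 100 words."""
    rates = {}
    for author, messages in messages_by_author.items():
        words = sum(len(m.split()) for m in messages)
        hits = sum(len(FIRST_PERSON.findall(m)) for m in messages)
        rates[author] = 100 * hits / words if words else 0.0
    return rates
```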

Another controversial theoretical practice involves scanning employees’ messages for the names of co-workers with whom they have had positive interactions in recent months, and then flagging the names of people who were never mentioned. These three approaches, Nichiporchik suggested, could help the company “identify someone on the verge of burnout, who may be the reason that colleagues who work with that person are burning out, and be able to identify and fix it early on.”
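A bare-bones sketch of that “never mentioned” flag might look like the following; note that the positive-sentiment filter Nichiporchik described is omitted for brevity, and the roster format is an assumption:

```python
# Hypothetical sketch of the "never mentioned" co-worker flag described above.
# Sentiment scoring of each mention is deliberately left out for brevity.
def never_mentioned(messages_by_author: dict[str, list[str]], roster: list[str]) -> list[str]:
    """Return names from the roster that appear in no one's messages."""
    mentioned = set()
    for messages in messages_by_author.values():
        for text in messages:
            for name in roster:
                if name.lower() in text.lower():
                    mentioned.add(name)
    return sorted(set(roster) - mentioned)
```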

This use of artificial intelligence, theoretical or not, prompted a swift reaction online. Warner Bros. Games Montréal writer Mitch Dyer tweeted: “If you have to qualify over and over again that you know how miserable and horrible it is to monitor your employees, you could be the damn problem, my man.” UC Santa Cruz associate professor Mattie Price tweeted, “A wonderful and awful example of how uncritically using AI makes those in power take it at face value and internalize its biases.”

Corporate interest in generative AI has surged in recent months, prompting a backlash from creatives across many fields, from music to games. Hollywood writers and actors are currently on strike after negotiations with movie studios and broadcasters stalled, in part over how artificial intelligence might be used to write screenplays, or to capture actors’ likenesses and reuse them in perpetuity.

