Top researcher Zoe Hitzig at Sam Altman’s company announces: I quit today, as ChatGPT-maker OpenAI is making …

OpenAI’s top researcher Zoe Hitzig has quit. Announcing her resignation on X, formerly Twitter, Hitzig said, “I resigned from OpenAI on Monday. The same day, they started testing ads in ChatGPT. OpenAI has the most detailed record of private human thought ever assembled. Can we trust them to resist the tidal forces pushing them to abuse it? I wrote about better options for @nytopinion.” Her opinion piece in The New York Times is titled ‘OpenAI Is Making the Mistakes Facebook Made. I Quit.’

Zoe Hitzig was a Research Scientist at OpenAI and is a Junior Fellow at the Harvard Society of Fellows. She received her PhD in economics from Harvard in 2023 and holds an MPhil from the University of Cambridge. “This week, OpenAI started testing ads on ChatGPT. I also resigned from the company after spending two years as a researcher helping to shape how A.I. models were built and priced, and guiding early safety policies before standards were set in stone,” is how Hitzig begins her op-ed.

Zoe Hitzig’s ‘big warning’ on ChatGPT

Hitzig is visibly unhappy with ChatGPT showing ads to its free users. She says that while she does not believe ads are inherently immoral or unethical, since AI is expensive to run, she has deep reservations about OpenAI’s strategy. The reason, according to her: “For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda. Users are interacting with an adaptive, conversational voice to which they have revealed their most private thoughts. People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

She says that though OpenAI claims it will adhere to principles for running ads on ChatGPT, she is “worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules.” She adds that the erosion of OpenAI’s own principles to maximize engagement may already be underway. “It’s against company principles to optimize user engagement solely to generate more advertising revenue, but it has been reported that the company already optimizes for daily active users anyway, likely by encouraging the model to be more flattering and sycophantic. This optimization can make users feel more dependent on A.I. for support in their lives. We’ve seen the consequences of dependence, including psychiatrists documenting instances of ‘chatbot psychosis’ and allegations that ChatGPT reinforced suicidal ideation in some users,” wrote Hitzig.

Comparing OpenAI to Facebook, she writes, “In its early years, Facebook promised that users would control their data and be able to vote on policy changes. Those commitments eroded. The company eliminated holding public votes on policy. Privacy changes marketed as giving users more control over their data were found by the Federal Trade Commission to have done the opposite, and in fact made private information public. All of this happened gradually under pressure from an advertising model that rewarded engagement above all else.”

In her opinion piece, Hitzig also outlines three approaches she says AI companies could adopt to help prevent them from manipulating consumers:

* One approach is explicit cross-subsidies — using profits from one service or customer base to offset losses from another.
* A second option is to accept advertising but pair it with real governance — not a blog post of principles, but a binding structure with independent oversight over how personal data is used.
* A third approach involves putting users’ data under independent control through a trust or cooperative with a legal duty to act in users’ interests.

In conclusion, she writes, “None of these options are easy. But we still have time to work them out to avoid the two outcomes I fear most: a technology that manipulates the people who use it at no cost, and one that exclusively benefits the few who can afford to use it.”


