Ex-Google CEO Eric Schmidt has ‘killer’ AI warning for everyone


Ex-Google CEO Eric Schmidt has ‘killer’ AI warning for everyone

Eric Schmidt, who led Google as chief executive from 2001 to 2011, has delivered a ‘dangerous’ assessment of artificial intelligence (AI) security risks while simultaneously expressing optimism about the technology’s transformative potential. Speaking at the Sifted Summit during a fireside chat this week, Schmidt addressed concerns about whether AI poses dangers comparable to nuclear weapons, acknowledging significant vulnerabilities in current AI systems.

Schmidt lists AI risks and security vulnerabilities

“Is there a possibility of a proliferation problem in AI? Absolutely,” Schmidt stated, highlighting concerns that the technology could fall into the hands of malicious actors who could repurpose and misuse it.Schmidt pointed to evidence that AI models, whether closed or open-source, can be compromised to bypass their built-in safety mechanisms.“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt highlighted.Schmidt noted that while all major AI companies implement safeguards to prevent their models from answering dangerous questions—a decision he characterised as both well-executed and ethically sound—these protections are not foolproof.“All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature,” Schmidt said.

Gemini Comes To Chrome: How Google Is Improving Browser With AI – Explained

How AI can be exploited

AI systems face multiple forms of attack, with prompt injections and jailbreaking among the most concerning methods, according to the former Google CEO.In prompt injection attacks, hackers embed malicious instructions within user inputs or external data sources such as web pages or documents. These hidden commands trick the AI into performing unintended actions, including sharing private information or executing harmful commands.Schmidt expressed concern about the lack of effective safeguards to address AI’s proliferation risks.“There isn’t a good ‘non-proliferation regime’ yet to help curb the dangers of AI,” he stated, suggesting that current regulatory frameworks are insufficient to manage the technology’s potential misuse.Despite these warnings, Schmidt struck an optimistic tone about AI’s broader impact and argued that the technology is actually underestimated rather than overhyped.





Source link

  • Related Posts

    ‘I can’t wear a Rs 3 lakh watch around my friends’: Varun Chakravarthy’s heartfelt confession | Cricket News

    DUBAI, UNITED ARAB EMIRATES – SEPTEMBER 24: Varun Chakravarthy of India warms up ahead of the Asia Cup match between India and Bangladesh at Dubai International Stadium on September 24,…

    ‘Get to the bloody gym!’: Ravi Shastri reveals how Virat Kohli made fitness message clear to young players | Cricket News

    Ravi Shastri has spoken highly of Virat Kohli and how much the batter prioritizes his physical regime (Images via Getty Images) Former India head coach Ravi Shastri has praised Virat…

    Leave a Reply

    Your email address will not be published. Required fields are marked *

    en_USEnglish