People must be careful of risks while developing technologies: AI expert
Visitors explore cutting-edge AI gizmos at the summit on Wednesday

New Delhi: Current frontier models of AI are developed by a handful of companies in countries such as the US and China, while most other countries are "passive victims of things they built", Yoshua Bengio, professor of computer science at the University of Montreal, said at the AI Impact Summit in New Delhi on Wednesday. Emphasising that it is "unacceptable" for only a handful of countries to lead this domain, Bengio, who is widely considered an AI pioneer, said nations must take this up at the highest diplomatic level.

Frontier models in AI are the most advanced, large-scale, general-purpose machine learning models that currently push the boundaries of capability, multimodality (text, image, audio, video) and performance.

Bengio said it is not just a question of morality but also of sovereignty, adding that it is tied to the concentration of power. "If AI capability continues to grow, there's a real possibility that there'll be a huge discrepancy, even more than there is now, between the models, let's say, in the US and China and the models that are being developed in other countries. And that could give those two countries or whoever is leading huge economic power...", he said, noting that "…the stability, the geopolitical stability that we've known since the Second World War could just go up in flames. I'm not saying it's going to happen, but when you introduce so much power and when it is concentrated in such a way, there's a real danger that you're going to break the house." These observations are significant at a time when India is spearheading the campaign for democratising AI.

Asked what India should be careful about with regard to AI, Bengio said people should be mindful, while developing these technologies, of the effects and risks that will affect society. "We need to understand it scientifically. We need to understand it socially because there's a social component psychologically in the case of AI because we're talking about systems that interact with people and language. So, a country like India could contribute to that understanding," he said.

He also said there is a dire need to carry out independent risk assessments before deploying open-source AI models. "If the risks are not too large compared to the benefits, because clearly there are benefits to sharing, in particular in developing countries like here in India, then sure, you should absolutely go open. But if you see that the risks cross a threshold of social acceptability, then you should not. So this way we might be able to get the benefits of open source when it makes sense and we could prevent catastrophic uses otherwise," Bengio said.

The professor also drew a parallel with medicines, where thorough risk assessment is required before sale is allowed, and said the same should apply before AI is deployed for public use. "You can't do whatever you want that's going to make you money. You need to first show to an independent party, like representing the govt, that your product is not going to be harmful. But there's no such thing right now. It's a scandal," he said.


