Artificial intelligence as dangerous as pandemics, experts say
‘The Guardian’ recently published an article reporting that a group of leading technology experts from around the world have warned about the societal risks of artificial intelligence (AI). According to them, it should be treated as an ‘extinction risk’ in the same category as pandemics and nuclear war.
The statement, which carries hundreds of signatures, was released by the Center for AI Safety on Tuesday, 30th May. It says that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. Among the signatories are the CEOs of Google DeepMind, OpenAI (the maker of ChatGPT) and the AI startup Anthropic. Other signatories, including experts associated with OpenAI, expressed existential fears and called for regulation of the technology. They warn that AI could significantly disrupt the job market and potentially harm the health of millions of people, and that it could be weaponized to spread misinformation, discrimination and fraud.
ChatGPT created a global sensation after its launch last November. Since then, the large language model has been used by millions of people, and the technology has advanced faster than predicted. Recognizing this, experts have been issuing warnings ever since.