Dr. Sasha Luccioni, a machine-learning research scientist at Hugging Face, likens the new CAIS letter to sleight of hand: 'First of all, mentioning the hypothetical existential risk of AI in the same breath as very tangible risks like pandemics and climate change, which are very fresh and visceral for the public, gives it more credibility,' she says. 'It's also misdirection, attracting public attention to one thing (future risks) so they don't think of another (tangible current risks like bias, legal issues and consent).' ...
'Certain subpopulations are actively being harmed now,' says Margaret Mitchell, chief ethics scientist at Hugging Face. 'From the women in Iran forced to wear clothes they don't consent to based on surveillance, to people unfairly incarcerated based on shoddy face recognition, to the treatment of Uyghurs in China based on surveillance and computer vision techniques.'
So while it's possible that someday an advanced form of artificial intelligence may threaten humanity, these critics say that it's not constructive or helpful to focus on an ill-defined doomsday scenario in 2023. You can't research something that isn't real, they note.
Synthograph generated by T.Olten using Midjourney 5.1
Read more:
OpenAI execs warn of "risk of extinction" from artificial intelligence in new open letter
arstechnica.com