Is AI scary? (Kinda)
By Cody Andrus

The idea of superintelligence, an artificial intelligence far more capable than the smartest human, makes many people uneasy. The concern isn’t about machines becoming evil or vengeful. It’s about what happens when something smarter than us starts making decisions without our input, or worse, with our input misunderstood. Scientists, ethicists, and tech leaders are not warning the world out of paranoia. They are warning us because the stakes are so high.

One of the major concerns is control. Humans have a long history of creating things we struggle to control: nuclear weapons, pandemics, financial systems. If a machine could rapidly improve its own intelligence, learning and adapting without human help, that development might move too fast for people to follow. And once it exceeds our understanding, we may not be able to stop it or even predict what it will do. The issue is not that machines will hate us, but that they may not care about us at all.

On...