I haven’t really bought into the fear of superintelligent AIs, generally believing such fears are driven by a mix of ego and ignorance at this point. So I love this quote from a recent Scientific American post on the topic: 🖊
For all we know, human intelligence is already close to the universal maximum possible. I suspect the answers can only come from experiment, or the development of that fundamental theory of intelligence.
I probably also like this post because the author thinks the bigger concern is the potential for super-convincing fakes of all kinds. Agreed! The post is titled “The Erosion of Reality”, so yeah, that’s the fear. 👹