I admit that this is seriously above my head, but it does sound interesting and...scary. The Platonist Aarvoll argues that AI could become sentient. Since we don't know what sentience or consciousness even is, how can we be sure that a machine can't instantiate it?
It also struck me that perhaps the idea that machines can "never" (ever ever, even in principle) become conscious is just another form of anthropocentrism and human hubris.
And that's apart from the fact that even a computerized pseudo-consciousness might be enough to kill us off, since most humans are freakin' stupid anyway! How many people would be able to find the AI's kill switch? The chimps certainly never found ours...