Is Skynet really something to worry about?

When machines outsmart humans
http://www.cnn.com/2014/09/09/opinion/bostrom-machine-superintelligence/index.html?hpt=hp_t3

I suppose someone should worry about this sort of thing, but much like the Cambridge Project for Existential Risk, I think it is largely a total waste of time (for society, of course; for the individuals involved it can be a very amusing way to pass the time). Once we as a society have reached the point where we are capable of producing AI that can outsmart us, it isn't likely to happen just once. Indeed, given that technology builds on itself, it is highly likely that many groups (dozens, perhaps more) will achieve the same breakthrough at just about the same time, so what is the chance that _all_ of these groups are working from the same rule book? Even in the unlikely event that they are, what is the chance they are doing it correctly (meaning in such a way as to avoid the extinction of the human species)? When I babble about 'Skynet' from time to time, it is really just a tack-on to the general apocalypse scenarios I run off about.

Anyway, I firmly believe that machine intelligence (i.e., intelligence greater than our own) is inevitable in the not-too-distant future. Unless we kill ourselves with a "12 Monkeys" event, of course. Were we to somehow design an AI that doesn't feel the need to destroy the human species (I imagine that outcome quite regularly; I can't imagine an AI not despairing at our idiocy and wanting the peace of mind of knowing we won't spread like a plague across the universe), what then? We would just be pets then, to coddle and take care of…

Author: Tfoui

He who spews forth data that could be construed as information...