I've read a lot of articles about the Singularity, and I've noticed a truly weird phenomenon: commenters (including techies) tend to dispute its plausibility on the grounds that 1) Moore's law is unsustainable, 2) computers don't work the same way as a brain, and 3) we haven't even simulated a brain yet. Even Bill Gates and other prominent industry figures make these claims.
I think that's insane. It's as if even software engineers don't understand AI or neurology at all.
That overlooks a few things:

a) People who work on AI are rapidly switching to real neural networks instead of traditional programs. By "real", I mean that they now leave a lot of room for "reflexes" and statistics in their artificial brains, instead of wiring and programming everything by hand. That's how the brain works: "It takes too much calculation to make a decision? Fuck it. I'll take a shortcut and do what my DNA tells me to do." (See the sketch after this list.)

b) Only recently has IBM started mass-producing cheap neuromorphic chips that work in a truly parallel way and that you use in bulk (we're talking millions of them). And only recently have AI specialists really taught themselves massively parallel programming.

c) We don't know how brains work? Yes we do. We don't know the details, but who cares. As I said, it's all about the brain teaching itself, and about quick cycles of artificial evolution. Building shitty artificial brains by trial and error is what will teach us how the human brain works and what intelligence is, not the other way around.

d) Moore's law is unsustainable? Not relevant. Massively parallel designs don't depend on ever-faster single cores. Your brain is super duper slow: a neuron fires fewer than ~100 times per second, that's about 100 Hz, as opposed to the several GHz of a regular CPU. It's all about billions of neurons (and trillions of synapses) firing in an intricate, semi-structured way. (Back-of-envelope below.)
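To make (a) and (c) concrete, here's a minimal sketch of learning by trial and error. It's my own toy example in Python (the network size, learning rate, and iteration count are arbitrary illustrative choices, not anyone's published setup): nobody writes the XOR rule into the code; a tiny two-layer network starts with random weights and stumbles its way to the answer by being corrected.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: the rule we never hand-code

    W1 = rng.normal(size=(2, 4))  # input -> hidden weights, initially random (a "shitty brain")
    W2 = rng.normal(size=(4, 1))  # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        h = sigmoid(X @ W1)    # hidden-layer activity
        out = sigmoid(h @ W2)  # the network's current guess
        err = y - out          # how wrong the guess is
        # Nudge every weight in the direction that shrinks the error (backpropagation).
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 += 0.5 * h.T @ d_out
        W1 += 0.5 * X.T @ d_h

    print(out.round(2))  # ends up close to [[0], [1], [1], [0]]

The point isn't the math; it's that the correct behavior was never programmed. It emerged from random wiring plus feedback, exactly the trial-and-error loop described above.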
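And a quick back-of-envelope for (d), using ballpark figures I'm assuming here (roughly 86 billion neurons, ~100 Hz peak firing rate, a 4 GHz quad-core CPU):

    # Each unit is absurdly slow, but there are a LOT of them.
    neurons = 86e9            # rough human neuron count (assumed ballpark)
    firing_rate_hz = 100      # a neuron spikes at most ~100 times/second
    brain_events = neurons * firing_rate_hz   # ~8.6e12 parallel events/s

    cores = 4
    clock_hz = 4e9
    cpu_ops = cores * clock_hz                # ~1.6e10 ops/s

    print(f"brain ~{brain_events:.1e} events/s, cpu ~{cpu_ops:.1e} ops/s")
    # The brain's units are ~40 million times slower apiece, yet the whole
    # thing still does ~500x more raw events per second, purely via parallelism.

Crude numbers, but they show why "single cores stopped getting faster" doesn't kill the argument.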
To me, there's no doubt: the Singularity is on its way. We might fucking see it before we die! I'm so scared and thrilled at the same time.
It will come gradually. And the scary thing is that the very nature of intelligence means we won't know exactly when artificial brains become smarter than humans. We won't know which tasks should or shouldn't be handed over to them. And because intelligence can only emerge from randomness and neuronal "shortcuts" (reflexes, misconceptions, stubbornness, etc.), it will be impossible to set simple rules like "don't harm a human".