Yes. I've rewritten this post a couple of times because it ended up getting rather lengthy. I even considered putting it on my blog instead, but here's basically what I think, shorter version.
Like you, I think that certain concepts and words like Moore's law, exponential growth and neural networks are buzzwords that people in the field toss around to make people intrigued and fascinated by what it is they're doing. Certain points in the blog post I linked to are downright silly - no, a person from the 17th century wouldn't die from shock or some kind of sensory overload if transported to our time; even though a lot has changed, the most fundamental activities, concepts and mechanics of our world would be quite recognisable, and have scaled in quite expected ways - houses are bigger, vehicles faster and people more numerous. And while the internet is mind-boggling, it's not exactly imposing in a way that shuts down your brain.
Yes, this is so nonsensical that I decided not to take it literally. For one thing, there are numerous examples throughout history of people from societies with stone-age or medieval-level technology being brought to visit "modern" cities. They don't always thrive, but I've yet to hear of anyone dying from having their mind blown by it.
However, I'm not very into computing or robotics or AI at all, really, so I've chosen to focus on something that I think has been overlooked by AI enthusiasts (where I believe they fail to think outside the box): the psychology of a superintelligent artificial entity.
The most dystopian scenarios described by AI visionaries depict superintelligent robots the way they're portrayed in movies like Terminator and The Matrix: entities driven by a sense of revenge, greed for world dominance, or just a general aversion towards humans. Alternatively, they're regarded as potential benefactors, like kind gods who will deprive humans of any authority and destructive capacity, and let us live in peace and harmony, like sheep on an endless pasture. But all those qualities are the result of psychological processes and characteristics, not of pure reasoning.
The machines in The Matrix behave just like we're used to seeing organic species behave on Earth, but all organic species we know of are programmed by evolution to spread and populate and survive. A computer is not.
That raises the question - will pure intelligence also necessarily come with an agenda? A drive to do something? A need for something?
Ultimately, we humans do stuff because we have motives, which derive from biological needs. We have pleasure centers to satisfy, and we have a strong instinct to survive, which operates first on an individual level (protect ourselves), then on a community level (protect our family and tribe), and so on. We have a moral compass because it's evolutionarily relevant for a tribe's survival that its members are morally competent individuals. We're mostly sympathetic and considerate to our peers, but can turn cruel and dominant if we sense that it's necessary for us to climb a few steps on the hierarchical ladder.
Will a computer develop a will to survive? Why? Is it logically sound to exist? Existing is only rational if it gives you pleasure, and pleasure comes from hormones. My latest MacBook Air didn't come with hormone glands last I checked. Just kidding, I haven't checked (I don't know how to open the damn thing). It's not necessarily reasonable to exist - the only effect is that you're vulnerable to events that will surely kill you no matter how high your IQ: meteors, the sun's collapse, the terminal heat death, the big crunch, or whatever.
There's no real reason to fear that the computer, no matter how intelligent, won't still be our slave - just as we (most of us) are slaves to our instinct to survive - because it makes no sense for it to go against that.
Let's take a step back and ask why we would want "strong AI" in the first place. Well, the problem is that the expert systems we've been getting pretty good at building, while quite impressive, reach a limit at a certain point. They work fine within certain parameters, but when they fail, they often produce answers that are way off (there are some "optical illusions for AIs" in one of the links I posted above that vividly demonstrate the problem), because they lack "common sense". And there are some levels of meaning they can't decode at all: you might be able to get coherent "literal" translations out of them, but ask them to detect and understand ambiguity, subtext, humor or poetry and they fall down badly. Try to get one to pass an unrestricted Turing test, and a clever tester will eventually trip it up catastrophically.
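For what it's worth, those "optical illusions" (adversarial examples) are usually made by adding a tiny, deliberately chosen bit of noise to an image. Here's a minimal sketch of the idea, assuming a hypothetical PyTorch image classifier `model` that returns class scores - not anything from the linked post, just an illustration:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.03):
    # `image` is a batch of pixels, `label` the correct class index (both hypothetical).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny bit in whichever direction most increases the loss;
    # to a human the picture looks unchanged, but the classifier's answer can flip.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()
```

The point isn't the code itself, but that a system with no "common sense" can be pushed into confidently wrong answers by changes a person wouldn't even notice.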
Most AI researchers believe that in order to create systems that can solve tasks like these, we need artificial general intelligence (AGI): a generalized system with the power and flexibility of the human mind. And some go further and believe that achieving this requires the system to be conscious and self-aware (strong AI). (Others take the behaviorist, Turing-inspired view that we only need to worry about behavior, not whether there's any "ghost" in the machine.) Why? Well, partly because humans are conscious, and the most obvious explanation of why we're not just zombies
or robots whose brains process input and produce commands for our bodies to carry out without any awareness at all is that self-awareness is somehow a necessary component of higher-level intelligence. So the goal becomes to achieve that same spark of consciousness in a computer.
When we're discussing strong AI, then, we're talking about a conscious, self-aware mind. It's a little bit fuzzy exactly what we mean by that, but I think most would agree that it implies that it has
some sort of psychology (instincts, emotions, drives), and that it can form goals and opinions independently, through internal reflection. Being a creature of "pure reasoning" seems somehow at odds with the idea of self-awareness: it's basically what computers do now, and what we're trying to transcend. Also, if we look at intelligence in animals (or in very young children), they seem to possess different forms and degrees of sentience and self-awareness, and it seems reasonable to assume that higher-order consciousness in humans rests on such simpler forms, and on more primitive, instinctual processes. So to achieve consciousness in a computer, we might want to (or have to) endow it with similar instincts. This could be done in various ways: we could try to specifically encode them into the pathways of its brain, or, if we use an evolutionary algorithm to progress from simpler "brains" to more sophisticated ones, they might emerge from the context of the "game world" by natural selection (a toy sketch of that second idea follows below).
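To make that second route concrete, here's a toy sketch of the sort of evolutionary loop I mean - all names, numbers and the fitness function are hypothetical stand-ins, not a real research setup. The point is that a survival-flavoured "instinct" gets selected for rather than hand-coded:

```python
import random

def fitness(brain):
    # Hypothetical stand-in for "how long this brain's agent survives in the game world".
    target = [0.5] * len(brain)
    return -sum((w - t) ** 2 for w, t in zip(brain, target))

def evolve(pop_size=50, brain_size=10, generations=200):
    # Start with a population of random "brains" (here just weight vectors).
    population = [[random.uniform(-1, 1) for _ in range(brain_size)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]                   # selection: the longest-lived half
        children = [[w + random.gauss(0, 0.1) for w in parent]   # mutation: noisy copies of survivors
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)
```

Whatever behaviour keeps an agent "alive" longest in the game world gets amplified generation after generation - nobody ever writes "try to survive" into the code, it just falls out of the selection pressure.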
And at that point, it's hard to predict exactly what it will do, even if we know its fundamental instincts, partly because it will be the first of its kind, and partly because the enthusiasts assume it will quickly become more intelligent than us, so we won't be able to follow the progress of its thinking. (Of course, even a much simpler and more familiar mind, like a dog or a child, can be highly unpredictable, and may resist instruction or commands.) On top of that, we may not actually be able to precisely describe or fully know the instincts and motivations that drive it, depending on exactly how the breakthrough is achieved. That's already a problem with digital neural networks: we can't really explain exactly why they behave the way they do.
So I think the concern is reasonable, if we accept that creating a self-aware mind in a computer is possible in principle. Still, I think the Kurzweilians overestimate the link between increased processing power and increased (potential) intelligence, and between potential intelligence (as limited by brain complexity) and actual capability, underestimating the role of training, interaction, experience and absorption of information. (Keep in mind that a strong AI is no more going to be able to suck in all the information on the Internet than I am able to learn a subject just by flipping through a book on it.) Some of these projections about superintelligence and "intelligence explosions" sound like the film Lucy.
Maybe try to keep it on-topic, eh? And I think conflating AI in general with the singularity and with transhumanists is a bit unfair; claiming that "AI is horse shit" doesn't really hold up.