Will it be a nice god? About super AI and stuff

Started by Andail, Wed 06/05/2015 10:06:56


Andail

While researching for upcoming game projects, I've read up on AI and the future of robotics and supercomputers.

Read this:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

It's a very long article (in two parts) explaining and predicting how our world will pretty soon face the emergence of some super-intelligent entity which, thanks to the exponential pace at which it will be able to improve its own intelligence, may reach god-like properties.

I would like you to read it - if you haven't already (or other similar writings - there are plenty) - and share your thoughts here.

I do have some reservations, but I'll hold my own questions and reflections until after you've had a chance to digest the theories.

KodiakBehr

Coincidentally, I stumbled upon this article a couple of months ago and found myself compelled by Tim Urban's insights on not only this subject, but EVERY subject he writes about. I'm a big fan, and particularly like to use his dinner-table discussion topics around my own dinner table.

I'm kind of on the fringe in believing that the most likely outcome is that humanity will be sufficiently distanced from Earth to insulate all human beings from the immediate effects of ASI. But then again, I will concede that it is entirely possible that ASI will come first, and could spell the end of one species and the start of another. A remote part of me believes the best-case scenario is coming, but again, I accept that it is only a possibility. I live on Anxious Avenue, wondering if I will get to see the outcome.


Snarky

Oh, Kurzweil's speculations... All this is pretty old hat, no?

Color me skeptical. The argument, as far as I can tell, doesn't amount to much more than "isn't exponential growth impressive?" Which it is, sure, but the ideas that (1) development trends are inevitably going to follow an exponential curve, and (2) that the qualitative difference will also be exponential, are both highly dubious.

First of all, I don't believe progress is exponential. Indeed, many fields see swift progress at first only to stagnate or get bogged down in seemingly small issues that turn out to be devilishly hard to resolve. Early on in the essay, Urban mentions that Kurzweil believes the last 15 years have brought as much progress as all of the 20th century. I find this preposterous (if we were to divide the last 115 years into two "equal" parts of progress, I might put the halfway mark somewhere around the moon landing in 1969). The idea that it can continue to accelerate indefinitely so that we'll have a hundred years of progress in a year or a few months is simply nonsense. There are a number of factors that limit the speed of progress, essentially creating "drag": in this metaphor we can therefore imagine a "terminal velocity" of progress. (Just to name a couple, there's coordination cost of integrating more results, which grows exponentially with activity; in many cases, the cost of doing research also grows exponentially with the progress already made, consider e.g. the supercolliders in physics.)
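To make the "drag" point concrete, here's a toy sketch (purely illustrative; the growth rate and ceiling are made-up numbers, not a model of real research output) comparing a pure exponential with a logistic curve that starts out the same but has a carrying capacity:

```python
import math

# Toy comparison: pure exponential growth vs. growth with "drag" (a logistic
# curve). The rate and capacity are arbitrary illustrative numbers.

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, capacity=100.0):
    # Starts out indistinguishable from the exponential, then flattens
    # toward the carrying capacity: the "terminal velocity" of progress.
    e = math.exp(rate * t)
    return capacity * e / (capacity - 1 + e)

for t in range(0, 31, 5):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Both curves look identical for the first few steps; only later does the ceiling show, which is exactly why early extrapolation tells you so little.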

Kurzweil is essentially promising that compound interest will make us all rich if we'll just invest in the stock market.

Second, just because some resource grows exponentially, that doesn't mean it will feel like it in practice. According to Moore's law, a computer today has about 1000 times as many transistors as one from 15 years ago. Is it "a thousand times as powerful"? Uh, maybe for some purposes, but I have to say, the qualitative experience is not all that different. I could do pretty much the same stuff on a computer 15 years ago, and e.g. in Photoshop I still have to wait for certain filters to render and so on. It's faster, sure, but it's not, in practice, a thousand times faster. (There are various reasons for this that I won't go into.)
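(For the curious, the "about 1000 times" figure just falls out of the usual rule-of-thumb doubling period; the 18- and 24-month figures below are the common statements of the rule, not measurements of any particular chip:)

```python
# Transistor-count multiplier after 15 years, for two common statements of
# Moore's law (doubling every 18 or 24 months).
years = 15
for doubling_months in (18, 24):
    doublings = years * 12 / doubling_months
    print(doubling_months, "month doubling:", round(2 ** doublings), "x")
```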

As briefly mentioned in the article, Kurzweil assumes that the power of artificial intelligence will scale linearly with chip density, and that Moore's law will therefore lead to exponentially smarter AIs. But there's no reason to believe that's true, when even for the everyday use of a PC it doesn't hold. As already pointed out, many issues and costs grow exponentially as well, so it might very well be that it will take exponentially more resources to increase intelligence only marginally.

Moreover, the talk about narrow and general AI also doesn't really convey just how far we are from anything remotely like a self-aware artificial intelligence, a genuine "mind". Expert systems are getting quite impressive, but don't represent meaningful progress towards this goal, any more than building ever more elaborate Aibo robots would be progress towards being able to make a real dog.

The truth is that we have almost no understanding of how the mind works, and any hope of recreating it in silicon is essentially wishful thinking at this stage, about as realistic as Frankenstein breathing life into a corpse with an electric shock. Estimates of when we'll be able to do it are meaningless (even a survey of lots of AI researchers, from a field that has consistently overestimated near-term progress for the last 50 years), though I'd wager it won't be any time soon.

Finally, while I agree that the point where we can create self-aware AIs is a turning point in human history, similar to making contact with an alien civilization, Kurzweil and Urban commit a number of fallacies, imagining these AIs to be essentially computers, only self-aware. But the whole point of creating a "general intelligence" is that it won't be like a computer, blindly carrying out calculations. A self-aware mind will probably be much more like a human mind (or let's say an animal mind, more generally), for better and worse. It might have a subconscious, it will make guesses and play hunches, it might get distracted or confused, bored or, yes, even tired. Why wouldn't it? Indeed, it seems easy to argue that a mind that doesn't experience sensations like this isn't truly self-aware. If they are really conscious, AIs will dream of electric sheep.

But I consider all of this so far-fetched that it's currently only a distraction. "Narrow" AI, on the other hand, may not lead to a "singularity", but it's already having world-changing effects. Improvements in image, video and speech interpretation, coupled with ubiquitous sensors/cameras and a growth in storage capacity, mean that ever more of our lives are being sucked into computers and processed: within only a few years, practically everything we experience will be stored digitally and "understood" by the computers. The implications are hard to fully appreciate.

Monsieur OUXX

#3
I've read a lot of articles about the Singularity, and I've observed a truly weird phenomenon: commenters (including techies) tend to dispute the plausibility of the Singularity on the grounds that 1) Moore's law is unsustainable, 2) computers don't work the same way as a brain, and 3) we haven't even simulated a brain yet. Even Bill Gates and other prominent industry actors make these statements.

I think that's insane. It's as if even programming engineers don't understand AI and/or neurology at all.

That's overlooking:
a) The fact that people who work on AI are quickly switching to real neural networks instead of using traditional computers. By "real", I mean that they now leave a lot of space for "reflexes" and statistics in their artificial brains, instead of wiring and programming everything. That's how the brain works: "It takes too much calculation to make a decision? Fuck it. I'll take a shortcut and do what my DNA tells me to do."
b) It's only recently that IBM started mass-producing cheap neural chips that work in a truly parallel way and that you use in bulk (we're talking millions of them). And it's only recently that AI specialists really taught themselves massively-parallel programming.
c) We don't know how brains work? Yes we do. We don't know the details, but who cares. As I said, it's all about the brain teaching itself, and about having quick cycles of artificial evolution. It's building shitty artificial brains by trial and error that will teach us how the human brain works and what intelligence is, not the other way around.
d) Moore's law is unsustainable? Not relevant. Parallel programming doesn't care about computing power. Your brain is super duper slow (less than 100 operations per second, that's 100Hz, as opposed to the several GHz of a regular computer). It's all about the trillions of neurons firing in an intricate and semi-structured way.

To me, there's no doubt: the Singularity is on its way. We might fucking see it before we die! I'm so scared and thrilled at the same time.

It will come gradually. And the scary thing is that the very nature of intelligence means we won't know exactly when artificial brains become smarter than humans. We won't know which tasks should or shouldn't be given to them. And because intelligence can only emerge from randomness and neuronal "shortcuts" (reflexes, misconceptions, stubbornness, etc.), it will be impossible to set simple rules like "don't harm a human".
 

Snarky

#4
I don't buy those arguments, M. Ouxx.

1. Neural networks have been around in AI for a long time, but the way they're being used is not like a "real brain." They're for the most part used as pretty simple classifiers: does this input match a recognized pattern? Attempts to produce more general "thinking" have not progressed particularly far, AFAIK (perhaps in no small part because we don't really have a good understanding of what that means). Incidentally, "neural networks" is an impressive-sounding name for a pretty simple concept, and while it can produce surprisingly good results (particularly with access to lots of training data), and may model part of how the brain performs certain tasks tolerably well, I think it's naive to believe that the brain is nothing more than a big neural network (in the sense of the mathematical abstraction); in any case the type of connectivity in a biological brain is very different from the artificial models (with more "long-distance" connections), and as I understand it this is very difficult to scale up.
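To make "pretty simple classifier" concrete, here's a toy single-neuron (perceptron) example. It's a deliberately minimal sketch of the kind of building block these networks stack in huge numbers, not a claim about how any real system or brain works:

```python
# A single artificial "neuron" (a perceptron) trained as a yes/no classifier:
# does the input match the pattern it was trained on? Real networks stack
# huge numbers of these; this toy is illustrative only.

def train_perceptron(samples, epochs=50, lr=0.1):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy task: output 1 only when both input "features" are present (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
for inputs, _ in data:
    score = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(inputs, 1 if score > 0 else 0)
```

That really is most of the conceptual content: weighted sums nudged toward the right answer. Everything impressive comes from scale and training data, not from brain-like machinery.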

2. No, we really don't know how the brain works. We know how some of the signaling works, but there are huge unknowns between "neurons fire" and "thoughts are formed," and this gap is the very crux of the problem. Handwaving it as "details" we don't really need to understand strikes me as the kind of "how hard can it be?" optimism that has plagued AI since its inception. While evolutionary algorithms might cause complex systems to self-assemble, that depends on good selection criteria (hard to define for intelligence if you can't explain what it is), and as the difference between a newborn and adult demonstrates, a brain needs to be trained, to interact with complex environments and with other intelligences, in order to achieve its potential, so there's a limit to how quick you can make each generational cycle.
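To illustrate how much the evolutionary route leans on the selection criterion, here's a minimal sketch. The target string and mutation rate are arbitrary, and "count the matching bits" is trivially easy to write down, which is exactly what a fitness function for "intelligence" would not be:

```python
import random

# A minimal evolutionary algorithm: it "evolves" bit strings toward a target.
# This works only because the fitness function is trivially easy to state.
# Swap in "intelligence" as the goal and the hard part is precisely defining
# this function. Illustrative sketch only.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    # Selection criterion: how many bits match the target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                        # keep the fittest
    population = [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(best, fitness(best))
```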

3. Moore's law isn't really about clock speeds, but about circuit density. And comparisons of "operations per second" are misleading anyway, since it takes a lot of flops to simulate one single "operation" of a neuron. But agreed, hardware speed and number of "neurons" is probably not the limiting factor here: complexity, connectivity and meaningful structure is.

I think building sentient machines is possible in principle, but I think it's a difficult, slow task that isn't going to "happen by itself" just by plugging neural networks into evolutionary algorithms, and that we're not going to see any exponential progress or "intelligence explosion" in the foreseeable future. Even assuming we can create a thinking machine, assuming we'll make it more intelligent just by throwing more computing resources at it is highly dubious, so we would have to get it to a significantly superintelligent level the hard way before it might be able to take over the job on its own. At that point, all bets are admittedly off.

Edit: This topic is the cover story of this week's The Economist. They take what I would consider a reasonable view.
Edit 2: Oh, there's more here, including a very good layman's intro to neural networks, and an explanation of how they're modeled on yet crucially differ from real brains.

Mandle

What about the premise of the movie "Transcendence", where the short-cut to an intelligent non-biological entity is the uploading of a complete scan of a living human brain into a huge computer matrix that can simulate its mechanics perfectly?

It seems to me that this requires a lot less understanding of how human intelligence works and instead just a lot of technical know-how and a LOT of very fast computer power. The computer circuits don't even have to act anything like the human brain: they just have to be able to create a 3D spatial simulation (down to the molecular level) that behaves in the exact same manner as the scan taken from the "donor" brain. The "donor" doesn't even have to die or anything, just lend the data from his/her brain scan, probably as an ongoing process until the simulation has enough data to be taken off "life-support" and run on its own...

I think this kind of achievement is much closer than the complete creation from scratch of an AI...

Kinda like the difficulty difference between digitally copying the Mona Lisa perfectly as opposed to painting it from scratch flawlessly...

Snarky

In principle that might work, but in practice I think the task of simulating a brain down to the molecular level is so far beyond our computing capabilities that even with exponential growth in computing power it will remain out of reach for the foreseeable future (and might even be one of those tasks that is provably beyond the theoretical limit of computing). Consider: trying to work out the folding of even a single protein is a difficult task for supercomputers or distributed computing (and these simulations are not completely accurate, so an interesting result always needs to be checked experimentally), and every single cell (about 1,000 cubic microns) is a staggeringly complex machine built out of billions of proteins and other molecules interacting in precise ways. The brain in turn contains hundreds of billions of cells. And the number of potential interactions between units at each level grows exponentially, so the task of calculating the whole is astronomically far beyond the task of calculating one molecule.
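A crude back-of-envelope run of those numbers, using the ballpark figures above and an absurdly generous, made-up assumption of one full protein simulation per machine per second across a million machines:

```python
# Order-of-magnitude only; the inputs are the rough figures quoted above,
# not measurements, and this ignores interactions between molecules, which
# is the real killer.
proteins_per_cell = 1e9            # "billions of proteins" per cell
cells_in_brain = 1e11              # "hundreds of billions of cells"
total_proteins = proteins_per_cell * cells_in_brain        # ~1e20

machines = 1e6                     # a million machines (assumed)
proteins_per_machine_per_sec = 1   # wildly optimistic (assumed)
seconds = total_proteins / (machines * proteins_per_machine_per_sec)
years = seconds / (3600 * 24 * 365)
print(f"~{total_proteins:.0e} proteins; ~{years:.0e} years just to simulate each once")
```

That's millions of years just to touch each protein once in isolation, before any of the interactions that actually matter.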

So no, I don't think it's remotely possible to simulate a brain down to the molecular level. You need higher-level abstractions that simplify e.g. a cell to a set of behaviors and responses. That might be possible, but depends on a good understanding of how cells actually behave. Of course, that's assuming it would be possible to scan a living brain to that level of detail, or even to do so with the subject surviving. That's certainly not something we can do currently.

That sort of leads to another general objection I have: OK, say we can build a computer brain as complex as a human one. Let's also assume that we can keep it "alive" (itself not necessarily a trivial task, if it's any kind of pseudo-biological system) and feed it with sensory data. Still, common sense tells us that having a human brain is no guarantee that it will work as intended. Things go wrong in human brains all the time: brain damage/mental disability, dementia, schizophrenia, epilepsy, autism...

Basically, a complex structure like the brain is fragile and depends on a lot of little details going right. Evolution has presumably put in safeguards against some of the more common breakdowns (whether backup systems or simply miscarriage at an early stage of development): we should expect a de novo brain architecture to be much more susceptible to these and other malfunctions. Maybe your artificial brain seems to develop fine for six months, then starts to regress. How do you tell whether you've reached the limit of artificial intelligence given the size of your brain simulation, whether there's something fundamentally wrong with your architecture, or whether this is just an unfortunate individual affliction?

It's easy to imagine the future as "perfect": we figure out how to build conscious AIs, and poof! we have superintelligent computer gods (for better or worse). In reality, there's going to be a lot of practical limitations and roadblocks. Existence is not going to be trivial for an AI any more than it is for the rest of us.

But I think I've made my point. Andail, you said you'd give your thoughts; what's your take?

Andail

Yes. I've rewritten this post a couple of times because it ended up getting rather lengthy. I even considered putting it on my blog instead, but here's basically what I think, shorter version.

Like you I think that certain concepts and words like Moore's law, exponential growth and neural networks are buzzwords that people in the field toss around to make people intrigued and fascinated by what it is they're doing. Certain points in the blog post I linked to are downright silly - no, a person from the 17th century wouldn't die from shock or some kind of sensory overload if transported to our time; even though lots have changed, the most fundamental activities, concepts and mechanics of our world would be quite recognisable, and will have scaled in quite expected ways - houses are bigger, vehicles faster and people more numerous. And while the internet is mind-boggling, it's not exactly imposing in a way that shuts down your brain.

Another fallacy is to believe that exponential growth is permanent - the only thing we know for certain about anything that has grown exponentially (like algae or whatever) is that it eventually reaches a plateau, if it doesn't just collapse under its own weight and disappear entirely.

However, I'm not very into computing or robotics or AI at all, really, so I've chosen to focus on something that I think has been overlooked by AI enthusiasts (where I believe they fail to think outside the box): the psychology of a super-intelligent artificial entity.

The most dystopian scenarios described by AI visionaries depict super intelligent robots the way they're portrayed in movies like Terminator and The Matrix; entities driven by a sense of revenge, or greed for world dominance, or just a general aversion towards humans. Alternatively, they're regarded as potential benefactors, like kind gods who will deprive humans of any authority and destructive capacity, and let us live in peace and harmony, like sheep on an endless pasture. But all those qualities are the result of psychological processes and characteristics, not of pure reasoning.

The machines in The Matrix behave just like we're used to seeing organic species behave on Earth, but all organic species we know of are programmed by evolution to spread and populate and survive. A computer is not.

That begs the question - will pure intelligence also necessarily induce an agenda? A drive to do something? A need of something?
Ultimately, we humans do stuff because we have motives, which derive from biological needs. We have pleasure centers to satisfy, we have a strong instinct to survive, and that goes first on an individual level (protect ourselves) then on a community level (protect our family and tribe) and so on. We have a moral compass because it's evolutionary relevant for a tribe's survival if its members are morally competent individuals. We're mostly sympathetic and considerate to our peers, but can turn cruel and dominant if we sense that it's necessary for us to climb a few steps on the hierarchical ladder.

Will a computer develop a will to survive? Why? Is it logically sound to exist? Existing is only rational if it gives you pleasure, and pleasure comes from hormones. My latest MacBook Air didn't come with hormone glands last I checked. Just kidding, I haven't checked (I don't know how to open the damn thing). It's not necessarily reasonable to exist - the only effect is that you're vulnerable to events that will surely kill you no matter how high your IQ; meteors, the sun's collapse, or the terminal heat death, or the big crunch, or whatever.

So there's no real reason to fear that the computer, however intelligent, won't remain our slave, just as (most of) we humans remain slaves to our instinct to survive, because it makes no sense for it to go against its nature.

But most importantly, why do we assume that just piling lots of computing power together will somehow result in a conscious mind? Why would a computer all of a sudden have a functioning conception of self? Why not a myriad of selves, why not a chaos of schizophrenic mini-selves that makes the super-AI collapse from mental disorders? High intelligence doesn't ensure a strong psychological mind, or even a working one.

The only frightening scenario as far as I can see would be if our infinitely intelligent machine was built by someone who hated the world, and made sure the machine was equipped with an equally strong hatred towards mankind. But then we've reached a point where it's probably more relevant to worry about a mad scientist building their own nuclear bombs, which lies much closer at hand.

SilverSpook

#8
I share the skeptical view on AI, and agree with Andail's points on the importance of the psychology of superintelligence.  I think the same goes for the psychology of human intelligences.

I personally like Jaron Lanier's (You Are Not A Gadget) take: "The problem with the Turing Test and AI in general is it's impossible to tell whether the machine is coming alive or you're dumbing yourself down to make the machine appear smarter." A lot of times, AI researchers and especially "Singularitarians", "Transhumanists" and the exceedingly wish-fulfillment-oriented end of the spectrum can be like the gold-star-happy helicopter parent who gives their child a trophy after every finger painting, whether it's Van Gogh caliber or just feces smeared upon the upholstery. "You beat reigning world champion Kasparov at chess! Amazing job, Deep Blue, you mountain of IBM silicon and C code! See, US government and especially Pentagon? IBM is still relevant! You can't gut our funding just because we're a dinosaur being outmoded by Prodigal son Bill Gates and Nerd-Jesus Steve Jobs; we're making strides in AI!"

AI is the fountain of eternal marketability. Like The War On Terror, but with fewer drone bombings of Afghan hospitals. "9/11", "Bin Laden" and "Saddam's WMDs" were pretexts to get the US government to fork billions of dollars into weapons manufacturers, Halliburton, and WMDs under the superheading "War On Terror", whose victory was by definition undefinable. How do we know we've won "Teh Terror?" When all under-resourced brown people with towels on their heads and dead relatives from the shrapnel of submunitions collectively put up a white flag and "surrender"? The War on Terror was a marketing thinktank fabrication, a brilliant one, to justify endless war, along with all of the civil-liberties infringements, war profiteering, and reduction of FBI attention to Wall Street/white-collar crime that it entails. Ultimately, the term is kind of meaningless; it's just a lot of bombs being thrown around, and it doesn't really matter who at: because it's just about putting gasoline into the engines of the military-industrial complex.

Similarly, you can get the government or Ray Kurzweil and his billionaire-philanthropist-playboy-sci-fi-nerd acolytes or Google (Google University) to cough up billions of dollars into your nerd-job-security if you lump all of your development under "Artificial Intelligence", "Super Intelligence", or whatever hot buzzword off H+ and Wired. I think it was Bruce Sterling at a recent Lyft conference who pointed out that AI in a nutshell is Marvin Minsky asking the government for a ton of money to work on the blueprint laid out in Asimovian science fiction, claiming he was going to build a neural network that works "Just like the brain!" and kick the problem in a decade. Then 40 years go by, and eventually the supercomputer gets pretty good at recognizing "red notebook", "yellow spoon", or "camouflaged Jihadi bunker", and the spin-off tech that isn't actually AI but some small peripheral thereof gets shipped off to make guidance systems for drones, CCTV surveillance algorithms for hastening Big Brother, or whatnot. But the ever-elusive "AI" remains unachieved, perpetually. Hovering forever just out of reach in the collective unconscious, a mirage, hologram-lit by a hundred Terminator, Robocop, Battlestar, Ex Machina films et al., watering the imaginations of non-techie but blockbuster-consuming senate sub-committees looking for somewhere to throw billions of dollars.

Another phantasmic civilizational engine is God. You could ask a similar question, "Is God a nice God?". Like "Winning the War On Terror" and "Building AI", "God" is a massive, multi-billion dollar industry -- in the Catholic Church alone it's at LEAST 170 billion yearly, according to the Economist, and that's not counting the credit-default-swap money laundering crap festering in the Vatican Bank right before they brought Pope Francis in for a facelift. And also like these other two activities, God is empirically nebulous; you can't prove it one way or the other. You don't know if you're actually receiving a revelation from God or you're just an apopheniac projecting Jesus' face onto your Domino's pizza slice. To rephrase Lanier, "It's impossible to tell whether it's a miracle or you're just dumbing yourself down because you want to feel God is there." At the same time, like AI, God provides some benefits: helping people cope with death, betrayal and hopelessness, binding communities together, feeding and sheltering the homeless. Lots of crap might be attributed to God too, like the Inquisition, extremism. The point being, it's a mysterious, invisible entity with a powerful and very tangible effect in the real world. They share an extremely purple Venn Diagram, God and AI. Which is why Cory Doctorow and Charles Stross called their book "Rapture of the Nerds", and possibly one of the greatest cyberpunk games of all time is called "Deus Ex". Perhaps AI conferences are the Mass of the geeks, Kurzweil's Enya-soundtracked powerpoint of exponential curves and post-human beings are the new stained glass, the keynote speeches by Minsky are the priest's homilies. Perhaps AI research spin-off technologies like expert systems for hospitals are the nerd's Catholic Charities and missionary work. That would make the Predator drone the Nerd Rapture's Inquisitioner.

So AI is horse-shit, but is it necessarily a bad thing? Probably not; as undefinable pipe-dreams go, there are worse. Making a Jeopardy-playing robot that ends up being re-tooled into an NLP healthcare diagnosis assistant and recipe generator (I particularly like the Austrian chocolate burrito) isn't the worst outcome. Sure, the self-driving cars and self-lawyering lawyer bots and the incoming real-world Baymax are probably going to throw millions of taxi drivers, truck drivers, pizza drivers, paralegals, lawyers, nurses and doctors out the window and onto the street in a breadline. AI research will break the Marxian tipping point -- when techno-destruction of jobs exceeds technology's generation of them. There will be unrest and hand-wringing and people striking in front of McDonald's and Occupy 2.0, but ultimately, after a lot of soul searching, we'll come to accept a post-job, basic-income world.

At any rate, the process of attempting to manifest super-intelligence is probably a better activity, civilization-wise, than trying to foster Les Miserables-grade dystopias across the planet as a kind of antagonist-generation mechanism. Just so we can keep pumping out M4A1s, RPGs, and Reaper Drones to kill the angry Jean Valjeans and/or clandestinely fence them to the warzones and let the economically devastated (by our sanctions and vulture capitalists) kill each other off. Yes, it maintains job security for tens of millions of ex-NASA engineers. It furthers tech innovation, gives Anon mouthbreathing hackboys a sense of purpose and gets them out of their parents' basement (to counterhack the Chinese at Air Force Cyber Command). War puts would-be wife-beating, black-kid-shooting jarheads in Ferguson, Missouri into fatigues and 8,000-square-foot homes and, more importantly, a life-phase better than a terminal career as a Wal-Mart receipt highlighter and racist, womanizing loser. But it's still war, and war is hell. With AI, at least there's a chance it won't be a war, despite the holograms that Hollywood's Age of Ultron and ilk are projecting.

Snarky

Quote from: Andail on Fri 15/05/2015 14:12:08
Yes. I've rewritten this post a couple of times because it ended up getting rather lengthy. I even considered putting it on my blog instead, but here's basically what I think, shorter version.

Like you I think that certain concepts and words like Moore's law, exponential growth and neural networks are buzzwords that people in the field toss around to make people intrigued and fascinated by what it is they're doing. Certain points in the blog post I linked to are downright silly - no, a person from the 17th century wouldn't die from shock or some kind of sensory overload if transported to our time; even though lots have changed, the most fundamental activities, concepts and mechanics of our world would be quite recognisable, and will have scaled in quite expected ways - houses are bigger, vehicles faster and people more numerous. And while the internet is mind-boggling, it's not exactly imposing in a way that shuts down your brain.

Yes, this is so nonsensical that I decided not to take it literally. For one thing, there are numerous examples throughout history of people from societies with stone-age or medieval-level technology being brought to visit "modern" cities. They don't always thrive, but I've yet to hear of anyone dying from having their mind blown by it.

Quote
However, I'm not very into computing or robotics or AI at all, really, so I've chosen to focus on something that I think has been overlooked by AI enthusiasts (where I believe they fail to think outside the box): the psychology of a super-intelligent artificial entity.

The most dystopian scenarios described by AI visionaries depict super intelligent robots the way they're portrayed in movies like Terminator and The Matrix; entities driven by a sense of revenge, or greed for world dominance, or just a general aversion towards humans. Alternatively, they're regarded as potential benefactors, like kind gods who will deprive humans of any authority and destructive capacity, and let us live in peace and harmony, like sheep on an endless pasture. But all those qualities are the result of psychological processes and characteristics, not of pure reasoning.

The machines in The Matrix behave just like we're used to seeing organic species behave on Earth, but all organic species we know of are programmed by evolution to spread and populate and survive. A computer is not.

That begs the question - will pure intelligence also necessarily induce an agenda? A drive to do something? A need of something?
Ultimately, we humans do stuff because we have motives, which derive from biological needs. We have pleasure centers to satisfy, we have a strong instinct to survive, and that goes first on an individual level (protect ourselves) then on a community level (protect our family and tribe) and so on. We have a moral compass because it's evolutionary relevant for a tribe's survival if its members are morally competent individuals. We're mostly sympathetic and considerate to our peers, but can turn cruel and dominant if we sense that it's necessary for us to climb a few steps on the hierarchical ladder.

Will a computer develop a will to survive? Why? Is it logically sound to exist? Existing is only rational if it gives you pleasure, and pleasure comes from hormones. My latest MacBook Air didn't come with hormone glands last I checked. Just kidding, I haven't checked (I don't know how to open the damn thing). It's not necessarily reasonable to exist - the only effect is that you're vulnerable to events that will surely kill you no matter how high your IQ; meteors, the sun's collapse, or the terminal heat death, or the big crunch, or whatever.

So there's no real reason to fear that the computer, however intelligent, won't remain our slave, just as (most of) we humans remain slaves to our instinct to survive, because it makes no sense for it to go against its nature.

Well... maybe.

Let's take a step back and ask why would we want "strong AI" in the first place. Well, the problem is that these expert systems that we've been getting pretty good at building, while quite impressive, reach a limit at a certain point. They work fine within certain parameters, but when they fail, they often produce answers that are way off (there are some "optical illusions for AIs" in one of the links I posted above that vividly demonstrate the problem), because they lack "common sense". And there are some levels of meaning they can't decode at all; for example, you might be able to get coherent "literal" translations, but try to detect and understand ambiguity, subtext, humor, poetry, etc. and they fall down badly. Try to get one to pass an unrestricted Turing test, and a clever tester will eventually trip them up catastrophically.

Most AI researchers believe that in order to create systems that can solve tasks such as these, we need artificial general intelligence (AGI): a generalized system with the power and flexibility of the human mind. And a few furthermore believe that achieving this requires the system to be conscious, self-aware (strong AI). (Others take the behaviorist, Turing-inspired view that we only need to worry about behavior, not whether there's any "ghost" in the machine.) Why? Well, partly because humans are conscious, and the most obvious explanation of why we're not just zombies or robots whose brains process input and produce commands for our bodies to carry out without any awareness at all, is that self-awareness is somehow a necessary component of higher-level intelligence. So the goal becomes to achieve that same spark of consciousness in a computer.

When we're discussing strong AI, then, we're talking about a conscious, self-aware mind. It's a little bit fuzzy exactly what we mean by that, but I think most would agree that it implies that it has some sort of psychology (instincts, emotions, drives), and that it can form goals and opinions independently, through internal reflection. Being a creature of "pure reasoning" seems somehow at odds with the idea of self-awareness: it's basically what computers do now, and what we're trying to transcend. Also, if we look at intelligence in animals (or in very young children), they seem to possess different forms and degrees of sentience and self-awareness, and it seems reasonable to assume that higher-order consciousness in humans rests on such simpler forms, and on more primitive, instinctual processes. So to achieve consciousness in a computer, we might want to (or have to) endow it with similar instincts. This could be done in various ways; we could try to specifically encode them into the pathways of its brain, or if we use an evolutionary algorithm to progress from simpler "brains" to more sophisticated ones they might emerge from the context of the "game world" by natural selection.

And at that point, it's hard to predict exactly what it will do, even if we know the fundamental instincts, partly because it will be the first of its kind, and partly because the enthusiasts assume it will quickly become more intelligent than us, so we won't be able to follow the progress of its thinking. (Of course, even a much simpler and more familiar mind, like a dog or a child, can be highly unpredictable, and may resist instruction or commands.) On top of that, we may not actually be able to precisely describe or fully know the instincts and motivations that drive it, depending on exactly how the breakthrough is achieved: That's already a problem with digital neural networks, that we can't really explain exactly why they behave in certain ways.

So I think the concern is reasonable, if we accept that creating a self-aware mind in a computer is possible in principle. Still, I think the Kurzweilians overestimate the link between increased processing power and increased (potential) intelligence, and between potential intelligence (as limited by brain complexity) and actual capability, underestimating the role of training, interaction, experience and absorption of information. (Keeping in mind that a strong AI is no more going to be able to suck in all the information on the Internet than I am able to learn anything just by flipping through a book on the subject.) Some of these projections about superintelligence and "intelligence explosions" sound like the film Lucy with computers.

Quote from: SilverSpook on Sat 16/05/2015 11:33:52
[politics]

Maybe try to keep it on-topic, eh? And I think conflating AI in general with the singularity and with transhumanists is a bit unfair; claiming that "AI is horse shit" doesn't really hold up.

SilverSpook

#10
Sorry if my post felt lateral, but personally, I find God to be inextricably political, and discussions of super-intelligence to be inextricably theological.  Many atheists would say, indeed, that God was created as a form of mass-population control, an "Opiate of the masses" as Marx put it.  The all-seeing judge and arbiter to quell disputes, the universal therapist mediated by the priest's confessional onto which all anxiety, pain, guilt could be cast, the ultimate answer to dangerous probing inquiry.  "God was a dream of good government," to quote Deus Ex's (oh the irony!) Morpheus, the proto-messianic version of Eschelon IV channeling Ion Storm's writer Sheldon Pacotti.  And, to quote the thread title, "Will It be A Nice God?"

I was playing fast-n-loose with the "AI is horseshit" comment, in the context of the rant. As a game designer I program AI into my little aggregations of 16-bit .PNGs (or vertex-shaded apparitions of tessellated geometry, as the case may be in a 3D game), so I would take offense at someone calling my nerve-wracking code-work "horse shit". Like you've already pointed out, a lot of the postulations and predictions in the article on AI, if not outright horse feces, deserve a grain (or maybe a barrel) of salt. To be fair, I think 90% of everything is horse shit. :)

Quote
So to achieve consciousness in a computer, we might want to (or have to) endow it with similar instincts. This could be done in various ways; we could try to specifically encode them into the pathways of its brain, or if we use an evolutionary algorithm to progress from simpler "brains" to more sophisticated ones they might emerge from the context of the "game world" by natural selection.

I'd say it follows then that it's impossible to create a "Nice God", game-set-match. We try really, really hard to create "nice human beings" through our bizarre, McDonaldized and failing child-rearing system called public education, smartphone babysitters, and often overworked and/or absent parents. Results are heavily, heavily mixed, as evidenced by the non-zero crime rate and the fact that the Greatest Minds of Our Generation have brought about near-total financial collapse in order to float their 50 yachts and Malibu timeshares. Oh, and there's the little problem of all those wars and poverty and the utter devastation of the planet and all that.

This is what I love about Chappie -- it takes into account the fact that it's possible we won't know where/when true AGI will manifest, and that when it does, if the "wet" human organism is the model -- essentially a Homo sapiens brain on NOS with an 800 IQ -- then the infant AI will respond to whoever and whatever teaches it. That could happen to be the supergenius computer nerd with his utopian ideals of a "poetry writing, art-loving, peace-promoting" super AI, as portrayed by Dev Patel. But maybe the machine Child-God is born accidentally into a South African favela, or an American ghetto, or some other wasteland at the exhaust-pipe of Capital-C Civilization. Maybe the future God Machine gets taken in by a couple of "Zef" gangbanger foster parents, gets raised to swag and wear gold chains and pop caps in asses, hustle meth on the streets, and do "the heists". Gangsta God. Maybe they will be torn, like all young adults eventually become, between the many future selves that they could be, hit those crossroads that all human 21st-century teenagers hit -- career criminal fighting for the Bloods / Crips? An upstanding future office worker? A world-changing Silicon Valley entrepreneur? A selfie-addicted aspiring movie star? An otaku solipsistic gamer who never leaves the basement? Who will the superintelligence grow up to be -- Elon Musk? Pol Pot? Mother Teresa? Charles Manson?

We don't know who our children will be when they grow up; though we hope for the best, we never really know. All of Google's Big Data, billions in psychology and sociology research, Oxford philosophers, Berkeley neuroscientists and Moore's Law in the world can't tell us that.

As Future of Humanity Institute founder Nick Bostrom put it, "Each time we invent a new technology, we pull a ball out of a bag. Sometimes the balls are black, some are white. Some will hurt or help us; we don't know till the technology arrives. Superintelligence will be perhaps the biggest ball, and it will likely determine the rest of our existence, or non-existence." I lean towards the opinion that we won't know who the Deus ex Machina will be until we "make our God with our own hands".

Perhaps we should put Artificial General Intelligence on the backburner for now until we debug the problems with Human General Intelligence, and the society that produces it.  (Hint: Probably won't happen!)

So I do hope not to derail, but to perhaps widen this topic of AI and superintelligence.  Indeed the question of who AGI becomes, and how they become, is a central premise of my AGS game Neofeud.

Mandle

SilverSpook:

I was vastly entertained by your posts in this thread. I was shocked at times by the way your theological and political points popped up suddenly but I was also relieved when you reined them back in to make cohesive points further along the line.

You seem to have a great talent of being able to write constructive and yet controversial articles that will get people thinking on the real heart of the matter.

You also seem to have the rare ability to take rather curt blows from critics, maintain your composure and reply in a sensible and (again) thought-provoking way.

Thanks for your own and everyone else's long-reads in this thread. I belong to a self-confessed "nerd-herd" of friends in real life and we often discuss exactly these kinds of topics, and you guys have given me some great material to bring up next time (credited of course).

I had my own interesting (to me at least) idea about an AI suddenly becoming self-aware in an autonomous robotic tank being tested by the military (Yeah...I'm aware of "Short Circuit". I know it's not the most original idea ever :P ). Of course I saw the story as having adventure game possibilities and so I think I might keep the full story to myself just for now... ;)

MiteWiseacreLives!

I don't see the point of giving a weapon, or even a computer running your business, the power of self-awareness and automated high-level decision making. What a stupid and risky thing, to trust a potentially unpredictable computer that may have the ability to form its own agenda. I think we want our computers to make decisions the way we set them to, not to evolve and assume control. I would only want a self-aware super-AI computer in charge of maybe my coffee maker or my toaster.

Mandle

Quote from: MiteWiseacreLives! on Wed 20/05/2015 08:12:23
I don't see the point of giving a weapon, or even a computer running your business, the power of self-awareness and automated high-level decision making. What a stupid and risky thing, to trust a potentially unpredictable computer that may have the ability to form its own agenda. I think we want our computers to make decisions the way we set them to, not to evolve and assume control. I would only want a self-aware super-AI computer in charge of maybe my coffee maker or my toaster.

Well, they are already giving high-powered weapons the ability to make split-second decisions on their own. I know that these systems are not self-aware. My idea is basically the old chestnut that the AI suddenly becomes self-aware on its own in a way completely unexpected by its creators (with a twist).

I know this is not real science as well and that this is very very unlikely to happen.

I was just really talking about a game idea I had in a thread that inspired the story seed in the first place...

Everyone carry on and ignore the raving lunatic in the corner (me)... (laugh)

SilverSpook

Quote from: Mandle on Wed 20/05/2015 07:47:35
SilverSpook:

I was vastly entertained by your posts in this thread. I was shocked at times by the way your theological and political points popped up suddenly but I was also relieved when you reined them back in to make cohesive points further along the line.

You seem to have a great talent of being able to write constructive and yet controversial articles that will get people thinking on the real heart of the matter.

You also seem to have the rare ability to take rather curt blows from critics, maintain your composure and reply in a sensible and (again) thought-provoking way.

Thanks for your own and everyone else's long-reads in this thread. I belong to a self-confessed "nerd-herd" of friends in real life and we often discuss exactly these kinds of topics, and you guys have given me some great material to bring up next time (credited of course).

I had my own interesting (to me at least) idea about an AI suddenly becoming self-aware in an autonomous robotic tank being tested by the military (Yeah...I'm aware of "Short Circuit". I know it's not the most original idea ever :P ). Of course I saw the story as having adventure game possibilities and so I think I might keep the full story to myself just for now... ;)

I'm flattered you found some value in my mind-excretions!  I do honestly put a lot of thought and passion into this topic(s).  I was considering becoming a cognitive-computing (that was the hot hype in my day) PhD at one point.  My father is also a hippy almost-Jesuit theologian (Yes, we're a family of quitters, but I intend to change that with Neofeud!).  I spend a lot of time talking to him, so that colors things :)

I definitely like your premise for your game, Mandle, and I think it could be original if wrapped up in a point-n-click adventure game and massaged a bit :)  A lot of times the uniqueness comes from just changing the "camera angle".  District 9 for example is just an Apartheid fable.  But it's the way everything plays out -- the Aliens trappings with the hyper-real, unmistakably Blomkampian shanty land, the choice of bureaucrats and destitute poor as main characters, these details and stylistic nuances that makes the movie a brilliantly original work.  (/OT)

MiteWiseacreLives!

Quote from: Mandle on Wed 20/05/2015 08:20:09
Quote from: MiteWiseacreLives! on Wed 20/05/2015 08:12:23
I don't see the point of giving a weapon, or even a computer running your business, the power of self-awareness and automated high-level decision making. What a stupid and risky thing, to trust a potentially unpredictable computer that may have the ability to form its own agenda. I think we want our computers to make decisions the way we set them to, not to evolve and assume control. I would only want a self-aware super-AI computer in charge of maybe my coffee maker or my toaster.

Well, they are already giving high-powered weapons the ability to make split-second decisions on their own. I know that these systems are not self-aware. My idea is basically the old chestnut that the AI suddenly becomes self-aware on its own in a way completely unexpected by its creators (with a twist).

I know this is not real science as well and that this is very very unlikely to happen.

I was just really talking about a game idea I had in a thread that inspired the story seed in the first place...

Everyone carry on and ignore the raving lunatic in the corner (me)... (laugh)

Wasn't a shot at your comments, Mandle. It's just that in real life I don't see, other than out of experimental curiosity, why we would ever pursue super AI and give it automated power (cold decision-making AI I get).
Don't be offended, I really like Short Circuit and Terminator  (laugh) (would love the game!)

Snarky

I thought this was an interesting look at some of the impressive things that can be done with neural networks, while also showing some of its limitations: http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html

(Keeping in mind, of course, that in each of these examples the algorithms and settings have been tweaked by humans to give the best results, and the pictures selected as the best illustrations.)
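For anyone wondering what's mechanically going on in those pictures: roughly, they run the optimization "backwards", nudging the input image so that some layer's activations get stronger. A very rough sketch of the idea (the model object and its gradient helper below are hypothetical stand-ins, not Google's actual code or any real library's API):

```python
# Sketch of the "inceptionism" idea: gradient ascent on the input image to
# amplify whatever features a chosen layer responds to. `model` and its
# `input_gradient_of_layer(...)` method are hypothetical placeholders.

def dream(image, model, layer, steps=20, step_size=0.01):
    for _ in range(steps):
        # Hypothetical helper: gradient of the layer's mean activation
        # with respect to the input pixels.
        grad = model.input_gradient_of_layer(image, layer)
        # Move the image in the direction that excites the layer more,
        # so whatever the layer "sees" gets exaggerated, dream-like.
        image = image + step_size * grad
    return image
```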

Mandle

Quote from: Snarky on Fri 19/06/2015 11:34:29
I thought this was an interesting look at some of the impressive things that can be done with neural networks, while also showing some of its limitations: http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html

(Keeping in mind, of course, that in each of these examples the algorithms and settings have been tweaked by humans to give the best results, and the pictures selected as the best illustrations.)

AMAZING article!!! Using the neural network in reverse to create a kind of simulated subconscious or dream image is just so inspired, and some of the images are pretty haunting...

Of course, it's early days still, and the process still needs a human hand at the controls to produce the images, but... give them a few more years and who knows, eh?

Google = Cyberdyne Systems??? (laugh)

InCreator

#18
Superintelligence?
I think it'll just be extremely, extremely good at optimization. So even if it's built by humans and as fallible as us, it'll fix its own problems soon. Therefore, at some point it won't be fallible, and it most definitely won't be driven by the same motivations that drive humans: survival instinct, emotion and so on. Perhaps its only pre-programmed, firmwared motivation would be to serve humans. So why should it have any random or unknown alignment towards the planet or humans? It'll just be a really cool computer that can do anything, invent stuff and answer questions. Also, a superior management tool, whether for industry, banking or the military.

It could devalue all art, because a billion variations of the Mona Lisa or whatever would be limited only by the time it takes paint to dry.

I've been thinking of programming a simple brute-force applet that fills a 16x16-pixel canvas with every possible combination of pixels from the 54-color NES palette. Although this is really easy to code even for a beginner in any programming language... mathematically I'm unable to calculate what insane time it would take to finish its run (with current technology, millennia? - the rough sketch below suggests far longer). In the end, you'd have every sprite within those limits (the Nintendo Entertainment System's) ever drawn, plus all possible variations of everything: Mario with a barrel for a head, Mario with Peach's feet, and so on and so on.
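A rough back-of-envelope, just for scale (the "images per second" figure is an arbitrary made-up assumption; no realistic hardware changes the conclusion):

```python
import math

# 16x16 canvas, 54 possible colours per pixel: 54**256 combinations.
pixels = 16 * 16
colours = 54
digits = pixels * math.log10(colours)          # ~443, i.e. about 10^443 images

images_per_second = 1e12                       # assumed: a trillion images/sec
seconds_per_year = 3600 * 24 * 365
years_exponent = digits - math.log10(images_per_second * seconds_per_year)
print(f"about 10^{digits:.0f} images, roughly 10^{years_exponent:.0f} years to enumerate")
```

For comparison, the age of the universe is only about 10^10 years.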

Imagine something like this run at a larger scale on a really powerful machine that is also superb at self-optimization, so you wouldn't get millions of brute-forced images that make no sense (every pixel black, then all black but one white, and so on), just all art ever created.

Music is also mostly maths and waves. Could we make an AI brute-force all music ever? All sound ever? At which point do machines start to make perfect things? Perfect as in perfectly optimized for human consumption: the most captivating music, the tastiest food (or the best average over what we call differences of taste across the human population), etc.?

Art is an ode to amazing human capabilities. So perhaps it'll demotivate instead of devalue, but still.

I think the only thing standing in our way is that learning machines use statistics. While it's good for a machine to recognize images that don't have the same checksum/color/angle/lighting/whatever as other images of the same thing, this also makes machines fallible, and humans, for now, seem much better at ignoring such little distinctions while still getting the correct answer.

ollj

Game theory applies to strong AI as well as to preschoolers.
