Thursday, May 28, 2015

From Smartphones to Cyborgs: Is Artificial Intelligence really ‘intelligent’?

Zombie apocalypse from The Walking Dead

(Image from IMDb)

In spite of movies like World War Z and TV series like The Walking Dead, there aren’t any organizations preparing for the zombie apocalypse.  

However, there are institutions preparing for the rise of the machines—that is, a time in the future when Artificial Intelligence (AI) becomes so advanced that it might develop a conscious “superintelligence” threatening the very existence of humanity.

Rise of the machines from Terminator 3

(Image from IMDb)

For example, The Future of Life Institute is a research organization preparing for the rise of AI and the existential risks it may entail.  In fact, this very year, 2015, the institute published an open letter warning about "potential pitfalls" that AI may hold for human survival.  

That letter has been signed by prestigious intellectuals, including physicist Stephen Hawking and philosopher Nick Bostrom.  Lately, similar concerns have been expressed by business technology leaders such as Bill Gates.


The possible hazards of AI may no longer be mere science fiction, as the cinematic zeitgeist of our time seems to be telling us.  Plenty of upcoming movies such as Terminator Genisys and Ex Machina are warning about the perils of AI, which may, as The Economist has recently reported, move from science fiction to scientific fact sooner than we think.  Should we really be so worried?

Part of what makes this question difficult is that it’s tricky to define what we mean by 'Artificial Intelligence.'  Heck, it’s challenging enough to define 'Intelligence.'  For example, there’s traditional IQ (Intelligence Quotient), but the power of the human mind also involves Emotional Intelligence, Social Intelligence, and nowadays even Business Intelligence.

Perhaps it’s better, then, to speak of 'intelligences' in the plural, or varieties of intelligence.  Albert Einstein was certainly one kind of genius, embodying a scientific intelligence.  James Brown was a musical genius of artistic intelligence, and George Carlin a comic genius of incredibly witty intelligence.

So what kind of ‘intelligence’ do we mean by the ‘artificial’ kind?  Perhaps we can clarify by differentiating human brains from computer hardware.  There are similarities, for sure, but there are also major differences.  On one hand, both our brains and our computers share a common undertaking: processing information.  Our brains perform information-processing tasks; our computers work as information-processing technologies.  On the other hand, consider the differences:
  • Unlike computer hardware, brains are embodied: We humans come from the animal kingdom, and one major trait that separates us from plants and fungi (as any biologist will tell you) is that our evolutionary survival depends on bodily action (or sensorimotor activity).  Unlike trees and mushrooms, we grow muscles to move around and explore the world, and the brain is an organ that the animal body evolved over time to coordinate sense perception and muscular movement.  Many neuroscientists will point out that perception and movement are so closely linked in the brain that they are really one sensorimotor process.
  • Thus, mind is embodied: Since our brains evolved in the context of bodily action, our ‘higher’ mental functions (e.g., math and logic) naturally evolved out of our ‘lower’ sensorimotor activities.  For instance, our bodily movement naturally gives rise to what cognitive scientists call “image schemas,” or kinesthetic and visual patterns, such as up-down, inner-outer, part-whole, etc.  We recruit these image schemas to create complex logic.  For example, in economics, we use the ‘up-down’ schema to construct statistical logic—think of financial charts or graphs, where ‘up’ means ‘more value’ and ‘down’ means ‘less value.’

(There are other ways brains differ from computers—for instance, decision making in the brain depends on the body's emotional signals, what neuroscientists call somatic markers, which allow humans to judge and evaluate facts.  I only highlighted a few differences here, but see Hubert Dreyfus’ classic book What Computers Still Can’t Do for more, or see my post on Embodied Cognition.)

Given these differences, is it right to call computers ‘intelligent’ machines?

At first, we might say no.  Intelligence, strictly speaking, is a quality of a conscious mind, and machines don’t have conscious minds per se.  But then again, that reaction is too narrow.  For instance, strength is a quality of muscles; buildings don’t have muscles, and yet we speak of 'strong' buildings.  The same goes for intelligence.  All computers are 'intelligent' in some sense.  That’s why it’s not a stretch to call devices like smartphones 'smart.'

So yes, we can say that AI is intelligent, at least in a limited sense of being able to execute certain tasks, even if it doesn’t have a conscious mind as we do.  (Computer scientists refer to this limited machine intelligence as “Weak AI,” which has no self-awareness.)  But will we ever have machines with conscious minds that can supersede our own?  (That is something computer scientists would call “Strong AI,” which just might have full self-awareness like human consciousness.)  Well, you'll need to decide for yourself, but I’ll briefly share my opinion.

Given what we know about embodied cognition, I’d say you probably have little to fear from your computer or smartphone (that is, Weak AI).  We know that a fully conscious mind depends on some form of embodiment—an autonomous, living body that's coupled to an environment (or what systems theorists call an autopoietic network).  But if that’s true, then when scientists combine Artificial Intelligence with Artificial Life to create something synthetic, such as a living cyborg or android, we may indeed need to prepare for the rise of conscious machines (or Strong AI).

In other words, a HAL Computer probably couldn't happen, but a Terminator Machine just might.

            

Don't worry about a HAL Computer 

from 2001: A Space Odyssey

(Image from IMDb)

But beware of the Terminator Machine 

from The Terminator

(Image from IMDb)


Or, for those who saw Avengers: Age of Ultron, a robot like Ultron is unlikely, but an android like Vision may be a future possibility.

            

The robot Ultron

in Avengers: Age of Ultron

(Image from IMDb)

The android Vision

in Avengers: Age of Ultron

(Image from Wikipedia)



Wednesday, May 6, 2015

Building the Mind, Scaffolding the Brain—sometimes in the name of love!

In the realm of movies, I don't actively seek out chick flicks or romance films.  But since my wife will kick back to some muscle-flexing action flicks with me, I'll watch the occasional sweet romantic comedy with her.  Consequently, we tend to alternate genres during our stay-at-home movie nights: she sits through Terminator with me, and I watch Pretty Woman with her.

(Image from IMDb)


Recently, we watched a romantic comedy neither of us had seen before: 50 First Dates, with Adam Sandler and Drew Barrymore.  I usually find Adam Sandler movies hilarious, and this one was no exception.  Sandler plays Henry, a veterinarian who tries to win the heart of Drew Barrymore’s character, Lucy, a woman with amnestic disorder (a memory problem caused by brain damage).  Although Henry and Lucy meet and fall for each other at a diner, Lucy wakes up the next day unable to remember Henry or their encounter.

Henry soon learns that Lucy wakes up each day thinking it’s October 13th, the date she suffered head injuries from a car accident.  She cannot form any new memories of meeting the guy who loves her.  Henry therefore has to court her again and again … throughout the course of 50 ‘first dates’!

It’s a charming film.  Without giving away the whole plot, I’ll just say that Henry and Lucy finally end up together by figuring out a clever trick.  Each day, she wakes up to a videotape labeled “Good Morning Lucy,” a home-made recording that recaps her accident, her 50+ dates with Henry, and their eventual wedding.  She also learns to keep a diary and express her new experiences in painting.  In short, she uses cues from video, audio, and print media in place of her damaged brain to remember her life and the man she loves.

There’s a revelation here about relationships, and I don't just mean boyfriend-girlfriend relationships.  I also mean the relationship between our mind and our technology.  When you think about it, we all do what Lucy does to some extent.  I keep a notebook and a planner handy to track my weekly tasks.  My wife and I use Google Calendar to remind us about our upcoming appointments.  IBM Notes—including its contact lists, email, calendar, and to-do list—is one of my best friends at work.  In today’s Information Age, we all use technologies to support our memory in one way or another, especially as data increase exponentially, exceeding our own brain storage capacity and creating information overload.

Technology not only supports our memory; it also does some of our thinking for us.  We use calculators to work out arithmetic instead of doing the math in our heads.  We use Google Maps to plot directions instead of mapping them ourselves.  Much of your mind, from remembering to reasoning, isn’t just in your brain.  Memory and thought are also in your technologies.

We might say that parts of our minds are ‘extended’ via technology.  In fact, many cognitive scientists talk about our “extended mind” in this way.  Andy Clark, a respected researcher in cognitive science, likes to say that technology is “scaffolding” around the mind.  Just as a work crew uses scaffolding to support building construction, we use technology and media to support and extend our mental functions (like memory and thought).  We can call the result our “extended mind,” where “mind” includes both brain (a foundation of mind) and technology (the scaffolding of mind).

    

(Image from Amazon.com)

(Image from Amazon.com)

As a quick aside, I should mention how the term “scaffolding” was inspired by psychologist Lev Vygotsky—see his book Thought and Language—who coined the phrase “Zone of Proximal Development” (ZPD).  ZPD refers to what students cannot yet do on their own but can accomplish with resources that support learning and memory—e.g., books support language learning; multiplication tables support algebraic memory.

Educational psychologists today call these resources “instructional scaffolding,” which may be removed after students commit lessons to memory (just as a building’s scaffolding may be removed when construction is completed).

Clark uses the term a bit differently.  For Clark—see his book Being There—“scaffolding” is a perpetual extension of our brain, which “offloads” information onto technology.

So perhaps, like Lucy, our minds aren’t just in our heads.  Try to remember how you remember, or try to think about how you think, with and without the support of technology.  Just as many kinds of thinking (e.g., artistic and scientific) may not be possible without language or tools, some types of thought and memory may not be possible without technology and media.

And yet our mental dependence on technologies and media may not be a bad thing … especially when we use them, as did Lucy, in the name of love.

Lucy and Henry using painting and print media to remember their love.
(Image from IMDb)