Wednesday, December 9, 2015

Are we “alone together”? (Book Synopsis, Part II)

(Image from Amazon)



To conclude a book synopsis of Alone Together: Why We Expect More from Technology and Less from Each Other (check out Part I of the synopsis if you missed it), let’s look at Sherry Turkle’s concerns about replacing human relationships with technology.  As we spend more time texting, Tweeting, and playing video games, embodied human connections give way to digital ones, and interfaces get more attention than faces.  Is that healthy from a psychological perspective?  Turkle raises several concerns in Part II of her book (Chapters 8-14, which I summarize below).

In Chapter 8, she discusses what most experienced teachers and managers know about their students and employees: in general, we’re terrible at multitasking with technology.

“When psychologists study multitasking, they do not find a story of new efficiencies.  Rather, multitaskers don’t perform as well on any of the tasks they are attempting.  But multitasking feels good because the body rewards it with neurochemicals that induce a multitasking ‘high.’  The high deceives multitaskers into thinking they are being especially productive.  In search of the high, they want even more” (p 163).

Chapter 9 shows how adolescent solitude and rites of passage suffer when teenagers are constantly connected through cell phones and wireless networks.

“When parents give children cell phones . . . the gift typically comes with a contract: children are expected to answer their parents’ calls.  This arrangement makes it possible for the child to engage in activities—see friends, attend movies, go shopping, spend time at the beach—that would not be permitted without the phone.  Yet the tethered child does not have the experience of being alone with only him- or herself to count on” (p 173).

Chapter 10 discusses how emotional nuances of face-to-face and verbal communication get lost as we replace talking with texting.

“One of the emotional affordances of digital communication is that one can always hide behind deliberated nonchalance” (p 198).

Video games and second lives form the topic of Chapter 11.  In some cases, virtual games and simulations can prepare individuals for real-life scenarios, giving people practice and confidence.

“Laboratory research suggests that how we look and act in the virtual affects our behavior in the real.  I found this to be the case in some of my clinical studies of role-playing games.  Experimenting with behavior in online worlds—for example, a shy man standing up for himself—can sometimes help people develop a wider repertoire of real-world possibilities” (p 223).

South Park episode "Make Love, Not Warcraft"

satirizing video game addiction

(Image from South Park Wiki)

However, the real world is not necessarily like a virtual world, and this discrepancy can breed disappointment with daily life, leading people to take addictive refuge in video games.  (Indeed, many psychiatrists are considering classifying video game addiction as a disorder—here, I'm reminded of an episode from South Park that portrayed this 'disorder' by mocking World of Warcraft.)

“The gambler and video game player share a life of contradiction: you are overwhelmed, and so you disappear into the game.  But then the game so occupies you that you don’t have room for anything else.

When online life becomes your game, there are new complications.  If lonely, you can find continual connection.  But this may leave you more isolated, without real people around you.  So you may return to the Internet for another hit of what feels like connection” (p 227).

The next two chapters give insights about online ‘flaming’ and ‘electronic shadows.’

On ‘flaming’ - in Chapter 12:

“by detaching words from the person uttering them, it can encourage a coarsening of response.  Ever since e-mail first became popular, people have complained about online ‘flaming.’  People say outrageous things, even when they are not anonymous” (p 235).

On ‘electronic shadows’ - in Chapter 13:

“living with an electronic shadow begins to feel so natural that the shadow seems to disappear—that is, until a moment of crisis: a lawsuit, a scandal, an investigation.  Then, we are caught short, turn around, and see that we have been the instrument of our own surveillance.  But most of the time, we behave as if the shadow were not there rather than simply invisible.   Indeed, most of the adolescents who worry with me about the persistence of online data try to put it out of their minds” (p 260-261).

In conclusion (and here's the bombshell in her book) Turkle suggests that AI and digital technologies don’t necessarily provide solutions to our desire to feel connected to other people.  In fact, they may be symptoms of our failure to connect to others:


“When technology is a symptom, it disconnects us from our real struggles” (p 283).

Agree or disagree with Turkle?  Her psychological observations resonate with what media gurus Marshall McLuhan and Neil Postman said long ago: new technologies create new possibilities yet destroy existing ones, so we need to look at what we lose in addition to what we gain; otherwise, we may lose an important part of our humanity.


PS: Are there other book synopses, summaries, or reviews you'd like to see on this blog?  Email me or comment below with your requests.


Sunday, November 22, 2015

Are we “Alone Together”? (Book Synopsis, Part I)

Being the bibliophile I am, I decided to add a 'Recommended Books' page to this blog, where I provide book synopses, summaries, or reviews for the intellectually curious.  What types of books?  Well, a major motif of our time is the relationship between humanity and technology, the mental and the mechanical.  When do mind and media work in harmony, and when don’t they?  When are we in control of our technology, and when aren't we? Naturally, I’ll recommend books that deal with such questions.

   

(Image from Amazon)

A recent book that impressed me was Sherry Turkle’s Alone Together: Why We Expect More from Technology and Less from Each Other.  Turkle, a psychologist at MIT, has spent decades studying Artificial Intelligence (AI) and human-computer interaction, fields that have been dominated by computer scientists and engineers (and men).  However, she focuses not on the technicalities of robots but on the mental health of people who interact with these machines.  In the Intro she states (p 11): “this is not a book about robots.  Rather, it is about how we are changed as technology offers us substitutes for connecting with each other face-to-face.”

Alone Together divides into two parts.  Part I (Chapters 1-7, which I'll summarize here) examines social robots (robots designed to play and communicate with people) and how they affect kids and adults.  For instance, Turkle found that social robots rouse people to become more expressive of their thoughts and feelings.  Kids who played with robotic pets like AIBO (Artificial Intelligence Robot—basically a robot dog) surprisingly let out all sorts of strong emotions: some kids were tender with AIBO, reacting to its ostensible cuteness; others turned very aggressive, even bullying what they perceived as a mechanical, “generic” creature.  What do these mixed responses to robots tell us about kids?  As a psychologist, Turkle concludes (in Chapter 3, p 62),

“The strong feelings that robots elicit may help children to a better understanding of what is on their minds, but a robot cannot help children find the meaning behind the anger it provokes.  In the best case, behavior with an AIBO could be discussed in a relationship with a therapist.”

(Image from Wikipedia)


She makes a similar observation for adults who played with robots such as PARO (a robotic pet designed for seniors) or My Real Baby (an animatronic infant for teens and adults).  Adults tend to be more skeptical than kids as to whether a robot is alive or conscious; but, like kids, they tend to become emotional and even talkative.  According to Turkle (Chapter 6, p 116),

“When talking to sociable robots, adults, like children, move beyond a psychology of projection to that of engagement … The robots’ special affordance is that they simulate listening, which meets a human vulnerability: people want to be heard.  From there it seems a small step to confide in them.”

PARO the robotic pet
(Image from PARO Photo Gallery)


Turkle analyzes how kids and adults reacted to other robots such as COG (a humanoid) and Kismet (a robot head) as well as to computer programs like ELIZA (a psychotherapeutic computer program that gives generic responses to questions from patients).  In so many cases, people find it easier to divulge personal information to machines rather than to other individuals.
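(For the technically curious: ELIZA's trick is surprisingly simple, just keyword matching plus pronoun "reflection."  Here's a toy sketch in Python of that general idea; it's my own illustration of the technique, not Weizenbaum's actual script, and the patterns and fallback phrases are made up for the example.)

```python
import random
import re

# A toy, ELIZA-style responder: match a keyword pattern, "reflect"
# first-person words into second person, and otherwise fall back to a
# generic prompt -- which is why its 'understanding' feels so generic.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return a canned 'therapist' reply to one patient statement."""
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)
```

A few such rules are enough to keep a conversation going, which is exactly Turkle's point: people confide in a program that merely performs listening.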

Now there’s nothing wrong with using technology to facilitate self-expression.  (As a writer, I’m all for that—writers, after all, use technologies like keyboards and screens, pens and papers, to help form and disclose thoughts.)  But there’s a problem when people use so-called social robots to substitute for human relationships.  Machines aren’t companions, and Turkle argues that the difference lies between outer performance and interpersonal authenticity:

“Computers ‘understand’ as little as ever about human experience—for example, what it means to envy a sibling or miss a deceased parent.  They do, however, perform understanding better than ever, and we are content to play our part.  After all, our online lives are all about performance.  We perform on social networks and direct the performances of our avatars in virtual worlds.” (Chapter 1, p 26.)

“My Real Baby was marketed as a robot that could teach your child ‘socialization.’  I am skeptical.  I believe that sociable technology will always disappoint because it promises what it cannot deliver.  It promises friendship but can only deliver performances.  Do we really want to be in the business of manufacturing friends that will never be friends?”  (Chapter 5, p 101.)

In sum, computers and robots can mimic emotions, but those emotions are ersatz, not real feelings.  Although people usually know better, they enjoy the simulated pleasure they get from machines.  So what’s the harm if folks feel satisfied enough talking to computers instead of people?

Turkle believes there’s psychological harm over time: intimate bonding gets replaced by instant gratification, and long-term human needs are replaced by short-term conveniences.  She discusses this danger in Part II of her book, which we’ll look at in the next post.


Tuesday, October 13, 2015

Technology in the Classroom: Why the Medium is the Message, and other ‘McLuhanisms’

For a while there has been ongoing debate in education about whether putting digital technology in classrooms helps students learn better or more quickly.  (This debate has been happening in my home city of St. Paul, MN over the last year since local schools placed iPads in the classroom.)  As schools experiment on kids with the latest digital devices, will teachers and parents be able to witness how new media affect student learning?

Yes, results are in.  Last month (September 2015), the OECD (Organization for Economic Co-operation and Development, an international organization of policy researchers) released the first international assessment of digital learning.  The bottom line, to the surprise of many, is that digital technologies have contributed “no noticeable improvement” to student learning (at least as measured by international literacy tests on math, science, and reading, which come from the Programme for International Student Assessment, or PISA).  According to the assessment,
  • Students who frequently use computers get worse results.
  • Students who moderately use computers have “somewhat better learning outcomes” than students who rarely use computers.
(The full report on this assessment can be viewed here.)

According to OECD education director Andreas Schleicher, these results look “disappointing” because they “show no appreciable improvements in student achievement in reading, mathematics or science in the countries that had invested heavily in information and communication technology (ICT) for education.”  (In fact, schools in countries with better results have lower levels of computer use.)

It appears that information-processing technologies haven’t helped students process information. Why not?

Tempting as that question is, it may be misleading, because just about any type of learning requires some kind of technology: books are technologies for reading; pencils and paper are technologies for writing; numerals and calculators are technologies for mathematics.  The real question is what kinds of technologies work best for which types of learning.

We’re in interesting times, because learning experiences today offer wide varieties of technologies, old and new.  Should we use books or iPads, paper or laptops?  Really, the answer depends on what we’re trying to learn.

 

Marshall McLuhan
(Image from The Official Site for the Estate of Marshall McLuhan)

Different technologies are better (or worse) for different types of learning.
Media scholar Marshall McLuhan pointed this out in his book Understanding Media, where he wrote a famous aphorism:

“the medium is the message.”

What McLuhan meant was that technologies and media do more than communicate information: technologies and media shape how our minds process information.

(Personally, I prefer Neil Postman’s reworking of McLuhan’s saying:

“the medium is the metaphor.”

Technologies and media are metaphors for thought itself, because different media support different types of thinking.  See the previous post for an intro to Postman’s ideas, which make up a field called Media Ecology.)

For example, in his book The Gutenberg Galaxy, McLuhan wrote about a striking difference between old media like books and newspapers vs. new media like computers.  A crucial difference between old print media vs. new screen media is what he called “light-on” vs. “light-through”: is the light reflected off a page, or is the light coming directly at you through a screen?

(Image from Amazon)

Why would reflected light versus direct light make any difference?  (McLuhan's explanation in The Gutenberg Galaxy is a bit convoluted, though correct, so I'll give a more straightforward answer here.)  When print media like books reflect light, they’re easier to focus on for a long period of time—the light doesn’t tire your eyes.  Screen media like laptops emit direct light, so staring at them can lead to digital eye strain.

This difference—reflected light vs. direct light—is a reason why books are better for teaching deep reading and concentration (e.g., reading comprehension, reflective thought or contemplation, committing ideas to long-term memory).  Screen media, in contrast, tend to exercise scanning and multitasking, which make them better for visual and spatial learning (e.g., studying maps and patterns, simulating driving/navigation skills, illustrating scientific ideas).

In short, different information technologies extend different parts of your mind: print media edify skills like basic literacy, while screen media train skills like pattern visualization.  (For other differences, see Screen vs. Print Part I, II, and III.)  

So perhaps digital technology has contributed “no noticeable improvement” to education because students aren't learning the right lessons with the right technologies.  Schools that want to teach literacy with iPads probably won't succeed (excepting cases like digital literacy).  But iPads certainly can help kids learn skills associated with spatial orientation, navigation, and visualization (e.g., reading maps, understanding geography, illustrating concepts in physics).  These are the nuances that policy makers in education need to consider.

Sunday, September 13, 2015

Are we amusing ourselves to death? (Neil Postman & Media Ecology)

I’ve a theory as to why Jon Stewart and Stephen Colbert ‘retired’ from their jobs as news satirists: they don’t need to satirize the news anymore because it now seems to satirize itself.

Take the 2016 presidential race, which already resembles an amusing Reality TV show with Donald Trump as he dominates polls and debates.  Political commentators may try to explain this quirky situation by saying something like this: voters crave authenticity so desperately that they don’t mind Trump’s political incorrectness.  If Trump’s political ambitions are a “circus act,” as CNN reported in this video before he began his campaign, then at least it’s an authentic one (so the reasoning goes).


Authenticity may explain Trump’s success, but something else is also happening.  Trump has celebrity status—he’s a real-estate tycoon turned Reality TV star.  Before him, many presidential and gubernatorial elections included celebrities (President Reagan in 1980, Minnesotan Governor Jesse Ventura in 1999, and Californian Governor Arnold Schwarzenegger in 2003).

                    

Reagan the actor
(Image from Wikipedia)

Jesse "The Body"
(Image from Wikipedia)


Arnold "The Governator"

(Image from IMDb)

With such precedents, it’s little surprise Kanye West declared himself a presidential candidate for 2020 (and Kim Kardashian a potential First Lady).

Kanye West rapping his way to the presidency?
(Image from The Washington Post)

Reality TV stars, celebrities, pop singers…as president?  What’s happening here?

A plausible answer was provided by Neil Postman, an educator who founded a discipline called “Media Ecology,” the study of how media and technologies change our world and our understanding of it.  Here’s the idea: new media and technologies don’t just add tools to our society—they dramatically change our way of life, especially the way we think.

       

(Image from Amazon)

In his book Technopoly, Postman explains why he uses the word “ecology” to describe media:

“I mean ‘ecological’ in the same sense as the word is used by environmental scientists.  One significant change generates total change.  If you remove the caterpillars from a given habitat, you are not left with the same environment minus caterpillars: you have a new environment...  This is how ecology of media works as well.  A new technology does not add or subtract something.  It changes everything.”

(For folks familiar with chaos theory, Postman uses 'ecology' in the sense of a dynamical system, where one change can create butterfly effects with large-scale, unpredictable consequences.)

What does all this have to do with politics?  The technological shift from newspapers to screens, from lengthy dialogues (e.g., the Lincoln-Douglas debates) to digital sound bites (e.g., 'Tweeting’ political positions), changed the game of politics.  Media like TV and Twitter have made us think about politics less in terms of news and more in terms of entertainment.  TV and Twitter encourage flashy images and cursory comments, even online flaming (occasionally papers and books do too, but much less so).  As a result, politics has become less informative and more entertaining.  If it isn’t great showbiz, then it loses TV ratings and Internet traffic.

(Image from Amazon)




It's a problem Postman discusses in his most famous book Amusing Ourselves to Death: Public Discourse in the Age of Show Business.  In it, he draws on an aphorism by media scholar Marshall McLuhan:

“the medium is the message.”

McLuhan meant that media do more than communicate information: media shape how people perceive information.

(A famous example is the Kennedy-Nixon debate—the first presidential debate to occur on live TV: people who watched the debate on TV felt Kennedy won, while people who listened to the debate on radio thought Nixon won—overall, Kennedy ‘looked’ better, which TV imagery magnified, but Nixon ‘sounded’ better, which radio amplified.)

Postman reworks (and in my opinion improves) McLuhan’s aphorism:

“the medium is the metaphor.”

What is the medium a metaphor for?  For thought itself.  In other words, different media enhance different kinds of thinking.  TV and Twitter prompt quick thoughts good for brief reminders or short-term memory; newspapers and books suit deeper focus and intellectual analysis.  In fact, this difference is a special case of the differences between screen vs print media: screens work better for multitasking, print for reflection.

What does this mean for the future of politics?  In my opinion, Postman’s analysis is partially right but incomplete.  If TV and digital screens continue to dominate civic discourse, then expect plenty of entertainment but little substance.  On the bright side, however, new media have given new life to organizing.  Civil protests and meetups are often organized online now, which is much easier than door-to-door canvassing.  Clay Shirky makes this case in his book Here Comes Everybody, which argues that digital technology has lowered the transaction costs for organizing many people.

Perhaps in the end we get a paradox: new media positively affected civil organizing but negatively affected civic discourse.  What sort of future leaders that leaves us with remains to be seen.

Sunday, August 16, 2015

From Digital Distraction to Stoic Serenity: What can Seneca teach us?

Since my teenage years, when I eased my adolescent angst by reading Friedrich Nietzsche, I’ve liked philosophy.  But philosophy has come into bad repute because of its jargon.  Some philosophical writing got SO BAD that a “Bad Writing Competition” awarded ‘prizes’ to academic malarkey.  A famous/infamous ‘winner’ was this bizarre sentence from academic philosopher Judith Butler:

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

Don’t even try to understand it.  When writing gets this ridiculous, there’s a fine line between philosophy and hot air.

It wasn’t always this way.  In ancient times, philosophy was well written and practical, laying out guidelines for living a good, happy life.  So do age-old philosophers have anything to teach us in the Digital Age?  You bet they do.

   

Bust of Seneca, Stoic Philosopher  
Part of The Double Herm of Socrates and Seneca  
(Image from Wikipedia)  

Here’s an example of ancient wisdom that resonates with me: Roman Stoicism.  Although the word ‘stoic’ nowadays means emotionless or indifferent, it had a different meaning in Classical times.  To be Stoic meant to be virtuous and prudent—you didn’t lack emotions, but you calmly controlled destructive ones by using wise judgment or “Reason.”  (Stoic virtue is similar to what Buddhists call mindfulness.)

Stoic philosophers, then, taught peace of mind by controlling negative emotions such as anger or anxiety.  Consider what we, in the age of social media, may learn from Seneca, an ancient Stoic philosopher.

Anyone on Twitter or Facebook knows that anger and anxiety can easily explode over social media (usually referred to as ‘flaming’).  Some folks (‘flamers’) seem to be searching for something to be offended about so they can write online vitriol, which often screams of trivialities—like the latest insane thing Ted Nugent blurted.  Seneca had many recommendations for controlling anger and anxiety, but a common theme that runs through his letters (to his friend Lucilius) and essays (e.g., “On Anger”) can be summed up thus:

  

(Image from Amazon)

don’t get distracted by petty things.

Long before the Internet, Seneca felt that people already suffered from information overload, which distracted them from important matters (imagine how he would feel now).  We must filter out irrelevant stuff.  To quote the man himself (from Letter 88, Elaine Fantham’s translation):

Whatever part of human and divine matters you grasp, you will be wearied by a vast abundance of things to be investigated and learned.  In order to give free hospitality to these many and great themes, you must remove superfluous thoughts from your mind.  Therefore, “Measure your life; it cannot contain so many distractions.”

Now in the age of abundant, instantaneous information, removing minutia isn’t easy, especially when so much salacious chatter pollutes the web.  Some people love to offend and get offended by viral videos, celebrity gossip, and memes, which just inflate unhealthy egoism.  As Seneca points out (in Letter 48):

We are not harmed by anything that offends us, but self-indulgence drives people to a frenzy, so that anything which does not answer their whim calls forth their rage.

So, Seneca advises, ignore the superfluous.  Instead, concentrate on matters that involve moral self-improvement and perfecting the mind, including virtue and civic knowledge.

  
(Image from Amazon)
  

(Image from Amazon)

Two millennia later, Seneca’s message has made a comeback.  The writer William Powers devotes a chapter to Seneca in Hamlet’s Blackberry, which modernizes Seneca’s distraction-avoiding philosophy for the Digital Age.

By eliminating digital distractions and concentrating on virtue, we experience serenity, or what the author William Irvine calls “Stoic Joy” in his accessible book A Guide to the Good Life.  This is how ancient wisdom becomes modern philosophy of technology.

Basically, the message is to get our priorities straight.

Unlike a lot of academic philosophy, Seneca’s wisdom was practical and came from his own experience.  He was not just a teacher, though he tutored Emperor Nero (who eventually, and ironically, had him killed).  Seneca was also a statesman who effectively administered the Roman Empire while Nero was distracted by the seductions of excessive wealth.


(For anyone interested in Seneca’s life, I highly recommend Dying Every Day: Seneca at the Court of Nero, by the historian James Romm—as the title suggests, the details of Seneca’s life are gripping and sometimes gruesome, as life and death in ancient Rome often were.)

(Image from Amazon)



Friday, July 24, 2015

What does it mean to have a body: AI with minds and bodies

2015 seems to be the year of discussing Artificial Intelligence (AI).  From philosophers like Nick Bostrom to scientists like Stephen Hawking, many high-profile people have joined the conversation about the possibilities of AI research.  Lately, it was Facebook’s CEO Mark Zuckerberg who made headlines when he hosted an online Q&A open to the public on his Facebook page, where he talked, among other things, about exploring AI.

Of course, the perils of AI always arise in these conversations, as sci-fi flicks like Terminator illustrate through a terrifying scenario: instead of serving us, intelligent machines enslave humanity.  Interestingly, Arnold Schwarzenegger joined the online Q&A to ask Zuckerberg about exercise, but he also asked if machines would eventually "win."  Zuckerberg lightheartedly replied no.

(Image from Mark Zuckerberg's Facebook page)

Putting aside quibbles about the rise of the machines, a fascinating possibility of AI was raised in Zuckerberg’s Q&A: computers with vision, which could see and describe visual images such as pictures and videos.  That possibility would not only be AI that ‘thinks’ intelligently—by processing information and presenting solutions to problems.  It would also be AI that ‘perceives’ the world—through visual sense.  In other words, this AI would have mental abilities and bodily senses.

(Image from Mark Zuckerberg's Facebook page)

Of course, as philosophers like William James and neuroscientists like Antonio Damasio have argued, mind and body are inseparable.  So mind-body continuity should apply to AI too.  If we want AI with a fully conscious, intelligent mind (what's called Strong AI), then it needs a living body.  Cognitive scientists refer to this idea as embodied mind theory.

The Kismet robot
(Image from Kismet's Official Website)

The Cog humanoid
(Image from Cog's Official Website)

In fact, AI research has already taken insights from embodied mind theory to design robots.  You may recall Cog, a humanoid that could interact with people and learn from environmental stimuli.  Perhaps you saw Kismet, a robotic head with visual and audio systems that could recognize and express emotions (using an approach called affective computing).

Now here’s an interesting question: What makes these AI bodies different from the ‘bodies’ of other technologies?  For example, doesn’t a smart phone have a body around its memory card?  Doesn’t a computer have a body around its silicon chips?  Doesn’t most AI already have a body anyway?

Here’s an answer: There’s a difference between a living body and an outer covering.  Just because smartphones or computers possess outer coverings doesn’t mean they have living bodies.

Consider what living bodies have that outer coverings don’t:
  • Emotion: Bodies generate emotions felt by the brain, and emotions provide feedback to evaluate thoughts.  I read the news, a thought occurs, my body produces an emotion, and my brain feels it and realizes how the news makes me feel—e.g., happy, sad, angry.  (Neuroscientists call emotions somatic markers—from the Greek word soma meaning ‘body’—because they’re how your body ‘marks’ your thoughts with positive or negative valence.)
  • Self-Organization: Unlike phones, computers, or manufactured products, your body self-organizes itself—using genetic material and environmental resources.  (Systems theorists refer to self-organizing bodies as autopoietic—meaning ‘self-creating’—because they organize themselves; phones and computers are heteropoietic—‘other created’—because they're organized by outside intelligence.)
  • Meaning Making: Phones and computers run algorithms without understanding the meaning of data.  Bodies, with their emotional value and self-organizing capacities, create meaning—a point obvious to artists such as James Joyce or D. H. Lawrence, to name two of my favorites.  (Cognitive scientists refer to this meaning-making process as sense-making.)
(Image from Amazon)

  • Multidimensionality: A final point, which philosopher Mark Johnson makes in The Meaning of the Body.  The body isn't merely physical.  It’s "multidimensional," with a biological dimension (body schema), an ecological dimension (environmental affordances of the organism), a phenomenological/subjective dimension (body image), a social dimension (body politic/corporate entity), and a cultural dimension (gender, ethnicity, etc.).

Cog and Kismet had some characteristics of living bodies, such as rudimentary (or maybe just quasi-) emotion and meaning-making.  I wonder how much longer until AI incorporates the other characteristics that make up living bodies with conscious minds.  If recent science fiction in cinema gives us any clue, future AI may create not sentient computers like HAL but cyborgs like Terminator, Ava, or Vision.


Terminator in Terminator Genisys
(Image from IMDb)

Ava from Ex Machina
(Image from IMDb)

Vision from Avengers: Age of Ultron
(Image from Wikipedia)