Sunday, December 18, 2016

Our technological paradox: digital tribalism in the global village

 

Nicholas Carr
(Image from Wikimedia Commons)

The author Nicholas Carr wrote an article some time ago asking, "Is Google Making Us Stupid?"

He followed it up with his book The Shallows, which asks whether the Internet is turning us into shallow thinkers.

(Image from Amazon)

While many of us would say no, I understand what he was getting at when it comes to how people use the Internet.  The best thing about the Internet is that you can find out anything about anything—that's also the worst thing about the Internet.  For instance, when I want to know what the Fed does to affect interest rates, I can go to its website and look up the policy.  But if I want to believe that President Obama is a space alien, I can find a website out there that confirms this conspiracy theory.  (Go ahead, try it!)

On one hand, the Internet is probably the greatest invention since the printing press, at least in terms of cultural impact.  On the other hand, the Internet, and social media in particular, can be a trap for what psychologists call confirmation bias: people tend to seek information that confirms what they want to hear, not what they need to know.  Now we're living in a paradox: we're much more interconnected through digital technology and social media, yet the information we usually find is customized, limited, or insular to our social networks.

Take a recent example: an MIT study that looked at how Twitter users followed political tweets during the 2016 election.  In the graph below, the dots represent Twitter users, and their color represents which political candidate they followed.  Clinton's followers appear on the left of the graph, and Trump's followers appear on the right.  On the Clinton side, a few users followed only Trump (the red dots), some followed both Clinton and Trump (the purple dots), and many followed only Clinton (the blue dots).  On the Trump side, most users followed only Trump, creating their own information bubble (the big cluster of red dots).

(Image from The Electome project at MIT and reported in Vice News)
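As a rough sketch of what "following only one candidate" means in data terms, here's how one might sort users into those red, blue, and purple groups.  The code below is purely illustrative (the follow lists are invented); the Electome project itself worked with real Twitter network data.

    # A toy sketch of sorting Twitter users into "information bubbles" by which
    # candidate accounts they follow. The follow data is made up for illustration.
    from collections import Counter

    follows = {
        "user_a": {"HillaryClinton"},
        "user_b": {"HillaryClinton", "realDonaldTrump"},
        "user_c": {"realDonaldTrump"},
        "user_d": {"realDonaldTrump"},
    }

    def classify(followed):
        """Label a user by the candidate account(s) they follow."""
        clinton = "HillaryClinton" in followed
        trump = "realDonaldTrump" in followed
        if clinton and trump:
            return "both (purple)"
        if clinton:
            return "Clinton only (blue)"
        if trump:
            return "Trump only (red)"
        return "neither"

    print(Counter(classify(f) for f in follows.values()))
    # e.g., Counter({'Trump only (red)': 2, 'Clinton only (blue)': 1, 'both (purple)': 1})

Scaled up to millions of accounts, clusters like the "Trump only" group are what show up as the big red blob in the graph above.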

In terms of media ecology (the study of how media affect culture and behavior), the Internet may or may not be making us stupid or shallow.  However, it may be making us more tribal.  Years ago, Marshall McLuhan warned us about this tribalism in The Gutenberg Galaxy:

"The new electronic interdependence recreates the world in the image of a global village."
"We live in a single constricted space resonant with tribal drums."[1]

That's our technological paradox, which McLuhan foresaw before the Internet's invention.  Technological interconnectivity—via the Internet and social media—has connected us in incredible ways, and now we're all sharing a single space (a global village).

And yet the result has been anything but a shared, cosmopolitan perspective (rather, it's what we'd call digital tribalism).

Is there a way out of this paradox?  Here are some suggestions...

  • Never expect meaningful information from Twitter.  Twitter is primarily entertainment, not news.  (On occasion, meaningful info may come from Twitter, but it's more likely to come from, say, magazines such as Scientific American.)
  • Don't post political memes on Facebook.  Seriously, don't do it—it only inflames tribalism, and memes won't change anyone's mind.  (Changing minds requires discussion and dialogue).
  • Take breaks from social media to read news from nonpartisan sources.  Of course, there's no such thing as perfectly unbiased, purely objective news, because there's no such thing as perfectly unbiased, purely objective people (we are human beings, not gods).  But news can be covered in a way that respects facts and doesn't have a partisan axe to grind.  Also, if you can, try to read news in print, because print media often work better than screens for reflective, long-term memory, as research in cognitive science demonstrates.  (Personally, I like to read The Economist.) 

[1] Marshall McLuhan, The Gutenberg Galaxy: The Making of Typographic Man (Toronto: University of Toronto Press, 1962), 36.

Monday, November 21, 2016

Don’t wait to be inspired: the art of civics

For years to come, the U.S. will reflect upon the fact that a reality TV star became the commander in chief of the most powerful military in the world.  Last month I made a couple observations about what the 2016 presidential election said about our media—namely, it broke apart the wall between entertainment and news, giving us unrestrained infotainment and an unappetizing dose of sexism.

What’s just as striking is that almost half of the country didn’t vote.  NPR recently broke down the exit poll data (the only accurate polls this year), and one takeaway is this: low voter turnout, particularly among younger voters, had an enormous effect on the election results, which is why Trump’s victory surprised so many.  To put it simply, Obama won because of the Millennial vote; Clinton lost because of a lack thereof—the people that pollsters expected to vote just didn’t show up on Election Day.


There are various reasons for this political indifference.  The election was unbearably negative, turning off potential voters.  People are sick of the two-party system and establishment politics.  Many think that too much money influences elections.  There are problems with the Electoral College.

I don’t disagree with any of these reasons.  However, I do want to highlight another reason that’s often ignored in public discourse: civic apathy.
Basically, it sounds like this:

I don't need to vote.  Politics doesn't affect me anyway.
I don't want to vote unless I'm inspired.

In such statements, there’s no consideration of, or care about, how politics affects our lives—or the lives of others.  For instance, the outcome of this election will undoubtedly affect the climate crisis, healthcare, and how we include (or fail to include) immigrants and minority groups in our society.

To help lead us out of this apathy, I’d like to pose a personal challenge: inspire, and don’t wait to be inspired.

Some citizens don't vote because they're not ‘inspired’ to make an effort.  I don’t believe that’s a healthy way to see civics.  It’s not even a healthy way to see life.  If I sit around waiting for the rest of the world to inspire me, then nothing will become of my life.

So let’s put it this way ...

It’s not always the responsibility of politicians to inspire voters.  Quite the reverse: it’s the responsibility of voters to inspire politicians.
(Image from Amazon)

Perhaps we should think of civics as an art: the art of inspiring and caring, as opposed to not giving a hoot.
I find this attitude much healthier than just waiting for the world to inspire us or to go perfectly our way.  If we don’t inspire or care, shady interests will fill the void.  As John Dewey wrote in The Public and its Problems,

“Nature abhors a vacuum.”  When the public is as uncertain and obscure as it is to-day, and hence as remote from government, bosses with their political machines fill the void between government and the public.[1]

On his final trip abroad as president, Mr. Obama recently said something similar.  Check it out in the video below.


[1] John Dewey, The Public and its Problems (Athens: Ohio University Press, 1954 [1927]), 120.


Thursday, October 20, 2016

Political Media and the Art of Manliness

Weeks away from the 2016 election, I thought I’d make a couple observations—not on the presidential campaigns but on what they say about our media and culture.

First, don’t expect real news from TV or Twitter.

There’s no question about it: television and social media are technologies that dominate how most people get their information about politics.
  • The good news: more people have access to political information.
  • The bad news: much of the information from television and social media is, to say the least, very shallow.
Shallowness shouldn't surprise us when it comes to media like TV and Twitter.  If you can’t express ideas in sound bites or 140 characters, then you don’t get attention.  But complex issues—the climate crisis, bank regulation, fiscal and monetary policy, etc.—simply can’t be understood through talking points or tweets.

Talking points and tweets make great entertainment, but they often lack substance.  And when we sacrifice substance for entertainment, we’re in danger of “Amusing Ourselves to Death,” to quote author Neil Postman.  The Trump campaign in particular may signify the breaking point where presidential races became virtually indistinguishable from reality TV.  I’ve written about this problem before, and I see no easy or clear solution.  My suggestion: replace TV and tweets with book clubs and Socratic-style discussions, at least for political information.

(Image from Amazon)

Second, the art of manliness needs a reboot.

If we believe the statisticians at FiveThirtyEight (a website that specializes in poll analysis), Trump will probably lose.  His ostentatious behavior paid off early in the presidential race, but along the way he insulted too many groups, including women.  His sexist comments, combined with groping accusations by numerous women, will likely cost him the presidency (as it should).

Unfortunately, more women than men see a problem with Trumpism, which is clearly a form of machismo (or chauvinism).  However, I see it as a symptom, not a cause.  What’s the cause?  There’s no single answer, but perhaps a lack of male role models is part of the explanation.

Scholars and poets like Joseph Campbell and Robert Bly have written much about this pedagogical problem.  For instance, medieval societies had myths for boys, who were to model themselves upon heroes such as the Knights of the Round Table.


Knights of the Round Table, by William Dyce
(Image from Wikimedia Commons)

Now obviously medieval societies were far from equal, but here's the larger point: like gallant knights, boys were responsible for recognizing their own hero’s journey, their personal transformation from dependent children to dependable adults.

We need to modernize this hero's journey, because when we lack such a myth, boys may get their role models from shows like The Simpsons and Family Guy.  They may be funny, but Homer and Peter are anything but healthy expressions of masculinity.

                    

(Images from Wikipedia)


There is, however, a glimmer of light.  The revival of superheroes in movies such as Captain America and in TV series like The Flash is providing new male role models.  Many of the latest superhero myths are rebooting the art of manliness—being a heroic, responsible, courteous guy.  That’s a great thing for both our culture and post-Trump politics.

Captain America (Image from IMDb)

The Flash (Image from IMDb)




Monday, September 19, 2016

Food (and Technology) as a Relationship—Book Review of In Defense of Food

In the previous post, I recommended the book Cooked, by Michael Pollan.
With recent reports about the sugar industry manipulating nutritional science and health policy, now may be a good time to recommend one of Pollan’s earlier books, because it illustrates another great insight about food—and about technology for making food.

(Image from Amazon)

In Defense of Food is my favorite of Pollan’s writings because it gets to the heart of what food truly is.  Right now our culture tends to think of food as a bunch of nutrients (e.g., carbohydrates and proteins, fats and vitamins).  Pollan calls this kind of thinking “nutritionism.”  In contrast, he proposes thinking about food in a different way: as a relationship.  In his words,

"What would happen if we were to start thinking about food as less of a thing and more of a relationship?  In nature, that is of course precisely what eating has always been: relationships among species in systems we call food chains, or food webs, that reach all the way down to the soil.  Species coevolve with the other species that they eat, and very often there develops a relationship of interdependence: I’ll feed you if you spread around my genes.  A gradual process of mutual adaptation transforms something like an apple or a squash into a nutritious and tasty food for an animal." [1]

Now before you dismiss this line of thinking as hippie talk, consider the evidence Pollan cites.  For example, he draws upon what the scientists David Jacobs and Lyn Steffen call “food synergy”: food is more than just nutrients; it also includes synergies between nutrients.

Synergies?

Basically, this means that nutrients don’t act in isolation; they interact and cooperate with each other and with the biology of a person when they are digested.  In other words, it’s not just the nutrients in the food but the food as a whole that affects our health, which may explain why whole foods are healthier than artificially processed foods.  Someone who eats whole foods will be healthier than someone who eats artificially processed foods, even when those two kinds of foods have the exact same amount of nutrients! [2]
We’re not exactly sure why yet, but some kind of “synergy” between nutrients is at work.  For instance, how a nutrient interacts with other nutrients during digestion seems to matter as much as, if not more than, consuming the nutrient alone.  Consuming omega-6 and omega-3 fatty acids matters less than having a balanced ratio between omega-6 and omega-3 fatty acids.
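To make that ratio point concrete, here's a tiny arithmetic sketch with made-up numbers (they're not from Pollan or from Jacobs and Steffen; they're purely for illustration):

    # Hypothetical daily intakes in grams, purely for illustration.
    diet_a = {"omega6": 15.0, "omega3": 1.0}   # more omega fats in total
    diet_b = {"omega6": 6.0, "omega3": 2.0}    # fewer omega fats in total

    def omega_ratio(diet):
        """Return the omega-6 to omega-3 ratio for a diet."""
        return diet["omega6"] / diet["omega3"]

    print(omega_ratio(diet_a))  # 15.0 -> a lopsided 15:1 ratio
    print(omega_ratio(diet_b))  # 3.0  -> a more balanced 3:1 ratio
    # Diet B has fewer omega fats overall, yet by the "balance matters"
    # argument it has the better profile of the two.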

Michael Pollan
(Image from Wikimedia Commons)

So does this research support Pollan’s proposal to think about food as a relationship?

Well, on one hand, Pollan may be overstating his case.  There’s some truth to nutritionism: a balanced intake of carbs, proteins, omega fats, and vitamins probably matters.  On the other hand, that’s not the whole story, and nutritional science can be problematic: there are few reliable measures of nutritional data; [3] and there are so many variables that impact health that it’s often questionable to draw reliable conclusions about those data. [4]  Pollan spends much of his book showing the limitations of nutritional (pseudo)science, and respected researchers and statisticians (such as Archer et al. and Ioannidis—see footnotes) have made related critiques of nutritional and health data.

Hence the title of Pollan’s book—In Defense of Food (defending food against nutritionism).  Pollan gives a seven-word maxim as an alternative to obsessing over nutrients:

Eat food.  Not too much.  Mostly plants.

You can listen to him explain these seven words in this short PBS clip.
Pollan is an excellent writer and throws some provocative ideas out there, so I'll just add a final thought.

Like food itself, the technologies we use to make food are more than meets the eye.  From simple gardening tools to agricultural machines, the technologies we use to produce and consume food are not just things; they embody our relationship to nature.  Tools and machines used by family farms, for instance, entail a different relationship to the land and animals than, say, megamachines used by factory feedlots.



[1] Michael Pollan, In Defense of Food: An Eater’s Manifesto (New York: The Penguin Press, 2008), 102.

[2] David R. Jacobs and Lyn Steffen, “Nutrients, Foods, and Dietary Patterns as Exposures in Research: A Framework for Food Synergy,” American Journal of Clinical Nutrition 78 (suppl) (2003): 508S–513S. http://www.ncbi.nlm.nih.gov/pubmed/12936941

[3] Edward Archer, Gregory A. Hand, and Steven N. Blair, “Validity of U.S. Nutritional Surveillance: National Health and Nutrition Examination Survey Caloric Energy Intake Data, 1971–2010,” PLoS ONE 8, no. 10 (2013): e76632. doi:10.1371/journal.pone.0076632. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0076632

[4] John P. A. Ioannidis, “Why Most Published Research Findings Are False,” PLoS Medicine 2, no. 8 (2005): e124. doi:10.1371/journal.pmed.0020124. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124




Sunday, August 21, 2016

Did Cooking Technology make us Human?—Book Review of Cooked

I've been getting a bit of summer reading done, so now I have to recommend a book.  Or maybe two books.  Or at least a documentary.

This summer I was delighted to see that Netflix added a four-part documentary based on the book Cooked, by Michael Pollan.  I'm an admirer of Pollan's earlier writings: The Omnivore's Dilemma deserves to be recognized as a classic, and In Defense of Food may be my favorite of his books.  Cooked is also a great read, but if you're short on reading time, then definitely try to watch the documentary.  (Check out the preview below.)


(Image from Amazon)

I learned something new from reading and watching Cooked.  Pollan interviews a primatologist by the name of Richard Wrangham, who has proposed a novel theory about the origins of humanity.

To understand his theory, let's take a couple steps back in the evolutionary tree of life.

Homo erectus was an ancestor of Homo sapiens (that's us), and Homo habilis was an ancestor of Homo erectus.  Many scientists agree that meat eating may explain the evolution of Homo habilis, but Wrangham doesn’t think it explains the emergence of Homo erectus.  Cooking, however, may explain that later transition.  That's right, Wrangham's theory says that “cooking technology” played a role in the evolution of early Homo species, particularly Homo erectus. [1]

To understand that evolution, we need to understand something about nutrition.  A fact I didn't know previously is that cooked vegetables and meats give us more energy than raw food.  Cooking food over a fire or another heat source is almost like pre-digesting it, because our jaws and digestive systems don't have to work as hard to break it down.  As a result, we end up absorbing more nutrients and calories when food is cooked.  When we eat raw food, however, our jaws and digestive systems have to work harder (burning calories to eat calories), so we get less bang for the buck (we absorb fewer nutrients and calories).
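Here's a back-of-the-envelope way to picture that "bang for the buck" idea.  The numbers are invented purely for the arithmetic; they're not Wrangham's measurements:

    # Toy numbers, invented for illustration: the net energy an eater gets is
    # the calories absorbed minus the calories spent chewing and digesting.

    def net_energy(gross_calories, absorption_fraction, digestion_cost):
        """Calories actually netted from a food."""
        return gross_calories * absorption_fraction - digestion_cost

    raw_tuber = net_energy(gross_calories=200, absorption_fraction=0.6, digestion_cost=40)
    cooked_tuber = net_energy(gross_calories=200, absorption_fraction=0.9, digestion_cost=10)

    print(raw_tuber)     # 80.0  -> less bang for the buck
    print(cooked_tuber)  # 170.0 -> cooking works like pre-digestion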

So how does that explain the transition from Homo habilis to Homo erectus?  According to Wrangham's theory, when our ancestors learned to cook, they could absorb food without their jaws and guts needing to work so hard.

For example, by cooking tubers (the thick, round stems of plants such as potatoes), our ancestors like Homo erectus increased the amount of energy their bodies could obtain from those foods: they got more energy (easily digested food) for less effort (reduced chewing).

With their jaws and guts working less, more energy was freed up for other areas of the body, including the brain.  As a result, our ancestors evolved smaller teeth, jaws, and guts, and they grew larger brains. [2]

Wrangham lays out his theory in his book Catching Fire: How Cooking Made Us Human, where he basically argues that our ancestors discovered cooking as “a technological way of externalizing part of the digestive process.” [4]


(Image from Amazon)


Those of us familiar with Marshall McLuhan's ideas have certainly thought of information technology and media as extensions of our minds.

Understanding cooking technology as an extension of our guts, however, was a new idea for me, and I'm thankful to Wrangham and Pollan for introducing and popularizing it.





[1] Richard Wrangham et al., “The Raw and the Stolen: Cooking and the Ecology of Human Origins,” Current Anthropology 40 (5) (December 1999): 572.

[2] Elizabeth Pennisi, “Did Cooked Tubers Spur the Evolution of Big Brains?” Science 283, No. 5410 (March 26, 1999): 2004-2005.

[3] Sushma Subramanian, “Fact or Fiction: Raw veggies are healthier than cooked ones,” Scientific American, March 31, 2009:

[4] Richard Wrangham, Catching Fire: How Cooking Made Us Human (New York: Basic Books, 2009), 56.


Tuesday, July 19, 2016

Pokémon Madness: from Cyberspace to Augmented Reality

Pikachu, a popular Pokémon

(image from pokemon.wikia)

I'm sure you've noticed the Pokémon madness by now.

You probably heard that Nintendo released a downloadable game for mobile devices such as your smartphone.
The game, called Pokémon Go, allows you to walk around and capture little creatures known as Pokémons.

That may sound trivial at first, but Pokémon Go is unique, because it brings together virtual images of Pokémons with real-time video.  While you view your surroundings through your mobile device's camera, the game projects virtual Pokémons into the live video.  (See the advertising video below.)


The game went viral within days, leading to millions of downloads and topping the charts on Google Play.  Consumer demand was so high that it put enormous strain on the servers, an unforeseen problem that Nintendo has repeatedly had to fix.  Even with this technical issue, the game is exploding, and Nintendo’s market value has soared as a result.
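For the technically curious, here's a minimal sketch of the core augmented-reality trick the game relies on: compositing a virtual image over live camera video.  It uses OpenCV and a placeholder sprite, and it's only an illustration of the concept, not how Pokémon Go is actually built (the real game also uses GPS and the compass to decide where a creature appears):

    # A toy "augmented reality" loop: draw a virtual sprite on top of live
    # webcam video. Illustrative only; requires OpenCV (pip install opencv-python).
    import cv2
    import numpy as np

    # Placeholder "creature": a 64x64 yellow square (BGR color order).
    sprite = np.zeros((64, 64, 3), dtype=np.uint8)
    sprite[:] = (0, 255, 255)

    cap = cv2.VideoCapture(0)              # open the default camera
    while True:
        ok, frame = cap.read()             # grab one frame of real-time video
        if not ok:
            break
        h, w = frame.shape[:2]
        x, y = w // 2 - 32, h // 2 - 32    # center; a real game would use GPS/compass
        frame[y:y + 64, x:x + 64] = sprite # composite the virtual image onto the frame
        cv2.imshow("augmented view", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()

Everything beyond this compositing step (tracking the camera's position, anchoring creatures to map locations) is what separates a toy demo like this from a product like Pokémon Go.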

Full disclosure here: I haven’t played the game, and I probably won’t.  As a general rule, I like to keep my distance from the latest hype, which allows me to look at it more objectively.  So for example, if we look at Pokémon Go from such a distance, we can make a philosophical observation here about our relationship to reality in the digital age.

Back in the day we talked about cyberspace, an umbrella term that usually meant online, interactive spaces made possible by the World Wide Web—i.e., computer networks connected via the Internet.

Now, however, cyberspace has blended into real space.  Pokémon Go is the latest example of virtual reality seeping into reality.  That’s why we refer to mobile devices as part of the Internet of things—it's not just the Internet alone, but the Internet connected to things in the world.

What all this means is that the term 'cyberspace' may be outdated.  Cyberspace isn't separate from real space any longer.  An updated description of this new situation is the phrase Augmented Reality, which means that physical reality is supplemented or augmented by virtual reality.  (There’s also the phrase Computer-Mediated Reality, in which perception is modified or mediated by wearable computers—think of Google Glass.)

Google Glass

(image and description from Wikipedia)

Pokémon Go is just the latest example of us moving from cyberspace to Augmented Reality.

So is the emergence of Augmented Reality a positive development in our lives?  Well, like all new technological innovations, there’s always the good, the bad, and the ugly.
  • The Good: Augmented Reality gives us plenty of virtual applications to enhance the work we do in the real world.  For instance, GPS (in cars, planes, maritime vessels, etc.) allows us to navigate terrain more efficiently in real time.  In human sciences like ergonomics or usability, wearable computers like eye-tracking technologies allow us to study vision with more precision.  In such cases, technology doesn’t interfere with life but truly supplements it and helps us make improvements.
  • The Bad: As Pokémon Go shows, Augmented Reality takes video games to a whole new level.  While fun, that also makes video games much more addictive, leading people to do highly irrational things.  Within days of the Pokémon madness, there have been what police describe as “trespassing epidemics” and even instances of people quitting their jobs to become full-time Pokémon hunters.  In these cases, technology interferes with life, and not in a good way.
  • The Ugly: Augmented Reality may create amusing games like Pokémon Go, but sometimes the public may literally amuse itself to death (as Neil Postman cautioned long ago).  Some players have been so absorbed in the game that they unintentionally plunged into ponds, walked into highway traffic, crashed into trees, and fell off of cliffs.  In those cases, technology ruins life.

Regardless, Augmented Reality is here to stay.  The ultimate question is how to make best use of it by understanding its good, bad, and ugly sides.

I think we can do better than Pokémon Go.


Friday, June 24, 2016

Will voting with smartphones promote democracy?

Consumer Reports
July 2016 issue
(image from consumerreports.org)

In an age where sensationalism has overshadowed journalism, it’s comforting to know that there are real sources of news out there.  I’d include Consumer Reports among those real news sources,  even though it just focuses on products and services.  This month’s issue has a thought-provoking article, “Online Voting and Democracy in the Digital Age,” which asks the question:

“We now use the Internet to shop for cars, file taxes, and everything in-between.
But are we ready to vote with our smartphones?”

Well, we may ask, why not?  The US has low voter turnout on election days compared to other democratic, industrialized countries, so perhaps making voting more convenient will help increase civic participation.
At least, that appears to be the underlying assumption behind the question.

However, I’m not so sure civic participation would necessarily improve in a smartphone-driven society.

To understand why, let’s first bear in mind a famous aphorism from media guru Marshall McLuhan: “the medium is the message.”  What McLuhan meant was that media (and technologies, more broadly speaking) don’t just communicate information; they actively shape it.  More specifically, media influence how our minds process information.  (This insight is the main idea of what educator Neil Postman called media ecology.)

Marshall McLuhan
(Image from marshallmcluhan.com)

For instance, digital media like smartphones promote quick information processing.  Very quick.  They process information almost literally at the speed of light: digital information travels at lightning speed.  The light then comes through a glowing screen and into your eyes.  Staring directly at a source of light is difficult, so your vision tends to bounce around (to mitigate digital eye strain), finding bright buttons and distracting links in the process.  This is why we tend to scan and multitask when it comes to digital screens.

Contrast that with print media, which promote slower information processing.  With books, for instance, light reflects off the page, allowing you to focus on a book for a longer period of time without feeling digital eye strain (reflected light is much easier on your vision than direct light).  Also, there’s nothing to click on in print media, so those kinds of distractions are eliminated.  This is why it’s easier to focus on a book than on a screen.

Not that digital is bad and print is good.

Different media are designed for different purposes, that's all (e.g., see my Screen vs. Print series).

Which brings us back to the proposed smartphone-driven democracy.

Your smartphone is a great digital device designed for fast-paced tasks—e.g., checking email, getting directions or bus schedules, shopping, etc.
It’s not the greatest medium for careful reflection (in-person dialogue and books are better for that purpose).  Also, when you use your smartphone, you do so alone—even if you’re connected to a billion users, you’re still doing it by yourself.

Now contrast that with the purpose of democracy and civics, which is about careful reflection on the character as well as the policies of candidates.  Civics is also about coming together as a community in dialogue, not about multitasking alone on a computer.

In short, the design of smartphones is at odds with the purpose of civics.

So voting with smartphones is really a misguided answer to the problem of voter turnout.  To encourage civic-mindedness and thoughtful reflection in politics, we’d want to create social spaces where people come together in dialogue.  We don’t want to further commodify politics with smartphones and just be, as the psychologist Sherry Turkle says, “alone together.”  Instead of voting with smartphones, a real solution would turn Election Day into a national holiday, encouraging people to put down their phones, leave their confines, and come together in the spirit of community.