JIS XXXII 2020: 1-16

TAMING THE DIGITAL BEHEMOTH:
RETHINKING THE DIGITAL-HUMAN DIVIDE

Oskar Gruenwald
Institute for Interdisciplinary Research

This essay explores the digital challenge, how to humanize technology, and the need to rethink the digital-human divide.  This is imperative in view of superintelligent AI, which may escape human control.  The information age poses quandaries regarding the uses and abuses of technology.  A major critique concerns the commercial design of digital technologies that engenders compulsive behavior.  All technologies affect humans in a reciprocal way.  The new digital technologies, from smartphones to the Internet, which tether humans to machines, can impair our autonomy, hijack attention, rewire the brain, and diminish concentration, empathy, knowledge, and wisdom.  The remedy is to restore deep reading, human interactions, personal conversations, real friendships, and respect for autonomy and privacy, building a nurturing culture of tolerance, coupled with transcendent norms and ideals worthy of a creature created in the image and likeness of God.  This aspiration should be at the center of a new interdisciplinary field of inquiry: a phenomenology of communications.

HUMANIZING TECHNOLOGY

This 32nd thematic volume of the Journal of Interdisciplinary Studies on “The Digital Challenge: How to Humanize Technology” seeks to initiate a dialogue on the promises and pitfalls of digital technologies and their impact on individuals, society, and culture.  We live in an era of computerization of everyday life.  Automation pervades all aspects of work and leisure, from the shop floor to education and dating services, with limitless possibilities for instant communication via smartphones, the Internet, and social media.  But, is humanity becoming overly dependent on machines?  Meredith Broussard cautions in Artificial Unintelligence that “computers misunderstand the world.”  She contends that: “When we rely exclusively on computation for answers to complex social issues, we are relying on artificial unintelligence” (Broussard 2018: 11).  Simon Head points out in Mindless that smarter machines are “making dumber humans,” and that the age of machines as labor-saving devices has intensified management control of employees via Taylorism or “scientific management” (2014: 185).  Humanoid robots pose an even greater challenge.  For example, how can humans avoid emotional attachment to and dependence on their creations, as explored by Sherry Turkle (2017) in experiments where children interact with sociable robots, and epitomized by the individual willing to “marry” his iPhone?

Kentaro Toyama raises the question of normative guides in an era of rapid progress in digital technologies.  Toyama maintains in Geek Heresy that: “Technology doesn’t bootstrap an ethical outlook on its own.  Ultimately, people govern technology.  Any progress worthy of its name requires progress in human heart, mind, and will” (2014: 217-18).  The question arises: In a world dominated by machines, what is the proper role for humans?  Toyama proposes that technology is Janus-faced, and that genuine social progress depends on caring humans.  The thesis of this essay is that the future of humanity may well depend on scientists’ and humanists’ willingness and ability to bridge the conceptual divide of C. P. Snow’s famous The Two Cultures (1959) in order to humanize technology and promote individual fulfillment and social felicity in the global village.

In 2020, U.S. higher education is grappling with novel challenges due to the coronavirus pandemic (Zimmerman 2020).  Michael Meagher’s essay chronicles an educator’s journey from classroom to online instruction.  The Fall 2020 semester in the U.S. is likely to rely on distance learning empowered by digital technology at all levels of education.  The downside is that kids learning at home miss their friends and the all-important socialization that depends on face-to-face interaction.  Corine Sutherland describes the ethical challenges of online-learning, which include positives for college students obtaining an education long-distance, and negatives such as novel opportunities for cheating.  Sutherland draws on Isaac Asimov’s rules for robots, adapting them as ethical codes for students.  Indeed, the ethical dimension looms large in humanizing digital technologies.  Martin Yina examines the continuing digital divide between developed and developing countries like Nigeria.  This inter- and intra-societal digital divide was already noted in 2002 by the Pontifical Council for Social Communications’ Ethics in Internet (reproduced as a document in this volume).
How to humanize technology is the focus of Bruce Lundberg’s essay, “The Virtues of Leonhard Euler.”  The famous eighteenth-century mathematician exhibited “the gifts of human strength, practices, good will, dependence on others, and friendships, which made possible Euler’s own astonishing corpus of work and that of many other scientists and mathematicians, engineers and technologists” (2020: 58).  Lundberg concludes that: “As ethics enable and embody an ethos, so technologies are means and manifestations of a telos.  Thus, thought and action for thriving through the digital needs to contemplate and conciliate the ends of humans and of the digital” (2020: 58).  This volume is dedicated, indeed, to initiating a dialogue concerning the quandary of how to conciliate the ends of humans with those of the pre-programmed digital.

Daniel Topf shows why this is a crucial concern as we advance into the era of the Fourth Industrial Revolution, the age of AI and robotics.  Topf’s essay considers Yuval Noah Harari’s claim that automation and robotics (machines) could replace humans, who may become not only unemployed, but eminently “unemployable,” thus giving rise to a “useless class.”  The optimists, futurists like Ray Kurzweil and Nick Bostrom, celebrate the prospect of a digital utopia.  Topf is more realistic in proposing a faith-based approach that emphasizes the notion of irreplaceable humans capable and worthy of love and care, apart from their employability or work.  Such an ethos reflects a Judeo-Christian anchoring of the self in the intrinsic worth and dignity of human beings created in the imago Dei.

Providentially, this volume includes essays on two great mathematicians, Euler and Kurt Gödel, both religious believers.  Miloš Dokulil, a Czech logician, probes Gödel’s life and work, but considers Gödel’s proof of God as “outside interpersonal conditions for an objective construction” beyond a “verbal proof.”  While acknowledging Gödel’s singular contributions to metamathematics, Dokulil concludes that Gödel’s religious worldview represented for him a personal security that requires no proof whatever.  Daniel Hollis remarks that Dokulil becomes a participant in the search for Gödel’s God.  Dokulil’s enigmatic quest for religious certainty and a redeeming faith evokes sympathy and prayers.  Uncannily, Dokulil’s travails confirm the Biblical injunction that the truths of the Bible are spiritually discerned; they appear as “foolishness” to the “natural man” (the flesh).  As Paul relates in the New Testament: “But the natural man receiveth not the things of the Spirit of God: for they are foolishness unto him: neither can he know them, because they are spiritually discerned” (I Cor 2:14).

Gödel is best known for his Incompleteness Theorems, according to which any consistent formal system rich enough to express arithmetic has inherent limitations: there are true statements it cannot prove.  Gödel’s Incompleteness Theorems remind one of Werner Heisenberg’s Uncertainty/Indeterminacy Principle in quantum mechanics, which describes a different world from classical mechanics.  In a nutshell, Heisenberg’s Uncertainty Principle refers to the phenomenon that it is impossible to determine simultaneously the precise position and momentum of a particle.  Interestingly, Heisenberg’s Uncertainty Principle inspired Jacob Bronowski (1973) to re-interpret it as a “Principle of Tolerance” in human affairs (Gruenwald 1990).  Both Gödel’s Incompleteness Theorems and Heisenberg’s Uncertainty Principle may be relevant for prospects to humanize AI, and to transform a growing digital challenge to benefit, rather than harm, humanity.
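
Since both results serve here as analogies, it may help to display their formal kernels.  The following is a minimal sketch in standard textbook notation, not drawn from any essay in this volume:

```latex
% Heisenberg's uncertainty relation, in its standard modern form: the
% standard deviations of a particle's position x and momentum p satisfy
\[
  \sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2},
\]
% so any gain in precision about position is paid for in momentum.
%
% Goedel's First Incompleteness Theorem (in Rosser's strengthening),
% stated informally: for any consistent, effectively axiomatized
% theory T that includes basic arithmetic, there is a sentence G_T with
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T,
\]
% i.e., G_T can be neither proved nor refuted within T itself.
```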

TAMING SUPERINTELLIGENT AI

A new day has dawned when scientists and humanists need to build bridges of mutual understanding to stay one step ahead of, and tame, superintelligent machines.  Stuart Russell, a computer scientist at U.C. Berkeley, has come to share the concern of many humanists regarding the potential of superintelligent AI to enslave humanity.  Russell admits that “many people had finally begun to consider the possibility that superhuman AI might not be a good thing–but these people were mostly outsiders rather than mainstream AI researchers” (2019: 2).  However, Russell continues, “By 2013, I became convinced that the issue not only belonged to the mainstream but was possibly the most important question facing humanity” (2019: 2).  But, why should anyone be concerned?  We have heard such alarmism before regarding technological innovations.  Why should it be different this time?  Russell’s answer is cautious yet hopeful: “The arrival of superintelligent AI is in many ways analogous to the arrival of a superior alien civilization but much more likely to occur.  Perhaps most important, AI, unlike aliens, is something over which we have some say” (2019: 2-3).

Most of Russell’s book then retraces the various stages of AI development, highlighting “deep learning” techniques in three major areas: speech recognition, visual object recognition, and machine translation.  Russell points out that: “By some measures, machines now match or exceed human capabilities in these areas” (2019: 6).  Russell is aware that the potential economic and social benefits of AI are enormous, but he poses a quandary facing humanity: “If all goes well, it would herald a golden age for humanity, but we have to face the fact that we are planning to make entities that are far more powerful than humans. How do we ensure that they never, ever, have power over us?” (2019: 8).

What both scientists and humanists should find troubling is an essential feature of AI and digital technologies: their programmed capability to change human preferences.  This capability arises because AI-driven digital technologies are designed for maximum click-through, and thus for maximum impact on the human mind.  Normally, this would be a “canary in the coal mine” or a “red flag” raised by tech critics.  Russell sums up the intrinsic AI challenge: “Like any rational entity, the algorithm learns how to modify the state of its environment–in this case, the user’s mind–in order to maximize its own reward” (2019: 9).  But, why the concern about this, at first glance, innocuous digital programming?  Russell’s answer speaks volumes:

The consequences include the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and potentially the end of the European Union and NATO.  Not bad for a few lines of code, even if it had a helping hand from some humans.  Now imagine what a really intelligent algorithm would be able to do (2019: 9).
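
Russell’s description of this mechanism can be made concrete with a toy simulation.  The sketch below is purely illustrative: the two-pole content axis, the click model, and the taste-drift rule are assumptions invented for this example rather than anything in Russell’s book; it shows only how a learner rewarded for clicks alone ends up reshaping the very preferences it is measuring.

```python
import random

# Toy model of the click-through dynamic Russell describes: the
# recommender's only reward is clicks, and showing an item nudges the
# simulated user's taste toward it.  All numbers are illustrative
# assumptions, not taken from Russell's book.

random.seed(0)
items = [-1.0, +1.0]             # two poles of some content axis
user_pref = 0.0                  # the user starts out neutral
clicks = {i: 0.0 for i in items}
shows = {i: 1.0 for i in items}

def click_prob(item, pref):
    # items closer to the user's current taste get clicked more often
    return max(0.0, 1.0 - 0.5 * abs(item - pref))

for _ in range(3000):
    # epsilon-greedy click maximizer: mostly exploit the best click rate
    if random.random() < 0.05:
        item = random.choice(items)
    else:
        item = max(items, key=lambda i: clicks[i] / shows[i])
    shows[item] += 1
    if random.random() < click_prob(item, user_pref):
        clicks[item] += 1
    # showing an item pulls the user's taste toward it: the algorithm
    # is modifying the state of its environment -- the user's mind
    user_pref += 0.02 * (item - user_pref)

print(f"user preference after training: {user_pref:+.2f}")  # near a pole
```

Run repeatedly, the simulated user drifts to whichever pole the algorithm happens to lock onto early: a miniature of the feedback loop Russell warns about.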

In retracing what went wrong, Russell revisits the concept of intelligence as central to Homo sapiens.  Crucial to his argument, Russell demarcates human from machine intelligence by positing that: “Machines are intelligent to the extent that their actions can be expected to achieve their objectives” (2019: 9).  Unexpectedly, this encapsulates the AI dilemma, though at the same time it offers hope for taming the digital behemoth.  Russell puts it this way: “In summary, it seems that the march towards superhuman intelligence is unstoppable, but success might be the undoing of the human race.  Not all is lost, however.  We have to understand where we went wrong and then fix it” (2019: 11).

Russell is hopeful that human ingenuity can fix the central challenge of AI: to make machines that are not only intelligent, but also “beneficial” for humans.  The desired outcome in such an endeavor is to program (or train) machines to achieve human objectives rather than simply the machine’s objectives.  This would require a revolutionary new approach to modeling and designing AI.  The key element of this new approach should be to build “uncertainty about objectives” into the very backbone of AI.  Curiously, both Gödel’s Incompleteness Theorems and Heisenberg’s Uncertainty Principle may offer new insights for more complex programming.  Russell explains the rationale for adopting “uncertainty” as a new core principle of AI: “Uncertainty about objectives implies that machines will necessarily defer to humans: they will ask permission, they will accept correction, and they will allow themselves to be switched off” (2019: 12).  In brief, Russell proposes that we need to re-think the entire superstructure and infrastructure of AI.  To the engaged reader, it sounds reassuring as a counterpoint to the nefarious HAL supercomputer in Stanley Kubrick and Arthur C. Clarke’s sci-fi movie, 2001: A Space Odyssey (1968).
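
Russell’s claim that uncertainty about objectives makes deference rational can be illustrated numerically.  The sketch below is loosely inspired by the off-switch analysis in Human Compatible, but the Gaussian belief and the human’s perfect veto are simplifying assumptions introduced here:

```python
import random
import statistics

# A machine unsure of its objective compares three options: act now,
# defer to the human (who vetoes any action with negative utility),
# or switch itself off.  The belief distribution is an assumption.

random.seed(0)

# Machine's belief about the human's utility u of a proposed action:
# mildly beneficial on average, but genuinely uncertain.
belief = [random.gauss(0.1, 1.0) for _ in range(100_000)]

act_now = statistics.mean(belief)                     # E[u]
defer = statistics.mean(max(u, 0.0) for u in belief)  # E[max(u, 0)]
switch_off = 0.0                                      # safe default

print(f"act without asking: {act_now:+.3f}")
print(f"defer to the human: {defer:+.3f}")
print(f"switch itself off:  {switch_off:+.3f}")
# Deference wins because the human's veto screens out exactly the
# harmful cases.  With zero uncertainty the options collapse and the
# machine has no incentive to keep the human in the loop -- which is
# Russell's argument for building uncertainty into AI from the start.
```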

In fact, Russell is confident that such a new programming approach to AI, based on “uncertainty about objectives,” would result in “a new relationship between humans and machines,” one that he hopes “will enable us to navigate the next few decades successfully” (2019: 12).  A major challenge facing humanity, according to Russell, is to create and build “provably beneficial AI systems,” which would “eliminate the risk that we might lose control over superintelligent machines” (2019: 246).  Such new beneficial AI systems would need to “allow for uncertainty” in the specifications, where human inputs and answers would then “help in refining a machine’s specifications” at all levels (2019: 248).  In brief, the objective would clearly be to reclaim human control over superintelligent machines.

Russell also calls for new agreements, rules, and standards to govern this Brave New World of “provably beneficial machines” (2019: 248).  And he cautions against the undisciplined pursuit of AI technologies without built-in safeguards.  An unregulated AI jungle is scarcely a desirable scenario; Russell observes, half in jest, that: “A ‘bunch of dudes chugging Red Bull’ at a software company can unleash a product or an upgrade that affects literally billions of people with no third-party oversight whatsoever” (2019: 252).  Russell is not concerned so much that “evil schemes” would succeed but, rather, that “poorly designed intelligent systems,” in particular “ones imbued with evil objectives and granted access to weapons,” may escape human control (2019: 252).  He notes that we are already “losing the battle against malware and cybercrime,” and that future highly intelligent programs “would be much harder to defeat” (Russell 2019: 252).

Russell’s concluding thoughts concern the human condition and the need to strengthen human autonomy, choice, and agency.  It is a reminder of the imperative for greater self-understanding and the renewal of human civilization’s normative guideposts or ethos.  This, Russell believes, calls for a new cultural ethos, indeed a new cultural movement reaffirming the primacy of human ends.  In Russell’s formulation: “We will need a cultural movement to reshape our ideals and preferences towards autonomy, agency, and ability and away from self-indulgence and dependency–if you like, a modern, cultural version of ancient Sparta’s military ethos” (2019: 255-56).

A CULTURE IN SEARCH OF MEANING

In 480 BC, at the Battle of Thermopylae, the legendary King Leonidas and his 300 Spartans faced down a much larger army and, at the cost of their lives, held off the Persian invaders long enough for the Greek city-states to mount a common defense that ultimately succeeded in expelling the enemy.  They kept their pledge as Spartans, epitomized by their send-off into battle with the spirited words of their wives or mothers, handing each their shield: “With it, or on it!”  Sparta’s courage resonates with soldiers in every era.  But Sparta was a militarized society, remarkable as a close-knit band of soldiers, a nation of warriors.  It was not a liberal democracy, and the common bonds that bind a society appear badly frayed in much of the contemporary world as it transitions from traditional societies into modernity and postmodernity.

The postmodern challenge is reflected in the search for normative standards and truths that bind people together as families and communities.  This quest is made the more difficult by a postmodern cultural preference for subjectivism and moral-ethical relativism that brackets all Absolutes (Gruenwald 2016).  It is unclear, however, how such a culture of eviscerated norms can accommodate Russell’s project of creating beneficial AI systems.  One hopes that Russell does not imply a militarized society like ancient Sparta, as societies in the twenty-first century already struggle with the increasing centralization of political power and decision-making that steadily encroaches on individual freedoms and privacy.  This is so even apart from the impending advent of dystopias enabled by gene technology in Aldous Huxley’s Brave New World and by Big Brother’s telescreens in George Orwell’s 1984 (Gruenwald 2013).

Though not readily acknowledged, obvious dystopian elements may be found in existing dictatorships such as P.R. China, which employs digital technology to track friend and foe alike, hacking and silently stealing Western, especially U.S., tech secrets.  While the country is open to international trade and tourism, co-opting science and technology and even aspects of the liberal arts approved by the Chinese Communist Party, its “Great Chinese Firewall” carefully filters information reaching the Chinese public.  No wonder that Communist China’s smoke-and-mirrors domestic and foreign policies bewilder most observers.  The country many refer to by a misleading moniker, “China,” is hostile to inquiries regarding human rights and freedoms, democracy, multiparty elections, or limitations on the all-encompassing power of the New Mandarins, whose critics are expelled, silenced, or sent for “re-education” in the Laogai, the Chinese Gulag (Kempton & Richardson 2009).  This is a great irony, since “made in China” is now stamped on many products for sale in the U.S.  The self-congratulatory claim that the West has overcome Communism needs to be tempered by the many iterations of one-party rule and monopoly of power and ideology, stretching from P.R. China, North Korea, Vietnam, Laos, and Cuba to Putin’s authoritarian Russia, despite the implosion of the Soviet empire.

One does not have to subscribe to the ethos of the 1960s “flower children” to ask the probing question of what technology has done to, as well as for, people.  Critics of digital technologies rarely lament their uses, but focus on their abuses.  Yet, one wonders whether the uses and abuses of digital technology may actually be two sides of the same coin and, as such, cannot be neatly separated.  If this were the case, then humanity might expect unprecedented challenges and daunting choices.  More than ever, this would bring us back to the quintessential quandary, revisiting the age-old question of what it means to be human.  In a sense, then, where Russell’s study ends, a deeply humanistic inquiry only begins.

TECHNOLOGY AND ITS DISCONTENTS

A common refrain among contemporary writers about technology and its discontents is that they do not dismiss science or technology.  Modern civilization and its many conveniences would be unthinkable without technology in its myriad uses and applications in such diverse fields as transportation, manufacturing, medicine and health care, farming, home building, gas-water-electric-electronic grids, education, entertainment, news media, communications, defense, and space exploration. Paradigmatically, technology reflects human creativity. Indeed, apart from language, creativity is perhaps the most distinguishing trait or characteristic of human beings.  David Eagleman and Anthony Brandt (2018) rightly regard humans as the creative species par excellence.

However, all technologies impact human beings in a reciprocal way.  Already in the 1960s, Marshall McLuhan’s inquiries into the mainstream media of radio, movies, and television pointed to their relaxing, enjoyable, soothing effects as well as their impact on individual consciousness and society, perpetuating an “age of anxiety.”  According to McLuhan, the news media of mass communications act as extensions of human senses, where the medium is not only the “message,” but also the “massage” (1967: 26).  Fast forward to 1994 and the publication of Sven Birkerts’ The Gutenberg Elegies, which explored a cultural revolution in the societal shift from print-based to electronic communications.  Birkerts’ autobiographical and anecdotal book was prophetic in addressing the seismic shift from print to an electronic culture whose impact on human self-understanding is yet to be gauged fully.  Birkerts claimed that not only the older print-based culture hung in the balance, but also our essential identity, the self or the soul.  While Birkerts’ Elegies was impressionistic, recent studies that draw on socio-psychological, behavioral, neurological, AI, and brain research confirm much of Birkerts’ musings, and extend the scope of inquiry.

Perhaps the most famous study characterizing a new genre is Nicholas Carr’s The Shallows: What the Internet is Doing to Our Brains (2020).  Together with Maryanne Wolf’s Reader, Come Home (2018), Carr’s study bemoans the passing of the age of the book, in that “deep reading” has “become a struggle” in the age of the Internet (2020: 6).  The main reason for this phenomenon is that “the Internet, by design, subverts patience and concentration” (2020: ix-x).  Other critics, like Cal Newport (2019), agree with Carr that the main culprits in the twenty-first century information culture are the new electronic media that scatter our attention, which, in turn, “strains our cognitive abilities, diminishing our learning and weakening our understanding” (Carr 2020: 129).

Now, some might object that the more pedestrian twentieth-century media of radio, movies, and television could be just as distracting.  One need only recall (with a bit of nostalgia) the not always subtle background noise of one’s favorite radio station or the reassuring constant chatter of a TV set, not to mention the attraction of competitive video-games.

But Carr’s argument points to the irresistible “attention-grabbing” allure of hypermedia that combine “not just words that are served up and electronically linked, but also images, sounds, and moving pictures” (2020: 129).  The power of the Internet and hypermedia impacts not just culture; it tinkers with our brain, mind, and self, arguably producing changes in behavior.  At the center of such behavior modification is the introduction of a new device: the smartphone.  Carr sums up the far-reaching effects of this new communication technology:

Along with the attendant growth of social media, the proliferation of smartphones–more than 10 billion have been sold–has had a sweeping influence on almost every aspect of life and culture.  It has given a new texture and tempo to our days.  It has upset social norms and relations.  It has reshaped the public square and the political arena (2020: 226).

Wolf’s Reader, Come Home is in many ways complementary to Carr’s The Shallows.  Wolf also champions the benefits of “deep reading” and the irreplaceable platform of a literacy- and word-based culture in contrast to an encroaching faster-paced digital and screen-based one.  Wolf begins her study with the surprising statement that “human beings were never born to read.  The acquisition of literacy is one of the most important epigenetic achievements of Homo sapiens” (2018: 1).  Yet, the deep reading characteristic of an earlier book culture is not only about “reading.”  Rather, Wolf recalls, “The long developmental process of learning to read deeply and well changes the very structure of that circuit’s connections, which rewired the brain, which transformed the nature of human thought” (2018: 2).

Wolf shares the concerns of tech critics that the immediacy of digital communications not only scatters attention, but diminishes our “cognitive patience” (2018: 92).  Thus, attention, cognition, and memory are all under siege in an age of digital distraction.  Wolf cautions that: “What we attend to and how we attend make all the difference in how we think” (2018: 108).  The focus of attention in a digital era may be summed up by a favorite texting expression: “TL;DR: Too long; didn’t read” (Wolf 2018: 92).  This not only short-circuits attention and the art of deep reading, but also diminishes our capacity for thought, meditation, human imagination, and creativity.  Perhaps most important, the fast-paced digital era impoverishes the knowledge and feeling crucial for the development of empathy: the capacity to share in the perspective of others (Wolf 2018: 42-54).

The remedy proposed by Wolf is a biliterate brain that can skim information, but is also capable of the greater concentration presupposed by deep reading.  Retaining the more focused deep reading is, indeed, essential for the survival of civilization.  This is necessary, since the digital dilemma “is being acted out this moment in the cognitive, affective, and ethical processes now connected in the present reading circuitry and now threatened” (Wolf 2018: 204).  Developing a biliterate brain, which retains the acquired capacity for deep reading, is paramount for the future of humanity, since: “The atrophy and gradual disuse of our analytical and reflective capacities as individuals are the worst enemies of a truly democratic society, for whatever reason, in whatever media, in whatever age” (Wolf 2018: 199).

On the positive side, electronic communications and the Internet are now also enabling online-learning in a pandemic, when the education of students from K-12 to college hangs in the balance.  As Meagher points out, the pandemic even has a silver lining in that the expansion of distance education and online-learning is enabling more diverse student groups and adult learners across the U.S. and the world to obtain a higher education, while buttressing higher education institutions, many of which are financially strapped.

Still, concerns linger regarding our ability to manage the new technologies and find a proper balance between their positive uses and negative side-effects.  A major complaint concerning digital technologies is that they may contribute to behavioral addiction.  Yina objects to Newport’s terminology, arguing that substance addiction, widespread in much of the world, should be distinguished from habits that may morph into addictive behavior.  But Newport’s claim is that the “compulsive connectivity” of the new technologies is addictive by design and may lead to “compulsive behavior” (2019: 16).  In Faust, the famous masterpiece of world literature by Johann Wolfgang von Goethe (1749-1832), the tempter’s words find an echo in human gullibility through the ages: “At your first step, free; at the second, a slave.”  Adam Alter offers a definition of behavioral addiction that consists of six elements:

(1) compelling goals that are just beyond reach; (2) irresistible and unpredictable positive feedback; (3) a sense of incremental progress and improvement; (4) tasks that become slowly more difficult over time; (5) unresolved tensions that demand resolution; and (6) strong social connections (2018: 9).

Crucially, Alter and Newport reflect an emerging consensus on a key point regarding the new digital technologies.  It is not digital technologies per se, but their commercial uses that are addictive by design as they focus on “click-through,” that is, “more time on device.”  As Alter admits: “Tech isn’t morally good or bad until it’s wielded by the corporations that fashion it for mass consumption” (2018: 8).

Yet, Sutherland emphasizes the role of individual moral choice as decisive in students’ use of technology for cheating.  The company that produces software enabling student cheating may also bear part of the responsibility for the end-uses of technology.  Even more important is the role of parents and their responsibility for instilling a code of ethics and high moral standards in their offspring.  This crucial formative role of character-building should, then, continue through formal schooling and throughout life, a classical prescription now more important than ever.  Alas, a postmodern culture of moral-ethical relativism encourages what David Brooks’ The Road to Character calls a “Big Me” ethics of self-aggrandizement (2015: 240).  A culture shorn of Absolutes offers few guides for student conduct beyond a utilitarian, self-referential ethics.

PHENOMENOLOGY OF COMMUNICATIONS

The quandaries concerning the Internet and new digital technologies call for an interdisciplinary field of inquiry exploring the phenomenology of communications.  This new field can deepen and build on the research of scientists and engineers like Russell and Newport, combined with insights by humanists like Carr, Alter, Wolf, and Turkle.  Social-behavioral sciences and humanities need to be brought into the conversation regarding optimal ways to define a new relationship between humans and machines.  Builders of commercial products that have addiction designed into the devices are taking advantage of human vulnerabilities: our need for love, sociability, connectedness, friendship, acceptance, recognition, and validation.

The new digital technologies, built around ubiquitous uploading, downloading, and would-be “sharing” that tether humans to machines, are said to impair our autonomy, hijack attention, and rewire the brain, thus diminishing concentration, empathy, knowledge, and wisdom.  The remedy, according to critics of digital tech, is more human interaction, personal face-to-face conversations, real friendships, and greater respect for autonomy and privacy.  In sum, what is required is building a truly human culture of tolerance, capped with transcendent norms and ideals worthy of a creature created in the image and likeness of God (Gen 1:27).  There is an app for that.  According to Turkle, “We are the empathy app” (2017: xxvi).  Turkle offers practical advice on how to reclaim good manners in an era of machine-connectivity: “Talk to colleagues down the hall, no cellphones at dinner, on the playground, in the car, or in company” (2017: 296).

However, Newport argues that, for most people, well-intentioned but vague resolutions may not work; they need a regimen to reclaim autonomy and reassert human choice over machine compulsion.  Newport’s solution is a “digital declutter,” which involves first disconnecting for 30 days, and then purposefully re-introducing only those activities and technologies that support things we value (2019: 28).  We need to guard against temptations, such as falling in love with our creations, because they may hijack not only our attention, but our very souls.  Turkle cautions that “no robot can ever love us back” (2017: 28).

This contrasts with perennial high-tech cultural expectations like the beguiling Hollywood-style ending in the movie, Blade Runner (1982).  In the final scene of this acclaimed sci-fi thriller, the protagonist, Rick Deckard, whose job was to hunt down and terminate replicants, rides off into the sunset with his beautiful android fiancée, Rachael, to live happily ever after . . . (Martin 2005).

When it comes to digital media of communications, our engrossing fascination with high-tech comes at a price.  Turkle expresses well our present human predicament:

We are lonely, but fearful of intimacy.  Digital connections and the sociable robot may offer the illusion of companionship without the demands of friendship.  Our networked life allows us to hide from each other, even as we are tethered to each other.  We’d rather text than talk (2017: 1).

Futurists like Ray Kurzweil (2005) are enthralled by the Singularity–the transhumanist vision of humanity’s next evolutionary phase where it has conquered diseases, the frailties of old age, and perhaps even cheated death.  Yet, most people would probably settle for what Turkle calls the “empathy app,” which reflects the Judeo-Christian hope for God’s love, forgiveness, mercy, redemption, and eternal life beyond this vale of tears in Paradise.

As for “empathy,” Alter suggests that it is “a very slowly developing skill” that requires live face-to-face interaction among humans, observing not only words but behavior, rather than mediated online interactions (2018: 40).  Alter cites studies that show a decline of empathy among college students over several decades (1979-2009).  According to such studies, college students are “less likely to take the perspective of other people, and show less concern for others” (Alter 2018: 40).  Could it be that the impersonal medium of the screens and machine-mediated electronic communications also diminish empathy among the wider public in the U.S. and abroad?  This certainly would deserve further inquiry if we desire a more peaceful world.

While they celebrate the transhumanist project, Braden Allenby and Daniel Sarewitz also admit the reality of human fallibility and imperfection (2011: 195).  Such a reminder is necessary to keep our scientific and technological prowess in proper perspective. Allenby and Sarewitz conclude that: “To match its science museum, every city needs its own museum of humility, of ignorance and uncertainty, of the techno-human condition, to help us better understand how to act wisely, prudently, and compassionately in the world” (2011: 195).

In conclusion, there is a need to refocus our attention onto our human world of relationships, reclaiming deep reading, personal conversations, and real friendships that can contribute to restoring our fractured selves and help us live more fulfilling lives in a more harmonious, peaceful, and prosperous global village.  Technology is God’s gift; its proper use is our human challenge and stewardship responsibility.  The proper uses of technology may hark back to the classical notion of moderation (sophrosyne) and the right ordering of the human soul: the teleological imperative (Gruenwald 2007).  The lure of social media and the promise of digital connectedness, and their impact on core human identity, the self, remain to be unraveled; understanding them is essential to a phenomenology of communications.

REFERENCES:

Allenby, Braden R. & Daniel Sarewitz. 2011. The Techno-Human Condition. Cambridge, MA: MIT Press.
Alter, Adam. 2018. Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked. New York: Penguin Books.
Birkerts, Sven. 1994. The Gutenberg Elegies: The Fate of Reading in an Electronic Age. London: Faber & Faber.
Bronowski, Jacob. 1973. The Principle of Tolerance. The Atlantic 232 (December): 60-66.
Brooks, David. 2015. The Road to Character. New York: Random House.
Broussard, Meredith. 2018. Artificial Unintelligence: How Computers Misunderstand the World. Cambridge, MA: MIT Press.
Carr, Nicholas. 2020. The Shallows: What the Internet is Doing to Our Brains. New York: Norton.
Dokulil, Miloš. 2020. Kurt Gödel’s Religious Worldview: An Immanent Personal Conception. Journal of Interdisciplinary Studies XXXII (1/2): 95-118.
Eagleman, David & Anthony Brandt. 2018. The Runaway Species: How Human Creativity Remakes the World. New York: Catapult.
Gruenwald, Oskar. 1990. Christianity and Science: Toward a New Episteme of Charity. Journal of Interdisciplinary Studies II (1/2): 1-21.
_____. 2007. The Teleological Imperative. Journal of Interdisciplinary Studies XIX (1/2): 1-18.
_____. 2013. The Dystopian Imagination: The Challenge of Techno-Utopia. Journal of Interdisciplinary Studies XXV (1/2): 1-38.
_____. 2016. The Postmodern Challenge: In Search of Normative Standards. Journal of Interdisciplinary Studies XXVIII (1/2): 1-18.
Head, Simon. 2014. Mindless: Why Smarter Machines are Making Dumber Humans. New York: Basic Books.
Hollis, Daniel W. III. 2020. The Paradox of Kurt Gödel: A Response. Journal of Interdisciplinary Studies XXXII (1/2): 119-36.
Kempton, Nicole & Nan Richardson, eds. 2009. Laogai: The Machinery of Repression in China. Washington, DC: Umbrage Editions.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
Lundberg, Bruce N. 2020. The Virtues of Leonhard Euler: Ethics, Mathematics and Thriving in a Digital Era. Journal of Interdisciplinary Studies XXXII (1/2): 58-80.
Martin, Michael. 2005. Meditations on Blade Runner. Journal of Interdisciplinary Studies XVII (1/2): 105-22.
McLuhan, Marshall, Quentin Fiore & Jerome Agel. 1967. The Medium is the Massage: An Inventory of Effects. New York: Bantam/Penguin.
Meagher, Michael E. 2020. The Challenge of Distance Learning: An Educator’s Journey. Journal of Interdisciplinary Studies XXXII (1/2): 137-52.
Newport, Cal. 2019. Digital Minimalism: Choosing a Focused Life in a Noisy World. New York: Portfolio/Penguin.
Pontifical Council for Social Communications. 2002. Ethics in Internet. Rome: The Vatican.
Russell, Stuart. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking Press.
Snow, C. P. 2012. The Two Cultures. Cambridge, UK: Cambridge University Press [1959].
Sutherland, Corine S. 2020. Isaac Asimov’s Rules for Humans: Ethics and Online-Learning. Journal of Interdisciplinary Studies XXXII (1/2): 39-57.
Topf, Daniel. 2020. “Useless Class” or Uniquely Human? The Challenge of Artificial Intelligence. Journal of Interdisciplinary Studies XXXII (1/2): 17-38.
Toyama, Kentaro. 2014. Geek Heresy: Rescuing Social Change from the Cult of Technology. New York: Public Affairs.
Turkle, Sherry. 2017. Alone Together: Why We Expect More From Technologies and Less From Each Other. 3rd edition. New York: Basic Books.
Wolf, Maryanne. 2018. Reader, Come Home: The Reading Brain in a Digital World. New York: Harper.
Yina, Martin. 2020. The Challenges of Digital Technologies for Nigeria. Journal of Interdisciplinary Studies XXXII (1/2): 81-94.
Zimmerman, Jonathan. 2020. The Great Online-Learning Experiment. Chronicle of Higher Education (20 March): 40-41.
_____________________________________

Oskar Gruenwald, Ph.D., IIR-ICSA Co-Founder & Editor-in-Chief, Journal of Interdisciplinary Studies.