Blog 23: Can the internet help people live through a pandemic?

Contributed by Ronald Baecker and Judith Langer

Ron is an Emeritus Professor of Computer Science at the University of Toronto and co-author of The COVID-19 Solutions Guide.

Judith is the Vincent O’Leary Distinguished Professor Emeritus at the University at Albany, State University of New York, and co-author of The COVID-19 Solutions Guide.

On Sunday, April 19, while on one of my daily walks—which have helped keep me sane in what has been a COVID-19 life of no face-to-face contact with family and friends—I (Ron) asked myself the question: can the internet help people live through a pandemic? Of course, the internet can help, and in important ways.  Because I am an author, the answer immediately suggested a more difficult question.  Should I write a book that describes how digital technologies and being online are helping people, and how those who need more assistance can find the resources to get more help?

Over the next few days, I decided that I should write the book, and that it needed to be done quickly, and with lots of help from extraordinary collaborators.  The COVID-19 Solutions Guide is the result.  I will describe the process.

That same day, I asked my friend Judith Langer, a renowned scholar of literacy and a distinguished Professor Emeritus, to work on the book with me.  A partnership between an eclectic scientist/engineer/designer (me) and an accomplished humanist/scientist (Judith) would be a good start.  I began writing on Monday morning April 20; the outline I sketched then is close to the structure of the final book.  That evening, I was playing bridge online (another of my virus coping mechanisms) with a friend of 70 years since 1st grade in Pittsburgh—Dr. Gary Feldman—who had been in charge of public health for two California counties for 14 years.  By then I knew we needed a medical expert; Gary answered the call.  I also recruited my personal financial adviser, Justin Stein, whose expertise and sound financial advice seemed essential.  I soon realized that no publisher would meet my goal of publishing by June 1 (we did not make it, but getting the book written and online in 2 months is nonetheless ok), so we had to do it ourselves.  I therefore recruited the amazing Uma Kalkar, who recruited the equally amazing Ellie Burger, to handle production, publishing, marketing, and social media.

But what kind of a book?  I am both a scientist and an engineer: I like to understand phenomena and use that understanding to build innovative software.  Hence the book needed description (what was happening and why) and prescription (a guide to ways in which technology could help us cope, survive, and enjoy life as best we can).  And yet, going back to doing science, there needed to be evidence that the methods we would describe seemed to work.  There was no time to assemble hard evidence, but there needed to be at least anecdotal, narrative evidence that what we described really were solutions.  Hence the book has both scientific information, especially about medical issues, and stories of real people, stories told to us by trusted friends or reported publicly in reliable sources.

Citing cognitive theory, Judith suggests that a general suggestion or concept such as, “When in lockdown, find alternative ways to interact with others,” will be best understood, remembered, and acted upon when readers can easily relate it to experiences they have had. The example, or narrative, triggers their memories of similar experiences they and others have had.  This mental connection enables them to use their own funds of knowledge to think of possibilities they hadn’t previously considered. The connection of known-to-new enables them to interpret what the general rule means to them and, in the case of how to ease loneliness and anxiety, ways they can personally act upon it. The example makes the author’s suggestions meaningful, memorable, and helpful.

By then the title had revealed itself: The COVID-19 Solutions Guide.  Yet just writing a book seemed insufficient.  Multidimensional experiences are useful for understanding.  We therefore wanted a blog, which also gave us one way of updating our description of phenomena that were, and would continue to be, evolving rapidly.  Also, in amusing ourselves to stay sane, we invented a game that highlights the challenges of safe physical distancing and the opportunities to imagine creative virtual experiences.  We call this the COVID-19 Solutions Game.  We will release it on June 10 with the announcement of the first competition.  The COVID-19 Solutions Guide will be online by June 17.  Stay tuned, and follow us on Facebook, Instagram, and Twitter.

FOR THINKING AND DISCUSSION

What sources and resources have you found most useful for understanding what is going on (description) and how best to cope (prescription)?  You can send details to us here, so we can cite the best of these in a future blog post.

Blog 22: Pandemic models must be transparent and their creators must explain them publicly

Contributed by Ronald Baecker

Ron is an Emeritus Professor of Computer Science at the University of Toronto and co-author of The COVID-19 Solutions Guide.

A forecasting model is a prediction of how the world will evolve, of what will happen in the future with respect to some phenomenon (such as the motion of objects, the financial health of a business, or the spread of an epidemic).

Models are sometimes expressed as equations.  Newton’s Second Law of Motion, often stated as F = M × A, describes the relationship between the force F acting on an object, its mass M, and its acceleration A. It is useful. For example, it can predict how quickly a hockey puck weighing M ounces will accelerate towards the net when struck by the stick of a player applying a given force to the puck.
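As a minimal worked illustration (the puck mass is roughly right for a regulation puck, but the applied force is an invented number), the law can be rearranged to compute the acceleration directly:

```python
# Newton's Second Law: F = M * A, so A = F / M.
# The force value below is a hypothetical assumption for illustration.
OUNCES_TO_KG = 0.0283495

puck_mass_kg = 6 * OUNCES_TO_KG    # M: a 6-ounce puck is about 0.17 kg
applied_force_n = 100.0            # F: assumed force from the stick, in newtons

acceleration = applied_force_n / puck_mass_kg     # A, in metres per second squared
print(f"Acceleration: {acceleration:.0f} m/s^2")  # ~588 m/s^2 with these numbers
```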

Models can also be expressed as computer programs, and in spreadsheets, which allow assumptions to be expressed without writing a program.  These also are useful. For example, wise modern business owners build spreadsheets forecasting their profit-and-loss, cash flow, and balance sheets.  If the assumptions built into the model are good ones, the spreadsheet will help them know when they might need a loan, or when they might reach profitability.
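Such a spreadsheet can be sketched in a few lines of code.  In this minimal sketch, all of the starting figures (cash on hand, monthly revenue, growth rate, fixed costs) are invented assumptions, exactly the inputs a business owner would have to supply and defend:

```python
# A minimal cash-flow forecast in the spirit of a business spreadsheet.
# Every number here is a hypothetical assumption, not real data.
starting_cash = 10_000.0
monthly_revenue = 4_000.0
revenue_growth = 0.05          # assume revenue grows 5% month over month
monthly_fixed_costs = 6_000.0

cash = starting_cash
for month in range(1, 13):
    profit = monthly_revenue - monthly_fixed_costs
    cash += profit
    status = "may need a loan" if cash < 0 else "solvent"
    print(f"Month {month:2d}: profit={profit:8.0f}  cash={cash:9.0f}  ({status})")
    monthly_revenue *= 1 + revenue_growth   # apply the growth assumption
```

Running the forecast month by month shows when cumulative losses would exhaust the starting cash (a loan would be needed) and when monthly profit finally turns positive, provided the assumptions hold.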

The word “might” here is key.  Newton’s Laws have been validated in countless ways over centuries.  They are guaranteed to hold true in all normal situations that we encounter as humans, even though they do not hold true at a very tiny scale, a world understood by quantum mechanics, or a very large scale, a world understood by theories of relativity.  Such certainty is not the case with a typical spreadsheet built by a business owner, who makes assumptions in the model that may or may not be valid.

Recently, models have become political weapons, to be used when convenient, to be ignored or hidden when not convenient.  For example, the American Civil Liberties Union (ACLU) in Idaho questioned the use of an Excel spreadsheet that was being used to justify cutting the level of Medicaid assistance given to individuals with developmental and intellectual disabilities. When asked about the logic embedded in the spreadsheet, Medicaid refused to disclose it, claiming it was a ‘trade secret’. A court granted an injunction against the cuts and ordered the formula made public. It soon became clear that the spreadsheet contained numerous errors. A 2015 class action suit against the state of Idaho is still being deliberated by the Idaho Supreme Court (for the second time).

Pandemic forecasting models guide the life-and-death decisions about how quickly physical distancing rules or guidelines should be relaxed by various jurisdictions.  They also have become weapons.  A recent case occurred in the US state of Arizona.  On May 5, Donald Trump visited Arizona.  On May 6, hours after the state’s governor relaxed Stay at Home restrictions, the Arizona Department of Health Services shut down a project in which approximately ten University of Arizona researchers had been developing an Arizona-specific model to guide public policy with respect to such restrictions.  One rationale given was that the model was not needed, because the U.S. federal agency FEMA (Federal Emergency Management Agency) had its own model. That decision was rescinded two days later after a public outcry.

The problem is that the FEMA model and the algorithm that determines its predictions are secret.  Nobody from the media, the medical establishment, or the public can examine the assumptions used to derive its conclusions, conclusions that guide policy and that should help make that policy wise.  This is dangerous: a clear example of science and mathematics being used for political ends.  See here, here, and here to learn more about this topic.

FOR THINKING AND DISCUSSION

How should society engage with the creators of pandemic models and the assumptions that animate them?

Blog 21: COVID-19: Computer scientists and CS students can act proactively for good

Readers of my blog will recall what I describe as digital dreams and digital nightmares.

Our world has been enriched by digital technologies used for collaboration, learning, health, politics, and commerce. Digital pioneers imagined giving humanity greater control over the universe; augmenting knowledge and creativity; replacing difficult and dangerous physical labour with robot efforts; improving our life span with computationally supported medicine; supporting free speech with enhanced internet reason and dialogue; and developing innovative, convenient, and ideally safe products and services.  Online apps and resources are proving very valuable, even essential, in the era of COVID-19.

Yet there is much that is troubling. We depend upon software that nobody truly understands and that is vulnerable to hackers and cyberterrorism. Privacy has been overrun by governments and surveillance capitalism. There are signs of totalitarian control that go way beyond those envisioned by the Panopticon and 1984. The internet floods us daily with news tailored to match our opinions and prejudices, leaving us increasingly unable to tell what is true and what is false. Our children are addicted to their devices. We have become workaholics. Jobs and livelihoods are being demolished without adequate social safety nets. A few digital leviathans threaten to control not only their own domains, but all commerce. Finally, there is enormous hype associated with modern artificial intelligence, resulting in serious risks to society stemming from premature use of AI software.

Yet there are many ways in which computer scientists and digital media professionals can do good rather than evil.  Even students can make a difference.  An example, in this era of COVID-19, is grade 12 Toronto high school student Adam Gurbin.

In January, Adam founded and now leads Canada’s first high school-based e-NABLE Chapter (e-NABLE Toronto), allowing over 20 of his fellow students to become part of a global humanitarian network that uses 3D printing technology to create mechanical prosthetics for children and adults in need. He organised and trained many of the students in 3D printing. He also led fundraising and social media marketing initiatives.

Also, to help fight COVID-19, he has recently dedicated his self-made cryptocurrency mining rig (which includes GPUs) to run protein folding simulations.  Adam brought the Toronto e-NABLE team onboard to accelerate this research which is being aided by a distributed network of computing power — Folding@Home — from close to 1,000,000 participants worldwide.

Most recently, after schools closed in March, Adam pivoted his 3D printing. He and other e-NABLE volunteers are now working with teams from the University of Toronto and McMaster University to create 3D-printed face shields/frames to send to frontline workers.  Adam has printed and sent out over 200 face shields from his home in the past two weeks, devices that are now being used in Toronto hospitals.  After discovering that some face shield designs were too big for some printers, and noting that they ideally need to be stackable, Adam is now working on a new design using two pieces that snap together with a sufficiently strong mechanism. Stay tuned!

How does he feel? A shy but articulate young man, Adam Gurbin is “just happy to be able to use his talents to help people.”

FOR THINKING AND DISCUSSION

If you are a computer scientist or digital tech professional or student, or you have one as a relative or friend, consider how to make a difference, now, just as Adam has.

Blog 20: Digital collaboration technologies flourish during COVID-19

For most of human history, dyads and groups were only able to work and play together if they were collocated.  All of this changed in the 19th century, when the first remote collaboration and entertainment technologies — the telegraph, the telephone, and the radio — were developed and widely commercialized.  These were joined in the 20th century by television.  By the middle part of the century, medical images were being transmitted over phone lines; soon thereafter, 2-way television was being used for remote medical consultations.

Digital collaboration technologies have existed since then and have turbocharged collaborative work and play at a distance.  Research on computer-based learning and computer-aided instruction began at the University of Illinois and Stanford University in 1960 and 1962. In 1968, in “The Mother of All Demos”, Douglas Engelbart at the Stanford Research Institute in Menlo Park, California, used a closed-circuit television link to show attendees at a computer conference in San Francisco mind-boggling ways to use computers in collaborative document creation.  In 1969, the first message was sent between two universities on the ARPANET, the research prototype for the internet.  Soon collaborative work could also be done by individuals located anywhere in the world over the internet.  By the mid-1970s, collaborative gaming began over both local-area (within a room or building) and wide-area networks such as the internet.  By the mid-1980s, computer matchmaking services were supplemented by online dating services operating over the internet.

Yet entertainment, medicine, learning, meetings, gaming, and intimacy were always better face-to-face.  Recently, however, COVID-19 has devastated country after country and has made physical contact and adjacency impossible. 

People will not be allowed to gather in performance spaces such as sports arenas and concert halls for a long time.  Yet we are now almost overwhelmed with what is available online: historic sports events, popular music, symphonies, and plays.  One of the best examples is New York City’s Metropolitan Opera, which has leveraged its long history of artistic excellence and technological innovation by streaming one opera every day.

The profession of medicine adopted telemedicine slowly over the last 70 years.  Yet there are signs of rapid innovation in the face of COVID-19, given the advantages of consultation at a distance, or even remote testing, in cases where either the patient or the health care provider has been diagnosed with the virus, or may be a carrier, already infected but not yet exhibiting symptoms.

More than 1.5 billion students across the globe have been evicted from school by the virus.  University students have been affected the least, because almost all are equipped with technology and are accustomed to learning independently.  The effects are most severe for the youngest school children, especially in homes with little technology, or where one computer must be shared by several children as well as adults working from home.

Business meetings have changed profoundly, but in ways that were predictable from procedures used by large distributed corporations for many years.  Attendees gather in a virtual meeting space, in which they can hear and see one another, and share documents that are the focal point of discussions.  The industry was ready to expand, especially Zoom, whose first-time installations have grown by a factor of more than 8 during the past 5 weeks.

Online gaming has also boomed, with people seeking community, fun, mental stimulation, an outlet for their creativity, and solace.

Finally, computer dating sites have flourished. Even more interesting are the efforts and imagination that couples and families have applied to growing, or holding steady, as a couple or a family that can no longer see, touch, or hold one another.  Good examples are virtual dinners, cocktail parties, and birthday celebrations; watching a film or listening to a jazz trio together; dancing in synchrony while in different places, sharing music via a meeting site such as Zoom or Skype; or even phone sex for couples.

FOR THINKING AND DISCUSSION

Will life ever be the same again?  How will entertainment, medicine, learning, business meetings, gaming, and intimacy differ 2 years after the end of the pandemic as compared to how they were 2 years before its start?

Blog 19: COVID-19 information and misinformation web portal for Canadians

Contributed by Dr. Anatoliy Gruzd
Anatoliy is the Director of Research of the Social Media Lab at Ryerson University.

“In the face of an unprecedented global and national health emergency, today we are announcing the launch of a new COVID-19 web portal for Canadians. Visit the portal at: covid19misinfo.org.

The web portal is a rapid response project of the Ryerson Social Media Lab at the Ted Rogers School of Management. The aim of this project is two-fold: (1) to put a spotlight on COVID-19 related misinformation and (2) to provide Canadians with timely and actionable information that we all can use to protect ourselves and our communities.

Our team of computational social scientists, communications professionals and developers are hard at work curating trusted sources about COVID-19 and developing real-time information visualization dashboards to keep track of false claims related to the spread of the virus from around the web.

The web portal features a wide range of resources that might be helpful to you and your family right now and in the weeks to come. You can use it to:

The web portal is part of a new two-year research initiative funded by the Government of Canada. The initiative, Inoculating Against an Infodemic: Microlearning Interventions to Address CoV Misinformation, is a collaboration between researchers at Ryerson University and Royal Roads University.”

FOR THINKING AND DISCUSSION

To get a feeling for the seriousness of this problem worldwide, see recent articles from The New York Times on virus misinformation and the emerging role of China and Russia.  How can you contribute to fighting the spread of misinformation about COVID-19?

Blog 18: Censored contagion on Chinese social media

Contributed by Masashi Crete-Nishihata
Masashi is the Associate Director of The Citizen Lab at the University of Toronto.

The Citizen Lab just published a report: Censored Contagion: How Information on the Coronavirus is Managed on Chinese Social Media, authored by Lotus Ruan, Jeffrey Knockel and Masashi Crete-Nishihata.  

Among the key findings in this report, we show that YY, a popular live-stream platform based in China, began to censor keywords related to the coronavirus outbreak on December 31, 2019, only one day after doctors (including the late Dr. Li Wenliang) tried to warn the public about the then unknown virus. 

By reverse engineering the YY application, we found that keywords like “武汉不明肺炎” (Unknown Wuhan Pneumonia) and “武汉海鲜市场” (Wuhan Seafood Market) began to be censored on YY weeks before central authorities publicly acknowledged the outbreak and prior to the virus even being named.

Our experiments also found that another popular social media platform, WeChat, has been broadly censoring coronavirus-related content, including criticism of the government and references to Dr. Li Wenliang, beginning in January 2020 and expanding substantially through February. Between January 1 and 31, 2020, we found that 132 keyword combinations were censored on WeChat. The number increased to 384 in a two-week testing window between February 1 and 15.
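To make concrete what keyword-combination censorship means, here is a minimal sketch.  The combinations and messages below are invented placeholders, not entries from the actual blocklists documented in the report, and the rule assumed is that a message is blocked when it contains every keyword in at least one censored combination:

```python
# Minimal sketch of keyword-combination filtering (illustrative data only).
# Assumed rule: a message is censored if it contains all keywords of at
# least one blocked combination.
BLOCKED_COMBINATIONS = [
    ("武汉", "不明肺炎"),        # hypothetical combination
    ("李文亮", "言论自由"),      # hypothetical combination
]

def is_censored(message: str) -> bool:
    return any(all(keyword in message for keyword in combo)
               for combo in BLOCKED_COMBINATIONS)

print(is_censored("武汉出现不明肺炎病例"))  # True: both keywords appear
print(is_censored("今天天气很好"))          # False: no combination matches
```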

Social media plays a major role in Chinese society and in particular among the Chinese medical community. Although many social media platforms have wrestled with how to combat misinformation and disinformation about COVID-19, our research shows China’s social media platforms were either directly instructed or under pressure to block a much broader range of content, including criticism of the government’s handling of the outbreak. 

We also find that censorship on both platforms lacks transparency and accountability: there are no public details about censorship regulations, and users are not notified if a message containing sensitive keywords is censored from their chat conversations. Furthermore, our discovery of keyword filtering on YY before the virus was even named strongly suggests that at least one social platform in China received government directives to censor content at early stages of the outbreak.

This type of systematic censorship of social media communications about disease information and prevention harms the ability of the public to share information that may be essential to their health and safety.

You can read the full report (including details on our methods and links to our keyword database) here.

FOR THINKING AND DISCUSSION

Assume you are having dinner tonight with Xi Jinping, President of the People’s Republic of China.  What arguments would you make to convince him that such censorship does not benefit China, and ultimately damages the country?

Blog 17: Social credit

Nosedive was the first episode of the third season of the British science fiction television anthology Black Mirror.  In this episode, everyone has a mobile phone which, when pointed at another person, reveals his or her name and rating. Everyone has a rating, which ranges from 0 to 5. The following happens continually as you walk down a street or along the corridor of a building: you give a ‘thumbs up’ or ‘thumbs down’ to each person you pass, based on your instantaneous impression of that person and the nature of the encounter, no matter how trivial or quick the encounter is. A ‘thumbs up’ raises that person’s rating a tiny bit; a ‘thumbs down’ lowers it. The other person concurrently rates you. Ratings determine one’s status in life, and the ability to get perks such as housing and travel. Therefore, people are on a never-ending, stressful, and soul-destroying quest to raise their online ratings for real-life rewards. Heroine Lacie desires a better apartment; she has a meltdown as she deals with insurmountable pressure in the context of her childhood best friend’s wedding.

Interestingly, or, more accurately, chillingly, the Chinese government is introducing a Social Credit System that goes far beyond that envisioned in Nosedive.  Aspects of the ultimate system have been tested regionally since 2009, and nationally since 2014.  Various sources indicate different goals of the system, including fighting corruption and business fraud, regulating social behaviour, enhancing citizen ‘trustworthiness’, and improving public trust.

Each citizen is given a score. It goes up with good deeds, such as donating blood, donating to charity, or doing volunteer work. It goes down with bad deeds, such as jaywalking, parking illegally, not turning up for a restaurant booking, not visiting one’s parents often enough, not sorting one’s personal waste properly, fraudulently using other people’s travel ID cards, and behaving fraudulently in financial matters.

Implications of low scores include being blacklisted and not being able to take long-distance planes or trains, being relegated to slow trains, or not being able to attend private schools or universities.  As of March 2019, over 13 million people were on blacklists.  People with high social credit scores wait less time at hospitals and government agencies, get discounts at hotels, can get free health check-ups, and have a better chance to get good jobs.

The system is implemented with a vast programme of video and other surveillance, including advanced face recognition, big data processing, and AI.  Interestingly, it is called Skynet, the same name that was applied to the evil superintelligence network in the Terminator movie franchise.  The social credit system was originally supposed to be rolled out in 2020, but it is way behind schedule.

An interesting variant of the system being used now in some Chinese cities uses big data to draw automated conclusions about whether an individual is a Coronavirus health risk, and creates a green, yellow, or red QR code on a person’s phone.  Police and subway guards force individuals to show what is on their phone.  People showing green can move around freely.  A yellow QR code results in being asked to stay home for a week.  A red code results in being quarantined for 2 weeks.
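The decision rule described above is simple enough to sketch.  The mapping below is a hypothetical illustration of the reported colour-to-restriction logic, not the actual system’s code:

```python
# Hypothetical sketch of the reported colour-to-restriction mapping.
RESTRICTIONS = {
    "green": "may move around freely",
    "yellow": "asked to stay home for one week",
    "red": "quarantined for two weeks",
}

def restriction_for(qr_colour: str) -> str:
    # Unknown codes are flagged rather than silently allowed through.
    return RESTRICTIONS.get(qr_colour.lower(), "unrecognized code")

print(restriction_for("yellow"))  # asked to stay home for one week
```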

FOR THINKING AND DISCUSSION

There are many aspects of this that are troubling, and much to discuss.  To what extent would it be ok if the ratings were correct, and, in the latter situation, could slow spread of the virus?  What recourse does one have in case ratings are incorrect, perhaps based on an erroneous report? How does one get off a blacklist?  To what extent are we moving towards a society in which ratings will be visible to all, as for example if the measure of contagion risk were displayed on an electronic armband, sort of a modern Scarlet Letter?  To what extent are we moving towards a society in which nothing is unimportant, in which the most trivial of actions or appearances or glances can reduce your status in life?

Blog 16: Intelligent tutors

In this column, in my textbook, and in a speech “What Society Must Require from AI” that I am currently giving around the world, I document some of the hype, exaggerated claims, and unrealistic predictions that workers in the field of artificial intelligence (AI) have been making for over 50 years.  Here are some examples.  Herb Simon, an AI pioneer at Carnegie-Mellon University (CMU), who later won a Nobel Prize in Economics, predicted in 1958 that a computer program would be world chess champion by 1967.  Marvin Minsky of MIT and Ray Kurzweil, both AI pioneers, made absurd predictions (in 1967 and 2005) that AI would achieve general human intelligence by 1980 and by 2045, respectively.  John Anderson, discussed below, made the absurd prediction in 1985 that it was already feasible to build computer systems “as effective as intelligent human tutors”.  IBM has recently made numerous false claims about the effectiveness of its Watson technology for domains as diverse as customer support, tax filing, and oncology.

I am particularly interested in the use of computers in education.  I have watched and participated in computer innovations for education since I worked with Seymour Papert and Wally Feurzeig on the first version of the LOGO language in 1966, and since I taught a course focusing on social issues raised by technology in education in 1972.

The field of intelligent tutoring is an exciting area of AI research.  The field was pioneered by John Anderson and his collaborators at CMU in the 1980s.  However, work has progressed slowly, because of difficulties in specialized topics like user modelling, that is, understanding what a student knows, what misconceptions he or she may have, and how he or she derives an answer to a question.  The biggest successes have been in teaching subjects such as mathematics, where answers and methods of reasoning are well-defined.  There have been few other successes.

This past week, I participated in a day-long seminar at the UNESCO Mahatma Gandhi Institute of Education for Peace and Sustainable Development (MGIEP).  The topic was the use of AI for teaching social and emotional learning, which they define as comprising empathy, mindfulness, compassion, and critical inquiry (EMC2).  EMC2 is a wonderful idea, but I argued that AI could not yet play a fundamental role in such teaching because of the following serious problems:

1. It is often unclear whether one is communicating with a person or an artificial agent.

2. AIs are often incompetent, unreliable, and inconsistent.

3. AIs have no common sense and no intuition.

4. AI decisions and actions, especially those of machine learning, are not transparent and cannot be understood.

5.  Decisions and actions are often biased and unfair.

6. AIs exercise no discretion or good judgment in deciding what to say to people and when to say it.

7. We have no reasonable way of assigning and enforcing accountability and responsibility for algorithmic decisions and actions.

8. Finally, we use AIs even though we do not trust them.

The temptation to view AI as a near-term solution for educational systems that have insufficient budgets and resources manifests itself throughout the globe.  For example, in my home province of Ontario, where conservative governments are typically at odds with teachers’ unions over issues including salaries and benefits, the current government has in the past year discussed allowing high school students to do all their work online and making e-learning courses a requirement for high school students, with the goals of slashing education budgets and raising average class sizes to 35.

FOR THINKING AND DISCUSSION

Should we trust education in empathy, compassion, and critical thinking, or for that matter history or literature, to robot teachers that are not competent, reliable, patient, empathic, sensitive, and wise?  Does the answer change in venues such as India, where the student-teacher ratio in rural schools is often as high as 80?

Blog 15: The age of surveillance capitalism

There is still time to buy a substantive book for the thoughtful techie or concerned citizen in your life.  Allow me to recommend two choices that were published in 2019.  One good option is my wide-ranging textbook Computers and Society: Modern Perspectives, enough said ….  But an unbiased choice is Shoshana Zuboff’s monumental The Age of Surveillance Capitalism.  The author signals her intentions with the book’s subtitle: The Fight for a Human Future at the New Frontier of Power.

Zuboff, the Charles Edward Wilson Professor Emerita, Harvard Business School, defines and describes surveillance capitalism (p. 8):

Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioural data.  Although some … data are applied to product or service improvement, the rest are declared as proprietary behavioural surplus, fed into manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, or later.  Finally, these prediction products are traded in a new kind of marketplace for behavioral predictions that I call behavioral futures markets.  Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are eager to lay bets on our future behaviour.

… Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening … in order to nudge, coax, tune, and herd behavior toward profitable outcomes.  Competitive pressures produced this shift, in which automated machine processes not only know our behavior, but also shape our behavior at scale.  With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. … the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’  In this way, surveillance capitalism births a new species of power that I shall call instrumentarianism.  Instrumentarian power knows and shapes human behavior toward others’ ends.  Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of “smart” networked devices, things, and spaces.”

Zuboff discusses how Google invented and perfected surveillance capitalism, and how it has been adopted by others such as Facebook.  She states that the threat of a totalitarian Big Brother has been supplanted by a “Big Other” with unprecedented knowledge and power, free from effective democratic oversight.

Stressing that “… surveillance capitalism is a logic in action and not a technology…” (p. 15), she states that “… surveillance capitalists asserted their right to invade at will, usurping individual decision rights in favor of unilateral surveillance and the self-authorized extraction of human experience for others’ profit.” (p. 19).  “Much of this … is accomplished under the banner of ‘personalization’, a camouflage for aggressive extraction operations that mine the intimate depths of everyday life”, she notes (p. 19).  In response to this, we seem helpless, victims of “… a psychic numbing that inures us to the realities of being tracked, parsed, trained, and modified.” (p. 11)

Zuboff proposes: “Only ‘we the people’ can reverse the course, first by naming the unprecedented, then by mobilizing new forms of collaborative action: the crucial friction that reasserts the primacy of a flourishing human future as the foundation of our information civilization.” (p. 21). If one can make any criticism about this landmark work, it is that the collective action that she proposes is not described.

For that person in your life who wants not just a dose from a fire hose but total immersion, may I suggest that you also purchase Brett Frischmann and Evan Selinger’s thoughtful and imaginative Re-Engineering Humanity.  Happy holidays to all, and may next year be better than this one!

FOR THINKING AND DISCUSSION

How are the surveillance capitalist approaches of Google, Facebook, and Amazon similar or different?  How are Zuboff and Frischmann/Selinger’s theories complementary?

Blog 14: Ethics throughout a Computer Science curriculum

Every Computer Science student should get significant exposure to the social, political, legal, and ethical issues raised by the accelerating progress in the development and use of digital technologies.

The standard approach is to offer one undergraduate course, typically called Computers and Society or Computer Ethics.  I have done this during the current term at Columbia University, using my new textbook, Computers and Society: Modern Perspectives (OUP, 2019).  We meet twice a week for 75 minutes.  In class, I present key topics covered in the book, and welcome a number of guest speakers who present their own experiences and points of view.  Every class is interactive, as I try to get the students to express their own ideas.  There have been four assignments: a policy brief, a book report, a debate, and a research paper.  Such courses are typically not required by major research universities, which is a mistake, but they are often required by liberal arts colleges.

An imaginative approach, but one that is rarely used, is to introduce key issues by the reading and viewing of science fiction novels or stories or films.  This has been done at over a dozen universities (both those with significant research activities and those lacking them) and colleges in the U.S.  Both faculty and students find the material engaging and an effective vehicle for discussing ethical issues raised by computers, robots, and artificial intelligence software.

Recently, under the leadership of Computer Science Prof. Barbara Grosz, in collaboration with Philosophy Professor Alison Simmons, Harvard has been developing an exciting alternative called Embedded Ethics.  The web site asserts:

“Ethical reasoning is an essential skill for today’s computer scientists. The Embedded EthiCS distributed pedagogy embeds philosophers directly into computer science courses to teach students how to think through the ethical and social implications of their work”.

Each year, an increasing number of Harvard’s undergraduate CS courses have embedded into them one lecture per term discussing an ethical issue relevant to the course, such as data bias in a machine learning course, fake news in a networks course, and the need for accessible interfaces in a human-computer interaction course.  Material is presented by a philosophy teaching fellow or graduate student after consultation with the instructor.  A follow-up homework question or exercise is assigned to the students.  No particular ethical framework is stressed; an approach is chosen that seems best for each specific topic.  The program began in 2017; by 2019, 14 courses had been equipped with the content to deliver one class dealing with ethics.  The goal is to equip all their courses within the next few years.

The results have been uniformly positive.  Students are engaged, with many “expressing eagerness for more exposure to ethics content and more opportunities to develop skills in ethical reasoning and communication”.  A major strength of the program is that it keeps the importance of ethics at the forefront throughout the curriculum.

Prof. Grosz reports that a number of other universities are considering adopting the program.  Challenges that will be faced include identifying champions both in Computer Science and Philosophy, obtaining sufficient buy-in from faculty who are willing to devote one class per term to the activity, and the costs of developing the material for each local context.

FOR THINKING AND DISCUSSION

What are the advantages and disadvantages of each of the three approaches discussed in this post?  One way of thinking about this is in terms of stakeholders, e.g., students interested in this material, students not interested, faculty believers, faculty disbelievers, the university, and the public at large.