Blog 23: Can the internet help people live through a pandemic?

Contributed by Ronald Baecker and Judith Langer

Ron is an Emeritus Professor of Computer Science at the University of Toronto and co-author of The COVID-19 Solutions Guide.

Judith is the Vincent O’Leary Distinguished Professor Emeritus at the University at Albany, State University of New York, and co-author of The COVID-19 Solutions Guide.

On Sunday, April 19, while on one of my daily walks—which have helped keep me sane in what has been a COVID-19 life of no face-to-face contact with family and friends—I (Ron) asked myself the question: can the internet help people live through a pandemic? Of course, the internet can help, and in important ways.  Because I am an author, the answer immediately suggested a more difficult question.  Should I write a book that describes how digital technologies and being online are helping people, and how those who need more assistance can find the resources to get more help?

Over the next few days, I decided that I should write the book, and that it needed to be done quickly, and with lots of help from extraordinary collaborators.  The COVID-19 Solutions Guide is the result.  I will describe the process.

That same day, I asked my friend Judith Langer, a renowned scholar of literacy and a Distinguished Professor Emeritus, to work on the book with me.  A partnership between an eclectic scientist/engineer/designer (me) and an accomplished humanist/scientist (Judith) seemed a good start.  I began writing on Monday morning, April 20; the outline I sketched then is close to the structure of the final book.  That evening, I was playing bridge online (another of my virus coping mechanisms) with a friend of 70 years, since 1st grade in Pittsburgh—Dr. Gary Feldman—who had been in charge of public health for two California counties for 14 years.  By then I knew we needed a medical expert; Gary answered the call.  I also recruited my personal financial adviser, Justin Stein, whose expertise and sound financial advice seemed essential.  I soon realized that no publisher would meet my goal of publishing by June 1 (we did not make it, but getting the book written and online in two months is nonetheless ok), so we had to do it ourselves.  I therefore recruited the amazing Uma Kalkar, who recruited the equally amazing Ellie Burger, to handle production, publishing, marketing, and social media.

But what kind of a book?  I am both a scientist and an engineer: I like to understand phenomena and use that understanding to build innovative software.  Hence the book needed description — what was happening and why — and prescription — a guide to ways in which technology could help us cope, survive, and enjoy life as best we can.  And yet, returning to the science, there needed to be evidence that the methods we would describe actually worked.  There was no time to assemble hard evidence, but there needed to be at least anecdotal, narrative evidence that what we were describing really were solutions.  Hence the book contains both scientific information, especially about medical issues, and also stories of real people, stories told to us by trusted friends or reported publicly in reliable sources.

Citing cognitive theory, Judith suggests that a general suggestion or concept such as, “When in lockdown, find alternative ways to interact with others,” will be best understood, remembered, and acted upon when readers can easily relate it to experiences they have had. The example, or narrative, triggers their memories of similar experiences they and others have had.  This mental connection enables them to use their own funds of knowledge to think of possibilities they hadn’t previously considered. The connection of known-to-new enables them to interpret what the general rule means to them and, in the case of how to ease loneliness and anxiety, ways they can personally act upon it. The example makes the author’s suggestions meaningful, memorable, and helpful.

By then the title had revealed itself: The COVID-19 Solutions Guide.  Yet just writing a book seemed insufficient.  Multidimensional experiences are useful for understanding.  We therefore wanted a blog, which also gave us one method of updating our description of phenomena that were and would continue to evolve rapidly.  Also, in amusing ourselves to stay sane, we invented a game that highlights the challenges of safe physical distancing and the opportunities to imagine creative virtual experiences.  We call this the COVID-19 Solutions Game.  We will release it on June 10 with the announcement of the first competition.  The COVID-19 Solutions Guide will be online by June 17.  Stay tuned, and follow us on Facebook, Instagram, and Twitter.


What sources and resources have you found the most useful for understanding what is going on (description) and how best to cope (prescription)?  You can send details to us here, so we can cite the best of these in a future blog post.

Blog 22: Pandemic models must be transparent and their creators must explain them publicly

Contributed by Ronald Baecker

Ron is an Emeritus Professor of Computer Science at the University of Toronto and co-author of The COVID-19 Solutions Guide.

A forecasting model is a prediction of how the world will evolve: what will happen in the future with respect to some phenomenon, such as the motion of objects, the financial health of a business, or the spread of an epidemic.

Models are sometimes expressed as equations.  Newton’s Second Law of Motion, often stated as F = M × A, describes the relationship between the force F acting on an object, its mass M, and its acceleration A. It is useful. For example, it can predict how quickly a hockey puck of mass M will accelerate toward the net when struck by the stick of a player whose strength results in a certain force being applied to the puck.
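As a minimal sketch (the force and mass figures below are invented for illustration, not measurements), the law turns directly into a one-line prediction:

```python
# Newton's Second Law as a forecasting model: F = M * A, so A = F / M.

def acceleration(force_newtons: float, mass_kg: float) -> float:
    """Predict the acceleration (m/s^2) of an object from the force applied to it."""
    return force_newtons / mass_kg

# A regulation hockey puck has a mass of about 0.17 kg; suppose a slapshot
# applies an average force of 100 N during the instant of contact:
print(f"{acceleration(100.0, 0.17):.0f} m/s^2")
```

Unlike the spreadsheet models discussed below, nothing in this computation is an uncertain assumption except the inputs themselves.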

Models can also be expressed as computer programs, or in spreadsheets, which allow assumptions to be expressed without writing a program.  These also are useful. For example, wise modern business owners build spreadsheets forecasting their profit-and-loss statements, cash flow, and balance sheets.  If the assumptions built into the model are good ones, the spreadsheet will help them know when they might need a loan, or when they might reach profitability.
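Here is a toy version of such a forecast, written as a short program rather than a spreadsheet. Every input is a hypothetical assumption (the dollar figures and growth rate are invented), exactly as they would be in a business owner's spreadsheet:

```python
# A toy monthly cash-flow forecast. Every input below is an assumption
# that may or may not hold in the real world.

def forecast(starting_cash, monthly_revenue, growth, monthly_costs, horizon=24):
    """Return (month, event) for the first loan-needed or profitability event."""
    cash, revenue = starting_cash, monthly_revenue
    for month in range(1, horizon + 1):
        cash += revenue - monthly_costs
        if cash < 0:
            return month, "loan needed"      # cash ran out before break-even
        if revenue > monthly_costs:
            return month, "profitable"       # monthly revenue now covers costs
        revenue *= 1 + growth                # assumed month-over-month growth
    return horizon, "no event"

# Assumptions: $10k opening cash, $4k revenue growing 5%/month, $5.5k fixed costs.
print(forecast(10_000, 4_000, 0.05, 5_500))
```

Change the assumed growth rate from 5% to 2% and the forecast flips from eventual profitability to an eventual need for a loan, which is exactly why the word "might" matters.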

The word “might” here is key.  Newton’s Laws have been validated in countless ways over centuries.  They are guaranteed to hold true in all normal situations that we encounter as humans, even though they do not hold true at a very tiny scale, a world understood by quantum mechanics, or a very large scale, a world understood by theories of relativity.  Such certainty is not the case with a typical spreadsheet built by a business owner, who makes assumptions in the model that may or may not be valid.

Recently, models have become political weapons, to be used when convenient and to be ignored or hidden when not.  For example, the American Civil Liberties Union (ACLU) in Idaho questioned an Excel spreadsheet that was being used to justify cutting the level of Medicaid assistance given to individuals with developmental and intellectual disabilities. When asked about the logic embedded in the spreadsheet, Medicaid refused to disclose it, claiming it was a ‘trade secret’. A court granted an injunction against the cuts and ordered the formula made public. It soon became clear that the spreadsheet contained numerous errors. A 2015 class action suit against the state of Idaho is still being deliberated by the Idaho Supreme Court (for the second time).

Pandemic forecasting models guide the life-and-death decisions about how quickly physical distancing rules or guidelines should be relaxed by various jurisdictions.  They, too, have become weapons.  A recent case occurred in the US state of Arizona.  On May 5, Donald Trump visited Arizona.  On May 6, hours after the state’s governor relaxed Stay at Home restrictions, the Arizona Department of Health Services shut down a project in which approximately ten University of Arizona researchers had been developing an Arizona-specific model to guide public policy with respect to such restrictions.  One rationale given was that the model was not needed, because the U.S. federal agency FEMA (the Federal Emergency Management Agency) had its own model. That decision was rescinded two days later after a public outcry.

The problem is that the FEMA model and the algorithm that determines its predictions are secret.  Nobody from the media, from the medical establishment, or from the public can examine the assumptions used to derive its conclusions, to guide policy, and to help that policy be wise.  This is dangerous — a good example of science and mathematics being used for political ends.  See here, here, and here to learn more about this topic.
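To make concrete why hidden assumptions matter, consider a toy SIR (Susceptible-Infected-Recovered) epidemic model. This is a textbook illustration, not FEMA's or any agency's actual model; the transmission and recovery rates are assumed parameters, and a modest change in the assumed transmission rate changes the predicted peak dramatically:

```python
# A toy discrete-time SIR epidemic model. The parameters beta (transmission
# rate) and gamma (recovery rate) are assumptions; this is purely illustrative.

def sir_peak_infected(beta, gamma=0.1, population=1_000_000, days=365):
    """Simulate daily SIR dynamics and return the peak number simultaneously infected."""
    s, i, r = population - 1.0, 1.0, 0.0   # start with one infected person
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# The same model, two assumed transmission rates:
for beta in (0.2, 0.3):
    print(f"beta={beta}: peak infected is roughly {sir_peak_infected(beta):,.0f}")
```

A decision-maker shown only the outputs has no way to know which assumed beta produced them, which is precisely why the assumptions, not just the forecasts, must be public.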


How should society engage with the creators of pandemic models and the assumptions that animate them?

Blog 21: COVID-19: Computer scientists and CS students can act proactively for good

Readers of my blog will recall what I describe as digital dreams and digital nightmares.

Our world has been enriched by digital technologies used for collaboration, learning, health, politics, and commerce. Digital pioneers imagined giving humanity greater control over the universe; augmenting knowledge and creativity; replacing difficult and dangerous physical labour with robot efforts; improving our life span with computationally supported medicine; supporting free speech with enhanced internet reason and dialogue; and developing innovative, convenient, and ideally safe products and services.  Online apps and resources are proving very valuable, even essential, in the era of COVID-19.

Yet there is much that is troubling. We depend upon software that nobody truly understands and that is vulnerable to hackers and cyberterrorism. Privacy has been overrun by governments and surveillance capitalism. There are signs of totalitarian control that go way beyond those envisioned by the Panopticon and 1984. The internet floods us daily with news tailored to match our opinions and prejudices, leaving us increasingly unable to tell what is true and what is false. Our children are addicted to their devices. We have become workaholics. Jobs and livelihoods are being demolished without adequate social safety nets. A few digital leviathans threaten to control not only their domains, but all commerce. Finally, there is huge hype associated with modern artificial intelligence, resulting in huge risks to society stemming from premature use of AI software.

Yet there are many ways in which computer scientists and digital media professionals can do good rather than evil.  Even students can make a difference.  An example, in this era of COVID-19, is Adam Gurbin, a grade 12 Toronto high school student.

In January, Adam founded and now leads Canada’s first high school-based e-NABLE Chapter (e-NABLE Toronto), allowing over 20 of his fellow students to become part of a global humanitarian network that uses 3D printing technology to create mechanical prosthetics for children and adults in need. He organised and trained many of the students in 3D printing. He also led fundraising and social media marketing initiatives.

Also, to help fight COVID-19, he has recently dedicated his self-made cryptocurrency mining rig (which includes GPUs) to run protein folding simulations.  Adam brought the Toronto e-NABLE team onboard to accelerate this research which is being aided by a distributed network of computing power — Folding@Home — from close to 1,000,000 participants worldwide.

Most recently, after schools closed in March, Adam pivoted his 3D printing. He and other e-NABLE volunteers are now working with teams from the University of Toronto and McMaster University to create 3D-printed face shields/frames to send to frontline workers.  Adam has printed and sent out over 200 face shields from his home in the past two weeks, devices that are now being used in Toronto hospitals.  After discovering that some face shield designs were too big for some printers, and noting that they ideally need to be stackable, Adam is now working on a new design using two pieces that snap together with a sufficiently strong mechanism. Stay tuned!

How does he feel? A shy but articulate young man, Adam Gurbin is “just happy to be able to use his talents to help people.”


If you are a computer scientist or digital tech professional or student, or you have one as a relative or friend, consider how to make a difference, now, just as Adam has.

Blog 16: Intelligent tutors

In this column, in my textbook, and in a speech “What Society Must Require from AI” I am currently giving around the world, I document some of the hype, exaggerated claims, and unrealistic predictions that workers in the field of artificial intelligence (AI) have been making for over 50 years.  Here are some examples.  Herb Simon, an AI pioneer at Carnegie Mellon University (CMU), who later won a Nobel Prize in Economics, predicted in 1958 that a program would be the world’s chess champion by 1967.   Marvin Minsky of MIT and Ray Kurzweil, both AI pioneers, made absurd predictions (in 1967 and 2005, respectively) that AI would achieve general human intelligence by 1980 and by 2045.  John Anderson, discussed below, made the absurd prediction in 1985 that it was already feasible to build computer systems “as effective as intelligent human tutors”.   IBM has recently made numerous false claims about the effectiveness of its Watson technology for domains as diverse as customer support, tax filing, and oncology.

I am particularly interested in the use of computers in education.  I have watched and participated in computer innovations for education since I worked with Seymour Papert and Wally Feurzeig on the first version of the LOGO language in 1966, and since I taught a course focusing on social issues raised by technology in education in 1972.

The field of intelligent tutoring is an exciting area of AI research.  The field was pioneered by John Anderson and his collaborators at CMU in the 1980s.  However, work has progressed slowly, because of difficulties in specialized topics like user modelling, that is, understanding what a student knows, what misconceptions he or she may have, and how he or she derives an answer to a question.  The biggest successes have been in teaching subjects such as mathematics, where answers and methods of reasoning are well-defined.  There have been few other successes.

This past week, I participated in a day-long seminar at the UNESCO Mahatma Gandhi Institute for Education in Peace and Sustainable Development (MGIEP).  The topic was the use of AI for teaching social and emotional learning, which they define as comprising empathy, mindfulness, compassion, and critical inquiry (EMC2).  EMC2 is a wonderful idea, but I argued that AI could not yet play a fundamental role in such teaching because of the following serious problems:

1. It is often unclear whether one is communicating with a person or an artificial agent.

2.  AIs are often incompetent, unreliable and inconsistent.

3. AIs have no common sense and no intuition.

4. AI decisions and actions, especially those of machine learning, are not transparent and cannot be understood.

5.  Decisions and actions are often biased and unfair.

6. AIs exercise no discretion or good judgment in deciding what to say to people and when to say it.

7. We have no reasonable way of assigning and enforcing accountability and responsibility for algorithmic decisions and actions.

8. Finally, we use AIs even though we do not trust them.

The temptation to view AI as a near-term solution for educational systems that have insufficient budgets and resources manifests itself throughout the globe.  For example, in my home province of Ontario, where conservative governments are typically at odds with teachers’ unions over issues including salaries and benefits, the current government has in the past year discussed allowing high school students to do all their work online and introducing e-learning courses as requirements for high school students, with the goals of slashing education budgets and raising average class sizes to 35.


Should we trust education in empathy and compassion and critical thinking, or for that matter history or literature, to robot teachers that are not competent, reliable, patient, empathic, sensitive, and wise?  Does the answer change in venues such as India, where the student-teacher ratio in rural schools is often as high as 80?

Blog 15: The age of surveillance capitalism

There is still time to buy a substantive book for the thoughtful techie or concerned citizen in your life.  Allow me to recommend two choices that were published in 2019.  One good option is my wide-ranging textbook Computers and Society: Modern Perspectives, enough said ….  But an unbiased choice is Shoshana Zuboff’s monumental The Age of Surveillance Capitalism.  The author signals her intentions with the book’s subtitle: The Fight for a Human Future at the New Frontier of Power.

Zuboff, the Charles Edward Wilson Professor Emerita, Harvard Business School, defines and describes surveillance capitalism (p. 8):

Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioural data.  Although some … data are applied to product or service improvement, the rest are declared as proprietary behavioural surplus, fed into manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, or later.  Finally, these prediction products are traded in a new kind of marketplace for behavioral predictions that I call behavioral future markets.  Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are eager to lay bets on our future behaviour.

… Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening … in order to nudge, coax, tune, and herd behavior toward profitable outcomes.  Competitive pressures produced this shift, in which automated machine processes not only know our behavior, but also shape our behavior at scale.  With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. … the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’  In this way, surveillance capitalism births a new species of power that I shall call instrumentarianism.  Instrumentarian power knows and shapes human behavior toward others’ ends.  Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of “smart” networked devices, things, and spaces.

Zuboff discusses how Google invented and perfected surveillance capitalism, and how it has been adopted by others such as Facebook.  She states that the threat of a totalitarian Big Brother has been supplanted by a “Big Other” with unprecedented knowledge and power, free from effective democratic oversight.

Stressing that “… surveillance capitalism is a logic in action and not a technology …” (p. 15), she states that “… surveillance capitalists asserted their right to invade at will, usurping individual decision rights in favor of unilateral surveillance and the self-authorized extraction of human experience for others’ profit.” (p. 19).  “Much of this … is accomplished under the banner of ‘personalization’, a camouflage for aggressive extraction operations that mine the intimate depths of everyday life”, she notes. (p. 19).  In response to this, we seem helpless, victims of “… a psychic numbing that inures us to the realities of being tracked, parsed, trained, and modified.” (p. 11)

Zuboff proposes: “Only ‘we the people’ can reverse the course, first by naming the unprecedented, then by mobilizing new forms of collaborative action: the crucial friction that reasserts the primacy of a flourishing human future as the foundation of our information civilization.” (p. 21). If one can make any criticism about this landmark work, it is that the collective action that she proposes is not described.

For that person in your life who wants not just a dose from a fire hose but total immersion, may I suggest that you also purchase Brett Frischmann and Evan Selinger’s thoughtful and imaginative Re-Engineering Humanity.  Happy holidays to all, and may next year be better than this one!


How are the surveillance capitalist approaches of Google, Facebook, and Amazon similar or different?  How are Zuboff and Frischmann/Selinger’s theories complementary?

Blog 14: Ethics throughout a Computer Science curriculum

Every Computer Science student should get significant exposure to the social, political, legal, and ethical issues raised by the accelerating progress in the development and use of digital technologies.

The standard approach is to offer one undergraduate course, typically called Computers and Society or Computer Ethics.  I have done this during the current term at Columbia University, using my new textbook, Computers and Society: Modern Perspectives (OUP, 2019).  We meet twice a week for 75 minutes.  In class, I present key topics covered in the book, and welcome a number of guest speakers who present their own experiences and points of view.  Every class is interactive, as I try to get the students to express their own ideas.  There have been four assignments: a policy brief, a book report, a debate, and a research paper.  Such courses are typically not required by major research universities, which is a mistake, but they are often required by liberal arts colleges.

An imaginative approach, but one that is rarely used, is to introduce key issues by the reading and viewing of science fiction novels or stories or films.  This has been done at over a dozen universities (both those with significant research activities and those lacking them) and colleges in the U.S.  Both faculty and students find the material engaging and an effective vehicle for discussing ethical issues raised by computers, robots, and artificial intelligence software.

Recently, under the leadership of Computer Science Prof. Barbara Grosz, in collaboration with Philosophy Professor Alison Simmons, Harvard has been developing an exciting alternative called Embedded Ethics.  The web site asserts:

“Ethical reasoning is an essential skill for today’s computer scientists. The Embedded EthiCS distributed pedagogy embeds philosophers directly into computer science courses to teach students how to think through the ethical and social implications of their work”.

Each year, an increasing number of Harvard’s undergraduate CS courses have embedded into them one lecture per term discussing an ethical issue relevant to the course, such as data bias in a machine learning course, fake news in a networks course, and the need for accessible interfaces in a human-computer interaction course.  Material is presented by a philosophy teaching fellow or graduate student after consultation with the instructor.  A follow-up homework question or exercise is assigned to the students.  No particular ethical framework is stressed; an approach is chosen that seems best for each specific topic.  The program began in 2017; by 2019, 14 courses had been equipped with the content to deliver one class dealing with ethics.  The goal is to equip all of their courses within the next few years.

The results have been uniformly positive.  Students are engaged, with “many expressing eagerness for more exposure to ethics content and more opportunities to develop skills in ethical reasoning and communication”.  A major strength of the program is that it keeps the importance of ethics at the forefront throughout the curriculum.

Prof. Grosz reports that a number of other universities are considering adopting the program.  Challenges that will be faced include identifying champions both in Computer Science and Philosophy, obtaining sufficient buy-in from faculty who are willing to devote one class per term to the activity, and the costs of developing the material for each local context.


What are the advantages and disadvantages of each of the three approaches discussed in this post?  One way of thinking about this is in terms of stakeholders, e.g., students interested in this material, students not interested, faculty believers, faculty disbelievers, the university, and the public at large.

Blog 13: Digital technology firms, monopolies, and antitrust actions

Today’s digital technology industries are characterized by intense degrees of corporate concentration.

Amazon revolutionized access to books and continues to grow its market share of both print book and eBook sales — approaching 50% of print sales and more than 90% of eBook sales.  It is also starting to dominate the sale of many other kinds of goods, and now vigorously seeks a dominant market share in sectors such as grocery retailing and pharmacies. Facebook, which owns 54% of the social media market, is responsible for a great deal of the Internet hate speech and fake news nightmares we face today. Google, which revolutionized the business of search and now owns 76% of that market, seems to manipulate its search engine algorithm for its own commercial benefit.  Apple, which demonstrated that it was possible to design for ease of learning and ease of use and still achieve commercial success, now owns 66% of the tablet market and 22% of the mobile phone market, and seems to manipulate the policies of software distribution on its platforms for its own commercial benefit.

There is now increasing activity by the U.S. Congress and various agencies of the executive branch of the government to investigate the degree of monopoly control exercised by these four firms, and whether or not this deserves a vigorous response.

The United States has a rich tradition of opposing monopoly control of critical industries through antitrust legislation and action. Columbia University Professor Tim Wu has documented this in a brilliant new book, The Curse of Bigness.  Wu documents the history of antitrust action against monopolists J.P. Morgan and John D. Rockefeller through legislation, such as the Sherman Act of 1890, the Clayton Act of 1914, and the Federal Trade Commission Act of 1914, as well as the later Anti-Merger Act of 1950, and political actions by figures such as President Teddy Roosevelt.  Perhaps the highlight of such activities was the 1984 mandated break-up of AT&T into seven regional “Baby Bells”.

U.S. antitrust actions faded in the latter part of the 20th century as the “Chicago school” of antitrust became dominant. As evangelized by lawyers such as Robert Bork, it asserted that the only role for government with respect to possible monopolies was to ensure that consumers benefited, which they generally interpreted as lower prices for purchases.  An example of the influence of the Chicago school was the failure of the US Government’s antitrust action against Microsoft.

Wu suggests, towards the end of his book, that the degree of market concentration in digital technology industries today requires more research to better understand the evolving monopolies; merger reviews, with full public consultation; big antitrust cases, such as those carried out against J.P. Morgan, John D. Rockefeller, and AT&T; and, ultimately, break-ups of firms that have become monopolies.

But does monopoly control of an industry actually lead to the lowest possible prices?  A strong argument can be made that this is not the case. We shall address this question in a future blog post. We shall also discuss other considerations, other values, that should be weighed in deciding to what extent monopoly control of digital technology industries should be allowed, and when government needs to take decisive action against such monopoly control.


Do you believe that monopolies provide lower prices to consumers?  Why or why not?  What other considerations and values are important in deciding whether or not to allow or encourage firms whose market position is so strong that they are achieving significant dominance over a market?

Blog 12: Diverse design thinking in technology

Contributed by Muriam Fancy
Muriam is a masters student at the Munk School of Global Affairs and Public Policy. She recently completed her BA in Peace, Conflict, and Justice with a double minor in Indigenous Studies and Diaspora & Transnational Studies. She runs Diverse Innovations (@diverseinnovat1), a platform discussing social good technology.

Amazon launched an artificial intelligence (“AI”) system in an effort to revolutionize its recruitment strategy, and found that the program discriminated against women. A Chicago court implemented an AI system called COMPAS to do a predictive risk analysis of the likelihood that offenders would re-offend, either by committing the same crime they were charged with or by committing a more significant offense. However, the system discriminated against black defendants, predicting that they were more likely than white defendants to commit a more significant offense – read more in Chapter 11 of Computers and Society: Modern Perspectives.

Technology can act as a catalyst in our daily lives only if it mitigates issues of discrimination and racism. This raises the question: how can we make technology more inclusive? It begins with how the technology is designed, hence the call for diverse design thinking, which explains why a “human-centered” approach is required when technology attempts to make objective analyses and decisions.

The Hasso Plattner Institute of Design at Stanford University found that one of the fundamental steps in implementing diverse design thinking is to have better data. But how do we define “better data”? Really, it means that we need data that is inclusive of issues at the societal level, in order to achieve a “collective experience of inclusion.” Similarly, Friedman and Hendry (authors of “Value Sensitive Design: Shaping Technology with Moral Imagination”) found that a “value sensitive design” approach questions whether the stakeholders involved shaped the framework of the technology, thus imposing biases on the type of data included or on whom the technology is meant to target.
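One concrete, minimal reading of “better data” is simply measuring who is represented. As a hypothetical sketch (the records and target shares below are invented for illustration), a first diagnostic is to compare group representation in a training dataset against the population the system is meant to serve:

```python
# A hypothetical diagnostic for "better data": compare how groups are
# represented in a training set versus the population a system will serve.
# The records and the assumed target shares are invented for illustration.

from collections import Counter

training_records = ["woman", "man", "man", "man", "woman", "man", "man", "man"]
population_share = {"woman": 0.5, "man": 0.5}  # assumed target shares

counts = Counter(training_records)
total = len(training_records)
for group, target in population_share.items():
    observed = counts[group] / total
    gap = observed - target
    print(f"{group}: {observed:.0%} of data vs {target:.0%} of population "
          f"(gap {gap:+.0%})")
```

A gap like the one this toy dataset shows (women at 25% of the data versus 50% of the population) is the kind of imbalance that, left unexamined, produces systems like the recruitment tool described above.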

How can we expect to use technology in our daily lives when our experiences are not mirrored in the data used to design the technology itself? Diversity forces innovation and technology to change the fabric of its purpose, but also forces us to question how society functions fundamentally. If we are to use an empirical approach through technology to solve issues we face in society, how can we do that if a percentage of the population is not included in the function of the product or service itself?

For example, an issue concerning many people to date is the effect of automation on work, an issue explored in depth in Chapter 10 of Dr. Baecker’s book. Based on the premise discussed, the consequences will only be detrimental if we are unable to tackle bias in the design thinking methodology of the creation of technology. “IMF res­earchers predict disproportionately higher job losses among wom­en when automation displaces an estimated 10 per cent of jobs over the next two decades, according to analysis conducted on 30 countries.”

Furthermore, this article posits that genuinely diverse design thinking in technology can also begin in the room where ideation happens. Societies around the world struggle with imbalances of perspective within that room, whether of gender, race, or age. Diverse design thinking can be implemented only if there is diversity of thought. Issues of racism and discrimination in technology are merely a reflection of the fractionalization of society. Cathy O'Neil (author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy) comes to a similar conclusion, noting that it is up to us as a society to model algorithms on how we want the world to operate, which hopefully means a more equitable and inclusive society.

Including others in both the creation and the facilitation of technology will succeed only if diversity is woven into the fabric of the product. For a productive and positive future in which technology is well integrated into society, diversity and inclusion must be associated with growth, for that is the only way we can achieve progress.


What other steps would we need to include to accomplish a diverse design thought approach to technology?

Blog 11: The tweetocracy

A session at the New Yorker Festival this past weekend discussing how history will judge Trump got me thinking again about media, tweeting, and Donald J. Trump.

Media play a huge role in politics. Here are some examples. In the medium of a large enclosed space filled with people, Adolf Hitler was able to whip crowds into a frenzy. Franklin Delano Roosevelt, in his radio fireside chats, reassured Americans that they could and would survive the economic hardships of the Great Depression. Winston Churchill's stirring oratory during World War II lifted the spirits of people in Great Britain despite the Germans' intense aerial bombardment. John F. Kennedy's photogenic and relaxed television manner, contrasted with Richard Nixon's swarthy scowl, played a huge role in his victory in the 1960 US presidential election. Finally, Ronald Reagan's commanding performances in televised addresses, and his style of speaking to Americans in ways that they could understand and could trust, justified his being called "the great communicator".

Politics is now in the age of social media. There are of course perversions, such as Russian election hacking and the spread of fake news, but there is also the day-to-day business of political communication via social media. Despite all the evidence that Donald J. Trump is incompetent, evil, and corrupt, and that he is racist and misogynist, he continues to command the allegiance of upwards of 40% of the population.

Trump's use of social media, and, in particular, his tweeting, has sometimes earned his government the moniker of a tweetocracy. Despite his minimal command of the English language, and his poor judgement as to what he should tweet and when, analyses of his tweeting suggest that it is working for him.

Trump has issued an average of more than 10 tweets a day for many years now.  Analyses of the resulting rich dataset are in their infancy.  There are numerous attacks directed at his enemies, accusing them of weakness, stupidity, or failure, or of being illegitimate (“fake” is a favorite word) or corrupt.  Yet even more tweets are positive (“great” is a favorite word), beating the drum for his agenda or for himself.  A linguistic analysis has characterized Trump’s tweets as comprising advice, critique, opinion, prediction, and promotion.

Although some analysts argue that Trump's tweets are declining in effectiveness, as measured by the number of retweets and likes he gets divided by the size of his audience, he still commands an audience of 66 million followers, trailing only Barack Obama in the size of a politician's following.

Here is my conjecture about why his tweets help him continue to command the absolute loyalty of his followers. Almost once every waking hour, they receive messages — propaganda and misinformation — exhorting them to believe in the one true Trump and inciting them to hate their enemies. Neither Hitler nor Roosevelt nor Churchill nor Kennedy nor Reagan could get into the heads of their publics so frequently and so completely. I do not know how to prove this conjecture; hopefully others more skilled in social and personality psychology will do so.


How should the Democratic presidential candidate in 2020 use social media to best advantage to help him or her defeat Trump in the election? 

Blog 9: Power, politics, and the internet

Contributed by Uma Kalkar
Uma is a senior undergraduate at the University of Toronto and a 2019-2020 International Presidential Fellow at the Center for the Study of the Presidency and Congress, researching the politics of domestic and national digital divides.

In 2016, the United Nations classified internet access as a human right, deeming that state censorship or shutdown of the internet impinges on personal freedoms. Unfortunately, conflict-heavy zones and politically unstable states deny their citizens unfiltered internet access in order to isolate and control discussion and debate. Through internet censorship, governments attempt to hide regime atrocities and to revise history.

The internet is one of the main vehicles of communication and activism in the modern era. Yet nearly 4 billion people – 50% of the world’s current population – are left offline. Lack of digital infrastructure and difficulty using technology are reasons covered extensively in Chapter 1 of Computers and Society: Modern Perspectives. However, for many people, issues of affordability and agency in manipulating the internet play large roles in the digital divide.

In 2011, protests for change ignited all across the Middle East and Northern Africa. To contain the revolutionary "Arab Spring" and its use of social media, some governments censored internet websites, tracked the activity of dissenters, and ultimately shut down internet access. While countries like Egypt, Tunisia, and Libya managed to enact some political change, other MENA countries, Syria in particular, fell into full-blown civil war.

According to Reporters Without Borders, Syria and its President Bashar al-Assad are active "Enemies of the Internet". Due to extremely tight restrictions on web access and on which sites can be accessed, Syrians are denied freedom of expression. In fact, Freedom House found that in 2016, only 30% of Syrians were connected to the internet.

Why is the Syrian government determined to control the internet? With the help of the pro-government "Syrian Electronic Army", the state blocks anti-government websites, targets activists with malware, and steals personal login information in order to contain and track protestors. By suppressing freedom of expression, Syria shrouds rebel activity and violence in a cloak of misinformation. Additionally, the government prevents Syrians on the ground from communicating with the outside world, limiting reliable data transmission and allowing inequality and atrocity to fester in darkness.

Unequal internet access is not found only in developing or unstable countries. Disproportionately, racialized and/or lower-income people in the West are priced out of internet access by oligopolistic telecommunications companies. Deprived of the internet's lifeblood, these residents cannot Google top news stories, ask Siri a question, search for jobs, or research topics for schoolwork.

A study of internet access in Canada from 2012 to 2016 found that 64% of families in the lowest income quintile did not have broadband access at home. In order to keep their power and prices high, Bell, Rogers, and Telus pressured federal and local Canadian governments to roll back telecommunications regulations and to permit barriers to entry that prevent smaller internet service providers from getting into the market.

The U.S. is not free from internet strongmen either: communications giants Comcast and Charter dominate internet service in America. Moreover, these two companies do not infringe on each other's territory, creating local monopolies on broadband as they split cities between themselves. Because these companies have little incentive to connect everyone, less densely populated areas are often underconnected.

Both tyrants and tycoons use internet access as a means of control, creating digital divides within and between nations. Unfettered ability and agency to use the internet are needed to make sure no one is left behind.


What steps can we take, locally and globally, to protect internet free speech and access?