{"id":112,"date":"2023-07-11T16:56:10","date_gmt":"2023-07-11T20:56:10","guid":{"rendered":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/chapter\/algorithms-humans-r-social-media-open-textbook-edition-winter-2022\/"},"modified":"2023-08-29T13:40:07","modified_gmt":"2023-08-29T17:40:07","slug":"21-algorithms","status":"publish","type":"chapter","link":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/chapter\/21-algorithms\/","title":{"raw":"Chapter 21: Algorithms","rendered":"Chapter 21: Algorithms"},"content":{"raw":"<div class=\"bc-section section\">\r\n<div id=\"algorithms-from-human-sources-to-seismic-reverberations\" class=\"chapter standard\" title=\"Algorithms\">\r\n<div class=\"chapter-title-wrap\">\r\n<p class=\"chapter-subtitle\"><span style=\"text-align: initial;font-size: 1em\">Nearly any software platform you use performs its work based on algorithms, which enable it to make rapid decisions and respond predictably to stimuli. An algorithm<\/span><span style=\"text-align: initial;font-size: 1em\"> is a step-by-step set of instructions for getting something done, whether that something is making a decision, solving a problem, or getting from point A to point B (or point Z). This chapter looks at how computing algorithms work, who tends to create them, and how that affects their outcomes. 
We will also consider whether certain algorithms should be used at all.<\/span><\/p>\r\n\r\n<\/div>\r\n<div class=\"ugc chapter-ugc\">\r\n<div id=\"app\">\r\n<div id=\"publication-viewer\" class=\"publication-viewer\">\r\n<div class=\"publication-view\">\r\n<div class=\"wp-swipe-panel-group wp-first-panel-displayed wp-last-panel-displayed\">\r\n<div class=\"wp-swipe-panel-group-view\">\r\n<div class=\"wp-swipe-panel-group-panel article-panel wp-panel-active\">\r\n<div class=\"article crisp-theme sections-article-layout\" data-article-type=\"sections-article\">\r\n<div class=\"section title-section title-center hidden\" data-section-behavior=\"crisp-title\" data-layer=\"1\" data-layer-name=\"over\" data-scroll-after-animation=\"false\">\r\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom hidden\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\r\n<div class=\"section-view\">\r\n<div class=\"section-content\">\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n<div class=\"image\">\r\n<div class=\"image-wrapper\">\r\n<div id=\"attachment_57\" class=\"wp-caption alignleft\" style=\"width: 406px\" aria-describedby=\"caption-attachment-57\">\r\n\r\n<img class=\"wp-image-57\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/Movie_algorithm.svg_.png\" alt=\"\" width=\"406\" height=\"439\" \/>\r\n<div id=\"caption-attachment-57\" class=\"wp-caption-text\">Algorithms: They can all be reduced to simple steps, which computers need in order to follow them.<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Humans Make Computers What They Are.<\/span><\/span><\/h2>\r\nMost platforms have many algorithms at work at once, which can make the work they do seem so complex it\u2019s almost magical. 
But all functions of digital devices can be reduced to simple steps if needed. The steps have to be simple because computers interpret instructions very literally.\r\n<p class=\"\">Computers don\u2019t know anything unless someone has already given them instructions that are explicit, with every step fully explained. Humans, on the other hand, can fill in the gaps when you skip unimportant steps, and can follow tacit or incomplete instructions. But give a computer instructions that skip steps or include tacit steps, and the computer will either stop working or get the process wrong.<\/p>\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div class=\"section single-column-section split-layout image-on-right hidden\" data-section-behavior=\"crisp-split-layout\" data-layer=\"0\" data-layer-name=\"over\">\r\n<div class=\"section-view\">\r\n<div class=\"section-content\">\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n\r\n&nbsp;\r\n<div class=\"section-content\">\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n<p class=\"\">Here\u2019s an example of the human cooperation that goes into the giving and following of instructions, demonstrated with a robot.<\/p>\r\n<p class=\"\">As an instructor, I can say to human students on the first day of class, \u201cLet\u2019s go around the room. Tell us who you are and where you\u2019re from.\u201d Easy for humans, right? But imagine I try that in a mixed human\/robot classroom. All will probably go well with the first two (human) students, but then the third student, a robot with a computer for a brain, says, \u201cI don\u2019t understand.\u201d It seems my instructions were not clear enough. Now imagine another (human) student named Lila helpfully tells the robot, \u201cWell, first just tell us your name.\u201d The robot still does not understand. 
Finally, Lila says, \u201cWhat is your name?\u201d<\/p>\r\n<p class=\"\">That works; the robot has been programmed with an algorithm instructing it to respond to \u201cWhat is your name?\u201d with the words, \u201cMy name is Feefee,\u201d which the robot now says. Then Lila continues helping the robot by saying, \u201cNow tell us where you\u2019re from, Feefee.\u201d Again the robot doesn\u2019t get it. At this point, though, Lila has figured out what works in getting answers from this robot, so Lila says, \u201cWhere are you from?\u201d This works; the robot has been programmed to respond to \u201cWhere are you from?\u201d with the sentence, \u201cI am from Silicon Valley.\u201d<\/p>\r\n<p class=\"\">In the above example, human intelligence was responsible for the robot\u2019s successes and failures. The robot arrived with a few communication algorithms, programmed by its human developers. Feefee had not been taught enough to converse very naturally, however. Then Lila, a human, figured out how to get the right responses out of Feefee by modifying her human behaviour to better match behaviour Feefee had learned to respond to. Later, the students might all run home and say, \u201cA robot participated in class today! 
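The robot\u2019s behaviour above boils down to a literal lookup table. Here is a minimal sketch of that kind of exact-match response algorithm; the code and the names `RESPONSES` and `reply` are hypothetical illustrations for this chapter, not Feefee\u2019s or any real robot\u2019s implementation:

```python
# Hypothetical sketch of an exact-match response algorithm: the robot
# only "understands" phrasings its developers explicitly anticipated.
RESPONSES = {
    "What is your name?": "My name is Feefee.",
    "Where are you from?": "I am from Silicon Valley.",
}

def reply(utterance):
    # A literal lookup: any wording the programmers did not anticipate
    # falls through to the default, just as in the classroom example.
    return RESPONSES.get(utterance.strip(), "I don't understand.")
```

With this sketch, `reply("Well, first just tell us your name.")` yields the default "I don't understand.", while the exact phrase `reply("What is your name?")` yields "My name is Feefee." \u2014 the cooperation comes from Lila adapting her wording to the keys the programmers chose.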
It was amazing!\u201d They might not even acknowledge the human participation that day, which the robot fully depended on.<\/p>\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom hidden\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\r\n<div class=\"section-view\">\r\n<div class=\"section-content\">\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom visible\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\r\n<div class=\"section-view\">\r\n<div class=\"section-content\">\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Two Reasons Computers Seem So Smart Today<\/span><\/span><\/h2>\r\n<p class=\"\">What computers can do these days is amazing, for two main reasons. The first is cooperation from human software developers. 
The second is cooperation on the part of users.<\/p>\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div class=\"section single-column-section split-layout image-on-right hidden\" data-section-behavior=\"crisp-split-layout\" data-layer=\"0\" data-layer-name=\"over\">\r\n<div class=\"section-view\">\r\n<div class=\"section-content\">\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n<div id=\"attachment_59\" class=\"wp-caption alignright\" style=\"width: 513px\" aria-describedby=\"caption-attachment-59\">\r\n\r\n<img class=\"size-full wp-image-59\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/C_Hello_World_Program.png\" alt=\"Programming Languages\" width=\"513\" height=\"337\" \/>\r\n<div id=\"caption-attachment-59\" class=\"wp-caption-text\">Source code of a simple computer program:\r\nThis code written in the C programming language will display\r\nthe \u201cHello, world!\u201d message.<\/div>\r\n<\/div>\r\n<p class=\"\">First, computers seem so intelligent today because human software developers help one another teach computers. Apps that seem groundbreaking may simply include a lot of instructions. This is possible because developers have coded many, many algorithms, which they share and reuse on sites like <a href=\"http:\/\/www.github.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Github<\/a>. The more a developer is able to copy the basic steps others have already written for computers to follow, the more that developer can then focus on building new code that teaches computers new tricks. 
The most influential people, known as \u201ccreators\u201d or \u201cinventors\u201d in the tech world, may be better described as \u00a0<a href=\"https:\/\/www.newyorker.com\/magazine\/2011\/11\/14\/the-tweaker\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">\u201ctweakers\u201d<\/a> who improved and added to other people\u2019s code for their \u201ccreations\u201d and \u201cinventions.\u201d<\/p>\r\n<p class=\"\">The second reason computers seem so smart today is because users are teaching them. Algorithms are increasingly designed to \u201clearn\u201d from human input. New algorithms automatically plug input into new programs, then automatically run those programs. This sequence of automated learning and application is called artificial intelligence (AI). AI essentially means teaching computers to teach themselves directly from their human users.<\/p>\r\n<p class=\"\">If only humans were always good teachers!<\/p>\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom hidden\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\r\n<div class=\"section-view\">\r\n<div class=\"section-content\">\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Teaching Machines the Best and Worst About Ourselves<\/span><\/span><\/h2>\r\n<p class=\"\">In 2016, Microsoft introduced Tay, an AI online robot they branded as a young female. Their intention was for Tay to learn to communicate from internet users who conversed with her on Twitter\u2014and learn she did. 
Within a few hours, Tay\u2019s social media posts were so infected with violence, racism, sexism, and other bigotry that <a href=\"https:\/\/www.inverse.com\/article\/13387-microsoft-chinese-chatbot\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Microsoft had to take her down and apologize<\/a>.<\/p>\r\n<p class=\"\">Microsoft had previously launched <a href=\"https:\/\/www.msxiaobing.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">XiaoIce<\/a>, an AI whose behaviour remained far less offensive than Tay\u2019s, on Chinese sites, including the microblog Weibo. However, the Chinese sites XiaoIce learned from were heavily censored. The English-language Twitter was far less censored and rife with trolls networked and ready to coordinate attacks. <a href=\"https:\/\/arstechnica.com\/information-technology\/2016\/03\/tay-the-neo-nazi-millennial-chatbot-gets-autopsied\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Developers and users who were paying attention already knew Twitter was full of hate.<\/a><\/p>\r\n<p class=\"\">Tay was an embarrassment for Microsoft in the eyes of many commentators. How could they not have predicted and protected her from bad human teachers? Why didn\u2019t Tay\u2019s human programmers teach her what not to say? 
The failure certainly involved a lack of research, since bots like <a href=\"https:\/\/twitter.com\/oliviataters?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">@oliviataters<\/a> have been more successful and even benefited from a shared <a href=\"http:\/\/tinysubversions.com\/2013\/09\/new-npm-package-for-bot-makers-wordfilter\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">list of banned words<\/a> that could easily be added to their algorithms.<\/p>\r\n<p class=\"\">In addition to these oversights, Tay\u2019s failure may also have been caused by a lack of diversity in Microsoft\u2019s programmers and team leaders.<\/p>\r\n\r\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Programming and Bias<\/span><\/span><\/h2>\r\n<div id=\"attachment_61\" class=\"wp-caption alignleft\" style=\"width: 539px\" aria-describedby=\"caption-attachment-61\">\r\n\r\n<img class=\"wp-image-60\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/Programming_language.png\" alt=\"\" width=\"539\" height=\"309\" \/>\r\n<div id=\"caption-attachment-61\" class=\"wp-caption-text\">Programming languages: BASIC, C++, and Java are just a few of these. All translate human instructions into algorithms, which are instructions computers can understand.<\/div>\r\n<\/div>\r\nHumans are at the heart of any computer program. Algorithms for computers to follow are all written in programming languages, which translate instructions from human language into the <a href=\"https:\/\/www.quora.com\/How-exactly-does-a-computer-program-work-How-do-lines-of-text-tell-a-box-of-wires-to-do-anything-I-thought-computers-were-based-on-0s-and-1s-How-does-it-translate\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">computing language of binary numerals, 0s and 1s<\/a>. Algorithms and programs are selective and reflect personal decision-making. 
There are usually different ways they could have been written.\r\n\r\nPrograms in computer programming languages like Python, C++, and Java are written as source code. Writing programs, sometimes just called \u201ccoding,\u201d is an intermediary step between human language and the binary language that computers understand. Learning programming languages takes time and dedication. To learn to be a computer programmer, you either have to feel driven to teach yourself on your own equipment or you have to be taught to program\u2014and this is still not common in US schools.\r\n<div class=\"wp-caption alignright\" style=\"width: 555px\" aria-describedby=\"caption-attachment-61\">\r\n\r\n<img class=\"wp-image-61\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/Tech_geek_7-21_1.png\" alt=\"A search for tech geek\" width=\"555\" height=\"241\" \/>\r\n<div class=\"wp-caption-text\">A Google search for \u201ctech geek\u201d:\r\nThe many images of young white male \u201ctech geeks\u201d help\r\nexplain why youth who are not white or male may feel out of\r\nplace teaching themselves to code.<\/div>\r\n<\/div>\r\nBecause computer programmers are self-selected this way, and because many people think of the typical tech geek as white and male (as suggested by the Google Image search to the right), people who end up learning computer programming in the US are more likely to be white than any other race, and are more likely to identify as male than any other gender.\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom visible\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\r\n<div class=\"section-view\">\r\n<div class=\"section-content\">\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span 
class=\"heading-text\">How Can Computers Carry Bias?<\/span><\/span><\/h2>\r\n<p class=\"\">Many people think computers and algorithms are neutral\u2014racism and sexism are not programmers\u2019 problems. In the case of Tay\u2019s programmers, this false belief enabled more hate speech online and led to the embarrassment of their employer. Human-crafted computer programs mediate nearly everything humans do today and human responses are involved in many of those tasks. Considering the near-infinite extent to which algorithms and their activities are replicated, the presence of human biases is a devastating threat to computer-dependent societies in general\u2014and to those targeted or harmed by those biases in particular.<\/p>\r\n\r\n<div id=\"attachment_62\" class=\"wp-caption alignleft\" style=\"width: 609px\" aria-describedby=\"caption-attachment-62\">\r\n\r\n<img class=\"wp-image-62\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/640px-A_Google_Glass_wearer.jpg\" alt=\"A white man wearing Google Glass\" width=\"609\" height=\"406\" \/>\r\n<div id=\"caption-attachment-62\" class=\"wp-caption-text\">Google Glass was considered by some to be an example of a poor decision by a homogenous workforce.<\/div>\r\n<\/div>\r\n&nbsp;\r\n\r\n<\/div>\r\n<\/div>\r\n&nbsp;\r\n<div class=\"section-content-view\">\r\n<div class=\"content-container\">\r\n<p class=\"\">Problems like these are rampant in the tech industry because there is a damaging belief in US (and some other) societies that the development of computer technologies is antisocial, and that some kinds of people are better at it than others. 
As a result of this bias in tech industries and computing, there are not enough kinds of people working on tech development teams: not enough women, not enough people who are not white, not enough people who remember to think of children, not enough people who think socially, not enough people who prioritize user emotions.<\/p>\r\n<p class=\"\">Remember Google Glass? You may not; that product failed because few people wanted interaction with a computer to come between themselves and eye contact with humans and the world. The few who did want that were largely people who fit the definition of \u201ctech nerd\u201d; the broader community of technology users did not share their enthusiasm. Critics labeled the unfortunate people who did purchase the product as \u201cglassholes.\u201d<\/p>\r\n\r\n<\/div>\r\n&nbsp;\r\n<div class=\"embedded-link-wrapper widescreen-aspect-ratio\">\r\n<div class=\"textbox shaded\">\r\n<h4>Code: Debugging the Gender Gap<\/h4>\r\nCreated in 2015, the film <a href=\"https:\/\/vimeo.com\/136884902\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Code: Debugging the Gender Gap<\/a> encapsulates many of the biases in the history of the computing industry, as well as their implications. Women have always been part of the US computing industry, and <a href=\"https:\/\/www.inc.com\/salvador-rodriguez\/why-tech-needs-immigrants.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">today that industry would collapse without engineers from diverse cultures<\/a>. Yet there is widespread evidence that women and racial minorities have always been made to feel that they did not belong in the industry. 
<a href=\"https:\/\/gigaom.com\/2014\/08\/21\/eight-charts-that-put-tech-companies-diversity-stats-into-perspective\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">And the numbers of engineers and others in tech development show<\/a>\u00a0a serious problem in Silicon Valley with racial and ethnic diversity, resulting in <a href=\"https:\/\/www.cnet.com\/news\/google-apologizes-for-algorithm-mistakenly-calling-black-people-gorillas\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">terrible tech decisions <\/a>that spread racial and ethnic bias under the guise of tech neutrality. Google has made some headway in achieving a more diverse workforce, but not without <a href=\"https:\/\/gizmodo.com\/exclusive-heres-the-full-10-page-anti-diversity-screed-1797564320\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">backlash founded on bad science.<\/a>\r\n\r\nBelow is the trailer for the film. The film is available through most <a href=\"https:\/\/search.ebscohost.com\/login.aspx?direct=true&amp;AuthType=ip,sso&amp;db=cat09549a&amp;AN=dcl.oai.edge.douglascollege.folio.ebsco.com.fs00001139.8750503b.e9b8.5131.9e7b.acc8dffa6f7c&amp;site=eds-live&amp;scope=site&amp;custid=s5672421\" target=\"_blank\" rel=\"noopener\">college libraries<\/a> and outlets that rent and sell feature films, and through <a href=\"https:\/\/www.finishlinefeaturefilms.com\/code\">Finish Line Features<\/a>.\r\n\r\n<\/div>\r\n<\/div>\r\n<h1 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Exacerbating Bias in Algorithms: T<\/span><\/span><span class=\"heading-content-wrapper\"><span class=\"heading-text\">he Three \"I\"s<\/span><\/span><\/h1>\r\n<p class=\"\">In its early years, the internet was viewed as a utopia, an ideal world that would permit a completely free flow of all available information to everyone, equally. 
John Perry Barlow\u2019s 1996 <a href=\"https:\/\/vimeo.com\/111576518\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Declaration of the Independence of Cyberspace<\/a> represents this utopian vision, in which the internet liberates users from all biases and <a href=\"https:\/\/en.wikipedia.org\/wiki\/On_the_Internet,_nobody_knows_you%27re_a_dog\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">even from their own bodies<\/a> (at which human biases are so often directed). Barlow\u2019s utopian vision does not match the internet of today. Our social norms and inequalities accompany us across all the media and sites we use\u2014and are worsened in a climate where information value is determined by marketability and profit, as sociologist Zeynep Tufekci explains in <a href=\"https:\/\/embed.ted.com\/talks\/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads\">this TED Talk<\/a>.<\/p>\r\nBecause algorithms are built on human cooperation with computing programs, human selectivity and human flaws are embedded within algorithms. Humans carry biases, and algorithms pick those biases up and spread them to many, many others. They even make us more biased by hiding results that the algorithm calculates we may not like. When we get our news and information from social media, invisible algorithms consider our own biases and those of friends in our social networks to determine which new posts and stories to show us in search results and news feeds. The result for each user can be called their echo chamber or, as <a href=\"https:\/\/www.ted.com\/talks\/eli_pariser_beware_online_filter_bubbles?language=en\">author Eli Pariser<\/a> describes it, a <span class=\"glossary-term\">filter bubble<\/span>, in which we only see news and information we like and agree with, leading to political polarization.\r\n\r\nAlthough algorithms can generate very sophisticated recommendations, algorithms do <em>not <\/em>make sophisticated decisions. 
When humans make poor decisions, they can rely on themselves or on other humans to recognize and reverse the error; at the very least, a human decision-maker can be held responsible. Human decision-making often takes time and critical reflection to implement, such as the writing of an approved ordinance into law. When algorithms are used in place of human decision-making, I describe what ensues as <span class=\"glossary-term\">The Three \"I\"s<\/span>: algorithms\u2019 decisions become <em>invisible, irreversible, <\/em>and <em>infinite. <\/em>Most social media platforms and many organizations using algorithms will not share how their algorithms work; for this lack of transparency, they are known as <span class=\"glossary-term\">black box algorithms<\/span>.\r\n\r\n<\/div>\r\n<div class=\"textbox shaded\">\r\n<h2>Exposing Invisible Algorithms: ProPublica<\/h2>\r\nJournalists at ProPublica are educating the public on what algorithms can do by explaining and testing black box algorithms. This work is particularly valuable because most algorithmic bias is hard for small groups or individual human users to detect. Studies such as ProPublica\u2019s \u201cBreaking the Black Box\u201d series (below) have been based on groups systematically testing algorithms from different machines, locations, and users. Using investigative journalism, ProPublica has also <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">found<\/a> that algorithms used by law enforcement are significantly more likely to label African Americans as \"High Risk\" for reoffending and white Americans as \"Low Risk.\"\r\n\r\n<\/div>\r\n<\/div>\r\n<div class=\"section-content-view\">\r\n<h2>Fighting Unjust Algorithms<\/h2>\r\nAlgorithms are laden with errors. 
Some of these errors can be traced to the biases of those who developed them, as when a facial recognition system meant for global implementation is only trained using <a href=\"https:\/\/www.face-rec.org\/databases\">data sets<\/a> from a limited population (say, predominantly white or male). Algorithms can become problematic when they are hacked by groups of users, as Microsoft\u2019s Tay was. Algorithms are also grounded in the values of those who shape them; these values may reward some involved while disenfranchising others.\r\n\r\nDespite their flaws, algorithms are increasingly used in heavily consequential ways. They predict how likely a person is to commit a crime or default on a bank loan based on a given data set. They can target users with messages on social media that are customized to fit their interests, their voting preferences, or their fears. They can identify who is in photos online or in recordings of offline spaces.\r\n\r\nConfronting the landscape of increasing algorithmic control is activism to limit the control of algorithms over human lives. Below, read about the work of the Algorithmic Justice League and other activists promoting bans on facial recognition. And consider: what roles might algorithms play in your life that may deserve more attention, scrutiny, and even activism?\r\n<div class=\"textbox shaded\">\r\n<h2>The Algorithmic Justice League Versus Facial Recognition Tech in Boston<\/h2>\r\nMIT computer scientist and \u201cPoet of Code\u201d Joy Buolamwini heads the Algorithmic Justice League, an organization making remarkable headway in fighting facial recognition technologies, whose work she explains in the first video below. On June 9th, 2020, Buolamwini and other computer scientists presented alongside citizens at a Boston City Council meeting in support of a proposed ordinance banning facial recognition in public spaces in the city. 
Held and shared by live stream during COVID-19, footage of this meeting offers a remarkable look at the value of human advocacy in shaping the future of social technologies. The second video below should be cued to the beginning of Buolamwini\u2019s testimony half an hour in. Boston\u2019s City Council subsequently <a href=\"https:\/\/www.npr.org\/sections\/live-updates-protests-for-racial-justice\/2020\/06\/24\/883107627\/boston-lawmakers-vote-to-ban-use-of-facial-recognition-technology-by-the-city\">voted unanimously<\/a> to ban the use of facial recognition technology by the City.\r\n\r\n<\/div>\r\n<div class=\"content-container\">\r\n<h1><span style=\"font-size: 1.424em\">Media Attributions<\/span><\/h1>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div class=\"media-attributions clear\">\r\n<ul>\r\n \t<li>OA_image-5fd085054d2f2 \u00a9 Omar Amanullah adapted by Emily Gammons is licensed under a <a href=\"https:\/\/creativecommons.org\/licenses\/by\/4.0\/\" rel=\"license\">CC BY (Attribution)<\/a> license<\/li>\r\n \t<li>Movie_algorithm.svg \u00a9 <a href=\"https:\/\/en.wikipedia.org\/wiki\/User:TheNewPhobia\" rel=\"dc:creator\">Jonathan<\/a> is licensed under a <a href=\"https:\/\/creativecommons.org\/publicdomain\/mark\/1.0\" rel=\"license\">Public Domain<\/a> license<\/li>\r\n \t<li>Source code of a simple computer program \u00a9 <a href=\"https:\/\/commons.wikimedia.org\/w\/index.php?title=User:Esquivalience&amp;action=edit&amp;redlink=1\" rel=\"dc:creator\">Esquivalience<\/a> is licensed under a <a href=\"https:\/\/creativecommons.org\/publicdomain\/zero\/1.0\" rel=\"license\">CC0 (Creative Commons Zero)<\/a> license<\/li>\r\n \t<li>Programming_language<\/li>\r\n \t<li>Tech_geek_7-21_1 is licensed under a <a href=\"https:\/\/creativecommons.org\/publicdomain\/mark\/1.0\" rel=\"license\">Public Domain<\/a> license<\/li>\r\n \t<li>640px-A_Google_Glass_wearer \u00a9 Lo\u00efc Le 
Meur is licensed under a <a href=\"https:\/\/creativecommons.org\/licenses\/by\/4.0\/\" rel=\"license\">CC BY (Attribution)<\/a> license<\/li>\r\n \t<li>Print<\/li>\r\n<\/ul>\r\n<h1>Attributions<\/h1>\r\nThis chapter was adapted from <a href=\"https:\/\/opentextbooks.library.arizona.edu\/hrsmwinter2022\/\" target=\"_blank\" rel=\"noopener\"><em>Humans R Social Media<\/em><\/a>\u00a0<span style=\"text-align: initial;font-size: 1em\">by Diana Daly, which is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.<\/span>\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>","rendered":"<div class=\"bc-section section\">\n<div id=\"algorithms-from-human-sources-to-seismic-reverberations\" class=\"chapter standard\" title=\"Algorithms\">\n<div class=\"chapter-title-wrap\">\n<p class=\"chapter-subtitle\"><span style=\"text-align: initial;font-size: 1em\">Nearly any software platform you use performs its work based on algorithms, which enable it to make rapid decisions and respond predictably to stimuli. An algorithm<\/span><span style=\"text-align: initial;font-size: 1em\"> is a step-by-step set of instructions for getting something done, whether that something is making a decision, solving a problem, or getting from point A to point B (or point Z). This chapter looks at how computing algorithms work, who tends to create them, and how that affects their outcomes. 
We will also consider whether certain algorithms should be used at all.<\/span><\/p>\n<\/div>\n<div class=\"ugc chapter-ugc\">\n<div id=\"app\">\n<div id=\"publication-viewer\" class=\"publication-viewer\">\n<div class=\"publication-view\">\n<div class=\"wp-swipe-panel-group wp-first-panel-displayed wp-last-panel-displayed\">\n<div class=\"wp-swipe-panel-group-view\">\n<div class=\"wp-swipe-panel-group-panel article-panel wp-panel-active\">\n<div class=\"article crisp-theme sections-article-layout\" data-article-type=\"sections-article\">\n<div class=\"section title-section title-center hidden\" data-section-behavior=\"crisp-title\" data-layer=\"1\" data-layer-name=\"over\" data-scroll-after-animation=\"false\">\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom hidden\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\n<div class=\"section-view\">\n<div class=\"section-content\">\n<div class=\"section-content-view\">\n<div class=\"content-container\">\n<div class=\"image\">\n<div class=\"image-wrapper\">\n<div id=\"attachment_57\" class=\"wp-caption alignleft\" style=\"width: 406px\" aria-describedby=\"caption-attachment-57\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-57\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/Movie_algorithm.svg_.png\" alt=\"\" width=\"406\" height=\"439\" \/><\/p>\n<div id=\"caption-attachment-57\" class=\"wp-caption-text\">Algorithms: They can all be reduced to simple steps, which computers need in order to follow them.<\/div>\n<\/div>\n<\/div>\n<\/div>\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Humans Make Computers What They Are.<\/span><\/span><\/h2>\n<p>Most platforms have many algorithms at work at once, which can make the work they do seem so complex it\u2019s almost magical. 
But all functions of digital devices can be reduced to simple steps if needed. The steps have to be simple because computers interpret instructions very literally.<\/p>\n<p class=\"\">Computers don\u2019t know anything unless someone has already given them instructions that are explicit, with every step fully explained. Humans, on the other hand, can fill in skipped or unimportant steps on their own and make sense of tacit or incomplete instructions. But give a computer instructions that skip steps or include tacit steps, and the computer will either stop working or get the process wrong.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"section single-column-section split-layout image-on-right hidden\" data-section-behavior=\"crisp-split-layout\" data-layer=\"0\" data-layer-name=\"over\">\n<div class=\"section-view\">\n<div class=\"section-content\">\n<div class=\"section-content-view\">\n<div class=\"content-container\">\n<p>&nbsp;<\/p>\n<div class=\"section-content\">\n<div class=\"section-content-view\">\n<div class=\"content-container\">\n<p class=\"\">Here\u2019s an example of the human cooperation that goes into the giving and following of instructions, demonstrated with a robot.<\/p>\n<p class=\"\">As an instructor, I can say to human students on the first day of class, \u201cLet\u2019s go around the room. Tell us who you are and where you\u2019re from.\u201d Easy for humans, right? But imagine I try that in a mixed human\/robot classroom. All will probably go well with the first two (human) students, but then the third student, a robot with a computer for a brain, says, \u201cI don\u2019t understand.\u201d It seems my instructions were not clear enough. Now imagine another (human) student named Lila helpfully tells the robot, \u201cWell, first just tell us your name.\u201d The robot still does not understand. 
Finally, Lila says, \u201cWhat is your name?\u201d<\/p>\n<p class=\"\">That works; the robot has been programmed with an algorithm instructing it to respond to \u201cWhat is your name?\u201d with the words, \u201cMy name is Feefee,\u201d which the robot now says. Then Lila continues helping the robot by saying, \u201cNow tell us where you\u2019re from, Feefee.\u201d Again the robot doesn\u2019t get it. At this point, though, Lila has figured out what works in getting answers from this robot, so Lila says, \u201cWhere are you from?\u201d This works; the robot has been programmed to respond to \u201cWhere are you from?\u201d with the sentence, \u201cI am from Silicon Valley.\u201d<\/p>\n<p class=\"\">In the above example, human intelligence was responsible for the robot\u2019s successes and failures. The robot arrived with a few communication algorithms, programmed by its human developers. Feefee had not been taught enough to converse very naturally, however. Then Lila, a human, figured out how to get the right responses out of Feefee by modifying her human behaviour to better match behaviour Feefee had learned to respond to. Later, the students might all run home and say, \u201cA robot participated in class today! 
It was amazing!\u201d They might not even acknowledge the human participation that day, which the robot fully depended on.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom hidden\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\n<div class=\"section-view\">\n<div class=\"section-content\">\n<div class=\"section-content-view\">\n<div class=\"content-container\">\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom visible\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\n<div class=\"section-view\">\n<div class=\"section-content\">\n<div class=\"section-content-view\">\n<div class=\"content-container\">\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Two Reasons Computers Seem So Smart Today<\/span><\/span><\/h2>\n<p class=\"\">What computers can do these days is amazing, for two main reasons. The first is cooperation from human software developers. 
The second is cooperation on the part of users.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"section single-column-section split-layout image-on-right hidden\" data-section-behavior=\"crisp-split-layout\" data-layer=\"0\" data-layer-name=\"over\">\n<div class=\"section-view\">\n<div class=\"section-content\">\n<div class=\"section-content-view\">\n<div class=\"content-container\">\n<div id=\"attachment_59\" class=\"wp-caption alignright\" style=\"width: 513px\" aria-describedby=\"caption-attachment-59\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-59\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/C_Hello_World_Program.png\" alt=\"Programming Languages\" width=\"513\" height=\"337\" \/><\/p>\n<div id=\"caption-attachment-59\" class=\"wp-caption-text\">Source code of a simple computer program:<br \/>\nThis code written in the C programming language will display<br \/>\nthe \u201cHello, world!\u201d message.<\/div>\n<\/div>\n<p class=\"\">First, computers seem so intelligent today because human software developers help one another teach computers. Apps that seem groundbreaking may simply include a lot of instructions. This is possible because developers have coded many, many algorithms, which they share and reuse on sites like <a href=\"http:\/\/www.github.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Github<\/a>. The more a developer is able to copy the basic steps others have already written for computers to follow, the more that developer can then focus on building new code that teaches computers new tricks. 
The most influential people, known as \u201ccreators\u201d or \u201cinventors\u201d in the tech world, may be better described as <a href=\"https:\/\/www.newyorker.com\/magazine\/2011\/11\/14\/the-tweaker\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">\u201ctweakers\u201d<\/a> who improved and added to other people\u2019s code for their \u201ccreations\u201d and \u201cinventions.\u201d<\/p>\n<p class=\"\">The second reason computers seem so smart today is that users are teaching them. Algorithms are increasingly designed to \u201clearn\u201d from human input. New algorithms automatically plug input into new programs, then automatically run those programs. This sequence of automated learning and application is called artificial intelligence (AI). AI essentially means teaching computers to teach themselves directly from their human users.<\/p>\n<p class=\"\">If only humans were always good teachers!<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom hidden\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\n<div class=\"section-view\">\n<div class=\"section-content\">\n<div class=\"section-content-view\">\n<div class=\"content-container\">\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Teaching Machines the Best and Worst About Ourselves<\/span><\/span><\/h2>\n<p class=\"\">In 2016, Microsoft introduced Tay, an AI online robot they branded as a young female. Their intention was for Tay to learn to communicate from internet users who conversed with her on Twitter\u2014and learn she did. 
Within a few hours, Tay\u2019s social media posts were so infected with violence, racism, sexism, and other bigotry that <a href=\"https:\/\/www.inverse.com\/article\/13387-microsoft-chinese-chatbot\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Microsoft had to take her down and apologize<\/a>.<\/p>\n<p class=\"\">Microsoft had previously launched <a href=\"https:\/\/www.msxiaobing.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Xiaoice<\/a>, an AI whose behaviour remained far less offensive than Tay\u2019s, on Chinese sites, including the microblog Weibo. However, the Chinese sites Xiaoice learned from were heavily censored. The English-language Twitter was far less censored and rife with trolls networked and ready to coordinate attacks. <a href=\"https:\/\/arstechnica.com\/information-technology\/2016\/03\/tay-the-neo-nazi-millennial-chatbot-gets-autopsied\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Developers and users who were paying attention already knew Twitter was full of hate.<\/a><\/p>\n<p class=\"\">Tay was an embarrassment for Microsoft in the eyes of many commentators. How could they not have predicted and protected her from bad human teachers? Why didn\u2019t Tay\u2019s human programmers teach her what not to say? 
The failure certainly involved a lack of research, since bots like <a href=\"https:\/\/twitter.com\/oliviataters?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">@oliviataters<\/a> have been more successful and even benefited from a shared <a href=\"http:\/\/tinysubversions.com\/2013\/09\/new-npm-package-for-bot-makers-wordfilter\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">list of banned words<\/a> that could easily be added to their algorithms.<\/p>\n<p class=\"\">In addition to these oversights, Tay\u2019s failure may also have been caused by a lack of diversity in Microsoft\u2019s programmers and team leaders.<\/p>\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Programming and Bias<\/span><\/span><\/h2>\n<div id=\"attachment_61\" class=\"wp-caption alignleft\" style=\"width: 539px\" aria-describedby=\"caption-attachment-61\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-60\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/Programming_language.png\" alt=\"\" width=\"539\" height=\"309\" \/><\/p>\n<div id=\"caption-attachment-61\" class=\"wp-caption-text\">Programming languages: Basic, C++, and Java are just a few of these. All translate human instructions into algorithms, which are instructions computers can understand.<\/div>\n<\/div>\n<p>Humans are at the heart of any computer program. Algorithms for computers to follow are all written in programming languages, which translate instructions from human language into the <a href=\"https:\/\/www.quora.com\/How-exactly-does-a-computer-program-work-How-do-lines-of-text-tell-a-box-of-wires-to-do-anything-I-thought-computers-were-based-on-0s-and-1s-How-does-it-translate\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">computing language of binary numerals, 0s and 1s<\/a>. 
Algorithms and programs are selective and reflect personal decision-making. There are usually different ways they could have been written.<\/p>\n<p>Programs in computer programming languages like Python, C++, and Java are written as source code. Writing programs, sometimes just called \u201ccoding,\u201d is an intermediary step between human language and the binary language that computers understand. Learning programming languages takes time and dedication. To learn to be a computer programmer, you either have to feel driven to teach yourself on your own equipment or you have to be taught to program\u2014and this is still not common in US schools.<\/p>\n<div class=\"wp-caption alignright\" style=\"width: 555px\" aria-describedby=\"caption-attachment-61\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-61\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/Tech_geek_7-21_1.png\" alt=\"A search for tech geek\" width=\"555\" height=\"241\" \/><\/p>\n<div class=\"wp-caption-text\">A Google search for \u201ctech geek\u201d:<br \/>\nThe many images of young white male \u201ctech geeks\u201d help<br \/>\nexplain why youth who are not white or male may feel out of<br \/>\nplace teaching themselves to code.<\/div>\n<\/div>\n<p>Because computer programmers are self-selected this way, and because many people think of the typical tech geek as white and male (as suggested by the Google Image search to the right), people who end up learning computer programming in the US are more likely to be white than any other race, and are more likely to identify as male than any other gender.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"section single-column-section large-content-spacing-top large-content-spacing-bottom visible\" data-section-behavior=\"crisp-single-column\" data-layer=\"0\" data-layer-name=\"over\">\n<div class=\"section-view\">\n<div class=\"section-content\">\n<div class=\"section-content-view\">\n<div 
class=\"content-container\">\n<h2 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">How Can Computers Carry Bias?<\/span><\/span><\/h2>\n<p class=\"\">Many people think computers and algorithms are neutral\u2014that racism and sexism are not programmers\u2019 problems. In the case of Tay\u2019s programmers, this false belief enabled more hate speech online and led to the embarrassment of their employer. Human-crafted computer programs mediate nearly everything humans do today, and human responses are involved in many of those tasks. Considering the near-infinite extent to which algorithms and their activities are replicated, the presence of human biases is a devastating threat to computer-dependent societies in general\u2014and to those targeted or harmed by those biases in particular.<\/p>\n<div id=\"attachment_62\" class=\"wp-caption alignleft\" style=\"width: 609px\" aria-describedby=\"caption-attachment-62\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-62\" src=\"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-content\/uploads\/sites\/2031\/2023\/07\/640px-A_Google_Glass_wearer.jpg\" alt=\"A white man wearing Google Glass\" width=\"609\" height=\"406\" \/><\/p>\n<div id=\"caption-attachment-62\" class=\"wp-caption-text\">Google Glass was considered by some to be an example of a poor decision by a homogeneous workforce.<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<div class=\"section-content-view\">\n<div class=\"content-container\">\n<p class=\"\">Problems like these are rampant in the tech industry because there is a damaging belief in US (and some other) societies that the development of computer technologies is antisocial, and that some kinds of people are better at it than others. 
As a result of this bias in tech industries and computing, there are not enough kinds of people working on tech development teams: not enough women, not enough people who are not white, not enough people who remember to think of children, not enough people who think socially, not enough people who prioritize user emotions.<\/p>\n<p class=\"\">Remember Google Glass? You may not; that product failed because few people wanted interaction with a computer to come between themselves and eye contact with humans and the world. The few who did tended to fit the definition of \u201ctech nerd,\u201d but their enthusiasm was not shared by the broader community of technology users. Critics labeled the unfortunate people who did purchase the product as \u201cglassholes.\u201d<\/p>\n<\/div>\n<p>&nbsp;<\/p>\n<div class=\"embedded-link-wrapper widescreen-aspect-ratio\">\n<div class=\"textbox shaded\">\n<h4>Code: Debugging the Gender Gap<\/h4>\n<p>Created in 2015, the film <a href=\"https:\/\/vimeo.com\/136884902\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Code: Debugging the Gender Gap<\/a> encapsulates many of the biases in the history of the computing industry, as well as their implications. Women have always been part of the US computing industry, and <a href=\"https:\/\/www.inc.com\/salvador-rodriguez\/why-tech-needs-immigrants.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">today that industry would collapse without engineers from diverse cultures.<\/a> Yet there is widespread evidence that women and racial minorities have always been made to feel that they did not belong in the industry. 
<a href=\"https:\/\/gigaom.com\/2014\/08\/21\/eight-charts-that-put-tech-companies-diversity-stats-into-perspective\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">And the numbers of engineers and others in tech development show<\/a>\u00a0a serious problem in Silicon Valley with racial and ethnic diversity, resulting in <a href=\"https:\/\/www.cnet.com\/news\/google-apologizes-for-algorithm-mistakenly-calling-black-people-gorillas\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">terrible tech decisions <\/a>that spread racial and ethnic bias under the guise of tech neutrality. Google has made some headway in achieving a more diverse workforce, but not without <a href=\"https:\/\/gizmodo.com\/exclusive-heres-the-full-10-page-anti-diversity-screed-1797564320\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">backlash founded on bad science.<\/a><\/p>\n<p>Below is the trailer for the film. The film is available through most <a href=\"https:\/\/search.ebscohost.com\/login.aspx?direct=true&amp;AuthType=ip,sso&amp;db=cat09549a&amp;AN=dcl.oai.edge.douglascollege.folio.ebsco.com.fs00001139.8750503b.e9b8.5131.9e7b.acc8dffa6f7c&amp;site=eds-live&amp;scope=site&amp;custid=s5672421\" target=\"_blank\" rel=\"noopener\">college libraries<\/a> and outlets that rent and sell feature films, and through <a href=\"https:\/\/www.finishlinefeaturefilms.com\/code\">Finish Line Features<\/a>.<\/p>\n<\/div>\n<\/div>\n<h1 class=\"\"><span class=\"heading-content-wrapper\"><span class=\"heading-text\">Exacerbating Bias in Algorithms: T<\/span><\/span><span class=\"heading-content-wrapper\"><span class=\"heading-text\">he Three &#8220;I&#8221;s<\/span><\/span><\/h1>\n<p class=\"\">In its early years, the internet was viewed as a utopia, an ideal world that would permit a completely free flow of all available information to everyone, equally. 
John Perry Barlow\u2019s 1996 <a href=\"https:\/\/vimeo.com\/111576518\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Declaration of the Independence of Cyberspace<\/a> represents this utopian vision, in which the internet liberates users from all biases and <a href=\"https:\/\/en.wikipedia.org\/wiki\/On_the_Internet,_nobody_knows_you%27re_a_dog\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">even from their own bodies<\/a> (at which human biases are so often directed). Barlow\u2019s utopian vision does not match the internet of today. Our social norms and inequalities accompany us across all the media and sites we use\u2014and are worsened in a climate where information value is determined by marketability and profit, as sociologist Zeynep Tufekci explains in <a href=\"https:\/\/embed.ted.com\/talks\/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads\">this TED Talk<\/a>.<\/p>\n<p>Because algorithms are built on human cooperation with computing programs, human selectivity and human flaws are embedded within algorithms. Humans carry biases, and algorithms pick those biases up and spread them to many, many others. They even make us more biased by hiding results that the algorithm calculates we may not like. When we get our news and information from social media, invisible algorithms consider our own biases and those of friends in our social networks to determine which new posts and stories to show us in search results and news feeds. The result for each user can be called their echo chamber or, as <a href=\"https:\/\/www.ted.com\/talks\/eli_pariser_beware_online_filter_bubbles?language=en\">author Eli Pariser<\/a> describes it, a <span class=\"glossary-term\">filter bubble<\/span>, in which we only see news and information we like and agree with, leading to political polarization.<\/p>\n<p>Although algorithms can generate very sophisticated recommendations, algorithms do <em>not<\/em> make sophisticated decisions. 
When humans make poor decisions, they can rely on themselves or on other humans to recognize and reverse the error; at the very least, a human decision-maker can be held responsible. Human decision-making often takes time and critical reflection to implement, such as the writing of an approved ordinance into law. When algorithms are used in place of human decision-making, I describe what ensues as <span class=\"glossary-term\">The Three &#8220;I&#8221;s<\/span>: algorithms\u2019 decisions become <em>invisible, irreversible, <\/em>and <em>infinite. <\/em>Most social media platforms and many organizations using algorithms will not share how their algorithms work; for this lack of transparency, they are known as <span class=\"glossary-term\">black box algorithms<\/span>.<\/p>\n<\/div>\n<div class=\"textbox shaded\">\n<h2>Exposing Invisible Algorithms: ProPublica<\/h2>\n<p>Journalists at ProPublica are educating the public on what algorithms can do by explaining and testing black box algorithms. This work is particularly valuable because most algorithmic bias is hard to detect for small groups or individual human users. Studies such as ProPublica\u2019s, presented in the \u201cBreaking the Black Box\u201d series (below), have been based on groups systematically testing algorithms from different machines, locations, and users. Using investigative journalism, ProPublica has also <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">found<\/a> that algorithms used by law enforcement are significantly more likely to label African Americans as &#8220;High Risk&#8221; for reoffending and white Americans as &#8220;Low Risk.&#8221;<\/p>\n<\/div>\n<\/div>\n<div class=\"section-content-view\">\n<h2>Fighting Unjust Algorithms<\/h2>\n<p>Algorithms are laden with errors. 
Some of these errors can be traced to the biases of those who developed them, as when a facial recognition system meant for global implementation is trained using only <a href=\"https:\/\/www.face-rec.org\/databases\">data sets<\/a> from a limited population (say, predominantly white or male). Algorithms can become problematic when they are hacked by groups of users, as Microsoft\u2019s Tay was. Algorithms are also grounded in the values of those who shape them; these values may reward some involved while disenfranchising others.<\/p>\n<p>Despite their flaws, algorithms are increasingly used in heavily consequential ways. They predict how likely a person is to commit a crime or default on a bank loan based on a given data set. They can target users with messages on social media that are customized to fit their interests, their voting preferences, or their fears. They can identify who is in photos online or in recordings of offline spaces.<\/p>\n<p>Confronting this landscape of increasing algorithmic control is activism to limit the control of algorithms over human lives. Below, read about the work of the Algorithmic Justice League and other activists promoting bans on facial recognition. And consider: what roles might algorithms play in your life that may deserve more attention, scrutiny, and even activism?<\/p>\n<div class=\"textbox shaded\">\n<h2>The Algorithmic Justice League Versus Facial Recognition Tech in Boston<\/h2>\n<p>MIT computer scientist and \u201cPoet of Code\u201d Joy Buolamwini heads the Algorithmic Justice League, an organization making remarkable headway in fighting facial recognition technologies, whose work she explains in the first video below. On June 9th, 2020, Buolamwini and other computer scientists presented alongside citizens at a Boston City Council meeting in support of a proposed ordinance banning facial recognition in public spaces in the city. 
Held and shared by live stream during Covid-19, footage of this meeting offers a remarkable look at the value of human advocacy in shaping the future of social technologies. The second video below should be cued to the beginning of Buolamwini\u2019s testimony, half an hour in. Boston\u2019s City Council subsequently <a href=\"https:\/\/www.npr.org\/sections\/live-updates-protests-for-racial-justice\/2020\/06\/24\/883107627\/boston-lawmakers-vote-to-ban-use-of-facial-recognition-technology-by-the-city\">voted unanimously<\/a> to ban the use of facial recognition technologies by the City.<\/p>\n<\/div>\n<div class=\"content-container\">\n<h1><span style=\"font-size: 1.424em\">Media Attributions<\/span><\/h1>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"media-attributions clear\">\n<ul>\n<li>OA_image-5fd085054d2f2 \u00a9 Omar Amanullah adapted by Emily Gammons is licensed under a <a href=\"https:\/\/creativecommons.org\/licenses\/by\/4.0\/\" rel=\"license\">CC BY (Attribution)<\/a> license<\/li>\n<li>Movie_algorithm.svg \u00a9 <a href=\"https:\/\/en.wikipedia.org\/wiki\/User:TheNewPhobia\" rel=\"dc:creator\">Jonathan<\/a> is licensed under a <a href=\"https:\/\/creativecommons.org\/publicdomain\/mark\/1.0\" rel=\"license\">Public Domain<\/a> license<\/li>\n<li>Source code of a simple computer program \u00a9 <a href=\"https:\/\/commons.wikimedia.org\/w\/index.php?title=User:Esquivalience&amp;action=edit&amp;redlink=1\" rel=\"dc:creator\">Esquivalience<\/a> is licensed under a <a href=\"https:\/\/creativecommons.org\/publicdomain\/zero\/1.0\" rel=\"license\">CC0 (Creative Commons Zero)<\/a> license<\/li>\n<li>Programming_language<\/li>\n<li>Tech_geek_7-21_1 is licensed under a <a href=\"https:\/\/creativecommons.org\/publicdomain\/mark\/1.0\" rel=\"license\">Public Domain<\/a> license<\/li>\n<li>640px-A_Google_Glass_wearer \u00a9 Lo\u00efc Le Meur is licensed under a <a 
href=\"https:\/\/creativecommons.org\/licenses\/by\/4.0\/\" rel=\"license\">CC BY (Attribution)<\/a> license<\/li>\n<li>Print<\/li>\n<\/ul>\n<h1>Attributions<\/h1>\n<p>This chapter was adapted from <a href=\"https:\/\/opentextbooks.library.arizona.edu\/hrsmwinter2022\/\" target=\"_blank\" rel=\"noopener\"><em>Humans R Social Media<\/em><\/a>\u00a0<span style=\"text-align: initial;font-size: 1em\">by Diana Daly, which is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"author":1660,"menu_order":21,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-112","chapter","type-chapter","status-publish","hentry"],"part":3,"_links":{"self":[{"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/pressbooks\/v2\/chapters\/112","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/wp\/v2\/users\/1660"}],"version-history":[{"count":5,"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/pressbooks\/v2\/chapters\/112\/revisions"}],"predecessor-version":[{"id":278,"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/pressbooks\/v2\/chapters\/112\/revisions\/278"}],"part":[{"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/pressbooks\/v2\/parts\/3"}],"metadata":[{"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/pressbooks\/v2\/chapters\/112\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/wp\/v2\/media?parent=112"}],"wp:term":[{"taxonomy":"chapt
er-type","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/pressbooks\/v2\/chapter-type?post=112"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/wp\/v2\/contributor?post=112"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/socialmedia\/wp-json\/wp\/v2\/license?post=112"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}