{"id":25,"date":"2023-09-12T12:55:19","date_gmt":"2023-09-12T16:55:19","guid":{"rendered":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/chapter\/algorithms-and-your-data-digital-tattoo\/"},"modified":"2023-09-12T14:36:17","modified_gmt":"2023-09-12T18:36:17","slug":"algorithms-and-your-data","status":"publish","type":"chapter","link":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/chapter\/algorithms-and-your-data\/","title":{"raw":"Algorithms and Your Data","rendered":"Algorithms and Your Data"},"content":{"raw":"<div class=\"textbox\">This is an imported copy of <a href=\"https:\/\/digitaltattoo.ubc.ca\/tutorials\/privacy-and-surveillance\/data\/algorithms-and-your-data\/#\">Algorithms and Your Data - Digital Tattoo<\/a>.<\/div>\r\n<h1>Watch<\/h1>\r\nhttps:\/\/www.youtube.com\/watch?v=162VzSzzoPs\r\n\r\n<strong>Video credit:\u00a0<\/strong> The Coded Gaze: Unmasking Algorithmic Bias \u2013 posted by Joy Buolamwini on <a href=\"https:\/\/www.youtube.com\/channel\/UCSJCNSQvojH7h_mx30ag4zg\">YouTube<\/a>\r\n<h1><i class=\"icon-check\"><\/i> Think<\/h1>\r\n<ol>\r\n \t<li>What is an algorithm?\r\n<ol type=\"a\">\r\n \t<li>Frankly, it sounds very complicated and I have no idea.<\/li>\r\n \t<li>They are rules that help computers work.<\/li>\r\n \t<li>They are used in many technologies like computer software to help automate the decision-making process.<\/li>\r\n<\/ol>\r\n<\/li>\r\n \t<li>When do you encounter algorithms?\r\n<ol type=\"a\">\r\n \t<li>Wait, I encounter algorithms?<\/li>\r\n \t<li>I know they're used in advertising online.<\/li>\r\n \t<li>They\u2019re all over the place, both online and in the outside world too.<\/li>\r\n<\/ol>\r\n<\/li>\r\n \t<li>How do algorithms affect the information you access?\r\n<ol type=\"a\">\r\n \t<li>They shouldn\u2019t affect anything, right? 
Don\u2019t they just make websites run correctly?<\/li>\r\n \t<li>They use my data to do things like show me interesting ads or social media posts that I\u2019m likely to click on.<\/li>\r\n \t<li>They affect a lot of the content and information that I see, even my Google searches.<\/li>\r\n<\/ol>\r\n<\/li>\r\n \t<li>Can algorithms be biased?\r\n<ol type=\"a\">\r\n \t<li>Uh, no? Aren\u2019t they just math?<\/li>\r\n \t<li>I guess it\u2019s technically a \u201cbias\u201d to use your data for greater personalization, but it\u2019s harmless, right?<\/li>\r\n \t<li>Yes, and this can have some serious consequences.<\/li>\r\n<\/ol>\r\n<\/li>\r\n \t<li>When do you think algorithms should be used?\r\n<ol type=\"a\">\r\n \t<li>Never! If they\u2019re biased, we should ban them outright.<\/li>\r\n \t<li>I mean, there\u2019s so much information to go through these days. Surely, it\u2019s not bad to use them in things like initial job screenings, right?<\/li>\r\n \t<li>They are probably okay in low-stakes cases, but we should treat them with caution.<\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<h1><i class=\"icon-book\"><\/i> Explore<\/h1>\r\nWhen you hear terms like \u201calgorithm,\u201d \u201cmachine learning,\u201d and \u201cartificial intelligence (AI),\u201d they might seem intimidating and complicated. And while yes, the mathematics behind them can be rather complex, it is easiest to think of an algorithm as \u201cthe set of rules a machine (and especially a computer) follows to achieve a particular goal,\u201d of which artificial intelligence is the result (<a href=\"https:\/\/www.merriam-webster.com\/dictionary\/algorithm\">Merriam Webster<\/a>).\r\n\r\nAlgorithms exist all around us, both on the internet and in the physical world. 
They have a huge variety of uses, from gathering your data in order to populate your social media feed with posts that the algorithm thinks you will likely click on to more external uses such as facial recognition software for tasks like unlocking your phone.\r\n\r\nWhile algorithms do make the world around us faster and more convenient, it\u2019s important to remember that they \u2013 like all technology \u2013 are not just neutral sets of numbers working away in the background with no agenda. Algorithms might be rules, but they are rules created by people, each of whom has their own biases that get written into the programming that they create. These biases are not always intentional, but they can have a huge effect on the type of information that we access. And when algorithms are used to make complicated, human-based decisions such as job suitability or prison sentencing, the biases inherent in algorithmic programming can have the serious result of perpetuating existing inequalities.\r\n<h2>Popping the Filter Bubble<\/h2>\r\nOne important use of algorithms online is to take your data (including demographic information like your age, gender, sexual orientation, and more) and use it to place advertisements on your feed that you are more likely to click on. However, did you know that this same tactic is used for your search results on websites like Google and YouTube too? Depending on the data that Google gathers about you, <a href=\"https:\/\/arxiv.org\/abs\/1706.05011\">the links that you see in a Google search result will be different<\/a>.\r\n\r\nRemember, while Google is a search engine, it is also a company that wants to make as much profit as possible. The more you interact with the search results and the ads around them, the more money Google and its affiliate websites make. 
It makes sense from a profit standpoint to customize search results using algorithms fed by your data.\r\n\r\nHowever, this process of customizing search results means that you are likely to only see things online that you like and agree with. This makes it all too easy for objectivity to be lost, especially since people tend to use Google uncritically as an information source and not think very hard about its profit-driven nature. This particular type of personalization can lead to a phenomenon called \u201cfilter bubbles,\u201d which are <a href=\"https:\/\/digitaltattoo.ubc.ca\/2021\/02\/09\/guest-post-personalized-personal-lives-students-vs-filter-bubbles\/\">\u201cspheres of algorithmically imposed ignorance that mean we don\u2019t know how the content we\u2019re seeing might be biased to please us and protect us from information that challenges our views.\u201d<\/a>\u00a0The combination of uncritical acceptance of algorithmically influenced search results as fact and a lack of transparency from companies such as Google and Facebook about their practices is believed by some to be linked to issues such as <a href=\"https:\/\/www.vox.com\/recode\/21534345\/polarization-election-social-media-filter-bubble\">increasing partisanship and the spread of misinformation<\/a>.\r\n<h2>Algorithms and Bias: Facial Recognition and Beyond<\/h2>\r\nOnline filter bubbles are not the only places where the dangers of uncritical acceptance of algorithmically produced results can be seen. Algorithms are also commonly used in facial recognition software, which can be used for many tasks, from unlocking your smartphone to police surveillance. Joy Buolamwini, founder of the <a href=\"https:\/\/www.ajl.org\/about\">Algorithmic Justice League<\/a>, discovered while working on a project as an MIT student that the facial recognition software that she was using could recognize the faces of white people with high accuracy, but was shockingly bad at registering the presence of Black faces. 
As she discovered, many facial recognition algorithms are trained using data sets that mostly include white male faces, making them bad at identifying faces of BIPOC and women, with <a href=\"https:\/\/www.youtube.com\/watch?v=TWWsW1w-BVo\">Black women being accurately recognized at the lowest rates<\/a>. Given that facial recognition technologies are used in surveillance by police forces <a href=\"https:\/\/www.ctvnews.ca\/canada\/rcmp-admits-to-using-controversial-clearview-ai-facial-recognition-technology-1.4830939?cid=ps:localnewscampaign:searchad:ds:vancouvercrawl\">including the RCMP<\/a>, this lack of accuracy has the potential to lead to serious harms such as false identification and unjust detainment.\r\n\r\nAlgorithmically driven technologies can also be used for complex, human-centered tasks such as recruiting job candidates and assessing a person\u2019s likelihood of criminally reoffending. Given what we already know about algorithms, it will likely not surprise you that in both tasks, the algorithms produced biased results. A recent study found that when algorithms were used to target ads for job recruitment, <a href=\"https:\/\/hbr.org\/2019\/05\/all-the-ways-hiring-algorithms-can-introduce-bias\">\u201cbroadly targeted ads on Facebook for supermarket cashier positions were shown to an audience of 85% women, while jobs with taxi companies went to an audience that was approximately 75% Black,\u201d<\/a>\u00a0exacerbating previously existing stereotypes and creating knowledge barriers for entry into different types of work. In the criminal justice case, studies found that software used in several U.S. 
states to determine the likelihood of criminal re-offense was <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">\u201cparticularly likely to falsely flag Black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.\u201d<\/a>\u00a0While it\u2019s tempting to think about algorithms as neutral technologies that take away human errors like bias in complex cases such as these, algorithms have actually been shown to perpetuate existing biases.\r\n\r\n&nbsp;\r\n\r\n<img src=\"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-content\/uploads\/sites\/2075\/2023\/09\/ThinkB2.png\" alt=\"Think before you ink\" \/>\r\n\r\nSome legal challenges to unregulated algorithm use are now starting to come forward. In Canada in February of 2021, the Office of the Privacy Commissioner ruled that Clearview facial recognition technology, the same technology used by the RCMP, was <a href=\"https:\/\/www.securitymagazine.com\/articles\/94530-canadian-authorities-rule-clearview-facial-recognition-technology-illegal\">an illegal violation of Canadian citizens\u2019 privacy rights<\/a>. While <a href=\"https:\/\/www.cbc.ca\/news\/canada\/nova-scotia\/facial-recognition-police-privacy-laws-1.5452749\">there are no specific regulations<\/a> on how facial recognition technologies can be used by Canadian law enforcement, greater regulation is likely forthcoming. Similar <a href=\"https:\/\/www.govtrack.us\/congress\/bills\/116\/s2763\">legislation calling for transparency about filter bubbles<\/a> was introduced in the U.S. Congress in 2019, but the bill did not receive enough support to move forward.\r\n\r\nClearly, a more nuanced and critical conversation around algorithms and their uses is needed in the wider public. Fortunately, these conversations are beginning to happen in both Canada and in other parts of the world. 
Documentaries like <a href=\"https:\/\/www.codedbias.com\/\">Coded Bias<\/a> and <a href=\"https:\/\/digitaltattoo.ubc.ca\/2020\/12\/22\/the-social-dilemma-an-aftermath-of-change\/\">The Social Dilemma<\/a> are raising awareness of the biases that can be exacerbated by facial recognition technologies and social media filter bubbles. While algorithmic bias may be a daunting challenge to overcome, engaging with these issues, spreading awareness, and calling for political change by contacting your elected representatives and voting are vitally important ways in which you can actively encourage social and legal change on these issues.\r\n<h1><i class=\"icon-link\"><\/i> Links<\/h1>\r\n<h2>Algorithmic Bias<\/h2>\r\n<a href=\"https:\/\/www.ajl.org\/about\">The Algorithmic Justice League<\/a>\r\n\r\n<a href=\"https:\/\/www.youtube.com\/watch?v=TWWsW1w-BVo\">Gender Shades \u2502 Joy Buolamwini with MIT Media Lab<\/a> (2018)\r\n\r\n<a href=\"https:\/\/www.youtube.com\/watch?v=QxuyfWoVV98\">AI, Ain\u2019t I A Woman? 
\u2502 Joy Buolamwini<\/a> (2018)\r\n\r\n<a href=\"https:\/\/digitaltattoo.ubc.ca\/2017\/08\/23\/the-ethics-of-algorithms\/\">The Ethics of Algorithms \u2502 Margaux Smith with the Digital Tattoo Project<\/a> (2017)\r\n\r\n<a href=\"https:\/\/www.priv.gc.ca\/en\/opc-actions-and-decisions\/ar_index\/202021\/sr_rcmp\/#toc1\">Special Report to Parliament on RCMP\u2019s Use of Facial Recognition Technologies<\/a> (June 10, 2021)\r\n\r\n<a href=\"https:\/\/www.ctvnews.ca\/canada\/privacy-investigation-finds-5-million-shoppers-images-collected-at-malls-across-canada-1.5166162\">Privacy investigation finds 5 million shoppers\u2019 images collected at malls across Canada \u2502 CTV News<\/a> (2020)\r\n\r\n<a href=\"https:\/\/www.securitymagazine.com\/articles\/94530-canadian-authorities-rule-clearview-facial-recognition-technology-illegal\">Canadian authorities rule Clearview facial recognition technology illegal \u2502 Security Magazine<\/a> (2021)\r\n\r\n<a href=\"https:\/\/www.priv.gc.ca\/en\/opc-news\/news-and-announcements\/2021\/nr-c_210203\/?=february-2-2021\">Clearview AI\u2019s unlawful practices represented mass surveillance of Canadians, commissioners say \u2502 Office of the Privacy Commissioner of Canada<\/a> (2021)\r\n\r\n<a href=\"https:\/\/ccla.org\/clearview-ai-engaged-in-mass-surveillance\/\">Clearview AI engaged in \u201cmass surveillance\u201d \u2502 Canadian Civil Liberties Association<\/a> (2021)\r\n\r\n<a href=\"https:\/\/www.ctvnews.ca\/canada\/rcmp-admits-to-using-controversial-clearview-ai-facial-recognition-technology-1.4830939?cid=ps:localnewscampaign:searchad:ds:vancouvercrawl\">RCMP admits to using controversial Clearview AI facial recognition technology \u2502 CTV News<\/a> (2020)\r\n\r\n<a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">Machine Bias \u2502 Pro Publica<\/a> (2016)\r\n\r\n<a href=\"https:\/\/hbr.org\/2019\/05\/all-the-ways-hiring-algorithms-can-introduce-bias\">All the Ways 
Hiring Algorithms Can Introduce Bias \u2502 Harvard Business Review<\/a> (2019)\r\n\r\n<a href=\"https:\/\/sitn.hms.harvard.edu\/flash\/2020\/racial-discrimination-in-face-recognition-technology\/\">Racial Discrimination in Face Recognition Technology \u2502 Harvard University\u2019s Science in the News<\/a> (2020)\r\n<h2>Filter Bubbles<\/h2>\r\n<a href=\"https:\/\/www.ted.com\/talks\/eli_pariser_beware_online_filter_bubbles#t-524036\">Beware of Online Filter Bubbles \u2502 Eli Pariser<\/a> (2011)\r\n\r\n<a href=\"https:\/\/digitaltattoo.ubc.ca\/2021\/02\/09\/guest-post-personalized-personal-lives-students-vs-filter-bubbles\/\">Personalized Personal Lives: Students vs. Filter Bubbles \u2502 Joe Wright with the Digital Tattoo Project<\/a> (2020)\r\n\r\n<a href=\"https:\/\/www.prindleinstitute.org\/2019\/08\/youtube-and-the-filter-bubble\/\">YouTube and the Filter Bubble \u2502 The Prindle Post<\/a> (2019)\r\n<h1><i class=\"icon-comments-alt\"><\/i> Discuss<\/h1>\r\nAlgorithms are an essential component of many online services, yet they have the potential to be biased in ways that are extremely harmful to marginalized populations. 
What do you think can be done to combat algorithmic bias?","rendered":"<div class=\"textbox\">This is an imported copy of <a href=\"https:\/\/digitaltattoo.ubc.ca\/tutorials\/privacy-and-surveillance\/data\/algorithms-and-your-data\/#\">Algorithms and Your Data &#8211; Digital Tattoo<\/a>.<\/div>\n<h1>Watch<\/h1>\n<p><iframe loading=\"lazy\" id=\"oembed-1\" title=\"The Coded Gaze: Unmasking Algorithmic Bias\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/162VzSzzoPs?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p><strong>Video credit:\u00a0<\/strong> The Coded Gaze: Unmasking Algorithmic Bias \u2013 posted by Joy Buolamwini on <a href=\"https:\/\/www.youtube.com\/channel\/UCSJCNSQvojH7h_mx30ag4zg\">YouTube<\/a><\/p>\n<h1><i class=\"icon-check\"><\/i> Think<\/h1>\n<ol>\n<li>What is an algorithm?\n<ol type=\"a\">\n<li>Frankly, it sounds very complicated and I have no idea.<\/li>\n<li>They are rules that help computers work.<\/li>\n<li>They are used in many technologies like computer software to help automate the decision-making process.<\/li>\n<\/ol>\n<\/li>\n<li>When do you encounter algorithms?\n<ol type=\"a\">\n<li>Wait, I encounter algorithms?<\/li>\n<li>I know they&#8217;re used in advertising online.<\/li>\n<li>They\u2019re all over the place, both online and in the outside world too.<\/li>\n<\/ol>\n<\/li>\n<li>How do algorithms affect the information you access?\n<ol type=\"a\">\n<li>They shouldn\u2019t affect anything, right? Don\u2019t they just make websites run correctly?<\/li>\n<li>They use my data to do things like show me interesting ads or social media posts that I\u2019m likely to click on.<\/li>\n<li>They affect a lot of the content and information that I see, even my Google searches.<\/li>\n<\/ol>\n<\/li>\n<li>Can algorithms be biased?\n<ol type=\"a\">\n<li>Uh, no? 
Aren\u2019t they just math?<\/li>\n<li>I guess it\u2019s technically a \u201cbias\u201d to use your data for greater personalization, but it\u2019s harmless, right?<\/li>\n<li>Yes, and this can have some serious consequences.<\/li>\n<\/ol>\n<\/li>\n<li>When do you think algorithms should be used?\n<ol type=\"a\">\n<li>Never! If they\u2019re biased, we should ban them outright.<\/li>\n<li>I mean, there\u2019s so much information to go through these days. Surely, it\u2019s not bad to use them in things like initial job screenings, right?<\/li>\n<li>They are probably okay in low-stakes cases, but we should treat them with caution.<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<h1><i class=\"icon-book\"><\/i> Explore<\/h1>\n<p>When you hear terms like \u201calgorithm,\u201d \u201cmachine learning,\u201d and \u201cartificial intelligence (AI),\u201d they might seem intimidating and complicated. And while yes, the mathematics behind them can be rather complex, it is easiest to think of an algorithm as \u201cthe set of rules a machine (and especially a computer) follows to achieve a particular goal,\u201d of which artificial intelligence is the result (<a href=\"https:\/\/www.merriam-webster.com\/dictionary\/algorithm\">Merriam Webster<\/a>).<\/p>\n<p>Algorithms exist all around us, both on the internet and in the physical world. They have a huge variety of uses, from gathering your data in order to populate your social media feed with posts that the algorithm thinks you will likely click on to more external uses such as facial recognition software for tasks like unlocking your phone.<\/p>\n<p>While algorithms do make the world around us faster and more convenient, it\u2019s important to remember that they \u2013 like all technology \u2013 are not just neutral sets of numbers working away in the background with no agenda. Algorithms might be rules, but they are rules created by people, each of whom has their own biases that get written into the programming that they create. 
These biases are not always intentional, but they can have a huge effect on the type of information that we access. And when algorithms are used to make complicated, human-based decisions such as job suitability or prison sentencing, the biases inherent in algorithmic programming can have the serious result of perpetuating existing inequalities.<\/p>\n<h2>Popping the Filter Bubble<\/h2>\n<p>One important use of algorithms online is to take your data (including demographic information like your age, gender, sexual orientation, and more) and use it to place advertisements on your feed that you are more likely to click on. However, did you know that this same tactic is used for your search results on websites like Google and YouTube too? Depending on the data that Google gathers about you, <a href=\"https:\/\/arxiv.org\/abs\/1706.05011\">the links that you see in a Google search result will be different<\/a>.<\/p>\n<p>Remember, while Google is a search engine, it is also a company that wants to make as much profit as possible. The more you interact with the search results and the ads around them, the more money Google and its affiliate websites make. It makes sense from a profit standpoint to customize search results using algorithms fed by your data.<\/p>\n<p>However, this process of customizing search results means that you are likely to only see things online that you like and agree with. This makes it all too easy for objectivity to be lost, especially since people tend to use Google uncritically as an information source and not think very hard about its profit-driven nature. 
This particular type of personalization can lead to a phenomenon called \u201cfilter bubbles,\u201d which are <a href=\"https:\/\/digitaltattoo.ubc.ca\/2021\/02\/09\/guest-post-personalized-personal-lives-students-vs-filter-bubbles\/\">\u201cspheres of algorithmically imposed ignorance that mean we don\u2019t know how the content we\u2019re seeing might be biased to please us and protect us from information that challenges our views.\u201d<\/a>\u00a0The combination of uncritical acceptance of algorithmically influenced search results as fact and a lack of transparency from companies such as Google and Facebook about their practices is believed by some to be linked to issues such as <a href=\"https:\/\/www.vox.com\/recode\/21534345\/polarization-election-social-media-filter-bubble\">increasing partisanship and the spread of misinformation<\/a>.<\/p>\n<h2>Algorithms and Bias: Facial Recognition and Beyond<\/h2>\n<p>Online filter bubbles are not the only places where the dangers of uncritical acceptance of algorithmically produced results can be seen. Algorithms are also commonly used in facial recognition software, which can be used for many tasks, from unlocking your smartphone to police surveillance. Joy Buolamwini, founder of the <a href=\"https:\/\/www.ajl.org\/about\">Algorithmic Justice League<\/a>, discovered while working on a project as an MIT student that the facial recognition software that she was using could recognize the faces of white people with high accuracy, but was shockingly bad at registering the presence of Black faces. As she discovered, many facial recognition algorithms are trained using data sets that mostly include white male faces, making them bad at identifying faces of BIPOC and women, with <a href=\"https:\/\/www.youtube.com\/watch?v=TWWsW1w-BVo\">Black women being accurately recognized at the lowest rates<\/a>. 
Given that facial recognition technologies are used in surveillance by police forces <a href=\"https:\/\/www.ctvnews.ca\/canada\/rcmp-admits-to-using-controversial-clearview-ai-facial-recognition-technology-1.4830939?cid=ps:localnewscampaign:searchad:ds:vancouvercrawl\">including the RCMP<\/a>, this lack of accuracy has the potential to lead to serious harms such as false identification and unjust detainment.<\/p>\n<p>Algorithmically driven technologies can also be used for complex, human-centered tasks such as recruiting job candidates and assessing a person\u2019s likelihood of criminally reoffending. Given what we already know about algorithms, it will likely not surprise you that in both tasks, the algorithms produced biased results. A recent study found that when algorithms were used to target ads for job recruitment, <a href=\"https:\/\/hbr.org\/2019\/05\/all-the-ways-hiring-algorithms-can-introduce-bias\">\u201cbroadly targeted ads on Facebook for supermarket cashier positions were shown to an audience of 85% women, while jobs with taxi companies went to an audience that was approximately 75% Black,\u201d<\/a>\u00a0exacerbating previously existing stereotypes and creating knowledge barriers for entry into different types of work. In the criminal justice case, studies found that software used in several U.S. 
states to determine the likelihood of criminal re-offense was <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">\u201cparticularly likely to falsely flag Black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.\u201d<\/a>\u00a0While it\u2019s tempting to think about algorithms as neutral technologies that take away human errors like bias in complex cases such as these, algorithms have actually been shown to perpetuate existing biases.<\/p>\n<p>&nbsp;<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-content\/uploads\/sites\/2075\/2023\/09\/ThinkB2.png\" alt=\"Think before you ink\" \/><\/p>\n<p>Some legal challenges to unregulated algorithm use are now starting to come forward. In Canada in February of 2021, the Office of the Privacy Commissioner ruled that Clearview facial recognition technology, the same technology used by the RCMP, was <a href=\"https:\/\/www.securitymagazine.com\/articles\/94530-canadian-authorities-rule-clearview-facial-recognition-technology-illegal\">an illegal violation of Canadian citizens\u2019 privacy rights<\/a>. While <a href=\"https:\/\/www.cbc.ca\/news\/canada\/nova-scotia\/facial-recognition-police-privacy-laws-1.5452749\">there are no specific regulations<\/a> on how facial recognition technologies can be used by Canadian law enforcement, greater regulation is likely forthcoming. Similar <a href=\"https:\/\/www.govtrack.us\/congress\/bills\/116\/s2763\">legislation calling for transparency about filter bubbles<\/a> was introduced in the U.S. Congress in 2019, but the bill did not receive enough support to move forward.<\/p>\n<p>Clearly, a more nuanced and critical conversation around algorithms and their uses is needed in the wider public. Fortunately, these conversations are beginning to happen in both Canada and in other parts of the world. 
Documentaries like <a href=\"https:\/\/www.codedbias.com\/\">Coded Bias<\/a> and <a href=\"https:\/\/digitaltattoo.ubc.ca\/2020\/12\/22\/the-social-dilemma-an-aftermath-of-change\/\">The Social Dilemma<\/a> are raising awareness of the biases that can be exacerbated by facial recognition technologies and social media filter bubbles. While algorithmic bias may be a daunting challenge to overcome, engaging with these issues, spreading awareness, and calling for political change by contacting your elected representatives and voting are vitally important ways in which you can actively encourage social and legal change on these issues.<\/p>\n<h1><i class=\"icon-link\"><\/i> Links<\/h1>\n<h2>Algorithmic Bias<\/h2>\n<p><a href=\"https:\/\/www.ajl.org\/about\">The Algorithmic Justice League<\/a><\/p>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=TWWsW1w-BVo\">Gender Shades \u2502 Joy Buolamwini with MIT Media Lab<\/a> (2018)<\/p>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=QxuyfWoVV98\">AI, Ain\u2019t I A Woman? 
\u2502 Joy Buolamwini<\/a> (2018)<\/p>\n<p><a href=\"https:\/\/digitaltattoo.ubc.ca\/2017\/08\/23\/the-ethics-of-algorithms\/\">The Ethics of Algorithms \u2502 Margaux Smith with the Digital Tattoo Project<\/a> (2017)<\/p>\n<p><a href=\"https:\/\/www.priv.gc.ca\/en\/opc-actions-and-decisions\/ar_index\/202021\/sr_rcmp\/#toc1\">Special Report to Parliament on RCMP\u2019s Use of Facial Recognition Technologies<\/a> (June 10, 2021)<\/p>\n<p><a href=\"https:\/\/www.ctvnews.ca\/canada\/privacy-investigation-finds-5-million-shoppers-images-collected-at-malls-across-canada-1.5166162\">Privacy investigation finds 5 million shoppers\u2019 images collected at malls across Canada \u2502 CTV News<\/a> (2020)<\/p>\n<p><a href=\"https:\/\/www.securitymagazine.com\/articles\/94530-canadian-authorities-rule-clearview-facial-recognition-technology-illegal\">Canadian authorities rule Clearview facial recognition technology illegal \u2502 Security Magazine<\/a> (2021)<\/p>\n<p><a href=\"https:\/\/www.priv.gc.ca\/en\/opc-news\/news-and-announcements\/2021\/nr-c_210203\/?=february-2-2021\">Clearview AI\u2019s unlawful practices represented mass surveillance of Canadians, commissioners say \u2502 Office of the Privacy Commissioner of Canada<\/a> (2021)<\/p>\n<p><a href=\"https:\/\/ccla.org\/clearview-ai-engaged-in-mass-surveillance\/\">Clearview AI engaged in \u201cmass surveillance\u201d \u2502 Canadian Civil Liberties Association<\/a> (2021)<\/p>\n<p><a href=\"https:\/\/www.ctvnews.ca\/canada\/rcmp-admits-to-using-controversial-clearview-ai-facial-recognition-technology-1.4830939?cid=ps:localnewscampaign:searchad:ds:vancouvercrawl\">RCMP admits to using controversial Clearview AI facial recognition technology \u2502 CTV News<\/a> (2020)<\/p>\n<p><a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">Machine Bias \u2502 Pro Publica<\/a> (2016)<\/p>\n<p><a 
href=\"https:\/\/hbr.org\/2019\/05\/all-the-ways-hiring-algorithms-can-introduce-bias\">All the Ways Hiring Algorithms Can Introduce Bias \u2502 Harvard Business Review<\/a> (2019)<\/p>\n<p><a href=\"https:\/\/sitn.hms.harvard.edu\/flash\/2020\/racial-discrimination-in-face-recognition-technology\/\">Racial Discrimination in Face Recognition Technology \u2502 Harvard University\u2019s Science in the News<\/a> (2020)<\/p>\n<h2>Filter Bubbles<\/h2>\n<p><a href=\"https:\/\/www.ted.com\/talks\/eli_pariser_beware_online_filter_bubbles#t-524036\">Beware of Online Filter Bubbles \u2502 Eli Pariser<\/a> (2011)<\/p>\n<p><a href=\"https:\/\/digitaltattoo.ubc.ca\/2021\/02\/09\/guest-post-personalized-personal-lives-students-vs-filter-bubbles\/\">Personalized Personal Lives: Students vs. Filter Bubbles \u2502 Joe Wright with the Digital Tattoo Project<\/a> (2020)<\/p>\n<p><a href=\"https:\/\/www.prindleinstitute.org\/2019\/08\/youtube-and-the-filter-bubble\/\">YouTube and the Filter Bubble \u2502 The Prindle Post<\/a> (2019)<\/p>\n<h1><i class=\"icon-comments-alt\"><\/i> Discuss<\/h1>\n<p>Algorithms are an essential component of many online services, yet they have the potential to be biased in ways that are extremely harmful to marginalized populations. 
What do you think can be done to combat algorithmic bias?<\/p>\n","protected":false},"author":724,"menu_order":1,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-25","chapter","type-chapter","status-publish","hentry"],"part":3,"_links":{"self":[{"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/pressbooks\/v2\/chapters\/25","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/wp\/v2\/users\/724"}],"version-history":[{"count":4,"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/pressbooks\/v2\/chapters\/25\/revisions"}],"predecessor-version":[{"id":57,"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/pressbooks\/v2\/chapters\/25\/revisions\/57"}],"part":[{"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/pressbooks\/v2\/parts\/3"}],"metadata":[{"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/pressbooks\/v2\/chapters\/25\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/wp\/v2\/media?parent=25"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/pressbooks\/v2\/chapter-type?post=25"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/wp\/v2\/contributor?post=25"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/digitaltattooimport\/wp-json\/wp\/v2\/license?post=25"}],"curies":[{"name":"wp","href"
:"https:\/\/api.w.org\/{rel}","templated":true}]}}