Last semester, I began a graduate program to study Technology, Cybersecurity, and Policy. I have also, like many other people, been digging deeper into learning about anti-racism since the summer of 2020. As my learning about these two topics coincided, it didn’t take me long to begin looking into the ways in which these topics intersect. I did research on the topic at the end of last semester and have since narrowed the research to focus on higher education for this post.

Photo by Surface on Unsplash: a person gesturing in front of a laptop with several others attending a video call.

In many industries in the United States, white people are disproportionately over-represented in positions of authority. This is often an outcome of historical racial bias, and it in turn perpetuates racial bias across society in leadership, policies, and access to future opportunities. Higher education is one such industry. It has been working to address bias and inequality since the summer of 2020, when the murder of George Floyd brought renewed urgency to racial inequality in the U.S. But beyond the biases within institutions themselves, institutions must also account for the bias embedded in the technology they use. Technology carries many biases, and without proper oversight, issues are not always addressed when they should be. It is therefore essential that institutions understand the many ways that technology and cybersecurity can fail their students, especially students who are Black, Latinx, Native American, Asian, and other students of color. To strive toward anti-racism in higher education, we must recognize the many forms that racism can take within cyberspace and address those problems wherever they occur.

Unequal Vulnerabilities

To begin with, many technologies offer a platform for race-based bullying or cybercrime. Harassment in cyberspace is more scalable and less tied to location than harassment that takes place in person. A quarter of Black Americans say they have faced race-based harassment online, and women of color face disproportionately more harassment on Twitter than other groups. A wide range of technologies and social media platforms can be used for racial harassment, and technologies can also be hacked to cause further harassment or even facilitate hate crimes. One example is "Zoom bombing," which spiked in frequency in the spring of 2020 before Zoom strengthened its security practices. These attacks often had a racial element and frequently used anti-Black and anti-Asian language (the latter resulting in part from racist ideas about the origins of the pandemic). Any communication technology can be hacked and then used in a race-based attack. Another example is the exploited security vulnerabilities in Amazon's Ring camera and microphone, which hackers have used as a platform for racist harassment.

The examples above are preventable because they stem from people abusing technology rather than from biases embedded in the technology itself. Still, it is important to recognize that any technology students use could be susceptible to such race-based attacks or harassment. Educators should therefore ensure, whenever possible, that students are not experiencing such harassment, especially within the tools an institution uses for learning and communication.

Bias in Technology

The next examples of systemic racism in cyberspace involve bias embedded within technology itself. These can be just as harmful as the examples above, but they are often less obvious and harder to address because the technologies themselves, rather than individual people, are perpetuating the problems.

Photo by heylagostechie on Unsplash: a person using a laptop.

Systemic racism's impacts on cyberspace can be subtle. Algorithms, facial recognition, and surveillance all have the potential to harm students, especially students who are Black, Latinx, Native American, or other people of color. Yet each of these technologies is marketed as a positive:

  • algorithms are supposed to help filter data quickly and make predictions to find solutions faster;
  • facial recognition allows individuals to use their faces as biometric data to do things like unlock their phones or make payments; and
  • surveillance, although the word often carries a negative connotation, is supposed to be a tool for promoting safety or protection.

However, just as in many aspects of society outside of cyberspace, even tools promoted as helpful or positive can carry embedded bias that furthers inequality and undermines people's safety and access to education. Algorithmic bias is often an unintended consequence that arises when the historical data used to train an algorithm is biased, or when the code itself is written from a biased point of view. Algorithmic failings can widen racial divides in a range of circumstances, from determining who needs increased medical care to predicting whether someone who has previously been arrested will reoffend. Meanwhile, bias in facial recognition, far from providing the convenience of unlocking a smartphone, can lead to misidentification in arrests: the technology misrecognizes people of color more often than white people, women more often than men, and children and the elderly more often than all ages in between. Lastly, surveillance can be combined with facial recognition to monitor people's movements, or it can be used online to monitor the search terms, activities, and communications of specific people.
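To make that mechanism concrete, here is a minimal, hypothetical sketch (the group names, data, and "risk score" model are invented purely for illustration) of how a system trained on biased historical decisions simply reproduces that bias in its predictions, even though the code never references race directly:

```python
# Hypothetical illustration of algorithmic bias (invented data).
# A naive "risk score" model is fit to historical decisions. If past
# reviewers over-flagged one neighborhood, the model keeps over-flagging
# it -- without ever seeing a "race" field at all.

from collections import defaultdict

# Invented historical records: (neighborhood, was_flagged_by_past_reviewers).
# Neighborhood acts here as a proxy variable that can correlate with race.
history = [
    ("north_side", 1), ("north_side", 1), ("north_side", 1), ("north_side", 0),
    ("south_side", 0), ("south_side", 0), ("south_side", 1), ("south_side", 0),
]

# "Training": the model memorizes the historical flag rate for each group.
totals = defaultdict(lambda: [0, 0])  # group -> [flags, count]
for group, flagged in history:
    totals[group][0] += flagged
    totals[group][1] += 1

def risk_score(group: str) -> float:
    """Predicted risk = the historical flag rate for that group."""
    flags, count = totals[group]
    return flags / count

# The model presents past reviewers' bias as if it were an objective fact.
print(risk_score("north_side"))  # 0.75 -- inherited from biased history
print(risk_score("south_side"))  # 0.25
```

Real systems are far more complex, but the underlying dynamic is the same: biased inputs produce biased outputs, and the numerical output can lend those outputs an unearned air of objectivity.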

Unequal Access

Finally, another major issue from the physical world that has crept into cyberspace is access, which contributes to the digital divide. Formerly redlined neighborhoods, rural areas, and a significant portion of Tribal Lands lack high-quality access to the internet. The digital divide is not strictly race-based; white people in rural areas and other poorly connected places face their own access problems. Historical racism, however, has a significant impact on where connectivity issues exist.

This divide subsequently contributes to poor access to education, information, employment, telehealth, and more. The limited number of internet providers in the U.S. creates little competition, which can keep rates high because there is nothing to drive them down. When rates are too high, poverty-impacted people must forgo digital access and all the privileges that come with it. Additionally, in some cities, internet providers have replicated the same patterns that redlining created. Because formerly redlined neighborhoods now lack the accumulated wealth of neighborhoods that were never redlined, residents may be unable to pay as much for broadband. As a result, providers do not extend high-speed fiber networks to those neighborhoods, citing apparent low demand (though a lack of available funds is the more likely reason), perpetuating historical inequalities into the present day.

While access to high-speed broadband remains unequal, the racial gap in smartphone ownership is minimal. A 2019 study from the Pew Research Center shows that while Black and Hispanic adults have less access to computers and home internet than white adults, they own smartphones at a similar rate. Because smartphones can use cellular data or connect to public Wi-Fi, they can serve as an alternative way to get online for people who do not have home computers or high-speed internet at home. However, smartphones cannot fill all the functions of a computer connected directly to the internet. Although a large portion of the web is now 'mobile first,' many parts of it remain considerably less accessible on a phone than on a computer. For example, infographics, tables of data, online forms, and many PDFs can be difficult to navigate from a smartphone. Unfortunately, such digital forms are often needed when applying for jobs or schools, filling in financial information, or accessing health services. Furthermore, public Wi-Fi can expose users to vulnerabilities if the network is insecure, which is the case for many free and public Wi-Fi networks.

Current Events and Data Insecurity

Just this semester, the school that I'm attending has been in the news for a major hack of a third-party system that compromised over 300,000 student records. I recently received one of several communications from the university describing which records may have been affected. The overview includes items like veteran status, visa status, disability status, medical information, and occasionally financial information. While I am not feeling great about a breach that could have exposed some of my own data, I have also been considering the ways this hack might disproportionately impact students who already experience marginalization. Because of societal inequality, recovery from the hack could be unequal as well, with international students, students with disabilities or medical issues, and impoverished students facing heavier fallout from their data being exposed online.

Looking Forward – Consider the Impacts of the Tools We Use

Photo by Surface on Unsplash: a woman working on a laptop.

Although the tech industry, along with public policy, bears much of the responsibility for addressing racism in cyberspace, higher education should be cautious about how it uses technology because of the potential impacts on students. Racial bias is not limited to what exists within institutions themselves; the tools they use carry their own bias. I recommend the following as we all try to address these issues:

  1. Use Technology Intentionally – Education practitioners should use technology intentionally and understand the potential issues that their students could face in using different tools.
  2. Continue Anti-racism Efforts – Institutions should continue making their own efforts toward anti-racism by modifying their hiring practices, educating their students and staff about racial inequality, and properly compensating those who work on anti-racism projects.
  3. Consider Potential Bias in Tech Tools – When adopting new technology (anything from an LMS with features that are difficult to use on mobile to a tool that relies on algorithms, and anything in between), administrators should consider potential bias issues the same way they consider potential accessibility issues. It is essential that schools understand the bias within the systems they use, even when those systems are provided by an outside vendor.

Further Learning

Although the topic of cyber racism is still relatively niche, it is expanding. I recently attended a virtual viewing of the film Coded Bias, which examines the impacts of algorithmic bias and facial recognition on society. I am also making my way through a growing list of books that tackle a range of topics within cyber racism.

Check out the following to learn more, and email us if you can think of more that I didn’t include:


Rosa Calabrese

Senior Manager, Digital Design, WCET


303-541-0219

rcalabrese@wiche.edu
