Racial Justice x Technology Policy

Tech Policy Resources

[Illustration: three people of color, two women and one man, sitting around a large webpage with “Tech Policy Resources” in the search bar]

In the spirit of sharing knowledge and developing a community of tech policy learners and advocates, we hope these resources provide an exciting foundation from which to learn. This is not an exhaustive list, but these organizations and institutions have contributed good research and implemented meaningful programs. Browse the resources below or download a PDF.


Read the RJxTP Year One Report

Facing the Tech Giants, D. Raji 

Raji, D. (2022). Facing the tech giants. In A. G. Opoku-Agyeman (Ed.), The Black Agenda: Bold Solutions for a Broken System (pp. 127-128). New York: St. Martin’s Press.

Organizations Mentioned

  • Investigative journalists from ProPublica
  • Investigative journalists from The Markup
  • Federal Trade Commission (FTC)
  • National Institute of Standards and Technology (NIST)
  • Government Accountability Office (GAO)
  • Algorithmic Justice League
  • Data for Black Lives
  • Foxglove
  • American Civil Liberties Union
  • Fight for the Future

Works Cited

“ACLU of Michigan Complaint Re Use of Facial Recognition.” ACLU.

Arnold, David, Will S. Dobbie, and Peter Hull. “Measuring Racial Discrimination in Algorithms.” National Bureau of Economic Research, December 2020.

Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica, May 23, 2016.

Bass, Dina. “Amazon Schooled on AI Facial Technology by Turning Award Winner.” Bloomberg, April 3, 2019.

Boutin, Chad. “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software.” NIST, December 19, 2019.

“Community Control Over Police Surveillance (CCOPS) Model Bill.” ACLU. Updated April 2021.

Cook, C. M., J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. Vemury. “Demographic Effects in Facial Recognition and Their Dependence on Image Acquisition: An Evaluation of Eleven Commercial Systems.” IEEE Transactions on Biometrics, Behavior, and Identity Science 1, no. 1 (2019): 32-41.

Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Reuters, October 10, 2018. 

Guliani, Neema Singh. “Amazon Met with ICE Officials to Market Its Facial Recognition Product.” ACLU, October 24, 2018.

Hao, Karen. “We Read the Paper That Forced Timnit Gebru Out of Google. Here’s What It Says.” MIT Technology Review, December 4, 2020.

Hill, Kashmir. “Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match.” New York Times, December 29, 2020.

Hill, Kashmir. “Wrongfully Accused by an Algorithm.” New York Times, June 24, 2020.

“IBM Response to ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.’” Gender Shades. http://gendershades.org/docs/ibm.pdf.

Johnson, Carolyn Y. “Racial Bias in a Medical Algorithm Favors White Patients over Sicker Black Patients.” Washington Post, October 24, 2019.

Kirchner, Lauren. “The Obscure Yet Powerful Tenant-Screening Industry Is Finally Getting Some Scrutiny.” The Markup, January 11, 2021.

Laperruque, Jake. “About-Face: Examining Amazon’s Shifting Story on Facial Recognition Accuracy.” POGO, April 10, 2019.

McShane, Julianne. “‘60 Minutes’ Ran an Episode About Algorithm Bias. Only White Experts Were Given Airtime.”

“NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon (FAI).” National Science Foundation.

Raji, Inioluwa Deborah, and Joy Buolamwini. “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.” Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, January 2019: 429-435.

Reuters Staff. “U.S. SEC Blocks Amazon Effort to Stop Shareholder Votes on Racial Equity Audit.” Reuters.

Roach, John. “Microsoft Improves Facial Recognition Technology to Perform Well Across All Skin Tones, Genders.” AI Blog, June 26, 2018. 

Romano, Benjamin. “Amazon’s Role in AI Fairness Research Raises Eyebrows,” Government Technology, April 1, 2019.

“Sandvig v. Barr: Challenge to CFAA Prohibition on Uncovering Racial Discrimination Online.” ACLU. Updated May 22, 2019.

Shead, Sam. “TikTok Apologizes After Being Accused of Censoring #BlackLivesMatter Posts.” CNBC, June 2, 2020.

Sherwin, Galen, and Esha Bhandari. “Facebook Settles Civil Rights Cases by Making Sweeping Changes to Its Online Ad Platform.” ACLU, March 19, 2019.

Simonite, Tom. “Congress Is Eyeing Face Recognition, and Companies Want a Say.” Wired, November 23, 2020.

Wood, Matt. “Thoughts on Recent Research Paper and Associated Article on Amazon Rekognition.” Amazon Web Services, January 26, 2019.

Yin, Leon, and Aaron Sankin. “Google Blocks Advertisers from Targeting Black Lives Matter YouTube Videos.” The Markup, April 9, 2021.

Zhang, Maggie. “Google Photos Tags Two African-Americans as Gorillas Through Facial Recognition Software.” Forbes, July 1, 2015.

Algorithmic Assault, B. Marshall

Marshall, B. (2022). Algorithmic assault. In A. G. Opoku-Agyeman (Ed.), The Black Agenda: Bold Solutions for a Broken System (pp. 139-143). New York: St. Martin’s Press.

Works Cited

General, John, and Jon Sarlin. “A False Facial Recognition Match Sent This Innocent Black Man to Jail.” CNN, April 29, 2021.

Hill, Kashmir. “Wrongfully Accused by an Algorithm.” New York Times, June 24, 2020.

Johnson, Khari. “Microsoft Researchers Say NLP Bias Studies Must Consider Role of Social Hierarchies Like Racism.” VentureBeat, June 1, 2020.

Juarez, Jeffrey A., and Kyle D. Brown. “Extracting or Empowering? A Critique of Participatory Methods for Marginalized Populations.” Landscape Journal 27, no. 2 (2008): 190-204.

We’re Talking About AI Wrong, J. Harrod

Harrod, J. (2022). We’re talking about AI wrong. In A. G. Opoku-Agyeman (Ed.), The Black Agenda: Bold Solutions for a Broken System (pp. 144-151). New York: St. Martin’s Press.

Organizations Mentioned

  • Neural Information Processing Systems Conference

Works Cited

Blodgett, S. L., S. Barocas, H. Daumé III, and H. Wallach. “Language (Technology) Is Power: A Critical Survey of Bias in NLP.” Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 5454-5476.

D’Amour, A., K. Heller, D. Moldovan, B. Adlam, B. Alipanahi, A. Beutel, C. Chen, et al. “Underspecification Presents Challenges for Credibility in Modern Machine Learning.” 2020. ArXiv:2011.03395 [Cs, Stat].

Fast, E., and E. Horvitz. “Long-Term Trends in the Public Perception of Artificial Intelligence.” 2016. ArXiv:1609.04904 [Cs].

Garfin, D. R., R. C. Silver, and E. A. Holman. “The Novel Coronavirus (COVID-2019) Outbreak: Amplification of Public Health Consequences by Media Exposure.” Health Psychology 39, no. 5 (2020): 355-357.

Glaser, A., and C. Adams. “Google Advised Mental Health Care When Workers Complained About Racism and Sexism.” NBC News, March 7, 2021.

Johnson, D. G., and M. Verdicchio. “Reframing AI Discourse.” Minds and Machines 27, no. 4 (2017): 575-590.

Kasy, M., and R. Abebe. “Fairness, Equality, and Power in Algorithmic Decision-Making.” Working paper, October 8, 2020.

Lipton, Z. C., and J. Steinhardt. “Troubling Trends in Machine Learning Scholarship: Some ML Papers Suffer from Flaws That Could Mislead the Public and Stymie Future Research.” Queue 17, no. 1 (2019): 45-77.

Maurer, M., J. C. Gerdes, B. Lenz, and H. Winner, eds. “Chapter 32: Consumer Perceptions of Automated Driving Technologies: An Examination of Use Cases and Branding Strategies.” In Autonomous Driving. Berlin: Springer Berlin Heidelberg, 2016.

On-Road Automated Driving (ORAD) Committee. “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles.” SAE International, 2018.

Raji, Inioluwa Deborah, and Joy Buolamwini. “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.” Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’19), 2019: 429-435.

Raji, I. D., M. K. Scheuerman, and R. Amironesei. “You Can’t Sit With Us: Exclusionary Pedagogy in AI Ethics Education.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021: 515-525.

Schiffer, Z. “Timnit Gebru Was Fired from Google - Then the Harassers Arrived.” The Verge, March 5, 2021.

Verma, S., and J. Rubin. “Fairness Definitions Explained.” Proceedings of the International Workshop on Software Fairness, 2018: 1-7.

Additional Sources

Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2017). Fairness in Criminal Justice Risk Assessments: The State of the Art (arXiv:1703.09207). arXiv.

Cockerill, R. G. (2020). Ethics Implications of the Use of Artificial Intelligence in Violence Risk Assessment. The Journal of the American Academy of Psychiatry and the Law, 48(3), 5.

Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness.

Hogan, N. R., Davidge, E. Q., & Corabian, G. (2021). On the Ethics and Practicalities of Artificial Intelligence, Risk Assessment, and Race. The Journal of the American Academy of Psychiatry and the Law, 49(3), 9.

Montoya, I. L. (2020). Enabling excellence and racial justice in universities by addressing structural obstacles to work by and with people from racially minoritized communities: Response to Charity Hudley et al. Language, 96(4), e236–e246. https://doi.org/10.1353/lan.2020.0075

Oswick, C., & Noon, M. (2014). Discourses of Diversity, Equality and Inclusion: Trenchant Formulations or Transient Fashions? British Journal of Management, 25(1), 23–39.

Valentine, M. (2021). Tech Companies and Social Justice: The Pandemic Race for Diversity, Access and Inclusion Moves Online. International Journal of Business and Management Research, 9(4), 401–414.

Additional Organizations

Note: These organizations work on AI/tech ethics, diversity and inclusion in tech, or AI/data policy. Several organizations that do not primarily focus on these areas, but are pursuing or funding related projects, have also been included, as have some organizations doing more general work at the intersection of technology and human rights. Some of the major AI research organizations are also listed. This list was originally compiled a year or two ago, so links should be double-checked to ensure they are still live.