How did you first become interested in algorithms and disability policy as a topic for your dissertation?
To be honest, there were two ideas I was thinking about, and I was really lost, wondering, ‘What now? Which idea should I be working on?’ And I had a conversation with my mom, who told me, “Do the one which you are more excited about, the one which, on the worst of your nights, can keep you feeling that you are doing something nice.”
Before joining Heller, I was already in disability policy and looking at how workplace and employment issues affect people with disabilities. As I was thinking about my dissertation, I was researching different use cases of artificial intelligence and algorithms in social contexts, and I wanted to look at putting these two issues together. It came from a deep curiosity: Are we paying enough attention to the relationship [between humans and algorithms]?
Tell me about your dissertation.
We know that organizations are using algorithms for all different kinds of decision-making, and some of the discussions we hear about algorithms include, “Are algorithms good or bad, or are they biased or unbiased?” There is a decent amount of understanding around why an algorithm would be biased: either somebody programmed it incorrectly (because of a poor understanding of our society, or in a way that is technically wrong), or the data the algorithm is using are biased.
In my research, I am looking at one of the social contexts where algorithms are being used: helping to make hiring decisions. I’m specifically looking at the relationship between algorithms and human decision-makers and how they work together when deciding hiring outcomes for people with disabilities. If you’re making a hiring decision with algorithmic aid, the algorithm looks at your past hiring decisions. In that scenario, if there has never been a person with a disability in your workplace, and if certain markers are correlated with disability, say, the kinds of schools people attend or the activities they take part in, then candidates with disabilities are less likely to get selected. For example, an algorithm might be selecting people based on their extracurricular activities, like playing team sports, because that’s assumed to be a good marker of leadership or a good indicator that you function well as part of a team. But a person with mobility limitations might not be on a football team, so where does that person land?
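To make that proxy-feature mechanism concrete, here is a minimal, purely hypothetical sketch (not taken from the dissertation itself): a toy screening rule trained on invented historical hiring decisions in which past hires skew toward team-sports players, so candidates without that marker score lower regardless of ability.

```python
# Hypothetical illustration: a screening rule built from past hiring
# decisions can learn to reward a proxy feature such as team sports,
# even though the feature says nothing about job performance.

# Toy historical data: (played_team_sports, was_hired)
# In this invented history, hires skew heavily toward team-sports players.
history = [
    (1, 1), (1, 1), (1, 1), (1, 0),
    (0, 0), (0, 0), (0, 0), (0, 1),
]

def hire_rate(records, played):
    """Fraction of past candidates with this feature value who were hired."""
    subset = [hired for sports, hired in records if sports == played]
    return sum(subset) / len(subset)

# A naive "algorithm" simply scores new candidates by the historical
# hire rate for their feature value.
score_with_sports = hire_rate(history, played=1)     # 0.75
score_without_sports = hire_rate(history, played=0)  # 0.25

print(f"Score if candidate played team sports: {score_with_sports:.2f}")
print(f"Score if candidate did not (e.g., due to mobility limitations): "
      f"{score_without_sports:.2f}")
# The gap reflects past practice, not ability: the rule reproduces
# whatever pattern is already in the historical decisions.
```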
What I am trying to investigate, especially in the context of inclusion of people with disabilities in workplaces and hiring, is a very specific piece of that question. We know these are issues with algorithms, but we also know that humans are one of the issues. Humans can be biased, so what happens when humans and algorithms interact when making these hiring decisions? If there is a biased algorithm and a biased human, or an unbiased algorithm and an unbiased human, or some combination, what happens in terms of decision-making?
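As a purely illustrative sketch of that two-by-two question (my own assumption about how the combinations might be simulated, not the study’s actual design), one could model a biased or unbiased algorithm feeding a recommendation to a biased or unbiased human and compare who ends up hired:

```python
import random

random.seed(0)

def algorithm_score(candidate, biased):
    """Recommendation score; a biased algorithm penalizes disability."""
    score = candidate["qualification"]
    if biased and candidate["has_disability"]:
        score -= 0.3  # hypothetical penalty learned from skewed history
    return score

def human_decision(candidate, algo_score, biased):
    """Human hires if the combined signal clears a threshold."""
    signal = 0.5 * candidate["qualification"] + 0.5 * algo_score
    if biased and candidate["has_disability"]:
        signal -= 0.2  # hypothetical human bias
    return signal >= 0.5

# Toy applicant pool: random qualifications, ~20% with a disability.
candidates = [
    {"qualification": random.uniform(0.3, 0.9),
     "has_disability": random.random() < 0.2}
    for _ in range(10_000)
]

for algo_biased in (False, True):
    for human_biased in (False, True):
        hired = [c for c in candidates
                 if human_decision(c, algorithm_score(c, algo_biased), human_biased)]
        with_disability = sum(c["has_disability"] for c in hired)
        print(f"algo biased={algo_biased!s:5} human biased={human_biased!s:5} "
              f"hired={len(hired):5d} with disability={with_disability}")
```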
What have you uncovered so far in your research?
There are some mixed results across studies. I’m trying to figure out whether there are theoretical reasons for those differences, or whether another set of studies is needed to understand the human-algorithm relationship better. One thing we are seeing is some indication of algorithmic appreciation, that is, people agreeing with algorithmic suggestions. In the case of hiring, people are more receptive to algorithmic advice than to human advice, but only when broader context is available to the human decision-makers.
How has Heller supported you in your research?
I received the Wyatt Jones Dissertation Grant, which helped because my research involves paying research participants. That’s material support, without which some of my work wouldn’t be possible. Outside of that, I’ve talked to many faculty members about my research idea and gained immensely from their experience and suggestions. There was never a time when I didn’t get a patient ear.
In developing your dissertation proposal, you talk about it with your mentors here at Brandeis. During my process, I spoke to Professor Jody Hoffer Gittell, who chairs my dissertation committee and is an amazingly supportive adviser; Professor Monika Mitra, who has been my mentor from day one and is now part of my dissertation committee; and Professor Dominic Hodgkin, who is also a member of my committee and whose courses helped me immensely. Talking with these and other wonderful scholars and exchanging ideas helped me develop my dissertation idea. You take your idea, break it up, bring it together through different avenues, and then you come up with the plan.