In an era when artificial intelligence is reshaping the workplace, a new kind of team member is quietly joining meetings, drafting reports, and tackling complex problems: Large Language Models (LLMs). As these AI-powered systems become ubiquitous in professional settings, they are changing not only how we work but also how we perceive and interact with our human colleagues. Crucially, this perception may be colored by the race of the LLM user, adding a layer of complexity to workplace dynamics.
This research project examines the dynamics of human-AI collaboration in the workplace, focusing on how knowledge of a colleague's LLM use shapes perceptions, expectations, and collaborative intentions. At the heart of our investigation is how these perceptions and expectations differ based on the race of the LLM user. We seek to understand whether and how racial biases manifest in attributions of AI use and in subsequent collaboration decisions, addressing a significant gap in our understanding of AI-augmented work environments.
Our work builds upon studies of LLMs' technical capabilities and their impact on productivity. We also draw from research on human-AI collaboration and the changing nature of work in AI-augmented environments. We extend this prior work by specifically examining the social and interpersonal implications of LLM use among potential colleagues, with a critical focus on the intersection of race and AI use. We expect our findings to contribute to theoretical understanding and inform practical strategies for fostering inclusive and effective collaboration in modern, AI-augmented workplaces. As organizations increasingly integrate AI tools, understanding these race-mediated dynamics becomes crucial for maintaining team cohesion, trust, and productivity in this new era of work.