Publications

Overworking in HCI: A Reflection on Why We Are Burned Out, Stressed, and Out of Control; and What We Can Do About It

Published in CHI Conference on Human Factors in Computing Systems, 2024

In this alt.chi submission, we explore overwork in academic Human-Computer Interaction (HCI) research. We first ask why we overwork: a combination of external pressures, including cutthroat publication-centric competition, lack of recognition for invisible research labor facilitated by technologies that promote overwork and further hide the labor behind research, and institutionalized overwork norms reified through toxic advising practices; along with internal pressures, including information opacity and precarious employment as tools for self-exploitation, intense personal and emotional investment in research, and our relational commitments to each other. We explore overwork’s detrimental consequences for individual researchers, the relationships between them, and research integrity. Our analysis of overwork in academia underscores the urgent need to halt our overwork norms and pivot toward reasonable, responsible, and health-conscious work practices—before we burn to a crisp in the name of more publications.

Recommended citation: Abraham Mhaidli* and Kat Roemmich*. 2024. Overworking in HCI: A Reflection on Why We Are Burned Out, Stressed, and Out of Control; and What We Can Do About It. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '24), May 11–16, 2024, Honolulu, HI, USA. 10 pages. https://doi.org/10.1145/3613905.3644052 *Co-first authors contributed equally.

Emotion AI Use in U.S. Mental Healthcare: Potentially Unjust and Techno-Solutionist

Published in Proceedings of the ACM on Human-Computer Interaction, 2024

Emotion AI, or AI that claims to infer emotional states from various data sources, is increasingly deployed in myriad contexts, including mental healthcare. While emotion AI is celebrated for its potential to improve care and diagnosis, we know little about the perceptions of data subjects most directly impacted by its integration into mental healthcare. In this paper, we qualitatively analyzed U.S. adults’ open-ended survey responses (n = 395) to examine their perceptions of emotion AI use in mental healthcare and its potential impacts on them as data subjects. We identify various perceived impacts of emotion AI use in mental healthcare concerning 1) mental healthcare provisions; 2) data subjects’ voices; 3) monitoring data subjects for potential harm; and 4) involved parties’ understandings and uses of mental health inferences. Participants’ remarks highlight ways emotion AI could address existing challenges data subjects may face by 1) improving mental healthcare assessments, diagnoses, and treatments; 2) facilitating data subjects’ mental health information disclosures; 3) identifying potential data subject self-harm or harm posed to others; and 4) increasing involved parties’ understanding of mental health. However, participants also described their perceptions of potential negative impacts of emotion AI use on data subjects such as 1) increasing inaccurate and biased assessments, diagnoses, and treatments; 2) reducing or removing data subjects’ voices and interactions with providers in mental healthcare processes; 3) inaccurately identifying potential data subject self-harm or harm posed to others with negative implications for wellbeing; and 4) involved parties misusing emotion AI inferences with consequences to (quality) mental healthcare access and data subjects’ privacy. We discuss how our findings suggest that emotion AI use in mental healthcare is an insufficient techno-solution that may exacerbate various mental healthcare challenges with implications for potential distributive, procedural, and interactional injustices and potentially disparate impacts on marginalized groups.

Recommended citation: Kat Roemmich, Shanley Corvite, Cassidy Pyle, Nadia Karizat, and Nazanin Andalibi. 2024. Emotion AI Use in U.S. Mental Healthcare: Potentially Unjust and Techno-Solutionist. Proc. ACM Hum.-Comput. Interact. 8, CSCW1, Article 47 (April 2024), 46 pages. https://doi.org/10.1145/3637324

Values in Emotion Artificial Intelligence Hiring Services: Technosolutions to Organizational Problems

Published in Proceedings of the ACM on Human-Computer Interaction, 2023

Despite debates about emotion artificial intelligence’s (EAI) validity, legality, and social consequences, EAI is increasingly present in the high-stakes context of hiring, with potential to shape the future of work and the workforce. The values laden in technology play a significant role in its societal impact. We conducted qualitative content analysis on the public-facing websites (N=229) of EAI hiring services. We identify the organizational problems that EAI hiring services claim to solve and reveal the values emerging in desired EAI uses as promoted by EAI hiring services to solve organizational problems. Our findings show that EAI hiring services market their technologies as technosolutions to three purported organizational hiring problems: 1) hiring (in)accuracy, 2) hiring (mis)fit, and 3) hiring (in)authenticity. We unpack these problems to expose how these desired uses of EAI are legitimized by the corporate ideals of data-driven decision making, continuous improvement, precision, loyalty, and stability. We identify the unfair and deceptive mechanisms by which EAI hiring services claim to solve the purported organizational hiring problems, suggesting that they unfairly exclude and exploit job candidates through EAI’s creation, extraction, and affective commodification of a candidate’s affective value through pseudoscientific approaches. Lastly, we interrogate EAI hiring service claims to reveal the core values that underpin their stated desired use: techno-omnipresence, techno-omnipotence, and techno-omniscience. We show how EAI hiring services position desired use of their technology as a moral imperative for hiring organizations with supreme capabilities to solve organizational hiring problems, then discuss implications for fairness, ethics, and policy in EAI-enabled hiring within the US policy landscape.

Recommended citation: Kat Roemmich, Tillie Rosenberg, Serena Fan, and Nazanin Andalibi. 2023. Values in Emotion Artificial Intelligence Hiring Services: Technosolutions to Organizational Problems. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 109 (April 2023), 28 pages. https://doi.org/10.1145/3579543

Emotion AI at Work: Implications for Workplace Surveillance, Emotional Labor, and Emotional Privacy

Published in ACM Conference on Human Factors in Computing Systems, 2023

Workplaces are increasingly adopting emotion AI, promising benefits to organizations. However, little is known about the perceptions and experiences of workers subject to emotion AI in the workplace. Our interview study with US adult workers (n=15) addresses this gap, finding that (1) participants viewed emotion AI as a deep violation of the privacy of workers’ sensitive emotional information; (2) emotion AI may function to enforce workers’ compliance with emotional labor expectations, and workers may engage in emotional labor as a mechanism to preserve privacy over their emotions; and (3) workers may be exposed to a wide range of harms as a consequence of emotion AI in the workplace. Findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, and raise important research and policy questions on how to protect and preserve emotional privacy within and beyond the workplace.

Recommended citation: Kat Roemmich, Florian Schaub, and Nazanin Andalibi. 2023. Emotion AI at Work: Implications for Workplace Surveillance, Emotional Labor, and Emotional Privacy. In CHI ’23: ACM Conference on Human Factors in Computing Systems, April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 20 pages. https://doi.org/10.1145/3544548.3580950

Data Subjects’ Perspectives on Emotion Artificial Intelligence Use in the Workplace: A Relational Ethics Lens

Published in Proceedings of the ACM on Human-Computer Interaction, 2023

The workplace has experienced extensive digital transformation, in part due to artificial intelligence’s commercial availability. Though still an emerging technology, emotional artificial intelligence (EAI) is increasingly incorporated into enterprise systems to augment and automate organizational decisions and to monitor and manage workers. EAI use is often celebrated for its potential to improve workers’ wellbeing and performance and to address organizational problems such as bias and safety. Workers subject to EAI in the workplace are data subjects whose data make EAI possible and who are most impacted by it. However, we lack empirical knowledge about data subjects’ perspectives on EAI, including in the workplace. To this end, using a relational ethics lens, we qualitatively analyzed open-ended survey responses from 395 U.S. adults (a partly representative sample) regarding the perceived benefits and risks they associate with being subjected to EAI in the workplace. While participants acknowledged potential benefits of being subject to EAI (e.g., employers using EAI to aid their wellbeing, enhance their work environment, or reduce bias), a myriad of potential risks overshadowed these perceived benefits. Participants expressed concerns that EAI use could harm their wellbeing, work environment, and employment status, and create and amplify bias and stigma against them, especially the most marginalized (e.g., along dimensions of race, gender, mental health status, and disability). Distrustful of EAI and its potential risks, participants anticipated either conforming to EAI implementation in practice (e.g., partaking in emotional labor) or refusing it (e.g., quitting a job). We argue that EAI may magnify, rather than alleviate, existing challenges data subjects face in the workplace and suggest that some EAI-inflicted harms would persist even if concerns about EAI’s accuracy and bias were addressed.

Recommended citation: Shanley Corvite*, Kat Roemmich*, Tillie Rosenberg, and Nazanin Andalibi. 2023. Data Subjects’ Perspectives on Emotion Artificial Intelligence Use in the Workplace: A Relational Ethics Lens. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 124 (April 2023), 38 pages. https://doi.org/10.1145/3579600 *Co-first authors contributed equally.

Data Subjects’ Conceptualizations of and Attitudes Toward Automatic Emotion Recognition-Enabled Wellbeing Interventions on Social Media

Published in Proceedings of the ACM on Human-Computer Interaction, 2021

Automatic emotion recognition (ER)-enabled wellbeing interventions use ER algorithms to infer the emotions of a data subject (i.e., a person about whom data is collected or processed to enable ER) based on data generated from their online interactions, such as social media activity, and intervene accordingly. The potential commercial applications of this technology are widely acknowledged, particularly in the context of social media. Yet, little is known about data subjects’ conceptualizations of and attitudes toward automatic ER-enabled wellbeing interventions. To address this gap, we interviewed 13 US adult social media data subjects regarding social media-based automatic ER-enabled wellbeing interventions. We found that participants’ attitudes toward automatic ER-enabled wellbeing interventions were predominantly negative. Negative attitudes were largely shaped by how participants compared their conceptualizations of Artificial Intelligence (AI) to the humans that traditionally deliver wellbeing support. Comparisons between AI and human wellbeing interventions were based upon human attributes participants doubted AI could hold: 1) helpfulness and authentic care; 2) personal and professional expertise; 3) morality; and 4) benevolence through shared humanity. In some cases, participants’ attitudes toward automatic ER-enabled wellbeing interventions shifted when participants conceptualized automatic ER-enabled wellbeing interventions’ impact on others, rather than themselves. Though with reluctance, a minority of participants held more positive attitudes toward their conceptualizations of automatic ER-enabled wellbeing interventions, citing their potential to benefit others: 1) by supporting academic research; 2) by increasing access to wellbeing support; and 3) through egregious harm prevention. However, most participants anticipated harms associated with their conceptualizations of automatic ER-enabled wellbeing interventions for others, such as re-traumatization, the spread of inaccurate health information, inappropriate surveillance, and interventions informed by inaccurate predictions. Lastly, while participants had qualms about automatic ER-enabled wellbeing interventions, we identified three development and delivery qualities of automatic ER-enabled wellbeing interventions upon which their attitudes toward them depended: 1) accuracy; 2) contextual sensitivity; and 3) positive outcome. Our study is not motivated to make normative statements about whether or how automatic ER-enabled wellbeing interventions should exist, but to center voices of the data subjects affected by this technology. We argue for the inclusion of data subjects in the development of requirements for ethical and trustworthy ER applications. To that end, we discuss ethical, social, and policy implications of our findings, suggesting that automatic ER-enabled wellbeing interventions imagined by participants are incompatible with aims to promote trustworthy, socially aware, and responsible AI technologies in the current practical and regulatory landscape in the US.

Recommended citation: Kat Roemmich and Nazanin Andalibi. 2021. Data Subjects’ Conceptualizations of and Attitudes Toward Automatic Emotion Recognition-Enabled Wellbeing Interventions on Social Media. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 308 (October 2021), 34 pages. https://doi.org/10.1145/3476049