Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
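The setting above can be sketched as a minimal config fragment, assuming a standard Jekyll configuration file (the key name follows Jekyll's own conventions; the surrounding keys are illustrative only):

```yaml
# _config.yml (standard Jekyll configuration file)
# With future: false, Jekyll skips any post whose date is later than
# the build time; set it to true (or build with the --future flag)
# to publish future-dated posts anyway.
future: false
```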
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Proceedings of the ACM on Human-Computer Interaction, 2021
We argue that to increase the trustworthiness of automatic ER-enabled wellbeing interventions on social media, companies that deploy them would need to at least fulfill requirements that preemptively protect individuals from the vast harms these interventions present, take measures to attenuate harms, and align with data subjects’ development and design requirements. These requirements include high computational accuracy, contextual sensitivity, positive outcome guarantees, individual controls, external regulation, and meaningful consent over being subject to automatic ER-enabled wellbeing interventions. We conclude with a message of caution and restraint about the use of automatic ER-enabled wellbeing interventions on social media in the US, based on its current regulatory landscape and social context.
Recommended citation: Kat Roemmich and Nazanin Andalibi. 2021. Data Subjects’ Conceptualizations of and Attitudes Toward Automatic Emotion Recognition-Enabled Wellbeing Interventions on Social Media. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 308 (October 2021), 34 pages. https://doi.org/10.1145/3476049 http://kroemmich.github.io/files/3476049-a.pdf
Published in CHI ’23: ACM Conference on Human Factors in Computing Systems, 2023
Findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, as well as raise important research and policy questions on how to protect and preserve emotional privacy within and beyond the workplace.
Recommended citation: Kat Roemmich, Florian Schaub, and Nazanin Andalibi. 2023. Emotion AI at Work: Implications for Workplace Surveillance, Emotional Labor, and Emotional Privacy. In CHI ’23: ACM Conference on Human Factors in Computing Systems, April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 20 pages. https://doi.org/10.1145/3544548.3580950 http://kroemmich.github.io/files/CHI23_Emotion_AI_at_Work.pdf
Published in Proceedings of the ACM on Human-Computer Interaction, 2023
We argue that EAI may magnify, rather than alleviate, existing challenges data subjects face in the workplace and suggest that some EAI-inflicted harms would persist even if concerns of EAI’s accuracy and bias are addressed.
Recommended citation: Shanley Corvite*, Kat Roemmich*, Tillie Rosenberg, and Nazanin Andalibi. 2023. Data Subjects’ Perspectives on Emotion Artificial Intelligence Use in the Workplace: A Relational Ethics Lens. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 124 (April 2023), 38 pages. https://doi.org/10.1145/3579600. *Co-first authors contributed equally. http://kroemmich.github.io/files/CSCW23_EAI_Data_Subject_Perceptions_Workplace.pdf
Published in Proceedings of the ACM on Human-Computer Interaction, 2023
Pointing to tensions between company claims and broader algorithmic fairness and equity scholarship, we argue that EAI service claims dangerously obscure the potential harms introduced by EAI and reinforce exclusionary hiring practices despite their concurrent claims of debiasing hiring processes and outcomes. Lastly, we discuss this work’s implications for design and policy to address deception and unfairness in EAI hiring services.
Recommended citation: Kat Roemmich, Tillie Rosenberg, Serena Fan, and Nazanin Andalibi. 2023. Values in Emotion Artificial Intelligence Hiring Services: Technosolutions to Organizational Problems. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 109 (April 2023), 28 pages. https://doi.org/10.1145/3579543 http://kroemmich.github.io/files/CSCW23_Values_in_EAI_Hiring_Services.pdf
Published in CHI ’24: ACM Conference on Human Factors in Computing Systems, 2024
Our analysis of overwork in academia underscores the urgent need to halt our overwork norms and pivot towards reasonable, responsible, and health-conscious work practices—before we burn to a crisp in the name of more publications.
Recommended citation: Abraham Mhaidli* and Kat Roemmich*. 2024. Overworking in HCI: A Reflection on Why We Are Burned Out, Stressed, and Out of Control; and What We Can Do About It. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '24), May 11–16, 2024, Honolulu, HI, USA. 10 pages. https://doi.org/10.1145/3613905.3644052 *Co-first authors contributed equally. http://kroemmich.github.io/files/CHI24_altCHI_Overwork.pdf
Published in CSCW ’24: 27th ACM Conference on Computer-Supported Cooperative Work and Social Computing, 2024
In this paper, we qualitatively analyzed U.S. adults’ open-ended survey responses (n=395) to examine their perceptions of emotion AI use in mental healthcare and its potential impacts on them as data subjects.
Recommended citation: Kat Roemmich, Shanley Corvite, Cassidy Pyle, Nadia Karizat, and Nazanin Andalibi. 2024. Emotion AI Use in U.S. Mental Healthcare: Potentially Unjust and Techno-Solutionist. Proc. ACM Hum.-Comput. Interact. 8, CSCW1, Article 47 (April 2024), 46 pages. https://doi.org/10.1145/3637324 http://kroemmich.github.io/files/CSCW24_Emotion_AI_Mental_Health.pdf
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.