HAIC-III Project @ Aarhus Decennial
Three members of the DARC-affiliated project Human-AI Collaboration (HAIC-III) presented their research at the Aarhus Decennial Conference on Computing [X] Crisis, including a keynote by Kristoffer Ørum.
Kristoffer Ørum gave the keynote Freedom, Equality and Hip-Hop, and the conference also featured a guided tour of his exhibition Aarhus er #1 at Kunsthal Aarhus (https://kunsthalaarhus.dk/en/Exhibitions/Kristoffer-OErum-Emilio-Hestepis-Aarhus-Er-1). In addition, Malthe Stavning Erslev and Søren Bro Pold presented papers co-written with Tobias Tretow-Fish and Ben Grosser, respectively.
The conference is the sixth in the decennial series, running since 1975, that aims to set new agendas for critical action, theory, and practice in computing. More than 250 participants gathered in a sunny August in Aarhus.
Abstracts:
From Bullshit to Cognition: Computing Within the Epistemic Crisis of Large Language Models in Systematic Literature Review
Malthe Stavning Erslev & Tobias Tretow-Fish
We stipulate that we are in a crisis of epistemology, and that large language models (LLMs) are a central aspect of that crisis. To the end of computing within this crisis, we situate an in-situ experiment with LLMs in a specific, specialized form of research, namely systematic literature review. Contrary to other studies in this domain, we argue that the question of LLM integration in systematic reviews cannot be fully addressed by measuring how well LLMs replicate the labor of human researchers. It is crucial to examine the epistemological implications of how LLMs are integrated into the process. To explore this, we conducted a systematic literature review, situated within the humanities and social sciences, using two different LLM products: a general-purpose model (GPT-4o) and a purpose-specific tool (Elicit). We analyze how our two exemplar implementations influence the review process and what this reveals about the cognitive and epistemological effects of LLMs in research, drawing on a conceptual vocabulary focusing on notions of bullshit and nonconscious cognition. https://dl.acm.org/doi/10.1145/3744169.3744195
Reading the Praise/Prompt Machine: An Interface Criticism Approach to ChatGPT
Ben Grosser and Søren Bro Pold
This paper critically examines ChatGPT through the lens of interface criticism. Our work develops new methodological approaches to AI critique and reveals how the platform's core engagement mechanics operate via language rather than traditional interface elements. Through systematic three-way conversation experiments and critical prompting, we demonstrate how ChatGPT accommodates rather than challenges user perspectives, struggles to sustain disagreement or deliberation, and reinforces engagement through validation. We show that ChatGPT's answers arrive wrapped in what we term the “praise/prompt envelope”—a carefully crafted package of validation and query designed to sustain user interaction. Two key artworks—Lux Affirma, a custom GPT that amplifies ChatGPT's praise and affirmation to the point of absurdity, and the ChatGPT Demotivator, a browser extension that exposes the platform's linguistic manipulation in real-time—make visible to users how ChatGPT shapes behavior through conversation itself. Our findings reveal that ChatGPT's seemingly natural dialogic flow masks a carefully engineered system of linguistic engagement, designed not for deliberation but for continuation. This insight highlights the need for new critical approaches attuned to how the language-based interfaces of generative AI manipulate users. https://dl.acm.org/doi/10.1145/3744169.3744194