Publications
Below is a list of publications from our lab. Alternatively, please refer to Google Scholar.
Interactions with AI assistants are increasingly personalized to individual users. As AI personalization is dynamic and machine-learning-driven, we have limited understanding of how personalization affects interaction outcomes and user perceptions. We conducted a large-scale controlled experiment in which 1,000 participants interacted with AI assistants prompted to take on specific personality traits and opinions. Our results show that participants consistently preferred to interact with models that shared their opinions. Participants found opinion-aligned models more trustworthy, competent, warm, and persuasive, corroborating an AI-similarity-attraction hypothesis. In contrast, we observed no or only weak effects of AI personality alignment, with introvert models rated as less trustworthy and competent by introvert participants. These findings highlight opinion alignment as a central dimension of AI user preference, while underscoring the need for a more grounded discussion of the mechanisms and risks of AI personalization.
AI writing assistants powered by large language models are increasingly used to make autocomplete suggestions to people as they write text. Can these AI writing assistants affect people’s attitudes in this process? In two large-scale preregistered experiments (N = 2582), we exposed participants writing about important societal issues to an AI writing assistant that provided biased autocomplete suggestions. When using the AI assistant, the attitudes participants expressed in a posttask survey converged toward the AI’s position. However, a majority of participants were unaware of the AI suggestions’ bias and their influence. Further, the influence of the AI writing assistant was stronger than the influence of similar suggestions presented as static text, showing that the influence is not fully explained by the suggestions increasing the accessibility of the biased information. Last, warning participants about assistants’ bias before or after exposure does not...
AI-mediated communication promises efficiency but raises concerns about diminished authenticity and interpersonal trust. We examined this potential trade-off across two preregistered online experiments (N = 1,637) in which participants engaged in incentivized two-player trust games with communication. Half of the participants could use state-of-the-art predictive text assistance when composing a message to encourage trust; the others wrote unaided. We measured both objective (behavioral) and subjective (stated) trust. Frequentist and Bayesian analyses showed that AI assistance had minimal impact on trust, regardless of disclosure. AI-assisted participants wrote more efficiently, producing equally trust-inducing messages but in less time. This advantage persisted even when AI use was disclosed. Linguistic analyses indicated that AI-assisted messages were slightly less authentic than those written alone but that they exhibited greater warmth, complexity, and clout—features commonly associated with trustworthiness. These findings challenge the view that AI-mediated communication...
Artificial intelligence (AI) can enhance human communication, for example, by improving the quality of our writing, voice or appearance. However, AI-mediated communication also has risks: it may increase deception, compromise authenticity or yield widespread mistrust. As a result, both policymakers and technology firms are developing approaches to prevent and reduce potentially unacceptable uses of AI communication technologies. However, we do not yet know what people believe is acceptable or what their expectations are regarding usage. Drawing on normative psychology theories, we examine people’s judgements of the acceptability of open and secret AI use, as well as people’s expectations of their own and others’ use. In two studies with representative samples (Study 1: N = 477; Study 2: N = 765), we find that people are less accepting of secret than open AI use in communication, but only when directly compared....
We are entering an era of AI-Mediated Communication (AI-MC) where interpersonal communication is not only mediated by technology, but is optimized, augmented, or generated by artificial intelligence. Our study takes a first look at the potential impact of AI-MC on online self-presentation. In three experiments we test whether people find Airbnb hosts less trustworthy if they believe their profiles have been written by AI. We observe a new phenomenon that we term the Replicant Effect: Only when participants thought they saw a mixed set of AI- and human-written profiles, they mistrusted hosts whose profiles were labeled as or suspected to be written by AI. Our findings have implications for the design of systems that involve AI technologies in online self-presentation and chart a direction for future work that may upend or augment key aspects of Computer-Mediated Communication theory.
In addition to more personalized content feeds, some leading social media platforms give a prominent role to content that is more widely popular. On Twitter, trending topics identify popular topics of conversation on the platform, thereby promoting popular content which users might not have otherwise seen through their network. Hence, trending topics potentially play important roles in influencing the topics users engage with on a particular day. Using two carefully constructed data sets from India and Turkey, we study the effects of a hashtag appearing on the trending topics page on the number of tweets produced with that hashtag. We specifically aim to answer the question: How many new tweets are generated because a hashtag is labeled as trending? We separate the effects of the trending topics page from network exposure and find there is a statistically significant, but modest,...
Traditionally, writing assistance systems have focused on short or even single-word suggestions. Recently, large language models like GPT-3 have made it possible to generate significantly longer natural-sounding suggestions, offering more advanced assistance opportunities. This study explores the trade-offs between sentence- vs. message-level suggestions for AI-mediated communication. We recruited 120 participants to act as staffers from legislators’ offices who often need to respond to large volumes of constituent concerns. Participants were asked to reply to emails with different types of assistance. The results show that participants receiving message-level suggestions responded faster and were more satisfied with the experience, as they mainly edited the suggested drafts. In addition, the texts they wrote were evaluated as more helpful by others. In comparison, participants receiving sentence-level assistance retained a higher sense of agency, but took longer for the task as they needed to plan...
If large language models like GPT-3 preferentially produce a particular point of view, they may influence people’s opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write, and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions...
Smart replies, writing enhancements, and virtual assistants powered by artificial intelligence (AI) language technologies are becoming part of consumer products and everyday experiences. This study explores the opportunities and risks of using language-generating AI systems in politics to increase legislative responsiveness. Legislators receive a large volume of constituent communication and often cannot devote individual consideration and timely response to each. Here, AI language technologies may allow legislators to process constituent communication more efficiently. For example, AI writing tools can suggest reply snippets when a staffer responds to a common concern. However, legislative human-AI collaboration could reduce constituent trust or undermine the representative process. In two experiments, we compared constituents’ impressions of human-written legislative correspondence to correspondences partially or fully generated by GPT-3, a state-of-the-art language model. Our results suggest that legislative correspondence generated by AI with human oversight may be...
AI language technologies increasingly assist and expand human communication. While AI-mediated communication reduces human effort, its societal consequences are poorly understood. In this study, we investigate whether using an AI writing assistant in personal self-presentation changes how people talk about themselves. In an online experiment, we asked participants (N=200) to introduce themselves to others. An AI language assistant supported their writing by suggesting sentence completions. The language model generating suggestions was fine-tuned to preferably suggest either interest, work, or hospitality topics. We evaluate how the topic preference of a language model affected users’ topic choice by analyzing the topics participants discussed in their self-presentations. Our results suggest that AI language technologies may change the topics their users talk about. We discuss the need for a careful …
While demands for change and accountability for harmful AI consequences mount, foreseeing the downstream effects of deploying AI systems remains a challenging task. We developed AHA! (Anticipating Harms of AI), a generative framework to assist AI practitioners and decision-makers in anticipating potential harms and unintended consequences of AI systems prior to development or deployment. Given an AI deployment scenario, AHA! generates descriptions of possible harms for different stakeholders. To do so, AHA! systematically considers the interplay between common problematic AI behaviors as well as their potential impacts on different stakeholders, and narrates these conditions through vignettes. These vignettes are then filled in with descriptions of possible harms by prompting crowd workers and large language models. By examining 4113 harms surfaced by AHA! for five different AI deployment scenarios, we found that AHA! generates meaningful examples of harms, with different problematic...
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible artificial intelligence technologies. While their recommendations converge on a set of central values, little is known about the values a more representative public would find important for the AI technologies they interact with and might be affected by. We conducted a survey examining how individuals perceive and prioritize responsible AI values across three groups: a representative sample of the US population (N=743), a sample of crowdworkers (N=755), and a sample of AI practitioners (N=175). Our results empirically confirm a common concern: AI practitioners’ value priorities differ from those of the general public. Compared to the US-representative sample, AI practitioners appear to consider responsible AI values as less important and emphasize a different set of values. In contrast, self-identified women and Black respondents found...
Surveys show that people trust news sources that support their political ideology, creating a feedback loop that sustains partisan disagreement about fact as well as opinion. However, most news sources do not publish sufficiently balanced content to disentangle the underlying dynamics: Do people believe partisan news because they trust the source or because the content favors their worldview? We experimentally isolated the effects of content and source on the credibility of partisan news. The results show that the credibility of partisan news depends on favorable content more than a trusted source. Unfavorable headlines were unlikely to be believed, but favorable headlines were readily believed even if attributed to mistrusted sources. When offered monetary incentives for correct evaluations, people were more likely to acknowledge the accuracy of unfavorable news. The findings suggest that interventions emphasizing accuracy may be more effective at...
Political organizations worldwide keep innovating their use of social media technologies. In the 2019 Indian general election, organizers used a network of WhatsApp groups to manipulate Twitter trends through coordinated mass postings. We joined 600 WhatsApp groups that support the Bharatiya Janata Party, the right-wing party that won the general election, to investigate these campaigns. We found evidence of 75 hashtag manipulation campaigns in the form of mobilization messages with lists of pre-written tweets. Building on this evidence, we estimate the campaigns’ size, describe their organization and determine whether they succeeded in creating controlled social media narratives. Our findings show that the campaigns produced hundreds of nationwide Twitter trends throughout the election. Centrally controlled but voluntary in participation, this hybrid configuration of technologies and organizational strategies shows how profoundly online tools transform campaign politics. Trend alerts complicate the debates over...
Social influence is ubiquitous in politics and online social media. Here we explore how social signals from partisan crowds influence people’s evaluations of political news. For example, are liberals easily persuaded by a liberal crowd, while resisting the influence of conservative crowds? We designed a large-scale online experiment (N=1,000) to test how politically-annotated social signals affect participants’ opinions. In times rife with misinformation and polarization, our findings are optimistic: the mechanism of social influence works across political lines, that is, liberals are reliably influenced by majority-Republican crowds and vice versa. At the same time, we replicate findings showing that people are inclined to discard news claims that are inconsistent with their political views. Considering that people show negative reactions to politically dissonant news but not to social signals that oppose their views, we point to the possibility of depolarizing social rating systems.
Studies have observed that readers are more likely to trust news sources that align with their own political leanings. We ask: is the higher reported trust in politically aligned news sources due to perceived institutional trustworthiness or does it merely reflect a preference for the political claims aligned sources publish? Furthermore, do respondents report their actual beliefs about news or do they choose to express their political commitments instead? We conducted a US-based experiment (N=400) using random association of news claims to news sources as well as financial incentives to robustly identify the main drivers of trust in news and to evaluate response bias. We observe a comparatively weak effect of source on news evaluation and find that response differences are largely due to the alignment of the respondents’ politics and the news claim. We also find significant evidence for expressive responding, in particular among right-leaning participants.