
(with Olgahan Çat, Jiseon Chang, Roman Hlatky, Daniel L Nielson)
PNAS Nexus, 2024, 3 (8): 189.
Behavioral nudges in Facebook ads reached nearly 15 million people across six diverse countries, and many thousands consequently took the step of navigating to governments’ vaccine signup sites. However, no treatment ad increased vaccine signup intent significantly more than placebo uniformly across all countries. Critically, reporting the descriptive norm that 87% of people worldwide had either been vaccinated or planned vaccination—social proof—did not meaningfully increase vaccine signup intent in any country and significantly backfired in Taiwan. This result contradicts prominent prior findings. A charge to “protect lives in your family” significantly outperformed placebo in Taiwan and Turkey but produced null effects elsewhere. A message noting that vaccination significantly reduces hospitalization risk decreased signup intent in Brazil and had no significant effects in any other country. Such heterogeneity was the hallmark of the study: some messages produced significant treatment effects in some countries but failed in others. No nudge outperformed the placebo in Russia, a site of high vaccine skepticism. In all, widely touted behavioral nudges often failed to promote vaccine signup intent and appear to be moderated by cultural context.
Can IOs influence attitudes about regulating “Big Tech”?
(with Terrence Chapman)
Review of International Organizations, 2023, 18 (4): 725–751.
Can international organizations (IOs) influence attitudes about regulating “Big Tech”? Recent tech sector activity engenders multiple concerns, including the appropriate use of user data and monopolistic business practices. IOs have entered the debate, advocating for increased regulation to protect digital privacy and often framing the issue as a threat to fundamental human rights. Does this advocacy matter? We hypothesize that individuals who score high on measures of internationalism will respond positively to calls for increased regulation from IOs and INGOs. We further predict that liberals and Democrats will be more receptive to IO and NGO messaging, especially when it emphasizes human rights, while conservatives and Republicans will be more receptive to messaging from domestic institutions that emphasizes antitrust actions. To assess these arguments, we fielded a nationally representative survey experiment in the U.S. in July 2021 that varied the source and framing of a message about the dangers posed by tech firms, then asked respondents about their support for increased regulation. The average treatment effect of international sources is largest for respondents who score high on an index of internationalism and for respondents on the left of the political spectrum. Contrary to expectations, we found few significant differences between the human rights and antitrust framings. Our results suggest that the ability of IOs to influence attitudes about tech regulation may be limited in an era of polarization, but that individuals who value multilateralism may still be influenced by IO campaigns.

Diffusing AI Norms and Policies: The Role of IOs and Epistemic Communities (Job Market Paper)
As countries around the world adopt policies to regulate artificial intelligence (AI), they differ not only in how much they regulate, but also in the types of policies they implement, ranging from aspirational principles to concrete rules on data privacy and algorithms. What explains this variation in AI governance related to ethics and rights protection? This paper argues that international organizations (IOs) shape domestic policymaking through two channels: IO membership, which reflects formal affiliation and institutional commitment, and IO expertise, which captures a country’s prior engagement with IOs’ epistemic communities. While IO membership facilitates norm diffusion, IO expertise provides technical knowledge and helps translate principles into practice. However, the influence of IO expertise is conditional on a country’s technical and bureaucratic capacity. Drawing on an OECD dataset covering 70 countries, I hand-code a set of policy variables and conduct regression, matching, and difference-in-differences analyses. I examine both normative commitments, measured through keyword frequency, and substantive policy adoption, based on five key policies: AI ethical frameworks, data governance, algorithmic fairness, AI advisory committees, and guidelines for government use of AI. In particular, I investigate patterns of overall adoption as well as issue-level variation between privacy and non-discrimination policies. The findings show that broader IO membership is associated with greater keyword usage on ethics and human rights, while IO expertise significantly increases the likelihood of adopting substantive policies in countries with higher AI or bureaucratic capacity. This project contributes to the debate on the role of IOs and epistemic communities in norm and policy diffusion, identifying the conditions under which transnational experts shape national regulation of emerging technologies.
Mapping Text Similarity and AI Regulation Networks Worldwide
AI presents enormous opportunities but also poses significant risks. In response, national governments and multilateral organizations have adopted a range of laws, regulations, and initiatives to address concerns related to AI. What explains the explosive growth of AI policies regarding ethics and human rights? Why do some countries align while others diverge in their policy objectives? This chapter examines both the timing of AI policy adoption and the similarity of language used in national initiatives. I argue that shared IO memberships and AI adoption rates are associated with greater convergence in policy language. To analyze these patterns, I leverage AI policy objectives as indicators of regulatory priorities and apply a keyword-based approach using the OECD dataset of nearly 900 national policies. Through topic modeling and network analysis, I capture descriptive trends distinguishing social protection from economic objectives. I also conduct dyadic regression analyses on the frequency of specific keywords within the ethics and human rights category — namely, ethics, privacy, non-discrimination, transparency, accountability, and safety. The findings suggest that AI ethical and human rights norms have diffused globally within a short period. Regarding the correlates of specific keywords, shared IO membership and the use of AI in law enforcement consistently correlate with greater textual similarity across all categories. This chapter maps the evolving landscape of AI governance through policy language, offering a detailed account of the mechanisms behind policy diffusion and policy alignment across countries.
A Typology of AI Governance: Regulating Actors through Formal and Informal Rules
As AI policies addressing ethics and human rights proliferate globally, the absence of clear conceptual and analytical frameworks makes it difficult to understand this complex phenomenon. How do countries regulate AI differently? Why do they regulate different actors in different ways? This paper proposes a typology of AI governance along two key dimensions: the targets of regulation (primarily private firms and government agencies) and the legal status (formal versus informal governance). Countries vary significantly across these dimensions, both in their regulatory priorities and the mechanisms they employ. I argue that countries with strong high-tech sectors are more likely to adopt informal governance, while regime type is central in determining whether countries regulate government use of AI. Drawing on the OECD dataset of national AI policies, I hand-coded attributes capturing combinations of regulatory targets and legal status to operationalize the outcomes of interest. The analysis shows that AI capacity is the strongest predictor of the adoption of informal rules, particularly with respect to firms. Regime type, specifically freedom of expression, is significantly associated with the regulation of government use of AI. This paper offers a novel theoretical framework and new empirical evidence on cross-national variation in AI governance, contributing to the debates on regulatory forms and priorities.

Being Watched: What Drives Mass Attitudes about AI Surveillance?
(with Terrence Chapman and Nivedita Jhunjhunwala) Under review
Governments increasingly make use of new surveillance technologies powered by artificial intelligence. These technologies, such as facial recognition and the collection of personal data, offer many benefits but also generate multiple concerns, prompting calls for scrutiny and new forms of regulation. Yet overregulation of new technology can stifle innovation, and advocates of surveillance argue that the benefits to society far outweigh the dangers. To better understand how prominent concerns about the technology shape individuals’ preferences for government regulation, we conducted survey experiments in the United States and the United Kingdom — two countries where AI surveillance technology is very common but that differ in political culture, history with surveillance, and geopolitical position. We presented respondents with simple background information about AI surveillance, then randomized paragraph-long primes summarizing concerns about the technology raised by think tanks, interest groups, and the media. Our analysis finds that concerns that Chinese surveillance technology presents a national security threat are especially salient in the U.S., while concerns that targeted surveillance for public safety may spill over into more routine surveillance resonate in the U.K. We also identify partisan and cross-country differences in regulatory preferences.
Elite Attitudes about International AI Regulation
(with Terrence Chapman and Daniel Nielson) Under review
The fast-moving artificial intelligence boom has generated many concerns, including (but not limited to) violations of privacy, questions about the appropriate use of personal data, algorithmic bias and discrimination, displacement of human workers, and a general lack of transparency in AI processes. With these concerns come calls for government regulation, and given the transnational scope of digital business, that regulation may require international collaboration. Yet we know little about how business elites—who would be tasked with complying with regulations and who operate businesses affected by the concerns raised above—think about what form that collaboration should take and what obligations it might entail. To better understand elite attitudes toward potential international AI regulation, we designed and fielded a conjoint experiment targeting firm managers (as well as higher executive job categories) in four countries: the U.S., the U.K., France, and Germany. The survey experiment varies several attributes of hypothetical transnational AI regulation, including member parties, scope, targeted actors, depth of obligation, and size of bureaucracy. Contrary to expectations that managers of private businesses are generally wary of government regulation, we find that managers in our sample prefer encompassing regulation that includes input from multiple stakeholders and binding obligations on both private firms and government agencies.
Policing “Big Tech”: How Enforcement Actions Differ Across Europe
As more countries pass laws to regulate Big Tech, the outcomes of enforcement remain uncertain. Tech multinational corporations (MNCs) can weaponize their dominant positions in markets and communications networks to gain greater negotiating power with host governments. As a result, they are likely to undermine a country’s regulatory capacity and shape enforcement outcomes in foreign jurisdictions. When do large tech companies successfully deter regulatory activity, and under what conditions do governments enforce rules and penalties? This project addresses these questions by compiling a new dataset on enforcement actions undertaken by European governments targeting large tech firms. While EU member states are required to adhere to a common legal framework in areas such as data protection and antitrust, the implementation and enforcement of these laws are the responsibility of national authorities. Drawing on media coverage and official government press releases, I gather information on the reasons for violations, the relevant legislation, the amount of fines, and other related details. The activities of tech MNCs, such as self-regulation and lobbying, can trigger diverse institutional responses. I argue that a shorter transposition period, which refers to how quickly a directive is adopted at the national level, together with the availability of multiple legal tools, may empower regulators to take action and intervene in the market.