
Anticipating the Fourth Age: Generative AI and algorithmic cultures

How can we leverage the notion of culture to envision and collectively build approaches that are more sensitive to the global dimensions of generative AI?

(Natale et al., 2025)

Building on Wellman’s (2011) Third Age of internet studies, when the internet became embedded, participatory, and plural, we propose that 2023 marked a watershed for digital culture and internet research, catalyzed by the release of ChatGPT in November 2022 (OpenAI, 2022, November 30; Stokel-Walker, 2022, December 9). In the Fourth Age, model-mediated communication is becoming routine, and AI and algorithmic cultures are emerging to shape our digital worlds and encultured lives. Generative models (text, image, audio, video) increasingly act as co-authors, translators, stylizers, curators, and interlocutors in everyday interaction, reshaping ideas of authorship, creativity, and identity. Content is now not only curated by algorithms but increasingly produced by them (see Box 2). Continuing the discussion of ‘computer-mediated colonization’ (Ess, 2002), we must now also ask: Who creates knowledge? Who is represented in training data? Whose values shape AI-generated communication? Language dominance, cultural references, and communicative norms are increasingly shaped by training corpora and tech platform values.

Box 2. What makes the Fourth Age different?

  • Model salience. Communication is now shaped not only by platform norms but by model training data, alignment/reward regimes, and default prompts, which in turn shape phrasing, stance, and what counts as “appropriate.”
  • Synthetic co-presence. Generative systems act as co-authors, translators, stylizers, recommenders, and moderators, mediating interaction in real time rather than merely storing or routing messages.
  • Generative scale & speed. Near-zero-cost creation across text, image, audio, and video accelerates diffusion and remix, producing content saturation and new attention dynamics.
  • Style normalization pressures. Prompts, safety policies, and “good writing” suggestions tend to standardize output, often toward Western/educated registers, flattening local varieties and minority styles.
  • Data/compute asymmetries. Control over training data, tuning, and deployment is concentrated in a few actors, whose choices set de facto cultural defaults and guardrails.

 

Our own use of generative AI in this project made several of these dynamics tangible in practice. Working with ChatGPT to generate candidate codes, cluster topics, and refine prose repeatedly foregrounded model defaults shaped by opaque training data and interface design. For example, the model’s tendency to favour generic US-Anglophone academic phrasing and to smooth over tensions in the literature illustrated the normalizing and re-centring pressures we later describe in relation to model-mediated communication. The need to actively resist these tendencies by re-introducing ambiguity, reinstating local concepts, and cross-checking AI-suggested patterns against the source texts reinforces our argument that generative AI systems do not simply “assist” scholarly work but participate in shaping which cultural framings, styles, and voices appear most salient.

 


License


Culture and Communication in Digital Worlds Copyright © 2025 by Leah P. Macfadyen is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
