Anticipating the Fourth Age: Generative AI and algorithmic cultures
Large language models and intercultural communication
LLMs are language technologies. In practice they act as mediators (translation, repair, paraphrase), partners (drafting, turn-suggestions), and normalizers (stylistic smoothing). To support intercultural communication, we need to understand how these functions interact with the pragmatic work people do online (turn-taking, stance-marking, mitigation, repair) and with platform/model salience, that is, the extent to which platform norms and model defaults shape what is said and how it is read (as discussed in Theme 3).
LLMs may offer new linguistic opportunities to intercultural communicators. For example, they can scaffold cross-lingual interaction (ad hoc translation and glossing), help soften or strengthen tone appropriately, and provide metapragmatic prompts ("offer a counter-argument politely," "avoid idioms"). Such uses may expand participation and lower linguistic barriers. At the same time, the risks are clear: default prompts and safety policies often normalize toward dominant styles; "good writing" suggestions may erase local registers; false fluency can mask misunderstanding; and the absence of non-verbal cues remains unresolved even in avatar-based systems. If principles of intercultural communication are weakly specified, uncritical reliance on LLMs may amplify misunderstanding or reproduce bias.
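To make the idea of metapragmatic prompting concrete, a minimal sketch in Python follows. The `llm` callable, the `metapragmatic_rewrite` wrapper, and the instruction strings are hypothetical illustrations of the pattern, not any particular vendor's API.

```python
from typing import Callable

# Hypothetical interface: any function mapping a prompt string to a completion.
LLM = Callable[[str], str]

# Named pragmatic goals, phrased as explicit instructions to the model.
METAPRAGMATIC_INSTRUCTIONS = {
    "soften": "Rewrite the message below so disagreement is mitigated and polite.",
    "strengthen": "Rewrite the message below so the stance is clear and direct.",
    "avoid_idioms": "Rewrite the message below without idioms or culture-specific references.",
}

def metapragmatic_rewrite(llm: LLM, draft: str, goal: str) -> str:
    """Return a rewrite of `draft` guided by one named pragmatic goal."""
    prompt = (
        METAPRAGMATIC_INSTRUCTIONS[goal] + "\n"
        "Preserve the propositional content; change only tone and phrasing.\n\n"
        "Message:\n" + draft
    )
    return llm(prompt)

# Example: metapragmatic_rewrite(my_model, "That claim is simply wrong.", "soften")
```

The design point is that the pragmatic goal is named explicitly by the communicator rather than left to the model's defaults, which is precisely where normalization pressures otherwise enter.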
Good training data is necessary but insufficient. Expanding beyond English-dominant corpora will be essential (Choudhury, 2023; Natale et al., 2025), but cultural alignment requires more than adding texts: it involves representing genres, pragmatics, and norms, engaging communities in data governance, and evaluating interactional outcomes (not only intrinsic benchmarks).
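One way to ground "evaluating interactional outcomes" is to score logged conversations for signs of misunderstanding rather than for output quality alone. The sketch below uses clarification requests as a rough proxy; the marker list and the log format are assumptions for illustration, not a validated instrument.

```python
# Sketch: an interactional-outcome score for a logged conversation,
# represented as a list of turn strings. A clarification request is
# treated as evidence of possible misunderstanding.

CLARIFICATION_MARKERS = (
    "what do you mean",
    "sorry?",
    "can you rephrase",
    "i don't understand",
)

def clarification_rate(turns: list[str]) -> float:
    """Fraction of turns that contain an explicit clarification request."""
    if not turns:
        return 0.0
    hits = sum(any(m in t.lower() for m in CLARIFICATION_MARKERS) for t in turns)
    return hits / len(turns)

# Compare the same dyads with and without LLM mediation:
# outcome_gap = clarification_rate(mediated_turns) - clarification_rate(unmediated_turns)
```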
Future researchers might profitably ask: How do LLM-mediated suggestions alter turn length, uptake, and disagreement across languages and scripts? When and how do systems standardize toward Western academic or professional styles, and what guardrails counter this? Can we develop culturally situated fine-tuning, pluralist reward models, and pragmatics-aware metrics?
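The first of these questions invites simple descriptive measures. The sketch below gives two illustrative operationalizations, mean turn length and a lexical-overlap proxy for uptake; both assume whitespace tokenization, which fails for unspaced scripts and would need language-appropriate segmentation in practice.

```python
# Sketch: descriptive metrics over a conversation log (list of turn strings).
# Whitespace tokenization is a simplifying assumption; unsegmented scripts
# (e.g., Chinese, Japanese) need a language-appropriate tokenizer.

def mean_turn_length(turns: list[str]) -> float:
    """Average number of whitespace tokens per turn."""
    return sum(len(t.split()) for t in turns) / len(turns) if turns else 0.0

def uptake(prev_turn: str, reply: str) -> float:
    """Crude uptake proxy: share of the previous turn's tokens reused in the reply."""
    prev_tokens = set(prev_turn.lower().split())
    if not prev_tokens:
        return 0.0
    return len(prev_tokens & set(reply.lower().split())) / len(prev_tokens)

def mean_uptake(turns: list[str]) -> float:
    """Average uptake across adjacent turn pairs."""
    pairs = list(zip(turns, turns[1:]))
    if not pairs:
        return 0.0
    return sum(uptake(a, b) for a, b in pairs) / len(pairs)
```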
Glossary
Large language model (LLM): A type of AI system trained on large amounts of text to generate and interpret human-like language.
Platform norms: The explicit and implicit rules, expectations, and conventions that govern acceptable behaviour, interactions, and content sharing within a specific digital platform.
Model defaults: The built-in settings and behaviours of an AI model that shape what it tends to produce unless users adjust it.