A recent interview published in Speculative F(r)iction — Hype as Governance — features a conversation with Andreu Belsunces exploring the political and epistemic role of hype in shaping technological futures. The discussion examines sociotechnical fictions, AI hype, legitimacy, and the need for purposeful frictions that can support more responsible innovation and governance.
The interview approaches hype as a governing force in the organisation of expectations and authority around emerging technologies. Drawing on the notion of sociotechnical fictions — claims that lack firm evidence yet acquire authority when voiced within scientific and technical contexts — the conversation analyses how such narratives gain credibility and orient investment, public imagination, and institutional direction, delimiting the horizons of the thinkable, the fundable, and the buildable.
The exchange also reflects on teaching practices that connect speculative research to a collective inquiry into how venture capital crafts our futures, while experimenting with worldbuilding as a way to design institutions capable of countering the current authoritarian drift. A key takeaway from the conversation is the need for purposeful frictions: forms of literacy and critical awareness that reduce vulnerability to hype by making belief, desire, and power more visible in technology-driven futures.
Published as part of Bogdana (Bobby) Rakova's Speculative F(r)iction in AI Use and Governance series — a platform dedicated to improving human agency and AI literacy through design fiction and critical storytelling — the piece contributes to ongoing debates on how futures are collectively constructed and governed.
Rakova is a Senior Data Scientist on the global Responsible AI team at DLA Piper, where she develops evaluation frameworks for AI systems with a particular emphasis on legal red teaming and safety guardrails. She is also an affiliate of the Data & Society Research Institute and a former Senior Trustworthy AI Fellow at the Mozilla Foundation (2022–2024), where she worked on rethinking consent and contestability in automated systems.