Generative Artificial Intelligence (GAI) is not neutral; it is neither generative nor intelligent. It is a colonial technology of extraction and replication, built on stolen data, racialized labor, and computational enclosures of language and image. It materializes what Ruha Benjamin (2019) calls “the New Jim Code” and what Safiya Noble (2018) has shown as the algorithmic reproduction of white heteropatriarchal order. Queer, trans, Black, and Indigenous critiques of technology have named the body as a site of algorithmic violence and speculative reconstruction (e.g., Chen 2012; Lewis et al. 2021; McGlotten 2013; TallBear 2011). This issue therefore begins not from tacit consent qua fascination with GAI’s creative potential but from its feral undoing—its breakdowns, hallucinations, leaks, and refusals that reveal the violence of its making.
Rather than imagining how queer or feminist approaches might “humanize” AI, this issue recognizes the settler colonial violence intrinsic to techno-solutionism (e.g., Reyes-Cruz et al. 2025; Schwartz et al. 2023) and what Zhasmina Tacheva and Srividya Ramasubramanian (2023) name “AI Empire”—a global formation of hegemony, extractivism, surveillance, and subjugation that reproduces the logics of colonialism and racial capitalism through algorithmic and material infrastructures. The harms of AI, in other words, are structural necessities of empire: violences that render certain lives exploitable, governable, and disposable. Yet communities long cast as vulnerable (Indigenous, Black, queer, trans, and disabled people) also demonstrate ingenuity, alterity, and sustained refusal. Their epistemologies of relationality, sovereignty, and liberatory praxis already articulate what justice in the wake of AI could be. Ferality thus becomes a critical posture of “data resurgence” (Tacheva and Ramasubramanian 2023), a resistant epistemic stance that tears at the ontological foundations of AI Empire.