👉 The corpus weapon refers to the strategic use of large language models (LLMs) or other computational tools applied to a linguistic corpus to uncover, analyze, and exploit patterns, biases, or vulnerabilities in language data. By leveraging extensive text corpora—collections of authentic linguistic samples—these tools can identify subtle syntactic, semantic, or pragmatic tendencies, such as gendered language stereotypes, ideological slants, or syntactic structures prone to misinterpretation. Researchers might weaponize this by training models on biased corpora to amplify harmful biases or, conversely, use the same techniques to detect and mitigate those biases. The term highlights the dual potential of computational linguistics to both perpetuate and challenge linguistic inequities, depending on how these tools are designed, applied, and critically interrogated.
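
To make the detection side concrete, here is a minimal sketch of how one might probe a corpus for gendered occupation stereotypes. The toy corpus, the word lists (`female_terms`, `male_terms`, `occupations`), and the skew score are all illustrative assumptions, not part of the original definition; a real study would use a large authentic corpus, curated lexicons, and a proper tokenizer or embedding-based association test.

```python
from collections import Counter

# Toy corpus: stands in for a large collection of authentic text
# (news articles, web crawl data, etc.); these sentences are illustrative only.
corpus = [
    "The nurse said she would check the chart before her shift ended.",
    "The engineer explained that he had rewritten the firmware himself.",
    "The doctor told his patient he would order more tests.",
    "The teacher reminded her students that she expected the essays by Friday.",
    "The programmer admitted he had shipped the bug, and he fixed it quickly.",
]

# Hypothetical word lists chosen for illustration; a real analysis would use
# curated lexicons and a far larger set of target terms.
female_terms = {"she", "her", "hers", "herself"}
male_terms = {"he", "him", "his", "himself"}
occupations = {"nurse", "engineer", "doctor", "teacher", "programmer"}

def tokenize(sentence):
    """Lowercase and strip basic punctuation; a real pipeline would use a proper tokenizer."""
    return [w.strip(".,;!?") for w in sentence.lower().split()]

# Count how often each occupation co-occurs with gendered terms in the same sentence.
cooccurrence = Counter()
for sentence in corpus:
    tokens = set(tokenize(sentence))
    for occ in occupations & tokens:
        if tokens & female_terms:
            cooccurrence[(occ, "female")] += 1
        if tokens & male_terms:
            cooccurrence[(occ, "male")] += 1

# Report a simple skew score per occupation: positive means male-skewed,
# negative means female-skewed in this (tiny) sample.
for occ in sorted(occupations):
    f = cooccurrence[(occ, "female")]
    m = cooccurrence[(occ, "male")]
    total = f + m
    skew = (m - f) / total if total else 0.0
    print(f"{occ:12s} female={f} male={m} skew={skew:+.2f}")
```

The same idea scales up to embedding- or LLM-based association measures, where the skew is computed over learned representations rather than raw co-occurrence counts; whether such a measurement perpetuates or challenges the bias it exposes depends entirely on how the result is then used.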