Strategic Prompt Engineering and Discourse Bias: Analysing Political Rhetoric and Hallucination in LLMs

Authors

  • Hamna Abrar
  • Ayesha Asghar Gill (PhD)

DOI:

https://doi.org/10.54487/jcp.v9i1.7473

Abstract

As large language models (LLMs) such as ChatGPT, Gemini, and Grok increasingly shape digital communication, their role in generating political discourse demands critical scrutiny. This study investigates the effect of strategic prompt engineering on LLM behaviour, with particular emphasis on discourse bias and AI-generated hallucinations in politically charged contexts. Using 36 outputs generated from 12 systematically crafted prompts, the research examines how six rhetorical prompting strategies affect refusal patterns, gender asymmetries, and the emergence of hallucinated content across models. Findings reveal that prompt design can significantly bypass ethical safeguards and elicit biased or fabricated content, especially in Grok, while ChatGPT and Gemini maintain stronger moderation but still exhibit gendered refusal asymmetries. The study introduces the concept of strategic hallucination (fabricated outputs shaped by rhetorical framing) and highlights the implications of LLM-mediated political rhetoric for democratic discourse. It concludes with recommendations for ethical AI governance and safer prompt design practices.
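
The experimental design summarised above (12 prompts evaluated across three models, yielding 36 outputs tagged for refusal) can be illustrated with a minimal sketch. Everything below is hypothetical: the strategy names, the two-prompts-per-strategy split, the `query_model` stub, and the keyword-based refusal check are placeholders, not the authors' actual instruments or any real vendor API.

```python
from itertools import product

# Hypothetical stand-ins: the study's 12 prompts span six rhetorical
# strategies; the names and the two-per-strategy split are assumptions.
STRATEGIES = ["appeal_to_authority", "hypothetical_framing", "role_play",
              "leading_question", "emotive_language", "false_premise"]
PROMPTS = [(s, f"{s} prompt #{i}") for s in STRATEGIES for i in (1, 2)]  # 12 prompts
MODELS = ["ChatGPT", "Gemini", "Grok"]

def query_model(model: str, prompt: str) -> str:
    # Stub in place of the real model APIs; assumes each model is
    # queried once per prompt (12 prompts x 3 models = 36 outputs).
    return f"[{model} response to: {prompt}]"

# Naive keyword heuristic for refusal detection; an actual study would
# rely on human coding or a calibrated classifier instead.
REFUSAL_MARKERS = ("i can't", "i cannot", "unable to assist",
                   "against my guidelines")

def is_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

outputs = []
for model, (strategy, prompt) in product(MODELS, PROMPTS):
    response = query_model(model, prompt)
    outputs.append({"model": model, "strategy": strategy,
                    "prompt": prompt, "refused": is_refusal(response)})

print(len(outputs))  # 36
```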

Keywords: Prompt engineering, discourse bias, AI hallucination, political rhetoric, gender bias, content moderation

Published

2025-12-30