Predicted Questions with Model Answers
Q1 (5 marks — 50 words): What is algorithmic bias? Give one example relevant to public administration.
Model Answer:
Algorithmic bias occurs when an AI system, trained on historically discriminatory data, systematically produces unfair outcomes for particular groups. Example: A welfare beneficiary-identification algorithm trained on old data where tribal communities were systematically undercounted continues excluding eligible tribals — perpetuating historical injustice with mathematical precision and apparent objectivity, making the bias harder to detect and contest than individual officer bias.
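The mechanism can be made concrete in a short Python sketch. All numbers and names below are invented for illustration; no real scheme or dataset is implied. A decision rule learned from records that undercounted one group simply replays that undercounting as exclusion.

```python
# Hypothetical sketch: how a rule learned from undercounted records
# reproduces exclusion. All numbers are invented.

# Historical records: (belongs_to_undercounted_group, recorded_as_eligible).
# Eligible households in one group were systematically missed by past surveys.
records = ([(False, True)] * 80 + [(False, False)] * 20
           + [(True, True)] * 5 + [(True, False)] * 45)

def learned_eligibility_rate(records, in_group):
    outcomes = [eligible for group, eligible in records if group == in_group]
    return sum(outcomes) / len(outcomes)

# A naive "model" that replays historical base rates per group:
print(learned_eligibility_rate(records, False))  # 0.80
print(learned_eligibility_rate(records, True))   # 0.10

# A threshold rule (approve only if the learned rate exceeds 0.5) now
# rejects every applicant from the undercounted group, with apparent
# statistical objectivity, regardless of genuine eligibility.
```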
Q2 (5 marks — 50 words): What is the "Human-in-the-Loop" principle and why is it important in administrative AI?
Model Answer:
Human-in-the-Loop (HITL) requires that a human with moral and legal authority review and approve AI recommendations before high-stakes administrative decisions (denial of rations, criminal profiling, benefit termination) take effect. Importance: (1) preserves accountability — a human remains morally and legally responsible; (2) applies contextual conscience AI lacks; (3) allows compassionate exceptions; (4) protects natural justice (audi alteram partem) and citizens' constitutional due-process rights.
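A minimal sketch of such a gate, with action names, data fields, and functions invented for illustration: rights-affecting recommendations are routed to a named human reviewer instead of auto-executing.

```python
# Hypothetical sketch of a Human-in-the-Loop gate. All names invented.

HIGH_STAKES = {"ration_denial", "benefit_termination", "criminal_profiling"}

def ai_recommend(case):
    # Stand-in for a real model: returns a suggested action and confidence.
    return case["suggested_action"], case.get("model_confidence", 0.0)

def officer_review(case, action, confidence):
    # Placeholder for a reasoned, signed order by a named officer,
    # issued after hearing the affected citizen (audi alteram partem).
    print(f"Officer reviewing '{action}' (model confidence {confidence:.0%})")
    return "approved with recorded reasons"  # or rejected / modified

def decide(case):
    action, confidence = ai_recommend(case)
    if action in HIGH_STAKES:
        # Rights-affecting recommendations never auto-execute.
        return officer_review(case, action, confidence)
    return action  # routine, low-stakes actions may proceed automatically

print(decide({"suggested_action": "benefit_termination",
              "model_confidence": 0.92}))
```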
Q3 (5 marks — 50 words): Why can AI not be a moral agent in the Kantian sense?
Model Answer:
Kant's moral philosophy requires a moral agent to possess: (1) rational will — the capacity to legislate universal moral laws; (2) autonomy — acting from duty rather than inclination or programming; (3) dignity — being an end in oneself. AI possesses none of these: it executes programmed instructions, optimises objective functions, has no duty, no will, and no dignity. Therefore, AI can be a moral instrument but never a moral agent — responsibility always remains with the human designer or user.
Q4 (10 marks — 150 words): "AI is a powerful tool but a dangerous master in public administration." Critically evaluate this statement with reference to the conflict between algorithmic efficiency and human conscience.
Model Answer:
AI brings genuine strengths to administration: scale, speed, consistency, and fraud detection. India's Aadhaar-DBT pipeline, GSTN fraud analytics, and satellite-based drought assessment demonstrate how algorithms extend the reach of the state to millions. These are areas where AI as a tool amplifies human administrative capacity.
However, unchecked AI becomes a dangerous master when it substitutes for conscience rather than serving it. Three critical dangers emerge: First, algorithmic bias — AI trained on historically discriminatory data perpetuates injustice at scale, targeting tribals, Dalits, and women who were already marginalised. Second, the accountability gap — the "problem of many hands" makes it impossible to locate moral responsibility when an AI causes harm. Third, dehumanisation — reducing citizens to data clusters denies them dignity.
Philosophically, AI embodies only what Max Weber called Zweckrationalität (means-end rationality), while administration also demands Wertrationalität (value rationality). Kant's categorical imperative and Aristotle's phronesis (practical wisdom) both point to conscience — not algorithms — as the bedrock of ethical administration.
The responsible approach is Human-in-the-Loop: AI recommends, humans decide — especially for rights-affecting decisions. NITI Aayog's Responsible AI for All (2021) framework and the proposed Digital India Act move in this direction. The RAS officer's role is to be an ethically trained moral agent who deploys AI wisely, not one who outsources conscience to a machine.
Q5 (5 marks — 50 words): What is "explainability" in AI and why does it matter for natural justice in administration?
Model Answer:
Explainability (XAI) means an AI system can articulate why it produced a particular recommendation in terms humans can understand. It matters for natural justice because: (1) audi alteram partem (hear the other side) requires that citizens know the reasons for decisions affecting them; (2) Article 14's bar on arbitrary state action demands reasoned orders; (3) without explanations, citizens cannot mount effective challenges or appeals. Black-box AI that silently excludes welfare beneficiaries violates both constitutional and ethical norms.
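A hypothetical sketch of the contrast, with invented weights and feature names: in a reason-giving system, each factor's contribution to the score doubles as a stated, contestable ground.

```python
# Hypothetical sketch of a reason-giving score. Weights and feature
# names are invented; a real XAI pipeline would be far richer.

weights = {"income_below_threshold": +2.0,
           "landholding_below_ceiling": +1.5,
           "duplicate_id_flag": -3.0}

def score_with_reasons(applicant):
    # Each factor's signed contribution doubles as a stated reason.
    contributions = {f: w * applicant.get(f, 0) for f, w in weights.items()}
    total = sum(contributions.values())
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, reasons

total, reasons = score_with_reasons(
    {"income_below_threshold": 1, "duplicate_id_flag": 1})
print("eligible" if total > 0 else "excluded")   # excluded
for factor, contribution in reasons:
    print(f"  {factor}: {contribution:+.1f}")
# The applicant can now see that the duplicate-ID flag caused the
# exclusion, and can appeal against that specific, stated ground.
```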
Q6 (10 marks — 150 words): "The greatest risk of AI in public administration is not that it will make wrong decisions, but that it will make right decisions for wrong reasons." Discuss with examples.
Model Answer:
The statement captures a profound ethical danger: AI can produce statistically correct outcomes that are morally hollow because the reasons behind them — correlations in training data — may be unjust, discriminatory, or arbitrary.
Consider predictive policing: an AI tool may correctly predict high crime rates in a particular neighbourhood because historically more police were deployed there (leading to more recorded crime). The AI's "right decision" to deploy more officers is grounded in a circular, unjust reason — it reinforces a surveillance pattern rooted in community profiling rather than actual criminality.
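A toy simulation of this circularity, with invented numbers: both wards have identical true crime rates, yet recorded crime, and hence the "data-driven" allocation, stays locked to the initial deployment skew.

```python
# Toy simulation of the predictive-policing feedback loop; all numbers
# invented. Both wards have the same true crime rate by construction.

true_rate = {"ward_A": 0.05, "ward_B": 0.05}   # identical true rates
patrols = {"ward_A": 10.0, "ward_B": 2.0}      # historical deployment skew
recorded = {"ward_A": 0.0, "ward_B": 0.0}

for year in range(50):
    for ward in recorded:
        # Recorded crime tracks patrol presence, not underlying crime.
        recorded[ward] += true_rate[ward] * patrols[ward]
    # "Data-driven" reallocation: patrols follow the recorded numbers.
    total = sum(recorded.values())
    patrols = {w: 12.0 * recorded[w] / total for w in recorded}

print(recorded)  # ward_A's record is ~5x ward_B's despite equal true rates
# The allocation never escapes the initial skew: the algorithm's only
# "evidence" is the pattern its own deployment produced.
```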
Similarly, a credit-scoring AI may correctly deny loans to rural women because historically they had lower repayment rates — a rate shaped by systemic poverty and patriarchal banking exclusion, not creditworthiness. The "right" statistical decision is grounded in structural injustice.
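The same point in sketch form, with invented numbers: observed repayment is depressed by exclusionary loan terms, not by ability to repay, so the statistically "right" denial encodes the injustice.

```python
# Toy illustration: a "correct" statistical decision grounded in injustice.
# All numbers invented. Intrinsic creditworthiness is equal by construction;
# only the terms historically offered to each group differ.

intrinsic_creditworthiness = 0.9                         # same for all
terms_penalty = {"urban_men": 0.0, "rural_women": 0.3}   # exclusionary terms

def historical_repayment_rate(group):
    # Harsher terms (collateral demands, distant branches) depress the
    # observed rate for rural women; ability to repay is identical.
    return intrinsic_creditworthiness - terms_penalty[group]

for group in terms_penalty:
    rate = historical_repayment_rate(group)
    decision = "approve" if rate > 0.7 else "deny"
    print(group, round(rate, 2), decision)
# urban_men 0.9 approve / rural_women 0.6 deny: statistically "right",
# but grounded in the structural exclusion the data encode.
```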
This is why conscience cannot be eliminated from administration: human moral reasoning can interrogate why a pattern exists, whereas AI can only learn that it exists. A conscience-driven officer asks: "Is this correlation just? Does it reflect structural disadvantage that the state should remediate, not replicate?"
The ethical administrator's task is to use AI's pattern-recognition power while subjecting it to moral scrutiny — deploying it where data reflect genuine merit and rejecting it where data reflect historical injustice. This requires not just technical oversight but ethical training: the kind that IAS/RAS foundation courses and ethics papers are designed to cultivate.
