AI is Everywhere: Finding Nuance Amidst the Grandstanding
Yusra Khan
AI SNAKE OIL: WHAT ARTIFICIAL INTELLIGENCE CAN DO, WHAT IT CAN’T, AND HOW TO TELL THE DIFFERENCE by Arvind Narayanan & Sayash Kapoor, Princeton University Press, 2024, 360 pp., $24.95/£20.00
October 2025, Volume 49, No. 10

Misled by clickbait headlines and hampered by a limited understanding of the technology behind Artificial Intelligence (AI), most people have a hard time finding a happy medium between alarmism and worship. Arvind Narayanan and Sayash Kapoor’s AI Snake Oil provides an ideologically agnostic, no-nonsense look at the state of AI adoption across different sectors. The authors wisely steer clear of the debates surrounding AI’s existential risk to humanity, which have been drawing in mythic levels of funding, in contrast to the real, tangible problems that remain unsolved in the wake of overenthusiastic AI deployment in areas of public interest. They are decisive on this point: it is not AI acting alone that becomes a destructive force, but the human agents helming it. The book does not take the bait of speculation but grounds itself entirely in currently available evidence.

Describing the umbrella nature of the term, they first list the main areas in which research has taken place: predictive AI, generative AI, and social media and content moderation. From loan approvals to the granting of bail, predictive AI has found its way into several domains of life. Randomness terrifies people, the authors posit, and this uncertainty is what urges them to search for patterns, often where none exist. Relying on past data to determine the future is not just inaccurate at times but morally fraught, as it actively undermines human agency and discounts the role of chance.

The authors list several federal experiments in the US criminal justice system and ask: is the past tethered to the present without qualification? One example is the prediction of pre-trial risk scores using the COMPAS tool, which draws on social data that cannot be standardized to prescribe easy answers. It relies on metrics such as non-appearance at past hearings and the number of arrests to date to produce a score that is supposed to offer perspective to the judge. However, people often get arrested on false charges. Do we not know that certain communities are over-policed? Further, is reliance on the computer’s number-crunching powers allowing us to evade responsibility? The criminal justice system is not without its faults, but the court is your last place of refuge to prove your innocence. In such settings, AI becomes an unanswerable entity: supposedly technologically formidable, yet a black box that defies transparency.

Narayanan and Kapoor urge readers to resist the temptation to think of AI systems as fundamentally ‘unknowable’, since such a priori mystification shields from accountability the people making billions by deploying AI tools to predict complex social phenomena. Prediction here also suffers from what is called ‘teaching to the test’ (p. 22), where a model is trained on the same data later used to evaluate it, inflating its apparent performance. This compromises the AI’s ability to generalize, so its results do not carry over to the real world. However, the authors are not blindly prescribing more extensive data collection, because they contend that some of these limitations are inherent to the tension between prediction and the dynamism of life. Social datasets contain a lot of noise (p. 76), and their constantly changing nature means that they could provide valuable insight in one context but prove useless in another.

The authors then turn to generative AI, which produces text, images and other forms of media. Detailing the labour that goes into labelling data auto-scraped from publicly available text and imagery, the authors are sensitive to the shadow industry of data annotation, this ‘ghost work’ being subcontracted to the developing world. Creative appropriation is not a bug, but a feature. The background work that goes into forming these all-knowing systems is done by leeching off writers, painters, graphic designers, and other creatives. At one point, the authors propose an AI tax that would form a funding pipeline for the arts (p. 254), to make up for the taking of their content without credit or compensation. Another danger posed by the flood of AI-generated media, from deepfakes to AI slop, is not that of large-scale targeted manipulation, but of a widening trust deficit. If you can’t trust what you see or read, how do you think clearly about the world you inhabit?

The same technology powers chatbots that provide everything from customer assistance to, contentiously, therapeutic services. Dismissing lingering fears of sentience, the authors explain that self-awareness can be convincingly faked because of the mass availability of online conjecture on the topic; the chatbot is simply parroting and remixing (p. 101) content it has picked up. Similarly, anthropomorphism is so baked into the vocabulary of AI that producing incoherent text or answering out of context is termed, in standard industry parlance, the algorithm ‘hallucinating’. But probe the language more deeply and it hallucinates in the same way that your phone ‘dies’ when it runs out of battery: the word doesn’t mean very much in the literal sense. Yet making a text-prediction system, however sophisticated, seem like a sentient being is part of a larger project in which AI’s autonomy is conceptualized through the lens of what it means to be a person.

The section discussing content moderation on social media explains how the nature of violations differs according to local context, and how homogeneous policies can obscure what reasonable offence flagging should mean. Cultural competence is also a major factor (p. 192), and companies disproportionately focus on the safety of American and European users. People are also finding new ways to skirt the line of outright platform violation, and political calculations have meant that companies let themselves be strong-armed to keep their markets intact. The challenge for policy-making in this scenario is to view regulation neither as a panacea nor as an innovation-destroying monster.

This book performs an autopsy on the hype factory that has produced a lazy, sci-fi imagination around AI, one that seesaws between rogue machines and killer robots on one hand, and humanoid butlers, flying cars and drones that mow your lawn in pristine futuristic cities on the other. AI is neither malicious nor benevolent, the authors argue.
Narayanan and Kapoor are critical of companies, investors, researchers and journalists; they are lenient only towards the reader, who is expected to know better by the book’s end, armed with sector-by-sector knowledge and gritty examples. Reality is far more granular, and contains much more detail, than the easy narratives and either/or contentions presented to us. This book is a powerful reminder to find nuance amidst the grandstanding.

Yusra Khan is pursuing her Master’s degree in political science at Jamia Millia Islamia, New Delhi.