I approached this book, The Co-Intelligence Revolution: How Humans and AI Co-Create New Values, written by Venkat Ramaswamy and Krishnan Narayanan, with a great deal of interest and expectation. After all, the two authors are well-known techies with long experience of working in the Information Technology (IT) sector. Unfortunately, my expectations were disappointed. Instead of giving readers an understanding of what constitutes machine intelligence, and of the ways in which it is similar to or different from human intelligence, we are subjected to a torrent of words designed to do the heavy lifting for the authors, without ever being told what these words really mean, or how their use of the word ‘intelligence’ differs from that of others in the AI field.
For the reader, it is even more confusing when a word we think we understand is transformed by the authors to carry a different meaning. A Humpty Dumpty world: ‘…when I use a word, it means just what I choose it to mean.’ Take, for example, the concept of machine intelligence. Addressing it means first defining what human intelligence is, and then deciding when we can consider that we have created an intelligent machine. This is the distinction between Artificial Intelligence, which can answer narrowly framed questions within certain boundaries, and Artificial General Intelligence, in which machines can solve problems (and possibly make the same mistakes) the way that we humans do, but of course much faster.
This appears to be a simple problem, except that some of the simplest problems are the hardest to solve. They go to the foundations of our science (or mathematics), and changes to foundations are rare and far more difficult. The theory of gravitation, for example, moved from a Newtonian to an Einsteinian framework over a few centuries. Many similar foundational changes in the natural sciences have also taken much longer than, say, changes in technology.
Even assuming that the pace of scientific and technological change has multiplied manifold today, it is difficult to conceive of as many revolutionary changes as the two authors find in the last few decades. To give just two examples:
Yes, IBM’s 360 architecture and NVIDIA’s CUDA software had a major impact on the computer industry and now on AI. The IBM 360 introduced a common platform for both scientific and commercial data processing, thereby multiplying its market and achieving economies of scale that others could not then match.
Similarly, CUDA, the programming platform NVIDIA created to ‘talk’ to the graphics processing units (GPUs) of computers, extended what was originally thought useful only for faster graphics processing to any computational task requiring parallel processing. This covers a large number of tasks: the very large data sets required, for example, for training neural networks in AI can be processed in parallel using GPUs. This allowed GPUs to become part of virtually every general-purpose machine, including even today’s PCs.
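To make the idea of parallel processing concrete, here is a minimal Python sketch. It uses NumPy as a stand-in for GPU execution (real CUDA code would run the same pattern across thousands of GPU cores, typically reached from Python through libraries such as CuPy or PyTorch), and the numbers are purely illustrative:

    import numpy as np
    import time

    # A million data points, e.g. pixel values or neural-network activations
    data = np.random.rand(1_000_000)

    # Serial approach: one element at a time, as a plain CPU loop would do it
    start = time.time()
    serial_result = [x * 2.0 + 1.0 for x in data]
    serial_time = time.time() - start

    # Data-parallel approach: the same operation applied to the whole array
    # in one step -- the pattern a GPU executes across thousands of cores
    start = time.time()
    parallel_result = data * 2.0 + 1.0
    parallel_time = time.time() - start

    print(f"serial: {serial_time:.3f}s, data-parallel: {parallel_time:.3f}s")

Scaled up from one arithmetic operation to the matrix multiplications over billions of parameters that neural networks require, this is the pattern that made GPUs, and CUDA as the way to program them, central to AI.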
Were they revolutionary developments? Clearly, in terms of business, they were, for both IBM and NVIDIA. In terms of technology, they were not earth-shaking. The major business impact of the IBM 360 was that its users could address a number of different kinds of tasks on the same machine, with compatibility across the entire new IBM machine line. As the leading journal IEEE Spectrum wrote, ‘The central feature of the System/360 was, of course, its compatibility. A growing data center could install a small 360 computer and later upgrade to a larger one without rewriting software or replacing peripheral equipment.’¹
By using CUDA as the software platform for its GPUs, NVIDIA essentially created a two-sided lock-in. If you wanted GPUs to boost the performance of your applications, you needed CUDA; if you had already developed your applications using CUDA, you needed NVIDIA GPUs. CUDA, like the IBM 360, became the base for NVIDIA’s near monopoly over GPUs and its expansion into industries like gaming and, later, AI; a market driven initially by gamers and video editors, and later by AI systems.²
The reason I am focusing so much on these two developments is that the authors themselves recognize them as major advances in computing. But they describe them almost entirely in terms of their business impact rather than of technology change, and tell us very little about what the IBM 360 actually did or why the CUDA-NVIDIA combination was so successful. This is also the key problem in the way they deal with the AI ‘revolution’. Describing a number of applications of current AI models, almost in the terms their sales teams would use, does not help us understand what the two authors call ‘Co-intelligence’ or what the ‘New Values’ are that they tell us AI is creating.
The key development behind the current Large Language Models (LLMs) is the transformer architecture, introduced in the landmark paper ‘Attention Is All You Need’.³
Without getting into details, the transformer made it possible to process an LLM’s training data much faster, using parallel processing to help the model learn the likely relationships between words. These relationships are then used to generate the answers to questions put to LLMs like ChatGPT. This brings out the importance of GPUs: they enable parallel processing not only of the training data but also of the queries we submit; not of graphics data, as originally intended, but of the language data from which the answers are produced. LLMs are not original in this sense, but their architecture allows much faster processing of language data.
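For the technically curious, here is a minimal Python sketch of the scaled dot-product attention that the ‘Attention Is All You Need’ paper introduced. It is a toy with made-up dimensions, not anyone’s production code, but it shows why the computation parallelizes so well: every token attends to every other token in a single matrix multiplication.

    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable softmax: each row becomes a probability distribution
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        # Pairwise token-to-token affinities, computed all at once
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        weights = softmax(scores)   # how much each token 'attends' to the others
        return weights @ V          # weighted mix of the value vectors

    # Toy example: a 'sentence' of 4 tokens, each an 8-dimensional vector
    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(4, 8))
    output = scaled_dot_product_attention(tokens, tokens, tokens)
    print(output.shape)  # (4, 8): one updated representation per token

In a real model this runs over thousands of tokens and many attention heads at once, which is why training is dominated by exactly the kind of matrix arithmetic that GPUs excel at.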
These models can be developed and harnessed for a variety of tasks, but with one overriding caution: they are models of what we speak or write about the external world, and they do not share our understanding of that world or our interactions with it, or with each other. As a set of leading AI researchers warned us, they can mimic our language well and are therefore ‘stochastic parrots’⁴, without a real understanding of the world. That is why LLMs are not a path towards Artificial General Intelligence, or AGI. An LLM behaves as if it were human, but having been trained only on text (remember, they are language models), it does not understand the real world in the way that even a three-year-old does by physically interacting with it.
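The ‘stochastic’ part can be made concrete with a toy Python sketch (the probabilities below are invented for illustration, not taken from any real model): an LLM repeatedly samples the next word from distributions learned from text, with no grounding in the world the text describes.

    import random

    # Invented next-word probabilities: a tiny stand-in for what an LLM
    # distils from its training text
    next_word_probs = {
        "the sky is": {"blue": 0.7, "falling": 0.2, "green": 0.1},
        "ice is": {"cold": 0.8, "hot": 0.1, "wet": 0.1},
    }

    def sample_next(context):
        # The model 'knows' only which words tend to follow which;
        # it has never seen a sky or touched ice
        probs = next_word_probs[context]
        words = list(probs)
        weights = [probs[w] for w in words]
        return random.choices(words, weights=weights)[0]

    print("the sky is", sample_next("the sky is"))
    print("ice is", sample_next("ice is"))

Real models do this over vocabularies of tens of thousands of tokens with far richer statistics, but the principle is the same: plausible continuation, not comprehension.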
Can we reach an understanding of reality through language models alone, perhaps with a little of something else added? I would have liked the authors to give us some overview of these and other issues, rather than simply reproducing the marketing hype of AI, particularly from those who seek continuous investment in their companies, such as Sam Altman of OpenAI. Remember how Altman, on his tour of India, publicly derided the possibility of an AI developed on a budget of just $10 million? That is more than what DeepSeek appears to have spent on its AI model, which is also the only one to have been subjected to open peer review.⁵ For the AI companies, hype is important: it brings infusions of new capital, and the promise of reaching the horizon of AGI keeps that capital flowing into their enterprises.
Do we believe that LLMs are useful, will create new applications, and will increase output per person and per dollar invested? This is the acid test for capital. If not, this could easily become a bubble much like the dot-com bubble of the 1990s.⁶ But let us also remember that while the dot-com bubble led to huge losses and bankruptcies, it also expanded the internet and built new monopolies, the foremost being Google, Facebook (now Meta) and NVIDIA. Bubbles weed out a number of companies through bankruptcies, but if new areas of business are created, they also create new monopolies.
For all the enthusiasm of the authors in the book under review, the jury is still out on whether the AI sector is a bubble, or whether it will create completely new monopolies. If MIT’s new study is any indication, only 5% of AI projects are making money; 95% are not.⁷ This does not mean an end to AI; let us accept that AI is here to stay, just as the dot-com bubble did not destroy the internet, which has stayed with us. But it does mean that alternative approaches to developing AI, with much lower capital and energy expenditure, might give us better solutions.
With the appearance of DeepSeek, we now have an alternative to the extremely computation-heavy, capital-heavy, brute-force route to training LLMs taken by OpenAI, Google, Meta, Amazon and other US companies. DeepSeek has pioneered an approach that is far less computationally demanding, and therefore less capital intensive; in these critical times of global warming, it is also much less energy hungry and therefore lighter on the environment.
I will end this review of Ramaswamy and Narayanan’s The Co-Intelligence Revolution with two cautions. The first is a rather simple one against hyping Large Language Models. Even though LLMs extrapolate from the huge data they have already processed, they are essentially language models, stochastic parrots however large, and therefore only appear to give us credible answers. They are mimicking us on the basis of the enormous data they have consumed, not understanding the meaning of our questions or of their answers. Yes, as their increasing use suggests, they are useful for a variety of tasks, just as Google’s search engine today incorporates AI without most users even being aware of it (confession: I use AI tools and Google for a variety of my own tasks).
The second: do LLMs provide a path towards Artificial General Intelligence (AGI) that can truly understand the world we live in, and give us meaningful answers to our real-world problems rather than mimicking the text data they have ingested? Though the jury is still out on that one, I am going to vote with those who believe that LLMs will not lead us to the holy grail of AGI. I believe what Timnit Gebru, the former Google researcher who worked on these issues, and Geoffrey Hinton, who won the Nobel Prize in Physics for his work on machine learning, have written: LLMs are overhyped and their dangers not well understood; not that these models will turn into malign entities, but that they can still cause deep social harm if, because they are overhyped in the interest of big capital, we fail to understand what they really are.
Unfortunately, Ramaswamy and Narayanan, though better placed than most of us to take a critical view of the field, have failed us by presenting an overhyped view of LLMs, without critically examining what LLMs really do, or their impact on all of us, given that they are in the hands of a few very large global monopolies focused narrowly on their profits. That is why their book, useful as it might be for some, is by and large a wasted opportunity.
Footnotes
1. James W. Cortada, ‘Building the New IBM 360 Nearly Destroyed IBM’, IEEE Spectrum, 5 April 2019. https://spectrum.ieee.org/building-the-system360-mainframe-nearly-destroyed-ibm
2. ‘NVIDIA’s Winning Platform Strategy with CUDA’, Harvard Digital Initiative. https://d3.harvard.edu/platform-digit/submission/NVIDIAs-winning-platform-strategy-with-cuda/
3. Ashish Vaswani et al., ‘Attention Is All You Need’, arXiv preprint, 2017. https://arxiv.org/pdf/1706.03762
4. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. https://dl.acm.org/doi/10.1145/3442188.3445922
5. ‘Secrets of Chinese AI Model DeepSeek Revealed in Landmark Paper’, Scientific American. https://www.scientificamerican.com/article/secrets-of-chinese-ai-model-deepseek-revealed-in-landmark-paper/
6. ‘2000 Dot-Com Bubble’, Goldman Sachs: Our Firm History. https://www.goldmansachs.com/our-firm/history/moments/2000-dot-com-bubble
7. ‘MIT Report: 95 Percent of Generative AI Pilots at Companies Failing’, Fortune, 18 August 2025. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
Prabir Purkayastha is a writer, journalist and activist based in New Delhi.

