
Can We Call Current AI’s Ability Intelligence, and Why Is This Like an Economic Liberalization Moment for Indian Tech Companies?

I decided to write this essay after an exchange of posts on Twitter with Sameer Shisodia, the CEO of Rainmatter Foundation. Even writing these lines can be considered a form of intelligence. However, I did not possess this specific type of intelligence just 4 or 5 years ago, and thus I wouldn't have understood Sameer’s post or been able to respond with an essay. The source of this intelligence is primarily the experiences and knowledge I've gained over the past 4 to 5 years through books, podcasts, videos, interactions, work, and personal experiences.

I started this essay on Saturday, May 11th, and after last night's mesmerizing launches from OpenAI, this essay feels insignificant. Nonetheless, it is always valuable to understand the pivotal moments in the AI journey that have led many to believe in its potential for the next decade and to consider AI a general-purpose technology like fire, electricity, and the internet. It is also crucial for us, as Indian builders, not to think of this solely in terms of technology. We must consider the intersection of Indian problems and technology so that technology can be tailored to our unique personas, skills, talents, capabilities, languages, and, more specifically, our problems. Many Indian tech companies expect Indians to adapt to technology, which is one reason they struggle to penetrate deeper into the market and generate revenue.

Let’s begin with the process of writing these texts in the digital world, which is essentially the manipulation of electrical signals (0s and 1s). My mind (neurons, specifically) is predicting the next suitable words based on all the inputs I've encountered, mostly in the past 4 to 5 years and throughout my life. This is also how current AI functions. Now, let's start with a definition of intelligence: in one sense, intelligence is the creation of “new knowledge” at a personal level, either by practising old knowledge or empirically. The mention of empiricism is important because, at the start of civilization, there was no knowledge; we developed “new knowledge” empirically. Today, we use “old knowledge” to develop “new knowledge” structurally.

The creation of knowledge starts with “learning and understanding.” My ability to learn and understand is based on past knowledge, meaning that exposure to past knowledge (let’s call this information) is responsible for “knowledge creation.” Of course, intelligence goes much deeper, encompassing problem-solving, reasoning and logic, adaptability, critical thinking, self-awareness, etc. Most likely, AI will follow the same evolution. However, today’s AI exhibits intelligence in terms of “learning and understanding” because the fundamentals of learning and understanding for LLM-based AI are the same as for humans. 

One specific example that illustrates why today’s AI has a component of intelligence is DeepMind’s DQN (Deep Q-Network), which was trained to play classic Atari games like Breakout. The DQN learned to play the games by itself. I remember playing this game on a mobile phone at the age of 12 or 13 but never thought of it from an intelligence (learning and understanding) perspective. As I tried a few times, my score improved, which is a sign of learning. When I read about DQN’s ability to learn and create new knowledge, I tried this game again. You can also try it here: Atari Breakout. It took me three tries and about five minutes to complete the easy level. I played randomly, without any strategy. 


The rule of the game is simple: the player controls a paddle (left and right) and bounces the ball (up and down) to destroy as many bricks as possible. The more bricks destroyed, the higher the score.

The DQN model was trained on raw pixels, frame by frame, along with the score, to learn the relationship between the pixels and the control actions of moving the paddle left and right. Initially, the algorithm moved the paddle randomly; once it found the rewarding actions, it began destroying bricks row by row.

However, DQN discovered a clever strategy. Instead of steadily knocking out bricks row by row, it learned to dig a tunnel through one side of the wall, trapping the ball above the bricks so it ricocheted along the top and destroyed them from behind. This method earned the maximum score with minimum effort (the most efficient way to complete the game). Reading this, you should be saying “wow,” because something genuinely impressive has happened here.

This is the creation of new knowledge based on “learning and understanding.” Without understanding, the algorithm could not have found this impressive method, which is not obvious to many humans (including me when I played this game; I might have discovered it had I played longer). This cannot be termed mere augmentation. It would have been augmentation if it had simply replayed whatever was fed to it during training. Instead, it learned from a rewards system grounded in old information and created entirely new knowledge, not obvious to many humans. This is a pure form of intelligence.
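The reward-driven learning described above can be sketched in a few lines. This is not DQN itself (which uses a deep network over raw pixels); it is a tabular Q-learning toy on a hypothetical mini-game I made up for illustration: a paddle at position 0 to 4 must move under a ball, and the agent is rewarded only for closing the distance. It is never told the rules, yet the greedy policy it learns moves the paddle the right way.

```python
import random

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = [-1, 0, +1]                   # move left, stay, move right
Q = {}                                  # Q[(paddle, ball)] -> value per action


def q(state):
    # Lazily create a zeroed value estimate for unseen states.
    return Q.setdefault(state, [0.0, 0.0, 0.0])


random.seed(0)
for episode in range(2000):
    paddle, ball = random.randint(0, 4), random.randint(0, 4)
    for _ in range(10):
        state = (paddle, ball)
        # Epsilon-greedy: mostly exploit what we know, occasionally explore.
        if random.random() < EPSILON:
            action = random.randrange(3)
        else:
            action = max(range(3), key=lambda a: q(state)[a])
        paddle = min(4, max(0, paddle + ACTIONS[action]))
        reward = -abs(paddle - ball)    # closer to the ball = better
        # Q-learning update: nudge the estimate toward
        # observed reward + discounted best future value.
        target = reward + GAMMA * max(q((paddle, ball)))
        q(state)[action] += ALPHA * (target - q(state)[action])

# After training: paddle far left, ball far right -> learned action is "right".
best = max(range(3), key=lambda a: q((0, 4))[a])
print(ACTIONS[best])
```

The key line is the update rule: nothing about “move toward the ball” is ever programmed in; the behaviour emerges purely from the reward signal, which is exactly the property that let DQN discover the tunneling trick.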

Of course, the AI world has advanced significantly since those early algorithms. Today, we have LLMs (Large Language Models) trained on trillions of tokens, distilling some 13,000 years of accumulated human knowledge, with training runs consuming compute measured in billions of petaFLOPs. Most AI companies emphasize computing because computing power is fundamental to intelligence. Let's grasp the magnitude: one petaFLOP per second is like a billion people, each holding a million calculators, all hitting “equals” at the same instant, every second. If you can wrap your head around that, good for you! This is why the future is both exciting and scary. Even one of the most intelligent humans, Isaac Newton, could not solve the 3-body problem; the calculations were far too vast for pen and paper. Today, we can create such computational power artificially and numerically simulate not just the 3-body problem but the n-body problem. Despite limitations in energy, this is purely mind-blowing.
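The calculator analogy above is easy to sanity-check, assuming each person presses “equals” once per second (one operation per calculator per second):

```python
# Back-of-the-envelope check of the calculator analogy.
people = 10**9                      # a billion people
calculators_each = 10**6            # a million calculators per person
ops_per_second = people * calculators_each * 1   # one keypress each, per second

peta_flop = 10**15                  # a petaFLOP/s is 10**15 operations/second
print(ops_per_second == peta_flop)  # the analogy checks out
```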

One aspect of LLMs that isn't often mentioned is their foundation in Claude Shannon's information theory. Shannon's 1948 paper “A Mathematical Theory of Communication” laid the groundwork for the digital world and is fundamental to today's LLMs. Shannon estimated that English is roughly 75 to 80% redundant, meaning most of language is predictable; producing it is largely a matter of smartly guessing the next suitable word, which LLMs learn from exposure to trillions of tokens. This is how current AIs understand input and generate output, similar to how humans learn and understand. Hence, we call AI’s ability intelligence.
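Shannon-style next-word guessing can be demonstrated at a miniature scale. The sketch below builds bigram counts from a tiny made-up corpus and predicts the most likely continuation of a word; LLMs do essentially this, but over trillions of tokens with a deep network instead of a count table.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (invented for this example).
corpus = ("the ball hits the paddle and the paddle hits the ball "
          "the player moves the paddle left and right").split()

# Count which word follows which: bigrams["the"]["paddle"] == 3, etc.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1


def predict(word):
    # Return the most frequent continuation seen after `word`.
    return bigrams[word].most_common(1)[0][0]


print(predict("the"))  # "paddle" follows "the" most often in this corpus
```

Shannon played exactly this guessing game with human subjects to estimate the redundancy of English; scaling the same idea up is what makes next-word prediction such a powerful training objective.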

Interestingly, I believe hallucination in AI is a feature, not a bug. Humans also hallucinate when we don’t know something but are asked to answer. While we aim for AI with minimal hallucination, it highlights how the fundamental intelligence of “learning and understanding” is similar for both AI and humans. Intelligence, once a human phenomenon, is now an atomic phenomenon because we use silicon chips (atoms) to train these LLMs, creating intelligence in the digital world that performs better than humans. However, as humans develop “knowledge,” our method of learning differs from that of LLMs. We are more methodical, learning alphabets and grammar to speak, write, and understand languages. Yet the most fundamental form of intelligence is unstructured, such as learning a mother tongue without formal teaching, which resembles the natural learning of LLMs.

For example, I hated grammar and translation in school and still struggle with grammar, pronunciation, and spelling mistakes. However, I can convey my thoughts in written form decently without following a structured learning process. This is due to unstructured learning from books, podcasts, videos, and interactions. All these inputs have formed a model in my mind, allowing me to guess the next word, punctuation, and phrasing. 

Intelligence built this way has its disadvantages, which is where reinforcement learning enhances LLMs to a god-like level. After initial human intervention, these models find the most effective paths, as in the earlier Breakout example. This is both exciting and scary. On one hand, we can solve complex societal problems, make products and services affordable, and democratize intelligence. On the other, like most technologies, this comes with dual-use potential.

Now, about our work and why we are excited about progress in LLMs. When you understand technology only superficially, you can get carried away by trends and buzzwords. In the past, I focused solely on technology. However, for countries like India, the focus should always be the intersection of technology and local problems. On that front, a few quotes from one of my heroes, Nandan Nilekani, fit perfectly:

“Indian startups have to focus on applications and use cases rather than building the biggest large language models (LLMs) like Silicon Valley startups”

"Winners in AI in India will be those who meet customers where they are” 

“We are moving to a world where instead of you adjusting to the technology, the technology will adjust to you. It will adjust to you as a person, adjust to your skills, adjust to your talents, adjust to your capabilities, adjust to your language……what we have found is whenever we make it easier for technology to adjust to you then our usage goes up.”

“Our advantage currently lies not in compute, cloud, or chips. Our advantage is our population and their aspirations. This is why we have to bring down the cost of inference from Rs 100 to Rs 1……Innovate frugally to dramatically reduce the cost of AI. We can't deliver it to a billion people unless we can begin to charge 1 rupee per transaction.”

The best way to reduce costs is to apply these technologies at scale. Even after all the adjustments, it will only be affordable if we make it suitable for around 500 million Indians. Except for e-commerce and online payments, almost all industries in India will benefit from this mindset. Our mission to make quality healthcare affordable for a billion Indians is driven by our understanding of AI’s crucial role in eliminating many of the cost components that make quality healthcare unaffordable for 90% of households.

I am super excited about intelligence being the outcome of “atoms” rather than of humans alone. Unlike humans, who do not scale, atoms have the same properties across the universe, which means intelligence can also become fundamental and accessible to everyone on this planet, or even in this universe.

Thanks for reading! If you found this interesting, please share it with your network. I shall see you all next week :)
