
Recent Developments in Artificial Intelligence

Possible implications for the world and India


Bala Parthasarathy and Harsha Garlapati


Abstract

This paper attempts to introduce, to a non-specialist Indian audience, the recent developments in the field of Artificial Intelligence (AI) that have led to offerings like ChatGPT. To avoid making it look like another rabbit that the Western S&T establishment has pulled out of its hat, we look at the historical evolution of AI and show how this has been in the making for well over half a century. While sharing the widely prevalent excitement that products like ChatGPT, Bard etc. are generating, we explain some key factors behind it and bring out how these are likely to bring about foundational changes in most spheres of human endeavour. We also briefly examine whether a large part of the excitement is due to the hype that has been created around it, and whether the bubble is likely to burst soon, as in some other recent cases like cryptocurrencies. Considering the depth and scale of impact the recent developments in AI are likely to have in most spheres of human activity, we dwell at some length on the threats they could pose to peoples and nations all over the world, particularly those committed to an open, liberal and democratic path of development. Finally, we attempt to see how the scene may unfold in India, with its vast and diverse society of extremely uneven development, and what roles the different players should play to ensure that we benefit from this technological revolution fully and equitably.


Some technical terms used in this paper:

Neural Network - Algorithmic version of a network of artificial neurons, loosely modelled on the brain

GPT - Generative Pre-trained Transformers (Pre-trained means they are not continuously trained)

Fine Tuning - Training GPT for specific tasks/domains

Instruction Tuning - Training GPT to follow specific instructions, e.g.: “Chat like humans”

Parameter Size - Number of learnable variables ("weights & biases") in the network; a rough measure of model capacity

Zero-shot (or n-shot) - Number of examples given in the prompt to get the answer you want (zero-shot means none)

GPU - Graphics Processing Units: special-purpose processors invented for gaming but now widely used for AI

Small models - Llama, Vicuna, Falcon, etc - open source models that are smaller but very powerful

Diffusion - Family of image-generation models, used by tools like MidJourney and Stable Diffusion


I. Brief History of Artificial Intelligence


Artificial Intelligence (AI) has been a part of Computer Science since the 1950s, when it was primarily applied to game playing. Over the first few decades, computing power was restricted to mainframes and evolution was slow. We will begin by summarising a 2023 paper on the topic by Vasant Dhar [1].


The first major development was in the field of Expert Systems. It was thought that if we could capture the knowledge of experts in specific fields, say an Electrical or Chemical Engineer working in an oil refinery, the computer could mimic the decisions such an expert would make when faced with real-world problems. This was an elaborate failure, to put it mildly: it was realised that human beings and their decision making are far too complex to even be captured, let alone used in such simplistic fashion. It was not until the late 80s and 90s that the next generation of AI came to the fore with Machine Learning. With the emergence of the Internet, the maturing of databases and better compute power, there was an important paradigm shift.


The Machine Learning paradigm shift, in simplified terms, came when researchers realised that instead of training machines on "expert" rules, it was better to feed them empirical data of what worked and what did not, and let the machine "learn" what is right. This was done with a curated data set and a "loss function", a measure of how far wrong the algorithm is, measured empirically. By a repeated trial-and-error process on the training data, the machine learnt to minimise the loss. Once trained, the algorithm could work on any similar data and predict, with high probability, what it thinks is correct. It turned out that the algorithms could deduce the implicit IF-THENs in human knowledge better than the experts could explicitly specify them.
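For readers who would like to see the trial-and-error loss minimisation in concrete terms, here is a deliberately tiny sketch of our own (not from Dhar's paper): fitting a single weight to made-up data by gradient descent, the workhorse technique behind this kind of learning.

```python
# Toy illustration: "learning" as loss minimisation. We fit one weight w
# so that the prediction w*x matches observed data, by repeated
# trial-and-error (gradient descent) on a squared-error loss.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed outcome) pairs

def loss(w):
    # The "loss function": how far wrong the algorithm is, measured empirically
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0                # start with a guess
lr = 0.01              # step size for each trial-and-error update
for _ in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad     # adjust w in the direction that reduces the loss

print(round(w, 2))     # -> 2.04, close to the underlying slope of the data
assert loss(w) < loss(0.0)  # training has reduced the loss
```

Real systems do exactly this, only with millions or billions of weights and far larger data sets.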


But there was a catch. For these algorithms to work, the raw data had to be fed into the machines, and someone had to guide the algorithms by pointing out which "features" in the data were more important than others. For example, in a credit underwriting algorithm, a risk expert had to point out that the credit bureau score, past late-payment history, etc. were more important than the person's cash balance. This not only introduced a bottleneck but also still needed the experts. What would be much better is if the machines could just ingest raw data, without humans in the middle.


The second paradigm shift came with Deep Neural Networks (DNNs), whose roots go back to the 1990s. In the words of Dhar [1], "What is unique about DNNs is the organisation of hidden layers between the input and output that learn the features implicit in the raw data instead of requiring that they be specified by humans". This was tremendously successful in Vision first, and to this day most of the Google Photos & Apple features we use are based on these sets of algorithms. The most important facet was that the machines could self-learn from billions of images or text, over and over again, without any manual intervention.
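The idea of hidden layers acting as feature detectors can be illustrated with a toy example of our own (not from Dhar's paper). The network below computes XOR, which no network without a hidden layer can represent; the weights are hand-set here to values that training could plausibly arrive at, so the hidden units play the role of the implicit features the quote describes.

```python
# Toy sketch: a network with one hidden layer computes XOR. The hidden
# units act as "feature detectors" that no human had to specify: h2
# fires only when both inputs are on. Weights are hand-set for clarity;
# in a real DNN, training would discover such features from raw data.

def relu(v):
    # Standard activation: pass positive values, zero out the rest
    return max(0.0, v)

def xor_net(x1, x2):
    h1 = relu(x1 + x2)        # hidden feature: counts inputs that are on
    h2 = relu(x1 + x2 - 1)    # hidden feature: "both inputs on"
    return h1 - 2 * h2        # output layer combines the hidden features

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # reproduces XOR: 0, 1, 1, 0
```

The same principle, stacked over many layers and billions of weights, lets vision models discover edges, textures and whole objects on their own.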


A 2017 paper by Vaswani et al., "Attention Is All You Need" [2], which introduced the Transformer architecture, started the next massive shift that eventually resulted in ChatGPT, Bard and the current GenAI revolution.


Transformer models have been well described in multiple publications & YouTube videos and we will not go into the details here. The core idea is that human language in the broad sense - text as well as speech, images, videos, etc. - is not random but contextualised. The relationship between the current word and the next word (token, in technical parlance) is not random but depends on all the words that have come before. The cost of computing these pairwise dependencies grows with the length of the sequence, so computational memory & processing power limit how much context the "attention" can keep. These relationships are created during training, by having the neural network train on a massive amount of data. In the case of OpenAI & ChatGPT, though the exact training data is not published, it is believed that they used ALL of the public internet as well as an unknown number of digital books and other text.
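The pairwise-dependency idea can be sketched as the scaled dot-product attention operation at the heart of [2]. The sketch below is in plain Python for readability (real systems use GPU tensor libraries), and the token vectors are tiny made-up embeddings, not real ones.

```python
import math

# Minimal sketch of "attention": the current token (the query) is
# compared against EVERY earlier token (the keys) to produce similarity
# scores; the output is a weighted mix of the earlier tokens' values.
# Computing all pairwise scores is why cost grows with sequence length.

def softmax(scores):
    # Turn raw scores into weights that sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    d = len(query)
    # Pairwise similarity between the query and each key, scaled by
    # sqrt(dimension) as in the Transformer paper
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted mix of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Three context tokens; the query resembles the first key, so the
# output leans toward the first token's value vector
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)
print([round(x, 2) for x in out])  # first component larger than the second
```

A Transformer runs this operation many times in parallel ("heads") and over many layers, which is where the massive compute bill comes from.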

All this has been done by Google, Meta, OpenAI and others since 2018. The challenge was that the models produced logical-sounding but nonsensical (to humans) sentences, or outright preposterous (yet still logical) sentences that would be unacceptable to the public.


OpenAI’s innovation was to “tune” the model using human sensibilities. This was done manually, by hiring thousands of people who gave thumbs up & down to various answers, as well as by instructing the model to stay clear of sensitive topics (as decided subjectively by OpenAI) such as producing the next Covid virus recipe, how to make bombs, or glorifying Hitler. These techniques are called RLHF (Reinforcement Learning from Human Feedback) and Instruction Tuning.
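As a highly simplified sketch of the thumbs-up/thumbs-down idea (our illustration, not OpenAI's actual method, which trains a neural reward model and then optimises the language model against it): human votes teach a scoring function, and the tuned model then favours high-scoring answer styles.

```python
from collections import defaultdict

# Toy RLHF-flavoured sketch: human raters vote on candidate answers;
# the votes train a crude "reward model" (here, just a running average
# per answer style); the model is then steered to prefer high-reward
# answers. Answer styles and votes below are invented for illustration.

feedback = [            # (answer_style, human_vote): +1 thumbs up, -1 down
    ("helpful", +1), ("helpful", +1), ("rude", -1),
    ("helpful", +1), ("rude", -1), ("evasive", -1), ("helpful", -1),
]

votes = defaultdict(list)
for style, vote in feedback:
    votes[style].append(vote)

# The "reward model": average human sentiment per answer style
reward = {style: sum(v) / len(v) for style, v in votes.items()}

def pick_answer(candidates):
    # The tuned model favours the candidate with the highest reward
    return max(candidates, key=lambda style: reward.get(style, 0.0))

print(pick_answer(["rude", "evasive", "helpful"]))  # -> helpful
```

The real pipeline is vastly more sophisticated, but the core loop is the same: human preferences become a reward signal, and the model is pushed toward answers humans prefer.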


The astonishing output was GPT-3, which suddenly started to make sense to everyone on a wide range of topics, vastly improved upon by GPT-4. An even more puzzling aspect of all this is that no scientist can explain why just taking a large neural network and training it on all of the (digitally) written knowledge from the internet would suddenly create a program that could seamlessly connect unthinkably different topics & generate a Shakespearean sonnet about Shah Rukh Khan eating chapatis!


II. Why the excitement?


“AI is one of the most important things humanity is working on. It is more profound than, I don’t know, electricity or fire” – Sundar Pichai. Electricity & fire? Tech leaders are known for their hyperbole but GenAI has created a level of excitement about the future, even amongst the more sober leaders, that has not been seen in a long time.


1. Growth of usage - it took Netflix 10 years to get to 100M users worldwide. Facebook took 5 years. ChatGPT, without any virality, got there in 2 months. This is an astonishing achievement!


2. Range of usage - Unlike the last much-hyped tech, Blockchain, this was immediately accessible to the common man. It did not need anyone to wrap their heads around abstract concepts or solutions to problems (distributed trust etc.) that most people were simply not worried about. ChatGPT was simple & instantly usable. The full list of uses is too long to enumerate, but here are the top few use cases:

  • Summarising, proof-reading & writing of documents - from office workers to students, this is the most widely used case across a range of domains.

  • Programming - code generation, test case creation, code comprehension, etc.

  • Image generation - cousins of ChatGPT, like MidJourney and Stable Diffusion, are already widely used by anyone creating images, dramatically reducing the need for photoshoots for simpler digital advertisements, for one.

  • Researchers for literature survey - though far from perfect because of hallucination issues, it still simplifies the laborious literature survey process and helps researchers quickly summarise other articles. It does need to be used with the right set of prompts to improve accuracy.

  • Students using it as a patient tutor or for help with homework is probably one of the biggest use cases, and is credited with causing a stock price drop for Chegg, a popular homework-help company. Teachers using it to set quizzes is another popular use.

  • There are simply way too many more uses, including our favourite - creating interesting recipes that simply did not exist before - and we are soon reaching a point where one has to list what it CANNOT be used for.

3. Stakes are high - The debate on whether this is a tech-hype or real is not just for markets. If the leaders are even partially right, this is going to reshape the future for all of us in very profound ways.


There have been multiple reports from analysts. Consider the recent McKinsey report [3] ; here is a summary:

  • Generative AI will add ~$3.5 trillion to global GDP; in other words, an India-sized GDP will be added due to this (conservatively)

  • When will this happen? Guesses range from 2030 to 2040.

  • 75% of the impact will be in Customer Service , Sales & Marketing, Software Engineering, R&D.

  • Impact will be disproportionately high in the white collar jobs

  • In the Banking & Finance sector, we will have better customer service and marketing, and accelerated modernisation of legacy IT systems. It is unclear as of now whether LLMs will impact core functions like credit underwriting, for instance.

  • Pharma/drug industry will get a boost, especially the area of drug discovery.

  • Though not covered in this article, Media, design, artwork, advertising, content creation will be significantly automated.

  • The way the paper measures the impact on labour productivity, as in other papers, is to map occupations to activities and further break them down into skills. Then, they analyse which skills AI will augment/replace, and to what degree.

  • In terms of Countries, western countries will be impacted fastest - India, Mexico & the Global South will be slower.

  • However, this understates the impact on the huge concentration of BPO & software workers in India, the Philippines & other countries, and on the service economies of these countries.

  • Bottom line - high-end skills & people - decision making & collaboration, Masters & PhDs - will ironically be impacted most.

  • It would mean hundreds of millions of jobs being lost or modified, at the risk of serious political instability.

4. ChatGPT is just the preview --The ChatGPT moment we are all living in would not have been possible without a confluence of multiple forces. The evolutionary research in AI algorithms culminating in the invention of Transformer models; massive & (almost) free data to train these models; GPU compute power at a scale never been seen before; bold & unorthodox entrepreneurs like Sam Altman from OpenAI who experimented with RLHF & other techniques and launched a product that could have easily been the laughing stock of the industry had it not gone right. And more.


However, what we have seen so far is just a preview. None of the world’s biggest consumer companies - Apple, Google (Android), Amazon, Facebook, Netflix - has stepped into the arena. These are the products we’ve been using and worshipping daily for the last 15 years. While OpenAI burst onto the scene just last year, these giants have a long history of creating app stores that bring developers & innovators to create the amazing apps we take for granted today. Phone & laptop manufacturers like Apple, Xiaomi, Dell & others will undoubtedly see this as an opportunity to sell us bigger and better hardware that will run these algorithms.


To fuel all these, tens of billions of dollars are being invested as we speak in creating the next generation of consumer and enterprise applications using AI.


5. AI-Infused companies--The first class of companies we will see are AI-infused companies. These are the regular software & apps we use - from Microsoft Office to Gmail to our day-to-day apps. These will all have an AI “button” or AI features that significantly enhance their capabilities. We are already seeing this with email apps, for instance, that summarise long emails succinctly. These will quickly improve usage and enhance our productivity.


6. AI First companies --What is even more exciting are the class of “AI-First” companies that are set to emerge. When the iPhone was launched, Travis Kalanick was able to take a painful & unsolved problem of cab-hailing and turn it upside down. He did not create an app-version of any existing website. He re-imagined a world where every consumer had a powerful computer (smartphone) in their pocket, with a GPS, maps & a payment method, all built in. The result stunned the world and Uber has become a verb. Every food-tech, delivery company followed, creating hundreds of billions of value and completely changing consumer behaviour.


So too will AI-first companies. When you have an even more powerful computer in your pocket (or on your desk) that can understand human voice, recognise any image and any text in any language, and most importantly synthesise knowledge across all of these - including pulling in knowledge from ALL of the world’s written text - this is a recipe to re-imagine all our businesses all over again.


III. Hype or Bubble?


While these changes have been exciting, besides ChatGPT there have been very few mind-blowing applications in the last 9 months. After all, the websites and apps we use daily look just the same. ChatGPT plugins, which were hailed as the next app store, have been very hard to use. And even the usage of ChatGPT has slowed down since May. So is this all hype?


Dhar [1] argues that while ChatGPT is like a light bulb - a literally shining example - the bigger story of Generative AI, like Electricity, is yet to unfold. The light bulb showed it was possible to turn night into day, to paraphrase Edison. But it took several decades after its invention for electricity to become prevalent.


Electricity is a general purpose technology that could light up bulbs but also replace the steam engine, run elevators, power factories and after 100 years, we are still discovering new use cases like running cars.


The “general purpose” use of AI would be to supplement (and in some cases replace) what we think is uniquely human - intelligence, empathy, understanding emotional states, etc. This is called Artificial General Intelligence (AGI), and it is a hotly debated topic in the community. Before dismissing the extrapolation from our ChatGPT assistant of today to general intelligence as improbable, consider this example.


In learning, there is a well-known concept called theory of mind, which in a nutshell is what a 4-year-old’s brain can do: understand & attribute mental states, such as beliefs, desires, intentions, & emotions.


To test this, I used a simple prompt - In a room there is Bala, Sanjay, a cat, a box and a basket. Sanjay puts the cat in the box and leaves the room. Bala then puts the cat in the basket and leaves the room. They all come back. What is everyone thinking?


GPT 3.5 totally flubbed the question and generated gibberish. But GPT 4 said:

  • "Bala might be thinking: "I wonder if Sanjay noticed that I moved the cat from the box to the basket."

  • Sanjay might be thinking: "Wait, I thought I left the cat in the box, not the basket. Did Bala move the cat?"

  • The cat might be thinking: "These humans keep moving me around. First, I was in the box, and now I'm in the basket. I hope they leave me alone soon."

  • The box and the basket, being inanimate objects, do not have thoughts or feelings.

Within just half a generation of AI, it was able to understand the hidden states implied in the question and come up with a nuanced reading of the states of mind of the various entities in the question. In the next 2 years there will be a slew of these Large Language Models (LLMs), and their level of sophistication in “human”-like understanding of the world remains to be seen.


Bubble or next industrial revolution?

What do tech bubbles look like? This is how every tech bubble looks, and recent hyper-bubbles like NFT/Crypto/Web3 fit the pattern:

  • There is a genuinely COOL initial piece of invention. An investment frenzy follows. Large investors, supplicant media and entrepreneurs hitch their wagons to it.

  • Every stodgy old company re-brands itself.

  • Celebrities and politicians also enter the fray (remember NFT s), and the bubble reaches its peak.

  • Finally reality sets in, usually spectacularly. And everyone tries to forget what they said only a couple of years ago.

Is the GenAI bubble following this pattern, perhaps on a steroid-induced shortened cycle? Seems like the first three criteria are already met and it’s just been 9 months!


“Crossing the Chasm” moment?

Having argued thus far that there are strong reasons why this may be another piece of hype-ware, let us make a case for why this is real and you should not ignore it. Geoffrey Moore coined the term “Crossing the Chasm” in 1991 and used it to describe how technology (PCs in those days) gets adopted. When disruptive innovation happens, early adopters jump in and everyone gets excited. Then reality sets in, and wider adoption takes much longer than the gurus predicted. What we are seeing today with Generative AI may be that gap, where the early adopters, excited by the newest thing, swear by it while mainstream adoption lags behind.


IV. The dark side


The AI revolution promises a utopia, where machines do the hard labour and humans thrive. But we are on the cusp of an ethical minefield as AI systems acquire capabilities that we may not fully understand or control. It is crucial to consider the potential hazards of unchecked AI, even as we embrace its benefits. If AI is as fundamentally important as fire, then it is crucial that we don’t play with fire, and that we don’t use it to burn down entire civilisations. Researchers like Eliezer Yudkowsky go as far as to say that the creation of a super-human AGI would mean the end of humanity, and that there is almost nothing we can do to stop it. Even without such hyperbole, there are grave concerns that we have to address.


There are threats to jobs, of criminality, and of misinformation campaigns, but also existential and identity threats unlike anything mankind has ever seen. Beyond the threats, there are concerns about alignment with society’s best interests when some companies are data oligarchs and self-proclaimed purveyors of humanity’s interests. Here are some areas where AI can cause large-scale societal disruption.


1. 1000 poisons with no antidotes

While it is a novelty today to see news that a grocery website created a recipe for toxic gas when asked for a sandwich [4], this is not far from the truth. ChatGPT and most other models have deliberately tuned their LLMs not to output anything involving bombs, bio-weapons, viruses, etc. But open source models can be easily “jail-broken”, much like many Android phones today, and these restrictions can be removed. With more and more powerful open source models in the fray, this is one of the big arguments Google, OpenAI, Microsoft and other big tech companies have used to push for regulating AI. But as we saw with Crypto, there are plenty of outlaw jurisdictions that bad actors can easily shift to if they want to create mayhem.


In February of this year, we saw some of the first instances of deep-fakes being used in the wild, and AI avatars being used for misinformation campaigns. These will only increase over time. Governments can try to regulate with policies requiring that “deep fakes should always be labelled as AI generated content”, but if the Government itself, or any bad actor with their own model, wants to spread such a campaign, that labelling regulation becomes meaningless.


Even bad actors will get a productivity boost, and AI can create problems we can't easily solve. Misinformation campaigns with deep-fakes, LLMs that help strategise, diffusion models (like Stable Diffusion and MidJourney) that can quickly create real-looking fake imagery, etc. can prove difficult to counter. While companies like OpenAI are launching models with a safety layer, the open-source, democratised models can be run by state-sponsored bad actors, and even the safety layers can be hacked.


In social media too there will be mayhem: everything from echo chambers to fake news will be aggravated, and antidotes such as requiring the “marking of AI generated content” can prove hard to enforce.


2. Jobs, Jobs, Jobs

Three days after the launch of GPT-4, OpenAI released a report indicating that around a quarter of all workers may see at least half of their tasks impacted by AI. This might seem like a big number but, seeing the pace of AI evolution, it might even be an understatement.


We have always seen this kind of fear and outcry about jobs being replaced with new paradigm shifts (like the Industrial Revolution). But historically such shifts have always transformed work, and more jobs were created than before. The main difference now is that humanity’s primary “moat” over all other technologies has been its “intelligence”. This is what AI automates away, and that is a big, fundamental difference. LLMs are already good at creative tasks, better than humans in many cases. It is no longer the case that AI can be used only for mundane automated tasks, and it will impact every single industry and use-case, whether we like it or not.


We will have new kinds of jobs; the question is, will they emerge fast enough, at the pace that LLMs and AI models are evolving? And will humanity up-skill before people are laid off at scale?


The tech industry argues, with strong historical precedent, that every new revolution has created not only new jobs but better-paying jobs - even forced factory labour during the Industrial Revolution was better than subsistence agriculture. And surely none of us misses the good old days of going to a bank in the 70s or 80s and seeing mounds of slow-moving paperwork instead of sleek computers (which at the time were bitterly opposed as job killers). However, these changes came at a huge political cost and, in many cases, a lot of bloodshed. Karl Marx developed communism in response to the excesses of the new-born capitalists during the Industrial Revolution. The current turmoil roiling American politics, or Brexit, are not unrelated to the displacement caused by such shifts.


Humans might become just limit-extenders for their AI models in some jobs; jobs that are somewhat “formulaic” in their creativity or intelligence (such as simple copy-writing, simple graphic design or simple accounting) will be the first to be replaced. Humans will still be there for outlier scenarios, much like commercial pilots relying on auto-pilot but taking back control in emergencies or special landings.


3. Now cyber criminals don’t need talent

Just as AI is set to disrupt job markets and create new avenues of employment, it will also revolutionise the darker corners of the internet, making them more productive than ever. The cat-and-mouse game between cyber criminals and cyber security experts is about to get a lot more complicated, and the stakes have never been higher.

Examples -

  • Deepfake Extortion: Criminals have used Deep Learning algorithms to create highly convincing fake videos of CEOs admitting to illegal activities, which were then used in blackmail attempts.

  • Automated Social Engineering: Chatbots trained in social engineering tactics conducted phishing attacks through social media and messaging platforms. These bots adapt their messages based on user interaction, making it harder to identify the malicious intent.

The rise of AI in cyber crime necessitates a parallel evolution in cyber security measures. Traditional firewalls and antivirus software are no longer sufficient. Just as AI can learn to hack, it can also learn to defend. Expect to see machine learning algorithms that monitor network behaviour, flag anomalies, and adapt in real-time.
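To make the defensive side concrete, here is a deliberately simple sketch of our own (not a real product) of the kind of behavioural monitoring such defences build on: learn what "normal" network traffic looks like, then flag large deviations.

```python
import statistics

# Illustrative anomaly detector: learn "normal" request rates from
# historical traffic, then flag rates that deviate from the mean by
# more than 3 standard deviations. Real AI-driven defences use far
# richer models, but the monitor-and-flag loop is the same.

baseline = [102, 98, 110, 95, 105, 99, 101, 97, 104, 100]  # requests/min

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(rate, threshold=3.0):
    # Flag traffic more than `threshold` standard deviations from normal
    return abs(rate - mean) / stdev > threshold

print(is_anomalous(103))   # ordinary traffic -> False
print(is_anomalous(500))   # possible attack  -> True
```

In practice the baseline itself keeps adapting in real time, which is exactly where machine learning earns its keep over static firewall rules.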


4. Race to mediocrity - the great dilution

The arts and sciences, the very fields that underscore our human uniqueness, are under an unprecedented threat of automation. As AI starts to commodify creativity—whether it's in design, programming, or writing—we risk a deluge of mediocrity. This phenomenon may well lead to a regression to the mean, where originality becomes the casualty.


While AI-augmented artists might outperform the 'purists,' they could inadvertently homogenise their output. We may find ourselves in a world where human-made designs and programs are admired like exquisite but obsolete crafts—pleasing to the eye but impractical for everyday use, relegated to “handcraft museums”.


5. Pace of change & who is driving it

How fast is too fast? While AI development traditionally follows an asymptotic curve—rapid initial growth followed by a plateau—the pace at which advancements are occurring is astonishing. It's a race where even the front runners like OpenAI and Google are conflicted. On one hand, they advocate for cautious and ethical AI development; on the other, they announce models exponentially more powerful than their predecessors at breakneck speeds.


Most governments are behind the curve. The lack of comprehensive regulation exposes society to risks ranging from job displacement to national security threats.


6. Surrendering Sovereignty to Data Oligarchs

As AI algorithms become increasingly commodified, the real power shifts to those who control the data. This is creating a new form of oligarchy where a handful of tech giants wield disproportionate influence.


In this new world order, data is not just an asset; it's the currency that could define the sovereignty of individuals and even nations. Your personal data, from your shopping preferences to your political leanings, becomes a token in a global power game.


As we surrender more of our lives to these digital overlords, the call for robust data rights and regulations becomes urgent. Without these safeguards, we risk not only our privacy but also the very foundations of our democratic institutions.


In sum, as we stand on the precipice of this AI-driven future, the questions we must grapple with are not just about technology but also about the kind of society we want to be. The clock is ticking, and the choices we make now could reverberate through history. For this we also need good regulation.


7. Problems with Regulation

Different parts of the world have disparate approaches to AI regulation. The U.S. has a laissez-faire approach—outsourcing the responsibilities to companies and regulating via legal covenants. The EU aims to pioneer in 'trustworthy AI', with a focus on transparency, accountability, and human rights. China, on the other hand, employs a state-driven model, with concerns around ethics and data privacy being secondary to State interests.


These legislations are reactive rather than proactive, aiming to catch up with the technology after it has already been deployed and dealing with consequences after they have happened. With a fast-evolving technology like AI, this approach of static regulation does not make a lot of sense.

Rather than putting in place a new regulatory framework, the UK Government intends to follow an agile and iterative approach designed to learn from actual experience and to continuously adapt.


V. The Indian Scene


When it comes to India, there are two aspects that should be kept in mind:

  • India’s Data Abundance: India, with its large and diverse population, will continue to generate massive amounts of data, which can fuel AI algorithms.

  • India’s diverse and rich use-cases: India has the potential to be a use-case capital for AI, with its many languages and diverse population.

India also has multiple stakeholders; too many to enumerate. But we will look at some important stakeholders and how they should view the change:


1. The General Public

Much like the general public finds great benefits in using electricity or the Internet, using AI too can empower them to achieve their life outcomes and goals. The General Public of India should use AI for their benefit while also not giving away access to their data to foreign entities without careful consideration.


The challenges here are around providing localised AI that allows conversations in Indic languages, without excluding those within India who do not speak a widely supported language such as English, or even Hindi, which most of the models support.


2. Research Community

The “ISRO model” of efficiency and achieving monumental breakthroughs at 5-10x less cost is relevant even for the AI ecosystem. The Research Community across institutions should spend their efforts focusing on coming up with methods to make models more efficient and also research to make them more applicable to indigenous languages and scripts.


The research community needs to collaborate with the Government as well as industry to make this happen. And focus on conducting research for unique situations of India such as translation among multiple languages and dialects, etc.


3. Businesses

AI will affect every industry in the next few years; rather than denying its impact, which could set Indian businesses back compared to other countries, they need to be at the forefront of up-skilling their talent to understand the use-cases of the technology and be well-versed with it.


For companies, it shouldn’t be a solution looking for a problem; rather, it should be a technology learned by employees (much like using a web browser is), and applied where there is a clear use-case to do so.


4. Civil Society

NGO s (such as PeoplePlus.ai [5]) need to look across the ecosystem, Governments and Institutes, and help guide discussions that help India become a thriving ecosystem for AI companies. They also have an opportunity to build DPI s (Digital Public Infrastructure) to cater to Data Sharing, Pooled Compute, and other resources. Facilitating informed discussions on AI and its societal implications would also be crucial to keep a feedback loop that is closely coupled to the realities of the people and their expectations and needs.


5. Regulators and Governments

We need principles-based regulation that takes into account our unique challenges and opportunities in the AI landscape. The regulation should be agile, innovation-friendly, and follow an iterative approach designed to learn from actual experience and continuously adapt (much like the UK’s), yet be rooted in principles that protect the privacy and safety of Indians while allowing companies to use anonymised data to build models that improve people's lives. What if we adopt an agile approach to AI regulation, grounded in a set of cross-cutting principles that describe, at a very high level, what we expect AI systems to do (and not do)? We can apply these principles across all the different ways in which AI is, and will be, deployed, across sectors and applications. Sector regulators can then refer to these principles, using them to identify harms at the margin and take appropriate corrective action before the effects become too widespread.


VI. Conclusion


Unlike any automation before it, this new breed of AI can automate the most fundamental characteristic of mankind: intelligence. And it promises to encroach on other areas we consider uniquely human, such as empathy and understanding the needs and desires of other creatures even without explicit verbal cues.


We in India have a choice. We can fight these changes through social protest, political upheaval and reactionary regulation, or we can embrace them and prepare our institutions, society, business community and regulators to adapt to them as best we can. Which way we go will shape the destiny of generations to come!


References:


[1] Vasant Dhar, 2023, The Paradigm Shifts in Artificial Intelligence. https://arxiv.org/abs/2308.02558


[2] Ashish Vaswani et al., 2017, Attention Is All You Need. https://arxiv.org/abs/1706.03762


[3] McKinsey, 2023, The economic potential of generative AI: The next productivity frontier.


[4] AI-powered grocery bot suggests recipe for toxic gas, “poison bread sandwich”. https://tinyurl.com/bdhbz98n


[5] People+ai. https://peopleplus.ai/


About the authors:

Bala Parthasarathy, a graduate of IIT Madras, is a Director at People+ai, an initiative backed by the Ekstep Foundation in India. The objective of People+ai is to create a space for conversations about the impact new AI technologies will have on people, purpose and the economy. Before People+ai, Bala co-founded and sold multiple startups in Silicon Valley (USA). After moving back to India in 2007, he volunteered for two years with the Government of India's Aadhaar programme under Nandan Nilekani. Bala then went on to co-found Prime Ventures (formerly AngelPrime), India's largest seed-stage venture fund. He is currently the Chairman and Co-founder of Freo, India's first credit-led neobank. Bala can be reached at bala@peopleplus.ai


Harsha Garlapati is a key member of the founding team at People+ai. His professional journey has seen him take on roles as a product designer and product manager at startups such as smallcase. He has also volunteered at iSPIRT, contributing to improving the interoperability of the National Health Stack. An avid reader, Harsha explores complex systems and shares his insights through a blog on Substack. He can be reached at harsha@peopleplus.ai






