Abhirup Bhattacharjee explains the evolution of Artificial Intelligence
Artificial Intelligence (AI) is the field of building machines that can replicate human intelligence. These machines can learn, reason, and adapt while carrying out activities that normally call for human intelligence. With artificial intelligence, natural language comprehension, image recognition, and decision making by computers can become a reality. Consider an AI doctor that can not only diagnose ailments but also recognise and respond to a patient’s emotions.
History of AI
19th Century – Birth of modern computing and logical thinking
George Boole established Boolean algebra – the mathematical basis for binary logic, the language of computers.
Charles Babbage, often called the “father of the computer,” designed the Analytical Engine – a mechanical general-purpose computer. Although never completed in his lifetime, the design featured key elements of modern computing, including memory and a processing unit.
Ada Lovelace, his collaborator, is often credited as the first computer programmer. She also speculated that a machine might one day compose music or process language – an eerie prediction of AI’s future.
1950s – Dawn of artificial intelligence
Alan Turing’s 1936 paper introduced the concept of the universal machine – a theoretical construct capable of simulating any other machine. This “Turing Machine” is the blueprint for all modern computers. In 1950, Turing published a revolutionary essay, “Computing Machinery and Intelligence,” in which he posed the now-famous question: “Can machines think?”
The term “artificial intelligence” was officially born in the summer of 1956, at a landmark conference at Dartmouth College in New Hampshire. The Dartmouth Conference brought together a small group of researchers who believed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
1960s-1970s – AI’s early achievements and setbacks
Early AI programs mimicked human conversation using simple pattern-matching techniques; Joseph Weizenbaum’s ELIZA (1966) is the best-known example.
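To illustrate the idea, here is a minimal Python sketch of that kind of pattern matching. The rules below are invented purely for illustration and are not drawn from any particular historical program.

```python
import re

# A few illustrative ELIZA-style rules: a regex pattern and a response template.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(sentence: str) -> str:
    """Return a canned reflection if any pattern matches, else a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("I feel anxious about exams."))  # Why do you feel anxious about exams?
    print(respond("I am tired."))                  # How long have you been tired?
```

The program has no understanding of what the user says; it merely reflects fragments of the input back, which is why such systems impressed users while remaining brittle.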
The 1970s witnessed the development of expert systems, which were intended to capture the knowledge of experts in a variety of domains. Researchers created rule-based systems that could apply pre-established guidelines to address specific problems.
1980s – Rise of expert systems software
AI experienced a resurgence, driven by the rise of “expert systems” software that mimicked the decision-making abilities of human specialists. Expert systems were adopted in fields like medicine, engineering, and finance, offering rule-based solutions to complex problems. These systems relied on vast databases of “if-then” rules, carefully encoded by human experts.
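As an illustration of how such “if-then” reasoning can work, here is a minimal Python sketch of a rule-based system using forward chaining. The rules themselves are made up for the example and have no clinical validity.

```python
# Each rule: if all conditions are among the known facts, add the conclusion.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "body ache"}, "recommend rest and fluids"),
    ({"chest pain"}, "refer to specialist"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly apply if-then rules until no new conclusions can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - facts  # only the newly inferred conclusions

if __name__ == "__main__":
    print(forward_chain({"fever", "cough", "body ache"}))
    # {'possible flu', 'recommend rest and fluids'}  (order may vary)
```

Real expert systems of the era encoded thousands of such rules, which is also why they proved expensive to build and maintain.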
1990s – Machine learning and data-driven approaches
In 1997, a watershed moment occurred when IBM’s Deep Blue defeated world chess champion Garry Kasparov in a six-game match. The victory demonstrated that AI could outperform humans in specific intellectual domains, given enough computing power and well-defined rules.
2000s-2010s – AI boom: deep learning and neural networks
As computing power grew and data became more abundant, a new approach began to dominate: machine learning.
Algorithms could detect patterns in data and improve over time. Rather than being explicitly programmed, these systems learned from experience.
A key breakthrough came in 2006, when Geoffrey Hinton and his colleagues introduced a method called “deep learning,” which allowed multi-layered neural networks to be trained more effectively. Suddenly, AI systems began to rival human abilities in tasks long considered too difficult, such as identifying objects in images or transcribing spoken language.
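As a toy illustration of learning from examples rather than explicit programming, the small two-layer neural network below, written in Python with NumPy, learns the XOR function by gradient descent. It is only a sketch of the general idea, not a description of Hinton’s 2006 method.

```python
import numpy as np

# Toy dataset: XOR, a pattern no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute predictions from the current weights.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: gradients of the squared error, layer by layer.
    delta_out = (output - y) * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(output, 2))  # approaches [[0], [1], [1], [0]] as training proceeds
```

Nothing in the code specifies the XOR rule; the network adjusts its weights from the examples alone, which is the essential shift that machine learning brought.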
AI research also expanded into natural language processing. Systems like GPT-2 and later GPT-3 demonstrated remarkable abilities to generate coherent text, answer questions, and even write poetry. AI had become creative, conversational, and surprisingly human-like.
Generative Pre-trained Transformers: A New Era (GPT Series)
In the 2020s, artificial intelligence moved from research labs into the fabric of daily life. AI-powered tools now assist doctors in diagnosing diseases, help farmers optimize crops, and enable scientists to discover new drugs. Self-driving cars are undergoing real-world testing, and AI systems filter online content, personalize feeds, and power financial systems.
AI in healthcare
AI in healthcare spans AI-powered diagnostics, personalized treatment plans, drug discovery, remote patient monitoring, predictive analytics for disease prevention, and identifying new applications for existing drugs. Artificial intelligence is revolutionizing healthcare by enhancing diagnostic accuracy, streamlining administrative tasks, and personalizing patient care, with projections indicating significant growth in the industry.
Current applications of AI in healthcare
Diagnostics: AI technologies are being used to analyze medical images, such as X-rays and MRIs, in some cases with accuracy comparable to or exceeding that of human radiologists. This capability allows for faster and more precise diagnoses, which can lead to improved patient outcomes.
Administrative Efficiency: AI is automating time-consuming administrative tasks, such as scheduling, billing, and documentation. This automation helps healthcare professionals spend more time with patients and reduces burnout associated with administrative burdens.
Personalized Treatment Plans: AI systems analyze vast amounts of patient data, including genetic information and medical history, to create tailored treatment plans. This personalization enhances the effectiveness of treatments and improves patient satisfaction.
Remote Monitoring: AI-powered devices and wearables enable continuous monitoring of patients’ health metrics, allowing for timely interventions and reducing the need for hospital visits. This capability is particularly beneficial for managing chronic conditions.
Drug Discovery: AI is streamlining the drug development process by predicting potential drug candidates, identifying side effects, and optimizing clinical trial designs. This can significantly reduce the time and cost associated with bringing new drugs to market.
Robotic Surgery: Robotic surgery is performed using small tools mounted on robotic arms and guided by a specially trained surgeon. The surgeon controls the robotic arms from a viewing screen, which is usually situated in the same room as the operating table, but the screen could also be located far away, allowing surgeons to perform telesurgery from remote locations. The screen is part of what is referred to as a console, which allows procedures to be performed from a seated position while the surgeon views a magnified three-dimensional image of the patient’s surgical site. Robotics and AI are revolutionizing surgery in healthcare today, bringing greater precision, speed, and better results for patients. Navigation systems paired with robotic arms act like a global positioning system (GPS) for surgeons, guiding instruments to the planned target with high precision. Recent studies indicate rapid adoption of AI-assisted robotic surgery in hospitals across various surgical specialties, showing improvements in accuracy and reduced complication rates.

Challenges and ethical considerations
The possibilities for AI technology are endless and its future is bright. Still, this promise comes with a number of difficulties and moral conundrums. As technology transforms sectors, the threat of job loss looms. Subtle but ubiquitous algorithmic bias undermines inclusion and justice. Privacy breaches cast a shadow over our digital lives, and we must strike a balance between innovation and individual rights. We must follow ethical guidelines to make sure AI benefits mankind while upholding our fundamental principles.
Future of AI
There are obstacles in the way of this future, though. Professionals are already pondering the ethical implications of advanced artificial intelligence. There is hope for a future in which AI and humans work together productively, each enhancing the other’s strengths. The future is full of possibilities, but responsible growth and careful preparation are needed.
Abhirup Bhattacharjee is Chief Information & Digital Transformation Officer, Chellaram Group