Scientists Simplifying Science

From the Real to the Replica: Understanding the Global Impact of Artificial Intelligence

If someone tells you that putting a chip in your brain could make you smarter, would you believe them? Or would you laugh it off as something plucked from science fiction’s wildest fantasies?

Well, we are not too far from this. Kevin Warwick, Professor of Cybernetics at the University of Reading, has had multiple chips implanted in his body that interface directly with his nervous system, technically making him a cyborg, perhaps even the first of a new 'species' of 'enhanced' human, able to control various electronic devices and communicate with computer systems in his vicinity. This is just one of many experiments in the exciting discipline of Artificial Intelligence (AI), a term that has captured the public imagination in recent times.

It is common knowledge that AI aims to mimic human intellect, using machines to complete tasks that would typically require immense human effort. But this is a rather restrictive view, considering how AI has expanded across crucial sectors of our society, including health, transport, and communication. AI has come a long way from its humble beginnings in the 1950s, yet it is nowhere near its peak: the recent emergence of large language models (LLMs) such as ChatGPT by OpenAI, Copilot by Microsoft, and Bard by Google has brought AI back into the public consciousness. AI tools geared toward the daily needs of the public include virtual assistants like Siri, product recommenders like those used by Amazon, customer support chatbots, and tools that translate languages and recognize images.

But how have we, as a society, come this far?

AI research was initially motivated by the need to simplify processes through automation, accelerate scientific discovery, and improve human-machine interaction by making machines more 'intelligent.' But as AI evolved over the years, its purpose transcended these original goals: like Warwick, technologists wanted not only to enhance but to expand the boundaries of human capability. We now live in a world where direct brain-to-brain communication is no longer a wild idea but an exciting possibility, and where robots steered by lab-grown networks of biological neurons are an active area of research.

The true beginnings of AI can be traced to the 1950s, when the brilliant Alan Turing formally questioned a machine's ability to think in his paper "Computing Machinery and Intelligence." The test he proposed there, now known as the 'Turing Test', has become a foundational concept of AI theory. Arthur Samuel is another significant name from this decade: he wrote a checkers-playing program that improved with experience. This was among the first successful practical demonstrations of 'machine learning', a term Samuel coined to describe a machine's ability to detect patterns in data and improve through prolonged experience and feedback, without being explicitly programmed. Another step forward came with John McCarthy's LISP (LISt Processor), which became the principal programming language of AI research in the USA; it could represent data, and even code itself, in the form of lists and manipulate them.

AI took off in the 1960s and 1970s with the development of problem-solving programs. James Slagle's SAINT solved freshman-level symbolic integration problems, while Joseph Weizenbaum's ELIZA held conversations with people in English. Advances in the 70s were multidimensional and saw the advent of 'Expert Systems': AI programs that stored and replicated specialized knowledge drawn from different areas of research. Progress was made in machine learning and robotics with WABOT-1, the first full-scale anthropomorphic robot, and the Stanford Cart, one of the earliest self-navigating vehicles. SHRDLU, a program developed by Terry Winograd, demonstrated a machine's ability to follow natural-language commands in a simulated world and laid the groundwork for 'Natural Language Processing' (NLP), the study of how computers can respond accurately to human language. These inventions opened up exciting prospects and drove up expectations for AI's rapid growth. The potential, however, remained unrealized, leading to wide-scale disappointment and funding cuts in the 1980s, which resulted in research stagnation and an 'AI Winter'. But by the 1990s, AI had made a quick comeback and has been on an upward trajectory ever since.

Research in neural networks, a sub-branch of machine learning loosely inspired by the structure of the human brain, led to 'deep learning', which allows AI systems to identify, summarize, and predict patterns in large datasets without explicit human instruction. Deep learning, combined with Natural Language Processing, led to the creation of language models: tools that analyze large bodies of text, detect patterns in them, and use those patterns to predict and generate original text in response to a user prompt. Because language models create new content based on learned patterns, they are a subset of Generative AI, a family of tools now used to complete text, synthesize images, and even compose music, making them the most widely used forms of AI in daily life. ChatGPT, the most prominent language model today, is a form of Generative AI. Alongside this, AI-driven research in Expert Systems, robotics, and data analytics, to name a few areas, has grown rapidly. Existing technologies have been greatly improved, and tools to store, process, and analyze large datasets are being built. Most importantly, AI can now be used by ordinary people in everyday life, making it a true game-changer.
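To make the idea of "predicting the next word from learned patterns" concrete, here is a deliberately tiny sketch in plain Python. It is not how ChatGPT or any deep-learning model works; real LLMs use neural networks trained on vast datasets. This toy simply counts which word follows which in a made-up ten-word corpus (an invented example, not real training data) and predicts the most frequent successor.

```python
from collections import defaultdict, Counter

# A toy corpus: real language models train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each word, how often each successor appears.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    followers = bigrams[word]
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The gap between this sketch and a modern LLM is enormous, but the core loop is the same in spirit: learn statistical patterns from text, then use them to guess what comes next.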

But moving into unexplored territory courts danger. The ethical implications of AI have been a source of vigorous debate, ranging from concerns about unemployment as AI takes over repetitive tasks to the risks of unconscious biases baked into the algorithms. Privacy breaches and the weaponization of AI tools pose very real threats. With easy accessibility, cost-cutting capabilities, and ever-increasing media attention, the potential for misuse is non-trivial, a fact acknowledged even by Elon Musk, a co-founder of OpenAI, when he joined more than 1,100 signatories of an open letter seeking a six-month pause on training the most powerful AI systems.

As we contemplate the merging boundaries of human cognition and machines, Warwick's implants become emblematic of a brave new world, one where science fiction whispers close to reality. Our voyage through the uncharted waters of AI has mapped remarkable territories, from controlling devices to transforming industries. Yet this journey is far from over. As the script unfolds, we remain poised on the precipice of AI's undiscovered potential. Our role in this grand narrative is both audacious and necessary. So, would you laugh off the prospect of a chip-augmented brain, or would you indulge the spirit of adventure and strain your ears for the symphony of possibilities it brings? The stage is set, and our collective script awaits its next chapters.

Cover image: DALL·E 3

The contents of Club SciWri are the copyright of the Ph.D. Career Support Group for STEM PhDs (PhDCSG), a US 501(c)(3) non-profit and an initiative of the alumni of the Indian Institute of Science, Bangalore. The primary aim of the group is to build a network among scientists, engineers, and entrepreneurs.

This work by Club SciWri is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Author

Bhuvaneshwari Mahesh is a dedicated journalism and economics student with a fervor for writing and a thirst for knowledge across diverse subjects. She enjoys singing, reading, penning poetry, and engaging in debates and is driven by the belief that education should be accessible, enjoyable, and comprehensible for all. She aspires to contribute to this cause in the future.

Editor

Saurja Dasgupta is originally from Kolkata, India. He obtained his Ph.D. at the University of Chicago, where he studied the structure, function, and evolution of catalytic RNA. He is currently a postdoctoral researcher at Massachusetts General Hospital, Boston, where he is trying to understand the biochemical milieu that could have given birth to life on Earth (and elsewhere) and to reconstruct primitive cells. One of his scientific dreams is to observe the spontaneous emergence of Darwinian evolution in a chemical system. When not thinking about science, Saurja pursues his love for the written word through poetry and songwriting (and meditating on Leonard Cohen's music). His other passions are making science easier to understand and fighting unreason and pseudoscientific thinking with a mixture of calm compassion and swashbuckling spirit.

Editor

Sumbul Jawed Khan is the Assistant Editor-in-Chief at Club SciWri. She received her Ph.D. from the Indian Institute of Technology Kanpur, did post-doctoral research at the University of Illinois at Urbana-Champaign, and is currently at the Dana-Farber Cancer Institute, Boston. She is committed to science outreach and believes it is essential to inspire young people to apply scientific methods to the challenges faced by humanity. As an editor, she aims to simplify, translate, and excite people about current advances in science.

Editor

Roopsha Sengupta is the Editor-in-Chief at Club SciWri. She did her Ph.D. at the Institute of Molecular Pathology, Vienna, and post-doctoral research at the University of Cambridge, UK, specializing in epigenetics. During her research, she was involved in many exciting discoveries and had the privilege of working and collaborating with many inspiring scientists. As an editor for Club SciWri, she loves working on diverse topics and presenting articles coherently while nudging authors to give their best.
