Harry Gilson: What is Artificial Intelligence (AI)?

#4 - Published on December 9, 2025 - 9 min read

Intelligence is the ability to abstract information, retain it as knowledge, and apply it within a given context. Today, I'm researching artificial intelligence (or AI for short). Artificial Intelligence refers to the simulation of human intellectual processes by machines, sometimes loosely modeled on the human brain itself. These processes, ordinarily carried out by computer systems, include learning, reasoning, and self-correction. This makes AI useful for tasks that typically require human cognition, such as understanding natural language, recognizing patterns, solving problems, and making decisions.

How AI Works

Artificial intelligence uses algorithms and large amounts of data to help machines perform tasks that typically require human intelligence. At its core, AI relies on machine learning, where systems learn from examples rather than following strict rules. There are three main types of machine learning: supervised learning (trained on labeled data to make predictions), unsupervised learning (finds patterns in unlabeled data), and reinforcement learning (learns through trial and error with rewards). Modern AI often uses deep learning, which employs neural networks inspired by the human brain to process complex information. While AI has made remarkable progress, it remains a tool for pattern recognition and prediction, not true understanding or reasoning like humans.
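To make the supervised case concrete, here is a minimal sketch using scikit-learn (my library choice, not something prescribed above); the features and labels are entirely made up. The point is that the model infers a decision rule from labeled examples rather than being programmed with one:

```python
# Minimal supervised-learning sketch: the model infers a rule
# from labeled examples instead of following hand-written rules.
from sklearn.tree import DecisionTreeClassifier

# Toy (hypothetical) labeled data:
# [hours_studied, hours_slept] -> pass (1) / fail (0)
X = [[1, 4], [2, 8], [6, 7], [8, 6], [3, 5], [9, 8]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)

# Predict for an unseen example; the "rule" was learned, not hand-coded.
print(model.predict([[7, 7]]))  # e.g. [1]
```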

A Brief History of Artificial Intelligence

1940s-1956: The Birth of AI

Roots in cybernetics (Norbert Wiener), neural models (Warren S. McCulloch and Walter Pitts), and Turing's 1950 paper 'Computing Machinery and Intelligence'. The pivotal moment: the 1956 Dartmouth Conference, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, officially coins the term 'Artificial Intelligence'.

1957-1974: Early Optimism & First Golden Age

Symbolic AI and 'good old-fashioned AI' (GOFAI) dominate: Logic Theorist (1956), General Problem Solver (1959), perceptrons (Rosenblatt, 1958). Governments pour money in. ELIZA (1966) and SHRDLU (1970) impress with natural-language and block-world reasoning. Overhype leads to unrealistic promises.

1974-1980: First AI Winter

Funding dries up after the Lighthill Report (UK, 1973) and DARPA cuts. The Perceptrons book (Minsky & Papert, 1969) exposes limitations of single-layer neural nets.

1980-1987: Expert Systems Boom

Second wave fueled by commercial expert systems (XCON, MYCIN). Japan's Fifth Generation project and U.S. response spark huge investment. Lisp machines flourish.

1987-1993: Second AI Winter

Expert systems prove brittle and expensive to maintain; Lisp machine market collapses; funding crashes again.

1993-2011: Rise of Machine Learning

Computing power and data explode. Key milestones:

- 1997: IBM's Deep Blue defeats world chess champion Garry Kasparov.
- 2006: Geoffrey Hinton's work on deep belief networks revives interest in 'deep learning'.
- 2011: IBM Watson wins Jeopardy! against human champions.

2012-2020: Deep Learning Revolution

AlexNet (2012 ImageNet victory) marks the start of the deep learning tsunami. GPUs + big data + ReLU + dropout unleash convolutional and recurrent networks. Milestones:

- 2014: Generative adversarial networks (GANs) introduced by Ian Goodfellow.
- 2016: DeepMind's AlphaGo defeats Go champion Lee Sedol.
- 2017: The Transformer architecture debuts in 'Attention Is All You Need'.
- 2018: BERT sets new benchmarks in language understanding.
- 2020: GPT-3 demonstrates few-shot learning at scale; AlphaFold 2 cracks protein-structure prediction.

2021-2025: The Generative AI Era & AGI Debate

ChatGPT (Nov 2022) brings AI to the masses. Explosion of multimodal models (GPT-4, Gemini, Grok-1/2, Claude 3, Llama 3, Midjourney, Sora). Key themes:

- Scaling laws: bigger models plus more data keep yielding better capabilities.
- Instruction tuning and RLHF turn raw language models into useful assistants.
- Multimodality: single models handle text, images, audio, and video.
- Open-weight models (Llama) versus closed frontier models.
- Safety, alignment, and regulation (e.g., the EU AI Act) become mainstream debates.

2025 Snapshot

AI research is no longer confined to narrow tasks; frontier labs openly pursue Artificial General Intelligence (AGI). Reasoning capabilities in models are improving fast (o1-style chain-of-thought, test-time compute), robotics is accelerating (Figure, Tesla Bot), and agentic systems (AI that can plan and execute multi-step tasks) are emerging. The field is simultaneously the most exciting and most controversial it has ever been.

In ~75 years, AI has gone from philosophical thought experiment to rule-based systems, to statistical learning, to deep neural scaling, to today's race toward general intelligence. The next chapter is being written right now.

Broad Categories of Artificial Intelligence

Artificial Narrow Intelligence (ANI) / Weak AI / Narrow AI

The bulk of AI in use today falls into the Narrow AI category. Usually, these machines are designed with a specific task in mind. Some example use cases: recommendation algorithms on streaming platforms, image/sound recognition, voice assistants, email smart replies, vehicle autopilot, writing code, deepfakes, and fraud detection. You can think of Narrow AI as an incredibly sophisticated calculator: it can solve enormously complex problems almost instantly, but it can't write poetry or taste food. It's a specialist (narrow), not a generalist, and it represents almost 100% of the AI you interact with as of 2025.

Artificial General Intelligence (AGI) / Strong AI / General AI

This is the future goal of AI: a system that can do any intellectual task a human can. It is not a current reality - at least as far as the general public is aware. This conceptual form of AI would possess human-like cognitive abilities, allowing it to understand, learn, reason, create, and apply its intelligence to a broad spectrum of tasks. For a machine to be deemed AGI, it must be able to complete any intellectual task that a human can with absolute autonomy. Many experts predict the arrival of AGI between 2030 and 2040.

Artificial Super Intelligence (ASI) / Super AI / Super-intelligent AI

Beyond AGI lies Super AI: a form of intelligence that not only matches human intellect but surpasses it by an order of magnitude. Hypothetically, this machine would outperform humans in almost all (if not all) cognitive domains, including scientific creativity, general wisdom, strategic thinking, social skills, and every other intellectually relevant field.

Sub-fields of AI Systems

Various sub-fields of AI systems have seen an explosion since 2020, fuelled by massive data availability, computational power, and investment. As far as I can make out, there are five distinct subsets worthy of note:

Machine Learning

This is the most important sub-field to date and serves as the core engine of modern AI. In essence, you give the system data and let it figure out the patterns autonomously (as sketched after the list below). Within this field, there are three main categories:

- Supervised learning: the model is trained on labeled data to make predictions.
- Unsupervised learning: the model finds patterns in unlabeled data.
- Reinforcement learning: the model learns through trial and error, guided by rewards.
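As an illustration of the unsupervised case, here is a minimal sketch, again assuming scikit-learn and using invented toy points: k-means is handed unlabeled data and discovers the grouping on its own.

```python
# Unsupervised-learning sketch: no labels given; the algorithm
# finds structure in the data by itself.
from sklearn.cluster import KMeans

# Toy (hypothetical) 2-D points that visibly form two groups.
points = [[1, 1], [1.5, 2], [1, 0.5], [8, 8], [9, 9], [8.5, 7.5]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1] -- two discovered clusters
```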

Deep learning / Neural Networks

A subset of ML using neural networks with multiple layers to process complex data, often inspired by the human brain. The 'deep' in deep learning refers to the multiple layers of a given model (a model being a software program that learns from data to perform tasks like classification, prediction, analysis, and generation). Deep learning is often used in autonomous vehicles and facial recognition.
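To show what 'multiple layers' means in practice, here is a minimal sketch using PyTorch (my choice of framework; any would do), with made-up dimensions: two stacked linear layers separated by a non-linearity.

```python
# A tiny "deep" network: two stacked layers, sketched in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),   # layer 1: 4 input features -> 8 hidden units
    nn.ReLU(),         # non-linearity between the layers
    nn.Linear(8, 2),   # layer 2: 8 hidden units -> 2 output classes
)

x = torch.randn(1, 4)  # one (random, hypothetical) example with 4 features
print(model(x))        # raw scores (logits) for the 2 classes
```

Real deep networks stack dozens or hundreds of such layers, but the principle is the same: each layer transforms the previous layer's output.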

Natural Language Processing (NLP)

A subfield of Artificial Intelligence that enables computers to understand, interpret, generate, and interact with human language in a way that is both meaningful and useful. In 2011 it was a niche academic field, but it has since become a core part of AI, powering search engines, virtual assistants, legal discovery, medical diagnosis, coding copilots, and almost every modern software product (a toy sketch of the basic idea follows the list below). Some notable breakthroughs were:

- 2013: word2vec shows that word meaning can be captured as vectors.
- 2017: the Transformer architecture ('Attention Is All You Need') replaces recurrence with attention.
- 2018: BERT sets new benchmarks for language understanding.
- 2020: GPT-3 shows that large language models can perform tasks from just a few examples.
- 2022: ChatGPT turns NLP into a mass-market product.
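At its most basic, NLP starts by turning raw text into numbers a model can work with. Here is a deliberately naive sketch in plain Python (the tokenizer and representation are simplified illustrations, not how modern systems actually do it):

```python
# Crude NLP sketch: converting raw text into counts a model can use.
from collections import Counter

def bag_of_words(text: str) -> Counter:
    # Lowercase and split on whitespace -- a deliberately naive tokenizer.
    return Counter(text.lower().split())

print(bag_of_words("The cat sat on the mat"))
# Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})
```

Modern NLP replaces these counts with learned embeddings and attention, but the first step is still the same: text in, numbers out.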

Computer Vision

This involves teaching machines to interpret and understand visual information from the world, whether images or video. It uses algorithms to detect objects, recognize faces, or segment scenes, relying on techniques like edge detection, feature matching, and learning models like CNNs for high accuracy. Everyday uses include autonomous vehicles analyzing road scenes and medical imaging for tumor detection.
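Edge detection, mentioned above, is one of the classic low-level operations. Here is a minimal sketch of a Sobel filter in Python with NumPy, run on a small synthetic image (real pipelines would use an optimized library, but the arithmetic is the same):

```python
# Edge-detection sketch: a Sobel filter highlights intensity changes.
import numpy as np

def sobel_x(image: np.ndarray) -> np.ndarray:
    """Horizontal-gradient response for a 2-D grayscale image."""
    kernel = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]])
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            # Slide the 3x3 kernel over the image and sum the products.
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
    return out

# A synthetic test image: dark on the left, bright on the right.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
print(sobel_x(img))  # strong responses along the vertical edge
```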

Robotics

Combines AI with mechanical engineering to create intelligent physical agents. It includes perception (using computer vision and sensors), planning (deciding actions), and control (moving in real-world environments). Applications range from factory automation and warehouse robots to humanoid platforms (Boston Dynamics, Tesla Bot), surgical robots, and household assistants.
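As a toy stand-in for the perceive-plan-act loop described above, here is a minimal sketch in Python of a proportional controller steering a one-dimensional robot toward a goal (the scenario and gain value are invented for illustration):

```python
# Control sketch: a proportional (P) controller moves a 1-D robot
# toward a target by commanding a velocity proportional to the error.
def p_controller(position: float, target: float, kp: float = 0.5) -> float:
    return kp * (target - position)  # bigger error -> bigger correction

position, target = 0.0, 10.0
for step in range(10):
    velocity = p_controller(position, target)  # "plan": pick an action
    position += velocity                       # "act": move the robot
    print(f"step {step}: position = {position:.2f}")
```

Each iteration halves the remaining distance, so the robot converges smoothly on the target; real robot controllers add integral and derivative terms, sensor feedback, and safety limits on top of this idea.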

Applications and Impact

AI is integrated into everyday life and industries:

- Healthcare: medical imaging and diagnostic support.
- Finance: fraud detection and algorithmic trading.
- Transportation: driver assistance and autonomous vehicles.
- Entertainment: recommendation algorithms on streaming platforms.
- Productivity: voice assistants, smart replies, and coding copilots.

While AI offers efficiency and innovation, it also raises ethical concerns: job displacement, algorithmic bias, privacy issues, and the need for regulation to ensure safe development.