If you’ve ever asked your phone a question, let Netflix decide what you should watch next, or watched a machine write, draw, or reason in ways that feel eerily human, you’ve already brushed up against a much bigger question: who started AI in the first place?
It’s a deceptively simple question. Most people expect a single name, a lone genius, a garage moment like Apple or a dorm-room story like Facebook. But artificial intelligence doesn’t fit neatly into that kind of origin story. AI wasn’t “started” in one afternoon or by one person with a bright idea. It was born slowly, through curiosity, wartime urgency, philosophical debates, mathematical breakthroughs, and decades of trial and error.
This question matters right now more than ever. AI is no longer a futuristic concept—it’s reshaping jobs, creativity, medicine, education, and how businesses compete. Understanding who started AI helps you understand why it works the way it does today, why it has certain limitations, and where it’s likely headed next. It also cuts through hype. When you know the roots, it’s easier to tell innovation from marketing noise.
This article is written for curious readers, students, creators, business owners, and anyone trying to make sense of AI beyond surface-level headlines. By the time you finish, you’ll understand not just the names behind AI’s beginnings, but the ideas, struggles, and real-world motivations that shaped it. You’ll walk away with a clear mental timeline, practical context, and a deeper appreciation for how today’s AI tools came to be.
Who Started AI? Understanding the Question Behind the Question
Before we name names, we need to slow down and clarify what we’re actually asking when we say who started AI. Artificial intelligence isn’t a single invention like the light bulb. It’s more like aviation. The Wright brothers didn’t invent flight from nothing—they combined centuries of ideas about lift, motion, materials, and control into something that finally worked.
AI is similar. It sits at the intersection of philosophy, mathematics, neuroscience, logic, engineering, and computer science. Asking who started AI is really asking: who first believed that machines could think, reason, or behave intelligently—and then tried to prove it.
Early thinkers weren’t even working with computers as we know them today. They were asking abstract questions. Can reasoning be reduced to rules? Can learning be mechanized? Is intelligence a property of the mind, or a process that can be replicated?
These questions go back hundreds of years, long before the term “artificial intelligence” existed. Philosophers like René Descartes and Gottfried Wilhelm Leibniz imagined symbolic logic systems that could represent human thought. In hindsight, these ideas feel like the philosophical grandparents of AI.
So when people search for who started AI, they’re often really looking for two things: the person who named it, and the people who made it practical. Those are not always the same individuals.
The Birth of Artificial Intelligence as a Field
The modern answer to who started AI usually begins in the mid-20th century, when computers became powerful enough to move beyond basic arithmetic. This is where artificial intelligence stops being a thought experiment and becomes a scientific pursuit.
The pivotal moment came in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence. This wasn’t a conference in the modern sense—it was an extended summer workshop among researchers who shared a bold conjecture: that every aspect of learning and intelligence could, in principle, be described precisely enough for a machine to simulate it.
The term “artificial intelligence” was officially coined here, and that alone makes this event historically significant. Naming a field gives it identity. It attracts funding, students, and skeptics. It creates momentum.
This moment didn’t create AI overnight, but it unified scattered ideas into a shared mission. From this point forward, AI wasn’t just philosophy or math—it was a field with goals, experiments, and expectations.
John McCarthy: The Man Who Named Artificial Intelligence
If you’re looking for the clearest, most defensible answer to who started AI, John McCarthy is the name you’ll see most often—and for good reason.
McCarthy was a mathematician and computer scientist who not only believed machines could be intelligent, but also gave the field its name. In 1956, he organized the Dartmouth workshop and deliberately chose the term “artificial intelligence” to distinguish it from cybernetics and automation.
That choice mattered. McCarthy wanted a term broad enough to include reasoning, learning, perception, and language—not just control systems or feedback loops. He wasn’t interested in machines that merely followed instructions. He wanted machines that could reason about the world.
Beyond naming AI, McCarthy made lasting technical contributions. He developed the Lisp programming language, which became the dominant language for AI research for decades. Lisp was designed to manipulate symbols and logic, aligning perfectly with early AI goals.
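Lisp’s central idea was that programs and data share one shape: nested lists of symbols. As a rough sketch of that idea (in Python rather than Lisp itself), here is a tiny evaluator for prefix expressions written as nested lists; the operators and the example expression are illustrative, not drawn from any historical system.

```python
# Lisp represented programs and data alike as nested lists of symbols.
# This toy evaluator walks such a list the way early Lisp interpreters
# did: the first element names an operation, the rest are arguments,
# and arguments may themselves be nested expressions.

def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr  # a bare number evaluates to itself
    op, *args = expr
    values = [evaluate(a) for a in args]  # recursively evaluate arguments
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

# (* 2 (+ 3 4)) in Lisp notation:
print(evaluate(["*", 2, ["+", 3, 4]]))  # prints 14
```

The point is not the arithmetic but the representation: because the expression is itself a list, a program can inspect, build, or rewrite other programs, which suited early AI’s symbol-manipulation goals.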
McCarthy’s vision was bold, sometimes overly optimistic, but foundational. He believed intelligence could be formally described and then implemented in machines. That belief still underpins much of AI research today.
Alan Turing and the Philosophical Foundations of AI
While John McCarthy named AI, many argue that Alan Turing started AI in spirit.
Turing never used the term “artificial intelligence,” but his ideas made it inevitable. In 1950, he published a paper titled “Computing Machinery and Intelligence,” which asked a question that was radical for its time: “Can machines think?”
Instead of getting stuck in definitions, Turing proposed a practical test—the Imitation Game, now known as the Turing Test. If a machine’s side of a conversation could not reliably be told apart from a human’s, the machine could be considered intelligent.
This reframing was powerful. It shifted the debate from abstract philosophy to observable behavior. Intelligence wasn’t about consciousness or feelings—it was about performance.
Turing also laid the groundwork for computer science itself. His theoretical “Turing Machine” defined what computation is and what problems machines can solve. Without this foundation, AI would have had no platform to stand on.
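The Turing Machine itself is simple enough to sketch in a few lines: a tape of symbols, a read/write head, and a table of transitions. The simulator below is a minimal illustration, and the bit-flipping machine it runs is a made-up example, not one of Turing’s.

```python
# A minimal Turing machine simulator: a sparse tape, a head position,
# and a transition table mapping (state, symbol read) to
# (symbol to write, head move, next state). "_" is the blank symbol.

def run(tape, transitions, state="start", halt="halt", max_steps=1000):
    cells = dict(enumerate(tape))  # position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(pos, "_")
        write, move, state = transitions[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# Example machine (illustrative): flip every bit, halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011", flip_bits))  # prints 0100
```

Everything a modern computer does can, in principle, be reduced to a table like `flip_bits`, which is why this abstraction could define what computation is.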
So while Turing didn’t start AI as a formal field, he made it intellectually possible. Many AI researchers still measure progress, consciously or not, against questions Turing raised over 70 years ago.
The Early AI Pioneers Who Shaped the Direction
Answering who started AI means recognizing that no single person could have done it alone. The field was built by a small group of pioneers, each contributing a piece of the puzzle.
Marvin Minsky was one of the most influential. A co-founder of the MIT AI Lab, Minsky explored how human intelligence might emerge from interactions between simple processes. His work inspired generations of researchers, even when his predictions proved too optimistic.
Herbert Simon and Allen Newell approached AI from a cognitive angle. They built some of the earliest AI programs, including the Logic Theorist, which could prove theorems from Whitehead and Russell’s Principia Mathematica. This was groundbreaking—it showed that machines could perform tasks associated with human reasoning.
Claude Shannon, known as the father of information theory, contributed ideas about machine learning and chess-playing algorithms. His work demonstrated that intelligence could be studied mathematically.
Together, these thinkers turned AI into a serious scientific endeavor. They wrote the first textbooks, trained the first students, and built the first labs dedicated entirely to artificial intelligence.
How Early Optimism Shaped—and Misled—AI Development
One of the most human aspects of AI’s origin story is how wrong its founders often were about timelines.
In the 1950s and 1960s, many researchers believed human-level AI was just a few decades away. Some predicted it would arrive within a generation. Funding poured in, expectations soared, and headlines promised thinking machines.
Reality was harsher. Early AI systems worked well in narrow, controlled environments but failed in messy, real-world situations. Language understanding, perception, and common sense proved far more complex than anticipated.
These disappointments led to periods known as “AI winters,” when funding dried up and public interest waned. Understanding who started AI also means understanding their miscalculations. They underestimated how much data, computing power, and nuance intelligence requires.
Ironically, those early failures laid the groundwork for modern success. Researchers learned what didn’t work, refined their models, and waited for technology to catch up.
The Transition from Symbolic AI to Data-Driven Intelligence
For decades, AI was dominated by symbolic approaches. Researchers tried to encode human knowledge as rules: if this, then that. This approach made sense given the tools of the time, but it struggled with ambiguity and scale.
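That rule-based style can be sketched in a few lines of Python. The facts and rules below are hypothetical, purely for illustration of how symbolic systems chained hand-written knowledge.

```python
# A toy rule-based system in the symbolic AI style: knowledge is a set
# of hand-written if-then rules, and the program chains them forward
# until no new conclusions appear. Facts and rules are hypothetical.

rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
]

def infer(facts):
    """Forward chaining: fire every rule whose conditions hold,
    repeating until the set of known facts stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"has_fever", "has_cough", "fatigue"})))
```

The weakness the article describes is visible even here: every piece of knowledge must be written by hand, and the system knows nothing outside its rules.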
The shift toward machine learning changed everything. Instead of telling machines what to think, researchers taught them how to learn from data. This transition wasn’t sudden—it evolved through the work of statisticians, neuroscientists, and computer scientists over many years.
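The contrast with the rule-based style can be shown with a minimal sketch: instead of writing the rule, the program recovers it from examples. This is ordinary least-squares line fitting in plain Python, illustrative rather than a depiction of any particular historical system.

```python
# Minimal "learning from data": fit y ≈ a*x + b from example pairs
# using closed-form least squares. No rule is ever written by hand.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x); assumes xs are not all equal
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# The program is never told the rule y = 2x + 1; it recovers it
# from the training examples alone.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(round(a, 2), round(b, 2))  # prints 2.0 1.0
```

Modern machine learning uses vastly richer models, but the core move is the same: parameters are adjusted to fit data rather than programmed as explicit rules.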
People like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio didn’t start AI, but they revived it. Their work on neural networks and deep learning in the late 20th and early 21st centuries transformed AI from a niche academic field into a global force.
This evolution answers a modern version of who started AI: the founders planted the seeds, but later generations learned how to make them grow.
Real-World Benefits and Use Cases of AI Today
Understanding who started AI helps explain why it’s so versatile today. The original goal was general intelligence, but practical applications emerged first.
In healthcare, AI assists in diagnosing diseases, analyzing medical images, and predicting patient outcomes. These systems don’t “think” like humans, but they excel at pattern recognition.
In business, AI automates customer support, detects fraud, optimizes logistics, and personalizes marketing. The before-and-after difference is stark: tasks that once took teams weeks can now be done in minutes.
Creative industries use AI for writing, music composition, design, and video editing. This would have stunned early pioneers, many of whom believed creativity was uniquely human.
Each of these use cases traces back to foundational ideas about logic, learning, and representation developed by AI’s founders.
A Practical Step-by-Step Way to Understand AI’s Origins
If you want to truly grasp who started AI and why it matters, approach it in stages.
First, understand the philosophical roots. Intelligence was debated long before computers existed. This context helps you see AI as part of a broader human quest.
Next, study the mid-20th century breakthroughs. Learn about Turing, McCarthy, and the Dartmouth workshop. This is where AI becomes concrete.
Then, follow the technological evolution. Notice how limitations led to new approaches like machine learning and neural networks.
Finally, connect past ideas to present tools. When you use a chatbot or recommendation engine, trace its lineage back to early concepts like symbolic reasoning or pattern recognition.
This layered approach turns AI history from trivia into usable insight.
Tools, Comparisons, and Expert Recommendations
For learning AI history and fundamentals, beginners benefit from free resources like university lecture series and introductory books. Paid courses offer structure and mentorship but aren’t strictly necessary.
Academic papers provide depth but can be dense. Popular science books strike a balance, translating complex ideas into readable narratives.
Having studied and written extensively about technology, I’ve found the most effective approach is a mixed one. Start broad, then dive deep where curiosity pulls you. AI’s origin story rewards patience and context.
Common Misconceptions About Who Started AI
One common mistake is believing AI was invented recently. This ignores decades of foundational work.
Another is assuming AI progress has been linear. In reality, it has surged, stalled, and reinvented itself multiple times.
Many also credit modern tech companies with starting AI. While they’ve accelerated adoption, they’re building on ideas that predate them by half a century.
The fix is simple: separate invention from commercialization. Understanding that difference clarifies who started AI and who scaled it.
Conclusion
So, who started AI? The honest answer is that AI was started by a community, not a lone genius. John McCarthy named it, Alan Turing imagined it, and a small group of pioneers dared to formalize intelligence itself.
Their ideas were imperfect, ambitious, and sometimes wildly optimistic—but they were essential. Today’s AI systems, powerful as they are, are chapters in a story that began with a question: can machines think?
Understanding that story doesn’t just satisfy curiosity. It builds perspective, sharpens judgment, and grounds expectations. Whether you’re using AI, building with it, or simply trying to understand its impact, knowing where it started gives you an edge.
If this topic sparked new questions, explore further, experiment with AI tools, and keep learning. The story of AI is still being written—and now, you’re part of it.
FAQs
Who is considered the father of artificial intelligence?
John McCarthy is most often called the father of AI because he coined the term and organized the 1956 Dartmouth workshop that launched the field.
Did Alan Turing start AI?
Alan Turing didn’t start AI as a field, but his ideas and the Turing Test laid the intellectual foundation for it.
When was artificial intelligence officially started?
AI is generally considered to have started as a field in 1956 at the Dartmouth Conference.
Was AI invented for military purposes?
Some early funding came from military sources, but AI research was driven by broader scientific curiosity.
Why did early AI fail to meet expectations?
Early researchers underestimated the complexity of intelligence and lacked sufficient computing power and data.
Adrian Cole is a technology researcher and AI content specialist with more than seven years of experience studying automation, machine learning models, and digital innovation. He has worked with multiple tech startups as a consultant, helping them adopt smarter tools and build data-driven systems. Adrian writes simple, clear, and practical explanations of complex tech topics so readers can easily understand the future of AI.