By Brinkley Warren
This is the first in a brief series of posts about Artificial Intelligence. In his 1988 article “Mathematical Logic in Artificial Intelligence,” John McCarthy said:
“Artificial Intelligence cannot avoid philosophy. If a computer program is to behave intelligently in the real world, it must be provided with some kind of framework into which to fit particular facts it is told or discovers. This amounts to at least a fragment of some kind of philosophy, however naive.”
When you think of “AI” or “artificial intelligence,” what do you think of? Invariably most people think about robots, science fiction movies, algorithms that search big data sets, and talking to machines. All of these are true to some extent, but your concept of AI is likely incomplete. It doesn’t help that most of us only read snippets about AI in the press or on Twitter.
For instance, the billionaire entrepreneur and inventor Elon Musk is one of many to sound the alarm about the dangers of artificial intelligence, which he says could pose a greater threat to humanity than nuclear weapons. “We need to be super careful with AI,” he tweeted, adding that artificial intelligence is “potentially more dangerous than nukes.”
Sure we could take Elon’s word for it, but if we don’t all have a more nuanced understanding of AI then it’s just hyperbole. So what does all of this buzz about AI really mean? Let me help you make sense of it…
Since I serve as the VP of an AI company (www.iactiveit.com), I thought I would write this post to help clarify what AI is and what it isn’t: its history, its present, and its future. Most importantly, I want to communicate to business leaders the opportunities that AI represents for the future of humanity.
I first heard about AI when I saw the movie A.I. in 1999. I first learned about AI when I read Ray Kurzweil’s book, “The Singularity is Near,” in 2005. Six years later, in 2011, I attended Singularity University, where I learned more about AI and had the opportunity to actually apply AI to the real world when I started a company called Primerlife, which used AI-planning software and advanced search algorithms to help personalize wellness plans based on real-time genetic interpretation.
Our AI firm specializes in taking advanced AI technologies out of the research lab and into the real world, where they can produce a positive impact. At IActive, we are all about empowering knowledge workers. Importantly, however, we may not be the type of AI company you think of at first glance. We specialize in a branch of AI related to planning and scheduling, and over 50% of our employees have either a PhD or a Master’s degree in AI with an emphasis on this branch. We have brilliant AI practitioners, and I am most certainly not one of them. But since joining IActive in 2011 I have become fluent in the “language” of AI, and I have sought to understand the field through intense questioning, reading, and application to real-world scenarios. While I don’t have a PhD in AI or the skills to program its deep math, I have been trained as a journalist. What follows is the first post of a longer “guide to AI” that I hope will be useful to business leaders who might benefit from understanding more about AI and how it can transform the firms of tomorrow.
So, you already got your first lesson: at its core, AI is computer science aimed at creating intelligent systems. And in the same way that a human is an intelligent system made up of many different aspects (you could even say the entire planet is one), so too is the field of AI made up of many parts.
Research associated with artificial intelligence is highly technical and specialized, but the overall goal includes programming computers for certain traits, such as knowledge, reasoning, learning, perception, problem solving, and planning. Each of these is described later in this post.
My attraction to the field of AI is based on my love of technological innovation, ecologically sustainable cybernetic systems, and entrepreneurship — but also my love for philosophy. Indeed, AI could be the philosophical Zeitgeist of our modern era, and undoubtedly the late-night conversations that I’ve participated in concerning AI and the ramifications of AI for the World are some of the most interesting I’ve ever had.
The real draw of AI, in my opinion, is not the technology itself but the philosophy of AI, and harnessing that philosophical discourse to create real-world applications that can help us solve global grand challenges.
Socrates said, “The understanding of mathematics is necessary for a sound grasp of ethics.”
Plato said, “The highest form of pure thought is in mathematics.”
Euclid said, “The laws of Nature are but the mathematical thoughts of God.”
Tesla said, “Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.”
The history of artificial intelligence began in ancient history, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”
The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.
The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true. Eventually it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence.
The word “intelligence” in artificial intelligence refers to the computational part of the ability to achieve goals in the world. This ability to achieve goals is the branch of AI my company specializes in, because to achieve goals you must plan, and to make the best plans, you must have the best knowledge.
So, because this is the first part of the guide, it’s important that we begin with some clear definitions that surround AI. Although this may be kind of dry material, it’s an important foundation. To make the article more engaging, I put some thoughts about ontology at the end of this first post to spice things up a bit. So if you get bored reading the descriptive and topical stuff, skip to the end 🙂
Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information about the world. To implement knowledge engineering, an artificial intelligence must have access to objects, categories, properties, and the relations between all of them. Instilling common sense, reasoning, and problem-solving power in machines is a difficult and tedious task.
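As a rough illustration of what “access to objects, categories, properties and relations” can look like in code, here is a toy knowledge base in plain Python. Every object name and fact here is invented for this sketch; real knowledge-engineering systems are vastly richer.

```python
# A toy knowledge base: objects, categories, properties, and relations.
# (All names and facts are illustrative, not from any real system.)
kb = {
    "objects": {"tweety", "rex", "acme_corp"},
    "categories": {"tweety": "bird", "rex": "dog", "acme_corp": "company"},
    "properties": {"tweety": {"color": "yellow"}, "rex": {"color": "brown"}},
    "relations": [("rex", "owned_by", "acme_corp")],
}

def category_of(obj):
    """Look up which category an object belongs to."""
    return kb["categories"].get(obj)

def related(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in kb["relations"] if s == subject and r == relation]
```

Even this tiny structure shows why the work is tedious: every fact a machine is supposed to “know” must be explicitly represented somewhere.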
Machine learning is another core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions. Classification determines the category an object belongs to and regression deals with obtaining a set of numerical input or output examples, thereby discovering functions enabling the generation of suitable outputs from respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
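To make the classification/regression distinction concrete, here is a minimal sketch in plain Python, with no ML library: a one-nearest-neighbour classifier (classification: which category does a point belong to?) and a least-squares line fit (regression: what function maps inputs to numerical outputs?). The function names and toy data are mine.

```python
def knn_classify(train, query, k=1):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of ((x, y), label) pairs."""
    by_dist = sorted(train,
                     key=lambda p: (p[0][0] - query[0])**2 + (p[0][1] - query[1])**2)
    votes = [label for _, label in by_dist[:k]]
    return max(set(votes), key=votes.count)

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b: the 'regression' half of supervised learning."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx)**2 for x in xs)
    return a, my - a * mx  # slope, intercept
```

Both functions “learn” only from labelled examples, which is exactly what makes them supervised methods.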
Machine perception deals with the capability to use sensory inputs to deduce different aspects of the world, while computer vision is the ability to analyze visual inputs, with sub-problems such as facial and object recognition.
Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.
Logical AI: What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals.
Search: AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.
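A tiny but concrete example of a program examining many possibilities is breadth-first search over the states of the classic two-water-jug puzzle. The puzzle and its move rules are standard; the code itself is only an illustrative sketch.

```python
from collections import deque

def jug_search(cap_a, cap_b, goal):
    """Breadth-first search over jug states (a, b): fill, empty, or pour.
    Returns the minimum number of moves to get `goal` liters in either jug,
    or None if it is unreachable."""
    start = (0, 0)
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (a, b), moves = frontier.popleft()
        if goal in (a, b):
            return moves
        pour_ab = min(a, cap_b - b)  # how much fits when pouring a -> b
        pour_ba = min(b, cap_a - a)  # how much fits when pouring b -> a
        for nxt in [(cap_a, b), (a, cap_b), (0, b), (a, 0),
                    (a - pour_ab, b + pour_ab), (a + pour_ba, b - pour_ba)]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, moves + 1))
    return None
```

The `seen` set is the efficiency discovery in miniature: without it, the search would revisit the same possibilities forever.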
Pattern recognition: When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
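In the spirit of the “eyes and a nose” example above, here is a toy pattern matcher that slides a template over a tiny ASCII “image.” The data format and function name are invented for illustration; real vision systems work on pixels and far more robust features.

```python
def find_pattern(image, pattern):
    """Slide `pattern` over `image` (both lists of equal-length strings)
    and return the (row, col) offsets where every pattern cell matches."""
    hits = []
    ph, pw = len(pattern), len(pattern[0])
    for r in range(len(image) - ph + 1):
        for c in range(len(image[0]) - pw + 1):
            if all(image[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                hits.append((r, c))
    return hits
```

Exact matching like this only works for the simplest patterns, which is precisely the point of the paragraph above: more complex patterns require quite different methods.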
Representation: Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.
Inference: From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
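The bird/penguin default can be sketched directly. This is a toy, not a real non-monotonic logic engine; the rule table and the names in it are invented for illustration.

```python
# A tiny default reasoner: a default conclusion holds unless an exception
# is present among the known facts.
DEFAULTS = [
    # (precondition, default_conclusion, defeating_exceptions)
    ("bird", "flies", {"penguin", "ostrich", "broken_wing"}),
]

def conclusions(facts):
    """Return the facts plus every default conclusion not blocked by an exception."""
    out = set(facts)
    for pre, concl, exceptions in DEFAULTS:
        if pre in out and not (out & exceptions):
            out.add(concl)
    return out
```

Note the non-monotonic behaviour: adding the premise `"penguin"` to `{"bird"}` removes the conclusion `"flies"`, whereas in ordinary monotonic logic adding premises can only add conclusions.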
Common sense knowledge and reasoning: This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. There has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, but yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.
Learning from experience: Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
Planning: Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
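The description above (general facts about the effects of actions, a particular situation, and a goal, from which a sequence of actions is generated) can be sketched as a minimal STRIPS-style forward planner. The action and predicate names are invented for illustration; real planners, including the ones my company builds, are far more sophisticated than this.

```python
from collections import deque

# Each action: (name, preconditions, add_list, delete_list).
ACTIONS = [
    ("pick_up", frozenset({"hand_empty", "block_on_table"}),
     frozenset({"holding_block"}), frozenset({"hand_empty", "block_on_table"})),
    ("put_in_box", frozenset({"holding_block", "box_open"}),
     frozenset({"block_in_box", "hand_empty"}), frozenset({"holding_block"})),
    ("open_box", frozenset({"box_closed"}),
     frozenset({"box_open"}), frozenset({"box_closed"})),
]

def plan(state, goal):
    """Breadth-first search from `state` for a sequence of actions achieving `goal`."""
    state = frozenset(state)
    seen = {state}
    frontier = deque([(state, [])])
    while frontier:
        current, steps = frontier.popleft()
        if goal <= current:          # every goal fact holds
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= current:       # action is applicable
                nxt = (current - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None
```

For example, `plan({"hand_empty", "block_on_table", "box_closed"}, {"block_in_box"})` finds a three-step plan ending in `put_in_box`, exactly the “sequence of actions” the paragraph describes.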
Epistemology: This is a study of the kinds of knowledge that are required for solving problems in the world.
Ontology: Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology began in the 1990s. I love the topic of ontology, and at the bottom of this article I write some more about it…
Heuristics: A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates compare two nodes in a search tree to see if one is better than the other, i.e. whether one constitutes an advance toward the goal; this is the approach we use at IActive.
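A standard textbook illustration of a heuristic function (not necessarily IActive's approach) is A* search on a grid, where the Manhattan distance estimates how far a node seems to be from the goal. The grid, walls, and function name here are my own toy setup.

```python
import heapq

def astar_grid(walls, start, goal, size):
    """A* search on a size x size grid with blocked cells in `walls`.
    Returns the length of a shortest path from `start` to `goal`, or None."""
    def h(p):  # Manhattan-distance heuristic: never overestimates on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # (f = g + h, g, position)
    best_cost = {start: 0}
    while frontier:
        f, g, (x, y) = heapq.heappop(frontier)
        if (x, y) == goal:
            return g
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            nx, ny = nxt
            if 0 <= nx < size and 0 <= ny < size and nxt not in walls:
                if g + 1 < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None
```

The heuristic is what makes this faster than blind search: nodes that seem closer to the goal are expanded first, while admissibility (never overestimating) preserves optimality.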
Genetic programming: Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations. It was developed by John Koza’s group.
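Koza's genetic programming evolves actual Lisp programs; as a much-simplified sketch of the same evolutionary loop (selection, crossover, mutation), here is a genetic algorithm that evolves a string toward a target. All parameters and names are illustrative, and this is not Koza's method.

```python
import random

def evolve(target, pop_size=100, mut_rate=0.1, max_gens=1000, seed=0):
    """Evolve a string toward `target` by selection, crossover, and mutation.
    Returns (best_string, generations_used)."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    for gen in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == target:
            return pop[0], gen
        parents = pop[:pop_size // 5]       # selection: keep the fittest fifth
        children = [pop[0]]                  # elitism: the best survives unchanged
        while len(children) < pop_size:
            a, b = rng.choice(parents), rng.choice(parents)
            cut = rng.randrange(len(target))            # one-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(len(child)):                  # point mutation
                if rng.random() < mut_rate:
                    child[i] = rng.choice(alphabet)
            children.append("".join(child))
        pop = children
    return max(pop, key=fitness), max_gens
```

The same loop, applied to program trees instead of strings and run for far more generations, is the core idea behind genetic programming.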
Game playing: You can buy machines that can play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute-force computation, looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
Speech recognition: In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information with a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.
Understanding natural language: Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.
Computer vision: The world is composed of three-dimensional objects, but the inputs to the human eye and computers’ TV cameras are two dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
Expert systems: Our company specializes in these types of systems. A “knowledge engineer” interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.
Heuristic classification: One of the most feasible kinds of expert system given the present knowledge of AI is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
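The credit-card example might be sketched as follows. Every field name and threshold here is invented purely for illustration; a real system would combine far more sources of information and learned weights rather than hand-picked ones.

```python
# A toy heuristic classifier: combine several sources of information
# into one of a fixed set of categories. (All thresholds are invented.)
def classify_purchase(record):
    """Return 'approve', 'review', or 'reject' for a proposed purchase."""
    score = 0
    if record["payment_history"] == "good":
        score += 2                  # owner's record of payment
    if record["amount"] > record["typical_amount"] * 5:
        score -= 2                  # unusually large purchase for this owner
    if record["merchant_fraud_reports"] > 0:
        score -= 1                  # prior fraud at this establishment
    if score >= 2:
        return "approve"
    return "review" if score >= 0 else "reject"
```

The fixed set of categories and the hand-written combination of evidence are what make this "heuristic classification" rather than learning.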
To me, the most important, and yet least understood, of all the aspects of AI relates to ontologies. Ontology is basically the philosophy of the state of being, or reality, and it’s what I wrote one of my master’s theses about: specifically, the fallacy of a single approach to being. It’s my belief that there are actually multiple ontological tiers to any situation, any person, any condition, and any defined goal or desired outcome. There are different ways of knowing, and I believe in the ontology of Transcendence, of course.
In “What Are Ontologies, and Why Do We Need Them?”, Chandrasekaran, Josephson and Benjamins point out that:
“Ontological analysis clarifies the structure of knowledge. Given a domain, its ontology forms the heart of any system of knowledge representation for that domain. Without ontologies, or the conceptualizations that underlie knowledge, there cannot be a vocabulary for representing knowledge….Second, ontologies enable knowledge sharing.”
When creating AI programs and selecting any representation of knowledge, we are in that very act unavoidably making a set of decisions about how and what to see in the world. That is, selecting a representation means making a set of ontological commitments that carry an opportunity cost of exclusion rather than inclusion, even though the best of human nature is one of inclusiveness. The commitments are, in effect, a strong pair of glasses that determine what we can see, bringing some part of the world into sharp focus at the expense of blurring other parts.
An ontology is an account of being, of reality. An ontology, in short, determines what is and is not real. For example, a materialist ontology assumes that matter is fundamentally real, while things like consciousness do not have any reality of their own. Consciousness is a mere function of matter (at best) or delusion (at worst). A spiritual ontology, on the other hand, might take the opposite view: consciousness is fundamentally real, while things like matter do not have any reality of their own. Matter is a mere function of consciousness (at best) or delusion (at worst). So is it possible to integrate such contradictory accounts of reality? Can we create AI that is truly as capable of being human as humans are, or will it always be bound by the ones and zeros of a programmatic representation of reality, one inherently wrapped up in an empirical-industrial ontology that operates as a recalcitrant dualism where people (and knowledge) are either Ascending and seeking to transcend, or Descending and seeking to regress? How, then, can we have a more holistic approach to AI and knowledge representation?
This is a question I pose to all of you philosophers and AI folks out there; I’d love to hear your opinions.
In summary, as you can see, when it comes to AI it’s not as cut and dried as many make it out to be. There are many aspects, many sub-fields that run very deep, and many applications, potential applications, and unique approaches. AI, because it’s connected to human knowledge and the nature of reality, is perhaps the most complex and important scientific and philosophical endeavor of our age.
I will post more articles about AI in the future. If you have any ideas for what you’d like to discuss, please hit me up. Until then: do it for impact. The rest will follow.