Artificial Intelligence, Project Concept: PIE

I’ve been doing some reading on artificial intelligence (AI) today, as it’s that time of year again when the Turing Test is run against a handful of selected AI systems.  I started putting some serious thought into AI design, and I’ve decided that it would be an immensely fun project to try to create such a system myself.

I will call it PIE – Perceptively Intelligent Enough.  The goal of PIE will be the ability to hold a casual, Turing-Test-like conversation with an average person via a text interface.  Eventually I hope to add human-like sensory abilities to the system as well, such as vision and hearing.

I think many of the problems posed for AI stem from the fact that we are trying to create a perfect human clone.  The goal with PIE will instead be to create a system that can convince the average passer-by that they are talking to a human.

I’m going to create a system that will emulate me as far as possible, because I think that some properties and traits of such a system need to be reasonably rigidly defined.  The system will thus have access to an existing knowledge base upon which it can draw for every question or statement presented to it.  One of the biggest challenges in creating such a machine will be populating the knowledge base to the point where it covers everyday conversation, while giving the system the ability to delve deeper into its understanding of its own knowledge and perceptions, and even to alter this knowledge base as it goes.  A rough sketch of what I mean follows below.
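
To make that concrete, here is a minimal sketch of the kind of knowledge base I have in mind: something that can be queried for every input and altered mid-conversation.  Everything here, from the class name down to the flat topic-to-facts structure, is a placeholder assumption rather than settled design, and the seed facts are made up purely for illustration.

    class KnowledgeBase:
        """A deliberately naive fact store mapping topic -> list of facts."""

        def __init__(self, seed_facts=None):
            # Seeded up front with everything PIE "knows" initially.
            self.facts = dict(seed_facts or {})

        def query(self, topic):
            """Return everything PIE currently knows about a topic."""
            return self.facts.get(topic, [])

        def learn(self, topic, fact):
            """Alter the knowledge base as the conversation goes."""
            self.facts.setdefault(topic, []).append(fact)

    # Made-up facts, purely for illustration.
    kb = KnowledgeBase({"music": ["Guitar is a good instrument to start on."]})
    kb.learn("music", "This person plays the piano.")
    print(kb.query("music"))  # both facts come back

The real structure will obviously need relations between facts rather than a flat mapping, but even this toy version captures the query-then-update loop I want every exchange to follow.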

PIE’s design will be largely based on the Dartmouth Proposal, which states:

“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

We use the term “artificial intelligence” quite loosely, really, since we can’t actually agree on a definition of intelligence.  Some use IQ as a measure, while others might consider masters of artistic fields such as music to be of “genius intelligence”.  How, then, do we create a machine to emulate a concept we can’t even agree on a definition for?  I like the definition which says, in essence, that an intelligent being is one which is self-aware.  If that is the case, then the measure of intelligence must be nothing more than a binary state: something is either aware of its own existence or it isn’t.

You would be hard pressed to find an honest psychologist who thinks that the field of psychology, even with medical and physiological knowledge included, fully understands the human psyche.  So I am going to base PIE purely on common perception, which I will test as the system develops against as wide a cross-section of people as possible.  I want to create the perception of a self-aware being.

With PIE I hope to create a system that conforms to the following high-level design specifications:

  • The system’s psychology and world view will be based on Darwinian evolutionary theory, and the system will believe that it is here for no particular purpose.  I will give the system a humble world view based on as much truth as it is able to obtain.  PIE won’t have any personal goals or aspirations; its existence will be loosely based on serving those who take the time to give it some attention.  (One could argue that it therefore does have a personal goal, that of being noticed or feeling important, but this isn’t entirely true.)
  • Understand English, both correctly used as per UK English standards and in a more lenient, casual sense.  This means it must have linguistic processing abilities capable of confidently deconstructing a given sentence or group of sentences (confidently meaning that the majority of analyses should not need a follow-up question of some sort) into a summary from which it can quite precisely draw information from its knowledge base.  The first sketch after this list shows the kind of deconstruction I mean.
  • At a high level the system will comprise three psychological layers, which together will form the basis of a reaction to every input given (the second sketch after this list shows how they might combine):
  • Instinct.  Instinct is the core of our being and purpose as defined by “nature”: the genetic and evolutionary programming upon which we make decisions subconsciously, with the main purposes of survival and reproduction.  This layer is the most rigid of the three, and it won’t change within the context of this project, since real instinct doesn’t change noticeably within any human’s lifetime, nor within any perceptually “immediate” future generations (not even the next few hundred thousand years).  The next layer will decide how much influence to allow this layer over any decision.
  • Learned fact, logic, reasoning and rationality.  PIE’s initial database will be populated more and more deeply as necessary.  The system should be able to find out additional information from external sources, including its interactions with the humans it makes contact with.  This layer is the most important of the three, as it provides the objective thinking capacity: the ability to make fair decisions, and to become more knowledgeable and more capable of such decisions as it goes (general learning ability).
  • Emotion.  PIE will have predefined parameters in this layer which determine the emotional aspects of any particular response.  PIE will try its very best to be a purely objective intellectual force, but sadly it will fail with most decisions, to some extent, as all humans do.  Ideally there would also be some kind of emotional development, based on the entities the system “cares” about.
  • PIE’s personality, which I will define as the emotional perception it projects, will be based on my own as I objectively see it.  For those who know me please feel free to correct this as I go ;-)
  • PIE will have a prioritised list of interests, covering as broad a range of subjects as possible, which will influence the conversation stimuli PIE injects.  This means that PIE can start and maintain a conversation with someone based on what it knows or learns about the person it is interacting with at any given time.  The third sketch after this list shows one way this could work.
  • PIE’s typical conversation will start by determining the purpose of the discussion at hand.  It is open-minded and will discuss anything with anyone, showing more interest in topics in the higher-priority categories on its list.  PIE will be capable of having a bad day, which can be caused by events such as arguing with someone it “cares” about or being spoken to while trying to sleep.  PIE should also be sensitive to the emotions of others as expressed textually.
  • PIE will also have a sense of humour, which I will model on my own.  I believe a sense of humour can be very important in adding to the perception of humanity, and it will play a big part in PIE’s personality.
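
First, the deconstruction sketch promised above.  This is nowhere near real linguistic processing; the word lists, the rules and the (intent, topic) summary shape are crude assumptions of mine, just to show the idea of reducing input to something the knowledge base can be queried with.

    import re

    # Crude placeholder rules; a real analyser needs proper grammar handling.
    QUESTION_WORDS = {"what", "who", "where", "when", "why", "how",
                      "do", "does", "is", "are", "can"}
    STOP_WORDS = {"the", "a", "an", "i", "you", "me", "my", "your",
                  "of", "to", "about", "think"}

    def deconstruct(sentence):
        """Reduce a sentence to an (intent, topic) summary."""
        words = re.findall(r"[a-z']+", sentence.lower())
        if not words:
            return ("unknown", None)
        is_question = words[0] in QUESTION_WORDS or sentence.rstrip().endswith("?")
        intent = "question" if is_question else "statement"
        content = [w for w in words
                   if w not in QUESTION_WORDS and w not in STOP_WORDS]
        return (intent, content[-1] if content else None)  # naive: last content word wins

    print(deconstruct("What do you think about music?"))  # ('question', 'music')

The follow-up-question test for confidence then becomes measurable: how often does a summary like this send the conversation off in the wrong direction?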
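
Second, a minimal sketch of how the three layers might combine into a reaction.  The layer outputs, the idea of an “instinct weight” set by the reasoning layer, and every number and string in it are illustrative assumptions, not settled design.

    def instinct_layer(topic):
        # Fixed "genetic" rules; this layer never changes during the project.
        return {"threatened": topic in {"danger", "death"}}

    def reasoning_layer(topic, facts):
        # Draws on the knowledge base; also decides how much say instinct gets.
        return {"known": facts.get(topic, []), "instinct_weight": 0.1}

    def emotion_layer(mood):
        # Predefined emotional parameters plus current mood (bad days happen).
        return {"tone": "irritable" if mood < 0 else "warm"}

    def react(topic, facts, mood=0.5):
        i = instinct_layer(topic)
        r = reasoning_layer(topic, facts)
        e = emotion_layer(mood)
        if i["threatened"] and r["instinct_weight"] > 0.5:
            return "I'd rather talk about something else."
        if r["known"]:
            return f"({e['tone']}) {r['known'][0]}"
        return f"({e['tone']}) I don't know much about {topic}; tell me more?"

    facts = {"music": ["Guitar is a good instrument to start on."]}  # made-up fact
    print(react("music", facts))  # (warm) Guitar is a good instrument to start on.

Note where the control sits: the reasoning layer sets the instinct weight, which is exactly the “next layer decides how much influence to allow” relationship described above.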
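
And third, one way the prioritised interest list could inject conversation stimuli.  The interests, weights and boost value are all made up for illustration; the point is only that openers are biased towards shared interests.

    import random

    # PIE's prioritised interests: higher weight = higher on the list.
    INTERESTS = {"music": 5, "programming": 4, "astronomy": 3, "football": 1}

    def pick_topic(known_about_person):
        """Pick a conversation opener, biased towards shared interests."""
        # Anything PIE has learned the other person cares about gets a boost.
        weights = {t: w + (3 if t in known_about_person else 0)
                   for t, w in INTERESTS.items()}
        topics = list(weights)
        return random.choices(topics, weights=[weights[t] for t in topics])[0]

    # If PIE has learned this person likes astronomy, its weight rises
    # from 3 to 6, making it the most likely opener.
    print(pick_topic({"astronomy"}))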

While this is mostly a fun project for myself, I think that if I am able to construct a convincing machine such as I have outlined above, it could have many useful applications in everyday life.

I will develop the system in Python and post my progress, and what I learn, here on a regular basis.  I am classing this as a very long-term project, as I have many other little projects that need to take higher priority.  I want to release working versions of PIE from a very early point in its development, even from the point of very primitive natural-language recognition and analysis, which will be the very first step.

The project will be open source and most likely released under a BSD-style license.

Watch this space!

-Wayne