Teaching Computers to Think Like Humans with Forward Chaining

Madé Lapuerta
5 min read · Sep 23, 2019


Breaking down the basics of artificial intelligence, and how engineers have learned to control computers. In Python 3.

Ah, artificial intelligence. Currently the biggest buzzword of them all, and often the subject of world-domination speculation. Few people, however, understand the intricacies of how AI is built, the algorithms behind machine learning, and the bias that comes along with it.

This semester, I’ve been shuttling over to MIT to take their introductory Artificial Intelligence course. I recently wrote about working on machine learning projects this past summer, so I cross-registered at MIT to learn more about the code that drives computers to potentially take over the world.

Forward chaining, the topic of our first lectures, is an algorithmic process with which engineers can teach machines and computers to think like humans. Specifically, forward chaining is defined as a way a computer can “infer new information from known facts”.

Let’s say I start walking around campus with one purple sock and one blue sock. Other humans might infer, in response, that I am unfashionable. How, then, can a computer arrive at this same conclusion?

Forward chaining begins with two key components: a set of assertions, and a set of rules.

Assertions

Assertions are facts that we know to be true; for this reason, they are also referred to as “truths”. In our above example, our assertions would be:

Made is wearing one blue sock
Made is wearing one purple sock
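
If you want to follow along in Python 3, one simple way to hold these truths is a plain set of strings. This is my own sketch, not the format of any particular library:

# Our starting truths, kept as a plain set of strings.
assertions = {
    "Made is wearing one blue sock",
    "Made is wearing one purple sock",
}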

Rules

Rules are a way to determine truths which are not explicitly given to the system. Rules often follow an “IF, THEN” logical flow.

IF (AND '(?x) is wearing one blue sock',
        '(?x) is wearing one purple sock')
THEN '(?x) is unfashionable'

The above rule operates on a single variable x, and uses a logical flow based on the given assertions to determine whether or not x is fashionable. To put the code into plain English, it checks IF x is wearing one blue sock AND one purple sock, and infers that THEN x must be unfashionable. In other words, we have just taught a computer to understand that mismatched socks might not be so trendy.

Rules are important because they are how machines can conclude new assertions. In this case, we now add Made is unfashionable to our list of truths.
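
Here is how that rule might look in Python 3. This is my own sketch of a possible encoding (not the course’s actual library): each rule has a list of IF patterns and one THEN pattern, and "(?x)" marks the variable to bind.

import re

# A hypothetical rule structure: a list of IF patterns and one THEN pattern.
RULE_UNFASHIONABLE = {
    "if": ["(?x) is wearing one blue sock",
           "(?x) is wearing one purple sock"],
    "then": "(?x) is unfashionable",
}

def candidate_bindings(pattern, assertions):
    """Return every value of x that makes `pattern` match some assertion."""
    regex = re.compile("^" + re.escape(pattern).replace(r"\(\?x\)", "(.+)") + "$")
    return {m.group(1) for a in assertions for m in [regex.match(a)] if m}

def matches(rule, assertions):
    """Values of x that satisfy every IF pattern (an AND rule)."""
    per_pattern = [candidate_bindings(p, assertions) for p in rule["if"]]
    return set.intersection(*per_pattern) if per_pattern else set()

assertions = {"Made is wearing one blue sock",
              "Made is wearing one purple sock"}
print(matches(RULE_UNFASHIONABLE, assertions))   # {'Made'}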

Firing

Firing is the term used to denote when a rule in our system adds a new truth to our list of assertions. Let’s say, using our previously given assertions, that one of the rules in our system is as follows:

IF ("(?x) is wearing one purple sock", 
THEN "(?x) is wearing one blue sock"))

Our IF statement will bind Made to variable x, since Made is wearing one purple sock does exist in our list of assertions. However, the consequent of the rule, or its THEN statement, is also in our list of assertions. So, our rule does not fire.

Since rules are only significant in the sense that they allow computers to make new inferences from given information, a rule only fires when it is adding a new assertion to our list of truths.
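
As a quick sketch of that check in Python 3 (the try_fire name and shape are mine, assuming x has already been bound by the IF side):

def try_fire(then_pattern, x, assertions):
    """Add the substituted THEN only if it isn't already a known truth."""
    new_fact = then_pattern.replace("(?x)", x)
    if new_fact in assertions:
        return False                 # nothing new to learn: the rule does not fire
    assertions.add(new_fact)
    return True

assertions = {"Made is wearing one purple sock",
              "Made is wearing one blue sock"}
# The consequent is already in our assertions, so the rule above does not fire:
print(try_fire("(?x) is wearing one blue sock", "Made", assertions))   # False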

System Flow

As demonstrated above, forward chaining in rule-based systems is a way to develop new truths based on previous assertions and logical rules. So, how exactly do we navigate these systems, and in what order?

In forward chaining, we begin with our list of rules, and for each rule, we look at our set of assertions. In terms of loops, we iterate through our system as follows:

for each rule:
    for each assertion:
        (see if there is a match)

Let’s say we have the following two rules:

IF (AND '(?x) is wearing one blue sock',
        '(?x) is wearing one purple sock')
THEN '(?x) is unfashionable'

IF (OR '(?x) is unfashionable',
       '(?x) doesn't like fashion')
THEN '(?x) is not invited to New York Fashion Week'

And, again, say we begin with the following two assertions:

Made is wearing one blue sock
Made is wearing one purple sock

We begin with Rule 1, which attempts to bind a variable x to two conditions of an IF statement in order to determine if x is unfashionable. We first search through our list of assertions to see if there exists an x which matches both conditions. Because we can bind Made to x, we can fire the new assertion that Made is unfashionable.

Only one new assertion can be fired per iteration through the rules. This means that once Rule 1 has fired a new assertion, we add this new assertion to our list, and start up again at Rule 1 to see if there exists another match. For example, if we had another variable wearing one blue sock and one purple sock, we would bind it to Rule 1 before continuing on to see if Made now matched Rule 2.

In our example, we have no other possible matches for Rule 1, so we can move on to Rule 2.

Under our new list of assertions…

Made is wearing one blue sock
Made is wearing one purple sock
Made is unfashionable

…we do have a match for Rule 2, since only one condition must be true in an OR statement. By binding Made to variable x , we can now fire the new assertion that Made is not invited to New York Fashion Week. Sad.
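
To tie the whole flow together, here is a runnable end-to-end sketch of this example in Python 3. It is my own simplified encoding of the two rules above (not MIT’s course library), but it follows the same flow: scan the rules in order, fire only new assertions, and restart at Rule 1 after each firing.

import re

# Hypothetical encoding of our two rules: "kind" is "and" or "or",
# and "(?x)" marks the variable to be bound.
RULES = [
    {"kind": "and",
     "if": ["(?x) is wearing one blue sock",
            "(?x) is wearing one purple sock"],
     "then": "(?x) is unfashionable"},
    {"kind": "or",
     "if": ["(?x) is unfashionable",
            "(?x) doesn't like fashion"],
     "then": "(?x) is not invited to New York Fashion Week"},
]

def bindings(pattern, assertions):
    """All values of x for which `pattern` matches some assertion."""
    regex = re.compile("^" + re.escape(pattern).replace(r"\(\?x\)", "(.+)") + "$")
    return {m.group(1) for a in assertions for m in [regex.match(a)] if m}

def forward_chain(rules, facts):
    """Fire rules until nothing new is added, restarting at Rule 1 after each firing."""
    assertions = set(facts)
    fired = True
    while fired:
        fired = False
        for rule in rules:                        # always scan the rules in order
            per_pattern = [bindings(p, assertions) for p in rule["if"]]
            combine = set.union if rule["kind"] == "or" else set.intersection
            for x in sorted(combine(*per_pattern)):
                new_fact = rule["then"].replace("(?x)", x)
                if new_fact not in assertions:    # a rule only fires on a NEW truth
                    assertions.add(new_fact)
                    fired = True
                    break
            if fired:
                break                             # go back to Rule 1
    return assertions

facts = {"Made is wearing one blue sock", "Made is wearing one purple sock"}
for fact in sorted(forward_chain(RULES, facts)):
    print(fact)
# Two new truths appear: "Made is unfashionable" and
# "Made is not invited to New York Fashion Week"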

Learning Human Intuition

After iterating through our system, we end with two more assertions than we started with. This means that, based on the rules and assertions it was given, our machine has made two human-like inferences. The process is referred to as chaining because each newly fired assertion can trigger further rules, linking small inferences together into a chain.

Forward chaining, as you can observe in this post, is heavily biased in favor of whoever engineers the system. For example, what if I believed instead that wearing mismatched socks was, in fact, fashionable? Then this model would be essentially useless to me, since it was engineered with the opposite inference in mind. Of course, there are systems built around more objective conclusions than style, but this example highlights how important it is to build your models from as objective a standpoint as possible. When that isn't possible (sock trends aren't exactly an objective matter), recognizing the bias in your models and sharing it with clients, users, and engineers is essential to ensuring that computers mimic human inferences, and serve what you're trying to achieve, as fairly and accurately as possible.

If you’re interested in learning artificial intelligence, know that its fundamentals are not too different from conditional statements and loops you might see in other introductory coding classes. Additionally, if you’re someone who enjoys wearing mismatched socks, I deeply apologize for any insult. As I mentioned, artificial intelligence is incredibly biased, so I wouldn’t take it too personally…

If you’re working on any projects using forward chaining in rule-based systems, please reach out — I would love to hear from you!
