What does reasoning like humans mean for AI?

May 19, 2016

Reasoning, as a human ability, is innate: it involves assessing everyday situations, probabilities and possible outcomes in order to make decisions or determine the next steps to take. While humans inherit and develop this mechanism naturally, programmers working on artificial intelligence software have to break it down and reconstruct similar reasoning abilities in a form suited to machine processes.

In March 2016, The New York Times featured an article on what progress in machine reasoning looks like. Richard Socher of MetaMind registered a breakthrough when his AI program, using pattern recognition software, identified the fact that a tennis player was wearing a cap. The identification was slow compared with the same operation in a person, but it mimicked the human thought process, which is a small victory in itself.

Considered one of the most enthralling challenges in AI software, developing machine programs with human-like problem-solving capacities is a niche race, with contestants lining up their artificial neural nets. There have been isolated breakthroughs, yet no general-purpose software system can equal humans in understanding and reasoning.

Human reasoning

Defined as the capacity to consciously make sense of surrounding elements (both material and immaterial) and to build on them through logic, fact-finding and problem solving, human reasoning has generated a vast body of dedicated research and various classifications.

Public and private reason, cognitive-instrumental and moral-practical reason, deductive, inductive or analogical reasoning, and so on: each has philosophers, sociologists and other human-sciences specialists dedicated to its study; different schools hold different opinions, and there is still room for new insights.

The way a person approaches reality via his or her own filter of thought is called a mental model – the internal representation of external reality that allows a subject to translate objective factors into subjective, operable factors.

Mental models are one way of explaining reasoning. They are not the only one, but the idea of a preset or previously formed mental model translates naturally into AI reference models. A software program can receive a set of reference data to use in comparisons, qualifying new information as regular/valid or, on the contrary, as falling outside the preset range. Put differently, the programmer can "teach" learning software to identify various elements through patterns, familiar features, repetition and recognition.
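To make the reference-model idea concrete, here is a minimal sketch in Python (an illustrative assumption, not any particular vendor's implementation): the program summarizes known-good data as a preset range and then qualifies new readings against it.

```python
import statistics

def build_reference_model(reference_data, tolerance=3.0):
    """Summarize known-good data as a mean plus an allowed deviation."""
    return {
        "mean": statistics.mean(reference_data),
        "stdev": statistics.stdev(reference_data),
        "tolerance": tolerance,
    }

def qualify(model, value):
    """Label a new value as 'valid' if it falls within the preset range."""
    deviation = abs(value - model["mean"])
    if deviation <= model["tolerance"] * model["stdev"]:
        return "valid"
    return "out of range"

# Hypothetical example: temperature readings previously observed as "regular"
reference = [20.1, 19.8, 20.4, 20.0, 19.9, 20.2]
model = build_reference_model(reference)

print(qualify(model, 20.3))   # valid
print(qualify(model, 27.5))   # out of range
```

Real systems replace the simple mean-and-deviation model with learned statistical or neural models, but the overall pattern of comparing new inputs against previously formed references is the same.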

Another human function that supports cognitive processes is memory. No dilemma here: an artificial intelligence system can draw on a vast memory that may rival human memory in capacity. The trick lies in the way human memory is organized, stimulated and triggered, none of which is completely understood or mapped out. Researchers have only recently gained access to more relevant data on how human brains encode, superimpose and retrieve memories, and the process is neither linear nor simple.

One of the reality-connected input layers in reasoning is perception: the human capacity to be aware of stimuli and to react to sight, sound and other stimulating factors. In AI this element corresponds to "inputs" and currently seems the least of the issues, since video and audio materials translate into software data without major difficulty.

Although the few elements mentioned above come nowhere near covering the complexity of human reasoning, by considering perception, mental models and memory we can form a rough idea of a few cornerstones of the human thinking process.

AI reasoning

Since artificial intelligence does not possess autonomy of thought and is (still) a closed, isolated system into which no random data penetrates, the term "reasoning" is somewhat improper. Even so, by AI reasoning we should understand the capacity of a machine system to combine its inputs in order to produce higher-quality outputs.

Using all the uploaded data, or data it extracts itself (when the machine has associated sensors that monitor the physical environment and deliver their own readings), the system can apply algorithms, pattern comparison and other software capabilities to produce analysis results that mimic human reasoning.

Monitoring sensors can stand in for human perception (without all its nuances, of course). AI reasoning would then be the process of "representing information about the world in a form that a computer system can utilize to solve complex tasks". All the necessary data is stored in digital memory (simple as it may be compared to human memory).
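As a rough illustration of that definition (a hypothetical toy example, not a description of any production system), information about the world can be stored as facts and rules, and a small inference loop can derive new conclusions from them:

```python
# A tiny forward-chaining sketch: facts are strings, and each rule maps
# a set of premises to a conclusion. All names here are invented for
# illustration only.
facts = {"sensor: motion detected", "time: night"}

rules = [
    ({"sensor: motion detected", "time: night"}, "alert: possible intruder"),
    ({"alert: possible intruder"}, "action: notify security"),
]

# Keep applying rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['action: notify security', 'alert: possible intruder',
#  'sensor: motion detected', 'time: night']
```

Modern systems rely far more on learned statistical models than on hand-written rules, but the principle of encoding knowledge so that a program can chain it toward a decision is the same.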

Going into a conceptual discussion on whether AI reasoning can replicate the intricate models of human reasoning is futile; it is more useful, and less time-consuming, to follow the press releases coming from the companies currently developing AI systems (Google included) and stay informed about their progress.

However, there are a few essential differences between human intelligence and AI: abilities that engineers could, at most, aspire to reproduce in an artificial environment:

  • The relative autonomy of a human being (we consume material resources to procure energy and in turn spend that energy on mental and physical processes);
  • The ability to grow and develop by itself;
  • The ability to share, socialize and form communities;
  • The ability to understand ethics and values and work with them in an adaptive manner;
  • The wonderful unpredictability that sometimes makes perfect sense once the angle of view shifts;
  • The entire conundrum of instinct, feelings and sentiment.

Ultimately, would you accept a case where AI determines your fate?

As faulty as our world may seem at times, whenever a decision disadvantages a person, the first thing that comes to mind is that some other person made that decision. It is frustrating, and it takes time to process the fact that someone made an error. "Errare humanum est" (to err is human), says the Latin aphorism. Ultimately we are human too, and when time and forgiveness work together, a sense of acceptance may find its way into the mind.

What happens when a faulty decision affecting our lives comes from an implacable, mechanical AI authority? No error there: an intelligent program simply processed the data and decided against our interest. Perhaps in the bigger picture our fate mattered less; perhaps the mathematical odds were against us.

Is this a challenge we are prepared for? Would humanity accept such a justification for crucial, perhaps life-changing decisions? It might be a good moment to remember how, in older science-fiction movies, the human race stood out against other (imaginary) entities through its obstinate insistence that each individual matters; it is doubtful whether AI software would be able to integrate that element into its decision-making processes.

Of course, this kind of extrapolation may seem premature, but what if we face such situations sooner rather than later? We may as well contemplate the prospect thoroughly. Endowing artificial intelligence with human reasoning is a delicate matter, since we do not fully know what defines us as humans, or whether our imperfections, hesitations and errors are actually means of improvement rather than faulty bits.
