Artificial Intelligence 101 - Lecture Series: Algorithms and AI

Rahul Bhattacharya

[NOTE: This is extracted from one of the classroom lectures given by me as part of the Financial Engineering (FEM) program while discussing the topic of Symbolic Regression and Genetic Algorithms. The AI 101 lecture series will also feature as part of Risk Latte's upcoming signature course in the field of machine learning in finance, the Certificate in Financial Robotics (CFR) course.]

To say that Symbolic Regression and genetic algorithms will make computers as smart as human beings is a gross overstatement. I find it almost ludicrous. It is true that various algorithms, such as symbolic regression within the broad class of genetic algorithms, have taken machine learning to great heights. But to say that they can now impart human-like intelligence to machines is, quite obviously, facetious. Yet I cannot deny that there is enormous excitement in the air. The field of artificial intelligence (AI) and machine learning is growing by leaps and bounds. No longer is this just an academic pursuit. Big corporations and fast-growing start-ups have already embraced the field in their pursuit of efficient and intelligent products. In short, the race is on to make computers intelligent. But there is a lot that still needs to be accomplished.

Let's be modest. Computers, or any generalized silicon-based machine for that matter, are not intelligent. The day a silicon-based machine can talk the way we talk, using a natural language, see things the way we see them and think like us using the power of abstraction, it will become intelligent. All the progress in the field of machine learning over the past decade and a half notwithstanding, the mind of a computer is still "algorithmic". That is the biggest impediment standing in the way of machines becoming intelligent.

One of the biggest misconceptions about AI that many people, including a large number of programmers and AI coders, carry with them is that any artificial intelligence has to be algorithmic. An "algorithmic mind" (whether it is the mind of a robot or of some other form of hardware, such as an isolated desktop or a series of desktops connected together) is perceived by many to be the key to the development of artificial intelligence. Nothing could be further from the truth. True AI entails a complete absence of algorithms. An intelligent mind, be it biological (organic) like that of a human being or artificial (silicon-based) like that of a robot, both by definition and by construction, has to be non-algorithmic.

When we were toddlers beginning to learn and perceive things around us, how did we tell the difference between a cat and a dog? Or how did we know that a certain type of machine was called a motor car? Our mothers simply pointed towards a car and said that it was a "car". She showed us a dog and said that it was a "dog", and in the same way she identified a cat for us. We saw these objects repeatedly and came to figure out that a car is a car and a dog is a dog. We did not memorize any algorithm to identify a car, such as: check if the machine has four wheels and tyres; if the answer is "yes", proceed to the next step; then check if the machine has doors and windows and whether it emits some kind of gas from a pipe at the back; and if the answer is again "yes" to all these questions, then this is a car. We were told by our mothers, our first trainers, that this object was a "car", and we looked at it, saw many similar objects, and started recognizing a car; the same for cats, dogs and every other object that we came to recognize and every other activity that we learned to do. Our learning process is visual, semantic and ultimately heuristic.
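If we were forced to write that rule book down, it might look something like the sketch below; every rule and feature name here is invented purely for illustration, and the point is precisely that no child carries such a checklist:

```python
# A hypothetical rule-book "car detector". Every feature check below is
# invented for illustration; a child needs none of these rules.
def is_car(machine):
    if not machine.get("has_four_wheels_and_tyres"):
        return False                      # rule 1: four wheels and tyres
    if not machine.get("has_doors_and_windows"):
        return False                      # rule 2: doors and windows
    if not machine.get("emits_gas_from_pipe_at_back"):
        return False                      # rule 3: exhaust pipe at the back
    return True                           # all rules passed: call it a "car"

print(is_car({"has_four_wheels_and_tyres": True,
              "has_doors_and_windows": True,
              "emits_gas_from_pipe_at_back": True}))   # True
```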

Algorithms primarily came into use with the rise of computers. Even though mathematical algorithms can be traced back at least to the 9th century, when Muḥammad ibn Musa al-Khwārizmi, the Persian mathematician, wrote down step-by-step rules for solving the quadratic equation, their true value came to light only when we started writing source code for computers in the late 1940s. Computers are infinitely more capable of carrying out complex and large calculations than humans, and yet they cannot be said to possess intelligence the way humans do. This is simply because they are incapable of any abstraction when solving a problem. The ability to understand a problem or recognize patterns in a data set via abstraction is the hallmark of human intelligence. Computers need a rule book, a step-by-step procedure, to solve any problem or recognize patterns; and this rule book is provided to them by the human programmer. In short, an algorithm, a set of rules, needs to be written down for the computer to understand the problem. The computer cannot use abstraction to connect the dots or see the problem in its entirety; the problem has to be broken down systematically into smaller problems, and a set of rules needs to be generated to solve each of these small problems. That is what any software source code does.
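Al-Khwārizmi's rules survive, essentially unchanged, in the kind of rule book we hand to computers today. Below is a minimal sketch of such a step-by-step procedure for ax² + bx + c = 0; the machine executes each step mechanically, with no notion of what a root actually is:

```python
import math

# A step-by-step rule book for solving ax^2 + bx + c = 0 (a != 0).
# The computer follows each rule in order; no abstraction is involved.
def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c              # step 1: compute the discriminant
    if disc < 0:
        return None                       # step 2: no real roots, stop
    root = math.sqrt(disc)                # step 3: take the square root
    return ((-b + root) / (2 * a),        # step 4: apply the formula
            (-b - root) / (2 * a))

print(solve_quadratic(1, -3, 2))          # (2.0, 1.0)
```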

Ever since computers became ubiquitous and a permanent fixture of our computing life, algorithms have assumed great importance. To write the source code in a high-level language (the software for solving a problem) which the computer's internal machinery will understand and then act upon, we first need to write a set of rules which will help deconstruct the problem in the greatest possible detail. Every problem in mathematics, physics, engineering and even biology needs to be viewed as a collection of smaller parts which can be unraveled via a set of rules. Silicon-based intelligence (if the word "intelligence" can at all be applied to present-day computers) does not have the capability of abstraction. Silicon-based machines, such as computers, cannot think, and hence the power of ideation needed to put a problem (any problem, be it numerical or algebraic) into its context, and at the same time develop a generalized perspective for solving it, is missing. An algorithm is needed to put the problem in its context. What the algorithm does is take a particular mathematical model and explain it to the machine. The algorithm also tells the machine how to proceed, in steps, from the alpha to the omega. Of course, the machine will not understand a human language, such as English or French, and hence the whole process of communicating with the machine has to be in a language that the machine understands. That part is taken care of via the software and its interface with the hardware (these languages are high-level languages, assembly languages and, at the most fundamental level of the machine, the machine language).

A simple example might help to illustrate the point. Say we look at a collection of data, like a two-dimensional time series, but have no idea where that data has come from or what it represents. Yet simply by looking at the data (visual examination) we can get some idea about a pattern. If the data set isn't too large, we can easily check whether there is a linear or a non-linear relationship between the X-variables and the Y-variables. Even if we cannot extract an exact mathematical relationship between the two sets of variables, at the very least we can figure out whether there is any correlation or co-movement between them. If the data set is large, then we can work with our calculators, or even use our computers (where we use the computer simply as a calculating machine), and experiment with various statistical models to come up with an exact mathematical relationship between the two sets of variables. Here, even though we may use the computer to perform calculations, it is our mind that is actually solving the problem, because it is inside our mind that the data set is really being analyzed, churned and taken apart. Our mind does not follow any algorithm when handling the data. It simply moves from one model to another, one idea to another, in no particular order, to see what fits the data. Eventually, our mental iterations, coupled with the calculating power of the machines, generate a model that fits the data.
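A rough sketch of that outer loop might look like the following, with the mind proposing candidate models and the machine merely grinding out the least-squares arithmetic; the data and the candidate models here are purely illustrative:

```python
import numpy as np

# Illustrative data: we pretend not to know how it was generated.
x = np.linspace(0, 5, 50)
y = 2.0 * x**2 - 3.0 * x + 1.0 + np.random.normal(0.0, 0.5, 50)

# The machine only evaluates the candidates we hand it; the "idea" of
# trying a linear, then a quadratic, then a cubic fit is entirely ours.
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)           # least-squares fit
    sse = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    print(f"degree {degree}: sum of squared errors = {sse:.2f}")
```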

However, if we knew where the data came from or what it represents, then we could very quickly postulate a mathematical model to fit it. This is because we can then analyze the data within a context, and using a perspective, that we have developed through our lengthy acquisition of knowledge over a lifetime. When we look at census data, or data on the heights of males and females in a society, we quickly fit a Gaussian (normal) distribution to it. Until recently, all financial market data would be fitted to such a Gaussian distribution. When we look at data concerning an epidemic, such as AIDS, we are quick to use exponential relationships. Given the vast knowledge of mathematics, statistics and statistical distributions that we have acquired ever since high school, and given our knowledge and understanding of various theories of finance, physics, chemistry, engineering, biology and the social sciences, we can easily connect the dots when we look at a data set within a certain context. Our power of ideation, the way we can engage in abstraction of data based on our knowledge, is immense. This is what gives us intelligence.
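For instance, once context tells us that the data are human heights, "fitting a model" collapses to estimating the two parameters of a Gaussian; a minimal sketch, with simulated data standing in for a real census:

```python
import numpy as np

# Simulated height data in cm, standing in for real census data.
heights = np.random.normal(170.0, 8.0, 1000)

# Context has already chosen the model (a Gaussian); fitting it is
# just estimating its mean and standard deviation from the sample.
mu, sigma = heights.mean(), heights.std(ddof=1)
print(f"fitted Gaussian: mean = {mu:.1f} cm, std = {sigma:.1f} cm")
```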

Now, if we give the same data set to a computer without telling it what to look for, it will be clueless. No matter how short and simple, or complex and large, the data set, if we don't tell a computer what to look for, what model to fit, it will not know what to do. No matter how hard we try, the computer will not be able to contextualize the data. We need to do that by writing an algorithm. We have to give a model to the computer via the algorithm, and using that model the computer will analyze the data. A computer cannot regress through data the way we do in our minds. To perform regression on a data set, a computer needs an a priori mathematical model given to it by the human programmer. It searches a well-defined and pre-determined mathematical space, given to it via the algorithm, to find an expression (an equation) to fit the data. This is how it does regression. Can the computer be taught to regress on a data set exactly like a human mind? One can make a start by using what is called Symbolic Regression, a technique within the class of genetic algorithms.
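A toy sketch may make this concrete. The program below evolves a population of random expression trees over the operators +, - and *, keeps the trees with the smallest error on the data, and mutates the survivors. The data are secretly generated from y = x² + 2x, but the program is never told this form; it must discover an expression that fits. Every name and parameter here is my own invention, and I use selection and mutation only, whereas real genetic-programming systems (for example the gplearn library) also use crossover:

```python
import math
import operator
import random

OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]
TERMINALS = ["x", 1.0, 2.0, 3.0]

def random_tree(depth=3):
    # Grow a random expression: an operator node or a terminal leaf.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return op[0](evaluate(left, x), evaluate(right, x))

def show(tree):
    if not isinstance(tree, tuple):
        return str(tree)
    return f"({show(tree[1])} {tree[0][1]} {show(tree[2])})"

def fitness(tree, data):
    # Mean squared error, with a guard against numerical blow-ups.
    err = 0.0
    for x, y in data:
        d = evaluate(tree, x) - y
        err += d * d
    return err / len(data) if math.isfinite(err) else float("inf")

def mutate(tree):
    # Replace a randomly chosen subtree with a fresh random one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

# Target data secretly generated by y = x*x + 2x.
data = [(float(x), x * x + 2.0 * x) for x in range(-5, 6)]

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=lambda t: fitness(t, data))
    if fitness(population[0], data) < 1e-9:
        break                                        # good enough: stop early
    survivors = population[:50]                      # selection: keep the fittest
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]   # variation: mutated offspring

population.sort(key=lambda t: fitness(t, data))
best = population[0]
print(show(best), " mse =", fitness(best, data))
```

A short run may or may not recover the exact form, but the essential difference from ordinary regression is visible: the search space is a space of expressions, not the coefficients of one pre-chosen model.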

This brings me back to the starting point of our discussion. Have genetic algorithms, such as symbolic regression, started to make machines intelligent? The answer is still no; and not because I am a consummate believer in the human species and organic intelligence. I simply don't think silicon-based minds can be completely rid of algorithms any time soon.


