And so, I went on reading about the neural network and how it is modelled on the human brain and nervous system, and how the idea behind it mirrors the way little children are not born with fixed knowledge but instead accumulate information throughout their lives. This is also how the neural network works: not only does it learn from interaction with other intelligence, but it also keeps learning from itself, trying out different solutions and outcomes.
These algorithms are created by building up layers of questions that can help reach a conclusion. The layers are sometimes called neural networks because they mimic the way the human brain works. Think about the structure of the brain, with its neurons connected
to other neurons by synapses. A collection of neurons might fire due to an input of data from the senses (like the smell of freshly baked bread). Secondary neurons then fire, provided certain thresholds are passed. (Perhaps a decision to eat the bread.) A secondary neuron might fire, for example, if ten connected neurons are firing due to the input data, but not if fewer are firing. A trigger might also depend on the strengths of the incoming signals from other neurons.
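The threshold idea described above can be sketched in a few lines of code. This is my own illustrative example, not taken from the text: a unit "fires" only when the combined signal from its active inputs reaches a threshold, and the function name, weights, and numbers are all assumptions chosen to match the ten-neuron example.

```python
# Illustrative sketch of a threshold unit: it fires only when the
# weighted sum of its active inputs reaches the threshold.
# All names and numbers here are hypothetical, not from the text.

def fires(inputs, weights, threshold):
    """Return True if the weighted input signal reaches the threshold."""
    signal = sum(i * w for i, w in zip(inputs, weights))
    return signal >= threshold

# Ten equally weighted input neurons; the unit needs all ten firing.
weights = [1.0] * 10
print(fires([1] * 10, weights, 10.0))      # all ten firing -> True
print(fires([1] * 9 + [0], weights, 10.0)) # only nine firing -> False
```

Making the weights unequal captures the last point in the passage: a unit can fire because a few strongly weighted inputs are active, even if most others are silent.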
In a series of lectures on human behaviour, many similarities were recognized, but also differences, along with the question of whether chemical changes in our bodies affect our personality. The lectures address this question and answer it in the affirmative.
But what about the neural network? Is it a brain without a body, or at least one without hormones and chemistry?

Investigating these questions further, I found the lectures and books of James L. (Jay) McClelland, a Professor in the Psychology Department and Director of the Center for Mind, Brain, Computation and Technology at Stanford University. His guiding question is: what makes people smarter than computers? And although he recognizes the abilities of the neural network, he sees the difference as lying in the way information is processed.
McClelland and his co-authors describe a new theory of cognition called connectionism.
The connectionist theory assumes the mind is composed of a great number of elementary units connected in a neural network. Mental processes are interactions between these units, which excite and inhibit each other in parallel rather than in sequence.
In this context, knowledge can no longer be thought of as stored in localized structures; instead, it consists of the connections between pairs of units that are distributed throughout the network.
For McClelland, the problem with the neural network lies in how it makes correlations and connections, and in whether it can explain why it chose to do what it did: “I do not mean to say that we as humans have perfect access to the basis of our own perceptions, feelings, and choices of actions. Those interested in human thought have been aware since the late 19th century that introspection is often uninformative or completely misleading. Yet we can and do share information with each other that we can use to immediately alter our behaviour – something that is not possible for machine systems that simply learn to get better through massive experience. Where our ability to engage in meta-cognition comes from is an open scientific question. One could hold that it is something that evolution endowed us with, or one could hold that evolution and culture gave us language, and with language, we developed the ability to understand and give explanations, and once these abilities developed, we became able to use language to make observations for ourselves.
The recent AI language system GPT-3 may have some abilities along these lines. This system was trained on a vast corpus of language including quite a lot of transcribed human discourse. Since such discourse contains examples of explanation, it is possible that the system would, if assessed, be able to give some form of self-explanation.”7


The system he mentions here (GPT-3) is the one used for AI1, though I don’t know whether it is also used for AI2 and/or AI3.