
Is AI All About the Brain?

Updated: Feb 22


AI is straying farther and farther from its human-inspired roots.

About eight decades ago, Alan Turing, the father of modern computing, introduced the idea of Artificial Intelligence to the world. To be considered intelligent, he argued, a machine would have to pass the Turing Test: that is, it would have to convince a human of its intelligence. Today, we recognize Turing as one of the pioneers of AI, yet the debate over the definition of man-made intelligence has escaped the confines of a purely philosophical discussion about a machine's behavior. Can AI become truly intelligent without perfectly emulating what makes humans intelligent? Does it matter? Read on to find out.


Déjà Vu


The first neural networks (which would evolve and morph into modern AI) began with a Cornell professor named Frank Rosenblatt, who first put into practice the idea of mathematical systems that could adapt during training. In fact, Rosenblatt's ideas are still relevant: the Perceptron, as he called his invention, is one of the first things taught in any introductory machine learning course.


Here’s how it goes: an Artificial Neural Network (ANN) can be described as a stack of layers. Say our network has three layers, each containing a few nodes. In the most basic perceptrons, each node in one layer connects to all of the nodes in the next layer, feeding its output forward to its successors. Each node is essentially a small mathematical equation: it takes the inputs arriving from the previous layer, combines them in a weighted sum, and applies a threshold or activation function to calculate a result, which it then passes on.

A Biological Neural Network (BNN), on the other hand, is slightly more mysterious. When a neuron in a human brain fires, it sends out an electrical impulse that is converted into a chemical signal. This chemical signal flows from the emitters at the end of one brain cell to the receptors on the front of another, where it is absorbed. If enough of these chemical messengers (called neurotransmitters in neuroscience) reach the receiving cell to push it past its firing threshold, that cell fires in turn, and the process repeats.
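To make the ANN half of this concrete, here is a minimal sketch of a Rosenblatt-style perceptron in Python. The toy task (learning logical OR), the learning rate, and the variable names are all illustrative choices for this post, not anything from a particular library:

```python
import numpy as np

# Toy training set: logical OR, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w = np.zeros(2)  # one weight per input connection
b = 0.0          # bias (the firing threshold, in disguise)
lr = 0.1         # learning rate

for epoch in range(10):
    for xi, target in zip(X, y):
        # Weighted sum of inputs, then a hard threshold: fire (1) or stay silent (0).
        fired = 1 if np.dot(w, xi) + b > 0 else 0
        # Rosenblatt's rule: nudge the weights whenever the prediction is wrong.
        error = target - fired
        w += lr * error * xi
        b += lr * error

print(w, b)  # weights and bias that separate the two classes
```

Notice how closely the loop mirrors the biological story above: inputs accumulate, and the node either fires or it doesn't.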


And this is why the Perceptron is so significant. For something introduced more than six decades ago, it remains freshly intriguing: not only does it embody a basic concept in machine learning (that complicated problems can be approximated by turning them into equations), but it is also a near-facsimile of the processes that happen in a human brain, where nodes (brain cells, in your case) process information and pass it on to the next layer of processing. There are notable differences (a brain cell's response is all-or-nothing, while a modern ANN node's response is a continuous number), but the similarities are still surprising.


Dichotomies Emerge


As AI became more and more advanced, what was originally a single cohort of “AI” researchers split into two distinct camps: the Cognitive AI Scientists, who aim to understand the brain's nuanced functions in order to create powerful AI, and what I will call the Mathematical AI Scientists, who aim to develop pragmatic models relying on the power of data and algorithms. Why does this split matter? Because the form and function of AI architecture (the algorithms and math that, broadly, make up an AI system) have steadily strayed from those of the brain. Instead of layers upon layers of nodes holistically producing a response, modern AI has turned to massive matrix transformations to generate and classify text, images, and videos.
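To see what “matrix transformations” means in practice, here is a hedged sketch: an entire layer of nodes collapses into a single matrix multiply followed by a nonlinearity. The sizes and random values below are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)       # activations from a previous layer of 4 nodes
W = rng.normal(size=(3, 4))  # weights: 3 nodes, each connected to all 4 inputs
b = np.zeros(3)              # one bias per node

# The entire layer, all nodes at once: ReLU(Wx + b).
h = np.maximum(0.0, W @ x + b)
print(h)
```

From this angle, “a network of neurons” is just a pipeline of linear algebra, which is exactly the framing the Mathematical camp runs with.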


Cognitive AI Scientists, in contrast to those pursuing Mathematical AI, have enjoyed relatively little success. Current efforts to map the brain (efforts which, if successful, would certainly transform how we understand it) are hindered by its sheer complexity, and what we currently know is simply not enough to produce an accurate model of human cognitive processes such as logic, memory, learning, and emotion. This is not to say that the brain's concepts have not led to breakthroughs in AI: Mathematical AI researchers have, time and time again, built architectures that emulate brain behavior. Concepts such as Reinforcement Learning and Titans reflect a recurring tendency to shape the input-output behavior of AI to be at least similar to that of humans.


However, emulation is not the same as replication. Although an LLM might produce text like a human, what goes on beneath the screen is very far from how the human brain produces responses. A better way to describe it: instead of mimicking how the brain processes language, modern LLMs use mathematical functions to generate strings of text that resemble what humans have written before. There is no inherent issue with this practice, yet the paradigm of approximation may impose a hard ceiling on AI development.
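As a toy illustration of “mathematical functions generating text,” here is roughly what one decoding step of an LLM looks like: raw scores over a vocabulary are turned into probabilities, and the next token is sampled. The four-word vocabulary and the scores are invented for this example:

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat"]      # invented 4-token vocabulary
logits = np.array([2.0, 0.5, 1.0, -1.0])  # invented raw scores from "the model"

# Softmax turns scores into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)   # sample the next token
print(next_token)
```

Repeat this step over and over and fluent text emerges, without anything resembling a neuron firing.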


Sky’s the Limit: The Cognitive Viewpoint


Some people believe that, in order to bring AI to the same level as humans, a direct digital model of human brain structure must be built. However, the brain is far more complex than anything we've modeled so far, with on the order of a hundred billion neurons, each belonging to one of six major systems that are in turn split into a large number of neural networks. Needless to say, cognitive scientists have not yet worked out how the brain, as a whole, even works.


Even then, the idea of “mapping” is vague, to say the least. As far as scientists know, a considerable portion of the brain (roughly a quarter) is dedicated to receiving, processing, and acting on sensory inputs. These “projection areas” connect with the structures traditionally associated with thinking, called “association areas”, to deliver sensory input and receive motor commands. This leads us to wonder whether the projection areas themselves play a significant role in processing and learning. If we left them out of a brain simulation, would the simulated brain be otherwise complete? If not, how would we feed sensory input into the simulation at all? The Matrix feels more relevant than ever. Regardless of whether the human brain is successfully mapped in the near future, Cognitive science will probably keep helping us improve the power and safety of AI systems through its forays into understanding how we function.


Good Enough: The Mathematical Viewpoint


Other individuals believe that mathematical approximations of human behavior are “good enough” for the development of Artificial General Intelligence, a common goal among AI companies and scientists. Even though efforts to “brute-force” more powerful models have met considerable resistance from the exponentially increasing cost of running models with billions of parameters, more efficient and effective architectures are constantly being developed to mitigate this problem.


One caution about hunting for approximations of intelligent behavior in abstract mathematical structures is the black-box problem: it is often impossible to explain exactly why a model engages in a specific behavior. This has far-reaching consequences. Being unable to explain why ChatGPT fails to understand a silly joke about walking into a bar is harmless; the same cannot be said if a model engages in harmful behavior across systems. Modern LLMs are monitored through their Chains of Thought (CoT), in which a model attempts to explain its own reasoning process, but the increasing complexity of AI means CoT may not remain a reliable source of explanation for much longer.
