Open Sourced # 3: You vs. AI

September 18, 2020

All the hype surrounding artificial intelligence (AI) got you bummed out about your human brain? Missing the days when you didn’t have to worry about getting hustled by a poker-playing AI? Last week’s Neuralink demo has you feeling like an army of cyborg pigs is going to take over the world?

Allow me to offer some solace: the human brain is still the pinnacle of intelligence. While AI may be excelling in some domains, the human brain continues to outperform in others. No, we can’t translate prose into fifty different languages or store an Internet’s-worth of data. But we can empathize, invent, offer opinions, hold values, and write blog posts. Identifying how the human brain succeeds at these tasks is a likely key to further empowering AI. Even if AI fully surpasses human intelligence someday, we can be content knowing that it was born from the processes used by the intelligence engines sitting in our own heads.

Compared to AI, the human brain is great at abstract reasoning. Presented with little information, we can quickly solve new, unfamiliar problems. All those coordination skills you built up playing Mario Kart can be abstracted from that problem space and applied to make you a better real-world driver, even if the only car you’ve ever seen is Toad’s Bullet Bike (backed by science). By transferring knowledge from one domain to another, we streamline the learning process and adapt to our environment. The human brain, however, is not so great at processing information quickly. We have to listen to a song four or five times before it’s memorized, but for AI, memorization is as easy as hitting “Save”. Un-memorizing presents even more of a challenge for humans because, unfortunately, our memories don’t come with “Delete” buttons. Typically, AI will outpace the human brain at any task that involves mathematical calculations or data processing.

Human intelligence stems from an elegant series of interactions. The human brain is a modular bundle of 86 billion neurons, which are specialized cells that transmit and receive electrical signals. They work harmoniously in a big game of telephone, transforming data as it travels down interweaving neuron chains. An individual neuron is composed of three main parts: dendrites, a cell body, and an axon. Signals are received through the dendrites, travel to the cell body, and continue down the axon until they reach the synapse, the point of communication between two neurons. Signals will only be sent, however, if the neuron reaches a certain voltage threshold, a reaction known as an action potential. At the synapse, the firing of an action potential in one neuron causes the transmission of chemical messengers called neurotransmitters to another. The more neurotransmitters, the stronger the connection. In this way, neurons can talk to thousands of neighbors and stimulate or inhibit their activity, forming circuits that process information and carry out a response.
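The all-or-nothing threshold behavior described above can be sketched in a few lines of code. This toy "integrate-and-fire" neuron is purely illustrative — the class name, numbers, and reset rule are simplifying assumptions, not a biologically calibrated model — but it captures the core idea: incoming signals accumulate until a voltage threshold is crossed, at which point the neuron fires and resets.

```python
# Toy integrate-and-fire neuron: incoming signals raise the voltage;
# once it crosses the threshold, the neuron fires an action potential
# and resets. All numbers are illustrative only.

class ToyNeuron:
    def __init__(self, threshold=1.0, resting=0.0):
        self.threshold = threshold
        self.resting = resting
        self.voltage = resting

    def receive(self, signal):
        """Accumulate an incoming signal; fire if the threshold is reached."""
        self.voltage += signal
        if self.voltage >= self.threshold:
            self.voltage = self.resting  # reset after the spike
            return True                  # action potential fired
        return False                     # below threshold: no spike

neuron = ToyNeuron(threshold=1.0)
spikes = [neuron.receive(s) for s in [0.3, 0.3, 0.5, 0.2]]
print(spikes)  # the neuron fires only once the accumulated voltage crosses 1.0
```

Note the all-or-nothing character: a signal of 0.3 and a signal of 0.9 both produce exactly one spike or none — there is no "partial" action potential.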

Neuron Structure | Courtesy of Smartsheet

The human brain has a distant, artificial cousin: neural networks. This is the type of machine learning we’re seeing used in object recognition systems on self-driving cars and game-playing AIs. Artificial neural networks (ANN) were first proposed in 1943 by University of Chicago researchers Warren McCulloch and Walter Pitts in an attempt to show the similarities between the human brain and the digital computer. But since then, ANNs have mostly exposed just how different we are from our cyber counterparts.

An ANN is a simplified model of the human brain’s structure. It consists of thousands or millions of densely connected nodes that communicate information like neurons. Often, nodes are organized into a number of layers: an initial input layer, one or more hidden layers, and a final output layer. As problems grow more complex, more hidden layers must be introduced to accommodate extra computations, and the network grows “deeper” (this is where the term “deep learning” comes from). Data travels “feed-forward” through an ANN, from a node’s incoming connections to its outgoing connections. Each incoming connection carries a “weight,” which gets multiplied by that connection’s input. The greater the weight, the stronger the connection between nodes. The weight-input products are summed up and passed to the node’s outgoing connections only if the sum exceeds a certain threshold. When the threshold is exceeded, the node “fires,” and data moves to the next layer. As such, ANN inputs undergo rounds of multiplication and addition until they reach the output layer as a concise value.
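The feed-forward pass described above can be sketched in a dozen lines. This is a minimal illustration, not a real deep-learning implementation: all function names, weights, and the simple step-style threshold are my own assumptions for the example (production networks use smooth activation functions and learned weights).

```python
# Minimal feed-forward sketch: each node multiplies its inputs by
# per-connection weights, sums the products, and passes the value on
# only if the sum exceeds a threshold -- i.e., the node "fires".

def node_output(inputs, weights, threshold=0.5):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0  # fire or stay silent

def layer_output(inputs, layer_weights, threshold=0.5):
    # layer_weights holds one weight list per node in the layer
    return [node_output(inputs, w, threshold) for w in layer_weights]

# A tiny network: 3 inputs -> 2 hidden nodes -> 1 output node
inputs = [1.0, 0.5, -0.5]
hidden_weights = [[0.4, 0.6, 0.1], [0.9, -0.2, 0.3]]
output_weights = [[0.5, 0.5]]

hidden = layer_output(inputs, hidden_weights)   # hidden layer fires
output = layer_output(hidden, output_weights)   # final, concise value
print(output)
```

Making the network “deeper” is just a matter of chaining more `layer_output` calls between the input and the output, exactly as the paragraph describes.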

Artificial Neural Network Architecture | Courtesy of Smartsheet

As previously mentioned, ANNs are superb data-crunchers. They work well in repetitive tasks that have a clearly defined problem space and can be represented by data. This is why so many ANN ventures are happening in the gaming industry: the rules of a game never change, and a game can be broken down mathematically. That said, ANNs fall off the rails when introduced to new subject areas. You can’t take an object-recognition ANN and expect it to know how to play chess — it can’t handle the data inputs of game pieces, let alone understand the concept of winning. But even an eight-year-old kid, with a couple hours and a lot of patience, can learn to play chess with a reasonable degree of skill.

On paper, the human brain and ANNs sound more like fraternal twins than distant cousins. Their structural similarities are clear: neurons and nodes handle data processing; neurotransmitters and weights manipulate data values; thresholds dictate how data moves through the system. But when we look at what types of problems the human brain and ANNs can solve, we see that they’re worlds apart. Why?

Imagine you’re handed a computer’s central processing unit (CPU) — the electronic circuitry that executes a computer’s commands — and asked what program that CPU is running. Can you do it? Of course not! Your guess is as good as that of the very guy who invented the CPU. To know what a computer is doing, you can’t just look at its hardware; you need to know what set of instructions, or software, is running through its wires.

The same goes for the human brain. Thus far, neuroscientists have classified the brain’s architecture as a web of adaptable, interconnected neurons. This model is now being applied to building ANNs with the hope of achieving similar forms of intelligence. But that hope has not been fulfilled because the human brain and ANNs are running entirely different software. In fact, neuroscience has almost no clue what computations are happening in the brain because it’s so difficult to track an individual electrochemical signal in a haystack of 86 billion neurons and one thousand trillion synapses (we don’t even know how the 302-neuron brain of a worm works). So, even though the human brain and ANNs have similar structures, their operating systems are divergent and uncertain.

To build ANNs with human capacity for abstract thinking, we first need to answer some important questions about the brain’s software, such as:

  • How do neurons store information?
  • Can consciousness be represented computationally? Is consciousness necessary to achieve human-level intelligence?
  • To what extent is intelligence innate versus learned?
  • How does a single neuron compute? How do circuits of neurons compute? How does the human brain compute as a whole?

The ultimate goal of answering these questions is a Universal Theory of Intelligence (UTI), a set of principles that holds true in brain tissue and in metal circuit boards. But we have a long way to go until then. In the meantime, we should look to develop human-in-the-loop AI systems that call on humans for abstract thinking and AI for data processing. This collaboration between brains and bionics is the basis of Elon Musk’s vision for Neuralink, and has also helped produce otherwise impossible music, optimize engineering designs, and diagnose rare diseases. It will be interesting to see what new capabilities arise as AI takes on more and more of the human brain.

Ben Lehrburger

Ben is A.I. For Anyone’s Open Sourced column writer and analyzes how AI affects or is affected by current events. Ben currently studies at Dartmouth College, where he is majoring in Cognitive Science with a concentration in AI.
