Robot Week, Day 1: Can we achieve artificial consciousness? Lessons from the brain
Can machines attain sentience—perceiving, feeling, and being conscious of their surroundings? What are the prospects of pursuing this goal, and what ethical concerns does it raise?
A path toward machine "general intelligence" using Darwinian evolution to create artificial brains for mobile, potentially sentient robots.
Creating artificial brain evolution
In recent studies, scientists have explored using evolution within computer models to understand intelligence and cognition. Instead of designing artificial intelligence entirely from scratch, they simulate evolution, allowing the computer models to "evolve" intelligent behaviors naturally.
This approach aims to help us understand how intelligence could arise in machines and, in the process, gain insight into how real brains work. By evolving virtual “brains,” researchers create systems that can link perception to action, mimicking how biological brains handle information and make decisions.
One method involves creating brain-computer models that simulate the brain’s information-processing tasks. These models can even be applied to robots or other physical systems to replicate and observe behaviors seen in animals, like cricket sound-following or rat-like movements. By allowing algorithms to evolve and adapt, scientists can study which cognitive strategies might naturally emerge to handle tasks like learning and motion detection.
In these experiments, a variety of strategies evolved, from basic reflexes to complex forms of learning. This diversity suggests that nature doesn’t favor a single “best” solution.
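The paper itself contains no code, but a minimal sketch of the general idea can help: an evolutionary loop repeatedly mutates the weights of a tiny neural controller and keeps the variants that perform best on a sensory-motor task, in the spirit of the cricket sound-following example. Everything below (the two-sensor controller, the toy fitness function, the mutation settings) is an illustrative assumption, not the authors' actual model.

```python
import random

# Toy "brain": a single-layer controller mapping two sensor readings
# (left/right sound intensity) to one motor command (turn rate).
def act(weights, sensors):
    return sum(w * s for w, s in zip(weights, sensors))

# Toy fitness: how well the controller steers toward a sound source.
# The agent should turn toward the louder side (positive turn when the
# right sensor is louder, negative when the left is louder).
def fitness(weights, trials=50):
    score = 0.0
    for _ in range(trials):
        left, right = random.random(), random.random()
        turn = act(weights, (left, right))
        desired = right - left            # steer toward the louder side
        score -= (turn - desired) ** 2    # penalize steering error
    return score

def evolve(generations=200, pop_size=20, mutation=0.1):
    # Start from a population of random controllers.
    population = [[random.uniform(-1, 1) for _ in range(2)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank controllers by fitness and keep the best half.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [[w + random.gauss(0, mutation) for w in p]
                    for p in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("Evolved weights:", best)   # tends toward roughly (-1, +1)
```

Even in a toy setup like this, different random runs can settle on slightly different weight patterns that solve the task equally well, which echoes the point above: evolution does not converge on a single "best" solution.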
Timing is everything: how the brain focuses our attention
Timing is essential for how our brains understand and react to the world. Every event unfolds in time, and if we respond too early or too late, it can be costly. Our brains generate internal rhythms to help us anticipate events, even in simple activities like walking. For example, as we step, the brain creates a rhythm to predict when our feet will hit the ground. If this timing is off—say we step on a rock—neurons sense the mismatch, alerting the brain to potential dangers in the path.
The brain uses similar prediction mechanisms across all senses. Rather than processing every bit of incoming sensory data, it focuses attention on unexpected changes. This ability to filter out predictable patterns is essential; constant monitoring of all sensory input would exhaust the brain. Our visual attention works similarly: it’s drawn both by the image’s features and by what the brain expects, letting us focus on what truly matters.
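As a rough illustration of this predict-and-compare idea (my own sketch, not the model from the paper), picture a walker whose internal rhythm predicts when each footfall should land, and who reacts only when the actual timing deviates beyond some tolerance. The step interval, tolerance, and example timings below are arbitrary illustrative values.

```python
# Illustrative sketch of rhythm-based prediction: an internal clock
# predicts the next footfall, and only a mismatch (prediction error)
# above a tolerance triggers an alert, mimicking attention to surprises.

EXPECTED_STEP_INTERVAL = 0.60   # seconds between footfalls (assumed)
TOLERANCE = 0.08                # mismatches smaller than this are ignored

def monitor_steps(footfall_times):
    predicted = footfall_times[0] + EXPECTED_STEP_INTERVAL
    for actual in footfall_times[1:]:
        error = actual - predicted              # timing prediction error
        if abs(error) > TOLERANCE:
            print(f"Mismatch of {error:+.2f}s - attention drawn to the path")
        predicted = actual + EXPECTED_STEP_INTERVAL  # re-anchor the rhythm

# A steady walk with one late step (e.g., stepping on a rock around 2.4s).
monitor_steps([0.00, 0.60, 1.20, 1.80, 2.55, 3.15])
```

Note that the predictable steps produce no output at all; only the one surprising footfall is flagged, which is the filtering described above.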
Evolution of the brain’s (that is, our) attention
In recent studies, scientists have explored how our brains track rhythms and use them to anticipate sounds. Imagine you’re listening to a steady beat, and then suddenly, a sound comes in that’s either shorter or longer than expected. This change is called an “oddball” tone, and detecting it helps us understand how the brain focuses our attention and responds to rhythm.
Two main theories try to explain this brain behavior. The first suggests that the brain starts “counting” when a tone begins and stops when it ends, with attention spread evenly over the entire sound. The other theory proposes that attention is not steady but instead peaks at key moments, especially when a rhythmic tone is expected. According to this theory, if a tone is delayed, people may perceive it as longer since the brain is already primed to notice it at the usual rhythm point.
Researchers extended this concept by creating artificial “brains” in computers. These digital brains, when trained on rhythmic tones, showed similar attention patterns to humans. Their experiments supported the idea that, much like human brains, these artificial brains focused primarily on when a sound ended, not when it started.
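To make the two accounts concrete, here is a toy comparison (purely illustrative numbers, not the study's stimuli or model): tones normally arrive on a regular beat and last 150 ms, and the oddball tone starts 80 ms late. Under the even-attention account the delayed tone is simply registered with its true duration; under the peaked-attention account, attention already concentrated at the expected onset makes the delayed tone feel longer than it is.

```python
# Toy comparison of the two accounts of rhythmic attention.
# Durations and the delay are invented for the example.

DURATION = 150      # ms, actual tone length
DELAY = 80          # ms, how late the oddball tone starts

def perceived_even_attention(delay):
    # Account 1: attention is spread evenly over the sound itself,
    # so perceived duration tracks the actual duration (delay ignored).
    return DURATION

def perceived_peaked_attention(delay):
    # Account 2: attention peaks at the *expected* onset, so a delayed
    # tone is effectively timed from the rhythm point it "should" have
    # started at, making it feel longer.
    return DURATION + delay

print("Actual duration:          ", DURATION, "ms")
print("Even-attention account:   ", perceived_even_attention(DELAY), "ms")
print("Peaked-attention account: ", perceived_peaked_attention(DELAY), "ms")
```

The two accounts therefore make different predictions for the same delayed tone, which is what lets both human listeners and the evolved artificial brains discriminate between them.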
“Bottom-up” and “top-down”
Attention in the brain operates through two main processes: bottom-up and top-down mechanisms. In bottom-up attention, sensory details in the environment, like a sudden noise or bright color, capture our focus without any conscious control.
In contrast, top-down attention is guided by experience, expectations, and internal models of what we think is important in a scene or situation. For example, when looking at a face, we instinctively focus on the eyes, mouth, and nose, where the most telling expressions are found. This focus comes from our brain’s learned model of faces, honed over time to quickly interpret emotions and intentions.
Research using artificial brains shows how these processes play out in the lab. By programming artificial brains to focus on certain parts of a rhythmic sequence, researchers observed how internal models shaped attention. These models helped the artificial brains “expect” certain rhythms, allowing them to prioritize relevant stimuli and disregard unnecessary details.
Understanding this balance between bottom-up and top-down processes offers insights into how the brain efficiently processes complex information.
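One highly simplified way to picture that balance (my own illustration; the paper's evolved networks are far richer) is to give each region of a scene a bottom-up saliency score, reflecting how much it stands out, and a top-down weight from an internal model of where relevant information usually is, then attend to whatever scores highest overall. The regions, numbers, and mixing factor below are invented for the example.

```python
# Illustrative mix of bottom-up saliency (how much a region stands out)
# and top-down weighting (how relevant the internal model expects that
# region to be). All values are assumptions for the sketch.

regions = {
    "eyes":       {"saliency": 0.4, "prior": 0.9},
    "mouth":      {"saliency": 0.3, "prior": 0.7},
    "background": {"saliency": 0.8, "prior": 0.1},  # e.g., a bright flash
}

TOP_DOWN_WEIGHT = 0.6   # how strongly expectations steer attention (assumed)

def attention_score(region):
    bottom_up = regions[region]["saliency"]
    top_down = regions[region]["prior"]
    return (1 - TOP_DOWN_WEIGHT) * bottom_up + TOP_DOWN_WEIGHT * top_down

focus = max(regions, key=attention_score)
print({r: round(attention_score(r), 2) for r in regions})
print("Attention goes to:", focus)
```

With these numbers the learned model wins and attention lands on the eyes despite the flashier background; lower TOP_DOWN_WEIGHT toward zero and the salient background captures attention instead, which is the bottom-up mode described above.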
For centuries, philosophers have debated how we perceive and understand the world. Plato argued that perception alone is not true knowledge; it requires "concepts," or what we now call representations, to guide our understanding.
Modern neuroscience supports this, showing that these representations help guide attention and fill in gaps based on expectations.
Evolutionary computational neuroscience advances this understanding by using artificial brains to study attention, allowing less biased insights into how cognition shapes perception and action.
About the scientific paper:
First author: Michele Farisco, Sweden
Published: Neural Networks, September 2024.
Link to paper: https://www.sciencedirect.com/science/article/pii/S0893608024006385?via%3Dihub