Robots achieve consciousness through simulations of evolution

Can machines attain consciousness, perceiving, feeling, and becoming aware of their surroundings? One path toward machine intelligence may be to use Darwinian evolution to create artificial brains for mobile, potentially sentient robots.

Creating artificial evolution
In recent studies, scientists have explored using evolution within computer models to understand intelligence and cognition.

Instead of designing artificial intelligence entirely from scratch, they simulate evolution, allowing the computer models to "evolve" intelligent behaviors naturally.

By evolving virtual “brains,” researchers create systems that link perception to action, much as biological brains process information and make decisions. This approach can reveal how intelligence might arise in machines, and the evolved solutions in turn offer insights into how real brains work.

One method involves building brain-computer models that simulate how the brain processes information. Mounted on robots, these models can replicate and allow observation of behaviors seen in animals, such as a cricket tracking a sound or a rat navigating its surroundings.

By allowing algorithms to evolve and adapt, scientists can study which cognitive strategies might naturally emerge to handle tasks like learning and motion detection.

In these experiments, a variety of strategies evolved in the artificial brains, from basic reflexes to complex forms of learning. This diversity suggests that nature doesn’t favor a single “best” solution.
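The evolutionary setup described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' actual models: the toy task (steering toward the louder of two sound sensors, loosely inspired by the cricket example), the one-neuron "brain," and all parameters are assumptions made for demonstration.

```python
import random

def controller(weights, sensors):
    # Tiny "brain": one linear neuron mapping two sensor inputs to a motor command.
    return sum(w * s for w, s in zip(weights, sensors))

def fitness(weights):
    # Toy task: turn toward the louder side. The ideal output is the
    # difference between left and right sensor intensity.
    trials = [(0.9, 0.1), (0.2, 0.8), (0.5, 0.5), (0.7, 0.3)]
    error = 0.0
    for left, right in trials:
        target = left - right
        error += (controller(weights, (left, right)) - target) ** 2
    return -error  # higher fitness = lower error

def evolve(pop_size=50, generations=100, mutation=0.1, seed=0):
    rng = random.Random(seed)
    # Start from random "genomes" (the neuron's weights).
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # truncation selection
        children = [[w + rng.gauss(0, mutation) for w in p]
                    for p in survivors]             # mutated offspring
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

No behavior is designed by hand here: selection and mutation alone drive the weights toward a controller that follows the sound, which is the core idea behind evolving robot "brains" rather than programming them.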

Timing is everything - how the brain pays attention
Timing is essential for how our brains understand and react to the world. Every event unfolds in time, and if we respond too early or too late, it can be costly.

Our brains generate internal rhythms to help us anticipate events, even in simple activities like walking. For example, as we step, the brain creates a rhythm to predict when our feet will hit the ground. If this timing is off—say we step on a rock—neurons sense the mismatch, alerting the brain to potential dangers in the path.

The brain uses similar prediction mechanisms across all our senses. Rather than processing every bit of incoming sensory data, it focuses its attention on unexpected changes.

This ability to filter out predictable patterns is essential; constant monitoring of all sensory input would exhaust the brain. Our visual attention works similarly: it’s drawn both by the image’s features and by what the brain expects, letting us focus on what truly matters.
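The filtering principle above can be illustrated with a toy "predictive attention" loop: the system keeps a running prediction of its sensory input and only flags samples whose prediction error exceeds a threshold. The signal, learning rate, and threshold are illustrative assumptions, not values from the papers.

```python
def surprising_samples(signal, learning_rate=0.5, threshold=1.0):
    # Maintain an internal prediction of the next sample; attend only
    # when reality diverges noticeably from the prediction.
    prediction = signal[0]
    flagged = []
    for i, sample in enumerate(signal):
        error = sample - prediction
        if abs(error) > threshold:            # unexpected change -> attend
            flagged.append(i)
        prediction += learning_rate * error   # update the internal model
    return flagged

# A steady rhythm with one anomaly (the "rock" underfoot at index 3):
signal = [1.0, 1.0, 1.0, 5.0, 1.0, 1.0]
print(surprising_samples(signal))  # → [3, 4]
```

Note that the return to baseline (index 4) is also flagged, because after the anomaly the internal model briefly expects the wrong value; predictable stretches of the signal consume no attention at all.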

“Bottom-up” and “top-down”
Attention in the brain operates through two main processes: bottom-up and top-down mechanisms. In bottom-up attention, sensory details in the environment, like a sudden noise or bright color, capture our focus without any conscious control.

In contrast, top-down attention is guided by experience, expectations, and internal models of what we think is important in a situation. For example, when looking at a face, we instinctively focus on the eyes, mouth, and nose, where the most telling expressions are found.

This focus comes from our brain’s learned model of faces, developed over time to quickly interpret emotions and intentions.

Research using artificial brains shows how these processes play out in the lab. When researchers trained artificial brains to focus on certain stimuli, they could watch attention take shape gradually: the artificial brains developed expectations that let them prioritize relevant stimuli and disregard unnecessary details.

Understanding this balance between bottom-up and top-down processes offers insights into how the brain efficiently processes complex information.
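One simple way to picture this balance is to score each region of a scene by the product of its bottom-up salience (how conspicuous it is) and a top-down prior (how relevant the brain's learned model says it is). The regions and all numbers below are hypothetical, chosen to echo the face example above.

```python
# Bottom-up salience: how visually conspicuous each region is.
salience = {"eyes": 0.4, "mouth": 0.3, "cheek": 0.2, "background": 0.9}

# Top-down prior: how informative the learned face model considers each region.
prior = {"eyes": 0.9, "mouth": 0.8, "cheek": 0.3, "background": 0.1}

# Combined attention score: salient AND expected-to-matter wins the focus.
attention = {region: salience[region] * prior[region] for region in salience}
focus = max(attention, key=attention.get)
print(focus)  # → "eyes"
```

Even though the background is the most salient region, the top-down prior suppresses it, and attention lands on the eyes, mirroring how expectations steer gaze toward the informative parts of a face.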

For centuries, philosophers have debated how we perceive and understand the world. Plato argued that perception alone is not true knowledge; it requires "concepts" to guide our understanding. "Evolutionary" neuroscience advances this understanding by using artificial brains.


About the scientific papers:

First author: Michele Farisco, Sweden
Published: Neural Networks, September 2024
Link to paper: https://www.sciencedirect.com/science/article/pii/S0893608024006385?via%3Dihub

First author: Ilya A. Kanaev, China
Published: Neuroscience and Biobehavioral Reviews, February 2022
Link to paper: https://www.sciencedirect.com/science/article/pii/S0149763421005820?via%3Dihub