When Networks Need To Make Sense

What We’re Talking About

Current artificial intelligence based on deep learning has fundamental limitations. Each system can be used only in the situations for which it was designed, and it makes mistakes even there. Anything too different from its training data causes failures. This isn't surprising, because these systems are not built on any notion of how actual understanding arises from neural interactions.
Current cortical prostheses face a similar problem. Researchers in cortical prosthetic vision struggle to produce useful systems because there is no account of how visual experiences arise from neural interactions.
We need a solution to both problems, grounded in a testable theory of experience and understanding.

Why This Matters

Self-driving cars crash into things. Image-identifying software misidentifies people and objects. Tools for managing human resources and medical information develop biases. Cortical visual prostheses produce disappointing results, and researchers work without a theoretical basis that could guide improvements. These systems have access to vast amounts of data, but they don't understand anything; they're built on computation rather than on meaningful pattern formation.
This situation results from reliance on methods that have nothing to do with human experience and understanding. We should aim instead to build neuroprosthetics and AI that are grounded in how humans experience the world and that operate in an understandable way.

What We Have

A novel perspective on the physical nature of subjective experience and understanding, which leads to:
– a specific way of thinking about how cortical prosthetic vision arises from neural interactions
– a way to improve the performance of current cortical visual prostheses, and
– a strategy for building AI that's grounded in the way people experience the world.

We’d Love To Talk About It

About the current state of our research, what it means, and where it’s going.

We look forward to talking with you.