Towards a conscious-o-meter

22.01.2016

In a nutshell: This finding makes it possible to use Integrated Information Theory (IIT), a mathematical description of consciousness, to measure consciousness in real brains.

The big picture:

Computers can guide space probes to distant planets, store astronomical amounts of personal data, and beat us at chess. But smart as they are, they lack that ineffable humanness that is conscious experience.

And smart as they are, neuroscientists struggle to explain how the human brain generates consciousness – and silences it during deep sleep and anaesthesia.

Now, ARC Centre of Excellence for Integrative Brain Function associate investigator Naotsugu Tsuchiya and researchers at the RIKEN Brain Science Institute in Japan have made headway by modifying a mathematical formula describing consciousness so that it can be used to measure consciousness in real brains.

The ultimate goal of this type of work is a scale akin to a ‘consciousness meter’, which could be used to compare consciousness in different states, such as sleep, following brain injury, and in different species, says Tsuchiya, who heads Monash Neuroscience of Consciousness (MoNoC) at Monash University.

Neuroscientists trying to get a handle on the total subjective experience of another person quickly hit the limits of even today’s most sophisticated techniques.

A potential means of breaking this intellectual logjam is a mathematical theory of consciousness called Integrated Information Theory (IIT), developed by University of Wisconsin neuroscientist Giulio Tononi.

IIT is based on the assumption that consciousness emerges from a system that massively shares information among its different parts. It provides an objective measure of the level of consciousness, boiled down to a single value, phi: an index of the information shared between different parts of a system.

So, when we see a purple car, our brains integrate information about the object, its colour, its potential uses, our past experiences with cars, and so on – we get a high phi. A digital camera capturing the same car would not integrate that information, and so gets a low phi.

But although IIT is gathering kudos among consciousness researchers, it has its limitations, not least that it is impossible to compute phi using data from real-world systems.

“It’s too complex and requires complete knowledge of how the system should behave in all possible situations,” says Tsuchiya. To date, phi has been computed for tiny networks of ten neurons, a far cry from the 86 billion neurons of the adult human brain.
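
To get a rough sense of why exact phi is out of reach, it helps to look at the raw bookkeeping. The sketch below is a back-of-the-envelope illustration only, assuming idealised binary on/off units (real neurons are far richer), and is not a calculation from the paper:

```python
# Illustrative only: how the bookkeeping behind an exact phi calculation
# grows with network size, assuming idealised binary (on/off) units.

def n_states(n_units: int) -> int:
    """Number of possible joint on/off states of n binary units."""
    return 2 ** n_units

def n_bipartitions(n_units: int) -> int:
    """Ways to cut n units into two non-empty groups (one ingredient of the phi search)."""
    return 2 ** (n_units - 1) - 1

for n in (2, 10, 20, 40):
    print(f"{n:>3} units: {n_states(n):,} states, {n_bipartitions(n):,} bipartitions")
```

At ten units there are already over a thousand possible states to account for; at anything approaching brain scale the numbers are beyond astronomical.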

The work-around developed by Tsuchiya and his RIKEN colleagues is a new way of quantifying how much information a system integrates across its parts: they measure how much information is lost when the system’s parts are treated as disconnected.
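
As a rough illustration of that idea (and only an illustration: this is not the phi* estimator defined in the paper, and the two-unit toy system, its coupling and the plug-in probability estimates are all assumptions made for the sketch), one can compare how well a system’s past predicts its future when it is modelled as one coupled whole versus as two disconnected parts:

```python
# Toy sketch of "information lost when the parts are disconnected".
# Two binary units are simulated so that each mostly copies the *other* unit's
# previous state; we then compare the predictive information of the whole
# system with the sum over its parts modelled in isolation.
# (Illustrative only; this is not the phi* estimator from the paper.)
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_steps=100_000, coupling=0.9, noise=0.05):
    """Generate a coupled two-unit binary time series."""
    x = np.zeros((n_steps, 2), dtype=int)
    for t in range(1, n_steps):
        for i in range(2):
            source = x[t - 1, 1 - i] if rng.random() < coupling else x[t - 1, i]
            x[t, i] = source ^ int(rng.random() < noise)  # occasional random flip
    return x

def mutual_info(past, future, n_states):
    """Plug-in estimate of I(past; future) in bits for small discrete alphabets."""
    joint = np.zeros((n_states, n_states))
    np.add.at(joint, (past, future), 1)
    joint /= joint.sum()
    marg = joint.sum(axis=1, keepdims=True) @ joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / marg[nz])))

x = simulate()
whole = 2 * x[:, 0] + x[:, 1]                    # joint state encoded as 0..3
i_whole = mutual_info(whole[:-1], whole[1:], 4)  # predictive info of the coupled whole
i_parts = sum(mutual_info(x[:-1, i], x[1:, i], 2) for i in range(2))  # parts in isolation
print(f"integration ~ {i_whole - i_parts:.3f} bits")
```

The gap between the whole-system figure and the sum over the parts is positive when the units genuinely drive each other, and shrinks towards zero when they evolve independently – the intuition the new measure builds on.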

Getting to the ultimate goal of a ‘consciousness meter’ won’t be easy, says Tsuchiya. Theories of consciousness, including IIT, are the Big Bang Theory of neuroscience: a smattering of data held together by huge amounts of untested theory.

But by making it possible to collect more data, the modified IIT will contribute to the long haul of understanding consciousness, he says.

Next steps:
Tsuchiya and others will apply the modified IIT formulation to a variety of more complex systems, including EEGs from humans doing visual tasks. This will allow them to work out whether IIT is a realistic model of consciousness.


Reference:
Oizumi, M., Amari, S. I., Yanagawa, T., Fujii, N., & Tsuchiya, N. (2016). Measuring integrated information from the decoding perspective. PLoS Computational Biology, 12(1), e1004654.


Republish this article:

We believe in sharing knowledge. We use a Creative Commons Attribution 4.0 International License, which allows unrestricted use of this content, subject only to appropriate attribution. So please use this article as is, or edit it to fit your purposes. Referrals, mentions and links are appreciated.
