It is ironic and mundane at the same time that, for all our scientific progress over the centuries, mankind still has only a basic understanding of the thing that enables this understanding: our brains. I am certainly no neuroscientist, so of all people, I cannot claim any advanced understanding of the odd (and tasty, according to many foodies) lumps of fatty tissue that reside inside our craniums.
Perhaps we are so incredibly inept at understanding how brains work because of the way we tend to organize things. All things, really – companies, cities, machines, households, education, healthcare, science. Organizing things usually comes down to breaking them into parts, deciding what each part does, and then assuming (hoping, trusting) that when we put the parts together, the whole system functions as intended.
When developing technical systems such as complex machinery or software products, we often follow this decomposition-integration routine: define the system as a whole, break it into parts that can be designed separately, and then put the parts together into the (hopefully) functional system. This process is sometimes referred to as ‘system architecting’, since it is about devising an architecture for the system that will enable it to fulfill its intended functions. Dr. Gerrit Muller maintains an inspiring website on system architecting focused on the domain of complex technical systems such as MRI scanners and semiconductor lithography machines. But the approach of defining an overall architecture and breaking it down into manageable chunks is also recognizable in software development (the whole notion of object-oriented development relies heavily on it) and in the concept of organizational modularity (especially in organizations that develop products).
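For the software-minded reader, the decomposition-integration routine can be sketched in a few lines of code: agree on an interface for each part, develop the parts separately, and then compose them into the whole. This is a minimal, hypothetical sketch in Python – the class names are made up for illustration, not taken from any real system:

```python
from abc import ABC, abstractmethod

# Step 1: define the system's parts as interfaces agreed on up front.
class Sensor(ABC):
    @abstractmethod
    def read(self) -> float: ...

class Controller(ABC):
    @abstractmethod
    def decide(self, value: float) -> str: ...

# Step 2: develop each part separately, against its interface.
class ThermoSensor(Sensor):
    def read(self) -> float:
        return 21.5  # stub measurement for the sketch

class ThresholdController(Controller):
    def __init__(self, limit: float):
        self.limit = limit

    def decide(self, value: float) -> str:
        return "cool" if value > self.limit else "idle"

# Step 3: integrate the parts into the (hopefully) functional system.
class System:
    def __init__(self, sensor: Sensor, controller: Controller):
        self.sensor = sensor
        self.controller = controller

    def step(self) -> str:
        return self.controller.decide(self.sensor.read())

system = System(ThermoSensor(), ThresholdController(limit=20.0))
print(system.step())  # -> cool
```

The point of the sketch is that each part only knows the other parts through their interfaces, which is exactly what makes separate development – and the hope of painless integration – possible.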
Now here’s the catch: developing a technical (or social) system is one thing. Understanding a system that we had no hand in making is quite another.
One of the few things I understand about neuroscience is that research in this field revolves to a large extent around the so-called ‘Brodmann areas’: groups of brain cells that, simply put, show functional and chemical kinship. You could say that the recognition of these Brodmann areas is the result of a search, more than a century long, for the architecture of the brain.
I mentioned ‘manageable chunks’ above; Brodmann areas can be regarded as the building blocks of the brain. Examples include the primary visual cortex and the primary motor cortex. Over the past decades, neuroscientists have demonstrated the correlation of these Brodmann areas with specific functions; hence the names of the areas I just mentioned. Applying this divide-and-conquer logic makes the complex system of the human brain a bit less unwieldy to investigate, and it has enabled the leap of knowledge that materialized in this field over the past decades. A modular view of the brain allowed a modular organization of neuroscience itself, distributing specific subjects over a widespread landscape of neuroscientists.
But to what extent are our brains modular? Herbert Simon argued back in 1962 that biological systems are by definition nearly decomposable, which can be loosely interpreted as being sort of modular. Again, simply put: in a nearly decomposable system, it is possible to alter one component without disturbing the components surrounding it (1). And this is where the concept of ‘loose coupling’ comes in. Yet regardless of the convenience of the notions of Brodmann areas and near decomposability, it is undeniable that the parts of the brain are not isolated. The functioning of the brain, and abstract functions of it such as intelligence and awareness, are emergent behaviors.
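Loose coupling can be made concrete with a toy sketch. Assume (hypothetically – all names here are invented for illustration) two subsystems that interact only through a narrow signal: a ‘vision’ component and a ‘digestion’ component. Because digestion depends only on the signal, not on how vision works internally, vision can be upgraded without disturbing digestion – which is exactly the near-decomposability property described above:

```python
from typing import Callable

# Two loosely coupled subsystems: they interact only through a
# narrow boolean signal, never through each other's internals.

def vision_v1(scene: str) -> bool:
    # Crude food detector.
    return "food" in scene

def vision_v2(scene: str) -> bool:
    # An evolutionary 'improvement': case-insensitive detection.
    return "food" in scene.lower()

def digest(food_detected: bool) -> str:
    # Depends only on the signal, not on which vision produced it.
    return "eat" if food_detected else "wait"

def organism(see: Callable[[str], bool], scene: str) -> str:
    # The coupling point: vision's output feeds digestion.
    return digest(see(scene))

# Swapping in the improved vision does not disturb digestion:
print(organism(vision_v1, "grass, food"))  # -> eat
print(organism(vision_v2, "grass, FOOD"))  # -> eat
```

In a tightly coupled design, `digest` would poke around inside the vision component, and every improvement to one would risk breaking the other – the evolutionary trade-off described in footnote (1).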
Emergent behavior has very little to do with human-conceived, decomposable architectures. If anything, a decomposable architecture serves to limit emergent behavior and to bring it under human control. But the brain is not an intentionally designed system that was broken down into functional modules, developed individually, and then put together. Trying to understand it as such makes little sense to me.
But I would say that researchers at KU Leuven in Belgium are on the right track. They acknowledge that to understand what goes on up there, you need to look at the grey matter as an integrated, evolved system; not as a modular, man-made one.
Perhaps if they pull that off, we can move on to the next step and add a new way of developing man-made systems to our toolbox. The work on artificial neural networks has been moving in that direction – although the results have been sort of disappointing so far, given that people have been thinking along those lines since at least the 1950s. I guess we’re still a long way from the first ‘man-assisted-evolved’ car. Until that time, we can only do as the Romans did. Divide and conquer.
(1) Simon’s argument relied on the notion that if, for example, vision made an evolutionary step forward, it needed to be able to do so without disturbing, let’s say, the digestive system. Otherwise, an evolutionary development in one part of the organism would too easily disrupt other functions. And that would seriously hamper the survival of the organism – better vision at the cost of not being able to digest food is not a very successful evolutionary trade-off.