Addressing information overload by taking a complexity approach to semantics.
The theory of complexity is based on the idea that complex behaviour emerges from simple rules. Usually we are talking about systems containing large numbers of interacting agents. In this sense ‘complex’ means that the behaviour cannot be extrapolated from the rules of the system alone; it can only be observed by running the system, or a model of it.
Living systems can be defined in terms of dissipative structures. These structures exist at the edge of chaos, or on the line between order and randomness. This means there is enough order for a distinct structure to be maintained over time, but enough randomness to allow for flexibility and evolution of that structure. This is usually achieved in living systems by gradually replacing the components of the system, by ingestion and excretion. For example, the human body is a continuous structure whose chemistry and components change over time. So is a human cell, and at a more basic level within the cell there are autocatalytic sets of enzymes and reactions which in the right circumstances are self-perpetuating.
Complex networks can be found all over the natural and human world. Examples include the neocortex, the economy, ecosystems, gene regulatory networks, metabolic pathways, the contagion of disease or ideas, and the structure of the internet or of languages. This is because they are all formed in a similar way, following a few simple principles such as growth by addition and preferential attachment to existing highly connected nodes. Think of how the internet grew: people linked to a few popular sites, making them more popular, which made more people likely to link to them, and so on.
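The growth rule above can be sketched in a few lines of pure Python. This is an illustrative toy (the function name and parameters are my own, not anything from the post): each new node links to an existing node chosen with probability proportional to its current degree, and a handful of hubs duly emerge.

```python
import random

def grow_network(n_nodes, seed=42):
    """Grow a network by preferential attachment: each new node links
    to an existing node chosen with probability proportional to its
    current degree, so well-connected nodes attract more links."""
    rng = random.Random(seed)
    edges = [(0, 1)]                   # start with two linked nodes
    # Each node appears in this list once per link it has, so sampling
    # uniformly from it is exactly degree-weighted sampling.
    endpoints = [0, 1]
    for new in range(2, n_nodes):
        target = rng.choice(endpoints)  # the rich get richer
        edges.append((new, target))
        endpoints.extend([new, target])
    return edges

edges = grow_network(1000)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# Most nodes end up with one or two links, while a few hubs
# accumulate dozens: the skewed distribution discussed below.
hubs = sorted(degree.values(), reverse=True)[:5]
```

Running this, the top-ranked nodes hold an outsized share of the links even though every node was added by the same simple rule.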
Small World Networks
This kind of growth often results in a scale-free network, in which the distribution of node degree (how many links a node has) follows a power law, appearing as a straight line on a log-log plot. Another example of a power law is the ‘long tail’ curve exploited by iTunes, Amazon and the like, which make a large proportion of their money from low-volume sales of a large number of relatively unpopular items. Similarly, the size of an individual avalanche or earthquake is never predictable, but statistically the sizes fit a power law distribution.
Perhaps the best known example is the social network, which is also a small world network. The power law means that although most people have only a few contacts, there are a few social hubs with hundreds. Most people know the ‘six degrees of separation’ idea, whereby we are all connected to everyone else by around six steps in the network. A small world network achieves this by having hubs and weak ties, both of which emerge from the rules governing network growth.
The point is that small world networks are very robust. Remove a randomly chosen node and there are still plenty of other routes through the network. A strictly hierarchical network, on the other hand, is vulnerable: removal of a single node cuts off information flow to all the nodes beneath it. This is also why some infectious diseases and computer viruses are never wiped out. There is a tipping point during the formation of a network at which the small world state is reached; after this, overall connectivity is not much improved by adding new connections. This state is achieved by weak ties, which are connections between nodes whose neighbourhoods don’t overlap. For instance, I know people in Vancouver, so my friends in New Zealand are only two steps away from them in the social network.
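The contrast between a redundant network and a hierarchical one can be made concrete with a toy simulation. This is a minimal sketch (the hub-and-spoke and ring-with-shortcuts topologies are my own illustrative stand-ins): removing the hub shatters the hierarchy, while removing any single node from a network with alternative routes leaves it connected.

```python
from collections import deque

def reachable(adj, start, removed):
    """Count nodes reachable from `start` once `removed` nodes are gone."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen and nxt not in removed:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

# A hub-and-spoke "hierarchy": node 0 relays everything.
hub_adj = {0: list(range(1, 11))}
for i in range(1, 11):
    hub_adj[i] = [0]

# A ring with a few weak-tie shortcuts: no single critical node.
ring_adj = {i: [(i - 1) % 11, (i + 1) % 11] for i in range(11)}
for a, b in [(0, 5), (2, 8), (3, 9)]:
    ring_adj[a].append(b)
    ring_adj[b].append(a)

# Removing the hub isolates every spoke...
assert reachable(hub_adj, 1, removed={0}) == 1
# ...but removing any one node leaves the ring fully connected.
assert all(reachable(ring_adj, (v + 1) % 11, removed={v}) == 10
           for v in range(11))
```

The shortcuts also shorten path lengths, which is the small-world effect the Vancouver example describes.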
Complex Network Dynamics
Why is this relevant to organisations? This is about how a network of humans responds to the outside world, which is a function of information flow and network growth: in other words, the dynamics IN the network and the dynamics OF the network. Open Space Technology is an example of how self-organisation can be encouraged within groups of people by flattening hierarchy, which helps create new weak ties between different levels of an organisation. This increases information flow through the network, which strengthens it and makes it more adaptable.
Spider organisations, typically large hierarchical corporations, are slow, ordered and brittle. Starfish organisations, typically loose associations of individuals or smaller organisations, are more adaptable because they can move quickly and change with their environment. Look at the impact of guerrilla tactics on warfare. Traditional armies have a hierarchical command structure, where information has to pass up the chain of command before decisions can be made centrally. They are slow to mobilise and suffer if key components are lost. The guerrilla approach is to have a distributed organisation of cells without a central command centre, which can collaborate or act autonomously as required. They usually share general principles but are free to make their own decisions. Consider the account of Apache resistance in The Starfish and the Spider for an example of a successful leaderless organisation.
Semantic profiling software reads text in order to represent context by extracting themes and the relationships between them. Context representations can be used for discovery, mapping, exploration etc. The topology of a semantic network is similar to a social network, so many of the same tools can be used. A semantic approach is complementary to manual tagging, partly due to the emphasis on implicit relationships rather than pre-defined categories, and partly because automation makes it consistent, which supports the pattern matching functions.
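A toy version of this idea can be sketched with sentence-level co-occurrence. To be clear, this is not the actual profiling software described above, just a minimal illustration under simplifying assumptions (words stand in for themes, a shared sentence stands in for a relationship):

```python
from itertools import combinations
from collections import Counter

def cooccurrence_graph(sentences,
                       stopwords=frozenset({"the", "a", "of", "in", "and"})):
    """Toy semantic profiling: treat words as themes and count how
    often two themes appear in the same sentence. The weighted pairs
    form a semantic network whose topology resembles a social one."""
    links = Counter()
    for sentence in sentences:
        themes = sorted({w for w in sentence.lower().split()
                         if w not in stopwords})
        for pair in combinations(themes, 2):
            links[pair] += 1          # implicit relationship, no taxonomy
    return links

docs = [
    "small world networks have hubs and weak ties",
    "weak ties connect distant parts of the network",
    "hubs make networks robust to random failure",
]
graph = cooccurrence_graph(docs)
```

Because the links emerge from the text itself rather than from pre-defined categories, the same graph tools used on social networks (hubs, weak ties, clusters) apply directly.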
Whether a semantic or a manual approach is used, I believe that focusing on emergent relationships can help knowledge systems avoid the categorisation and domain evolution problems that top-down taxonomies suffer from. An example would be for expertise location or ad hoc team building in crisis/disaster management, or other applications where the emphasis is on immediate action.
Hierarchical Temporal Memory
The Hierarchical Temporal Memory model holds that the neocortex is a machine for matching patterns in time. Tracking the emergence of semantic relationships over time may be the only way we can realistically make sense of the huge amount of information most of us are bombarded with these days. Psychologists also theorise that a new concept often comes about by forming a relationship between two previously unrelated ideas.
I like to think that having a new idea is equivalent to forming a weak tie in the semantic network. The feeling of insight corresponds to linking previously unconnected ideas. As context then develops around the new relationship, the two nodes, which initially shared no context, gradually become more and more embedded. For instance, the words ‘mobile’ and ‘phone’ were once unrelated, but through association have almost transitioned to a single-word state. The key is to identify these new weak ties as they appear and start to develop surrounding context. For instance, ‘this article is related to those you have read but also contains some interesting new ideas’…
Hmmm . . . most of this has never occurred to me. I liked your point about the removal of a single node disabling an entire hierarchical system, arguing in favour of distributed information/power structures. And it’s an interesting account of the sensations that accompany new ideas – I’m not sure there’s anything quite like it in Locke’s Essay Concerning Human Understanding.
Hi Mark, interesting stuff, learned a few things. My biggest concern with automated semantic analysis is the critical mass. How much useful content would one need before meaningful things start to emerge? And how to determine what’s useful?
If you use theme relationships to define context then similarity matching can be done across relatively small document collections, in the hundreds or fewer. Similarity matching supports many knowledge management functions including discovery, deduplication, document routing etc.
For meaningful context clusters to emerge you may need more information, although it depends heavily on the document set. A highly embedded set of related articles covering a narrow subject area will tend to merge into one central clump. A more diverse data set will tend to throw context satellites off from the main body. These satellites are useful indicators of novelty whatever the collection size, above a minimum threshold, since this novelty is only ever defined relative to the rest of the data set.
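The relative nature of novelty can be illustrated with bag-of-words cosine similarity. This is a hedged sketch, not the actual clustering method described above: a document whose best match in the collection is weak is a candidate ‘satellite’.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def novelty(new_doc, collection):
    """Novelty is defined relative to the collection: one minus the
    best similarity to any existing document."""
    new_vec = Counter(new_doc.lower().split())
    best = max((cosine(new_vec, Counter(d.lower().split()))
                for d in collection), default=0.0)
    return 1.0 - best

collection = [
    "power law degree distribution in scale free networks",
    "weak ties and hubs in small world networks",
]
# A document close to the existing context scores low novelty...
assert novelty("hubs and weak ties in networks", collection) < 0.5
# ...while one sharing no vocabulary scores high: a context satellite.
assert novelty("autocatalytic sets of enzymes", collection) > 0.8
```

Note that the same new document could score as novel against one collection and as routine against another, which is exactly the point: novelty is a property of the pair, not the document.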
Of course, as a data set grows new additions are less likely to be novel… perhaps somewhat akin to the reduced frequency of insight as we get older!
I’ve thought about the issue a little differently…I use a simple perspective of measures and look at ‘new things’ to see if they are positive, negative or different from other things I know or have experienced. I ignore both positive and negative to focus on the ‘different’ as they will change prior or future relationships.