One of the outputs of the workshop was a distributed cognitive model of information. (Well, it might actually be more than just that: it might be a distributed, situated, enactive, embodied, integrative model of cognition. We’ll have to think more deeply about it, as we only took one week to develop it!) Elements of the model include humans and computers situated in an effective environment. The model is able to clarify some of the contradictions in typical thinking about the internet:
- Ontology: Is the Internet an agent, a collection of agents, or an environment for agents (Sue)? The answer is yes to all three; the model describes how this might be so, depending on how you “slice and dice” the model.
- Agency: The model also resolves some of the questions around how machines affect our interpretation of technology-mediated human agency (Latour). Are machines extensions of people? Are machines in a co-evolutionary relationship with humans? The model shows that both are true inasmuch as (again) they are different ways to slice and dice the model.
- Scale: The model is self-similar across scales. The self-similarity can be explained in terms of replicator theory: replicators such as genes, memes, and tremes are translated; the translation boundaries correspond to hierarchical boundaries.
- Dynamic change: The model retrodicts and predicts Internet behaviour across time. These retrodictions and predictions are also related to scale (see previous point). The Internet of Things will shape how these changes unfold across space and time.
The model is a model of distributed cognition, not consciousness. We suggest, however, that further analysis and refinement of the model might reveal it to be a useful model for distributed and situated consciousness. The reason we shied away from claiming a model of consciousness is not because of the ‘hard problem’ of how qualia arise from matter, but because of another phenomenological obstacle: only I can assert with absolute certainty that I am conscious at this moment. I cannot assert the same even of you. A first-person perspective is necessary for any certainty of consciousness; anything else is only a really, really good guess. Consciousness can only be asserted by those that are conscious (cf. Tononi). And this is “the harder problem”.