For my part, what I am doing now is figuring out what the Future Text Connect 2019 community (which I am going to call the FTC) is, by collecting data on:
- Public profiles related to the community (websites, blogs, etc.)
- Digital artifacts created and maintained by the members (images, videos, code repositories)
- Community groups and communication channels (Skype, Slack, GitHub, GitLab, Google Docs, Google Colab, etc.)
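As a concrete illustration, the kind of record this data collection might produce could look like the following minimal sketch; the class and field names are hypothetical, not part of any existing FTC dataset.

```python
from dataclasses import dataclass, field

# Hypothetical record shape for the three categories of collected data.
@dataclass
class CommunityMember:
    name: str
    profiles: list[str] = field(default_factory=list)   # websites, blogs, ...
    artifacts: list[str] = field(default_factory=list)  # images, videos, code repositories
    channels: list[str] = field(default_factory=list)   # Skype, Slack, GitHub, ...
```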
I am interested in specific digital artifacts rather than philosophical principles. My own work, however, is platform-independent and data-driven.
From Marc-Antoine Parent
What we're trying to do, and how: a new definition.
Collectively, we're exploring ways of working together and, I hope, building something larger than what any of us could build alone. The goal as I understand it is an ecosystem of tools to support collective intelligence, i.e. an ecosystem of common thought.
I'm personally interested in two problems:
I think that, as a group, we would want:
At the technical level: to be able to show seamless information flow between our various platforms (all the way from cross-references through embedding to shadowing nodes; see the sketch after this list).
At the human level: to show how we can build on one another's work, enrich it, and come up with rich, inclusive perspective documents as a result of that work.
At the meso level: we need a way to collaborate in large numbers that allows both synthesis and multiple perspectives to co-exist (collaboration without edit wars, through choral perspectives and an underlying broad graph).
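To make the range from cross-reference to shadowing node concrete, here is a minimal sketch of the three coupling modes; the names (LinkMode, RemoteLink) are mine, not taken from any of the platforms involved.

```python
from dataclasses import dataclass
from enum import Enum

class LinkMode(Enum):
    """Degrees of coupling between platforms, from loosest to tightest."""
    CROSS_REFERENCE = "cross-reference"  # a pointer to the remote node
    EMBED = "embed"                      # remote content rendered in place
    SHADOW = "shadow"                    # a local node mirroring the remote one

@dataclass
class RemoteLink:
    target_url: str                 # the node on the other platform
    mode: LinkMode
    cached_copy: str | None = None  # local copy, needed for EMBED and SHADOW
```

The point of the enum is that "seamless information flow" is not one feature but a spectrum: each step up requires the platforms to share more state.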
Let me go further: We're trying to model new ways to work with information, both at the individual and collective level. Because of the scope of information, we need to be able to focus on partial views; because of its complexity and interrelatedness, we want to be able to envision global views. As individuals, we sometimes want to express an individual vision that diverges from the collective; as a group we want those views to be woven back into a more global perspective. So we need perspective documents that remain connected to a global information space. The global view attempts to present a synoptic synthesis across multiple perspectives, but also across the evolution of the community. If a group is discussing an issue, there is a lot of conversation (flow) that feeds into building the synoptic documents.
There are many perspectives on a global information space: the personal view, the computed local-neighbourhood view, the global synoptic view, the historical-conversational view, and the communal view.
The tools we are building have to do all of this, and interrelate it. A user story must explain how an individual moves between those views. Usually, a person starts with a personal perspective, uses a computational agent to branch from the personal view to a computed local-neighbourhood view, may dip into the global view to find new leads, interprets those leads in a historical-conversational view, gets pulled into an ongoing conversation about different perspectives, tries to affect the communal view... This is the user story I want to elaborate on in time.
How will we work together, concretely?
There are two approaches: starting from the abstract and starting from the concrete. (I think we'll have to do both.) We have all built tools, and those tools have made certain assumptions about data models, whether they are document-centric, concept-centric, representation-centric, or some combination of the above. We need to share our existing, concretely realized data models and identify anchoring points where our models can refer to one another and be translated into one another. From the abstract side, we probably need to define a data model that is generic enough to express what each of us is doing. Sometimes this will amount to a union of everything each of us tries to do, but hopefully we can find simple generic units that can express many different things, as the URL does.
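One candidate for such a simple generic unit is an RDF-style triple: a typed link between two URL-addressable things. The sketch below is illustrative only; the Unit name and the example URLs are mine.

```python
from dataclasses import dataclass

# A generic unit: a typed link between two addressable things, each named
# by a URL. Documents, concepts, and representations can all be expressed
# as collections of such units.
@dataclass(frozen=True)
class Unit:
    source: str    # URL of the referring thing
    relation: str  # URL naming the kind of link
    target: str    # URL of the referred-to thing

# An anchoring point between two tools' models is then just a Unit whose
# relation says "same-as" across the two namespaces (hypothetical URLs).
anchor = Unit(
    source="https://toolA.example/concept/42",
    relation="https://example.org/rel/same-as",
    target="https://toolB.example/node/abc",
)
```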
One of my own efforts, Hyperknowledge.org, is a specification for describing operations on concept graphs, with an emphasis on data sharing between different data repositories. The underlying graph model is based on topic maps and allows hypergraphs, i.e. N-ary relationships, relationships as relationship targets, etc. To allow data sharing, we model remote references, embeddings, and local forks and merges of individual data objects.
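To make the hypergraph idea concrete, here is a minimal sketch of N-ary relationships with relationships as targets. The class names are illustrative, not the actual Hyperknowledge specification.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str

@dataclass
class Relation:
    kind: str
    # N-ary: any number of role -> target pairs, and a target may itself
    # be another Relation (relationships as relationship targets).
    roles: dict[str, "Node | Relation"] = field(default_factory=dict)

# Example: an assertion, and an endorsement whose target is the assertion
# itself rather than a plain node.
claim = Relation(kind="asserts",
                 roles={"author": Node("alice"), "topic": Node("hypertext")})
endorsement = Relation(kind="endorses",
                       roles={"author": Node("bob"), "target": claim})
```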
One particular aspect of Hyperknowledge is an enriched view of URLs: a URL does not always uniquely determine a resource, because all meaning is contextual. When two agents speak about a resource, each holds a different dataset about that resource; hence the knowledge objects are different even though they use the same name and may refer to the same abstract entity. So I distinguish local names: a knowledge object is uniquely determined by the combination of a local name, a realm (the identity of the dataset holder), and a timestamp. External references, and provenance information for local copies, should keep all three.
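A minimal sketch of that identity scheme, with names of my own choosing (ObjectRef and its fields are not from the specification):

```python
from dataclasses import dataclass
from datetime import datetime

# A knowledge object is named not by a URL alone, but by the triple
# (local name, realm, timestamp).
@dataclass(frozen=True)
class ObjectRef:
    local_name: str      # the name used inside a dataset
    realm: str           # identity of the dataset holder
    timestamp: datetime  # which version of the holder's knowledge

# Two agents talking about "the same" resource hold distinct objects:
a = ObjectRef("doc:future-of-text", "https://alice.example", datetime(2019, 5, 1))
b = ObjectRef("doc:future-of-text", "https://bob.example", datetime(2019, 5, 2))
assert a != b  # same local name, but different realms and times
```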
Those graphs are mostly accessed as event streams; I'm hugely committed to event sourcing. This makes it easier to merge concepts between different representations in different realms, using a model not unlike git's, and I think this is very important for social knowledge. Operations are decomposed into micro-operations dynamically, which makes it possible to add processors for new operation types at runtime. The aim is to use those micro-operations to allow a fork-and-merge model of whole concept graphs.
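Here is a sketch of what such an event-sourced, dynamically extensible design could look like; the operation names and registry are mine, not the actual Hyperknowledge operation types.

```python
from typing import Callable

# Registry of micro-operation processors; new types can be added at runtime.
processors: dict[str, Callable[[dict, dict], None]] = {}

def register(op_type: str):
    def wrap(fn: Callable[[dict, dict], None]):
        processors[op_type] = fn
        return fn
    return wrap

@register("add_node")
def add_node(graph: dict, op: dict) -> None:
    graph[op["name"]] = {}

@register("set_property")
def set_property(graph: dict, op: dict) -> None:
    graph[op["node"]][op["key"]] = op["value"]

def replay(graph: dict, stream: list[dict]) -> None:
    """Rebuild graph state by replaying the event stream, git-style."""
    for op in stream:
        processors[op["type"]](graph, op)

graph: dict = {}
replay(graph, [
    {"type": "add_node", "name": "hypertext"},
    {"type": "set_property", "node": "hypertext", "key": "coined_by", "value": "Ted Nelson"},
])
```

Because state is derived entirely from the stream, two realms can diverge and later merge by reconciling their streams, which is the git-like behaviour described above.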
Hyperknowledge explicitly aims to be a lingua franca between our various efforts, but it is still a work in progress; in particular, there is still work to be done to model composite documents and view specifications using that model.
My other, earlier effort is IdeaLoom.org; it connects a much simpler concept graph model with conversation streams. It is based on the belief that formal structure is the result of a conversation, and will be refined through conversation. That basic assumption also reflects how I think we should work together.