Written up for the Unrev-II Project
- 0.9: Reformatting and linking, minor edits. Improved the picture of development-document cycles. Appended a new paragraph to “Partitionable”. “Quotable” section added.
- 0.7: “Categorizable” section expanded; “Relational” requirement added under “DKR Requirements”.
- 0.6: “Reusable” requirement added after “Hierarchical”.
- 0.5: “Partitionable” requirement added under “General System Requirements”.
- 0.4: Use Case Scenarios.
- 0.3: Formatting, two additions.
- 0.2: Refinements. Thoughts from UnrevII discussions and other additions.
- General Functional Requirements
- General System Requirements
- DKR Requirements
- Operational Requirements
- Data Structure Requirements
- Use Cases and Scenarios
- Future: Abstract Knowledge Representation
This document sets out the requirements for a subset of an eventual Dynamic Knowledge Repository (DKR). The subset described is a collaborative document system, inspired by what Doug describes as an “Open HyperDocument System” (OHS). The goal of this document is to show how such a system fits into a DKR framework, detail its requirements, and point to extensions that move it in the direction of a full DKR.
A fully functional DKR will need to manage many different kinds of things:
- abstract knowledge representations (and inference engines)
- predictive models
- multimedia objects
- programs of various kinds (search engines, simulations, applets)
- data (spreadsheet files, database tables)
It is likely, too, that different kinds of problem will require information to be organized in fundamentally different ways. For example, a DKR devoted to the energy problem might have major headings for the problem statement, real world data, tactical possibilities, strategic alternatives, and predictive models. On the other hand, a DKR devoted to building the next-generation DKR might have sections for requirements, design, implementation, testing, bug reports, suggestions, schedules, and future plans.
Since the general outline of a DKR seems to depend on the problem domain it is targeted for, it seems reasonable to focus attention on the elements they have in common.
This set of requirements will focus on what is perhaps the major common feature: Documents — in particular, Collaborative Documents, and the need for collaborators to interact in a variety of ways, from real-time interactive shared screens to conventional email, to construct them.
Other important areas that will need attention include the integration of multimedia objects (including animations, simulations, audio, video, and the like) as well as the critical functions of abstract knowledge representation, inference engines, model-building functions, and the integration of other executable programs. But here, we’ll focus on Collaborative Documents.
A wide variety of email and forum-based discussions occur on a host of topics every day. In each of these discussions, important information frequently surfaces, but that information is hard to capture where you need it.
Document production systems, on the other hand, simplify the task of creating complex documents but make it hard to gather and integrate feedback.
For example, the DKR discussions have identified several possible starting points for such a system. That kind of feedback occurs naturally in an email system, as opposed to a document production system, but each of the pointers was buried in a separate email. It required a lengthy search to gather them together (below), and the list may not even be complete!
To act as a foundation for a DKR, a Collaborative Document System (CDS) needs to combine the best features of:
- Directory tree / outlining programs
- Hypertext (links and formatting)
- XML (inline references and other features)
- Email systems
- Forums and Email Archives
- Document Database
- Versioning Systems
- Difference Engines
- Search Engines
In the DKR discussion, we’ve seen pointers to several possible starting points for such a system. Those are contained in the References post, in the Bootstrap section. (The many possible starting points listed in the post desperately need short synopses and evaluations.)
The lengthy list of starting points, the difficulty of creating it, and the rapidity with which it goes out of date, combine to suggest several obvious requirements for the system: It needs to be composed of information nodes that are hierarchical, mailable, linkable, and evaluable (more on those subjects in a moment).
Each of those requirements leads in turn to other requirements. The next sections discuss the General Functional Requirements, General System Requirements, anticipated DKR Requirements, and Operational Requirements in greater detail.
Following that, there are a few additional sections that will at some point be moved to separate document(s).
At the end, there is a brief consideration of the role that an abstract knowledge representation might play in such a system.
Possibly the most remarkable thing about the DITA documentation system is the way it meets some of the most important system requirements, especially with respect to re-use of material by way of transclusion, in addition to standard HTML hyperlinking.
These are the general requirements for how the system must operate, to be effective.
This document, like the list of starting points mentioned earlier, is heavily hierarchical in nature — as are most technical documents. These facts further underscore the need for a hierarchical system.
For example, this email message should exist in outline form. It should be easy to add and remove entries to various sections: for example, the list of starting points given above.
However, the hierarchy should function using XML-style “entity references” that copy the target contents into the displayed document, “inline”. That permits multiple references to the same node. The result is effectively a lattice of information nodes, where any one view of it is hierarchical.
To be strictly correct, the underlying data structure will be a directed graph. In reality, it will be bi-directional, and it will typically turn out to have cyclic loops. Although it would be nice to avoid that, it is probably unavoidable.
The “network” nature of the graph results from the property that allows a document-segment (node or tree) to be used in multiple places. In each “document” that makes such an access, however, the view is hierarchical. The hierarchy is a view of the graph, and a “document” is really a structured collection of nodes from the data base.
Unlike HTML, where references to other documents occur only as links, references to other nodes and trees in this system will typically occur as “includes”. The effect of the inclusions will be to make the material appear inline, as though it were part of the original document.
Although “hard” links to objects will be needed at times, in most cases the link to the “Requirements Document” should be a “soft” link — that is, an indirect link that points to the latest version. That means never having to worry about looking at an old version of the spec.
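As a rough illustration of how inline includes and soft links might interact, here is a minimal sketch in Python. All of the names (Node, publish, latest, render) are invented for this example; they come from no specification in this document.

```python
class Node:
    def __init__(self, text, children=None):
        self.text = text
        # Children may be Node objects, or strings naming a soft link.
        self.children = children or []

versions = {}  # soft-link table: logical name -> list of published versions

def publish(name, node):
    versions.setdefault(name, []).append(node)

def latest(name):
    # A soft link resolves to the most recent version at display time,
    # so a reader never sees an old version of the spec by accident.
    return versions[name][-1]

def render(node, depth=0):
    # The same node may be included under many parents; each rendered
    # view is a hierarchy even though the underlying store is a graph.
    lines = ["  " * depth + node.text]
    for child in node.children:
        target = latest(child) if isinstance(child, str) else child
        lines.extend(render(target, depth + 1))
    return lines

publish("ReqDoc", Node("Requirements v1"))
publish("ReqDoc", Node("Requirements v2"))
doc = Node("Design Notes", children=["ReqDoc"])
```

Rendering `doc` pulls the latest published requirements node inline; a hard link would instead pin a specific entry in the version list.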
Each node in the hierarchy needs to be versioned, so that previous information is available. In addition, the task of displaying differences becomes essentially trivial.
It must be possible to “publish” the whole document or sections of it by “posting” it. It must also be possible to create replies for individual sections, and then “post” them all at one time.
At a minimum, every node in the system has two hierarchies descending from it. One is a list of content nodes that comprise the hierarchical document. The other is a list of reviewer comments. (Some comments will be specific to the information in that node; others will be intended as general comments for that section of the document.)
Other sub-element lists may prove desirable in the future, so the system should be “open-ended”, allowing other sublists to be added, identified, and accessed.
Rather than using a central “repository”, the system should employ the major strengths of email systems, namely: fast access on local systems and the robust nature of the system as a result of having redundant copies on many different systems. The system will be more space intensive than email systems, but storage costs are dropping precipitously, and future technologies paint an even brighter picture.
To mitigate the short-term need for storage space, it should be possible to set individual storage policies. For example, a user will most likely not want to keep previous versions of any documents they are not personally involved in authoring.
It must also be possible to add names to the authoring list. Name removal should probably be limited to the original author. For those cases when the original author is no longer part of the system, it should be possible to make a copy of the document and name a new primary author.
When a new version of a document arrives, differences are highlighted. Old-version information becomes accessible through links (if saved). Differences are always against the last version that was visited. If a section of the document was never visited, the most recent version of that section is displayed on the first visit. If several iterations have taken place since the last visit, the cumulative differences are shown. (Again, node-versioning makes this user-friendly feature fairly trivial.)
XMLTreeDiff at IBM Alphaworks (Lars Martin)
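The diff-on-arrival behavior can be sketched with Python’s standard difflib; the version history here is invented sample data, and the cumulative-diff policy is the one described above (always diff against the last version the reader actually visited):

```python
import difflib

# Invented sample data: three successive versions of a node's text.
history = [
    ["The system stores nodes.", "Nodes are versioned."],
    ["The system stores nodes.", "Nodes are versioned.", "Diffs are cheap."],
    ["The system stores typed nodes.", "Nodes are versioned.", "Diffs are cheap."],
]

last_visited = 0  # the reader last saw version 0

def cumulative_diff(history, last_visited):
    # Several intervening edits collapse into one cumulative diff,
    # taken against the last visited version, not the previous one.
    old, new = history[last_visited], history[-1]
    return [line for line in difflib.unified_diff(old, new, lineterm="")
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]
```

Because every node is versioned, the “diff” is just a comparison of two stored snapshots rather than a reconstruction from edit logs.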
Clearly support for web links is desirable, as shown by the links to the various possible starting points in the References post. [Note: Each of those should be evaluated against this requirements list, and used to modify these requirements.]
Indirect links are needed, both to link to a list of related nodes, and to link to the latest version of a node.
It must be possible to categorize nodes (and possibly links). For IBIS-style discussions, for example, node types include (at a minimum) question, alternative, pro, con, endorsement, and decision.
For material that is included “in line” in the original document, typing implies the ability to choose which kinds of linked-information to include. For example, in addition to the current version, one might choose to display previous versions and/or all commentary.
For material that is displayed in separate windows, typing allows the secondary windows to automatically display material of a given type. (For example, in Rod Welch’s “contract alignment” example, the secondary window might automatically display the meeting minutes that are linked to particular phrases in a contract. Lines might be automatically drawn from sections of the minutes to sections of the contract. Other links in the documents, however, would be ignored.)
The Traction system probably presents the most clearly thought-out and well-implemented approach to categories. In that system, categories are implemented as lists. When a category is applied to a node, the node acquires a link to the list, and also becomes a member of it. The fact that nodes are members of category lists allows efficient searches. The fact that each node links to the categories it belongs to allows all of the node’s categories to be displayed in a list (to the right of the paragraph, in Traction, in a light blue color).
In Traction, categories can also be hierarchical. A colon convention is used to separate levels, as in “logic:assert” or “logic:deny”. Categories can also be changed in that system. In the demo that Chris Nuzum was kind enough to give me, he used the example of “ToDo” changing to “Feature:Scheduled” and “Bug:Open”. When you invoke the change operation, all of the nodes currently marked “ToDo” are listed, and flagged as “subject to the change”. You can then uncheck any nodes the change does not apply to before performing the operation. Then, when you change the remaining “ToDo” nodes, the list is all set to carry out the change.
In addition to those features, Traction realized that the impact of changes could be large, so they included an *audit trail* for every change. When a node is re-categorized, the date, time, and author of the change are recorded. It may also be possible to undo such changes, though I’m not sure. But the important point is that changes in such a system can generate a significant amount of confusion. The audit trail makes it possible to see what happened. It would also help to identify folks you would rather not have messing around in your database.
To summarize, then, the requirements for the proper handling of categories are:
- Creatable (add new categories)
- Hierarchical (catA:catB)
- Assignable (node <–> catA)
- Removable (node <-/-> catA)
- Changeable (catA –> catB, selected subset of nodes changes)
- Auditable (audit trail)
- Searchable (to find all nodes of given type(s))
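Those requirements might fit together roughly as follows. This sketch is loosely modeled on the Traction behavior described above; every class and method name is invented for illustration.

```python
from datetime import datetime

class Categories:
    def __init__(self):
        self.members = {}   # category -> set of node ids (fast search)
        self.applied = {}   # node id -> set of categories (fast display)
        self.audit = []     # (when, who, action): trail for every change

    def assign(self, node, cat, who):
        self.members.setdefault(cat, set()).add(node)
        self.applied.setdefault(node, set()).add(cat)
        self.audit.append((datetime.now(), who, f"assign {cat} to {node}"))

    def remove(self, node, cat, who):
        self.members.get(cat, set()).discard(node)
        self.applied.get(node, set()).discard(cat)
        self.audit.append((datetime.now(), who, f"remove {cat} from {node}"))

    def change(self, old, new, who, keep=None):
        # List the affected nodes first; the caller can exclude any of
        # them (the "uncheck" step) before the bulk change is applied.
        affected = set(self.members.get(old, set())) - set(keep or [])
        for node in affected:
            self.remove(node, old, who)
            self.assign(node, new, who)
        return affected

    def search(self, cat):
        # Membership lists make category search a direct lookup.
        return self.members.get(cat, set())
```

Note that the double bookkeeping (members plus applied) is what buys both the efficient search and the per-node category display, at the cost of keeping the two maps consistent.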
It should be possible to construct an initial design document using queries of the form “give me all design notes corresponding to the features we decided to implement in the current version of the functional specification”.
The many possible starting points in the References list highlight the need for evaluability. It should be possible, not only to reply with a comment on any item in those lists, but also to add an evaluation, much as Amazon.com keeps evaluations for books. That feature is arguably their greatest contribution to eCommerce, and the DKR should make use of it. It should also be possible to order list items using relative evaluations. That lets the most promising starting point float to the top of the list.
Not all lists should be ordered by evaluation, however. For example, the sequence of requirements has been chosen to provide the most natural “bridge” from one to the next. So evaluation-ordering must be an option.
Ideally, it should also be possible to “weight” an evaluation, perhaps by adding a “yay” or “nay” to an existing evaluation.
When displaying an evaluation, where evaluators can choose a value from 1..5, it might make sense to display the average, the number of evaluations, and the distribution. A distribution like
10 2 1 2 10
for example, would show a highly polarized response, even though the “average” was 3.
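That summary is simple to compute; here is a quick illustration, using the vote counts from the distribution above:

```python
def summarize(votes):
    # Count how many evaluators chose each value 1..5, and average.
    distribution = [votes.count(v) for v in range(1, 6)]
    average = sum(votes) / len(votes)
    return average, distribution

# 10 ones, 2 twos, 1 three, 2 fours, 10 fives: highly polarized.
votes = [1] * 10 + [2] * 2 + [3] * 1 + [4] * 2 + [5] * 10
```

The average alone (3.0) would suggest a lukewarm response; the distribution reveals two strongly opposed camps.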
* Architecture for Internet searching, categorization, and ranking. (http://www.cs.sunysb.edu/~maxim/OpenGRiD/)
The system must increase the ability of multiple people, working collaboratively, to generate up-to-date and accurate revisions.
For any given document, there are several classes of interaction:
The first group consists of people who receive the document and do nothing else with it. (Just trying to be complete here.) The second group consists of people who send back comments on different sections. That feedback will typically be used in future versions.
The third group consists of people who suggest an alternative wording or organization. Those “suggestions” take the form of a modified copy of the original. One of the document authors may then agree to use that formulation in place of the original, or may simply keep it as commentary.
The fourth group consists of the fully-collaborative authoring group. The original author must be able to add other individuals to the document, or to subsections of it. (An author registered for a given node has authoring privileges throughout the hierarchy anchored at that node.)
Every information node that is created should be automatically attributed to its author. When a new version of a node is created, all of the people who sent comments should be contained in a “reviewer” list. When a suggestion is accepted, the author of the suggested node should go into a “contributor” list in the parent node and be added to the “author” list for the current node. It should be possible to identify all of the reviewers, contributors, and authors for the whole document and for each section of it.
In addition to being able to add commentary to existing documents, the user must be able to easily quote from existing documents when creating new ones.
Internally, the quotations will appear as a link (for example, using the w3c XInclude specification). But the quoted material will appear “inline” in the new document. The link, in this case, will be a “hard link”. That is, when newer versions of the text are created, the link will not point to them, but will instead point to the original version. The fact that newer versions exist, however, will be reflected in the display (explained next).
When displayed, quoted material will be automatically attributed, and followed by a link to the original source node, in its original context. If that material has changed, that link will be flagged as “older”, and a link to the newer version will also be presented. (The document’s author(s) will then have the option of using the newer version in place of the original.)
If the system is truly a network (a node can exist in multiple contexts), then the pointer must point not only to the node, but also to its parent context, so that the link goes to the document the node was quoted from. On the other hand, if the system is not really a network (but only appears to be one through the action of inclusion operations like quoting), then the system must be prepared to handle “pointers to pointers”. In other words, if the node appeared in document A, and it was quoted in document B, then when constructing document C, quoting the same text from document B will construct a link (pointer) in C to the pointer (virtual node?) in B that points to A. The “context” of the node, in that case, must be B, and not A.
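The pointer-to-pointer case can be made concrete with a small sketch. The Quote class and the A/B/C document names here are invented for illustration only:

```python
class Quote:
    def __init__(self, target, context):
        self.target = target    # a node id, or another Quote
        self.context = context  # the document this quote points into

def resolve(quote):
    # Follow the chain of quotes back to the underlying node,
    # collecting the chain of contexts along the way. The first
    # context is the document the material was quoted *from*.
    contexts = []
    while isinstance(quote, Quote):
        contexts.append(quote.context)
        quote = quote.target
    return quote, contexts

q_in_b = Quote("node-1", context="A")  # B quotes node-1 from document A
q_in_c = Quote(q_in_b, context="B")    # C quotes the quote found in B
```

Resolving the quote in C yields the underlying node plus the context chain [B, A], so the displayed attribution link can go to B, as the text above requires, while the original source in A remains reachable.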
When new versions of a document are created, material would be included by pointing to it, keeping attributions intact. The system must accelerate that process. It should be possible to start a new document in one of two ways:
- Copy the original document intact to create a new version of it. (Deletes and rearrangements then affect the new document, while the original version remains intact.)
- Create a document and designate it as the “target” so that, as you review other documents, selecting parts of them and issuing the “copy” command automatically stuffs the selection into the target.
These are requirements for the system as a whole.
The system must be “open” in the sense that a user is not constrained to using a particular editor, email system, or central server. The specifications for interaction with the system should be freely available, along with a reference implementation to use as a basis. As much as possible, conformance with existing standards (XML, xHTML, HTTP, email) is desirable. (The tricky decisions, of course, will be between required features and standard protocols that don’t support them.)
The server and client systems that implement the DKR must also be fully *extensible*. In other words, the same characteristics of hierarchy, versioning, and revisability (use of most recent version) that apply to the documents must apply to the system itself.
That extensibility can be accomplished with a “dispatch table” that names the class to use for each kind of object that needs to be created. In conjunction with open sourcing, that architecture allows a user to extend (subclass) an existing class and then use the extended version in place of the original. In addition, upgrades can occur dynamically, while the system is in operation, while allowing for modular downgrades when extensions don’t work out.
* Warner Ornstine’s Cords/Plugs/Sockets Architecture.
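A dispatch table of the kind described above might look like the following sketch; the class names and the “text” object kind are invented for illustration.

```python
class TextNode:
    def render(self):
        return "plain text"

class FancyTextNode(TextNode):
    # A user extension: a subclass substituted for the original.
    def render(self):
        return "formatted text"

# The dispatch table names the class to use for each kind of object.
dispatch = {"text": TextNode}

def create(kind):
    # All object creation goes through the table, so swapping the
    # entry upgrades (or downgrades) the system while it runs.
    return dispatch[kind]()

dispatch["text"] = FancyTextNode   # dynamic upgrade
# dispatch["text"] = TextNode      # modular downgrade, if needed
```

Because every extension is a subclass of the class it replaces, the rest of the system can keep treating the created objects uniformly.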
Security in such a system becomes an issue, unfortunately. The system should employ whatever mechanisms exist or can be constructed to help prevent Trojan horse attacks, back door attacks, and other security breaches in an open source system.
For example, Christine Peterson described Apache’s process as having something like 45 reviewers, 3 of whom recommend the inclusion and none of whom object, before new code is added to the system.
Email is fundamentally the right interface for such a system, because information comes to you, the information is organized into threads, and you can edit/reply from within the same application you use to view the information. (Email’s major weaknesses stem from the fact that even though the interface is appropriate, the underlying data structures are not. But the hierarchy inherent in the specified system will rectify those flaws, eliminating the redundancy inherent in email responses and allowing for thread-summaries.)
However, the factor that makes email central to one’s daily activities is the wide variety of inputs you receive. Email is inherently “project neutral”. You get email on every topic under the sun, including personal and professional interests. It represents “one stop shopping” for your information needs. (The Web, on the other hand, provides nicer storefronts, but you have to go visit the store to find what you want.)
In a sense, the “firewall” requirement is in itself a partition. In an organization like the Stanford Research Institute (SRI), for example, there is a need to create a project-specific partition, so that only other members of the project team ever see that information. On the other hand, there is a wide area of shared expertise (computer expertise, management expertise, administrative expertise) that can be shared among all members of the organization.
In a similar vein, the “email interface model” implies the need for multiple partitions — one for each project or interest area, for example. The degree to which you “cross-fertilize” between the partitions should then be up to you.
With a partitionable system, the client connects to multiple projects. Each of those projects has information that is “protected”, in the sense that it never goes beyond the members of the project. Other information is “public”, and sharable. For example, a software design team has design specs that are obviously protected. But at the same time they may acquire or generate information about general principles for solving problems. That information can go public. (At some point, they may also wish to expose their architecture as an example of those principles.)
These additional requirements begin to move the system towards a DKR.
With respect to security, there is also the issue of “firewall” capability. The DKR must allow professionals in many different organizations to contribute and share knowledge. That knowledge may largely be in the form of published papers and the means to locate and access them, but it represents a high degree of inter-organizational cooperation, at the level of the individual professional.
The DKR will also be handy for individual projects, though. The mechanisms will support collaborative designs and “on demand” education as to corporate procedures, for example. But that information must remain *inside* the firewall, inaccessible to competitors.
In the ideal scenario, it will also be possible to “publish” information stored in the inner repository at strategic times, rather like publishing a technical paper that gives the design of the system. But until then, the firewall must remain intact.
It must be possible to add *relations* as first-class objects in the system, where a “first class” object is one that can be observed and manipulated like any other node in the system. Such relations will make it possible to link nodes in interesting ways, make it possible to add new connections over time, and allow for some forms of automated reasoning (or at least, “reasoning assistance”). In conjunction with categories, the addition of relations is likely to be the most important step in converting the system into a true DKR, of the kind that Jack Park describes.
Relations should work much like categories, with the capacity for adding and changing relations, while keeping an audit trail of the modifications. However, while categories apply to single nodes, relations relate pairs of nodes, at a minimum, or possibly multiple nodes at one time. As Dewain Delp observed, the repository of information nodes in the system is more properly described as a “network”, rather than a “hierarchy”, because a single node may be simultaneously part of several document structures. (Even though any one view will most probably (and valuably) be hierarchical.) With the advent of relations, the system is immediately and obviously a true network.
An equivalence relation, for example, could be used to relate a new question to an existing thread. The sender of the question, now alerted to the equivalence relation, can then readily inspect the answers that have previously been given. (There are likely to be several answers in the system. By giving high marks to the answer(s) that were found to be most helpful, the best answers “float to the top” in an organic, evolving FAQ.)
Another useful relation is “implies”. The ability to add implications to the system lets the user create connections between nodes. The inverse of that relation (implied by) allows a user to trace back the raison d’etre for a given node. In a software design network, implications allow functional requirements to be linked to each other and to design requirements, which can then be linked to specifications, and from there to code. If “not” is introduced at any stage (as in, “we can’t do this”) then the proposal under attack can be traced back to its roots — with alternatives available at each stage. If the design proposal is invalid, for example, perhaps one of the design alternatives that has been discussed will be usable. Failing that, the functional requirement can be reconsidered, etc.
The ability to add relations will provide the kind of “alignment” that Rod Welch talks about — the ability to thread document sections together so that, for example, a section of a contract can be threaded back to the email discussions that prompted it, making it easier to ensure that the final contract accurately reflects the desired goals.
Although users can add relations at will, it makes sense for the system to come with a “starter set” of standard relations that everyone uses by convention. That initial set can come from the fields of logic, mathematics, and abstract reasoning:
For example, a design idea might be “implied by” multiple functional requirements. The fact that a single idea solves multiple problems makes it an elegant solution. In that sense, the relation is an “and” of the requirements. But at the same time, dropping all but one of those requirements would still imply the design idea. In that sense, the relation would be an “or”. In general, (a*b) => c and (a+b) => c do not imply any relation between a and b, but only between the a,b pair (in some particular configuration) and c. However, even if and/or are *not* relations in their own right, some mechanism for specifying such connections may still be useful — even if it is only an attribute of a relationship.
- equivalent to
- iff (double implication)
- set/subset, union/intersection
- Abstract Reasoning
- analogous to, similar to, like
- instance of, special case of
- abstraction of, general case of
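As a concrete sketch of relations as first-class objects, consider the following; the Relation class, the relate/trace_back helpers, and the node names are all invented, and only the “implies” relation from the starter set above is exercised.

```python
class Relation:
    # A relation is itself a node-like object: observable,
    # manipulable, and carrying its own audit trail.
    def __init__(self, kind, *nodes):
        self.kind = kind
        self.nodes = nodes   # ordered: (source, ..., target)
        self.audit = []

relations = []

def relate(kind, *nodes, who="?"):
    r = Relation(kind, *nodes)
    r.audit.append((who, f"created {kind}"))
    relations.append(r)
    return r

def trace_back(node, kind="implies"):
    # Follow the inverse relation ("implied by") to recover the
    # raison d'etre for a given node.
    return [r.nodes[0] for r in relations
            if r.kind == kind and r.nodes[-1] == node]

relate("implies", "functional-req-1", "design-req-3", who="alice")
relate("implies", "design-req-3", "spec-7", who="alice")
```

Tracing back from a specification node recovers the design requirement that implies it, and tracing again recovers the functional requirement — the “alignment” chain described above.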
Eventually, the system must become a *teaching* tool. It must follow the concept of “Education on Demand”, intelligently supplying the user with the information needed, and educating that user, whatever their initial background. (Within reasonable limits.)
This is an outline of functional operations for the system:
- Add, change, delete, move nodes
- Copy nodes
- node alone, current-version subtree, whole subtree
- Link (indirect, “soft” links, and direct “hard” links)
- Automatic versioning
- Automatic attribution
- Increment version number for future edits
- Deliver to group via server
- Automatically diff against last visited version of each node
- Highlight diffs
- “Go to next unread” feature
- New node: author=currUser, lastEditor=currUser
- Copy node: all lists unchanged
- Modify node: lastEditor=currUser
- Copy text: new node created, all lists copied
- Paste text: Author-list + Contributor list from the clipboard node merge into the contributor list for the current node.
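The copy/paste attribution rules above can be sketched as follows; the dictionary field names are invented stand-ins for whatever the real node structure turns out to be.

```python
def new_node(user):
    # New node: author = current user, last editor = current user.
    return {"authors": [user], "contributors": [], "last_editor": user}

def copy_text(node):
    # Copy text: a new (clipboard) node is created, all lists copied.
    return {k: list(v) if isinstance(v, list) else v
            for k, v in node.items()}

def paste_text(target, clipboard, user):
    # Paste text: author list + contributor list from the clipboard
    # node merge into the contributor list of the current node.
    merged = (set(target["contributors"])
              | set(clipboard["authors"])
              | set(clipboard["contributors"]))
    target["contributors"] = sorted(merged - set(target["authors"]))
    target["last_editor"] = user
    return target

alice_node = new_node("alice")
clipboard = copy_text(alice_node)
bob_node = paste_text(new_node("bob"), clipboard, "bob")
```

Subtracting the target’s own authors keeps an author from also appearing as a contributor to their own node; that detail is an assumption, not stated in the rules above.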
This is a highly imperfect solution to the attribution problem. Copying a single word out of a very large node stands to create a highly-inaccurate contributor list. On the other hand, creating a new node and pasting all of the text from the old one would drop attributions altogether.
A better alternative, if feasible, would be attributions attached to every phrase in the node. That requirement creates a third category of containment for the node, consisting of the text that makes it up. When originally created, there would only be one long phrase, and its author. When others make changes, the text would be broken up into segments. That’s the same architecture most editors use internally, anyway, but it would require storing a lot more information, putting it together to display the node, and taking it into account when copying and pasting.
Since it is possible to receive comments on nodes that have been deleted from the current (not yet published) draft, it may be necessary for the system to maintain “phantom” nodes that can be used to collect such comments.
Phantom nodes are invisible until a comment is received. Theoretically, they can disappear once the current version is posted (since future comments will be on that version). In practice, though, the comments themselves are always stored under the original node.
As an alternative, the system could operate like the CRIT system, where such comments go to the end of the document.
Each node needs a trash bin that collects nodes which are deleted from under it. Trash bins are never emptied, except by explicit action requiring multiple explicit confirmations.
The comment/version-publishing system means that locks are not required for single-author documents. But for multiple authors to collaborate, it must be possible to prevent editing conflicts.
One possibility is to implement distributed locks. The major issue there is handling communication outages.
An equally viable possibility may be to allow simultaneous edits and detect their occurrence when a new version is received. The competing versions can then be displayed side-by-side along with user-selectable merge options.
Detection of competing versions may require something other than simple version numbers. Or perhaps the versionID would consist of the version number combined with the ID of the current writer.
TrashBin nodes must maintain a pointer to the phantom that is left behind after deletes, or to the location at which to create such a phantom.
A monotonically increasing version#, combined with the ID of the most recent editor *should* be sufficient to identify changes in a node. It may be that a timestamp works better, though. Even a timestamp will need to be combined with the most-recent-editor-ID, though, to identify competing versions created by different authors. (Although matching a millisecond-timestamp is improbable, it is not impossible.)
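A versionID of that shape, and the competing-version check it enables, can be sketched in a few lines; the tuple layout and function names are illustrative assumptions.

```python
def next_version(version_id, editor):
    # versionID = (monotonic counter, ID of the editor who produced it).
    number, _ = version_id
    return (number + 1, editor)

def competing(a, b):
    # The same counter value reached by two different editors means
    # both edited the same parent version concurrently.
    return a[0] == b[0] and a[1] != b[1]

base = (3, "alice")
edit1 = next_version(base, "bob")    # bob edits version 3
edit2 = next_version(base, "carol")  # carol edits version 3, offline
```

When the two edits meet at a server, the check flags them as competing versions to be displayed side by side for a user-selected merge.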
The version number for a node would be the maximum of the version numbers for all content subnodes. When edited, the new version number would either be a timestamp or the parent version# + 1. (All parents would then be adjusted.)
TimeStamps probably make more sense, since edits using the algorithm above will make the version# “jump around” quite a bit.
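The max-over-subnodes rule, and the version “jump” it produces on edit, can be illustrated with a small sketch (class and function names invented):

```python
class VNode:
    def __init__(self, version=0, children=None):
        self.version = version
        self.children = children or []

    def doc_version(self):
        # A node's version is the maximum of the version numbers
        # of the node itself and all of its content subnodes.
        return max([self.version]
                   + [c.doc_version() for c in self.children])

def edit(root, node):
    # An edit pushes the edited node past the current document
    # version; the max rule then "adjusts" every ancestor for free.
    node.version = root.doc_version() + 1
```

Editing a leaf that was at version 2 in a document at version 5 jumps the leaf straight to 6, which is exactly the jumping-around behavior that makes timestamps look more attractive.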
In either case, a more “user-friendly” version number is needed for the document as a whole.
The system needs to account for a “hierarchy of versions” of at least two levels. The first level is for a set of documents. (All documents for version 2.0 of the system, for example.) The second level is the version of the document itself. (Version 3 of the 2.0 Requirements Doc).
Each node in the system should be able to track the following information:
- Unique identifier (so links always work)
- List of Content subelements
- List of Comment subelements
- List of elements comprising the content-text, with attributions (if implemented)
- Version-identifier for the node
- Version-identifier for the content sublist
- Author list
- Contributor list
- Reviewer list
- Last editor
- Evaluation list
- Evaluation summary
- Distributed Lock (unless Competing Versions is chosen)
- Trash Bin
- isPhantom identifier
- pointer to own phantom
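The field list above might be rendered as a record like the following; every field name here is an invented stand-in for the item it shadows, not a schema from this document.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NodeRecord:
    node_id: str                       # unique identifier, so links always work
    content: list = field(default_factory=list)        # content subelements
    comments: list = field(default_factory=list)       # comment subelements
    text_segments: list = field(default_factory=list)  # per-phrase attributions, if implemented
    version_id: Optional[str] = None           # version-identifier for the node
    content_version_id: Optional[str] = None   # version-identifier for the content sublist
    authors: list = field(default_factory=list)
    contributors: list = field(default_factory=list)
    reviewers: list = field(default_factory=list)
    last_editor: Optional[str] = None
    evaluations: list = field(default_factory=list)
    evaluation_summary: Optional[dict] = None
    lock_holder: Optional[str] = None  # distributed lock, unless Competing Versions is chosen
    trash_bin: list = field(default_factory=list)
    is_phantom: bool = False
    phantom: Optional["NodeRecord"] = None  # pointer to own phantom
```

Using default factories keeps each node’s lists independent, which matters for the attribution rules: copying a node must copy its lists, not share them.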
After the initial version of the data/object structures has been nailed down, they need to be run through a series of use case scenarios, with the data manipulations defined for each. The goal of the process will be to refine the data structures, looking for weaknesses or necessary reorganizations. [Note: Some scenarios may need to be tabled as unsuitable for the initial system.]
- Software Development discussions and documents
- IBIS-style discussions
- Development-Document Cycles:
    FunctionalSpecs <--> DesignSpecs ----+
        ^  |               ^  |          |
        |  |               |  |          |
        |  +--> UserDocs   |  +--> ApiDocs <-+
        |  |               |          ^      |
        |  |               |          |      |
        |  +--> FAQ        +--> Code <-------+
        |        ^              ^  |
        |        |              |  |
        |        +--- Tests <---+--+
        |                 ^
        +-- Bug Reports --+
        +-- User Suggestions
- Strategic Decisions (combinations)
- multiple possibilities identified (~= alternatives)
- proposals consist of combinations of possibilities
- one proposal selected
- Build a Product/Feature comparison chart
- Feature rows, product columns
- Adding a column suggests a new feature, then track the “back-gathering” of data on previous products.
- Build a Requirements/Technology evaluation chart
- Requirements rows, Technology columns
- Must-Have, Nice-To-Have, Optional categories
- Y/N cells &/or evaluation cells
- Adding a new technology suggests additional “must have” feature
- Project Management
- implementation checklists & sign-ups
(track who signed up to do what)
- Multiple Software Versions
- Series of tutorial examples
- Code branches with common elements
- IBIS-style discussions
- Add questions, posit alternatives, evaluate & decide
- Subsume propositions as alternatives under a question
- Mathematical/Logical Reasoning
- Assertions, Negations
- Implications (a-> b)
- Inferences (a->b + b->c + a => c)
- Comment on a node
- Comment on a structure
- Suggest a text revision
- Suggest a new node
- Suggest a new structure
- Accept/reject a suggestion
- Edit a copy
- integrate comments
- fold in and remove, or
- reject and remove
- New version replaces old, and links to it.
- Competing Versions
- become “siblings”? — a parent needed
- Use IBIS model for resolution?
- Evaluations, leading to eventual selection
A hierarchical system is created from only two relationships.
If progress is made in the pursuit of abstract knowledge representations, the whole of the collaborative document system may well migrate into a knowledge representation built from those two relationships. The document management system would then be a subset of a much larger knowledge management repository. A start on this capability is made when the system’s “Relational” requirement is satisfied.
One wonders what such a system will look like after it begins to be extended with thousands of additional relationships.
It boggles the mind.