“Certainly, the deontic applications strike a chord with me. Pulling exemplar cases from the literature and encoding them as one and two layered defeasible theories sounds exciting. The implementation seems to follow the constructive proof of the theory itself for the back end prolog side. A web accessible front end seems technologically feasible using the abstract window toolkit and its extensions. All things considered, my stopped up head and scratchy memory could use a reveiw of the defeasible system itself and possible modifications to the graphic representation introduced by theory concerns.”
This quote, pulled from an email, may or may not be meaningless to the reader. Some of the words used have very specialized meanings, built out of shared experiences of printed text, blackboards and compilers. First, let us revisit the notion of defeasibility and how that changes formal logic. Next, we can exercise our minds with some logic graphs to get some experience with visualization of conceptual abstractions. The body of the work can then be approached. How can we construct a graph visualization of a defeasible theory that is formally provable to be sound and complete with respect to its syntactic component?
Such a body of defeasible logic and its unique graph visualization will have some distinguishable computational organs. During implementation those organs might be constituted with various tools and materials at different times. This next iteration of implementation will speak to an interest in interactive publications, while making more concrete the melding of theory and visualization. As we have seen in the past few years, working with the visualization of a theory allows one to find the tough cases and provides a springboard for a revision of the theory itself. The tools we construct should reflect the mutability of the theory as it is worked out and provide support for a collaborative setting of facts and rules.
The most severe criticism of the defeasible logic group’s work may be one from a common perspective, simply, “what good is it?” If we are to avoid being guilty of the charge `non-monotonic logic hackers` there must be a natural application of the defeasible system with deep connections to the human experience. As one ponders this question, the ethical relevance eventually becomes apparent, “what _good_ is it?” As one might suspect, the application of defeasible logic in the realm of deontics introduces a folding of defeasibility back into the computation of defeasibility. A review of some ethics literature will turn up a wide variety of relevant examples for tough cases.
Many different ethical theories have been independently introduced, criticized, and either reintroduced in an adapted form or discarded. Rather than give a historical account of those disputes or attempt to classify, categorize, or otherwise label those ethical systems and their proponents, I propose to shift to the meta-ethical level and consider issues common to any and all ethical theories.
Ethical inconsistencies provide the anomalies that give rise to moral change. These moral dilemmas can be encoded in a two-layer defeasible theory, where the first layer is a set of facts and rules pertaining to the situation itself and the second layer is a set of facts and rules pertaining to the precedence relation between the rules in the first. By taking as examples the ethical paradoxes in the literature, we can then focus on the encoding of the examples into defeasible forms.
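The two-layer encoding just described might be sketched as follows. This is a hedged illustration, not the actual encoding used at the AI Center: the rule names (`r1`, `r2`), the dilemma (a promise versus a duty to aid), and the tuple representation of the superiority relation are all invented for the example.

```python
def neg(lit):
    """Complement of a literal, with "~" marking negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

# Layer one: facts and rules about the situation itself.
layer_one = {
    "facts": {"promised_to_meet", "friend_in_danger"},
    "rules": {
        "r1": (["promised_to_meet"], "keep_appointment"),
        "r2": (["friend_in_danger"], "~keep_appointment"),
    },
}

# Layer two: facts and rules about precedence between layer-one rules.
layer_two = {
    "facts": {"emergency"},
    # When an emergency holds, the duty to aid (r2) outranks the promise (r1).
    "rules": [(["emergency"], ("r2", ">", "r1"))],
}

def precedence(l2):
    """Derive the superiority relation from the second layer."""
    return {head for body, head in l2["rules"] if set(body) <= l2["facts"]}

def resolve(l1, supers):
    """Between two fired rules with conflicting heads, the superior one wins."""
    fired = {n for n, (body, _) in l1["rules"].items() if set(body) <= l1["facts"]}
    conclusions = set()
    for n in fired:
        head = l1["rules"][n][1]
        rivals = {m for m in fired if l1["rules"][m][1] == neg(head)}
        if all((n, ">", m) in supers for m in rivals):
            conclusions.add(head)
    return conclusions

print(resolve(layer_one, precedence(layer_two)))
```

Note how the second layer does the ethical work: without the `emergency` fact, neither rule outranks the other and the dilemma yields no conclusion at all, which is exactly the anomaly that motivates revising the theory.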
It has been refreshing to participate in defeasible logic graph discussions and possible implementation approaches at the AI Center. Having been involved in the analysis and design of tools for visualizing defeasible theories, the move to a particular context borrows from the existing work. The peculiarity of defeasible ethics theories introduces a complexity not required in the elaboration of the basic theory.
Since reading Nute’s paper on Defeasible Logic Graphs (DLG), I have been working on a web accessible version of the ‘Logic Graph Server’. The emphasis on collaboration and participatory development of these theories lends itself to network environments. Although this system is primarily intended to address concerns that must be faced by all moral agents, or discussed by any ethical theorist, it may be useful to deploy as a research and development tool in a variety of application domains.
Conversation Topics :
– non-monotonicity as rule defeasibility
Non-monotonicity can arise in a variety of ways and historically has been a motivating factor for probabilistic reasoning, fuzzy logic and quantum logics. Defeasible reasoning takes a unique approach, making the inference engine itself non-monotonic. One can denote conditionals according to the type of logical relation. If the relation is `strict’ the antecedent entails the conclusion universally. Most relations are actually of a defeasible sort, which admit of anomalies or other special cases. Some relations are statements about anomalies and the particular circumstances that invalidate other relations. A defeasible theory is primarily comprised of a set of literal facts and a set of rules. One can assign truth values to the atomics of the theory and check for derivable consequences.
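The distinction between strict and defeasible relations can be made concrete with a small sketch. This is an illustrative simplification, not Nute's proof theory: here a defeasible rule is blocked whenever the complement of its head is already derived, which handles the classic Tweety case but glosses over the full treatment of competing defeasible rules and defeaters.

```python
def neg(lit):
    """Complement of a literal, with "~" marking negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def derive(facts, strict, defeasible):
    """Forward-chain to a fixpoint: strict rules fire unconditionally;
    a defeasible rule fires only if the complement of its head
    has not been derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in strict:
            if set(body) <= known and head not in known:
                known.add(head)
                changed = True
        for body, head in defeasible:
            if set(body) <= known and head not in known and neg(head) not in known:
                known.add(head)
                changed = True
    return known

# Birds normally fly (defeasible); penguins strictly do not.
facts = {"penguin"}
strict = [(["penguin"], "bird"), (["penguin"], "~flies")]
defeasible = [(["bird"], "flies")]
print(sorted(derive(facts, strict, defeasible)))
```

Adding the fact `penguin` strictly yields `~flies`, which blocks the defeasible inference to `flies`: the classic non-monotonic retraction. A real engine must also be order-independent, which this naive loop only approximates by firing all strict rules before the defeasible ones on each pass.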
– computation complexity and pragmatic completeness
There is an old adage in software engineering, “make it work first; if it proves to be useful, make it fast.” Often a programming effort will be subjected to external constraints due to project scheduling or real resource limitations. These constraints, while factoring into how the task is accomplished, may not be in harmony with the theoretical intent. Since the work at the AI Center is basic research, we should be more concerned with theoretical accuracy than with pragmatic constraints.
– defeasible logic graph specifics
A well-founded proof theory for defeasible logics provides a basis for defining some graph representation of a theory of facts and rules. A marking of the graph using colored labels and the propagation of the colors is well suited for visualizing defeasible theories. One of the motivations for the work is hypothetical deliberation or reasoning with partial information. We need a software tool for constructing defeasible logic graphs, and working with derivable consequences of those graphs. Furthermore, we need a tool that facilitates collaborative efforts.
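The marking-and-propagation idea can be sketched in a few lines. This is one plausible convention, not necessarily the scheme of Nute's DLG paper: nodes stand for literals, a directed edge runs from antecedent to conclusion, and a "green" (supported) label floods outward from the nodes the user asserts, which is just the breadth-first propagation a hypothetical-deliberation tool would redraw after each change.

```python
from collections import deque

def propagate(edges, seeds):
    """Flood a color outward from the seed nodes along directed edges,
    returning the color marking for every reached node."""
    color = {n: "green" for n in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        for succ in edges.get(node, []):
            if succ not in color:
                color[succ] = "green"
                queue.append(succ)
    return color

# penguin -> bird -> flies, marked from the asserted fact "penguin".
edges = {"penguin": ["bird"], "bird": ["flies"]}
print(propagate(edges, ["penguin"]))
```

A fuller marking would use a second color for defeated nodes and would require all of a rule's antecedents to be marked before its conclusion is; the single-antecedent flood above is only the visual core of the idea.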
– limitations and simplifying restrictions
Since the graph visualization tool for defeasible theories must run on current computer technology, the logic graph needs a planar rendering. One cannot always find a satisfactory planar rendering of a logic graph in two dimensions, and overlapping lines confuse the eye. Even a simple theory can produce a bewildering `boxes and lines’ representation. If the graphs are restricted to their propositional forms, e.g., the model abstracts away the internal structure of the propositions, one can view the shape of the theory graph as a whole. The current work on d-graph does not deny the value of rendering the guts of a sentence, a la Sowa’s Conceptual Graphs; rather its focus is at a particular level of abstraction.
The definition of strict rules indicates that inconsistency between strict rules is not allowed. If there is a derivable conflict using strict rules only, the defeasible theory is not well formed. The sorts of relations that admit of conflict, anomalies and exceptions are encoded as defeasible relations. However, if one creates a theory and finds that it admits a strict inconsistency, then one has to review what was considered the set of ubiquitous logical relations. In a similar fashion, circular reasoning is often considered a fallacy. The d-graph tool treats consistency of the strict rules and acyclic graph structure as two properties of well formed defeasible theories.
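The two well-formedness properties just named can be checked mechanically. The sketch below is an assumed representation, not the d-graph tool's actual code: it closes the strict rules over the facts and looks for a complementary pair, then runs a depth-first search for cycles in the antecedent-to-conclusion graph.

```python
def neg(lit):
    """Complement of a literal, with "~" marking negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def strict_closure(facts, strict):
    """Everything derivable from the facts using strict rules alone."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in strict:
            if set(body) <= known and head not in known:
                known.add(head)
                changed = True
    return known

def strictly_consistent(facts, strict):
    """Well-formedness check 1: no complementary pair is strictly derivable."""
    known = strict_closure(facts, strict)
    return not any(neg(lit) in known for lit in known)

def acyclic(rules):
    """Well-formedness check 2: no cycle in the antecedent->conclusion graph."""
    edges = {}
    for body, head in rules:
        for b in body:
            edges.setdefault(b, []).append(head)
    WHITE, GRAY, BLACK = 0, 1, 2
    state = {}
    def visit(n):
        state[n] = GRAY          # on the current search path
        for m in edges.get(n, []):
            if state.get(m, WHITE) == GRAY:
                return False     # back edge: a cycle
            if state.get(m, WHITE) == WHITE and not visit(m):
                return False
        state[n] = BLACK         # fully explored
        return True
    return all(visit(n) for n in edges if state.get(n, WHITE) == WHITE)

strict = [(["penguin"], "bird"), (["penguin"], "~flies")]
print(strictly_consistent({"penguin"}, strict), acyclic(strict))
```

A theory failing the first check signals, as above, that one's supposedly ubiquitous relations conflict; a theory failing the second contains the circular reasoning the tool rules out.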
More later :
– semantic and philosophic referents
– deontic application