
Thinking on the Web

Chapter 2
Gödel: What is Decidable?

 

In the last chapter, we suggested that small wireless devices connected to an intelligent Web could produce ubiquitous computing and empower the Information Revolution. The Semantic Web architecture is designed to add some of that intelligence to the Web through machine-processing capabilities. For the Semantic Web to succeed, the expressive power of the logic added to its markup languages must be balanced against the resulting computational complexity. Therefore, it is important to evaluate both the expressive characteristics of logic languages and their inherent limitations.
 
In fact, some options for Web logic include problems that may not be solvable through rational argument. In particular, the work of Kurt Gödel identified the concept of undecidability: for some statements, the truth or falsity cannot be determined.
 
In this chapter, we review some of the basic principles of logic and relate them to their suitability for Web applications. First, we review the basic concept of logic and discuss various characteristics and limitations of logic analysis. We introduce First-Order Logic (FOL) and its subsets, such as Description Logic and Horn Logic, which offer attractive characteristics for Web applications. These languages set the parameters for how expressive Web markup languages can become.
 
Second, we investigate how logic conflicts and limitations in computer programming and Artificial Intelligence (AI) have been handled in closed environments to date. We consider how errors in logic contribute to significant ‘bugs’ that lead to crashed computer programs.
 
Third, we review how Web architecture is used to partition the delivery of business logic from the user interface. The Web architecture keeps the logic restricted to executable code residing on the server and delivers user-interface presentations residing within the markup languages traveling over the Web. The Semantic Web changes this partitioned arrangement.
 
Finally, we discuss the implications of using logic in markup languages on the Semantic Web.
 
Philosophical and Mathematical Logic

Aristotle described man as a “rational animal” and established the study of logic beginning with the process of codifying syllogisms. A syllogism is a kind of argument in which there are three propositions, two of them premises, one a conclusion.
 
Aristotle was the first to create a logic system in which predicates and subjects could be represented by letters (variables), allowing one to substitute arbitrary subjects and predicates into an argument form.
 
For example: If A is predicated of all B, and B is predicated of all C, then A is predicated of all C. By predicated, Aristotle means that A belongs to every B, or all B's are A's. For instance, we can substitute subjects and predicates into this syllogism to get: If all humans (B's) are mortal (A), and all Greeks (C's) are humans (B's), then all Greeks (C's) are mortal (A).
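 
The validity of this form can be checked mechanically. As a minimal sketch in Python (the populations below are invented for illustration), the syllogism amounts to set inclusion:
 
# Aristotle's syllogism as set inclusion (illustrative populations).
mortals = {"Socrates", "Plato", "Fido"}        # the A's
humans  = {"Socrates", "Plato"}                # the B's: all humans are mortal
greeks  = {"Socrates", "Plato"}                # the C's: all Greeks are human

# If all B's are A's, and all C's are B's, then all C's are A's.
assert humans <= mortals and greeks <= humans
assert greeks <= mortals   # the conclusion holds necessarily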
 
Today, Aristotle's system is mostly seen as being of historical value.
 
Subsequently, other philosophers and mathematicians such as Leibniz developed methods to represent logic and reasoning as a series of mechanical and symbolic tasks. They were followed by logicians who developed mechanical rules to carry out logical deductions.
 
In logic, as in grammar, a subject is what we make an assertion about, and a predicate is what we assert about the subject. Today, logic is considered to be the primary reasoning mechanism for solving problems. Logic allows us to set up systems and criteria for distinguishing acceptable arguments from unacceptable arguments. The structure of arguments is based upon formal relations between the newly produced assertions and the previous ones. Through argument we can then express inferences: the processes by which new assertions are produced from existing ones.
 
When these relationships are independent of the assertions themselves, we call them 'formal.' Through these processes, logic provides a mechanism for the extension of knowledge. As a result, logic provides prescriptions for reasoning by machines as well as by people.
 
Traditionally, logic has been studied as a branch of philosophy. However, since the mid-1800s logic has been commonly studied as a branch of mathematics and, more recently, as a branch of computer science. The scope of logic can therefore be extended to include reasoning using probability and causality. In addition, logic includes the study of structures of fallacious arguments and paradoxes.
 
By logic then, we mean the study and application of the principles of reasoning and the relationships between statements, concepts, or propositions. Logic incorporates both the methods of reasoning and the validity of the results. In common language, we refer to logic in several ways: logic can be considered as a framework or system of reasoning, a particular mode or process of reasoning, or the guiding principles of a field or discipline. We also use the term "logical" to describe a reasoned approach to solving a problem or reaching a decision, as opposed to the alternative "emotional" approach of simply reacting to a situation.
 
As logic has developed, its scope has splintered into many distinctive branches. These distinctions serve to formalize different forms of logic as a science. The distinctions between the various branches of logic determine their limitations and expressive capabilities, which are central issues in designing the Semantic Web languages. The following sections identify some of the more important distinctions.
Deductive and Inductive Reasoning
 
Originally, logic consisted only of deductive reasoning, which is concerned with premises and the deductions that follow from them. However, it is important to note that inductive reasoning – the study of deriving a reliable generalization from observations – has also been included in the study of logic. Correspondingly, we must distinguish between deductive validity and inductive validity.
 
The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics. An inference is deductively valid if and only if there is no possible situation in which all the premises are true and the conclusion false.
 
Inductive validity on the other hand requires us to define a reliable generalization of some set of observations. The task of providing this definition may be approached in various ways, some of which use mathematical models of probability.
Paradox
 
A paradox is an apparently true statement that seems to lead to a contradiction or to a situation that defies intuition. Typically, either the statements in question do not really imply the contradiction; or the puzzling result is not really a contradiction; or the premises themselves are not all really true (or, cannot all be true together). The recognition of ambiguities, equivocations, and unstated assumptions underlying known paradoxes has often led to significant advances in science, philosophy and mathematics.
 
Formal and Informal Logic
 
Formal logic (sometimes called ‘symbolic logic’) attempts to capture the nature of logical truth and inference in formal systems. This consists of a formal language, a set of rules of derivation (often called ‘rules of inference’), and sometimes a set of axioms. The formal language consists of a set of discrete symbols, a syntax (i.e., the rules for the construction of a statement), and a semantics (i.e., the relationship between symbols or groups of symbols and their meanings). Expressions in formal logic are often called ‘formulas.’
 
The rules of derivation and potential axioms then operate with the language to specify a set of theorems, which are formulas that are either basic axioms or true statements that are derivable using the axioms and rules of derivation. In the case of formal logic systems, the theorems are often interpretable as expressing logical truths (called tautologies).
 
Formal logic encompasses a wide variety of logic systems. For instance, propositional logic and predicate logic are kinds of formal logic, as well as temporal logic, modal logic, Hoare logic and the calculus of constructions. Higher-order logics are logical systems based on a hierarchy of types.
 
For example, Hoare logic is a formal system developed by the British computer scientist C. A. R. Hoare. Its purpose is to provide a set of logical rules by which to reason about the correctness of computer programs with the rigor of mathematical logic.
 
The central feature of Hoare logic is the Hoare triple. A triple describes how the execution of a piece of code changes the state of the computation.
 
A Hoare triple is of the form:
{P} C {Q}
where P and Q are assertions and C is a command.
 
P is called the precondition and Q the post-condition. Assertions are formulas in predicate logic. An interpretation of such a triple is: Whenever P holds of the state before the execution of C, then Q will hold afterwards.
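 
Although a Hoare triple belongs to a proof system rather than to running code, its reading can be sketched with runtime assertions. The following Python fragment (our own illustration, not part of Hoare's formal system) checks the triple {x >= 0} y := x + 1 {y >= 1}:
 
def command(x):
    # P (precondition): x >= 0
    assert x >= 0, "precondition P failed"
    y = x + 1                                 # C: the command
    # Q (post-condition): y >= 1
    assert y >= 1, "postcondition Q failed"
    return y

command(5)   # P holds before C, so Q is guaranteed to hold afterwards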
 
Alternatively, informal logic is the study of logic that is used in natural language arguments. Informal logic is complicated by the fact that it may be very hard to extract the formal logical structure embedded in an argument. Informal logic is also more difficult because the semantics of natural language assertions is much more complicated than the semantics of formal logical systems.
Mathematical Logic
 
Mathematical logic really refers to two distinct areas of research: the first is the application of the techniques of formal logic to mathematics and mathematical reasoning, and the second, the application of mathematical techniques to the representation and analysis of formal logic.
 
The boldest attempt to apply logic to mathematics was pioneered by the philosopher-logician Bertrand Russell. His idea was that mathematical theories were logical tautologies, and his program was to show this by means of a reduction of mathematics to logic. The various attempts to carry this out met with a series of failures, such as Russell's Paradox, and the defeat of Hilbert's Program by Gödel's incompleteness theorems (which we shall describe shortly).
 
Russell's paradox represents either of two interrelated logical contradictions. The first is a contradiction arising in the logic of sets or classes. Some sets can be members of themselves, while others cannot. The set of all sets is itself a set, and so it seems to be a member of itself. The null or empty set, however, is not a member of itself. Now suppose that we form the set of all sets that, like the null set, are not members of themselves. The paradox arises from asking whether this set is a member of itself. It is, if and only if, it is not!
 
The second form is a contradiction involving properties. Some properties seem to apply to themselves, while others do not. The property of being a property is itself a property, while the property of being a table is not, itself, a table.
 
Hilbert's Program was developed in the early 1920s by the German mathematician David Hilbert. It called for a formalization of all of mathematics in axiomatic form, together with a proof that this axiomatization of mathematics is consistent. The consistency proof itself was to be carried out using only what Hilbert called 'finitary' methods. The special epistemological character of this type of reasoning would yield the required justification of classical mathematics. Hilbert's Program was also a great influence on Kurt Gödel, whose work on the incompleteness theorems was motivated by it. In spite of the fact that Gödel's work is generally taken to prove that Hilbert's Program cannot be carried out, Hilbert's Program has nevertheless continued to be influential in the philosophy of mathematics, and work on relativized Hilbert Programs has been central to the development of proof theory.
 
Both the statement of Hilbert's Program and its refutation by Gödel depended upon their work establishing the second area of mathematical logic, the application of mathematics to logic in the form of proof theory. Despite the negative nature of Gödel's incompleteness theorems, a result in model theory can be understood as showing how close logic came to this goal: every rigorously defined mathematical theory can be exactly captured by a first-order logic (FOL) theory. Thus it is apparent that the two areas of mathematical logic are complementary.
 
Logic is extensively applied in the fields of artificial intelligence and computer science. These fields provide a rich source of problems in formal logic. In the 1950s and 60s, researchers predicted that when human knowledge could be expressed using logic with mathematical notation, it would be possible to create a machine that reasons, or produces artificial intelligence. This turned out to be more difficult than expected because of the complexity of human reasoning.
 
In logic programming, a program consists of a set of axioms and rules. In symbolic and mathematical logic, proofs by humans can be computer-assisted. Using automated theorem proving, machines can find and check proofs, as well as work with proofs too lengthy to be written out by hand. However, the computational complexity of carrying out automated theorem proving is a serious limitation. It is a limitation that, as we will find in subsequent chapters, significantly impacts the Semantic Web.
 
Decidability
 
In the 1930s, the mathematical logician Kurt Gödel shook the world of mathematics when he established that, in certain important mathematical domains, there are problems that cannot be solved, or propositions that cannot be proved or disproved, and that are therefore undecidable. Whether a certain statement of first-order logic is provable as a theorem is one example; whether a polynomial equation in several variables has integer solutions is another. While humans solve problems in these domains all the time, it is not certain that arbitrary problems in these domains can always be solved. This is relevant for artificial intelligence, since it is important to establish the boundaries for a problem's solution.
Kurt Gödel
 
Kurt Gödel (shown in Figure 2-1) was born April 28, 1906 in Brünn, Austria-Hungary (now Brno, Czech Republic). He had rheumatic fever when he was six years old, and his health became a chronic concern over his lifetime. Gödel entered the University of Vienna in 1923, where he was influenced by the lectures of Philipp Furtwängler. Furtwängler was an outstanding mathematician and teacher, but in addition he was paralyzed from the neck down, which forced him to lecture from a wheelchair with an assistant to write on the board. This made a big impression on Gödel, who was very conscious of his own health.
 
As an undergraduate, Gödel studied Russell's book Introduction to Mathematical Philosophy. He completed his doctoral dissertation under Hans Hahn in 1929. His thesis proved the completeness of the first-order functional calculus. He subsequently became a member of the faculty of the University of Vienna, where he belonged to the school of logical positivism until 1938.
 
Gödel is best known for his 1931 proof of the incompleteness theorems. He proved fundamental results about axiomatic systems, showing that in any consistent axiomatic mathematical system rich enough to express arithmetic, there are propositions that cannot be proved or disproved within the axioms of the system. In particular, the consistency of the axioms cannot be proved within the system.
 
This ended a hundred years of attempts to establish axioms and axiom-based logic systems that would put the whole of mathematics on a firm foundation. One major attempt had been Bertrand Russell's Principia Mathematica (1910–13). Another was Hilbert's formalism, which was dealt a severe blow by Gödel's results. The theorem did not destroy the fundamental idea of formalism, but it did demonstrate that any system would have to be more comprehensive than that envisaged by Hilbert.
 
One consequence of Gödel's results is that a computer can never be programmed to answer all mathematical questions.
 
In 1935, Gödel proved important results on the consistency of the axiom of choice with the other axioms of set theory. He visited Göttingen in the summer of 1938, lecturing there on his set theory research and returned to Vienna to marry Adele Porkert in 1938.
 
After settling in the United States, Gödel again produced work of the greatest importance. His “Consistency of the axiom of choice and of the generalized continuum-hypothesis with the axioms of set theory” (1940) is a classic of modern mathematics. In this he proved that if an axiomatic system of set theory of the type proposed by Russell and Whitehead in Principia Mathematica is consistent, then it will remain so when the axiom of choice and the generalized continuum-hypothesis are added to the system. This did not prove that these axioms were independent of the other axioms of set theory, but when this was finally established by Cohen in 1963 he used the ideas of Gödel. Gödel held a chair at Princeton from 1953 until his death in 1978.
Propositional Logic
 
Propositional logic (or propositional calculus) is a branch of symbolic logic dealing with propositions as units and with the combinations and connectives that relate them. It can be defined as the branch of symbolic logic that deals with the relationships formed between propositions by connectives such as those shown below:
Symbols    Statement                                      Connectives
p ∨ q      "either p is true, or q is true, or both"      disjunction
p · q      "both p and q are true"                        conjunction
p ⊃ q      "if p is true, then q is true"                 implication
p ≡ q      "p and q are either both true or both false"   equivalence
A ‘truth table’ is a complete list of the possible truth values of a statement. We use "T" to mean "true", and "F" to mean "false" (or "1" and "0" respectively).
 
Truth tables are adequate to test validity, tautology, contradiction, contingency, consistency, and equivalence. This is important because truth tables are a mechanical application of the rules.
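 
As a small illustration of this mechanical character, the following Python sketch enumerates the truth table for implication, using the classical definition that "if p then q" is false only when p is true and q is false:
 
from itertools import product

def t(v):                                     # render a truth value as T or F
    return "T" if v else "F"

print("p  q  |  p -> q")
for p, q in product([True, False], repeat=2):
    implies = (not p) or q                    # classical material implication
    print(f"{t(p)}  {t(q)}  |    {t(implies)}")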
 
Propositional calculus is a formal system for deduction whose atomic formulas are propositional variables. In propositional calculus, the language consists of propositional variables (or placeholders) and sentential operators (or connectives). A well-formed formula is any atomic formula or a formula built up from sentential operators.
First-Order Logic (FOL)
 
First-Order Logic (FOL), also known as first-order predicate calculus, is a systematic approach to logic based on the formulation of quantifiable statements such as "there exists an x such that..." or "for any x, it is the case that...". A first-order theory is a logical system consisting of a set of axioms together with all statements derivable from them using the rules of first-order logic.
 
FOL is distinguished from higher-order logic in that the variables x in FOL statements range over individuals and not over properties. Even with this restriction, first-order logic is capable of formalizing all of set theory and most of mathematics. Its restriction to quantification over individuals makes it difficult to use for purposes such as topology, but it is the classical logical theory underlying mathematics.
 
The branch of mathematics called Model Theory is primarily concerned with connections between first-order properties and first-order structures. First-order languages are by their nature very restrictive, and as a result many questions cannot be discussed using them. On the other hand, first-order logics have precise grammars.
 
Predicate calculus is quantificational and based on atomic formulas that are propositional functions. In predicate calculus, as in grammar, a subject is what we make an assertion about, and a predicate is what we assert about the subject.
 
Automated Inference for FOL
 
Automated inference using first-order logic is harder than using Propositional Logic because variables can take on potentially an infinite number of possible values from their domain. Hence there are potentially an infinite number of ways to apply the Universal-Elimination rule of inference.
 
Gödel's Completeness Theorem implies that FOL is semi-decidable: if a sentence is entailed by a set of axioms, there is a procedure that will eventually determine this. However, if the sentence is not entailed, then there is no guarantee that a procedure will ever determine this; the procedure may never halt in this case. As a result, the truth table method of inference is not complete for FOL, because the truth table size may be infinite.
 
Natural deduction is complete for FOL, but is not practical for automated inference because the ‘branching factor’ in the search process is too large. This is the result of the necessity to try every inference rule in every possible way using the set of known sentences.
 
Let us consider the rule of inference known as Modus Ponens (MP). Modus Ponens is a rule of inference pertaining to the IF/THEN operator. Modus Ponens states that if the antecedent of a conditional is true, then the consequent must also be true:                                                                                                                
(MP) Given the statements p and if p then q, infer q.
 
The Generalized Modus Ponens (GMP) is not complete for FOL. However, Generalized Modus Ponens is complete for Knowledge Bases (KBs) containing only Horn clauses. Another very important logic, which we shall discuss in detail in Chapter 8, is Horn logic.
A Horn clause is a sentence of the form:
 
(Ax) (P1(x) ^ P2(x) ^ ... ^ Pn(x)) => Q(x)
 
where there are 0 or more Pi's, and the Pi's and Q are positive (i.e., un-negated) literals.
 
Horn clauses represent a subset of the set of sentences representable in FOL. For example: P(a) v Q(a) is a sentence in FOL, but is not a Horn clause.
 
Natural deduction using GMP is complete for KBs containing only Horn clauses. Proofs start with the given axioms/premises in KB, deriving new sentences using GMP until the goal/query sentence is derived. This defines a forward chaining inference procedure because it moves "forward" from the KB to the goal.
 
For example: KB = All cats like fish, cats eat everything they like, and Molly is a cat. In first-order logic then,
 
(1) (Ax) cat(x) => likes(x, Fish)
(2) (Ax)(Ay) (cat(x) ^ likes(x,y)) => eats(x,y)
(3) cat(Molly)
 
Query: Does Molly eat fish?
 
Proof:
 
Use GMP with (1) and (3) to derive: (4) likes(Molly, Fish)
Use GMP with (3), (4) and (2) to derive: eats(Molly, Fish)
 
Conclusion: Yes, Molly eats fish.
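 
A minimal forward-chaining sketch of this proof in Python (the encoding of the facts and the two rules is our own illustration):
 
facts = {("cat", "Molly")}                    # (3) cat(Molly)

def forward_chain(facts):
    # Repeatedly apply rules (1) and (2) via GMP until no new facts appear.
    while True:
        new = set()
        for f in facts:
            if f[0] == "cat":                                   # rule (1)
                new.add(("likes", f[1], "Fish"))
            if f[0] == "likes" and ("cat", f[1]) in facts:      # rule (2)
                new.add(("eats", f[1], f[2]))
        if new <= facts:                      # fixpoint reached: stop
            return facts
        facts = facts | new

print(("eats", "Molly", "Fish") in forward_chain(facts))   # True: Molly eats fish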
 
Description Logic
 
Description Logics (DLs) allow specifying a terminological hierarchy using a restricted set of first-order formulas. DLs have nice computational properties (they are often decidable and tractable), but the inference services are restricted to classification and subsumption. That means that, given formulas describing classes, the classifier associated with a certain description logic will place them inside a hierarchy. Given an instance description, the classifier will determine the most specific classes to which the instance belongs.
 
From a modeling point of view, Description Logics correspond to Predicate Logic statements with no more than three variables, suggesting that the modeling language is syntactically restricted.
 
Description Logic is one possible basis for inference engines on the Semantic Web. Another possibility is based on Horn logic, which is another subset of first-order predicate logic (see Figure 2-2).
 
In addition, Description Logic and rule systems (e.g., Horn Logic) are somewhat orthogonal, which means that they overlap, but neither subsumes the other. In other words, there are capabilities in Horn Logic that are complementary to those available in Description Logic.
 
Both Description Logic and Horn Logic are critical branches of logic that highlight the essential limitations and expressive powers that are central issues in designing the Semantic Web languages. We will discuss them further in Chapter 8.
 
Using Full First-Order Logic (FFOL) for specifying axioms requires a full-fledged automated theorem prover. However, FOL is only semi-decidable, and inferencing becomes computationally intractable for large amounts of data and axioms.

This means that in an environment like the Web, FFOL programs will not scale to handle huge amounts of knowledge. Besides, full first-order theorem proving would require maintaining consistency throughout the Web, which is impossible.
 
Figure 2-2. The Description Logic fragment of FOL.
 
FOL includes expressiveness beyond the overlap, notably: positive disjunctions; existentials; and entailment of non-ground and non-atomic conclusions.
 
Horn FOL is another fragment of FOL. Horn Logic Program (LP) is a slight weakening of Horn FOL. "Weakening" here means that the conclusions from a given set of Horn premises that are entailed according to the Horn LP formalism are a subset of the conclusions entailed (from that same set of premises) according to the Horn FOL formalism. However, the set of ground atomic conclusions is the same in the Horn LP as in the Horn FOL. For most practical purposes (e.g., relational database query answering), Horn LP is thus essentially similar in its power to the Horn FOL. Horn LP is a fragment of both FOL and nonmonotonic LP.
 
This discussion may seem esoteric, but it is precisely these types of issues that will decide both the design of the Semantic Web and its likelihood of success.
Higher Order Logic
 
Higher-Order Logics (HOLs) provide greater expressive power than FOL, but they are even more difficult computationally. For example, in HOLs one can have true statements that are not provable (see the discussion of Gödel's Incompleteness Theorem). There are two aspects to this issue: higher-order syntax and higher-order semantics. If a higher-order semantics is not needed (and this is often the case), a second-order logic can often be translated into a first-order logic.
 
In first-order semantics, variables can only range over domains of individuals or over the names of predicates and functions, but not over sets as such. In higher-order syntax, variables are allowed to appear in places where normally predicate or function symbols appear.
 
Predicate calculus is the primary example of a logic where syntax and semantics are both first-order. There are logics that have higher-order syntax but first-order semantics. Under a higher-order semantics, an equation between predicate (or function) symbols is true if and only if those symbols denote the same relation (or function). Logics with both higher-order syntax and higher-order semantics can express statements about other statements, such as statements expressing trust.
 
To state it another way, higher-order logic is distinguished from first-order logic in several ways. The first is the scope of quantifiers; in first-order logic, it is forbidden to quantify over predicates. The second way in which higher-order logic differs from first-order logic is in the constructions that are allowed in the underlying type theory. A higher-order predicate is a predicate that takes one or more other predicates as arguments. In general, a higher-order predicate of order n takes one or more (n − 1)th-order predicates as arguments (where n > 1).
 
Recursion Theory
 
Recursion is the process a procedure goes through when one of the steps of the procedure involves rerunning a complete set of identical steps. In mathematics and computer science, recursion is a particular way of specifying a class of objects with the help of a reference to other objects of the class: a recursive definition defines objects in terms of the already defined objects of the class. A recursive process is one in which objects are defined in terms of other objects of the same type. Using a recurrence relation, an entire class of objects can be built up from a few initial values and a small number of rules.
 
The Fibonacci numbers (i.e., the infinite sequence of numbers starting 0, 1, 1, 2, 3, 5, 8, 13, …, where the next number in the sequence is defined as the sum of the previous two numbers) are a commonly known recursively defined sequence.
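 
A direct Python transcription of this definition (a sketch; practical implementations would cache intermediate results to avoid recomputation):
 
def fib(n):
    # Base cases: the initial values fib(0) = 0 and fib(1) = 1.
    if n < 2:
        return n
    # Recursion step: the sum of the previous two numbers.
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]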
 
The following is a recursive definition of a person's ancestors: One's parents are one's ancestors (base case). The parents of any ancestor are also ancestors of the person under consideration (recursion step).
 
Therefore, your ancestors include: your parents, and your parents' parents (grandparents), and your grandparents' parents, and everyone else you get by successively adding ancestors.
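 
The same base-case/recursion-step pattern can be sketched in Python over a small, hypothetical parents table:
 
parents = {"you": {"Mom", "Dad"}, "Mom": {"Grandma"}, "Dad": {"Grandpa"}}

def ancestors(person):
    # Base case: one's parents are one's ancestors.
    result = set(parents.get(person, set()))
    # Recursion step: the parents of any ancestor are also ancestors.
    for p in set(result):
        result |= ancestors(p)   # terminates when someone has no listed parents
    return result

print(ancestors("you"))   # Mom, Dad, Grandma, Grandpa (set order may vary)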
 
It is convenient to think that a recursive definition defines objects in terms of "previously defined" members of the class.
 
While recursive definitions are useful and widespread in mathematics, care must be taken to avoid self-recursion, in which an object is defined in terms of itself, leading to an infinite nesting (see Figure 1-1: “The Print Gallery” by M.C. Escher is a visual illustration of self-recursion).
 
Knowledge Representation
 
Let’s define what we mean by the fundamental terms “data,” “information,” “knowledge,” and "understanding." An item of data is a fundamental element of an application. Data can be represented by populations and labels. Data is raw; it exists and has no significance beyond its existence. It can exist in any form, usable or not. It does not have meaning by itself.
 
Information on the other hand is an explicit association between items of data. Associations represent a function relating one set of things to another set of things. Information can be considered to be data that has been given meaning by way of relational connections. This "meaning" can be useful, but does not have to be. A relational database creates information from the data stored within it.
 
Knowledge can be considered to be an appropriate collection of information, such that it is useful. Knowledge-based systems contain knowledge as well as information and data.
 
A rule is an explicit functional association from a set of information things to a specific information thing. As a result, a rule is knowledge.
 
We can construct information from data and knowledge from information and finally produce understanding from knowledge. Understanding lies at the highest level. Understanding is an interpolative and probabilistic process that is cognitive and analytical. It is the process by which one can take existing knowledge and synthesize new knowledge. One who has understanding can pursue useful actions because he can synthesize new knowledge or information from what is previously known (and understood). Understanding can build upon currently held information, knowledge, and understanding itself.
 
AI systems possess understanding in the sense that they are able to synthesize new knowledge from previously stored information and knowledge. An important element of AI is the principle that intelligent behavior can be achieved through processing of symbolic structures representing increments of knowledge. This has produced knowledge-representation languages that allow the representation and manipulation of knowledge to deduce new facts from the existing knowledge. A knowledge-representation language must have a well-defined syntax and semantics while supporting inference.
 
Three techniques have been popular to express knowledge representation and inference: (1) Logic-based approaches, (2) Rule-based systems, and (3) Frames and semantic networks.
 
Logic-based approaches use logical formulas to represent complex relationships. They require a well-defined syntax, semantics, and proof theory. The formal power of a logical theorem proof can be applied to knowledge to derive new knowledge. Logic is used as the formalism for programming languages and databases. It can also be used as a formalism to implement knowledge methodology. Any formalism that admits a declarative semantics and can be interpreted both as a programming language and a database language is a knowledge language. However, the approach is inflexible and requires great precision in stating the logical relationships. In some cases, common sense inferences and conclusions cannot be derived, and the approach may be inefficient, especially when dealing with issues that result in large combinations of objects or concepts.
 
Rule-based approaches are more flexible and allow the representation of knowledge using sets of IF-THEN or other conditional rules. This approach is more procedural and less formal in its logic. As a result, reasoning can be controlled through a forward or backward chaining interpreter.
 
Frames and semantic networks capture declarative information about related objects and concepts where there is a clear class hierarchy and where the principle of inheritance can be used to infer the characteristics of members of a subclass. The two forms of reasoning in this technique are matching (i.e., identification of objects having common properties), and property inheritance in which properties are inferred for a subclass. Frames and semantic networks are limited to representation and inference of relatively simple systems.
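 
Property inheritance of this kind maps naturally onto class hierarchies; here is a minimal Python sketch (the frames and the property are invented for illustration):
 
class Bird:                  # frame for the class "bird"
    can_fly = True           # default property, inherited by subclasses

class Penguin(Bird):         # subclass frame
    can_fly = False          # overrides the inherited default

class Sparrow(Bird):         # subclass frame with no overrides
    pass                     # inherits can_fly = True from Bird

print(Sparrow.can_fly, Penguin.can_fly)   # True False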
 
In each of these approaches, the knowledge-representation component (i.e., problem-specific rules and facts) is separate from the problem-solving and inference procedures.
 
For the Semantic Web to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning. AI researchers have studied such systems and produced today’s Knowledge Representation (KR). KR is currently in a state comparable to that of hypertext before the advent of the Web. Knowledge representation contains the seeds of important applications, but to fully realize its potential, it must be linked into a comprehensive global system.
 
Computational Logic
 
Programming a computer involves creating a sequence of logical instructions that the computer will use to perform a wide variety of tasks. While it is possible to create programs directly in machine language, it is uncommon for programmers to work at this level because of the tedious, low-level nature of the instructions. Instead, programs are written in a simple text file using a high-level programming language, which is later compiled into executable code.
 
The ‘logic model’ for programming is a basic element that communicates the logic behind a program. A logic model can be a graphic representation of a program illustrating the logical relationships between program elements and the flow of calculation, data manipulation or decisions as the program executes its steps.
 
Logic models typically use diagrams, flow sheets, or some other type of visual schematic to convey relationships between programmatic inputs, processes, and outcomes. Logic models attempt to show the links in a chain of reasoning about relationships to the desired goal. The desired goal is usually shown as the last link in the model.
 
A logic program may consist of a set of axioms and a goal statement. The logic form can be a set of ‘IF-THEN’ statements. The rules of inference are applied to determine whether the axioms are sufficient to ensure the truth of the goal statement. The execution of a logic program corresponds to the construction of a proof of the goal statement from the axioms.
 
In the logic programming model the programmer is responsible for specifying the basic logical relationships and does not specify the manner in which the inference rules are applied. Thus
 
Logic + Control = Algorithms
 
The operational semantics of logic programs correspond to logical inference. The declarative semantics of logic programs are derived from the term model. The denotational semantics of logic programs are defined in terms of a function that assigns meaning to the program. There is a close relation between the axiomatic semantics of imperative programs and logic programs.
 
The control portion of the equation is provided by an inference engine whose role is to derive theorems based on the set of axioms provided by the programmer. The inference engine uses the operations of resolution and unification to construct proofs. Faulty logic models occur when the essential problem has not been clearly stated or defined.
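 
The unification operation just mentioned, finding a variable substitution that makes two terms identical, can be sketched compactly. The following minimal Python version treats strings beginning with "?" as variables and omits the occurs-check for brevity:
 
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def unify(a, b, subst=None):
    # Return a substitution (dict) unifying terms a and b, or None on failure.
    subst = {} if subst is None else subst
    if a == b:
        return subst
    if is_var(a):
        return unify(subst[a], b, subst) if a in subst else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):                # unify argument by argument
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unify likes(?x, Fish) with likes(Molly, ?y):
print(unify(("likes", "?x", "Fish"), ("likes", "Molly", "?y")))
# {'?x': 'Molly', '?y': 'Fish'}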
 
Program developers work carefully to construct logic models to avoid logic conflicts, recursive loops, and paradoxes within their computer programs. As a result, programming logic should lead to executable code without paradox or conflict, if it is flawlessly produced. Nevertheless, we know that 'bugs' or programming errors do occur, some of which are directly or indirectly a result of logic conflicts.
 
As programs have grown in size from thousands of lines of code to millions of lines, the problems of 'bugs' and logic conflicts have also grown. Today, programs such as operating systems can have over 25 million lines of code and are considered to have hundreds of thousands of 'bugs,' most of which are seldom encountered during routine program usage.
 
Confining logic issues to beta testing on local servers allows programmers reasonable control of conflict resolution. Now consider applying many lines of application logic to the Semantic Web, where it may access many information nodes. The magnitude of the potential conflicts could be daunting.
 
Artificial Intelligence
 
John McCarthy coined the term 'Artificial Intelligence' (AI) in the mid-1950s, and by the late 1950s there were many researchers in AI working on programming computers. Eventually, AI expanded into such fields as philosophy, psychology, and biology.
 
AI is sometimes described in two ways: strong AI and weak AI. Strong AI asserts that computers can be made to think on a level equal to humans.  Weak AI simply holds that some ‘thinking-like’ features can be added to computers to make them more useful tools. Examples of Weak AI abound: expert systems, drive-by-wire cars, smart browsers, and speech recognition software. These weak AI components may, when combined, begin to approach the expectations of strong AI.
 
AI includes the study of computers that can perform cognitive tasks including: understanding natural language statements, recognizing visual patterns or scenes, diagnosing diseases or illnesses, solving mathematical problems, performing financial analyses, learning new procedures for problem solving, and playing complex games, like chess.
 
We will provide a more detailed discussion on Artificial Intelligence on the Web and what is meant by machine intelligence in Chapter 3.
 
Web Architecture and Business Logic
 
So far we have explored the basic elements, characteristics, and limitations of logic and suggested that errors in logic contribute to many significant ‘bugs’ that lead to crashed computer programs.
 
Next we will review how Web architecture is used to partition the delivery of business logic from the user interface. The Web architecture keeps the logic restricted to executable code residing on the server and delivering user interface presentations residing within the markup languages traveling along the Internet. This simple arrangement of segregating the complexity of logic to the executable programs residing on servers has minimized processing difficulties over the Web itself.
 
Today, markup languages are not equipped with logic connectives. As a result, all complex logic and detailed calculations must be carried out by specially compiled programs residing on Web servers, where they are accessed by server-page frameworks. The result is that highly efficient application programs on the server must communicate very inefficiently with other proprietary applications using XML in simple ASCII text. In addition, the difficulty of interoperable programming greatly inhibits the automation of Web Services.
 
Browsers such as Internet Explorer and Netscape Navigator display Web pages written in HyperText Markup Language (HTML). An HTML page is a simple text file recognized by the browser, and it can call embedded script programming. In addition, HTML can include directives that call server pages with access to proprietary compiled programming. As a result, simple-text HTML is empowered with important capabilities to call complex business-logic programming residing on servers, both in the framework of Microsoft's .NET and in Sun's J2EE. These frameworks support Web Services and form a vital part of today's Web.
 
When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn't provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program provides functions such as transaction processing, database connectivity, and messaging.
 
Business logic is concerned with: how we model real-world business objects, such as accounts, loans, and travel; how these objects are stored; how these objects interact with each other (e.g., a bank account must have an owner, and an account holder's portfolio is the sum of his accounts); and who can access and update these objects.
 
As an example, consider an online store that provides real-time pricing and availability information. The site will provide a form for you to choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways.
 
The Web server delegates the response generation to a script; however, the business logic for the pricing lookup is provided by an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server's lookup service. The script can then use the service's result when it generates its HTML response.
 
The application server serves the business logic for looking up a product's pricing information. That functionality doesn't say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server's lookup service, the service simply looks up the information and returns it to the client.
 
By separating the pricing logic from the HTML response-generating code, the pricing logic becomes reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checking out a customer.
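 
This separation can be sketched in a few lines of Python (the catalog, service, and client names are invented for illustration):
 
PRICES = {"widget": 9.99, "gadget": 24.50}    # hypothetical catalog

def lookup_price(product):
    # Business logic only: says nothing about how the result is displayed.
    return {"product": product, "price": PRICES.get(product)}

def render_html(result):                      # Web client presentation
    return f"<p>{result['product']}: ${result['price']:.2f}</p>"

def render_receipt(result):                   # cash-register presentation
    return f"{result['product']:<10} {result['price']:>6.2f}"

info = lookup_price("widget")                 # one service, two clients
print(render_html(info))
print(render_receipt(info))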
 
More recently, eXtensible Markup Language (XML) Web Services deliver an XML payload to a Web server. The Web server can then process the data and respond much as application servers have in the past.
 
XML has become the standard for data transfer of all types of applications. XML provides a data model that is supported by most data-handling tools and vendors. Structuring data as XML allows hierarchical, graph-based representations of the data to be presented to tools, which opens up a host of possibilities.
The task of creating and deploying Web Services automatically requires interoperable standards. The most advanced vision for the next generation of Web Services is the development of Web Services over Semantic Web Architecture.
The Semantic Web
 
Now let’s consider using logic within markup languages on the Semantic Web. This means empowering the Web’s expressive capability, but at the expense of reducing Web performance.
 
The current Web is built on HTML, which describes how information is to be displayed and laid out on a Web page for humans to read. In addition, HTML is not capable of being directly exploited by information retrieval techniques. XML may have enabled the exchange of data across the Web, but it says nothing about the meaning of that data. In effect, the Web has developed as a medium for humans without a focus on data that could be processed automatically.
 
As a result, computers are unable to automatically process the meaning of Web content. For machines to perform useful automatic reasoning tasks on these documents, the language machines use must go beyond the basic semantics of XML Schema. They will require an ontology language, logic connectives, and rule systems.
 
By introducing these elements the Semantic Web is intended to be a paradigm shift just as powerful as the original Web. The Semantic Web will bring meaning to the content of Web pages, where software agents roaming from page-to-page can carry out automated tasks.
 
The Semantic Web will be constructed over the Resource Description Framework (RDF) and Web Ontology Language (OWL). In addition, it will implement logic inference and rule systems. These languages are being developed by the W3C. Data can be defined and linked using RDF and OWL so that there is more effective discovery, automation, integration, and reuse across different applications.
 
These languages are conceptually richer than HTML and allow representation of the meaning and structure of content (interrelationships between concepts). This makes Web content understandable by software agents, opening the way to a whole new generation of technologies for information processing, retrieval, and analysis.
 
If a developer publishes data in XML on the Web, it doesn’t require much more effort to take the extra step and publish the data in RDF. By creating ontologies to describe data, intelligent applications won’t have to spend time translating various XML schemas.
 
An ontology defines the terms used to describe and represent an area of knowledge. Although XML Schema is sufficient for exchanging data between parties who have agreed to the definitions beforehand, its lack of semantics prevents machines from reliably performing this task with new XML vocabularies.
 
In addition, the ontology vocabulary of RDF and RDF Schema (RDFS) is very limited (see Chapter 5). RDF is roughly limited to binary ground predicates, and RDF Schema is roughly limited to a subclass hierarchy and a property hierarchy with domain and range definitions.
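 
The flavor of these binary ground predicates, and of the limited subclass inference RDFS supports, can be sketched in Python (an illustration of the data model, not an RDF toolkit):
 
triples = {                                   # (subject, predicate, object)
    ("Cat", "rdfs:subClassOf", "Mammal"),
    ("Mammal", "rdfs:subClassOf", "Animal"),
    ("Molly", "rdf:type", "Cat"),
}

def infer_types(triples):
    # RDFS-style rule: an instance of a class is an instance of its superclasses.
    changed = True
    while changed:
        changed = False
        for s, p, o in list(triples):
            if p != "rdf:type":
                continue
            for c, p2, sup in list(triples):
                if p2 == "rdfs:subClassOf" and c == o:
                    t = (s, "rdf:type", sup)
                    if t not in triples:
                        triples.add(t)
                        changed = True
    return triples

print(("Molly", "rdf:type", "Animal") in infer_types(triples))   # True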
 
Adding an ontology language will permit the development of explicit, formal conceptualizations of models (see Chapter 6). The main requirements of an ontology language include: a well-defined syntax, a formal semantics, convenience of expression, an efficient reasoning support system, and sufficient expressive power.
 
Since the W3C established that the Semantic Web would require much more expressive power than RDF and RDF Schema offer, it defined the Web Ontology Language (called OWL). The layered architecture of the Semantic Web would suggest that one way to develop the necessary ontology language is to extend RDF Schema, using the RDF meaning of classes and properties and adding primitives to support richer expressiveness. However, simply extending RDF Schema would fail to achieve the best combination of expressive power and efficient reasoning. In the layered architecture of the Semantic Web, full downward compatibility and reuse of software is achieved only with OWL Full (see Chapter 6), but at the expense of computational intractability.
 
RDF and OWL (DL and Lite, see Chapter 6) are specializations of predicate logic. They provide a syntax that fits well with Web languages. They also define reasonable subsets of logic that offer a trade-off between expressive power and computational complexity.
 
Semantic Web research has developed from the traditions of Artificial Intelligence (AI) and ontology languages. Currently, the most important ontology languages on the Web are XML, XML Schema, RDF, RDF Schema, and OWL.
 
Agents are pieces of software that work autonomously and proactively. In most cases an agent will simply collect and organize information. Agents on the Semantic Web will receive tasks to perform and will seek information from Web resources, while communicating with other Web agents, in order to fulfill their tasks. Semantic Web agents will utilize metadata, ontologies, and logic to carry out their tasks.
 
In a closed environment, Semantic Web specifications have already been used to accomplish many tasks, such as data interoperability for business-to-business (B2B) transactions. Many companies have expended resources to translate their internal data syntax for their partners. As the world migrates towards RDF and ontologies, interoperability will become more flexible to new demands.
 
An inference is a process of using rules to manipulate knowledge to produce new knowledge. Adding logic to the Web means using rules to make inferences and choosing a course of action. The logic must be powerful enough to describe complex properties of objects, but not so powerful that agents can be tricked by a paradox. A combination of mathematical and engineering issues complicates this task.
 
We will provide a more detailed presentation on paradoxes on the Web and what is solvable on the Web in the next few chapters.
 
Inference Engines for the Semantic Web
 
Inference engines process the knowledge available in the Semantic Web by deducing new knowledge from already specified knowledge.
 
Higher-Order Logic (HOL) based inference engines have the greatest expressive power among all known logics, permitting, for example, the characterization of transitive closure. However, higher-order logics do not have nice computational properties: there are true statements that are unprovable (Gödel's Incompleteness Theorem).
 
Full First-Order Logic (FFOL) based inference engines for specifying axioms require a full-fledged automated theorem prover. FOL is only semi-decidable, and inferencing is computationally intractable for large amounts of data and axioms.
 
This means that in an environment like the Web, HOL and FFOL programs would not scale up to handle huge amounts of knowledge. Besides, full first-order theorem proving would mean maintaining consistency throughout the Web, which is impossible.
 
Predicate calculus is the primary example of a logic where syntax and semantics are both first-order. From a modeling point of view, Description Logics correspond to Predicate Logic statements with no more than three variables, suggesting a syntactically restricted modeling language that is a good candidate for Web logic.
 
Other possibilities for inference engines for the Semantic Web are languages based on Horn logic, which is another fragment of first-order predicate logic (see Figure 2-2).
 
In addition, Description Logic and rule systems (e.g., Horn Logic) have different capabilities. Both are critical branches of logic that highlight the essential limitations and expressive powers that are central issues in designing the Semantic Web languages. We will discuss them further in Chapters 6, 7, 8, and 9.
Conclusion
 
For the Semantic Web to provide machine-processing capabilities, the expressive power of the logic added to its markup languages must be balanced against the resulting computational complexity of reasoning. In this chapter, we examined both the expressive characteristics of logic languages and their inherent limitations. Fragments of First-Order Logic (FOL), such as Description Logic and Horn Logic, offer attractive characteristics for Web applications and set the parameters for how expressive Web markup languages can become.
 
We also reviewed the concept of Artificial Intelligence (AI) and how logic is applied in computer programming. After exploring the basic elements, characteristics, and limitations of logic, and suggesting that errors in logic contribute to many significant 'bugs' that lead to crashed computer programs, we reviewed how Web architecture is used to partition the delivery of business logic from the user interface. The Web architecture keeps the logic restricted to executable code residing on the server while delivering user-interface presentations residing within the markup languages traveling over the Web.
 
Finally, we discussed the implications of using logic within markup languages on the Web through the development of the Semantic Web.
 
Our conclusions from this chapter include: Logic is the foundation of knowledge representation, which can be applied to AI in general and to the World Wide Web specifically. Logic can provide a high-level language for expressing knowledge and has high expressive power. Logic has a well-understood formal semantics for assigning unambiguous meaning to logic statements. In addition, we saw that proof systems exist that can automatically derive statements syntactically from premises. Predicate logic uniquely offers a sound and complete proof system, while higher-order logics do not. By tracking the proof used to reach a consequence, the logic can provide explanations for its answers.
 
Currently, complex logic and detailed calculations must be carried out by specially compiled programs residing on Web servers, where they are accessed by server-page frameworks. The result is that highly efficient application programs on the server must communicate very inefficiently with other proprietary applications using XML in simple ASCII text. In addition, this difficulty with interoperable programs greatly inhibits the automation of Web Services. The Semantic Web offers a way to use logic in the form of Description Logic or Horn Logic on the Web.
Exercises
 
2-1. Explain how logic for complex business calculations is currently carried out through .NET and J2EE application servers.
2-2. Explain the difference between FOL and HOL.
2-3. Why is it necessary to consider less powerful expressive languages for the Semantic Web?
2-4. Why is undecidability a concern on the Web?
 
The Web site http://escherdroste.math.leidenuniv.nl/ offers a visualization of the mathematical structure behind Escher's Print Gallery, using the Droste effect. This mathematical structure answers some questions about Escher's picture, such as: "what's in the blurry white hole in the middle?" The project is an initiative of Hendrik Lenstra of the Universiteit Leiden and the University of California at Berkeley. Bart de Smit of the Universiteit Leiden runs the project.
 
 Interlude #2: Truth and Beauty
 
As John passed with a sour look on his face, Mary looked up from her text book and asked, “Didn’t you enjoy the soccer game?”
 
“How can you even ask that when we lost?” asked John gloomily.
 
“I think the team performed beautifully, despite the score” said Mary.
 
This instantly frustrated John, and he said, "Do you know, Mary, that sometimes I find it disarming the way you express objects in terms of beauty? I find that simply accepting something on the basis of its beauty can lead to false conclusions."
 
Mary reflected upon this before offering a gambit of her own, "Well John, do you know that sometimes I find that relying on objective truth alone can lead to unattractive conclusions."
 
John became flustered and reflected his dismay by demanding, "Give me an example."
 
Without hesitation, Mary said, "Perhaps you will recall that in the late 1920's, mathematicians were quite certain that every well-posed mathematical question had to have a definite answer ─ either true or false. For example, suppose they claimed that every even number greater than two was the sum of two prime numbers," referring to Goldbach's Conjecture, which she had just been studying in her text book. Mary continued, "Mathematicians would seek the truth or falsity of the claim by examining a chain of logical reasoning that would lead in a finite number of steps to prove whether the claim was either true or false."
 
"So mathematicians thought at the time," said John. "Even today most people still do."
 
"Indeed," said Mary. "But in 1931, logician Kurt Gödel proved that the mathematicians were wrong. He showed that every sufficiently expressive logical system must contain at least one statement that can be neither proved nor disproved following the logical rules of that system. Gödel proved that not every mathematical question has to have a yes or no answer. Even a simple question about numbers may be undecidable. In fact, Gödel proved that there exist questions that while being undecidable by the rules of logical system can be seen to be actually true if we jump outside that system. But they cannot be proven to be true.”
 
“Thank you for that clear explanation,” said John. “But isn't such a fact simply a translation into mathematical terms of the famous Liar's Paradox: ‘This statement is false’?”
 
“Well, I think it's a little more complicated than that,” said Mary. “But Gödel did identify the problem of self-reference that occurs in the Liar's Paradox. Nevertheless, Gödel's theorem contradicted the thinking of most of the great mathematicians of his time. The result is that one cannot be as certain as the mathematicians had desired. See what I mean? Gödel may have found an important truth, but it was – well, to be frank – rather disappointingly unattractive,” concluded Mary.
 
"On the contrary,” countered John, “from my perspective it was the beauty of the well-posed mathematical question offered by the mathematicians that was proven to be false.
 
Mary replied, “I’ll have to think about that.”