H. Peter Alesso
An excerpt from the non-fiction technology book Thinking on the Web.

Chapter 2 Gödel: What is Decidable?

In the last chapter, we suggested that small wireless devices connected to an intelligent Web could produce ubiquitous computing and empower the Information Revolution. The Semantic Web architecture is designed to add some intelligence to the Web through machine processing capabilities. For the Semantic Web to succeed, the expressive power of the logic added to its markup languages must be balanced against the resulting computational complexity. Therefore, it is important to evaluate both the expressive characteristics of logic languages and their inherent limitations. In fact, some options for Web logic include solutions that may not be solvable through rational argument. In particular, the work of Kurt Gödel identified the concept of undecidability, whereby the truth or falsity of some statements may not be determined. In this chapter, we review some of the basic principles of logic and relate them to their suitability for Web applications. First, we review the basic concepts of logic and discuss various characteristics and limitations of logic analysis. We introduce First-Order Logic (FOL) and its subsets, such as Description Logic and Horn Logic, which offer attractive characteristics for Web applications. These languages set the parameters for how expressive Web markup languages can become. Second, we investigate how logic conflicts and limitations in computer programming and Artificial Intelligence (AI) have been handled in closed environments to date. We consider how errors in logic contribute to significant ‘bugs’ that lead to crashed computer programs. Third, we review how Web architecture is used to partition the delivery of business logic from the user interface.
The Web architecture keeps the logic restricted to executable code residing on the server and delivers user-interface presentations residing within the markup languages traveling over the Web. The Semantic Web changes this partitioned arrangement. Finally, we discuss the implications of using logic in markup languages on the Semantic Web.

Philosophical and Mathematical Logic

Aristotle described man as a “rational animal” and established the study of logic beginning with the process of codifying syllogisms. A syllogism is a kind of argument in which there are three propositions, two of them premises, one a conclusion. Aristotle was the first to create a logic system that allowed predicates and subjects to be represented by letters or symbols. His logic form allowed one to substitute letters (variables) for subjects and predicates. For example: If A is predicated of all B, and B is predicated of all C, then A is predicated of all C. By predicated, Aristotle means B belongs to A, or all B's are A's. For instance, we can substitute subjects and predicates into this syllogism to get: If all humans (B's) are mortal (A), and all Greeks (C's) are humans (B's), then all Greeks (C's) are mortal (A). Today, Aristotle's system is mostly seen as being of historical value. Subsequently, other philosophers and mathematicians, such as Leibniz, developed methods to represent logic and reasoning as a series of mechanical and symbolic tasks. They were followed by logicians who developed mechanical rules to carry out logical deductions. In logic, as in grammar, a subject is what we make an assertion about, and a predicate is what we assert about the subject. Today, logic is considered to be the primary reasoning mechanism for solving problems. Logic allows us to set up systems and criteria for distinguishing acceptable arguments from unacceptable arguments. The structure of arguments is based upon formal relations between the newly produced assertions and the previous ones.
Through argument we can then express inferences. Inferences are the processes by which new assertions may be produced from existing ones. When these relationships are independent of the assertions themselves, we call them ‘formal’. Through these processes, logic provides a mechanism for the extension of knowledge. As a result, logic provides prescriptions for reasoning by machines as well as by people. Traditionally, logic has been studied as a branch of philosophy. However, since the mid-1800s logic has been commonly studied as a branch of mathematics, and more recently as a branch of computer science. The scope of logic can therefore be extended to include reasoning using probability and causality. In addition, logic includes the study of structures of fallacious arguments and paradoxes. By logic, then, we mean the study and application of the principles of reasoning and the relationships between statements, concepts, or propositions. Logic incorporates both the methods of reasoning and the validity of the results. In common language, we refer to logic in several ways: logic can be considered as a framework or system of reasoning, a particular mode or process of reasoning, or the guiding principles of a field or discipline. We also use the term "logical" to describe a reasoned approach to solving a problem or reaching a decision, as opposed to the alternative "emotional" approaches of reacting or responding to a situation. As logic has developed, its scope has splintered into many distinctive branches. These distinctions serve to formalize different forms of logic as a science. The distinctions between the various branches of logic lead to their limitations and expressive capabilities, which are central issues in designing the Semantic Web languages. The following sections identify some of the more important distinctions.

Deductive and Inductive Reasoning

Originally, logic consisted only of deductive reasoning, which was concerned with a premise and a resultant deduction.
However, it is important to note that inductive reasoning – the study of deriving a reliable generalization from observations – has also been included in the study of logic. Correspondingly, we must distinguish between deductive validity and inductive validity. The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics. An inference is deductively valid if and only if there is no possible situation in which all the premises are true and the conclusion false. Inductive validity, on the other hand, requires us to define a reliable generalization of some set of observations. The task of providing this definition may be approached in various ways, some of which use mathematical models of probability.

Paradox

A paradox is an apparently true statement that seems to lead to a contradiction or to a situation that defies intuition. Typically, either the statements in question do not really imply the contradiction; or the puzzling result is not really a contradiction; or the premises themselves are not all really true (or cannot all be true together). The recognition of ambiguities, equivocations, and unstated assumptions underlying known paradoxes has often led to significant advances in science, philosophy, and mathematics.

Formal and Informal Logic

Formal logic (sometimes called ‘symbolic logic’) attempts to capture the nature of logical truth and inference in formal systems. This consists of a formal language, a set of rules of derivation (often called ‘rules of inference’), and sometimes a set of axioms. The formal language consists of a set of discrete symbols, a syntax (i.e., the rules for the construction of a statement), and a semantics (i.e., the relationship between symbols or groups of symbols and their meanings).
Expressions in formal logic are often called ‘formulas.’ The rules of derivation and potential axioms then operate with the language to specify a set of theorems, which are formulas that are either basic axioms or true statements that are derivable using the axioms and rules of derivation. In the case of formal logic systems, the theorems are often interpretable as expressing logical truths (called tautologies). Formal logic encompasses a wide variety of logic systems. For instance, propositional logic and predicate logic are kinds of formal logic, as are temporal logic, modal logic, Hoare logic, and the calculus of constructions. Higher-order logics are logical systems based on a hierarchy of types. For example, Hoare logic is a formal system developed by the British computer scientist C. A. R. Hoare. The purpose of the system is to provide a set of logical rules by which to reason about the correctness of computer programs with the rigor of mathematical logic. The central feature of Hoare logic is the Hoare triple. A triple describes how the execution of a piece of code changes the state of the computation. A Hoare triple is of the form: {P} C {Q} where P and Q are assertions and C is a command. P is called the precondition and Q the postcondition. Assertions are formulas in predicate logic. An interpretation of such a triple is: whenever P holds of the state before the execution of C, then Q will hold afterwards. Alternatively, informal logic is the study of logic as it is used in natural language arguments. Informal logic is complicated by the fact that it may be very hard to extract the formal logical structure embedded in an argument. Informal logic is also more difficult because the semantics of natural language assertions is much more complicated than the semantics of formal logical systems.
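As a concrete sketch of this interpretation (the function and variable names here are ours, purely illustrative), the triple {x > 0} x := x + 1 {x > 1} can be mimicked in ordinary code by checking the precondition before the command and the postcondition after it:

```python
# Mimic the Hoare triple {x > 0} x := x + 1 {x > 1} with runtime assertions.
def command(x):
    assert x > 0      # precondition P must hold before C executes
    x = x + 1         # the command C
    assert x > 1      # postcondition Q must hold afterwards
    return x

print(command(5))  # prints 6; the triple held for this execution
```

Note the difference in strength: Hoare logic proves the triple once for every state satisfying P, while the assertions above only check it for one particular execution.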
Mathematical Logic

Mathematical logic really refers to two distinct areas of research: the first is the application of the techniques of formal logic to mathematics and mathematical reasoning, and the second is the application of mathematical techniques to the representation and analysis of formal logic. The boldest attempt to apply logic to mathematics was pioneered by the philosopher-logician Bertrand Russell. His idea was that mathematical theories were logical tautologies, and his program was to show this by means of a reduction of mathematics to logic. The various attempts to carry this out met with a series of failures, such as Russell's Paradox, and the defeat of Hilbert's Program by Gödel's incompleteness theorems (which we shall describe shortly). Russell's paradox represents either of two interrelated logical contradictions. The first is a contradiction arising in the logic of sets or classes. Some sets can be members of themselves, while others cannot. The set of all sets is itself a set, and so it seems to be a member of itself. The null or empty set, however, must not be a member of itself. However, suppose that we can form a set of all sets that, like the null set, are not included in themselves. The paradox arises from asking the question of whether this set is a member of itself. It is, if and only if, it is not! The second form is a contradiction involving properties. Some properties seem to apply to themselves, while others do not. The property of being a property is itself a property, while the property of being a table is not, itself, a table. Hilbert's Program was developed in the early 1920s by the German mathematician David Hilbert. It called for a formalization of all of mathematics in axiomatic form, together with a proof that this axiomatization of mathematics is consistent. The consistency proof itself was to be carried out using only what Hilbert called ‘finitary’ methods.
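The predicate form of Russell's paradox can be sketched in code. Below, a "set" is modeled as its membership predicate (a convention of ours, purely illustrative), and the Russell predicate R holds of exactly those sets that are not members of themselves; asking whether R is a member of itself yields an infinite regress rather than an answer:

```python
import sys

# Model a "set" as its membership predicate: s(x) is True when x is in s.
def R(s):
    # Russell's set: the sets that are not members of themselves.
    return not s(s)

# Evaluating R(R) demands R(R) = not R(R); Python simply recurses
# until the interpreter's stack limit is reached.
sys.setrecursionlimit(500)
try:
    R(R)
    outcome = "decided"
except RecursionError:
    outcome = "no consistent answer"
print(outcome)
```

The failure is not an implementation quirk: any evaluation strategy must either loop forever or contradict itself, which is exactly the point of the paradox.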
The special epistemological character of this type of reasoning yielded the required justification of classical mathematics. It was also a great influence on Kurt Gödel, whose work on the incompleteness theorems was motivated by Hilbert's Program. In spite of the fact that Gödel's work is generally taken to prove that Hilbert's Program cannot be carried out, Hilbert's Program has nevertheless continued to be influential in the philosophy of mathematics, and work on relativized Hilbert Programs has been central to the development of proof theory. Both the statement of Hilbert's Program and its refutation by Gödel depended upon their work establishing the second area of mathematical logic, the application of mathematics to logic in the form of proof theory. Despite the negative nature of Gödel's incompleteness theorems, a result in model theory can be understood as showing how close logics came to being true: every rigorously defined mathematical theory can be exactly captured by a First-Order Logic (FOL) theory. Thus it is apparent that the two areas of mathematical logic are complementary. Logic is extensively applied in the fields of artificial intelligence and computer science. These fields provide a rich source of problems in formal logic. In the 1950s and 1960s, researchers predicted that when human knowledge could be expressed using logic with mathematical notation, it would be possible to create a machine that reasons, or produces artificial intelligence. This turned out to be more difficult than expected because of the complexity of human reasoning. In logic programming, a program consists of a set of axioms and rules. In symbolic logic and mathematical logic, proofs by humans can be computer-assisted. Using automated theorem proving, machines can find and check proofs, as well as work with proofs too lengthy to be written out by hand. However, the computational complexity of carrying out automated theorem proving is a serious limitation.
It is a limitation that, as we will find in subsequent chapters, significantly impacts the Semantic Web.

Decidability

In the 1930s, the mathematical logician Kurt Gödel shook the world of mathematics when he established that, in certain important mathematical domains, there are problems that cannot be solved, or propositions that cannot be proved or disproved, and are therefore undecidable. Whether a certain statement of first-order logic is provable as a theorem is one example; whether a polynomial equation in several variables has integer solutions is another. While humans solve problems in these domains all the time, it is not certain that arbitrary problems in these domains can always be solved. This is relevant for artificial intelligence, since it is important to establish the boundaries for a problem's solution.

Kurt Gödel

Kurt Gödel (shown in Figure 2-1) was born April 28, 1906 in Brünn, Austria-Hungary (now Brno, Czech Republic). He had rheumatic fever when he was six years old, and his health became a chronic concern over his lifetime. Gödel entered the University of Vienna in 1923, where he was influenced by the lectures of Philipp Furtwängler. Furtwängler was an outstanding mathematician and teacher, but in addition he was paralyzed from the neck down, which forced him to lecture from a wheelchair with an assistant to write on the board. This made a big impression on Gödel, who was very conscious of his own health. As an undergraduate, Gödel studied Russell's book Introduction to Mathematical Philosophy. He completed his doctoral dissertation under Hans Hahn in 1929. His thesis proved the completeness of the first-order functional calculus. He subsequently became a member of the faculty of the University of Vienna, where he belonged to the school of logical positivism until 1938. Gödel is best known for his 1931 proof of the "Incompleteness Theorems."
He proved fundamental results about axiomatic systems, showing that in any axiomatic mathematical system there are propositions that cannot be proved or disproved within the axioms of the system. In particular, the consistency of the axioms cannot be proved. This ended a hundred years of attempts to establish axioms and axiom-based logic systems that would put the whole of mathematics on this basis. One major attempt had been by Bertrand Russell with Principia Mathematica (1910-13). Another was Hilbert's formalism, which was dealt a severe blow by Gödel's results. The theorem did not destroy the fundamental idea of formalism, but it did demonstrate that any system would have to be more comprehensive than that envisaged by Hilbert. One consequence of Gödel's results was that a computer can never be programmed to answer all mathematical questions. In 1935, Gödel proved important results on the consistency of the axiom of choice with the other axioms of set theory. He visited Göttingen in the summer of 1938, lecturing there on his set theory research, and returned to Vienna to marry Adele Porkert in 1938. After settling in the United States, Gödel again produced work of the greatest importance. His “Consistency of the axiom of choice and of the generalized continuum-hypothesis with the axioms of set theory” (1940) is a classic of modern mathematics. In this he proved that if an axiomatic system of set theory of the type proposed by Russell and Whitehead in Principia Mathematica is consistent, then it will remain so when the axiom of choice and the generalized continuum-hypothesis are added to the system. This did not prove that these axioms were independent of the other axioms of set theory, but when this was finally established by Cohen in 1963, he used the ideas of Gödel. Gödel held a chair at Princeton from 1953 until his death in 1978.
Propositional Logic

Propositional logic (or calculus) is a branch of symbolic logic dealing with propositions as units and with the combinations and connectives that relate them. It can be defined as the branch of symbolic logic that deals with the relationships formed between propositions by connectives such as those shown below:

Symbols   Statement                                      Connective
p ∨ q     "either p is true, or q is true, or both"      disjunction
p · q     "both p and q are true"                        conjunction
p ⊃ q     "if p is true, then q is true"                 implication
p ≡ q     "p and q are either both true or both false"   equivalence

A ‘truth table’ is a complete list of the possible truth values of a statement. We use "T" to mean "true" and "F" to mean "false" (or "1" and "0" respectively). Truth tables are adequate to test validity, tautology, contradiction, contingency, consistency, and equivalence. This is important because truth tables are a mechanical application of the rules. Propositional calculus is a formal system for deduction whose atomic formulas are propositional variables. In propositional calculus, the language consists of propositional variables (or placeholders) and sentential operators (or connectives). A well-formed formula is any atomic formula or a formula built up from sentential operators.

First-Order Logic (FOL)

First-Order Logic (FOL), also known as first-order predicate calculus, is a systematic approach to logic based on the formulation of quantifiable statements such as "there exists an x such that..." or "for any x, it is the case that...". A first-order logic theory is a logical system that can be derived from a set of axioms as an extension of first-order logic. FOL is distinguished from higher-order logic in that the values "x" in the FOL statements are individual values and not properties. Even with this restriction, first-order logic is capable of formalizing all of set theory and most of mathematics.
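Returning briefly to the truth tables described above: their mechanical character is easy to demonstrate in code, since enumerating every valuation of p and q suffices to test any connective. A minimal sketch (the helper name implies is ours):

```python
from itertools import product

def implies(p, q):
    # Material implication p ⊃ q: false only when p is true and q is false.
    return (not p) or q

# Enumerate all four valuations of (p, q) and tabulate p ⊃ q.
table = [(p, q, implies(p, q)) for p, q in product([True, False], repeat=2)]
for p, q, v in table:
    print("T" if p else "F", "T" if q else "F", "->", "T" if v else "F")
```

The same enumeration tests tautology (true in every row), contradiction (false in every row), and equivalence (two formulas agreeing in every row).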
Its restriction to quantification over individuals makes it difficult to use for the purposes of topology, but it is the classical logical theory underlying mathematics. The branch of mathematics called Model Theory is primarily concerned with connections between first-order properties and first-order structures. First-order languages are by their nature very restrictive, and as a result many questions cannot be discussed using them. On the other hand, first-order logics have precise grammars. Predicate calculus is quantificational and is based on atomic formulas that are propositional functions. In predicate calculus, as in grammar, a subject is what we make an assertion about, and a predicate is what we assert about the subject.

Automated Inference for FOL

Automated inference using first-order logic is harder than using propositional logic because variables can take on a potentially infinite number of possible values from their domain. Hence there are potentially an infinite number of ways to apply the Universal-Elimination rule of inference. Gödel's Completeness Theorem says that FOL is only semi-decidable. That is, if a sentence is true given a set of axioms, there is a procedure that will determine this. However, if the sentence is false, then there is no guarantee that a procedure will ever determine this. In other words, the procedure may never halt in this case. As a result, the Truth Table method of inference is not complete for FOL because the truth table size may be infinite. Natural deduction is complete for FOL, but is not practical for automated inference because the ‘branching factor’ in the search process is too large. This is the result of the necessity to try every inference rule in every possible way using the set of known sentences. Let us consider the rule of inference known as Modus Ponens (MP). Modus Ponens is a rule of inference pertaining to the IF/THEN operator.
Modus Ponens states that if the antecedent of a conditional is true, then the consequent must also be true:

(MP) Given the statements p and if p then q, infer q.

The Generalized Modus Ponens (GMP) is not complete for FOL. However, Generalized Modus Ponens is complete for Knowledge Bases (KBs) containing only Horn clauses. Another very important logic, which we shall discuss in detail in chapter 8, is Horn logic. A Horn clause is a sentence of the form:

(Ax) (P1(x) ^ P2(x) ^ ... ^ Pn(x)) => Q(x)

where there are 0 or more Pi's, and the Pi's and Q are positive (i.e., un-negated) literals. Horn clauses represent a subset of the set of sentences representable in FOL. For example: P(a) v Q(a) is a sentence in FOL, but is not a Horn clause. Natural deduction using GMP is complete for KBs containing only Horn clauses. Proofs start with the given axioms/premises in the KB, deriving new sentences using GMP until the goal/query sentence is derived. This defines a forward-chaining inference procedure because it moves "forward" from the KB to the goal. For example: KB = All cats like fish, cats eat everything they like, and Molly is a cat. In first-order logic, then:

(1) (Ax) cat(x) => likes(x, Fish)
(2) (Ax)(Ay) (cat(x) ^ likes(x,y)) => eats(x,y)
(3) cat(Molly)

Query: Does Molly eat fish?

Proof: Use GMP with (1) and (3) to derive: (4) likes(Molly, Fish). Use GMP with (3), (4) and (2) to derive: eats(Molly, Fish). Conclusion: Yes, Molly eats fish.

Description Logic

Description Logics (DLs) allow specifying a terminological hierarchy using a restricted set of first-order formulas. DLs have nice computational properties (they are often decidable and tractable), but the inference services are restricted to classification and subsumption. That means, given formulae describing classes, the classifier associated with a certain description logic will place them inside a hierarchy.
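The forward-chaining derivation of the Molly example above can be sketched in a few lines of code (the tuple encoding of facts is our own convention; this is an illustration, not a production inference engine):

```python
# A minimal forward-chaining sketch of the Molly example.
# Facts are tuples: ("cat", "Molly") stands for cat(Molly).
facts = {("cat", "Molly")}

def step(facts):
    new = set(facts)
    for f in facts:
        if f[0] == "cat":                  # rule (1): cat(x) => likes(x, Fish)
            new.add(("likes", f[1], "Fish"))
    for f in facts:
        if f[0] == "likes":                # rule (2): cat(x) ^ likes(x,y) => eats(x,y)
            _, x, y = f
            if ("cat", x) in facts:
                new.add(("eats", x, y))
    return new

# Apply the rules "forward" until no new facts appear (a fixed point).
while True:
    derived = step(facts)
    if derived == facts:
        break
    facts = derived

print(("eats", "Molly", "Fish") in facts)  # True: Molly eats fish
```

Each pass corresponds to one application of GMP; the loop halts because a finite KB of Horn clauses can only generate finitely many ground facts.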
Given an instance description, the classifier will determine the most specific classes to which the instance belongs. From a modeling point of view, Description Logics correspond to Predicate Logic statements with three variables, suggesting that modeling is syntactically bound. Description Logic is one possibility for inference engines for the Semantic Web. Another possibility is based on Horn logic, which is another subset of First-Order Predicate logic (see Figure 2-2). In addition, Description Logic and rule systems (e.g., Horn Logic) are somewhat orthogonal, which means that they overlap, but one does not subsume the other. In other words, there are capabilities in Horn logic that are complementary to those available in Description Logic. Both Description Logic and Horn Logic are critical branches of logic that highlight essential limitations and expressive powers, which are central issues in designing the Semantic Web languages. We will discuss them further in chapter 8. Using Full First-Order Logic (FFOL) for specifying axioms requires a full-fledged automated theorem prover. However, FOL is semi-decidable, and doing inferencing becomes computationally intractable for large amounts of data and axioms. This means that, in an environment like the Web, FFOL programs would not scale to handle huge amounts of knowledge. Besides, full first-order theorem proving would mean maintaining consistency throughout the Web, which is impossible. Description Logic is a fragment of FOL. FOL includes expressiveness beyond the overlap, notably: positive disjunctions; existentials; and entailment of non-ground and non-atomic conclusions. Horn FOL is another fragment of FOL. Horn Logic Program (LP) is a slight weakening of Horn FOL. "Weakening" here means that the conclusions from a given set of Horn premises that are entailed according to the Horn LP formalism are a subset of the conclusions entailed (from that same set of premises) according to the Horn FOL formalism.
However, the set of ground atomic conclusions is the same in Horn LP as in Horn FOL. For most practical purposes (e.g., relational database query answering), Horn LP is thus essentially similar in its power to Horn FOL. Horn LP is a fragment of both FOL and nonmonotonic LP. This discussion may seem esoteric, but it is precisely these types of issues that will decide both the design of the Semantic Web as well as its likelihood of success.

Higher-Order Logic

Higher-Order Logics (HOLs) provide greater expressive power than FOL, but they are even more difficult computationally. For example, in HOLs, one can have true statements that are not provable (see the discussion of Gödel's Incompleteness Theorem). There are two aspects of this issue: higher-order syntax and higher-order semantics. If a higher-order semantics is not needed (and this is often the case), a second-order logic can often be translated into a first-order logic. In first-order semantics, variables can only range over domains of individuals or over the names of predicates and functions, but not over sets as such. In higher-order syntax, variables are allowed to appear in places where normally predicate or function symbols appear. Predicate calculus is the primary example of a logic where syntax and semantics are both first-order. There are logics that have higher-order syntax but first-order semantics. Under a higher-order semantics, an equation between predicate (or function) symbols is true if and only if the predicates (or functions) they denote are the same. An example of statements requiring both higher-order syntax and higher-order semantics is statements expressing trust about other statements. To state it another way, higher-order logic is distinguished from first-order logic in several ways. The first is the scope of quantifiers; in first-order logic, it is forbidden to quantify over predicates. The second way in which higher-order logic differs from first-order logic is in the constructions that are allowed in the underlying type theory.
A higher-order predicate is a predicate that takes one or more other predicates as arguments. In general, a higher-order predicate of order n takes one or more (n − 1)th-order predicates as arguments (where n > 1).

Recursion Theory

Recursion is the process a procedure goes through when one of the steps of the procedure involves rerunning a complete set of identical steps. In mathematics and computer science, recursion is a particular way of specifying a class of objects with the help of a reference to other objects of the class: a recursive definition defines objects in terms of the already defined objects of the class. A recursive process is one in which objects are defined in terms of other objects of the same type. Using a recurrence relation, an entire class of objects can be built up from a few initial values and a small number of rules. The Fibonacci numbers (i.e., the infinite sequence of numbers starting 0, 1, 1, 2, 3, 5, 8, 13, …, where the next number in the sequence is defined as the sum of the previous two numbers) are a commonly known recursive set. The following is a recursive definition of a person's ancestors: One's parents are one's ancestors (base case). The parents of any ancestor are also ancestors of the person under consideration (recursion step). Therefore, your ancestors include: your parents, your parents' parents (grandparents), your grandparents' parents, and everyone else you get by successively adding ancestors. It is convenient to think that a recursive definition defines objects in terms of "previously defined" members of the class. While recursive definitions are useful and widespread in mathematics, care must be taken to avoid self-recursion, in which an object is defined in terms of itself, leading to an infinite nesting (see Figure 1-1: “The Print Gallery” by M.C. Escher is a visual illustration of self-recursion).
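The Fibonacci definition above translates directly into a recursive procedure; a minimal sketch:

```python
def fib(n):
    # Base cases: the first two Fibonacci numbers are 0 and 1.
    if n < 2:
        return n
    # Recursion step: each number is the sum of the previous two.
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Note that fib(n) is defined only in terms of strictly smaller arguments, so the recursion terminates; a definition that refers to itself without shrinking toward a base case would nest forever, which is exactly the self-recursion caution above.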
Knowledge Representation

Let’s define what we mean by the fundamental terms “data,” “information,” “knowledge,” and “understanding.” An item of data is a fundamental element of an application. Data can be represented by populations and labels. Data is raw; it exists and has no significance beyond its existence. It can exist in any form, usable or not. It does not have meaning by itself. Information, on the other hand, is an explicit association between items of data. Associations represent a function relating one set of things to another set of things. Information can be considered to be data that has been given meaning by way of relational connections. This "meaning" can be useful, but does not have to be. A relational database creates information from the data stored within it. Knowledge can be considered to be an appropriate collection of information, such that it is useful. Knowledge-based systems contain knowledge as well as information and data. A rule is an explicit functional association from a set of information things to a specific information thing. As a result, a rule is knowledge. We can construct information from data and knowledge from information, and finally produce understanding from knowledge. Understanding lies at the highest level. Understanding is an interpolative and probabilistic process that is cognitive and analytical. It is the process by which one can take existing knowledge and synthesize new knowledge. One who has understanding can pursue useful actions because he can synthesize new knowledge or information from what is previously known (and understood). Understanding can build upon currently held information, knowledge, and understanding itself. AI systems possess understanding in the sense that they are able to synthesize new knowledge from previously stored information and knowledge.
An important element of AI is the principle that intelligent behavior can be achieved through the processing of symbolic structures representing increments of knowledge. This has produced knowledge-representation languages that allow the representation and manipulation of knowledge to deduce new facts from the existing knowledge. A knowledge-representation language must have a well-defined syntax and semantics while supporting inference. Three techniques have been popular for expressing knowledge representation and inference: (1) Logic-based approaches, (2) Rule-based systems, and (3) Frames and semantic networks. Logic-based approaches use logical formulas to represent complex relationships. They require a well-defined syntax, semantics, and proof theory. The formal power of a logical theorem proof can be applied to knowledge to derive new knowledge. Logic is used as the formalism for programming languages and databases. It can also be used as a formalism to implement knowledge methodology. Any formalism that admits a declarative semantics and can be interpreted both as a programming language and a database language is a knowledge language. However, the approach is inflexible and requires great precision in stating the logical relationships. In some cases, common-sense inferences and conclusions cannot be derived, and the approach may be inefficient, especially when dealing with issues that result in large combinations of objects or concepts. Rule-based approaches are more flexible and allow the representation of knowledge using sets of IF-THEN or other conditional rules. This approach is more procedural and less formal in its logic. As a result, reasoning can be controlled through a forward- or backward-chaining interpreter. Frames and semantic networks capture declarative information about related objects and concepts where there is a clear class hierarchy and where the principle of inheritance can be used to infer the characteristics of members of a subclass.
The two forms of reasoning in this technique are matching (i.e., identification of objects having common properties) and property inheritance, in which properties are inferred for a subclass. Frames and semantic networks are limited to the representation and inference of relatively simple systems. In each of these approaches, the knowledge-representation component (i.e., problem-specific rules and facts) is separate from the problem-solving and inference procedures. For the Semantic Web to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning. AI researchers have studied such systems and produced today’s Knowledge Representation (KR). KR is currently in a state comparable to that of hypertext before the advent of the Web. Knowledge representation contains the seeds of important applications, but to fully realize its potential, it must be linked into a comprehensive global system. Computational Logic Programming a computer involves creating a sequence of logical instructions that the computer will use to perform a wide variety of tasks. While it is possible to create programs directly in machine language, it is uncommon for programmers to work at this level because of the tedious, low-level nature of the instructions. Instead, programs are written in a simple text file using a high-level programming language, which can later be compiled into executable code. The ‘logic model’ for programming is a basic element that communicates the logic behind a program. A logic model can be a graphic representation of a program illustrating the logical relationships between program elements and the flow of calculation, data manipulation, or decisions as the program executes its steps. Logic models typically use diagrams, flow sheets, or some other type of visual schematic to convey relationships between programmatic inputs, processes, and outcomes. 
Logic models attempt to show the links in a chain of reasoning about relationships to the desired goal. The desired goal is usually shown as the last link in the model. A logic program may consist of a set of axioms and a goal statement. The logic form can be a set of ‘IF-THEN’ statements. The rules of inference are applied to determine whether the axioms are sufficient to ensure the truth of the goal statement. The execution of a logic program corresponds to the construction of a proof of the goal statement from the axioms. In the logic programming model, the programmer is responsible for specifying the basic logical relationships and does not specify the manner in which the inference rules are applied. Thus: Algorithm = Logic + Control. The operational semantics of logic programs correspond to logical inference. The declarative semantics of logic programs are derived from the term model. The denotational semantics of logic programs are defined in terms of a function that assigns meaning to the program. There is a close relation between the axiomatic semantics of imperative programs and logic programs. The control portion of the equation is provided by an inference engine whose role is to derive theorems based on the set of axioms provided by the programmer. The inference engine uses the operations of resolution and unification to construct proofs. Faulty logic models occur when the essential problem has not been clearly stated or defined. Program developers work carefully to construct logic models to avoid logic conflicts, recursive loops, and paradoxes within their computer programs. As a result, flawlessly produced programming logic should lead to executable code without paradox or conflict. Nevertheless, we know that ‘bugs’ or programming errors do occur, some of which are directly or indirectly a result of logic conflicts. 
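The logic-programming model just described, a set of axioms and IF-THEN rules with a goal statement proved by a backward-chaining interpreter, can be sketched in a propositional toy form; the symbols and rules below are invented for the example, and a real system would work over terms with unification rather than plain symbols:

```python
# Axioms (facts taken as true) and IF-THEN rules: the "logic".
axioms = {"a", "b"}
rules = [({"a", "b"}, "c"),   # IF a AND b THEN c
         ({"c"}, "goal")]     # IF c THEN goal

# A backward-chaining interpreter: the "control" in
# Algorithm = Logic + Control. Executing the program is constructing
# a proof of the goal statement from the axioms.
def prove(goal, depth=10):
    """Return True if the goal follows from the axioms via the rules."""
    if depth == 0:            # crude guard against runaway recursion
        return False
    if goal in axioms:
        return True
    return any(all(prove(p, depth - 1) for p in premises)
               for premises, head in rules if head == goal)

print(prove("goal"))  # True: goal <- c <- a, b
print(prove("d"))     # False: not derivable
```

The programmer states only the logical relationships; the interpreter decides how the inference rules are applied.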
As programs have grown in size from thousands of lines of code to millions of lines, the problems of ‘bugs’ and logic conflicts have also grown. Today, programs such as operating systems can have over 25 million lines of code and are considered to have hundreds of thousands of ‘bugs,’ most of which are seldom encountered during routine program usage. Confining logic issues to beta-testing on local servers allows programmers reasonable control of conflict resolution. Now consider applying many lines of application code logic to the Semantic Web, where it may access many information nodes. The magnitude of the potential conflicts could be daunting. Artificial Intelligence John McCarthy coined the term ‘Artificial Intelligence’ (AI), and by the late 1950s there were many researchers in AI working on programming computers. Eventually, AI expanded into such fields as philosophy, psychology, and biology. AI is sometimes described in two ways: strong AI and weak AI. Strong AI asserts that computers can be made to think on a level equal to humans. Weak AI simply holds that some ‘thinking-like’ features can be added to computers to make them more useful tools. Examples of weak AI abound: expert systems, drive-by-wire cars, smart browsers, and speech recognition software. These weak AI components may, when combined, begin to approach the expectations of strong AI. AI includes the study of computers that can perform cognitive tasks including: understanding natural language statements, recognizing visual patterns or scenes, diagnosing diseases or illnesses, solving mathematical problems, performing financial analyses, learning new procedures for problem solving, and playing complex games, like chess. We will provide a more detailed discussion of Artificial Intelligence on the Web and what is meant by machine intelligence in Chapter 3. 
Web Architecture and Business Logic So far, we have explored the basic elements, characteristics, and limitations of logic and suggested that errors in logic contribute to many significant ‘bugs’ that lead to crashed computer programs. Next, we will review how Web architecture is used to partition the delivery of business logic from the user interface. The Web architecture keeps the logic restricted to executable code residing on the server and delivers user-interface presentations within the markup languages traveling along the Internet. This simple arrangement of segregating the complexity of logic to the executable programs residing on servers has minimized processing difficulties over the Web itself. Today, markup languages are not equipped with logic connectives. So all complex logic and detailed calculations must be carried out by specially compiled programs residing on Web servers, where they are accessed by server page frameworks. The result is that highly efficient application programs on the server must communicate very inefficiently with other proprietary applications, using XML in simple ASCII text. In addition, the difficulty of interoperable programming greatly inhibits the automation of Web Services. Browsers such as Internet Explorer and Netscape Navigator view Web pages written in HyperText Markup Language (HTML). An HTML page can be written in a simple text file that is recognized by the browser, and it can call embedded script programming. In addition, HTML can include compiler directives that call server pages with access to proprietary compiled programming. As a result, simple-text HTML is empowered with important capabilities to call complex business logic programming residing on servers, both in the frameworks of Microsoft’s .NET and Sun’s J2EE. These frameworks support Web Services and form a vital part of today’s Web. 
When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn’t provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program provides functions such as transaction processing, database connectivity, and messaging. Business logic is concerned with: how we model real-world business objects, such as accounts, loans, and travel; how these objects are stored; how these objects interact with each other (e.g., a bank account must have an owner, and an account holder’s portfolio is the sum of his accounts); and who can access and update these objects. As an example, consider an online store that provides real-time pricing and availability information. The site will provide a form for you to choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. The Web server delegates the response generation to a script; however, the business logic for the pricing lookup is handled by an application server. With this design, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server’s lookup service. The script can then use the service’s result when the script generates its HTML response. The application server serves the business logic for looking up a product’s pricing information. That functionality doesn’t say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server’s lookup service, the service simply looks up the information and returns it to the client. 
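A minimal sketch of this separation follows; the product names, prices, and function names are hypothetical. The lookup service owns the business logic and says nothing about display, while a separate script formats the HTML response:

```python
# "Application server" side: pure business logic, no display concerns.
PRICES = {"widget": 9.99, "gadget": 24.50}

def lookup_price(product):
    """Business-logic service: returns pricing data, not presentation."""
    if product not in PRICES:
        raise KeyError(f"unknown product: {product}")
    return {"product": product, "price": PRICES[product], "in_stock": True}

# "Script" on the Web server: calls the service, then generates HTML.
def render_product_page(product):
    info = lookup_price(product)
    return (f"<html><body><h1>{info['product']}</h1>"
            f"<p>Price: ${info['price']:.2f}</p></body></html>")

# A second client, such as a cash register, can reuse the same service
# without any HTML at all.
print(render_product_page("widget"))
print(lookup_price("gadget")["price"])
```

Because the pricing logic never touches HTML, any number of clients can call it and format the result however they need.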
By separating the pricing logic from the HTML response-generating code, the pricing logic becomes reusable between applications. A second client, such as a cash register used by a clerk checking out a customer, could also call the same service. More recently, eXtensible Markup Language (XML) Web Services send an XML payload to a Web server. The Web server can then process the data and respond much as application servers have in the past. XML has become the standard for data transfer for all types of applications. XML provides a data model that is supported by most data-handling tools and vendors. Structuring data as XML allows hierarchical, graph-based representations of the data to be presented to tools, which opens up a host of possibilities. The task of creating and deploying Web Services automatically requires interoperable standards. The most advanced vision for the next generation of Web Services is the development of Web Services over Semantic Web Architecture. The Semantic Web Now let’s consider using logic within markup languages on the Semantic Web. This means empowering the Web’s expressive capability, but at the expense of reducing Web performance. The current Web is built on HTML and XML, which describe how information is to be displayed and laid out on a Web page for humans to read. In addition, HTML is not capable of being directly exploited by information retrieval techniques. XML may have enabled the exchange of data across the Web, but it says nothing about the meaning of that data. In effect, the Web has developed as a medium for humans without a focus on data that could be processed automatically. As a result, computers are unable to automatically process the meaning of Web content. For machines to perform useful automatic reasoning tasks on these documents, the language machines use must go beyond the basic semantics of XML Schema. They will require an ontology language, logic connectives, and rule systems. 
By introducing these elements, the Semantic Web is intended to be a paradigm shift just as powerful as the original Web. The Semantic Web will bring meaning to the content of Web pages, where software agents roaming from page to page can carry out automated tasks. The Semantic Web will be constructed over the Resource Description Framework (RDF) and Web Ontology Language (OWL). In addition, it will implement logic inference and rule systems. These languages are being developed by the W3C. Data can be defined and linked using RDF and OWL so that there is more effective discovery, automation, integration, and reuse across different applications. These languages are conceptually richer than HTML and allow representation of the meaning and structure of content (interrelationships between concepts). This makes Web content understandable by software agents, opening the way to a whole new generation of technologies for information processing, retrieval, and analysis. If a developer publishes data in XML on the Web, it doesn’t require much more effort to take the extra step and publish the data in RDF. By creating ontologies to describe data, intelligent applications won’t have to spend time translating various XML schemas. An ontology defines the terms used to describe and represent an area of knowledge. Although XML Schema is sufficient for exchanging data between parties who have agreed to the definitions beforehand, its lack of semantics prevents machines from reliably performing this task with new XML vocabularies. In addition, the ontology of RDF and RDF Schema (RDFS) is very limited (see Chapter 5). RDF is roughly limited to binary ground predicates, and RDF Schema is roughly limited to a subclass hierarchy and a property hierarchy with domain and range definitions. Adding an ontology language will permit the development of explicit, formal conceptualizations of models (see Chapter 6). 
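The triple-based shape of RDF, together with an RDFS-style subclass entailment, can be sketched as follows. The vocabulary and resources are illustrative, and a real system would use an RDF toolkit rather than raw tuples:

```python
# RDF content as subject-predicate-object triples (illustrative names).
triples = {
    ("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:rex", "rdf:type", "ex:Dog"),
}

def infer_types(triples):
    """RDFS-style entailment: an instance of a class is also an instance
    of every superclass. Repeat until no new triples are derived."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(triples):
            if p == "rdf:type":
                for s2, p2, o2 in list(triples):
                    if p2 == "rdfs:subClassOf" and s2 == o:
                        t = (s, "rdf:type", o2)
                        if t not in triples:
                            triples.add(t)
                            changed = True
    return triples

closed = infer_types(triples)
print(("ex:rex", "rdf:type", "ex:Animal") in closed)  # True
```

Nothing in the original data says that ex:rex is an animal; that triple is derived mechanically from the subclass hierarchy, which is exactly the kind of machine reasoning the ontology layer is meant to enable.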
The main requirements of an ontology language include: a well-defined syntax, a formal semantics, convenience of expression, an efficient reasoning support system, and sufficient expressive power. Since the W3C established that the Semantic Web would require much more expressive power than RDF and RDF Schema offer, the W3C defined the Web Ontology Language (called OWL). The layered architecture of the Semantic Web would suggest that one way to develop the necessary ontology language is to extend RDF Schema, using the RDF meaning of classes and properties and adding primitives to support richer expressiveness. However, simply extending RDF Schema would fail to achieve the best combination of expressive power and efficient reasoning. Within the layered architecture of the Semantic Web, the downward compatibility and reuse of software it promotes are achieved only with OWL Full (see Chapter 6), but at the expense of computational intractability. RDF and OWL (DL and Lite, see Chapter 6) are specializations of predicate logic. They provide a syntax that fits well with Web languages. They also define reasonable subsets of logic that offer a trade-off between expressive power and computational complexity. Semantic Web research has developed from the traditions of Artificial Intelligence (AI) and ontology languages. Currently, the most important ontology languages on the Web are XML, XML Schema, RDF, RDF Schema, and OWL. Agents are pieces of software that work autonomously and proactively. In most cases, an agent will simply collect and organize information. Agents on the Semantic Web will receive tasks to perform and will seek information from Web resources, while communicating with other Web agents, in order to fulfill their tasks. Semantic Web agents will utilize metadata, ontologies, and logic to carry out their tasks. 
In a closed environment, Semantic Web specifications have already been used to accomplish many tasks, such as data interoperability for business-to-business (B2B) transactions. Many companies have expended resources to translate their internal data syntax for their partners. As the world migrates towards RDF and ontologies, interoperability will become more flexible to new demands. An inference is a process of using rules to manipulate knowledge to produce new knowledge. Adding logic to the Web means using rules to make inferences and choosing a course of action. The logic must be powerful enough to describe complex properties of objects, but not so powerful that agents can be tricked by a paradox. A combination of mathematical and engineering issues complicates this task. We will provide a more detailed presentation on paradoxes on the Web and what is solvable on the Web in the next few chapters. Inference Engines for the Semantic Web Inference engines process the knowledge available in the Semantic Web by deducing new knowledge from already specified knowledge. Inference engines based on Higher-Order Logic (HOL) have the greatest expressive power among all known logics, such as the characterization of transitive closure. However, higher-order logics don’t have nice computational properties: there are true statements that are unprovable (Gödel’s Incompleteness Theorem). Inference engines based on Full First-Order Logic (FFOL) for specifying axioms require a full-fledged automated theorem prover. FOL is semi-decidable, and inferencing is computationally intractable for large amounts of data and axioms. This means that, in an environment like the Web, HOL and FFOL programs would not scale up to handle huge amounts of knowledge. Besides, full first-order theorem proving would mean maintaining consistency throughout the Web, which is impossible. Predicate calculus is the primary example of logic where syntax and semantics are both first-order. 
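The transitive closure mentioned above is a useful touchstone: while it resists finite axiomatization in plain first-order logic, it is easy to compute procedurally. A toy sketch over an illustrative link relation (the edges are invented for the example):

```python
# Compute the transitive closure of a binary relation by repeatedly
# joining pairs until no new pairs appear.

edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for x, y in list(closure):
            for y2, z in list(closure):
                # If x reaches y and y reaches z, then x reaches z.
                if y2 == y and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

print(("a", "d") in transitive_closure(edges))  # True: a -> b -> c -> d
```

The contrast is the point: what a declarative logic may struggle to express compactly can be trivial as an iterative procedure, which is one reason the Semantic Web languages trade expressive power for tractable reasoning.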
From a modeling point of view, Description Logics correspond to Predicate Logic statements with at most three variables, suggesting that modeling is syntactically bound, which makes them good candidate languages for Web logic. Other possibilities for inference engines for the Semantic Web are languages based on Horn Logic, which is another fragment of First-Order Predicate Logic (see Figure 2-2). In addition, Description Logic and rule systems (e.g., Horn Logic) have different capabilities. Both Description Logic and Horn Logic are critical branches of logic that highlight essential limitations and expressive powers, which are central issues in designing the Semantic Web languages. We will discuss them further in Chapters 6, 7, 8, and 9. Conclusion For the Semantic Web to provide machine processing capabilities, the expressive power of the logic in markup languages must be balanced against the resulting computational complexity of reasoning. In this chapter, we examined both the expressive characteristics of logic languages as well as their inherent limitations. First-Order Logic (FOL) fragments such as Description Logic and Horn Logic offer attractive characteristics for Web applications and set the parameters for how expressive Web markup languages can become. We also reviewed the concept of Artificial Intelligence (AI) and how logic is applied in computer programming. After exploring the basic elements, characteristics, and limitations of logic, and suggesting that errors in logic contribute to many significant ‘bugs’ that lead to crashed computer programs, we reviewed how Web architecture is used to partition the delivery of business logic from the user interface. The Web architecture keeps the logic restricted to executable code residing on the server and delivers user-interface presentations within the markup languages traveling along the Internet. 
Finally, we discussed the implications of using logic within markup languages on the Web through the development of the Semantic Web. Our conclusions from this chapter include: Logic is the foundation of knowledge representation, which can be applied to AI in general and the World Wide Web specifically. Logic can provide a high-level language for expressing knowledge and has high expressive power. Logic has a well-understood formal semantics for assigning unambiguous meaning to logic statements. In addition, we saw that proof systems exist that can automatically derive statements syntactically from premises. Predicate logic uniquely offers a sound and complete proof system, while higher-order logics do not. By tracking the proof used to reach a consequence, logic can provide explanations for its answers. Currently, complex logic and detailed calculations must be carried out by specially compiled programs residing on Web servers, where they are accessed by server page frameworks. The result is that highly efficient application programs on the server must communicate very inefficiently with other proprietary applications, using XML in simple ASCII text. In addition, this difficulty with interoperable programs greatly inhibits the automation of Web Services. The Semantic Web offers a way to use logic in the form of Description Logic or Horn Logic on the Web. Exercises 2-1. Explain how logic for complex business calculations is currently carried out through .NET and J2EE application servers. 2-2. Explain the difference between FOL and HOL. 2-3. Why is it necessary to consider less powerful expressive languages for the Semantic Web? 2-4. Why is undecidability a concern on the Web? Website The website http://escherdroste.math.leidenuniv.nl/ offers a visualization of the mathematical structure behind Escher’s Print Gallery, using the Droste effect. This mathematical structure answers some questions about Escher’s picture, such as: “what’s in the blurry white hole in the middle?” 
This project is an initiative of Hendrik Lenstra of the Universiteit Leiden and the University of California at Berkeley. Bart de Smit of the Universiteit Leiden runs the project. Interlude #2: Truth and Beauty As John passed with a sour look on his face, Mary looked up from her textbook and asked, “Didn’t you enjoy the soccer game?” “How can you even ask that when we lost?” asked John gloomily. “I think the team performed beautifully, despite the score,” said Mary. This instantly frustrated John and he said, “Do you know, Mary, that sometimes I find it disarming the way you express objects in terms of beauty? I find that simply accepting something on the basis of its beauty can lead to false conclusions.” Mary reflected upon this before offering a gambit of her own, “Well, John, do you know that sometimes I find that relying on objective truth alone can lead to unattractive conclusions?” John became flustered and reflected his dismay by demanding, “Give me an example.” Without hesitation, Mary said, “Perhaps you will recall that in the late 1920’s, mathematicians were quite certain that every well-posed mathematical question had to have a definite answer ─ either true or false. For example, suppose they claimed that every even number greater than two was the sum of two prime numbers,” referring to Goldbach’s Conjecture, which she had just been studying in her textbook. Mary continued, “Mathematicians would seek the truth or falsity of the claim by examining a chain of logical reasoning that would lead in a finite number of steps to prove that the claim was either true or false.” “So mathematicians thought at the time,” said John. “Even today most people still do.” “Indeed,” said Mary. “But in 1931, the logician Kurt Gödel proved that the mathematicians were wrong. He showed that every sufficiently expressive logical system must contain at least one statement that can be neither proved nor disproved following the logical rules of that system. 
Gödel proved that not every mathematical question has to have a yes or no answer. Even a simple question about numbers may be undecidable. In fact, Gödel proved that there exist questions that, while being undecidable by the rules of the logical system, can be seen to be actually true if we jump outside that system. But they cannot be proven to be true.” “Thank you for that clear explanation,” said John. “But isn’t such a fact simply a translation into mathematical terms of the famous Liar’s Paradox: ‘This statement is false.’” “Well, I think it’s a little more complicated than that,” said Mary. “But Gödel did identify the problem of self-reference that occurs in the Liar’s Paradox. Nevertheless, Gödel’s theorem contradicted the thinking of most of the great mathematicians of his time. The result is that one cannot be as certain as the mathematicians had desired. See what I mean? Gödel may have found an important truth, but it was – well, to be frank – rather disappointingly unattractive,” concluded Mary. “On the contrary,” countered John, “from my perspective it was the beauty of the well-posed mathematical question offered by the mathematicians that was proven to be false.” Mary replied, “I’ll have to think about that.”
- Connections | H Peter Alesso
An excerpt from the computer science technology book Connections. Connections AMAZON Chapter 1 Connecting Information “The ultimate search engine would understand exactly what you mean and give back exactly what you want,” said Larry Page[1]. We live in the information age. As society has progressed into the post-industrial era, access to knowledge and information has become the cornerstone of modern living. With the advent of the World Wide Web, vast amounts of information have suddenly become available to people throughout the world. And searching the Web has become an essential capability, whether you are sitting at your desktop PC or wandering the corporate halls with your wireless PDA. As a result, there is no better place to start our discussion of connecting information than with the world’s greatest search engine ─ Google. Google has become a global household name ─ millions use it daily in a hundred languages to conduct over half of all online searches. As a result, Google connects people to relevant information. By providing free access to information, Google offers a seductive gratification to whoever seeks it. To power its searches, Google uses patented, custom-designed programs and hundreds of thousands of computers to provide the greatest computing power of any enterprise. Searching for information is now called ‘googling,’ which men, women, and children can perform over computers and cell phones. And thanks to small targeted advertisements that searchers can click for information, Google has become a financial success. In this chapter, we follow the hero’s journey of Google founders Larry Page and Sergey Brin as they invent their Googleware technology for efficient connection to information, then go on to become masters in pursuit of their holy grail ─ ‘perfect search.’ The Google Story Google was founded by two Ph.D. computer science students at Stanford University in California ─ Larry Page and Sergey Brin. 
When Page and Brin began their hero’s journey, they didn’t know exactly where they were headed. It is widely known that, at first, Page and Brin didn’t hit it off. When they met in 1995, 24-year-old Page was a new graduate of the University of Michigan visiting Stanford University to consider entering graduate school; Brin, at age 23, was a Stanford graduate student who was assigned to host Page’s visit. At first, the two seemed to differ on just about every subject they discussed. They each had strong opinions and divergent viewpoints, and their relationship seemed destined to be contentious. Larry Page was born in 1973 in Lansing, Michigan. Both of his parents were computer scientists. His father was a university professor and a leader in the field of artificial intelligence, while his mother was a teacher of computer programming. As a result of his upbringing in this talented and technology-oriented family, Page seemed destined for success in the computer industry in one way or another. After graduating from high school, Page studied computer engineering at the University of Michigan, where he earned his Bachelor of Science degree. Following his undergraduate studies, he decided to pursue graduate work in computer engineering at Stanford University. He intended to build a career in academia or the computer science profession, building on a Ph.D. degree. Meanwhile, Sergey Brin was also born in 1973, in Moscow, Russia, the son of a Russian mathematician and economist. His entire family fled the Soviet Union in 1979 under the threat of growing anti-Semitism, and began their new life as immigrants in the United States. Brin displayed a great interest in computers from an early age. As a youth, he was influenced by the rapid popularization of personal computers, and was very much a child of the microprocessor age. 
He too was brought up to be familiar with mathematics and computer technology, and as a young child in the first grade, he turned in a computer printout for a school project. Later, at the age of nine, he was given a Commodore 64 computer as a birthday gift from his father. Brin entered the University of Maryland at College Park, where he studied mathematics and computer science. He completed his studies at the University of Maryland in 1993, earning his Bachelor of Science degree. Following his undergraduate studies, he was given a National Science Foundation fellowship to pursue graduate studies in computer science at Stanford University. Not only did he exhibit early talent and interest in mathematics and computer science, he also became acutely interested in data management and networking as the Internet was becoming an increasing force in American society. While at Stanford, he pursued research and prepared publications in the areas of data mining and pattern extraction. He also wrote software to convert scientific papers written in TeX, a cross-platform text processing language, into HyperText Markup Language (HTML), the multimedia language of the World Wide Web. Brin successfully completed his Master’s degree at Stanford. Like Page, Brin’s intent was to continue in his graduate studies to earn a Ph.D., which he also viewed as a great opportunity to establish an outstanding academic or professional career in computer science. The hero’s journey for Page and Brin began as they heard the call ─ to develop a unique approach for retrieving relevant information from the voluminous data on the World Wide Web. Page remembered, “When we first met each other, we each thought the other was obnoxious. Then we hit it off and became really good friends.... I got this crazy idea that I was going to download the entire Web onto my computer. I told my advisor it would only take a week... 
So I started to download the Web, and Sergey started helping me because he was interested in data mining and making sense of the information.”[2] Although Page initially thought the downloading of the Web would be a short-term project, taking a week or so to accomplish, he quickly found that the scope of what he wanted to do was much greater than his original estimate. Once he started his downloading project, he enlisted Brin to join the effort. While working together, the two became inspired and wrote the seminal paper entitled The Anatomy of a Large-Scale Hypertextual Web Search Engine[3]. It explained their efficient ranking algorithm, ‘PageRank.’ Brin said about the experience, “The research behind Google began in 1995. The first prototype was actually called BackRub. A couple of years later, we had a search engine that worked considerably better than the others available did at the time.”[4] This prototype listed the results of a Web search according to a quantitative measure of the popularity of the pages. By January 1996, the system was able to analyze the ‘back links’ pointing to a given website and from this quantify the popularity of the site. Within the next few years, the prototype system had been converted into progressively improved versions, and these were substantially more effective than any other search engine then available. As the buzz about their project spread, more and more people began to use it. Soon they were reporting that there were 10,000 searches per day at Stanford using their system. With this growing use and popularity of their search system, they began to realize that they were maxing out their search ability due to the limited number of computers they had at their disposal. They would need more hardware to continue their remarkable expansion and enable more search activity. As Page said, “This is about how many searches we can do, and we need more computers. Our whole history has been like that. 
We always need more computers.”[5] In many ways, the research project at Stanford was a low-budget operation. Because of a chronic shortage of cash, the pair are said to have monitored the Stanford computer science department’s loading docks for newly arrived computers to ‘borrow.’ In spite of this, within a short span of time the reputation of the BackRub system grew dramatically, and their new search technology began to be broadly noticed. They named their successor search engine ‘Google,’ in a whimsical analogy to the mathematical term ‘googol,’ the immensely large number written as 1 followed by 100 zeros. The transition from the earlier BackRub technology to the much more sophisticated Google was slow. But even in its initial stage, the Google system began with an index of 25 million pages and the capability to handle 10,000 search queries every day. The Google search engine grew quickly as it was continuously improved. The effectiveness and relevance of Google’s searches, its scope of coverage, its speed and reliability, and its clean user interface all contributed to a rapid increase in the popularity of the search engine. At this time, Google was still a student research project, and both Page and Brin were still intent on completing their respective doctoral programs at Stanford. As a result, they initially refused to ‘answer the call’ and continued to devote themselves to their academic pursuit of the technology of search. Through all this, Brin maintained an eclectic collection of interests and activities. He continued with his graduate research at Stanford and collaborated with his fellow Ph.D. students and professors on other projects such as automatic detection. At the same time, he pursued a variety of outside interests, including sailing and trapeze. Brin’s father had stressed the importance of completing his Ph.D. He said, “I expected him to get his Ph.D. 
and to become somebody, maybe a professor.” In response to his father’s question as to whether he was taking any advanced courses one semester, Brin replied, “Yes, advanced swimming.”[6] While Brin and Page continued on as graduate students, they began to realize the importance of what they had succeeded in developing. The two aspiring entrepreneurs decided to try to license the Google technology to existing Internet companies, but they were unable to stimulate the interest of the major enterprises. They were forced to face the crucial decision of continuing on at Stanford or striking out on their own. Realizing that they were onto something important and perhaps even groundbreaking, they decided to make the move. Thus our two heroes reached their point of departure and crossed over from the academic into the business world. As they committed to this new direction, they realized they would need to postpone their educational aspirations, prepare plans for their business concept, develop a working demo of their commercial search product, and seek funding from outside investors. Having made this decision, they managed to interest Sun Microsystems founder Andy Bechtolsheim in their idea. As Brin recalls, "We met him very early one morning on the porch of a Stanford faculty member's home in Palo Alto. We gave him a quick demo. He had to run off somewhere, so he said, 'Instead of us discussing all the details, why don't I just write you a check?' It was made out to Google Inc. and was for $100,000."[7] The check remained uncashed in Page's desk for several weeks while he and Brin set up a corporation and sought additional money from family and friends ─ almost $1 million in total. Having started the new company, lined up investor funding, and possessing a superb product, they realized ultimate success would require a good balance of perspiration as well as inspiration. 
Nevertheless, at this point Google appeared to be well on the road to success. Page and Brin have been on a roll ever since, armed with great confidence that they had both a superior product and an excellent vision for global information collection, storage, and retrieval. In addition, they believed that coordination and optimization of the entire hardware/software system was important, so they developed their own Googleware technology by combining their custom software with appropriately integrated custom hardware, thereby fully leveraging their ingenious concept. Google Inc. opened its doors as a business entity in September 1998, operating out of modest facilities in a Menlo Park, California garage. As Page and Brin initiated their journey, they faced many challenges along the way. They matured in their understanding with the help of mentors they encountered, such as Yahoo!’s Dave Filo. Filo not only encouraged the two in the development of their search technology, but also made business suggestions for their project. Following the company startup, interest in Google grew rapidly. Red Hat, a Linux company, signed on as their first commercial customer. Red Hat was particularly interested in Google because it recognized the importance of search technology and its ability to run on open source systems such as Linux. In addition, the press began to take notice of this new commercial venture, and articles began to appear in the media highlighting the Google product that offered relevant search results. The late 1990s saw spectacular growth in the technology industry, and Silicon Valley was awash with investor funding. The timing was right for Google, and in 1999 they sought and received a second round of funding, obtaining $25 million from Silicon Valley venture capital firms. 
The additional funding enabled them to expand their operations and move into new facilities they called the ‘Googleplex,’ Google's current headquarters in Mountain View, California. Although at the time they occupied only a small portion of the new two-story building, they had clearly come a long way from a university research project to a full-fledged technology company with a rapid growth trajectory and a product in high demand. Google was also in the process of developing a unique company culture. They operated in an informal atmosphere that facilitated both collegiality and an easy exchange of ideas. Google staffers enjoyed this rewarding atmosphere while they continued to make many incremental improvements to their search engine technology. For example, in an effort to expand the utility of their keyword-targeted advertising to small businesses, they rolled out the ‘AdWords’ system, a self-service package for developing advertisements. Google took a major step forward in 2000, when Yahoo! selected it to replace Inktomi as the provider of supplementary search results. Because Google was superior to other search engines, many other companies also licensed its technology, including the Internet services powerhouse America Online (AOL), Netscape, Freeserve, and eventually Microsoft Network (MSN). In fact, although Microsoft has pursued its own search technology, Bill Gates once commented on search-engine technology development by saying that “Google kicked our butts.”[8] By the end of 2000, Google was handling more than 100 million searches each day. Shortly thereafter, Google began to deliver new innovations and establish new partnerships to enter the burgeoning field of mobile wireless computing. By expanding into this field, Google continued to pursue its strategy of putting search into the hands of as many users as possible. 
As the global use of Google grew, the patterns contained within the records of search queries provided new information about what was on the minds of the global community of Internet users. Google was able to analyze the global traffic in Internet searching and identify patterns, trends, and surprises – a process they called ‘Google Zeitgeist.’ In 2004, Yahoo! decided to compete directly with Google and discontinued its reliance on the Google search technology. Nevertheless, Google continued to expand, increasing its market share and dominance of the Web search market through the deployment of regional versions of its software, incorporating language capabilities beyond English. As a result, Google continued to grow as a global Internet force. Also in 2004, Google offered its stock to investors through an Initial Public Offering (IPO). This entrance into public trading of Google stock created not only a big stir in the financial markets, but also great wealth for the two founding entrepreneurs. Page and Brin immediately joined the billionaires’ club as they entered the exclusive ranks of the wealthiest people in the world. Following the IPO, Google began to challenge Microsoft in its role as the leading provider of computer services. They issued a series of new products, including the email service Gmail; the impressive map and satellite image product Google Earth; Google Talk, to compete in the growing Voice over Internet Protocol (VoIP) market; and Google Base and Google Book Search, products aimed at leveraging their ambitious project to make the content of thousands of books searchable online. In addition to these new ventures, they have continued to innovate in their core field of search by introducing new features for searching images, news articles, shopping services (Froogle), and other local search options. It is clear that Google has become an essential tool for connecting people and information in support of the developing Information Revolution. 
Having established itself at the epicenter of the Web, Google is widely regarded as the ‘place to be’ for the best and brightest programming talent in the industry. It is fair to say that, since the introduction of the printing press, no other entity or event has had more impact on public access to information than Google. In fact, Google has endeavored to accumulate a good part of all human knowledge from the vast amount of information stored on the Web. The effective transformation of Google into an engine for what Page calls a ‘perfect search’ would basically give people everywhere the right answers to their questions and the ability to understand everything in the world. Page and Brin could not have achieved their technological success without having a clear vision of the future of the Internet. Page recently commented in an interview that he believes that in the future "information access and communications will become truly ubiquitous,” meaning that “anyone in the world will have access to any kind of information they want or be able to communicate with anyone else instantly and for very little cost.” In fact, this vision of the future is not far from where we are now.[9] Page also noted that the real power of the Internet is the ability to serve people all over the globe with access to information that represents empowerment of individuals. The ability to facilitate the improved lives and productivity of billions of human beings throughout the world is an awesome potential outcome. And the ability to support the information needs of people from different cultures and languages is an unusual challenge. Page stated in an interview that “even language is becoming less of a barrier. There's pretty good automatic translation out there. I've been using it quite a bit as Google becomes more globalized. 
It doesn't translate documents exactly, but it does a pretty good job and it's getting better every day.”[10] Even with translation and global reach, however, there remain significant challenges to connecting the people of the world through advanced information technology. One of the challenges is the potential for governmental restrictions on access to information. Encryption technology, for example, inhibits the power of governments to monitor or control such information access. However, a 1998 survey of encryption policy found that several countries, including Belarus, China, Israel, Pakistan, Russia, and Singapore, maintained strong domestic controls, while several other countries were considering the adoption of such controls.[11] The phrase ‘Don't be evil’ has been attributed to Google as its catch phrase or motto. Google's present CEO, Eric Schmidt, commented, in response to questions about the meaning of this motto, that "evil is whatever Sergey says is evil." Brin, on the other hand, said in an interview with Playboy Magazine, “As for ‘Don’t be evil,’ we have tried to define precisely what it means to be a force for good — always do the right, ethical thing. Ultimately ‘Don’t be evil’ seems the easiest way to express it.” And Page also commented on the phrase, saying “Apparently people like it better than ‘Be good.’”[12] Page and Brin maintain lofty ambitions for the future of information technology, and they communicated those ambitions in an unprecedented seven-page letter to Wall Street entitled ‘An Owner's Manual’ for Google's Shareholders, written to detail Google's intentions as a public company. 
They explained their vision that “Searching and organizing all the world’s information is an unusually important task that should be carried out by a company that is trustworthy and interested in the public good.”[13] In response to questions about how Google will be used in the future, Brin said “Your mind is tremendously efficient at weighing an enormous amount of information. We want to make smarter search engines that do a lot of the work for us. The smarter we can make the search engine, the better. Where will it lead? Who knows? But it’s credible to imagine a leap as great as that from hunting through library stacks to a Google session, when we leap from today’s search engines to having the entirety of the world’s information as just one of our thoughts.”[14] At this juncture, Page and Brin find themselves in a state of great personal wealth and great accomplishment, having created a technology and company that is profoundly affecting human culture and society. The two computer scientists have traveled far in their hero’s journey to carry out their vision of global search, having developed skills and capabilities for themselves as well as for Google and the Googleware technology. As they succeeded, their search technology became a key milestone in the development of the Information Revolution. Their journey is not over, however. Before continuing their story, let’s digress into the historical context.

The Information Revolution

Over past millennia, the world has witnessed two global revolutions: the Agricultural Revolution and the Industrial Revolution. Before the Agricultural Revolution, a hunter-gatherer needed the resources of an area of 100 acres to produce an adequate food supply, whereas a single farmer needed only one acre of land to produce the equivalent amount of food. It was this 100-fold improvement in land management that fueled the Agricultural Revolution. 
It not only enabled far more efficient food production, but also provided food resources well above the needs of subsistence, resulting in a new era built on trade. Where a single farmer and his horse had once worked a farm, during the Industrial Revolution workers could use a single steam engine that produced 100 times the horsepower of that farmer-horse team. As a result, the Industrial Revolution placed a 100-fold increase in mechanical power into the hands of the laborer. The resulting fall in the cost of labor fueled the unprecedented acceleration in economic growth that ensued. Over the millennia, man has accumulated great knowledge, produced a treasury of cultural literature, and developed a wealth of technological advances, much of which has been recorded in written form. By the mid-twentieth century, the quantity of accessible useful information had grown explosively, requiring new methods of information management; this can be said to have triggered the Information Revolution. As computer technology offered great improvements in information management, it also provided substantial reductions in the cost of information access. It did more than allow people to receive information: individuals could buy, sell, and even create their own information. Cheap, plentiful, easily accessible information has become as powerful an economic dynamic as land and energy were for the two prior revolutions. The falling cost of information has, in part, reflected the dramatic improvement in the price-performance of microprocessors, which appears to follow a pattern of doubling every eighteen months. While the computer has been contributing to information productivity since the 1950s, the resulting global economic productivity gains were initially slow to be realized. Until the late 1990s, networks were rigid and closed, and the time to implement changes in the telecommunications industry was measured in decades. 
Since then, the Web has become the ‘grim reaper’ of information inefficiency. For the first time, ordinary people had real power over information production and dissemination. As the cost of information dropped, the microprocessor in effect gave ordinary people control over information about consumer products. Today, we are beginning to see dramatic change as service workers experience the productivity gains from rapid communications and automated business and knowledge transactions. A service worker can now complete knowledge transactions 100 times faster using intelligent software and near-ubiquitous computing than a clerk could using written records. As a result, the Information Revolution is placing a 100-fold increase in transaction speed into the hands of the service worker. The Information Revolution is therefore based on the falling cost of information-based transactions, which in turn fuels economic growth. In considering these three major revolutions in human society, a defining feature of each has been the requirement for more knowledgeable and more highly skilled workers. The Information Revolution signals that this will be a major priority for its continued growth. Clearly, the Web will play a central role in the efficient performance of the Information Revolution because it offers a powerful communication medium that is itself becoming ever more useful through intelligent applications. Over the past 50 years, the Internet/World Wide Web has grown into the global Information Superhighway. And just as roads connected the traders of the Agricultural Revolution and railroads connected the producers and consumers of the Industrial Revolution, the Web is now connecting information to people in the Information Revolution. 
The Information Revolution enables service workers today to complete knowledge transactions many times faster through intelligent software using photons over the Internet, in comparison to clerks using electrons over wired circuits just a few decades ago. But perhaps the most essential ingredient in the Web’s continued success has been search technology such as Google, which has provided real efficiency in connecting to relevant information and completing vital transactions. Google now transforms data and information into useful knowledge, energizing the Information Revolution.

Defining Information

Google started with Page’s and Brin’s quest to mine data and make sense of the voluminous information on the Web. But what differentiates information from knowledge, and how do companies like Google manipulate it on the Web to nourish the Information Revolution? First, let’s be clear about what we mean by the fundamental terms ‘data,’ ‘information,’ ‘knowledge,’ and ‘understanding.’ An item of data is a fundamental element of information; information, in turn, is processed data that has some independent usefulness. Right now, data is the main thing you can find directly on the Web in its current state. Data can be considered the raw material of information. Symbols and numbers are forms of data. Data can be organized within a database to form structured information. While spreadsheets are ‘number crunchers,’ databases are the ‘information crunchers.’ Databases are highly effective in managing and manipulating structured data.[15] Consider, for example, a directory or phone book, which contains elements of information (i.e., names, addresses, and phone numbers) about telephone customers in a particular area. In such a directory, each customer’s information is laid out in the same pattern. The phone book is basically a table that contains a record for each customer. Each customer’s record includes his name, address, and phone number. But you can’t directly search such a database on the Web. 
This is because there is no ‘schema’ defining the structure of data on the Web. Thus, what looks like information to the human being who is looking at the directory (bringing his background knowledge and experience as a context) is in reality data, because it lacks this schema. Information, on the other hand, explicitly associates one set of things with another. A telephone book full of data becomes information when we associate the data with persons we know or wish to communicate with. For example, suppose we found data entries in a telephone book for four different persons named Jones, all living within one block of each other. The fact that there are four bits of data about persons with the same name in approximately the same location is interesting information. Knowledge, in turn, can be considered a meaningful collection of useful information. We can construct information from data, and we can construct knowledge from information. Finally, we can achieve understanding from the knowledge we have gathered. Understanding lies at the highest level: it is the process by which we take existing knowledge and synthesize new knowledge. Once we have understanding, we can pursue useful actions because we can synthesize new knowledge or information from what is previously known. Again, knowledge and understanding are currently elusive on the Web; future Semantic Web architectures seek to redress this limitation. To continue our telephone example, suppose we developed a genealogy tree for the Joneses and found that the four Joneses who lived near each other were actually brothers. This would give us knowledge about the Joneses in addition to the information about their addresses. If we then interviewed the brothers and found that their father had bought each brother a house in his neighborhood when they married, we would finally understand quite a bit about them. 
We could continue the interviews to find out about their future plans for their offspring – thus producing more new knowledge. If we could manipulate data, information, knowledge, and understanding by combining a search engine, such as Google, with a reasoning engine, we could create a logic machine. Such an effort would be central to the development of Artificial Intelligence (AI) on the Web. AI systems seek to create understanding through their ability to integrate information and synthesize new knowledge from previously stored information and knowledge. An important element of AI is the principle that intelligent behavior can be achieved through the processing of symbolic structures representing increments of knowledge. This has produced knowledge-representation languages that allow the representation and manipulation of knowledge to deduce new facts from existing knowledge. The World Wide Web has become the greatest repository of information on virtually every topic. Its biggest problem, however, is the classic problem of finding a needle in a haystack. Given the vast stores of information on the Web, finding exactly what you’re looking for can be a major challenge. This is where search engines, like Google, come in ─ and where we can look for the greatest future innovations when we combine AI and search. Larry Page and Sergey Brin found that the existing search technology looked at information on the Web in simple ways. They decided that to deliver better results, they would have to go beyond simply looking, to looking good.

Looking Good

Commercial search engines are based upon one of two forms of Web search technology: human-directed search and automated search. Human-directed search is search in which the human performs an integral part of the process. In this form of search engine technology, a database is prepared of keywords, concepts, and references that can be useful to the human operator. 
Keyword-based searches are easy to conduct, but they have the disadvantage of returning large volumes of irrelevant or meaningless results. The basic idea, in its simplest form, is to count the number of words in the search query that match words in the keyword index, and rank the Web page accordingly. More sophisticated approaches also take into account the location of the keywords: for example, keywords that appear in the title tags of Web pages tend to be more significant than keywords that occur only in the body of the page. Even so, the resulting improvement in performance may be modest. Another approach is to use hierarchies of topics to assist in human-directed search. The disadvantage of this approach is that the topic hierarchies must be independently created, and are therefore expensive to create and maintain. The alternative approach is automated search; this is the path taken by Google. It uses software agents, called Web crawlers (also known as spiders, robots, bots, or agents), to automatically follow hypertext links from one site to another on the Web until they accumulate vast amounts of information about the Web pages and their interconnections. From this, a complex index can be prepared to store the relevant information. Such automated search methods accumulate information automatically and allow for continuing updates. However, even though these processes may be highly sophisticated and automatic, the information they produce is represented as links to words, not as meaningful concepts. Current automated search engines must maintain huge databases of Web page references. There are two implementations of such search engines: individual search engines and meta-searchers. Individual search engines (such as Google) accumulate their own databases of information about Web pages and their interconnections, and store them in such a way as to be searchable. 
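The keyword-counting scheme just described can be sketched in a few lines of Python. This is a hypothetical toy for illustration only, not the code of Google or any real engine; the sample pages, the function name, and the title-tag weight of 3 are all invented assumptions:

```python
from collections import Counter

def keyword_score(query, page_words, title_words=(), title_weight=3):
    """Score a page by counting query terms it contains.

    Terms found in the title tag are weighted more heavily, reflecting
    the observation that title keywords tend to be more significant.
    (The weight of 3 is an illustrative guess.)
    """
    body = Counter(w.lower() for w in page_words)
    title = Counter(w.lower() for w in title_words)
    score = 0
    for term in query.lower().split():
        score += body[term] + title_weight * title[term]
    return score

# Two toy pages: (body words, title-tag words).
pages = {
    "page_a": (["cheap", "flights", "to", "rome", "flights"], ["cheap", "flights"]),
    "page_b": (["roman", "history", "flights", "of", "fancy"], ["roman", "history"]),
}
# Rank pages by descending score for the query "cheap flights".
ranked = sorted(pages, key=lambda p: keyword_score("cheap flights", *pages[p]),
                reverse=True)
```

A scheme this simple illustrates the weakness the chapter describes: `page_b` gets a nonzero score merely for containing the word "flights" in an unrelated sense, which is exactly the kind of irrelevant match that swells keyword-search results.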
Meta-searchers, on the other hand, access multiple individual engines simultaneously, searching their databases. In the use of keywords in search engines, there are two language-based phenomena that can significantly impact effectiveness and must therefore be taken into account. The first is polysemy: single words frequently have multiple meanings. The second is synonymy: multiple words can have the same meaning or refer to the same concept. In addition, several characteristics are required to improve a search engine’s performance. It is important to distinguish useful searches from fruitless ones. To be useful, a search must meet three criteria: (1) maximize the relevant information, (2) minimize irrelevant information, and (3) make the ranking meaningful, with the most highly relevant results first. The first criterion is called recall. Without effective recall, we may be swamped with less relevant information and may, in fact, miss the most important and relevant results. It is essential to reduce the rate of false negatives ─ important and relevant results that are not displayed ─ to as low a level as possible. The second criterion, minimizing irrelevant information, is also very important to ensure that relevant results are not swamped; this criterion is called precision. If the level of precision is too low, the useful results will be highly diluted by uninteresting ones, and the user will be burdened with sifting through all of the results to find the needle in the haystack. High precision means a very low rate of false positives: irrelevant results that are highly ranked and displayed at the top of our search results. 
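The recall and precision criteria can be made concrete with a short sketch. The document labels and numbers below are hypothetical; the two formulas are the standard information-retrieval definitions of recall and precision:

```python
def recall(retrieved, relevant):
    """Fraction of all relevant documents that were actually retrieved.

    Low recall means many false negatives: relevant pages never shown.
    """
    return len(set(retrieved) & set(relevant)) / len(relevant)

def precision(retrieved, relevant):
    """Fraction of retrieved documents that are actually relevant.

    Low precision means many false positives: junk ranked and displayed.
    """
    return len(set(retrieved) & set(relevant)) / len(retrieved)

retrieved = ["d1", "d2", "d3", "d4"]   # what the engine returned
relevant = ["d1", "d4", "d7"]          # what the user actually needed

# Two of the three relevant documents were found...
assert recall(retrieved, relevant) == 2 / 3
# ...but only two of the four returned documents were useful.
assert precision(retrieved, relevant) == 0.5
```

The two measures pull against each other: returning more documents can only raise recall, but it tends to lower precision, which is why the third criterion, ranking, matters so much.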
Since there is always a tradeoff between reducing the risk of missing relevant results and reducing the level of irrelevant results, the third criterion, ranking, is very important. Ranking is most effective when it matches our information needs in terms of our perception of what is most relevant in our results. The challenge for a software system is to accurately match the expectations of a human user, since the degree of relevance of a search involves several subjective factors, such as the immediate needs of the user and the context of the search. Many of the desired characteristics for advanced search, therefore, match well with the research directions in artificial intelligence and pattern recognition. By becoming aware of individual preferences, for example, a search engine could take them into account to improve the effectiveness of search. Recognizing that ranking algorithms were the weak point in competing search technology, Page and Brin introduced their own new ranking algorithm ─ PageRank.

Google Connects Information

Just as the name Google is derived from the esoteric mathematical term ‘googol,’ in the future the direction of Google will focus on developing the esoteric ‘perfect search engine,’ defined by Page as something that "understands exactly what you mean and gives you back exactly what you want." In the past, Google has applied great innovation to try to overcome the limitations of prior search approaches; PageRank was conceived by Google to overcome some of the key limitations.[16] Page and Brin recognized that providing the fastest, most accurate search results would require a new approach to server systems. While most search engines used a small number of large servers that often slowed down under peak use, Google went the other direction by using large numbers of linked PCs to find search results in response to queries. 
The approach turned out to be effective in that it produced much faster response times and greater scalability while minimizing costs. Others have followed Google’s lead in this innovation, while Google has continued its efforts to make its systems more efficient. Google takes a parallel processing approach to its search technology by conducting a series of calculations on multiple processors. This has provided Google with a critical timing advantage, permitting its search algorithms to be very fast. While other search engines rely heavily on the simple approach of counting the occurrences of keywords, Google’s PageRank approach considers the entire link structure of the Web to help determine Web page importance. By then performing a hypertext matching assessment to narrow the search results for the particular search being conducted, Google achieves superior performance. In a sense, they combine insight into Web page importance with query-specific attributes to rank pages and deliver the most relevant results at the top of the search results. The PageRank algorithm analyzes the importance of the Web pages it considers by solving an exceptionally complex set of equations with a huge number of variables and terms. By considering links between Web pages as ‘votes’ from one page to another, PageRank can assign a measure of a page’s importance by counting its votes. It also takes into account the importance of each page that supplies a vote, and by appropriately weighting these votes, further improves the quality of the search. In addition, PageRank considers the Web page content, but unlike other search engines that restrict such consideration to the text content, Google considers the full contents of the page. In a sense, Google attempts to use the collective intelligence of the Web, a topic for further discussion later in this book, in its effort to improve the relevance of its search results. 
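The ‘votes’ intuition can be sketched with a minimal power-iteration version of PageRank, as described in Page and Brin’s published paper: each page splits its current score evenly among the pages it links to, and a damping factor keeps the scores from draining into dead ends. This toy sketch assumes an invented three-page graph and is nothing like Google’s production implementation, which runs at Web scale:

```python
def pagerank(links, d=0.85, iterations=50):
    """Compute PageRank for a graph given as {page: [pages it links to]}.

    Each page splits its 'vote' evenly among its outgoing links;
    d is the standard damping factor from the PageRank paper.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}   # damping: baseline score
        for p, outs in links.items():
            if not outs:                        # dangling page: spread evenly
                for q in pages:
                    new[q] += d * rank[p] / n
            else:                               # split this page's vote
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

# Toy Web: pages 'a' and 'b' both vote for 'hub', so 'hub' ranks highest.
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(graph)
```

Note how the weighting of votes emerges naturally: because the highly ranked ‘hub’ links only to ‘a’, page ‘a’ ends up ranked well above ‘b’, even though both receive the same number of raw links (one and zero votes would be counted equally by naive link counting).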
Finally, because the search algorithms used by Google are automated, Google has earned a reputation for objectivity and lack of bias in its results. Throughout their exciting years establishing and growing Google as a company, Page and Brin realized that continued innovation was essential. They undertook to find new innovative services that would enhance access to Web information with added thought and not a little perspiration. Page said that he respected the idea of having “a healthy disregard for the impossible.”[17] In February 2002, the Google Search Appliance, a plug-and-play application for search, was introduced. In short order, this product was dispersed throughout the world, populating company networks, university systems, and the entire Web. The popular Google Search Appliance is referred to as ‘Google in a box.’ In another initiative, Google News was introduced in September of 2002. This free news service, which allows automatic selection and arrangement of news headlines and pictures, features real-time updating and tailoring, allowing users to browse the news with scan and search capabilities. Continuing Google's emphasis on innovation, the Google search service for products, Froogle, was launched in December of 2002. Froogle allows users to search millions of commercial websites to find product and pricing information. It enables users to identify and link to a variety of sources for specific products, providing images, specifications, and pricing information for the items being sought. Google's innovations have also impacted the publishing business with both search and advertising features. Google purchased Pyra Labs in 2003, and thus became the host of Blogger, a leading service for the sharing of thoughts and opinions through online journals, or blogs (weblogs). Finally, Google Maps became a dynamic online mapping feature, and Google Earth a highly popular mapping and satellite imagery resource.
Using these innovative applications, users can find information about particular locations, get directions, and display both maps and satellite images of a desired address. With each new capability, Google expands our access to more information and moves us closer to Page’s Holy Grail: ‘perfect search.’ At this juncture, Page and Brin have finally completed their hero’s journey. They have become the Masters of Search, committed to improving access to information and lifting the bonds of ignorance from millions around the world. Pattern of Discovery Larry Page and Sergey Brin were trying to solve the problem of easy, quick access to all Web information, and ultimately to all human knowledge. In order to index existing Web information and provide rapid, relevant search results, their challenge was to sort through billions of pages of material efficiently and find the right responses. They were confident that their vision for developing a global information collection, storage, and retrieval system would succeed if they could base it on a unique and efficient ranking algorithm. The inspiration of Page and Brin was fulfilled when they completed their seminal paper entitled The Anatomy of a Large-Scale Hypertextual Web Search Engine, which explained their efficient ranking algorithm, PageRank. In developing a breakthrough ranking algorithm based upon the ideas of publication ranking, Page and Brin experienced a moment of inspiration. But they didn’t stop there. They also believed that optimization was vitally important, and so they developed their own Googleware technology, combining custom software with custom hardware in a way that reflected the founders’ genius. They built the world’s most powerful computational enterprise, and they have been on a roll ever since. Page stressed that inspiration still required perspiration and that Google appeared destined for rapid growth and expansion.
In building the customized Googleware computer infrastructure for PageRank, they were demonstrating the 1% Inspiration and 99% Perspiration pattern. The result was Google, the dominant search engine connecting people to all of the World Wide Web’s information. Forecasts for Connecting Information For many of us it seems that an uncertain future looms ahead like a massive opaque block of granite. But just as Michelangelo suggested that he took a block of stone and chipped away the non-essential pieces to produce David, we can chip away the improbable to uncover the possible. By examining inventors and their process of discovery, we are able to visualize the tapestry of our past to help unveil patterns that can serve as guideposts on our path forward. Page and Brin invented an essential search technology, but their contributions to information processing were evolutionary in nature, built on inspiration and perspiration. One forecast for connecting information is that we can expect a continued pattern of inspired innovation as we go forward in the expansion of search and related technology. Discoveries requiring inspiration and perspiration: In considering the future for connecting information, we expect that improved ranking algorithms will ensure Google’s continued dominance for some time to come. Extrapolating from Google’s success, we can expect a series of inspired innovations building upon its enterprise computer system, such as offering additional knowledge-related services. Future Google services could include expanding into multimedia areas such as television, movies, and music using Google TV and Google Mobile. Viewers would have all the history of TV to choose from, and Google would offer advertisers targeted search. Google Mobile could deliver the same services and products to cell phone technology. By 2020, Google could digitize and index every book, movie, TV show, and song ever produced, making it all conveniently available.
In addition, Google could dominate the Internet as a hub site. The ubiquitous GoogleNet would dominate wireless and cell-phone access. As for Google’s browser, Gbrowser, it could come to replace operating systems. However, our vision also includes connecting information through the development of more intelligent search capabilities. A new Web architecture, such as Tim Berners-Lee’s Semantic Web, would add knowledge representation and logic to the markup languages of the Web. Semantics on the Web would offer extraordinary leaps in Web search capabilities. Since Google has cornered online advertising, it has made advertising progressively more precision-targeted and inexpensive. Google also has 150,000 servers with nearly unlimited storage space and massive processing power. Beyond simply inspired discoveries, Google or other search engine powers could find innovations based upon new principles yet to be proven, as suggested in the following. Discoveries requiring new proof of principle: Technology futurists such as Ray Kurzweil have suggested that Strong AI (software programs that exhibit true intelligence) could emerge from developing Web-based systems such as that of Google. Strong AI could perform data mining at a whole new level. This type of innovation would require a Proof of Principle. Some have suggested that Google’s purpose in converting books into electronic form is not to provide for humans to read them, but rather to provide a form that could be accessible by software, with AI as the consumer. One of the great areas of innovation resulting from Google’s initiatives is its ability to search the Human Genome. Such technology could lead to a personal DNA search capability within the next decade. This could result in the identification of medical prescriptions that are specific to you, and you would know exactly what kinds of side effects to expect from a given drug.
And consider what might happen if we had ‘perfect search.’ Think about the capability to ask any question and get the perfect answer: an answer with real context. The answer could incorporate all of the world’s knowledge using text, video, or audio. And it would reflect every nuance of meaning. Most importantly, it would be tailored to your own particular context. That’s the stated goal of IBM, Microsoft, Google, and others. Such a capability would offer its greatest benefits when knowledge is easily gathered. Soon search will move away from PC-centric operations to a Web connected to many small devices such as mobile phones and PDAs. The most insignificant object with a chip and the ability to connect will be network-aware and searchable. And search needs to solve access to deep databases of knowledge, such as the University of California’s library system. While there are several hundred thousand books online, there are 100 million more that are not. ‘Perfect search’ will find all this information and connect us to the world’s knowledge, but this is the beginning of decision making, not the end. Search and artificial intelligence seem destined to get together. In the coming chapters, we will explore the different technologies involved in connecting information, and we will explore how the prospects for ‘perfect search’ could turn into ‘ubiquitous intelligence.’ First, ubiquitous computing populates the world with devices using microchips everywhere. Then the ubiquitous Web connects and controls these devices on a global scale. The ubiquitous Web is a pervasive Web infrastructure that allows all physical objects to be accessed by URIs, providing information and services that enrich users’ experiences in their physical context just as the Web does in cyberspace. The final step comes when artificial intelligence reaches the capability of managing and regulating devices seamlessly and invisibly within the environment, achieving ubiquitous intelligence.
Ubiquitous intelligence is the final step of Larry Page’s ‘perfect search’ and the future of the Information Revolution. References:
[1] Prather, M., “Ga-Ga for Google,” Entrepreneur Magazine, April 2002.
[2] Vise, D. A., and Malseed, M., The Google Story, Delacorte Press, New York, NY, 2005.
[3] Brin, S., and Page, L., “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Computer Science Department, Stanford University, Stanford, 1996.
[4] Brin, S., and Page, L., “The Future of the Internet,” Speech to the Commonwealth Club, March 21, 2001.
[5] Vise, D. A., and Malseed, M., The Google Story, Delacorte Press, New York, NY, 2005.
[6] Vise, D. A., and Malseed, M., The Google Story, Delacorte Press, New York, NY, 2005.
[7] “Search Us, Says Google,” interview, Technology Review, January 11, 2002.
[8] Kelleher, K., “Google vs. Gates,” Wired, Issue 12.03, March 2004.
[9] Brin, S., and Page, L., “The Future of the Internet,” Speech to the Commonwealth Club, March 21, 2001.
[10] Ibid.
[11] Cryptography and Liberty 1998: An International Survey of Encryption Policy, February 1998, http://www.gilc.org/crypto/crypto-survey.html
[12] “Google Guys,” interview, Playboy Magazine, September 2004.
[13] Google's Letter to Prospective Shareholders, http://www.thestreet.com/_yahoo/markets/marketfeatures/10157519_6.html
[14] “Google Guys,” interview, Playboy Magazine, September 2004.
[16] Quotes from http://www.google.com/corporate/tech.html
[17] Vise, D. A., and Malseed, M., The Google Story, Delacorte Press, New York, NY, 2005.
- Rear Admiral Henry Gallant | H Peter Alesso
Excerpt from the eighth book in the Henry Gallant Saga, Rear Admiral Henry Gallant. Rear Admiral Henry Gallant AMAZON Chapter 1 Far Away Captain Henry Gallant was still far away, but he could already make out the bright blue marble of Earth floating in the black velvet ocean of space. His day was flat and dreary. Since entering the solar system, he had been unable to sleep. Instead, he found himself wandering around the bridge like a marble rattling in a jar. His mind had seemingly abandoned his body to meander on its own, leaving his empty shell to limp through his routine. He hoped tomorrow would bring something better. I’ll be home soon, he thought. A welcoming image of Alaina flashed into his mind, but it was instantly shattered by the memory of their last bitter argument. The quarrel had occurred the day he was deployed to the Ross star system and had haunted him throughout the mission. Now that incident loomed like a glaring threat to his homecoming. As he stared at the main viewscreen of the Constellation, he listened to the bridge crew’s chatter. “The sensor sweep is clear, sir,” reported an operator. Gallant was tempted to put a finger to his lips and hiss, “shh,” so he could resume his brooding silence. But that would be unfair to his crew. They were as exhausted and drained from the long demanding deployment as he was. They deserved better. He plopped down into his command chair and said, “Coffee.” The auto-server delivered a steaming cup to the armrest portal. After a few gulps, the coffee woke him from his zombie state. He checked the condition of his ship on a viewscreen. The Constellation was among the largest machines ever built by human beings. She was the queen of the task force, and her crew appreciated her sheer size and strength. She carried them through space with breathtaking majesty, possessing power and might and stealth that established her as the quintessential pride of human ingenuity. 
They knew every centimeter of her from the forward viewport to the aft exhaust port. Her dull grey titanium hull didn’t glitter or sparkle, but every craggy plate on her exterior was tingling with lethal purpose. She could fly conventionally at a blistering three-tenths the speed of light between planets. And between stars, she warped at faster than the speed of light. Even now, returning from the Ross star system with her depleted starfighters, battle damage, and exhausted crew, she could face any enemy by spitting out starfighters, missiles, lasers, and plasma death. After a moment, he switched the readout to scan the other ships in the task force. Without taking special notice, he considered the material state of one ship after another. Several were in a sorrowful, dysfunctional condition, begging for a dockyard’s attention. He congratulated himself for having prepared a detailed refit schedule for when they reached the Moon’s shipyards. He hoped it would speed along the repair process. Earth’s moon would offer the beleaguered Task Force 34 the rest and restoration it deserved after its grueling operation. The Moon was the main hub of the United Planets’ fleet activities. The Luna bases were the most elaborate of all the space facilities in the Solar System. They performed ship overhauls and refits, as well as hundreds of new constructions. Luna’s main military base was named Armstrong Luna and was the home port of the 1st Fleet, fondly called the Home Fleet. Captain Julie Ann McCall caught Gallant’s eye as she rushed from the Combat Information Center onto the bridge. There was a troubled look on her face. Is she anxious to get home too? Was there someone special waiting for her? Or would she, once more, disappear into the recesses of the Solar Intelligence Agency? After all these years, she’s still a mystery to me. McCall approached him and leaned close to his face. In a hushed, throaty voice, she whispered, “Captain, we’ve received an action message.
You must read it immediately.” Her tight self-control usually obscured her emotions, but now something extraordinary appeared in her translucent blue eyes—fear! He placed his thumb over his command console ID recognition pad. A few swipes over the screen, and he saw the latest action message icon flashing red. He tapped the symbol, and it opened. TOP SECRET: ULTRA - WAR WARNING Date-time stamp: 06.11.2176.12:00 Authentication code: Alpha-Gamma 1916 To: All Solar System Commands From: Solar Intelligence Agency Subject: War Warning Diplomatic peace negotiations with the Titans have broken down. Repeat: Diplomatic peace negotiations with the Titans have broken down. What this portends is unknown, but all commands are to be on the highest alert in anticipation of the resumption of hostilities. Russell Rissa Director SIA TOP SECRET: ULTRA - WAR WARNING He reread the terse communication. As if emerging from a cocoon, Gallant brushed off his preoccupation over his forthcoming liberty. He considered the possibilities. Last month, he sent the sample Halo detection devices to Earth. He hoped that the SIA had analyzed the technology and distributed it to the fleet, though knowing government bureaucracy, he guessed that effort would need his prodding before the technology came into widespread use. Still, there should be time before it becomes urgent. The SIA had predicted that the Titans would need at least two years to rebuild their forces before they could become a threat again. Could he rely on that? Even though he was getting closer to Earth with every passing second, the light from the inner planets was several days old. Something could have already transpired. There was one immutable lesson in war: never underestimate your opponent. A shiver ran down his spine. This is bad. Very bad! Gone was the malaise that had haunted him earlier. Now, he emerged as a disciplined military strategist, intent on facing a major new challenge. 
Looking expectantly, he examined McCall’s face for an assessment. Shaking her head, she hesitated. “The picture is incomplete. I have little to offer.” Gallant needed her to be completely open and honest with him, but he was unsure how to win that kind of support. He rubbed his chin and spoke softly, “I’d like to tell you a story about a relationship I’ve had with a trusted colleague. And I’d like you to pretend that you were that colleague.” McCall furrowed her brow, but a curious gleam grew in her eyes. He said, “I’ve known this colleague long enough to know her character even though she has been secretive about her personal life and loyalties.” McCall inhaled and visibly relaxed as she exhaled. Her eyes focused their sharp acumen on Gallant. “She is bright enough to be helpful and wise enough not to be demanding,” continued Gallant. “She has offered insights into critical issues and made informed suggestions that have influenced me. She is astute and might know me better than I know myself because of the tests she has conducted. When I’ve strayed into the sensitive topic of genetic engineering, she has soothed my bumpy relationship with politicians.” He hesitated. Then added, “Yet, she has responsibilities and professional constraints on her candidness. She might be reluctant to speak openly on sensitive issues, particularly to me.” McCall’s face was a blank mask, revealing no trace of her inner response to his enticing words. He said, “If you can relate to this, I want you to consider that we are at a perilous moment. It is essential that you speak frankly to me about any insights you might have about this situation.” She swallowed and took a step closer to Gallant. Their faces were mere centimeters apart. “Very well,” she said. “The Chameleon are a spent force. After the loss of their last Great Ship, they are defenseless. They agreed to an unconditional surrender. They might even beg for our help from the Titans. 
Their moral system is like ours and should not be a concern in any forthcoming action. However, the Titans have an amoral empathy with other species.” He gave an encouraging nod. She added, “Despite the defeat of Admiral Zzey’s fleet in Ross, the Titans remain a considerable threat. They opened peace negotiations ostensibly to seek a treaty with a neutral zone between our two empires. But we can’t trust them. They are too aggressive and self-interested to keep any peace for long. One option they might try is to eliminate the Chameleon while they have the opportunity. Another is to rebuild their fleet for a future strike against us. However, the most alarming possibility would be an immediate attack against us with everything they currently have. They might even leave their home world exposed. But that would only make sense if they could achieve an immediate and overwhelming strategic victory.” Gallant grimaced as he absorbed her analysis. She concluded, “This dramatic rejection of diplomacy can only mean that they are ready to reignite the war—with a vengeance. They will strike us with swift and ruthless abandon.” Gallant turned his gaze toward the bright blue marble—still far away.
- Henry Gallant and the Great Ship | H Peter Alesso
Excerpt from the seventh book of the Henry Gallant Saga, Henry Gallant and the Great Ship. Henry Gallant and the Great Ship AMAZON Chapter 1 An Unfortunate Turn of Events As soon as the morning watch settled in, Captain Henry Gallant walked onto the Constellation’s bridge. The Officer-of-the-Deck rose and vacated the command chair without speaking. The voyage had lasted long enough for the crew to become accustomed to his routine. Habitually, during the first minutes of the day, he examined the ship’s vital operational parameters from his bedside monitor before going into CIC for a detailed task force sitrep. Blips from the combat space patrol (CSP) were visible on the main viewer. The speakers broadcast communication traffic from distant Hawkeyes. Once he had satisfied himself that all was as it should be, he appeared on the bridge and assessed the more mundane needs for the day. The OOD handed him a list of completed tasks and those that demanded his approval. During this activity, he was lost in contemplation, and no one dared interrupt his train of thought. Only after dictating his orders for the day did he relax and give a word of encouragement to the OOD. Then he disappeared below decks for his daily walkabout, where he gauged the temperament of the crew. The hour-long exercise through the spacecraft carrier allowed him to maintain his fitness. This ritual was the most efficient use of his time since it also allowed him to observe ongoing maintenance and repair activities. On the one hand, the number of administrative duties clamoring for his attention limited his time; on the other, keeping in sync with his ship’s pulse was vital to making good decisions. It brought a faint smile to his lips when he resolved to shift more of the clerical burden onto his XO. Margret Fletcher had a talent for paperwork and was known for her no-nonsense adherence to the regs. Even though he had overloaded her of late, she had responded with her usual zeal.
As he passed through compartment after compartment, he dictated audio notes into his comm pin about items that needed attention. He marched along the corridors and stepped through the open hatches, ever mindful of the crew’s attention. Although immersed in his process, the crew discerned that his military instincts were on full alert. He would notice the slightest failure of attention to detail as the men and women went about their jobs. Occasionally, he heard a laugh or good-natured ribbing. That was well. A crew that could laugh while working would faithfully execute their duties. He enjoyed the sameness of each day; it reassured him that his world remained rational. It had been two days since the Constellation had poked her nose into the Ross star system. Gallant congratulated himself on making the deployment from Earth so rapidly. It had been a long and arduous two-month grind, but Task Force 34 was finally ready to relieve Task Force 31 as guardian of this system. He shifted his mind back to the disturbing initial surveillance reports that had perplexed him for the last twenty-four hours. Task Force 31 was not visible, which by itself, wasn’t alarming. A planetary body might block their light, though they weren’t responding to radio signals either. Again, they might be on the other side of the star, and the speed of light wasn’t being accommodating. Another calculation percolated into his consciousness. He had sent Hawkeyes out on a sweep of the system. So far, nothing was amiss, but there was confusing radio chatter from the planets indicating that some horrific event had occurred recently. Gallant returned to the bridge in time to review the latest recon update. None of the information was reassuring. He noticed an anomaly in the data that prickled the hairs on the back of his neck. Though the statistics were mysteriously thin and precariously riddled with contaminated inconsistencies, they were coaxing him toward a disturbing conclusion. 
He worried his premonition might be correct and ordered the CIC to conduct an AI simulation analysis. It wasn’t long before Commander Fletcher stepped onto the bridge. “Good morning, Captain,” she said. Then with a frown, she added, “I have the results.” Gallant spun in his command chair and cast a concerned eye on her. She held a tablet by two fingers out in front of her as if she had found it in a vat of something vile. “Morning XO,” said Gallant, taking the device. Swiping through the screens, he absorbed the information while his heartbeat rose. He wanted to remain calm to reinforce his reputation as imperturbable. He didn’t want Fletcher or anyone else to suspect that he could lose his composure. But he was bursting to rush into CIC. He wanted to review the raw data to verify that it was accurate, but he knew that the analysts would have been meticulous in developing this report. She interrupted his concentration. “You were right, sir.” “Ha—h’m,” he said, clearing his throat. He took a deep breath and forced himself to appear relaxed. Fletcher shook her head and prodded, “Looks like an enormous debris field—possibly with escape pods.” She pointed to the area spread deep throughout the star system’s heart, halfway between planets Bravo and Charlie. The OOD and the chief of the watch inched closer, craning their necks to get a peek at the tablet. Gallant recalled the disturbing image of the original data. Understanding flooded over him. He visualized what must have taken place, and it took an enormous effort to suppress his emotions. She scowled. “No sign of Task Force 31.” Still, he didn’t respond. She muttered, “That doesn’t necessarily mean . . .” Everyone on the bridge gazed expectantly at him. 
Like a father who returns home to find his front door smashed open, he ordered, “OOD, open a channel to all ships.” A moment later, the OOD reported, “Channel open to all ships, Commodore.” “To all ships, this is Commodore Gallant; set general quarters, assume formation diamond 4.4.” “Aye aye, sir,” came the response from each ship. The task force split into four strike forces. Captain Jackson of the Courageous led the first strike force designated 34.1. It was followed one light hour behind by 34.2 and 34.3, led by Captain Hernandez of the Indefatigable and Captain Chu of the Inflexible, respectively. They kept a light-hour separation from each other. Finally, Gallant led Constellation and Invincible in 34.4, another light hour behind the rest. The cruisers and destroyers were split amongst the strike forces. The dispersed strike forces looked like a baseball diamond with the Constellation at home plate. It took several hours to complete the maneuver. Satisfied that the ships were sufficiently far apart for the majority to survive a blast from the Great Ship’s super-laser, he ordered, “Task Force change course to 030 Mark 2, all ahead full.” Gallant waited anxiously on the bridge for the entire twenty-four hours it took for the task force to crawl across the Ross star system. Some telltale blips appeared on the scope interspersed within a belt of asteroids. When they were finally close enough, they saw the remains of many half-dead ships. They began picking up distress signals of countless escape pods. Officers and watch-standers on the bridge stared at the viewscreen, trying to glimpse the wreckage. Gallant’s eye estimated the number of blips. They could only be the remnants of Task Force 31. It was worse than he imagined—a terrible loss of life. “OOD, prepare med-techs. Send the search and rescue teams to recover the escape pod survivors.” The initial action report was sent by the senior surviving officer, Captain Raymond. It was sketchy. 
It couldn’t be called a ‘battle’ report since not a single ship of the task force had fired a shot. After a brief visit to Constellation’s sickbay, the officer reported to Gallant’s stateroom. Raymond was not quite fifty, but his balding head, sunken eyes, and beaked nose made him appear older. His long black mustache with grey flecks drooped, making him appear to frown. His uniform was in tatters, and he had several bandaged injuries that had been tended to by the ship’s surgeon. His thickset body was powerful, but he stood slumped over, pain etched across his face. “That’s the scorched wreck of my ship, the Dauntless,” said Captain Raymond, pointing to the viewscreen. The broken battlecruiser, along with the crippled remnants of four cruisers and a dozen destroyers, was all that was left of Commodore Pearson’s Task Force 31. “Commodore Pearson’s orders were to hold the system at all costs. Admiral Graves had assured him that the Great Ship would not appear. He was told that it would have to protect the Chameleon home planet in the Cygni star system against the Titans. At least that was President Neumann’s thinking after he found out that the Chameleon had only the one Great Ship left.” “The United Planets has been in negotiation with the aliens for over a year,” said Gallant. “Was there no progress?” There was anguish in Raymond’s voice. “None. And the Chameleon were angry.” He paused, dropping his gaze. “The governor told them to shove off, no deal was possible. After that ultimatum, things turned ugly.” Gallant frowned. “Take your time and start from the beginning.” Raymond’s words were clipped. “Task Force 31 had one carrier, four battlecruisers, and two cruiser-destroyer squadrons between planets Charlie and Bravo when the Great Ship appeared. They demanded that the United Planets evacuate the star system. Well, you know Pearson, no way that was happening.
He sounded battle stations and ordered his ships to disperse to present a minimal target for the Chameleons.” When Raymond hesitated, Gallant prompted, “What happened next?” “The action was a disaster—a complete shock. The Chameleon looked at the dispersion as a threat and warned him to stand down, withdraw, or surrender. After a few minutes, they fired.” He cast his eyes down. “The single blast was so devastating that it destroyed nearly all our ships. The blinding light and searing heat crippled my Dauntless and disintegrated most of the task force. The crippled remnants launched escape pods and waited for a follow-up salvo that, mercifully, never came. We hobbled out of the way. I sent a message to the governor on Charlie.” Raymond swallowed hard and furrowed his brow. “The governor’s response was to call it ‘an unfortunate turn of events.’” “I learned later that the Chameleon had threatened to make peace with the Titans if we didn’t yield the system. They must have, since it gave them the freedom of action to leave their home world unprotected and deal with us.” He handed Gallant a flash drive. “This contains a plot of the action and the recordings of the communications between our ships and the governor. I’ve stuck my neck out to get this information on the record. You should collect and check the wreckage along with my observations.” “I understand. Some powerful men in the admiralty will be worried. I will describe the action in a detailed report to be sent to Earth,” said Gallant. He worried about how to keep Task Force 34 from suffering the same fate as its predecessor.
- Lieutenant Henry Gallant | H Peter Alesso
Excerpt from the second book in the Henry Gallant Saga, Lieutenant Henry Gallant. Lieutenant Henry Gallant 1 RUN AMAZON Gallant ran—gasping for breath, heart pounding—the echo of his footsteps reverberated behind him. He hoped to reach the bridge, but hope is a fragile thing. Peering over his shoulder into the dark, he tripped on a protruding jagged beam, one of the ship’s many battle scars. As he crashed to the deck, the final glow of emergency lights sputtered out, leaving only the pitch black of power failure—his failure. He lay still and listened to the ship’s cries of pain; the incessant wheezing of atmosphere bleeding from the many tiny hull fissures, the repetitious groaning of metal from straining structures, and the crackling of electrical wires sparking against panels. Thoughts flashed past him. How long will the oxygen last? He was reluctant to guess. Where are they? The clamor of dogged footsteps drew closer even as he rasped for another breath. Trembling from exhaustion, he clawed at the bulkhead to pull himself up. His hemorrhaging leg made even standing brutally painful. Nevertheless, he ran. The bulkhead panels and compartment hatches were indistinguishable in the dimness. Vague phantoms lurked nearby even while his eyes adjusted to whatever glowing plasma blast embers flickered from the hull. As he twisted around a corner, he crashed his shoulder into a bulkhead. The impact knocked him back and spun him around. Reaching out with a bloody hand, he grasped the hatch handle leading into the Operations compartment. Going through the hatch, he pulled it shut behind him. He started to run, then awkwardly fought his own momentum, and stopped. Stupid! Stupid! Going back to the hatch, he hit the security locking mechanism. It wouldn’t stop a plasma blast, but it might slow them down, he thought. At least this compartment is airtight. Finally able to take a deep breath, he tried to clear his head of bombarding sensations.
He should’ve been in battle armor, but he’d stayed too long in engineering trying to maintain power while the hull had been breached and the ship boarded. Now his uniform was scorched, revealing the plasma burns of seared flesh from his left shoulder down across his back to his right thigh. He had no idea where the rest of the crew was; many were probably dead. His comm pin was mute, and the ship’s AI wasn’t responding. He had only a handgun, but, so far, he didn’t think they were tracking him specifically, merely penetrating the ship to gain control. Gallant tried to run once more, but his legs were unwilling. Leaning against the bulkhead like a dead weight, he slid slowly down to the deck. Unable to go farther, he sat dripping blood and trembling as the potent grip of shock grabbed hold. The harrowing pain of his burnt flesh swept over him. Hope and fear alike abandoned him, leaving only an undeniable truth: without immediate medical treatment, he wouldn’t survive. I’m done. Closing his eyes, he fought against the pain and the black vertigo of despair. He took a deep breath and called upon the last of his inner resolve and resilience . . . No! I won’t give up. Exhaling and opening his eyes, he caught sight of the nearly invisible luminescent glow of a Red Cross symbol, offering him a glimmer of hope. He stretched his arm toward the cabinet. “Argh.” He heard a cry of agony and only belatedly realized it had escaped his own lips as he strained to pull twisted metal away from the door of a medical cabinet. Reaching inside, he grabbed a damaged medi-pack. Painstakingly, he used the meager emergency provisions to stop the bleeding and to infuse blood plasma. His limited mobility prevented him from reaching awkward areas, but he managed to insert an analgesic hypodermic into his raw, blistered flesh. Next, he crudely bandaged his suffering body.
He relaxed momentarily as the medication coursed through his veins, working to stifle the worst effects of shock and blood loss. His parched throat demanded . . . Water. He looked at more cabinets but was unable to make out their markings in the dark. Stretching his fingers, he opened the nearest one, groping for something familiar inside. No. He opened the next. No. And another. Yes. Finally, he snatched a half-buried survival kit. Greedily he drank and even managed to take a few bites of an energy bar. A surge of adrenaline helped him shift his position to sit more comfortably as his mind came into sharper focus. As he examined his surroundings in the faint light, he spotted an interface station. He was about to reach up and patch into the ship’s AI to get an update on the ship’s defensive posture when he was disturbed by the dismal clangor of footsteps. He held his breath. Are they coming this way?
- Research | H Peter Alesso
Science fiction writers engage in research and discovery to fuel their imaginations. Research AI HIVE I invite you to join my AI community. Come on a journey into the future of artificial intelligence. AI has the potential to revolutionize many aspects of our lives, from the way we work to the way we interact with the world around us. Here, we explore the latest advances in AI, discuss the technical and ethical implications of this technology, and share our thoughts on the future. We believe that AI has the potential to make the world a better place, and we are committed to using this technology to create a world where AI benefits all of humanity. Here are some of the things you can find on our website:
- Directory of leading AI companies
- News and analysis on AI software
- Discussions about AI business opportunities
- Tutorials on artificial intelligence tools
- AI experts in Silicon Valley
Video Software Laboratory The entertainment industry has always been at the forefront of technological innovation, continually transforming the way we create and consume content. In recent years, Artificial Intelligence (AI) and Computer-Generated Imagery (CGI) have become the primary forces driving this change. These cutting-edge technologies are now dominating the video landscape, opening up new possibilities for creators and redefining the limits of storytelling. AI video innovation is accelerating in Silicon Valley, where small businesses are creating AI video software tools for interchanging text, audio, and video media.
- Captain Hawkins | H Peter Alesso
book excerpt from the science fiction novel Captain Hawkins. Captain Hawkins AMAZON Only the Brave After twenty-four hours of non-stop brutal violence and cruel bloodshed, the soldiers had had no sleep and little food or water. They had repeatedly engaged in hand-to-hand combat against the demonstrators. Even though the soldiers were heavily armed and armored, they had taken serious casualties. Now, tired and angry, everyone they found looked like a rebel. The hospital had been a place of healing—now it became a makeshift prison. In a large observation room, the soldiers sorted people into three groups: the wounded men, a smaller group of women and children, and the medical personnel including Hawkins and Joshua. With bloodthirsty eagerness, the ranking officer repeated, “Take these rebels out and shoot them,” pointing to the first group. As the first group was headed toward the door, Hawkins stepped forward, planted his feet wide apart, and shouted, “Stop, Colonel!” Outraged at the ruthlessness of the order, he put his hands on his hips and said, “You can’t execute these men.” The officer turned toward the disturbance and said harshly, “It is my duty to safeguard the nation. Am I to care for the lives of rebels?” “For the sake of humanity, yes,” said Hawkins, his voice strong and vibrant. With an unyielding stare, he added, “This is still a civilized world, not a lawless state.” Crossing his arms without taking his eyes off the interloper, the immaculately attired colonel seemed disconcerted. Hawkins said, “These men have not been properly charged.” The colonel remained unimpressed. “There are always witnesses to any massacre, Colonel.” Making a grand sweeping gesture with his arms, he added, “Just look around.” The colonel frowned as he surveyed the frightened faces of the women and children. Then seeing the uncertainty on the faces of his own men, his frown deepened into an angry scowl. 
“Eventually, there’ll be a reckoning,” said Hawkins, waving his hand to take in the hellish carnage throughout the city. “The government will look for scapegoats to justify this harsh reality. It wouldn’t be prudent to be so easily identified with merciless acts.” The colonel stared daggers at Hawkins. For a moment his hand hovered over his pistol, as if he were considering putting a bullet in Hawkins’s head right then. Instead, his eyes narrowed as recognition dawned on his face. He sneered, “Why, I know you. I served with you at Gambaro Ridge.” A smile crept across his face, and he said with a strange blend of sarcasm and irony, “You were killed.” “Not quite,” responded Hawkins with an outlandish grin. “I saw you shot to pieces when you recklessly charged the enemy stronghold,” said the colonel, smirking and nodding his head. He laughed, “That was insane. You were definitely killed.” “As you say,” said Hawkins, letting a chortle escape his lips. “Your assault gave the rest of us a chance to escape,” the colonel remarked thoughtfully, considering the memory in a new light. Undecided on how to deal with such an uncommon man, the colonel pointed at him and exclaimed to his troops, “Ha! Here’s something you rarely see—a disgruntled ex-Marine.” A roar of laughter erupted from his soldiers. Hawkins threw his head back and laughed as well, “Ha!” The colonel stepped closer to inspect him. A small, jagged scar over Hawkins’s right brow was nearly hidden behind a shock of unkempt sandy brown hair, which draped over his forehead in a careless manner. He was tall with an athletic build, and he stood forward on the balls of his feet, like a boxer. His strong jaw and intense gray-blue eyes suggested an iron will. The colonel remembered Hawkins as a courageous, but utterly reckless, officer. Hawkins recognized the colonel as well. Anthony Rodríguez was swarthy, ruggedly handsome with a broad mustache and a muscular physique.
Hawkins remembered him as a fashionable man, his uniform always well-tailored. What he lacked in imagination, Rodríguez made up for as a stickler for protocol, meticulously carrying out orders to further his career. After a long moment, Rodríguez barked, “Don’t be foolish enough to believe I feel any obligation to you. You did your job. Now I’m doing mine.” Throughout the observation room, frightened people waited for the tension to burst. They realized that in many ways their fate was bound together with this tête-à-tête. Rodríguez said, “I don’t believe your battlefield antics were ever acknowledged. Some might have thought you a fool.” Stone-faced, Hawkins retorted, “Then you stand here today—alive—as a testament to my folly.” Coloring slightly, Rodríguez took a moment to recall his orders and began parsing the words to extract their broader intent. Finally, he asked, “What are you doing here? Are you a rebel?” “I’m no rebel,” said Hawkins adamantly. “The generators were failing. No technicians were left to bring up the backups, so I was called here to protect the women and children.” “Called here? By whom?” “What does that matter?” asked Hawkins. “I’ll decide what’s important,” Rodríguez snapped. Joshua spoke up, “It was me.” “What was your business here?” “I came to help.” “Help whom? Were you with the demonstrators?” “Yes, but I was looking for my mother . . .” “There. By his own admission, he’s a member of the rebels,” said the colonel, delighted at finding something clearly within the bounds of his orders. Joshua tried to explain, “I’m not a rebel. I just wanted to …” Rodríguez ordered, “Put him with the rest of the rebels.” As the soldiers pulled Joshua away and placed him with the group of rebels, Hawkins said, “He’s just a boy. He was involved in things beyond his understanding.” Rodríguez shot a disdainful look at Hawkins and asked, “Oh! 
Were things beyond your understanding, when you aided the rebels hiding in this building?” “I came to succor the weak and helpless, as is the duty of any man of honor,” said Hawkins. Offended and enraged, Rodríguez stormed, “No! You were aiding a rebel force attacking our nation’s capital.” “I—was—saving—lives,” spat Hawkins. “Once again!” The veiled reference to Gambaro Ridge made Rodríguez flush crimson—the emotional cocktail of anger and humiliation was so powerful that his face looked as if it would explode. His voice contorted into a rapid-fire staccato of orders, “Place this man under arrest—along with the rest of these rebels—march them all to prison.” Several pairs of hands reached out and grabbed Hawkins, but as he twisted free, several more soldiers joined the brawl. Six soldiers were as battered and bruised as Hawkins before they managed to pin him down. They bound his wrists and flung him against the wall with the rebels. His dark eyes blazing with contempt, Hawkins’s deep voice boomed, “Anthony Rodríguez, if I survive this barbarity,” he took a deep breath, and said slowly, “I hope to chance upon you—once again.” Other distraught prisoners began yelling their own protestations, but Rodríguez bellowed over the clamor, “Take them away! Take them away!”
- Fame | H Peter Alesso
A gallery of science fiction legends and their works. Science Fiction Writers Hall of Fame Isaac Asimov Asimov is one of the foundational voices of 20th-century science fiction. His work often incorporated hard science, creating an engaging blend of scientific accuracy and imaginative speculation. Known for his "Robot" and "Foundation" series, Asimov's ability to integrate scientific principles with compelling narratives has left an enduring legacy in the field. Arthur C. Clarke The author of numerous classics including "2001: A Space Odyssey," Clarke's work is notable for its visionary, often prophetic approach to future technologies and space exploration. His thoughtful, well-researched narratives stand as enduring examples of 'hard' science fiction. Robert A. Heinlein Heinlein, one of science fiction's most controversial and innovative writers, is best known for books like "Stranger in a Strange Land" and "Starship Troopers." His work is known for its strong political ideologies and exploration of societal norms. Philip K. Dick With stories often marked by paranoid and dystopian themes, Dick's work explores philosophical, sociological, and political ideas. His books like "Do Androids Dream of Electric Sheep?" inspired numerous films, solidifying his impact on popular culture. Ray Bradbury Known for his poetic prose and poignant societal commentary, Bradbury's work transcends genre. His dystopian novel "Fahrenheit 451" remains a touchstone in the canon of 20th-century literature, and his short stories continue to inspire readers and writers alike. Ursula K. Le Guin Le Guin's works, such as "The Left Hand of Darkness" and the "Earthsea" series, often explored themes of gender, sociology, and anthropology. Her lyrical prose and profound explorations of human nature have left an indelible mark on science fiction. Frank Herbert The author of the epic "Dune" series, Herbert crafted a detailed and complex future universe.
His work stands out for its intricate plotlines, political intrigue, and environmental themes. William Gibson Gibson is known for his groundbreaking cyberpunk novel "Neuromancer," where he coined the term 'cyberspace.' His speculative fiction often explores the effects of technology on society. H.G. Wells Although Wells's works were published on the cusp of the 20th century, his influence carried well into it. Known for classics like "The War of the Worlds" and "The Time Machine," Wells is often hailed as a father of science fiction. His stories, filled with innovative ideas and social commentary, have made an indelible impact on the genre. Larry Niven Known for his 'Ringworld' series and 'Known Space' stories, Niven's hard science fiction works are noted for their imaginative, scientifically plausible scenarios and compelling world-building. Octavia Butler Butler's work often incorporated elements of Afrofuturism and tackled issues of race and gender. Her "Xenogenesis" series and "Kindred" are known for their unique and poignant explorations of human nature and society. Orson Scott Card Best known for his "Ender's Game" series, Card's work combines engaging narrative with introspective examination of characters. His stories often explore ethical and moral dilemmas. Alfred Bester Bester's "The Stars My Destination" and "The Demolished Man" are considered classics of the genre. His work is recognized for its powerful narratives and innovative use of language. Kurt Vonnegut Though not strictly a science fiction writer, Vonnegut's satirical and metafictional work, like "Slaughterhouse-Five," often used sci-fi elements to highlight the absurdities of the human condition. Harlan Ellison Known for his speculative and often dystopian short stories, Ellison's work is distinguished by its cynical tone, inventive narratives, and biting social commentary. Stanislaw Lem Lem's work, such as "Solaris," often dealt with philosophical questions.
Philip José Farmer Known for his "Riverworld" series, Farmer's work often explored complex philosophical and social themes through creative world-building and the use of historical characters. He is also recognized for his innovations in the genre and the sexual explicitness of some of his work. J. G. Ballard Best known for his novels "Crash" and "High-Rise", Ballard's work often explored dystopian modernities and psychological landscapes. His themes revolved around surrealistic and post-apocalyptic visions of the human condition, earning him a unique place in the sci-fi genre. AI Science Fiction Hall of Fame As a science fiction aficionado and AI expert, there's nothing more exciting to me than exploring the relationship between sci-fi literature and artificial intelligence. Science fiction is an innovative genre, often years ahead of its time, and has influenced AI's development in ways you might not expect. But it's not just techies like us who should be interested - students of AI can learn a lot from these visionary authors. So buckle up, as we're about to embark on an insider's journey through the most famous science fiction writers in the hall of fame! The Science Fiction-AI Connection Science fiction and AI go together like peanut butter and jelly. In fact, one could argue that some of our most advanced AI concepts and technologies sprung from the seeds planted by sci-fi authors. I remember as a young techie, curled up with my dog, reading Isaac Asimov's "I, Robot". I was just a teenager, but that book completely changed how I saw the potential of AI. The Most Famous Sci-Fi Writers and their AI Visions Ready for a deep dive into the works of the greats? Let's take a closer look at some of the most famous science fiction writers in the hall of fame, and how their imaginations have shaped the AI we know today. Isaac Asimov: Crafting the Ethics of AI You can't talk about AI in science fiction without first mentioning Isaac Asimov.
His "I, Robot" introduced the world to the Three Laws of Robotics, a concept that continues to influence AI development today. As an AI student, I remember being fascinated by how Asimov's robotic laws echoed the ethical considerations we must grapple with in real-world AI. Philip K. Dick: Dreaming of Synthetic Humans Next up, Philip K. Dick. If you've seen Blade Runner, you've seen his influence at work. In "Do Androids Dream of Electric Sheep?" (the book Blade Runner is based on), Dick challenges us to question what it means to be human and how AI might blur those lines. It's a thought that has certainly kept me up late on more than a few coding nights! Arthur C. Clarke: AI, Autonomy, and Evolution Arthur C. Clarke's "2001: A Space Odyssey" has been both a source of inspiration and caution in my work. The AI character HAL 9000 is an eerie portrayal of autonomous AI systems' potential power and risks. It's a reminder that AI, like any technology, can be a double-edged sword. William Gibson: AI in Cyberspace Finally, William Gibson's "Neuromancer" gave us a vision of AI in cyberspace before the internet was even a household name. I still remember my shock reading about an AI entity in the digital ether - years later, that same concept is integral to AI in cybersecurity. The Power of Creativity These authors' works are testaments to the power of creativity in imagining the possibilities of AI. As students, you'll need to push boundaries and think outside the box - just like these authors did. Understanding Potential and Limitations The stories these authors spun provide us with vivid scenarios of AI's potential and limitations. They remind us that while AI has massive potential, it's not without its challenges and dangers. Conclusion And there we have it - our deep dive into the most famous science fiction writers in the hall of fame and their influence on AI. 
Their work is not just fiction; it's a guiding light, illuminating the path that has led us to the AI world we live in today. As students, we have the opportunity to shape the AI of tomorrow, just as these authors did. So why not learn from the best? Science Fiction Greats of the 21st Century Neal Stephenson is renowned for his complex narratives and incredibly detailed world-building. His Baroque Cycle trilogy is a historical masterpiece, while Snow Crash brought the concept of the 'Metaverse' into popular culture. China Miéville has won several prestigious awards for his 'weird fiction,' a blend of fantasy and science fiction. Books like Perdido Street Station and The City & The City are both acclaimed and popular. His work is known for its rich, evocative language and innovative concepts. Kim Stanley Robinson is best known for his Mars trilogy, an epic tale about the terraforming and colonization of Mars. He's famous for blending hard science, social commentary, and environmental themes. He continues this trend in his 21st-century works like the climate-focused New York 2140. Margaret Atwood, while also recognized for her mainstream fiction, has made significant contributions to science fiction. Her novel The Handmaid's Tale and its sequel The Testaments provide a chilling dystopian vision of a misogynistic society. Her MaddAddam trilogy further underscores her unique blend of speculative fiction and real-world commentary. Alastair Reynolds is a leading figure in the hard science fiction subgenre, known for his space opera series Revelation Space. His work, often centered around post-humanism and AI, is praised for its scientific rigor and inventive plotlines. Reynolds, a former scientist at the European Space Agency, incorporates authentic scientific concepts into his stories. Paolo Bacigalupi's works often deal with critical environmental and socio-economic themes. 
His debut novel The Windup Girl won both the Hugo and Nebula awards and is renowned for its bio-punk vision of the future. His YA novel, Ship Breaker, also received critical acclaim, winning the Michael L. Printz Award. Ann Leckie's debut novel Ancillary Justice, and its sequels, are notable for their exploration of AI, gender, and colonialism. Ancillary Justice won the Hugo, Nebula, and Arthur C. Clarke Awards, a rare feat in science fiction literature. Her unique narrative styles and complex world-building are highly appreciated by fans and critics alike. Iain M. Banks was a Scottish author known for his expansive and imaginative 'Culture' series. Though he passed away in 2013, his work remains influential in the genre. His complex storytelling and exploration of post-scarcity societies left a significant mark in science fiction. William Gibson is one of the key figures in the cyberpunk sub-genre, with his novel Neuromancer coining the term 'cyberspace.' In the 21st century, he continued to innovate with his Blue Ant trilogy. His influence on the genre, in terms of envisioning the impacts of technology on society, is immense. Ted Chiang is highly regarded for his thoughtful and philosophical short stories. His collection Stories of Your Life and Others includes "Story of Your Life," which was adapted into the film Arrival. Each of his carefully crafted tales explores a different scientific or philosophical premise. Charlie Jane Anders is a diverse writer who combines elements of science fiction, fantasy, and more in her books. Her novel All the Birds in the Sky won the 2017 Nebula Award for Best Novel. She's also known for her work as an editor of the science fiction site io9. N.K. Jemisin is the first author to win the Hugo Award for Best Novel three years in a row, for her Broken Earth Trilogy. Her works are celebrated for their diverse characters, intricate world-building, and exploration of social issues. 
She's one of the most influential contemporary voices in fantasy and science fiction. Liu Cixin is China's most prominent science fiction writer and the first Asian author to win the Hugo Award for Best Novel, for The Three-Body Problem. His Remembrance of Earth's Past trilogy is praised for its grand scale and exploration of cosmic civilizations. His work blends hard science with complex philosophical ideas. John Scalzi is known for his accessible writing style and humor. His Old Man's War series is a popular military science fiction saga, and his standalone novel Redshirts won the 2013 Hugo Award for Best Novel. He's also recognized for his blog "Whatever," where he discusses writing, politics, and more. Cory Doctorow is both a prolific author and an advocate for internet freedom. His novel Little Brother, a critique of increased surveillance, is frequently used in educational settings. His other novels, like Down and Out in the Magic Kingdom, are known for their examination of digital rights and technology's impact on society. Octavia Butler (1947-2006) was an award-winning author known for her incisive exploration of race, gender, and societal structures within speculative fiction. Her works like the Parable series and Fledgling have continued to influence and inspire readers well into the 21st century. Her final novel, Fledgling, a unique take on vampire mythology, was published in 2005. Peter F. Hamilton is best known for his space opera series such as the Night's Dawn trilogy and the Commonwealth Saga. His work is often noted for its scale, complex plotting, and exploration of advanced technology and alien civilizations. Despite their length, his books are praised for maintaining tension and delivering satisfying conclusions. Ken Liu is a prolific author and translator in science fiction. His short story "The Paper Menagerie" is the first work of fiction to win the Nebula, Hugo, and World Fantasy Awards. 
As a translator, he's known for bringing Liu Cixin's The Three-Body Problem to English-speaking readers. Ian McDonald is a British author known for his vibrant and diverse settings, from a future India in River of Gods to a colonized Moon in the Luna series. His work often mixes science fiction with other genres, and his narrative style has been praised as vivid and cinematic. He has won several awards, including the Hugo, for his novellas and novels. James S.A. Corey is the pen name of collaborators Daniel Abraham and Ty Franck. They're known for The Expanse series, a modern space opera exploring politics, humanity, and survival across the solar system. The series has been adapted into a critically acclaimed television series. Becky Chambers is praised for her optimistic, character-driven novels. Her debut, The Long Way to a Small, Angry Planet, kickstarted the popular Wayfarers series and was shortlisted for the Arthur C. Clarke Award. Her focus on interpersonal relationships and diverse cultures sets her work apart from more traditional space operas. Yoon Ha Lee's Machineries of Empire trilogy, beginning with Ninefox Gambit, is celebrated for its complex world-building and innovative use of technology. The series is known for its intricate blend of science, magic, and politics. Lee is also noted for his exploration of gender and identity in his works. Ada Palmer's Terra Ignota series is a speculative future history that blends philosophy, politics, and social issues in a post-scarcity society. The first book in the series, Too Like the Lightning, was a finalist for the Hugo Award for Best Novel. Her work is appreciated for its unique narrative voice and in-depth world-building. Charlie Stross specializes in hard science fiction and space opera, with notable works including the Singularity Sky series and the Laundry Files series. His books often feature themes such as artificial intelligence, post-humanism, and technological singularity. 
His novella "Palimpsest" won the Hugo Award in 2010. Kameron Hurley is known for her raw and gritty approach to science fiction and fantasy. Her novel The Light Brigade is a time-bending military science fiction story, while her Bel Dame Apocrypha series has been praised for its unique world-building. Hurley's work often explores themes of gender, power, and violence. Andy Weir shot to fame with his debut novel The Martian, a hard science fiction tale about a man stranded on Mars. It was adapted into a successful Hollywood film starring Matt Damon. His later works, Artemis and Project Hail Mary, continue his trend of scientifically rigorous, yet accessible storytelling. Jeff VanderMeer is a central figure in the New Weird genre, blending elements of science fiction, fantasy, and horror. His Southern Reach Trilogy, starting with Annihilation, explores ecological themes through a mysterious, surreal narrative. The trilogy has been widely praised, with Annihilation adapted into a major motion picture. Nnedi Okorafor's Africanfuturist works blend science fiction, fantasy, and African culture. Her novella Binti won both the Hugo and Nebula awards. Her works are often celebrated for their unique settings, compelling characters, and exploration of themes such as cultural conflict and identity. Claire North is a pen name of Catherine Webb, who also writes under Kate Griffin. As North, she has written several critically acclaimed novels, including The First Fifteen Lives of Harry August, which won the John W. Campbell Memorial Award for Best Science Fiction Novel. Her works are known for their unique concepts and thoughtful exploration of time and memory. M.R. Carey is the pen name of Mike Carey, known for his mix of horror and science fiction. His novel The Girl With All the Gifts is a fresh take on the zombie genre, and it was later adapted into a film. Carey's works are celebrated for their compelling characters and interesting twists on genre conventions. 
Greg Egan is an Australian author known for his hard science fiction novels and short stories. His works often delve into complex scientific and mathematical concepts, such as artificial life and the nature of consciousness. His novel Diaspora is considered a classic of hard science fiction. Steven Erikson is best known for his epic fantasy series, the Malazan Book of the Fallen. However, he has also made significant contributions to science fiction with works like Rejoice, a Knife to the Heart. His works are known for their complex narratives, expansive world-building, and philosophical undertones. Vernor Vinge is a retired San Diego State University professor of mathematics and computer science and a Hugo award-winning science fiction author. Although his most famous work, A Fire Upon the Deep, was published in the 20th century, his later work, including the sequel, The Children of the Sky, has continued to influence the genre. He is also known for his 1993 essay "The Coming Technological Singularity," in which he argues that rapid technological progress will soon lead to the end of the human era. Jo Walton has written several novels that mix science fiction and fantasy, including the Hugo and Nebula-winning Among Others. Her Thessaly series, starting with The Just City, is a thought experiment about establishing Plato's Republic in the ancient past. She is also known for her non-fiction work on the history of science fiction and fantasy. Hugh Howey is best known for his series Wool, which started as a self-published short story and grew into a successful series. His works often explore post-apocalyptic settings and the struggle for survival and freedom. Howey's success has been a notable example of the potential of self-publishing in the digital age. Richard K. Morgan is a British author known for his cyberpunk and dystopian narratives. His debut novel Altered Carbon, a hardboiled cyberpunk mystery, was adapted into a Netflix series.
His works are characterized by action-packed plots, gritty settings, and exploration of identity and human nature. Hannu Rajaniemi is a Finnish author known for his unique blend of hard science and imaginative concepts. His debut novel, The Quantum Thief, and its sequels have been praised for their inventive ideas and complex, layered narratives. Rajaniemi, who holds a Ph.D. in mathematical physics, incorporates authentic scientific concepts into his fiction. Stephen Baxter is a British author who often writes hard science fiction. His Xeelee sequence is an expansive future history series covering billions of years. Baxter is known for his rigorous application of scientific principles and his exploration of cosmic scale and deep time. C.J. Cherryh is an American author who has written more than 60 books since the mid-1970s. Her Foreigner series, which began in the late '90s and has continued into the 21st century, is a notable science fiction series focusing on political conflict and cultural interaction. She has won multiple Hugo Awards and was named a Grand Master by the Science Fiction and Fantasy Writers of America. Elizabeth Bear is an American author known for her diverse range of science fiction and fantasy novels. Her novel Hammered, which combines cybernetics and Norse mythology, started the acclaimed Jenny Casey trilogy. She has won multiple awards, including the Hugo, for her novels and short stories. Larry Niven is an American author best known for his Ringworld series, which won the Hugo, Nebula, and Locus awards. In the 21st century, he continued the series and collaborated with other authors on several other works, including the Bowl of Heaven series with Gregory Benford. His works often explore hard science concepts and future history. David Mitchell is known for his genre-blending novels, such as Cloud Atlas, which weaves six interconnected stories ranging from historical fiction to post-apocalyptic science fiction. 
The novel was shortlisted for the Booker Prize and adapted into a film. His works often explore themes of reality, identity, and interconnectedness. Robert J. Sawyer is a Canadian author known for his accessible style and blend of hard science fiction with philosophical and ethical themes. His Neanderthal Parallax trilogy, which started in 2002, examines an alternate world where Neanderthals became the dominant species. He is a recipient of the Hugo, Nebula, and John W. Campbell Memorial awards. Daniel Suarez is known for his high-tech thrillers. His debut novel Daemon and its sequel Freedom™ explore the implications of autonomous computer programs on society. His books are praised for their action-packed narratives and thought-provoking themes related to technology and society. Kazuo Ishiguro is a Nobel Prize-winning author, known for his poignant and thoughtful novels. Never Let Me Go, published in 2005, combines elements of science fiction and dystopian fiction in a heartbreaking narrative about cloned children raised for organ donation. Ishiguro's work often grapples with themes of memory, time, and self-delusion. Malka Older is a humanitarian worker and author known for her Infomocracy trilogy. The series, starting with Infomocracy, presents a near-future world where micro-democracy has become the dominant form of government. Her work stands out for its political savvy and exploration of information technology. James Lovegrove is a versatile British author, known for his Age of Odin series and Pantheon series which blend science fiction with mythology. His Firefly novel series, based on the popular Joss Whedon TV show, has been well received by fans. He's praised for his engaging writing style and inventive blending of genres. Emily St. John Mandel is known for her post-apocalyptic novel Station Eleven, which won the Arthur C. Clarke Award and was a finalist for the National Book Award and the PEN/Faulkner Award. 
Her works often explore themes of memory, fate, and interconnectedness. Her writing is praised for its evocative prose and depth of character. Sue Burke's debut novel Semiosis is an engaging exploration of human and alien coexistence, as well as the sentience of plants. The book was a finalist for the John W. Campbell Memorial Award and spawned a sequel, Interference. Burke's work is known for its realistic characters and unique premise. Tade Thompson is a British-born Yoruba author known for his Rosewater trilogy, an inventive blend of alien invasion and cyberpunk tropes set in a future Nigeria. The first book in the series, Rosewater, won the Arthur C. Clarke Award. His works are celebrated for their unique settings and blend of African culture with classic and innovative science fiction themes.
- Midshipman Academy | H Peter Alesso
Excerpt of the book Midshipman Henry Gallant at the Academy. Midshipman Henry Gallant at the Academy AMAZON 1 Threadbare Still a boy, not yet a man, Henry Gallant dug his stiff fingers deep into his pockets. He shivered as the bitter-cold wind clawed through his threadbare clothes. “Do you see it?” asked the elderly woman beside him, pulling her shawl tight around her. The overhead streetlamp offered little illumination as they squinted down the dark, winding dirt road. “Not yet,” said Gallant, standing on his tiptoes. The woman was a head shorter than him with a careworn face that the chill air made rosy. Her elegant features revealed that she had once been a beauty, and while time had weathered her, she had aged gracefully. Gallant stomped his feet impatiently while his mind was already racing, considering the prospects for his future. She asked, “Will you visit me when you get liberty?” “Of course, Grandmother,” he said, but he had no idea when that might be. “You know I’ve always tried to do my best, ever since . . .” Gallant took a deep breath and wrapped his arms tight around his chest. “They were heroes, you know,” she said softly. “I know,” he said as the painful memory boiled up. She had told him many times about the meteor that struck the family outpost on Phobos when he was a child. His parents had only seconds to seal him in an escape pod and couldn’t save themselves. The picture his mind conjured up was of their selfless act. Since that ordeal, he had become obsessed with controlling his emotions. He had learned to set his own rules of behavior, things he would allow himself to express and things he wouldn’t. He kissed her gently on her forehead. “You gave meaning to my parents’ sacrifice by caring for me all these years.” Her work as a clerk by day and a seamstress at night had been taxing but necessary to make ends meet. She said, “You have been a blessing to me. 
Your freelance programming helped us manage.” She brushed back a tangled lock of brown hair from his forehead and said, “I wish I could have done more to mend your clothes.” “There’s nothing wrong with them,” he said. He stretched his arms wide as proof, but he was careful not to tear open a seam. “They’re perfect.” Anxiously, he stared down the road, wishing the bus had wings. Several minutes later, he said, “I think I see lights.” She brightened. “You’ll soon have a brand-new uniform.” While the bus approached, his grandmother continued to give him last-minute advice and encouragement, but he couldn’t concentrate on her words. As he looked into her eyes and saw her love, he could only feel guilt at leaving her alone. He planned to send her his meager midshipman’s allowance. It wouldn’t be much, but it was all he could do. It will be all right, he thought. The bus sputtered to a stop in front of them. A creaking door opened. Gallant barely had time for a quick hug and kiss before getting aboard. He carried a small bag that contained a change of underclothes and a few toiletries. He made his way to a rear window seat and waved as the bus departed. He watched her figure wave back as it faded into the shadows. The darkness seemed to swallow her like a living thing. Gallant sat next to a woman holding a small spaghetti-armed child. He remained quiet, staring straight ahead. The night was dark and cold along the remote, meandering mountain road. During the first hour of his journey, he worried about leaving his grandmother alone in their tiny mountain cabin. Although it was set in a pastoral valley with a natural spring, it lacked many modern conveniences. Besides his financial contribution over the years, he helped her by taking care of daily necessities. He cleaned the solar panels and maintained the storage batteries. Unfortunately, home delivery in rural areas had not yet taken hold, so he undertook the long jet-flyer trip to the nearest store. 
Now she would have to manage on her own, and her arthritis had been acting up. How will she manage without me? His emotional baggage shifted during the second hour. While he bounced around in the obsolete vehicle, self-doubt crept in. All his weaknesses, failings, and fears blossomed full form into his mind. He had never been aboard a spaceship, wasn’t a legacy, and didn’t even know a space officer. Most likely, he would be hazed, ridiculed, and driven out as undesirable within a week. His frown deepened with each passing mile, and he began to wish he had never applied for admission to the academy. Finally, he considered getting off and catching the return bus. I’m getting too good at predicting adverse outcomes, he thought. Gallant decided that untrustworthy emotions wouldn’t control him. Instead, he would let his logical mind guide him. He tried to calculate his chances of success. Then, after weighing the pros and cons, he thought, I must be bold. He straightened his spine, lifted his head, and vanquished guilt and fear. Either I make it, or I die trying! That’s all there was to it. Everything changed after that. As daylight trickled over the last hill, the road broadened into a smoothly paved highway. The sun’s resilient brightness lifted his spirits. He couldn’t wait for the adventure to begin.
- About | H Peter Alesso
H. Peter Alesso wrote a self-portrait to reveal the history and experiences that helped him on his writing journey. My Story I love words, but that wasn't always the case. I grew up with a talent for numbers, leading me to follow a different path. I went to Annapolis and MIT and became a nuclear physicist at Lawrence Livermore National Laboratory. Only after retiring was my desire to tell stories reawakened. In recent years, I have immersed myself in the world of words, drawing on my scientific knowledge and personal experience to shape my writing. As a scientist, I explored physics and technology, which enabled me to create informative and insightful books, sharing my knowledge with readers who sought to expand their understanding in these areas—contributing to their intellectual growth while satisfying my own passion. But it was my time as a naval officer that genuinely ignited my imagination and propelled me into science fiction. After graduating from the United States Naval Academy and serving on nuclear submarines during both hot and cold wars, I witnessed firsthand the complexities and challenges of military operations that seamen face daily. This allowed me a unique perspective, which I channeled into creating Henry Gallant and a 22nd-century world where a space officer fought against invading aliens. Through this narrative, I explored the depths of human resilience, the mysteries of space, and the intricacies of military conflict. My stories let me share the highlights of my journey with you. I hope you enjoy the ride.
- Dark Genius | H Peter Alesso
Excerpt from the suspense thriller Dark Genius. Dark Genius AMAZON Time Off (Excerpt) The next morning, Lawrence gazed up at the impressive face of Mont Blanc. The chill air penetrated even his warm clothing. He resolutely tugged on his ski gloves, slung his MIT scarf around his neck, and hefted his freshly waxed skis to his shoulder—he was all set. Boots crunching across the snow, he headed for the gondola. He could see the tiny figures of skiers already skimming down the steep slopes above, and his pulse quickened. As the group shuffled toward the gondola, he nodded to several familiar faces, relieved to find neither Proust nor Maurice among them. He thought he’d seen Emma in line ahead of him and fidgeted through the whole ride, oblivious to the spectacular view that spread below him. When he reached the advanced level, he got off, pulled his goggles down, and stepped into his skis. He picked Emma out immediately, even under her goggles and sporty ski hat. “Hi,” he said with a big smile, glad they both had the morning free from meetings. “Hi,” she replied, moving to his side in one smooth fluid push. Several others said, “Hello.” He returned a nod and pulled his jacket tightly around him against the chill air. A veteran skier strolled past with weathered skin and windblown hair. He wore a turned-down smirk that challenged all comers to prove their worth. These were all experienced skiers, dressed for warmth and equipped with the best quality gear. The first pair left together, plunging down onto the black runs. Others quickly followed, separated enough to avoid interference. Finally, he and Emma were the only ones on the top of the world. They felt as though they had the mountain all to themselves. Lawrence breathed in the crisp Swiss mountain air. It felt different somehow—cleaner, freer, better. The temperature was 5°C. He said, “Wow, what a fantastic day! 
This is an amazing resort, and the snow looks perfect.” “Something tells me I’m going to like this place.” “Me too.” Emma tugged on his scarf, and with mischief in her eyes, dared him, “Race you to the bottom.” He started to ask, “What do I get if I win?” when he realized she was already ten yards ahead. Though not an expert, he was a good skier. He shoved his poles hard into the snow and leaned forward, propelling himself down the slope after her. The skis hissed smoothly on the packed powder as he pulled himself along with his poles. Picking up speed on the gradually steepening slope, he was still falling behind. Going over the first vertical drop with spine-chilling ease, he found his rhythm and felt the adrenaline rush of speed, snow, and slope. Concentrating on his own maneuvering, he couldn’t watch Emma but could tell he still wasn’t gaining on her. He leaned over his skis, pulled up his poles, and dropped into a tuck. Instantly his speed increased, and his skis drifted a little farther apart than good style dictated. His hips and knees swiveled left–right–left–right–left in smooth, sweeping micro-turns, shoulders barely moving. Still, Emma held her lead ahead of him. A cluster of trees loomed ahead. He shifted his weight to come around, the right edges of his skis biting hard into the slope, and swung past them cleanly. He straightened up and turned to avoid several rocky obstacles. He maneuvered through a series of flags on the run, carving an extended S in the snow. He was close behind Emma now and could see her looking back at him, her face alive with pleasure. He was delighted. He aimed his skis straight down the slope again and felt the joy of zooming down a 45-degree drop. The thrill of speed and mastery of the terrain far outweighed any concern of potential danger. As he followed the curve of the mountain to the left, he came upon another row of flags, black and red, fluttering in the wind. 
The slope suddenly rose up under him, his knees compressed, and at this speed, he felt the lift as he caught air. He gave a shout of pure glee. Emma was near, and she ran an S-turn through his track. The slope eased a bit, and he jammed his left pole into the snow for leverage, pushing his skis down hard. The snow sprayed out from the abrupt stop and hung, crystallized, for a moment in the still air as he looked across a shoulder of the mountain. It plunged down toward a grove of trees, black in the distance. Breathing hard, he glanced over his shoulder but couldn’t see Emma. A momentary concern flashed through his mind, but then he caught a glimpse of her through some trees to his left. He swung back downhill and zig-zagged through the mounds beneath the gondola cables, driving his poles in hard with each knee-pounding bump. With her more direct route, Emma was ahead of him again. He pushed harder, trying to catch up to her, his knees straining on each turn. Without warning, his right ski caught an edge. He flailed, struggling to regain control, skidded, and fell. Shaking himself off, he quickly regained his feet, gasping for breath, and wiped the snow off his face and goggles. He stamped his feet to make sure his bindings were still tight, then set off in pursuit of Emma once more. Gaining speed, he schussed across the undulating ground, his skis intertwining with Emma’s tracks. A row of bright-orange warning signs made him check his speed sharply. This run had taken him dangerously close to a ravine. Behind the crossed sticks he could see where the cliff dropped and didn’t stop to think how far down it went into nothingness. He carved another hard turn, angling his skis back toward the left, and raced for the tree line. Keep forward. Get your hands in front of you. Set shoulders downslope, keep knees, and hips loose. The wind buffeted him, a pounding wall of resistance against his increasing speed. The wild schuss was nearing an end. 
Pine and spruce trees rushed by him, blurring into an impenetrable wall. The sun glistened over the snow’s surface; a sharp stretch of rocks and ravines was marked by warning flags thrown into high relief. Dark shadows obscured the terrain, making the slopes more dangerous. He knew there were sheer drops on each flank of the run. He felt an absurd desire to kick off his skis and run. Instead, he kept his focus on the track ahead and ignored the folds in the landscape. Finally, he saw an opening through the trees that had hemmed him in. He veered more left and shot through it. As he straightened his course, Emma whizzed by him, so close that he felt a spray of snow. Is she really that good, or did she misjudge her position? Trees pressed against the uphill side as the run curved around the mountain’s flank, their branches brittle against the white cold of the sky. Lake Geneva now spread out in a breathtaking panorama below them. The thermometer had dropped precipitously to -3°C, and flakes of snow began to prick Lawrence’s cheek. Speed seemed no longer possible against the cold resisting wind. As the slope leveled out to the end of the run, he saw Emma out of the corner of his eye, only a few yards and scant seconds behind him. He angled his skis to cross the finish line. As his momentum slowed, he suddenly felt exhausted. His head throbbed, and his muscles ached from a combination of exertion and dehydration. His joints ground and creaked. His fingers refused to release their grip on the poles. Every sense seemed to have turned against him, and he blinked hard, his breathing labored. With an effort, he pulled off his soaked gloves and unzipped his jacket, sweating heavily. Stabbing his poles into the ground, he groaned as he bent over to unlatch his skis. Luckily the bindings sprang open easily, and he straightened painfully. The snow was falling faster now. He hadn’t noticed before. 
He cradled his stiff hands to his chest like a drowning man trying to catch his breath. The bracing wind stung his cheeks, leaving a bittersweet icy red welt. He was spent. As he looked for Emma, he wondered . . . Did I win?