H. Peter Alesso
Search Results
27 items found for ""
- Research | H Peter Alesso
Research AI HIVE I invite you to join my AI community. Come on a journey into the future of artificial intelligence. AIHIVE has the potential to revolutionize many aspects of our lives, from the way we work to the way we interact with the world around us. Here, we explore the latest advances in AI, discuss the technical and ethical implications of this technology, and share our thoughts on the future. We believe that AI has the potential to make the world a better place, and we are committed to using this ability to create a world where AI benefits all of humanity. Here are some of the things you can find on our website: a directory of leading AI companies, news and analysis on AI software, discussions about AI business opportunities, tutorials on artificial intelligence tools, and AI experts in Silicon Valley. Video Software Laboratory The entertainment industry has always been at the forefront of technological innovation, continually transforming the way we create and consume content. In recent years, Artificial Intelligence (AI) and Computer-Generated Imagery (CGI) have become the primary forces driving this change. These cutting-edge technologies are now dominating the video landscape, opening up new possibilities for creators and redefining the limits of storytelling. AI video innovation is advancing rapidly in Silicon Valley, where small businesses are creating AI video software tools that convert among text, audio, and video media.
- All Androids Lie | H Peter Alesso
All Androids Lie AMAZON THE GAME Kateryna said, “Hold still, Dear,” as she wiped the dirty smudge off the corner of Maria’s mouth. Maria asked, “Why is everyone so excited?” Kateryna said, “They’re scared of the loud noise.” “What is it?” “Fireworks. See the bright flashes exploding in the night sky,” said the girl’s mother. Maria nodded. “It’s the start of The Game,” lied her mother. “I told you all about it. Don’t you remember?” Maria shook her head, puzzled. “Everyone in the city plays, and there are terrific prizes.” Kateryna added, “What a pity you’re only four. You can’t play. I’m sooo sorry. You might have been great.” “What’s the game?” “It’s a big, big game of tag. Everyone in the city will run to escape. If you’re tagged, you lose. Everyone wants to win. It’s too bad you can’t play.” “Why can’t I play?” Kateryna said, “You’re only four. You’d get tired, cry, and make a fuss.” “I won’t. I won’t make a fuss.” “You would have been good at this game. The prizes are spectacular. Including that new doll, Laura, that you wanted so badly.” “If I win, I will get Laura?” “Yes, and lots more.” Outside, people were running and shouting. “There are candies, treats, games, and other toys for the winners. But you could never win. You would cry and quit.” “No, Mommy. I’ll be good. I want to play and win the prizes.” “I’m so sorry, Dear. The game is long and hard, and I don’t think you’re strong enough.” “Oh, Mommy, I really, really want to. I promise to be good.” Maria looked as if she was ready to throw a tantrum. “None of that, or you will lose immediately,” scolded her mother. “Please?” she asked with the most adoring smile. “Well, I don’t know,” said her mother. “There are many people who can tag you, and you must run away from all of them.” “I will. Please?” Kateryna looked appreciatively at her fair-haired daughter. The prekindergarten teacher told her that Maria was her star pupil because she was so advanced with her numbers and letters. 
She loved her toy piano and played well with the other children. Kateryna could see herself in the child, not just in the likeness of her face and features but in spirit and desire. Normally a good-natured and happy-go-lucky sort of woman, she felt she could rise to any challenge. And now, she faced her fiercest test. “If I let you play, there can be no quitting. Do you agree? Pinky Swear?” “Yes! Yes! Pinky Swear,” said Maria, jumping up and down. Static from the radio crackled behind them. The news announcer said, “This city has been a center for trade and manufacturing for key businesses along the Black Sea coast. But now its magnificent architecture and unique decor are being wiped off the face of the earth.” With steely determination, Kateryna suppressed her fears and shut the radio off. As the explosions drew near, she calmly said, “Let’s get ready!” “Keep these documents safe,” Kateryna said, tucking the papers into Maria’s coat pocket. “They are the game tickets with your name. The rules of the game are strict. And you must reach the winning flag without being tagged. You must stay close to me and don’t talk to people. Do you understand?” “Yes.” “Whenever I say run, you run. Or else, the bad men will tag you.” Maria nodded. She put a scarf around Maria’s neck and buttoned up her coat. Then she pulled up the collar before being satisfied that she would be warm. “My gloves,” squeaked Maria. “Here they are.” As they left their apartment building and stepped out onto the street, they saw people leaving their houses in panic. “Are all these people playing the game?” “Yes. See how much fun they’re having. I told you it was a popular game. You must be tough to play. Are you tough?” “Yes, Mommy.” “Are you?” her mother asked with a raised brow. The skinny four-year-old put her hands on her hips, stood like a superhero with her chest out, and shouted, “I’m tough, and I mean it!” Fairly bursting with laughter, Kateryna said, “Okay, then. 
Let’s go.” Kateryna gripped the girl’s hand firmly and said, “This way.” As they hurried, there were loud explosions throughout the city. When they reached the train station, shells were bursting high above. “Gosh! Everything is happening so fast.” “Be patient, Dear.” They managed to squeeze onto a packed rail car, but the train was slow and made many erratic stops as if it were engaged in a game of dodgeball. Soon Maria complained, “The people are scary.” Kateryna touched the girl’s cheek and said, “Be brave. We’re on a great adventure. You must be bold.” But after two hours, Maria scowled and said, “I’m cold.” As Kateryna rearranged the girl’s scarf and coat, deep frown lines bit into her face, threatening to become a permanent mask. She removed the girl’s gloves and rubbed the tiny hands. Then she planted a kiss on Maria’s rosy cheek. Maria pouted, “I’m hungry.” “Maria, you’re a troublesome thing.” Kateryna took a package out of her pocket and unwrapped a Kanapky sandwich for her. The girl took several bites and then lost interest in the rest. She sulked, “I’m thirsty.” “I don’t have any water,” said her exasperated mother. “But if you’re going to be a nasty girl, we will have to quit the game and go home immediately.” “Mommmm,” whined Maria. Nearby, a very old, cantankerous-looking woman, rumpled and wrinkled as a walnut, said, “Here, I have an extra.” She handed Maria a small water bottle. “Thank you. That’s generous of you,” said Kateryna with relief. After another hour, Maria pressed her face against the window, peering into the night as February’s frost crept along the windowpane, forming the jagged lines of an ice blossom. Suddenly, the train bounced and rocked. Pieces of steel and glass flew about. People screamed in pain. A bit of shrapnel cracked the skull of a nearby man. It made the sound of a champagne cork popping. THUNK! “Mommy, that man is bleeding.” “Shhh. It was an accident. He will be taken care of. 
We must keep moving.” They fled the train and the bombardment area. Kateryna gripped her daughter’s hand tightly and pulled her along as quickly as possible. When they reached a military checkpoint, a soldier told them it was safer to travel on the back roads. “He’s dressed like Daddy. Is Daddy playing too?” “Yes, Darling,” said Kateryna, holding back a tear. “I’m afraid he is.” “I’m scared, Mommy.” Gathering her courage, Kateryna said, “Don’t be frightened, Maria. Remember, it’s only a game. And we’re going to win. Just don’t let them tag you, okay?” “Huh ha.” In the early morning hours, the rosy glow of the sun kissed the horizon just as they reached the top of a hill. “Can we rest, Mommy? I’m tired.” “Not yet. See that bunker across the field? That’s the finish line. When we get there, we’ll win the prize.” “Oh good,” said Maria, perking up, but she could barely move. Kateryna picked her up and carried her. But after going only a hundred yards, Maria exclaimed, “Huh, oh. Mommy, are those the bad men?” pointing to the men with guns chasing them. Kateryna looked over her shoulder and said, “Yes, Maria. They are very bad men. Evil does not sleep; it waits for a chance to catch you. So, we must hurry.” She put Maria down and said, “See that bunker ahead. That’s the finish line. That’s where you turn in your ticket. Hold it fast to your chest.” Then she leaned closer and whispered, “I love you, Dearest,” though the sentiment seemed more like goodbye. “I love you too, Mommy,” said Maria, clutching her ticket. The child’s words wrapped around Kateryna like a thick warm blanket. She yelled, “Run, Maria, run!” The noise from the blasts was terrific, and the flashes of the overhead lights cast eerie shadows on their path. Cold breath steamed from their mouths as they huffed and puffed. Gripped by the full force of her worst fears, Kateryna yelled, “Run, Maria! Don’t look back! Run!” Maria ran with all the might and passion a four-year-old could muster. 
Finally, when she reached the bunker, a giant armor-clad soldier pulled her to safety. Maria jumped up and down and shouted over the din, “Did we win, Mommy? Did we win?” Then, suddenly, and loudly, Maria let out a cry that tore through the night. She sobbed unrelentingly, even as she stuttered out several snot-thick breaths. In the open field, just a dozen yards from the bunker, her mother lay face-down, sprawled out like a discarded rag doll.
- Rear Admiral Henry Gallant | H Peter Alesso
Rear Admiral Henry Gallant AMAZON Chapter 1 Far Away Captain Henry Gallant was still far away, but he could already make out the bright blue marble of Earth floating in the black velvet ocean of space. His day was flat and dreary. Since entering the solar system, he had been unable to sleep. Instead, he found himself wandering around the bridge like a marble rattling in a jar. His mind had seemingly abandoned his body to meander on its own, leaving his empty shell to limp through his routine. He hoped tomorrow would bring something better. I’ll be home soon, he thought. A welcoming image of Alaina flashed into his mind, but it was instantly shattered by the memory of their last bitter argument. The quarrel had occurred the day he was deployed to the Ross star system and had haunted him throughout the mission. Now that incident loomed like a glaring threat to his homecoming. As he stared at the main viewscreen of the Constellation, he listened to the bridge crew’s chatter. “The sensor sweep is clear, sir,” reported an operator. Gallant was tempted to put a finger to his lips and hiss, “shh,” so he could resume his brooding silence. But that would be unfair to his crew. They were as exhausted and drained from the long demanding deployment as he was. They deserved better. He plopped down into his command chair and said, “Coffee.” The auto-server delivered a steaming cup to the armrest portal. After a few gulps, the coffee woke him from his zombie state. He checked the condition of his ship on a viewscreen. The Constellation was among the largest machines ever built by human beings. She was the queen of the task force, and her crew appreciated her sheer size and strength. She carried them through space with breathtaking majesty, possessing power and might and stealth that established her as the quintessential pride of human ingenuity. They knew every centimeter of her from the forward viewport to the aft exhaust port. 
Her dull grey titanium hull didn’t glitter or sparkle, but every craggy plate on her exterior was tingling with lethal purpose. She could fly conventionally at a blistering three-tenths the speed of light between planets. And between stars, she warped at faster than the speed of light. Even now, returning from the Ross star system with her depleted starfighters, battle damage, and exhausted crew, she could face any enemy by spitting out starfighters, missiles, lasers, and plasma death. After a moment, he switched the readout to scan the other ships in the task force. Without taking special notice, he considered the material state of one ship after another. Several were in a sorrowful, dysfunctional condition, begging for a dockyard’s attention. He congratulated himself for having prepared a detailed refit schedule for when they reached the Moon’s shipyards. He hoped it would speed along the repair process. Earth’s moon would offer the beleaguered Task Force 34 the rest and restoration it deserved after its grueling operation. The Moon was the main hub of the United Planets’ fleet activities. The Luna bases were the most elaborate of all the space facilities in the Solar System. They performed ship overhauls and refits, as well as hundreds of new constructions. Luna’s main military base was named Armstrong Luna and was the home port of the 1st Fleet, fondly called the Home Fleet. Captain Julie Ann McCall caught Gallant’s eye as she rushed from the Combat Information Center onto the bridge. There was a troubled look on her face. Is she anxious to get home too? Was there someone special waiting for her? Or would she, once more, disappear into the recesses of the Solar Intelligence Agency? After all these years, she’s still a mystery to me. McCall approached him and leaned close to his face. In a hushed, throaty voice, she whispered, “Captain, we’ve received an action message. 
You must read it immediately.” Her tight self-control usually obscured her emotions, but now something extraordinary appeared in her translucent blue eyes—fear! He placed his thumb over his command console ID recognition pad. A few swipes over the screen, and he saw the latest action message icon flashing red. He tapped the symbol, and it opened.

TOP SECRET: ULTRA - WAR WARNING
Date-time stamp: 06.11.2176.12:00
Authentication code: Alpha-Gamma 1916
To: All Solar System Commands
From: Solar Intelligence Agency
Subject: War Warning
Diplomatic peace negotiations with the Titans have broken down. Repeat: Diplomatic peace negotiations with the Titans have broken down. What this portends is unknown, but all commands are to be on the highest alert in anticipation of the resumption of hostilities.
Russell Rissa
Director, SIA
TOP SECRET: ULTRA - WAR WARNING

He reread the terse communication. As if emerging from a cocoon, Gallant brushed off his preoccupation over his forthcoming liberty. He considered the possibilities. Last month, he sent the sample Halo detection devices to Earth. He hoped that the SIA had analyzed the technology and distributed it to the fleet, though knowing government bureaucracy, he guessed that effort would need his prodding before the technology came into widespread use. Still, there should be time before it becomes urgent. The SIA had predicted that the Titans would need at least two years to rebuild their forces before they could become a threat again. Could he rely on that? Even though he was getting closer to Earth with every passing second, the light from the inner planets was several days old. Something could have already transpired. There was one immutable lesson in war: never underestimate your opponent. A shiver ran down his spine. This is bad. Very bad! Gone was the malaise that had haunted him earlier. Now, he emerged as a disciplined military strategist, intent on facing a major new challenge. 
Looking expectantly, he examined McCall’s face for an assessment. Shaking her head, she hesitated. “The picture is incomplete. I have little to offer.” Gallant needed her to be completely open and honest with him, but he was unsure how to win that kind of support. He rubbed his chin and spoke softly, “I’d like to tell you a story about a relationship I’ve had with a trusted colleague. And I’d like you to pretend that you were that colleague.” McCall furrowed her brow, but a curious gleam grew in her eyes. He said, “I’ve known this colleague long enough to know her character even though she has been secretive about her personal life and loyalties.” McCall inhaled and visibly relaxed as she exhaled. Her eyes focused their sharp acumen on Gallant. “She is bright enough to be helpful and wise enough not to be demanding,” continued Gallant. “She has offered insights into critical issues and made informed suggestions that have influenced me. She is astute and might know me better than I know myself because of the tests she has conducted. When I’ve strayed into the sensitive topic of genetic engineering, she has soothed my bumpy relationship with politicians.” He hesitated. Then added, “Yet, she has responsibilities and professional constraints on her candidness. She might be reluctant to speak openly on sensitive issues, particularly to me.” McCall’s face was a blank mask, revealing no trace of her inner response to his enticing words. He said, “If you can relate to this, I want you to consider that we are at a perilous moment. It is essential that you speak frankly to me about any insights you might have about this situation.” She swallowed and took a step closer to Gallant. Their faces were mere centimeters apart. “Very well,” she said. “The Chameleon are a spent force. After the loss of their last Great Ship, they are defenseless. They agreed to an unconditional surrender. They might even beg for our help from the Titans. 
Their moral system is like ours and should not be a concern in any forthcoming action. However, the Titans feel no empathy with other species.” He gave an encouraging nod. She added, “Despite the defeat of Admiral Zzey’s fleet in Ross, the Titans remain a considerable threat. They opened peace negotiations ostensibly to seek a treaty with a neutral zone between our two empires. But we can’t trust them. They are too aggressive and self-interested to keep any peace for long. One option they might try is to eliminate the Chameleon while they have the opportunity. Another is to rebuild their fleet for a future strike against us. However, the most alarming possibility would be an immediate attack against us with everything they currently have. They might even leave their home world exposed. But that would only make sense if they could achieve an immediate and overwhelming strategic victory.” Gallant grimaced as he absorbed her analysis. She concluded, “This dramatic rejection of diplomacy can only mean that they are ready to reignite the war—with a vengeance. They will strike us with swift and ruthless abandon.” Gallant turned his gaze toward the bright blue marble—still far away.
- Thinking on the Web | H Peter Alesso
Thinking on the Web AMAZON Chapter 2 Gödel: What is Decidable? In the last chapter, we suggested that small wireless devices connected to an intelligent Web could produce ubiquitous computing and empower the Information Revolution. The Semantic Web architecture is designed to add some intelligence to the Web through machine-processing capabilities. For the Semantic Web to succeed, the expressive power of the logic added to its markup languages must be balanced against the resulting computational complexity. Therefore, it is important to evaluate both the expressive characteristics of logic languages and their inherent limitations. In fact, some options for Web logic include problems that may not be solvable through rational argument. In particular, the work of Kurt Gödel identified the concept of undecidability, where the truth or falsity of some statements cannot be determined. In this chapter, we review some of the basic principles of logic and relate them to their suitability for Web applications. First, we review the basic concepts of logic and discuss various characteristics and limitations of logical analysis. We introduce First-Order Logic (FOL) and its subsets, such as Description Logic and Horn Logic, which offer attractive characteristics for Web applications. These languages set the parameters for how expressive Web markup languages can become. Second, we investigate how logic conflicts and limitations in computer programming and Artificial Intelligence (AI) have been handled in closed environments to date. We consider how errors in logic contribute to significant ‘bugs’ that lead to crashed computer programs. Third, we review how Web architecture is used to partition the delivery of business logic from the user interface. The Web architecture keeps the logic restricted to executable code residing on the server and delivers user-interface presentations residing within the markup languages traveling over the Web. 
The Semantic Web changes this partitioned arrangement. Finally, we discuss the implications of using logic in markup languages on the Semantic Web. Philosophical and Mathematical Logic Aristotle described man as a “rational animal” and established the study of logic, beginning with the process of codifying syllogisms. A syllogism is a kind of argument in which there are three propositions: two of them premises, one a conclusion. Aristotle was the first to create a logic system that allowed predicates and subjects to be represented by letters or symbols. His logical form allowed one to substitute letters (variables) for subjects and predicates. For example: If A is predicated of all B, and B is predicated of all C, then A is predicated of all C. By “predicated,” Aristotle means that A belongs to every B, or all B's are A's. For instance, we can substitute subjects and predicates into this syllogism to get: If all humans (B's) are mortal (A), and all Greeks (C's) are humans (B's), then all Greeks (C's) are mortal (A). Today, Aristotle's system is mostly seen as being of historical value. Subsequently, other philosophers and mathematicians, such as Leibniz, developed methods to represent logic and reasoning as a series of mechanical and symbolic tasks. They were followed by logicians who developed mechanical rules to carry out logical deductions. In logic, as in grammar, a subject is what we make an assertion about, and a predicate is what we assert about the subject. Today, logic is considered to be the primary reasoning mechanism for solving problems. Logic allows us to set up systems and criteria for distinguishing acceptable arguments from unacceptable ones. The structure of arguments is based upon formal relations between the newly produced assertions and the previous ones. Through argument, we can then express inferences. Inferences are the processes by which new assertions are produced from existing ones. 
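Aristotle's schema can be mirrored with modern set operations. The sketch below is our illustration, not the book's (the example sets are invented): "all X are Y" is modeled as the subset relation, and the conclusion of the Greeks-are-mortal syllogism follows from the premises because subset inclusion is transitive.

```python
# Model "all X are Y" as the subset relation X <= Y on Python sets.
# The individuals here are invented purely for illustration.
humans = {"Socrates", "Plato", "Aristotle", "Hypatia"}   # B
greeks = {"Socrates", "Plato", "Aristotle"}              # C
mortals = humans | {"Fido"}                              # A

# Premises: all humans are mortal, all Greeks are humans.
assert humans <= mortals
assert greeks <= humans

# Conclusion: all Greeks are mortal.  It holds because
# C <= B and B <= A together imply C <= A (transitivity).
print("All Greeks are mortal:", greeks <= mortals)
```

The subset test plays the role of Aristotle's "is predicated of all": substituting different sets for A, B, and C leaves the form of the argument, and hence its validity, unchanged.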
When relationships are independent of the assertions themselves, we call them ‘formal.’ Through these processes, logic provides a mechanism for the extension of knowledge. As a result, logic provides prescriptions for reasoning by machines as well as by people. Traditionally, logic has been studied as a branch of philosophy. However, since the mid-1800s, logic has commonly been studied as a branch of mathematics and, more recently, as a branch of computer science. The scope of logic can therefore be extended to include reasoning using probability and causality. In addition, logic includes the study of the structures of fallacious arguments and paradoxes. By logic, then, we mean the study and application of the principles of reasoning and the relationships between statements, concepts, or propositions. Logic incorporates both the methods of reasoning and the validity of the results. In common language, we refer to logic in several ways: logic can be considered a framework or system of reasoning, a particular mode or process of reasoning, or the guiding principles of a field or discipline. We also use the term "logical" to describe a reasoned approach to solving a problem or reaching a decision, as opposed to the alternative "emotional" approaches of reacting or responding to a situation. As logic has developed, its scope has splintered into many distinctive branches. These distinctions serve to formalize different forms of logic as a science. The distinctions between the various branches of logic determine their limitations and expressive capabilities, which are central issues in designing the Semantic Web languages. The following sections identify some of the more important distinctions. Deductive and Inductive Reasoning Originally, logic consisted only of deductive reasoning, which was concerned with a premise and a resultant deduction. 
However, it is important to note that inductive reasoning – the study of deriving a reliable generalization from observations – has also been included in the study of logic. Correspondingly, we must distinguish between deductive validity and inductive validity. The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics. An inference is deductively valid if and only if there is no possible situation in which all the premises are true and the conclusion false. Inductive validity, on the other hand, requires us to define a reliable generalization of some set of observations. The task of providing this definition may be approached in various ways, some of which use mathematical models of probability. Paradox A paradox is an apparently true statement that seems to lead to a contradiction or to a situation that defies intuition. Typically, either the statements in question do not really imply the contradiction, or the puzzling result is not really a contradiction, or the premises themselves are not all really true (or cannot all be true together). The recognition of ambiguities, equivocations, and unstated assumptions underlying known paradoxes has often led to significant advances in science, philosophy, and mathematics. Formal and Informal Logic Formal logic (sometimes called ‘symbolic logic’) attempts to capture the nature of logical truth and inference in formal systems. A formal system consists of a formal language, a set of rules of derivation (often called ‘rules of inference’), and sometimes a set of axioms. The formal language consists of a set of discrete symbols, a syntax (i.e., the rules for the construction of a statement), and a semantics (i.e., the relationship between symbols or groups of symbols and their meanings). 
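For propositional inferences, the definition of deductive validity given above can be checked mechanically: enumerate every possible situation (truth assignment) and look for one that makes all premises true and the conclusion false. The sketch below is our illustration, not the book's; the helper name `valid` and the lambda encodings are our own.

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """An inference is deductively valid iff no truth assignment
    makes every premise true while the conclusion is false."""
    for vals in product([False, True], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # found a counterexample situation
    return True

# Modus ponens: from P and P -> Q, infer Q.  (P -> Q is "not P or Q".)
mp = valid([lambda p, q: p, lambda p, q: (not p) or q],
           lambda p, q: q, 2)

# Affirming the consequent: from Q and P -> Q, infer P.
ac = valid([lambda p, q: q, lambda p, q: (not p) or q],
           lambda p, q: p, 2)

print(mp, ac)  # modus ponens is valid; affirming the consequent is not
```

Note the cost: with n propositional variables there are 2^n situations to check, a first glimpse of the computational-complexity concerns raised above.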
Expressions in formal logic are often called ‘formulas.’ The rules of derivation and potential axioms then operate with the language to specify a set of theorems, which are formulas that are either basic axioms or true statements derivable using the axioms and rules of derivation. In the case of formal logic systems, the theorems are often interpretable as expressing logical truths (called tautologies). Formal logic encompasses a wide variety of logic systems. For instance, propositional logic and predicate logic are kinds of formal logic, as are temporal logic, modal logic, Hoare logic, and the calculus of constructions. Higher-order logics are logical systems based on a hierarchy of types. For example, Hoare logic is a formal system developed by the British computer scientist C. A. R. Hoare. The purpose of the system is to provide a set of logical rules for reasoning about the correctness of computer programs with the rigor of mathematical logic. The central feature of Hoare logic is the Hoare triple. A triple describes how the execution of a piece of code changes the state of the computation. A Hoare triple is of the form {P} C {Q}, where P and Q are assertions and C is a command. P is called the precondition and Q the postcondition. Assertions are formulas in predicate logic. An interpretation of such a triple is: whenever P holds of the state before the execution of C, then Q will hold afterwards. Alternatively, informal logic is the study of logic as it is used in natural language arguments. Informal logic is complicated by the fact that it may be very hard to extract the formal logical structure embedded in an argument. Informal logic is also more difficult because the semantics of natural language assertions is much more complicated than the semantics of formal logical systems. 
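The Hoare triple {P} C {Q} can be made concrete with a small sketch (ours, not the book's; all names here are invented). Spot-checking a triple on sample states cannot *prove* it, which is what Hoare's deductive rules are for, but it can refute a triple by finding a counterexample.

```python
# Spot-check a Hoare triple {P} C {Q} on sample states.
# pre/post are predicates on a state (a dict); command transforms a state.

def check_triple(pre, command, post, states):
    """For every sample state satisfying the precondition, run the
    command and require the postcondition to hold afterwards."""
    for s in states:
        if pre(s):
            t = command(dict(s))      # execute C on a copy of the state
            if not post(t):
                return False, s       # counterexample to {P} C {Q}
    return True, None

# The classic example:  {x >= 0}  x := x + 1  {x > 0}
pre  = lambda s: s["x"] >= 0
post = lambda s: s["x"] > 0
def command(s):
    s["x"] = s["x"] + 1
    return s

ok, witness = check_triple(pre, command, post,
                           [{"x": n} for n in range(-3, 4)])
print(ok)  # no counterexample among the sampled states
```

States with x < 0 are ignored, mirroring the convention that a triple says nothing about executions whose precondition fails.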
Mathematical Logic Mathematical logic really refers to two distinct areas of research: the first is the application of the techniques of formal logic to mathematics and mathematical reasoning, and the second is the application of mathematical techniques to the representation and analysis of formal logic. The boldest attempt to apply logic to mathematics was pioneered by the philosopher-logician Bertrand Russell. His idea was that mathematical theories were logical tautologies, and his program was to show this by means of a reduction of mathematics to logic. The various attempts to carry this out met with a series of failures, such as Russell's Paradox and the defeat of Hilbert's Program by Gödel's incompleteness theorems (which we shall describe shortly). Russell's paradox represents either of two interrelated logical contradictions. The first is a contradiction arising in the logic of sets or classes. Some sets can be members of themselves, while others cannot. The set of all sets is itself a set, and so it seems to be a member of itself. The null or empty set, however, must not be a member of itself. Now suppose that we form the set of all sets that, like the null set, are not members of themselves. The paradox arises from asking whether this set is a member of itself: it is if, and only if, it is not! The second form is a contradiction involving properties. Some properties seem to apply to themselves, while others do not. The property of being a property is itself a property, while the property of being a table is not, itself, a table. Hilbert's Program was developed in the early 1920s by the German mathematician David Hilbert. It called for a formalization of all of mathematics in axiomatic form, together with a proof that this axiomatization of mathematics is consistent. The consistency proof itself was to be carried out using only what Hilbert called ‘finitary’ methods. 
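Russell's paradox can be dramatized in code (our illustration, not the book's, and only an analogy: Python functions stand in for classes, with a predicate representing the class of things satisfying it). The "Russell class" contains exactly the classes that do not contain themselves, so asking whether it contains itself demands the value of its own negation, and the question never settles.

```python
# A predicate stands for a "class"; cls(x) asks "is x a member of cls?".
# The Russell class contains exactly the classes not containing themselves.
def russell(cls):
    return not cls(cls)

try:
    russell(russell)        # "is the Russell class a member of itself?"
    answer = "terminated with a truth value"
except RecursionError:
    answer = "no stable truth value: the question never terminates"

print(answer)
```

Evaluating `russell(russell)` asks for `not russell(russell)`, which asks for `russell(russell)` again, without end; Python surfaces the circularity as a `RecursionError`, just as naive set theory surfaces it as a contradiction.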
The special epistemological character of this type of reasoning was to yield the required justification of classical mathematics. The program was also a great influence on Kurt Gödel, whose work on the incompleteness theorems was motivated by Hilbert's Program. Although Gödel's work is generally taken to prove that Hilbert's Program cannot be carried out, the program has nevertheless continued to be influential in the philosophy of mathematics, and work on Revitalized Hilbert Programs has been central to the development of proof theory. Both the statement of Hilbert's Program and its refutation by Gödel depended upon work establishing the second area of mathematical logic, the application of mathematics to logic in the form of proof theory. Despite the negative nature of Gödel's incompleteness theorems, a result in model theory can be understood as showing how close logic came to capturing mathematics: every rigorously defined mathematical theory can be exactly captured by a First-Order Logical (FOL) theory. Thus it is apparent that the two areas of mathematical logic are complementary. Logic is extensively applied in the fields of artificial intelligence and computer science, and these fields provide a rich source of problems in formal logic. In the 1950s and 1960s, researchers predicted that when human knowledge could be expressed using logic with mathematical notation, it would be possible to create a machine that reasons, or produces artificial intelligence. This turned out to be more difficult than expected because of the complexity of human reasoning. In logic programming, a program consists of a set of axioms and rules. In symbolic and mathematical logic, proofs by humans can be computer-assisted. Using automated theorem proving, machines can find and check proofs, as well as work with proofs too lengthy to be written out by hand. However, the computational complexity of carrying out automated theorem proving is a serious limitation. 
It is a limitation that we will find in subsequent chapters significantly impacts the Semantic Web. Decidability In the 1930s, the mathematical logician Kurt Gödel shook the world of mathematics when he established that, in certain important mathematical domains, there are problems that cannot be solved, or propositions that cannot be proved or disproved, and are therefore undecidable. Whether a certain statement of first-order logic is provable as a theorem is one example; whether a polynomial equation in several variables has integer solutions is another. While humans solve problems in these domains all the time, it is not certain that arbitrary problems in these domains can always be solved. This is relevant for artificial intelligence since it is important to establish the boundaries for a problem’s solution. Kurt Gödel Kurt Gödel (shown in Figure 2-1) was born April 28, 1906 in Brünn, Austria-Hungary (now Brno, Czech Republic). He had rheumatic fever when he was six years old, and his health became a chronic concern over his lifetime. Kurt entered the University of Vienna in 1923, where he was influenced by the lectures of Philipp Furtwängler. Furtwängler was an outstanding mathematician and teacher, but in addition he was paralyzed from the neck down, which forced him to lecture from a wheelchair with an assistant to write on the board. This made a big impression on Gödel, who was very conscious of his own health. As an undergraduate, Gödel studied Russell's book Introduction to Mathematical Philosophy. He completed his doctoral dissertation under Hans Hahn in 1929. His thesis proved the completeness of the first-order functional calculus. He subsequently became a member of the faculty of the University of Vienna, where he belonged to the school of logical positivism until 1938. Gödel is best known for his 1931 proof of the "Incompleteness Theorems."
He proved fundamental results about axiomatic systems, showing that in any axiomatic mathematical system there are propositions that cannot be proved or disproved within the axioms of the system. In particular, the consistency of the axioms cannot be proved. This ended a hundred years of attempts to establish axioms and axiom-based logic systems that would put the whole of mathematics on this basis. One major attempt had been by Bertrand Russell with Principia Mathematica (1910-13). Another was Hilbert's formalism, which was dealt a severe blow by Gödel's results. The theorem did not destroy the fundamental idea of formalism, but it did demonstrate that any system would have to be more comprehensive than that envisaged by Hilbert. One consequence of Gödel's results is that a computer can never be programmed to answer all mathematical questions. In 1935, Gödel proved important results on the consistency of the axiom of choice with the other axioms of set theory. He visited Göttingen in the summer of 1938, lecturing there on his set theory research, and returned to Vienna to marry Adele Porkert in 1938. After settling in the United States, Gödel again produced work of the greatest importance. His “Consistency of the axiom of choice and of the generalized continuum-hypothesis with the axioms of set theory” (1940) is a classic of modern mathematics. In this he proved that if an axiomatic system of set theory of the type proposed by Russell and Whitehead in Principia Mathematica is consistent, then it will remain so when the axiom of choice and the generalized continuum-hypothesis are added to the system. This did not prove that these axioms were independent of the other axioms of set theory, but when this was finally established by Paul Cohen in 1963, he used the ideas of Gödel. Gödel held a chair at the Institute for Advanced Study in Princeton from 1953 until his death in 1978.
Propositional Logic Propositional logic (or calculus) is a branch of symbolic logic dealing with propositions as units and with the combinations and connectives that relate them. It can be defined as the branch of symbolic logic that deals with the relationships formed between propositions by the connectives shown below:

Symbols    Statement                                        Connective
p ∨ q      "either p is true, or q is true, or both"        disjunction
p · q      "both p and q are true"                          conjunction
p ⊃ q      "if p is true, then q is true"                   implication
p ≡ q      "p and q are either both true or both false"     equivalence

A ‘truth table’ is a complete list of the possible truth values of a statement. We use "T" to mean "true", and "F" to mean "false" (or "1" and "0" respectively). Truth tables are adequate to test validity, tautology, contradiction, contingency, consistency, and equivalence. This is important because truth tables are a mechanical application of the rules. Propositional calculus is a formal system for deduction whose atomic formulas are propositional variables. In propositional calculus, the language consists of propositional variables (or placeholders) and sentential operators (or connectives). A well-formed formula is any atomic formula or a formula built up from sentential operators. First-Order Logic (FOL) First-Order Logic (FOL), also known as first-order predicate calculus, is a systematic approach to logic based on the formulation of quantified statements such as "there exists an x such that..." or "for any x, it is the case that...”. A first-order theory is a logical system that can be derived from a set of axioms as an extension of first-order logic. FOL is distinguished from higher-order logic in that the values "x" in FOL statements are individual values and not properties. Even with this restriction, first-order logic is capable of formalizing all of set theory and most of mathematics.
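The mechanical character of truth tables makes them easy to automate. The following sketch (the function names are illustrative, not from any standard library) enumerates every assignment of truth values and uses the table to test for tautology:

```python
from itertools import product

def truth_table(variables, formula):
    """Return (assignment, value) rows, one per combination of truth values."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        rows.append((assignment, formula(**assignment)))
    return rows

def is_tautology(variables, formula):
    """A statement is a tautology if it is true on every row of its table."""
    return all(value for _, value in truth_table(variables, formula))

# Implication 'p implies q' is equivalent to 'not p or q'.
implies = lambda p, q: (not p) or q

# Modus ponens as a tautology: ((p implies q) and p) implies q.
print(is_tautology(["p", "q"], lambda p, q: implies(implies(p, q) and p, q)))  # True
```

Because the table for n variables has 2^n rows, this mechanical test is complete for propositional logic but, as discussed below, cannot be carried over to FOL, where the table may be infinite.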
Its restriction to quantification over individuals makes it difficult to use for the purposes of topology, but it is the classical logical theory underlying mathematics. The branch of mathematics called Model Theory is primarily concerned with connections between first-order properties and first-order structures. First-order languages are by their nature very restrictive, and as a result many questions cannot be discussed using them. On the other hand, first-order logics have precise grammars. Predicate calculus is quantificational and based on atomic formulas that are propositional functions, not propositions. In predicate calculus, as in grammar, a subject is what we make an assertion about, and a predicate is what we assert about the subject. Automated Inference for FOL Automated inference using first-order logic is harder than using propositional logic because variables can take on a potentially infinite number of possible values from their domain. Hence there are potentially an infinite number of ways to apply the Universal-Elimination rule of inference. A consequence of Gödel's Completeness Theorem is that FOL is only semi-decidable. That is, if a sentence is true given a set of axioms, there is a procedure that will determine this. However, if the sentence is false, then there is no guarantee that a procedure will ever determine this. In other words, the procedure may never halt in this case. As a result, the truth table method of inference is not complete for FOL because the truth table size may be infinite. Natural deduction is complete for FOL, but is not practical for automated inference because the ‘branching factor’ in the search process is too large. This is the result of the necessity to try every inference rule in every possible way using the set of known sentences. Let us consider the rule of inference known as Modus Ponens (MP). Modus Ponens is a rule of inference pertaining to the IF/THEN operator.
Modus Ponens states that if the antecedent of a conditional is true, then the consequent must also be true: (MP) Given the statements p and if p then q, infer q. The Generalized Modus Ponens (GMP) is not complete for FOL. However, Generalized Modus Ponens is complete for Knowledge Bases (KBs) containing only Horn clauses. Another very important logic, which we shall discuss in detail in Chapter 8, is Horn logic. A Horn clause is a sentence of the form: (Ax) (P1(x) ^ P2(x) ^ ... ^ Pn(x)) => Q(x) where there are 0 or more Pi's, and the Pi's and Q are positive (i.e., un-negated) literals. Horn clauses represent a subset of the set of sentences representable in FOL. For example: P(a) v Q(a) is a sentence in FOL, but is not a Horn clause. Natural deduction using GMP is complete for KBs containing only Horn clauses. Proofs start with the given axioms/premises in the KB, deriving new sentences using GMP until the goal/query sentence is derived. This defines a forward-chaining inference procedure because it moves "forward" from the KB to the goal. For example: KB = All cats like fish, cats eat everything they like, and Molly is a cat. In first-order logic then, (1) (Ax) cat(x) => likes(x, Fish) (2) (Ax)(Ay) (cat(x) ^ likes(x,y)) => eats(x,y) (3) cat(Molly) Query: Does Molly eat fish? Proof: Use GMP with (1) and (3) to derive: (4) likes(Molly, Fish) Use GMP with (3), (4) and (2) to derive: eats(Molly, Fish) Conclusion: Yes, Molly eats fish. Description Logic Description Logics (DLs) allow specifying a terminological hierarchy using a restricted set of first-order formulas. DLs have nice computational properties (they are often decidable and tractable), but the inference services are restricted to classification and subsumption. That means, given formulae describing classes, the classifier associated with a certain description logic will place them inside a hierarchy.
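The forward-chaining proof of the Molly example above can be sketched as a small loop that applies rules to a growing set of facts until nothing new can be derived. This is a toy sketch, not a full FOL prover: it works on ground (variable-free) facts, so the universally quantified rules from the text are written out for the individual Molly.

```python
# A minimal forward-chaining engine for ground Horn clauses.
# Rules are (premises, conclusion) pairs over simple string facts.

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["cat(Molly)"], "likes(Molly, Fish)"),                       # rule (1), instantiated
    (["cat(Molly)", "likes(Molly, Fish)"], "eats(Molly, Fish)"),  # rule (2), instantiated
]
derived = forward_chain(["cat(Molly)"], rules)                    # (3) is the initial fact
print("eats(Molly, Fish)" in derived)  # True: yes, Molly eats fish
```

A real Horn-clause prover would add unification so that a single quantified rule covers every individual; the fixed-point control loop, however, is exactly this simple.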
Given an instance description, the classifier will determine the most specific classes to which the instance belongs. From a modeling point of view, Description Logics correspond to Predicate Logic statements with three variables, suggesting that modeling is syntactically bound. Description Logic is one possibility for inference engines for the Semantic Web. Another possibility is based on Horn logic, which is another subset of First-Order Predicate Logic (see Figure 2-2). In addition, Description Logic and rule systems (e.g., Horn Logic) are somewhat orthogonal, which means that they overlap, but one does not subsume the other. In other words, there are capabilities in Horn logic that are complementary to those available in Description Logic. Both Description Logic and Horn Logic are critical branches of logic that highlight essential limitations and expressive powers, which are central issues in designing the Semantic Web languages. We will discuss them further in Chapter 8. Using Full First-Order Logic (FFOL) for specifying axioms requires a full-fledged automated theorem prover. However, FOL is semi-decidable, and doing inferencing becomes computationally intractable for large amounts of data and axioms. This means that in an environment like the Web, FFOL programs will not scale to handle huge amounts of knowledge. Besides, full first-order theorem proving would mean maintaining consistency throughout the Web, which is impossible. Description Logic is a fragment of FOL. FOL includes expressiveness beyond the overlap, notably: positive disjunctions; existentials; and entailment of non-ground and non-atomic conclusions. Horn FOL is another fragment of FOL. Horn Logic Program (LP) is a slight weakening of Horn FOL. "Weakening" here means that the conclusions from a given set of Horn premises that are entailed according to the Horn LP formalism are a subset of the conclusions entailed (from that same set of premises) according to the Horn FOL formalism.
However, the set of ground atomic conclusions is the same in Horn LP as in Horn FOL. For most practical purposes (e.g., relational database query answering), Horn LP is thus essentially similar in its power to Horn FOL. Horn LP is a fragment of both FOL and nonmonotonic LP. This discussion may seem esoteric, but it is precisely these types of issues that will decide both the design of the Semantic Web as well as its likelihood of success. Higher Order Logic Higher Order Logics (HOLs) provide greater expressive power than FOL, but they are even more difficult computationally. For example, in HOLs, one can have true statements that are not provable (see the discussion of Gödel’s Incompleteness Theorem). There are two aspects of this issue: higher-order syntax and higher-order semantics. If a higher-order semantics is not needed (and this is often the case), a second-order logic can often be translated into a first-order logic. In first-order semantics, variables can only range over domains of individuals or over the names of predicates and functions, but not over sets as such. In higher-order syntax, variables are allowed to appear in places where normally predicate or function symbols appear. Predicate calculus is the primary example of logic where syntax and semantics are both first-order. There are logics that have higher-order syntax but first-order semantics. Under a higher-order semantics, an equation between predicate (or function) symbols is true if and only if the symbols denote the same relation (or function). Statements expressing trust about other statements are an example requiring both higher-order syntax and higher-order semantics. To state it another way, higher-order logic is distinguished from first-order logic in several ways. The first is the scope of quantifiers; in first-order logic, it is forbidden to quantify over predicates. The second way in which higher-order logic differs from first-order logic is in the constructions that are allowed in the underlying type theory.
A higher-order predicate is a predicate that takes one or more other predicates as arguments. In general, a higher-order predicate of order n takes one or more (n − 1)th-order predicates as arguments (where n > 1). Recursion theory Recursion is the process a procedure goes through when one of the steps of the procedure involves rerunning a complete set of identical steps. In mathematics and computer science, recursion is a particular way of specifying a class of objects with the help of a reference to other objects of the class: a recursive definition defines objects in terms of the already defined objects of the class. A recursive process is one in which objects are defined in terms of other objects of the same type. Using a recurrence relation, an entire class of objects can be built up from a few initial values and a small number of rules. The Fibonacci numbers (i.e., the infinite sequence of numbers starting 0, 1, 1, 2, 3, 5, 8, 13, …, where the next number in the sequence is defined as the sum of the previous two numbers) are a commonly known recursive set. The following is a recursive definition of a person's ancestors: One's parents are one's ancestors (base case). The parents of any ancestor are also ancestors of the person under consideration (recursion step). Therefore, your ancestors include: your parents, and your parents' parents (grandparents), and your grandparents' parents, and everyone else you get by successively adding ancestors. It is convenient to think that a recursive definition defines objects in terms of "previously defined" members of the class. While recursive definitions are useful and widespread in mathematics, care must be taken to avoid self-recursion, in which an object is defined in terms of itself, leading to an infinite nesting (see Figure 1-1: “The Print Gallery” by M.C. Escher is a visual illustration of self-recursion).
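The two recursive definitions above translate directly into code. In this sketch the `parents` lookup table is a hypothetical example, not real data:

```python
def fib(n):
    """Fibonacci: each number is the sum of the previous two (base cases 0 and 1)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]

# Ancestors: one's parents are ancestors (base case); the parents of any
# ancestor are also ancestors (recursion step).
parents = {
    "you": ["mother", "father"],
    "mother": ["grandma", "grandpa"],
}

def ancestors(person):
    result = set(parents.get(person, []))       # base case
    for parent in parents.get(person, []):
        result |= ancestors(parent)             # recursion step
    return result

print(sorted(ancestors("you")))  # ['father', 'grandma', 'grandpa', 'mother']
```

Note that both functions terminate only because each recursive call moves toward a base case; a definition with no base case is exactly the self-recursion the text warns against.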
Knowledge Representation Let’s define what we mean by the fundamental terms “data,” “information,” “knowledge,” and "understanding." An item of data is a fundamental element of an application. Data can be represented by populations and labels. Data is raw; it exists and has no significance beyond its existence. It can exist in any form, usable or not. It does not have meaning by itself. Information on the other hand is an explicit association between items of data. Associations represent a function relating one set of things to another set of things. Information can be considered to be data that has been given meaning by way of relational connections. This "meaning" can be useful, but does not have to be. A relational database creates information from the data stored within it. Knowledge can be considered to be an appropriate collection of information, such that it is useful. Knowledge-based systems contain knowledge as well as information and data. A rule is an explicit functional association from a set of information things to a specific information thing. As a result, a rule is knowledge. We can construct information from data and knowledge from information and finally produce understanding from knowledge. Understanding lies at the highest level. Understanding is an interpolative and probabilistic process that is cognitive and analytical. It is the process by which one can take existing knowledge and synthesize new knowledge. One who has understanding can pursue useful actions because he can synthesize new knowledge or information from what is previously known (and understood). Understanding can build upon currently held information, knowledge, and understanding itself. AI systems possess understanding in the sense that they are able to synthesize new knowledge from previously stored information and knowledge. 
An important element of AI is the principle that intelligent behavior can be achieved through processing of symbolic structures representing increments of knowledge. This has produced knowledge-representation languages that allow the representation and manipulation of knowledge to deduce new facts from the existing knowledge. The knowledge-representation language must have a well-defined syntax and semantics while supporting inference. Three techniques have been popular for expressing knowledge representation and inference: (1) logic-based approaches, (2) rule-based systems, and (3) frames and semantic networks. Logic-based approaches use logical formulas to represent complex relationships. They require a well-defined syntax, semantics, and proof theory. The formal power of a logical theorem prover can be applied to knowledge to derive new knowledge. Logic is used as the formalism for programming languages and databases. It can also be used as a formalism to implement knowledge methodology. Any formalism that admits a declarative semantics and can be interpreted both as a programming language and a database language is a knowledge language. However, the approach is inflexible and requires great precision in stating the logical relationships. In some cases, common-sense inferences and conclusions cannot be derived, and the approach may be inefficient, especially when dealing with issues that result in large combinations of objects or concepts. Rule-based approaches are more flexible and allow the representation of knowledge using sets of IF-THEN or other conditional rules. This approach is more procedural and less formal in its logic. As a result, reasoning can be controlled through a forward or backward chaining interpreter. Frames and semantic networks capture declarative information about related objects and concepts where there is a clear class hierarchy and where the principle of inheritance can be used to infer the characteristics of members of a subclass.
The two forms of reasoning in this technique are matching (i.e., identification of objects having common properties), and property inheritance in which properties are inferred for a subclass. Frames and semantic networks are limited to representation and inference of relatively simple systems. In each of these approaches, the knowledge-representation component (i.e., problem-specific rules and facts) is separate from the problem-solving and inference procedures. For the Semantic Web to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning. AI researchers have studied such systems and produced today’s Knowledge Representation (KR). KR is currently in a state comparable to that of hypertext before the advent of the Web. Knowledge representation contains the seeds of important applications, but to fully realize its potential, it must be linked into a comprehensive global system. Computational Logic Programming a computer involves creating a sequence of logical instructions that the computer will use to perform a wide variety of tasks. While it is possible to create programs directly in machine language, it is uncommon for programmers to work at this level because of the abstract nature of the instructions. It is better to write programs in a simple text file using a high-level programming language which can later be compiled into executable code. The ‘logic model’ for programming is a basic element that communicates the logic behind a program. A logic model can be a graphic representation of a program illustrating the logical relationships between program elements and the flow of calculation, data manipulation or decisions as the program executes its steps. Logic models typically use diagrams, flow sheets, or some other type of visual schematic to convey relationships between programmatic inputs, processes, and outcomes. 
Logic models attempt to show the links in a chain of reasoning about relationships to the desired goal. The desired goal is usually shown as the last link in the model. A logic program may consist of a set of axioms and a goal statement. The logic form can be a set of ‘IF-THEN’ statements. The rules of inference are applied to determine whether the axioms are sufficient to ensure the truth of the goal statement. The execution of a logic program corresponds to the construction of a proof of the goal statement from the axioms. In the logic programming model, the programmer is responsible for specifying the basic logical relationships and does not specify the manner in which the inference rules are applied. Thus: Logic + Control = Algorithms. The operational semantics of logic programs correspond to logical inference. The declarative semantics of logic programs are derived from the term model. The denotational semantics of logic programs are defined in terms of a function that assigns meaning to the program. There is a close relation between the axiomatic semantics of imperative programs and logic programs. The control portion of the equation is provided by an inference engine whose role is to derive theorems based on the set of axioms provided by the programmer. The inference engine uses the operations of resolution and unification to construct proofs. Faulty logic models occur when the essential problem has not been clearly stated or defined. Program developers work carefully to construct logic models to avoid logic conflicts, recursive loops, and paradoxes within their computer programs. As a result, programming logic should lead to executable code without paradox or conflict, if it is flawlessly produced. Nevertheless, we know that ‘bugs’ or programming errors do occur, some of which are directly or indirectly a result of logic conflicts.
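The division of labor in the equation Logic + Control = Algorithms can be sketched in miniature: the programmer supplies only rules and facts (the logic), while a goal-directed search procedure supplies the control. This toy interpreter (the rule and fact names are purely illustrative) handles ground facts only, with none of the resolution and unification a real inference engine would use:

```python
# The logic: rules map a conclusion to lists of premises, plus given facts.
rules = {
    "goal": [["axiom1", "lemma"]],   # goal holds if axiom1 and lemma both hold
    "lemma": [["axiom2"]],           # lemma holds if axiom2 holds
}
facts = {"axiom1", "axiom2"}

# The control: a backward-chaining proof search supplied by the interpreter,
# not by the programmer.
def prove(goal):
    """Try to construct a proof of the goal from the axioms."""
    if goal in facts:                      # the goal is a given axiom
        return True
    for premises in rules.get(goal, []):   # try each rule concluding the goal
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("goal"))  # True: proved via lemma, which follows from axiom2
```

Backward chaining works from the goal toward the axioms, in contrast to the forward-chaining procedure described earlier, which works from the axioms toward the goal.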
As programs have grown in size from thousands of lines of code to millions of lines, the problems of ‘bugs’ and logic conflicts have also grown. Today, programs such as operating systems can have over 25 million lines of code and are considered to have hundreds of thousands of ‘bugs,’ most of which are seldom encountered during routine program usage. Confining logic issues to beta-testing on local servers allows programmers reasonable control of conflict resolution. Now consider applying many lines of application code logic to the Semantic Web, where it may access many information nodes. The magnitude of the potential conflicts could be somewhat daunting. Artificial Intelligence John McCarthy contributed the term ‘Artificial Intelligence’ (AI), and by the late 1950s, there were many researchers in AI working on programming computers. Eventually, AI expanded into such fields as philosophy, psychology, and biology. AI is sometimes described in two ways: strong AI and weak AI. Strong AI asserts that computers can be made to think on a level equal to humans. Weak AI simply holds that some ‘thinking-like’ features can be added to computers to make them more useful tools. Examples of weak AI abound: expert systems, drive-by-wire cars, smart browsers, and speech recognition software. These weak AI components may, when combined, begin to approach the expectations of strong AI. AI includes the study of computers that can perform cognitive tasks including: understanding natural language statements, recognizing visual patterns or scenes, diagnosing diseases or illnesses, solving mathematical problems, performing financial analyses, learning new procedures for problem solving, and playing complex games, like chess. We will provide a more detailed discussion on Artificial Intelligence on the Web and what is meant by machine intelligence in Chapter 3.
Web Architecture and Business Logic So far we have explored the basic elements, characteristics, and limitations of logic and suggested that errors in logic contribute to many significant ‘bugs’ that lead to crashed computer programs. Next we will review how Web architecture is used to partition the delivery of business logic from the user interface. The Web architecture keeps the logic restricted to executable code residing on the server, while delivering user interface presentations within the markup languages traveling along the Internet. This simple arrangement of segregating the complexity of logic to the executable programs residing on servers has minimized processing difficulties over the Web itself. Today, markup languages are not equipped with logic connectives, so all complex logic and detailed calculations must be carried out by specially compiled programs residing on Web servers, where they are accessed by server page frameworks. The result is that highly efficient application programs on the server must communicate very inefficiently with other proprietary applications, using XML in simple ASCII text. In addition, there is difficulty in interoperable programming, which greatly inhibits automation of Web Services. Browsers such as Internet Explorer and Netscape Navigator view Web pages written in HyperText Markup Language (HTML). The HTML program can be written to a simple text file that is recognized by the browser, and it can call embedded script programming. In addition, HTML can include compiler directives that call server pages with access to proprietary compiled programming. As a result, simple-text HTML is empowered with important capabilities to call complex business logic programming residing on servers, both in the frameworks of Microsoft’s .NET and Sun’s J2EE. These frameworks support Web Services and form a vital part of today’s Web.
When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn't provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program provides functions such as transaction processing, database connectivity, and messaging. Business logic is concerned with: how we model real-world business objects, such as accounts, loans, and travel; how these objects are stored; how these objects interact with each other (e.g., a bank account must have an owner, and a bank holder's portfolio is the sum of his accounts); and who can access and update these objects. As an example, consider an online store that provides real-time pricing and availability information. The site will provide a form for you to choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. The Web server delegates the response generation to a script; however, the business logic for the pricing lookup is included from an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server's lookup service. The script can then use the service's result when the script generates its HTML response. The application server serves the business logic for looking up a product's pricing information. That functionality doesn't say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server's lookup service, the service simply looks up the information and returns it to the client.
By separating the pricing logic from the HTML response-generating code, the pricing logic becomes reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checking out a customer. More recently, eXtensible Markup Language (XML) Web Services send an XML payload to a Web server. The Web server can then process the data and respond much as application servers have in the past. XML has become the standard for data transfer for all types of applications. XML provides a data model that is supported by most data-handling tools and vendors. Structuring data as XML allows hierarchical, graph-based representations of the data to be presented to tools, which opens up a host of possibilities. The task of creating and deploying Web Services automatically requires interoperable standards. The most advanced vision for the next generation of Web Services is the development of Web Services over Semantic Web architecture. The Semantic Web Now let’s consider using logic within markup languages on the Semantic Web. This means empowering the Web’s expressive capability, but at the expense of reducing Web performance. The current Web is built on HTML and XML, which describe how information is to be displayed and laid out on a Web page for humans to read. In addition, HTML is not capable of being directly exploited by information retrieval techniques. XML may have enabled the exchange of data across the Web, but it says nothing about the meaning of that data. In effect, the Web has developed as a medium for humans without a focus on data that could be processed automatically. As a result, computers are unable to automatically process the meaning of Web content. For machines to perform useful automatic reasoning tasks on these documents, the language machines use must go beyond the basic semantics of XML Schema. They will require an ontology language, logic connectives, and rule systems.
By introducing these elements the Semantic Web is intended to be a paradigm shift just as powerful as the original Web. The Semantic Web will bring meaning to the content of Web pages, where software agents roaming from page-to-page can carry out automated tasks. The Semantic Web will be constructed over the Resource Description Framework (RDF) and Web Ontology Language (OWL). In addition, it will implement logic inference and rule systems. These languages are being developed by the W3C. Data can be defined and linked using RDF and OWL so that there is more effective discovery, automation, integration, and reuse across different applications. These languages are conceptually richer than HTML and allow representation of the meaning and structure of content (interrelationships between concepts). This makes Web content understandable by software agents, opening the way to a whole new generation of technologies for information processing, retrieval, and analysis. If a developer publishes data in XML on the Web, it doesn’t require much more effort to take the extra step and publish the data in RDF. By creating ontologies to describe data, intelligent applications won’t have to spend time translating various XML schemas. An ontology defines the terms used to describe and represent an area of knowledge. Although XML Schema is sufficient for exchanging data between parties who have agreed to the definitions beforehand, their lack of semantics prevents machines from reliably performing this task with new XML vocabularies. In addition, the ontology of RDF and RDF Schema (RDFS) is very limited (see Chapter 5). RDF is roughly limited to binary ground predicates and RDF Schema is roughly limited to a subclass hierarchy and a property hierarchy with domain and range definitions. Adding an Ontology language will permit the development of explicit, formal conceptualizations of models (see Chapter 6). 
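The subclass-hierarchy reasoning that RDF Schema supports can be illustrated with plain subject-predicate-object triples. This sketch uses a hypothetical mini-vocabulary and hand-rolled inference rather than a real RDF toolkit, but the entailment rule it applies, from rdf:type and rdfs:subClassOf statements to new rdf:type statements, is the RDFS one described in the text:

```python
# RDF statements are subject-predicate-object triples.
triples = {
    ("Cat", "rdfs:subClassOf", "Mammal"),
    ("Mammal", "rdfs:subClassOf", "Animal"),
    ("Molly", "rdf:type", "Cat"),
}

def infer_types(triples):
    """Add the rdf:type triples entailed by the rdfs:subClassOf hierarchy."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (s1, p1, o1) in list(inferred):
            for (s2, p2, o2) in list(inferred):
                # (x rdf:type A) and (A rdfs:subClassOf B) entail (x rdf:type B)
                if p1 == "rdf:type" and p2 == "rdfs:subClassOf" and o1 == s2:
                    new = (s1, "rdf:type", o2)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred

entailed = infer_types(triples)
print(("Molly", "rdf:type", "Animal") in entailed)  # True
```

This is essentially all the class reasoning RDFS provides; expressing richer constraints (disjointness, cardinality, property characteristics) is what motivates the ontology language OWL discussed next.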
The main requirements of an ontology language include: a well-defined syntax, a formal semantics, convenience of expression, an efficient reasoning support system, and sufficient expressive power. Since the W3C established that the Semantic Web would require much more expressive power than RDF and RDF Schema offer, it defined the Web Ontology Language (OWL). The layered architecture of the Semantic Web suggests that one way to develop the necessary ontology language is to extend RDF Schema, using the RDF meaning of classes and properties and adding primitives to support richer expressiveness. However, simply extending RDF Schema would fail to achieve the best combination of expressive power and efficient reasoning. The downward compatibility and reuse of software promoted by the layered architecture is only achieved with OWL Full (see Chapter 6), but at the expense of computational intractability. RDF and OWL (DL and Lite, see Chapter 6) are specializations of predicate logic. They provide a syntax that fits well with Web languages. They also define reasonable subsets of logic that offer a trade-off between expressive power and computational complexity. Semantic Web research has developed from the traditions of Artificial Intelligence (AI) and ontology languages. Currently, the most important ontology languages on the Web are XML, XML Schema, RDF, RDF Schema, and OWL. Agents are pieces of software that work autonomously and proactively. In most cases, agents will simply collect and organize information. Agents on the Semantic Web will receive tasks to perform and seek information from Web resources, while communicating with other Web agents, in order to fulfill their tasks. Semantic Web agents will utilize metadata, ontologies, and logic to carry out their tasks.
In a closed environment, Semantic Web specifications have already been used to accomplish many tasks, such as data interoperability for business-to-business (B2B) transactions. Many companies have expended resources to translate their internal data syntax for their partners. As the world migrates towards RDF and ontologies, interoperability will become more flexible to new demands. An inference is a process of using rules to manipulate knowledge to produce new knowledge. Adding logic to the Web means using rules to make inferences and choose a course of action. The logic must be powerful enough to describe complex properties of objects, but not so powerful that agents can be tricked by a paradox. A combination of mathematical and engineering issues complicates this task. We will provide a more detailed presentation on paradoxes on the Web, and on what is solvable on the Web, in the next few chapters.

Inference Engines for the Semantic Web

Inference engines process the knowledge available in the Semantic Web by deducing new knowledge from already specified knowledge. Inference engines based on Higher Order Logic (HOL) have the greatest expressive power among all known logics, allowing, for example, the characterization of transitive closure. However, higher-order logics do not have nice computational properties: there are true statements that are unprovable (Gödel's Incompleteness Theorem). Inference engines based on Full First Order Logic (FFOL) for specifying axioms require a full-fledged automated theorem prover. FOL is semi-decidable, and inferencing is computationally intractable for large amounts of data and axioms. This means that in an environment like the Web, HOL and FFOL programs would not scale up to handle huge amounts of knowledge. Besides, full first-order theorem proving would mean maintaining consistency throughout the Web, which is impossible. Predicate calculus is the primary example of logic where syntax and semantics are both first-order.
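The rule-based inference described above can be sketched as a tiny forward-chaining engine over Horn rules: a rule fires whenever every atom in its body matches known facts, adding its head as a new fact, until a fixed point is reached. The predicates (`parent`, `ancestor`) and the variable convention (names starting with `?`) are invented for this illustration:

```python
# Sketch of forward chaining over Horn rules. A rule is a pair
# (head, body); variables begin with '?'. Predicates are invented
# for illustration only.
facts = {("parent", "ann", "bob"), ("parent", "bob", "carl")}

rules = [
    (("ancestor", "?x", "?y"), [("parent", "?x", "?y")]),
    (("ancestor", "?x", "?z"),
     [("parent", "?x", "?y"), ("ancestor", "?y", "?z")]),
]

def match(pattern, fact, binding):
    """Extend binding so that pattern matches fact, or return None."""
    b = dict(binding)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def substitute(pattern, b):
    """Replace variables in pattern with their bound values."""
    return tuple(b.get(p, p) for p in pattern)

def match_all(body, kb, binding):
    """Yield every binding that satisfies all patterns in the body."""
    if not body:
        yield binding
        return
    first, rest = body[0], body[1:]
    for fact in kb:
        if len(fact) == len(first):
            b = match(first, fact, binding)
            if b is not None:
                yield from match_all(rest, kb, b)

def forward_chain(facts, rules):
    """Fire rules until no new facts can be derived (a fixed point)."""
    kb = set(facts)
    while True:
        new = set()
        for head, body in rules:
            for b in match_all(body, kb, {}):
                derived = substitute(head, b)
                if derived not in kb:
                    new.add(derived)
        if not new:
            return kb
        kb |= new

kb = forward_chain(facts, rules)
print(("ancestor", "ann", "carl") in kb)  # True
```

Because Horn rules restrict what can be expressed (no negation, no disjunctive conclusions), this kind of saturation always terminates on a finite fact base, which is precisely the expressiveness-versus-tractability trade-off the chapter describes.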
From a modeling point of view, Description Logics correspond to predicate logic statements with three variables, suggesting that modeling is syntactically bound, which makes Description Logic a good candidate language for Web logic. Other possibilities for inference engines for the Semantic Web are languages based on Horn logic, another fragment of first-order predicate logic (see Figure 2-2). Description Logic and rule systems (e.g., Horn Logic) have different capabilities. Both are critical branches of logic that highlight essential limitations and expressive powers, which are central issues in designing Semantic Web languages. We will discuss them further in Chapters 6, 7, 8, and 9.

Conclusion

For the Semantic Web to provide machine-processing capabilities, the expressive power of markup languages must be balanced against the resulting computational complexity of reasoning. In this chapter, we examined the expressive characteristics of logic languages, as well as their inherent limitations. Fragments of First Order Logic (FOL), such as Description Logic and Horn Logic, offer attractive characteristics for Web applications and set the parameters for how expressive Web markup languages can become. We also reviewed the concept of Artificial Intelligence (AI) and how logic is applied in computer programming. After exploring the basic elements, characteristics, and limitations of logic, and suggesting that errors in logic contribute to many significant 'bugs' that lead to crashed computer programs, we reviewed how Web architecture is used to partition the delivery of business logic from the user interface. The Web architecture keeps the logic restricted to executable code residing on the server, while delivering user-interface presentations within the markup languages traveling along the Internet.
Finally, we discussed the implications of using logic within markup languages on the Web through the development of the Semantic Web. Our conclusions from this chapter include: Logic is the foundation of knowledge representation, which can be applied to AI in general and the World Wide Web in particular. Logic can provide a high-level language for expressing knowledge and has high expressive power. Logic has a well-understood formal semantics for assigning unambiguous meaning to logic statements. In addition, we saw that proof systems exist that can automatically derive statements syntactically from premises. Predicate logic uniquely offers a sound and complete proof system, while higher-order logics do not. By tracking the proof that leads to its consequence, the logic can provide explanations for its answers. Currently, complex logic and detailed calculations must be carried out by specially compiled programs residing on Web servers, where they are accessed by server-page frameworks. The result is that highly efficient application programs on the server must communicate very inefficiently with other proprietary applications, using XML as simple ASCII text. This lack of interoperability greatly inhibits the automation of Web Services. The Semantic Web offers a way to use logic, in the form of Description Logic or Horn Logic, on the Web.

Exercises

2-1. Explain how logic for complex business calculations is currently carried out through .NET and J2EE application servers.
2-2. Explain the difference between FOL and HOL.
2-3. Why is it necessary to consider less expressive languages for the Semantic Web?
2-4. Why is undecidability a concern on the Web?

Website

The website http://escherdroste.math.leidenuniv.nl/ offers a visualization of the mathematical structure behind Escher's Print Gallery, using the Droste effect. This mathematical structure answers some questions about Escher's picture, such as: "what's in the blurry white hole in the middle?"
This project is an initiative of Hendrik Lenstra of the Universiteit Leiden and the University of California at Berkeley. Bart de Smit of the Universiteit Leiden runs the project.

Interlude #2: Truth and Beauty

As John passed with a sour look on his face, Mary looked up from her textbook and asked, “Didn’t you enjoy the soccer game?” “How can you even ask that when we lost?” asked John gloomily. “I think the team performed beautifully, despite the score,” said Mary. This instantly frustrated John, and he said, “Do you know, Mary, that sometimes I find it disarming the way you express objects in terms of beauty? I find that simply accepting something on the basis of its beauty can lead to false conclusions.” Mary reflected upon this before offering a gambit of her own, “Well, John, do you know that sometimes I find that relying on objective truth alone can lead to unattractive conclusions?” John became flustered and reflected his dismay by demanding, “Give me an example.” Without hesitation, Mary said, “Perhaps you will recall that in the late 1920s, mathematicians were quite certain that every well-posed mathematical question had to have a definite answer ─ either true or false. For example, suppose they claimed that every even number was the sum of two prime numbers,” referring to Goldbach’s Conjecture, which she had just been studying in her textbook. Mary continued, “Mathematicians would seek the truth or falsity of the claim by examining a chain of logical reasoning that would lead in a finite number of steps to prove whether the claim was true or false.” “So mathematicians thought at the time,” said John. “Even today most people still do.” “Indeed,” said Mary. “But in 1931, the logician Kurt Gödel proved that the mathematicians were wrong. He showed that every sufficiently expressive logical system must contain at least one statement that can be neither proved nor disproved following the logical rules of that system.
Gödel proved that not every mathematical question has to have a yes or no answer. Even a simple question about numbers may be undecidable. In fact, Gödel proved that there exist questions that, while undecidable by the rules of the logical system, can be seen to be actually true if we jump outside that system. But they cannot be proven to be true.” “Thank you for that clear explanation,” said John. “But isn’t such a fact simply a translation into mathematical terms of the famous Liar’s Paradox: ‘This statement is false’?” “Well, I think it’s a little more complicated than that,” said Mary. “But Gödel did identify the problem of self-reference that occurs in the Liar’s Paradox. Nevertheless, Gödel’s theorem contradicted the thinking of most of the great mathematicians of his time. The result is that one cannot be as certain as mathematicians had desired. See what I mean? Gödel may have found an important truth, but it was – well, to be frank – rather disappointingly unattractive,” concluded Mary. “On the contrary,” countered John, “from my perspective it was the beauty of the well-posed mathematical question offered by the mathematicians that was proven to be false.” Mary replied, “I’ll have to think about that.”
- Fame | H Peter Alesso
Science Fiction Writers Hall of Fame Isaac Asimov Asimov is one of the foundational voices of 20th-century science fiction. His work often incorporated hard science, creating an engaging blend of scientific accuracy and imaginative speculation. Known for his "Robot" and "Foundation" series, Asimov's ability to integrate scientific principles with compelling narratives has left an enduring legacy in the field. Arthur C. Clarke The author of numerous classics including "2001: A Space Odyssey," Clarke's work is notable for its visionary, often prophetic approach to future technologies and space exploration. His thoughtful, well-researched narratives stand as enduring examples of 'hard' science fiction. Robert A. Heinlein Heinlein, one of science fiction's most controversial and innovative writers, is best known for books like "Stranger in a Strange Land" and "Starship Troopers." His work is known for its strong political ideologies and exploration of societal norms. Philip K. Dick With stories often marked by paranoid and dystopian themes, Dick's work explores philosophical, sociological, and political ideas. His books like "Do Androids Dream of Electric Sheep?" inspired numerous films, solidifying his impact on popular culture. Ray Bradbury Known for his poetic prose and poignant societal commentary, Bradbury's work transcends genre. His dystopian novel "Fahrenheit 451" remains a touchstone in the canon of 20th-century literature, and his short stories continue to inspire readers and writers alike. Ursula K. Le Guin Le Guin's works, such as "The Left Hand of Darkness" and the "Earthsea" series, often explored themes of gender, sociology, and anthropology. Her lyrical prose and profound explorations of human nature have left an indelible mark on science fiction. Frank Herbert The author of the epic "Dune" series, Herbert crafted a detailed and complex future universe. His work stands out for its intricate plotlines, political intrigue, and environmental themes. 
William Gibson Gibson is known for his groundbreaking cyberpunk novel "Neuromancer," where he coined the term 'cyberspace.' His speculative fiction often explores the effects of technology on society. H.G. Wells Although Wells's works were published on the cusp of the 20th century, his influence carried well into it. Known for classics like "The War of the Worlds" and "The Time Machine", Wells is often hailed as a father of science fiction. His stories, filled with innovative ideas and social commentary, have made an indelible impact on the genre. Larry Niven Known for his 'Ringworld' series and 'Known Space' stories, Niven's hard science fiction works are noted for their imaginative, scientifically plausible scenarios and compelling world-building. Octavia Butler Butler's work often incorporated elements of Afrofuturism and tackled issues of race and gender. Her "Xenogenesis" series and "Kindred" are known for their unique and poignant explorations of human nature and society. Orson Scott Card Best known for his "Ender's Game" series, Card's work combines engaging narrative with introspective examination of characters. His stories often explore ethical and moral dilemmas. Alfred Bester Bester's "The Stars My Destination" and "The Demolished Man" are considered classics of the genre. His work is recognized for its powerful narratives and innovative use of language. Kurt Vonnegut Though not strictly a science fiction writer, Vonnegut's satirical and metafictional work, like "Slaughterhouse-Five," often used sci-fi elements to highlight the absurdities of human condition. Harlan Ellison Known for his speculative and often dystopian short stories, Ellison's work is distinguished by its cynical tone, inventive narratives, and biting social commentary. Stanislaw Lem Lem's work, such as "Solaris," often dealt with philosophical questions. 
Philip José Farmer Known for his "Riverworld" series, Farmer's work often explored complex philosophical and social themes through creative world-building and the use of historical characters. He is also recognized for his innovations in the genre and the sexual explicitness of some of his work. J. G. Ballard Best known for his novels "Crash" and "High-Rise", Ballard's work often explored dystopian modernities and psychological landscapes. His themes revolved around surrealistic and post-apocalyptic visions of the human condition, earning him a unique place in the sci-fi genre. AI Science Fiction Hall of Fame As a science fiction aficionado and AI expert, there's nothing more exciting to me than exploring the relationship between sci-fi literature and artificial intelligence. Science fiction is an innovative genre, often years ahead of its time, and has influenced AI's development in ways you might not expect. But it's not just techies like us who should be interested - students of AI can learn a lot from these visionary authors. So buckle up, as we're about to embark on an insider's journey through the most famous science fiction writers in the hall of fame! The Science Fiction-AI Connection Science fiction and AI go together like peanut butter and jelly. In fact, one could argue that some of our most advanced AI concepts and technologies sprung from the seeds planted by sci-fi authors. I remember as a young techie, curled up with my dog, reading Isaac Asimov’s "I, Robot". I was just a teenager, but that book completely changed how I saw the potential of AI. The Most Famous Sci-Fi Writers and their AI Visions Ready for a deep dive into the works of the greats? Let's take a closer look at some of the most famous science fiction writers in the hall of fame, and how their imaginations have shaped the AI we know today. Isaac Asimov: Crafting the Ethics of AI You can't talk about AI in science fiction without first mentioning Isaac Asimov. 
His "I, Robot" introduced the world to the Three Laws of Robotics, a concept that continues to influence AI development today. As an AI student, I remember being fascinated by how Asimov's robotic laws echoed the ethical considerations we must grapple with in real-world AI. Philip K. Dick: Dreaming of Synthetic Humans Next up, Philip K. Dick. If you've seen Blade Runner, you've seen his influence at work. In "Do Androids Dream of Electric Sheep?" (the book Blade Runner is based on), Dick challenges us to question what it means to be human and how AI might blur those lines. It's a thought that has certainly kept me up late on more than a few coding nights! Arthur C. Clarke: AI, Autonomy, and Evolution Arthur C. Clarke's "2001: A Space Odyssey" has been both a source of inspiration and caution in my work. The AI character HAL 9000 is an eerie portrayal of autonomous AI systems' potential power and risks. It's a reminder that AI, like any technology, can be a double-edged sword. William Gibson: AI in Cyberspace Finally, William Gibson's "Neuromancer" gave us a vision of AI in cyberspace before the internet was even a household name. I still remember my shock reading about an AI entity in the digital ether - years later, that same concept is integral to AI in cybersecurity. The Power of Creativity These authors' works are testaments to the power of creativity in imagining the possibilities of AI. As students, you'll need to push boundaries and think outside the box - just like these authors did. Understanding Potential and Limitations The stories these authors spun provide us with vivid scenarios of AI's potential and limitations. They remind us that while AI has massive potential, it's not without its challenges and dangers. Conclusion And there we have it - our deep dive into the most famous science fiction writers in the hall of fame and their influence on AI. 
Their work is not just fiction; it's a guiding light, illuminating the path that has led us to the AI world we live in today. As students, we have the opportunity to shape the AI of tomorrow, just as these authors did. So why not learn from the best? Science Fiction Greats of the 21st Century Neal Stephenson is renowned for his complex narratives and incredibly detailed world-building. His Baroque Cycle trilogy is a historical masterpiece, while Snow Crash brought the concept of the 'Metaverse' into popular culture. China Miéville has won several prestigious awards for his 'weird fiction,' a blend of fantasy and science fiction. Books like Perdido Street Station and The City & The City are both acclaimed and popular. His work is known for its rich, evocative language and innovative concepts. Kim Stanley Robinson is best known for his Mars trilogy, an epic tale about the terraforming and colonization of Mars. He's famous for blending hard science, social commentary, and environmental themes. He continues this trend in his 21st-century works like the climate-focused New York 2140. Margaret Atwood, while also recognized for her mainstream fiction, has made significant contributions to science fiction. Her novel The Handmaid's Tale and its sequel The Testaments provide a chilling dystopian vision of a misogynistic society. Her MaddAddam trilogy further underscores her unique blend of speculative fiction and real-world commentary. Alastair Reynolds is a leading figure in the hard science fiction subgenre, known for his space opera series Revelation Space. His work, often centered around post-humanism and AI, is praised for its scientific rigor and inventive plotlines. Reynolds, a former scientist at the European Space Agency, incorporates authentic scientific concepts into his stories. Paolo Bacigalupi's works often deal with critical environmental and socio-economic themes. 
His debut novel The Windup Girl won both the Hugo and Nebula awards and is renowned for its bio-punk vision of the future. His YA novel, Ship Breaker, also received critical acclaim, winning the Michael L. Printz Award. Ann Leckie's debut novel Ancillary Justice, and its sequels, are notable for their exploration of AI, gender, and colonialism. Ancillary Justice won the Hugo, Nebula, and Arthur C. Clarke Awards, a rare feat in science fiction literature. Her unique narrative styles and complex world-building are highly appreciated by fans and critics alike. Iain M. Banks was a Scottish author known for his expansive and imaginative 'Culture' series. Though he passed away in 2013, his work remains influential in the genre. His complex storytelling and exploration of post-scarcity societies left a significant mark in science fiction. William Gibson is one of the key figures in the cyberpunk sub-genre, with his novel Neuromancer coining the term 'cyberspace.' In the 21st century, he continued to innovate with his Blue Ant trilogy. His influence on the genre, in terms of envisioning the impacts of technology on society, is immense. Ted Chiang is highly regarded for his thoughtful and philosophical short stories. His collection Stories of Your Life and Others includes "Story of Your Life," which was adapted into the film Arrival. Each of his carefully crafted tales explores a different scientific or philosophical premise. Charlie Jane Anders is a diverse writer who combines elements of science fiction, fantasy, and more in her books. Her novel All the Birds in the Sky won the 2017 Nebula Award for Best Novel. She's also known for her work as an editor of the science fiction site io9. N.K. Jemisin is the first author to win the Hugo Award for Best Novel three years in a row, for her Broken Earth Trilogy. Her works are celebrated for their diverse characters, intricate world-building, and exploration of social issues. 
She's one of the most influential contemporary voices in fantasy and science fiction. Liu Cixin is China's most prominent science fiction writer and the first Asian author to win the Hugo Award for Best Novel, for The Three-Body Problem. His Remembrance of Earth's Past trilogy is praised for its grand scale and exploration of cosmic civilizations. His work blends hard science with complex philosophical ideas. John Scalzi is known for his accessible writing style and humor. His Old Man's War series is a popular military science fiction saga, and his standalone novel Redshirts won the 2013 Hugo Award for Best Novel. He's also recognized for his blog "Whatever," where he discusses writing, politics, and more. Cory Doctorow is both a prolific author and an advocate for internet freedom. His novel Little Brother, a critique of increased surveillance, is frequently used in educational settings. His other novels, like Down and Out in the Magic Kingdom, are known for their examination of digital rights and technology's impact on society. Octavia Butler (1947-2006) was an award-winning author known for her incisive exploration of race, gender, and societal structures within speculative fiction. Her works like the Parable series and Fledgling have continued to influence and inspire readers well into the 21st century. Her final novel, Fledgling, a unique take on vampire mythology, was published in 2005. Peter F. Hamilton is best known for his space opera series such as the Night's Dawn trilogy and the Commonwealth Saga. His work is often noted for its scale, complex plotting, and exploration of advanced technology and alien civilizations. Despite their length, his books are praised for maintaining tension and delivering satisfying conclusions. Ken Liu is a prolific author and translator in science fiction. His short story "The Paper Menagerie" is the first work of fiction to win the Nebula, Hugo, and World Fantasy Awards. 
As a translator, he's known for bringing Liu Cixin's The Three-Body Problem to English-speaking readers. Ian McDonald is a British author known for his vibrant and diverse settings, from a future India in River of Gods to a colonized Moon in the Luna series. His work often mixes science fiction with other genres, and his narrative style has been praised as vivid and cinematic. He has won several awards, including the Hugo, for his novellas and novels. James S.A. Corey is the pen name of collaborators Daniel Abraham and Ty Franck. They're known for The Expanse series, a modern space opera exploring politics, humanity, and survival across the solar system. The series has been adapted into a critically acclaimed television series. Becky Chambers is praised for her optimistic, character-driven novels. Her debut, The Long Way to a Small, Angry Planet, kickstarted the popular Wayfarers series and was shortlisted for the Arthur C. Clarke Award. Her focus on interpersonal relationships and diverse cultures sets her work apart from more traditional space operas. Yoon Ha Lee's Machineries of Empire trilogy, beginning with Ninefox Gambit, is celebrated for its complex world-building and innovative use of technology. The series is known for its intricate blend of science, magic, and politics. Lee is also noted for his exploration of gender and identity in his works. Ada Palmer's Terra Ignota series is a speculative future history that blends philosophy, politics, and social issues in a post-scarcity society. The first book in the series, Too Like the Lightning, was a finalist for the Hugo Award for Best Novel. Her work is appreciated for its unique narrative voice and in-depth world-building. Charlie Stross specializes in hard science fiction and space opera, with notable works including the Singularity Sky series and the Laundry Files series. His books often feature themes such as artificial intelligence, post-humanism, and technological singularity. 
His novella "Palimpsest" won the Hugo Award in 2010. Kameron Hurley is known for her raw and gritty approach to science fiction and fantasy. Her novel The Light Brigade is a time-bending military science fiction story, while her Bel Dame Apocrypha series has been praised for its unique world-building. Hurley's work often explores themes of gender, power, and violence. Andy Weir shot to fame with his debut novel The Martian, a hard science fiction tale about a man stranded on Mars. It was adapted into a successful Hollywood film starring Matt Damon. His later works, Artemis and Project Hail Mary, continue his trend of scientifically rigorous, yet accessible storytelling. Jeff VanderMeer is a central figure in the New Weird genre, blending elements of science fiction, fantasy, and horror. His Southern Reach Trilogy, starting with Annihilation, explores ecological themes through a mysterious, surreal narrative. The trilogy has been widely praised, with Annihilation adapted into a major motion picture. Nnedi Okorafor's Africanfuturist works blend science fiction, fantasy, and African culture. Her novella Binti won both the Hugo and Nebula awards. Her works are often celebrated for their unique settings, compelling characters, and exploration of themes such as cultural conflict and identity. Claire North is a pen name of Catherine Webb, who also writes under Kate Griffin. As North, she has written several critically acclaimed novels, including The First Fifteen Lives of Harry August, which won the John W. Campbell Memorial Award for Best Science Fiction Novel. Her works are known for their unique concepts and thoughtful exploration of time and memory. M.R. Carey is the pen name of Mike Carey, known for his mix of horror and science fiction. His novel The Girl With All the Gifts is a fresh take on the zombie genre, and it was later adapted into a film. Carey's works are celebrated for their compelling characters and interesting twists on genre conventions. 
Greg Egan is an Australian author known for his hard science fiction novels and short stories. His works often delve into complex scientific and mathematical concepts, such as artificial life and the nature of consciousness. His novel Diaspora is considered a classic of hard science fiction. Steven Erikson is best known for his epic fantasy series, the Malazan Book of the Fallen. However, he has also made significant contributions to science fiction with works like Rejoice, a Knife to the Heart. His works are known for their complex narratives, expansive world-building, and philosophical undertones. Vernor Vinge is a retired San Diego State University professor of mathematics and computer science and a Hugo award-winning science fiction author. Although his most famous work, A Fire Upon the Deep, was published in the 20th century, his later work including the sequel, Children of the Sky, has continued to influence the genre. He is also known for his 1993 essay "The Coming Technological Singularity," in which he argues that rapid technological progress will soon lead to the end of the human era. Jo Walton has written several novels that mix science fiction and fantasy, including the Hugo and Nebula-winning Among Others. Her Thessaly series, starting with The Just City, is a thought experiment about establishing Plato's Republic in the ancient past. She is also known for her non-fiction work on the history of science fiction and fantasy. Hugh Howey is best known for his series Wool, which started as a self-published short story and grew into a successful series. His works often explore post-apocalyptic settings and the struggle for survival and freedom. Howey's success has been a notable example of the potential of self-publishing in the digital age. Richard K. Morgan is a British author known for his cyberpunk and dystopian narratives. His debut novel Altered Carbon, a hardboiled cyberpunk mystery, was adapted into a Netflix series. 
His works are characterized by action-packed plots, gritty settings, and exploration of identity and human nature. Hannu Rajaniemi is a Finnish author known for his unique blend of hard science and imaginative concepts. His debut novel, The Quantum Thief, and its sequels have been praised for their inventive ideas and complex, layered narratives. Rajaniemi, who holds a Ph.D. in mathematical physics, incorporates authentic scientific concepts into his fiction. Stephen Baxter is a British author who often writes hard science fiction. His Xeelee sequence is an expansive future history series covering billions of years. Baxter is known for his rigorous application of scientific principles and his exploration of cosmic scale and deep time. C.J. Cherryh is an American author who has written more than 60 books since the mid-1970s. Her Foreigner series, which began in the late '90s and has continued into the 21st century, is a notable science fiction series focusing on political conflict and cultural interaction. She has won multiple Hugo Awards and was named a Grand Master by the Science Fiction and Fantasy Writers of America. Elizabeth Bear is an American author known for her diverse range of science fiction and fantasy novels. Her novel Hammered, which combines cybernetics and Norse mythology, started the acclaimed Jenny Casey trilogy. She has won multiple awards, including the Hugo, for her novels and short stories. Larry Niven is an American author best known for his Ringworld series, which won the Hugo, Nebula, and Locus awards. In the 21st century, he continued the series and collaborated with other authors on several other works, including the Bowl of Heaven series with Gregory Benford. His works often explore hard science concepts and future history. David Mitchell is known for his genre-blending novels, such as Cloud Atlas, which weaves six interconnected stories ranging from historical fiction to post-apocalyptic science fiction. 
The novel was shortlisted for the Booker Prize and adapted into a film. His works often explore themes of reality, identity, and interconnectedness. Robert J. Sawyer is a Canadian author known for his accessible style and blend of hard science fiction with philosophical and ethical themes. His Neanderthal Parallax trilogy, which started in 2002, examines an alternate world where Neanderthals became the dominant species. He is a recipient of the Hugo, Nebula, and John W. Campbell Memorial awards. Daniel Suarez is known for his high-tech thrillers. His debut novel Daemon and its sequel Freedom™ explore the implications of autonomous computer programs on society. His books are praised for their action-packed narratives and thought-provoking themes related to technology and society. Kazuo Ishiguro is a Nobel Prize-winning author, known for his poignant and thoughtful novels. Never Let Me Go, published in 2005, combines elements of science fiction and dystopian fiction in a heartbreaking narrative about cloned children raised for organ donation. Ishiguro's work often grapples with themes of memory, time, and self-delusion. Malka Older is a humanitarian worker and author known for her Infomocracy trilogy. The series, starting with Infomocracy, presents a near-future world where micro-democracy has become the dominant form of government. Her work stands out for its political savvy and exploration of information technology. James Lovegrove is a versatile British author, known for his Age of Odin series and Pantheon series which blend science fiction with mythology. His Firefly novel series, based on the popular Joss Whedon TV show, has been well received by fans. He's praised for his engaging writing style and inventive blending of genres. Emily St. John Mandel is known for her post-apocalyptic novel Station Eleven, which won the Arthur C. Clarke Award and was a finalist for the National Book Award and the PEN/Faulkner Award. 
Her works often explore themes of memory, fate, and interconnectedness. Her writing is praised for its evocative prose and depth of character. Sue Burke's debut novel Semiosis is an engaging exploration of human and alien coexistence, as well as the sentience of plants. The book was a finalist for the John W. Campbell Memorial Award and spawned a sequel, Interference. Burke's work is known for its realistic characters and unique premise. Tade Thompson is a British-born Yoruba author known for his Rosewater trilogy, an inventive blend of alien invasion and cyberpunk tropes set in a future Nigeria. The first book in the series, Rosewater, won the Arthur C. Clarke Award. His works are celebrated for their unique settings and blend of African culture with classic and innovative science fiction themes.
- Captain Henry Gallant | H Peter Alesso
Captain Henry Gallant AMAZON Chapter 1 Streak Across the Sky Cold night air smacked Rob Ryan in the face as he stepped out of the Liftoff bar—a favorite haunt of pilots. He was still weaving his way through the parking terminal looking for his single-seat jet-flyer when a familiar face appeared at his elbow. Grabbing his arm, his friend said, “You shouldn’t fly. Let me give you a ride.” Ryan straightened to his full six-two height and shrugged off his friend’s hand. “I’m fine,” he said, swiping a lock of unkempt brown hair out of his eyes. “Don’t be pigheaded. There’s a difference between self-reliance and foolishness.” He pushed past his friend. “Nonsense. I fly better when I’m . . . mellow.” As he left his buddy behind, he noticed a young woman who had come out of the bar after him. He had spent the past hour eyeing this smokin’ hot redhead, but she had been with somebody. Now she was heading out on her own. She glanced at him and quickened her pace. A thought penetrated the fog in his mind. I’ll show her. At his Cobra 777 jet-flyer, he zipped up his pressure suit, buckled into the cockpit, and pulled on his AI neural interface—all the while imagining a wild take-off that would wow the redhead. He jockeyed his jet along the taxiway onto the runway. When the turbo launch kicked in, the black-and-chrome jet spewed a cloud of exhaust and dust across the strip. He jammed the throttle all the way in and gave a whoop of pure joy at the roar and explosive thrust of the machine. The exhilaration—a primitive, visceral feeling—increased by the second, along with his altitude and speed. His love of speed was only matched by his almost unhealthy fascination with flying machines—too fast was never fast enough. For a few seconds, his mind flashed back to his very first flight. The thrill only lasted a few minutes before the mini flyer spun out and crashed. Without a word, his father picked him up and sat him back down in the seat, restarting the engine with a wink and a grin. 
Clearest of all was the memory of his father’s approval as he took off again and soared higher and faster than before. Now he sliced through the crisp night air in a military jet that had his name engraved on the side. He ignited an extra thruster to drive the engine even hotter. Riding the rush of adrenaline, he pulled back on the stick to pull the nose up. Atmospheric flying was different than being in space, and for him, it had a sensual rhythm all its own. As he reached altitude, he pulled a tight loop and snapped the jet inverted, giving himself a bird’s-eye view of the ground below. But instead of reveling in admiration as expected, he found himself fighting for control against a powerful shockwave as a Scorpion 699 jet blew past him. The blast of its fuel exhaust was nothing compared to the indignation and shame that burned his face. It was the redhead. Damn. She’s good. His pulse raced as he became fully alert. Determined to pursue her, he angled the ship across air traffic lanes, breaking every safety regulation in the book. Instinctively his eyes scanned the horizon and the edges around him, watching for threats or other machines that might interfere with his trajectory. Pinwheeling in a high-G turn, he felt the crush of gravity against his chest, yet still, his hand on the throttle urged ever more speed from the machine. He lost track of the Scorpion in the clouds, and in mere seconds she maneuvered behind him. He tried to shake her using every evasive maneuver he had learned in his fighter training but couldn’t do it. His eyes roamed the sky, watching for potential dangers. The night sky was dark, but several landmarks lit up the ground below him. Earth’s capital, Melbourne, glowed with activity to the north; a mountain range stretched across the horizon 50 km to the west, and an airport lay to the south at the edge of the ocean. As he scanned the skyline, he noticed a radio-telescope antenna. Impulsively he dove toward it, the Scorpion on his tail. 
At the last moment, the redhead broke pursuit to avoid the antenna, but in a moment of reckless folly, Ryan crashed through the flimsy wire mesh, no more substantial to his Cobra than a wisp of cloud. “That’ll need a patch,” he chuckled. But once more, the Scorpion blew by him. He watched it roar away as if he were in slow motion. As the redhead curved back toward him for another pass, he gritted his teeth in frustration. With thrusters already at max burn, he punched the afterburner to create his own shock wave and turned head-on into her path. “Damn!” he screamed as the other ship twisted away. His golden rule for staying alive while flying was “never yield but always leave yourself an out.” Folly had made him reckless, and he knew his reflexes were sluggish, but he was pissed at himself for letting this pilot provoke him. Recovering his reason, he leveled off and threw down the skid flaps to reach a more reasonable speed. The jet took the torque and inertia strain, and the flashing red lights on his display turned yellow and then green. Despite his irritation, he allowed himself a faint smile when his AI read the Scorpion’s registration: Lorelei Steward. Good sense advised that he throttle back, but pride won out. Spotting the Scorpion silhouetted against a cloud, he jammed the throttle forward yet again. Finally, behind her, his smile broadened. She wouldn’t slip away this time. She pulled her jet into a violent oblique pop, rolled inverted until the nose pointed to the ground then returned to upright. He stuck with her, move for move. Abruptly she angled for the nearby mountain range. He chased her, low and fast, through a pass and down into a twisting canyon, rolling and pitching in a dizzying display of aerobatic skill. He kept close on her six until they blew out of the ravine. In a desperate ploy to shake him, she turned back toward Melbourne’s airspace and headed straight into a crowded flying highway. 
Ryan was so close behind that it took a few seconds before he realized her blunder. She had turned into an oncoming traffic lane. The cockpit warning lights lit up the cabin as Ryan dodged a stream of oncoming vehicles. Up ahead, Lorelei ducked under a passenger liner that swerved directly into his path. Time slowed to a crawl as he foresaw his fate—he could escape by pulling up—but that would force the crowded passenger liner to dive and crash into the ground. “Damn it all!” he yelled and dove—leaving the liner a clear path to safety. Through the neural interface, his AI shrieked, TOO LOW! PULL UP! TOO LOW! PULL UP! He used every bit of expertise he could muster to twist, turn, and wrestle his jet into a controlled descent. His vision narrowed as the lights of city and ships gave way to a line of unyielding rocks zooming toward him. In a blink, he ran out of time—and altitude. BRACE FOR IMPACT! The Cobra plowed a trough a hundred meters long across the desert floor. Ryan sat in the cockpit, stunned and disoriented amid the flames and wreckage until his lungs convulsed from the dense smoke. An acidic stench and the taste of jet fuel assailed his nose and throat, rousing him from his stupor. Fumbling to unbuckle the safety harness, he held his breath until he could release the hatch and climb out of his ruined machine. Shaking hands searched his body for broken bones. To his relief, he was intact . . . if he didn’t count the ringing in his ears and the blood that coursed down his face. The maxim from flight school ran through his mind: “Any landing you walk away from . . .” But as he limped away, his beloved Cobra burned into a twisted mound of molten metal, its nose buried in the dusty red ground. He shook his head at the wreck. “Captain Gallant is going to have my ass.”
- Portfolio | H Peter Alesso
The Henry Gallant Saga COURAGE is only a word . . . until you witness it. Then . . . it is contagious. Henry Gallant is the only Natural left in Earth's genetically engineered space navy. Despite overwhelming odds and the doubts of his shipmates, Gallant refuses to back down as he uses his unique abilities to fight for victory at the farthest reaches of the Solar System. Follow Gallant as he finds the spine to stand tall, vanquish fear, and rain violence upon the methane-breathing enemy aliens. The nation needs a hero like Henry Gallant. He fights! For fans of Horatio Hornblower and Honor Harrington.
- e-Video | H Peter Alesso
e-Video AMAZON Chapter 1 Bandwidth for Video Electronic-Video, or “e-Video”, includes all audio/video clips that are distributed and played over the Internet, either by direct download or streaming video. The problem with video, however, has been its inability to travel over networks without clogging the lines. If you’ve ever tried to deliver video, you know that even after heroic efforts on your part (including optimizing the source video, the hardware, the software, the editing, and the compression process) there remains a significant barrier to delivering your video over the Web: the “last mile” connection to the client. So before we explain the details of how to produce, capture, edit, and compress video for the Web, we had better begin by describing the near-term opportunities for overcoming the current bandwidth limitations for delivering video over the Internet. In this chapter, we will describe how expanding broadband fiber networks will reach out over the “last mile” to homes and businesses, creating opportunities for video delivery. To accomplish this, we will start by quantifying three essential concerns: the file size requirements for sending video data over the Internet, the network fiber capacity of the Internet for the near future, and the progress from narrowband (28.8 Kbps) to broadband (1.5 Mbps) over the “last mile.” This will provide an understanding of the difficulties being overcome in transforming video from the current limited narrowband streaming video to broadband video delivery. Transitioning from Analog to Digital Technology Thomas Alva Edison’s contributions to the telegraph, phonograph, telephone, motion pictures, and radio helped transform the 20th century with analog appliances in the home and the factory. Many of Edison’s contributions were based on the continuous electrical analog signal. Today, Edison’s analog appliances are being replaced by digital ones. Why? 
Let’s begin by comparing the basic analog and digital characteristics. Analog signals move along wires as electromagnetic waves. The signal’s frequency refers to the number of times per second that a wave oscillates through a complete cycle. The higher the speed, or frequency, the more cycles of a wave are completed in a given period of time. A baud rate is one analog electric cycle, or wave, per second. Frequency is also stated in hertz (Hz). (Kilohertz or kHz represents 1,000 Hz, MHz represents 1,000,000 Hz, and GHz represents a billion Hz.) Analog signals, such as voice, radio, and TV, involve oscillations within specified ranges of frequency. For example: Voice has a range of 300 to 3,300 Hz. Analog cable TV has a range of 54 MHz to 750 MHz. Analog microwave towers have a range of 2 to 12 GHz. Sending a signal along analog wires is similar to sending water through a pipe: the further it travels, the more force it loses and the weaker it becomes. It can also pick up vibrations, or noise, which introduces signal errors. Today, analog technology has become available worldwide through the following transmission media: 1. Copper wire for telephone (one-to-one communication). 2. Broadcast for radio and television (one-to-many communication). 3. Cable for television (one-to-many communication). Most forms of analog content, from news to entertainment, have been distributed over one or more of these methods. Analog technology prior to 1990 was based primarily on the one-to-many distribution system, as shown in the Table below, where information was primarily directed toward individuals from a central point. Table 1-1 Analog Communication Prior to 1990 Prior to 1990, over 99% of businesses and homes had content reach them from any one of the three transmission delivery systems. Only the telephone allowed two-way communication, however. 
While the other analog systems were reasonably efficient in delivering content, the client could only send feedback, or pay bills, through ordinary postal mail. Obviously, the interactivity level of this system was very low. The technology used in Coaxial Cable TV (CATV) is designed for the transport of video signals. It comprises three systems: AM, FM, and Digital. Since the current CATV system with coaxial analog technology is highly limited in bandwidth, new technology is necessary for applications requiring higher bandwidth. In the digital system, a CATV network will get better performance than AM/FM systems and ease the migration from coaxial to a fiber-based system. Fiber optics in CATV networks will eliminate most bottlenecks and increase channel capacity for high-speed networks. Analog signals are continuously variable waveforms that are information-intensive. They require considerable bandwidth and care in transmission. Analog transmissions over phone lines have some inherent problems when used for sending data. Analog signals lose their strength over long distances and often need to be amplified. Signal processing introduces distortions, which are amplified along with the signal, raising the possibility of errors. In contrast to the waveform of analog signals, digital signals are transmitted over wire connections by varying the voltage across the line between a high and a low state. Typically, a high voltage level represents a binary digit 1 and a low voltage level represents a binary digit 0. Because they are binary, digital signals are inherently less complex than analog signals, and over long distances they are more reliable. If a digital signal needs to be boosted, the signal is simply regenerated rather than amplified. As a result, digital signals have the following advantages over analog: superior quality, fewer errors, higher transmission speeds, and less complex equipment. The excitement over converting analog to digital media is, therefore, easy to explain. 
It is motivated by cost-effective, higher-quality digital processing for data, voice, and video information. In transitioning from analog to digital technologies, however, several significant changes are also profoundly altering broadcast radio and television. The transition introduces fundamental changes from one-way broadcast to two-way transmission, and thereby the potential for interactivity and for scheduling programming to suit the user’s needs. Not only is there an analog-to-digital shift, but a synchronous-to-asynchronous shift as well. Television and radio no longer need to be synchronous and simultaneous; rather, the viewer and listener can control the time of performance. In addition, transmission can use one of three media: copper wire, cable, or wireless. Also, the receiver is transitioning from a dumb device, such as the television, to an intelligent set-top box with significant CPU power. This potentially changes the viewer from a passive to an interactive participant. Today, both analog and digital video technologies coexist in the production and creative part of the process leading up to the point where the video is broadcast. Currently, businesses and homes can receive content from one to six delivery systems: analog: copper wire (telephones), coaxial cable (TV cable), or broadcast (TV or radio); digital: copper wire (modem, DSL), Ethernet modem, or wireless (satellite). At the present time, analog systems still dominate, but digital systems are competing very favorably as infrastructure becomes available. Analog/digital telephone and digital cable allow two-way communication, and these technologies are rapidly growing. The digital systems are far more efficient and allow greater interactivity with the client. Competing Technologies The race is on as cable, data, wireless, and telecommunications companies scramble to piece together the broadband puzzle and compete in future markets. 
The basic infrastructure of copper wire, cable, and satellite, as well as the packaged content, is in place to deliver bigger, richer data files and media types. In special cases, data transmission over the developing computer networks within corporations and between universities already exists. The groups vying to dominate have each brought different technologies and standards to the table. For the logical convergence of hardware, software, and networking technology to occur, the interfaces of these industries must meet specific interoperational capabilities and must achieve customer expectations for quality of service. Long-distance and local Regional Bell Operating Company (RBOC) telephone companies started with a phone system designed for point-to-point communication, POTS (plain old telephone service), and have evolved into a large switched, distributed network capable of handling millions of simultaneous calls. They track and bill accordingly, with an impressive performance record. They have delivered 99.999% reliability with high-quality audio. Their technology is now evolving toward DSL (Digital Subscriber Line) modems. AT&T has made significant progress in leading broadband technology development now that it has added the vast cable networks of Tele-Communications Inc. and MediaOne Group to telephone and cellular. Currently, AT&T, with about 45% of the market, can plug into more U.S. households than any other provider. But other telecommunications companies, such as Sprint and MCI, as well as the regional Bell operating companies, are also capable of integrating broadband technology with their voice services. Although both the routing and the architecture of the telephone network have evolved since the AT&T divestiture, the basics remain the same. About 25,000 central offices in the U.S. connect through 1,200 intermediate switching nodes, called access tandems. 
The switching centers are connected by trunks designed to carry multiple voice frequency circuits using frequency-division multiplexing (FDM), synchronous time-division multiplexing (TDM), or wavelength-division multiplexing (WDM) for optics. The cable companies Time Warner, Comcast, Cox Communications, and Charter Communications have 60 million homes wired with coaxial cable, primarily one-way cable offering one-to-many broadcast service. Their technology competes through the introduction of cable modems and the upgrade of their infrastructure to support two-way communication. The merger between AOL and Time Warner demonstrates how Internet and content companies are finding ways to converge. Cable television networks currently reach 200 million homes. Satellite television, on the other hand, can potentially reach 1 billion homes. With nearly complete coverage of the U.S., digital satellite is also competing. DirecTV has DirecPC, which can beam data to a PC. Its rival, EchoStar Corp., is working with interactive TV player TiVo Inc. to deliver video and data service to a set-top box. However, satellite is currently not only a one-way delivery system but also the most expensive in the U.S. In regions of the world outside the U.S., where the capital investment in copper wires and cable has yet to be made, satellite may have a better competitive opportunity. The Internet itself doesn’t own its own connections. Internet data traffic passes along the copper, fiber, coaxial cable, and wireless transmission of the other industries as a digital alternative to analog transmissions. The new media is being built to include text, graphics, audio, and video across the platforms of the television, Internet, cable, and wireless industries. The backbone uses wide-area communications technology, including satellite, fiber, coaxial cable, copper, and wireless. 
Data servers mix mainframes, workstations, supercomputers, and microcomputers, and a diversity of clients populate the end-points of the networks, including conventional PCs, palmtops, PDAs, smart phones, set-top boxes, and TVs. Figure 1-1 Connecting the backbone of the Internet to Your Home Web-television hybrids, such as WebTV, provide opportunities for cross-promotion between television and the Internet. Independent developers may take advantage of broadcast-Internet synergy by creating shows for targeted audiences. Clearly, the future holds a need for interaction between the TV and the Internet. But will it appear as TV-quality video transmitted over the Internet and subsequently displayed on a TV set? Or, alternatively, as URL information embedded within existing broadcast TV pictures? Perhaps both. Streaming Video Streaming is the ability to play media, such as audio and video, directly over the Internet without downloading the entire file before play begins. Digital encoding is required to convert the analog signal into a compressed digital format for transmission and playback. Streaming videos send a constant flow of audio/video information to their audience. While streaming videos may be archived for on-demand viewing, they can also be shown in real time. Examples include play-by-play sports events, concerts, and corporate board meetings. But a streaming video offers more than a simple digitized signal transmitted over the Internet. It offers the ability for interactive audience response and an unparalleled form of two-way communication. The interactive streaming video process is referred to as Webcasting. Widespread Webcasting will be impractical, however, until audiences have access rates of a minimum of 100 Kbps or faster. Compression technology can be expected to grow more powerful, significantly reducing bandwidth requirements. 
By 2006, the best estimates indicate that 40 million homes will have cable modems and 25 million will have DSL connections with access rates of 1.5 Mbps. We shall see in Chapters 5, 6, and 7 how the compression codecs and software standards will competitively change “effective” Internet bandwidth and the quality of delivered video. The resultant video quality at a given bandwidth is highly dependent upon the specific video compressor. The human eye is extremely non-linear, and its capabilities are difficult to quantify. The quality of compression, specific video application, typical content, available bandwidth, and user preferences all must be considered when evaluating compressor options. Some optimize for “talking heads” while others optimize for motion. To date, the value of streaming video has been primarily the rebroadcast of TV content and redirected audio from radio broadcasts. Whether these services can compete with traditional analog broadcasts will depend upon the ability of streaming video producers to develop and deliver their content using low-cost computers that present a minimal barrier to entry. Small, low-cost independent producers will effectively target audiences previously ignored. Streaming video, steadily moving toward the integration of text, graphics, audio, and video with interactive on-line chat, will find new audiences. In Chapter 2, we present business models to address businesses’ video needs. Despite these promising aspects, streaming video is still a long way from providing a satisfactory audio/video experience in comparison to traditional broadcasts. The low data transmission rates are a severe limitation on the quality of streaming videos. While a direct broadcast satellite dish receives data at 2 Mbps, an analog modem is currently limited to 0.05 Mbps. The new cable modems and ADSL are starting to offer speeds competitive with satellite, but they will take time to penetrate globally. 
Unlike analog radio and television, streaming video requires a dynamic connection between the computer providing the content and the viewer. Current computer technology limits the viewing audience to about 50,000. While strategies to overcome this by replicating servers may increase audiences, this too will take effort. The enhancement of data compression reduces the required video data streaming rates to more manageable levels. The technology has only recently reached the point where video can be digitized and compressed to levels which allow reasonable appearance during distribution over digital networks. Advances continue to come, improving the look and delivery of video. Calculating Bandwidth Requirements So far we have presented the advantages of digital technology; unfortunately, there is one rather large disadvantage: bandwidth limitations. Let’s try some simple math that illustrates the difficulties. Live, or on-demand, streaming video and/or audio is relatively easy to encode. The most difficult part is not the encoding of the files; it is determining what level of data may be transmitted. The following Table contains information that will help with some basic terms and definitions. Why the difference between Kbps and KB/sec? File sizes on a hard drive are measured in kilobytes (KB), but data transferred over a modem is measured in kilobits per second (Kbps) because it is comparatively slower than a hard drive. In the case of a 28.8 Kbps modem, the maximum practical data transfer rate is 2.5 KB/sec, even though the calculated rate is 28.8 Kbps / 8 bits per byte = 3.6 KB/sec. This is because approximately 30% of transmission capacity is lost to Internet “noise”: traffic congestion on the Web and more than one surfer requesting information from the same server. The following Table 1-4 provides information concerning the characteristics of video files, including pixels per frame and frames per file (file size). 
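The Kbps-to-KB/sec conversion described above can be sketched in a few lines of Python. The 30% overhead factor is the book's rule of thumb for Internet "noise," not a measured constant:

```python
def effective_transfer_rate_kb_per_sec(modem_kbps, overhead=0.30):
    """Convert a modem's rated speed in kilobits per second to the
    practical transfer rate in kilobytes per second, discounting the
    roughly 30% of capacity lost to congestion and line noise."""
    raw_kb_per_sec = modem_kbps / 8          # 8 bits per byte
    return raw_kb_per_sec * (1 - overhead)   # apply the loss factor

# A 28.8 Kbps modem: 3.6 KB/sec in theory, about 2.5 KB/sec in practice.
print(round(effective_transfer_rate_kb_per_sec(28.8), 1))  # -> 2.5
```

The same function applies to any rated line speed; the loss factor is what separates the advertised rate from what a viewer actually receives.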
We can use the information in Table 1-4 for some simple calculations. We will use the following formula to calculate the approximate size in megabytes of a digitized video file: size (MB) = (pixel width) × (pixel height) × (color bit depth) × (fps) × (duration in seconds) / 8,000,000 (bits per MB). For three minutes of video at 15 frames per second with a color bit depth of 24 bits in a window that is 320x240 pixels, the digitized source file would be approximately 622 Megabytes: (320) × (240) × (24) × (15) × (180) / 8,000,000 = 622 Megabytes. We will see in Chapter 4 how data compression will significantly reduce this burden. Now that we have our terms defined, let's take the case of a TV station that wants to broadcast its channel live 24 hours a day for a month over the Web to a target audience of 56 Kbps modem users. In this case, a live stream generates 4.25 KB/sec, since a 56 Kbps file transfers at 4.25 KB/sec. So how much data would be transferred if one stream were constantly in use? ANSWER = 4.25 KB/sec × (number of seconds in a day) × 30 days per month = 11 GB/month. So, one stream playing a file encoded for 56 Kbps, 24 hours a day, will generate 11 gigabytes in a month. How is this figure useful? If you can estimate the average number of viewers in a month, then you can estimate the total amount of data that will be transferred from your server. Ultimately the issue becomes one of sufficient backbone infrastructure to carry many broadcasts to many viewers across the networks. For HDTV with a screen size of 1080x1920 and 24-bit color, a bandwidth of 51.8 Mbps is required. This is a serious amount of data to route around the Internet to millions of viewers. Transitioning from Narrowband to Broadband In telecommunications, bandwidth refers to the data capacity of a channel. 
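The file-size formula and the monthly-transfer estimate above translate directly into Python; the results match the chapter's worked examples. Note that the divisor of 8,000,000 bits per megabyte follows the book's convention rather than the binary (8 × 1,048,576) definition:

```python
def video_file_size_mb(width, height, bit_depth, fps, seconds):
    """Approximate size in megabytes of an uncompressed digitized video file."""
    total_bits = width * height * bit_depth * fps * seconds
    return total_bits / 8_000_000  # 8,000,000 bits per MB (book's convention)

def monthly_stream_gb(kb_per_sec, days=30):
    """Data transferred by one stream running 24 hours a day for a month."""
    kb = kb_per_sec * 86_400 * days  # 86,400 seconds per day
    return kb / 1_000_000            # KB -> GB

# Three minutes of 320x240, 24-bit color video at 15 fps:
print(round(video_file_size_mb(320, 240, 24, 15, 180)))  # -> 622 (MB)

# One continuous 56 Kbps stream at its effective 4.25 KB/sec:
print(round(monthly_stream_gb(4.25)))                    # -> 11 (GB/month)
```

Multiplying the monthly figure by an estimated average viewer count gives the total transfer your server, and the backbone behind it, must carry.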
For an analog service, the bandwidth is defined as the difference between the highest and lowest frequency within which the medium carries traffic. For example, cabling that carries data between 200 MHz and 300 MHz has a bandwidth of 100 MHz. In addition to analog speeds in hertz (Hz) and digital speeds in bits per second (bps), the carrying rate is sometimes categorized as narrowband or broadband. It is useful to relate this to an analogy in which wider pipes carry more water. TV and cable are carried at broadband speeds. However, most telephone and modem data traffic from the central offices to individual homes and businesses is carried at slower narrowband speeds. This is usually referred to as the “last mile” issue. The definitions for narrowband and broadband vary within the industries, but are summarized for our purposes as: narrowband refers to rates less than 1.5 Mbps; broadband refers to rates at or beyond 1.5 Mbps. A major bottleneck of analog services exists between the cabling of residences and telephone central offices. Digital Subscriber Line (DSL) and cable modems are gaining in availability. Cable TV companies are investing heavily in converting their cabling from one-way-only cable TV to two-way systems for cable modems and telephones. In contrast to the “last mile” for residential areas, telephone companies are laying fiber cables for digital services from their switches to office buildings, where the high-density client base justifies the additional expense. We can appreciate the potential target audience for video by estimating how fast the “last mile” bandwidth demand is growing. Because installing underground fiber costs more than $20,000 per mile, fiber only makes sense for businesses and network backbones, not for “last mile” access to homes. Table 1-5 shows the estimated number of users connected at various modem speeds in 1999 and 2006. 
High-speed consumer connections are now being implemented through cable modems and digital subscriber lines (DSL). Approximately 1.3 million homes had cable modems by the end of 1999, in comparison to 300,000 DSL connections, primarily to businesses. By 2006, we project 40 million cable modems and 25 million DSL lines. Potentially, data at a rate greater than one megabit per second could be delivered to over 80 percent of the more than 550 million residential telephone lines in the world. Better than one megabit per second can also be delivered over fiber/coax CATV lines configured for two-way transmission, to approximately 10 million out of 200 million total users (though the rest can be upgraded). In 2000, the median bandwidth in the U.S. is less than 56 Kbps. This is de facto a narrowband environment. But worldwide there is virtually limitless demand for communications, as represented by the following growth rates: The speed of computer connections is soaring. The number of connections at greater than 1.5 Mbps is growing at 45% per year in residential areas and at 55% per year in business areas. Because of the improving online experience, people will stay connected about 20% longer per year. As more remote areas of the world get connected, messages will travel about 15% farther per year. The number of people online worldwide in 1999 was 150 million, but the peak Internet load was only 10% of that number, and data was actually being transferred only 25% of that time. With an average access rate of 44 Kbps, this indicates an estimate of about 165 Gbps at peak load. In 2006 there will be about 300 million users, and about 65 million of these will have broadband (>1.5 Mbps) access. With the addition of increased peak load and increased actual transmission time, this will result in an estimated usage of about 16.5 Terabits per second. It all adds up to a lot of bits.
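The 1999 peak-load estimate can be reproduced directly. The structure of the arithmetic is my reading of the text: 150 million users, only 10% online at peak, transferring data only 25% of that time, at an average 44 Kbps.

```python
users_1999 = 150_000_000   # people online worldwide in 1999
peak_fraction = 0.10       # fraction of users online at the peak
active_fraction = 0.25     # fraction of connected time actually transferring data
avg_rate_bps = 44_000      # average access rate, 44 Kbps

peak_load_bps = users_1999 * peak_fraction * active_fraction * avg_rate_bps
print(peak_load_bps / 1e9)  # 165.0 (Gbps), matching the chapter's figure
```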
It leads to a demand for total data communications in 2006 of nearly a 100-fold increase over 1999. With the number of new users connecting to the Internet growing this fast, can the fiber backbone meet this demand? Figure 1-2 answers this question. Figure 1-2 shows the growth in Local Area Networks (LANs) from 1980 to 2000 with some projection into the next decade. In addition, it shows Internet capacity over the last few decades and indicates the potential growth rate into the next decade. The jump in Internet capacity due to Dense Wavelength Division Multiplexing (DWDM) is a projection of the multiplying effect of this new technology. As a result, this figure shows that we can expect multi-Terabit-per-second performance from the Internet backbone in the years ahead. This will meet the projected growth in demand. Great! But what about that "last mile" of copper, coax, and wireless? The "last mile" involves servers, networks, content, and transitions from narrowband to broadband. Initially, the "last mile" will convert to residential broadband not as fiber optics, but as a network overlaid on existing telephone and cable television wiring. One megabit per second can be delivered to 80% or more of the 550 million residential telephone lines in the world. It can also be delivered over all fiber/coax CATV lines configured for two-way service. The latter represents a small fraction of the worldwide CATV lines, however, covering only 10 million homes out of 200 million. But upgrade programs will convert the remainder in five years. The endgame of the upgrade process may be fiber directly to the customer's home, but not for the next decade or two. A fiber signal travels coast to coast in 30 ms, and human latency (the period to achieve recognition) is about 50 milliseconds. Thus fiber is the only technology capable of delivering viewable HDTV video. However, due to the cost and manpower involved, we're stuck with the "last mile" remaining copper, coax, and wireless for a while yet.
Table 1-7 below summarizes how the five delivery approaches for analog and digital technologies will coexist for the next few years. In Chapter 8, we will present network background on the technologies and standards and revisit this table in more detail. (* FTTH is fiber to the home; FTTC is fiber to the curb; MPEG-2 is a compression standard, see Chapter 4; ATM is Asynchronous Transfer Mode, see Chapter 8; TDM is Time Division Multiplexing, see Chapter 8.)

Preparing to Converge

To be fully prepared to take advantage of the converging technologies, we must ask and answer the right questions. This is not as easy as it might seem. We could ask, "Which company will dominate the broadband data and telecommunication convergence?" But this would be inadequate because the multi-trillion-dollar world e-commerce market is too big for any one company to monopolize. We could ask, "Which broadband networks will dominate the Internet backbone?" But this would be inadequate because innovative multiplexing and compression advances will make broadband ubiquitous and subservient to the "last mile" problem. We could ask, "Which transmission means (cable, wireless, or copper) will dominate the 'last mile'?" But this would be inadequate because the geographical infrastructure diversity of these technologies throughout the world will dictate different winners in different regions, demonstrating that this is a "local" problem. Individually, these questions address only part of the convergence puzzle. It is e-commerce's demand for economic efficiency that will force us to face the important question of the telecommunication convergence puzzle: "What are meaningful broadband cross-technology standards?" Without globally accepted standards, hardware and software developers can't create broad solutions for consumer demand. As a result, we will be concerned throughout this book with pointing out the directions and conflicts of the various competing standards.
Conclusion

In this chapter, we presented the background of analog technology's transition toward digital technology. The chapter provided a calculation that illustrated why digital video data poses such a difficult bandwidth problem, and it evaluated the rate of conversion from narrowband connections to broadband. This rate establishes a critical perspective on the timeline of the demand for Internet video. On the basis of this chapter, you should conclude that: The Internet backbone combination of fiber and optical multiplexing will perform in the multi-Tera-bps range and provide plenty of network bandwidth in the next few years. The "last mile" connectivity will remain twisted pair, wireless, and coax cable for the next few years, but broadband (1.5 Mbps) access through cable modems and xDSL will grow to 40 million users in just a few years. Streaming video was identified as the crossroads of technology convergence. It is the bandwidth crisis of delivering video that will prove decisive in setting global standards and down-selecting competing technologies. The success of streaming video in its most cost-effective and customer-satisfying form will define the final technology convergence model into the 21st century.
- Semantic Web | H Peter Alesso
Semantic Web Services

Chapter 6.0 The Semantic Web

In this chapter, we provide an introduction to the Semantic Web and discuss its background and potential. By laying out a road map for its likely development, we describe the essential stepping stones, including: knowledge representation, inference, ontology, search, and search engines. We also discuss several supporting semantic layers of the Markup Language Pyramid: Resource Description Framework (RDF) and Web Ontology Language (OWL). In addition, we discuss using RDF and OWL to support software agents, Semantic Web Services, and semantic searches.

Background

Tim Berners-Lee invented the World Wide Web in 1989 and built the World Wide Web Consortium (W3C) team in 1992 to develop, extend, and standardize the Web. But he didn't stop there. He continued his research at MIT through Project Oxygen[1] and began conceptual development of the Semantic Web. The Semantic Web is intended to be a paradigm shift just as powerful as the original Web. The goal of the Semantic Web is to provide a machine-readable intelligence that would come from hyperlinked vocabularies that Web authors would use to explicitly define their words and concepts. The idea allows software agents to analyze the Web on our behalf, making smart inferences that go beyond the simple linguistic analyses performed by today's search engines. Why do we need such a system? Today, the data available within HTML Web pages is difficult to use on a large scale because there is no global schema. As a result, there is no system for publishing data in such a way that it can be easily processed by machines. For example, just think of the data available on airplane schedules, baseball statistics, and consumer products. This information is presently available at numerous sites, but it is all in HTML format, which means that using it has significant limitations.
The Semantic Web will bring structure and defined content to the Web, creating an environment where software agents can carry out sophisticated tasks for users. The first steps in weaving the Semantic Web on top of the existing Web are already underway. In the near future, these developments will provide new functionality as machines become better able to "understand" and process the data. This presumes, however, that developers will annotate their Web data in advanced markup languages. To this point, the language-development process isn't finished. There is also ongoing debate about the logic and rules that will govern the complex syntax. The W3C is attempting to set new standards while leading a collaborative effort among scientists around the world. Berners-Lee has stated his vision that today's Web Services, in conjunction with the developing Semantic Web, should become interoperable. Skeptics, however, have called the Semantic Web a Utopian vision of academia. Some doubt it will take root within the commercial community. Despite these doubts, research and development projects are burgeoning throughout the world. And even though Semantic Web technologies are still developing, they have already shown tremendous potential in the areas of semantic groupware (see Chapter 13) and semantic search (see Chapter 15), enough so that the future of both the Semantic Web and Semantic Web Services (see Chapter 11) appears technically attractive.

The Semantic Web

The current Web is built on HTML, which describes how information is to be displayed and laid out on a Web page for humans to read. In effect, the Web has developed as a medium for humans without a focus on data that could be processed automatically. In addition, HTML is not capable of being directly exploited by information retrieval techniques. As a result, the Web is restricted to manual keyword searches.
For example, if we want to buy a product over the Internet, we must sit at a computer and search the most popular online stores containing appropriate categories of products. We recognize that while computers are able to adeptly parse Web pages for layout and routine processing, they are unable to process the meaning of their content. XML may have enabled the exchange of data across the Web, but it says nothing about the meaning of that data. The Semantic Web will bring structure to the meaningful content of Web pages, where software agents roaming from page to page can readily carry out automated tasks. We can say that the Semantic Web will become the abstract representation of data on the Web, and that it will be constructed over the Resource Description Framework (RDF) (see Chapter 7) and Web Ontology Language (OWL) (see Chapter 8). These languages are being developed by the W3C, with participation from academic researchers and industrial partners. Data can be defined and linked using RDF and OWL so that there is more effective discovery, automation, integration, and reuse across different applications. These languages are conceptually richer than HTML and allow representation of the meaning and structure of content (the interrelationships between concepts). This makes Web content understandable by software agents, opening the way to a whole new generation of technologies for information processing, retrieval, and analysis. Two important technologies for developing the Semantic Web are already in place: XML and RDF. XML lets everyone create their own tags. Scripts, or programs, can make use of these tags in sophisticated ways, but the script writer has to know how the page writer uses each tag. In short, XML allows users to add arbitrary structure to their documents, but says nothing about what the structure means. If a developer publishes data in XML on the Web, it doesn't require much more effort to take the extra step and publish the data in RDF.
By creating ontologies to describe data, intelligent applications won’t have to spend time translating various XML schemas. In a closed environment, Semantic Web specifications have already been used to accomplish many tasks, such as data interoperability for business-to-business (B2B) transactions. Many companies have expended resources to translate their internal data syntax for their partners. As everyone migrates towards RDF and ontologies, interoperability will become more flexible to new demands. Another example of applicability is that of digital asset management. Photography archives, digital music, and video are all applications that are looking to rely to a greater degree on metadata. The ability to see relationships between separate media resources as well as the composition of individual media resources is well served by increased metadata descriptions and enhanced vocabularies. The concept of metadata has been around for years and has been employed in many software applications. The push to adopt a common specification will be widely welcomed. For the Semantic Web to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning. AI researchers have studied such systems and produced today’s Knowledge Representation (KR). KR is currently in a state comparable to that of hypertext before the advent of the Web. Knowledge representation contains the seeds of important applications, but to fully realize its potential, it must be linked into a comprehensive global system. The objective of the Semantic Web, therefore, is to provide a language that expresses both data and rules for reasoning as a Web-based knowledge representation. Adding logic to the Web means using rules to make inferences and choosing a course of action. A combination of mathematical and engineering issues complicates this task (see Chapter 9). 
The logic must be powerful enough to describe complex properties of objects, but not so powerful that agents can be tricked by a paradox.

Intelligence Concepts

The concept of Machine Intelligence (MI) is fundamental to the Semantic Web. Machine Intelligence is often referred to in conjunction with the terms Machine Learning, Computational Intelligence, Soft-Computing, and Artificial Intelligence. Although these terms are often used interchangeably, they are different branches of study. For example, Artificial Intelligence involves symbolic computation while Soft-Computing involves intensive numeric computation. We can identify the following sub-branches of Machine Intelligence that relate to the Semantic Web: Knowledge Acquisition and Representation, Agent Systems, and Ontology. Although symbolic Artificial Intelligence is currently built into Semantic Web data representation, there is no doubt that software tool vendors and software developers will incorporate the Soft-Computing paradigm as well. The benefit is creating adaptive software applications, meaning that Soft-Computing applications may adapt to unforeseen input. Knowledge Acquisition is the extraction of knowledge from various sources, while Knowledge Representation is the expression of knowledge in computer-tractable form that is used to help software agents perform. A Knowledge Representation language includes Language Syntax (which describes the configurations that can constitute sentences) and Semantics (which determines the facts and meaning based upon the sentences). For the Semantic Web to function, computers must have access to structured collections of information. But traditional knowledge-representation systems have typically been centralized, requiring everyone to share exactly the same definition of common concepts. As a result, central control is stifling, and increasing the size and scope of such a system rapidly becomes unmanageable.
In an attempt to avoid problems, traditional knowledge-representation systems narrow their focus and use a limited set of rules for making inferences. These limitations restrict the questions that can be asked reliably. XML and RDF are important technologies for developing the Semantic Web; they provide languages that express both data and rules for reasoning about the data from a knowledge-representation system. The meaning is expressed by RDF, which encodes it in sets of triples, each triple acting as a sentence with a subject, predicate, and object. These triples can be written using XML tags. As a result, an RDF document makes assertions about specific things. Subject and object are each identified by a Universal Resource Identifier (URI), just like those used in a link on a Web page. The predicate is also identified by a URI, which enables anyone to define a new concept just by defining a URI for it somewhere on the Web. The triples of RDF form webs of information about related things. Because RDF uses URIs to encode this information in a document, the URIs ensure that concepts are not just words in a document, but are tied to a unique definition that everyone can find on the Web.

Search Algorithms

The basic technique of search (or state space search) refers to a broad class of methods encountered in many different AI applications; the technique is sometimes considered a universal problem-solving mechanism in AI. To solve a search problem, it is necessary to prescribe a set of possible or allowable states, a set of operators to change from one state to another, an initial state, a set of goal states, and additional information to help distinguish states according to their likelihood of leading to a target or goal state. The problem then becomes one of finding a sequence of operators leading from the initial state to one of the goal states.
Search algorithms can range from brute force methods (which use no prior knowledge of the problem domain, and are sometimes referred to as blind searches) to knowledge-intensive heuristic searches that use knowledge to guide the search toward a more efficient path to the goal state (see Chapters 9 and 15). Search techniques include:

Brute force
Breadth-first
Depth-first
Depth-first iterative-deepening
Bi-directional
Heuristic
Hill-climbing
Best-first
A*
Beam
Iterative-deepening-A*

Brute force searches entail the systematic and complete search of the state space to identify and evaluate all possible paths from the initial state to the goal states. These searches can be breadth-first or depth-first. In a breadth-first search, each branch at each node in a search tree is evaluated, and the search works its way from the initial state to the final state considering all possibilities at each branch, a level at a time. In the depth-first search, a particular branch is followed all the way to a dead end (or to a successful goal state). Upon reaching the end of a path, the algorithm backs up and tries the next alternative path in a process called backtracking. The depth-first iterative-deepening algorithm is a variation of the depth-first technique in which the depth-first method is implemented with a gradually increasing limit on the depth. This allows a search to be completed with a reduced memory requirement, and improves performance where the objective is to find the shortest path to the target state. The bi-directional search starts from both the initial and target states and performs a breadth-first search in both directions until a common state is found in the middle. The solution is found by combining the path from the initial state with the inverse of the path from the target state. These brute force methods are useful for relatively simple problems, but as the complexity of the problem rises, the number of states to be considered can become prohibitive.
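The breadth-first and depth-first behavior described above can be sketched in Python. The toy state space below is invented for illustration; both functions return a path from the start state to the goal.

```python
from collections import deque

# An invented toy state space: each state maps to its successor states.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["G"], "G": []}

def bfs(start, goal):
    """Expand the shallowest unexplored node first, a level at a time."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

def dfs(state, goal, path=()):
    """Follow one branch to a dead end, then backtrack to the next branch."""
    path = path + (state,)
    if state == goal:
        return list(path)
    for nxt in graph[state]:
        found = dfs(nxt, goal, path)
        if found:
            return found

print(bfs("A", "G"))  # ['A', 'C', 'E', 'G']
print(dfs("A", "G"))  # ['A', 'C', 'E', 'G']
```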
For this reason, heuristic approaches are more appropriate for complex search problems where prior knowledge can be used to direct the search. Heuristic approaches use knowledge of the domain to guide the choice of which nodes to expand next and thus avoid the need for a blind search of all possible states. The hill-climbing approach is the simplest heuristic search; this method works by always moving in the direction of the locally steepest ascent toward the goal state. The biggest drawback of this approach is that the local maximum is not always the global maximum, and the algorithm can get stuck at a local maximum, thus failing to achieve the best results. To overcome this drawback, the best-first approach maintains an open list of nodes that have been identified but not expanded. If a local maximum is encountered, the algorithm moves to the next best node from the open list for expansion. This approach, however, ranks the next best node purely on the basis of its estimated ascent toward the goal, without regard to the distance it lies from the initial state. The A* technique goes one step further by evaluating the overall path from the initial state to the goal, using the path to the present node combined with the ascent rates to the potential successor nodes. This technique tries to find the optimal path to the goal. A variation on this approach is the beam search, in which the open list of nodes is limited to retain only the best nodes, thereby reducing the memory requirement for the search. The iterative-deepening-A* approach is a further variation in which depth-first searches are completed, a branch at a time, until some threshold measure is exceeded for the branch, at which time it is truncated and the search backtracks to the most recently generated node. A classic example of an AI-search application is computer chess.
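A rough sketch of the A* idea follows. The weighted graph, edge costs, and heuristic table are all invented for illustration; the heuristic h must never overestimate the true remaining cost if A* is to find the optimal path.

```python
import heapq

# Invented toy graph: node -> list of (successor, edge cost).
edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 5)], "G": []}
h = {"S": 7, "A": 6, "B": 2, "G": 0}  # heuristic estimate of cost to goal

def a_star(start, goal):
    """Expand the node with the lowest f = g + h (cost so far + estimate)."""
    frontier = [(h[start], 0, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for nxt, cost in edges[node]:
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, path + [nxt]))

print(a_star("S", "G"))  # (8, ['S', 'A', 'B', 'G'])
```

Here the greedy route S-B-G costs 9, but A* finds the cheaper S-A-B-G route at cost 8 because it weighs the path cost so far, not just the estimated ascent toward the goal.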
Over the years, computer chess-playing software has received considerable attention, and such programs are a commercial success for home PCs. In addition, most people are aware of the highly visible contest between IBM's Deep Blue supercomputer and the reigning World Chess Champion, Garry Kasparov, in May 1997. Millions of chess and computing fans observed this event in real time where, in a dramatic sixth-game victory, Deep Blue beat Kasparov. This was the first time a computer had won a match against a current world champion under tournament conditions. Computer chess programs generally make use of standardized opening sequences and endgame databases as a knowledge base to simplify these phases of the game. For the middle game, they examine large trees and perform deep searches with pruning to eliminate branches that are evaluated as clearly inferior and to select the most highly evaluated move. We will explore semantic search in more detail in Chapter 15.

Thinking

The goal of the Semantic Web is to provide a machine-readable intelligence. But whether AI programs actually think is a relatively unimportant question, because whether or not "smart" programs "think," they are already becoming useful. Consider, for example, IBM's Deep Blue. In May 1997, IBM's Deep Blue supercomputer played a defining match with the reigning World Chess Champion, Garry Kasparov. This was the first time a computer had won a complete match against the world's best human chess player. For almost 50 years, researchers in the field of AI had pursued just this milestone. Playing chess has long been considered an intellectual activity, requiring skill and intelligence of a specialized form. As a result, chess attracted AI researchers. The basic mechanism of Deep Blue is that the computer decides on a chess move by assessing all possible moves and responses.
It can identify moves up to a depth of about 14 and value-rank the resulting game positions using an algorithm prepared in advance by a team of grandmasters. Did Deep Blue demonstrate intelligence, or was it merely an example of computational brute force? Our understanding of how the mind of a brilliant player like Kasparov works is limited. But indubitably, his "thought" process was something very different from Deep Blue's. Arguably, Kasparov's brain works through the operation of each of its billions of neurons carrying out hundreds of tiny operations per second, none of which, in isolation, demonstrates intelligence. One approach to AI is to implement methods using the ideas of computer science and logic algebras. The algebra would establish the rules between functional relationships and sets of data structures. A fundamental set of instructions would allow operations including sequencing, branching, and recursion within an accepted hierarchy. The preference of computer science has been to develop hierarchies that resolve recursive looping through logical methods. One of the great computer science controversies of the past five decades has been the role of GOTO-like statements. This has risen again in the context of hyperlinking. Hyperlinking, like GOTO statements, can lead to unresolved conflict loops (see Chapter 12). Nevertheless, logic structures have always appealed to AI researchers as a natural entry point to demonstrate machine intelligence. An alternative to logic methods is to use introspection methods, which observe and mimic human brains and behavior. In particular, pattern recognition seems intimately related to a sequence of unique images with a special linkage relationship. While introspection, or heuristics, is an unreliable way of determining how humans think, introspective methods, when they work, can form effective and useful AI. The success of Deep Blue and chess programming is important because it employs both logic and introspection AI methods.
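The "deep search with pruning" used for the middle game can be sketched with alpha-beta pruning over a game tree. The tree here is just nested lists of leaf evaluations, not real chess positions, and the numbers are invented.

```python
# Minimax with alpha-beta pruning over a toy game tree. Inner lists are
# positions to expand; numbers are static evaluations of leaf positions.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # leaf: return static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent would avoid this branch,
                break                       # so prune the remaining children
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 5], [6, [9, 2]], [1, 2]]
print(alphabeta(tree, True))  # 6
```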
When the opinion is expressed that human grandmasters do not examine 200,000,000 move sequences per second, we should ask, "How do they know?" The answer is usually that human grandmasters are not aware of searching this number of positions, or that they are aware of searching a smaller number of sequences. But then again, as individuals, we are generally unaware of what actually does go on in our minds. Much of the mental computation done by a chess player is invisible to both the player and to outside observers. Patterns in the position suggest what lines of play to look at, and the pattern recognition processes in the human mind seem to be invisible to that mind. However, the parts of the move tree that are examined are consciously accessible. Suppose most of the chess player's skill actually comes from an ability to compare the current position against images of 10,000 positions already studied. (There is some evidence that this is at least partly true.) We would call selecting the best position (or image) among the 10,000 insightful. Still, if the unconscious human version yields intelligent results, and the explicit algorithmic Deep Blue version yields essentially the same results, then couldn't the computer and its programming be called intelligent too? For now, the Web consists primarily of a huge number of data nodes (containing text, pictures, movies, and sounds). The data nodes are connected through hyperlinks to form 'hyper-networks' that can collectively represent complex ideas and concepts above the level of the individual data. However, the Web does not currently perform many sophisticated tasks with this data. The Web merely stores and retrieves information, even after considering some of the "intelligent applications" in use today (including intelligent agents, EIP, and Web Services).
So far, the Web does not have some of the vital ingredients it needs, such as a global database scheme, a global error-correcting feedback mechanism, a logic layer protocol, or universally accepted knowledge bases with inference engines. As a result, we may say that the Web continues to grow and evolve, but it does not learn. If the jury is still out on defining the Web as intelligent (and it may be for some time), we can still consider ways to change the Web to give it the capabilities to improve and become more useful (see Chapter 9).

Knowledge Representation and Inference

An important element of AI is the principle that intelligent behavior can be achieved through the processing of symbol structures representing increments of knowledge. This has given rise to the development of knowledge-representation languages that permit the representation and manipulation of knowledge to deduce new facts. Thus, knowledge-representation languages must have a well-defined syntax and semantics, while supporting inference. First, let's define the fundamental terms "data," "information," and "knowledge." An item of data is a fundamental element of an application. Data can be represented by populations and labels. Information is an explicit association between items of data. Associations are often functional in that they represent a function relating one set of things to another set of things. A rule is an explicit functional association from a set of information things to a resultant information thing; so, in this sense, a rule is knowledge. Knowledge-based systems contain knowledge as well as information and data. The information and data can be modeled and implemented in a database. Knowledge-engineering methodologies address the design and maintenance of knowledge, as well as the data and information. Logic is used as the formalism for programming languages and databases. It can also be used as a formalism to implement knowledge methodology.
Any formalism that admits a declarative semantics and can be interpreted both as a programming language and as a database language is a knowledge language. Three well-established techniques have been used for knowledge representation and inference: frames and semantic networks, logic-based approaches, and rule-based systems. Frames and semantic networks, also referred to as slot-and-filler structures, capture declarative information about related objects and concepts where there is a clear class hierarchy and where the principle of inheritance can be used to infer the characteristics of members of a subclass from those of the higher-level class. The two forms of reasoning in this technique are matching (i.e., identification of objects having common properties) and property inheritance, in which properties are inferred for a subclass. Because of their limitations, frames and semantic networks are generally limited to the representation and inference of relatively simple systems. Logic-based approaches use logical formulas to represent more complex relationships among objects and attributes. Such approaches have well-defined syntax, semantics, and proof theory. When knowledge is represented with logic formulas, the formal power of a logical theorem prover can be applied to derive new knowledge. However, the approach is inflexible and requires great precision in stating the logical relationships. In some cases, common-sense inferences and conclusions cannot be derived, and the approach may be inefficient, especially when dealing with issues that result in large combinations of objects or concepts. Rule-based approaches are more flexible. They allow the representation of knowledge using sets of IF-THEN or other condition-action rules. This approach is more procedural and less formal in its logic, and as a result, reasoning can be controlled in a forward- or backward-chaining interpreter.
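The forward-chaining style of a rule-based system can be sketched in a few lines. The facts and IF-THEN rules below are invented; a rule fires whenever all of its conditions are among the known facts, and firing continues until no new conclusions appear.

```python
# Invented knowledge base: IF all conditions hold THEN assert the conclusion.
facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin"),
    ({"is_bird"}, "is_animal"),
]

def forward_chain(facts, rules):
    """Fire rules repeatedly until no rule adds a new fact."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['has_feathers', 'is_animal', 'is_bird', 'lays_eggs']
```

Note that "is_penguin" is never derived, because its "cannot_fly" condition is not a known fact; a backward-chaining interpreter would instead start from a goal and work back toward the facts.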
In each of these approaches, the knowledge-representation component (i.e., problem-specific rules and facts) is separate from the problem-solving and inference procedures.

Resource Description Framework (RDF)

The Semantic Web is built on syntaxes which use the Universal Resource Identifier (URI) to represent data in triple-based structures using the Resource Description Framework (RDF) (see Chapter 7). A URI is a Web identifier, beginning with a scheme such as "http:" or "ftp:". The syntax of URIs is governed by the IETF, which publishes the general URI specification; the W3C maintains a list of URI schemes.

In an RDF document, assertions are made that particular things have properties with certain values. This structure turns out to be a natural way to describe the vast majority of the data processed by machines. Subject, predicate, and object are each identified by a URI. The RDF triples form webs of information about related things. Because RDF uses URIs to encode this information, the concepts in a document are not just words, but are tied to a unique definition. All the triples result in a directed graph whose nodes and arcs are labeled with qualified URIs.

The RDF model is very simple and uniform. The only vocabulary is URIs, which allows the use of the same URI as a node and as an arc label. This makes self-reference and reification possible, just as in natural languages. This flexibility is valuable in a user-oriented context (like the Web), but is difficult to cope with in knowledge-based systems and inference engines. Once information is in RDF form, it becomes easier to process.

We illustrate an RDF document in Example 6-1. This piece of RDF basically says that a book has the title "e-Video: Producing Internet Video" and was written by "H. Peter Alesso."

Example 6-1. Sample RDF/XML (reconstructed listing using the Dublin Core title and creator properties; the resource URI shown is illustrative):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="urn:example:e-video">
        <dc:title>e-Video: Producing Internet Video</dc:title>
        <dc:creator>H. Peter Alesso</dc:creator>
      </rdf:Description>
    </rdf:RDF>

The benefit of RDF is that the information maps directly and unambiguously to a decentralized model that differentiates the semantics of the application from any additional syntax. In addition, XML Schema restricts the syntax of XML applications, and using it in conjunction with RDF may be useful for creating some datatypes.

The goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain and does not define the semantics of any application. RDF models may be used to address and reuse components (software engineering), to handle problems of schema evolution (databases), and to represent knowledge (artificial intelligence). However, modeling metadata in a completely domain-independent fashion is difficult.

How successful RDF will be in automating activities over the Web is an open question. However, if RDF could provide a standardized framework for most major Web sites and applications, it could bring significant improvements in automating Web-related activities and services (see Chapter 11). If some of the major sites on the Web incorporate semantic modeling through RDF, it could provide more sophisticated searching capabilities over these sites (see Chapter 15). We will return to a detailed presentation of RDF in Chapter 7.

RDF Schema

The first "layer" of the Semantic Web is the simple data-typing model called a schema. A schema is simply a document that defines another document; it is a master checklist or grammar definition. RDF Schema was designed to be a simple data-typing model for RDF. Using RDF Schema, we can say that "Desktop" is a type of "Computer," and that "Computer" is a subclass of "Machine." We can also create properties and classes, as well as ranges and domains for properties. All of the terms for RDF Schema start with the namespace http://www.w3.org/2000/01/rdf-schema# .
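To make the triple model described earlier concrete, here is a minimal in-memory triple store in Python (the store and its data are invented for illustration and are not part of any RDF library): statements are (subject, predicate, object) tuples, and a query matches a pattern in which None acts as a wildcard.

```python
# A toy triple store: each statement is a (subject, predicate, object) tuple.
triples = [
    ("ex:e-video", "dc:title", "e-Video: Producing Internet Video"),
    ("ex:e-video", "dc:creator", "H. Peter Alesso"),
    ("ex:e-video", "rdf:type", "ex:Book"),
]

def query(pattern, store):
    """Return all triples matching the pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything asserted about the book:
for s, p, o in query(("ex:e-video", None, None), triples):
    print(p, "->", o)
```

Because every statement has the same shape, triples from independent sources can simply be concatenated into one graph, which is the property that makes the RDF model easy to process.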
The three most important RDF concepts are "Resource" (rdfs:Resource), "Class" (rdfs:Class), and "Property" (rdf:Property). These are all "classes," in that terms may belong to these classes. For example, all terms in RDF are types of resource. To declare that something is a "type" of something else, we just use the rdf:type property:

    rdfs:Resource rdf:type rdfs:Class .
    rdfs:Class rdf:type rdfs:Class .
    rdf:Property rdf:type rdfs:Class .
    rdf:type rdf:type rdf:Property .

This means "Resource is a type of Class, Class is a type of Class, Property is a type of Class, and type is a type of Property." We will return to a detailed presentation of RDF Schema in Chapter 7.

Ontology

A program that wants to compare information across two databases has to know when two terms are being used to mean the same thing. Ideally, the program should have a way to discover common meanings for whatever databases it encounters. A solution to this problem is provided by the Semantic Web in the form of collections of information called ontologies. Artificial-intelligence and Web researchers use the term ontology for a document that defines the relations among terms. A typical ontology for the Web includes a taxonomy with a set of inference rules.

Ontology and Taxonomy

We can express an ontology as:

    Ontology = <taxonomy, inference rules>

and a taxonomy as:

    Taxonomy = <{classes}, {relations}>

The taxonomy defines classes of objects and relations among them. For example, an address may be defined as a type of location, and city codes may be defined to apply only to locations, and so on. Classes, subclasses, and relations among entities are important tools. We can express a large number of relations among entities by assigning properties to classes and allowing subclasses to inherit such properties. Inference rules in ontologies supply further power.
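The class hierarchy and property inheritance described above can be sketched in a few lines of Python (the classes and properties are invented for illustration): a subclass inherits every property of its ancestors, so a property need only be stated once, on the most general class it applies to.

```python
# A toy taxonomy: each class maps to its parent class (None marks the root).
parents = {"Machine": None, "Computer": "Machine", "Desktop": "Computer"}

# Properties asserted directly on a class.
own_properties = {
    "Machine": {"has_power_switch"},
    "Computer": {"runs_software"},
}

def properties(cls):
    """Collect a class's own properties plus those inherited from ancestors."""
    props = set()
    while cls is not None:
        props |= own_properties.get(cls, set())
        cls = parents[cls]
    return props

print(properties("Desktop"))  # inherits from both Computer and Machine
```

This is the same mechanism that rdfs:subClassOf enables in RDF Schema, where walking the subclass chain is performed by an inference engine rather than an explicit loop.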
An ontology may express the rule, "If a city code is associated with a state code, and an address uses that city code, then that address has the associated state code." A program could then readily deduce, for instance, that an MIT address, being in Cambridge, must be in Massachusetts, which is in the U.S., and therefore should be formatted to U.S. standards. The computer doesn't actually "understand" this, but it can manipulate the terms in a meaningful way.

The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information, and exchange the results. The effectiveness of software agents will increase exponentially as more machine-readable Web content and automated services become available. The Semantic Web promotes this synergy: even agents that were not expressly designed to work together can transfer semantic data. The Semantic Web will provide the foundations and the framework to make such technologies feasible.

Web Ontology Language (OWL)

In 2003, the W3C began the final unification of the disparate ontology efforts into a standard ontology language called the Web Ontology Language (OWL). OWL is a vocabulary extension of RDF and is evolving into the semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL facilitates greater machine readability of Web content than that supported by XML, RDF, and RDFS by providing additional vocabulary along with a formal semantics. OWL comes in three increasingly expressive sublanguages: OWL Lite, OWL DL, and OWL Full. By offering three flavors, OWL hopes to attract a broad following. We will return to a detailed presentation of OWL in Chapter 8.

Inference

A rule describes a conclusion that one draws from a premise. A rule can be a statement processed by an engine or machine that can make an inference from a given generic rule.
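As a concrete instance of such a generic rule, this sketch (the city and state tables are invented for illustration) encodes the earlier example: if a city code is associated with a state code, and an address uses that city code, then the address has that state code, and the chain continues up to the country.

```python
# Generic rule: an address inherits the state (and country) of its city.
city_to_state = {"Cambridge": "Massachusetts"}
state_to_country = {"Massachusetts": "U.S."}

def locate(address):
    """Apply the rule chain: address -> city -> state -> country."""
    state = city_to_state[address["city"]]
    country = state_to_country[state]
    return {**address, "state": state, "country": country}

mit = locate({"street": "77 Massachusetts Ave", "city": "Cambridge"})
print(mit["state"], mit["country"])
```

The program never "understands" Cambridge; it simply follows the associations the ontology declares, which is exactly the kind of term manipulation described above.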
The principle of "inference" is to derive new knowledge from knowledge we already have. In a mathematical sense, querying is a form of inference, and inference is one of the supporting principles of the Semantic Web.

For two applications to exchange and process XML data, the two parties must first agree on a common syntax for their documents; only after reengineering their documents to the new syntax can the exchange happen. Using the RDF/XML model, however, two parties may communicate across different syntaxes using the concept of equivalencies. For example, in RDF/XML we could say "car" and specify that it is equivalent to "automobile." We can see how such a system could scale: merging databases becomes a matter of recording in RDF that "car" in one database is equivalent to "automobile" in a second database. Indeed, this is already possible with Semantic Web tools, such as the Python program CWM ("Closed World Machine").

Unfortunately, greater levels of inference can only be provided by "First-Order Predicate Logic" (FOPL) languages, and OWL is not entirely a FOPL language. First-Order Logic (FOL) is a general-purpose representation language that is based on an ontological commitment to the existence of objects and relations. FOL makes it easy to state facts about categories, either by relating objects to categories or by quantifying over them.

In FOPL languages, a predicate is a feature of the language which makes a statement about something or attributes a property to it. Unlike propositional logics, in which specific propositional operators are identified and treated, predicate logic uses arbitrary names for predicates and relations, which have no specific meaning until the logic is applied. Though predicates are one of the features that distinguish first-order predicate logic from propositional logic, they are really the extra structure necessary to permit the study of quantifiers.
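The "car is equivalent to automobile" merge described earlier can be sketched as follows (the equivalence table and records are invented for illustration): before combining two databases, each record's vocabulary term is rewritten to a canonical form, so that equivalent terms unify.

```python
# Equivalence mapping between two vocabularies
# (analogous in spirit to declaring terms equivalent in RDF).
equivalent = {"automobile": "car"}  # canonicalize "automobile" -> "car"

def canonical(term):
    return equivalent.get(term, term)

db1 = [("car", "vin123")]
db2 = [("automobile", "vin456")]

# Merging the databases: rewrite terms to canonical form, then combine.
merged = [(canonical(t), v) for t, v in db1 + db2]
print(merged)
```

Scaling this is a matter of publishing the equivalence assertions themselves as data, which is precisely what RDF allows and what tools like CWM consume.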
The two important features of natural language whose logic is captured in the predicate calculus are the terms "every" and "some" and their synonyms. Their analogues in formal logic are the universal and existential quantifiers. These features of language refer to one or more individuals or things, which are not propositions, and therefore force some kind of analysis of the structure of "atomic" propositions.

The simplest logic is classical, or Boolean, first-order logic. "Classical" or "Boolean" signifies that propositions are either true or false. First-order logic permits reasoning about propositions and also about quantification ("all" or "some"). An elementary example of inference is as follows:

    All men are mortal.
    John is a man.
    Therefore, John is mortal.

Application of inference rules provides powerful logical deductions. With ontology pages on the Web, solutions to terminology problems begin to emerge. The definitions of terms, vocabularies, or XML codes used on a Web page can be defined by pointers from the page to an ontology. Different ontologies need to provide equivalence relations (defining the same meaning across vocabularies); otherwise there would be conflict and confusion.

Software Agents

Many automated Web Services already exist without semantics, but other programs, such as agents, have no way to locate one that will perform a specific function. This process, called service discovery, can happen only when there is a common language to describe a service in a way that lets other agents understand both the function offered and the way to take advantage of it. Services and agents can advertise their functions by depositing descriptions in directories similar to the Yellow Pages. Some low-level service-discovery schemes are currently available; the Semantic Web is more flexible by comparison.
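Returning to the syllogism above, the inference it performs is universal instantiation, which can be sketched directly (the individuals and predicates are invented for illustration): a universally quantified rule "for all x, man(x) implies mortal(x)" is applied to every individual known to satisfy the premise.

```python
# Facts: unary predicates mapped to the set of individuals satisfying them.
facts = {"man": {"John", "Socrates"}, "mortal": set()}

# Universal rule: for all x, premise(x) implies conclusion(x).
def apply_universal(premise, conclusion, facts):
    facts[conclusion] |= facts[premise]

apply_universal("man", "mortal", facts)
print("John" in facts["mortal"])  # prints True
```

Propositional logic cannot express this rule once for all individuals; the quantifier is what lets a single statement cover John, Socrates, and anyone else asserted to be a man.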
The consumer and producer agents can reach a shared understanding by exchanging ontologies, which provide the vocabulary needed for discussion. Agents can even bootstrap new reasoning capabilities when they discover new ontologies. Semantics also make it easier to take advantage of a service that only partially matches a request.

An intelligent agent is a computer system, situated in some environment, that is capable of autonomous action and learning in its environment in order to meet its design objectives. Intelligent agents can have the following characteristics: reactivity (they perceive their environment and respond), pro-activeness (they exhibit goal-directed behavior), and social ability (they interact with other agents). Real-time intelligent agent technology offers a powerful Web tool. Agents are able to act without the intervention of humans or other systems: they have control both over their own internal state and over their behavior.

In complex domains, agents must be prepared for the possibility of failure; such environments are called non-deterministic. Normally, an agent has a repertoire of actions available to it, and this set of possible actions represents the agent's capability to modify its environment. An action may fail in some situations: for example, the action "purchase a house" will fail if insufficient funds are available. Actions therefore have pre-conditions associated with them, which define the possible situations in which they can be applied. The key problem facing an agent is deciding which of its actions it should perform to satisfy its design objectives.

Agent architectures are really software architectures for decision-making systems that are embedded in an environment. The complexity of the decision-making process can be affected by a number of environmental properties, such as:

    Accessible vs. inaccessible.
    Deterministic vs. non-deterministic.
    Episodic vs. non-episodic.
    Static vs. dynamic.
    Discrete vs. continuous.
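The notion of pre-conditioned actions can be sketched as follows (the action, its pre-condition, and the agent's state are invented for illustration): an action is applicable only when its pre-condition holds in the current state, and attempting it otherwise fails.

```python
# Each action pairs a pre-condition test with an effect on the agent's state.
actions = {
    "purchase_house": (
        lambda s: s["funds"] >= s["house_price"],                # pre-condition
        lambda s: {**s, "funds": s["funds"] - s["house_price"],  # effect
                   "owns_house": True},
    ),
}

def attempt(name, state):
    """Apply the action if its pre-condition holds, else report failure."""
    precondition, effect = actions[name]
    if precondition(state):
        return effect(state)
    raise RuntimeError(f"pre-condition for {name!r} not satisfied")

state = {"funds": 50_000, "house_price": 300_000, "owns_house": False}
try:
    state = attempt("purchase_house", state)
except RuntimeError as err:
    print(err)  # insufficient funds, so the action fails
```

An agent's planning problem is then to choose a sequence of actions whose pre-conditions hold in turn and whose combined effects satisfy its design objectives.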
The most complex general class of environment is inaccessible, non-deterministic, non-episodic, dynamic, and continuous.

Trust and Proof

The next step in the architecture of the Semantic Web is trust and proof. If one person says that x is blue, and another says that x is not blue, will the Semantic Web face a logical contradiction? The answer is no, because applications on the Semantic Web generally depend upon context, and applications in the future will contain proof-checking mechanisms and digital signatures.

Semantic Web Capabilities and Limitations

The Semantic Web promises to make Web content machine-understandable, allowing agents and applications to access a variety of heterogeneous resources, process and integrate the content, and produce added value for the user. It aims to provide an extra machine-understandable layer, which will considerably simplify the programming and maintenance effort for knowledge-based Web Services.

Current technology at research centers allows many of the functionalities the Semantic Web promises: software agents accessing and integrating content from distributed heterogeneous Web resources. However, these applications are really ad hoc solutions using wrapper technology. A wrapper is a program that accesses an existing Web site and extracts the needed information. Wrappers are screen scrapers in the sense that they parse the HTML source of a page, using heuristics to localize and extract the relevant information. Not surprisingly, wrappers have high construction and maintenance costs, since much testing is needed to guarantee robust extraction, and each time the Web site changes, the wrapper has to be updated accordingly.

The main power of Semantic Web languages is that anyone can create one, simply by publishing RDF triples with URIs. We have already seen that RDF Schema and OWL are very powerful languages.
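To see why wrappers are brittle, consider this minimal screen-scraping sketch (the HTML fragment and extraction pattern are invented for illustration): it extracts a price by pattern-matching the page source, and it breaks the moment the site changes its markup.

```python
import re

# A snapshot of the page being wrapped; the pattern below is tied to it.
html = '<div class="item"><span class="price">$19.99</span></div>'

def extract_price(page):
    """Heuristic extraction: find the price inside the 'price' span."""
    match = re.search(r'<span class="price">\$([0-9.]+)</span>', page)
    if match is None:
        raise ValueError("page layout changed; wrapper must be updated")
    return float(match.group(1))

print(extract_price(html))
# If the site renames the class to "cost", the wrapper fails:
# extract_price('<span class="cost">$19.99</span>')  raises ValueError
```

With machine-understandable markup, the price would instead be published as a typed property, and no layout-dependent heuristics would be needed.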
One of the main challenges the Semantic Web community faces in constructing innovative, knowledge-based Web Services is to reduce the programming effort while keeping the Web-preparation task as small as possible. The Semantic Web's success or failure will be determined by solving the following:

• The availability of content.
• Ontology availability, development, and evolution.
• Scalability – making Semantic Web content storage and search scalable.
• Multilinguality – supporting information in several languages.
• Visualization – intuitive visualization of Semantic Web content.
• Stability of Semantic Web languages.

Conclusion

In this chapter, we provided an introduction to the Semantic Web and discussed its background and potential. By laying out a roadmap for its likely development, we described the essential stepping stones, including knowledge representation, inference, ontology, and search. We also discussed several supporting semantic layers of the Markup Language Pyramid: the Resource Description Framework (RDF) and the Web Ontology Language (OWL). In addition, we discussed using RDF and OWL to support software agents, Semantic Web Services, and semantic search.

[1] MIT's Project Oxygen is developing technologies to enable pervasive, human-centered computing and information-technology services. Oxygen's user technologies include speech and vision technologies that enable communication with Oxygen as if interacting directly with another person, saving much time and effort. Automation, individualized knowledge access, and collaboration technologies will be used to perform a wide variety of automated, cutting-edge tasks.
- Henry Gallant and the Warrior | H Peter Alesso
Henry Gallant and the Warrior AMAZON Going Up 1 Lieutenant Henry Gallant plodded along the cobblestone streets of New Annapolis—head down, mind racing . . . My orders say take command of the Warrior immediately . . . but no promotion . . . Why not? He pondered the possibilities, but he already knew the answer. Though he had steely gray eyes, a square jaw, and was taller than nearly everyone around him, what distinguished him most was not visible to the naked eye—he was a Natural—born without genetic engineering. Is this my last chance to prove myself? By the time he reached the space elevator, the welcoming breeze of the clear brisk morning had brightened his mood and he fell into line behind the shipyard personnel without complaint. Looking up, he marveled: That cable climbs into the clouds like an Indian rope trick. When it was his turn at last, the guard scanned his comm pin against the access manifest. The portal light blinked red. “Pardon, sir. Access denied,” said the grim-faced sentry. “Call the officer of the guard,” demanded Gallant. The officer of the guard appeared but was no more inclined to pass Gallant through than the sentry was. The guard touched the interface panel and made several more entries, but the portal continued to blink red. “There’s a hold on your access, sir.” Trouble already? Gallant thought. Then he asked, “A hold?” “Yes, sir. Your clearance and authorization are in order, but SIA has placed a hold on your travel. They want you to report to SIA headquarters, A.S.A.P.” “I need to go to the shipyard and attend to important business before going to the Solar Intelligence Agency,” clarified Gallant, but even as he said it, he knew it wouldn’t help. “Sorry, sir. Orders.” Gallant noticed the four gold stripes of a captain’s sleeve. The officer was waiting to take the next elevator. “Captain?” he said, hailing the man before he recognized him. Captain Kenneth Caine of the Repulse marched to the guard post, frowning. 
“What can I do for you, Gallant?” Of all the luck, he thought. Caine was the last person he wanted to impose upon, but it was too late now. Several uncomfortable moments passed with the three of them standing there—Caine, Gallant, and the officer of the guard—staring at each other, waiting for someone to break the silence. Finally, Gallant addressed Caine: “Well, sir, I’ve received orders to take command of the Warrior, but apparently all the T’s haven’t been crossed and my shipyard access has a hold from SIA.” Caine’s frown deepened. Gallant turned to the officer of the guard and said, “Is it possible to allow me to go to my ship and complete my business? I’ll report to SIA immediately afterward.” The officer of the guard fidgeted and squirmed. He understandably did not like being placed in such a position while under the scrutiny of a full captain. Caine shrugged. Gallant was puzzled for a moment, wondering how to win Caine’s support. He tried the officer of the guard again, “Perhaps you could send a message to SIA headquarters stating that you informed me of my requirement to report and that I agreed to attend this afternoon after I assume command of my ship. I’ll initial it.” Caine nodded. The guard brightened visibly. “That should be acceptable, sir.” He made a few entries into his interface panel and the portal finally blinked green. Gallant stepped through the gate and joined Caine. Together they walked to the elevator doors and mingled with the group waiting for the next available car. “Thank you for your help, Captain,” said Gallant. “I’m sorry to have troubled you.” Caine merely nodded. Unwilling to miss the opportunity to reconnect with his former commanding officer, Gallant asked, “How’ve you been, sir?” Caine’s frown returned. “Well, personally, it’s been quite a trial . . .” Gallant resisted the temptation to coax him onward.
After a minute, Caine revealed, “I lost a lot of shipmates during the last action.” He sighed and took a moment to silently mourn their passing. “I’m sorry, sir,” said Gallant, who was sensitive to the prickling pain in Caine’s voice. Gallant then took a long look at the senior officer. He recalled a mental image of his former commanding officer—solidly built and square-shouldered, with a crew cut and a craggy face. In contrast, the man before him now was balding and flabby, with a puffy face and deep frown lines. “Humph,” grumbled Caine, recognizing Gallant’s critical stare. “You’ve changed too. You’re no longer the lanky callow midshipman who reported aboard the Repulse nearly five years ago.” “Thank you, sir,” said Gallant, breaking into an appreciative smile. Caine returned the smile and, warming to the conversation, said, “We had a few good times back then—and a few victories as well—a good ship, a good crew.” A minute passed before Caine added, “As for the Repulse—she’s suffered along with her crew . . . perhaps more than her fair share. As you know, she’s been in the forefront of battle since the beginning of the war, but when the Titans attacked Jupiter Station earlier this year, we took a terrible beating—along with the rest of the fleet.” Caine’s face went blank for a few seconds as he relived the event. “The Titans used nuclear weapons to bombard the colonies. The loss of life was staggering. Jupiter’s moons are now lifeless, scorched rocks. The colonists fled on whatever transport they could find and they’re now in the refugee camp on the outskirts of this city,” said Caine. Then, trying to sound optimistic but unable to hide his concern, he added, “We gave the Titans some lumps as well. It’ll be some time before they can trouble us on this side of the asteroid belt.” “So I understand, sir.” SWOOSH! BAM! The elevator car doors opened with a loud bang. Caine stepped inside.
Gallant grabbed the strap and buckled himself into the adjacent acceleration couch. A powerful engine pulled the glass-encased car along a ribbon cable anchored to the planet’s surface and extended to the Mars space station in geostationary orbit. A balance of forces kept the cable under tension while the elevator ascended—gravity at the lower end and the centrifugal force of the station at the upper end. The tiny vehicle accelerated swiftly to seven g’s and reached orbit in less than ten minutes before braking to docking speed. Gallant enjoyed a spectacular view as the car sped through the clouds. Below him was the receding raw red and brown landscape of Mars spread over the planet’s curvature; above him was one of man’s most ambitious modern structures—a space station, replete with a shipyard that housed the newest space vessels under construction, including Gallant’s new command, the Warrior, as well as ships in need of repair, including the Repulse. Gallant tried to pick out his minute ship against the much larger battle cruisers nested near it, but the rotation of the station hid it from view. “Repulse is completing extensive repairs. She’ll be back in action before long. I have a fierce loyalty to my ship and I know she’ll acquit herself well, no matter what comes,” said Caine. “I’m sure she will, sir,” said Gallant. “I haven’t congratulated you on your first command, yet,” Caine said, extending his hand. “You’ve earned it.” “Thank you, sir,” said Gallant, shaking hands, while a thought flashed through his mind, If I earned command, why wasn’t I promoted? “Do you have any idea of your first assignment, yet?” “No, sir. It could be almost anything,” said Gallant, but he was thinking, Probably involves the Warrior’s special capabilities. Caine said, “At least you’ll get a chance to strike the enemy.” Gallant said, “We still know so little about the aliens’ origins or intentions.
Since they’ve taken Jupiter, they’ve expanded their bases from the satellites of the outer planets. They’ve also penetrated into the asteroids. That puts them in a position to launch raids here.” Caine said, “I once asked you, ‘What’s the single most important element in achieving victory in battle?’” “Yes, sir, and my answer is the same: surprise.” “Yes,” Caine said, “but to achieve surprise, it’s essential for us to gather more intelligence.” “I agree, sir.” “Tell me, Gallant,” Caine said, as he shifted position, “are you aware there are many people who hold you in contempt? They still doubt that a Natural can serve in the fleet.” Gallant grimaced. “I’ve always done my duty to the best of my ability, sir.” “And you have done admirably, from what I know of your actions, but that hasn’t fazed some. I’ve heard little about your last mission.” “I can’t discuss that mission, sir. It’s been classified as need-to-know under a special compartment classification,” said Gallant, as he thought, I wish I could tell you about the AI berserker machine. I can only imagine what’s in store for the Warrior. “Nevertheless, I’ve heard that Anton Neumann was much praised for that mission. He was promoted to full commander and given the cruiser Achilles, though I wouldn’t be surprised if his father’s influence played a role in that.” Gallant said nothing, but stared down at his shoes, Neumann always wins. Caine grunted and then said, “Neither of us is in good standing with Anton’s father.” Caine and Gallant had previously run afoul of Gerome Neumann, President of NNR Shipping and Mining Inc., and an industrial and government powerbroker. Gallant nodded. Upon arriving at the space station platform, the elevator car doors opened automatically and once again banged loudly. SWOOSH! BAM! A long, enclosed tunnel formed the central core of the station with twenty-four perpendicular arms that served as docking piers.
The tunnel featured many windows and access ports to reach the twenty-four ships that extended from the docking arms. The two men chatted about the war news while they rode a tram along the tunnel causeway. Finally, Gallant left Caine at the Repulse and continued to his new command. A swarm of workmen buzzed along the Warrior’s scaffolding, cranes hauled machinery to and fro, and miscellaneous gear lay haphazardly about. An infinite amount of preparation was under way, servicing the ship in anticipation of her departure. Gallant gaped . . . There she is. He leaned forward to take in every line and aspect of the ship. Despite the distractions, he saw the ship as a thing of exquisite beauty. The Warrior featured a smooth, rocket-shaped hull, and while she was smaller than her battle cruiser neighbors, she weighed thirty thousand tons with an overall length of one hundred and twenty meters and a beam of forty meters. She was designed with stealth capability, so she emitted no detectable signals and remained invisible until her power supply required recharging. Her armament included a FASER cannon, several short-range plasma weapons, and several laser cannons. She was equipped with an armor belt and force shield, plus electronic warfare decoys and sensors. The ship’s communications, navigation, FTL propulsion, and AI computer were all state-of-the-art. The crew of 126 officers and men was highly trained and already on board. When the Warrior traveled through the unrelenting and unforgiving medium of space, she would serve as the crew’s heartfelt home. The brief, relaxed sense of freedom that Gallant had enjoyed between deployments was coming to an end; his shoulders tightened in anticipation. He stepped onto the enclosed gangplank and saluted the flag that was displayed on the bow.
Then he saluted the officer of the watch and asked, “Request permission to come aboard, sir?” “Permission granted, sir,” said Midshipman Gabriel in a gravelly voice that was totally at odds with his huge grin, dimpled cheeks, and boyish freckled face. Was I ever that young? thought Gallant before he recalled he was only a few years older. Boarding the ship, Gallant’s eyes widened as he sought to drink everything in. He was impressed by the innovative technologies that had been freshly installed. The novelty of his role on this ship was not lost on him. Upon reaching the bridge, he ordered Gabriel to use the ship’s intercom to call the crew to attention. “All officers, report to the bridge!” Gabriel ordered. When the officers had gathered around him a minute later, he said, “All hands, attention!” Drawn together on every deck, the crew stopped their work, came to attention, and listened. Gallant recited his orders, “Pursuant to fleet orders, I, Lieutenant Henry Gallant, assume command of the United Planet ship, Warrior, on this date at the Mars’ Space Station.” He continued reciting several more official paragraphs, but from that moment forward, the Warrior was a member of the United Planets’ fleet and Gallant was officially her commanding officer. With the formal requirements concluded, Gallant spoke over the address system: “At ease. Officers and crew of the Warrior, I’m proud to serve with you. I look forward to getting to know each one of you. For now, we must outfit this ship and prepare to do our job as part of the fleet. There are battles to be fought, a war to win, and the Warrior has a key role to play.” Satisfied with his brief statement, Gallant nodded to Gabriel. Over the address system Gabriel announced, “Attention! All hands dismissed! 
Return to your regular duties.” Gallant stood before the officers on the bridge, addressed each by name, and shook their hands, starting with the executive officer and then the department heads (operations, engineering, and weapons), followed by the junior officers. His first impression was that they were an enthusiastic and professional group. “I will provide prioritized work items for each of you to address in the next few days as we prepare for our upcoming shakedown cruise. For now, you can return to your duties. Thank you.” Gallant entered the Combat Information Center and pulled on a neural interface to the ship’s AI. Dozens of delicate silicon probes touched his scalp at key points, sensitively picking up the wave patterns emanating from his thoughts and allowing him to communicate with the AI directly. Gallant formed a mental image of the Warrior’s interior. While Gallant could use the interface to evaluate the ship’s condition, the ship’s systems remained under manual control. He hashed out his priorities for his department heads to work on and sent them messages, ordering them to address the myriad items he had been mentally considering for hours. While he would have liked to have had a discussion with each officer individually, that would simply have to wait. It was time to get back to the space elevator. Gallant frowned in frustration at being pulled away by his appointment: I’d better hustle to SIA.
- Excerpts | H Peter Alesso
Excerpts Writing Portfolio Finding Inspiration in Every Turn The Henry Gallant Saga Midshipman Henry Gallant in Space Lieutenant Henry Gallant Henry Gallant and the Warrior Commander Gallant Captain Henry Gallant Commodore Henry Gallant Henry Gallant and the Great Ship Rear Admiral Henry Gallant Midshipman Henry Gallant at the Academy Dramatic Novels Youngblood Dark Genius Captain Hawkins Short Stories All Androids Lie Computer Books Connections Thinking on the Web The Semantic Web The Intelligent Wireless Web E-Video Computer Apps Graphic Novels Screenplay