
Search Results


  • New | H Peter Alesso

    New release "Fallout of War: Ukraine Year One" available on Amazon. In the tradition of Herman Wouk's sweeping historical war epics, Fallout of War follows Lieutenant Commander James Fairbanks, a career naval submarine officer assigned as a military attaché to the American embassy in Kyiv in late 2021. Fairbanks arrives in Ukraine with his wife, Lucy, a State Department analyst, just as tensions with Russia reach a critical juncture. A thoughtful, disciplined officer known for his strategic acumen and unvarnished assessments, Fairbanks quickly becomes immersed in the complex political and military landscape of Eastern Europe. Fairbanks tours the Chernobyl exclusion zone, where he meets Ukrainian special forces conducting training exercises amid the haunting ruins of the 1986 disaster. These encounters with hardened Ukrainian soldiers, many of whom have fought in the Donbas since 2014, give Fairbanks his first understanding of Ukrainian determination and the existential nature of their struggle. Through a series of diplomatic functions and intelligence briefings, he develops relationships with key Ukrainian officials and eventually meets President Volodymyr Zelensky, whose evolution from entertainer to wartime leader forms one of the novel's central character studies.

On December 20, 2024, OpenAI quietly released a three-minute video that marked the moment when artificial general intelligence shifted from "someday" to "soon." Their o3 model had achieved 87.5% on the ARC-AGI benchmark, a test specifically designed to resist pattern-matching shortcuts and measure genuine reasoning. Just months earlier, the best AI systems struggled to break 32%. The average human scores 85%. This wasn't an incremental improvement. It was a phase transition—like water suddenly becoming steam. On the Cusp of Superintelligence captures this pivotal moment when the race to artificial general intelligence transformed from a research project to an engineering sprint. It reveals how multiple paths to AGI are converging simultaneously, each backed by billion-dollar labs with fundamentally different theories of intelligence. OpenAI bet everything on scaling—that intelligence emerges from processing enough information with enough parameters. Their progression from GPT-3's 175 billion parameters to o3's breakthrough validated their conviction that the path to AGI is a straight highway that simply needs to be extended far enough. Meanwhile, DeepMind, led by neuroscientist Demis Hassabis, pursued a portfolio strategy combining hierarchical reasoning, self-improving Gödel machines, multi-agent systems, embodied intelligence, and scientific discovery. Their synthesis approach suggests that superintelligence might require not choosing between paradigms but orchestrating them into unified systems. Anthropic took a different path, prioritizing safety through Constitutional AI, building alignment into the architecture rather than adding it afterward. Their Claude models demonstrated that capability and safety need not be mutually exclusive. Meta champions world models and embodied intelligence, arguing that true understanding requires grounding in physical reality—that language models without world models are like philosophers in Plato's cave, seeing only shadows. Their V-JEPA learns by predicting what happens next in video, developing causal understanding that pure text training cannot achieve.
xAI's Grok series, trained on the massive Colossus supercluster with up to 200,000 GPUs, represents yet another philosophy: maximum truth-seeking through real-time information integration. Grok-4's multi-agent architecture and native tool use pushed boundaries in reasoning and reliability, achieving perfect scores on mathematical Olympiads while maintaining connection to current events through X's data streams. IBM's revolutionary NeuroVSA demonstrated that neural and symbolic approaches need not compete—showing how symbols can become high-dimensional vectors and logic can emerge from geometry itself. This synthesis enables systems that combine neural pattern recognition with symbolic reasoning, solving problems neither approach could handle alone. Yet as Western labs pushed these boundaries, an unexpected challenge emerged from the East. In January 2025, China's DeepSeek-R1 surpassed American models in key benchmarks, creating what researchers called an "AI Sputnik moment." Constrained by hardware limitations, Chinese researchers had become masters of efficiency, achieving comparable or superior performance with a fraction of the computational resources. Their Mixture-of-Experts architectures and algorithmic innovations demonstrated that the path to AGI might not require massive scale, but rather clever engineering. As these architectural philosophies race toward AGI, a more profound crisis emerges. We face the outer alignment challenge: how do we specify what we want from superintelligent systems without destroying what we value? More troubling still is inner alignment—ensuring these systems actually pursue our goals rather than merely appearing to.

In 2019, Giuseppe Carleo and his team pioneered the application of machine learning to quantum physics. This book is an introduction to how AI revolutionizes quantum field theory (QFT), from scalar fields to complex gauge theories describing quarks and gluons. The narrative unfolds in three acts. First, readers discover the mathematical kinship between neural networks and quantum fields—the renormalization group maps onto information flow through neural layers, while gauge symmetry provides blueprints for AI architectures. Through Python code, readers build networks that discover phase transitions without being taught physics, demonstrating AI's ability to rediscover fundamental principles from data alone. The second act examines how AI addresses each type of quantum field. For scalar fields, neural networks identify exotic phases that traditional methods miss. For fermions, architectures like FermiNet achieve chemical accuracy while sidestepping computational barriers. For gauge fields, flow-based models conquer critical slowing down that has limited simulations for decades. Key breakthroughs include MIT's gauge-equivariant flows, which reduce autocorrelation times by a factor of 100, DeepMind's solution to 30-electron molecules, and the discovery by transformers that million-term scattering amplitudes can be expressed as a single equation. The final act envisions AI not just calculating but creating: physics systems like MELVIN design quantum experiments that no human has imagined. Language models solve bootstrap equations. Neural networks propose routes to grand unification. The book culminates in a convergence of quantum computers and classical AI—a partnership that could crack QFT's deepest mysteries.
By teaching AI nature's symmetries, we're creating systems that reveal patterns invisible to human analysis—AI is offering a different way of interrogating reality. Written as an introduction for physicists curious about AI and ML, as well as for AI and ML experts interested in fundamental physics, the book strikes a balance between rigor and practical implementation, offering both conceptual frameworks and tools for the quantum field theory revolution.
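The description above mentions Python examples in which networks learn to spot phase transitions from configurations alone. As a flavor of that idea, here is a minimal sketch that is not from the book: the synthetic lattices, the single logistic unit, and the 0.25 flip-probability cutoff are all illustrative assumptions standing in for real Monte Carlo data and deeper networks.

```python
# Illustrative sketch: a tiny classifier separating ordered from disordered
# Ising-like spin lattices using only the raw spins as input.
import numpy as np

rng = np.random.default_rng(0)
L = 16  # lattice side length

def sample_lattice(p_flip):
    """Start from an aligned lattice and flip each spin with probability p_flip."""
    spins = np.ones((L, L))
    spins[rng.random((L, L)) < p_flip] = -1.0
    return spins

# Small flip probability ~ ordered (low temperature); p_flip near 0.5 ~ disordered.
X, y = [], []
for _ in range(2000):
    p = rng.uniform(0.0, 0.5)
    X.append(sample_lattice(p).ravel())
    y.append(1.0 if p < 0.25 else 0.0)      # crude "phase" label for the demo
X, y = np.array(X), np.array(y)

# A single logistic unit trained by gradient descent.
w, b, lr = np.zeros(X.shape[1]), 0.0, 0.1
for _ in range(300):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = pred - y
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# The learned weights track the mean spin (magnetization), i.e. the classifier
# effectively rediscovers the order parameter from the raw data.
print("training accuracy:", ((pred > 0.5) == (y > 0.5)).mean())
```

Published demonstrations of this idea train on real Monte Carlo configurations, use deeper networks, and read off the critical temperature from where the classifier's output crosses over rather than from hand-picked labels; the sketch only shows the raw mechanics.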

  • Fame | H Peter Alesso

    A gallery of Science Fiction Legends and their works. Science Fiction Writers Hall of Fame Isaac Asimov Asimov is one of the foundational voices of 20th-century science fiction. His work often incorporated hard science, creating an engaging blend of scientific accuracy and imaginative speculation. Known for his "Robot" and "Foundation" series, Asimov's ability to integrate scientific principles with compelling narratives has left an enduring legacy in the field. Arthur C. Clarke The author of numerous classics including "2001: A Space Odyssey," Clarke's work is notable for its visionary, often prophetic approach to future technologies and space exploration. His thoughtful, well-researched narratives stand as enduring examples of 'hard' science fiction. Robert A. Heinlein Heinlein, one of science fiction's most controversial and innovative writers, is best known for books like "Stranger in a Strange Land" and "Starship Troopers." His work is known for its strong political ideologies and exploration of societal norms. Philip K. Dick With stories often marked by paranoid and dystopian themes, Dick's work explores philosophical, sociological, and political ideas. His books like "Do Androids Dream of Electric Sheep?" inspired numerous films, solidifying his impact on popular culture. Ray Bradbury Known for his poetic prose and poignant societal commentary, Bradbury's work transcends genre. His dystopian novel "Fahrenheit 451" remains a touchstone in the canon of 20th-century literature, and his short stories continue to inspire readers and writers alike. Ursula K. Le Guin Le Guin's works, such as "The Left Hand of Darkness" and the "Earthsea" series, often explored themes of gender, sociology, and anthropology. Her lyrical prose and profound explorations of human nature have left an indelible mark on science fiction. Frank Herbert The author of the epic "Dune" series, Herbert crafted a detailed and complex future universe. His work stands out for its intricate plotlines, political intrigue, and environmental themes. William Gibson Gibson is known for his groundbreaking cyberpunk novel "Neuromancer," where he coined the term 'cyberspace.' His speculative fiction often explores the effects of technology on society. H.G. Wells Although Wells's works were published on the cusp of the 20th century, his influence carried well into it. Known for classics like "The War of the Worlds" and "The Time Machine", Wells is often hailed as a father of science fiction. His stories, filled with innovative ideas and social commentary, have made an indelible impact on the genre. Larry Niven Known for his 'Ringworld' series and 'Known Space' stories, Niven's hard science fiction works are noted for their imaginative, scientifically plausible scenarios and compelling world-building. Octavia Butler Butler's work often incorporated elements of Afrofuturism and tackled issues of race and gender. Her "Xenogenesis" series and "Kindred" are known for their unique and poignant explorations of human nature and society. Orson Scott Card Best known for his "Ender's Game" series, Card's work combines engaging narrative with introspective examination of characters. His stories often explore ethical and moral dilemmas. Alfred Bester Bester's "The Stars My Destination" and "The Demolished Man" are considered classics of the genre. His work is recognized for its powerful narratives and innovative use of language.
Kurt Vonnegut Though not strictly a science fiction writer, Vonnegut's satirical and metafictional work, like "Slaughterhouse-Five," often used sci-fi elements to highlight the absurdities of the human condition. Harlan Ellison Known for his speculative and often dystopian short stories, Ellison's work is distinguished by its cynical tone, inventive narratives, and biting social commentary. Stanislaw Lem Lem's work, such as "Solaris," often dealt with philosophical questions. Philip José Farmer Known for his "Riverworld" series, Farmer's work often explored complex philosophical and social themes through creative world-building and the use of historical characters. He is also recognized for his innovations in the genre and the sexual explicitness of some of his work. J. G. Ballard Best known for his novels "Crash" and "High-Rise", Ballard's work often explored dystopian modernities and psychological landscapes. His themes revolved around surrealistic and post-apocalyptic visions of the human condition, earning him a unique place in the sci-fi genre. AI Science Fiction Hall of Fame As a science fiction aficionado and AI expert, there's nothing more exciting to me than exploring the relationship between sci-fi literature and artificial intelligence. Science fiction is an innovative genre, often years ahead of its time, and has influenced AI's development in ways you might not expect. But it's not just techies like us who should be interested - students of AI can learn a lot from these visionary authors. So buckle up, as we're about to embark on an insider's journey through the most famous science fiction writers in the hall of fame! The Science Fiction-AI Connection Science fiction and AI go together like peanut butter and jelly. In fact, one could argue that some of our most advanced AI concepts and technologies sprung from the seeds planted by sci-fi authors. I remember as a young techie, curled up with my dog, reading Isaac Asimov's "I, Robot". I was just a teenager, but that book completely changed how I saw the potential of AI. The Most Famous Sci-Fi Writers and their AI Visions Ready for a deep dive into the works of the greats? Let's take a closer look at some of the most famous science fiction writers in the hall of fame, and how their imaginations have shaped the AI we know today. Isaac Asimov: Crafting the Ethics of AI You can't talk about AI in science fiction without first mentioning Isaac Asimov. His "I, Robot" introduced the world to the Three Laws of Robotics, a concept that continues to influence AI development today. As an AI student, I remember being fascinated by how Asimov's robotic laws echoed the ethical considerations we must grapple with in real-world AI. Philip K. Dick: Dreaming of Synthetic Humans Next up, Philip K. Dick. If you've seen Blade Runner, you've seen his influence at work. In "Do Androids Dream of Electric Sheep?" (the book Blade Runner is based on), Dick challenges us to question what it means to be human and how AI might blur those lines. It's a thought that has certainly kept me up late on more than a few coding nights! Arthur C. Clarke: AI, Autonomy, and Evolution Arthur C. Clarke's "2001: A Space Odyssey" has been both a source of inspiration and caution in my work. The AI character HAL 9000 is an eerie portrayal of autonomous AI systems' potential power and risks. It's a reminder that AI, like any technology, can be a double-edged sword.
William Gibson: AI in Cyberspace Finally, William Gibson's "Neuromancer" gave us a vision of AI in cyberspace before the internet was even a household name. I still remember my shock reading about an AI entity in the digital ether - years later, that same concept is integral to AI in cybersecurity. The Power of Creativity These authors' works are testaments to the power of creativity in imagining the possibilities of AI. As students, you'll need to push boundaries and think outside the box - just like these authors did. Understanding Potential and Limitations The stories these authors spun provide us with vivid scenarios of AI's potential and limitations. They remind us that while AI has massive potential, it's not without its challenges and dangers. Conclusion And there we have it - our deep dive into the most famous science fiction writers in the hall of fame and their influence on AI. Their work is not just fiction; it's a guiding light, illuminating the path that has led us to the AI world we live in today. As students, we have the opportunity to shape the AI of tomorrow, just as these authors did. So why not learn from the best? Science Fiction Greats of the 21st Century Neal Stephenson is renowned for his complex narratives and incredibly detailed world-building. His Baroque Cycle trilogy is a historical masterpiece, while Snow Crash brought the concept of the 'Metaverse' into popular culture. China Miéville has won several prestigious awards for his 'weird fiction,' a blend of fantasy and science fiction. Books like Perdido Street Station and The City & The City are both acclaimed and popular. His work is known for its rich, evocative language and innovative concepts. Kim Stanley Robinson is best known for his Mars trilogy, an epic tale about the terraforming and colonization of Mars. He's famous for blending hard science, social commentary, and environmental themes. He continues this trend in his 21st-century works like the climate-focused New York 2140. Margaret Atwood, while also recognized for her mainstream fiction, has made significant contributions to science fiction. Her novel The Handmaid's Tale and its sequel The Testaments provide a chilling dystopian vision of a misogynistic society. Her MaddAddam trilogy further underscores her unique blend of speculative fiction and real-world commentary. Alastair Reynolds is a leading figure in the hard science fiction subgenre, known for his space opera series Revelation Space. His work, often centered around post-humanism and AI, is praised for its scientific rigor and inventive plotlines. Reynolds, a former scientist at the European Space Agency, incorporates authentic scientific concepts into his stories. Paolo Bacigalupi's works often deal with critical environmental and socio-economic themes. His debut novel The Windup Girl won both the Hugo and Nebula awards and is renowned for its bio-punk vision of the future. His YA novel, Ship Breaker, also received critical acclaim, winning the Michael L. Printz Award. Ann Leckie's debut novel Ancillary Justice, and its sequels, are notable for their exploration of AI, gender, and colonialism. Ancillary Justice won the Hugo, Nebula, and Arthur C. Clarke Awards, a rare feat in science fiction literature. Her unique narrative styles and complex world-building are highly appreciated by fans and critics alike. Iain M. Banks was a Scottish author known for his expansive and imaginative 'Culture' series. Though he passed away in 2013, his work remains influential in the genre. 
His complex storytelling and exploration of post-scarcity societies left a significant mark in science fiction. William Gibson is one of the key figures in the cyberpunk sub-genre, with his novel Neuromancer coining the term 'cyberspace.' In the 21st century, he continued to innovate with his Blue Ant trilogy. His influence on the genre, in terms of envisioning the impacts of technology on society, is immense. Ted Chiang is highly regarded for his thoughtful and philosophical short stories. His collection Stories of Your Life and Others includes "Story of Your Life," which was adapted into the film Arrival. Each of his carefully crafted tales explores a different scientific or philosophical premise. Charlie Jane Anders is a diverse writer who combines elements of science fiction, fantasy, and more in her books. Her novel All the Birds in the Sky won the 2017 Nebula Award for Best Novel. She's also known for her work as an editor of the science fiction site io9. N.K. Jemisin is the first author to win the Hugo Award for Best Novel three years in a row, for her Broken Earth Trilogy. Her works are celebrated for their diverse characters, intricate world-building, and exploration of social issues. She's one of the most influential contemporary voices in fantasy and science fiction. Liu Cixin is China's most prominent science fiction writer and the first Asian author to win the Hugo Award for Best Novel, for The Three-Body Problem. His Remembrance of Earth's Past trilogy is praised for its grand scale and exploration of cosmic civilizations. His work blends hard science with complex philosophical ideas. John Scalzi is known for his accessible writing style and humor. His Old Man's War series is a popular military science fiction saga, and his standalone novel Redshirts won the 2013 Hugo Award for Best Novel. He's also recognized for his blog "Whatever," where he discusses writing, politics, and more. Cory Doctorow is both a prolific author and an advocate for internet freedom. His novel Little Brother, a critique of increased surveillance, is frequently used in educational settings. His other novels, like Down and Out in the Magic Kingdom, are known for their examination of digital rights and technology's impact on society. Octavia Butler (1947-2006) was an award-winning author known for her incisive exploration of race, gender, and societal structures within speculative fiction. Her works like the Parable series and Fledgling have continued to influence and inspire readers well into the 21st century. Her final novel, Fledgling, a unique take on vampire mythology, was published in 2005. Peter F. Hamilton is best known for his space opera series such as the Night's Dawn trilogy and the Commonwealth Saga. His work is often noted for its scale, complex plotting, and exploration of advanced technology and alien civilizations. Despite their length, his books are praised for maintaining tension and delivering satisfying conclusions. Ken Liu is a prolific author and translator in science fiction. His short story "The Paper Menagerie" is the first work of fiction to win the Nebula, Hugo, and World Fantasy Awards. As a translator, he's known for bringing Liu Cixin's The Three-Body Problem to English-speaking readers. Ian McDonald is a British author known for his vibrant and diverse settings, from a future India in River of Gods to a colonized Moon in the Luna series. His work often mixes science fiction with other genres, and his narrative style has been praised as vivid and cinematic. 
He has won several awards, including the Hugo, for his novellas and novels. James S.A. Corey is the pen name of collaborators Daniel Abraham and Ty Franck. They're known for The Expanse series, a modern space opera exploring politics, humanity, and survival across the solar system. The series has been adapted into a critically acclaimed television series. Becky Chambers is praised for her optimistic, character-driven novels. Her debut, The Long Way to a Small, Angry Planet, kickstarted the popular Wayfarers series and was shortlisted for the Arthur C. Clarke Award. Her focus on interpersonal relationships and diverse cultures sets her work apart from more traditional space operas. Yoon Ha Lee's Machineries of Empire trilogy, beginning with Ninefox Gambit, is celebrated for its complex world-building and innovative use of technology. The series is known for its intricate blend of science, magic, and politics. Lee is also noted for his exploration of gender and identity in his works. Ada Palmer's Terra Ignota series is a speculative future history that blends philosophy, politics, and social issues in a post-scarcity society. The first book in the series, Too Like the Lightning, was a finalist for the Hugo Award for Best Novel. Her work is appreciated for its unique narrative voice and in-depth world-building. Charlie Stross specializes in hard science fiction and space opera, with notable works including the Singularity Sky series and the Laundry Files series. His books often feature themes such as artificial intelligence, post-humanism, and technological singularity. His novella "Palimpsest" won the Hugo Award in 2010. Kameron Hurley is known for her raw and gritty approach to science fiction and fantasy. Her novel The Light Brigade is a time-bending military science fiction story, while her Bel Dame Apocrypha series has been praised for its unique world-building. Hurley's work often explores themes of gender, power, and violence. Andy Weir shot to fame with his debut novel The Martian, a hard science fiction tale about a man stranded on Mars. It was adapted into a successful Hollywood film starring Matt Damon. His later works, Artemis and Project Hail Mary, continue his trend of scientifically rigorous, yet accessible storytelling. Jeff VanderMeer is a central figure in the New Weird genre, blending elements of science fiction, fantasy, and horror. His Southern Reach Trilogy, starting with Annihilation, explores ecological themes through a mysterious, surreal narrative. The trilogy has been widely praised, with Annihilation adapted into a major motion picture. Nnedi Okorafor's Africanfuturist works blend science fiction, fantasy, and African culture. Her novella Binti won both the Hugo and Nebula awards. Her works are often celebrated for their unique settings, compelling characters, and exploration of themes such as cultural conflict and identity. Claire North is a pen name of Catherine Webb, who also writes under Kate Griffin. As North, she has written several critically acclaimed novels, including The First Fifteen Lives of Harry August, which won the John W. Campbell Memorial Award for Best Science Fiction Novel. Her works are known for their unique concepts and thoughtful exploration of time and memory. M.R. Carey is the pen name of Mike Carey, known for his mix of horror and science fiction. His novel The Girl With All the Gifts is a fresh take on the zombie genre, and it was later adapted into a film. 
Carey's works are celebrated for their compelling characters and interesting twists on genre conventions. Greg Egan is an Australian author known for his hard science fiction novels and short stories. His works often delve into complex scientific and mathematical concepts, such as artificial life and the nature of consciousness. His novel Diaspora is considered a classic of hard science fiction. Steven Erikson is best known for his epic fantasy series, the Malazan Book of the Fallen. However, he has also made significant contributions to science fiction with works like Rejoice, a Knife to the Heart. His works are known for their complex narratives, expansive world-building, and philosophical undertones. Vernor Vinge is a retired San Diego State University professor of mathematics and computer science and a Hugo award-winning science fiction author. Although his most famous work, A Fire Upon the Deep, was published in the 20th century, his later work including the sequel, The Children of the Sky, has continued to influence the genre. He is also known for his 1993 essay "The Coming Technological Singularity," in which he argues that rapid technological progress will soon lead to the end of the human era. Jo Walton has written several novels that mix science fiction and fantasy, including the Hugo and Nebula-winning Among Others. Her Thessaly series, starting with The Just City, is a thought experiment about establishing Plato's Republic in the ancient past. She is also known for her non-fiction work on the history of science fiction and fantasy. Hugh Howey is best known for his series Wool, which started as a self-published short story and grew into a successful series. His works often explore post-apocalyptic settings and the struggle for survival and freedom. Howey's success has been a notable example of the potential of self-publishing in the digital age. Richard K. Morgan is a British author known for his cyberpunk and dystopian narratives. His debut novel Altered Carbon, a hardboiled cyberpunk mystery, was adapted into a Netflix series. His works are characterized by action-packed plots, gritty settings, and exploration of identity and human nature. Hannu Rajaniemi is a Finnish author known for his unique blend of hard science and imaginative concepts. His debut novel, The Quantum Thief, and its sequels have been praised for their inventive ideas and complex, layered narratives. Rajaniemi, who holds a Ph.D. in mathematical physics, incorporates authentic scientific concepts into his fiction. Stephen Baxter is a British author who often writes hard science fiction. His Xeelee sequence is an expansive future history series covering billions of years. Baxter is known for his rigorous application of scientific principles and his exploration of cosmic scale and deep time. C.J. Cherryh is an American author who has written more than 60 books since the mid-1970s. Her Foreigner series, which began in the late '90s and has continued into the 21st century, is a notable science fiction series focusing on political conflict and cultural interaction. She has won multiple Hugo Awards and was named a Grand Master by the Science Fiction and Fantasy Writers of America. Elizabeth Bear is an American author known for her diverse range of science fiction and fantasy novels. Her novel Hammered, which combines cybernetics and Norse mythology, started the acclaimed Jenny Casey trilogy. She has won multiple awards, including the Hugo, for her novels and short stories.
Larry Niven is an American author best known for his Ringworld series, which won the Hugo, Nebula, and Locus awards. In the 21st century, he continued the series and collaborated with other authors on several other works, including the Bowl of Heaven series with Gregory Benford. His works often explore hard science concepts and future history. David Mitchell is known for his genre-blending novels, such as Cloud Atlas, which weaves six interconnected stories ranging from historical fiction to post-apocalyptic science fiction. The novel was shortlisted for the Booker Prize and adapted into a film. His works often explore themes of reality, identity, and interconnectedness. Robert J. Sawyer is a Canadian author known for his accessible style and blend of hard science fiction with philosophical and ethical themes. His Neanderthal Parallax trilogy, which started in 2002, examines an alternate world where Neanderthals became the dominant species. He is a recipient of the Hugo, Nebula, and John W. Campbell Memorial awards. Daniel Suarez is known for his high-tech thrillers. His debut novel Daemon and its sequel Freedom™ explore the implications of autonomous computer programs on society. His books are praised for their action-packed narratives and thought-provoking themes related to technology and society. Kazuo Ishiguro is a Nobel Prize-winning author, known for his poignant and thoughtful novels. Never Let Me Go, published in 2005, combines elements of science fiction and dystopian fiction in a heartbreaking narrative about cloned children raised for organ donation. Ishiguro's work often grapples with themes of memory, time, and self-delusion. Malka Older is a humanitarian worker and author known for her Infomocracy trilogy. The series, starting with Infomocracy, presents a near-future world where micro-democracy has become the dominant form of government. Her work stands out for its political savvy and exploration of information technology. James Lovegrove is a versatile British author, known for his Age of Odin series and Pantheon series which blend science fiction with mythology. His Firefly novel series, based on the popular Joss Whedon TV show, has been well received by fans. He's praised for his engaging writing style and inventive blending of genres. Emily St. John Mandel is known for her post-apocalyptic novel Station Eleven, which won the Arthur C. Clarke Award and was a finalist for the National Book Award and the PEN/Faulkner Award. Her works often explore themes of memory, fate, and interconnectedness. Her writing is praised for its evocative prose and depth of character. Sue Burke's debut novel Semiosis is an engaging exploration of human and alien coexistence, as well as the sentience of plants. The book was a finalist for the John W. Campbell Memorial Award and spawned a sequel, Interference. Burke's work is known for its realistic characters and unique premise. Tade Thompson is a British-born Yoruba author known for his Rosewater trilogy, an inventive blend of alien invasion and cyberpunk tropes set in a future Nigeria. The first book in the series, Rosewater, won the Arthur C. Clarke Award. His works are celebrated for their unique settings and blend of African culture with classic and innovative science fiction themes.

  • Home | H. Peter Alesso science fiction author

    Author H. Peter Alesso presents excerpts from his published portfolio and research projects. H. Peter Alesso Portfolio: Past, Present, and Future. "Oh, why is love so complicated?" asked Henry. Alaina said, "It's not so complicated. You just have to love the other person more than yourself." Not everyone who fights is a warrior. A warrior knows what's worth fighting for.

  • Movie | H Peter Alesso

    Podcasts and book-to-movie adaptations of Midshipman Henry Gallant by H. Peter Alesso. Henry Gallant Movie. Movie: Midshipman Henry Gallant in Space. Movie with subtitles: Midshipman Henry Gallant in Space. Podcast: Midshipman Henry Gallant in Space (1)

  • Commander Gallant | H Peter Alesso

    Excerpt of the fourth book in the Henry Gallant Saga, Commander Gallant. Commander Henry Gallant AMAZON Chapter 1 Methane Planet As the warp bubble collapsed, the Warrior popped out on the edge of the Gliese-581 star system. The Warrior was Captain Henry Gallant’s first command, the culmination of everything he’d worked for since entering the academy. With a rocket-shaped hull over one hundred meters long, she boasted stealth technology, a sub-light antimatter engine, and an FTL dark-matter drive. Gallant gawked. “What an awesome sight.” The busy bridge crew stole their eyes away from their instrument panels long enough to gaze in amazement at the Titan civilization. The many ships traveling between planets were remarkable, but the energy readings of the densely populated planets were off the charts. From his command chair, Gallant focused on the home of his alien enemy. The star was an M-class red dwarf—smaller, cooler, and less massive than Sol, at about twenty light-years from Earth. “Sir,” said the astrogator, “we’re three light-days from the sun. Five planets are visible.” “It’s like the solar system,” marveled Midshipman Stedman, an eager but green officer. His slight build and round, boyish face often seemed to get lost amid the bustle of the more experienced crew. “If you don’t notice that the sun is ruby instead of amber and that there are only five planets,” chided Chief Howard. The oldest member of the crew, he was a seasoned veteran with a slight potbelly. He wore his immaculate uniform with pride. Every ribbon, insignia, and star on his left breast had a long and glorious story. He was only too glad to retell the stories—with appropriate embellishments—over a whiskey, preferably Jack Daniels. The astrogator said, “Only two planets are within the liquid methane zone. But some of the asteroids and moons may have been methane-formed.” “Wow, the system is full of mining colonies, military bases, and communications satellites. The spaceship traffic is amazing. There must be many thousands of ships,” said Stedman. The astrogator reported, “Scans on the second and third planets show billions of beings. The second planet has the greatest energy density. I’ll bet that’s their planet of origin.” “Quite likely,” acknowledged Gallant, his curiosity aroused. “An imposing presence, Skipper,” said Roberts. Young and garrulous, Roberts had steady nerves and sound professional judgment. He was of average height with brown hair, a lean, smooth face, and a sturdy body. Gallant had come to trust him as a stalwart friend—something one only discovers during a crisis. That moment had come several months earlier when Roberts put his career and his life on the line for Gallant. “It has one large moon,” added Chief Howard. Gliese-Beta was a majestic ringed planet wrapped in a dense hydrocarbon nitrogen-rich atmosphere. It was opaque to blue light but transparent to red and infrared. The red dwarf's infrared warmed the planet and made it habitable for methane lifeforms. Gliese-Gamma was similar. The astrogator continued, “The next two planets are gas giants with several moons. Gliese-Delta is composed of hydrogen and helium with volcanic methane moons, much like our Neptune. Gliese-Epsilon is a low-mass planet with a climate model like a runaway greenhouse effect—analogous to our Venus.” “That’s interesting,” said Roberts. Encouraged, the astrogator concluded his report, “The system includes a disk-shaped asteroid field.” The Warrior used its radars and telescopes to plot the planets’ orbits.
The CIC team computed the course of nearby contacts. The Warrior’s emission spectrum was controlled in stealth mode. “What’s your assessment of their military strength?” asked Gallant. The CIC team listed the large and small warships, followed by planetary defenses. They added an estimate of the traffic flow. It was a long list. The OOD said, “Here is the compiled report, sir.” “Very well,” acknowledged Gallant as he scanned the tablet. The sub-light engines drove the ship onward into the heart of the system. Calculating their flight path, the astrogator reported, “We’ll reach the asteroids in about forty-eight hours, sir.” Gallant tapped the screen to call up the AI settings for plot control and touched the destination. He ordered a deep-space probe sent toward the largest asteroids. “It’ll take several hours to start transmitting, sir.” Gallant said, “Once we get a base established, we can go deeper into their system. I’m interested in seeing how their home planet differs from our solar system. We need to learn about their society and leadership structure.” Roberts asked, “Skipper, we’ve always called them Titans, but what do they call themselves?” Gallant said, “I can’t replicate their name in our language. As autistic savants, their communication is different from our speech. We’ll continue to call them Titans.” The CIC team reported, “Our initial assessment shows that Gliese-Beta has a diverse topology and climate. It’s ecologically rich with many species. Extensive methane oceans and landmasses have abundant soil and temperature conditions. It can support a wide variety of methane-breathing lifeforms.” Roberts said, “It’s so different from our water-rich Earth.” “Earth is mostly water,” said Gallant. “The oceans provide us with fish to eat, water vapor to fill our skies with clouds, rain to nurture our crops, and water for us to drink. Our metabolism and food cycle are water-based, and we ourselves are 97 percent water. For us, water is life.” “How does this methane world sustain the Titans?” asked Roberts. Gallant said, “The temperature variations provide methane in all three phases: gas, liquid, and solid. Methane rivers freeze at high latitudes to form polar sheets. The methane cycle is a complex molecular soup. It is formed from reactions when the ultraviolet radiation from the sun strikes the methane. Their methane life forms are comparable to our oxygen-based life cycle. And just as methane is a poison to us, oxygen is toxic to them.” Roberts asked, “Are they autistic savants because of the methane-based chemistry?” “That’s one of the things we’re here to learn. Our first task is to establish a base,” said Gallant. “From that hideout, the Warrior can recharge her stealth battery and remain safe between operations.” Maintaining stealth mode, the Warrior approached the outer edge of the asteroid belt. She conducted a spiral search to map the interrelated defenses. The crew looked for an asteroid large enough to hide the ship. Gallant and the XO combed through the CIC data to check for potential locations. “How about here, sir?” asked one of the analysts, pointing to a cluster near the outer perimeter of the field. The asteroid belt included many asymmetrical rocky bodies. Three smaller clusters skirted the outer edge. Some asteroids were more than one kilometer wide. “Yes, that might do,” said Gallant. “It’s large enough to block radar detection and shield us from view while we’re recharging. We’ll call this base Alamo."
Gallant ordered a two-man team to construct a relay station on the asteroid. He left one of the Warrior’s remote-controlled drones on the surface along with a supply depot. Once Alamo was established, he settled the Warrior into orbit behind the rocks to recharge her stealth batteries. The next day, they reconnoitered the fifth planet and discovered a communication junction box. Moving deeper into Titan territory, they caught a bird’s-eye view of the aliens’ home planet. They saw several orbiting shipyards and space stations. The Warrior collected information about the Titan fleet and civilization. The bridge crew was surprised at the incredible infrastructure the aliens had developed. Operating in such a populated environment was a challenge, but the cloaking technology allowed the Warrior to remain undetected. What followed were busy days as the Warrior peered into the inner workings of the Titan system. The crew compiled detailed lists of warships and their disposition, as well as their refueling and patrolling patterns. They learned shipping traffic patterns, monitored the industrial capacity, and accumulated population statistics. There were over twenty billion inhabitants. The Titans had built their main military headquarters on the third planet. It had a layered defense with satellites, minefields, and overlapping fortresses. A display showed fluctuating energy emissions for their industry. After two weeks of collecting information, Roberts approached Gallant’s command chair. He asked, “Captain, can you give us your game plan going forward?” Gallant recalled Admiral Collingsworth’s orders detailing their hazardous mission, all of which required his stealth ship and crew to be at peak operational and battle readiness. He said, “Yes. It’s time to fill you in. We’ve collected a lot of info, but scouting isn’t our sole mission. I intend to do more. Much more.” The bridge crew leaned closer, eager to drink in every tidbit of juicy news. He said, “We are finally ready to engage in asymmetric warfare. We will penetrate the Titan communication network to learn about their military deployment.” He paused for dramatic effect as everyone drew in a deep breath. “And we will raid commercial shipping to throw their civilian population into turmoil.” A buzz of excitement filled the bridge. “That’s a tall order, Skipper,” said Roberts. “Yes, it is.” Gallant asked, “Are you up for it?” “Can do, sir!” said Roberts. “Can do, sir!” roared the bridge crew. Commander Julie Ann McCall stepped out of CIC and onto the bridge. She walked straight to Gallant and grabbed his arm. She said, “I must speak to you immediately.” McCall was not a line officer. She was a product of genetic engineering who had inherited tendencies of the most diabolical kind, which made her a talented Solar Intelligence Agency (SIA) operative. Her considerable skills in manipulation and deception had fostered her brilliant career. What she lacked in kindness and empathy, she more than made up for in intellect, guile, and allure. She was astonishingly efficient at analyzing an opponent’s flaws. Some who had felt her cold-blooded sting labeled her a sociopath who would do anything to achieve her goals. Gallant’s long and checkered relationship with her remained a riddle to him. Now he gazed into her blazing eyes, and then he looked down at her hand on his arm. She pulled her hand away but repeated, “I need to speak with you privately.” The commotion on the bridge died down and the two senior officers became the focus of attention.
Gallant rose from his command chair, and said, “Commander, please come with me.” All eyes on the bridge followed the pair as they left.

  • Research | H Peter Alesso

    Science fiction writers engage in research and discovery to fuel their imagination. Research AI HIVE I invite you to join my AI community. Come on a journey into the future of artificial intelligence. AI has the potential to revolutionize many aspects of our lives, from the way we work to the way we interact with the world around us. Here, we explore the latest advances in AI, discuss the technical and ethical implications of this technology, and share our thoughts on the future. We believe that AI has the potential to make the world a better place, and we are committed to using this technology to create a world where AI benefits all of humanity. Here are some of the things you can find on our website: a directory of leading AI companies, news and analysis on AI software, discussions about AI business opportunities, tutorials on artificial intelligence tools, and AI experts in Silicon Valley. Video Software Laboratory The entertainment industry has always been at the forefront of technological innovation, continually transforming the way we create and consume content. In recent years, Artificial Intelligence (AI) and Computer-Generated Imagery (CGI) have become the primary forces driving this change. These cutting-edge technologies are now dominating the video landscape, opening up new possibilities for creators and redefining the limits of storytelling. AI video innovation is moving quickly in Silicon Valley. Small businesses are creating AI video software tools for interchanging text, audio, and video media.

  • e-Video | H Peter Alesso

    Excerpt of the book e-Video on deploying video on the Web. e-Video AMAZON Chapter 1 Bandwidth for Video Electronic-Video, or “e-Video”, includes all audio/video clips that are distributed and played over the Internet, either by direct download or streaming video. The problem with video, however, has been its inability to travel over networks without clogging the lines. If you’ve ever tried to deliver video, you know that even after heroic efforts on your part (including optimizing the source video, the hardware, the software, the editing and the compression process) there remains a significant barrier to delivering your video over the Web. That is the “last mile” connection to the client. So before we explain the details of how to produce, capture, edit and compress video for the Web, we had better begin by describing the near-term opportunities for overcoming the current bandwidth limitations for delivering video over the Internet. In this chapter, we will describe how expanding broadband fiber networks will reach out to the “last mile” to homes and businesses, creating opportunities for video delivery. In order to accomplish this, we will start by quantifying three essential concerns: the file size requirements for sending video data over the Internet, the network fiber capacity of the Internet for the near future, and the progress of narrowband (28.8 Kbps) to broadband (1.5 Mbps) over the “last mile.” This will provide an understanding of the difficulties being overcome in transforming video from the current limited narrowband streaming video to broadband video delivery. Transitioning from Analog to Digital Technology Thomas Alva Edison’s contributions to the telegraph, phonograph, telephone, motion pictures and radio helped transform the 20th Century with analog appliances in the home and the factory. Many of Edison’s contributions were based on the continuous electrical analog signal. Today, Edison’s analog appliances are being replaced by digital ones. Why? Let’s begin by comparing the basic analog and digital characteristics. Analog signals move along wires as electromagnetic waves. The signal’s frequency refers to the number of times per second that a wave oscillates in a complete cycle. The higher the speed, or frequency, the more cycles of a wave are completed in a given period of time. A baud rate is one analog electric cycle or wave per second. Frequency is also stated in hertz (Hz). (Kilohertz or kHz represents 1,000 Hz, MHz represents 1,000,000 Hz and GHz represents a billion Hz). Analog signals, such as voice, radio, and TV, involve oscillations within specified ranges of frequency. For example: voice has a range of 300 to 3300 Hz; analog cable TV has a range of 54 MHz to 750 MHz; analog microwave towers have a range of 2 to 12 GHz. Sending a signal along analog wires is similar to sending water through a pipe. The further it travels, the more force it loses and the weaker it becomes. It can also pick up vibrations, or noise, which introduces signal errors. Today, analog technology has become available worldwide through the following transmission media: 1. Copper wire for telephone (one-to-one communication). 2. Broadcast for radio & television (one-to-many communication). 3. Cable for television (one-to-many communication). Most forms of analog content, from news to entertainment, have been distributed over one or more of these methods.
Analog technology prior to 1990 was based primarily on the one-to-many distribution system, as shown in the Table below, where information was primarily directed toward individuals from a central point. Table 1-1 Analog Communication Prior to 1990 Prior to 1990, over 99% of businesses and homes had content reach them from any one of the three transmission delivery systems. Only the telephone allowed two-way communication, however. While the other analog systems were reasonably efficient in delivering content, the client could only send feedback, or pay bills, through ordinary postal mail. Obviously, the interactivity level of this system was very low. The technology used in Coaxial Cable TV (CATV) is designed for the transport of video signals. It is comprised of three systems: AM, FM, and Digital. Since the current CATV system with coaxial analog technology is highly limited in bandwidth, new technology is necessary for applications requiring higher bandwidth. In the digital system, a CATV network will get better performance than AM/FM systems and ease the migration from coaxial to a fiber-based system. Fiber optics in CATV networks will eliminate most bottlenecks and increase channel capacity for high-speed networks. Analog signals are continuously variable waveforms that are information-intensive. They require considerable bandwidth and care in transmission. Analog transmissions over phone lines have some inherent problems when used for sending data. Analog signals lose their strength over long distances and often need to be amplified. Signal processing introduces distortions, which are amplified along with the signal, raising the possibility of errors. In contrast to the waveform of analog signals, digital signals are transmitted over wire connections by varying the voltage across the line between a high and a low state. Typically, a high voltage level represents a binary digit 1 and a low voltage level represents a binary digit 0. Because they are binary, digital signals are inherently less complex than analog signals, and over long distances they are more reliable. If a digital signal needs to be boosted, the signal is simply regenerated rather than being amplified. As a result, digital signals have the following advantages over analog: superior quality, fewer errors, higher transmission speeds, and less complex equipment. The excitement over converting analog to digital media is, therefore, easy to explain. It is motivated by cost-effective, higher-quality digital processing for data, voice and video information. In transitioning from analog to digital technologies, however, several significant changes are also profoundly altering broadcast radio and television. The transition introduces fundamental changes from one-way broadcast to two-way transmission, and thereby the potential for interactivity and for scheduling programming to suit the user’s needs. Not only is there an analog to digital shift, but a synchronous to asynchronous shift as well. Television and radio no longer need to be synchronous and simultaneous. Rather, the viewer and listener can control the time of performance. In addition, transmission can be one of three media: copper wire, cable, or wireless. Also, the receiver is transitioning from a dumb device, such as the television, to an intelligent set-top box with significant CPU power. This potentially changes the viewer from a passive to an interactive participant.
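To make the contrast above concrete, here is a small illustrative sketch that is not taken from the book: it samples a continuous test tone and quantizes it into discrete levels, the way an analog-to-digital converter produces the high/low bit patterns just described. The 1 kHz tone, 8 kHz sample rate, and 8-bit depth are assumptions chosen to mirror a standard digital voice channel.

```python
# Illustrative only: digitizing an analog waveform by sampling and quantization.
import numpy as np

def digitize(signal, bits=8):
    """Map samples in [-1, 1] onto 2**bits discrete integer levels, as an ADC would."""
    levels = 2 ** bits
    codes = np.round((signal + 1.0) / 2.0 * (levels - 1))
    return np.clip(codes, 0, levels - 1).astype(int)

sample_rate = 8000                        # samples per second (telephone-grade)
t = np.arange(0, 0.01, 1.0 / sample_rate)
analog = np.sin(2 * np.pi * 1000 * t)     # a continuous-valued 1 kHz test tone

codes = digitize(analog, bits=8)
bitstream = "".join(format(int(c), "08b") for c in codes)

# 8,000 samples/s x 8 bits/sample = 64 kbps, the classic digital voice channel rate.
print(len(codes), "samples ->", len(bitstream), "bits")
```

Because the result is just bits, a repeater can regenerate the signal exactly at each hop instead of amplifying accumulated noise, which is the reliability advantage the chapter attributes to digital transmission.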
Today, both analog and digital video technologies coexist in the production and creative part of the process leading up to the point where the video is broadcast. Currently, businesses and homes can receive content from one to six delivery systems: analog: copper wire (telephones), coaxial cable (TV cable), or broadcast (TV or radio); digital: copper wire (modem, DSL), Ethernet modem, or wireless (satellite). At the present time, analog systems still dominate, but digital systems are competing very favorably as infrastructure becomes available. Analog/digital telephone and digital cable allow two-way communication, and these technologies are rapidly growing. The digital systems are far more efficient and allow greater interactivity with the client. Competing Technologies The race is on as cable, data, wireless, and telecommunications companies are scrambling to piece together the broadband puzzle and to compete in future markets. The basic infrastructure of copper wire, cable, and satellite, as well as the packaged content, is in place to deliver bigger, richer data files and media types. In special cases, data transmission over the developing computer networks within corporations and between universities already exists. Groups vying to dominate have each brought different technologies and standards to the table. For the logical convergence of hardware, software and networking technology to occur, the interfaces of these industries must meet specific interoperability capabilities and achieve customer expectations for quality of service. Long-distance carriers and local Regional Bell Operating Companies (RBOCs) started with a phone system designed for point-to-point communication, POTS (plain old telephone service), and have evolved it into a large switched, distributed network capable of handling millions of simultaneous calls. They track and bill accordingly with an impressive performance record. They have delivered 99.999% reliability with high-quality audio. Their technology is now evolving toward DSL (Digital Subscriber Line) modems. AT&T has made significant progress in leading broadband technology development now that it has added the vast cable networks of Tele-Communications Inc. and MediaOne Group to its telephone and cellular operations. Currently, AT&T, with about 45% of the market, can plug into more U.S. households than any other provider. But other telecommunications companies, such as Sprint and MCI, as well as the regional Bell operating companies, are also capable of integrating broadband technology with their voice services. Although both the routing and architecture of the telephone network have evolved since the AT&T divestiture, the basics remain the same. About 25,000 central offices in the U.S. connect through 1200 intermediate switching nodes, called access tandems. The switching centers are connected by trunks designed to carry multiple voice frequency circuits using frequency-division multiplexing (FDM), synchronous time-division multiplexing (TDM), or wavelength-division multiplexing (WDM) for optics. The cable companies Time Warner, Comcast, Cox Communications and Charter Communications have 60 million homes wired with coaxial cable, primarily one-way cable offering one-to-many broadcast service. Their technology competes through the introduction of cable modems and the upgrade of their infrastructure to support two-way communication. The merger between AOL and Time Warner demonstrates how Internet and content companies are finding ways to converge.
Cable television networks currently reach 200 million homes. On the other hand, satellite television can potentially reach 1 billion homes. Offering nearly complete coverage of the U.S., digital satellite is also competing. DirecTV has DirecPC, which can beam data to a PC. Its rival, EchoStar Corp., is working with interactive TV player TiVo Inc. to deliver video and data service to a set-top box. However, satellite is currently not only a one-way delivery system, but is also the most expensive in the U.S. In regions of the world outside the U.S. where the capital investment in copper wires and cable has yet to be made, satellite may have a better competitive opportunity. The Internet itself doesn’t own its own connections. Internet data traffic passes along the copper, fiber, coaxial cable, and wireless transmission media of the other industries as a digital alternative to analog transmissions. The new media is being built to include text, graphics, audio, and video across platforms of the television, Internet, cable and wireless industries. The backbone uses wide area communications technology, including satellite, fiber, coaxial cable, copper and wireless. Data servers mix mainframes, workstations, supercomputers, and microcomputers, and a diversity of clients populates the endpoints of the networks, including conventional PCs, palmtops, PDAs, smart phones, set-top boxes, and TVs. Figure 1-1 Connecting the Backbone of the Internet to Your Home Web-television hybrids, such as WebTV, provide opportunities for cross-promotion between television and the Internet. Independent developers may take advantage of broadcast-Internet synergy by creating shows for targeted audiences. Clearly, the future holds a need for interaction between the TV and the Internet. But will it appear as TV-quality video transmitted over the Internet and subsequently displayed on a TV set? Or, alternatively, as URL information embedded within existing broadcast TV pictures? Perhaps both. Streaming Video Streaming is the ability to play media, such as audio and video, directly over the Internet without downloading the entire file before play begins. Digital encoding is required to convert the analog signal into compressed digital format for transmission and playback. Streaming videos send a constant flow of audio/video information to their audience. While streaming videos may be archived for on-demand viewing, they can also be shown in real-time. Examples include play-by-play sports events, concerts and corporate board meetings. But a streaming video offers more than a simple digitized signal transmitted over the Internet. It offers the ability for interactive audience response and an unparalleled form of two-way communication. The interactive streaming video process is referred to as Webcasting. Widespread Webcasting will be impractical, however, until audiences have access rates of a minimum of 100 Kbps or faster. Compression technology can be expected to grow more powerful, significantly reducing bandwidth requirements. By 2006, the best estimates indicate that 40 million homes will have cable modems and 25 million will have DSL connections with access rates of 1.5 Mbps. We shall see in Chapters 5, 6 and 7 how the compression codecs and software standards will competitively change “effective” Internet bandwidth and the quality of delivered video. The resultant video quality at a given bandwidth is highly dependent upon the specific video compressor.
The human eye is extremely non-linear and its capabilities are difficult to quantify. The quality of compression, the specific video application, typical content, available bandwidth, and user preferences must all be considered when evaluating compressor options. Some compressors optimize for "talking heads," while others optimize for motion.

To date, the value of streaming video has primarily been the rebroadcast of TV content and redirected audio from radio broadcasts. The ability of these services to compete with traditional analog broadcasts will depend upon the ability of streaming video producers to develop and deliver their content using low-cost computers that present a minimal barrier to entry. Small, low-cost independent producers will effectively target audiences previously ignored. Streaming video, steadily moving toward the integration of text, graphics, audio, and video with interactive online chat, will find new audiences. In Chapter 2, we present business models to address businesses' video needs.

Despite these promising aspects, streaming video is still a long way from providing a satisfactory audio/video experience in comparison to traditional broadcasts. Low data transmission rates are a severe limitation on the quality of streaming video. While a direct broadcast satellite dish receives data at 2 Mbps, an analog modem is currently limited to 0.05 Mbps. The new cable modems and ADSL are starting to offer speeds competitive with satellite, but they will take time to penetrate globally. Unlike analog radio and television, streaming video requires a dedicated connection between the computer providing the content and each viewer. Current computer technology limits the viewing audience to about 50,000. Strategies to overcome this limit with replicating servers may increase audiences, but this too will take effort. Enhanced data compression reduces the required video streaming rates to more manageable levels. The technology has only recently reached the point where video can be digitized and compressed to levels that allow a reasonable appearance during distribution over digital networks. Advances continue to come, improving both the look and the delivery of video.

Calculating Bandwidth Requirements

So far we have presented the advantages of digital technology; unfortunately, there is one rather large disadvantage - bandwidth limitations. Let's try some simple math that illustrates the difficulties. Live, or on-demand, streaming video and/or audio is relatively easy to encode. The most difficult part is not the encoding of the files; it is determining what level of data may be transmitted. The following table provides some basic terms and definitions.

Why the difference between Kbps and KB/sec? File sizes on a hard drive are measured in kilobytes (KB), but data transferred over a modem is measured in kilobits per second (Kbps) because a modem is comparatively slower than a hard drive. In the case of a 28.8 Kbps modem, the maximum practical data transfer rate is 2.5 KB/sec, even though the calculated rate is 28.8 Kbps / 8 bits per byte = 3.6 KB/sec. This is because approximately 30% of transmission capacity is lost to Internet "noise" - traffic congestion on the Web and more than one surfer requesting information from the same server.
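The conversion just described is easy to check. Here is a minimal Python sketch; the 30% overhead is the rule of thumb quoted above, not a measured constant:

def effective_transfer_rate(link_kbps, overhead=0.30):
    """Approximate usable transfer rate in KB/sec for a nominal link speed.

    link_kbps -- nominal modem speed in kilobits per second
    overhead  -- fraction of capacity lost to congestion and retransmission
                 (the ~30% 'Internet noise' rule of thumb used in this chapter)
    """
    theoretical_kb_per_sec = link_kbps / 8          # 8 bits per byte
    return theoretical_kb_per_sec * (1 - overhead)

# A 28.8 Kbps modem: 3.6 KB/sec in theory, roughly 2.5 KB/sec in practice.
print(round(28.8 / 8, 1))                           # 3.6
print(round(effective_transfer_rate(28.8), 1))      # 2.5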
Table 1-4 provides information concerning the characteristics of video files, including pixels per frame and frames per file (file size). We can use this information in some simple calculations. We will use the following formula to calculate the approximate size in megabytes of a digitized video file:

size (MB) = (pixel width) x (pixel height) x (color bit depth) x (frames per second) x (duration in seconds) / 8,000,000 (bits per MB)

For three minutes of video at 15 frames per second with a color bit depth of 24 bits in a window that is 320x240 pixels, the digitized source file would be approximately 622 megabytes:

(320) x (240) x (24) x (15) x (180) / 8,000,000 = 622 megabytes

We will see in Chapter 4 how data compression significantly reduces this burden. Now that we have our terms defined, let's take the case of a TV station that wants to broadcast its channel live 24 hours a day for a month over the Web to a target audience of 56 Kbps modem users. In this case, a live stream generates 4.25 KB/sec, since a 56 Kbps file transfers at 4.25 KB/sec. So how much data would be transferred in a month if one stream were constantly in use?

4.25 KB/sec x (number of seconds in a day) x 30 days per month = 11 GB/month

So one stream playing a file encoded for 56 Kbps 24 hours a day will generate about 11 gigabytes in a month. Why is this figure useful? If you can estimate the average number of viewers in a month, you can estimate the total amount of data that will be transferred from your server. Ultimately, the issue becomes one of sufficient backbone infrastructure to carry many broadcasts to many viewers across the networks. For HDTV, with a screen size of 1920x1080 and 24-bit color, a bandwidth of 51.8 Mbps is required. This is a serious amount of data to route around the Internet to millions of viewers.
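Both calculations above (the digitized file size and the monthly transfer for a single 56 Kbps stream) can be reproduced with a short script. This is a minimal sketch using the chapter's own figures, including its 4.25 KB/sec effective rate for a 56 Kbps stream:

def video_file_size_mb(width, height, bit_depth, fps, seconds):
    """Approximate size in megabytes of an uncompressed digitized video file."""
    return width * height * bit_depth * fps * seconds / 8_000_000  # bits per MB

def monthly_transfer_gb(kb_per_sec, days=30):
    """Data transferred by one continuous stream over a month, in gigabytes."""
    seconds = days * 24 * 60 * 60
    return kb_per_sec * seconds / 1_000_000  # KB -> GB

# Three minutes of 320x240, 24-bit, 15 fps video: about 622 MB before compression.
print(round(video_file_size_mb(320, 240, 24, 15, 180)))   # 622

# One 56 Kbps stream (about 4.25 KB/sec effective) running 24 hours a day:
print(round(monthly_transfer_gb(4.25)))                    # ~11 GB per month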
Transitioning from Narrowband to Broadband

In telecommunications, bandwidth refers to the data capacity of a channel. For an analog service, the bandwidth is defined as the difference between the highest and lowest frequencies within which the medium carries traffic. For example, cabling that carries data between 200 MHz and 300 MHz has a bandwidth of 100 MHz. In addition to analog speeds in hertz (Hz) and digital speeds in bits per second (bps), the carrying rate is sometimes categorized as narrowband or broadband. It is useful to think of an analogy in which wider pipes carry more water. TV and cable are carried at broadband speeds. However, most telephone and modem data traffic from the central offices to individual homes and businesses is carried at slower narrowband speeds; this is usually referred to as the "last mile" issue. The definitions of narrowband and broadband vary within the industries, but for our purposes they can be summarized as follows: narrowband refers to rates less than 1.5 Mbps, and broadband refers to rates at or beyond 1.5 Mbps.

A major bottleneck of analog services exists between the cabling of residences and the telephone central offices. Digital Subscriber Line (DSL) and cable modems are gaining in availability. Cable TV companies are investing heavily in converting their cabling from one-way-only cable TV to two-way systems for cable modems and telephones. In contrast to the "last mile" for residential areas, telephone companies are laying fiber cables for digital services from their switches to office buildings, where the high-density client base justifies the additional expense. We can appreciate the potential target audience for video by estimating how fast "last mile" bandwidth demand is growing.

Because installing underground fiber costs more than $20,000 per mile, fiber only makes sense for businesses and network backbones, not for "last mile" access to homes. Table 1-5 shows the estimated number of users connected at various modem speeds in 1999 and 2006. High-speed consumer connections are now being implemented through cable modems and digital subscriber lines (DSL). Approximately 1.3 million homes had cable modems by the end of 1999, in comparison to 300,000 DSL connections, primarily to businesses. By 2006, we project 40 million cable modems and 25 million DSL lines. Potentially, data at rates greater than one megabit per second could be delivered to over 80 percent of the more than 550 million residential telephone lines in the world. Better than one megabit per second can also be delivered over fiber/coax CATV lines configured for two-way transmission, to approximately 10 million out of 200 million total users (though the rest can be upgraded).

In 2000, the median bandwidth in the U.S. is less than 56 Kbps; this is de facto a narrowband environment. But worldwide there is virtually limitless demand for communications, as the following growth rates suggest. The speed of computer connections is soaring: the number of connections at greater than 1.5 Mbps is growing at 45% per year in residential areas and 55% per year in business areas. Because of the improving online experience, people will stay connected about 20% longer each year. As more remote areas of the world get connected, messages will travel about 15% farther each year.

The number of people online worldwide in 1999 was 150 million, but the peak Internet load was only 10% of that number, and data was actually being transferred only 25% of that time. With an average access rate of 44 Kbps, this gives an estimate of about 165 Gbps at peak load. In 2006 there will be about 300 million users, and about 65 million of these will have broadband (>1.5 Mbps) access. With increased peak load and increased actual transmission time, this results in an estimated usage of about 16.5 terabits per second. It all adds up to a lot of bits: a demand for total data communications in 2006 of nearly a 100-fold increase over 1999.
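The 1999 peak-load estimate is a simple product of users, peak concurrency, active transmission time, and average access rate. A minimal sketch using the figures quoted above:

def peak_load_gbps(users, peak_fraction, active_fraction, avg_rate_kbps):
    """Rough aggregate demand at peak, in gigabits per second."""
    bits_per_sec = users * peak_fraction * active_fraction * avg_rate_kbps * 1_000
    return bits_per_sec / 1e9

# 1999: 150 million users, 10% online at peak, 25% actively transferring,
# average access rate of 44 Kbps  ->  roughly 165 Gbps at peak load.
# The chapter's 2006 projection of about 16.5 Tbps is roughly 100 times this base.
print(round(peak_load_gbps(150e6, 0.10, 0.25, 44)))   # 165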
With the number of new users connecting to the Internet growing this fast, can the fiber backbone meet the demand? Figure 1-2 answers this question. It shows the growth in Local Area Networks (LANs) from 1980 to 2000, with some projection into the next decade, along with Internet capacity over the last few decades and its potential growth rate into the next decade. The jump in Internet capacity due to Dense Wavelength Division Multiplexing (DWDM) is a projection of the multiplying effect of this new technology. As a result, the figure shows that we can expect multi-terabit-per-second performance from the Internet backbone in the years ahead. This will meet the projected growth in demand.

Great! But what about that "last mile" of copper, coax, and wireless? The "last mile" involves servers, networks, content, and the transition from narrowband to broadband. Initially, the "last mile" will convert to residential broadband not as fiber optics, but as a network overlaid on existing telephone and cable television wiring. One megabit per second can be delivered to 80% or more of the 550 million residential telephone lines in the world. It can also be delivered over fiber/coax CATV lines configured for two-way service. The latter, however, represent only a small fraction of worldwide CATV lines - about 10 million homes out of 200 million - but upgrade programs will convert the remainder within five years. The endgame of the upgrade process may be fiber directly to the customer's home, but not for the next decade or two. A fiber signal travels coast to coast in 30 ms, and human latency (the period needed to achieve recognition) is about 50 milliseconds; thus fiber is the only technology able to deliver viewable HDTV video. However, due to the cost and manpower involved, we're stuck with a "last mile" of copper, coax, and wireless for a while yet. Table 1-7 below summarizes how the five delivery approaches for analog and digital technologies will coexist for the next few years. In Chapter 8, we will present network background on the technologies and standards and revisit this table in more detail. (* FTTH is fiber to the home, FTTC is fiber to the curb, MPEG-2 is a compression standard, see Chapter 4; ATM is Asynchronous Transfer Mode, see Chapter 8; TDM is Time Division Multiplexing, see Chapter 8.)

Preparing to Converge

To be fully prepared to take advantage of the converging technologies, we must ask and answer the right questions. This is not as easy as it might seem. We could ask, "Which company will dominate the broadband data and telecommunication convergence?" But this would be inadequate, because the multi-trillion-dollar world e-commerce market is too big for any one company to monopolize. We could ask, "Which broadband networks will dominate the Internet backbone?" But this would be inadequate, because innovative multiplexing and compression advances will make broadband ubiquitous and subservient to the "last mile" problem. We could ask, "Which transmission means (cable, wireless, or copper) will dominate the last mile?" But this would be inadequate, because the geographical diversity of these infrastructures throughout the world will dictate different winners in different regions, making this a "local" problem. Individually, these questions address only part of the convergence puzzle. It is e-commerce's demand for economic efficiency that will force us to face the important question of the telecommunication convergence puzzle: "What are meaningful broadband cross-technology standards?" Without globally accepted standards, hardware and software developers can't create broad solutions for consumer demand. As a result, throughout this book we will be concerned with pointing out the directions and conflicts of the various competing standards.

Conclusion

In this chapter, we presented the background of analog technology's transition toward digital technology. The chapter provided a calculation that illustrates why digital video data poses such a difficult bandwidth problem, and it evaluated the rate of conversion from narrowband to broadband connections. This rate establishes a critical perspective on the timeline of the demand for Internet video. On the basis of this chapter, you should conclude that: The Internet backbone combination of fiber and optical multiplexing will perform in the multi-terabit-per-second range and provide plenty of network bandwidth in the next few years. The "last mile" connectivity will remain twisted pair, wireless, and coax cable for the next few years, but broadband (1.5 Mbps) access through cable modems and xDSL will grow to 40 million users in just a few years.
Streaming video stands at the crossroads of technology convergence. It is the bandwidth crisis of delivering video that will prove decisive in setting global standards and down-selecting competing technologies. The success of streaming video, in its most cost-effective and customer-satisfying form, will define the final technology convergence model into the 21st century.

  • Intelligent Wireless Web | H Peter Alesso

Excerpt from the technology book The Intelligent Wireless Web. The Intelligent Wireless Web AMAZON

Chapter 10.0 Progress in Developing the Intelligent Wireless Web

In this chapter, we take the components developed in earlier chapters and lay out a plausible framework for building the Intelligent Wireless Web, including our evaluation of the compatibility, integration, and synergy issues facing the five merging technology areas that will build it:

User Interface – from click to speech
Personal Space – from tangled wires to multifunction wireless devices
Networks – from wired infrastructure to integrated wired/wireless
Protocols – from IP to Mobile IP
Web Architecture – from dumb and static to intelligent and dynamic

Finally, we present strategic planning guidelines and the conclusions you could reach as a result of this book.

We began this book by describing what we meant by the "Intelligent Wireless Web" and presenting an overview of a framework for plausibly constructing it. Our concept of an Intelligent Wireless Web weaves together several important ideas related to intelligence (the ability to learn), "wirelessness" (mobility and convenience), and the advances in telecommunications and information technology that together promise to deliver increasingly capable information services to mobile users anytime and anywhere. We suggested putting these concepts together to form the "Intelligent Wireless Web." We stated that it is certainly possible to develop intelligent applications for the Internet without media (audio/video) Web features or wireless capability. But it was our suggestion that Web media such as audio could lead to improved user interfaces using speech, and that small, widely distributed wireless devices could lead to easier access for large portions of the world's population. The end result could be not just an intelligent Internet, but a widely available, easily accessible, user-friendly Intelligent Wireless Web.

Fundamentally, our vision for an Intelligent Wireless Web is very simple - it is a network that provides anytime, anywhere access through efficient user interfaces to applications that learn. Notwithstanding the difficulty of defining intelligence (in humans or machines), we recognized that terms such as "artificial intelligence," "intelligent agents," "smart machines," and the like refer to the performance of functions that mimic those associated with human intelligence. The full range of information services is the next logical step, along with the introduction of a variety of portable user devices (e.g., pagers, PDAs, Web-enabled cell phones, small portable computers) that have wireless connectivity. The result will be wireless technology as an extension of the present evolutionary trend in information technology. In addition, artificial intelligence and intelligent software applications will make their way onto the wireless Web, and a performance index or measure should be developed to evaluate the progress of Web smarts. In the following sections, we bring together the components of the Intelligent Wireless Web and show how it is being constructed. Building it will be a broad and far-reaching task involving more technology integration and synthesis than revolutionary invention.

Future Wireless Communication Process

Ideally, the future wireless communication process should start with a user interface based on speech recognition, where we merely talk to a personal mobile device that recognizes our identity, words, and commands.
The personal mobile device would connect seamlessly to embedded and fixed devices in the immediate environment. The message would be relayed to a server residing on a network with the necessary processing power and software to analyze the contents of the message. The server could then draw necessary supplemental knowledge and services from around the world through the Internet. Finally, the synthesized messages would be delivered to the appropriate parties, in their own language, on their own personal mobile devices. To build this ideal future wireless communication process, we must connect the following inherent technologies of communications along with their essential components:

Connecting People to Devices – the user interface. Currently we rely on the mouse, keyboard, and video display; speech recognition and understanding deployed on mobile devices is a key component for the future.

Connecting Devices to Devices. Currently, hard-wired connections between devices limit mobility and constrain the design of networks. In the future, the merging of wired and wireless communication infrastructure requires the establishment of wireless protocols and standards for connections between devices, and future smart applications require the development and improvement of intelligence services. Also needed is a method to measure the performance and/or intelligence of the Internet so that we can assess advancements.

Connecting Devices to People. To deliver useful information to the globally mobile user, future systems require advances in speech synthesis and language translation.

So let's start connecting the necessary technologies to fulfill the vision of an Intelligent Wireless Web. The physical components and software necessary to construct and implement the Intelligent Wireless Web require compatibility, integration, and synergy of five merging technology areas:

User Interface – to transition from the mouse click to speech as the primary method of communication between people and devices;
Personal Space – to transition from connecting devices with tangled wires to multifunction wireless devices;
Networks – to transition from a mostly wired infrastructure to an integrated wired/wireless system of interconnections;
Protocols – to transition from the original Internet Protocol (IP) to the new Mobile IP; and
Web Architecture – to transition from dumb and static applications to new applications that are intelligent, dynamic, and constantly learning.

FIGURE 10-1 Building the Intelligent Wireless Web

User Interface – from Click to Speech

We have evaluated communication between humans and their machines and found the problem of obtaining speech recognition functionality in a handheld or embedded device to be challenging; however, efforts currently underway look favorable for solutions in the relatively near term. While we may expect speech interfaces to permeate society steadily, we anticipate that successful traditional interfaces, such as the mouse and touch screen, will continue in operation for a very long time, particularly for such high-power applications as selecting events on detailed graphical representations. Certainly, it is not a difficult problem for a handheld device (such as a cell phone) to perform limited speech recognition activities (such as voice-activated dialing).
But as the demands for speech functionality increase with the complexity of the speech recognition tasks, it becomes more and more difficult to provide these capabilities on a small mobile wireless device with limited resources. Therefore, the problem becomes one of distributing the capability for speech recognition and understanding between the local wireless device and the remote processing resources to which it is connected. This problem is currently being addressed in far-reaching research at several places, most notably at the MIT AI Laboratory and at Microsoft Research.

The Microsoft effort is directed at technology projects supporting and leading to the vision of a fully speech-enabled computer. The Microsoft concept, Dr. Who, uses continuous speech recognition and spoken language understanding, and is designed to power a voice-based pocket PC with a Web browser, email, and cellular telephone capabilities. The highly promising initiative known as Project Oxygen is ongoing at MIT's AI Laboratory. This visionary effort is developing a comprehensive system to achieve the objective of anytime, anywhere computing. In this concept, a user carries a wireless interface device that is continuously connected to a network of computing devices, much as a cell phone maintains continuous connection to a communications network. The local device is speech-enabled, and much of the speech recognition capability is embedded in the remote system of high-capability computers. Systems for conversational interfaces are also being developed that are capable of communicating in several languages. These systems can answer queries in real time with a distributed architecture that retrieves data from several different domains of knowledge. Such systems have five main functions: speech recognition, language understanding, information retrieval, language generation, and speech synthesis. Speech recognition may be an ideal interface for the handheld devices being developed as part of the Oxygen project, but the project will need far more advanced speech-recognition systems than are currently available to achieve its ultimate objective of enabling interactive conversation with full understanding.

Figure 10-2 identifies the main requirements for an effective speech-based user interface and the current status of each. To meet the needs of the Intelligent Wireless Web, the ultimate desired result is that speech recognition, understanding, translation, and synthesis become practical for routine use on handheld, wearable, and embedded devices.

USER INTERFACE – from click to speech
Requirements and status: Speech Recognition - Advanced; Speech Understanding - Continuing; Text to Speech - Advanced; Translation - Continuing; Speech Synthesis - Continuing; Speech Synthesis Markup Language - Lagging.
Results: Speech recognition, understanding, translation and synthesis become practical for use on handheld, wearable and embedded devices.
FIGURE 10-2 Building the User Interface
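As a purely illustrative sketch of the client-server split described above (the function names and command vocabulary here are hypothetical placeholders, not part of Dr. Who or Project Oxygen), a handheld might resolve a small command vocabulary locally and hand anything more complex to a remote recognizer:

# Hypothetical sketch of distributed speech recognition: the handheld device
# handles a small fixed vocabulary locally and defers everything else to a
# remote, more capable recognizer. Names and logic are illustrative only;
# audio capture and decoding are abstracted away as plain text.

LOCAL_COMMANDS = {"call home", "redial", "answer", "hang up"}

def remote_recognize(utterance: str) -> str:
    """Stand-in for a server-side recognizer with full language understanding."""
    return f"[server interpretation of: {utterance!r}]"

def handle_utterance(utterance: str) -> str:
    """Recognize simple commands on the device; ship complex speech to the server."""
    normalized = utterance.strip().lower()
    if normalized in LOCAL_COMMANDS:
        return f"device executes '{normalized}' locally"
    return remote_recognize(utterance)

print(handle_utterance("Call home"))
print(handle_utterance("Find the nearest open pharmacy and read me directions"))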
Personal Space – from Wired to Wireless

We imagined living our lives within the confines of our own Personal Space - without wires, but with devices to "connect" us wherever we travel. Implementation of a Wireless Personal Area Network (WPAN), composed of the personal devices around us as well as those in our immediate environment, is one solution. In the office, devices improve work productivity by enabling access to the data, text, and images relevant to performing our jobs, and by providing analysis, access to software applications, and communications as needed. Creating a WPAN of our immediately available devices will enable a future in which a lifetime of knowledge may be accessed through gateways worn on the body or placed within the immediate environment (including our home, auto, office, school, and library). A WPAN will also allow devices to work together and share each other's information and services. For example, a Web page called up on a small screen can be wirelessly sent to a printer for full-size printing. A mobile WPAN can even be created in a vehicle via interface devices such as wireless headsets, microphones, and speakers. As envisioned, the WPAN will allow users to customize their communications capabilities, permitting everyday devices to become smart, tetherless devices that spontaneously communicate whenever they are in close proximity.

Figure 10-3 summarizes the requirements and their status for this element of the Intelligent Wireless Web; the objective is to achieve the ability for handheld, wearable, and embedded devices to connect easily without wires and share software applications as needed, producing office, home, and mobile Wireless Personal Area Networks.

PERSONAL SPACE – from wired to wireless
Requirements and status: Adaptable wireless devices - Advanced; Wireless protocol - Continuing; Wireless small-screen applications - Lagging; "Nomadic" or mobile software for devices - Lagging.
Results: Handheld, wearable, and embedded devices connect easily without wires and share software applications, as needed, producing office, home and mobile Wireless Personal Area Networks.
FIGURE 10-3 Building Your Personal Space

Networks – from Wired to Integrated Wired/Wireless

The earliest computers were stand-alone, unconnected machines. During the 1980s, mergers, takeovers, and downsizing led to a need to consolidate company data into fast, seamless, integrated databases for all corporate information. With these driving forces, intranets and local networks began to increase in size, and this required ways for them to interface with each other. Over the past decade, enterprise models and architectures, as well as their corresponding implementation in actual business practices, have changed to take advantage of new technologies. The big lure of wireless is the potential for big money in implementing wireless architectures that can send information packets from people with small personal devices, such as cell phones, to a company's Web site, where transactions can be conducted. The number of wireless subscribers is expected to grow globally from the current few million to more than 400 million by 2005.
The vast system of interconnecting networks that comprises the Internet is composed of several different types of transmission media, dominated by wired media but including:

Wired: fiber optic, twisted pair (copper), and coaxial cable.
Wireless: microwave, infrared, and laser.

NETWORKS – from wired to integrated wired/wireless
Requirements and status: Wireless LAN - Advanced; Wireless WAN - Advanced; Satellites - Continuing; Wired Interface - Continuing.
Results: Networks continue their migration to optical fiber for the long haul, while the last mile is met by fiber, mobile wireless, and fixed wireless (LMDS and MMDS).
FIGURE 10-4 Building Integrated Networks

Protocols – from IP to Mobile IP

To achieve the mobility requirements of the Intelligent Wireless Web, the Wireless Application Protocol (WAP) provides a global standard for data-oriented services to mobile devices, thereby enabling anywhere, anytime access. In so doing, access will be provided to far more end users than can be reached by using the personal computer as a fixed end point. Figure 10-5 provides an overview of the changes needed to support the Intelligent Wireless Web. The anticipated result is intelligent networking software for routing and tracking that leads to general changes in IP networking protocols toward Mobile IP. Sitting on top of the entire layer infrastructure will be a new control plane for applications that smooths routing.

PROTOCOLS – from IP to Mobile IP
Requirements and status: IPv6 - Continuing; Mobile IP standard - Continuing.
Results: Intelligent networking software for routing and tracking leads to general changes in IP networking protocols toward Mobile IP, with a new control plane for applications that smooths routing sitting on top of the entire layer infrastructure.
FIGURE 10-5 Building the Mobile Internet Protocols

Web Architecture – from Dumb and Static to Intelligent and Dynamic

Ideally, the wireless communication process should start with the user talking to a personal, or embedded, device that recognizes his or her identity, words, and commands. It will connect seamlessly to the correct transmission device, drawing on whatever resources are required from around the Web. In one case, only database search, sorting, and retrieval might be required; in another, a specialized Web Service application program might be needed. In any case, the information will be evaluated, and the content of the message will be augmented with the appropriate supporting data to fill in the 'blanks.' If there is appropriate supplementary audio or video, it will be included for reference. Finally, the results will be delivered to the appropriate parties, in their own language, through their own different and varied connection devices.

For the Web to learn how to conduct this type of intelligent processing requires a mechanism for adaptation and self-organization on a hypertext network. In addition, it needs learning algorithms that allow it to autonomously change its structure and organize the knowledge it contains by "learning" the ideas and preferences of its users. The World Wide Web Consortium (W3C) suggests the use of better semantic information as part of Web documents and the use of next-generation Web languages. Figure 10-6 provides a summary of the Semantic Web architecture needed to support the Intelligent Wireless Web. Intelligent applications running directly over the Web, as well as AI Web Services served from AI service providers, will progressively increase the share of tasks performed with adaptive, dynamic, intelligent products.
In addition, a Web performance index will provide useful measures of the Web's progress.

WEB ARCHITECTURE – from dumb and static to intelligent and dynamic
Requirements: XML Schema; RDF Schema and Topic Maps; Logic Layer; Dynamic Languages; Adaptive Applications; Distributed AI; AI Web Services; Registration and Validation of Information.
Results: Intelligent applications running directly over the Web, as well as AI Web Services supported by AI service providers, progressively increase the percentage of applications performed with adaptive, dynamic, intelligent products. An overall increase can be expected in the total percentage of learning algorithms operating on the Web.
FIGURE 10-6 Building AI Servers with the Semantic Web

Strategic Planning Guidelines

Strategic planning is the determination of the course of action and the allocation of resources necessary to achieve selected long-term goals. But charting a strategic direction for wireless communications networks in a diverse and competitive landscape is complicated by an economy that has introduced dynamic rules for success. Both the rate of technology change and the speed at which new technologies become available have increased. The shorter product life cycles resulting from this rapid diffusion of new technologies place a competitive premium on being able to quickly introduce new goods and services into the marketplace. In order to develop guidelines for strategic planning, we must consider enterprise goals. Traditionally driven by technology, network planning has evolved and now faces new challenges. The network planning process itself includes two "discordant" requirements: first, optimizing the network's long-term investment, and second, optimizing the time to market for each new product. Finding the right balance is not easy. However, opportunities for developers and service providers will exist if they can reach all mobile users by developing infrastructure to support:

any wireless carrier;
any wireless network (TDMA, CDMA, etc.);
any wireless device (pager, digital cell phone, PDA);
any wireless application;
any Web format (XML, HTML, etc.);
any wireless technology (WAP, SMS, pager, etc.); and
any medium (text, audio, text-to-speech, voice recognition, or video).

They must also balance innovations in software (e.g., adaptive software, nomadic software) against innovations in hardware (e.g., chip designs); balance proprietary standards (motivating competition) against open standards (offering universal access); and balance local (centralized) Web innovations (e.g., Web Services) against global (distributed) Web architectural evolution (e.g., the Semantic Web). Standards themselves tend to arise in one of three ways: a vendor dominates a market and sets a de facto standard (for example, POTS telephony from AT&T, or PC operating systems from Microsoft); standards organizations establish standards (for example, HTML); or vendor and market collaboration produces a standard not clearly attributable to any one organization (for example, TCP/IP or VCR formats).

FIGURE 10-7 Possible Technology Timeline

Conclusion

In this chapter, we presented the components developed in earlier chapters and outlined a feasible framework for building the Intelligent Wireless Web, including our evaluation of the compatibility, integration, and synergy issues facing the five merging technology areas: User Interface, Personal Space, Networks, Protocols, and Web Architecture.
Ten conclusions you could reach from this book about building the Intelligent Wireless Web include:

- User Interface -
Speech recognition and speech synthesis offer attractive solutions to overcome the input and output limitations of small mobile devices, if they can overcome their own limitations of memory and processing power through the right balance in the client-server relationship between the small device and nearby embedded resources. The essential components for achieving this balance are new chip designs coupled with open, adaptive, nomadic software. The new chips may provide hardware for small devices that is small, lightweight, and consumes little power, while having the ability to perform applications by downloading adaptive software as needed.

- Personal Space -
Handheld, wearable, and embedded devices are upgrading many existing office and home locations, making computing access more universal through Wireless Personal Area Networks. Competition between the wireless networking standards Bluetooth and IEEE 802.11b, as well as between the general networking software Jini and UPnP, will continue for several years as each finds strong points to exploit before a final winner emerges. MIT's Project Oxygen may introduce some innovative protocol alternatives within several years.

- Networks -
Wired and wireless networks will continue to merge and improve backbone performance to greater than the 10 Tera-bps range, as well as produce improved interoperability. Over time, there will be a migration of core networks to optical fiber, simply because photons carry far more information more efficiently and at less expense than electrons. By 2003, ultra-long-haul (>4,000 km) high-bandwidth optical transport will be deployed in the U.S. The quest for the last mile will be met with a combination of fiber and wireless. In dense metropolitan areas, free-space optical networks will provide 622 Mbps of bandwidth to buildings without digging up the streets. Second-generation LMDS and MMDS fixed wireless will be deployed to buildings requiring less bandwidth.

- Internet Protocols -
Intelligent networking software for routing and tracking will lead to general changes in IP networking protocols, including IPv6 and Mobile IP. Sitting on top of the entire layer infrastructure may be a number of new control-plane software applications that add intelligence to the network for smooth integration of routing (layer 3) and wavelength switching.

- Web Architecture -
Intelligent agents, intelligent software applications, and artificial intelligence applications from AI service providers may make their way onto the Web in greater numbers as adaptive software, dynamic programming languages, and learning algorithms are introduced into Web Services (including both .NET and J2EE architectures). The evolution of Web architecture may allow intelligent applications to run directly on the Web through the introduction of XML, RDF/Topic Maps, and a Logic Layer. A Web performance index, or measure, may be developed to evaluate the Internet's progress in performing intelligent tasks utilizing learning algorithms. The Intelligent Wireless Web's significant potential for rapidly completing information transactions may become an important contribution to global worker productivity.[1]

[1] Bogdanowicz, K.D., Scapolo, F., Leijten, J., and Burgelman, J-C., "Scenarios for Ambient Intelligence in 2010," ISTAG Report, European Commission, Feb. 2001.

  • Connections | H Peter Alesso

Excerpt from the computer science technology book Connections. Connections AMAZON

Chapter 1 Connecting Information

"The ultimate search engine would understand exactly what you mean and give back exactly what you want," said Larry Page.[1]

We live in the information age. As society has progressed into the post-industrial era, access to knowledge and information has become the cornerstone of modern living. With the advent of the World Wide Web, vast amounts of information have suddenly become available to people throughout the world. And searching the Web has become an essential capability whether you are sitting at your desktop PC or wandering the corporate halls with your wireless PDA. As a result, there is no better place to start our discussion of connecting information than with the world's greatest search engine ─ Google.

Google has become a global household name ─ millions use it daily in a hundred languages to conduct over half of all online searches. As a result, Google connects people to relevant information. By providing free access to information, Google offers a seductive gratification to whoever seeks it. To power its searches, Google uses patented, custom-designed programs and hundreds of thousands of computers to provide the greatest computing power of any enterprise. Searching for information is now called 'googling,' which men, women, and children can perform over computers and cell phones. And thanks to small targeted advertisements that searchers can click for information, Google has become a financial success. In this chapter, we follow the hero's journey of Google founders Larry Page and Sergey Brin as they invent their Googleware technology for efficient connection to information, then go on to become masters in pursuit of their holy grail ─ 'perfect search.'

The Google Story

Google was founded by two Ph.D. computer science students at Stanford University in California ─ Larry Page and Sergey Brin. When Page and Brin began their hero's journey, they didn't know exactly where they were headed. It is widely known that, at first, Page and Brin didn't hit it off. When they met in 1995, 24-year-old Page was a new graduate of the University of Michigan visiting Stanford University to consider entering graduate school; Brin, at age 23, was a Stanford graduate student who was assigned to host Page's visit. At first, the two seemed to differ on just about every subject they discussed. They each had strong opinions and divergent viewpoints, and their relationship seemed destined to be contentious.

Larry Page was born in 1973 in Lansing, Michigan. Both of his parents were computer scientists. His father was a university professor and a leader in the field of artificial intelligence, while his mother was a teacher of computer programming. As a result of his upbringing in this talented and technology-oriented family, Page seemed destined for success in the computer industry in one way or another. After graduating from high school, Page studied computer engineering at the University of Michigan, where he earned his Bachelor of Science degree. Following his undergraduate studies, he decided to pursue graduate work in computer engineering at Stanford University. He intended to build a career in academia or the computer science profession on the foundation of a Ph.D. degree.

Meanwhile, Sergey Brin was also born in 1973, in Moscow, Russia, the son of a Russian mathematician and economist.
His entire family fled the Soviet Union in 1979 under the threat of growing anti-Semitism and began their new life as immigrants in the United States. Brin displayed a great interest in computers from an early age. As a youth, he was influenced by the rapid popularization of personal computers and was very much a child of the microprocessor age. He too was brought up to be familiar with mathematics and computer technology; as a first-grader he turned in a computer printout for a school project, and at the age of nine he was given a Commodore 64 computer as a birthday gift from his father.

Brin entered the University of Maryland at College Park, where he studied mathematics and computer science, completing his Bachelor of Science degree in 1993. Following his undergraduate studies, he was given a National Science Foundation fellowship to pursue graduate studies in computer science at Stanford University. Not only did he exhibit early talent and interest in mathematics and computer science, he also became acutely interested in data management and networking as the Internet was becoming an increasing force in American society. While at Stanford, he pursued research and prepared publications in the areas of data mining and pattern extraction. He also wrote software to convert scientific papers written in TeX, a cross-platform text processing language, into HyperText Markup Language (HTML), the multimedia language of the World Wide Web. Brin successfully completed his master's degree at Stanford. Like Page, Brin intended to continue his graduate studies to earn a Ph.D., which he also viewed as a great opportunity to establish an outstanding academic or professional career in computer science.

The hero's journey for Page and Brin began as they heard the call ─ to develop a unique approach for retrieving relevant information from the voluminous data on the World Wide Web. Page remembered, "When we first met each other, we each thought the other was obnoxious. Then we hit it off and became really good friends.... I got this crazy idea that I was going to download the entire Web onto my computer. I told my advisor it would only take a week... So I started to download the Web, and Sergey started helping me because he was interested in data mining and making sense of the information."[2]

Although Page initially thought the downloading of the Web would be a short-term project, taking a week or so to accomplish, he quickly found that the scope of what he wanted to do was much greater than his original estimate. Once he started his downloading project, he enlisted Brin to join the effort. While working together, the two became inspired and wrote the seminal paper entitled The Anatomy of a Large-Scale Hypertextual Web Search Engine[3]. It explained their efficient ranking algorithm, 'PageRank.' Brin said about the experience, "The research behind Google began in 1995. The first prototype was actually called BackRub. A couple of years later, we had a search engine that worked considerably better than the others available did at the time."[4] This prototype listed the results of a Web search according to a quantitative measure of the popularity of the pages. By January 1996, the system was able to analyze the 'back links' pointing to a given website and from this quantify the popularity of the site.
Within the next few years, the prototype system was converted into progressively improved versions, and these were substantially more effective than any other search engine then available. As the buzz about their project spread, more and more people began to use it. Soon they were reporting 10,000 searches per day at Stanford using their system. With this growing use and popularity of their search system, they began to realize that they were maxing out their search capacity due to the limited number of computers at their disposal. They would need more hardware to continue their remarkable expansion and enable more search activity. As Page said, "This is about how many searches we can do, and we need more computers. Our whole history has been like that. We always need more computers."[5]

In many ways, the research project at Stanford was a low-budget operation. Because of a chronic shortage of cash, the pair are said to have monitored the Stanford computer science department's loading docks for newly arrived computers to 'borrow.' In spite of this, within a short span of time, the reputation of the BackRub system had grown dramatically and their new search technology began to be broadly noticed. They named their successor search engine 'Google,' in a whimsical analogy to the mathematical term 'googol,' the immensely large number 1 followed by 100 zeros. The transition from the earlier BackRub technology to the much more sophisticated Google was slow. But the Google system began with an index of 25 million pages and the capability to handle 10,000 search queries every day, even in its initial stage of introduction. The Google search engine grew quickly as it was continuously improved. The effectiveness and relevance of Google searches, its scope of coverage, speed and reliability, and its clean user interface all contributed to a rapid increase in the popularity of the search engine.

At this time, Google was still a student research project, and both Page and Brin were still intent on completing their respective doctoral programs at Stanford. As a result, they initially refused to 'answer the call' and continued to devote themselves to their academic pursuit of the technology of search. Through all this, Brin maintained an eclectic collection of interests and activities. He continued with his graduate research interests at Stanford and collaborated with his fellow Ph.D. students and professors on other projects such as automatic detection. At the same time, he also pursued a variety of outside interests, including sailing and trapeze. Brin's father had stressed the importance of completing his Ph.D. He said, "I expected him to get his Ph.D. and to become somebody, maybe a professor." In response to his father's question as to whether he was taking any advanced courses one semester, Brin replied, "Yes, advanced swimming."[6]

While Brin and Page continued on as graduate students, they began to realize the importance of what they had succeeded in developing. The two aspiring entrepreneurs decided to try to license the Google technology to existing Internet companies. But they found themselves unsuccessful in stimulating the interest of the major enterprises. They were forced to face the crucial decision of continuing on at Stanford or striking out on their own. With their realization that they were onto something important and perhaps even groundbreaking, they decided to make the move.
Thus our two heroes reached their point of departure and crossed over from the academic into the business world. As they committed to this new direction, they realized they would need to postpone their educational aspirations, prepare plans for their business concept, develop a working demo of their commercial search product, and seek funding from outside investors. Having made this decision, they managed to interest Sun Microsystems founder Andy Bechtolsheim in their idea. As Brin recalls, "We met him very early one morning on the porch of a Stanford faculty member's home in Palo Alto. We gave him a quick demo. He had to run off somewhere, so he said, 'Instead of us discussing all the details, why don't I just write you a check?' It was made out to Google Inc. and was for $100,000."[7] The check remained in Page's desk, uncashed, for several weeks while he and Brin set up a corporation and sought additional money from family and friends ─ almost $1 million in total. Having started the new company and lined up investor funding, and possessing a superb product, they realized that ultimate success would require a good balance of perspiration as well as inspiration.

Nevertheless, at this point Google appeared to be well on the road to success. Page and Brin have been on a roll ever since, armed with the great confidence that they had both a superior product and an excellent vision for global information collection, storage, and retrieval. In addition, they believed that coordination and optimization of the entire hardware/software system was important, and so they developed their own Googleware technology by combining their custom software with appropriately integrated custom hardware, thereby fully leveraging their ingenious concept. Google Inc. opened its doors as a business entity in September 1998, operating out of modest facilities in a Menlo Park, California garage. As Page and Brin initiated their journey, they faced many challenges along the way. They matured in their understanding with the help of mentors they encountered, such as Yahoo!'s Dave Filo. Filo not only encouraged the two in the development of their search technology, but also made business suggestions for their project.

Following the company startup, interest in Google grew rapidly. Red Hat, a Linux company, signed on as their first commercial customer. Red Hat was particularly interested in Google because it recognized the importance of search technology and its ability to run on open source systems such as Linux. In addition, the press began to take notice of this new commercial venture, and articles began to appear in the media highlighting the Google product that offered relevant search results. The late 1990s saw spectacular growth in the technology industry, and Silicon Valley was awash with investor funding. The timing was right for Google, and in 1999 they sought and received a second round of funding, obtaining $25 million from Silicon Valley venture capital firms. The additional funding enabled them to expand their operations and move into new facilities they called the 'Googleplex,' Google's current headquarters in Mountain View, California. Although at the time they occupied only a small portion of the new two-story building, they had clearly come a long way from a university research project to a full-fledged technology company with a rapid growth trajectory and a product in high demand. Google was also in the process of developing a unique company culture.
They operated in an informal atmosphere that facilitated both collegiality and an easy exchange of ideas. Google staffers enjoyed this rewarding atmosphere while they continued to make many incremental improvements to their search engine technology. For example, in an effort to expand the utility of their keyword-targeted advertising to small businesses, they rolled out the 'AdWords' system, a software package providing a self-service advertisement development capability. Google took a major step forward when, in 2000, it was selected by Yahoo to replace Inktomi as its provider of supplementary search results. Because of the superiority of Google over other search engines, licenses were obtained by many other companies, including the Internet services powerhouse America Online (AOL), Netscape, Freeserve, and eventually Microsoft Network (MSN). In fact, although Microsoft has pursued its own search technology, Bill Gates once commented on search-engine technology development by saying that "Google kicked our butts."[8] By the end of 2000, Google was handling more than 100 million searches each day.

Shortly thereafter, Google began to deliver new innovations and establish new partnerships to enter the burgeoning field of mobile wireless computing. By expanding into this field, Google continued to pursue its strategy of putting search into the hands of as many users as possible. As the global use of Google grew, the patterns contained within the records of search queries provided new information about what was on the minds of the global community of Internet users. Google was able to analyze the global traffic in Internet searching and identify patterns, trends, and surprises ─ a process they called 'Google Zeitgeist.' In 2004, Yahoo decided to compete directly with Google and discontinued its reliance on the Google search technology. Nevertheless, Google continued to expand, increasing its market share and dominance of the Web search market through the deployment of regional versions of its software, incorporating language capabilities beyond English. As a result, Google continued to expand as a global Internet force.

Also in 2004, Google offered its stock to investors through an Initial Public Offering (IPO). This entrance into public trading of Google stock created not only a big stir in the financial markets, but also great wealth for the two founding entrepreneurs. Page and Brin immediately joined the billionaires' club as they entered the exclusive ranks of the wealthiest people in the world. Following the IPO, Google began to challenge Microsoft in its role as the leading provider of computer services. They issued a series of new products, including the email service Gmail, the impressive map and satellite image product Google Earth, Google Talk to compete in the growing Voice over Internet Protocol (VoIP) market, and Google Base and Google Book Search, products aimed at leveraging their ambitious project to make the content of thousands of books searchable online. In addition to these new ventures, they have continued to innovate in their core field of search by introducing new features for searching images, news articles, shopping services (Froogle), and other local search options. It is clear that Google has become an essential tool for connecting people and information in support of the developing Information Revolution.
Having established itself at the epicenter of the Web, Google is widely regarded as the 'place to be' for the best and brightest programming talent in the industry. It is fair to say that, since the introduction of the printing press, no other entity or event has had more impact on public access to information than Google. In fact, Google has endeavored to accumulate a good part of all human knowledge from the vast amount of information stored on the Web. The effective transformation of Google into an engine for what Page calls 'perfect search' would basically give people everywhere the right answers to their questions and the ability to understand everything in the world.

Page and Brin could not have achieved their technological success without a clear vision of the future of the Internet. Page recently commented in an interview that he believes that in the future "information access and communications will become truly ubiquitous," meaning that "anyone in the world will have access to any kind of information they want or be able to communicate with anyone else instantly and for very little cost." In fact, this vision of the future is not far from where we are now.[9] Page also noted that the real power of the Internet is its ability to serve people all over the globe with access to information, which represents the empowerment of individuals. The ability to facilitate the improved lives and productivity of billions of human beings throughout the world is an awesome potential outcome. And the ability to support the information needs of people from different cultures and languages is an unusual challenge. Page stated in an interview that "even language is becoming less of a barrier. There's pretty good automatic translation out there. I've been using it quite a bit as Google becomes more globalized. It doesn't translate documents exactly, but it does a pretty good job and it's getting better every day."[10]

Even with translation and global reach, however, there remain significant challenges to connecting the people of the world through advanced information technology. One of the challenges is the potential for governmental restrictions on access to information. Encryption technology, for example, inhibits the power of governments to monitor or control such information access. However, a 1998 survey of encryption policy found that several countries, including Belarus, China, Israel, Pakistan, Russia, and Singapore, maintained strong domestic controls, while several other countries were considering the adoption of such controls.[11]

The phrase 'Don't be evil' has been attributed to Google as its catchphrase or motto. Google's present CEO, Eric Schmidt, commented, in response to questions about the meaning of this motto, that "evil is whatever Sergey says is evil." Brin, on the other hand, said in an interview with Playboy Magazine, "As for 'Don't be evil,' we have tried to define precisely what it means to be a force for good ─ always do the right, ethical thing. Ultimately 'Don't be evil' seems the easiest way to express it." And Page also commented on the phrase, saying "Apparently people like it better than 'Be good.'"[12] Page and Brin maintain lofty ambitions for the future of information technology, and they communicated those ambitions in an unprecedented seven-page letter to Wall Street entitled 'An Owner's Manual' for Google's Shareholders, written to detail Google's intentions as a public company.
They explained their vision that “Searching and organizing all the world’s information is an unusually important task that should be carried out by a company that is trustworthy and interested in the public good.”[13] In response to questions about how Google will be used in the future, Brin said “Your mind is tremendously efficient at weighing an enormous amount of information. We want to make smarter search engines that do a lot of the work for us. The smarter we can make the search engine, the better. Where will it lead? Who knows? But it’s credible to imagine a leap as great as that from hunting through library stacks to a Google session, when we leap from today’s search engines to having the entirety of the world’s information as just one of our thoughts.”[14] At this juncture, Page and Brin find themselves in a state of great personal wealth and great accomplishment, having created a technology and company that is profoundly affecting human culture and society. The two computer scientists have traveled far in their hero’s journey to carry out their vision of global search, having developed skills and capabilities for themselves as well as for Google and the Googleware technology. As they succeeded, their search technology became a key milestone in the development of the Information Revolution. Their journey is not over, however. Before continuing their story, let’s digress into the historical context. The Information Revolution Over past millennia, the world has witnessed two global revolutions: the Agricultural Revolution and the Industrial Revolution. During the Agricultural Revolution, a hunter-gatherer required an area of roughly 100 acres to produce an adequate food supply, whereas a single farmer needed only one acre of land to produce the equivalent amount of food. It was this 100-fold improvement in land management that fueled the Agricultural Revolution. It not only enabled far more efficient food production, but also provided food resources well above the needs of subsistence, resulting in a new era built on trade. Where a single farmer and his horse had worked a farm, during the Industrial Revolution workers were able to use a single steam engine that produced 100 times the horsepower of this farmer-horse team. As a result, the Industrial Revolution placed a 100-fold increase of mechanical power into the hands of the laborer. The resulting fall in the cost of labor fueled the unprecedented acceleration in economic growth that ensued. Over the millennia, man has accumulated great knowledge, produced a treasury of cultural literature, and developed a wealth of technological advances, much of which has been recorded in written form. By the mid-twentieth century, the quantity of accessible useful information had grown explosively, requiring new methods of information management; and this can be said to have triggered the Information Revolution. As computer technology offered great improvements in information management technology, it also provided substantial reductions in the cost of information access. It did more than allow people to receive information. Individuals could buy, sell and even create their own information. Cheap, plentiful, easily accessible information has become as powerful an economic dynamic as land and energy had been for the two prior revolutions. The falling cost of information has, in part, reflected the dramatic improvement in price-performance of microprocessors, which appears to be on a pattern of doubling every eighteen months. 
While the computer has been contributing to information productivity since the 1950s, the resulting global economic productivity gains were initially slow to be realized. Until the late 1990s, networks were rigid and closed, and the time needed to implement changes in the telecommunications industry was measured in decades. Since then, the Web has become the ‘grim reaper’ of information inefficiency. For the first time, ordinary people had real power over information production and dissemination. As the cost of information dropped, the microprocessor in effect gave ordinary people control over information about consumer products. Today, we are beginning to see dramatic change as service workers experience the productivity gains from rapid communications and automated business and knowledge transactions. A service worker can now complete knowledge transactions 100 times faster using intelligent software and near-ubiquitous computing in comparison to a clerk using written records. As a result, the Information Revolution is placing a 100-fold increase in transaction speed into the hands of the service worker. Therefore, the Information Revolution is based on the falling cost of information-based transactions, which in turn fuels economic growth. In considering these three major revolutions in human society, a defining feature of each has been the requirement for more knowledgeable and more highly skilled workers. The Information Revolution signals that this will be a major priority for its continued growth. Clearly, the Web will play a central role in the efficient performance of the Information Revolution because it offers a powerful communication medium that is itself becoming ever more useful through intelligent applications. Over the past 50 years, the Internet/World Wide Web has grown into the global Information Superhighway. And just as roads connected the traders of the Agricultural Revolution and railroads connected the producers and consumers of the Industrial Revolution, the Web is now connecting information to people in the Information Revolution. The Information Revolution enables service workers today to complete knowledge transactions many times faster through intelligent software using photons over the Internet, in comparison to clerks using electrons over wired circuits just a few decades ago. But perhaps the most essential ingredient in the Web’s continued success has been search technology such as Google, which has provided real efficiency in connecting to relevant information and completing vital transactions. Now Google transforms data and information into useful knowledge, energizing the Information Revolution. Defining Information Google started with Page’s and Brin’s quest to mine data and make sense of the voluminous information on the Web. But what differentiates information from knowledge, and how do companies like Google manipulate it on the Web to nourish the Information Revolution? First let’s be clear about what we mean by the fundamental terms ‘data,’ ‘information,’ ‘knowledge,’ and ‘understanding.’ An item of data is a fundamental element; information is processed data that has some independent usefulness. Right now, data is the main thing you can find directly on the Web in its current state. Data can be considered the raw material of information. Symbols and numbers are forms of data. Data can be organized within a database to form structured information. 
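The step from raw data to structured information can be sketched in a few lines. This is only an illustration, not Google’s own tooling; the table, names, and values below are hypothetical, and Python’s built-in sqlite3 module stands in for any database.

# A minimal sketch (hypothetical data): raw data items become structured
# information once they are organized under an explicit schema and can be
# queried by meaning rather than scanned as loose symbols.
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("CREATE TABLE directory (name TEXT, address TEXT, phone TEXT)")

raw_data = [                                 # loose data: just symbols and numbers
    ("Jones", "12 Elm St", "555-0142"),
    ("Jones", "14 Elm St", "555-0178"),
    ("Smith", "90 Oak Ave", "555-0199"),
]
conn.executemany("INSERT INTO directory VALUES (?, ?, ?)", raw_data)

# With a schema in place, a query associates data items with one another,
# which is what turns data into information.
for row in conn.execute(
        "SELECT name, address FROM directory WHERE name = ?", ("Jones",)):
    print(row)

Once the schema exists, the same loose symbols can be queried by meaning, which is the point at which data begins to behave like information.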
While spreadsheets are ‘number crunchers,’ databases are the ‘information crunchers.’ Databases are highly effective in managing and manipulating structured data.[15] Consider, for example, a directory or phone book which contains elements of information (i.e., names, addresses and phone numbers) about telephone customers in a particular area. In such a directory, each customer’s information is laid out in the same pattern. The phone book is basically a table which contains a record for each customer. Each customer’s record includes his name, address, and phone number. But you can’t directly search such a database on the Web. This is because there is no ‘schema’ defining the structure of data on the Web. Thus, what looks like information to the human being who is looking at the directory (taking with him his background knowledge and experience as a context) in reality is data because it lacks this schema. On the other hand, information explicitly associates one set of things to another. A telephone book full of data becomes information when we associate the data to persons we know or wish to communicate with. For example, suppose we found data entries in a telephone book for four different persons named Jones, but all of them were living within one block of each other. The fact that there are four bits of data about persons with the same name in approximately the same location is interesting information. Knowledge, on the other hand, can be considered to be a meaningful collection of useful information. We can construct information from data. And we can construct knowledge from information. Finally, we can achieve understanding from the knowledge we have gathered. Understanding lies at the highest level. It is the process by which we can take existing knowledge and synthesize new knowledge. Once we have understanding, we can pursue useful actions because we can synthesize new knowledge or information from what is previously known. Again, knowledge and understanding are currently elusive on the Web. Future Semantic Web architectures seek to redress this limit. To continue our telephone example, suppose we developed a genealogy tree for the Jones and found the four Jones who lived near each other were actually brothers. This would give us additional knowledge about the Jones in addition to information about their addresses. If we then interviewed the brothers and found that their father had bought each brother a house in his neighborhood when they married, we would finally understand quite a bit about them. We could continue the interviews to find out about their future plans for their off-spring – thus producing more new knowledge. If we could manipulate data, information, knowledge, and understanding by combining a search engine, such as Google, with a reasoning engine, we could create a logic machine. Such an effort would be central to the development of Artificial Intelligence (AI) on the Web. AI systems seek to create understanding through their ability to integrate information and synthesize new knowledge from previously stored information and knowledge. An important element of AI is the principle that intelligent behavior can be achieved through processing of symbolic structures representing increments of knowledge. This has produced knowledge-representation languages that allow the representation and manipulation of knowledge to deduce new facts from the existing knowledge. The World Wide Web has become the greatest repository of information on virtually every topic. 
Its biggest problem, however, is the classic problem of finding a needle in a haystack. Given the vast stores of information on the Web, finding exactly what you’re looking for can be a major challenge. This is where search engines, like Google, come in ─ and where we can look for the greatest future innovations to come when we combine AI and search. Larry Page and Sergey Brin found that the existing search technology looked at information on the Web in simple ways. They decided that to deliver better results, they would have to go beyond simply looking, to looking good. Looking Good Commercial search engines are based upon one of two forms of Web search technologies: human directed search and automated search. Human directed search is search in which the human performs an integral part of the process. In this form of search engine technology, a database is prepared of keywords, concepts, and references that can be useful to the human operator. Searches that are keyword based are easy to conduct but they have the disadvantage of providing large volumes of irrelevant or meaningless results. The basic idea in its simplest form is to count the number of words in the search query that match words in the keyword index, and rank the Web page accordingly. Although more sophisticated approaches also take into account the location of the keywords, the improved performance may not be substantial. As an example, it is known that keywords used in the title tags of Web pages tend to be more significant than words that occur in the web page, but not in the title tag; however, the level of improvement may be modest. Another approach is to use hierarchies of topics to assist in human-directed search. The disadvantage of this approach is that the topic hierarchies must be independently created and are therefore expensive to create and maintain. The alternative approach is automated search; this approach is the path taken by Google. It uses software agents, called Web crawlers (also called spiders, robots, bots, or agents) to automatically follow hypertext links from one site to another on the Web until they accumulate vast amounts of information about the Web pages and their interconnections. From this, a complex index can be prepared to store the relevant information. Such automated search methods accumulate information automatically and allow for continuing updates. However, even though these processes may be highly sophisticated and automatic, the information they produce is represented as links to words, and not as meaningful concepts. Current automated search engines must maintain huge databases of Web page references. There are two implementations of such search engines: individual search engines and meta-searchers. Individual search engines (such as Google) accumulate their own databases of information about Web pages and their interconnections and store them in such a way as to be searchable. Meta-searchers, on the other hand, access multiple individual engines simultaneously, searching their databases. In the use of key words in search engines, there are two language-based phenomena that can significantly impact effectiveness and therefore must be taken into account. The first of these is polysemy, the fact that single words frequently have multiple meanings; and the second is synonymy, the fact that multiple words can have the same meaning or refer to the same concept. In addition, there are several characteristics required to improve a search engine’s performance. 
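Before turning to those characteristics, the keyword-counting approach described above can be sketched in a few lines of Python. The pages, the query, and the extra weight for title matches are hypothetical; real engines combine far more signals.

# Minimal keyword-matching sketch (hypothetical pages and weights): count how
# many query words occur in each page, weighting title words more heavily,
# then rank pages by that score.
def keyword_score(query, page, title_weight=3):
    query_words = query.lower().split()
    title_words = page["title"].lower().split()
    body_words = page["body"].lower().split()
    score = 0
    for word in query_words:
        score += title_weight * title_words.count(word)  # title hits count more
        score += body_words.count(word)                  # ordinary body hits
    return score

pages = [
    {"title": "Stanford search research", "body": "ranking web pages by links"},
    {"title": "Cooking pages", "body": "search for the best pasta recipes"},
]
query = "search ranking"
ranked = sorted(pages, key=lambda p: keyword_score(query, p), reverse=True)
for page in ranked:
    print(keyword_score(query, page), page["title"])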
It is important to consider useful searches as distinct from fruitless ones. To be useful, there are three necessary criteria: (1) maximize the relevant information, (2) minimize irrelevant information, and (3) make the ranking meaningful, with the most highly relevant results first. The first criterion is called recall. Obtaining relevant results is very important, and without effective recall we may be swamped with less relevant information and may, in fact, leave out the most important and relevant results. It is essential to reduce the rate of false negatives ─ important and relevant results that are not displayed ─ to a level that is as low as possible. The second criterion, minimizing irrelevant information, is also very important to ensure that relevant results are not swamped; this criterion is called precision. If the level of precision is too low, the useful results will be highly diluted by the uninteresting results, and the user will be burdened by the task of sifting through all of the results to find the needle in the haystack. High precision means a very low rate of false positives, irrelevant results that are highly ranked and displayed at the top of our search result. Since there is always a tradeoff between reducing the risk of missing relevant results and reducing the level of irrelevant results, the third criterion, ranking, is very important. Ranking is most effective when it matches our information needs in terms of our perception of what is most relevant in our results. The challenge for a software system is to be able to accurately match the expectations of a human user, since the degree of relevance of a search contains several subjective factors such as the immediate needs of the user and the context of the search. Many of the desired characteristics for advanced search, therefore, match well with the research directions in artificial intelligence and pattern recognition. By obtaining an awareness of individual preferences, for example, a search engine could more effectively take them into account in improving the effectiveness of search. Recognizing that ranking algorithms were the weak point in competing search technology, Page and Brin introduced their own new ranking algorithm ─ PageRank. Google Connects Information Just as the name Google is derived from the esoteric mathematical term ‘googol,’ in the future, the direction of Google will focus on developing the esoteric ‘perfect search engine,’ defined by Page as something that "understands exactly what you mean and gives you back exactly what you want." In the past, Google has applied great innovation to try to overcome the limitations of prior search approaches; PageRank was conceived by Google to overcome some of the key limitations.[16] Page and Brin recognized that providing the fastest, most accurate search results would require a new approach to server systems. While most search engines used a small number of large servers that often slowed down under peak use, Google went the other direction by using large numbers of linked PCs to find search results in response to queries. The approach turned out to be effective in that it produced much faster response times and greater scalability while minimizing costs. Others have followed Google’s lead in this innovation while Google has continued its efforts to make its systems more efficient. Google takes a parallel processing approach to its search technology by conducting a series of calculations on multiple processors. 
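Returning to the recall and precision criteria defined above, both can be computed directly by comparing the set of results an engine displays with the set of results a user actually needed. A minimal sketch, with hypothetical document identifiers:

# Recall and precision for one search, treating results and relevance as sets.
def recall_and_precision(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = retrieved & relevant
    recall = len(true_positives) / len(relevant)      # share of relevant items found
    precision = len(true_positives) / len(retrieved)  # share of results that are relevant
    return recall, precision

retrieved = ["doc1", "doc2", "doc3", "doc4"]   # what the engine displayed
relevant = ["doc2", "doc4", "doc7"]            # what the user actually needed
print(recall_and_precision(retrieved, relevant))   # (0.666..., 0.5)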
This parallel-processing approach has provided Google with a critical timing advantage, permitting its search algorithms to be very fast. While other search engines rely heavily on the simple approach of counting the occurrences of keywords, Google’s PageRank approach considers the entire link structure of the Web to help in the determination of Web page importance. By then performing a hypertext matching assessment to narrow the search results for the particular search being conducted, Google achieves superior performance. In a sense, they combine insight into Web page importance with query-specific attributes to rank pages and deliver the most relevant results at the top of the search results. The PageRank algorithm analyzes the importance of the Web pages it considers by solving an exceptionally complex set of equations with a huge number of variables and terms. By considering links between Web pages as ‘votes’ from one page to another, PageRank can assign a measure of a page’s importance by counting its votes. It also takes into account the importance of each page that supplies a vote, and by appropriately weighting these votes, further improves the quality of the search. In addition, PageRank considers the Web page content, but unlike other search engines that restrict such consideration to the text content, Google considers the full contents of the page. In a sense, Google attempts to use the collective intelligence of the Web, a topic for further discussion later in this book, in its effort to improve the relevance of its search results. Finally, because the search algorithms used by Google are automated, Google has earned a reputation for objectivity and lack of bias in its results. Throughout their exciting years establishing and growing Google as a company, Page and Brin realized that continued innovation was essential. They undertook to find new innovative services that would enhance access to Web information with added thought and not a little perspiration. Page said that he respected the idea of having “a healthy disregard for the impossible.”[17] In February 2002, the Google Search Appliance, a plug-and-play application for search, was introduced. In short order, this product was dispersed throughout the world, populating company networks, university systems, and the entire Web. The popular Google Search Appliance is referred to as ‘Google in a box.’ In another initiative, Google News was introduced in September of 2002. This free news service, which allows automatic selection and arrangement of news headlines and pictures, features real-time updating and tailoring, allowing users to browse the news with scan and search capabilities. Continuing Google's emphasis on innovation, the Google search service for products, Froogle, was launched in December of 2002. Froogle allows users to search millions of commercial websites to find product and pricing information. It enables users to identify and link to a variety of sources for specific products, providing images, specifications and pricing information for the items being sought. Google's innovations have also impacted the publishing business with both search and advertising features. Google purchased Pyra Labs in 2003, and thus became the host of Blogger, a leading service for the sharing of thoughts and opinions through online journals, or blogs (weblogs). Finally, Google Maps became a dynamic online mapping feature, and Google Earth a highly popular mapping and satellite imagery resource. 
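Returning to the ranking technology behind these services, the voting idea at the heart of PageRank can be sketched as a simple iterative calculation. The tiny link graph and the 0.85 damping factor below are illustrative assumptions; this is a teaching sketch, not Google’s production algorithm.

# Simplified PageRank sketch: each page's score is a weighted sum of the scores
# of the pages linking to it, iterated until the values settle.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # sum the "votes" from every page q that links to p, each vote
            # weighted by q's own rank and diluted by q's number of outgoing links
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank
    return rank

links = {                      # hypothetical web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))

Each iteration redistributes every page’s current score across the pages it links to, so a vote from an important page counts for more than a vote from an obscure one.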
Using innovative applications such as Google Maps and Google Earth, users can find information about particular locations, get directions, and display both maps and satellite images of a desired address. With each new capability, Google expands our access to more information and moves us closer to Page’s Holy Grail: ‘perfect search.’ At this juncture, Page and Brin have finally completed their hero’s journey. They have become the Masters of Search, committed to improving access to information and lifting the bonds of ignorance from millions around the world. Pattern of Discovery Larry Page and Sergey Brin were trying to solve the problem of easy, quick access to all Web information, and ultimately to all human knowledge. In order to index existing Web information and provide rapid relevant search results, their challenge was to sort through billions of pages of material efficiently and explicitly find the right responses. They were confident that their vision for developing a global information collection, storage, and retrieval system would succeed if they could base it on a unique and efficient ranking algorithm. The inspiration for Page and Brin became fulfilled when they completed their seminal paper entitled The Anatomy of a Large-Scale Hypertextual Web Search Engine, which explained their efficient ranking algorithm, PageRank. In developing a breakthrough ranking algorithm based upon the ideas of publication ranking, Page and Brin experienced a moment of inspiration. But they didn’t stop there. They also believed that optimization was vitally important, and so they developed their own Googleware technology, combining custom software with custom hardware and thereby reflecting the founders’ genius. They built the world’s most powerful computational enterprise, and they have been on a roll ever since. Page stressed that inspiration still required perspiration and that Google appeared destined for rapid growth and expansion. In building the customized Googleware computer infrastructure for PageRank, they were demonstrating the 1% Inspiration and 99% Perspiration pattern. The result was Google, the dominant search engine connecting people to all of the World Wide Web’s information. Forecasts for Connecting Information For many of us it seems that an uncertain future looms ahead like a massive opaque block of granite. But just as Michelangelo suggested that he took a block of stone and chipped away the non-essential pieces to produce David, we can chip away the improbable to uncover the possible. By examining inventors and their process of discovery, we are able to visualize the tapestry of our past to help unveil patterns that can serve as guideposts on our path forward. Page and Brin invented an essential search technology, but their contributions to information processing were evolutionary in nature – built on inspiration and perspiration. One forecast for connecting information is that we can expect a continued pattern of inspired innovation as we go forward in the expansion of search and related technology. Discoveries requiring inspiration and perspiration: In considering the future for connecting information, we expect that improved ranking algorithms will ensure Google’s continued dominance for some time to come. Extrapolating from Google’s success, we can expect a series of inspired innovations building upon its enterprise computer system, such as offering additional knowledge-related services. 
Future Google services could include expanding into multimedia areas such as television, movies, and music through Google TV and Google Mobile. Viewers would have all the history of TV to choose from, and Google would offer advertisers targeted search. Google Mobile could deliver the same services and products to cell phone technology. By 2020, Google could digitize and index every book, movie, TV show, and song ever produced, making them conveniently available. In addition, Google could dominate the Internet as a hub site. A ubiquitous GoogleNet could dominate wireless and cell-phone access. As for the Google browser, Gbrowser, it could take over functions of today’s operating systems. Our vision, however, also includes connecting information through developing more intelligent search capabilities. A new Web architecture, such as Tim Berners-Lee’s Semantic Web, would add knowledge representation and logic to the markup languages of the Web. Semantics on the Web would offer extraordinary leaps in Web search capabilities. Having cornered online advertising, Google has made it progressively more precisely targeted and inexpensive. Google also has 150,000 servers with nearly unlimited storage space and massive processing power. Beyond simply inspired discoveries, Google or other search engine powers could find innovations based upon new principles yet to be proven, as suggested in the following. Discoveries requiring new proof of principle: Technology futurists such as Ray Kurzweil have suggested that Strong AI (software programs that exhibit true intelligence) could emerge from developing web-based systems such as Google’s. Strong AI could perform data mining at a whole new level. This type of innovation would require a Proof of Principle. Some have suggested that Google’s purpose in converting books into electronic form is not to provide for humans to read them, but rather to provide a form that could be accessible by software, with AI as the consumer. One of the great areas of innovation resulting from Google’s initiatives is its ability to search the Human Genome. Such technology could lead to a personal DNA search capability within the next decade. This could result in the identification of medical prescriptions that are specific to you, and you would know exactly what kinds of side-effects to expect from a given drug. And consider what might happen if we had ‘perfect search.’ Think about the capability to ask any question and get the perfect answer – an answer with real context. The answer could incorporate all of the world’s knowledge using text, video, or audio. And it would reflect every nuance of meaning. Most importantly, it would be tailored to your own particular context. That’s the stated goal of IBM, Microsoft, Google and others. Such a capability would offer its greatest benefits when knowledge is easily gathered. Soon search will move away from PC-centric operations to a Web connected to many small devices such as mobile phones and PDAs. The most insignificant object with a chip and the ability to connect will be network-aware and searchable. And search needs to solve access to deep databases of knowledge, such as the University of California’s library system. While there are several hundred thousand books online, there are 100 million more that are not. ‘Perfect search’ will find all this information and connect us to the world’s knowledge, but this is the beginning of decision making, not the end. Search and artificial intelligence seem destined to get together. 
In the coming chapters, we will explore the different technologies involved in connecting information and consider how the prospects for ‘perfect search’ could turn into ‘ubiquitous intelligence.’ First, ubiquitous computing populates the world with devices using microchips everywhere. Then the ubiquitous Web connects and controls these devices on a global scale. The ubiquitous Web is a pervasive Web infrastructure that allows physical objects to be accessed through URIs, providing information and services that enrich users’ experiences in their physical context just as the Web does in cyberspace. The final step comes when artificial intelligence reaches the capability of managing and regulating devices seamlessly and invisibly within the environment – achieving ubiquitous intelligence. Ubiquitous intelligence is the final step of Larry Page’s ‘perfect search’ and the future of the Information Revolution. References: 
[1] Prather, M., “Ga-Ga for Google,” Entrepreneur Magazine, April 2002. 
[2] Vise, D. A., and Malseed, M., The Google Story, Delacorte Press, New York, NY, 2005. 
[3] Brin, S., and Page, L., The Anatomy of a Large-Scale Hypertextual Web Search Engine, Computer Science Department, Stanford University, Stanford, 1996. 
[4] Brin, S., and Page, L., “The Future of the Internet,” Speech to the Commonwealth Club, March 21, 2001. 
[5] Vise, D. A., and Malseed, M., The Google Story, Delacorte Press, New York, NY, 2005. 
[6] Vise, D. A., and Malseed, M., The Google Story, Delacorte Press, New York, NY, 2005. 
[7] Technology Review, interview entitled “Search Us, Says Google,” January 11, 2002. 
[8] Kelleher, K., “Google vs. Gates,” Wired, Issue 12.03, March 2004. 
[9] Brin, S., and Page, L., “The Future of the Internet,” Speech to the Commonwealth Club, March 21, 2001. 
[10] Ibid. 
[11] Cryptography and Liberty 1998: An International Survey of Encryption Policy, February 1998, http://www.gilc.org/crypto/crypto-survey.html 
[12] “Google Guys,” Playboy Magazine interview, September 2004. 
[13] From Google's Letter to Prospective Shareholders, http://www.thestreet.com/_yahoo/markets/marketfeatures/10157519_6.html 
[14] “Google Guys,” Playboy Magazine interview, September 2004. 
[16] Quotes from http://www.google.com/corporate/tech.html 
[17] Vise, D. A., and Malseed, M., The Google Story, Delacorte Press, New York, NY, 2005.

  • All Androids Lie | H Peter Alesso

    Excerpt of short story collection book, All Androids Lie. All Androids Lie AMAZON THE GAME Kateryna said, “Hold still, Dear,” as she wiped the dirty smudge off the corner of Maria’s mouth. Maria asked, “Why is everyone so excited?” Kateryna said, “They’re scared of the loud noise.” “What is it?” “Fireworks. See the bright flashes exploding in the night sky,” said the girl’s mother. Maria nodded. “It’s the start of The Game,” lied her mother. “I told you all about it. Don’t you remember?” Maria shook her head, puzzled. “Everyone in the city plays, and there are terrific prizes.” Kateryna added, “What a pity you’re only four. You can’t play. I’m sooo sorry. You might have been great.” “What’s the game?” “It’s a big, big game of tag. Everyone in the city will run to escape. If you’re tagged, you lose. Everyone wants to win. It’s too bad you can’t play.” “Why can’t I play?” Kateryna said, “You’re only four. You’d get tired, cry, and make a fuss.” “I won’t. I won’t make a fuss.” “You would have been good at this game. The prizes are spectacular. Including that new doll, Laura, that you wanted so badly.” “If I win, I will get Laura?” “Yes, and lots more.” Outside, people were running and shouting. “There are candies, treats, games, and other toys for the winners. But you could never win. You would cry and quit.” “No, Mommy. I’ll be good. I want to play and win the prizes.” “I’m so sorry, Dear. The game is long and hard, and I don’t think you’re strong enough.” “Oh, Mommy, I really, really want to. I promise to be good.” Maria looked as if she was ready to throw a tantrum. “None of that, or you will lose immediately,” scolded her mother. “Please?” she asked with the most adoring smile. “Well, I don’t know,” said her mother. “There are many people who can tag you, and you must run away from all of them.” “I will. Please?” Kateryna looked appreciatively at her fair-haired daughter. The prekindergarten teacher told her that Maria was her star pupil because she was so advanced with her numbers and letters. She loved her toy piano and played well with the other children. Kateryna could see herself in the child, not just in the likeness of her face and features but in spirit and desire. Normally, a good-natured and happy-go-lucky sort of woman, she felt she could rise to any challenge. And now, she faced her fiercest test. “If I let you play, there can be no quitting. Do you agree? Pinky Swear?” “Yes! Yes! Pinky Swear,” said Maria jumping up and down. Static from the radio crackled behind them. The news announcer said, “This city has been a center for trade and manufacturing for key businesses along the Black Sea coast. But now its magnificent architecture and unique decor are being wiped off the face of the earth.” With steely determination, Kateryna suppressed her fears and shut the radio off. As the explosions drew near, she calmly said, “Let’s get ready! “Keep these documents safe,” Kateryna said, tucking the papers into Maria’s coat pocket. “They are the game tickets with your name. The rules of the game are strict. And you must reach the winning flag without being tagged. You must stay close to me and don’t talk to people. Do you understand?” “Yes.” “Whenever I say run, you run. Or else, the bad men will tag you.” Maria nodded. She put a scarf around Maria’s neck and buttoned up her coat. Then she pulled up the collar before being satisfied that she would be warm. “My gloves,” squeaked Maria. 
“Here they are.” As they left their apartment building and stepped out onto the street, they saw people leaving their houses in panic. “Are all these people playing the game?” “Yes. See how much fun they’re having. I told you it was a popular game. You must be tough to play. Are you tough?” “Yes. Mommy.” “Are you?” her mother asked with a raised brow. The skinny four-year-old put her hands on her hips, stood like a superhero with her chest out, and shouted, “I’m tough, and I mean it!” Fairly bursting with laughter, Kateryna said, “Okay, then. Let’s go,” Kateryna gripped the girl’s hand firmly and said, “This way.” As they hurried, there were loud explosions throughout the city. When they reached the train station, shells were bursting high above. “Gosh! Everything is happening so fast.” “Be patient, Dear.” They managed to squeeze onto a packed rail car, but the train was slow and made many erratic stops as if it were engaged in a game of dodgeball. Soon Maria complained, “The people are scary.” Kateryna touched the girl’s cheek and said, “Be brave. We’re on a great adventure. You must be bold.” But after two hours, Maria scowled and said, “I’m cold.” As Kateryna rearranged the girl’s scarf and coat, deep frown lines bit into her face threatening to become a permanent mask. She removed the girl’s gloves and rubbed the tiny hands. Then she planted a kiss on Maria’s rosy cheek. Maria pouted, “I’m hungry.” “Maria, you’re a troublesome thing.” Kateryna took a package out of her pocket and unwrapped a Kanapky sandwich for her. The girl took several bites and then looked disinterested in the rest. She sulked, “I’m thirsty.” “I don’t have any water,” said her exasperated mother. “But if you’re going to be a nasty girl, we will have to quit the game and go home immediately.” “Mommmm,” whined Maria. Nearby, a very old, cantankerous-looking woman, rumpled and wrinkled as a walnut, said, “Here, I have an extra.” She handed Maria a small water bottle. “Thank you. That’s generous of you,” said Kateryna with relief. After another hour, Maria pressed her face against the window, peering into the night as February’s frost crept along the windowpane, forming the jagged lines of an ice blossom. Suddenly, the train bounced and rocked. Pieces of steel and glass flew about. People screamed in pain. A bit of shrapnel cracked the skull of a nearby man. It made the sound of a champagne cork popping. THUNK! “Mommy, that man is bleeding.” “Shhh. It was an accident. He will be taken care of. We must keep moving.” They fled the train and the bombardment area. Kateryna gripped her daughter’s hand tightly and pulled her along as quickly as possible. When they reached a military checkpoint, a soldier told them it was safer to travel on the back roads. “He’s dressed like Daddy. Is Daddy playing too?” “Yes, Darling,” said Kateryna, holding back a tear. “I’m afraid he is.” “I’m scared, Mommy.” Gathering her courage, Kateryna said, “Don’t be frightened, Maria. Remember, it’s only a game. And we’re going to win. Just don’t let them tag you, okay.” “Huh ha.” In the early morning hours, the rosy glow of the sun kissed the horizon just as they reached the top of a hill. “Can we rest, Mommy? I’m tired.” “Not yet. See that bunker across the field? That’s the finish line. When we get there, we’ll win the prize.” “Oh good,” said Maria, perking up, but she could barely move. Kateryna picked her up and carried her. But after going only a hundred yards, Maria exclaimed, “Huh, oh. 
Mommy, are those the bad men?” She pointed to the men with guns chasing them. Kateryna looked over her shoulder and said, “Yes, Maria. They are very bad men. Evil does not sleep; it waits for a chance to catch you. So, we must hurry.” She put Maria down and said, “See that bunker ahead. That’s the finish line. That’s where you turn in your ticket. Hold it fast to your chest.” Then she leaned closer and whispered, “I love you, Dearest,” though the sentiment seemed more like goodbye. “I love you too, Mommy,” said Maria, clutching her ticket. The child’s words wrapped around Kateryna like a thick warm blanket. She yelled, “Run, Maria, run!” The noise from the blasts was terrific and the flashes of the overhead lights cast eerie shadows on their path. Cold breath steamed from their mouths as they huffed and puffed. Gripped by the full force of her worst fears, Kateryna yelled, “Run, Maria! Don’t look back! Run!” Maria ran with all the might and passion a four-year-old could muster. Finally, when she reached the bunker, a giant armor-clad soldier pulled her to safety. Maria jumped up and down and shouted over the din, “Did we win, Mommy? Did we win?” Then, suddenly, and loudly, Maria let out a cry that tore through the night. She sobbed unrelentingly, even as she stuttered out several snot-thick breaths. In the open field, just a dozen yards from the bunker, her mother lay face-down, sprawled out like a discarded rag doll.

  • Semantic Web | H Peter Alesso

An excerpt of the non-fiction book on Semantic Web development, Semantic Web Services. Semantic Web Services AMAZON Chapter 6.0 The Semantic Web In this chapter, we provide an introduction to the Semantic Web and discuss its background and potential. By laying out a road map for its likely development, we describe the essential stepping stones, including knowledge representation, inference, ontology, search, and search engines. We also discuss several supporting semantic layers of the Markup Language Pyramid: Resource Description Framework (RDF) and Web Ontology Language (OWL). In addition, we discuss using RDF and OWL for supporting software agents, Semantic Web Services, and semantic searches. Background Tim Berners-Lee invented the World Wide Web in 1989 and built the World Wide Web Consortium (W3C) team in 1992 to develop, extend, and standardize the Web. But he didn’t stop there. He continued his research at MIT through Project Oxygen[1] and began conceptual development of the Semantic Web. The Semantic Web is intended to be a paradigm shift just as powerful as the original Web. The goal of the Semantic Web is to provide a machine-readable intelligence that would come from hyperlinked vocabularies that Web authors would use to explicitly define their words and concepts. The idea allows software agents to analyze the Web on our behalf, making smart inferences that go beyond the simple linguistic analyses performed by today's search engines. Why do we need such a system? Today, the data available within HTML Web pages is difficult to use on a large scale because there is no global schema. As a result, there is no system for publishing data in such a way as to make it easily processed by machines. For example, just think of the data available on airplane schedules, baseball statistics, and consumer products. This information is presently available at numerous sites, but it is all in HTML format, which means that using it has significant limitations. The Semantic Web will bring structure and defined content to the Web, creating an environment where software agents can carry out sophisticated tasks for users. The first steps in weaving the Semantic Web on top of the existing Web are already underway. In the near future, these developments will provide new functionality as machines become better able to "understand" and process the data. This presumes, however, that developers will annotate their Web data in advanced markup languages. To this point, the language-development process isn't finished. There is also ongoing debate about the logic and rules that will govern the complex syntax. The W3C is attempting to set new standards while leading a collaborative effort among scientists around the world. Berners-Lee has stated his vision that today’s Web Services, in conjunction with the developing Semantic Web, should become interoperable. Skeptics, however, have called the Semantic Web a Utopian vision of academia. Some doubt it will take root within the commercial community. Despite these doubts, research and development projects are burgeoning throughout the world. And even though Semantic Web technologies are still developing, they have already shown tremendous potential in the areas of semantic groupware (see Chapter 13) and semantic search (see Chapter 15). Enough so that the future of both the Semantic Web and Semantic Web Services (see Chapter 11) appears technically attractive. 
The Semantic Web The current Web is built on HTML, which describes how information is to be displayed and laid out on a Web page for humans to read. In effect, the Web has developed as a medium for humans without a focus on data that could be processed automatically. In addition, HTML is not capable of being directly exploited by information retrieval techniques. As a result, the Web is restricted to manual keyword searches. For example, if we want to buy a product over the Internet, we must sit at a computer and search for most popular online stores containing appropriate categories of products. We recognize that while computers are able to adeptly parse Web pages for layout and routine processing, they are unable to process the meaning of their content. XML may have enabled the exchange of data across the Web, but it says nothing about the meaning of that data. The Semantic Web will bring structure to the meaningful content of Web pages, where software agents roaming from page-to-page can readily carry out automated tasks. We can say that the Semantic Web will become the abstract representation of data on the Web. And that it will be constructed over the Resource Description Framework (RDF) (see Chapter 7) and Web Ontology Language (OWL) (see Chapter 8). These languages are being developed by the W3C, with participations from academic researchers and industrial partners. Data can be defined and linked using RDF and OWL so that there is more effective discovery, automation, integration, and reuse across different applications. These languages are conceptually richer than HTML and allow representation of the meaning and structure of content (interrelationships between concepts). This makes Web content understandable by software agents, opening the way to a whole new generation of technologies for information processing, retrieval, and analysis. Two important technologies for developing the Semantic Web are already in place: XML and RDF. XML lets everyone create their own tags. Scripts, or programs, can make use of these tags in sophisticated ways, but the script writer has to know how the page writer uses each tag. In short, XML allows users to add arbitrary structure to their documents, but says nothing about what the structure means. If a developer publishes data in XML on the Web, it doesn’t require much more effort to take the extra step and publish the data in RDF. By creating ontologies to describe data, intelligent applications won’t have to spend time translating various XML schemas. In a closed environment, Semantic Web specifications have already been used to accomplish many tasks, such as data interoperability for business-to-business (B2B) transactions. Many companies have expended resources to translate their internal data syntax for their partners. As everyone migrates towards RDF and ontologies, interoperability will become more flexible to new demands. Another example of applicability is that of digital asset management. Photography archives, digital music, and video are all applications that are looking to rely to a greater degree on metadata. The ability to see relationships between separate media resources as well as the composition of individual media resources is well served by increased metadata descriptions and enhanced vocabularies. The concept of metadata has been around for years and has been employed in many software applications. The push to adopt a common specification will be widely welcomed. 
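The point that XML supplies structure but not shared meaning can be made concrete with a small sketch. The two fragments below are hypothetical: they state the same fact with different tag names, and a program matching on tags alone has no way to recognize them as equivalent.

# Two XML fragments a human can see describe the same fact, but whose tags
# carry no agreed meaning, so tag-based matching alone cannot connect them.
import xml.etree.ElementTree as ET

doc_a = ET.fromstring("<book><author>H. Peter Alesso</author></book>")
doc_b = ET.fromstring("<item><creator>H. Peter Alesso</creator></item>")

print(doc_a.find("author").text)   # works only if we already know this schema
print(doc_b.find("author"))        # None: same fact, different tag, no match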
For the Semantic Web to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning. AI researchers have studied such systems and produced today’s Knowledge Representation (KR). KR is currently in a state comparable to that of hypertext before the advent of the Web. Knowledge representation contains the seeds of important applications, but to fully realize its potential, it must be linked into a comprehensive global system. The objective of the Semantic Web, therefore, is to provide a language that expresses both data and rules for reasoning as a Web-based knowledge representation. Adding logic to the Web means using rules to make inferences and choosing a course of action. A combination of mathematical and engineering issues complicates this task (see Chapter 9). The logic must be powerful enough to describe complex properties of objects, but not so powerful that agents can be tricked by a paradox. Intelligence Concepts The concept of Machine Intelligence (MI) is fundamental to the Semantic Web. Machine Intelligence is often referred to in conjunction with the terms Machine Learning, Computational Intelligence, Soft-Computing, and Artificial Intelligence. Although these terms are often used interchangeably, they are different branches of study. For example, Artificial Intelligence involves symbolic computation while Soft-Computing involves intensive numeric computation. We can identify the following sub-branches of Machine Intelligence that relate to the Semantic Web: Knowledge Acquisition and Representation. Agent Systems. Ontology. Although symbolic Artificial Intelligence is currently built and developed into Semantic Web data representation, there is no doubt that software tool vendors and software developers will incorporate the Soft-Computing paradigm as well. The benefit is creating adaptive software applications. This means that Soft-Computing applications may adapt to unforeseen input. Knowledge Acquisition is the extraction of knowledge from various sources, while Knowledge Representation is the expression of knowledge in computer-tractable form that is used to help software-agents perform. A Knowledge Representation language includes Language Syntax (describes configurations that can constitute sentences) and Semantics (determines the facts and meaning based upon the sentences). For the Semantic Web to function, computers must have access to structured collections of information. But, traditional knowledge-representation systems typically have been centralized, requiring everyone to share exactly the same definition of common concepts. As a result, central control is stifling, and increasing the size and scope of such a system rapidly becomes unmanageable. In an attempt to avoid problems, traditional knowledge-representation systems narrow their focus and use a limited set of rules for making inferences. These system limitations restrict the questions that can be asked reliably. XML and the RDF are important technologies for developing the Semantic Web; they provide languages that express both data and rules for reasoning about the data from a knowledge-representation system. The meaning is expressed by RDF, which encodes it in sets of triples, each triple acting as a sentence with a subject, predicate, and object. These triples can be written using XML tags. As a result, an RDF document makes assertions about specific things. 
Subject and object are each identified by a Universal Resource Identifier (URI), just as those used in a link on a Web page. The predicate is also identified by a URI, which enables anyone to define a new concept just by defining a URI for it somewhere on the Web. The triples of RDF form webs of information about related things. Because RDF uses URIs to encode this information in a document, the URIs ensure that concepts are not just words in a document, but are tied to a unique definition that everyone can find on the Web. Search Algorithms The basic technique of search (or state space search) refers to a broad class of methods that are encountered in many different AI applications; the technique is sometimes considered a universal problem-solving mechanism in AI. To solve a search problem, it is necessary to prescribe a set of possible or allowable states, a set of operators to change from one state to another, an initial state, a set of goal states, and additional information to help distinguish states according to their likelihood of leading to a target or goal state. The problem then becomes one of finding a sequence of operators leading from the initial state to one of the goal states. Search algorithms can range from brute force methods (which use no prior knowledge of the problem domain, and are sometimes referred to as blind searches) to knowledge-intensive heuristic searches that use knowledge to guide the search toward a more efficient path to the goal state (see Chapters 9 and 15). Search techniques include the brute force methods (breadth-first, depth-first, depth-first iterative-deepening, and bi-directional search) and the heuristic methods (hill-climbing, best-first, A*, beam, and iterative-deepening-A*). Brute force searches entail the systematic and complete search of the state space to identify and evaluate all possible paths from the initial state to the goal states. These searches can be breadth-first or depth-first. In a breadth-first search, each branch at each node in a search tree is evaluated, and the search works its way from the initial state to the final state considering all possibilities at each branch, a level at a time. In the depth-first search, a particular branch is followed all the way to a dead end (or to a successful goal state). Upon reaching the end of a path, the algorithm backs up and tries the next alternative path in a process called backtracking. The depth-first iterative-deepening algorithm is a variation of the depth-first technique in which the depth-first method is implemented with a gradually increasing limit on the depth. This allows a search to be completed with a reduced memory requirement, and improves the performance where the objective is to find the shortest path to the target state. The bi-directional search starts from both the initial and target states and performs a breadth-first search in both directions until a common state is found in the middle. The solution is found by combining the path from the initial state with the inverse of the path from the target state. These brute force methods are useful for relatively simple problems, but as the complexity of the problem rises, the number of states to be considered can become prohibitive. For this reason, heuristic approaches are more appropriate to complex search problems where prior knowledge can be used to direct the search. Heuristic approaches use knowledge of the domain to guide the choice of which nodes to expand next and thus avoid the need for a blind search of all possible states. 
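Before turning to the heuristic methods, here is a minimal sketch of the breadth-first and depth-first strategies just described, run over a small hypothetical state space (the graph and state names are illustrative only).

# Breadth-first expands the shallowest paths first; depth-first follows one
# branch to a dead end or goal and backtracks.
from collections import deque

def breadth_first(graph, start, goal):
    frontier = deque([[start]])                  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()                # expand shallowest path first
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph[state]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

def depth_first(graph, start, goal, path=None):
    path = (path or []) + [start]
    if start == goal:
        return path
    for nxt in graph[start]:
        if nxt not in path:                      # backtrack instead of looping
            found = depth_first(graph, nxt, goal, path)
            if found:
                return found
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"], "G": []}
print(breadth_first(graph, "S", "G"))   # ['S', 'B', 'G'] -- fewest steps
print(depth_first(graph, "S", "G"))     # ['S', 'A', 'C', 'G'] -- first branch explored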
The hill-climbing approach is the simplest heuristic search; this method works by always moving in the direction of the locally steepest ascent toward the goal state. The biggest drawback of this approach is that the local maximum is not always the global maximum and the algorithm can get stuck at a local maximum thus failing to achieve the best results. To overcome this drawback, the best-first approach maintains an open list of nodes that have been identified but not expanded. If a local maximum is encountered, the algorithm moves to the next best node from the open list for expansion. This approach, however, evaluates the next best node purely on the basis of its evaluation of ascent toward the goal without regard to the distance it lies from the initial state. The A* technique goes one step further by evaluating the overall path from the initial state to the goal using the path to the present node combined with the ascent rates to the potential successor nodes. This technique tries to find the optimal path to the goal. A variation on this approach is the beam search in which the open list of nodes is limited to retain only the best nodes, and thereby reduce the memory requirement for the search. The iterative-deepening-A* approach is a further variation in which depth-first searches are completed, a branch at a time, until some threshold measure is exceeded for the branch, at which time it is truncated and the search backtracks to the most recently generated node. A classic example of an AI-search application is computer chess. Over the years, computer chess-playing software has received considerable attention, and such programs are a commercial success for home PCs. In addition, most are aware of the highly visible contest between IBM’s Deep Blue Supercomputer and the reigning World Chess Champion, Garry Kasparov in May 1997. Millions of chess and computing fans observed this event in real-time where, in a dramatic sixth game victory, Deep Blue beat Kasparov. This was the first time a computer has won a match with a current world champion under tournament conditions. Computer chess programs generally make use of standardized opening sequences, and end game databases as a knowledge base to simplify these phases of the game. For the middle game, they examine large trees and perform deep searches with pruning to eliminate branches that are evaluated as clearly inferior and to select the most highly evaluated move. We will explore semantic search in more detail in Chapter 15. Thinking The goal of the Semantic Web is to provide a machine-readable intelligence. But, whether AI programs actually think is a relatively unimportant question, because whether or not "smart" programs "think," they are already becoming useful. Consider, for example, IBM’s Deep Blue. In May 1997, IBM's Deep Blue Supercomputer played a defining match with the reigning World Chess Champion, Garry Kasparov. This was the first time a computer had won a complete match against the world’s best human chess player. For almost 50 years, researchers in the field of AI had pursued just this milestone. Playing chess has long been considered an intellectual activity, requiring skill and intelligence of a specialized form. As a result, chess attracted AI researchers. The basic mechanism of Deep Blue is that the computer decides on a chess move by assessing all possible moves and responses. 
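That mechanism can be illustrated with a depth-limited minimax sketch: assess every move and every response down to a fixed depth, value-rank the resulting positions, and pick the move whose worst-case outcome is best. The toy "game" and evaluation function below are hypothetical stand-ins, not IBM's chess code.

# Depth-limited minimax: alternate between maximizing and minimizing players,
# then value-rank the positions reached at the depth limit.
def minimax(position, depth, maximizing, moves, apply_move, evaluate):
    if depth == 0 or not moves(position):
        return evaluate(position), None          # value-rank the leaf position
    best_move = None
    best_value = float("-inf") if maximizing else float("inf")
    for move in moves(position):
        value, _ = minimax(apply_move(position, move), depth - 1,
                           not maximizing, moves, apply_move, evaluate)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move

# Toy "game": a position is a number, a move adds or subtracts 1 or 2,
# and the evaluation is simply the number itself.
def moves(pos): return [1, 2, -1, -2]
def apply_move(pos, m): return pos + m
def evaluate(pos): return pos

print(minimax(0, 3, True, moves, apply_move, evaluate))   # (2, 2): best value 2 by opening with +2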
Deep Blue can search ahead to a depth of about 14 moves and value-rank the resulting game positions using an algorithm prepared in advance by a team of grandmasters. Did Deep Blue demonstrate intelligence, or was it merely an example of computational brute force? Our understanding of how the mind of a brilliant player like Kasparov works is limited. But indubitably, his "thought" process was something very different from Deep Blue’s. Arguably, Kasparov’s brain works through the operation of each of its billions of neurons carrying out hundreds of tiny operations per second, none of which, in isolation, demonstrates intelligence. One approach to AI is to implement methods using ideas of computer science and logic algebras. The algebra would establish the rules between functional relationships and sets of data structures. A fundamental set of instructions would allow operations including sequencing, branching and recursion within an accepted hierarchy. The preference of computer science has been to develop hierarchies that resolve recursive looping through logical methods. One of the great computer science controversies of the past five decades has been the role of GOTO-like statements. This has arisen again in the context of hyperlinking. Hyperlinking, like GOTO statements, can lead to unresolved conflict loops (see Chapter 12). Nevertheless, logic structures have always appealed to AI researchers as a natural entry point to demonstrate machine intelligence. An alternative to logic methods is to use introspection methods, which observe and mimic human brains and behavior. In particular, pattern recognition seems intimately related to a sequence of unique images with a special linkage relationship. While introspection, or heuristics, is an unreliable way of determining how humans think, introspective methods, when they work, can form effective and useful AI. The success of Deep Blue and chess programming is important because it employs both logic and introspection AI methods. When the opinion is expressed that human grandmasters do not examine 200,000,000 move sequences per second, we should ask, “How do they know?” The answer is usually that human grandmasters are not aware of searching this number of positions, or that they are aware of searching a smaller number of sequences. But then again, as individuals, we are generally unaware of what actually does go on in our minds. Much of the mental computation done by a chess player is invisible to both the player and to outside observers. Patterns in the position suggest what lines of play to look at, and the pattern recognition processes in the human mind seem to be invisible to that mind. However, the parts of the move tree that are examined are consciously accessible. Suppose most of the chess player’s skill actually comes from an ability to compare the current position against images of 10,000 positions already studied. (There is some evidence that this is at least partly true.) We would call selecting the best position (or image) among the 10,000 insightful. Still, if the unconscious human version yields intelligent results, and the explicit algorithmic Deep Blue version yields essentially the same results, then couldn’t the computer and its programming be called intelligent too? For now, the Web consists primarily of a huge number of data nodes (containing text, pictures, movies, and sounds). 
The data nodes are connected through hyperlinks to form "hyper-networks" that can collectively represent complex ideas and concepts above the level of the individual data items. However, the Web does not currently perform many sophisticated tasks with this data. Even considering some of the "intelligent applications" in use today (including intelligent agents, EIP, and Web Services), the Web merely stores and retrieves information. So far, the Web lacks some of the vital ingredients it needs, such as a global database schema, a global error-correcting feedback mechanism, a logic-layer protocol, and universally accepted knowledge bases with inference engines. As a result, we may say that the Web continues to grow and evolve, but it does not learn. While the jury is still out on calling the Web intelligent (and may be for some time), we can still consider ways to change the Web to give it the capability to improve and become more useful (see Chapter 9).

Knowledge Representation and Inference

An important element of AI is the principle that intelligent behavior can be achieved through the processing of symbol structures representing increments of knowledge. This has given rise to knowledge-representation languages that permit the representation and manipulation of knowledge in order to deduce new facts. Knowledge-representation languages must therefore have a well-defined syntax and semantics while supporting inference.

First let’s define the fundamental terms "data," "information," and "knowledge." An item of data is a fundamental element of an application; data can be represented by populations and labels. Information is an explicit association between items of data. Associations are often functional in that they represent a function relating one set of things to another. A rule is an explicit functional association from a set of information things to a resultant information thing; so, in this sense, a rule is knowledge. Knowledge-based systems contain knowledge as well as information and data. The information and data can be modeled and implemented in a database, while knowledge-engineering methodologies address the design and maintenance of the knowledge as well as of the data and information. Logic is used as the formalism for programming languages and databases; it can also be used as a formalism to implement a knowledge methodology. Any formalism that admits a declarative semantics and can be interpreted both as a programming language and as a database language is a knowledge language.

Three well-established techniques have been used for knowledge representation and inference: frames and semantic networks, logic-based approaches, and rule-based systems. Frames and semantic networks, also referred to as slot-and-filler structures, capture declarative information about related objects and concepts where there is a clear class hierarchy and where the principle of inheritance can be used to infer the characteristics of members of a subclass from those of the higher-level class. The two forms of reasoning in this technique are matching (i.e., identification of objects having common properties) and property inheritance, in which properties are inferred for a subclass (a small sketch of this style of reasoning appears below). Because of these representational limitations, frames and semantic networks are generally restricted to representing and reasoning about relatively simple systems. Logic-based approaches use logical formulas to represent more complex relationships among objects and attributes. Such approaches have a well-defined syntax, semantics, and proof theory.
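As promised above, here is a minimal sketch of frame-style matching and property inheritance, using the Desktop/Computer/Machine hierarchy that reappears in the RDF Schema discussion later; the slot names and values are illustrative assumptions.

# Toy frame system: each frame names its parent class ("isa") and its own slots.
frames = {
    "Machine":  {"isa": None,       "slots": {"powered": True}},
    "Computer": {"isa": "Machine",  "slots": {"has_cpu": True}},
    "Desktop":  {"isa": "Computer", "slots": {"portable": False}},
}

def lookup(frame, slot):
    """Property inheritance: walk up the isa chain until the slot is found."""
    while frame is not None:
        if slot in frames[frame]["slots"]:
            return frames[frame]["slots"][slot]
        frame = frames[frame]["isa"]
    return None

def matching(slot, value):
    """Matching: identify all frames sharing a common (possibly inherited) property."""
    return [name for name in frames if lookup(name, slot) == value]

print(lookup("Desktop", "powered"))   # True, inherited from Machine
print(matching("has_cpu", True))      # ['Computer', 'Desktop']

Real frame systems add defaults, multiple inheritance, and procedural attachments, but the core inference is this same walk up the class hierarchy.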
When knowledge is represented with logic formulas, the formal power of logical theorem proving can be applied to derive new knowledge. However, the approach is inflexible and requires great precision in stating the logical relationships. In some cases, common-sense inferences and conclusions cannot be derived, and the approach may be inefficient, especially when dealing with problems that produce large combinations of objects or concepts. Rule-based approaches are more flexible. They allow knowledge to be represented as sets of IF-THEN or other condition-action rules. This approach is more procedural and less formal in its logic; as a result, reasoning can be controlled by a forward- or backward-chaining interpreter. In each of these approaches, the knowledge-representation component (i.e., the problem-specific rules and facts) is separate from the problem-solving and inference procedures.

Resource Description Framework (RDF)

The Semantic Web is built on syntaxes that use the Uniform Resource Identifier (URI) to represent data in triple-based structures using the Resource Description Framework (RDF) (see Chapter 7). A URI is a Web identifier that begins with a scheme such as "http:" or "ftp:". The syntax of URIs is governed by the IETF, which publishes the general URI specification; the W3C maintains a list of URI schemes. In an RDF document, assertions are made about particular things having properties with certain values. This structure turns out to be a natural way to describe the vast majority of the data processed by machines. Subject, predicate, and object are each identified by a URI (the object may also be a literal value). The RDF triples form webs of information about related things, and because RDF uses URIs to encode this information, the concepts are not just words in a document but are tied to a unique definition. All the triples together form a directed graph whose nodes and arcs are labeled with qualified URIs.

The RDF model is very simple and uniform. The only vocabulary is URIs, which allows the same URI to be used both as a node and as an arc label. This makes self-reference and reification possible, just as in natural languages. That is valuable in a user-oriented context (like the Web), but difficult for knowledge-based systems and inference engines to cope with.

Once information is in RDF form, data becomes easier to process. We illustrate an RDF document in Example 6-1; this piece of RDF basically says that a book has the title "e-Video: Producing Internet Video" and was written by "H. Peter Alesso."

Example 6-1. Listing 6-1: Sample RDF/XML describing the book "e-Video: Producing Internet Video" by H. Peter Alesso (a representative rendering of this listing appears at the end of this section).

The benefit of RDF is that the information maps directly and unambiguously to a decentralized model that differentiates the semantics of the application from any additional syntax. In addition, XML Schema restricts the syntax of XML applications, and using it in conjunction with RDF may be useful for creating some datatypes. The goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain and does not define the semantics of any application. RDF models may be used to address and reuse components (software engineering), to handle problems of schema evolution (databases), and to represent knowledge (artificial intelligence). However, modeling metadata in a completely domain-independent fashion is difficult. How successful RDF will be in automating activities over the Web is an open question.
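The following sketch gives a representative rendering of the Example 6-1 description and shows how, once the data is in triple form, it can be processed uniformly; the subject URI, the Dublin Core vocabulary, and the use of the rdflib Python library are illustrative assumptions rather than details taken from the original listing.

from rdflib import Graph

# A representative RDF/XML rendering of Example 6-1 (URIs and namespaces assumed).
EXAMPLE_6_1 = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/books/e-video">
    <dc:title>e-Video: Producing Internet Video</dc:title>
    <dc:creator>H. Peter Alesso</dc:creator>
  </rdf:Description>
</rdf:RDF>
"""

g = Graph()
g.parse(data=EXAMPLE_6_1, format="xml")

# Every assertion is a (subject, predicate, object) triple.
for subject, predicate, obj in g:
    print(subject, predicate, obj)

# Once in triple form, the data can be queried uniformly, for example with SPARQL.
query = """
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    SELECT ?title ?creator
    WHERE { ?book dc:title ?title ; dc:creator ?creator . }
"""
for row in g.query(query):
    print(f"'{row.title}' was written by {row.creator}")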
Nevertheless, if RDF could provide a standardized framework for most major Web sites and applications, it could bring significant improvements in automating Web-related activities and services (see Chapter 11). If some of the major sites on the Web incorporate semantic modeling through RDF, it could enable more sophisticated searching capabilities over these sites (see Chapter 15). We will return to a detailed presentation of RDF in Chapter 7.

RDF Schema

The first "layer" of the Semantic Web is the simple data-typing model called a schema. A schema is simply a document that defines another document; it is a master checklist or grammar definition. RDF Schema was designed to be a simple data-typing model for RDF. Using RDF Schema, we can say that "Desktop" is a type of "Computer" and that "Computer" is a subclass of "Machine." We can also create properties and classes, as well as ranges and domains for properties. All of the terms for RDF Schema start with the namespace http://www.w3.org/2000/01/rdf-schema# .

The three most important RDF concepts are "Resource" (rdfs:Resource), "Class" (rdfs:Class), and "Property" (rdf:Property). These are all "classes," in that terms may belong to them. For example, all terms in RDF are types of resource. To declare that something is a "type" of something else, we just use the rdf:type property:

rdfs:Resource rdf:type rdfs:Class .
rdfs:Class rdf:type rdfs:Class .
rdf:Property rdf:type rdfs:Class .
rdf:type rdf:type rdf:Property .

This means "Resource is a type of Class, Class is a type of Class, Property is a type of Class, and type is a type of Property." We will return to a detailed presentation of RDF Schema in Chapter 7.

Ontology

A program that wants to compare information across two databases has to know when two terms are being used to mean the same thing. Ideally, the program should have a way to discover common meanings for whatever databases it encounters. A solution to this problem is provided by the Semantic Web in the form of collections of information called ontologies. Artificial-intelligence and Web researchers use the term ontology for a document that defines the relations among terms. A typical ontology for the Web includes a taxonomy with a set of inference rules.

Ontology and Taxonomy

We can express an ontology as:

Ontology = <taxonomy, inference rules>

and a taxonomy as:

Taxonomy = <{classes}, {relations}>

The taxonomy defines classes of objects and relations among them. For example, an address may be defined as a type of location, and city codes may be defined to apply only to locations, and so on. Classes, subclasses, and relations among entities are important tools: we can express a large number of relations among entities by assigning properties to classes and allowing subclasses to inherit such properties.

Inference rules in ontologies supply further power. An ontology may express the rule "If a city code is associated with a state code, and an address uses that city code, then that address has the associated state code." A program could then readily deduce, for instance, that an MIT address, being in Cambridge, must be in Massachusetts, which is in the U.S., and therefore should be formatted to U.S. standards (a small sketch of this kind of rule appears below). The computer doesn't actually "understand" any of this, but it can manipulate the terms in a meaningful way. The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information, and exchange the results.
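As promised above, here is a minimal sketch of that city-code rule; the codes, the address, and the function name are illustrative assumptions rather than anything from the original text.

# Rule: if a city code is associated with a state code, and an address uses
# that city code, then the address has the associated state code.
city_code_to_state = {"CAMB": "MA", "PALO": "CA"}           # city code -> state code
address_to_city_code = {"77 Massachusetts Ave": "CAMB"}     # address -> city code

def infer_state(address):
    """Apply the rule to deduce the state code for an address."""
    city_code = address_to_city_code.get(address)
    return city_code_to_state.get(city_code)

print(infer_state("77 Massachusetts Ave"))  # -> 'MA': a Cambridge address is in Massachusetts

Chaining further rules (Massachusetts is in the U.S.; U.S. addresses use U.S. formatting) yields the full deduction described in the text.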
The effectiveness of software agents will increase exponentially as more machine-readable Web content and automated services become available. The Semantic Web promotes this synergy: even agents that were not expressly designed to work together can transfer semantic data. The Semantic Web will provide the foundations and the framework to make such technologies more feasible.

Web Ontology Language (OWL)

In 2003, the W3C began final unification of the disparate ontology efforts into a standard ontology language called the Web Ontology Language (OWL). OWL is a vocabulary extension of RDF, and it is evolving into the semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL facilitates greater machine readability of Web content than XML, RDF, and RDF Schema support, by providing additional vocabulary along with formal semantics. OWL comes in three increasingly expressive sublanguages: OWL Lite, OWL DL, and OWL Full. By offering three flavors, OWL hopes to attract a broad following. We will return to a detailed presentation of OWL in Chapter 8.

Inference

A rule describes a conclusion that one may draw from a premise; it can be a statement processed by an engine or machine that makes an inference from a given generic rule. The principle of "inference" is to derive new knowledge from knowledge we already have. In a mathematical sense, querying is a form of inference, and inference is one of the supporting principles of the Semantic Web.

For two applications to talk to each other and process XML data, the two parties must first agree on a common syntax for their documents; only after reengineering their documents to that syntax can the exchange happen. Using the RDF/XML model, however, two parties may communicate with different syntaxes using the concept of equivalencies. For example, in RDF/XML we could say "car" and specify that it is equivalent to "automobile." We can see how the system could scale: merging databases becomes recording in RDF that "car" in one database is equivalent to "automobile" in a second database. Indeed, this is already possible with Semantic Web tools such as CWM ("Closed World Machine"), a Python program. Unfortunately, deeper levels of inference can only be provided by "First-Order Predicate Logic" (FOPL) languages, and OWL is not entirely a FOPL language.

First-Order Logic (FOL) is a general-purpose representation language that is based on an ontological commitment to the existence of objects and relations. FOL makes it easy to state facts about categories, either by relating objects to the categories or by quantifying over them. In FOPL languages, a predicate is a feature of the language used to make a statement about something, or to attribute a property to that thing. Unlike propositional logics, in which specific propositional operators are identified and treated, predicate logic uses arbitrary names for predicates and relations, which have no specific meaning until the logic is applied. Though predicates are one of the features that distinguish first-order predicate logic from propositional logic, they are really the extra structure necessary to permit the study of quantifiers. The two important features of natural language whose logic is captured in the predicate calculus are the terms "every" and "some" and their synonyms; their analogues in formal logic are the universal and existential quantifiers.
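In standard predicate-calculus notation (a textbook illustration, not taken from the original text), the two quantifiers read:

\forall x\, (P(x) \rightarrow Q(x)) \quad \text{(every } P \text{ is a } Q\text{)}

\exists x\, (P(x) \wedge Q(x)) \quad \text{(some } P \text{ is a } Q\text{)}

The elementary syllogism quoted just below is the universal form instantiated at a particular individual.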
These quantified expressions refer to one or more individuals or things, which are not propositions, and they therefore force some analysis of the internal structure of "atomic" propositions. The simplest such logic is classical, or Boolean, first-order logic; "classical" or "Boolean" signifies that propositions are either true or false. First-order logic permits reasoning about propositional connectives and also about quantification ("all" or "some"). An elementary example of such inference is as follows:

All men are mortal.
John is a man.
The conclusion: John is mortal.

Application of inference rules provides powerful logical deductions. With ontology pages on the Web, solutions to terminology problems begin to emerge. The definitions of terms, vocabularies, or XML codes used on a Web page can be defined by pointers from the page to an ontology. Different ontologies need to provide equivalence relations (establishing where vocabularies share the same meaning); otherwise there will be conflict and confusion.

Software Agents

Many automated Web Services already exist without semantics, but other programs, such as agents, have no way to locate one that will perform a specific function. This process, called service discovery, can happen only when there is a common language to describe a service in a way that lets other agents understand both the function offered and the way to take advantage of it. Services and agents can advertise their functions by depositing descriptions in directories similar to the Yellow Pages. Some low-level service-discovery schemes are currently available, but the Semantic Web is more flexible by comparison. Consumer and producer agents can reach a shared understanding by exchanging ontologies, which provide the vocabulary needed for discussion. Agents can even bootstrap new reasoning capabilities when they discover new ontologies. Semantics also make it easier to take advantage of a service that only partially matches a request.

An intelligent agent is a computer system, situated in some environment, that is capable of autonomous action and learning in that environment in order to meet its design objectives. Intelligent agents can have the following characteristics: reactivity (they perceive their environment and respond to it), pro-activeness (they exhibit goal-directed behavior), and social ability (they interact with other agents). Real-time intelligent-agent technology offers a powerful Web tool. Agents are able to act without the intervention of humans or other systems: they have control both over their own internal state and over their behavior.

In complex domains, agents must be prepared for the possibility that their actions will fail; such environments are called non-deterministic. Normally, an agent will have a repertoire of actions available to it. This set of possible actions represents the agent's capability to modify its environment. For example, the action "purchase a house" will fail if insufficient funds are available to do so. Actions therefore have pre-conditions associated with them, which define the possible situations in which they can be applied. The key problem facing an agent is deciding which of its actions it should perform to satisfy its design objectives. Agent architectures are really software architectures for decision-making systems that are embedded in an environment. The complexity of the decision-making process can be affected by a number of environmental properties, such as:

• Accessible vs. inaccessible
• Deterministic vs. non-deterministic
• Episodic vs. non-episodic
• Static vs. dynamic
• Discrete vs. continuous

The most complex general class of environment is one that is inaccessible, non-deterministic, non-episodic, dynamic, and continuous.

Trust and Proof

The next step in the architecture of the Semantic Web is trust and proof. If one person says that x is blue, and another says that x is not blue, will the Semantic Web face a logical contradiction? The answer is no, because applications on the Semantic Web generally depend upon context, and applications in the future will contain proof-checking mechanisms and digital signatures.

Semantic Web Capabilities and Limitations

The Semantic Web promises to make Web content machine-understandable, allowing agents and applications to access a variety of heterogeneous resources, process and integrate the content, and produce added value for the user. It aims to provide an extra machine-understandable layer that will considerably simplify the programming and maintenance effort for knowledge-based Web Services. Current technology at research centers allows many of the functionalities the Semantic Web promises: software agents accessing and integrating content from distributed heterogeneous Web resources. However, these applications are really ad hoc solutions using wrapper technology. A wrapper is a program that accesses an existing Website and extracts the needed information. Wrappers are screen scrapers in the sense that they parse the HTML source of a page, using heuristics to localize and extract the relevant information. Not surprisingly, wrappers have high construction and maintenance costs, since much testing is needed to guarantee robust extraction, and each time the Website changes, the wrapper has to be updated accordingly.

The main power of Semantic Web languages is that anyone can create one, simply by publishing RDF triples with URIs. We have already seen that RDF Schema and OWL are very powerful languages. One of the main challenges the Semantic Web community faces in the construction of innovative, knowledge-based Web Services is to reduce the programming effort while keeping the Web-preparation task as small as possible. The Semantic Web’s success or failure will be determined by solving the following:

• The availability of content.
• Ontology availability, development, and evolution.
• Scalability – making Semantic Web content storage and search scalable.
• Multilinguality – supporting information in several languages.
• Visualization – intuitive presentation of Semantic Web content.
• Stability of Semantic Web languages.

Conclusion

In this chapter, we provided an introduction to the Semantic Web and discussed its background and potential. By laying out a roadmap for its likely development, we described the essential stepping stones, including knowledge representation, inference, ontology, and search. We also discussed several supporting semantic layers of the Markup Language Pyramid: the Resource Description Framework (RDF) and the Web Ontology Language (OWL). In addition, we discussed using RDF and OWL to support software agents, Semantic Web Services, and semantic search.

[1] MIT's Project Oxygen is developing technologies to enable pervasive, human-centered computing and information-technology services. Oxygen's user technologies include speech and vision technologies that enable communication with Oxygen as if interacting directly with another person, saving much time and effort.
Automation, individualized knowledge access, and collaboration technologies will be used to perform a wide variety of automated, cutting-edge tasks.

  • Henry Gallant and the Warrior | H Peter Alesso

    Excerpt from book 3 of the Henry Gallant Saga, Henry Gallant and the Warrior.

Going Up

1

Lieutenant Henry Gallant plodded along the cobblestone streets of New Annapolis—head down, mind racing . . . My orders say take command of the Warrior immediately . . . but no promotion . . . Why not? He pondered the possibilities, but he already knew the answer. Though he had steely gray eyes, a square jaw, and was taller than nearly everyone around him, what distinguished him most was not visible to the naked eye—he was a Natural—born without genetic engineering. Is this my last chance to prove myself? By the time he reached the space elevator, the welcoming breeze of the clear brisk morning had brightened his mood and he fell into line behind the shipyard personnel without complaint. Looking up, he marveled: That cable climbs into the clouds like an Indian rope trick. When it was his turn at last, the guard scanned his comm pin against the access manifest. The portal light blinked red. “Pardon, sir. Access denied,” said the grim-faced sentry. “Call the officer of the guard,” demanded Gallant. The officer of the guard appeared but was no more inclined to pass Gallant through than the sentry was. The guard touched the interface panel and made several more entries, but the portal continued to blink red. “There’s a hold on your access, sir.” Trouble already? Gallant thought. Then he asked, “A hold?” “Yes, sir. Your clearance and authorization are in order, but SIA has placed a hold on your travel. They want you to report to SIA headquarters, A.S.A.P.” “I need to go to the shipyard and attend to important business before going to the Solar Intelligence Agency,” clarified Gallant, but even as he said it, he knew it wouldn’t help. “Sorry, sir. Orders.” Gallant noticed the four gold stripes of a captain’s sleeve. The officer was waiting to take the next elevator. “Captain?” he said, hailing the man before he recognized him. Captain Kenneth Caine of the Repulse marched to the guard post, frowning. “What can I do for you, Gallant?” Of all the luck, he thought. Caine was the last person he wanted to impose upon, but it was too late now. Several uncomfortable moments passed with the three of them standing there—Caine, Gallant, and the officer of the guard—staring at each other, waiting for someone to break the silence. Finally, Gallant addressed Caine: “Well, sir, I’ve received orders to take command of the Warrior, but apparently all the T’s haven’t been crossed and my shipyard access has a hold from SIA.” Caine’s frown deepened. Gallant turned to the officer of the guard and said, “Is it possible to allow me to go to my ship and complete my business? I’ll report to SIA immediately afterward.” The officer of the guard fidgeted and squirmed. He understandably did not like being placed in such a position while under the scrutiny of a full captain. Caine shrugged. Gallant was puzzled for a moment, wondering how to win Caine’s support. He tried the officer of the guard again: “Perhaps you could send a message to SIA headquarters stating that you informed me of my requirement to report and that I agreed to attend this afternoon after I assume command of my ship. I’ll initial it.” Caine nodded. The guard brightened visibly. “That should be acceptable, sir.” He made a few entries into his interface panel and the portal finally blinked green. Gallant stepped through the gate and joined Caine.
Together they walked to the elevator doors and mingled with the group waiting for the next available car. “Thank you for your help, captain,” said Gallant. “I’m sorry to have troubled you.” Caine merely nodded. Unwilling to miss the opportunity to reconnect with his former commanding officer, Gallant asked, “How’ve you been, sir?” Caine’s frown returned. “Well, personally, it’s been quite a trial . . .” Gallant resisted the temptation to coax him onward. After a minute, Caine revealed, “I lost a lot of shipmates during the last action.” He sighed and took a moment to silently mourn their passing. “I’m sorry, sir,” said Gallant, who was sensitive to the prickling pain in Caine’s voice. Gallant then took a long look at the senior officer. He recalled a mental image of his former commanding officer—solidly built and squared shouldered with a crew-cut and a craggy face. In contrast, the man before him now was balding and flabby, with a puffy face and deep frown lines. “Humph,” grumbled Caine, recognizing Gallant’s critical stare. “You’ve changed too. You’re no longer the lanky callow midshipman who reported aboard the Repulse nearly five years ago.” “Thank you, sir,” said Gallant, breaking into an appreciative smile. Caine returned the smile and, warming to the conversation, he said, “We had a few good times back then—and a few victories as well—a good ship, a good crew.” A minute passed before Caine added, “As for the Repulse—she’s suffered along with her crew . . . perhaps more than her fair share. As you know, she’s been in the forefront of battle since the beginning of the war, but when the Titans attacked Jupiter Station earlier this year, we took a terrible beating—along with the rest of the fleet.” Caine’s face went blank for a few seconds as he relived the event. “The Titans used nuclear weapons to bombard the colonies. The loss of life was staggering. Jupiter’s moons are now lifeless, scorched rocks. The colonists fled on whatever transport they could find and they’re now in the refugee camp on the outskirts of this city,” said Caine. Then, trying to sound optimistic but unable to hide his concern, he added, “We gave the Titans some lumps as well. It’ll be some time before they can trouble us on this side of the asteroid belt.” “So I understand, sir.” SWOOSH! BAM! The elevator car doors opened with a loud bang. Caine stepped inside. Gallant grabbed the strap and buckled himself into the adjacent acceleration couch. A powerful engine pulled the glass-encased car along a ribbon cable anchored to the planet’s surface and extended to the Mars space station in geostationary orbit. A balance of forces kept the cable under tension while the elevator ascended—gravity at the lower end and the centripetal force of the station at the upper end. The tiny vehicle accelerated swiftly to seven g’s and reached orbit in less than ten minutes before braking to docking speed. Gallant enjoyed a spectacular view as the car sped through the clouds. Below him was the receding raw red and brown landscape of Mars spread over the planet’s curvature; above him was one of man’s most ambitious modern structures—a space station, replete with a shipyard that housed the newest space vessels under construction, including Gallant’s new command, the Warrior, as well as ships in need of repair, including the Repulse. Gallant tried to pick out his minute ship against the much larger battle cruisers nested near it, but the rotation of the station hid it from view. “Repulse is completing extensive repairs.
She’ll be back in action before long. I have a fierce loyalty to my ship and I know she’ll acquit herself well, no matter what comes,” said Caine. “I’m sure she will, sir,” said Gallant. “I haven’t congratulated you on your first command yet,” Caine said, extending his hand. “You’ve earned it.” “Thank you, sir,” said Gallant, shaking hands, while a thought flashed through his mind, If I earned command, why wasn’t I promoted? “Do you have any idea of your first assignment yet?” “No, sir. It could be almost anything,” said Gallant, but he was thinking, Probably involves the Warrior’s special capabilities. Caine said, “At least you’ll get a chance to strike the enemy.” Gallant said, “We still know so little about the aliens’ origins or intentions. Since they’ve taken Jupiter, they’ve expanded their bases from the satellites of the outer planets. They’ve also penetrated into the asteroids. That puts them in a position to launch raids here.” Caine said, “I once asked you, ‘What’s the single most important element in achieving victory in battle?’” “Yes, sir, and my answer is the same: surprise.” “Yes,” Caine said, “but to achieve surprise, it’s essential for us to gather more intelligence.” “I agree, sir.” “Tell me, Gallant,” Caine said, as he shifted position, “are you aware there are many people who hold you in contempt? They still doubt that a Natural can serve in the fleet.” Gallant grimaced. “I’ve always done my duty to the best of my ability, sir.” “And you have done admirably, from what I know of your actions, but that hasn’t fazed some. I’ve heard little about your last mission.” “I can’t discuss that mission, sir. It’s been classified as need-to-know under a special compartment classification,” said Gallant, as he thought, I wish I could tell you about the AI berserker machine. I can only imagine what’s in store for the Warrior. “Nevertheless, I’ve heard that Anton Neumann was much praised for that mission. He was promoted to full commander and given the cruiser Achilles, though I wouldn’t be surprised if his father’s influence played a role in that.” Gallant said nothing, but stared down at his shoes. Neumann always wins. Caine grunted and then said, “Neither of us is in good standing with Anton’s father.” Caine and Gallant had previously run afoul of Gerome Neumann, President of NNR Shipping and Mining Inc., and an industrial and government powerbroker. Gallant nodded. Upon arriving at the space station platform, the elevator car doors opened automatically and once again banged loudly. SWOOSH! BAM! A long, enclosed tunnel formed the central core of the station with twenty-four perpendicular arms that served as docking piers. The tunnel featured many windows and access ports to reach the twenty-four ships that extended from the docking arms. The two men chatted about the war news while they rode a tram along the tunnel causeway. Finally, Gallant left Caine at the Repulse and continued to his new command. A swarm of workmen buzzed along the Warrior’s scaffolding, cranes hauled machinery to and fro, and miscellaneous gear lay haphazardly about. An infinite amount of preparation was under way, servicing the ship in anticipation of her departure. Gallant gaped . . . There she is. He leaned forward to take in every line and aspect of the ship. Despite the distractions, he saw the ship as a thing of exquisite beauty.
The Warrior featured a smooth, rocket-shaped hull, and while she was smaller than her battle-cruiser neighbors, she weighed thirty thousand tons with an overall length of one hundred and twenty meters and a beam of forty meters. She was designed with stealth capability, so she emitted no detectable signals and remained invisible until her power supply required recharging. Her armament included a FASER cannon, several short-range plasma weapons, and several laser cannons. She was equipped with an armor belt and force shield plus electronic warfare decoys and sensors. The ship’s communications, navigation, FTL propulsion, and AI computer were all state-of-the-art. The crew of 126 officers and men was highly trained and already on board. When the Warrior traveled through the unrelenting and unforgiving medium of space, she would serve as the crew’s heartfelt home. The brief, relaxed sense of freedom that Gallant had enjoyed between deployments was coming to an end; his shoulders tightened in anticipation. He stepped onto the enclosed gangplank and saluted the flag that was displayed on the bow. Then he saluted the officer of the watch and asked, “Request permission to come aboard, sir?” “Permission granted, sir,” said Midshipman Gabriel in a gravelly voice that was totally at odds with his huge grin, dimpled cheeks, and boyish freckled face. Was I ever that young? thought Gallant before he recalled he was only a few years older. Boarding the ship, Gallant’s eyes widened as he sought to drink everything in. He was impressed by the innovative technologies that had been freshly installed. The novelty of his role on this ship was not lost on him. Upon reaching the bridge, he ordered Gabriel to use the ship’s intercom to call the crew to attention. “All officers, report to the bridge!” Gabriel ordered. When the officers had gathered around him a minute later, he said, “All hands, attention!” Drawn together on every deck, the crew stopped their work, came to attention, and listened. Gallant recited his orders, “Pursuant to fleet orders, I, Lieutenant Henry Gallant, assume command of the United Planets ship Warrior, on this date at the Mars Space Station.” He continued reciting several more official paragraphs, but from that moment forward, the Warrior was a member of the United Planets’ fleet and Gallant was officially her commanding officer. With the formal requirements concluded, Gallant spoke over the address system: “At ease. Officers and crew of the Warrior, I’m proud to serve with you. I look forward to getting to know each one of you. For now, we must outfit this ship and prepare to do our job as part of the fleet. There are battles to be fought, a war to win, and the Warrior has a key role to play.” Satisfied with his brief statement, Gallant nodded to Gabriel. Over the address system Gabriel announced, “Attention! All hands dismissed! Return to your regular duties.” Gallant stood before the officers on the bridge, addressed each by name and shook their hands, starting with the executive officer and then the department heads (operations, engineering, and weapons), followed by the junior officers. His first impression was that they were an enthusiastic and professional group. “I will provide prioritized work items for each of you to address in the next few days as we prepare for our upcoming shakedown cruise. For now, you can return to your duties. Thank you.” Gallant entered the Combat Information Center and pulled on a neural interface to the ship’s AI.
The dozens of delicate silicon probes touched his scalp at key points. They sensitively picked up wave patterns emanating from his thoughts and allowed him to communicate with the AI directly. Gallant formed a mental image of the Warrior's interior. While Gallant could use the interface to evaluate the ship’s condition, the ship’s controls remained manual. He hashed out his priorities for his department heads to work on and sent them messages. He ordered them to address the myriad items he had been mentally considering for hours. While he would have liked to have a discussion with each officer individually, that would simply have to wait. It was time to get back to the space elevator. Gallant frowned in frustration at being pulled away by his appointment: I’d better hustle to SIA.
