Published Titles

Computing and the National Science Foundation, 1950–2016: Building a Foundation for Modern Computing

By: Peter A. Freeman, W. Richards Adrion, William Aspray
ISBN: 978-1-4503-7276-3
DOI: 10.1145/3336323
Table of Contents

This organizational history relates the role of the National Science Foundation (NSF) in the development of modern computing. Drawing upon new and existing oral histories, extensive use of NSF documents, and the experience of two of the authors as senior managers, this book describes how NSF’s programmatic activities originated and evolved to become the primary source of funding for fundamental research in computing and information technologies.

The book traces how NSF’s support has provided facilities and education for computing usage by all scientific disciplines, aided in institution and professional community building, supported fundamental research in computer science and allied disciplines, and led the efforts to broaden participation in computing by all segments of society.

Today, the research and infrastructure facilitated by NSF computing programs are significant economic drivers of American society and industry. For example, NSF supported work that led to the first widely used web browser, Netscape; sponsored the creation of algorithms at the core of the Google search engine; facilitated the growth of the public Internet; and funded research on the scientific basis for countless other applications and technologies. NSF has advanced the development of human capital and ideas for future advances in computing and its applications.

This account is the first comprehensive coverage of NSF’s role in the extraordinary growth and expansion of modern computing and its use. It will appeal to historians of computing, policy makers and leaders in government and academia, and individuals interested in the history and development of computing and the NSF.

Providing Sound Foundations for Cryptography: On the Work of Shafi Goldwasser and Silvio Micali

By: Oded Goldreich
ISBN: 978-1-4503-7266-4
DOI: 10.1145/3335741
Table of Contents

Cryptography is concerned with the construction of schemes that withstand any abuse. A cryptographic scheme is constructed so as to maintain a desired functionality, even under malicious attempts aimed at making it deviate from its prescribed behavior. The design of cryptographic systems must be based on firm foundations; ad hoc approaches and heuristics are a dangerous way to go. These foundations were developed mostly in the 1980s, in works that are all co-authored by Shafi Goldwasser and/or Silvio Micali. These works have transformed cryptography from an engineering discipline, lacking sound theoretical foundations, into a scientific field possessing a well-founded theory, which influences practice as well as contributes to other areas of theoretical computer science.

This book celebrates these works, which were the basis for bestowing the 2012 A.M. Turing Award upon Shafi Goldwasser and Silvio Micali. A significant portion of the book reproduces some of these works, and another portion consists of scientific perspectives by some of their former students. A highlight of the book is the set of chapters that allow readers to meet Shafi and Silvio in person: interviews with them, their biographies, and their Turing Award lectures.

Concurrency: The Works of Leslie Lamport

By: Dahlia Malkhi
ISBN: 978-1-4503-7270-1
DOI: 10.1145/3335772
Table of Contents

This book is a celebration of Leslie Lamport’s work on concurrency, interwoven with four and a half decades of an evolving industry: from the introduction of the first personal computer to an era when parallel and distributed multiprocessors are abundant. His works lay formal foundations for concurrent computations executed by interconnected computers. Some of the algorithms have become standard engineering practice for fault-tolerant distributed computing – distributed systems that continue to function correctly despite failures of individual components. He also developed a substantial body of work on the formal specification and verification of concurrent systems, and has contributed to the development of automated tools applying these methods.

Part I consists of technical chapters and a biography. The technical chapters present a retrospective on Lamport’s original ideas from experts in the field, portraying their long-lasting impact through this lens. The chapters cover timeless notions Lamport introduced: the Bakery algorithm, atomic shared registers and sequential consistency; causality and logical time; Byzantine Agreement; state machine replication and Paxos; and the temporal logic of actions (TLA). The professional biography tells of Lamport’s career, provides the context in which his work arose and broke new ground, and discusses LaTeX – perhaps Lamport’s most influential contribution outside the field of concurrency. This chapter gives a voice to the people behind the achievements, notably Lamport himself and the colleagues around him who inspired him, collaborated with him, and helped him drive worldwide impact. Part II consists of a selection of Leslie Lamport’s most influential papers.
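To give a flavor of the first of those notions, here is a toy Python sketch of Lamport's Bakery algorithm for mutual exclusion. This is an illustration, not material from the book: real implementations depend on sequentially consistent memory, which CPython's global interpreter lock happens to approximate here.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits resolve quickly

N = 3            # number of competing threads
ITERS = 200      # critical-section entries per thread
choosing = [False] * N
number = [0] * N
counter = 0      # shared state protected only by the bakery lock

def lock(i):
    # Phase 1: take a ticket larger than every ticket currently held.
    choosing[i] = True
    number[i] = 1 + max(number)
    choosing[i] = False
    # Phase 2: wait until every thread with a smaller (ticket, id) pair
    # has left the critical section.
    for j in range(N):
        while choosing[j]:
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def unlock(i):
    number[i] = 0

def worker(i):
    global counter
    for _ in range(ITERS):
        lock(i)
        counter += 1  # critical section: unsynchronized read-modify-write
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # equals N * ITERS when mutual exclusion holds
```

The algorithm needs no atomic test-and-set instruction: ticket numbers alone order the threads, which is exactly what made it a landmark result.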

This book touches on a lifetime of contributions by Leslie Lamport to the field of concurrency and on the extensive influence he has had on people working in the field. It will be of value to historians of science, and to researchers and students who work in the area of concurrency and are interested in reading about the work of one of the most influential researchers in this field.

The Essentials of Modern Software Engineering: Free the Practices from the Method Prisons

By: Ivar Jacobson, Harold “Bud” Lawson, Pan-Wei Ng, Paul E. McMahon, Michael Goedicke
ISBN: 978-1-94748-724-6
DOI: 10.1145/3277669
Table of Contents

The first course in software engineering is the most critical. Education must start from an understanding of the heart of software development, from familiar ground that is common to all software development endeavors. This book is an in-depth introduction to software engineering that uses a systematic, universal kernel to teach the essential elements of all software engineering methods.

This kernel, Essence, is a vocabulary for defining methods and practices. Essence was envisioned and originally created by Ivar Jacobson and his colleagues, developed by Software Engineering Method and Theory (SEMAT), and approved by the Object Management Group (OMG) as a standard in 2014. Essence is a practice-independent framework for thinking and reasoning about the practices we have and the practices we need. Essence establishes a shared and standard understanding of what is at the heart of software development. It is agnostic to any particular method, lifecycle-independent, programming-language-independent, concise, scalable, extensible, and formally specified. Essence frees the practices from their method prisons.

The first part of the book describes Essence: the essential elements to work with, the essential things to do, and the essential competencies you need when developing software. The other three parts describe increasingly advanced use cases of Essence. Using real but manageable examples, the book covers the fundamentals of Essence and the innovative use of serious games to support software engineering. It also explains how current practices such as user stories, use cases, Scrum, and microservices can be described using Essence, and illustrates how their activities can be represented using the Essence notions of cards and checklists. The fourth part of the book offers a vision of how Essence can be scaled to support large, complex systems engineering.

Essence is supported by an ecosystem developed and maintained by a community of experienced people worldwide. From this ecosystem, professors and students can select what they need and create their own way of working, thus learning how to create ONE way of working that matches the particular situation and needs.

Data Cleaning

By: Ihab F. Ilyas, Xu Chu
ISBN: 978-1-97000-144-0
DOI: 10.1145/3310205
Table of Contents

Data quality is one of the most important problems in data management, since dirty data often leads to inaccurate data analytics results and incorrect business decisions. Poor data across businesses and the U.S. government is reported to cost trillions of dollars a year. Multiple surveys show that dirty data is the most common barrier faced by data scientists. Not surprisingly, developing effective and efficient data cleaning solutions is challenging and is rife with deep theoretical and engineering problems.

This book is about data cleaning, the term used to refer to all of the tasks and activities involved in detecting and repairing errors in data. Rather than focus on a particular data cleaning task, we give an overview of the end-to-end data cleaning process, describing various error detection and repair methods, and attempt to anchor these proposals with multiple taxonomies and views. Specifically, we cover four of the most common and important data cleaning tasks, namely, outlier detection, data transformation, error repair (including imputing missing values), and data deduplication. Furthermore, due to the increasing popularity and applicability of machine learning techniques, we include a chapter that specifically explores how machine learning techniques are used for data cleaning, and how data cleaning is used to improve machine learning models.

This book is intended to serve as a useful reference for researchers and practitioners who are interested in the area of data quality and data cleaning. It can also be used as a textbook for a graduate course. Although we aim to cover state-of-the-art algorithms and techniques, we recognize that data cleaning is still an active field of research and therefore provide future directions of research whenever appropriate.

Hardness of Approximation Between P and NP

By: Aviad Rubinstein
ISBN: 978-1-947287-20-4
DOI: 10.1145/3241304
Table of Contents

Nash equilibrium is the central solution concept in Game Theory. Since Nash’s original paper in 1951, it has found countless applications in modeling strategic behavior of traders in markets, (human) drivers and (electronic) routers in congested networks, nations in nuclear disarmament negotiations, and more. A decade ago, the relevance of this solution concept was called into question by computer scientists, who proved (under appropriate complexity assumptions) that computing a Nash equilibrium is an intractable problem. And if centralized, specially designed algorithms cannot find Nash equilibria, why should we expect distributed, selfish agents to converge to one? The remaining hope was that at least approximate Nash equilibria can be efficiently computed.

Understanding whether there is an efficient algorithm for approximate Nash equilibrium has been the central open problem in this field for the past decade. In this book, we provide strong evidence that even finding an approximate Nash equilibrium is intractable. We prove several intractability theorems for different settings (two-player games and many-player games) and models (computational complexity, query complexity, and communication complexity). In particular, our main result is that under a plausible and natural complexity assumption (“Exponential Time Hypothesis for PPAD”), there is no polynomial-time algorithm for finding an approximate Nash equilibrium in two-player games.

The problem of approximate Nash equilibrium in a two-player game poses a unique technical challenge: it is a member of the class PPAD, which captures the complexity of several fundamental total problems, i.e., problems that always have a solution; and it also admits a quasipolynomial time algorithm. Either property alone is believed to place this problem far below NP-hard problems in the complexity hierarchy; having both simultaneously places it just above P, at what can be called the frontier of intractability. Indeed, the tools we develop in this book to advance on this frontier are useful for proving hardness of approximation of several other important problems whose complexity lies between P and NP: Brouwer’s fixed point, market equilibrium, CourseMatch (A-CEEI), densest k-subgraph, community detection, VC dimension and Littlestone dimension, and signaling in zero-sum games.

Conversational UX Design

By: Robert J. Moore, Raphael Arar
ISBN: 978-1-4503-6301-3
DOI: 10.1145/3304087
Table of Contents

With recent advances in natural language understanding techniques and far-field microphone arrays, natural language interfaces, such as voice assistants and chatbots, are emerging as a popular new way to interact with computers. They have made their way out of industry research labs and into the pockets, desktops, cars, and living rooms of the general public. But although such interfaces recognize bits of natural language, and even voice input, they generally lack conversational competence: the ability to engage in natural conversation. Today’s platforms provide sophisticated tools for analyzing language and retrieving knowledge, but they fail to provide adequate support for modeling interaction. The user experience (UX) designer or software developer must figure out how a human conversation is organized, usually relying on common sense rather than on formal knowledge. Fortunately, practitioners can rely on conversation science.

This book adapts formal knowledge from the field of Conversation Analysis (CA) to the design of natural language interfaces. It outlines the Natural Conversation Framework (NCF), developed at IBM Research, a systematic framework for designing interfaces that work like natural conversation. The NCF consists of four main components: 1) an interaction model of “expandable sequences,” 2) a corresponding content format, 3) a pattern language with 100 generic UX patterns and 4) a navigation method of six basic user actions. The authors introduce UX designers to a new way of thinking about user experience design in the context of conversational interfaces, including a new vocabulary, new principles and new interaction patterns. User experience designers and graduate students in the HCI field as well as developers and conversation analysis students should find this book of interest.

Heterogeneous Computing

By: Mohamed Zahran
ISBN: 978-1-4503-6233-7
DOI: 10.1145/3281649
Table of Contents

If you look around, you will find that all computer systems, from your portable devices to the strongest supercomputers, are heterogeneous in nature. The most obvious heterogeneity is the existence of computing nodes of different capabilities (e.g. multicore, GPUs, FPGAs, …). But other heterogeneity factors also exist in computing systems, such as the memory-system components and the interconnect. The main reason for these different types of heterogeneity is to achieve good performance with power efficiency.

Heterogeneous computing results in both challenges and opportunities. This book discusses both. It shows that we need to deal with these challenges at all levels of the computing stack: from algorithms all the way to process technology. We discuss the topic of heterogeneous computing from different angles: hardware challenges, current hardware state-of-the-art, software issues, how to make the best use of the current heterogeneous systems, and what lies ahead.

The aim of this book is to introduce the big picture of heterogeneous computing. Whether you are a hardware designer or a software developer, you need to know how the pieces of the puzzle fit together. The main goal is to bring researchers and engineers to the forefront of the research frontier in the new era that started a few years ago and is expected to continue for decades. We believe that academics, researchers, practitioners, and students will benefit from this book and will be prepared to tackle the big wave of heterogeneous computing that is here to stay.

Making Databases Work: The Pragmatic Wisdom of Michael Stonebraker

By: Michael Lawrence Brodie
ISBN: 978-1-94748-719-2
DOI: 10.1145/3226595
Table of Contents

At the ACM Awards banquet in June 2017, during the 50th-anniversary celebration of the A.M. Turing Award, ACM announced the launch of the ACM A.M. Turing Book Series, a sub-series of ACM Books, to honor the winners of the A.M. Turing Award, computing’s highest honor and its “Nobel Prize.” The series aims to celebrate the accomplishments of awardees and to explain their major contributions of lasting importance to computing. This book, the first in the series, celebrates the contributions and impact of Michael R. Stonebraker, the 2014 A.M. Turing Award winner; experts should value it for its comprehensiveness, and non-experts for its account of Mike’s impact. What accomplishments warranted computing’s highest honor? How did Stonebraker do it? Who is Mike Stonebraker: researcher, professor, CTO, lecturer, innovative product developer, serial entrepreneur, decades-long leader, and, as Phil Bernstein has said, research evangelist for the database community? This book gathers the enormous amount of published work on Mike and his contributions, evaluates it in light of the Turing Award, and places it in context.

The Handbook of Multimodal-Multisensor Interfaces, Volume II

By: Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Krüger
ISBN: 978-1-97000-169-3
DOI: 10.1145/3107990
Table of Contents

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.

This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is the processing of information about users’ states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities, and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field.

In the final section of this volume, experts exchange views on two timely and controversial challenge topics: interdisciplinary approaches to optimizing strategic fusion, and multimodal deep learning. These discussions focus on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.


The Handbook of Multimodal-Multisensor Interfaces, Volume III

By: Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Krüger
ISBN: 978-1-97000-173-0
DOI:
Table of Contents

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.

This volume focuses on state-of-the-art multimodal language and dialogue processing, including semantic integration of modalities. The development of increasingly expressive embodied agents and robots has become an active test-bed for coordinating multimodal dialogue input and output, including processing of language and nonverbal communication. In addition, major application areas are featured for commercializing multimodal-multisensor systems, including automotive, robotic, manufacturing, machine translation, banking, communications, and others. These systems rely heavily on software tools, data resources, and international standards to facilitate their development.

For insights into the future, emerging multimodal-multisensor technology trends are highlighted for medicine, robotics, interaction with smart spaces, and similar topics. Finally, this volume discusses the societal impact of more widespread adoption of these systems, such as privacy risks and how to mitigate them. The handbook chapters provide a number of walk-through examples of system design and processing, information on practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field.
In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces need to be equipped to most effectively advance human performance during the next decade.

Declarative Logic Programming: Theory, Systems, and Applications

By: Michael Kifer and Yanhong Annie Liu
ISBN: 978-1-97000-199-0
DOI: 10.1145/3191315
Table of Contents

Logic Programming (LP) is at the nexus of Knowledge Representation, AI, Mathematical Logic, Databases, and Programming Languages. It allows programming to be more declarative, by specifying "what" to do instead of "how" to do it. This field is fascinating and intellectually stimulating due to the fundamental interplay among theory, systems, and applications brought about by logic.
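As a toy illustration of that declarative style (not taken from the book), a Datalog-style transitive-closure program states only what a path is; a naive bottom-up evaluator then derives all the facts by iterating the rules to a fixpoint:

```python
# A Datalog-style program states *what* a path is:
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- edge(X, Z), path(Z, Y).
# A naive bottom-up evaluator repeatedly applies the rules until no
# new facts can be derived (a fixpoint).

def transitive_closure(edges):
    path = set(edges)                  # first rule: every edge is a path
    while True:
        derived = {(x, y)              # second rule: join edge with path
                   for (x, z) in edges
                   for (z2, y) in path
                   if z == z2}
        if derived <= path:            # fixpoint reached: nothing new
            return path
        path |= derived

edges = {("a", "b"), ("b", "c"), ("c", "d")}
paths = transitive_closure(edges)
print(sorted(paths))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

Real LP systems replace this naive loop with semi-naive evaluation, tabling, or top-down resolution, which is precisely the kind of implementation machinery the book surveys.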

Several books cover the basics of LP but they focus mostly on the Prolog language. There is generally a lack of accessible collections of articles covering the key aspects of LP, such as the well-founded vs. stable semantics for negation, constraints, object-oriented LP, updates, probabilistic LP, and implementation methods, including top-down vs. bottom-up evaluation and tabling.

For systems, the situation is even less satisfactory, lacking expositions of LP inference machinery that supports tabling and other state-of-the-art implementation techniques. There is also a dearth of articles about systems that support truly declarative languages, especially those that tie into first-order logic, mathematical programming, and constraint programming. Also rare are surveys of challenging application areas of LP, such as Bioinformatics, Natural Language Processing, Verification, and Planning, as well as analyses of LP applications based on language abstractions and implementation methods.

The goal of this book is to help fill in the void in the literature with state-of-the-art surveys on key aspects of LP. Much attention was paid to making these surveys accessible to researchers, practitioners, and graduate students alike.

The Continuing Arms Race: Code-Reuse Attacks and Defenses

By: Per Larsen, Ahmad-Reza Sadeghi
ISBN: 978-1-97000-183-9
DOI: 10.1145/3129743
Table of Contents

As human activities moved to the digital domain, so did all the well-known malicious behaviors, including fraud, theft, and other trickery. There is no silver bullet, and each security threat calls for a specific answer. One such threat arises when applications accept malformed inputs: in many cases it is possible to craft inputs that let an intruder take full control of the target computer system. The nature of systems programming languages lies at the heart of the problem. Rather than rewriting decades of well-tested functionality, this book examines ways to live with the (programming) sins of the past while shoring up security in the most efficient manner possible.

We explore a range of different options, each making significant progress towards securing legacy programs from malicious inputs. The solutions explored include enforcement-type defenses, which exclude certain program executions because they never arise during normal operation. Another strand explores the idea of presenting adversaries with a moving target that unpredictably changes its attack surface thanks to randomization. We also cover tandem-execution ideas, where the compromise of one executing clone causes it to diverge from another, thus revealing adversarial activities.

The main purpose of this book is to provide readers with some of the most influential works on run-time exploits and defenses. We hope that the material in this book will inspire readers and generate new ideas and paradigms.

The Sparse Fourier Transform: Theory & Practice

By: Haitham Hassanieh
ISBN: 978-1-94748-707-9
DOI: 10.1145/3166186
Table of Contents

The Fourier transform is one of the most fundamental tools for computing the frequency representation of signals. It plays a central role in signal processing, communications, audio and video compression, medical imaging, genomics, astronomy, as well as many other areas. Because of its widespread use, fast algorithms for computing the Fourier transform can benefit a large number of applications. The fastest algorithm for computing the Fourier transform is the Fast Fourier Transform (FFT), which runs in near-linear time, making it an indispensable tool for many applications. However, today, the runtime of the FFT algorithm is no longer fast enough, especially for big data problems where each dataset can be a few terabytes. Hence, faster algorithms that run in sublinear time, i.e., that do not even sample all the data points, have become necessary.
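The key observation these algorithms exploit is that many real-world signals are sparse in the frequency domain. A toy illustration, using NumPy's dense FFT rather than the sublinear algorithms the book develops: a signal made of two tones concentrates essentially all of its spectral energy in two bins.

```python
import numpy as np

n = 1024
t = np.arange(n)
# A signal that is 2-sparse in frequency: two complex tones at bins 50 and 200.
x = np.exp(2j * np.pi * 50 * t / n) + 0.5 * np.exp(2j * np.pi * 200 * t / n)

# The dense FFT touches all n samples and runs in O(n log n) time.
X = np.fft.fft(x)

# Only the two tone frequencies carry significant energy.
dominant = sorted(int(k) for k in np.argsort(np.abs(X))[-2:])
print(dominant)  # [50, 200]
```

A Sparse Fourier Transform recovers those few dominant coefficients while sampling only a subset of the input, which is what makes sublinear runtime possible.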

This book addresses the above problem by developing the Sparse Fourier Transform algorithms and building practical systems that use these algorithms to solve key problems in six different applications: wireless networks, mobile systems, computer graphics, medical imaging, biochemistry, and digital circuits.

This is a revised version of the thesis that won the 2016 ACM Doctoral Dissertation Award.

Frontiers of Multimedia Research

By: Shih-Fu Chang
ISBN: 978-1-97000-107-5
DOI: 10.1145/3122865
Table of Contents

The field of multimedia is unique in offering a rich and dynamic forum for researchers from "traditional" fields to collaborate and develop new solutions and knowledge that transcend the boundaries of individual disciplines. Despite the prolific research activities and outcomes, however, few efforts have been made to develop books that serve as an introduction to the rich spectrum of topics covered by this broad field. The few books that are available either focus on specific subfields or cover basic background in multimedia. Tutorial-style materials covering the active topics being pursued by the leading researchers at the frontiers of the field are currently lacking.

In 2015, ACM SIGMM, the special interest group on multimedia, launched a new initiative to address this void by selecting and inviting 12 rising-star speakers from different subfields of multimedia research to deliver plenary tutorial-style talks at the 2015 ACM Multimedia conference. Each speaker discussed the challenges and state-of-the-art developments of their respective research areas in a manner accessible to the broad community. The covered topics were comprehensive, including multimedia content understanding, multimodal human-human and human-computer interaction, multimedia social media, and multimedia system architecture and deployment.

Following the very positive responses to these talks, the speakers were invited to expand the content covered in their talks into chapters that can be used as reference material for researchers, students, and practitioners. Each chapter discusses the problems, technical challenges, state-of-the-art approaches and performances, open issues, and promising directions for future work. Collectively, the chapters provide an excellent sampling of major topics addressed by the community as a whole. This book, capturing some of the outcomes of such efforts, is well positioned to fill the aforementioned needs in providing tutorial-style reference materials for frontier topics in multimedia.

Computational Methods for Protein Complex Prediction from Protein Interaction Networks

By: Sriganesh Srihari, Chern Han Yong, and Limsoon Wong
ISBN: 978-1-97000-155-6
DOI: 10.1145/3064650
Table of Contents

Complexes of physically interacting proteins constitute fundamental functional units that drive biological processes within cells. A faithful identification of the entire set of complexes (the ‘complexosome’) is therefore essential not only to understand complex formation but also the functional organization of cells. Advances over the last several years, particularly through the use of high-throughput yeast two-hybrid and affinity-purification-based experimental (proteomics) techniques, have extensively mapped interactions (the ‘interactome’) in model organisms, including Saccharomyces cerevisiae (budding yeast), Drosophila melanogaster (fruit fly) and Caenorhabditis elegans (roundworm). These interaction data have enabled systematic reconstruction of complexes in these organisms, thereby revealing novel insights into the constituents, assembly and functions of complexes. Computational methods have played a significant role in these advancements by contributing more accurate, efficient and exhaustive ways to analyse the enormous amounts of data, and by compensating for several limitations of experimental protocols, including the presence of biological and technical noise and the lack of credible interactions (sparsity). In this book, we systematically walk through all the important computational methods devised to date (approximately between 2003 and 2015) for identifying complexes from the network of protein interactions (the PPI network).

We present a detailed taxonomy of these methods, and comprehensively evaluate them for their ability to accurately identify complexes across a variety of scenarios, including the presence of noise in PPI networks and the inference of sparse complexes. By covering challenges these methods have faced more recently, for instance in identifying sub- or small complexes and discerning overlapping complexes, we reveal how a combination of strategies is required to accurately reconstruct the entire complexosome. The experience gained from model organisms is now paving the way for the identification of complexes in higher-order organisms, including Homo sapiens (human). In particular, with the increasing use of ‘pan-omics’ techniques spanning genomics, transcriptomics, proteomics and metabolomics to map human cells across multiple layers of organization, the need to understand the rewiring of the interactome between conditions – e.g. between normal development and disease – and, consequently, the dynamic reorganization of complexes across these conditions is gaining immense importance. Towards this end, more recent computational methods have integrated these pan-omics datasets to decipher complexes in diseases, including cancer, which in turn has revealed novel insights into disease mechanisms and highlighted potential therapeutic targets. Here, we will cover several of these latest methods, emphasizing how a fundamental problem such as complex identification can have far-reaching applications towards understanding the biology underlying sophisticated functional and organizational transformations in cells.

Shared-Memory Parallelism Can be Simple, Fast, and Scalable

By: Julian Shun
ISBN: 978-1-97000-191-4
DOI: 10.1145/3018787
Table of Contents

Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to enable the solutions developed to run efficiently under various settings. This book, a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award, addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The book provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results serve to ease the transition into the multicore era.

The book starts by introducing tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice. The book then introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework enables short and concise implementations that deliver performance competitive with that of highly-optimized code and up to orders of magnitude better than previous systems designed for distributed memory. Finally, the book bridges the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice.
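Ligra's programming model centers on a vertex frontier and an edge-map operator applied to the edges leaving it. As a rough illustration only – a sequential Python sketch, not Ligra's actual C++ interface – breadth-first search in this style looks like:

```python
def edge_map(graph, frontier, update, cond):
    """Apply `update` over edges leaving the frontier; targets that
    pass `cond` and are updated form the next frontier
    (a sequential sketch of Ligra's EDGEMAP)."""
    next_frontier = set()
    for u in frontier:
        for v in graph[u]:
            if cond(v) and update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, source):
    """Breadth-first search returning each vertex's BFS parent."""
    parent = {v: None for v in graph}
    parent[source] = source
    frontier = {source}
    while frontier:
        frontier = edge_map(
            graph, frontier,
            # record u as v's parent; returning True keeps v in the frontier
            update=lambda u, v: parent.__setitem__(v, u) or True,
            # only visit vertices not yet reached
            cond=lambda v: parent[v] is None)
    return parent
```

Ligra parallelizes the loops inside the edge map and switches automatically between sparse and dense frontier representations; none of that machinery is shown here.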

The Handbook of Multimodal-Multisensor Interfaces, Volume I

By: Sharon Oviatt, Bjorn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Kruger
ISBN: 978-1-97000-167-9
DOI: 10.1145/3015783
Table of Contents

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle, robotic, and many other applications that are now highly competitive commercially.

This edited collection is written by international experts and pioneers in the field. It provides a textbook for students, and a reference and technology roadmap for professionals working in this rapidly emerging area.

Volume 1 of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling, interface design that supports user choice, synergistic combination of modalities with sensors, and blending of multimodal input and output. They also provide an in-depth look at the most common multimodal-multisensor combinations—for example, touch and pen input, haptic and non-speech audio output, and speech co-processed with visible lip movements, gaze, gestures, or pen input. A common theme throughout is support for mobility and individual differences among users—including the world’s rapidly growing population of seniors.

These handbook chapters provide walk-through examples and video illustrations of different system designs and their interactive use. Common terms are defined, and information on practical resources is provided (e.g., software tools, data resources) for hands-on project work to develop and evaluate multimodal-multisensor systems. In the final chapter, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.

Communities of Computing: Computer Science and Society in the ACM

By: Thomas J. Misa
ISBN: 978-1-97000-187-7
DOI: 10.1145/2973856
Table of Contents

Communities of Computing is the first book-length history of the Association for Computing Machinery (ACM), founded in 1947 and with a membership today of 100,000 worldwide. It profiles ACM’s notable SIGs, active chapters, and individual members, setting ACM’s history into a rich social and political context. The book’s 12 core chapters are organized into three thematic sections. “Defining the Discipline” examines the 1960s and 1970s when the field of computer science was taking form at the National Science Foundation, Stanford University, and through ACM’s notable efforts in education and curriculum standards. “Broadening the Profession” looks outward into the wider society as ACM engaged with social and political issues—and as members struggled with balancing a focus on scientific issues and awareness of the wider world.

Chapters examine the social turbulence surrounding the Vietnam War, debates about the women’s movement, efforts for computing and community education, and international issues including professionalization and the Cold War.“Expanding Research Frontiers” profiles three areas of research activity where ACM members and ACM itself shaped notable advances in computing, including computer graphics, computer security, and hypertext.

Featuring insightful profiles of notable ACM leaders, such as Edmund Berkeley, George Forsythe, Jean Sammet, Peter Denning, and Kelly Gotlieb, and honest assessments of controversial episodes, the volume deals with compelling and complex issues involving ACM and computing. It is not a narrow organizational history of ACM committees and SIGs, although much information about them is given. All chapters are original works of research. Many chapters draw on archival records of ACM’s headquarters, ACM SIGs, and ACM leaders. This volume makes a permanent contribution to documenting the history of ACM and understanding its central role in the history of computing.

Text Data Management and Analysis

By: ChengXiang Zhai, Sean Massung
ISBN: 978-1-97000-117-4
DOI: 10.1145/2915031
Table of Contents

Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people analyze and manage vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or sensors, text data are usually generated directly by humans, and are accompanied by semantically rich content. As such, text data are especially valuable for discovering knowledge about human opinions and preferences, in addition to many other kinds of knowledge that we encode in text. In contrast to structured data, which conform to well-defined schemas (and thus are relatively easy for computers to handle), text has less explicit structure, so computer processing is required to understand the content encoded in it. The current technology of natural language processing has not yet reached a point that enables a computer to precisely understand natural language text, but a wide range of statistical and heuristic approaches to the analysis and management of text data have been developed over the past few decades. They are usually very robust and can be applied to analyze and manage text data in any natural language, and about any topic.

This book provides a systematic introduction to all these approaches, with an emphasis on covering the most useful knowledge and skills required to build a variety of practically useful text information systems. The focus is on text mining applications that can help users analyze patterns in text data to extract and reveal useful knowledge. Information retrieval systems, including search engines and recommender systems, are also covered as supporting technology for text mining applications. The book covers the major concepts, techniques, and ideas in text data mining and information retrieval from a practical viewpoint, and includes many hands-on exercises designed with a companion software toolkit (i.e., MeTA) to help readers learn how to apply techniques of text mining and information retrieval to real-world text data, and how to experiment with and improve some of the algorithms for interesting application tasks. The book can be used as a textbook for a computer science undergraduate course or as a reference book for practitioners working on relevant problems in analyzing and managing text data.
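As a taste of the statistical weighting schemes such systems build on, here is a minimal TF-IDF computation over tokenized documents in plain Python (an illustrative sketch, not MeTA's API; one of several common TF-IDF variants):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.
    TF is term count normalized by document length; IDF is
    log(N / document frequency)."""
    n = len(docs)
    # document frequency: how many documents contain each term
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

A term appearing in every document gets weight zero (its IDF is log 1 = 0), which is exactly the intuition: ubiquitous terms carry no discriminative information.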

Reactive Internet Programming: State Chart XML in Action

By: Franck Barbier
ISBN: 978-1-97000-177-8
DOI: 10.1145/2872585
Table of Contents

Is Internet software so different from “ordinary” software? This book answers this question in practical terms through the presentation of a software design method based on the State Chart XML W3C standard along with Java. Web enterprise, Internet-of-Things, and Android applications, in particular, are seamlessly specified and implemented from “executable models.”

Internet software puts forward the idea of event-driven or reactive programming, as pointed out in Bonér et al.’s “Reactive Manifesto” (http://www.reactivemanifesto.org). It tells us that reactiveness is a must. However, beyond concepts, software engineers require effective means with which to put reactive programming into practice. This book’s purpose is to outline and explain such means.

The lack of professional examples in the literature that illustrate how reactive software should be shaped can be quite frustrating. Therefore, this book helps to fill in that gap by providing in-depth professional case studies that contain comprehensive details and meaningful alternatives. Furthermore, these case studies can be downloaded for further investigation.

Internet software requires greater adaptation, particularly at run time. After reading Reactive Internet Programming, the reader will therefore be ready to enter the forthcoming Internet era.

Reviews

If I may harken back to my initial disclaimer, namely, that I’m not an Internet programmer, I wish to say that in 1988 (in an aerospace venue) I urged my employer to purchase Harel’s StateMate tool. The proverbial bean counters did their job in rejecting the request. I’m happy to see this excellent book’s arrival, along with its open-source superstructure. It is very much worth reading, both for theory and for real-world application to complex event-driven systems.
George Hacken - Computing Reviews

An Architecture for Fast and General Data Processing on Large Clusters

By: Matei Zaharia
ISBN: 978-1-97000-157-0
DOI: 10.1145/2886107
Table of Contents

The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, myriad data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters.

At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads, but support these new applications too.

This book, a revised version of the dissertation that won the 2014 ACM Doctoral Dissertation Award, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems only support simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing.

We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective.
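The essence of RDDs – lazy, deterministic transformations whose recorded lineage lets lost data be recomputed rather than replicated – can be sketched in a few lines of Python. The class below is a toy for illustration only and is not Spark's API:

```python
class ToyRDD:
    """A toy in-memory RDD: each transformation records its lineage
    (parent dataset plus the function applied), and results are
    recomputed on demand by replaying that lineage, as Spark does
    when a partition is lost."""
    def __init__(self, source=None, parent=None, transform=None):
        self.source = source        # base data (for the root RDD only)
        self.parent = parent        # lineage: the upstream RDD
        self.transform = transform  # lineage: function applied to parent

    def map(self, f):
        return ToyRDD(parent=self,
                      transform=lambda xs: [f(x) for x in xs])

    def filter(self, pred):
        return ToyRDD(parent=self,
                      transform=lambda xs: [x for x in xs if pred(x)])

    def collect(self):
        # Materialize by replaying the lineage chain from the source.
        if self.parent is None:
            return list(self.source)
        return self.transform(self.parent.collect())
```

A real RDD is partitioned across machines and replays only the lineage of a lost partition; here collect() simply replays the whole chain, e.g. ToyRDD(source=range(5)).map(lambda x: x * x).filter(lambda x: x % 2 == 0).collect() yields [0, 4, 16].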

This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.

Verified Functional Programming in Agda

By: Aaron Stump
ISBN: 978-1-97000-127-3
DOI: 10.1145/2841316
Table of Contents

Agda is an advanced programming language based on Type Theory. Agda’s type system is expressive enough to support full functional verification of programs, in two styles. In external verification, we write pure functional programs and then write proofs of properties about them. The proofs are separate external artifacts, typically using structural induction. In internal verification, we specify properties of programs through rich types for the programs themselves. This often necessitates including proofs inside code, to show the type checker that the specified properties hold. The power to prove properties of programs in these two styles is a profound addition to the practice of programming, giving programmers the power to guarantee the absence of bugs, and thus improve the quality of software more than previously possible.

Verified Functional Programming in Agda is the first book to provide a systematic exposition of external and internal verification in Agda, suitable for undergraduate students of Computer Science. No familiarity with functional programming or computer-checked proofs is presupposed. The book begins with an introduction to functional programming through familiar examples like booleans, natural numbers, and lists, and techniques for external verification. Internal verification is considered through the examples of vectors, binary search trees, and Braun trees. More advanced material on type-level computation, explicit reasoning about termination, and normalization by evaluation is also included. The book also includes a medium-sized case study on Huffman encoding and decoding.
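The external style described above – write an ordinary functional program, then prove a property about it as a separate artifact by structural induction – has a characteristic shape. The sketch below uses Lean rather than Agda (the syntax differs, but the program/proof separation is the same):

```lean
-- The program: list append, by recursion on the first argument.
def app : List Nat → List Nat → List Nat
  | [],      ys => ys
  | x :: xs, ys => x :: app xs ys

-- The external proof: a separate theorem, by structural induction.
theorem length_app (xs ys : List Nat) :
    (app xs ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp only [app, List.length_nil]; omega
  | cons x xs ih => simp only [app, List.length_cons, ih]; omega
```

In the internal style, by contrast, the length property would be carried in the type itself, as with length-indexed vectors.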

Reviews

Verified Functional Programming in Agda is an excellent introduction to the field of dependently typed programming. Stump does a great job of making the subject accessible to beginners without shying away from the more advanced topics.
Ulf Norell, Chalmers University of Technology, Sweden

Previously, when someone asked me how to learn dependently typed programming, I'd point them to various tutorials, papers, and blog posts about Agda. Now, I just give them this book.
Jesper Cockx, K. U. Leuven, Belgium

The VR Book: Human-Centered Design for Virtual Reality

By: Jason Jerald
ISBN: 978-1-97000-112-9
DOI: 10.1145/2792790
Table of Contents

Virtual reality (VR) potentially provides our minds with direct access to digital media in a way that at first seems to have no limits. However, creating compelling VR experiences is an incredibly complex challenge. When VR is done well, the results are brilliant and pleasurable experiences that go beyond what we can do in the real world. When VR is done badly, not only is the system frustrating to use, but sickness can result. Reasons for bad VR are numerous; some failures come from the limitations of technology, but many come from a lack of understanding perception, interaction, design principles, and real users. This book discusses such issues, focusing upon the human element of VR rather than technical implementation, for if we do not get the human element correct, then no amount of technology will make VR anything more than an interesting tool confined to research laboratories. Even when VR principles are fully understood, first implementations are rarely novel and never ideal due to the complex nature of VR and the countless possibilities. However, the VR principles discussed within enable us to intelligently experiment with the rules and iteratively design towards innovative experiences.

Chapter 1. Introduction

Chapter 2. Perception

Chapter 3. Cybersickness

Chapter 4. Interaction

Chapter 5. Content Creation

Chapter 6. Iterative Design

Chapter 7. Conclusions and the Future

Reviews

The definitive guide for creating VR user interactions.
Amir Rubin - CEO, Sixense


Conceptually comprehensive, yet at the same time practical and grounded in real-world experience.
Paul Mlyniec - President of Digital ArtForms and father of MakeVR

The summative guidelines provide quick access with back references for further understanding.
Chris Pusczak - Creative Director of SymbioVR

I was able to briefly preview it at SIGGRAPH this last summer and it seemed like it would be a good fit for my class. Now that I have had a chance to take a deeper dive, I am even more certain I will be using the book. I am looking forward to using this as a required text when I teach the course again.
Dr. David Whittinghill, Purdue University

Ada’s Legacy: Cultures of Computing from the Victorian to the Digital Age

By: Robin Hammerman and Andrew L. Russell
ISBN: 978-1-97000-149-5
DOI: 10.1145/2809523
Table of Contents

Ada's Legacy illustrates the depth and diversity of writers, thinkers, and makers who have been inspired by Ada Lovelace, the English mathematician and writer. The volume, which commemorates the bicentennial of Ada's birth in December 1815, celebrates Lovelace's many achievements as well as the impact of her life and work, which have reverberated widely since the late nineteenth century. In the 21st century we have seen a resurgence in Lovelace scholarship, due to the growth of interdisciplinary thinking and the expanding influence of women in science, technology, engineering and mathematics. Ada's Legacy is a unique contribution to this scholarship, thanks to its combination of papers on Ada's collaboration with Charles Babbage, Ada's position in the Victorian and Steampunk literary genres, Ada's namesake programming language, Ada's representation in and inspiration of contemporary art, and her continued relevance in discussions about gender and technology in the digital age.

Because of its broad focus on subjects that reach far beyond the life and work of Ada herself, Ada's Legacy will appeal to readers who are curious about her enduring importance in computing and the wider world.

Edmund Berkeley and the Social Responsibility of Computer Professionals

By: Bernadette Longo
ISBN: 978-1-97000-139-6
DOI: 10.1145/2787754
Table of Contents

Edmund C. Berkeley (1909 – 1988) was a mathematician, insurance actuary, inventor, publisher, and a founder of the Association for Computing Machinery (ACM). His book Giant Brains, or Machines That Think (1949) was the first explanation of computers for a general readership. His journal Computers and Automation (1951-1973) was the first journal for computer professionals. In the 1950s, Berkeley developed mail-order kits for small, personal computers such as Simple Simon and the Brainiac. In an era when computer development was on a scale barely affordable by universities or government agencies, Berkeley took a different approach and sold simple computer kits to average Americans. He believed that digital computers, using mechanized reasoning based on symbolic logic, could help people make more rational decisions. The result of this improved reasoning would be better social conditions and fewer large-scale wars. Although Berkeley’s populist notions of computer development in the public interest did not prevail, the events of his life exemplify the human side of ongoing debates concerning the social responsibility of computer professionals.

This biography of Edmund Berkeley, based on primary sources gathered over 15 years of archival research, provides a lens to understand social and political decisions surrounding early computer development, and the consequences of these decisions in our 21st century lives.

Candidate Multilinear Maps

By: Sanjam Garg
ISBN: 978-1-62705-549-9
DOI: 10.1145/2714451
Table of Contents

Cryptography to me is the “black magic” of cryptographers, enabling tasks that often seem paradoxical or simply impossible. Like space explorers, we cryptographers often wonder, “What are the boundaries of this world of ‘black magic’?” This work lays one of the founding stones in furthering our understanding of these edges.

We describe plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups. The security of our constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function. These new constructions radically enhance our tool set and open a floodgate of applications. We present a survey of these applications. This book is based on my PhD thesis, an extended version of the paper “Candidate Multilinear Maps from Ideal Lattices,” co-authored with Craig Gentry and Shai Halevi and originally published at EUROCRYPT 2013.
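For orientation, the sought-after object generalizes the bilinear pairings of pairing-based cryptography: an n-multilinear map e over a group with generator g satisfies

```latex
e\left(g^{a_1}, g^{a_2}, \ldots, g^{a_n}\right)
  = e(g, g, \ldots, g)^{a_1 a_2 \cdots a_n}
```

while discrete logarithms in the underlying groups remain hard; the constructions in this book approximate this ideal functionality rather than realize it exactly.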

Smarter Than Their Machines: Oral Histories of Pioneers in Interactive Computing

By: John Cullinane
ISBN: 978-1-62705-553-6
DOI: 10.1145/2663015
Table of Contents

The oral histories of the pioneers that led to interactive computing have much to offer today's leaders in business, industry, and academia on how to get complex things done. After all, industry, government, and academia working together created the computer industry, which led to the Internet as we know it today. To do so, the pioneers had to get around the various “systems” of the day that are always impediments to implementing new ideas. In their case, it was the voice-dominated communications industry. For example, packet switching was invented to get around such an issue. This was key to allowing incompatible computers to “talk” to each other across academia, and later industry, which would be the key to the Internet. Cullinane Corporation, the computer industry's first successful software products company, benefited from this technology as it focused on database software as the foundation for interactive computer systems for industry, government, and academia. As such, this book is a personal walk through the history that led to interactive computing as John Cullinane witnessed it and participated in it, with the help of the oral histories of some key pioneers, organized and introduced in a way that illustrates the close interaction of the various individuals and organizations involved in the evolution of interactive computing. These oral histories, including John's, were drawn from the archives of over 300 such histories located at the Charles Babbage Institute, University of Minnesota.

Embracing Interference in Wireless Systems

By: Shyamnath Gollakota
ISBN: 978-1-62705-474-4
DOI: 10.1145/2611390
Table of Contents

The wireless medium is a shared resource. If nearby devices transmit at the same time, their signals interfere, resulting in a collision. In traditional networks, collisions cause the loss of the transmitted information. For this reason, wireless networks have been designed with the assumption that interference is intrinsically harmful and must be avoided.

This book takes an alternate approach: instead of viewing interference as an inherently counterproductive phenomenon that should be avoided, we design practical systems that transform interference into a harmless, and even beneficial, phenomenon. To achieve this goal, we consider how wireless signals interact when they interfere, and use this understanding in our system designs. Specifically, when interference occurs, the signals get mixed on the wireless medium. By understanding the parameters of this mixing, we can invert the mixing and decode the interfered packets, thus making interference harmless. Furthermore, we can control this mixing process to create strategic interference that allows decodability at a particular receiver of interest, but prevents decodability at unintended receivers and adversaries. Hence, we can transform interference into a beneficial phenomenon that provides security.
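In the simplest noise-free case, inverting the mixing amounts to solving a small linear system: with two senders, two received (interfered) samples, and known channel coefficients, a receiver can recover both transmitted symbols. The Python sketch below is a deliberate simplification of this idea (real receivers must estimate the channel and operate on complex baseband samples):

```python
def decode_interfered(y1, y2, h):
    """Recover symbols x1, x2 from two interfered observations
        y1 = h[0][0]*x1 + h[0][1]*x2
        y2 = h[1][0]*x1 + h[1][1]*x2
    by inverting the 2x2 channel matrix h (noise-free sketch)."""
    det = h[0][0] * h[1][1] - h[0][1] * h[1][0]
    if det == 0:
        raise ValueError("channel matrix is not invertible")
    # Cramer's rule for the 2x2 system
    x1 = (h[1][1] * y1 - h[0][1] * y2) / det
    x2 = (h[0][0] * y2 - h[1][0] * y1) / det
    return x1, x2
```

For example, with channel matrix [[2, 1], [1, 3]] and transmitted symbols (4, 5), the receiver observes (13, 19), and decode_interfered(13, 19, [[2, 1], [1, 3]]) returns (4.0, 5.0).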

Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers

By: Bryan Parno
ISBN: 978-1-62705-477-5
DOI: 10.1145/2611399
Table of Contents

As society rushes to digitize sensitive information and services, it is imperative to adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers: consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low cost. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted.

In this book we describe how to resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. At a high level, we support this premise by developing techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. Rather than entrusting her data to the mountain of buggy code likely running on her computer, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.

A Framework for Scientific Discovery through Video Games

By: Seth Cooper
ISBN: 978-1-62705-504-8
DOI: 10.1145/2625848
Table of Contents

As science becomes increasingly computational, the limits of what is computationally tractable become a barrier to scientific progress. Many scientific problems, however, are amenable to human problem solving skills that complement computational power. By leveraging these skills on a larger scale – beyond the relatively few individuals currently engaged in scientific inquiry – there is the potential for new scientific discoveries.

This book presents a framework for mapping open scientific problems into video games. The game framework combines computational power with human problem solving and creativity to work toward solving scientific problems that neither computers nor humans could previously solve alone. To maximize the potential contributors to scientific discovery, the framework designs a game to be played by people with no formal scientific background and incentivizes long-term engagement through a myriad of collaborative and competitive reward structures. The framework allows the players and the game to continually coevolve: as players gain expertise through gameplay, the game changes to become a better tool.

The framework is validated by being applied to proteomics problems with the video game Foldit. Foldit players have contributed to novel discoveries in protein structure prediction, protein design, and protein structure refinement algorithms. The coevolution of human problem solving and computer tools in an incentivized game framework is an exciting new scientific pathway that can lead to discoveries currently unreachable by other methods.

Collection I


View Titles in Development