FORTHCOMING TITLES

The following is a list of titles to appear in the ACM Books series. Upon publication, each book will appear in the ACM Digital Library and be accessible in both PDF and ePub formats to those with full-text access. Individual titles will also be available for purchase from Morgan & Claypool, Amazon, and Barnes & Noble.

Algorithms and Methods in Structural Bioinformatics
Author: Nurit Haspel
Abstract:

Structural bioinformatics is the field concerned with the development and application of computational models for the prediction and analysis of macromolecular structures. The unique nature of protein and nucleotide structures has presented many computational challenges over the last three decades. The rapid accumulation of data, together with the rapid increase in computational power, presents a unique set of challenges and opportunities in the analysis, comparison, modeling, and prediction of macromolecular structures and interactions.

The book is intended as a user's guide to key algorithms for problems related to macromolecular structure, with an emphasis on protein structure, function, and dynamics. It can be used as a textbook for a one-semester graduate course on algorithms in bioinformatics.

Computational Methods for Protein Complex Prediction from Protein Interaction Networks
Authors: Sriganesh Srihari, Chern Han Yong, and Limsoon Wong
Abstract:

Complexes of physically interacting proteins constitute fundamental functional units that drive biological processes within cells. A faithful identification of the entire set of complexes (the ‘complexosome’) is therefore essential to understand not only complex formation but also the functional organization of cells. Advances over the last several years, particularly through the use of high-throughput yeast two-hybrid and affinity-purification-based experimental (proteomics) techniques, have extensively mapped interactions (the ‘interactome’) in model organisms, including Saccharomyces cerevisiae (budding yeast), Drosophila melanogaster (fruit fly), and Caenorhabditis elegans (roundworm). These interaction data have enabled systematic reconstruction of complexes in these organisms, revealing novel insights into the constituents, assembly, and functions of complexes. Computational methods have played a significant role in these advances by contributing more accurate, efficient, and exhaustive ways to analyze the enormous amounts of data, and by compensating for several limitations of experimental protocols, including the presence of biological and technical noise and the lack of credible interactions (sparsity). In this book, we systematically walk through the important computational methods devised to date (approximately between 2003 and 2015) for identifying complexes from the network of protein interactions (the PPI network).

We present a detailed taxonomy of these methods and comprehensively evaluate their ability to accurately identify complexes across a variety of scenarios, including the presence of noise in PPI networks and the inference of sparse complexes. By covering challenges these methods have faced more recently, for instance in identifying small or sub-complexes and in discerning overlapping complexes, we show how a combination of strategies is required to accurately reconstruct the entire complexosome. The experience gained from model organisms is now paving the way for the identification of complexes in higher organisms, including Homo sapiens (human). In particular, with the increasing use of ‘pan-omics’ techniques spanning genomics, transcriptomics, proteomics, and metabolomics to map human cells across multiple layers of organization, the need to understand the rewiring of the interactome between conditions – e.g. between normal development and disease – and, consequently, the dynamic reorganization of complexes across these conditions is gaining immense importance. Toward this end, more recent computational methods have integrated these pan-omics datasets to decipher complexes in diseases including cancer, which in turn has revealed novel insights into disease mechanisms and highlighted potential therapeutic targets. Here, we cover several of these latest methods, emphasizing how a fundamental problem such as complex identification can have far-reaching applications in understanding the biology underlying sophisticated functional and organizational transformations in cells.
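To make the flavor of these methods concrete, here is a minimal illustrative sketch (not taken from the book): many early complex-prediction methods search for densely connected subgraphs in the PPI network, which the clique-percolation method approximates by merging adjacent cliques. The networkx library is assumed, and the protein names and edges are hypothetical.

```python
# Toy sketch (not from the book): one family of complex-prediction methods
# finds densely connected subgraphs in the PPI network. Here, clique
# percolation via networkx; protein names and edges are hypothetical.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

ppi_edges = [
    ("A", "B"), ("A", "C"), ("B", "C"),   # dense triangle: candidate complex
    ("C", "D"),                           # sparse attachment
    ("D", "E"), ("D", "F"), ("E", "F"),   # second dense triangle
]
G = nx.Graph(ppi_edges)

# Communities formed by adjacent 3-cliques serve as predicted complexes.
for complex_candidate in k_clique_communities(G, 3):
    print(sorted(complex_candidate))  # e.g. ['A', 'B', 'C'] and ['D', 'E', 'F']
```

Real methods additionally weight interactions by reliability and handle noise and overlap, which is precisely where the strategies surveyed in the book differ.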

Data Cleaning
Author: Ihab Ilyas
Abstract:

Data quality is one of the most important problems in data management, since dirty data often leads to inaccurate analytics and bad business decisions. Poor data across businesses and government costs the U.S. economy $3.1 trillion a year, according to a 2012 report by InsightSquared.

Various tools and techniques have been proposed to detect data errors and anomalies. For example, data quality rules, or integrity constraints, have been proposed as a declarative way to describe legal or correct data instances. Any subset of the data that does not conform to the defined rules is considered erroneous and is referred to as a violation.
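As an illustration (not from the book), the following is a minimal sketch of rule-based error detection, assuming tabular data in a pandas DataFrame and a functional dependency zip → city as the quality rule; the column names and values are hypothetical.

```python
# Illustrative sketch (not from the book): detecting violations of the
# functional dependency zip -> city, a simple kind of data quality rule.
import pandas as pd

data = pd.DataFrame({
    "zip":  ["10001", "10001", "60601", "60601"],
    "city": ["New York", "Newark", "Chicago", "Chicago"],
})

def fd_violations(df: pd.DataFrame, lhs: str, rhs: str) -> pd.DataFrame:
    """Return rows participating in a violation of the FD lhs -> rhs:
    groups that share one lhs value but map to more than one rhs value."""
    counts = df.groupby(lhs)[rhs].nunique()
    bad_keys = counts[counts > 1].index
    return df[df[lhs].isin(bad_keys)]

print(fd_violations(data, "zip", "city"))  # rows for zip 10001 violate the FD
```

Detection only flags the violating subset; deciding which cell to change (is the zip wrong, or the city?) is the harder repair problem discussed next.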

Repairing dirty data sets is often a more challenging task. Multiple techniques with different objectives have been introduced. Some of these aim to minimally change the database, such that the data conforms to the declared quality rules; others involve users or knowledge bases to verify the repairs.

In this book, we discuss the main facets and directions in designing error detection and repairing techniques. We start by surveying anomaly detection techniques, based on what, how, and where to detect. We then propose a taxonomy of the various aspects of data repairing, including the repair target, the automation of the repair process, and the update model. The book also highlights new trends in data cleaning algorithms to cope with current Big Data settings, focusing on scalable data cleaning techniques for large data sets.

Database Replication
Author: Bettina Kemme
Abstract:

Database replication is widely used for fault tolerance, scalability, and performance. The failure of one database replica does not stop the system from working, as the remaining replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas and adding new replicas should the load increase. Finally, database replication can provide fast local access, even when clients are geographically distributed, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and there are many hurdles to overcome. At the forefront is replica control: ensuring that data copies remain consistent when updates occur. There are many alternatives regarding where updates can occur, when changes are propagated to data copies, how changes are applied, where the replication tool is located, and so on. A particular challenge is combining replica control with transaction management, as a transaction requires several operations to be treated as a single logical unit and must guarantee atomicity, consistency, isolation, and durability across the replicated system.
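As a toy illustration of one point in this design space (not the book's own code), the sketch below shows eager primary-copy replica control: all updates go to a primary copy, which synchronously propagates each change to the secondaries before acknowledging the write. Real systems add logging, failure handling, and concurrency control.

```python
# Toy sketch of eager primary-copy replica control (illustrative only).

class Replica:
    def __init__(self):
        self.store = {}          # key -> value

    def apply(self, key, value):
        self.store[key] = value  # apply a propagated change

class Primary(Replica):
    def __init__(self, secondaries):
        super().__init__()
        self.secondaries = secondaries

    def write(self, key, value):
        # Eager propagation: the update is applied on every copy
        # before the write is acknowledged to the client.
        self.apply(key, value)
        for s in self.secondaries:
            s.apply(key, value)
        return "committed"

secondaries = [Replica(), Replica()]
primary = Primary(secondaries)
primary.write("x", 42)
assert all(s.store["x"] == 42 for s in secondaries)
```

Lazy schemes would instead acknowledge the write first and propagate later, trading consistency guarantees for lower latency; the book categorizes these alternatives systematically.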

This book provides a categorization of replica control mechanisms, presents several replica and concurrency control mechanisms in detail, and discusses many of the issues that arise when such solutions are implemented within or on top of relational database systems. Furthermore, the book presents the tasks needed to build a fault-tolerant replication solution, gives an overview of load-balancing strategies that allow load to be evenly distributed across replicas, and introduces the concept of self-provisioning, which lets the replicated system dynamically decide how many replicas are needed to handle the current load. As performance evaluation is a crucial aspect of developing a replication tool, the book presents an analytical model of the scalability potential of various replication solutions.

Empirical Software Engineering
Author: Dag Sjøberg
Principles of Graph Data Management and Analytics
Authors: Amol Deshpande and Amarnath Gupta
Abstract:

Principles of Graph Data Management and Analytics is the first textbook on the subject for upper-level undergraduates, graduate students, and data management professionals interested in the new and exciting world of graph data management and computation. The book blends two thinly connected disciplines – a database-minded approach to managing and querying graphs, and an analytics-driven approach to performing scalable computation on large graphs. It presents a detailed treatment of the underlying theory and algorithms and of prevalent techniques and systems; it also presents textbook use cases and real-world problems that can be solved by combining database-centric and analysis-centric approaches. The book will enable students to understand the state of the art in graph data management, to effectively program currently available graph databases and graph analytics products, and to design their own graph data analysis systems. To support this process, the book supplements its textual material with several data sets, small and large, made available through the book's website. Several free and contributed software packages will also be provided for readers to practice with.

Research Frontiers of Multimedia
Author: Shih-Fu Chang
Shared-Memory Parallelism Can Be Simple, Fast, and Scalable
Author: Julian Shun
Abstract:

Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to enable the solutions developed to run efficiently under various settings. This book, a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award, addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The book provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results serve to ease the transition into the multicore era.

The book starts by introducing tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel; these lead to deterministic parallel algorithms that are efficient both in theory and in practice. The book then introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework enables short and concise implementations that deliver performance competitive with that of highly optimized code and up to orders of magnitude better than previous systems designed for distributed memory. Finally, the book bridges the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice.
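To convey the style of framework described here, the following is an illustrative Python sketch of the frontier-based abstraction that Ligra popularized; Ligra itself is a C++ framework, so this is not its actual API, and the graph is hypothetical. A breadth-first search is expressed by repeatedly mapping an update function over the edges leaving the current frontier.

```python
# Illustrative sketch of a Ligra-style frontier abstraction (not Ligra's API).
# BFS: repeatedly map an update function over edges out of the current
# frontier to produce the next frontier.

def edge_map(graph, frontier, update):
    """Apply update(src, dst) to every edge out of the frontier;
    dst joins the next frontier when update returns True."""
    next_frontier = set()
    for src in frontier:
        for dst in graph.get(src, ()):
            if update(src, dst):
                next_frontier.add(dst)
    return next_frontier

def bfs(graph, root):
    parent = {root: root}

    def update(src, dst):
        if dst not in parent:   # visit each vertex only once
            parent[dst] = src
            return True
        return False

    frontier = {root}
    while frontier:
        frontier = edge_map(graph, frontier, update)
    return parent

g = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(g, 0))  # e.g. {0: 0, 1: 0, 2: 0, 3: 1}
```

In the real framework the edge-map loop is parallelized and the frontier representation switches between sparse and dense forms depending on its size, which is a key source of its performance.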

Software Evolution: Lessons Learned from Software History
Author: Kim Tracy
Abstract:

Software history has a deep impact on current software designers, computer scientists, and technologists. Decisions and design constraints made in the past are often unknown or poorly understood by current students, yet modern software systems build on those earlier decisions and constraints. This work looks at software history through specific software areas and extracts practices, lessons, and trends that students can apply to current and future software design. It also exposes key areas that are heavily used in modern software yet no longer taught in most computing programs. Written as a textbook, it uses past and current cases to explore specific software evolution trends and their impact.

Tangible and Embodied Interaction
Authors: Brygg Ullmer, Ali Mazalek, Orit Shaer, and Caroline Hummels
Abstract:

User interfaces for our increasingly varied computational devices have long been oriented toward graphical screens and virtual interactors. Since the advent of mass market graphical interfaces in the mid-1980s, most human-computer interaction has been mediated by graphical buttons, sliders, text fields, and their virtual kin.

And yet, humans are profoundly physical creatures. Throughout our history (and prehistory), our bodies have deeply shaped our activities and our engagement with the world and with each other. Despite – and perhaps also because of – the many successes of keyboard, pointer, touch screen, and (increasingly) speech modalities of computational interaction, many have sought alternate prospects for interaction that more deeply respect, engage, and celebrate our embodied physicality.

For several decades, tangible and embodied interaction (TEI) has been the topic of intense technological, scientific, artistic, humanistic, and mass-market research and practice. In this book, we elaborate on many dimensions of this diverse, transdisciplinary, blossoming topic.

The Continuing Arms Race: Code-Reuse Attacks and Defenses
Authors: Thorsten Holz, Per Larsen, and Ahmad-Reza Sadeghi
The Handbook of Multimodal-Multisensor Interfaces, Volume I
Author: Sharon Oviatt
Abstract:

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces – user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field and provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface design that support user choice, synergistically combine modalities with sensors, and blend multimodal input and output. The volume also offers an in-depth look at the most common multimodal-multisensor combinations – for example, touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. The chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of the volume, experts exchange views on a timely and controversial challenge topic and on how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.

The Handbook of Multimodal-Multisensor Interfaces, Volume II
Author: Sharon Oviatt
Abstract:

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces – user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field and provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This handbook volume begins with multimodal signal processing, architectures, and machine learning. It covers recent deep-learning approaches for processing multisensor and multimodal user data and interaction, as well as context-sensitivity. A further highlight of this volume is the processing of social and emotional information about users' states, an exciting emerging capability in next-generation user interfaces. In addition to providing an overview, chapters discuss real-time multimodal analysis of affect and social signals from various modalities, and the perception of affective expression by users. Additional chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. The handbook chapters provide a number of walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic and on how they believe multimodal-multisensor interfaces need to be equipped to most effectively advance human performance during the next decade.
