TITLES IN DEVELOPMENT

Forthcoming titles in the ACM Books Series are subject to change and will be published as they become available; Collection I will comprise 25 titles. Upon publication, each of the following books will appear in the ACM Digital Library and be accessible, in both PDF and ePub formats, to those with full-text access. Individual titles will be available for purchase from Morgan & Claypool, as well as from Amazon and Barnes & Noble. Please click on a title below for more information.

Algorithms and Methods in Structural Bioinformatics
Author: Nurit Haspel
Abstract:

Structural bioinformatics is the field concerned with the development and application of computational models for the prediction and analysis of macromolecular structures. The unique nature of protein and nucleotide structures has presented many computational challenges over the last three decades. The fast accumulation of data, together with the rapid increase in computational power, presents a unique set of challenges and opportunities in the analysis, comparison, modeling, and prediction of macromolecular structures and interactions.

The book is intended as a user's guide to key algorithms for problems related to macromolecular structure, with an emphasis on protein structure, function, and dynamics. It can be used as a textbook for a one-semester graduate course on algorithms in bioinformatics.

Data Cleaning
Author: Ihab Ilyas
Abstract:

Data quality is one of the most important problems in data management, since dirty data often leads to inaccurate analytics results and bad business decisions. Poor data across businesses and the government costs the U.S. economy $3.1 trillion a year, according to a 2012 report by InsightSquared.

Various tools and techniques have been proposed to detect data errors and anomalies. For example, data quality rules or integrity constraints have been proposed as a declarative way to describe legal or correct data instances. Any subset of data that does not conform to the defined rules is considered erroneous, which is also referred to as a violation.

Repairing dirty data sets is often a more challenging task. Multiple techniques with different objectives have been introduced. Some of these aim to minimally change the database, such that the data conforms to the declared quality rules; others involve users or knowledge bases to verify the repairs.

In this book, we discuss the main facets and directions in designing error detection and repairing techniques. We start by surveying anomaly detection techniques, based on what, how, and where to detect. We then propose a taxonomy of the various aspects of data repairing, including the repair target, the automation of the repair process, and the update model. The book also highlights new trends in data cleaning algorithms to cope with current Big Data settings, focusing on scalable data cleaning techniques for large data sets.

Database Replication
Author: Bettina Kemme
Abstract:

Database replication is widely used for fault tolerance, scalability, and performance. The failure of one database replica does not stop the system from working, as the available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and by adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and there are many hurdles to overcome. At the forefront is replica control: assuring that data copies remain consistent when updates occur. There exist many alternatives in regard to where updates can occur and when changes are propagated to data copies, how changes are applied, where the replication tool is located, etc. A particular challenge is to combine replica control with transaction management, which treats several operations as a single logical unit and provides atomicity, consistency, isolation, and durability across the replicated system.

This book provides a categorization of replica control mechanisms, presents several replica and concurrency control mechanisms in detail, and discusses many of the issues that arise when such solutions need to be implemented within or on top of relational database systems. Furthermore, the book presents the tasks that are needed to build a fault-tolerant replication solution, provides an overview of load-balancing strategies that allow load to be evenly distributed across all replicas, and introduces the concept of self-provisioning, which allows the replicated system to dynamically decide on the number of replicas needed to handle the current load. As performance evaluation is a crucial aspect of developing a replication tool, the book presents an analytical model of the scalability potential of various replication solutions.

Declarative Logic Programming: Theory, Systems, and Applications
Author: Michael Kifer and Yanhong Annie Liu
Empirical Software Engineering
Author: Dag Sjøberg
Heterogeneous Computing: Hardware & Software Perspectives
Author: Mohamed Zahran
Abstract:

This book addresses the challenges of moving from homogeneous to heterogeneous computing, where it is necessary to deal with computing nodes of different capabilities and characteristics, such as general-purpose cores, GPUs, FPGAs, automata processors, and neuromorphic chips.

Introduction to Computational Advertising
Author: Ricardo Baeza-Yates, Prabakhar Krishnamurthy, and Jian Yang
Abstract:

Web-based advertising has grown from nothing at the end of the last century to an annual market of over $100 billion, and is today the main advertising channel. Although this trend started on the desktop, mobile is rapidly catching up, already accounting for 25% of total revenue. The technology behind this market has also evolved and is nowadays a complex blend of computer science, economics, and sociology. This book covers the current state of the art of what is today called computational advertising. That includes the economy of Web advertising, how to understand and model consumer behavior, how to match ads and consumers, how to predict user response, and how to measure advertising effectiveness. We also cover ad allocation, campaign management and optimization, as well as fraud and privacy issues.

Principles of Graph Data Management and Analytics
Author: Amol Deshpande and Amarnath Gupta
Abstract:

Principles of Graph Data Management and Analytics is the first textbook on the subject for upper-level undergraduates, graduate students, and data management professionals who are interested in the new and exciting world of graph data management and computation. The book brings together two thinly connected disciplines – a database-minded approach to managing and querying graphs, and an analytics-driven approach to performing scalable computation on large graphs. It presents a detailed treatment of the underlying theory and algorithms, and of prevalent techniques and systems; it also presents textbook use cases and real-world problems that can be solved by combining database-centric and analysis-centric approaches. The book will enable students to understand the state of the art in graph data management, to effectively program currently available graph databases and graph analytics products, and to design their own graph data analysis systems. To help this process, the book supplements its textual material with several data sets, small and large, that will be made available through the book’s website. Several free and contributed software packages will also be provided for readers to practice with.

Research Frontiers of Multimedia
Author: Shih-Fu Chang
Software Evolution: Lessons Learned from Software History
Author: Kim Tracy
Abstract:

Software history has a deep impact on current software designers, computer scientists, and technologists. Decisions and design constraints made in the past are often unknown or poorly understood by current students, yet modern software systems build on software shaped by those earlier decisions and constraints. This work examines software history through specific software areas and extracts student-consumable practices, lessons, and trends that are useful in current and future software design. It also highlights key areas that are heavily used in modern software yet no longer taught in most computing programs. Written as a textbook, this book uses specific past and current cases to explore software evolution trends and their impact.

Tangible and Embodied Interaction
Author: Brygg Ullmer, Ali Mazalek, Orit Shaer, and Caroline Hummels
Abstract:

User interfaces for our increasingly varied computational devices have long been oriented toward graphical screens and virtual interactors. Since the advent of mass market graphical interfaces in the mid-1980s, most human-computer interaction has been mediated by graphical buttons, sliders, text fields, and their virtual kin.

And yet, humans are profoundly physical creatures. Throughout our history (and prehistory), our bodies have profoundly shaped our activities and our engagement with our world and with each other. Despite, and perhaps also because of, the many successes of keyboard, pointer, touch-screen, and (increasingly) speech modalities of computational interaction, many have sought alternate prospects for interaction that more deeply respect, engage, and celebrate our embodied physicality.

For several decades, tangible and embodied interaction (TEI) has been the topic of intense technological, scientific, artistic, humanistic, and mass-market research and practice. In this book, we elaborate on many dimensions of this diverse, transdisciplinary, blossoming topic.

The Continuing Arms Race: Code-Reuse Attacks and Defenses
Author: Thorsten Holz, Per Larsen, and Ahmad-Reza Sadeghi
The Handbook of Multimodal-Multisensor Interfaces, Volume II
Author: Sharon Oviatt, Bjorn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Kruger
Abstract:

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is processing of information about users’ states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities, and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field. In the final section of this volume, experts exchange views on two timely and controversial challenge topics, including interdisciplinary approaches to optimizing strategic fusion and to multimodal deep learning. These discussions focus on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.


The Handbook of Multimodal-Multisensor Interfaces, Volume III
Author: Sharon Oviatt, Bjorn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Kruger
Abstract:

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This volume focuses on state-of-the-art multimodal language and dialogue processing, including semantic integration of modalities. The development of increasingly expressive embodied agents and robots has become an active test-bed for coordinating multimodal dialogue input and output, including processing of language and nonverbal communication. In addition, major application areas are featured for commercializing multimodal-multisensor systems, including automotive, robotic, manufacturing, machine translation, banking, communications, and others. These systems rely heavily on software tools, data resources, and international standards to facilitate their development. For insights into the future, emerging multimodal-multisensor technology trends are highlighted for medicine, robotics, interaction with smart spaces, and similar topics. Finally, this volume discusses the societal impact of more widespread adoption of these systems, such as privacy risks and how to mitigate them. The handbook chapters provide a number of walk-through examples of system design and processing, information on practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. 

In the final section of this volume, experts exchange views on a timely and controversial challenge topic and on how they believe multimodal-multisensor interfaces must be equipped to most effectively advance human performance during the next decade.

The Sparse Fourier Transform: Theory & Practice
Author: Haitham Hassanieh
Abstract:

ACM Books, an imprint of the Association for Computing Machinery, announces the signing of a new title, The Sparse Fourier Transform: Theory & Practice, by Haitham Hassanieh of University of Illinois at Urbana-Champaign. The book is expected to publish in early 2018. In the book, Hassanieh presents a new way to decrease the amount of computation needed to process data, thus increasing the efficiency of programs in several areas of computing, processing data 10 to 100 times faster than possible before. He then shows how this new algorithm can be used to build practical systems to solve key problems in six different applications including wireless networks, mobile systems, computer graphics, medical imaging, biochemistry and digital circuits. Hassanieh received the 2016 ACM Doctoral Dissertation Award for his MIT dissertation on the subject, from which this book is being expanded and enhanced. Hassanieh is an Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Illinois at Urbana-Champaign. He received his MS and PhD in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT). A native of Lebanon, he earned a BE in Computer and Communications Engineering from the American University of Beirut. Hassanieh’s Sparse Fourier Transform algorithm was chosen by MIT Technology Review as one of the top 10 breakthrough technologies of 2012. He has also been recognized with the Sprowls Award for Best Dissertation in Computer Science, and the SIGCOMM Best Paper Award.
