TITLES IN DEVELOPMENT

Forthcoming titles in the ACM Books Series are subject to change and will be published as they become available, with 25 titles planned for Collections I and II. Upon publication, each of the following books will appear in the ACM Digital Library and be accessible in both PDF and ePub formats to those with full-text access. Individual titles will be available for purchase from Morgan & Claypool, as well as from Amazon and Barnes & Noble. Please click on a title below for more information about each title.

Advanced Topics in Information Retrieval
Abstract:

Information Retrieval technology powers much of the search functionality that runs the Web. As different web properties evolve over time and new services become mainstream, the ability to search for pages, people, music, messages, posts, and other objects is an essential part of our daily digital life. To make sure that such functionality works at scale and from any device, new techniques and topics have emerged. In this book we describe such techniques and use cases in detail and provide an extensive bibliography. The book should be of interest to researchers, practitioners, and graduate students who want to understand the latest advances in this area.

Algorithms and Methods in Structural Bioinformatics
Abstract:

Structural bioinformatics is the field concerned with the development and application of computational models for the prediction and analysis of macromolecular structures. The unique nature of protein and nucleotide structures has presented many computational challenges over the last three decades. The fast accumulation of data, together with the rapid increase in computational power, presents a distinctive set of challenges and opportunities in the analysis, comparison, modeling, and prediction of macromolecular structures and interactions.

The book is intended as a user's guide for key algorithms to solve problems related to macromolecular structure, with emphasis on protein structure, function and dynamics. It can be used as a textbook for a one-semester graduate course in algorithms in bioinformatics.

Artificial Intelligence and Society
Abstract:

The collection of essays in Artificial Intelligence and Society provides the first systematic and comprehensive coverage of how the field of Artificial Intelligence (AI) has impacted society. In the last few decades, AI has been used to help conserve endangered species, aid drug discovery and precision surgery, detect anomalies in medical imaging, improve emergency and disaster response, design markets for organ exchange, and address homelessness, among many other applications. However, as AI has been used for decision-making, concerns about privacy, security, fairness, and the interpretability of the decisions made (or suggested) by technology have also emerged. This edited volume will collate contributions from leading AI researchers whose work has advanced the algorithmic state of the art in AI and made significant contributions to society. The book is organized into five sections: sustainability; health; smart cities and urban planning; privacy and security; and fairness, accountability, and transparency. Each section will consist of several tutorial-style essays discussing specific societal challenges, relevant background material (mathematical and algorithmic), a brief literature review, the core algorithmic content, and a description of how the societal challenge has been addressed. The ultimate aim of this edited volume is to enlighten students about how AI algorithms and data-driven modeling have impacted society from a diverse range of perspectives and impact areas.

Birth of the Database: The Life and Works of Charles W. Bachman
Abstract:

ACM Books is pleased to announce the signing of a new book in our Turing Award series, Birth of the Database: The Life and Works of Charles W. Bachman, edited by Gary Rector of Salt River Project.

Bachman received the prestigious ACM Turing Award in 1973 “for his outstanding contributions to database technology.” Bachman was the first Turing Award winner without a Ph.D., and the first who would spend his whole career in industry.

The book will feature a short biography of Bachman, the history of the creation of Bachman diagrams and IDS (the first direct-access database), and analysis of Bachman’s influence by his contemporaries and today's leading computer scientists.

Curiosity, Clarity, and Caring: How Jim Gray’s Passion for Learning, Teaching, and People Changed Computing
Abstract:

This book in the ACM Turing Award series focuses on the life and contributions of Jim Gray. The underlying themes are two-fold: (1) People: Jim was dedicated to the mentoring, nurturing, and development of individuals and the scientific community, with a special emphasis on computer science education; and (2) Technology: Jim’s amazing curiosity and passion for learning led him to dig deeply into new and uncharted areas within systems computing. He combined that with prodigious and clear explanations of fundamental concepts and detailed nuance. Even as Jim turned his gaze to new domains, he watched as his earlier research areas evolved and matured. He would then pop back years later with new insights and papers offering major contributions to augment his pioneering work. We follow Jim’s exploration of eight major areas of systems computing: systems, transactions, databases, availability, performance, sort, scale, and eScience. The ninth, called vision, highlights the many contributions Jim made at a more meta level as he described the impact of technology trends and underlying “laws of computing”, offered engineering rules of thumb, and helped frame both the past and the future of computing.

Data Series Management and Analytics: Time Series, Sequences, High-Dimensional Vectors
Abstract:

Several applications in diverse domains have an increasingly pressing need for techniques able to manage and analyze very large collections of sequences (also known as time series, or data series). Examples of such applications come from social media analytics and internet service providers, as well as from a multitude of scientific domains that need to apply machine learning techniques for knowledge extraction. It is not unusual for these applications to involve hundreds of millions to billions of data series, which are oftentimes not analyzed in their full detail due to their sheer size. In this book, we describe the theory and methods necessary for managing big data sequences, and for building corresponding systems that enable scalable management of, and complex analytics on, very large sequence collections. To this effect, we describe recent developments in the areas of summarizing, indexing, searching for similar sequences, analyzing (motif discovery, anomaly discovery, classification, clustering), and visualizing large data series collections. Finally, we discuss the applicability of data series techniques to generic high-dimensional vectors.

Database Replication
Abstract:

Database replication is widely used for fault tolerance, scalability, and performance. The failure of one database replica does not stop the system from working, as the available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas and adding new replicas should the load increase. Finally, database replication can provide fast local access, even to geographically distributed clients, if data copies are located close to the clients. Despite its advantages, replication is not a straightforward technique to apply, and there are many hurdles to overcome. At the forefront is replica control: assuring that data copies remain consistent when updates occur. There exist many alternatives regarding where updates can occur, when changes are propagated to data copies, how changes are applied, where the replication tool is located, and so on. A particular challenge is to combine replica control with transaction management, which requires several operations to be treated as a single logical unit and provides atomicity, consistency, isolation, and durability across the replicated system.

This book provides a categorization of replica control mechanisms, presents several replica and concurrency control mechanisms in detail, and discusses many of the issues that arise when such solutions need to be implemented within or on top of relational database systems. Furthermore, the book presents the tasks that are needed to build a fault-tolerant replication solution, provides an overview of load-balancing strategies that allow load to be evenly distributed across all replicas, and introduces the concept of self-provisioning, which allows the replicated system to dynamically decide on the number of replicas needed to handle the current load. As performance evaluation is a crucial aspect when developing a replication tool, the book presents an analytical model of the scalability potential of various replication solutions.

Foundations of Computation and Machine Learning: The Work of Leslie Valiant
Abstract:

ACM Books is pleased to announce the signing of a new book in our Turing Award series, Foundations of Computation and Machine Learning: The Work of Leslie Valiant, edited by Rocco Servedio of Columbia University.

Valiant received the prestigious ACM Turing Award in 2010 "for transformative contributions to the theory of computation, including the theory of probably approximately correct (PAC) learning, the complexity of enumeration and of algebraic computation, and the theory of parallel and distributed computing."

The book will feature a short biography of Valiant, as well as analysis of his seminal works by today's leading computer scientists.

From Algorithms to Thinking Machines
Abstract:

From Algorithms to Thinking Machines uses an informative style to present the concepts of algorithms, data, and computation, and to discuss the role of algorithms and computers in ruling and shaping the world in which we live and work. The main goal is to help readers clearly understand the power and impact of the pervasive use of algorithms on human lives. The book combines a popular approach with a well-founded scientific description, aiming to discuss both principles and applications of algorithms, Big Data, and machine intelligence. Its eleven chapters provide a clear and deep description of algorithms, software systems, data-driven applications, machine learning and data science concepts, and the evolution and impact of artificial intelligence. After introducing algorithm principles, data analysis tasks, and learning processes from large volumes of data, the book examines the relationships between algorithms and human work, discussing how jobs are changing and how computers and their software programs are influencing human life and the labor sphere. Topics like value alignment, collective intelligence, the impact of Big Data, automatic decision methods, social control, and the political use of algorithms are illustrated and discussed in a descriptive writing style while ensuring careful and appropriate use of technical notions and subjects. Issues related to the role that big companies, governments, and autocratic regimes are playing in the exploitation of algorithms and machine intelligence methods to influence people, laws, and markets are also addressed. Finally, the book analyzes ethical principles in software programming and human values in artificial intelligence algorithms, which run in many real-world domains with increasing autonomy and proficiency.

Geospatial Data Science: A Hands-On Approach for Developing Geospatial Applications
Abstract:

The purpose of this book is to teach readers how to develop geospatial applications easily, based on the principles and software tools of geospatial data science. Geospatial data science is the science of collecting, organizing, analyzing, and visualizing geospatial data. The book introduces a new generation of geospatial technologies based on the Semantic Web and Linked Data paradigms, and shows how data scientists can use them to build environmental applications easily. The book is aimed at researchers and practitioners who would like to know more about this research area, and it can also be used as a textbook for a final-year undergraduate or graduate course. Every chapter contains exercises that help readers master the material covered in that chapter.

The topics covered by the book in detail are: geospatial data modeling, geospatial data and metadata, geospatial data formats and OGC standards, geospatial ontologies and linked geospatial data models, querying geospatial data expressed in RDF, querying evolving linked geospatial data, visualizing linked geospatial data, transforming geospatial data into RDF, interlinking geospatial data sources, geospatial ontology-based data access, and incomplete geospatial information.

Introduction to Computational Advertising
Abstract:

Online advertising has grown from almost nothing at the end of the last century to an annual spend of over 200 billion dollars globally. Today, online advertising garners the most advertising dollars of any advertising channel, including TV. Online advertising is computational advertising, since most of the decisions about which ads to show to a user in a given context are determined by algorithms. Indeed, computational advertising was one of the first big data applications. For this reason, the problems behind computational advertising have driven research into large-scale machine learning and algorithmic game theory, and are responsible for many advances in those areas as well as in parallel computing architectures. This book covers the current state of the art of computational advertising. That includes the economics of online advertising, understanding and modeling consumer behavior, matching ads and consumers, user response prediction, and measurement of ad effectiveness. We also cover ad allocation, campaign management and optimization, as well as fraud and privacy issues. Today, computational advertising intersects computer science, economics, marketing, and psychology. Hence, after 20 years of advances in this field, we hope this book fills the needs of researchers, practitioners, and graduate students who want to understand the state of the art in this multidisciplinary area.

Linking the World’s Information: Tim Berners-Lee’s Invention of the World Wide Web
Abstract:

Sir Tim Berners-Lee was awarded the 50th anniversary ACM Turing Award "For inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale." This new book, one in a series dedicated to Turing Award winners, looks at the life and work of Berners-Lee. It features a short biography, seminal research, and commentary from leading computer scientists on the evolution and impact of his work.

Logic and Computational Complexity: Works of Stephen A. Cook
Abstract:

Stephen A. Cook was awarded the ACM Turing Award in 1982, in recognition of "his advancement of our understanding of the complexity of computation in a significant and profound way." Cook's theory of NP-completeness is one of the most fundamental and enduring contributions in computer science, and has had a significant impact outside the field. This volume will present works on NP-completeness along with other contributions which, while perhaps not as well known, have also had a significant impact on computing theory and practice, as well as on mathematical logic. With additional material, including a biographical chapter, Professor Cook's Turing Award address, and a full bibliography of his work, the volume will provide an excellent resource for anyone wishing to understand the foundations of Cook's work as well as its ongoing significance and relevance to current research problems in computing and beyond.

Pick, Click, Flick! The Story of Interaction Techniques
Abstract:

This book provides a comprehensive study of the many ways to interact with computers and computerized devices. An “interaction technique” starts when the user performs an action that causes an electronic device to respond, and it includes the direct feedback from the device to the user. Examples include physical buttons and switches; on-screen menus and scrollbars operated by a mouse; touchscreen widgets and gestures such as flick-to-scroll; text entry on computers and touchscreens; consumer electronics controls such as remote controls; game controllers; input for virtual reality systems, such as waving a Nintendo Wii wand or your hands in front of a Microsoft Kinect; interactions with conversational agents such as Apple Siri, Google Assistant, Amazon Alexa, or Microsoft Cortana; and adaptations of all of these for people with disabilities. The book starts with a history of the invention and development of these techniques, discusses the various options used today, and continues on to the future with the latest research on interaction techniques, such as that presented at academic conferences. Sections also cover how to use, model, implement, and evaluate new interaction techniques. The goal of the book is to be useful to anyone interested in why we interact with electronic devices the way we do; to designers creating the interaction techniques of tomorrow who need to know the options and constraints and what has been tried; and to implementers and consumers who want to get the most out of their interaction techniques.

Pointer Analysis: Theory and Practice
Abstract:

Pointer analysis provides information to disambiguate indirect reads and writes of data through pointers, as well as indirect control flow through function pointers or virtual functions. It thus enables the application of other program analyses to programs containing pointers. There is a large body of literature on pointer analysis. However, no existing material brings out a uniform, coherent theme by separating fundamental concepts from advanced techniques and tricks. This book fills that void.

The book focuses on fundamental concepts instead of trying to cover the entire breadth of the literature on pointer analysis. Bibliographic notes point the reader to relevant literature for more details.
Rather than being driven completely by pointer analysis’s practical effectiveness, the book develops the concepts from first principles based on language features, brings out the interactions of different abstractions at the level of ideas, and, finally, relates them to practical observations and the nature of practical programs.
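
As a small illustrative sketch, not drawn from the book itself, the following C fragment shows the kind of question pointer analysis must answer; the variable names and the example are ours. The indirect write through p may modify either x or y, depending on which variable p aliases at run time, and a points-to analysis must report both possibilities.

    #include <stdio.h>

    /* The indirect write "*p = 42" updates whichever variable p points to.
     * A points-to analysis must report that p may point to either x or y,
     * which is exactly the disambiguation information other analyses need. */
    int main(int argc, char **argv) {
        int x = 1, y = 2;
        int *p = (argc > 1) ? &x : &y;  /* p's target depends on a runtime condition */
        *p = 42;                        /* indirect write: which variable changes? */
        printf("x = %d, y = %d\n", x, y);
        return 0;
    }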

Principles of Graph Data Management and Analytics
Abstract:

Principles of Graph Data Management and Analytics is the first textbook on the subject for upper-level undergraduates, graduate students, and data management professionals who are interested in the new and exciting world of graph data management and computation. The book blends together two thinly connected disciplines: a database-minded approach to managing and querying graphs, and an analytics-driven approach to performing scalable computation on large graphs. It presents a detailed treatment of the underlying theory and algorithms and of prevalent techniques and systems; it also presents textbook use cases and real-world problems that can be solved by combining database-centric and analysis-centric approaches. The book will enable students to understand the state of the art in graph data management, to effectively program currently available graph databases and graph analytics products, and to design their own graph data analysis systems. To help this process, the book supplements its textual material with several data sets, small and large, that will be made available through the book’s website. Several free and contributed software packages will also be provided for readers to practice with.

Prophets of Computing
Abstract:

When electronic digital computers made their first public appearance after World War II, they were widely perceived as a revolutionary force. Business management, the world of work, the nation state, and soon enough everyday life were expected to change dramatically when these machines became widely used. Ever since those early days, prophecies of computing have continued to emerge, up to the present day.

As computing began to spread beyond the US and UK to countries with different political structures and cultural traditions, questions arose about what a future society with computers would look like. This volume investigates to what extent the prophecies of computing in those countries also differed, how much they had in common, and how to understand these divergences and convergences.

This book contains case studies on thirteen countries, based on source material in ten different languages—the accomplishment of an international team of scholars. In addition to analyses of debates, political changes, and popular speculation in these countries, we also show a wide range of pictorial representations of “the future with computers.”

Rendering History:  The Women of ACM-W
Abstract:

The process of creating 3D graphics – adding surfaces, texture, and lighting to a bare wireframe – inspires Rendering History: The Women of ACM-W (the Association for Computing Machinery’s Council on Women in Computing). To tell the history of ACM-W, the book is organized into three main parts that extend the analogy of rendering 3D graphics. Developing the history of ACM-W (like creating a 3D graphic) requires surfaces; the anchor (or surface) of Rendering History is the traditional retelling of ACM-W’s story based on annual reports, newsletters, and interviews. The texture of the book emerges as an annotated bibliography that outlines the major literature within ACM-W’s purview: K-16 computing education and the girls and women in this school range. A 3D rendering finishes with lighting. A history of ACM-W is likewise incomplete without lighting: an accounting of the nearly 100 women who have served ACM from 1993 to the present. The women of ACM-W co-founded some of the most influential institutions that recruit, retain, and celebrate women in computing. They launched projects to broaden participation in computing. The women work in industry and in academia; some author books; a few are now retired. The women conduct research; they teach; they lead organizations. The women of ACM-W have received millions of dollars in grant funding and have won prominent awards. Rendering History allows a sample of ACM-W members to inspire and motivate by narrating accounts of their lives. The description of ACM-W’s projects, the annotated bibliography, and the personal stories together form a fully rendered history of ACM-W.

Static Program Analysis
Abstract:

Static program analysis studies the behavior of programs under all possible inputs. It is an area with a wealth of applications, in virtually any tool that processes programs. A compiler needs static analysis in order to detect errors (e.g., undefined variables) or to optimize code (e.g., eliminate casts or devirtualize calls). A refactoring or program-understanding tool needs global program analysis in order to answer useful questions such as “where could this program variable have been set?” or “which parts of the program can influence this value?” A security analyzer needs program analysis to determine “can the arguments of this private operation ever be affected by untrusted user input?” A concurrency bug detector needs program analysis in order to tell whether a program can ever deadlock or have races. Static program analysis is practically valuable, but it is also hard. It is telling that the quintessential undecidable computing problem, the “halting problem”, is typically phrased as a program analysis question: “can there be a program that accepts another program as input and determines whether the latter always terminates?” Other program analysis problems have given rise to some of the best-known techniques and algorithms in computer science (e.g., data-flow frameworks). This book offers a comprehensive treatment of the principles, concepts, techniques, and applications of static program analysis, illustrated and explained. The emphasis is on understanding the tradeoffs of different kinds of static program analysis algorithms and on appreciating the main factors for critically evaluating a static analysis and its suitability for practical tasks.
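
As a small illustrative sketch, not taken from the book, the following C program makes the quoted questions concrete; the variable names are ours. A static analysis asking where the variable limit could have been set must conservatively report both assignments, and a security analysis would flag the first as influenced by untrusted user input.

    #include <stdio.h>
    #include <stdlib.h>

    /* A static analysis cannot know the program's input in advance, so when it
     * asks "where could 'limit' have been set?" at the printf call, it must
     * report both assignments below as possible (reaching) definitions.
     * A taint analysis would additionally flag the first one, since its value
     * comes from untrusted user input. */
    int main(int argc, char **argv) {
        int limit;
        if (argc > 1)
            limit = atoi(argv[1]);  /* value influenced by untrusted user input */
        else
            limit = 10;             /* program-defined default */
        printf("limit = %d\n", limit);
        return 0;
    }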

Tangible and Embodied Interaction
Abstract:

User interfaces for our increasingly varied computational devices have long been oriented toward graphical screens and virtual interactors. Since the advent of mass market graphical interfaces in the mid-1980s, most human-computer interaction has been mediated by graphical buttons, sliders, text fields, and their virtual kin.

And yet, humans are profoundly physical creatures. Throughout our history (and prehistory), our bodies have shaped our activities and our engagement with the world and with each other. Despite, and perhaps also because of, the many successes of keyboard, pointer, touchscreen, and (increasingly) speech modalities of computational interaction, many have sought alternative prospects for interaction that more deeply respect, engage, and celebrate our embodied physicality.

For several decades, tangible and embodied interaction (TEI) has been the topic of intense technological, scientific, artistic, humanistic, and mass-market research and practice. In this book, we elaborate on many dimensions of this diverse, transdisciplinary, blossoming topic.

The Societal Impacts of Algorithmic Decision-Making
Abstract:

Algorithms are used to make decisions in an ever-increasing number of socially consequential domains. From risk assessment tools in the criminal justice system to content moderation tools to assessments in hiring, algorithms play a key role in shaping the lives of people around the world. Algorithms offer many potential benefits: they are consistent, scalable, and can leverage more data than any human could reasonably consume. However, without careful consideration, algorithmic decision-making also carries a number of risks, such as replicating human biases, creating perverse incentives, and propagating misinformation. This thesis seeks to develop principles for the responsible deployment of algorithms in applications of societal concern, realizing their benefits while addressing their potential harms. What does it mean to make decisions fairly? How do theoretical ideas about societal impacts manifest in practice? How do existing legal protections apply in algorithmic settings, and how can technical insights inform policy?

In this thesis, we explore these questions from a variety of perspectives. Part II leverages theoretical models to surface challenges posed by algorithmic decision-making and potential avenues to overcome them. Part III incorporates models of behavior to better understand the interplay between algorithms and human decisions. In Part IV, we explore how these insights manifest in practice, studying applications in employment and credit scoring contexts. We conclude in Part V with open directions for future research.

Verification and Synthesis of Network Routing
Abstract:

Modern trends in networking, such as the rise of cloud computing and the progressive rollout of large datacenter networks, have complicated network management and created a great need in practice for tools that can ensure the reliability of our physical networks. To address this need, in recent years many researchers, including myself, have explored how to apply formal methods techniques, including verification and synthesis, to validate the correctness of networks at scale. However, despite much recent interest in the fields of network verification and synthesis, the topics remain inscrutable to most practitioners and researchers, requiring strong backgrounds in both networking principles and formal methods techniques. Moreover, the fields are relatively new, and most of the knowledge is locked away in the heads of a small number of people.

The goal of this publication is to provide an in-depth exploration of recent advances in formal methods as applied specifically to the area of network routing (layer 3). The monograph also aims to provide enough background to be accessible to readers without prior experience in either networking or formal methods. The target audiences are (1) researchers and new graduate students looking to work in the fields of network verification or network synthesis, and (2) industrial practitioners looking to build practical tools to improve network reliability.

View Published Titles