Published Titles

Communities of Computing: Computer Science and Society in the ACM

By: Thomas J. Misa
ISBN: 978-1-97000-187-7
DOI: 10.1145/2973856
Table of Contents

Communities of Computing is the first book-length history of the Association for Computing Machinery (ACM), founded in 1947 and with a membership today of 100,000 worldwide. It profiles ACM’s notable SIGs, active chapters, and individual members, setting ACM’s history into a rich social and political context. The book’s 12 core chapters are organized into three thematic sections. “Defining the Discipline” examines the 1960s and 1970s when the field of computer science was taking form at the National Science Foundation, Stanford University, and through ACM’s notable efforts in education and curriculum standards. “Broadening the Profession” looks outward into the wider society as ACM engaged with social and political issues—and as members struggled with balancing a focus on scientific issues and awareness of the wider world.

Chapters examine the social turbulence surrounding the Vietnam War, debates about the women’s movement, efforts for computing and community education, and international issues including professionalization and the Cold War. “Expanding Research Frontiers” profiles three areas of research activity where ACM members and ACM itself shaped notable advances in computing, including computer graphics, computer security, and hypertext.

Featuring insightful profiles of notable ACM leaders, such as Edmund Berkeley, George Forsythe, Jean Sammet, Peter Denning, and Kelly Gotlieb, and honest assessments of controversial episodes, the volume deals with compelling and complex issues involving ACM and computing. It is not a narrow organizational history of ACM committees and SIGs, although much information about them is given. All chapters are original works of research. Many chapters draw on archival records of ACM’s headquarters, ACM SIGs, and ACM leaders. This volume makes a permanent contribution to documenting the history of ACM and understanding its central role in the history of computing.

Text Data Management and Analysis

By: ChengXiang Zhai, Sean Massung
ISBN: 978-1-97000-117-4
DOI: 10.1145/2915031
Table of Contents

Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people analyze and manage vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or sensors, text data are usually generated directly by humans and carry semantically rich content. As such, text data are especially valuable for discovering knowledge about human opinions and preferences, in addition to many other kinds of knowledge that we encode in text. In contrast to structured data, which conform to well-defined schemas and are thus relatively easy for computers to handle, text has less explicit structure, so computer processing is required to understand the content it encodes. Current natural language processing technology has not yet reached the point of enabling a computer to precisely understand natural language text, but a wide range of statistical and heuristic approaches to the analysis and management of text data have been developed over the past few decades. They are usually very robust and can be applied to analyze and manage text data in any natural language, about any topic.

This book provides a systematic introduction to all these approaches, with an emphasis on covering the most useful knowledge and skills required to build a variety of practically useful text information systems. The focus is on text mining applications that can help users analyze patterns in text data to extract and reveal useful knowledge. Information retrieval systems, including search engines and recommender systems, are also covered as supporting technology for text mining applications. The book covers the major concepts, techniques, and ideas in text data mining and information retrieval from a practical viewpoint, and includes many hands-on exercises designed with a companion software toolkit (i.e., MeTA) to help readers learn how to apply techniques of text mining and information retrieval to real-world text data and how to experiment with and improve some of the algorithms for interesting application tasks. The book can be used as a textbook for a computer science undergraduate course or a reference book for practitioners working on relevant problems in analyzing and managing text data.
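
As a rough taste of the kind of retrieval technique the book covers, the following Python sketch scores documents against a query with simple TF-IDF weighting. It is a toy illustration only, not MeTA's API; the corpus, tokenizer, and weighting scheme are simplified assumptions.

```python
import math
from collections import Counter

# Toy corpus; in practice this would come from web pages, reviews, etc.
docs = [
    "the quick brown fox jumps over the lazy dog",
    "text mining extracts useful knowledge from text data",
    "search engines rank documents by relevance to a query",
]

def tokenize(text):
    return text.lower().split()

# Document frequency of each term, used for the IDF component.
df = Counter()
for d in docs:
    df.update(set(tokenize(d)))

def tfidf_score(query, doc):
    """Score a document against a query with a simple TF-IDF dot product."""
    tf = Counter(tokenize(doc))
    score = 0.0
    for term in tokenize(query):
        if term in tf:
            idf = math.log(len(docs) / df[term])
            score += tf[term] * idf
    return score

query = "text mining"
ranked = sorted(docs, key=lambda d: tfidf_score(query, d), reverse=True)
print(ranked[0])  # the text-mining document ranks first
```

Production systems replace this linear scan with inverted indexes, use refined weighting schemes such as BM25, and add systematic evaluation, which is the level of practice the book's hands-on material targets.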

Reviews

In general terms, the authors typically provide verbose descriptions of the reasons behind the design of specific techniques, with numerical examples and illustrative figures from the slides of two massive open online courses (MOOCs) offered by the first author on Coursera. They also provide specific sections that describe in detail the proper way to evaluate every different kind of technique, a key factor to be taken into account when applying the discussed techniques in practice.

The book, however, is not always self-contained, since its broad scope in a limited number of pages entails an unavoidable depth/breadth tradeoff. Most basic techniques can be implemented just by following the instructions and guidelines in the text, although interested readers might need to resort to the bibliographic references if they want to gain a thorough understanding of the many advanced techniques. Fortunately, the authors include some bibliographic notes and very selective suggestions for further reading at the end of each chapter, instead of the encyclopedic collection of references common in many other textbooks.

Although readers will not find detailed coverage of NLP techniques and some chapters might seem lacking in depth, advanced undergraduate students might find this book to be a valuable reference for getting acquainted with both information retrieval and text mining in a single volume, a worthwhile achievement for a 500-page textbook.
Fernando Berzal - In "Computing Reviews"

An Architecture for Fast and General Data Processing on Large Clusters

By: Matei Zaharia
ISBN: 978-1-97000-157-0
DOI: 10.1145/2886107
Table of Contents

The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, myriad data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters.

At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads, but support these new applications too.

This book, a revised version of the 2014 ACM Doctoral Dissertation Award-winning dissertation, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems only support simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing.

We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective.
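
As a minimal sketch of the RDD programming model described above, written against the public PySpark API rather than taken from the dissertation, the example below caches a dataset in memory so that two different computations can share it. The input file name data.txt is a placeholder.

```python
from pyspark import SparkContext

# Minimal RDD sketch; assumes a local Spark installation.
sc = SparkContext("local[*]", "rdd-sketch")

# Load a text file as an RDD and cache it in memory so that several
# computations can share the data instead of rereading it from disk.
lines = sc.textFile("data.txt").cache()

# One-pass batch computation: classic word count.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.take(5))

# A second computation over the same cached RDD; this in-memory reuse
# is the kind of data sharing that multi-pass algorithms rely on.
error_lines = lines.filter(lambda line: "ERROR" in line).count()
print(error_lines)

sc.stop()
```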

This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.

Reactive Internet Programming: State Chart XML in Action

By: Franck Barbier
ISBN: 978-1-97000-177-8
DOI: 10.1145/2872585
Table of Contents

Is Internet software so different from “ordinary” software? This book answers this question in practical terms through the presentation of a software design method based on the State Chart XML (SCXML) W3C standard along with Java. Web enterprise, Internet-of-Things, and Android applications, in particular, are seamlessly specified and implemented from “executable models.”

Internet software puts forward the idea of event-driven or reactive programming, as pointed out in Bonér et al.’s “Reactive Manifesto” (http://www.reactivemanifesto.org). It tells us that reactiveness is a must. However, beyond concepts, software engineers require effective means with which to put reactive programming into practice. This book’s purpose is to outline and explain such means.

The lack of professional examples in the literature that illustrate how reactive software should be shaped can be quite frustrating. Therefore, this book helps to fill in that gap by providing in-depth professional case studies that contain comprehensive details and meaningful alternatives. Furthermore, these case studies can be downloaded for further investigation.

Internet software requires a higher degree of adaptation, particularly at run time. After reading Reactive Internet Programming, the reader will therefore be ready to enter the forthcoming Internet era.

Reviews

If I may harken back to my initial disclaimer, namely, that I’m not an Internet programmer, I wish to say that in 1988 (in an aerospace venue) I urged my employer to purchase Harel’s StateMate tool. The proverbial bean counters did their job in rejecting the request. I’m happy to see this excellent book’s arrival, along with its open-source superstructure. It is very much worth reading, both for theory and for real-world application to complex event-driven systems.
George Hacken - Computing Reviews

Verified Functional Programming in Agda

By: Aaron Stump
ISBN: 978-1-97000-127-3
DOI: 10.1145/2841316
Table of Contents

Agda is an advanced programming language based on Type Theory. Agda’s type system is expressive enough to support full functional verification of programs, in two styles. In external verification, we write pure functional programs and then write proofs of properties about them. The proofs are separate external artifacts, typically using structural induction. In internal verification, we specify properties of programs through rich types for the programs themselves. This often necessitates including proofs inside code, to show the type checker that the specified properties hold. The power to prove properties of programs in these two styles is a profound addition to the practice of programming, giving programmers the power to guarantee the absence of bugs, and thus improve the quality of software more than previously possible.

Verified Functional Programming in Agda is the first book to provide a systematic exposition of external and internal verification in Agda, suitable for undergraduate students of Computer Science. No familiarity with functional programming or computer-checked proofs is presupposed. The book begins with an introduction to functional programming through familiar examples like booleans, natural numbers, and lists, and techniques for external verification. Internal verification is considered through the examples of vectors, binary search trees, and Braun trees. More advanced material on type-level computation, explicit reasoning about termination, and normalization by evaluation is also included. The book also includes a medium-sized case study on Huffman encoding and decoding.
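
To give a flavor of the external-verification style described above, here is a minimal sketch written in Lean rather than Agda (the book's own examples use Agda): a pure function is defined first, and a property about it is then proved separately by structural induction.

```lean
-- External verification, Lean flavor: define a pure function,
-- then prove a property about it as a separate artifact.
def rev : List Nat → List Nat
  | []      => []
  | x :: xs => rev xs ++ [x]

-- Proof by structural induction that reversing preserves length.
theorem rev_length (xs : List Nat) : (rev xs).length = xs.length := by
  induction xs with
  | nil => simp [rev]
  | cons x xs ih => simp [rev, List.length_append, ih]
```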

Reviews

Verified Functional Programming in Agda is an excellent introduction to the field of dependently typed programming. Stump does a great job of making the subject accessible to beginners without shying away from the more advanced topics.
Ulf Norell - Chalmers University of Technology, Sweden

Previously, when someone asked me how to learn dependently typed programming, I'd point them to various tutorials, papers, and blog posts about Agda. Now, I just give them this book.
Jesper Cockx - K. U. Leuven, Belgium

Ada’s Legacy: Cultures of Computing from the Victorian to the Digital Age

By: Robin Hammerman and Andrew L. Russell
ISBN: 978-1-97000-149-5
DOI: 10.1145/2809523
Table of Contents

Ada's Legacy illustrates the depth and diversity of writers, thinkers, and makers who have been inspired by Ada Lovelace, the English mathematician and writer. The volume, which commemorates the bicentennial of Ada's birth in December 1815, celebrates Lovelace's many achievements as well as the impact of her life and work, which have reverberated widely since the late nineteenth century. In the 21st century we have seen a resurgence in Lovelace scholarship, due to the growth of interdisciplinary thinking and the expanding influence of women in science, technology, engineering, and mathematics. Ada's Legacy is a unique contribution to this scholarship, thanks to its combination of papers on Ada's collaboration with Charles Babbage, Ada's position in the Victorian and Steampunk literary genres, Ada's namesake programming language, Ada's representation in and inspiration of contemporary art, and her continued relevance in discussions about gender and technology in the digital age.

Because of its broad focus on subjects that reach far beyond the life and work of Ada herself, Ada's Legacy will appeal to readers who are curious about her enduring importance in computing and the wider world.

Edmund Berkeley and the Social Responsibility of Computer Professionals

By: Bernadette Longo
ISBN: 978-1-97000-139-6
DOI: 10.1145/2787754
Table of Contents

Edmund C. Berkeley (1909–1988) was a mathematician, insurance actuary, inventor, publisher, and a founder of the Association for Computing Machinery (ACM). His book Giant Brains, or Machines That Think (1949) was the first explanation of computers for a general readership. His journal Computers and Automation (1951–1973) was the first journal for computer professionals. In the 1950s, Berkeley developed mail-order kits for small, personal computers such as Simple Simon and the Brainiac. In an era when computer development was on a scale barely affordable by universities or government agencies, Berkeley took a different approach and sold simple computer kits to average Americans. He believed that digital computers, using mechanized reasoning based on symbolic logic, could help people make more rational decisions. The result of this improved reasoning would be better social conditions and fewer large-scale wars. Although Berkeley’s populist notions of computer development in the public interest did not prevail, the events of his life exemplify the human side of ongoing debates concerning the social responsibility of computer professionals.

This biography of Edmund Berkeley, based on primary sources gathered over 15 years of archival research, provides a lens to understand social and political decisions surrounding early computer development, and the consequences of these decisions in our 21st century lives.

The VR Book: Human-Centered Design for Virtual Reality

By: Jason Jerald
ISBN: 978-1-97000-112-9
DOI: 10.1145/2792790
Table of Contents

Virtual reality (VR) potentially provides our minds with direct access to digital media in a way that at first seems to have no limits. However, creating compelling VR experiences is an incredibly complex challenge. When VR is done well, the results are brilliant and pleasurable experiences that go beyond what we can do in the real world. When VR is done badly, not only is the system frustrating to use, but sickness can result. Reasons for bad VR are numerous; some failures come from the limitations of technology, but many come from a lack of understanding of perception, interaction, design principles, and real users. This book discusses such issues, focusing upon the human element of VR rather than technical implementation, for if we do not get the human element correct, then no amount of technology will make VR anything more than an interesting tool confined to research laboratories. Even when VR principles are fully understood, first implementations are rarely novel and never ideal due to the complex nature of VR and the countless possibilities. However, the VR principles discussed within enable us to intelligently experiment with the rules and iteratively design towards innovative experiences.

Chapter 1. Introduction

Chapter 2. Perception

Chapter 3. Cybersickness

Chapter 4. Interaction

Chapter 5. Content Creation

Chapter 6. Iterative Design

Chapter 7. Conclusions and the Future

Reviews

The definitive guide for creating VR user interactions.
Amir Rubin - CEO, Sixense


Conceptually comprehensive, yet at the same time practical and grounded in real-world experience.
Paul Mlyniec - President of Digital ArtForms and father of MakeVR

The summative guidelines provide quick access with back references for further understanding.
Chris Pusczak - Creative Director of SymbioVR

I was able to briefly preview it at SIGGRAPH this last summer and it seemed like it would be a good fit for my class. Now that I have had a chance to take a deeper dive, I am even more certain I will be using the book. I am looking forward to using this as a required text when I teach the course again.
Dr. David Whittinghill, Purdue University

Candidate Multilinear Maps

By: Sanjam Garg
ISBN: 978-1-62705-549-9
DOI: 10.1145/2714451
Table of Contents

Cryptography to me is the “black magic” of cryptographers, enabling tasks that often seem paradoxical or simply impossible. Like space explorers, we cryptographers often wonder, “What are the boundaries of this world of black magic?” This work lays one of the founding stones in furthering our understanding of these edges.

We describe plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups. The security of our constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function. These new constructions radically enhance our tool set and open a floodgate of applications. We present a survey of these applications. This book is based on my PhD thesis, which was an extended version of the paper “Candidate Multilinear Maps from Ideal Lattices,” co-authored with Craig Gentry and Shai Halevi and originally published at EUROCRYPT 2013.

Smarter Than Their Machines: Oral Histories of Pioneers in Interactive Computing

By: John Cullinane
ISBN: 978-1-62705-553-6
DOI: 10.1145/2663015
Table of Contents

The oral histories of the pioneers whose work led to interactive computing have much to offer today's leaders in business, industry, and academia on how to get complex things done. After all, industry, government, and academia working together created the computer industry, which led to the Internet as we know it today. To do so, the pioneers had to get around the various “systems” of the day that are always impediments to implementing new ideas. In their case, it was the voice-dominated communications industry. For example, packet switching was invented to get around such an issue. This was key to allowing incompatible computers to “talk” to each other across academia, and later industry, which would be the key to the Internet. Cullinane Corporation, the computer industry's first successful software products company, benefited from this technology as it focused on database software as the foundation for interactive computer systems for industry, government, and academia. As such, this book is a personal walk through the history that led to interactive computing as John Cullinane witnessed it and participated in it, with the help of the oral histories of some key pioneers, organized and introduced in a way that illustrates the close interaction of the various individuals and organizations involved in the evolution of interactive computing. These oral histories, including John's, were drawn from the archives of over 300 such histories located at the Charles Babbage Institute, University of Minnesota.

Embracing Interference in Wireless Systems

By: Shyamnath Gollakota
ISBN: 978-1-62705-474-4
DOI: 10.1145/2611390
Table of Contents

The wireless medium is a shared resource. If nearby devices transmit at the same time, their signals interfere, resulting in a collision. In traditional networks, collisions cause the loss of the transmitted information. For this reason, wireless networks have been designed with the assumption that interference is intrinsically harmful and must be avoided.

This book takes an alternate approach: instead of viewing interference as an inherently counterproductive phenomenon that should be avoided, we design practical systems that transform interference into a harmless, and even beneficial, phenomenon. To achieve this goal, we consider how wireless signals interact when they interfere, and use this understanding in our system designs. Specifically, when interference occurs, the signals get mixed on the wireless medium. By understanding the parameters of this mixing, we can invert the mixing and decode the interfered packets, thus making interference harmless. Furthermore, we can control this mixing process to create strategic interference that allows decodability at a particular receiver of interest, but prevents decodability at unintended receivers and adversaries. Hence, we can transform interference into a beneficial phenomenon that provides security.
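
As a toy numerical illustration of inverting a known mix (a drastic simplification; real systems work with complex baseband samples, channel estimation, and modulation), two interfering transmissions observed through two independent linear combinations can be recovered by solving a small linear system:

```python
import numpy as np

# Two senders transmit symbols x1 and x2 at the same time; their
# signals collide at the receiver.
x = np.array([1.0, -1.0])               # true (unknown) symbols

# The receiver obtains two independent linear combinations of the
# interfering signals, e.g. across two antennas or two time slots.
# H holds the mixing (channel) coefficients, assumed known here.
H = np.array([[0.9, 0.4],
              [0.2, 1.1]])
y = H @ x + 0.01 * np.random.randn(2)   # the "collided" observation

# Knowing the mixing parameters lets us invert the mix and decode
# both interfered transmissions.
x_hat = np.linalg.solve(H, y)
print(np.round(x_hat, 2))               # approximately [ 1. -1.]
```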

Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers

By: Bryan Parno
ISBN: 978-1-62705-477-5
DOI: 10.1145/2611399
Table of Contents

As society rushes to digitize sensitive information and services, it is imperative to adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. After all, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted.

In this book we describe how to resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. At a high level, we support this premise by developing techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. Rather than entrusting her data to the mountain of buggy code likely running on her computer, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.

A Framework for Scientific Discovery through Video Games

By: Seth Cooper
ISBN: 978-1-62705-504-8
DOI: 10.1145/2625848
Table of Contents

As science becomes increasingly computational, the limits of what is computationally tractable become a barrier to scientific progress. Many scientific problems, however, are amenable to human problem solving skills that complement computational power. By leveraging these skills on a larger scale – beyond the relatively few individuals currently engaged in scientific inquiry – there is the potential for new scientific discoveries.

This book presents a framework for mapping open scientific problems into video games. The game framework combines computational power with human problem solving and creativity to work toward solving scientific problems that neither computers nor humans could previously solve alone. To maximize the potential contributors to scientific discovery, the framework designs a game to be played by people with no formal scientific background and incentivizes long-term engagement with a myriad of collaborative and competitive reward structures. The framework allows for the continual coevolution of the players and the game: as players gain expertise through gameplay, the game changes to become a better tool.

The framework is validated by being applied to proteomics problems with the video game Foldit. Foldit players have contributed to novel discoveries in protein structure prediction, protein design, and protein structure refinement algorithms. The coevolution of human problem solving and computer tools in an incentivized game framework is an exciting new scientific pathway that can lead to discoveries currently unreachable by other methods.

Shared-Memory Parallelism Can be Simple, Fast, and Scalable

By: Julian Shun
ISBN: 978-1-97000-191-4
DOI: 10.1145/3018787
Table of Contents

Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to enable the solutions developed to run efficiently under various settings. This book, a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award, addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The book provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results serve to ease the transition into the multicore era.

The book starts by introducing tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice. The book then introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework enables short and concise implementations that deliver performance competitive with that of highly-optimized code and up to orders of magnitude better than previous systems designed for distributed memory. Finally, the book bridges the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice.
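
To illustrate the frontier-based style of graph traversal that frameworks like Ligra expose, here is a sequential Python sketch of the abstraction (not Ligra's actual C++ interface): breadth-first search written as repeated application of an edge map over the current frontier.

```python
# Graph as an adjacency list; a shared-memory framework would apply
# the edge map to frontier vertices in parallel.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}

def edge_map(graph, frontier, update):
    """Apply `update` to every edge leaving the frontier; return the
    set of destination vertices for which `update` succeeded."""
    next_frontier = set()
    for u in frontier:
        for v in graph[u]:
            if update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, source):
    parent = {source: source}

    def update(u, v):
        if v not in parent:        # visit each vertex only once
            parent[v] = u
            return True
        return False

    frontier = {source}
    while frontier:                # one traversal level per round
        frontier = edge_map(graph, frontier, update)
    return parent

print(bfs(graph, 0))
```

In a parallel shared-memory setting, the loop over frontier edges would run concurrently, typically with an atomic compare-and-swap guarding the visited check.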

The Handbook of Multimodal-Multisensor Interfaces, Volume I

By: Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Krüger
ISBN: 978-1-97000-167-9
DOI: 10.1145/3015783
Table of Contents

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces—user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle, robotic, and many other applications that are now highly competitive commercially.

This edited collection is written by international experts and pioneers in the field. It provides a textbook for students, and a reference and technology roadmap for professionals working in this rapidly emerging area.

Volume 1 of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling, interface design that supports user choice, synergistic combination of modalities with sensors, and blending of multimodal input and output. They also highlight an in-depth look at the most common multimodal-multisensor combinations—for example, touch and pen input, haptic and non-speech audio output, and speech co-processed with visible lip movements, gaze, gestures, or pen input. A common theme throughout is support for mobility and individual differences among users—including the world’s rapidly growing population of seniors.

These handbook chapters provide walk-through examples and video illustrations of different system designs and their interactive use. Common terms are defined, and information on practical resources is provided (e.g., software tools, data resources) for hands-on project work to develop and evaluate multimodal-multisensor systems. In the final chapter, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.

Computational Methods for Protein Complex Prediction from Protein Interaction Networks

By: Sriganesh Srihari, Chern Han Yong, and Limsoon Wong
ISBN: 978-1-97000-155-6
DOI: 10.1145/3064650
Table of Contents

Complexes of physically interacting proteins constitute fundamental functional units that drive biological processes within cells. A faithful identification of the entire set of complexes (the ‘complexosome’) is therefore essential not only to understand complex formation but also the functional organization of cells. Advances over the last several years, particularly through the use of high-throughput yeast two-hybrid and affinity-purification-based experimental (proteomics) techniques, have extensively mapped interactions (the ‘interactome’) in model organisms, including Saccharomyces cerevisiae (budding yeast), Drosophila melanogaster (fruit fly), and Caenorhabditis elegans (roundworm). These interaction data have enabled systematic reconstruction of complexes in these organisms, thereby revealing novel insights into the constituents, assembly, and functions of complexes. Computational methods have played a significant role in these advancements by contributing more accurate, efficient, and exhaustive ways to analyse the enormous amounts of data, and also by compensating for several of the limitations arising from experimental protocols, including the presence of biological and technical noise and the lack of credible interactions (sparsity). In this book, we systematically walk through all the important computational methods devised to date (approximately between 2003 and 2015) for identifying complexes from the network of protein interactions (PPI network).

We present a detailed taxonomy of these methods, and comprehensively evaluate them for their ability to accurately identify complexes across a variety of scenarios, including the presence of noise in PPI networks and the inference of sparse complexes. By covering challenges these methods have faced more recently, for instance in identifying sub-complexes or small complexes and in discerning overlapping complexes, we reveal how a combination of strategies is required to accurately reconstruct the entire complexosome. The experience gained from model organisms is now paving the way for the identification of complexes from higher-order organisms, including Homo sapiens (human). In particular, with the increasing use of ‘pan-omics’ techniques spanning genomics, transcriptomics, proteomics, and metabolomics to map human cells across multiple layers of organization, the need to understand the rewiring of the interactome between conditions – e.g., between normal development and disease – and, consequently, the dynamic reorganization of complexes across these conditions is gaining immense importance. Towards this end, more recent computational methods have integrated these pan-omics datasets to decipher complexes in diseases including cancer, which in turn have revealed novel insights into disease mechanisms and highlighted potential therapeutic targets. Here, we cover several of these latest methods, thus emphasizing how a fundamental problem such as complex identification can have far-reaching applications towards understanding the biology underlying sophisticated functional and organizational transformations in cells.


View Forthcoming Titles