
      babble: Learning Better Abstractions with E-Graphs and Anti-unification


          Abstract

          Library learning compresses a given corpus of programs by extracting common structure from the corpus into reusable library functions. Prior work on library learning suffers from two limitations that prevent it from scaling to larger, more complex inputs. First, it explores too many candidate library functions that are not useful for compression. Second, it is not robust to syntactic variation in the input.

          We propose library learning modulo theory (LLMT), a new library learning algorithm that additionally takes as input an equational theory for a given problem domain. LLMT uses e-graphs and equality saturation to compactly represent the space of programs equivalent modulo the theory, and uses a novel e-graph anti-unification technique to find common patterns in the corpus more directly and efficiently.
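The anti-unification idea can be illustrated with a minimal sketch. This is plain syntactic anti-unification over expression trees, not the paper's e-graph version, and every name below is illustrative rather than babble's actual API: given two expressions, it computes a least general generalization, replacing mismatched subterms with pattern variables — the shared pattern is a candidate library function.

```python
# Minimal sketch of syntactic anti-unification (illustrative; babble's
# contribution lifts this to e-graphs). Expressions are nested tuples
# ("op", arg1, ...); leaves are strings.

def anti_unify(a, b, subst=None):
    """Return (pattern, subst): a pattern matching both a and b."""
    if subst is None:
        subst = {}
    if a == b:
        return a, subst
    # Same operator and arity: recurse into the arguments.
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a and b and a[0] == b[0] and len(a) == len(b)):
        args = [anti_unify(x, y, subst)[0] for x, y in zip(a[1:], b[1:])]
        return (a[0], *args), subst
    # Mismatch: introduce (or reuse) a pattern variable for this pair.
    var = subst.setdefault((a, b), f"?x{len(subst)}")
    return var, subst

# Two programs with common structure: f(g(1), 2) and f(g(3), 2)
pattern, subst = anti_unify(("f", ("g", "1"), "2"),
                            ("f", ("g", "3"), "2"))
print(pattern)  # ('f', ('g', '?x0'), '2')
```

Extracting `pattern` as a library function parameterized by `?x0` lets both programs be rewritten as calls to it, which is the compression library learning aims for.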

          We implemented LLMT in a tool named babble. Our evaluation shows that babble achieves better compression orders of magnitude faster than the state of the art. We also provide a qualitative evaluation showing that babble learns reusable functions on inputs previously out of reach for library learning.


Author and article information

Journal
Proceedings of the ACM on Programming Languages (Proc. ACM Program. Lang.)
Association for Computing Machinery (ACM)
ISSN: 2475-1421
January 09 2023
January 11 2023
Volume: 7
Issue: POPL
Pages: 396-424

Affiliations
[1] University of California at San Diego, USA
[2] Certora, USA
[3] University of Washington, USA

Article
DOI: 10.1145/3571207
© 2023
