Dynamic symbolic execution has attracted significant attention in recent years as an effective technique for generating high-coverage test suites and finding deep errors in complex software applications. In this tutorial-style presentation, I will introduce the main concepts of dynamic symbolic execution and exemplify them in the context of our KLEE symbolic execution infrastructure. The talk is primarily targeted at those who have no direct experience with dynamic symbolic execution and KLEE, but it will also include several parts useful for a more specialist audience.
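The core idea can be illustrated with a toy sketch (an illustrative assumption, not KLEE's actual implementation): each feasible path through a program is characterised by a path condition, and solving that condition yields a concrete input that drives execution down the path. Here the "solver" is brute force; KLEE hands path conditions to an SMT solver instead.

```python
def check(x):
    # Program under test: three feasible paths.
    if x > 10:
        if x < 20:
            return "deep"
        return "high"
    return "low"

# Path conditions gathered by (conceptually) forking at each branch.
paths = [
    ([lambda x: x <= 10],                    "low"),
    ([lambda x: x > 10, lambda x: x < 20],   "deep"),
    ([lambda x: x > 10, lambda x: x >= 20],  "high"),
]

def solve(conds, lo=-100, hi=100):
    """Toy constraint 'solver' via brute force; real engines use SMT."""
    for x in range(lo, hi + 1):
        if all(c(x) for c in conds):
            return x
    return None

# Generate one concrete test input per path and confirm it
# actually exercises that path.
tests = []
for conds, expected in paths:
    x = solve(conds)
    assert check(x) == expected
    tests.append(x)
```

The three generated inputs together achieve full path coverage of `check`, which is exactly the property high-coverage test generation aims for.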
The first talk will cover some details of how the innards of the JVM work. This might be helpful to folks looking to implement their own languages, either from scratch or on top of the JVM. The second talk will be a deep dive into garbage collection, including work I've done on Lisp, Id, and Java implementations.
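As a taste of the garbage-collection material, here is a minimal mark-and-sweep sketch over a toy object graph (a deliberately simplified assumption; production JVM and Lisp collectors are generational, incremental, or concurrent): mark everything reachable from the roots, then sweep away the rest.

```python
class Obj:
    """Toy heap object: a name plus outgoing references."""
    def __init__(self, name):
        self.name = name
        self.refs = []
        self.marked = False

def mark(obj):
    # Depth-first traversal from a root, setting mark bits.
    if obj.marked:
        return
    obj.marked = True
    for r in obj.refs:
        mark(r)

def collect(heap, roots):
    # Mark phase: everything reachable from the roots survives.
    for r in roots:
        mark(r)
    # Sweep phase: unmarked objects are reclaimed.
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False  # reset mark bits for the next cycle
    return live

# Build a heap where d -> e is a garbage cycle's worth of dead objects.
heap = [Obj(n) for n in "abcde"]
a, b, c, d, e = heap
a.refs = [b]; b.refs = [c]; d.refs = [e]
heap = collect(heap, roots=[a])   # d and e are swept
```

After collection only `a`, `b`, and `c` remain: reachability from the roots, not reference counts, decides liveness, which is why mark-and-sweep reclaims cyclic garbage that naive reference counting cannot.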
Dynamic languages are flexible, fun to use, and have a great power-to-code-weight ratio. But the lack of static typing can impede software development, make code maintenance harder, and lead to bugs that lurk in code for a long time. Over the last 10 years, I've been studying dynamic language type systems in the context of the Ruby programming language, and there's good news: The upcoming Ruby 3 will have types. In these talks, I'll discuss some of the technical challenges and potential solutions for typing Ruby and similar dynamic languages, including handling dynamic language features, metaprogramming, and types that depend on the result of computations. I'll also talk about my experiences on the path from research to practice.
(1) I will talk about the entrails of software: linking and loading of machine code. This will include their history from the earliest stored-program computers, through to their place in modern toolchains and why they're not yet solved problems. Not unrelatedly, I'll also cover differing approaches to implementing interactive debugging. (2) I will talk about language interoperability from an implementer's perspective, visiting several different incarnations of the problem: traditional link-time cases, cases faced by ahead-of-time compilation of modern languages, and cases involving language virtual machines.
In these lectures, I will focus on the problem of compile-time scheduling optimizations that restructure computations to improve locality and parallelism. Traditional approaches to this problem reason about loop structures and schedules in terms of polyhedra, but there has been little success in extending such approaches beyond loop-based programs. I will start by briefly introducing the basic concepts used in reasoning about and transforming loop programs. I will then argue that there are interesting scheduling transformations that can be done for non-loop-based programs, in particular recursive traversal programs such as those that arise in graphics, data mining, and simulation. I will survey work from the last decade on designing and implementing these transformations. Then, I will discuss recent advances in developing unified frameworks for representing, reasoning about, and transforming programs that deal with loops and recursion. Time permitting, we will also use one of these frameworks to explore and design a new transformation for recursive programs.
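A small example of the kind of scheduling transformation these lectures cover (a hypothetical sketch for illustration): loop interchange reorders the iteration space of a nested loop without changing the result. On a row-major array, the interchanged version visits memory contiguously, which is what improves locality when the loop is compiled.

```python
N, M = 4, 6
# Row-major 2D array: A[i][j] = i*M + j.
A = [[i * M + j for j in range(M)] for i in range(N)]

def column_major_sum(A):
    """Original schedule: outer loop over columns (strided access)."""
    s = 0
    for j in range(M):
        for i in range(N):
            s += A[i][j]
    return s

def row_major_sum(A):
    """After loop interchange: rows outermost (contiguous access)."""
    s = 0
    for i in range(N):
        for j in range(M):
            s += A[i][j]
    return s

# The transformation is legal here because the loop body carries no
# dependence between iterations, so both schedules compute the same value.
assert column_major_sum(A) == row_major_sum(A)
```

Checking that every iteration-space reordering preserves all dependences is exactly what the polyhedral machinery mentioned above automates for affine loop nests; extending that reasoning to recursive traversals is the subject of the later lectures.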
Language workbenches support the high-level definition of (domain-specific) programming languages by means of meta-languages and the automatic derivation of implementations from such definitions. A declarative meta-language allows the language designer to abstract from irrelevant implementation details and focus on the essence of a language. In these lectures I will talk about the design and semantics of two meta-languages (from the Spoofax language workbench) for the definition of syntax and static semantics. The lectures will cover three topics: (1) Declarative syntax definition with SDF3: the extension of context-free grammars to syntax definitions by means of features such as constructors, layout templates, lexical syntax, and declarative disambiguation. (2) Name binding and resolution with scope graphs: the definition of a framework for the representation of a wide range of name binding rules, including (mutually recursive) modules and imports. (3) Type system definition with Statix: the interaction between name and type resolution and the scheduling of name resolution queries.