Verona is a language designed and implemented by teams at Microsoft, Uppsala and Imperial College. Building on the teams' experiences with Pony, the new language aims to support high-level, high-performance, highly concurrent, safe systems programming. Verona extends the actor paradigm with asynchronous execution so that multiple actors may form temporal alliances and execute behaviours in concert. It has an ownership-based type system that ensures data-race freedom and supports fine-grained, region-specific garbage collection. I will introduce Verona through examples, explore the design space of causal ordering for behaviours, and discuss its ramifications for reasoning about Verona programs. I will also outline the implementation of a C++ runtime and compare Verona to Pony.
I will introduce the basic principles of concurrent garbage collection, including on-the-fly garbage collection, focusing on the correctness challenges that concurrent mutation by the application program poses for garbage collectors. I will then discuss approaches to solving these problems, characterising these solutions, with respect to an abstract garbage collection algorithm, in terms of invariants that preserve the algorithm's safety. Next, I will present a concrete garbage collection algorithm designed to operate efficiently on weak-memory multicore hardware, and argue informally for its correctness. Finally, I will describe how we have been able to formally and mechanically verify the correctness of this algorithm under weak memory using the Isabelle proof assistant.
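The safety invariants mentioned above are often phrased in tri-colour terms. As a minimal sketch (not the verified algorithm from the lectures, and sequential rather than genuinely concurrent), the following toy marking pass uses a Dijkstra-style insertion write barrier to preserve the strong tri-colour invariant: no black object ever points to a white one, even while the mutator installs new references.

```python
# Toy tri-colour marking with a Dijkstra-style insertion write barrier.
# Illustrative only: a real on-the-fly collector interleaves with mutator
# threads and must also reason about weak-memory reorderings.

WHITE, GREY, BLACK = "white", "grey", "black"

class Obj:
    def __init__(self, name):
        self.name = name
        self.colour = WHITE   # not yet seen by the collector
        self.fields = []      # outgoing references

def shade(obj, worklist):
    """Move a white object to grey and queue it for scanning."""
    if obj.colour == WHITE:
        obj.colour = GREY
        worklist.append(obj)

def write_barrier(src, dst, worklist):
    """Mutator store src.field := dst. Shading the new target keeps the
    strong tri-colour invariant: no black object points to a white one."""
    src.fields.append(dst)
    if src.colour == BLACK:
        shade(dst, worklist)

def mark(roots):
    """Trace from the roots; on termination every reachable object is black."""
    worklist = []
    for r in roots:
        shade(r, worklist)
    while worklist:
        obj = worklist.pop()
        for f in obj.fields:
            shade(f, worklist)
        obj.colour = BLACK
```

Any object still white after marking is unreachable and may be reclaimed; the barrier is what makes that conclusion safe in the presence of concurrent stores.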
We will look at the evolution of production concurrent collectors over the past decade, focusing primarily on concurrent marking and compaction. Collectors described include Compressor, Azul's Pauseless/C4 and Collie, OpenJDK's ZGC and Generational ZGC, OpenJDK's Shenandoah, and, as a counterpoint, LXR (a mostly non-concurrent, mostly reference-counting compacting collector).
Implementers often start with an interpreter to sketch how a language might work: interpreters are easy to implement and great to experiment with. Beyond prototyping, however, they are also an essential part of dynamic language implementations. We will cover the basics of abstract syntax trees and bytecodes, and how these ideas can be used to implement a language. We will also look into optimizations for interpreters: how AST and bytecode interpreters can use run-time feedback to improve performance, and how supernodes and superinstructions allow us to make effective use of modern CPUs.
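To make the bytecode and superinstruction ideas concrete, here is a minimal stack-based dispatch loop; the opcode names are invented for this sketch and do not come from any particular VM. The fused `LOAD_ADD` opcode illustrates how a superinstruction saves one dispatch per occurrence of a common opcode pair.

```python
# Minimal stack-based bytecode interpreter (illustrative sketch).
def run(code, env):
    stack = []
    pc = 0
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        if op == "PUSH":            # push a constant
            stack.append(arg)
        elif op == "LOAD":          # read a variable from the environment
            stack.append(env[arg])
        elif op == "ADD":           # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "LOAD_ADD":      # superinstruction: LOAD then ADD,
            stack.append(stack.pop() + env[arg])  # fused into one dispatch
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack.pop()

# The expression x + 1, with and without the superinstruction:
plain = [("LOAD", "x"), ("PUSH", 1), ("ADD", None)]
fused = [("PUSH", 1), ("LOAD_ADD", "x")]
```

Both programs compute the same value; the fused form executes two instructions instead of three, which is exactly the kind of dispatch saving superinstructions buy on modern CPUs.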
Software applications and technologies are built on top of foundational systems such as compilers, databases, and theorem provers. These foundations form the trusted computing base and fundamentally affect software quality and security, so solidifying them is a critical challenge. These lectures will introduce general, effective techniques, along with the extensive, impactful efforts that have used them to find thousands of critical issues in widely-used compilers, database management systems, and SMT solvers. They will focus on the high-level principles and core techniques, their significant practical successes, and future opportunities and challenges.
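One core technique behind such bug-finding efforts is differential testing: feed the same randomly generated input to two implementations that should agree, and flag any divergence. The sketch below shows the idea in miniature; the two sort functions are toy stand-ins invented here, whereas the real work applies the same scheme to compilers (random programs), DBMSs (random queries), and SMT solvers (random formulas).

```python
# Differential testing in miniature: two implementations of one spec,
# fuzzed with random inputs until they disagree.
import random

def sort_reference(xs):
    return sorted(xs)

def sort_buggy(xs):
    # Deliberately wrong on duplicates, to give the fuzzer something to find.
    return sorted(set(xs))

def differential_test(impl_a, impl_b, trials=1000, seed=0):
    """Return a counterexample input on which the implementations
    disagree, or None if no divergence was observed."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 5) for _ in range(rng.randint(0, 8))]
        if impl_a(list(xs)) != impl_b(list(xs)):
            return xs
    return None
```

The appeal of the technique is that it needs no hand-written oracle: the second implementation is the oracle, which is why it scales to systems as complex as production compilers.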
The most profound connection between logic and computation is a pun. The doctrine of Propositions as Types asserts that a certain kind of formal structure may be read in two ways: either as a proposition in logic or as a type in computing. Further, a related structure may be read either as the proof of the proposition or as a program of the corresponding type. Further still, simplification of proofs corresponds to evaluation of programs. Accordingly, the title of this course also has two readings. It may be parsed as "(Programming Language) Foundations in Agda" or "Programming (Language Foundations) in Agda" — the specifications we will write in the proof assistant Agda both describe programming languages and are themselves programs. The textbook for the course is a literate Agda script, freely available online at https://plfa.inf.ed.ac.uk.
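The two readings can be seen in a single line of dependently typed code. This sketch is in Lean rather than the course's Agda, but the pun is the same: one structure read as a proposition and as a type, and one term read as its proof and as a program.

```lean
-- Read as logic: a proof that conjunction commutes, A ∧ B → B ∧ A.
-- Read as computing: a function that swaps the components of a pair.
theorem and_comm' {A B : Prop} : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```

Simplifying this proof term is exactly evaluating the pair-swapping program, which is the third leg of the correspondence described above.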
Rust has popularised ownership type systems that grant programmers explicit control of their programs' memory management, within the context of a practical systems programming language. Rust's design draws on more than 30 years of work across many different programming language research communities: linear functional programming (ICFP), region allocation (PLDI), ownership types (OOPSLA), and alias burying (ECOOP). This talk -- based on a POPL 2023 tutorial curated jointly with Tobias Wrigstad -- will attempt to argue that Rust's lifetimes, ML's regions, Pony's capabilities, and C++'s unique_ptr are examples of Convergent Evolution in programming language design. Compressing six hours of material from a nominally three-hour tutorial into a 3-second "blipvert", I'll go on to speculate about likely further trajectories of language designs and ownership.