For full conference details, please visit the 2018 European LLVM Developers’ Meeting website.


Monday, April 16
 

8:00am BST

Coffee & Pastries
Coffee and pastries.

Monday April 16, 2018 8:00am - 9:00am BST
Bristol Foyer

8:00am BST

Registration Desk
Registration desk open for check-in and questions.

Monday April 16, 2018 8:00am - 5:00pm BST
Bristol Foyer

9:00am BST

Welcome
Welcome to the conference. Quick overview of sessions, logistics, etc.

Speakers

Arnaud de Grandmaison

LLVM Foundation


Monday April 16, 2018 9:00am - 9:15am BST
Bristol 1 & 2

9:15am BST

The Cerberus Memory Object Semantics for ISO and De Facto C
The semantics of pointers and memory objects in C has been a vexed question for many years.  C values cannot be treated as simple abstract or concrete entities: the language exposes their representations, but compiler optimisations rely on analyses that reason about provenance and initialisation status, not just runtime representations. The ISO standard leaves much of this unclear, and in some respects differs from de facto standard usage, which is itself difficult to investigate.
This talk will describe our candidate source-language semantics for memory objects and pointers in C, as it is used and implemented in practice.  Focussing on provenance and uninitialised values, we propose a coherent set of choices for a host of design questions, based on discussion with the ISO WG14 C standards committee and previous surveys of C experts.  This should also inform design of the LLVM internal language semantics, and it seems that our source-language proposal and the LLVM proposal by Lopes, Hur, et al. can be made compatible.
Our semantics is integrated with the Cerberus semantics for much of the rest of C, with a clean translation of C into a Core intermediate language.  Together, the two make C undefined behaviours explicit.  Cerberus has a web-interface GUI in which one can explore all the allowed behaviours of small test programs, and which also identifies the clauses of the C standard relevant to typechecking and translating each test. Work-in-progress URL: http://svr-pes20-cerberus.cl.cam.ac.uk/
We also describe detailed proposals to WG14, showing how the semantics can be incorporated into the ISO standard.   This is joint work by Kayvan Memarian, Victor Gomes, and the speaker.

Speakers

Peter Sewell

University of Cambridge


Monday April 16, 2018 9:15am - 10:10am BST
Bristol 1 & 2

10:15am BST

Global code completion and architecture of clangd
Clangd is an implementation of a Language Server Protocol (LSP) server, built on clang’s frontend and developed as part of LLVM in the clang-tools-extra repository. LSP is a relatively new initiative to standardize the protocol for providing intelligent semantic code-editing features independently of any particular text editor. Clangd aims to support very large codebases and provide intelligent IDE features like code completion on a project-wide scale. In this talk, we’ll cover the architecture of clangd and talk in depth about the feature we’ve been working on over the last few months: global code completion.

Speakers

Ilya Biryukov

Software Engineer, Google
A software engineer at Google Munich, mostly working on clangd (https://clang.llvm.org/extra/clangd.html). Previously at JetBrains, he worked on ReSharper C++ and ReSharper.


Monday April 16, 2018 10:15am - 10:55am BST
Bristol 1 & 2

10:55am BST

Break
AM Break

Monday April 16, 2018 10:55am - 11:15am BST
Bristol Foyer

11:15am BST

Using LLVM in a Model Checking Workflow
Formal verification can be used to show the presence or absence of specific types of errors in a computer program. It is usually done by transforming the already-implemented source code into a formal model, then mathematically proving certain properties of that model (e.g. that an erroneous state in the model cannot be reached). The theta verification framework provides a well-defined formal model suitable for checking imperative programs. In this talk, we present an LLVM IR frontend for theta, which bridges the gap between formal verification frameworks and the LLVM IR representation. Leveraging LLVM IR as the frontend language of the verification workflow simplifies the transformation and allows us to easily add support for new languages.

However, these transformations often yield impractically large models, which cannot be checked within a reasonable time. Therefore, size-reduction techniques need to be applied to the program, which can be done by utilizing LLVM's optimization infrastructure (optimizing for size and simplicity rather than execution time) and extending it with other reduction algorithms (such as program slicing).

Speakers

Gyula Sallai

Budapest University of Technology and Economics


Monday April 16, 2018 11:15am - 11:35am BST
Bristol 1 & 2

11:35am BST

Improved Loop Execution Modeling in the Clang Static Analyzer
The LLVM Clang Static Analyzer is a source code analysis tool which aims to find bugs in C, C++, and Objective-C programs using symbolic execution, i.e. it simulates the possible execution paths of the code. Currently, the simulation of loops is somewhat naive (but efficient): loops are unrolled a predefined constant number of times. However, this approach can result in a loss of coverage in various cases. This study introduces two alternative approaches which extend the current method and can be applied simultaneously: (1) using heuristics to determine which loops are worth fully unrolling, and (2) using a widening mechanism to simulate an arbitrary number of iteration steps. These methods were evaluated on numerous open source projects and proved to increase coverage in most cases. This work also laid the infrastructure for future loop modeling improvements.

Speakers

Péter Szécsi

Eötvös Loránd University


Monday April 16, 2018 11:35am - 11:55am BST
Bristol 1 & 2

11:55am BST

Compile-Time Function Call Interception to Mock Functions in C/C++
In C/C++, test code is often interwoven with the production code we want to test. During test development we often have to modify the public interface of a class to replace existing dependencies; e.g. a supplementary setter or constructor is added for dependency injection. In many cases, extra template parameters are used for the same purpose. These solutions may have serious detrimental effects on code structure and sometimes on run-time performance as well. We introduce a new technique that makes dependency replacement possible without modifying the production code, thus providing an alternative way to add unit tests. Our compile-time instrumentation technique modifies the LLVM IR, enabling us to intercept function calls and replace them at runtime. Contrary to existing function call interception (FCI) methods, we instrument the call expression rather than the callee, so we can avoid modifying and recompiling the function in order to intercept the call. This is a clear advantage in the case of system libraries and third-party shared libraries, and it provides an alternative way to automate tests for legacy software. We created a prototype implementation based on the LLVM compiler infrastructure which is publicly available for testing.

Speakers

Gábor Márton

Ericsson
Gábor has been working with C++ since 2000. Currently, he is a member of Ericsson's CodeChecker program analysis team. He works on Cross Translation Unit (CTU) static analysis and the related ASTImporter of the LLVM/Clang compiler infrastructure. In 2019 he defended his PhD thesis…


Monday April 16, 2018 11:55am - 12:15pm BST
Bristol 1 & 2

12:35pm BST

Lunch
Monday April 16, 2018 12:35pm - 2:00pm BST
Bristol 3

2:00pm BST

Towards implementing #pragma STDC FENV_ACCESS
When generating floating-point code, clang and LLVM will currently assume that the program always operates under default floating-point control modes, i.e. using the default rounding mode and with floating-point exceptions disabled, and never checks the floating-point status flags. This means that code that does attempt to make use of these IEEE features will not work reliably. The C standard defines a pragma FENV_ACCESS that is intended to instruct the compiler to switch to a method of generating code that will allow these features to be used, but this pragma and the associated infrastructure are not yet implemented in clang and LLVM.

The purpose of this BoF is to bring together all parties interested in this feature, whether as potential users, or as experts in any of the parts of the compiler that will need to be modified to implement it, from the clang front end, through the optimizers, to the various back ends that need to emit appropriate code for their platform. We will discuss the current status of the partial infrastructure that is already present, identify the pieces that are still missing, and hopefully agree on next steps to move towards a full implementation of pragma FENV_ACCESS in clang and LLVM.


Monday April 16, 2018 2:00pm - 2:40pm BST
Empire Suite

2:00pm BST

A Parallel IR in Real Life: Optimizing OpenMP
Exploiting parallelism is a key challenge in programming modern systems across a wide range of application domains and platforms. From the world's largest supercomputers to embedded DSPs, OpenMP provides a programming model for parallel programming that a compiler can understand and optimize. While LLVM's optimizer has not traditionally been involved in OpenMP's implementation, with all of the outlining logic and translation into runtime-library calls residing in Clang, several groups have been experimenting with implementation techniques that push some of this translation process into LLVM itself. This allows the optimizer to simplify these parallel constructs before they're transformed into runtime calls and outlined functions.

We've experimented with several techniques for implementing a parallel IR in LLVM, including adding intrinsics to represent OpenMP constructs (as proposed by Intel and others) and using Tapir (an experimental extension to LLVM originally developed at MIT), and have used these to lower both parallel loops and tasks. Nearly all parallel IR techniques allow for analysis information to flow into the parallel code from the surrounding serial code, thus enabling further optimization, and on top of that, we've implemented optimizations such as fusion of parallel regions and the removal of redundant barriers. In this talk, we'll report on these results and other aspects of our experiences working with parallel extensions to LLVM's IR.

Speakers

Hal Finkel

Argonne National Laboratory


Monday April 16, 2018 2:00pm - 2:40pm BST
Bristol 2

2:00pm BST

New PM: taming a custom pipeline of Falcon JIT
Over the last few months, we at Azul have been teaching Falcon, our LLVM-based optimizing JIT compiler, to leverage the new pass manager framework. This talk will focus on our motivation as well as our practical experience in getting an extensive custom LLVM pipeline to production under the new pass manager.

I will cover the current state of the LLVM pass manager as viewed from our "downstream" side, issues we encountered while converting, as well as our expectations and how well they were met in the end.

Speakers

Fedor Sergeev

Compiler Engineer, Azul Systems
Compiler engineer throughout his career: Sun Studio native compilers before, Azul Falcon JIT compiler now.


Monday April 16, 2018 2:00pm - 2:40pm BST
Bristol 1

2:00pm BST

Hackers' Lab
Monday April 16, 2018 2:00pm - 3:25pm BST
Conservatory

2:45pm BST

Debug Info
For fans of producing quality debug info!  We'll take a straw poll to pick a particular topic of interest, possibly including: Improving the debugging of optimized code; reducing DWARF size, particularly with "comdat" DWARF; testing and verifying debug info; or DWARF v5.

Speakers

Paul Robinson

Sr Staff Compiler Engineer, Sony Interactive Entertainment


Monday April 16, 2018 2:45pm - 3:25pm BST
SS Great Britain

2:45pm BST

An Introduction to AMD Optimizing C/C++ Compiler
In this paper we introduce some of the optimizations that are a part of AMD C/C++ Optimizing Compiler 1.0 (AOCC 1.0), which was released in May 2017 and is based on LLVM Compiler release 4.0.0. AOCC is AMD’s CPU performance compiler, aimed at optimizing the performance of programs running on AMD processors. In particular, AOCC 1.0 is tuned to deliver high performance on AMD’s EPYC(TM) server processors. The performance results for SPECrate®2017_int_base, SPECrate®2017_int_peak [1], SPECrate®2017_fp_base and SPECrate®2017_fp_peak [2] that we include in the paper show that AOCC delivers excellent performance, thereby enhancing the power of the AMD EPYC(TM) processor. The optimizations fall into the categories of loop vectorization, SLP vectorization, data layout optimizations and loop optimizations. We shall introduce and provide some details of each optimization. [1] https://www.spec.org/cpu2017/results/res2017q4/cpu2017-20171031-00334.html [2] https://www.spec.org/cpu2017/results/res2017q4/cpu2017-20171031-00366.html

Speakers

Dibyendu Das

Senior Fellow, AMD


Monday April 16, 2018 2:45pm - 3:25pm BST
Bristol 1

3:25pm BST

Break
Monday April 16, 2018 3:25pm - 4:00pm BST
Bristol Foyer

4:00pm BST

Build system integration for interactive tools
The current approach for integrating clang tools with build systems (CompilationDatabase, compile_commands.json) was designed for running command line tools, and it lacks some important features for interactive tools like clangd, e.g. tracking updates to the compilation commands for existing files or propagating information like file renames back to the build system. The current approach also requires intervention from users to generate compile_commands.json, even for build systems that support it. On the other hand, there are existing tools like CLion and Visual Studio that integrate seamlessly with their supported build systems and “just work” for the users without extra configuration. Arguably, this approach provides a better user experience. It would be interesting to explore existing build systems and approaches for integrating them with interactive clang-based tools and improving user experience in that area.

Speakers

Monday April 16, 2018 4:00pm - 4:40pm BST
Empire Suite

4:00pm BST

Developing Kotlin/Native infrastructure with LLVM/Clang, travel notes.
In September 2016, JetBrains started development of an LLVM-based Kotlin compiler and runtime. Since then, we have reached version 0.5, which compiles to most LLVM targets (Linux, Windows and macOS as operating systems; x86, ARM and MIPS as CPU architectures, along with the more exotic WebAssembly) and supports smooth interop with arbitrary C and Objective-C libraries. This talk will give some highlights of the challenges we faced during development of this backend, with emphasis on LLVM-related topics.

Speakers

Nikolay Igotti

Kotlin/Native Tech Lead, JetBrains
Interested in runtimes, virtual machines, memory management, language design and concurrency approaches


Monday April 16, 2018 4:00pm - 4:40pm BST
Bristol 1

4:00pm BST

Extending LoopVectorize to Support Outer Loop Vectorization Using VPlan
The introduction of the VPlan model in Loop Vectorizer (LV) started as a refactoring effort to overcome LV’s existing limitations and extend its vectorization capabilities to outer loops. So far, progress has been made on the refactoring part by introducing the VPlan model to record the vectorization and unrolling decisions for candidate loops and generate code out of them. This talk focuses on the strategy to bring outer loop vectorization capabilities to Loop Vectorizer by introducing an alternative vectorization path in LV that builds VPlan upfront in the Loop Vectorizer pipeline. We discuss how this approach, in the short term, will add support for vectorizing a subset of simple outer loops annotated with vectorization directives (#pragma omp simd and #pragma clang loop vectorize). We also talk about the plan to extend the support towards generic outer and inner loop auto-vectorization through the convergence of both vectorization paths, the new alternative vectorization path and the existing inner loop vectorizer path, into a single one with advanced VPlan-based vectorization capabilities.

We conclude the talk by describing potential opportunities for the LLVM community to collaborate in the development of this effort.

Joint work of the Intel’s vectorizer team.

[1] RFC: Proposal for Outer Loop Vectorization Implementation Plan, December 2017, http://lists.llvm.org/pipermail/llvm-dev/2017-December/119523.html [2] Extending LoopVectorizer towards supporting OpenMP4.5 SIMD and outer loop auto-vectorization, 2016 LLVM Developers' Meeting, https://www.youtube.com/watch?v=XXAvdUwO7kQ [3] Introducing VPlan to the Loop Vectorizer, 2017 European LLVM Developer’s Meeting, https://www.youtube.com/watch?v=IqzJRs6tb7Y [4] Vectorizing Loops with VPlan – Current State and Next Steps, 2017 LLVM Developer’s Meeting, https://www.youtube.com/watch?v=BjBSJFzYDVk

Speakers

Diego Caballero

Compiler Engineer, Intel Corporation
nGraph, MLIR, LLVM, VPlan, Vectorization, Performance Optimizations


Monday April 16, 2018 4:00pm - 4:40pm BST
Bristol 2

4:00pm BST

Round Table
  • Debug info
  • pragma STDC FENV_ACCESS
  • ThinLTO
  • Falcon JIT
  • OpenMP

Monday April 16, 2018 4:00pm - 5:25pm BST
Conservatory

4:45pm BST

LLVM for secure code
An opportunity to discuss and explore all areas where LLVM (and indeed other compilers) is being used to support the creation of secure code.  Some of the techniques/projects I know of include the following.
  • Existing in-tree techniques: Stack protector, stack checking, stack clash protection, pointer bounds checking, control flow protection.
  • Existing out-of-tree techniques: Return address protection (RAP), structure constification, latent entropy extraction, kernel stack leak reduction, integer overflow detection.
  • Verification of passes.
  • Academic work: masking with random data, automatic power analysis countermeasures.
I'll record notes and share them after the BoF.

Speakers

Jeremy Bennett

Chief Executive, Embecosm
Bio: Dr Jeremy Bennett is founder and Chief Executive of Embecosm (http://www.embecosm.com), a consultancy implementing open source compilers and chip simulators for major corporations around the world. He is an author of the standard textbook "Introduction to Compiling Techniques…



Monday April 16, 2018 4:45pm - 5:25pm BST
SS Great Britain

4:45pm BST

Finding Iterator-related Errors with Clang Static Analyzer
The Clang Static Analyzer is a sub-project of Clang that performs source code analysis on C, C++, and Objective-C programs. It is able to find deep bugs by symbolically executing the code. However, thus far, finding C++ iterator-related bugs has been a blind spot in the analysis. In this work we present a set of checkers that detect three different kinds of bugs: out-of-range iterator dereference, mismatch between iterator and container (or between two iterators), and access of invalidated iterators. Our combined checker solution is capable of finding all these errors, even in less straightforward cases. It is generic, so it works not only on STL containers but also on iterators of custom container types. During the development of the checkers we also had to overcome some infrastructure limitations, from which other (existing and future) checkers can benefit as well. The checker is already deployed inside Ericsson and is under review by the community.

Speakers

Ádám Balogh

Master Developer, Ericsson


Monday April 16, 2018 4:45pm - 5:25pm BST
Bristol 1

4:45pm BST

Finding Missed Optimizations in LLVM (and other compilers)
Randomized differential testing of compilers has had great success in finding compiler crashes and silent miscompilations. In this talk I explain how I used the same approach to find missed optimizations in LLVM and other open source compilers (GCC and CompCert).

I compile C code generated by standard random program generators and use a custom binary analysis tool to compare the output programs. Depending on the optimization of interest, the tool can be configured to compare features such as the number of total instructions, multiply or divide instructions, function calls, stack accesses, and more. A standard test case reduction tool produces minimal examples once an interesting difference has been found.

I have used these tools to compare the code generated by GCC, Clang, and CompCert. I found previously unreported missing arithmetic optimizations in all three compilers, as well as individual cases of unnecessary register spilling, missed opportunities for register coalescing, dead stores, redundant computations, and missing instruction selection patterns. In this talk I will show examples of optimizations missed by LLVM in particular, both target-independent mid-end issues and ones in the ARM back-end.

Speakers

Gergö Barany

Inria Paris


Monday April 16, 2018 4:45pm - 5:25pm BST
Bristol 2

6:30pm BST

Evening Reception
Monday April 16, 2018 6:30pm - 11:00pm BST
We the Curious
 
Tuesday, April 17
 

8:00am BST

Coffee and Pastries
Coffee and pastries

Tuesday April 17, 2018 8:00am - 9:00am BST
Bristol Foyer

8:30am BST

Registration Desk
Registration desk open for check-in and questions.

Tuesday April 17, 2018 8:30am - 5:00pm BST
Bristol Foyer

9:00am BST

Hardening the Standard Library
Every C++ program depends on a standard library implementation. For LLVM users, this means that libc++ is at the bottom of their dependency graph. It is vital that this library be correct and performant.

In this talk, I will discuss some of the principles and tools that we use to make libc++ as "solid" as possible. I'll talk about preconditions, postconditions, reading specifications, finding problems, ensuring that bugs stay fixed, as well as several tools that we use to achieve our goal of making libc++ as robust as possible.

Some of the topics I'll discuss are:
  • Precondition checking, when practical
  • Warning eradication
  • The importance of a comprehensive test suite, for both correctness and ensuring that bugs don't reappear
  • Static analysis
  • Dynamic analysis
  • Fuzzing

Speakers

Marshall Clow

Principal Engineer, CPPAlliance
Marshall has been programming professionally for 35 years. He is the author of Boost.Algorithm, and has been a contributor to Boost for more than 15 years. He is the chairman of the Library working group of the C++ standard committee. He is the lead developer for libc++, the C++ standard…


Tuesday April 17, 2018 9:00am - 9:40am BST
Bristol 2

9:00am BST

Performance Analysis of Clang on DOE Proxy Apps
The US Department of Energy has released nearly 50 proxy applications (http://proxyapps.exascaleproject.org/). These are simplified applications that represent key characteristics of a wide class of scientific computing workloads. We've conducted in-depth performance analysis of Clang-generated code for these proxy applications, comparing to GCC-compiled code and, in some cases, code generated by vendor compilers, and have found some interesting places where Clang could do better. In this talk, we'll walk through several interesting examples and present some data on overall trends which, in some cases, are surprising.

Speakers

Hal Finkel

Argonne National Laboratory


Tuesday April 17, 2018 9:00am - 9:40am BST
Bristol 1

9:00am BST

Hackers' Lab
Tuesday April 17, 2018 9:00am - 10:25am BST
Conservatory

9:45am BST

Implementing an LLVM based Dynamic Binary Instrumentation framework
This talk will go over our efforts to implement a new open-source DBI framework based on LLVM.

We have been using DBI frameworks in our work for a few years now: to gather coverage information for fuzzing, to break whitebox cryptography implementations used in DRM or to simply assist reverse engineering.

However, we were dissatisfied with the state of existing DBI frameworks: they either did not support mobile architectures, were too focused on a very specific use case, or were very hard to use. This prompted the idea of developing QBDI (https://qbdi.quarkslab.com), a new framework which has been in development for two and a half years.

With QBDI we wanted to try a modern take on DBI framework design: a tool crafted to support mobile architectures from the start, with a modular design enabling integration with other tools, and one that is easy to use because it abstracts all the low-level details away from users.

During the talk, we will review the motivation behind using a DBI. We will explain its core principles and the main implementation challenges we faced. We will share some lessons learned in the process and how they changed the way we think about dynamic instrumentation tools.

Speakers

Cédric Tessier

Security Researcher & TL, Quarkslab
Cédric Tessier is a security researcher who designed instrumentation tools focused on reverse engineering as a member of a red team while working at Apple for five years. He continued to do so in the past few years at Quarkslab, as the leader of a team devoted to instrumentation…


Tuesday April 17, 2018 9:45am - 10:25am BST
Bristol 1

9:45am BST

LLVM Greedy Register Allocator – Improving Region Split Decisions
LLVM Code Generation provides several alternative passes for performing register allocation. Most of the LLVM in-tree targets use the Greedy Register Allocator, which was introduced in 2011. An overview of this allocator was presented by Jakob Olesen at the LLVM Developers' Meeting of that year (*). This allocator relies on splitting the live ranges of variables in order to cope with having more simultaneously live values than available registers. In this technique a live range is split into two or more smaller subranges, where each subrange can be assigned a different register or be spilled.

This talk revisits the Greedy Register Allocator available in current LLVM, focusing on its live range region splitting mechanism. We show how this mechanism chooses to split live ranges, examine a couple of cases exposing suboptimal split decisions, and present recent contributions along with their performance impact. More details can be found in the patches and their reviews (**).

(*) https://llvm.org/devmtg/2011-11/#talk6 (**) https://reviews.llvm.org/rL316295, https://reviews.llvm.org/rL323870

Speakers

Tuesday April 17, 2018 9:45am - 10:25am BST
Bristol 2

10:25am BST

Break
Tuesday April 17, 2018 10:25am - 11:00am BST
Bristol Foyer

11:00am BST

Clang Static Analyzer
BoF for the users and implementors of the Clang Static Analyzer. Suggested agenda:
  1. Quick presentation of the ongoing development activities in the Static Analyzer community
  2. Discussion of the main annoyances in using the Static Analyzer (e.g. sources of false positives)
  3. Discussion of the most wanted checks for the Static Analyzer
  4. Discussion of missing capabilities of the Analyzer (statistical checks, pointer analysis, ...)
  5. Discussion of the constraint solver limitations and proposed solutions
  6. Discussion of future directions

Speakers

Tuesday April 17, 2018 11:00am - 11:40am BST
Empire Suite

11:00am BST

Lightning Talks
 - C++ Parallel Standard Template Library support in LLVM (M. Dvorskiy, J. Cownie, A. Kukanov)
 - Can reviews become less of a bottleneck? (K. Beyls)
 - Clacc: OpenACC Support for Clang and LLVM (J. Denny, S. Lee, J. Vetter)
 - DragonFFI: Foreign Function Interface and JIT using Clang/LLVM (A. Guinet)
 - Easy::Jit: Compiler-assisted library to enable Just-In-Time compilation for C++ codes (Juan Manuel Martinez Caamaño, S. Guelton)
 - Flang -- Project Update (S. Scalpone)
 - Look-Ahead SLP: Auto-vectorization in the Presence of Commutative Operations (V. Porpodas, R. Rocha, L. Góes)
 - Low Cost Commercial Deployment of LLVM (J. Bennett)

Speakers

Jeremy Bennett

Chief Executive, Embecosm
Bio: Dr Jeremy Bennett is founder and Chief Executive of Embecosm (http://www.embecosm.com), a consultancy implementing open source compilers and chip simulators for major corporations around the world. He is an author of the standard textbook "Introduction to Compiling Techniques…

Kristof Beyls

Senior Principal Engineer, Arm
compilers and related tools, profiling, security.

Juan Manuel Martinez Caamaño

Engineer, Quarkslab
Likes LLVM and just-in-time compilation.

Adrien Guinet

Quarkslab

Rodrigo Rocha

University of Edinburgh

Steve Scalpone

NVIDIA
Flang, F18, and NVIDIA C, C++, and Fortran for high-performance computing.

Jeffrey S. Vetter

Oak Ridge National Laboratory


Tuesday April 17, 2018 11:00am - 11:40am BST
Bristol 2

11:00am BST

MIR-Canon: Improving Code Diff Through Canonical Transformation.
Comparing IR and assembly with diff tools is common but can involve tedious reasoning through differences that are semantically equivalent. The development of GlobalISel presented the problem of verifying correctness between two programs compiled from identical IR using two different instruction selectors (SelectionDAG versus GlobalISel), where the outcomes of the two selectors should ideally be reducible to identical programs. It is in this context that transforming the post-ISel Machine IR (MIR) to a more canonical form shows promise.

To address said verification challenges we have developed a MIR Canonicalization pass in the LLVM open source tree to perform a host of transformations that help to reduce non-semantic differences in MIR. These techniques include canonical virtual register renaming (based on the order operands are walked in the def-use graph), canonical code motion of defs in relation to their uses, and hoisting of idempotent instructions.

In this talk we will discuss these algorithms and demonstrate the benefits of using the tool to canonicalize code prior to diffing MIR. The tool is available for the whole LLVM community to try.

Speakers

Puyan Lotfi

Compiler Engineer, Facebook


Tuesday April 17, 2018 11:00am - 11:40am BST
Bristol 1

11:00am BST

Round Table
  • Build system integration for interactive tools
  • LLVM for secure code
  • LLDB

Tuesday April 17, 2018 11:00am - 12:35pm BST
Conservatory

11:45am BST

Lightning Talks
 - Measuring the User Debugging Experience (G. Bedwell)
 - Measuring x86 instruction latencies with LLVM (G. Chatelet, C. Courbet, B. De Backer, O. Sykora)
 - OpenMP Accelerator Offloading with OpenCL using SPIR-V (D. Schürmann, J. Lucas, B. Juurlink)
 - Parallware, LLVM and supercomputing (M. Arenaz)
 - Returning data-flow to asynchronous programming through static analysis (M. Gilbert)
 - RFC: A new divergence analysis for LLVM (S. Moll, T. Klössner, S. Hack)
 - Static Performance Analysis with LLVM (C. Courbet, O. Sykora, G. Chatelet, B. De Backer)
 - Supporting the RISC-V Vector Extensions in LLVM (R. Kruppe, J. Oppermann, A. Koch)
 - Using Clang Static Analyzer to detect Critical Control Flow (S. Cook)

Speakers

Dr. Manuel Arenaz

Arenaz, Appentra Solutions
Dr. Manuel Arenaz is the CEO of APPENTRA Solutions and professor at the University of A Coruña (Spain). He holds a PhD in Computer Science from the University of A Coruña (2003) on advanced compiler techniques for parallelisation of scientific codes. He is passionate about technology…

Greg Bedwell

Sony Interactive Entertainment

Simon Cook

Compiler Engineer, Embecosm

Clement Courbet

Software Engineer, Google

Matthew Gilbert

Senior Software Engineer, Microsoft

Robin Kruppe

TU Darmstadt

Simon Moll

Researcher/PhD Student, Saarland University

Daniel Schürmann

Technische Universität Berlin


Tuesday April 17, 2018 11:45am - 12:35pm BST
Bristol 2

11:45am BST

Scalar Evolution - Demystified
Scalar Evolution (SCEV) is an LLVM analysis used to analyse, categorise and simplify expressions in loops. Many optimisations, such as generalised loop strength reduction, parallelisation by induction variable (vectorisation), and loop-invariant expression elimination, rely on SCEV analysis.

However, SCEV is also a complex topic. This tutorial delves into how exactly LLVM performs the SCEV magic and how it can be used effectively to implement and analyse different optimisations.

This tutorial will cover the following topics:

1. What is SCEV? How does it help improve performance? SCEV in action (using simple clear examples).

2. Chains of Recurrences - the mathematical basis of SCEV.

3. The simplification/rewriting rules on CRs that SCEV uses to simplify expressions evolving out of induction variables, plus the terminology and SCEV expression types (e.g. AddRec) that are common currency for anyone trying to understand and use SCEV in any context.

4. LLVM's SCEV implementation of CRs - what's present and what's missing?

5. How to use SCEV analysis to write your own optimisation pass, including the usage of SCEV by LSR (Loop Strength Reduction) and others.

6. How to generate analysis info out of SCEV and how to interpret it.

The last talk on SCEV was at the 2009 LLVM Developers' Meeting. This tutorial is complementary to that one and goes further, with examples, discussions, and the evolution of scalar evolution in LLVM since then. The author has previously given a talk on the machine scheduler in LLVM - https://www.youtube.com/watch?v=brpomKUynEA&t=310s

Speakers

Tuesday April 17, 2018 11:45am - 12:35pm BST
Bristol 1

12:35pm BST

Lunch
Tuesday April 17, 2018 12:35pm - 2:00pm BST
Bristol 3

1:50pm BST

Pointers, Alias & ModRef Analyses
Alias analysis is widely used in many LLVM transformations. In this tutorial, we will give an overview of pointers and of the Alias and ModRef analyses. We will first present the concepts around pointers and memory models, including the representation of the different types of pointers in LLVM IR, then discuss the semantics of ptrtoint, inttoptr and getelementptr and how they, along with pointer comparison, are used to determine memory overlaps. We will then show how to use LLVM's alias analysis infrastructure efficiently and correctly, introduce the new API changes, and highlight common pitfalls in the usage of these APIs.
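For accesses at constant offsets from the same base object (as computed by getelementptr with constant indices), the overlap question reduces to simple range arithmetic. A simplified sketch of that reasoning (not LLVM's AliasAnalysis API; `mustNotOverlap` is an invented helper):

```cpp
#include <cassert>
#include <cstdint>

// Two accesses of sizeA/sizeB bytes at constant byte offsets from the
// SAME base object overlap iff their half-open byte ranges
// [off, off+size) intersect; they provably do not overlap when one
// range ends before the other begins.
bool mustNotOverlap(int64_t offA, uint64_t sizeA,
                    int64_t offB, uint64_t sizeB) {
  return offA + (int64_t)sizeA <= offB || offB + (int64_t)sizeB <= offA;
}
```

When the base objects are known to be distinct, no offset arithmetic is needed at all and the accesses cannot alias; the hard cases arise when pointers escape through ptrtoint/inttoptr or the base objects cannot be distinguished.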

Speakers

Nuno Lopes

Microsoft Research


Tuesday April 17, 2018 1:50pm - 2:40pm BST
Bristol 1

2:00pm BST

LLVM Foundation
Tuesday April 17, 2018 2:00pm - 2:40pm BST
Empire Suite

2:00pm BST

Organising benchmarking LLVM-based compiler: Arm experience
Arm Compiler 6 is a product based on the Clang/LLVM projects. Basing your product on the Clang/LLVM sources brings challenges in organising the product development lifecycle. You need to decide how to synchronise downstream and upstream repositories, and that decision affects how you test and benchmark. The Arm compiler team develops the compiler on the upstream trunk, keeping a downstream repository synchronised with it. Upstream public build bots guard us from commits that can break our builds, and we also have infrastructure for additional testing. There are a few public performance-tracking bots which run the LLVM test-suite benchmarks. Although the LLVM test-suite covers many use cases, products often have to care about a wider variety of use cases, so you will also have to track the quality of code generation on other programs.

In this presentation we will explain how we protect the Arm compiler product from code-generation quality issues that the public bots don't catch. We will cover topics such as continuous regression tracking, the process of fixing regressions, and benchmarking infrastructure. We will show that the most important part of protecting the quality of an LLVM-based product is to be closely involved in the development of upstream LLVM, which means detecting issues upstream as early as possible and reporting them as soon as possible. We hope our experience will enable both better LLVM-derived products and more effective contributions to LLVM itself from other companies' product teams.

Speakers

Evgeny Astigeevich

Staff Software Engineer, Arm
Evgeny Astigeevich is a staff engineer at Arm in Cambridge, UK. He has experience in performance analysis and in implementing compiler optimisations for different architectures. He was leading the Arm Compiler optimization team.


Tuesday April 17, 2018 2:00pm - 2:40pm BST
Bristol 2

2:00pm BST

Round Table
  • LLVM-MCA - Static machine code performance analysis tool
  • LLDB
  • Trace/Superblock scheduling in LLVM
  • RISC-V LLVM

Tuesday April 17, 2018 2:00pm - 3:25pm BST
Conservatory

2:45pm BST

Point-Free Templates
Template metaprogramming is similar to many functional languages: it is pure, with immutable variables. This encourages a similar programming style, which raises the question: what functional features can be leveraged to make template metaprogramming more powerful? Currying is just such a technique, with a growing number of use cases, for example the ability to write concise point-free metafunctions using partially applied combinators and higher-order functions. Such point-free template metafunctions can serve as a stand-in for the type-level lambda abstractions that C++ lacks. Tools already exist for converting pointful functions to point-free form in certain functional languages, and they can be used to quickly create point-free variations of a function or to find reusable patterns. As part of our research we have built a point-free template conversion tool using Clang LibTooling that takes pointful metafunctions and converts them to point-free metafunctions that can be used in lieu of type-level lambdas.
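As a flavour of the technique (a hand-written C++17 sketch, not the authors' conversion tool; `Curry` and `IsInt` are invented names):

```cpp
#include <type_traits>

// A minimal curry combinator: binds the first arguments of a
// metafunction F, yielding a new metafunction awaiting the rest.
// (Matching std::is_same against a template<class...> parameter
// relies on C++17 template template argument matching.)
template <template <class...> class F, class... Bound>
struct Curry {
  template <class... Rest>
  using apply = F<Bound..., Rest...>;
};

// Point-free partial application: "is this type int?" without
// writing a dedicated metafunction or a type-level lambda.
using IsInt = Curry<std::is_same, int>;

static_assert(IsInt::apply<int>::value, "int is int");
static_assert(!IsInt::apply<float>::value, "float is not int");
```

Here `Curry<std::is_same, int>` plays the role of the type-level lambda "λT. is_same<int, T>" with no dedicated lambda machinery, which is the kind of pattern the conversion tool aims to produce automatically.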

Speakers

Andrew Gozillon

University of the West of Scotland


Tuesday April 17, 2018 2:45pm - 3:25pm BST
Bristol 1

2:45pm BST

Protecting the code: Control Flow Enforcement Technology
Return-Oriented Programming (ROP) and, similarly, Call/Jump-Oriented Programming (COP/JOP) have been the prevalent attack methodologies for stealth exploit writers targeting vulnerabilities in programs. Intel has introduced Control-flow Enforcement Technology (CET) [1], a hardware-based solution for protecting against gadget-based ROP/COP/JOP attacks. The new architecture defends against such attacks using Indirect Branch Tracking and a Shadow Stack. The required support is implemented in LLVM and includes optimised lightweight instrumentation. This talk targets LLVM developers who are interested in the new security architecture and the methodology implemented in LLVM. Attendees will become familiar with basic control-flow attacks, the CET architecture and its LLVM compiler aspects. [1] https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf

Speakers

Oren Benita Ben Simhon

LLVM Compilers Developer, Intel


Tuesday April 17, 2018 2:45pm - 3:25pm BST
Bristol 2

3:25pm BST

Poster Session
Poster session and break

Tuesday April 17, 2018 3:25pm - 4:30pm BST
Bristol Foyer

4:30pm BST

LLVM x Blockchains = A new Ecosystem of Decentralized Applications
Recently, blockchains have been showing more and more potential as application platforms, not just transaction-trading platforms. Running applications on decentralized platforms differs fundamentally from the way we have built them before, and we can foresee that developers will redefine existing centralized applications and create new kinds of decentralized applications. However, the foundations are not ready yet: both academia and industry are exploring how to empower decentralized applications. We, Nebulas, call on the LLVM community to bring LLVM to the blockchain community. We propose several open problems that need to be addressed, with the goal of leveraging LLVM in blockchains. We also share our work on using LLVM to build a smart contract execution engine.

Speakers

Tuesday April 17, 2018 4:30pm - 5:15pm BST
Bristol 1 & 2

5:15pm BST

Closing Session
Closing remarks.

Speakers

Arnaud de Grandmaison

LLVM Foundation


Tuesday April 17, 2018 5:15pm - 5:30pm BST
Bristol 1 & 2