Martin Abadi (UC Santa Cruz and MSR Silicon Valley, US)

Models and Proofs of Protocol Security: A Progress Report

Over the years, rigorous models have been developed for many security policies and mechanisms; these models have often been the subject of formal proofs, even automated ones. In this talk we discuss some of the progress in this direction and some of the problems that remain, focusing on research on security protocols. We emphasize the use of program-like representations of protocols, and their automated analysis in symbolic and computational models.

(This talk is based on the work of many people, and in particular on joint work with Bruno Blanchet and Hubert Comon-Lundh.)

Luca Benini (U Bologna, Italy)

Predictability vs. Efficiency in the Multicore Era

Fight of titans or happy ever after?

In this talk I will give an overview of recent trends in multi-core platforms for embedded computing. The shift toward multi-core architectures has been driven by technological constraints (power consumption and design closure issues in nanometer technologies) and not by the "coming of age" of parallel programming models, compilation, analysis and verification environments. Thus, we may be building terascale many-core architectures that we cannot program efficiently. Even worse, we may not be able to give any guarantees on the execution timing, constraints and real-time properties of applications. This is a challenge AND an opportunity for the software design and verification community: I will give some views on what is being done, what could be done, and what I hope will be done to build efficient and predictable multi-core platforms.

Sumit Gulwani (MSR Redmond, US)

SPEED: Symbolic Complexity Bound Analysis

As processor clock speeds begin to plateau, there is an increasing need to focus on software performance. One such performance metric is the symbolic computational complexity bound of a procedure, expressed as a function of its inputs. Such automatically generated bounds are useful for the early detection of egregious performance problems in large modular codebases that are constantly being changed by multiple developers, who make heavy use of code written by others without a good understanding of its implementation complexity. These worst-case bounds also augment the traditional performance measurement process of profiling, which is only as good as the set of test inputs.

The SPEED project develops static program analysis techniques for computing symbolic computational complexity bounds. Computing such bounds is a technically challenging problem, since bounds for even simple sequential programs are usually disjunctive, non-linear, and involve numerical properties of heaps. Sometimes even proving termination is hard (remember, the halting problem is undecidable!), and computing bounds ought to be an even harder problem.
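
As a small illustration (assumed here, not taken from the talk), even a simple counting loop has a disjunctive bound: the loop below runs max(0, n - x) times, i.e. its bound is 0 when x >= n and n - x otherwise.

    /* Illustrative sketch: the bound of this loop is disjunctive.
       It iterates 0 times if x >= n, and n - x times otherwise,
       i.e. max(0, n - x). */
    void count_up(int x, int n) {
        while (x < n) {
            x = x + 1;
        }
    }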

In this talk, I will describe some techniques that enable existing off-the-shelf linear invariant generation tools to compute polynomial and disjunctive bounds. These techniques include: (a) program transformation (control-flow refinement, which transforms loops with sophisticated control flow into simpler loops), and (b) monitor instrumentation (multiple counter variables whose placement is determined dynamically by the analysis). I will also describe some specialized invariant generation tools (based on abstract interpretation) that enable bound computation. These can compute invariants that describe (a) numerical properties of heap partitions, and (b) relationships involving non-linear operators such as logarithm, exponentiation, multiplication, and max. Together, these techniques enable the generation of complexity bounds that are usually precise not only in terms of the computational complexity, but also in terms of the constant factors.
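
The following sketch illustrates the counter-instrumentation idea on a standard two-phase loop; the function and variable names are illustrative assumptions, not SPEED's actual output. Placing a separate counter on each path lets a linear invariant generator prove c1 <= m and c2 <= n, and the sum of the counters yields the overall bound n + m.

    /* Hypothetical example of counter instrumentation.  A single
       counter at the loop head is hard to bound with linear
       invariants alone, but one counter per path is easy:
         c1 <= m  (c1 increases only together with y, and only while y < m)
         c2 <= n  (c2 increases only together with x, and only while x < n)
       Total iterations = c1 + c2 <= n + m. */
    void two_phase(int n, int m) {
        int x = 0, y = 0;
        int c1 = 0, c2 = 0;      /* instrumented counters */
        while (x < n) {
            if (y < m) {
                y = y + 1;
                c1 = c1 + 1;
            } else {
                x = x + 1;
                c2 = c2 + 1;
            }
        }
    }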

Ofer Strichman (Technion, Israel)

Regression Verification: Proving the Equivalence of Similar Programs

The ability to prove the equivalence of successive, closely related versions of a program can be useful for maintaining backward compatibility. This problem has the potential to be easier in practice than functional verification, for at least two reasons: first, it circumvents the problem of specifying what the program should do; second, in many cases it is computationally easier, because it offers various opportunities for abstraction and decomposition that are only relevant in this context.

The talk will begin by defining six different notions of input/output equivalence between programs. It will then focus on Hoare-style proof rules that can be used for proving the equivalence of recursive functions according to the above-mentioned definitions. We then show a decomposition algorithm that, given a mapping between the recursive functions in the two programs, attempts to reduce the equivalence verification problem to many smaller verification problems, each corresponding to a pair of mapped functions. Callees of these functions that have already been proven equivalent are abstracted with uninterpreted functions. We will conclude the talk by describing a regression verification tool for C programs that, based on these rules and the decomposition algorithm, was able to automatically prove the equivalence of some nontrivial programs.
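
As a minimal illustration (assumed here, not taken from the talk), consider two syntactically different but input/output-equivalent recursive functions. In the decomposition, the recursive calls on both sides are abstracted by the same uninterpreted function, and the remaining call-free bodies are then checked for equivalence, e.g. with an SMT solver.

    /* Illustrative pair of mapped functions: same input/output behavior,
       different syntax.  To check them, replace the recursive calls
       fact_old(n - 1) and fact_new(n - 1) by the same uninterpreted
       function UF(n - 1) and prove that the two bodies return equal
       values for all inputs. */
    int fact_old(int n) {
        if (n <= 1)
            return 1;
        return n * fact_old(n - 1);
    }

    int fact_new(int n) {
        if (n == 0 || n == 1)
            return 1;
        return fact_new(n - 1) * n;
    }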
