PLDI 2017 Invited Speakers
Martín Abadi, Google
Abstract: The recent, remarkable successes of machine learning are due in part to the invention of machine learning methods (especially for deep learning), to the collection of datasets for tackling problems in many fields, and to the availability of powerful hardware, including CPUs, GPUs, and custom-designed ASICs. Software systems, too, are central to this progress.
This talk suggests that it is instructive and fruitful to think of these software systems from a programming-language perspective. It focuses on TensorFlow, a recent system for machine learning that operates at large scale and in heterogeneous environments. TensorFlow owes its generality to its programmability. In TensorFlow, models for machine learning are assembled from primitive operations by function composition and other simple, familiar constructs. Other aspects of TensorFlow, such as its support for automatic differentiation and its memory management, are less common in mainstream programming languages. TensorFlow enables the development of a wide variety of models, in both production and research. As examples, this talk briefly describes some recent research applications related to programming.
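The two mechanisms the abstract highlights, assembling models from primitive operations by function composition and differentiating the result automatically, can be illustrated with a minimal pure-Python sketch. This is not TensorFlow's API; the `Node`, `add`, and `mul` names are invented here, and the approach is a toy scalar version of the reverse-mode automatic differentiation that TensorFlow performs over dataflow graphs of tensors.

```python
class Node:
    """A scalar value in a computation graph, with the data needed to differentiate it."""
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value              # forward result
        self.parents = parents          # input Nodes
        self.local_grads = local_grads  # d(output)/d(input) for each parent
        self.grad = 0.0                 # accumulated d(result)/d(self)

# Primitive operations: each returns a new Node recording its inputs
# and the local derivatives the chain rule will need.
def add(a, b):
    return Node(a.value + b.value, (a, b), (1.0, 1.0))

def mul(a, b):
    return Node(a.value * b.value, (a, b), (b.value, a.value))

def backward(out):
    """Propagate gradients from `out` back to every node feeding it.

    This simple stack walk is correct for expression trees like the one
    below; a real system processes nodes in reverse topological order so
    that shared subexpressions accumulate their gradients exactly once.
    """
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in zip(node.parents, node.local_grads):
            parent.grad += local * node.grad
            stack.append(parent)

# Compose a tiny "model" from primitives: y = x * w + b, then differentiate.
x, w, b = Node(3.0), Node(2.0), Node(1.0)
y = add(mul(x, w), b)
backward(y)
# y.value == 7.0; x.grad == 2.0 (= w); w.grad == 3.0 (= x); b.grad == 1.0
```

The key point of the sketch is that differentiation needs no support from the host language: because every primitive records its inputs and local derivatives, the gradient of any composition of primitives comes for free.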
This talk is based on joint work with many people, primarily at Google Brain. More information on TensorFlow is available at tensorflow.org.
Bio: Martín Abadi is a Principal Scientist at Google, in the Google Brain team. He is also a Professor Emeritus at the University of California at Santa Cruz, where he was a Professor in the Computer Science Department until 2013. He has held an annual Chair at the Collège de France, has taught at Stanford University and the University of California at Berkeley, and has worked at Digital's System Research Center, Microsoft Research Silicon Valley, and other industrial research labs. He received his Ph.D. from Stanford University in 1987. His research is mainly on computer and network security, programming languages, and specification and verification methods. It has been recognized with the Outstanding Innovation Award of the ACM Special Interest Group on Security, Audit and Control and with the Hall of Fame Award of the ACM Special Interest Group on Operating Systems, among other awards. He is a Fellow of the Association for Computing Machinery and of the American Association for the Advancement of Science (AAAS). He holds a doctorate honoris causa from École normale supérieure de Cachan.
Martin Odersky, EPFL
Abstract: To understand a piece of program text one must also understand the context in which the program fragment is to be executed. Modern programming languages offer an array of constructs to define context. In Scala, those constructs can be summed up as the three I's: Imports, Inheritance, and Implicits. Implicits in particular are a central, but also controversial, part of the language.
This talk explores the different facets of implicits in Scala, as they exist now and as they might evolve in the future. It highlights their potential benefits and problems, covering aspects of design, implementation, and ergonomics.
Bio: Martin Odersky heads the programming research group at EPFL. His research interests cover fundamental as well as applied aspects of programming languages. They include semantics, type systems, programming language design, and compiler construction. The main focus of his work lies in the integration of object-oriented and functional programming. His research thesis is that the two paradigms are just two sides of the same coin and should be unified as much as possible. To prove this he has experimented with a number of language designs, from Pizza to GJ to Functional Nets. He has also influenced the development of Java as a co-designer of Java generics and as the original author of the current javac reference compiler. His current work concentrates on the Scala programming language, which unifies FP and OOP while staying completely interoperable with Java and .NET.
Martin Odersky got his doctorate from ETHZ in 1989. He held research positions at the IBM T.J. Watson Research Center from 1989 and at Yale University from 1991. He was then a professor at the University of Karlsruhe from 1993 and at the University of South Australia from 1997. He joined EPFL as a full professor in 1999. He is an associate editor of the Journal of Functional Programming and a member of IFIP WG 2.8. He was conference chair for ICFP 2000, and program chair for ECOOP 2004 as well as ETAPS/CC 2007.
Frank Wood, University of Oxford
Abstract: Probabilistic programming uses programming language techniques to make it easy to denote and perform inference in the kinds of probabilistic models that inform decision-making, accelerate scientific discovery, and underlie modern attacks on the problem of artificial intelligence. Deep learning uses programming language techniques to automate supervised learning of program parameter values by gradient-based optimization.
What happens if we put them together?
This talk will review probabilistic programming. It will also introduce inference compilation and address how linking deep learning and probabilistic programming is leading to powerful new AI techniques while also opening up significant new research questions.
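The abstract's first claim, that a probabilistic program is ordinary code denoting a model in which inference then runs "backwards" from observed data to the random choices that produced it, can be illustrated with a minimal pure-Python sketch. The `coin_model` and `posterior` names are invented for illustration, and enumeration over a small discrete latent is the simplest possible inference method; real probabilistic programming systems (such as Anglican, from Dr. Wood's group) automate inference for general programs using sampling-based methods.

```python
def coin_model(bias, flips):
    """Generative model as code: the likelihood of an observed flip
    sequence (True = heads) under a given coin bias."""
    p = 1.0
    for heads in flips:
        p *= bias if heads else (1.0 - bias)
    return p

def posterior(latents, prior, likelihood, data):
    """Exact Bayesian inference by enumerating a discrete latent:
    weight each hypothesis by prior * likelihood, then normalize."""
    joint = {z: prior[z] * likelihood(z, data) for z in latents}
    total = sum(joint.values())
    return {z: weight / total for z, weight in joint.items()}

# Three candidate coin biases, uniform prior over them.
biases = [0.3, 0.5, 0.9]
prior = {b: 1.0 / len(biases) for b in biases}

# Observed data: 4 heads, 1 tail.
data = [True, True, True, False, True]

post = posterior(biases, prior, coin_model, data)
best = max(post, key=post.get)   # the heads-heavy data favor bias 0.9
```

Inference compilation, as introduced in the talk, replaces this kind of brute-force search with a neural network trained (by the gradient-based optimization mentioned in the abstract's second sentence) to map observed data directly to good proposals for the latent choices.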
Bio: Dr. Wood is an associate professor in the Department of Engineering Science at the University of Oxford. Before that, he was an assistant professor of Statistics at Columbia University and a research scientist at the Columbia Center for Computational Learning Systems. He was formerly a postdoctoral fellow at the Gatsby Computational Neuroscience Unit of University College London under Dr. Yee Whye Teh. He received his PhD in computer science from Brown University under the supervision of Dr. Michael Black and Dr. Tom Griffiths.
Dr. Wood is a product of the Illinois Mathematics and Science Academy, from which he graduated in 1992. He began college at the University of Illinois at Chicago (UIC) but transferred to Cornell University, receiving a B.S. in computer science in 1996. Prior to his academic career he was a successful entrepreneur, running and selling the content-based image retrieval company ToFish! to Time Warner and serving as CEO of Interfolio. He started his career at the Cornell Theory Center and subsequently at the Lawrence Berkeley National Laboratory.
Dr. Wood holds 6 patents, has authored over 40 papers, received the AISTATS best paper award in 2009, and has been awarded faculty research awards from Xerox, Google and Amazon.
Schedule

Mon 19 Jun
|09:00 - 09:05|
|09:05 - 10:00| Martin Odersky, EPFL, Switzerland

Tue 20 Jun
|09:00 - 09:50| Martín Abadi, Google

Wed 21 Jun
|09:00 - 09:55| Probabilistic Programming and Inference Compilation, or, How I Learned to Stop Worrying and Love Deep Networks. Frank Wood, University of Oxford