Sun 18 Jun 2017 16:30 - 17:00 at Vertex WS218 - Afternoon talks 2 Chair(s): P. Sadayappan

Nowadays, GPU accelerators are widely used in areas with large data-parallel computations such as scientific computing or neural networks. Programmers can either write low-level CUDA/OpenCL code or, for better productivity, use a GPU extension for a high-level programming language. Most such extensions target statically-typed languages, but many programmers prefer dynamically-typed languages for their simplicity and flexibility.

This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs from multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code.
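The kernel-fusion idea from the abstract can be illustrated with a small plain-Ruby sketch. The method name `pmap` below is a stand-in for a parallel map primitive, not Ikra's confirmed API; here it is simply aliased to Ruby's sequential `map` so the example runs anywhere. The point is that two chained parallel sections compute the same result as one merged section, which is what fusing them into a single GPU kernel relies on.

```ruby
# Illustrative sketch: two chained element-wise operations that a
# kernel-fusing compiler could merge into a single GPU kernel.
# `pmap` is a hypothetical parallel map, aliased to Array#map here.

class Array
  alias_method :pmap, :map  # stand-in: in a GPU extension this would launch a kernel
end

input = (1..8).to_a

# Unfused: two separate passes over the data (two kernels).
squared = input.pmap { |x| x * x }
shifted = squared.pmap { |x| x + 1 }

# Fused: one pass combining both blocks (one kernel after fusion).
fused = input.pmap { |x| x * x + 1 }

puts shifted == fused  # prints "true": fusion preserves the result
```

Fusion matters on a GPU because each unfused pass reads its input from and writes its output to device memory; merging the blocks keeps intermediate values in registers.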

Sun 18 Jun
Times are displayed in time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

16:00 - 17:30: Afternoon talks 2 (ARRAY) at Vertex WS218
Chair(s): P. Sadayappan (Ohio State University)
16:00 - 16:30
Talk
Efficient Array Slicing on the Intel Xeon Phi Coprocessor
ARRAY
Benjamin Andreassen (Norwegian University of Science and Technology), Jan Christian (Norwegian University of Science and Technology), Lasse Natvig (Norwegian University of Science and Technology)
16:30 - 17:00
Talk
Modular Array-based GPU Computing in a Dynamically-typed Language
ARRAY
Matthias Springer (Tokyo Institute of Technology), Peter Wauligmann (Tokyo Institute of Technology), Hidehiko Masuhara (Tokyo Institute of Technology)
17:00 - 17:30
Talk
HPTT: A High-Performance Tensor Transposition C++ Library
ARRAY