Sun 18 Jun 2017 16:30 - 17:00 at Vertex WS218 - Afternoon talks 2 Chair(s): P. Sadayappan

Nowadays, GPU accelerators are widely used in areas with large data-parallel computations, such as scientific computing or neural networks. Programmers can either write low-level CUDA/OpenCL code or use a GPU extension for a high-level programming language for better productivity. Most such extensions target statically-typed languages, but many programmers prefer dynamically-typed languages for their simplicity and flexibility.

This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs from multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code.
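The kernel-fusion idea can be illustrated with a small plain-Ruby sketch (this is not the actual Ikra API; the `LazyArray` class and its `pmap` method are hypothetical names chosen for illustration). Each `pmap` call records a reusable parallel section instead of computing immediately; materialization then applies all recorded sections in a single traversal, the CPU analogue of fusing several sections into one GPU kernel:

```ruby
# Illustrative sketch (hypothetical API, not Ikra itself): composing
# two "parallel sections" and fusing them into a single traversal.
class LazyArray
  def initialize(base, ops = [])
    @base = base
    @ops  = ops          # chain of element-wise operations (to be fused)
  end

  # Each pmap call only records its block; nothing is computed yet.
  def pmap(&blk)
    LazyArray.new(@base, @ops + [blk])
  end

  # Materialization applies all recorded operations in ONE pass over
  # the data -- the analogue of fusing sections into one kernel.
  def to_a
    @base.map { |x| @ops.reduce(x) { |acc, op| op.call(acc) } }
  end
end

squared_plus_one = LazyArray.new([1, 2, 3, 4])
                            .pmap { |x| x * x }   # section 1
                            .pmap { |x| x + 1 }   # section 2, fused with 1
squared_plus_one.to_a  # => [2, 5, 10, 17]
```

Because the two blocks are applied back-to-back per element, the intermediate array `[1, 4, 9, 16]` is never allocated; on a GPU, the corresponding benefit is avoiding a round trip through device memory between kernels.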

Sun 18 Jun
Times are displayed in time zone: (GMT+02:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

16:00 - 17:30: ARRAY 2017 - Afternoon talks 2 at Vertex WS218
Chair(s): P. Sadayappan (Ohio State University)

16:00 - 16:30: Benjamin Andreassen, Jan Christian, Lasse Natvig (Norwegian University of Science and Technology)
16:30 - 17:00: Matthias Springer, Peter Wauligmann, Hidehiko Masuhara (Tokyo Institute of Technology)
17:00 - 17:30: (speakers not listed)