Laboratory for Internet and Innovative Technologies

Speedup of parallel execution on SIMD architectures is bounded according to Amdahl's Law. Furthermore, according to Gustafson's Law, there are algorithms that can achieve almost linear speedup. However, researchers have reported examples of superlinear speedup for certain types of algorithms executed on specific multiprocessors. In this paper we achieve superlinear speedup on GPU devices, which are also categorized as SIMD. We implement a structure-persistent algorithm that efficiently exploits the shared cache memory and avoids cache misses as much as possible. Our theoretical analysis and experimental results show the existence of superlinear speedup for algorithms that run on an existing GPU device.
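The abstract refers to an algorithm that exploits the GPU's fast on-chip shared memory to avoid cache misses. As an illustration only, not a reproduction of the paper's actual kernel, the following is a standard CUDA shared-memory tiled matrix multiplication in the spirit of the Matrix Multiplication and CUDA keywords; the tile width `TILE`, the kernel name, and the assumption that `n` is a multiple of the tile width are all choices made here for the sketch:

```cuda
#include <cuda_runtime.h>

#define TILE 16  /* tile width; an assumed value, not taken from the paper */

/* C = A * B for square n x n row-major matrices
   (n is assumed to be a multiple of TILE for brevity) */
__global__ void matMulTiled(const float *A, const float *B, float *C, int n)
{
    /* Per-block tiles staged in shared memory keep data reuse on-chip and
       cut global-memory traffic -- the kind of cache-friendly access
       pattern the abstract describes. */
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float sum = 0.0f;

    for (int t = 0; t < n / TILE; ++t) {
        /* Each thread loads one element of the A tile and one of the B tile. */
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();  /* wait until both tiles are fully loaded */

        for (int k = 0; k < TILE; ++k)
            sum += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();  /* wait before overwriting the tiles next iteration */
    }
    C[row * n + col] = sum;
}
```

Each element of A and B is read from global memory only n/TILE times instead of n times, which is why tiling of this kind interacts so strongly with the cache behavior the paper analyzes.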


Leonid Djinevski, Sasko Ristov, and Marjan Gusev


Cache memory, Matrix Multiplication, GPU, CUDA, HPC, Superlinear Speedup

Full Paper

The technical report is published as a paper in ICT Innovations 2012, Advances in Intelligent Systems and Computing (ed. S. Markovski and M. Gusev), Springer-Verlag, Berlin Heidelberg, 2013, volume AISC 257, ISBN 978-3-642-37168-4, ISSN 2194-5357, pp. 285-296, DOI: 10.1007/978-3-642-37169-1