Speeding Up K-Means Algorithm by GPUs

“Speeding up k-means algorithm by GPUs”
Krishnali Penta, University of Florida

Abstract- This paper gives a brief description of the above-titled paper. Data clustering is one of the most widely used methods across many applications, and parallelizing these time-consuming computations is of considerable importance. The reviewed paper also adds the ability to handle input data of varying dimensionality and to adapt its processing accordingly.

I. INTRODUCTION

In the world of data clustering, k-means is one of the most extensively used algorithms, with applications ranging from economics and bioinformatics to astronomy and image processing. All of these applications deal with huge input data sets, and parallelizing the computation is one of the best ways to reduce running time. Graphics processing units (GPUs) are among the many hardware platforms used to parallelize such applications. Developed initially for graphics workloads, GPUs have since found their way into various “general-purpose” applications as well. Existing GPU implementations of the algorithm, such as UV_kmeans, have shown speedups of forty to sixty times over dual-core and multi-core implementations. The programming model used in this paper is CUDA (Compute Unified Device Architecture); the basic organization of the architecture is shown in Fig. 1. Briefly stated, modern-day GPUs contain hundreds of processor cores, each executing the exact same instruction but on different data.
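To make concrete what the GPU parallelizes, the following is a minimal sketch of one k-means iteration in plain Python. The function names `assign` and `update` and the example data are illustrative, not from the paper; on a GPU, the per-point nearest-centroid search in `assign` is the step that maps naturally onto one thread per data point.

```python
def assign(points, centroids):
    """Return, for each point, the index of its nearest centroid
    (squared Euclidean distance). On a GPU, each point's search
    would run in its own thread."""
    labels = []
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(dists.index(min(dists)))
    return labels

def update(points, labels, k):
    """Recompute each centroid as the mean of its assigned points."""
    dim = len(points[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p, label in zip(points, labels):
        counts[label] += 1
        for d in range(dim):
            sums[label][d] += p[d]
    new_centroids = []
    for i in range(k):
        if counts[i]:
            new_centroids.append([s / counts[i] for s in sums[i]])
        else:
            new_centroids.append(sums[i])  # empty cluster: simplification
    return new_centroids

# Tiny illustrative run: two well-separated groups, k = 2.
points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centroids = [[0.0, 0.0], [5.0, 5.0]]
labels = assign(points, centroids)
centroids = update(points, labels, 2)
```

A full run simply alternates `assign` and `update` until the labels stop changing; the paper's GPU version accelerates exactly this loop.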
