GPGPU
Stands for "General-Purpose computation on Graphics Processing Units." GPGPU, or GPU computing, is the use of a GPU to handle general computing operations. Modern operating systems allow programs to access the GPU alongside the CPU, speeding up the overall performance.
While GPUs are designed to process graphics calculations, they can also be used to perform other operations. GPGPU maximizes processing efficiency by offloading some operations from the central processing unit (CPU) to the GPU. Instead of sitting idle when it is not rendering graphics, the GPU remains available to perform other tasks. Since GPUs are optimized for highly parallel vector calculations, they can even complete some workloads faster than the CPU.
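As a rough illustration of this kind of offloading, the sketch below adds two large vectors on the GPU using CUDA (one of the GPGPU interfaces mentioned later in this entry). It assumes a CUDA-capable GPU and the nvcc compiler; the kernel name vectorAdd, the element count, and the use of managed memory are arbitrary choices made only for this example.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread handles one element of the vectors, so thousands of
    // additions run in parallel instead of looping one by one on the CPU.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;                 // one million elements
        float *a, *b, *c;

        // Managed (unified) memory is visible to both the CPU and the GPU,
        // so the same pointers can be used on either side.
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));

        for (int i = 0; i < n; i++) {          // fill the inputs on the CPU
            a[i] = 1.0f;
            b[i] = 2.0f;
        }

        // Offload the addition to the GPU: 256 threads per block.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vectorAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();               // wait for the GPU to finish

        printf("c[0] = %f\n", c[0]);           // expect 3.0

        cudaFree(a);
        cudaFree(b);
        cudaFree(c);
        return 0;
    }

Compiled with nvcc (for example, nvcc vector_add.cu -o vector_add), the program should print 3.0, since every element of c is the sum of 1.0 and 2.0.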
GPGPU is a type of parallel processing, in which operations are processed in tandem by the CPU and GPU. When the GPU finishes a calculation, it may store the result in a buffer, then pass it to the CPU. Since processors can complete billions of operations each second, data is often stored in the buffer for only a few milliseconds.
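To make that buffer hand-off concrete, the following sketch uses explicit device buffers instead of managed memory: the GPU writes its result into dev_out, a buffer that lives in the GPU's own memory, and cudaMemcpy then passes that result back to the CPU. The squareAll kernel and all variable names are hypothetical, chosen only for this illustration.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical kernel: each GPU thread squares one element of the input.
    __global__ void squareAll(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            out[i] = in[i] * in[i];
        }
    }

    int main() {
        const int n = 1024;
        float host_in[n], host_out[n];
        for (int i = 0; i < n; i++) host_in[i] = (float)i;

        // Allocate buffers in the GPU's own memory.
        float *dev_in, *dev_out;
        cudaMalloc(&dev_in,  n * sizeof(float));
        cudaMalloc(&dev_out, n * sizeof(float));

        // The CPU hands the input data to the GPU...
        cudaMemcpy(dev_in, host_in, n * sizeof(float), cudaMemcpyHostToDevice);

        // ...the GPU computes and writes its result into the dev_out buffer...
        squareAll<<<(n + 255) / 256, 256>>>(dev_in, dev_out, n);

        // ...and the result is copied out of the GPU buffer back to the CPU.
        // cudaMemcpy waits for the kernel to finish before copying.
        cudaMemcpy(host_out, dev_out, n * sizeof(float), cudaMemcpyDeviceToHost);

        printf("host_out[3] = %f\n", host_out[3]);   // expect 9.0

        cudaFree(dev_in);
        cudaFree(dev_out);
        return 0;
    }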
GPU computing is made possible by a programming framework that allows the CPU and GPU to share processing requests. The most popular is OpenCL, an open standard supported by multiple platforms and video cards. Others include CUDA (Compute Unified Device Architecture), an API created by NVIDIA, and APP (Accelerated Parallel Processing), an SDK provided by AMD.