GPU computation has been big news in the high-performance computing world for a year or so now. In case you’ve been living under a rock, GPU supercomputing is the use of the PC graphics card to handle the highly parallel grunt work that multi-core CPUs don’t have the silicon for.
This makes sense: a GPU is basically a huge number of very simple processors strapped together in a low-latency parallel configuration. For massively parallel problems, such as Smoothed Particle Hydrodynamics, this is ideal, since such simulations boil down to enormous numbers of simple calculations that can run simultaneously.
The technology harks back to the Connection Machine of the 80s, a time when the world was purer and simpler.
Anyway, the problem with GPUs is coding them up quickly. Parallel programming is a pain in the backside: bugs crop up in droves and are fiendishly hard to track down. That means less time adding value with smart maths and more time tracing slip-ups.
However, things are changing, as MathWorks have finally got round to enabling GPU parallelism in the R2010b release of MATLAB. It’s restricted to NVIDIA’s CUDA platform for now, but so what, at least it’s there.
So now you can use MATLAB without having to code all the slow bits in Fortran or C with CUDA. Here’s some prepared blurb on the features of this new release:
MATLAB GPU computing capabilities include:
- Data manipulation on NVIDIA GPUs
- GPU-accelerated MATLAB operations
- Integration of CUDA kernels into MATLAB applications without low-level C or Fortran programming
- Use of multiple GPUs on the desktop (via the toolbox) and on a computer cluster (via MATLAB Distributed Computing Server)
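To give a flavour of what this looks like in practice, here’s a minimal sketch of the `gpuArray` workflow, assuming you have the Parallel Computing Toolbox and a CUDA-capable NVIDIA card installed:

```matlab
% Sketch: move data to the GPU, run an overloaded operation there,
% and bring the result back to host memory.
A = rand(1000);      % ordinary array in host (CPU) memory
G = gpuArray(A);     % copy it onto the GPU
F = fft(G);          % fft is GPU-accelerated; F stays on the GPU
result = gather(F);  % copy the result back to the host
```

The nice part is that `fft` (and many other built-ins) dispatch to the GPU automatically once the input is a `gpuArray`, so existing MATLAB code needs very little surgery.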
Examples and How To
- Introduction to MATLAB GPU Computing (Video)
- Benchmarking “\” Operator on NVIDIA GPUs (Demo)
- Speeding Up Calculation of Relative Areas of States Using a Point-In-Polygon Method and GPUs (Demo)
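For the CUDA-kernel integration mentioned in the feature list, the toolbox provides a `parallel.gpu.CUDAKernel` object. Here’s a hedged sketch; the file names `myKernel.ptx` and `myKernel.cu` are hypothetical placeholders for a kernel you’ve already compiled with nvcc:

```matlab
% Sketch: wrap a precompiled CUDA kernel and call it from MATLAB.
% 'myKernel.ptx' / 'myKernel.cu' are assumed to exist and define
% a kernel operating on a single float array.
k = parallel.gpu.CUDAKernel('myKernel.ptx', 'myKernel.cu');
k.ThreadBlockSize = 256;            % threads per block
k.GridSize = 4;                     % number of blocks
in = gpuArray(zeros(1, 1024, 'single'));
out = feval(k, in);                 % launch the kernel on the GPU
```

This is the “without low-level C or Fortran programming” bit: the kernel itself is still CUDA C, but all the launch plumbing stays in MATLAB.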