This has kind of been the claim for years now, though, with a burgeoning market of compute apps always just around the corner. Only it isn't that easy: GPU compute suits only a specific set of problems, partly because of the restrictions of GPU-oriented architectures, and partly because the separate islands of memory force endless copies back and forth. Unified memory should go a long way toward making compute more generally usable, though of course that does nothing for the person paying $6000 for this unit.
I wouldn't be surprised if Apple develops a library to help facilitate GPU usage, similar to how they developed Grand Central Dispatch to help developers utilize multicore CPUs more effectively.
Even better: what if Apple developed a whole language for GPU compute? They could eventually get other vendors to participate and make it an open standard. How about "Open Computing Language"? Nah, too verbose. How about "OpenCL"? ... =)
No - GCD is only about distributing workloads across CPU cores, and doesn't involve the GPU.
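For reference, this is roughly what that CPU-side model looks like, a minimal sketch using dispatch_apply (the array size and the per-element arithmetic are just made up for illustration):

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    #define COUNT 1000000

    static float data[COUNT];

    int main(void) {
        /* dispatch_apply spreads the loop iterations across the
           available CPU cores via a concurrent queue; no GPU involved. */
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_apply(COUNT, q, ^(size_t i) {
            data[i] = data[i] * 2.0f + 1.0f;   /* per-element CPU work */
        });
        printf("done: data[0] = %f\n", data[0]);
        return 0;
    }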
OpenCL uses a special programming model, so you can't use it for general application code. It's good for doing repetitive operations on large arrays - e.g. image or signal processing, or machine learning. OpenCL code can also run on the CPU if there is no GPU, or if the overhead of shipping the data to the GPU is too high.
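To make that concrete, here's a rough sketch of the kind of kernel OpenCL is aimed at: one small function applied independently to every element of a big array (the kernel name and the scale/offset parameters are invented for illustration). The host side decides whether it runs on the GPU or the CPU when it picks a device (CL_DEVICE_TYPE_GPU vs CL_DEVICE_TYPE_CPU).

    /* OpenCL C kernel: each work-item handles one array element,
       so the same code can run across GPU cores or CPU cores. */
    __kernel void scale_offset(__global const float *in,
                               __global float *out,
                               const float scale,
                               const float offset,
                               const unsigned int n)
    {
        size_t i = get_global_id(0);   /* which element am I? */
        if (i < n)
            out[i] = in[i] * scale + offset;
    }

The host enqueues this with clEnqueueNDRangeKernel after copying (or, with unified memory, simply mapping) the buffers - which is exactly the data-shipping overhead mentioned above.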