
It's a "forward looking" architecture (to steal the line from the iPhone 5S pitch.) They're betting big on GPU compute. I wouldn't say it is for "niche markets", though perhaps today it is only useful to smaller markets, it's that this computer is going to over time get faster and faster relative to the 2013 iMac and MBPs as more apps take advantage of the GPUs. It's kind of a unique phenomenon and makes the benchmarks misleading. That said, it remains to be seen if this bet will pay off -- we could end up 5 years from now with the same small subset of apps taking advantage of GPUs as there are now.



As a mostly-hobbyist 3D artist, I'm already feeling really left out when I look at my relatively poor GPU vs. the capabilities of the rendering software that I use. If this fever has already started to seep down to my level, I certainly wouldn't predict against continued growth of the GPU computing world.


As someone doing GPGPU, I just hope they'll release an NVIDIA option. Scientific GPGPU relies heavily on CUDA support; OpenCL just isn't there yet, if ever.


I would argue that at least the current model actually is for a niche market, i.e. applications that use GPUs right now. By the time more non-gaming applications outside of media editing are using GPUs more extensively, the top end mobile graphics chips will have the power of this dual GPU setup.

It's a bold bet on a possible trend, which I like a lot, since it would mean that non-gamers would benefit from GPUs that would otherwise bore themselves to death on their machines. Also, it would give AMD a better position, maybe averting x86 becoming a complete Intel monopoly.


This has kind of been the claim for years now, though, with a burgeoning market of compute apps always just around the corner. Only it isn't so easy, and compute applies only to a specific set of problems, not only because of the GPU-geared restrictions and architecture of these designs, but because the islands of memory force endless copies back and forth. Unified memory should go a long way toward making compute more generally usable, though of course that does nothing for the person paying $6000 for this unit.
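To put a rough shape on "endless copies": in today's OpenCL host code, every trip to the discrete GPU is an explicit write and read over the bus. A stripped-down sketch (error checks and the kernel launch omitted; the buffer size is arbitrary):

    /* Sketch of the host<->GPU copies being described
       (OpenCL 1.x host API; error checking trimmed for brevity). */
    #include <OpenCL/opencl.h>   /* <CL/cl.h> on non-Apple platforms */
    #include <stdlib.h>

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

        size_t n = 1 << 20;
        float *host = malloc(n * sizeof(float));

        /* The GPU's "island" of memory: data has to be shipped over... */
        cl_mem dev = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                    n * sizeof(float), NULL, NULL);
        clEnqueueWriteBuffer(q, dev, CL_TRUE, 0, n * sizeof(float),
                             host, 0, NULL, NULL);

        /* ... a kernel would be enqueued here ... */

        /* ... and shipped back before the CPU can touch the results. */
        clEnqueueReadBuffer(q, dev, CL_TRUE, 0, n * sizeof(float),
                            host, 0, NULL, NULL);

        clReleaseMemObject(dev);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        free(host);
        return 0;
    }

Unified memory would let the CPU and GPU share one pool, so those two enqueue calls (and the latency they add) largely go away.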


The person paying $6000 for this unit is a delighted Final Cut Pro user who has read the reviews and understands how the machine is tailored for them.

For more conventional, but still pro workloads, most of us will be much better off with a $3000-4500 model.

It's going to be interesting to see how pro apps end up tailored for this architecture.


I wouldn't be surprised if Apple develops a library to help facilitate GPU usage, similar to how they developed Grand Central Dispatch to help developers utilize multicore CPUs more effectively.


Even better: what if Apple developed a whole language for GPU compute? They could eventually get other vendors to participate and make it an open standard. How about "Open Compute Language"? Nah, too verbose. How about "OpenCL"? ... =)


Doesn't GCD do OpenCL?

Or do you mean something more seamless and auto-magical?


No - GCD is only about distributing workloads across CPU cores, and doesn't involve the GPU.
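Roughly what GCD usage looks like (a minimal C sketch using dispatch_apply; the array and sizes are just placeholders, and everything here stays on CPU threads):

    /* Minimal GCD example: spreads loop iterations across CPU cores.
       Plain C with Apple's blocks extension; compile with clang on OS X. */
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float data[N];

        /* dispatch_apply farms the iterations out to a global
           concurrent queue -- CPU threads only, no GPU involved. */
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_apply(N, q, ^(size_t i) {
            data[i] = (float)i * 2.0f;
        });

        printf("%f\n", data[N - 1]);
        return 0;
    }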

OpenCL uses a special programming model, so you can't use it for general application code. It's good for doing repetitive operations on large arrays - e.g. image or signal processing, or machine learning. OpenCL code will run on the CPU if there is no GPU, or if the overhead of shipping the data to the GPU is too high.
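If you're curious, a kernel in that model is a small C-like function compiled at runtime, and the host chooses the device; a bare-bones sketch (error handling omitted, kernel name and source made up for illustration):

    /* Sketch of the OpenCL model: a tiny per-element kernel, plus a host
       that falls back to the CPU device if no GPU is available. */
    #include <OpenCL/opencl.h>   /* <CL/cl.h> on non-Apple platforms */
    #include <stdio.h>

    /* Kernel source: one work-item scales one array element. */
    static const char *src =
        "__kernel void scale(__global float *a, float k) {"
        "    size_t i = get_global_id(0);"
        "    a[i] = a[i] * k;"
        "}";

    int main(void) {
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);

        /* Ask for a GPU; fall back to the CPU device if there isn't one. */
        if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel kern = clCreateKernel(prog, "scale", NULL);
        printf("kernel built: %p\n", (void *)kern);

        clReleaseKernel(kern);
        clReleaseProgram(prog);
        clReleaseContext(ctx);
        return 0;
    }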



