It's not the same thing. CPUs are only getting faster in the sense that they are getting more cores, and the question is how a programmer is supposed to take advantage of that. Hadoop is not about that problem; its goal is to overcome the bottleneck of disk I/O for large data sets.
Map/Reduce is about taking an operation that would otherwise be sequential and breaking it up into smaller tasks that worker machines can process concurrently. It's not about disk I/O. I recommend you read the original paper: http://web.mit.edu/6.033/www/papers/mapreduce-osdi04.pdf
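To make that concrete, here's a minimal sketch of the idea in plain Python with multiprocessing (not Hadoop's actual API): independent "map" tasks count words in their own chunks concurrently, and a "reduce" step merges the partial results. The chunk strings and function names are just illustrative.

```python
from collections import Counter
from multiprocessing import Pool

def map_task(chunk):
    # Each worker counts words in its own chunk independently.
    return Counter(chunk.split())

def reduce_counts(partial_counts):
    # Merge the per-chunk counts into one final result.
    total = Counter()
    for c in partial_counts:
        total.update(c)
    return total

if __name__ == "__main__":
    chunks = ["the quick brown fox", "jumps over the lazy dog", "the end"]
    with Pool() as pool:
        partials = pool.map(map_task, chunks)  # map phase, run in parallel
    print(reduce_counts(partials))             # reduce phase
```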
We were talking about Hadoop specifically. This is from Hadoop: The Definitive Guide:
"The problem is simple: while the storage capacities of hard drives have increased massively over the years, access speeds—the rate at which data can be read from drives— have not kept up. One typical drive from 1990 could store 1370 MB of data and had a transfer speed of 4.4 MB/s,§ so you could read all the data from a full drive in around five minutes. Almost 20 years later one terabyte drives are the norm, but the transfer speed is around 100 MB/s, so it takes more than two and a half hours to read all the data off the disk.
This is a long time to read all data on a single drive—and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. Imagine if we had 100 drives, each holding one hundredth of the data. Working in parallel, we could read the data in under two minutes.
Only using one hundredth of a disk may seem wasteful. But we can store one hundred datasets, each of which is one terabyte, and provide shared access to them. We can imagine that the users of such a system would be happy to share access in return for shorter analysis times, and, statistically, that their analysis jobs would be likely to be spread over time, so they wouldn’t interfere with each other too much."
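For what it's worth, the arithmetic behind those quoted figures checks out. A quick back-of-the-envelope sketch (treating 1 TB as 1,000,000 MB for simplicity):

```python
# Rough check of the read times quoted above.
def read_time_minutes(capacity_mb, speed_mb_per_s):
    return capacity_mb / speed_mb_per_s / 60

print(read_time_minutes(1370, 4.4))             # 1990 drive: ~5.2 minutes
print(read_time_minutes(1_000_000, 100) / 60)   # 1 TB at 100 MB/s: ~2.8 hours
print(read_time_minutes(1_000_000 / 100, 100))  # 1/100th per drive, 100 drives in parallel: ~1.7 minutes
```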