It's just obfuscation. In reality the 'room' would have to be the size of a planet, and if one person were manipulating the symbols it might take the lifespan of the universe for the whole thing to think 'Hello World'. But by phrasing it as a 'room' with one person in it, he makes it look inadequate to the task, and therefore the task impossible.
Neither the size of the room nor the speed of the computation is important to Searle's argument. You could replace the person in the room with the population of India (except for those who understand Chinese), and pretend to the Chinese speaker that the communication is by surface mail. Or use a bank of supercomputers if Indians aren't fast enough.
Fair enough. In which case Searle's argument is that even fantastically complex, sophisticated information processing systems with billions of moving parts and vast information storage and retrieval resources, operating over long periods of time, cannot be intelligent. If that's what his position boils down to, what does casting it as a single person in a room add to the argument? As Kurzweil asked, how is that different from neurons manipulating neurotransmitter chemistry? Searle doesn't seem to have an answer to that.
No, his position, as I understand it, is that it cannot be conscious. It certainly can be intelligent.
Searle does try to explain why there's a difference. Although the person in the Chinese Room might be conscious of other things, he has no consciousness of understanding the Chinese text he's manipulating (and will readily verify this), and nothing else in the room is conscious. Chinese speakers, by contrast, are conscious of understanding Chinese.