There are easier ways to reconstruct your Wikipedia browsing habits than to crack HTTPS.
Because Wikipedia's content is public, the NSA can crawl the site repeatedly with all common user agents and record the number of HTTPS bytes it takes to download any given Wikipedia page. Then, simply by matching the sizes of the encrypted transfers they see on the wire against that table, they can trivially reconstruct the pages a user was likely viewing.
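It's essentially just a lookup table. A toy sketch in Python, with made-up page titles and byte counts (not real crawl data):

```python
# Toy sketch of the size-matching attack; the page titles and byte
# counts below are illustrative, not real measurements.

# size_table: bytes on the wire -> page title, built offline by crawling
# the site with the same user agent the target browser uses.
size_table = {
    48_213: "Tor_(network)",
    51_907: "Pretty_Good_Privacy",
    122_400: "Cryptography",
}

def guess_pages(observed_size: int, tolerance: int = 64) -> list[str]:
    """Return page titles whose crawled size is within `tolerance`
    bytes of the observed encrypted transfer size."""
    return [
        title
        for size, title in size_table.items()
        if abs(size - observed_size) <= tolerance
    ]

# An eavesdropper who saw a ~48 KB HTTPS response could narrow it down:
print(guess_pages(48_250))   # ['Tor_(network)']
```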
Wikipedia has not discussed any plans to mitigate traffic analysis; until they do, this whole exercise is futile, and I doubt Wikipedia will be able to obfuscate the site enough to defeat sophisticated traffic analysis.
Presumably you could make it a single-page site where the page and server act like a numbers station, so that the connection always consumes a fixed amount of bandwidth on each tick, only some of which is real data.
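Something along these lines (the frame size and tick rate here are arbitrary, and a real version would live at the transport layer rather than in application code):

```python
import os
import time

# Illustrative assumptions: frame size and tick rate are arbitrary.
FRAME_SIZE = 4096      # bytes the observer sees every tick, no matter what
TICK_SECONDS = 0.1     # fixed cadence

def frames(real_data: bytes):
    """Yield constant-size frames: a 2-byte length prefix, a slice of the
    real payload, then random padding up to FRAME_SIZE. Once the real
    data runs out, the frames are pure padding, so idle time on the
    connection looks identical to active browsing."""
    offset = 0
    while True:
        chunk = real_data[offset:offset + FRAME_SIZE - 2]
        offset += len(chunk)
        yield (len(chunk).to_bytes(2, "big")
               + chunk
               + os.urandom(FRAME_SIZE - 2 - len(chunk)))

def transmit(send, real_data: bytes, ticks: int):
    """Push one frame per tick through `send` (e.g. a socket's sendall)."""
    gen = frames(real_data)
    for _ in range(ticks):
        send(next(gen))
        time.sleep(TICK_SECONDS)

# Every frame is exactly FRAME_SIZE bytes, whether or not it carries data:
sent = []
transmit(sent.append, b"<html>the actual page</html>", ticks=4)
print({len(f) for f in sent})   # {4096}
```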
Or you could just insert a random-length payload into the served content. I imagine you would only need to add a small amount of variation to completely thwart the type of analysis that codex described.
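A minimal sketch of that idea, with arbitrary parameters:

```python
import secrets

def pad_response(body: bytes, max_pad: int = 2048) -> bytes:
    """Append an HTML comment holding up to `max_pad` random bytes,
    hex-encoded, so the same page varies in size between requests."""
    pad_len = secrets.randbelow(max_pad + 1)
    filler = secrets.token_hex(pad_len)   # 2 * pad_len printable chars
    return body + b"<!-- " + filler.encode() + b" -->"

# Two requests for the same page now differ by up to a few KB, which
# breaks a lookup keyed on exact (or near-exact) transfer size:
print(len(pad_response(b"<html>...</html>")))
print(len(pad_response(b"<html>...</html>")))
```

Since the filler is random it won't compress away, so the size variation should survive gzip on the wire.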