Hacker News
Coding Horror: The End of Pagination (codinghorror.com)
221 points by duaneb on March 27, 2012 | 118 comments



Dear God I hate this anti-pagination nonsense.

I loathe infinite scroll. It puts you at the mercy of the site doing browser caching right (no one does); otherwise you click a link, click back, and have to start from the beginning again.

Even Apple gets this wrong (you'll reach a point going through the Top Charts in the App Store where pressing Back reduces you back to the first 10 results).

Pages can be bookmarked easily, which can be incredibly important. Paging is also easier for crawling (otherwise it requires some JS evaluation or other hackery).

DZone is a good example of a site that annoys the crap out of me with infinite scroll.

Seriously, cut that shit out.


Pages can be bookmarked easily, which can be incredibly important.

Pages can be bookmarked easily only sometimes! Everybody seems to have adopted the same database-centric but user-hostile (and cache/crawler-hostile) approach to pagination of posts (see: any blog platform, engadget, tumblr, etc).

Their brain-damaged flow goes: You make a new post. The new post appears at the top of the front page. But, the front page has a limited number of story slots. So, the oldest story on the front page gets pushed to page two. But page two has limited slots, so the oldest story on page two goes to page three. Repeat for every page. Some sites have tens to hundreds of thousands of "pages." (Preemptive note to future comment haters: I know the site doesn't "push" articles to every page because it's all just database queries, but the effect is the same.)

Every time you make a new post, you invalidate all thousand (ten thousand? 100k? million?) older pages. It's absolutely moronic. Google has no chance of keeping up. You make one new post and Google has to re-index thousands of pages.

The proper solution is to make pages "fill up" then have your root page point to the highest numbered page. Example: http://omgpagination.tumblr.com/ would logically be page 600, then the "Older Posts" button would link to page 599. Instead, everybody right now makes / always be page 1, "Older Posts" always links to page 2, etc. But those URLs aren't content-stable, and every update invalidates your cache of the entire site. The oldest posts on your site should always be on page 1, not a moving target of page {TOTAL_POSTS / POSTS_PER_PAGE}.

In short: be a little more clever and do the right thing for your caches. What's good for your caches is inevitably good for your users too.
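For what it's worth, a minimal sketch (in TypeScript) of that "fill up" scheme; the POSTS_PER_PAGE constant and function names are hypothetical:

    // Content-stable pagination: posts are stored oldest-first.
    const POSTS_PER_PAGE = 10;

    // Oldest post (index 0) lands on page 1; existing pages never renumber.
    function pageForPost(postIndex: number): number {
      return Math.floor(postIndex / POSTS_PER_PAGE) + 1;
    }

    // The root URL serves this page; it is the only page whose contents
    // change when a new post is published.
    function latestPage(totalPosts: number): number {
      return Math.max(1, Math.ceil(totalPosts / POSTS_PER_PAGE));
    }

    // `posts` is ordered oldest-first, so page 1 is permanently the oldest content.
    function postsOnPage<T>(posts: T[], page: number): T[] {
      const start = (page - 1) * POSTS_PER_PAGE;
      return posts.slice(start, start + POSTS_PER_PAGE);
    }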


Every time you make a new post, you invalidate all thousand (ten thousand? 100k? million?) older pages. It's absolutely moronic. Google has no chance of keeping up. You make one new post and Google has to re-index thousands of pages.

You shouldn't be allowing Google to index /page/2, /page/3, /page/4; that content is obviously going to change, and no user is ever going to search for "page 9 of the archives of example.com."


You would think so, but at least tumblr isn't restricting it: http://www.google.com/search?q=tumblr+/page/300


No, but they do try to group pagination pages and, where it's the best result, show a result from deep inside the paginated pages.


Look at blogger urls (the updated-max parameter; reddit does something similar as well) to see an efficient and bookmark-friendly implementation of pagination. I disagree that /page/4/ style urls are a product of database-driven design; that seems to be driven by url legibility, a book metaphor, and convention. Total counts and large offsets are bad for databases because they involve many rows rather than the minimum necessary local information.
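For illustration, a rough sketch (TypeScript, with SQL in strings) of the difference; the table and column names are made up, and the keyset query mirrors the updated-max idea:

    // Offset pagination: the database has to walk past every skipped row.
    const offsetStyle = `
      SELECT * FROM posts ORDER BY published_at DESC
      LIMIT 10 OFFSET 9990;  -- scans and discards 9990 rows first
    `;

    // Keyset ("updated-max"-style) pagination: only local information is needed.
    const keysetStyle = `
      SELECT * FROM posts
      WHERE published_at < $1   -- the updated-max value carried in the URL
      ORDER BY published_at DESC
      LIMIT 10;                 -- touches only the rows it returns
    `;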


Pagination is useful when a user needs to understand time and space for a given context.

Infinite scroll is useful when the relevant time and space for the context is "now".

As noted by both parents, implementations for both could be better.


You are quite right. Time and space is the important point here. There's no way of saying "Give me page 6 as it was when I bookmarked it two weeks ago."


Usually you can do this by putting timestamps in the URL rather than pages.


I really like the way Techmeme handles this: basically you can get a snapshot of how the front page looked at any given time.

Makes it easier to find things you read a few days ago but didn't bother bookmarking, and then later realize you have to find again. "Hmm, I read about it Monday morning"... click click BANG! Everything is there just the way you remember it.

On most other news sites I usually can't find old articles even after 15 minutes of searching with both Google and the internal search.


Good point, but we sometimes delete questions from Stack Overflow, and that'd also invalidate all pages (depending on how old the question was), would it not?


Or you could invalidate just one page and have it show up with one less result.


"Question deleted" item would help for that, and would also be more transparent for users than silent deletion (at the cost of a few more pages to show all items, becouse some items are deleted).


Mmm, that's not a bad solution, but the clutter would bother me as a user. You might just remove the items from the page (20 per page, 5 deleted on this page, now only 15) but then you might get bare pages.


Now, if I want all these pages to point to the newest page -- how do I do that? Every time the newest page number changes (e.g. from 599 to 600), it invalidates all other pages anyway, right? They all have to point to page 600 now.

Moreover, the code that generates the link to my newest page becomes more complex.

Another consideration: it could be beneficial from an SEO perspective if content constantly shifts on older pages.


> Now, if I want all these pages to point to the newest page -- how do I do that?

You just link to the root. It's conceptually page 600, but you don't actually have to specify the number. Imagine that the blog is you writing pages sequentially in a book. The oldest content is on page one; the newest content is on the last page. You don't need an explicit number to get to the newest content if you use the convention that an unspecified number means to go to the last page.


Ok, let's continue: How many items should be on the root page?

Say it had 10 items (a full page). But then 1 new item was posted. Should we create a new root page with just 1 item?

Should we ask users to click the pager after viewing just one item in search results?


If the root page has less than a full page of items, I would also have it load the full next page, and the "next" link would go to page 3. So your first page might have from 10 up to 19 results, then every page after that would have 10.

The other option is to use a "start" parameter which is the offset of the first item to load, and you always load that item + the next 9. However this means you have N cached pages instead of N/10.


first page might have from 10 up to 19 results

So the number of items on the most popular page (root) would fluctuate a lot.

I don't think that would make my users happy.

I don't understand your other option. Are you suggesting not to show 0...9 most recently posted items on the first page?


Assume that an incrementing id is assigned to blog posts. The first page would always have the 10 most recent items. The "next" link would look like "/blog?offset=500". So when you click that, you get the items numbered 500-491, and the "next" link on that page is "/blog?offset=490". Admittedly the URLs are ugly, but the advantage is that the contents don't change when items are added, only when they are deleted (the same can't be said for the page=N scheme).
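A small sketch of that id-anchored offset scheme; the page size of 10 and sequential ids are assumptions taken from the comment:

    const PAGE_SIZE = 10;

    function pageAt<T>(postsById: Map<number, T>, offset: number) {
      const items: T[] = [];
      for (let id = offset; id > offset - PAGE_SIZE && id > 0; id--) {
        const post = postsById.get(id);
        if (post !== undefined) items.push(post); // deleted ids simply shrink the page
      }
      // Contents at a given offset never change when newer posts are added,
      // only when items in its id range are deleted.
      const nextOffset = offset - PAGE_SIZE;
      return { items, nextOffset: nextOffset > 0 ? nextOffset : null };
    }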


In this scenario up to 9 items on second page may overlap with items on the first page, right?

That would be confusing to users.

Besides, I still see no business advantage in making content hardwired to a certain page. If anything, it's better to change page content from an SEO perspective.


YES! Sometimes I read a few pages, then come back the next day and click next, and suddenly page 5 contains what was on page 1 yesterday. See http://www.yankodesign.com/page/2/ for an example of this madness.


This is actually an excellent idea, I think I'll try it in some future projects of mine...


It's especially annoying when people have site footers with important links but infinite scroll just above them, rendering it impossible to click on any link in said footer.

Drives me fucking insane.


> Drives me fucking insane.

I share exactly the same feelings. Want to create a Facebook page? http://i.imgur.com/vZ5LD.png Good luck! It's like the game where you have to click a button, but whenever you hover over the button it moves.


Now try getting to that footer on your smartphone (because you wanted some feature of the desktop site that wasn't in the mobile version, for example).


Yeah, the only way to do that right would be to give the infinite scroll a container with a fixed height, and drive the infinite scroll based on the position of the container rather than the entire page.
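Something like this, as a hedged sketch: confine the infinite scroll to a fixed-height container (overflow-y: auto in CSS) so the footer stays reachable. The element id and the loadMore() stub are hypothetical:

    const feed = document.getElementById("feed") as HTMLElement;

    async function loadMore(): Promise<void> {
      // ...fetch the next batch and append it inside #feed only...
    }

    // Drive loading from the container's scroll position, not the page's.
    feed.addEventListener("scroll", () => {
      const nearBottom =
        feed.scrollTop + feed.clientHeight >= feed.scrollHeight - 200;
      if (nearBottom) void loadMore();
    });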


There are occasionally use cases for infinite scroll, but I agree that most of the time, it's a design anti-pattern.

My preferred solution to pagination: by default, load a large number of results per page. The markup overhead is minimal with gzip, database hits can be mitigated with caching, and images can be lazy-loaded if necessary. Best of both worlds, and friendlier to users and search engines alike.

Also, there's no reason pagination can't work inline (ie, link to "Load 10 More Results"), which also gives you the best of both.


> Even Apple gets this wrong (you'll reach a point going through the Top Charts in the App Store where pressing Back reduces you back to the first 10 results)

Apple gets a lot wrong. Personally, I think their website is horribly difficult to navigate - specifically their web store and web-based app store. The site is chock full of beautiful graphics and plenty of information, but I can't believe how difficult it is to buy the damn thing once I've made up my mind. The call-to-action buttons (ie "BUY THIS RIGHT NOW") are tiny, nearly invisible, or nonexistent. Not what you'd expect from the tech powerhouse that is Apple.

I'd love to do a writeup on this one day. I feel like I'm crazy finding the website so difficult to navigate, but there's no way I'm alone here.


I think that's partly on purpose; it's sort of the same in their brick-and-mortar stores too. There's lots of gadgets and fancy stuff to play with but no obvious sales funnel.

I think it's because if you decide you're going to buy a Mac, then you're going to buy a Mac, and Apple gets a cut regardless of where you buy it, so there's no incentive to convert you right that second before you walk out the door.

The entire Apple website is basically just computer porn; it's designed to create a lasting emotional impact rather than to feel like a shop.


This is powerful stuff. I hate to admit it because it goes against almost all conventional ecommerce thinking, but I believe you're right.


Facebook seems to have found a way to do this the right way in their "profile timelines":

* The right side contains a scrollbar annotated with years and months, so you can efficiently navigate to posts from a particular period.

* You can deep link to a specific month: https://www.facebook.com/<profile>/timeline/2008/11

* The center line visualizes that the feed continues and that the user can keep scrolling (until they reach their birth)

* Displays a loading bar while new posts are fetched

* No footer and a static header.

Two problems:

* Forward/backward buttons don't work, unfortunately.

* Web spiders don't make sense for such a closed system, so I'm not sure how they would be able to index such a site.

--- I would really love it if other sites adopted this scheme of numbering pages by their year and month. Try navigating to which commits were introduced a year ago in a GitHub repository: https://github.com/mirrors/linux-2.6/commits/master - it ends in a binary search through page numbers! (You could use the CLI, but that is not the point here. Say you need to link to a particular line in an old commit.)
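A rough sketch of what month-addressable archive pages might look like; the Post shape and route are hypothetical:

    interface Post {
      publishedAt: Date;
      title: string;
    }

    // A handler for a URL like /archive/2008/11 would call
    // postsForMonth(allPosts, 2008, 11); the result never shifts as new posts arrive.
    function postsForMonth(posts: Post[], year: number, month: number): Post[] {
      return posts.filter(
        (p) =>
          p.publishedAt.getFullYear() === year &&
          p.publishedAt.getMonth() + 1 === month
      );
    }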


The thing I don't like is that when you have both scrollbars and pagination, there are two very different ways of getting more content onto the screen.

That's one too many.

It doesn't matter that they reflect different things - client-side display of already-retrieved content and interaction with a server for new content. That's a detail that a user of software should not have to react to.

Infinite scroll is one response, maybe there are others?


http://www.chrisnorstrom.com/2011/04/invention-infinite-scro...

I feel your pain. So I came up with a hybrid pagination + infinite scroll solution last year. I can't implement it myself but I'd love to see someone try it.

Basically it puts pagination links in between the content loads so you can easily skip ahead or go far back without having to scroll your fingers off.


Indeed. Pagination is one of those things that isn't broken. Please don't fix it!


I hate pro-infinite scroll nonsense.

But the argument that your "page 867" means nothing remains.

What's the solution? I'm not sure and it probably depends on the application involved.

One idea in search might be to return fifty items and give ten suggested refinements by key word or otherwise.


"page 867" means nothing, but so does scrolling down for half an hour. And the first one is faster.


I think it's obvious... just limit the results to a sensible number... perhaps randomize the last half...


I love scrolling and hate all that extra pagination.

My browser handles what you just described correctly while browsing huge pages, except maybe the bookmarks. Or maybe we simply visit different sites; I love to read books and wikis online.

I won't say that you should change your browser, as that path leads to endless religious wars, but bear in mind that there are people like me with the exact opposite mindset to yours.


Yep, you wouldn't believe how much work goes into pagination and faceting (pagination's big brother) on a big site, and into doing it RIGHT! (Well, in my case, getting developers to go back and fix stuff they broke/messed up the first time.)

Google thinks pagination is important; why else would they produce and support the new rel tags, prev and next?
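For reference, a tiny illustration of those rel="prev"/rel="next" link tags, emitted here for a hypothetical /page/N archive (the URL pattern is an assumption):

    function paginationLinkTags(page: number, lastPage: number): string {
      const tags: string[] = [];
      if (page > 1) tags.push(`<link rel="prev" href="/page/${page - 1}/">`);
      if (page < lastPage) tags.push(`<link rel="next" href="/page/${page + 1}/">`);
      return tags.join("\n");
    }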


Even more fun on mobile. Load 10 scrollfuls of content and your phone grinds to a halt.


I think that infinite scroll should have infinite scroll at the top as well, meaning that the posts at the top should be deleted as you scroll down.

That would also make it viable to create links and bookmarks based on your current position.


Yep, this is the only solution in combination with timestamps rather than page numbers.


Yeah, his forum argument made a good point, but what about 200-page threads? How am I supposed to get to page 200 without scrolling?


This is probably the last chance I'll get to let everyone know that Hacker News has recently started to paginate comment threads (see my submission https://news.ycombinator.com/item?id=3746568, which apparently very few people saw).

I definitely have an axe to grind because it breaks my code, but I really believe it's very bad for discussions. In particular, the case of only a single comment and its children appearing on the first page of comments on threads with a large number of comments is the biggest problem. It leads to only one response to the article being heard and has the side effect of greatly encouraging piggy-backing on the "first post", as it were.

edit: when I say 'my code' I mean my code that's forked from wvl's hckrnews.com extension and any other extension that highlights new posts since you last opened the discussion.


Additionally, I have a habit of opening a browser window with all of the discussions I want to read, each in its own tab.

This browser window may be sitting in the background for hours if I'm busy, and by the time I get to read it, reach the end and press "More" I'm presented with "unknown or expired link" which, to me, is the culmination of disrespect for the reader.


I get that expired link nonsense even if I immediately start reading the comments. It works only if I skip reading most of the comments.


Yes, I agree that having only a single parent comment is very poor UX. It is compounded by the fact that often, by the time you get to the bottom of the page, the "more" link has expired!


Reddit has solved this problem.


This is especially bad for HN because of the "unknown or expired link" problem.

If I am 3 pages deep in a thread and press the "More" button then if I have been on the page for a certain amount of time I get the dreaded "unknown or expired link" and have to click back to the start and through a load more pages.


Almost 2 months ago HN started doing this for short spans (high traffic?), then going back to the default behavior. (I commented here: http://news.ycombinator.com/item?id=3627285 ) This becoming permanent is not a good thing.


"Pagination is also friction. Ever been on a forum where you wished like hell the other people responding to the thread had read all four pages of it before typing their response?"

I mostly agree with wumpus here as this relates to search results, but it is interesting that forums are brought into it because forums (generally the smaller, community type forums with long-tail discussions as opposed to reddit or even HN where discussions are generally about 24 hours max and then dropped) are the one place I absolutely love pagination.

The unfolding discussion of a particular thread tends to have a timeline in my brain that maps to the pagination of the thread. eg. Oh, subtopic XYZ, that first came up at about page 5... lemme jump over there and refresh my mind on how that came up.

Taking away pagination in forums would be a net negative for me, much like code folding is a net negative for me (I tend to have a map of the code in my mind that is impossible to keep if the code is folded in some random configuration).

Granted, this is subjective and entirely personal, so I'm not saying he's wrong (or that code folding is wrong for everyone), just that getting rid of pagination is not necessarily great for all people in all situations.


I can't agree more about code folding. The shape of the code lets me use visual patterns instead of actually thinking about the parse tree


Supposedly the role of code folding is to reduce distractions. But I also find it far too distracting when the code moves around (though it's fine when it's less than a screenful).

Perhaps it would work better if the code were simply hidden instead of collapsed. Then the non-folded code would remain in the same place as before, and it would be less disorienting.


I thought I was the only one! (sob) I've tried folding, so many times, but it just seems like I'm fighting against it. I don't know that it's the shape of the code that's being hidden that spoils it, maybe I'm just a bottom-up kinda guy.

Maybe there is a need for a support group here.


Have you seen Notepad++'s new document map feature? It sounds right up your street, as an alternative to folding for getting up & down long code files more quickly:

http://notepad-plus-plus.org/assets/images/docMap2.png (on the right hand side)


I think that more forums need a simpler, better search function. Although Google is solving this problem somewhat themselves with Groups search, etc.

To do a search on most phpBB or similar forums you have to first be a registered member, fill out a big complicated form and often solve a captcha.

Part of the reason for this I guess is that many of these forums are hosted cheaply so the IO/CPU overhead for doing many searches is not trivial.

I can't count the number of times I've been on the forum and someone has asked a question and they get directed to the search feature.

I also like the feature some forums have where you can show the entire thread on one big page if you so choose.


The worst thing you can do if you implement infinite scroll? Put key navigation links in your footer.

Facebook does this right now and it seriously pisses me off. Though in Facebook's case they do eventually stop loading more updates so I can click on something in the footer, but other sites definitely don't.

Please don't do this.


A tip: hit "End" on your keyboard, then "Escape"; it will terminate the loading of the next set of pages and allow you to access the links.


Unfortunately I don't have an "End" key on my MacBook Pro!


You can emulate this with Cmd+Down.


"It isn't just oddball disemvowelled companies, either. Twitter's timeline and Google's image search use a similar endless pagination approach. Either the page loads more items automatically when you scroll down to the bottom, or there's an explicit "show more results" button.

Pagination is also friction."

Actually if you're surfing on a flaky internet connection, endless pagination has MORE friction. With pagination you can click on the "next" button again if it doesn't load the first time.

If the javascript doesn't want to fire off another request in the endless pagination series and the last one didn't work, you're just screwed.

This makes facebook especially, incredibly annoying to use on flaky wifi.


Or if your browser unloads the page to save memory (it helps that every additional page increases memory footprint!), or crashes, or you want to bookmark your place and come back later, or you're looking for a post by date, or a million other reasonable scenarios.

Endless pagination only makes sense if you consider content after ~4-5 pages to be near worthless and don't intend for people ever to read it.


I despise endless pagination. I do most of my browsing on mobile devices these days, and endless pagination can quickly overload the measly RAM constraints on my iPad and iPhone resulting in a crash.


I've been wanting to play around with "scrolling pagination" as opposed to an "endless pagination." Where items are removed from the top of the list as they are added to the bottom, so you have a low-memory version of endless pagination.

Curious how much havoc that would wreak on the scrollbar & swipe scrolling on mobile.


I have a site in progress that uses endless paging with a sliding window, so after 5 pages it'd clip the top page. It was a little irritating in that you could notice that something different was happening with the scrollbar once you hit the limit, but other than that it did work well on mobile devices.

In the end we had to replace the endless bit with a big "load more" button though, for unrelated reasons.


When you remove your content, measure it first and keep your new content properly offset. This way the scrollbar won't change since your new content is positioned correctly. The old content is removed from the DOM, but the space it took up is still accounted for.
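A sketch of that approach: measure the block before removing it, then grow a spacer element by the same height so the scrollbar and offsets don't jump. The element ids are hypothetical:

    const spacer = document.getElementById("top-spacer") as HTMLElement;

    function clipOldestPage(pageEl: HTMLElement): void {
      const removedHeight = pageEl.getBoundingClientRect().height;
      pageEl.remove(); // drop the nodes to keep memory low...
      // ...but keep accounting for the space they occupied.
      const current = parseFloat(spacer.style.height || "0");
      spacer.style.height = `${current + removedHeight}px`;
    }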


That's a cool idea. If I recall correctly, that's how table views work in iOS.


Google Reader does this OK in its mobile version on my Androids: there are two action links at the bottom of the list, "load more items" and "mark all as read", so I load more a few times, and then mark them all as read when I feel I'm loading the boat too much.


I don't see why pagination has to go away any time soon. I can't stand Facebook auto-loading more content; I end up dragging too far when it has auto-loaded and I haven't noticed, and I spend more time looking at the scroll bar than the content!

I think the forum-post analogy, where people don't read the previous 4 pages, is a bad one as well. The issue with forums is more that there is a large thread to follow and it generally has to be followed from the beginning. You have generally landed on page 20 of a forum post because it has an answer to a query you put into a search engine. You don't land on page 20 from the forum itself!

In this I am a firm believer that if it ain't broke, don't try to fix it. Why should I as a developer have to overthink maintaining page position when clicking the back button, or showing different paginated content to search engines for indexing? I don't feel like I am putting a lot on the user, and I always know where I am with paginated content.


Another problem with endless scrolling: It often results in very small scroll bars which can be hard to use.


Mainly, it makes scrollbars completely impossible to use for actual scrolling. Try using one of these sites that loads more and more content without a scroll wheel or something similar.


I'm going to have to agree. Another problem with endless pagination is that it makes it difficult for me to refer to content.

For example: "Look at the blue shirt on page 5" rather than "Look at the blue shirt visible after about 30 seconds of scrolling down"

Also, when the page is reloaded, I have to scroll all the way down to where I was (which may take longer than I want it to because the page needs to load everything before my target).


wouldn't you link directly to the blue shirt item?


Sometimes I think I'm the only person who doesn't see scroll bars anymore. Does your mouse not have a scroll wheel? Does your trackpad not have 2-finger scroll?


> In a perfect world, every search would result in a page with a single item: exactly the thing you were looking for.

Not everything is search. Sometimes you genuinely want to see the list. eg. transactions in your bank account, messages in your inbox, posts on your Facebook wall, etc.


Also, who's to say you're only searching for one item? Ever searched for a used car on Craigslist? You probably want to see more than one result.

Even if you were only searching for one item (say you're searching Google for an article you read a few weeks ago), sometimes alternate results are useful too. What's wrong with discovering something new?


He acknowledges that, but if there were a way to see just the listing you're going to accept (of course that's impossible, so they show the first best matches), wouldn't you want it?


Normally I shy away from "end of" or "death of" posts, but this makes sense. Pagination evolved from people having 56k modems and it taking two minutes to load a page. Add some images in, and you're going to be there a while.

I really like the idea of intelligent search and only showing relevant content vs just showing everything on the page and hoping people will find it. It seems so obvious now, but the status quo is so ingrained in us. Sometimes I do want to see the 873rd image; sometimes it's just the one I'm looking for, buried under endless pages of other images. Having some sort of search (either tagging, date, or text) can really make this process easier for users.


"In a perfect world, every search would result in a page with a single item: exactly the thing you were looking for."

That's just the opposite extreme of thousands of paginated pages.

I, personally, don't search that way. I generally look for a few good results, and then draw my conclusions based on the total of what I read. Rarely do I look for the page that has the answer. Maybe because I'm not looking for answers.


Pagination is still important from a search engine perspective. You want search engines to crawl your deep pages, and pagination is a great way of giving them that path. Relying on things like AJAX, or worse, hiding all the content behind a search box, will lead to fewer pages crawled.


A tumbledry.org article on doing infinite scroll better was posted to HN about a year ago (http://news.ycombinator.com/item?id=2592741) and it was most of the way towards doing infinite scroll right.


Here's another use case perspective: consider the webcomic. There's a case where you're not searching for something—that is, every single "page" is possibly valuable content—but there's still hundreds, potentially thousands of pages to display.

The problem comes in when someone is looking for that one comic they saw a few years ago. Unlike xkcd, this comic hasn't implemented full-text search yet, so if they can't do a manual comparison sort, they have to go over every comic. What's a natural way to do this?

Still new to web development and user interface design in general; this is probably a different problem altogether.


I think that the solution is right there in your question — full-text search. xkcd takes this to an extreme perhaps, but even just transcribing the text in each comic ought to be enough to make it decently searchable. That and titles. I can usually get to an xkcd via the title or some words similar to the title which are usually in the comic itself. This may not work for comics which tend to be visual gags though.

If it is too much of a pain to manually transcribe the text, perhaps using OCR?
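As a toy sketch of the transcript idea: keep a transcript per comic and do a plain substring match over it. The Comic shape here is hypothetical:

    interface Comic {
      id: number;
      title: string;
      transcript: string; // manually transcribed (or OCR'd) dialogue
    }

    function searchComics(comics: Comic[], query: string): Comic[] {
      const q = query.toLowerCase();
      return comics.filter(
        (c) =>
          c.title.toLowerCase().includes(q) ||
          c.transcript.toLowerCase().includes(q)
      );
    }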


For a lot of things, a huge spatial visual index would be more useful than the text. Microsoft had some sort of project where you could have several thousand photos onscreen at a time. An analogous case from the physical world would be searching for a particular image you remember from a thick book of artwork. I think this would be pretty easy for most people, but I don't know of a good digital version. Maybe the xkcd archives would be a good testbed.


Allowing the comics to be tagged (perhaps invisibly) by users to facilitate search could be another possible solution, that could be easier than OCR and also include non-text specifics.

Usually when I remember a comic and am trying to find it, I remember one specific detail, whether it's a picture or text.


OhNoRobot can search XKCD comics.

http://www.ohnorobot.com/


Kind of a meta comment, but why is a post that should have been called "A Viable Alternative to Pagination" instead titled "The End of Pagination"? Or maybe it could be more mysterious, like "What happens when you give up paginating the web?" The final title declares the end of something, which is needlessly confrontational and emotional (especially for geeks, for whom it's especially distressing since, factually, pagination is still occurring all over the place).

tl;dr Nice thoughts about new ways to interact with web data with a needlessly sensational, counter-productive title.


Infinite scrolling was what killed Digg, for my friends. I admit, I never did use Digg. I do remember my friends telling me about when the "new" Digg launched, though. They cited the infinite scroll as one of the major things that broke usability for them. One friend would check it throughout the day and used the pages as a sort of stopping / reference point that he'd know to skip to when he got a chance to check it again later. The infinite scroll meant he had to sit there and force it to load until he saw the articles that he left off at.


It would seem the solution to infinite pagination isn't infinite scroll (which is the exact same concept, only badly implemented) but sorting and filtering.

If there are too many results, what I want is some way of filtering out the results that are obviously not relevant (because not in the correct language, or too old, or too recent, etc.), not the ability to "scroll" forever (infinite scrolling adds DOM elements to the page to the point where the page is so big, nothing works properly anymore).


One of the things that always killed me about the Forrst experience was that sure, it had great pagination, but it was only one particular page/view. If you were navigating snaps or code posts individually you were stuck with manually clicking next and previous buttons like the good old days. It's one thing to come up with cool techniques, it's a whole new bag to apply it effectively.


Yeah, we admittedly don't do this very well right now.


For those wanting to see an interesting approach to solving this problem (not written by me): http://tumbledry.org/2011/05/12/screw_hashbangs_building

Associated HN discussion: http://news.ycombinator.com/item?id=2592741


I think Alex Micek's infinite scroll is the most impressive I've seen - progressively-enhanced and doesn't break the back button: http://tumbledry.org/2011/05/12/screw_hashbangs_building. Impressive bit of work from a dentistry student.


In fact, was discussed on HN nearly a year ago: http://news.ycombinator.com/item?id=2592741


Honestly, I'm not sure what I think of infinite scrolling, but most of the complaints about it are really about problems with current implementations (e.g. back-button issues), and the people don't seem to be considering whether there might be technological fixes to those problems.


AutoPatchWork for Chrome automatically infinite-scrolls many sites (including Hacker News): https://chrome.google.com/webstore/detail/aeolcjbaammbkgaiag...


AutoPager is on both Chrome and Firefox and bloody changed my life. Parent was TL;DR: I already knew this is bloody essential, totally the only proper way to do things, and that one cannot go back after having tried.

Of course, the other major split from Atwood is that I think he's dead wrong. It's not about delivering better targeted results. That will, inevitably, fail. It's about the other part he briefly mentions: reducing irrelevant friction. The reason Google succeeded with only shorter results was that the friction of scanning through and filtering a result was so very high (taking a couple of excerpts and a title, furrowing one's brow, thinking hard, and making a binary click / don't click value-judgement guess).

https://chrome.google.com/webstore/detail/mmgagnmbebdebebbcl... https://addons.mozilla.org/en-US/firefox/addon/autopager/


I have it installed for Opera.

It's great.


In a perfect world, every search would result in a page with a single item: exactly the thing you were looking for.

But I'm looking for a list of similar items in order to compare them. Many -- if not most -- times I perform a search I have but a vague idea of what I want. I throw something at the engine, take a look at the results, then decide whether to restate my search or not. A list is a perfectly good answer to many questions.

And this business about pagination? I'm sorry Jeff, but infinite scroll must die. It's a god-awful user interface mess. If you like this sort of thing, why have pages at all? The web should be just a seamless, flowing experience without any context-switching.

I almost feel like we are purposely being baited by Atwood. (Bloggers? Purposely stirring up trouble for pageviews? Say it ain't so.) It kills the entire concept of a page. It destroys the user's ability to know how much of something they are seeing, and makes linking a disaster.

Make it go away.


His arguments _for_ pagination seem to totally trounce those against! In terms of user experience, ease of implementation and technically (deep links, spidering etc). I've come away from reading that article absolutely against infinite scroll.


In general, don't paginate at all. Your two-thousand-word article should be on one page.


Coincidentally, I felt compelled to write a bunch of words about the issues with pagination just this past weekend.

http://nolancaudill.com/2012/03/24/pagination/


I've been using the Autopatchwork chrome extension for quite a while now. It's gotten buggier as more websites have implemented their own infinite scrolling, but like Adblock, it greatly improves web browsing.


I hate infinite scroll as Facebook implements it with the news feed. It's impossible to get to the footer because something else always loads. I have to disable my airport temporarily to get there.


FYI here is a jQuery plugin for doing endless pagination: https://github.com/fredwu/jquery-endless-scroll


Something that Google may think about is the value of being on the front page of a google search: such a position is devalued if there is no literal front page.


To make this effective, we need to be able to artificially control the scroll bar, like we can now artificially control history.


Jeff mentioned this in the article, but it really forces you to rethink your footer if you put important links there.


This is a problem I've noticed with Facebook's footer. On quite a few pages you just can't get to it. I thought it was an obvious problem but it doesn't seem like a lot of sites I use that implement it have put much thought into it.


I just love clicking on something in an infinite scroll and then getting to press "back" later and waiting for 20 pages of results to stream back in to sift through.


I think infinite scrolling works best for use cases where you don't typically open links after you've scrolled down. In Facebook's case, most of the interaction is AJAX (liking, commenting, etc.), so you don't lose your position.

For me Facebook works pretty well. I like to keep quickly scrolling down (using trackpad inertial scrolling) until I've skimmed over everything new in the feed. Forcing me to move and pinpoint the cursor to click a "Next page" link would certainly add friction to that.

Google's search results, OTOH, are a completely different use case, where you want to have full control to the paginated browsing, constantly navigating back and forth.

Perhaps this involves a distinction between "lean back" and "lean forward" use cases. Leaning back, it's nice to just keep on scrolling with simple finger swipes.


Yes, in essence your location in the infinite scroll becomes a very important part of your state, but it is not treated properly on "page back".

For example, in google image search, "back" returns to the head of the list, even if you were on page 20. E.g.:

https://www.google.com/search?tbm=isch&hl=en&biw=973...


This has been my main complaint with "neverending Reddit" from the Reddit Enhancement Suite.

Sometimes when you click back, the "Page" of results you clicked from is omitted from the scroll. (result numbers jump from 50 -> 101 for example).

What's even more frustrating than losing your place is your place not actually existing anymore when you click back. I've grown to look at the result number before I click, so that I can try and find it quicker when I return.


Pagination provides structure which most humans prefer (myself included). I cannot stand endless scrolling with loading pauses every so often if I scroll too quickly. In Google searches, I have my results set at 50 results per page and when I am conducting research, this provides me a guideline when I know it is time to try some different search terms.

Some hybrid scheme may be more acceptable in which the page folds segments that you have scrolled a certain distance past, and unfolds a segment that you are entering. A Floating nav helper could display which segment/page you are currently on and perhaps even provide the ability to jump a previous/next or x segment. Best of both worlds™ navigation


It's a great idea, and actually Slashdot is already doing this on their front page. However, there are some problems:

1. If I want to find a particular item within the first "page" of results, but I've scrolled down 10 pages of results, finding that item becomes pretty tricky!

2. Let's say I have a user who scrolls down 1000 pages. What does this do to the browser? I'd imagine it grinds to a halt...

The only solution would be to have a load page for the previous results. Here again, the issue I highlighted in point 1 becomes more emphasized. Especially if the search results change while it reloads (though you might keep the results for the user's session - but this sounds like a scaling issue).


Pretty cool until you click a link in the list, go back, and make a sad face when you realize the list has reverted to the first 20 items or whatever. Then you have to scroll, wait, scroll, wait, scroll to get back to where you were.


Tumblr implemented endless pagination with exactly this problem. This came up when you tried to reblog/reply to a post or view "read more" content and forgot to open another tab. They have since implemented a regular pagination system that fixes the back button problem.



