
Making JS-heavy sites crawlable is also possible with libraries like https://github.com/minddust/jquery-pjaxr and https://github.com/defunkt/jquery-pjax . Plus, pushState has the advantage of "real" URLs.



How?


With each user interaction that updates a page fragment, the library uses pushState to change the address in the browser's address bar to match the current state. If somebody copies and pastes that URL into a new tab, your site loads the complete interface, provided you've structured your back-end code correctly.
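
As a minimal sketch of the client side with defunkt's jquery-pjax (the container selector is just an example; it assumes jQuery and the pjax plugin are already loaded on the page):

    // intercept clicks on links and load only the fragment into the
    // container element, instead of doing a full page load
    $(document).pjax('a', '#pjax-container');

    // pjax fetches the link's href via AJAX, swaps the container's
    // contents, and calls history.pushState() so the address bar
    // always shows a real, shareable URL for the current state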

You do this by building logic into the part of the code that outputs your view, checking whether the request came in as a PJAX request or not. If it is, you output just the page fragment, which gets swapped into your existing DOM. If it's not a PJAX request, your back-end outputs the entire page.
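
A rough sketch of that server-side branch, here in Node/Express (jquery-pjax sends an X-PJAX header with its requests; the route and template names are made up for illustration):

    // send only the fragment for PJAX requests,
    // the full page (layout + fragment) for everything else
    app.get('/articles/:id', function (req, res) {
      var isPjax = req.get('X-PJAX'); // header set by jquery-pjax
      res.render(isPjax ? 'article-fragment' : 'article-full', {
        id: req.params.id
      });
    });

Crawlers and users hitting the URL directly never send that header, so they always get the full page.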

There's a limitation in PJAX where you can only update one fragment at a time, though PJAXR seems to address that by supporting updates to multiple fragments simultaneously. Either way, you get the huge advantage of a fully crawlable site without needing to integrate pre-rendering workarounds for search-engine compatibility.



