This is all fascinating ... but does it feel like a sustainable approach going forward, or more like a turbo-charged horse and buggy?
Developer happiness aside -- there are plenty of folks who like JavaScript just as much as Ruby -- and assuming that we're talking about a non-public-facing app (like Basecamp Next), is there still an argument that can be made in favor of doing your UI on the server side?
Even just in principle, it can't possibly be close to as fast (client-side can render optimistically, where possible), and it can't be close to as flexible (you don't have a model of state on the client to perform logic with).
I'm not sure what qualifies as sustainable or not in your book. We've achieved our speed goals (sub 100ms pages for the most part), our development goals (Ruby, Rails, server side pleasure), and our UI goals (app that feels like a web page, not just a single-page JS app).
I'm sure Flash or Silverlight or other RIA people would argue that anything that's not a compiled native experience isn't as flexible or as fast as whatever they're peddling. Meh.
Yes, it's great to occasionally dip into advanced client-side when that's needed. Just like Flash had its legitimate use cases here and there. But for the bulk of the UI interactions we're giving people, it's just not needed (or desired).
By "sustainable" I'm getting at the point that if you're already doing 50/50 Ruby/JavaScript ... it makes you wonder which direction that ratio will tend to go in the future. Will Basecamp Next Next still be rendered as chunks of HTML that are sent over the wire to be inserted into specific spots on the page in 2015?
Either way, I'm very much looking forward to using the new version.
In terms of client-side MVC vs server-side renderings, the ratio is more like 90/10 or 95/5. We have one major section that's all client-side MVC, which is our calendar. We have one tiny section as well, which is a little invite widget. Everything else is server-side renderings.
To me that's like how we in the past used Flash to play sounds in Campfire. Or reimplemented a poller in Erlang. Or used nodejs for the Pow development server. Use all the great tools available in the niches where it makes sense.
But the bulk of Basecamp is exactly the type of application that makes wonderful sense to do in Ruby and Rails. We've been sending down chunks of HTML in response to Ajax requests since we started doing these types of apps in 2005. So far I haven't seen anything to make me reconsider that approach for the majority cases.
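For anyone who hasn't seen the pattern, here's a minimal sketch of what it looks like in Rails (the controller, model, and partial names are invented for illustration, not Basecamp's code):

    # app/controllers/todos_controller.rb (hypothetical)
    class TodosController < ApplicationController
      def create
        @todo = Todo.create(params[:todo])

        respond_to do |format|
          format.html { redirect_to todos_path }  # non-Ajax fallback: full page
          format.js                               # renders create.js.erb below
        end
      end
    end

    # app/views/todos/create.js.erb (hypothetical) renders the same partial
    # used on the initial page load and inserts it into the page:
    #
    #   $('#todos').append('<%= j render(@todo) %>');

The client just drops the returned chunk of HTML into the right container; no client-side templating or model layer involved.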
Huge fan of your work. Been in love with Rails since the pre 1.0 days. Our team internally (we have an existing Rails 3 app that we want to improve rendering performance) has been going back and forth with "Rails should emit JSON and render client side" and "We should be smarter about caching and AJAX-ifying our pages".
Your post has given more ammunition to the Rails-only side, so that is really awesome; thanks for that!
There are reasonable arguments on both sides.
For Rails Improvement:
* We know Rails
* Scaling Rails is a solved problem
* You can "just throw money at it"
For JS UI:
* We know Javascript (http://www.github.com/toura/mulberry)
* Why spend server $$ on boring HTML rendering?
* We have better GUI test tools in the framework side (we can unit test our componentized JS GUI much easier than a mashed-together Rails ERB HTML)
* We don't have that much money to throw at it, and that's wasteful besides
Presupposing, of course, that the server rendering JSON takes less time than stitching together ERB and the developer toolkit isn't any harder, would you find it more interesting?
Matt, I don't think there's all that much difference in money spent (you still need to buy servers either way and you still need to cache even if you return JSON). We crank out functionality faster when it can be done server-side because the development experience is better and because Ruby still beats even CoffeeScript (although not as thoroughly as it used to beat vanilla JavaScript) for productivity.
I find the complexity needed in having MVCs on both client and server side to be the main problem. Especially since much of what applications like Basecamp do is not all that heavy on UI interaction. A few bits are, like a calendar, so we use it there.
But obviously you can make it work either way. Just like Facebook manages to make PHP work. And some crazy kids still use Java. You should pick a development environment and style that fits your brain and your sensibilities. If you think client-side development is lovely, then by all means, go to town. Some people even think JavaScript is just as swell as Ruby and that you don't even need CoffeeScript -- peace be with them.
I'd be curious as to how much benefit could be derived by moving the business logic in the server side MVC to the database in the form of triggers, stored procedures etc. Seems like that's the really natural place for some of the functionality, such as validation, that can't be implemented safely client-side.
Thanks for the reply. Look forward to using Basecamp NEXT (we use all the 37 signals products here) and seeing the evolution of your process as you continue.
In terms of client-side MVC vs server-side renderings, the ratio is more like 90/10 or 95/5.
DHH, I think you misunderstood here; I believe Jeremy's referring to the blog post where you said Basecamp Next had almost as many lines of CoffeeScript as lines of Ruby. I think that's where the 50/50 number came from.
I'm less curious about the performance here than I am about the maintainability. I saw a tweet where somebody said he'd done things the same way and encountered a maintenance nightmare -- he mentioned nested caching and pjax specifically -- but without context or detail, I'm taking that with a grain of salt. I think disregarding it completely would be a mistake too, though.
It's also kind of tautological that DHH is going to have a more pleasant experience developing in Rails than he is in other frameworks. ;-)
On 50/50, yes, we write lots of JavaScript for Ajax. We've done that since 2005 with Tada list. The debate here is over whether going client-side MVC for everything is a pleasant experience. I contend that it is not.
The maintainability story with pjax+caching is exactly the same as it's always been with a Rails app. We just celebrated 8 years with Basecamp. That's a pretty good run.
You can write shit, unmaintainable code in anything, but to point at pjax and granular key-based caching schemes as somehow specifically prone to this? What? That doesn't make any sense to me.
We also had a swanky in-house client-side MVC framework cooking with Cinco. We used it once for Basecamp Mobile and while it was a good experience and the end result was great, it did little to sway my thinking on client-side MVC being a step forward in programming happiness (one of the key things I evaluate platforms by).
Again, it's perfectly fine to have a different opinion. I don't like the aesthetics nor the sensibilities of Python code much, but I certainly respect that people can make cool shit with it and even that they might enjoy the process.
The hoopla here is over the terribly flawed notion that client-side MVC is somehow The Future of web development and if you don't follow that pattern, you're living in The Past. Ha.
"You can write shit, unmaintainable code in anything, but to point at pjax and granular key-based caching schemes as somehow specifically prone to this? What? That doesn't make any sense to me."
To take things to the extreme, if you chose to write Basecamp Next in pure x86 assembly (for speed, of course), I think we'd all agree that the code would be much more prone to maintainability problems. Similarly, it is plausible that choosing to write Basecamp Next entirely with pjax+backend MVC+granular key-based caching could be more prone to maintainability problems. Not that I have any particular reason to believe it will, but we're wondering if there are any particular reasons you believe it won't?
Because neither of those elements has any bearing on maintainability beyond what is customary for a Rails application. The wonder of pjax is that it doesn't require you to change your application style and structure at all, so it has zero impact on maintainability.
The granular caching scheme is similarly just a fragment caching setup using key-based expiration. Nothing new here, just that we used it to full effect.
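For readers who haven't used it, a rough sketch of key-based fragment caching in ERB (model and partial names invented for illustration; the cache key incorporates each record's updated_at, so there's no explicit expiration, just new keys):

    <%# app/views/projects/show.html.erb (hypothetical) %>
    <% cache @project do %>
      <h1><%= @project.name %></h1>
      <%= render @project.todos %>
    <% end %>

    <%# app/views/todos/_todo.html.erb -- nested fragment, cached under its own key %>
    <% cache todo do %>
      <li class="todo"><%= todo.title %></li>
    <% end %>

With :touch => true on the todo's belongs_to :project association, editing one todo bumps the project's updated_at, so the outer fragment gets a new key and is rebuilt cheaply from the still-warm inner fragments.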
So you're free to claim that Ruby on Rails applications are somehow inherently hard to maintain, but you'd be fighting against 8+ years of evidence to the contrary. Versus, you know, a very short amount of comparable evidence for JavaScript MVC applications based on current, recent frameworks.
We also had a swanky in-house client-side MVC framework cooking with Cinco. We used it once for Basecamp Mobile and while it was a good experience and the end result was great, it did little to sway my thinking on client-side MVC being a step forward in programming happiness (one of the key things I evaluate platforms by).
I'd like to hear more about this, ideally in a blog post or two. I think a lot of people were waiting for Cinco's release, certainly I was, because we wanted to see what you'd do with it. The fact that you have Basecamp Next running without it certainly says something, but I'd be a lot more interested to find out what the specific tradeoffs were.
app that feels like a web page, not just a single-page JS app
I think this is a key point and is one reason that server-side approaches will be relevant for a long time to come. Some applications, such as Gmail and Pivotal Tracker, feel a lot like desktop apps, and heavy use of client-side code is a necessity in these cases. But many—I'd argue the vast majority—of web applications produce a better user experience when they feel like ordinary web pages. I don't see that changing any time soon.
yeah, I agree. actually I don't know if I'd agree on the "vast majority" part, but the question of "is client-side MVC The Future?" can get kind of silly and messianic, while the question of "is client-side MVC A Future?" is undoubtedly yes.
there's a lot you can do browser-side these days which would be insanely masochistic without client-side MVC, but that doesn't change the fact that lots of very useful apps run on the ordinary web pages model.
Well the point is that by doing rendering on the client side you get most of the latency issues out of the way on first load and actually have a lot more control over its effects on the user even after that (e.g. you have the option of syncing with the server in the background).
I'm pretty sure he knows what it's designed for better than you given the fact he's the one who designed it.
Rails was literally lifted out of Basecamp into a standalone framework that works great for CRUD-style apps. You might personally prefer another approach or think this design failed but Rails was definitely designed for this use case.
And you work for "Mozilla Labs"? As a volunteer, or do they go ahead and hire nerds with no manners and silly notions about entitlement and technology?
is there still an argument that can be made in favor of doing your UI on the server side?
Ability to link to documents, bookmark, meaningfully use history and save pages to the disk for offline use.
Ability of the users to customize standard behaviors without reverse-engineering your JavaScript code.
Transparency, which often leads to much, much easier debugging and improved usability.
No need to run a quad-core 4GB desktop to use the website.
And this is just the stuff relevant to internal (non-public) websites. You can argue that all of this can be achieved with JavaScript-heavy clients, but in reality, it just isn't. It's not something you get by default, it's tons of extra work, and most people don't do that work.
Even just in principle, it can't possibly be close to as fast (client-side can render optimistically, where possible), and it can't be close to as flexible (you don't have a model of state on the client to perform logic with).
In principle, server-side rendering can allow you to share pre-rendered components between thousands of users, saving everyone tons of work. In practice, rendering on the server side is just string concatenation and is insignificant compared to things like running SQL queries, which you'll have to do anyway.
> Ability to link to documents, bookmark, meaningfully use history and save pages to the disk for offline use. Ability of the users to customize standard behaviors without reverse-engineering your JavaScript code. Transparency, which often leads to much, much easier debugging and improved usability. No need to run a quad-core 4GB desktop to use the website.
You can manipulate the page url/browser history from js, and you can use the url to set the application state. History and bookmarks work perfectly well in gmail, for example.
This does take explicit coding. But, if fragments of the page are being replaced as bcx does, then you need to do similar coding anyhow. Otherwise, you're doing whole page loads on every request.
js works pretty well on modest smartphones, at which level network latency is usually the major concern...
This does take explicit coding. But, if fragments of the page are being replaced as bcx does, then you need to do similar coding anyhow.
It should not be similar. There is a huge architectural difference between the two approaches. In server-side approach you're adding caching or prefetch to an already working application that has established and working URLs. With client-side approach, you need to implement adapters that transform URL information into the client state that is normally achieved by a series of UI operations and AJAX calls. Then you need to add new code to generate URLs and manipulate history.
The beauty of caching or partial page fetches is that they are generic. History manipulation is not.
js works pretty well on modest smartphones, at which level network latency is usually the major concern...
The last time I tried to browse on a Kindle, it choked and died on most JS-heavy websites. When a 500MHz processor is not fast enough to browse the web, to me, that's a problem.
I don't understand why "it can't possibly be close to as fast"? Is the idea that with client-side templating, you've distributed the rendering?
But returning a cached html fragment is the exact same amount of work on the server as returning a cached-json value. Plus, it's less work for the client to drop the already-rendered html into a container than to have to bind the model to the template.
I feel like I'm being super dumb, but to me this approach is going to result in faster rendering.
The important thing is that client-side rendering stays on the client and doesn't need to go back to the server. If you have any noticeable latency (read: everything on mobile, desktop clients not close to a server), needing to go back to the server to render will ruin the perceived performance of your app. Well-written applications running on the client can take a change and optimistically render that before the change has even been persisted back to the database. This is inherently faster. If you're able to invest in the infrastructure to allow you to do this on the server, and your use cases allow you to make that decision, then the trade-off is a little less black and white.
One thing I've noticed is that Google likes to use RC4_128 as their encryption method for SSL. Most people just pick AES_256_CBC like 37s has. RC4_128 is so much faster in tests we've run. Of course, there's a possible security trade off, sort of. RC4, I believe from reading, is still plenty strong when it's properly implemented.
I've definitely observed this performance difference, and it is quite significant. However, the new AES-NI instructions in more recent Intel chips may lessen the performance difference over time, as more users buy machines with these chips. (Of course, that won't help mobile devices yet, and it's possible that one could equivalently speed up RC4 with those instructions as well.)
But I know the latest OpenSSL does have an AES implementation that uses AES-NI when those instructions are supported.
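For what it's worth, the cipher preference is usually a one-line server setting; a hypothetical nginx sketch (the directives are real, the cipher list is purely illustrative and not a security recommendation):

    # nginx SSL config (illustrative only)
    ssl_prefer_server_ciphers on;
    ssl_ciphers RC4-SHA:AES256-SHA:HIGH:!aNULL:!MD5;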
The important thing is that client-side rendering stays on the client and doesn't need to go back to the server.
It still needs to go back to the server to fetch new data. And it's not like rendering on the client side is instant and free. There are plenty of JS-heavy websites that visibly lag during UI operations because of all the stuff that goes on during "rendering" (in quotes, because it usually includes large chunks of business logic).
If you want to use a decent library, you can rest assured that your client-side templates will render far faster than the Ruby version of the same HTML.
Even considering I do not have an 8 core machine with 32 GB of ram sitting on my lap (or in my pocket)?
I realize that ruby is significantly slower than v8/JägerMonkey/Tracemonkey/etc, but is it so easy to discount the significant disparity between the average compute power of a server vs mobile/laptop?
I think the stronger counter-arguments in favor of client-side javascript templating/rendering would be flexibility, smaller http responses (and thus less latency), and possibly that it is simpler or more straightforward. But rendering speed? Not so sure.
The big advantage of client-side rendering is that you can break out of the mindset of outputting blobs of html from templates. You can think in terms of ui elements and the business logic that drives their creation and updates. Once in that mindset, you can think about how to prefetch data and store it locally to not have to go to the server for every ui update. If you're doing it right, many actions don't require going to the server at all.
Don't overlook the bandwidth overhead of html fragments vs. json snippets. That issue is magnified on mobile.
Additionally, if you are doing small updates the DOM API is faster than injecting .innerHTML. I think this has changed in recent years and .innerHTML might have caught up for larger nodes. I personally still use the DOM API because it encourages simpler layouts.
Also remember that it's usually latency and connections that kills mobile performance. I've also found the actual data transfer to be fairly good compared to your average domestic DSL.
A small change in state can cause a large change in the view.
When the server renders the html, it has to send all the portions of the view that have changed. When rendering client-side, the server only needs to send the portion of the state that changed.
Good point, thanks. So it potentially provides you with opportunities for further optimization. I guess the message from DHH is that they hit their performance targets, so this would be an unnecessary optimization (and I'm tempted to say "non-trivial to implement"... but now I know I'm being biased).
Word. Seems like they are going out of their way to avoid doing client side work even when it would be far easier to do so. Also seems motivated by the desire to keep Rails relevant, but without actually trying to evolve Rails towards being more useful for client side driven apps.
On the flip side, you have to congratulate them on what seems like really great results. I'm definitely in favor of doing what works and what you're comfortable with. I just don't think Basecamp Next Next will be written this way.
> Also seems motivated by the desire to keep Rails relevant, but without actually trying to evolve Rails towards being more useful for client side driven apps.
IME, there isn't a smoother way than Rails to implement a bunch of RESTish json endpoints hooked up to a database (resources) and have dependency management and compilation for your frontend js/css (via the asset pipeline).
There may be other frameworks out there that make serving up and delivering client-side driven apps easy, but I haven't heard of them.
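To make that concrete, here's a bare-bones sketch of such an endpoint in Rails 3 (the Message resource and its attributes are invented for illustration):

    # config/routes.rb
    resources :messages

    # app/controllers/messages_controller.rb (hypothetical)
    class MessagesController < ApplicationController
      respond_to :json

      def index
        respond_with Message.order(:created_at).limit(50)
      end

      def create
        respond_with Message.create(params[:message])
      end
    end

The asset pipeline then takes care of compiling and serving the CoffeeScript/Sass that consumes it.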
Agreed. I prefer writing UI code on the client. But you need quality abstractions because it is more complex than doing everything on the server where all your data is at arm's length. I've written two frameworks that make it easier... SpacePen for the view (https://github.com/nathansobo/space-pen) and Monarch for the model: (https://github.com/nathansobo/monarch-rewrite)
Also, what about deployments/bug fixes? It seems that if you need to make changes to the html fragments you'd have to do a full app deploy every time, versus making changes to a specific JS file and uploading it.
Personally that's why i like this whole shift to single page apps. Just as we've decoupled certain aspects of the server-side development, we can do the same with the UI.
"but does it feel like a sustainable approach going forward"
Maybe you are right. However, it is too early to do a big rewrite in JS. There are many JS frameworks but no clear leader. I am sure that in 3 years the decision of how to design your client-side web app will be much easier.
Developer happiness aside -- there are plenty of folks who like JavaScript just as much as Ruby -- and assuming that we're talking about a non-public-facing app (like Basecamp Next), is there still an argument that can be made in favor of doing your UI on the server side?
Sure.
1) Client side UI frameworks are still immature. For example Backbone is minimal, while Ember is more featured but with bad documentation and not proven yet. Big frameworks with UI widgets never caught on and are too restrictive. GWT is also on the down and out, etc...
2) Same goes for the tools you need for debugging, unit testing, automation, etc. Nowhere near as complete as the server-side tools that have been honed for 10+ years.
3) Client experience can be extremely different, since JS performance differs widely between Firefox, Chrome, Safari and IE versions. Not to mention that not every browser supports the history state API.
4) JS performance for long running pages can also vary, due to memory management.
and it can't be close to as flexible (you don't have a model of state on the client to perform logic with)
And on the client you don't have a model of the server (where the actual data are and where the actual actions are performed) to perform logic with.
I love posts like this, not just for what OP has done, but for what you guys have to say about it. I'm continually asking myself 3 questions:
1. Should I do it on the client or the server?
2. What should be travelling between the client and the server?
3. Based on the answer to #2, what else do I need on the client?
Even though I've read all your great discussion points, I'm not sure I'm that much smarter. But it's nice to know I don't suffer alone. :-)
I think most applications use both client and server components. If your question is more on the lines of which one plays a major role, it depends mostly on the app. GMail won't be half as good without all the JS magic, but again, I assume it also has a heavy server component.
> 2. What should be travelling between the client and the server?
If you need any client-side processing or are using a framework that works on raw data (backbone.js), then json/xml/...; otherwise, if your goal is to run the app without reloading unnecessary parts, html is a better idea.
> 3. Based on the answer to #2, what else do I need on the client?
I am not sure what you are looking for here, but if your goal is to make a responsive app, there are some basic things to take care of.
Have bookmarkable links and don't break the back button. Breaking those breaks the base paradigm of the web, and plain ajax does break them.
Use something that provides bookmarkable links and doesn't break the back/forward button, and at the same time doesn't reload the whole page when only a small portion of it needs reloading.
pjax is one such solution - the 37signals guys investigated it, then went with their home-grown solution, but I think pjax will work just fine for the majority of use cases.
The trick to pjax is serving content without layout if it is a pjax request, and with layout otherwise. There is nothing here you couldn't hand-roll, but 1) why would you want to? 2) pjax uses pushState to preserve links and back button behavior.
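A rough sketch of that trick (jquery-pjax sends an X-PJAX request header; this is an illustration, not the pjax_rails implementation):

    # app/controllers/application_controller.rb (hypothetical)
    class ApplicationController < ActionController::Base
      layout :layout_for_request

      private

      # pjax requests get only the inner fragment; normal requests get the full layout
      def layout_for_request
        request.headers['X-PJAX'] ? false : 'application'
      end
    end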
More important than whether to send html, json or xml is WHEN to send that content. People underestimate just how painful latency is to the user because they're sitting right next to their web server. Digg takes 15 times longer to load from bangalore than from california (see http://www.slideshare.net/kkjjkevin03/geographic-distributio... ). The thing that client-side rendering allows is caching your application UI in the browser and on the CDN. The server only gets contacted when it's absolutely necessary.
If you go mobile, it gets worse, because sometimes you'll lose the network for a few minutes, or even a few hours. Building a web app that keeps running in that scenario is possible, but only by embracing a client-side philosophy. In my personal opinion, mobile is going to be the dominant way that people connect, and the network is not going to be robust enough to render mobile web apps on the server.
Looking at the rails pjax_rails gem https://github.com/rails/pjax_rails I see it automatically patches layout to return false when the request is pjax.
I was a bit baffled by the example app since it didn't appear to be doing any pjaxy stuff, but rails being opinionated, I see all links are pjax (except the excluded classes shown above), and layout is turned off for pjax requests.
I put some debug printouts in his example notes controller and it seemed to be setting the no-layout rendering option correctly, but as you say, the checks in the controller may be redundant. Anyway, I am planning on giving pjax a very good look: I find doing a lot of client-side coding to be tedious and something to be minimized.
+1, I'm also none the wiser having read the OP and the comments here.
I'm quite used to a fully server side model so I'm currently experimenting with a few side projects to see just how much I can put into the client without going anywhere near the server.
I think once I've pushed it to its limits I can start working my way back to the server and achieve a good balance. None of this would of course be bearable without coffeescript and a nice js framework like backbone.
Hmm, degraded development experience of doing everything on the client?
Bit of a bizarre claim there given the state of templating these days. Sounds more like a prejudice than an actual problem.
Also, json tends to be much lighter than HTML, and if you're taking an optimistic view of updates you can assume the action has succeeded and immediately update the display without waiting for the server to create some simple HTML. Waiting on the server is arguably a worse user experience because of that lag.
With all of the backbone and node.js hype going around, people tend to forget that client-side development is developing on the client's side.
Sooner than later you will have to deal with
* nondeterministic testing,
* interactive testing (leaving machines on to run interactive testing at night),
* most of the bugs will live on the client side,
* you'll have to support each and every machine's quirks (you can't pretend everyone is on at least a Core i5 and Chrome; people WILL have machines that do javascript client-side template rendering SLOWLY).
* you'll have to find ways to diagnose problems at the client's site, and you will maintain a fleet of various computer configurations to run sanity checks on.
* which will also cause you to start investing in pairwise testing
* i can go on
DHH is COMPLETELY right, simply because the universe is not ready enough for client-side "SPA"s (universe = browsers, internet).
I'm laying out my arguments from experience - I've been a long-time Ruby/Rails/.Net/JVM server-side dev as well as an enterprise software dev (enterprise desktop/workstation application suites, as complex as Visual Studio and the like).
I have several node.js projects in production and I'm also doing a couple of projects with Backbone.
Your arguments are a checklist of responsibilities for anyone running any web software, whether the actual DOM elements are being rendered on the server or the client.
I agree that you face most of these issues when most of your content renders on the server too. However, you DO have limited resources (time, money, people), and these points become more and more painful as you push towards the client side.
Still, I insist that some of these arguments are client-side specific. One instance is javascript template rendering speed. Another is that you would prefer getting statistics on your rendering speed on the server side, where it is easily controllable and deterministic, rather than having to close that loop by building a service that reports usage and statistics back from the client's side.
I give my arguments from pain and experience on both sides, and I still do both sides despite the arguments (since it is not always my own choice).
You can still do unit testing of your client-side code if you wrap everything in components that separate state from rendering. The rendering itself you can only test with a human looking at the page anyway (or by comparing to reference renderings). Extjs uses this approach. I can easily unit-test my forms written on top of extjs.
Fundamentally this is a bikeshedding debate. Whether you're programming on the client or the server, the tools are equally capable. The big difference is the server puts you closer to the data, and the client puts you closer to the user. Either way you're struggling with latency, either way you make different trade-offs, and either way you reach your goals.
I used to build stuff on the server, then i switched to extjs, with the server only exposing a bunch of json-rpc web services. It's not as big a deal as people paint it out to be. Extjs abstracts away browser differences well enough that i rarely come across issues that aren't reproducible for me. If they're reproducible for me in my browser, they're easy to fix.
Now, extjs is a DSL, it's closer to flex development than to traditional web development. It has a learning curve with a pay-off at the end, and the pay-off is easier ui development. If you're building ui the traditional way, but client-side, ymmv. Still, anything to do with rendering glitches is something you'll have to debug by eyeballing it, regardless of where you render your ui. There's no way of automatically logging misplaced floats.
P.S. There's also less to monitor. Typically you'll monitor for performance and security. Performance depends on the browser and pc specs, not the server's load, and security has nothing to do with the client. So, really, there's just not much to monitor. You can attach error handlers to automatically report exceptions to the server, but i've not had a need.
Very true. This is why I'm sticking with a server-heavy design (i.e. Rails) for my main projects. I agree with all of your arguments about the difficulties of testing. And I'd add that there's a distinct lack of tools for testing client-side. Sure, you can get unit tests for client-side JS, and you can run integration tests in Selenium etc.. But the client-side testing ecosystem is tiny and immature compared to that for server-side testing. This will all change, but for now, I don't envy anyone who has to test a heavily client-side app.
We've found that the additional overhead of sending HTML vs JSON is negligible in most cases. The additional speed gained by just sending JSON across is not worth the programming enjoyment hit of doing everything client-side.
When you get below 50-100ms per action, things are generally fast enough that this is not a key problem any more. The user cares greatly if you can go from 800ms to 100ms, but not so much if you can go from 100ms to 70ms.
I have to agree here. A major problem with client-side json is the need for additional documentation and the likelihood of things breaking because of simple html edits.
For example, if you are populating a bunch of divs inside a parent div with the data returned using json, you would describe this using something like jquery selectors. Now if someone else comes and moves the divs around during a redesign, he must go through a whole bunch of js to make sure his shuffling of the divs won't break the jquery selectors populating the data from the json.
Now think about having a parent div that is simply populated with returned HTML from server-side and it seems way easier than using json to populate the data.
That's nonsense; the same issue would apply to server-side templates. If you go around messing with DIVs without an idea of the consequences, you will get in trouble regardless of your templates being on the server or the client. If you try to have the designer do a redesign without testing the application afterwards, you're always in for trouble.
you will get in trouble regardless of your templates being on the server or client
My view is that you will get in significantly less trouble, because manipulating an HTML template that is rendered on the server side is much easier to manage and adapt to a new design than finding nitpick jquery selectors across the app that depend on a specific div structure.
Btw, my post doesn't imply anywhere that a designer should redesign an app without testing. Therefore, your argument about testing is garbage.
Rather than "finding nitpick jquery selectors" I use a client side framework (backbone.js) with a lot of Views which makes my development faster vs using server side templating, so I cannot share your opinion.
50-100ms is the time it takes to generate the response on the server. On top of that you have to add the ping time between you and server and whatever other network overhead there is.
That's why it's great to be closer to 50ms so you'll stay under 100ms even with the network overhead.
That assumes, of course, that the viewmodel used to render the server-side view is identical to the model you'd return serialized into JSON. That's mostly true, but not always.
Exactly what I thought. But you have to remember that 37 is the quintessential Rails shop - client-side logic is not their sweet spot.
So it's only natural that for their flagship product they'd rather keep pushing the limits of their comfort zone rather than go all out client-side, no matter how natural that choice might be for a "web app".
Stacker sounds like AJAX as it was done in 2006, except triggered by pushState. Maybe I'm misunderstanding it though. I'd be curious what the performance penalty is on mobile.
Well, it's really stacker + pushState + MAX cache.
But why would there be a performance penalty? Doesn't this result in the client doing _less_ work all around? Less work than the multi-page approach... and less work than the client-side templating approach, no?
Seems it's a highly refined version of what was done in the "old" days. I remember doing something similar for an old project in ASP.NET. (Well more similar to pjax)
As to performance on mobile, i think it might all depend on how large the html fragments are.
Your comment ignores the next section in the article, detailing how they are caching templates with a high level of granularity, and actively removing template logic that impedes efficient caching.
The article goes on to state that delivery times for cached segments of HTML are under 50ms in most cases. While I agree that it would be possible for them to improve the user's perception of the app performance even further via JSON or client-side updates, with the level of performance they described that type of improvement is unnecessary and introduces additional vectors for bugs to creep into production code.
When you have a persistent page (that you update with HTML5 push-state) you suddenly have to worry about issues like memory fragmentation. If you do lots of small updates to build a page from a template + JSON every time the user navigates it's easy to hit some situation where everything starts lagging for no clear reason.
If you simply swap out a large part of the page with some HTML right out of an Ajax request you don't have to worry about any of that.
Let's be fair - poorly designed code causes memory bloat whether you're on the client or on the server-side.
Also, garbage collection efficiency in browsers is a metric of fierce competition among browsers. It's superb already, and it's getting awesomer by the minute. It's a safe choice for the future.
I have to say, reading this made my day. I've been feeling such a push towards client-side templates lately that I felt like it was just me who didn't get it. I really don't understand what's so compelling - unless those exact same actions are actually (YAGNI) used as endpoints of a web service. render :partial => '..' is such a sweet, simple and natural paradigm for web programming.
Hey, I totally agree that if your actions are gonna be consumed by clients other than your own UI, it makes total sense. However, I've seen people [strongly] advocating this approach even when this isn't the case..that's what I don't get.
Though I suspect you're just talking about cases that are best chalked up to: there's no accounting for the logic of the illogical.
[1] If you want to ensure a separation between display code and business logic, it's not a bad way to draw a line; some heavily data-driven problems do tangibly benefit from fetching JSON and formatting on the client; solutions that have multiple views on essentially the same data sets benefit in much the same way that solutions with multiple clients benefit; etc.
I think you nailed it with "unless those exact same actions are actually (YAGNI) used as endpoints of a web service". I find client side javascript apps compelling because they are simply another consumer of your web service along with any other mobile/third-party/customer apps.
In my head it's much easier to imagine manipulating a UI based on objects and their state, rather than partial snippets of HTML. It seems like it would get complicated to track the pieces of html you needed to request in order to redraw only portions of the UI. Wouldn't there still be a lot of javascript involved with that logic?
Normally that basic cycle would go:
1. I need to view some data, if I do not already have it, request it.
2. Generate html from data
3. Take generated html and place in appropriate place on page.
This skips step 2 by requesting the required html rather than the raw data. Step 2 is the part that would require the most code.
I disagree. I work on an app that is entirely client side rendered. We use the JSON data approach. Step 2 is trivial. There are many JS templating engines out there. It takes almost no code. The part that's hard is step 1, figuring out if you have the right data or whether your data is stale. And then, to figure out which part of the UI needs to be redrawn. This is where frameworks like Backbone can come in handy.
>> We use nodejs for local development via Sam’s Pow server, but there’s no nodejs in Basecamp Next itself. It’s all Ruby on Rails. We’re running the default stack using Rails 3.2+, MySQL 5.5, Memcached, a tad of Redis (mostly for Resque), ERB, test/unit, CoffeeScript, Sass, and whatever else is in the default box you get with Rails.
Awesome, very interesting article that shows you don't always have to go full client side to get a snappy app.
However, I do agree with another poster about this being a turbo charged horse and buggy. You can't, for example, deploy the app to a static CDN, or serve templates from a CDN. Templates can't be cached on the client, so you've developed what is essentially an elaborate partials caching infrastructure that is likely very brittle and must be watched over closely to maintain speed benefits (keeping all those things in mind when content changes -- what to throw out and what to keep, etc).
Also, by delivering HTML you're limiting your presentation to browser-only devices, at least with this app. I know basecamp has a JSON API, and at that point, why didn't you just do a traditional client side app I wonder? I think the answer rests in the fact that you preferred the "niceness" of server side development, which is arguably more mature than client side at the present time. If that's the only benefit, in a few years, do you think that advantage will hold true?
I find GWT to be the best platform to do full-blown fat client-side paradigm due to several reasons:
1) UI Templating (like XAML)
2) MVP Pattern + JUnit = test automation as part of your build with minimum investment to infrastructure (most JS based solution requires investment on infrastructure)
3) Resource Bundle (similar to Asset Pipelining) cut down HTTP requests
4) Resource splitting (follow up from #3) to reduce the size of the responses when it comes to resources
5) JS binding (like C binding, C++ binding, JNI, etc) to support popular JS library
6) Top Notch Compiler (arguable... but it's there) that can also prune dead code
7) JS switching based on browser's request (IE browser will get IE-specific JS)
8) CSS variable substitution (like Sass)
9) History framework.
10) Flexibility: end point can either return JSON, XML, or use the built-in XML-RPC mechanism (Java only).
The disadvantages are: it's Java and requires compilation.
I like GWT the most so far because I don't have to find libraries or tools that support all of the above (Sass, mustache, various js unit-testing tools, library to merge and compile css/js, etc).
I like GWT as well, but its development history makes me a bit nervous about its future.
It is open source, but doesn't have a developer ecosystem outside of google employees (in fact they outright state that no one else can be a committer). That in itself wouldn't be the end of the world, but unfortunately google has a) reduced resources devoted to GWT dramatically in the last year or so (many team members were reassigned to the dart project), and b) has a history of rapidly changing 'paradigms' and semi-abandoning the code and documentation for the old way of doing things.
I think the biggest danger of using GWT is when it breaks on newer browsers. This has happened once in the past with the release of Safari 4 (or 5?): there was a minor hiccup, which they quickly fixed the next day.
That is definitely a major concern. Other than that, if the community decided to fork it (and has the time and capability to do so), it'll be a safe choice for a long time.
GWT as of today is definitely very polished and well maintained, and fulfils the needs of a Single Page App more smoothly than having to hunt for different tools and libraries and set them up individually as part of the build tool.
I think in the whole client-side rendering debate, a lot of people miss the fact that there is a one-to-one correspondence between JSON and a certain subset of HTML, which can be illustrated by the following example:
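(Sketching it with invented field names:)

    { "todo": { "id": 7, "title": "Ship it", "done": false } }

    <div class="todo" data-id="7">
      <span class="title">Ship it</span>
      <span class="done">false</span>
    </div>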
If, as 37signals basically seem to be doing, you restrict yourself to this subset of HTML and use CSS to do _all_ presentational styling, then honestly it’s all the same either way.
Are you guys taking care to send as-plain-as-possible markup from the server to the client and then style it up to keep most the UI development in CSS or are you just sending through raw, styled HTML that gets inserted as-sent directly into the page?
This reminds me of the early days of Java Servlets when people would stream back HTML from inside their Java Servlet code to JSP pages to build it out... made it impossibly hard to edit the templates.
Then the Java world moved to templating engines, but it was still the same idea.
Just curious how you are managing this pain point (I am not a Ruby dev so maybe there is a well-known 'best practice' templating approach?).
I'm not sure I understand the pain point you're describing. What's "styled HTML"? We're sending HTML across the wire that looks exactly like the HTML we used to render the initial page. It has classes, ids, and custom data descriptors.
Aren't they using a lot of client-side UI code in order to replace parts of the UI with updated HTML from the server?
They have caching on the server side but that can only account for some of this.
Although in my couple of years of using Basecamp in my previous job, I never found it particularly "snappy", but then I was one user out of a team of five, out of however many users they have.
I can't help but feel that you are sacrificing the flexibility of your UI in order to stay in your 'safe place'. IMO it's easier to implement complex UI if you can deal with it all in the front end. I have used a similar approach before and felt like my hands were tied many times by it.
This is a bit of a tangent, but Stacker feels like breadcrumbs that require more real estate than necessary. Maybe the physical metaphor is important, but my first thought was 'this is just breadcrumbs'.
I'm showing my ignorance of rails here, but is it hard to do the 'russian doll' style of fragment caching that DHH describes? Are there any gems that do this?
yes, the blog post was quite clear that it's built in-house, with no plans to open source it since it isn't a generic solution (it's purpose-built with Basecamp in mind).
Developer happiness aside -- there are plenty of folks who like JavaScript just as much as Ruby -- and assuming that we're talking about a non-public-facing app (like Basecamp Next), is there still an argument that can be made in favor of doing your UI on the server side?
Even just in principle, it can't possibly be close to as fast (client-side can render optimistically, where possible), and it can't be close to as flexible (you don't have a model of state on the client to perform logic with).
I think that the truth of this is already admitted in that the really fancy bits of UI are being implemented in JS: http://37signals.com/svn/posts/3094-code-statistics-for-base...