The author says you can do these things anyway without AMP, but what he doesn't get is that web developers were not doing them. It took the nudge of AMP to get decent performance on the mobile web.
He also failed to mention a key AMP rule: all CSS is in the head, a max of 50 KB. There is no external CSS at all. That's crucial. It reverses a persistent anti-pattern in web development that calls for a bunch of separate files for CSS and JS. Almost all CSS on most webpages is unused (this is still true of a lot of AMP sites -- 50 KB is way too much CSS for an article).
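To illustrate, a rough sketch of what that rule looks like in practice (leaving out the rest of AMP's required boilerplate, like the viewport meta, canonical link, and amp-boilerplate styles):

    <!doctype html>
    <html amp>
    <head>
      <meta charset="utf-8">
      <script async src="https://cdn.ampproject.org/v0.js"></script>
      <!-- ALL author CSS goes here, inline, under the size cap.
           No external <link rel="stylesheet"> to author CSS is allowed. -->
      <style amp-custom>
        article { max-width: 40em; margin: 0 auto; font: 18px/1.6 serif; }
        h1 { font-size: 1.6em; }
      </style>
    </head>
    <body>
      <article><h1>Headline</h1><p>Body text...</p></article>
    </body>
    </html>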
I think the reality here is that early 21st-century web developers are terrible at web development. They stuff massive amounts of JS and CSS down users' throats, distributed across 50 or 100 requests, and call themselves "engineers".
AMP, or something even better, needs to be the default way to build websites.
> AMP, or something even better, needs to be the default way to build websites.
No. We don't need AMP or anything like it. The existing tools are more than enough. What we need are actual, trained engineers/developers on the platform.
The real problem with web performance is that any random guy with questionable or non-existent knowledge thinks he can whip up a website by mixing together whatever libraries he comes across. That is fine for a personal site or some trivial web page. However, we over-glorify these people and appoint them to critical jobs. And then we wonder why the 21st-century web is horrible. Insert shocked Pikachu meme here.
Can you imagine what would happen if we started appointing surgeons by putting random guys through a one-month bootcamp consisting of YouTube video courses? We used to do something like this in the Middle Ages. That era also gave us a surgery with a 300% mortality rate.
What we need are standards on who can actively work at what positions/level based on their training/skills. Just like any other critical industry.
This isn't about engineers. This is about businesses. Up until now there apparently hasn't been a business case for fast websites.
A fast website costs a lot of money in development time. If a website loads fast enough, nobody wants to invest X dollars to make it somewhat faster.
Since AMP sites are prioritized on mobile, there is now a very clear business case for supporting AMP.
Most people do not work in a Dilbertesque organization filled with pointy-haired bosses. And this is a very bad excuse for all the startups that are generally behind these atrocious websites. As a startup, your whole damn company is filled with engineering people.
If your management, which is filled with engineering people, fails to understand the importance of fast websites, then I am afraid the problem is with the engineers. Whether that is incompetence or actively cutting corners is debatable.
Fast sites are significantly cheaper to develop; it's just a case of _not_ adding things. But that's an impossible choice to make when the website is a new channel and entirely not what the company is for.
As others here are saying, they like AMP because they as developers do not have final say on what goes in the product. They can only do what the boss/client tells them to do.
If there are 50-100 requests on a page then the vast majority of them are crap ad network code shovelled onto sites. Most developers hate them just as much as users do.
Ironically, with the advent of HTTP/2, more requests might be a good thing. Separate out bundles into per-page (or even per-component!) chunks and ensure you're only sending the user the content they need, without additional overhead.
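For instance, a sketch with native dynamic import() (module paths are made up); a bundler turns each import() into its own chunk that is only fetched when needed:

    // Each import() below becomes its own small chunk over HTTP/2,
    // fetched only when the user actually hits that page.
    async function route(path) {
      const root = document.querySelector('#app');
      if (path === '/checkout') {
        const { renderCheckout } = await import('./pages/checkout.js');
        renderCheckout(root);
      } else {
        const { renderHome } = await import('./pages/home.js');
        renderHome(root);
      }
    }
    window.addEventListener('popstate', () => route(___location.pathname));
    route(___location.pathname);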
> Most developers hate them just as much as users do.
Which is one of the selling points of AMP.
Sorry, Mr. Marketing VP, we simply cannot add all that crap ad network code! The AMP framework doesn't permit it, and if we stop using AMP we lose all our SEO!
I think if more web developers understood how browsers actually render a page we'd get much better web pages. In the case of CSS, the browser won't render until all the CSS files are loaded and parsed. Having lots of them is terrible for time-to-first-paint performance. That's reason enough to inline styles in the head and only have one CSS link on a page if you want something simple, or to use media queries in links to create different contexts if you want to minimize download sizes.
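Concretely, something like this (a sketch): inline the critical rules, and scope any external stylesheets with media queries so non-matching ones stop blocking first paint:

    <head>
      <!-- Critical styles inline: no external fetch blocks first paint -->
      <style>
        body { margin: 0; font: 16px/1.5 sans-serif; }
        header { background: #222; color: #fff; padding: 1em; }
      </style>
      <!-- A stylesheet whose media query doesn't match is still fetched,
           but at low priority, and it no longer blocks rendering -->
      <link rel="stylesheet" href="/print.css" media="print">
      <link rel="stylesheet" href="/wide.css" media="(min-width: 60em)">
    </head>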
OK, except the advice has been to move CSS to separate files for a long time, so that they can be cached easily. That way other pages load quickly, since the CSS file is already stored on the device.
Caching is still important, but there's no point in designing a good caching strategy if the user's very first impression of the site is poor - if they don't come back then they won't benefit from caching. If you use the more advanced media query links mentioned in the article you get the benefits of a minimal first load and the benefits of caching.
To be fair, industry-standard practices make it harder than it should be. We're building a very simple Vue.js app in our group (the first one we've done, although other groups in our company use Vue.js). Our first stab out of the gate netted us nearly 500 KB of JS (production build, minified, etc.) for something that basically takes a bunch of JSON and renders pretty tables.
Now, admittedly, we've obviously done something horribly wrong, and the people on our team are senior enough programmers to accept that. However, if you're right out of a boot camp, or don't have a lot of confidence, it might be something you overlook. If you've got a manager breathing down your neck saying, "It looks fine to me! Why are you wasting time trying to shave a few KB?", it can be hard to say, "Wait! This could potentially cost us customers." Having the "if we make the customer wait an extra second, they may walk away" conversation is tough in the best of cases.
Ideally it would be very difficult to make as bad a mistake as we've made, but it really isn't. I suppose it gives us justification for larger salaries for experience :-)
I know nothing about AMP, but isn't having the CSS in the head pretty bad for caching? An external file can be fetched once by the browser and then cached forever, while having the CSS in the head forces that part to be downloaded over and over again?
Edit: I just realized that others posted the same concerns below...
It's mixed: that CSS file is only cached until the next time you change it and browsers, especially mobile ones, do not have enormous caches. As with most engineering problems the best solution is a balance: put enough CSS in the page for the document to render and either prefetch or demand-load resources which aren't as important, all the while working to keep the absolute size down so your worst-case performance is reasonable.
As an example, if you had something like an item page and a detailed viewer which the user could choose to open, the item page's HTML could have its critical CSS inline and a <link rel=prefetch> tag to tell the browser to preload the viewer's CSS so it will likely be in the cache by the time the user opens the viewer.
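In markup that would look something like this (file names hypothetical):

    <head>
      <style>
        /* critical CSS for the item page itself */
        .item { display: grid; grid-template-columns: 120px 1fr; gap: 1em; }
      </style>
      <!-- Fetch the viewer's stylesheet at idle priority so it's
           probably cached by the time the user opens the viewer -->
      <link rel="prefetch" href="/css/viewer.css" as="style">
    </head>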
If you visit several pages on a single website, then yes. But if you want to scroll through AMP pages from different websites in the search results, caching doesn't help much.
Yes: your device can always cache it, if you use a trusted proxy (think enterprise networks) it can be cached, and if the site uses a CDN it can be cached there as well.
AMP is unnecessary. All it would take is making site speed a major factor in rankings, and sites would get faster overnight.
We work with 1000s of publishers. Decisions about websites come from business and marketing, not from the dev team. Performance is a low priority unless it is critical.
> all CSS is in the head, a max of 50 KB. There is no external CSS at all. That's crucial. It reverses a persistent anti-pattern in web development that calls for a bunch of separate files for CSS and JS.
This means CSS and JS cannot be cached between page loads, however, so you're requesting the same data every time you navigate between pages. It's a tradeoff against faster initial page loading. I'm surprised there isn't a better solution for this yet.
Also, with HTTP/2, splitting up your CSS and JS is actually a good idea because you can include only what you need on each page and the parts are cached between pages.
I think the basic problem is people including way too much CSS and way too much JS. Even for desktop pages, 50 KB of CSS should be enough for most pages.
One trick I'm interested in trying: use external CSS if the referer is coming from the same site, otherwise include CSS inline. That way people who only view one page (a large percentage of visitors) get it inline, and people who view multiple pages get it cached.
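As a sketch of that trick, here is some hypothetical Express middleware (the ___domain and paths are made up; a real deployment would also have to account for caching layers, since the response now varies on the Referer header):

    const express = require('express');
    const fs = require('fs');

    const app = express();
    const css = fs.readFileSync('./static/site.css', 'utf8');

    app.get('/article/:slug', (req, res) => {
      // Same-site referer => the CSS file is almost certainly cached,
      // so link to it; anything else gets the styles inlined.
      const internal = (req.get('referer') || '').startsWith('https://example.com/');
      const head = internal
        ? '<link rel="stylesheet" href="/static/site.css">'
        : `<style>${css}</style>`;
      res.set('Vary', 'Referer');
      res.send(`<!doctype html><html><head>${head}</head><body>...</body></html>`);
    });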
Caching helps if you visit several pages on the same site. But AMP might be optimized for scenario when you scroll through several AMP pages from different sites in the search results. Using an external stylesheet would only make loading slower in this case. Another problem is that on an unreliable connection external CSS can fail to load.
I wonder if one of the user stories being addressed by AMP identifies that users don't typically visit more than a couple pages from any particular site before a style update is pushed by the developers that invalidates the cache anyways.
Well, fast initial page loads would improve the experience a user gets when they're using a search engine to find an answer, especially if they have to visit multiple websites.
For a site like Hackernews though where you probably visit many pages each visit, downloading the same e.g. 50KB of CSS every page is wasteful.
If you really want, you can build a very fast site by downloading only the parts of the HTML that change between loads, while still not breaking the back button. This is more work though.
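A sketch of that approach: fetch just the changed fragment, swap it in, and keep history working with pushState/popstate (the X-Partial header asking the server for a fragment is hypothetical):

    async function navigate(url, push = true) {
      // Server is assumed to return only the #content fragment
      // when it sees this (hypothetical) header.
      const res = await fetch(url, { headers: { 'X-Partial': '1' } });
      document.querySelector('#content').innerHTML = await res.text();
      if (push) history.pushState({}, '', url);
    }

    document.addEventListener('click', (e) => {
      const a = e.target.closest('a[data-partial]');
      if (!a) return;
      e.preventDefault();
      navigate(a.href);
    });

    // Back/forward buttons re-render the stored URL without pushing
    window.addEventListener('popstate', () => navigate(___location.pathname, false));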
This is a hack to accelerate heavy websites. If your site is lightweight you might not need it. Or you can load such JS code later in an unobtrusive fashion.
While everything you note is true, it's hardly the fault of the engineers. One cannot expect engineers to hand-code CSS and optimize it every time. These things should be handled by tools.
With the advent of CSS-in-JS and compilers, this is slowly but surely happening.
What AMP did was prove, at scale and with tooling, that performance can be achieved. That philosophy is trickling into other tools.
In some cases, sure, it's not the fault of engineers — if an engineer is in an environment where they're not able to take the time to do things right.
In some cases, it is the lack of tools.
But I think it's often not either of those, and it is in fact the chosen priorities of the engineers. A lack of attention to craft.
We see a similar split in GameDev. Some game developers (Carmack, Blow, Acton) sweat every millisecond, byte, and cache hit. If you work on engines in AAA, that's a job requirement. Others are happy to use Unity (etc.), never optimize, and don't mind that their simple 2D platformer is bloated and wastes cycles. Many simply don't care: why spend time cutting down on bloat and waste when the game plays well enough?
I see plenty of engineers who have a "it works, ship it" mentality. To me, that's not the attitude of someone who cares about their craft.
I see plenty of engineers who would love to keep polishing their work far after PMs and other stakeholders have decided it's good enough to ship. Some even do it after hours.
It's a combination of the tooling getting better and product expectations being parceled out and solidified at a higher level in the org chart. "Engineers" are devolving into technicians as a result.
I think a lot of people program at work purely for monetary reasons. For this reason they follow the "it works, ship it" mentality, because then they can move on to other tasks that make money. If a sluggish program makes a million dollars but a beautifully written program makes zero, which shows better craftsmanship? Admittedly that's an unrealistic situation, but people have different motivations for coding.
>I’m not saying practices promoted by AMP are bad or useless. They are good practices. But nothing stops you from following them in the regular web.
Disagree.
The most important feature of AMP is that it makes it impossible to add the kind of bloated crap that causes poor performance on something like a large media company website.
Something like a news website likely has 2-4+ analytics integrations, and a similar number of ad provider integrations.
Your staff want to use google analytics so that goes in. Your display ad provider (ads appearing in the page around articles) uses some other analytics provider so they have to go in. You want video ads so another ad provider who requires a video specific analytics provider goes in. Site rankings produced by someone or other are important to get direct ad sales so their required analytics integration goes in. Then someone in sales signs some deal to add a third party widget to your homepage so that goes in.
All of this happens over the protestations of the developers. People may literally quit over these integrations but their replacement will implement it.
Turn off your ad blocker, go to a news site, watch the network tab and despair.
AMP makes that crap near enough to impossible that the developers can convince the sales team and management that it simply cannot be done.
> The most important feature of AMP is that it makes it impossible to add the kind of bloated crap that causes poor performance on something like a large media company website.
The only reason AMP matters is that Google was heavily pushing it and for a while restricted the top search result carousel to AMP pages. This goal could far more easily have been satisfied by a policy that set limits on time-to-render and total transfer, rather than by dictating usage of a proprietary framework that often makes performance worse (you need ~100 KB of JavaScript to finish running before an AMP page isn't blank).
What actually made Google try it is that it gives them control over the advertising system. Performance was the pretext they used to protect their main source of revenue.
The web development community had plenty of time to fix these problems in better ways and it instead just spent time building bigger and bigger JS frameworks and background videos.
What problems exactly? The web works pretty well and the source of most problems is non-technical. AMP is cache + handcuffs. No, thanks. I am no web-developer, but pretty sure I wouldn't want to touch that. Is performance really an issue these days? It probably always is but there are worse ones. One specifically being closed web technologies administered by large companies like Google.
The number of times I open a news article (local/regional seems to be the worst) on my phone, start reading the article, and then 10-15 seconds later have the whole thing go blank, reflow, and start me back at the top of the page... I haven't spent any time investigating what actually causes it, whether it's a stylesheet or a big JS module or what, but it's absolutely enraging. I'm just reading what could be plain text content, and that flow gets horribly interrupted.
I'm really torn on AMP to be honest. I'm morally and technically opposed to it, but... the experience is often dramatically better than what I would have gotten without it.
The top-level post of this thread argues that the problem AMP solves is that it makes it impossible to add bloat, and that the mere possibility of adding bloat to your website means that it'll happen.
Now, I don't know whether that's true, but supposing that it is, the post you're responding to raises an important point. If AMP doesn't inherently limit bloat, maybe it's only a matter of time before AMP pages are all bloated crap.
> Turn off your ad blocker, go to a news site, watch the network tab and despair.
Or don't, and you'd never have the problem in the first place. The problem with bloat IMO is that clients at all support it. Thankfully there are a lot of browser extensions for generous content blocking that can make the user experience passable.
Sadly, Chrome doesn't support extensions for mobile. Firefox does, which is the (only) reason I use it but that's obviously not going to be around forever. If the choice is ever surf with ads/trackers or don't surf I'll be not surfing.
You are implying that ads are a mandatory part of life when we know that they aren't. I agree about the analytics part, though I would not use Google for that either. For me, AMP is pretty useless, and it is another way for Google to hijack the web.
> The most important feature of AMP is that it makes it impossible to add the kind of bloated crap that causes poor performance on something like a large media company website.
Exactly. Nothing stops you from following them in the regular web.
AMP is used mostly to get higher visibility on Google. Imagine if there was a third-party AMP caching website. Would anyone make an AMP version for them? No way. They do it only because Google will show them above their competitors in search results.
You missed GP's point. They were saying that the devs generally didn't need to be convinced; it's the business owners who did. Since AMP permits none of the bloatware, the business can't steamroll its demands over the engineers.
Yes, in that sense AMP is a business contract rather than a technical innovation: "Follow these rules and we'll tell users that we think your site is fast, and maybe prioritise it in search results".
That's why you can't have a fast news site without it: news site owners won't follow the rules needed to make their site fast unless they get something in return other than speed. Only AMP gives them that.
If Google based its lightning bolt etc. on measured page speed instead, then yes, you might not need AMP. But they don't.
> The most important feature of AMP is that it makes it impossible to add the kind of bloated crap that causes poor performance on something like a large media company website.
When I load this AMP page [1] with an empty cache, Dev Tools show that 3.3 MB of files are loaded [2]. Also, the page contains lots of errors [3]. This page contains ads, analytics and even a video, which you mentioned in your comment.
After loading, an HTTP/2 request to cloudapi.imrworldwide.com is sent every 10 seconds, even when the page is in the background.
AMP looks more like an internal Google standard for integrating websites into search results. The spec contains a lot of restrictions like these: you MUST include a 12,000-line JS script from Google ( https://cdn.ampproject.org/v0.js ) in every AMP page, and you MAY use only components made by Google, and even a custom JS template engine made by Google. If you are an ad network, then you have to negotiate with Google, your competitor, to be included in the whitelist. There are lots of whitelists: a list of sites that can provide fonts, a list of sites that can add widgets to AMP pages.
Here is one example of such conditions [4]:
> In order to qualify for inclusion, an extended component that integrates a third-party service must generally meet the notability requirements of the English Wikipedia, and is in common use in Internet publishing. As a rough rule of thumb, it should be used or requested by 5% of the top 10,000 websites as noted on builtwith.com, or already integrated into oEmbed.
The truth probably is that most websites would fail if users had to pay for them. In my opinion, that's a good thing. Ideally, anything that serves an important interest would remain (because users have a genuine interest in further use) while the useless stuff that's just tricking you into paying attention would disappear (because no one actually gives a crap about vapid celebrity gossip unless you really bait them into it). The sooner this can happen, the better.
The biggest sites that are ad-dependent are in the digital media space, and they are undergoing a culling like never before. Mic.com and Mashable recently sold at distress sale prices, and Buzzfeed and Vice are also struggling to meet revenue targets.
> The truth probably is that most websites would fail if users had to pay for them.
I agree with you here, but the implication of this is that if the user isn't willing to pay for it, and ads aren't enough to sustain the business, maybe they should not be in that particular business.
Or, they could try to rise to the challenge and build a brand people are willing to pay for. The Guardian in the UK recently hit a million paying digital subscribers, so it's certainly not impossible. Most sites do not need a million subscribers to make payroll.
I talked about this elsewhere. It's a form of vetting we don't have anymore: an educated editor somewhere making sure what we read is worthwhile (not censorship, which isn't the same thing).
I refreshed a few times, scrolling to the bottom of the page each time. According to Chrome dev tools even without a video playing between 6 MB and 21 MB was transferred just for me to scroll to the bottom of the page.
True. The problem with AMP, though, is that it doesn't push content sites to do anything about their actual sites. Content/news sites produce a stripped-down version for AMP in the hope of getting into the search carousel, and that's it.
That's why I view other initiatives by Google, such as https://web.dev [1], as laughable at best.
AMP's job is to lock publishers in a Google-controlled system where you jump when Google tells you to jump, or else you lose your spot at the top of Google's search results.
Oh. And ads. It's always about the ads.
They don't even attempt to hide it, really. Right there on AMP's page (emphasis mine):
--- quote ---
The project enables the creation of websites and ads that are...
What AMP Provides
Higher Performance and Engagement
....
Flexibility and Results
Publishers and advertisers can decide how to present their content and what technology vendors to use, all while maintaining and improving key performance indicators.
...
More than 1.5B AMP pages have been published to date and 100+ leading analytics, ad tech and CMS providers support the AMP format.
--- end quote ---
People do. And people who understand what the web and AMP are know that AMP isn't the solution.
By the time you've loaded CNN's AMP page, you'll have loaded at least 4 MB: ~2 MB of AMP prefetching done on Google's search results page and another ~2.2 MB on the AMP page itself.
If Google adhered to its own page performance standards, as outlined here: https://developers.google.com/web/fundamentals/performance/w..., CNN's AMP page would be demoted in search results. However, since it's AMP, it gets put front and center in the search carousel.
Oh, that was with an adblocker on. But aside from that, many of the websites I go to don't have AMP and load quickly. Hacker News is a pretty good example; I try to keep my own website relatively lightweight as well, just like many other personal blog sites.
To reduce weight issues please replace all your dogs with cows. Research shows that cows are much lighter than your average whale and for this reason should always be used.
But you do need "PWA" if you want your site to work offline. PWAs were never really sold as something that would make your site's initial load time faster. Not really sure how the author got that impression.
It's almost like saying "you don't need jQuery to make your site faster" - to those who understand the purpose of jQuery, it's a bit of a nonsensical statement.
Moreover, PWAs are not meant to replace websites. They are meant to replace mobile apps. Almost all business apps can be replaced by PWAs without the rigmarole of creating an app and publishing it on the Apple and Google stores.
I absolutely agree with you there. I bet if we ran a compression algorithm on all the apps in the app store, we would achieve a 90%+ reduction, on account of all the repeated boilerplate code!
Can't you just use service workers in a regular webpage? Seems like you can. That would give you an offline-capable website while still being a plain webpage. https://caniuse.com/#feat=serviceworkers
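Right; a minimal sketch (file names hypothetical): one line on the page, plus a small sw.js that serves from the cache and falls back to the network:

    // On the page:
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }

    // sw.js:
    const CACHE = 'site-v1';
    self.addEventListener('install', (e) => {
      e.waitUntil(caches.open(CACHE).then((c) => c.addAll(['/', '/style.css'])));
    });
    self.addEventListener('fetch', (e) => {
      e.respondWith(
        caches.match(e.request).then((hit) => hit || fetch(e.request))
      );
    });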
There seem to be a few misconceptions around what is and what isn't a PWA. A PWA (in my opinion) is simply a website that makes use of certain browser features (e.g. service workers, push notifications, manifests). It's a bad definition, with a lot of grey areas, but it is what it is (and let's face it, definitions in web development are often hard to pin down).
For example, we built https://usebx.com/app with plain old HTML, JavaScript and CSS. Yet, it can be installed on your phone, send you push notifications and even works offline. So I would categorise it as a PWA rather than a website.
My point is, if you're using service workers, you're more a PWA than a standard website.
That would make it a Progressive Web App, by most definitions. It's kind of a vague term, but IMO anything using service workers to serve content offline is a PWA.
(but I also avoid using the term for exactly this reason)
So the thing that makes a site a PWA in your eyes is the "add $app to your homescreen" prompt? Because I would view any site that works offline, beyond http/s caching, to be a PWA.
The definition of a PWA is a page that uses features like add-to-homescreen, service workers, or push notifications. It doesn't have to use them, and the browser doesn't have to support them. A PWA progressively uses such features where they are supported, and that's where the name comes from. If a page requires these things to function, then it's not a PWA.
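In code, the "progressive" part is just feature detection; each capability is used where it exists and silently skipped where it doesn't (a sketch; the UI helper is hypothetical):

    let deferredInstallPrompt;

    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js'); // offline support
    }
    if ('Notification' in window && Notification.permission === 'default') {
      showEnableNotificationsButton(); // hypothetical: only offer push UI where the API exists
    }
    window.addEventListener('beforeinstallprompt', (e) => {
      e.preventDefault();            // stash the prompt and offer
      deferredInstallPrompt = e;     // "add to home screen" in our own UI
    });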
The author mentions that, and says there is a use case there, but suggests that most PWAs have no use when offline, so the offline support is pointless in lots of cases.
"Initial load time" means "download, install, and run time". With PWAs (just like native apps) the "download and install" part only happens once. The article is irrelevant to PWAs.
> But you do need "PWA" if you want your site to work offline.
The author addresses this point by citing examples of the most popular apps, where offline mode doesn't make sense because their whole point is to facilitate online communication (booking, chatting, dating, social interaction, etc.).
But in those examples, offline mode still makes a lot of sense. For example, looking up bookings when the signal is bad (when on a train), or past multimedia messages within your chats, etc.
Well yeah but the author is missing the point: these tools allow your site to load faster than the theoretical limit without them.
AMP buys you Google's CDN and prefetching on Search.
And PWA buys you a zero-distance cache hit on your second visit, the ability to prefetch unvisited resources, and the ability to update the cache in the background.
Not that these gains amount to much on a 15 MB site, but they are tools for making an existing slow site appear faster, which is what everyone who isn't a dev really wants.
If it's just a website, then I agree that is indeed excessive. If, however, it's 15 MB of useful functionality (perhaps an app that lets you edit and encode video), then in my eyes it's acceptable, and probably a good candidate to be a PWA.
I know a lot of people hate the idea of doing everything in the browser, but seriously, it's the one ubiquitous platform that we have, which even competing entities have agreed to standardise. I personally think it will be the most popular app delivery mechanism in the next few years, even on mobile devices.
> If, however, it's 15 MB of useful functionality (perhaps an app that lets you edit and encode video), then in my eyes it's acceptable
You either have an abundance of high speed internet access, or benefit from internet usage that isn't metered by the amount of data consumed, or both.
Functionality or not, heavy web experiences leave out a lot of people who would otherwise be willing and capable users of %product% if it weren't for the excessive payloads they're often expected to consume just to be a user when that payload could quite trivially be reduced and still accomplish the goal of delivering content and information to the end user.
So here's my question: forget about people who hate doing everything in the browser, what about the users who just don't have access to Cable+ internet speeds, want to use the product, but perhaps can't because they're downloading 15, 20, 30MB websites just to fill out a form element because the browser is all this user is familiar with?
Not everyone on the internet is a regular of HackerNews who groks native applications versus web apps. At what point do we start focusing on the content and asking ourselves "Do I REALLY need this animation library just to indicate 'Start here' is where they should be interacting with my webform"?
To answer that rhetorical question for you: probably, quite likely not.
> but perhaps can't because they're downloading 15, 20, 30MB websites just to fill out a form element?
Parent post (in fact, the segment you quoted) literally says that it's okay when it is useful functionality, i.e. not when it's just to fill out a form.
Sadly, these users aren't going to get to experience a decent chunk of the internet anyway, since video, GIFs, and images frequently weigh significantly more when put together.
> Parent post (in fact, the segment you quoted) literally says that it's okay when it is useful functionality, i.e. not when it's just to fill out a form.
Since when did forms become useless, and what web-based video editor are you (or the original replier) pointing to that uses AMP?
We're talking about two completely separate types of websites. Nobody said forms are useless. Web-based video editing was a hypothetical example. AMP wasn't really part of this discussion; presumably, lots of JS + PWA functionality in the hypothetical video editing example was the referenced useful functionality.
Take Google Sheets as an example. I don't know what the total download size is for the JS (certainly not 15 MB), but it's an example of a JS-heavy website where you wouldn't want a serve -> submit form -> serve cycle for every user input.
> We're talking about two completely separate types of websites. Nobody said forms are useless. Web-based video editing was a hypothetical example. AMP wasn't really part of this discussion
This entire discussion started from an opinionated blog post that directly discusses AMP and its performance implications for the web.
So yes, it is. You can't just invent a hypothetical that makes it easier to justify exceptions to the very topic of the blog post we're talking about here. Of course exceptional things like resource-heavy, full-featured web apps are going to be outliers among lightweight web apps. I highly doubt those were the use cases Google had in mind when AMP was released, so why are we even considering exigent outliers here?
I agree that the initial hit to download 15MB is a lot. But if we think of native apps, users would need to download them from the app store as well. If their internet is slow, both experiences are bad.
Having re-read your post, you seem to have misunderstood my point. You seem to be arguing that keeping things lightweight is good. I agree with you there! But remember, for certain applications, this is not always possible. In my hypothetical video editing app, 15MB could well be just the core logic - it's not 1MB of logic and 14MB of bells and whistles!
Okay, but why are video editing applications even being pushed to the web? Why is this hypothetical even relevant to a discussion of AMP? I understand that more and more former desktop workloads are being pushed to web clients, but full-fledged video editors?
As I mentioned to another commenter: can you show me an example of a full-featured, web-based video editor using AMP?
I'm not sure if I can buy in that this is the example to rest upon here.
Sure, but this is just as achievable on the desktop as a platform as it is on the web, if admittedly more challenging.
I'm not sure it's a particularly convincing argument for heavy web-app payloads, particularly when we're talking about simply delivering content, to cite a single hypothetical codebase as the reason a web application that would benefit from AMP (which was the central conceit of the blog post we're all discussing today) needs to be 15 MB heavy.
Well-formed markup and smart content compression could accomplish the same thing.
Hmm, I don't think I ever mentioned AMP in my posts. I know the blog post talked about it, but I was focusing on the PWA bit, as it's something I'm fairly experienced with. When the guy mentioned 15MB, I was simply saying that it's not an outlandish size if you're delivering a PWA which provides 15MB worth of solid functionality.
I guess you got your wires crossed, as AMP is not something I was talking about - my focus was specifically on PWAs and the fact that it's not a problem if their payloads are large.
But yes, AMP (read: glorified CDN) wouldn't solve much if you have 15MB to download each time you visit a site (aside: you wouldn't need to do this with PWAs because of caching).
Such a web application would essentially be an installed program!
Look, I know it's weird, but there is finally a multi-implementation, cross-platform runtime that people actually want to use. I would be so excited if I could run full Photoshop in my browser on Linux.
Not really - imagine if you're looking to create a fully featured video editor, complete with 3D effects and whatnot. It could well exceed 15 MB to deliver all the required functionality.
This is where the paradigm shift is coming. Your past experiences say it must be a desktop native app, but I believe things like WASM, web workers etc will allow such things to run comfortably in the browser.
I really do believe people will build for the browser more than they will for the desktop because it's a more universal platform.
I can even imagine a future where, for the majority, the only native app you have on your machine is a browser, and everything else is some variant of a PWA. I really feel this will happen very soon.
Exactly! There can never be any reason whatsoever for any site anywhere on the entire World Wide Web to be 15MB. Absolutely not! I mean, I don't even know what the site does, but 15MB can not possibly be justified.
This assumes that Google's CDN is providing a significant advantage in connectivity, when in actuality you'll often have more available bandwidth between, say, OVH and your ISP at peak times, compared to relying on Google's congested peering to certain large ISPs. Amazon S3 can be really painfully slow at peak times too; I often see sub-1 MB/s, while fast.com (Netflix) or OVH can fill the pipe.
When you visit what you expect to be a search result URL, it should take you to the server/site that was indexed. When AMP first launched, it didn't even provide a link to the original site. It took a lot of people like me grousing and complaining before Google's 17 tiers of product managers had enough meetings to even budge on that.
I honestly can’t tell you how so many sites were convinced to implement AMP on their sites. I could wave my hands and say everyone is stupid and just goes along with the latest shiny bullshit but that can’t be but a small part of the actual answer.
Supposedly Google isn't factoring AMP pages into their search rankings yet, but given the positive signal of page speed (a good thing, generally) and their switch to a mobile-first index, a natural side effect is that AMP pages will be ranked higher by default.
If Google were somehow automatically transforming sites you’d never hear the end of people complaining, especially content owners. Instead they convinced engineers and content creators to tie the rope on the hanging post themselves, jump off, and be happy about it.
I can’t speak too negatively of PWA. The ability to build offline web apps is actually pretty useful in some situations.
Overall I still firmly believe that a good search engine should deliver the most relevant links or content for what is being queried. Should a bad QA site get ranked higher than a high quality one just because the former delivers mobile-first AMP pages and the latter has a slightly bigger payload but has much more relevant content?
Performance today is not a priority for web developers. It's not even in the top ten priorities. Performance is just not considered unless it is so bad that a site looks like it isn't working.
We can change this at any point we like. We won't. It will continue to get worse.
That kind of thinking is why people STILL wait on computers to do things that could be SO MUCH faster.
Why the hell does it take an application like Photoshop 30 fucking seconds to simply launch and get to where it can accept input on my 32-core, dual-Xeon desktop at work with 192 GB of RAM and a very fast SSD? Because people punt the fucking problem into the future and assume everything will just continually get faster.
AMP is not about making your site fast but about making sure that sites aggregated in e.g. Google News are all predictably fast by constraining what they can do. It's not about making the individual sites fast but about making sure that none of them are needlessly slow, by imposing constraints.
Similarly, PWA is also not about performance but about adding metadata that phones and browsers can use to treat the site like an app, e.g. by giving it an icon in an app folder. You couldn't do this before; now you can. Before, people were doing silly things like packaging up websites as apps and putting them in the app store just so they could have their own cute little icon in somebody's app folder. Now they don't have to do that, and install/discovery is a bit smoother as well (easier discovery, fewer steps, fewer users lost).
>... but making sure that none of them are needlessly slow by imposing constraints.
Anecdotally, AMP sites always load a bit slower for me. The page will sit blank for a few seconds before finally dumping all of the content at once, as opposed to loading text immediately while it takes a moment to load the rest of the content that has a higher file-size.
Without AMP, I can start reading a page before it's done loading. With AMP, especially on a desktop, I'm often stuck staring at a white screen for 15-20 seconds before anything shows up. I often find myself trying to cut the "amp" bit out of the URL to see if I can get to the original page. It's frustrating, and it is a big part of why I'm considering dumping Google as a search engine.
You're correct - Google search results don't return AMP links on desktops, but that doesn't stop me from stumbling upon them.
If you come across an article you want to share from your mobile device, and it's an AMP link, it's the AMP URL that gets shared if you post it on sites like HN, FB or Reddit. If I'm browsing those pages from my desktop, clicking that link almost never redirects me to the "original" page, but loads the AMP page in the desktop browser. Sometimes getting around that is as easy as cutting out "/amp/" from the URL, other times it's a totally different URL and I'm stuck staring at a blank page for 30 seconds before it either loads, I just give up or I Google the headline/title and try to find the original page.
Forced AMP results on mobile devices also make it difficult to get to certain pages when I want them. Take Eater's 38 lists[1] that they put together. If I'm on my phone and want to find a restaurant from that list in a particular ___location (say if I'm out of the house and want restaurants near me), then the AMP result returns a page that doesn't include the map, only a list, which isn't very helpful. In order to get to the map, I either need to go to Eater.com and manually find it, or use something like Bing to search for it. I know that the purpose of AMP is to not load the map in an effort to increase speed, but in that instance the map is exactly what I want and AMP makes it harder to get to.
I'm not saying AMP doesn't have its benefits, but its inconveniences have outweighed them, in my experience.
No idea how AMP actually works; I always thought one component of it was acting like a CDN for common frameworks.
PWAs are much easier to tune for global performance, though, if you have a tight budget. The app doesn't have to talk much to a backend server, and (down)loading can be tuned by using a CDN.
Wikipedia is a static-delivery website. Of course it's easy if all you're delivering is static content. Try that with an ecommerce site where dynamic information is sent from, sometimes, multiple sources. It's better to load it once and have a reactive site call just the necessary APIs afterwards. Downloading everything over and over again in ecommerce is a pain for users, unless you can afford databases and servers distributed globally to counter the added latency. PWA is the next step for that. But, yeah, OK, for Wikipedia you can be as static as you want.
Unless you have a very active community commenting and interacting around the items, you can definitely go almost entirely static, even in ecommerce.
Regenerate pages after changes happen and you'll be fine. You'll still need dynamic search most likely, but even that can be pre-generated for many terms. (Your admin / content management part needs to be dynamic of course, but that's not customer-visible.)
If your catalogue is so large that constant regeneration is impractical, you can generate on-demand and cache long-term a few layers above for anything not requested recently.
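A sketch of that last variant (the handler and renderProductPage are hypothetical, assuming an Express-style app): render on first request, then let the cache layers above hold it long-term:

    app.get('/product/:id', async (req, res) => {
      const html = await renderProductPage(req.params.id); // hypothetical renderer
      // Let the CDN/reverse proxy serve this for a day, revalidating
      // in the background; a product update just purges this one URL.
      res.set('Cache-Control', 'public, s-maxage=86400, stale-while-revalidate=3600');
      res.send(html);
    });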
> You'll still need dynamic search most likely, but even those can be pre-generated for many terms.
Have you ever worked on a high-scale e-commerce site? I have, and what you are talking about is impractical and pretty much impossible.
Products have multiple variants, photos for each variant. Various companion products that depend on what you already selected. Pricing options that can depend on quantities or packages. And search? Spend 5 minutes on any serious e-commerce search system and there is no such thing as “common search terms.” Of course there are common searches, but on any non-trivial e-commerce system, you have potentially thousands of distinct common searches.
I was one of the original engineers for https://www.matalan.co.uk and you can't just "regenerate" pages after changes. You can regenerate the cache for images or product descriptions, but e-commerce isn't like a printed catalog. We put exceptional engineering into that application, and trivializing that sort of application as if it were some kind of blog demonstrates a lack of experience in building something that serves millions of visitors per month, visitors who all have different paths based on what they want to buy.
At Tock we have been obsessed with page load performance since the beginning, and I agree with the author. We avoided PWA mostly due to its broken behavior. Oftentimes we are faster than loading the same restaurant's page on Google search.
Our challenge has been that we have to load a lot of images, so we spent a lot of time optimizing everything around that, from TLS 1.3 to the CDN to every part of our stack.
Also, ours is not a static page: it has dynamic content, plus GA and FB tracking for our restaurants, and we make it work by correctly prioritizing important rendering elements over others.
We have also spent time reducing the initial JS parse size by chunking out our ever-growing JS bundle, and we constantly test on slow devices with 2G/3G profiles to emulate bad network conditions. We have learned a lot in the process; probably good material for a blog post.
Sort of off-topic, but there seems to be a bug with the way search results work. If I click on "search", it shows me an option for "<my city name> nearby", but if I click on that, I get results for a city that has the same name, but is in a completely different area.
edit: this also applies to the "near you" cards on the home page.
True critique, but progressive loading is not supported by HTML5 alone. I have been following up with srcset to support proper lazy-loading behavior, and the day it gets broad support you will see it on our site.
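For what it's worth, a sketch of what that markup can look like (image paths hypothetical; the native loading="lazy" attribute only kicks in on browsers that ship it):

    <img src="/img/dish-800.jpg"
         srcset="/img/dish-400.jpg 400w,
                 /img/dish-800.jpg 800w,
                 /img/dish-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 50vw"
         loading="lazy"
         alt="Signature dish">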
It's not the hype that has publishers moving to AMP. It's not wanting to be dropped out of the carousel. Shame if somethin' were to happen to yer website traffic, friend.
> By the way, since you first opened this article my ServiceWorker has downloaded XXX Mb of useless data in background. I hope you are on WiFi :)
But the code doesn't really download anything. It stores the initial datetime when you first load the page, diffs it against the current time, and converts that to "bytes" every second.
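Something along these lines (a sketch of the joke; the rate and the #counter element are made up):

    const start = Date.now();
    const FAKE_BYTES_PER_SECOND = 50 * 1024; // arbitrary made-up rate

    setInterval(() => {
      // No network at all: "downloaded" is just elapsed time in disguise
      const elapsed = (Date.now() - start) / 1000;
      const mb = (elapsed * FAKE_BYTES_PER_SECOND / 1048576).toFixed(1);
      document.querySelector('#counter').textContent =
        `my ServiceWorker has downloaded ${mb} Mb of useless data`;
    }, 1000);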
I agree on AMP, you can just pack less JS and be done with it. But AMP is REALLY about rich search results online.
Google creates more interactive search results for AMP pages than non-AMP versions.
As for PWAs, I'm honestly a fan, up to a certain limit. I really like Gatsby.js, especially because the rendered pages work with JavaScript disabled.
Django/Rails/$FRAMEWORK with Turbolinks.js is also great, for the same reason I like PWAs. If you aren't a fan of the JS ecosystem (totally fair), Turbolinks is a great way to get some "snappiness".
Microdata is for rich search results. AMP is for instant (not fast, as the author mistakenly stated) loading from a search results page or content aggregator.
True! But Google incorporates more microdata from AMP pages into search results (as in more fields), or at least their Rich Search Results documentation makes it out to be that way.
I may be wrong; this was just my understanding from reading all the Google search guides last week.
Google is steadily undermining the foundational principles of the web that made the platform so popular and allowed Google to do searching in the first place.
The scary part is that they don't just offer an "alternative" technology like ActiveX, Flash, Applets, Silverlight and so on. They are influencing core web standards.
It's true that with static sites you're better off without any JS. It's pretty much how I've built https://discoverdev.io
It's powered by a home-grown static site generator framework, hosted on Netlify. It works fairly fast even in random corners of India. But then I found my own generator limiting and surveyed existing frameworks, and decided to go with Gatsby, a React-based SSG. To my surprise, Gatsby+Netlify worked much faster and smoother than my JS-free solution! It's currently running at https://beta.discoverdev.io
Worth noting that Gatsby generates pure html+css during build time, and page renders fine without JS enabled. Seems to be a good balance between DevX and UX. :)
Minor thing I noticed on your site: a clean load requests around 425 KB of data, but your /assets/img/dd-logo.ico alone weighs 362 KB. The same thing happens on your new site. It might be worth looking into; reducing its size should make your site even faster.
I too struggle to like AMP. But the rhetoric seems a little short on imagination in regard to offline. Are you really saying your handheld computer can't be useful without a worldwide network connection?
The article is somewhat prejudiced. There are still lots of apps that work perfectly in offline mode: reading books you already downloaded, watching movies you already downloaded, or whatever immutable content you already have.
The author chose to make examples of apps that demand communication with the outside world, which makes the whole point weak.
Though we can find a lot of obvious over-engineering cases, Tonsky should have known better: a reference-style website à la Wikipedia and a feature-rich application like Airbnb impose very different trade-offs between initial/overall loading times and functionality.
An interesting case: someone who firmly says "BS" three times in every paragraph can be BS too.
Wikipedia has editors, varying levels of admin permissions, different types of pages, templates, comments, edits, etc. I'm not sure their data model is any less complicated than Airbnb.
But regardless of the complexity of the database schema, neither one of them is really a "feature-rich application" in the way that, say, Photoshop is a feature-rich application.
At the end of the day, I think they're both just CRUD apps.
AMP is embarrassingly buggy on iOS. The address bar doesn't hide properly, rotation breaks things, the "original link" often triggers the address bar, find highlighting is busted, and Reader Mode often fails.
AMP links are often slower than Reader Mode, because they're blocked on slow font loading.
Author has good points about PWAs. I love PWAs because I hate giving native apps access to my phone's data but the emphasis on offline support for PWAs does seem off point. It has Web in the name. You probably need internet access!
The headline is analogous to “Antitrust laws are not needed to prevent price fixing.” Treated as a statement of fact, it’s technically true, but it’s really a statement of policy opinion, whose weaknesses are clear.
Ahem... to make a website faster... make a website! Not webcrap with tons of "frameworks", tons of stuff loaded from third parties, etc. You'll get skyrocketing speed.
I tried to look at an example AMP page here [1]. When loading with an empty cache, it loads 5.5 MB of files. I don't see where the optimizations are.
It uses a custom font. Custom fonts are absolutely unnecessary for a news page; they only delay the moment when text becomes visible. Standard Windows fonts are of good quality and don't need a replacement (which often renders very poorly on Windows XP).
It uses a lot of scripts. Scripts block some browsers while they are parsed and executed.
It has SVG images embedded into the page. If you want the page to load faster, you should move images into external files.
AMP is directly linked to Google and cannot be used without it. Please look at the requirements for AMP HTML documents: [2]
It contains a lot of code that is unnecessary for most websites. For example, an XHR interceptor for "some trusted AMP viewers" [3] is included in the script, as is a cryptographic library for calculating SHA hashes [4].
You can use only JS components approved by Google. So, for example, Google might make a custom component for Facebook but not for QQ or Mixi; Google decides which widgets can appear on an AMP page. If you are an ad network, you have to negotiate with Google, your competitor, to be added. If the US government imposes sanctions on some foreign site, Google will have to comply.
It is clearly a technology made for integrating news articles into a Google page (and judging by the limitations in the spec, Google might be planning a non-webview native renderer for AMP pages). Don't believe them when they pretend that it can be used independently as a standard, or that it was made to accelerate the web.
This guy doesn't know what AMP does. AMP is a subset of HTML that can be validated safe to prerender. That means instant page loads for the user, not merely fast.
It is the only available solution to a real problem. If you have a better solution, by all means, be a hero. Meanwhile, the rest of the search and content aggregation industry has gone with AMP, which works today.
I'm talking about link aggregators and search engines, not publishers. They could have come up with another solution, but they decided to use AMP, which provides the same benefits to them as it provides to Google.
Calling AMP pages "instant" loads is disingenuous because Google preloads them before they are opened. Google is free to do this for my website too, but they don't.
> Google is free to do this for my website too, but they don't.
As my GGGGGP post said, that is the entire point of AMP. It can be validated safe to preload, which is not possible for HTML pages in general. So no, Google, Bing, Pinterest, Baidu, LinkedIn, and other sites that preload AMP pages are not free to do this for your website unless it is written in AMP.
> It can be validated safe to preload, which is not possible for HTML pages in general.
What does this even mean? My browser already preloads websites in the background, AMP or not. So assuming I don't go through Google, you could call going to my website via the address bar "instant" as well.
> User isn't deanonymized to the publisher, publisher analytics and ads don't register page views, page can be trivially transformed to lazy load below the fold, etc.
This is all explained in the AMP documentation itself. There was even an article about it on HN not too long ago (https://medium.com/@pbakaus/why-amp-caches-exist-cd7938da245...). People like the author who criticize AMP without knowing what problem it solves are willfully ignorant.
This all works on the open web without signing an agreement with the traffic source, unlike Facebook's and Apple's proprietary instant article solutions. That's why it has been adopted by so many other search engines and link aggregators.
They are not "instant". When you visit Google search, Google preloads AMP (and doesn't preload anything else). The only reason AMP is fast is that Google has a search monopoly and preloads its own non-standard format ahead of time, effectively penalising everyone else.
It's a subset of HTML that Google has full control over and whose only purpose is to lock publishers into a model that benefits Google, and Google alone.
BTW. This "subset" is invalid HTML. Just so you know.
If it benefits Google and Google alone, why do Microsoft, Pinterest, Twitter, LinkedIn, WordPress, Baidu, and Weibo (among others) also prerender AMP pages? The reality is that every link aggregator and search engine wants instant results, and your conspiracy theory falls apart as soon as you understand that.
What part of AMP is invalid HTML? It is a competitor to Facebook instant articles and Apple News that works on the open web.
> Because publishers jumped on bandwagon and started deploying AMP pages en masse.
Your conspiracy theory might make sense if the other search engines and link aggregators didn't actively encourage their link targets to also implement AMP or if they tried to extend AMP for their own purposes and were blocked by Google. Neither is the case.
It's not a conspiracy theory. It's the sad reality of the web today. You would know that if you spent more than two seconds paying attention to how AMP was developed, has been developing, and is still being developed, and to how Google often holds the web hostage because of its sheer numbers (search near-monopoly, browser share, etc.).
Your first link to a "criticism" of AMP makes the same mistake as the author of the article, which is the whole point of my thread. It doesn't matter how heavy an AMP page is if it prerenders everything above the fold. It will appear to load faster than your text-only static page every time -- instant instead of fast. The other criticisms are also from people who do not understand this.
Meanwhile, Google's competitors have willingly signed onto AMP without any criticism of the development of the standard. Your conspiracy theory doesn't hold water.
You keep willfully not knowing what AMP is and how it works. This conversation is becoming moot. I do hope you find two seconds to educate yourself. Perhaps read the other links, which you clearly didn't.
You're the one who doesn't know the problem it solves, as many times as I've explained it. It loads pages instantly, not merely quickly. That's the whole point of the technology and the whole reason so many link aggregators use it.
I showed you that your first link didn't understand this either. Your last link isn't even about the same technology. I didn't bother clicking the other because that would surely also be a waste of time.
I have patiently explained this to you over and over because I can't stand people spreading conspiracy theories. That's how the US ended up in the mess it's in.
Fine. I clicked link two. That author made the same mistake as link 1, the author of this article, and you. Do you think that repeating something that is wrong will make it right?