Yeah - this was arguably mostly my fault (sorry!).
There's quite a bit of history here, but the abbreviated version is that the dialog element was originally added as a replacement for window.alert(), and there were libraries polyfilling dialog that were surprisingly widely used.
The mechanism by which dialog was originally positioned was relatively complex and slightly hacky (magic values for the insets).
Changing the behaviour basically meant that we had to add "overflow:auto", and some form of "max-height"/"max-width" to ensure that the content within the dialog was actually reachable.
However this was pushed back against, as it had to go in a specification and nobody had implemented it ("-webkit-fill-available" would have been an acceptable substitute in Blink, but other browsers didn't have it working the same way yet).
Hence the calc() variant. (Primarily because of "box-sizing:content-box" being the default, and pre-existing border/padding styles on dialog that we didn't want to touch).
One thing to keep in mind is that any change to web behaviour is under some time pressure. If you leave something too long, sites will start relying on the previous behaviour - so it would arguably have been worse not to have done anything.
It may still be possible to change to the stretch variant; however, some sites are likely relying on the extra "space" around dialogs now, and would be mad if we changed it again. This might still be a net positive, though, given how much the current behaviour confuses web developers (a future-looking cost) vs. the pain (cost) of breaking existing sites.
I wonder, do you think "hidden" behaviors like this discourage lazier devs from using HTML standards, so they just use divs for everything and reimplement default functionality with 3rd-party UI libs?
To add to this (from what I remember reading the study at the time), it was basically racing a car around a track (think lots of tyre squealing around corners), and it found that the tyre wear was very high. I haven't seen a study which actually simulates somewhat "normal" driving (presumably because the wear is so low when driving like that it's difficult to measure). It also didn't factor in a bunch of other stuff - tyres are affected by lots of things: compound, temperature, pressure, surface abrasion, etc. There might be an effect! But this study was very bad.
A normal driving test was (famously [1]) done to measure the effect of axle load (vehicle weight per axle) on road wear. This is where we got the “fourth power law” of road wear.
I think as a starting point, I would expect that tire wear should remain roughly in proportion to road wear, given the same tires on each vehicle. From this, I would expect car makers to use larger, thicker, heavier tires on heavier vehicles in order to compensate.
Thus I think we shouldn’t accept claims about the replacement lifecycle of tires without knowing these details of their construction. If an electric car is twice the mass of an older ICE car then the fourth power rule would predict a 16-fold increase in road wear. I would then expect the tires on the EV to have 16 times more rubber in order to last the same duration, unless they’re made of some newer compounds which are more durable.
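Spelling out that arithmetic (just the fourth power law applied to a doubled per-axle load, assuming it carries over to tires as described above):

```latex
% Fourth power law: wear scales with the fourth power of the per-axle load.
% Doubling the load L -> 2L gives
\[
  \frac{\mathrm{wear}(2L)}{\mathrm{wear}(L)} = \left(\frac{2L}{L}\right)^{4} = 2^{4} = 16
\]
```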
> I think as a starting point, I would expect that tire wear should remain roughly in proportion to road wear
IMO this would be a suspect assumption to make w/o data to back it up. You've got two dissimilar materials interacting (in very different modes). E.g. rolling a metal ball bearing on a wood surface would obviously cause the wood to degrade far more than the ball bearing (and even a wooden ball rolling on a wood surface would wear substantially less due to the mode difference).
(If I had to guess, the road has higher wear because the surface is under tensile stress around the contact patch of the tyre, which causes most of the damage - but this is just armchair engineering at this stage.)
The point I was trying to make is that you can't assume that road wear scales the same way as tyre wear (I was assuming the same material fwiw, just different loads). They are being worn under very different modes/scenarios.
Given that roads and tires experience the same forces under driving conditions (Newton’s Third Law guarantees this) I think it’s a reasonable prior assumption to start from. There are of course other environmental factors that accelerate road wear (rain and water erosion, freeze thaw expansion) but those conditions were not included in the study that produced the Fourth Power Law.
This assumes that road wear and tyre wear are caused by the same mechanism, but tyre wear is presumably caused predominantly by friction, whereas road wear is at least in part (and perhaps predominantly) caused by compression, creating pot holes by causing the earth underneath to move.
It's quite noticeable (to me at least) that the areas with highest wear are usually places that have heavy vehicles (buses and lorries) braking and accelerating. Traffic lights and bus stops will often have bumps/dips that seem to demonstrate a shearing force between the road surface layers.
I have always assumed it was just due to the uneven distribution of weight by vehicles often being stationary in the same spot, so those points are subject to more compression forces than the surrounding road surface.
If it were acceleration and deceleration I'd have expected the effect to be less localised, as braking and accelerating happen over a much longer distance.
But, I have no actual idea. It’s just probably not friction…
I'm surprised there isn't more info available to compare vehicles/tyre wear. There's plenty of real-world driving going on, so you'd think someone would have measured tyre replacement frequencies.
Tires and driving styles vary a lot, so it would be really hard to come up with some aggregate numbers. Maybe you could compare the same very popular tire on different cars that still have enough data points to average out differing factors?
I reckon that the different driving styles could be partially dealt with by considering different classes of car. E.g. fast "prestige" cars will tend to share certain driving styles and be quite different to cheaper run-arounds.
Ideally, it'd be great if insurance companies made a big push for some kind of standardised "black box" that drivers could fit. As well as providing extra stats, I think they'd be great for detecting illnesses. A simple driving-ability stat could be produced from typical acceleration/braking timings - smoother is better, as it shows good anticipation by the driver. If their stat starts decreasing more than expected due to ageing etc., then the driver could be alerted that they should consult a doctor, as it could be cognitive decline, eye problems, etc.
You can create a new thread via `new Worker`, but using a worker requires a separate file and lots of serialisation code, as you communicate via `postMessage`. TC39 module expressions would help, but there hasn't been a lot of movement recently. https://github.com/tc39/proposal-module-expressions?tab=read...
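For illustration, a minimal sketch of the boilerplate being described (the file names, message shape, and toy computation are made up for the example):

```ts
// worker.ts - today this has to live in its own file.
self.onmessage = (event: MessageEvent<{ numbers: number[] }>) => {
  const sum = event.data.numbers.reduce((a, b) => a + b, 0);
  // Results go back through postMessage / structured clone as well.
  (self as any).postMessage({ sum });
};

// main.ts
const worker = new Worker(new URL("./worker.ts", import.meta.url), { type: "module" });
worker.onmessage = (event: MessageEvent<{ sum: number }>) => console.log(event.data.sum);
// Everything crossing the boundary is structured-cloned, hence the
// serialisation code for anything non-trivial.
worker.postMessage({ numbers: [1, 2, 3] });
```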
There's some progress on that proposal, just happening elsewhere. https://github.com/tc39/proposal-esm-phase-imports defines machinery for importing a ModuleSource object which you can instantiate as a Worker, and once that's done module expressions would just be syntax which evaluates to a ModuleSource rather than needing a separate import.
Basically you don't want any common factors in the teeth counts, otherwise the same pairs of teeth will engage at some fixed frequency. If there is a small imperfection on a tooth it'll wear out quicker. E.g. a set with 5:14 teeth will theoretically wear better than a set with 5:15 teeth.
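A tiny sketch of that arithmetic (helper names are made up):

```ts
// Number of distinct teeth on the other gear that each tooth meets before
// the engagement pattern repeats: the other count divided by the common factor.
const gcd = (a: number, b: number): number => (b === 0 ? a : gcd(b, a % b));
const distinctPartners = (a: number, b: number): number => b / gcd(a, b);

console.log(distinctPartners(5, 15)); // 3  - each tooth keeps meeting the same 3 teeth
console.log(distinctPartners(5, 14)); // 14 - wear from any imperfection gets spread over every tooth
```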
It's cool to learn the term for this. I have used the same principle before with bicycle gearing, so the chain would wear more evenly. Not sure if it makes much difference, but it was worth doing anyway.
It depends on the project, but most large-scale projects require a test (or tests) for the fix, and will block submission unless it's provided.
These types of projects undergo constant code change/refactoring/re-architecture, etc. If you don't add a test for your specific issue, there is a non-trivial chance that it'd be broken again in some future release.
It's somewhat worse if an issue gets fixed and then broken again vs. it being broken the whole time. E.g. with the former, users have likely started to rely on the fixed behaviour, and will then experience disruption when it breaks again.
Part of the tension with building masonry on top of grid is that they work in fundamentally different ways.
With grid, you place everything in the grid first (e.g. an item goes in col:2, row:3), then you size the grid.
With masonry, ideally you want to size the tracks first, then place items in those tracks.
The first Firefox implementation (and the spec at that stage) basically said you don't consider any masonry items for sizing tracks (except for the first row, and some other rules - it's complex). This meant that it was trivial to create items that overflow their tracks.
The specification at the moment asks you to consider placing every item in every possible track. This has quadratic performance, O(N_tracks * N_items), in the worst (and somewhat common) case. Quadratic performance is bad[1] and we don't really have this in other layout algorithms.
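(A hand-wavy sketch of the shape of that sizing step, not the actual spec algorithm - the types and `measureAndContribute` are made-up names:)

```ts
// Hypothetical item shape, just for the sketch.
interface MasonryItem { span: number; }

function sizeTracksSketch(items: MasonryItem[], trackCount: number): void {
  for (const item of items) {                                        // N_items iterations
    for (let start = 0; start <= trackCount - item.span; start++) {  // ~N_tracks iterations
      // The item has to be measured as if placed at `start`, because its
      // contribution depends on which tracks it would span.
      measureAndContribute(item, start);
    }
  }
  // => O(N_tracks * N_items) measurement work for a single container.
}

// Stub standing in for the (expensive) per-placement measurement.
function measureAndContribute(item: MasonryItem, startTrack: number): void {}
```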
With nesting introduced the performance goes (sub-)exponential, which is really poor, even if you have a fast CPU.
One may argue that these cases aren't common, but folks always test the boundaries with layout modes in CSS - so things need to be fast by default.
Note: in grid, items size themselves differently depending on which tracks you place them in, which is why you need to consider every possible position.
Masonry potentially needs a different algorithm for sizing tracks to mitigate these problems (the blog post doesn't go into these issues in sufficient detail). There may have been a version of grid sizing which didn't have this positional dependence, but that ship has sailed.
Quadratic performance is a bit of an exaggeration. It's not O(N_items^2). It's N_tracks x N_items, and basically nobody has N_tracks ≈ N_items. Practically speaking, the upper limit is closer to N_items^1.5, because N_tracks is unlikely to go over sqrt(N_items) in cases where N is large. And almost all of those layouts will have repetitive track sizing patterns, so they can be optimized down to a much smaller effective N_tracks, approaching O(N_items) overall.
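(Spelling that bound out, under the assumption that the track count stays below the square root of the item count:)

```latex
\[
  N_{\mathrm{tracks}} \times N_{\mathrm{items}}
  \;\lesssim\; \sqrt{N_{\mathrm{items}}} \times N_{\mathrm{items}}
  \;=\; N_{\mathrm{items}}^{1.5}
\]
```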
Why does this require checking every item in every column? It looks like the layout algorithm greedily picks the column with the least total height so far, every time it adds an item.
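Something like this is what I have in mind (a rough sketch with made-up names):

```ts
// Greedy "shortest column" placement: each item goes into whichever column
// is currently shortest, which only needs the running column heights.
function placeGreedily(itemHeights: number[], columnCount: number): number[][] {
  const columns: number[][] = Array.from({ length: columnCount }, () => []);
  const heights = new Array<number>(columnCount).fill(0);
  itemHeights.forEach((itemHeight, index) => {
    const shortest = heights.indexOf(Math.min(...heights));
    columns[shortest].push(index);
    heights[shortest] += itemHeight;
  });
  return columns;
}
```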
The examples in the article seem to have CSS that directly sizes the columns, albeit with some flexibility such as being able to change the number of columns to better fit the width. It seems like the item widths depend on the column widths, rather than the other way around (which seems like it'd be circular). What's an example where the column widths depend on the items?
You first need to decide how big each of the three columns is going to be, then you want to place the items in each of the columns.
Depending on the non-fixed column type (there are different rules for each type), you want to ensure their constraints are satisfied (typically that they are big enough to encompass the items).
Thanks, that helps; I'd thought about variable numbers of columns based on the width of what they're contained in, but hadn't thought about the possibility of wanting to choose the number of columns to fit their content better. Mostly because I was imagining content like images that can be scaled or text that can be wrapped, rather than fixed-size content or content with a minimum necessary size.
Grid has rich functionality for auto-placement of items, and one of the arguments for masonry being part of grid is that you can easily mix the two positioning approaches.
Due to the nature of web engine workloads, migrating objects to being GC'd isn't performance-negative (as most people would expect). With care it can often end up performance-positive.
There are a few tricks that Oilpan can apply. Concurrent tracing helps a lot (e.g. instead of incrementing/decrementing refs, you can trace on a different thread); in addition, when destroying objects, the destructors typically become trivial, meaning the object can just be dropped from memory. Both of these free up main-thread time. (The tradeoff with concurrent tracing is that you need atomic barriers when assigning pointers, which needs care.)
This is on top of the safety improvements you gain from being GC'd vs. smart pointers, etc.
One major tradeoff is that UAF bugs become more difficult to fix, as you are just accessing objects which "should" be dead.
> Are you referring to access through a raw pointer after ownership has been dropped and then garbage collection is non deterministic?
No - basically objects sometimes have some notion of when they are "destroyed", e.g. an Element detached from the DOM tree[1]. Other parts of the codebase might have references to these objects, and previously accessing them after they were destroyed would be a UAF. Now it's just a bug. This is good! It's not a security bug anymore! However, it's much harder to determine what is happening, as it isn't a hard crash.
> the UAF-ish bugs are still bugs, and code poking at a GC-preserved object that the rest of the code doesn't really expect to still be alive might itself be pretty fraught
For the LayoutObject hierarchy, the team doing that conversion added a NOT_DESTROYED() macro for this reason. It's gross, but was the least-worst option.
As an aside - the performance of Oilpan is broadly net-positive now if you avoid some of the pitfalls (the largest being that a write into a Member<> requires a write barrier). E.g. things become trivially destructible, there's no ref incrementing/decrementing, etc.
This proposal extends this mechanism to be more general.