It can be a bit difficult, particularly now that some phones are getting more demanding about re-authorising before a payment will go through. Tap, try to get the fingerprint scanner working, tap again is a much less fluid procedure than tap-and-go.
The position thing is just something you get used to. There aren't that many reader models in active use, and most of them are pretty good about marking where the NFC reader is these days.
> There's still plenty of hardware-specific APIs, you still debug assembly when something crashes, you still optimize databases for specific storage technologies and multimedia transcoders for specific CPU architectures...
You might, maybe, but an increasing proportion of developers:
- Don't have access to the assembly to debug it
- Don't even know what storage tech their database is sitting on
- Don't know or even control what CPU architecture their code is running on.
My job is debugging and performance profiling other people's code, but the vast majority of that is looking at query plans. If I'm really stumped, I'll look at the C++, but I've never once had to drop down to assembly for it.
This makes sense to me. When I optimize, the most significant gains I find are algorithmic. Whether it's removing an extra call, tweaking a data structure, or just using a library that operates closer to the silicon. I rarely need to go to assembly or even a lower-level language to get acceptable performance. The only exception is occasionally getting into the architecture specifics of a GPU. At this point, optimizing compilers are excellent and probably have more architecture details baked into them than I will ever know. Thank you, compiler programmers!
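To make that concrete, here's a minimal sketch (the function names and the duplicate-check task are invented for illustration) of the kind of algorithmic win that no amount of assembly tuning of the inner loop would match:

```cpp
#include <unordered_set>
#include <vector>

// O(n^2): a linear scan nested inside a loop.
bool has_duplicate_slow(const std::vector<int>& v) {
    for (size_t i = 0; i < v.size(); ++i)
        for (size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n): a hash set turns the inner scan into a single lookup.
// This is the "algorithmic" fix; no compiler could make it for you.
bool has_duplicate_fast(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    for (int x : v)
        if (!seen.insert(x).second)  // insert fails => value already seen
            return true;
    return false;
}
```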
> At this point, optimizing compilers are excellent
the only people that say this are people who don't work on compilers. ask anyone that actually does and they'll tell you most compilers are pretty mediocre (tend to miss a lot of optimization opportunities), some compilers are horrendous, and a few are good in a small ___domain (matmul).
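a concrete example of the kind of opportunity compilers routinely miss (a hedged C++ sketch, not gpu asm): without an aliasing guarantee the compiler has to assume the two pointers might overlap, so it typically emits a runtime overlap check or falls back to scalar code.

```cpp
// The compiler must assume dst and src may alias, so it often guards this
// loop with a runtime overlap check, or in messier cases won't vectorize it.
void scale(float* dst, const float* src, int n, float k) {
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * k;
}

// With the no-alias promise (a GCC/Clang/MSVC extension),
// the loop vectorizes unconditionally.
void scale_restrict(float* __restrict dst, const float* __restrict src,
                    int n, float k) {
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * k;
}
```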
It's more that the God of Moore's Law has given us so many transistors that we are essentially always I/O blocked, so it effectively doesn't matter how good our assembly is for all but the most specialized of applications. Good assembly, bad assembly, whatever; the point is that your thread is almost always going to be blocked waiting for I/O (disk, network, human input) rather than on something that a fancy loop optimization enabling better branch prediction can fix.
> It's more that the God of Moore's Law have given us so many transistors that we are essentially always I/O blocked
this is again just more brash confidence without experience. you're wrong. this is a post about GPUs, so i'll tell you that as a GPU compiler engineer i spend my entire (work) day staring at and thinking about asm in order to affect register pressure, ilp, load/store efficiency, etc.
> rather than something that a fancy optimization of the loop
a fancy loop optimization (pipelining) can fix some problems (load/store efficiency) but create other problems (register pressure). the fundamental fact is that the NFL (no free lunch) theorem applies here fully: you cannot optimize for all programs uniformly.
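here's a toy illustration of that trade-off in plain C++ (a sketch, not gpu asm, but the same principle): splitting one dependency chain into four buys ilp, and the price is four live accumulators instead of one, i.e. register pressure.

```cpp
// one accumulator: a single serial dependency chain, minimal register use.
float sum_serial(const float* x, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += x[i];
    return s;
}

// four accumulators: four independent chains the hardware can overlap (ilp),
// at the cost of four live values. scale this idea up far enough and you
// start spilling registers -- exactly the trade-off described above.
float sum_unrolled(const float* x, int n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i]; s1 += x[i + 1]; s2 += x[i + 2]; s3 += x[i + 3];
    }
    for (; i < n; ++i) s0 += x[i];
    return (s0 + s1) + (s2 + s3);
}
```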
I just want to second this. Some of my close friends are PL people working on compilers. I was in HPC before coming to ML, having written a fair number of CUDA kernels, done a lot of parallelism, and dealt with plenty of I/O.
While yes, I/O is often the bound on computation, I'd be shy to really say that in the consumer space when we aren't installing flash buffers, performing in situ processing, or even pre-fetching. Hell, in many programs I barely even see any caching! TBH, most stuff can greatly benefit from asynchronous and/or parallel operations. Yeah, I/O is an issue, but I really would not call anything I/O bound until you've actually gotten into parallelism and optimized the code. And even then, not until you've applied that to your I/O operations! There is just so much optimization that a compiler can never do, and so much optimization that a compiler won't do unless you give it tons of hints (all that "inline", "const", and such you see in C. Not to mention the hell that is template metaprogramming). Things you could never get out of an untyped language like Python, no matter how much of the backend is written in C.
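To sketch the asynchronous-operations point (the fetch functions are hypothetical stand-ins for blocking disk/network reads): overlapping two independent waits means total latency is roughly the max of the two rather than their sum, before any compiler gets involved.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// Hypothetical stand-ins for blocking I/O calls.
std::string fetch_config() {
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    return "config";
}
std::string fetch_dataset() {
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    return "dataset";
}

int main() {
    // Both "reads" are in flight at once: ~200 ms of wall time here,
    // versus ~400 ms if the two calls were made back to back.
    auto cfg  = std::async(std::launch::async, fetch_config);
    auto data = std::async(std::launch::async, fetch_dataset);
    std::cout << cfg.get() << " " << data.get() << "\n";
    return 0;
}
```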
That said, GPU programming is fucking hard. Godspeed you madman, and thank you for your service.
> At this point, optimizing compilers are excellent and probably have more architecture details baked into them than I will ever know.
While modern compilers are great, you'd be surprised by the seemingly obvious optimizations compilers can't do, either because of language semantics or because the code transformations would be infeasible to detect.
I type versions of functions into godbolt all the time, and it's very interesting to see what code is/isn't equivalent after -O3 passes.
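A classic experiment of this kind (the function is illustrative, but the behavior is standard for GCC and Clang): because IEEE float addition isn't associative, language semantics forbid the compiler from reordering this reduction into independent partial sums, so it can't vectorize it at -O3 alone.

```cpp
// At -O3 this compiles to a strictly serial chain of float adds:
// reassociating (a + b) + c into a + (b + c) can change the rounded result,
// so the compiler must preserve source order. Add -ffast-math (or
// -fassociative-math) and the very same loop vectorizes -- the "missed"
// optimization was a language-semantics rule, not a compiler limitation.
float sum(const float* x, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i)
        s += x[i];
    return s;
}
```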
The need to expose SSE instructions to systems languages shows that compilers are not good at translating straightforward code into optimal machine code. And using SSE properly often allows you to speed up code several times over.
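As a hedged sketch of what "using SSE properly" can look like (the function names are invented for illustration): the intrinsics version processes four floats per instruction where the scalar loop handles one.

```cpp
#include <immintrin.h>

// Scalar baseline: one multiply-add per iteration.
void saxpy_scalar(float* y, const float* x, float a, int n) {
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
}

// SSE version: four lanes per iteration. Assumes n is a multiple of 4
// for brevity; a real version would also handle the remainder.
void saxpy_sse(float* y, const float* x, float a, int n) {
    __m128 va = _mm_set1_ps(a);
    for (int i = 0; i < n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(vy, _mm_mul_ps(va, vx)));
    }
}
```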
Melbourne is easily the worst city in the country for this. Most of the tech sector is in the very large enterprise space led by the banks, and as a result it's who you know, and whether you went to Melbourne Grammar or Geelong Grammar, that determines which company you work for once you reach a certain level. Sydney is better just because there's more smaller stuff going on, and because CBA is better on tech than NAB and ANZ combined. (I hate Sydney otherwise and am based out of Melbourne.)
Some places in Melbourne get real work done, even in the data sector. They're hard to find, but they exist.
People look at me funny when I say this, but it's true.
I work in performance - a space where we're thinking about threading, parallelism and the like a lot - and I often say "I want to hire people who play with trains". What I mean is "I want people who play Factorio", because the concepts and problems are very, very similar. But fewer people know Factorio, so I say trains instead.
I think I know why it's enjoyable even though it's so close to work, too. It's the _feedback_. Factorio shows you visually where you screwed up, and what's moving slowly. In actual work the time and frustration is usually in finding it.
I feel like it's not the RAT you'll notice from inside the plane; it will be the silence from the engines. That, combined with at least a momentary flicker of the lighting (I'm not sure a RAT on a 787 will run cabin lighting, but I doubt it), and you'll know.
> Former Australian foreign minister Alexander Downer says "most people" in Australia do not see Assange as a journalist.
The Downer family has a recent history of misjudging what "most people" in significant chunks of the Australian public think. Chunks, for example, like the electorate they're trying to be members of parliament in.
I've been taking this approach more recently. Something shipped is 100% better than nothing at all, even if it's 25% worse than if I whole-assed it.
It's been working, in that leaving room for others to step in has worked well, and those others have filled the gaps with new and interesting ideas that I likely would not have had even if I'd whole-assed it. At the same time, those people weren't equipped to whole-ass something from nothing.
Not shipping internationally is a pretty frustrating decision. This device is almost exactly what I want, but I can't buy it because you... can't put it in a DHL box with a different country's name on it?
Congrats on the launch, but this is kind of a kick in the nuts.
Mild edit: I checked a UK address, and it turns out you can put it in a DHL box with a different country's name on it; you just can't print the word Australia on the box. Nice. If this is because the device can't stand up to kangaroo rides, can I have one if I assure you that I won't take it with me when I hop to the shops?