The human eye has roughly 120 million photoreceptors per eye, which works out to an approximate resolution of 120 million pixels. On top of that, our brain constantly processes and integrates the output of our eyes, creating an even higher perceived resolution of about 480 million pixels per eye. Some estimates are even higher.
I'm not saying Apple created a bad product...but I wouldn't expect a mere 23 million pixels to be indistinguishable from reality.
The human eye actually has terrible resolution overall. We only see in high resolution in the fovea, at the very center of the retina -- basically the single point of primary focus. Resolution beyond that drops off dramatically (to 1/7th of foveal acuity and much worse).
I've seen people on sites like Reddit claim that viewers who watch with CC on simply read the captions in their peripheral vision while staying focused on the action, and that just isn't possible in most situations for the reason I mentioned: you actually only see in high resolution in the central ~1 degree of your visual field.
So to come up with such a number, someone took the entire FOV of the human eye and assumed you could point your fovea at each and every angular degree of it simultaneously.
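As a rough illustration of where numbers that size come from (the figures below are one common version of the calculation, not necessarily the one behind the estimate above):

```ts
// Back-of-the-envelope "eye megapixels": take peak foveal acuity and
// pretend it applies across the entire field of view.
const fovDeg = 120;      // assume a ~120° × 120° field of view
const pxPerDeg = 200;    // assume foveal acuity (~0.3 arcmin per pixel) everywhere
const megapixels = (fovDeg * pxPerDeg) ** 2 / 1e6;
console.log(megapixels); // 576 -- the same ballpark as the estimates above
```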
That's neither here nor there, as your point is still valid -- wherever you're focused will have a pixel density below "reality" for your fovea. However, it presents a lot of optimization potential in software (e.g. foveated rendering: no need for fine detail outside the point of focus) and in hardware. There are already devices which use tiny mirrors and optics to basically concentrate the pixels wherever you're looking and render a distorted view to match.
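A minimal sketch of the software side (the renderScale function, the 1° cutoff, and the falloff curve are all illustrative assumptions here, not any shipping headset's algorithm):

```ts
// Foveated-rendering sketch: pick a resolution scale for a screen region
// based on its angular distance from the user's gaze point.
function renderScale(eccentricityDeg: number): number {
  const foveaDeg = 1.0; // high-acuity core, roughly the central degree
  if (eccentricityDeg <= foveaDeg) return 1.0; // full resolution at the gaze point
  // Crude falloff: acuity drops roughly with 1/eccentricity, so resolution
  // needs do too. Clamp so the far periphery still gets rendered at all.
  return Math.max(0.1, foveaDeg / eccentricityDeg);
}

// e.g. something 10° away from where you're looking only needs ~10% resolution
console.log(renderScale(0.5), renderScale(10)); // 1, 0.1
```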
It will definitely not be indistinguishable from reality, but it might be good enough to fool us after a short adjustment period, similar to how even 24fps is enough to read as continuous motion. Of course you will see 60fps as more “fluid”, but only in comparison. Beyond that the differences quickly plateau, and not many people can see any difference between 120fps and higher.
It is probably similar with resolution as well; the question is where Apple stands on this scale.
I can't really imagine it will be a job long term. It's more of a skill you use for some other task, not that different from having good Google search skills as a programmer.
It’s a real job for “raw” transformer models, where the model is just playing “follow the leader” with whatever kind of text you prompt it with – but it depends a lot on how the model is trained and fine-tuned.
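To make that concrete (a toy sketch; the example strings are mine and no particular model or API is assumed): with a raw base model, the prompt itself is the program, so you steer the output by demonstrating the pattern you want continued.

```ts
// With a base (non-instruction-tuned) model, you show the pattern and
// the model plays follow-the-leader on it.
const fewShotPrompt = [
  "English: Good morning. | French: Bonjour.",
  "English: Thank you.    | French: Merci.",
  "English: See you soon. | French:", // a base model will likely continue " À bientôt."
].join("\n");

// An instruction-tuned model would instead accept a direct request like
// "Translate 'See you soon.' into French." with no demonstrations needed.
```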
Not necessarily. If he gets a lot of airtime and is seen as identifying and fixing problems, he could get a real big boost. Most of the problems the Transportation Secretary has to deal with are the fault of aging systems, a limited budget from Congress, etc.
Thus far I’ve really enjoyed https://xstate.js.org/. It’s fantastic for ensuring that your application is in an expected state and for controlling transitions between states. When it’s overkill you can often just drop down to useState for simple stuff.
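For a flavor of what that looks like (a minimal sketch using XState v4's createMachine/interpret API; the fetch machine itself is just a toy example):

```ts
import { createMachine, interpret } from "xstate";

// Every state and transition is declared up front, so the app can never
// wander into a combination you didn't model.
const fetchMachine = createMachine({
  id: "fetch",
  initial: "idle",
  states: {
    idle: { on: { FETCH: "loading" } },
    loading: { on: { RESOLVE: "success", REJECT: "failure" } },
    failure: { on: { RETRY: "loading" } },
    success: { type: "final" },
  },
});

const service = interpret(fetchMachine).start();
service.send({ type: "FETCH" });
console.log(service.state.value); // "loading" -- a stray "RETRY" here is simply ignored
```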
I sometimes prefer a rebase to a merge when pulling in changes from some other branch. A rebase replays your commits one at a time, and dealing with several smaller conflicts can be easier than untangling one big one. This is not always true of course, so I usually start with a regular merge, and if it's sufficiently complex or hard to untangle I give a rebase a try to see if things get easier.
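Roughly that workflow in git terms (branch names are placeholders):

```sh
# try the one-shot merge first
git merge origin/main

# if the combined conflict is too tangled, back out...
git merge --abort

# ...and replay your commits one at a time instead, resolving
# smaller conflicts at each step
git rebase origin/main
```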
For many levels the enemies come from "off screen", so there is no base to attack. There are some levels where you are supposed to send troops to destroy the enemy base, but once you beat that level, if your base comes under attack there, the enemies come from "off screen" again, from what I remember.
>Doesn't it do the opposite though? It gives criminals a map of exactly who they shouldn't rob, lest they risk getting shot. I imagine criminals will go after the softest, easiest targets, not the hardened, well-armed ones.
I'd say thieves generally want to rob when _nobody_ is home, so it doesn't matter whether the residents are armed or not. Knowing that there are guns stored in the house just advertises something specific thieves might be able to steal.