I'm confused why this is still an unsolved problem. A simple cryptographic challenge with pre-shared keys + button press ought to make key fobs perfectly secure for all practical purposes. Is there something I'm missing here?
It requires two-way communication, which makes the system more complex, with all the negatives that come with it.
Cars are not very secure by nature: they have easy-to-break glass windows and are made of relatively lightweight materials. The key system just needs to match that level of security, and AFAIK attacks on the key fob are uncommon compared to other, less subtle techniques.
The more complex and sensitive "PKES" system already has a challenge-response scheme, according to the article, but that doesn't help against relay attacks.
You have to be able to get new keys made without having an original to read. A database of (VIN, key) pairs would be too big a target, and it would have to be shared with dealers anyway so they could program new ones. I'm not a security expert, but it seems like protecting against replay attacks by adding a time-sensitive value would really shorten battery life on the fob.
Key distribution is (as always) an important, but solvable problem. There are some tradeoffs involving centralization vs cost of replacement, but those apply generally, not just in this particular case.
As for replay attacks, that's where the button press comes in (like on a hardware security token) -- the key only responds to challenges within a second or so of a button press and the car sets a similar timeout for validity.
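A minimal sketch of what I have in mind (hypothetical names; HMAC over a random challenge standing in for whatever cipher a real fob would use):

    import hmac, hashlib, os, time

    PRESHARED_KEY = os.urandom(32)   # provisioned into car and fob at the factory
    WINDOW = 1.0                     # seconds of validity around the button press

    def fob_respond(challenge, button_pressed_at):
        # Fob only answers shortly after a physical button press.
        if time.monotonic() - button_pressed_at > WINDOW:
            return None
        return hmac.new(PRESHARED_KEY, challenge, hashlib.sha256).digest()

    def car_verify(challenge, issued_at, response):
        # Car rejects stale responses and anything failing the MAC check.
        if response is None or time.monotonic() - issued_at > WINDOW:
            return False
        expected = hmac.new(PRESHARED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge, issued_at = os.urandom(16), time.monotonic()
    print(car_verify(challenge, issued_at, fob_respond(challenge, time.monotonic())))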
And how will that protect you from a relay attack? I can steal your car just as easily while you are at the mall, encryption or not. I don't care what's in the signal, just that I capture it, forward it to my other device near your car, and kaboom! Your car is unlocked.
The key fob would always return a response the exact same amount of time after receiving the challenge, so couldn't the car learn the distance by measuring the time of flight?
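Back-of-envelope (my numbers, not from the article): radio propagates at about 30 cm per nanosecond, so the extra path through a relay shows up directly in the round-trip time, assuming the fob's processing delay really is fixed:

    # Rough numbers for the time-of-flight idea: extra relay distance
    # adds measurable nanoseconds to the round trip.
    C = 3e8  # speed of light, m/s

    def round_trip_ns(distance_m):
        return 2 * distance_m / C * 1e9

    print(round_trip_ns(2))    # fob in your pocket, ~2 m: ~13 ns
    print(round_trip_ns(150))  # relayed from across the mall, ~150 m: ~1000 ns

The catch is that the car's receiver then needs nanosecond-scale timing resolution, which is presumably part of the complexity and price tag raised below.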
This adds complexity, and with complexity comes a price tag: it would make the key fob more expensive. It also raises the power requirements, which in turn imposes new requirements on the battery.
Re price tag: you can buy a smartphone for $100. Surely it is possible to mass-produce cheap key fobs with send/receive capability and a tiny crypto module.
Re power: Key fobs already do some form of crypto and broadcast. Adding reception capabilities ought not to be that power hungry.
I've got an even better solution: Picture a piece of metal, cut in a specific way as to allow metal "tumblers" inside a small cylinder to turn, engaging and disengaging the locks and/or ignition, whereas other pieces of metal, cut differently, would not allow any motion. I know, it sounds far out there, but we should give it a shot.
That doesn't sound very secure at all. I've heard there are little known techniques called "lockpicking" and "rakes" that make such technology practically useless.
From what I can tell, they didn't. The theoretical basis for it is the Linear No-Threshold Model, which is, bluntly put, garbage. The empirical evidence seems to come mostly from so-called case-control studies, which are, bluntly put, garbage.
You conduct a case-control study as follows:

1) Select some number of lung cancer patients and a control group without lung cancer.

2) Try to establish why the cancer patients got cancer and the non-cancer group didn't, by retroactively estimating their smoking habits, radon exposure (from radon maps or retroactive measurements of places they lived at), as well as other factors.

3) Use statistics to try to disentangle suspected radon cases from smoking (90%) and other stuff.

4) Pretend that all this is not totally biased, sampling or otherwise.
This is somewhat inaccurate from a physical perspective. What you say would apply to the attenuation of gamma rays, which is governed by low-probability interactions and is therefore exponential. But alpha particles, being charged and heavy, lose energy continuously through electron interactions and have a relatively fixed range in matter, beyond which the incidence is practically zero.
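For reference, the two textbook behaviors being contrasted here (standard formulas, my notation):

    % Photons: rare, all-or-nothing interactions give exponential
    % attenuation with linear attenuation coefficient mu.
    I(x) = I_0 \, e^{-\mu x}

    % Charged particles (alpha): continuous energy loss via the stopping
    % power -dE/dx gives a nearly fixed range in the continuous-
    % slowing-down approximation, rather than an exponential tail.
    R_{\mathrm{CSDA}} = \int_0^{E_0} \left(-\frac{dE}{dx}\right)^{-1} dE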
No, that is just plain wrong. All attenuation is exponential (except very special stuff like energy resonances at certain particle energies). You just get very different exponents for rarely vs. frequently interacting particles. An alpha particle also carries only twice the charge of a beta particle, which has a far higher (orders of magnitude, not just a factor of 2) penetration depth. The relevant difference is actually the mass and size (i.e. cross-section) of an alpha particle, which makes it interact far more frequently.
There are some more differences that govern the cross-section like conservation of momentum and angular momentum in certain interactions where there are differences between gamma and alpha particles. Or magic number nuclei that are relevant for neutron absorption.
But in general, for a given material and particle, attenuation (number of particles vs. penetration depth) is always, always, always exponential, until you hit a low enough energy that some other mechanism for energy transfer comes into play or becomes unavailable, at which point the exponent changes and you see a change in slope.
Please take a look at slide 11 of this presentation [1]. Stopping power and Bragg peak (slide 30) are also relevant concepts. What you are describing applies to photons and neutrons, but not to charged particles.
Well then, please read what those slides say. Then read what I wrote.
The Bragg peak is about energy deposition, not the number of particles arriving at some point. When the energy of a particle changes as it passes through the medium, the absorption changes: still an exponential curve, just piecewise exponential. Look at slide 11, that cutoff; look at the logistic formula for it and its limit going right: still an exponential.
It would be cooler still if this technique could be used for future VR technology, creating full immersion by targeting all photoreceptors individually. But unfortunately... the optics of the eye do not actually allow individual cones to be fully isolated, as the required spot size would be below the diffraction limit. They discuss this in Fig. 2 and the first section of the results.
Even with a wide-open pupil and perfect adaptive optics, there would be 19% bleedover to nearby cells in high-density areas, while what they achieve in practice is 67% bleedover in a lower-density (off-center) area. This is enough to produce new effects in color perception, but not enough to draw crisp color images on the retina. :(
Honest question: Why would you want to? The average CO2 concentration in your lungs is something like 20,000 ppm, so it doesn't matter much whether ambient CO2 is at 200ppm or 800ppm. Other aspects of air quality do matter, but I'm highly skeptical about the value of capturing CO2 in your house.
Are you not familiar with any of the research linking high CO2 with impaired cognition and alertness? The effect is manifest at levels as low as 800 ppm, certainly at 1000 ppm, and for me personally even at 600 ppm. There is good reason to want the preindustrial level that humans lived with for almost their entire history.
Yes, I've seen such research, but I am highly skeptical, to put it very mildly. This actually being the case is simply implausible from the physiological perspective, considering the very high (and constantly changing) concentration of CO2 in the lungs and in the bloodstream. Early research with Navy submariners was consistent with this -- no detectable impairment below 1% (10,000 ppm) and clear impairment at 3-4%. Later long term studies with 0.5% exposure (5,000 ppm) also found nothing. I put the recently popular research showing cognitive impairment as low as 600 ppm in the same mental niche as vitamin megadosing -- underpowered, low quality research with no basis in human physiology.
That said, CO2 can often serve as an easily measurable proxy for stuffy air, which can contain particulates, formaldehyde and other organic exhalations which may actually have some effect (psychological, if nothing else). But this just calls for ventilation and air filtering, not CO2 scrubbing.
It is me who is skeptical of extremely healthy test subjects in their early 20s being used to define what's healthy for the population at large. As for vitamin dosing, there is most certainly an optimal dose and form of each vitamin, somewhere between its lower and upper limits.
Offices have high ventilation but also high CO2, and the effect shows clearly in a meeting room, putting people to sleep.
If I understand correctly, the Hermite functions are the eigenfunctions of the Fourier Transform and thus all have this property -- with the Gaussian being a special case. But sech(x) is doubly interesting because it is not a Hermite function, though it can be represented as an infinite series thereof. Are there other well-behaved examples of this, or is sech(x) unique in that regard?
As well as the linear combinations (including infinite sums!) of Hermite functions with the same eigenvalue under the Fourier transform. (Those eigenvalues are infinitely degenerate). You could express sech(x) as such a sum.
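To spell it out (standard result, my notation): with a suitably scaled, unitary Fourier transform,

    % Hermite functions are eigenfunctions of the Fourier transform:
    \mathcal{F}[\psi_n] = (-i)^n \, \psi_n, \qquad n = 0, 1, 2, \dots

    % Each eigenvalue in {1, -i, -1, i} is infinitely degenerate, so any
    % convergent sum over one residue class of n mod 4 is again an
    % eigenfunction; sech (suitably scaled) sits in the eigenvalue-1 class:
    \mathcal{F}\Big[\sum_{m} c_m \psi_{4m}\Big] = \sum_{m} c_m \psi_{4m}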
There has to be a link to the harmonic oscillator here. That's the Hamiltonian that's symmetric under exchange of position and momentum, and the Hermite functions are its eigenfunctions.
Indeed, the (quantum) harmonic oscillator Hamiltonian (with suitable scalings) commutes with the Fourier transform. Since the former has the Hermite functions as eigenbasis, the Hermite functions also form an eigenbasis for the latter.
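In symbols (standard fact; scaling chosen so position and momentum appear symmetrically):

    % The scaled oscillator Hamiltonian is symmetric under x <-> p and
    % commutes with the Fourier transform:
    \hat{H} = \tfrac{1}{2}\Big(x^2 - \frac{d^2}{dx^2}\Big), \qquad [\hat{H}, \mathcal{F}] = 0

    % Its spectrum n + 1/2 is nondegenerate, so each Hermite eigenfunction
    % of H is forced to be an eigenfunction of F as well:
    \hat{H}\,\psi_n = \big(n + \tfrac{1}{2}\big)\psi_n \;\Longrightarrow\; \mathcal{F}[\psi_n] = \lambda_n \psi_n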
1) Why is this even on HN? Some guy complains about third wave coffee on his trashy, possibly AI generated, blog. Very interesting.
2) Since people take this seriously for some reason: Fine coffee is neither a hallucination, nor a theater performance, nor a sign of ultimate decadence. Or at least no more so than fine tea and wine. Different producers, roasts and preparation methods give markedly different coffee with a lot of nuance that you can learn to discern and enjoy. Or not -- to each his own.
This submission is not technically inappropriate and flagging it would be an abuse of that feature. It's just very low quality and very surprising that it made #1 on HN.
Jai's perpetual closed beta is such a weird thing... On the one hand, I sort of get that the developers don't want to waste their time and attention on too many random people trying to butt in with their ideas and suggestions. On the other hand, they are thereby wasting the time and attention of all the people who watched the development videos and read the blog posts, and now can do basically nothing with that knowledge other than slowly forget it. (Except for the few who take the ideas and incorporate them into their own languages).
The reality of a project like this is that getting it right (by the creator's standards, no one else's) takes time. Add on top of that that Blow and Thekla are building games with it to dogfood it, which takes time too.
Sadly, there exists a breed of developer that is manipulative, obnoxious, and loves to waste time and denigrate someone building something. Relatively few people are genuinely interested (like the OP) in helping to develop the thing, test builds, etc. Most just want to make contributions for their GitHub profile (assuming OSS) or exorcise their internal demons by projecting their insecurities onto someone else.
From all of the JB content I've seen/read, this is a rough approximation of his position. It's far less stressful to just work on the idea in relative isolation until it's ready (by whatever standard) than to deal with the random chaos of letting anyone and everyone in.
This [1] is worth listening to (suspending cynicism) to get at the "why" (my editorialization, not JB).
Personally, I wish more people working on stuff were like this. It makes me far more likely to adopt it when it is ready, because I can trust that the appropriate time was put into building it.
I get that. But if you want to work in relative isolation, would it be too much to ask to not advertise the project publicly and wax poetic about how productive this (unavailable) language makes you? Having had a considerable interest in Jai in the past, I do feel a little bit cheated :) even though I realize no binding promises have been made.
As well as "a few early presentations" (multiple hour+ conference talks) Jon keeps appearing on podcasts, and of course he's there to talk about this unavailable programming language although sometimes he does also talk about The Witness or Braid.
It's a common thing in programming language design and circles where some people like to form little cults of personality around their project. Curtis Yarvin did that with his Urbit project. V-Lang is another good example. I consider Elm an example as well.
They get a few "true believer" followers, give them special privileges like beta access (this case), special arcane knowledge (see Urbit), or even special standing within the community (also Urbit, although many other languages where the true believers are given authority over community spaces like discord/mailing list/irc etc.).
I don't associate in these spaces because I find the people especially toxic. Usually they are high drama because the focus isn't around technical matters but instead around the cult leader and the drama that surrounds him, defending/attacking his decisions, rationalizing his whims, and toeing the line.
Like this thread, where a large proportion of the discussion is about Blow as a personality rather than the technical merit of his work. He wants it that way; that's not to say his work doesn't have technical merit, but that he'd rather we be talking about him.
One thing I want to add to the other (so far) good responses: they also seem to be building Jai as a means to an end, which is: they are actively developing a game engine with it (to be used for more than one project) and a game, which is already at an advanced stage.
If you consider a small team working on this, developing the language seriously, earnestly, but as a means to an end on the side, I can totally see why they think it may be the best approach to develop the language fully internally. It's an iterative develop-as-you-go approach, you're writing a highly specific opinionated tool for your niche.
So maybe it's best to simply wait until engine + game are done, and they can (depending on the game's success) really devote focus and time on polishing language and compiler up, stabilizing a version 1.0 if you will, and "package" it in an appropriate manner.
Plus: they don't seem to be in the "promote a language for the language's sake" game. It doesn't seem to be about finding the perfect release date, with a shiny mascot + Discord server + full-fledged stdlib + full documentation from day one, then "hiring" redditors and youtubers to spread the word and having an armada of newbie programmers use it to write games. They seem to see it much more as creating a professional tool aimed at professional programmers, particularly in the ___domain of high-performance compiled languages, particularly for games. The people they are targeting will evaluate the language thoroughly when it's out, whether that's in 2019, 2025 or 2028. And whether they are top 10 in some popularity contest or not -- I just don't think they're playing by such metrics. The right people will check it out once it's out, I'm sure. And whether such a language gets used will probably, hopefully even, not depend on finding the most hyped point in time to release it.
Very unlikely. Here's a back-of-the-envelope calculation: The human energy requirement per day is about 9 MJ. This corresponds to about 500g of sugar (or starch), which releases around 750g of CO2. Metabolic activity is reduced at night, so 250g of CO2 is the upper limit for a full night's sleep. At typical temperature and pressure, this is < 0.14 m^3 of CO2. Assuming a very small (20 m^3) and hermetically sealed bedroom, you'll end up with a concentration of 0.7%, or less. Serious physiological studies (with divers and submariners) show that CO2 has a measurable effect starting at about 1% concentration and only becomes pronounced at 3% or so. This is consistent with the fact that exhaled air contains about 4% CO2 during normal breathing and can go much higher (>10%) during breath holds. In summary, sleeping in a stuffy room might give you respiratory problems, but no improved CO2 tolerance.
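The same arithmetic in code, in case anyone wants to poke at the assumptions (all figures as above; 24 L/mol approximates the molar gas volume at room temperature):

    # Back-of-envelope: CO2 exhaled over one night vs. bedroom volume.
    daily_energy_kj = 9000               # ~9 MJ/day human energy requirement
    sugar_g = daily_energy_kj / 17       # ~17 kJ per gram of carbohydrate -> ~530 g
    co2_g = sugar_g * (6 * 44) / 180     # C6H12O6 + 6 O2 -> 6 CO2: ~750-780 g/day
    night_co2_g = co2_g / 3              # 8 h at reduced metabolism: <= ~250 g
    co2_m3 = (night_co2_g / 44) * 0.024  # mol x ~24 L/mol -> ~0.14 m^3
    room_m3 = 20                         # very small, hermetically sealed bedroom
    print(f"{co2_m3 / room_m3:.2%}")     # -> ~0.71%, i.e. the 0.7% above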