This is basically the solution. Light travels about 1 foot in one nanosecond, so the car needs to reject any reply that arrives too late.
I did research in this area a few years ago. Here's a research paper [1] from 1993 that goes into more detail about this type of "distance bounding" solution (i.e. authenticating the received signal only if 1) it is received within a few nanoseconds AND 2) the decrypted received signal contains the previously sent random number) in order to defend against "relay attacks". The paper discloses many variations on this general solution as well.
[1] Brands and Chaum, "Distance-Bounding Protocols", https://link.springer.com/chapter/10.1007/3-540-48285-7_30
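To make the two conditions concrete, here's a rough Python sketch of the check (purely illustrative: send_challenge/receive_reply/decrypt are placeholder names for the car's radio stack, and a real system would do the timing in dedicated hardware, not in software like this):

    import os
    import time

    MAX_ROUND_TRIP_NS = 200  # illustrative bound: ~100 ft each way at ~1 ft/ns

    def authenticate(send_challenge, receive_reply, decrypt):
        nonce = os.urandom(16)                    # freshly generated random number
        start = time.perf_counter_ns()
        send_challenge(nonce)                     # challenge the key fob
        reply = receive_reply()                   # block until a reply arrives
        elapsed = time.perf_counter_ns() - start
        # 1) the reply must arrive within the time bound (a relay adds latency)
        # 2) the decrypted reply must contain the random number we just sent
        return elapsed <= MAX_ROUND_TRIP_NS and nonce in decrypt(reply)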
I realised after writing it that you don't actually need to send the time itself, but it was my first 5-minute stab. Plus it is sort of fun to have the time flying about.
edit - thanks for the link, having a read through.
For me, I use my annual checkup to get data on myself (e.g. lipid and comp panels, hearing test, electrocardiogram, urinalysis) so I have a baseline of what the data looks like when I'm healthy. Everyone has slightly different baselines, so it's nice to have a better picture/time-series of mine.
Edit: I had an idea for an improved sms 2fa, but comments gave persuasive reasons why google authenticator was better. Thanks for the comments!
The idea is basically a 3FA system where the bank sends you a one-time 6-digit number. You then have to translate that number using a user-seeded cryptographic hash function. This secret function is your third factor, which translates the received SMS code into the value you'll input at login.
Analysis: Security would increase; but ease-of-use would decrease, especially in regards to how a user would reset their password if they lose both their password and their program that calculates the cryptographic hash.
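As a rough sketch of what such a user-seeded translation could look like (the HMAC construction and names here are my own illustration, not part of the original idea):

    import hashlib
    import hmac

    def translate_code(sms_code, user_secret):
        # Keyed hash of the SMS code under the user's secret seed
        digest = hmac.new(user_secret, sms_code.encode(), hashlib.sha256).digest()
        # HOTP-style truncation down to a 6-digit value to type in at login
        return "%06d" % (int.from_bytes(digest[:4], "big") % 1000000)

    print(translate_code("492817", b"my-secret-seed"))  # prints a 6-digit code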
2FA is already a hassle for users. Now you want to make them do math too? This is not a solution. Just don't use SMS at all. Google Authenticator is a better solution than yours.
You make a good point about ease-of-use. I agree a phone app is much easier to use with a smartphone. However, people with flip phones couldn't install such an app. You might then argue the demographic with flip phones would either use an RSA device or not have 2FA enabled at all - which seems like a valid point.
Security-wise, having a secret user math function seems more secure than the Google app. I can give reasons why if needed.
Seems vulnerable to phishing. The attacker already uses phishing to get the account number, password, and phone number; now they just have to send a fake 2FA message and observe how the number is translated.
Even if the function is lossy, it has very little entropy; it might even be vulnerable to brute forcing...
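For instance, if the secret function were something as simple as a digit-wise shift, a single phished challenge/response pair would pin it down. A toy sketch (the observed pair is invented):

    def shift_digits(code, k):
        return "".join(str((int(c) + k) % 10) for c in code)

    observed_sms, observed_reply = "492817", "503928"  # invented phished pair
    print([k for k in range(10)
           if shift_digits(observed_sms, k) == observed_reply])  # [1]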
I agree with the other poster, Google Authenticator looks like a better solution.
First, you'd need to use an app and something actually secure to combine the password (that's what you're proposing: a second password that mutates the token) and the 2FA token -- if the password were a simple algorithm like you're suggesting, attackers could guess it a good proportion of the time. This is a good example of why you (or I) shouldn't try to invent security measures; leave it to professionals.
Second, the regular passwords had already been compromised on these accounts. Presumably, at the time they phished the regular password, they could have phished the special 2FA password as well. It also means that 2FA could no longer be used as a password reset mechanism -- because you'd need to have another password to use it. You've essentially made it 3FA.
Edit: part of my comment is corrected by comment below - Thanks openasocket!
Another comment about the content of this article:
Three quarters of the way down the wiki page there is code for "adding foreign language" to the code. The options are to add code comments in Arabic/Chinese/Russian/Korean/Farsi. My gut reaction is that the purpose of this added language is to obfuscate the true source of the code - i.e. the code has Chinese comments in it, so it must be from China. Ahh. I guess this makes sense to do. The only problem now is that the Chinese/Russian/Farsi/etc. characters they included in their code are now public. (Obviously the CIA will now change the foreign-language words they insert.)
I'd posit that if someone had an X-year-old (e.g. X=7) copy of some malware, and the malware had these specific foreign-language comments as shown in the article, there's a good possibility the source of the malware is the US government.
This is for obfuscating string constants; the foreign languages included are a red herring. The reason for this is that nontrivial code often has string constants in it, and the string contents are stored in the ELF/PE file in a manner that makes them trivial to extract. Since these strings often reveal a lot about the malware (e.g. a string constant "Your computer has been infected with ransomware. Please deposit %d bitcoins to address %s"), antivirus signatures often use them to detect specific kinds of malware, and reverse engineers find them useful in determining what a binary does. This framework scrambles the string contents (using techniques like XOR-ing every character against a random key) and injects some code into the executable so that the strings are unscrambled on startup. They just have foreign languages in the example to demonstrate that this framework correctly handles Unicode.
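To illustrate the XOR technique described above (a minimal sketch; the key and string are invented, not taken from the leaked framework):

    def xor_scramble(data, key):
        # XOR every byte against a one-byte key; applying it twice restores
        # the original, which is how the strings come back at startup
        return bytes(b ^ key for b in data)

    plain = "Your computer has been infected".encode("utf-8")  # handles Unicode too
    scrambled = xor_scramble(plain, 0x5A)                 # what sits in the binary
    print(xor_scramble(scrambled, 0x5A).decode("utf-8"))  # restored at startup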
Analysts never use the language of the code comments for attribution, because such things are trivial to forge.
Considering that debug symbols, comments in code, and Cyrillic characters in the metadata of files are being used as solid evidence that Russia hacked the DNC, I'd say it's probably still a useful tool.
Source? I've read the stuff CrowdStrike and Mandiant have put out, and they mentioned none of those as evidence. Just binary analysis and network indicators from what I've seen.
Thanks for this insight! I'll edit my comment to credit you, but I won't delete it since someone might have the same thought process as me.
My comment:
So I see now (thanks to you) that it is just showing test cases to demonstrate that these scrambling techniques work with foreign languages. However, why would the US gov need to make sure that this program can successfully obfuscate Unicode strings in Chinese/Russian/Arabic/Farsi?
My gut reaction: while code comments would be trivial to forge, it appears the US gov is still using foreign-language strings in some way - maybe having just one string constant originally in a foreign language that is then obfuscated/scrambled (such as by XOR-ing every char against a random key).
Just FYI: those Chinese characters are really, really rarely used in any writing. In fact, anyone with Chinese reading comprehension will tell you those are gibberish words and none of them make any sense.
Hi, it looks like there is a factual dispute about the linked article. I think I might be able to add some value to this conversation (but that's of course for you to decide).
It appears the parent poster is arguing Google did ban [all payday loans] while you are arguing that the article says Google did not ban [all payday loans], but instead only banned [loans with interest rates >=36%].
My viewpoint: The article says both points, i.e. Google banned [all payday loans] and Google also banned [all loans (i.e. of the non-payday variety) with interest rates >=36%]
Evidence / Recitation from article:
"...In addition to the broad payday loan ad ban, Google will not display ads from lenders who charge annual interest rates of 36 percent or more in the United States. The same standards will apply to sites that serve as middlemen who connect distressed borrowers to those lenders..."
"...Google announced Wednesday that it will ban all payday loan ads from its site..."
As seen in the first recitation above, the words "in addition to" appear to mean that two separate bans have been enacted. The first ban is for any loan classified as a payday loan; that means a payday loan at any interest rate (e.g. 35%, 25%, even 3%) will be banned. The second ban is for a loan of any type where the interest rate is >= 36%.
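Put as a toy predicate (purely my own illustration of the two bans as I read them; the names are invented):

    def ad_banned(is_payday_loan, annual_rate_pct):
        # Ban 1: any payday loan, at any interest rate
        # Ban 2: any loan with an annual interest rate of 36% or more
        return is_payday_loan or annual_rate_pct >= 36

    print(ad_banned(True, 3))    # True  - payday loan, even at 3%
    print(ad_banned(False, 36))  # True  - non-payday loan at 36%
    print(ad_banned(False, 20))  # False - non-payday loan under 36%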
From reading the above report, it seems it was officially only a one-lane road, but the road was big enough to handle two streams of traffic. The Google car could have just stayed in the middle of the road, but instead was hugging the right side of the road in preparation for a right-hand turn. Due to sandbags next to a storm drain, the Google car had to "merge" back into the one-lane road to get around the sandbags. Considering it's still a one-lane road, the bus driver should have yielded to any car that was in front of it. I'd place a majority of the blame on the bus.
What I don't understand: Why doesn't the Google car have video of the accident? Or if they do, does anyone know if they will share a video of it?
I suspect Google will render some of the fancy "what their car sees" video for this incident, like the ones they've used in previous marketing materials. Those fancy videos aren't rendered on the spot, though! We'll probably see it soon.
I doubt this is big enough a deal to be handled in open court. But I'm sure the data could be looked at by a third party, and the visualization could be vetted as a fair representation of the sensor data picked up by the car.