Yes; as someone who both receives open source bug reports and sends them out, I can say that whether a behavior being looked at is a bug depends on what the relevant standards say.
> When integers are divided and the division is inexact, if both operands are positive the result of the / operator is the largest integer less than the algebraic quotient and the result of the % operator is positive. If either operand is negative, whether the result of the / operator is the largest integer less than the algebraic quotient or the smallest integer greater than the algebraic quotient is implementation-defined, as is the sign of the result of the % operator. If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a .
Between C89 and C99, the standard was updated to require integer division to truncate towards 0.
C99 draft standard: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf PDF page 94, physical page 82, “When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded.” with a footnote: “This is often called ‘‘truncation toward zero’’”
I wish % was a modulo instead of a remainder operation; this bug has bitten me more than once.
> If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a
For division:
> The result of the / operator is the quotient from the division of the first operand by the second
Which is remarkably vague. Presumably, they are doing the kind of third-grade math where we get a quotient and a remainder (Euclidean division). So, 1/3 is 0, 2/3 is 0, 3/3 is 1, 4/3 is 1, 5/3 is 1, 6/3 is 2, etc.
With that in mind, let’s look at -1 % 3 (which is 2 when doing a proper modulo operation).
With integers:
(-1 / 3) is 0 (1 / 3 is “0 remainder 1” and multiplied by -1 still gives us 0)
With this in mind, for (a/b)*b + a%b to equal a, the remainder (-1 % 3) needs to give us -1:
(0 * 3) + -1 = -1
So a%b is negative here.
There are some ways to work around this and get a proper modulo.
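For an arbitrary modulus, the usual idiom is a small helper like this (a sketch, not code from the original comment):

/* Proper mathematical modulo for b > 0: the result is always in [0, b). */
int mod(int a, int b) {
    int r = a % b;
    return (r < 0) ? r + b : r;
}

With this, mod(-1, 3) is 2, even though -1 % 3 is -1 under C99’s truncating division.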
One common case is taking something modulo a power of 2 with an unsigned operand. This gives us a proper modulo (20 in the example below), as per the C standard.
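A sketch of the kind of code in question (the exact statement is an assumption; the variable foo and the values match the explanation that follows):

#include <stdio.h>

int main(void) {
    unsigned int foo = 12;     /* unsigned 32-bit int */
    printf("%u\n", -foo % 32); /* prints 20, i.e. -12 mod 32 */
    return 0;
}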
The reason is that, as per PDF page 55 (physical page 43) of that draft C99 standard:
> if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
So, since foo is an unsigned 32-bit int, -foo is -12 + 4294967296, or 4294967284. As per the spec above, 4294967284 % 32 is 20, i.e. -12 mod 32.
This trick only works if we modulo a number by a power of 2.
Yes, I have used this trick when writing “golfed” C code which needs to do a circular rotate; the usual shape of such a rotate is sketched below.
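This is a sketch, not the original golfed code (rotl32 is just an illustrative name); it leans on the same unsigned modulo-by-a-power-of-2 behavior:

#include <stdint.h>

/* Rotate x left by n bits. (-n & 31) relies on unsigned wraparound
   (arithmetic modulo 2^32) to turn -n into 32 - n, and also keeps the
   n == 0 case well defined (no shift by 32). */
uint32_t rotl32(uint32_t x, unsigned n) {
    n &= 31;
    return (x << n) | (x >> (-n & 31));
}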
While I’m a moderate Democrat, and no libertarian, [1] I have a lot of respect for Robby Soave, the author of this piece.
He has been very good about pointing out the abuses of the radical left, such as destroying people’s lives over things like wearing a costume at a 2018 Halloween party that the woke mob decided in 2020 was politically incorrect. [2] In addition, he has supported due process during the #MeToo “believe all women” moral panic, when many on the radical left wanted to get rid of the presumption of innocence and due process. [3]
[1] Certain problems require big government to solve: Police, military, roads, and, yes, health care
I don't know the guy, but he looks very preoccupied with Facebook drama, with inflammatory titles, and with mounting scandal on top of scandal. There are people who would rather see the liberal left burn alive than convince a few people not to vote for them for this boring reason 1, and this boring reason 2, and...
It's fun to troll online, but respect... I wouldn't ask for it, I don't think he should either :D
Without more context, I have to assume you’re talking about the articles he wrote which I linked to in the grandparent.
I am not sure how it is “trolling” to point out that it was unfair to get a woman in her mid-50s fired because she wore a costume in 2018 that the Washington Post retroactively decided was politically incorrect in 2020. It is more like standing up for fairness, compassion, and justice.
I was referring to my own post. That person trolls like I do. I wouldn't ask for respect, and I wouldn't give him any: he's exacerbating drama rather than solving the problem with the CDC, simply by using emotions rather than rational proposals.
What I surprisingly respect is the WaPo article https://www.washingtonpost.com/local/social-issues/blackface... It is full of various points of view, explains the issue clearly, takes no special position itself, and even goes so far as to say that the person felt so bad about her idea of a joke, or the reactions to it, that she left in tears, which made me feel compassion for her...
So maybe it's trolling to say the WaPo calls an old incident politically incorrect? :p As an amateur troll myself, I get why she found it funny to fight fire with fire, and why some people just can't go that many levels of irony; it's sometimes hard to face the reactions to your own trolling. The WaPo itself seems completely innocent of taking any position, and I appreciate, as good trolling, the further attempt to paint it as taking one, when they were really trying to untroll the debate. Just like wearing blackface to criticise defending blackface.
Yes. We're talking about growing time_t from int32_t to int64_t, instead of uint32_t. If you change it to uint32_t behind the scenes, some code will silently fail while compiling OK, because it was not expecting unsigned math.
For open source software, it’s a simple recompile. Most OSS compiles are 64-bit these days, and 64-bit Linux has always had a 64-bit time_t. In the case of compiling a new 32-bit application, -D_TIME_BITS=64 apparently needs to be passed as a compile-time flag.
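As I understand the glibc documentation (this is my reading, not something from the original comment), 64-bit timestamps in a 32-bit build need glibc 2.34 or newer and also require _FILE_OFFSET_BITS=64, so the compile would look something like:

cc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 -o myapp myapp.c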
For binary software, Windows has had a Y2038 compliant proprietary API since Windows NT in the 1990s; most Windows applications use this API so Y2038 is generally not an issue.
The issue only affects a subset of 32-bit binary-only applications where we don’t have the source code any more. Hopefully, any and all applications like that will be decommissioned within the next couple of years.
I think you misread the root comment; it suggests a new function call that no one will use. Apple made that mistake, then they just switched the size and dealt with the fallout.
Looks like I lost the context. In terms of the context:
The issue is that, besides having to rewrite code, it’s not just one function. It’s not only time_64(); we now also need gmtime_64(), strftime_64(), stat_64(), and so on for any and all functions which use timestamps.
The thinking in Linux land is that we won’t have 32-bit applications come 2038 where this matters, because everything will be 64-bit by then.
I test for these things when building open source software.
• 64-bit compiles and applications have a 64-bit time_t in Linux. If your distro and binaries are 64-bit, there isn’t a problem.
• 32-bit compiles and applications still have a 32-bit time_t in mainstream Linux libraries, so that old binaries still run. The timestamp is a rolling one; once Y2038 is hit, the timestamp will be negative, but it will keep being updated and will be off by 2^32 seconds (about 136 years). The workaround is code like this (this is real production code in my Lua fork, Lunacy[1]):
time_t t;
int64_t tt;
if (lua_isnoneornil(L, 1)) /* called without args? */
    t = time(NULL); /* get current time */
else {
    // Lunacy only supports getting the current time
    lua_pushnil(L);
    return 1;
}
if (t < -1) {
    /* 32-bit time_t has wrapped past Y2038; undo the wrap */
    tt = (int64_t)t + 4294967296ULL;
} else {
    tt = (int64_t)t;
}
if (t == (time_t)(-1)) /* time() returns -1 on error */
    lua_pushnil(L);
else
    lua_pushnumber(L, (lua_Number)tt);
return 1;
This gives us accurate timestamps until 2106. If post-2106 compatibility is needed, we can also add 2^32 to timestamps with a positive value once we know the Y2038 rollover has happened.
• Legacy 32-bit Windows applications using the POSIX compatibility layer are not Y2038 compliant. Once Y2038 hits us, time() will always return -1 (in Windows XP, the behavior was to return a number off by 136 years, but this changed in Windows 10). The workaround is to use Windows’ proprietary API for post-Y2038 timestamps. Again, from Lunacy, which has a Win32 port:
/* Convert a Windows "FILETIME" into a Lua number */
uint64_t t;
FILETIME win_time = { 0, 0 };
GetSystemTimeAsFileTime(&win_time);
t = win_time.dwHighDateTime & 0xffffffff;
t <<= 32;
t |= (win_time.dwLowDateTime & 0xffffffff);
t /= 10000000; /* FILETIME counts 100-nanosecond intervals; convert to seconds */
t -= 11644473600LL; /* seconds between the FILETIME epoch (1601) and the Unix epoch (1970) */
lua_pushnumber(L, (lua_Number)t);
return 1;
Now, last time I researched this, there wasn’t an “int64_t time_64bit()”-style system call in the Linux API that would let newly compiled 32-bit binaries be Y2038 compliant without breaking the ABI, by using “time_64bit()” instead of “time()”. This was based on some digging around Stack Overflow just last year, and simple Google searches are still not returning pages saying, in big bold letters, “-D_TIME_BITS=64 when building 32-bit apps”.
[3] To ruin a classic sadistic interview question for sysadmin roles, Linux these days returns both the modification time and the mostly useless “status change” timestamp. Facebook once decided not to move forward because I said that file timestamp was “modification time” and not “status change”; if Facebook is still asking that question, their knowledge is out of date.
The linked LWN article mentions that it’s difficult to put comments in inline LuaTeX macros because those macros remove newlines.
To add a comment to the middle of a line in Lua, use the --[[ comment ]] form:
a = 1 --[[ Set a to 1 ]] print(a)
If we have a comment that has ]] in it, we can do this instead:
a = 1 --[=[ We love [[brackets]] a lot ]=] print(a)
The --[[ (or --[=[ ) form also works for multi-line comments:
a = 1 --[[ Lorem ipsum dolor sit amet, consectetur
adipiscing elit, sed do eiusmod tempor incididunt ut
labore et dolore magna aliqua. Dictum fusce ut placerat
orci nulla pellentesque dignissim. Quam viverra orci
sagittis eu volutpat odio facilisis mauris. ]] print (a)
> it is really a shame that Lua was chosen as the language to embed in TeX, instead of something sane like a Scheme, Common Lisp or at least Python (today, maybe Julia)
I personally find Lua to be a completely sane and easy to program language. But, to each their own.
Let’s look at these options and some others:
Scheme/Lisp: It’s true that Scheme and Lisp-like languages are pretty compact and easy to embed, but the syntax of Lisp is, for most modern programmers, pretty esoteric. Asking today’s programmers to program in a Lisp is, for better or worse, like asking secretaries to use Vi to edit documents.
Python: Python is a big, bulky language and one which is not really suitable for embedding inside another program. See this recent discussion for more on this: https://news.ycombinator.com/item?id=29700947
Julia: A somewhat non-mainstream language, and, like Python, it doesn’t look to be a language designed for embedding the way Lua is.
Forth: Like Lisp dialects, Forth’s completely stack-based syntax is not intuitive for today’s generation of programmers.
Lua needs batteries, which I know is off-putting to some programmers. It turned me off from Lua for a while, until Roblox made me give Lua a second chance. Here is my own set of public ___domain Lua batteries: https://github.com/samboy/LUAstuff (stuff like a Lua copy.deepcopy(), sorted table/dictionary iteration, splitting strings based on a regex, etc.)
This is why I rolled my own cryptography to generate random passwords for each site I use.
There is a tradition here that we tell programmers they must never write cryptographic code, that they will screw it up, and so on. To which I say: Yes, I agree that writing crypto code if you don’t know what you are doing can cause problems. It should not be done unless you know what you are doing; if you think using MD5 in any cryptographic context is secure, you don’t know what you are doing and shouldn’t be writing code using crypto.
If one wishes to write crypto code, the first thing is to realize that it’s very important to choose an algorithm wisely. Use one which has been made by an esteemed cryptographer, has been released to the academic cryptographic community, and has not been broken by said community.
Never try to make your own algorithm. Unless you know the difference between differential cryptanalysis and linear cryptanalysis, you have no business making your own algorithm. Even if you do, you have no business making your own algorithm and using it in production without releasing it to the academic cryptographic community so they can analyze it and see if it’s broken in some way you didn’t see.
It’s not just algorithms. It’s how to use an algorithm. If you don’t understand why it’s a bad idea to use a block cipher in ECB mode, then you probably shouldn’t be writing code that uses a block cipher in live production.
I would not have anyone write crypto code for production use unless they have read Applied Cryptography cover to cover; while somewhat dated (it came out before AES, MD5 getting broken, SHA-3, or post-quantum crypto) it is an excellent introduction to the basics.
That said, I have written my own password generator. I have read Applied Cryptography. I know MD5 is broken. I know to randomly pad plaintext before encrypting it with RSA. I know not to use a block cipher in ECB mode. I have written cryptographic code used in production, and it hasn’t ever been shown to be weak or broken; I have revised the code when purely academic attacks have been made against it: I started transitioning from AES to RadioGatún[32] back in 2007 because, even though the cache timing attacks against AES were only academic, I felt they made it too insecure for me to continue using in production code.
My password generator takes a master password, appends to it the name of the site I am visiting, and then runs the result through a strong cryptographic hash (RadioGatún[32], for the record, which has been around for over 15 years and remains unbroken) for over 500,000 rounds to generate a secure password. Since it’s not an online service, there is no online point of failure for hackers to break into; since it’s not a browser plugin, there is no point of failure where a browser security hole or a JavaScript hack can get at my master password.
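For concreteness, here is a minimal sketch of that kind of scheme in C. This is not the actual generator: SHA-256 via OpenSSL (link with -lcrypto) stands in for RadioGatún[32], and the concatenation order, round count, and hex output are assumptions based on the description above; reading the master password from the command line is also only for illustration.

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(int argc, char **argv) {
    unsigned char digest[SHA256_DIGEST_LENGTH];
    unsigned char tmp[SHA256_DIGEST_LENGTH];
    char input[1024];
    int i;
    if (argc != 3) {
        fprintf(stderr, "Usage: %s {master password} {site name}\n", argv[0]);
        return 1;
    }
    /* Hash (master password || site name)... */
    snprintf(input, sizeof input, "%s%s", argv[1], argv[2]);
    SHA256((const unsigned char *)input, strlen(input), digest);
    /* ...then keep hashing the digest for many rounds, as described above */
    for (i = 0; i < 500000; i++) {
        SHA256(digest, sizeof digest, tmp);
        memcpy(digest, tmp, sizeof tmp);
    }
    /* Use part of the final digest as the per-site password (hex-encoded here) */
    for (i = 0; i < 16; i++)
        printf("%02x", digest[i]);
    puts("");
    return 0;
}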
I have planned a very similar thing. I'll probably have it in my custom keyboard's firmware once I'm done building it.
The idea is one or more "master" password(s) that can be stored in the keyboard's RAM (so you don't need to type them every time, and yet you still never need to input them into your computer at all!), then a short, memorable, site-specific "password" (it could be as simple as the site's name), plus a prefix/postfix that adjusts the output to work around different services' fucked up password rules.
I might also make a completely offline device for this in case I need to "look up" a password without actually typing it into a computer.
I don't need to write any crypto for this, just use a well known and secure KDF / hash.
Since some people don’t like the cathedral development style VCV Rack uses, I point people to a fork of an older version of VCV Rack with two features VCV Rack 2 doesn’t have:
* It can run as a VST plugin, so one can use it with their favorite digital audio workstation (DAW)