Should note that C Classic didn't define the sign of the result, but did require that
    a == (a/b)*b + a%b
hold whenever a/b was representable. Not one C programmer in a million knew that, though ;-) C99 finally did insist on giving the wrong answer, seemingly just to be compatible with FORTRAN.
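To make the difference concrete, here's a small Python sketch (my example, not from either standard) that emulates C99's truncating division:

    a, b = -7, 2

    # C99 style: the quotient truncates toward zero, so the remainder takes a's sign.
    q_c99 = int(a / b)      # -3, emulating C99's a/b
    r_c99 = a - q_c99 * b   # -1, emulating C99's a%b

    # Python style: the quotient floors, so the remainder takes b's sign.
    q_py = a // b           # -4
    r_py = a % b            #  1

    # Both pairs satisfy the C Classic identity.
    assert a == q_c99 * b + r_c99 == q_py * b + r_py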
I've never seen an integer use case where the wrong answer was helpful. Use cases where the right answer is helpful abound. For example, when trading equity options the 3rd Friday of the month is a crucial date, and
    import datetime

    FRIDAY = 4   # date.weekday() counts Monday as 0, so Friday is 4

    # Return date of third Friday of month.
    def fri3(year, month):
        d = datetime.date(year, month, 1)
        # Days from the 1st to the first Friday, plus two more weeks.
        diff = (FRIDAY - d.weekday()) % 7 + 14
        return d + datetime.timedelta(days=diff)
is screamingly natural. Weekdays, hours on a 24-hour clock, minutes, seconds, months in a year, year numbers in a century ... many things related to time are naturally represented by the ordinals in range(N) for some N, repeated endlessly. Using the right definition of % allows uniform code to move forward or backward in such sequences.
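A quick check of fri3, plus a small sketch of that forward-and-backward uniformity (the examples are mine):

    import datetime

    assert fri3(2010, 8) == datetime.date(2010, 8, 20)   # third Friday of Aug 2010

    # One uniform expression walks an endless weekly cycle, whether k
    # steps forward or backward from day 0 (a Monday here).
    names = "Mon Tue Wed Thu Fri Sat Sun".split()

    def day_name(k):
        return names[k % 7]

    assert day_name(4) == "Fri"
    assert day_name(4 - 7) == "Fri"   # one week earlier: still a Friday
    assert day_name(-1) == "Sun"      # the day before day 0 wraps around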
But for floats it's usually most useful to have
    abs(a%b) <= abs(b/2)
and damn the signs. The mathematical result is exactly representable (ark's point) under that constraint too, and it caters to the fact that floating-point mod is most often used for range reduction (where it's helpful for the result to have the smallest possible absolute value).
The integers are a subset of the reals in math, but that has nothing to do with floats ;-)
Aug 24, 2010, 1:39:27 PM
Posted to Why Python's Integer Division Floors