thedufer's comments

I think the author was pointing at the fact that "Venessa" looks like a typo.


I don't understand the advantages of recreating SSE yourself like this vs just using SSE.

> SSE always breaks and requires client side retries to get it to work

Yeah, but these are automatic (the browser handles it). SSE is really easy to get started with.


My issue with eventsource is that it doesn't use standard auth. Including the JWT in a query string is an odd one out, requiring alternate middleware, and it feels like there is a high chance of leaking the token in logs, etc.

I'm curious though, what is your solution to this?

Secondly, not every client is a browser (my OpenAI / fine tune example is non-browser based).

Finally, I just don't like the idea of things failing all the time with something working behind the scenes to resolve issues. I'd like errors / warnings in logs to mean something, personally.

>> I don't understand the advantages of recreating SSE yourself like this vs just using SSE

This is more of a strawman and I don't plan to implement it. It is based on experiences consuming SSE endpoints as well as creating them.


> I'm curious though, what is your solution to this?

Cookies work fine, and are the usual way auth is handled in browsers.

> Secondly, not every client is a browser (my OpenAI / fine tune example is non-browser based).

That's fair. It still seems easier, to me, to save any browser-based clients some work (and avoid writing your own spec) by using existing technologies. In fact, what you described isn't even incompatible with SSE - all you have to do is have the server close the connection every 60 seconds on an otherwise normal SSE connection, and all of your points are covered except for the auth one (I've never actually seen bearer tokens used in a browser context, to be fair - you'd have to allow cookies like every other web app).


> it doesn't use standard auth

I'm not sure what this means, because it supports the withCredentials option to send auth headers if allowed by CORS.


I mean Bearer / JWT


SSE can be implemented over HTTP GET; there is no difference in handling of JWT tokens in headers.


I mean that eventsource doesn't handle it.


> which has a pub date from Oct 19, 2022

I think you're misinterpreting this. That's just the date the most recent version of the library was published. The library is something like 15 years old.

> the standard integer data types (and, therefore, the standard language), that not only have no signedness

I'm not sure what you mean by this - they're signed integers. Maybe you just mean that there aren't unsigned ints in the stdlib?

> and only have Int32 and Int64, but have "one bit is reserved for OCaml's runtime operation".

The "one bit is reserved" is only true for the `int` type (which varies in size depending on the runtime between 31 and 63 bits). Int32 and Int64 really are normal 32- and 64-bit ints. The trade-off is that they're boxed (although IIRC there is work being done to unbox them) so you pay some extra indirection to use them.

> The stdint package also depends on Jane Street's "Dune", which they call a "Fast, portable, and opinionated build system". I don't need or want any of its capabilities.

Most packages are moving this way. Building OCaml without a proper build system is a massive pain and completely inscrutable to most people; Dune is a clear step forward. You're free to write custom makefiles all the time for your own code, but most people avoid that.
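To give a sense of scale, the Dune side of a small project is usually just a few lines - a sketch of a minimal dune file (assuming the stdint package exposes its library under the name stdint):

    (executable
     (name main)
     (libraries stdint))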


> The library is something like 15 years old.

It's not clear from the docs, but, yeah, I suspected that might be the case. Thanks.

> I'm not sure what you mean by this - they're signed integers. Maybe you just mean that there aren't unsigned ints in the stdlib?

Yes, that's what I mean. And doesn't that mean that it's fully unsuitable for systems programming, as this entire topic is focused on?

> The "one bit is reserved" is only true for the `int` type (which varies in size depending on the runtime between 31 and 63 bits).

I don't get it. What is it reserved for then, if the int size is determined when the runtime is built? How can that possibly affect the runtime use of ints? Or is any build of an OCaml program able to target (at compile-time) either 32- or 64-bit targets, or does it mean that an OCaml program build result is always a single format that will adapt at runtime to being in either environment?

Once again, I don't see how any of this is suitable for systems programming. Knowing one's runtime details is intrinsic at design-time for dealing with systems-level semantics, by my understanding.

> Building OCaml without a proper build system

But I don't want to build the programming language, I want to use it. Sure, I can recompile gcc if I need to, but that shouldn't be a part of my dev process for building software that uses gcc, IMO.

It looks to me like Jane Street has taken over OCaml and added a ton of apparatus to facilitate their various uses of it. Of course, I admit that I am very specific and focused on small, tightly-defined software, so multi-target, 3rd-party-utilizing software systems are not of interest to me.

It looks to me like OCaml's intrinsic install is designed to facilitate far more advanced features than I care to use, and it looks like those features make it a very ill-suited choice for a systems programming language, where concise, straightforward semantics will win the day for long-term success.

Once again, it looks like we're all basically forced to fall back to C for systems code, even if our bright-eyed bushy tails can dream of nicer ways of getting the job done.

Thanks for your patient and excellent help on this topic.


> I don't get it. What is it reserved for then, if the int size is determined when the runtime is built? How can that possibly affect the runtime use of ints?

Types are fully erased after compilation of an OCaml program. However, the GC still needs to know things about the data it is looking at - for example, whether a given value is a pointer (and thus needs to be followed when resolving liveness questions) or is plain data. Values of type `int` can be stored right alongside pointers because they're distinguishable - the lowest bit is always 0 for pointers (this is free by way of memory alignment) and 1 for ints (this is the 1 bit ints give up - much usage of ints involves some shifting to keep this property without getting the wrong values).

Other types of data (such as Int64s, strings, etc) can only be handled (at least at function boundaries) by way of a pointer, regardless of whether they fit in, say, a register. Then the whole block that the pointer points to is tagged as being all data, so the GC knows there are no pointers to look for in it.
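If you want to see this distinction from inside the language, here's a rough sketch using the implementation-exposing Obj module (purely illustrative - not something to rely on in real code):

    (* ints are stored immediately (tagged); Int64s and strings are boxed heap blocks. *)
    let () =
      assert (Obj.is_int (Obj.repr 42));
      assert (Obj.is_block (Obj.repr 42L));
      assert (Obj.is_block (Obj.repr "hello"));
      print_endline "representation checks passed"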

> Or is any build of an OCaml program able to target (at compile-time) either 32- or 64-bit targets, or does it mean that an OCaml program build result is always a single format that will adapt at runtime to being in either environment?

To be clear, you have to choose at build time what you're targeting, and the integer size is part of that target specification (most processor architectures these days are 64-bit, for example, but compilation to JavaScript treats JavaScript as a 32-bit platform, and of course there's still support for various 32-bit architectures).
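At runtime you can ask the compiled program which width it got - Sys.int_size is 63 for 64-bit native code and 31 for 32-bit native code:

    let () =
      Printf.printf "int carries %d bits; max_int = %d\n" Sys.int_size max_int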

> Knowing one's runtime details is intrinsic at design-time for dealing with systems-level semantics, by my understanding.

Doesn't this mean that C can't be used for systems programming? You don't know the size of `int` there, either.

> But I don't want to build the programming language, I want to use it.

I meant building OCaml code, not the compiler.


Thanks for the fantastic explanation of how ints are handled in OCaml, but I've got to say that having the low bit be the flag is a strange design decision, IMO, though I understand that aligning the pointers makes the low bit or two irrelevant for them. But, oh!, the poor ints.

All this said, thanks for putting to bed, once and for all, any notion anyone should have that OCaml can be used as a systems language. Yikes!

> Doesn't this mean that C can't be used for systems programming? You don't know the size of `int` there, either.

You know that at compile time, surely, when you set the build target, no? Even the pointer sizes. Besides, after years of C programming, I got to where I never used the nonspecific versions; if I wanted unsigned 64-bit, I would specifically typedef it at the top, and then there's no ambiguity because I specifically declared all vars. (You can see how I did the same thing in F# at the bottom of this reply.)

It makes working with printf much less problematic, where things can easily go awry. Anyway, I want my semantics to percolate down pyramid-style from a small set of definitions into larger and larger areas of dependence, but cleanly and clearly.

Sure, DEFINEs can let you do transparent multi-targeting, but it ends up being very brittle, and the bugs are insidious.

Thanks for your excellence. It's been a joy learning from you here.

---

As an aside, here's a small part of my defs section from the final iteration of my F# base libs, where I created aliases for the various .NET types for standard use in my code:

   type tI4s = System.Int32
   type tI1s = System.SByte
   type tI2s = System.Int16
   type tI8s = System.Int64

   type tI1u = System.Byte
   type tI2u = System.UInt16
   type tI4u = System.UInt32
   type tI8u = System.UInt64
Why risk relying on implicit definitions (or inconsistent F# team alias naming conventions) when, instead, everything can be explicitly declared and thus unambiguous? (It's really helpful for syscall interop declarations, as I remember it from so many years ago.) Plus, it's far more terse, and .NET not being able to compile to a 64-bit executable (IIRC) made it simpler than C/C++'s two kinds of executable targets.


> But, oh!, the poor ints.

Empirically this is a rather low cost. IIRC, the extra ops add less than a cycle per arithmetic operation, due to amortizing them over multiple operations and clean pipelining (and also things like shifts just being really cheap).
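To make that concrete, here's a sketch of the usual model (my own illustrative helpers, not the compiler's actual code generation): an int n is stored as 2*n + 1, so addition needs one extra decrement and multiplication one extra shift.

    let tag n = (n lsl 1) lor 1                     (* how an int is represented *)
    let untag t = t asr 1
    let add_tagged x y = x + y - 1                  (* tag a + tag b - 1 = tag (a + b) *)
    let mul_tagged x y = (x asr 1) * (y - 1) + 1    (* a * 2b + 1 = tag (a * b) *)

    let () =
      assert (untag (add_tagged (tag 3) (tag 4)) = 7);
      assert (untag (mul_tagged (tag 3) (tag 4)) = 12)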

But yes, there are certainly applications where we almost exclusively use Int64 or Int32 rather than the primary int type, if you need exactly that many bits.

> You know that at compile time, surely, when you set the build target, no?

Well, that's true of OCaml as well.

This is ultimately a difference of opinion - I think that the cost of installing a single extra library to get ints of various widths/signedness would be worth the advantage of eliminating nearly all memory errors (and various other advantages of a higher-level language).

The main carveout I would agree with is any case where you absolutely need strict memory bounds - it's not clear to me how you'd satisfy this with any GC'd language, since the GC behavior is ultimately somewhat chaotic.


I would assume they're pointing at the comment at the top of this thread ("They built / manage login.gov?") rather than the article itself.


I guess that's why I'm confused: which facts are not straight or were corrected?


18F built but did not then manage login.gov. It was handed off elsewhere. The implication of them managing it was that disbanding 18F left login.gov ownerless (or at least handed off to some group that knows nothing about it), which does not seem to be the case.


> is pretty straightforward to accept without fear of charge-backs or fraud

Well yeah, it's a great option for the seller. You get non-recourse cash before shipping the item. The real question is why people are willing to send it to you. From the other side, it's basically asking to be defrauded.


You can have escrow or something using a 2-of-3 multisignature if you are worried, but you can also just have trust mechanisms outside of the payment network.


> Now, why Junio thought deciseconds was a reasonable unit of time measurement for this is never discussed, so I don't really know why that is.

xmobar uses deciseconds in a similar, albeit more problematic, place - to declare how often to refresh each section. Using deciseconds is fantastic if your goal is for example configs to contain numbers small enough that they clearly can't be milliseconds, leading people to make the reasonable assumption that they must be seconds and to run their commands 10 times as often as they intended. I've seen a number of accidental load spikes originating from this issue.


I have both and they certainly each have their place. The Steam Deck has a much wider variety of games and can handle heavier graphics loads, but it is too heavy to be all that comfortable for handheld use, and the Switch is in my mind the undisputed champion of local multiplayer (more portable controllers, controller connections Just Work, good variety of local multiplayer games, etc).


Our code review process involves a reviewer explicitly taking ownership of the PR, and I think it's a reasonable expectation with the right practices around it. A good reviewer will request a PR containing 1000s of lines be broken up without doing much more than skimming to make sure it isn't bulked up by generated code or test data or something benign like that.


And just to add to this, at least at Google, generated code is never seen in a code review. That’s all just handled by Bazel behind the scenes.


Ah, we draw a distinction between checked-in generated code and JIT generated code, and the former does show up in code review (which is sometimes the point of checking it in - you can easily spot check some of it to make sure the generator behaves as you expect).


September is a funny choice to use as an example, because it is named after a number (sept-: prefix, 7). The wrong number, though.


This is because September used to be the seventh month. March was the new year and coincided with spring planting, the spring equinox. At some point we switched from a solar calendar to a lunar one, and that's when the new year month changed. Source for all this is the Dead Sea Scrolls; see the book "Ancient Mysteries of the Essenes" for a deep dive on our calendar.


I thought the reason was that they added two months named July and August after emperors, which offset all the numbers by 2. (Sept, Oct, Nov, Dec - 7, 8, 9, 10)


That happened later, well after January and February had been added. I think the twelve-month Roman calendar dates from prehistory, so we don't know when or why it was done. July and August were Quintilis and Sextilis, five and six.


Yeah, seems like I mixed my calendar history up. Reading this set me straight: https://en.m.wikipedia.org/wiki/Roman_calendar

Thanks for the reply, it led me to look into it deeper.


so, for those who haven't heard it yet...

why do programmers get Halloween and Christmas confused?

because oct(31) == dec(25)


Sept, Oct, Nov, Dec. My favourite months; wish they still were 7-10.


So remembering this little tidbit would do more harm than good.


All of the months with numerical prefixes are wrong by the same offset, though. So as long as you remember that as well, it can be useful. Particularly since they're the last ones and thus take the longest to count to.


Same with Quartember, Quintober, Sextober, October, November, and December.


In fact, it sounds like the 40% includes all exits, including those that returned only 1x. That would mean that the 60% is all 0s, and that the chart shows a negative total return in all rows, without even discounting for the holding period.

