
> (e.g. the lack of native 64-bit integers that MrMacCall mentioned.

They exist; I think you just mean that `int` is 63-bit and you need to use the operators specialized for `Int64.t` to get the full precision.
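For instance, a minimal sketch of my own (assuming a 64-bit build, not taken from the docs):

  (* Native int: 63 bits on a 64-bit platform *)
  let a : int = max_int                  (* 4611686018427387903 = 2^62 - 1 *)

  (* Int64.t: a true 64-bit integer, manipulated via the module's own operators *)
  let b : Int64.t = Int64.max_int        (* 9223372036854775807 = 2^63 - 1 *)
  let c : Int64.t = Int64.add b 1L       (* wraps to Int64.min_int, as 64-bit arithmetic does *)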




How can you access the full 64 bits if "one bit is reserved for the OCaml runtime"? (the link is in my original post's thread)


The usual int type is 63 bits. You can get a full 64-bit int; it just isn't the default.


The docs say, "one bit is reserved for the OCaml runtime", so doesn't that mean that one of the bits (likely the high bit) is unavailable for the programmer's use?

I mean, I understand "reserved" to mean either "you can't depend upon it if you use it", or "it will break the runtime if you use it".


So the "one bit" you refer to is what makes the standard int 63 bits rather than 64. If you could do things with it it would indeed break the runtime- that's what tells it that you're working with an int rather than a pointer. But full, real, 64-bit integers are available, in the base language, same goes for 32.


And that means that the OCaml runtime is not compatible with systems-level programming.

If something is "available", it should mean that it can be used to its full capacity. One of those bits are definitely not available.


I think you need to re-read some of the comments you are replying to. There is a 64-bit int type (https://ocaml.org/manual/5.3/api/Int64.html), and you can use all 64 bits. There are also other int types with different widths, for example 32-bit (https://ocaml.org/manual/5.3/api/Int32.html). No one will stop you. You can use all the bits you want. Just use the specific int type you want.
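To make it concrete, a hypothetical few lines of my own (not from those docs):

  let a : Int32.t = Int32.max_int             (* 2147483647: the full signed 32-bit range *)
  let b : Int64.t = 0x7FFF_FFFF_FFFF_FFFFL    (* Int64 literal syntax; equals Int64.max_int *)
  let c : Int64.t = Int64.logand b 0xFF00L    (* bitwise ops over all 64 bits *)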


ravi-delia explained that an OCaml int is different from either Int32 or Int64 because an 'int' sacrifices one of its bits to the OCaml runtime. Int32 and Int64 are treated completely differently and are library definitions, bolted onto the OCaml runtime.

That is a runtime system not suitable for systems-level programming.

My C experience gave me a fundamental misunderstanding because there, an int is always derived from either a 32- or 64-bit int, depending on architecture.

OCaml is architected differently. I imagine the purpose was to keep programs working mostly the same across processor word sizes.

I imagine this fundamental difference between OCaml's native int and these more specific Ints is why there are open issues in the library that I'm sure the native int does not have.

Regardless, no one should be using OCaml for systems-level programming.

Thanks for helping me get to the heart of the issue.


The situation is that OCaml is giving you all the options:

(a) int has 31 bits on 32-bit architectures and 63 on 64-bit architectures (which speeds up some operations)

(b) the standard library also provides Int32 and Int64 modules, which support platform-independent operations on 32- and 64-bit signed integers.

In other words: int is different but you always have standard Int32 and Int64 in case you need them.

It therefore seems that suitability for systems-level programming should not be decided on this point (although the fact that it is a garbage-collected language can matter depending on the case; note that its garbage collector has still proved to be one of the fastest in the comparisons and evaluations done by the Koka language team).


Ok, running this by you one more time. There is a type called "int" in the language. This is a 63-bit signed integer on 64-bit machines, and a 31-bit integer on 32-bit machines. It is stored in 64 bits (or 32), but it's a 63-bit signed integer, because one of the bits is used by the runtime. There is also a 64-bit integer, called "Int64". It has 64 bits, which is why I call it a 64-bit integer rather than a 63-bit integer. An "int" is a 63-bit integer, which is why I call it a 63-bit integer rather than a 64-bit integer.
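If it helps, you can check this yourself; a quick sketch (mine, assuming a 64-bit build):

  let () =
    Printf.printf "int is %d bits wide\n" Sys.int_size;   (* 63 here, 31 on a 32-bit build *)
    Printf.printf "int   max: %d\n" max_int;               (* 4611686018427387903 *)
    Printf.printf "Int64 max: %Ld\n" Int64.max_int;        (* 9223372036854775807 *)
    Printf.printf "Int32 max: %ld\n" Int32.max_int         (* 2147483647 *)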


So an int has nothing to do with an Int32 or Int64.

Thanks for your patient elucidation.

This means the semantics for Int32 and Int64 are COMPLETELY different from that of an int. My problem is that I come from the C world, where an int is simply derived from either a 32- or 64-bit integer, depending on the target architecture.

OCaml's runtime is not a system designed for systems-level programming.

Thanks again.

Now I know why the F# guys rewrote OCaml's fundamental int types from the get-go.


The reason the F# guys did things differently from OCaml is not systems-level programming but that F# is a language designed for the .NET ecosystem, which imposes specific type constraints. F# was not specifically designed for systems-level programming.

Again, the semantics of int is different, but the semantics of Int32 and Int64 in OCaml is the same/standard. So you have three types: int, Int32 and Int64, and it is a statically typed language.


I mean, I guess you could say they have different semantics. They're just different types; int and Int64 aren't any more different from each other than Int64 and Int32. You can treat all of them exactly the same, just like how you have ints and longs and shorts in C and they all have the same interface.
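Something like this sketch (my own illustration, nothing beyond the stdlib) is what I mean by "same interface", just with the module name spelled out:

  (* The same shape of code at each width; only the operator names differ *)
  let double_int   x = x + x            (* native int: 63-bit on 64-bit systems *)
  let double_int32 x = Int32.add x x    (* 32-bit *)
  let double_int64 x = Int64.add x x    (* 64-bit *)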

Regardless, I don't think C's "probably 32-bit" non-guarantee is the make-or-break feature that makes it a systems language. If I care about the exact size of an integer in C, I'm not going to use an int; I'm going to use explicit types from stdint. Rust makes that mandatory, and it's probably the right call. OCaml isn't really what I'd use for a systems language, but that's because it has no control over memory layout and is garbage collected. The fact that it offers a 63-bit integer doesn't really come into it.


> int and Int64 aren't any more different from each other than Int64 and Int32

They are, though. Int64 and Int32 only differ in bit length and are in formats native to the host microprocessor. int has one of its bits "reserved" for the OCaml runtime, but Int32 has no such overhead.

> The fact that it offers a 63-bit integer doesn't really come into it.

It does if you're interoperating with an OS's ABI, though, or writing a kernel driver.

But you're right: there are a host of other reasons that OCaml shouldn't even have been brought up in this thread ;-)

Peace be with you, friend. Thanks for so generously sharing your expertise.



I see, now. From that doc:

> Performance notice: values of type int64 occupy more memory space than values of type int

I just couldn't even imagine that a 64-bit int would require MORE memory than an int that is one bit less (or 33 bits less if on a 32-bit architecture).

It really makes absolutely no sense discussing OCaml as a possible systems-level programming language.


Sorry, I should have said that an Int64 shouldn't take more memory on a 64-bit system where the default int is 63 bits, because of the "reserved bit".

It was early this morning.


bruh, it's just saying single scalar Int64 values are boxed. This is a totally normal thing that happens in garbage-collected languages. There's no semantic loss.

OCaml does this 63-bit hack to make integers fast in the statistically common case where people don't count to 2^64 with them. One bit (the low bit, in practice) is reserved to tell the GC whether a word is an immediate integer or a pointer whose lifetime it manages.

For interoperating with binary interfaces you can just say `open Int64` at the top of your file and get semantic compatibility. The largest industrial user of OCaml is a quant finance shop that binds all kinds of kernel-level drivers with it.

(and yes, 64-bit non-boxed array types exist as well if you're worried about the boxing overhead)
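For instance, something along these lines (a rough sketch using the standard Bigarray library; the details are my own, not a recommendation) gives you a flat buffer of unboxed 64-bit ints:

  (* An unboxed array of int64 values: no per-element boxing, C-compatible layout *)
  let buf = Bigarray.Array1.create Bigarray.int64 Bigarray.c_layout 1024

  let () =
    Bigarray.Array1.set buf 0 0x7FFF_FFFF_FFFF_FFFFL;
    assert (Bigarray.Array1.get buf 0 = Int64.max_int)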



