Hacker News | chris_armstrong's comments

OCaml

The compiler is very fast, even over large codebases.

Mostly trying to bring AWS tooling to the platform[1], or experimenting with cross-compilation[2] using another less well known systems language, zig.

[1] https://github.com/chris-armstrong/smaws/ [2] https://github.com/chris-armstrong/opam-cross-lambda


I've used a lot of programming languages and the kind of groove you can get into with OCaml is hard to match. You can just dive into an enormous, unfamiliar codebase and make changes to it with so much more confidence. But while it's reasonably fast, it's also higher level than Rust so you don't have to struggle quite so much with forms like `Arc<Mutex<HashMap<String, Box<dyn Processor + Send + Sync>>>>` everywhere.

Re: AWS tooling, have you seen https://github.com/solvuu/awsm ?

It generates code for all 300+ AWS services and produces both Async and Lwt forms. Should be fairly extensible to Eio.

I worked on this. Let me know if you want to tag team.


I want to like OCaml but OPAM is just so bad... and tooling is super important (it's one of the reasons Go is popular at all). Windows support is also an afterthought. There's no native debugger as far as I can tell. This is before you even get to the language, which definitely has its own big flaws (e.g. the lack of native 64-bit integers that MrMacCall mentioned).

The syntax is also not very friendly IMO. It's a shame because it has a lot of great ideas and a nice type system without getting all monad in your face. I think with better tooling and friendlier syntax it could have been a lot more popular. Too late for that though; it's going to stay consigned to Jane Street and maybe some compilers. Everyone else will use Rust and deal with the much worse compile time.


> The syntax is also not very friendly IMO.

Very true. There's an alternate syntax for OCaml called "ReasonML" that looks much more, uh, reasonable: https://reasonml.github.io/


The OCaml syntax was discussed a long time ago between the developers and the whole community, and the agreement was that the community is happy with the current/original syntax. ReasonML was created for those developers more familiar with Javascript, but it was not very successful in attracting new developers, as they usually look more at the semantics of the language along with the syntax (and that is where OCaml's type system shines). Strictly speaking, there is a long list of ML-family languages that share many properties of OCaml's syntax.

However, what is a 'reasonable' syntax is open to debate. Javascript and Python were not mainstream languages when OCaml was developed, and it made much more sense to create a syntax in line with the ML family of powerful languages available at the time. Once you program a bit in OCaml, syntax is not a problem; learning to program in a functional paradigm and getting the most out of it is the real challenge.


> (e.g. the lack of native 64-bit integers that MrMacCall mentioned)

They exist. I think you just mean `int` is 63-bit, and you need to use the specialized `Int64.t` type and its operators for the full precision.
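As a quick sketch of this (my example, not from the thread), you can check both widths from the standard library: `Sys.int_size` reports how wide the native `int` is, and `Int64.max_int` really is 2^63 - 1.

```ocaml
(* Sketch: the default int gives up one bit to the runtime;
   Int64.t does not. Sys.int_size is 63 on 64-bit platforms
   and 31 on 32-bit ones. *)
let () =
  (* max_int is 2^(Sys.int_size - 1) - 1, i.e. 2^62 - 1 on 64-bit *)
  assert (max_int = (1 lsl (Sys.int_size - 1)) - 1);
  (* Int64.max_int is the full 2^63 - 1, regardless of platform *)
  assert (Int64.max_int = 9223372036854775807L);
  Printf.printf "int: %d bits, Int64.t: 64 bits\n" Sys.int_size
```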


How can you access the full 64 bits if "one bit is reserved for the OCaml runtime"? (the link is in my original post's thread)


The usual int type is 63 bits. You can get a full 64 bit int, it just isn't the default.


The docs say, "one bit is reserved for the OCaml runtime", so doesn't that mean that one of the bits (likely the high bit) is unavailable for the programmer's use?

I mean, I understand "reserved" to mean either "you can't depend upon it if you use it", or "it will break the runtime if you use it".


So the "one bit" you refer to is what makes the standard int 63 bits rather than 64. If you could do things with it, it would indeed break the runtime: that's what tells it that you're working with an int rather than a pointer. But full, real 64-bit integers are available in the base language; the same goes for 32-bit.


And that means that the OCaml runtime is not compatible with systems-level programming.

If something is "available", it should mean that it can be used to its full capacity. One of those bits is definitely not available.


I think you need to re-read some of the comments you are replying to. There is a 64 bit int type: https://ocaml.org/manual/5.3/api/Int64.html You can use all 64 bits. There are also other int types, with different amounts of bits. For example, 32 bit: https://ocaml.org/manual/5.3/api/Int32.html No one will stop you. You can use all the bits you want. Just use the specific int type you want.
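For example (a hedged sketch of my own, not from the thread), arithmetic on these types goes through each module's own functions, since OCaml does not overload `+` across integer types:

```ocaml
(* Plain int, Int32.t and Int64.t each use their own operations *)
let a = 1 + 2                 (* native int *)
let b = Int32.add 1l 2l       (* 32-bit, 'l'-suffixed literals *)
let c = Int64.add 1L 2L       (* 64-bit, 'L'-suffixed literals *)

let () =
  assert (a = 3);
  assert (b = 3l);
  assert (c = 3L);
  (* all 64 bits are usable: this wraps at 2^63, which a 63-bit
     int could not represent *)
  assert (Int64.add Int64.max_int 1L = Int64.min_int)
```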


ravi-delia explained that an OCaml int is different from either Int32 or Int64 because an 'int' sacrifices one of its bits to the OCaml runtime. Int32 and Int64 are treated completely differently and are library definitions, bolted onto the OCaml runtime.

That is a runtime system not suitable for systems-level programming.

My C experience gave me a fundamental misunderstanding because there, an int is always derived from either a 32- or 64-bit int, depending on architecture.

OCaml is architected differently. I imagine the purpose was to keep the programs mostly working the same across processor architecture sizes.

I imagine this fundamental difference between OCaml's native int and these more specific Ints is why there are open issues in the library that I'm sure the int type does not have.

Regardless, no one should be using OCaml for systems-level programming.

Thanks for helping me get to the heart of the issue.


The situation is that OCaml is giving you all the options:

(a) int has 31 bits on 32-bit architectures and 63 on 64-bit architectures (which speeds up some operations)

(b) the standard library also provides Int32 and Int64 modules, which support platform-independent operations on 32- and 64-bit signed integers.

In other words: int is different but you always have standard Int32 and Int64 in case you need them.

It seems therefore that its use for systems-level programming should not be decided by this (although the fact that it is a garbage-collected language can be important depending on the case; note that its garbage collector has proved to be one of the fastest in the comparisons and evaluations done by the Koka language team of developers).


Ok, running this by you one more time. There is a type called "int" in the language. This is a 63-bit signed integer on 64-bit machines, and a 31-bit integer on 32-bit machines. It is stored in 64 bits (or 32), but it's a 63-bit signed integer, because one of the bits is used in the runtime. There is also a 64 bit integer, called "Int64". It has 64 bits, which is why I call it a 64-bit integer rather than a 63-bit integer. An "int" is a 63-bit integer, which is why I call it a 63-bit integer rather than a 64-bit integer.


So an int has nothing to do with an Int32 or Int64.

Thanks for your patient elucidation.

This means the semantics for Int32 and Int64 are COMPLETELY different than that of an int. My problem is that I come from the C world, where an int is simply derived from either a 32- or 64-bit integer, depending on the target architecture.

OCaml's runtime is not a system designed for systems-level programming.

Thanks again.

Now I know why the F# guys rewrote OCaml's fundamental int types from the get-go.


The reason the F# guys did things differently from OCaml is not systems-level programming, but that F# is a language designed for the .NET ecosystem, which imposes specific type constraints. F# was not specifically designed for systems-level programming.

Again, the semantics of int are different, but the semantics of Int32 and Int64 in OCaml are the same/standard. So you have three types: int, Int32 and Int64, and it is a statically typed language.


I mean I guess you could say they have different semantics. They're just different types, int and Int64 aren't any more different from each other than Int64 and Int32. You can treat all of them exactly the same, just like how you have ints and longs and shorts in C and they all have the same interface.

Regardless, I don't think C's "probably 32 bit" non-guarantee is the make-or-break feature that makes it a systems language. If I care about the exact size of an integer in C I'm not going to use an int; I'm going to use explicit types from stdint. Rust makes that mandatory, and it's probably the right call. OCaml isn't really what I'd use for a systems language, but that's because it has no control over memory layout and is garbage collected. The fact that it offers a 63-bit integer doesn't really come into it.


> int and Int64 aren't any more different from each other than Int64 and Int32

They are, though. Int64 and Int32 only differ in bit length and are in formats native to the host microprocessor. int has one of its bits "reserved" for the OCaml runtime, but Int32 has no such overhead.

> The fact that it offers a 63-bit integer doesn't really come into it.

It does if you are interoperating with an OS's ABI, though, or writing a kernel driver.

But you're right: there are a host of other reasons that OCaml shouldn't even have been brought up in this thread ;-)

Peace be with you, friend. Thanks for so generously sharing your expertise.



I see, now. From that doc:

> Performance notice: values of type int64 occupy more memory space than values of type int

I just couldn't even imagine that a 64-bit int would require MORE memory than an int that is one bit less (or 33 bits less if on a 32-bit architecture).

It really makes absolutely no sense discussing OCaml as a possible systems-level programming language.


Sorry, I should have said that an Int64 shouldn't take more memory on a 64-bit system where the default int is 63 bits, because of the "reserved bit".

It was early this morning.


bruh, it's just saying single scalar Int64 values are boxed. This is a totally normal thing that happens in garbage-collected languages. There's no semantic loss.

OCaml does this 63-bit hack to make integers fast in the statistically common case where people don't count to 2^64 with them. The low bit is reserved to tell the GC whether it manages the lifetime of that value or not.

For interoperating with binary interfaces you can just say `open Int64` at the top of your file and get semantic compatibility. The largest industrial user of OCaml is a quant finance shop that binds all kinds of kernel-level drivers with it.

(and yes, 64-bit non-boxed array types exist as well if you're worried about the boxing overhead)
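If the `Int64.add`/`Int64.mul` style feels heavy for that kind of binary-interface work, one common pattern (my sketch; the operator names `+%`, `*%`, `&%` are hypothetical, not standard) is to define local infix aliases:

```ocaml
(* Hypothetical infix aliases for Int64 arithmetic *)
let ( +% ) = Int64.add
let ( *% ) = Int64.mul
let ( &% ) = Int64.logand

(* e.g. masking out a page offset in an ABI-level address *)
let page_mask = 0xFFFL
let offset addr = addr &% page_mask

let () =
  assert (2L +% 3L = 5L);
  assert (4L *% 5L = 20L);
  assert (offset 0x1234L = 0x234L)
```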


Why opam is bad? Compared to what? Could you elaborate


1. I've found it to be extremely buggy, often in confusing ways. E.g. there was a bug where it couldn't find `curl` if you were in more than 32 Linux groups.

2. It has some kind of pinning system that is completely incomprehensible. For example you can do `opam install .`, which works fine, and then `git switch some_other_branch; opam install .` and it will actually still install the old branch?? Honestly I've never figured out what on earth it's trying to do but me and my colleagues have had constant issues with it.

> Compared to what?

Compared to good tooling like Cargo and Go and NPM and uv (if you give it some slack for having to deal with Python).

It's better than Pip, but that doesn't take much.


In my case I have not found opam buggy at all, and I never find it confusing, but this last point may be personal taste. The bug you mention is something I have never experienced with opam on Linux or macOS, and I am sure that if you report it the developers will look into it.

As for point 2, I don't understand the issue. There is `opam switch`, which works for me perfectly, no issues at all. Like any other tool, it is better to read the manual to understand how it works.

Cargo and opam are not really comparable; probably the next generation of dune could be, but at this moment it makes no sense to compare two utilities that are so different. Comparing with pip, the Julia package manager, etc. is fine. Personally, I like opam more than npm and pip.


Interesting, thanks. I have been using opam, but since I work all alone and by myself, I never hit the cases you mentioned.


>The syntax is also not very friendly IMO.

Why do you think that the syntax is not very friendly?

Not saying you are wrong, just interested to know.


Have you tried esy?


I've read some part of the book Real World OCaml, by Yaron Minsky and Anil Madhavapeddy.

https://dev.realworldocaml.org/

I also saw this book OCaml from the Very Beginning by John Whitington.

https://ocaml-book.com/

I have not read that one yet. But I know about the author, from having come across his PDF tools written in OCaml, called CamlPDF, earlier.

https://github.com/johnwhitington/camlpdf

>CamlPDF is an OCaml library for reading, writing and modifying PDF files. It is the basis of the "CPDF" command line tool and C/C++/Java/Python/.NET/JavaScript API, which is available at http://www.coherentpdf.com/.


My problem with OCaml is just that there is no stepping debugger for VScode. I'd use it except for that.


Yes

Symbolic debuggers seem to be going out of fashion.


It's my understanding that OCaml does not allow its programs to specify the size and signedness of its ints, so no 16-bit unsigned, 32-bit signed, etc...

Being a huge fan of F# v2 who has ditched all MS products, I didn't think OCaml was able to be systems-level because its integer vars can't be precisely specified.

I'd love to know if I'm wrong about this. Anyone?


You’re wrong; not sure where you got that conception, but the int32/64 distinction is in the core language, with numerous libraries (e.g. stdint, integers) providing the full spectrum.


Thanks. They're not in the basic-data-types, but you are correct, they are available in the stdint module, which has a pub date from Oct 19, 2022. It can be found here:

> https://opam.ocaml.org/packages/stdint/

It's been a while since I investigated OCaml, so I guess this is a recent addition and is obviously not a part of the standard integer data types (and, therefore, the standard language), that not only have no signedness, and only have Int32 and Int64, but have "one bit is reserved for OCaml's runtime operation".

The stdint package also depends on Jane Street's "Dune", which they call a "Fast, portable, and opinionated build system". I don't need or want any of its capabilities.

As well, the issues page for stdint has a ton of open issues more than a year old, so, as I understood, OCaml does not, like F#, have all sizes and signedness of ints available in its fundamental language. Such a language is simply not a good fit for systems-level programming, where bit-banging is essential. Such low-level int handling is simply not a part of the language, however much it may be able to be bolted on.

I just want to install a programming language, with its base compiler and libraries and preferably with man pages, open some files in vi, compile, correct, and run. That is my requirement for a "systems-level" language.

I would never in my life consider OCaml with opam and Dune for building systems-level software. I wish it could, but it's not copacetic for the task, whose sole purpose is to produce clean, simple, understandable binaries.

Thanks for helping me understand the situation.


> which has a pub date from Oct 19, 2022

I think you're misinterpreting this. That's just the date the most recent version of the library was published. The library is something like 15 years old.

> the standard integer data types (and, therefore, the standard language), that not only have no signedness

I'm not sure what you mean by this - they're signed integers. Maybe you just mean that there aren't unsigned ints in the stdlib?

> and only have Int32 and Int64, but have "one bit is reserved for OCaml's runtime operation".

The "one bit is reserved" is only true for the `int` type (which varies in size depending on the runtime between 31 and 63 bits). Int32 and Int64 really are normal 32- and 64-bit ints. The trade-off is that they're boxed (although IIRC there is work being done to unbox them) so you pay some extra indirection to use them.

> The stdint package also depends on Jane Street's "Dune", which they call a "Fast, portable, and opinionated build system". I don't need or want or need any of its capabilities.

Most packages are moving this way. Building OCaml without a proper build system is a massive pain and completely inscrutable to most people; Dune is a clear step forward. You're free to write custom makefiles all the time for your own code, but most people avoid that.


> The library is something like 15 years old.

It's not clear from the docs, but, yeah, I suspected that might be the case. Thanks.

> I'm not sure what you mean by this - they're signed integers. Maybe you just mean that there aren't unsigned ints in the stdlib?

Yes, that's what I mean. And doesn't that mean that it's fully unsuitable for systems programming, as this entire topic is focused on?

> The "one bit is reserved" is only true for the `int` type (which varies in size depending on the runtime between 31 and 63 bits).

I don't get it. What is it reserved for then, if the int size is determined when the runtime is built? How can that possibly affect the runtime use of ints? Or is any build of an OCaml program able to target (at compile-time) either 32- or 64-bit targets, or does it mean that an OCaml program build result is always a single format that will adapt at runtime to being in either environment?

Once again, I don't see how any of this is suitable for systems programming. Knowing one's runtime details is intrinsic at design-time for dealing with systems-level semantics, by my understanding.

> Building OCaml without a proper build system

But I don't want to build the programming language, I want to use it. Sure, I can recompile gcc if I need to, but that shouldn't be a part of my dev process for building software that uses gcc, IMO.

It looks to me like Jane Street has taken over OCaml and added a ton of apparatus to facilitate their various uses of it. Of course, I admit that I am very specific and focused on small, tightly-defined software, so multi-target, 3rd-party-utilizing software systems are not of interest to me.

It looks to me like OCaml's intrinsic install is designed to facilitate far more advanced features than I care to use, and that looks like those features make it a very ill-suited choice for a systems programming language, where concise, straightforward semantics will win the day for long-term success.

Once again, it looks like we're all basically forced to fall back to C for systems code, even if our bright-eyed bushy tails can dream of nicer ways of getting the job done.

Thanks for your patient and excellent help on this topic.


> I don't get it. What is it reserved for then, if the int size is determined when the runtime is built? How can that possibly affect the runtime use of ints?

Types are fully erased after compilation of an OCaml program. However, the GC still needs to know things about the data it is looking at - for example, whether a given value is a pointer (and thus needs to be followed when resolving liveness questions) or is plain data. Values of type `int` can be stored right alongside pointers because they're distinguishable - the lowest bit is always 0 for pointers (this is free by way of memory alignment) and 1 for ints (this is the 1 bit ints give up - much usage of ints involves some shifting to keep this property without getting the wrong values).

Other types of data (such as Int64s, strings, etc) can only be handled (at least at function boundaries) by way of a pointer, regardless of whether they fit in, say, a register. Then the whole block that the pointer points to is tagged as being all data, so the GC knows there are no pointers to look for in it.
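This tagging can actually be observed from the unsafe `Obj` module (a sketch for illustration only; `Obj` exposes runtime internals and is not for production use):

```ocaml
(* ints are immediate (tagged) values; Int64 and strings are heap blocks *)
let () =
  assert (Obj.is_int (Obj.repr 42));        (* fits in a word, low bit set *)
  assert (Obj.is_block (Obj.repr 42L));     (* boxed Int64.t *)
  assert (Obj.is_block (Obj.repr "hello")); (* data-only block for the GC *)
  print_endline "tagging checks passed"
```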

> Or is any build of an OCaml program able to target (at compile-time) either 32- or 64-bit targets, or does it mean that an OCaml program build result is always a single format that will adapt at runtime to being in either environment?

To be clear, you have to choose at build time what you're targeting, and the integer size is part of that target specification (most processor architectures these days are 64-bit, for example, but compilation to javascript treats javascript as a 32-bit platform, and of course there's still support for various 32-bit architectures).

> Knowing one's runtime details is intrinsic at design-time for dealing with systems-level semantics, by my understanding.

Doesn't this mean that C can't be used for systems programming? You don't know the size of `int` there, either.

> But I don't want to build the programming language, I want to use it.

I meant building OCaml code, not the compiler.


Thanks for the fantastic explanation for how ints are handled in OCaml, but I've got to say that having the low bit be the flag is a strange design decision, IMO, but I understand that aligning the pointers will make the low bit or two irrelevant for them. But, oh!, the poor ints.

All this said, thanks for putting to bed, once and for all, any notion anyone should have that OCaml can be used as a systems language. Yikes!

> Doesn't this mean that C can't be used for systems programming? You don't know the size of `int` there, either.

You know that at compile time, surely, when you set the build target, no? Even the pointer sizes. Besides, after years of C programming, I got to where I never used the nonspecific versions; if I wanted 64-bits unsigned, I would specifically typedef them at the top, and then there's no ambiguity because I specifically declared all vars. (You can see how I did the same thing in F# at the bottom of this reply.)

It makes working with printf much less problematic, where things can easily go awry. Anyway, I want my semantics to percolate down pyramid-style from a small set of definitions into larger and larger areas of dependence, but cleanly and clearly.

Sure, DEFINEs can let you do transparent multi-targetting, but it ends up being very brittle, and the bugs are insidious.

Thanks for your excellence. It's been a joy learning from you here.

---

As an aside, here's a small part of my defs section from the final iteration of my F# base libs, where I created an alias for the various .NET types for standard use in my code:

   type tI4s = System.Int32
   type tI1s = System.SByte
   type tI2s = System.Int16
   type tI8s = System.Int64

   type tI1u = System.Byte
   type tI2u = System.UInt16
   type tI4u = System.UInt32
   type tI8u = System.UInt64
Why risk relying on implicit definitions (or inconsistent F# team alias naming conventions) when, instead, everything can be explicitly declared and thus unambiguous? (It's really helpful for syscall interop declarations, as I remember it from so many years ago). Plus, it's far more terse, and .NET not being able to compile to a 64-bit executable (IIRC) made it simpler than C/C++'s two kinds of executable targets.


> But, oh!, the poor ints.

Empirically this is a rather low cost. IIRC, the extra ops add less than a cycle per arithmetic operation, due to amortizing them over multiple operations and clean pipelining (and also things like shifts just being really cheap).

But yes, there are certainly applications where we almost exclusively use Int64 or Int32 rather than the primary int type, if you need exactly that many bits.

> You know that at compile time, surely, when you set the build target, no?

Well, that's true of OCaml as well.

This is ultimately a difference of opinion - I think that the cost of installing a single extra library to get ints of various widths/signedness would be worth the advantage of eliminating nearly all memory errors (and various other advantages of a higher-level language).

The main carveout I would agree with is any case where you absolutely need strict memory bounds - it's not clear to me how you'd satisfy this with any GC'd language, since the GC behavior is ultimately somewhat chaotic.


As I commented above, Int32 and Int64 have been part of the standard library since at least the 4.X OCaml versions (we are now at 5.3). So normally they are all available when you install any distribution of OCaml. Note that there is also a type named nativeint (which, I think, is the kind of int you were looking for in all your comments and posts), and it is part of the standard library, so in summary:

Int type (the one you dislike for systems programming)

Int32 type (part of the standard library, one of those you were looking for)

Int64 type (part of the standard library, one of those you were looking for)

Nativeint (part of the standard library, maybe the one you were looking for)

The stdint library is another option, which can be convenient in some cases, but for Int32 and Int64 you don't need it, and for nativeint you don't need it either.
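All four types can be seen side by side; a small sketch (the `l`, `L` and `n` literal suffixes are standard OCaml):

```ocaml
let a : int = 1            (* 31- or 63-bit, immediate *)
let b : Int32.t = 1l       (* exactly 32 bits, boxed *)
let c : Int64.t = 1L       (* exactly 64 bits, boxed *)
let d : nativeint = 1n     (* word-sized: 32 or 64 bits *)

let () =
  ignore (a, b, c, d);
  (* nativeint tracks the platform word size *)
  assert (Nativeint.size = Sys.word_size);
  Printf.printf "int=%d, int32=32, int64=64, nativeint=%d bits\n"
    Sys.int_size Nativeint.size
```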


The modules Int64 and Int32 are part of the OCaml standard library. You mentioned in your comments that Dune or Jane Street is needed to have this functionality; no, they are part of the standard library. Really, they are part of core OCaml development. Actually, for example, you can even use the Bigarray library with these types and with int8, int16, signed, unsigned... even more, you have platform-native signed integers (32 bits on 32-bit architectures, 64 bits on 64-bit architectures) with Bigarray.nativeint_elt as part of the standard library, so all these types are there.

You also mention that Int32 and Int64 are recent; however, these modules were already part of OCaml in the 4.X versions of the compiler and standard library (now we are at 5.3).

Note that in OCaml you can use C libraries and it is quite common to manage Int32, Int64, signed etc...


> F# v2

What does that mean?


The second version of F#, where they implemented generics, before they got into the type provider stuff.


What is the ML programming language? They say OCaml is the same thing with a different name; is that true?



Can a systems programming lanugage use garbage collection? I don't think so.


You'd be surprised.

In the 1980s, complete workstations were written in Lisp down to the lowest level code. With garbage collection of course. Operating system written in Lisp, application software written in Lisp, etc.

Symbolics Lisp Machine

https://www.chai.uni-hamburg.de/~moeller/symbolics-info/fami...

LMI Lambda http://images.computerhistory.org/revonline/images/500004885...

We're talking about commercial, production-quality, expensive machines. These machines had important software like 3D design software, CAD/CAM software, etc. And very, very advanced OS. You could inspect (step into) a function, then into the standard library, and then you could keep stepping into and into until you ended up looking at the operating system code.

The OS code, being dynamically linked, could be changed at runtime.


I hate Java just as much, but at least it’s trying to improve and grow beyond its past limitations, instead of promoting bad design decisions as good for the programmer.


A recent book, the Light Eaters, summarises much of the recent research into plant behaviour like this, including how maligned and misreported it has been over the past 50 years or so.


Interview (audio and full transcript) with the author, Zoë Schlanger:

Emergence Magazine: The World Is a Prism, Not a Window

https://emergencemagazine.org/interview/the-world-is-a-prism...


Investigating tree-shaking in OCaml (tldr; there isn't any, but it's been discussed a lot over the past 8 years) https://www.chrisarmstrong.dev/posts/dead-code-elimination-d...


Although Docs have almost given up making in Northampton, you'll still find a number of shoemakers producing good quality boots that should last a decade or more. You'll be paying £250-£500 as you guessed.


It's different degrees of the same thing I would have thought.

If you're running your own box, you still depend on network infrastructure and uplink of a service provider, whereas a cloud infrastructure provider may go the other way and negotiate direct connections themselves.

Plenty of valuable lessons await for those who even just provision a virtual host inside AWS and configure the operating system and its software themselves. Other lessons for those who rack up their own servers and VNETs and install them in a data-centre provider instead of running them onsite.

There's only so much you can or should or want to do yourself, and its about finding the combination and degree that works for you and your goals.


Interesting to see Sydney Water put their sponsorship near the top of this. So much of Sydney including its larger rivers and waterholes are swimmable. Even our unique (but not very well known) ocean pools make many of our more dangerous surf beaches accessible.


> Even our unique (but not very well known) ocean pools make many of our more dangerous surf beaches accessible.

What do you mean?


"Ocean pools" in Australia are large pools, often square or rectangular regular swimming pool shaped, that are set next to the ocean at sea level and can be connected or pump filled (ie tidal or permanent level).

It's sea water, but without the sharks, wave action, rip currents, jellyfish, Bondi cigars, etc.

Here are some in Sydney: https://oceanswims.com/lifestyle/5-of-the-best-ocean-pools-i...

Variations exist all around Australia.

The N'West has Dampier's Shark Cage Beach - not a rock pool but a large steel-fence-enclosed swimming area to keep the sharks and the shark snacks separated.


I lived in Burleigh Heads for four years. I just didn't catch what you meant. Dumb me left and now I am stuck in the nordics, mortgage, wife and kids.


Right right, Airlie Beach has one. The ocean there is so treacherous that no one bathes in the ocean :)


Bronte beach checks all the boxes: Open water surf, natural rock pool (Bogey Hole), and ocean pools.


> I am a simple sole, I want to reduce the cognitive load in my web projects. The general idea is to go back to the halcyon early days of the web before Netscape dropped the JS-bomb. You know HTML for the layout and CSS for the style. An elegant division of roles.

(I'm not quite sure if this is the author's sentiment), but the point shouldn't be to escape JS entirely, but make it into something that can be used in a repeated pattern that works in lockstep with your application, such that you are neither creating custom JS for each page (e.g. React), nor blindly manipulating the DOM (like JQuery).

The division of roles between CSS and HTML is an almost contradictory point - your styling and layout should be coupled if you are to impose any meaningful order in your design. If you are rejecting the "decoupling" of front-end and back-end that React gives you, then why would you expect to be able to do it between HTML and CSS?


these two points are exactly where I am coming from ...

(i) I am deliberately starting from a(n over-) simplified pov to see how far I can get with zero JS ... but as I read the comments above I realise that, for real applications, I will need to "sprinkle" some Alpine and Tailwind here and there. (I chose Cro for the user auth.)

(ii) Indeed HTML and CSS are in an intricate contradictory dance. In the purest sense I think that my sites should be composed of reusable parts that sit within context. These parts can be composed from roles and may selectively tweak them. The roles convey aspects of the design (colours, typography, containment, scale, size). The parts contain content (text, image, actions), identity and semantic intent. Role mixing in SASS is a small start in this direction.

[This is a very weak stab at a very thorny problem ... please let me know if there is any reading that you recommend.]

HTMX is a great tool to bring the UI implementation back into the realm of more capable & expressive languages. And to start with a new perspective.

My personal preference is Raku, but you may prefer OCaml, Rust, Go, Python, Haskell, Elixir, Roc, Zig ...


Why not just use a pure CSS framework like Spectre.css (including accordions, modals, carousels, tooltips, tabs) and optionally a 166-byte HTMZ where CSS is not enough? You don't need more than 1KB of JS, see PHOOOS at https://kodus.pl.


that looks very interesting - thanks for the tip


Hi Steve. Please include https://github.com/beercss/beercss in your exploration.


it’s already my favourite - great tip


This whole situation is absolutely bizarre to me as an Australian. Our states converted to a system of centrally registered title (also known as Torrens title) over a hundred years ago to avoid the “old system” problem of tracing ownership records backwards in time. Although the system is still in effect for some properties, in many cases they’ve been converted anyway.


And yet my conveyancer (WA) last year still tried to hawk me some expensive additional title insurance. My line to her was "this sounds like it's protecting me from you not doing your job". I don't recall the response but it was unconvincing.


You’d think conveyancing would be cheaper here, but you’re still spending $2-3k on who knows what to transact property. At least the process is quick and final compared to the mess in other common law jurisdictions


That's partly because the dominant platform for electronic conveyancing, PEXA, has a monopoly. Only licensed professionals who pay subscription fees can access PEXA. This removes competitive pressure on conveyancing fees because self-represented buyers must use a slower, riskier, non-standard paper process.

https://www.afr.com/companies/financial-services/nsw-product...

https://www.productivity.nsw.gov.au/market-study-on-econveya...


I'd love to do something like this for Australian native plants. It seems like quite a lot of work though!


I'm Aussie and a native plant society person. Will come along to your meeting next week and we can chat about it.

