
Thanks! The whole idea is quite neat. Also a good excuse to delve a bit deeper into probabilities and number generation.

Despite it being an advertisement for something I've made -- in the Show HN sense -- there are many features which distinguish it from other existing resources. For instance:

- Highlighting the Kangxi radical really helps with visually breaking apart the components of each character, and often serves as a hint for the meaning

- We include the meanings for the kun'yomi readings trailed by okurigana (most often verbs and adjectives), which can be easily conjugated according to the included cheatsheet

- We always try to use compound examples which only use Kanji seen before in the deck order (and, within the same JLPT level, we also topologically sort the Kanji so that components appear before the Kanji they are part of)

- An etymological classification which hints at the origin of the character and, in the digital version, includes a link to the corresponding Wiktionary page, which often has an explanation. Broadly:

    * Pictographic, a Kanji whose origins lie in "drawings"

    * (Compound) Ideographic, where the Kanji represents an idea or forms a new idea resulting from the meanings of its components

    * Phonetic, where one component (typically the radical) gives the character its broad meaning, and another component gives its phonetic family. Essentially, the broad meaning comes from the semantic component and is then differentiated by the component which provides the sound...

On white rabbit specifically:

- We do the traditional flashcard style, where no information from the back is given away on the front

- We include card markers for physical spaced repetition, à la the Leitner box system

- After going through the physical cards, eventually up to N3, students have access to the full digital Anki deck to continue studying through all the Joyo with a better scheduler (Anki's FSRS).

Of course, that's just where we stand in the design space. A key point of difference is mnemonics, which are very common in these resources but which we purposefully do not use (in fact, this deck was first born out of frustration with existing mnemonic-based resources, as they are typically disconnected from any real etymological reason, or from the character at hand).


Thanks for this extensive response. I respect what you created there; my comment was not meant to talk down your creation!

As I already said, they look great! Wish you all the best and lots of success, I mean it.

Edit: maybe add this explanation of what makes your product so special to your site. I mean this extensive explanation (or did I overlook it there?). I am not sure whether it is a good idea to refer to competing products, though; that is a question for marketing people.

Edit2: also, you are absolutely right, this is a Show HN post. I should not have made that part of the comment. I apologise.


Let me know if you find some reasonably fast sub-optimal algorithm, I'd be interested in trying it.

There seems to be some literature on the complexity of this kind of sliding puzzle, like https://www.sciencedirect.com/science/article/abs/pii/S03043....


Some examples include the well-known ShellCheck shell analysis tool, the Ivory language embedded in Haskell and used in real-world autopilots for flight systems, described in this experience report: https://leepike.github.io/pubs/embedded-experience.pdf, and the Copilot language powering monitoring systems for flight systems at NASA (https://copilot-language.github.io/index.html).


Thanks saagarjha. Those are very actionable, good suggestions. I will look into updating the packages which do this under the hood to do that.


@dang sorry, I missed your reply in the previous post. Thanks for letting me re-try this many times. Letting it go...


Great to see this! Funnily enough, I was just now working on a problem to which I'm trying to apply e-graphs (using my -- shameless plug -- Haskell e-graphs/eqsat library[1])

[1] https://github.com/alt-romes/hegg


Just curious, what's the problem you are solving?


I think it's Menlo


The Haskell Discourse: https://discourse.haskell.org/


Equality graphs (e-graphs) for theorem proving, equality saturation, and other equality-related things.

They're awesome data structures that efficiently maintain a congruence relation over many expressions.

> At a high level, e-graphs extend union-find to compactly represent equivalence classes of expressions while maintaining a key invariant: the equivalence relation is closed under congruence.

e.g. If I were to represent "f(x)" and "f(y)" in the e-graph, and then said "x == y" (merged "x" and "y" in the e-graph), the e-graph, by congruence, would be able to tell me that "f(x) == f(y)"

e.g. If I were to represent "a*(2/2)" in the e-graph, then say "2/2 == 1" and "x*1 == x", by congruence the e-graph would know "a*(2/2) == a"!
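
To make those two examples concrete, here is a minimal, self-contained Haskell sketch of the congruence part (this is not the hegg API; it just keeps an explicit, very naive partition of the known terms and closes it under congruence by fixed-point iteration):

    import Data.List (find)

    data Term = Var String | App String [Term]
      deriving (Eq, Show)

    -- An explicit partition of the finite set of known terms into classes.
    type Partition = [[Term]]

    -- The class a term belongs to (a singleton if the term is unknown).
    classOf :: Partition -> Term -> [Term]
    classOf p t = maybe [t] id (find (t `elem`) p)

    equiv :: Partition -> Term -> Term -> Bool
    equiv p a b = b `elem` classOf p a

    -- Merge the classes of two terms.
    merge :: Term -> Term -> Partition -> Partition
    merge a b p
      | equiv p a b = p
      | otherwise   = (ca ++ cb) : filter (\c -> c /= ca && c /= cb) p
      where ca = classOf p a
            cb = classOf p b

    -- One congruence pass: whenever f(a1..an) and f(b1..bn) have pairwise
    -- equivalent arguments, their classes must be merged too.
    congruenceStep :: Partition -> Partition
    congruenceStep p = foldr (uncurry merge) p pairs
      where
        terms = concat p
        pairs = [ (s, t) | s@(App f as) <- terms, t@(App g bs) <- terms
                         , f == g, length as == length bs
                         , and (zipWith (equiv p) as bs) ]

    -- Iterate to a fixed point (terminates: the number of classes only shrinks).
    close :: Partition -> Partition
    close p = let p' = congruenceStep p in if length p' == length p then p else close p'

    main :: IO ()
    main = do
      let x = Var "x"; y = Var "y"
          fx = App "f" [x]; fy = App "f" [y]
          p0 = map (: []) [x, y, fx, fy]  -- every known term starts in its own class
          p1 = close (merge x y p0)       -- assert x == y, then close under congruence
      print (equiv p1 fx fy)              -- True: f(x) == f(y) follows by congruence

Real e-graphs (hegg, egg) maintain the same invariant far more efficiently, with hashconsing and a union-find over e-class ids, and additionally let you enumerate the terms represented by each class.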

The most recent description of e-graphs with an added insight on implementation is https://arxiv.org/pdf/2004.03082.pdf to the best of my knowledge.

P.S.: I'm currently implementing them in Haskell: https://github.com/alt-romes/hegg


Shameless plug, I also maintain an OCaml implementation of egraphs (named ego) at https://github.com/verse-lab/ego

While the most popular implementation at the moment seems to be egg in Rust, I find that OCaml serves as a much more ergonomic environment for quickly prototyping out uses of egraphs in practice. As a bonus, ego also shares the same logical interface as egg itself, so once you've finalised your designs, you shouldn't have much trouble porting them to egg if you need the performance gains.


I wonder if you could apply e-graphs to situations where the union-find data structure would be used, i.e. whether there are any additional benefits to be gained from the congruence relation.


More pointers and more processing, in exchange for compactly representing equivalence classes of many (often infinitely many) graphs and enumerating them efficiently. Usually not relevant when equivalence of simple nodes is the object.
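
For contrast, here is a rough sketch of the degenerate case: when all you want is "x == y" between atoms, with no function symbols and hence no congruence to maintain, a plain union-find already does the job (minimal Haskell, no path compression or union-by-rank):

    import qualified Data.Map.Strict as M

    -- Each element points to a parent; roots point to themselves.
    type UF = M.Map String String

    newUF :: [String] -> UF
    newUF xs = M.fromList [(x, x) | x <- xs]

    find :: UF -> String -> String
    find uf x = let p = uf M.! x in if p == x then x else find uf p

    union :: String -> String -> UF -> UF
    union a b uf = M.insert (find uf a) (find uf b) uf

    main :: IO ()
    main = do
      let uf = union "x" "y" (newUF ["x", "y", "z"])
      print (find uf "x" == find uf "y")  -- True
      print (find uf "x" == find uf "z")  -- False

The e-graph earns its extra bookkeeping only once you also have function applications over those elements and want f(x) == f(y) to follow automatically.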


Isn't that one of the building blocks of logic-based programming languages like Prolog?


Yes, at least these egg-based projects are using this idea, but it looks like it's just one of the options:

https://souffle-lang.github.io/docs.html http://www.philipzucker.com/souffle-egg4/ https://arxiv.org/pdf/2004.03082.pdf


I had a similar thought when I learned about e-graphs. I'm not sure yet, but I think congruence closure is a slightly different problem from the one Prolog unification solves. In particular, if you tell an e-graph that `f(x) = g(y)`, it will take you at your word -- while Prolog would give a unification error. (An e-graph should also handle `X = f(X)`, while this would fail Prolog's admittedly often-ignored occurs check.)
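
A small sketch of that difference (illustrative only, not tied to any particular Prolog implementation): a syntactic first-order unifier with an occurs check rejects exactly the two cases above, while a congruence closure happily records them:

    import qualified Data.Map.Strict as M

    data Term = Var String | App String [Term] deriving (Eq, Show)

    type Subst = M.Map String Term

    -- Follow variable bindings until we reach an unbound variable or an application.
    walk :: Subst -> Term -> Term
    walk s t@(Var v) = maybe t (walk s) (M.lookup v s)
    walk _ t         = t

    -- Does variable v occur in term t (under the current substitution)?
    occurs :: Subst -> String -> Term -> Bool
    occurs s v t = case walk s t of
      Var u    -> u == v
      App _ ts -> any (occurs s v) ts

    unify :: Subst -> Term -> Term -> Maybe Subst
    unify s a b = case (walk s a, walk s b) of
      (Var u, Var v) | u == v -> Just s
      (Var u, t)
        | occurs s u t -> Nothing                   -- occurs check: X = f(X) fails
        | otherwise    -> Just (M.insert u t s)
      (t, v@(Var _)) -> unify s v t
      (App f as, App g bs)
        | f == g && length as == length bs ->
            foldl (\acc (x, y) -> acc >>= \s' -> unify s' x y) (Just s) (zip as bs)
        | otherwise -> Nothing                      -- head clash: f(..) vs g(..)

    main :: IO ()
    main = do
      let x = Var "x"; y = Var "y"
      print (unify M.empty (App "f" [x]) (App "g" [y]))  -- Nothing (clash); an e-graph would just merge the two classes
      print (unify M.empty x (App "f" [x]))               -- Nothing (occurs check); an e-graph represents this as a cyclic class
      print (unify M.empty (App "f" [x]) (App "f" [y]))   -- Just a substitution binding x to y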

