Well, we're using it for business automation. We have automated agents that are selectively override-able by humans on an as-needed basis (e.g. a case we don't currently handle, or because of a runtime error).

Also, most of our code needs to support suspend/resume on another machine, either in the middle of an action or more often between actions. So, a "behavior" might begin on machine A and then migrate to machine B to do more work, then on to machine C. While doing work, its execution state might be serialized to Postgres while some dependency is waited on—say, a human task that doesn't get done until the following Monday. It's then resumed in the same execution state, potentially on an entirely different worker/machine, and continues executing.
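
To give a flavor of the suspend half of that (this isn't our actual code; the table, columns, and shape of the state record are made up for illustration, using node-postgres):

    import { Pool } from "pg";

    interface ExecutionState {
      behaviorId: string;
      // Serialized interpreter state: values computed so far, current position, etc.
      frame: Record<string, unknown>;
      waitingOn: { kind: "human_task" | "timer"; ref: string };
    }

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // Suspend: snapshot the execution state and park it until the dependency is done.
    async function suspend(state: ExecutionState): Promise<void> {
      await pool.query(
        `INSERT INTO behavior_state (behavior_id, frame, waiting_on)
         VALUES ($1, $2, $3)
         ON CONFLICT (behavior_id) DO UPDATE SET frame = $2, waiting_on = $3`,
        [state.behaviorId, JSON.stringify(state.frame), JSON.stringify(state.waitingOn)]
      );
    }

    // Resume: any worker can load the state and keep executing from where it stopped.
    async function resume(behaviorId: string): Promise<ExecutionState | null> {
      const { rows } = await pool.query(
        `SELECT frame, waiting_on FROM behavior_state WHERE behavior_id = $1`,
        [behaviorId]
      );
      if (rows.length === 0) return null;
      return { behaviorId, frame: rows[0].frame, waitingOn: rows[0].waiting_on };
    }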

The suspend/resume stuff completely destroys the code if you're writing it by hand, as does moving from machine to machine.

So we write the core logic in our own internal MLIR dialect and then output code that has the suspend/resume semantics added automatically (i.e. literal compiler transformations), plus our own "interpreter" (which is just JavaScript/v8 with all of the extra suspend/resume cruft added in).
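
To illustrate the kind of thing the transform produces (not our actual dialect or output; everything here is invented for the example), straight-line behavior logic gets split at each suspend point into a state machine whose entire state is a plain, serializable record:

    // Shape of the persisted continuation: which state to resume in, plus the
    // values that are still live across the suspend point.
    type Suspended = { state: number; locals: Record<string, unknown> };
    type StepResult =
      | { kind: "suspend"; next: Suspended }
      | { kind: "done"; result: unknown };

    // Hand-written logic, conceptually:
    //   const order    = await lookupOrder(id);
    //   const approval = await waitForHumanApproval(order);  // might take days
    //   return ship(order, approval);
    //
    // Generated form: re-entrant, so any worker holding the persisted record plus
    // the result of the awaited dependency can continue from here.
    function step(s: Suspended, input: unknown): StepResult {
      switch (s.state) {
        case 0: // resumed with the order lookup result; now wait on the human task
          return { kind: "suspend", next: { state: 1, locals: { order: input } } };
        case 1: // resumed with the approval; finish
          return { kind: "done", result: { order: s.locals.order, approval: input } };
        default:
          throw new Error(`unknown state ${s.state}`);
      }
    }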

We don't translate out of SSA form at all; our codegen can execute it directly. We also insert debug hooks, so when there's an error you can map the execution state back to the original code.
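
A toy version of why that works (nothing like our real interpreter; names invented): in SSA every instruction writes a fresh value, so the whole execution state is just a value table plus an instruction index, and every instruction can carry a source location for the debug mapping:

    type Instr =
      | { op: "const"; dest: string; value: number; loc: string }
      | { op: "add"; dest: string; lhs: string; rhs: string; loc: string }
      | { op: "suspend"; dest: string; task: string; loc: string };

    interface InterpState {
      pc: number;                      // index of the next instruction
      values: Record<string, number>;  // SSA value table: each dest written exactly once
    }

    // Runs until completion or the next suspend point. `loc` is the debug hook:
    // it maps the instruction (and therefore the serialized state) back to the
    // original source.
    function run(prog: Instr[], st: InterpState): InterpState | "suspended" {
      while (st.pc < prog.length) {
        const i = prog[st.pc];
        switch (i.op) {
          case "const":
            st.values[i.dest] = i.value;
            break;
          case "add":
            st.values[i.dest] = st.values[i.lhs] + st.values[i.rhs];
            break;
          case "suspend":
            // Serialize { pc, values } here (e.g. to Postgres); a later worker
            // resumes with the same record, possibly on a different machine.
            return "suspended";
        }
        st.pc++;
      }
      return st;
    }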

Most of the cool machine learning stuff MLIR can do, we're not even doing yet outside of some internal prototypes. So far, just the methodology of MLIR has made a huge impact—it gives really nice structure (read: tooling) for the kinds of code transforms we've needed to do.

HTH
