I have a US number and live in Switzerland. At least for me, I only receive SMS messages whenever I visit the US -- the rest of the time they're just dropped and I'll never see them.
(Doesn't really bother me, my friends and I all use WhatsApp/etc. anyway.)
n=1 though, maybe this is some quirk of my phone provider.
Classical solvers are very, very good at solving PDEs. In contrast, PINNs solve PDEs by... training a neural network. Not a network trained once that can be reused later -- you train a new one every single time you solve a new PDE!
You can vary this idea to try to fix it, but it's still really hard to make it better than any classical method.
As such the main use cases for PINNs -- they do have them! -- are to solve awkward stuff like high-dimensional PDEs or nonlocal operators or something. Here it's not that the PINNs got any better, it's just that all the classical solvers fall off a cliff.
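To make concrete what "training a network every single time" means, here's a minimal illustrative sketch for a toy ODE (du/dx = -u with u(0) = 1). The architecture, names, and training loop are all assumptions for exposition, not anyone's actual code:

```python
# Illustrative-only PINN sketch for the toy ODE du/dx = -u, u(0) = 1.
# Everything here (network size, optimiser, collocation points) is a placeholder.
import jax
import jax.numpy as jnp

def init_params(key):
    k1, k2 = jax.random.split(key)
    return {"w1": jax.random.normal(k1, (32,)), "b1": jnp.zeros(32),
            "w2": jax.random.normal(k2, (32,)), "b2": jnp.zeros(())}

def u(params, x):
    # A tiny MLP standing in for the unknown solution u(x).
    h = jnp.tanh(x * params["w1"] + params["b1"])
    return jnp.dot(h, params["w2"]) + params["b2"]

def loss(params, xs):
    # PDE residual at collocation points, plus the boundary condition.
    u_x = jax.vmap(jax.grad(u, argnums=1), in_axes=(None, 0))(params, xs)
    u_val = jax.vmap(u, in_axes=(None, 0))(params, xs)
    residual = u_x + u_val                  # enforce du/dx + u = 0
    boundary = (u(params, 0.0) - 1.0) ** 2  # enforce u(0) = 1
    return jnp.mean(residual**2) + boundary

params = init_params(jax.random.PRNGKey(0))
xs = jnp.linspace(0.0, 1.0, 64)
grad_loss = jax.jit(jax.grad(loss))
for _ in range(5000):  # a full optimisation run, for this one equation
    grads = grad_loss(params, xs)
    params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```

The entire optimisation loop *is* the solve: change the equation, the domain, or the boundary condition and you retrain from scratch, whereas a classical solver would just... solve.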
---
Importantly -- none of the above applies to stuff like neural differential equations or neural closure models. These are genuinely really cool and have wide-ranging applications! The difference is that PINNs are numerical solvers, whilst NDEs/NCMs are techniques for modelling data.
I concur. Having spent many years as a postdoc adjacent to this work, I was similarly unimpressed.
The best part about PINNs is that since there are so many parameters to tune, you can get several papers out of the same problem. Then these researchers get more publications, hence better job prospects, and go on to promote PINNs even more. Eventually they’ll move on, but not before having sucked the air out of more promising research directions.
I believe a lot of this hype is purely attributable to Karniadakis and to how bad a lot of the methods in many areas of engineering are. The methods coming out of CRUNCH (PINNs chief among them) seem -- even if they aren't actually -- more intelligent in comparison, since engineers are happy to take a pure brute-force solution to inverse or model-selection problems as "innovative", haha.
The general rule of thumb to go by is that whatever Karniadakis proposes doesn't actually work outside of his benchmarks. PINNs don't really work, and _his flavor_ of neural operators doesn't really work either.
PINNs have serious problems with the way the "PDE component" of the loss function needs to be posed, and outside of throwing tons of (often Chinese) PhD students and postdocs at them, they usually don't work for actual problems. That's mostly owed to the instabilities of higher-order automatic derivatives, at which point PINN people go through a cascade of alternative approaches to obtain these higher-order derivatives. But those are all just hacks.
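For anyone wondering what the "higher-order automatic derivatives" point looks like in code, here's a hedged sketch (placeholder network and forcing term, not anyone's real setup): a second-order residual such as u_xx - f means nesting jax.grad around the network, and every extra derivative order adds another level of nesting to optimise through.

```python
# Illustrative only: forming a second-order PDE residual with nested autodiff.
import jax
import jax.numpy as jnp

def u(params, x):
    # Stand-in surrogate network for the PDE solution u(x).
    h = jnp.tanh(x * params["w1"] + params["b1"])
    return jnp.dot(h, params["w2"])

u_x = jax.grad(u, argnums=1)     # du/dx
u_xx = jax.grad(u_x, argnums=1)  # d^2u/dx^2: a grad of a grad

def residual(params, x, f):
    # e.g. a Poisson-style equation u_xx(x) = f(x)
    return u_xx(params, x) - f(x)
```

The loss then depends on high-order derivatives of a nonlinear surrogate, and differentiating through all of that again to train the parameters is where the instabilities mentioned above tend to show up.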
I love Karniadakis's energy. I invited him to give a talk in my research center and his talk was fun and really targeted at physicists who understand numerical computing. He gave a good sell and was highly opinionated, which was super welcome. His main argument was that these are just other ways to arrive at an optimisation, and that they worked very quickly with only a bit of data. I am sure he would correct me greatly at this point. I'm not an expert on this topic, but he knew the field very well and talked at length about the differences between one iterative method he developed and the method that Yao Lai at Stanford developed; I had her work on my mind because she talked at an AI conference I organised in Oslo. I liked that he seemed willing to disagree with people over his own opinions because he simply believed he was correct.
Edit: this is the Yao Lai paper I'm talking about:
Likewise, spending another comment just to agree. Both on the low profile and the low travel distance.
I've tried low-profile chocs and they still have too much travel! But I'm stuck with them as split keyboards are important for me just for the usual collection of wrist health reasons.
So I'm just waiting for Apple to make a split keyboard I guess :)
I have sincerely been considering a bandsaw and a soldering iron! To find out how hard it is to split a keyboard that’s already in one piece and have it remain working.
So why this over qutebrowser [1]? (Which has been my go-to keyboard-first browser for a long time.) This isn't mentioned in the FAQ, despite being, I think, the natural comparison.
My impression is that it has been stuck in bug fixing/dependency churn for a long time now. Switched to Firefox while waiting for Nyxt to be usable (apparently, Nyxt 4 will be it).
> My impression is that it has been stuck in bug fixing/dependency churn for a long time now
I don't think it's just your impression: it's exactly what happened. Depending on Qt for the rendering engine means the browser has been tied to the painfully long release cycle of the whole of Qt. Quickly fixing bugs or implementing new features is hard; they have to hack around limited APIs, beg for more, and continually fix new bugs introduced by upstream (both Qt and Google).
The engine is QtWebEngine, which is essentially Chromium without the proprietary stuff. It may be a bit outdated, but I've never seen a page not being rendered properly. Maybe you used it way back when the default engine was QtWebKit.
Like always, it's a second-class citizen. I spent a stupid six months trying to use Emacs like Vim. Emacs isn't a text editor. If you need to edit text as a rectangle of characters then you can drop in evil mode. Expecting to use Emacs control characters from evil mode is a bit like using Kanji to write English.
Evil (VIM emulation mode in Emacs) does not in any way behave like a second-class citizen. I use evil every single day and it's fantastic.
Emacs is a text editor, yes, among other things.
If anyone is reading this who hasn't tried Emacs, don't let takes like this put you off giving Emacs a try. Doom Emacs is a fantastic experience to get started with, but there are more minimal starter kits that give you just evil-mode to start.
I literally said you can use evil mode to edit text.
But trying to use Vim-inspired motion and editing in other modes is a terrible idea. Just learn how Emacs does it and stop thinking of everything as text. There is usually deeper semantic meaning behind the syntax that an Emacs mode will let you edit directly.
Doom Emacs was everything I wanted Neovim to be for me personally. I know it's a big war on the web, but for some of us evil-mode Emacs is the easy way to use Vim motions.
The only real disadvantage for me is that it's significantly easier to run Neovim on Windows (work).
Thanks for posting this. I had seen beartype several years ago but I don't believe it had the whole-module registration feature yet. I'm looking forward to trying both of the libraries since the ergonomics are better than decorating every function individually.
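For anyone else curious, this is roughly what the whole-package registration looks like as I understand beartype's import-hook API (worth double-checking the exact names against the current docs):

```python
# Sketch of package-wide registration via beartype's import hooks,
# versus decorating each function by hand.

# In your package's __init__.py -- type-checks everything imported under it:
from beartype.claw import beartype_this_package
beartype_this_package()

# The older per-function style, for comparison:
from beartype import beartype

@beartype
def add(x: int, y: int) -> int:
    return x + y
```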
I think the biggest, well, "con" I've seen is non-technical: the fear of JAX being killed by Google.
I mention in the blog as well [here](https://neel04.github.io/my-website/blog/pytorch_rant/#gover...) how important having an independent governance structure is. I'm sure for many big companies and labs, the lack of a promise of long-term, stable support is a huge dealbreaker.
I'm not sure how much Google bureaucracy would limit this, but have you raised the subject of forming an independent entity to govern JAX, much like PyTorch? I believe XLA is protected, as it's under TF governance. But perhaps there could be one for JAX's ecosystem as well, encompassing optax, equinox, flax, etc.
I can personally say I am not super concerned about it being killed. Google supported TF1 for quite a long time, and all these projects have a shelf life.
What concerned me about JAX, at a small company, is that it doesn't benefit from the network effects of almost everyone developing for it. E.g. There is no Llama 3.1 implementation in JAX afaict.
So as long as there is a need to pull from the rest of the world, the ecosystem will trump the framework.
Activity in the LLM space is slowing down though, so there is an opportunity to take the small set of what worked and port it to JAX and show people how good that world is.
I was just reading this too! I think it's a really interesting choice in the design space.
So to elucidate this a little bit, the trade-off is that this is now incompatible with e.g. `jax.grad` or `lax.scan`: you can't compose things in the order `discharge_effect(jax.grad(your_model_here))`, or put an effectful `lax.scan` inside your forward pass, etc. The effect-discharging process only knows how to handle traversing pytree structures. (And they do mention this at the end of their docs.)
This kind of thing was actually something I explicitly considered later on in Equinox, but in part decided against as I couldn't see a way to make that work either. The goal of Equinox was always absolute compatibility with arbitrary JAX code.
Now, none of that should be taken as a bash at Penzai! They've made a different set of trade-offs, and if the above incompatibility doesn't affect your goals then indeed their effect system is incredibly elegant, so certainly give it a try. (Seriously, it's been pretty cool to see the release of Penzai, which explicitly acknowledges how much it's inspired by Equinox.)
Author of Penzai here! In idiomatic Penzai usage, you should always discharge all effects before running your model. While it's true you can't do `discharge_effect(jax.grad(your_model_here))`, you can still do `jax.grad(discharge_effect(your_model_here))`, which is probably what you meant to do anyway in most cases. Once you've wrapped your model in a handler layer, it has a pure interface again, which makes it fully compatible with all arbitrary JAX transformations. The intended use of effects is as an internal helper to simplify plumbing of values into and out of layers, not as something that affects the top-level interface of using the model!
(As an example of this, the GemmaTransformer example model uses the SideInput effect internally to do attention masking. But it exposes a pure functional interface by using a handler internally, so you can call it anywhere you could call an Equinox model, and you shouldn't have to think about the effect system at all as a user of the model.)
It's not clear to me what the semantics of ordinary JAX transformations like `lax.scan` should be if the model has side effects. But if you don't have any effects in your model, or if you've explicitly handled them already, then it's perfectly fine to use `lax.scan`. This is similar to how it works in ordinary JAX; if you try to do a `lax.scan` over a function that mutates Python state, you'll probably hit an error or get something unexpected. But if you mutate Python state internally inside `lax.scan`, it works fine.
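As a plain-JAX illustration of that last point (nothing Penzai-specific, just ordinary tracing behaviour):

```python
# Mutating Python state that lives *outside* the scanned function only happens
# once, at trace time -- not once per scanned element.
import jax
import jax.numpy as jnp

log = []  # Python-level state outside the scan

def bad_step(carry, x):
    log.append(x)              # appends a single tracer during tracing
    return carry + x, carry

total, _ = jax.lax.scan(bad_step, jnp.array(0.0), jnp.arange(5.0))
print(len(log))                # 1, not 5 -- the "side effect" ran at trace time

def good_step(carry, x):
    running = carry + x        # state threaded through the carry is fine
    return running, running

total, history = jax.lax.scan(good_step, jnp.array(0.0), jnp.arange(5.0))
```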
I'll also note that adding support for higher-order layer combinators (like "layer scan") is something that's on the roadmap! The goal would be to support some of the fancier features of libraries like Flax when you need them, while still admitting a simple purely-functional mental model when you don't.
IIUC then Penzai is (deliberately) sacrificing support for higher-order operations like `lax.{while_loop, scan, cond}` or `diffrax.diffeqsolve`, in return for some of the other new features it is trying out (treescope, effects).
So it's slightly more framework-y than Equinox and will not be completely compatible with arbitrary JAX code. However I have already had a collaborator demonstrate that as long as you don't use any higher-order operations, then treescope will actually work out-of-the-box with Equinox modules!
So I think the answer to your question is "sort of":
* As long as you only try to inspect things that are happening outside of your `diffrax.diffeqsolve` then you should be good to go. And moreover can probably do this simply by using e.g. Penzai's treescope directly alongside your existing Equinox code, without needing to move things over wholesale.
* But anything inside probably isn't supported + if I understand their setup correctly can never be supported. (Not bashing Penzai there, which I think genuinely looks excellent -- I think it's just fundamentally tricky at a technical level.)
Author of Penzai here. I think the answer is a bit more nuanced (and closer to "yes") than this:
- If you want to use the treescope pretty-printer or the pz.select tree manipulation utility, those should work out-of-the-box with both Equinox and Diffrax. Penzai's utilities are designed to be as modular as possible (we explicitly try not to be "frameworky") so they support arbitrary JAX pytrees; if you run into any problems with this please file an issue!
- If you want to call a Penzai model inside `diffrax.diffeqsolve`, that should also be fully supported out of the box. Penzai models expose a pure functional interface when called, so you should be able to call a Penzai model anywhere that you'd call an Equinox model. From the perspective of the model user, you should be able to think of the effect system as an implementation detail. Again, if you run into problems here, please file an issue.
- If you want to write your own Penzai layer that uses `diffrax.diffeqsolve` internally, that should also work. You can put arbitrary logic inside a Penzai layer as long as it's pure.
- The specific thing that is not currently fully supported is:
(1) defining a higher-order Penzai combinator layer that uses `diffrax.diffeqsolve` internally,
(2) and having that layer run one of its sublayers inside the `diffrax.diffeqsolve` function,
(3) while simultaneously having that internal sublayer use an effect (like random numbers, state, or parameter sharing),
(4) where the handler for that effect is placed outside of the combinator layer.
This is because the temporary effect implementation node that gets inserted while a handler is running isn't a JAX array type, so you'll get a JAX error when you try to pass it through a function transformation.
This last case is something I'd like to support as well, but I still need to figure out what the semantics of it should be. (E.g. what does it even mean to solve a differential equation that has a local state variable in it?) I think having side effects inside a transformed function is fundamentally hard to get right!
I have a Tap Strap 2. (Although only as of a couple of weeks ago, so still pretty new to it.)
Answering your questions, split into pros and cons:
## Pros
You can customise the layout, including meta keys like control/alt/windows. I think the "more advanced mode" is basically just designing your own layout.
It's honestly very accurate for a keyboard that is basically just tapping your fingers against a table. It definitely misreads the odd input (or perhaps more accurately, it is sufficiently easy for me to waggle my fingers wrong), but it doesn't make so many mistakes that I'm really bothered by it.
I had an issue with the firmware on mine when I first got it, and the support team were super responsive. I really appreciated this.
The Android/iOS app is well designed for learning how to use it -- it comes with an excellently pedagogical typing tutor.
If you're curious, I'd definitely recommend giving it a go. IIUC the WPM most folks get with it is about equivalent to other one-handed keyboards, ~50 or so. I've found learning it to be pretty easy. (Substantially easier than learning a new layout on a regular keyboard, for some reason.)
It connects as a bluetooth keyboard, no special software required.
## Cons
Customising layouts is unfortunately a bit of a chore, being both tied-to-the-company and requiring additional devices. The layout must first be designed through a webpage on their site, which you need to log in to using your account. Then you need an Android/iOS device to actually connect to the TS2 and push the layout.
## Worth knowing
You have essentially five layers: default, double tap, triple tap, shift, and switch. Shift and switch are pretty similar to layers as you'll find them on most ergomech keyboards. However, the double and triple tap layers work by inputting the key corresponding to that chord on the default layer, then detecting that you're doing a double/triple tap, then inputting backspace, and then inputting the key corresponding to the chord on the double/triple tap layer. So if you'd like to use it for something like Vim, then that first input might actually mean something! If that will affect you, then in practice you can't use the double/triple tap layers, and you only have 60% of the real estate to fit your custom layout into. That is just enough to fit basically a whole keyboard -- I've got a custom layout that does this -- but it took me some careful thought about how to cram all those keys into so little space in a logical way.
The insertion of backspaces is interesting. One of the drawbacks of, say, Ardux is that some layers are accessed by holding down one key and then pressing others. It's adjustable, but it means there's a delay when you want to enter characters on such layers, as the keyboard has to wait and make sure you're actually holding. Conflicts aside, committing early and then correcting is interesting.
The Twiddler eventually got third-party tools; maybe the Tap Strap can as well. Being at the mercy of an app service for what is, for a large portion of users, a medical device is insane.
Helix:
https://github.com/helix-editor/helix/
Like Vim, but it already has LSP support etc. out of the box. Things are already there, so the config files are minimal.