
Those new language constructs are exactly what's enabling the performance gains, by giving reliable information to the compiler. Projects like numba have clearly demonstrated the limitations of trying to compile pure Python.
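
For example, a minimal sketch (the specific functions here are my own illustration, not from any of these projects): numba's @njit happily compiles a typed numeric loop, but gives up as soon as the code relies on Python's dynamism:

    import numba
    import numpy as np

    @numba.njit  # nopython mode: fully compiled, no object-mode fallback
    def dot(xs, ys):
        total = 0.0
        for i in range(xs.shape[0]):
            total += xs[i] * ys[i]
        return total

    dot(np.ones(1000), np.ones(1000))  # compiles and runs fine

    @numba.njit
    def lookup(d):
        return d["key"]

    lookup({"key": object()})  # TypingError: a plain dict of arbitrary
                               # objects can't be typed by the compiler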


It's only closed source for now, with plans to open-source the language once it's more mature - similar to LLVM in its early days. I'm not sure the website says so explicitly anywhere, but Chris Lattner has stated it several times.


I use the dwindle layout, which is similar to bspwm in automatic splitting mode. One advantage is that you can drag and drop windows onto other nodes, which are then split appropriately; it's surprisingly neat when you have a lot of stuff open. The easiest way to get a feel for the differences is to just try it out or watch some videos of it on r/unixporn.
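
For the curious, the automatic mode boils down to a simple rule: each new window splits the most recently created leaf of a binary tree along its longer side. A toy Python sketch of the idea (my own illustration, nothing like the actual WM code):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        x: float
        y: float
        w: float
        h: float
        window: Optional[str] = None     # set on leaves only
        first: Optional["Node"] = None   # split children
        second: Optional["Node"] = None

    def insert(node: Node, win: str) -> None:
        if node.first is not None:
            insert(node.second, win)  # descend to the newest leaf
        elif node.window is None:
            node.window = win  # empty tree: root holds the first window
        else:
            # split this leaf along its longer side; the existing window
            # keeps the first half, the new window gets the second
            if node.w >= node.h:
                half = node.w / 2
                node.first = Node(node.x, node.y, half, node.h, node.window)
                node.second = Node(node.x + half, node.y, half, node.h, win)
            else:
                half = node.h / 2
                node.first = Node(node.x, node.y, node.w, half, node.window)
                node.second = Node(node.x, node.y + half, node.w, half, win)
            node.window = None

    def leaves(node: Node):
        if node.first is None:
            yield node
        else:
            yield from leaves(node.first)
            yield from leaves(node.second)

    root = Node(0, 0, 1920, 1080)
    for w in ["term", "browser", "editor"]:
        insert(root, w)
    for leaf in leaves(root):
        print(leaf.window, leaf.x, leaf.y, leaf.w, leaf.h)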


I'm curious, why not use LLVM?


I'm a big GPL advocate, and I publish everything I can under GPLv3. I also want the code I publish to be buildable with openly available tools. In other words, I want my software to be free and easily reproducible. This is the first aspect.

The second aspect is that I don't like the behavior of the LLVM ecosystem, which is subtly trying to EEE (embrace, extend, extinguish) the GCC toolchain.

Lastly, I don't want a project I developed or cloned to only be buildable with "clang/llvm/20201024+somecompanyClosedGitBuild20210514+bp1", available in binary form for a single distribution on a single architecture. I have experienced SDKs and other software like that, and I don't want to go through it again, or put anyone else through those hoops.


> The second aspect is that I don't like the behavior of the LLVM ecosystem, which is subtly trying to EEE (embrace, extend, extinguish) the GCC toolchain.

As opposed to GCC, which not-very-subtly added extensions to C that hurt portability?

GCC really shot itself in the foot by making its architecture so difficult to use as a library, even from GPL code.


The M1 (laptops) do emulate x86, and the M1 (chip) has a few x86-specific instructions to improve emulation performance.


I love how you added "laptop" to make your statement... still false. There is a program running on macOS that literally recompiles x86 binaries to ARM, and then the M1 executes the ARM code. The M1 does not execute x86 binaries, period. It only runs ARM binaries.


No, the parent comment isn't false, even if the wording could be more precise. It is true that M1 CPUs do not execute x86 instructions, but the machines do, in effect, execute x86 binaries: Rosetta 2 translates them to ARM first. Also, the M1 does have hardware support for x86's TSO memory ordering to improve the performance of translated code.
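
You can even observe the translation from inside a process. A small Python check (assuming macOS; sysctl.proc_translated is Apple's documented way to ask whether the current process is being translated by Rosetta 2):

    import subprocess

    def running_under_rosetta() -> bool:
        # sysctl.proc_translated is 1 for a Rosetta 2-translated process
        # and 0 for a native one; on Intel Macs the key is missing
        # entirely, which makes sysctl exit non-zero
        try:
            out = subprocess.run(
                ["sysctl", "-n", "sysctl.proc_translated"],
                capture_output=True, text=True, check=True,
            )
            return out.stdout.strip() == "1"
        except subprocess.CalledProcessError:
            return False

    print(running_under_rosetta())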


I used this blog post as the basis for a small library to control my own closely related LB130 lightbulbs from the command line a while ago [0]; it works like a charm.

[0]: https://github.com/falkaer/tplink-lb130-api
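
The gist of the protocol, for anyone curious: JSON commands obfuscated with TP-Link's rolling-XOR "autokey" scheme (initial key 171) and sent to the bulb over UDP port 9999. A minimal Python sketch (command names from memory; see the repo for the real thing):

    import json
    import socket

    def encrypt(plain: str) -> bytes:
        # each byte is XORed with the previous ciphertext byte,
        # starting from the constant key 171
        key = 171
        out = bytearray()
        for ch in plain.encode():
            key ^= ch
            out.append(key)
        return bytes(out)

    def decrypt(cipher: bytes) -> str:
        key = 171
        out = bytearray()
        for b in cipher:
            out.append(key ^ b)
            key = b
        return out.decode()

    def set_power(ip: str, on: bool) -> dict:
        cmd = {"smartlife.iot.smartbulb.lightingservice":
               {"transition_light_state": {"on_off": int(on)}}}
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(2)
            sock.sendto(encrypt(json.dumps(cmd)), (ip, 9999))
            resp, _ = sock.recvfrom(2048)
        return json.loads(decrypt(resp))

    print(set_power("192.168.1.50", True))  # hypothetical bulb IP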


With PyCharm I've managed to hide every single part of the UI besides the editor itself (I only needed one plugin for the last bit, a main menu toggler), and I just use keybinds to show parts as I need them. So it's definitely a problem you can solve if you like.


You are given money (5,435 DKK monthly after tax) for attending university in Denmark, and you pay no tuition. This is in stark contrast to most other countries.


How unfair indeed of them to release the model as open source, with an accompanying freely available paper explaining all the details of it.


If the solution program is sufficiently complex (as one would imagine it to be in the non-trivial cases where we use AI, e.g. computer vision, speech synthesis, etc.), what makes you think it is going to be more lightweight than running inference on an "AI model"? Furthermore, what guarantee do you have that the discovered solution is computationally efficient at all?

