Not yet, but it's fairly straightforward to implement the UI once you get the AI down. I've been working on an MVP here: https://docs.sweep.dev/autocomplete#next-edit-prediction-in-...


I think they're getting there, but they're missing big features like a high-quality "Apply" workflow and next-edit predictions.

I'm working with two of my friends to fill in the missing pieces as a JetBrains plugin: https://docs.sweep.dev/


That's cool that you're looking at those things. I hope we've made progress on "Apply" (and we're doing more). And as a heads-up, as you can imagine, we're looking at NEP.


Ah, we do support PhpStorm. The docs will be updated.

We only support Sonnet 3.5/3.7 (we still go back and forth on which one we like), and we also support on-prem deployments!


Fixing this! It was a prompting bug related to Claude Sonnet.


This is great! I see it also supports an 'ignore' parameter.


That sounds great, I'm going to see how Sweep does on this issue: https://github.com/sweepai/sweep/issues/3333


I think Python objects also have a __hash__ method available on them that can be used for hashing. That should be much faster than sha1, but for this use case I'm not sure how much it really matters. Would be interesting to benchmark!
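
A quick timeit sketch of how you might benchmark it. One caveat worth knowing: hash() on strings is seeded per process (PYTHONHASHSEED), so its values aren't stable across runs, which matters if the keys are for an on-disk cache.

    import hashlib
    import inspect
    import timeit

    def example(a, b):
        return a + b

    src = inspect.getsource(example)

    # hash() is fast but randomized per process, so keys wouldn't
    # survive a restart; sha1 is stable across runs and machines.
    t_hash = timeit.timeit(lambda: hash(src), number=100_000)
    t_sha1 = timeit.timeit(
        lambda: hashlib.sha1(src.encode()).hexdigest(), number=100_000
    )
    print(f"hash(): {t_hash:.3f}s, sha1: {t_sha1:.3f}s per 100k calls")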


The version input makes sense, though I could also see some developers disliking that UX because of its verbosity. But to deliberately invalidate the cache, you have to make a manual effort in either case.


To me, it's more that invalidating the cache without it requires knowledge of implementation details (i.e., where the cache is stored and how the cache files are named) that ideally shouldn't have to "leak" for the cache to be generally useful.

(But hey, we are talking about what is famously one of the two hard problems in comp sci, so (respectful) disagreements on how best to do it should be expected :-)
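
To make the version idea concrete, here's a minimal sketch (assuming a JSON-on-disk cache; file_cache and the file layout are made up for illustration, not the library's actual API). Bumping version changes every key, so invalidation never requires knowing where the files live:

    import functools, hashlib, json, os

    def file_cache(version=0, cache_dir=".cache"):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                # version is part of the key, so bumping it misses
                # every existing entry.
                raw = json.dumps([fn.__name__, version, args, kwargs],
                                 sort_keys=True, default=str)
                key = hashlib.sha1(raw.encode()).hexdigest()
                path = os.path.join(cache_dir, key + ".json")
                if os.path.exists(path):
                    with open(path) as f:
                        return json.load(f)
                result = fn(*args, **kwargs)
                os.makedirs(cache_dir, exist_ok=True)
                with open(path, "w") as f:
                    json.dump(result, f)
                return result
            return wrapper
        return decorator

    @file_cache(version=1)  # bump to 2 to invalidate all cached results
    def expensive(x):
        return x ** 2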


I appreciate the sentiment! Makes a ton of sense.


Thread safety is a big issue with ours: we'll run into problems when two different processes attempt to write to the same ___location, or we'll get a bad read. This is a better solution for large-scale workloads.

Ours is meant more for single-process scripts, like an LLM workflow.
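
(For reference, a sketch of one common single-machine mitigation, assuming one file per cache entry: write to a temp file in the same directory and atomically rename it into place, so a reader sees either the old entry or the new one, never a torn write.)

    import os, tempfile

    def atomic_write(path: str, data: bytes) -> None:
        dir_ = os.path.dirname(path) or "."
        fd, tmp = tempfile.mkstemp(dir=dir_)
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
            os.replace(tmp, path)  # atomic rename on POSIX
        except BaseException:
            os.unlink(tmp)  # clean up the temp file on failure
            raise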


+1. We considered traversing the function's dependencies to key the cache on (not just the function's own source code), but decided to leave this as a constraint. Otherwise we'd also end up blowing up the cache when we didn't want it to happen.
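
A tiny sketch of that constraint, assuming the key comes from inspect.getsource: only the decorated function's own body feeds the key, so editing a helper it calls won't invalidate entries, while editing the function itself will.

    import hashlib, inspect

    def source_key(fn):
        # Callees are deliberately excluded from the key, so a refactor
        # elsewhere in the codebase doesn't nuke the whole cache.
        return hashlib.sha1(inspect.getsource(fn).encode()).hexdigest()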


Making the __dict__ opt-in makes it a lot more user-friendly at the expense of a little verbosity. That makes sense.

These tips make sense. We often use named args in our function calls (not using them has caused so many bugs), but we don't really enforce the order. Copilot doesn't always get it right either.
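
For what it's worth, Python can enforce named args at the language level: parameters after a bare * are keyword-only, so callers can't pass them positionally (or silently reorder them).

    def chat(prompt, *, model, temperature=0.0):
        return f"{model}@{temperature}: {prompt}"

    chat("hi", model="sonnet-3.7")   # OK
    # chat("hi", "sonnet-3.7")       # TypeError: too many positional args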

By moving inspect.getsource out of the wrapper, do you mean calling it when the module is imported? I'm curious how that improves performance.


Yeah, I too try to avoid positional args as much as possible; they're a huge source of bugs and wasted time, especially when refactoring code.

Re: inspect.getsource, I'm not sure it'd be a huge performance impact, but if it's in the wrapper fn it will get called every time the function is called, while if it's outside it will be called only when the decorator runs (e.g. when the module containing the decorated function is imported).

E.g.: https://gist.github.com/mpeg/ff1d99fde06f39916b5aaadd76b534f...

EDIT: On a quick test over 100k function calls, with inspect.getsource inside the wrapper it runs in 2.7s on my Apple M2, and that's not even including the md5 hash, so I suspect this change should dramatically improve performance for you.
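
Roughly, the difference looks like this (a sketch, not your actual code):

    import functools, hashlib, inspect

    def cached(fn):
        # Runs once, when the decorator is applied (i.e. at import time).
        src_key = hashlib.md5(inspect.getsource(fn).encode()).hexdigest()

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # src_key is closed over here -- no per-call getsource or md5.
            return fn(*args, **kwargs)
        return wrapper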


This is great, thanks for the suggestion! We've just updated our docs and code with this change: https://github.com/sweepai/sweep/pull/3332

