Maybe just have commands auto-execute if you click on links in the existing text? That would allow someone to experience the entire interface on a touch device! :>

E.g. there is “contact” on the page, bold and underlined, but you cannot click on it to do anything.


Sounds like the LLM you used when writing this slop comment struggled with the problem too. :>

Same experience with my personal benchmarks. Generally unimpressed with Qwen3.

o1-preview had this same issue too! You’d give it a long conversation to summarize, and if the conversation ended with a question, o1-preview would answer that, completely ignoring your instructions.



Likely not the case, given that (1) the body was decapitated peri-mortem (by a human), and (2) apparent structural damage was limited to a single bite mark (on the ilium), with no signs of “taphonomic” damage (indicating limited soft-tissue trauma)? [1]

(1) > 6DT19 had been decapitated with a single cut between the second and third cervical vertebrae, delivered from behind.

(2) > Additional [to the decapitation] peri-mortem trauma was present in the form of a series of small depressions on both sides of the pelvis [..]

> Taphonomic damage alone is also unlikely due to the appearance and margins of the lesions, which are the same colour as the surrounding bone (this differs if the break is post-mortem; [56]), and the adherence of bony fragments at the injury site (which occurs when soft tissue is present).

[1]: https://journals.plos.org/plosone/article?id=10.1371/journal...



> GoDaddy is actively experimenting to integrate image generation so customers can easily create logos that are editable [..]

I remember meeting someone on Discord 1-2 years ago (?) who was working on a GoDaddy effort to let customers generate icons using bespoke foundation image-gen models. I suppose a bespoke model at that scale is ripe for replacement by gpt-image-1, given the latter’s instruction-following ability / steerability?


Okay, but why did `LeaveCriticalSection` change? Compiler changes, new features, refactoring, etc.? That’s the most interesting part, and it’s absent!

It’s not just you. The speedup is an artefact of the CFG (Classifier-Free Guidance) the model uses. The other problem is that the speedup isn’t constant; it actually accelerates as the generation progresses. The Parakeet paper [1] (from which OP lifted their model architecture almost directly [2]) gives a fairly robust treatment of the matter:

> When we apply CFG to Parakeet sampling, quality is significantly improved. However, on inspecting generations, there tends to be a dramatic speed-up over the duration of the sample (i.e. the rate of speaking increases significantly over time). Our intuition for this problem is as follows: Say that our model is (at some level) predicting phonemes and the ground truth distribution for the next phoneme occurring is 25% at a given timestep. Our conditional model may predict 20%, but because our unconditional model cannot see the text transcription, its prediction for the correct next phoneme will be much lower, say 5%. With a reasonable level of CFG, because [the logit delta] will be large for the correct next phoneme, we’ll obtain a much higher final probability, say 50%, which biases our generation towards faster speech. [emphasis mine]
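
For intuition, here’s a minimal sketch of how CFG combines the two sets of logits (PyTorch; my own illustration, not Dia’s or Parakeet’s actual code, and the `scale` value is an assumption):

```python
import torch

def cfg_logits(cond_logits: torch.Tensor,
               uncond_logits: torch.Tensor,
               scale: float = 3.0) -> torch.Tensor:
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction towards the conditional one. Tokens the conditional
    # model favours far more than the unconditional one (the "correct
    # next phoneme" above) get the biggest boost, which is exactly
    # what biases generation towards faster speech.
    return uncond_logits + scale * (cond_logits - uncond_logits)
```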

Parakeet details a solution to this, though this was not adopted (yet?) by Dia:

> To address this, we introduce CFG-filter, a modification to CFG that mitigates the speed drift. The idea is to first apply the CFG calculation to obtain a new set of logits as before, but rather than use these logits to sample, we use these logits to obtain a top-k mask to apply to our original conditional logits. Intuitively, this serves to constrict the space of possible “phonemes” to text-aligned phonemes without heavily biasing the relative probabilities of these phonemes (or for example, start next word vs pause more). [emphasis mine]
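
And a sketch of the CFG-filter variant under the same assumptions (this is my reading of the quoted passage, not the actual Parakeet/Dia implementation; `k` is a free parameter):

```python
import torch

def cfg_filter_logits(cond_logits: torch.Tensor,
                      uncond_logits: torch.Tensor,
                      scale: float = 3.0,
                      k: int = 32) -> torch.Tensor:
    # Compute the CFG-combined logits as usual...
    guided = uncond_logits + scale * (cond_logits - uncond_logits)
    # ...but use them only to build a top-k mask.
    topk_idx = guided.topk(k, dim=-1).indices
    mask = torch.full_like(cond_logits, float("-inf"))
    mask.scatter_(-1, topk_idx, 0.0)
    # Sample from the *conditional* logits restricted to that mask, so
    # relative probabilities among the surviving tokens are untouched.
    return cond_logits + mask
```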

The paper contains audio samples with ablations you can listen to.

[1]: https://jordandarefsky.com/blog/2024/parakeet/#classifier-fr...

[2]: https://news.ycombinator.com/item?id=43758686


Yea, they mention a “perplexity drop” relative to naive quantization, but that’s meaningless to me.

> We reduce the perplexity drop by 54% (using llama.cpp perplexity evaluation) when quantizing down to Q4_0.
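
For concreteness, here’s what that claim means arithmetically, with made-up numbers (these perplexities are invented purely for illustration, not from the announcement):

```python
# All numbers hypothetical -- purely to illustrate the claim.
ppl_bf16     = 10.00  # full-precision baseline perplexity
ppl_naive_q4 = 10.80  # naive quantization to Q4_0
ppl_qat_q4   = 10.37  # QAT checkpoint quantized to Q4_0

naive_drop = ppl_naive_q4 - ppl_bf16       # 0.80
qat_drop   = ppl_qat_q4 - ppl_bf16         # ~0.37
reduction  = 1 - qat_drop / naive_drop     # ~0.54, i.e. the "54%"
print(f"perplexity-drop reduction: {reduction:.0%}")  # -> 54%
```

A relative delta like this says nothing about absolute quality, hence the wish for real benchmarks.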

Wish they showed benchmarks / added quantized versions to the arena! :>

