
Just amazed that ‘better testing’ isn’t one of the takeaways in the summary.

Just amazed. Yeah, suggesting ‘write code carefully’ as if that’ll fix it is a rookie mistake.

So, so frustrating when developers treat users’ machines like their test bed!


Agreed. The head of line problem is worth solving for certain use cases.

But today, all streaming systems (or workarounds) with per-message-key acknowledgements incur O(n^2) cost in either computation, bandwidth, or storage for n messages. This applies to Pulsar, for example, which is often used for this feature.

Now, now, this degenerate time/space complexity might not show up every day, but when it does, you’re toast, and you have to wait it out.

My colleagues and I have studied this problem in depth for years, and our conclusion is that a fundamental architectural change is needed to support scalable per-message-key acknowledgements. Furthermore, the architecture will fundamentally require a sorted index, meaning that any such queuing/streaming system will process n messages in O(n log n).

We’ve wanted to blog about this for a while but never found the time. I hope this comment helps if you’re thinking of relying on per-message-key acknowledgments: you should expect sporadic outages/delays.
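
To make the cost concrete, here is a minimal sketch (illustrative Python only, not Pulsar's or any other broker's actual code; the cursor model and function names are assumptions for illustration). With out-of-order per-message acks, a consumer cursor can only advance to the lowest still-unacknowledged offset. A naive tracker re-scans the outstanding set on every ack, O(n) each and O(n^2) total, while a sorted index (approximated below by a min-heap with lazy deletion) makes each ack O(log n), i.e. O(n log n) for n messages.

    # Illustrative only -- not any real broker's code. Shows where O(n^2)
    # comes from when per-message acknowledgements arrive out of order and
    # the cursor may only advance past the lowest unacknowledged offset.
    import heapq
    import random

    def naive_cursor(ack_order):
        """Linear scan for the minimum on every ack: O(n) per ack, O(n^2) total."""
        outstanding = set(ack_order)          # delivered, not yet acknowledged
        cursor = 0
        for offset in ack_order:
            outstanding.discard(offset)
            cursor = min(outstanding, default=len(ack_order))  # O(n) scan
        return cursor

    def indexed_cursor(ack_order):
        """Sorted index (min-heap + lazy deletion): O(log n) per ack, O(n log n) total."""
        index = sorted(ack_order)             # stand-in for the broker's sorted index
        acked = set()
        cursor = 0
        for offset in ack_order:
            acked.add(offset)
            while index and index[0] in acked:    # each offset is popped exactly once
                cursor = heapq.heappop(index) + 1
        return cursor

    if __name__ == "__main__":
        offsets = list(range(10_000))
        random.shuffle(offsets)               # out-of-order per-message acks
        assert naive_cursor(offsets) == indexed_cursor(offsets) == 10_000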


As someone who has worked on a systems programming language for a long time, my strongest advice would be to avoid syntactic or semantic choices that are different just to be different, unless they're really motivated by making the systems aspect better or by making the language more self-coherent. Having surprises and new syntax to learn is a barrier to entry and probably won't impress anyone.

That is to say, do focus on systems problems. Key ones I identified are efficient data representation, avoiding needless memory churn/bloat, and talking directly to lower-level software/hardware, like the kernel.

Focus on systems programming and not on syntactic niceties or oddities.


The problem with Clean Code and other architecture-porn methodologies is that they work until they don't. You might follow clean code until a new teammate comes along with their own opinions about your code, your architecture, or even your choice of framework.

I have a strong feeling that architecture methodologies work as long as you are a very small and super-committed team (see: a startup where you are all invested in the success of the product), but they fall apart in a bigger corporate environment where acquisitions, mergers, and rewrites happen every other month and you need to adopt the new framework/language/methodology/scrum/kanban/whatever.


I'm doing theoretical research in topological quantum computing.

The idea behind topological quantum computing is to utilize quantum materials whose low-energy physics looks like an error-correcting code. Since these systems are very large (a macroscopic number of atoms), the error rates are (theoretically) very low, i.e. the qubit is fault-tolerant by construction, without any additional error correction. In reality, we do not know how good these qubits will be at finite temperature, with real-life noise, etc.

Moreover, these states do not just occur in nature by themselves, so constructing them requires engineering, and this is what Microsoft is trying to do.

Unfortunately, Majoranas in nanowires have some history of exaggerated claims and data manipulation. The Twitter account of Sergey Frolov [1], one of the people behind the original Majorana zero-bias-peak paper, was my go-to source for that, but it looks like he has deleted it.

There were also some concerns about the previous Microsoft paper [2,3], as well as the unusual decision to publish it without the details needed to reproduce it [4].

In my opinion, Microsoft does solid science; it's just that the problem they're trying to solve is very hard, and there are many ways in which the results can be misleading. I also think it is likely that they are making progress on Majoranas, but I would be surprised if they are able to show quantum memory / single-qubit gates soon.

[1] https://spinespresso.substack.com/p/has-there-been-enough-re...

[2] https://x.com/PhysicsHenry/status/1670184166674112514

[3] https://x.com/PhysicsHenry/status/1892268229139042336

[4] https://journals.aps.org/prb/abstract/10.1103/PhysRevB.107.2...


Hi, I work on Dart and was one of the people working on this feature. Reposting my Reddit comment to provide a little context:

I'm bummed that it's canceled because of the lost time, but also relieved that we decided to cancel it. I feel it was the right decision.

We knew the macros feature was a big risky gamble when we took a shot at it. But looking at other languages, I saw that most started out with some simple metaprogramming feature (preprocessor macros in C/C++, declarative macros in Rust, etc.) and then later outgrew them and added more complex features (C++ template metaprogramming, procedural macros in Rust). I was hoping we could leapfrog that whole process and get to One Metaprogramming Feature to Rule Them All.

Alas, it is really hard to coherently introspect on the semantics of a program while it is still being modified without also seriously regressing compiler performance. It's probably not impossible, but it increasingly felt like the amount of work to get there was unbounded.

I'm sad we weren't able to pull it off but I'm glad that we gave it a shot. We learned a lot about the problem space and some of the hidden sharp edges.

I'm looking forward to working on a few smaller more targeted features to deal with the pain points we hoped to address with macros (data classes, serialization, stateful widget class verbosity, code generation UX, etc.).
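
For anyone unfamiliar with the "data classes" pain point mentioned above: the goal is for a single declaration to generate the equality, hashing, and string-conversion members you would otherwise write and maintain by hand. The sketch below is not Dart; it is just a minimal analogy using Python's standard-library dataclasses module to show the kind of boilerplate such a targeted feature, or a macro, generates for you.

    # Python analogy only: what a "data class" feature (or a macro) generates
    # from a single declaration -- __init__, __eq__, __hash__ and __repr__ --
    # instead of that boilerplate being written by hand.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Point:
        x: int
        y: int

    p = Point(1, 2)
    assert p == Point(1, 2)                  # structural equality, generated
    assert hash(p) == hash(Point(1, 2))      # hashing, generated since frozen=True
    print(p)                                 # Point(x=1, y=2): __repr__, generated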


If the profits move out, the opportunity cost is that this money isn't spent in the UK, which hits GDP, which is how growth is measured.

It is far poorer after you account for the brain drain and the opportunity cost of the brains that choose to remain.

Ordinary people who don't know anything about AI will expect it to be able to perform logical deduction. Because they experience human-like speech, they ascribe human understanding to that speech.

>For the past year I've been looking at all the progress happening in ML/AI and each day I'm more convinced that there's a lot of game-changing stuff that will come out of it (what we're seeing with Stable Diffusion and GPT3 are some examples of this).

Wow, I must admit that out of all the big branches of computer science, AI/ML is the least exciting to me. I don't know; all that unreliability just puts me off.

I do agree that it's better to have something 90% automated instead of 0% or 40%, but the impossibility of getting it to 100% is annoying.

It's been over a decade of huge hype around AI/ML, and yet I feel like the biggest applied AI/ML that affects my life, directly or indirectly, is the search & ad industry or things like chatbots, and it's meh.

I don't believe in autonomous cars based on computer vision.


Americans voted for Obama because of hope; did good times come about? I don't think so.

Then Americans voted for Trump because of fear; did bad times come about? Definitely.

You might be onto something with the fear part, but I don't think the hope part really works. People buying into hope can easily just be buying into misguided hope not based on reality. People buying into fear means things really have gone bad.


You sound so grumpy. People want cool shots from drones for themselves, deal with it.
