
I'm using SolidJS for Infumap (https://github.com/infumap/infumap), which is getting pretty big.

Two main comments that come to mind:

Because the state -> DOM mapping is non-trivial for my application, I ended up writing my own virtual DOM diffing code, which is a primary facet of what React provides out of the box. I appreciate the ease of being able to circumvent this where it's not necessary and performance considerations dominate, though I admit I've not felt the need to do that anywhere yet.
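For readers unfamiliar with what React's reconciler buys you, here is a minimal sketch of a keyed-children diff. This is hypothetical illustration code, not Infumap's actual implementation, and it deliberately skips the parts a real diff must handle (attribute/text patching, minimizing moves):

```typescript
// Sketch of the keyed-children diff a virtual DOM performs on each render:
// compare old and new child keys and emit the operations needed to
// reconcile the real DOM.
type Op = { kind: "remove" | "insert" | "move"; key: string; index: number };

function diffKeys(oldKeys: string[], newKeys: string[]): Op[] {
  const ops: Op[] = [];
  const newSet = new Set(newKeys);
  const oldIndex = new Map(oldKeys.map((k, i) => [k, i] as const));

  // Children whose key vanished are removed (index = old position).
  oldKeys.forEach((k, i) => {
    if (!newSet.has(k)) ops.push({ kind: "remove", key: k, index: i });
  });

  // Remaining children are inserted at, or moved to, their new position.
  // (Naive: comparing raw indices can report spurious moves after a
  // removal; production reconcilers are smarter, e.g. using a
  // longest-increasing-subsequence approach.)
  newKeys.forEach((k, i) => {
    if (!oldIndex.has(k)) ops.push({ kind: "insert", key: k, index: i });
    else if (oldIndex.get(k) !== i) ops.push({ kind: "move", key: k, index: i });
  });
  return ops;
}
```

SolidJS's fine-grained reactivity usually makes this unnecessary, since updates target the exact DOM nodes that depend on a signal; a diff like this only becomes attractive when the state -> DOM mapping is too irregular for that to fall out naturally.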

The AI training data set for React is much larger. The models seem to do fine with SolidJS, though I suspect there is meaningful benefit to using React from this point of view.

Overall, I'm happy with where I'm at and I prefer the SolidJS way of thinking, though if I were to do it all over again, I'd probably just go with React for the above two reasons.


I was not expecting to lose karma for this comment!

I have a couple of years familiarity with SolidJS and thought the two key insights that came to mind from building a large project in it would have positive value.

Apparently not!


[flagged]


Your tone sucks. You're being rude about them not answering questions they weren't asked yet. There are much nicer ways of going about this.

Edit: Oh. All of your replies are like this. Please take a chill pill.


Aduro’s tech works with hard-to-recycle, dirty, mixed plastic. Search YouTube for an interview with Eric Appelman for more info. Some of the retail investor coverage is cringeworthy, but my take is that the tech is likely as good as described (disclosure: I’m an investor).


Mixed plastic is definitely impressive, but how is it separated? I imagine a lot of it must be done by hand? And then washed with water? It seems both labor-intensive and energy-intensive.


No, they pass it through successively more intensive process conditions, each of which selects for a different plastic type. There's no need to wash: their process is water-based, and some contamination from organic material actually helps saturate the output product.


Apparently 95% of the test sample can be recovered. If it's dirty and mixed, how much TPA can be recovered, and is the sludge chemically inert and landfillable?


Results will vary a lot with all the parameters (of which there are many). They don’t disclose all the details, but I know they have done thousands of tests across the parameter space, and I know they believe they can process sludge produced from other processes. The industry generally is very opaque, I think for competitive reasons and also because it’s so complex/nuanced.


Worth noting that Kafka is getting queues: https://cwiki.apache.org/confluence/display/KAFKA/KIP-932%3A...


And also Rabbit has streams[1]. There's a lot of overlap.

[1] https://www.rabbitmq.com/streams.html


Michael Pettis did a great interview recently that covers why growth in China is likely to stall: https://www.youtube.com/watch?v=WE5VczIFGZA


Stall, no. Moderate, yes.

I think a large part depends on whether they are able to shift their infrastructure spending away from unprofitable or marginally profitable ventures like bridges and into high-tech industry: nuclear, more battery and solar tech, semiconductors and lithography, etc.

They are certainly capable (we already saw that with EVs, batteries, and solar), but the proof will be in the pudding.


Disturbing observation: 5 years ago, the trends that seemed most important to me were all technical; I really paid no attention to politics. Today, they are mostly geopolitical.


Could it be an ennui about tech news, resulting from the Tower of Babel of great projects and great ideas, that haven't reliably improved our IT or development experiences?


Let me guess: 5 years ago you had an order of magnitude (or two) less money tied up in markets and illiquid investments.


It is the WebGPU API (targeting both the web and native).


In the context of a company, the higher up you get (on a technical track), the more you need to operate across different projects, technologies, etc.; being able to jump into new things is mandatory/expected. But having deep knowledge of something important to the org gives you a foundation on which to succeed. I would say that specialising perhaps allows you to stand out more efficiently as a lower-level IC.


You can tune Kafka down fairly well if you know what you're doing, but it's not optimised for that out of the box. Or just use Confluent Cloud, which is fully managed and scales down as low as you want (it costs cents per GB). Disclosure: I work for Confluent.
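For what it's worth, "tuning Kafka down" mostly means shrinking the broker defaults, which assume a large multi-node cluster. A sketch of the kind of single-broker settings involved (illustrative values only, not a recommendation):

```properties
# server.properties — sketch of a scaled-down single-broker setup
num.partitions=1                            # default partition count for new topics
log.segment.bytes=16777216                  # 16 MiB log segments (default is 1 GiB)
log.retention.bytes=268435456               # cap each partition's log at 256 MiB
log.retention.hours=24                      # keep data for a day, not a week
offsets.topic.replication.factor=1          # internal topics on a single broker
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
```

Shrinking the JVM heap via KAFKA_HEAP_OPTS is usually part of the same exercise.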


This is great advice IMO: let someone else manage your Kafka at scale. I feel compelled to mention that other managed Apache Kafka services are available, but I agree that it makes sense to offload the complexity if possible! Disclosure: I work at Aiven, which offers managed Apache Kafka on whatever cloud you are using.


Thank you for disclosing and not disclaiming.


Why would someone choose Confluent Cloud over the Kafka offerings of Azure/AWS/GCP?


Confluent Cloud is a truly 'fully managed' service, with a serverless-like experience for Kafka. For example, you have zero infra to deploy, upgrade, or manage. The Kafka service scales in and out automatically during live operations, you have infinite storage if you want it (via transparent tiered storage), etc. As the user, you just create topics and then read/write your data. Similar to a service like AWS S3, pricing is pay-as-you-go, including the ability to scale to zero.

Kafka cloud offerings like AWS MSK are quite different, as you still have to do much of the Kafka management yourself. It's not a fully managed service. This is also reflected in the pricing model, as you pay per instance-hours (= infra), not by usage (= data). Compare with AWS S3: you don't pay for instance-hours of S3 storage servers there, nor do you have to upgrade or scale in/out your S3 servers (you don't even see 'servers' as an S3 user, just like you don't see Kafka brokers as a Confluent Cloud user).

Secondly, Confluent is available on all three major clouds: AWS, GCP, and Azure. And we also support streaming data across clouds with 'cluster linking'. The other Kafka offerings are "their cloud only".

Thirdly, Confluent includes many additional components of the Kafka ecosystem as (again) fully managed services. This includes e.g. managed connectors, managed schema registry, and managed ksqlDB.

There's a more detailed list at https://www.confluent.io/confluent-cloud/ if you are interested. I am somewhat afraid this comment is coming across as too much marketing already. ;-)

Disclaimer: I work at Confluent.


Confluent Cloud has some nice point-and-click UI for creating associated Kafka resources like Schema Registries and Connect Clusters.

My preference is MSK: I'm very comfortable with it, and it gives me vanilla Kafka in AWS at a good price, with auto-updates.


One nice thing about Confluent Cloud vs. MSK is that the minimum cost of a Confluent Cloud cluster is far, far lower than the minimum cost of an MSK cluster.


Is there a GCP offering that isn't just Confluent Cloud billed via Google?


You can use Pub/Sub Lite: https://cloud.google.com/pubsub/lite/docs

With a Kafka compatibility shim: https://github.com/googleapis/java-pubsublite-kafka

Disclaimer: I work on GCP.


You can get managed Kafka on Aiven (disclaimer: I work there) on GCP, either through the marketplace or directly through Aiven.


I was affiliated with a company in Singapore that had its bank account closed, I believe due to FATCA, even though at the time no one involved with the company had ever been to the US or had any ties to it. It was simply that the size of the account was too small to justify the associated administrative burden. FATCA has an impact well beyond US citizens, which is pretty annoying.


The best trick I know for this is to have another source of noise that you can tolerate (e.g. a fan). Noise-cancelling headphones are useful if you sleep on your back...

