Two main comments that come to mind:

Because the state -> DOM mapping is non-trivial for my application, I ended up writing my own virtual DOM diffing code, a primary facet of React. I appreciate the ease of being able to circumvent that diffing where it isn't necessary and performance considerations dominate, though I admit I've not felt the need to do so anywhere yet.
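Roughly the kind of thing I mean, as a simplified sketch (the helper name and the data-key convention are just for illustration, not my actual code): reuse an existing DOM node when its key matches, create or remove nodes otherwise.

    // Simplified sketch: reconcile a keyed list of items against a container's
    // existing children, reusing DOM nodes whose data-key matches.
    type Item = { key: string; text: string };

    function reconcileList(container: HTMLElement, items: Item[]): void {
      // Index the current children by their data-key attribute.
      const existing = new Map<string, HTMLElement>();
      for (const child of Array.from(container.children)) {
        const el = child as HTMLElement;
        if (el.dataset.key) existing.set(el.dataset.key, el);
      }

      for (const item of items) {
        let node = existing.get(item.key);
        if (node) {
          existing.delete(item.key);
          // Touch the DOM only if the content actually changed.
          if (node.textContent !== item.text) node.textContent = item.text;
        } else {
          node = document.createElement('li');
          node.dataset.key = item.key;
          node.textContent = item.text;
        }
        // Appending an already-attached node moves it, so this also fixes ordering.
        container.appendChild(node);
      }

      // Anything left over no longer has a matching item.
      for (const stale of existing.values()) stale.remove();
    }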
The AI training data set for React is much larger. The models seem to do fine with SolidJS, though I suspect there is meaningful benefit to using React from this point of view.
Overall, I'm happy with where I'm at and I prefer the SolidJS way of thinking, though if I were to do it all over again, I'd probably just go with React for the above two reasons.
I was not expecting to lose karma for this comment!
I have a couple of years' familiarity with SolidJS and thought the two key insights that came to mind from building a large project in it would have positive value.
Aduro’s tech works with hard-to-recycle, dirty, mixed plastic. Search for an Eric Appelman interview on YouTube for more info. Some of the retail investor coverage is cringeworthy, but my take is that the tech is likely as good as described (disclosure: I’m an investor).
Mixed plastic is definitely impressive, but how is it separated? I imagine a lot of it must be done by hand? And then washed with water? It seems both labor-intensive and energy-intensive.
No, they pass it through successively intensive process conditions, each of which selects for a different plastic type. There's no need to wash: their process is water-based, and some contamination from organic material actually helps in saturating the output product.
Apparently 95% of the test sample can be recovered. If it's dirty and mixed, how much TPA can be recovered, and is the sludge chemically inert and landfillable?
Results will vary a lot with all the parameters (of which there are many). They don’t disclose all the detail, but I know they have done thousands of tests across the parameter space, and I know they believe they can process sludge produced from other processes. The industry generally is very opaque, I think for competitive reasons and also because it’s so complex/nuanced.
I think a large part depends on whether they are able to shift their infrastructure spending away from unprofitable or marginally profitable ventures like bridges and into the high-tech nuclear industry, more battery and solar tech, semiconductors & lithography, etc.
They are certainly capable (we saw that with EVs, batteries and solar already), but the proof will be in the pudding.
Disturbing observation: 5 years ago, the trends that seemed most important to me were all technical - I really paid no attention to politics. Today, they are mostly geo-political.
Could it be ennui about tech news, resulting from the Tower of Babel of great projects and great ideas that haven't reliably improved our IT or development experiences?
In the context of a company, the higher up you get (on a technical track), the more you need to operate across different projects, technologies, etc. - being able to jump into new things is mandatory/expected. But having deep knowledge in something important to the org gives you a foundation on which to succeed. I would say that specialising perhaps allows you to stand out more efficiently as a lower-level IC.
You can tune Kafka down fairly well if you know what you're doing, but it's not optimised for that OOTB. Or just use Confluent Cloud, which is fully managed and scales down as low as you want (it costs cents per GB). Disclosure: I work for Confluent.
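To give a sense of what "tuning it down" can involve, a small single-broker setup can be slimmed down with broker settings along these lines (illustrative values only, not a recommendation for any particular workload):

    # server.properties - shrinking a single-broker setup (illustrative values)
    num.partitions=1
    default.replication.factor=1
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    num.network.threads=2
    num.io.threads=2
    log.retention.hours=24
    log.segment.bytes=104857600

The broker JVM heap can also be capped via KAFKA_HEAP_OPTS (e.g. -Xms512m -Xmx512m) if memory is the main constraint.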
This is great advice IMO: let someone else manage your Kafka at scale. I feel compelled to mention that other Apache Kafka managed services are available, but I agree that it makes sense to offload the complexity if possible! Disclosure: I work at Aiven, which offers managed Apache Kafka on whatever cloud you are using.
Confluent Cloud is a truly 'fully managed' service, with a serverless-like experience for Kafka. For example, you have zero infra to deploy, upgrade, or manage. The Kafka service scales in and out automatically during live operations, you have infinite storage if you want it (via transparent tiered storage), etc. As the user, you just create topics and then read/write your data. Similar to a service like AWS S3, pricing is pay-as-you-go, including the ability to scale to zero.
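As a rough sketch of what "just create topics and read/write data" looks like from the client side (using kafkajs purely as an example; the bootstrap server and credentials below are placeholders):

    // Rough sketch with kafkajs; broker address and API key/secret are placeholders.
    import { Kafka } from 'kafkajs';

    const kafka = new Kafka({
      clientId: 'example-app',
      brokers: ['<BOOTSTRAP_SERVER>:9092'],
      ssl: true,
      sasl: { mechanism: 'plain', username: '<API_KEY>', password: '<API_SECRET>' },
    });

    async function main(): Promise<void> {
      // Create a topic, then write to it - no brokers to size, patch, or scale.
      const admin = kafka.admin();
      await admin.connect();
      await admin.createTopics({ topics: [{ topic: 'orders', numPartitions: 1 }] });
      await admin.disconnect();

      const producer = kafka.producer();
      await producer.connect();
      await producer.send({ topic: 'orders', messages: [{ value: 'hello' }] });
      await producer.disconnect();
    }

    main().catch(console.error);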
Kafka cloud offerings like AWS MSK are quite different, as you still have to do much of the Kafka management yourself. It's not a fully managed service. This is also reflected in the pricing model, as you pay per instance-hours (= infra), not by usage (= data). Compare to AWS S3—you don't pay for instance-hours of S3 storage servers here, nor do you have to upgrade or scale in/out your S3 servers (you don't even see 'servers' as an S3 user, just like you don't see Kafka brokers as a Confluent Cloud user).
Secondly, Confluent is available on all three major clouds: AWS, GCP, and Azure. And we also support streaming data across clouds with 'cluster linking'. The other Kafka offerings are "their cloud only".
Thirdly, Confluent includes many additional components of the Kafka ecosystem as (again) fully managed services. This includes e.g. managed connectors, managed schema registry, and managed ksqlDB.
There's a more detailed list at https://www.confluent.io/confluent-cloud/ if you are interested. I am somewhat afraid this comment is coming across as too much marketing already. ;-)
One nice thing about Confluent Cloud vs MSK is that the minimum cost of a Confluent Cloud cluster is far, far cheaper than the minimum cost of an MSK cluster.
I was affiliated with a company in Singapore which had its bank account closed, I believe due to FATCA, even though at the time no one involved with the company had ever been to the US or had any ties to the US. It was simply that the size of the account was too small to justify the associated administrative burden. FATCA has an impact beyond US citizens - pretty annoying.
The best trick I know for this is to have another source of noise that you can tolerate (e.g. a fan). Noise-cancelling headphones are useful if you sleep on your back...