My first reaction is that this just pushes up the price point for an entire segment of the industry for U.S. customers, but it doesn't seem like it would outright _kill_ companies like this.
But then I thought a bit deeper.
Any small company that relies on profits from the last batch to fund the next batch will certainly just be killed or need to acquire debt to fund purchasing of parts/stock under new tariffs. That really sucks for small manufacturers.
I've heard about businesses that had to jump into fundraising mode just to prepare for the tariffs so they could receive product. There's also the problem in less direct sales where each business in the chain expects a certain margin. Simply passing on the tariffs quickly leads to large price increases for customers. I've looked at a few ways to pass on the tariff and none are very appealing; likely the healthiest option is to spread the effect out by reducing margins at various stages.
I feel lucky that the tariffs were announced at a time when we were preparing to order new inventory. Otherwise the cash flow effects would be ruinous. Unfortunately we primarily export, and there is a real chance we move our manufacturing out of the US as a result of this mess.
> There's also the problem in less direct sales where businesses in the chain expect a certain margin. Simply passing on the tariffs quickly leads to large price increases to the customers.
This is what a lot of people aren't understanding. If the tariff is added by the first middleman, each additional middleman between that first one and the customer is adding their increase to that existing increase and the base price, not just the original base price.
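A quick back-of-the-envelope in Python makes the compounding concrete (all numbers are made up for illustration: a $10 landed cost, a 25% tariff, and two middlemen who each take a 30% markup on whatever they paid):

```python
# Hypothetical numbers: $10 landed cost, 25% tariff, two middlemen
# each applying a 30% markup on what they paid.
base = 10.00
tariff_rate = 0.25
markup = 0.30

# Without the tariff, only the markups compound: 10 -> 13 -> 16.90
no_tariff = base * (1 + markup) ** 2

# With the tariff, each markup is applied to the tariffed price,
# so the customer pays margin on the tariff too: 12.50 -> 16.25 -> 21.125
with_tariff = base * (1 + tariff_rate) * (1 + markup) ** 2

# The customer's increase is 4.225, far more than the 2.50 tariff itself.
increase = with_tariff - no_tariff
```

With just two middlemen, a $2.50 tariff turns into a $4.23 retail increase; longer chains amplify it further.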
Yea, it's really the opposite of the stated desired effect: small businesses will go under or never even start, while Target and Walmart could afford to pre-stock their warehouses to ride out the wave as best they can. Amazon already informed suppliers they "won't be accepting price increases". So we will all just be working for the big box stores, and nobody will be living the American Dream of starting their own company.
And, since this is all really just chaos without a plan, what happens when replacement suppliers don't spring up all over the USA?
I was just about to launch a small business based on a circuit I designed and had manufactured in China. The business also relies on LEDs that are simply not available outside of China. I could have had a $50,000 tax credit to start my small business if the other side won, but no, now I have to pay so much more and it just doesn't work for me as a very small start-up. The tariffs have killed my business before it even really started.
If you think tariffs only affect small manufacturers, just wait until the accumulated machine spares and parts stock of the local major ones run out.
I can tell you, up close and personal, that the COVID era already was pretty f.cking nasty for all manufacturers with costs booming thanks to supply chain issues.
Trump's tariff bullshit is the COVID era on steroids. Seriously. Just wait and see.
As I get older, I just accept that this is the way the game is played. Tech is one of the top blackjack tables of the investment world. Unless you're bootstrapping and/or building a lifestyle business, you need some promise of potential big returns. Jumping on a trending investment target like AI makes that easier, even if it is partly (or mostly) bullshit.
I bet you could find a better cultural fit and be a lot happier while still doing what you love. There are vastly different cultures across different companies, particularly in startups and mid-size.
I took a break from city life and regular work for a year while I lived and traveled in an RV doing contract work. It was fun for a while, but I missed having a challenge and feeling like I had a stake in what I’m working on. I now work remotely in a town that has nothing to do with tech, and my friends here work largely in tourism and real estate. They all get treated like shit compared to software engineers. It made me thankful for my place in life. I’m likely moving back to the SF Bay Area after my lease ends here.
I feel like cultural fit is hard to really know in an interview. I've definitely seen different cultures even in different departments at the same large company. But it seems like the true culture only comes out after at least one performance rating cycle.
What dev tools/processes are you using? I’m enjoying the enhanced capabilities of Windsurf with Sonnet 3.7, but I mostly use it for analyzing and problem-fixing in a huge codebase I recently inherited. I have yet to feed it anything like a PRD.
I try to generally keep up with the overall trends, but I’m an engineer at a resource-constrained startup, not a research scientist. I want to see real-world application, at least mid-term value, minimum lock-in, and strong supportability. Until then, I just don’t have time to think about it.
I did scoff a bit when the response to "it's hard to keep up with what's actually important in AI" was "just read this summary of the 10 most relevant papers every week".
Unless you are really working on the bleeding edge (or trying to make money by predicting the hype machine) you probably need to know about one or two developments every 6 months. The summary of 60 papers in that time might not be what everyone needs.
To be clear, I didn't downvote here and I have no issue with you promoting a blog!
6 months is way too infrequent. If last time you checked the state of AI was 6 months ago, you'd miss - among other things - NotebookLM's podcast generator, the rise of "reasoning models", Deepseek-R1 debacle, Claude 3.7 Sonnet and "deep research" - all of which are broadly useful to end-users.
The focus of your link appears to be papers and research. I would imagine somebody with less time for these developments is looking for more practical "here's how you can use this cool new AI" style articles instead.
For Livebook, this looks really cool. Love that it calls CPython directly via C++ NIFs in Elixir and returns Elixir-native data structures. That's a lot cleaner than interacting with Python in Elixir via Ports, which is essentially executing a `python` command under the hood.
For production servers, Pythonx is a bit more risky (and the developers aren't claiming it's the right tool for this use case). Because it's running in the same OS process as your Elixir app, you bypass the failure recovery that makes an Elixir/BEAM application so powerful.
Normally, an Elixir app has a supervision tree that can gracefully handle failures of its own BEAM processes (an internal concurrency unit -- kind of like a lightweight OS process) and keep the rest of the app's processes running. That's one of the big selling points of languages like Elixir, Erlang, and Gleam that build upon the BEAM.
Because it uses NIFs (native implemented functions), an unhandled exception in Pythonx would take down your whole OS process along with all other BEAM processes, making your supervision tree a bit worthless in that regard.
There are cases when NIFs are super helpful (for instance, Rustler is a popular NIF wrapper for Rust in Elixir), but you have to architect around the fact that they could take down the whole app. Using Ports (Erlang and Elixir's long-standing external execution mechanism) to run other native code like Python or Rust is less risky in this respect, because the non-Elixir code is still running in a separate OS process.
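For illustration, the external side of a Port opened with Erlang's `{packet, 2}` option is just a program speaking a 2-byte big-endian length-prefixed framing over stdin/stdout. A minimal Python sketch (the echo behavior here is hypothetical; a real port program would do actual work on each payload):

```python
import struct
import sys

def read_packet(stream):
    """Read one {packet, 2} frame: 2-byte big-endian length, then payload.
    Returns None on EOF (i.e., when the BEAM closes the port)."""
    header = stream.read(2)
    if len(header) < 2:
        return None
    (length,) = struct.unpack(">H", header)
    return stream.read(length)

def write_packet(stream, payload):
    """Write one frame with the same 2-byte length prefix, then flush."""
    stream.write(struct.pack(">H", len(payload)))
    stream.write(payload)
    stream.flush()

def main():
    # Loop until stdin closes; a port crash here kills only this
    # OS process, and the supervising Elixir code can restart it.
    while True:
        payload = read_packet(sys.stdin.buffer)
        if payload is None:
            break
        write_packet(sys.stdout.buffer, payload)  # hypothetical: echo back
```

The key point is the isolation boundary: if this script dies, the BEAM just sees the port close and the supervision tree can react, unlike a NIF crash.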
One possibility for production use (if the value justifies it) is to split the nodes into one "front" node that requires strong uptime and a "worker" node designed to handle rare crashes gracefully, in a way that does not impact the front.
> an unhandled exception in Pythonx would take down your whole OS
Is there a class of exceptions that wouldn't be caught by PythonX's wrapper? FTA (with emphasis added):
> Pythonx ties Python and Erlang garbage collection, so that the objects can be safely kept between evaluations. Also, it conveniently handles conversion between Elixir and Python data structures, bubbles Python exceptions and captures standard output.
And...
> Rustler is a popular NIF wrapper for Rust in Elixir
From Rustler's Git README:
> The code you write in a Rust NIF should never be able to crash the BEAM.
I haven't used Rustler, Zigler or PythonX (yet), so I'm genuinely asking if I'm mistaken in my understanding of their safety.
I’ve been eyeing Gleam as my next language to learn. Lots to like about it for sure, and I have always liked the idea of OTP but never had an opportunity to tinker with it.
I'm more of a Python and C# kind of guy, so Elixir never really scratched the itch for me, but Gleam definitely does. One of these days I'll take a crack at seeing how I can use Gleam with Phoenix.
I’d recommend first seeing if you can use a full-Gleam solution (like wisp/lustre) if it’s a greenfield project – interop is of course possible but can sometimes be a bit unpleasant due to the difference in data structures (Elixir structs vs Gleam records) and the inability to use Elixir macros directly from Gleam, which are heavily used by projects like Phoenix and Ecto.
I’ve been mostly in Python, C# and C++ for the past decade or so but got into Elixir as my first functional language. Never got comfy with the syntax but dig how everything flows. Looking forward to digging into Gleam.
If you liked Elixir but found it too "exotic" you may find F# enjoyable instead - it's a bit like Elixir but with a very powerful, statically typed and fully inferred type system, and it has access to the rest of .NET. Excellent for scripting, data analysis, and modeling of complex business domains. It's also very easy to integrate a new F# project into an existing C# solution, and it ships with the SDK and is likely supported by all the tools you're already using. F# is also 5 to 10 times more CPU and memory-efficient.
(F# is one of the languages Elixir was influenced by and it is where Elixir got the pipe operator from)
Do any of them communicate with the BEAM? There used to be a Go-based implementation of the BEAM that allowed you to drop in with Go. I have to wonder if this could be done with Python, so it doesn't interfere with what the BEAM is good at and lets Python code remain as-is.
There are several libraries that allow a Python program to communicate with an Erlang program using Erlang Term Format and such.
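For a feel of what's on the wire with those libraries, here's a minimal sketch of Erlang's External Term Format covering integers only (tag values are from the ETF spec: 131 is the version byte, 97 is SMALL_INTEGER_EXT, 98 is INTEGER_EXT; real libraries handle the full zoo of atoms, tuples, lists, etc.):

```python
import struct

def encode_int(n):
    """Encode a Python int as an ETF term (integers only)."""
    if 0 <= n <= 255:
        # SMALL_INTEGER_EXT: one unsigned byte
        return bytes([131, 97, n])
    # INTEGER_EXT: 32-bit big-endian signed integer
    return bytes([131, 98]) + struct.pack(">i", n)

def decode_int(data):
    """Decode an ETF-encoded integer produced by encode_int."""
    if data[0] != 131:
        raise ValueError("missing ETF version byte")
    tag = data[1]
    if tag == 97:
        return data[2]
    if tag == 98:
        return struct.unpack(">i", data[2:6])[0]
    raise ValueError("unsupported tag %d" % tag)
```

So `encode_int(42)` produces the same bytes as Erlang's `term_to_binary(42)`, which is what lets a plain Python process talk terms with a BEAM node.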
This approach targets more performance-sensitive cases with stuff like passing data frames around and vectors/matrices that are costly to serialize/deserialize a lot of the time.
> Because it uses NIFs (native implemented functions), an unhandled exception in Pythonx would take down your whole OS process along with all other BEAM processes, making your supervision tree a bit worthless in that regard.
What's the Elixir equivalent of "Pythonic"? An architecture that allows a NIF to take down your entire supervision tree is the opposite of that, as it defeats the stack's philosophy.
The best practice for integrating Python into Elixir or Erlang would be to have a dedicated GenServer, or other supervision-tree element, responsible for hosting the Python NIF(s), and the design should allow each branch or leaf of that tree to be killed/restarted safely with no loss of state. BEAM message passing is cheap.
That's the thing though: a NIF execution isn't confined to the BEAM process by its nature. From the Erlang docs:
> As a NIF library is dynamically linked into the emulator process, this is the fastest way of calling C-code from Erlang (alongside port drivers). Calling NIFs requires no context switches. But it is also the least safe, because a crash in a NIF brings the emulator down too.
(https://www.erlang.org/doc/system/nif.html)
The emulator in this context is the BEAM VM that is running the whole application (including the supervisors).
Apparently Rustler has a way of running Rust NIFs but capturing Rust panics before they trickle down and crash the whole BEAM VM, but that seems like more of a Rust trick that Pythonx likely doesn't have.
The tl;dr is that NIFs are risky by default, and not really... Elixironic?
> The emulator in this context is the BEAM VM that is running the whole application (including the supervisors)
You are correct - one could still architect it such that the GenServer hosting the NIF(s) runs in a separate OS process/VM/computer in the same cluster, since message passing is network-transparent, though inter-host messages have higher latencies.
Claude 3.5 has been fantastic in Windsurf. However, it does cost credits. DeepSeek V3 is now available in Windsurf at zero credit cost, which was a major shift for the company. Great to have variable options either way.
I’d highly recommend anyone check out Windsurf’s Cascade feature for agentic-like code writing and exploration. It helped save me many hours in understanding new codebases and tracing data flows.
DeepSeek’s models are vastly overhyped (FWIW I have access to them via Kagi, Windsurf, and Cursor - I regularly run the same tests on all three). I don’t think it matters that V3 is free when even R1 with its extra compute budget is inferior to Claude 3.5 by a large margin - at least in my experience in both bog standard React/Svelte frontend code and more complex C++/Qt components. After only half an hour of using Claude 3.7, I find the code output is superior and the thinking output is in a completely different universe (YMMV and caveat emptor).
For example, DeepSeek’s models almost always smash together C++ headers and code files even with Qt, which is an absolutely egregious error due to the meta-object compiler preprocessor step. The MOC has been around for at least 15 years and is all over the training data so there’s no excuse.
We’ve already connected! Last year I think, because I was interested in your experience building a block editor (this was before your blog post on the topic). I’ve been meaning to reconnect for a few weeks now but family life keeps getting in the way - just like it keeps getting in the way of my implementing that block editor :)
I especially want to publish and send you the code for that inspector class and selector GUI that dumps the component hierarchy/state, QML source, and screenshot for use with Claude. Sadly I (and Claude) took some dumb shortcuts while implementing the inspector class that both couples it to proprietary code I can’t share and hardcodes some project specific bits, so it’s going to take me a bit of time to extricate the core logic.
I haven’t tried it with 3.7 but based on my tree-sitter QSyntaxHighlighter and Markdown QAbstractListModel tests so far, it is significantly better, and I suspect the work Anthropic has done to train it for computer use will reap huge rewards for this use case. I’m still experimenting with the nitty gritty details but I think it will also be a game changer for testing in general, because combining computer use, gammaray-like dumps, and the Spix e2e testing API completes the full circle on app context.
Oh how cool! I'd love to see your block editor. A block editor in Qt C++ and QMLs is a very niche area that wasn't explored much if at all (at least when I first worked on it).
From time to time I toy with the idea of open sourcing the core block editor, but I never really get into it since 1. I'm a little embarrassed by the current lack of modularity in the code and want to refactor it all, and 2. I still want to find a way to monetize my open source projects (so maybe AGPL with a commercial license?).
Dude, that inspector looks so cool. Can't wait to try it. Do you think it can also show how much memory each QML component is taking?
I'm hyped as well about Claude 3.7, haven't had the time to play with it on my Qt C++ projects yet but will do it soon.
The big difference is DeepSeek R1 has a permissive license whereas Claude has a nightmare “closed output” customer noncompete license which makes it unusable for work unless you accept not competing with your intelligence supplier, which sounds dumb