It isn't necessary in Python, either. GP is only importing it for square roots, but exponentiation by .5 (via the ** operator) works fine with complex numbers in Python as well. It even handles complex exponents, although you'd have to look up the branch-cut semantics etc. in the documentation.
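A minimal sketch (the exact low-order digits depend on the platform's math library):

```python
# Square roots of negative numbers work once an operand is complex;
# no cmath import is needed.
z = (-4 + 0j) ** 0.5
assert abs(z - 2j) < 1e-9          # roughly 2j, up to rounding

# Complex exponents also work; the principal branch is used, so
# i**i equals exp(-pi/2), a real number.
import math
w = 1j ** 1j
assert abs(w - math.exp(-math.pi / 2)) < 1e-9
```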
Not true by any stretch. I've used Komodo[1] for more than two decades as a debugger on Linux, Mac, and Windows. More recently VSCode on all three, for debugging.
Definitely not a dead language. A mature and stable language, which won't surprise you.
I have Perl code I wrote in 1992 that still works properly, without changes. The binary executables it drives have changed ISA from MIPS IV to x86_64, and are compiled Fortran (gfortran). There's even a switch I use to be able to read the big-endian binary files in gfortran.
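If memory serves, the switch is gfortran's -fconvert=big-endian (there's also a convert= specifier on open). The underlying byte-order issue is easy to sketch with Python's struct module, where '>' and '<' select big- and little-endian layouts; the three-value record below is purely illustrative:

```python
import struct

# A hypothetical three-double record as a big-endian (MIPS-era) file
# would store it: '>' requests big-endian byte order.
big = struct.pack('>3d', 1.0, 2.0, 3.0)

# Decoded with the wrong byte order ('<', little-endian) the values
# are garbage; with the matching order they round-trip exactly.
assert struct.unpack('<3d', big) != (1.0, 2.0, 3.0)
assert struct.unpack('>3d', big) == (1.0, 2.0, 3.0)
```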
My C code from 1996 requires rework. My C++ code from 2014 requires rework (I had to do this with others' code as well, to use a std capability). Python code rarely survives 3.6 -> 3.12, never mind 2.7 to 3.x. I worked at a company that had (very unwisely) written a massive part of its infrastructure in Py2.7, and was using it a decade past its expiration date.
At a previous job, I regularly worked with dataframes of millions to hundreds of millions of rows, and many columns. It was not uncommon for the objects I was working with to use 100+ GB of RAM. I coded initially in Python, but moved to Julia when the performance issues became too painful (10+ minute operations in Python that took < 10 s in Julia).
DataFrames.jl, DataFramesMeta.jl, and the rest of the ecosystem are outstanding. Very similar to pandas, and much, much faster. If you are dealing with small (obviously subjective as to the definition of small) dataframes of around 1,000-10,000 rows, sticking with pandas and Python is fine. If you are dealing with large amounts of real-world time-series data, with missing values and a need for data cleanup as well as analytics, it is very hard to beat Julia.
FWIW, I'm amazed by DuckDB, and have played with it. The DuckDB Julia connector gives you the best of both worlds. I don't need DuckDB at the moment (though I can see this changing), and use Julia for my large-scale analytics. Python's regex support is fairly crappy, so my data extraction is done using Perl. Python is left for small scripts that don't need to process lots of information and can fit within a single terminal window (due to its semantically significant whitespace handicap).
A friend sent me the image from page 9: the email signature. It is mine, from the mid-2010s, when I ran my company.
I'm not much worried about this specific example of information exfiltration, though I have significant concerns over how one might debug something like this for applications working with data potentially more sensitive than email signatures. Put another way, I think we are still well within the infancy of this technology, and far more work is needed before we have actually useful applications that have a concept of information security relative to their training data sets.
1: Yes. Has happened 3 times in my career, most recently this past May.
2: Minimally. I was asked for the reason for my departure, and I was as transparent as I could be, indicating what I knew and the circumstances. People were curious about it, but then again, it's not relevant to finding new work. I will say that multiple potential employers concluded otherwise-successful interviews with unrelated programming tests. It felt like a set of coffin problems[1], ones that older folks like me, not trained in CS but writing code for 40+ years, would not do well on.
This is a huge red flag. I actually had someone tell me that I 'needed to know how to program' to do the job I'd applied for, even though I have a publicly documented history of programming and software development/engineering, and have developed and shipped code for decades: for research, products, patches, ...
That impacted my search a bit.
3: There's not much of a stigma these days. Your self-worth is not tied up in your job. Your value isn't either. You can take time to decompress, retool, think, train, research.
Put another way, if an employer thinks it's a problem, you might want to steer clear of that employer. Bring the conversation quickly to a close, amicably, so you don't waste time or create bad feelings.
On jobs in general: employers are there to please and profit their owners. Understanding all their actions in those terms (HR is there to protect the employer, etc.) can help you separate your sense of self-worth from the position or company.
This is my current pain with Julia. Deploying code requires shipping the entire environment, or a PackageCompiler-built sysimage. I've played with StaticCompiler.jl and other techniques; they are sadly quite brittle for my previous use cases. The inability to use threads in a StaticCompiler-built binary was a deal killer for me.
FWIW, I've used MaraDNS for a while, about 10 years. It works nicely for my small domains. It is missing some things that require workarounds on my part, but for the most part, it does what it says on the wrapper. It's not hard to configure: no messing around with multiple/many backends and backend configuration, just simple db files, easy to keep in git, easy to deploy, easy to manage and change.
If you are running a large SaaS this might not be the right package for you. But if you are putting together a small site (blog, single page webapp, etc.) this is a great, simple, and easy to deploy tool.
> the "winning" strategy is to have high-level scripting languages where you can ignore most of the details, which call into hyper-optimized, high performance libraries. For instance, when you're using Scipy, you call into Fortran and C almost interchangeably.
Well, no. This is Python's strategy; that doesn't make it the winning strategy. Python implicitly forces multiple languages upon you: a scripting one, and a performance one. Meanwhile, languages such as Julia, Rust, etc. allow you to do the work in a single (fast, compiled) language. Much lower cognitive load, especially if you have multiple cores/machines to run on.
Another point I've been making for 30+ years in HPC is that data motion is hard. Not simply between machines, but between process spaces. Take large, slightly complex data structures in a fast compiled language, and move them back and forth to a scripting front end. This is hard, as each language has its own memory layout for the structures, and impedance matching between them means you have to make trade-offs. These trade-offs often result in surprising issues as you scale up data structure size, which is one of the reasons that only the simplest of structures (vectors/arrays) are implemented in cross-language scenarios.
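A small illustration of the asymmetry, sketched in Python with only the standard library (the record below is hypothetical): a flat vector of doubles has one agreed-upon layout and can cross a boundary as a zero-copy view of the same bytes, while even a mildly nested structure has to be serialized and copied:

```python
import array
import pickle

# Flat vector of doubles: C, Fortran, Julia, and Python's buffer
# protocol all agree on this layout, so it can be shared in place.
vec = array.array('d', range(1000))
view = memoryview(vec)              # zero-copy view of the same 8000 bytes
assert view.nbytes == 1000 * 8

# Nested, heterogeneous structure: no shared layout exists, so the
# boundary crossing becomes serialize-then-copy, with cost that grows
# with the structure, not just the raw payload.
record = {"t": list(range(1000)), "meta": {"units": "s", "run": 42}}
blob = pickle.dumps(record)         # full copy, layout translated
assert len(blob) > 1000
```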
Moreover, these cross-language boundaries implicitly prevent deeper optimization, which leads to the development of rather different code-development scenarios, including orthogonal, not-quite-Python things (Triton, Numba, etc.).
Fortran is a great language, and as one of the comments pointed out, it's really not that hard to learn or use. The rumors of its demise are greatly exaggerated, and I note with some amusement that they've been going on since I was in graduate school, some 30-35 years ago. Yet people keep using it.
So I pulled up my JPEG of the Sombrero galaxy to see (it's been my laptop background for a few years). I could barely make out Iris in it. Absolutely awesome, though!
Something about the deep field and ultra deep field pictures makes me happy. It also makes me wonder how many beings in those other galaxies look out and see the Milky Way colliding ever so slowly with Andromeda in their own deep-field views.