I actually don't see any value-add from SDKs that wrap HTTP requests. HTTP is a standard, and my programming environment already provides a way to make requests. In fact, it probably provides several, and your SDK might use a different one than the one my project already uses, resulting in bloat. And for what gain? I still need to read the docs and do my best to follow what they tell me to do.
Now if it's a statically typed language then I kinda get it. Better IDE/LSP integration and all. But even then, just publish an OpenAPI spec and let me generate my own client that's idiomatic for my project.
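To make that concrete, here's roughly what I mean by just using the HTTP stack the project already has. This is only a sketch against a made-up endpoint: the URL, the Widget type, and the choice of reqwest (with the blocking and json features) plus serde (with derive) are illustrative assumptions, not any particular vendor's API.

    // Call a hypothetical REST endpoint directly; no vendor SDK involved.
    use serde::Deserialize;

    #[derive(Debug, Deserialize)]
    struct Widget {
        id: String,
        name: String,
    }

    fn fetch_widget(base_url: &str, token: &str, id: &str) -> Result<Widget, reqwest::Error> {
        let client = reqwest::blocking::Client::new();
        client
            .get(format!("{base_url}/widgets/{id}"))
            .bearer_auth(token)      // plain bearer auth, nothing vendor-specific
            .send()?
            .error_for_status()?     // surface 4xx/5xx as errors
            .json::<Widget>()        // deserialize straight into the project's own type
    }

    fn main() {
        match fetch_widget("https://api.example.com/v1", "dummy-token", "42") {
            Ok(w) => println!("{w:?}"),
            Err(e) => eprintln!("request failed: {e}"),
        }
    }

The whole "SDK" is one function written in the project's own style, on top of the HTTP client the project already depends on.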
This is one of the sentiments that powers the Common Lisp ecosystem. There are already good data structures and functions in the standard (and quasi-standard) library, so why invent new ones? In other languages (Node.js, say), you take a library and it brings the whole kitchen with it.
I agree with the sentiment that great APIs are a prerequisite to great SDKs, but great SDKs are really about saving time. Consider AWS's API, which requires a specific request-signing scheme (Signature Version 4) that is annoying to implement by hand. More generally, the common pattern of a shared secret passed as a bearer token is pretty insecure. I hope to see that change over time, and SDKs can help facilitate that.
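To give a feel for why the signing is annoying, here's a rough sketch of just the SigV4 signing-key derivation, following AWS's published Signature Version 4 process. It covers only the key-derivation step; the canonical request and string-to-sign that come before it are omitted, the inputs are dummy values, and the hmac/sha2 crates are my choice, not something AWS mandates.

    use hmac::{Hmac, Mac};
    use sha2::Sha256;

    type HmacSha256 = Hmac<Sha256>;

    // One HMAC-SHA256 link in the chain.
    fn hmac_sha256(key: &[u8], data: &[u8]) -> Vec<u8> {
        let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts keys of any length");
        mac.update(data);
        mac.finalize().into_bytes().to_vec()
    }

    // SigV4 signing key: HMACs chained over date, region, service, and a fixed terminator.
    fn signing_key(secret: &str, date: &str, region: &str, service: &str) -> Vec<u8> {
        let k_date = hmac_sha256(format!("AWS4{secret}").as_bytes(), date.as_bytes());
        let k_region = hmac_sha256(&k_date, region.as_bytes());
        let k_service = hmac_sha256(&k_region, service.as_bytes());
        hmac_sha256(&k_service, b"aws4_request")
    }

    fn main() {
        // Dummy inputs, just to show it runs; real code feeds this key into one more
        // HMAC over the "string to sign" built from the canonicalized request.
        let key = signing_key("EXAMPLE_SECRET", "20240101", "us-east-1", "s3");
        for b in &key {
            print!("{b:02x}");
        }
        println!();
    }

And that's the easy part; getting the canonical request (header ordering, URI encoding, payload hashing) byte-for-byte right is where hand-rolled implementations usually break, and it's exactly the kind of fiddly work a good SDK takes off your plate.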
I agree with what you're saying, but it can be immensely frustrating to be rejected for a job when the interviewer themselves is actually wrong, which has happened to me a few times. I've been given technical questions in interviews, answered them correctly (I always double-check when I get home), and still had the interviewer pretty much tell me that I'm wrong.
For example, in an interview once I got the typical "design Twitter" whiteboarding question, and it's going fine, until the topic of databases and storage comes up.
I ask "do we want consistency or availability here?"
The interviewer says he wants both. To which I say "umm, ok, but I thought you said you wanted this to be distributed?", and he says yeah, that's what he wants.
So I have to push back and say "well, I mean, we all want that, but I'm pretty sure you can't have something be distributed and partition-tolerant while also being both available and consistent."
We go back and forth for about another minute (of course eating away at my interview time), until I eventually pull out my phone and bring up the Wikipedia article on the CAP theorem, to which the interviewer says that this is "different" somehow. I say "it's actually not different, but let's just assume that there exists some kind of database X that gives us all these perks".
Now, in fairness to this particular company, they actually did move forward and gave me a (crappy) offer, so credit there, but I've had other interviews that went similarly where I was declined. I've never done it, but I've sort of wanted to go on LinkedIn and explain that their interview questions either need to change or they need to become better informed about the concepts they're interviewing for. Not to change anything, not to convince anyone to suddenly give me an offer, but simply to prove my point.
Not sure how the dialog went IRL, but if the conversation was really that adversarial, with that little diplomacy applied, I'd neither hire the person nor accept the role if I were on either side of it...
Maybe I'm just brainwashed, but most of the time for me, these "forced refactors" are actually a good thing in the long run.
The thing is, you can almost always weasel your way around the borrow checker with some unsafe blocks and pointers. I tend to do so pretty regularly when prototyping. And then I'll often keep the weasel code around for longer than I should (as you do), and almost every time it causes a very subtle, hard-to-figure-out bug.
I think the problem isn't that the forced changes are bad, it's that they're lumpy. If you're doing incremental development, you want to be able to quickly make a long sequence of small changes. If some of those changes randomly require you to turn your program inside out, then incremental development becomes painful.
Some people say that after a while, they learn how to structure their program from the start so that these changes do not become necessary. But that is also partly giving up incremental development.
My concern is slightly different; it's the ease of debugging. And I don't mean debugging the code that I (or somebody else) wrote, but the ability to freely modify the code to kick some ideas around, see what sticks, etc., which I frequently need to do, given my field.
As an example, consider a pointer to a const object as a function parameter in C++: I can cast the constness away in a second and modify the object as I experiment.
Any thoughts on this? How much extra friction would you say Rust introduces here?
I would say it's pretty easy to do similar things in Rust to skirt the borrow checker. For example, you can cast a mut ref to a mut ptr, then back to a mut ref, and then you're allowed to have multiple of them (sketch below).
The problem is that Rust (and its community) does a very good job of discouraging things like that, and there are no guides on how to do it (you might get lambasted for writing one; maybe I should try).
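For the curious, the trick looks roughly like this. To be clear, it's illustrative only: rustc accepts it, but holding two aliasing mutable references is undefined behavior, and Miri will flag it.

    fn main() {
        let mut value = 0_i32;
        let ptr: *mut i32 = &mut value; // &mut coerces to *mut

        // Two live mutable references derived from the same raw pointer: the borrow
        // checker doesn't see them as conflicting, but aliased &mut is UB.
        let r1: &mut i32 = unsafe { &mut *ptr };
        let r2: &mut i32 = unsafe { &mut *ptr };

        *r1 += 1;
        *r2 += 1;
        println!("{}", unsafe { *ptr });
    }

Running it under Miri (cargo miri run) reports the aliasing violation immediately, which is part of how the ecosystem pushes back on code like this.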
I don’t really think it gives up incremental development. I’ve done large and small refactors in multiple Rust code bases, and I’ve never run into one where a tiny change suddenly ballooned into a huge refactor.
I'm quite scared of being this. I tick a lot of the boxes: I have a good rep for being fast, and management likes me quite a bit. And I definitely have spearheaded things that I've since been pulled away from. I try to counterbalance all that by writing docs and sticking around, though, and I do my best to help those who work on the stuff I was involved with.
I doubt you are. There is an enormous spectrum, and the parent comment makes it sound all bad.
If you got something working, and are available to answer an email explaining why you made a design decision, then you're already cleared of being a bad Pete.
Pete can't make the perfect product, and he shouldn't try to. If it took 2 weeks to make management happy, then it's a problem you can do "right" in 1 or 2 months. A new dev needs to read up on the problem, on what Pete did, and on what needs improvement, and then maybe restart fresh to deliver. Good management knows this.
But a 2-week-delivered project is naturally bounded in scope, and it's better off for being 'proven' than whatever OP imagined the right way to do it would be.
There are only three cardinal sins: don't destroy or overwrite an existing architecture, don't be a smart/dumb coder, and don't do a months-long, Pete-style YOLO project.
I guess it's a way of handling it that's enabled by the fact that the calendars are electronic. It could be that this way is easier for the school staff, but honestly I have no idea and can't even begin to guess, really.
Even in 2024, schools in major cities experience staffing shortages where there are not enough supposedly-qualified adults to supervise all of the students in their originally scheduled classrooms.
"Solution": class X, Y, and Z all meet in Lunchroom W on Wednesday so one adult can supervise all of them.
I still see tons of NordVPN sponsorship messages on YouTube. I wonder whether they've managed to pick up any meaningful number of regular, non-technical users. They sure do seem to be trying.
Pretty much every non-techy person I know under the age of about 50 uses VPNs for accessing regionally restricted streaming TV and sports[0] content, and for getting around geoblocks (US news sites that won't serve Europe due to GDPR, trading/gambling sites, etc.).
I'm pretty sure the sheer quantity of VPN ads on YT is also good evidence that they work and that people are signing up. It wouldn't make sense to scale a marketing approach up to those levels unless earlier, smaller campaigns had positive returns.
[0] It's worth calling out explicitly the crazy lengths people will go to in order to (a) find a free stream of a sports match, and (b) find a way to watch a match when they're travelling and can't access whatever service they usually watch it on.
I still like NordVPN. If there's any reason I shouldn't, I'm all ears, but I haven't had an issue so far. I travel a lot, and I definitely feel better having my traffic routed through a VPN rather than exposing it to whatever random entity happens to control the wifi I'm connected to, despite all the issues with them.
I have nothing against NordVPN. I just generally agree with the statement that VPN users are either nerds or employees of companies that mandate it. But at the same time, I see Nord aggressively advertising to the general population - genuinely curious how successful that might be.
> That being said, in cases where the check may be bypassed or in a different implementation scenario, similar vulnerabilities can still appear.
This is so funny. "Oh, I see you have a bounds check that prevents vulnerability. BTW, if you remove that bounds check, your code will be vulnerable!!"
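The shape of what's being reported is roughly this (illustrative Rust, not curl's actual C code; the function and buffer layout are made up):

    // Parse a length-delimited field out of a buffer (hypothetical helper).
    fn parse_field(buf: &[u8], offset: usize, len: usize) -> Option<&[u8]> {
        // The bounds check the report asks you to imagine away: reject any read
        // that would run past the end of the buffer.
        if offset.checked_add(len)? > buf.len() {
            return None;
        }
        Some(&buf[offset..offset + len])
    }

    fn main() {
        let buf = b"hello world";
        assert_eq!(parse_field(buf, 6, 5), Some(&b"world"[..]));
        assert_eq!(parse_field(buf, 6, 50), None); // out of range: rejected, not read
        println!("bounds check holds");
    }

Yes, deleting the check would make it exploitable. That is what the check is for.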
We get bug bounty reports like that sometimes. "I think your site might have an XSS vulnerability but your WAF is stopping it." "What I hear you saying is that we don't have an XSS vulnerability."
I mean, it's possible we do have a mistake in code somewhere we haven't found yet, but if the system effectively protects it, that's not a vulnerability.
This WAF scenario is different: a WAF is a porous, unreliable last line of defense, like anti-virus. Having a mitigation stop something != not having the vulnerability in the first place.
I formerly worked in triage for a bug bounty program. We paid attention to these kinds of reports because it's often possible to bypass the WAF, or at least repurpose the vulnerability in a way the WAF wasn't designed to defend against.
Absolutely! If you have a known SQL injection behind a WAF, you better go fix it! It seems like these reports come down to the equivalent of “I pasted HTML into a form and you displayed the escaped version back to me, but maybe you forgot some tag.” No, I’m not going to turn off our WAF so you can test that hypothesis.
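By "the escaped version" I mean ordinary output encoding, something like this rough sketch (not our actual code):

    // Minimal HTML output encoding: reflected input renders as text, not markup.
    fn escape_html(input: &str) -> String {
        let mut out = String::with_capacity(input.len());
        for c in input.chars() {
            match c {
                '&' => out.push_str("&amp;"),
                '<' => out.push_str("&lt;"),
                '>' => out.push_str("&gt;"),
                '"' => out.push_str("&quot;"),
                '\'' => out.push_str("&#x27;"),
                _ => out.push(c),
            }
        }
        out
    }

    fn main() {
        // What the reporter sees reflected back: an inert string, not an executing script.
        println!("{}", escape_html("<script>alert(1)</script>"));
    }

If that's what comes back, the "maybe you forgot some tag" hypothesis needs more behind it than a hunch.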
> No, I’m not going to turn off our WAF so you can test that hypothesis.
It would be worth your while to test it. You could run a dev/testing version of your app on a separate ___domain, without a WAF, and without any sensitive data held on it.
WAFs are a last resort to catch the bugs you didn't know about, and your application should still be safe without a WAF; otherwise you're not actually getting the defense-in-depth you wanted. For an attacker who cares enough, WAF bypasses are a near inevitability.
It may be worthwhile to test, but the strength of "I see this field is correctly encoded but maybe hypothetically it could be your WAF protecting a vulnerable application. My sole supporting reason for this hypothesis is that if it is true, your bug bounty program will pay out for me" is, as vulnerability signals go, too uselessly weak to act on.
Bug bounty programs are nifty in that they give real researchers an effective outlet for things they were quite possibly going to discover anyhow, but part of the price is a lot of submissions from people basically treating it as a system for spraying bug bounty programs with low-effort lottery tickets.
I'm kind of curious: do these bug bounty "spray and pray" tactics actually make money? I can't help but wonder if people are doing it because it works, or if it just looks like it should work and people are desperate.
Exactly. It's basically spam: there's nearly no cost to send it, so even an abysmal success rate is likely to return a fat profit.
I've heard that the average reward is about $500. You can afford a lot of rejections per success at that rate.
Never mind that you're destroying the effectiveness of those programs, driving staff nuts, and generally making the world less secure; that's their problem, right? (Sarcasm, obv.)
Please give me the benefit of the doubt and assume that we actually do test things. As I assume you’re good at what you do, so am I. The WAF thing was an example of the sort of report we commonly get: “if you turn off all the mitigations, it may be possible to attack you!” Yes, but we use defense in depth so that the mitigations help cover any gaps we might’ve missed, and if the end result isn’t exploitable, it isn’t exploitable.
Just like in the original report here: “if you turn off those checks, it could be vulnerable!” “Yes, but we’re not about to do that because we’re not insane.”
The curl report (incorrectly) describes missing bounds checks leading to buffer overflows.
If the curl project said "buffer overflows are ok because our code gets compiled with ASLR and NX", then that would be comparable to saying SQLi and XSS are non-issues due to a WAF. Fortunately, that's not what they said.
It was an example, an analogy, an illustration. We’re not dependent on our WAF any more than curl is dependent on ASLR and NX. We (and curl) use those things in addition to good coding practices so they all work together to create as secure of a product as our respective teams can manage.