
She's on BlueSky, which is where I came to know of her.

She has been essential to keeping up to date with developments in the USA regarding Donald.

I subscribed to her newsletter last month.

It seems to me mainstream media doesn't really cut the mustard.

I get most of my information these days from specialist blogs, run by individuals.



It's a tricky one; one wants to trust the media because they should have integrity and checks and balances and the like - things you don't know whether unaffiliated independent bloggers have.

The bloggers are also great, but you need to do some more homework to decide whether they are trustworthy - no editorial team, no journalistic ethical codes (if those actually exist), etc.

The other factor is that a lot of the conspiracy thinkers based their beliefs on blogs or the equivalent thereof (FB posts, YouTube videos, etc.) - stuff you wouldn't see in the mainstream media, because it's nonsense - but the fact that the mainstream media doesn't cover it actually draws attention to the conspiracy bloggers.


My Reddit account - and, as I was its founder, the sub r/AmazonRedshift along with it - was banned in Sep 2023 by an automated system.

The sub was working normally, I posted about the Amazon Redshift Serverless PDF, and then Reddit began behaving oddly.

After some investigation, and some guesswork, I concluded my account had been silently shadow-banned, and the sub banned (and then shortly after, deleted).

Two years of posts and the sub disappeared instantly and abruptly - without warning, reason, appeal process or notification - and by shadow-banning me Reddit was trying to lead me into thinking my account was still active.

I used a utility to scramble (you can't delete) all the posts I'd ever made to Reddit, and closed my account.

(There's a bit of a happy ending - about a year later, someone who was doing work with Reddit and had an archive of all my posts sent them to me, as a thank you. I processed the JSON and put them up on my Redshift site.)


Similar problem.

Just worked for six months for a client.

Contract fee is one month's worth of the monthly savings made on the AWS bill.

They have not paid, and in fact terminated the contract early to avoid payment - "the savings do not occur until AWS reservations end, contract has been ended now, so savings have not occurred". The savings are going to occur, and then the client will keep those savings.

Turns out the contract also contains what looks like a poison pill, to prevent the counter-party (me) having recourse to court at all, no matter what the client does.

Also the entire contract is confidential, so in theory I can't say a word about any of it.


Maybe you already considered this, but contracts are full of unenforceable and illegal clauses. At the end of the day you did work for them that they agreed to and then they stiffed you on it in a very visible way. Generally, if the work is already done, they are liable for the entire amount.

I would be talking to a lawyer. You might even consider it part of your job to have a working relationship with a lawyer.


Don't worry - I'm going to court.

The poison pill is unconscionable, and the action of termination is in bad faith.


Poison pill can be ignored and smashed on the wall by a judge; consult an attorney if you haven't already.

Confidentiality is usually regarding public disclosure, it does not prevent your attorney and a judge from seeing it.


Yes.

Thank you.


I started working on these back in 2017.

About 780 views at the moment.


Building a set of replacement system tables for Amazon Redshift.

Not out yet, but the GitHub page is here, and will point to them in due course.

https://github.com/MaxGanzII/redshift-observatory.ch


I've not watched the video, but going on the title, I have to say, I have my Framework, and I absolutely love it to bits.

It was relatively expensive, but in absolute terms the difference is minor; and now I have something where I don't have to worry about Samsung/Sony/ASUS/etc accidentally or even deliberately installing a back door, which I can repair at will (and I have - a key failed and I changed the keyboard), which I can upgrade over time, where the keyboard is heaven to type on, where I can easily fit and change memory, disk and the battery (the hot-swappable external port interfaces are particularly noteworthy in their usefulness), where I have full support for Linux from the get-go, and where I have actual, real Support on the odd occasion I've needed it (and boy, have I not had that from Sony and Samsung in the past, when I've owned their products - in fact, I had one of the worst customer support experiences imaginable with Sony and their Vaio, back in the day).

It puts me in mind of a recent experience with our ISP. We had a five day outage. No notification that it had occurred, no notification when it ended, no explanation, no ETA, nothing. Radio silence. Also the customer support phone number wasn't working, as we discovered.

That's the Big Company experience.

If we had a local ISP we could use, you wouldn't even see me move.

With Framework, I have that, but for my laptop. It's been a God-send.


Now imagine if you had watched the video...


I watched the video. It's hot garbage. Source: my family has two Framework laptops (gen 11), and I've been in charge of repairing/upgrading them over the last two years. Our primary issue is Intel+Microsoft breaking S3 deep sleep, not any of Framework's decisions.

If you're in the market for a laptop and budget is not an issue, I recommend going with MacBooks, just for the battery performance. For a Linux laptop, Framework is the best we've got.


Replacement system tables for Amazon Redshift.

Thinking of a free and a paid version.

The free version has everything, but runs on the existing system tables, so it's slow, and offers only a six-day history.

The paid version has a system table archiving mechanism, so you have before-and-after when something goes wrong, and because the archived tables are in Redshift proper, not on the leader node, it's fast.
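
To give a flavour of the archiving idea - this is an illustrative sketch only, not the actual product schema - the approach is to periodically copy the short-lived STL tables into ordinary tables, which persist beyond the few days of history the system tables keep:

    -- Illustrative archive table; columns follow STL_QUERY, but the real schema differs.
    CREATE TABLE IF NOT EXISTS query_history
    (
        userid    integer,
        query     integer,
        pid       integer,
        xid       bigint,
        starttime timestamp,
        endtime   timestamp,
        aborted   integer,
        querytxt  varchar(4000)
    );

    -- Copy any new rows out of the system table before they age out.
    INSERT INTO query_history
    SELECT userid, query, pid, xid, starttime, endtime, aborted, querytxt
    FROM stl_query
    WHERE starttime > (SELECT COALESCE(MAX(starttime), '1970-01-01'::timestamp) FROM query_history);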


I have to say, I did not have a positive first experience with H1.

Probably mainly my misunderstanding, but H1 did not help in any way.

I opened an account, filed a report - I can easily crash Amazon Redshift as an unprivileged user. Provided the DDL/SQL to do so - dead simple, two statements, issue them and boom.

I received a reply, something like, "we have closed the report, if you can demonstrate a working issue we'll investigate further".

I was confused, replied and asked for explanation. No reply.

I tried going to their Support: 403 - it doesn't work via the Tor browser - no use for an anonymous report.

And that seems to be it - end of road.

I don't understand it: no replies, no support, and I've disclosed valuable information with no idea what H1 have done or are doing with it (whether it's been made public, for example).

(I asked on HN for advice. One line of reply was that this is not an exploit but a bug, which I can see. OTOH, when I filled in the severity rating form, nothing in it suggested I was going against the grain of what was expected, so I'm not wholly sure. Any further advice in replies is now gratefully received.)


As someone on the other side: we regularly get spurious reports, and people who cause DoS but only for their own account in non-realistic scenarios. Unfortunately it is hard to tell the two apart, and it isn't wise to get into debates about these topics - people start demanding money for "issues" you and every other web host in your industry are aware of (for example, client-side XSS modifying what appears on the screen... yes, really, they'll argue for cash).


> Unfortunately it is hard to tell the two apart

That's kind of a "you had one job" situation. Yes, it's hard to review security reports, and separate legit ones from bogus ones. But that's what these platforms advertise they do. They regularly do a very bad job.


I think you misunderstood GP:

> Unfortunately it is hard for the reporter to tell the two apart


>As someone on the other side

That's the reviewer of the report; it's actually:

> Unfortunately it is hard for the reviewer to tell the two apart


> That's kind of a "you had one job" situation. Yes, it's hard to review security reports, and separate legit ones from bogus ones.

Security engineers aren't telepaths.

Yes, sometimes reviewers don't do a good job, but I think you are severely underestimating how incomprehensible incoming reports can be. It is not always worth spending 6 hours trying to figure out what someone is talking about.


> Yes, sometimes reviewers don't do a good job, but I think you are severely underestimating how incomprehensible incoming reports can be.

Yes, and this also goes for bugs filed by the public, and sometimes for comments/requests in public open source projects. There are lots of examples of incomprehensible communication, and there is also argumentative communication (which usually gets increasingly argumentative and ad hominem as the reply chain continues). Based on a sampling of the public bug reports my company gets (including security incident reports), I would not want to be the agent who acts as liaison, replying to and clarifying with the bug reporter. Most public reports are polite and constructive, but it's shocking how high a percentage are not, and they become increasingly unprofessional as the discussion continues.


I can confirm that; we have had extremely annoying ones, but they are a minority. Some of the participants are really good, and that compensates.


> crash [...] as an unprivileged user.

DoS stuff typically wouldn't qualify for most bug bounties. That's probably why you got ignored.

Most services aren't awfully interested in fixing this sort of thing - they'll just wait for someone to try to DoS at scale, then have the on-call team put in some extra regex on the input to block that specific expensive/crashing query.


So what you're saying is: sell it, and then it will get fixed :P


Well, good luck finding a buyer - there's very little practical use for a DoS on a service that is usually not exposed to the public internet.


If you can find a buyer…


> One line of reply was that this is not an exploit, but a bug

Denial of service is absolutely a security problem, so I don't think whoever gave this advice is correct. Sounds like a frustrating experience.


Not if you DoS your own instance only


But who gives DDL/SQL access to untrusted users? It seems to be a rather unusual scenario.


It is AWS Redshift - a Postgres-compatible SQL service. You interact with it via SQL.


Yes, but generally you don’t give SQL access to the internet or completely unauthenticated users.

Any user could likely cause issues for Redshift by running completely nuts queries that exhaust all the server's resources. Is "shit SQL" a bounty-worthy issue?
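
For illustration - table name hypothetical - something as dumb as an unconstrained cross join will happily chew through a cluster's resources without needing any special privileges:

    -- big_table is a hypothetical table; any reasonably large table will do.
    SELECT COUNT(*)
    FROM big_table a
    CROSS JOIN big_table b
    CROSS JOIN big_table c;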


I see, I misunderstood.

Yes, having the power to crash your own instance might not be a security issue per se.


Good point.

The problem I know of is a bit different, in that it is a direct and immediate server crash. It's not a denial of service by making the cluster slow. It's run-query, crash-server.

You are right of course that any normal user can issue crazy queries which hog resources, and hammer performance.


I would just reach out to AWS directly: why go through HackerOne?

They have a direct email and are responsive. If the issue meets their criteria then you get a payout.


I Googled for AWS bug bounty programs.

I found nothing.

Do you have a URL of any kind, for more information about this, including contacts?


https://aws.amazon.com/security/vulnerability-reporting/

I wouldn’t expect a bounty for something like this, but I believe the above is the correct avenue for reporting it.


[email protected] - it’s very clearly the first result when you search “AWS security report”.


A scenario where as an attacker:

- You have raw access to the DB

- You don't have enough privileges to get something more valuable than crashing the DB

- You don't care to get noticed/caught

Does sound pretty unlikely


Just as an aside, I don't think your query is directly logged.

I've not actually checked, so I don't know, but knowing how logging works on RS, I think the cluster crashing will mean your killer query is not logged.

Your session will have been logged by the time the cluster crashes. OTOH, maybe you were logged in for some time first, or there's connection pooling, or you slipped the query into an existing connection's query stream, and so on.

Actually, thinking about it, I think you could reduce the problem to a single query, rather than two, which would help cover tracks.
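
If you wanted to check, a quick look at the standard system tables after a crash and restart would show what survived - timestamps here are illustrative, and I haven't verified the exact behaviour:

    -- Did the connection get logged?
    SELECT recordtime, event, username, remotehost
    FROM stl_connection_log
    WHERE recordtime > '2025-01-01 00:00'
    ORDER BY recordtime DESC;

    -- Did the query text make it to disk before the crash?
    SELECT starttime, userid, pid, querytxt
    FROM stl_query
    WHERE starttime > '2025-01-01 00:00'
    ORDER BY starttime DESC;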


I assume the exploit only takes down your own instance, not the whole AWS Redshift service. If it's the latter, it should obviously be rewarded with big $.


Yes and yes :-)


“Yes” as in “yes, my exploit takes down all of redshift”? Or “yes” as in “yes it only takes down my instance” ?


> I assume the exploit only takes down your own instance, not the whole AWS redshift service.

First yes for this.

> If it's the latter, it should obviously be rewarded with big $.

Second yes for this.

:-)


> Denial of service is absolutely a security problem

"Security problem" isn't a binary. DoS can be an issue, but often it is acceptable risk. Especially if your DoS is minor. E.g. i can crash the server by sending 50 Gbps of data to it, is not usually a security issue in context.

In the parent post they implied they needed privleged access to exploit. That probably makes it not a security issue as it can only be triggered by a trusted user.

Additionally most bug bounty programs disallow DoS, due to some combination of reports being low value, and testers being idiots, so it might be out of scope right from the bat.


I see you have never reported a DoS on H1. High-profile millionaire engineers will go out of their way to argue with you that a DoS of their top-100 website is in fact not a security problem.

These people don't even know about the CIA triad, and are gatekeeping four-figure bounty payouts while earning five figures or more every month. I'm extremely salty about this.


One of these days I'll move to Cuba and run the whole long list of exploits I've been told aren't exploits by overpaid idiots.

https://www.youtube.com/watch?v=pQMuSotWCEs


Bugcrowd is no different. The folks doing triage often don't comprehend even simple security issues.

I’m convinced it’s largely designed to keep people from going full disclosure rather than actually getting bugs fixed.


As someone who worked on the other side (i.e. at a previous job I handled incoming bug bounty reports), a big part of why we used H1 is that it can be exhausting dealing with reporters. Often the reports are nonsensical. Even the ones that are real are often very minor issues that the reporter feels should get top payout. Sometimes reporters are very demanding and rude. At the same time, you can't just throw out the crazy reports, because sometimes the crazy-looking emails actually have the legit vulns.

H1 exists because once you start offering money the crazies start to show up, and it's a lot of work to keep up with it.

(That's not to say that you are entirely wrong either. I am sure some less scrupulous companies do have that goal. However, a lot of the time it's simply that the vuln has low impact, so it's low priority. Depending on how the company is managed, there is often dysfunction where the security team lacks the ability to get things prioritized.)


Oh yeah, I’d absolutely not want to have a raw unfiltered inbound bug bounty and be first line of triage, so paying H1 or Bugcrowd is the way to go.

But you’re also paying them to make sure the serious bugs absolutely do get to you, and if researchers give up, you’re not getting the value you need.

I suspect the problem is that the type of folks who are prepared to do front-line triage - which is mostly large volumes of nonsense and a few mediocre bugs - are early-career security folks who can't easily spot a really serious P0, or a researcher who clearly knows what they're talking about.


I feel the same. IMO H1 is a face-saving filter layer for megacorp tech employees so the issues don't bubble up to their bosses via media reports. They tarpit you, send junior people into the ticket who don't understand the issue, and in the end they try to refuse payouts as much as possible.

Meanwhile, megacorp tech employees with Wikipedia articles spend time explaining how "their platform" is not affected by an issue, even though you show them a PoC. Of course, things like DoS are not in scope. It's worse than arguing with lawyers about a contract, because there is so much face-saving and CYA going on.


Thank you everyone who replied.

I feel like I have a better understanding now of the situation and what happened with H1, and I feel better about it.

I will now see if I can figure out how to wipe everyone's RS clusters instantly with a single command, so I can report something on H1 after all :-)


Yeah, sure, and if someone gives me access to their Redshift, I could just run "SELECT digest('foo', 'sha256');" in a loop, which takes Redshift CPU and thus costs them lots of money.

If you give an attacker access to run arbitrary queries in your database, it's already pretty bad news, even without a crash.


I may be completely wrong, but that seems a bit narrow a definition - attackers move laterally once they penetrate organizations.

I can also imagine disgruntled or malicious employees or contractors.


"once they penetrate organizations."

Yep, that is the key. Crashing your own org hasn't done that. Unless I'm misunderstanding what you are claiming?


No. It's not a way to get in, but it is a way to crash a system (which should not be crashable) if you are in.


I think you are describing a bug, not a vulnerability. Is it bad? Sure. Is it a security vulnerability where you can impact anyone else? Doesn't sound like it.


This makes sense and I see it, but there is one matter which gives me pause: when I came to submit the issue, there's some kind of standard severity rating system which you have to fill in. Questions like: can this issue be exploited over a network, or do you need local access? How many privileges are required? What's the consequence of the issue - degraded service, complete denial of service? Questions like this. The issue here fitted into this framework of questions - there was no point where it didn't fit, or where the severity rating system was evidently thinking in different terms.


Being able to create a table and issue a query means being a normal user on the cluster - so you have an account and can log in. It's enough to issue `CREATE USER [blah] PASSWORD [pwd];`, which is the minimum create user command.

So we're looking at malicious or disgruntled employees, and also normal queries where users do not realize what they're doing will kill the cluster.

