This so-called “security risk” is a role in a nonprod account that can list metadata about things in your production accounts. It can list secret names, bucket names, policy names, and similar.
Listing metadata is hardly a security issue. The entire reason these List* APIs are distinct from Get* APIs is that they don’t give you access to the object itself, just metadata. And if you’re storing secret information in your bucket names, you have bigger problems.
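To make that distinction concrete, here is a minimal boto3 sketch (my own illustration, not from the post) contrasting a List* call with its Get* counterpart in Secrets Manager; the region and secret name are placeholders:

```python
import boto3

# Placeholder region; substitute your own.
sm = boto3.client("secretsmanager", region_name="us-east-1")

# List* returns metadata only: names, ARNs, tags, rotation settings.
for secret in sm.list_secrets()["SecretList"]:
    print(secret["Name"], secret["ARN"])

# Get* returns the actual secret material and needs a separate permission
# (secretsmanager:GetSecretValue), which a list-only role does not have.
value = sm.get_secret_value(SecretId="prod/db-password")  # placeholder name
print(value["SecretString"])
```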
Depending on what the metadata is, it can be a huge security risk.
For example, some US government agencies consider computer names sensitive, because the computer name can identify who works in what government role, which is very sensitive information. Yet, depending on context, the computer name can be considered "metadata."
AWS does not treat metadata with the same level of sensitivity as other data. The docs explicitly say that sensitive information should not be stored in, e.g., tags or policies. If you are attempting to do so, you’re fighting against the very tool you’re using.
No, that’s not the reality. “Production data” isn’t as black and white as that.
Metadata about your account, regardless of whether you call it “production” or not, is not guaranteed to be treated with the same level of sensitivity as other data. Your threat model should assume that things like bucket names, role names, and other metadata are already known to attackers (and in fact, most are, since many roles managed by AWS have default names common across accounts).
Just wanted to point out that it is not just the names of objects in sensitive accounts exposed here: as I wrote, the spoke roles also have iam:ListRoles and iam:ListPolicies, which is IMO much more sensitive than object names alone. These contain a whole lot of information about who is allowed to do what, and can point at serious misconfigurations that can then be exploited further (e.g. misconfigured role trust policies, or knowing which over-privileged roles to target).
ListPolicies does not list the contents of policies, so your comment is wrong.
Things like GetKeyPolicy do, but as I mentioned in my comments already, the contents of policies are not sensitive information, and your security model should assume they are already known by would-be attackers.
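As a rough illustration (mine, not the blog’s) of that split in boto3 for IAM, where the policy ARN is a placeholder:

```python
import boto3

iam = boto3.client("iam")

# iam:ListPolicies returns policy metadata (names, ARNs, version IDs),
# not the policy documents themselves.
for policy in iam.list_policies(Scope="Local")["Policies"]:
    print(policy["PolicyName"], policy["Arn"], policy["DefaultVersionId"])

# Reading the actual statements requires iam:GetPolicyVersion on top of that.
doc = iam.get_policy_version(
    PolicyArn="arn:aws:iam::123456789012:policy/example",  # placeholder ARN
    VersionId="v1",
)["PolicyVersion"]["Document"]
print(doc)
```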
“My trust policy has a vulnerability in it but I’m safe because the attacker can’t read my policy to find out” is security by obscurity. And chances are, they do know about it, because you need to account for default policies or internal actors who have access to your code base anyway (and you are using IaC, right?)
You’re right to raise awareness about this because it is good to know about, but your blog hyperbolizes the severity of this. This world of “every blog post is a MAJOR security vulnerability” is causing the industry to think of security researchers as the boy who cried wolf.
> “My trust policy has a vulnerability in it but I’m safe because the attacker can’t read my policy to find out”
The goal in preventing enumeration isn't to hide defects in the security policy. The goal is to make it more difficult for attackers to determine what they need to attack, and how, to move closer to their target. Less information about what privileges a given user/role has = more noise from the attacker and more dwell time, all other things being equal. Both of which increase the likelihood of detection prior to full compromise.
I disagree with your opinion here: The contents of security policies can easily be sensitive information.
I think what you mean to say is, "Amazon has decided not to treat the contents of security policies as sensitive information, and told its customers to act accordingly", which is a totally orthogonal claim.
It's extremely unlikely that every decision Amazon makes is the best one for security. This is an example of where it likely is not.
It’s not orthogonal. The foundation of good security is using your tools correctly. AWS explicitly tells users to not store sensitive information in policies. If you’re doing so, it’s not AWS making the mistake.
AWS is evidently not using their own tools correctly to build AWS then, because we know that the contents of security policies can easily be sensitive information.
Just because Amazon tells people not to put sensitive information in a security policy, doesn't mean a security policy can't or shouldn't contain sensitive information. It more likely means Amazon failed to properly implement security policies (since they CAN contain sensitive information), and gives their guidance as an excuse/workaround.
The proper move would be to implement security policies properly, such that access to them is as limited as expected, because again, security policies can contain sensitive information.
This is key to understand here: What Amazon says is best security is not best security.
It’s inaccurate to say that there’s “basically no zoning whatsoever in much of Texas”. In Texas, zoning is up to the individual cities. Houston is notorious for its lack of zoning laws, but the other major cities (Dallas, Austin, Fort Worth, San Antonio) all have very complex zoning laws.
The article is typical security issue embellishment/blogspam. They are incentivized to make it seem like AI is a mission-critical piece of software, because more AI reliance means a security issue in AI is a bigger deal, which means more pats on the back for them for finding it.
Sadly, much of the security industry has been reduced to a competition over who can find the biggest vuln, and it has the effect of lowering the quality of discourse around all of it.
I’d be pretty skeptical of any of these surveys about AI tool adoption. At my extremely large tech company, all developers were forced to install AI coding assistants into our IDEs and browsers (via managed software updates that can’t be uninstalled). Our company then put out press releases parading how great the AI adoption numbers were. The statistics are manufactured.
Yup, this happened at my midsize software company too
Meanwhile, no one actually building software that I have been talking to is using these tools seriously for anything, at least not anything they will admit to.
More and more, when I see people who are strongly pro-AI code and "vibe coding", I find they are either former devs who moved into management and don't write much code anymore, or people who have almost no dev experience at all and are absolutely not qualified to comment on the value of the code generation abilities of LLMs.
When I talk to people whose job is majority about writing code, they aren't using AI tools much. Except the occasional junior dev who doesn't have much of a clue
These tools have some value, maybe. But it's nowhere near what the hype would suggest.
This is surprising to me. My company (top 20 by revenue) has forbidden us from using non-approved AI tools (basically anything other than our own ChatGPT / LLM tool). Obviously it can't truly be enforced, but they do not want us using this stuff for security reasons.
I’m skeptical when I see “% of code generated by AI” metrics that don’t account for the human time spent parsing and then dismissing bad suggestions from AI.
Without including that measurement there exist some rather large perverse incentives.
The AI autocomplete I use (JetBrains) stands head-and-shoulders above its non-AI autocomplete, and JetBrains' autocomplete is already considered best-in-class. Its Python support is so good that I still haven't figured out how to get anything even remotely close to it running in VS Code
It isn’t even close to being the greatest tool in human history. This type of misunderstanding and hyperbole is exactly why people are tired/bored/frustrated of it.
The uncomfortable truth is that AI is the world’s greatest con man. The tools and hype around them have created an environment where AI is incredibly effective at fooling people into thinking it is knowledgeable and helpful, even when it isn’t. And the people it is fooling aren’t knowledgeable enough in the topics being described to realize they’re being conned, and even when they realize they’ve been conned, they’re too proud to admit it.
This is exactly why you see people that are deeply knowledgeable in certain areas pointing out that AI is fallible, meanwhile you have people like CEOs that lack the actual technical depth in topics praising AI. They know just enough to think they know what “good” looks like, but not enough to realize when the “good” output is just lipstick on a pig.
What is the greatest tool in human history in your opinion?
I think it's too early to call whether AI is the answer to that question, but I think it could be. Yes, LLMs are terrible in all kinds of ways, but there's clearly something there that's of great value. I use it all day every day as a staff-level engineer, and it's making me much better and faster. I can see glimmers of intelligence there, and if we're on a road that delivers human-level intelligence in the next decade, it's difficult to see what else would qualify as the greatest tool humanity has ever invented.
It’s not hype when it’s released and used for concrete tasks. Some are hyping future potential sure. But GP is hyped about how he can use it NOW. Which I agree is very cool.
The human still needs to think, of course. But, I can get to my answer or my primary source using a tool faster than a typical search engine. That's a super power, when used right!
The jump in productivity we had with the world wide web and search engines was several orders of magnitude higher than what you have right now with LLMs, yet I don't remember a single person back in the 2000s calling Google "the greatest tool in human history".
Almost sixty years after ELIZA, chatbots still seem to produce a very strong emotional reaction in some folks.
AI is really bringing out the worst in capitalist corporatism. I think even the most pessimistic AI doomer will agree that the technology is fascinating and can do some cool things, but our corporate overlords have made it such an all-encompassing topic that it’s hard not to be disillusioned by it.
Yes, it’s an interesting technology. But we’ve watched other interesting technologies and projects get disinvested in because they’re not new shiny AI; watched leaders insist on creating yet another chatbot just so they can pat themselves on the back; watched an already-working system get replaced with a half-broken AI one just so we can say we use AI, and then everyone at the company get forced to use it just so we can put out a press release saying ‘all of our developers use AI’. Our overall quality of work has decreased, but leaders celebrate it because “mediocre but done with AI” is the gold standard now…
All of it feels like a sham. Feels like we’re trying too hard to prop up AI as amazing, rather than letting it succeed on its own merits.
I'm kind of puzzled. I've been interested in AI for ~45 years and read quite a lot of the stuff on HN and similar. I'm not aware of any of that being pushed on me by a corporate overlord? I'll give you there is more hype and money these days.
However, I also read the Kurzweil / Moravec stuff, which has been predicting human-level intelligence around now since at least 1990, and that to me is the interesting bit, not really corporate at all, as opposed to open-whatever having raised $x bn, etc.
You must not work in big tech, then. At my company, AI-companion tools were forcibly installed in everyone’s browser, soon followed by a press release parading how “100% of people at our company now use AI because it’s so great”. Attempts to uninstall the tool are explicitly blocked. Similarly, AI coding tools are forcibly installed in IDEs.
Every single hackathon or team workshop has been turned into an AI-specific hackathon or training. Managers have been told explicitly that they must come up with yearly goals that include the use of AI. Every project proposal must include a section explaining how the project plans to utilize AI.
The most egregious example for me was adding a chatbot to Windows and going so far as to create a special button for it (on laptops' already cramped keyboards). Then we got the Apple ads, which are just stains on professional ethics.
Amazon Nova is a foundation model created by Amazon and is offered as one of the models you can use in AWS Bedrock, so the model gets a marketing page for it. Note that Llama also has an AWS marketing page (as do other models), but that doesn’t make them AWS products: https://aws.amazon.com/bedrock/llama/
Amazon Nova Chat is a different product that uses Nova, but it’s not an AWS product. Notice that if you try to use Nova Chat, you log in with your Amazon.com account, not an AWS account.
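For what it’s worth, here’s a rough boto3 sketch (my own, not from either product’s docs) of calling a Nova model through Bedrock, just to underline that this path uses AWS credentials rather than an Amazon.com login; the region and model ID are assumptions and may differ per account:

```python
import boto3

# The Bedrock runtime client authenticates with normal AWS (IAM) credentials,
# unlike Nova Chat, which is tied to an Amazon.com consumer account.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model ID; check the Bedrock console for the model IDs (or inference
# profiles) actually enabled in your account.
response = bedrock.converse(
    modelId="amazon.nova-lite-v1:0",
    messages=[{"role": "user", "content": [{"text": "One sentence about S3."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```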
The GriGri does not have an autobrake. Petzl is very intentional in saying it is an “assisted braking device”, not auto braking. If there is any tension at all on the rope (even just lightly being held), then the GriGri will likely brake, but if the rope isn’t being held at all then there is no guarantee it will brake.
See this video, around the 10 minute mark where there’s several examples of the GriGri not locking at all: https://youtu.be/We-nxljgnw4?t=605
This is perhaps an even greater issue than what you pointed out because people misunderstand the GriGri a lot, and assume it will always catch them even if you aren’t holding the rope. It won’t.
To be fair, I suspect the difference between “auto brake” and “assisted braking device” is mostly legal liability. In practical use I would understand both terms to mean the same thing. I think very few people believe a grigri will _always_ catch them. They just (accurately!) believe that in most circumstances it will. The 5% where it won’t catch you is of course deadly.
The military version is auto-brake for rappelling, and you can go fully hands-free per Petzl, but you have to use specific rope. My guess is that since they can’t control the rope in the civilian version, they can’t vouch for it working in 100% of cases.
It’s one of those distinctions which only actually matters in a small and somewhat rare, but very important, edge case. Usually more determined by rope diameter and conditions than anything else.
In contexts where a fuckup is likely to result in death or permanent disability it might be prudent to play the consequences instead of the odds. Marginally related: I was always shocked by the diversity of crispy bullshit other climbers were willing to rap off of. I kept new webbing and rap rings in my daypack at all times just in case.
Sure, it should be noted that applying that rule fully consistently would mean never doing recreational climbing at all eh? After all, it is fundamentally risky.
Maybe still SAR, of course.
I always carried a belay knife and some extra webbing, and used it more than once. A couple times I didn’t need the knife.
I mean, I hear what you're saying, but at the same time avoidable bullshit like scrambling off rope 50'+ off the deck, rapping off of some random stripper's thong that got caught in a shrub, or letting your intrusive thoughts win while belaying are all dirt common in the community. It's one thing to get caught out while engaging in dangerous hobbies; it's something else entirely to die doing something that's obviously stupid. I absolutely cannot even when someone goes for a tenth of a mile slide off of one of the Flatirons in Tevas and the locals have the gall to call it a tragedy.
I have the exact same grigri shown in the video, and somehow it never fails to lock when I'm quickly trying to feed slack to a leader and don't put a bit of pressure on the left side. Even when I've added slack to the brake strand.
I wonder if it has something to do with the angle, as it's pretty uncommon to have the climbing strand going straight up from the belayer.
It’s kind of funny that people think of “sandboxing” as the main feature of containers, or even as a feature at all. The distribution benefits have always been the entire point of Docker.
The logo of Docker is a whale carrying a bunch of shipping containers (the original logo made this clearer, but the current logo still shows it). “Containers” has never been about “containment”, but about modularity and portability.
Docker introduced an ambiguity in the meaning of the word "container". The word existed before Docker, and it was about sandboxing. Docker introduced the analogy of the shipping container, which, as ranger207 says, is about sandboxing in the service of distribution.
The two meanings - sandboxing and distribution - have coexisted ever since, sometimes causing misunderstandings and frustration.
It's not about sandboxing or distribution, it's about having a regular interface. This is why the container analogy works. In the analogy the ship is a computer and the containers are programs. Containers provide a regular interface such that a computer can run any program that is packaged up into a container. That's how things like Kubernetes work. They don't care what's in the container, just give them a container and they can run it.
This is as opposed to the "old world" where computers needed to be specifically provisioned for running said program (like having interpreters and libraries available etc.), which is like shipping prior to containers: ships were more specialised to carrying particular loads.
The analogy should not be extended to the ship moving and transporting stuff. That has nothing to do with it. The internet, URLs and tarballs have existed for decades.
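A small sketch of that "regular interface" idea using the Docker SDK for Python (my example, not the commenter's); the images are arbitrary:

```python
import docker

# One client, one interface; it doesn't matter what's inside the image.
client = docker.from_env()

# Both run through the exact same call, even though one is a shell utility
# and the other is a Python interpreter with its own dependencies baked in.
print(client.containers.run("alpine:3.20", ["echo", "hello from alpine"], remove=True))
print(client.containers.run("python:3.12-slim",
                            ["python", "-c", "print('hello from python')"],
                            remove=True))
```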
> Additionally, deployments to ECS are typically handled by invoking the AWS API within a GitHub Action, without continuous reconciliation or drift detection.
No they aren’t. All of the major IaC solutions (TF, CDK, etc.) do ECS deployments directly through their own APIs, including drift detection and updates.
Good for you for finding something that works, but it sounds like your advice related to IaC solutions is based on a misunderstanding of the benefits of IaC and the tools available.
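For context, here’s a minimal sketch of what that looks like with one of those tools, the AWS CDK in Python (my illustration; the VPC sizing, construct names, and image are placeholders, not anything from the article). The service is modeled as code and deployed through CloudFormation, so drift can be surfaced with cdk diff or CloudFormation drift detection instead of ad hoc API calls from a GitHub Action:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns
from constructs import Construct

class DemoServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        vpc = ec2.Vpc(self, "Vpc", max_azs=2)
        cluster = ecs.Cluster(self, "Cluster", vpc=vpc)
        # L3 pattern wires up the task definition, Fargate service, and ALB.
        ecs_patterns.ApplicationLoadBalancedFargateService(
            self,
            "Service",
            cluster=cluster,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_registry("public.ecr.aws/nginx/nginx:latest"),
            ),
        )

app = App()
DemoServiceStack(app, "DemoServiceStack")
app.synth()
```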