
Part of the point of do-nothing scripting is letting humans use their judgement in edge cases. For example, if the key file already exists, you might not want to reuse the private key; doing so can create a security vulnerability, as ChatGPT has done here.

Also it hallucinated (as usual) some extra args to ssh-keygen.




> security vulnerability

This GPT did the thing it was supposed to do: produce an implementation using a previous script's structures and processes. I'd bet explaining the reasoning, discussing potential vulnerabilities, or changing the business processes wasn't in the prompt. Or that metadata wasn't included in GP for brevity.

But it has succeeded in sparking discussion, similar to rubber-duck debugging. A good org will look at this as a starting point, discuss the details, and iterate.

> Also it hallucinated (as usual) some extra args to ssh-keygen.

I don't see a hallucination here. I can confirm the command works and is correct on my system, which runs OpenSSH.

I assume you mean the slightly strange `["-N", ""]` argument pair? This tells ssh-keygen to create the file with no passphrase and no prompting for a passphrase.
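
For reference, a minimal sketch of where that argument pair lands (assuming a Python subprocess-style script like the one under discussion; the key file name is made up):

```python
# The argument vector in question; "-N", "" is the pair being asked about.
# An empty string after -N gives the new key an empty passphrase, so
# ssh-keygen neither encrypts the key nor prompts for a passphrase --
# that is what keeps the call non-interactive.
cmd = ["ssh-keygen", "-t", "rsa", "-f", "deploy_key", "-N", ""]  # "deploy_key" is a made-up path
print(cmd)
```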


Note: ChatGPT didn't blindly reuse a private key:

    [N] /tmp> "ssh-keygen" "-t" "rsa" "-f" foo "-N" ""
    Generating public/private rsa key pair.
    foo already exists.
    Overwrite (y/n)?
It seems to me like you are assuming ChatGPT is always going to be wrong (which in itself is not an unreasonable place to start), which is coloring your use of the tool.


These lines skip the call to ssh-keygen:

  if os.path.exists(keyfile):
      print("Key file {} already exists. Skipping generation.".format(keyfile))
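
Putting the guard and the generation call together, a sketch of the overall shape (the function name `ensure_key` is mine, not from the script):

```python
import os
import subprocess

def ensure_key(keyfile):
    """Generate an RSA key pair unless keyfile already exists.

    Returns True if a new key was generated, False if generation was skipped.
    """
    if os.path.exists(keyfile):
        print("Key file {} already exists. Skipping generation.".format(keyfile))
        return False
    # -N "" gives the key an empty passphrase so the call is non-interactive.
    subprocess.run(["ssh-keygen", "-t", "rsa", "-f", keyfile, "-N", ""],
                   check=True)
    return True
```

So an existing key file is never overwritten and never fed back into ssh-keygen; the call simply doesn't happen.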


It seems to me like you are assuming ChatGPT is always going to be right, which is coloring your use of the tool.


I mean, I did say that assuming it was wrong was not an unreasonable place to start, which I think makes my position quite clear. But, I'll clarify: I review all the code that ChatGPT produces before I use it. Sometimes this results in me asking it to revise the code; sometimes it results in me abandoning that code; sometimes it results in me learning something, because it has produced better code than I had in my mind thinking about the problem, or it used code or libraries that were a "blind spot" for me. I've been programming in Python since the late '90s, and programming since the early '80s, but I'm not a programmer by day, so I definitely have blind spots. Even if I were a programmer, I'd admit I'd have blind spots.

But: The LLMs can produce rather good code. They can cover a lot of typing and low- to mid-level design work from a short description. I'm bringing value to my company and my life by using this tool, plain and simple.


Which arguments did it hallucinate?

    [I] /tmp> "ssh-keygen" "-t" "rsa" "-f" foo "-N" ""
    Generating public/private rsa key pair.
    Your identification has been saved in foo
    [...]


Original script:

  ssh-keygen -t rsa -f ~/{0}
ChatGPT's version:

  ["ssh-keygen", "-t", "rsa", "-f", keyfile, "-N", ""]
It added `-N ''`, which means don't set a passphrase: a second potential security downgrade.
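
A sketch of the difference as a command builder (the optional-passphrase parameter is my addition; neither script does this):

```python
def keygen_command(keyfile, passphrase=None):
    """Build the ssh-keygen argument vector."""
    cmd = ["ssh-keygen", "-t", "rsa", "-f", keyfile]
    if passphrase is not None:
        # Passing "" reproduces ChatGPT's choice: an unencrypted key, no prompt.
        cmd += ["-N", passphrase]
    # With no -N at all, ssh-keygen prompts for a passphrase interactively,
    # matching the original do-nothing script's behavior.
    return cmd
```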


Ok, fair enough. I was thrown by your use of the word "hallucination", which to me describes inventing things that don't exist, like if it had done "ssh-keygen -t rsa --add-public-keyfile-to-gitrepo-and-commit". In this case I had asked it to automate the do-nothing script, so I'd call this more of a "design decision" than a hallucination.



