Yes, you can pretty easily use "SSH agent forwarding", which the grandparent comment calls "the devil", but it's really not that bad when used selectively, and a lot better than copying the private key to the server.
If you're cloning a specific application, the server could have its own key with read-only access to the repo. But if you're using that for deployment, you could (in AWS) grant the instance access to specific S3 paths and pass a specific artefact URL to be downloaded. (Or something similar for other environments)
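For example (bucket, path and version here are made up), the instance role can be limited to s3:GetObject on one artifact prefix, and the deploy step on the box just pulls whatever artifact URL it was handed:

    # assumes the instance profile only allows s3:GetObject on
    # arn:aws:s3:::example-deploy-artifacts/myapp/* (names are hypothetical)
    aws s3 cp "s3://example-deploy-artifacts/myapp/myapp-1.2.3.tar.gz" /opt/myapp/release.tar.gz
    mkdir -p /opt/myapp/current
    tar -xzf /opt/myapp/release.tar.gz -C /opt/myapp/current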
But maybe you've got a different use case? I'd start with "why are you using git directly on a (production?) server?"
I'm gonna out myself because nobody is giving reasons
Why is it bad to git clone on a production server? Let's assume I'm not an idiot please, just misguided. This isn't reddit.
Let's also assume I'm using an SSH key that is only used for pulling the repo: no write access, never reused ("deploy keys", as the git* services call them).
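Concretely, something like this (key path and repo are made up):

    # one key, used only to pull this repo; its public half is registered
    # as a read-only deploy key on the hosting service
    ssh-keygen -t ed25519 -f ~/.ssh/myapp_deploy -N "" -C "myapp deploy key"
    # clone/pull with exactly that key and nothing else
    GIT_SSH_COMMAND="ssh -i ~/.ssh/myapp_deploy -o IdentitiesOnly=yes" \
        git clone git@github.com:example-org/myapp.git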
It's really hard to see these tips as anything but parroting someone else if nobody throws down a reason
I don't think you're outing yourself exactly; this is a scale-dependent question. People will give different answers because their experience is coloured by the size of the systems and organisations they have experience with.
At small scale it's fine. Plenty of useful and valuable stuff runs on a handful of servers that have consistent identities etc and people manage them interactively. Process lives in people's heads or (better) a wiki.
At large scale it's a completely bonkers thing to do. There are tons of nodes; it doesn't make sense to mutate just one unless something has gone badly wrong. Interactive login to a production system should be triggering an incident or at least linked to one. Really you shouldn't be mutating systems in any kind of "manual" way because that kind of change is supposed to be locked down to authorised deployment methods. The current state of the system has to be legible to other team members and other teams, and the easiest way to do that is by looking at what's gone through the deployment process, which is usually navigable via a dashboard.
In the middle, it's possible you could have a deployment system that relies on "git clone" with a key on the instance. That would be a little weird because git is not a great way to store or distribute build artefacts. Not crazy though - could make sense in some situations.
So you've already mitigated most of it - no write access and scoped keys fix most of the issues.
Other potential problems:
- a bad checkout ___location may mean unexpected content is available via .git paths on the web (see the nginx sketch after this list)
- anyone with access to the server can copy the key and have external access to both the history of the project and all new commits - they can see the PRs with proposed security fixes before they get merged
- repository may contain ___domain names, credentials and other things which don't need to be deployed, but can be useful for the attacker doing recon
- potentially exposing information about customers if they got mentioned in the history
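For the first point, if the checkout has to live under the web root, an nginx rule along these lines (a sketch, adjust for your setup) keeps the repo metadata from being served:

    # block anything under .git from being served
    location ~ /\.git {
        deny all;
        return 404;
    }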
It's not terrible to use git directly. There are just ways you can deploy a little bit better if it's worth your time investment.
> Why is it bad to git clone on a production server? Let's assume
> I'm not an idiot please, just misguided. This isn't reddit.
This cargo-cult ritual is for those who manage farms of identical servers. If you have a single production server with no load balancing, it's fine.
But be careful: Git doesn't preserve file permissions beyond the executable bit. If you have a directory or file that needs special permissions set locally (like a cron job script), set `core.fileMode` to `false` in your git config so those local changes don't show up as modifications.
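Concretely, inside the checkout on the server:

    # ignore executable-bit changes so a local chmod (e.g. on a cron script)
    # doesn't show up as a modification or get fought over on the next pull
    git config core.fileMode false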
It _is_ cargo-culting when publications begin to recommend it as standard practice, without explaining the reasoning. Because at that point either the publisher is doing it without understanding why, or the reader will follow the ritual without understanding why.
If you are running a modern, properly set up devops environment, you shouldn't be git cloning anything on a production server. Nor should production servers be reaching out externally with the same SSH keys a local sysadmin/dev machine uses to SSH into the production server in the first place. If you're running anything significant in production, that access should go through a proper IAM or service account, via a command-line tool that auths you to a specific level of underlying VM/node access across production services according to your RBAC role.
The most secure production deployment setup can be hotly debated, but regardless: production environments should be built as layered images, with all required installs baked in, hosted in a private company image registry. A build server inside the company VPC builds and pushes image tags to that registry, runs something like Clair over them so you at least know you're on the latest library versions with the most recent stable patches, and only then are the images pulled down for deployment. From there you deploy replicas across many servers/nodes through an intermediary deployment layer, whether k8s or Nomad (which also allows VMs, not just containers), where you can standardize and tightly specify seccomp profiles, AppArmor, custom kernel modules and whatever else you need.
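Stripped down to a sketch (registry, image name and tooling here are placeholders; your scanner and orchestrator may differ):

    # on the build server inside the company VPC
    docker build -t registry.internal.example.com/myapp:"$GIT_SHA" .
    # scan the image before the tag is allowed anywhere near production (Clair v4's clairctl here)
    clairctl report registry.internal.example.com/myapp:"$GIT_SHA"
    docker push registry.internal.example.com/myapp:"$GIT_SHA"
    # roll out through the orchestrator instead of touching nodes by hand
    kubectl set image deployment/myapp myapp=registry.internal.example.com/myapp:"$GIT_SHA"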
Using the same server that you git clone to and mess around in (i.e. a private development box) to also deploy production services would be a no-go for me.
Any git clone should be automated from a Jenkins or similar build, where org repo keys are authed before pushing to a private company repo.
The only thing more annoying than the comments scoffing at how basic the advice is, is the reflection of how basic the scoffers' own production deployment techniques are. Nothing in production should be as non-standardized as git cloning on a prod server ad hoc.
SSH keys sitting on a server seem like an irrelevant situation to me on any production-scale system I've worked on.
You'd be surprised. I worked at a startup and our entire deployment system was based on "git pulls" onto each production node, recompiling binaries, etc. Yes, it was crazy and also slow.
If you are able to use https instead of ssh, I find "deploy keys" quite handy for this scenario. Gitlab/Github provide these as essentially a temporary https password for selective read-only access to a repo or group of repos with an expiry date. The main downside I find is incompatibility with anything that needs dependencies and expects git over ssh... but then I try to avoid creating the scenario where repos must be built or assembled on each server.
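With GitLab, for example, a clone with a deploy token looks roughly like this (username, token and host are placeholders):

    # read-only clone over https; no private key ever touches the server
    git clone "https://gitlab+deploy-token-123:TOKEN_VALUE@gitlab.example.com/group/myapp.git"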
What I would really prefer is to be able to use git over SSH with U2F, i.e. a hardware key, in place of a private SSH key; this should work the same from the server as from the client. U2F support is already in OpenSSH, but I'm not sure how long it will take before it's commonly available and added to git hosting services... I'm also not sure if the protocol will work through the terminal to an OpenSSH client on a server.
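For reference, generating such a hardware-backed key with OpenSSH 8.2+ looks like this (filename is arbitrary):

    # the private key file is just a handle; the actual secret stays on the FIDO2 token
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk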
I already use hardware keys for OTP with SSH (Yubico PAM), which makes SSH between servers secure and easy without private keys, and without client software compatibility issues, since it's just keyboard-interactive mode as far as SSH is concerned... In fact, if you're also hosting your own git service, you could use this for git cloning over SSH right now and not bother waiting for U2F.
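Roughly like this (pam_yubico with made-up API credentials; details vary by distro and OpenSSH version):

    # /etc/pam.d/sshd
    auth required pam_yubico.so id=12345 key=BASE64SECRET authfile=/etc/yubikey_mappings

    # /etc/ssh/sshd_config
    UsePAM yes
    ChallengeResponseAuthentication yes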
Genuine question: this is why I have a private key on the server in our office, so I can clone our private repos.
Can you use your local private key then?