If you are running a modern, properly run devops environment, you shouldn't be git cloning anything on a production server in the first place. Nor should production servers be reaching out externally with the same SSH keys a local sysadmin's dev machine uses to SSH into them. If you're running anything significant in production, access should go through a proper IAM account or service account, with a command-line tool authing you to a defined level of underlying VM/node access across production services according to your RBAC role.
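For the RBAC piece, a minimal sketch of what scoped access can look like in Kubernetes; the role, namespace, and service-account names here are all hypothetical, and your IAM layer would sit in front of this:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader          # hypothetical role name
      namespace: prod           # hypothetical namespace
    rules:
    - apiGroups: [""]
      resources: ["pods", "pods/log"]
      verbs: ["get", "list", "watch"]   # read-only: no exec, no delete
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: deploy-bot-pod-reader
      namespace: prod
    subjects:
    - kind: ServiceAccount
      name: deploy-bot          # hypothetical service account
      namespace: prod
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

The point is that nobody, human or bot, holds a raw key to the box; they hold a role with exactly the verbs they need.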
To be hotly debated, but regardless of the specifics, a properly secured production deployment environment should look something like this: layered images with all required installs live in a private company image registry. Inside the company VPC, a build server builds and pushes image tags to that registry, running a scanner like Clair along the way so that, at the very least, you're shipping the latest stable library versions with the most recent patches. The cluster then pulls those images down and deploys replicas across many servers/nodes through an intermediary deployment layer, whether that's k8s or Nomad (which also allows for VMs, not just containers), and that layer is where you can standardize and highly specify seccomp profiles, AppArmor policies, any custom kernel modules, and so on.
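As a rough sketch of the "standardize and highly specify" part, here is a Kubernetes pod spec fragment that pins an exact image tag from a private registry and applies a seccomp profile plus an AppArmor policy; the registry host, app name, and profile names are all hypothetical:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp                      # hypothetical service name
    spec:
      replicas: 6                      # spread across nodes by the scheduler
      selector:
        matchLabels: { app: myapp }
      template:
        metadata:
          labels: { app: myapp }
          annotations:
            # AppArmor profile preloaded on the node (hypothetical name)
            container.apparmor.security.beta.kubernetes.io/myapp: localhost/myapp-profile
        spec:
          containers:
          - name: myapp
            # exact tag from the private registry, never :latest
            image: registry.internal.example/myapp:1.4.2
            securityContext:
              seccompProfile:
                type: Localhost
                localhostProfile: myapp-seccomp.json  # hypothetical profile shipped to nodes
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true

Because the profile names live in the manifest, every replica on every node runs under the same sandbox instead of whatever a given box happens to have configured.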
Anyone using the same server they mess around on as a private development box to git clone and deploy production services would be a no-go for me.
Any git clone should be automated from a Jenkins or similar build, where org repo keys are authed before the result is pushed to a private company registry.
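A minimal sketch of that kind of automated build, written as a GitHub Actions-style workflow since it's "Jenkins or similar"; the registry host, image name, and secret names are assumptions, and the scan step is a placeholder because the exact Clair invocation depends on your install:

    name: build-scan-push
    on:
      push:
        branches: [main]
    jobs:
      build:
        runs-on: ubuntu-latest   # in practice, a company runner inside the VPC
        steps:
          - uses: actions/checkout@v4   # the only place a clone happens
          - uses: docker/login-action@v3
            with:
              registry: registry.internal.example       # hypothetical private registry
              username: ${{ secrets.REGISTRY_USER }}    # hypothetical secret names
              password: ${{ secrets.REGISTRY_TOKEN }}
          - run: docker build -t registry.internal.example/myapp:${{ github.sha }} .
          - run: |
              # scan before anything is published; the thread mentions Clair,
              # but swap in whatever scanner your registry integrates with
              echo "scan registry.internal.example/myapp:${{ github.sha }} here"
          - run: docker push registry.internal.example/myapp:${{ github.sha }}

The repo keys live only on the build runner, and the prod nodes only ever see scanned image tags; nothing on a prod box ever talks to the git host.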
The only thing more annoying than the comments scoffing at how basic the advice is, is the reflection of how basic the scoffers' own production deployment techniques must be. Nothing should be as non-standardized in production as ad hoc git cloning on a prod server.
SSH keys sitting on a server seems like an irrelevant scenario to me on any production-scale system I've worked on.
You'd be surprised. I worked at a startup and our entire deployment system was based on "git pulls" onto each production node, recompiling binaries, etc. Yes, it was crazy and also slow.