Signed, reproducible builds from source on a trusted build farm are possible with conda-forge, emscripten-forge, Fedora COPR, and openSUSE OBS (the Open Build System): https://github.com/pyodide/pyodide/issues/795#issuecomment-1...

What does it mean for a package to have been signed with the key granted to the CI build server?

Does a Release Manager (or primary maintainer) sign what the build farm produced a second time? What sort of consensus on PR approval and build output justifies use of the artifact-signing key granted to a CI build server?

How open are the configurations of the build farm, the signed package repo, and the public key server? https://github.com/dev-sec https://pulpproject.org/content-plugins/
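
A practical way to read that question from the consumer side: whoever held the key at build time, the verifiable fact is that the artifact checks out against the public key the repo publishes. A minimal sketch of that check, assuming the repo ships a detached .asc signature and gpg is installed (all file names here are hypothetical):

    import subprocess

    # Hypothetical file names; substitute the repo's published key and the downloaded artifact.
    PUBKEY = "repo-signing-key.asc"
    ARTIFACT = "package-1.0.0.tar.gz"
    SIGNATURE = ARTIFACT + ".asc"   # detached signature shipped alongside the artifact

    # Import the repo's public key into the local keyring.
    subprocess.run(["gpg", "--import", PUBKEY], check=True)

    # Verify the detached signature over the artifact; check=True raises if verification fails.
    subprocess.run(["gpg", "--verify", SIGNATURE, ARTIFACT], check=True)
    print("signature OK:", ARTIFACT)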




The Reproducible Builds project aims to make it unnecessary to trust your build machines; perhaps PyPI could use that approach.

https://reproducible-builds.org/
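
The practical upshot: if builds are bit-for-bit reproducible, anyone can rebuild independently and compare digests instead of trusting whichever machine produced the published artifact. A minimal sketch of that comparison (paths are hypothetical):

    import hashlib

    def sha256sum(path):
        # Stream the file so large artifacts don't have to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths: the wheel published by the build farm vs. a local rebuild.
    published = sha256sum("dist-published/pkg-1.0.0-py3-none-any.whl")
    rebuilt = sha256sum("dist-local/pkg-1.0.0-py3-none-any.whl")
    print("reproducible" if published == rebuilt else "MISMATCH")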


"Did the tests pass" for that signed Reproducible build?

Conda > Adding packages > Running unit tests: https://conda-forge.org/docs/maintainer/adding_pkgs.html#run...
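
One way to answer that per artifact, roughly analogous to a recipe's test section: install the already-built package into a clean environment and run its import check and test suite against the installed copy rather than the source tree. A hedged sketch (the package name, wheel file name, and whether it ships its tests are assumptions):

    import subprocess
    import sys

    PACKAGE = "somepkg"  # hypothetical package name

    # Install the already-built artifact rather than rebuilding from source.
    subprocess.run([sys.executable, "-m", "pip", "install",
                    "somepkg-1.0.0-py3-none-any.whl"], check=True)

    # Import smoke test, then run the package's bundled tests (if it installs them).
    subprocess.run([sys.executable, "-c", f"import {PACKAGE}"], check=True)
    subprocess.run([sys.executable, "-m", "pytest", "--pyargs", PACKAGE], check=True)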

From https://github.com/thonny/thonny/issues/2181 :

> * https://conda-forge.org/docs/maintainer/updating_pkgs.html

> Pushing to regro-cf-autotick-bot branch: When a new version of a package is released on PyPI/CRAN/.., we have a bot that automatically creates version updates for the feedstock. In most cases you can simply merge this PR and it should include all changes. When certain things have changed upstream, e.g. the dependencies, you will still have to do changes to the created PR. As feedstock maintainer, you don’t have to create a new PR for that but can simply push to the branch the bot created. There are two alternatives […]

nektos/act is one way to run a github-actions.yml build definition locally, without a CI service (e.g. GitLab Runner, which requires approximately --privileged access to the Docker/Podman socket), to check whether you get exactly the same build artifacts as the CI build farm: https://github.com/nektos/act

A multi-stage Dockerfile has multiple FROM instructions: you can 1) build in a container that has build essentials like a compiler (GCC, LLVM) and packaging tools and keys; and then 2) COPY the build artifacts (probably one or more signed software packages) --from the build-stage container into a production container that appropriately lacks a compiler. https://www.google.com/search?q=multi+stage+Dockerfile

Are there guidelines for excluding sources of entropy like the commit hash and build time, so that the artifact hashes are exactly the same and the build is reproducible on my machine, too?
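
For the build-time part, the convention the Reproducible Builds project documents is SOURCE_DATE_EPOCH: pin it to the last commit's timestamp so the build does not embed the wall clock. A hedged sketch, assuming a git checkout and the `build` package, and noting that whether a given build backend honors the variable varies:

    import os
    import subprocess
    import sys

    # Pin embedded timestamps to the last commit instead of "now".
    commit_time = subprocess.run(
        ["git", "log", "-1", "--format=%ct"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    env = dict(os.environ, SOURCE_DATE_EPOCH=commit_time)

    # Rebuilding later (or on another machine) should then yield byte-identical
    # artifacts, if the build backend honors SOURCE_DATE_EPOCH and no other
    # entropy (commit hash in the version string, random temp paths) leaks in.
    subprocess.run([sys.executable, "-m", "build"], env=env, check=True)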



