Hm, shouldn't the articles contain the information necessary to rewrite the code? Then rewriting the code could be seen as replicating the experiment.
Both sharing and not sharing seem to have pros and cons. For example, if the code is buggy and shared, the odds might be higher that the bugs will never be found, because nobody will bother trying to write the code again.
I completely agree. If this is publishable science, then a strong and reproducible description of the science and algorithms used is all that is necessary.
I would be very interested if the author had actually given even a single instance where the lack of the software code that merely implements the experiment has completely impeded progress on the science in a paper. And even if that were the case, would it not simply imply that more algorithmic detail is required?
Of course, for all of the above, I am referring to non-computer science. There may be special circumstances in computer science where the code itself is the published algorithm or an intended description of the underlying science.
A specific circumstance of this kind is theorem provers, because the proof is usually too large to fit in a paper. The Archive of Formal Proofs (AFP) [0] is a repository for Isabelle proofs, which my colleagues use. They submit a proof to the AFP and write a paper about the results, citing the AFP entry.
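To make the point concrete: in this setting the code literally is the published result. A toy sketch (in Lean rather than Isabelle, purely for illustration) of what such a machine-checked artifact looks like:

    -- Toy machine-checked proof: the checked source file is the artifact itself.
    -- (Illustrative only; AFP entries are Isabelle theories, not Lean.)
    theorem add_comm_nat (m n : Nat) : m + n = n + m :=
      Nat.add_comm m n

Real AFP entries run to thousands of lines, which is exactly why they are archived separately and merely cited from the paper.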
> For example if the code is buggy and shared, odds might be higher that the bugs will never be found because nobody will bother trying to write the code again.
Your reasoning is a bit odd. It's a good thing that nobody will bother trying to write the code again. The world has enough people who can write code. We need more people who can read and dissect existing code, refactor it, and add tests where applicable.