This post is partly an advert for a workshop I'm co-organising in Glasgow on 19-20 August, called "ISNET: Information and Statistics in Nuclear Experiment and Theory". It's about the linked processes in (nuclear) physics of performing experiments to measure some property of a system and developing theoretical models of the same system, and in particular how to properly quantify the errors and uncertainties associated with the models. These uncertainties have not always been treated with the same rigour in theoretical work as in experimental work, but that is increasingly changing. If you're a nuclear physicist and fancy coming along, please visit the workshop website and register. There is no fee for the workshop, but also no support, so your institution will need to cover local expenses.
The other part of the post reports on some broader issues related to the workshop. I'm in Cracow, in Poland, at another workshop (this one on things we could do if a new nuclear research facility is built). At the end of my talk I advertised our workshop in Glasgow, and afterwards had a very interesting conversation with Thomas Duguet from Saclay. Amongst many other things, we discussed the problem of publications telling only the story of successful investigations. Often (or perhaps always), in both experiment and theory, the process of arriving at a positive result to report involves a lot of getting things wrong: dead ends and fruitless methods tried before the result finally comes. Sometimes the result is only negative, in that only the absence of a hoped-for result is "discovered". It is certainly not the case that such things never make it into print: there are plenty of published results setting bounds on the existence of physical states from experiments that were performed and saw nothing. But there is surely a lot of work out there that is never reported.
There are all sorts of potential effects of this. What will a historian of science conclude when they plumb the literature many years from now? These parts of the scientific process will be hidden. What happens to the career of a young scientist whose sterling work only rules out the existence of something? How will they be compared on the job market with someone who discovers something? Then there is the waste of repeated labour: trying something that someone else has already explored exhaustively but never reported as fruitless. Partly because of these issues, Thomas has been talking with journal editors about starting a new series of articles that document the process of doing science, pitfalls and all.