While computational experiments have become an integral part of the scientific method, repeating such experiments remains a challenge: they often require specific hardware, non-trivial software installation, and complex manipulations to obtain results. To address this problem, we are building infrastructure to support the life cycle of 'reproducible publications': their creation, review, and re-use. This infrastructure, which builds upon and extends the open-source VisTrails system, makes it easier to generate and share repeatable results by making provenance a central component of scientific exploration and the conduit for integrating data acquisition, derivation, and analysis as executable components throughout the publication process. Besides giving authors the ability to link results to their provenance, it enables reviewers to assess the correctness and relevance of the experimental results described in a submitted paper; and, upon publication, it allows readers to repeat and re-use the computations embedded in the paper. In this talk, we give an overview of this infrastructure and present case studies that show how it has been used. We also discuss its limitations and open problems that require further research.
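
To make the idea of a provenance-linked result more concrete, the sketch below shows a minimal, hypothetical Python model of a result that carries the inputs and derivation steps needed to repeat it. The names (ProvenanceRecord, record, replay) are illustrative assumptions for exposition and do not correspond to VisTrails's actual API.

```python
# Illustrative sketch only: a minimal, hypothetical model of a
# "provenance-linked result". Names and structure are assumptions,
# NOT the VisTrails API.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple


@dataclass
class ProvenanceRecord:
    """Captures what is needed to repeat a derived result."""
    inputs: Dict[str, Any]  # input data and parameters used
    steps: List[Tuple[str, Callable[[Any], Any]]] = field(default_factory=list)

    def record(self, name: str, func: Callable[[Any], Any]) -> None:
        """Register a derivation step so it can be replayed later."""
        self.steps.append((name, func))

    def replay(self) -> Any:
        """Re-execute the recorded steps from the original inputs."""
        value: Any = self.inputs
        for _name, func in self.steps:
            value = func(value)
        return value


# Usage: an author records how a published value was derived...
prov = ProvenanceRecord(inputs={"raw": [1.0, 2.0, 3.0, 4.0]})
prov.record("normalize", lambda d: {"norm": [x / max(d["raw"]) for x in d["raw"]]})
prov.record("mean", lambda d: sum(d["norm"]) / len(d["norm"]))

result = prov.replay()
# ...and a reviewer or reader can later call prov.replay() again to
# repeat the computation and check the reported result.
print(result)
```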