Recent calls to bolster the transparency of research in computational literary studies (CLS) often overlook the practical realities and limitations of conducting this work. This paper outlines several barriers to reproducibility and replicability in CLS, specifically as they apply to the extraction and analysis of text from digitized historical documents. We identify these limitations as they emerged in the workflow of our project, “Ciphers of The Times,” which sought to explore modes of intertextuality and cultural exchange between nineteenth-century newspapers and novels. Among them, we consider in detail the issues of proprietary image-to-text platforms, institutional subscriptions, and third-party paywalls. While some of these barriers were surmountable, others were not. Based on the existing restrictions, we articulate a theory of how unequal access impedes the research community’s ability to engage effectively with the material under study, ultimately precluding critique. Unable to offer full transparency into its research practices, even well-intentioned scholarship falls short of fostering a culture of critical re-engagement, diminishing both the impact of research findings and the utility of prepared datasets. Short of a major shift in access models, we consider the options available for improving reproducibility and replicability.