A venue for work at the intersection of history, computation, and critical approaches to political economy, empire, and environment — a place to publish what doesn't fit traditional journal shapes, and to show the code, the data, and the limitations alongside the argument.
Working Papers in Critical Search publishes scholarship at the intersection of history, computation, and critical approaches to political economy, empire, and environment.
We are looking for work that treats method and interpretation as inseparable: submissions should make their evidentiary chains explicit, from source to claim. We publish work that doesn't fit neatly into traditional journals.
For the broader intellectual rationale, see the introduction to the series.
Critical search, as developed in Jo Guldi's The Dangerous Art of Text Mining, is the practice of interrogating how knowledge is retrieved, ranked, and assembled — with attention to provenance, bias, and historical context. It treats the act of search as itself an object of historical and methodological scrutiny, not a transparent route to evidence.
For this series, critical search is both a method and a publishing standard. We ask contributors to make their evidentiary chains visible: readers should be able to see where sources originate, how they have been transformed, how algorithms have been tested for bias, how the dataset's own limitations have been interrogated, and which specific passages substantiate a given interpretation. We call this a “white-box” approach to digital history.
New methods are emerging rapidly. We want a place to document what works, what doesn't, and what historians should know about these tools.
Papers are not frozen. Authors update as libraries release new versions, new sources appear, or readers suggest improvements. Each significant revision gets a new version and a new DOI.
Include the code, describe the methods, name the limitations. The point is transparency about the analytical chain from source to claim — not one-click reproducibility. If your analysis required a cluster and weeks of compute, say so. If your dataset has known gaps, say what they are. If your interpretation rests on particular passages, point to them.
Editorial review, not formal peer review. Papers publish in weeks, not months. Editors check fit with scope, that the methods are documented enough to follow, and that the evidentiary chain has no obvious gaps. Substantial revisions happen post-publication, in the open.
Authors draft in their own GitHub repo (forked from the paper template) or by sending the editors a notebook, Word doc, or .qmd file. When the work is ready, the editors handle the technical handoff: pulling the paper into its own repo in the journal organization, configuring Pages, and adding the author as a collaborator for any post-review revisions.
Editors check that the paper fits the scope, that code and methods are documented, that the argument is clear, and that there are no obvious errors. Requests for changes go through GitHub Issues or PR comments.
Paper repo is added to the public index. Quarto auto-renders to GitHub Pages.
First release (v1.0) is tagged; Zenodo archives the snapshot and mints a DOI.
Minor fixes merge straight to main and the site rebuilds. Major updates get a new version tag (v1.1, v2.0) and a new Zenodo DOI; previous versions remain citeable.
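The release cycle can be sketched with plain git commands. This is a hypothetical illustration run in a throwaway scratch repository, not the journal's actual tooling; the tag names and commit messages are examples, and the Zenodo step that archives a tagged release is not reproduced here.

```shell
# Sketch of the version-tagging flow in a throwaway repository,
# so the example is self-contained. Names are illustrative.
set -e
cd "$(mktemp -d)"
git init -q paper && cd paper
git config user.name "Author"
git config user.email "author@example.org"

git commit -q --allow-empty -m "Initial paper"
git tag -a v1.0 -m "First release: snapshot archived, DOI minted"

git commit -q --allow-empty -m "Add new sources; revise analysis"
git tag -a v1.1 -m "Major update: new version tag, new DOI"

git tag --list   # both tags persist, so earlier versions stay citeable
```

In a typical GitHub–Zenodo integration, pushing the tag and publishing a GitHub Release for it is what triggers Zenodo to archive the snapshot and mint the versioned DOI; the exact trigger may differ for this series.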
See the submission guide. Quick version: start from the paper template, write, transfer to the org, we review, we tag a release, Zenodo mints a DOI.
Papers are living documents. Fork the paper's repo, make changes, open a pull request. Minor fixes merge quickly; major revisions get a new version and DOI.
Found a bug? Open an issue. Want to improve something? Fork the home repo, make changes, test locally with `quarto preview`, and submit a pull request.
| Tool | Role |
| --- | --- |
| GitHub | Version control, collaboration, living documents |
| Quarto | Renders notebooks to HTML and PDF |
| GitHub Pages | Free hosting |
| Zenodo | Permanent archiving, DOI minting |
| Hypothes.is | Page-level annotation and discussion |
| arXiv | Distribution to CS/AI communities (optional) |
| Hugging Face | Dataset and model hosting (when applicable) |
All content is published under CC BY 4.0. Code is typically MIT-licensed unless otherwise specified by authors.
GitHub Issues for problems with the site. For editorial questions: jim.clifford@usask.ca or jo.guldi@emory.edu.