Rapid Capture: Reflecting on Interpretations
Recently, I had the great fortune to be invited to the annual Cultural Heritage Imaging Professionals Conference held at Stanford University. The assembled group is small by design, made up of digital imaging studio managers from libraries, archives and museums who are provided with a focused forum during the three-day event "to share ideas, best practices, techniques and stories."
My invitation included a request to give a short talk on a subject of my choosing. The only stipulation was that the talk should provoke thought and stimulate later discussion at the event.
I had originally considered doing a condensed version of a recent presentation I made on 3D modeling at The Getty Center. But to keep things fresh, I decided instead to focus on some recent thoughts of mine about where we are as a field on the notion of "rapid capture."
While there is some debate over when exactly the phrase first entered the lexicon, rapid capture's most recent adoption within cultural heritage institutions can best be traced to Ricky Erway and Jennifer Schaffner's 2007 report, Shifting Gears: Gearing Up to Get Into the Flow. In their paper, the authors enthusiastically outline a progressive vision of digital reformatting through a proposed set of competing dichotomies, among them "Access vs. Preservation – Access Wins!" and "Quality vs. Quantity – Quantity Wins!" The means to these ends include the notion that lowering image resolution at capture (i.e., lesser quality) directly allows for faster throughput (i.e., higher quantity). Among the stated goals of such accelerated throughput are to make digital capture an embedded part of initial archival processing and to have special collections digitization emulate Google's and the Internet Archive's large-scale reformatting initiatives. According to the authors, "All these measures will help us to begin to keep pace with mass digitization of books." From the time of its publication through to today, the paper has had a strong influence on the higher-level thinking of libraries, archives and museums.
Yet mass-digitized books and special collections objects can be two very different things, a truth that Erway accurately acknowledges in her 2011 follow-up, Rapid Capture: Faster Throughput in Digitization of Special Collections. There she notes that "collections vary, and it is important to recognize that comparing throughput rates… [we] must take other factors into account. Different materials require different approaches..." In fact, one of the generally agreed-upon themes from this year's Stanford gathering was that the handling of fragile and mostly heterogeneous archival objects is the major bottleneck in subsequent digital capture.
I bring up these two papers in particular for a few reasons. Mainly, they serve as a healthy example of the gradual refinement, through implementation and fair-minded assessment, of one's sometimes raw "out with the past, in with the new" ideas. In this case, 2007's Shifting Gears... presents the radical, as-yet-untested initial outline of a proposed future, while 2011's Rapid Capture... presents a series of trials and an analysis of their results. Erway describes the methodology of the latter: "So in an extremely casual survey, we asked some of our colleagues in libraries, archives, and museums to identify initiatives where non-book digitization was being done 'at scale.' We didn't define 'at scale,' because we thought we'd know it when we saw it. It wasn't always so easy." The paper goes on to examine a number of case studies more closely, along with some of the unexpected hurdles that were met. In her conclusions, Erway sounds a more sober note on just how far monolithic rapid capture assumptions can logistically reach when applied to the often heterogeneous formats and fragile nature of archival objects.
Though by 2011 Erway's initial vision had clearly been recalibrated through the natural iterations of reasoned experimentation and an evidence-based feedback loop, it is interesting to consider what "rapid capture" still means today. In many ways, the early (at that point untested) assumptions of the 2007 paper still hold a powerful influence. For instance, the relationship between speed (rapidity) and quality is still viewed by many as an inflexible inverse one. Yet this assumption is largely a carryover from an earlier time, when most digital capture was done with the tri-linear sensor technology of still-camera scan-back systems or generic flatbed scanners. To get such technology to run faster, you had to lower its sampling-rate setting. Hence the idea that high speed = low resolution, with its intimations of "low quality," came into being.
However, the technological reality of digital imaging changed drastically in 2008, when Canon released its affordable 5DII, a 21.1-megapixel area-array EOS camera body.
From that moment, flat objects up to 11" x 17" could be captured at a true 300 ppi with the instantaneous release of a shutter, rather than with the slow broom-sweep of a scanner's tri-linear sensor array. Despite such advances in speed and quality, rapid capture can still carry vague connotations of quality being necessarily sacrificed for the sake of speed. It is as if the assumptions of the 2007 paper were taken as truth, while the realities of the 2011 follow-up were never read beyond "Rapid Capture" in its title, the term henceforth becoming part of the greater lexicon. Yet as imaging technologies continue to advance and become even more accessible, and as technical imaging expertise grows more focused, high quality is routinely being attained at scale through skilled workflow engineering and management.
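The arithmetic behind that 300 ppi claim is easy to verify. Here is a minimal sketch in Python, assuming the 5DII's published 5616 x 3744 pixel sensor dimensions; the covers_at_ppi helper is my own illustrative name, not a standard function:

```python
# Does an area-array sensor cover a flat object at a target ppi?
# Sensor pixel counts below are the Canon EOS 5D Mark II's published
# 5616 x 3744 resolution; object size and ppi target come from the text.

def covers_at_ppi(sensor_px_long, sensor_px_short,
                  object_in_long, object_in_short, target_ppi):
    """Return the effective ppi on each axis and whether both meet the target."""
    ppi_long = sensor_px_long / object_in_long     # pixels per inch, long edge
    ppi_short = sensor_px_short / object_in_short  # pixels per inch, short edge
    return ppi_long, ppi_short, min(ppi_long, ppi_short) >= target_ppi

ppi_l, ppi_s, ok = covers_at_ppi(5616, 3744, 17, 11, 300)
print(f"{ppi_l:.0f} x {ppi_s:.0f} ppi -> meets 300 ppi: {ok}")
# 330 x 340 ppi -> meets 300 ppi: True
```

In a single shutter release, the full 11" x 17" field lands on the sensor at better than 300 ppi on both axes, which is the crux of the speed-without-sacrifice argument.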
Today, the lingering old assumptions can be problematic for a few reasons. Alongside advances in digital capture technology have come 4K, 8K, and "Retina" displays, and with them the heightened viewing expectations of digital image users. At the same time, it is becoming more and more apparent that on such super- and ultra-high-definition devices, low-quality images look progressively worse than they do at today's standard definition. Would it be a stretch, then, to envision a time in the near future when 4K, for example, will be the new "standard"? Can we not imagine technology driving us toward an almost regular re-evaluation of the importance of image "quality," whether we want to engage in self-evaluation that deep or not? Our users' interest, and hence our own relevance, may inevitably hinge on constantly upping our game.
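A rough sketch of that pressure, carrying over the hypothetical 11" x 17" master at 300 ppi from above and using the standard HD/UHD pixel counts:

```python
# Can an 11" x 17" master captured at 300 ppi fill a display at 1:1,
# with no upscaling? Display dimensions are the standard HD/UHD figures.
master_w, master_h = 17 * 300, 11 * 300  # 5100 x 3300 px

displays = {
    "HD (1080p)": (1920, 1080),
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

for name, (w, h) in displays.items():
    fits = master_w >= w and master_h >= h
    print(f"{name}: fills the screen without upscaling -> {fits}")
```

HD and 4K pass, but the same master would already need upscaling to fill an 8K display edge to edge: yesterday's comfortable margin erodes with each display generation.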
It is also interesting to note that beyond simple appearance, images, and particularly large image aggregations, are being used more and more in big-data research, a topic I have previously touched upon in citing Peter Leonard's work at the University of Chicago. High-quality images created to a consistent standard can be leveraged as a rich and useful data resource, allowing tomorrow's researchers to plumb new data points and approach new lines of inquiry that may go well beyond such traditional activities as text mining.
Meanwhile, the question remains... will yesterday's more coarsely standardized archival images frustratingly limit expected use over time? A world of big noise, rather than big data?
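As a toy illustration of images-as-data (my own sketch, not drawn from the cited work), consider a corpus-level statistic as simple as per-image mean brightness, computed with the Pillow library over a hypothetical folder of masters. Captured to a consistent standard, the numbers track the objects themselves; captured inconsistently, they track the digitization, which is exactly the big noise in question:

```python
# Toy example: mean brightness per master image, via Pillow (PIL).
# "masters" is a hypothetical directory of TIFF files, not a real path.
from pathlib import Path
from PIL import Image, ImageStat

def mean_brightness(path):
    """Average luminance of an image on a 0-255 scale."""
    with Image.open(path) as im:
        return ImageStat.Stat(im.convert("L")).mean[0]

corpus = sorted(Path("masters").glob("*.tif"))
for p in corpus:
    print(p.name, round(mean_brightness(p), 1))
```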
In the final analysis, this is not a plea to return to the time when boutique imaging methods were indiscriminately applied to all material formats. Indeed, aspects of rapid capture's original vision remain a healthy aspiration for getting our collective efforts up to an operational level of scale. It is just that the black-or-white nature of those original dichotomies feels a bit out of focus in hindsight, particularly in light of the latest technologies and their applications.
At UConn, our fiscal year recently closed, and with it another 12-month tracking cycle for our lab. In total, my photographers shot 100G new image captures that I can confidently describe as attaining FADGI 3-4 star quality (4 stars being the highest level on the scale). Is this an example of rapid capture? I don't know. All I know is that it is not simply the raw totals that interest me, but the ongoing challenge of maintaining an accompanying high level of image quality; that is what makes it fun to come to work every day.