Daniel Paul O’Donnell
University of Lethbridge
Abstract: This article examines how novel technology affects readers’ understanding of digital objects. It begins by examining some recent scandals involving digitally manipulated photographs and argues that some of the uproar stems from the novelty of the techniques used in the manipulation, rather than the manipulation itself. It then explores some of the challenges in using novel technology to mediate the representation of historical objects in scholarly form. The article concludes with some thoughts on early experiments with the objects of the Visionary Cross project, a digital edition of a collection of objects belonging to the Anglo-Saxon “Visionary Cross” tradition.
Keywords: Photographic hoaxes; National Geographic; Gordon Gahan; Digital Humanities; Digital editing; the Visionary Cross Project; Virtual Morgantown; Immersive environments; Serious Gaming; Digital rhetorics
Daniel Paul O’Donnell is Professor of English at the University of Lethbridge. In addition to directing the Lethbridge Journal Incubator (http://www.uleth.ca/lib/incubator) and the Visionary Cross project (http://www.visionarycross.org/), he serves as co-president of the Society for Digital Humanities / Société pour l’étude des médias interactifs and is founding chair of Global Outlook :: Digital Humanities (http://globaloutlookdh.org/). Email: daniel.odonnell@uleth.ca .
The INKE Research Group comprises over 35 researchers (and their research assistants and postdoctoral fellows) at more than 20 universities in Canada, England, the United States, and Ireland, and across 20 partners in the public and private sectors. INKE is a large-scale, long-term, interdisciplinary project to study the future of books and reading, supported by the Social Sciences and Humanities Research Council of Canada as well as contributions from participating universities and partners, and bringing together activities associated with book history and textual scholarship; user experience studies; interface design; and prototyping of digital reading environments.
In 1982, National Geographic found itself involved in a scandal. The cover of its February issue showed a camel train walking across the desert in front of the Pyramids of Giza. There were a number of things wrong with this photo. The camel train in the foreground, for example, was apparently staged: the photographer, Gordon Gahan, is said to have paid the team to walk in front of the camera for this and other shots on the same shoot (Museum of Hoaxes, 2011). But the bigger issue was the positioning of the pyramids. Using then-nascent digital photo-editing technology, National Geographic’s photo editors changed the spatial relationship of the pyramids in order to improve their fit on the magazine’s vertically oriented front cover (Figure 1).
Figure 1: Collage comparing National Geographic’s February 1982 cover with its most likely source, Gahan and National Geographic Stock Cat. No. 277403.
This was neither the first nor the last time a photographic image had been manipulated for aesthetic (or more nefarious) reasons. As Figure 2 demonstrates, such interventions were as common in the pre-digital era as they are now.
Figure 2: Pre- and post-digital photo manipulation. Top: Nikolai Yezhov is removed from a photograph with Stalin after his purge (Wikimedia); Middle: A fence post is removed for aesthetic reasons from a photograph of Mary Vecchio at Kent State (adapted from Lucas, 2009); Bottom: A “fourth” rocket is added to an Iranian propaganda photo to cover up an unsuccessful test (Museum of Hoaxes).
Moreover, in the case of National Geographic, the evidence suggests that the manipulation was carried out with relative care. One photo editor at the time described the alteration as being the equivalent to a “retroactive repositioning of the photographer” (National Press Photographers Association, 2012). Research (currently in preparation) by Simon Justin Julier, Melissa Terras, Tim Weyrich, and me largely supports this claim, suggesting that the placement of the pyramids on the cover was in fact closely modelled on a second photo from the same shoot.
In some ways, the scandal surrounding the National Geographic cover had as much to do with the newness of the technology as with the way in which the technology was applied. As Paul Martin Lester, writing soon after the cover was first published, suggested:
Throughout photography’s history, an unsuspecting public has been fooled by manipulated images. What is of concern to modern media watchers is the justifications used to alter images through computer technology — not [the] fact that such alterations can be published without detection. (Lester, 1988, para. 22)
Indeed, in the case of the New York Times, the newspaper’s integrity guidelines explicitly reference the capabilities of pre-digital technology to define the ethical limits for the manipulation of ostensibly documentary news photography:
Images in our pages that purport to depict reality must be genuine in every way. No people or objects may be added, rearranged, reversed, distorted or removed from a scene (except for the recognized practice of cropping to omit extraneous outer portions). Adjustments of color or gray scale should be limited to those minimally necessary for clear and accurate reproduction, analogous to the “burning” and “dodging” that formerly took place in darkroom processing of images. Pictures of news situations must not be posed. (The New York Times Company, 2012, para. 17, emphasis added)
In fact, thirty years on, most people are now probably far more willing to accept various types of digital post-production alteration, even in archival contexts: digital sharpening and colour correction are performed almost universally in contemporary digital production, and very few of us would be shocked (or even very dismayed) to hear that a hint of red eye had been removed from a portrait or that background clouds had been lightened in order not to distract attention from a crowd shot on a magazine cover. Professional journalists and editors often (though still controversially) explicitly distinguish between the advertising function of a cover image and the documentary function of editorial photography (Anonymous, 1989). As popular audiences have become more familiar with (and practised in) the ways in which digital images can be manipulated, the popular sense that photos could ever represent an unmediated reality has almost certainly diminished correspondingly.
The scandal over the National Geographic cover is relevant to scholarly editors because of the equally profound effect that digital technology is starting to have on our discipline. Like early digital photographers, we as digital editors now have access to tools that allow us to present material to audiences in completely novel ways, using approaches and techniques that our users do not yet necessarily understand or know precisely how to interpret. While few editors will have reason to engage in the outright fakery of the propagandist, it is still the case that these new approaches can inadvertently mislead users as to the reliability or intentions of the material we present to them.
In fact, if anything, the effect of this technological revolution on us is likely to be even more profound. Photographers, even in pre-digital days, have always faced the problem that the inherently mediated nature of their work is easily misunderstood by audiences who do not know what is involved in photographic capture and production. Academic editors, on the other hand, have not generally suffered from the same handicap: with the exception, perhaps, of “reading” texts — which are often presented by editors (and even more often accepted by readers) as representing “the” definitive text of a given work — most aspects of the traditional scholarly critical edition are self-evidently interpretative. Nobody would confuse the diplomatic transcription of a medieval manuscript with the manuscript itself or consider the textual apparatus of a critical edition an unmediated representation of the surviving witnesses.
Even photographic evidence used by textual scholars has tended to be presented in a fashion that emphasizes its argumentative function. Print scholarly editions have historically tended, for economic and technological reasons, to restrict the number of photographs they present of a given witness to a few important details or pages. Such photos have often involved the explicit use of special techniques or manipulations designed to draw out specific details (e.g., the use of ultraviolet light or increased contrast), and they are often supplemented by (self-evidently interpretative) drawings when the desired detail is difficult to see or explicate. Even publications that users might be more tempted to understand as unproblematic representations of a given object have tended, for these same economic reasons, to emphasize their distance from the object they document: with the exception of a few non-scholarly publications aimed at the bibliophile market (for an example, see The Book of Kells Facsimile published by Addison Publications), facsimiles of even illuminated manuscripts have tended to be published in black and white (see, for example, Rosenkilde and Bagger’s Early English Manuscripts in Facsimile series up to Volume 27). In such circumstances, even the most willing reader is unlikely to consider the publication a direct stand-in for the real-world object itself.
This is now beginning to change. For almost twenty years, the same revolution that led to the National Geographic scandal has also affected the way scholarly editors work. With improvements in photography, digital editors now make far more use of high-resolution colour images — images that, if anything, often seem better and easier to interpret than the objects themselves. Likewise, new techniques for searching, navigating, and delivering digital editorial material are changing the way users interact with the results of editorial work. The ease with which multiple versions of a given text can be presented and linked to each other in digital form, for example, makes it easier to forget that the representation and collection of these sources remains no less an editorial act than was ever the case with a print apparatus.
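A small sketch may make the point concrete. The Python structure below is purely illustrative: the segment labels and mapping are hypothetical and are not drawn from the project's encoding, though the two readings loosely follow a familiar correspondence between the Vercelli and Ruthwell texts. Even at this trivial scale, segmentation, normalization, and the choice of which witnesses to include are editorial decisions.

```python
# A deliberately simple sketch: even this small parallel-text mapping embeds
# editorial decisions about segmentation, normalization, and which witnesses
# are included at all. The structure and segment labels are hypothetical.

witnesses = {
    "Vercelli Book": {"segment_01": "Crist wæs on rode"},
    "Ruthwell Cross": {"segment_01": "krist wæs on rodi"},  # runes, transliterated
    # The Brussels Cross is simply absent here: inclusion and exclusion
    # are themselves editorial acts.
}

def parallel(segment_id):
    """Return each witness's reading for a segment, noting any absences."""
    return {
        siglum: text.get(segment_id, "[not present in this witness]")
        for siglum, text in witnesses.items()
    }

print(parallel("segment_01"))
```

Nothing about such a structure is dishonest; the danger is only that its convenience makes the underlying editorial choices easy to overlook.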
This problem is going to get more serious as existing technologies improve and new technologies allow digital editors to extend the idea of “the edition” to cover objects, ideas, and relationships that have rarely or never been treated editorially before. The demands of usability, editorial assumptions about user behaviour and expectations, and the capabilities and limitations of the hardware and software used for production and user interaction all affect the way digital editions present their material, even if this is not always acknowledged (or in some cases perhaps even recognized) by the users and editors involved.
The significance of this problem has been discussed most extensively and thoroughly in the case of the so-called “Spatial Turn” in Geography and the Geohumanities (see, amongst others, Dear, 2011; Bodenhamer, Corrigan, & Harris, 2010). As Bodenhamer, Corrigan, and Harris have noted, for example, the implicit bias towards certain kinds of research questions and certain kinds of data collection inherent in early Geographic Information Systems (GIS) software rapidly became an issue, even among the (relatively positivist) disciplines that were originally responsible for developing and adopting the technology:
The central issue was, at heart, epistemological: GIS privileged a certain way of knowing the world, one that values authority, definition, and certainty over complexity, ambiguity, multiplicity, and contingency, the very things that engaged humanists. From this internal debate, often termed Critical GIS, came a new approach, GIS and Society, which sought to reposition GIS as GIScience, embodying it with a theoretical framework that it previously lacked. This intellectual restructuring pushed the technology in new directions that were more suitable to humanists. (Bodenhamer, Corrigan, & Harris, 2010, p. ix)
Even with this recognition, however, the use of geographic software by humanists in practice still tends to frame questions and answers in ways that keep them on the periphery of humanistic enquiry:
To date, studies using GIS in historical and cultural studies have been disparate, application driven, and often tied to somewhat more obvious use of GIS in census boundary delineation and map making. While not seeking to minimize the importance of such work, these studies have rarely addressed the broader, more fundamental issues that surround the introduction of a spatial technology such as GIS into the humanities. There are core reasons why GIS has found early use and ready acceptance in the sciences and social sciences rather than in the more qualitatively based humanities. The humanities pose far greater epistemological and ontological issues that challenge the technology in a number of ways, from the imprecision and uncertainty of data to concepts of relative space, the use of time as an organizing principle, and the mutually constitutive relationship between time and space... The mathematical topology that underpins GIS brings its own data representations in the form of raster, vector, and object forms. The attribution of these geometric forms lends itself to the classification of natural resources, infrastructure, demography, and environmental phenomena rather than to the less well-defined descriptive terms and categories of the humanities. (Bodenhamer, Corrigan, & Harris, 2010, pp. x–xi)
The problem, for these scholars and others in this rapidly expanding field, is how this technology and the ways in which it is used can be shaped to fit the needs and expectations of the researchers who work with it on humanistic questions: “to create a language that bridges disciplines, ... to re-conceptualize the Humanities to include spatial perspectives, ... to use GIS to analyze texts and images as well as it parses points and polygons” (Bodenhamer, Corrigan, & Harris, 2010, p. xiv).
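One way to make this mismatch concrete is to contrast the shape of a conventional vector GIS record with the shape of the evidence a humanist actually has. The sketch below uses plain Python rather than any real GIS library, and every class name, field, coordinate, and date in it is a hypothetical placeholder.

```python
# A minimal, hypothetical illustration of the epistemological mismatch described
# above: a vector GIS feature expects exact coordinates, a closed classification,
# and a single date, while the humanities datum it is meant to capture is
# approximate, contested, and source-dependent. No real GIS library is used.

from dataclasses import dataclass, field

@dataclass
class GISFeature:
    """A conventional vector feature: precise geometry, fixed categories."""
    lon: float           # a single exact coordinate (placeholder value below)
    lat: float
    feature_class: str   # drawn from a closed thesaurus
    year: int            # a single point in time

@dataclass
class HumanitiesObservation:
    """The same 'fact' as a humanist might actually record it."""
    place: str                       # an approximate, source-dependent location
    certainty: str                   # "probable", "disputed", "conjectural"
    date_range: tuple                # (earliest, latest), often very wide
    sources: list = field(default_factory=list)

observation = HumanitiesObservation(
    place="in or near the Manse garden at Ruthwell",
    certainty="probable",
    date_range=(1790, 1823),         # illustrative, not documented, dates
    sources=["antiquarian correspondence", "later local histories"],
)

# Forcing the observation into the feature discards exactly the ambiguity,
# multiplicity, and contingency that the humanistic question is about.
as_gis = GISFeature(lon=-3.41, lat=55.00, feature_class="monument", year=1800)
```

Nothing about the first representation is wrong; it simply answers a much narrower question than the second one asks.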
Similar problems exist with other technologies that are driving the latest wave of digital scholarly projects. Immersive technologies used in the creation of serious games or other virtual environments still often show, in non-commercial applications at least, a lack of realism and resolution that is perhaps more reminiscent of the line drawing than of the colour photograph. But while nobody would consider the models used in such environments to be unmediated representations of the actual historical objects they represent, the same is not necessarily true of the environments themselves: immersive environments, especially those with claims to some documentary or historical veracity, have a totalizing logic that can obscure the extent to which they manipulate information about their subjects.
An example of this can be seen in the case of the Virtual Morgantown project, in which it can be difficult to distinguish precisely between the parts that are intended to serve as documentary representations of historically supported objects and the parts that supply less well-documented “background colour” (Figure 3). According to the project description, “building footprints, streets, and lot boundaries” were entered into a GIS from historical maps and photographs, while “[h]istorical as well as contemporary photographs were used to aid in the construction of the 3D building models” and “several other GIS layers were generated to populate the virtual landscape, including ground surface, streets, trees, and street furniture” (Virtual Morgantown Project, paras 1 and 3). But while this suggests that the content of the environment is entirely derived from historical sources, there remains some room for doubt about the relative accuracy and certainty of the individual details. For example, there appears to be a complete absence of fences: is this an accurate reflection of the streetscape of early twentieth-century Morgantown or is it an artifact of the surviving data or rendering processes? Likewise, the buildings in the scene vary in colour, even though this is presumably a detail missing from the (largely black-and-white) late nineteenth- and early twentieth-century sources used to construct the model: how reliable is the assignment of colour to individual buildings? Is a distinction made between buildings for which some colour information is known and those for which none survives?
Figure 3: Screenshot from the Virtual Morgantown project
And finally, there is the problem of the trees. The project description suggests that information about trees was derived from historical maps and photographs. Does this mean that all the trees in the scene correspond to the known location of a historical tree found in documentation from the period? And if so, what about the species? The trees in Figure 3 are graphically represented by what appears to be a maximum of three or four different images, which are then repeated throughout the scene to represent individual trees. At least one of these images appears to involve a photo of what is actually two trees of different species in very close proximity to each other, one broad and the other tall and narrow (Figure 4). Can this be representative of the actual streetscape of 1900 Morgantown?
Figure 4: Morgantown trees?
These are, in one sense, sophomoric questions. As Borges has so elegantly parodied in his short story, “On Exactitude in Science,” the argument that a representation is not exactly the same as the thing it attempts to represent ignores the essential fact that we make representations precisely in order to generalize about and make sense of real-world objects: a 1:1 map that precisely represented everything in a territory down to the location and size of each blade of grass would lose all explanatory power (Borges, 1998). Moreover, I suspect that these questions also involve overreading a project that appears to have been intended as a teaching, outreach, and visualization tool rather than a critical edition of the early twentieth-century Morgantown cityscape.
The important thing to note here, however, is that it is the rhetoric of immersion that is creating the problem: nobody wants to navigate a virtual city that has no trees or that deteriorates into line drawings when the researchers run out of information about the texture or colour of the street or buildings. And this rhetorical imperative makes it very difficult to avoid supplying — and hence, given the context, implying in some sense that they are “documentary” — elements that are either not directly supported by the surviving data or based on incomplete or ambiguous evidence that does not justify the certainty implied by their representation. Like museum curators who fill in missing pieces of objects in their collections with modern reconstructions, designers of virtual environments are, quite naturally, tempted to fill in the gaps in their virtual representations with reconstructions of likely missing material in order to give a complete impression of the environment. But where few museum directors would encourage their staff to colour, shape, and texture the reconstructed portions of their artifacts so that they were indistinguishable from the historical remains, similar conventions for distinguishing reconstructed data from documentary data do not appear to have been developed for use in such virtual worlds as yet (for a discussion of the history of this longstanding problem, which goes back to the late 1980s in Archaeology, see Greengrass & Hughes, 2008, especially note 2).
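One can imagine borrowing the museum convention directly: every element of a virtual scene could carry an explicit provenance tag that the environment then exposes to its users. The sketch below is a hypothetical illustration of that idea in Python; none of its names belong to any real engine or to the Virtual Morgantown project, and the example scene elements are invented.

```python
# A hypothetical sketch of one possible convention for keeping reconstructed and
# documentary material distinguishable in a virtual scene: each element carries
# a provenance tag that a viewer could expose (for example, by outlining or
# desaturating conjectural elements). All names and examples are invented.

from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    DOCUMENTED = "attested in period sources"
    INFERRED = "reconstructed from partial or indirect evidence"
    DECORATIVE = "background colour with no direct documentary basis"

@dataclass
class SceneElement:
    name: str
    provenance: Provenance
    sources: tuple = ()

scene = [
    SceneElement("building footprint, lot 14", Provenance.DOCUMENTED,
                 ("historical map",)),
    SceneElement("facade colour, lot 14", Provenance.INFERRED,
                 ("black-and-white photograph",)),
    SceneElement("street trees, block 3", Provenance.DECORATIVE),
]

# A viewer could then filter or restyle elements by provenance, making the
# editorial status of each one legible rather than implicit.
conjectural = [e.name for e in scene if e.provenance is not Provenance.DOCUMENTED]
print(conjectural)
```

Such a scheme is, of course, only a first step: the harder problem is finding a visual rhetoric for these distinctions that does not destroy the immersive effect the environment exists to create.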
At the Visionary Cross project, we have been struggling for several years with this problem of how the technology we choose affects the representation of our work. The goal of our project is to use developing digital technologies to study and “edit” a collection of interrelated texts and objects from Anglo-Saxon England, all of which share an interest in the representation of Christ’s Cross. The objects in the collection span Anglo-Saxon England temporally, geographically, linguistically, and culturally: they range from the eighth-century Northumbrian Ruthwell and Bewcastle Standing Stone Crosses in the far north, to the tenth- and eleventh-century Brussels Reliquary Cross and Vercelli Book, both of which are of Southern English manufacture (see Ó Carragáin, 2005, for a recent discussion of the crosses and poem).
The objects are also related to each other in different ways and along different planes. The Ruthwell Cross shares artistic similarities, including the use of Anglo-Saxon runes, with the Bewcastle Cross, and textual similarities with the Brussels Cross and Vercelli Book. The Brussels Cross, likewise, had a memorial purpose similar in some ways to that of the Bewcastle Cross, despite their otherwise great differences in design and function, while the Vercelli Book appears to have been compiled by a single scribe with a strong interest in the fate of the soul after death. Given the range in location, time, and language of these objects, it is unlikely that any one Anglo-Saxon ever saw all four of these objects; but it seems equally unlikely given the various ways these objects interact that he or she would have failed to understand how each fit into a larger cultural matrix involving how the Cross was understood and represented in Anglo-Saxon England.
Our objects are also interesting because they are, in several cases, of significant historical importance to the field of Anglo-Saxon studies itself. The Vercelli Book is one of four great books of Old English poetry that form the most heavily studied core of the Anglo-Saxon poetic corpus. The Vercelli Book and the Ruthwell Cross both contain variant texts of the Dream of the Rood, one of the two or three most anthologized Old English poems alongside Beowulf and Cædmon’s Hymn. The runic inscription of the Dream of the Rood poem on the Ruthwell Cross is one of two candidates for the oldest known record of vernacular poetry in Anglo-Saxon England.
And they have interesting and interpretatively significant post-Anglo-Saxon histories: the Vercelli Book, as the name suggests, is found in the Northern Italian cathedral town of Vercelli, near the Italian Alps, where it lay for centuries unknown to Anglo-Saxonists until it was rediscovered in the modern period. The Ruthwell Cross, for its part, was pulled down and nearly destroyed by iconoclasts in 1640, partially rediscovered during a period of renewed interest in English antiquity towards the end of the century, and moved, studied, and ultimately partially restored by various local ministers and historians in the course of the eighteenth and nineteenth centuries. The Bewcastle Cross remains on the likely spot on which it was first erected in rural Cumbria, while the Brussels Cross found its way to the Low Countries by 1315 and was stripped of its jewels in 1799 (see Ó Carragáin, 2005, for a recent and comprehensive discussion and bibliography).
New technologies allow us to represent these objects and the connections among them to contemporary audiences in ways never before possible. In fact, they also sometimes make it possible to undertake otherwise quite old-fashioned research properly for the first time. The Ruthwell Cross, for example, has never been adequately photographed: it is nearly six metres tall and stands in a pit approximately 130 centimetres deep and 150 centimetres from the north wall of Ruthwell parish church, making it impossible to take an analogue picture of the cross in its entirety from any side and difficult to photograph anything but close-ups from the north side (all measurements are from site visits by the author in August and October 2011). Our project, which has recently captured a 3D laser scan of the cross and taken high-resolution 2D photographs of the entire object, will publish the first detailed, comprehensive images of the cross as a whole (in 2D and 3D). Scholars using our edition in the future will be able to do better work than they currently can using existing 2D representations, both because our photography will be higher resolution and in full colour, and because the 3D representations will allow them to manipulate the angles at which they study the various inscriptions and carvings.
But we also want to do more with the cross. Because so many questions surrounding its reconstruction involve its post-Anglo-Saxon history and location, we intend to model the relationship of the cross to its physical surroundings — its current location in the Ruthwell parish church and previously documented or presumed locations around the church, churchyard, and nearby Manse garden. Likewise, as we begin editing the other objects in the collection, we want our users to be able to navigate intelligently from one object to the next: to follow specific artistic, historical, linguistic, or cultural connections and, since these objects are also related to each other and to other objects in different ways, to be able to add new objects to the collection or establish new connections among those it already contains.
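A rough sense of what such navigation requires can be given with a small sketch. The object names below are those discussed above, and the connection labels paraphrase relationships already described in this article; the data structure itself is purely illustrative and is not the project's actual model.

```python
# An illustrative sketch (not the project's actual data model) of an extensible
# collection of typed links: objects are nodes, and each connection is labelled
# by the axis (textual, artistic, functional, ...) along which it holds. New
# objects and new kinds of connection can be added without changing the structure.

from collections import defaultdict

links = defaultdict(list)

def connect(a, b, relation):
    """Record a typed, bidirectional connection between two objects."""
    links[a].append((relation, b))
    links[b].append((relation, a))

connect("Ruthwell Cross", "Vercelli Book", "textual: Dream of the Rood")
connect("Ruthwell Cross", "Bewcastle Cross", "artistic: runes and ornament")
connect("Ruthwell Cross", "Brussels Cross", "textual: related inscription")
connect("Brussels Cross", "Bewcastle Cross", "functional: memorial purpose")

# Navigation is then a walk along whichever axis interests the user.
for relation, target in links["Ruthwell Cross"]:
    print(f"Ruthwell Cross --[{relation}]--> {target}")
```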
Finding the technology to accomplish these goals has so far not proved particularly difficult, perhaps in part because we have self-consciously defined ourselves as a project that applies existing technologies to novel scholarly ends, rather than one that develops novel technologies on its own behalf.
The one thing we haven’t been able to do, however, is find a technology that allows us to accomplish these different goals at the same time. We can build highly detailed representations of the objects in our collection in 2D and 3D, model the relationships among them in a flexible and extensible fashion, and, where it is interesting, represent the historical and spatial relationship of our objects to their physical surroundings. But we can’t do all three in a single environment. Our experiments with the most intuitively obvious method of organizing a navigable collection of objects like ours and relating them to their spatial environments — a platform like Second Life or a custom-designed Serious Game — have turned out to be quite unsatisfactory. Many engines and platforms were not able to handle the kind of representational detail professional scholars need to access for their research; those that arguably could support such detail suffered from other, intractable problems. Commercial game engines by their very nature limit extensibility — additional objects or additional connections among already existing objects in our collection could, by definition, only be added by individuals or projects who agreed to work within our preferred platform, often at the cost of the loss of other functionality (such as arbitrary XML processing for textual objects) that digital scholarly editors usually take for granted. Moreover, we found ourselves in our earliest experiments struggling with the same type of problems we have seen in the case of the Virtual Morgantown project: a desire to fill out the details of the environment or to relate the objects to their environments or each other in ways that looked natural, even when not supported by the surviving evidence, and a difficulty in indicating differing levels of confidence between research objects and the apparently inevitable background “colour.” Even the best engines, moreover, seemed unable to avoid the Uncanny Valley: no matter how photorealistic our data was, there was always something off-putting about the entire appearance and navigation that called into question its representative value.
All other approaches to these goals have required us to surrender visual and rhetorical coherence: we can build the individual components, but we cannot fit them together in a way that allows for seamless switching among the different approaches to the underlying data without sacrificing extensibility. Initially we thought this failure was simply the result of our own lack of knowledge — perhaps the problem was that we couldn’t find the right gaming engine or we simply didn’t have a wide enough experience of software environments to know what alternatives might exist.
More recently, however, we have begun to believe that our inability to find a solution to this problem may be inherent to the problem itself. Each of the questions we are asking requires us to privilege different aspects of our data. An approach that emphasizes the spatial and temporal relationship of an object to its environment will necessarily sacrifice documentary detail of the kind required by a scholar who wants to know the precise reading of a given line of text or the detailed appearance of a given figure on the surface of a stone cross. An approach that emphasizes the connections between objects or the possibilities for expanding the collection through the addition of new objects or connections will necessarily de-emphasize the local context in which any one object is found or the details of its contents. The reason we cannot find an easy way of uniting these different, intellectually complementary, approaches to the study of our objects is that each approach reframes the material in ways that are not easy to reconcile simultaneously. Each representation, even the most “documentary,” is a mediation that distorts or rules out some other representation and understanding of the real-world object in its original context.
In one sense, this conclusion is as sophomoric as the quibbles we raised earlier about the Virtual Morgantown project. The idea that all study involves privileging some questions and details and suppressing others is, of course, a commonplace.
But as was true of our questions about that project, it is the fact that they are raised by the application of new technology to novel scholarly approaches that makes them worth thinking about. The public reaction to National Geographic’s minor manipulation of the image of the Pyramids on their front cover in 1982 was so strong in part, I believe, because the arrival of digital image editing technology seemed to audiences at the time to threaten the way they understood the relationship of documentary photography to the objects it represented. While at least some members of that audience were no doubt aware that such photographs had always been open to manipulation in the darkroom, the ease and comprehensiveness with which that particular manipulation had occurred seemed to completely change the way one had to understand what images meant.
And in fact they were right. Modern technology has changed the way we understand photography. It has caused us to develop a sense that digital images almost always need to be read rather than simply viewed. As audiences have become more aware of how photographs can be manipulated (and more comfortable manipulating their own), they have also begun to pay attention to other ways in which photographs can be seen to shape events even when they are not physically altered — who is not in the scene being photographed? Who is behind the camera? Has the scene itself been staged or otherwise composed?
The same is true of the new scholarly technologies. Readers of print scholarly editions have learned to interpret the significance of the scholarly text and apparatus over the course of several centuries. They know how to understand the relationship of black-and-white photographs to original documents and to accept that standard aspects of the critical edition are inherently interpretative rather than purely documentary. The new technologies, however, threaten this sense of what an edition “means” by requiring them to consider new types of evidence and see old types of evidence in new ways. Things that one would rarely if ever expect to see in a print context, such as the repetitive use of a single image to stand for a number of trees in the Morgantown project, need to acquire a different meaning when they occur in this particular context: when Iranian officials tried something similar with a rocket launcher in their news photo, they were rightly accused of fraud; if we are to understand the Morgantown project correctly, however, we must be able to read this use of the same technique in that different context as a kind of visual shorthand — similar, for example, to the use of patches of hatching in archaeological drawings to represent a continuous surface.
In the case of the Visionary Cross, we are going to have to be even more careful, because our different approaches are going to involve representing the same objects in different ways using different (often novel) conventions to the same community of users. Each technology we intend to use has its own internal logic and standards. Users who might otherwise be shocked at the cartoon-like quality of the representation of an object like the Ruthwell Cross in an immersive environment designed to show the relationship of the Cross to its immediate geographic surroundings need to learn that they are using the wrong standard for judging the representational accuracy of the object in that context — and asking the wrong questions of the scholarship that that environment is presenting to them. Our challenge, as designers of a complex edition that takes advantage of these new technologies in order to look at a group of objects in novel ways, is to make sure that our audiences understand what our representations mean and what they can and cannot do.
Addison Publications. The book of Kells facsimile. http://www.addisonpublications.com/book_of_kells.html .
Museum of Hoaxes. Missile launcher vanishes. http://www.museumofhoaxes.com/hoax/photo_database/image/the_missile_launcher_vanishes .
National Geographic. Cover browser. http://www.coverbrowser.com/image/national-geographic/1034-3.jpg .
National Geographic. National Geographic Stock. http://www.nationalgeographicstock.com/comp/02/417/277403.jpg .
Rosenkilde & Bagger A/S. Early English manuscripts in facsimile, The Volumes. http://www.rosenkilde-bagger.dk/Early%20English%20Volumes.htm .
Virtual Morgantown project. Virtual Morgantown – HumanitiesGIS. http://virtualmorgantown.org .
Visionary Cross project. http://www.visionarycross.org .
Wikimedia. Molotov, Stalin, with Nikolai Yezhov. http://upload.wikimedia.org/wikipedia/commons/9/91/Voroshilov%2C_Molotov%2C_Stalin%2C_with_Nikolai_Yezhov.jpg .
Wikimedia. The commissar vanishes. http://upload.wikimedia.org/wikipedia/commons/b/bd/The_Commissar_Vanishes_2.jpg .
Anonymous. (1989, July 3). New picture technologies push seeing still further from believing. The New York Times. URL: http://www.lexisnexis.com/hottopics/lnacademic/ [August 21, 2012].
Bodenhamer, David J., Corrigan, John, & Harris, Trevor M. (2010). The spatial humanities: GIS and the future of humanities scholarship. Bloomington, IN: Indiana University Press.
Borges, Jorge Luis. (1998). On exactitude in science. In Hurley, A. (Translator), Collected fictions (p. 325). New York, NY: Viking.
Dear, M. J. (2011). GeoHumanities: Art, history, text at the edge of place. New York, NY: Routledge.
Greengrass, Mark, & Hughes, Lorna M. (2008). The virtual representation of the past. Burlington, VT: Ashgate.
Lester, Paul. (1988). Faking images in photojournalism. Media Development, 35(1), 41–42. URL: http://commfaculty.fullerton.edu/lester/writings/faking.html [July 14, 2012].
Lucas, Dean. (2009). Altered images. FamousPicturesMagazine. URL: http://www.famouspictures.org/mag/index.php?title=Altered_Images#Kent_State_Pole [July 14, 2012].
Museum of Hoaxes. (2011). The case of the moving pyramids. URL: http://museumofhoaxes.com/hoax/photo_database/image/the_case_of_the_moving_pyramids/ [July 14, 2012].
National Press Photographers Association. (2012). Ethics in the age of digital photography. URL: http://www.nppa.org/professional_development/self-training_resources/eadp_report/digital_manipulation.html [July 14, 2012].
Ó Carragáin, Éamonn. (2005). Ritual and the rood: Liturgical images and the Old English poems of the Dream of the Rood tradition. Toronto, ON: University of Toronto Press.
The New York Times Company. (2012). Guidelines on integrity. URL: http://www.nytco.com/company/business_units/integrity.html [July 14, 2012].
CCSP Press
Scholarly and Research Communication
Volume 3, Issue 4, Article ID 040152, 14 pages
Journal URL: www.src-online.ca
Received March 22, 2012, Accepted March 22, 2012, Published May 13, 2013
Daniel Paul O’Donnell. (2012). Move Over: Learning To Read (and Write) With Novel Technology. Scholarly and Research Communication, 3(4): 040152, 14 pp.
© 2012 Daniel Paul O’Donnell. This Open Access article is distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc-nd/2.5/ca), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.