Janneke Adema
Coventry University
Abstract: In light of the recent shift toward increasingly open access to scholarly work, particularly online content, this article examines the concept of openness and the potential to reuse, adapt, modify, and remix online material in the production and consumption of knowledge within the humanities. To this end, the theoretical and practical applications of authorship, stability, and authority are explored, in addition to those of archive, selection, and agency in an attempt to devise a concept of the book built upon fluidity.
Keywords: Open access; Open content; Open books; Online content; Fluidity
Janneke Adema is a PhD candidate in Media and Communication at Coventry University, Priory Street, Coventry, UK CV1 5FB. Email: ademaj@uni.coventry.ac.uk.
The INKE Research Group comprises over 35 researchers (and their research assistants and postdoctoral fellows) at more than 20 universities in Canada, England, the United States, and Ireland, and across 20 partners in the public and private sectors. INKE is a large-scale, long-term, interdisciplinary project to study the future of books and reading, supported by the Social Sciences and Humanities Research Council of Canada as well as by contributions from participating universities and partners, and bringing together activities associated with book history and textual scholarship, user experience studies, interface design, and prototyping of digital reading environments.
So all these preliminary distinctions are indispensable even though, as we are well aware, the problematic of the book as an elaborate set of questions in itself involves all the concepts that I have just distinguished from the book: writing, the modes of inscription, production, and reproduction, the work and its working, the support, the market economy and the economics of storage, the law, politics, and so on.
— Jacques Derrida, Paper Machine, 2005
Over the last few years, the Humanities have seen an increase of interest in communicating and publishing the results of Humanities research in an open way.1 This interest has been triggered by a need, as well as by dissatisfaction. The need for Humanities scholarship to be online and openly available, to enable increased readership and to promote the impact of scholarly research, directly follows the (re)search practices of (mostly early-career) scholars. On a daily basis, these scholars increasingly get their information concerning relevant scholarly literature via feeds, blogs, and social media sites like Twitter and Facebook. It is not uncommon to hear remarks about how a new generation is entering academia that will not read anything it cannot find openly available online. There is an increased need for e-books, and screen reading is on the rise (UCL/CIBER, 2008; Milloy, 2010; Swan, 2008).2 This practical need is supplemented by a desire for information to be more easily mined and reused; for data to become truly interactive, offering new possibilities for scholarly methods and analysis in the Humanities; and for the creation of collaborative environments and experimentation with digital tools. All these developments offer the potential for Humanities research to become more open, transparent, and interactive; to develop new methodologies and new ways of collaborating; to harvest collective intelligence; and to explore new means of communication in a field that is increasingly affected by digital technologies in both its teaching and its scholarship.
On the other hand, there is dissatisfaction with the current publishing system in the Humanities, where the publishing of scholarly monographs faces a crisis and its long-term sustainability is in doubt in light of continually declining book sales. Library spending on books has decreased due to cuts to acquisition budgets and to the increasing priority given to purchasing journals in Science, Technology, and Medicine (STM), which have seen rising subscription costs (Thompson, 2005). This has threatened the availability of specialized Humanities research and has led to related problems for (mostly young) scholars, for whom tenure and career development within the Humanities are directly coupled to the publication of a monograph by a reputable press. These developments have led to the rise of scholarly-, library-, and/or university-press initiatives that are currently experimenting more directly with making digital monographs openly available, such as Open Humanities Press and the OAPEN consortium.3
“Openness” is, however, a contested concept, and its exact meaning remains undefined. For example, the points of emphasis differ slightly when we look at oppositions such as open source versus proprietary software, Open Access versus subscription publishing, and open versus closed data, where issues of accessibility, economics, and politics are all influential factors to varying degrees. In open source, openness means nothing if the code cannot be reused, remixed, or appropriated into other code; access to the end product’s source materials is therefore essential in this case. The discourse in Open Access, on the other hand, seems mostly to focus on making scholarly research available (keeping its integrity intact), where reuse and re-appropriation of scholarship seem of secondary importance. This, again, stands in contrast to Open Content, where making modifications to the content is an explicit right. The openness in Open Data, although also focused on creating possibilities to apply and combine data in interesting ways and in new contexts, in many ways centres on increased transparency, on gaining greater insight into the methodology behind research. Openness as a concept in both Open Access and Open Source also stands in direct opposition, in terms of economic model, to subscription publishing and proprietary software; in this economic context, openness comes to stand for gratis access.4
Although some formal definitions have been composed that define openness in different contexts, in practice different combinations of uses of the term are found.5 If we look at the various uses of the concept of openness in the previously mentioned experiments, in some cases putting a book up for the Google Books program is defined as making it available through Open Access (Adema, 2010). This would, however, be a severely stripped-down version of Open Access, where the access consists (in most cases) only of the ability to read the content on a specific platform (Google Books), often a limited portion of the book, and where copying (parts of) the text, transferring it to other formats or platforms, or annotating it, let alone remixing it, is not part of the open accessibility of this environment. One can even speak of “degrees of openness,” as, for instance, in Creative Commons licences,6 where licences give producers the possibility of defining the degree of openness of their work, from granting only reading rights to giving away all copyright and ownership attribution (that is, CC0).
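To make this spectrum of “degrees of openness” concrete, the sketch below (in Python, and purely illustrative: the permission sets are my own simplification, not a legal encoding of the actual licence texts) models a few licences as subsets of granted permissions, from read-only access to the full waiver of CC0.

```python
# Illustrative only: a simplified model of "degrees of openness" in
# licensing. The permission sets are a deliberate simplification of the
# actual licence texts, not a legal encoding of them.

PERMISSIONS = {"read", "copy", "redistribute", "modify", "commercial_use"}

LICENCES = {
    "all rights reserved": {"read"},                  # reading rights only
    "CC BY-NC-ND": {"read", "copy", "redistribute"},  # no derivatives, non-commercial
    "CC BY": PERMISSIONS,                             # full reuse, attribution required
    "CC0": PERMISSIONS,                               # all rights and attribution waived
}

def openness_degree(licence: str) -> float:
    """Return the fraction of the modelled permissions a licence grants."""
    return len(LICENCES[licence]) / len(PERMISSIONS)

for name in LICENCES:
    print(f"{name}: {openness_degree(name):.0%} of modelled permissions")
```

On this toy scale, CC BY and CC0 grant the same permissions; what separates them (the waiver of attribution) lies in the conditions attached, which is precisely why degrees of openness cannot be reduced to access alone.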
These are just a few examples that show there is no final agreement on what openness actually means within a knowledge environment. It is not, however, my ambition here to explore the various uses and complications of the concept of openness (nor its specific drawbacks and benefits) in detail, but to focus on one particular aspect of openness; namely, the possibility to reuse, adapt, modify, and remix material,7 and how it relates to knowledge production and consumption in the Humanities. It is this part of the ethos or “definition” of openness (libre more than gratis) that most actively challenges the concept of stability within scholarly communication that has accompanied printed publications for so long. Where more stripped-down versions of openness focus on access, and the stability of a text or product need not be threatened, this degree of openness directly challenges the integrity of a work. It renders problematic the distinction sometimes upheld in academic publishing and scholarly communication between, as previously mentioned, Open Access (in its stripped-down, weak version) and Open Content.8 The possibility to expand and build upon, to make modifications and create derivative works, to appropriate, change, and update content (within a digital environment), shifts the focus in scholarly communication from product to process. It is this shift away from stability and towards fluidity that will be the focus of this article.
To investigate this feature of openness, this article will analyze a variety of theoretical and practical explorations of the concepts of fluidity, liquidity, and remix. The aim is to examine ways in which scholars within the Humanities have grasped, and can come to grips with, these issues of fluidity and versioning, especially where the scholarly book is concerned. This article looks at theorists, theories, and performative practices that try to look beyond concepts of authorship, stability, and authority by critically exploring, among others, the concepts of archive, selection, and agency. At the same time, it will devise a critique of these concepts and explore whether it is possible to conceive a concept of the book built upon openness and, with that, a concept of the Humanities built upon fluidity.
The ability to reuse and remix data and research outcomes to create derivative works can be seen as a practice that challenges the concept of the stability of a work or a text,9 one that puts into question the perceived boundaries of that work. Within a knowledge environment, the concept of derivative works also offers the potential to challenge the idea of authorship or, again, the (perceived) authority of a certain work. The founding act of a work, that specific function of authorship described by Foucault (1977) in his seminal article “What is an author?”, can be seen as becoming less important for both the interpretation and the development of a text once that text goes through the processes of adaptation and reinterpretation. I would like to focus on three alternative views on the issues of authorship, authority, and stability as put forward in discussions on remix, which, as I will argue, are of importance for knowledge production in the Humanities. I will briefly discuss the concept of modularity; the idea of the selector, moderator, or curator; and, finally, the concept of the (networked) archive, by looking at the work of Lev Manovich (2005, 2008) and Eduardo Navas (2010).
The concept of modularity is extensively discussed in the work of Lev Manovich, a pioneer in the digital humanities through his specific use of scientific methods and digital tools to analyze culture, an approach he calls cultural analytics. In his writing on remix, Manovich (2005) sketches a utopian future in which cultural forms will be deliberately made from Lego-like building blocks, designed to be easily copied and pasted into new objects and projects. For Manovich, standardization thus functions as a strategy to make culture more free and shareable, with the aim of creating an ecology wherein remix and modularity are a reality. “Helping bits move around more easily” (Manovich, 2005, n.p.) is, for him, a method for devising a new way of performing cultural analysis. Manovich (2008) explores how, with the coming of software, a shift has taken place in the nature of what constitutes a cultural “object”: cultural content often no longer has finite boundaries; it is no longer simply received by the user, but rather is traversed, constructed, and managed. With the shift away from stable environments and the introduction of the time-aspect in a digital online environment, there are no longer senders and receivers of information in the classical sense; they are only temporary “reception points” in information’s path through remix. In this way, for Manovich, culture is a product that is “constructed” by maker and consumer alike, and is actively modularized by users to make it more adaptive (2005). In other words, for Manovich, culture is not modular; it is (increasingly) made modular. The real revolution, however, lies not in this kind of agency provoked by the possession of the production tools, but in the possibility of exchanging information between media. In Software takes command, Manovich (2008) calls this “deep remixability.” He shows how, in a common software-based environment, a remix of various media has become possible alongside a remix of the methodologies of these media, thereby offering the possibility of mash-ups of text with audio and visual media and expanding the range of cultural and scholarly communication (Manovich, 2005, 2008).
The concept of modularization and of re-combinable data-sets, as put forward by Manovich, offers a way to look beyond static knowledge objects and presents a view on how not only to structure and control, but also to analyze, the ever-expanding information flows. With the help of this software-based concept, he examines how remix can be an active stance for shaping culture in the future and for dealing with knowledge objects in a digital environment.
Remixing as a practice has the potential to question both the idea of authorship and the related concepts of authority and legitimacy. Do the moral and ownership rights of an author extend to derivative works? And who can be held responsible for a work when authorship is increasingly hard to establish in, for instance, music mash-ups or data feeds? One suggestion made in discussions on remix to cope with the problem of authorship in the digital age is to shift the focus from the author to the selector, the moderator, or the curator. In cases where authorship is hard to establish, or even absent, the archive can function as a similar tool for establishing authority. These two solutions to the issues of authorship, authority, and originality have been examined by artist, curator, and remix theorist Navas (2010), who problematizes these issues by analyzing remix from a historical (materialist) perspective. He sees remix foremost as a critical practice, and examines the idea of the selector and of the archive as alternatives to authorship in establishing authority in an environment that relies on continual updates and that prefers process to product. Navas (2010) stresses, however, that keeping a critical distance is necessary to make knowledge possible and to establish authority. As authorship has been replaced by sampling, Navas (2010) argues, the critical position in remix is taken by s/he who selects; in mash-ups, however, this critical distance becomes increasingly difficult to uphold. As Navas (2010) states:
This shift is beyond anyone’s control, because the flow of information demands that individuals embed themselves within the actual space of critique, and use constant updating as a critical tool. (p. 174)
To deal with the constantly changing now, Navas (2010) turns to history as a source of authority: to give legitimacy to fluidity retrospectively by means of the archive. The ability to search the archive, he argues, gives the remix both its reliability and its market value. Once recorded, information becomes meta-information: information that is, according to Navas (2010), static; that is, available when needed and always in the same form. This recorded state, this retrospective staticity of information, is what makes theory and philosophical thinking possible, as Navas (2010) claims:
The archive, then, legitimates constant updates allegorically. The database becomes a delivery device of authority in potential: when needed, call upon it to verify the reliability of accessed material; but until that time, all that is needed is to know that such archive exists. (p. 173)
Navas (2010) is, however, also ambivalent about the archive as a search engine, which, he argues, in many ways is a truly egalitarian space, able to answer “all questions,” but one that is easily commercialized too. What does it mean when Google harvests the data we collect and our databases are predominantly built up on social media sites? In this respect, he notes, we are also witnessing a rise in the control of information flow.
The importance of Navas’s (2010) theorizing in this context lies in the possibilities his theories offer for the book and the knowledge system we have created around it. First, he explores the archive as a way of both stabilizing flow and creating a form of authority out of flux and the continual updating of information. Next to that, he proposes the role of s/he who selects (or curates, or moderates) as an alternative to the author. In a way, one can argue that this model of agency is already quite akin to scholarly communication, where the selection of resources and referral to other sources, next to collection building, is part of the research and writing process of most scholars. Manovich (2005) argues for a similar potential; that is, the potential of knowledge producers to modularize data and make it adaptable within multiple media and various platforms, mirroring scientific achievements with standardized metadata and the semantic web. These are all interesting steps in thinking beyond the status quo of the book, challenging scientific thinking to experiment with notions of process and sharing, and letting go of idealized ideas of authorship.
The ease with which continual updates can be made has brought into question not only the stability of documents, but also the need for, and the efficiency of, stable objects. Wikipedia is one of the often-cited examples of how the speed of correcting factual errors and the efficiency of real-time updating in a collaborative setting can win out over the perceived benefits of stable material knowledge objects. Experiments with liquid texts and with fluid books conceived in collaborative environments not only stress the benefits and potential of “processual scholarship”; they also challenge the essentialist notions underlying the perceived stability of scholarly works.10
Textual scholar John Bryant extensively theorizes the concept of fluidity in his book The fluid text: A theory of revision and editing for book and screen (2002). Bryant argues that stability is a myth and that all works are fluid texts. In The fluid text, Bryant theorizes (and puts into practice) a way of editing and conducting textual scholarship that is based not on a final authoritative text, but that focuses instead on revisions. For many readers, critics, and scholars, the aim of textual scholarship is to do away with the “otherness” that surrounds a work and to establish an authoritative or definitive text. This urge for stability is part of a desire for, as Bryant (2002) calls it, “authenticity, authority, exactitude, singularity, fixity in the midst of the inherent indeterminacy of language” (p. 2). Bryant, on the other hand, argues for the recognition of a multiplicity of texts, or rather for what he calls the fluid text. Texts are fluid because their versions flow from one to another; for this, he uses the metaphor of a work as energy that flows from version to version.
In Bryant’s (2002) vision, this idea of a multiplicity of texts extends from the different material manifestations (drafts, proofs, editions) of a certain work to what is called the social text (translations and adaptations). This also logically leads to a vision of “multiple authorship,” in which Bryant (2002) wants to give a place to what he calls “the collaborators” of or on a text – to include those readers who also materially alter texts. For Bryant (2002), with his emphasis on the revisions of a text and the differences between versions, it is essential to focus on the different intentionalities of both authors and collaborators. The digital environment offers the perfect possibility to show the different versions and intentionalities of a work – to create a fluid text edition. Bryant established such an edition – both in print and online – for Melville’s Typee, showing how book format and screen, in combination, can be used to effectively present such a fluid textual work.11
For Bryant (2002), this specific choice of a textual presentation focusing on revision is a moral and ethical one. For, as he argues, understanding the fluidity of language inherently lets us better understand social change. Furthermore, the constructionist intentions to pin a text down fail to acknowledge that, as Bryant (2002) states, “the past, too, is a fluid text that we revise as we desire” (p. 174). Finally, such fluidity encourages a new kind of critical thinking, one based on, among others, the concepts of difference, otherness, variation, and change. This is where the fixation of a fluid text to achieve easy retrieval, unified reading experiences, and established discourses loses out to a discourse that focuses on the energies that drive a text from version to version. In Bryant’s (2002) words:
by masking the energies of revision, it reduces our ability to historicize our reading, and, in turn, disempowers the citizen reader from gaining a fuller experience of the necessary elements of change that drive a democratic culture. (p. 113)
Another example of a practical experiment that focuses on the benefits of fluidity for scholarly communication is the Liquid Publications (or LiquidPub) project.12 This project, as described by Casati, Giunchiglia, and Marchese (2007), tries to put into practice the idea of modularity described above. Focusing mainly on textbooks, the project aims to enable teachers to create and compose a customized and evolving book out of modular, pre-composed content. Such a book is then a “multi-author” collection of materials on a given topic that can include different types of documents.
The Liquid Publications project tries to cope with the issues of authority and authorship in a liquid environment by making a distinction between versions and editions. Editions are solidifications of the liquid book, with stable and constant content, which can be referred to, preserved, and made commercially available. Furthermore, the project creates different roles for authors, from editors to collaborators, accompanied by an elaborate rights structure, with the possibility for authors to give away certain rights to their modular pieces whilst holding on to others. In this respect, the Liquid Publications project is a very pragmatic one, catering to the needs and demands of authors (mainly the recognition of their moral rights), while at the same time trying to benefit from, and create efficiencies and modularity within, a fluid environment. In this way, it offers authors a choice of different ways to distribute content, from totally open to partially open to completely closed books.
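The distinction the project draws between a mutable liquid book and its solidified editions can be sketched as follows. This is a minimal illustration in Python; the class and method names are hypothetical and do not reproduce the LiquidPub project’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class Edition:
    """A solidification: stable, constant content that can be cited and preserved."""
    number: int
    published: date
    modules: tuple[str, ...]  # immutable snapshot of the book's modules

@dataclass
class LiquidBook:
    """A mutable, multi-author collection of modular content."""
    title: str
    modules: list[str] = field(default_factory=list)       # evolving content
    editions: list[Edition] = field(default_factory=list)  # frozen snapshots

    def add_module(self, module: str) -> None:
        self.modules.append(module)

    def freeze_edition(self) -> Edition:
        """Solidify the current state into a citable, constant edition."""
        edition = Edition(len(self.editions) + 1, date.today(), tuple(self.modules))
        self.editions.append(edition)
        return edition

book = LiquidBook("A Liquid Textbook")
book.add_module("Chapter: Open Access")
first = book.freeze_edition()       # citable, preservable edition 1
book.add_module("Chapter: Remix")   # the liquid book keeps evolving
assert first.modules == ("Chapter: Open Access",)  # the edition does not change
```

The design point the sketch makes is the one discussed above: here, liquidity is achieved by keeping the mutable collection and the stable, referenceable object as two separate things.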
Media theorist Gary Hall (2009) also experiments with liquid books; he provides, however, a different vision of liquidity and of the potential of liquid publications. In his article “Fluid notes on liquid books,” Hall (2009) describes his experiment in publishing a “liquid book” together with Clare Birchall as part of the Culture Machine Liquid Books series of Open Humanities Press. The Liquid Books series is open on a read/write basis and functions via a logic of “open, decentralized and distributed editing” (Hall, 2009, p. 40). With this project, Hall explicitly seeks to question the idea of authorship by going beyond concepts of “authors,” “editors,” “creators,” or “curators,” which, as he states, are just a means of “replacing one locus of power and authority (the author) with another (the editor or compiler)” (Hall, 2009, p. 40). Hall argues that if we no longer look to the author (or compiler/moderator/selector) for authority, the authority comes to lie with the text, which means we need to take on a more rigorous responsibility with regard to assessing its importance and quality (Hall, 2009).
Hall (2009) goes on to analyze the consequences of the identity and authority of the work itself becoming debatable. What authority does a work have if it can be changed and updated all the time? Hall (2009), like Bryant (2002), questions what constitutes a work in the digital age, when a work no longer has any clear-cut boundaries. And what does this mean for the functioning of our whole system of knowledge, which is built upon these kinds of knowledge objects?
Hall (2009) sees a lot of potential in experimenting with wikis and similar kinds of environments, as they offer the potential to question and critically engage with these issues of authorship, work, and stability. Moreover, different platforms raise different questions that need to be taken into consideration when designing projects for different media. Wikis have the potential to offer increased accessibility and to induce participation from contributors on the periphery; in this way, they can be extremely pluralistic and challenging to existing states of affairs:
Rather, wiki-communication can enable us to produce a multiplicitous academic and publishing network, one with a far more complex, fluid, antagonistic, distributed, and decentred structure, with a variety of singular and plural, human and non-human actants and agents. (p. 43)
Hall’s (2009) exploration of fluidity offers a good starting point for a critique of the theories and experiments mentioned in this article, albeit one that might be extended to other digital humanities projects dealing with similar issues. Notwithstanding the fact that the theories and experiments described above offer valuable explorations of some of the important problems facing knowledge producers in the digital environment, I will argue that most of the solutions they present are only halfway solutions to the problems with which they propose to deal. Although they take on the challenge of finding alternative ways of establishing authority and authorship to cope with an increasingly fluid environment, they are still very much based on the concept of stability and the knowledge system built around it. As I will argue, in many ways they remain bound to the essentialisms of this object-oriented scholarly communication system. To explain this more clearly, I will next offer a short critique of the concepts described above (the archive, the idea of the selector or moderator, wikis, and the concepts of fluidity and liquidity) to show how they neither fundamentally challenge nor form a real critical alternative to the problems facing authorship, authority, and stability in the digital realm.
One of the problems with replacing the idea of authorship with the idea of the selector, as proposed by Navas (2010), amongst others, has been elaborated by Hall (2009); namely, that this move only shifts the locus of authority from the author to the selector or moderator. Selection, although incorporating a broader appreciation of other forms of authorship, or an extension of “the author function,” is just another form of agency and does not (fundamentally) challenge the idea of authorship or authorial intention. It is also questionable how the selector (any more than the author) has the potential, echoing Navas (2010), to “embed themselves within the actual space of critique” (p. 174). Besides not really challenging the idea of authorial authority and intentionality, the selector cannot be seen as an adequate alternative form of authority within the digital realm. What happens when the author function is decentred and agency is distributed within the system? Introducing gradations of authorship, such as editors and collaborators, as is done in the work of Bryant (2002) and in the Liquid Publications project, is a way to deal with multiple authorship or authorship in a collaborative environment, but it does not solve the problem of how to establish authority in an environment where the contributions of a single author are hard to trace back; where content is created by anonymous users or avatars; or where there is no human author and the content is machine-generated. And what becomes of the role of the selector as an authoritative figure when selections can be made redundant and choices can be altered and undone by mass-collaborative, multi-user remixes and mash-ups?
Where the selector does not fundamentally pose a challenge to the concept of authorship, and does not fundamentally offer an alternative to authorial authority, the concept of fluidity as described by Bryant (2002), as well as the concept of liquidity as used within the Liquid Publications project, does not fundamentally challenge the idea of object-like thinking or stability within scholarly communication. Both of these projects are based on a modular system. For Bryant (2002), a fluid book edition is still made up of separate, different, “modular” versions, whereas in the Liquid Publications project, which focuses mostly on an ethos of speed and efficiency, a liquid book is a customized combination of different modular documents. In this way, neither Bryant (2002) nor the Liquid Publications project goes beyond the concept of modularity as described by Manovich (2005) (where culture is made modular), nor do they, in any fundamental way, reach a fluid or liquid state. This kind of modularity is more of a challenge to the application of the logic of stability and authoritative “works,” and less of a challenge to object-oriented thinking, in which a consensus is reached upon a certain kind of work as being stable and authoritative. The idea of the object (the module) remains; it is merely smaller, compartmentalized. Both of these projects still hinge on the idea of extracted objects: of editions and versions, in the case of the Liquid Publications project. The fluidity in Bryant’s (2002) analysis is not so much about creating a true fluidity (however impossible this might be) as about creating a network between more or less stable versions, whilst showcasing their revision history. He still makes the distinction between works and versions, neither seeing versions as part of one extended work nor giving them the status of separate works; in this way, he keeps a hierarchical thinking alive:
A version can never be revised into a different work because by its nature, revision begins with an original to which it cannot be unlinked unless through some form of amnesia we forget the continuities that link it to its parent. Put another way, a descendant is always a descendant, and no amount of material erasure can remove the chromosomal link. (p. 85)
Texts are thus not fluid in the way Bryant (2002) argues, at least not in the sense of being process-oriented; they are modular, networked at most. What Bryant (2002) and the Liquid Publications project propose is not so much the creation of liquid texts as of modular texts or networked books. Bryant’s (2002) incorporation of the different material versions of a text along with its social texts is in this sense akin to McKenzie Wark’s (2007) book Gamer theory, which Wark explicitly calls a networked book.13 This terminology might be more fitting; a networked book, at least in its wording, positions itself more in between the ideal types of stability and fluidity.
The problem with the archive as a legitimation device for that which it keeps, as a tool to provide the necessary critical distance, as Navas (2010) argues, is that the archive in itself does not provide any legitimation; rather, any legitimation is derived from the authority held by those who built the archive. This reflects what Derrida (1996) calls the politics of the archive: what is kept and preserved is connected to power structures, the power of those who decide what to collect and on what grounds, and the power to interpret the archive and its contents when called upon for legitimation claims later on. The question of authority does not so much lie with the archive as with who has access to the archive and who is allowed to build it. Thus, although it has no real legitimation power of its own, the archive is used as an objectified extension of the power structures that control it. Furthermore, as Derrida (1996) also shows, archiving is an act of externalization, of trying to create stable abstracts. A critique of the archive would be that instead of functioning as a legitimation device, its focus is foremost on objectification, commercialization, and consumption, where knowledge streams are turned into knowledge objects, where we order knowledge into consumable bits. As Navas (2010) highlights, the search engine, based upon the growing digital archive we are collectively building, is Google’s bread and butter. For instance, by initiating large projects like Google Book Search, Google aims to make the world’s archive digitally available, that is, to digitize the “world’s knowledge” (or at least the part Google finds appropriate to digitize, consisting mostly of works in American and British libraries, and thus mostly English-language works), which in Google’s terms means making it freely searchable. Google partners with many libraries worldwide to make this service available; for the most part, however, only snippets of poorly digitized information are freely available, and for full-text functionality, or more contextualized information, books can be acquired via, for instance, Google’s e-books program (formerly Google Editions).
The interpretation of the archive is a fluctuating one; the stability it seems to offer, a fraud. As Derrida (1996) describes, the digital offers new and different ways of archiving, and thus also a different vision of what the archive constitutes and archives, from the perspectives of both producer and consumer. Furthermore, the archiving possibilities also determine the structure of the content that will be archived; the archive thus produces, as much as it records, the event. The archive produces information and knowledge, and it decides how we determine what knowledge will be. The way the archive is constructed is very much a consideration made under institutional and practical constraints. What, for instance, made the Library of Congress decide to preserve and archive all public Twitter feeds, starting from the platform’s inception in 2006? And why only Twitter and not other, similar social media sites? Similarly, the relationship of the archive to science is a mutual one, as they determine each other. A new paradigm also asks for, and creates a new vision of, the archive. This is why, as Derrida states, “the archive is never closed. It opens out of the future” (Derrida, 1996, p. 45). The archive does not stabilize or guarantee any concept. Foucault (1969) acknowledges this “fluidity of the archive,” seeing the archive as a general system of both the formation and the transformation of statements. But the archive also structures knowledge and our way of perceiving the world, as we operate and see the world from within the archive. As Foucault (1969) states, “it is from within these rules that we speak, since it is that which gives to what we can say” (p. 146). The archive can thus be seen as governing us, and this contrasts starkly with Navas’s (2010) notion of the critical distance provided by the archive, as we can never be outside the archive. This critique is not aimed at doing away with the archive, nor at the creation of Open Access archives, which play an essential role in the accessibility and preservation of scholarly research, as well as in adding metadata and making research harvestable. Rather, it focuses on being aware of the structures at play behind the archive, and on questioning the perceived stability of the archive as well as its authority and legitimacy.
As Hall (2009) has shown, the use of wikis to experiment with new ways of writing and collaborating offers a lot of potential for collaborative and distributive research and publishing practices; however, wikis are only one possible step toward liquid publications and cannot yet be perceived as real liquid publications. Wikis are envisaged and structured in such a way that authorship and clear attribution/responsibility, as well as version control, remain an essential part of their functioning. The structure behind a wiki is still based on an identifiable author and on a version history (another archive), which lets the reader check all changes and modifications if needed. In reality, the authority of the author is thus not challenged.
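This persistence of authorship and version control can be illustrated with a generic sketch of the revision model underlying most wiki software. The structure below is my own simplification, not the schema of any particular wiki engine: every edit is stored as an attributable, timestamped revision, so the seemingly fluid page remains backed by an archive of identifiable authors.

```python
import difflib
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Revision:
    """One attributable change: the unit from which a wiki's history is built."""
    author: str
    timestamp: datetime
    text: str

class WikiPage:
    """Every edit appends a revision; nothing is ever truly overwritten."""

    def __init__(self) -> None:
        self.history: list[Revision] = []

    def edit(self, author: str, text: str) -> None:
        self.history.append(Revision(author, datetime.now(), text))

    def diff(self, older: int, newer: int) -> str:
        """Let any reader check exactly what changed between two revisions."""
        a = self.history[older].text.splitlines()
        b = self.history[newer].text.splitlines()
        return "\n".join(difflib.unified_diff(a, b, lineterm=""))

page = WikiPage()
page.edit("alice", "The article began as a stub.")
page.edit("bob", "The article began as a stub and was then expanded.")
print(page.diff(0, 1))         # the archive of changes remains inspectable
print(page.history[1].author)  # authorship remains identifiable: 'bob'
```

The page a reader sees is only the newest entry in this archive; the author function, far from disappearing, is written into every record.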
A prime visual and material example of the problems the fluidity of the archive creates is a work published by James Bridle, affiliated with the Institute for the Future of the Book. Bridle (2010) published a twelve-volume publication of the complete history (i.e., every edit) of the Wikipedia article on the Iraq War. This “conceptual art project” shows, on the one hand, the incredible potential we now have in the digital age to archive almost everything; on the other hand, it also shows the futility and impossibility of trying to preserve, in a static form, both material and digital, the flows of information generated on the Internet.14 Another problem evoked by wikis as potential liquid publications is that they mostly work with moderators. As the Iraq War entry shows, not all edits are allowed to stay, although they are archived. Although in principle wikis have the potential to work in a distributed way, in practice many of them are structured by hierarchies of moderators with different levels of authority.15
The critique of the different theoretical and practical explorations of fluid publications and of more process-oriented research offered here serves to show the strength, the reach, and the impact that notions of stability, authorship, and authority (echoing the rhetoric of printed publications) still have within the digital environment. The critique of these notions thus does not serve as a condemnation of these experiments; on the contrary, these concepts should be explored and questioned to enhance our understanding of them in different contexts. It serves to show how, even in our explorations of the new medium, it is very hard to let go of the essentialist notions that we have inherited from the rhetoric of print publications. On the other hand, my interest in these experiments and in the concept of fluidity—which, as I shall explain next, I believe to be an impossibility—serves another goal: to deconstruct the idea that stability is actually possible, or has ever been possible in the past.
To the extent that true liquidity is a (practical) utopia, it is just as much a construct or an ideal type as stability. I would argue, however, for a wider acknowledgment of the fact that our creation of stability and of stable knowledge objects (as printed books are often perceived) is a construct brought about by the needs of (established) power structures and by customary ways of doing things; in other words, by “knowledge practices” we have adopted and to which we have grown accustomed, such as authorship, stability, and authority. The construction of what we perceive as stable knowledge objects serves certain goals, mostly having to do with establishing authority, preservation (archiving), reputation-building (stability as threshold), and commercialization (the stable object as a [reproducible] product). As Bryant (2002) argues:
all texts are fluid. They only appear to be stable because the accidents of human action, time and economy have conspired to freeze the energy they represent into fixed packets of language. (p. 111)
Any stability we create where texts are concerned can thus be seen as a historical and contextual consensus. Digital and online media offer increasing potential to critique notions based on a print knowledge system (such as stability, authorship, and authority), and expanding the knowledge system beyond these notions increasingly seems a practical reality. The Internet and digital media have created a situation in which there is no longer a certain writing technology that favours stability over liquidity. In Writing space: Computers, hypertext, and the remediation of print, Jay David Bolter (2001) describes stability, as well as authority, as values that are themselves products of a particular writing technology:
it is important to remember, however, that the values of stability, monumentality and authority, are themselves not entirely stable: they have always been interpreted in terms of the contemporary technology of handwriting or printing. (p. 16)
Jean-Claude Guédon (2009) argues in his article “What can technology teach us about texts? (and texts about technology?)” that developments like Wikipedia serve to deconstruct the idea of a final document, and that the validity of a document is now marked only by a temporal stability. As he states:
the Wikipedia phenomenon displays this widened range of possibilities in spectacular fashion. It also means that the notion of a final document loses much of its meaning because its finality can only be the result of a consensus, and not the product of a technology that fixes the text. (p. 62)
This acknowledgment of the constructivist nature of stability urges us to conduct a closer analysis of the structures underlying our knowledge and communication system and how they are presently set up. Just like stability, fluidity is an ideal type, and just like openness, it is a rhetorical stance. Within an information environment, it can be seen as a paradox: although information might flow, knowledge inherently needs some form of objectification or stability to be called knowledge. True liquidity is thus an impossibility; fluid knowledge is an impossibility; and, at least in my definition of the term, fluid texts are an impossibility. We can only ever achieve quasi-liquidity. This impossibility to achieve real liquidity should, however, not be seen as a failure, as it still has rhetorical power. As rhetoric, it helps us deconstruct the structures of our object-oriented knowledge systems, and it enables us to experiment with a way of thinking and practising that (performatively) challenges these preconceptions and helps us to think and create them differently.
The scholarly monograph is in the process of being reinvented. Experiments with the format, structure, and content of the book-length treatise are currently being undertaken in a variety of guises, from liquid books to wiki-monographs and blog-anthologies.16 In the Humanities, the scholarly book plays a substantial role in an intricate web of knowledge communication, quality control, and reputation management. It traverses power structures and ideological struggles and still comes out as the preferred means of communication among Humanities scholars (Adema & Rutten, 2010; Harley, Acord, Earl-Novell, Lawrence, & King, 2010). Increasingly, however, the monograph has become a tool in a specific battle for a new knowledge and communication system within academia. The concept of the traditional “printed book” is increasingly being used as a strategic weapon to maintain a status quo in knowledge production and communication based on such values as stability, authority, and quality. On the other hand, the concept of what I will call “the open book” is used to seek a knowledge system based on sharing, connectedness, and liquidity.
What do these experiments and their critique mean for the idea of the book, openness, and the Humanities? Remix and fluidity can be seen as new ways to critically conceptualize the potentiality of the book: as a way to think beyond the book as a stable object (which it has never been), and as a strategy to explore its multiplicities and to challenge established notions like stability, identity, and materiality that are all bound up with (printed) books and, at the same time, with our current conception and practice of knowledge. Such a strategy enables argumentation for, and attention to, otherness, difference, and another knowledge system, one based more upon fluidity. Experiments with new ways of conducting and publishing monographs in an open manner, such as via liquid books or wiki-monographs, might be a first step away from an object-oriented approach focused on a finalized product and toward a publishing system based more on constant, collaborative, and simultaneous knowledge production.
A “derivative work” is a work based upon one or more pre-existing works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications, which, as a whole, represent an original work of authorship, is a “derivative work.” See: http://www.copyright.gov/title17/92chap1.html#101.
Adema, Janneke. (2010). Open access business models for books in the humanities and social sciences: An overview of initiatives and experiments (OAPEN Project Report). Amsterdam, NL: OAPEN.
Adema, Janneke, & Rutten, Paul. (2010). Digital monographs in the humanities and social sciences: Report on user needs. Amsterdam, NL: OAPEN.
Bolter, Jay David. (2001). Writing space: Computers, hypertext, and the remediation of print. New York, NY: Routledge.
Bridle, James. (2010, September 6). On Wikipedia, cultural patrimony, and historiography. [Blog entry]. booktwo.org. URL: http://booktwo.org/notebook/wikipedia-historiography/ [October 12, 2010].
Bryant, John. (2002). The fluid text: A theory of revision and editing for book and screen. Ann Arbor, MI: University of Michigan Press.
Casati, Fabio, Giunchiglia, Fausto, & Marchese, Maurizio. (2007). Liquid publications: Scientific publications meet the Web (Version 2.3). Trento, IT: University of Trento.
Derrida, Jacques. (1996). Archive fever: A Freudian impression. Chicago, IL: University of Chicago Press.
Foucault, Michel. (1969, rpt. 2007). The archaeology of knowledge (2nd ed.). New York, NY: Routledge.
Foucault, Michel. (1977, rpt. 1980). What is an author? In Donald F. Bouchard (Ed.), Language, counter-memory, practice (pp. 124-127). Ithaca, NY: Cornell University Press.
Guédon, Jean-Claude. (2009). What can technology teach us about texts? (and texts about technology?). In T.W. Luke & J.W. Hunsinger (Eds.), Putting knowledge to work and letting information play: The Center for Digital Discourse and Culture (pp. 54-75). Blacksburg, VA: Center for Digital Discourse and Culture.
Hall, Gary. (2009). Fluid notes on liquid books. In T.W. Luke & J.W. Hunsinger (Eds.). Putting knowledge to work and letting information play: The Center for Digital Discourse and Culture (pp. 33-53). Blacksburg, VA: Center for Digital Discourse and Culture.
Harley, Diane, Acord, Sophia Krzys, Earl-Novell, Sarah, Lawrence, Shannon, & King, C. Judson. (2010). Assessing the future landscape of scholarly communication: An exploration of faculty values and needs in seven disciplines. Berkeley, CA: Center for Studies in Higher Education.
Johns, Adrian. (1998). The nature of the book: Print and knowledge in the making. Chicago, IL: University of Chicago Press.
Lessig, Lawrence. (2008). Remix: Making art and commerce thrive in the hybrid economy. London, UK: Penguin Press.
Manovich, Lev. (2005). Remixing and remixability. URL: http://imlportfolio.usc.edu/ctcs505/ManovichRemixModular.pdf [October 12, 2010].
Manovich, Lev. (2008). Software takes command. Draft version. URL: http://lab.softwarestudies.com/2008/11/softbook.html [October 12, 2010].
Milloy, Caren. (2010). JISC national e-books observatory project: 2007-2010. London, UK: JISC Collections. URL: http://observatory.jiscebooks.org/ [October 12, 2010].
Navas, Eduardo. (2010, August 13). Regressive and reflexive mashups in sampling culture (2010 revision). [Blog entry]. Remix Theory. URL: http://remixtheory.net/?p=444 [October 12, 2010].
Rowlands, Ian, Nicholas, David, Jamali, Hamid R., & Huntington, Paul. (2007). What do faculty and students really think about e-books? Aslib Proceedings, 59(6), 489-511.
Springer. (2008). eBooks – the end user perspective. New York, NY: Springer.
Suber, Peter. (2008, August 2). SPARC open access newsletter, 124. URL: http://www.earlham.edu/~peters/fos/newsletter/08-02-08.htm [October 12, 2010].
Swan, Alma. (2008). Key concerns within the scholarly communication process. URL: http://www.jisc.ac.uk/whatwedo/topics/opentechnologies/openaccess/reports/keyconcerns [October 12, 2010].
Thompson, John. (2005). Books in the digital age: The transformation of academic and higher education publishing in Britain and the United States. Cambridge, UK: Polity Press.
UCL/CIBER. (2008). Textual analysis of open-ended questions in e-book national observatory survey. URL: http://ciber-research.eu/download/20091102-freetext.pdf [October 12, 2010].
Wark, McKenzie. (2007). Gamer theory. Cambridge, MA: Harvard University Press.
CCSP Press
Scholarly and Research Communication
Volume 3, Issue 3, Article ID 030132, 16 pages
Journal URL: www.src-online.ca
Received August 17, 2011, Accepted November 15, 2011, Published December 20, 2012
Adema, Janneke. (2012). On Open Books and Fluid Humanities. Scholarly and Research Communication, 3(3): 030132, 16 pp.
© 2012 Janneke Adema. This Open Access article is distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc-nd/2.5/ca), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.