DigiTextual Publishing: The Methodology of Construction

Methodology of Construction

My project, as previously noted, is concerned with discerning what kinds of hypertextual enablings are most useful for pedagogical purposes. In order to discern this (or at least begin narrowing the focus), my site consists of four separate versions, with four relatively separate editorial ethea, all rendering the text of the “Aeolus” chapter from James Joyce’s Ulysses. This novel, it should be noted, is notoriously difficult to publish in terms of versioning – does one use the Bodley Head edition (1960, rev.), the Modern Library edition (1961), the Shakespeare and Company first edition (1922)? – and even the most recent critical edition by Hans Gabler (1984) has attracted harsh criticism (including from McGann, no less) ever since its publication. For my purposes, I am making what may seem like a bit of a cop-out, but is nonetheless a necessary decision: I am using the 1922 edition because it is the only version now in the public domain in both the US and Canada, and because Joyce’s estate has lately denied rights for all new editions of the novel (much to John Kidd’s chagrin). Stephen Joyce is also very, very litigious, even in cases against publications and performances which are clearly permissible under copyright law; and, as a graduate student, I can’t afford a lawyer.

Version 1: Plain-text HTML

The first is a sort of control: a plain-text (if such a thing really exists) HTML rendering of the text. Its aesthetic paratextual elements and technologies of navigation will be composed as indicated in items 1 and 2 under the “Publishing Theoretical Apparatus” above. This is not to say that it is actually a control version – if anything, the codex version would be the “control.” The alterations to paratext and navigation, particularly in terms of the reader’s ease and speed in performing Web-based research, are themselves vast alterations to the codex form, and thus this version is just as much a test subject in my experiment as the others. It is, however, the least technologically enabled text.

By saying it is the least technologically enabled, I mean that I, as editor, have put the least coding into the text enabling a reader, through the coded and rendered text before them, to take actions. The digitized text, I would still argue, is fundamentally a highly technologically enabled text, as the reader can quite quickly open other windows to search for definitions and explanations, and can even copy and paste longer phrases from the text to draw up whatever commentary is freely available on the Web. Thus my curiosity with this version of the text is not so much “How much do students comprehend, critique, and theorize from a plain text?” but rather “How much will students do their own legwork (or fingerwork, as the case may be) in digital research in order to build comprehension, criticism, and theory? And is there a value in forcing them to do so solo?”[1]

This last question is in direct response to Jay Fogleman, Anne Niedbala, and Francesca Bedell’s 2013 article, in which they note that current incoming college students “have difficulty developing effective search strategies […] and are challenged when making authority, relevance, or accuracy judgments about the pages they retrieve” in electronic searches for information, despite self-reporting as being particularly adept (73). This version thus addresses the possibility that a non-embedded hypertext has greater pedagogical value than an embedded version, as an embedded hypertext may enable laziness (hyperlinks to research, of whatever kind, are provided) and keep students from the valuable experience of learning effective research strategies.

For a comparable version, the Project Gutenberg Ulysses serves as an example, though in keeping with the codex published version’s aesthetics (which Gutenberg defenestrates with apparent nonchalance), the subtitles to “Aeolus” will be centre-justified full-caps; and in keeping with the valuable insights on column widths made by Vandendorpe, the lines will be moved toward the centre from both sides through margins set in the CSS. Project Gutenberg’s aesthetics and editorial practices tend to argue for the existence of a “pure text without editorial interference,” which, as I have previously argued, is an untenable essentialist position that ignores Hayles’s apt observation, in How We Became Posthuman, that there is no such thing as information disembodied, and thus the medium of embodiment is fundamentally a part of what that information is. As such, my version will very consciously embody the text in a way which is easiest to read and most closely mirrors common hypertextual aesthetics and navigation.
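As a rough sketch of these two presentational commitments – the class names and the exact measurements below are hypothetical illustrations, not drawn from the actual site code – the CSS might look something like:

```css
/* Subtitles rendered as in the 1922 codex: centred and in full caps. */
.aeolus-subtitle {
  text-align: center;
  text-transform: uppercase;
}

/* Per Vandendorpe's column-width advice: cap the measure and pull the
   text toward the centre from both sides via automatic margins. */
.aeolus-text {
  max-width: 34em;
  margin: 0 auto;
}
```

The `max-width`/`margin: auto` pairing is the standard way to narrow and centre a column of text without fixing the page width, so the line length stays readable as the browser window changes.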

This version also serves as the template to build the other three versions.

Version 2: Authoritative Editorial Enabled Hypertext

The second is an “Authoritative Editorial” edition, in which control over the editorial apparatus is entirely reserved for the editor (currently me). Much of the annotation material will be drawn from Gifford’s Ulysses Annotated, particularly the material which is esoteric to the point that it is essentially impossible to find through general Web searching. In this way, this version represents a close digital remediation of the standard codex critical edition. However, in keeping with McGann’s, Hayles’s, and Shillingsburg’s urging for decentralization, most links in the text which explain or contextualize the material will be external, linking (for example) to the OED for definitions, Encyclopedia Britannica for period and cultural context, images and maps drawn from scholarly sites (such as those by Joe Nugent at BC), and professionally performed musical pieces accessed via YouTube or similar multimedia databases. For all of these, rather than linking directly, the links will appear along the right-hand margin, each with a brief explanation of the material it accesses, so that readers are not faced with window after window of material they already know – an excess of distractions from the text that could fatigue readers into giving up on clicking even the potentially informative links.

The best example of how the linking would work for this edition is something like Amanda Visconti’s “UlyssesUlysses.com” version, but with a slightly cleaner aesthetic and more reliance on external hyperlinks over internal explanations to decenter the text. It will also contain far less interpretation (e.g. Visconti adds commentary to much of Buck Mulligan’s Eucharistic actions directly indicating that he is mocking the Catholic church) in order to obviate, as much as possible, issues with editorially dictating a specific interpretation of the text. The point of such annotations, to return to McDonald, is in easing the accessibility, not the difficulty.

Version 3: Non-Authoritative Editorial Enabled Hypertext

The third is a “Non-Authoritative Editorial” edition (for lack of better nomenclature), in which the editorial apparatus is still purely under the control of the editor, but in which control over what that apparatus does is deferred from the editor to the popularly-generated semiotic field of Web users. This will be accomplished by linking the same materials from the “Authoritative Editorial” edition not to actual websites, but to Google’s “I’m Feeling Lucky” function or to a tightly limited Google Images search for images. For audio-visual materials, the links will perform a YouTube search for precisely-defined keywords chosen to draw up the required material at the top of the YouTube results (if only YouTube had an “I’m Feeling Lucky” element…).
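Mechanically, this deferral is just query-string construction. The sketch below shows how such links might be generated; the function names are my own, and `btnI` is the long-standing URL parameter that triggers Google’s “I’m Feeling Lucky” redirect to the top result:

```javascript
// Build a link that defers to Google's top result rather than a fixed site.
// btnI=1 asks Google to redirect straight to the first hit for the query.
function luckyLink(query) {
  const params = new URLSearchParams({ q: query, btnI: "1" });
  return "https://www.google.com/search?" + params.toString();
}

// Build a YouTube results link for a precisely chosen keyword string,
// relying on YouTube's ranking to surface the intended performance first.
function youtubeSearchLink(query) {
  const params = new URLSearchParams({ search_query: query });
  return "https://www.youtube.com/results?" + params.toString();
}
```

Because the destination is resolved by the search engine at click time, the same anchor can point to different material as the algorithm’s rankings shift – which is precisely the dynamism this version means to expose.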

The theoretical justification behind this is threefold. First, it distances the movement of the text from editorial control (a bit… the criteria for the searches performed by the links are still carefully constructed, and “what to link” is still editorially decided). As Daniel Apollon et al. suggest, hyperlinking (among other editorial decisions) is often structured according to the editor’s own interpretation of the specific text and broader textual theory (18), so this version represents an attempt to distance my own interpretive and theoretical approaches, and thereby further avoid dictating a specific reading.

Second, it draws attention to the way information on the Web (and, I would argue, all textual information) is decentered from a specific author and is really all built collectively, collaboratively, and relationally. As McGann argues, “the social intercourse of texts – the context of their relations – must be conceived as an essential part of the ‘text itself’ if one means to gain an adequate critical grasp of the textual situation” (11-12). By maintaining a link to the most commonly-associated informational material, these links demonstrate how the given text is most commonly understood in an ever-present (and, because the links change dynamically with Google’s algorithm, ever-changing) moment of reading. The semiotic field used to build meaning in the texts is extended beyond the reader (or even the editor) and into the socially-constructed digital semiotic field of the Web.

Third, by building on database algorithms based in patterns of search behaviour constructed by Google and YouTube, it draws attention to how information on the Web is commonly indexed and accessed, and thereby potentially privileged for reasons which may be valid or invalid to differing degrees. As Ted Striphas argues, “algorithms aggregate a conversation about culture.” Indeed, search algorithms can be thought of as controlling or creating a certain image of what a given culture is at a given moment merely by controlling the method through which a piece of information’s “meaning” is linked to other explanatory or contextualizing intertexts. Algorithms are built by and simultaneously create an ideology. Thus structuring this version around such algorithms permits students (and classes) to consider how we currently structure information networks, what patterns we currently use to access information from them, and how they are both shaped by and come to shape the way users conceive of their world. This also opens up discussion of potential problems that may arise when algorithmic structurations privilege certain information based largely, in Google’s case, on popularity, recency, and user-side metadata such as location, language settings, and history (if one is logged in to a Google account) – or, indeed, on paid advertising (albeit this does not factor into “I’m Feeling Lucky” searches).

One rather disconcerting problem with this version is the potential for such searches, out of the hands of the authoritative and expert editor, to bring up explanatory or contextual materials which are poorly researched if not wholly inaccurate. As Wikipedia is one of the most used websites (and bar none the most used informational website), it logically follows that an inordinate number of “I’m Feeling Lucky” searches will pull up a Wikipedia page. Though Wikipedia’s credibility has been defended by a number of sources (note that, self-referentially, I am linking here to Wikipedia’s own coverage of this debate), it is still true that its material may, at any moment, be edited by individuals with no expertise in the subject matter, and a page may contain inaccuracies of any magnitude, whether introduced intentionally or otherwise.[2] However, I would argue that this, too, allows for a conversation about how we value information, who gets to speak, what allows one person higher privilege in speaking over another, etc. It may not be as expertly illuminative of the text, but it opens up avenues for critical and theoretical commentary on textuality and informatics beyond Joyce’s text. Particularly if used in tandem with versions 1 and 2 in a classroom setting, the issue of authority and credibility of information, and the methods students use for accessing and assessing this information, can result in highly productive discussion.

There is, of course, a further issue with this: it insists on Google’s and YouTube’s algorithms, which unquestionably asserts a certain editorial preference and control over navigation, and also ignores the practices of people who don’t use YouTube or Google for searches like these. Both of these concerns must be kept in mind. However, Google’s algorithm (and YouTube’s, to a slightly lesser extent) is a decent representation of current trends for understanding a given semiotic unit. As of February 2016, Google (which owns YouTube) had a 67.73% worldwide market share among search engines used on personal computers, with the next highest competitor (Bing) coming in at 15.67% and no other search engine breaking double digits. On cell phones and tablets, this skyrockets to 94.11%.[3] Furthermore, using the “I’m Feeling Lucky” function is a fair representation of how most people use search engine results for gathering information: a Nielsen study based on eye tracking shows that, in 59% of all searches, users do not even look at (let alone click on) results beyond the third listed result (“How People Read”). Thus, for the purpose of drawing attention to common patterns of organization and retrieval of information online, these algorithms are presently the most apt methods for demonstration.

This version, too, will be most similar editorially to Visconti’s site.

Version 4: The Genius Edition

The final version is what I will call the “Genius” edition, if only because it is primarily built upon the coding and architecture of Genius.com (and thus has a very similar appearance to this).[4] This version will have no encoded linking ab initio, but will provide the readers with the ability to encode any word or string with whatever annotations they choose, including their own explanations, multimedia materials, and external links of any sort, all of which will appear alongside the text in the right margin. The text will thereby become a multi-author palimpsestual communal space for building that course’s “version” of the text.
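To make the mechanics of such reader-built annotation concrete, each highlight could be stored as a record like the following. This is a hypothetical sketch of my own, not Genius.com’s actual schema; the function and field names are illustrative assumptions:

```javascript
// Hypothetical annotation record for version 4 (not Genius.com's schema).
// A record anchors to a span of the chapter text by character offsets and
// carries the student's commentary plus any attached external links.
function makeAnnotation(start, end, author, body, links = []) {
  if (start >= end) throw new Error("annotation must cover a non-empty span");
  return {
    anchor: { start, end }, // character offsets into the chapter text
    author,                 // the student who created the note
    body,                   // free-form commentary (may embed multimedia markup)
    links,                  // external URLs attached by the annotator
    created: new Date().toISOString(),
  };
}
```

Anchoring by offsets into a fixed base text is what lets many readers layer annotations over the same passage without altering the text itself – the “palimpsest” is the growing list of records, rendered in the right margin.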

The benefits of this are, I hope, quite clear.

First, version 4 is deeply indebted to a “blended learning” system: it combines the traditional classroom model with a digital extension, unbound by classroom time and space, in which students (and instructors) can continually engage in a discourse-space that is not only about the text but quite literally is the text. The students’ discourse on “Aeolus” is linked within the work itself, appears alongside it, and may provide multimodal materials and hyperlink connections beyond the dedicated server space, all collaboratively created by the class as a whole. As Fogleman et al. note, such a model is beneficial in that it “facilitates free and open dialogue, critical debate, negotiation, and agreement, and supports reflection and the creation of communities of inquiry” (75). It builds a sense of academic community amongst the students in the class, and works toward making them feel that they are a part of the communal academic discourse of criticism. And, as Fogleman et al. point out, discourses based on this model have proven in pedagogy studies to be more effective for student learning than traditional models (75).

In terms of editorial control and authority, this version also moves beyond version 3 in obviating concerns over a specific, editor-dictated reading of the text, while still allowing the text to serve as the intertextual space of relational meaning which hypertext so effectively demonstrates. The entire critical apparatus, albeit constructed through a coded system that I (and the people at Genius.com) have created, is student-built.

Relatedly, and most importantly from a theoretical standpoint, this version encourages students to conceive of their engagements with the text as shaping its meaning, and to consider themselves potentially as readers/co-editors of the text as it becomes a multi-authored palimpsest. Digital critical editions, Apollon et al. note, can permit readers to “use and interact with the content in such a way that these readers may become actors,” resulting in “a strong notion of transfer of power from the producer to the user or reader” (5). Though Apollon et al. are cautious about blurring “the distinction between producer and consumer, editor and reader” – noting that it can result in “poorly controlled” versioning and reuses of material, and that this may be “perceived as a threat for the original product” (26-7) – this debate is highly important for any individual dealing with textuality in a digital environment, where issues of authority over text (or information broadly), versioning control, plagiarism and intellectual property rights, and the insularity of a given text come into question far more clearly, and far more often, than they did or do with the codex. This is also, as I noted earlier, highly relevant for discussions of textuality broadly in terms of poststructuralist theory, as it creates a visible example of decentered authorship and intertextuality.

Apollon et al., of course, do raise an important problem with this kind of text relative to versions 2 and 3: the students, most of whom have little expertise in this text, are doing the research without the guiding expertise of an editor (for which version 2 is arguably better); and the commentary they include for the text will potentially be used by other students in the same way those students would use commentary from an expert editor’s apparatus. This might lead those students to take as valid potentially fallacious information posted by their peers and, because they “already have” said information, forgo their own (potentially better) research. This can both lead to radically uncritical or unfounded interpretations of the text and dissuade students from gaining valuable practice in research skills. The first concern is, of course, equally true when students do their own research with limited digital information literacy, and the second is equally true of codex authoritative editions (as well as version 2), but both must still be kept in mind for the purposes of the project and the data collection.

This version will structurally and theoretically sit somewhere between a Genius.com version of a text and Visconti’s (still in beta testing) dissertation project, Infinite Ulysses.



[1] I say “solo” because version 4 will similarly require them to do their own research, but will also allow them to build a collaborative, community-annotated version of the text.

[2] I am here reminded specifically of an instance in which, after watching the movie Charlie Wilson’s War, I was curious whether a piece of information therein was accurate. As it was not for academic purposes, I investigated this using Wikipedia and the article indicated that it was, indeed, accurate. However, when I checked the page’s references for verifying this piece of information, I found that the author of this section was citing the very movie which I was investigating.

[3] Google is all but ubiquitous on consoles and gaming systems (98.89%), but since very little professional or academic work is done on such devices, this isn’t particularly relevant.

[4] I would like to note that (in total awe and praise) the people at Genius.com have made their annotation coding open-source for any web designer to incorporate into their site code; and, further, have made it such that (provided you are using Chrome and have a Genius.com account) any user can view a Genius.com-enabled version of any and every site on the web (stored on Genius.com’s servers). This is truly amazing and surprisingly altruistic work, and strong “Kudos!” are in order.
