The Garden in the Machine:
The Impact of American Studies on New Technologies

Randy Bass
(Georgetown University)

This essay is undergoing revision. (Constructive) comments may be sent to the author.
Introduction

Resisting the Myths of the Electronic Frontier

A Convergence of Distribution

The Novice in the Archive

Rationale of the Culture-Text
abstract only


Introduction

If it seems as though the Internet has seduced us into spending hundreds of hours dumping millions of bytes of language and ideas into cyberspace every day, it is only because academic culture was already awash in words and rhetoric aching for new outlets. If it seems as though new interactive technologies--such as electronic discussion lists, bulletin boards, and newsgroups--have instigated venues for every kind of special interest and subfield imaginable, it is only because the academic disciplines have been subdividing and recombining at an accelerated rate ever since curriculum revision became inextricably fused with identity politics in the 1960s. And if it seems as though virtual environments and electronic texts are inviting us to make real the presuppositions of postmodern theory, it is only because both postmodern theory and interactive technologies are manifestations of our lived experience in the "late age of print." (1)

Therefore, if we are to understand the meaning of new technologies for the future of American Studies, we have to understand the progress and capabilities of those technologies in the context of the shifting structures and practices of the field itself. The important question to ask, then, is not how these new technologies will change us, or even how they will change our relationship to our materials and knowledge, but how our changing relationship to materials and knowledge will determine what we do with interactive technologies. Or, a somewhat more nuanced question: where are the critical and productive affinities between a field's materials, methods, and epistemology, on the one hand, and the inherent structure and capabilities of interactive technologies, on the other?

In this essay I am somewhat narrowly limiting my interests to American Studies as a knowledge-making enterprise and to the role of new technologies in knowledge-making practices--particularly scholarship and teaching. I am not concerned here with the impact of new technologies on American culture, or indeed on all culture, though the broad outlines of that impact--on new kinds of community formation, on shifting economic networks, on the possibly changed nature of subjectivity itself--bear profoundly on the practices of making knowledge and on what we might call the "knowledge establishment."

Equally important are the broad changes occurring across all disciplines and subject areas as a consequence of increasing amounts of scholarly work and communication taking place in electronic environments: the increasingly collaborative nature of knowledge, the shift from individual "ownership" of ideas to ideas that are communally generated, the erosion of the idea of closure, the movement from univocality to polyvocality in certain scholarly contexts, the shift from linear to associational thinking, and the overall change in emphasis--in scholarship and teaching--from knowledge as product to knowledge as process.

Whatever impact new technologies are having on American Studies--and whatever impact American Studies will have on them--will take place within these sweeping and important changes. That being said, I want to turn to those aspects of new technologies that relate specifically to the creation and reproduction of knowledge in the contexts of studying culture and history: that is, the contexts and tools and practices that determine our ability to access and manipulate materials, our ability to write and think about primary materials in their contexts, and our various standards for evidentiary substance and its relationship to knowledge-making rhetoric. What, in short, might it mean to study culture and history over the next twenty years? How will the territory that we think of as the field (and fields) of American Studies be remapped by the convergence of interactive technologies with the constantly expanding domain of cultural and historical studies?



Resisting the Myths of the Electronic Frontier

I have said on numerous occasions, and I still believe, that with the development of the Internet, and with the increasing pervasiveness of communication between networked computers, we are in the middle of the most transforming technological event since the capture of fire. I used to think that it was just the biggest thing since Gutenberg, but now I think you have to go back farther.

John Perry Barlow, one of the founders of the Electronic Frontier Foundation

As Leo Marx pointed out a long time ago, the "rhetoric of the technological sublime" is an American staple, and "sublime" rhetoric is in no short supply in the so-called computer revolution. The "technological sublime" of course first attached itself to the inventions of the industrial era that symbolized energy and extension: the steam engine, the telegraph, the printing press, the railroad, electricity. Whatever the particular contexts or apparatus, the "rhetoric of the technological sublime" argued that technology would allow America and humanity to escape history, to rise above its corruptions of poverty, ignorance, scarcity, and injustice. Not only would technology enact a new Eden but it would enable nature and technology to coexist in "a middle landscape, an America suspended between art and nature, between the rural landscape and the industrial city, where technological power and democratic localism could constitute an ideal way of life." (2)

Techno-enthusiasts ("homesteaders" on the electronic frontier) depict the new virtual environment of cyberspace as just such a middle landscape, claiming to recover through technology a communal intimacy and interconnection lost in the industrial age. The rhetoric of the new technological sublime argues on several fronts: that the extensibility of worldwide connectivity will eradicate physical and political boundaries; that both the leveling nature of online interaction and the universalization of information access will foster democratization; that the decentered nature of hypertext will further erode limiting hierarchies; and that the engaging power and linking capabilities of multimedia will revolutionize learning and eradicate the need for teachers and schools altogether. (3)

At the other extreme of technology prophecy (what I call the "hyperbole-elegy" continuum) the argument goes something like this: electronic texts and networks are pure ephemera, offering us only speed and splashy delight; extensive work and learning in electronic media promote only a glib superficiality devoid of reflection, intimacy, and critical thinking; and, ultimately, electronic work threatens the qualities we most associate with humanist inquiry and subjectivity: presence, unity, autonomy, accountability. As John Unsworth puts it in his essay "Electronic Publishing":

When the subject is scholarship, the fear that predominates is the fear of pollution--the fear of losing our priestly status in the anarchic welter of unfiltered, unrefined voices. When the subject is teaching, the fear expressed is the fear of obsolescence--the fear that technology will deprive our students of the inestimable value of our presence in the classroom, or more bluntly that our presence will no longer be required. When the subject is the library, the fear expressed is the fear of disorientation--that we will lose our sense of the value of the past...In a word, the common element is a fear that, as scholars, teachers, and human beings, we stand to lose our mysterious uniqueness--or, what comes to the same thing, that this uniqueness will no longer be honored--in the new technological landscape. (4)

Indeed, what forms the basis of the enthusiast's interest is the very same thing that inspires the skeptic's fear: a thinking and communications environment characterized solely by what Sven Birkerts--one of the most ardent and eloquent elegists--calls "a vast lateral connectedness." (5)

Both ends of the continuum--the hyperbolic and the elegiac (the technological sublime and the ridiculous?)--are relatively unhelpful gauges of the future, not only for the extremity of their positions but for their shared assumption that the future is about either the total acceptance or the total denial of an electronic world. Both positions, I think, are responses to the earliest, first-generation phase (the "skinny dip" phase) of the electronic era. When one looks at what we might think of as the "second-level adaptation" of interactive technologies to intellectual work, one sees very different tendencies, ones that especially ought to mitigate the fears of those skeptics who see electronic media as the enemy of humanist inquiry. For despite what seems like the ephemeral and self-erasing nature of electronic texts, the medium actually harbors a strong counter-tendency toward engendering a heightened sense of objectification; despite the accusation that electronic environments discourage the self-reflective intellectual subject, there is plenty of contrary evidence to suggest their potential for encouraging analytic and rhetorical self-consciousness; and despite the accusation that electronic scholarship and media promote a lack of intellectual depth and a detachment from rigorous intellectual contact with evidence and reasoning, there are clear indications that the electronic era will provide an unprecedented opportunity for immersion in archival and primary materials, and consequently for the making of meaning in cultural and historical analysis by all kinds of learners, from novice to expert.

What these possibilities hinge on are not the qualities inherent in interactive technologies (or at least only partially so--being as they are interpretable at either end of the hyperbole-elegy continuum) but the nature of knowledge-making practices in the fields that adapt them. It is, then, the changes that have been occurring for the last twenty years within the study of culture and history, not the technologies themselves, that make the future for integrating new technologies into these fields so rich in potential.

Of course this is not to downplay the essential truth at the heart of both the enthusiast's and the skeptic's vision of that future: that tools and media are never really only that, detached from us and unaffecting. Our very modes of thinking are being changed by the tools and media we're engaging; knowledge and the technologies of knowledge are always interdependent and inseparable.

That was certainly the vision of Vannevar Bush, the pioneering engineer and former director of the Office of Scientific Research and Development, whose 1945 essay "As We May Think" is generally regarded as one of the originary documents of "hypertext" theory. (6) Outlining a number of technological advances that he saw in the near future, Bush culminated "As We May Think" with a vision of what he called the "Memex" machine, a hypothetical thinking machine that enhanced human thought processes by recording and retrieving linked "trails" of data using the natural "associational" patterns of human memory. In "Memex II," the 1959 update of his thoughts on the never-built "Memex I," Bush argued that

The Industrial Revolution enabled us to make more of the things we need or desire, to raise our standard of living. It is based on the concept that a machine can perform any repetitive operation a man can do with his hands, and do it faster, more precisely, and with far more strength. Today there is another revolution under way, and it is far more important and significant. It might be called the mental revolution. It is also based on the concept that fully defined repetitive processes may be relegated to the machine. This time steps in the thought processes are becoming mechanized and this is far more significant than mere mechanization of mechanical processes. (7)

The "repetitive processes" that Bush was interested in shifting from human to machine included operations like calculation and straightforward data storage. But that was not what really interested him. As Theodor Nelson, the man who coined the word "hypertext" in the 1960s, points out, Bush's "real emphasis was on linkage, and new structures and activities that the automatic link-jump would make possible." (8) The issue for Bush--and for his successors in hypertext theory--was that in its most intense, creative, and intellectual moments the human mind constructs "trails" of associations from one thought to the next. For the most part those associations are lost and remade over and over again. The ability of a machine to store and retrieve human thoughts through the associational coding that human users naturally gave to them distinguished the "Memex," as Bush put it, as a "device that would supplement thought directly rather than at a distance."
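Bush's "trail" is, in contemporary terms, a simple data structure: items stored once, with named, replayable sequences of links among them. The following sketch is purely illustrative--the class and method names are my own invention, not Bush's design or any actual implementation--but it makes concrete what "storing and retrieving" an associational trail might mean; the bow-and-Crusades example echoes the one Bush himself uses in "As We May Think."

# A minimal, hypothetical sketch of a Memex "trail": items are stored once,
# and a trail is a named, ordered sequence of links among them that can be
# replayed later, so the association is kept rather than lost and remade.

class Memex:
    def __init__(self):
        self.items = {}   # item id -> content (an article, a photograph, a note)
        self.trails = {}  # trail name -> ordered list of item ids

    def add_item(self, item_id, content):
        self.items[item_id] = content

    def link(self, trail, item_id):
        # Append the next associative step to a named trail.
        self.trails.setdefault(trail, []).append(item_id)

    def replay(self, trail):
        # Retrieve the stored trail of thoughts, in order.
        return [(item_id, self.items[item_id]) for item_id in self.trails[trail]]

memex = Memex()
memex.add_item("crusades", "an encyclopedia article on the Crusades")
memex.add_item("turkish-bow", "an article on the properties of the short bow")
memex.link("bow-and-arrow", "crusades")
memex.link("bow-and-arrow", "turkish-bow")
print(memex.replay("bow-and-arrow"))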

It was this same enmeshing of mind and machine that informed the research vision of Douglas Engelbart, one of the pioneers of human-computer interface design. Writing in 1962 to Vannevar Bush, by then Professor Emeritus at MIT, Engelbart cited the fundamental influence "As We May Think" had had on the development of what Engelbart (then at SRI) called the "Program on Human Effectiveness." "The possibilities we are pursuing," Engelbart wrote to Bush, "involve an integrated man-machine working relationship, where close, continuous interaction with a computer avails the human of radically changed information-handling and -portrayal skills, and where clever utilization of these skills provides radical changes in the way the human attacks problems." Engelbart and his associates at SRI took an engineering approach to the redesign of computing systems, one that stemmed "from the picture of the 'neuro-muscular' human, with his basic sensory, mental, and motor capabilities, and the means employed to match these capabilities to the problems he must face." (9) This kind of multi-sensory computer design, which resulted in some of the most basic components of human-computer interaction, such as the mouse and the "drag and drop" file and directory structure, was based on a fundamental belief in the potential "fluidity" between human thinking and thinking technologies.

But one need not dwell on the dynamics of the human-computer interface to begin fathoming the interdependence between knowledge and tools, for there are any number of "technologies" that serve scholars "directly." Consider just two: the page number and the paperback book. I suppose it is possible to question whether page numbers, or paperback books themselves, are technologies at all. Nevertheless, I would argue that both are critical dimensions of book technology and play vital roles in facilitating the making of knowledge.

Let's look first at page numbers. One of the most important elements of the human-book interface, page numbers seem to have made their appearance in the West around the sixteenth century. They constitute an important part of a constellation of bibliographic changes that characterized the transition from manuscript to print culture, such as the introduction of indexes and tables of contents. Page numbering is the vital link in this bibliographic apparatus, because without page numbers there is no way to reference the book's organizational and searching tools. Both indexes and tables of contents make it possible to create books of complexity and size. More importantly, the creation of page numbers meant the onset of regularized editions, which in turn revolutionized what it meant to create a book and to "own" knowledge. All of these changes can be grouped under the general history of the "stabilization of print"--a transformation that grew ever more consolidated until the last twenty years, when electronic texts began its reversal. (10)

But there is a further effect of these changes that is worth mentioning separately. The inception of page numbers and other "stabilizing" features made possible the very notion of a "scholarly community." Regularized page numbers enable concurrence of access and reference. Without page numbers, a scholar in Geneva, for example, and a scholar in Dallas cannot make reference to the same page in the same book or journal; without page numbers, scholars separated by both distance and time cannot talk about the same texts; without page numbers, knowledge cannot build on itself and knowledge producers cannot build on each other.

If page numbers and other stabilizing changes underwrote the growth of a post-Renaissance international scholarly culture, then the widespread application and improvement of paperback book technology has underwritten the expansion of cultural and historical studies in the last thirty years. Imagine the fields of literature, history, and cultural studies without paperback book technology. Is it possible to imagine--to take American literature as an example--the radical revision of the American literary canon without the technology that makes possible the delivery of inexpensive editions of individual texts or the viable binding of expansive anthologies? One could scan across the fields that have invigorated American Studies in the last three decades and ask the same question: what would the study of culture and history be like without paperback technology licensing the publication of a wide range of primary texts and secondary scholarly works, not to mention the exploded variety of teachable undergraduate texts, which (whether we admit it or not) fundamentally underwrites the range and depth of scholarly inquiry?

Page numbers point up how dependent the knowledge-making process is on the tools that provide us means of accessing, sharing, retrieving, and manipulating textual information; paperback books point up how dependent the largest changes in discipline-based knowledge structures are on technologies that enable widespread distribution. Electronic interactive technologies--especially hypertext multimedia environments like cd-roms and the World Wide Web--offer powerful tools in all these areas of knowledge reproduction and promise to have as much impact on knowledge formation as page numbers and paperbacks.

Vannevar Bush's initial vision for the original "Memex" was of a device that would serve, as he put it, as "an intimate supplement to memory." By his 1959 vision for the "Memex II," Bush had seen the necessity of opening up the intimacy of the "personal" Memex to the connectivity of a worldwide system of libraries, each with its own Memexes and complex networks of trails (accessed, amazingly, by phone lines). That amplification of the original idea--the matching of personal networks of intimate associations to a global network of organized archives--forms the prescient vision of our current state of affairs: the rapidly coalescing world of the personal computer and the Internet. That combination will be a powerful one, especially when it comes together fully in the context of the "cognitive architecture" of knowledge-based communities. (11)



A Convergence of Distribution

With the incipient introduction of the 'information superhighway' and the integration of satellite technology with television, computers and telephone, an alternative to the broadcast model [of communications], with its severe technical constraints, will very likely enable a system of multiple producers/distributors/consumers, an entirely new configuration of communication relations in which the boundaries between those terms collapse. A second age of mass media is on the horizon.

Mark Poster, The Second Media Age

One way to think of the electronic future of cultural and historical study is through what I call a "convergence of distribution," or the convergence of "distributive tendencies," in three key areas: the "distributive communication" of interactive technologies, the development of a "distributive epistemology," and the growing emphasis (at least in the United States) on "distributed learning." As these three tendencies converge, they will powerfully remap what it means to study and learn culture and history.

"Modern Media of communications," says James Carey, "...widen the range of reception while narrowing the range of distribution. Large audiences receive, but are unable to make direct response or participate otherwise in vigorous discussion."(12) The ability to alter the message is not "distributed" between sender and receiver. In mass media, such as television, as Nicholas Negroponte puts it, "all the intelligence is at the point of transmission" and none or very little of it at the point of reception. (Negroponte points out that, obviously, he's not talking about the programming when he speaks of intelligence, but the ability to alter and control the "content" of the message.)

Interactive media could not be more different. In interactive media, most of the intelligence--or at least a large portion of it--is held at the point of reception, a condition that increases rather than reduces distribution. Interactive media, such as the Internet, turns any point of reception into a point of transmission (i.e., at any point where text can be read, text can be produced or reproduced). As much as some interactive media may look like conventional media--video games looking like movies, for example--the fundamentally different distributive quality of interactive media sets it apart as belonging to a distinct category of technology and a distinct paradigm of human communication.

The cultural theorist Mark Poster calls this new era of interactive media the "second media age." Yet it is quite apparent that the first media age--the "broadcast era"--is nowhere near to being supplanted by the second. Rather, as with the long-term juxtaposition that we can expect between print and electronic texts, broadcast and interactive media will both coexist and intersect for some time to come. "The second age," however, "deflates the pretensions of what now appears as a first age to having not been an age at all. Until now the broadcast model has not been a first age but has been naturalized as the only possible way of having media--few producers, many consumers." (13)

This distributive effect, the shift from a one-to-many to a many-to-many model of communication, is one of the most important features of the new media and provides the fundamental groundwork for a great many changes in social structure and subject formation. The implications are great as well for the knowledge-making practices of academic disciplines. In contrast to the McLuhanesque model of broadcast communications--where tele-media shrink the space between points of reception--interactive media has an additional counter-effect of enlarging the space in which communication can take place, thereby enlarging the space in which scholars and students can conduct their intellectual work. The enlarged space of interactive media enables the visualization and manipulation of objects, as well as the capacity to experiment with textual arrangements, organization, and argument. What is "distributed" in interactive media is not just the ability to "talk back" but the ability to produce and reproduce knowledge.

Less rapid, but just as profound, as the advent of a second media age are the paradigmatic changes that have occurred throughout the constituent fields of American cultural and historical studies over the last thirty years. One way to think about these changes collectively is to see them as the evolution of a "distributive epistemology." By that phrase I mean to imply several things. First, and most broadly, the general opening up of what counts as a culture's history--a broadening beyond a narrow view of intellectual or political history, or canonical and aesthetic approaches to literary expression. Well known to all of us is the expansion of cultural and historical studies to include social history, so-called "bottom-up" history, the history of the marginalized and excluded, and the expanded literary canon, as well as the mainstreaming of the study of everyday life and the extreme widening of the definition of what constitutes a readable cultural artifact. This all adds up to a "distributive epistemology" because how we look for our knowledge--what counts as viable evidence of cultural meaning--is more widely distributed across fields, texts, objects, and populations than ever before.

There is a second sense of a "distributive epistemology," implied by the first, that extends to the notion of subjectivity and perspective (or, more accurately, intersubjectivity and multiperspectivism). Regardless of where one is situated across modernist or postmodernist constructions of this problem, all cultural history and analysis takes place in a context of academic inquiry that has challenged the unity and integrity of a single "voice" speaking in isolation or autonomy. Whether practiced as an analytic methodology or not, the context of cultural criticism demands that texts (and subjects) be seen as "distributed" across the texts that construct them and the audiences to whom they are addressed.

Finally, both the first and second senses of a distributive epistemology imply a third distributive condition within cultural and historical knowledge: the abandonment of the dream of a unitary cultural narrative and of the possibility of writing a single "history" of a "people." In this sense, history is forever distributed across a plurality of cultural experiences and texts, without the prospect of being remade into an explanatory coherence except in the context of its own multiplicity and complexity.

At the same time that the field has undergone a distribution of epistemology, there has been (at least in the United States) a concomitant shift in pedagogical practice that might be called (for the sake of parallelism) "distributed learning." Distributed learning is a general term for a range of practices that include student-centered pedagogies and process approaches to learning. Practices that encourage collaborative work, the development of ideas and skills rather than an exclusive emphasis on finished product, and the distribution of authority in the classroom from the teacher to the students are all implied in the phrase "distributed learning." Although relatively unexplored in the context of interdisciplinary cultural history, the linkages between "distributed learning" and the other two distributive tendencies already have some notable pioneering precedents. The field of composition instruction, and particularly its subfield of computers and writing, has been experimenting with the affinities between electronic text production and process-based learning for nearly twenty years. Similarly, feminist theory and women's studies have been experimenting almost as long with alignments between the theoretical content of feminist approaches and reconstructed classroom practices. Now these kinds of alignments are spreading to other areas as well, particularly in English literature, where an expanded canon and a shift to cultural studies approaches to literature are generating an increasing discourse about distributing authority in learning settings.



The Novice in the Archive

If there is anything that binds together the diverse fields and subfields of American Studies, it is attention to primary cultural and historical materials. Just as it is critical to recall how tied our knowledge is to its technologies of delivery, it is equally important to recall how much our teaching and research methodologies, as well as our professional hierarchies, depend on access (or the lack of it) to primary cultural and historical materials. One of the key areas for remapping American Studies lies in the potential for new technologies to enable a new, expansive contact with primary cultural materials. Extensive contact with electronic primary materials will not only transform the whole idea of archival access (including its economics) but also change the way archival collections are structured and delivered as repositories of resources.

More specifically, what changes in an interactive, electronic archive is the relationship between the user and the archival materials. This is partly the result of the enhanced ability to search and sort materials in electronic contexts, but it is also the result of the changed position of the user. When archives are physically located only in libraries and museums, a very narrow range of expert users has access--and that access is usually mediated by an archivist or curator. As archives are made increasingly available in electronic environments such as cd-rom and the World Wide Web, they become 'public' documents, available to a very wide range of users. Consequently, the relatively clear boundary between "the archive" and the "published artifact" of the archive (the collection, the anthology, the source study or interpretive history) is now blurring. And the logical result of that blurring is a rethinking of archival standards regarding the arrangement, organization, and presentation of archival materials.

As a general rule, the delivery of primary historical and cultural materials is the least developed area of the Internet and World Wide Web. Yet even in these incipient stages, a growing range of primary materials is available on the Internet as well as through cd-rom packages. One of the leaders in this effort is the National Digital Library, the electronic collections division of the Library of Congress, which for five years ran a prototype effort called the American Memory project. The American Memory project was a multimedia archive of primary materials that ran as a self-contained package, although many of its collections are already on the World Wide Web, such as 1100 Civil War photographs from the Mathew Brady collection, 272 constitutional broadsides, 1600 color photographs of American life in the 1930s taken from the collections of the Office of War Information and the Farm Security Administration, 2900 life histories (22,500 pages) from the folklore project of the WPA Federal Writers Project, 25,000 photographs of American life and culture from the Detroit Publishing Company, 45 paper print films of New York City at the turn of the century, 59 sound recordings of American leaders (1918-1920), and 11,000 pages of books and pamphlets from the Daniel P. Murray African American collection. The American Memory project is now part of the larger National Digital Library, which has undertaken the digitization (i.e., conversion to electronic form) of one million special collection items a year for five years, making five million of their 57 million special collection items available on the Internet by the year 2000. These collections include the earlier ones as well as 12 new collections, including first-person narratives of early California, some 18,000 playscripts and handbills from the American variety stage, 4,000 panoramic photographs, and 10,000 pages of print and nonprint materials on the Coolidge era and the consumer economy.

But the Library of Congress is only a small part of the story. If it were only major knowledge institutions like the Library of Congress putting primary materials online, the impact on the future would not seem that profound. But because of the "distributive" nature of interactive media where every point of reception is a potential point of production, the number of production points putting special collections of primary materials online can and will grow at an extremely accelerated rate.

At first this may all seem merely like the promise of valuable resources. These resources will benefit scholars and learners who don't have good access to libraries; they will certainly be a boon to overseas scholars; they will enhance teaching by providing greater access to materials. But beyond the basic--though considerable--enhancements to access, the proliferation of electronically accessible primary materials will have an impact on the fields of culture and history no less profound than that of the paperback book. The exponential growth of primary materials will substantially enhance our ability to access the texts that comprise the "national memory" (both its public and its vernacular record); the proliferation of primary materials will change the way we think about publishing texts as well as our modes of accessing and harnessing textual evidence. That in turn may change the role of the scholar, the nature of editing, and the creation of exhibitions and public collections. Finally, the presence of primary electronic materials will produce some amount of tension and counterforce to institutional hierarchies--of scholars and nonscholars, professionals and "amateurs," elite research schools and teaching institutions, novice and expert learners.

Whatever the long-term implications, there are already a few early experiments with electronic primary materials that are not only enhancing the delivery of resources but helping to rethink the boundaries between the archive and the published or packaged artifact. Take, for example, the Augusta Archive on Civil War era Virginia.

When you "first" enter the Augusta Archive on the World Wide Web, you are told that the Archive does not "require any prior knowledge of computers or history," that by using the mouse to point and click you can "move through the screens that appear before you," and that "there is much more in the Archive than you can see in one visit." The language is much more addressed to entering a museum than entering an archive and indeed, the Augusta Archive was created to highlight local Civil War era resources for the Woodrow Wilson Birthplace Museum in Staunton, Virginia. Yet, in terms of the electronic archive itself, it is irrelevant if you accessing the Archive in Staunton, Virginia, or sitting at your personal computer in Warsaw or Jakarta.

The Augusta Archive is a specialized and augmented subset of the Valley of the Shadow project on the Civil War created at the University of Virginia. The project tells the story of two communities at different ends of the Shenandoah Valley--Staunton, Virginia and Chambersburg, Pennsylvania--communities sharing a number of characteristics before the war but separated by their placement on different sides of the Mason-Dixon line. In the archive is a wide range of primary materials, carefully organized but not digested or interpreted: for example, all eight newspapers of the two counties for thirty years (about 20,000 pages), all the manuscript population and agricultural censuses from the two counties between 1850 and 1880, rosters of Union and Confederate soldiers, the Official Records of the War of the Rebellion, maps, diaries, and more.

The Valley of the Shadow project has two main components: a narrative and an archive. Building both in an electronic environment, Ed Ayers and the project's other creators wanted to exploit the associational linking capabilities of hypertext, much as they had been experimented with--sometimes in very wide-open, unstructured formats--in hypertext fiction. "The temptation is strong," says Ayers, "to explode the narrative in electronic history as well. But it seems to us that an electronic form of dissemination may actually benefit from a more centered kind of storytelling, with strands of narrative that extend for a considerable length, with cohesion provided not only by electronic links but also by the tropes of more traditional storytelling." There is, then, a running 'story' that takes the reader through the history of the two communities before, during, and after the war. The narrative text is "illuminated" with graphics and optional audio narration. It is entirely up to the reader whether she or he wants to follow the narrative or proceed to the Archive.

If the reader chooses to read the narrative, rather than going directly to the Archive, he or she would confront throughout the narrative numbered "endnotes" that are actually connected pathways from the narrative itself: "The click takes the reader to a place we are calling 'landings'--which might be a simple secondary reference, the roster of an entire regiment, a ten-year series of newspapers, or collections of manuscript diaries and letters. From the landings, people can go from one place to another in the library without being dependent on the narrative." (14)
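The mechanics of this structure are simple enough to sketch in code. What follows is a hypothetical illustration, not the Valley of the Shadow project's actual implementation; the data and names are invented. It shows the architecture Ayers describes: numbered endnotes in a running narrative resolve to "landings," and from a landing the reader can browse laterally through archive materials or return to the story.

# A hypothetical sketch of the narrative/landing architecture described
# above; the names and data are invented, not taken from the project.

narrative_section = {
    "title": "Before the War",
    "text": "Staunton and Chambersburg shared much before 1861 ... [3]",
    "endnotes": {3: "county-newspapers"},  # endnote number -> landing id
}

landings = {
    "county-newspapers": {
        "label": "A ten-year series of county newspapers",
        "items": ["newspaper-1859-04-06", "newspaper-1859-04-13"],
        "neighbors": ["soldier-rosters", "manuscript-diaries"],
    },
}

def follow_endnote(section, number):
    # The "click" that carries the reader from narrative to landing.
    return landings[section["endnotes"][number]]

def browse(landing):
    # From a landing, the reader can move through the archive itself,
    # from one place to another, without depending on the narrative.
    return landing["items"] + landing["neighbors"]

landing = follow_endnote(narrative_section, 3)
print(landing["label"])
print(browse(landing))
# The reader may then return to narrative_section and continue the story:
# the reciprocal alternation between narrative and archive discussed below.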

So what kind of "text" or "archive" is the Valley of the Shadow project, and what kind of experience is it to "read" it? The answers to both questions point up the significant potential for reconstructing cultural and historical inquiry inside, or in partnership with, electronic environments. It seems to me that the "landings" metaphor is critical to understanding how the Valley of the Shadow project is experimenting with meaning and form, as it puts the reader/user in a viewing position, pointed and directed to look a certain way, but no more. Equally critical is the ability of the reader to return to the narrative from the landing and to continue in a reciprocal alternation between narrative and archive.

In many ways the structure of the Valley of the Shadow project resembles the scholar's experience of reading through an archive: a continuous mutual action between a constructed narrative of events and the cultural materials that constructed that narrative to begin with, materials that go on modifying and reinforcing it. What is significant here, though, is that the scholar's experience is mapped into an environment where it can be replicated, at least in some form, by the novice. Unlike in a work of interpretive history that might draw on archival resources, in the Valley of the Shadow project the archive is not subsumed or appropriated by the narrative. And for that matter, neither is the reader.

The creation of a reciprocal combination of narrative and archive is very similar to the architecture of the cd-rom Who Built America?. Who Built America? is based on a book--a basic survey of American history from 1876 to 1914. The print book forms the "spine" of the electronic book. But "added to--and, in the process, transforming--this textual survey are nearly two hundred 'excursions,' which branch off from the main body of the text." Like the "landings" in the Valley of the Shadow project, the "excursions" in Who Built America? are meant to be pathways to and from the narrative that afford the novice user an environment in which to make meaning. As Roy Rosenzweig and Steve Brier, two of the editors and designers of Who Built America?, explain: "Those excursions contain about seven hundred source documents in various media that allow students as well as interested general readers to go beyond (and behind) the printed page and to immerse themselves in the primary and secondary sources that professional historians use to make sense of the past." (15)

The architecture of the Augusta Archive and, on a smaller scale, of Who Built America? represents a shift that could only effectively take place in an electronic text environment: the archive becomes a "space" where work can be conducted and where meaning can be made. This, of course, has always been the case with archives for scholars. And although electronic contexts will considerably ease access, the scholar's relationship to archival research per se will probably not change that much. What will change is the relationship between the novice learner and archival materials--and that ultimately will change the scholar's role as well. The other point made by the architecture of these environments concerns the kind of epistemological activity that can be mapped into electronic space. As Ed Ayers puts it, "Such a form is important, for one of the goals of the project is to make available not merely information about the past but also to make palpable the complexity of the past, its interconnectedness, its contingency and multiplicity...As a result, our hypertext is, we hope, truer to the complexity of historical experience and more satisfying to use for longer periods."

That electronic spaces can map epistemological complexity suggests a great deal of work that can be done in developing innovative environments aligned with the structure of fields and subfields. There have been other experiments with other metaphors--the wheel-and-spoke hypertext structure of Context32, the English literature web created by George Landow; the hypermedia "stack" essay experiments of Gregory Ulmer; Brenda Laurel's theorization of the links between dramatic theory and human-computer interface design--but these are merely the beginning. What is needed, in part, are disciplinary scholars who are willing and able to cross boundaries with hypermedia design theory, pedagogical theory, and interface design, at least to the extent that material and medium can be creatively and rigorously made to interact.



Rationale of the Culture-Text

In his electronic essay "The Rationale of Hypertext," Jerome McGann states that "we no longer have to use books to analyze and study other books or texts. That simple fact carries immense, even catastrophic, significance." Although he acknowledges "codex" books, such as critical and scholarly editions, to be among the most distinguished products of our cultural inheritance, he also asserts: "When we use books to study books, or hard copy texts to analyze other hard copy texts, the scale of the tools seriously limits the possible results."

What if we apply the same assertion, then, to the study of culture? Is it possible that the nature of cultural studies and cultural criticism (what many would argue is the current state of American Studies) has outgrown the print book form as a medium of communication? Is it possible that the "scale of the tools" is seriously limiting cultural criticism? Or making it overly theoretical or involuted? Could cultural history and cultural criticism be better served either in electronic environments or in combination print and electronic environments?

If American Studies has a method (are we still asking that question?), then it is what I would call "artifactual literacy." Artifactual literacy is the practice of criticism, analysis, and pedagogy that reads texts as if they were objects and objects as if they were texts. Artifactual literacy is the congeries of methodologies that binds the many subfields of cultural and historical study. I would suggest that electronic environments and interactive technologies are a larger and better-suited place for us to engage the work of artifactual literacy.

In the last--as yet unfinished--section of my essay, I would like to explore these questions and issues.



NOTES

(1) Jay David Bolter, "Literature in the Electronic Writing Space," in Myron Tuman, ed., Literacy Online: The Promise (and Peril) of Reading and Writing with Computers, p. 24; see also Bolter's Writing Space: The Computer, Hypertext, and the History of Writing (Lawrence Erlbaum Associates, 1991).

(2) James Carey, "The Mythos of the Electronic Revolution," in Communication as Culture: Essays on Media and Society, p. 118. See also, of course, Leo Marx, The Machine in the Garden; and David Nye, American Technological Sublime (MIT Press, 1994).

(3) See Howard Rheingold, The Virtual Community: Homesteading on the Electronic Frontier; Nicholas Negroponte, Being Digital; Lewis Perelman, School's Out.

(4) John Unsworth, "Electronic Publishing: Scholarly Publishing and the Public," IATH Web Site (http://jefferson.village.virginia.edu/~jmu2m/mla-94.html).

(5) Sven Birkerts, "The Fate of Reading in the Electronic Age," in The Gutenberg Elegies.

(6) "As We May Think" was first published in the Atlantic Monthly (vol 76, No 1) in 1945. It was edited and reprintedin Life Magazine (vol 19, no. 11) also in 1945. It has been reprinted with other essays by and about Vannevar Bush in From Memex to Hypertext: Vannevar Bush and the Mind's Machine, edited by James Nyce and Paul Kahn (Academic Press, 1991).

(7) Vannevar Bush, "Memex II," in Nyce and Kahn.

(8) Theodor H. Nelson, "As We Will Think," presented at the International Conference on Online Interactive Computing held at Brunel University, Uxbridge, England (September 1972); reprinted in Nyce and Kahn.

(9) Douglas Engelbart, "Program on Human Effectiveness," in Nyce and Kahn.

(10) Jay David Bolter, in Tuman, p. 22.

(11) Henrietta Shirk, "Cognitive Architecture in Hypermedia Instruction" in Sociomedia: Multimedia, Hypermedia, and the Social Construction of Knowledge (MIT Press, 1992).

(12) James Carey, p. 136.

(13) Mark Poster, The Second Media Age (Polity Press, 1995), p. 22.
