
Georgetown University
Graduate School of Arts and Sciences
Communication, Culture & Technology Program

CCTP-715: Computing and The Meaning of Life
Professor Martin Irvine
Spring 2016

About the Course

"Computing and The Meaning of Life" will work out methods to get at some big interrelated questions:

  • (1) why are most non-technical people closed out of understanding how the major "cognitive technologies" of our time (computation, software, digital media, networks), which we use to represent and communicate meaning and values, are designed and work the way that they do (rather than another way)?
  • (2) further, why do most non-technical people feel estranged from the technologies designed to mediate our core human capabilities -- using and representing language and other sign systems, creating and interpreting expression in any symbolic form, building new knowledge with new concepts, communicating with identities in our own social worlds, and preserving our representations--cultural memory--for recovery over time?
  • (3) how and why is computing connected to the longer continuum of human symbolic thought, language, expression, and meaningful artefacts, and how can we recover the deeper history of these connections?
  • (4) what kind of knowledge base can we develop for reconnecting meaning, technology, and human agency, one that can be used by everyone, and not just technology specialists?
  • (5) how can we create better access to, and interpretations of, usable meaning from the complexity and quantity of information we are creating--and what are the promises (premises) of AI beyond hype and hysteria?
  • (6) what does computing have to do with the meaning of life, and what would it take to make it more meaningful?

Everyone wants our computational and digital technologies to be about or support meaning and value in real-world situations--creating and interpreting meaningful expressions, making useful sense out of all the data being collected, and somehow converting information into knowledge for making the kind of world we want to live in. Why does it seem like there's a huge disconnect between the core abilities and needs that have always defined human societies and the experience of most "users" of today's computing devices and software? (And this isn't a techno-determinist or Luddite question.)

Here's what we're up against, and what we need to do to build new knowledge for making changes. It's a common truism that computing, software, and digital information processes are now pervasive, embedded, and hidden behind most products of daily life, from cars to wearable devices. The endless flow of productized consumer devices with digital media services, and the continual popular media hype or hysteria about technology, drown out important questions: why is it so hard to focus on using our computing and information technologies for things we find meaningful, for supporting values beyond consumerism, for intervening in the world in meaningful ways? Instead of opening up the meaning of computing for all of us as interacting agents, the consumer device industries control their own value by relentlessly productizing closed, blackboxed, and "tethered" devices, in Jonathan Zittrain's term. Our devices and software are designed to lock in users as consumers and lock us out from understanding the functions and processes that we activate in the whole distributed network of hidden agencies. Since computing and software can be designed to do almost anything (including revealing their own workings), what can we do to script a different scenario? The answers we will work toward opening up are far from "academic": we can find answers that apply directly to how students can participate in alternative human-centered designs for our ubiquitous networked computational world.

Learning Goals With Our Interdisciplinary Approach

The goals of the course are to provide students with a new interdisciplinary knowledge base that allows us to reconnect meaning, technology, and human agency. Our main framework for inquiry comes from semiotics and pragmatism--learning by working with the discovery principles (heuristics) for integrative knowledge-building on the main questions. The course will build an integrative perspective at the points where computer science, interface design, linguistics and semiotics, complexity and systems thinking, cognitive science, and design thinking all intersect. Students will develop a conceptual understanding of the design principles of computing and digital information as "technologies of meaning," and be able to use the principles for critiquing current designs, proposing alternatives, and opening up new opportunities for participating in the direction of technologies in their chosen fields and career paths. We will study important new research on computation, information, communication, interface design, and principles of agency and action that can help us better understand computing and software, but, even more important, enable us to conceive and create alternatives to current designs.

Case Studies and Focused Questions (Macro to Micro)

  • How can we re-describe the continuum of human symbolic cognition in language, writing, mathematics, visual representations, cultural media, and computational technologies (all the cognitive technologies or technologies of meaning, from early artefacts to computers) in a way that re-centers human meaning-agents in the descriptions?
  • The question of cognitive artefacts and the relations between human cognition and computation: why does it matter how we model computation, meaning structures, and human mental processes as long as things "just work"?
  • What are the possible designs and ideas for computing and active user engagement that have not been fully developed, or have been ignored, in our current consumer-driven context?
  • How does the Google search algorithm work, and why is it hard to structure ordinary human "commonsense" meanings into the algorithmic process? (See the sketch following this list.)
  • What is "Semantic Web," and what are the challenges in the semantic description of information?
  • AI: What is behind all the recent hype and hysteria about Artificial Intelligence? How can non-technical people understand the background, contexts, and possibilities in AI?
  • Computing and representing cultural meanings: What are the methods and goals of Google's ambitious "Cultural Institute" and the "Google Art Project" for organizing data and metadata about complex cultural artefacts? What are the affordances and limitations of the platform? What are the challenges for creating interfaces for interpreting meanings from cultural data?
  • Student-proposed cases from their own professional fields.
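
A note on the first case above: the core of Google's original ranking method, PageRank, can be sketched in a few lines of code. The sketch below is a hypothetical, simplified illustration (not Google's production system), using a made-up three-page link graph. Notice that the computation runs entirely on link structure and probabilities; nothing anywhere represents what a page means.

    # A simplified, PageRank-style ranking sketch (illustrative only, not
    # Google's production algorithm). Pages are ranked purely by link
    # structure; nothing here represents what any page *means*.

    links = {  # a hypothetical three-page web: page -> pages it links to
        "home": ["about", "blog"],
        "about": ["home"],
        "blog": ["home", "about"],
    }

    damping = 0.85  # probability that a "random surfer" follows a link
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform ranks

    for _ in range(50):  # power iteration until the ranks stabilize
        new_rank = {}
        for p in pages:
            # rank flowing into p from every page q that links to p
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank

    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")

The pre-semantic character of this computation is exactly what makes the case-study question hard: "commonsense" meaning has to be layered on from outside the ranking mathematics.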

Expectations and Grades

This course is conducted as a seminar and requires direct participation in the learning objectives: each student brings his or her own engagement with course materials and topics to the class discussions. Each syllabus unit is designed as a building block in the learning path of the seminar. The seminar will be a laboratory for students to explore complex concepts, make connections across sciences and disciplines, work out their own applications to issues and technologies, and develop their own syntheses of methods, concepts, and disciplinary approaches. Grades will be based on: (1) weekly short writing assignments (on the course Wordpress site) and participation in class discussions (25%); (2) individual and group presentations and projects developed for class (25%); and (3) a final research project written as a rich media essay or a creative application of concepts developed in the seminar (50%).

In the final project, students will develop an analysis and critique of a current design or interface problem, and/or propose their own innovations for better access to usable meaning and for opening up greater human agency. (Final projects will be posted on the course Wordpress site, which will become a publicly accessible web publication with a referenceable URL for student use in resumes, job applications, or further graduate research.)

This course will be based mainly on an extensive online library of book chapters and articles in pdf format in a shared Google Drive folder (enrolled students only).

Required Books:

  • W. Brian Arthur, The Nature of Technology: What It Is and How It Evolves. New York, NY: Free Press, 2009. [ISBN 1416544062]
  • Peter J. Denning and Craig H. Martell, Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.
  • Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010. [ISBN 0745645720]
  • Lev Manovich, Software Takes Command: Extending the Language of New Media. London; New York: Bloomsbury Academic, 2013.

Other core readings will include selections from:

  • Philip E. Agre, Computation and Human Experience. Cambridge; New York: Cambridge University Press, 1997.
  • James Gleick, The Information: A History, a Theory, a Flood. New York, NY: Pantheon, 2011.
  • John MacCormick, Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers. Princeton, NJ: Princeton University Press, 2011.
  • Janet H. Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.
  • Donald A. Norman, Living with Complexity. Cambridge, MA: The MIT Press, 2010.
  • Douglas Rushkoff, Program or Be Programmed: Ten Commands for a Digital Age. Berkeley, CA: Soft Skull Press, 2011.
  • Jonathan Zittrain, The Future of the Internet--And How to Stop It. New Haven, CT: Yale University Press, 2009.

Recommended books:

  • Ron White and Timothy Downs. How Computers Work. 9th ed. Indianapolis, IN: Que Publishing, 2007. 
    (10th edition forthcoming).
  • Janet Abbate, Inventing the Internet. Cambridge, MA: The MIT Press, 2000.
  • Tim Berners-Lee, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. New York, NY: Harper Business, 2000.
  • Donald A. Norman, The Design of Everyday Things. New York, NY: Basic Books, 1988 and reprints.
  • Noah Wardrip-Fruin and Nick Montfort, eds. The New Media Reader. Cambridge, MA: MIT Press, 2003.


Learning Objectives:

  • Foundational background for the course: introduction to key concepts, methods, and approaches.
  • Introduction to the schools of thought and major testable hypotheses that we will work with.
  • Establishing the context of research and theory in the fields working on the question of human symbolic capabilities, symbolic cognition, actions, and embodiments in media and technology.

Introduction to the framework for our guiding research question:

  • How do we construct a model of meaning processes, symbolic thought and action, and sign and symbol systems that allows us to unify all forms of symbolic activity, from language and symbolic artefacts like artworks and music to mathematics, media technologies, computation, and software programs?

Major questions motivating our inquiry and learning paths:


In class:
Intro Lecture and Presentation (Prof. Irvine) (Google Presentation)

In-class discussion of basic concepts, examples, and case studies:
sign systems, symbolic-cognitive artefacts, and computational devices

  • Artefacts we think with: introducing symbolic and "artefactual" cognition, technologies of meaning
  • Defining terms: symbol/symbolic, sign, medium, artefact, interface, cognition, computation
  • Framing our inquiry with the semiotic and pragmatic approach: open inquiry.

Using Research Tools for this Course (and beyond)

Instructions for weekly discussions on the course Wordpress site

  • Using the Wordpress site for weekly discussion and your journal for ideas and questions.
  • Introduction to next week's readings and assignment.

Learning Objectives:

Learning new concepts to think with and make new discoveries with is like "booting up" software: what you load into your personal concept repertoire shapes what you can do. Doug Engelbart (whose work we will read) called his cognitive and user-centered human interface project the "Bootstrap Project." This is a common computer metaphor for the "start-up" software (pulling up by the "bootstraps," hence, "booting" a program) that needs to be installed (loaded into system memory) so that the system can load or run anything else.

The first weeks, then, are devoted to an orientation to the major domains of research and theory, key concepts, and specialized terminology in a "big picture" and "concept map" overview. Students will gain a sense of the disciplines, terminology (specialized vocabularies), key concepts, arguments, and guiding hypotheses in the relevant fields that we will draw from.

Using the Online Course Reader of Key Texts

For this week, survey (don't read whole sections yet--unless you feel compelled to!) the reading selections: "snapshots" of major statements and hypotheses that provide a map of the major sources of the key concepts and approaches in this interdisciplinary study.

Interdisciplinary Challenges

When beginning to learn in a new field or research approach, we are always entering intellectual conversations and debates midstream, already in progress (sometimes for hundreds of years!). To understand and participate, we must always begin by getting a sense of the major questions, the schools of thought, the problem set, the scope of the research domain, and the range of disciplines that converge on a problem. We have to sort out and map the inherited concepts, terms, methods, and discourses as best we can, and learn what motivates the current research questions.

As you go through the course, engage directly with the assumptions, arguments, concepts, and array of disciplinary vocabularies in these representative major statements, which everyone working in the relevant field is assumed to know. When reading, ask yourself:

  • Why are these arguments, terms, and concepts important for our overall research program, and how can we use them to think with? how can we test the validity of different (often rival or competing) hypotheses?
  • What conversations, debates, and dialogues are these statements participating in? what were/are the contexts, disciplinary/professional conversations that we need to know to understand how/why the arguments are framed as they are?
  • What are the key "take aways" in terms of getting what the arguments and assumptions are about, and using them as possible approaches in our own learning and research?

Don't be concerned if you don't understand the terms, concepts, and arguments in these texts. No one does on a first reading. We will build out the contexts and backgrounds for understanding the assumptions and arguments in these key statements and why they are important. Throughout the course, we'll work through the consequences of these concepts and methods, and ways to critique them in light of the most recent research and theory.

Interrelated fields converging on the study of symbolic cognition and sign systems:

  • Linguistics and philosophy of language
  • General sign systems theory (semiotics)
  • Semiotics in specific disciplinary approaches
  • Cognitive Science approaches to symbolic cognition, language, representation, cognitive artefacts
  • Anthropology, archaeology, human evolution: symbolic behavior, cognition, social organization, and human culture from prehistory to the present
  • Communication, Information, and Media studies: symbolic systems and social-technical mediation
  • Computation: theories and models of abstraction and representation as symbol processes, computation as algorithmic symbol transformation

Readings: Orientations

However, the computer has many other capabilities [beyond mathematical calculation] for manipulating and displaying information that can be of significant benefit to the human in nonmathematical processes of planning, organizing, studying, etc. Every person who does his thinking with symbolized concepts (whether in the form of the English language, pictographs, formal logic, or mathematics) should be able to benefit significantly. (Doug Engelbart, Augmenting Human Intellect: A Conceptual Framework, 1962)

We will use this famous text as a launchpad for thinking in three directions--past to present and future--to begin uncovering the history of ideas and concepts for technologies that led to this big leap in thinking about computing as a cognitive artefact and "interface" for aiding thinking, expression, and creativity. (Engelbart and his team later invented and designed a graphical interface, the mouse, hyperlinked documents, and the windowing concept--but he wouldn't have worked on developing those technical tools without starting from these ideas.) Where have the ideas taken us in current designs and debates about computing and meaning, and why is this "computing revolution" unfinished?

Since we are at one cumulative point in the implementation of many of these ideas, they may not seem radical and new. But in 1962, when only specialists operated huge mainframe computers, Engelbart's "H-LAM/T system" (Human using Language, Artifacts, Methodology, in which he is Trained) was a huge leap beyond what computers were used for and what the computer industry was doing. These ideas also became the basis for the models of computing used in the PC industry, after the development of dependent technologies and materials science (microprocessors, memory, screens) provided a way to implement them.

Engelbart's approach and the output of his whole research team also represent a major divide within computer science and engineering itself: his "intelligence augmentation" (IA) is a human-cognitive-agent-centered approach, as distinguished from mainstream "artificial intelligence" (AI), which seeks automatable or quasi-autonomous computing/software agents that simulate, replace, or exceed human intelligence. This is a fundamental difference in approach (even with ways of merging the approaches) that we continue to negotiate in many complex ways today. It goes to the heart of the matter in questions about "computing and meaning." We will return to Engelbart's work later in the larger context of computer interface design and the cognitive-symbolic issues that have motivated development from the 1970s to today.

  • Daniel Chandler, Semiotics: The Basics. 2nd ed. New York, NY: Routledge, 2007. Excerpts.
    [This basic introduction (targeted for undergraduate humanities students) is useful for the key terms and sources of concepts that are known in academic study. We will go on to clarify our own set of terms and extensible concepts for analyzing the complex combinations of symbolic systems used in all kinds of expressions, cultural artefacts, and technologies.]
  • Peter Denning, On Information and Computation (Course Reader)
  • Survey Readings in Course Reader of Key Texts:
  • Begin the self-paced tutorial on Codecademy on the key principles of computer programming (continuing each week through week 9).

For discussion in class:

  • For an open seminar discussion: survey the background texts for key terms and concepts, and we will work through the contexts and backgrounds in class.
  • Introduction to next week's readings

For discussion in Wordpress (link to Wordpress site)

  • Even though these texts and concepts are new for you, does this "bigger picture" view begin to open up new ways to think about signs, symbolic thought, media, communication, and computing?
  • This week's writing can be an informal statement or list of questions that come to you as you are surveying the outlines of this integrative, interdisciplinary field.

Learning Objectives and Main Topics:

Gaining a foundation in the recent research on symbolic cognition in intersecting disciplines and sciences--anthropology, evolutionary psychology, archaeology, semiotics, and cognitive science more broadly--and in how this developing knowledge base allows us to ask better-informed questions about human sign systems, technologies as cognitive artefacts, and mediated communication systems.

Background: The Symbolic-Cognitive Continuum
The human ability for making meaning in any kind of expression, and for embodying meaningful, collectively understood expression in material media technologies, depends on shared symbols and symbolic cognition. Research in many fields continues to discover more and more about the consequences of being, in Terrence Deacon's term, the "symbolic species." This week, you will learn the main concepts from cognitive science and archaeological research for describing the "cognitive continuum" from language and symbolic representation to multiple levels of abstraction in any symbolic representation (spanning writing, mathematics, symbolic media like images and their combinations in film and multimedia, and computer code). One promising way of studying media, communication, and computational technologies is uncovering a "cognitive continuum" of accumulating capabilities for symbolic representation, abstraction, and material externalization for storing and "off-loading" collective cognition in symbolic artefacts (all forms of media and communications using material and technical means), including computational processes and digitization.

Overview of Topics and Themes:
Symbolic Cognition, Sign Systems, Mediation > Cognitive Technologies


Within a broad cluster of fields--ranging from neuroscience to cognitive linguistics, cognitive anthropology, and computational models of cognition and artificial intelligence research--there has been a major convergence on questions and interdisciplinary methods for studying cognition, human meaning-making and the "symbolic faculty" generally, including all our cumulative mediations and externalizations in "cognitive technologies." Cognitive science has been closely related to computer science in seeking computational models for brain and cognitive processes, and proposing hypotheses that explain cognition as a form of computation.

Language > Symbol Combinatoriality > Abstraction > Mathematics > Machines > Computation


There's wide agreement that any account of symbolic systems needs to explain the human parallel cognitive-symbolic "architectures" that are based on:

  • (1) the functions of language and other symbol systems for enabling abstraction (generalized concepts), learning, and representations in memory of what is learned;
  • (2) rules for combining sign/symbol components (an underlying syntax for forming complex and recursive expressions of meaning units);
  • (3) the intersubjective preconditions of symbolic structures "built in" to meaning systems for collective and shared cognition; and
  • (4) material symbolic-cognitive externalizations (e.g., writing, images, artefacts, the human built environment) transmitted by means of "cognitive technologies" (everything from writing and image making to digital media and computer code), which enable human cultures and cultural memory.

This recent interdisciplinary research convergence is a "game changer" for the way we think about human meaning systems, symbolic cognition, communication, and media technologies.

Readings

  • Kate Wong, “The Morning of the Modern Mind: Symbolic Culture.” Scientific American 292, no. 6 (June 2005): 86-95.
    [A short accessible article on the recent state of research on the origins of human symbolic culture and the relation between symbolic cognition and tool making and technologies.]
  • What do we mean by "cognitive artefact"? How is this related to "symbolic artefact"?
    "Artifacts, Cognition, and Civilization" and "Cognitive Artifacts." Excerpts from Wilson, Robert A., and Frank Keil, eds. The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: The MIT Press, 1999.
  • Terrence W. Deacon, The Symbolic Species: The Co-evolution of Language and the Brain. New York, NY: W. W. Norton & Company, 1998. Excerpts from chapters 1, 3, 11, 13.
    [Deacon's work has been influential and presents an important argument for the co-evolution of language, symbolic cognition, and culture with the human brain. Read for his main argument about language, symbols, and brain evolution. Many details and data from evolutionary neuroscience are, of course, outside our field. Do your best to work through Deacon's main assumptions and conclusions. Recent work in neuroscience and on other animal species reveals that some animals have a capacity for signs in a broad sense, for signals and communication behavior, but humans are the only species that uses symbols in syntactic or combinatorial structures and for abstract conceptualization.]
  • Colin Renfrew, “Mind and Matter: Cognitive Archaeology and External Symbolic Storage.” In Cognition and Material Culture: The Archaeology of Symbolic Storage, edited by Colin Renfrew, 1-6. Cambridge, UK: McDonald Institute for Archaeological Research, 1999.
    [Important argument to supplement Merlin Donald's view about the evolutionary origins of the symbolic brain: material culture is part of the externalizing cognitive process.]
  • John C. Barrett, "The Archaeology of Mind: It's Not What You Think." Cambridge Archaeological Journal 23, no. 1 (2013): 1-17.
    [Barrett provides a good summary of the "state of the question" where evolutionary sciences and archaeology intersect on understanding the origins of symbolic cognition and use of artefacts.]

Supplemental: For Further Research

  • Rodríguez-Vidal, Joaquín, Francesco d'Errico, Francisco Giles Pacheco, Ruth Blasco, Jordi Rosell, Richard P. Jennings, Alain Queffelec, et al. "A Rock Engraving Made by Neanderthals in Gibraltar." Proceedings of the National Academy of Sciences, September 2, 2014, 201411529.

Presentation (for class discussion): "Cognition, Symbols, Meaning"

For discussion in class:

  • Introduction to next week's readings

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • What are some of the main hypotheses and research conclusions (so far) in the research literature above for the question of how we have evolved as a uniquely "symbolic species"?
  • What are the consequences of this research and major working hypotheses for studies of language, communication, symbolic cognition, tools, machines, and technology?
  • Do any of these recent research findings open up new ways for you to understand recent technologies in a deeper and longer historical continuum?

Learning Objectives and Main Topics

Learning the main concepts, terminology, and assumptions developed in contemporary linguistics as essential foundations for understanding and describing language, and by extension, all human symbolic capabilities, sign systems, and communication.

Why Learning Well-Researched Linguistic Methods Is Important for All Our Fields

In CCT and beyond, we find many disciplines devoted to intensive inquiry into all human communication and sign systems. We are seeing exciting new interdisciplinary work on the whole history of meaning systems and symbolic representation, from the earliest records of symbolic expression to computation, automated symbolic processing, and artificial intelligence. Many fields assume that the meaning systems of interest to the field (e.g., text, images, music, designed artefacts, multimedia, computer code) work "like a language" or "are a language" without clearly defining what a language is: what are the properties or features of language, or of a language, that other meaning or media systems can be like? The expansive field of semiotics is founded on the inquiry into generalizable principles for meaning-making and meaning systems, and posits language as our "primary modeling system." Is the "like a language" analogy or parallelism more than a vague observation? You can see that it's fundamental for us to begin with a model or description of language that is as clear and as well-informed as possible before assuming that other meaning systems can be "like" (a) language. Contemporary linguistics provides a knowledge base of key concepts and terms for the descriptive levels, rules, and constraints that specify how language works and what a natural language is. Getting as precise as we can at the detail levels will help us think more clearly about important questions like: "are visual image genres a language?"; "what do we mean by a 'computer language,' code, and 'language processing'?"; "is music a language, and/or are individual music genres?"; "how do we understand film, video, and multimedia genres with multiple combined 'languages'?"; "how is language connected to intelligence, symbolic cognition, and other forms of human cognition?"

Key terms and concepts: 

  • syntax, semantics, lexicon
  • generative grammar, open rule-governed combinatoriality
  • pragmatics (the contexts and situations of language use, shared assumptions, speech and discourse genres, and speech acts)
  • cognitive linguistics (the intersection of language and cognitive/brain science research)

The Importance of Linguistics for Interdisciplinary Thinking and Research

The human natural capacity for language is the starting point of many disciplines and research programs in all aspects of communication, symbolic culture, and media. Many of the research questions and most of the terminology for the study of language and human symbol systems have been established by the various specialties of modern linguistics. The terms and categories for analysis in linguistics have also been widely used by other disciplines, and all students studying media, communication, and computation need to be familiar with the concepts and research programs in the major branches of linguistics.

We can only do a top-level overview here, but familiarity with the core concepts and research questions will allow you to advance to other questions and topics in your own research. Each of the major topics of linguistic research and theory (especially syntax/grammar and semantics) involves major, ongoing research programs, schools of thought, and competing arguments and terminology. All aspects of linguistic research are now part of larger research programs in the cognitive and brain sciences, as well as ongoing investigations in social sciences, philosophy, and humanities fields. Getting a grounding in the central research questions, concepts, and methods will open up many new paths for your own thinking and research.

Our Reference Model: Ray Jackendoff's "Parallel Architecture" Model of Language

We will use Ray Jackendoff's "Parallel Architecture" description of language as a reference model for explaining the structure and principles of natural language, and explore ways to extrapolate from this model to other sign systems (e.g., music, images, semantic representation in software) to uncover how they may be like, or different from, language. Jackendoff's model is driven by unifying syntax, semantics, and the lexicon (the words in a language), and the model assumes that there are cognitive and symbolic "interfaces" between language and other sign systems. Jackendoff was Noam Chomsky's student at MIT in the 1960s, and while maintaining an expanded "generative grammar" framework, he has usefully developed his own research conclusions different from Chomsky's, and has synthesized many schools of thought and research programs in linguistics. He is also a musician, and has written important works on the structure of music and its parallels with, and differences from, language. His models and question-framing can be productively combined with C. S. Peirce's macro model of signs as generative rule-governed processes. When thought through together, Jackendoff's and Peirce's models provide a way to re-model all sign systems and symbolic functions as combinatorial structures with components working in parallel to create the "output" expressions that we understand as meaningful (any symbolic expression in words, images, sounds, and their combinations).

Is Language Our "Master Switch" or "Cognitive Bootstrap" for all Other Symbolic Capabilties?

This is a major open question in all the sciences and disciplines concerned with language acquisition, language and the brain, and many other cognitive, biological, and social aspects of language, communication, and the use of sign systems. We won't attempt an overall answer, but thinking about this important question spills over into many other questions in the study of any form of symbolic expression and our technologies for representation and mediation.

Readings: Language as a Symbolic Cognitive System: The Linguistics Model

Introductions (in this order)

  • Steven Pinker, Video presentation on Language and the Human Brain (start here)
    • A well-produced video introduction to the current state of knowledge on language and cognitive science from a leading scientist in the field.
    • See also: Steven Pinker's website at Harvard University for a view of his work and career.
  • Martin Irvine, "Introduction to Linguistics and Symbolic Systems: Key Concepts."
  • Andrew Radford, et al. Linguistics: An Introduction. 2nd ed. Cambridge, UK: Cambridge University Press, 2009. Excerpts.
    • [This is an excellent text for an overview and reference. Review the Table of Contents so that you can see the topics of a standard course intro to linguistics. Read and scan enough in each section to gain familiarity with the main concepts. Focus on the excerpts on Introduction to Linguistics as a field, and sections on Words (lexicon) and Sentences (grammatical functions and syntax).]
  • Ray Jackendoff, Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, USA, 2003. Excerpts and introduction to "the Parallel Architecture" model of Language.
    Also included in these excerpts is Jackendoff's extensive bibliography of references cited in the book.
    Read as much as you can for this week, especially on the Parallel Architecture; we will continue with Jackendoff next week.
    • Jackendoff has developed a useful synthesis of many important developments in linguistics in a "unification" model of language components that he calls the "parallel architecture." This book is a highly detailed, argumentatively nuanced, exhaustively researched compendium of the major issues in linguistics (especially from the cognitive and generative approaches) that is difficult to summarize, but there are important "take aways" for us. In reading Jackendoff's work, you will be jumping in midstream to a highly-detailed 30-year ongoing debate about syntax and semantics, but he usually provides a good orientation to the issues. He also provides useful conceptual hooks and links for ways to talk meaningfully about the structures of language in relation to other symbolic systems like music and visual narratives.

Supplemental for Further Background [Optional]

  • Steven Pinker, "How Language Works." Excerpt from: Pinker, The Language Instinct: How the Mind Creates Language. New York, NY: William Morrow & Company, 1994: 83-123.
    [Accessible introduction to linguistic analysis of syntax and the role of language in cognition; underlying theory to his video presentation above.]
  • Steven Pinker, "The Cognitive Niche: Coevolution of Intelligence, Sociality, and Language." Proceedings of the National Academy of Sciences 107, Supplement 2 (May 5, 2010): 8993–99.
    [A short accessible essay (written on the occasion of appraising Darwin's continuing value) summarizing current issues and state of knowledge on human evolution and cognition.]

For discussion (in class):

Experiment with Visualizing Sentence Structure through Computational Parsing

  • XLE-Web: A sentence parsing tool and syntax tree generator that maps both the "Constituent Structure" and "Lexical Functional Grammar" models of generative grammar. Choose "English" and map any sentence for its constituent (c-) and functional (f-) structure! (Uses linguistic notation from two formal systems.) (The term "parse" comes from traditional grammar for breaking sentences down into their "parts of speech" [pars/partes: Latin for "part/parts"].) Syntax parsers are used in computational linguistics and in all complex text analysis in software and network applications. Web search algorithms and Siri's voice recognition and interpretation software must use parsers to generate a map of the probable grammatical structure of natural language phrases before processing probable semantic searches. So here you get a visualization of what Siri and Google have to do in milliseconds, behind the scenes, before the software can initiate a search or other command.
  • Try the test sentence "I like dark beer but dark beer doesn't like me." You will see how the parser generates several grammatical structure options because of the ambiguities in switching subject and object with the verb [like]. Then try any sentence; try some very complex ones (compound subjects, many conjunctions and clauses, etc.). (A toy computational parsing sketch follows below.)
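
For those who want to see what a parser does under the hood, here is a minimal sketch using the NLTK Python library with a toy grammar of our own (an illustration of the same principle, not the XLE-Web system itself). The deliberately ambiguous sentence yields two constituent trees, one for each grammatical reading:

    # A toy constituent parser using NLTK (pip install nltk).
    # Real parsers like XLE-Web use far richer rule sets, but the principle
    # is the same: an ambiguous string yields more than one tree.
    import nltk

    grammar = nltk.CFG.fromstring("""
        S -> NP VP
        NP -> Det N | Det N PP | 'I'
        VP -> V NP | VP PP
        PP -> P NP
        Det -> 'an' | 'my'
        N -> 'elephant' | 'pajamas'
        V -> 'shot'
        P -> 'in'
    """)

    parser = nltk.ChartParser(grammar)
    sentence = "I shot an elephant in my pajamas".split()

    # Two trees print: "in my pajamas" can attach to the verb phrase
    # (I was wearing them) or to the noun phrase (the elephant was).
    for tree in parser.parse(sentence):
        print(tree)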

For Ongoing Discussion on Symbolic Feature Mapping: The Semiotic Matrix

  • We will use this spreadsheet as a tool to think about the features, functions, and properties of our symbolic systems and how the "symbolic-cognitive architectures" are distributed across sign systems (individually or in combination like film and video). Any map is incomplete: for heuristic purposes only. The Google shared sheet is set for comments by all students in the course; make your own analysis and contribute to the map. We will develop the map over the next 2 weeks.
  • Continuing: Presentation for class discussion: Irvine, "Cognition, Symbols, Meaning" (end section on language)

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • Drawing from the readings for this week (and any connections to prior weeks), how would you answer these basic questions for someone who has little or no knowledge of linguistics as a science or field of research: What is language? What is "a language"? What are the essential features that enable a language to be a language?
  • From these foundations, we will go on to ask other important questions:
    What are the implications of using the features of language as the model for other symbolic systems (visual, audio, and multimedia combinations) and for most forms of communication and media?
    Even before we investigate the knowledge base from sciences and disciplines that converge on the study of other symbolic systems, can you think through what it would mean to study other sign systems by assuming that they work like a language (e.g., making meaningful rule-governed combinations of symbolic elements in expressions that are understood collectively)? Try working with one or two of the main linguistic concepts to see if they provide extensible models for understanding and describing other meaning systems (like visual genres or music genres) in the forms of media and communication we use every day. (You can begin thinking this through with the Semiotic Matrix map.)
  • What seems intuitively clear at this point, and what is difficult or unclear?

Learning Objectives and Topics:

Developing an understanding of language and other sign systems by using concepts and research from the "cognitive architecture" and "sign systems" (semiotics) perspectives. Terms and concepts in the related disciplines are not consistent, so we will begin by working with C. S. Peirce's key terms and concepts and their applications, for minimal consistency and for heuristic value. We will work toward an open method for connecting linguistics research with more generalizable semiotics (sign systems theory) and research on symbolic cognition. We will investigate ways to describe the features of language that are extensible to other symbolic systems and to the technologies that mediate them. We will expand from Ray Jackendoff's insightful "parallel architecture" model of language (which usefully rolls up many research questions in cognitive linguistics) to an account of the cognitive functions of signs and symbols in the Peircean semiotics tradition. We will begin surveying the kinds of knowledge resources we need to bring to the main question of human meaning-making in all symbolic forms, the individual and collective cognitive processes required and involved, and the role of externalized symbolic media and artefacts in our cultural/historical representations and transmission of meaning.

Background

One major feature of our symbolic-cognitive "architecture" across sign systems is the capacity for making open rule-governed combinations of the constituent units of a meaning system (like words or patterns of musical sounds) for new expressions in new contexts of meaning (the principle of combinatoriality). Open rule-governed combinatoriality allows symbolic-cognitive agents -- human cognizers and now software with delegated agency -- to generate unlimited, new symbolic expressions (in language, music, art, mathematical abstractions, multimedia, and computer code) in new, unpredictable contexts and situations of meaning. This cognitive architecture for open combinatoriality also extends to the way we develop new modular combinations of technologies in cumulative combinatorial designs for complex systems (think smart phones and the Internet itself). There are many dimensions of always being in sign systems that are not clearly understood and require much more systematic research.
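
To make "open rule-governed combinatoriality" concrete, here is a minimal sketch (a toy example of our own, not from the readings) in which a handful of recursive rules and a tiny lexicon generate an open-ended set of novel, well-formed sentences:

    # Open rule-governed combinatoriality in miniature: a few recursive
    # rules plus a small lexicon generate unlimited novel expressions.
    import random

    rules = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"], ["Det", "N", "PP"]],
        "VP":  [["V", "NP"]],
        "PP":  [["P", "NP"]],          # a PP contains an NP: recursion
        "Det": [["the"], ["a"]],
        "N":   [["sign"], ["symbol"], ["interpreter"]],
        "V":   [["generates"], ["interprets"]],
        "P":   [["within"], ["beyond"]],
    }

    def generate(category):
        """Expand a category by randomly choosing one of its rules."""
        expansion = random.choice(rules[category])
        return " ".join(generate(s) if s in rules else s for s in expansion)

    for _ in range(3):
        print(generate("S"))  # e.g., "the interpreter generates a sign beyond the symbol"

The point is the shape of the system, not this toy: finite means, recursive rules, and an unbounded supply of new expressions.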

The major open question is how all signs and symbols work in their simultaneous cognitive, material/perceptible, and communicative/intersubjective dimensions. The Peircean semiotics tradition provides an open model for investigating symbolic cognition and signs in many dimensions -- generative meaning-making principles; communication and information representation; cognition, learning, and knowledge; and the dialogic, intersubjective conditions of meaning and values in communities and societies. Peirce's suggestive, incomplete, and flawed model remains very productive for thinking through problems of meaning, representation, information, logic, and computation in a symbolic and cognitive framework.

Interdisciplinary Challenges

Consistency in terminology and defining the scope of symbolic and semiosic processes. Unifying the semiotic-linguistic-cognitive approaches to sign systems, meaning, conceptual representation, information, and communication with the study of the many levels of meaning and value distributed through the media systems and popular culture genres in a society (confronting the problem of the division between the more "scientific" or "technical" study of language, sign systems, and communication and the study of culture).

Key Questions:

  • Is it possible to work out a generalizable model of meaning processes in any symbolic form or medium (that is, can there be a general semiotics like a "Universal Grammar")?
  • Is there a common, unifying cognitive architecture for sign systems and symbolic cognition, or is it more useful to understand and describe the features and specifications required for each symbolic system?
  • How are all sign systems based on generative meaning principles, and what are the parallels and differences among the systems?
  • How can we best understand the interfaces between/among meaning systems in the human "semiosphere" (language, writing, images, and time-based media like music and film)?

Key terms and topics:

  • Generativity: generative meaning-making principles of symbolic systems
  • Combinatoriality: procedures for rule-governed, syntactic combinations in sign systems
  • Semiosis: sign/symbol as generative process, sign as function, sign as part of sign system
  • Parallel Architecture model for language (Jackendoff): extensible to other sign systems?
  • Interfaces: cognitive links for "parallel processing" symbolic structures between/to/from one component to others in a symbolic system (like combining speech sounds and meaning concepts in a single process of using language), and cognitive links among different symbol systems used in combination (like words in music, and speech, sound, music, and visual narrative in film and video)
  • Meaning generation and meaning understanding as symbolic-cognitive activity
  • Meaning as conceptual network
  • Cognitive architecture and use of signs/symbols
  • Conceptual semantics and the semantics-semiotics "hand off"
  • Intersubjectivity
  • Recursion
  • Dialogism, dialogic conditions of meaning in living contexts

Readings:

Continuing:
  • Ray Jackendoff, further implications of the "Parallel Architecture" model of language as a combinatorial system
    [Go to last sections of the excerpts; continued from last week.]

Introductions and primary texts for semiotics and sign systems research

  • Signs, Symbols, and Symbolic Cognition Reader
    [Review the excerpts from de Saussure, Jakobson, Peirce, and Semiotics through the Ogden-Richards "Meaning of Meaning" section. For baseline familiarity with inherited terminology.]
  • Martin Irvine, "Introduction to Meaning Systems and Cognitive Semiotics" (book chapter in progress). Read through section 5.
  • Richard J. Parmentier, Signs in Society: Studies in Semiotic Anthropology. Bloomington, IN: Indiana University Press, 1994. Excerpts.
    [Parmentier provides a useful summary of Peirce's main concepts. Don't get stuck in the terms that Peirce invented to try to account for all the categories of sign functions. The key is following what Peirce discovered in the triadic structure of the symbolic-cognitive process. In chapter 1, focus on the first 3 sections (through "Language and Logic").]
  • Semiotic Elements and Classes of Signs (Wikipedia)
    [For reference: Good overview of concepts from Peirce on. Wikipedia is seldom reliable, but this entry has had care and attention.]

For Discussion on Symbolic Feature Mapping: The Semiotic Matrix

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • Choose an example of a cultural work (movie scene/shot; musical work or section of a composition; image or art work as an instance of its genre[s]) as an implementation of one or more sign systems, and, using the terms, concepts, and methods in the readings this week, describe as many of the features as you can for how the meanings we understand (or express) are generated from the structures of the symbolic system(s). Can the "Parallel Architecture" paradigm extend to the features and properties of other symbolic meaning-making systems?
  • In terms of the questions posed in the "Semiotic Matrix" grid of possible symbolic features, how would you discuss either (1) a two-dimensional image or artwork or (2) an instance of a time-based medium like music or film/video? How does the "like a language" assumption in most semiotics theory extend to other media; what are the limitations, and/or potential heuristic value, of understanding symbolic systems on a language model?

Learning Objectives and Main Topics

Learning the main concepts and approaches in C. S. Peirce's model for explaining signs and symbolic thought: sign relations and symbolic processes for applications to our core set of cases.

[Diagram: Semiosis Triad]

Readings:

Research Resources

Following Up on Music as a Case Study for Semiotics

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • Each student will be assigned a medium or sign system to describe with Peirce's concepts and applications to examples. Follow outlines for a method in the "Student's Introduction" above.

Learning Objectives

"Information consists of (1) a sign, which is a physical manifestation or inscription, (2) a relationship, which is the association between the sign and what it stands for, and (3) an observer, who learns the relationship from a community and holds on to it."
-- Peter Denning, ACM

"Information is the difference that makes a difference." --Gregory Bateson

  • Learning the major terms, concepts, and technical applications for communication and information theory as defined and used in electronics, computation, and digital media for signals and digital-electronic states.
  • Learning how and why the engineering definition of information and communication is essential for the way all information and media technologies work, and why these definitions and technical implementations are necessarily "pre-semantic" (or non-semantic) (thus bracketing off the meanings of digitally encoded information units).
  • Learning why the engineering transmission model is not applicable for describing how we encode meaning in the many contexts of meaning that motivate and frame transmitted signals.
  • Learning how to complete the description of human meaning making in artefactual communication and media representations with current knowledge and models provided by linguistics & pragmatics, semiotics, and cognitive science.

Key Terms and Concepts:

  • Information (as a unit of probability and differentiation from other possibilities)
  • The Transmission model of Communication and Information
  • Dominant metaphors: signal, conduit, container, channel, source, destination
  • The meaning contexts of messages and information
  • Shannon information vs./and Peircean information: the solution to getting meaning back into the information model

Backgrounds: Why the Key Concepts of Information Theory are Important

Communication and information theory from the 1950s-80s is widely taken for granted in discussions of media, technology, and computation. The signal-code-transmission model of information theory has been the foundation of signal and message transmission systems in all communication technology from the telegraph to the Internet and digital wireless signals. Originally developed as models for transmitting error-free electronic signals in telecommunications, these theories and concepts have also informed models of meaning communication in culture more broadly (with severe limitations on how meaning can be explained).

It's essential to understand how the signal transmission model works and why it's important for all the digital technologies that we use. It provides an essential abstraction layer in all electronic and digital systems. We also need to understand that it is not a model for the larger sense of communication and meaning systems that our symbolic-cognitive technologies allow us to implement. We'll need to understand why meaning and social uses of communication are left out of the signal transmission model and how we use signals and coded information units within the systems of meaning that motivate them.
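
A small worked example can make the "pre-semantic" point concrete. Shannon's measure computes information from symbol probabilities alone; the sketch below (with invented probabilities) never consults what any symbol means:

    # Shannon information in miniature: H = -sum(p * log2(p)), computed
    # from symbol probabilities alone. The probabilities are invented for
    # illustration; meaning never enters the calculation.
    from math import log2

    def entropy(probs):
        """Average information per symbol, in bits."""
        return -sum(p * log2(p) for p in probs if p > 0)

    uneven = [0.5, 0.25, 0.125, 0.125]   # a source with predictable symbols...
    uniform = [0.25, 0.25, 0.25, 0.25]   # ...vs. one with maximal uncertainty

    print(f"uneven source:  {entropy(uneven):.3f} bits/symbol")   # 1.750
    print(f"uniform source: {entropy(uniform):.3f} bits/symbol")  # 2.000

    # Relabel the symbols as words, pixels, or musical notes and the numbers
    # do not change: the meanings are bracketed off, by design.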

How do we get "meanings" into and from bits and data? The meaning networks of communicators and information users (for anything expressed in any medium) are understood, assumed, and encoded in signal mediums but are not properties of their material form. (This is the basic feature of human symbols, which is the main subject of semiotics: meaning is not a property of perceptible signals but is what is enacted by cognitive agents who use collectively understood material signs.) In any model of information and communication, we also need to account for contexts in two senses of the term: both the sender's and receiver's context (world of meanings and kinds of expressions assumed), and the message's contexts (its relation to other messages both near and far in time).

Examples: We know when information units/data have been transmitted and received successfully because we recognize the symbolic units (meaning units) that are being encoded! The background technologies are designed to send and receive signals (e.g., radio waves, bits and bytes) without error (or reduction in error for probable decoding). But we would only design this kind of system because we are encoding symbolic expressions or representations already understood as meaningful. Think about the fact that we recognize as meaningful:

  • the text in text messages (after the information is interpreted in software for rendering on our displays): meaning is what motivated the encoded information;
  • the spatial array of visual information in a photograph after display;
  • the sounds that we recognize as a musical genre when it plays through audio devices.

We see many other limitations in the linear, one-way transmission models for describing communication: all of our communications have production and reception contexts that form networks of meaning, use, and purposes that must be accounted for in other ways. Further, what kinds of models can account for all the communication "modalities": one-to-one, one-to-many, many-to-one, many-to-many, and dialog (one-to-other, mutually assumed); synchronous (at the same time) and asynchronous (delayed reception and response, over short or very long time spans)?

The transmission model of information is essential to understand, but can't be used for extrapolating to a model for communication and meaning (as it often is in some schools of thought). The signal transmission theory is constrained by a signal-unit point-to-point model. It can't account for the fact that in living human communication there is never only one message to be communicated or one unit of communication/information but, rather, a message unit appears in a dense network of prior, contemporaneous, and future messages surrounding anything communicated. The motivation context of a message includes essential meta-information known to communicators using a medium (kinds/genres of messages, social conventions, assumed background knowledge).

Video Lessons (for background)

Readings: Models of Communication and Information and Their Consequences

  • Luciano Floridi, Information, Chapters 1-4. PDF of excerpts.
    [For background on the main traditions of information theory, mainly separate from cognitive and semantic issues.]
  • Peter Denning, Great Principles of Computing, Chapter 3, "Information".
  • James Gleick, The Information: A History, a Theory, a Flood. (New York, NY: Pantheon, 2011).
    Excerpts from Introduction and Chapters 6 and 7.
    [Readable background on the history of information theory. I recommend buying this book and getting as deeply into the issues that Gleick explains as possible.]
  • Ronald E. Day, "The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies." Journal of the American Society for Information Science 51, no. 9 (2000): 805-811.
    [Models and metaphors for "communication" have long been constrained by "transport", "conduit," and "container/content" metaphors that provide only the signal processing map of a larger contextual process. How can we describe "communication" and "meaning" in better ways that account for all the conditions, contexts, and environments of "meaning making"? Are network and other systems metaphors better than the linear point-to-point metaphors?]
  • Peter Denning and Tim Bell, "The Information Paradox." From American Scientist, 100, Nov-Dec. 2012.
    [Computer science leaders address the question: Modern information theory is about "pre-semantic" signal transmission and removes meaning from the equation. But humans use symbol systems for meaning. Can computation include meaning?]

Supplemental and Further Research:
Semantic, Pragmatic, and Communication Dimensions

Martin Irvine, Conceptual Models in Communication and Information (presentation) [In-class discussion]

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • The signal-code-transmission model of information theory has been the foundation of signal and message transmission systems in all communication technology from the telegraph to the Internet and digital wireless signals. Why can't we extrapolate from the "information theory" model to explain transmission of meanings? Where are the meanings in our understanding of messages, media, and artefacts? What is needed to complete the information-communication-meaning model to account for the contexts, uses, and human environments of presupposed meaning not explicitly stated in any specific string of symbols used to represent the "information" of a "transmitted message"?
  • From what you've learned about symbol structures so far, can you describe how the physical/perceptible components of symbol systems (text, image, sounds) are abstractable into a different kind of physical signal unit (electronic/digital) for transmission and recomposition in another place/time? (Hint: as you've learned from semiotic theory, meanings aren't properties of signals or sign vehicles but are relational components in meaning-making structures in the whole process understood by senders/receivers in a meaning community.)
  • Following on with a specific case: how do we know what a text message, an email message, or social media message means? What kinds of communication acts understood by communicators are involved? What do senders and receivers know that isn't represented in the individual texts? Our technologies are designed to send and receive strings of symbols correctly, but how do we know what they mean?
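
For the second prompt above, here is a minimal sketch of the abstraction chain (Python; the message text is an arbitrary example): perceptible symbols are mapped to conventional code points, then to a transmissible bit-level signal, then recomposed, with meaning present only in the conventions shared at each end.

    message = "meaning ≠ signal"

    # Perceptible symbols -> abstract code points (conventional, not natural):
    code_points = [ord(ch) for ch in message]

    # Code points -> transmissible bytes -> a bit-level "signal":
    bit_signal = "".join(f"{b:08b}" for b in message.encode("utf-8"))

    # Recomposition at the receiving end reverses each abstraction step:
    received = bytes(int(bit_signal[i:i+8], 2)
                     for i in range(0, len(bit_signal), 8)).decode("utf-8")

    assert received == message
    print(code_points[:4], bit_signal[:24], "...", received)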

Learning Objectives and Main Topics:

  • Learning the approaches to distributed and extended cognition as they apply to cognitive technologies, technical mediation, and symbolic artefacts.
  • Gaining a working knowledge of the main research questions in these fields of cognitive science and philosophy as a starting point for pursuing individual research or further inquiry.
  • Learning how to apply the hypotheses to analyzing and explaining our own uses of cognitive technologies, symbolic artefacts, and sign systems (writing, image technologies, music, film).

Backgrounds:

We've noted that the artefacts of our cognitive technologies -- the technical mediations for symbolic expression, communication, and information -- act as interfaces to the social, technical, and symbolic systems in which they are produced. Our symbolic technologies from writing and image making to computation and multimedia are both mediated by, and mediators of, collective agency and cognition organized in our social institutions. We delegate agency to the technologies and use them to extend and distribute cognitive abilities in material forms so that they function as collective, intersubjective resources.

The large body of accruing work in intersecting cognitive science research programs on extended, embodied, and distributed cognition continues to pay off in a new understanding of "non-individual-brain-bound" human cognition. This interdisciplinary research has a major focus on the central role of symbolic artefacts and cognitive technologies for extending core human cognitive capabilities. This "artefactual" cognitive extension functions among members of social/cultural communities within and across place and time: both within collective moments of lived experience and across longer historical time spans, through durable artefacts and technologies for memory.

There are several related research and theory paradigms which are distinguished and elaborated in different arguments in the various schools of thought:

  • the hypothesis of the "extended mind" (a more generalized term) in artefacts, language, symbolic representations (associated with Andy Clark's work)
  • the hypothesis of "distributed cognition" (a more enacted view of group participation in externalized cognitive work with devices, diagrams, equipment, communication and computational technologies) (associated with Edwin Hutchins' work and the UCSD cog sci research group)
  • the hypothesis of "embodied cognition" (which studies the way the body is a continuous part of cognitive functions and empirically cancels any mind/body dichotomy, uniting mind/brain in biological and neurological processes).

Approaches to extended mind and distributed cognition often use the concepts of "collective cognition," "cognitive off-loading" or "scaffolding" to describe the "ecology" (interconnected system) of cognitive actions and processes that become what we experience as our thinking and meaning-making when using artefacts, devices, media, computation, and any form of symbolic representation.

Working with Andy Clark's influential research questions and hypotheses
Clark's arguments for the "extended mind" hypothesis have been influential and have generated important interdisciplinary debate about better-informed ways to understand the brain, cognitive embodiment, and the larger dynamic interaction of individual brains/minds, social and collective cognition, cultural artefacts, language, and all communication media. Clark sees the function of language, symbolic systems, and technological artefacts as "cognitive scaffolding" that has shaped human cognitive functions from the earliest language and artefacts to current combined technologies (with no break or rupture caused by the industrialization of these cognitive artefacts in computational and digital devices). Macro questions: what is part of a continuum -- for example, recursive implementations of our symbolic faculty, accruing levels of complexity and abstraction from combinatorial processes -- and what can be defined as different in kind or in degree from earlier media or technical systems for extending the mind?

Major Terms and Concepts:

  • Extended Mind
  • Distributed cognition
  • Embodied cognition
  • Cognitive off-loading
  • Cognitive scaffolding
  • Distributed agency

Readings: Introductions to Distributed and Extended Cognition:

  • Andy Clark and David Chalmers. "The Extended Mind." Analysis 58, no. 1 (January 1, 1998): 7–19.
    [The first version of the argument for the hypothesis that set off a large research conversation and debate. Also reprinted in Clark's Supersizing the Mind: Embodiment, Action, and Cognitive Extension (below)]
  • Andy Clark, Supersizing the Mind: Embodiment, Action, and Cognitive Extension (New York, NY: Oxford University Press, USA, 2008).
    [Excerpts from the Foreword by David Chalmers, pp. ix-xi; xiv-xvi (attend especially to the comments in the last 3 pages of the Foreword); Introduction and Chapter 1.3: "Material Symbols" (especially the concept of "cognitive scaffolding").]
  • James Hollan, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research.” ACM Transactions on Computer-Human Interaction 7, no. 2 (June 2000): 174-196.
    [This is an important summary of research conclusions from leaders in cognitive science and its relations to technology and HCI. Hutchins's justly famous book, Cognition in the Wild (1996), provided empirical validation for understanding cognition as involving and requiring a larger system of human interactions outside and beyond individual minds/brains.]
  • Jiajie Zhang and Vimla L. Patel. “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006): 333-341.
  • Itiel E. Dror and Stevan Harnad. "Offloading Cognition Onto Cognitive Technology." In Cognition Distributed: How Cognitive Technology Extends Our Minds, edited by Itiel E. Dror and Stevan Harnad, 1-23. Amsterdam and Philadelphia: John Benjamins Publishing, 2008.

Supplementary: For Research Sources and Going Further

  • Riccardo Fusaroli, Nivedita Gangopadhyay, and Kristian Tylén. "The Dialogically Extended Mind: Language as Skilful Intersubjective Engagement." Cognitive Systems Research 29-30 (September 2014): 31–39.
    [Extended and collective cognition is fundamentally intersubjective, as our use of symbolic artefacts and the fundamentally dialogic basis of ordinary language use show.]
  • Hutchins, Edwin. "The Cultural Ecosystem of Human Cognition." Philosophical Psychology 27, no. 1 (February 2014): 34–49.
    [A valuable, up-to-date summation of the state of research and theory by a leader in cognitive science research.]

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • Working in a team of 2 or 3 (to be assigned), use a common representational practice (e.g., writing and drawing on a board, paper, and/or screen) or a technical artefact (a group of functions in a PC or mobile device, not the whole bundle) and analyze the way we distribute, extend, and off-load cognitive functions that, working together, produce what we experience as thinking, communicating, completing a cognitive task, and/or representing and expressing intersubjectively accessible meanings.

Learning Objectives and Main Topics:

This unit focuses on the key concepts in computation and core models for programming, software, computer and digital media. We will approach the questions from a non-specialist perspective, but it's important for everyone to get a conceptual grasp of the core ideas in computation because they are now pervasive throughout many sciences (including the cognitive sciences), and are behind everything we do daily with computational devices, information processing, and digital media (for example, the Google algorithms for searches, all the apps in mobile devices, the software functions for displaying and playing digital media). We will focus on foundational concepts that can be extended for understanding today's environment of "computation everywhere" and "an app for everything." 

  • Understanding the main conceptual foundations of computation, and how computation is related to the continuum of human symbolic cognition and generating meaning in levels of symbolic abstraction.
  • Understanding computation in relation to information theory, semiotics, and communication processes.
  • Learning the key concepts in "computational thinking" and the design principles in computation and software code.

Histories and Models for/of Computation before Modern Computers

We will recover the longer history of ideas about computation and its relation to symbolic thought and sign systems for representation.

Introductory Videos:

Readings

History of Ideas Leading to Computation

Computation and Computational Thinking:

  • Jeannette Wing, Computational Thinking (Video)
    [Introduction to a way to make computing accessible to non-CS students.]
  • Jeannette Wing, "Computational Thinking." Communications of the ACM 49, no. 3 (March 2006): 33–35.
    [Short essay on the topic; Wing has launched a wide discussion in CS circles and education.]
  • Denning and Martell, Great Principles of Computing, Chapters 3-6.
    • The book combines the work done in prior studies (which can help for background):
    • Denning, Peter J. "The Great Principles of Computing." American Scientist, October, 2010.
    • -----. "What Is Computation?" Ubiquity (ACM), August 26, 2010, and republished as "Opening Statement: What Is Computation?" The Computer Journal 55, no. 7 (July 1, 2012): 805-10.
  • Martin Irvine, An Introduction to Computational Concepts [Top-level intro to the Von Neumann architecture; a toy stored-program sketch follows below.]
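
As a companion to the Von Neumann overview above, here is a toy stored-program machine in Python (the opcodes, addresses, and values are all invented for illustration): instructions and data share one memory, and a fetch-decode-execute loop steps through them.

    # A toy Von Neumann cycle: one memory holds both instructions and data.
    memory = [
        ("LOAD", 100),    # copy contents of address 100 into the accumulator
        ("ADD", 101),     # add contents of address 101
        ("STORE", 102),   # write the accumulator back to address 102
        ("HALT", None),
    ] + [None] * 96 + [7, 35, 0]   # addresses 100-102 hold the data

    acc, pc, running = 0, 0, True
    while running:
        op, addr = memory[pc]      # fetch
        pc += 1                    # advance the program counter
        if op == "LOAD":           # decode and execute
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            running = False

    print(memory[102])   # 42 -- a result meaningful to us, not to the machine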

Main Reading: Computing Concepts in the Code of a Programming Language

  • David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. Oct. 2011 edition. CreateSpace Independent Publishing Platform; Creative Commons Open Access: http://computingbook.org/.

    Focus on this text as the core reading for this week. Read chapters 1-3 (Computing, Language, Programming); others as reference and as your time and interest allow at this point. You can always return to other chapters for reference and self-study.
    [This is a terrific book based on Evans' Intro Computer Science courses at the University of Virginia. The book is open access, and the website has updates and downloads.]

Introductions to Computational Semiotics and Symbolic Models of Computation

Going Further (Supplementary)

Main Assignment: Hands-On Tutorial on Code Academy (Instructions page)

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • Describe what you learned from working through the CodeAcademy tutorial and making connections to the computing principles introduced this week. Were any key concepts clearer? What questions can you describe now, after having a little more background?
  • Can you see how a programming language (and thus a software program or app) is designed to specify symbols that mean things (represent values and conceptual meaning) and symbols that do things (symbols that are interpreted in the computer system to perform actions and/or operations on other symbols)? Computation (or "running" software) is a way of defining transitions in information representations that return interpretable symbol sequences in chains of "states" that combine meanings and actions. (This is what the software layers running on your device right now are doing to render the interpretable text, graphics, images, and window formatting from the digital data sources combined in a Web "page," image or video file, and many other behind-the-scenes sources.) A small sketch after this list illustrates the distinction.
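
To make the distinction concrete, here is a minimal sketch (not how any real app is built; the token set and program are invented) of a tiny interpreter in which some symbols denote values, others trigger actions, and "running" is a chain of state transitions:

    program = ["3", "4", "+", "10", "*"]    # postfix notation for (3 + 4) * 10

    actions = {"+": lambda a, b: a + b,     # symbols that *do* things
               "*": lambda a, b: a * b}

    state = []                              # the machine state between transitions
    for token in program:
        if token in actions:
            b, a = state.pop(), state.pop()
            state.append(actions[token](a, b))
        else:                               # symbols that *mean* (denote) values
            state.append(int(token))
        print(token, "->", state)           # each step is one state transition

    print("result:", state[0])              # 70, returned as a readable symbol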

Learning Objectives and Main Topics:

  • Learning the background history for the models of computation that led to the development of interfaces for human interaction with programmable processes.
  • Understanding the important steps in the transition of computing from the post-War environment to the expansion of computation applied to knowledge and cognitive needs.
  • The technical and conceptual development of graphical interfaces in the "human computer interaction" (HCI) design discipline.
  • Learning the concepts behind the technical architectures in all our devices that support the interfaces.

Backgrounds
As Michael Mahoney points out (below), there are multiple histories of computing(s), involving different communities and historical contexts, none of them, considered independently, capable of producing deterministic or inevitable paths for the technologies that evolved into our current ubiquitous computing environment. A key transition point was the concurrent development of software/hardware interfaces and advances in converging technologies for screens, memory, and processors, in the context of new research and a vision for computation beyond machines (the combinatorial moments). Before the 1980s, no one imagined a major consumer or small-business market for computers. Getting there required the historical intersection of multiple forces and histories: technical developments, research, and new philosophical contexts for developing computing beyond industrial and military applications.

Models of computation in relation to human cognition took several paths in the 1950s-70s. The AI path sought "homologies" (patterns of like form or structure) between computation, human cognition, and neural organization. This led to the "computational theory of mind" that continues to motivate much cognitive science research and theory. The other important path is represented by Doug Engelbart and the emerging HCI research in labs at Stanford and then at Xerox's multidisciplinary think tank, Xerox PARC (Palo Alto Research Center) (1960s-70s). Engelbart, and the developing "cognitive design" community championed by Don Norman and others, understood computation to be a cognitive "augmentation" for enhancing and expanding human intellectual abilities and needs, not a model for mind/brain as such. These two main models continue in various forms today and direct all kinds of research and theory across many disciplines. It's important to gain a sense of the main assumptions and presuppositions of these schools of thought so that you can understand both applied design principles and the consequences of the assumptions and concepts in the arguments and debates you will encounter in all the related literature.

“Technology at present is covert philosophy; the point is to make it openly philosophical.”
--Phil Agre, 1997

I've intentionally bounded the readings this week to conclude with the developments of interface design and computing as “augmenting human intellect” up to the 1970s so that we can pause and consider the concepts and unfulfilled “histories” of computing before moving to the contemporary era of computational “Metamedia” next week. The story from the 1960s to the present configuration of computing systems, software, and markets is usually framed as an evolution of inevitable triumphs and factual “done deals” of market sector adoptions and productized successes. Computers, computation, and software are usually treated as empirical facts, rather than implementations of concepts and design choices from a social-technical history that is far from deterministic. Before continuing to contemporary interfaces and the digital mediations we all now take for granted, we can pull back a bit for reflexive questions about computation and human cognition in the bigger picture, considered apart from narratives of technological progress, innovator’s triumphs, and specific products.

We take “user interface” design as one of the pillars of computing today, but many user-experience paths have been left undeveloped. Why aren't programs designed to allow “users” (“end-users” in the software industry term) to re-program software (in its various modules or layers) for a customized “user experience” (as Licklider and Nelson proposed)? Software is marketed, objectified, branded, and IP-protected as a fixed product, not as an implementation of concepts, computational models, and knowledge. The background history this week shows that there were turning points in the late 1960s-early 1980s when users could have become quite other kinds of “agents” in relation to computation and software, and that the creation of the consumer “user” in the history from PCs to mobile devices was only one path, but a path that succeeded through the convergence of conditions for operational, instrumentalized, and black-boxed marketable products. We will see how these issues play out next week in the conclusion to the “interface” study on Alan Kay's Dynabook Metamedium concept, which got hijacked as a product (the Apple Mac) before the user-as-interactive-learner-and-co-producer-of-content could be implemented.


Readings

Computing and Information Access as Knowledge-Making Technologies: Vannevar Bush

  • Vannevar Bush, "As We May Think," Atlantic, July, 1945. (Also etext version in pdf.)
    • A seminal essay on managing information and human thought by a leading engineer and technologist during and after World War II. Bush's pre-modern-computing extrapolations led to the concepts of GUIs (graphical user interfaces for computers), hypertext, and linked documents. His conceptual model, though not yet implementable with the computers of the 1940s-50s, inspired Doug Engelbart and the computer designs that followed in the 1970s-80s and on to our own hypermediated era.

Computing and Transitions to Cognitive Artefacts ("Man-Computer Interactions"): 1960s

  • J. C. R. Licklider, "Man-Computer Symbiosis" (1960) | "The Computer as Communication Device" (1968)
    • Review these texts for the main points. Licklider (Wikipedia background) was a leader of many post-War programs, including DARPA and the launch of ARPANet, which became the Internet. In many ways, his initiatives and the teams of engineers he got funded are the bridge between war-time and Cold War computing and computers as we know them now. Note the attempt to work with the "symbiosis" metaphor as a way to humanize computing, and the proposals for interaction concepts that could not be technically implemented at the time. He was working toward a model of computing as an interactive cognitive artefact.
    • Notable quote: "Certainly, for effective man-computer interaction, it will be necessary for the man and the computer to draw graphs and pictures and to write notes and equations to each other on the same display surface. The man should be able to present a function to the computer, in a rough but rapid fashion, by drawing a graph. The computer should read the man's writing, perhaps on the condition that it be in clear block capitals, and it should immediately post, at the location of each hand-drawn symbol, the corresponding character as interpreted and put into precise type-face. With such an input-output device, the operator would quickly learn to write or print in a manner legible to the machine." ("Man-Computer Symbiosis")
  • Ivan Sutherland, "Sketchpad: A Man-Machine Graphical Communication System" (1963)
    • Review for main concepts. Expanding on techniques for screen interfaces for military radar, Sutherland was way ahead of his time, and it took many years for the whole combinatorial array of technologies to catch up for implementing the concepts. The Sketchpad concepts inspired Engelbart, Alan Kay (Dynabook), and the development of all screen interactive technologies that we know today.

Douglas Engelbart and "Augmenting Human Intellect" (1962-70s)

Theodor H. Nelson, Interaction Via User-Managed Linked File Systems and Hypertext (1965-1980s)

Reference Library (shared folder): major texts on computation and computer history (GU students only)

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • What were the key concepts and technical implementations that enabled computation to be re-conceived in relation to more universal human cognitive uses or needs?
  • What were the developing concepts of "interfaces" and "interactions" (even before technical implementations were possible), and how many of them have been realized or remain unimplemented? What other paths in the concepts and possibilities for computing have not (yet) been implemented commercially in the software and device interfaces we now take for granted?

Learning Objective and Main Topics:

Gaining a foundation in the influential theories, key terms, concepts, and interdisciplinary approaches for the study of media and technical mediation in computer interfaces that have shaped research and popular conceptualization from the 1980s to the present.

The cluster of terms around media, mediation, and interface is used in so many ways that we need to unpack the history of the concepts and find useful ways of using the terms in descriptions and analysis. What is "new" about new media, software-controlled media, and network mediation, and what aspects can we understand as a continuum in re-combinatorial functions and processes? How can we best understand, describe, and analyze the concepts and implementations for interfaces and multimodal forms? How do interfaces go "meta" in combinatoriality and presentation frameworks (computer devices and digital display screens as a metamedium, a medium for other media)? How do media technologies mediate social agency, and how do they become major nodes in distributed agency and cognition?

Key terms and concepts:

  • Medium/media as social-technical implementations of communication and meaning functions maintained by roles in a larger cultural, economic, and political system.
  • Mediation both as a function of a medium and of institutions of transmission (e.g., text/print, image technologies, media industries; and social/cultural/economic institutions like schools, governments, policy organizations, museums, libraries, industry groups).
  • Interface as a metasymbolic physical-material contact point for technical mediation with users (social-cognitive agents) of media.
  • Technical mediation (in Latour's and Debray's terms) as the means of distributing agency and cultural functions in a social-technical network.
  • Media System as the interdependent social configuration of technologies and institutions; no "medium" is an independent structure outside a system of relations to other media and the social institutions validating its use and power.
  • Communication vs. Transmission (in Debray's terms): differentiating media technologies and their functions for near-synchronous communication within a cultural group (e.g., telephone, email, TV, radio) vs. technologies for transmitting meaning and cultural identity over long time spans (e.g., written artefacts, books, recorded media, computer memory storage, museums, archives).
  • Digital media convergence: the technical and social/economic conditions for an integrated digital media platform implementable across computational and communication devices (PCs to smart phones and streaming Internet content, Internet-enabled TVs, etc.)

Readings:

Computation and Metamedia: Consequences of Symbolic Interfaces and Technical Mediations

  • Lev Manovich, Software Takes Command, pp. 55-239; and Conclusion.
    Follow Manovich's central arguments about "metamedium", "hybrid media", and "interfaces" and the importance of Alan Kay's "Dynabook" Metamedium concept.
    • For Manovich, the main differentiating feature of "new media" is software: media produced, controlled, and displayed with software. Digital media = software mediatized media.
    • Recall Manovich's earlier statement of the defining features of "new media" in The Language of New Media: "What Is New Media?" (excerpt). Cambridge: MIT Press, 2001.
  • Alan Kay and the Dynabook Concept as a Metamedium:
    • Video Documentary: Alan Kay on the history of graphical interfaces: Youtube | Internet Archive
    • Alan Kay's original paper on the Dynabook concept: "A Personal Computer for Children of All Ages" (Palo Alto: Xerox PARC, 1972).
      [Wonderful historical document. This is 1972 -- years before any PC, Mac, or tablet device.]
    • Alan Kay and Adele Goldberg, “Personal Dynamic Media” (1977), excerpt from The New Media Reader, ed. Noah Wardrip-Fruin and Nick Montfort. Originally published in Computer 10(3):31–41, March 1977. (Cambridge, MA: The MIT Press, 2003), 393–404.
      [Revised description of the concept for publication.]
    • Interview with Kay in Time Magazine (April, 2013). Interesting background on the conceptual history of the GUI, computer interfaces for "interaction," and today's computing devices.
  • Jay David Bolter and Richard Grusin, Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000. Excerpts in 2 files: Introduction | Chapter 1.
    [Attend to their argument about the "double logic of re-mediation" and how it connects to other assumptions and approaches we are studying.]
  • Bill Moggridge, Designing Interactions. Cambridge, MA: The MIT Press, 2007. Excerpts:

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • Using the concepts and methods from the readings (and any connections throughout the seminar), describe and explain the mediating/mediated functions combined on the digital metamedia platforms of our everyday cognitive technologies (use one or two cases from PCs, iPhones, tablets, mobile devices). How do digital metamedia platforms frame our experience of symbolic representations (mostly on a 2D display substrate)? (Remember to deproductize and consider any specific device as an instance of larger unifying designs and architectures.)
  • Related question (following on from last week): what useful concepts from Alan Kay's further development of the graphical interface and designs for access to software have yet to be implemented?

Learning Objectives and Main Topics:

Merging the learning throughout the seminar weeks on sign systems, distributed cognition and agency, computation, and mediation in symbolic-cognitive processes. Semiotics can be expanded to describe how computation, software, and programming work as symbolic-cognitive artefacts, and thus help reveal that computation is not external to meaning (as described in an information theory model) but is a set of methods for abstraction and symbolic reflexivity that are co-constitutive in the creation of meaning (in material/conceptual forms) parallel to other sign processes but at automated orders of symbolic abstraction.

Backgrounds:

The definition [of computation] proposed here refocuses from computers to information representations. It holds that representations are more fundamental than computers... It relinquishes the early idea that “computer science is the study of phenomena surrounding computers” and returns to “computer science is the study of information processes”. Computers are a means to implement some information processes. But not all information processes are implemented by computers -- e.g., DNA translation, quantum information, optimal methods for continuous systems. Getting computers out of the central focus may seem hard but is natural. Dijkstra once said: “Computer science is no more about computers than astronomy is about telescopes.”
(Peter Denning, “What is Computation,” Ubiquity ACM Symposium, 2010. See website for symposium papers)
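
Denning's own example can be made concrete: DNA translation is an information process whether or not a computer performs it. Here is a minimal sketch (using a four-entry excerpt from the standard genetic code; real translation involves far more) that treats it as pure symbol manipulation:

    # Codon -> amino acid rules, excerpted from the standard genetic code:
    CODON_TABLE = {"ATG": "Met", "TTT": "Phe", "GGC": "Gly", "TAA": "STOP"}

    def translate(dna):
        """Read a DNA strand codon by codon, mapping symbols to symbols."""
        protein = []
        for i in range(0, len(dna) - 2, 3):
            residue = CODON_TABLE.get(dna[i:i+3], "?")
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    print(translate("ATGTTTGGCTAA"))   # ['Met', 'Phe', 'Gly']

The process is the same whether carried out by a ribosome, a person with a lookup table, or this code: an information process implemented in different substrates.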

There are multiple directions in research, theory, and philosophy investigating ways to better understand computation and its relation to human symbolic cognition and the principles for meaning and abstraction in symbol systems. This week will introduce a survey of conversations in progress that have the potential to integrate (unify?) the work of many sciences and disciplines that are usually assumed to be unrelated or even mutually contradictory in assumptions, methods, values, knowledge domains, and intellectual commitments. For the first time in modern intellectual history, we are witnessing a broad conversation about computation, symbolic cognition, meaning systems, and the "science" in computer science. Interdisciplinary conversations about:

  • the foundations of computation as symbolic transformation processes based on the structure of signs and symbols that allow abstraction, reflexivity, and instantiations in and out of material states (electronic states, memory locations, physical representations in displays and output devices, network transmissions),
  • the long-standing research questions around cognition and cognitive artefacts, the nature of language and its relation to other symbol systems and function, models of semiosis that explain meaning generation and symbolic representations in and across symbol systems, and the unifying function of symbols in cognition that enable abstraction, memory, and meta or reflexive functions,
  • new frameworks in humanities fields for coming to terms with this new interdisciplinary knowledge and for ways of explaining symbolic activity and cultural expression in all cultural genres.

Thinking through and with computation theory thus helps reveal complexities inaccessible to other theories. We need to work through, as fully as we can, the heuristic potential of computation theory models, concepts, and methods to see what opens up.

Overview of central topics and questions to consider:

  • Symbolic foundations of abstraction, reflexivity, and meta processes. Computation, programming, software, digitization, and digital interfaces employ the reflexive, abstractive, meta structures that appear to be "built-in" (constitutive) structural features of sign systems and their functions in symbolic cognition. (A minimal code sketch follows this list.)
  • Kinds of symbol systems and symbol processes. While computation (software) implemented in a computer (actual physical hardware) is widely accepted as symbolic or (literally) as a symbol system, we have no consensus on how to clearly describe what kind of symbol process this is. All parties acknowledge that this is a different use of terms from the generative meaning process models in Peirce (semiosis) or in cognitive and generative linguistics. We know computation works because it is instrumentalized all around us. How and why it works as a symbolic system is as difficult to describe and explain as natural language.
  • Information systems and/vs. meaning systems. Are computation, software, and all the interfaces for digital media a set of methods for handling signals and signifiers for meaning systems and human intentions established outside computation?
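
As promised in the first bullet, here is a minimal sketch of symbolic reflexivity in code (Python; the expressions are arbitrary examples): the same string of symbols is first *about* a procedure and then, after an interpretive shift, *is* the procedure -- the quotation/use distinction made executable.

    source = "lambda n: n * n"     # symbols that *represent* a procedure (mention)
    print(type(source))            # <class 'str'>: inert, manipulable symbols

    square = eval(source)          # interpretation shifts the symbols' level
    print(type(square))            # <class 'function'>: symbols that now *act*
    print(square(12))              # 144

    # One more reflexive turn: a program writing the program it then runs.
    generated = "({})({})".format(source, square(3))   # "(lambda n: n * n)(9)"
    print(eval(generated))                             # 81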

Overheard in the CS Dept. coffee room:

  • "Computation works in practice, but not in theory."
  • "I'd rather write programs that can help me write programs than write programs."

Seminar Discussion:

We will use this week's class meeting like a lab to go over ways to combine the concepts and methods we have been studying with a session on semiotics and/of computation and digital media. We will use models and concepts in computation for possible syntheses of theory that can be usefully applied for understanding our symbol systems and symbolic cognition in any medium.

I don't expect thorough reading of the following texts. The point is getting a sense of the main questions, uses of terminology, and how or whether expansions of semiotic theory from Peirce on help get at the central questions of symbolic processes, computation, and ongoing re-mediation across technical forms.

Clarifying terms:

Sign, symbol, representation, symbol process: terms not used in the same ways across disciplines and contexts of argument. Can we work out specific definitions for useful descriptions and arguments?

Readings: A Survey of Research from Several Disciplines

  • Denning, Great Principles of Computing, Chapters 5-7 (for foundational knowledge).

Models of symbol systems in computation:

  • Herbert A. Simon, The Sciences of the Artificial. Cambridge, MA: MIT Press, 1996. Excerpt (11 pp.).
    Section on Computers as Artefacts and Symbol Systems in the "Newell-Simon" theory.
    [This concept of "physical symbol systems" (by a founder of a prominent school of thought in AI) is different from Peirce and cognitive linguistics (explain why), but important to know about for comparing models or merging models and descriptions.]
  • Nils J. Nilsson, "The Physical Symbol System Hypothesis: Status and Prospects." In 50 Years of Artificial Intelligence, edited by Max Lungarella, Fumiya Iida, Josh Bongard, and Rolf Pfeifer, 9–17. (Lecture Notes in Computer Science, 4850). Berlin & Heidelberg: Springer, 2007. (8 pp.).
    [A good summary of the hypothesis with useful critique and extension of its explanatory potential.]
  • John S. Conery, “Computation Is Symbol Manipulation.” The Computer Journal 55, no. 7 (July 1, 2012): 814–16. [2 pp.]
    [Final version of a paper presented at the ACM Ubiquity Symposium on Computation: Ubiquity 2010, (November 2010).]

Semiotics of/and Computation & Computational Semiotics:

  • Mihai Nadin, "Information and Semiotic Processes: The Semiotics of Computation." Cybernetics & Human Knowing 18, no. 1–2 (January 1, 2011): 153–75.
    [This is a review article of recent work in semiotics and computation. Nadin rambles, but focus on his main descriptions from "Semiotic Engines" on, pp. 16-22.]
  • ------. "Computation, Information, Meaning, Anticipation and Games." International Journal of Applied Research on Information Technology and Computing 2, no. 1 (2011): 1-27.
    [Focus on pp. 1-20; some valuable insights, but often rambling explanations.]
  • A. Gomes, R. Gudwin, and J. Queiroz. "Meaningful Agents: A Semiotic Approach." In International Conference on Integration of Knowledge Intensive Multi-Agent Systems, 2005, 399-404.
  • Antônio Gomes, Ricardo Gudwin, Charbel Niño El-Hani, and João Queiroz. "Towards the Emergence of Meaning Processes in Computers from Peircean Semiotics." Mind & Society 6, no. 2 (November 1, 2007): 173-87.
    [These papers are more difficult in using mathematical and set theoretical models for semiosis. Do your best to get the main conclusions.]
  • Kumiko Tanaka-Ishii, Semiotics of Programming. New York: Cambridge University Press, 2010. Chapter 1.
    [Intro to a semiotic analysis of programming and computation focusing on the self-reflexivity of signs and manipulation of levels of abstraction through different types of symbols mapped to both abstract categories and electronic states.]
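
One of Tanaka-Ishii's themes can be shown in a few lines (the byte values are chosen arbitrarily): the same "electronic state" -- one fixed bit pattern -- becomes different symbols depending on the abstract category we read it through.

    import struct

    raw = bytes([0x42, 0x4D, 0x43, 0x53])   # one fixed 4-byte pattern

    print(raw.decode("ascii"))               # 'BMCS' -- read as characters
    print(struct.unpack(">I", raw)[0])       # read as a 32-bit unsigned integer
    print(struct.unpack(">f", raw)[0])       # ~51.3 -- read as a 32-bit float
    print(raw.hex())                         # '424d4353' -- read as hex numerals

    # Nothing in the bits selects a reading; the symbol system we bring does.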

(Re)Merging with Models of Computation

  • Matthias Scheutz, ed. Computationalism: New Directions. Cambridge, MA: MIT Press, 2002.
    This is an excellent collection of essays by major thinkers in computer science. Many take up the issue of computation as a symbolic process, and other key questions about computation and meaning.
    Read around to get a sense of the issues covered, but especially the essays by: Brian Cantwell Smith, Philip Agre, and John Haugeland. A good launchpad for further research.
  • Brian Cantwell Smith, "Age of Significance: Book Project Overview." 2010. More easily read in pdf.
    Author's site: http://www.ageofsignificance.org/aos/en/toc.html. Background: Info page at Univ. of Toronto.
    • This is a synoptic argument by a computer scientist who has been inside the debates about computation for decades. Follow as best as you can. While not specifically positioned as a semiotics approach, Smith takes on the larger re-description of computers and computations as intentional cognitive artefacts in a very perceptive way, and sets up a framework that can be used for semiotic re-description.
    • "[C]omputers prove ... not to be necessarily digital, not to be necessarily abstract, not to be necessarily formal--not necessarily to exemplify any characteristic property making them a distinct species of the genus "meaningful material system." ... [C]omputers are intentionally significant physical artifacts--the best we know how to build. Period. There is nothing more to say."
    • [There is nothing more to say -- and we are saying it?!]

Research and Reference

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:

  • How does re-thinking computation and digital media as semiotic processes (meaning-making processes with signs and symbols, implementations of symbolic processes understood across sign systems) help open up better ways to understand both computation and the history of human sign systems? Choose a specific implementation (a program, interface, digital media artefact) to describe and explain with any of the models.
  • Do the approaches surveyed this week (combined with earlier readings) give us conceptual tools to re-position computation and digital media as human symbolic processes and cognitive artefacts to be owned and understood rather than feared or rejected as mechanical, anti-human, posthuman, or dehumanizing?

Learning Objective and Main Topics:

Learning how to apply and extend the concepts and methods of the seminar to examples of technology, and use them for better designs in interfaces and combinations.

Semiotic Concepts and Methods of Analysis to Work With: Course conclusion (Prof. Irvine)

Case studies for working through our approaches:

  • Semiotics of Metamedia in HCI GUI Windows-Based User Interfaces:
    • Consider the "windows" for "content" (symbolic types) and interactive spaces in the grid (icons, hyperlinks, "navigational" constructs) as ways of using a material-perceptible substrate (surface). How can we usefully describe the symbolic combinations and (re)presentations of text, images, graphics, photo and video, audio? How are the stacks of parallel architectures in symbol systems presented and integrated? How many semiosic layers or levels are we using (sign functions going up, down, and across conceptual levels and different sign systems for representing meanings)?

Readings:

Recent Research and Theory on Semiotics, Digital Media, and Computation

(in Google Drive [Semiotics & Digital Media] and [Semiotics: Major Studies] - GU Students only)

  • Recent Additions:
  • van den Boomen, Marianne. "Interfacing by Iconic Metaphors." Configurations 16, no. 1 (2008): 33-55.
  • -------. Transcoding the Digital: How Metaphors Matter in New Media. Amsterdam: Amsterdam University Press, 2015.
  • van den Boomen, Marianne, et al., eds. Digital Material: Tracing New Media in Everyday Life and Technology. Amsterdam: Amsterdam University Press, 2011.

For discussion (link to Wordpress site)
Choose one or two topics as ways to organize your thoughts and questions about the readings for this week:


In class: Group case study discussion and final project presentations.

Final Project Instructions

  • This week will be devoted to a round table discussion of your final projects. Prepare notes and main bibliography developed so far.