Virtual Peace (http://virtualpeace.org) is alive as of last evening.

For the last gosh-don’t-recall-how-many-months I’ve been working as a Project Collaborator for a project envisioned by the other half (more than half) of the Jenkins Chair here at Duke, Tim Lenoir.  For those of you who don’t know Tim, he’s been a leading historian of science for decades now, helping found the History and Philosophy of Science program at Stanford.  Tim is notable in part for changing areas of expertise multiple times over his career, and most recently he’s shifted into new media studies.  This is the shift that brought him here to Duke, and I can’t say enough what an incredible opportunity it is to work for him.  We seem to serve a pivotal function for Duke as people who bring innovation together with interdisciplinarity.

What does that mean? Well, like the things we study, there’s no easy, simple narrative to cover it.  But I can speak through examples.  And the Virtual Peace project is one such example.

Tim, in his latest intellectual foray, has developed an uncanny and unparalleled understanding of the role of simulation in society.  He has studied the path, no, the wide swath, of simulation in the history of personal computing, and he has developed a course teaching contemporary video game criticism in relation to the historical context of simulation development.

It’s not enough to just attempt to study these things in some antiquated objective sense, however.  You’ve got to get your hands on these things, do these things, make these things, get some context. And the Virtual Peace project is exactly that. A way for us to understand and a way for us to actually do something, something really fantastic.

The Virtual Peace project is an initiative funded by the MacArthur Foundation and HASTAC through their DML grant program. Tim’s vision was to appropriate the first-person shooter (FPS) interface for immersive collaborative learning.  In particular, Virtual Peace simulates an environment in which multiple agencies coordinate and negotiate relief efforts in the aftermath of Hurricane Mitch in Honduras and Nicaragua.  The simulation, built on the Unreal game engine in collaboration with Virtual Heroes, allows 16 people to play different roles as representatives of various agencies, all trying to maximize the collective outcome of the relief effort.  It’s sort of like Second Life crossed with America’s Army, everyone armed not with guns but with private agendas and a common goal of humanitarian relief. The simulation is designed to take about an hour, perfect for classroom use. And with its review components, instructors have detailed means for evaluating the effort and performance of each player.

I can’t say enough how cool this thing is.  Each player has a set of gestures he or she may deploy in front of another player.  The simulation features some new gaming innovations, including proximity-based sound attenuation and full-screen, full-session, multi-POV video capture.  And the instructor can choose from a palette of “curveballs” to keep the simulation run interesting.  Those changes to the scenario are communicated to each player through a PDA his or her avatar carries. I was pushing for a heads-up display, but that’s not quite realistic yet, I guess. 😉
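
Proximity-based attenuation is conceptually simple: a voice gets quieter with distance and drops out entirely beyond earshot. Here is a minimal Python sketch of one common scheme (linear rolloff between a near and far radius); the radii and function name are placeholders of mine, not the values or code Virtual Peace or Unreal actually uses.

```python
def attenuated_gain(base_gain: float, distance: float,
                    near: float = 1.0, far: float = 20.0) -> float:
    """Linear distance rolloff: full volume inside `near`, silent beyond `far`.

    `near` and `far` are illustrative radii, not engine values.
    """
    if distance <= near:
        return base_gain
    if distance >= far:
        return 0.0
    return base_gain * (far - distance) / (far - near)

# e.g., a speaker 10 units away at unit gain:
print(attenuated_gain(1.0, 10.0))  # ~0.53
```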

The project pairs the simulation with a course-oriented website.  While a significant amount of web content is visible to the public, most of the site is intended as a sort of simulation-preparation and role-assignment course site.  We custom-built a simple, lightweight, user-friendly authentication and authorization package that allows instructors to assign each student a role in the simulation, track the assignments, distribute hidden documents to people with specific roles, and let everyone see everything, including an after-action review, once the simulation run is over.
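
In spirit, the access model is just role-based document visibility that relaxes after the run. Here is a minimal Python sketch of that idea; the role names, document names, and structure are hypothetical, not the actual Virtual Peace code.

```python
# Hypothetical role-based visibility check: role-restricted documents
# stay hidden until the simulation run ends.

roles = {"alice": "relief-agency delegate", "bob": "host-government delegate"}

document_acl = {
    "delegate_briefing.pdf": {"relief-agency delegate"},  # hidden, role-specific
    "scenario_overview.pdf": None,                        # None means public
}

def can_view(user: str, doc: str, run_finished: bool = False) -> bool:
    allowed = document_acl.get(doc)
    if run_finished or allowed is None:  # everything opens up for the after-action review
        return True
    return roles.get(user) in allowed

print(can_view("bob", "delegate_briefing.pdf"))                     # False
print(can_view("bob", "delegate_briefing.pdf", run_finished=True))  # True
```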

Last evening, Wednesday, October 8, 2008, the Virtual Peace game simulation enjoyed its first live classroom run at the new Link facility in Perkins Library at Duke University.  A class of Rotary Fellows affiliated with the Duke-UNC Rotary Center was the first to play, and there was much excitement in the air.

Next up:

I never miss a beat here, it seems, for I am already on to my next project, something that has been my main project since starting here: reading research and patent corpora mediated through text-mining methods.  Yes, that’s right: in an age where we struggle to get people to read at all (imagine what it’s like to be a poet in 2008), we’re moving forward with a new form of reading: reading everything at once, reading across the dimensions of text. I bet you’re wondering what I mean.  Well, I just can’t tell you what I mean, at least, not yet.

At the end of October I’ll be presenting with Tim in Berlin at the “Writing Genomics: Historiographical Challenges for New Historical Developments” workshop at the Max Planck Institute for the History of Science. We’ll be presenting some results related to our work with the Center for Nanotechnology in Society at UCSB.  Basically we’ll be showing some of our methods for analyzing large document collections (scientific research literature, patents) as applied to the areas of bio/geno/nano/pharma in both China and the US. We’ll demonstrate two main areas of interest: the semiotic maps of idea flows over time that I’ve developed working with Tim and Vincent Dorie, and the spike in the Chinese nano scientific literature at the intersection of bio/geno/nano/pharma.  This will be perfect for a historiography workshop. The stated purpose of the workshop:

Although a growing corpus of case-studies focusing on different aspects of genomics is now available, the historical narratives continue to be dominated by the “actors” perspective or, in studies of science policy and socio-economical analysis, by stories lacking the fine-grained empirical content demanded by contemporary standards in the history of science.[…] Today, we are at the point in which having comprehensive narratives of the origin and development of this field would be not only possible, but very useful. For scholars in the humanities, this situation is simultaneously a source of difficulties and an opportunity to find new ways of approaching, in an empirically sound manner, the complexities and subtleties of this field.

I can’t express enough how excited I am about this. The end of easy narratives and the opportunity for intradisciplinary work (nod to Oury and Guattari) is just fantastic.  So, to be working on two innovations, platforms of innovation really, in just one week.  I told you my job here was pretty cool. Busy, hectic, breakneck, but also creative and multimodal.

Wole Soboyejo, Director of the U.S./Africa Materials Institute and Director of the Undergraduate Research Program at the Princeton Institute for the Science and Technology of Materials, discussed new frontiers in nanotechnology. Soboyejo’s work is motivated by the desire to create an integrated framework of global researchers, meaning that he’s working to involve a geographically diverse set of researchers. Soboyejo started with Feynman’s “Plenty of Room at the Bottom” talk and moved into an introduction to his own team’s work.

One fascinating project is the effort to create a magic nanobullet targeting cancer cells. Specifically, certain breast cancers have four times the normal amount of the product of the luteinizing hormone-releasing hormone (LHRH) gene, and that overproduction has a correlated magnetic signal that can be met with charged nanoparticles. There needs to be a way of imaging the presence of LHRH-SPION complexes in vivo and in situ. This solution is fascinating because the current, well-diffused MRI infrastructure can be used for such visualization. Specifically, nanoparticle ingress can be visualized so that sub-millimeter imaging of tumors can be done. A patient desiring evaluation may go into a clinical setting, receive a nanoparticle injection, and undergo a simple MRI evaluation to see whether cancer cells are present.

This biophysical approach to cancer-specific targeting seems to hold a great deal of promise.

Another fascinating project of Soboyejo’s is to modify hyperthermia-induced drug-release techniques using MEMS technology. Yet another is to use nano-based gene therapy, specifically for HIV.

Soboyejo also described an inexpensive nano-based water filtration system as well as the nanostructure of bamboo. One of his students, an Olympic cyclist named Nick Frey, built a competition cycling frame that he has used to win races. Bamboo cars, bamboo airplanes, here we come.

Enough ideas under development? Wait, there’s one more. How about making organic solar cells on flexible devices utilizing plastic waste? Superfine OLED screens?

Amazing. Simply amazing.

How can digital mapping technology be used in education and research? Princeton Professor of History Emmanuel Kreike and his colleagues showed different ways in which mapping augments education and research, presenting class and research projects that leveraged GIS or utilized Google Maps combined with historic-map and student-annotation overlays containing historical information, travel data, etc. An older presentation on Kreike’s work using Google Maps, Earth, and GIS can be found here.

It’s the dawn of the posthuman century, and so perhaps the irony of phrases such as “virtual help” and “simulated peace” contains the echoes of nostalgia redolent in an ever-accelerating technological era. I’m excited to attend a presentation on humanitarian aid & development sims by Ryan Kelsey and colleagues from Columbia @ the CNMTL.

I’m interested in this primarily because of two dimensions in which I work: teaching How They Got Game at Duke, and participating as a project collaborator for the Virtual Peace project. Both gaming and pedagogy are in some ways new subjects for me, new in the sense of analyzing and building both games and courses (and courses about games and about building games, as in the case of How They Got Game).

The talk focuses on two sim projects from the CNMTL, one begun last year and another first started back in 2001. Tucker Harding of Columbia spoke about ReliefSim, a health-related, turn-based learning simulation used in the classroom to help students develop a deeper understanding of working under the conditions of a humanitarian crisis. ReliefSim’s development began in 2001.

The crisis in ReliefSim is a forced migration. Students enter ReliefSim by first viewing a text-heavy HTML interface with a long series of interactive selections. The initial interface reflects the overall idea that the sim is not really training so much as educational augmentation. Display categories include assessments, interventions, information gauges, team, and age breakdown. While this display does not give the player a picture of the greater context of the crisis (e.g., that it was caused by warring factions along national borders), it immediately gives a sense of the features and the depth of impact.

With the panel the player chooses actions and assigns those actions to members of the team. In turn one, for example, we assign a water supply assessment to Eric, a food supply assessment to Marilyn, and a population assessment to Ryan. When we click “end turn,” the interface gives us back data generated by the assessments. Good information for our crisis: 10,000 people involved, 1,600 under 5, 3,000 between 5 and 14, and 5,400 aged 15 and up (no assessment for the elderly and/or infirm at this point). We also see we have a 15,000 kcal food supply, where each individual needs a minimum of 600 calories. We also have 100,000 liters of water, with a 5-liter average per-person water demand. Our food supply seems good, as we can feed everyone an average of 1,500 calories per day, and we have 10 liters per day per person. However, will our population grow? We can support up to 20,000 people on our minimum water supply and 25,000 based on food.
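
The capacity figures fall out of simple division. A quick sketch of the arithmetic, assuming (as the sim’s own 25,000-person food capacity implies) that the displayed “15,000 kcal” figure is really expressed in thousands of kcal:

```python
def supported_population(total_supply, per_person_daily_min):
    """How many people a supply covers at the stated per-person daily minimum."""
    return total_supply // per_person_daily_min

# Water: 100,000 liters at a 5-liter daily minimum per person.
print(supported_population(100_000, 5))        # 20000, matching the sim
print(100_000 / 10_000)                        # 10.0 liters per person today

# Food: the 25,000-person capacity at a 600 kcal minimum implies roughly
# 15,000,000 kcal on hand (my assumption about the displayed units).
print(supported_population(15_000_000, 600))   # 25000
print(15_000_000 / 10_000)                     # 1500.0 kcal per person per day
```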

The second game, presented by Rob Garfield of Columbia, is the Millennium Village Simulation, developed by Jeffrey Sachs. The game’s conceit is that you, the player, are a sub-Saharan farmer trying to support your family as you move from subsistence farming to generating income. The Millennium Village Simulation reflects Sachs’s full-spectrum approach to treating poverty. You can’t just build schools, for example, if your village suffers from occasional malaria epidemics that wipe out entire groups of children.

The sim interface for the turn-based game is similar to, albeit sexier than, the interface for ReliefSim. Not limited to tabular/textual representation and selection, the player is shown a simple visual representation of the farmer in the context of a village, and the village in the context of greater environmental factors.

Each turn, the player allocates the farmer’s time (including his wife’s) across a set of development tasks, such as collecting water, farming, or organizing a small business. If we choose to assign hours to farming, we are given choices as to whether we want to perform subsistence farming (grow maize) or income-generating farming (grow cotton). At this point we don’t have any idea how much effort translates into a result. We selected four hours of water collection, but we have no idea how many hours are needed to meet basic needs.

As we took a turn I noticed that the daily allocation was being set in the interface for an entire season; each turn is a season. (Which season?) The game takes a general approach to location (sub-Saharan Africa is widely varied in terms of seasonal conditions, for example) and a rationalist, optimization-oriented approach to helping a student learn to support a farmer in such a location.
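
The turn structure, as far as I can tell, amounts to setting a daily schedule that then repeats across the season. A rough sketch of that bookkeeping; the task names, hour budget, and season length are my own placeholders, not the sim’s actual parameters:

```python
DAILY_HOURS = 12    # assumed working hours per day for the household
SEASON_DAYS = 90    # assumed length of one turn (one season)

# Hours per day, set once at the start of the turn and held all season.
allocation = {
    "collect water": 4,
    "grow maize": 5,      # subsistence farming
    "grow cotton": 2,     # income-generating farming
    "small business": 1,
}

assert sum(allocation.values()) <= DAILY_HOURS

# Effort actually applied over the season, per task.
season_hours = {task: hours * SEASON_DAYS for task, hours in allocation.items()}
print(season_hours["collect water"])  # 360 hours of water collection this turn
```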

As with the previous presentation, the presenters displayed stunning tools built with a general knowledge/time-management orientation. The social-network/LMS presenters evaluated server logs within a site to understand the character of students; this game tests a student’s ability to delegate time in order to reach optimally managed conditions for the economic development of a farmer. Both suffer somewhat from lacking the specificity that can only be gained from the detail of greater context. It’s not clear why cases are not more strongly relied upon, even as frameworks for developing evolving and dynamic game scenarios.

I am attending the New Media Consortium (NMC) Conference at Princeton University today and tomorrow (Thursday, June 12 through Friday, June 13) and am giving a talk tomorrow with three of my favorite colleagues at Duke. Currently I am sitting in a morning session hosted by Susan Barnes and Stephen Jacobs of RIT discussing LMSs and social media. A special focus was placed on the formation, identification, and communication of student identity and its role in educational media, specifically LMSs.

Stephen and Susan posed an interesting question to the audience as to how students construct and communicate their identities to others. The answers tended to focus within individual contexts, ignoring the holistic nature of computing contexts and what a student’s presence or absence from those contexts communicates to others. If we have backend access to Facebook or an LMS we can certainly build a model of what a student is like, at least to some extent.

What is absent or tacit, and yet what may be most telling about students, their identities, how they construct them, and how they communicate them, is the presence or absence of students from multiple web contexts. What sites do they use? What don’t they use? How much do they even use the web? How long were they members (or active members) at sites? What years? Were they latecomers to MySpace? Are they students not on Facebook?

I think most web users at least have tacit knowledge of this sense of identifying people by identifying the character, location, and scope of participation across the whole web: this is why people Google one another. The search results on a person project just this sort of finely detailed whole, and that whole might tell us most.

Great presentation on social networks in LMSs by Susan and Stephen. I found it useful in part because it leads me to better understand & appreciate the value of person search engines and the creation of sites like ClaimID (http://claimid.org). This knowledge may also be of great help as we move forward with building our MacArthur DML Virtual Conflict Resolution website. Maybe we want to be aware of how we help students manage their identities as complex heterogeneous wholes.

The practical implication is that I may be using a combination of Ning and Moodle for the Virtual Conflict Resolution project site. Maybe I will encourage students within the course sites to actually use ClaimID as they traverse their academic years to construct a sort of timeline of their web presences, and absences. Makes me want to apply our (Casey Alt’s) timeline software to ClaimID, or to the Virtual Conflict Resolution site.

After much anticipation, an amazing new patent retrieval tool launched yesterday. SparkIP is a patent search tool of which my colleague (he is my boss-man, really) Tim Lenoir is a founder. SparkIP offers robust on-the-fly clustering of search results, similar to Vivisimo’s Clusty, but with a pretty incredible twist: the user navigates the search results visually. Results are clustered, and the user is first presented not with patent results per se but rather with patent cluster results. The company refers to each cluster as a “SparkCluster Map.” Each of these cluster “maps” has numerous clusters within it. This set of cluster maps (shown here)

SparkIPLandscape

referred to as a landscape, is an excellent and robust way of reducing an often overwhelming set of relevant documents while providing complex visual information about each cluster. This is truly a forward-looking tool in many respects, but particularly in terms of generating intelligent and useful information about technologies, people, and institutions related to a keyword search. SparkIP has raised the bar on information retrieval. But your search is not done yet.
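
As an aside for readers unfamiliar with the general technique: clustering search results on the fly boils down to vectorizing whatever documents a query retrieves and grouping them topically before showing anything to the user. The sketch below (TF-IDF plus k-means in scikit-learn) illustrates that general idea under my own assumptions; it is emphatically not SparkIP’s actual pipeline.

```python
# Illustrative on-the-fly clustering of retrieved documents; not SparkIP's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_results(docs, n_clusters=7):
    """Group retrieved patent abstracts into topical clusters.

    Requires at least `n_clusters` documents in `docs`.
    """
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    clusters = {}
    for doc, label in zip(docs, labels):
        clusters.setdefault(label, []).append(doc)
    return clusters

# Usage idea: show the user cluster summaries first, individual patents only on drill-down.
```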

Given the landscape, you can then select any of the specific cluster maps (seven in all were returned on “text mining”) by clicking directly on the map graphic. I selected the second cluster map, “information retrieval.” This brings up an enlarged view of the cluster map revealing the clusters within it, shown here:

SparkClusterMap

Then, clicking on one of the map nodes/clusters (I selected the “document information retrieval” node at the very center of the cluster map), you see a view called “Technology Detail” (shown below):

TechDetail

More information-overload-reducing brilliance is on display here in SparkIP. First, note that while 61 patents were retrieved, only 10 were returned. Further, there are likely hundreds more patents relevant to “text mining.” What appears to be happening is that SparkIP has developed patent-filtering heuristics “under the hood” that get rid of the high volume of junk patents cluttering any patent database. After all, many if not most patents are created by their originators for purposes other than staking a claim on a highly specific technology. Many a business game is played with patents as the pieces. An organization might want to occupy an intellectual property space to see if it can land licensing suckers. Other patents are premature. Some others overreach or are incredibly vague and therefore unenforceable. And so on.

There are a number of small problems with the interface, as with many a beta product. The back button removes you entirely from your search results rather than helping you navigate backwards from, say, the technology detail view to the cluster map view. The meaning of visual iconography such as cluster map node size or color, while intuitive, is not altogether clear just from naively using the tool.

But wait, folks, that’s not all. In addition to keyword-to-landscape patent search, SparkIP will also open up an eBay-esque marketplace for intellectual property. I don’t know if that part is already live or not. I hope to have more time to play around with the site in the coming days.

SparkIP was founded at Duke University through a collaboration between Dr. Lenoir, current Pratt School of Engineering Dean Rob Clark, and Johns Hopkins Provost and Senior Vice President for Academic Affairs Kristina Johnson. Since joining Lenoir at Duke I’ve had a couple of small windows of opportunity to provide some technical advice on cluster metrics with SparkIP engineer (and allpatents.org founder) Kevin Webb. But I never even got to see a demo of this thing. And let me tell you, man, this thing is amazing. I put this tool right up there with Clusty and the TRIP evidence-based medicine site as a retrieval tool among the best since the arrival of the Google beta.

Congratulations to you Tim, and to you Kevin, and to the rest of the SparkIP team.

Last evening during the weekly Duke FOCUS cluster meeting we enjoyed a talk from Duke OIT AVP and Croquet principal architect Julian Lombardi. Julian is also aligned with ISIS at Duke, which is where I enjoy the opportunity to teach on occasion. I can’t say enough what a neat guy Julian is, or how fascinating his presentation on Croquet was. Suffice it to say he had my head nodding in agreement, and his ideas were controversial enough to get the freshmen in the room to make smart-alecky remarks. If that’s not a positive sign of innovation, I don’t know what is.

I promised that this would be a note, and it will be a note. I promise.

Julian asked those attending his presentation last evening why we use computers that are overpowered and undercollaborated (to coin a word), why we use machines with seemingly prehistoric interface tools like the mouse and keyboard. Further, he asked why we don’t have technologies that work better with the way we work. I’m not sure how he answered this question except to say that we need to engineer software that supports “deep collaboration,” as Julian called it. I think Julian was suggesting that we were sort of stuck in our ways, that we just weren’t picking up available technologies, sticking instead to our old guns.

I don’t think the problem is that simple. In fact I suspect there are two significant problems, one intellectual, the other sociological.

The intellectual problem is that I suspect few if any of us actually understand what “deep collaboration” really is. It appears to me that we are only starting to understand collaboration as a phenomenon, and then only as a phenomenon of a digital variety, and then only through data about how people use collaborative technologies. That type of understanding seems to be a sort of cart-leading-the-horse phenomenon.

I don’t think (but I certainly do not know for certain) we have a very good understanding of phenomena such as tacit knowledge, communities of practice, activity theory, and so forth. Do we possess a very good grounding in how people work together?  How they have worked together?  How people might work together?

Funny that Julian called web pages “brochures.” He’s right, they are brochures. I love that perspective he shared, as it made me laugh and then blush.  It also appears to me that we’re a pamphlet-publishing culture in general, so web-publishing activities seem to be an adequate reflection of the way we actually work. After all, where in our culture are we not engaged in this sort of pamphlet-publishing work mode?  You’re going to have to go far outside information technology in order to respond (e.g., construction). It appears to me that knowledge workers of all sorts work in a rather linear fashion, and this is perhaps not surprising, since our concept of ourselves as subjects arises from engaging in linear tasks such as writing, reading, and watching, all coded in terms of a first-person perspective.

Collaboration, at least in US technoculture, seems to find its apex of sophistication in the assembly of multiple independently produced pamphlets. We can see this even in open-source software development projects where repositories are open to collaboration. With tools like CVS we lock a file as we write to it and resolve conflicts with any other code-pamphlets that have been written concurrently.

So the intellectual problem appears to have at least three components each of which should be explored independent of computational technology: knowing how we know how to work together; knowing how humans have worked together in the past; and divining how we might expand on the combination of past collaboration modes and knowledge of tacit knowledge to innovate new collaborative paradigms. I think this is an area ripe for intellectual innovation, and I don’t think such an effort should be limited to software engineering.

If this sort of intellectual problem has already been conquered then I admit I just completely missed it.  But I currently see that there is a huge gap between our understanding of the cognitive dimensions of collaboration and the understanding of how people use, say, Facebook, to collaborate with one another.  What is the biology, the phenomenology, and the behavior of human interaction?

The sociological problem is simply that such innovative interfaces have lacked, for a huge number of reasons, crossover to early adopters. Who are early adopters? The cool, hip techies to whom the masses look for what’s hot and what’s cool, those who bellwether their intellectual and geographic locales. Those of us who are into inventing are not very good at engineering social transitions, and we don’t make for early adopters at all. And when we lack early adopters we lack, well, adoption itself, don’t we?

Was this just a note?

Citation:

Cereb Cortex. 2005 Aug;15(8):1261-9. Epub 2005 Jan 5. 

The neural mechanisms of speech comprehension: fMRI studies of semantic ambiguity.

Rodd JM, Davis MH, Johnsrude IS.

Department of Psychology, University College London, UK. j.rodd@ucl.ac.uk

A number of regions of the temporal and frontal lobes are known to be important for spoken language comprehension, yet we do not have a clear understanding of their functional role(s). In particular, there is considerable disagreement about which brain regions are involved in the semantic aspects of comprehension. Two functional magnetic resonance studies use the phenomenon of semantic ambiguity to identify regions within the fronto-temporal language network that subserve the semantic aspects of spoken language comprehension. Volunteers heard sentences containing ambiguous words (e.g. ‘the shell was fired towards the tank’) and well-matched low-ambiguity sentences (e.g. ‘her secrets were written in her diary’). Although these sentences have similar acoustic, phonological, syntactic and prosodic properties (and were rated as being equally natural), the high-ambiguity sentences require additional processing by those brain regions involved in activating and selecting contextually appropriate word meanings. The ambiguity in these sentences goes largely unnoticed, and yet high-ambiguity sentences produced increased signal in left posterior inferior temporal cortex and inferior frontal gyri bilaterally. Given the ubiquity of semantic ambiguity, we conclude that these brain regions form an important part of the network that is involved in computing the meaning of spoken sentences. (My emphasis.)

 

Here we may have a possible biological locus for exactly the sort of phenomenon I was positing in my previous post. Interestingly enough, ambiguity seems to be a core process, and again we have evidence that language users are able to actively engage with ambiguous language and that an important step in cognition occurs before disambiguation. Importantly, it is likely that linguistic comprehension engages in parallel visualization of multiple possibilities. This is probably responsible for so much of what makes poetry interesting and road signs uninteresting.

The inferior temporal cortex is a higher-level part of the ventral stream of the human brain’s visual processing system. The ventral stream engages in classification and identification of phenomena. The adjacent inferior frontal gyrus contains Brodmann areas 44 and 45, which include a number of non-visual areas heavily engaged in linguistic understanding. Broca’s area is contained in Brodmann area 44 and is connected to Wernicke’s area via the arcuate fasciculus.

One way to disprove my present theory would be to look at the neural precursors to these differentiated brain areas in fetal development. Do human brains develop the visual system first? Do these linguistic areas develop out of the visual tissues? Or do they come out of a wholly different set of neural tissues? Anyone know a neuroembryologist?

When reading some Steven Pinker a couple of years back I wondered whether language could be better understood via sound, sentence, and vision rather than by words and rules as Pinker suggests (see his Words and Rules). Rules seem to be elements of narration we use, or rather abuse, to divine a neat model of causality. However, there seems to be very little in biology that’s truly rule-like. Biology is inherently anti-functional, at least in the strict mathematical sense of the word function. Cells and subcellular systems can and do appear to regularly do different things given the same input. And that’s assuming we can even truly tightly control an input to a biological system in any meaningful (read: in vivo) way. Weak and strong AI proponents would have us think that neurons are analogues for computer circuits, but the complexity of neural matter is hardly reducible to such a model without sacrificing crucial information.

Rules just don’t seem inherent to language. Words, however, do seem on some level fundamental to language. From a textual perspective certainly. We can see evidence for this in many ways; in my experience the evidence is in building representations of document collections for various text mining experiments. But from an oral perspective, are words fundamental?

Spoken language seems far more continuous than written language, not only from a processing standpoint but also from a sensory point of view. Spoken language is experienced and performed in a rather continuous way; words are deduced in learning language, but it remains to be shown whether words are in and of themselves mere narrative convenience for explaining how we understand language rather than language itself. Sounds continue rather fluidly within sentences. The auditory experience of language is that the most coarse break, the most distinct break, is the break between sentences. But spoken language is not just continuous in the way it is serially composed and experienced in an auditory fashion. It is also continuous in that it spreads across the sensory spectrum, from sound to vision. Inflection and gesture are essential to processing meaning, and such experience and interpretation is so incredibly integrated and automatic that it operates as intuition does.

While the fundamental descriptive unit of language seems to be the word, with the description generating itself through the appearances of language acquisition, the fundamental unit of language seems to be the sequence of sounds, the sentence. The word “book” or for that matter the sound of the word has some basic meaning but no real rich semantics. What book? What’s it doing? Where is it? What’s in it? How thick is it? Do you even mean a thing with pages? Frankly we have no idea what questions even make sense to ask in the first place. The word and the sound alike seem devoid of context, seem completely empty of a single thought. But once we launch into a sentence, the book comes to life, to at least a bare minimum of utility, representation with correspondence to some reality. It seems the sentence is the first level at which language has information.

But it seems that the sentence, the meaning-melody of distinct thought, is composed more essentially with some visual representational content, something rudimentary that is pre-experiential (children blind from birth seem to have no profound barriers to becoming healthy and fully literate adult language users). There seems to be something visual involved in language that is degenerative in nature, not generative. It seems that language comprehension is based on breaking down the continuous auditory signal into something very roughly visual, and then the utterance becomes informative.

My take on such a process is really not so unusual but rather fundamental to one of the most important linguistic discoveries of the modern era. Wernicke believed that the input to both language comprehension and language production systems was the “auditory word image.”

So here’s what I’m thinking. Language’s syntax is not fundamentally linguistic per se nor compositional but rather sensory (audio-visual) and decompositional. So I wonder, is there some sort of syntax for vision, some decompositional apparatus? Or are we just getting back into rule-sets?

I think we can understand something fundamental in this syntax between the sensory and the linguistic. Linguistic decomposition, which is really either auditory or visual decomposition, becomes visual composition in understanding. Likewise, the visual must be decomposed before it can be composed into a sentence.

In other words, if we knew the rules for visual decomposition we could automatically compose descriptions of scenes. Likewise, we should be able to compose images from the decomposition of linguistic signals.

And how do we do that without rules or functions?


But language is not pure sign, it is also a thing. This exteriority -word as object rather than sense- is an irreducible element within the signifying scene. Language is tied to voice, to typeface, to bitmaps on a screen, to materiality. But graphic traces, visualizations are irreducible to words. Their interpretation is never fully controllable by the writing scientist.

– Timothy Lenoir and Hans Ulrich Gumbrecht,
from the introduction to the Writing Science series