Virtual Peace is alive as of last evening.

For the last gosh-don’t-recall-how-many months I’ve been working as a Project Collaborator on a project envisioned by the other half (more than half) of the Jenkins Chair here at Duke, Tim Lenoir.  For those of you who don’t know Tim, he’s been a leading historian of science for decades now, helping found the History and Philosophy of Science program at Stanford.  Tim is notable in part for changing areas of expertise multiple times over his career, and most recently he’s shifted into new media studies.  This is the shift that brought him here to Duke, and I can’t say enough what an incredible opportunity it is to work for him.  We seem to serve a pivotal function for Duke as people who bring innovation together with interdisciplinarity.

What does that mean? Well, like the things we study, there are no easy, simple narratives to cover it.  But I can speak through examples.  And the Virtual Peace project is one such example.

Tim, in his latest intellectual foray, has developed an uncanny and unparalleled understanding of the role of simulation in society.  He has studied the path, no, wide swath of simulation in the history of personal computing, and he developed a course teaching contemporary video game criticism in relation to the historical context of simulation development.

It’s not enough to just attempt to study these things in some antiquated objective sense, however.  You’ve got to get your hands on these things, do these things, make these things, get some context. And the Virtual Peace project is exactly that. A way for us to understand and a way for us to actually do something, something really fantastic.

The Virtual Peace project is an initiative funded by the MacArthur Foundation and HASTAC through their DML grant program. Tim’s vision was to appropriate the first-person shooter (FPS) interface for immersive collaborative learning.  In particular, Virtual Peace simulates an environment in which multiple agencies coordinate and negotiate relief efforts in the aftermath of Hurricane Mitch in Honduras and Nicaragua.  The simulation, built on the Unreal game engine in collaboration with Virtual Heroes, allows 16 people to play different roles as representatives of various agencies, all trying to maximize the collective outcome of the relief effort.  It’s sort of like Second Life crossed with America’s Army, everyone armed not with guns but with private agendas and a common goal of humanitarian relief. The simulation is designed to take about an hour, perfect for classroom use. And with its review components, instructors have detailed means for evaluating the efforts and performance of each player.

I can’t say enough how cool this thing is.  Each player has a set of gestures he or she may deploy in front of another player.  The simulation has some new gaming innovations, including proximity-based sound attenuation and full-screen, full-session, multi-POV video capture.  And the instructor can choose from a palette of “curveballs” to make each simulation run interesting.  Those changes to the scenario are communicated to each player through a PDA his or her avatar carries. I was pushing for a heads-up display, but that’s not quite realistic yet I guess. 😉

The project pairs the simulation with a course-oriented website.  While a significant amount of web content is visible to the public, most of the web site is intended as a sort of simulation preparation and role-assignment course site.  We custom-built an authentication and authorization package that is simple and lightweight and user-friendly, a system that allows instructors to assign each student a role in the simulation, track the assignments, distribute hidden documents to people with specific roles, and allow everyone to see everything, including an after-action review, after the simulation run.

Last evening, Wednesday October 08, 2008, the Virtual Peace game simulation enjoyed its first live classroom run at the new Link facility in Perkins Library at Duke University.  A class of Rotary Fellows affiliated with the Duke-UNC Rotary Center were the first players in the simulation and there was much excitement in the air.

Next up:

I never miss a beat here it seems, for now I am already onto my next project, something that has been my main project since starting here: reading research and patent corpora mediated through text mining methods.  Yes that’s right, in an age where we struggle to get people to read at all (imagine what it’s like to be a poet in 2008) we’re moving forward with a new form of reading: reading everything at once, reading across the dimensions of text. I bet you’re wondering what I mean.  Well, I just can’t tell you what I mean, at least, not yet.

At the end of October I’ll be presenting with Tim in Berlin at the “Writing Genomics: Historiographical Challenges for New Historical Developments” workshop at the Max Planck Institute for the History of Science. We’ll be presenting some results related to our work with the Center for Nanotechnology in Society at UCSB.  Basically, we’ll be showing some of our methods for analyzing large document collections (scientific research literature, patents) as applied to the areas of bio/geno/nano/pharma in both China and the US. We’ll demonstrate two main areas of interest: the semiotic maps of idea flows over time that I’ve developed working with Tim and Vincent Dorie, and the spike in the Chinese nano scientific literature at the intersection of bio/geno/nano/pharma.  This will be perfect for a historiography workshop. The stated purpose of the workshop:

Although a growing corpus of case-studies focusing on different aspects of genomics is now available, the historical narratives continue to be dominated by the “actors” perspective or, in studies of science policy and socio-economical analysis, by stories lacking the fine-grained empirical content demanded by contemporary standards in the history of science.[…] Today, we are at the point in which having comprehensive narratives of the origin and development of this field would be not only possible, but very useful. For scholars in the humanities, this situation is simultaneously a source of difficulties and an opportunity to find new ways of approaching, in an empirically sound manner, the complexities and subtleties of this field.

I can’t express enough how excited I am about this. The end of easy narratives and the opportunity for intradisciplinary work (nod to Oury and Guattari) is just fantastic.  So, to be working on two innovations, platforms of innovation really, in just one week.  I told you my job here was pretty cool. Busy, hectic, breakneck, but also creative and multimodal.

The following comprises a collection of my intuitions and “big picture” insights resulting from graduate study focused on text mining at SILS. These are insights related to feature representation, knowledge engineering, model building, the application of statistics to real-life phenomena, and the greater whole of information science.

Many of these apparently go without saying, yet so many discussions of supposed problems would go away if some of these observations were made explicit. This is my attempt to make them explicit. Maybe it goes without saying that expressing the obvious is sometimes quite necessary.

1. Statistical models often fail because they’re missing key attributes necessary to describe the phenomena they represent

Attributes that are altogether unrecognized, difficult to quantify, difficult to analyze, truncated out, or simply forgotten arguably dominate and confound the predictive/explanatory power of statistical models. These missing variables abound. Their absence dominates to the point where theory itself must give way to empiricism and its sister, skepticism. It also means that we simply don’t see everything and that it never hurts to try and see more things.
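Here’s a toy sketch of the point, using entirely synthetic data and made-up coefficients: when a model omits an attribute that is correlated with one it keeps, the retained attribute’s estimated effect absorbs the missing one’s, and the model confidently misdescribes the phenomenon.

```python
import random

random.seed(0)
n = 10_000

# Synthetic phenomenon: y depends on two correlated attributes, x1 and x2.
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * a + random.gauss(0, 0.6) for a in x1]
y = [1.0 * a + 2.0 * b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

# Ordinary least squares with BOTH attributes (no intercept; data are zero-mean).
s11 = sum(a * a for a in x1)
s12 = sum(a * b for a, b in zip(x1, x2))
s22 = sum(b * b for b in x2)
sy1 = sum(a * c for a, c in zip(x1, y))
sy2 = sum(b * c for b, c in zip(x2, y))
det = s11 * s22 - s12 * s12
b1 = (s22 * sy1 - s12 * sy2) / det  # recovers ~1.0, the true effect of x1
b2 = (s11 * sy2 - s12 * sy1) / det  # recovers ~2.0, the true effect of x2

# The same regression with x2 forgotten: its effect leaks into x1's
# coefficient, which lands near 1.0 + 2.0 * 0.8 = 2.6 instead of 1.0.
b1_missing = sy1 / s11

print(round(b1, 2), round(b2, 2), round(b1_missing, 2))
```

The misspecified model isn’t just less accurate; it tells a confidently wrong story about the attribute it kept.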
2. Feature reduction of highly dimensional linguistic data sets is a misguided, outdated and counterproductive approach

There. I said it.

Claude Shannon’s model of information as that which is located among noise is a metaphor that appears to have misled a number of people in information science, particularly those involved with anything even remotely tangential to text mining (or, if you must, “knowledge discovery”). Information in an atomic form (e.g., bits) allows for the differentiation of signal and noise. A bit either is a signal or it isn’t. Attributes of real-life phenomena (e.g., average first-down yardage in football for a team) are not like bits, at least not in the way we experience them and interpret those phenomena, whether in written explanations or in databases. “Real-life” phenomena comprise different sorts of real-world features that can never be honestly reduced to their atomic constituents. And, pragmatically speaking, they won’t be reduced to quantum atomic states any time soon.

Given that every attribute of real-world phenomena we identify partakes of both signal and noise, the removal of any attribute (save for the case of redundancy) always corresponds to a loss of information. Ultimately, the statistical modeling of phenomena such as competitive sports, stock markets, and clinical emergency room chief complaints is wholly unlike modeling communication channels. There’s something immediately discontinuous about binary electronic signals, while these other phenomena need dramatic interpretive steps before they can be represented with discontinuous electronic signals. Finally, signal and noise are terms that don’t apply very well, because that which we are modeling can only be realistically described by features that are both informative and misleading at the same time.
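One way to see that a feature can be signal and noise at once (a sketch with made-up numbers): a binary attribute that agrees with the label only 70% of the time still has positive mutual information with the label, so discarding it as “noise” discards real information.

```python
import math
import random

random.seed(0)
n = 50_000

# A binary attribute that agrees with the label only 70% of the time:
# part signal, part noise, like most real-world features.
labels = [random.randint(0, 1) for _ in range(n)]
feature = [lab if random.random() < 0.7 else 1 - lab for lab in labels]

# Empirical mutual information I(feature; label), in bits.
joint = {(f, l): 0 for f in (0, 1) for l in (0, 1)}
for f, lab in zip(feature, labels):
    joint[(f, lab)] += 1
p_f = {f: (joint[(f, 0)] + joint[(f, 1)]) / n for f in (0, 1)}
p_l = {l: (joint[(0, l)] + joint[(1, l)]) / n for l in (0, 1)}
mi = sum(
    (c / n) * math.log2((c / n) / (p_f[f] * p_l[l]))
    for (f, l), c in joint.items()
    if c > 0
)

print(round(mi, 3))  # positive: dropping this "noisy" feature loses information
```

The theoretical value here is 1 − H(0.3) ≈ 0.12 bits; small, but strictly greater than zero, which is the whole point.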

There’s something rather continuous about language (something that latent semantic indexing attempts to capture), and even the simplest of approaches, such as applying stop word lists to bag-of-words representations, loses critical information that dictates the semantics of the document.  “Dog,” “a dog” and “the dog” quite clearly mean different things, as do “of the dog”, “out of a dog” and so forth. Representing all of those quotations as “dog”, or going a step further and representing all of these quotes with the very same word-sense identifier, dumbs down human language beyond recognition.  Garbage in, garbage out is a phrase I learned more than a quarter century ago when learning to program games for the Commodore VIC-20.
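The collapse is easy to demonstrate with a minimal sketch (the stop list here is a tiny illustrative one; real lists are longer, which only makes things worse):

```python
# A tiny illustrative stop word list.
STOP_WORDS = {"a", "the", "of", "out"}

def bag_of_words(text):
    """Stop-word-filtered bag-of-words: the standard lossy reduction."""
    return sorted(w for w in text.lower().split() if w not in STOP_WORDS)

phrases = ["a dog", "the dog", "of the dog", "out of a dog"]
bags = [bag_of_words(p) for p in phrases]
print(bags)  # every distinct phrase collapses to ['dog']
```

Four phrases with four different meanings, one indistinguishable representation.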

Reading a textbook from 1993 on the C4.5 algorithm, I came across reflections suggesting that some crucial elements of C4.5 were motivated by economizing on computing resources: not enough memory, processors too slow, and so on. In 2007 high-performance computing is a commodity. The pressures for feature reduction in machine learning needed to be heeded 14 years ago, but they’re considerably less of an issue today.

Finally, at the very end of my stretch of graduate school studies I accidentally came across a new strategy for feature representation that is so painfully obvious in retrospect it leaves me wondering why no one else has been doing this. Fortunately for Hypothia it spells one very big competitive advantage. But I digress.

3. There’s always something missing from your set of attributes (cf. 1 & 2)

4. There’s no substitute for knowing your data set (cf. 1)

I credit this oft-neglected, oft-devalued approach to my first and truly excellent data mining instructor, Miles Efron, who may be to blame for turning me on to text mining in the first place. What have you wrought? He made sure to repeat this lesson of knowing thy data a few times, and the lesson was surely not lost on me. In fact it seems as if it frames and justifies my confidence in my approach.

5. [DELETED] and let your algorithms optimize your attributes for maximal classification margin (cf. 2 & 3)

Can’t say the deleted part yet. But I will, eventually. It probably should be obvious by now. But still I’m not prepared to say.

6. SVM+SMO is very good for binary classification of highly dimensional data (cf. 5)

Improvements to SVM+SMO are always welcome of course, and it appears there are now numerous implementations of SVM that improve on it. I should note that, according to Eibe Frank, SMO in Weka (written in Java) is just as fast as Joachims’ SVM-light written in C. SMO’s pretty good.

SMO efficiently solves the quadratic programming (QP) problem that SVM training poses.
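For the curious, here’s a minimal sketch of the pattern, assuming scikit-learn rather than Weka (its SVC wraps libsvm, an SMO-style solver); the documents and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# Toy binary classification of made-up document snippets: sparse,
# high-dimensional tf-idf features fed to SVC, whose libsvm backend
# trains with an SMO-style solver.
docs = [
    "nanotube synthesis and characterization",
    "carbon nanostructure fabrication methods",
    "gene expression in yeast cultures",
    "genomic sequencing of bacterial strains",
]
labels = [0, 0, 1, 1]  # 0 = nano literature, 1 = geno literature

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(docs)  # one dimension per vocabulary term
classifier = SVC(kernel="linear").fit(features, labels)

# An unseen document sharing terms only with the nano class.
prediction = classifier.predict(
    vectorizer.transform(["nanotube fabrication methods"])
)
print(prediction[0])
```

The same recipe scales to tens of thousands of dimensions, which is exactly where the linear-kernel SVM shines.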

7. You always need more computing power (cf. 2, 5 & 6)

The curse is not dimensionality, the curse is not intellectual. The curse is economic, a problem of resources.
It will likely be difficult to produce a dataset that is intractable for a good HPC setup running SVM+SMO, but it doesn’t exactly hurt to try, as long as you’re trying to harness more and more power.

8. You don’t know everything (cf. 3 & 4)

9. Models forecast well in forecast-influenced environments only when the model has an information advantage over other models (information asymmetry, competitive advantage)

10. You’ll never get it quite right (cf. 8)

11. There’s always more left to do (cf. 5, 7 & 10)

12. Disambiguation is better pursued not in any pure sense by machinic strategies but rather by the messier approach of utilizing the greater context surrounding term, document, and corpus, which in turn permits some degree of ambiguity, which is necessary for understanding

13. Word sense disambiguation is quite possibly the wrong way to go to conjure semantics in one’s text representation (cf. 2 & 12)

As I’ve written before, there are other approaches available to leverage semantic information that are better than word-sense disambiguation (WSD).

14. More formally, the incorporation of ambiguity into linguistic representations (i.e., representing all possible word senses/meanings and POSs for any given word) allows for better representations of intelligence than ones produced at least in part through WSD strategies

15. For artificial intelligence to become smarter than humans, it must at least be as smart as humans first.  A person’s ability to understand multiple senses of a given word at once (of which poetry is perhaps the most striking example) is strikingly intelligent, and far more intelligent than most WSD approaches I’ve seen (cf. 14).  And when you consider that the basic unit of meaning is truly not the word but the sentence, WSD seems all the more foolish, and yet it makes me feel there’s a huge opportunity to understand language from its wholes and holes.  Discourse analysis, anyone?
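A sketch of what an ambiguity-preserving representation might look like; the sense inventory here is entirely made up (not a real WordNet fragment), and the structure is just one plausible shape for the idea:

```python
# A made-up sense inventory: each word maps to every candidate
# (sense-id, part-of-speech) pair rather than a single "winner".
SENSE_INVENTORY = {
    "bank": [("bank.n.01", "noun"), ("bank.n.02", "noun"), ("bank.v.01", "verb")],
    "fish": [("fish.n.01", "noun"), ("fish.v.01", "verb")],
}

def represent(sentence):
    """Keep ALL candidate senses for each word instead of forcing one."""
    return {
        word: SENSE_INVENTORY.get(word, [(word, "unknown")])
        for word in sentence.lower().split()
    }

rep = represent("bank fish")
print(rep["bank"])  # all three senses survive; none is prematurely discarded
```

Downstream models can then weigh the competing senses in context, rather than inheriting a single, possibly wrong, early commitment.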

16. Not knowing everything, not always getting it right, and always having more left to do makes the hard work a great deal of fun. Discoveries are everywhere waiting to be written into existence. (cf. 8, 10, 11)

17. Don’t panic, be good, and have fun. (cf. 16)

18. The essence of human language is nothing less than the totality of human language in all of its past, present, and future configurations and possibilities.