My fascination with the backchannel at academic conferences — you know, that bunch of nerds twittering away behind their laptop screens while you deliver that laboriously written presentation with the utmost theatrical skill — began at TEI ’09 in Ann Arbor. (I wanted to insert a reference to #tei09, but Twitter search is ridiculously short-memoried and stops returning old tweets after, I believe, a week or so.)
When the discussion of a difficult paper is in full swing, it really helps that two or three more knowledgeable people are simultaneously sending the key phrases through the air on Twitter.
The real-time magic works even more beautifully when a nice conference you could not afford to attend is streamed live to your living room through tweets. It first happened to me the other day, when a grad conference with the magnificent title The Past’s Digital Presence took place at Yale. I did not attend any of the talks in person (nor am I well acquainted with the speakers), but simply by tuning in to the conference’s hashtag, #pdp2010, I could get the gist of the talks — and pick up some nice one-liners on Digital History in the process. More on those below.
I guess following a conference through Twitter might be compared to following a sports game over the radio.
Radio and TV reporters have long realized that the medium is the message, and that their reporting, imperfectly though it may represent the event, also adds a significant dimension. The reporter’s intonation and the many background stories (on cycling, I have a terrible love/hate relationship with Michel Wuyts, who can tell you off the top of his head what color of underwear was worn by any rider at any major racing event) — they all add tons of drama, narrative, and heroics to the game.
So, what does the Twitter medium add to the game of conferencing?
Well, first: the medium is the massage, too. I happen to pick up North American daytime tweets at about 8 pm, happily enjoying the comforts of a reclined seat and some good Belgian Trappist beer. I don’t know why, but it sure helps me digest academic conferences better.
Seriously, even if Twitter undoubtedly subtracts an awful lot from the live event, it also adds something. That something, I would guess, must be semantic. In a way, Twitter may be considered an “auto-summarization service” for academic conferences, and it works only because the tweets are of very high quality.
Speaking of high-quality tweets, let’s move on to the actual subject of this post.
Some tweets on #pdp2010 caught my attention, because they succinctly expressed some of the crucial questions in Digital Humanities today.
(On the subject of an Ivy League university such as Yale hosting a digital humanities conference, there’s been ample discussion on the Humanist mailing list. It all started out with this fascinating post by Willard McCarty.)
I’m not sure whose statements these are, or who twittered them, so I’ll just quote them as I received them — anonymous infobits whose main reason for existence is to be re-tweeted. (Are tweets the materialization, at last, of Dawkins’ ill-famed memes?)
We haven’t yet dealt with the issues of reliability and bias (as historians) of digital primary sources. #pdp2010
Digital resources (humanities in particular) are often “link graveyards,” growing collections of broken interfaces #pdp2010
Important comment from a digital librarian: descriptive metadata is key. Scanned images are not the entirety of the situation
We have got to talk to our users more – there’s no excuse for scholars not knowing about the existence of descriptive metadata #pdp2010
All of these statements could be the starting point for interesting discussions. However, for now I’d like to leave them in the air for a moment (just as they kept floating around my head for a few days) and focus instead on one comment that is particularly dear to me. Apparently it came from a talk by April Merleaux:
Merleaux: If u don’t program, your work will always be at the mercy of those who do
I believe that is quite true, and it’s also the main reason why I started investing so much time in learning new technical skills (back in 2008, when I wanted to set up my first online digital document collection). If you do not grasp some core concepts from information science, web programming, or state-of-the-art natural language processing (as it is used by search companies whose interfaces we use daily), you will never fully understand what possibilities digital tools may hold for your own research.
There is, however, a variety of opinions on this subject. Recently, on the TEI mailing list, Martin Holmes commented (this is from a discussion on eXist, a native XML database used to index and query XML source files):
It’s fair to say that to use eXist effectively, you have to learn quite a lot. Our researchers (faculty, RAs, etc.) never have anything to do with it. We (the technical support) set up eXist, write the XQuery and all the other web application logic, and so on. (…)
So I think people like you need people like me, and vice versa. Sometimes there are people who happily live in both camps — active faculty researchers who are also quite comfortable programming their own web applications — but these are few and far between. But that’s one of the things that makes digital humanities an inspiring enterprise: it’s almost always more collaborative than traditional humanities research.
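To give non-programming readers a feel for what that technical work looks like: eXist itself takes XQuery expressions run against a server-side XML database, which is exactly the kind of setup Holmes says his team handles for researchers. As a stand-in, here is a minimal sketch in Python, using only the standard library and a tiny invented TEI-like snippet — purely illustrative, not eXist’s actual API.

```python
# Illustrative only: Python's ElementTree stands in for eXist/XQuery,
# and the TEI fragment below is a made-up example document.
import xml.etree.ElementTree as ET

tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <p>The medium is the <term>message</term>.</p>
    <p>The medium is the <term>massage</term>.</p>
  </body></text>
</TEI>"""

ns = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(tei)

# Collect the text of every <term> element -- roughly what the XQuery
# path //tei:term would retrieve from an eXist collection.
terms = [t.text for t in root.findall(".//tei:term", ns)]
print(terms)  # → ['message', 'massage']
```

Even this toy version shows why the learning curve is real: before writing a single query you need to understand XML namespaces, path expressions, and the structure of the encoded documents.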
Holmes is certainly right, but he is advocating only one ‘role model’ of digital humanities — that of collaboration between technically skilled programmers and academically skilled scholars. Such collaborations always need time to grow and, more importantly, a well-funded research framework to take place in.
I don’t have the resources for such a framework, and I do not feel like spending the next few years of my academic life obtaining the funds for them. But I do want to do research in digital humanities.
That’s why I prefer the DIY attitude that (I believe) Merleaux represents. It’s tempting (because it feels flattering) to identify with Holmes’ rare individuals — “active faculty researchers who are also quite comfortable programming their own web applications” — but actually our predicament is a little less comfortable. We have to keep up with our ‘traditional’ specializations in the humanities-at-large, and do the programming ourselves.
The medium, then, is not merely the message, and no longer the massage — the medium is the bricolage.
- Podcasts of the talks at PDP 2010 may be found at Jana Remy’s blog, makinghistorypodcast.com (Remy was also one of the excellent & magnanimous twitterers for PDP, thanxalot!)
- Douglas Knox commented on the Humanist mailing list:
Date: Wed, 3 Mar 2010 08:45:51 -0600
From: Douglas Knox
What I thought I glimpsed between the tweets about PDP2010 was nascent home-grown theory arising out of methodological reflection within historically oriented disciplines. Digital challenges to presumptions about research, evidence, analysis, communication, and audience certainly call for this reflection throughout the humanities, not just in humanities departments but in libraries, archives, museums, and publishing enterprises driven by an intellectual mission. The grad students who came together for PDP recognize the necessity of thinking about, and historicizing, the role of libraries, archives, their own collecting and publishing, and, not least, the dark matter of missing information, in the production of knowledge about the past.