Friday, April 15, 2005

the freesound project

The Freesound Project provides a database of Creative Commons licensed audio pieces.

Thursday, April 14, 2005

Artificial intelligence vs. artificial humanity

Here's something that comes up in discussions of AI that I always notice. This is from Shirky's piece:

After 50 years of work, the performance of machines designed to think about the world the way humans do has remained, to put it politely, sub-optimal.

True, but the thing is, AI shouldn't be about making an artificial human. What always seems to be at issue is that this massive "failed" attempt hasn't been in artificial intelligence; it has been in artificial humanity. There are already many things that could be called artificial intelligence. Where do we get the hubris to think that "intelligent" means "thinking like us"? (Oh yeah, we get it from telling ourselves we're the apex of everything--that's right.) There already are artificial intelligences, but because we don't recognize them as acting like us, we discount them.

Here's some fun food for thought: the entire web is an artificial intelligence. Google is an artificial intelligence. Del.icio.us is an artificial intelligence. And you know who the neurons are? Peel yourself away from your monitor (there's a telling term--who's watching whom?) long enough to see your answer in the mirror. The hive is the mind. The network is the brain. Our isolation is the synapse. We're the neurons. Welcome to the machine.

Yes, artificial intelligence in this sense is dependent on other things, like our maintenance, but that is of course true of our own, human intelligences. Not only are ours dependent on the world, à la Heidegger, but we are each an assemblage of organisms. Our mitochondria and intestinal flora are of us as we are of the machine; together the organisms, the nodes, as it were, form something that is other than themselves.

Ontology is Overrated

IT Conversations has a copy of Clay Shirky's ETech talk, "Ontology is Overrated: Links, Tags, and Post-hoc Metadata". You might be interested in listening...

Re-visiting Hansen

The work being done by choreographer Trisha Brown puts some of Hansen's work back into discussion for me. Her 30-minute show, "how long does a subject linger on the edge of volume...", opens Saturday at Arizona State University. You can catch the NYTimes story at this link.

Open-access journals

I thought the class would like to read this article in Wired. It is about the proliferation of open-access journals, which neither charge for subscriptions nor sell advertisement space. Rather, researchers pay to publish their work. This raises all kinds of questions about accountability, economics, etc.

Wednesday, April 13, 2005

The Long Tail Weblog

I forgot to mention that there's a relatively recent weblog called The Long Tail that discusses, you guessed it, the long tail concept. It's also by Chris Anderson, the same person who wrote our Wired essay from yesterday's class.

Tuesday, April 12, 2005

Some thoughts over coffee and eggs

Ok, so I'm reading the blog piece by Tom Coates on defining social software and I'm psychoanalyzing myself as I read (as I usually do when reading about stuff I know I should "like" but instead feel some resistance to), and I think I've unpacked something I want to share. And I intend to speak in generalities, not about this course specifically.

First, I must say that reading blogs is an interesting way to "hear" the conversation in a modified sense of immediacy--unlike waiting for published replies in editions/volumes of journals, etc. I'm wondering what to do with this idea.

Second, here's what I'm thinking after recalling my comments about being resistant to using blogs: (speaking only for myself, now) I think I resist wholly embracing technology (like using it in my classroom?) because of the rhetoric that surrounds it and its use/incorporation. Granted, I'm really quite new to the conversations, but it seems to me that a binary is somehow established--one is either a Luddite or a techie. Luddites are primitive and lost and illiterate while techies are advanced and found and "in the know" (I hate this idea because it leaves plenty of people "out of the know").

This idea came to me after "Anne's" comments about our "need to be very careful about assuming that technology (objects-in-themselves) can 'remove limitations' or 'compensate for human inadequacies'". While I DO see technology, social software, blogs--all of this stuff--as important to know how to use, as needing to be "taught," as not going away, etc., I question the seemingly euphoric embracing of it as almost holy. Technology Will Change Your Life. If you haven't seen the light radiating from The Net, Blogs, Social Networks, then you are doomed to damnation. Use it in the classroom because "the kids already know how." I interpret this sort of rhetoric as implying that I am somehow inadequate because I DON'T know how. What about the things that I DO know how to do that they don't? Should I teach THOSE things? Or do I only teach things that are relevant to "today's world?" (Just making a ridiculous argument here because I don't know where this is going.)

My other ponderings surround the idea of access. Those who have access AND can navigate electronic social webs are somehow rewarded (I suppose this is no different than physical social webs). What happens to those who either do not have access or who CANNOT navigate the web? Are they relegated to some "place" for those who are "limited" and "inadequate?"

I am being quite reductive in expressing this idea because I perceive that anyone reading this has already lost interest (too long for an electronic forum). And I'm falling into the same traps we discussed on Thursday: I'm critiquing my writing, I'm thinking of counterarguments to my points but not inclined to write a paper-length post, I'm not good at being short and to the point... Oh, here's where my inadequacies are limiting my interaction with our social network...