Meet Nabaztag

I'm the proud owner of a small, white, wi-fi enabled ambient rabbit called Nabaztag. It's one of a new breed of smart objects that I'm using to investigate the design properties of calm technology.

One of my frustrations with physical computing is that the output methods seem pretty limited. It's a little disappointing to go to all the trouble of hooking up a sensor to an Arduino prototyping board just to watch a pathetic LED blink on and off. Nabaztag seems like a good way to solve this problem because it's designed specifically for ambient communication. It boasts three independent channels:

  • A series of full-spectrum LED clusters
  • Two independently positionable ears
  • A speaker for streaming mp3s

I've been prototyping different ways to control that output. I started with the ears, since they're the channel that allows the most control. Inspired by Schulze & Webb's Availabot, I hooked the ears up to monitor an iChat account: drooping a bit when people went idle, flattening completely when they were away, and then perking back up when they returned.
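A minimal sketch of that ear-monitoring idea in Python, assuming Violet's V1 REST API. Treat the endpoint, the `posleft`/`posright` parameter names, and the position values below as my guesses rather than documented fact; the serial and token are placeholders.

```python
# Sketch: map IM presence to Nabaztag ear positions.
# Hedged assumptions: the V1 API URL, the posleft/posright parameters,
# and the 0-10 angle range are all approximations, not verified docs.
import urllib.parse
import urllib.request

API_URL = "http://api.nabaztag.com/vl/FR/api.jsp"  # assumed V1 endpoint
SERIAL, TOKEN = "0013D3XXXXXX", "0000000000"       # placeholders

# Ear angle per presence state: 0 = fully perked up, 10 = flat back.
EAR_POSITIONS = {
    "available": 0,   # perked up
    "idle": 5,        # drooping a bit
    "away": 10,       # flattened completely
}

def ear_position(status):
    """Return the ear angle for an iChat status, defaulting to upright."""
    return EAR_POSITIONS.get(status, 0)

def set_ears(status):
    """Send both ears to the position matching the given status."""
    pos = ear_position(status)
    params = urllib.parse.urlencode({
        "sn": SERIAL, "token": TOKEN,
        "posleft": pos, "posright": pos,
    })
    urllib.request.urlopen(f"{API_URL}?{params}")
```

In practice a watcher script would poll the iChat account (on a Mac, AppleScript can read buddy status) and call `set_ears()` whenever the status changes.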

The lights are a little harder to control. What I've been able to cobble together so far uses the built-in light patterns. The nose lights up whenever I have new e-mail I'm likely to care about. The rest of the lights pulse blue whenever it's going to rain. What I've learned from this is that binary conditions don't make very interesting ambient displays. In San Francisco, the weather isn't variable enough. The only thing I really care about is whether or not I need an umbrella, so Nabaztag's display doesn't tell me much. It's basic communication theory: the amount of information isn't determined by how much something does tell you, but by how much it could tell you.
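That last claim is easy to check numerically: Shannon entropy measures how much a display could tell you. A quick sketch, with made-up probabilities purely for illustration:

```python
# Shannon entropy of a display's states. A "rain / no rain" light in a
# city where it rains only 5% of days carries far less information than
# a display driven by five roughly equally likely states.
# (The 5% figure is invented for illustration, not real SF climate data.)
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

sf_rain = entropy([0.05, 0.95])   # ~0.29 bits: barely worth a light
five_way = entropy([0.2] * 5)     # ~2.32 bits: a much richer display
```

A fair coin maxes out a binary display at exactly 1 bit; a nearly constant condition like San Francisco rain sits close to zero, which is why the blue pulse rarely says anything.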

One source of data that does have some linear variability is the Muni Metro. A system called NextBus tracks the trains throughout the day, so it's possible to calculate their rate. Now a series of lights on my Nabaztag blinks in tempo, from very slow to very fast, depending on how Muni is running. Muni runs faster in the morning and after work and slower late at night. But it's like that every day, so there's not really much information. I also don't care about it.
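The rate-to-tempo mapping could be sketched like this. The scaling constants are arbitrary choices of mine, not anything from NextBus or from the post:

```python
# Sketch: linearly interpolate a train rate into a blink interval.
# slow_s, fast_s, and max_rate are invented tuning values, not measured ones.
def blink_interval(trains_per_hour, slow_s=4.0, fast_s=0.25, max_rate=20.0):
    """Map a Muni train rate to seconds between blinks (low rate -> slow blink)."""
    rate = max(0.0, min(trains_per_hour, max_rate))  # clamp to a sane range
    frac = rate / max_rate                # 0.0 = dead of night, 1.0 = rush hour
    return slow_s + frac * (fast_s - slow_s)
```

A driver loop would then poll NextBus, compute `trains_per_hour`, and sleep `blink_interval(...)` seconds between light pulses.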

I put off tinkering with Nabaztag's speaker because the built-in options didn't seem very ambient. Still, there could be something to it, so I finally set up Nabaztag to read Google News RSS headlines in the morning when I wake up. This ended up being more disconcerting than anything. Text-to-speech is hard to get right, and Nabaztag's voice selection pales in comparison to AT&T's Natural Voices. Every now and then my bunny speaks some random thing to me, and since Violet is based in Paris, it's often in French. This is especially fun late at night.
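A rough sketch of the headline reader. The RSS parsing below is standard, but the `tts` parameter and endpoint are my best recollection of Violet's V1 API, not something verified here:

```python
# Sketch: pull a few RSS headlines and hand them to the rabbit's TTS.
# Hedged assumptions: the api.jsp URL and the tts parameter name.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def headlines(rss_xml, limit=3):
    """Return the first few <item> titles from an RSS document string."""
    root = ET.fromstring(rss_xml)
    titles = [t.text for t in root.iter("title")]
    return titles[1:limit + 1]   # skip the channel's own title

def speak(text, serial, token):
    """Ask the rabbit to read `text` aloud via the (assumed) tts parameter."""
    params = urllib.parse.urlencode({"sn": serial, "token": token, "tts": text})
    urllib.request.urlopen("http://api.nabaztag.com/vl/FR/api.jsp?" + params)
```

A morning cron job would fetch the Google News feed, run it through `headlines()`, and call `speak()` once per title.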

Nabaztag has a built-in microphone and RFID reader, so there are some more potential input experiments in store, but what I'm most interested in is finding more complex data to output. There's a nascent hacker community assembling around Nabaztag, trying to figure out what makes it tick, so this is just the beginning.

Jay Howard
I agree about text-to-speech. Jokes are especially hard to get when they're read to you in a completely humorless voice. I've been listening to a G. K. Chesterton audio-book that I found online. Chesterton had a fun sense of humor, but I really have to pay attention in order to catch it because there are no voiced question marks or exclamation points. No emphasis of any kind.

Now that I'm used to it, though, I almost prefer it to the other audio-books I have narrated by humans.
scott bower
I worked on a software messaging module that mixed pre-scripted human narration with machine-generated ad-lib voiceovers, passed back from a service similar to AT&T's. It was a blast, much like listening to Subgenius Radio's remixes and samplings of NPR news. I think that show still runs on WREK at Georgia Tech. It's amazing how "machine-like" the commentary on NPR can sound with selective editing: hardly any inflection. But anyway...

Jeff, have you played around with SensorML? I find it hard to keep up with the physical computing community, but I don't think anyone has picked up on it yet. There's a guy in Seattle doing some innovative installations based on binary information using Mindstorms. He's scraping daily death tolls from news feeds on Iraq to drop plastic yellow BBs into a room, combining a physical visualization that shows magnitude over time with single deaths per day.

Nabaztag reminds me of an open source wi-fi Furby. Been wanting to play around with that little guy for a while.
Hey Scott, I hadn't heard about SensorML, but I can see how a common framework for communicating physical sensor data would be a benefit. Right now, any data I get from pretty much anywhere needs to be massaged into a form I can feed into Arduino or my Nabaztag.
