Rabbit’s AI Assistant Is Here. And Soon a Camera Wearable Will Be Too

The pathway leading into Rabbit’s venue—for the launch event of the R1, an artificial intelligence-powered device announced at CES 2024—was paved with gadgets from the past.

First was the orange JVC Videosphere, then the Sony Walkman, a Tamagotchi, a transparent Game Boy Color, heck, even the original Pokédex toy from 1998. At the very end of the hall was Teenage Engineering’s Pocket Operator, and across from it, a few concept prototypes of the Rabbit R1.

If the Pocket Operator stands out, seeing as it’s barely a decade old, that’s because the Swedish design firm Teenage Engineering helped design the R1. And at the launch event, CEO Jesse Lyu announced on stage that Jesper Kouthoofd, founder of Teenage Engineering, has joined Rabbit as its chief design officer (while maintaining his role as CEO of TE).

This little red retro gadget isn’t just pulling our nostalgia strings and taking design cues from the second half of the 1900s—Rabbit is also making a bold claim that its R1 deserves a spot in this tech hall of fame.

Rabbit also confirmed that a camera wearable to accompany the R1 is on the way, one that will allow the device to know by image alone what you’re pointing at or asking it to control.

Even the venue, the iconic TWA Hotel at JFK Airport in New York, which very much feels like a live-action set of The Jetsons and boasts a restored Lockheed Constellation “Connie” L-1649A repurposed into a cocktail lounge, underscored this yearning for the fun gadgetry of the past and the exciting promise of future glory.

Fifty years from now, the R1 will be the gadget you reminisce about from the start of the AI-fueled world. At least, that’s what Lyu hopes.

“Our mission is to create the simplest computer,” Lyu announced on stage to a rapt audience—and over the course of an hour, he laid out plans to achieve this goal. The R1, it turns out, is just the start.

I went over the minute details of this new AI device when I first reported on it from CES. It’s orange-red, no bigger than a stack of Post-its, and it has a little screen. To the right of the screen is a scroll wheel for interacting with the interface, and above it is a camera that can swivel to the front, to the back, or into the inner casing of the device for privacy. On the right edge is a button, which is the main way to select anything on the screen.

Much like with the Humane Ai Pin, you can talk to this device the way you would chat with Alexa, Siri, or Google Assistant, and ask it anything. It’s quite smart, so you can ask complex questions in a natural way—it’s mainly powered by Perplexity’s large language model, which helps it understand your query and deliver the answer via the speakers and screen. There’s no “hot word” to activate the microphone—you have to push the side button and talk, like a walkie-talkie.

The camera enables vision-powered AI tricks, too, so you can point the camera at a subject and the R1 can understand and parse what you’re looking at.

Lyu had a live demo on stage where he pointed the R1’s camera at a spreadsheet printed on a piece of paper. After it captured a photo, he asked the device to swap the placement of two columns and to send him a copy. Within seconds, an email appeared in Lyu’s inbox on his computer with a digital version of the spreadsheet, purportedly with the request addressed.

The R1 can perform actions like manipulating spreadsheets, translating languages, generating AI images, and ordering you a McDonald’s.

The R1 has quite a few other built-in tricks. You can take notes and access them via the “Rabbit Hole” web portal—and even edit them. It can handle translations with ease, as again demonstrated live on stage. You can make voice recordings on the device, access the WAV file in the web portal, and get an AI summary of the recording. There’s a built-in virtual keyboard when you need to type anything into the interface (like a Wi-Fi password).

Lyu demoed the R1’s Teach Mode as well, which lets you point the R1’s camera at a computer screen as you instruct it how to complete a task. After it learns, you can ask it to perform said task to save yourself the time and hassle. However, this feature is not yet available, and when it is, Rabbit says it will start with a small selection of users to beta test it.

But the goal for the R1 is more or less to replace your apps. Instead of hunting for an icon, just push the button and ask the R1 to handle something.

At CES, it seemed as though you’d be able to access multiple third-party apps through the R1 at launch—but, right now, there are just four services: Uber, DoorDash, Midjourney, and Spotify.

You connect to these via the Rabbit Hole web portal—which means yes, you are logging into these services through what seems to be a virtual machine hosted by Rabbit, handing over your credentials—and then you can ask the R1 to call an Uber, order McDonald’s, generate an image, or play a song. It’s using these services’ application programming interfaces (APIs) to tackle these tasks—and the R1 has been pre-trained to use them.

Lyu promises there’s plenty more on the way, of course. In the summer, we’re told to expect an alarm clock, calendar, contacts, GPS, memory recall, travel planning, and other features. Currently in development are Amazon Music and Apple Music integrations, and later on, we should see more third-party service integrations, including Airbnb, Lyft, and OpenTable.

You might be wondering, “Hang on a minute, that just sounds like a phone,” and you … wouldn’t be off the mark.

As we’ve seen with the clunky and limited Humane Ai Pin, a smartphone can perform all of these tasks better, faster, and with richer interactions. This is where you have to start looking carefully at Rabbit’s overall vision.

The idea is to speak and then compute. No need for apps—the computer will just understand. We are a long way from that, but at the launch event, Rabbit teased a wearable device that would understand what you’re pointing at.

Lyu suggested this wearable could understand you pointing at a Nest thermostat and asking to lower the temperature, without having to say the words “Nest” or “thermostat.” The image of the supposedly all-seeing wearable was blurred, though, so we don’t have much information to go on.

Lyu mentioned generative user interfaces, where a user will be able to have an interface of their own choosing—buttons on a screen placed where you want them, and at the perfect display size—and then claimed that Rabbit is working on an AI-native desktop operating system called Rabbit OS. Again, we don’t have many details, but my mind immediately went to Theo in Her installing OS1 on his PC.

An operating system that puts a personal voice assistant front and center. What could go wrong?

Rabbit is also working on an AI-native desktop operating system called Rabbit OS that puts its personal voice assistant front and center.

The Rabbit R1 retails for $199, and it’s available for purchase now, but units are shipping in batches, and, currently, if you place an order, you’ll get a unit in June.

Lyu won’t stop repeating that this device doesn’t require a subscription—unlike the Humane Ai Pin—but it’s worth noting that you have to purchase your own monthly data plan and insert a SIM card into the 4G-enabled R1 for it to be useful when you’re away from Wi-Fi (unless you plan to tether it to your phone).

The company says it has sold 100,000 units in the first quarter of 2024. I picked up my unit at the event and have now unboxed it—my early impressions are that it’s undeniably a cute piece of tech, but I’ll have more thoughts lined up in a review after I’ve put it through its paces.



Julian Chokkattu
