Humane’s screen-free wearable AI assistant leaks in first demo clips

Humane, the startup founded by former Apple employees Imran Chaudhri and Bethany Bongiorno, gave the first live demo of its new device: a wearable gadget with a projected display and AI-powered features intended to act as a personal assistant.

Chaudhri, who serves as Humane’s chairman and president, demonstrated the device onstage during a TED talk. A recording of the talk was obtained by Inverse and others ahead of its expected public release on April 22nd.

“It is a new type of wearable device and platform that’s built completely from the ground up for artificial intelligence,” Chaudhri says in comments transcribed by Inverse. “And it’s completely self-contained. You don’t need a smartphone or other device to pair with it.”

Thanks to the presentation, we now have at least some idea of what the device can do and how it does it without a traditional touchscreen interface. During the presentation, Chaudhri wears the device in his breast pocket, taps it instead of saying a wake word, and then issues voice commands much as you would to an Amazon Echo smart speaker. Axios notes that the device also supports gesture commands.

“Imagine this: you’ve been in meetings all day and you just want a rundown of what you’ve been missing,” Chaudhri says, before tapping the device and asking to be caught up. In response, the device presents a summary of emails, calendar invitations, and messages. It’s not clear exactly where the wearable pulls this information from, given Chaudhri’s comments about not needing a paired smartphone, so it’s presumably connected to cloud-based services.

In addition to spoken responses, the device can also project a screen onto nearby surfaces. At one point in the presentation, Chaudhri receives a phone call from Bethany Bongiorno (Humane’s co-founder and CEO, and Chaudhri’s wife), which the device projects onto his hand. The camera angle obscures how Chaudhri picks up the call, and he doesn’t appear to interact with the projected interface at any point, though it displays what look like buttons. But he’s able to take the call and hang up as if he were using a phone on speakerphone.

As well as projecting a screen, the device includes a camera that can perform object recognition on the world around it, similar to what we saw in a leaked demo for investors. Onstage, Chaudhri uses the camera to identify a chocolate bar and tell him whether or not he should eat it based on his dietary requirements.

Finally, there’s a translation demonstration, in which Chaudhri holds down a button on the device, says a sentence, and then waits while the Humane wearable speaks the same sentence back in French. In the clip, Chaudhri never instructs the device to translate his words, so it’s not clear how this functionality is activated.

“We like to say the experience is screenless, seamless, and sensing, allowing you to access computing power while staying present in your surroundings, fixing a balance that’s felt out of place for some time now,” Chaudhri says, per Inverse.

Humane isn’t the first company to try to offer these kinds of features, but it’s worth noting that it’s trying to do it all in a relatively compact, screenless device that doesn’t require a paired smartphone. What isn’t clear to me, however, is how usable the device will be when you’re out in public or in a hurry. For all their faults, smartphones are still great at quickly getting you the details you need and displaying them on a screen that only you can see, and it’s not clear whether Humane’s mix of projected screen and speakers can match that just yet.
