Speculative gadgets at the Future Interfaces Group

For a glimpse of the everyday devices we could be using a decade from now, there are worse places to look than inside the Future Interfaces Group (FIG) lab at Carnegie Mellon University.

During a recent visit to Pittsburgh by Engadget, PhD student Gierad Laput put on a smartwatch and touched a MacBook Pro, then an electric drill, then a doorknob. The moment his skin pressed against each, the name of the object popped up on an adjacent computer screen. Each item had emitted a unique electromagnetic signal which flowed through Laput's body, to be picked up by the sensor on his watch.

The software essentially knew what Laput was doing in dumb meatspace, without a pricey sensor needing to be embedded in every object he made contact with (or its batteries recharged).
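The article doesn't spell out how the watch turns that body-coupled signal into an object name, but the usual pattern in work like this is to reduce each window of the EM signal to spectral features and hand them to a trained classifier. Here is a minimal sketch in Python; the sample rate, feature choice and classifier are assumptions for illustration, not the lab's actual design:

```python
# Hypothetical EM-signature recognition sketch, not FIG's real pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 250_000  # assumed: EM noise sits well above audio rates
WINDOW = 4096          # samples per classification window

def em_features(window: np.ndarray) -> np.ndarray:
    """Reduce one raw EM window to a log-magnitude spectrum vector."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return np.log1p(spectrum)

def train(windows: list[np.ndarray], labels: list[str]) -> RandomForestClassifier:
    """Fit a classifier on windows recorded while touching labeled objects."""
    X = np.stack([em_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf

def identify(clf: RandomForestClassifier, window: np.ndarray) -> str:
    """Name the touched object from a single window of EM signal."""
    return clf.predict(em_features(window)[np.newaxis, :])[0]
```

In deployment the same loop would run continuously on the watch, classifying each new window and surfacing a name only when the prediction is stable across several windows.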

But more compelling than this one neat device was how the lab has crafted multiple ways to create entire smart environments from a single gadget.

There is 1950s KGB technology that uses lasers to read vibrations. And ultrasound software that parses actions like coughing or typing. And an overclocked accelerometer on another smartwatch that picks up tiny vibrations from analog objects, sensing when a user is sawing wood or helping them tune an acoustic guitar, as sketched below.
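The guitar-tuning trick lends itself to a concrete illustration: find the dominant vibration frequency in each accelerometer window and compare it against the nearest standard string. A hedged sketch, assuming an accelerometer oversampled to a few kilohertz; the sample rate, tolerance and function names here are illustrative:

```python
# Hypothetical vibration-based tuner sketch, not the lab's implementation.
import numpy as np

SAMPLE_RATE = 4000  # Hz; assumes the watch accelerometer is oversampled
                    # far past its usual ~100 Hz rate

# Standard-tuning fundamentals, low E to high E, in Hz.
STRINGS = {"E2": 82.41, "A2": 110.00, "D3": 146.83,
           "G3": 196.00, "B3": 246.94, "E4": 329.63}

def dominant_frequency(samples: np.ndarray) -> float:
    """Return the strongest frequency component of a vibration window."""
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum)]

def tuning_advice(samples: np.ndarray, tolerance: float = 1.0) -> str:
    """Match the detected pitch to the nearest string and suggest a fix."""
    f = dominant_frequency(samples)
    name, target = min(STRINGS.items(), key=lambda kv: abs(kv[1] - f))
    if abs(f - target) < tolerance:
        return f"{name} in tune ({f:.1f} Hz)"
    action = "tighten" if f < target else "loosen"
    return f"{name}: {action} ({f:.1f} Hz vs {target:.2f} Hz)"
```

The same dominant-frequency-plus-classifier idea covers the sawing case too: repetitive tool motions produce characteristic vibration spectra that a model can label, much as the EM sketch above labels objects.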

Founded in 2014, the research lab is a nimble, fast-prototyping team of four PhD students led by Chris Harrison, an assistant professor of human-computer interaction. Each grad student has a specialty such as computer vision, touch, smart environments or gestures. Academic advances in this field move fast and tend to run well ahead of widespread (and profitable) industry releases: VR, touchscreens and voice interfaces all first appeared in the 1960s.

Yet a lot of the FIG's projects die right here in the lab, where every glass office window is saturated with neat bullet points in marker pen, storyboard panels and grids of colorful post-its. Harrison says the typical project length is only six months. Of the hundreds of speculative ideas the lab generates each year, at most 20 are turned into working prototypes, and 5-10 may be published in the research community. Two projects have spun off into funded startups, while a handful have been licensed to third parties (the lab receives funding from companies like Google, Qualcomm and Intel).

Combining machine learning with creative applications of sensors, FIG is trying to find the next ways we’ll interface with computers beyond our current modes of voice and touch. Key technologies whose interfaces are yet to be standardized include smartwatches, AR/VR, and the internet of things, says Harrison.