Last month, Apple launched iOS 11, and with it, a slate of augmented reality (AR) applications that allow users to do things like paint in thin air, try out virtual furniture, and fill the world with gifs galore. And Apple is not alone in the AR charge; Google, Microsoft and Snap are all in the game. Meanwhile, maybe the most telling moment regarding the state of AR in 2017 came back in April, when Facebook chairman/CEO Mark Zuckerberg announced his intention to “make the camera the first mainstream AR platform.”

But while most people were gazing at all the shiny stuff presented by these titans of tech, they didn’t hear the revolutionary developments taking place in a totally different — but equally vital — type of reality augmentation: audio. In fact, AR for the ears is just as poised for mainstream adoption as it is for the eyes.

Leaving aside the semantic debate around “Augmented” vs. “Mixed” Reality, as a general category AR can be defined as the “integration of digital information with the user’s environment in realtime using a device.” It’s easy to grasp how users will see AR content through screens (and later glasses), but how will they hear augmented reality?

Hearables

A renaissance in “Hearables” has begun to reveal how we’ll be able to control the world of sound around us. In a nutshell, these are wireless earbuds that pair with our phones — think of the movie Her — but they’re so much more than a hardware upgrade. These “smart headphones,” such as Doppler’s Here One, Nuheara’s IQBuds and Google’s recently announced Pixel Buds, grant an unprecedented degree of precision in “tuning” the soundtrack of our lives — and in blending our lives into that soundtrack.

Noise-cancelling headphones are designed to uniformly dim the sound around you, and while hearables can do that, they offer something more complex: the ability to “live mix” the world with realtime signal processing. We’ve been able to EQ our music and devices in the past, but it wasn’t something we could make instantly responsive to the noises of life. Here One and the IQBuds have preset filters for common environments like offices and cities, but the real fun comes in the tinkering. Maybe you want to hear ambient sounds louder than you otherwise would, or you’d prefer to tweak certain ranges, or you just want to add reverb for kicks. That’s the power of smart sound.
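
To picture what that “live mix” involves under the hood, here is a minimal sketch in Python (purely illustrative, not any vendor’s actual code; the function names and parameters are invented): a block of microphone audio gets one frequency band boosted with a standard peaking EQ filter before it reaches your ear, the same operation a hearable would repeat on every block as it arrives.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad coefficients for a peaking EQ centered at f0 Hz (Audio EQ Cookbook formulas)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def live_mix_block(mic_block, fs=48_000, boost_hz=200, gain_db=6.0):
    """Boost one frequency band of a single block of incoming audio.

    A real device would also carry filter state between blocks and run this
    at very low latency; that bookkeeping is omitted here for brevity.
    """
    b, a = peaking_eq_coeffs(fs, boost_hz, gain_db)
    return lfilter(b, a, mic_block)

# 10 ms of stand-in "microphone" samples, processed as soon as they arrive
block = np.random.randn(480)
processed = live_mix_block(block)
```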

An Elite Sport Bluetooth earphone manufactured by Jabra GN is displayed during the ShowStoppers event at the 2017 Consumer Electronics Show in Las Vegas.
David Paul Morris/Bloomberg via Getty Images

This also means you can create unique concoctions with your tunes. If you appreciate the low-end rumble of the subway, you might layer it under a minor-key banger like Cardi B’s “Bodak Yellow.” Maybe you’d prefer to hear beachside seagull calls and waves washing up on the shore beneath Kamasi Washington’s “Desire.” You don’t need fancy post-production software to do this; with hearable technology, you do it live. Whatever you choose, you can make each instance completely distinct.

Though recording isn’t a native feature yet, it’s already possible with third-party recorder apps — meaning you can save these live mixes to revisit later (and you’ll be able to save higher-fidelity versions as the apps evolve). Plus — say you really dig the effect of a given environmental sound — you can record that sound in isolation to add to other audio creations later.

The “wearable” reference in the name means hearables are designed to merge with daily life. Paired with AI assistants like Siri and Google Assistant, much of this functionality can be accessed through voice — so if a user decides they’d prefer to hear more reverb, they can simply say, “Turn up the reverb.” The form factor is crucial; for AR to thrive, it needs a mainstream infrastructure. The lightweight, (largely) hands-free approach situates hearables as an appealing, market-ready AR solution with staggering potential.

Music With a Brain

Speaking of AI, that’s where the soundtrack of the future gets even wilder.

Hearables allow users to listen to music in totally new ways — and they listen to you in return. The Jabra Elite Sport earbuds, for instance, can track your speed, distance, calories, heart rate and even VO2 Max estimations (a measure of the maximum amount of oxygen the body can use that helps determine aerobic endurance). Equipped with this knowledge — and plugged into platforms like Spotify and SoundCloud — hearables will soon be able to auto-select the perfect track to get you through the trough of your workout. Here One has a “Smart Suggest” option that uses location data to suggest tuning presets; if your GPS indicates you’re in a restaurant, it will recommend the “Restaurant” setting (which amplifies front-facing audio and dampens the rest). This feature points to a future where connected apps can automatically harness data to create the perfect EQ for your songs at any given moment — modulating it in realtime to maintain the same sound as you move through different environments.
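
As a rough illustration of the kind of logic that points toward, here is a hypothetical sketch (invented names, presets and thresholds; not Doppler Labs’ or Jabra’s actual software) of the two ideas above: choosing a tuning preset from the venue your location data reports, and choosing a workout track whose tempo tracks your heart rate.

```python
PRESETS = {
    # "Restaurant": amplify what's in front of you (speech), damp the rest
    "restaurant": {"front_gain_db": +6, "ambient_gain_db": -10},
    "office":     {"front_gain_db": +2, "ambient_gain_db": -6},
    "city":       {"front_gain_db":  0, "ambient_gain_db": -4},
}

def suggest_preset(venue_type: str) -> dict:
    """Map the venue type reported by the phone's location data to a tuning preset."""
    return PRESETS.get(venue_type, {"front_gain_db": 0, "ambient_gain_db": 0})

def pick_track(library: list, heart_rate_bpm: float) -> dict:
    """Pick the track whose tempo sits closest to the listener's current heart rate."""
    return min(library, key=lambda track: abs(track["bpm"] - heart_rate_bpm))

library = [
    {"title": "warm-up track", "bpm": 100},
    {"title": "mid-run track", "bpm": 150},
    {"title": "sprint track",  "bpm": 175},
]
print(suggest_preset("restaurant"))              # -> the "Restaurant"-style setting
print(pick_track(library, heart_rate_bpm=162))   # -> the 150 BPM track
```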

That same location data can be used to bring AR to real-world sites; geolocation will provide new ways to socialize around music. As devices intended to stay in your ears for extended periods, hearables lend themselves to the notion of “planted” songs and playlists. Imagine being able to create mixtapes along particular routes and share them with friends or followers. If you thought a particular song was the perfect encapsulation of a given city block, for example, you could “drop” it like a pin so that anybody following you could experience that block the exact same way when they crossed the trigger point. Or, say you bike to work and believe you’ve created the single greatest playlist for cruising through the commute. With geocoded music, you could share that playlist with other people taking the same route so that songs were cued to particular locations along the ride. Artists might even release albums this way, turning an album release into a scavenger hunt of sorts.
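
A back-of-the-envelope sketch of how such a “dropped” song might trigger (a hypothetical data model, not any existing service’s API): each pinned song carries a location and a radius, and it cues up when a listener’s coordinates fall inside that radius.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# a song "planted" on one (made-up) city block, audible within 75 meters
dropped_songs = [
    {"title": "Bodak Yellow", "lat": 40.7433, "lon": -73.9867, "radius_m": 75},
]

def songs_triggered_at(lat, lon):
    """Return every planted song whose trigger radius covers the listener's position."""
    return [s for s in dropped_songs
            if haversine_m(lat, lon, s["lat"], s["lon"]) <= s["radius_m"]]

print(songs_triggered_at(40.7434, -73.9866))  # inside the block, so the song cues up
```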

Gamers play Pokémon GO.
REMKO DE WAAL/AFP/Getty Images

AR games like Pokémon GO will inevitably start to employ AR audio, too; in-game music will adapt depending on circumstance. It could be something as simple as auto-selecting a song from your library based on location and the time of day, or it could be programmatic music that adjusts to your biometrics in realtime: you spot a Mewtwo, your heart rate spikes, and the music shifts to augment the experience.
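
A toy sketch of that last scenario (invented thresholds, not anything from Niantic): track a rolling heart-rate baseline and switch the soundtrack to a tension layer when the current reading spikes well above it.

```python
from collections import deque

class AdaptiveScore:
    """Pick a music layer from heart-rate readings (one reading per second)."""

    def __init__(self, spike_ratio=1.25, window=30):
        self.history = deque(maxlen=window)  # rolling baseline of recent readings
        self.spike_ratio = spike_ratio       # how far above baseline counts as a spike

    def update(self, heart_rate_bpm: float) -> str:
        baseline = sum(self.history) / len(self.history) if self.history else heart_rate_bpm
        self.history.append(heart_rate_bpm)
        if heart_rate_bpm > baseline * self.spike_ratio:
            return "tension_layer"   # e.g. a Mewtwo just appeared
        return "ambient_layer"

score = AdaptiveScore()
for bpm in [72, 74, 73, 75, 118]:  # the spike comes on the last reading
    layer = score.update(bpm)
print(layer)  # -> "tension_layer"
```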

And our participation will produce new data that will change the music we’re able to encounter in the first place. Spotify generates your “Discover Weekly” playlists based on your and other users’ behaviors. Think about how wild something like that gets when you “three-dimensionalize” that information by adding factors like time, location and EQ into the mix.

This technology is still young, but powerful solutions already exist today — and that’s not even scratching the surface of other advancements in the world of sound. There have been exciting developments in the Internet of Things (internet-connected “smart” household devices), artificially intelligent music, and, most importantly, spatial audio, which adds “3D” directionality to sound. These technologies are converging with hearables to further immerse us in a futuristic music fantasy world that’s quickly becoming reality.

Keep an ear out; the first few notes sound pretty amazing.

Source: Billboard