The Smithsonian American Art Museum has come up with an amazing new use for SoftBank’s beloved Pepper.

Meet the robotic museum guide that will turn art into sound for the visually impaired


Some art institutions get so wrapped up in preserving the past that they fail to properly engage the future. The Smithsonian American Art Museum (SAAM) is not one of those institutions. Fresh off unveiling a virtual reality Burning Man installation this past August, SAAM turned one of its robotic tour guides, Pepper, into a multimedia installation that transformed paintings into ambient synthesizer music. At Tech Family Day last month, Pepper could be found engaging with visitors, reading their emotions, and interpreting them through a selected painting and ambient tunes.

Rachel Goslins, director of the Arts & Industries Building, who kickstarted the Pepper project, said turning Pepper into a synth player grew out of the museum’s work with SoftBank Robotics USA. The company, which created Pepper, offered to donate 100 Peppers to the Smithsonian to experiment with visitor engagement and education. While the Smithsonian had been running various experiments with Pepper, Goslins says she turned to Ian McDermott, a member of the Hirshhorn Museum’s ArtLab, a digital media studio for teens, to create an experience for Tech Family Day.

“[They] coded and built this new way of using Pepper,” says Goslins. “It’s organic, and fun, and extraordinary.”


[Photo: courtesy of The Smithsonian]

As the project’s designer, McDermott tells Fast Company that Pepper is programmed with a database of 50 images from SAAM’s artwork collection. Each image corresponds to one of five emotions—happy, sad, neutral, angry, and surprised. Pepper reads a visitor’s expression using facial emotion detection algorithms; when it sees an emotion like anger, it displays an image that corresponds to that emotion on a tablet fixed to its chest. If someone seems happy, Pepper will display an image with light, vibrant colors.
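In practice, that behavior boils down to a lookup from a detected emotion to a pool of candidate images. Here is a minimal C++ sketch of the idea, assuming a simple map keyed by the five emotions; the filenames and the fallback-to-neutral rule are illustrative placeholders, not SAAM’s actual data.

```cpp
#include <iostream>
#include <map>
#include <random>
#include <string>
#include <vector>

// Hypothetical lookup from the five detected emotions to images drawn
// from SAAM's collection; these filenames are placeholders, not the
// museum's actual data.
static const std::map<std::string, std::vector<std::string>> kImagesByEmotion = {
    {"happy",     {"light_vibrant_01.png", "light_vibrant_02.png"}},
    {"sad",       {"muted_blue_01.png"}},
    {"neutral",   {"calm_gray_01.png"}},
    {"angry",     {"dark_red_01.png"}},
    {"surprised", {"bold_contrast_01.png"}},
};

// Pick an image to show on Pepper's chest tablet, falling back to
// "neutral" if the detector reports something outside the five categories.
std::string pickImage(const std::string& emotion) {
    auto it = kImagesByEmotion.find(emotion);
    if (it == kImagesByEmotion.end()) it = kImagesByEmotion.find("neutral");
    static std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<std::size_t> pick(0, it->second.size() - 1);
    return it->second[pick(rng)];
}

int main() {
    // E.g., the face detector reports "happy"; show a bright image.
    std::cout << pickImage("happy") << "\n";
}
```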


[Photo: courtesy of The Smithsonian]

Pepper’s light sensors detect colors on the RGB scale (red, green, and blue) individually. An Arduino microcontroller translates these readings into 1s and 0s and sends the data on to a Bluetooth transceiver, which bounces it to another Bluetooth transceiver, which feeds the information into a second Arduino hooked up to a Moog Werkstatt, a synthesizer designed to work specifically with the Arduino’s inputs and outputs.
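The sending half of that chain can be sketched as a short Arduino (C++) program. The sensor pins, baud rate, and three-byte frame format below are assumptions for illustration, not details confirmed by the museum.

```cpp
// Conceptual sender sketch: read an analog RGB light sensor and stream
// one byte per channel to a Bluetooth transceiver on the serial port.
const int RED_PIN = A0;    // assumed wiring, not the installation's actual pins
const int GREEN_PIN = A1;
const int BLUE_PIN = A2;

void setup() {
  Serial.begin(9600);  // UART feeding the Bluetooth transceiver
}

void loop() {
  // Scale 10-bit ADC readings (0-1023) down to one byte per channel.
  byte r = analogRead(RED_PIN) >> 2;
  byte g = analogRead(GREEN_PIN) >> 2;
  byte b = analogRead(BLUE_PIN) >> 2;

  // Send one RGB frame; the receiving Arduino turns these values
  // into control signals for the Werkstatt.
  Serial.write(r);
  Serial.write(g);
  Serial.write(b);
  delay(50);  // roughly 20 frames per second
}
```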


The Werkstatt processes the signal into dissonant, pulsating tones. Some are pleasant bleeps and bloops, while others are higher-pitched squeaks. The Werkstatt then sends these pulses to two Moog Mother-32 semi-modular synths, which route the sounds to a Eurorack modular system, where other effects are applied to create a more complex and melodic mix.

“Things like tempo will be shifted based on the colors perceived by Pepper,” says McDermott. “You end up getting this ambient music that’s running from images from the museum, and the images get translated to sound ultimately through a series of relay points.”
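On the receiving end, a second Arduino can translate those color frames into control signals for the synth chain, along the lines McDermott describes. The sketch below is a conceptual illustration only: it maps the red channel to a pitch control voltage (low-pass-filtered PWM) and overall brightness to tempo. The pin choices and the color-to-sound mappings are assumptions, not the installation’s actual patch.

```cpp
// Conceptual receiver sketch: turn incoming RGB bytes into a pitch
// control voltage and a tempo for gate pulses driving the Werkstatt.
const int PITCH_PWM_PIN = 9;  // PWM out, low-pass filtered to a CV (assumed)
const int GATE_PIN = 8;       // gate/trigger pulses set the tempo (assumed)

void setup() {
  Serial.begin(9600);         // UART fed by the Bluetooth transceiver
  pinMode(GATE_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() >= 3) {
    byte r = Serial.read();
    byte g = Serial.read();
    byte b = Serial.read();

    // Stronger reds read as higher pitch (illustrative mapping)...
    analogWrite(PITCH_PWM_PIN, r);

    // ...while overall brightness speeds up the pulse: 600 ms gaps
    // for dark images down to 100 ms for bright ones.
    int brightness = (r + g + b) / 3;
    int gapMs = map(brightness, 0, 255, 600, 100);

    digitalWrite(GATE_PIN, HIGH);
    delay(10);
    digitalWrite(GATE_PIN, LOW);
    delay(gapMs);
  }
}
```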

[Photo: courtesy of The Smithsonian]

PASSING THE PEPPER

Later this fall, the Smithsonian hopes to roll out accessibility programs with Pepper. The museum’s accessibility team already offers verbal description tours, in which a guide describes what a painting looks like for visually impaired guests. With Pepper, the museum can establish a direct correspondence between a painting and sound. McDermott’s project for Tech Family Day was thus initiated, in part, to hack together a solution for future accessibility programs.

“Where I take this from here is Pepper can now go through a gallery and point its hand, or I can grab the data from her cameras, translate that light data, and run it through the system,” McDermott says. “With that idea, we can have Pepper go through the galleries and remix existing works into a way that they weren’t intended to be absorbed, sonically rather than visually, or open it up to provide accessibility to the visually impaired and give tours that way.”

[Photo: courtesy of The Smithsonian]

McDermott, who also works as an educator within the ArtLab studio, has been teaching teens to program with Pepper. He says that the robot’s physicality has been invaluable in teaching computer programming and robotics to the teens, because they can see Pepper move position by position like a stop-motion animation.

Beyond the Smithsonian’s efforts in accessibility, education, and turning artworks into multimedia experiences, there is another element at play. While Pepper’s ability to read human emotions and turn visual art into sound art is intriguing, the Smithsonian seems to have unexpectedly engaged with one of the chief concerns of the future—how humans and robots will interact as artificial intelligence advances. Figures like Elon Musk and the late Stephen Hawking have been keen on sounding alarm bells about AI and robots. But these creations, and the programs they run, needn’t be adversarial. Goslins seems to agree.

“On a philosophical level, art is an emotional form of creation, right?” she muses. “It’s a bit poetic to think that we’re lending some of our human emotions to a robot, to create a new form of artistic expression.”

Perhaps, in the future, humans and robots can learn and create together. And what better way than by interpreting and fashioning new artistic experiences?

Source: fastcompany.com