In September, a lively audience gathered at the MIT Media Lab for a concert featuring famed keyboardist Jordan Rudess, alongside his collaborators, violinist and vocalist Camilla Bäckman and an innovative artificial intelligence model known informally as the “jam_bot.” This AI, developed with a team from MIT, was making its public debut as a work in progress. Throughout the performance, Rudess and Bäckman exchanged the effortless signals of seasoned musicians, while Rudess engaged the jam_bot in a novel and intriguing way. During one segment inspired by Bach, Rudess alternated between playing and allowing the AI to carry on, expressing a mix of bemusement, curiosity, and concentration as it continued the baroque-inspired piece. At the concert’s conclusion, he candidly described the experience as “a combination of a whole lot of fun and really, really challenging.”
Rudess is not only an acclaimed keyboardist, often called one of the greatest in history, but also widely recognized for his work with the progressive metal band Dream Theater, which is set to embark on its 40th anniversary tour this fall. He is also a prolific solo artist, an educator, and the founder of Wizdom Music. With formal classical training that began at The Juilliard School when he was nine, Rudess blends improvisational flair with a spirit of experimentation.
Last spring, Rudess became a visiting artist through the MIT Center for Art, Science and Technology (CAST), collaborating with the Media Lab's Responsive Environments group to develop AI-driven music technology. His main partners in the project are graduate students Lancelot Blanchard, who has a background in classical piano and explores musical applications of generative AI, and Perry Naseck, an artist and engineer focused on interactive and kinetic media. The project is led by Professor Joseph Paradiso, a longtime admirer of Rudess whose own work spans physics and avant-garde music technology.
The goal of the team was to create a machine learning model that captures Rudess’ unique musical style. In a paper co-authored with MIT’s music technology professor Eran Egozy and published in September, they introduced the concept of “symbiotic virtuosity,” aimed at achieving seamless real-time duets between human musicians and the AI, where each performance builds upon the last, generating new music in front of live audiences. Rudess provided the initial musical data to train the AI, offering continuous feedback while Naseck focused on developing a visual interface to enhance the audience experience.
Recognizing that modern audiences are accustomed to multimedia elements in concerts, Naseck designed a sculptural installation that visually responded to the AI’s musical output, with light patterns shifting as the AI changed chords. During the concert, a grid of petal-shaped panels lit up dynamically in conjunction with the AI’s contributions, distinguishing its role from that of the human musicians while conveying emotional depth.
The team’s ambition was a compelling audiovisual experience that elevates the performance: communicating the AI’s decisions to the audience much as jazz musicians visually cue one another. Naseck’s installation, built from scratch at the Media Lab with fellow collaborators, illustrated the AI’s melodic decisions through kinetic movement. Gentle swaying, for instance, indicated that Rudess had the lead, while a dramatic unfurling could accompany the AI’s grand chords during slower melodies.
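The cue mapping described above can be sketched as a simple lookup from coarse musical features to a panel gesture. This is a minimal illustration only: the feature names, thresholds, and gesture labels below are hypothetical, not the installation's actual interface.

```python
# Illustrative sketch: choose a kinetic-panel gesture from a few coarse
# musical features. The fields (leader, chord_size, tempo_bpm) and the
# gesture names are invented for this example.

def motion_cue(leader: str, chord_size: int, tempo_bpm: float) -> str:
    """Map who is leading and what is being played to a panel gesture."""
    if leader == "human":
        return "gentle_sway"        # human has the lead: subtle motion
    if chord_size >= 4 and tempo_bpm < 90:
        return "dramatic_unfurl"    # grand chords over a slower melody
    return "pulse_with_beat"        # default: track the AI's rhythm

print(motion_cue("human", 3, 120))  # human leading
print(motion_cue("ai", 5, 72))      # AI playing big, slow chords
```

A real system would derive features like `chord_size` and `tempo_bpm` from the live MIDI stream rather than passing them in by hand.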
Blanchard modeled Rudess’ musicality with a music transformer, a neural network architecture that, much like a language model predicting the next word, predicts the next note in a sequence. By fine-tuning the model on Rudess’ recorded performances, Blanchard ensured the AI could improvise in real time. This framing opened pathways for the AI to respond to Rudess’ musical cues, creating a genuine dialogue between artist and machine.
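The next-note idea can be sketched in a few lines. This is purely illustrative: a toy bigram count model stands in for the fine-tuned transformer, and the MIDI note numbers are invented, but the autoregressive loop, generating one note at a time conditioned on what came before, is the same shape of computation.

```python
from collections import Counter, defaultdict

# Toy stand-in for a next-note model: a bigram table counting which
# note tends to follow which. A transformer would condition on the
# whole sequence; the sampling loop below is the same either way.

def train_bigram(sequence):
    """Count note-to-note transitions in a training sequence."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, note):
    """Greedy prediction: the most frequent successor of `note`."""
    if note not in counts:
        return note  # fallback: repeat the note
    return counts[note].most_common(1)[0][0]

def continue_phrase(counts, seed, length):
    """Autoregressively extend a seed phrase, one note at a time."""
    phrase = list(seed)
    for _ in range(length):
        phrase.append(predict_next(counts, phrase[-1]))
    return phrase

# Invented training data: MIDI note numbers from a repeated motif.
training = [60, 62, 64, 60, 62, 64, 60, 62, 64, 65]
model = train_bigram(training)
print(continue_phrase(model, [60], 5))
```

A production model would sample from a probability distribution rather than always taking the most likely note, which is what lets it improvise instead of looping.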
The project inspired ideas beyond music performance; there are ambitions for educational applications stemming from this collaboration. Rudess noted the AI’s potential for teaching music based on ear-training exercises, illustrating how it could serve a multifaceted role within the music community.
Despite the excitement surrounding AI in music, Rudess acknowledges the skepticism from some musicians who feel threatened by this technology. He remains committed to guiding the development of AI for positive applications, ensuring it enhances creativity rather than undermining it. Paradiso emphasizes the importance of integrating AI into musical practices to elevate the collective experience.
Rudess’ motivation to explore AI technology aligns with his background in music innovation. Initially drawn to the Media Lab to experiment with a unique musical device, he has since immersed himself in a wide array of projects and classes at MIT—teaching improvisation, showcasing technology, and engaging with students on their journey through musical exploration.
Reflecting on his experience, Rudess expresses a rush of inspiration when he visits MIT, feeling that his musical concepts intertwine beautifully with the innovative atmosphere of the university. As the journey with MIT continues, the collaboration has sparked an array of possibilities, positioning Rudess and his team at the forefront of experimenting with how AI can redefine musical interaction and creativity.