Work in this area seeks to use computational tools to enable musical creativity, in particular to give novices a variety of new ways to experience it. Signal processing and machine learning systems are combined with insights from traditional music creation processes to develop new tools and new paradigms for music.
Songsmith generates musical accompaniment to match a singer’s voice. Just choose a musical style, sing into your PC’s microphone, and Songsmith will create backing music for you. Then share your songs with your friends and family, post your songs online, or create your own music videos. Songsmith evolved directly from a research collaboration between the CUE and Knowledge Tools groups at MSR.
User-Specific Learning for Recognizing a Singer’s Intended Pitch
We consider the problem of automatic vocal melody transcription: translating an audio recording of a sung melody into a musical score. While previous work has focused on finding the closest notes to the singer's tracked pitch, we instead seek to recover the melody the singer intended to sing. Often, the melody a singer intended to sing differs from what they actually sang; our hypothesis is that this occurs in a singer-specific way. We thus pursue methods for singer-specific training that use machine learning to combine different pitch-prediction methods.
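The gap between naive quantization and singer-specific prediction can be illustrated with a minimal sketch. This is not the system described above; the constant-offset model and the function names (`fit_singer_bias`, `predict`) are assumptions chosen to make the idea concrete: a singer who consistently sings sharp is mis-transcribed by nearest-semitone rounding, but correctly transcribed once their systematic offset is learned from training pairs.

```python
# Illustrative sketch (not the actual system): model a singer's intent as
# nearest-semitone quantization after removing a learned per-singer offset.

def nearest_semitone(pitch):
    """Baseline: quantize a tracked pitch (fractional MIDI note number)
    to the nearest semitone."""
    return round(pitch)

def fit_singer_bias(tracked, intended):
    """Estimate a singer's systematic sharp/flat tendency from training
    pairs of (tracked pitch, intended MIDI note)."""
    errors = [t - i for t, i in zip(tracked, intended)]
    return sum(errors) / len(errors)

def predict(pitch, bias):
    """Singer-specific prediction: remove the learned bias, then quantize."""
    return round(pitch - bias)

# A singer who sings ~0.4 semitones sharp: the baseline rounds 61.6 up to 62,
# but the bias-corrected predictor recovers the intended note 61.
bias = fit_singer_bias([60.4, 62.45, 64.38], [60, 62, 64])
```

A real system would combine several such predictors (for example, key-aware or context-aware ones) with learned per-singer weights rather than a single offset; the sketch shows only the simplest instance of the idea.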
Data-Driven Exploration of Musical Chord Sequences
We present data-driven methods for supporting musical creativity by capturing the statistics of a musical database. Specifically, we introduce a system that supports users in exploring the high-dimensional space of musical chord sequences by parameterizing the variation among chord sequences in popular music. We provide a novel user interface that exposes these learned parameters as control axes, and we propose two automatic approaches for defining these axes: one based on a novel clustering procedure, the other on principal component analysis.
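The PCA-based approach can be sketched in miniature. Everything here is an illustrative assumption, not the paper's actual features or system: chord sequences are represented as vectors of chord-transition (bigram) counts, and the first principal component of the database, found by power iteration, serves as a single control axis along which sequences can be placed.

```python
# Hedged sketch: derive one "control axis" over chord sequences as the first
# principal component of bigram-count vectors. The feature choice and the
# tiny chord vocabulary are assumptions for illustration.

CHORDS = ["C", "F", "G", "Am"]

def transition_vector(seq):
    """Represent a chord sequence as a vector of chord-transition counts."""
    idx = {c: i for i, c in enumerate(CHORDS)}
    v = [0.0] * (len(CHORDS) ** 2)
    for a, b in zip(seq, seq[1:]):
        v[idx[a] * len(CHORDS) + idx[b]] += 1.0
    return v

def first_principal_axis(vectors, iters=200):
    """First principal component of the centered data, via power iteration
    on the covariance (no external libraries)."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    X = [[v[j] - mean[j] for j in range(d)] for v in vectors]
    w = [j + 1.0 for j in range(d)]  # asymmetric start to avoid a fixed point
    for _ in range(iters):
        proj = [sum(x[j] * w[j] for j in range(d)) for x in X]   # X w
        w = [sum(proj[i] * X[i][j] for i in range(n)) for j in range(d)]  # X^T X w
        norm = sum(c * c for c in w) ** 0.5 or 1.0
        w = [c / norm for c in w]
    return mean, w

def coordinate(seq, mean, axis):
    """Place a chord sequence along the learned control axis."""
    v = transition_vector(seq)
    return sum((v[j] - mean[j]) * axis[j] for j in range(len(axis)))
```

On a toy database split between C-G vamps and Am-F vamps, the learned axis separates the two styles: sequences from the two groups land on opposite sides of the origin, which is exactly the behavior a slider-style control axis needs.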
Mapping Physical Controls for Tabletop Groupware
Multi-touch interactions are a promising means of control for interactive tabletops. However, multi-touch controls lack the precision and tactile feedback that some tasks demand. We present an approach that offers precise control and tactile feedback for tabletop systems through the integration of dynamically re-mappable physical controllers with the multi-touch environment, and we demonstrate this approach in our collaborative tabletop audio editing environment.
MySong automatically generates chords to accompany a vocal melody, and lets a user with no knowledge of chords or harmony manipulate those chords with intuitive parameters.
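The flavor of automatic chord generation can be conveyed with a toy stand-in; MySong's actual model is more sophisticated, and the scoring functions, chord vocabulary, and `change_penalty` parameter below are illustrative assumptions only. The sketch picks one chord per measure by dynamic programming, trading how well each chord fits the melody notes against a penalty for changing chords, with that trade-off exposed as a single intuitive parameter in the spirit described above.

```python
# Toy stand-in for melody harmonization (not MySong's algorithm): choose a
# chord per measure by Viterbi-style DP over melody fit and chord changes.

CHORD_TONES = {
    "C":  {0, 4, 7},   # pitch classes C E G
    "F":  {5, 9, 0},   # F A C
    "G":  {7, 11, 2},  # G B D
    "Am": {9, 0, 4},   # A C E
}

def fit(notes, chord):
    """Fraction of melody notes (MIDI numbers) that are chord tones."""
    tones = CHORD_TONES[chord]
    return sum((n % 12) in tones for n in notes) / len(notes)

def harmonize(measures, change_penalty=0.1):
    """Maximize total melody fit minus penalties for chord changes;
    change_penalty is the user-facing knob in this sketch."""
    chords = list(CHORD_TONES)
    best = {c: fit(measures[0], c) for c in chords}
    back = []
    for notes in measures[1:]:
        prev, best, ptr = best, {}, {}
        for c in chords:
            p = max(prev, key=lambda q: prev[q] - (change_penalty if q != c else 0))
            best[c] = fit(notes, c) + prev[p] - (change_penalty if p != c else 0)
            ptr[c] = p
        back.append(ptr)
    last = max(best, key=best.get)
    seq = [last]
    for ptr in reversed(back):
        seq.append(ptr[seq[-1]])
    return seq[::-1]
```

For a melody whose measures outline C, F, G, and C triads, the sketch recovers the expected C-F-G-C progression; raising `change_penalty` makes the accompaniment more static, which is the kind of intuitive control the paragraph above refers to.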
Audio Analogies is a mechanism for rendering new performances of a score using audio from existing recordings. Given an audio recording and a corresponding MIDI score, our method synthesizes audio to correspond with a new input score. The core technique is concatenative synthesis; we reassemble bits of sound from the audio recording to create a new performance of the input score. By adjusting parameters of the model, it is possible to move smoothly from a faithful reproduction of the score with sounds from the recording to a rendition that incorporates some of the stylistic variations of the recording.
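The unit-selection core of concatenative synthesis can be sketched as follows. The cost functions and the `continuity_weight` parameter are illustrative assumptions, not the paper's model: for each note of the new score, a recorded snippet is chosen to minimize pitch mismatch plus a continuity cost that favors reusing consecutive stretches of the original performance.

```python
# Hedged sketch of concatenative unit selection (costs are illustrative):
# DP over which recorded snippet plays each note of the new score.

def select_units(recording_pitches, score_pitches, continuity_weight=1.0):
    """recording_pitches[i]: pitch of the i-th snippet of the recording.
    Returns one snippet index per score note, minimizing total cost."""
    n = len(recording_pitches)
    # cost[i] = best cumulative cost with the current note sung by snippet i
    cost = [abs(recording_pitches[i] - score_pitches[0]) for i in range(n)]
    ptrs = []
    for target in score_pitches[1:]:
        new, ptr = [], []
        for i in range(n):
            def concat(j):
                # free if snippets are consecutive in the recording
                return 0.0 if j + 1 == i else continuity_weight
            j = min(range(n), key=lambda j: cost[j] + concat(j))
            new.append(abs(recording_pitches[i] - target) + cost[j] + concat(j))
            ptr.append(j)
        cost, ptrs = new, ptrs + [ptr]
    end = min(range(n), key=lambda i: cost[i])
    path = [end]
    for ptr in reversed(ptrs):
        path.append(ptr[path[-1]])
    return path[::-1]
```

Given a recorded ascending scale, a score that matches part of the scale is rendered with the corresponding consecutive snippets. The `continuity_weight` knob plays the role of the parameters described above: a low weight reproduces the score faithfully from whatever snippets match best, while a high weight keeps longer runs of the original performance intact, carrying its stylistic variation into the output.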