Lab 4. Interaction
Now that we've trained networks to classify sounds, what can we do with that?
In the Interaction.ipynb file in 05_Interaction, we have a bunch of input and output elements that might be useful for your final interaction.
As an exercise, modify the Cats vs. Dogs example so that something fun happens when the sound heard by the mic is a cat or a dog!
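One minimal sketch of that exercise, assuming a trained Keras model saved as cats_vs_dogs.h5, MFCC features matching those used in training, and the sounddevice and librosa packages. The filename, feature settings, and class order are all assumptions, not the notebook's actual code:

```python
import numpy as np
import sounddevice as sd
import librosa
from tensorflow.keras.models import load_model

SR = 22050          # sample rate, assumed to match training
DURATION = 2.0      # seconds of audio to grab from the mic

model = load_model("cats_vs_dogs.h5")  # hypothetical filename

def classify_mic_clip():
    # Record a short clip from the default microphone
    clip = sd.rec(int(SR * DURATION), samplerate=SR, channels=1)
    sd.wait()
    audio = clip.flatten()
    # Turn the clip into the same features the network was trained on
    mfcc = librosa.feature.mfcc(y=audio, sr=SR, n_mfcc=20)
    x = mfcc[np.newaxis, ..., np.newaxis]  # shape a batch of one for the model
    probs = model.predict(x)[0]
    return ["cat", "dog"][int(np.argmax(probs))]  # assumed class order

# Do "something fun" depending on what the mic heard
if classify_mic_clip() == "cat":
    print("Meow detected! Cue the laser pointer.")
else:
    print("Woof detected! Time for fetch.")
```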
We can also try using other pre-trained models from the Model Zoo, both as recognition/classification engines and as bases for last-layer retraining.
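A hedged sketch of what last-layer retraining looks like: freeze a pre-trained base and train only a fresh classification head on top. The choice of VGG16 from the Keras applications zoo, the spectrogram-image input shape, and the two-class head are all assumptions for illustration:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pre-trained base; we keep its weights fixed and reuse its features
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Only this small head gets trained on our own data
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),  # e.g. cat vs. dog
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_spectrograms, train_labels, epochs=5)  # data not shown here
```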
In the Musical_Interaction.ipynb file in 05_Interaction, we show more examples of how to synthesize and output MIDI files.
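For a taste of how little code MIDI output takes, here is a sketch that writes a short arpeggio to a .mid file. It uses the mido package, which is an assumption; the notebook may well use a different MIDI toolkit:

```python
from mido import Message, MidiFile, MidiTrack

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)

# A rising C-major arpeggio: MIDI note numbers 60, 64, 67, 72
for note in [60, 64, 67, 72]:
    track.append(Message("note_on", note=note, velocity=64, time=0))
    track.append(Message("note_off", note=note, velocity=64, time=240))

mid.save("arpeggio.mid")  # hypothetical output filename
```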
New! Added an Interaction_osc.ipynb file, also in 05_Interaction, for Open Sound Control buffs.
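If you haven't met OSC before: it's a simple UDP-based protocol for sending messages (an address path plus arguments) between audio and visual programs such as Max/MSP, SuperCollider, or Pd. A minimal sending sketch using the python-osc package follows; the host, port, and the /classifier/label address are hypothetical:

```python
from pythonosc.udp_client import SimpleUDPClient

# Host and port are assumptions; point them at whatever is listening
client = SimpleUDPClient("127.0.0.1", 5005)

# Send the classifier's current label and confidence as one OSC message
client.send_message("/classifier/label", ["dog", 0.93])
```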
We can try using the emotional transforms from this Intel project (GitHub link) to color the musical output of existing songs.
- Music21 [GitHub]: a toolkit for computer-aided musicology
- Google Magenta [GitHub]: an open-source research project exploring the role of machine learning as a tool in the creative process
- Google WaveNet [GitHub]: a generative model for raw audio