- A device that takes input from a microphone and shifts the pitch of the incoming sound by an amount that depends on the pitch it hears. For example, I can train it by holding a note at 500 Hz, pressing start, and using the output sliders to give me back a pitch 100 Hz above the note I sang; then I can sing a note at 1000 Hz and set the sliders to produce an output 200 Hz below it. I can keep adding as many input/output pairs as I want, then train the model and run it so that the device harmonizes with me at different intervals depending on how high or low I sing (see the sketch after this list)! I was motivated to think of this idea because I am a singer and would love to use cool effects like this in my live performances.
- A device that changes the dynamics of an incoming sound based on the performer’s position on stage. For example, a large camera could take in a performer on stage, and a device on their microphone could read the camera’s visual data and output different levels of amplification for the singer’s voice depending on where they are. If it were trained so that the lowest levels of amplification occur when the singer is on the ground or in low positions, and higher levels occur when the singer is standing or in high positions, a very cool audio/visual effect would emerge as the singer grows taller/louder or smaller/softer at the same moments throughout the performance. I was motivated to think of this idea because I am a singer and would love to use cool effects like this in my live performances.
- A device that takes input from a camera, reads the color it sees, and links a particular sound to that color’s hexadecimal code whenever it shows up. This would be awesome in an art installation, for example: after the gadget has been trained, it could produce a sonic environment for a live video of an artist painting, or, if the camera could be made mobile, it could move around an art gallery and produce different sounds from its speakers, creating different moods in different parts of the exhibition. I was motivated to think of this idea because I love to frequent art museums, and as a musician, I believe that my experience could be enriched by meaningful sound being linked to the art that I view.
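A rough sketch of how the train-by-example step in the first idea could work, assuming the sung pitch has already been tracked: a few (input pitch, output offset) pairs, including the 500 Hz/+100 Hz and 1000 Hz/−200 Hz examples above, are fed to a small regressor that then predicts a harmony offset for any new note. The extra training pairs, the k-nearest-neighbours model, and the function name are placeholders rather than a finished design; real-time pitch tracking and pitch shifting would still be needed on top of this.

```python
# Minimal sketch: learn a mapping from sung pitch (Hz) to a harmony offset (Hz)
# from user-provided input/output pairs, in the spirit of the first idea above.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# (sung pitch in Hz, desired offset in Hz); the first two pairs are from the
# notes above, the rest are made-up extras to smooth the mapping.
training_pairs = [
    (500.0, +100.0),   # sing 500 Hz, harmony 100 Hz above
    (1000.0, -200.0),  # sing 1000 Hz, harmony 200 Hz below
    (750.0, +50.0),    # hypothetical extra examples
    (1200.0, -300.0),
]

X = np.array([[pitch] for pitch, _ in training_pairs])        # input pitches
y = np.array([offset for _, offset in training_pairs])        # desired offsets

# A small k-nearest-neighbours regressor stands in for the "train" button.
model = KNeighborsRegressor(n_neighbors=2, weights="distance").fit(X, y)

def harmony_pitch(sung_hz: float) -> float:
    """Return the pitch (Hz) the device should play back for a sung pitch."""
    offset = model.predict([[sung_hz]])[0]
    return sung_hz + offset

print(harmony_pitch(600.0))  # lands between the trained behaviours
```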
Experts to Learn From-
- Riddim:
- A tool for developing better models of rhythm for real-time computer-based performance and composition. Riddim uses Independent Subspace Analysis (ISA) and a robust onset-detection scheme to separate and detect salient rhythmic and timing information from the different sonic sources within its input. That information is then represented in a format that can be used by a variety of algorithms that interpret timing information to infer rhythmic and musical structure (a minimal onset-detection sketch follows this list).
- Tae Hong Park
- This dissertation comprises two parts, focusing on the research and development of an artificial system for automatic musical instrument timbre recognition, and on musical compositions. The timbre recognition system follows a bottom-up, data-driven model that includes a pre-processing module, a feature extraction module, and an RBF/EBF (Radial/Elliptical Basis Function) neural network-based pattern recognition module. 829 monophonic samples from 12 instruments were chosen from the Peter Siedlaczek library (Best Service). (See the second sketch after this list.)
- Toshimaru Nakamura
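To get a feel for the onset-detection side of what Riddim does, here is a minimal sketch using librosa's stock onset detector; it does not attempt the ISA source separation, and "performance.wav" is just a placeholder file name.

```python
# Minimal onset-detection sketch: pull onset times out of a recording so a
# later stage could use them to infer rhythmic and musical structure.
import librosa

# Load a recording; "performance.wav" is a placeholder path.
y, sr = librosa.load("performance.wav", mono=True)

# Onset strength envelope, then peak-picked onset times in seconds.
onset_env = librosa.onset.onset_strength(y=y, sr=sr)
onset_times = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr, units="time")

print(f"Detected {len(onset_times)} onsets; first few at {onset_times[:5]} s")
```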
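And a similarly rough sketch of the feature-extraction-plus-classifier shape of the timbre recognition system Tae Hong Park describes: mean MFCCs stand in for his feature module, an RBF-kernel SVM stands in for the RBF/EBF neural network, and the file names and labels are placeholders rather than the actual Siedlaczek samples.

```python
# Rough timbre-recognition sketch: extract a per-note feature vector, then
# train a classifier on labelled instrument samples.
import numpy as np
import librosa
from sklearn.svm import SVC

def timbre_features(path: str) -> np.ndarray:
    """Mean MFCC vector as a crude per-note timbre descriptor."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Placeholder training set: (audio file, instrument label)
samples = [("violin_a4.wav", "violin"),
           ("flute_a4.wav", "flute"),
           ("piano_a4.wav", "piano")]

X = np.stack([timbre_features(path) for path, _ in samples])
labels = [label for _, label in samples]

# RBF-kernel SVM as a stand-in for the RBF/EBF neural network.
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print(clf.predict([timbre_features("mystery_note.wav")]))
```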
More Ideas-
- Use LEAP Motion Sensor!
- Make a Metaphorical GLOVE
- Remember to intake: PITCH YAW ROLL
- Fingers in reference to palm
- Leap motion sensor
- Beat, synth, verb
- Same EQ/filter patch
- No-Input-Mixing/Autonomous electronic music creation
- https://www.synthtopia.com/content/2014/10/27/no-input-mixing-tutorial/
- https://www.aimusic.co.uk/?gclid=EAIaIQobChMIldD-7Iby5wIVDD0MCh0WRQpmEAAYASAAEgI8BvD_BwE
- Feed AI examples of songs you like, buffer data, output new song of same vibe
- Take in Mic EQ
- Built in filter
- Consider change over time
- Read in examples of songs from different genres
- Give examples of songs you want yourself to sound like in your training data set
- AI music
- What else has a roll/pitch/yaw?
- Concert venue with airplane as part of set
- Wii
- Water bottle
- Phone
- Our head
- Head band?
- Our feet
- Implications of performing off-balance?
- Rolling is hard?
- Standing on small platform that moves
- Sensors in MOON SHOES!
- Something with a button for loops
- Something that can also account for acceleration/velocity