MusifyMe (Hackathon Presentation)
MusifyMe: Translating Biometrics into Generative Soundscapes
What does your face sound like? MusifyMe is an experimental app built during a one-day hackathon at Waves Audio. It uses face recognition to analyze facial characteristics and translate them into unique, adaptive musical themes in real time.
The system extracts key variables such as age, hair type, and current facial expression, then passes this data through a custom algorithm that drives musical parameters in Ableton Live. Every smile, frown, or distinctive feature shifts the harmony, rhythm, and timbre of the output, so no two faces produce the same melody.
[Image of facial recognition data mapping to MIDI parameters]
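The mapping layer described above can be sketched roughly as follows. The attribute names, ranges, and weightings here are illustrative assumptions, not the actual hackathon algorithm; in the real app, values like these would be sent to Ableton Live as MIDI control changes rather than printed.

```python
# Hypothetical sketch of MusifyMe's mapping layer: face attributes
# (as a face-recognition service might report them) mapped onto
# MIDI-style control values in the 0-127 range.

def clamp(v, lo=0, hi=127):
    """Round and constrain a value to the MIDI control range."""
    return max(lo, min(hi, int(round(v))))

def face_to_midi(age, smile, anger):
    """Map face attributes to named MIDI control values.

    age   -- estimated age in years
    smile -- smile confidence, 0.0 to 1.0
    anger -- anger confidence, 0.0 to 1.0
    """
    tempo_cc = clamp(60 + smile * 67)    # happier face -> faster feel
    filter_cc = clamp(127 - age)         # older -> darker timbre
    dissonance_cc = clamp(anger * 127)   # anger -> more dissonance
    return {"tempo": tempo_cc, "filter": filter_cc, "dissonance": dissonance_cc}

# Example: a 30-year-old with a broad smile and a hint of anger.
print(face_to_midi(age=30, smile=0.8, anger=0.1))
# -> {'tempo': 114, 'filter': 97, 'dissonance': 13}
```

Keeping every output in the standard 0-127 MIDI range means any parameter in Ableton Live can be bound to a face attribute through ordinary MIDI mapping.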
Technical Architecture
| Component | Technology Stack |
|---|---|
| Face Recognition | Microsoft Azure Face API |
| Audio Engine | Ableton Live |
| Logic Layer | Custom Adaptive Music Algorithms |
| Location | Waves Audio Hackathon (Tel Aviv) |
Despite a small technical hiccup with the recording at the start of the presentation, you can watch the core demonstration, and the results of the algorithmic synthesis, below.
Technical Roles: Algorithmic Music Design, App Development, API Integration, Generative Audio Production.




