PORTFOLIO PROJECT
synth muse
2026
Synth Muse is an audio-reactive interactive web prototype that began as a quick afternoon experiment with Omma by Spline and has since been expanded well beyond that starting point. It grew out of things that had been sitting in my mind for a while: the strange inner visions I get while lying on the floor in the middle of a candlelit hot yin yoga class, a dream I had when I was seven, and my love for experimental music, emotional audiovisual worlds, and interfaces that feel more like instruments than static screens.
The first version was built as a fast creative sketch, focused on testing the emotional core of the piece rather than polishing the architecture. I wanted to see how far I could push typed input, generative sound, shader-based motion, image warping, glow, ghosting, spectral splitting, particles, animated text, and palette shifts into something that felt alive, intimate, and slightly hallucinatory.
Under the hood: JavaScript, WebGL, and GLSL, with a custom audio system that turns typed input into generative notes and layered sound. The visuals respond in real time through shader-based effects and a configurable control system, letting users change melody intensity, tempo, pitch range, reverb, distortion, glow, blur, parallax, sway, cradle motion, breathing, particle flow, and background speed.
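As a flavour of the typed-input-to-sound idea, here is a minimal sketch of how keypresses could become pitched notes through the Web Audio API. The pentatonic mapping and the names (charToFrequency, playNote) are illustrative assumptions, not the actual Synth Muse implementation:

```javascript
// Hypothetical sketch: map typed characters onto a pentatonic scale so that
// any input sounds musical. Scale choice and names are assumptions.
const PENTATONIC = [0, 2, 4, 7, 9]; // semitone offsets within an octave
const BASE_FREQ = 220;              // A3 as the root note

function charToFrequency(ch) {
  const code = ch.toLowerCase().charCodeAt(0);
  const degree = code % PENTATONIC.length;    // pick a scale degree
  const octave = Math.floor(code / 26) % 2;   // spread notes over two octaves
  const semitones = PENTATONIC[degree] + 12 * octave;
  return BASE_FREQ * Math.pow(2, semitones / 12); // equal temperament
}

// In the browser, each keypress could then trigger a short enveloped oscillator:
function playNote(audioCtx, ch, duration = 0.4) {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = charToFrequency(ch);
  gain.gain.setValueAtTime(0.3, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + duration);
  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + duration);
}
```

Constraining input to a scale like this is one simple way to make arbitrary typing feel intentional rather than random.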
The two character assets were created with ComfyUI, and the experience includes an anime mode that swaps the character entirely. That part is important to me because I enjoy building pieces where the user gets to personalise what they are seeing and, in a way, become a co-creator of the mood, rhythm, and visual identity of the experience.
The second version was a structural and technical cleanup of the original prototype. I moved the project into a self-hosted setup, componentised the codebase, served the assets locally, and optimised the media so the experience could be easier to maintain, expand, and deploy outside of the original build environment.
V2 also introduced a new background image system. Instead of relying only on shader-driven abstract visuals, the experience can now display generated background images and switch between two different image states. These backgrounds were created with Stable Diffusion and brought into the interactive system as part of the visual world. The background image is also affected by the shaders, so it feels integrated into the same reactive environment as the rest of the piece. I added the option to either preserve the original colours of the generated image or push it through the existing palette system, letting the user decide whether the background stays closer to the source or becomes part of the more synthetic, shifting colour world of Synth Muse.
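The colour preservation option can be thought of as a single blend factor between the source image and the palette-remapped version. In the real piece this lives in a GLSL fragment shader; the sketch below expresses the same per-pixel logic in plain JavaScript, with assumed names and an assumed luminance-based palette mapping:

```javascript
// Hypothetical per-pixel sketch of the colour-preservation toggle.
// amount = 0 keeps the generated image's original colours;
// amount = 1 pushes it fully into the palette system.
function mixChannel(a, b, t) {
  return a + (b - a) * t; // GLSL-style mix()
}

function applyPalette(rgb, paletteA, paletteB, amount) {
  // Map the pixel's luminance onto a two-colour palette gradient...
  const lum = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2];
  const paletteColor = paletteA.map((a, i) => mixChannel(a, paletteB[i], lum));
  // ...then blend between source colour and palette colour.
  return rgb.map((c, i) => mixChannel(c, paletteColor[i], amount));
}
```

Exposing `amount` as a user control is what lets the background sit anywhere between faithful-to-source and fully synthetic.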
It still feels like an early piece, but one I would like to keep developing. I would love to push it further with audio capture, a vocoder, more complex sound-reactive behaviours, and a richer shader world around the figure so the whole environment feels even more immersive and alive.
V1
The first version of Synth Muse was created as a fast prototype using Omma by Spline as the starting point. The goal was to quickly test the feeling of an audio-reactive interactive character piece where typed input could become sound, motion, atmosphere, and visual transformation.
This version focused on the core creative system: generative notes, layered sound, shader-driven distortion, animated text, particles, palette changes, and a live configurator letting users shape the experience in real time. Less about building a perfect structure, more about capturing the mood of the idea while it was still fresh.
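The live configurator can be pictured as a set of clamped numeric parameters that UI sliders bind to. The parameter names and ranges below are illustrative assumptions, not the actual Synth Muse configuration:

```javascript
// Hypothetical sketch of the configurator's parameter model: each control
// clamps incoming values to a safe range so sliders can never push the
// audio or shaders into broken territory. Names and ranges are assumptions.
function makeParam(value, min, max) {
  return {
    value, min, max,
    set(v) {
      this.value = Math.min(max, Math.max(min, v));
      return this.value;
    },
  };
}

const controls = {
  tempo:     makeParam(120, 40, 240),  // BPM of the generative melody
  reverb:    makeParam(0.3, 0, 1),     // wet/dry mix
  glow:      makeParam(0.5, 0, 1),     // bloom intensity
  particles: makeParam(200, 0, 1000),  // particle count
};

controls.tempo.set(999); // out-of-range input is clamped to 240
```

A flat model like this also makes it cheap to serialise a whole "mood" as a preset.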
Even in its rougher form, V1 established the emotional and interactive direction of the project: a small audiovisual world that responds to the user, feels personal, and sits somewhere between a music toy, a shader experiment, and a digital muse.
V2
The second version was developed to clean up the structure of the project and make it more flexible as a self-hosted experience. I moved away from the original prototype setup into a more maintainable structure, componentising the interface, controls, visual systems, audio behaviours, and asset handling.
Assets were moved into the project and served locally, giving more control over loading, optimisation, and deployment. V2 added the switchable background image system with Stable Diffusion images, shader-reactive behaviour, palette integration, and colour preservation mode.
This version made the project feel more like a proper interactive system rather than a one-off sketch. It kept the loose, experimental nature of the original, but gave it a cleaner foundation for future additions: audio capture, richer shader scenes, more character states, and deeper sound-reactive interactions.
TECHNICAL DETAILS
V2: JavaScript, GLSL, WebGL, self-hosted deployment, componentised structure, locally served and optimised assets, Stable Diffusion background images, shader-reactive image layers, palette controls, colour preservation mode, generative audio, custom visual controls.
V1: Omma by Spline, JavaScript, GLSL, WebGL, ComfyUI character assets, generative audio, shader-based visual effects, live configurator.
RESPONSIBILITIES
Creative Direction and Concept: Created the concept, mood, interaction direction, and visual identity of the experience.
Prototype Development: Built V1 as a fast interactive sketch to explore generative sound, shader visuals, character swapping, and live user controls.
Generative Audio System: Developed a custom sound system that turns typed input into generative notes and layered audio behaviours.
Shader and Visual Systems: Built GLSL-driven visual effects including distortion, glow, ghosting, spectral splitting, image warping, palette shifts, particles, and responsive motion.
Interactive Controls: Built the configurable controls system across melody, tempo, pitch, reverb, distortion, blur, glow, parallax, motion, breathing, particle flow, palette, and background speed.
AI Asset Creation: Generated character assets with ComfyUI and V2 background images with Stable Diffusion, integrating both into the interactive visual system.
Background Image System: Built the switchable background feature with two image states, shader-reactive behaviour, palette integration, and colour preservation mode.
Frontend Refactor: Reworked V2 into a self-hosted, componentised structure with locally served and optimised assets.
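For a flavour of the shader side, here is a hedged sketch of what a spectral-splitting (chromatic aberration) fragment shader could look like, held as a JavaScript string the way WebGL shader source is typically loaded. The uniform names are assumptions, not Synth Muse's actual code:

```javascript
// Hypothetical fragment-shader source for a spectral-splitting effect:
// sample the red and blue channels at slightly offset UVs so the image
// fringes into its component colours.
const spectralSplitFrag = `
  precision mediump float;
  uniform sampler2D uTexture;
  uniform float uSplitAmount; // 0.0 = no split, ~0.01 = visible fringing
  varying vec2 vUv;

  void main() {
    vec2 offset = vec2(uSplitAmount, 0.0);
    float r = texture2D(uTexture, vUv + offset).r;
    float g = texture2D(uTexture, vUv).g;
    float b = texture2D(uTexture, vUv - offset).b;
    gl_FragColor = vec4(r, g, b, 1.0);
  }
`;
```

Driving `uSplitAmount` from the audio level is one straightforward way to make an effect like this sound-reactive.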
