In these experiments I seek to teach best practices for media programming that result in higher productivity and higher-quality results.

The focus of these first Max patches ("Sing" and "Demo") is to demonstrate a more effective user experience in which text and numbers are eliminated or buried, focusing instead on visual and sonic communication of the function and state of the components. I am not the first or the last to work on this goal. My work may be a good entry point into Jamoma, for example.

Due to my own haste and some oft-decried omissions in Max/MSP/Jitter (data abstraction and encapsulation, type polymorphism, inheritance, object persistence), the implementations are labored. It may help to review the following objects and features before delving in:

- presentation mode
- bpatcher
- "open in presentation mode" checkbox in the patcher inspector
- alpha channel in colors, especially backgrounds (to overlap graphical display objects)
- multislider display modes
- textedit object
- route, OSC-route
- res-display (see the jsui sketch after this list)
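
To give a flavor of several of these at once, here is a minimal jsui sketch in the spirit of the patches: a text-free display with a translucent background (so it can overlap other graphical objects) and a colored bar that communicates a 0..1 value visually. This is my own illustration, not the res-display code that ships with the patches.

```javascript
// level_bar.js — a hypothetical text-free jsui display (illustration only,
// not the res-display implementation shipped with the patches)
inlets = 1;
outlets = 0;

mgraphics.init();
mgraphics.relative_coords = 0;  // draw in pixel coordinates
mgraphics.autofill = 0;

var level = 0.5;  // displayed value, 0..1

function paint() {
    var w = box.rect[2] - box.rect[0];
    var h = box.rect[3] - box.rect[1];
    // translucent background so overlapping display objects show through
    mgraphics.set_source_rgba(0.15, 0.15, 0.15, 0.35);
    mgraphics.rectangle(0, 0, w, h);
    mgraphics.fill();
    // an opaque bar communicates the state with no text or numbers
    mgraphics.set_source_rgba(0.95, 0.55, 0.1, 0.9);
    mgraphics.rectangle(0, (1 - level) * h, w, level * h);
    mgraphics.fill();
}

function msg_float(v) {
    level = Math.max(0, Math.min(1, v));  // clamp to 0..1
    mgraphics.redraw();
}
```

Hosted in a bpatcher inside a patcher whose "open in presentation mode" box is checked, an object like this presents only its graphics to the user.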

What's next? Keep an eye out for the "o." objects that John and I are working on, which will give Max/MSP/Jitter data abstraction and encapsulation, type polymorphism, and a streaming model for gesture signal processing.

I am indebted to Jeff Lubow, who showed me how to embed the interactive objects in bpatchers; John MacCallum, who provided the sane implementation of my res-display jsui; Tom Duff et al. for alpha channels; Xavier Rodet, who showed me how to bridge the worlds of the science of the voice and efficient sound synthesis; and of course the inspiring community of friends, visitors, students, interns, alumni, staff and directors that sustains CNMAT as the "little engine that could".

P.S. You can find the conventional approach to programming a formant synthesizer in Max in my old 2006 singing voice patch, which Michael Z. wrapped up and put in the MMJ depot as "singing-voice".
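
For readers unfamiliar with that conventional approach, the usual recipe is source-filter synthesis: a glottal pulse train driving a bank of resonant filters, one per formant. Below is a rough sketch in JavaScript (the same language as Max's js/jsui objects); the formant frequencies and bandwidths are generic "ah"-vowel figures, not values taken from the singing-voice patch.

```javascript
// Source-filter formant synthesis sketch: a pulse train driving parallel
// two-pole resonators, one per formant.
var SR = 44100;  // sample rate, Hz

// Returns a two-pole resonator: y[n] = b*x[n] + a1*y[n-1] + a2*y[n-2]
function resonator(freq, bw) {
    var r = Math.exp(-Math.PI * bw / SR);
    var a1 = 2 * r * Math.cos(2 * Math.PI * freq / SR);
    var a2 = -r * r;
    var b = 1 - r * r;  // rough gain normalization
    var y1 = 0, y2 = 0;
    return function (x) {
        var y = b * x + a1 * y1 + a2 * y2;
        y2 = y1;
        y1 = y;
        return y;
    };
}

// Render `seconds` of a vowel at fundamental frequency f0 (Hz)
function render(f0, seconds) {
    // illustrative "ah" formants: frequency (Hz), bandwidth (Hz)
    var formants = [resonator(700, 130), resonator(1220, 70), resonator(2600, 160)];
    var period = Math.round(SR / f0);
    var out = new Array(Math.round(SR * seconds));
    for (var n = 0; n < out.length; n++) {
        var pulse = (n % period === 0) ? 1 : 0;  // naive glottal pulse train
        var sum = 0;
        for (var k = 0; k < formants.length; k++) sum += formants[k](pulse);
        out[n] = sum / formants.length;
    }
    return out;
}

var buf = render(220, 1.0);  // one second of an "ah" at 220 Hz
```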
