After some preliminary work and meditation, it has become necessary to include resonator~-based synthesis in the Auditory Fiction project.

This is what needs to be done.

• Develop a resonator poly player that allows for multiple-voice playback and management of voices. Include messaging for MODE OF EXCITATION, MODEL, and DECAY RATES.
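A minimal sketch of the voice bookkeeping such a poly player might need. The class, fields, and allocation policy here are illustrative assumptions, not a description of any existing implementation:

```python
from dataclasses import dataclass

@dataclass
class Voice:
    voice_id: int
    model: list          # resonance model: list of (freq_hz, decay_s) pairs
    excitation: float    # 0.0 = smooth continuant ... 1.0 = hard attack
    active: bool = False

class ResonatorPoly:
    """Hypothetical voice allocator for the resonator poly player."""

    def __init__(self, n_voices=8):
        self.voices = [Voice(i, [], 0.0) for i in range(n_voices)]

    def allocate(self, model, excitation):
        """Take the first free voice; return its id, or None if all busy."""
        for v in self.voices:
            if not v.active:
                v.model, v.excitation, v.active = model, excitation, True
                return v.voice_id
        return None  # simple policy: drop the note when fully saturated

    def release(self, voice_id):
        self.voices[voice_id].active = False

poly = ResonatorPoly(n_voices=2)
a = poly.allocate([(220.0, 1.5)], 0.2)   # -> 0
b = poly.allocate([(330.0, 0.8)], 0.9)   # -> 1
c = poly.allocate([(440.0, 1.0)], 0.5)   # -> None (saturated)
poly.release(a)
```

In practice the allocator would live inside the synthesis environment (a poly~-style wrapper), with MODE OF EXCITATION, MODEL, and DECAY RATES arriving as messages routed to the target voice.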

• Use the same RBFI probability space that we are building for sample selection and rhythmic process to determine the selection of frequencies in the model, decay rates, and mode of excitation.
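Assuming the RBFI space behaves like the familiar radial-basis interpolation objects -- each option sits at a point in a 2-D plane, and the cursor's proximity to a point sets that option's weight -- the selection step might look like the sketch below. The inverse-distance weighting and all names are assumptions, not the actual tool:

```python
import math
import random

def rbfi_weights(cursor, points, eps=1e-9):
    """Inverse-distance weight for each labelled point, normalised to sum 1."""
    raw = {}
    for label, (x, y) in points.items():
        d = math.hypot(cursor[0] - x, cursor[1] - y)
        raw[label] = 1.0 / (d + eps)
    total = sum(raw.values())
    return {k: w / total for k, w in raw.items()}

def choose(cursor, points, rng=random):
    """Draw one option according to the interpolated weights."""
    labels = list(points)
    weights = rbfi_weights(cursor, points)
    return rng.choices(labels, weights=[weights[l] for l in labels])[0]

# Illustrative excitation modes placed in the plane:
excitations = {"bow": (0.0, 0.0), "strike": (1.0, 0.0), "pluck": (0.5, 1.0)}
w = rbfi_weights((0.0, 0.0), excitations)  # cursor on "bow": its weight dominates
```

Moving the cursor through the space over time would then coordinate the probabilities for frequencies, decay rates, and excitation from one shared control.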

Frequencies in the model would be based on pitches assigned to the instrumental parts. The final results would occur in performance, in synchrony with the instrumental parts, to create resonant material, microtonal materials doubling the instrumental parts, and rhythmic impulses created by the mode of excitation and the subdivision selection system that John has implemented. I imagine resonance models would have one to five frequencies per model, with the added feature that a single or double sideband frequency could be added to selected partials in the model -- the purpose of that is to add some further inharmonic feature where needed and/or to add beating on certain frequencies. This added feature should not be global but targeted to individual frequencies in the model.
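The targeted-sideband idea could be sketched as follows; the data layout and function names are hypothetical, meant only to show sidebands attached to one partial while the others stay untouched:

```python
def make_model(freqs, decay=1.0):
    """A model is a list of partial dicts: frequency, decay, sidebands."""
    return [{"freq": f, "decay": decay, "sidebands": []} for f in freqs]

def add_sideband(model, index, offset_hz, double=False):
    """Attach a sideband offset_hz above the chosen partial (and below it
    too when double=True), producing beating or added inharmonicity on
    that frequency only -- targeted, not global."""
    f = model[index]["freq"]
    model[index]["sidebands"].append(f + offset_hz)
    if double:
        model[index]["sidebands"].append(f - offset_hz)
    return model

m = make_model([220.0, 440.0, 660.0], decay=1.2)
add_sideband(m, 1, 3.0, double=True)   # beating only on the second partial
# m[1]["sidebands"] == [443.0, 437.0]; m[0] and m[2] are untouched
```

A 3 Hz offset like this yields slow beating against the 440 Hz partial; a larger, non-harmonic offset would push the same mechanism toward inharmonicity instead.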

The frequency selection engine would receive the fundamental frequency, then generate a probabilistically determined set of partials. The result could be tweaked with further control parameters -- e.g., add more partials, widen the range of partial selection, or set the octave placement of the resonance model by folding all generated frequencies into any chosen octave. All of these parameters should be controlled by the same core tool that is being developed -- a set of probabilities precisely coordinated through time.
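The octave-folding control can be sketched concretely. Here, uniform sampling of harmonic numbers stands in for whatever probability engine is actually used, and all parameter names are assumptions:

```python
import random

def fold_to_octave(freq, base_hz):
    """Transpose freq by octaves until base_hz <= freq < 2 * base_hz."""
    while freq >= 2 * base_hz:
        freq /= 2.0
    while freq < base_hz:
        freq *= 2.0
    return freq

def generate_partials(fundamental, n_partials=4, partial_range=12,
                      fold_base=None, rng=random):
    """Pick n distinct harmonic numbers within the range, then optionally
    fold the resulting frequencies into the octave starting at fold_base."""
    numbers = rng.sample(range(1, partial_range + 1), n_partials)
    freqs = [fundamental * n for n in sorted(numbers)]
    if fold_base is not None:
        freqs = sorted(fold_to_octave(f, fold_base) for f in freqs)
    return freqs

rng = random.Random(1)
model = generate_partials(110.0, n_partials=3, partial_range=10,
                          fold_base=440.0, rng=rng)
# every folded frequency lies in [440, 880)
```

Raising n_partials and partial_range corresponds to the "add more partials" and "widen the range" controls; leaving fold_base as None keeps the partials at their natural octaves.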

The mode of excitation engine would get instructions from the timemap, with subdivision, to excite the model that is currently active for a particular pitch. Each iteration of the excitation would involve tweaking the type of excitation on a scale from smooth, graduated continuant toward hard attack, as well as tweaking the decay rate. This may necessitate having a separate voice for each model excitation, or perhaps a single voice with mode of excitation and decay rate controlled in real time.
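One way to picture the continuant-to-attack scale is a single morph parameter that shortens the exciter's onset while the decay rate remains independently controllable. The mapping below is purely illustrative, not a committed design:

```python
import math

def excitation_envelope(t, morph, decay_rate=4.0):
    """Exciter amplitude at time t (seconds) for morph in [0, 1]:
    morph=0 -> slow 0.5 s attack (smooth, graduated continuant);
    morph=1 -> ~1 ms attack (hard strike).
    decay_rate is the exponential decay constant after the peak."""
    attack = 0.5 * (1.0 - morph) + 0.001 * morph  # attack time in seconds
    if t < attack:
        return t / attack                  # linear rise toward the peak
    return math.exp(-decay_rate * (t - attack))

smooth = excitation_envelope(0.25, morph=0.0)  # mid-attack: 0.5
hard = excitation_envelope(0.25, morph=1.0)    # already decaying
```

Because morph and decay_rate are just two numbers per iteration, either architecture mentioned above works: each voice could carry its own pair, or a single voice could have both updated in real time from the timemap.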

I don't add this workload lightly, as I see our project is already burdened even without it. The overall goal is to create a space where the ear will be completely fooled, forcing the auditory system into a hallucinatory mode of perception -- I need to destroy certain aspects of the instrumental identities, and I think I can do it with this approach. Right now I am envisioning the instrumental parts as forming a resonant platform for the electronics. I am thinking that John already has enough similar tools in his chest to meet most of this production need.