Vaporware

These are various projects under construction that aren't ready for download. I warn you now that I'm not well-known for finishing projects that I start. The fact that I've gotten all of the projects above to a usable state is somewhat miraculous. Here I describe some other projects that I have in mind, which have had varying amounts of work done on them. Feel free to take the ideas and run with them if you like.

Animal: A self-programming drum machine. User should be able to adjust a handful of parameters and get an organic-sounding (though not necessarily conventional/realistic) drum loop. User never does "step programming". Infinite diversity of patterns. Each of six active instruments has its own output channel. Initially it will use its own synthesized drum sounds, though possibly with a hack to allow the user to substitute their own samples. User may or may not be able to supply "style templates" to affect pattern generation. Status: On hold, 20% complete. Drum synthesis engine is mostly done, though needs to be half-rewritten in order to support realtime pitch/volume variations. Blocking items: Better pattern generator will require better understanding of "what drummers do"; my cymbal synthesis algorithm bites a weenie or two; user instrument-sample or style-template selection will require user interface coding (feh). Related: Autonomous rhythm generation is also central to the Neurobeat project. Make a suggestion.
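
If you want a feel for what "no step programming" means in practice, here's a toy C++ sketch of probabilistic pattern generation: each instrument gets per-step hit probabilities and every bar is re-rolled, so no two repetitions are identical. The six instrument names and the probability numbers are placeholders of mine, not Animal's actual engine.

```cpp
// Toy sketch of autonomous pattern generation (not Animal's real engine):
// each of six hypothetical instruments gets a per-step hit probability,
// and every bar is re-rolled so no two repetitions match.
#include <array>
#include <cstdio>
#include <random>

int main() {
    const char* names[6] = {"kick", "snare", "hat", "tom", "clap", "cym"};

    // Per-step hit probabilities (assumed, purely illustrative): strong
    // beats are more likely to be hit than weak ones.
    std::array<std::array<double, 16>, 6> prob{};
    for (int i = 0; i < 6; ++i)
        for (int s = 0; s < 16; ++s)
            prob[i][s] = (s % 4 == 0) ? 0.8 : (s % 2 == 0 ? 0.4 : 0.15);

    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<double> coin(0.0, 1.0);

    for (int bar = 0; bar < 2; ++bar) {            // two bars, each different
        std::printf("bar %d\n", bar + 1);
        for (int i = 0; i < 6; ++i) {
            std::printf("%-6s", names[i]);
            for (int s = 0; s < 16; ++s) {
                bool hit = coin(rng) < prob[i][s];
                // A real engine would also vary velocity/pitch per hit.
                std::printf("%c", hit ? 'x' : '.');
            }
            std::printf("\n");
        }
    }
    return 0;
}
```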

Plark: A soft synthesizer which departs from the popular virtual analog synth model. Trouble is, there just aren't all that many workable soft-synthesis models. I'm leaning strongly towards waveguide or Karplus-Strong type models at the moment, but those are limited in their own ways. Nonlinear (chaotic) systems as described by Dobson & Fitch seem really attractive here, but the trouble there is in taming them into something musical without forcing them into something dull. Soft synths also tend to need enough parameter control that I will need to do a non-default UI. Did I mention that I hate writing UI? Status: On hold, 10% complete. I've experimented with the waveguide stuff and some chaotic oscillators. I've written a bunch of support classes that I know I'll need. Blocking items: Need a different way of describing envelopes than good old ADSR. Need a better understanding of the exciter end of the variable-delay waveguide model, and to compare lots of different refinements. Need to wire up all the good ideas together and try them out. Related: Synth parameters UI is important to Heartburn project. Make a suggestion.
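
As a reference point for the kind of model I'm talking about, here's a bare-bones Karplus-Strong pluck in C++: a noise burst fed into a delay line whose feedback is low-pass filtered. The pitch, damping value, and one-second render length are arbitrary numbers for illustration, not anything Plark will actually use.

```cpp
// Bare-bones Karplus-Strong plucked string: noise burst into a delay line,
// feedback through a two-sample average (crude low-pass) plus damping.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const double sampleRate = 44100.0;
    const double freq = 220.0;                     // pitch of the "string" (assumed)
    const int delayLen = static_cast<int>(sampleRate / freq);

    // Excite the delay line with a burst of noise (the "pluck").
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> noise(-1.0, 1.0);
    std::vector<double> delay(delayLen);
    for (double& x : delay) x = noise(rng);

    // Higher damping = longer, brighter decay; lower = shorter and duller.
    const double damping = 0.996;
    std::vector<double> out;
    int pos = 0;
    for (int n = 0; n < static_cast<int>(sampleRate); ++n) {   // one second
        int next = (pos + 1) % delayLen;
        double sample = delay[pos];
        delay[pos] = damping * 0.5 * (delay[pos] + delay[next]);
        out.push_back(sample);
        pos = next;
    }
    std::printf("rendered %zu samples at %.0f Hz\n", out.size(), freq);
    return 0;
}
```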

Façade: Tricky to describe, but it's an "expressor". It will watch MIDI notes you play, generating control signals corresponding to some higher-level analysis of the notes. Those control signals can then be wired to parameters in soft synths or effects (via Renoon if necessary). One output will correspond to the rate at which you play notes. This lets you give fast passages a different timbre than slow ones, brighter or darker. Or whatever. Another output corresponds to the difference in note rate from note to note — the irregularity of a passage. Still other outputs are affected by the intervals between notes: ascending or descending, half-step or major third, etc. Finally, the key of a piece of music can be more or less deduced from a sequence of notes; the key itself can be output as a parameter, as can the "degree of consonance" of a note within that key. Status: On hold, 50% complete. The basic structure works, but key-consonance determination needs some refinement. Blocking items: I'm kind of waiting to see if Audiomulch grows a good "control voltage" model in the next couple of releases, to avoid MIDI-routing hackery. Actually, a Plogue-Bidule-style explicit MIDI routing scheme would work as well. Related: Nooner and renooN virtual control voltage projects. Make a suggestion.
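
To make the "note rate" output concrete, here's a rough C++ sketch: take note-on timestamps, turn the inter-onset intervals into a smoothed notes-per-second figure, and map that onto a 0-127 controller value. The smoothing constant and the 0-10 notes/sec scaling are assumptions of mine, not Façade's actual analysis.

```cpp
// Rough sketch of a "note rate" control output: smooth the inverse of the
// inter-onset interval and scale it to a 7-bit controller value.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical note-on times in seconds: a slow phrase, then a fast run.
    std::vector<double> noteOnTimes = {0.0, 0.5, 1.0, 1.5, 1.6, 1.7, 1.8, 1.9};

    double smoothedRate = 0.0;                 // notes per second, smoothed
    const double smoothing = 0.5;              // 0 = no memory, 1 = frozen (assumed)
    for (size_t i = 1; i < noteOnTimes.size(); ++i) {
        double interval = noteOnTimes[i] - noteOnTimes[i - 1];
        double rate = 1.0 / interval;
        smoothedRate = smoothing * smoothedRate + (1.0 - smoothing) * rate;

        // Map, say, 0..10 notes/sec onto a 0..127 controller value.
        int cc = std::min(127, static_cast<int>(smoothedRate / 10.0 * 127.0));
        std::printf("t=%.1fs  rate=%.1f/s  control=%d\n",
                    noteOnTimes[i], smoothedRate, cc);
    }
    return 0;
}
```

Fast, even passages push the value up; sparse playing lets it fall back down, which is exactly the sort of signal you'd wire to a filter cutoff or brightness parameter.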

Neurobeat: Another project inspired by an article in Computer Music Journal. The idea is that it will output a cyclic/rhythmic pattern of noises; the noises themselves will be mutated from material coming in on an audio input channel, sliced up, filtered, modulated, stretched, warped, and whatnot. Each mutation of input noise enters an internal repertoire of sounds. Meanwhile, it constructs sequences of those sounds to play back, varying the sequence from repetition to repetition. Pattern engine may involve a neural net fed by the input audio, or maybe not. With no input sound to work with, it can use the drum synthesis engine from Animal to get its "seed" noises. With an input, elements of the input will reappear in the output, though possibly modified beyond recognition. Status: Twinkle in the eye, 1% complete. I have some notes on paper. Blocking items: Just need to bounce some ideas around some more and then find a couple of quiet days to do it. Related: Autonomous rhythm generation is central to the Animal project. Make a suggestion.
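
Here's a toy C++ sketch of the slice/mutate/resequence idea, nothing like a finished engine: chop an input buffer into slices, mutate each one as it enters the repertoire, then play a cyclic pattern that gets nudged a little on every repetition. The 100 ms slice length and the two mutations (reverse, attenuate) are placeholder choices of mine.

```cpp
// Toy slice/mutate/resequence sketch: build a repertoire of mutated slices
// from an input buffer, then cycle a pattern that drifts between repetitions.
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

using Slice = std::vector<float>;

int main() {
    // Stand-in for an input audio channel: one second of a dummy ramp.
    std::vector<float> input(44100);
    for (size_t i = 0; i < input.size(); ++i) input[i] = float(i) / input.size();

    // Chop the input into equal slices and mutate each one on entry to the
    // repertoire (here: reverse it or attenuate it, chosen at random).
    const size_t sliceLen = 4410;                    // 100 ms slices (assumed)
    std::mt19937 rng(7);
    std::vector<Slice> repertoire;
    for (size_t start = 0; start + sliceLen <= input.size(); start += sliceLen) {
        Slice s(input.begin() + start, input.begin() + start + sliceLen);
        if (rng() % 2) std::reverse(s.begin(), s.end());
        else for (float& x : s) x *= 0.5f;
        repertoire.push_back(std::move(s));
    }

    // Play back a cyclic pattern of slice indices, re-shuffled a little every
    // repetition so the loop drifts rather than repeating exactly.
    std::vector<size_t> pattern = {0, 3, 1, 3, 2, 3, 0, 5};
    for (int rep = 0; rep < 3; ++rep) {
        std::swap(pattern[rng() % pattern.size()], pattern[rng() % pattern.size()]);
        std::printf("repetition %d:", rep + 1);
        for (size_t idx : pattern) std::printf(" %zu", idx);
        std::printf("\n");
    }
    return 0;
}
```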

MIDIGrind: This would be a standalone app. Inspired by discussions about MIDIBounce, 'Grind would be a system for generating MIDI controller data and possibly note data from a more mathematical than musical perspective. Some days, it seems like it should consist of little more than a suitably-designed programming language with a repertoire of real-time MIDI-out functionality. Status: Some good ideas, 2% complete. Notes on paper. Blocking items: Good choice of embeddable language. A clear idea of how it can be useful. A bolt from the blue. Related: Despite clear anatomical similarities, fundamentalists will never believe that MIDIGrind descended from MIDIBounce. Make a suggestion.
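
For flavor, here's a tiny C++ sketch of the math-first approach: evaluate some function of time and quantize it into a stream of 7-bit controller values. The summed-sines function and the 20 ms tick are arbitrary assumptions; the whole point of 'Grind would be letting an embedded language define them instead.

```cpp
// Math-first controller sketch: sample a function of time at a fixed tick
// and quantize the result into 0..127 controller values.
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double tickSeconds = 0.02;               // one CC event every 20 ms (assumed)
    for (int tick = 0; tick < 16; ++tick) {
        double t = tick * tickSeconds;
        // Two detuned sines summed, then scaled and offset into the 0..1 range.
        double v = 0.5 + 0.25 * std::sin(2 * pi * 1.0 * t)
                       + 0.25 * std::sin(2 * pi * 1.7 * t);
        int cc = static_cast<int>(v * 127.0 + 0.5);
        // Here the value would go out as a MIDI CC message; we just print it.
        std::printf("t=%.2fs  CC value %d\n", t, cc);
    }
    return 0;
}
```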