Discussion:
[fluid-dev] using fluidsynth to create soundfont based samples for a supercollider sampler
Michael Geis
2011-08-24 21:06:49 UTC
Hi,
I am trying to use soundfonts for a supercollider sampler that friends are working on.

It would be enormously helpful if you could give feedback on my train of thought below
regarding whether and how this could be achieved with fluidsynth. As I am new to
soundfonts, my approach may be rather naive.

The current idea is to create each sample file by passing fluidsynth a one note .mid file,
outputting the raw wave data to file and then converting to .wav with sox.
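For concreteness, the one-note .mid file could be generated programmatically; the sketch below writes a minimal type-0 Standard MIDI File by hand (the note number, velocity, and duration in ticks are arbitrary placeholders):

```python
import struct

def one_note_midi_bytes(note=60, velocity=100, ticks=480):
    """Return the bytes of a minimal type-0 MIDI file with one note.

    Assumes ticks < 0x4000 so the delta time fits in a two-byte
    variable-length quantity.
    """
    # Track events: note-on at t=0, note-off after `ticks`, end-of-track.
    track = bytes([0x00, 0x90, note, velocity])       # delta 0, note on, ch 0
    if ticks < 0x80:
        delta = bytes([ticks])
    else:
        delta = bytes([0x80 | (ticks >> 7), ticks & 0x7F])
    track += delta + bytes([0x80, note, 0])           # note off
    track += bytes([0x00, 0xFF, 0x2F, 0x00])          # end of track
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)  # 480 ticks/quarter
    chunk = b"MTrk" + struct.pack(">I", len(track)) + track
    return header + chunk

# e.g. write it out for fluidsynth to render:
# with open("one_note.mid", "wb") as f:
#     f.write(one_note_midi_bytes())
```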

The sampler is not very complicated, it essentially applies an ADSR envelope
to the samples. So as long as I can pry those four phases apart within
every .wav file I get from fluidsynth (i.e. find the offset in frames into the .wav
for beginning and end of each phase), it is likely going to get the job done or at least be
reasonably close to what they need.
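What I mean by "applies an ADSR envelope" is roughly the following sketch, which builds a per-frame linear gain curve (the phase times and sustain level here are illustrative, not anything read from a soundfont):

```python
def adsr_envelope(n_frames, sr, attack, decay, sustain_level, release):
    """Per-frame gain for a linear ADSR envelope.

    attack, decay, release are in seconds; sustain_level is in [0, 1].
    The sustain phase fills whatever time remains.
    """
    a = int(attack * sr)
    d = int(decay * sr)
    r = int(release * sr)
    s = max(n_frames - a - d - r, 0)
    env = []
    env += [i / max(a, 1) for i in range(a)]                         # 0 -> 1
    env += [1 - (1 - sustain_level) * i / max(d, 1) for i in range(d)]
    env += [sustain_level] * s                                       # hold
    env += [sustain_level * (1 - i / max(r, 1)) for i in range(r)]   # fade out
    return env[:n_frames]
```

Multiplying each sample frame by the corresponding gain value applies the envelope.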

I looked at the pysf (http://code.google.com/p/pysf/) utility and its output lists either
start times or durations for every envelope, which leads me to conclude (hope?) that the .sf2
format actually specifies envelope length and that these would be the parameters I need
(because my friends want to vary them).

So, is there a way of figuring out which frames within a one-note .wav sample produced
by fluidsynth demarcate the start and end points of the phases of an ADSR envelope?
How would one go about it?
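In frame terms, the mapping I have in mind is something like the sketch below: given envelope times in seconds (however they end up being read out of the .sf2, e.g. via pysf) and the render's sample rate, compute the frame offset where each phase starts. The specific times used in the comment are made-up examples.

```python
def phase_offsets(sr, attack, decay, release, note_len):
    """Frame offset at which each ADSR phase begins.

    attack, decay, release: phase lengths in seconds.
    note_len: seconds until note-off, which is when release starts.
    """
    a_end = int(attack * sr)            # attack runs [0, a_end)
    d_end = a_end + int(decay * sr)     # decay runs [a_end, d_end)
    r_start = int(note_len * sr)        # sustain runs [d_end, r_start)
    r_end = r_start + int(release * sr)
    return {"attack": 0, "decay": a_end, "sustain": d_end,
            "release": r_start, "end": r_end}

# e.g. phase_offsets(44100, 0.01, 0.05, 0.2, 1.0)
```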

Thanks for your consideration.

Best,
Michael
Andrew Suffield
2011-08-24 21:39:25 UTC
Post by Michael Geis
The current idea is to create each sample file by passing fluidsynth a one note .mid file,
outputting the raw wave data to file and then converting to .wav with sox.
The sampler is not very complicated, it essentially applies an ADSR envelope
to the samples. So as long as I can pry those four phases apart within
every .wav file I get from fluidsynth (i.e. find the offset in frames into the .wav
for beginning and end of each phase), it is likely going to get the job done or at least be
reasonably close to what they need.
You're describing a midi renderer.

You just want to adjust the envelope? Grab a soundfont editor (swami)
and add modulators to the instruments you want to work with. Use a
single source, set to an unused midi CC, and a destination of volume
envelope -> attack/hold/delay/sustain/etc. Create one for each
parameter you want to control. Want to adjust something else? Probably
can do it the same way.

Now just use fluidsynth to play the note, and send it CC messages to
adjust those settings. Or use any other soundfont-compliant midi
renderer, including a lot of keyboards, and do the same thing.
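For example, in fluidsynth's interactive shell it would look something like this (CC number 21 is just a placeholder for whichever unused CC you bound the modulator to in swami):

```
> cc 0 21 96        # set the attack-time modulator on channel 0
> noteon 0 60 100   # play middle C at velocity 100
> noteoff 0 60
```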

This is what midi keyboards were *invented* to do; the rest came
later. It's not specific to fluidsynth. Mutating sounds is the main
purpose of the technology. Normally the CCs would be driven by sliders
on a physical keyboard.

(IIRC, supercollider is jack-based, so you can just route fluidsynth's
output into it directly in real time)
Michael Geis
2011-08-26 19:33:39 UTC
Thanks very much for the fast reply. It looks like there are quite a few things I need to learn.
I have been under the weather the last few days, which made it difficult to think straight. I'll do more
thinking and will probably come back with more detailed questions.

Cheers,
Michael
