Polyphony in AudioGL


Polyphony is the number of notes that a module can play simultaneously. A module with four voices (Polyphonic) can play a four-note chord; a module with one voice (Monophonic) can only play one note at a time.
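
The difference can be pictured with a small sketch in plain Python (this is not AudioGL code; the Module class and the note names below are purely illustrative). A polyphonic module keeps one slot per simultaneous note, while a monophonic module replaces whatever was playing before:

    # Conceptual sketch only -- AudioGL is patched graphically, not scripted.
    # The Module class here is hypothetical and just models voice allocation.

    class Module:
        def __init__(self, voices):
            self.voices = [None] * voices   # one slot per simultaneous note

        def note_on(self, note):
            for i, v in enumerate(self.voices):
                if v is None:               # find a free voice
                    self.voices[i] = note
                    return
            self.voices[0] = note           # no free voice: reuse the first slot

    poly = Module(voices=4)                 # polyphonic: four-note chords possible
    mono = Module(voices=1)                 # monophonic: each new note replaces the last

    for note in ("C4", "E4", "G4", "B4"):   # play a four-note chord
        poly.note_on(note)
        mono.note_on(note)

    print(poly.voices)                      # ['C4', 'E4', 'G4', 'B4']
    print(mono.voices)                      # ['B4']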

AudioGL offers detailed control over Polyphony. To use AudioGL effectively, and to lower the CPU usage of your projects, this article is a must-read.

Overview

To structure an AudioGL project properly, you have to think a little about Polyphony. If you are creating an instrument that will be used to play chords, it should be Polyphonic.

But if you have a series of insert effects which modify the sound of that instrument, those effects should be monophonic.

By making the effects system monophonic, you save your computer's resources and make your project easier to organize. Monophonic modules have a high degree of connectivity, which allows you to use the same effects system throughout your entire project.

Polyphony in Instruments

Fig 1.0: Good Instrument Design, Example 1.
Fig 1.1: Good Instrument Design, Example 2.

As a general rule, only the sequencing, sound generating, modulation, and filtering sections of a synthesizer need to be polyphonic. Everything downstream from those sections can be monophonic.

Some instruments (drums, basslines, leads) could be built entirely monophonic, but there is no need to do this: in terms of resource usage, playing one voice on a polyphonic instrument is comparable to using a monophonic synthesizer.

On the other hand, playing a chord on a polyphonic instrument will be far more resource-friendly if it has a monophonic effects section (see Fig 1.0). You can think of the green section as the tone generator, and the blue section as a chain of Insert Effects.
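
The layout in Fig 1.0 can be sketched in plain Python (this is not AudioGL code; the sine oscillator and the gain stage below are hypothetical stand-ins for the tone-generator voices and the insert-effect chain). The point is that the voices are summed to a single signal before the effects run, so the effects cost the same no matter how many voices are playing:

    # Conceptual sketch only -- not AudioGL code. It models why a monophonic
    # effects section is cheaper: the effect runs once on the summed signal
    # instead of once per voice.

    import math

    SAMPLES = 64

    def voice(freq, n=SAMPLES, rate=48000):
        """One polyphonic voice: a bare sine oscillator (green section)."""
        return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

    def insert_effect(signal, gain=0.5):
        """Stand-in for an insert-effect chain (blue section); here just a gain stage."""
        return [s * gain for s in signal]

    chord = [voice(f) for f in (261.63, 329.63, 392.00)]     # 3-voice tone generator

    # Monophonic effects section: sum the voices first, process once.
    mono_bus = [sum(samples) for samples in zip(*chord)]
    output = insert_effect(mono_bus)                          # 1 effect pass total

    # A polyphonic effects section would instead cost one pass per voice:
    per_voice_output = [insert_effect(v) for v in chord]      # 3 effect passes total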

Modulation

In AudioGL Beta, modulation takes place between Envelope Modules and Parameters inside of Modules. For this to work, the envelope and the parameter must have the same number of voices. It is important to keep this in mind while building an instrument. In future versions of AudioGL, it will be possible to modulate monophonic parameters with polyphonic envelopes, and vice versa.
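
A rough model of this rule, in plain Python rather than anything AudioGL exposes (the Envelope and Parameter classes are hypothetical stand-ins for modules in the patch):

    # Conceptual sketch only -- AudioGL connects envelopes to parameters in its
    # graphical interface. This just models the Beta-era rule that the voice
    # counts must match before a modulation connection can work.

    class Envelope:
        def __init__(self, voices):
            self.voices = voices

    class Parameter:
        def __init__(self, voices):
            self.voices = voices

    def can_modulate(envelope, parameter):
        # In AudioGL Beta, source and target must have the same number of voices.
        return envelope.voices == parameter.voices

    print(can_modulate(Envelope(4), Parameter(4)))   # True: poly envelope -> poly parameter
    print(can_modulate(Envelope(4), Parameter(1)))   # False: poly envelope -> mono parameter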

When to Use a Monophonic Instrument

Monophonic instruments have their own playing style. The way a monophonic instrument is triggered is quite different from the way a polyphonic instrument is triggered. This playing style is particularly useful for portamento and basslines.

Another time to use a monophonic instrument is when you want to modulate your Send Effects with an envelope. As stated above, in AudioGL Beta, modulation sources and modulated parameters must have the same number of voices.

Polyphony in Projects

Fig 2.0: Overall Project Organization.

The mixing sections of your project will generally be monophonic.

In Fig 2.0, you can see a project which is laid out using generic mixing concepts such as Insert Effects, Send Effects, and Mastering Chains. Most of this project (the section bounded by the blue outline) is monophonic. The sound generating sections (bounded by green outlines) are polyphonic.

Other Things You Should Know

Reverb Modules should always be monophonic. By default, when you create a Reverb unit, it will be monophonic. AudioGL allows you to change the polyphony of a reverb, but doing so uses up a lot of resources. Reverb units are very complex, and there is very little improvement in sound quality if each voice is processed by its own reverb. AudioGL will eventually have a Polyphonic Automation System, and once that feature is available, polyphonic reverbs will make sense.
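
A rough back-of-the-envelope sketch of why per-voice reverbs are expensive (the numbers are invented for illustration, not measurements):

    # Hypothetical cost figures, for illustration only.
    REVERB_COST = 20.0   # arbitrary units of CPU per reverb instance
    VOICES = 8

    mono_reverb_cost = REVERB_COST * 1        # one shared reverb on the summed signal
    poly_reverb_cost = REVERB_COST * VOICES   # one reverb instance per voice

    print(mono_reverb_cost)   # 20.0
    print(poly_reverb_cost)   # 160.0 -- eight times the CPU for nearly the same sound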

Output Modules are always monophonic. You cannot change the polyphony of an output module.

Slave Modules are controlled by their Master Module, and their voices will be managed automatically. You cannot manually change the polyphony of a Slave Module.
