Hey guys, I'm writing a user manual for some software I'm publishing. It's a software synthesizer design toolkit, for making your own software synthesizer in your programming language of choice. Of course, in order to make your own synthesizer, you must know how one works.

My goal in writing this user manual is not only to document my code, but also to teach how synthesizers actually work, so that anyone can make their own. That's where this post comes in. I need inspiration on what exactly it is people don't already know about them, and what all the hot topics are.

I'm happy to actually explain these things in the comments below!

top 29 comments
[–] Pxtl@lemmy.ca 20 points 11 months ago (5 children)

For somebody who has no idea about them at all:

When I was a kid in the 80s, a "synthesizer" was an electronic keyboard. Now, a "synthesizer" is a mess of knobs and buttons that looks more like a drum machine than a piano.

So, uh... my Q: "what's a synthesizer?"

[–] madsen@lemmy.world 8 points 11 months ago

That mess of knobs and buttons has been around since the '50s, longer than the more compact '80s synths (https://en.wikipedia.org/wiki/Modular_synthesizer). Because of their size, modular systems are usually considered studio gear rather than stage gear, which may also explain why the more compact synths were more visible earlier on: you rarely got to look inside a studio back then, compared to now.

To answer your question: a synthesizer (when we're talking about sound) is an instrument that generates sound by creating waveforms and possibly combining them in different ways to achieve different sounds. Typically they also come with filters and envelopes that further shape the resulting sound.
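
To make "creating waveforms" a bit more concrete, here's a rough sketch in plain C (purely illustrative, not from any particular toolkit) of the most basic building block, an oscillator that fills a buffer with a sine wave:

    #include <math.h>

    #define SAMPLE_RATE 44100
    #define TWO_PI 6.2831853f

    /* Fill out[] with n samples of a sine wave at freq Hz.
       The phase is carried between calls so the waveform stays continuous. */
    void sine_osc(float *out, int n, float freq, float *phase)
    {
        float inc = TWO_PI * freq / SAMPLE_RATE;
        for (int i = 0; i < n; i++) {
            out[i] = sinf(*phase);
            *phase += inc;
            if (*phase > TWO_PI)
                *phase -= TWO_PI;
        }
    }

Filters and envelopes then operate on buffers like this one: filters shape the harmonic content, envelopes shape the level over time.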

[–] TimeSquirrel@kbin.social 7 points 11 months ago (1 children)

There are synthesizers, and there are electronic keyboards. The main difference is that synths generate completely novel sounds using circuits like oscillators, filters, etc., while regular keyboards just play back prerecorded sounds. A synth can come with a keyboard to control it, or it may be a completely independent unit, maybe rack-mountable, that's controlled digitally from a control keyboard or sequencer using MIDI signals.
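
On the MIDI part: the controller keyboard only sends note numbers and velocities, and the synth itself maps those to frequencies. The standard conversion (equal temperament, A4 = 440 Hz at MIDI note 69) is a one-liner, shown here just as an illustration:

    #include <math.h>

    /* Convert a MIDI note number (0-127) to a frequency in Hz,
       assuming equal temperament with A4 (note 69) tuned to 440 Hz. */
    double midi_to_hz(int note)
    {
        return 440.0 * pow(2.0, (note - 69) / 12.0);
    }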

[–] Pxtl@lemmy.ca 4 points 11 months ago* (last edited 11 months ago) (2 children)

Thanks! I've heard a million explanations, but this one is clear. So the synth takes the composition as input from what most people would think of as the "instrument" (as in, the place where somebody is picking notes and rhythm), but the synth is the thing that controls the shape of the actual sound waves, and ideally that waveform is fully constructed within the synth from first principles, instead of being a set of samples pitch-shifted to hit each frequency and play different notes, right?

And obviously, adjusting the parameters of the synth itself is also part of the performance and composition, just as muting a trumpet or hitting an effects pedal is part of that, even though it's not really part of "what note do I play when", and with far more parameters available since the sound is wholly constructed instead of just being a modification of, e.g., a vibrating string or brass.

So when people talked about "synthesizer music" in the '80s and the popular image was of a guy jamming on the keyboard, what was actually meant was that the keyboardist was playing a keyboard that was using a synth to generate the actual sound, which might or might not be a separate unit from the keyboard.

[–] Wootz@lemmy.world 3 points 11 months ago* (last edited 11 months ago)

which might or might not be a separate unit from the keyboard.

Funny that you mention it. Synthesizers are very much a product of university research programs. Back in the '60s and '70s, when the synthesizer as a concept was still new, there was heated debate between the pioneers of the field (Robert Moog in New York and Donald Buchla in Berkeley, California) over whether or not synthesizers should even have a keyboard.

The word "synthesizer" doesn't actually come from "synthetic", as many believe, but from synthesis in the academic sense of the word: the idea of breaking a sound down into its individual parts and reassembling them.

[–] lofenyy@lemmy.ca 2 points 11 months ago

This is exactly right!

[–] lofenyy@lemmy.ca 6 points 11 months ago

Fantastic question! A synthesizer is a device that generates audio signals. I remember reading somewhere that they were sometimes referred to as "noise machines", the Minimoog in particular, I think. A drum machine is a type of synthesizer, as were the electronic-keyboard synths of the past.

[–] radni@lemmy.world 5 points 11 months ago

And I thought, "Wait a second, I know the synthesizer, why don't I use the synthesizer, which is the sound of the future?"

-Daft Punk

[–] can@sh.itjust.works 2 points 11 months ago* (last edited 11 months ago)

Something that allows you to design and shape sounds.

You may be thinking of a modular synth with knobs?

[–] Wootz@lemmy.world 8 points 11 months ago (1 children)

Are you sticking to only softsynths / digital, or also going into analog?

I ask because I have previously struggled with explaining why plugins and DSP stuff work the way they do (why is "saving my settings" called a patch?) without going into a long-winded history lesson.

Either way, super cool!

I think I know a fair bit about both the history of synths and how they work, so if you need someone to bounce ideas off of, don't hesitate to write.

[–] lofenyy@lemmy.ca 4 points 11 months ago

Thank you so much for the offer! I'll mostly stick to soft synths, but I don't mind going into the history a little bit to explain terminology and whatnot. There's a surprising amount of overlap between analog subtractive synths and software subtractive synths anyways.

[–] funkforager@sh.itjust.works 7 points 11 months ago (1 children)

Cool project! Will it explain some of the related concepts like envelopes and ADSR? It might be nice to talk a bit about calibration/microtuning so it can match external gear. Also maybe some recipes, so the synth a reader builds has a few basics to work from.

[–] lofenyy@lemmy.ca 5 points 11 months ago

Envelopes and ADSR are an absolute must. Thanks for the suggestions, I'll be taking you up on them.
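
For anyone reading along who hasn't met the term: an ADSR envelope is just a gain curve applied over the lifetime of a note. Attack ramps from 0 up to full level, Decay falls to the Sustain level, which holds for as long as the key is down, and Release ramps back to 0 after the key is let go. A very rough per-sample sketch (illustrative only, not the toolkit's actual API):

    typedef enum { ENV_IDLE, ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN, ENV_RELEASE } env_stage;

    typedef struct {
        float attack_inc;   /* level gained per sample during attack     */
        float decay_inc;    /* level lost per sample during decay        */
        float sustain;      /* sustain level, 0..1                       */
        float release_inc;  /* level lost per sample during release      */
        float level;        /* current output level                      */
        env_stage stage;    /* a note-on handler sets this to ENV_ATTACK */
    } adsr;

    /* Advance the envelope by one sample and return the current gain.
       gate is 1 while the key is held, 0 once it has been released. */
    float adsr_tick(adsr *e, int gate)
    {
        if (!gate && e->stage != ENV_IDLE)
            e->stage = ENV_RELEASE;

        switch (e->stage) {
        case ENV_ATTACK:
            e->level += e->attack_inc;
            if (e->level >= 1.0f) { e->level = 1.0f; e->stage = ENV_DECAY; }
            break;
        case ENV_DECAY:
            e->level -= e->decay_inc;
            if (e->level <= e->sustain) { e->level = e->sustain; e->stage = ENV_SUSTAIN; }
            break;
        case ENV_RELEASE:
            e->level -= e->release_inc;
            if (e->level <= 0.0f) { e->level = 0.0f; e->stage = ENV_IDLE; }
            break;
        default: /* ENV_IDLE and ENV_SUSTAIN just hold their level */
            break;
        }
        return e->level;
    }

The oscillator's output is simply multiplied by this gain, sample by sample; a lot of the snappy-versus-pad difference between sounds comes down to different attack and release settings.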

[–] Terminarchs@slrpnk.net 7 points 11 months ago (1 children)

Honestly, latency/performance stuff. As in: how do VST synths ensure that they'll synthesize in time to keep up with the audio buffer, depending on user hardware. I'm asking because I've seen/heard countless VST synths fail at this and sound like a clicky mess, and I feel like if I understood how it's handled in code it would make more sense to me.

[–] lofenyy@lemmy.ca 4 points 11 months ago

I think immediately of libao for the C programming language. Imagine a loop that has to produce at least 44,100 samples every second. If the synth can't write to the buffer that quickly, the sound card runs out of samples, has nothing to play, and the output drops out; hence that clicky mess (a buffer underrun). This is for realtime synthesis, though. If you can produce audio at your own pace, you have the opportunity to sound good every time.
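
To make that concrete, here's a stripped-down sketch of that loop using libao, with a plain sine wave standing in for the synth (based on the standard libao usage pattern; treat the details as illustrative). The whole game is that each ao_play call has to come before the device finishes playing the previous chunk:

    #include <ao/ao.h>
    #include <math.h>
    #include <stdint.h>
    #include <string.h>

    #define RATE   44100
    #define CHUNK  512            /* samples per chunk, ~11.6 ms at 44.1 kHz */

    int main(void)
    {
        ao_initialize();

        ao_sample_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.bits = 16;
        fmt.channels = 1;
        fmt.rate = RATE;
        fmt.byte_format = AO_FMT_NATIVE;

        ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);
        if (!dev) return 1;

        int16_t buf[CHUNK];
        double phase = 0.0;
        double inc = 2.0 * 3.141592653589793 * 440.0 / RATE;   /* 440 Hz */

        /* Play roughly five seconds, one chunk at a time. */
        for (int c = 0; c < 5 * RATE / CHUNK; c++) {
            for (int i = 0; i < CHUNK; i++) {       /* "synthesize" the chunk */
                buf[i] = (int16_t)(0.2 * 32767.0 * sin(phase));
                phase += inc;
            }
            /* If generating the chunk ever takes longer than ~11.6 ms,
               the device starves and you hear the clicky underrun mess. */
            ao_play(dev, (char *)buf, sizeof(buf));
        }

        ao_close(dev);
        ao_shutdown();
        return 0;
    }

(Build with something like: cc sine.c -lao -lm.) Plugin APIs such as VST roughly work the other way around: the host calls your process function with a buffer to fill and a hard deadline, but the underrun failure mode is the same.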

[–] tias@discuss.tchncs.de 6 points 11 months ago* (last edited 11 months ago)

I'm a long-time software developer who at one point spent a lot of time on a software synth as a hobby project (never finished it as I realized it had fundamental design flaws). I'm also interested in making music (but still suck at it), follow various producers on YouTube and dabble with Ableton. Here are some things that puzzle me:

Latency seems inevitable, regardless of how fast your CPU or code is. Many algorithms simply require a certain window of input data before they can produce something. For example, an FFT with a window size of 2048 requires 2048 samples (~50 milliseconds) before it can react. Chain multiple such filters together and it adds up. In my hobby project I wanted to make a "reverse reverb" module (buffer data, reverse it, apply reverb, then reverse audio again to get an effect as if the sound is "arriving") and I could never wrap my head around how to do it. It could potentially add a latency of tens of seconds. How can we deal with this in the audio pipeline? It seems like for prerecorded or generated audio, it should be possible to consume data ahead of time to make the output come out at the right time. But all of the modules need to be synchronized so e.g. a drum comes out at the right time along all paths.

Typically analog synths have lower latency, but I don't understand why. Aren't they theoretically subject to the same limitations as a digital synth? Even an analog filter would need some kind of buffer to determine frequency. It's like Heisenberg's uncertainty principle but for sound. So how does that work, and how can we replicate the low latency of analog synths in software synths?

I lack an intuition about sound synthesis and it all seems very magical, so I wish somebody would help me untangle the relationship between what I hear and what the algorithm does. I mean, it's easy to look up algorithms for producing audio, but I don't know how to apply those algorithms to incrementally work my way toward the sound I'm looking for in my head. As a developer I have an analytical mindset, and most producers I follow seem to go more on feeling (which is difficult for me). I have a hunch that a lot of what they talk about is just placebo, but I don't know how I would test that assertion. For example, there are people who compare the different sounds of Ableton's Operator and Serum, as if they are different beasts. But both are FM synths; it's the same maths behind them. So why would they sound different? With all the FM synths out there, what are the things that actually separate them and produce a different "feeling"?

In fact, speaking of FM synths, they are one of the biggest mysteries to me. I know what they do mathematically, but I need help understanding why someone chose to build a synth in this particular way and how they tame it to get the sound they want. It just seems like a really chaotic way to work, only slightly better than a random number generator.

Perhaps it would be interesting as a case study to try to replicate some of these commercial software synths by stitching together basic algorithms covered in the manual.
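
As a reference point for the FM discussion above: at its core, a single FM "operator pair" is just one sine wave wiggling the phase of another (classic DX7-style FM is technically phase modulation). A bare-bones sketch, purely illustrative:

    #include <math.h>

    #define SAMPLE_RATE 44100
    #define TWO_PI 6.283185307179586

    /* Two-operator FM: the modulator's output is added to the carrier's phase.
       ratio = modulator frequency / carrier frequency
       index = modulation depth; a higher index means more, brighter sidebands */
    void fm_pair(float *out, int n, double carrier_hz, double ratio, double index)
    {
        double cphase = 0.0, mphase = 0.0;
        double cinc = TWO_PI * carrier_hz / SAMPLE_RATE;
        double minc = cinc * ratio;

        for (int i = 0; i < n; i++) {
            double mod = sin(mphase) * index;       /* modulator output        */
            out[i] = (float)sin(cphase + mod);      /* phase-modulated carrier */
            cphase += cinc;
            mphase += minc;
        }
    }

The taming mostly comes from two habits: keep the ratio at or near a simple integer so the sidebands land on harmonics, and put an envelope on the index so the brightness evolves over the note instead of sitting still. Random-sounding results usually mean a non-integer ratio combined with a large, static index.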

[–] tabarnaski@sh.itjust.works 6 points 11 months ago (1 children)

Hey, that's great! Will it cover all types of sound synthesis (FM, subtractive, etc.) or focus on a specific one?

[–] lofenyy@lemmy.ca 6 points 11 months ago

All kinds. No reason to stop at one haha.

[–] ianovic69@feddit.uk 5 points 11 months ago (1 children)

What filter poles are and how they relate to cutoff frequency response.

This is something that confuses people and always needs good explanations.

Great work, good man!
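
For readers who want a concrete picture of what a pole looks like in code: each pole contributes roughly 6 dB/octave of rolloff above the cutoff, which is why the classic "4-pole" synth filter is the 24 dB/octave one. A single pole is only a few lines; here's a minimal sketch (using one common coefficient approximation, purely as an illustration):

    #include <math.h>

    #define SAMPLE_RATE 44100
    #define TWO_PI 6.2831853f

    typedef struct { float a; float y; } onepole;

    /* Set the cutoff; a = 1 - e^(-2*pi*fc/fs) is a common one-pole coefficient. */
    void onepole_set_cutoff(onepole *f, float cutoff_hz)
    {
        f->a = 1.0f - expf(-TWO_PI * cutoff_hz / SAMPLE_RATE);
    }

    /* One sample in, one sample out: a 6 dB/octave lowpass. */
    float onepole_tick(onepole *f, float x)
    {
        f->y += f->a * (x - f->y);
        return f->y;
    }

Cascade two of these and you get a 2-pole (12 dB/octave) response; four gives the familiar 24 dB/octave slope. Resonance comes from adding a feedback path around the cascade, which is where the real design work starts.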

[–] lofenyy@lemmy.ca 4 points 11 months ago
[–] cashews_best_nut@lemmy.world 4 points 11 months ago (1 children)

This looks very interesting and I look forward to reading it.

Will this make me the next Aphex Twin?

[–] lofenyy@lemmy.ca 3 points 11 months ago

Not if it makes me the next Aphex Twin first! :3

[–] monk@lemmy.unboiled.info 4 points 11 months ago (1 children)

"What should I attempt before deciding I need to roll my own?"

[–] lofenyy@lemmy.ca 3 points 11 months ago

In my opinion, this is the same question as whether or not to use Arch Linux over another distro. Roll your own simply if you want, or if no other synthesizer is doing it for you. Sometimes it's just worth doing it for the fun of it, or for the sake of learning. I'm actually planning on making a bunch of soft synths, so if you like, you can let me know what you'd like to see.

[–] TimeSquirrel@kbin.social 3 points 11 months ago* (last edited 11 months ago) (1 children)

What language? I'm decent at C++. Plz no Python. I'm too old to learn that now.

[–] lofenyy@lemmy.ca 2 points 11 months ago

C and C++, but I'd like to make it a language agnostic library if possible.

[–] hamptonio@lemmy.world 3 points 11 months ago

Techniques to avoid aliasing. Difficult topic to do well though.
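
For context, for readers who haven't hit this yet: a naive digital sawtooth or square contains harmonics above the Nyquist frequency (half the sample rate), and those fold back down as inharmonic aliasing. The brute-force fix is additive: build the wave from sines and simply stop adding harmonics at Nyquist. A rough sketch, just to illustrate the idea:

    #include <math.h>

    #define SAMPLE_RATE 44100
    #define PI 3.141592653589793

    /* Band-limited sawtooth at freq Hz: sum the 1/k sine harmonics,
       but only those below Nyquist, so nothing can alias. */
    void saw_bandlimited(float *out, int n, double freq)
    {
        double nyquist = SAMPLE_RATE / 2.0;
        for (int i = 0; i < n; i++) {
            double t = (double)i / SAMPLE_RATE;
            double s = 0.0;
            for (int k = 1; k * freq < nyquist; k++)
                s += sin(2.0 * PI * k * freq * t) / k;
            out[i] = (float)(s * 2.0 / PI);   /* scale to roughly -1..1 */
        }
    }

This is far too slow for a realtime oscillator bank, which is why techniques like PolyBLEP, band-limited wavetables (one table per octave), and oversampling exist; but it makes a good reference signal to compare the faster methods against.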

[–] yum 2 points 11 months ago (1 children)

Any guide to making the GUI actually resemble a synthesizer. Having knobs and sliders doesn't seem to be enough, in my case.

Great project btw!

[–] lofenyy@lemmy.ca 2 points 11 months ago

This is actually the hardest part for me, personally. I'm an audio guy, not a graphics guy, so I'm afraid I can't help out here.