Show HN: Anyma V, a hybrid physical modelling virtual instrument (aodyo.com)
79 points by oinj 9 months ago | 29 comments
Hi HN! We're a small team in Lille (France) who make synthesizers and MIDI controllers. We've just released a virtual plugin version of our hardware synth Anyma Phi, which offers a semi-modular environment focused on physical models, although it also includes several other kinds of synthesis.

Here's a video: https://www.youtube.com/watch?v=6efDQ9GmRpg

We're not pivoting to VSTs; it was simply a practical way of investigating several issues and helping us with the ongoing development of our upcoming Kickstarter-backed synth (Anyma Omega) and MPE controller (Loom), as well as a gift to thank our backers for the wait they had to go through due to several manufacturing and production issues.

I enjoy reading music-related entries here, so I thought I'd contribute this time, and I hope it will interest some of you. I'm happy to answer any questions or remarks.




Cool to see you on HN!

What do you see as setting your synths and hardware apart from, say, the Osmose and Hydrasynth?

If you don't mind me asking, for your hardware, what's running under the hood? Big ARM cores / SOC? RTOS on a Cortex-M? What challenges have you faced working on whichever you're less used to? (The VST if you have more hardware background, the hardware if you have more desktop software background)


The EaganMatrix (inside the Osmose) and the Hydrasynth are both great, and each has its own approach. I think the Anyma synths are less beefy in terms of computational resources, but the synth engine offers more kinds of modules, more freedom in some ways. Not that it's always useful to have 16 LFOs or envelopes, or to be able to modulate the curve of a mapping, but it sometimes makes trying an idea easier during sound design. As we started with a wind instrument (Sylphyo), we also take special care to make support for this kind of MIDI controller effortless.

The synth engine in the Anyma Phi runs on an STM32F4. The UI and MIDI routing run on a separate STM32F4. No RTOS: we find cooperative multitasking much easier to reason about, and easier to debug. So far we don't have any latency/jitter issues with this approach, although it required writing some things (e.g. graphics) in a specific way. The Omega runs on a mix of Cortex-A7 and STM32.
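
To give an idea of what I mean by cooperative multitasking, here's a minimal sketch of the pattern (a desktop simulation with made-up task names and timings, not our actual firmware):

    // Minimal cooperative "superloop": each task is a plain function that
    // runs quickly to completion and is called at its own interval. No RTOS,
    // no preemption, so there's very little to reason about when debugging.
    #include <chrono>
    #include <cstdint>

    static uint32_t millis() {
        using namespace std::chrono;
        static const auto t0 = steady_clock::now();
        return (uint32_t)duration_cast<milliseconds>(steady_clock::now() - t0).count();
    }

    struct Task {
        void (*run)();
        uint32_t period_ms;
        uint32_t next_due;
    };

    static void renderAudioBlock() { /* fill the next DMA buffer */ }
    static void pollMidi()         { /* drain the MIDI FIFO */ }
    static void updateUi()         { /* redraw only what changed */ }

    int main() {
        Task tasks[] = {
            { renderAudioBlock, 1,  0 },   // tightest deadline first
            { pollMidi,         1,  0 },
            { updateUi,         16, 0 },   // ~60 Hz is plenty for a small screen
        };
        for (;;) {                          // the superloop
            const uint32_t now = millis();
            for (Task& t : tasks) {
                if ((int32_t)(now - t.next_due) >= 0) {   // wrap-safe comparison
                    t.next_due = now + t.period_ms;
                    t.run();                // must return quickly, never block
                }
            }
        }
    }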

I have a pure software background, but I came to appreciate the stability, predictability and simplicity of embedded development: you have a single runtime environment to master and you can use it fully, a Makefile is enough, and you have to be so careful with third-party code that you generally know how everything works from end to end. The really annoying downside is the total amount of hair lost chasing bugs where it's hard to know whether the hardware or the software is at fault. In contrast, programming a cross-platform GUI is sometimes hell, and a VST has to deal with many more different configurations than a hardware synth; you're never sure of the assumptions you can make. The first version of Anyma V crashed for many people, but we never saw it happen on the dozen machines we tested it on.


Interesting perspective. I can definitely see how you have the immediacy edge over the pain of the EaganMatrix, and having different engines besides the core wavetable-y engine of the Hydra is a win, IMHO - though, yeah, both fit different needs.

I'm mostly an embedded guy (usually much lower-power ST parts), so it's neat to hear about how you approached it. Keeping the chips separate so the audio can't underrun as easily when the UI needs to react is a really nice design!

I see a lot of your engine is modified from Mutable Instruments, but you do have a good selection of original sound sources as well. What sets yours apart? Did you have a strong background in DSP before Aodyo?


I did a tiny bit of DSP and I've been exposed to the HCI/NIME community in the past, but that's it. Many modules in the Anyma are just reasonable implementations of clever formulae I didn't design but studied from papers :). As for the Mutable stuff, there was a lot of optimization work and there were tradeoffs to make. We are lucky to have a sound designer with a good ear. That said, we've been working for a while on our own waveguide models (Windsyo and others), and we have found some tricks I've never seen elsewhere. There's a lot to explore, especially when looking for "hybrid" acoustic-electronic sounds.
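
If you're curious how simple the core of a waveguide can be, the textbook Karplus-Strong plucked string fits in a few lines (this isn't one of our models, just the classic starting point everyone builds on):

    // Textbook Karplus-Strong: a delay line full of noise, fed back through
    // a gentle low-pass, sounds surprisingly like a plucked string.
    #include <cstdio>
    #include <random>
    #include <vector>

    std::vector<float> pluck(float freq, float sampleRate, int numSamples) {
        const int delayLen = (int)(sampleRate / freq);   // sets the pitch
        std::vector<float> delay(delayLen);

        std::mt19937 rng(42);
        std::uniform_real_distribution<float> noise(-1.0f, 1.0f);
        for (float& s : delay) s = noise(rng);           // the "pluck"

        std::vector<float> out(numSamples);
        int idx = 0;
        for (int n = 0; n < numSamples; ++n) {
            const float a = delay[idx];
            const float b = delay[(idx + 1) % delayLen];
            out[n] = a;
            // Averaging two neighbours low-passes the loop, so the string
            // loses its high frequencies as it decays, like a real one.
            delay[idx] = 0.996f * 0.5f * (a + b);
            idx = (idx + 1) % delayLen;
        }
        return out;
    }

    int main() {
        auto s = pluck(220.0f, 48000.0f, 48000);         // one second of A3
        printf("a sample half a second in: %f\n", s[24000]);
    }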


For sure. I really dig those hybrid sounds too. I'm particularly fascinated with sounds that are more electro-mechanical (see Korg's Phase-8, or Rhythmic Robot's "Spark Gap"), so I'm glad to see more people trying to combine physical modeling and synthesis in smoother ways than just layering them.


> The Omega runs on a mix of Cortex-A7 and STM32.

Oh my. So, how much processing load are you typically at now?

You know your backers are, from what can be gleaned from the KS comms, (to put it mildly) not too convinced Aodyo will provide more than enough juice (!= JUCE) this time for chaining up enough modules while guaranteeing 16-note poly? And this with a multi-timbral design?

(You might refer to your end-of-2023 update regarding the 4+1 core concept, which had to be changed, creating further delay, and so on.)


We had to switch to a more powerful architecture and chip, but the voices are still dispatched across several processors. It'll be enough to withstand 16-note polyphony at 125% load, with some extra power left over because we don't use the second core yet.
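
Just to illustrate the idea of dispatching voices across processors (pure speculation on the strategy, not how the Omega actually does it), the allocation side can be as simple as sending each new note to the least-loaded core:

    // Toy voice dispatcher: each new note goes to whichever DSP core
    // currently renders the fewest voices.
    #include <array>
    #include <cstdio>

    struct Core { int activeVoices = 0; };

    int allocateVoice(std::array<Core, 3>& cores) {
        int best = 0;
        for (int i = 1; i < (int)cores.size(); ++i)
            if (cores[i].activeVoices < cores[best].activeVoices)
                best = i;
        ++cores[best].activeVoices;
        return best;                    // index of the core that will render the note
    }

    int main() {
        std::array<Core, 3> cores;
        for (int note = 0; note < 16; ++note)
            printf("note %2d -> core %d\n", note, allocateVoice(cores));
    }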


I’ve been following the Anyma Phi on the synth blogs; great to see you here!

Any advice for someone on the product side looking to get into the synth development scene? I’m a designer and have so far partnered with a DSP developer on one project, a plugin for Reason based on Mutable Instruments’ Plaits (https://soundlabs.presteign.com), but haven’t really figured out where to go next.


Macro is very good, and beautifully designed! Congrats and thanks for this.


Looks very cool, but note that VirusTotal flags the Windows installer (twice): https://www.virustotal.com/gui/file/29c67d9d9725178a2337f6d0...


Thanks. It's weird: we cross-compile using llvm-mingw from macOS and then run the Inno Setup compiler under Wine inside a fresh Docker image (Linux guest). I'm not sure how to obtain more info on what caused both antivirus engines to trigger, but we'll look into it.


Hi! Would it be misplaced hope to wish that in the near future there will exist an electronic instrument that physically reproduces the clarinet (with the same keys), while simulating finger-hole physics and (most importantly!) reed/lips/tongue/breath interaction?


MIDI wind instruments have been around for a while now, including:

- Akai EWI (one of the first MIDI wind instruments, and still well known and well used)

- Roland Aerophone

- Berglund NuRAD

Have you tried them and where do you think they fall short?


I've watched a lot of reviews on YouTube.

The problem is that these instruments are not physically similar to the clarinet, for example regarding the system of keys and levers.

I hope that some electronic instrument will make that jump.

That would allow clarinetists to silently practice anywhere, as well as seamlessly engage in electronic music and digital creation without having to change their muscle memory.


My son plays the clarinet and uses an EWI with no difficulty? (He doesn't like it much though, and much prefers the real instrument. But he can play it.)


Yes, I believe the second point is less of a problem.

That leaves the first point, which is the major one for me at least (being able to practice the clarinet with headphones).


Thanks so much for making a Linux release of this awesome synth plugin (I'm Mr. Ardour).


This is very cool! Some of the samples in the SoundCloud playlist sound really amazing! Is it possible to use the paid version offline? I keep my studio computer off the network so that I can totally avoid distraction.


Thanks! The software doesn't connect to the network. You can use another computer or your mobile phone for activation.


What is your sound engine built with? What tools are you using for the GUI?

I've found the GUI the hardest part of VST development (but I'm not on a traditional C++ JUCE stack).


We use JUCE for building the app/plugin. It handles the GUI, the audio/MIDI devices and the plugin API. The synth engine was originally developed to run on an STM32F4 (which is what the Anyma Phi uses), so almost everything is purpose-built (with good old Makefiles). On the hardware, we use an immediate-mode UI, and it's hard to go back to something like JUCE, which is flexible but a bit quirky. I often write GUIs with Cocoa for our internal tools (simulators, DSP models, etc.) during the development of our hardware products, and it's a much more comfortable environment.
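
For anyone who hasn't tried the immediate-mode style: the whole screen is simply re-described from the current state on every frame, so there are no widget objects to keep in sync. A grossly simplified sketch (stub drawing and input, not our firmware):

    // Immediate-mode UI in a nutshell: a "button" is just a function that
    // draws itself and reports whether it was tapped this frame.
    #include <cstdio>

    struct Rect  { int x, y, w, h; };
    struct Input { int touchX = -1, touchY = -1; bool released = false; };

    static bool hit(const Rect& r, int x, int y) {
        return x >= r.x && x < r.x + r.w && y >= r.y && y < r.y + r.h;
    }

    static bool button(const Rect& r, const char* label, const Input& in) {
        printf("draw '%s' at (%d,%d)\n", label, r.x, r.y);  // stand-in for real drawing
        return in.released && hit(r, in.touchX, in.touchY);
    }

    int main() {
        float cutoff = 0.5f;
        Input in{ 10, 10, true };               // pretend the user tapped at (10,10)

        // One frame: this whole block runs every frame; state lives outside it.
        if (button({ 0, 0, 40, 20 }, "cutoff +", in))
            cutoff += 0.05f;
        printf("cutoff = %.2f\n", cutoff);
    }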

In 2019 I had an early version of the Anyma engine running on Dear ImGui. It was really fun, but it would have required too much effort to properly manage the audio/MIDI/plugin aspects in a cross-platform way, and the backends were incomplete at the time. JUCE was too much of a time saver to ignore for a team of 1.5.

I'm curious, if you don't use C++ and JUCE, what is your stack?


Did you entertain choosing anything outside of JUCE?


We also investigated iPlug2, but ultimately decided on JUCE partly because it was easier to find help.


FYI, someone (not me) created a thread on Gearspace about your synth [0], which is probably a more relevant place for it than here.

[0] https://gearspace.com/board/new-product-alert/1432677-aodyo-...


Thank you very much. It is extremely nice that a virtual version of hardware such as this is created AND shared.

High regards!


Hello fellow Frenchmen!

Physical modelling is really fascinating... Currently testing this and it sounds good!

The UI is a little overwhelming, though. But of course it's a difficult task to let you manipulate many parameters in a simple way. (Reason's modelling synth Objekt does a reasonably good job at that, I think.)

Anyway, congrats! HN loves music, please post more! (A month ago I did a Show HN for a "random" sequencer: https://billard.medusis.com [0]; it works well when connected to unusual sound generators such as this.)

[0] https://news.ycombinator.com/item?id=40719782


I don't know crap about music, but can this simulate a vacuum tube amplifier and demolish the market? One internet guy explains that the frequency response of a solid-state amp fed a 100 Hz sine wave shows a single peak at 100 Hz, while a tube amp shows multiple peaks. People pay a pretty penny for glowing tubes on their desks, claiming they have a warmer sound that a solid-state device cannot replicate.


The tube amp simulation market is already pretty…saturated. :)
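
More seriously: those extra peaks are just harmonic distortion, and any nonlinear waveshaper generates them in a few lines of code. A toy sketch (not a real tube model) that measures the harmonics created from a 100 Hz sine:

    // Pass a 100 Hz sine through an asymmetric soft clipper and measure the
    // 2nd and 3rd harmonics with a plain DFT. The asymmetry is what creates
    // even harmonics (the "tube" flavour); a symmetric curve gives only odd ones.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double pi = std::acos(-1.0);
        const double sampleRate = 48000.0, freq = 100.0, drive = 4.0, bias = 0.3;
        const int N = 48000;                       // exactly 100 periods of 100 Hz

        double re2 = 0, im2 = 0, re3 = 0, im3 = 0;
        for (int n = 0; n < N; ++n) {
            const double x = std::sin(2.0 * pi * freq * n / sampleRate);
            const double y = std::tanh(drive * x + bias);   // the nonlinearity
            const double w = 2.0 * pi * n / sampleRate;
            re2 += y * std::cos(2 * freq * w);  im2 += y * std::sin(2 * freq * w);
            re3 += y * std::cos(3 * freq * w);  im3 += y * std::sin(3 * freq * w);
        }
        printf("2nd harmonic (200 Hz) level: %f\n", 2.0 * std::hypot(re2, im2) / N);
        printf("3rd harmonic (300 Hz) level: %f\n", 2.0 * std::hypot(re3, im3) / N);
    }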


Awesome



