Re: Simulating guitar stack

From: Goran_Mekić <meka_at_tilda.center>
Date: Sun, 28 Jul 2024 19:21:00 UTC
On 4/30/24 23:16, Florian Walpen wrote:
> I did experiment some years ago. That didn't include neural amp models back
> then, but I just had a listen to some current NAM demos and my verdict is
> still the same:
>
>   - Most pedals can be easily reproduced in software.
>   - Nothing in software sounds like a real tube amp (yet).
>   - Speakers can be reproduced well by IR software.
>
> Means if you really care about tube amp tone, use a tube amp or at least a
> hardware amp modeler that utilizes tubes (I have an old one made by VOX). That
> said, many people will not hear the difference in a complete mix, and for
> heavily distorted sounds it's less relevant.
>
> I personally don't use software effect pedals. For Speaker simulation I
> recommend the lsp-plugins IR processor, and I apply some older free IRs from
> Kalthallen Cabs.

I have experimented a lot with IRs lately, as I found that my FX processor 
for live gigs allows me to upload custom ones. In the studio, lsp-plugins 
works great! It has everything I expect an IR plugin to have. Thank you for 
the tip!

As for neural amp modeling, I manually compiled the following plugins:
* https://github.com/mikeoliphant/neural-amp-modeler-lv2
* https://github.com/AidaDSP/aidadsp-lv2

They both work, but the first one ships with only one preset (hopefully 
more will follow as time goes by), and the second one is not much better: 
https://tonehunt.org/models?tags%5B0%5D=aida-x. From such limited 
experience I would say the technology is promising. There's a lot more 
to be done in this field to make it really shine, but what I have heard I 
already like more than guitarix and/or rakarrack.

As for capture, I do have problems. Luckily I found aliki at 
https://kokkinizita.linuxaudio.org/linuxaudio/ and managed to compile and 
run it, but I have not yet captured any IR (I have a clue what the 
problem is; I'll write if I get stuck). Capturing for neural amp modeling 
is trickier. No matter how I record it with Ardour, I get a different 
number of samples in the input and output files. I tried appending silence 
to both files (well, tracks really, as it's Ardour I'm using for 
capturing) and aligning them, listening to what jack_iodelay tells me, 
resetting latencies ... everything I could think of, but the in and out 
files always end up with different sample counts. My reamping works great 
and I can hear my gear just fine, but producing the proper file(s) is 
tricky. The procedures I followed:
* neural amp modeler: 
https://colab.research.google.com/github/sdatkinson/NAMTrainerColab/blob/main/notebook.ipynb
* aida: 
https://colab.research.google.com/github/AidaDSP/Automated-GuitarAmpModelling/blob/aidadsp_devel/AIDA_X_Model_Trainer.ipynb 
(it currently has some errors, but it too reports differing sample counts)
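One workaround I could imagine for the mismatched sample counts is to pad 
the shorter file with silence up to the length of the longer one before 
feeding both to the trainer. A minimal sketch using Python's standard 
library wave module (the file names are just placeholders, not from any 
of the trainers above):

```python
import wave

def frame_count(path):
    """Return the number of audio frames in a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getnframes()

def pad_to_match(src, target, out):
    """Append silence to `src` so it matches `target`'s length, write to `out`."""
    with wave.open(src, "rb") as a, wave.open(target, "rb") as b:
        params = a.getparams()
        data = a.readframes(a.getnframes())
        missing = b.getnframes() - a.getnframes()
        if missing < 0:
            raise ValueError("source is longer than target; trim it instead")
        # one frame = sampwidth bytes per channel, zeros are digital silence
        silence = b"\x00" * (missing * a.getsampwidth() * a.getnchannels())
    with wave.open(out, "wb") as o:
        o.setparams(params)
        o.writeframes(data + silence)

# e.g. pad_to_match("input.wav", "reamped.wav", "input_padded.wav")
```

Of course this only papers over the symptom; if the two files are also 
offset in time (not just different in length), the training data would 
still be misaligned, so the latency question remains.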

I stem-exported the Ardour tracks with "apply track processing" off; 
otherwise all files are saved as stereo. If anyone can recommend how to 
capture the sound correctly for training either of the two AI 
implementations, I would be really grateful. I also tried to record my 
studio FX processor, which introduces additional latency, but I set the 
latencies according to jack_iodelay. Please advise if you have any ideas!
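In case the stereo export is the blocker for anyone else, the files can 
also be collapsed to mono after the fact. A rough sketch with Python's 
stdlib wave module that keeps just the left channel of a 16-bit stereo 
WAV (file names are hypothetical):

```python
import wave

def extract_left_channel(stereo_path, mono_path):
    """Write the left channel of a 16-bit stereo WAV to a mono WAV file."""
    with wave.open(stereo_path, "rb") as s:
        assert s.getnchannels() == 2 and s.getsampwidth() == 2
        rate = s.getframerate()
        frames = s.readframes(s.getnframes())
    # each stereo frame is 4 bytes: 2 bytes left sample, 2 bytes right sample
    left = b"".join(frames[i:i + 2] for i in range(0, len(frames), 4))
    with wave.open(mono_path, "wb") as m:
        m.setnchannels(1)
        m.setsampwidth(2)
        m.setframerate(rate)
        m.writeframes(left)

# e.g. extract_left_channel("di_stereo.wav", "di_mono.wav")
```

Something like `sox di_stereo.wav di_mono.wav remix 1` does the same job 
from the command line, if sox is installed.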

Regards,
meka