Factorsynth – New Max for Live Device To Decompose Sounds With Machine Learning
Juan Jose Burred, an ANR reader and independent researcher and developer based in Paris, got in touch with us to announce his new Max for Live device, Factorsynth.
The device uses a machine learning technique (matrix factorization) to decompose sounds into temporal and spectral elements (notes, impulses, noises…).
According to the developer, users can then manipulate these elements in real time to remix clips, remove parts, create complex textures, perform cross-synthesis, and more.
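The post doesn't specify which factorization Factorsynth uses, but the standard approach for this kind of audio decomposition is non-negative matrix factorization (NMF) of a spectrogram: the magnitude spectrogram V is approximated as W @ H, where the columns of W are spectral templates and the rows of H are their temporal activations. As a rough illustration only (not the device's actual implementation), here is a minimal numpy sketch of NMF with multiplicative updates on a synthetic rank-2 "spectrogram":

```python
import numpy as np

# Toy nonnegative "spectrogram" V (freq_bins x time_frames), built from
# two known spectral/temporal components so a rank-2 NMF can recover them.
rng = np.random.default_rng(0)
freq_bins, frames, rank = 64, 100, 2
true_W = np.abs(rng.normal(size=(freq_bins, rank)))  # spectral templates
true_H = np.abs(rng.normal(size=(rank, frames)))     # temporal activations
V = true_W @ true_H

# NMF via multiplicative updates, minimizing the Euclidean distance
# between V and W @ H. Both factors stay nonnegative by construction.
W = np.abs(rng.normal(size=(freq_bins, rank))) + 1e-3
H = np.abs(rng.normal(size=(rank, frames))) + 1e-3
eps = 1e-10  # guards against division by zero
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# Each component k corresponds to the rank-1 matrix outer(W[:, k], H[k, :]):
# one spectral shape with its own envelope in time. Muting, swapping, or
# rescaling these components is the kind of manipulation described above.
error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {error:.4f}")
```

In a real audio workflow the factorization would run on an STFT magnitude spectrogram, and each component would be resynthesized back to audio; the sketch above only shows the decomposition step itself.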
Here are some demo videos for Factorsynth:
To learn more about Factorsynth and buy it:
DISCLOSURE: Our posts may contain affiliate links, meaning when you click the links and make a purchase, we receive a commission.