Mono to Stereo: How AI Is Breathing New Life into Music | by Max Hilsdorf | Dec, 2024

Now that we have discussed how relevant mono-to-stereo conversion is, you might be wondering how it works under the hood. It turns out there are different approaches to tackling this problem with AI. In the following, I want to showcase four different methods, ranging from traditional signal processing to generative AI. This is not meant as a complete list of methods, but rather as an inspiration for how this task has been solved over the last 20 years.

Traditional Signal Processing: Sound Source Formation

Before machine learning became as widespread as it is today, the field of Music Information Retrieval (MIR) was dominated by clever, hand-crafted algorithms. It is no wonder that such approaches also exist for mono-to-stereo upmixing.

The basic idea behind a paper from 2007 (Lagrange, Martins, Tzanetakis, [1]) is simple:

If we can find the different sound sources of a recording and extract them from the signal, we can mix them back together for a realistic stereo experience.

This sounds simple, but how can we tell what the sound sources in the signal are? How do we define them so clearly that an algorithm can extract them from the signal? These questions are difficult to solve, and the paper uses a variety of advanced techniques to achieve this. In essence, this is the algorithm they came up with:

  1. Break the recording into short snippets and identify the peak frequencies (dominant notes) in each snippet
  2. Determine which peaks belong together (a sound source) using a clustering algorithm
  3. Decide where each sound source should be placed in the stereo mix (manual step)
  4. For each sound source, extract its assigned frequencies from the signal
  5. Mix all extracted sources together to form the final stereo mix.
Example of the user interface built for the study. The user goes through all the extracted sources and manually places them in the stereo mix, before resynthesizing the whole signal. Image taken from [1].

Although quite complex in the details, the intuition is clear: find sources, extract them, mix them back together.
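To make the intuition concrete, here is a heavily simplified toy sketch of the five steps above in Python. It is not the paper's actual algorithm — [1] uses dedicated sinusoidal peak tracking and a much more careful clustering stage — and the peak threshold, feature choice, and automatic panning are my own illustrative assumptions:

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.cluster.vq import kmeans2

def upmix_by_source_formation(mono, sr=44100, n_sources=3, pans=None):
    """Toy version of source-formation upmixing: find spectral peaks,
    cluster them into 'sources', pan each source, resynthesize stereo."""
    f, t, Z = stft(mono, fs=sr, nperseg=2048)
    mag = np.abs(Z)

    # 1) Peak frequencies per snippet: keep only the strongest bins
    peak_mask = mag >= np.quantile(mag, 0.98)
    bins, frames = np.nonzero(peak_mask)

    # 2) Cluster peaks into sources by normalized (frequency, time) position
    feats = np.column_stack([f[bins] / f.max(), t[frames] / t.max()])
    _, labels = kmeans2(feats, n_sources, seed=0)

    # 3) Manual step in the paper; here we just spread pans evenly (-1..1)
    if pans is None:
        pans = np.linspace(-1.0, 1.0, n_sources)

    left = np.zeros_like(Z)
    right = np.zeros_like(Z)
    for k in range(n_sources):
        # 4) Extract this source's assigned time-frequency bins
        mask = np.zeros_like(mag, dtype=bool)
        mask[bins[labels == k], frames[labels == k]] = True
        src = np.where(mask, Z, 0)
        # 5) Constant-power panning, then sum into the stereo buses
        theta = (pans[k] + 1) * np.pi / 4
        left += np.cos(theta) * src
        right += np.sin(theta) * src

    _, l = istft(left, fs=sr, nperseg=2048)
    _, r = istft(right, fs=sr, nperseg=2048)
    return np.stack([l, r])
```

Note that this toy simply discards all non-peak energy; the real system has to account for the residual signal as well.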

A Quick Workaround: Source Separation / Stem Splitting

A lot has happened since Lagrange's 2007 paper. Since Deezer released their stem splitting tool Spleeter in 2019, AI-based source separation systems have become remarkably useful. Major players such as Lalal.ai or Audioshake make a quick workaround possible:

  1. Separate a mono recording into its individual instrument stems using a free or commercial stem splitter
  2. Load the stems into a Digital Audio Workstation (DAW) and mix them together to your liking
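The second step — what you would normally do by ear in a DAW — boils down to panning each stem and summing. Here is a minimal sketch in NumPy; the function name, stem names, and pan values are illustrative, and the synthetic sine "stems" stand in for the output of a splitter such as Spleeter's 4-stem model:

```python
import numpy as np

def pan_and_mix(stems, pans):
    """Place each mono stem (dict name -> 1-D array) at a pan position
    in [-1, 1] using constant-power panning, sum into a stereo mix,
    and peak-normalize to avoid clipping."""
    n = max(len(s) for s in stems.values())
    mix = np.zeros((2, n))
    for name, signal in stems.items():
        theta = (pans[name] + 1) * np.pi / 4  # -1 -> hard left, +1 -> hard right
        mix[0, :len(signal)] += np.cos(theta) * signal
        mix[1, :len(signal)] += np.sin(theta) * signal
    peak = np.abs(mix).max()
    return mix / peak if peak > 0 else mix

# Synthetic stand-ins for separated stems
t = np.linspace(0, 1, 44100, endpoint=False)
stems = {
    "vocals": np.sin(2 * np.pi * 220 * t),
    "drums": np.sign(np.sin(2 * np.pi * 2 * t)) * 0.5,
    "bass": np.sin(2 * np.pi * 55 * t),
    "other": np.sin(2 * np.pi * 880 * t) * 0.3,
}
pans = {"vocals": 0.0, "drums": 0.0, "bass": -0.4, "other": 0.6}
stereo = pan_and_mix(stems, pans)
```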

This approach was already used in a research paper in 2011 (see [2]), but it has become much more viable since, thanks to recent improvements in stem separation tools.

The downside of source separation approaches is that they produce noticeable sound artifacts, because source separation itself is still not flawless. Additionally, these approaches still require manual mixing by humans, making them only semi-automatic.

To fully automate mono-to-stereo upmixing, machine learning is required. By learning from real stereo mixes, ML systems can adopt the mixing style of real human producers.

Machine Learning with Parametric Stereo

Photo by Zarak Khan on Unsplash

One very creative and efficient way of using machine learning for mono-to-stereo upmixing was presented at ISMIR 2023 by Serrà and colleagues [3]. This work is based on a music compression technique called parametric stereo. Stereo mixes consist of two audio channels, making them hard to transmit in low-bandwidth settings such as music streaming, radio broadcasting, or telephone connections.

Parametric stereo is a technique for creating stereo sound from a single mono signal by focusing on the important spatial cues our brain uses to determine where sounds are coming from. These cues are:

  1. How loud a sound is in the left ear vs. the right ear (Interchannel Intensity Difference, IID)
  2. How in sync it is between left and right in terms of time or phase (Interchannel Time or Phase Difference)
  3. How similar or different the signals are in each ear (Interchannel Correlation, IC)

Using these parameters, a stereo-like experience can be created from nothing more than a mono signal.
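As a minimal illustration, the following sketch decodes a stereo pair from a mono signal using only the first cue, a per-frequency-band IID. This is my own simplified example, not the paper's decoder — real parametric stereo also applies the phase/time and correlation parameters, and does so per time frame:

```python
import numpy as np
from scipy.signal import stft, istft

def decode_iid(mono, iid_db, sr=44100, nperseg=1024):
    """Rebuild a stereo pair from mono + per-frequency-band IID (in dB).
    Only the intensity cue is modeled here."""
    f, t, Z = stft(mono, fs=sr, nperseg=nperseg)
    ratio = 10 ** (iid_db[:, None] / 20)     # linear left/right gain ratio per bin
    g_left = ratio / np.sqrt(1 + ratio**2)   # power-normalized channel gains
    g_right = 1 / np.sqrt(1 + ratio**2)      # (g_left^2 + g_right^2 == 1)
    _, l = istft(g_left * Z, fs=sr, nperseg=nperseg)
    _, r = istft(g_right * Z, fs=sr, nperseg=nperseg)
    return np.stack([l, r])

# Example: push high frequencies to the left, low frequencies to the right
sr = 8000
tt = np.linspace(0, 1, sr, endpoint=False)
mono = np.sin(2 * np.pi * 300 * tt) + np.sin(2 * np.pi * 3000 * tt)
n_bins = 1024 // 2 + 1
iid_db = np.linspace(-12, 12, n_bins)  # negative = right-heavy, positive = left-heavy
stereo = decode_iid(mono, iid_db, sr=sr)
```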

This is the approach the researchers took to develop their mono-to-stereo upmixing model:

  1. Collect a large dataset of stereo music tracks
  2. Convert the stereo tracks to parametric stereo (mono + spatial parameters)
  3. Train a neural network to predict the spatial parameters given a mono recording
  4. To turn a new mono signal into stereo, use the trained model to infer spatial parameters from the mono signal and combine the two into a parametric stereo experience
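Step 2 — turning a stereo track into training data — can be sketched as follows. This is my own simplified encoder, not the paper's: it extracts only IID and interchannel correlation, averaged over time, whereas a real encoder works per frame and also captures phase/time differences:

```python
import numpy as np
from scipy.signal import stft

def encode_parametric(left, right, sr=44100, nperseg=1024):
    """Reduce a stereo pair to a mono downmix plus per-band spatial
    parameters (IID in dB and interchannel correlation IC)."""
    _, _, L = stft(left, fs=sr, nperseg=nperseg)
    _, _, R = stft(right, fs=sr, nperseg=nperseg)
    eps = 1e-12
    pow_l = (np.abs(L) ** 2).mean(axis=1)           # per-bin left power
    pow_r = (np.abs(R) ** 2).mean(axis=1)           # per-bin right power
    iid_db = 10 * np.log10((pow_l + eps) / (pow_r + eps))
    # Normalized cross-spectrum magnitude: 1.0 means fully correlated channels
    ic = np.abs((L * np.conj(R)).mean(axis=1)) / np.sqrt(pow_l * pow_r + eps)
    mono = 0.5 * (left + right)  # downmix: the network's input in step 3
    return mono, iid_db, ic      # (iid_db, ic) are the training targets
```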

Currently, no code or listening demos seem to be available for this paper. The authors themselves concede that "there is still a gap between professional stereo mixes and the proposed approaches" (p. 6). Still, the paper outlines a creative and efficient way to accomplish fully automatic mono-to-stereo upmixing using machine learning.

Generative AI: Transformer-based Synthesis

Stereo generation in Meta’s text-to-music model MusicGen. Image taken from another article by the author.

Now, we get to the seemingly most straightforward way to generate stereo from mono: training a generative model to take a mono input and synthesize both stereo output channels directly. Although conceptually simple, this is by far the most challenging approach from a technical standpoint. One second of high-resolution audio has 44.1k data points. Generating a three-minute song with stereo channels therefore means generating over 15 million data points.
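The back-of-the-envelope math behind that number:

```python
sample_rate = 44_100      # samples per second (CD-quality audio)
duration = 3 * 60         # three minutes, in seconds
channels = 2              # stereo
total = sample_rate * duration * channels
print(total)  # -> 15876000
```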

With today’s technologies such as convolutional neural networks, transformers, and neural audio codecs, the complexity of the task is starting to become manageable. Some papers have chosen to generate stereo signals through direct neural synthesis (see [4], [5], [6]). However, only [5] trains a model that can solve mono-to-stereo generation out of the box. My intuition is that there is room for a paper that builds a dedicated model for the “simple” task of mono-to-stereo generation and focuses 100% on solving this objective. Anyone here looking for a PhD topic?