Random Off-topic sound-editing question.
category: general [glöplog]
Halp!
I'm transcribing a solo which is played in harmony, i.e. with a first and a second voice. My ears suck, and I would like to listen to the underlying chords a little more closely, but they are in the same frequency spectrum as the solo voices.
However, the voices are split into the left and right channels. So, I was wondering if anyone knows a program that sort of does a boolean AND on the two channels, so that the lead voices would be cancelled out? Then it could also be super awesome to subtract the result from the two separate channels, because then I could listen to the two voices isolated.
you cant possibly be asking for a mute or a mix.. i wonder what the hell you mean.
Just reverse the phase on one of the channels and mix them.
set your phasers for stun!
graga, what gloom means, i think, is: use any sound editor you like, split up the two channels into two separate mono wave files, find the "invert" button and apply it on one of the two wave files (effectively reversing the phase for all frequencies), and mix (=(a+b)/2) them. all frequencies which are the same in both channels will cancel each other out, leaving only those which are different. results may vary though, probably there's been some mastering and crap applied that screws it up.
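for the curious, skrebbel's recipe is easy to try outside a sound editor too. a minimal sketch in plain python — the signals here are made-up sine tones standing in for the backing and two hard-panned leads, not anything from the actual song:

```python
import math

# toy stereo signal: one tone shared by both channels (the "backing"),
# plus a different tone hard-panned to each side (the two lead voices)
N = 8000
t = [i / N for i in range(N)]
backing = [0.5 * math.sin(2 * math.pi * 220 * x) for x in t]  # same in L and R
lead_l  = [0.5 * math.sin(2 * math.pi * 440 * x) for x in t]  # left only
lead_r  = [0.5 * math.sin(2 * math.pi * 550 * x) for x in t]  # right only
left  = [b + l for b, l in zip(backing, lead_l)]
right = [b + r for b, r in zip(backing, lead_r)]

# invert one channel (* -1) and mix (=(a+b)/2): identical content cancels
diff = [(l + (-1 * r)) / 2 for l, r in zip(left, right)]

# what's left is only the difference of the two leads -- the backing is gone
residual = max(abs(d - (l - r) / 2) for d, l, r in zip(diff, lead_l, lead_r))
```

note this leaves a mono mix of whatever *differed* between the channels, which matches what graga reports below.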
i thought sounds mixed in exactly same phase would cancel each other out, not the inverse phase of the same sound. omg this totally explains why my stuff sounds so noisy all the time!!
skrebbel, gloom: Hmm, that's not really what I wanted. Now I have a mono mix of the two lead channels! It's pretty cool, though; it helped me hear some pauses that had gone unnoticed because of the backing track and orchestration.
thats prolly couz they werent exactly the same on both channels..
hum, you mean that you're left with *only* (or mostly) the singing? please use precise terminology, i'm not sure what "lead channel" means in your world.
but i believe i understand the problem better now:
- instruments are equally loud on both channels
- singing is either completely on the right or completely on the left
- you want just the instruments
right? (weird song btw)
in that case, take the new "singing only" mono mix that you just told us about (i guess) and also make a mono mix from the original track. mix them together. if that sucks, invert one of the two, and retry.
if all that sucks, then indeed, the (instrument part on the) two channels aren't equal enough for the trick to work, so you're out of luck.
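assuming hard-panned leads and an identical backing in both channels, this recombination can be sketched with toy sine tones again — and the sketch also shows why it may well "suck": mono ± diff just reconstructs one of the original channels (backing plus one lead), never the backing alone:

```python
import math

N = 8000
t = [i / N for i in range(N)]
backing = [0.5 * math.sin(2 * math.pi * 220 * x) for x in t]
lead_l  = [0.5 * math.sin(2 * math.pi * 440 * x) for x in t]   # hard left
lead_r  = [0.5 * math.sin(2 * math.pi * 550 * x) for x in t]   # hard right
left  = [b + l for b, l in zip(backing, lead_l)]
right = [b + r for b, r in zip(backing, lead_r)]

mono = [(l + r) / 2 for l, r in zip(left, right)]   # mono mix of the original
diff = [(l - r) / 2 for l, r in zip(left, right)]   # the "cancelled" mix

# mono + diff = backing + left lead; mono - diff = backing + right lead,
# i.e. you only ever get one original channel back, not the backing alone
plus  = [m + d for m, d in zip(mono, diff)]
minus = [m - d for m, d in zip(mono, diff)]
err_plus  = max(abs(p - (b + l)) for p, b, l in zip(plus, backing, lead_l))
err_minus = max(abs(q - (b + r)) for q, b, r in zip(minus, backing, lead_r))
```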
it's real simple math, btw. the music is a large series of signed integers (of some size) describing the waveform. mixing means add (or add, div 2, but who cares about that). inverting means * -1. so mix(inv(a), b) means b - a. obviously, -a and a sound exactly the same. on this hemisphere.
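with tiny made-up sample values, the whole thing fits in a few lines:

```python
a = [100, -50, 30]   # three samples of channel a
b = [100,  20, 30]   # three samples of channel b

inv_a = [-x for x in a]                      # inverting means * -1
mixed = [x + y for x, y in zip(inv_a, b)]    # mixing means add

# samples identical in both channels cancel to zero; only the middle differs
print(mixed)  # [0, 70, 0]
```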
Wait.
If I read Graga properly he has:
- 2 lead instruments playing a solo, harmonized (e.g. tonic and 3rd), split full left and full right;
- some other instruments being the base which he wants to listen to better;
So prof. Graga wants to perform some boolean AND of the channels (equal to mixing them to mono) hoping that for some odd reason the two lead instruments would be cancelled out. Then he wants to subtract the result from the original in order to have the two lead instruments alone, without the base.
Sir, you are very very wrong about the first step. Mixing the two stereo channels together, even inverting the phase of one of them, won't erase the lead instruments, simply because 1) they are not the same take and 2) they are playing different notes anyway.
Hence, the second step won't work either :)
weren't there some new and flashy uber-cpu-intensive tools that could figure out which chords were played in sampled music? (that probably only works for a single instrument, i reckon..).. i think what graga asks is impossible with the current state of the art..
Wrt what earx said, i've got good news and bad news. There are tools that can analyze waveform data, allowing you to manipulate it in a way similar to midi data. The most prominent name as of now is "Melodyne Direct Note Access" (or Melodyne DNA for short). With that being said, here's the bad news: Melodyne DNA is (officially) not yet available. You might want to check this video out:
http://www.sonicstate.com/news/shownews.cfm?newsid=6281 (the main vid starts after a short ad).
Oh, and btw, in case you're tempted to check the existing version of Melodyne (not DNA) out after watching the clip, don't waste your time. The current version has an ugly interface, is pretty much limited to what was shown in the video, and it has several heavy restrictions and requirements that make processing of monophonic waves a real pain and that are nowhere even mentioned (for example, a voice needs to be recorded with professional high-end equipment to be detected correctly).
so, vaporware.
skrebbel: Unlikely. Melodyne exists today, and rules. What is missing is this latest update which will come, I am sure.
oh, ok.
hmm, I wouldn't be too optimistic about Melodyne DNA. From what I see from the video, and by what he says, it sounds like it doesn't work for already mixed songs. It would only work for the stuff it shows it working for: piano or guitar chords alone and with no other sounds. And the reason it's not released yet is probably because it only works as nicely as it does in a few carefully selected cases that are shown in the video.
Graga: maybe try playing with Winamp's custom DSP plugin (if you're under Windows; if not, maybe Nyquist plugins in Audacity)
Options -> Preferences -> DSP/Effect -> Nullsoft Signal Processing Studio DSP
then
Load -> "justin - stupid stereo voice removal"
_could_ be a starting point.
But why did they use instruments in the first place if they had a bass?!?!
don't ask. weird graga-music!
bdk:
the voice removal or "karaoke" effect is based on removing common features between left and right channels. this may work for vocals since they are present on both channels and often centered too. for instruments this may be different.
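one way to see why, assuming a dead-centred vocal and an instrument panned 70/30 (made-up gains and toy tones, just to show the principle):

```python
import math

N = 1000
t = [i / N for i in range(N)]
vocal = [math.sin(2 * math.pi * 330 * x) for x in t]
gtr   = [math.sin(2 * math.pi * 196 * x) for x in t]

# vocal dead centre (equal in both channels), guitar panned 70/30
left  = [v + 0.7 * g for v, g in zip(vocal, gtr)]
right = [v + 0.3 * g for v, g in zip(vocal, gtr)]

# the "karaoke" trick: subtract the channels to remove what's common
karaoke = [l - r for l, r in zip(left, right)]

# the centred vocal cancels completely; the guitar survives at 0.4 gain
err = max(abs(k - 0.4 * g) for k, g in zip(karaoke, gtr))
```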
It's a jazzy guitar solo from a death metal album :)
But actually I'm a retard, because I just realized that I'm doing this on an MP3... off to buy the album :P
Thanks guys, this perfect example of un-knowledge of music theory and DSP basics that most of you show in here shows me the reason why I still have to maintain that almost 8 year old synth code and still nobody has kicked my arse publicly...
But I digress. To the problem at hand: Forget trying to extract or separate the lead instruments from the mix. It's impossible in theory and unlike many other impossible things that have been faked really well so far, nobody has come up with a solution yet.
Just remember that most music follows some precise rules, even Death Metal (tho they probably don't know _any_ of the rules they're following subconsciously *g*). E.g. one hint to make your life easier is finding out the base chord and function (major vs minor) and writing down the exact scale that's been used. That'll cut the number of notes per octave from 12 to 7 in 90% of cases (and also define the relationship between the two lead voices), and most chords suddenly jump into place automagically. Then don't listen to the bass player too much; you can do some pretty fucked up things in the lower range that most untrained listeners wouldn't be able to figure out.
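kb's scale trick is mechanical enough to write down. a sketch with note names and step patterns only — the root and the major/minor call you still have to make by ear:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of a major scale
MINOR = [0, 2, 3, 5, 7, 8, 10]   # natural minor

def scale(root, steps):
    """the 7 notes you'd expect to hear, out of the 12 possible"""
    start = NOTES.index(root)
    return [NOTES[(start + s) % 12] for s in steps]

# e.g. a death-metal staple: E minor narrows the 12 candidates down to 7
print(scale("E", MINOR))  # ['E', 'F#', 'G', 'A', 'B', 'C', 'D']
```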
Quote:
Thanks guys, this perfect example of un-knowledge of music theory and DSP basics that most of you show in here shows me the reason why I still have to maintain that almost 8 year old synth code and still nobody has kicked my arse publicly...
the organisation responsible for the fairlight 64k synth wishes to lodge a complaint about this statement, having clearly kicked v2's arse back in 2006. ;)
kb: add granular synthesis to v2 already you fucker