I don't know what you mean by "3 signals from 2 waveforms", but RGB
encoding in NTSC TV works like this:
The RGB signal is transformed into YCbCr, where Y is the luminance, Cb is
approximately B-Y, and Cr is approximately R-Y. For the exact definitions
of Y, Cb, and Cr, check out the publicly available JPEG source code.
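As a rough sketch (the luma weights below are the common
0.299/0.587/0.114 set; I've left out the Cb/Cr scaling, so they come
out as plain B-Y and R-Y):

    # Approximate RGB -> Y/Cb/Cr, with r, g, b in [0, 1].
    # The weights are the usual luma coefficients; scaling of the
    # color-difference terms is omitted for clarity.
    def rgb_to_ycbcr(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
        cb = b - y                               # blue color difference
        cr = r - y                               # red color difference
        return y, cb, cr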
This transformation is done for two reasons:
1) Y is much more important than Cb or Cr, so it makes transmission more
efficient to divide the signal this way and give Y more bandwidth.
2) The signal was designed so that old B&W televisions see just the Y
signal plus a little high-frequency "distortion" from the way Cb and Cr
are encoded.
To encode the three components of the TV signal:
First, Y is bandwidth-limited to about 4.2 MHz.
Then, Cb and Cr are used to modulate two 3.58 MHz carriers that are 90
degrees out of phase with each other.
Then the modulated carriers are added to Y to make the composite TV signal.
This discussion ignores sync pulses, blanking intervals, porches, color
bursts, and interlacing.
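To put numbers on the encoding steps, here is a sketch in Python (the
sample rate, names, and scaling are my own assumptions, not
broadcast-exact):

    import numpy as np

    FSC = 3.579545e6      # NTSC color subcarrier frequency, Hz
    FS = 4 * FSC          # sample rate; 4x the subcarrier is a common choice

    def encode_composite(y, cb, cr):
        # y, cb, cr: equal-length arrays of baseband component samples
        t = np.arange(len(y)) / FS
        # Quadrature modulation: Cb and Cr ride on carriers 90 degrees apart.
        chroma = (cb * np.sin(2 * np.pi * FSC * t)
                  + cr * np.cos(2 * np.pi * FSC * t))
        return y + chroma  # composite = luma + modulated chroma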
The usual explanation of the Cb/Cr encoding is that because the Cb and Cr
carriers are 90 degrees out of phase, they are mathematically orthogonal,
and each can be separately detected from the 3.58 MHz carrier. This is not
quite true, however, because the carriers are modulated, and the modulation
sidebands of one will interfere with detection of the other to a
significant extent.
The more satisfying explanation is that the quadrature (90 degrees out of
phase) encoding of Cb/Cr makes the magnitude of the 3.58 MHz carrier
proportional to the saturation and its phase representative of the hue.
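To see why, write the modulated subcarrier as cb*sin(wt) + cr*cos(wt); a
little trigonometry turns that into A*sin(wt + phi), where A and phi
depend only on (Cb, Cr). In code (a sketch, same caveats as above):

    import numpy as np

    def chroma_polar(cb, cr):
        # cb*sin(wt) + cr*cos(wt) == A*sin(wt + phi), with:
        saturation = np.hypot(cb, cr)   # A: carrier magnitude ~ saturation
        hue = np.arctan2(cr, cb)        # phi: carrier phase ~ hue
        return saturation, hue

So a receiver that recovers the carrier's amplitude and phase has
recovered saturation and hue directly.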
This gives a clear picture of bandwidth allocation in NTSC: Y is important,
and it gets lots of bandwidth. Saturation is less important, and it gets
less bandwidth. Hue is much less important, and it gets much less
bandwidth: it can take several cycles of the subcarrier before its phase
can be accurately detected.