7th Nov 2017, 2:56 pm   #14
Argus25
Re: Question about scanning and "sync pulses"

In the early American and British TV systems the video signal was modulated onto the transmitted carrier in opposite senses. In your (negative-modulation) system the sync pulses drive the carrier to its highest level and the picture information reduces it.
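If it helps to see the two senses side by side, here's a rough Python sketch of how the composite level maps to carrier amplitude. The percentage points (sync = 100%, white around 12.5% for negative modulation; the reverse sense for British System A) are nominal figures I'm assuming for illustration, not taken from any particular set:

[CODE]
# Rough sketch of the two modulation senses (illustrative figures only).
# Composite scale here: 0.0 = sync tip, ~0.3 = blanking/black, 1.0 = peak white.

def carrier_negative_mod(v):
    """American-style negative modulation: sync tips push the carrier to
    maximum, picture information pulls it down toward ~12.5%."""
    return 1.0 - 0.875 * v

def carrier_positive_mod(v):
    """British System A positive modulation: sync tips drop the carrier
    toward zero, peak white takes it to maximum."""
    return v

for name, v in [("sync tip", 0.0), ("blanking", 0.3), ("peak white", 1.0)]:
    print(f"{name:10s}  neg-mod carrier {carrier_negative_mod(v):4.0%}   "
          f"pos-mod carrier {carrier_positive_mod(v):4.0%}")
[/CODE]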

In any case, once the composite video comes out of the video IF stage's detector, at perhaps 3 to 5 volts in amplitude, you have a video signal that needs to be amplified and fed to the CRT to modulate the CRT's beam current.

Depending on the polarity of the detector diode, you can get either polarity of video signal out of the video detector. Also, each time it is amplified by a tube stage the polarity flips.

Typically in an American TV set the detector diode is placed with the anode as the output, so the negative half of the video carrier is rectified (detected); that way you get about a 4 V amplitude signal with negative-going sync pulses.
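Just to illustrate how the diode orientation chooses the output polarity, here's a toy numpy sketch of an idealized diode-plus-RC envelope detector; the 4 V level, carrier frequency and RC values are all numbers I've assumed for the sketch, not from a real chassis:

[CODE]
import numpy as np

# Toy envelope-detector sketch (assumed values, just for illustration).
# A 1 MHz "carrier" is amplitude-modulated (negative modulation) by a crude
# composite waveform, then detected by an ideal diode + RC, with the diode
# in either direction.

fs = 20e6                       # sample rate, Hz
t = np.arange(0, 200e-6, 1/fs)  # 200 us of signal

# Crude composite: mostly "grey" picture with a sync pulse in the middle.
video = np.full_like(t, 0.6)
video[(t > 90e-6) & (t < 95e-6)] = 0.0   # sync tip (composite minimum)

# Negative modulation: sync (video = 0) gives maximum carrier amplitude.
envelope = 4.0 * (1.0 - 0.875 * video)   # ~4 V at the detector
carrier = envelope * np.sin(2 * np.pi * 1e6 * t)

def detect(signal, diode_sign, rc=5e-6):
    """Ideal diode + RC envelope detector. diode_sign=+1 rectifies the
    positive half of the carrier, -1 the negative half."""
    out = np.zeros_like(signal)
    v = 0.0
    alpha = 1 / (rc * fs)                # discharge per sample
    for i, s in enumerate(signal):
        s = diode_sign * s
        v = s if s > v else v * (1 - alpha)
        out[i] = diode_sign * v
    return out

pos = detect(carrier, +1)   # positive-going sync pulses out
neg = detect(carrier, -1)   # negative-going sync, as with anode-as-output
print(f"pos detector sync: {pos.max():.1f} V / neg detector sync: {neg.min():.1f} V")
[/CODE]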

That then passes to a single video amplifier valve, which amplifies it up to about 40 or 50 V with the syncs now going positive, and that is fed to the CRT cathode. In other sets, I think like yours (last time I looked), there is a video driver stage that flips the signal polarity again. That doesn't matter, because the designer can choose either polarity of video detector, and can drive either the CRT grid or cathode, to make sure it works out the right way around.
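The level and polarity bookkeeping through the chain can be written out in a couple of lines (the gain figure here is an assumption of mine, just to make the arithmetic visible):

[CODE]
# Polarity/level bookkeeping through an assumed detector -> amplifier chain.
# Each common-cathode valve stage inverts the signal and multiplies the swing.

def stage(video_pp, sync_polarity, gain):
    """One inverting valve stage: scales the swing, flips sync polarity."""
    return video_pp * gain, -sync_polarity

v_pp, sync = 4.0, -1                      # detector: ~4 V p-p, negative syncs
v_pp, sync = stage(v_pp, sync, gain=11)   # single video amplifier valve
print(f"at CRT cathode: {v_pp:.0f} V p-p, "
      f"sync {'positive' if sync > 0 else 'negative'}-going")
# -> ~44 V p-p with positive-going sync; driving the CATHODE positive during
#    sync darkens the beam, so sync/blanking ends up blacker than black.
[/CODE]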

With a CRT, taking the grid negative with respect to the cathode DARKENS the beam; this is the same as taking the cathode positive with respect to the grid.

It takes roughly 30 to 40 volts peak-to-peak to modulate the CRT for full contrast, depending on the CRT.
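To put a number on that, here's a toy power-law (gamma) model of beam current versus grid-to-cathode voltage; the cutoff voltage and gamma exponent are assumed illustrative values, not from a datasheet:

[CODE]
# Toy CRT transfer characteristic: beam current vs grid-cathode voltage.
# Cutoff voltage and gamma are assumed illustrative values only.

CUTOFF_V = -40.0   # grid this far below cathode cuts the beam off (black)
GAMMA = 2.5        # typical-ish CRT power-law exponent

def beam_current(v_gk):
    """Relative beam current for a given grid-to-cathode voltage."""
    drive = max(0.0, v_gk - CUTOFF_V)    # volts above cutoff
    return (drive / -CUTOFF_V) ** GAMMA  # 0.0 (black) .. 1.0 (full beam)

# A ~35 V p-p video swing from just above cutoff to near zero bias runs
# the beam from black to close to full brightness: full contrast. It makes
# no difference whether the video moves the grid or the cathode, only the
# grid-to-cathode voltage matters.
for v in (-40, -30, -20, -10, -5):
    print(f"Vgk {v:4d} V -> relative brightness {beam_current(v):.2f}")
[/CODE]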