Old 3rd May 2018, 2:54 am   #4
Synchrodyne
Nonode
 
Join Date: Jan 2009
Location: Papamoa Beach, Bay of Plenty, New Zealand
Posts: 2,944
Default Re: Tuners in Philips G6-G11 CTV chassis

Here are some questions and comments on just a few of the many topics covered in the epic “Philips TV Tuners Part 3”.

Tuned RF Input for UHF Tuners:

The case of the AT6380 where a tuned input was added ahead of an early Ge transistor RF amplifier in order to reduce noise at the higher frequency end of the UHF band does show that the “conventional wisdom” on this issue was a broad-brush approach that did not address all situations.

That conventional wisdom suggested that the best noise performance was achieved with an aperiodic input, with the required RF selectivity provided by a bandpass tuned interstage. This in turn indicated 3-gang UHF tuners.

The UK case was an exception. The image rejection requirement in the UK was much higher than in the rest of Europe. This was due to the decision to move the vision IF (VIF) from 38.9 to 39.5 MHz after the European UHF channel plan was determined at ITU Stockholm 1961, including the co-siting of n and (n+10) channel transmitters. This required extra RF selectivity, typically provided by adding a 4th gang for a tuned RF input. But there were other ways in which it was done. In the late 1960s, Thorn used the 4th gang to tune an input image rejector in one case, and to facilitate a triple-bandpass interstage in another, in both cases retaining an aperiodic input. That suggests that, with devices that did not need any additional help at the top end of the UHF band, Thorn saw a benefit, presumably a noise benefit, in retaining the aperiodic input.
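
To put rough numbers on the image problem, here is a small Python sketch. This is my own arithmetic, not from the original article; the 8 MHz UHF channel width and oscillator-high conversion are the assumptions:

```python
# Rough arithmetic only: with oscillator-high conversion the image
# response sits 2 x VIF above the wanted vision carrier.

def image_offset(vif_mhz):
    """Offset of the image frequency above the wanted vision carrier, MHz."""
    return 2 * vif_mhz

# UHF channels are 8 MHz wide, so the co-sited (n+10) transmitter
# sits 80 MHz above channel n.
CO_SITED_OFFSET_MHZ = 10 * 8.0

for vif in (38.9, 39.5):
    gap = round(CO_SITED_OFFSET_MHZ - image_offset(vif), 1)
    print(f"VIF {vif}: image {round(image_offset(vif), 1)} MHz up, "
          f"{gap} MHz short of the (n+10) vision carrier")
```

On this arithmetic, with the 39.5 MHz VIF the image response sits only about 1 MHz below the strong co-sited (n+10) vision carrier, against about 2.2 MHz for the 38.9 MHz VIF, which is consistent with the tougher UK image rejection requirement.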

In the USA, the traditional UHF tuners usually had a bandpass tuned input ahead of their diode mixers. That was needed for additional RF selectivity. RF amplifiers in US UHF tuners appeared to have arrived at about the same time that varactor tuning was introduced. The lower Q of early varactor circuits indicated that some additional help with RF selectivity was required, and this was provided by a tuned input. Also, as the VHF tuners moved over to varactors, it was more difficult to provide for feeding the UHF tuner output into the VHF tuner RF amplifier without additional switching, so the UHF tuner output went to the VHF mixer instead, requiring more overall gain in the UHF tuners.

Combined VHF-UHF Tuners with Separate UHF Oscillators:

Whilst it was logical that the three transistors used in such tuners as the Pi1 be used in essentially the same way on both VHF and UHF, looking at the UHF case alone, it was unusual in European practice to use separate transistors for the oscillator and mixer functions. The self-oscillating mixer seems to have been preferred. I should imagine that the use of separate transistors was the better approach, but that the self-oscillating mixer was considered to be adequate, and saved a device. Perhaps that goes back to the valve era. A separate oscillator would have required an additional triode valve, making for a three-valve tuner, probably ruled out on cost grounds. Maybe that thinking was carried over to the transistor era.

VHF Tuners with Self-Oscillating Mixers:

This one is really the opposite of the previous case, and I must admit I was very surprised to find that it was actually done. The customary reason given for its not being widely used is that once TV receiver IFs moved up to just under the lower edge of Band I, it would be difficult to avoid regeneration at the lower Band I channels with a self-oscillating mixer. The regeneration issue is why, for example, the pentode mixer (in the form of triode-pentode valves such as the 6X8, 6U8 and PCF80) was adopted in the early 1950s, even though the pentode was contra-indicated at Band III. In the transistor era, grounded-base mixers were used, and in the USA also cascode bipolar mixers, to avoid the regeneration problem. When dual-gate mosfets arrived and were used in VHF TV tuners, the mixers had both signal and local oscillator on gate 1, so that gate 2 screened the drain from both inputs. In contrast, radio applications usually had the signal on gate 1 and the local oscillator on gate 2.

So the question arises - how did Philips avoid the regeneration problem with self-oscillating VHF mixers?

Intermediate Frequencies:

Intriguing was the Philips VF5 case where the 32.7 MHz VIF was used not only for System L (where it was standard), but also for System E, which put the System E SIF at 43.85 MHz, actually overlapping the lower end of Band I and channel F2.

I wondered why not use the established 28.05 MHz VIF for System E, with SIF at 39.2 MHz, the same as for System L. This arrangement ensured that the System E LO frequencies fell on an opposite channel carrier frequency, and that there was no IF channel overlap with Band I.

VIF-low was necessary to facilitate reception of channels F2 and F4, for which only oscillator-high conversion was feasible. And also for the L’ Band I channels, which were inverted (vision carrier high).
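
The carrier arithmetic behind those two arrangements can be sketched as follows. This is my own check, assuming the standard vision-sound spacings of 11.15 MHz for System E and 6.5 MHz for System L:

```python
# Checking the VIF/SIF arithmetic for the two arrangements discussed.
# Assumed vision-sound carrier spacings: System E 11.15 MHz, System L 6.5 MHz.

SPACING_MHZ = {"E": 11.15, "L": 6.5}

def sif(vif_mhz, system):
    """Sound IF when the sound carrier sits above a VIF-low vision IF."""
    return round(vif_mhz + SPACING_MHZ[system], 2)

# Shared-SIF arrangement: E at 28.05 MHz and L at 32.7 MHz both give 39.2 MHz.
print(sif(28.05, "E"))  # 39.2
print(sif(32.7, "L"))   # 39.2

# VF5 shared-VIF arrangement: moving System E up to 32.7 MHz pushes its
# SIF to 43.85 MHz, overlapping the lower end of Band I.
print(sif(32.7, "E"))   # 43.85
```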

In the solid-state era, one may imagine that the better screening that was possible – for example both tuners and IF strips in those small rectilinear metal boxes – obviated the need for worrying too much about where the in-band LOs fell, or that the SIF was within Band I. That made possible the upward movement of the System E IF channel so that the E and L VIFs were the same, rather than the SIFs.

But I wonder if the primary driver may have been to facilitate the use of synchronous vision demodulation. The quasi-synchronous IC types of the late 1960s and 1970s all used tank circuits to offset some of the quadrature errors produced by limiting an asymmetric sideband signal. So having the same VIFs for Systems L and E avoided the need to adjust synchronous demodulator tank circuit tuning according to the system being received at any given time. That problem did not arise with diode demodulators, which rectified whatever they received (and also added quite a bit of quadrature distortion in the process). Two SIFs may have been less of a problem in the solid-state era, and some (though not all) synchronous demodulation AM SIF ICs did not require tank circuits, and so were effectively wideband.

That said, I am not sure about the chronology of the move to synchronous demodulation for the positive modulation case. The early ICs, such as the Motorola MC1330 (1969) and the Philips TCA270 (1972), appear to have been oriented to negative modulation systems, although I think that the Siemens TBA440 (also 1972) could handle both negative and positive. The TDA2542 was the positive modulation counterpart to the TDA2540/1, from about the mid-1970s, and the TDA2543 was its counterpart for the AM sound channel.

Of course, when it comes to system IF choices, one might say that from the early days the Belgian and border-area multi-norm receivers did not respect the notion that System E LOs should fall upon an opposite channel carrier frequency. But then they were intended to receive just a small number of Band III System E channels, so probably did not run into the LO interference issue.

Also interesting was the AT7650 VHF tuner, intended to receive all of the VHF System E channels and the channel E7 Luxembourg System F transmitter. It produced the standard IFs for System E, 28.05 MHz VIF and 39.2 MHz SIF, but it must have been a very rare case where a European 7 MHz channel transmission was transposed to a VIF-low IF channel, 33.7 MHz VIF and 39.2 MHz SIF, which would have required LO-low.
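
A quick check of that LO-low case, again my own arithmetic: the 5.5 MHz System F vision-sound spacing and the 189.25 MHz channel E7 vision carrier are my assumptions here.

```python
# IF pair produced by a given local oscillator frequency.

def ifs(f_vision, f_sound, f_lo):
    """Return (VIF, SIF) in MHz for the given carrier and LO frequencies."""
    return round(abs(f_lo - f_vision), 2), round(abs(f_lo - f_sound), 2)

f_vision = 189.25            # assumed channel E7 vision carrier, MHz
f_sound = f_vision + 5.5     # System F sound carrier, 5.5 MHz above vision

# LO-low leaves the spectrum uninverted, giving the quoted VIF-low pair.
print(ifs(f_vision, f_sound, f_vision - 33.7))   # (33.7, 39.2)

# LO-high inverts the spectrum, so the VIF would sit above the SIF instead.
print(ifs(f_vision, f_sound, f_vision + 39.2))   # (39.2, 33.7)
```

As the second line shows, only LO-low puts the vision carrier below the sound carrier in the IF channel, which is why the quoted 33.7/39.2 MHz pair implies LO-low.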

And somewhat surprising was that the use of the Italian IF evidently persisted into the 1970s. The V302 seems to have been the last Philips VHF TV tuner to have catered for it. The Italian IF was probably a case of standardizing too early, as the Italian IF channel of 40 to 47 MHz was decreed early in 1952. It was apparently derived from American practice, where the IF channel was 41 to 47 MHz. Italy’s first experimental 625-line transmitter at Torino, supplied from the US, initially worked in the 6 MHz wide channel A6, which was retained and expanded to 7 MHz as Italian channel C. This IF choice probably explains the odd spacings of the Italian Band III channels, positioned to avoid possible interference. Within that IF channel, as far as I know the SIF and VIF were normally 40.25 and 45.75 MHz respectively, but Philips moved them slightly, to 40.4 and 45.9 MHz respectively. The ITU Stockholm 1961 European UHF planning meeting appears to have taken account of both the Italian and the standard European IF channels when looking at interference patterns that affected channel allocations, so at that stage a change to the standard IF was evidently anticipated.

Use of Mosfets:

Something apparent from the chronology is that Philips waited quite a long time, until 1980, before introducing the dual gate mosfet RF amplifier into its TV tuners. Although to be fair, I think that that was generally consistent with what the other European setmakers did. I have a vague notion that Tandberg had been an early mover in using mosfets in VHF TV tuners, but I no longer have the reference for that.

As far as I know, RCA was the first anywhere to introduce a VHF TV tuner with a dual-gate mosfet RF amplifier, namely its KRK-142 of late 1968. This also had a cascode bipolar mixer for better performance.

Other US makers generally waited the year or two until protected-gate devices became available before adopting mosfet RF amplifiers for their VHF TV tuners. But Zenith had an early model that combined a bipolar grounded-base RF amplifier with a dual-gate mosfet mixer. The dual-gate mosfet mixer was also more generally adopted through the 1970s.

RCA was also probably the first to use a dual-gate mosfet RF amplifier in a UHF tuner, in its KRK-226 of 1975.

Evidently the American TV setmakers were concerned about the relatively poor performance of bipolar VHF tuners, and so often held on to valve types, at least for their more expensive models, until the mosfet type became available. Certainly with the early mosfet models, some work was done to demonstrate their equivalence or superiority to their valved predecessors.

One could argue that the USA was a more difficult case for VHF TV reception, with multiple transmitters in the larger cities. (When I lived in the Dallas-Fort Worth area, we had 5, later 6 VHF TV stations, in the latter case occupying channels A2, A4, A5, A8, A11 and A13.) A specific problem was the channel A6 colour beat. So in turn one might deduce that US receivers needed better VHF tuners than were required elsewhere.

But there is contrary evidence. Here in New Zealand the Philips K9 was a very popular receiver in the early days of colour, say mid- through later 1970s. At that time the major cities usually had just one Band I and one Band III transmitter, so reception conditions were apparently quite mild. One K9 that I was familiar with in Auckland suffered severe cross-modulation when fed with the signal from an outdoor aerial array which an earlier valved monochrome receiver had handled with ease. As I recall, around 20 dB of signal attenuation at the receiver input was needed to clear the cross-modulation. I had a similar experience in Wellington in the early 1980s with a later Philips model whose K-number I do not recall; there something like 12 or 18 dB of attenuation was needed, although I do not remember the exact figure. An oddity was that Pye NZ had introduced a colour TV chassis with a mosfet-based tuner around 1974-75. I think that the chassis may have been non-Philips, and the tuner may well have come from the USA or perhaps Japan.


Cheers,
Synchrodyne