20th May 2018, 6:36 pm   #6
Pieter H
Re: Tuners in Philips G6-G11 CTV chassis

Hi Synchrodyne,

Finally back from holidays, so time to respond to your many inputs.
And again, before I start, thanks a lot for your careful reading and thorough feedback! Much appreciated.
Let me try to answer each of your points in turn.

Quote:
Tuned RF Input for UHF Tuners:

The case of the AT6380, where a tuned input was added ahead of an early Ge transistor RF amplifier in order to reduce noise at the higher-frequency end of the UHF band, does show that the “conventional wisdom” on this issue was a broad-brush approach that did not address all situations.

That conventional wisdom suggests that the best noise performance was achieved with an aperiodic input, with the required RF selectivity being provided by a bandpass tuned interstage. This in turn indicated 3-gang UHF tuners.

The UK case was an exception. The image rejection requirement in the UK was much higher than in the rest of Europe. This was due to the decision to move the vision IF (VIF) from 38.9 to 39.5 MHz after the European UHF channel plan was determined at ITU Stockholm 1961, including the co-siting of n and (n+10) channel transmitters. This required extra RF selectivity, typically provided by adding a 4th gang for a tuned RF input. But there were other ways in which it was done. In the late 1960s, Thorn used the 4th gang to tune an input image rejector in one case, and to facilitate a triple-bandpass interstage in another, in both cases retaining what was an aperiodic input. That suggests that, with devices that did not need any additional help at the top end of the UHF band, Thorn saw a benefit, presumably a noise benefit, in retaining the aperiodic input.
I can't speak for the whole world, but from what I see within the Philips tuner developments it looks like the initial concept was to keep the UHF input - still a major challenge at the time for consumer equipment! - as simple as possible, meaning no tuned input match. This was used in the first valve UHF tuners as well as in the AT6370, the first transistor tuner. However, (noise) performance was almost certainly not optimal and could be improved with a tuned input match. From the AT6380 onwards this was standard in all UHF tuners, with the exception of the KD1, and even that one quickly moved back to 4-gang tuning in the KD2.
In my analysis I haven't found any UK-specific tuners in this respect. But in line with your assumption, the UK requirements for RF selectivity might have pushed the designs in that direction. However, up to this stage Philips maintained global architectures for its tuners, so no special UK versions yet (apart from the IF setting, of course). Also, I don't think that the N+10 issue led to special requirements for UK tuners only. N+10 in UHF (and N+11 in VHF) is a generic issue for all systems with IFs in the 35-45 MHz range, being the image channel of the wanted signal.
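As a quick, hedged check of that image-channel arithmetic (an illustrative Python sketch, assuming oscillator-high conversion, a nominal 38.9 MHz VIF, and the usual European channel rasters of 8 MHz at UHF and 7 MHz at VHF Band III):

# With a high-side local oscillator the image lies 2 x IF above the wanted channel.
vif_mhz = 38.9            # nominal vision IF (assumed typical value)
uhf_raster_mhz = 8.0      # European UHF channel spacing
vhf_raster_mhz = 7.0      # European VHF Band III channel spacing

image_offset_mhz = 2 * vif_mhz                    # 77.8 MHz above the wanted channel
print(round(image_offset_mhz / uhf_raster_mhz))   # ~10 -> image falls in channel N+10 at UHF
print(round(image_offset_mhz / vhf_raster_mhz))   # ~11 -> image falls in channel N+11 at VHF

So any IF in the 35-45 MHz range puts the image roughly ten UHF channels (or eleven VHF channels) above the wanted one, regardless of the transmission standard.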

Quote:
Combined VHF-UHF Tuners with Separate UHF Oscillators:

Whilst it was logical that the three transistors used in such tuners as the Pi1 be used in essentially the same way on both VHF and UHF, looking at the UHF case alone, it was unusual in European practice to use separate transistors for the oscillator and mixer functions. The self-oscillating mixer seems to have been preferred. I should imagine that the use of separate transistors was the better approach, but that the self-oscillating mixer was considered to be adequate, and saved a device. Perhaps that goes back to the valve era. A separate oscillator would have required an additional triode valve, making for a three-valve tuner, probably ruled out on cost grounds. Maybe that thinking was carried over to the transistor era.
I think you're absolutely right; I'm convinced that tuner design was driven until well into the 1970s by mantras like "minimize the number of valves or transistors". So for twenty years the rule was two valves or three transistors per tuner, even when combi VHF-UHF tuners were introduced. This was undoubtedly driven by the need to limit component cost. And when it was decided to use a separate oscillator, it meant that the mixer could not be a transistor but reverted to a diode; see the U322 from 1975. It was the ELC family that finally introduced separate RF and MO transistors per band, adding up to seven transistors per tuner. So yes, these approaches can be persistent!

Quote:
VHF Tuners with Self-Oscillating Mixers:

This one is really the opposite of the previous case, and I must admit I was very surprised to find that it was actually done. The customary reason given for its not being widely used is that once TV receiver IFs moved up to just under the lower edge of Band I, it would be difficult to avoid regeneration at the lower Band I channels with a self-oscillating mixer. The regeneration issue is why, for example, the pentode mixer (in the form of triode-pentode valves such as the 6X8, 6U8 and PCF80) was adopted in the early 1950s, even though the pentode was contra-indicated at Band III. In the transistor era, grounded-base mixers were used, and in the USA also cascode bipolar mixers, to avoid the regeneration problem. When dual-gate mosfets arrived and were used in VHF TV tuners, the mixers had both signal and local oscillator on gate 1, so that gate 2 screened the drain from both inputs. In contrast, radio applications usually had signal on gate 1 and local oscillator on gate 2.

So the question arises - how did Philips avoid the regeneration problem with self-oscillating VHF mixers?
This refers to the VD1-12ET5732, which indeed introduced the VHF self-oscillating MO. However, we should not compare the transistor circuit with the earlier valves. In a valve mixer it indeed helped to use a pentode, giving better anode-to-grid isolation and less regeneration than a triode. However, in a grounded-base self-oscillating MO, as in the VD1, the collector-to-base capacitance is much less relevant. But other than that I can't claim to have the real answer. At the same time we have to admit that the VD1 had a long life for a tuner, so apparently the performance was more than acceptable for the demanding German market.

Quote:
Intermediate Frequencies:

Intriguing was the Philips VF5 case where the 32.7 MHz VIF was used not only for System L (where it was standard), but also for System E, which put the System E SIF at 43.85 MHz, actually overlapping the lower end of Band I and channel F2.

I wondered why not use the established 28.05 MHz VIF for System E, with SIF at 39.2 MHz, the same as for System L. This arrangement ensured that the System E LO frequencies fell on an opposite channel carrier frequency, and that there was no IF channel overlap with Band I.

VIF-low was necessary to facilitate reception of channels F2 and F4, for which only oscillator-high conversion was feasible, and also for the L’ Band I channels, which were inverted (vision carrier high).

In the solid-state era, one may imagine that the better screening that was possible – for example both tuners and IF strips in those small rectilinear metal boxes – obviated the need for worrying too much about where the in-band LOs fell, or that the SIF was within Band I. That made possible the upward movement of the System E IF channel so that the E and L VIFs were the same, rather than the SIFs.
Indeed an intriguing tuner, the VF5, and I was happy to finally have a circuit diagram. Note that the specification covers the tuner only for System E (819-line) channels F2 and F4! F3 (PC 56.15 MHz, SC 63.4 MHz) would require an LO at 23.45 MHz, which is of course impossible.
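A minimal arithmetic check of that statement (illustrative Python; the channel F3 figures and the 32.7 MHz VIF are taken from the text above, the rest is my own reading of the channel plan):

# VF5 vision IF and the System E channel F3 picture carrier, both in MHz.
vif_mhz = 32.7
pc_f3_mhz = 56.15

lo_low_mhz = pc_f3_mhz - vif_mhz    # oscillator-low: 23.45 MHz, far too low to realise
lo_high_mhz = pc_f3_mhz + vif_mhz   # oscillator-high: 88.85 MHz, but with F3's sound
                                    # carrier above its picture carrier this would flip
                                    # the IF spectrum relative to F2/F4 (my assumption)
print(lo_low_mhz, lo_high_mhz)

Either way F3 could not be received with a 32.7 MHz VIF, which matches the restriction in the VF5 specification.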
I agree that the change to a VIF of 32.7 MHz was driven by the emerging IF ICs. I'll try to cover that in the next chapter.

Quote:
But I wonder if the primary driver may have been to facilitate the use of synchronous vision demodulation. The quasi-synchronous IC types of the late 1960s and 1970s all used tank circuits to offset some of the quadrature errors produced by limiting an asymmetric sideband signal. So having the same VIFs for Systems L and E avoided the need to adjust synchronous demodulator tank circuit tuning according to the system being received at any given time. That problem did not arise with diode demodulators, which rectified whatever they received (and also added quite a bit of quadrature distortion in the process). Two SIFs may have been less of a problem in the solid-state era, and some (but not all) synchronous-demodulation AM SIF ICs did not require tank circuits, and so were effectively wideband.
Again, to be worked out in the next chapter, but your statement seems to be correct. With the emergence of SAW IF filters and QSS IF demodulators there was much less flexibility in the choice of VIF, so the trend reversed to a fixed VIF and a flexible SIF depending upon the standard.
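Just to illustrate in numbers what a fixed VIF with a standard-dependent SIF means (an illustrative Python sketch, assuming a common 38.9 MHz VIF, the usual vision-sound carrier spacings and high-side LO conversion; not a claim about any specific Philips design):

# Vision-to-sound carrier spacing (MHz) per transmission standard.
spacing_mhz = {"B/G": 5.5, "I": 6.0, "D/K": 6.5, "L": 6.5}

vif_mhz = 38.9  # fixed vision IF, as commonly used in multi-standard IF strips

# With a high-side LO the IF spectrum is inverted, so the sound IF sits
# below the vision IF by the standard's carrier spacing.
for system, d in spacing_mhz.items():
    print(f"System {system}: SIF = {vif_mhz - d:.1f} MHz")
# -> 33.4, 32.9, 32.4 and 32.4 MHz respectively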

That's it for the moment; I'll respond to the MOSFET topic in a separate post.

Regards, Pieter