Ellie's page on Composite Video (NTSC)

always a work in progress.

Table of contents
Raster Graphics
Composite Video
The NTSC Signal
Video Hardware

I highly recommend the book Video Demystified for all the granular details of video. It has everything you would need to know and more. This page is just a basic introduction to video.

Raster graphics

The composite video signal is a form of raster graphics. Raster graphics refers to a method of representing visual information as a one-dimensional array (a list) of pixels, which display equipment divides into a two-dimensional image made of rows and columns. The nice thing about raster graphics is that it provides visual information in a very easily-manipulated form.

Arrays are very easy to manipulate and process because you can create and apply a logic operation to each element of the array sequentially, or manipulate the array as a whole. That operation might be very simple, such as one that, say, shifts all of the items up or down, or it may be a more complicated logical operation involving specific conditions (if/else, for/while, etc.) which are handled by either a boolean (true/false, on/off, 1/0) operation or a mathematical function.

There are many other forms of representing and storing visual information, such as vector graphics, or, for 2D representations of 3D scenes, ray-tracing, among many others. These other forms of graphics can be very useful, but tend to require either more processing power or specialized algorithms. Like I said, the nice thing about raster graphics is that arrays are a very commonplace way of storing many different types of information. All of the pre-existing methods for working with arrays are available on any computing platform, and processing arrays is one of the most fundamental things that any programmer learns and that a computer can do, so it doesn't take much specialization to manipulate raster graphics.
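To make this concrete, here is a minimal sketch (in Python, with made-up dimensions - WIDTH, HEIGHT, and pixel_at are my own illustrative names, not anything standard) of how a flat one-dimensional pixel array is carved into rows and columns:

```python
# A toy "image": a 1-D array of 32 pixel values.
WIDTH, HEIGHT = 8, 4
pixels = list(range(WIDTH * HEIGHT))

def pixel_at(row, col):
    # A (row, col) lookup is just an offset into the flat list;
    # this is all the display equipment is really doing.
    return pixels[row * WIDTH + col]

# Rebuilding the 2-D image is just slicing the array into rows.
rows = [pixels[r * WIDTH:(r + 1) * WIDTH] for r in range(HEIGHT)]
```

Any operation on the image - shifting rows, inverting pixels - then becomes an ordinary loop or slice over the array.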

Top of Page

Composite Video

Behold, the best connector on earth!!!
Composite video signals are a composite of:
|- Color
|- Video
|- Blanking
|- Synchronization

In general, electronic signals encode information with Four Noble methods: Amplitude Modulation, Frequency Modulation, Pulse Width Modulation, and Phase Modulation. Hence, we arrive at the Eightfold Path of Retro computing.

Composite is just one way of transmitting video; there are alternative formats like VGA and HDMI which you probably know of. With composite, there were three standards: NTSC, PAL and SECAM. These standards are on their way to being obsolete.

Requiem for NTSC

We tend to think that the thing that draws us to composite video is a simple sense of nostalgia. I would propose a different, or maybe just somewhat more complicated reason.

I think people are drawn to these outdated technologies more because all their qualities imbue them, somewhat like a talisman, with a kind of energy, a distilled essence of the whole strange history of technology; all the contingencies and flukes of history that led to their creation seem holographically inscribed or transposed upon them.

Many, many people sacrificed their lives in all the labor that went into developing these things, and in religious or spiritual terms, a sacrifice usually entails something greater than the bare mechanistic result of that sacrifice - not just a static reconfiguration of local conditions, but some incomprehensible adjustment in the very underlying mechanics of reality itself.

I don't think it can be denied that the emergence of video and computing technology resulted in exactly that degree of a fundamental transformation of the world we live in.

I have mixed feelings about this, because on the one hand the beauty of composite video is that it transmits video information using only two wires: the signal wire and the ground (reference, zero volts). This makes it extremely convenient and interoperable. Also, the RCA jack is one of the best connectors ever made.

Obligatory Diatribe on Connector Design
As a tangent, I will point out that VGA uses at minimum four wires, and the HDMI connector is one of the ugliest and most excessively complicated connectors that most people are likely to encounter. VGA at least has the redeeming quality that its connector is designed in a way such that damaging the male or female end is nearly impossible. Other connectors, such as USB, 1/8" male/female connectors, and HDMI, are horribly designed: it is extremely easy to damage either the female or male end, rendering them useless. And for HDMI, with its enormous pin count, the prognosis for a broken female connector is typically fatal not just for the interface but - since the interface is typically soldered to the circuit board itself - potentially for the entire device.

1/8" connectors are absolutely horrible and should be avoided at all costs. If anything, I despise the 1/8" more than any other, because it is the connector most responsible, in the realm of consumer electronics, for standardizing the sadistic practice of soldering female connectors directly to the primary circuit board. At least from what I have experienced taking apart electronics, prior to the mass prevalence of 1/8" connectors, female connectors were typically given their own board, ensuring the safety of the primary board. On top of this already-damning evidence of catastrophic industry failure, 1/8" stereo connectors are pretty much impossible to repair. And along with the ubiquity of the cheap 1/8" connector - and, I would argue, directly owing to it - came the absurd cost-cutting measure of using cotton and copper-plated aluminum wire, which is all but completely un-serviceable.

Even the humble Mono 1/4" Connectors are not immune from my loathing, as they are also far too prone to damage on both the male and female end. However, assuming you don't have the cheap type which are filled with hot-glue for no good reason, they can be repaired relatively easily. Still, even mono 1/4" male connectors have a poor design as the tip can break off pretty easily, and if this happens you are pretty much screwed.

The RCA connector used in composite video and other interfaces is clearly one of the best connectors ever designed, and in my opinion with a few minor tweaks it could be suited for any purpose and be made practically invincible.

There are even different versions of the RCA connector which implement these tweaks, allowing for a universal two-line connector that is indisputably perfect. The male shaft design is robust enough to withstand any reasonable (and plenty of unreasonable) compression, and deformation can be fixed reliably by equal compression at a 90 degree angle.

The variety of plastic which is used for the shaft is pliable but strong, and while I typically despise the embedding of critical metal components in a substrate, the way it is done in RCA connectors, using a brilliant axial design, makes me rescind - in just this one instance - my typical manic repulsion toward the practice.

No less because of my own gender perversions, I am also captivated by the axial design of the RCA connector's subversion of the male-female dichotomy: "female" connectors just as commonly appear on the ends of wires as in a fixed position on a device.

The fixed-position device-mounted "female" connector is brilliantly designed such that even a contraction of the "male" cable at very near a 90 degree angle will result in a full retraction with very little stress to the connector assembly. In most cases, it is far more likely that the source of the contractive force on the wire will itself be retracted than that any damage will happen to the connector assembly, and even if that contracting force is extremely strong, I would put the odds of a safe retraction at around 50%.

On the other hand, the Cathode Ray Tube displays that composite is typically displayed on are a pretty terrible invention if I am honest... They consume far too much energy, and are such a bizarre idea in the first place... Firing high-powered beams of electrons inside of a massive vacuum-sealed tube is a completely insane idea for displaying video, and I cannot understand why in the world this was what we used for video for so long. Aside from this, composite video in NTSC, PAL, and SECAM format is an extremely complicated signal to generate. It is a baffling and seemingly arbitrary amalgam of occult modulation techniques.

We tend to think of composite video as an "analog" signal, and refer to video that uses RCA connectors as "analog" video, but this is, like most times people refer to things as analog, a misnomer. Pretty much all of the time, composite video is handled by fully digital microcontrollers that encode and decode the signals, and the signals themselves use very specific integer values that can be fully represented as just strings of (digital) binary values called bytes.

The NTSC Signal

For the purposes of this page, I will be talking about NTSC.
Welcome to the Kafka-esque labyrinth of the composite video signal!

Color, Video, Blanking, Synchronization: all four of these properties have to be crammed into a period of about 63.5 microseconds in a specific order, with just a few microseconds between them, and with timing tolerances of a microsecond or less.
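As a sanity check, the 63.5 microsecond figure falls right out of the frame rate and line count (this is just arithmetic, not an implementation of anything; the variable names are my own):

```python
# NTSC draws 525 lines per frame at roughly 30000/1001 (~29.97)
# frames per second, so the time budget per scanline is:
frames_per_second = 30000 / 1001
lines_per_frame = 525
line_period_us = 1e6 / (frames_per_second * lines_per_frame)
# line_period_us comes out to about 63.56 microseconds
```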

The basic video signal

With the understanding that raster graphics are basically a way of storing image information in a very long, one-dimensional array of pixels, how is that one-dimensional array transformed into a two-dimensional image?

Video devices encode a horizontal and vertical sync signal, which turns the one-dimensional array of pixels into a two-dimensional image made of rows and columns in an intuitive way: horizontal synchronization is encoded as a short negative pulse below the "black" level which tells the display that the current row is finished and the next row is starting, and vertical synchronization is encoded as a longer negative pulse that tells the TV that all of the rows are drawn and it's time to move on to the next frame of video. The "blanking" periods around these pulses tell the display to keep the screen dark while it moves back to the start of the next row or the top of the next frame.

The "video" part of the video signal is a period of about 52.6 microseconds that has all of the information about the brightness or color content of the line being transmitted. For black and white video, this is just some amount of voltage that varies over time. Color is a lot more complicated, and we will get into that below.
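Putting the timing numbers together, one black-and-white scanline can be sketched as a list of segments. The IRE levels and durations below are the commonly cited nominal NTSC values; the function and variable names are my own:

```python
# Signal levels in IRE units: sync tip sits below blanking,
# black slightly above it, white at the top.
SYNC_TIP, BLANK, BLACK, WHITE = -40, 0, 7.5, 100

def scanline(brightness=WHITE):
    # One line as (duration in microseconds, level) pairs.
    return [
        (1.5,  BLANK),        # front porch
        (4.7,  SYNC_TIP),     # horizontal sync pulse
        (4.7,  BLANK),        # back porch (color burst goes here)
        (52.6, brightness),   # active video
    ]

total_us = sum(duration for duration, _ in scanline())
# total_us adds up to the 63.5 microsecond line period
```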

By the way, composite video also incorporates "interlacing" and "retracing", which make the process not as straightforward as how I described it above. Instead of a simple list of values for the rows of the image drawn in order, each frame is split into two "fields": one field carries the Odd rows and the next carries the Even rows, and the display interleaves them. This makes composite video even more of a headache. I'll update this section when I understand it better.

Composite Color Video

The way color information is encoded in composite video is rather strange. Instead of just adding the color carrier signal on top of the brightness signal using simple amplitude modulation, frequency modulation, or pulse width modulation, color is transmitted using a combination of amplitude modulation and phase offset. The amplitude of the color carrier controls the saturation, while its phase controls the hue.
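In other words, the chroma signal is just the subcarrier scaled by saturation and shifted by hue. A sketch of that relationship (chroma, hue_deg, saturation, and delay_for_hue are my own illustrative names, not standard terms):

```python
import math

F_SC = 3.579545e6  # NTSC color subcarrier frequency in Hz

def chroma(t, hue_deg, saturation):
    # Saturation scales the amplitude; hue shifts the phase.
    return saturation * math.sin(2 * math.pi * F_SC * t
                                 + math.radians(hue_deg))

# Shifting the phase by hue_deg is equivalent to delaying the
# subcarrier by a fraction of one ~279 ns cycle:
def delay_for_hue(hue_deg):
    return (hue_deg / 360) / F_SC  # seconds of delay
```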

Adjusting amplitude is easily done using some resistors. Adjusting the phase of a waveform is a bit harder, but in theory pretty painless: it is just a matter of delaying the signal, which can be done by simply stopping and then starting a counter very briefly. However, you run into a problem here: how does the device receiving the signal know what reference and offset phases are being used by the device sending a signal?

Both of the devices are likely to have a crystal oscillator inside them, generating a fixed frequency near 3.58MHz (or some multiple of it), but when a device is turned on, the phase of its oscillator is going to be pretty much random, and there is no way to synchronize the phases of the crystals in the two devices. This is a problem... because if color is dependent on phase, this means that every time you connect a receiving device the hue will be different.

You could solve this problem by having some kind of master oscillator that synchronizes the two devices, but you run into two problems. First, this would require extra wiring beyond the standard two-wire RCA composite video connection, and the whole beauty of composite video is that it just requires a single two-wire connector. Second, the signal was designed to be transmittable through radio broadcast, which is a one-way street: the transmitter of the radio broadcast transmits, and the receiving device receives; they don't communicate back and forth.

To get around this, the NTSC signal has a color burst built into it: a "beep" of around 9 cycles of the color carrier, sent just before the active video period of each scanline, which serves as a reference for the phase of the color carrier. The actual color information is contained in the active video portion of the scanline. The phase of the color carrier in the active video part can be compared to the reference phase, and from this a consistent(ish) color can be decoded. This way, the receiver doesn't need to know the exact phase the sender is using; it can just sample the reference, compare it to the active video period, and use the difference between those two phases to determine the phase offset and thus the specified hue (and, from the amplitude, the saturation).
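The burst-relative trick can be sketched in a few lines. This is not how a real decoder is built (real ones use phase-locked loops and proper filtering); it only demonstrates that comparing phases against the burst cancels out the sender's random oscillator phase. All the names here are my own:

```python
import math

F_SC = 3.579545e6        # NTSC color subcarrier, Hz
SAMPLE_RATE = 4 * F_SC   # 4 samples per subcarrier cycle

def measure_phase(samples):
    # Correlate against a local sine/cosine to estimate phase.
    c = s = 0.0
    for n, x in enumerate(samples):
        w = 2 * math.pi * F_SC * n / SAMPLE_RATE
        c += x * math.cos(w)
        s += x * math.sin(w)
    return math.atan2(c, s)

def subcarrier(phase_deg, cycles=9):
    # Simulate a received burst or chroma snippet.
    n_samples = int(cycles * SAMPLE_RATE / F_SC)
    return [math.sin(2 * math.pi * F_SC * n / SAMPLE_RATE
                     + math.radians(phase_deg))
            for n in range(n_samples)]

# The sender's oscillator phase (here 30 degrees) cancels out:
offset = 30                        # unknown, random at power-on
burst = subcarrier(offset)
active = subcarrier(offset + 57)   # hue encoded as +57 degrees
hue = math.degrees(measure_phase(active) - measure_phase(burst))
# hue recovers ~57 regardless of the value of offset
```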

"Composing" the Signal

So, you now understand that there are several components to an NTSC encoded video signal: the color burst, the active video signal, the front and back porch (the short "quiet" intervals just before and after the horizontal sync pulse), the vertical and horizontal sync pulses, and finally the blanking. Combine them together and you get a composite video signal. But how are they combined? Well, fortunately - and maybe a bit surprisingly, given the complexity of everything described above - it is really just as simple as adding them all together by connecting them with some wires and resistors.
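A toy illustration of the summing step, with made-up numbers standing in for what each sub-circuit contributes at a handful of time steps (the resistor network in real hardware performs exactly this per-instant addition):

```python
# Per-time-step contributions from three imaginary sub-circuits,
# in arbitrary level units (not calibrated IRE values).
sync  = [-40, -40, 0,  0,  0,  0]
burst = [  0,   0, 5, -5,  0,  0]
video = [  0,   0, 0,  0, 60, 60]

# The composite signal is just the element-wise sum.
composite = [sum(parts) for parts in zip(sync, burst, video)]
# composite == [-40, -40, 5, -5, 60, 60]
```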

While combining them at the end is easy, generating and decoding the individual components of such a complex signal as is used for composite video is extremely difficult. Aside from the complexity of the signal itself we haven't even yet talked about the fact that all of this stuff is handled by hardware: each component of the signal is generated by a different electronic circuit, and all these different circuits need to work together closely.

In the same way that the device receiving a video signal needs to synchronize with the device sending one, all of the circuits inside each device have to synchronize with each other. It should go without saying that this is incredibly difficult to do as well.

As I often say when it comes to technical stuff: the engineers who are specialists in the field don't like all this complexity any more than you do as an outsider. It is frustrating, it takes a long time to figure out, and just like learning to ride a bike, you can't just read all about it and know how to do it; you have to actually get on the bike and practice for a while. Also like riding a bike: sometimes you puncture your tire and have to replace it; sometimes you have to bike uphill and shift gears, and so on. My point is, don't be discouraged. Everything you learn takes patience, practice and a lot of close research.

Top of Page


Most of the time, the circuitry in video display devices and video-generating devices is made possible with microcontrollers.

I have a whole page about them, but in short, microcontrollers are basically (literally) tiny computers, complete with stored memory, RAM and peripheral interfaces. These microcontrollers use binary logic, 1's and 0's, which are physically represented by transistors - electronic components that act as on/off switches. Astonishingly, these 1's and 0's are all that you need to do everything that any computer does. We translate the 1's and 0's into higher-level abstractions so that it makes sense to us humans, but all of it is based on binary arithmetic. The weirdest part about this is that, at the lowest level, there is no difference between what we think of as "files" and "programs". Essentially, everything in binary is a "file" - an address in memory where a value of 1 or 0 is stored. Electricity brings them to life and turns all of them into one big "program" by travelling through the maze of transistors according to predictable laws of physics.

However, there are also ways to generate video signals using only analog components (transistors, resistors, capacitors, inductors). Analog video was most common before digital technology like integrated circuits became affordable. The nice thing about analog video as compared to digital is that the sync signals are typically generated completely separately from the active video, ensuring that a "valid" video signal is produced. The video information itself is still "encoded", but as a continuously varying voltage instead of switches. Analog video was mainly used for broadcasting television signals.

to be continued....