
2009-04-02 Multimedia technology

posted Apr 1, 2009, 4:26 PM by Unknown user   [ updated Jun 11, 2009, 12:34 AM by Eddie Woo ]
For further details, please refer to IPT: Hardware Requirements

Class notes - 'hardware requirements'

This is really more about the use of hardware than the actual hardware.


Pixel is short for 'picture element'. Pixels give the visual display hardware the ability to imitate real life. A pixel is the smallest unit of data that can be controlled in an image; the factors that can be controlled include the light intensity and frequency (i.e. colour).


The resolution is the number of pixels in an image. Greater resolution results in finer detail - one could say that resolution is directly proportional to detail. The amount of resolution required depends on the data. Image data will generally be of a higher resolution than video data, for example.

Bit depth

A bit is a single 0 or 1 in a data sequence. Bit depth refers to the number of bits allocated to each pixel. A greater bit depth results in greater image quality, particularly in terms of colour. If the bit depth is x, the number of colours available for each pixel is 2 to the power of x; a bit depth of 24 bits therefore allows for approximately 16.7 million colours.
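The 2-to-the-power-of-x relationship can be checked with a few lines of Python (a small sketch, not part of the original notes):

```python
# Number of distinct colours available at a given bit depth.
# A pixel with x bits can take on 2**x different values.

def palette_size(bit_depth: int) -> int:
    """Return the number of colours a pixel of the given bit depth can show."""
    return 2 ** bit_depth

print(palette_size(8))    # 256 colours (e.g. greyscale or indexed colour)
print(palette_size(24))   # 16777216 colours ("true colour", ~16.7 million)
```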


The palette refers to the set of colours available for use; its size depends on the bit depth. In some cases the hardware output may not be compatible with the palette used. 24-bit colour is extremely high; most humans cannot distinguish between colours that are only subtly different. However, if the bit depth is too small, then colour photos will not be convincing enough for mainstream use.

Colour systems

For numbers to represent colours, a standard is needed. An example is the RGB system (additive colours for light - red, green, and blue are the primary colours) for display on a VDU. This is in contrast to the CMYK system (subtractive colours for ink - cyan, magenta, yellow, key*) for printed media (and for previewing an imitation of the intended CMYK output, using an RGB display).

However, not all colour systems add or subtract colour. The HSB/HLS system (hue, saturation, brightness or hue, lightness, saturation) defines where on the visible colour spectrum the colour lies (hue), how vivid the colour is (saturation; low saturation = 'pastel'), and how dark or light the colour is (brightness/lightness).

* K for Key, not blacK
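As an illustration (not part of the original notes), Python's standard colorsys module converts between the RGB and HSB/HSV systems described above:

```python
import colorsys

# Pure red in RGB (channel values normalised to the range 0..1)
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)
# -> hue at red, fully saturated, full brightness

# Lowering the saturation at the same hue gives a pastel pink
print(colorsys.hsv_to_rgb(0.0, 0.3, 1.0))   # approximately (1.0, 0.7, 0.7)
```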

Frame buffer

The frame buffer is a temporary, easily accessible storage area for bitmap data that is about to be used directly by the VDU. The size of the frame buffer depends on the resolution and bit depth.
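For example (a small sketch, not part of the original notes), the minimum frame buffer size is resolution multiplied by bit depth:

```python
# Minimum frame buffer size = number of pixels x bit depth, converted to bytes.

def framebuffer_bytes(width: int, height: int, bit_depth: int) -> int:
    """Return the minimum frame buffer size in bytes for the given display."""
    return width * height * bit_depth // 8

# A 1024x768 display at 24-bit colour:
print(framebuffer_bytes(1024, 768, 24))  # 2359296 bytes (2.25 MiB)
```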

Technology for image data

Pixel: the smallest piece of image data that can be controlled in an image file in terms of light intensity and frequency.
Resolution: the number of pixels.
Bit depth: the number of bits (each a 0 or 1) allocated to each pixel. Determines the number of colours in the palette.
Frame buffer: temporary storage of bitmap data that is to be directly used by the VDU.

Technology for audio data

Amplitude: the height of the sound wave
Wavelength: the distance between two successive points of the same phase in the wave (e.g. two adjacent peaks)
Sample: the smallest piece of audio data that can be controlled in an audio file in terms of sound intensity and frequency
Bit rate: the number of bits (each a 0 or 1) allocated to each second of audio
Audio buffer: temporary storage of waveform data that is to be used directly by the speaker(s)
Waveform: a representation in which the collection hardware samples real-life audio many times per second to generate a digital approximation
MIDI: a system in which musical note events are stored and later rendered to audio, typically by piecing together pre-recorded waveform samples of instruments playing certain notes in a specific order
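The bit-rate arithmetic can be checked with a quick calculation (a sketch, assuming CD-quality figures: 44,100 samples per second, 16 bits per sample, stereo):

```python
# Uncompressed audio: bit rate = sample rate x bits per sample x channels.

def audio_bit_rate(sample_rate: int, bits_per_sample: int, channels: int) -> int:
    """Return the bit rate (bits per second) of uncompressed audio."""
    return sample_rate * bits_per_sample * channels

rate = audio_bit_rate(44100, 16, 2)   # CD-quality stereo
print(rate)                           # 1411200 bits per second
print(rate // 8 * 60)                 # 10584000 bytes per minute (~10.6 MB)
```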

Modern formats for audio data

OGG: Ogg Vorbis audio encoding and streaming

The Ogg Vorbis audio codec (software to encode and decode audio data) was introduced in 2002 in response to a 1998 notice from the creator of the MP3 format, stating that licensing fees for MP3 would soon be enforced. Ogg Vorbis works similarly to MP3 (lossy compression), but is not patented. The codec is open source software and is free in both senses - free as in beer, and free as in free speech - so any software developer can use the OGG format without having to pay royalties.

M4A: MPEG-4 Part 14 container format

M4A is actually a file extension for the container format MPEG-4 Part 14. Being a container format, it is usually used to store audio and video data but can, in theory, store any data - the advantage being that other data, e.g. subtitles, can also be stored in the file. Officially the file extension should be MP4, but because MPEG-4 Part 14 can store any kind of data, giving audio-only files the M4A extension and video files the MP4 extension has become a naming convention.

Jacaranda questions 7.2

4. How many images are needed to create a morph? Why?

Only two images are needed, because a morph is a smooth transition from one static image to another. However, these two images must be analysed to generate the many intermediate frames required for a smooth transition.
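As a rough sketch (not from the notes), the pixel-blending half of generating intermediate frames can be shown as a cross-dissolve; a true morph would also warp the geometry between the two images:

```python
# Cross-dissolve: each intermediate frame is a weighted average of the
# start and end pixels. (This shows only pixel blending, not geometry warping.)

def blend_frame(start, end, t):
    """Blend two equal-sized pixel lists; t runs from 0.0 (start) to 1.0 (end)."""
    return [round(a * (1 - t) + b * t) for a, b in zip(start, end)]

image_a = [0, 0, 0, 0]          # four black pixels
image_b = [255, 255, 255, 255]  # four white pixels

# Five frames transitioning from image_a to image_b
for step in range(5):
    print(blend_frame(image_a, image_b, step / 4))
```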

5. Would you describe a morph as a cel-based or path-based animation? Why?

A morph would be classified as cel-based animation: many bitmap frames are created to achieve the smooth transition as the difference between the two static images cannot always be described in terms of movement alone.

6. Why are most multimedia video clips played in a small area of the screen?

Multimedia video clips are of a relatively low resolution, as high-resolution video would simply require too much storage space. This is particularly because they are "clips": they are not required to be high quality, but rather to be easily distributed. Some video data - for example, 1080i/1080p high-definition video (1920×1080 pixels) - would certainly fill the screen of a computer VDU.
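The storage argument can be made concrete with a back-of-the-envelope calculation (a sketch with assumed figures: 24-bit colour, 25 frames per second):

```python
# Storage needed for raw, uncompressed video.

def raw_video_bytes(width, height, bit_depth, fps, seconds):
    """Return the bytes needed to store uncompressed video of the given spec."""
    return width * height * bit_depth // 8 * fps * seconds

# One minute of raw 1920x1080, 24-bit, 25 fps video:
size = raw_video_bytes(1920, 1080, 24, 25, 60)
print(size)                  # 9331200000 bytes
print(size / 10**9, "GB")    # ~9.3 GB for a single minute
```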

7. How do colour laser printers create colour images?

The bitmap image data that is about to be transferred to paper is temporarily written onto a roller, as a pattern of electrostatic charge, by a laser that reverses the charge wherever it strikes. The roller is then exposed to toner, and the charged areas attract the toner through static electricity. A monochrome laser printer only needs to do this once, with one roller exposed to black toner; a colour laser printer must do it four times (for cyan, magenta, yellow, and black) to achieve a full-colour image.

8. How do vector graphic systems work?

Vector graphics involves using simple geometric instructions to create images from basic shapes and lines such as parallelograms, circles, curves, and triangles, instead of breaking the image up into thousands of small dots. Colour is still supported, and can be used more efficiently - for example, with gradients instead of a separate shape for each shade. This can conserve a great deal of storage space for simple diagrams, but is not suitable for more complex images that cannot easily be described mathematically, such as photographs.


However, for simple images such as diagrams, it is extremely effective - particularly if the diagram is indeed composed of nothing more than basic shapes, for example a system flowchart - as there is no need to describe every single pixel in the image. This also means that vector graphic images effectively have infinite resolution, as they can be resized simply by changing the mathematical proportions.

Vector graphic files usually have the file extension SVG, which is an acronym for "Scalable Vector Graphics".
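To illustrate (a hypothetical minimal example, not from the notes), the following Python writes an SVG file in which the image is stored as shape instructions rather than pixels; resizing it only means changing the numbers:

```python
# A minimal SVG "image": a blue rectangle and a red circle described as
# geometric instructions. The file stays tiny at any display size.

svg = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <rect x="10" y="10" width="80" height="80" fill="blue" />
  <circle cx="150" cy="50" r="40" fill="red" />
</svg>"""

with open("diagram.svg", "w") as f:
    f.write(svg)

print(len(svg), "bytes")  # well under a kilobyte, regardless of display size
```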