Component 3B - ii. The 7 information processes

A record of the 7 information processes should be maintained as the multimedia system is developed, including relevant discussion of issues related to software/hardware constraints and design decisions (e.g. organising data: compression levels applied to image, audio and video data).

Collecting:
  • Video and image data were collected using digital cameras:
    • Ria's - Canon IXUS 80IS
    • Mine - Canon PowerShot A470
  • Audio data was collected using Mr. Chandra Handa's Sony IC audio recorder
  • Numerical data was collected from other people, using Facebook, MSN, Google Talk, and other email/IM platforms as communication services
Organising:
  • Video data from the IXUS was organised into AVI format, at a resolution of 640 x 480 and a frame rate of 30 fps
  • Video data from the PowerShot was organised into AVI format, at a resolution of 640 x 480 and a frame rate of 20 fps
  • Image data from the IXUS was organised (lossy compression) into JPG format at a resolution of 3264 x 2448 (a sketch of this kind of compression appears after this list)
  • Image data from the PowerShot was organised (lossy compression) into JPG format at a resolution of 1600 x 1200 (the image size was accidentally set to medium, as I discovered later on)
  • Audio data from the Sony recorder was organised (lossy compression) into MP3 format at a bitrate of 192 kbps
  • Numerical data was organised into a Microsoft Excel spreadsheet
  • Once my video product had been completely edited, Windows Movie Maker organised it into a single WMV file.
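
To illustrate the kind of lossy compression applied while organising the image data, here is a minimal Python sketch using the Pillow library. The file names and the quality setting are hypothetical, for illustration only, and not the actual values from my project.

    from PIL import Image  # Pillow imaging library

    # Hypothetical file names, for illustration only.
    img = Image.open("poster_photo.png")

    # Saving as JPEG applies lossy compression: a lower quality value
    # trades image fidelity for a smaller file size.
    img.save("poster_photo.jpg", format="JPEG", quality=85)

The same trade-off applies to the MP3 bitrate above: 192 kbps discards more audio detail than a higher bitrate would, in exchange for smaller files.
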
Analysing:
  • My final edited video product was uploaded to YouTube in two parts, due to the 10-minute limit on each video. I added a title and description to each video to describe its contents more accurately, and YouTube also automatically generated a screencap image for each video. These are forms of analysing, which serve to add meaning or purpose to data.
  • I analysed the edited audio data by applying a consistent file-naming convention. I also created a text transcript of the first debate, and have linked to it from the original audio files.
  • The results of my survey were analysed to produce several graphs and interesting statistics/trends/patterns.
  • Each audio file, when opened in Audacity, was analysed to produce a graphical representation (waveform).
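
As an aside, here is a minimal Python sketch of how a waveform like the one Audacity displays can be plotted. It assumes the recording has been exported as a mono, 16-bit WAV file; the file name is hypothetical.

    import wave

    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical file name; assumes a mono, 16-bit WAV export.
    with wave.open("debate_part1.wav", "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    # Plot amplitude against time, much like Audacity's waveform view.
    times = np.arange(len(samples)) / rate
    plt.plot(times, samples)
    plt.xlabel("Time (s)")
    plt.ylabel("Amplitude")
    plt.show()
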
Storing/retrieving:
  • Video and image data were firstly stored on each camera's memory card. From there:
    • Ria's - it was transferred to my USB flash drive, and then onto the hard drive of my home computer.
    • Mine - it was transferred directly onto the hard drive of my home computer.
  • Audio data was firstly stored on the recorder's internal memory, then copied to my USB flash drive and then onto the hard drive of my computer.
  • Numerical data - the survey and people's answers were stored on Facebook and various email servers; the spreadsheet into which I compiled these answers was stored on my computer's hard drive.
  • The video, image, and audio files were stored on my computer's hard drive while I edited them.
  • The completed multimedia products were stored on the Intranet via the Google Sites infrastructure.
Processing:
  • The raw video files which I had collected were processed in Windows Movie Maker. I edited the long video clips by dividing them into separate sections, adding titles and transition effects (such as fades) between scenes, and incorporating photographs I had taken (of the posters and drawings being presented) for greater relevance to users.
  • Once the video files were uploaded to YouTube, it spent some time performing its own processing before each video was made available to watch. This included converting the files to a format appropriate for Internet streaming, creating high-quality and low-quality versions of each video, and generating screencaps.
  • The audio file was originally just one long recording (over 30 minutes) covering the whole debate. I opened it in Audacity to process it, dividing it into multiple sections (one per speaker) and editing out unnecessary noises and silences (a sketch of this kind of splitting appears after this list).
  • The raw results of the survey (consisting of a series of letters) were processed to find the number of people who gave each response to each question (see the tally sketch after this list).
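
As a rough illustration of the audio splitting step (which I actually performed by hand in Audacity), here is a Python sketch using the pydub library, which in turn needs ffmpeg for MP3 support. The file name and cut points are hypothetical.

    from pydub import AudioSegment  # pydub; needs ffmpeg installed for MP3

    # Hypothetical file name and cut points (in milliseconds).
    debate = AudioSegment.from_mp3("debate_full.mp3")
    cut_points = [0, 4 * 60 * 1000, 9 * 60 * 1000]
    cut_points.append(len(debate))  # len() gives the duration in ms

    # Slice the long recording into one segment per speaker and export each.
    for i, (start, end) in enumerate(zip(cut_points, cut_points[1:]), start=1):
        segment = debate[start:end]
        segment.export(f"debate_speaker{i}.mp3", format="mp3", bitrate="192k")
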
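And here is a minimal sketch of the survey tallying (and the graphing mentioned under Analysing), using hypothetical single-letter responses rather than my actual survey data.

    from collections import Counter

    import matplotlib.pyplot as plt

    # Hypothetical raw responses to one survey question, one letter per person.
    responses = ["A", "B", "A", "C", "A", "B", "A"]

    # Count how many people gave each response...
    tally = Counter(responses)
    print(tally)  # Counter({'A': 4, 'B': 2, 'C': 1})

    # ...and turn the counts into a simple bar graph.
    plt.bar(list(tally.keys()), list(tally.values()))
    plt.xlabel("Response")
    plt.ylabel("Number of people")
    plt.show()
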
Transmitting/receiving:
  • The completed multimedia products were uploaded to the Intranet (transmitting).
  • End users will have to download the multimedia (at least temporarily) to their computers (receiving).
Displaying:
  • Video and image data were displayed on the LCD screen (the small screen on the back of the camera) while recording and during playback mode.
  • The video, image, and numerical data were all displayed on my computer's VDU while I was editing and manipulating them.
  • The audio data was displayed through speakers while I was playing it back and editing it.
  • End users will also experience this multimedia, displayed through their VDU and/or speakers.
    • I made the design decision to allow users to stream the debating audio files directly from the page, rather than having to download the entire MP3 file and wait for it to load completely.