Closed Captioning, Subtitling, and Supertitling

					Types of captioning

   1. Closed captioning

      The term "closed" in closed captioning indicates that not all viewers see the
      captions. These are captions that are hidden in the video signal and that are
      invisible without a special decoder.

      Technically, the caption, which takes the form of an electrical signal, is buried on
      line 21 of the VBI (Vertical Blanking Interval) of an analog video signal, or tucked
      into a digital video packet. The VBI consists of a number of "lines" of video. The
      21st line has been allocated to closed-caption information.

      The viewer therefore has to have a decoder that interprets the information on line
      21 and displays the captions on the viewer's screen. Once decoded, the captions
      display as white block letters on a black background.

      Closed captioning encoding methods have been established by government
      regulation and allow two characters of information to be placed in each video
      frame. There are 30 frames in a second, which translates to 60 characters per
      second, or about 600 words per minute. Note that it takes one frame to transmit a
      command, such as "go to a new line of roll-up," but more than one frame to
      position a caption.
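
      As a quick illustration of that arithmetic (a back-of-the-envelope sketch; the
      six-characters-per-word average is an assumption, not part of any standard):

```python
# Back-of-the-envelope caption bandwidth for Line 21 captioning.
CHARS_PER_FRAME = 2      # two caption characters per video frame, as described above
FRAMES_PER_SECOND = 30   # NTSC frame rate (nominally 29.97, rounded here)
AVG_CHARS_PER_WORD = 6   # assumed: roughly five letters plus a space

chars_per_second = CHARS_PER_FRAME * FRAMES_PER_SECOND          # 60
words_per_minute = chars_per_second * 60 / AVG_CHARS_PER_WORD   # 600

print(f"{chars_per_second} characters/second, about {words_per_minute:.0f} words/minute")
```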

      Some features of closed captions are:

             They can be turned on and off
             They are mono-spaced
             They are drawn by the viewer's TV decoder as characters on the screen
             They are displayed on a black background
             There are 15 lines in the image area and 32 characters per line

      A variety of languages and channels are available for the viewer to choose from;
      for example, French and Spanish captions may be carried on different channels.
      Because they are easy to read, closed captions are very popular.


   2. Subtitling

      Subtitles are textual versions of the dialog in a television program. They are
      commonly displayed at the bottom of the screen and appear either as a written
      rendering of the dialog in the same language or as a written translation of
      dialog in a foreign language.

      Subtitles are hidden unless requested by the viewer from a menu. They are
      intended for hearing audiences and, unlike captions, do not usually carry
      additional representations of sound for deaf and hard of hearing viewers. They
      are widely used on the Internet and for foreign-language video.

      Unlike closed captioning, subtitles do not require a caption decoder to view.
      Subtitling is also far more flexible: it uses proportionally spaced fonts displayed
      on a transparent background, and tools such as AutoCaption, which replaced the
      traditional permanent etching of subtitles into the video, allow subtitling with any
      font style, graphics, background, or colors.

      Some features of subtitles are:

             The subtitle encoder draws the characters
             A background is optional
             There is a variable number of characters per line
             Any character or graphic can be used
             About 12 lines fit in the image area

      Because subtitle characters need to be somewhat larger than closed-caption
      characters, only about 12 lines of subtitled text fit in the space occupied by
      15 lines of closed-caption text.


   3. Supertitling

      Supertitling refers to the electronic display of captions to an entire audience at
      theatrical events, hearings, assemblies, lectures, or town meetings. The dialogue
      appears as text that the audience can read while watching the performance.

      Technically, electronic supertitles are electronic signboards made of thousands of
      LEDs (Light Emitting Diodes); they have largely replaced surtitles produced with
      video projectors or with slides and slide projectors. Some supertitle systems
      instead use video or computer monitor displays.


Types of closed captioning

      Definition

      Closed captions are captions hidden in the video signal that are invisible without a
      special decoder. There are several types of closed captions, described below.

      The term "closed" in closed captioning indicates that not all viewers see the
      captions – and this is what distinguishes then from open captions, which are
      visible to all viewers.
The terms "captions" and "subtitles" have different meanings, though they are
often used interchangeably. Subtitles assume the viewer can hear but cannot
understand the language, so they translate dialogue and some on-screen text.
Captions convey all significant audio content: spoken dialogue and non-speech
information such as the identity of speakers and their manner of speaking,
along with music or sound effects, rendered as words or symbols.

Application

Closed captions are used as a tool by people learning to speak a non-native
language or learning to read, and in noisy environments where the audio is
difficult to hear or is intentionally muted. Closed captions are also used by
deaf or hard of hearing individuals to assist comprehension, and by viewers who
simply wish to read a transcript along with the program audio.

Television and video

In the production of live programs, closed captions are created by a captioner
who listens to the audio and keys every word using a special stenographic or
stenomask-type machine attached to a computer. The machine's phonetic output is
instantly translated into text by the computer and displayed on a screen.

The system then adds the captions to the video signal, and the captions and
video can be sent simultaneously to all participants (in a video conference,
for example). In some cases the transcript is available beforehand, and the
captions are simply displayed during the program after being edited. For
programs that mix pre-prepared and live content, such as news bulletins, a
combination of these techniques is used.

For prerecorded programs and home videos, audio is transcribed and captions are
prepared, positioned, and timed in advance.

In PAL and SECAM countries, captioning is broadcast and stored differently.
Although the method of preparation is similar, teletext is used rather than Line 21.
A variation of Line 21 is used in PAL countries for videotapes, since teletext
captions cannot be stored on a standard VHS tape due to its limited bandwidth,
although they are available on S-VHS tapes and DVDs.

In NTSC programming, captions are encoded into Line 21 of the VBI (Vertical
Blanking Interval), a part of the TV picture that sits just above the visible portion
and is usually unseen. For ATSC (digital television) programming, three streams
are encoded in the video: two are backward-compatible Line 21 captions, and the
third is a set of up to 63 additional caption streams encoded in the EIA-708
format.
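
The byte-level details of Line 21 data are defined by the CEA-608 standard. As a
simplified sketch of one such detail, each caption byte carries seven data bits plus
an odd-parity bit; the helper below (an illustration, not a full decoder) checks the
parity and strips it before interpreting the roughly ASCII character value:

```python
def decode_line21_byte(raw: int):
    """Strip the odd-parity bit from one Line 21 (CEA-608) caption byte.

    Each byte carries 7 data bits plus an odd-parity bit in the most
    significant position. Returns the 7-bit value, or None if the parity
    check fails and the byte should be treated as a transmission error.
    """
    if bin(raw & 0xFF).count("1") % 2 != 1:   # odd parity expected
        return None
    return raw & 0x7F

# Two caption bytes arrive with each frame, as described above.
pair = (0xC8, 0xE9)                           # sample values chosen for illustration
decoded = [decode_line21_byte(b) for b in pair]
print([chr(d) if d is not None and d >= 0x20 else d for d in decoded])  # ['H', 'i']
```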
In the US, older televisions usually require a set-top decoder. Since the passage
of the Television Decoder Circuitry Act, most television receivers have been
required to include closed-caption decoding. High-definition display screens may
lack caption decoding, but HDTV (high-definition TV) sets, receivers, and tuner
cards are covered.

There are three ways that captions can be presented to a viewer. These are the
three styles of Line 21 captioning:

      Roll-up: These are used almost exclusively for live events. They are also
       called scroll-up or scrolling. The words appear one at a time at the end of
       the line, from left to right and when a line is filled, it rolls up to make
       room for a new line. Older decoders can only display roll-up captions at
       the bottom of the screen but they can be placed anywhere to avoid
       covering graphics.

      Pop-on: These are also called pop-up or block captions and are the
       standard for pre-taped material. The entire caption appears, all at once,
       anywhere on the screen. The method is used for most pre-taped television
       and film programming. When a pop-on caption appears, all captions
       previously on the screen are erased.

      Paint-on: The name comes from the way they are drawn on the screen a
       letter at a time, so you can see them "paint on" to the screen. They are
       free-form in their positioning, but they don't erase what was already on the
       screen. They are commonly used for commercials and special effects.

A program may be packaged with a mix of roll-up and pop-on captions (for
example, roll-up for narration and pop-on for song lyrics). A musical note symbol
(a hash sign in the UK, Ireland, and Australia) is used to indicate song lyrics or
background music. Generally, lyrics are preceded and followed by music notes (or
hash signs), while song titles are bracketed like a sound effect. Standards vary from
country to country and company to company.

Capital letters are often used in captioning because many older home caption
decoder fonts had no descenders for the lowercase letters g, j, p, q, and y, whereas
virtually all modern TVs have caption character sets with descenders. Text can be
italicized, along with a few other style choices, and captions can be presented in
special colors as well.

The original Line 21 specification had many limitations from a typographic
perspective; for example, it lacked many of the characters needed for captioning in
languages other than English.

Captions are often edited to make them easier to read and to decrease the amount
of text presented onscreen. Offensive terms are also captioned, but if the program
is censored for TV broadcast, the broadcaster might not have arranged for the
captioning to be edited or censored as well. A decoder is available for parents
who wish to censor offensive language in programs: the video signal is fed into
the decoder, and if it detects an offensive term in the captioning, the audio is
muted for the duration of that term.


DVDs

Some NTSC DVDs carry closed captions in the Line 21 format. This type
of captioning is normally carried in a subtitle track labeled either "English for the
hearing impaired" or SDH (Subtitles for the Deaf and Hard of Hearing). On some
DVDs, the Line 21 captions may have the same text as the subtitles; on
others, only the Line 21 captions incorporate the additional non-speech
information considered necessary for deaf and hard of hearing audiences.

Many deaf and hard of hearing subtitle files on DVDs are variations of the
original teletext subtitle files. Blu-ray Disc and HD DVD media cannot carry Line
21 closed captioning, owing to the design of the High-Definition Multimedia
Interface (HDMI) specification, which was designed to replace older analog and
digital standards such as VGA, S-Video, and DVI.

Movies

There are quite a few technologies used to provide captioning for movies in
theaters and they fall into two broad categories: open and closed.

Open captioning in a theater can be achieved through burned-in captions,
projected bitmaps, or, in rare cases, a display positioned above or below the movie
screen. Probably the best-known closed captioning choice for theaters is
the Rear Window Captioning System from the National Center for Accessible
Media, in which a reflection of the captions is displayed on a panel
mounted in front of the viewer. Similar reflective systems have also been adopted
by other companies, such as Cinematic Captioning Systems.

Another company, DTS (Digital Theater Systems), the creator of a surround-sound
format, has a digital captioning device called the DTS-CSS (Cinema Subtitling
System). It combines a laser projector, which can place the captioning anywhere on
the screen, with a thin playback device whose CD can hold many languages.

Video games

It has become common for video games to be closed captioned. Many games
now at least offer subtitles for spoken dialog during cut scenes, and many
include significant in-game dialog and sound effects in the captions as well. In
most games, not only are subtitles available during cut scenes, but any dialog
spoken during real-time gameplay is captioned as well, allowing hard of
hearing players to know what enemy guards are saying and when their character
has been spotted.

The game systems themselves play no role in the captioning, as each game must
have its subtitle display programmed independently. Video games do not offer
Line 21 captioning decoded and displayed by the television itself; instead they have
a built-in subtitle display, more akin to that of DVDs.

Theater

Opera houses have used captioning for their productions for a long time, while live
theater captioning has only recently begun to emerge. Display techniques vary, with
subtitles, surtitles, and individual displays all in use.

Telephones

Closed-captioned telephony is a relatively new application, intended for deaf
and hard of hearing people.

Media monitoring services

Most media monitoring services in the United States capture and index closed-
caption text from news and public affairs programs, allowing the text to be
searched for client references.

HDTV interoperability issues

The US ATSC HDTV system originally specified two different closed-captioning
datastream standards: the original Line 21 format, and a more modern version
carried in the MPEG-2 video stream, the EIA-708 standard.

DTV standard captioning improvements

The EIA-708 specification provides for dramatically improved captioning:

      Viewer-adjustable text size, allowing individuals to adjust their TVs to
       display small, normal, or large captions
      More text styles, including edged or drop-shadowed text rather than the
       letters on a solid background
      More text fonts, including monospaced and proportional spaced, serif and
       sans-serif, and some playful cursive fonts
      An enhanced character set with more accented letters and non-English
       letters, and more special symbols
      More text and background colors, including see-through backgrounds to
       optionally replace the big black block
      Higher bandwidth, to allow more data per minute of video
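
To make the list above concrete, a viewer's EIA-708 caption preferences can be
pictured as a small settings record. This is purely an illustration; the field names
below are invented, not taken from the EIA-708 standard:

```python
from dataclasses import dataclass

@dataclass
class CaptionStyle:
    """Illustrative viewer-adjustable caption settings (field names are invented)."""
    text_size: str = "normal"          # "small", "normal", or "large"
    font: str = "proportional sans"    # monospaced/proportional, serif/sans-serif, etc.
    edge: str = "drop shadow"          # "none", "edged", or "drop shadow"
    text_color: str = "white"
    background_color: str = "black"
    background_opacity: float = 1.0    # 1.0 = solid block, 0.0 = see-through

# A viewer who wants large text on a see-through background:
viewer_prefs = CaptionStyle(text_size="large", background_opacity=0.0)
print(viewer_prefs)
```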


History of Closed Captioning

      Open captioning

      Open captioning was the earliest form of television captioning, with the words
      printed directly on the screen. It began with The French Chef on PBS in 1972 and
      was followed by other programs captioned by the WGBH Caption Center.

      However, open captioning was reportedly not well accepted by the hearing
      community, and this led to the development of closed captions, which are broadcast
      on Line 21 of the Vertical Blanking Interval and are not visible unless decoding
      circuitry is used.

      Beginning of closed captioning

      Closed captioning began in 1980, after the government had established the
      nonprofit National Captioning Institute to sell special decoders for closed
      captioning. A new National Captioning Institute was set up to avoid the potential
      conflict of having PBS, through the WGBH Caption Center, offer captioning
      services to other networks.

      Closed captioning of television programs grew, but not quickly enough to satisfy
      deaf and hard of hearing people. Broadcasters did not want to caption more
      programs unless more decoders were sold, and many hard of hearing people did
      not want to buy decoders until more captioned programs were available. In fact,
      more decoders were being purchased by hearing people, especially people learning
      English as a second language who found they could benefit from the captions, than
      by deaf people themselves.

      Several factors kept decoder sales low: cost, limited availability, and not least, the
      reluctance of hard of hearing people to reveal their hearing loss by having a
      decoder attached to their television set.

      The Future of closed Captioning

      The Federal Communications Commission (FCC) enacted a rule on the
      implementation of closed captioning to ensure that all TV programming
      distributors in the United States provide closed captions for Spanish-language
      video programming by January 1, 2010.
The Captioning Process

The steps followed in the closed captioning process are:

       1. The Master Copy

                      Receiving the master copy of the videotape from the client.
                      Creating a VHS work tape copy of the master with a time-code
                       window.
                      Making of an audio cassette.

       2. Creation of the Transcript

              An accurate transcript is essential for captioning. If a transcript does not
              exist, it must be created. Transcripts can be submitted in the following
              formats:

                      Text file.
                      Captioner or transcriber.
                      Internet (Via Internet file transfer).
                      Fax (A text file can be faxed directly to the computer).
                      Printed Script (A printed script is useful if it can be scanned
                       accurately. The scanner works best with clean, even-toned, typed
                       scripts).
                      Disk (It can be in virtually any word processing application or an
                       ASCII)

              Direct transcription:

              The captioner listens to the audio and converts the speech to text by typing
              what is heard. They can use a transcriber machine with auto backup,
              captioning software that advances, stops, and backs up the tape in the
              VCR, or professional transcription equipment to speed up the process.

              Indirect transcription:

              The captioner retypes from a printed or faxed script, scans from a clean
              printed script, or imports from a word-processing file sent on disk or by
              e-mail. Indirect transcriptions must eventually be compared with the
              original audio; most scripts received do not conform to the audio and
              must be fixed.

       3. Formatting
      Next, the transcript is:

             Divided into captions (the text is broken into short phrases which
              will become captions; where possible, the split follows the natural
              breakdown of sentence structure. A simple sketch of this step
              appears at the end of this section).
             Cleaned of extraneous text while maintaining the meaning and essential
              vocabulary of the message. Music and sound effects are described.
             Checked for accuracy in language mechanics, such as
              punctuation, grammar, spelling, and so on.

       Normally, text appears as two-line pop-on captions; however, some systems
       can use from one to four lines in pop-on or roll-up fashion. The appearance
       of the captions is then set: italics, underlining, colors, speaker
       identification, brackets around sound effects, music notes around song
       lyrics, and so forth.

       Some captioners do this as the script is entered; others go back and add
       it later.
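
       A minimal sketch of the caption-splitting step described above, assuming a
       simple greedy word wrap, the 32-character line limit noted earlier, and two
       lines per pop-on caption (real captioners also respect sentence structure and
       speaker changes):

```python
import textwrap

MAX_CHARS_PER_LINE = 32   # closed-caption line width noted earlier
LINES_PER_CAPTION = 2     # typical pop-on caption height

def split_into_captions(transcript: str) -> list:
    """Greedily word-wrap a transcript into two-line caption blocks."""
    lines = textwrap.wrap(transcript, width=MAX_CHARS_PER_LINE)
    return [lines[i:i + LINES_PER_CAPTION]
            for i in range(0, len(lines), LINES_PER_CAPTION)]

text = ("Welcome back to the show. Today we are talking about "
        "how closed captions are prepared and timed.")
for caption in split_into_captions(text):
    print(caption)
```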

4. Time-Coding

             A work tape is made (the VHS work copy of the master with a
              time-code window, and the audio cassette, created in step 1).
             Matching of the time code (a time code is assigned to each
              caption, which tells the caption when to appear on the screen).
             "Grabbing" time codes (these are captured as the tape plays,
              using the computer keyboard. In this step the captions may be
              moved up, down, left, or right to determine where they will appear
              on the screen. One must ensure that necessary information is not
              covered by the captions and that the positioning hints at who the
              speaker is).
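
       Time codes are usually written as HH:MM:SS:FF (hours, minutes, seconds,
       frames). A minimal conversion sketch, assuming non-drop-frame time code at
       30 frames per second:

```python
FPS = 30  # assumed non-drop-frame NTSC time code

def timecode_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_timecode(frames: int, fps: int = FPS) -> str:
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    ss, ff = divmod(frames, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

start = timecode_to_frames("00:01:12:15")
print(start, frames_to_timecode(start + 90))  # a caption appearing 3 seconds later
```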

5. Presentation Rate

       Presentation rate control, or reading speed

       This refers to the number of words per minute presented in the captions.
       It depends on the grade level of the video, so a given presentation
       rate is assigned to each captioning job.

       This is a key step and requires quite a bit of effort. The reading speed
       has to be set correctly in line with the timings of the tape; the tape
       must therefore be retimed whenever the reading speed was set before the
       tape was timed.

        Errors caused by captions starting too early or staying on too late are
        what necessitate editing of the captions. There are also a number of
        rules that must be followed in order to preserve the integrity of the
        original script.
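
        A minimal sketch of a reading-rate check, assuming caption in and out times
        are given as frame counts at 30 frames per second; the 160 words-per-minute
        threshold is an example target, not a fixed rule:

```python
FPS = 30

def reading_rate_wpm(text: str, in_frame: int, out_frame: int, fps: int = FPS) -> float:
    """Words per minute for a caption displayed from in_frame to out_frame."""
    seconds = (out_frame - in_frame) / fps
    return len(text.split()) / seconds * 60

caption = "I never expected to see you here tonight."
rate = reading_rate_wpm(caption, in_frame=2175, out_frame=2265)  # 3 seconds on screen
print(f"{rate:.0f} words per minute")
if rate > 160:  # example target assigned to the captioning job
    print("Caption is on screen too briefly; retime it or edit the text.")
```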

6. Positioning

        Captions should be positioned toward the speaker's position onscreen.
        Otherwise, they are placed on the bottom lines at the center of the
        screen.
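
        A toy sketch of leaning a caption toward the speaker, assuming the speaker's
        horizontal position is known as a value from 0.0 (far left) to 1.0 (far
        right); this is only an illustration, not any particular system's placement
        algorithm:

```python
CAPTION_COLUMNS = 32   # caption rows are 32 characters wide, as noted earlier

def caption_indent(speaker_x: float, text: str) -> int:
    """Choose a starting column that leans toward the speaker's position."""
    free_columns = CAPTION_COLUMNS - len(text)
    return max(0, round(free_columns * speaker_x))

line = "Over here!"
print(" " * caption_indent(0.8, line) + line)   # nudged toward the right of the row
```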

7. Checking and Revision

       Viewing

       A test-run of the video and the captions is to be done. They are to be
       played at the same time to test their appearance in the final captioned
       video.

       Checking and revision

       The captions are to be carefully checked for errors before being recorded.
       Automating the process can allow for auto-spelling checks, reading-rate
       checks, and technical timing error checks.

       The captioned video has to be checked for errors relating to caption
       positioning and timing.

       Crunching

        This is the process of fusing the time codes to the captions. Conflicting time
        codes will cause the captions to arrive faster than the encoder can transmit
        them and, as a result, cause gaps or incorrectly processed words.
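
        A simple sketch of the kind of timing check described above, flagging
        consecutive captions whose time codes overlap or leave too small a gap; the
        frame values and minimum gap are illustrative assumptions:

```python
# Each caption is (in_frame, out_frame, text); frames assume 30 fps time code.
captions = [
    (2175, 2265, "I never expected to see you here."),
    (2260, 2350, "Neither did I."),          # starts before the previous caption ends
    (2352, 2420, "[audience laughing]"),
]

MIN_GAP_FRAMES = 2   # assumed minimum spacing between consecutive captions

def find_timing_conflicts(captions):
    """Return pairs of consecutive captions whose time codes conflict."""
    conflicts = []
    for prev, curr in zip(captions, captions[1:]):
        if curr[0] < prev[1] + MIN_GAP_FRAMES:
            conflicts.append((prev, curr))
    return conflicts

for prev, curr in find_timing_conflicts(captions):
    print(f"Conflict: caption at frame {curr[0]} starts before frame {prev[1] + MIN_GAP_FRAMES}")
```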

8. Approved Copy

       Create a captioned approved copy of what will be encoded (for broadcast).

9. Encoding

        The captioning project is then transferred to a videotape using a caption
        encoder after the results have been approved.
      10. Captioned Master

             The captioner then works closely with an engineer to produce the finished
             captioned videotape. The captioning file is transmitted from the computer
             to an encoder, where the original video, time code and new captions are
             recorded in the desired videotape format.

              Digital masters can be reproduced digitally, using a proprietary process,
              in any digital format.


How does it work?

      The following steps briefly summarize the process of prerecorded
      captioning:

            Reviewing of the recording and production of a program transcript.
            Segmenting of the transcript into individual captions, with correct time
             codes and positioning
            Reviewing of the end result for quality and accuracy.
             Encoding of the result into the final media (DVD, MPEG file, etc.); a
              simple sketch of writing timed caption text follows this list.
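
      For the encoding step, caption or subtitle text is often delivered as a timed
      text file. A minimal sketch that writes captions in the widely used SubRip
      (.srt) layout; the caption data here is invented for the example:

```python
def to_srt_time(seconds: float) -> str:
    """Format seconds as the SubRip timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    s, ms = divmod(ms, 1000)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(captions, path):
    """captions: list of (start_seconds, end_seconds, text) tuples."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(captions, 1):
            f.write(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n\n")

write_srt([(72.5, 75.5, "I never expected to see you here."),
           (76.0, 78.0, "[audience laughing]")], "example.srt")
```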

      Captions can be placed on a video signal in one of two ways:

            Online (live)
            Offline (post-production)

      Online captioning

      This is done as an event occurs. Examples of online captioning are television
      news shows, live seminars, and sports events. Online captions can be done from a
      script, or actually created in real-time. Captions appear just a few seconds behind
      the action to show what is being said.

      A stenographer listens to the broadcast and types the words into a special
      computer program that adds the captions to the television signal. The typists have
      to be skilled at dictation and spelling and they have to be very fast and accurate at
      typing.

      Offline captioning

      This is done after the event occurs, in a studio. Examples of offline captioning
      include television game shows, videotapes of movies, and corporate videotapes
      (e.g., training videos). The text of the captions is created on a computer, and
      synchronized to the video using time codes. Caption writers use scripts and listen
      to a show's soundtrack so they can add words that explain sound effects. On a
      game show, for example, when there is no dialogue but there is laughter, the
      caption will say "Audience laughing."

      The captions are then transferred to the videotape before it is broadcast or
      distributed.


Who uses closed captioning?

      Closed captioning can be extremely helpful in at least three different situations:

             It has been a great boon to hearing-impaired television viewers.
              Deaf people use closed captioning as a way to watch television;
              although they cannot hear what is being said, they can read it to
              keep up with the action on the screen.

             It can also be helpful in noisy environments. For example, a TV in
              a noisy airport terminal can display closed captioning and still be
              usable; turning the television volume up in such areas does nothing
              more than add to the confusion.

             Non-native speakers use captions to learn English or learn to
              read. In video conferences where some participants are non-native
              speakers, captioning also helps them better understand the
              dialogue. In many cases, people can understand material in another
              language more easily by reading it than by hearing it.

             Those who are not deaf but are hard of hearing often find that
              closed captioning is the best option. They may be able to turn the
              volume up loud enough to hear what is going on, but closed
              captioning is usually a better option; at the very least, it can be
              used along with the sound to make the program easier to follow.

      Statistically:

             95 million Americans use captioning.
             28 million Americans are hearing impaired.
             30 million Americans are learning English as a second language.
             27 million Americans are improving their literacy skills.
             10 million Americans are school children learning to read.


Powerful Closed Captioning and Subtitling Tools
There are a number of useful captioning tools. A key tool for captioning
and subtitling is AutoCaption, which provides an elegant set of tools.

The most useful of these tools are:

      Speech Recognition – AutoCaption works with the major systems.
      Proffer – Suggests each caption line or automatically breaks a transcript
       or script into captions. No more picking every single word in each
       caption!
      Rules – You make the rules for what each proffer suggests.
      Reading-Rate – Graphically see how fast the viewer has to read each
       caption.
      Smoother – Automatically adjusts appearance times for an even reading
       rate. Grab the in and out points for off-screen narration and you're done!
      Random-Access – Work on digitized video and eliminate expensive tape
       decks.
      Edit Detection – Captions which respect edits are the hallmark of a
       superior captioner. It's effortless too.
      Speech Sync – Synchronize captions to the moment the image begins to
       speak.
      Ripple – Adjust appearance times by a fixed amount.
      Previewing – See the captions on the video while you're working.
      Foreign Languages – Complete support for UNICODE means you can
       work in all languages – simultaneously!
      Free Translator Support Software – Translate captions without having
       to re-do the timing, unless you want to.
      Spell Checking – You can even build your own customer specific
       dictionary extensions for terms-of-art or proper names.
      Macros – Build a library of macros specific for each client or type of
       captioning.
      Multi-channel Captioning – Caption all closed caption channels or all 32
       DVD subtitle channels.
      Time Compression – Instantly re-time captions when a show's run time is
       electronically compressed or stretched.
      Error Manager – Speed clean-up and fix-up with continuous error
       checking, error explanations, and a handy way to skip from error to error.
      Use Video Files – Schedule captions from video files (MPG, ASF, AVI,
       etc.) or you can even use an ordinary consumer video deck without
       SMPTE time code.
      Free Digitizing Software – Make digital files to caption from without
       tying up AutoCaption!
      Make DVD Caption Assets – Make closed caption and subtitling asset
       files for your DVD authoring system.
      Recover Closed Captions From Video – AutoCaption accepts data
       recovery files.
      Free Approve Caption – Internet caption viewer can be used anywhere to
       preview captions.


Closed caption equipment (software and Hardware)

      There are two stages to offline captioning:

            Editing (Creating the captions)
            Encoding (Placing them on the videotape)

      These stages have different software and hardware requirements.

      Editing

      Editing requires that you have a computer with captioning software. Pick the
      software first, and then buy the appropriate computer to run it, as hardware
      requirements vary from software vendor to software vendor.

      The computer will have to have:

            A video source such as a computer-controlled video tape recorder (VTR)
            A way to tell where it is on the tape (such as a time code reader)
            A way to display the video (such as a full-motion video card in the
             computer, or an extra television monitor)

      Some editing systems require encoders or character generator decoders, and some
      are digital, requiring the videotape only to get the video into the system.

      Encoding

      This stage requires a computer with the caption encoding software, which may or
      may not be the same software you used for editing.

      You will also need:

            Two VTRs (One to play the original master tape, and one to record the
             new captioned submaster)
            A caption encoder will be required to actually place the captions on the
             tape.

      Timecoding for synchronization will be required, and there are a number of ways
      to accomplish this. The best approach is to get the specifications for what you
      need from your caption software vendor.
        Unless you already have a video production facility, it can get rather expensive
        to acquire all the equipment you'll need to produce finished captioned media.

       Working with broadcast quality tape media can get expensive, because you'll
       need:

             A broadcast video source
             A broadcast video recorder
             Alignment and monitoring equipment

       DVD Authoring

       DVD technology can be a bit simpler unless you need to digitize and author the
       master DVD file.

       To author a DVD you need:

             A high resolution digitized media file
             A DVD authoring system (the authoring system takes the video, audio,
              and AutoCaption caption files and puts them all together into a
              master DVD image file)

       The most cost effective route is to let your client do the encoding. Give them a
       DCAP file and all they need is a caption inserter suitable for the type of video
       supplied.


Closed captioning for the deaf

        The National Association of the Deaf advises all producers of video,
        television broadcasts, and film, including commercials and information presented
        on the Internet, to caption their offerings. This section on captioning outlines the
        NAD's views on a wide variety of captioning technologies and advises consumers
        how they may file complaints about missing or poor captioning.

        As the legal requirements on closed captioning take effect, deaf and hard of
        hearing people are urged to be alert for inadequate captioning and, when a
        broadcaster is not meeting its responsibilities, to file complaints with the
        enforcement agencies.


Federal laws pertaining to closed captioning

       There are a number of laws pertaining to captioning:

       Television Decoder Circuitry Act of 1990
      As of July 1993, the Federal Communications Commission (FCC) began
      requiring all analog televisions manufactured in the US with screens 13 inches or
      larger to have the ability to decode closed captioning.

      Digital television receivers were also required to meet this standard beginning in
      July 2002.

      Americans with Disabilities Act (ADA) of 1990

      The ADA requires closed captioning of all public service announcements
      distributed by the federal government.

      Telecommunications Act of 1996

      Congress passed legislation in 1996 requiring all television program operators
      to provide closed captioning for their programs. The FCC has set requirements
      that have increased incrementally over the years to ensure that more programs
      are available with closed captioning.

      The programs of highest importance are those pertaining to emergency
      announcements and news.

      Rehabilitation Act - Section 508 Accessibility

      Section 508 of the Rehabilitation Act as strengthened by the Workforce
      Investment Act of 1998 requires that Federal agencies make their electronic and
      information technology (EIT) accessible to people with disabilities, including
      employees and the general public. The requirements of Section 508 apply to an
      agency's procurement of EIT, as well as to the agency's development,
      maintenance or use of EIT.

      All training and informational video and multimedia productions that support the
      agency's mission, regardless of format, must be open or closed captioned if they
      contain speech or other audio information necessary for the comprehension of the
      content.

      All training and informational video and multimedia productions that support the
      agency's mission, regardless of format, must include an audible description of the
      video content if they contain visual information necessary for the comprehension
      of the content.


Glossary of Closed Captioning Terms and Acronyms

      Acronyms
AA                   Average Audience - used for measuring TV viewership
AARP                 American Association of Retired Persons
ABES                 Association for Broadcast Engineering Standards
ADA                  Americans with Disabilities Act
ADI                  Area of Dominant Influence
AEA                  Actors Equity Association
AEA                  American Electronics Association
AFTRA                American Federation of Television & Radio Artists
AITS                 Association of Independent Television Stations
ALDA                 Association of Late Deafened Adults
ASL                  American Sign Language
Basys                A newsroom computer system
BNC                  Bayonet Connector (for coaxial cable)
Caption 21           Script-based captioning system from Basys
CAPtivator Online    Real-time (live) captioning software from Cheetah Systems
CAPtivator Offline   Post-production captioning software from Cheetah Systems
CATV                 Cable Television (originally Community Antenna TV)
CBC                  Canadian Broadcasting Corporation
CC                   Closed-Captioned
CCE/PC               Closed-Caption Encoder for the PC (card made by
                     SoftTouch)
CE                   Chief Engineer (Chief Station operator)
Cheetah Systems      Manufacturer of captioning software & systems
CHUT                 Cable Households Using Television
Decoder              A device that displays closed captions "hidden" in the VBI
DTV                  Digital Television
EDTV                 Extended Definition Television
EEG                  Manufacturer of professional caption encoders/decoders
Encoder              A device that "hides" captions in the VBI
ENG                  Electronic News Gathering
FCC                  Federal Communications Commission
FTC                  Federal Trade Commission
HBI                  Horizontal Blanking Interval
HDTV                 High Definition Television
HOH                  Hard-of-Hearing
LTC                  Longitudinal Time Code (SMPTE timecode on audio track)
Line 21              Area of VBI containing closed-caption text
Live-display         Hand-timing captions from a script during live
                     programming
MOP                  Minute of Program
MSA                  Metro Survey Area
MSI                  Market Statistics Inc.
MTBF                 Mean Time Between Failure(s)
MTBR                 Mean Time Between Repair(s)
NAB                  National Association of Broadcasters
NewStar              Newsroom computer system from Dynatech
NCI             National Captioning Institute (captioning company in VA)
NSI             Nielsen Station Index
O&O             Owned & Operated (a TV station owned by its network)
PA              Public Affairs
Paint-on        Captions being "drawn" a letter at a time anywhere on
                screen
PBS             Public Broadcasting Service
PD              Program Director - Production Director
Pop-on          Captions appearing all at once anywhere on TV screen
POTS            Plain Old Telephone Service - used to refer to ordinary
                phone lines
PPV             Pay per View
PSA             Public Service Announcement
RCA connector   Connector used for home video and audio equipment
Real-time       Captions produced using a stenocaptioner
RGB             Red-Green-Blue (for video signals on three wires)
RGB/Sync        Red-Green-Blue with synchronization signal (four wire
                video)
Roll-up         Two- to four-line captions scrolling on screen
RTNDA           Radio & Television News Directors Association
SAP             Second Audio Program on TV subcarrier
SBE             Society of Broadcast Engineers
SCTE            Society of Cable Television Engineers
SECAM           Séquentiel Couleur à Mémoire
SMPTE           Society of Motion Picture & Television Engineers
SMSA            Standard Metropolitan Statistical Area
SoftTouch       Manufacturer of plug-in caption encoders for personal
                computers
SVHS            Super VHS - a 1/2" videotape format
TBC             Time Base Corrector (for "cleaning up" video signal)
TDCA            Television Decoder Circuitry Act
TDD             Telecommunications Device for the Deaf (used as a synonym
                for TTY)
TeleCaption     Brand name of NCI's set-top caption decoders
Teletext        Captioning standard used in UK & parts of Europe
Timecode        Unique time identifier "stamped" on each frame of video
TT              Text Telephone (obsolete synonym for TTY - not used)
TTY             Teletypewriter (see also TDD)
U-Matic         A 3/4" videotape format
VBI             Vertical Blanking Interval - the "blank" lines of video
                picture
VCR             Video Cassette Recorder
VITC            Vertical Interval Time Code
VPS             Viewers per Set
VPVH            Viewer per Viewing Household
VTR             Video Tape Recorder
Terms

Captioning
This is the practice of converting the narration, dialogue, music and sound effects
of a video production into text that is displayed on a television screen. The
captions are typically white upper-case letters against a black background.

Prerecorded (Off-line) Captioning
The preparation of captions for recorded programming so that, at the time of air or
tape playback, the captions are a part of the videotape. Appearance of captions is
usually "pop-on" but could also be "roll-up." Captions are typically placed in the
upper or lower third of the television screen.

Pop-on Captions
A phrase or sentence appears on the screen all at once – not line by line – stays
there for a few seconds and then disappears or is replaced by another full caption.
The captions are timed to synchronize with the program and placed on the screen
to help identify the speaker. Pop-on captions are used for prerecorded captioning.

Center Placement Pop-On Captions
Pop-on captions are centered at the bottom third of the TV screen. Their
placement is similar to subtitles although they are displayed as captions in white
letters in a black box. Speaker changes are noted by a dash.

Roll-up Captions
Roll-up captions roll onto and off the screen in a continuous motion. Usually two
to three lines of text appear at one time. As a new line comes along, it appears on
the bottom, pushing the other lines on the screen up. Roll-up captions are used for
all live captioning and can also be used for prerecorded captioning.

Timed Roll-Up Captions
For prerecorded programming, roll-up captions can be timed to be closely
synchronized with the audio.

Live (On-line) Captioning
Captioning that is provided at the time of program origination. "Real-time," "live-
display" and a combination of the two are all methods of on-line captioning.
Appearance of captions is "roll-up."

Real-time Captioning
This is the method of captioning in which captions are simultaneously prepared
and transmitted at the time of origination by specially trained real-time captioners
using a stenotype machine.

Real-time Dictionary
This is a computerized dictionary that is comprised of the phonetics and their
corresponding English that the captioner uses to build words and create
punctuation. Real-time captioners write phonetically what they hear. Similar to
playing chords on a piano, multiple keys are depressed on a steno machine to
create different word combinations. No two captioners write exactly the same
way, so each has a custom dictionary.

Live-display Captions
Live-display captioning is used when an accurate script and/or videotape is
available prior to the time a program is telecast. Captions are prepared in advance
and stored on a computer disk. As the program is telecast, a captioner pushes a
button on the captioning system to display each caption. The roll-up captions
appear line-by-line and are synchronized with the program audio as closely as
possible.

Closed Captions
These are captions that can only appear with the use of a decoder. The decoder
may be either attached to a TV or built into TVs made after July 1993. Closed
captioning allows caption users to enjoy the same broadcast and recorded video
materials that other television viewers enjoy. Closed-caption information is
carried in Line 21 of the vertical blanking interval of the television signal.

Open Captions
These are the captions that are visible without using a set-top decoder or a TV
with a built-in decoder chip. When a video is open captioned, the captions are
permanently part of the picture.

Closed Caption Decoder
This is a small electronic device that decodes the captioning signal and causes
captions to appear on the screen. In the 1980's and early 1990's, closed caption
decoders were the major means by which consumers could watch captioned
television. Since July 1, 1993, all television sets with screens 13 inches or larger
manufactured for sale in the United States must have a built-in decoder chip.

Caption File
This is a computer file that stores a program's caption information including the
text, timing and placement information. The caption file is used in conjunction
with an encoder to create the captioned submaster.

Encoding
The process of inserting the caption data into the television signal on Line 21.

Encoder
A device that electronically inserts the caption data into the TV signal on Line 21.

Line 21
The television signal is comprised of 525 lines. The vertical blanking interval
encompasses Line 1 through 21. The caption information resides on Line 21, and
active video starts on Line 22.

Time-code
An electronic signal embedded in a videotape that discretely identifies each frame
of video.

Master
This is the original, first-generation videotape of the final version of a program.
The master is the source videotape used to create a captioned "submaster."

Submaster
Any duplication created from the master videotape. The captioned videotape is a
submaster of the original.

Automatic Live Encoding (ALE)
When production schedules are tight, this is an alternate means of transmitting or
displaying captions. Automatic live encoding makes use of the same caption
creation techniques used in prerecorded captions, but a different method is used to
trigger the data into Line 21 of the television signal. The captioned data is loaded
into the computer, and the internal clock within the computer is used to trigger the
captions as opposed to using time-code from the program videotape. A manual
trigger is used to start the transmission of data between the computer and the
smart encoder. The display of automatic live encoding is pop-on, the same as used
for prerecorded captions.

Subtitles
Permanent on-screen text that represents the narration and dialogue of a program.
Subtitles are created with a character generator; no decoding capability is
required for viewing them. Subtitles are normally in upper- and lower-case letters
and do not appear on a black background. Also, subtitles are characteristically
placed at the bottom center of the screen.

Reformat
The practice of revising previously captioned programs for rebroadcast,
necessitating the retiming and/or editing of the caption content to synchronize it to
the edited video and audio.

				