     MICROCOMPUTER IMAGE PROCESSING ARCHITECTURE NOW RIVALS

                          CAPABILITIES OF LARGER SYSTEMS



    High performance can be achieved with a standard PC AT by adding special-function

                        boards linked by a separate bus for image I/O.


                                         by Ron Clouthier

                                            Comtal/3M


Image processing imposes exceptional performance requirements on host computers. Until

recently, such applications needed the dedicated support of machines in the minicomputer or

supermicrocomputer class. However, the engineering approach described here has made

possible a new generation of plug-in board modules for standard microcomputers. Initially

designed around the ISA-bus PC, this architecture represents a price/performance breakthrough.

   Impressive performance and functionality can be achieved by modularizing graphic functions

in separate boards on the ISA bus and by providing a separate 32-bit channel for pixel I/O. The

result is a highly flexible range of solutions with off-the-shelf components.

   With these tools, microcomputers can be used to process remotely sensed satellite data,

radiological imagery, video scanner or camera outputs--in short, many of the image processing

applications that formerly required relatively expensive, standalone equipment.



IMAGE PROCESSING REQUIREMENTS
In image processing, an externally generated electronic image is viewed and manipulated. Such

applications differ from computer graphics, in which images are generated by the system. A key

objective in image processing is to permit the viewer to extract information by studying picture
detail. Computer-assisted transformations may be applied, upon command from the
viewer/operator, to isolate and enhance areas of interest within the picture. (Examples are given

in "Image Processing Functions," which accompanies this article.)

   On input from a scanner or video camera, the analog image must be digitized. This frame

grab process involves the sampling of rasters, or scan lines, at a frequency that is some multiple

of the video frame rate. For each frame of the image that is digitized, or captured, the resulting

database is an array of pixels, or picture elements (Figure 1). As shown in the figure, a separate

overlay plane may be generated by the system and used to hold program menus as well as

diagrams or labels that serve as call-outs for the picture.
   Within the system, each pixel is a numeric value that represents the intensity, or grayscale

value, of the image at a given point. For full-color images, three separate values must be stored

for each point. These values correspond to the respective intensities of red (R), green (G), and

blue (B) components. These RGB values, in turn, may be used to drive the color guns on

computer monitors or the color settings on other output devices (Figure 2).
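
   To make the pixel representation concrete, a full-color pixel might be described in C as
shown below. The struct layout is a generic sketch for illustration, not the display
hardware's internal memory format:

    #include <stdint.h>

    /* One full-color pixel: 8 bits of intensity per component.
       A monochrome pixel would instead store a single 8-bit grayscale value.
       Illustrative layout only, not the board's actual format. */
    typedef struct {
        uint8_t r;   /* red intensity, 0-255   */
        uint8_t g;   /* green intensity, 0-255 */
        uint8_t b;   /* blue intensity, 0-255  */
    } RgbPixel;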

   The digital sampling rate applied to the incoming image determines the fineness of detail that

can be captured. The extent to which the database of pixels describes the actual image is called

the resolution of the image. Aspects of resolution include spatial resolution and grayscale or

color resolution.

   Spatial resolution describes the number of pixels into which the picture has been subdivided.

This parameter is usually expressed as H x V pixels, or the number of pixels along the horizontal

axis of the picture multiplied by the number of pixels along the vertical axis.

   Grayscale or color resolution represents the precision with which pixel intensities are

described in the database. This parameter is a function of the bit depth of the image, or the

number of bits stored for each pixel. Again, a full color rendering of the image will require three

sets of values (R, G, B), whereas a monochrome image will contain only one set of grayscale

values. Bit depth usually is expressed as a third parameter in addition to spatial resolution: H x
V x B, where B is the number of bits that may be stored in memory for each pixel.
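
   The H x V x B notation translates directly into storage requirements. The short C sketch
below computes database size for two generic examples (the specific figures are illustrations,
not the specifications of any particular board):

    #include <stdio.h>

    /* Bytes required for an image of h x v pixels at b bits per pixel. */
    static unsigned long image_bytes(unsigned long h, unsigned long v, unsigned long b)
    {
        return (h * v * b) / 8;  /* total bits divided by 8 bits per byte */
    }

    int main(void)
    {
        printf("512 x 512 x 8  = %lu bytes\n", image_bytes(512, 512, 8));   /* 262,144 */
        printf("512 x 512 x 24 = %lu bytes\n", image_bytes(512, 512, 24));  /* 786,432 */
        return 0;
    }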

   When dealing with resolution, care must be taken to distinguish between the image that is

actually displayed and its data representation within the computer. The greatest resolution that

can be rendered visually by the system and its display device is called displayable resolution.

This parameter is a subset of the actual data representation of the picture, which is called virtual,

or global, resolution (Figure 3). (In practice, this larger database typically is built up by

combining multiple picture scans, as shown in the figure.) Some image processing functions

(PAN, SCROLL, ROAM) permit the viewer to move the frame of reference around within the

global image database. The displayed image, then, can be regarded as an aperture through which
the global database is viewed.
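
   The aperture model is easy to sketch in C: roaming simply moves the origin of the display
window within the global database, clamped so the window never leaves the image. The function
below is a simplified illustration of the idea, not the actual ROAM implementation:

    /* Global (virtual) resolution and displayable resolution, as in Figure 3. */
    #define GLOBAL_W 1024
    #define GLOBAL_H 1024
    #define VIEW_W    512
    #define VIEW_H    512

    /* Move the viewport origin by (dx, dy), keeping it inside the global image. */
    static void roam(int *x0, int *y0, int dx, int dy)
    {
        *x0 += dx;
        *y0 += dy;
        if (*x0 < 0) *x0 = 0;
        if (*y0 < 0) *y0 = 0;
        if (*x0 > GLOBAL_W - VIEW_W) *x0 = GLOBAL_W - VIEW_W;
        if (*y0 > GLOBAL_H - VIEW_H) *y0 = GLOBAL_H - VIEW_H;
    }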

   Ideally, the throughput of frame-grab hardware should be sufficient to capture a single-frame

video image in real time. That is, the capture time should not exceed the time it takes to transmit

the frame from the video source.

   The system described in this article operates at a frame rate of 30 Hz, the same as broadcast

(NTSC) video. However, in NTSC video, each frame is subdivided into two fields containing

alternate scan lines. Fields are transmitted at 60 Hz, and the two fields are interlaced to produce

a single frame on the display device. For image processing applications, it is preferable not to

use this interlaced scheme. The system described here has a display rate of 60 Hz, noninterlaced,

which eliminates the "flicker" of conventional TV and generates a full-resolution frame on each

scanning pass. This approach yields a higher-quality image for interpretation while retaining

compatibility with conventional video sources.
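
   A quick calculation shows why frame-rate capture is demanding. A 512 x 512 image contains
262,144 pixels; digitizing it within one frame time (1/30 second) therefore requires sustained
throughput of roughly 7.9 million pixels per second, before any processing is applied.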



PC ARCHITECTURE
The image processing architecture discussed below is the basis for Comtal's VisionLab II series

of add-on graphics boards for the ISA-bus PC. A functional diagram of the overall system

architecture is shown in Figure 4.
   As shown in Figure 5, a typical board set includes a display board, a frame processor, and an
array processor. The concept is modular. That is, the boards are designed to function separately

or together so that configurations may be tailored to meet specific requirements.

   The flexibility and power of this architecture are rooted in its unique bus design. The PC ISA

bus is used for interprocess communication (IPC) so that board functions are controlled and

coordinated by the PC's microprocessor host. However, a separate, auxiliary bus is used for

high-speed direct memory access (DMA) operations among the graphics boards. These channels

are used for pixel input/output (I/O) using 32-bit data words. Over this high-speed bus, the

system can perform image-transfer operations at rates that rival many minicomputer systems,
while retaining compatibility with the ISA bus for control operations and integration with

application software.

   Compatibility with the host system is further assured by placing all image storage locations

within the address space of the PC's microprocessor.



DISPLAY PROCESSOR BOARD (DPI)
The display processor board, or DPI, is the basic module of the image processing system. This

board stores the pixel database and provides co-processing capabilities that permit the operator to

manipulate the display.

   An important function of the display processor is to control the region of interest--a viewport,

or window, through which a selected area of the displayed image may be seen and transformed

independently of the surrounding area. This function is also called local-area processing. In

most image processing systems, the region of interest must be rectilinear. However, the DPI

implementation can show a viewport of any arbitrary shape.

   Other functions of the display processor include the ability to zoom and roam, as well as to

magnify or minify (reduce) the image. The board also performs intensity transformations, which

vary the red, green, blue, or grayscale values within the display. These point transformations

may be done in real time, that is, at frame rates.

   Image transformations are not performed directly on an array of pixel values. Rather,

operations are performed on separately maintained lookup tables (LUTs) that correlate pixel

addresses and parameter values (Figures 6 and 7). This approach increases processing and

storage efficiency, while permitting multiple LUTs to be maintained concurrently. Separate

LUTs may be maintained for values of red, green, blue, X position, Y position, and intensity.

   Maintaining a separate lookup table for each display parameter greatly increases the

flexibility, efficiency, and speed of image processing transformations. For example, parameters

for X and Y position may represent a translation of the pixel from its initial position.
   The graph on the left in Figure 7 plots positional values that might be held in X or Y lookup

tables. The diagram on the right shows the subset of pixel locations that is represented in the

LUT. Changes in the slope or offset of the graph line correspond with different types of

transformation.
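
   A minimal C sketch of this idea appears below: a 256-entry intensity LUT is filled from a
slope and offset, then applied to the pixel array. The table size and linear form are generic
assumptions for illustration; the DPI maintains its tables in hardware.

    #include <stddef.h>
    #include <stdint.h>

    /* Build a 256-entry intensity LUT from a slope and offset, clamped to 0-255. */
    static void build_lut(uint8_t lut[256], double slope, double offset)
    {
        for (int i = 0; i < 256; i++) {
            double v = slope * i + offset;
            if (v < 0.0)   v = 0.0;
            if (v > 255.0) v = 255.0;
            lut[i] = (uint8_t)v;
        }
    }

    /* Apply the LUT: each pixel becomes a single table lookup rather than an
       arithmetic operation, which is what makes the approach fast. */
    static void apply_lut(uint8_t *pixels, size_t n, const uint8_t lut[256])
    {
        for (size_t i = 0; i < n; i++)
            pixels[i] = lut[pixels[i]];
    }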

   Color components of the image may be modified by updating the corresponding LUT, or the

same transformation may be applied to all the tables. The intensity table is a composite of the R,

G, and B tables and corresponds with the grayscale representation of the image. Thus,

modifying the intensity table changes the overall color saturation of the displayed picture.

   Uses of intensity transformation include pseudocoloring, which is used to highlight and

enhance all pixels of a given value. This technique can be used to detect edges and boundaries,

as in highlighting arid geographical regions within a LANDSAT photo.
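
   In LUT terms, a pseudocolor highlight reduces to three small table updates. The sketch below
uses hypothetical values: the chosen grayscale level is mapped to saturated red while all other
levels pass through unchanged.

    #include <stdint.h>

    /* Fill the R, G, and B LUTs so pixels of value `target` display as pure red
       while every other value remains gray. Illustrative values only. */
    static void pseudocolor_highlight(uint8_t r[256], uint8_t g[256],
                                      uint8_t b[256], uint8_t target)
    {
        for (int i = 0; i < 256; i++)
            r[i] = g[i] = b[i] = (uint8_t)i;  /* identity: grayscale passthrough */

        r[target] = 255;  /* highlight the target value in red */
        g[target] = 0;
        b[target] = 0;
    }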

   The basic DPI board provides a displayable/virtual image resolution of 512 x 512 x 8, which

is refreshed at 60 Hz, noninterlaced. An additional plane of 640 x 512 x 1 is also displayable.

This plane is separate from the image database and may be used to hold overlays containing

program menus, icons, or text annotations to the display.

   Depending upon application requirements, the DPI may be used in one of three modes, which

represent different ways of allocating image memory:

   True-color mode

   12-bit mode

   Global mode.



True-color mode In this mode, three images are used to provide 8 bits of intensity resolution

for each color (R, G, B). A fourth image (4 bits) is used for graphics and to control local area, or

region of interest, processing.



12-bit mode For applications requiring monochrome or pseudocolor displays, two images may

be used to provide 12-bit resolution.



Global mode A virtual database of 1024 x 1024 x 8 can be built by combining four images of

512 x 512 x 8, as shown in Figure 3.



In the basic DPI configuration, bit depth of LUTs may be allocated between 8 bits of image

color/intensity data and 4 bits of graphics or as 12 bits of image data. To achieve the increased

spatial resolution in global mode, an optional DPC board must be added. This board expands the

image memory of the DPI. With the addition of the DPC, bit depth may be increased to 32 bits.

In this configuration, LUTs may store 24 bits of image and 8 bits of graphics, or all 32 bits may

be allocated to image data for full-color representation.
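
   The 24-bits-of-image, 8-bits-of-graphics allocation can be pictured as fields within a 32-bit
pixel word. The packing below is a hypothetical layout chosen for illustration; the actual
DPI/DPC memory format is not documented in this article.

    #include <stdint.h>

    /* Hypothetical 32-bit pixel word: bits 0-7 blue, 8-15 green, 16-23 red,
       and 24-31 graphics/overlay. */
    static uint32_t pack_pixel(uint8_t r, uint8_t g, uint8_t b, uint8_t gfx)
    {
        return ((uint32_t)gfx << 24) | ((uint32_t)r << 16) |
               ((uint32_t)g  <<  8) |  (uint32_t)b;
    }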



FRAME PROCESSOR BOARD (FPI)
To the input side of the display processor may be added a frame processor board, or FPI. This

board performs digital image capture, or frame grab, on incoming signals from an analog video

source. (The display processor can be used by itself within systems that receive image inputs
already in digital format.)

   The FPI can accept inputs from a wide range of sources. A common input format is RGB

video, which provides separate 1-volt peak-to-peak signals for each of the red, green, and blue

components. Synchronization pulses may be provided in a fourth channel or may be

superimposed on the green signal. A relevant spec is EIA RS-170, which is used as a broadcast

studio standard and also as a reference for the output of computer graphics systems. Composite

video, either NTSC (the U.S. standard) or PAL (the European standard), also can be accepted.

   The FPI board will genlock to any source. That is, it will set its internal video

synchronization clock (time base) to coincide with that of the source. This feature is essential for
achieving a stable video image and also makes it possible for the system to be connected to

nonstandard video sources. For example, the output of a 1000-line noninterlaced video scanner

could be accepted readily.

   As stated previously, the system should have sufficient throughput to digitize the incoming

signal at frame rates. The FPI can capture a monochrome image in one frame time (1/30

second). The basic board can digitize a color image in three frame times (one pass each to

sample red, green, and blue). Or, the board can be upgraded with additional processing

capability to grab a color image in just one frame time. Throughput requirements, of course, will

depend upon the application. Capturing motion in color in real time normally would mean

digitizing the image within the duration of one frame.

   The FPI also performs intensity pre-processing, or gamma correction. That is, computation of

the intensity, or grayscale, LUT that is normally done by the display processor board is offloaded

to the FPI, and corrections may be applied to compensate for response characteristics of video

sensors. This removes processing overhead from the display processor, increasing its

performance in calculating transformations.
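
   Gamma correction itself is a simple per-value mapping and therefore fits naturally into a
lookup table. A generic sketch follows; the FPI's actual correction curve depends on the video
sensor, and the gamma value of 2.2 is only a common example.

    #include <math.h>
    #include <stdint.h>

    /* Build a 256-entry gamma-correction LUT: out = 255 * (in/255)^(1/gamma). */
    static void build_gamma_lut(uint8_t lut[256], double gamma)
    {
        for (int i = 0; i < 256; i++)
            lut[i] = (uint8_t)(255.0 * pow(i / 255.0, 1.0 / gamma) + 0.5);
    }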



ARRAY PROCESSOR BOARD (API-16)
The array processor board, or API-16, is an optional module that can greatly increase system
throughput on image processing operations. The board incorporates a parallel pipeline

processing architecture and high-speed cache memory.

   The array processor is designed specifically to support the types of operations commonly

encountered in image processing applications, including spatial operations, ensemble functions

(convolution and rotation), and filtering. Instruction sequences for these functions reside

onboard in firmware.

   A basic algorithm used in most image processing applications is convolution, which is

performed on selected matrices of pixels. The convolution process produces an overall average
of grayscale within a region of 3 x 3, 5 x 5, or 7 x 7 pixels. In effect, convolution is a filtering,

or damping, of noise or unwanted information in a picture. Optionally, the microprocessor host

may be used to convolve regions of 3 x 3 pixels. Alternatively, this function may be handled

entirely by the array processor board, for any size matrix.
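
   For reference, a 3 x 3 averaging convolution of the kind the host could perform is sketched
below. Border pixels are simply copied for brevity; this is a plain software illustration, not
the API-16's onboard microcode.

    #include <stdint.h>

    /* 3 x 3 box average over a w x h grayscale image.
       `src` and `dst` must be distinct buffers; the one-pixel border is copied. */
    static void convolve3x3_avg(const uint8_t *src, uint8_t *dst, int w, int h)
    {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                    dst[y * w + x] = src[y * w + x];  /* copy border pixel */
                    continue;
                }
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += src[(y + dy) * w + (x + dx)];
                dst[y * w + x] = (uint8_t)(sum / 9);  /* average of the 3 x 3 region */
            }
        }
    }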

   The design of the API includes a writable control store (a set of reserved firmware locations)

that provides for custom programming and future expansion of functions. For compatibility with

applications and ease of programming, commands use IBM control codes.

   The function of the array processor is to increase the overall performance of the system by

offloading the CPU of the PC, especially for complex transformations. With the cache-memory

design and the auxiliary pixel bus, images can be transferred from the API to the DPI at 36

Mpixel/second.
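
   At that rate, a full 512 x 512 image (262,144 pixels) moves between boards in roughly 7.3
milliseconds, well under one 30-Hz frame time.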



Included in the overall system design is an expansion chassis so that configurations are not

limited to the available slots on the ISA motherboard. The chassis interfaces to any PC-

compatible port. One use of the expansion chassis, for example, would be to permit a

configuration of multiple array processor boards.



INTERFACES AND SOFTWARE
Integration of the image processing subsystem with application software is done through a

software interface, the executive control package (ECP). From the viewpoint of the user or

application programmer, image processing functions are available as a set of routines that are

callable from application programs written in C. The ECP is written in machine code under the

MS-DOS operating system and uses the command structure of the ISA bus. Thus, the ECP is a

software kernel, or command-level interface, that removes the application programmer from

concerns at the hardware level.
   Software modules within the ECP also coordinate functions of the graphics board set and

transfers of images over the auxiliary pixel bus.

   The objective of the ECP interface is to provide an easily extensible set of image processing

capabilities to application programs. The ECP handles all functions of the board set so that

application program code can remain relatively machine-independent. Compatibility with the C

language should help to ease integration of user-developed or third-party software with the

system. Compatibility with C also holds the potential for portability of source code among

machines and among operating systems.
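
   As an illustration of how an application might drive the board set through such a kernel,
the fragment below uses hypothetical routine names (ecp_init, ecp_grab_frame, and ecp_set_lut
are inventions for this sketch; the article does not list the actual ECP entry points):

    /* Hypothetical usage sketch; these names illustrate the calling style only. */
    extern int ecp_init(void);               /* initialize the board set   */
    extern int ecp_grab_frame(int channel);  /* frame-grab through the FPI */
    extern int ecp_set_lut(int table, const unsigned char lut[256]); /* load a LUT */

    int process_one_frame(const unsigned char gamma_lut[256])
    {
        if (ecp_init() != 0)
            return -1;                 /* boards absent or not responding */
        if (ecp_grab_frame(0) != 0)    /* capture one frame from input 0  */
            return -1;
        return ecp_set_lut(0, gamma_lut);  /* apply an intensity transformation */
    }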



INTEGRATION: MATCHING MODULES TO THE APPLICATION
Image processing applications are particularly demanding of processing overhead and storage

capacity. Relatively large databases (images) must be transformed at speeds sufficient to keep

pace with video frame rates. Considering these requirements, it is hardly surprising that systems

have tended to be large and costly--usually beyond the resources of individual users.

   Microtechnology has changed cost/performance ratios dramatically. The industry continues

to advance, and there is no end in sight. As the architecture described above demonstrates,

exceptional performance in image processing applications now can be achieved with relatively
inexpensive systems. At last, it is feasible to configure an image processing system that is
tailored to the needs of a single user or application.

   These changes also affect the economics of application development. At the cost level of a

microcomputer, custom development and programming often cannot be justified. If the software

project becomes too costly or time-consuming, the economies of using a microcomputer

disappear. Clearly, software development and integration costs must be brought into balance

with this new generation of inexpensive hardware.

   To achieve this balance, users must be able to select and implement image processing

applications from solutions that are available essentially off-the-shelf. An important objective of
a microcomputer-based subsystem, therefore, should be to provide a set of hardware and

software modules that permit flexibility of configuration to suit requirements. Further,

integration must be possible with a minimum of technical knowledge and involvement on the

part of the application developer or user.

   The combination of the display processor, frame processor, and array processor, as integrated

by the executive control package, should achieve this objective for microcomputer-based image

processing systems. Incorporating the expansion chassis in the design further assures that users

will not be constrained by the physical limitations of specific machines. As with hardware

elements, compatibility of functions with MS-DOS and C yields a software environment that is

both general and highly flexible.

   It also should not be necessary to constrain or to scale down applications to fit the

microcomputer environment. With the architecture described here, it becomes feasible for a PC

to be configured for a single user running a single application. It is further possible to interface

such PC-based image processing workstations to a central host, as has been done with larger

Comtal systems built around timesharing hosts. Hosts range in size from the DEC MicroVAX II

to the larger VAX machines. These hosts support multiple users running multiple applications

under operating systems such as Unix or VMS. Portability of image processing applications to
such environments is assured through use of C-callable subroutines and common image-
processing command structures.

   The feasibility of image processing at the level of individual, microcomputer-based systems is

a significant breakthrough. Much of the functionality of larger systems can now be achieved at

this level at a much lower system cost, one that may be justifiable for individual users or

applications. A modular architecture, combined with general and flexible interfaces, assures that

solutions can be implemented rapidly, with a minimum of technical involvement, so that the

inherent economies of microtechnology actually are realized.



                                               ###