     Wireless Networking and Communications Group at The University of Texas at Austin


                       Embedded Signal Processing Laboratory
                                      Prof. Brian L. Evans
                                     bevans@ece.utexas.edu
                              http://users.ece.utexas.edu/~bevans

1.0 Introduction

My research group develops theory and implementations for discrete-time signal processing in
baseband communications and image display. When developing algorithms, we derive theory and
incorporate implementation constraints, keeping in mind that our algorithms will ultimately be
implemented with fixed-point data types and arithmetic on targets constrained in memory size
and memory input/output rates. We gather our algorithms, along with other leading algorithms,
in freely distributable toolboxes in MATLAB that we release on the Internet. We also implement
our algorithms in software and hardware in real-time testbeds with appropriate analog front ends.
We evaluate tradeoffs in application performance vs. implementation complexity, first at a
coarse level using desktop simulation, and then at a fine level using embedded targets. Targets
include digital signal processors, x86 processors and field programmable gate arrays (FPGAs).
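
As a minimal illustration of the fixed-point constraint, the sketch below applies a short FIR
filter entirely in Q15 arithmetic, with the rounding and saturation a fixed-point target would
perform. The Q15 format, filter taps, and helper names are illustrative assumptions, not code
from our toolboxes.

    // Minimal sketch: a 3-tap FIR filter computed entirely in Q15 fixed-point
    // arithmetic, with rounding and saturation as on a fixed-point target.
    // Illustrative only; format and names are assumptions, not toolbox code.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>

    // Convert a value in [-1, 1) to Q15 (1 sign bit, 15 fractional bits).
    int16_t to_q15(double x) {
        double scaled = std::max(-32768.0, std::min(32767.0, x * 32768.0));
        return static_cast<int16_t>(scaled);            // saturate, then cast
    }

    // Q15 multiply: the 30-fraction-bit product is rounded back to 15 bits.
    int16_t q15_mul(int16_t a, int16_t b) {
        int32_t prod = static_cast<int32_t>(a) * static_cast<int32_t>(b);
        int32_t rounded = (prod + (1 << 14)) >> 15;     // round to nearest
        rounded = std::max(-32768, std::min(32767, static_cast<int>(rounded)));
        return static_cast<int16_t>(rounded);
    }

    int main() {
        const int16_t h[3] = { to_q15(0.25), to_q15(0.5), to_q15(0.25) };
        const int16_t x[5] = { to_q15(0.1), to_q15(0.4), to_q15(-0.3),
                               to_q15(0.7), to_q15(0.0) };
        for (int n = 2; n < 5; ++n) {
            int32_t acc = 0;                 // wider accumulator, as on a DSP
            for (int k = 0; k < 3; ++k)
                acc += q15_mul(h[k], x[n - k]);
            int32_t y = std::max(-32768, std::min(32767, static_cast<int>(acc)));
            std::printf("y[%d] = %f\n", n, y / 32768.0);
        }
        return 0;
    }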

My research group also develops system-level electronic design automation methods and tools
for multicore embedded systems. A fundamental problem in multicore systems is the conflict
between concurrency and predictability. To resolve this conflict, we abstract the representation of
software by using formal models of computation. We use the Synchronous Dataflow model and
extend the Process Network model. Both models guarantee deadlock-free execution that gives the
same results whether the program runs sequentially, across multiple cores, or across
multiple processors. Both models are well suited for streaming discrete-time signal processing
algorithms for baseband communications as well as speech, audio, image and video applications.
We have released a distributed scalable framework in C++ for our Process Network model.

2.0 Discrete-Time Signal Processing for Baseband Communications

Mitigating interference. We have been researching the modeling and mitigation of co-channel
interference, adjacent channel interference and computation platform noise to improve
communication performance. These sources of radio frequency interference (RFI) are increasing
over time due to increases in frequency reuse, subscribers, and computer performance. Our
approach uses the same three statistical models to characterize RFI from a wide variety of
sources. We have validated the models analytically and empirically. Each statistical model has
two parameters. Based on estimates of the parameter values, we mitigate RFI in the physical
layer to reduce bit error rate by a factor of 10-100. We hope to achieve a 10-100 times increase
in network throughput by using our RFI models for medium access control. We are in the
process of implementing our RFI modeling and mitigation algorithms on FPGAs in an RFI
testbed. Here is more information about the project:
Web site: http://users.ece.utexas.edu/~bevans/projects/rfi/index.html
Slides: http://users.ece.utexas.edu/~bevans/projects/rfi/talks/April2010RFIMitigationTalk.html
Software: http://users.ece.utexas.edu/~bevans/projects/rfi/software/index.html
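
As context for how estimated interference statistics can be used in a receiver, here is a minimal
sketch of a standard nonlinear pre-processor that clips or blanks received samples whose
magnitudes are improbable under a fitted impulsive-noise model. It is a generic illustration, not
the group's mitigation algorithms, and the threshold rule is an assumption.

    // Generic impulsive-RFI pre-processor: clip or blank received samples whose
    // magnitudes are unlikely under the fitted noise model, before demodulation.
    // Illustration only; thresholds would come from the two estimated model
    // parameters (e.g. a multiple of the estimated noise scale).
    #include <complex>
    #include <vector>
    #include <cmath>

    std::vector<std::complex<double>>
    suppress_impulses(const std::vector<std::complex<double>>& rx,
                      double clip, double blank) {
        std::vector<std::complex<double>> out;
        out.reserve(rx.size());
        for (const auto& s : rx) {
            double mag = std::abs(s);
            if (mag > blank)
                out.push_back({0.0, 0.0});        // blank: likely an RFI burst
            else if (mag > clip)
                out.push_back(s * (clip / mag));  // clip: limit the outlier
            else
                out.push_back(s);                 // pass: consistent with model
        }
        return out;
    }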


Multicarrier equalization. Orthogonal Frequency Division Multiplexing (OFDM) forms each
symbol via an inverse fast Fourier transform (FFT). The symbol is periodically extended by
copying its last few samples to the front as a cyclic prefix. The receiver often applies a channel
shortening filter to reduce the effective channel impulse response to be no longer than the cyclic
prefix, which allows equalization to be performed in the FFT domain to reduce complexity. In
ADSL, a channel shortening filter can increase the bit rate by 16x over not using one, at the same
bit error rate. For ADSL, we developed the first channel shortening training method that
maximizes a measure of bit rate and is realizable in real-time fixed-point software. Our algorithm
doubled the bit rate over the best training method at the time and required only a change of
software in existing receivers. We also developed a dual-path channel shortening structure,
which increased bit rate by another 20%. Here is more information about the project:
Web site: http://users.ece.utexas.edu/~bevans/projects/adsl/index.html
Slides:   http://users.ece.utexas.edu/~bevans/projects/adsl/equalization.ppt
Software: http://users.ece.utexas.edu/~bevans/projects/adsl/software.html
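
To make the cyclic prefix and frequency-domain equalization concrete, here is a minimal sketch
(not the project's code) of prefix insertion at the transmitter and one-tap per-subcarrier
equalization at the receiver. It assumes the shortened channel already fits within the prefix, and
it uses a naive DFT in place of an FFT for brevity.

    // Sketch of OFDM/DMT cyclic-prefix handling and one-tap frequency-domain
    // equalization. Assumes the shortened channel impulse response is no longer
    // than the cyclic prefix, which is what channel shortening is designed to do.
    #include <complex>
    #include <cstddef>
    #include <vector>
    #include <cmath>

    using cvec = std::vector<std::complex<double>>;

    // Naive DFT/IDFT, adequate for illustration (a real system uses an FFT).
    cvec dft(const cvec& x, bool inverse) {
        const std::size_t N = x.size();
        const double PI = std::acos(-1.0);
        const double sign = inverse ? 1.0 : -1.0;
        cvec X(N);
        for (std::size_t k = 0; k < N; ++k) {
            std::complex<double> acc(0.0, 0.0);
            for (std::size_t n = 0; n < N; ++n)
                acc += x[n] * std::polar(1.0, sign * 2.0 * PI * double(k * n) / double(N));
            X[k] = inverse ? acc / double(N) : acc;
        }
        return X;
    }

    // Transmitter: IDFT of the subcarrier symbols, then copy the last cp
    // time-domain samples to the front of the symbol (the cyclic prefix).
    cvec add_cyclic_prefix(const cvec& subcarriers, std::size_t cp) {
        cvec t = dft(subcarriers, true);
        cvec out(t.end() - cp, t.end());
        out.insert(out.end(), t.begin(), t.end());
        return out;
    }

    // Receiver: drop the prefix, return to the DFT domain, and divide each tone
    // by the channel frequency response (one-tap equalization per subcarrier).
    cvec equalize(const cvec& rx_symbol, std::size_t cp, const cvec& channel_freq) {
        cvec t(rx_symbol.begin() + cp, rx_symbol.end());
        cvec R = dft(t, false);
        for (std::size_t k = 0; k < R.size(); ++k)
            R[k] /= channel_freq[k];
        return R;
    }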

Multi-channel multicarrier communications testbed. We designed and implemented a testbed
to empower designers to evaluate and visualize tradeoffs in communication performance vs.
implementation complexity at the system level. The testbed uses a type of OFDM known as
discrete multitone (DMT) modulation as found in ADSL systems, and has two transmitters and
two receivers. The 2x2 DMT testbed can execute in real time using National Instruments
embedded hardware over physical cables, or on the PC using cable models. Baseband processing
for the physical and medium access control layers is in C++ and runs on an embedded x86 dual-
core processor. The baseband code contains multiple algorithms for each of the following
structures: peak-to-average power ratio reduction, echo cancellation, equalization, bit allocation,
channel shortening, channel tracking and crosstalk cancellation. Crosstalk cancellation gives
90% of the gain in bit rate. The sponsor is retargeting the C++ code onto an embedded processor
for their commercial system. Here is more information about the project:
Web site:    http://users.ece.utexas.edu/~bevans/projects/adsl/index.html
Slides:      http://users.ece.utexas.edu/~bevans/papers/2007/dmtTestbed/dmtTestbed.ppt
Software:    http://users.ece.utexas.edu/~bevans/projects/adsl/simulator/index.html
             (The most recent software versions are proprietary to the sponsor at present.)

Multiuser resource allocation. For Long Term Evolution (LTE) cellular and WiMAX base
stations, we developed the first algorithm to allocate subcarrier frequencies and power to
multiple users that optimizes bit rates, has linear complexity, and is realizable in fixed-point
hardware/software. These base stations transmit to all users simultaneously by assigning a
distinct subset of subcarrier
frequencies for each user. The subsets are not necessarily contiguous. Optimal allocation of user
subcarrier frequencies and power requires mixed-integer programming, which is computationally
intractable for common scenarios (e.g. 1536 carrier frequencies and 30 users). Our algorithms are
available for continuous and discrete rates, and apply to perfect or partial knowledge of channel
state. Prior to our breakthrough, engineers relied on heuristics with quadratic complexity for sub-
optimal resource allocation. Here is more information about the project:
Web site: http://users.ece.utexas.edu/~bevans/projects/ofdm/index.html
Slides:   http://users.ece.utexas.edu/~bevans/projects/ofdm/talks/OFDMAResAllocAbs.html
Software: http://users.ece.utexas.edu/~bevans/projects/ofdm/software.html
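
For contrast with the optimal mixed-integer formulation, the toy sketch below shows the flavor of
a linear-complexity heuristic: assign each subcarrier to the user with the largest channel gain on
it. It is only a generic illustration, not the project's algorithms, which optimize rates under
fairness and fixed-point constraints.

    // Toy linear-complexity subcarrier assignment for a multiuser OFDMA
    // downlink: give each subcarrier to the user with the largest channel gain.
    // Generic illustration only; the project's algorithms (see links above)
    // are different and optimize rates under fairness and hardware constraints.
    #include <vector>
    #include <cstddef>

    // gains[u][k] = channel power gain of user u on subcarrier k.
    // Returns assignment[k] = index of the user that owns subcarrier k.
    std::vector<std::size_t>
    assign_subcarriers(const std::vector<std::vector<double>>& gains) {
        const std::size_t num_users = gains.size();
        const std::size_t num_subcarriers = gains.empty() ? 0 : gains[0].size();
        std::vector<std::size_t> assignment(num_subcarriers, 0);
        for (std::size_t k = 0; k < num_subcarriers; ++k) {
            std::size_t best = 0;
            for (std::size_t u = 1; u < num_users; ++u)
                if (gains[u][k] > gains[best][k])
                    best = u;
            assignment[k] = best;    // O(users x subcarriers) work overall
        }
        return assignment;
    }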


3.0 Discrete-Time Signal Processing for Image Display

Image halftoning algorithms reduce the intensity and color resolution of an image to match the
display. Examples include rendering a 24-bit color image on a 12-bit color display, or an 8-bit
grayscale image on a binary device such as a reflective screen. One way to achieve the illusion of
higher resolution is to push the quantization error at each spatial location and for each color
channel into high frequencies where the human visual system is less sensitive. One such method,
error diffusion, filters the quantization error at a pixel and feeds the result to unquantized pixels.
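
As a concrete illustration of this mechanism, here is a minimal grayscale error-diffusion sketch
using the classic Floyd-Steinberg weights. It shows only the basic technique, not the color and
video methods described below.

    // Grayscale error diffusion with the classic Floyd-Steinberg weights:
    // quantize each pixel to black or white, then distribute the quantization
    // error to neighboring pixels that have not yet been quantized.
    #include <vector>
    #include <cstddef>

    // image: row-major grayscale pixels in [0, 255]; returns a binary halftone.
    std::vector<std::vector<double>>
    floyd_steinberg(std::vector<std::vector<double>> image) {
        const std::size_t rows = image.size();
        const std::size_t cols = rows ? image[0].size() : 0;
        for (std::size_t y = 0; y < rows; ++y) {
            for (std::size_t x = 0; x < cols; ++x) {
                const double old_val = image[y][x];
                const double new_val = (old_val < 128.0) ? 0.0 : 255.0;  // quantize
                const double err = old_val - new_val;                    // error
                image[y][x] = new_val;
                // Feed the filtered error forward to unquantized neighbors.
                if (x + 1 < cols)                 image[y][x + 1]     += err * 7.0 / 16.0;
                if (y + 1 < rows && x > 0)        image[y + 1][x - 1] += err * 3.0 / 16.0;
                if (y + 1 < rows)                 image[y + 1][x]     += err * 5.0 / 16.0;
                if (y + 1 < rows && x + 1 < cols) image[y + 1][x + 1] += err * 1.0 / 16.0;
            }
        }
        return image;
    }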

Image halftoning. For color halftoning by error diffusion, we have developed a unified
theoretical framework, methods to compensate for the image distortion it induces, and methods for
halftone quality assessment. The framework linearizes color error diffusion by replacing the
color quantizer with a matrix gain plus an additive uncorrelated noise source. We then apply
linear methods to compensate for image distortion, including vector-valued prefiltering to invert
the signal transfer function and vector-valued adaptive filtering to reduce the visibility of color
quantization noise. We compensate for false textures in the halftone (i.e. textures that are not
visible in the original) by replacing the quantizer with a lookup table that flips the outcome near
threshold values. All compensation methods have low enough complexity to be incorporated into
a commercial printer or display driver. Here is more information about the project:
Web site: http://users.ece.utexas.edu/~bevans/projects/halftoning/index.html
Slides:   http://www.ece.utexas.edu/~bevans/projects/halftoning/talks/QMTErrorDiff.ppt
Software: http://users.ece.utexas.edu/~bevans/projects/halftoning/software.html
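
To make the linearization concrete, the sketch below evaluates the scalar, single-gain form of the
model: replacing the quantizer with Q(u) = K u + n yields a signal transfer function
K / (1 + (K - 1) H), which prefiltering by its inverse can compensate. The gain value and the
Floyd-Steinberg error filter here are illustrative assumptions; the group's framework uses a
matrix gain and vector-valued filters and may differ in detail.

    // Scalar, single-gain illustration of the linearization described above:
    // with quantizer model Q(u) = K*u + n and error filter frequency response H,
    // the signal transfer function is K / (1 + (K - 1) H). K = 2 and the
    // Floyd-Steinberg filter are illustrative choices only.
    #include <complex>
    #include <cstdio>
    #include <cmath>

    int main() {
        const double PI = std::acos(-1.0);
        const double K = 2.0;    // example scalar quantizer gain (assumption)
        // Floyd-Steinberg error filter taps: {row offset, column offset, weight}.
        const double taps[4][3] = { {0,  1, 7.0 / 16}, {1, -1, 3.0 / 16},
                                    {1,  0, 5.0 / 16}, {1,  1, 1.0 / 16} };
        // Evaluate the model along the diagonal spatial frequency w1 = w2 = w.
        for (int i = 0; i <= 8; ++i) {
            const double w = PI * i / 8.0;
            std::complex<double> H(0.0, 0.0);
            for (const auto& t : taps)
                H += t[2] * std::exp(std::complex<double>(0.0, -(t[0] + t[1]) * w));
            const double stf = std::abs(K / (1.0 + (K - 1.0) * H));
            std::printf("w = %.2f  |STF| = %.3f  prefilter gain = %.3f\n",
                        w, stf, 1.0 / stf);
        }
        return 0;
    }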

Video halftoning. For grayscale video halftoning, we have developed methods for assessing
visual quality and compensating for temporal artifacts. We assess and compensate for two key
perceived temporal artifacts, the dirty window effect and flicker, which arise when displaying
video halftones at 30 frames/s or less. At these rates, flicker between successive halftone frames
corresponds to temporal frequencies at which the human visual system is sensitive. The primary
application is displaying video on handheld devices. Here is more information about the project:
Web site:   http://users.ece.utexas.edu/~bevans/projects/halftoning/index.html
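
As a rough illustration of the kind of temporal statistic involved, the sketch below computes the
fraction of pixels that toggle between successive binary frames; the project's quality measures
are perceptual and considerably more sophisticated.

    // Crude proxy for flicker in a binary halftone video: the fraction of pixels
    // that toggle between successive frames. Illustration only, not the
    // project's perceptual quality measures.
    #include <vector>
    #include <cstddef>

    // frames[t] is a row-major binary frame (values 0 or 1), all the same size.
    // Returns, for each pair of successive frames, the fraction of toggled pixels.
    std::vector<double>
    toggle_rate(const std::vector<std::vector<int>>& frames) {
        std::vector<double> rates;
        for (std::size_t t = 1; t < frames.size(); ++t) {
            std::size_t toggles = 0;
            for (std::size_t i = 0; i < frames[t].size(); ++i)
                toggles += (frames[t][i] != frames[t - 1][i]) ? 1 : 0;
            rates.push_back(double(toggles) / double(frames[t].size()));
        }
        return rates;
    }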

4.0 System-level Electronic Design Automation Tools for Multicore Embedded Systems

System-on-chip design. We automate the mapping of streaming signal processing tasks onto
multicore processors to achieve high throughput, low latency, and real-time performance. We
model tasks using the Synchronous Dataflow (SDF) model of computation. An SDF program is
represented as a directed graph, in which edges are first-in first-out queues of bounded size. Each
node in the graph is enabled for execution when enough data values are available on each input.
When a node completes its execution, the data values produced on each output edge are enqueued.
We address simultaneous partitioning and scheduling of SDF graphs onto heterogeneous
multicore platforms to optimize throughput, latency and cost. We generate Pareto tradeoff curves
to allow a system engineer to explore design tradeoffs in possible partitions and schedules. Case
studies include an MP3 decoder. Here is more information about the SDF model of computation:
Slides:      http://www.ece.utexas.edu/~bevans/talks/DataflowModelingForDSPComm.ppt
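
A minimal sketch of the SDF firing rule follows: each actor declares how many tokens it consumes
and produces on each edge and fires only when every input queue holds enough tokens. The actors
and rates are illustrative assumptions, not output of our design tools.

    // Illustration of the SDF firing rule: actors fire only when every input
    // queue holds enough tokens, and enqueue their outputs when they finish.
    // The source -> 2:1 downsampler -> sink graph is an illustrative example.
    #include <cstddef>
    #include <cstdio>
    #include <deque>

    int main() {
        // Graph: source --(edge a)--> downsample2 --(edge b)--> sink
        std::deque<int> a, b;            // FIFO edges of bounded size
        const std::size_t bound = 8;
        int next_sample = 0;

        for (int step = 0; step < 20; ++step) {
            // source: consumes nothing, produces 1 token on edge a.
            if (a.size() + 1 <= bound)
                a.push_back(next_sample++);

            // downsample2: fires when 2 tokens are on edge a; produces 1 on b.
            if (a.size() >= 2 && b.size() + 1 <= bound) {
                int first = a.front(); a.pop_front();
                a.pop_front();           // discard every second token
                b.push_back(first);
            }

            // sink: fires when 1 token is available on edge b.
            if (!b.empty()) {
                std::printf("sink received %d\n", b.front());
                b.pop_front();
            }
        }
        return 0;
    }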



Scalable software framework. We realize high-throughput, scalable software on multicore
processors by extending the Process Network (PN) model of computation. A PN program is
represented as a directed graph, in which nodes are concurrent processes and edges are first-in
first-out queues. Nodes map to threads. PN guarantees predictability of results regardless of the
rates or order in which processes execute. Thus, correctness of a program does not depend on the
use of explicit synchronization mechanisms, such as mutual exclusion. In PN, a queue could
grow without bound. Our Computational PN (CPN) framework schedules programs in bounded
memory when possible. To increase throughput, CPN decouples input/output management in the
queues from computation in the nodes. C++ programs in our CPN framework automatically
scale to multiple cores via thread scheduling by an operating system, such as Linux. The same
CPN program can run on a single core or multiple cores, without any change to the code. Case
studies include a 3-D beamformer. Here is more information about the project:
Web site:   http://users.ece.utexas.edu/~bevans/projects/pn/index.html
Slides:     http://users.ece.utexas.edu/~allen/CPN-slides.pdf
Software:   http://www.ece.utexas.edu/~allen/CPN/
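
The sketch below illustrates the process-network execution style, not the CPN framework itself:
two nodes run as threads connected by a bounded FIFO that blocks a reader when empty and a
writer when full, so the computed results do not depend on how the operating system schedules
the threads.

    // Two process-network nodes as threads connected by a bounded FIFO edge.
    // Blocking reads and writes make the results independent of thread
    // scheduling. Illustration only; not the CPN framework.
    #include <condition_variable>
    #include <cstddef>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    class BoundedQueue {
    public:
        explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}
        void put(int v) {                       // blocks while the queue is full
            std::unique_lock<std::mutex> lk(m_);
            not_full_.wait(lk, [&] { return q_.size() < capacity_; });
            q_.push(v);
            not_empty_.notify_one();
        }
        int get() {                             // blocks while the queue is empty
            std::unique_lock<std::mutex> lk(m_);
            not_empty_.wait(lk, [&] { return !q_.empty(); });
            int v = q_.front();
            q_.pop();
            not_full_.notify_one();
            return v;
        }
    private:
        std::size_t capacity_;
        std::queue<int> q_;
        std::mutex m_;
        std::condition_variable not_empty_, not_full_;
    };

    int main() {
        BoundedQueue edge(4);                   // bounded FIFO edge between nodes
        std::thread producer([&] {              // node 1: produce a token stream
            for (int i = 0; i < 10; ++i) edge.put(i * i);
        });
        std::thread consumer([&] {              // node 2: consume and print
            for (int i = 0; i < 10; ++i) std::printf("%d\n", edge.get());
        });
        producer.join();
        consumer.join();
        return 0;
    }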

5.0 Brief Biography

Prof. Brian L. Evans is Professor of Electrical and Computer Engineering at The University of
Texas at Austin. He is an IEEE Fellow “for contributions to multicarrier communications and
image display”. He has graduated 16 PhD students and 8 MS students, and published more than
190 refereed journal and conference papers. He received a 1997 US National Science Foundation
CAREER Award in image and video processing systems.
Web site:   http://users.ece.utexas.edu/~bevans
Slides:     http://users.ece.utexas.edu/~bevans/espl.ppt




                                         April 21, 2010