					             Compressed Sensing Phase Retrieval
    Matthew L. Moravec^r, Justin K. Romberg^g, and Richard G. Baraniuk^r
     ^r Department of Electrical and Computer Engineering, Rice University
     ^g School of Electrical and Computer Engineering, Georgia Tech


                                        ABSTRACT
The theory of compressed sensing (CS) allows for signal reconstruction from a number of
measurements dictated by the signal’s structure, rather than its bandwidth. The information in
signals which have only a few nonzero values can be captured in only a few measurements, such
as random Fourier coefficients. However, there exist scenarios where we can only observe the
magnitude of these coefficients. It is natural to ask if such measurements could still capture
the signal’s information, and if so, how many are needed. We have found an upper bound of
O(k² log(N)) random Fourier modulus measurements needed to uniquely specify k-sparse
signals. In addition to using a signal’s structure to observe it with fewer measurements, we also
propose a method to use this structure to aid in recovery from the Fourier transform modulus.
Existing methods to solve this phase retrieval (PR) problem with complex signals require a
priori assumptions about the signal’s support. We have shown that a constraint based upon
signal structure, the ℓ1 norm, is also effective for PR. Not only does it help recover structured
signals from their Fourier transform modulus, but it can do so with fewer measurements than
PR traditionally requires.
Keywords: Compressed sensing, phase retrieval, projection algorithms


                                  1. INTRODUCTION
In many physical scenarios for signal processing, the measurement process can be time consuming
or expensive. High frequency radar is beginning to test the limits of analog to digital conversion.
Sensors for terahertz electromagnetic frequencies cost many times more than those for the
visible spectrum. MRI scans can take over an hour. The irony of these situations is that in each
case the wide-band signal in question is undeniably structured, yet aside from general bandwidth
considerations, no assumptions about this structure are made to ease the sensing process.
   The field of compressed sensing (CS) [1–3] takes the logical step of making measurements that
take a signal’s structure into account, and taking only as many measurements as the structure
requires. CS has already seen practical success such as imaging with a single pixel [4] and
reduction in MRI scan time [5]. Since random samples of a signal’s Fourier transform are one
way to make CS measurements, it would seem the benefits of CS can apply when we have
access to an object’s diffraction pattern, since it closely approximates the object’s Fourier
transform. Taking the object’s structure into account, we could randomly sub-sample the
diffraction pattern, rather than needlessly observe the entire thing. Signal reconstruction from
fewer measurements could affect such fields as crystallography, astronomy, and wavefront sensing.
   Email: {moravec, richb}@rice.edu, jrom@ece.gatech.edu
Making the jump to applying CS in these cases introduces a new challenge–the measurements of
the complex-valued diffraction pattern are in practice made with sensors that can only observe
its intensity. In this paper we present both theoretical and practical results to demonstrate that
we can indeed take random Fourier measurements and reap their benefits though we cannot
observe their phase. We show that the number of magnitude-only measurements sufficient for
perfect recovery is of the order of k² log N for sparse signals, where N is the length of the signal
and k is the number of nonzero elements.
    In the process of finding a way to recover signals from sub-sampled Fourier transform mod-
ulus, we developed a new method of phase retrieval that is based on a signal’s structure. Phase
retrieval (PR) is the process of recovering the phase, given just the magnitude, of a signal’s
Fourier Transform, thereby recovering the signal itself. It amounts to finding a reconstruction
candidate signal that has the same bandwidth and Fourier modulus as the original. Since the
Fourier modulus constraint set is non-convex, there is not a straightforward method of perform-
ing PR. Alternating projection strategies work if the signal constraints are stringent enough.
Positivity is an effective constraint for real signals, and exact support constraints perform well
for complex-valued data. Such a support constraint is somewhat unrealistic, and unfortunately,
PR of complex-valued data is notoriously difficult apart from a strict support constraint.
   We introduce a new constraint based on signal structure and use it to recover signals.
To enforce this structure we require that the ℓ1 norm of the reconstruction candidate match
that of the true signal, in addition to matching the Fourier modulus. With this constraint we
have been able to recover signals that normally would only be recoverable if their support were
known. In addition to aiding reconstruction, enforcing signal structure combines with random
Fourier measurements to reduce the number of measurements needed to reconstruct.
    To illustrate the effectiveness of a structure constraint for PR we consider a terahertz imaging
example (Figure 1). Terahertz (THz) imaging offers the benefits of x-ray imaging without
ionizing exposure. In [6] the diffraction pattern of an object illuminated by a collimated THz
beam is scanned, and its magnitude recorded. Since it is not a perfect plane wave hitting the
object, its diffraction pattern is the Fourier transform of a complex-valued signal. As such, loose
support constraints (25% of the diffraction pattern size, to ensure there is no aliasing) are not
effective in recovering the signal. To enforce structure of the reconstructed object, we estimate
and use the ℓ1 norm as a constraint. As a side benefit, we are able to perform reconstruction
with fewer measurements than traditionally needed for PR. This can be useful because existing
methods for terahertz imaging perform a raster scan of an object, recording the intensity of a
focused beam at each location where it passes through the object. This can be a time-consuming
task. An array of THz detectors could be made to speed up acquisition time, but would be very
expensive. By using CSPR, we can save time and expense by randomly sampling the diffraction
pattern.
   This paper is organized as follows. Section 2 provides background on CS and PR. In Section
3 we introduce the compressibility constraint for PR and show how it both helps recovery
[Figure 1: schematic of the THz setup (transmitter, 6 cm to the object, 12 cm to the receiver)
and reconstructions from the Fourier transform modulus under the compressibility constraint,
under a loose support constraint, and from a randomly sub-sampled modulus under the
compressibility constraint.]
Figure 1. A THz transmitter illuminates an object, in this case a T-shaped aperture. The far-field
diffraction pattern is focused to a reasonable distance with a lens, and is recorded with a raster-
scanning receiver. The 64 × 64 pixel diffraction pattern is approximately the modulus of the signal’s
Fourier transform. We perform PR with a compressibility constraint and retrieve the object. A loose
support constraint is not effective, as the algorithm cannot find an intersection between the support
and Fourier modulus constraint sets. We also use the compressibility constraint with only 2.5% of the
measurements and still nearly recover the object. The method of recovery is explained in Section 3.3
of this paper.
and allows for fewer measurements, and describe the algorithm we use to implement CSPR. We
conclude the paper with numerical results, both simulated and from physical application.


                                   2. BACKGROUND
2.1. Compressed Sensing
Traditionally, digital signal processing has meant first observing a signal, then processing it in
some way that enhances it and/or prepares it for storage or transmission. The sampling part of
this process is governed by the Nyquist rate for band-limited signals. Shannon sampling is the
best that can be done to recover a sampled signal if all that is known about a signal is that it
is band-limited.
    However in many cases the signal class can be more specified. We expect a natural band-
limited signal to be structured and hence “sparse” in some basis. This means that a large
signal can be represented well with only a few of its elements or a few of the coefficients of
its transform. Smooth signals are sparse in the Fourier basis and piecewise smooth signals are
sparse in a wavelet basis. Knowing that a signal is sparse gives much more information than
just knowing it is band-limited. Transform coding uses this knowledge to efficiently store or
represent this signal, however it was not until the last two years that this type of structure was
exploited in sampling the signal. Fully sampling a compressible signal is wasteful. Just as we
only save the “important parts” when storing or transmitting signals we ought to strive to sense
only the significant content of a sparse signal. CS aims to efficiently observe such signals.

2.1.1. Efficiency of Compressed Sensing
Consider a signal x ∈ R^N. Let x be compressible, meaning that if the magnitudes of its elements
were ordered from greatest to smallest they would decay like n^(−1/p). Many natural signals satisfy
this condition when represented as a linear combination of vectors from a basis Ψ that sparsifies
the signal. The best k-term approximation of x is defined as x_k, and is equal to x at the k
largest entries and is 0 otherwise. The k-term approximation error σ_k(x) is defined as ‖x − x_k‖_2
and is bounded by Ck^(1/2−1/p). Let y be linear measurements of x, where y = Φx. The “miracle”
of CS is that x can be reconstructed with error O(σ_k(x)) with only n measurements, where n
is O(k log(N)) and is much smaller than N [1–3]. This is to say that the CS recovery of a given
signal has optimal error and is accomplished not by sampling the whole signal but rather only
as much as the structure of the signal requires.
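For concreteness, the k-term approximation and its error can be computed directly. The following NumPy sketch (our illustration, with an arbitrary choice of N and p, not code from the references) builds a signal whose sorted magnitudes decay like n^(−1/p) and checks that σ_k(x) falls under a bound of the form Ck^(1/2−1/p):

```python
import numpy as np

def best_k_term(x, k):
    """Best k-term approximation: keep the k largest-magnitude entries of x."""
    xk = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest entries
    xk[idx] = x[idx]
    return xk

# A compressible signal: sorted magnitudes decay like n^(-1/p)
N, p = 1024, 0.5
n = np.arange(1, N + 1)
x = np.random.permutation(n ** (-1.0 / p))

k = 10
xk = best_k_term(x, k)
sigma_k = np.linalg.norm(x - xk, 2)    # k-term approximation error
print(sigma_k)
```

Here the k largest entries are exactly the first k terms of the decay, so σ_k(x) is the tail of the sequence, which is well below k^(1/2−1/p) for this p.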

2.1.2. Measuring and Reconstruction
Accurate signal recovery from limited measurements requires more than the sparsity of a signal.
The measurement matrix Φ must satisfy certain properties so that no matter what the sparsifying
matrix Ψ may be, Φ will capture all the significant information. In [2] it is shown that if Φ is a
random Fourier ensemble, it can capture information in a structured signal with high probability,
and furthermore this recovery is attained via 1 minimization:

                                 x = arg min x   1   s.t. y = Φx.                             (1)
                                           x
CS properties holding for a random Fourier ensemble and 1 minimization being the key to
recover are the inspiration for our PR method which uses random Fourier modulus values and
an 1 constraint.
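To illustrate program (1), the sketch below solves basis pursuit for a real-valued sparse signal by recasting the ℓ1 minimization as a linear program (splitting x = u − v with u, v ≥ 0). A random Gaussian matrix is used as a stand-in for the random Fourier ensemble so the example stays real-valued; the sizes and the choice of solver are our hypothetical assumptions, not part of the cited works:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, k, n = 64, 3, 32            # signal length, sparsity, number of measurements
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix (stand-in for a random Fourier ensemble)
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
y = Phi @ x

# Basis pursuit: min ||x||_1 s.t. y = Phi x, as a linear program over (u, v)
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_hat - x))
```

With n well above k log N the linear program recovers the sparse signal essentially exactly, matching the CS recovery guarantee.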

2.2. Phase Retrieval
In many areas of science and engineering, such as crystallography, astronomy, and wavefront
sensing, measurements of a complex-valued signal must be made with sensors that can only
observe its intensity. These magnitude-only measurements are acceptable in instances like pho-
tography, since our eyes also only observe intensity of a light field. However in certain appli-
cations, the phase of a signal is very important information to have. An object’s diffraction
pattern, like from a crystal illuminated by an x-ray, or a landscape illuminated by a laser [7],
closely approximates the object’s Fourier transform. It would be useful in these circumstances
to recover the object from its transform, but the phase of the diffraction pattern would have to
be known in addition to the magnitude.
   The task of PR is to recover the Fourier phase information in order to perform this seemingly
hopeless inverse problem. Methods for PR are introduced by Fienup in [8, 9]. His concern, like
ours, is PR from the intensity of an unknown object’s Fourier transform.∗ In these papers
he shows a way to accurately recover a discrete signal from its transform modulus, and finds
surprising his algorithms’ ability to recover the phase from a variety of random starting points.

2.2.1. Unique Recovery
The theory behind this uniqueness of recovered results is explained by Hayes in [10]. Observing
the magnitude (squared) of a signal’s Fourier transform is equivalent to observing the autocor-
relation, as these are Fourier transform pairs. Having a signal’s autocorrelation is equivalent to
knowing the z-transform of its autocorrelation. This polynomial is the product of the signal’s
z-transform and the transform of its flipped version. Therefore if a signal’s z-transform is
irreducible, then it is the only signal which will yield its autocorrelation (excepting a shifted
or flipped version, since the absolute position and orientation of the signal are irretrievably lost
with the phase). Since virtually all polynomials in two or more dimensions are irreducible, one
can be sure that if a recovered signal has finite support, and has the same autocorrelation as
the original signal, it is indeed equal to the original signal.
    Hayes also explains the conditions needed to guarantee that a recovered signal has the same
autocorrelation as the original signal. To do this he considers how one would know if the z-
transform of a signal’s autocorrelation were equivalent to that of the original signal’s. For a signal
of support N , the z-transform of the autocorrelation is a polynomial of degree 2N − 1. In order
for two polynomial functions of degree 2N − 1 to be equal, they must have the same evaluations
at 2N locations. These 2N locations may be chosen as distinct locations anywhere on the
complex plane, but it is convenient to consider equally spaced positions about the complex unit
circle. The z-transform of the autocorrelation evaluated at these points is the discrete Fourier
transform (DFT) of the autocorrelation at these points, which is simply the magnitude-squared
of the original signal’s Fourier transform. Through these relations he shows a signal in two or
more dimensions is uniquely specified by its DFT modulus.
   ∗ The phase problem, in general, refers to the task of recovering the phase of an object’s Fourier
transform, given its magnitude, and the magnitude of the object itself.
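The Fourier-pair relationship underlying this argument is easy to verify numerically. The sketch below (our illustration) compares the inverse DFT of the squared 2N-point Fourier magnitude of a zero-padded complex signal against the directly computed autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Zero-pad to length 2N so the circular autocorrelation equals the linear
# one (no aliasing), mirroring the 2N-point DFT in Hayes' argument.
M = 2 * N
X = np.fft.fft(x, M)

# Inverse DFT of |X|^2 gives the autocorrelation of x
r_from_modulus = np.fft.ifft(np.abs(X) ** 2)

# Direct autocorrelation: r[m] = sum_n xp[n] * conj(xp[(n - m) mod M])
xp = np.concatenate([x, np.zeros(N)])
r_direct = np.array([np.sum(xp * np.conj(np.roll(xp, m))) for m in range(M)])
```

The two arrays agree to floating-point precision, confirming that observing the squared DFT modulus is equivalent to observing the autocorrelation.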

2.2.2. Recovery Methods
While this work addresses the uniqueness of a recovered solution, it provides no guaranteed
method of finding a candidate solution with the necessary support size and DFT modulus. This
problem is an optimization in which a reconstruction candidate’s distance from these constraint
sets must be minimized. Since the Fourier modulus constraint set is non-convex, it is a non-
convex optimization. Though problems of this nature are difficult because only exhaustive
algorithms can guarantee convergence, there are a variety of heuristics that do well in solving
this particular optimization [11]. To help find a solution these heuristics should preferably have
significant a priori signal information in addition to the Fourier modulus and maximum support
size constraints. This information could be positivity, explicit support, or a histogram of signal
values.


              3. COMPRESSED SENSING PHASE RETRIEVAL
3.1. ℓ1 Constraint for Phase Retrieval
Though the Fourier modulus and bandwidth constraints are sufficient to guarantee a unique
solution, finding the intersection of these two sets is a difficult problem, since the Fourier modulus
constraint set is non-convex. Were both sets convex, alternating projections would find the
intersection. Despite the non-convexity, projection algorithms can be used to solve the problem.
One in particular, Fienup’s hybrid input-output algorithm† , can find the intersection even in the
presence of local minima. However, if a signal is complex-valued, the loose constraints of
maximal bandwidth (which may be derived from the autocorrelation), though sufficient to
guarantee the existence of a unique solution, are not sufficient in practice to find the solution. A typical
algorithm may search indefinitely for a global minimum, getting trapped by local minima in a
condition known as stagnation [12]. Exact support constraints are needed to aid in finding a
solution. The reason they work is they reduce the size of the constraint set, ostensibly making
it an easier process for iterative projection procedures to find the solution.
    Other strict constraints can also reduce the set size. A histogram constraint defines a dis-
tribution of pixel values that the image must satisfy. A sparsity constraint (referred to as a
number of non-zero constraint in [13]) forces the image to have a certain number of non-zero
elements. A drawback of these constraints is that they may be unknown quantities which are
difficult to estimate.
    We propose an ℓ1 constraint. The ℓ1 norm of a signal is the sum of the absolute values of its
components or of its coefficients in a transform basis. It is similar to both sparsity and histogram
constraints but is more forgiving: an exact distribution is not needed, nor an exactly sparse
signal. Rather, only the value of the signal’s ℓ1 norm is needed, and this could be estimated and then
optimized over a single degree of freedom. We have found that this constraint is just as effective
as a strict support constraint. Section 4.1 provides empirical support for our assertion.
   † Fienup and others refer to these as algorithms, but strictly speaking they are not, since the running
length to find a solution is indefinite.
   This constraint can be useful with all measurements, but it is especially helpful in that it can
be used with a structured signal to recover it with fewer measurements than bandwidth would
require, bringing us to the second contribution of this paper.

3.2. Theoretical Recovery
It has been shown by others, and discussed in the introduction, that the number of measurements
needed to guarantee uniqueness in PR is a function of the bandwidth: the Fourier modulus must
be sampled twice as much in each dimension as the bandwidth of the original. However we know
more about the signal than this. We know that natural signals will probably have a relatively
small number of coefficients in a sparsifying basis, compared to the total size of the signal.
Intuition says that if only a few pieces of information of the signal suffice to represent it well,
then only a few measurements should be needed to capture this information. We have access
to linear measurements of the signal’s autocorrelation, as the intensity of a signal’s Fourier
transform is equivalent to the Fourier transform of its autocorrelation. Each Fourier magnitude
measurement is a linear projection of the signal’s autocorrelation onto a complex sinusoid. The
good news for us is that a random collection of these projections is sufficient to specify a sparse
autocorrelation, which in turn is sufficient to uniquely specify the signal.
Lemma 1. Suppose x[n] is a two-dimensional sequence of complex numbers of support N1 × N2,
and x has a z-transform which, except for trivial factors, is irreducible and nonsymmetric. Then
x[n] is uniquely specified by its Fourier transform modulus, and an M-point DFT with
M ≥ 2N1 × 2N2 is sufficient for this unique specification.
   Proof sketch: Hayes proves in theorem 9 of [10] that sequences of real numbers are uniquely
specified by their DFT modulus, as a consequence of their z transforms being irreducible. These
arguments also follow for complex sequences as long as their z-transforms are also irreducible.

    This means that if x ∈ C^(N1×N2) has an irreducible z-transform, and there exists an x̃ ∈ C^(N1×N2)
such that |Fx̃| = |Fx| on a 2N1 × 2N2 lattice, then x̃ ∼ x (the two signals are equal within a
flip, shift, and/or a constant phase factor). Since the autocorrelation is a Fourier transform pair
with a signal’s Fourier transform intensity (its modulus-squared), the Lemma implies that
a signal’s autocorrelation is sufficient to uniquely specify it. Hayes notes that the irreducibility
requirement is not strict in two or more dimensions for complex signals, since reducible
polynomials correspond to a set of measure zero [14].
   Since an arbitrary complex signal is specified by its autocorrelation, we must now consider
how many measurements of a signal’s autocorrelation are needed to specify it. We consider the
case in which the signal has only a few non-zero complex entries, whose values and locations are
unknown.
Theorem 1. Suppose x[n1, n2] ∈ C^(N1×N2) has an irreducible z-transform, and is k-sparse. Let
N = N1 N2. Then with probability of at least 1 − O(N^(−ρ/α)) for some fixed ρ > 0, O(k² log(N))
random Fourier modulus measurements of x are sufficient to uniquely specify it.
    Proof: Since x is k-sparse in Ψ, its autocorrelation is at most k²-sparse in some basis Ψ̃. For a k²-
sparse signal in some basis Ψ̃, it was shown in [3] that, due to the Uniform Uncertainty Principle
of a random Fourier ensemble, only n = O(k² log(N)) measurements are needed to specify the signal,
with the probability stated above in the theorem. This means that, with overwhelming probability, if
an autocorrelation R_x̃ matches R_x at n random Fourier locations, and both R_x̃ and R_x are k²-
sparse, then R_x̃ = R_x. From the Lemma, this implies that x̃ ∼ x.
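The first step of the proof, that a k-sparse signal has an autocorrelation with at most k² nonzero entries, can be checked numerically; the sizes below are our hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N1 = N2 = 16
k = 4

# k-sparse complex 2-D signal with random support and values
x = np.zeros(N1 * N2, dtype=complex)
idx = rng.choice(N1 * N2, k, replace=False)
x[idx] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x = x.reshape(N1, N2)

# Autocorrelation via the zero-padded 2N1 x 2N2 DFT of |X|^2
X = np.fft.fft2(x, s=(2 * N1, 2 * N2))
r = np.fft.ifft2(np.abs(X) ** 2)

# The autocorrelation is supported on the difference vectors between
# nonzero locations of x: at most k^2 of them
support = np.count_nonzero(np.abs(r) > 1e-10)
print(support, k ** 2)
```

The counted support never exceeds k², since each nonzero lag of the autocorrelation corresponds to a pair of nonzero locations in x.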
     This gives a new paradigm for defining the number of Fourier modulus measurements needed
to capture the information in a signal, and is significant because it scales with the signal’s
structure, rather than its bandwidth. Though the theorem applies to signals sparse in space, we
have seen in practice the same results for a signal sparse in a different basis, such as wavelets.
We have also observed that the order of measurements needed scales more like k log N than
k² log N.

3.3. Practical Recovery
As with the case of regular PR, our goal is to find a signal x that lies in the intersection of two
constraint sets. For us the constraint sets are slightly different. One set is all signals which have
the same Fourier modulus values as x at specified random locations. The other set is all the
signals which have the same ℓ1 norm as x, or whose coefficients in some transform basis have
the same ℓ1 norm as the coefficients of x. As with regular PR, we would like to find an x that
minimizes the distance between the two sets (since the two sets intersect, this distance will be
zero). In our case, as with regular PR, the Fourier constraint set is non-convex. For us the ℓ1
constraint set is also non-convex, whereas a support constraint is convex.
   The presence of many local minima in the distance between the two constraint sets precludes
a direct optimization procedure. Rather we use the same kind of iterative projection scheme
that is used for regular PR. There are many variants of Fienup’s original error reduction and
Hybrid Input-Output algorithms, and one that we found effective and implemented is Relaxed
Averaged Alternating Reflections (RAAR) [15]. Starting with x^(0), an initial guess of random
values, each successive step is calculated as a combination of projections and reflections:

                           x^(n+1) = [ (β/2)(R_ℓ1 R_m + I) + (1 − β)P_m ] x^(n).                 (2)
P_m x refers to taking the current iterate x and projecting it onto the Fourier modulus constraint
set: taking its Fourier transform, fixing the magnitudes of a defined subset to match the known
values while keeping the phases the same, and then performing an inverse transform. R_m x is a
reflection, defined as R_m x = 2P_m x − x. The reflection R_ℓ1 x = 2P_ℓ1 x − x, where P_ℓ1 x is the
projection of x onto the set of signals with a known ℓ1 norm. This is accomplished by uniformly
adding or subtracting a constant value to the magnitudes of the entries of x until ‖x‖_1 reaches
the desired value. We stop the applications of RAAR once the relative difference between a
modulus projection and a subsequent ℓ1 projection,

                              E = ‖P_ℓ1 P_m x^(n) − P_m x^(n)‖_2 / ‖P_m x^(n)‖_2,               (3)

reaches a negligible value, implying that we have a candidate solution which lies in both con-
straint sets.
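The iteration and its two projections can be sketched as follows. This is our illustrative reading of the method in NumPy, not the authors’ implementation: the ℓ1 projection realizes the uniform add/subtract rule with a bisection search for the shrinking case, and the problem sizes, β = 0.87, and iteration budget are arbitrary choices:

```python
import numpy as np

def P_m(x, mags, idx):
    """Project onto the Fourier-modulus set: fix |X| at the measured
    locations idx to the known values mags, keeping the phases."""
    X = np.fft.fft(x)
    X[idx] = mags * np.exp(1j * np.angle(X[idx]))
    return np.fft.ifft(X)

def P_l1(x, tau, tol=1e-9):
    """Project onto the set ||x||_1 = tau by uniformly shifting the entry
    magnitudes (bisection for the shrinking case), keeping the phases."""
    a, phase = np.abs(x), np.exp(1j * np.angle(x))
    if a.sum() > tau:
        lo, hi = 0.0, a.max()          # find the constant to subtract
        while hi - lo > tol:
            c = 0.5 * (lo + hi)
            if np.maximum(a - c, 0).sum() > tau:
                lo = c
            else:
                hi = c
        a = np.maximum(a - 0.5 * (lo + hi), 0)
    else:
        a = a + (tau - a.sum()) / a.size   # grow magnitudes uniformly
    return a * phase

def raar_step(x, mags, idx, tau, beta=0.87):
    """One RAAR update: x <- [ (beta/2)(R_l1 R_m + I) + (1-beta) P_m ] x."""
    Rm = 2 * P_m(x, mags, idx) - x
    R1 = 2 * P_l1(Rm, tau) - Rm
    return 0.5 * beta * (R1 + x) + (1 - beta) * P_m(x, mags, idx)

# Hypothetical usage: sparse complex signal, a random subset of its
# Fourier magnitudes, and its (assumed known) l1 norm.
rng = np.random.default_rng(3)
N, k, n = 64, 4, 48
x_true = np.zeros(N, dtype=complex)
x_true[rng.choice(N, k, replace=False)] = (rng.standard_normal(k)
                                           + 1j * rng.standard_normal(k))
idx = rng.choice(N, n, replace=False)        # random Fourier locations
mags = np.abs(np.fft.fft(x_true)[idx])       # magnitude-only measurements
tau = np.abs(x_true).sum()                   # assumed-known l1 norm

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
for _ in range(300):
    x = raar_step(x, mags, idx, tau)
    Pmx = P_m(x, mags, idx)
    E = np.linalg.norm(P_l1(Pmx, tau) - Pmx) / np.linalg.norm(Pmx)
    if E < 1e-6:                             # stopping criterion (3)
        break
```

Whether the loop converges depends on the random starting point, as is typical for these projection heuristics; in practice one would restart from several initial guesses and monitor E.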


                                  4. PERFORMANCE
There are two issues to consider as we evaluate the performance of these PR methods for different
signal sizes (N ), number of nonzero values (k), and number of measurements (n). One is the
rate of convergence, how often a particular kind of signal will converge in a defined amount of
time. The other is the error of reconstructed signals. For the signal sizes we have considered,
simulation suggests that CSPR performs just as well as PR in terms of convergence, and as well
as CS in terms of accuracy. For converged signals, the simulation affirms that k² log N is indeed
an upper bound on the number of measurements needed for perfect reconstruction.

4.1. Convergence
To evaluate the effectiveness of the ℓ1 constraint for regular PR, we compare it in different kinds
of simulations against exact and loose support constraints (see Table 1). In each test the
second constraint is the modulus of the input signal’s Fourier transform. For each combination
of test and constraint we perform the RAAR algorithm on 500 randomly generated signals,
recording the number of times a convergent solution is found within 1000 iterations.
    As expected, we find the exact support constraint almost always converges for signals which
are sparse in space. We also find that loose support constraints, which ensure that the solution
does not alias the Fourier transform modulus, are not effective. As discussed in the introduction,
this has been known for some time. The ℓ1 constraint performs fairly well for sparse signals,
but appears to be affected more than the exact support constraint by growing signal sizes.
This may suggest that the ℓ1 constraint is more sensitive to local minima when searching for a
solution.
    The ℓ1 constraint is better than the strict support constraint when a signal is structured but
occupies its full bandwidth. This is illustrated in the test in which the 64-pixel image has every
pixel non-zero, but is composed of only 5 wavelets. The exact support constraint does not
really restrict the search space very much, since there are many non-zero pixels compared to the number
of Fourier measurements. However the ℓ1 constraint does restrict the search space more and
converges at a higher rate.
    The findings support the idea that a smaller search space results in more convergent solutions.
The dimensionality of exact support constraint set is much smaller for sparse signals, which is
why this constraint performs better when the ratio of non-zero pixels to image size is small. The
ℓ1 constraint has a similar property, except its behavior is related not to the number of non-zero
pixels, but to the number of non-zero coefficients in an arbitrary basis. For sensing scenarios
in which the signal has a large bandwidth but is very structured, the ℓ1 constraint offers more hope for
finding a solution than an exact support constraint alone.
Table 1. The convergence properties of PR with different constraint sets are compared under different
tests. In two cases, “exact support” and “loose support” (anti-aliasing) are equivalent, because the
number of non-zero pixels is at the anti-aliasing bandwidth limit of N/4.


                                   Percent Convergence
    Test                        strict support constraint   ℓ1 constraint   loose support constraint
    N = 64, k = 5                        99.8%                  59.6%               5.4%
    N = 64, k = 5 (wavelets)              0.6%                  38.4%                –
    N = 64, k = 64                        2.2%                   2.0%                –
    N = 256, k = 20                      100%                   16.8%               0.2%



4.2. Accuracy
To determine how accurate the algorithm is for various measurement rates, we consider k-sparse
signals and empirically determine how many Fourier modulus measurements are needed for
different signal sizes and sparsity rates for consistent (95%) exact recovery of 100 converged
solutions. We hold N constant and vary k, and also hold k constant and vary N , in order to
empirically understand the dependence of the number of measurements on these values. We
compare these results with those found via regular CS if the phases were known, using the
SPGL1 solver‡.
    For convergent solutions we find that the number of measurements needed does not appear
to follow a k² log N trend but appears to be closer to k log N, as Figure 2 shows. When the
signal size is held constant, the number of measurements needed increases linearly. The slope is
the same as for CS with the known phases, though more measurements are needed for CSPR.
When k is held constant, the number of measurements follows a sub-linear trend, just as CS
does, though as in the other case losing the phase results in needing more measurements.
    These results support the theorem and give confidence that the number of Fourier mea-
surements truly is a function of the structure of the signal, rather than its bandwidth. This
knowledge, along with the effectiveness of the ℓ1 constraint, leads us to believe that the principles
of CS apply even when only the modulus of measurements is observed, and that the same
structure which allows for fewer measurements aids in PR reconstruction.


                            5. ACKNOWLEDGEMENTS
We would like to thank Wai Lam Chan and Dan Mittleman for the terahertz data.
   ‡ The SPGL1 solver can perform ℓ1 minimization for complex signals and measurements, and is free
for download at http://www.cs.ubc.ca/labs/scl/index.php/Main/Spgl1.
[Figure 2: number of measurements n for CSPR and CS; (a) n versus sparsity k, (b) n versus
signal size N.]

Figure 2. For a given value of N and k, enough trials of CSPR are performed on randomly generated
signals until 100 convergent solutions have been found. The number of measurements n recorded is the
smallest value needed so that at least 95 are perfectly reconstructed. In (a) we hold N constant at 64
and vary k. The number of measurements needed increases linearly. In (b) we hold k constant at 5 and
vary the signal size. The increase in the number of measurements needed is sub-linear. In each case
the number of measurements needed has a trend similar to that of CS with the phases known, though
clearly more measurements are required.
                                      REFERENCES
 1. D. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory 52(4), pp. 1289–
    1306, 2006.
 2. E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction
    from highly incomplete frequency information,” IEEE Transactions on Information Theory 52(2),
    pp. 489–509, 2006.
 3. E. Candès and T. Tao, “Near optimal signal recovery from random projections and universal
    encoding strategies,” IEEE Transactions on Information Theory 52(12), pp. 5406–5425, 2006.
 4. D. Takhar, J. Laska, M. Wakin, M. Duarte, D. Baron, S. Sarvotham, K. Kelly, and R. Baraniuk,
    “A new compressive imaging camera architecture using optical-domain compression,” Proc. of
    Computational Imaging IV at SPIE Electronic Imaging, San Jose, CA , January 2006.
 5. M. Lustig, J. Santos, J. Lee, D. Donoho, and J. Pauly, “Compressed sensing for rapid mr imaging,”
    Proc. of SPARS , 2005.
 6. W. Chan, M. Moravec, R. Baraniuk, and D. Mittleman, “Terahertz imaging with compressed
    sensing and phase retrieval,” to appear in CLEO-2007 , May 2007.
 7. J. Fienup, “Lensless coherent imaging by phase retrieval with an illumination pattern constraint,”
    Optics Express 14(2), pp. 498–508, 2006.
 8. J. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Optics Letters
    3(1), pp. 27–29, July 1978.
 9. J. Fienup, “Phase retrieval algorithms: a comparison,” Applied Optics 21(15), pp. 2758–2769,
    August 1982.
10. M. H. Hayes, “The reconstruction of a multidimensional sequence from the phase or magnitude
    of its Fourier transform,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-
    30(2), pp. 140–154, 1982.
11. S. Marchesini, “A unified evaluation of iterative projection algorithms for phase retrieval,” Rev.
    Sci. Instrum 78, 2007.
12. J. R. Fienup and C. C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt.
    Soc. Am. A 3, pp. 1897–1907, 1986.
13. H. He, “Simple constraint for phase retrieval with high efficiency,” J. Opt. Soc. Am. A 23, pp. 550–
    556, 2006.
14. M. H. Hayes and J. H. McClellan, “Reducible polynomials in more than one variable,” Proceedings
    of the IEEE 70(2), pp. 197–198, 1982.
15. D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Problems
    21, pp. 37–50, 2005.

				