

                 SMAA: Enhanced Subpixel Morphological Antialiasing

            Jorge Jimenez        Jose I. Echevarria        Diego Gutierrez
                            Universidad de Zaragoza

                        Technical Report RR-2011-05



[Figure 1 image: comparison panels labeled SMAA S2x, SMAA 4x, SSAA 16x and SSAA 16x]
Figure 1: Our SMAA technique yields quality similar to SSAA 16x while performing comparably to the fastest MLAA implementations. The improved edge/pattern detection makes it possible to antialias difficult cases (first column). Diagonal pattern detection allows accurate reconstruction of such shapes (second column). The detection of sharp geometric features allows corners and intersections to be better reconstructed (see the second window in the first column, and the corners of the aerials in the third image). The number of steps in the gradients produced by our approach surpasses SSAA 16x (third column). Subpixel feature handling preserves connectivity and accurately represents distant objects (fourth, fifth and seventh columns). In zones with low-contrast edges, we fall back to 2x instead of using MLAA (sixth column), which provides good results and maintains performance. The highest quality version of SMAA (4x) runs in 1.06 ms and 0.505 ms for 1080p and 720p respectively.

Abstract

We present a new image-based, post-processing antialiasing technique that offers practical solutions to all the common problems of existing filter-based antialiasing algorithms. It yields better pattern detection to handle sharp geometric features and diagonal shapes. Our edge detection scheme exploits local contrast features, along with accelerated and more precise distance searches, allowing the patterns to antialias to be recognized more reliably. Our method is capable of reconstructing subpixel features, comparable to 4x multisampling, and is fully customizable, so that every feature can be turned on or off to suit particular needs. We propose four different presets, from the basic level up to adding spatial multisampling and temporal supersampling. Even this full-fledged version achieves performance on par with the fastest approaches available, while yielding superior quality.

1    Introduction

Aliasing is one of the longest-standing problems in computer graphics, producing clear artifacts in still images (spatial domain) and introducing flickering in animations (temporal domain). While using higher sampling rates can ameliorate its effects, this approach is too expensive and thus not suitable for real-time applications. During the last few years we have seen great improvements in real-time rendering algorithms, including for instance complex shaders and geometric detail by means of tessellation. However, aliasing remains one of the major stumbling blocks in trying to close the gap between off-line and real-time rendering, making antialiasing techniques an open challenge for the years to come [Andersson 2010].

Figure 2: Image processed with our technique. Close-ups used in Figures 1 and 13 are marked in red and green respectively. Scene courtesy of DICE, from the Frostbite 2 game engine.

For more than a decade, supersample antialiasing (SSAA) and multisample antialiasing (MSAA) have been the gold-standard antialiasing solutions in real-time applications such as games. However, these techniques are not well suited for deferred shading, since in that case MSAA degenerates to brute-force SSAA on the edges, which implies a performance drop (and could only be performed
in DirectX 10 onwards). Furthermore, on older platforms (which include the current generation of consoles), it is not possible to use MSAA in conjunction with multiple render targets, which is critical since most recent techniques make use of supporting framebuffers. Even when not coupled with deferred shading or used on older platforms, MSAA imposes massive memory storage and bandwidth requirements at the higher sample counts, which translates to an average cost of 5.4 ms for MSAA 8x, with a peak of 7.7 ms on memory-bandwidth-intensive games, on a NVIDIA GeForce GTX 470. Its memory consumption can be as high as 126 MB and 506 MB for MSAA 8x, for forward and deferred rendering respectively, taking 12% and 50% of a mainstream GPU equipped with 1 GB of memory. This problem is aggravated when 16-bit rendering is used (high dynamic range), as these numbers double. Ultimately, this limits the usage of MSAA in deferred engines to 4x [Andersson 2011], which translates into a decrease in gradient quality.

Recently, industry and academia have begun to explore alternative approaches, where antialiasing is performed as a post-processing step [Jimenez et al. 2011a]. The original, CPU-based Morphological Antialiasing (MLAA) method [Reshetov 2009] gave birth to an explosion of real-time antialiasing techniques, rivaling the quality of MSAA with a performance within the [0.3..3] ms range. However, analyzing the current generation of filter-based antialiasing techniques, we found that they all share at least some of the following problems:

   • The original shape of the objects is not always preserved; a slight rounding of the corners may be clearly visible. This is especially critical in sharp, tiny objects and subpixel features.

   • Most approaches are designed to handle horizontal or vertical patterns only, ignoring diagonals (which thus remain aliased).

   • Subpixel features are not properly handled in general.

   • The predefined edge patterns do not allow visible specular and shading aliasing to be completely removed.

   • Local contrast is not taken into account in techniques that make use of a binary edge classification (i.e. an edge can be classified as either existent or non-existent) to determine whether an edge is perceived or not by a human viewer. This leads to inaccuracies in the pattern detection and, subsequently, to incorrectly classified edges.

   • Techniques using bilinear filtering to accelerate the pattern detection (searches for line ends) introduce artifacts due to the inherent inaccuracy of this approach.

   • Temporal stability becomes an issue with edges that are almost vertical or horizontal.

   • The original sharpness of the input image is lost to some degree, as a result of the (sometimes over-aggressive) filtering performed.

Solving these problems while maintaining practical real-time performance poses a real challenge. We propose a novel antialiasing technique that falls in the domain of post-processing techniques, particularly of the morphological antialiasing family. Our approach consists of tackling these complex problems separately and offering simple and modular solutions. First, we improve the pattern detection to handle sharp geometric features and diagonal shapes. Second, by multi/supersampling¹ morphological antialiasing, we are able to reconstruct subpixel features, comparable to 4x. Finally, we introduce a very accurate edge and pattern detection scheme that exploits local contrast features, along with accelerated yet precise distance searches, which allows the patterns to antialias to be recognized more reliably. Performance-wise, SMAA 4x and SMAA T2x are about 1.5x and 4.8x faster than MSAA 8x respectively. With regard to memory footprint, we only need 43% and 10% of the memory used by MSAA 8x, in a forward and deferred context respectively. As the images in Figures 1 and 2 show, the quality of our method is in general on par with SSAA 16x (possibly lower for shading antialiasing).

Given the modular nature of our approach, specific features can be enabled or disabled, adjusting to the needs of a particular scenario and/or hardware configuration. We propose four different presets, from the simplest to the most sophisticated version, which includes both spatial and temporal multi/supersampling. Even the simple version achieves performance on par with the fastest approaches available, while yielding superior quality². This modularization allows for direct, practical use of our technique even on mainstream hardware, making it market-ready. Furthermore, we provide all the necessary source code through our website to facilitate the wide adoption of our technique, including by industry.

2    Related Work

The basic method to minimize aliasing weighs the color of polygons inside a pixel according to the visible area (equivalent to convolving with a box filter) [Catmull 1978]. Other classical antialiasing approaches are the A-buffer [Carpenter 1984; Schilling and Strasser 1993] and stochastic sampling [Dippé and Wold 1985]. Supersampling antialiasing (SSAA) involves rendering the scene at a higher resolution, then downsampling to the final resolution, which is obviously expensive both in terms of time and memory. It is still the basis of multisampling antialiasing (MSAA) [Akeley 1993], where the color of a pixel is only calculated once instead of running at subsample frequencies. To display the scene, all samples are aggregated using some filter (a resolve operation). Coverage sampling antialiasing (CSAA) [Young 2006] decouples coverage from color, depth and stencil, thus reducing bandwidth and storage costs.

The addition of new real-time rendering paradigms (such as deferred shading [Hargreaves 2004; Geldreich et al. 2004] and the lighting pre-pass [Engel 2008]), along with the hardware limitations of the current generation of videogame consoles, has recently motivated a great amount of exciting new research in the field [Jimenez et al. 2011a]. Most of these new antialiasing solutions handle the aliasing problem as a post-process, devising filters that are applied over the final, aliased image, usually rendered at display resolution. While the basic idea is not new (finding discontinuities in the image and blurring them in order to smooth the jagged edges) [Bloomenthal 1983; Van Overveld 1992; Isshiki and Kunieda 1999], some advanced versions of it have only recently been applied in games [Shishkovtsov 2005; Koonce 2007; Sousa 2007]. All these techniques alleviate the aliasing problem, although the sharp definition of the edges is obviously lost to a degree. More refined solutions, like directionally localized antialiasing (DLAA) [Andreev 2011], use smarter blurs that produce very natural results and good temporal coherence. Nevertheless, these approaches still yield blurrier results than MSAA.

Other solutions, such as morphological antialiasing (MLAA) [Reshetov 2009], try to estimate the pixel coverage of the original geometry. MLAA finds edges by looking for

   ¹ We will refer to spatial multisampling and temporal supersampling jointly as multi/supersampling.

   ² We encourage the reviewers to see the comparison images in the Photoshop files provided as supplemental material, since simply looking at the images side by side may not do justice to the technique. Quickly swapping layers allows for a much better, direct comparison. These files will be accessible through a public website, once the paper is published.
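As a conceptual illustration of the local contrast adaptation mentioned above (and not the paper's GPU implementation), a binary luma edge classification refined by local contrast can be sketched in NumPy for a single edge direction. The luma weights, the 0.1 threshold and the 2.0 dominance factor are illustrative placeholders, not the constants used in the actual technique:

```python
import numpy as np

def detect_left_edges(img, threshold=0.1, factor=2.0):
    """Classify vertical edges (between each pixel and its left neighbor)
    from luma contrast, then apply local contrast adaptation.

    Conceptual single-direction sketch: `threshold` and the dominance
    `factor` are illustrative placeholders, not tuned constants.
    """
    # Luma (Rec. 709 weights) from an (H, W, 3) float image in [0, 1].
    luma = img @ np.array([0.2126, 0.7152, 0.0722])

    # Contrast delta between every pixel and its left neighbor.
    delta = np.abs(luma - np.roll(luma, 1, axis=1))
    delta[:, 0] = 0.0                      # no neighbor beyond the border

    # Plain binary classification: an edge exists where contrast is high...
    edges = delta > threshold

    # ...refined by local contrast adaptation: the edge is discarded when
    # an adjacent, parallel edge is much stronger, since the weaker edge
    # would not be perceived on its own.
    neighbor_max = np.maximum(np.roll(delta, 1, axis=1),   # edge to the left
                              np.roll(delta, -1, axis=1))  # edge to the right
    edges &= (factor * delta) >= neighbor_max
    return edges
```

On a black-to-white step image this keeps only the strong boundary column; a faint step sitting next to a much stronger one is suppressed, mirroring the perceptual argument made in the bullet list above.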

Table 1: Supported features for a selection of filter-based antialiasing techniques. Memory footprint is given with respect to no antialiasing (in times the size of a non-multisampled 32-bit wide backbuffer). Performance is given for 1080p on a NVIDIA GeForce GTX 295, with the exception of: a) Reshetov's [2009] CPU-based implementation, which is measured on a Core i7 2620M @ 2.7GHz; b) AMD's exclusive MLAA [AMD 2010], which is measured on an AMD Radeon HD 6870; and c) SRAA [Chajdas et al. 2011], whose times come from a GeForce GTX 480 (more powerful than the GeForce GTX 295). Please note that our SMAA 2x and 4x times include the 0.3 ms of the mandatory 2x resolve.
1 Has accurate distance searches when not using bilinear acceleration. 2 The quality of the diagonals in the case of SRAA is limited to the accuracy of regular gradients. 3 For a deferred rendering approach there would be no additional cost. 4 Note for the reviewers: we plan on getting this performance number for the final version of the article. 5 For fairness, we measured the 4x mode of our algorithm on the same GPU, yielding a performance of 1.82 ms.

                               [Reshetov 2009]   [AMD 2010]   [Jimenez et al. 2011b]   [Lottes 2011]     [Andreev 2011]   [Chajdas et al. 2011]
                                 MLAA             MLAA             MLAA                FXAA I              DLAA                SRAA                                 SMAA
                                                                                                                                                    1x       S2x           T2x      4x
   Sharp geometric features                                                                                                      yes                yes      yes            yes     yes
          Diagonals                                                                                                           limited2              yes      yes            yes     yes
      Subpixel features                                                                                                          yes                         yes            yes     yes
    Supersampled shading                                                                                                                                                    yes     yes
  Local contrast adaptation                                                             implicit          implicit            yes (n/a)             yes       yes           yes     yes
  Accurate distance searches      yes               yes                                depends1             yes               yes (n/a)             yes       yes           yes     yes
     Accurate gradients∗          yes               yes              yes                  yes               yes                                     yes       yes           yes     yes
   Sharpness preservation∗       medium             low              high               medium            medium                 low               high      high          high    high
        Ghosting-free             yes               yes              yes                  yes               yes                  yes                yes                     yes

     Input (color/depth)         1x n/a           1x n/a           1x 1x                1x n/a             1x n/a              1x 4x               1x n/a    2x n/a     1x n/a     2x n/a
      Memory footprint             0                0               1.5x                   0                 1x              0/4x/8x3                2x        3x         5x         5x
        Performance              350 ms          6.6 ms5           0.89ms               2.04ms              n/a4              2.5 ms              1.07 ms   1.61 ms    1.14 ms    1.88 ms

color discontinuities, and classifies them according to a series of pre-defined patterns (Z, L and U shapes). For each of these, a re-vectorization is performed in order to approximate the original shape. This re-vectorization takes perceptual issues into account, and defines a coverage area for each pixel belonging to an edge (edgel); this area guides the weight in the final blending. Given that the actual information behind the pixel is already lost, this blending is done effectively with the neighbors, since our visual system assumes that, on edges, they will have a similar color to the actual background. Detecting edges based on color information translates into seamless handling of geometric and shading aliasing. However, since the only input is the original aliased image, subpixel features lost during the render process cannot be recovered.

Reshetov's original implementation provides great results, but it runs exclusively on the CPU and is thus impractical for real-time scenarios. This triggered a number of real-time implementations that run on different hardware platforms, such as the GPU [Biri et al. 2010; AMD 2010; Jimenez et al. 2011b], Playstation 3 SPUs, and hybrid approaches that use both CPU and GPU [Jimenez et al. 2011a]. Topological reconstruction antialiasing (TMLAA) [Biri 2011] uses topological information to recover subpixel features from the final image. However, this reconstruction can only fill one-pixel-sized holes, and it is not clear how well its assumptions work for animated sequences. Fast approximate antialiasing (FXAA) [Lottes 2011] approaches the subpixel problem by detecting these features and attenuating them, obtaining good results for static images but not improving temporal coherence and stability.

Unfortunately, but understandably given their business nature, some companies choose not to reveal the details of their specific implementations. Among those published, Jimenez's MLAA [Jimenez et al. 2011b] is the most documented and one of the fastest. Its key feature is the use of texture structures (encoding information about the location of the edges, their associated patterns and coverage areas, as well as precomputed areas for blending). Their algorithm works in three passes: edge detection (performed using depth or luminance information), pattern detection plus calculation of the blending areas, and final blending. Pattern detection is performed by searching for both ends of an edge (distance searching), halving the necessary iterations by using hardware bilinear filtering. Once the ends are reached, the algorithm looks at the crossing edges, which provide a mechanism for straightforward pattern classification. With the length of the edge and the crossing edges information, the coverage area is retrieved with a single access to a precomputed texture, and used for the final blending.

Deviating from purely image-based solutions, in Hugh Malan's distance-to-edge antialiasing (DEAA) the forward rendering pass calculates and stores the distances of each pixel to nearby triangle edges with subpixel precision. The post-process pass uses this information to derive blending coefficients. Similar in spirit, geometric post-process antialiasing (GPAA) [Persson 2011] uses additional geometric information for coverage calculation. This produces almost perfect gradients with great temporal coherence. However, due to precision issues and problems handling subpixel features, artifacts may become clearly visible, breaking previously smooth gradients. Also, the method requires processing the geometry twice, which can be prohibitively costly and thus reduces the practicality of the method. Providing better handling of subpixel features in deferred engines, subpixel reconstruction antialiasing (SRAA) [Chajdas et al. 2011] combines regular shading at final display resolution with supersampled geometry maps (normals and depth). A super-resolution color image is then built by propagating the shaded samples over those maps; the resulting image is finally down-sampled to the final screen resolution. Despite the good results, they sometimes look merely on par with other morphological antialiasing techniques, at the cost of additional geometry processing and a possible increase in memory consumption. The directionally adaptive edge antialiasing of Iourcha et al. [2009] also leverages MSAA subsample values for better gradient and color estimation. However, its execution times are on the high side, reducing the practicality of the method.

Finally, in very demanding real-time scenarios with complex shading and geometry, temporal antialiasing approaches have regained interest recently [Kaplanyan 2010; Nehab et al. 2007; Yang et al. 2009]. Our work also takes this aspect into account, featuring handling of subsamples from the temporal domain. In a slightly different context, the work of Yang and colleagues [Yang et al. 2011] aims at restoring jagged edges that occur after nonlinear image processing filters, for which they require that the original, alias-free

image be available.

Table 1 provides a detailed overview of the features supported by a representative selection of filter-based antialiasing techniques, including our work. This selection covers most of the recent major publications in the field, and includes all those for which implementations are available and are currently in use, in order to perform fair comparisons.

Figure 3: Overview of the key weaknesses of post-processing antialiasing filters and how the core elements of SMAA handle them.

3    SMAA: Features and Algorithm

In this section we explain the core components of SMAA, their motivation and the main algorithmic ideas (see Figure 3 for a schematic view). Specific implementation details are included in the Annex. We build on the existing MLAA pipeline, improving or completely redefining every step (see Figure 4). In particular, we improve edge detection by using accurate distance searches and local contrast adaptation, which allow for more reliable results. In a similar fashion, we enhance the pattern detection by the use of sharp geometric features and diagonal pattern detection, which allow more accurate gradients to be generated on the contours of objects. Last, we show for the first time how morphological antialiasing can be efficiently multisampled and supersampled in the spatial and temporal domains respectively. We recommend the reader to view the images in the digital version of this paper, since the results may not be fully appreciated in a printed version.

Given the modular nature of SMAA, several configurations are possible. In particular, we have found the following presets to be the most useful:

    • SMAA 1x: includes accurate distance searches, local contrast adaptation, sharp geometric features and diagonal pattern detection.
    • SMAA S2x: includes all SMAA 1x features plus spatial multisampling.
    • SMAA T2x: includes all SMAA 1x features plus temporal supersampling.
    • SMAA 4x: includes all SMAA 1x features plus spatial and temporal multi/supersampling.

Sharp geometric features detection (extended patterns): The re-vectorization of edges performed by techniques such as MLAA and FXAA I tends to round sharp corners in the image. The crossing edges used for pattern detection are just one pixel long and thus cannot distinguish a jagged edge from an object's corner (see Figure 5). We make the key observation that crossing edges in contour lines have a maximum size of one pixel, whereas for features like sharp corners this length can be longer. We thus use 2-pixel-long crossing edges, which allows for better, richer pattern detection, providing more information about the true shape of the object. This extension is critical when combining MLAA with MSAA 2x, since we smooth corners conservatively in order to allow multi/supersampling to reconstruct the real shape (more details in the description of spatial multisampling below). By using bilinearly filtered accesses we are able to fetch two edges at once; the cost of fetching this additional information is almost negligible.

Figure 5: Extended crossing edges (e1, e2, e3, e4) and corresponding sampling positions (colored dots) for the bilinear filter optimization (horizontal case). Notice how crossing edges of size one (as used by conventional MLAA) would consider e3 = e4, misinterpreting the real shape of the object (see Annex for details).

Our new precomputed area texture encodes 256 pattern combinations (see Figure 6). To avoid slower access due to its larger size (which may occur due to cache misses), and leveraging the fact that the gradients in the texture are almost linear, we combine downsampling of the areas texture with bilinearly filtered accesses. This way we can handle longer distances (up to 36 pixels on each side) at no additional cost. The memory footprint is thereby reduced from 2.59 MB to 60 KB. This strategy only requires adding two more memory accesses to the area texture.

For each pattern, the coverage area used for blending is obtained by combining the re-vectorization of the twelve basic shapes shown in Figure 7. The rules for combining shapes are simple (see Annex). Figure 8 shows the result of our sharp geometric features detection.

Figure 6: Representative sections of the extended patterns map for handling sharp features (the complete map and the corresponding area texture are in the supplementary material). Instead of the 16 cases handled by Jimenez's MLAA [Jimenez et al. 2011b], we are able to detect 256 different patterns, while introducing a minimal performance hit. Holes inside the map are impossible to access due to the bilinear fetch, but are kept in the texture to simplify queries.

Diagonal pattern detection: All existing filter-based techniques (except SRAA) search for patterns made up exclusively of horizon-

Figure 4: MLAA finds discontinuities in the image and classifies the resulting edges according to a series of predefined shapes. Coverage areas are then calculated, and used for final blending. (a) Input image, with the intended approximation outlined by red lines and the coverage areas shown in green. (b) Predefined shapes in the original algorithm [Reshetov 2009]. (c) A representative portion of the areas texture in Jimenez's GPU implementation [Jimenez et al. 2011b]. (d) Edge detection. (e) Area calculation. (f) Final blending. Our SMAA algorithm redefines the whole pipeline by extending (b) for sharp geometric features and diagonals handling, which translates into new pre-calculated textures (c). Local contrast adaptation removes spurious edges in (d). Extended pattern detection and accurate searches improve accuracy in (e). SMAA can handle additional samples in (f) for real subpixel features and temporal supersampling.
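The distance search that drives the pattern classification of passes (d) and (e) can be illustrated with a scalar sketch. This is the naive one-pixel-per-step walk; as described earlier, GPU implementations halve the iteration count with hardware bilinear filtering. The function name and the boolean-row encoding are purely illustrative:

```python
def search_line_ends(edge_row, x):
    """Walk left and right from edge pixel `x` along a horizontal edge
    line, returning the distances to both line ends.

    `edge_row` is a sequence of booleans marking edge pixels on one row.
    Naive one-pixel-per-step version of the distance search.
    """
    left = 0
    while x - left - 1 >= 0 and edge_row[x - left - 1]:
        left += 1
    right = 0
    while x + right + 1 < len(edge_row) and edge_row[x + right + 1]:
        right += 1
    # With these two distances plus the crossing edges found at each end,
    # the pattern (Z, L or U shape) is classified and the coverage area
    # fetched from the precomputed texture.
    return left, right
```

For example, querying x = 2 on a row whose edge pixels span columns 1 to 4 reports one edge pixel to the left of the query and two to the right.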

tal and vertical edges. This translates into badly aliased results (in space and time) for diagonal lines (see Figure 9).

Figure 7: The twelve basic (top row) and extended (bottom row) shapes whose re-vectorizations (orange, blue and green lines) are combined to calculate the rest of the patterns. For sharp geometric features, in the extended shapes case, subtle re-vectorizations are performed in order to get better results, avoiding sharp color

Figure 8: Comparison between no antialiasing, a regular MLAA approach, and the SMAA results. Notice how SMAA respects the original shape of the objects much better, while MLAA tends to round the objects' shape. The subtle changes introduced by SMAA are due to the fact that in real scenarios it is very difficult to find squares perfectly aligned with the pixel grid (as seen on the left).

Figure 9: MLAA (left) and SMAA (right) re-vectorizations (blue lines) of almost-45° diagonals. Due to the handling of diagonal patterns (marked in green for both cases), SMAA reconstructs the diagonal edge accurately.

Figure 10: Diagonal patterns map (left) and their precomputed area texture (right). Re-vectorization of patterns (in orange) is not as straightforward as for orthogonal patterns, as it needs to match between patterns and also behave symmetrically. So, if there is no crossing edge at one end, we average the two possible re-vectorizations to ensure smooth transitions. These are just conceptual diagrams for easier understanding. Figure 14 shows the real positions of the crossing edges, and we refer the reader to the source code for the exact re-vectorization points.

We introduce a novel diagonal search, and obtain the corresponding
coverage areas. We look for two types of diagonals: from top-left
to bottom-right, and from bottom-left to top-right. If the diagonals
found are at least two pixels long, we treat them as described in this       less impact at subpixel level (see Figure 11). It thus seems nat-
paper; otherwise, we treat them as regular edges. This search is per-        ural to combine the accurate gradients produced by MLAA with
formed before the regular search for horizontal and vertical edges,          the subpixel features provided by MSAA 2x. However, a naive
and continues until we find the top-right or top-left boundaries, re-         approach rendering MSAA 2x and running MLAA on each sub-
spectively. The final blending pass handles all kinds of patterns             sample would yield an under- or overestimation of pixel coverages.
seamlessly (diagonal and regular). Once distances are obtained,              This is due to the diagonal directional blur on the edges caused by
crossing edges must be fetched. Figure 10 shows the diagonal pat-            re-vectorization of edges around each subsample in the MSAA 2x
terns and their corresponding pre-calculated areas, stored in an ad-         pass (see Figure 12, left and middle), which is specially critical with
ditional area texture for efficiency.                                         subpixel features. Instead, we offset the re-vectorizations of each of
                                                                             the two subsamples to the center of the pixel (see Figure 12, right),
Spatial multisampling: To properly resolve subpixel features, we             matching the slopes of both re-vectorized lines.
rely on spatial multisampling and make the key observation that, in
general, 2x multisampling is sufficient for these cases. More ex-             We consequently offset area calculations by -0.25 and 0.25. In or-
pensive multisampling, while obviously good in other cases, has              der to ensure high performance, this is pre-calculated by generating

two different pre-computed area textures, one for each subsample.            Using one-dimensional bilinear interpolation along the edge may
Further optimizations are described in the Annex. Our optimized              lead to inaccuracies, due to possible missed perpendicular edgels
pipeline allows to cut times by 20%, at the expense of an increased          and/or the potential impossibility to discern which of two pix-
memory footprint (from 2x the size of the back-buffer to 3.5x).              els contains an edgel in the final step of a search [Jimenez et al.
Nevertheless, the increased performance outweighs the burden of              2011b]. We introduce a novel two-dimensional bilinear-filtered ac-
the extra memory usage.                                                      cess which solves both problems without incurring in any perfor-
                                                                             mance loss. Our approach allows to fetch the crossing and regular
                                                                             edges of the searched pixels in a single memory access, and stop
                                                                             accordingly depending on the returned value (see Annex).

                                                                             4   Results and Discussion
                                                                             The performance metrics given in this section are measured on a
                                                                             NVIDIA GeForce GTX 295 using 1080p inputs. Given that the
                                                                             highest quality image we had with the SRAA method had been ren-
Figure 12: Left, top: usual 2x multisampling pattern. Left, bottom:          dered at SSAA 16x, we have opted to compare against those set-
For combining multi/supersampling with MLAA (STMA S—T2x),                    tings, instead of multisampling. Note that in any case this simply
we have to offset the area calculations so that the subsamples on            rises the bar in terms of quality.
top and bottom (pink and orange) are moved to the position in the
center (blue). This ensures that the produced gradients match when           Figure 13 shows a comparison of our technique with a selection of
averaging. Middle: in STMA S2x case, MLAA area calculations                  antialiasing methods. In the following text we will guide the reader
are devised to produce a re-vectorization around the center of the           through this comparison matrix, row by row. We recommend com-
pixel (blue). For STMA S2x, these areas must be offset by −0.25              paring our technique specially with those that are not accurate for
(pink) and +0.25 (orange). Right: For combining spatial multi-               specific features (marked with red or orange dots). We refer the
sampling and temporal supersampling with MLAA, we combine two                reader to the additional material for more examples of our tech-
jittered results of STMA S2x (purple and green). In this case, we            nique.
use different areas than the 2x modes given that the position of the         Local contrast: the first row shows how traditional MLAA usually
subsamples with respect to the center has changed.                           fails to properly detect edges in the presence of gradients. This is
                                                                             due to the fact that the binary edge maps in which they are based do
Temporal supersampling: The same key idea of spatial multisam-               not take into account the surroundings of a pixel to decide whether
ple antialiasing can be exploited to accurately combine jittered sub-        an edge is visible or not. Note how our approach is able to detect
samples over time, effectively supersampling shading. For this, the          and correctly antialias these difficult zones.
camera is jittered in the temporal domain at subpixel level, follow-
ing a specific sampling pattern, which depends on the SMAA mode               Accurate searches: this row shows the artifacts produced by bi-
(detailed in the Annex). We then simply resolve the color buffer             linear accelerated searches. Note that the two techniques using
of the previous frame with the current one. To attenuate ghosting            this optimization [Jimenez et al. 2011b; Lottes 2011] fail to recon-
artifacts common to most sampling approaches in the temporal do-             struct gradients properly (see the dotted artifacts near and around
main, re-projection techniques should be used. These have been               the largest vertical pole, specially when it meets the rooftop). Our
shown to be both robust and practical, and have been used in high            approach is able to leverage this acceleration while still reproducing
profile games [Jimenez et al. 2011a]. This only adds an additional            accurate gradients.
memory overhead of 2x the size of a 32-bit back-buffer, since the            Accurate gradients: We show the quality of gradients, measured as
current and previous color buffer need be additionally stored.               the number of color steps produced. In general, post-process an-
Local contrast adaptation: Color edge detection may produce                  tialiasing is able to synthesize very fine gradients, surpassing even
spurious edges, usually caused by crossing edges forcing early               SSAA 16x by a great margin. Also it can be seen how the SRAA
stops during the search. This of course affects pattern detection.           approach is unable to generate the same level of gradient steps as
To avoid them, we perform an adaptive double threshold based on              the other filter-based antialiasing techniques.
the observation that large contrasts in one direction tend to mask           Diagonal pattern detection: This is a difficult example for most
contrast in other directions. Let c0 be the color of a pixel, we then        techniques, given that these shapes are usually not taken into ac-
have:                                                                        count. Note how we are able to totally remove aliasing, yielding
        di   =    |ci − c0 |;                  i = 1..8                      perfectly defined diagonal lines.
        D    =    max(di )                                                   Sharp geometric features: Our technique manages to preserve the
        ej   =    dj > T ∧ dj > 0.5D;          j = 1, 3, 5, 7     (1)        sharp shape of the building, specially at the corners, whereas most
                                                                             filter-based antialiasing techniques introduce some degree of round-
where i and j iterate over the 8- and 4-neighborhood of the pixel            ness, specially visible in the original MLAA [Reshetov 2009]. Pre-
respectively, T is the desired threshold (usually a value between            serving this information is vital for multi/supersampling to recon-
0.05 and 0.2), and e is the resulting binary value that denotes if an        struct the accurate shape of an object.
edge has been included in the edges texture. We set the delta with
a 4-neighbor to be higher than the 50% of the maximum delta in               Subpixel features: In these rows it can be seen how techniques us-
the 8-neighborhood. This simple change allows to use the efficient            ing 1x inputs are unable to reconstruct the shape of subpixel fea-
binary edge maps while performing similarly to techniques that take          tures, producing artifacts like spurious pixels, gaps in surfaces and
into account real-valued edge contrast deltas in each iteration of the       distracting effects due to subsampling. Note how SRAA improves
search loop (as done for example, in FXAA [Lottes 2011]).                    subpixel features over regular 1x-based techniques. However, it can
                                                                             be seen how our approach is able to better preserve the connectiv-
Accurate distance searches: Key to pattern detection is obtaining            ity of the lines, resembling more faithfully the results obtained with
accurate edge distances (lengths to end of the line at both sides).          SSAA 16x.
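The local contrast adaptation of Equation (1) can be sketched in a few lines; this is an illustrative Python model (the neighbor indexing convention and the scalar deltas are assumptions, since the shader operates per channel on color deltas):

```python
def local_contrast_edges(deltas, T=0.1):
    """Adaptive double threshold of Equation (1).

    deltas: dict mapping neighbor index 1..8 to |c_i - c_0|
    (indices 1, 3, 5, 7 are assumed to be the 4-neighborhood).
    Returns the binary edge flags e_j for j in (1, 3, 5, 7).
    """
    D = max(deltas[i] for i in range(1, 9))           # max delta in the 8-neighborhood
    return {j: deltas[j] > T and deltas[j] > 0.5 * D  # masked by 50% of the max contrast
            for j in (1, 3, 5, 7)}

# A strong delta in one direction (0.4) masks a weaker one (0.15)
# that would pass a plain threshold of T = 0.1:
d = {1: 0.15, 2: 0.0, 3: 0.4, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0}
e = local_contrast_edges(d, T=0.1)
# e[3] is True; e[1] is suppressed because 0.15 <= 0.5 * 0.4
```

This reproduces the masking behavior described in the text while keeping the edge map binary.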

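The two-pixel minimum used to classify diagonals (Section 3, diagonal pattern detection) can be sketched with a hypothetical helper over a binary map of diagonal edgels; the real implementation searches the edges texture in both diagonal orientations:

```python
def diagonal_length(edges, x, y, dx, dy):
    """Walk along a diagonal direction (dx, dy), counting consecutive
    diagonal edgels, stopping at the image boundary."""
    h, w = len(edges), len(edges[0])
    n = 0
    while 0 <= y < h and 0 <= x < w and edges[y][x]:
        n += 1
        x += dx
        y += dy
    return n

def is_diagonal_pattern(edges, x, y):
    """Treat a run as a diagonal pattern only if it is at least two
    pixels long; shorter runs fall back to regular edge handling."""
    # top-left -> bottom-right, and bottom-left -> top-right
    return (diagonal_length(edges, x, y, 1, 1) >= 2 or
            diagonal_length(edges, x, y, 1, -1) >= 2)

edges = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 1]]
# The edgel at (1, 1) starts a 2-pixel top-left -> bottom-right run.
```

Isolated single edgels are rejected, which is what routes them to the regular horizontal/vertical search.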
[Figure 11 panels: MSAA 1x, MSAA 2x, MSAA 8x, SMAA 1x, SMAA 4x Naive, SMAA 4x Offsets, SMAA 4x Offsets and Sharp Geom. Features]

Figure 11: A difficult case for both MSAA 1x and SMAA 1x: a grid-like shape at mid-distance (top) prevents the reconstruction of accurate coverage; at long distance (bottom, zoomed to minimize negative space), the continuity of the grid-like shape is destroyed, preventing its recovery. A naive SMAA 4x approach improves the quality, at the expense of blurring. Using extended patterns for dealing with sharp geometric features, together with the correct offsets, allows for a more accurate area estimation, resembling a quality between MSAA 2x and 8x, depending on the size of the features.
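The temporal resolve described under "Temporal supersampling" amounts to averaging the current jittered frame with the previous one. A minimal sketch, ignoring the re-projection step that attenuates ghosting (buffers are flat gray-value lists for illustration; the jitter values follow the T2x pattern from the Annex):

```python
T2X_JITTERS = [(0.25, 0.25), (-0.25, -0.25)]  # alternated each frame

def jitter_for_frame(frame_index):
    """Subpixel camera jitter for SMAA T2x."""
    return T2X_JITTERS[frame_index % 2]

def temporal_resolve(current, previous):
    """Blend the current and previous color buffers 50/50."""
    return [0.5 * c + 0.5 * p for c, p in zip(current, previous)]

resolved = temporal_resolve([1.0, 0.0], [0.0, 1.0])
# -> [0.5, 0.5]
```

A production implementation would re-project the previous frame before blending, as the text notes.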

Sharpness preservation: Some techniques introduce blur in the image. In row (b) of this feature, we can see how the ability to lower the threshold in the case of SMAA S2x (as a result of a better fallback) accounts for an increase of sharpness with respect to SMAA 1x.

Subpixel SMAA modes not only allow actual handling of subpixel features, but also provide a better fallback in the cases where the edge or pattern detection fails in low-contrast zones (as a result of using too high a threshold). This upgrades the minimum quality of SMAA S2x from 1x to 2x. Having a better fallback allows increasing the edge detection threshold, letting barely visible edges be smoothed by 2x. This is a reasonable approach, as the luminance delta of such edges will be low, and thus the 2-step gradients produced by 2x will be sufficient. It also improves the temporal stability of almost vertical or horizontal lines by a significant margin.

Performance-wise, in a forward rendering scenario SMAA 4x and SMAA T2x are about 1.5x and 4.8x faster than MSAA 8x, respectively (the first taking into account the required 2x multisampling overhead). With respect to memory consumption, this configuration requires 43% and 10% of the memory used by MSAA 8x, in a forward and a deferred context respectively. Note that we are able to perform better than MSAA 8x, while still delivering better overall quality than 16x (with the exception of areas with lots of subpixel features). In the case of a deferred engine, using MSAA 8x would incur an excessive drop in performance [Andersson 2011], given the massive bandwidth required and the requirement of supersampling the edges at 8x.

In SMAA 1x and T2x modes, the execution times are on par with Jimenez's MLAA [2011b] and lower than all the other algorithms discussed in this paper (see Table 1). The only exception is DLAA but, as Figure 13 shows, its increase in speed comes at a penalty in overall image quality. The S2x and 4x modes are obviously more expensive due to multisampling (a 1.7 ms overhead for rendering at 2x is expected), but they are still on par with the techniques that can handle subpixel features (SRAA and MSAA 8x), while yielding superior results. Note that SRAA requires 4x multisampling of both depth and normals, while our approach only multisamples color information at 2x. In the case of a deferred renderer, our approach would require supersampling the edges; however, stencil-masked implementations still allow for efficient performance.

The overhead introduced by each of our solutions is either negligible or very affordable. In particular, the local contrast adaptation, the sharp geometric features detection and the accurate distance searches each cost less than 0.01 ms; diagonal processing introduces a small overhead of 0.05 ms, and temporal supersampling an overhead of 0.07 ms. Spatial multisampling adds 0.54 ms for filtering the image and an additional average of 1.7 ms for the overhead of rendering the scene at 2x. The delta that SMAA 4x adds on top of a 2x forward-rendered scene is as little as 1.58 ms (notwithstanding the 0.3 ms of the mandatory resolve that we perform while calculating lumas), making it attractive for any scenario that can afford such a small multisample count.

5   Conclusions

We have presented a technique that tackles all the weak points remaining in filter-based antialiasing solutions. Each core element can work independently from the others, which increases flexibility. In any configuration, our technique provides quality enhancements over existing methods. The novel combination of improved MLAA strategies with spatial and temporal multi/supersampling ensures that no edge remains unprocessed, while minimizing the drawbacks of scaling the number of samples. SMAA 1x delivers very accurate gradients, temporal stability and robustness, while introducing minimal overhead, making it an obvious choice for low-end configurations. We believe SMAA T2x offers a very attractive tradeoff for any kind of rendering engine (deferred or forward), avoiding 2x multisampling while still reconstructing subpixel detail. SMAA S2x and SMAA 4x are better options in deferred engines, enabling them to finally match the quality of forward rendering engines, without the performance drops due to expensive multisampling.

As future work, we would like to explore different re-vectorizations that may better fit the look of natural objects. All the coverage areas in this work are obtained from straight lines, which is reminiscent of the underlying geometry of the scene. Since we have shown how to handle complex area calculations with precomputed tables, it could be interesting to study the use of other re-vectorizations (maybe using Bézier curves) in order to achieve more natural results for curved shapes.

References

Akeley, K. 1993. RealityEngine graphics. In SIGGRAPH '93: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, 109–116.

AMD. 2010. Morphological anti-aliasing.

Andersson, J. 2010. 5 major challenges in interactive rendering. In SIGGRAPH '10: ACM SIGGRAPH 2010 Courses.

Andersson, J. 2011. DirectX 11 rendering in Battlefield 3. In Game Developers Conference 2011.

Andreev, D. 2011. Directionally localized anti-aliasing (DLAA). In Game Developers Conference 2011.

Biri, V., Herubel, A., and Deverly, S. 2010. Practical morphological antialiasing on the GPU. In ACM SIGGRAPH 2010 Talks, SIGGRAPH '10, 45:1–45:1.

Biri, V. 2011. Morphological antialiasing and topological reconstruction. In GRAPP 2011.

Bloomenthal, J. 1983. Edge inference with applications to antialiasing. SIGGRAPH Comput. Graph. 17 (July), 157–162.

Carpenter, L. 1984. The A-buffer, an antialiased hidden surface method. In SIGGRAPH '84: Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques.

Catmull, E. 1978. A hidden-surface algorithm with anti-aliasing. SIGGRAPH Comput. Graph. 12, 3, 6–11.

Chajdas, M. G., McGuire, M., and Luebke, D. 2011. Subpixel reconstruction antialiasing for deferred shading. In Symposium on Interactive 3D Graphics and Games, 15–22.

Dippé, M. A. Z., and Wold, E. H. 1985. Antialiasing through stochastic sampling. In SIGGRAPH '85: Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, 69–78.

Engel, W. 2008. Light pre-pass renderer.

Geldreich, R., Pritchard, M., and Brooks, J. 2004. Deferred lighting and shading. In Game Developers Conference 2004.

Hargreaves, S. 2004. Deferred shading. In Game Developers Conference 2004.

Iourcha, K., Yang, J. C., and Pomianowski, A. 2009. A directionally adaptive edge anti-aliasing filter. In Proceedings of the Conference on High Performance Graphics 2009, 127–133.

Isshiki, T., and Kunieda, H. 1999. Efficient anti-aliasing algorithm for computer generated images. In ISCAS (4), 532–535.

Jimenez, J., Gutierrez, D., Yang, J., Reshetov, A., Demoreuille, P., Berghoff, T., Perthuis, C., Yu, H., McGuire, M., Lottes, T., Malan, H., Persson, E., Andreev, D., and Sousa, T. 2011. Filtering approaches for real-time anti-aliasing. In ACM SIGGRAPH Courses.

Jimenez, J., Masia, B., Echevarria, J. I., Navarro, F., and Gutierrez, D. 2011. Practical morphological anti-aliasing. In GPU Pro 2. AK Peters Ltd., 95–113.

Kaplanyan, A. 2010. CryENGINE 3: Reaching the speed of light. In SIGGRAPH '10: ACM SIGGRAPH 2010 Courses.

Koonce, R. 2007. Deferred shading in Tabula Rasa. In GPU Gems 3. Addison Wesley, 429–457.

Lottes, T. 2011. FXAA. Technical report.

Nehab, D., Sander, P. V., Lawrence, J., Tatarchuk, N., and Isidoro, J. R. 2007. Accelerating real-time shading with reverse reprojection caching. In Proceedings of the 22nd ACM SIGGRAPH/EUROGRAPHICS Symposium on Graphics Hardware, 25–35.

Persson, E. 2011. Geometric post-process anti-aliasing.

Reshetov, A. 2009. Morphological antialiasing. In Proceedings of the Conference on High Performance Graphics 2009, 109–116.

Schilling, A., and Strasser, W. 1993. EXACT: Algorithm and hardware architecture for an improved A-buffer. In SIGGRAPH '93 Conference Proceedings, 85–91.

Shishkovtsov, O. 2005. Deferred shading in S.T.A.L.K.E.R. In GPU Gems 2. Addison Wesley, 143–166.

Sousa, T. 2007. Vegetation procedural animation and shading in Crysis. In GPU Gems 3. Addison Wesley, 373–385.

van Overveld, C. W. A. M. 1992. Application of morphological filters to tackle discretization artifacts. Vis. Comput. 8 (April), 217–232.

Yang, L., Nehab, D., Sander, P. V., Sitthi-amorn, P., Lawrence, J., and Hoppe, H. 2009. Amortized supersampling. ACM Trans. Graph. 28 (December), 135:1–135:12.

Yang, L., Sander, P. V., Lawrence, J., and Hoppe, H. 2011. Antialiasing recovery. ACM Trans. Graph. 30 (December).

Young, P. 2006. Coverage sampled antialiasing. Technical report.

A   Implementation Details

We now describe some specific details of our implementation.

Extended crossing edges for sharp geometric features detection: As can be seen in Figure 5, the sampling coordinates with respect to the current pixel (marked in orange) are: e1 = (−dl, −1.75), e2 = (dr + 1, −1.75), e3 = (−dl, 0.75), e4 = (dr + 1, 0.75). Pattern classification is then performed using cl = 4(5e3 + e1) and cr = 4(5e4 + e2), and the area is retrieved by querying the texture with (cl, dl, cr, dr).

Rules for handling extended patterns: Patterns involving just basic shapes can be combined directly, following simple rules (refer to Figure 7):

   • Basic Zs (shapes (1 + 4) and (2 + 3)) have preference over basic Us (shapes 5 and 6). For example, pattern (1, 6) from Figure 6 is re-vectorized as (2 + 3) instead of (1 + 2).

   • Combinations of shapes (1 + 3) and (2 + 4) should be avoided, since they produce artifacts. They can be ignored, as in pattern (0, 6), or avoided using the previous rule, as in pattern (1, 6).

   • Combined Zs are transformed into basic Zs whenever possible (for example, shapes (2 + 3) instead of (7 + 4) for pattern (5, 4)), except when that would break (7 + 9) or (8 + 10) shapes (as in pattern (1, 24); please refer to the supplemental material for the complete texture displayed in Figure 6).
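The classification formula for the extended crossing edges can be checked numerically. This sketch assumes (it is not stated explicitly here) that each fetched edge value e is quantized in steps of 0.25, so that cl = 4(5·e3 + e1) is a base-5 encoding yielding a distinct integer index per combination:

```python
def classify(e_near, e_far):
    """Combine two crossing-edge fetches (assumed quantized in 0.25
    steps, i.e. 4*e in 0..4) into a single pattern index:
    4*(5*e_far + e_near) = 20*e_far + 4*e_near is a base-5 encoding,
    so every combination maps to a unique integer in 0..24."""
    return round(4 * (5 * e_far + e_near))

# All 25 combinations of the two quantized fetches produce 25 distinct indices:
indices = {classify(a / 4, b / 4) for a in range(5) for b in range(5)}
```

The uniqueness of the encoding is what makes a single precomputed-texture lookup per side sufficient.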

Crossing edges in diagonals: For diagonals, the position of the                     the same sampling pattern as MSAA 2x.
crossing edges is not as obvious as for the horizontal-vertical lines
given their asymmetric nature. Figure 14, left, shows the diag-                  • For SMAA 4x, the scene is rendered at 2x, alternating jitters
onal pattern (1, 1), according to the pattern map in Figure 10.                    of (−0.2, 0.2) and (0.2, −0.2) (green and purple subsamples
Diagonal edges are marked in green. The blue line depicts the                      in Figure 12, right). The patterns have been chosen to maxi-
re-vectorizations performed (blending areas are colored in green).                 mize spatial coverage of the subsamples, while ensuring they
Crossing edges are marked in orange. It can be seen how we can                     stay within the pixel. This allows the temporal resolve to
no longer use the bilinear filter optimization. However, given the                  directly combine the two spatially resolved values from the
information contained in the edges texture, where the four bound-                  previous frame, reducing the memory usage and bandwidth.
aries of each pixel are stored, we can still fetch the crossing edges              Given that now the vertical and horizontal offsets are differ-
with just two memory accesses by sampling at the orange points.                    ent, we use a different precomputed area texture for verti-
Crossing edges e1 = 2c1 + c2 and e2 = 2c4 + c3 and distances                       cal and horizontal lines. The set of areas required are 0.05,
(d1 , d2 ) from the current pixel (grey point) to the ends of the diago-           0.45, 0.55 and 0.95, which come from applying the spatial
nal (yellow points), are used to query the precomputed area texture                ±(0.25, 0.25) and temporal offsets ±(0.2, −0.2).
in Figure 10, yielding the coverage area a. The code for all the               Accurate distance searches: Given the binary information en-
different cases can be found in the website.                                   coded in the edges texture, we use a single bilinear filtered access
                                                                               for fetching at the same time the crossing edges and the two line
                                                                               edgels (vertical and horizontal lines in Figure 15, respectively) at
                                                                               each 2-pixel step. We offset the horizontal coordinates by −0.25
                                                                               in order to distinguish between the two edgels. Similarly, we intro-
                                                                               duce a vertical offset of −0.125, which allows checking and dis-
                                                                               ambiguating the crossing edges. So, offsetting the sampling coor-
                                                                               dinates by (−0.25, −0.125) we are effectively hashing the combi-
                                                                               nation of crossing and line edges into a unique RG value, where
the red and green channels of the edges texture contain information about the crossing and line edges respectively. At each step, the search is stopped if either R > 0 (meaning that there is a crossing edge in the 2-pixel step) or G < 0.875. Because of the position of the sampling coordinates, when both line edgels are activated (i.e. with values equal to one) the bilinearly filtered value will be more than 0.875 · (0.25 · 1 + 0.75 · 1), where 0.875 comes from the vertical interpolation, and 0.25 and 0.75 come from the horizontal linear interpolation. In practice, we use a slightly lower threshold because of bilinear access precision issues.

Once the search has stopped, we need to identify the final pixel-size increase of the last step, which can be 0, 1 or 2. Directly decoding the hashed value would involve rather branchy assembly code with complex conditionals; instead, we pre-calculate all possible combinations of RG values in a 33 × 33 two-channel texture, requiring a simple texture access.

Figure 14: Examples of diagonal patterns (1, 1) (left) and (2, 1) (right) (see Figure 10). Left: Despite the added difficulty of diagonals, crossing edges are obtained with just two memory accesses by sampling at the orange points. Right: For diagonal lines, we blend between bottom and top pixels (bi and ti respectively).

Blending with diagonals: We choose to blend vertically; therefore, the value stored in the red channel of the area texture weighs the blending of the current pixel with its top neighbor, and vice versa for the green channel (note that, as in MLAA, the blending is not symmetric). Figure 14, right, illustrates an example, where the bottom pixels bi encode the areas in the green channel of the area texture (meaning that the top pixels ti need to blend with them).
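The stop condition described above can be sketched on the CPU. This is a hedged, simplified model (not the actual shader): the edges texture is reduced to two 1-D arrays, `line` (G channel, 1.0 where a pixel has a top edge) and `cross` (R channel, 1.0 where it has a left edge), and one bilinear fetch between two texels mixes them with 0.25/0.75 horizontal weights while the 0.875 vertical weight scales the line row; the exact offsets, the crossing-channel weights and the 0.87 threshold are illustrative assumptions.

```python
# Emulated two-pixel-per-step search. On the GPU, a single bilinear
# access covering texels i and i+1 yields both channels at once; the
# final 0/1/2-pixel increase of the last step would then be decoded
# with the precomputed 33x33 two-channel texture instead of branches.

THRESHOLD = 0.87  # slightly below 0.875 to absorb bilinear precision error

def fetch(line, cross, i):
    """One bilinear access covering texels i and i+1 (weights assumed)."""
    g = 0.875 * (0.25 * line[i] + 0.75 * line[i + 1])
    r = 0.875 * (0.25 * cross[i] + 0.75 * cross[i + 1])
    return r, g

def search_left(line, cross, start):
    """Walk left two pixels per fetch; stop on a crossing edge (R > 0)
    or a broken line edge (G < THRESHOLD). Returns the stop index."""
    i = start
    while i >= 1:
        r, g = fetch(line, cross, i - 1)
        if r > 0.0 or g < THRESHOLD:
            break
        i -= 2
    return i

# Continuous top edge with a crossing (left) edge at texel 2: the search
# halts on the step whose fetch covers that crossing edge.
line  = [0, 1, 1, 1, 1, 1, 1, 1]
cross = [0, 0, 1, 0, 0, 0, 0, 0]
stop = search_left(line, cross, 7)
```

Note how the check G < 0.875 only triggers when one of the two line edgels in the step is missing, which is exactly what makes a 2-pixel step safe.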

Optimizations for spatial multisampling: For spatial multisam-
pling, the pipeline is optimized as follows:
  1. Lumas are pre-calculated, resolving the 2x input to the final
     frame-buffer at the same time using multiple render targets
     (MRT), given that both operations share the same accesses.
     The luma for each subsample is stored in the RG channels of
     the same frame-buffer, allowing us to reduce memory accesses
     from 18 to just 9 in the edge detection pass.
  2. The edges textures for both subsamples are generated in the
     same pass (using MRT), creating the stencil buffer for both
     of them at the same time. Subsequent passes will be masked
     and executed only on pixels with edges on their boundaries.
  3. In each pixel, the blending for the first subsample is performed
     as in SMAA 1x. For the second subsample, an alpha blending
     factor of α = 0.5 is used to combine the results with the first
     subsample (as the original pixel may have been altered already
     by the first subsample).
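The luma packing of step 1 and the blending of step 3 can be illustrated with a small CPU sketch. This is a hedged model under stated assumptions: Rec. 709 luma weights and a per-pixel list representation are illustrative choices, not necessarily what the shader uses, and the MRT resolve itself is only mimicked conceptually.

```python
# Step 1 (sketch): pack the lumas of both 2x subsamples into the RG
# channels of a single buffer, so the edge-detection pass later reads
# one 3x3 neighborhood (9 texels) instead of two (18 texels).

def luma(rgb):
    # Rec. 709 weights (assumption; the actual pass may differ).
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pack_lumas(subsample0, subsample1):
    """One RG texel per pixel: R = luma of subsample 0, G = subsample 1."""
    return [(luma(a), luma(b)) for a, b in zip(subsample0, subsample1)]

# Step 3 (sketch): the second subsample is combined with the already
# blended first subsample using an alpha factor of 0.5.
def blend_subsamples(color0, color1, alpha=0.5):
    return tuple((1.0 - alpha) * c0 + alpha * c1
                 for c0, c1 in zip(color0, color1))
```

The alpha = 0.5 factor in step 3 is what accounts for the first subsample having already written its own blended result into the pixel.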
Jittering patterns for temporal supersampling: Jittering in the temporal domain is performed as follows (refer to Figure 12, right):
   • For SMAA T2x, the scene is rendered at 1x, alternating jitters
     of (0.25, 0.25) and (−0.25, −0.25) at each frame, following

Figure 15: Top: Hardware bilinear filtering can in principle be employed when searching for distances from each pixel to the ends of a line. The color of the dot at the center of each pixel represents the value of that pixel in the edges texture (green means edge on top, red on left). In the case shown here, we search to the left, starting from the pixel marked with a star. Rhombuses denote the positions where the edges texture would be accessed by Jimenez's MLAA [Jimenez et al. 2011b], fetching pairs of pixels. It can be seen how this search would miss the crossing edge in the middle. Bottom: Our algorithm performs a two-dimensional bilinear filtered access fetching both the regular and the crossing edges at the same time (accesses marked with color-coded squares containing the fetched values). This new search would stop after the second step, successfully locating the crossing edge.

[Table of Figure 13: per-feature ratings (color-coded dots) for MLAA [Reshetov 2009], MLAA [AMD 2010], MLAA [Jimenez et al. 2011b], FXAA I [Lottes 2011], DLAA [Andreev 2011], SRAA [Chajdas et al. 2011], SMAA 1x and SMAA S2x, against SSAA 16x and MSAA 2x references. Rows include Diagonal Pattern Det., Sharp Geom. Features (a)-(d), and Subpixel Preserv. (a)-(b); the dot colors are only legible in the original figure.]
Figure 13: Comparison of the features (rows) of our approach with a selection of anti-aliasing techniques (columns). FXAA I (preset 3) has been used for all the images with the exception of accurate searches, where preset 2 has been selected to exemplify the failure cases of accelerated searches. Green, orange and red dots mark accurate, regular and inaccurate handling of a feature. Zoom into the electronic version of this paper to see the details. Close-ups on the last row are taken from Assassin's Creed Brotherhood, courtesy of Ubisoft.
