The Development of Unstructured Grid Methods
for Computational Aerodynamics


          Dimitri J. Mavriplis
                ICASE
      NASA Langley Research Center
         Hampton, VA 23681
                 USA



         Cornell University, September 17, 2002
                 Ithaca, New York, USA
                       Overview
• Structured vs. Unstructured meshing approaches
• Development of an efficient unstructured grid solver
   – Discretization
   – Multigrid solution
   – Parallelization
• Examples of unstructured mesh CFD capabilities
   – Large scale high-lift case
   – Typical transonic design study
• Areas of current research
   – Adaptive mesh refinement
   – Moving and overlapping meshes

 CFD Perspective on Meshing Technology

• CFD Initiated in Structured Grid Context
  – Transfinite Interpolation
  – Elliptic Grid Generation
  – Hyperbolic Grid Generation
• Smooth, Orthogonal Structured Grids
• Relatively Simple Geometries



    CFD Perspective on Meshing Technology
•   Sophisticated Multiblock Structured Grid Techniques for
    Complex Geometries




Engine nacelle multiblock grid generated with the commercial software TrueGrid.
    CFD Perspective on Meshing Technology
•   Sophisticated Overlapping Structured Grid Techniques for
    Complex Geometries




Overlapping grid system on space shuttle (Slotnick, Kandula and Buning 1994)
 Unstructured Grid Alternative




• Connectivity stored explicitly
• Single Homogeneous Data Structure
       Characteristics of Both Approaches

• Structured Grids
   –   Logically rectangular
   –   Support dimensional splitting algorithms
   –   Banded matrices
   –   Blocked or overlapped for complex geometries
• Unstructured grids
   –   Lists of cell connectivity, graphs (edges, vertices)
   –   Alternate discretizations/solution strategies
   –   Sparse Matrices
   –   Complex Geometries, Adaptive Meshing
   –   More Efficient Parallelization
                   Discretization
• Governing Equations: Reynolds Averaged Navier-
  Stokes Equations
   – Conservation of Mass, Momentum and Energy
   – Single Equation turbulence model (Spalart-Allmaras)
         • Convection-Diffusion equation with production terms
• Vertex-Based Discretization
   –   2nd order upwind finite-volume scheme
   –   6 variables per grid point
   –   Flow equations fully coupled (5x5)
   –   Turbulence equation uncoupled
          Spatial Discretization
• Mixed Element Meshes
  – Tetrahedra, Prisms, Pyramids, Hexahedra
• Control Volume Based on Median Duals
  – Fluxes based on edges




  – Single edge-based data-structure represents all element
    types
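As a rough sketch of the single edge-based data structure described above, the residual can be accumulated in one loop over the edges of the median-dual mesh, regardless of element type. The array layouts, names, and flux routine below are illustrative assumptions, not the solver's actual interface.

    import numpy as np

    def accumulate_residual(edges, normals, q, flux):
        """Edge-based residual loop over a median-dual mesh.

        edges   : (nedges, 2) int array, the two vertices joined by each edge
        normals : (nedges, 3) float array, area-weighted dual-face normals
        q       : (npoints, nvar) float array of unknowns at the vertices
        flux    : callable(qL, qR, n) -> numerical flux across the dual face
        """
        res = np.zeros_like(q)
        for (i, j), n in zip(edges, normals):
            f = flux(q[i], q[j], n)   # one flux per edge, for any element type
            res[i] += f               # flux leaves control volume i ...
            res[j] -= f               # ... and enters control volume j
        return res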
  Spatially Discretized Equations

• Integrate to Steady-state
• Explicit:
  – Simple but slow; purely local procedure (see sketch after this list)
• Implicit
  – Large Memory Requirements
• Matrix Free Implicit:
  – Most effective with matrix preconditioner
• Multigrid Methods
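A minimal sketch of the first option above, explicit pseudo-time marching to steady state with local time steps; the residual routine, time-step array, and tolerances are placeholders for illustration only.

    import numpy as np

    def explicit_steady_state(q, residual_fn, local_dt, cfl=1.0,
                              tol=1e-10, max_iters=50000):
        """March dq/dt = -R(q) to steady state with forward-Euler updates.

        q           : (npoints, nvar) initial solution
        residual_fn : callable(q) -> residual R(q), same shape as q
        local_dt    : (npoints,) allowable local time step per control volume
        """
        for it in range(max_iters):
            r = residual_fn(q)
            q = q - cfl * local_dt[:, None] * r   # purely local update
            if np.linalg.norm(r) < tol:           # residual driven to steady state
                break
        return q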
              Multigrid Methods




• High-frequency (local) error rapidly reduced by explicit
  methods
• Low-frequency (global) error converges slowly
• On coarser grids:
   – Low-frequency error appears as high-frequency error
Multigrid Correction Scheme
     (Linear Problems)
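The slide's figure is not reproduced here; the sketch below shows the standard two-grid correction cycle for a linear system A_h u = f, with the smoother and the transfer operators left as assumed callables (applying it recursively on the coarse level gives a V-cycle).

    import numpy as np

    def two_grid_cycle(A_h, A_H, f, u, smooth, restrict, prolong, nu=2):
        """One correction cycle for the linear system A_h u = f."""
        u = smooth(A_h, f, u, nu)            # pre-smoothing damps high-frequency error
        r_H = restrict(f - A_h @ u)          # transfer the residual to the coarse grid
        e_H = np.linalg.solve(A_H, r_H)      # coarse grid: low frequencies look high-frequency
        u = u + prolong(e_H)                 # prolong the correction back to the fine grid
        u = smooth(A_h, f, u, nu)            # post-smoothing
        return u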




Multigrid for Unstructured Meshes




•   Generate fine and coarse meshes
•   Interpolate between un-nested meshes
•   Finest grid: 804,000 points, 4.5M tetrahedra
•   Four level Multigrid sequence
        Geometric Multigrid




• Order of magnitude increase in convergence
• Convergence rate equivalent to structured grid
  schemes
• Independent of grid size: O(N)
 Agglomeration vs. Geometric Multigrid

• Multigrid methods:
  – Time step on coarse grids to accelerate solution on fine
    grid
• Geometric multigrid
  – Coarse grid levels constructed manually
  – Cumbersome in production environment
• Agglomeration Multigrid
  – Automate coarse level construction
  – Algebraic nature: summing fine grid equations
  – Graph based algorithm

            Agglomeration Multigrid
• Agglomeration Multigrid solvers for unstructured meshes
   – Coarse level meshes constructed by agglomerating fine grid
     cells/equations




          Agglomeration Multigrid




• Automated Graph-Based Coarsening Algorithm
• Coarse Levels are Graphs
• Coarse Level Operator by Galerkin Projection
• Grid-independent convergence rates (order of magnitude improvement)
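A minimal sketch, under assumed data structures, of the two ingredients named above: greedy graph-based agglomeration of fine-grid control volumes, and a Galerkin coarse operator A_H = P^T A_h P built with piecewise-constant transfers (equivalent to summing the fine-grid equations within each agglomerate).

    import numpy as np

    def agglomerate(adjacency):
        """Greedy agglomeration: each unassigned vertex seeds a group and
        absorbs its unassigned neighbors. adjacency = list of neighbor lists."""
        group = -np.ones(len(adjacency), dtype=int)
        n_coarse = 0
        for v, nbrs in enumerate(adjacency):
            if group[v] < 0:
                group[v] = n_coarse               # v becomes a new seed
                for w in nbrs:
                    if group[w] < 0:
                        group[w] = n_coarse       # absorb unassigned neighbors
                n_coarse += 1
        return group, n_coarse

    def galerkin_coarse_operator(A_h, group, n_coarse):
        """A_H = P^T A_h P with piecewise-constant restriction/prolongation."""
        P = np.zeros((A_h.shape[0], n_coarse))
        P[np.arange(A_h.shape[0]), group] = 1.0
        return P.T @ A_h @ P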
Agglomeration MG for Euler Equations




• Convergence rate similar to geometric MG
• Completely automatic
        Anisotropy Induced Stiffness

• Convergence rates for RANS (viscous)
  problems much slower than for inviscid
  flows

   – Mainly due to grid stretching
   – Thin boundary and wake regions
   – Mixed element (prism-tet) grids



• Use directional solver to relieve stiffness
   – Line solver in anisotropic regions
Directional Solver for Navier-Stokes Problems
• Line Solvers for Anisotropic Problems
   – Lines Constructed in Mesh using weighted graph algorithm
   – Strong Connections Assigned Large Graph Weight
   – (Block) Tridiagonal Line Solver similar to structured grids
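For illustration, the sketch below is the scalar (Thomas algorithm) version of the tridiagonal solve applied along each implicit line; in the flow solver the diagonal entries are 5x5 blocks, and all names here are assumptions rather than the actual routine.

    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve one tridiagonal system along a line of strongly coupled points.

        a, b, c : sub-, main- and super-diagonal coefficients (a[0], c[-1] unused)
        d       : right-hand side gathered along the line (numpy arrays)
        """
        n = len(d)
        b, d = b.astype(float), d.astype(float)
        for i in range(1, n):                 # forward elimination
            m = a[i] / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        x = np.empty(n)
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):        # back substitution
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return x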
 Implementation on Parallel Computers




• Intersected edges resolved by ghost vertices
• Generates communication between original and
  ghost vertices
  – Handled using MPI and/or OpenMP
  – Portable, Distributed and Shared Memory Architectures
  – Local reordering within partition for cache-locality
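A hedged sketch of the ghost-vertex update described above, written with mpi4py; the send/receive index lists and the array layout are assumptions about the partition data rather than the code's actual interface.

    import numpy as np
    from mpi4py import MPI

    def exchange_ghost_values(comm, q, send_ids, recv_ids):
        """Refresh ghost-vertex copies of q from the partitions that own them.

        send_ids[p] : local indices of owned vertices needed by neighbor rank p
        recv_ids[p] : local indices of ghost vertices whose values come from rank p
        """
        reqs, send_bufs, recv_bufs = [], [], {}
        for p, ids in recv_ids.items():                   # post all receives first
            recv_bufs[p] = np.empty((len(ids),) + q.shape[1:], dtype=q.dtype)
            reqs.append(comm.Irecv(recv_bufs[p], source=p))
        for p, ids in send_ids.items():                   # non-blocking sends of owned values
            send_bufs.append(np.ascontiguousarray(q[ids]))
            reqs.append(comm.Isend(send_bufs[-1], dest=p))
        MPI.Request.Waitall(reqs)
        for p, ids in recv_ids.items():                   # overwrite the ghost copies
            q[ids] = recv_bufs[p]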
                     Partitioning
• Graph partitioning must minimize number of cut
  edges to minimize communication
• Standard graph based partitioners: Metis, Chaco,
  Jostle
   – Require only weighted graph description of grid
       • Edge and vertex weights taken as unity
   – Ideal for edge data-structure
• Line solver inherently sequential
   – Partition around lines using weighted graphs
                   Partitioning
• Contract graph along implicit lines
• Weight edges and vertices




• Partition contracted graph
• Decontract graph
   – Guarantees lines are never broken
   – Possible small increase in imbalance/cut edges
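A sketch of the contract / weight / partition / decontract procedure above, under assumed data structures; the external partitioner call is a placeholder for any standard graph partitioner (e.g. Metis) rather than the actual interface.

    import numpy as np

    def partition_with_lines(n_points, edges, lines, partition_graph, n_parts):
        """Partition so that implicit lines are never split across partitions.

        lines           : list of point-index lists, one per implicit line
        edges           : (nedges, 2) fine-graph edges
        partition_graph : callable(n_nodes, edges, node_weights, n_parts)
                          -> partition id per node (placeholder for Metis etc.)
        """
        # 1. Contract: each line collapses to one node; remaining points are their own node.
        node_of = -np.ones(n_points, dtype=int)
        weights = []
        for line in lines:
            node_of[line] = len(weights)
            weights.append(len(line))                 # vertex weight = points absorbed
        for p in range(n_points):
            if node_of[p] < 0:
                node_of[p] = len(weights)
                weights.append(1)
        # 2. Edges of the contracted graph (intra-line edges drop out as self-loops).
        c_edges = {(min(node_of[i], node_of[j]), max(node_of[i], node_of[j]))
                   for i, j in edges if node_of[i] != node_of[j]}
        # 3. Partition the contracted graph, then decontract back to the points.
        part_of_node = partition_graph(len(weights), sorted(c_edges), weights, n_parts)
        return np.asarray(part_of_node)[node_of]      # lines stay whole by construction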
          Partitioning Example
• 32-way partition of 30,562 point 2D grid




• Unweighted partition: 2.6% edges cut, 2.7% lines cut
• Weighted partition: 3.2% edges cut, 0% lines cut
Sample Calculations and Validation
• Subsonic High-Lift Case
  – Geometrically Complex
  – Large Case: 25 million points, 1450 processors
  – Research environment demonstration case
• Transonic Wing Body
  – Smaller grid sizes
  – Full matrix of Mach and CL conditions
  – Typical of production runs in a design environment

 NASA Langley Energy Efficient Transport
• Complex geometry
  – Wing-body, slat, double slotted flaps, cutouts
• Experimental data from Langley 14x22ft wind tunnel
  – Mach = 0.2, Reynolds=1.6 million
  – Range of incidences: -4 to 24 degrees




      VGRID Tetrahedral Mesh




• 3.1 million vertices, 18.2 million tets, 115,489 surface pts
• Normal spacing: 1.35E-06 chords, growth factor=1.3
Computed Pressure Contours on Coarse Grid




 • Mach=0.2, Incidence=10 degrees, Re=1.6M
  Spanwise Stations for Cp Data




• Experimental data at 10 degrees incidence
Comparison of Surface Cp at Middle Station




Computed Versus Experimental Results




• Good drag prediction
• Discrepancies near stall
  Multigrid Convergence History




• Demonstrates the mesh-independent convergence property of multigrid
               Parallel Scalability




• Good overall multigrid scalability
   – Increased communication due to coarse grid levels
   – Single grid solution impractical (>100 times slower)
• 1 hour solution time on 1450 PEs
AIAA Drag Prediction Workshop (2001)




• Transonic wing-body configuration
• Typical cases required for design study
   – Matrix of Mach and CL values
   – Grid resolution study
• Follow on with engine effects (2003)
                    Cases Run
• Baseline grid: 1.6 million points
   – Full drag polars for
     Mach = 0.5, 0.6, 0.7, 0.75, 0.76, 0.77, 0.78, 0.8
   – Total = 72 cases
• Medium grid: 3 million points
   – Full drag polar for each Mach number
   – Total = 48 cases
• Fine grid: 13 million points
   – Drag polar at Mach=0.75
   – Total = 7 cases
   Sample Solution (1.65M Pts)




• Mach=0.75, CL=0.6, Re=3M
• 2.5 hours on 16 Pentium IV 1.7 GHz processors
    Drag Polar at Mach = 0.75




• Grid resolution study
• Good comparison with experimental data
  Comparison with Experiment




• Grid Drag Values
• Incidence Offset for Same CL
Drag Polars at other Mach Numbers




• Grid resolution study
• Discrepancies at Higher Mach/CL Conditions
            Drag Rise Curves




• Grid resolution study
• Discrepancies at Higher Mach/CL Conditions
  Cases Run on ICASE Cluster




• 120 Cases (excluding finest grid)
• About 1 week to compute all cases
Timings on Various Architectures




          Adaptive Meshing
• Potential for large savings through optimized
  mesh resolution
  – Well suited for problems with large range of scales
  – Possibility of error estimation / control
  – Requires tight CAD coupling (surface pts)
• Mechanics of mesh adaptation
• Refinement criteria and error estimation

Mechanics of Adaptive Meshing

• Various well-known isotropic mesh adaptation methods
  – Mesh movement
        • Spring analogy (see sketch after this list)
       • Linear elasticity
  –   Local Remeshing
  –   Delaunay point insertion/Retriangulation
  –   Edge-face swapping
  –   Element subdivision
       • Mixed elements (non-simplicial)
       • Require anisotropic refinement in transition regions
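A minimal sketch of the spring-analogy option above: each interior vertex relaxes toward the stiffness-weighted average of its neighbors' displacements, while surface vertices carry the prescribed boundary motion. The data structures and iteration count are illustrative assumptions.

    import numpy as np

    def spring_analogy_move(x, adjacency, is_boundary, boundary_disp, n_sweeps=50):
        """Propagate a prescribed boundary displacement into the volume mesh.

        x             : (npoints, 3) vertex coordinates
        adjacency     : list of neighbor index lists (each mesh edge acts as a spring)
        is_boundary   : (npoints,) bool mask of surface vertices
        boundary_disp : (npoints, 3) prescribed displacement (only boundary rows used)
        """
        disp = np.where(is_boundary[:, None], boundary_disp, 0.0)
        for _ in range(n_sweeps):                   # Jacobi relaxation of the spring network
            new_disp = disp.copy()
            for v, nbrs in enumerate(adjacency):
                if not is_boundary[v] and nbrs:
                    k = 1.0 / np.linalg.norm(x[nbrs] - x[v], axis=1)  # stiffer springs on short edges
                    new_disp[v] = (k[:, None] * disp[nbrs]).sum(axis=0) / k.sum()
            disp = new_disp
        return x + disp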

Subdivision Types for Tetrahedra




Subdivision Types for Prisms




Subdivision Types for Pyramids




Subdivision Types for Hexahedra




Adaptive Tetrahedral Mesh by Subdivision




Adaptive Hexahedral Mesh by Subdivision




Adaptive Hybrid Mesh by Subdivision




 Overlapping Unstructured Meshes
• Alternative to Moving Mesh for Large Scale
  Relative Geometry Motion
• Multiple Overlapping Meshes treated as single
  data-structure
   – Dynamic determination of active/inactive/ghost cells (see sketch after this list)
• Advantages for Parallel Computing
   – Obviates dynamic load rebalancing required with mesh
     motion techniques
   – Intergrid communication must be dynamically
     recomputed and rebalanced
      • Concept of Rendez-vous grid (Plimpton and Hendrickson)
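As a rough illustration of the dynamic cell classification mentioned above (the labels and the containment test are assumptions, not the actual overset algorithm): cells covered by a higher-priority overlapping mesh are flagged inactive, and their active neighbors become ghost cells that receive interpolated data.

    import numpy as np

    ACTIVE, GHOST, INACTIVE = 0, 1, 2

    def classify_cells(cell_centers, cell_neighbors, covered_by_other_mesh):
        """Assign active/ghost/inactive status for one mesh of an overset pair.

        covered_by_other_mesh : callable(point) -> True if the point lies inside
                                the higher-priority overlapping mesh
        """
        n = len(cell_centers)
        status = np.full(n, ACTIVE, dtype=int)
        for c in range(n):                            # cells hidden by the other mesh
            if covered_by_other_mesh(cell_centers[c]):
                status[c] = INACTIVE
        for c in range(n):                            # fringe of ghost (interpolation) cells
            if status[c] == ACTIVE and any(status[nb] == INACTIVE for nb in cell_neighbors[c]):
                status[c] = GHOST
        return status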

 Overlapping Unstructured Meshes




• Simple 2D transient example
                  Conclusions
• Unstructured meshes are an enabling technology
  for computational aerodynamics
   – Complex geometry handling facilitated
   – Efficient steady-state solvers
   – Highly effective parallelization
• Accurate solutions possible for on-design conditions
   – Mostly attached flow
   – Grid resolution always an issue
• Adaptive meshing potential not fully exploited
   – Refinement criteria require more research
• Future work to include more physics
   – Turbulence, transition, unsteady flows, moving meshes

				