System Tests

System tests generally exercise multiple components and framework functions together. They are bundled with the ESMF distribution and are located in the directory esmf/src/system_tests.
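
The system tests are built and run through the ESMF make system. The commands below are a minimal sketch assuming a configured ESMF source tree, with ESMF_DIR and the other ESMF_* build variables set as described in the ESMF User's Guide; target names can vary slightly between releases, so confirm them in the guide that accompanies your version.

    # Build and then run all bundled system tests from the top of the source tree.
    # Assumes ESMF_DIR and the usual ESMF_* build variables are already set.
    cd $ESMF_DIR
    gmake build_system_tests
    gmake run_system_tests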

Because of data structure rework, not all system tests are present in every release. The table below shows which system tests are included in each supported internal and public release.

Name | Description | In ESMF versions
ArbitraryDistribution | Redistribution of an irregularly distributed, logically rectangular Grid using a Field-level interface. | 300
ArrayBundleRedist | Testing of the ArrayBundle redistribution capability between two Gridded Components. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ArrayBundleSparseMatMul | Testing of the sparse matrix multiply capability between two Gridded Components. | 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ArrayRedist | 2D Array redistribution between two Gridded Components, using a Coupler Component. | 310-310p1, 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ArrayRedist3D | 3D Array redistribution between two Gridded Components, using a Coupler Component. | 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ArrayRedistMPMD | 2D Array redistribution between two Gridded Components, using a Coupler Component. Both Gridded Components are compiled into their own executables and executed following the MPMD paradigm under a single MPI_COMM_WORLD. | 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ArrayRedistOpenACC | 2D Array redistribution between two Gridded Components, using a Coupler Component. OpenACC directives are used inside the first Gridded Component to initialize the source Array. | 610, 611, 620, 630r, 630rp1, 700
ArrayRedistOpenMP | 2D Array redistribution between two Gridded Components, using a Coupler Component. OpenMP directives are used inside the first Gridded Component to initialize the source Array. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ArrayRedistSharedObj | 2D Array redistribution between two Gridded Components, using a Coupler Component. Two Components (the first Gridded Component and the Coupler Component) are linked into shared objects, separate from the executable. The executable dynamically loads these Components at run time, using standard ESMF Component methods. | 400, 400r-400rp2, 500, 510, 520, 620, 630r, 630rp1, 700
ArrayScatterGather | Use of ESMF_ArrayScatter() and ESMF_ArrayGather() to redistribute Array data between three different Components running on different sets of PETs. | 310-310p1, 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ArraySparseMatMul | Use of a sparse matrix multiply to redistribute a source Array to a differently distributed destination Array. | 300, 301, 302, 303, 310-310p1, 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
Attribute | Demonstrates the use of Attributes, Attribute hierarchies, and Attribute packages in a multi-component setting. | 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
AttributeCIM | Demonstrates the use of the standard ESMF-supplied METAFOR Common Information Model (CIM) Attribute packages for Components and Fields within a multi-Component, multi-PET application. | 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
BundleRedistArb2Arb | Redistribution from an arbitrarily distributed Bundle to an arbitrarily distributed Bundle. | 222r-222rp3, 301, 302, 303
BundleRedistBlk2Arb | Redistribution from a block distributed Bundle to an arbitrarily distributed Bundle. | 222r-222rp3, 301, 302, 303
BundleRedistBlk2Blk | Redistribution from a block distributed Bundle to a block distributed Bundle. | 222r-222rp3, 301, 302, 303, 311
CompCreate | Complete Component create with intra-grid communications. | 222r-222rp3, 300, 301, 302, 303, 310-310p1, 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
CompFortranAndC | Verifies that States are transferred accurately between Components implemented in different languages (Fortran and C). | 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
Compliance Checker | Demonstrates and verifies compliance checking in a three-Component application. | 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ConcurrentComponent | Coupling of concurrently executing Gridded Components on exclusive sets of PETs. | 310r-310rp3, 311, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
ConcurrentEnsemble | Demonstrates how a concurrent ensemble can be written using ESMF. The ensemble configuration changed for the 400r version of this system test. | 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
DirectCoupling | Coupling between three Components without returning to an upper level to exchange data. | 310p1, 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
DistDir | Creation of a distributed directory for efficient parallel communication within a Component; basic directory creation and lookups are verified. | 303, 310-310p1, 310r-310rp3, 311
FieldBundleRedistArb2Arb | Redistribution from an arbitrarily distributed FieldBundle to an arbitrarily distributed FieldBundle. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldBundleRedistBlk2Arb | Redistribution from a block distributed FieldBundle to an arbitrarily distributed FieldBundle. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldBundleRedistBlk2Blk | Redistribution from a block distributed FieldBundle to a block distributed FieldBundle. | 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldBundleSMM | Use of a sparse matrix multiply to redistribute a source FieldBundle to a differently distributed destination FieldBundle. | 310rp2-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldConcurrentComponent | Demonstrates the use of the ESMF coupling framework to couple two Gridded Components with one Coupler Component. The Coupler Component runs on the union of the PETs that are exclusively allocated to each individual Gridded Component. | 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldExclusive | Bilinear regridding between two concurrent Gridded Components on different PETs. | 222r-222rp3, 300, 301, 302, 303
FieldHalo | Simple halo operation. | 222r-222rp3, 300, 301, 302, 303
FieldHaloPeriodic | Halo operation on a Field with periodic boundary conditions. | 222r-222rp3, 300, 301, 302, 303
FieldLocStreamSMM | Use of a sparse matrix multiply to redistribute a source Field with a Location Stream to a destination Field with a Location Stream. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldMeshSMM | Use of a sparse matrix multiply to redistribute a source Field with a Mesh to a destination Field with a normal block-structured Grid. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRedist | Redistribute/transpose data through the FieldRedist interface. | 222r-222rp3, 300, 301, 302, 303, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRedistArb2Arb | Redistribution from an arbitrarily distributed Field to an arbitrarily distributed Field. | 222r-222rp3, 301, 302, 303, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRedistBlk2Arb | Redistribution from a block distributed Field to an arbitrarily distributed Field. | 222r-222rp3, 301, 302, 303, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRedistBlk2Blk | Redistribution from a block distributed Field to another block distributed Field. | 222r-222rp3, 301, 302, 303, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRegrid | Bilinear regridding between different Grids on different DELayouts. | 222r-222rp3, 300, 301, 302, 303, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRegridConserv | First-order conservative regridding between different Grids on different DELayouts. | 222r-222rp3, 300, 301, 302, 303
FieldRegridDisjoint | Verifies that regridding works correctly between Components running on disjoint sets of PETs. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRegridMesh | Verifies that Mesh reconcile works correctly. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRegridMeshToMesh | Bilinear regridding between two Fields, each built on a Mesh. | 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldRegridMulti | Bilinear regridding between 3D Fields on 2D Grids. | 222r-222rp3, 300, 301, 302, 303
FieldRegridOrder | Bilinear regridding between Grids with different index orders. | 222r-222rp3, 300, 301, 302, 303
FieldRegridOverlap | Verifies that regridding works correctly between Components running on partially overlapping sets of PETs. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FieldSparseMatMul | Use of a sparse matrix multiply to redistribute a source Field to a differently distributed destination Field. | 310-310p1, 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
FlowComp | PDE solution in a Component with a Clock. | 222r-222rp3, 300, 301, 302, 303
FlowWithCoupling | PDE solution with coupled Components. | 222r-222rp3, 300, 301, 302, 303
InternalStateEnsemble | Demonstrates how an ensemble can be written using the internal State of ESMF. | 400
RecursiveComponent | Recursive creation of subcomponents and demonstration of a recursive Component call tree. | 310r-310rp3, 311, 400, 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
SequentialEnsemble | Sequential ensemble example using different initial conditions for different ensemble members. | 400r-400rp2, 500, 510, 520-520p1, 520r-520rp3, 530, 610, 611, 620, 630r, 630rp1, 700
SeqEnsemEx | Sequential ensemble example using different initial conditions for different ensemble members. | 311, 400
SimpleCoupling | Simple coupling. | 300, 301, 302, 303
XGridConcurrent | Exchange of flux from an idealized Land model to an Atmosphere model through an XGrid created in the coupler. The Land and Atmosphere Gridded Components run concurrently. | 610, 611, 620, 630r, 630rp1, 700
TransferGrid | Transfer of a Grid object from one Gridded Component to another. | 630r, 630rp1, 700
TransferMesh | Transfer of a Mesh object from one Gridded Component to another. | 630r, 630rp1, 700
XGridSerial | Exchange of flux from idealized Land and Ocean models to an Atmosphere model through an XGrid created in the coupler. All three Gridded Components run sequentially. | 610, 611, 620, 630r, 630rp1, 700
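
A single test from the table above can also be selected by name. The sketch below assumes the SYSTEM_TEST make variable described in the ESMF User's Guide and the ESMF_ prefix used for the directories under esmf/src/system_tests (e.g. ESMF_FieldRedist); both should be confirmed against the documentation for your release.

    # Build and run only the FieldRedist system test (assumed SYSTEM_TEST selector).
    cd $ESMF_DIR
    gmake SYSTEM_TEST=ESMF_FieldRedist build_system_tests
    gmake SYSTEM_TEST=ESMF_FieldRedist run_system_tests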

 
