Since funds provided to the ESMF Project through this program come in the form of a grant rather than a Cooperative Agreement, we have a work plan rather than milestones. Changes to this work plan are anticipated in the out years.

Year 1 (2006)

Accept priorities from the ESMF Change Review Board and, based on these, implement ESMF software.

Implement a non-gridded, indexed, distributed data communications layer. Methods include canonical operations and, as required, some subset of the following: halo, scatter, allgather, global sum, global max/min, and redistribution.
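As a serial illustration of the halo operation on a 1-D block-decomposed index space, each decomposition element owns a slice of the global data plus ghost cells that mirror its neighbours' edge values. The function names below are hypothetical, not ESMF API:

```python
# Minimal serial sketch of a halo update on a 1-D block decomposition.
# Each block owns a slice of the global index space plus one ghost
# cell on each side; a halo update fills the ghosts from neighbours.
# Names are illustrative, not part of the ESMF interface.

def decompose(data, nblocks, halo=1):
    """Split `data` into equal blocks, each padded with `halo` ghost cells."""
    size = len(data) // nblocks
    return [[0] * halo + data[b * size:(b + 1) * size] + [0] * halo
            for b in range(nblocks)]

def halo_update(blocks, halo=1):
    """Fill each block's ghost cells from its neighbours (non-periodic)."""
    for b in range(len(blocks)):
        if b > 0:                    # left ghosts <- left neighbour's owned edge
            blocks[b][:halo] = blocks[b - 1][-2 * halo:-halo]
        if b < len(blocks) - 1:      # right ghosts <- right neighbour's owned edge
            blocks[b][-halo:] = blocks[b + 1][halo:2 * halo]

global_data = list(range(8))         # global field: [0, 1, ..., 7]
blocks = decompose(global_data, nblocks=2)
halo_update(blocks)
print(blocks[0])  # [0, 0, 1, 2, 3, 4]
print(blocks[1])  # [3, 4, 5, 6, 7, 0]
```

In a distributed implementation the two assignments would become message exchanges between neighbouring processes; the indexing logic is the same.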

Extend capabilities of ESMF grids and related methods:

  • Implement data structures and methods for grids with general curvilinear coordinates, at minimum canonical operations, halo, and regridding.
  • Implement data structures and methods for unstructured grids and observational data streams, at minimum canonical operations and regridding.
  • Implement grid masks.
  • Implement reading and writing of regridding interpolation weights and of grid specifications.
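Regridding with precomputed interpolation weights reduces to a sparse weighted sum: each destination point is a combination of source points, dst[i] = Σⱼ w[i][j]·src[j]. The triplet storage and function below are an illustrative sketch, not the ESMF weight-file format:

```python
# Illustrative application of precomputed regridding weights, stored
# sparsely as (dst_index, src_index, weight) triplets. This is the
# general technique, not the ESMF file format or API.

def apply_weights(weights, src, ndst):
    dst = [0.0] * ndst
    for i, j, w in weights:
        dst[i] += w * src[j]
    return dst

# Linear interpolation from a 3-point source grid at x = [0, 1, 2]
# to a 2-point destination grid at x = [0.5, 1.5]:
weights = [
    (0, 0, 0.5), (0, 1, 0.5),   # dst 0 midway between src 0 and 1
    (1, 1, 0.5), (1, 2, 0.5),   # dst 1 midway between src 1 and 2
]
src = [10.0, 20.0, 40.0]
print(apply_weights(weights, src, ndst=2))  # [15.0, 30.0]
```

A grid mask fits naturally into this scheme: masked source points simply contribute no triplets, and the remaining weights are renormalized.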

Extend functionality for utilities including I/O, message logging, error handling, configuration attributes, and time management. Specific additions include:

  • Prototype and resolve I/O library strategy, examining GFDL's mpp_io, GMAO's CFIO, and RFIO for possible adaptation to ESMF.
  • Implement basic I/O services for data fields and arrays.
  • Make bug fixes and feature additions to logging, error handling, and configuration attributes sufficient to enable multiple groups to begin using these utilities in a production environment.
  • Make minor feature additions to the time management utility as required.
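The core job of a time management utility is to step a model clock from a start time to a stop time in fixed increments. A hypothetical sketch of that pattern (the class and method names are illustrative, not ESMF's):

```python
from datetime import datetime, timedelta

# Hypothetical model-clock sketch: advance a simulation clock in fixed
# timesteps until the stop time is reached. Illustrative names only.

class ModelClock:
    def __init__(self, start, stop, step):
        self.current, self.stop, self.step = start, stop, step

    def is_done(self):
        return self.current >= self.stop

    def advance(self):
        self.current += self.step

clock = ModelClock(start=datetime(2006, 1, 1),
                   stop=datetime(2006, 1, 2),
                   step=timedelta(hours=6))
steps = 0
while not clock.is_done():
    clock.advance()
    steps += 1
print(steps)  # 4
```

A production utility additionally supports multiple calendars (Gregorian, no-leap, 360-day), alarms, and exact integer time arithmetic, which Python's `datetime` does not provide.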

With MIT, prototype development of adjoints for select operations.
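For a linear operation A, the adjoint A* is defined by ⟨Ax, y⟩ = ⟨x, A*y⟩, and a standard way to validate a hand-written adjoint is the dot-product test. A minimal sketch for an index-based scatter, whose adjoint is a summing gather (illustrative code, not the prototype referred to above):

```python
# Dot-product test for an adjoint pair: scatter (copying source slots
# into target slots, possibly duplicating) and its adjoint gather
# (summing contributions back). Illustrative only.

def scatter(x, index):
    """Forward op: y[k] = x[index[k]]."""
    return [x[i] for i in index]

def scatter_adjoint(y, index, n):
    """Adjoint op: sums each y[k] back into source slot index[k]."""
    x = [0.0] * n
    for k, i in enumerate(index):
        x[i] += y[k]
    return x

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

index = [0, 1, 1, 2]                             # source slot per target slot
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0, 7.0]
lhs = dot(scatter(x, index), y)                  # <A x, y>
rhs = dot(x, scatter_adjoint(y, index, len(x)))  # <x, A* y>
print(lhs == rhs)  # True
```

The same identity is what makes gather the adjoint of scatter in halo and redistribution code, which is why adjoints of communication operations can be derived mechanically.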

Ensure consistency of design and implementation of basic data structures (arrays, grids, fields, bundles, etc.). Issue areas include:

  • Interlanguage interfaces.
  • Representation of data types.
  • Error handling and logging.
  • Data reference and allocation strategies, in particular to ensure correct destruction and protection against memory leaks.

Benchmark and validate regridding methods, redistribution, low-level communications, and other methods in the framework. Assimilate contributions to benchmarking and validation from JPL, MIT, and other sources. Make benchmark and validation results easily accessible on the ESMF website.
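A benchmarking harness for framework methods typically times many calls and reports the best of several trials, since the minimum is least affected by operating-system noise. A minimal sketch (the harness shown is generic, not the ESMF benchmarking suite):

```python
import time

# Minimal timing harness: time `calls` repeated invocations of `fn`,
# over `repeats` trials, and report the best per-call time. Generic
# sketch, not the project's benchmark suite.

def benchmark(fn, *args, repeats=5, calls=100):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        for _ in range(calls):
            fn(*args)
        best = min(best, (time.perf_counter() - t0) / calls)
    return best  # seconds per call, best trial

data = list(range(10_000))
per_call = benchmark(sum, data)
print(f"global sum: {per_call * 1e6:.1f} us/call")
```

Validation runs would pair such timings with correctness checks (e.g. comparing regridded fields against analytic reference fields) before results are published.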

Facilitate, through the ESMF support line and direct assistance, the integration of ESMF into application codes.

Develop and deliver comprehensive and standard ESMF training classes and user workshops for MAP participants and others. Continue improvement of ESMF tutorials and user and developer documentation. Assimilate contributions to tutorial materials from MIT and others.

Year 2 (2007)

  • Implement observational data class and related data assimilation functions.
  • Implement high performance I/O, including asynchronous I/O.
  • Prototype an MPMD (multiple program, multiple data) version of ESMF and implement it further if desired.
  • Implement optimized semi-structured grids, including cubed sphere.
  • Optimize framework for performance with support from JPL.
  • Continue to develop, update, and assimilate training materials and examples.

Year 3 (2008)

  • Implement advanced load balancing.
  • Define and implement adjoints for ESMF communications.
  • Implement optimized advanced I/O.
  • Improve and assimilate contributions to optimization and training.
  • Update and extend examples to include multiply connected generalized grids and three-dimensional regridding.

Year 4 (2009)

  • Implement nested and adaptive grids.
  • Improve and assimilate contributions to optimization and training.
  • Update and extend examples to include dynamic redistribution for load balancing with simple graphics integration.

Year 5 (2010)

  • Develop ESMF paradigm for nesting models within models, consistent with ESMF grid nesting implementation.
  • Begin integration with remote visualization and data services.
  • Improve and assimilate contributions to optimization and training.
  • Update and extend examples to include remote dataset access and possibly distributed component execution.
Last Update: Feb. 6, 2014, 2:23 p.m. by deleted user