Ingest and Publication of ESGF Datasets

Original Trac page

Introduction

This page provides details of the ingest and publication workflow for datasets that are bound for the Earth System Grid Federation (ESGF) services. The basic workflow is as follows:

  1. A collection of files arrives (a "batch").
  2. Run the ceda-cc compliance checker on the batch.
  3. Run the drs_tool to ingest the data into the archive.
  4. Run the post_ingest_processor.py script to post-process/check the data once in the archive.
  5. Run the generate_mapfiles.py script to generate mapfiles to be used for ESGF publication.
  6. Run the ESGF publisher to scan the data.
  7. Run the ESGF publisher to generate THREDDS catalogues.
  8. Run the ESGF publisher to put the data in the ESGF Search Catalogue.
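
To illustrate how these steps fit together, below is a minimal Python sketch that chains them with subprocess. The paths, command-line options and the project name are placeholders for illustration only; the real invocations depend on the local installation and the publisher version.

    #!/usr/bin/env python
    """Hedged sketch of driving the batch workflow above via subprocess.
    All arguments are illustrative placeholders, not the exact CEDA invocations."""
    import os
    import subprocess

    BATCH_DIR = "/path/to/incoming/batch"   # assumption: where a batch arrives (step 1)
    DRS_ROOT = "/path/to/archive/root"      # assumption: DRS archive root
    MAPFILE_DIR = "/path/to/mapfiles"       # assumption: mapfile output directory
    PROJECT = "specs"                       # assumption: ESGF project name

    def run(cmd):
        """Run one workflow step, aborting the sequence if it fails."""
        print("Running: " + " ".join(cmd))
        subprocess.check_call(cmd)

    # 2. Compliance-check the batch (ceda-cc options are placeholders).
    run(["ceda-cc", "-d", BATCH_DIR])

    # 3. Ingest into the archive with drs_tool (drslib); subcommand and flags are placeholders.
    run(["drs_tool", "upgrade", "--incoming", BATCH_DIR, "--root", DRS_ROOT])

    # 4-5. Local post-processing and mapfile generation scripts; their real
    # arguments are not documented here, so these calls are placeholders.
    run(["python", "post_ingest_processor.py", DRS_ROOT])
    run(["python", "generate_mapfiles.py", DRS_ROOT, MAPFILE_DIR])

    # 6-8. ESGF publisher: scan the data, build THREDDS catalogues, then
    # publish to the ESGF Search Catalogue. Flag names follow the classic
    # esgcet publisher but may differ between versions.
    mapfile = os.path.join(MAPFILE_DIR, "example.map")   # placeholder mapfile name
    run(["esgpublish", "--project", PROJECT, "--map", mapfile])
    run(["esgpublish", "--project", PROJECT, "--map", mapfile, "--noscan", "--thredds"])
    run(["esgpublish", "--project", PROJECT, "--map", mapfile, "--noscan", "--publish"])

Each step runs synchronously and the sequence stops at the first failure, mirroring the manual workflow above, where a batch is only published once every earlier step has succeeded.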

While developing this documentation we have also recorded some worked examples:

There is also a page on setting up the software environment. The following page looks at what needs to be done to automate this process: opman/ingest/ESGFIngestAndPublication/SPECSIngestAutomation
