Joint ECMWF/OceanPredict workshop on Advances in Ocean Data Assimilation
Working group topics
The charge of the working groups (WGs) was to consider the priority areas for advancing ocean data assimilation (DA) during the next decade and the primary challenges to progress. These discussions were particularly timely since 2021 marks the beginning of the U.N. Decade of Ocean Science for Sustainable Development, of which ocean analysis, reanalysis and forecasting activities are an integral part.
The workshop organising committee identified four primary themes for the WG discussions:
- Balancing Model Resolution versus Ensembles
- Infrastructure
- Best Practices
- Data Assimilation Methods.
Within each primary theme, six secondary themes were also highlighted, namely:
- opportunities for machine learning
- education, training and outreach
- forecasting and analysis applications
- the use of novel observations
- deficiencies and gaps in knowledge
- emerging service needs.
Each WG discussed all four primary themes in relation to the applicable secondary themes. Our aim was not to dictate the direction of the WG discussions but rather to allow the Chair and co-Chair of each WG session free rein to organise and guide the discussions as they saw fit. However, as guidance, we provided a few example questions that the WG Chairs and attendees might find helpful to consider.
- Which aspects of current DA methods, common to all applications, should be strengthened? Topics here could include:
- The role of machine learning
- Treatment of model error
- Coupled DA and the issues of disparate timescales, system complexity, etc.
- Covariance modelling (e.g. background errors and flow dependence, correlated observation errors, model error)
- Observation operators for future observing systems such as SWOT.
- What are the critical aspects where DA solutions diverge amongst different applications (e.g. global vs regional vs coupled, reanalyses vs initialisation of medium-range forecasting)? Relevant factors here might be the necessary trade-offs between model resolution, the length of the assimilation window, the number of ensemble members, and the complexity of the system (the last particularly in relation to coupled DA). Are there recommended practical solutions?
- What might the best practices be for benchmarking the development and performance of DA systems? And what observations might be needed for developing DA methods and benchmarking?
- What further infrastructure developments might be needed (e.g. JEDI, OOPS, PDAF, DART) to enhance collaboration and sharing of software and algorithms?
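To make the covariance-modelling discussion point concrete, the sketch below illustrates one common approach: estimating a flow-dependent background-error covariance from a small ensemble and applying distance-based localisation to suppress the spurious long-range correlations that small ensembles produce. The grid size, ensemble size and localisation length scale are all hypothetical, and a Gaussian taper stands in for the Gaspari-Cohn functions often used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

n_grid, n_ens = 50, 20  # hypothetical 1-D grid and small ensemble

# Stand-in for an ensemble of model states (rows = members, columns = grid points)
ensemble = rng.standard_normal((n_ens, n_grid))

# Flow-dependent background-error covariance estimated from ensemble anomalies
anomalies = ensemble - ensemble.mean(axis=0)
B_ens = anomalies.T @ anomalies / (n_ens - 1)

# Distance-based localisation: Gaussian taper with a hypothetical length scale
L = 5.0
dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
taper = np.exp(-0.5 * (dist / L) ** 2)

# Schur (element-wise) product damps correlations between distant grid points
B_loc = B_ens * taper

print(B_loc.shape)  # (50, 50)
```

The trade-off raised in the questions above is visible here: with only 20 members the raw estimate `B_ens` is rank-deficient and noisy at long range, and localisation is what makes it usable, at the cost of discarding any genuine long-range covariances.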