Working Meeting - Space Weather Metrics, Verification and Validation
A. Glover, M. Angling, P. Jiggens, S. Bingham, S. Elvidge, P. Wintoft
Wednesday, 25th 15:00 - 16:30, Delvaux
In order to provide reliable services to end-users, it is crucial to understand the strengths and potential limitations of the various elements underpinning those services. These include the assumptions and algorithms on which models are based, as well as the reliability of the associated infrastructure, e.g. data systems and space- and ground-based measurement infrastructure. Performance during extreme solar and geomagnetic conditions must also be considered.
At present, within the space weather community, prototype services frequently operate as capability demonstrators, and a full verification of their ability to reproduce or predict elements of the space environment under the full range of space weather conditions, from moderate to extreme, has yet to be completed. Forecast accuracy has recently been addressed by a number of separate activities, but the community has not yet reached a consensus on how to address this question and how to provide relevant information to end-users, service developers and the modellers themselves across the wide range of models and domains involved.
This working meeting follows on from the ESWW session addressing the same subject and will provide the opportunity for more detailed discussion of verification and validation needs for the current generation of activities, both under development and in planning. The session will build upon work discussed at the ISES Space Weather Forecast Verification Workshop in April 2015, and will promote further discussion and actions towards the ILWS-COSPAR workshop session on metrics to assess space weather predictions in January 2016 and the upcoming space weather panel events scheduled for the COSPAR Assembly in summer 2016.
Program
1. Introduction and short summary of the last meeting, plus key milestones for this year
2. Validation approach(es)
Creating a statistically significant assessment of a model/tool's performance under (almost) all conditions.
Understanding the strengths and limitations of different approaches and measures
Techniques specific to rare/extreme events
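To make the kinds of measures under discussion concrete, the sketch below computes three standard dichotomous forecast verification scores (probability of detection, false alarm ratio, and Heidke skill score) from a 2x2 contingency table. These scores are standard in the verification literature; the event counts used here are purely hypothetical and for illustration only.

```python
def verification_scores(hits, false_alarms, misses, correct_negatives):
    """Standard dichotomous verification scores from a 2x2 contingency table."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)  # probability of detection: fraction of events forecast
    far = b / (a + b)  # false alarm ratio: fraction of warnings that were wrong
    # Heidke skill score: skill relative to random chance (1 = perfect, 0 = none)
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, hss

# Hypothetical counts for illustration only.
pod, far, hss = verification_scores(hits=20, false_alarms=10,
                                    misses=5, correct_negatives=65)
print(f"POD={pod:.3f}  FAR={far:.3f}  HSS={hss:.3f}")
```

Note that scores such as HSS become unstable for rare events, where the correct-negative count dominates the table; this is one reason techniques specific to extreme events merit separate discussion.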
3. Key Parameters
Validation based on forecast goals: addressing user needs, e.g. comparing the user-required forecast lead time with the temporal dynamics of the forecast parameter, and the choice of test parameter
Comparing terminology: e.g. do all forecasters apply the same criteria when indicating quiet, moderate and strong disturbances?
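One concrete way to compare terminology is to anchor categorical geomagnetic warnings to a common reference such as the NOAA Space Weather Scales, where G1 through G5 storm levels correspond to Kp values of 5 through 9. A minimal sketch of that mapping:

```python
def noaa_g_scale(kp):
    """Map a Kp index value to the NOAA geomagnetic storm (G) scale.

    Per the NOAA Space Weather Scales, Kp 5 -> G1 (minor) up to
    Kp 9 -> G5 (extreme); Kp below 5 is below storm level.
    """
    if kp < 5:
        return "below storm level"
    return "G" + str(min(int(kp) - 4, 5))

print(noaa_g_scale(4))  # below storm level
print(noaa_g_scale(7))  # G3
```

If different services agree on a shared scale like this one, their categorical forecasts become directly comparable for verification purposes.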
4. How to define and select an appropriate scenario
Extreme events
Quiet conditions
5. Frameworks and standardisation prospects
What currently prevents these activities from happening? Are there technical, political and/or funding issues? How can these be overcome?
Current actions and potential role of organisations - e.g. ESA Expert Service Centres, ISES, CCMC, COSPAR...
6. Wrap up and next steps