Based on the reliable and vetted Sounding and Hodograph Analysis and Research Program (SHARP)
Because gSHARP uses the same algorithms and routines as SHARP and SHARPpy, its analyses are numerically consistent with operational and research routines already in use, so the results can be trusted. Additionally, since gSHARP is built upon open-source routines, the algorithms are completely transparent.
More Computational Power Using Fewer Resources
Because gSHARP harnesses Graphics Processing Units (GPUs) to massively parallelize its computations, processing a large dataset requires only one compute node and one graphics card, rather than distributing the work across multiple compute nodes and processors. The result is fewer physical resources and less compute time per dataset.
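As a rough illustration of this array-parallel pattern (not gSHARP's actual code), the sketch below computes a simple per-column quantity over every grid column at once with NumPy; the grid dimensions and the `layer_mean` function are hypothetical. Swapping NumPy for a GPU array library with the same interface (e.g. CuPy) runs identical code on a single graphics card instead of looping over columns across many CPU cores.

```python
import numpy as np

# Hypothetical demo dimensions; the real HRRR 3 km CONUS grid would be
# NZ, NY, NX = 50, 1059, 1799 (~1.9 million columns).
NZ, NY, NX = 50, 8, 12

def layer_mean(field, dp):
    """Pressure-weighted vertical mean, computed for every grid column at once.

    field: (NZ, NY, NX) array of some sounding variable
    dp:    (NZ,) array of layer pressure thicknesses

    One vectorized contraction replaces an explicit loop over all columns;
    on a GPU array library the same expression executes on the device.
    """
    w = dp / dp.sum()                            # normalized layer weights
    return np.tensordot(w, field, axes=(0, 0))   # -> (NY, NX)

rng = np.random.default_rng(0)
temps = rng.normal(280.0, 10.0, size=(NZ, NY, NX)).astype(np.float32)
dp = np.full(NZ, 10.0)                           # uniform layer thickness

col_means = layer_mean(temps, dp)
print(col_means.shape)  # (8, 12): one value per horizontal grid point
```

With uniform layer thicknesses the weighted mean reduces to a simple vertical average, which makes the result easy to sanity-check.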
Below are some benchmarks computed on the HRRR 3 km CONUS grid, which spans 1799 longitudinal points by 1059 latitudinal points by 50 vertical levels (95.26 million grid points).
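The grid-point count follows directly from the stated dimensions:

```python
# HRRR 3 km CONUS grid dimensions
nx, ny, nz = 1799, 1059, 50

total = nx * ny * nz
print(total)                   # 95257050
print(round(total / 1e6, 2))   # 95.26 (million grid points)
```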
Deployable on Existing Supercomputers and the Cloud
All that is required to process data with gSHARP is access to an NVIDIA GPU, so existing supercomputers such as Cheyenne, Blue Waters, and Titan can leverage this accelerated processing code. Additionally, gSHARP was developed on the Google Cloud Platform, so it can be deployed quickly and shut down once processing finishes, making it agile, cost-effective, and scalable to specific tasks. gSHARP is also compatible with Amazon Web Services compute instances.