ONNX-Runtime
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and delivers high performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
homepage: https://onnxruntime.ai/
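A minimal sketch of running inference with the ONNX Runtime Python API, assuming the module above is loaded and an exported model file is available; the file name `model.onnx`, the input shape, and the dummy data below are placeholders, not part of this page.

```python
import numpy as np
import onnxruntime as ort

# Create an inference session; the CUDA provider is used when available
# (e.g. with the -CUDA-11.3.1 variant), otherwise execution falls back to CPU.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path to an exported ONNX model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Feed a dummy input that matches the model's expected input name and shape.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```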
version | versionsuffix | toolchain
---|---|---
1.10.0 | -CUDA-11.3.1 | foss/2021a
1.10.0 | | foss/2021a
1.16.3 | | foss/2022b