High-efficiency floating-point neural network inference operators
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, x86, WebAssembly, and RISC-V platforms. It is not intended for direct use by deep learning practitioners and researchers; instead, it provides low-level performance primitives for accelerating high-level machine learning frameworks such as TensorFlow Lite, TensorFlow.js, PyTorch, ONNX Runtime, and MediaPipe.
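As an illustration of how a high-level framework taps these primitives, TensorFlow Lite ships an XNNPACK delegate that routes supported floating-point operators to XNNPACK kernels. A minimal sketch using TensorFlow Lite's C API follows; it assumes the TensorFlow Lite headers and library are available, and the model path and thread count are placeholders:

```c
// Sketch: enabling the XNNPACK delegate through TensorFlow Lite's C API.
// Assumes TensorFlow Lite is installed; "model.tflite" is a placeholder path.
#include <stdio.h>
#include "tensorflow/lite/c/c_api.h"
#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"

int main(void) {
  TfLiteModel* model = TfLiteModelCreateFromFile("model.tflite");
  if (model == NULL) return 1;

  // Create the XNNPACK delegate; the thread count here is an example value.
  TfLiteXNNPackDelegateOptions xnnpack_options =
      TfLiteXNNPackDelegateOptionsDefault();
  xnnpack_options.num_threads = 4;
  TfLiteDelegate* delegate = TfLiteXNNPackDelegateCreate(&xnnpack_options);

  // Attach the delegate before creating the interpreter, so supported
  // floating-point operators are handed off to XNNPACK.
  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreterOptionsAddDelegate(options, delegate);

  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
  if (interpreter != NULL &&
      TfLiteInterpreterAllocateTensors(interpreter) == kTfLiteOk &&
      TfLiteInterpreterInvoke(interpreter) == kTfLiteOk) {
    printf("inference ran through the XNNPACK delegate\n");
  }

  TfLiteInterpreterDelete(interpreter);
  TfLiteInterpreterOptionsDelete(options);
  TfLiteModelDelete(model);
  TfLiteXNNPackDelegateDelete(delegate);
  return 0;
}
```

The other frameworks listed above integrate XNNPACK in an analogous way, selecting its kernels behind their own operator dispatch rather than exposing XNNPACK's API to end users.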
| Release | Stable | Testing |
|---|---|---|
| Fedora Rawhide | 0.0^git20240229.fcbf55a-2.fc41 | - |
| Fedora 40 | 0.0^git20221221.51a9875-4.fc40 | - |
You can contact the maintainers of this package via email at xnnpack-maintainers@fedoraproject.org.