To make it easy for developers to access the NPU, Rockchip provides RKNN, which offers both a C API and a Python API.
The RKNN API SDK is an acceleration solution based on the RK3399Pro Linux/Android NPU hardware; it provides general acceleration support for AI applications developed with the RKNN API.
Install the rknn-api development kit first:
sudo dnf install -y rknn-api
If the installation fails, download it from OneDrive: rknn_api_sdk
After installation succeeds, you can find the RKNN header file rknn_api.h and the library file librknn_api.so in the system directories. An application only needs to include the header file and link the dynamic library to develop AI applications.
Include the header file:
#include "rknn_api.h"
Link the dynamic library:
LDFLAGS = -lrknn_api
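As an illustrative sketch of how an application uses librknn_api, the typical call sequence is init → set inputs → run → get outputs → release. The model path, input tensor shape, and error handling below are assumptions for illustration; consult Rockchip_User_Guide_RKNN_API_V*.pdf for the authoritative API description. This example requires the RKNN SDK installed as described above.

```c
/* Hypothetical sketch of the typical rknn_api call sequence.
 * Build with: gcc demo.c -lrknn_api -o demo
 * "model.rknn" and the 224x224x3 input shape are placeholder assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "rknn_api.h"

int main(void)
{
    /* Load a pre-converted .rknn model blob into memory. */
    FILE *fp = fopen("model.rknn", "rb");
    if (!fp) { perror("model.rknn"); return -1; }
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    void *model = malloc(size);
    if (fread(model, 1, size, fp) != (size_t)size) { fclose(fp); return -1; }
    fclose(fp);

    /* Initialize an NPU context with the model blob. */
    rknn_context ctx;
    int ret = rknn_init(&ctx, model, (uint32_t)size, 0);
    free(model);
    if (ret < 0) { printf("rknn_init failed: %d\n", ret); return -1; }

    /* Feed one input tensor (224x224 RGB, NHWC, is an assumed example). */
    static unsigned char img[224 * 224 * 3];
    rknn_input input;
    memset(&input, 0, sizeof(input));
    input.index = 0;
    input.buf = img;
    input.size = sizeof(img);
    input.type = RKNN_TENSOR_UINT8;
    input.fmt = RKNN_TENSOR_NHWC;
    rknn_inputs_set(ctx, 1, &input);

    /* Run inference and fetch the first output converted to float. */
    rknn_run(ctx, NULL);
    rknn_output output;
    memset(&output, 0, sizeof(output));
    output.want_float = 1;
    rknn_outputs_get(ctx, 1, &output, NULL);

    /* ... post-process output.buf here ... */

    rknn_outputs_release(ctx, 1, &output);
    rknn_destroy(ctx);
    return 0;
}
```

The same sequence applies on Android, where it would be wrapped in a JNI library as described below.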
For Android, go to the rknn-api/Android/rknn_api directory. The RKNN API is defined in the header file include/rknn_api.h, and the dynamic library paths are lib64/librknn_api.so and lib/librknn_api.so. The application only needs to include the header file and link the dynamic library to develop the JNI library of the relevant AI application. Currently, only JNI development is supported on Android.
For more details on the RKNN API SDK, please refer to the document Rockchip_User_Guide_RKNN_API_V*.pdf.
Python development only requires calling the APIs in the RKNN-Toolkit package.
RKNN-Toolkit provides users with a development kit for model conversion, inference, and performance evaluation on PC, RK3399Pro, and RK1808 hardware. With the provided Python interface, users can easily implement the following features:
- Model conversion
- Quantization
- Model inference
- Performance evaluation
- Memory evaluation
- Model pre-compilation
- Model segmentation
- Custom OP
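As a sketch of how several of the features above fit together (conversion, quantization, inference, and performance evaluation), the snippet below follows the usual RKNN-Toolkit workflow. The model file, dataset file, and preprocessing values are placeholder assumptions; see the AI/RKNN-Toolkit wiki page for the authoritative usage.

```python
# Hypothetical sketch of the RKNN-Toolkit Python workflow.
# Requires the rknn-toolkit package; all file names below are placeholders.
import numpy as np
from rknn.api import RKNN

rknn = RKNN()

# Preprocessing config (mean/std and channel order are example assumptions).
rknn.config(channel_mean_value='0 0 0 255', reorder_channel='0 1 2')

# Model conversion: import, e.g., a TensorFlow Lite model.
rknn.load_tflite(model='mobilenet_v1.tflite')

# Quantization is performed during build when do_quantization=True,
# using a calibration dataset list file.
rknn.build(do_quantization=True, dataset='./dataset.txt')

# Export the converted model for deployment via the C API.
rknn.export_rknn('mobilenet_v1.rknn')

# Model inference and performance evaluation on the target NPU.
rknn.init_runtime()
img = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in for a real image
outputs = rknn.inference(inputs=[img])
rknn.eval_perf()

rknn.release()
```

The exported .rknn file is what the C API example above loads with rknn_init.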
For more details on the RKNN-Toolkit APIs, please refer to the wiki page AI/RKNN-Toolkit.