RKNN-Toolkit

Introduction

Rockchip offers the RKNN-Toolkit development kit for model conversion, forward inference, and performance evaluation.

Users can easily perform the following functions through the provided Python interface:

1) Model conversion: supports Caffe, TensorFlow, TensorFlow Lite, ONNX, and Darknet models, and supports importing and exporting RKNN models, so that models can be loaded and used on the hardware platform.

2) Forward inference: users can simulate running the model on a PC to get the inference results, or run the model on the specified hardware platform (RK3399Pro/RK1808) to get the inference results.

3) Performance evaluation: users can simulate running the model on a PC to get both the total time spent in the model and the per-layer timing information. Users can also run the model on the specified hardware platform (RK3399Pro/RK1808) via online debugging to get both the total run time of the model on the hardware and the per-layer timing information.

This chapter mainly explains how to perform model conversion on the RK3399Pro/RK1808 development board. For descriptions of the other functions, please refer to the RKNN-Toolkit User Guide: https://dl.radxa.com/rockpin10/docs/sw/rknn-toolkit/Rockchip_User_Guide_RKNN_Toolkit_V1.3.0_EN.pdf

Installation preparation

   sudo dnf install -y cmake gcc gcc-c++ protobuf-devel protobuf-compiler lapack-devel
   sudo dnf install -y python3-devel python3-opencv python3-numpy-f2py python3-h5py python3-lmdb python3-grpcio
   pip3 install scipy-1.2.0-cp36-cp36m-linux_aarch64.whl
   pip3 install onnx-1.4.1-cp36-cp36m-linux_aarch64.whl
   pip3 install tensorflow-1.10.1-cp36-cp36m-linux_aarch64.whl

After installing the basic packages above, install the rknn-toolkit wheel package. The RKNN wheel package and the other Python wheel packages can be downloaded from OneDrive.
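
For example, assuming the downloaded wheel is named rknn_toolkit-1.3.0-cp36-cp36m-linux_aarch64.whl (the actual filename depends on the version you downloaded), the install step would look like:

   pip3 install rknn_toolkit-1.3.0-cp36-cp36m-linux_aarch64.whl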

Since pip does not provide ready-made aarch64 versions of the scipy and onnx wheel packages, we have provided pre-compiled wheels. If you want the latest version of a package, or you find a problem with a pre-compiled wheel, you can install it yourself with pip. This compiles the package from source during installation, which takes a long time, so please be patient:

   pip3 install scipy
   pip3 install onnx

If the installation reports an error, install the corresponding software packages indicated by the error message.

Model Conversion

API call flow

[Figure: RKNN-Toolkit model conversion API call flow (Rknn-conv-call.png)]

Example

   from rknn.api import RKNN

   INPUT_SIZE = 64

   if __name__ == '__main__':
       rknn = RKNN()  # Create an RKNN execution object

       '''
       Configure the model input so the NPU can preprocess the input data.
       channel_mean_value='0 0 0 255': when running forward inference, the RGB data
       will be converted as (R - 0) / 255, (G - 0) / 255, (B - 0) / 255.
       The RKNN model performs the mean subtraction and normalization automatically.
       reorder_channel='0 1 2' specifies whether to adjust the image channel order:
       '0 1 2' means the channel order of the input image is kept unchanged;
       '2 1 0' means channels 0 and 2 are exchanged, so RGB input is adjusted
       to BGR, and BGR input is adjusted to RGB.
       Here the image channel order is not adjusted.
       '''
       rknn.config(channel_mean_value='0 0 0 255', reorder_channel='0 1 2')

       '''
       Load the TensorFlow model.
       tf_pb='digital_gesture.pb' specifies the TensorFlow model to be converted.
       inputs specifies the input nodes of the model.
       outputs specifies the output nodes of the model.
       input_size_list specifies the size of the model input.
       '''
       print('--> Loading model')
       rknn.load_tensorflow(tf_pb='digital_gesture.pb',
                            inputs=['input_x'],
                            outputs=['probability'],
                            input_size_list=[[INPUT_SIZE, INPUT_SIZE, 3]])
       print('done')

       '''
       Build the RKNN model from the parsed pb model.
       do_quantization=False means the model is not quantized.
       Quantization reduces the size of the model and speeds up inference,
       but at some loss of precision.
       '''
       print('--> Building model')
       rknn.build(do_quantization=False)
       print('done')
       rknn.export_rknn('./digital_gesture.rknn')  # Export and save the rknn model file
       rknn.release()  # Release the RKNN context
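
The example above skips quantization. If you do want a quantized model, the build step takes a dataset file used for quantization calibration; as a minimal sketch, assuming a plain-text file dataset.txt listing one calibration image path per line:

   rknn.build(do_quantization=True, dataset='./dataset.txt')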

Model Inference

API call flow

[Figure: RKNN-Toolkit model inference API call flow (Rknn-infe-call.png)]

Example

   import numpy as np
   from PIL import Image
   from rknn.api import RKNN


   # Analyze the output of the model to get the most probable gesture and its probability
   def get_predict(probability):
       data = probability[0][0]
       data = data.tolist()
       max_prob = max(data)
       return data.index(max_prob), max_prob


   def load_model():
       rknn = RKNN()  # Create an RKNN execution object
       print('--> Loading model')
       rknn.load_rknn('./digital_gesture.rknn')  # Load the RKNN model
       print('loading model done')
       print('--> Init runtime environment')
       ret = rknn.init_runtime(host='rk3399pro')  # Initialize the RKNN runtime environment
       if ret != 0:
           print('Init runtime environment failed')
           exit(ret)
       print('done')
       return rknn


   def predict(rknn):
       im = Image.open("../picture/6_7.jpg")  # Load the image
       im = im.resize((64, 64), Image.ANTIALIAS)  # Resize the image to 64x64
       mat = np.asarray(im.convert('RGB'))  # Convert to RGB format
       outputs = rknn.inference(inputs=[mat])  # Run forward inference and get the result
       pred, prob = get_predict(outputs)  # Transform the inference results into readable information
       print(prob)
       print(pred)


   if __name__ == "__main__":
       rknn = load_model()
       predict(rknn)
       rknn.release()
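
The performance-evaluation function mentioned in the introduction follows the same flow as inference. As a minimal sketch, assuming the model has been loaded and the runtime initialized exactly as in load_model() above, and that mat is a preprocessed input image as in predict() (eval_perf is the RKNN-Toolkit API for this, though argument names can vary between toolkit versions):

   # Print the total run time and the per-layer timing of the loaded model
   rknn.eval_perf(inputs=[mat], is_print=True)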