<aside> 💡 A guide outlining the general procedure for converting a PyTorch model to .om

</aside>

Model Conversion — General Procedure

  1. Export model from PyTorch (script & .pth) to ONNX

    1. Evaluate accuracy of converted onnx model with onnxruntime
    2. Simplify onnx model with onnxsim
  2. Convert simplified onnx model to om with ATC/MindStudio

  3. Evaluate om

    [Diagram: PyTorch → ONNX → OM conversion pipeline]

General .pth → .om Procedure

  1. Export PyTorch model to ONNX — only the forward pass matters in this case

    Model Conversion: PyTorch → ONNX

    Conversion is not guaranteed to run smoothly when going from .pth to .onnx — so what should you do if the export to ONNX fails?

    As a general rule of thumb, visualizing the network’s computation graph and tracking how the data (shape, dtype) changes through it helps tremendously when debugging this type of error.


  2. Convert simplified ONNX to OM — convert the ONNX model to OM so that it can run on Ascend hardware

    There are 2 ways to convert from ONNX to OM:

    1. using the **ATC** command line tool (reference)
    2. using the MindStudio GUI (which runs on **ATC** backend) — you can either install MindStudio directly [link] or use the following docker image.

    For MindStudio Docker Image: (ensure you have docker installed on your machine)

    1. First, download the MindStudio Docker image [link] (mindstudio_504a005.zip, ~9 GB) and unzip it to obtain the mindstudio_504a005.tar file:

      unzip mindstudio_504a005.zip

    2. Load the tar file with Docker and verify

      docker load < mindstudio_504a005.tar
      docker images | grep mindstudio 
      
    3. Start a container with the image

      docker run -it --rm --privileged=true -p 23:22 \
          --env="DISPLAY=$DISPLAY" \
          --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
          -v /home/${USER}/Public:/home/HwHiAiUser/Public \
          mindstudio:5.0.4.a005
      # password: Mind@123
      

    For a more detailed explanation, refer to the “Setup Cross Compiling Environment, Option 1: Docker” section in the Atlas 200 DK Setup Guide

    Atlas 200 DK Setup Guide

    1. Once you are inside the container, run: cd Mindstudio/bin && ./Mindstudio.sh to open the GUI
  3. Evaluate OM — evaluate the converted .om model to ensure it runs correctly on the Ascend AI processor

    The general steps to test a converted model:

    1. Initialize ACL resources and load model
    2. Read and preprocess input
    3. Pass input to model for execution
    4. Verify output is correct
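Step 4 usually means comparing the OM model’s output against a reference output (e.g. from PyTorch or onnxruntime) on the same input. A numpy-only sketch of such a check is shown below — the arrays are stand-ins for real model outputs, and the tolerance values and cosine-similarity threshold are assumptions you should tune for your model.

```python
import numpy as np

def outputs_match(reference: np.ndarray, candidate: np.ndarray,
                  atol: float = 1e-3, rtol: float = 1e-3) -> bool:
    """Elementwise closeness plus cosine similarity as a coarse sanity check."""
    if reference.shape != candidate.shape:
        return False
    cos = float(np.dot(reference.ravel(), candidate.ravel()) /
                (np.linalg.norm(reference) * np.linalg.norm(candidate) + 1e-12))
    return bool(np.allclose(reference, candidate, atol=atol, rtol=rtol)
                and cos > 0.999)

# Stand-ins: 'ref' would come from PyTorch/onnxruntime, 'out' from the OM model.
ref = np.array([0.1, 0.2, 0.3], dtype=np.float32)
out = ref + 1e-5
print(outputs_match(ref, out))  # True for near-identical outputs
```

Small numeric differences between ONNX and OM outputs are expected (different kernels, possible fp16 paths), which is why a tolerance-based comparison is more useful than exact equality.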

Visualize the Model

<aside> 💡 We recommend 2 ways to visualize the model (computation graph) — Netron and MindStudio’s Model Visualizer

</aside>

  1. Netron — https://netron.app/

    Supports many formats (even .om)

  2. MindStudio - Model Visualizer (for visualizing .om)

    Inside the MindStudio GUI — click on ‘Ascend’ → ‘Model Visualizer’ and select the .om file you want to visualize.


[Optional Knowledge] Profiling Network

Tutorial

Sample code for running the pipeline.