Good question. As far as I can check, the TFLite Object Detection API expects NHWC (https://www.tensorflow.org/lite/inference_with_metadata/task_library/object_detector#model_compatibility_requirements), while PyTorch models normally expect images in NCHW. (The code comment seems to have a typo.)
The idea is to add a conversion operation in front of YOLOv7, so that when the converted TFLite model receives an image from, say, an Android app with dimensions (1, 640, 640, 3), the first operation permutes it into (1, 3, 640, 640) for the YOLO model to process. That's why NHWC -> NCHW corresponds to permute(0, 3, 1, 2); see the sketch below.
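A minimal sketch of what that wrapper could look like; `NHWCWrapper` is a hypothetical name I'm using for illustration, not the actual code in the YOLOv7 export script:

```python
import torch
import torch.nn as nn

class NHWCWrapper(nn.Module):
    """Hypothetical wrapper: permutes an NHWC input to NCHW before
    calling the wrapped PyTorch model."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # (1, 640, 640, 3) -> (1, 3, 640, 640), i.e. NHWC -> NCHW
        return self.model(x.permute(0, 3, 1, 2))
```

Because the permute is part of the traced graph, it survives the ONNX -> TFLite conversion, and the TFLite model ends up accepting NHWC input directly.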
As for the ONNX conversion, I "believe" what it does is trace the PyTorch model: it feeds a dummy input tensor through the model and records how the layers are connected. If the trace doesn't go through, it throws an error.
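Roughly, the export looks like this (assuming the wrapper above and a 640x640 input; the file name, tensor names, and opset are just placeholders):

```python
import torch

wrapped = NHWCWrapper(model)  # `model` is the loaded YOLOv7 PyTorch model
wrapped.eval()

# Dummy NHWC input drives the trace; its values don't matter, only its shape
dummy_input = torch.zeros(1, 640, 640, 3)

torch.onnx.export(
    wrapped,
    dummy_input,
    "yolov7_nhwc.onnx",      # placeholder output path
    input_names=["images"],   # placeholder tensor names
    output_names=["output"],
    opset_version=12,         # placeholder; pick what your converter supports
)
```

If any layer does something tracing can't follow (e.g. data-dependent Python control flow), this is typically where the export fails.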
Without more details, I can't give a better suggestion.