Hello, I was wondering if I could use the .engine file that jetson-inference generates after one run instead of the .onnx. I tried it with a custom imageNet model, but it gives me an error due to wrong dims and sizes.
This is a truncated output of the .onnx model, which works fine:
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 28
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 4415488
[TRT] -- bindings 2
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1
-- dim #1 3
-- dim #2 224
-- dim #3 224
[TRT] binding 1
-- index 1
-- name 'output_0'
-- type FP32
-- in/out OUTPUT
-- # dims 2
-- dim #0 1
-- dim #1 28
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=224 w=224) size=602112
[cuda] cudaAllocMapped 602112 bytes, CPU 0x100ca8200 GPU 0x100ca8200
[TRT] binding to output 0 output_0 binding index: 1
[TRT] binding to output 0 output_0 dims (b=1 c=28 h=1 w=1) size=112
[cuda] cudaAllocMapped 112 bytes, CPU 0x100d3b200 GPU 0x100d3b200
[TRT]
[TRT] device GPU, resnet18.onnx initialized.
[TRT] imageNet -- loaded 28 class info entries
[TRT] imageNet -- resnet18.onnx initialized.
And this is the output of the onnx.1.1.8201.GPU.FP16.engine model:
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 28
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 4415488
[TRT] -- bindings 2
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1
-- dim #1 3
-- dim #2 224
-- dim #3 224
[TRT] binding 1
-- index 1
-- name 'output_0'
-- type FP32
-- in/out OUTPUT
-- # dims 2
-- dim #0 1
-- dim #1 28
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=1 h=3 w=224) size=2688
[cuda] cudaAllocMapped 2688 bytes, CPU 0x100ca8200 GPU 0x100ca8200
[TRT] binding to output 0 output_0 binding index: 1
[TRT] binding to output 0 output_0 dims (b=1 c=1 h=28 w=1) size=112
[cuda] cudaAllocMapped 112 bytes, CPU 0x100ca8e00 GPU 0x100ca8e00
[TRT] device GPU, initialized resnet18.onnx.1.1.8201.GPU.FP16.engine
[TRT] imageNet -- loaded 28 class info entries
[TRT] imageNet -- didn't load expected number of class descriptions (28 of 1)
[TRT] imageNet -- failed to load synset class descriptions (28 / 28 of 1)
[TRT] imageNet -- failed to initialize.
imagenet: failed to initialize imageNet
As you can see, there is a dims mismatch: (b=1 c=3 h=224 w=224) vs. (b=1 c=1 h=3 w=224) at the input, and (b=1 c=28 h=1 w=1) vs. (b=1 c=1 h=28 w=1) at the output. What can be done to overcome this?
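The reported allocation sizes are consistent with the misread dims: each binding size is just the product of its dims times 4 bytes per FP32 element. A quick sanity check (plain arithmetic, not jetson-inference code) reproduces both logged values:

```python
# A TensorRT FP32 binding buffer is dims product * 4 bytes per element.
def binding_size(dims, bytes_per_elem=4):
    size = bytes_per_elem
    for d in dims:
        size *= d
    return size

# Input binding from the working .onnx run: (b=1 c=3 h=224 w=224)
print(binding_size([1, 3, 224, 224]))  # 602112, matches the first log

# Input binding from the .engine run: (b=1 c=1 h=3 w=224)
print(binding_size([1, 1, 3, 224]))    # 2688, matches the second log
```

So the engine run isn't losing data; it is interpreting the same tensor with its dimensions shifted by one position, which is why the input buffer shrinks from 602112 to 2688 bytes.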
Thanks in advance.