Ticket Name: TDA2: Running caffe-jacinto object detection using Vision SDK 3.03 TIDL_OD usecase

Query Text:
Part Number: TDA2

Hi,

I am trying to run the caffe-jacinto object detection use case from GitHub on TDA2x, using the TIDL_OD use case in Vision_SDK_03_03_00_00. I have generated the NET.BIN and PRM.BIN using the import tool in TIDL, created inHeader_OD and inData_OD for resolution 512x512 (verified by running another model that takes the same input resolution), and copied these four files to the SD card. In chains_tidlOD.c I have modified some parameters as follows:

#define TIDL_OD_INPUT_WIDTH    (512)
#define TIDL_OD_INPUT_HEIGHT   (512)
#define DEC_OUT_WIDTH          (512)
#define DEC_OUT_HEIGHT         (512)

For the display resolution, I kept the same settings as for the default 768x320 model (scaled by 2 for display):

#define TIDL_OD_DISPLAY_WIDTH  (1536)
#define TIDL_OD_DISPLAY_HEIGHT (640)
#define TIDL_OD_FPS_OPPNOM     (1)
#define SYNC_THRESHOLD         (12000)

When I run the use case I get no errors, but the video stream is black. I have faced a similar issue before, as in https://e2e.ti.com/support/arm/automotive_processors/f/1021/t/688490. By reducing the FPS, I was able to get it working then. Now I have reduced the FPS to the minimum and tried both the initial model and the sparse model. During import, the total complexity is shown as 3.36 GMACS. Is the model failing to run because of the high GMACS?

Thanks in advance,
Navinprashath.R.R

Responses:

Hi Navin,

Inside deploy.prototxt, did you set the "keep_top_k" parameter to 20 and "confidence_threshold" to 0.15? It seems that these two parameters influence the real-time MAC count and the performance.

Best Regards,
Eric Lai

Hi Eric,

Thanks for the inputs. After setting "keep_top_k" to 20 and "confidence_threshold" to 0.15, I am able to run both the initial and the sparse model at 3 fps. When I tried to change the FPS of the sparse model to 6, there is again no stream (it doesn't work at 5 fps either).
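For reference, the two values Eric suggests live in the DetectionOutput layer of the SSD deploy.prototxt. A minimal sketch of the relevant fragment, assuming the standard Caffe-SSD field names; the layer name, num_classes, and nms_param values shown are illustrative and should match your own network:

```
layer {
  name: "detection_out"        # illustrative; use the name from your deploy.prototxt
  type: "DetectionOutput"
  detection_output_param {
    num_classes: 21            # VOC0712 example: 20 classes + background
    keep_top_k: 20             # reduced from the usual default to cut post-processing cost
    confidence_threshold: 0.15 # drop low-confidence boxes early
    nms_param {
      nms_threshold: 0.45      # illustrative NMS settings
      top_k: 400
    }
  }
}
```

A smaller keep_top_k limits how many detections survive per frame, which is why it affects the real-time MAC count and frame rate on the device.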
In the caffe-jacinto GitHub repository it is mentioned that the sparse model achieves a 2.5x speedup, so I expected the sparse model to run at least 2x as fast as the initial model. Kindly let me know your inputs on this.

Regards,
Navinprashath.R.R

Hi Navin,

Please refer to this thread: e2e.ti.com/support/arm/automotive_processors/f/1021/t/681674. The 512x512 model from caffe-jacinto should run faster than 10 fps. I hope this helps you.

Best Regards,
Eric Lai

Hi Eric,

Thanks for the link; I saw this thread earlier. When running in FILE IO mode, either EVE or DSP is selected (also, in the TIDL OD use case the whole model is configured to run only on the 4 EVE cores). So how do the changes that set layersGroupId for the extra layers take effect? Do these layers run on the DSP when layersGroupId is configured as 2, irrespective of whether EVE or DSP is selected in the use case?

Regards,
Navinprashath.R.R

Hi Navin,

I am not sure I am correct, but I have read some of the TIDL source. Most of the layers implemented by TI can run on both the EVE and the DSP, but the performance may be very different; for example, floating-point operations should run on the DSP for better performance. So yes, for the OD use case, if you set the layer group to 2, it will run on the DSP. (I think the layer group parameter may be stored in NET.BIN, and in the use case it is connected to the TIDL library and run on the specific core.) If there is any mistake, please correct me.

Best Regards,
Eric Lai

Hi Eric,

Thanks for the inputs. For which use case did you get 10 fps: object detection using the FILE IO use case, or the TIDL OD use case?

Regards,
Navinprashath.R.R

Hi Navin,

I ran the OD use case with the caffe-jacinto 512x512 model.

Best Regards,
Eric Lai

Hi Eric,

I replaced layersGroupId and conv2dKernelType with the ones shared on the thread you posted, but I am getting an error when importing: the import tool crashes.
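As a shorthand for Eric's explanation: in the TIDL import config, layersGroupId takes one value per layer, in network order, and assigns that layer to a processing group; per this thread, group 1 runs on EVE and group 2 on DSP, with 0 used for the data in/out layers. A minimal illustrative fragment (the five-layer network implied here is hypothetical, not the JDetNet layer list):

```
# one entry per layer, in network order
# 0 = data in/out, 1 = EVE group, 2 = DSP group (per this thread)
layersGroupId    = 0 1 1 2 0
# conv2dKernelType likewise takes one flag per layer, selecting the
# convolution kernel variant for that layer
conv2dKernelType = 0 0 0 1 0
```

This is why the split takes effect regardless of which core is selected in the use case: the per-layer assignment is baked in at import time, and the use case dispatches each group to its core.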
Here is my import file:

# Default - 0
randParams = 0
# 0: Caffe, 1: TensorFlow, Default - 0
modelType = 0
# 0: Fixed quantization by training framework, 1: Dynamic quantization by TIDL, Default - 1
quantizationStyle = 1
# quantRoundAdd/100 will be added while rounding to integer, Default - 50
quantRoundAdd = 25
numParamBits = 8
# 0: 8bit Unsigned, 1: 8bit Signed, Default - 1
inElementType = 0
inputNetFile = "..\..\test\testvecs\config\caffe_jacinto_models\trained\object_detection\voc0712\JDetNet\sparse\deploy.prototxt"
inputParamsFile = "..\..\test\testvecs\config\caffe_jacinto_models\trained\object_detection\voc0712\JDetNet\sparse\voc0712_ssd.caffemodel"
outputNetFile = "..\..\test\testvecs\config\tidl_models\sparse\NET_OD.BIN"
outputParamsFile = "..\..\test\testvecs\config\tidl_models\sparse\PRM_OD.BIN"
rawSampleInData = 1
preProcType = 4
sampleInData = "..\..\test\testvecs\input\tsr_512_512.y"
tidlStatsTool = "..\quantStatsTool\eve_test_dl_algo.out.exe"
layersGroupId = 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 2 2 2 0
conv2dKernelType = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Regards,
Navinprashath.R.R

Hi Navin,

How does it crash? I think you should ask a TI expert about this part. Maybe the number of classes is too many? In my case, my model detects 4 classes and the import is okay. How about replacing the layersGroupId and conv2dKernelType with values matching your own model?

Best Regards,
Eric Lai

Hi Eric Lai,

Thanks for your help here.

Regards,
Praveen

Hi Navin,

Please refer to this new post, which explains the steps to run the TIDL OD use case in VSDK: e2e.ti.com/.../689617. If you still see issues in running the OD use case, please post here.

Thanks,
Praveen