Ticket Name: TDA2: TIDL (Deep Learning) object detection use case
Query Text:
Part Number: TDA2

Dear TI,

In the VSDK tidl_OD (TI deep learning object detection) use case:

Alg_tidlpreproc (A15) -> Alg_tidl_Eve1 (EVE1)
Alg_tidlpreproc (A15) -> Alg_tidl_Eve2 (EVE2)
Alg_tidlpreproc (A15) -> Alg_tidl_Eve3 (EVE3)
Alg_tidlpreproc (A15) -> Alg_tidl_Eve4 (EVE4)

Will the trained Caffe model be automatically distributed across these 4 EVEs, e.g. with some layers running on EVE1, some layers running on EVE2, etc.? Is my understanding correct? Is it recommended to use all of the EVEs (i.e. EVE1 to EVE4) for maximum performance?

In the conversion tool configuration text, we can use layersGroupId to define whether EVE or DSP runs each layer. For example, from tidl_import_JDetNet_voc0712.txt:

layersGroupId = 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 2 2 2 0

If we have set the algorithm to run on EVE (in chains_tidlOD.txt), does a "2" (DSP) set in layersGroupId still take effect? And what does "0" in layersGroupId refer to?

Thanks and best regards
He Wei
Responses:
Hi,

>> some layers running on EVE1, and some layers running on EVE2, etc.? Is my understanding correct?
No. All the layers of one frame run on one EVE core, and the next frame runs on the next EVE core, so the partitioning is across frames, not across layers.

>> Is it recommended to use all of the EVEs (i.e. EVE1 to EVE4) for maximum performance?
Yes.

>> for e.g. in the below from tidl_import_JDetNet_voc0712.txt:
>> layersGroupId = 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 2 2 2 0
The first and last layers are just data layers; since there is no compute in them, there is no need to run them on either EVE (1) or DSP (2), so they are set to 0. Please refer to FAQs 21 and 22 in the TIDL user guide for more details on the usage of the layersGroupId parameter.

Thanks,
Praveen
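To make the group semantics above concrete, here is a minimal, hypothetical import-config fragment (the layer count and split are invented for illustration; the meanings 0 = data layer, 1 = EVE group, 2 = DSP group follow the explanation above):

```text
# Hypothetical TIDL import config fragment (sketch only, not a complete config).
# 0 = data layer (no compute), 1 = layers grouped to run on EVE, 2 = layers grouped to run on DSP.
# First and last entries are the input/output data layers, hence 0.
layersGroupId = 0 1 1 1 1 1 2 2 0
```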
Thanks Praveen,

1. Actually, in chains_tidlOD.txt the configuration is as below:

Alg_tidlpreproc (A15) -> Alg_tidl_Eve1 (EVE1)
Alg_tidlpreproc (A15) -> Alg_tidl_Eve2 (EVE2)
Alg_tidlpreproc (A15) -> Alg_tidl_Eve3 (EVE3)
Alg_tidlpreproc (A15) -> Alg_tidl_Eve4 (EVE4)

So I don't need to set the layersGroupId values in the import configuration to EVE (1) or DSP (2), because with the setting above I am only using EVE (1). Am I right? Or, even if I set a layer to DSP (2), would it not take effect?

2. I have trained the attached deploy.prototxt, converted it to TI net and param bin files, and run it on the board, but I got this assertion error:

Assertion @ Line: 231 in ipcOutLink_drv.c: pObj->createArgs.inQueParams.prevLinkQueId < pObj->prevLinkInfo.numQue : failed !!!

Can you advise what could be wrong? (Let me know if you need more files.)

Thanks and best regards
He Wei

3060.deploy.zip
1. Right. You don't need to change anything to run on EVE. To run some layers on the DSP, no changes are needed in the use case either; you set that in the import config, via the layersGroupId parameter, while generating the TI net and param bin files.

2. I would recommend that you first run these converted TI net and param bin files on the board using the standalone TIDL executables (.out files) instead of VSDK. For this, refer to section 3.3.4 (Building the Test Application Executable through GMAKE) in the TIDL user guide. Once you have confirmed that TIDL works with your trained model standalone, you can try the model in VSDK.

Thanks,
Praveen
Dear Praveen,

Thanks for the advice on using the .out file to verify, but I have a constraint here: besides needing to install/build all the tools and code, I need a JTAG to run the simulator via CCS, and I don't have this facility yet. Currently I mainly rely on the tidl_OD use case to test. Do you have any clue based on the deploy.prototxt I provided?

Thanks and best regards
He Wei
Dear Praveen,

When using tidl_model_import.out.exe to do the conversion of the deploy.prototxt provided by TI:

1. I got the below log:

Name of the Network : ssdJacintoNetV2_deploy
Num Inputs : 1
Could not find detection_out Params
Num of Layer Detected : 49
0, TIDL_DataLayer      , data      0, -1 , 1 , x , x , x , x , x , x , x , x , 0 , 0 , 0 , 0 , 0 , 1 , 3 , 320 , 768 , 0 ,
1, TIDL_BatchNormLayer , data/bias 1,  1 , 1 , 0 , x , x , x , x , x , x , x , 1 , 1 , 3 , 320 , 768 , 1 , 3 , 320 , 768 , 737280 ,
2, TIDL_ConvolutionLayer , conv1a  1,  1 , 1 , 1 , x , x , x , x , x , x , x , 2 , 1 , 3 , 320 , 768 , 1 , 32 , 160 , 384 , 147456000 ,

Can you briefly explain what each number after the layer name refers to? Is the first number referring to layersGroupId?

2. I notice the number of layers detected in this case is 49, but layersGroupId defines only 45 values (0 1 .. 2) in tidl_import_JDetNet_voc0712.txt. Shouldn't both be 49? But if I change layersGroupId to 49 entries, I do not get the correct display output. Any opinion?

3. If I use my own prototxt file, I get this message:

Unsuported Layer Type : Normalize !!!! assuming it as pass through layer

So it will just pass through this layer, but it should not crash the code, right?

4. For the camera live-capture case, is "sampleInData" a must?

sampleInData = "..\..\test\testvecs\input\trace_dump_0_768x320.y"

Remark: after I remove this line, I don't see any object detection running.

Thanks and best regards
He Wei
Hi He Wei,

I would recommend that you go through the FAQ section in the TIDL user guide; it will answer most of your questions. Please find my answers below.

1. No, the first number does not refer to layersGroupId. Please refer to FAQ 14 for more information. The import tool source code is also publicly available, so you can look at the code in "tidl_caffeImport.cpp" where these prints happen for further understanding.

2. No. This is because TIDL merges the processing of some layers to speed up execution, so the layers in the caffe prototxt do not map one-to-one onto the imported model. Changing layersGroupId to 49 entries will therefore not have the intended effect. Please refer to FAQ 9 in the user guide for more details.

3. The code may not crash, but it may affect the final detection output. It is better to avoid this layer while training, or to implement support for it in the import tool by referring to the other layers.

4. sampleInData is a must for the import tool. As I said earlier, please take one frame from your training data and use it as sampleInData. After a successful run of the import tool, verify that its output contains the proper object detections. Running the use case without getting detections in the import tool output will not give proper results.

Thanks,
Praveen
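The layer-merging effect in point 2 can be sketched as follows. This is only an illustration of the idea, not the real import tool logic: the actual merge rules (which layer types get folded into a preceding compute layer) are in the TIDL import tool and its FAQ 9; the mergeable set here is a hypothetical example.

```python
# Sketch: why the imported model can have fewer layers than the caffe
# prototxt. The import tool folds certain layers (e.g. an activation
# following a convolution) into the preceding compute layer, so the
# layersGroupId list indexes imported layers, not prototxt layers.
def imported_layer_count(prototxt_layers, mergeable=("ReLU",)):
    count = 0
    for layer_type in prototxt_layers:
        if layer_type in mergeable and count > 0:
            continue  # folded into the preceding compute layer
        count += 1
    return count

layers = ["Data", "Convolution", "ReLU", "Convolution", "ReLU", "Pooling"]
print(imported_layer_count(layers))  # 4: the two ReLUs are folded away
```

With this picture, 49 prototxt layers collapsing to 45 imported layers is expected, and the 45-entry layersGroupId list is the correct length.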
Dear Praveen,

Thanks a lot for the detailed explanation. Regarding point 4 (sampleInData): so this is mainly for verification purposes? When running the tool on the SampleData from the TI example, the log is as below. Do you have a description of this output? What would indicate whether an object was detected or not?

Processing Frame Number : 0
Layer  1 : Out Q :   254 , TIDL_BatchNormLayer  , PASSED #MMACs =   0.74,   0.74, Sparsity :   0.00
Layer  2 : Out Q :  6021 , TIDL_ConvolutionLayer, PASSED #MMACs = 147.46,  92.65, Sparsity :  37.17
Layer  3 : Out Q :  6168 , TIDL_ConvolutionLayer, PASSED #MMACs = 141.56,  53.33, Sparsity :  62.33
Layer  4 : Out Q : 11702 , TIDL_ConvolutionLayer, PASSED #MMACs = 283.12,  83.44, Sparsity :  70.53
Layer  5 : Out Q : 10597 , TIDL_ConvolutionLayer, PASSED #MMACs = 141.56,  66.11, Sparsity :  53.30
Layer  6 : Out Q : 13807 , TIDL_ConvolutionLayer, PASSED #MMACs = 283.12,  91.59, Sparsity :  67.65
Layer  7 : Out Q : 16861 , TIDL_ConvolutionLayer, PASSED #MMACs = 141.56,  57.32, Sparsity :  59.51
Layer  8 : Out Q : 18642 , TIDL_ConvolutionLayer, PASSED #MMACs = 283.12,  96.27, Sparsity :  66.00
Layer  9 : Out Q : 12901 , TIDL_ConvolutionLayer, PASSED #MMACs = 141.56,  52.28, Sparsity :  63.07
Layer 10 : TIDL_PoolingLayer, PASSED #MMACs = 0.06, 0.06, Sparsity : 0.00
Layer 11 : Out Q : 20342 , TIDL_ConvolutionLayer, PASSED #MMACs = 283.12,  76.31, Sparsity :  73.04
Layer 12 : Out Q :  5763 , TIDL_ConvolutionLayer, PASSED #MMACs = 141.56,  31.40, Sparsity :  77.82
Layer 13 : TIDL_PoolingLayer, PASSED #MMACs = 0.03, 0.03, Sparsity : 0.00
Layer 14 : TIDL_PoolingLayer, PASSED #MMACs = 0.01, 0.01, Sparsity : 0.00
Layer 15 : TIDL_PoolingLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
Layer 16 : Out Q : 18571 , TIDL_ConvolutionLayer, PASSED #MMACs =  62.91,  62.57, Sparsity :   0.56
Layer 17 : Out Q : 13599 , TIDL_ConvolutionLayer, PASSED #MMACs =  31.46,  31.01, Sparsity :   1.42
Layer 18 : Out Q : 17793 , TIDL_ConvolutionLayer, PASSED #MMACs =   7.86,   7.76, Sparsity :   1.27
Layer 19 : Out Q : 18851 , TIDL_ConvolutionLayer, PASSED #MMACs =   2.36,   2.33, Sparsity :   1.16
Layer 20 : Out Q : 26620 , TIDL_ConvolutionLayer, PASSED #MMACs =   0.79,   0.78, Sparsity :   1.22
Layer 21 : Out Q :  4438 , TIDL_ConvolutionLayer, PASSED #MMACs =   3.93,   3.92, Sparsity :   0.20
Layer 22 : TIDL_FlattenLayer, PASSED #MMACs = 0.02, 0.02, Sparsity : 0.00
Layer 23 : Out Q :  3888 , TIDL_ConvolutionLayer, PASSED #MMACs =  20.64,  21.52, Sparsity :  -4.22
Layer 24 : TIDL_FlattenLayer, PASSED #MMACs = 0.08, 0.08, Sparsity : 0.00
Layer 25 : Out Q :  8255 , TIDL_ConvolutionLayer, PASSED #MMACs =   1.47,   1.47, Sparsity :   0.26
Layer 26 : TIDL_FlattenLayer, PASSED #MMACs = 0.01, 0.01, Sparsity : 0.00
Layer 27 : Out Q :  2918 , TIDL_ConvolutionLayer, PASSED #MMACs =   7.74,   7.82, Sparsity :  -1.03
Layer 28 : TIDL_FlattenLayer, PASSED #MMACs = 0.03, 0.03, Sparsity : 0.00
Layer 29 : Out Q : 10803 , TIDL_ConvolutionLayer, PASSED #MMACs =   0.37,   0.37, Sparsity :   0.39
Layer 30 : TIDL_FlattenLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
Layer 31 : Out Q :  2794 , TIDL_ConvolutionLayer, PASSED #MMACs =   1.94,   1.96, Sparsity :  -1.26
Layer 32 : TIDL_FlattenLayer, PASSED #MMACs = 0.01, 0.01, Sparsity : 0.00
Layer 33 : Out Q :  6802 , TIDL_ConvolutionLayer, PASSED #MMACs =   0.11,   0.11, Sparsity :   2.73
Layer 34 : TIDL_FlattenLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
Layer 35 : Out Q :  3384 , TIDL_ConvolutionLayer, PASSED #MMACs =   0.58,   0.59, Sparsity :  -1.28
Layer 36 : TIDL_FlattenLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
Layer 37 : Out Q :  9125 , TIDL_ConvolutionLayer, PASSED #MMACs =   0.02,   0.02, Sparsity :   0.98
Layer 38 : TIDL_FlattenLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
Layer 39 : Out Q :  4334 , TIDL_ConvolutionLayer, PASSED #MMACs =   0.13,   0.13, Sparsity :  -4.39
Layer 40 : TIDL_FlattenLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
Layer 41 : Out Q :  4455 , TIDL_ConcatLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : -1.#J
Layer 42 : Out Q :  2805 , TIDL_ConcatLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : -1.#J
Layer 43 : #MMACs = 0.00, 0.00, Sparsity : 0.00
End of config list found !

Thanks and best regards
He Wei
Hi He Wei,

It is not straightforward to get the detections from the log. We need to check the import tool output (stats_tool_out.bin) for detections using the visualization tool. This tool draws the boxes on the input using this output (stats_tool_out.bin) and the input (SampleData). For the visualization tool code, please refer to the thread below.

e2e.ti.com/.../2502331

Thanks,
Praveen
Dear Praveen,

Can you post the link e2e.ti.com/.../2502331 again? It seems I can't click on the one above.

Thanks and best regards
He Wei
Check this now.. e2e.ti.com/.../2502331 Thanks, Praveen
Dear Praveen,

What is the visualization tool? I got the stats_tool_out.bin, but I didn't see which tool is mentioned in that thread.

Thanks and best regards
He Wei
Hi He Wei,

The tool draws the boxes on the input using the output bin (stats_tool_out.bin) file. Do not take stats_tool_out.bin from that thread; please use the stats_tool_out.bin from your own import tool output. To generate the tool executable, take the "markBox.c" file from that thread and build it; that is the tool I was referring to.

Thanks,
Praveen
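To give a feel for what such a box-marking tool does, here is a small Python sketch. The binary layout is an assumption: SSD-style detection output is commonly rows of 7 float32 values [image_id, label, score, xmin, ymin, xmax, ymax] with normalized coordinates, but the real format of stats_tool_out.bin should be taken from the markBox.c in the referenced thread.

```python
# Sketch of a markBox-style visualizer. ASSUMPTIONS (check markBox.c for
# the real format): the detection file holds rows of 7 little-endian
# float32 values [image_id, label, score, xmin, ymin, xmax, ymax] with
# coordinates normalized to [0, 1]; the input is a raw 8-bit luma frame.
import struct

def parse_detections(blob, conf_thresh=0.4):
    """Return (label, score, x0, y0, x1, y1) tuples above the threshold."""
    dets = []
    for off in range(0, len(blob) - 27, 28):  # 7 floats * 4 bytes per row
        _, label, score, x0, y0, x1, y1 = struct.unpack_from("<7f", blob, off)
        if score >= conf_thresh:
            dets.append((int(label), score, x0, y0, x1, y1))
    return dets

def draw_box(frame, width, height, x0, y0, x1, y1, value=255):
    """Draw a 1-pixel rectangle border on a raw luma frame (bytearray)."""
    l, t = int(x0 * (width - 1)), int(y0 * (height - 1))
    r, b = int(x1 * (width - 1)), int(y1 * (height - 1))
    for x in range(l, r + 1):           # top and bottom edges
        frame[t * width + x] = value
        frame[b * width + x] = value
    for y in range(t, b + 1):           # left and right edges
        frame[y * width + l] = value
        frame[y * width + r] = value
```

A real tool would read the .y frame and stats_tool_out.bin from disk, call these two functions, and write the marked frame back out; if no boxes appear at a reasonable confidence threshold, the imported model is not detecting anything.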
Dear Praveen,

1. Besides the caffe-jacinto provided by TI, can we use official Caffe to train, then convert and run on TDA2x? (I have trained a few models with official Caffe and was able to convert the net and params to TI format, but when running on the TI board I always get different kinds of assertion errors, and it is hard to debug the exact root cause.)

2. Can caffe-jacinto be built without a GPU, i.e. CPU only?

Thanks and best regards
He Wei
Dear Praveen, Any update?
Hi He Wei,

1. It should work. Can you share one prototxt and model so we can root-cause the issue?

2. Yes, it should build. If you face any build issues, you can post your questions on GitHub for support.

Thanks,
Praveen
Dear Praveen,

1. The prototxt and model we used are zipped and attached. During the tool conversion, on the test sample input we had the error:

"Error at line: 1585 : in file .\.\src\tidl_tb.c, of function : test_ti_dl_ivison"

Then, after deploying onto the TDA2x board, we had the assertion error. This is quite hard to debug; can you help advise? Let me know if you need more information.

2. If official Caffe and TensorFlow can work, we will use those first.

Thanks and best regards
He Wei

proto_model.zip
Hi He Wei,

"Error at line: 1585 : in file .\.\src\tidl_tb.c, of function : test_ti_dl_ivison"

This error is caused by the unsupported "Normalize" layer in the prototxt. Please use a "BatchNorm" layer instead of the "Normalize" layer and try again.

Thanks,
Praveen
Dear Praveen,

Thanks a lot. After I changed the "Normalize" layer to a "BatchNorm" layer, there was no error during the tool conversion, but after importing onto the TI board there is no video output. The statistics log is attached. Note that I changed the settings to:

keep_top_k: 20
confidence_threshold: 0.01

Can you advise what could cause the "no video output" issue? Could the processing be taking too long?

Thanks and best regards
He Wei

static_log.zip
Hi He Wei,

Only changing the prototxt to a BatchNorm layer will not work. The model needs to be retrained with the BatchNorm layer.

Thanks,
Praveen
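For reference, batch normalization in a Caffe prototxt is conventionally expressed as a BatchNorm layer followed by a Scale layer (the layer and blob names below are hypothetical). The learned statistics and scale/bias blobs for these layers only exist after retraining, which is why editing the prototxt alone produces broken param/net bin files:

```protobuf
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "conv1/scale"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param { bias_term: true }
}
```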
Dear Praveen,

OK. But shouldn't I at least see the video output (even if the algorithm may not be working)?

Thanks and best regards
He Wei
Hi He Wei, You may not see video output because the generated param.bin and net.bin files are not correct. Thanks, Praveen
Dear Praveen,

Thanks for the confirmation. I would like to check whether TI has an example trained model for face detection. We are currently working on a driver monitoring system based on TDA2x, and we would really appreciate any relevant information.

Thanks and best regards
He Wei
Hi He Wei,

No, we don't have a face detection example model.

Thanks,
Praveen
Dear Praveen,

I understand you have verified TensorFlow 1.0 with:

python "C:\Users\Kumar\AppData\Local\conda\conda\envs\tensorflow\Lib\site-packages\tensorflow\python\tools\freeze_graph.py" --input_meta_graph="keras_cifar10_model.ckpt.meta" --input_checkpoint="keras_cifar10_model.ckpt" --output_graph=keras_frozen.pb --output_node_names="conv2d_5/BiasAdd" --input_binary=true

But TensorFlow 1.0 is quite old now, and newer TensorFlow no longer produces a ".ckpt" output file; if we run the above script without --input_checkpoint="keras_cifar10_model.ckpt", we get an error. Any advice?

Thanks and best regards
He Wei
Hi He Wei,

We did test with the newer TensorFlow version 1.8; all the commands given in that thread should work as-is with the new TensorFlow version. The ".ckpt" file is generated as part of the training.

Thanks,
Praveen
Dear Praveen,

Thanks for the confirmation; we are verifying on TensorFlow 1.8 now and will confirm the result later.

Another question is on Caffe model training: we are trying to set up Caffe object detection training based on the "caffe-jacinto-models" example. We have our own data set but are not sure how to do the labeling for object detection. In the example:

train_data="../../caffe-jacinto/examples/VOC0712/VOC0712_trainval_lmdb"
test_data="../../caffe-jacinto/examples/VOC0712/VOC0712_test_lmdb"
name_size_file="../../caffe-jacinto/data/VOC0712/test_name_size.txt"
label_map_file="../../caffe-jacinto/data/VOC0712/labelmap_voc.prototxt"

Can you share the above data sets with us? Also, can you share one example of labeling for one JPEG picture? Is it only a class label (0 - object 1, 1 - object 2, ...), or is a location-specific label required?

Thanks and best regards
He Wei
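For reference, PASCAL VOC labeling is location-specific: each object in an image gets a class name plus a bounding box, stored in a per-image XML annotation that the SSD LMDB-creation scripts consume. A typical (hypothetical) annotation looks like:

```xml
<annotation>
  <folder>VOC2007</folder>
  <filename>000001.jpg</filename>
  <size>
    <width>353</width>
    <height>500</height>
    <depth>3</depth>
  </size>
  <object>
    <name>dog</name>           <!-- class label, mapped via labelmap_voc.prototxt -->
    <difficult>0</difficult>
    <bndbox>                   <!-- location-specific: pixel coordinates -->
      <xmin>48</xmin>
      <ymin>240</ymin>
      <xmax>195</xmax>
      <ymax>371</ymax>
    </bndbox>
  </object>
</annotation>
```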
You can find a link on the caffe-jacinto-models page under: "Script for Training for sparse Object Detect network on the PASCAL VOC0712 dataset is provided. Inference script is also provided to test out the final model."

https://github.com/tidsp/caffe-jacinto-models/blob/caffe-0.17/docs/VOC0712_ObjectDetect_README.md

This will direct you to the following page (or you can go there directly); it explains how to create the LMDB.

https://github.com/weiliu89/caffe/blob/ssd/README.md
Dear All,

1. Can I use caffe-ssd to train the model instead of caffe-jacinto, and then convert to the TI TDA2x format? (I can't build caffe-jacinto-models successfully with CPU only.)

2. For TensorFlow, we always get the below errors when using the tool to do the conversion:

Op Type Shape is Not suported will be By passed
Op Type StridedSlice is Not suported will be By passed
Op Type Pack is Not suported will be By passed

Can you advise?

Thanks and best regards
He Wei
Hi,

1. Yes, you can use caffe-ssd to train the model instead of caffe-jacinto, but make sure that the prototxt matches the caffe-jacinto prototxt.

2. This may be because we support only slim-based TensorFlow models. Please refer to section 3.6.5 (Importing Tensorflow Models) in the user guide for more details.

Thanks,
Praveen