arjun.a committed on
Commit
6c7b14a
1 Parent(s): 09192a6

rename files

This view is limited to 50 files because it contains too many changes; see the raw diff for the complete set.
Files changed (50)
  1. sample_embedding_folder2/0430110.txt +29 -0
  2. sample_embedding_folder2/0462114.txt +8 -0
  3. sample_embedding_folder2/0482769.txt +10 -0
  4. sample_embedding_folder2/0527984.txt +24 -0
  5. sample_embedding_folder2/0529898.txt +18 -0
  6. sample_embedding_folder2/0530602.txt +10 -0
  7. sample_embedding_folder2/0530614.txt +18 -0
  8. sample_embedding_folder2/0531906.txt +14 -0
  9. sample_embedding_folder2/0532505.txt +26 -0
  10. sample_embedding_folder2/0533643.txt +8 -0
  11. sample_embedding_folder2/0533689.txt +10 -0
  12. sample_embedding_folder2/0533854.txt +12 -0
  13. sample_embedding_folder2/0536419.txt +52 -0
  14. sample_embedding_folder2/0537108.txt +24 -0
  15. sample_embedding_folder2/0539137.txt +14 -0
  16. sample_embedding_folder2/0539230.txt +12 -0
  17. sample_embedding_folder2/0540822.txt +8 -0
  18. sample_embedding_folder2/0540982.txt +12 -0
  19. sample_embedding_folder2/0542041.txt +14 -0
  20. sample_embedding_folder2/0543489.txt +10 -0
  21. sample_embedding_folder2/0544217.txt +14 -0
  22. sample_embedding_folder2/0547591.txt +244 -0
  23. sample_embedding_folder2/0563747.txt +14 -0
  24. sample_embedding_folder2/0563807.txt +12 -0
  25. sample_embedding_folder2/0565132.txt +34 -0
  26. sample_embedding_folder2/0565961.txt +22 -0
  27. sample_embedding_folder2/0566481.txt +12 -0
  28. sample_embedding_folder2/0567692.txt +12 -0
  29. sample_embedding_folder2/0571580.txt +18 -0
  30. sample_embedding_folder2/0572635.txt +10 -0
  31. sample_embedding_folder2/0573203.txt +16 -0
  32. sample_embedding_folder2/0574868.txt +21 -0
  33. sample_embedding_folder2/0581287.txt +14 -0
  34. sample_embedding_folder2/0583648.txt +10 -0
  35. sample_embedding_folder2/0584994.txt +10 -0
  36. sample_embedding_folder2/0585063.txt +14 -0
  37. sample_embedding_folder2/0586455.txt +12 -0
  38. sample_embedding_folder2/0587995.txt +14 -0
  39. sample_embedding_folder2/0590092.txt +14 -0
  40. sample_embedding_folder2/0590369.txt +32 -0
  41. sample_embedding_folder2/0597423.txt +10 -0
  42. sample_embedding_folder2/0599158.txt +10 -0
  43. sample_embedding_folder2/0599794.txt +12 -0
  44. sample_embedding_folder2/0600601.txt +12 -0
  45. sample_embedding_folder2/0604515.txt +8 -0
  46. sample_embedding_folder2/0605559.txt +22 -0
  47. sample_embedding_folder2/0608451.txt +12 -0
  48. sample_embedding_folder2/0611062.txt +10 -0
  49. sample_embedding_folder2/0614700.txt +36 -0
  50. sample_embedding_folder2/0614776.txt +8 -0
sample_embedding_folder2/0430110.txt ADDED
@@ -0,0 +1,29 @@
1
+ Ticket Name: Issue of network on GLSDK
2
+
3
+ Query Text:
4
+ Hello. I'm working with GLSDK on Jacinto 6. I recently applied SPL boot as described below; to be precise, it is a QSPI boot that works without the u-boot stage (only SPL and the kernel run). http://processors.wiki.ti.com/index.php/DRA7xx_GLSDK_Software_Developers_Guide#Using_the_Late_attach_functionality After applying it, I could no longer access the SSH server on the host side (Jacinto 6). I suspect this is because the u-boot stage is missing; I saw the same situation when I removed all network features from the u-boot config file. Please help me: how can I bring the network back up? The log is below: INIT: version 2.88 booting Starting Bootlog daemon: bootlogd: cannot allocate pseudo tty: No such file or directory bootlogd. INIT: Entering runlevel: 5 Starting tiipclad daemon Opened log file: lad.txt Spawned daemon: /usr/bin/lad_dra7xx Sending discover... Sending discover... Sending discover... No lease, forking to background done. Starting Dropbear SSH server: dropbear. root@dra7xx-evm:~# ifconfig eth0 Link encap:Ethernet HWaddr C6:8C:87:D2:FD:9A UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7019 errors:0 dropped:0 overruns:0 frame:0 TX packets:59 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:543887 (531.1 KiB) TX bytes:20178 (19.7 KiB) Interrupt:78 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) root@dra7xx-evm:~#
5
+
6
+ Responses:
7
+ Hello Yangwoo, Let's start with the definition of Late attach functionality: To satisfy the startup time requirements of specific use cases, one would need a remote core booted up early at the boot loader before the Linux kernel is booted. The kernel then attaches with the already booted remote core for further communication. We refer to this feature as the "Early Boot - Late Attach" functionality. Could you check the functionality of late_attach: target # dmesg | grep late_attach
8
+ [ 13.387830] remoteproc1: late_attach is 1 My suggestion is to take a detail look on a similar thread - https://e2e.ti.com/support/omap/f/885/t/391870 You can try using new u-boot image - 3755.u-boot.zip Best regards, Yanko
9
+
10
+ Thank you for the answer, Yanko, but I am not sure whether your suggestion is the right approach, because my question is about the network not being available in QSPI SPL boot. Apart from the network problem, everything is working, including the remoteproc tests (IPU1/2, DSP1/2). Why do you suggest late attach? Anyway, I quickly tried late attach for you, but I saw another problem there: the cores do not show as attached. root@dra7xx-evm:~# dmesg | grep late_attach [ 6.139856] remoteproc3: late_attach is 0 [ 6.382567] remoteproc2: late_attach is 0 [ 6.733915] remoteproc0: late_attach is 0 [ 6.766229] remoteproc1: late_attach is 0 Below is the log of the error with late attach.
11
+
12
+ Sorry, I slightly misunderstood your suggestion. Anyway, below is my late-attach result for the QSPI SPL boot: root@dra7xx-evm:~# dmesg | grep late_attach [ 2.831971] remoteproc3: late_attach is 0 [ 2.962444] remoteproc2: late_attach is 0 [ 3.614248] remoteproc0: late_attach is 0 [ 3.636176] remoteproc1: late_attach is 1
13
+
14
+ Hello Yanko. This system really does have a dependency between u-boot and the kernel, and the network initialization shows the same relation. If I enable the network feature in u-boot as below, the network also works in the kernel. But when I remove the configuration below from u-boot (include/configs/dra7xx_evm.h), the network stops working in the kernel as well. So there is a dependency in this system: I suspect the kernel's network initialization is insufficient on its own, because without u-boot's network initialization the kernel network does not come up either. Please look into this issue and provide a kernel patch.
+ /* CPSW Ethernet */
+ #define CONFIG_CMD_NET /* 'bootp' and 'tftp' */
+ #define CONFIG_CMD_DHCP
+ #define CONFIG_BOOTP_DNS /* Configurable parts of CMD_DHCP */
+ #define CONFIG_BOOTP_DNS2
+ #define CONFIG_BOOTP_SEND_HOSTNAME
+ #define CONFIG_BOOTP_GATEWAY
+ #define CONFIG_BOOTP_SUBNETMASK
+ #define CONFIG_NET_RETRY_COUNT 10
+ #define CONFIG_CMD_PING
+ #define CONFIG_CMD_MII
+ #define CONFIG_DRIVER_TI_CPSW /* Driver for IP block */
+ #define CONFIG_MII /* Required in net/eth.c */
+ #define CONFIG_PHY_GIGE /* per-board part of CPSW */
+ #define CONFIG_PHYLIB
15
+
16
+ It was solved by the ethaddr boot environment variable. The problem was caused by ethaddr missing from the boot environment: if I boot with the u-boot network feature enabled, ethaddr is applied automatically, but without u-boot's network initialization the kernel cannot find an ethaddr. So I added a fixed "ethaddr=??:??:??:??:??:??\0" to ti_omap5_common.h, and now the network works without the u-boot network config.
17
+
18
+ Hello Yangwoo, I am glad to hear that. For your information, you can take a look on following threads: processors.wiki.ti.com/.../AM335x_U-Boot_User's_Guide#U-Boot_Network_configuration lists.denx.de/.../149291.html processors.wiki.ti.com/.../Linux_Core_U-Boot_User's_Guide https://e2e.ti.com/support/embedded/linux/f/354/t/141507 Best regards, Yanko
19
+
20
+ Sorry, it was not the solution. I had confused MMC partitions 1 and 2. It is still not working without the u-boot initialization.
21
+
22
+ Finally I added "#define CONFIG_DRIVER_TI_CPSW" in u-boot together with "ethaddr=??:?:??:??" in the boot environment, and it is working now. However, I still need a kernel patch to activate the kernel network without the u-boot dependency, because u-boot SPL will not execute any network feature.
23
+
24
+ Yangwoo, CONFIG_DRIVER_TI_CPSW is defined in u-boot/board/ti/dra7xx/evm.c: #ifdef CONFIG_DRIVER_TI_CPSW #include <cpsw.h> #endif #ifdef CONFIG_DRIVER_TI_CPSW /* Delay value to add to calibrated value */ #define RGMII0_TXCTL_DLY_VAL ((0x3 << 5) + 0x8) #define RGMII0_TXD0_DLY_VAL ((0x3 << 5) + 0x8) #define RGMII0_TXD1_DLY_VAL ((0x3 << 5) + 0x2) #define RGMII0_TXD2_DLY_VAL ((0x4 << 5) + 0x0) #define RGMII0_TXD3_DLY_VAL ((0x4 << 5) + 0x0) #define VIN2A_D13_DLY_VAL ((0x3 << 5) + 0x8) #define VIN2A_D17_DLY_VAL ((0x3 << 5) + 0x8) #define VIN2A_D16_DLY_VAL ((0x3 << 5) + 0x2) #define VIN2A_D15_DLY_VAL ((0x4 << 5) + 0x0) #define VIN2A_D14_DLY_VAL ((0x4 << 5) + 0x0) Take a look on this patch set: comments.gmane.org/.../165201 Best regards, Yanko
25
+
26
+ Below is my updated modification; please check the condition. It looks related to the DPLL clock setting. Can you check the kernel DPLL clock setting for GMAC? I suspect the kernel's GMAC DPLL setup is incomplete.
27
+
28
+ I removed "CONFIG_DRIVER_TI_CPSW" and kept only the code below in arch/arm/cpu/armv7/omap-common/clocks-common.c: params = get_gmac_dpll_params(*dplls_data); do_setup_dpll((*prcm)->cm_clkmode_dpll_gmac, params, DPLL_LOCK, "gmac");
29
+
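The thread above comes down to two u-boot configuration changes: keep the CPSW driver defines and provide a fixed ethaddr so the kernel still sees a MAC address when SPL boots the kernel directly. A minimal sketch, assuming a classic (pre-Kconfig) u-boot tree; the header split and the MAC value are illustrative placeholders, not the poster's actual values:

    /* include/configs/dra7xx_evm.h -- keep the CPSW Ethernet driver defines */
    #define CONFIG_DRIVER_TI_CPSW          /* driver for the CPSW IP block */
    #define CONFIG_MII                     /* required by net/eth.c        */
    #define CONFIG_PHYLIB
    #define CONFIG_PHY_GIGE

    /* include/configs/ti_omap5_common.h -- add a fixed MAC to the default
     * environment so the kernel finds ethaddr even when u-boot's network
     * init never runs (placeholder locally administered address) */
    #define CONFIG_EXTRA_ENV_SETTINGS \
        "ethaddr=02:00:00:00:00:01\0"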
sample_embedding_folder2/0462114.txt ADDED
@@ -0,0 +1,8 @@
1
+ Ticket Name: bare metal interrupt handler
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 is there info or samples for bare metal interrupt handling? I am trying to use interrupts on the C66x DSP1 core on the TDA2 (XC5777X CPU BOARD). I have tried the timer example from the starterware and it configures the timer and interrupt functions, but I get exactly one interrupt. The timer is in auto reload mode and seems to be counting and reloading properly. I can’t make sense of the interrupt though. In previous C64xx projects I have had a vector.asm file that contained the reset, nmi and 12 vectors for the chip interrupts. On the C66x with the starterware code there is a lot added to configure the interrupt crosspoint and handlers. I’m not sure if it needs the vector.asm or not. When I add the vector.asm it is getting located at 0x00800000 and the ISTP is set to the same. When the first interrupt occurs, it branches to a vector at 0x00800220. The DSP ISR is showing a value of 0x0040 indicating vector 6. With a vector for ISR6, I would expect it to be at 0x00800060. What I can’t find documented is whether this should dispatch through some predefined function that uses the inth_handler table or whether I should just branch to my handler directly. I "think" the starterware is assuming dsp-bios is running but it is not in this case. Scott
5
+
6
+ Responses:
7
+ Hi Scott, Welcome to the TI Keystone E2E forum. I hope you will find many good answers here, in the TI.com documents and in the TI Wiki Pages (for processor issues). Be sure to search those for helpful information and to browse the questions others may have asked on similar topics (e2e.ti.com). Please read all the links below my signature. We do not have a Starterware package/support for C66xx (Keystone) DSPs. The only support available for C66xx DSPs (Keystone I) is MCSDK 2.x. Please find the latest MCSDK and user guide link below my signature. "is there info or samples for bare metal interrupt handling? I am trying to use interrupts on the C66x DSP1 core on the TDA2 (XC5777X CPU BOARD)." We are not familiar with this platform; however, I can point you to some CSL-based examples for interrupts on C66xx Keystone devices. By studying those, you should be able to develop your own example. Thank you.
8
+
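Since the question above is about bare-metal (no DSP/BIOS) interrupt dispatch on the C66x, here is a hedged sketch of the usual flow with the TI C6000 compiler: point ISTP at a 1 KB-aligned vector table whose INT6 fetch packet branches to the handler, clear and unmask the event, then set GIE. The names vectors and timerIsr are illustrative, and routing the timer event to CPU INT6 through the interrupt controller/crossbar still has to be done as in the StarterWare example.

    /* Assumes the TI C6000 compiler: c6x.h exposes CSR/IER/ICR/ISTP as control
       registers, and the "interrupt" keyword generates the ISR save/restore. */
    #include <c6x.h>

    extern void vectors(void);          /* vector table from vector.asm, 1 KB aligned */

    interrupt void timerIsr(void)       /* target of the branch in the INT6 vector slot */
    {
        /* acknowledge/clear the timer interrupt status here */
    }

    void enableTimerInterrupt(void)
    {
        ISTP = (unsigned int)vectors;   /* interrupt service table pointer        */
        ICR  = (1u << 6);               /* clear any pending INT6                 */
        IER |= (1u << 6) | (1u << 1);   /* unmask INT6; NMIE (bit 1) must be set  */
        CSR |= 1u;                      /* GIE = 1: global interrupt enable       */
    }

With this setup the CPU fetches the vector slot at ISTP + 6*0x20 and branches straight to timerIsr; no BIOS dispatcher or inth_handler table is involved.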
sample_embedding_folder2/0482769.txt ADDED
@@ -0,0 +1,10 @@
1
+ Ticket Name: Toolchain for compiling Qt application for TDA2 board?
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hello All, I am looking for information on how to compile a Qt application for deployment on TI's TDA2 (ARM A15 based) board. In particular which toolchain should be used? I've found Qt documentation for other boards, but nothing specific to the TDA2. Any pointers on how to get started with this board would be greatly appreciated. Regards, Eric Gilbertson [email protected]
5
+
6
+ Responses:
7
+ Hello, I am not aware with TDA2 but have you tried with CodeSourcery? processors.wiki.ti.com/.../Building_Qt BR Margarita
8
+
9
+ Here is the qmake.conf I used for the TDA2. Note, I'm still having a problem with GLES2 but I believe that is unrelated to the conf settings. HTH, Eric G. qmake.conf for TDA2:
+ include(../common/linux_device_pre.conf)
+ DISTRO_OPTS += hard-float
+ QMAKE_INCDIR += $$[QT_SYSROOT]/usr/include
+ QMAKE_LIBDIR += $$[QT_SYSROOT]/usr/lib \
+                 $$[QT_SYSROOT]/lib/arm-linux-gnueabihf \
+                 $$[QT_SYSROOT]/usr/lib/arm-linux-gnueabihf
+ QMAKE_LFLAGS += -Wl,-rpath-link,$$[QT_SYSROOT]/usr/lib \
+                 -Wl,-rpath-link,$$[QT_SYSROOT]/usr/lib/arm-linux-gnueabihf \
+                 -Wl,-rpath-link,$$[QT_SYSROOT]/lib/arm-linux-gnueabihf
+ TDA2_CFLAGS = -mtune=cortex-a15 -mfloat-abi=hard -mfpu=vfpv3-d16
+ QMAKE_CFLAGS += $$TDA2_CFLAGS
+ QMAKE_CXXFLAGS += $$TDA2_CFLAGS
+ include(../common/linux_arm_device_post.conf)
+ load(qt_config)
10
+
sample_embedding_folder2/0527984.txt ADDED
@@ -0,0 +1,24 @@
1
+ Ticket Name: TDA2 Initialisation via JTAG
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hello, We are having issues finding any information on how to access a TDA2 device (specifically TDA2HGBRAQ) via JTAG. We don't use a 3rd party tool, but are building the JTAG communications ourselves. The steps we would follow are: 1. Physically connect to JTAG pins on TDA2 device. 2. Trigger TDA2 reset (or JTAG reset) and initialise the device via JTAG (halt core(s), setup memory, etc.). 3. Upload a boot loader to the RAM of the device and execute it. Steps 1 and 3 aren't really the issue, but we need help with step 2. We can access this same target device with a Lauterbach, so the information for how to do this initialisation via JTAG must be available. As of yet, we haven't been able to find the right contact and/or documentation for this. If someone can help us find the correct contact and/or documentation, that would be very helpful.
5
+
6
+ Responses:
7
+ To use the correct terminology, I am looking for the JTAG scan path details for third party programming support.
8
+
9
+ Hello Daniel, How about referring to the CCS device scan chain tree (attached), for example? The XML clearly shows the processor sub-paths. TDA2x.xml You may also refer to "30.6 Power, Reset, and Clock Management Debug Support" in the device TRM and, in general, to "Chapter 30 On-Chip Debug Support" for additional JTAG debug support details. Hope it helps, thanks, Alex
10
+
11
+ Hello Alex, Thank you for this information. We've tried to use it, but still encounter some issues. Primarily, we need to know how to navigate to IcePick-D so that we can access the Cortex core. Currently we can't find any information on the correct TAP states or flow we need to reach this goal. We've "guessed" some values to shift and seem to get some limited data when we clock out in the shift state, but this is just trial and error with no real understanding of what the commands we send and receive really are. Do you know where we can find the information to access the Cortex via IcePick-D? TAP states, Shift-IR commands, Shift-DR data, etc.? Once we have access to the Cortex, the reading/writing of RAM (and registers) becomes trivial. Thanks in advance, Daniel.
12
+
13
+ Hello Daniel, Could you please also refer to the following doc then. Looks like it has enough information to enable a third party debugger on the processor JTAG test access ports (TAPs) connected to the ICEPick-D. Let me know thanks, Alex
14
+
15
+ Hello Alex, Thanks for this. We use this sequence with a P value of 15 and Q value of 0. The P value we estimated based on ADAS reference manuals, the Q is just taken from practice. Are these correct? After we do these initialisation steps, we try to setup the Cortex-A15. However, at this point, we have really no information. Currently we are trying to perform similar commands as we did for Cortex-A9, but even the A9 commands were a kind of guess work. Are you able to provide any documentation at this point, so that we can ensure proper access to the Cortex and read/write the RAM. Thanks, Daniel.
16
+
17
+ Hi Daniel, I think you are correct: P = 15 (dec); Q = 0 (dec); P is for tap number, Q for Core number, and referring to TRM debug chapter, your values look correct (see attached screenshot) Thanks, Alex
18
+
19
+ Hi Alex, The attached screenshot didn't work, but thanks for clarifying. Regarding the point about setting up the Cortex-A15 after the ICEPick-D initialisation; do you have any documents or contacts who could support us?
20
+
21
+ Hi Daniel, Reattached screenshot. Moreover, I received feedback that the Emulation developer community has this information and you should have access to the below url. Could you please check there if more docs are available for you to clarify your questions? https://www.ti.com/securesoftware/docs/securesoftwarehome.tsp Thanks, Alex
22
+
23
+ Hello Alex, Thanks for the help. We'll look through the documentation in the Emulation Developer Community and see what we can find. Regards, Daniel.
24
+
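For the TAP-state part of the discussion above, the IEEE 1149.1 state machine itself is generic, even though the ICEPick-D router instruction values are TI-specific and must come from the documents Alex references. A hedged bit-bang sketch; set_tms, set_tdi, get_tdo and pulse_tck are hypothetical adapter-specific helpers:

    #include <stdint.h>

    extern void set_tms(int level);
    extern void set_tdi(int level);
    extern int  get_tdo(void);
    extern void pulse_tck(void);            /* one TCK rising+falling edge */

    static void tck_with_tms(int tms) { set_tms(tms); pulse_tck(); }

    /* Shift nbits into the IR (to_ir != 0) or DR chain and return the bits
       clocked out on TDO. Starts and ends in Run-Test/Idle. */
    static uint64_t shift_bits(uint64_t out, int nbits, int to_ir)
    {
        uint64_t in = 0;
        int i;

        tck_with_tms(1);                    /* Select-DR-Scan          */
        if (to_ir)
            tck_with_tms(1);                /* Select-IR-Scan          */
        tck_with_tms(0);                    /* Capture                 */
        tck_with_tms(0);                    /* Shift                   */

        for (i = 0; i < nbits; i++) {
            set_tdi((int)((out >> i) & 1u));
            in |= (uint64_t)get_tdo() << i;
            tck_with_tms(i == nbits - 1);   /* last bit exits to Exit1 */
        }

        tck_with_tms(1);                    /* Update                  */
        tck_with_tms(0);                    /* back to Run-Test/Idle   */
        return in;
    }

From Run-Test/Idle the TMS sequence 1,1,0,0 reaches Shift-IR and 1,0,0 reaches Shift-DR; the ICEPick-D instructions and the sub-path enabling sequence documented by TI then go through this same mechanism.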
sample_embedding_folder2/0529898.txt ADDED
@@ -0,0 +1,18 @@
1
+ Ticket Name: 8CH 720P capture on TDA2
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi all, I want to capture 8CH 720P with YUV422 8-bit discrete sync mode on TDA2. My scenario is 4CH for 3D AVM, 2CH for side view, 1CH for the front camera and 1CH for the rear camera. There should be no problem when these run separately, but in this scenario they all happen simultaneously. I think DDR bandwidth will be the limitation. Could you please comment on this? Thanks in advance. B.R. OC
5
+
6
+ Responses:
7
+ Hi, Your question has been forwarded to an expert. Best regards Lucy
8
+
9
+ Hi Lucy, Any update? B.R. OC
10
+
11
+ Hi, Just sent a reminder. Best regards Lucy
12
+
13
+ OC, Sorry for the late reply. I think this is doable with VIP1 and VIP2 as far as it is 8-bit YUV (Port B on each slice supports only 8 bit). DDR bandwidth would also depend upon how your algorithms are operating; you can configure the bootloader in dual-EMIF interleaved mode to get maximum DDR bandwidth if you want to run all of this simultaneously. But when it is not all simultaneous, I don't see a problem.
14
+
15
+ Please note that if you are using discrete syncs, two sync pins are available for each port, and if you are using embedded sync, the sync codes must be in BT.656 format. BT.1120 codes over an 8-bit interface are not supported. Rgds, brijesh
16
+
17
+ Hi Yogesh, Thanks so much for your input. It's clear to me. B.R. OC
18
+
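To make the DDR-bandwidth concern above concrete, here is a rough estimate of the capture-side write traffic alone, assuming 720p means 1280x720 and YUV422 costs 2 bytes per pixel; display reads and algorithm traffic come on top of this, which is why dual-EMIF interleaved mode is suggested:

    #include <stdio.h>

    int main(void)
    {
        const double bytes_per_frame = 1280.0 * 720.0 * 2.0;    /* ~1.84 MB          */
        const double per_channel     = bytes_per_frame * 30.0;  /* ~55.3 MB/s        */
        const double total_capture   = per_channel * 8.0;       /* ~442 MB/s writes  */

        printf("capture writes: %.1f MB/s per channel, %.1f MB/s for 8 channels\n",
               per_channel / 1e6, total_capture / 1e6);
        return 0;
    }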
sample_embedding_folder2/0530602.txt ADDED
@@ -0,0 +1,10 @@
1
+ Ticket Name: TDA2 ARM Cortex M4 Floating point support FPv4SPD16
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi, I am using Code Composer Studio 5.4.0.00091. In my project, I have enabled floating point support FPv4SPD16 (ARM compiler processor option) for the Cortex-M4 in TDA2. After I build my project, I checked the generated .asm file; it shows --float_support=vfplib. Why has the compiler not generated the asm file for FPv4SPD16? Jagan
5
+
6
+ Responses:
7
+ Jagan, I will FW this here to CCS experts . thanks, Alex
8
+
9
+ Jagan, The TDA2x Cortex-M4 does not have floating-point hardware. Only vfplib is supported. Regards, Chaitanya
10
+
sample_embedding_folder2/0530614.txt ADDED
@@ -0,0 +1,18 @@
1
+ Ticket Name: Processor Capable to decode MJPEG/H.264 for 4 Ethernet HD Camera Input
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2, TDA2E Hi, We have a project with 3 or 4 HD Ethernet-connected cameras for automotive that need to be displayed on an LCD screen. The output format from the HD cameras will be MJPEG or H.264 in AVB or RTP packets. The frame size is 1080 x 720 at 30 fps (adjustable based on requirements). All the cameras feed into a 4-port switch with one RGMII output to the CPU, which displays on the screen. The project is similar to an Around View Monitor, except the LCD has 3 sections to display the individual camera pictures. The main task of the processor will be to receive the packets -> reconstruct the packets into a valid frame -> decode the frame -> rescale to the LCD section size -> convert to RGB and display on the LCD. 1) Is TDA2 or TDA2E capable of handling the MJPEG and H.264 processing for 4 cameras with no frame drops? 2) Are there any parts you can recommend to me? 3) For TDA2 or TDA2E, what OS would be used? 4) What is the bundle of the whole evaluation kit + software, with part numbers, so that I can get a quote from our Avnet supplier? 5) Is there any training material I can refer to for TDA2 or TDA2E, so that I can better understand the product? Thanks KJ Lee
5
+
6
+ Responses:
7
+ Hello KJ Lee, TDA2 is capable of handling the 4-MJPEG streams through Ethernet. We do have lightweight AVB stack which handles network receive for MJPEG frames. For TDA2 devices we have Vision SDK software stack which has demos for network application similar one you are trying to do. The Vision SDK is based on TI RTOS. You can go through below for understanding more about TDA2 & TI Vision SDK. http://www.ti.com/lit/wp/spry260/spry260.pdf http://www.ti.com/lsds/ti/processors/dsp/automotive_processors/tdax_adas_socs/overview.page For part no & training videos I will get back to you once consult with my team members. If you have queries please let me know. Thanks. Regards, Prasad
8
+
9
+ Hi Here is my comments, 1) Did TDA2 or TDA2e are capable on handling the processing of the MJPEG and H.264 for 4 camera with no frame drop problem? [Shiju] yes, you can use either TDA2x or TDA2ex for 4ch (1280x720 30fps) AVB capture + MJPEG/H264 decode + processing + display. 2) Any parts that can be recommend to me? [Shiju] Try with TDA2xx EVM, you can order the same from spectrum digital, or contact TI fields/sales 3) For TDA2 or TDA2e, what will be the OS using? [Shiju] on A15 you can use either Bios (TI RTOS) or any HLOS like Linux. Bios can be run on other cores (M4, DSP). 4) What is the bundle of the whole Evaluation Kit + Software part number sot that I can get quote from Avnet Supplier? [Shiju] You might Get the TDA2xx EVM + LCD display & Code Compose Studio as development/debugging platform. For debugging/connecting-to-target you need to use XDS560, same can be ordered from spectrum digital 5) Any tranining material that I can refer to for the TDA2 or TDA2e, so that I can get more understand of the product. [Shiju] search for TI automotive offerings, for details on TDA2xx devices. You can also download the TI automotive SW package “vision SDK” from CDDS, where we do have an example use case for 4ch (1280x720 30fps) AVB capture + MJPEG decode +.mosaic display. Regards, Shiju
10
+
11
+ Hi Lee Added "vision SDK" SW download links - Vision SDK v02.10.00.00 Release Notes : cdds.ext.ti.com/.../emxNavigator.jsp Vision SDK v02.10.00.00 Release package - Linux Installer : cdds.ext.ti.com/.../emxNavigator.jsp Vision SDK v02.10.00.00 Release package - Windows Installer : cdds.ext.ti.com/.../emxNavigator.jsp regards, Shiju
12
+
13
+ Hi Prasad, From walking through the website, it seems TDA2x software development only needs CCS (TI-RTOS) and the SDK; correct me if I'm wrong. 1) I'm waiting for the TDA2x processor part number and the evaluation kit part number so that I can request pricing from Avnet and plan the project budget. 2) Which TDA2x variant should I choose, as I can't find the information on the website? 3) Does TI-RTOS need to be purchased, or does it come with the Code Composer Studio (CCS) IDE? 4) Can I say that CCS is mainly used with TI-RTOS for controlling the processor and communicating with the outside world, like CAN, Ethernet, etc.? 5) Can I say that the Vision SDK is used together with the CCS IDE? 6) Do I need to buy the Vision SDK? 7) Is there any advantage of using TI-RTOS compared to an HLOS like Linux? 8) Is the CCS IDE a one-time purchase? 9) Do I need to purchase library stacks for MJPEG, Ethernet (AVB), H.264, etc., or are they included in the Vision SDK? Thanks KJ Lee
14
+
15
+ Hi KJ Lee, Vision SDK is comprehensive SW package which can help you get started with what you need. There is no separate cost for it but you need to get NDA signed for it, otherwise its not available. All the SW stacks like mjpeg, AVB, etc are included in the SDK. There is also example demo showing AVB capture - display. CCS IDE provides debugging and development environment for Vision SDK. I will try to get a local sales representative contact you. regards Yashwant
16
+
17
+ Hi Yashwant, We have an NDA signed between us, Avnet and TI for the ADAS application. Currently we are looking for a suitable processor and estimating the project cost as a start. Is there a license charge when using the Vision SDK and commercializing the product? Other than the evaluation board, the emulator (purchased from a 3rd party) and the CCS IDE purchased from TI, are there any other costs, such as licenses, that we need to pay as well? Thanks. KJ Lee
18
+
sample_embedding_folder2/0531906.txt ADDED
@@ -0,0 +1,14 @@
1
+ Ticket Name: Differences between TDA2XX and TDA3XX
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi, I am trying to port source code that works on TDA2xx to TDA3xx. I have the following doubts: 1. The master core in TDA2xx is the A15 and in TDA3xx it is the M4, so we have to shift all modules that are used by the A15 in TDA2xx to the M4 in TDA3xx. 2. Which modules do features like Ethernet and camera capture depend on in TDA2xx? At the bare minimum I want Ethernet and camera capture to work on TDA3xx. 3. The VIP module uses the TDA2xx video driver which has 12 capture channels, but TDA3xx supports 4 cameras; I need to identify where changes are required. Regards Mayank
5
+
6
+ Responses:
7
+ Hi, Moving your post to right forum to be better answered. Thanks & regards, Sivaraj K
8
+
9
+ Mayank, You could start with Vision SDK. It already supports Ethernet, camera capture and other drivers on TDA2 and TDA3. In both cases the SW runs on the M4. On TDA2 the physical master is the A15, which will launch the secondary boot loader (SBL). The SBL will load and start the code from the M4.
10
+
11
+ Hi Ejs, Thanks for your mail. I am proceeding with the following steps: 1. Take all the TI packages, namely bios, starterware, edma, ipc, xdc and bsp, from Vision SDK and compile them for TDA3xx. (I could do this for starterware and bsp only. Are the other packages compatible with TDA3xx as they are for TDA2xx, or do they also need separate compilation?) 2. Compile my application code with the TDA3xx toolchain. I still have the following doubts: 1. Will I have to shift the modules running on the A15 core to the DSP/M4 core, or will simply disabling A15 compilation while compiling the application for TDA3xx work? 2. Does TDA3xx compilation require an altogether different toolchain from what I was using when compiling for TDA2xx? 3. The bootloader packages will be generated from the starterware module using the SBL user guide, and the board will be booted with that. 4. If I compile the code for the DSP and M4 cores and then generate the combined disk image and flash it on TDA3xx, will it work, or do I have to make other changes? Regards Mayank
12
+
13
+ Hi Mayank, All TI packages are compatible with TDA3xx. In case you have designed your application to run some particular operations on A15, you will need to shift it to dsp/m4. You can refer to Vision SDK as an example. TDA3xx compilation uses same tool chains as TDA2xx. M4 uses TMS470, DSP uses C6000 and EVE uses TI ARP32 compiler. You can create multi-core application image including both dsp and m4 cores (you can follow steps given in SBL user guide) and it will work. Regards, Rishabh
14
+
sample_embedding_folder2/0532505.txt ADDED
@@ -0,0 +1,26 @@
1
+ Ticket Name: Deep Learning and Inference accelerated on DSPs
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi, With the multitude of DSPs built into the processors from J6 -> Sitara -> Multicore ARM+DSP, is TI working on or already supporting Deep Learning inference acceleration support? Specially as it applies to applications very similar to the ADAS and autonomous navigation applications targeted by the Jacinto series?
5
+
6
+ Responses:
7
+ Hi Anup, I will ask our experts to comment on the keystone device support on "Deep Learning inference acceleration" and road map if any. For Jacinto, I knew it is for automotive devices. Can you please post it on Automotive forum for appropriate and faster response. Thank you.
8
+
9
+ Hi Anoop, We have the vision library - VLIB 3.3 ( software-dl.ti.com/.../index_FDS.html) supporting some of the important kernels of deep learning (convolution, maxpooling, Fully connected) - please refer the library and documentation. This library is for C6x DSP and can be used on any TI device with C6x DSP. We have on our roadmap to add more functions for deep learning and appropriate framework support. On Jacinto family (specifically ADAS parts like TDA2/3x) we offer this as part of vision SDK. Please get in touch with TI field representative to get access to vision SDK. Thanks, With Regards, Pramod
10
+
11
+ Hi Pramod, Thanks a lot for the pointers. We will check out the VLIB, and get in touch with the local FAE at an appropriate juncture. Wish to see a lot more proofs of concepts to add confidence on TI DSPs though! :)
12
+
13
+ I wouldn't hold my breath ... the C66x architecture is quite old by now. It does not have enough processing power for most deep learning algorithms. -Robby
14
+
15
+ Hi Robby, TI SoCs have heterogeneous processors (EVE and DSP), and multiple instances of those, which gives good horsepower for many applications using deep learning technologies. Stay tuned to see some cool demos of this technology at CES - Jan 2017 @ the TI booth. Thanks, With Regards, Pramod
16
+
17
+ That's good to know. I'd be interested to seeing those demos. -Robby
18
+
19
+ Dear Champs, TIDL (TI Deep Learning library) was introduced at CES 2017 (YouTube link below). Could you share more info about TIDL? Is there a landing web page for TIDL?
20
+
21
+ Hi Luke, We are planning to release it and you should see more documentation and information with that. The releases will be during 1Q and 2Q this year. Thanks, with Regards, Pramod
22
+
23
+ Hello Pramod, Is the TI Deep Learning library available for usage? I am planning to use this on J6 (DRA75x) EVM running Android. Can you please let me know if this is possible? From the video of TI demo at CES 2017, it seems that the demo was on TDA2 and not on J6 but from the spec of TDA2 it seems similar to J6. Regards, Pavan D
24
+
25
+ HI Pavan, On J6 we have 2xEVE and this demo was done using 4xEVE on TDA2x. You can run the demo on J6 with half the speed. Thanks, with Regards, Pramod
26
+
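As a reference for what the VLIB deep-learning kernels mentioned above (convolution, max-pooling, fully connected) compute, here is a plain C sketch of 2x2 max-pooling with stride 2. It is illustrative only and is not the VLIB API; the optimized C66x implementations in VLIB are what you would actually call.

    #include <stdint.h>

    /* out must hold (width/2) x (height/2) samples */
    static void maxpool2x2_u8(const uint8_t *in, int width, int height, uint8_t *out)
    {
        for (int y = 0; y + 1 < height; y += 2) {
            for (int x = 0; x + 1 < width; x += 2) {
                uint8_t m = in[y * width + x];
                if (in[y * width + x + 1] > m)       m = in[y * width + x + 1];
                if (in[(y + 1) * width + x] > m)     m = in[(y + 1) * width + x];
                if (in[(y + 1) * width + x + 1] > m) m = in[(y + 1) * width + x + 1];
                out[(y / 2) * (width / 2) + (x / 2)] = m;
            }
        }
    }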
sample_embedding_folder2/0533643.txt ADDED
@@ -0,0 +1,8 @@
1
+ Ticket Name: TDA2 and TDA3
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Can you please advise regarding the customer questions below? Please see the latest update from the customer: What can you tell me about the TDA2 and TDA3x processors? Can they / do they run Linux? (I’m guessing yes) www.ti.com/.../technicaldocuments The information on the web seems a bit sketchy. Are there available evaluation boards? What cost? What are the capabilities of the evaluation board? This link has a picture of a TDA2x “evaluation board” but the ability to purchase it seems somewhat lacking. www.ti.com/.../sprt681.pdf Same document for TDA3x www.ti.com/.../sprt708a.pdf Here is a datasheet for the TDA3 but again not very detailed; www.ti.com/.../tda3.pdf We are very interested in multiple FPD (LVDS) links on a board. There is also a TI automotive camera evaluation displayed but again very little information regarding it: www.ti.com/.../PMP9351 Can I purchase this as an “evaluation module”?
5
+
6
+ Responses:
7
+ We have moved your post to the appropriate forum.
8
+
sample_embedding_folder2/0533689.txt ADDED
@@ -0,0 +1,10 @@
1
+ Ticket Name: Porting application code from TDA2x to TDA3x
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: MATHLIB, TDA2 Hi, I am working on porting the application from TDA2x to TDA3x. Currently the application runs fine on TDA2x. The application uses the bios, xdc, ipc, bsp, edma, ivahd, eve, vlib and mathlib packages from Vision SDK. On TDA2x the physical master is the A15, which launches the secondary bootloader (SBL); the SBL then loads and starts the code on the M4. What is the scenario on TDA3x, since TDA3x does not have an A15? a) How is the secondary bootloader launched on TDA3x? b) Is it fine to port the software modules running on the A15 in TDA2x to the M4 in TDA3x? c) Is disabling A15 support in the whole application, compiling the TI packages for TDA3x and then generating the application image the right procedure? d) How should I proceed with porting the application from TDA2x to TDA3x? Regards Mayank
5
+
6
+ Responses:
7
+ a - The secondary bootloader is launched from the M4 on TDA3x. b - Yes, it is fine; which peripherals / subsystems are these A15 modules working with? It also depends on that: you need to check whether a driver exists for them on M4 + BIOS. c - Yes; in vision_sdk, when you select the platform as TDA3XX_EVM in Rules.make, this happens automatically. d - Please give details of your use case and the peripherals you are using on TDA2x so we can discuss further.
8
+
9
+ Hi Yogesh, Kindly can you guide me, where can I find source code for vision sdk for TDA3x ? What I got is http://software-dl.ti.com/processor-sdk-vision/esd/TDAx/latest/index_FDS.html, but it only contains the prebuilt binaries. I do have NDA and license for CCS. Regards, Waleed
10
+
sample_embedding_folder2/0533854.txt ADDED
@@ -0,0 +1,12 @@
1
+ Ticket Name: Regarding USB KEYBOARD/MOUSE/WEBCAM NOT DETECTING on TI J6 Dra7xx board
2
+
3
+ Query Text:
4
+ Hi, Keyboard/Mouse/Webcam are not getting detected on the DRA7xx board. I have enabled all the required drivers, like the USB keyboard driver and USB mouse driver, in both the kernel and the root file system. I have also enabled the USB host-side drivers and USB OTG in the kernel. When I type lsusb, the command returns nothing. When I run lspci, it does not show any USB controller. I request help on this, as I'm stuck. I also request you to share a kernel .config file in which the USB drivers are working. Thanks n Regards Shalini KP
5
+
6
+ Responses:
7
+ Hi, Your question has been forwarded to an SW expert. Additionally provide info if you are using custom board or J6 EVM and what is the kernel version. Meanwhile, for more info you can check also below link: processors.wiki.ti.com/.../USB_General_Guide_Linux_v3.8 Best regards Lucy
8
+
9
+ Thank you for the reply. The issue is sorted after enabling all the USB host side drivers.
10
+
11
+ Hi, What is the release version you are using? In general each USB port (USB1, USB2) is configured to a particular mode (host, peripheral, drd/otg); the dr_mode field in the USB device tree node needs to be set appropriately. Please check arch/arm/boot/dts/dra7-evm.dts: if the dr_mode for the USB1 port is set as "otg", then you need to insert a gadget module. You can insert any gadget, like g_zero (modprobe g_zero). For a host-only configuration you can force the dr_mode property of the USBx node to "host" in the DT (device tree). Regards Ravi
12
+
sample_embedding_folder2/0536419.txt ADDED
@@ -0,0 +1,52 @@
1
+ Ticket Name: Enable channel 5 on TDA2xx
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi, I want to enable 6 channels on TDA2xx. My Vision SDK version is v2.08 and running Linux on A15. Now, channel 1, 2, 3, 4, 6 are enabled. But channel 5 still can not work. I did following steps to enable channel 5: 1. Disable NDK in Rules.make 2. Set VIDEO_SENSOR_NUM_LVDS_CAMERAS as 6 in "vision_sdk\examples\tda2xx\include\video_sensor.h" 3. Set pinmux as following: Signal Name from CAMERA Signal used on TDA2 CAM5_D[2] VIN4B_D0 (U4) CAM5_D[3] VIN4B_D1 (V2) CAM5_D[4] VIN4B_D2 (Y1) CAM5_D[5] VIN4B_D3 (W9) CAM5_D[6] VIN4B_D4 (V9) CAM5_D[7] VIN4B_D5 (U5) CAM5_D[8] VIN4B_D6 (V5) CAM5_D[9] VIN4B_D7 (V4) CAM5_HSYNC VIN4B_HSYNC1 (U7) CAM5_VSYNC VIN4B_VSYNC1 (V6) CAM5_PCLK VIN4B_CLK1(V1) Is there anything I missed to enable the 5th VIP port? Thanks, Kevin
5
+
6
+ Responses:
7
+ Hi Kevin, Your question has been forwarded to Vision SDK experts. They will comment here directly. thanks Alex
8
+
9
+ Hi, Any update? Kevin
10
+
11
+ Hi Kevin, I have sent a reminder, team will comment here directly. thanks, Alex
12
+
13
+ Hi Kevin, you need some kernel patches as well. FYI, I have attached both the kernel-side and SDK-side patches to enable 6-channel capture. Please refer to the patches; sometimes they may not apply automatically on the v2.8 version. If that fails, try applying them manually. regards, Shiju 6ch builld.zip
14
+
15
+ Hi Shiju, Thanks for your patches. I still have two questions: 1. Should I exclude all EVE cores to enable 6 channels? May I include EVE1 & EVE2 for algorithm links? 2. It seems that my dts files are different from yours. I have no idea how to modify them based on your patch. Would you please check the files for me? ti_components/os_tools/linux/kernel/omap/arch/arm/boot/dts/dra7-evm-infoadas.dts ti_components/os_tools/linux/kernel/omap/arch/arm/boot/dts/dra7-evm-vision.dts 1108.dts.tar.gz Thanks, Kevin
16
+
17
+ Hi Shiju, Based on your patch, I have to modify line 444 in "dra7-evm-vision.dts". But there are only 128 lines in my "dra7-evm-vision.dts". I never changed dts files after I installed SDK. So that I am getting confused. I also got the same problem when I modify "/dra7-evm-infoadas.dts". If you have any suggestion about it, please tell me. Thanks, Kevin
18
+
19
+ Kevin seems like some versioning compatibility issue. i will check your file & get back to you regards, Shiju
20
+
21
+ rel2.10-uboot_kenel-patch-for6ch.zip Kevin, if possible can you migrate to the 2.10 release? I have a patch (for both u-boot and the kernel) to support 6-channel capture; please find it attached. BTW, I haven't tested this, so please let me know if you face any issues. regards, Shiju
22
+
23
+ Hi Shiju, Thanks for your new patch. Building v2.10 environment and merging code may take several days for us. I will reach you when I am ready. Thanks, Kevin
24
+
25
+ Kevin, just check whether these patches can be applied manually on the 2.8 version. regards, Shiju
26
+
27
+ Hi Shiju, There are only 583 lines in my "mux_data.h", the same issue. mux_data.tar.gz Kevin
28
+
29
+ mux_data-with-vin4b.h Kevin, try using the attached file; I have applied the u-boot patch on the 2.8 version. regards, Shiju
30
+
31
+ From SDK side, modify /linux/examples/tda2xx/src/common/chains_main.c + gChains_usecaseCfg.numLvdsCh = 6; regards, Shiju
32
+
33
+ As my first post, I have set pinmux by Starterware SDK. This time, I replace mux_data.h with your attached file. But whether I enable/disable my pinmux setting, channel 5 still does not output any frame. Maybe pinmux is not the main problem. I will try to migrate to v2.10. But if you have any other suggestion about v2.08, please let me know. Thanks again for your help. Kevin
34
+
35
+ Kevin, on the TDA2x EVM the 5th camera is muxed with Ethernet; it's a board-level mux issue, so if you enable Ethernet then the 5th camera will not work. I guess in one of the kernel patches I have disabled the Ethernet. BTW, I haven't tried it at my end. I will check and let you know. Regards, Shiju
36
+
37
+ Hi Shiju, Because of versioning compatibility issue, I cannot use some of your patches. Our commercial agent suggested me setting NDK_PROC_TO_USE as none in vision_sdk/Rules.make to disable Ethernet. So I did. But now I see following messages when booting EVM: [ 7.154053] cpsw 48484000.ethernet: Detected MACID = a0:f6:fd:b3:4a:2e [ 7.161616] cpsw 48484000.ethernet: cpsw: Detected MACID = a0:f6:fd:b3:4a:2f ...... [ 10.200474] using random self ethernet address [ 10.204939] using random host ethernet address I think I have to disable Ethernet in another way. Do you have any suggestion about this? Kevin
38
+
39
+ Kevin, I just did a quick try and I too am seeing the 5th-channel issue; debugging. BTW, if you are not very particular about Linux, then try a BIOS-only build (disable the A15 or run BIOS on the A15), where you can get 6-channel LVDS capture working by just setting NDK_PROC_TO_USE=none and, in chains_main_bios.c, setting gChains_usecaseCfg.numLvdsCh = 6; //VIDEO_SENSOR_NUM_LVDS_CAMERAS; for the "4CH VIP Capture + Mosaic Display" usecase. regards, Shiju
40
+
41
+ Hi Shiju, We need to run OpenGL on Linux. Wait for your good news. Thanks, Kevin
42
+
43
+ Kevin I could get all 6ch working with VSDK 2.10, Please apply the patch (6CAM_Patch.zip) attached on top of VSDK 2.10 and build/test again regards, Shiju 6CAM_Patch.zip
44
+
45
+ Hi Shiju, An error occurred when I compiled chains_main.c with VSDK 2.10: "fatal error: include/config/system_cfg.h: No such file or directory". And I cannot find any file named system_cfg.h under vision_sdk. BTW, readme.txt mentions Rules.make, but it is not included in the zip file. Can you help me to solve these problems? Thanks, Kevin
46
+
47
+ Kevin pick only below change for chain_main.c. gChains_usecaseCfg.numLvdsCh = 6; You only need to apply below patches 1. 0001-dra7xx-mux_data-Add-pinmux-iodelay-for-VIN4B 2. dra7-evm-infoadas 3. dra7-evm-vision 4. chains_main - only change is gChains_usecaseCfg.numLvdsCh = 6; for LVDS capture + mosaic display usecase regards, Shiju regards, Shiju
48
+
49
+ Hi Shiju, I try the four steps on v2.10. And channel 5th works now. Thanks for your help. Kevin
50
+
51
+ Hi Kevin Thanks for the confirmation:) regards, Shiju
52
+
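Collecting the SDK-side edits from the thread above into one sketch (the u-boot/kernel pinmux and device-tree patches are separate attachments); the stand-in struct exists only to make the fragment self-contained, since the real gChains_usecaseCfg comes from the Vision SDK use-case code:

    /* vision_sdk/examples/tda2xx/include/video_sensor.h, per the thread */
    #define VIDEO_SENSOR_NUM_LVDS_CAMERAS   (6u)

    /* illustrative stand-in so this fragment compiles on its own */
    typedef struct { unsigned int numLvdsCh; } Chains_UsecaseCfg;
    static Chains_UsecaseCfg gChains_usecaseCfg;

    /* linux/examples/tda2xx/src/common/chains_main.c: request 6 LVDS channels
       for the "LVDS capture + mosaic display" use-case */
    static void Chains_select6ChCapture(void)
    {
        gChains_usecaseCfg.numLvdsCh = VIDEO_SENSOR_NUM_LVDS_CAMERAS;
    }

On the TDA2x EVM, NDK_PROC_TO_USE=none in Rules.make (or the kernel-side change) is also needed because the 5th camera is board-muxed with Ethernet.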
sample_embedding_folder2/0537108.txt ADDED
@@ -0,0 +1,24 @@
1
+ Ticket Name: [TDA3] Stereo camera solution
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Dear all, we will develop a stereo camera system for an in-car system. Presently we have selected the TDA3 solution, but we are not sure whether TDA3 can meet the requirements of a stereo camera. From the TI ADAS SoC material: 1. The Stereo Front Camera High-Level Block diagram of TDA2 shows it can support a stereo camera solution, but TDA3 does not have this. TDA3 is a cost-down solution, so we worry it cannot meet the stereo camera performance requirement. Can you check TDA3, or do you suggest using TDA2? And please tell me the main reason. 2. From the Stereo Front Camera High-Level Block diagram of TDA2 we can see that you use an FPGA (optional) to link from the camera interface to TDA2. Can you tell us its main function? If it is a "must", who can provide this FPGA image? Or does TI support this (or provide this SW code)? Br. Ivan Wu [email protected]
5
+
6
+ Responses:
7
+ Can I add a question? 3. Regarding the stereo camera, can VIN1a and VIN2a of TDA3 input the video signals of two cameras at the same time?
8
+
9
+ You can interface up to four cameras through each VIN (Internally each VIN has 2 slices and each slice has 2 ports). I think what matters here is resolution, fps and data format of stream you are capturing. There shouldn't be any problem for two signals.
10
+
11
+ Hi Yogesh, thank you for your response. But I still don't know: can Camera 1 and Camera 2 work at the same time, or do they use time-division multiplexing? Br. Ivan Wu
12
+
13
+ Hi Ivan, TDA3x can do stereo but the maximum resolution and the maximum number of disparities it can handle, will be less than what TDA2x can handle. What is the target resolution, frame rate and maximum disparity you are trying to achieve in your system ? regards, Victor Cheng
14
+
15
+ Hi, Victor Presently we would plan to use HD (720p) resolution, and this fps criteria should be defined at 30 ~ 60 fps. About this, do you have any suggestion? Br. Ivan Wu
16
+
17
+ Hi Ivan, For 720P, the maximum frame rate TDA3x can handle would be 7.5 fps with a maximum number of disparities of 32. regards, Victor Cheng
18
+
19
+ Ivan, If you have Vision SDK, you can refer to the data sheet under vision_sdk\docs. It explains stereo usecase in detail and how much processing is needed for 640x360@30 fps processing on TDA2x (Capture is still 1280 x 720 from two cameras on xCAM) this info can be used to extrapolate for tda3x in your case but as Victor said for tda3x 720p stereo would yield very low fps.
20
+
21
+ Hi Ivan Wu, The VIP in both TDA3x & TDA2x can receive multiple video streams. VIP1 & VIP2 are two separate instances of VIP. This allows SoC to receive multiple video streams simultaneously. Depending on the camera interface & other system needs (width of parallel port, display width, etc...) we can check on the feasibility. Regards, Sujith
22
+
23
+ Ivan, For that you either need external logic or use TDA's gpio's to sync between two camera using features provided by sensors. This is done on one of our reference designs but with TDA2x www.radiumboards.com/TI_TDA2x_Based_xCAM_Platform.php
24
+
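A rough way to use the numbers quoted above (720p at 7.5 fps with 32 disparities on TDA3x) for extrapolation: block-matching stereo cost scales roughly with width x height x disparity count, ignoring per-candidate window costs, so the sustainable candidate-evaluation rate can be treated as approximately constant. This is only a back-of-the-envelope sketch, not a TI performance figure:

    #include <stdio.h>

    int main(void)
    {
        /* quoted TDA3x operating point: 1280x720, 32 disparities, 7.5 fps */
        const double budget = 1280.0 * 720.0 * 32.0 * 7.5;   /* ~2.2e8 candidates/s */

        /* example: at 640x360 with 32 disparities the same budget gives ~30 fps */
        const double fps_640x360 = budget / (640.0 * 360.0 * 32.0);

        printf("candidate budget: %.2e /s -> 640x360/32 disparities: ~%.0f fps\n",
               budget, fps_640x360);
        return 0;
    }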
sample_embedding_folder2/0539137.txt ADDED
@@ -0,0 +1,14 @@
1
+ Ticket Name: PWM control external drive buzzer on TDA2
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi all, My code base is VisionSDK_2_10 and I use a TDA2 custom board. I want to use PWM to control an externally driven buzzer. Two function pins could be chosen on TDA2: one is eHRPWM and the other is a timer configured for PWM output. But I checked the BSP driver (VISION_SDK_02_10_00_00\ti_components\drivers\bsp_01_06_00_11\src) and it does not have example code for eHRPWM or the timer. Could you please indicate where the example code for this function is? Thanks a lot. B.R. OC
5
+
6
+ Responses:
7
+ HI, Your question has been forwarded to the Vision SDK team Best regards Lucy
8
+
9
+ Hi are you looking for a timer configuration sample code or something else? regards, Shiju
10
+
11
+ PWM module is not supported in VSDK or Starterware
12
+
13
+ Hi Sivaraj, Shiju, Let move to private forum for discussion. Thanks a lot. B.R. OC
14
+
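Since neither VSDK nor StarterWare ships a PWM driver (per the answer above), one of the two options the question mentions is a dual-mode timer configured for PWM output. A hedged register-level sketch follows: the base address is a placeholder and the TCLR/TLDR/TMAR/TCRR names, offsets and bit positions follow the generic OMAP-family GPTimer, so everything must be checked against the TDA2x TRM, and the pad must separately be pinmuxed to the timer's PWM output.

    #include <stdint.h>

    #define TIMER_BASE  0x48032000u   /* placeholder: actual timer instance base from the TRM */
    #define TCLR        (*(volatile uint32_t *)(TIMER_BASE + 0x38u))
    #define TCRR        (*(volatile uint32_t *)(TIMER_BASE + 0x3Cu))
    #define TLDR        (*(volatile uint32_t *)(TIMER_BASE + 0x40u))
    #define TMAR        (*(volatile uint32_t *)(TIMER_BASE + 0x4Cu))

    void buzzer_pwm_start(uint32_t period_ticks, uint32_t duty_ticks)
    {
        uint32_t load = 0xFFFFFFFFu - period_ticks + 1u;  /* overflow once per period */

        TCLR = 0;                      /* stop the timer while reconfiguring */
        TLDR = load;                   /* auto-reload value -> PWM period    */
        TMAR = load + duty_ticks;      /* compare match    -> duty cycle     */
        TCRR = load;                   /* start counting from the reload     */

        TCLR = (1u << 12)              /* PT : toggle the PWM output pin     */
             | (2u << 10)              /* TRG: trigger on overflow and match */
             | (1u << 6)               /* CE : compare enable                */
             | (1u << 1)               /* AR : auto-reload                   */
             | (1u << 0);              /* ST : start the timer               */
    }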
sample_embedding_folder2/0539230.txt ADDED
@@ -0,0 +1,12 @@
1
+ Ticket Name: Does TDA2 vout support bt656 8-bit mode?
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi all, My code base is VisionSDK_v2.08 and I use a TDA2 custom board. My vout data pins are connected as "TDA2_Vout_D[0~7] -> DS90UB913Q_DIN[2~9]" and the data format is "BT656, YUV422, 800x480". In VisionSDK_v2.08, I modified some settings in vision_sdk\examples\tda2xx\src\usecases\common\chains_common.c, "ChainsCommon_SetDctrlConfig()", as below (the modified lines were highlighted in red in the original post). But nothing is displayed on the panel.
+ if(displayType == CHAINS_DISPLAY_TYPE_LCD_7_INCH)
+ {
+     pPrm->deviceId = DISPLAYCTRL_LINK_USE_LCD;
+     pVInfo->vencId = SYSTEM_DCTRL_DSS_VENC_LCD1;
+     pVInfo->outputPort = SYSTEM_DCTRL_DSS_DPI1_OUTPUT;
+     pVInfo->vencOutputInfo.vsPolarity = SYSTEM_DCTRL_POLARITY_ACT_LOW;
+     pVInfo->vencOutputInfo.hsPolarity = SYSTEM_DCTRL_POLARITY_ACT_LOW;
+     /* Below are of dont care for EVM LCD */
+     pVInfo->vencOutputInfo.fidPolarity = SYSTEM_DCTRL_POLARITY_ACT_LOW;
+     pVInfo->vencOutputInfo.actVidPolarity = SYSTEM_DCTRL_POLARITY_ACT_LOW;
+     pVInfo->mInfo.standard = SYSTEM_STD_CUSTOM;
+     pVInfo->mInfo.width = displayWidth;
+     pVInfo->mInfo.height = displayHeight;
+     pVInfo->mInfo.scanFormat = SYSTEM_SF_PROGRESSIVE;
+     pVInfo->mInfo.pixelClock = 29232u;
+     pVInfo->mInfo.fps = 60U;
+     pVInfo->mInfo.hFrontPorch = 40u;
+     pVInfo->mInfo.hBackPorch = 40u;
+     pVInfo->mInfo.hSyncLen = 48u;
+     pVInfo->mInfo.vFrontPorch = 13u;
+     pVInfo->mInfo.vBackPorch = 29u;
+     pVInfo->mInfo.vSyncLen = 3u;
+     pVInfo->vencDivisorInfo.divisorLCD = 1;
+     if(Bsp_platformIsTda3xxFamilyBuild())
+     {
+         pVInfo->vencDivisorInfo.divisorPCD = 1;
+     }
+     else
+     {
+         pVInfo->vencDivisorInfo.divisorPCD = 4;
+     }
+     pVInfo->vencOutputInfo.dataFormat = SYSTEM_DF_YUV422I_YUYV; // SYSTEM_DF_RGB24_888;
+     pVInfo->vencOutputInfo.dvoFormat = SYSTEM_DCTRL_DVOFMT_BT656_EMBSYNC; // SYSTEM_DCTRL_DVOFMT_GENERIC_DISCSYNC;
+     pVInfo->vencOutputInfo.videoIfWidth = SYSTEM_VIFW_8BIT; // SYSTEM_VIFW_24BIT;
+     pVInfo->vencOutputInfo.pixelClkPolarity = SYSTEM_DCTRL_POLARITY_ACT_HIGH;
+     pVInfo->vencOutputInfo.aFmt = SYSTEM_DCTRL_A_OUTPUT_MAX;
+     /* Configure overlay params */
+     ovlyPrms->vencId = SYSTEM_DCTRL_DSS_VENC_LCD1;
+ }
+ I found the description below in the TDA2 TRM. Can TDA2 support 8-bit mode, or am I missing any settings? Please give me some hints. – Displays supported: • Active matrix color: 12-, 16-, 18-, and 24-bit panel interface support (replicated or dithered encoded pixel values) Thanks in advance.
5
+
6
+ Responses:
7
+ Hi Sherry, I think also pins [9:2] from TDA2 side should be used (instead of [7:0]). See the TRM note below. Regards, Stan
8
+
9
+ Hi Stan, Thank you for your reply that H/W pin design. In the SW for display setting, I set it as below: pVInfo->vencOutputInfo.dataFormat = SYSTEM_DF_YUV422I_YUYV; pVInfo->vencOutputInfo.dvoFormat = SYSTEM_DCTRL_DVOFMT_BT656_EMBSYNC; pVInfo->vencOutputInfo.videoIfWidth = SYSTEM_VIFW_8BIT; and get error message as below dispdrv/src/vpsdrv_dctrl.c @ Line 713: Core control: Set venc output failed!! Assertion @ Line: 719 in links_ipu/display_ctrl/displayCtrlLink_drv.c: retVal == SYSTEM_LINK_STATUS_SOK : failed !!! I change videoIfWidth = SYSTEM_VIFW_10BIT and get the same error message. But set videoIfWidth = SYSTEM_VIFW_12BIT is OK! How do I set display venc's parameter for BT656? Kuve
10
+
11
+ Hi Kuve, I don't have much knowledge in DSS software, therefore I cannot tell the exact settings. Regarding "But set videoIfWidth = SYSTEM_VIFW_12BIT is OK!" : there are no sub-12-bit settings available , because it reflects register bits DISPC_CONTROL1[9:8] TFTDATALINES, which in turn configures the data with in RGB mode. That is, it is irrelevant to BT656 and therefore you can leave it at default (12-bit). Because most of the other settings above are related to DSS registers, you can refer to Table 11-150. DISPC Configure BT.656 or BT.1120 Mode, and other tables in the same section of TRM. Regards, Stan
12
+
sample_embedding_folder2/0540822.txt ADDED
@@ -0,0 +1,8 @@
1
+ Ticket Name: TDA2x EVM (Vayu EVM): DDR3 reset signal and VTT turn off
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hello team, I have two questions about the DDR3 reference schematic in the Vayu EVM (516582G4_VAYU_EVM_03MAR_2015A.pdf). 1. In the Vayu reference schematic, the DDR3 reset signals (DDR1_RST, DDR2_RST) come from the 1V35_DDR power rail, not from the TDA2 DDR reset signals (AG21, R24). Is there any specific reason not to use the TDA2 DDR reset signals? 2. In the Vayu reference schematic, the VTT regulation LDO includes a TR circuit on the VTT supply. It looks like it discharges the VTT supply when it is off. What is the specific reason to add this kind of circuit, and is it mandatory for the VTT supply? Best regards, Lloyd
5
+
6
+ Responses:
7
+ Hello Lloyd, All this is to support Suspend-To-RAM (STR), aka Fast Suspend-Resume (FSR). During STR the DDR3 memories must of course remain powered while the SoC is entirely off (except for possibly its RTC domains), and they must NOT be in reset (otherwise the memory chips shall exit the Suspend state and loose their contents). Therefore the DDRx_RST signals are derived from the PG output of the memory subsystem power buck (if you notice, the DDR3 supply for the SoC goes through a switch that is off during suspend, but the buck stays on). The SoC reset outputs pull low while it is off and would reset the chips if they were used for this purpose. On the other hand, the memory chips must see a stable low on CKE while in suspend. Therefore we actively discharge VTT, so that no glitches occur on CKE during the power-down (upon suspend entry) and power-up (upon resume) sequences - these are possible through the termination resistors of the other C/A bus signals. If STR/FSR is not a design requirement, the active discharge circuit is not needed, and the DDR3 resets can be driven by the SoC. Hope this answers your questions. Best regards, Lubo
8
+
sample_embedding_folder2/0540982.txt ADDED
@@ -0,0 +1,12 @@
1
+ Ticket Name: XDS560V2 Emulator connection failure
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Our project is being developed in a Linux environment. We have a custom TDA3 board. I have connected a Spectrum Digital XDS560V2 STM JTAG emulator to our custom board. I try to launch the debug configuration; I checked all the connections and initialized the GEL files. After a few minutes I got the following error: Error initializing emulator: (Error -2083 @ 0x0) Unable to communicate with the emulator. Confirm emulator configuration and connections, reset the emulator, and retry the operation. (Emulation package 5.1.73.0)
5
+
6
+ Responses:
7
+ Hi Jaganathan, Did you try to eliminate the host PC factor? I mean did you try on another PC? Also, the front PC ports are often using cheap wires to the main board and fail to communicate. Rear USBs are always preferred. You can also try with another known good USB cable. Also, do you use known good power supply for the debugger unit? Regards, Stan
8
+
9
+ Hi Stan, Same XDS560V2 JTAG Emulator is using in TDA2 EVM board. It is working in windows environment. I did test connection. Below are report generated by Code Composer Studio [Start: Spectrum Digital XDS560V2 STM USB Emulator_0] Execute the command: %ccs_base%/common/uscif/dbgjtag -f %boarddatafile% -rv -o -F inform,logfile=yes -S pathlength -S integrity [Result] -----[Print the board config pathname(s)]------------------------------------ /home/gnanaveluj/.ti/ti/0/0/BrdDat/testBoard.dat -----[Print the reset-command software log-file]----------------------------- This utility has selected a 560/2xx-class product. This utility will load the program 'sd560v2u.out'. E_RPCENV_IO_ERROR(-6) No connection: DTC_IO_Open::dtc_conf Download failed for file /home/gnanaveluj/ti/ccsv6/ccs_base/common/uscif/./././././xds560v2.out An error occurred while soft opening the controller. -----[An error has occurred and this utility has aborted]-------------------- This error is generated by TI's USCIF driver or utilities. The value is '-250' (0xffffff06). The title is 'SC_ERR_ECOM_EMUNAME'. The explanation is: An attempt to access the debug probe via USCIF ECOM has failed. [End: Spectrum Digital XDS560V2 STM USB Emulator_0]
10
+
11
+ Hi, I resolved the issue. My laptop was connected through a docking station, so the emulator was not connecting. I removed it from the docking station and now I am able to connect to the board.
12
+
sample_embedding_folder2/0542041.txt ADDED
@@ -0,0 +1,14 @@
1
+ Ticket Name: Linker command files for CCS
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hello, I am using CCS v6.2 to check simple program for each core on TDA2 Vayu EVM. However, CCS project (Selected “Empty project w/ main.c”) does not provide linker command files. How can I get the command files for A15, C66 and M4? Best regards, Kenshow
5
+
6
+ Responses:
7
+ Hello Kenshow, Try the attached one. You may include it in <CCS install folder>\ccsv6\ccs_base\c6000\include. TDA2x_C66.cmd Hope it helps, thanks, Alex
8
+
9
+ Hi Alex, Thanks for C66x command file. Also, I would like to need command files of A15 (gcc base) and M4. Can I get them? Best regards, Kenshow
10
+
11
+ Sure, attached. TDA2x.lds TDA2x_CM4.cmd Thanks, Alex
12
+
13
+ Hi Alex, Thank you very much for useful files. I 'll be able to check my programs on each cores. Best regards, Kenshow
14
+
sample_embedding_folder2/0543489.txt ADDED
@@ -0,0 +1,10 @@
1
+ Ticket Name: How to use IPU1_0 instead of A15_0 to process NDK in TDA2x-EVM
2
+
3
+ Query Text:
4
+ Hi all, When NDK_PROC_TO_USE=a15_0 (the default), the target works normally. But when I set NDK_PROC_TO_USE=ipu1_0, the target halts after run: ipu1_0 is halted at Network_waitConnect() and the return status is 0. Meanwhile, there is no longer any response on the UART when I try to send commands to it. I have tried ipu1_1 with the same result. I did run "gmake -s depend" before "gmake -s" as the guide mentioned. My VSDK version is 2.08. I wonder if someone could help me. Thanks a lot.
5
+
6
+ Responses:
7
+ Hello Benz, Can you send output of gmake -config after configuring NDK_PROC_TO_USE=ipu1_1? Also have you tried doing clean build after making changes? (remove binaries folder) Regards, Prasad
8
+
9
+ Hello Prasad, Thank you for your tips. I did "gmake -s clean" before the build command, and the target now works well.
10
+
sample_embedding_folder2/0544217.txt ADDED
@@ -0,0 +1,14 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: Realtime clock (RTC) subsystem time and calendar register not writeable?
2
+
3
+ Query Text:
4
+ We are trying the TDA2xx RTC subsystem for time and calendar (TC) use. There appear to be problems writing to the TC registers. At the same time I can write to a SCRATCH register and later read the correct value back from it. I cannot do this for the TC registers; even the SECONDS register does not store the values. In addition, the seconds are not counting, since reading from that register always returns '0'. Are there differences between setting up the TC registers and the SCRATCH registers? We are configuring using the internal clock source. Thanks
5
+
6
+ Responses:
7
+ Hi Dan, Do you use the func clock from the PRCM or from the external rtc_osc_xi_clkin32 pin? What values do you have in the below two registers? CM_RTC_CLKSTCTRL CM_RTC_RTCSS_CLKCTRL Make sure you follow the below TDA2x TRM sections: 23.4.1 Clock Source 23.4.3.3 OCP MMR Spurious Write Protection 23.4.3.5 Modifying the TC Registers 23.5.1.2 RTC Module Global Initialization Regards, Pavel
8
+
9
+ Hi Pavel, Yes, first we are using the PRCM and wanted to pull the rtc_osc_xi_clkin32 pin low as suggested; however, I cannot find the actual pin pad location. It is not in the TRM. I am checking the other items you suggest; I have looked at those things. The first issue is that RTC_STATUS_REG returns 0x0, whereas I was thinking it should be 0x2, which would indicate the RTC is running. I am checking further. Thanks, Dan
10
+
11
+ Dan, Dan Zulaica said: Yes, first we are using the PRCM and wanted to pull the rtc_osc_xi_clkin32 pin low as suggested, however cannot find the actual pin pad location. It is not in the TRM. Pad locations are documented in DM (data manual), not TRM. For TDA2Hx-17 (SPRS952A), rtc_osc_xi_clkin32 is at AA13. Regards, Pavel
12
+
13
+ Hi Dan, I would like to highlight some spots that Pavel already pointed to. 1. Make sure RTC is enabled in PRCM - CM_RTC_CLKSTCTRL[1:0] CLKTRCTRL = 0x3 - CM_RTC_RTCSS_CLKCTRL[1:0] MODULEMODE = 0x2 2. Make sure RTC register protection is turned off - RTC_KICK0_REG = 0x83E70B13 - RTC_KICK1_REG = 0x95A4F1E0 3. Select the internal clock - RTC_OSC_REG[3] 32KCLK_SEL = 0 4. Enable the clock - RTC_OSC_REG[6] 32KCLK_EN = 1 5. Start the RTC running - RTC_CTRL_REG[0] STOP_RTC = 1 6. Check the RTC is running - RTC_STATUS_REG[1] RUN == 1 OR - See if the seconds are updating. A bare-metal sketch of this sequence is given below. Please be aware that the internal 32K clock is made of SYS_CLK1 div 610 and will be close to 32.768 kHz only if you use SYS_CLK1 @ 20 MHz. Best regards, Stan
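+ A minimal bare-metal C sketch of the above sequence follows. The RTC subsystem base address and register offsets are placeholders/assumptions; verify every address and bit field against the TDA2x TRM before using it, and note that the PRCM step (1) is only shown as a comment.
+ #include <stdint.h>
+
+ /* Placeholder addresses/offsets (assumption) -- check the TDA2x TRM. */
+ #define RTCSS_BASE        0x48838000u                 /* RTC subsystem base */
+ #define RTC_CTRL_REG      (RTCSS_BASE + 0x40u)
+ #define RTC_STATUS_REG    (RTCSS_BASE + 0x44u)
+ #define RTC_OSC_REG       (RTCSS_BASE + 0x54u)
+ #define RTC_KICK0_REG     (RTCSS_BASE + 0x6Cu)
+ #define RTC_KICK1_REG     (RTCSS_BASE + 0x70u)
+
+ #define WR32(addr, val)   (*(volatile uint32_t *)(addr) = (uint32_t)(val))
+ #define RD32(addr)        (*(volatile uint32_t *)(addr))
+
+ void rtcStartOnInternalClk(void)
+ {
+     uint32_t osc;
+
+     /* Step 1 (PRCM, not shown here): CM_RTC_CLKSTCTRL.CLKTRCTRL = 0x3 and
+      * CM_RTC_RTCSS_CLKCTRL.MODULEMODE = 0x2 must already be programmed. */
+
+     /* Step 2: unlock the RTC registers (kick values from the TRM). */
+     WR32(RTC_KICK0_REG, 0x83E70B13u);
+     WR32(RTC_KICK1_REG, 0x95A4F1E0u);
+
+     /* Steps 3 and 4: select the internal 32K clock and enable it. */
+     osc = RD32(RTC_OSC_REG);
+     osc &= ~(1u << 3);          /* 32KCLK_SEL = 0 -> internal source */
+     osc |=  (1u << 6);          /* 32KCLK_EN  = 1                    */
+     WR32(RTC_OSC_REG, osc);
+
+     /* Step 5: start the RTC (STOP_RTC bit = 1 means "RTC running"). */
+     WR32(RTC_CTRL_REG, RD32(RTC_CTRL_REG) | 1u);
+
+     /* Step 6: wait for the RUN status bit (add a timeout in real code). */
+     while ((RD32(RTC_STATUS_REG) & (1u << 1)) == 0u)
+     {
+     }
+ }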
14
+
sample_embedding_folder2/0547591.txt ADDED
@@ -0,0 +1,244 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: Integrating Vision Algorithm on Vision SDK TDA3
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 In our project, we have a vision algorithm, written in C, for an ADAS camera monitoring system, which is already working on a TDA2 EVM board. I need to integrate it with the framework based on the Vision SDK provided by Texas Instruments. To understand the Vision SDK completely, I would like to implement an Image Negative. I have attached the image which was generated by the VSDK use-case generation. Using use-case generation, I have generated the files adasens_issImageNegative_priv.c and adasens_issImageNegative_priv.h. My image negative input/output structures and function prototype are below. typedef struct { unsigned char *ptImageInput; //Input Image to unsigned int uiNoRows; unsigned int uiNoCols; }stImageNegativeInput; chains_vipSingleCameraEdgeDetection.c /*
5
+ *******************************************************************************
6
+ *
7
+ * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com/
8
+ * ALL RIGHTS RESERVED
9
+ *
10
+ *******************************************************************************
11
+ */
12
+
13
+ /*******************************************************************************
14
+ * INCLUDE FILES
15
+ *******************************************************************************
16
+ */
17
+ #include "chains_vipSingleCameraEdgeDetection_priv.h"
18
+ #include <tda2xx/include/bios_chains_common.h>
19
+
20
+
21
+ #define CAPTURE_SENSOR_WIDTH (1280)
22
+ #define CAPTURE_SENSOR_HEIGHT (720)
23
+
24
+ /**
25
+ *******************************************************************************
26
+ *
27
+ * \brief SingleCameraEdgeDetectionObject
28
+ *
29
+ * This structure contains all the LinksId's and create Params.
30
+ * The same is passed to all create, start, stop functions.
31
+ *
32
+ *******************************************************************************
33
+ */
34
+ typedef struct {
35
+
36
+ chains_vipSingleCameraEdgeDetectionObj ucObj;
37
+
38
+ UInt32 captureOutWidth;
39
+ UInt32 captureOutHeight;
40
+ UInt32 displayWidth;
41
+ UInt32 displayHeight;
42
+
43
+ Chains_Ctrl *chainsCfg;
44
+
45
+ } Chains_VipSingleCameraEdgeDetectionAppObj;
46
+
47
+ /**
48
+ *******************************************************************************
49
+ *
50
+ * \brief Set Edge Detection Alg parameters
51
+ *
52
+ * It is called in Create function.
53
+ * In this function alg link params are set
54
+ * The algorithm which is to run on core is set to
55
+ * baseClassCreate.algId. The input width and height to alg are set.
56
+ * Number of input buffers required by alg are also set here.
57
+ *
58
+ *
59
+ * \param pPrm [IN] AlgorithmLink_EdgeDetectionCreateParams
60
+ * \param chainsCfg [IN] Chains_Ctrl
61
+ *
62
+ *******************************************************************************
63
+ */
64
+ Void chains_vipSingleCameraEdgeDetection_SetEdgeDetectionAlgPrms(
65
+ AlgorithmLink_EdgeDetectionCreateParams *pPrm,
66
+ Chains_Ctrl *chainsCfg) {
67
+ pPrm->maxWidth = CAPTURE_SENSOR_WIDTH;
68
+ pPrm->maxHeight = CAPTURE_SENSOR_HEIGHT;
69
+
70
+ pPrm->numOutputFrames = 3;
71
+ }
72
+
73
+
74
+ /**
75
+ *******************************************************************************
76
+ *
77
+ * \brief Set link Parameters
78
+ *
79
+ * It is called in Create function of the auto generated use-case file.
80
+ *
81
+ * \param pUcObj [IN] Auto-generated usecase object
82
+ * \param appObj [IN] Application specific object
83
+ *
84
+ *******************************************************************************
85
+ */
86
+ Void chains_vipSingleCameraEdgeDetection_SetAppPrms(chains_vipSingleCameraEdgeDetectionObj *pUcObj,
87
+ Void *appObj) {
88
+ Chains_VipSingleCameraEdgeDetectionAppObj *pObj
89
+ = (Chains_VipSingleCameraEdgeDetectionAppObj *)appObj;
90
+
91
+ pObj->captureOutWidth = CAPTURE_SENSOR_WIDTH;
92
+ pObj->captureOutHeight = CAPTURE_SENSOR_HEIGHT;
93
+ ChainsCommon_GetDisplayWidthHeight(
94
+ pObj->chainsCfg->displayType,
95
+ &pObj->displayWidth,
96
+ &pObj->displayHeight
97
+ );
98
+
99
+ ChainsCommon_SingleCam_SetCapturePrms(&(pUcObj->CapturePrm),
100
+ CAPTURE_SENSOR_WIDTH,
101
+ CAPTURE_SENSOR_HEIGHT,
102
+ pObj->captureOutWidth,
103
+ pObj->captureOutHeight,
104
+ pObj->chainsCfg->captureSrc
105
+ );
106
+
107
+ ChainsCommon_SetGrpxSrcPrms(&pUcObj->GrpxSrcPrm,
108
+ pObj->displayWidth,
109
+ pObj->displayHeight
110
+ );
111
+
112
+
113
+ ChainsCommon_SetDisplayPrms(&pUcObj->Display_VideoPrm,
114
+ &pUcObj->Display_GrpxPrm,
115
+ pObj->chainsCfg->displayType,
116
+ pObj->displayWidth,
117
+ pObj->displayHeight
118
+ );
119
+
120
+ ChainsCommon_StartDisplayCtrl(
121
+ pObj->chainsCfg->displayType,
122
+ pObj->displayWidth,
123
+ pObj->displayHeight
124
+ );
125
+
126
+ chains_vipSingleCameraEdgeDetection_SetEdgeDetectionAlgPrms
127
+ (&pUcObj->Alg_EdgeDetectPrm,
128
+ pObj->chainsCfg);
129
+ }
130
+
131
+ /**
132
+ *******************************************************************************
133
+ *
134
+ * \brief Start the capture display Links
135
+ *
136
+ * Function sends a control command to capture and display link to
137
+ * start all the required links. Links are started in reverse
138
+ * order as information of next link is required to connect.
139
+ * System_linkStart is called with LinkId to start the links.
140
+ *
141
+ * \param pObj [IN] Chains_VipSingleCameraEdgeDetectionAppObj
142
+ *
143
+ * \return SYSTEM_LINK_STATUS_SOK on success
144
+ *
145
+ *******************************************************************************
146
+ */
147
+ Void chains_vipSingleCameraEdgeDetection_StartApp(Chains_VipSingleCameraEdgeDetectionAppObj *pObj) {
148
+ Chains_memPrintHeapStatus();
149
+
150
+ ChainsCommon_StartDisplayDevice(pObj->chainsCfg->displayType);
151
+
152
+ ChainsCommon_StartCaptureDevice(
153
+ pObj->chainsCfg->captureSrc,
154
+ pObj->captureOutWidth,
155
+ pObj->captureOutHeight,1
156
+ );
157
+
158
+ chains_vipSingleCameraEdgeDetection_Start(&pObj->ucObj);
159
+
160
+ Chains_prfLoadCalcEnable(TRUE, FALSE, FALSE);
161
+ }
162
+
163
+ /**
164
+ *******************************************************************************
165
+ *
166
+ * \brief Delete the capture display Links
167
+ *
168
+ * Function sends a control command to capture and display link to
169
+ * delete all the previously created links
170
+ * System_linkDelete is called with LinkId to delete the links.
171
+ *
172
+ * \param pObj [IN] Chains_VipSingleCameraEdgeDetectionAppObj
173
+ *
174
+ *******************************************************************************
175
+ */
176
+ Void chains_vipSingleCameraEdgeDetection_StopAndDeleteApp(Chains_VipSingleCameraEdgeDetectionAppObj
177
+ *pObj) {
178
+ chains_vipSingleCameraEdgeDetection_Stop(&pObj->ucObj);
179
+ chains_vipSingleCameraEdgeDetection_Delete(&pObj->ucObj);
180
+
181
+ ChainsCommon_StopDisplayCtrl();
182
+ ChainsCommon_StopCaptureDevice(pObj->chainsCfg->captureSrc);
183
+ ChainsCommon_StopDisplayDevice(pObj->chainsCfg->displayType);
184
+
185
+ /* Print the HWI, SWI and all tasks load */
186
+ /* Reset the accumulated timer ticks */
187
+ Chains_prfLoadCalcEnable(FALSE, TRUE, TRUE);
188
+ }
189
+
190
+ /**
191
+ *******************************************************************************
192
+ *
193
+ * \brief Single Channel Capture Display usecase function
194
+ *
195
+ * This functions executes the create, start functions
196
+ *
197
+ * Further in a while loop displays run time menu and waits
198
+ * for user inputs to print the statistics or to end the demo.
199
+ *
200
+ * Once the user inputs end of demo stop and delete
201
+ * functions are executed.
202
+ *
203
+ * \param chainsCfg [IN] Chains_Ctrl
204
+ *
205
+ *******************************************************************************
206
+ */
207
+ Void Chains_vipSingleCameraEdgeDetection(Chains_Ctrl *chainsCfg) {
208
+ char ch;
209
+ UInt32 done = FALSE;
210
+ Chains_VipSingleCameraEdgeDetectionAppObj chainsObj;
211
+
212
+ chainsObj.chainsCfg = chainsCfg;
213
+
214
+ chains_vipSingleCameraEdgeDetection_Create(&chainsObj.ucObj, &chainsObj);
215
+
216
+ chains_vipSingleCameraEdgeDetection_StartApp(&chainsObj);
217
+
218
+ while(!done) {
219
+ ch = Chains_menuRunTime();
220
+
221
+ switch(ch) {
222
+ case '0':
223
+ done = TRUE;
224
+ break;
225
+ case 'p':
226
+ case 'P':
227
+ ChainsCommon_PrintStatistics();
228
+ chains_vipSingleCameraEdgeDetection_printStatistics(&chainsObj.ucObj);
229
+ break;
230
+ default:
231
+ Vps_printf("\nUnsupported option '%c'. Please try again\n", ch);
232
+ break;
233
+ }
234
+ }
235
+
236
+ chains_vipSingleCameraEdgeDetection_StopAndDeleteApp(&chainsObj);
237
+
238
+ }
239
+
240
+ typedef struct { unsigned char *ptImageOutput; }stImageNegativeOutput; void ImageNegative(&stImageNegativeInput,&stImageNegativeOutput); Assume that chains_vipSingleCameraEdgeDetection.c (attached with this) is performing the image negative. I have to call ImageNegative for each frame. Can you please tell me where exactly I have to call my ImageNegative function and initialize the input and output structures?
241
+
242
+ Responses:
243
+ Hi, Algorithms are integrated into an Algorithm Link in the Vision SDK. You have to create an algorithm link plug-in for your algorithm. Please refer to Chapter 4 in the Vision SDK Development Guide. For example, you can refer to ~\VISION_SDK_02_xx_xx_xx\vision_sdk\examples\tda2xx\src\alg_plugins\edgedetection. A rough sketch of where your per-frame ImageNegative() call would go is shown below.
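+ The per-frame ImageNegative() call normally goes inside the plug-in's process function, in the same place where the frame-copy example walks over the frame buffers. The sketch below is illustrative only: the function name and the exact buffer plumbing are hypothetical, and the real plug-in must follow the create/process/control/delete skeleton described in Chapter 4.
+ /* Hypothetical process function of an "image negative" plug-in, modeled on
+  * the frame-copy plug-in. inPtr/outPtr are the per-plane pointers the link
+  * framework hands to the plug-in for one frame. */
+ Int32 Alg_ImageNegativeProcess(UInt32 *inPtr[],
+                                UInt32 *outPtr[],
+                                UInt32  width,
+                                UInt32  height)
+ {
+     stImageNegativeInput  stIn;
+     stImageNegativeOutput stOut;
+
+     /* Initialize your input/output structures from the current frame
+      * (luma plane shown; for NV12 the chroma plane needs handling too). */
+     stIn.ptImageInput   = (unsigned char *)inPtr[0];
+     stIn.uiNoRows       = height;
+     stIn.uiNoCols       = width;
+     stOut.ptImageOutput = (unsigned char *)outPtr[0];
+
+     /* Your algorithm is called exactly once per frame, here. */
+     ImageNegative(&stIn, &stOut);
+
+     return SYSTEM_LINK_STATUS_SOK;
+ }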
244
+
sample_embedding_folder2/0563747.txt ADDED
@@ -0,0 +1,14 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: RTOS/TDA2EVM5777: which os systems are supported on this platfrom?
2
+
3
+ Query Text:
4
+ Part Number: TDA2EVM5777 Other Parts Discussed in Thread: TDA2, SYSBIOS Tool/software: TI-RTOS Dear all, We want to use the TDA2EVM5777 platform to develop our project, because there are 2 Cortex-A15 ARM cores, 2 C66x DSP cores and 2x dual Cortex-M4. I want to know if the whole platform is supported by SYS/BIOS. If so, can we debug the project on every core in SYS/BIOS via an emulator? By the way, are there other OS options for this platform, such as Linux, iOS, and so on? BRS, Meng
5
+
6
+ Responses:
7
+ Hi Meng, Are you using Vision SDK for this platform? It does support SYS/BIOS for all the cores. Please refer to Vision SDK White paper, figure 4 for the software stack diagram ( www.ti.com/.../spry260.pdf). Regarding the question about emulator support and other OSes support, we would have to move this thread to the device forum for a faster response from the TDA2 platform experts. Vikram
8
+
9
+ Dear Vikram, Thank you for your reply. I shall browse through the document mentioned by you. BRS, Meng
10
+
11
+ Dear Vikram, Thank you for your reply. There are some other questions about this platform; could you help me? Thank you in advance. The following picture is the TI Vision SDK software frame. I want to know which CCS version is used for the TI Vision SDK software, and which SYS/BIOS version is used for it. The software uses IPC 3.0 for inter-core communication; where can I download this IPC version? Suggested SYS/BIOS, IPC and XDCtools versions are listed on the following website: software-dl.ti.com/.../index.html For IPC 3.0, which SYS/BIOS, XDCtools and CCS versions are suggested? By the way, my current OS is Win7. Can CCS 6.0 and above be installed on a Win7 OS? Thanks a lot! BRS, Meng
12
+
13
+ Hi Meng, you do not need to download components like SYS/BIOS, IPC and XDC separately; VSDK is a single installer containing all these packages. Just download VSDK and follow the User Guide to build and test. CCS version 6.0.1.00040 or higher should be used along with the Vision SDK 2.10 or 2.11 release. Regards, Shiju
14
+
sample_embedding_folder2/0563807.txt ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: Embedded Xvisor for Ti platforms
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi All, Is it possible to port Xvisor to any TI platform, like DM8148 or TDA2x? If so, what are the steps to port it? Regards, Vinayak
5
+
6
+ Responses:
7
+ Hello Vinayak, Let me move this to the TDAxx forum, they might be able to help you. Regards, Karl
8
+
9
+ Hi, Experts are notified and will write directly in the thread. Regards, Mariya
10
+
11
+ Hi Vinayak, In general, a hypervisor is an offering from TI 3rd parties. TI does not have an in-house hypervisor solution. Specifically, there are currently working solutions on Xen and COQOS from 3Ps. I haven't seen an Xvisor port on a TI platform. Nevertheless, since Xvisor supports ARMv7-A with the virtualization extensions (Cortex-A15), it should be possible to enable it on J6 / TDA2. However, we wouldn't be able to guide you directly on this. We can put you in touch with our 3Ps. Please contact your FAE for 3P details. Regards, Anand
12
+
sample_embedding_folder2/0565132.txt ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2EVM5777: Pedestrian Detection with Network File source
2
+
3
+ Query Text:
4
+ Part Number: TDA2EVM5777 Other Parts Discussed in Thread: TDA2 Hi, I want to run the "vip_single_cam_object_detection2" pedestrian detection usecase on Vayu EVM with network file source instead of camera capture. In this usecase, I replaced the "Capture" Link with "NullSource" Link with network file read with the help of "chains_networkRxDisplay" usecase. While running I am giving my raw NV12 format YUV video through the "network_tx.exe" network tool that came with Vision SDK. Problem is: if I give only a single frame of this raw video as input, it is able to detect pedestrians properly in that frame. But when I give the raw video (multiple frames) as input, it is not able to detect any pedestrian in any frame. I am able to play the video on 10" LCD. I tried to change frame rate in my .c file to 30, 1, 0.5, 0.2 FPS by setting the NullSource Param "timerPeriodMilliSecs" to corresponding time in millisec. Following is my setup: Vayu EVM (DRA74x) Rev G3 with 10" LCD without Vision Application Board. Vision SDK 2.11.00.00 with tda2xx_evm_bios_all configuration NDK_PROC_TO_USE=ipu1_0 I am sensing there is some timing related issue between the links. Can you suggest what could be the problem? Regards, Abhishek Gupta
5
+
6
+ Responses:
7
+ Hi, Abhishek, Your query has been forwarded to an expert. Regards, Mariya
8
+
9
+ Do you see output video with no pedestrians marked, or do you not see any video on the display? Regards, Kedar
10
+
11
+ Hi Kedar, I am able to see the output video on display with no pedestrian marked. Regards Abhishek
12
+
13
+ Can you send the use-case file that you modified ? Also what is the test input you are using ? It is possible to send few frames of the test input so that we can check at our end. The pedestrian detection algorithm has some notion of history so it shows a pedestrian only if it is detected few times in a sequence of frames. Also the pedestrian detection algorithm is just demo and may not work so well on arbitrary input streams. regards Kedar
14
+
15
+ PD_e2e_post_files.zip I have attached the use-case file and 15 frames of input video that I am using. I have created the video from 15 consecutive images from the Caltech dataset which is widely used for training and testing pedestrian detection algorithms. Regards, Abhishek
16
+
17
+ hi Abhishek, We are able to recreate the issue at our end. We will get back to you on the solution. regards Kedar
18
+
19
+ I looked at the video that you shared - it is mostly a collection of random frames and not a continuous video. Since the object tracking looks at continuity for several frames, it won't be able to output a stable object location. If you don't have a proper video to give as input, you can create one from the following sequence: https://data.vision.ee.ethz.ch/cvl/aess/cvpr2008/seq03-img-left.tar.gz There are several such sequences in the following page: data.vision.ee.ethz.ch/.../ Let us know how it goes. Best regards, Manu.
20
+
21
+ Hi Manu, Thanks for your response. It was able to detect pedestrians in the sequences from the data-vision link you shared. There are some places where it is not able to detect a person, or there are some false positives, but since this is a demo algorithm, I am OK with that. Can you tell me which technical paper the algorithm is based on, and what dataset it is trained on? Another thing I could see is that if I run at more than 5 FPS, the application gets stuck after a few frames. There may be an issue with my network connection, which can't support such high rates of data transfer (for 640x480@5fps, it takes ~2.3 MBps). At less than 5 FPS, it plays fine. Regards, Abhishek
22
+
23
+ Hi Abhishek, The ACF detector is a popular pedestrian detection algorithm. You can check it out at the following link. You can also find literature references there. pdollar.github.io/.../ I am not competent enough to comment about the frame freeze - Kedar can probably help there. Btw, do you mind describing details of the application that you are targetting? Best regards, Manu.
24
+
25
+ Abhishek, Currently the networkRxDisplay use case is configured to run at a very low frame rate so as to work on a core like the M4 as well. To enable a higher frame rate, change the line below in the use-case application file to the frame rate you desire. As you are running this use-case on the TDA2 A15, you should be able to set it up to 30 fps. Change this: pPrm->timerPeriodMilliSecs = 1000; to: pPrm->timerPeriodMilliSecs = 1000/30;
26
+
27
+ Hi Abhishek, we are also trying to send video through the network port. Could you please tell me how you have linked the "chains_networkRxDisplay" usecase with the "vip_single_cam_object_detection2" usecase? Thanks, Swati
28
+
29
+ Hi Swati, Make a copy of the folder <VISION_SDK>/vision_sdk/examples/tda2xx/src/usecases/vip_single_cam_object_detection2. Then follow these steps: 1. Replace "Capture" in chains_vipSingleCameraObjectDetect2Tda3xx.txt with "NullSource". 2. In chains_vipSingleCameraObjectDetect2Tda3xx.c, you will have to replace all the CapturePrms with the NullSrcPrms from the chains_networkRxDisplay usecase. This involves adding the function chains_myObjDetect_SetNullSrcPrms instead of ChainsCommon_SingleCam_SetCapturePrms (a rough sketch is shown below). 3. If you are using NDK_PROC_TO_USE=a15_0 (and not ipu1_0) in <VISION_SDK>/vision_sdk/configs/tda2xx_evm_bios_all/cfg.mk (depending on what board and config you are using), then you need to add that dependency in your usecase folder's cfg.mk. Follow the build procedure as mentioned in the Vision SDK developer's guide. Hope it helps.
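+ For step 2, a minimal sketch of that helper is shown below; the function name is the hypothetical one from step 2, and everything except timerPeriodMilliSecs should be copied verbatim from the chains_networkRxDisplay use-case rather than from this sketch.
+ /* Hypothetical helper mirroring the NullSrc setup in chains_networkRxDisplay. */
+ static Void chains_myObjDetect_SetNullSrcPrms(NullSrcLink_CreateParams *pPrm)
+ {
+     /* Inject one buffer roughly every 33 ms (~30 fps); raise this value to
+      * slow the injection down if the network cannot keep up. */
+     pPrm->timerPeriodMilliSecs = 1000U / 30U;
+
+     /* Width, height, pitch and dataFormat of the injected NV12 frames must
+      * match what network_tx sends -- copy those fields from the
+      * chains_networkRxDisplay use-case. */
+ }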
30
+
31
+ Hi Abhishek, Thank you for the help. I have one query: have you tried running the usecase for FCW, which is in the path "C:\VISION_SDK_02_12_00_00\ti_components\algorithms_codecs\REL.200.V.SFM.C66X.00.01.00.00\200.V.SFM.C66X.00.01\modules\ti_forward_collision_warning\test\src", through the network port? If yes, can you suggest how to provide video streams as input to the above sample usecase?
32
+
33
+ Hi Swati, I have not tried to build or run the FCW algo. But I think providing video streams over the network would be the same as in the chains_networkRxDisplay usecase. You will have to use the network_tx.exe on Windows provided in the Vision SDK to transfer the file. The command and details are mentioned in the Vision SDK developer's guide. Regards, Abhishek
34
+
sample_embedding_folder2/0565961.txt ADDED
@@ -0,0 +1,22 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2SX: Please help in determining capture to display latency in TDA2 for automotive applications
2
+
3
+ Query Text:
4
+ Part Number: TDA2SX Hi, I have a question on how the latency between a camera capture and display, can be measured. I came across an article on TI wiki, that defines what i need: http://processors.wiki.ti.com/index.php/Latency_Measurement_on_Capture_Encode_Decode_Display_Demo. I would like to know if the same solutions exist for TDA2x processors and If yes, can you please share. Thank you in advance. Mahima
5
+
6
+ Responses:
7
+ Hi Mahima, Are you using vision sdk? If yes, it prints latency from the capture to display when you print stats. Regards, Brijesh
8
+
9
+ Just to add.. You would see the local link level latency and the source to link latency (which is equivalent to capture to particular link latency) in these statistics printed out from Vision SDK. Example is as below: [IPU1-0] 176.749191 s: Local Link Latency : Avg = 30 us, Min = 30 us, Max = 30 us, [IPU1-0] 176.749313 s: Source to Link Latency : Avg = 32971 us, Min = 32971 us, Max = 32971 us, Regards, Piyali
10
+
11
+ Thanks Brijesh. Yes, the Vision SDK is used, but in a development environment. Please let me explain my set-up with more clarity: 1. I have to measure glass-to-glass latency on a bench setup using 4 cameras. The camera output goes to the ECU, which has a TDA2x processor. The output images from the ECU go over LVDS to a monitor. 2. I only have access to M4-0 and A15 core logs through a serial board. The rest are not accessible while testing the product. 3. I went through the Vision SDK user guide and it outlines a separate hardware setup to execute the example use cases. This, I do not have. Is it possible, barring the above limitations, to capture the glass-to-glass latency? I’m pretty new to this. Sorry if I’ve not asked the right question. Thank you once again -Mahima
12
+
13
+ Thanks Piyali, for taking time out to answer. I've outlined, with more clarity, the limitations associated with capturing the latency in my setup. Kindly go through them and let me know your suggestions. Thanks again. -Mahima
14
+
15
+ Mahima, If you are using the Vision SDK, just press 'p' when the usecase is running and it will print all statistics, including the latency from capture to display. This is the latency from the capture link to the display link. There will be an additional 2 to 3 frames of latency in capture and display, which is not counted in these stats. The other way to measure the latency is by keeping a counting clock in front of the camera and taking a picture of the clock and the display output in a single shot. The latency is the difference between the time on the clock and on the display. Regards, Brijesh
16
+
17
+ Thanks Brijesh. I think the system we use is a little different. we have a renderer in between the capture and display. I think vision sdk loses the frame once it goes inside the renderer. But the second method would give us a fair estimate though. So, thanks again for the help.
18
+
19
+ Hi Mahima, It is possible even if you have a renderer in between. You just need to copy srcTimestamp from the source frame to the target frame in your renderer link, as sketched below. Regards, Brijesh
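+ In a Vision SDK link this is typically a one-line copy at the point where the renderer produces its output buffer (a sketch; pBufIn and pBufOut stand for the input and output System_Buffer pointers in your renderer link):
+ /* Carry the capture timestamp through the renderer so the display-side
+  * statistics report true capture-to-display latency. */
+ pBufOut->srcTimestamp = pBufIn->srcTimestamp;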
20
+
21
+ Thanks Brijesh. I will try this on my system and let you know if I could capture the same.
22
+
sample_embedding_folder2/0566481.txt ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2: Cannot save uBoot env
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2, SYSBIOS Hi all, I want to boot the TDA2 EVM with NFS. I follow the document, VisionSDK_LinuxUserGuide.pdf, and do the following steps: setenv bootargs 'console=ttyO0,115200n8 vram=16M root=/dev/nfs rw nfsroot=172.24.170.60:/datalocal/user/abc/vision_sdk/linux/targetfs rootwait ip=dhcp mem=1024M' setenv fdt_high 0x84000000 setenv bootcmd 'load mmc 0 0x825f0000 dra7-evm-infoadas.dtb;load mmc 0 0x80300000 zImage;bootz 0x80300000 - 0x825f0000' save But it shows an error message: Saving Environment to MMC... MMC init failed zImage, dra7-evm-infoadas.dtb, MLO and u-boot.img are already copied to the SD card. Is there anything I missed? My environment is TDA2, Linux + sysbios, Vision SDK v2.10. Thanks. Kevin Tsai
5
+
6
+ Responses:
7
+ Hi, Kevin, VSDK expert is notified for your questions. Regards, Mariya
8
+
9
+ Hi, I guess you have to set your NFS server address correctly; the one below is for a TI NFS server: nfsroot=172.24.170.60:/datalocal/user/abc/vision_sdk/linux/targetfs Pick the file uenv_nfs.txt, modify it with your NFS server address, rename it to uenv.txt and copy it to the SD card. Regards, Shiju
10
+
11
+ Hi Shiju, Thanks for your reply. I did change the NFS address and path when I typed the command in the console, but I forgot to change it when I typed this post. I am sorry for that. The error message mentions "MMC init failed", so I think something like the pinmux should be modified, but the document does not describe this. By the way, your suggestion is to modify uenv.txt directly instead of typing commands at U-Boot, right? I will try it. Thanks. Kevin Tsai
12
+
sample_embedding_folder2/0567692.txt ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2E: EDMA3 Emulator/Simulator for Desktop PC
2
+
3
+ Query Text:
4
+ Part Number: TDA2E Dear Experts, is there some kind of an emulator/simulator available for EDMA3 for Host PC, similar to EVE SW which can be compiled for Host PC? I am not aware of anything like that. Could You please confirm? Many thanks and best regards, ROGERG
5
+
6
+ Responses:
7
+ Hi, Your question has been forwarded to a customer support lead. Regards, Mariya
8
+
9
+ Hi Rogerg, We do not have standalone EDMA simulator. However, EVE simulator includes the internal EDMA module so you can simulate EDMA functionality with EVE simulator. Regards, Stanley
10
+
11
+ The dmautils, similar to those for EVE, are also provided on the DSP and can be used for host emulation. You can find the package at VISION_SDK_XX_XX_XX_XX\ti_components\algorithms_codecs\200.V.OD.C66X.xx.xx\dmautils Note that here OD is an example, which is the object detection algorithm. But the directory also exists under the other algorithm directories such as CLR, LD, SFM, etc. Thanks, With Regards, Pramod
12
+
sample_embedding_folder2/0571580.txt ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2SX: Link VLIB in visionSDK
2
+
3
+ Query Text:
4
+ Part Number: TDA2SX Other Parts Discussed in Thread: TDA2 Hi, Where can I include the VLIB libraries (common.lib, vlib.lib, vlib_cn.lib) to build on visionSDK2.10? I would like to know in which settings file to add these libraries. If I use VLIB functions in frameCopyAlgoLocalDma.c, the build has link errors as follows: # Linking into C:/VisionSDK/VISION_SDK_02_10_00_00/vision_sdk/binaries/vision_sdk/bin/tda2xx-evm/vision_sdk_c66xdsp_1_release.xe66... warning: creating output section ".sram_start" without a SECTIONS specification undefined first referenced symbol in file --------- ---------------- _sram_start C:\VisionSDK\VISION_SDK_02_10_00_00\ti_components\algorithms_codecs\vlib_c66x_3_3_0_3\packages\ti\vlib\lib\common.ae66<VLIB_memory.oe66> error: unresolved symbols remain error: errors encountered during linking; "C:/VisionSDK/VISION_SDK_02_10_00_00/vision_sdk/binaries/vision_sdk/bin/tda2xx-evm/vision_sdk_c66xdsp_1_release.xe66" not built gmake[6]: *** [C:/VisionSDK/VISION_SDK_02_10_00_00/vision_sdk/binaries/vision_sdk/bin/tda2xx-evm/vision_sdk_c66xdsp_1_release.xe66] Error 1 gmake[5]: *** [c66xdsp_1] Error 2 gmake[4]: *** [apps_dsp1] Error 2 gmake[3]: *** [apps] Error 2 gmake[2]: *** [apps] Error 2 gmake[1]: *** [vision_sdk_apps] Error 2 gmake: *** [vision_sdk] Error 2 Thanks, Kenshow
5
+
6
+ Responses:
7
+ Hi, Kenshow, Your question has been forwarded to VSDK experts with a copy to VLIB expert. They will commnet directly here. Regards, Mariya
8
+
9
+ Hi you can list these libs in either \vision_sdk\build\makerules\rules_66.mk (if its some kernel libs) or \vision_sdk\examples\MAKEFILE.MK (if they are application specific libs) regards, Shiju
10
+
11
+ Hi Shiju, The default file of makerules_66.mk has already set the lib path. Also, should I add the path in \vision_sdk\examples\MAKEFILE.MK ? [vision_sdk\build\makerules\rules_66.mk] (Line-223): LIB_PATHS += $(vlib_PATH)/packages/ti/vlib/lib/vlib.ae66 (Line-224): LIB_PATHS += $(vlib_PATH)/packages/ti/vlib/lib/vlib_cn.ae66 (Line-225): LIB_PATHS += $(vlib_PATH)/packages/ti/vlib/lib/common.lib In this case, I just added vlib function into a file, C:\VISION_SDK_02_10_00_00\vision_sdk\examples\tda2xx\src\alg_plugins\framecopy\frameCopyAlgoLocalDma.c Should I set the lib path any more? and which file? Regards, Kenshow
12
+
13
+ Kenshow no, you do not need to add this in any other files. One more thing is, include the VLIB header file that defines these functions in framecopy\frameCopyAlgoLocalDma.c regards, Shiju
14
+
15
+ Hi Shiji, I had already defined the header. And, it seems link error. my AlgoLocalDma.c file is follows: --------------------------- : : #include <ti/vlib/src/common/VLIB_test.h> #include <ti/vlib/src/common/VLIB_memory.h> #include <ti/vlib/src/common/VLIB_profile.h> Int32 Alg_FrameCopyProcess(Alg_FrameCopy_Obj *algHandle, UInt32 *inPtr[], UInt32 *outPtr[], UInt32 width, UInt32 height, UInt32 inPitch[], UInt32 outPitch[], UInt32 dataFormat, Uint32 copyMode ) { Int32 rowIdx; Int32 colIdx; UInt32 *inputPtr; UInt32 *outputPtr; UInt32 numPlanes; UInt32 wordWidth; UInt32 lineSizeInBytes; UInt32 opt; uint16_t tccStatus; Alg_FrameCopyDma_Obj * pAlgHandle; pAlgHandle = (Alg_FrameCopyDma_Obj *)algHandle; // TEST VLIB LINK VLIB_cache_init(); : : --------------------------- Regards, Kenshow
16
+
17
+ The common.lib is intended to test VLIB in stand-alone, bare metal DSP environment. When you integrate VLIB into an application in VSDK, you don't need common.lib or the functions associated with them. For example, VLIB_cache_init() shouldn't be used if the VSDK has already initialized the cache, and VLIB_malloc() should not be used as this is just the testbench way of allocating memory. In your example above, you probably don't need VLIB_cache_init() or the 3 common header files you included. Is there some reason you feel that you need these?
18
+
sample_embedding_folder2/0572635.txt ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2SX: Is it impossible to boot from the SD card with NDK on Vayu board?
2
+
3
+ Query Text:
4
+ Part Number: TDA2SX Other Parts Discussed in Thread: TDA2 Hi, I am using a Vayu board with visionSDK2.8. On line 617 of the file C:\VISION_SDK_02_08_00_00\vision_sdk\Rules.make is written: # When NDK is enabled, FATFS can not be used due to MMCSD conflict Does it mean that it is impossible to boot from the SD card with NDK on the Vayu board? Regards, Kenshow
5
+
6
+ Responses:
7
+ Hi kenshow, Vision SDK experts have been notified to comment here. Thanks, Alex
8
+
9
+ Hello Kenshow, This only applies to TDA3xx due to pin mux conflicts for using port1. But if it is necessary on TDA3xx as well we can enable FATFS and NDK together with the help of daughter board. For TDA2/2ex you should be able to use both FATFS and NDK together.
10
+
sample_embedding_folder2/0573203.txt ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: Cannot run use cases while TDA2 boot from eMMC
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi, If I boot TDA2 from SD card and run use cases, everything is OK. Now I boot it from eMMC and run the same code, there is something wrong with use cases. ASSERT (system_ipc.c|System_ipcInit|63) [IPU1-0] 8.081711 s: ***** IPU1_0 Firmware build time 13:26:09 Feb 9 2017 [IPU1-0] 8.081894 s: *** SYSTEM: CPU Frequency <ORG = 212800000 Hz>, <NEW = 212800000 Hz> [IPU1-0] 8.201244 s: SYSTEM: System Common Init in progress !!! [IPU1-0] 8.202128 s: SYSTEM: IPC init in progress !!! [IPU1-0] 8.202372 s: SYSTEM: Attaching to [IPU2] ... [HOST ] 11.728485 s: SYSTEM: System A15 Init in progress !!! [HOST ] 11.728546 s: SYSTEM: IPC: Init in progress !!! [HOST ] 11.728546 s: SYSTEM: IPC: Notify init in progress !!! [HOST ] 11.730804 s: SYSTEM: IPC: [IPU1-0] socket bind failed (Invalid argument, 22) !!! [HOST ] 11.730834 s: SYSTEM: IPC: [IPU1-0] Notify RX channel create failed (endpoint = 81) !!! [DSP1 ] 8.184133 s: ***** DSP1 Firmware build time 09:58:42 Jan 9 2017 [DSP1 ] 8.184194 s: *** SYSTEM: CPU Frequency <ORG = 600000000 Hz>, <NEW = 700000000 Hz> [DSP1 ] 8.200817 s: SYSTEM: System Common Init in progress !!! [DSP1 ] 8.201030 s: SYSTEM: IPC init in progress !!! [DSP1 ] 8.201091 s: SYSTEM: Attaching to [IPU1-0] ... [DSP1 ] 9.200817 s: SYSTEM: Attaching to [IPU1-0] ... [DSP1 ] 10.200847 s: SYSTEM: Attaching to [IPU1-0] ... [DSP1 ] 11.200878 s: SYSTEM: Attaching to [IPU1-0] ... [EVE1 ] 8.767248 s: ***** EVE Firmware build time 09:58:43 Jan 9 2017 [EVE1 ] 8.768834 s: *** SYSTEM: CPU Frequency <ORG = 267500000 Hz>, <NEW = 267500000 Hz> [EVE1 ] 8.771426 s: SYSTEM: System Common Init in progress !!! [EVE1 ] 8.774019 s: SYSTEM: IPC init in progress !!! [EVE1 ] 8.776123 s: SYSTEM: Attaching to [IPU1-0] ... [EVE2 ] 8.767370 s: ***** EVE Firmware build time 09:58:44 Jan 9 2017 [EVE2 ] 8.768956 s: *** SYSTEM: CPU Frequency <ORG = 267500000 Hz>, <NEW = 267500000 Hz> [EVE2 ] 8.771518 s: SYSTEM: System Common Init in progress !!! [EVE2 ] 8.774171 s: SYSTEM: IPC init in progress !!! [EVE2 ] 8.776276 s: SYSTEM: Attaching to [IPU1-0] ... [IPU2 ] 8.730464 s: [IPU2 ] EVE1 Image Load Completed [IPU2 ] 8.754376 s: [IPU2 ] EVE2 Image Load Completed [IPU2 ] 8.754498 s: [IPU2 ] EVE MMU configuration completed [IPU2 ] 8.754559 s: [IPU2 ] EVE MMU configuration completed [IPU2 ] 8.754651 s: ***** IPU2 Firmware build time 13:26:10 Feb 9 2017 [IPU2 ] 8.754773 s: *** SYSTEM: CPU Frequency <ORG = 212800000 Hz>, <NEW = 212800000 Hz> [IPU2 ] 8.756786 s: [IPU2 ] 8.756877 s: ### XDC ASSERT - ERROR CALLBACK START ### [IPU2 ] 8.756938 s: [IPU2 ] 8.757213 s: assertion failure [IPU2 ] 8.757274 s: [IPU2 ] 8.757304 s: ### XDC ASSERT - ERROR CALLBACK END ### [IPU2 ] 8.757365 s: [DSP1 ] 12.200908 s: SYSTEM: Attaching to [IPU1-0] ... [DSP1 ] 13.200939 s: SYSTEM: Attaching to [IPU1-0] ... [DSP1 ] 14.200939 s: SYSTEM: Attaching to [IPU1-0] ... [DSP1 ] 15.200969 s: SYSTEM: Attaching to [IPU1-0] ... [DSP1 ] 16.201000 s: SYSTEM: Attaching to [IPU1-0] ... Message will show "SYSTEM: Attaching to [IPU1-0]" continuously. BTW, the memory size is also different between the two booting mode. Booting from SD card: root@dra7xx-evm:~# free total used free shared buffers Mem: 634348 163636 470712 0 10260 -/+ buffers: 153376 480972 Swap: 0 0 0 Booting from eMMC: root@dra7xx-evm:~# free total used free shared buffers Mem: 2046540 143512 1903028 0 2892 -/+ buffers: 140620 1905920 Swap: 0 0 0 What is the different between SD card booting mode and eMMC booting mode? How can I run use cases while boot from eMMC? 
I am working on TDA2xx with VSDK v2.10. A15 OS is Linux. Thanks, Kevin Tsai
5
+
6
+ Responses:
7
+ Hi, Kevin, Your question has been forwarded to VSDK expert. Regards, Mariya
8
+
9
+ Ravi, can you please respond? Regards, Shiju
10
+
11
+ Hi Kevin, There should be no difference in memory or in running usecases whether you are using SD or eMMC. This could be related either to the filesystem or to the boot parameters. Are you using pre-built binaries or compiled ones? Please share the boot parameters. You can print the boot environment in U-Boot: stop at the U-Boot prompt (when it says "hit any key to stop autoboot") and use the command "printenv". Also, did you copy the uenv.txt file to the boot partition? If you modified uenv.txt, please share it. Regards, RK
12
+
13
+ Hi Ravikumar, I made a copy of uenv.txt and renamed it uenv-emmc.txt. Now I can run the usecase. Thanks for the reminder. Kevin
14
+
15
+ Hi Kevin, Thanks for updating the status of the thread. I will mark it as closed, if you have any other issues, you can write here. Regards, Yordan
16
+
sample_embedding_folder2/0574868.txt ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: Linux/TDA2E: multiple video nodes possible on usb webcam(g_webcam)?
2
+
3
+ Query Text:
4
+ Part Number: TDA2E Other Parts Discussed in Thread: TDA2 Tool/software: Linux Hi Guys, Scenario: I have multiple video sources with different nodes, say /dev/video0...n, and I'm using USB webcam gadget mode. 1-> Is it possible to assign multiple video nodes to a single USB port, or do we have to unload the webcam driver to assign a different node every time? 2-> What about switching between video nodes: is it possible, and how much time can it take to switch between video nodes? Regards, Ganesh
5
+
6
+ Responses:
7
+ Hi Ganseh, I have forwarded your question to the expert. Regards, Yordan
8
+
9
+ Hi Yordan Kamenov, Is there any update? Regards, Ganesh
10
+
11
+ Hi Ganesh, I have pinged them. Regards, Yordan
12
+
13
+ Hello, Just to clarify: this question is completely related to V4L2 and USB; it does not deal with the capture interfaces on the TDA2 platform. Now to answer your question, the V4L2 USB driver would register video devices for as many webcams as are connected. The device numbers change only if you disconnect and reconnect cameras. You can always find the right device by path under /dev/v4l/by-path. This way, you can identify a specific device without worrying about the order of probing. I hope this helps. Regards, Nikhil D
14
+
15
+ Hi Nikhil Devshatwar, V4L2 USB driver would register video devices for as many webcams connected.
16
+ The device numbers change only if you disconnect and reconnect cameras. What you are describing is for USB host mode, right? What I'm asking about is a USB device in UVC gadget mode. Just to clarify: I'm using VISION_SDK_02_12 with kernel 4.4. I have a TDA2Ex EVM board, and on that board two cameras are mounted on two CSI buses. Now I want webcam gadget mode, i.e. the EVM in device mode. As there are two cameras, two device nodes will be created (say /dev/videoX & /dev/videoY) when we load the camera driver. So to make the EVM a webcam gadget we load usb_f_uvc.ko; it will take only one video device node, right? 1-> How can we make it possible to assign multiple camera outputs to a single USB port by using usb_f_uvc (configfs), so that on the host side I can access multiple video device nodes? 2-> If not multiple video device nodes on a single USB port, then one camera output at a time? Switching using configfs? But then how can we assign a different camera output every time?
17
+
18
+ Hello, your question is quite confusing. First, the usb_f_uvc driver does not USE an existing V4L2 CAPTURE device. It rather registers a NEW V4L2 OUTPUT device. This means the application is supposed to dump buffers into it, which are internally sent over USB as a webcam gadget. The V4L2 device related to the USB port is an OUTPUT device. It is completely unrelated to the V4L2 capture devices registered by the CAL/CSI drivers. You need to write an application which takes data from a V4L2 capture device (you can choose video1 or video2 from the CSI device) and then feeds it into the V4L2 output device (video3 registered by the USB driver). The application will have full control to decide and switch between the capture devices. What you want is a USB gadget driver which somehow uses an existing V4L2 driver to send data over USB; this sounds like application code, so I am not sure if this even exists as a kernel driver. A rough user-space sketch of such a bridge is given below. I hope I clarified your concerns. Regards, Nikhil D
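+ For reference, a skeleton of that user-space bridge could look like the following. Device paths, resolution and pixel format are examples, all error handling is omitted, and whether the gadget side accepts USERPTR buffers depends on the driver; if it does not, request MMAP buffers on the output side and memcpy the frame data instead.
+ #include <fcntl.h>
+ #include <string.h>
+ #include <sys/ioctl.h>
+ #include <sys/mman.h>
+ #include <linux/videodev2.h>
+
+ #define NBUF 4
+
+ int main(void)
+ {
+     int cap = open("/dev/video1", O_RDWR);  /* CSI capture node (example) */
+     int out = open("/dev/video3", O_RDWR);  /* UVC gadget output node     */
+
+     /* Same format on both ends (example: 1280x720 YUYV). */
+     struct v4l2_format fmt;
+     memset(&fmt, 0, sizeof(fmt));
+     fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+     fmt.fmt.pix.width = 1280;
+     fmt.fmt.pix.height = 720;
+     fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
+     fmt.fmt.pix.field = V4L2_FIELD_NONE;
+     ioctl(cap, VIDIOC_S_FMT, &fmt);
+     fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+     ioctl(out, VIDIOC_S_FMT, &fmt);
+
+     /* mmap'ed capture buffers, later handed to the output side as USERPTR. */
+     struct v4l2_requestbuffers req;
+     memset(&req, 0, sizeof(req));
+     req.count = NBUF;
+     req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+     req.memory = V4L2_MEMORY_MMAP;
+     ioctl(cap, VIDIOC_REQBUFS, &req);
+
+     void *addr[NBUF];
+     unsigned int len[NBUF], i;
+     for (i = 0; i < NBUF; i++) {
+         struct v4l2_buffer b;
+         memset(&b, 0, sizeof(b));
+         b.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+         b.memory = V4L2_MEMORY_MMAP;
+         b.index = i;
+         ioctl(cap, VIDIOC_QUERYBUF, &b);
+         addr[i] = mmap(NULL, b.length, PROT_READ | PROT_WRITE, MAP_SHARED,
+                        cap, b.m.offset);
+         len[i] = b.length;
+         ioctl(cap, VIDIOC_QBUF, &b);        /* hand it to the capture driver */
+     }
+
+     memset(&req, 0, sizeof(req));
+     req.count = NBUF;
+     req.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+     req.memory = V4L2_MEMORY_USERPTR;
+     ioctl(out, VIDIOC_REQBUFS, &req);
+
+     int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+     ioctl(cap, VIDIOC_STREAMON, &type);
+     int started = 0;
+
+     for (;;) {
+         struct v4l2_buffer b;
+         memset(&b, 0, sizeof(b));
+         b.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+         b.memory = V4L2_MEMORY_MMAP;
+         ioctl(cap, VIDIOC_DQBUF, &b);       /* filled camera frame           */
+
+         struct v4l2_buffer o;
+         memset(&o, 0, sizeof(o));
+         o.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+         o.memory = V4L2_MEMORY_USERPTR;
+         o.index = b.index;
+         o.m.userptr = (unsigned long)addr[b.index];
+         o.length = len[b.index];
+         o.bytesused = b.bytesused;
+         ioctl(out, VIDIOC_QBUF, &o);        /* push the frame to the gadget  */
+         if (!started) {                     /* start output once queued      */
+             type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+             ioctl(out, VIDIOC_STREAMON, &type);
+             started = 1;
+         }
+
+         /* Switching cameras simply means dequeuing from a different
+          * capture fd in this loop -- the decision lives in the application. */
+
+         ioctl(out, VIDIOC_DQBUF, &o);       /* wait until the gadget is done */
+         ioctl(cap, VIDIOC_QBUF, &b);        /* recycle the capture buffer    */
+     }
+ }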
19
+
20
+ Hi Nikhil Devshatwar, Thanks for your reply, I just needed that clarification. I was looking for any mechanism available at the kernel side to switch; with your answer it is clarified that there is no such mechanism at the kernel side. uvc-gadget is a test application which can be used for testing a UVC gadget webcam with a single V4L2 capture fed to a V4L2 output. Regards, Ganesh
21
+
sample_embedding_folder2/0581287.txt ADDED
@@ -0,0 +1,14 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: How to boot TDA2 from eMMC whether SD card is inserted or not?
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hi, I boot the TDA2 EVM from eMMC. In the file "uenv-emmc.txt", I have to define "root". But something confuses me. If an SD card is inserted while booting, it will be enumerated as /dev/mmcblk0 and the eMMC will be enumerated as /dev/mmcblk1. If no SD card is inserted while booting, the eMMC will be enumerated as /dev/mmcblk0. How do I define "root" if I want to boot from eMMC whether an SD card is inserted or not? Or can I fix the eMMC as /dev/mmcblk0? And how do I do that? Thanks, Kevin Tsai
5
+
6
+ Responses:
7
+ Hi Kevin, I have forwarded your question to VisionSDK expert. Regards, Yordan
8
+
9
+ Kevin, Please provide the details of the kernel release version you are using. In the Linux kernel, eMMC is enumerated as mmcblk0 and the SD card as mmcblk1. Regards Ravi
10
+
11
+ Hi Ravi, The following is my Linux kernel version: Linux dra7xx-evm 3.14.63-00013-gcb5f01e-dirty #11 SMP PREEMPT Wed Mar 15 11:40:54 CST 2017 armv7l GNU/Linux BTW, where can I verify the enumeration of eMMC? Thanks, Kevin
12
+
13
+ Kevin, Whether a device becomes mmcblk0 or mmcblk1 depends on the order in which the devices are enumerated and a valid device is found. Another option is to use a UUID (universally unique identifier); refer to the TI release and check the U-Boot scripts, in particular the environment variable args_mmc, which sets up the UUID for the specific MMC boot partition. # part uuid mmc 0:2 uuid # run args_mmc # pri boot_args A hedged example is sketched below. Regards Ravi
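+ As an illustration only (the variable names mirror the stock DRA7xx U-Boot environment pattern; check your own printenv output and adjust the U-Boot mmc device/partition numbers before relying on it), a uenv.txt along these lines pins the rootfs by partition UUID instead of by mmcblk index:
+ # Sketch, not taken from the TI release -- verify against your board's environment.
+ finduuid=part uuid mmc 1:2 uuid
+ args_mmc=run finduuid; setenv bootargs console=ttyO0,115200n8 vram=16M root=PARTUUID=${uuid} rw rootwait mem=1024M
+ uenvcmd=run args_mmc; load mmc 1:1 0x825f0000 dra7-evm-infoadas.dtb; load mmc 1:1 0x80300000 zImage; bootz 0x80300000 - 0x825f0000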
14
+
sample_embedding_folder2/0583648.txt ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2HA-17: Q: Should the power consumption for TDA2x ES2.0 and ES1.1 be equivalent or 'the same' assuming identical configuration?
2
+
3
+ Query Text:
4
+ Part Number: TDA2HA-17 Question: Should the power consumption for TDA2x ES2.0 and ES1.1 be equivalent or 'the same' assuming identical configuration? Alternatively, should a TI EVM with a TDA2x ES2.0 device have the same or similar power consumption as a TI EVM with a TDA2x ES1.1 device? Alternatively, should a customer target board with a TDA2x ES2.0 device have the same or similar power consumption as a customer target board with a TDA2x ES1.1 device? Does TI have available a spreadsheet that calculates power consumption for a set of used components and IP? Is there a version for TDA2x ES2.0 and TDA ES1.1? Or are they available separately? Many thanks!
5
+
6
+ Responses:
7
+ Hi Jason, I have forwarded the question to an expert to help. Regards, Yordan
8
+
9
+ Jason, Yordan, Generally speaking, the power consumption for TDA2x ES2.0 and ES1.1 are the same. There are die to die differences that exist due to normal manufacturing variations. Regards Kyle
10
+
sample_embedding_folder2/0584994.txt ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2E: Power Rail current requirements for TDA2+ and TDA2E
2
+
3
+ Query Text:
4
+ Part Number: TDA2E Other Parts Discussed in Thread: TDA2 Is there any documentation that I can reference that shows the current requirements for each rail for TDA2+ and TDA2E?
5
+
6
+ Responses:
7
+ Hi Julio, Current requirements will depend on many factors. I'm not sure we have study for those SoCs. But I'm sure the results would be very similar to these: processors.wiki.ti.com/.../AM57xx_Power_Consumption_Summary Regards, Stan
8
+
9
+ Stan, Yes this is what i was looking for. Just a power consumption summary for typical use cases with TDA nothing too specific. Best, Julio
10
+
sample_embedding_folder2/0585063.txt ADDED
@@ -0,0 +1,14 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2SX: Data logging on TDA...best/most developed method?
2
+
3
+ Query Text:
4
+ Part Number: TDA2SX Other Parts Discussed in Thread: TDA2 Hi Team, Regarding algorithm validation via data logging on TDA2 series of parts, which is the recommended/typical interface and why? (i.e. ethernet, straight to SD card, USB to PC)? Do we have any software tools on the PC side already developed to receive video data? I'm aware of example Ethernet usecase which sends one frame at a time, but wondering if we have something more advanced. Best,
5
+
6
+ Responses:
7
+ Hi Lina, I have forwarded your question to VisionSDK experts to comment. Regards, Yordan
8
+
9
+ Hello Lina, In the Vision SDK we have a 'null' link which can be used for data logging. Currently it supports Ethernet, SD card and memory write. The choice of interface depends on the throughput requirement of the use-case. Ethernet supports higher bandwidth (up to 600 Mbps TCP/IP) whereas SD (~10 Mbps) and memory are limited. If you want to log only a few frames you can use the SD card, but for large video Ethernet is recommended. You can use open-source tools like ffmpeg, yuvplayer etc. for analyzing the post-processed data.
10
+
11
+ Hi Prasad, Thanks for the information. What about USB? Have you seen that implementation with a PC? Best, Lina
12
+
13
+ Hello Lina, USB/PCIE are not supported with current implementation of null link.
14
+
sample_embedding_folder2/0586455.txt ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2SX: TDA2Eco and TDA2x be pin-compatible
2
+
3
+ Query Text:
4
+ Part Number: TDA2SX Other Parts Discussed in Thread: TDA2 Hi, We have plans to create a TDA2x board for AVM applications, and the board will be shared between TDA2x and TDA2Eco. Are TDA2Eco and TDA2x pin-compatible? Regards, JP Park
5
+
6
+ Responses:
7
+ Hi JP Park, TDA2x and TDA2Eco are pin-for-pin compatible, you can check the note in the bottom of the table here: www.ti.com/.../overview.page Regards, Yordan
8
+
9
+ Hi, I want to add that the note can be somewhat misleading. The two devices are NOT pin-to-pin equivalent, but it IS possible that PCB is designed so it can accommodate the one or the other SoC. For example, TDA2 has no CSI2 interface, that is, those pins are acting as other interface pins (I think VIN pins).
10
+
11
+ Hi, Thank you for your quick reply. the problem is solved thanks to you. Regards, JP Park
12
+
sample_embedding_folder2/0587995.txt ADDED
@@ -0,0 +1,14 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: RTOS/TDA2E: NDK LLDP and SNMP support
2
+
3
+ Query Text:
4
+ Part Number: TDA2E Other Parts Discussed in Thread: TDA2, CC3200 Tool/software: TI-RTOS Hello Experts, One of our customers is looking for LLDP support with TDA2 devices. Does TI NDK support LLDP and SNMP? If not is there any plans to add that support? Thanks!
5
+
6
+ Responses:
7
+ Hi Prasad, TI does not have SNMP support, but we've partnered with InterNiche who does supply it. We even have an example on processors.wiki.ti.com, but for the TM4C device (still with the NDK). We basically supply just the binary to prove it works. processors.wiki.ti.com/.../TI-RTOS_SNMP We don't have source code since that needs to be worked out with InterNiche. InterNiche's contact information is on the examples page. Todd
8
+
9
+ Hello Todd, Thanks for your reply. Looks like SNTP offering from InterNiche is based on UDP and not LLDP. I will ask my customer to check SNMP offering from InterNiche. By any chance is there any plans to add LLDP support in NDK?
10
+
11
+ No current plans.
12
+
13
+ Hello Prasad, DMH Software offers SNMP-Agent implementation for the TI-RTOS SimpleLink platform. We built and tested the SNMP-Agent for TI-RTOS on the CC3200 Dev. Board. Please see more information here: www.dmhsoftware.com/ti-simplelink-iot-platform Please contact DMH for more information: [email protected] We will be happy to provide an evaluation SDK. Yigal Hochberg DMH Software
14
+
sample_embedding_folder2/0590092.txt ADDED
@@ -0,0 +1,14 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2SX: TDA2+ ACD Package information?
2
+
3
+ Query Text:
4
+ Part Number: TDA2SX Other Parts Discussed in Thread: TDA2 Where can I find information regarding the packaging of the TDA2 plus ACD package?
5
+
6
+ Responses:
7
+ Hi Julio, I wasn't able to find an ACD package for TDA2x. Did you mean ABC?
8
+
9
+ Hi Stanislav, Sure, can you point me to that information.
10
+
11
+ Hi, Julio, The information is in chapter 10 Mechanical Packaging and Orderable Information in Data Manual. You can download the data manual for the appropriate silicon revision from: Regards, Mariya
12
+
13
+ Mariya Petkova, Thank you for the information, i found out my question was on an unreleased device. Thank you, Julio
14
+
sample_embedding_folder2/0590369.txt ADDED
@@ -0,0 +1,32 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: AM5718: 13MP camera on MIPI interface
2
+
3
+ Query Text:
4
+ Part Number: AM5718 Other Parts Discussed in Thread: TDA2 Hello, I have read the documentation for the AM571x processor but I have the below query. I have checked the "AM571x Industrial Development Kit" User's Guide, but this development kit does not support a MIPI CSI-2 camera interface; it only has provision for a parallel camera interface. Is there any other development kit which supports USB 3.0 and a camera MIPI interface? Can we interface a 13MP camera on the MIPI CSI-2 interface of the AM571x processor? Please provide us support as soon as possible. Thanks & Best Regards.
5
+
6
+ Responses:
7
+ Hi, 1. No, there is no other EVM for AM571x. 2. The AM571x CSI interface is described in section 8 of the AM571x TRM Rev. E.
8
+
9
+ Thanks for your reply. 2. I have checked the AM571x TRM Rev. E document, but there is no description of the camera support (how many megapixels). I mean, is this processor capable of interfacing a 13MP camera?
10
+
11
+ What is the video format of your camera - resolution, frames per second?
12
+
13
+ Thanks for your reply. We don't require video streaming. We capture the raw image data (8 bits/pixel) from the camera. We have some queries as below. Is it possible to get the raw image data line by line instead of the whole image at once? Is there any evaluation board which supports USB 3.0 and a camera (MIPI) interface? Thanks & Best Regards;
14
+
15
+ Gentle Reminder.. Thanks & Best Regards;
16
+
17
+ Sorry about this delay. I have escalated this to the factory team.
18
+
19
+ Gentle Reminder.. Thanks & Best Regards;
20
+
21
+ Hello: Regarding USB3 camera support: The USB driver in the SDK may support standard Linux USB3 camera modules, though there are no examples in the SDK. You may refer to www.ti.com/.../tidep0076 where a PointGrey ( https://www.ptgrey.com/) camera module was used with an AM57 EVM. You may also want to check whether the camera vendor supports a UVC driver or not. Regarding 13MP support over CSI-2: Please confirm the spec of the camera module's CSI-2 interface, with respect to the required clock speed and number of lanes. This will allow us to confirm if the CSI-2 PHY is compatible with the camera. In the meantime, I am confirming the maximum line width of our internal buffers to ensure it can support a 4k-pixel line width. Regards, Jian
22
+
23
+ Hi Jian, Thanks for your quick answer. We are not concerned with a USB 3.0 camera. We are asking whether there is any evaluation kit which supports USB 3.0 (USB core, host+device) and a camera MIPI CSI-2 interface. Best Regards; Nikunj Patel
24
+
25
+ gentle reminder. Thanks & Best Regards; Nikunj Patel
26
+
27
+ Hello everyone; I want to download the "ti-processor-sdk-linux-automotive-omap5-uevm-6.04.00.02-installer.bin" sdk for Omap5432 evm board. Please provide me the link for that. Thanks & Best Regards; Nikunj Patel
28
+
29
+ Hello everyone; Gentle reminder. Thanks & Best Regards; Nikunj Patel.
30
+
31
+ Hi Nikunj, Sorry for keeping you waiting on this thread. Unfortunately no one on the team supporting these forums is familiar with the OMAP5 SDK etc., and we will likely not be able to support any queries on it. To your original query on a TI EVM supporting MIPI CSI: unfortunately, currently none of the AM57x family EVMs have support for this. You can look at the following post e2e.ti.com/.../2262820 You can search for the TDA2 family evaluation boards, available from Spectrum Digital. Hope this helps some. Regards Mukul
32
+
sample_embedding_folder2/0597423.txt ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2EXEVM: How to divide the input and ouput image into several blocks for reaching 30fps
2
+
3
+ Query Text:
4
+ Part Number: TDA2EXEVM Other Parts Discussed in Thread: TDA2 Hi, In a surround view camera system, we tried to load our own LUT and divide the input and output images into blocks according to the reference manual, and we use DMA to transmit the data, but the frame rate is less than 30 fps. We have some questions to ask. Q1: does the output image in DDR also need to be transferred with DMA from L2 SRAM? Q2: how should we divide the image: should the 880*1080 image be divided into small slices of 10*18, or into larger slices to ensure the width of each block is long enough?
5
+
6
+ Responses:
7
+ Hi, I have forwarded your question to an imaging expert. Regards, Yordan
8
+
9
+ Can you please confirm your device (TDA2 or TDA3?) and SW version (BIOS or Linux)? Please mention Vision SDK version number.
10
+
sample_embedding_folder2/0599158.txt ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2HV: How to interpolate inter video frames from 30 fps to 60 fps
2
+
3
+ Query Text:
4
+ Part Number: TDA2HV Dear experts, is there a way to interpolate (reconstruct or calculate) inter frames, from 30 fps to 60 fps? Similar to the way as of h.264 encoding (b-frames), by bidirectional motion estimation. Could IVAHD being (re)used for this? Or could you think of any other valid approach? Note: We do not just want to double the frames.
5
+
6
+ Responses:
7
+ Hi Ewald, I have forwarded your question to an expert to comment. Regards, Yordan
8
+
9
+ This kind of interpolation can be done by a custom algorithm. IVA-HD does not support this since it is not a part of any codec specification. But you may be able to develop an algorithm on the DSP or EVE.
10
+
sample_embedding_folder2/0599794.txt ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: RTOS: SYS/BIOS: TDA2xx platform missing
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Tool/software: TI-RTOS Hello, I have tried multiple versions of SYS/BIOS and I could not find a TDA2xx platform among the offered ones. However, there is a TDA3xx one, as can be seen in this image: Is there any version of SYS/BIOS where I can find the TDA2xx platform? If not, where can I acquire it, or which equivalent platform can I use? Thank you.
5
+
6
+ Responses:
7
+ Hello Nick, I am not a SYS/BIOS expert; however, an equivalent platform that you may try is AM572x. Thanks, Alex
8
+
9
+ Hello Alex, Thank you for the suggestion, I'll give it a try.
10
+
11
+ Nick, SYS/BIOS platform information for TDA2 and TDA3 is not included in CCS releases today. For TDA2 or TDA3 development you could start with Vision SDK (under NDA). In this case please contact your local TI representative.
12
+
sample_embedding_folder2/0600601.txt ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2P CTT - When will it be available?
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Team, Please advise when the TDA2P Clock Tree Tool (CTT) will be released. Can the TDA2 CTT be used (i.e. what are the differences in the clock tree)? Best,
5
+
6
+ Responses:
7
+ Hi, Lina, The TDA2Px CTT will be released in mid-July in the Auto Package. In general, the differences in the clock tree between TDA2Px and TDA2x will only be in the supported modules of the devices: modules with their clocks are added or removed. You can compare the Data Manuals to figure out the differences. For modules that exist in both device families, I think that for now you can use the TDA2x CTT. Also, there is a released Sitara Plus CTT under NDA; if you want, I can send it to you in a private mail. Regards, Mariya
8
+
9
+ Hi Mariya, I'm not familiar with the Sitara Plus. Will the clock tree tool be the same for both? If so, please send via email. Best,
10
+
11
+ Hi, The CTT was sent via mail. I will close the thread. Regards, Mariya
12
+
sample_embedding_folder2/0604515.txt ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2SX: VOUT spread spectrum feature
2
+
3
+ Query Text:
4
+ Part Number: TDA2SX Other Parts Discussed in Thread: TDA2 Hello Team, My customer asked whether the spread spectrum feature is supported on the VOUT1 port, due to an EMI issue at the VOUT1 PCLK frequency. In the TDA2 TRM (TDA2x_SR2.0_SR1.x_NDA_TRM_vAD.pdf), there is a register (PLL_SSC_CONFIGURATION1) which can enable the SSC feature on DPLL_VIDEO1, as below. On the other hand, it is noted that "SSC feature is not supported." The customer would like to know whether they can use the SSC feature on DPLL_VIDEO1 by enabling the "PLL_SSC_CONFIGURATION1[EN_SSC]" register bit. How would it work when EN_SSC is enabled?
5
+
6
+ Responses:
7
+ Hi Lloyd, SSC was not tested for one reason or another. Your customer may try it and use it if everything is fine, but TI will not be able to support them. Regards, Stan
8
+
sample_embedding_folder2/0605559.txt ADDED
@@ -0,0 +1,22 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: Linux/DRA744: RGMII-ID (Internal delay) setting....
2
+
3
+ Query Text:
4
+ Part Number: DRA744 Other Parts Discussed in Thread: TDA2 Tool/software: Linux Hi, I use the RGMII interface to connect to a Marvell Ethernet switch (88E6390). I enabled the RGMII delay setting (rgmii-id), but it does not seem to work. Below is my device tree setting... &cpsw_emac0 { phy_id = <&davinci_mdio>, <0>; phy-mode = "rgmii-id"; dual_emac_res_vlan = <1>; fixed-link = <1 1000 0 0>; }; How can I ensure the RGMII-ID setting is correct?
5
+
6
+ Responses:
7
+ Hi Shawn, which is the version of your SDK/Linux kernel? Regards, Yordan
8
+
9
+ GLSDK 7.04.03. Linux dra7xx-evm 3.14.63
10
+
11
+ Hi Shawn, I have forwarded your question to an ethernet expert. Regards, Yordan
12
+
13
+ Shawn, until the Linux Ethernet expert helps you, can you please refer to http://www.ti.com/lit/an/snla243/snla243.pdf for the RGMII delay settings?
14
+
15
+ I found in the RGMII feature description that only SR2.0 can enable or disable the internal TXC delay, and I got the chip revision with the devmem2 command: 0x1B99002F --> SR 1.1. Does that mean I cannot use the RGMII-ID setting?
16
+
17
+ RGMII-ID is not supported on SR1.1. The internal TXC delays are always enabled on this revision, and the remote side/PHY needs to take care of not reapplying the delay on the Rx lines, as the TDA2 has already applied it.
18
+
19
+ The TXC delay is always enabled, but there is still no delay on my board. The image below shows the J6 RGMII TXC and TXD0 with nothing connected.
20
+
21
+ Could you check the DRA7x Ethernet statistics registers? Are there any error bits set? Also check on the PHY side for any alignment/CRC issues.
22
+
sample_embedding_folder2/0608451.txt ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2x (ADAS) Ethernet problems
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Dear support, I am trying to get Ethernet working on a TDA2 SoC with a DP83848Q PHY. I have reused the Ethernet driver from VISION_SDK_02_08_00_00; I think this driver should run on the Vayu board. On my board I got receive and transmit working with this driver, but I have noticed that packets are lost in both directions (receive and transmit). On my board only D0 and D1 are connected to the DP83848Q for transmit and receive. The connection is direct, with a 22-Ohm series resistor, and not with a multiplexer in between as on the Vayu board. For the TDA2 SoC I have reused the pad configuration from the SDK. I have verified that the 50 MHz clock for the DP83848Q and TDA2 (RMII0_MHZ_50_CLK) is fine, and the reset and power supply of the DP83848Q are stable. Do you have any idea what I could try, or what could cause the packet loss on my setup? For testing I would like to try to set the PHY to loopback mode; how do we have to set up the GMACSW_Config structure to enable loopback mode in the PHY? Best regards, Erwin
5
+
6
+ Responses:
7
+ Hi Erwin, First, do you have all connections in place as described in TRM Figure 24-178, RMII Interface Typical Application? Also, did you set up the CPSW for RMII mode? And did you check that the PHY is discoverable via MDIO? Regards, Stan
8
+
9
+ Hi Stan, Yes, all the connections are there; RMII mode is set up and the PHY is discoverable. When connecting to a PC via a crossed-over cable, the link is established. Data transfer works in both directions, but packets are lost in both directions. Best regards, Erwin
10
+
11
+ Ok, can you post your strap pin configuration as defined in the PHY datasheet, section 3.8 Strap Options? You said you use a crossed-over cable; did you try with a straight cable? Both should be ok, but just in case. Regarding the '22-ohm termination': where did this requirement come from? I could only find a 50-ohm recommendation in the datasheet. But first you may want to capture the waveforms and compare them against Section 7, RMII Interface Timing Requirements. Regards, Stan
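+ As a side note on the PHY loopback question from the original query: the generic IEEE 802.3 Clause 22 way to put a PHY such as the DP83848 into loopback is to set bit 14 of register 0 (BMCR) over MDIO. The sketch below only illustrates that register write; the mdio_read()/mdio_write() helpers are hypothetical placeholders for whatever MDIO access routine your driver exposes, and this is not the GMACSW_Config field the query asked about.
+ #include <stdint.h>
+
+ #define PHY_BMCR       0x00u        /* Basic Mode Control Register (Clause 22) */
+ #define BMCR_LOOPBACK  (1u << 14)   /* digital loopback enable                 */
+
+ /* Hypothetical MDIO accessors; replace with your driver's MDIO API. */
+ extern uint16_t mdio_read(uint8_t phy_addr, uint8_t reg);
+ extern void     mdio_write(uint8_t phy_addr, uint8_t reg, uint16_t val);
+
+ void phy_enable_loopback(uint8_t phy_addr)
+ {
+     uint16_t bmcr = mdio_read(phy_addr, PHY_BMCR);
+     /* In practice loopback is usually combined with forcing speed/duplex
+      * and disabling auto-negotiation; kept minimal here. */
+     mdio_write(phy_addr, PHY_BMCR, (uint16_t)(bmcr | BMCR_LOOPBACK));
+ }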
12
+
sample_embedding_folder2/0611062.txt ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: time measurement for each frame in EVE
2
+
3
+ Query Text:
4
+ Other Parts Discussed in Thread: TDA2 Hello, I have to integrate the EVE sparse optical flow into our project on TDA2. I ran the sparse optical flow test bench. The test video has 5 frames. In the console, I get the TSC cycles and SCTM VCOP BUSY cycles for every frame. I would like to calculate the time consumed to perform the optical flow on every frame. Can you please tell me how to compute the time for each frame? Jagan
5
+
6
+ Responses:
7
+ Hi Jagan, I have forwarded your question to the EVE experts to comment. Regards, Yordan
8
+
9
+ Hi Jagan, Once you have the TSC cycles per frame (in terms of VCOP cycles) you can convert them to time by simply dividing the cycles by the frequency at which the EVE (VCOP) is running (typically this value is 500 MHz). Regards, Anshu
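+ As a worked example of that conversion (a minimal sketch; the 500 MHz value is the typical EVE/VCOP clock mentioned above and should be replaced by your actual device configuration, and the per-frame cycle count is a hypothetical value):
+ #include <stdio.h>
+ #include <stdint.h>
+
+ #define EVE_CLOCK_HZ 500000000ULL   /* typical EVE (VCOP) clock: 500 MHz */
+
+ /* Convert a per-frame TSC cycle count into milliseconds. */
+ static double cycles_to_ms(uint64_t tsc_cycles)
+ {
+     return (double)tsc_cycles * 1000.0 / (double)EVE_CLOCK_HZ;
+ }
+
+ int main(void)
+ {
+     uint64_t cycles_per_frame = 8000000ULL;   /* hypothetical value read from the console log */
+     printf("Frame time: %.3f ms (%.1f fps)\n",
+            cycles_to_ms(cycles_per_frame),
+            1000.0 / cycles_to_ms(cycles_per_frame));
+     return 0;
+ }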
10
+
sample_embedding_folder2/0614700.txt ADDED
@@ -0,0 +1,36 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: Linux/TDA2EVM5777: JTAG debugging and Linux running?
2
+
3
+ Query Text:
4
+ Part Number: TDA2EVM5777 Other Parts Discussed in Thread: TDA2 Tool/software: Linux Hello! My name is Marco and I'm trying to develop a TDA2-based project in the following constellation: My development platform is the XC5777X CPU board with TDA2. I'm using the latest Vision SDK, the latest CCS and an XDS200 debugger (firmware upgrade done). Linux is running on both A15 cores; the other cores have to be programmed bare metal and, of course, have to be debugged over JTAG. So my first question: if I have Linux running, how can I connect to the other cores over the JTAG chain? I cannot start any GEL script or other CCS functionality on the "Linuxed" CPU, but I can connect to it. I have to release the cores (M4 and C66x) and make them connectable using CCS with the XDS200 debugger... and I don't know how to do this. Can anybody help me? Second question: I've downloaded the Vision SDK and was a little bit surprised: everything works fine as long as you stay within the use cases of the pre-developed TI components. But we have to get the last bit of performance out of the M4 and DSP, so these have to be coded bare metal. There's no real guidance on how to use a GNU compiler and build startup code for each internal CPU from scratch. The only way shown is using TI SYS/BIOS, RTOS and the build system based on "use cases", "algorithms"... which do not fit our requirements. Are there any other SDKs or examples available which show a starting point for bare-metal programming of the TDA2+? (M4, DSP, PRUs, etc. bare metal; Linux on the A15 already works great.) TNX - Marco.
5
+
6
+ Responses:
7
+ Hi Marco, Which version of VisionSDK you have? Regards, Yordan
8
+
9
+ I have vision SDK on BIOS 2.12.02....
10
+
11
+ Hi Marco, for your second question you can check if Starterware works for you. It is a software development package that provides no-OS platform support for ARM and DSP processors. It is located in "...\VisionSDK_2_12\ti_components\drivers\starterware_01_07_01_20\" directory and has comprehensive docs and examples. For the JTAG question I will ping an expert to comment. Regards, Yordan
12
+
13
+ Thank you very much: I will take a look and give it a try. Marco
14
+
15
+ Marco Reppenhagen said: If I have Linux running, how can I connect the other cores over JTAG chain? You will need to install the TDA2x device support (located in the auto dev package) in your CCS. See here. Then you will need to target connect to the A15. When connected successfully, you will have to run the TDA2xx_MULTICORE_EnableAllCores() GEL from the GEL menu. If the GEL is executed successfully, you will then be able to target connect to the M4 and DSP. Thanks, Alex
16
+
17
+ Alex Bashkov: Of course I have installed the TDA2x support. I'm also able to connect to the A15 with CCS over the XDS200, and I can run M4 and DSP code after calling the GEL script. That is not what I'm looking for, because if Linux runs on the A15 cores, I'm not able to run any GEL script on them... So my question is: how can I connect to the cores over JTAG when Linux uses both A15 cores? I have to debug my code in the running system, which will be Linux on the main cores and bare-metal code on the subcores. I cannot "connect" to the cores in the CCS JTAG chain, even though they are already running (firmware successfully loaded via remoteproc in Linux). I cannot run any GEL script after Linux has been booted on the A15 cores. Sadly, I have no idea how to fix this...
18
+
19
+ Marco Reppenhagen, Why are you not able to run any GEL script on the A15 cores? Do you receive an error if you try to run the DSP-enable GEL, or do you just not want to disturb Linux? Basically, as far as I know, for a standalone DSP app you will need to use ARM-based CCS GEL scripts to take the DSP out of reset, and to do this you need to connect to the ARM. But if you don't want to disturb Linux, then you will need to disable the ARM-based GEL scripts from performing the "on target connect" functionality. Basically, you don't want the ARM running any GEL scripts *except* the one to take the DSP out of reset. Once the DSP is out of reset, you can connect to the DSP and load/run/debug your app. Thanks, Alex
20
+
21
+ Thank you: I will try to disable the "on target connect" functionality and will be back with a report tomorrow.
22
+
23
+ Hello! I'm back again after trying many approaches, but I'm still not able to connect to the M4 subcore while Linux is running. I cannot execute the GEL script which brings up the M4 cores; the following error occurred: IPU1SSClkEnable_API() cannot be evaluated. Target failed to read 0x4AE06514 at (*((unsigned int *) ((cpu_num==1) ? (((0x4AE00000+0x6000)+0x500)+0x14) : (((0x4AE00000+0x6000)+0x700)+0x214)))&0x4) [TDA2xx_multicore_reset.gel:373] at IPUSSClkEnable(1) [TDA2xx_multicore_reset.gel:311] at IPU1SSClkEnable_API() If I bring up the M4 (IPU1@58820000) in Linux via "remoteproc" (loading a pre-compiled xem4 as "dra7-ipu1-fw.xem4"), dmesg shows me: [ 2162.133520] remoteproc0: releasing 58820000.ipu [ 2165.749631] omap-rproc 58820000.ipu: assigned reserved memory node ipu1_cma@9d000000 [ 2165.749686] remoteproc0: 58820000.ipu is available [ 2165.749694] remoteproc0: Note: remoteproc is still under development and considere. [ 2165.749702] remoteproc0: THE BINARY FORMAT IS NOT YET FINALIZED, and backward comp. [ 2165.882920] remoteproc0: powering up 58820000.ipu [ 2165.882937] remoteproc0: Booting fw image dra7-ipu1-fw.xem4, size 4870616 [ 2165.883060] omap-iommu 58882000.mmu: 58882000.mmu: version 2.1 [ 2165.889434] remoteproc0: remote processor 58820000.ipu is now up [ 2165.889784] virtio_rpmsg_bus virtio0: rpmsg host is online [ 2165.890812] remoteproc0: registered virtio0 (type 7) I think the IPU should be connectable over JTAG now... I tried IPU1/0, IPU1/1, and the other M4s IPU2/0 and IPU2/1... with the following result: As mentioned, IPU2 (both cores) is still held in reset. This is OK... because it has not been touched. BUT -> IPU1 (I tried both cores): the IPU1 core I've just started is not connectable in the JTAG chain: "Cannot access the DAP"... Why? It should be running... it is out of reset and set up, so a program should be running...? (I have no other processor in development whose cores can't be connected; if they have been set up successfully and are running, I can connect to them over the JTAG chain... there must be something missing here....) OK - I tried to override the Linux setup on the M4 and started a new debug session in CCS. In detail I got the following error message: Error connecting to the target: (Error -1170 @ 0x0) Unable to access the DAP. Reset the device, and retry the operation. If error persists, confirm configuration, power-cycle the board, and/or try more reliable JTAG settings (e.g. lower TCLK). (Emulation package 7.0.48.0) Do you have any suggestions on how to cope with this problem? THANK YOU - Marco.
24
+
25
+ Hello again Marco Reppenhagen , Let me investigate your results thoroughly and try something out on my side. Will get back to you soon. Thanks Alex
26
+
27
+ Thank you... I stand by for any kind of suggestions :-)
28
+
29
+ As posted in another ticket: I've got another problem using "remoteproc" to run firmware on the subcores: the Linux used in the latest (?) Vision SDK is quite old and does not support the control interface... more precisely, it does not generate the "/sys/class/remoteproc/" directory. I installed the latest mainline kernel, which generates this interface, but of course without any entries: the dra7xx dts does not deal with the subcores. The A15 main CPU works fine, but I sadly did not find any support for the IPU and/or DSP. My idea was to control the cores with Linux and be sure they are running while I try to connect over the JTAG chain, as mentioned above in this thread. But this is temporarily not possible... Either I have to backport the mainline-kernel remoteproc code to the old 4.4 kernel used in the SDK, or I have to implement support for the IPU and DSP in the mainline kernel. Before I try this on my own, I would like to ask whether there is any kernel ready to use with the TDA2 subcores AND a fully functional remoteproc interface... Thank you: Marco
30
+
31
+ OK... I found a solution: patchwork.kernel.org/.../ After patching I've got what I need.
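+ With that interface in place, driving a subcore from user space boils down to two sysfs writes. A minimal sketch (the instance index remoteproc0 and the firmware name are assumptions for illustration; they depend on the device tree and on what is installed under /lib/firmware):
+ #include <stdio.h>
+
+ /* Write a single string value to a sysfs attribute. */
+ static int write_sysfs(const char *path, const char *value)
+ {
+     FILE *f = fopen(path, "w");
+     if (!f) { perror(path); return -1; }
+     int rc = (fputs(value, f) == EOF) ? -1 : 0;
+     fclose(f);
+     return rc;
+ }
+
+ int main(void)
+ {
+     /* Select the firmware image (resolved under /lib/firmware). */
+     write_sysfs("/sys/class/remoteproc/remoteproc0/firmware", "dra7-ipu1-fw.xem4");
+     /* Boot the remote core; writing "stop" to the same file shuts it down. */
+     return write_sysfs("/sys/class/remoteproc/remoteproc0/state", "start");
+ }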
32
+
33
+ OK... Something new -> short FYI: I'm now able to build "something" like a firmware which I can load into the M4 core. At the moment the startup code is not working properly and the small program is not running... but the core changes its state from "offline" to "running" after loading the ELF into the M4 via remoteproc... and after doing this, I'm able to connect to the core over the XDS200 using CCS and a standard-ccxml target configuration. (Later I will try to use this to "upload" some code via JTAG into the M4... (lack of time at the moment...)) CU Marco.
34
+
35
+ Now I've got a solution. Something went wrong building the ELF... placing the IVT via a "#pragma", which I had to replace with the corresponding "__XXX__" GCC directives. Now I am able to build Cortex-M4 code using the arm-linux-gnueabi-* toolchain in a very small build environment and load it as firmware onto the M4s of the TDA2... it's running fine! Thank you for your support.
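+ For readers hitting the same build issue, a minimal sketch of the kind of change described above (placing a Cortex-M4 vector table with a GCC section attribute instead of a TI #pragma) is shown below. The section name ".isr_vector", the _estack symbol and the handler names are assumptions and must match your own linker script and startup code:
+ #include <stdint.h>
+
+ extern uint32_t _estack;        /* top of stack, defined in the linker script */
+ void Reset_Handler(void);
+ void Default_Handler(void);
+
+ /* Pin the vector table into a dedicated output section so the linker
+  * script can place it at the M4 boot address. */
+ __attribute__((section(".isr_vector"), used))
+ void * const g_vector_table[] = {
+     (void *)&_estack,           /* initial stack pointer */
+     (void *)Reset_Handler,      /* reset vector          */
+     (void *)Default_Handler,    /* NMI                   */
+     (void *)Default_Handler,    /* HardFault             */
+     /* ... remaining exception and IRQ vectors ... */
+ };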
36
+
sample_embedding_folder2/0614776.txt ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ Ticket Name: TDA2HG: OpenVX
2
+
3
+ Query Text:
4
+ Part Number: TDA2HG Other Parts Discussed in Thread: TDA2 Which accelerators does the OpenVX implementation support? I know that your demos won’t be moved over to OpenVX, but how stable is the OpenVX implementation if we wanted to implement our algorithms with OpenVX on the TDA2 family?
5
+
6
+ Responses:
7
+ Aaron, Thanks for your questions. 1. Supported targets: As of today, all of the OpenVX kernels are implemented on the C66 DSPs, so they are certainly supported as available targets. Connectivity to user kernels on the EVE target has been verified using a TI extension "Harris Corners" kernel. The ARM M4s and A15s are also supported targets. 2. OpenVX is currently in beta for the TDA2x family. What we mean by this is that it passes the OpenVX conformance tests, as well as additional robustness tests, but it hasn't been internally tested in non-trivial algorithm demos/applications yet. We intend to do this internally by the time we make our end-of-year release, and in the meantime we also intend to continue to add features (such as graph pipelining) and support bug fixes on the TDA2x platform. Algorithms which could benefit from graph pipelining will function, but not as optimally as will be the case when graph pipelining is supported. Adding custom kernels can be done by following the pattern from the existing OpenVX kernels, and one of the tutorials shows how to do this. We are also planning on adding scripts and specific app notes in the next few months to facilitate this effort. Please let me know if you have any more questions. Jesse
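+ To make the above concrete, a minimal sketch of an OpenVX graph that pins one node to a C66 DSP is shown below. The target string "DSP1" is an assumption based on typical TI target naming; check the release documentation for the exact target names supported by the TDA2x beta.
+ #include <VX/vx.h>
+ #include <stdio.h>
+
+ int main(void)
+ {
+     vx_context ctx = vxCreateContext();
+     vx_graph graph = vxCreateGraph(ctx);
+     vx_image in  = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
+     vx_image out = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
+
+     /* One standard kernel, routed to a C66x DSP instead of the host. */
+     vx_node node = vxGaussian3x3Node(graph, in, out);
+     vxSetNodeTarget(node, VX_TARGET_STRING, "DSP1");
+
+     if (vxVerifyGraph(graph) == VX_SUCCESS)
+         vxProcessGraph(graph);
+     else
+         printf("Graph verification failed\n");
+
+     vxReleaseNode(&node);
+     vxReleaseImage(&in);
+     vxReleaseImage(&out);
+     vxReleaseGraph(&graph);
+     vxReleaseContext(&ctx);
+     return 0;
+ }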
8
+