🌺 ¿butterflies? 🌸#4118:https://images.anandtech.com/galleries/8261/AMD%20Ryzen%207040U%20Slide%20Deck%206_575px.PNG
🌺 ¿butterflies? 🌸#4118:AMD explicitly said dual issue on their slide
cha0s#0085:In specific scenarios
cha0s#0085:But generally it’s 4.15
TitanicFreak#4722:idk what your point is anymore LH
🌺 ¿butterflies? 🌸#4118:by that standard they should have said 16 TFLOPS FP16
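(For context, a rough sketch of where the 4.15 and "16 TFLOPS" figures in this exchange come from, assuming the top 7040U iGPU config, Radeon 780M with 12 RDNA3 CUs / 768 shaders, and an assumed ~2.7 GHz clock; the 2x dual-issue and 2x packed-FP16 factors are the ones being argued about, not numbers from the slide.)
```cpp
#include <cstdio>

int main() {
    // Assumed config: Radeon 780M iGPU, 12 CUs x 64 lanes = 768 shaders at ~2.7 GHz.
    const double shaders   = 768.0;
    const double clock_ghz = 2.7;

    const double fp32_tflops      = shaders * 2 /*FMA*/ * clock_ghz / 1000.0; // ~4.15
    const double fp32_dual_issue  = fp32_tflops * 2;      // VOPD dual issue     ~8.3
    const double fp16_packed_dual = fp32_dual_issue * 2;  // 2x packed FP16     ~16.6

    std::printf("FP32 %.2f, dual-issue FP32 %.2f, packed FP16 %.2f TFLOPS\n",
                fp32_tflops, fp32_dual_issue, fp16_packed_dual);
    return 0;
}
```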
bnieuwenhuizen#2820:on wave64 the dual issue is pretty general
dazer#4964:https://tenor.com/view/2001-a-space-odyssey-hal-gif-22905370
TitanicFreak#4722:different teams different ideas of what to advertise?
🌺 ¿butterflies? 🌸#4118:which is why this whole thing makes no sense
bnieuwenhuizen#2820:while the double fp16 is only dot2
Jiray#3048:WMMA instructions are not exposed outside of ROCm.
🌺 ¿butterflies? 🌸#4118:No. Not true.
cha0s#0085:Is FP16 even dual issue capable?
bnieuwenhuizen#2820:dot2 is though and is also in VOPD?
Mohamexiety#7230:well aktually...
Mohamexiety#7230:they're exposed in vulkan
🌺 ¿butterflies? 🌸#4118:VK_NV_cooperative_matrix
Mohamexiety#7230:through an NV extension!
TitanicFreak#4722:jiray with the 4 notifications
Mohamexiety#7230:yep :kek:
bnieuwenhuizen#2820:they're not exposed yet?
🌺 ¿butterflies? 🌸#4118:they are
🌺 ¿butterflies? 🌸#4118:shipped in prod last month
bnieuwenhuizen#2820:AMD implemented that?
🌺 ¿butterflies? 🌸#4118:yes
bnieuwenhuizen#2820:cool
🌺 ¿butterflies? 🌸#4118:stable diffusion on RDNA3 works through that
bnieuwenhuizen#2820:guess I don't have to worry about the rounding corner cases then while implementing for radv :LeoKek:
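(Aside: a minimal sketch of how an application would check what a driver actually advertises through VK_NV_cooperative_matrix; it assumes a Vulkan 1.1 loader and an ICD that exposes the extension, such as the AMD driver discussed above.)
```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Create a bare instance; no extensions are needed just to query
    // physical-device-level properties.
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t devCount = 0;
    vkEnumeratePhysicalDevices(instance, &devCount, nullptr);
    std::vector<VkPhysicalDevice> devs(devCount);
    vkEnumeratePhysicalDevices(instance, &devCount, devs.data());

    // The entry point comes from a device extension, so load it dynamically.
    auto pfn = (PFN_vkGetPhysicalDeviceCooperativeMatrixPropertiesNV)
        vkGetInstanceProcAddr(instance, "vkGetPhysicalDeviceCooperativeMatrixPropertiesNV");
    if (!pfn) { std::printf("extension entry point not available\n"); return 1; }

    for (VkPhysicalDevice dev : devs) {
        uint32_t count = 0;
        pfn(dev, &count, nullptr);
        std::vector<VkCooperativeMatrixPropertiesNV> props(
            count, {VK_STRUCTURE_TYPE_COOPERATIVE_MATRIX_PROPERTIES_NV});
        pfn(dev, &count, props.data());
        // Each entry is one supported MxNxK shape plus the A/B/C/D component types.
        for (const auto& p : props)
            std::printf("MxNxK %ux%ux%u  A=%d B=%d C=%d D=%d scope=%d\n",
                        p.MSize, p.NSize, p.KSize, p.AType, p.BType,
                        p.CType, p.DType, p.scope);
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```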
Jiray#3048:"Oh yeah, these guys:
V_DUAL_DOT2ACC_F32_F16"
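(For readers unfamiliar with the mnemonic: per lane, a dot2-accumulate op computes roughly the following; this is an illustrative scalar sketch of the semantics, not the hardware's packed-FP16 datapath.)
```cpp
#include <cstdio>

// Stand-in for a packed pair of FP16 values (the hardware operates on
// two halves packed into one 32-bit register).
struct half2 { float x, y; };

// acc += a.x*b.x + a.y*b.y : two multiplies and two adds folded into one
// instruction, which is how "double rate FP16" is counted for dot-product code.
float dot2acc(half2 a, half2 b, float acc) {
    return acc + a.x * b.x + a.y * b.y;
}

int main() {
    std::printf("%f\n", dot2acc({1.0f, 2.0f}, {3.0f, 4.0f}, 0.5f));  // 11.5
    return 0;
}
```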
🌺 ¿butterflies? 🌸#4118:they gave up on making it work through rocm (on windows) for the time being
TitanicFreak#4722:do LLMs even work on windows for radeon cards
TitanicFreak#4722:I tried like a week ago and got lost
Mohamexiety#7230:what surprised me tho is that it's an NV extension. I thought we'd have to wait for it to become a KHR extension 😮
Mohamexiety#7230:or AMD rolling their own
bnieuwenhuizen#2820:I mean I really need an extension on the extension
bnieuwenhuizen#2820:the NV extension doesn't specify int4 types
Mohamexiety#7230:"ye, the NV extension is a bit limited.."
bnieuwenhuizen#2820:which is exactly what llama.cpp uses :kek:
Mohamexiety#7230:you don't get the other fun ML datatypes too
bnieuwenhuizen#2820:"btw I really hate the dot8_IU4 instruction. You think you have nice 8 fmas in one, but it takes 4 cycles for some reason"
🌺 ¿butterflies? 🌸#4118:yes but it's bad
🌺 ¿butterflies? 🌸#4118:much faster than running on CPU
🌺 ¿butterflies? 🌸#4118:but still bad
dazer#4964:me too
bnieuwenhuizen#2820:for training probably
bnieuwenhuizen#2820:for inference I think LLAMA in practice is fully bandwidth bound
bnieuwenhuizen#2820:(LLMs on the generating side have the limitation you can only really do 1 token at a time. So only the prompt parsing has nice batching)
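(Back-of-the-envelope sketch of the bandwidth-bound argument: generating one token streams essentially every weight once, so token rate is capped by memory bandwidth over model size. The figures below are illustrative assumptions, not measurements.)
```cpp
#include <cstdio>

int main() {
    // Assumptions: ~7B parameters at ~4.5 bits/param (q4_0-style) => ~3.9 GB of weights,
    // and ~120 GB/s of usable memory bandwidth (roughly mid-range dGPU / LPDDR5 APU).
    const double weights_gb    = 3.9;
    const double bandwidth_gbs = 120.0;

    // One generated token reads essentially all weights once, so this is the ceiling
    // regardless of how much compute (dual issue, dot2, WMMA) the GPU has.
    std::printf("~%.0f tokens/s upper bound\n", bandwidth_gbs / weights_gb);
    return 0;
}
```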
🌺 ¿butterflies? 🌸#4118:"nah, for inference it's dead slow"
🌺 ¿butterflies? 🌸#4118:directml issues
bnieuwenhuizen#2820:hmm
🌺 ¿butterflies? 🌸#4118:I don't think that anybody tried running LLMs through vulkan on those yet
bnieuwenhuizen#2820:no idea
bnieuwenhuizen#2820:I've seen llama.vk
🌺 ¿butterflies? 🌸#4118:"```I have done a simple benchmark of ResNetRS50 on an RTX 3080Ti, comparing DirectML plugin 0.1.1.dev221004 and CUDA 11.8 + CUDNN 8.6.0, and found that DML is very slow compared to CUDA, and uses only about 50% of GPU while training, while CUDA constantly uses 100%. Both tests were conducted with mixed precision off and batch size of 64.
Training 10 epochs on DML took 416 seconds, while on CUDA took only 164 seconds. Both on TF 2.10 (CPU for DML) and Python 3.9.13.
```"
🌺 ¿butterflies? 🌸#4118:lol
🌺 ¿butterflies? 🌸#4118:at some of the bug reports
🌺 ¿butterflies? 🌸#4118:meanwhile llama.cpp implemented OpenCL now
🌺 ¿butterflies? 🌸#4118:instead of vulkan
bnieuwenhuizen#2820:yeah I don't think the guy that wrote llama.vk even tried upstreaming
bnieuwenhuizen#2820:just did it as weekend project
Dakhil#7655:https://twitter.com/RyanSmithAT/status/1653872191082205186?t=4Gj_LRJO2Qz0zGYLqwporA&s=19
🌺 ¿butterflies? 🌸#4118:AMD OpenCL runtime supports inline asm
🌺 ¿butterflies? 🌸#4118:so might be worth going that route 😬
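(A minimal sketch of what that route would look like, assuming the clang-style extended inline-asm syntax AMD's OpenCL compiler accepts and using a trivial v_mov_b32 as a stand-in for the dot/WMMA instructions one would actually want to hand-place.)
```cpp
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <cstdio>

// OpenCL C kernel containing a (trivial) piece of inline GCN assembly; "v"
// constraints map operands to VGPRs in the AMDGPU backend.
static const char* kSrc = R"CLC(
__kernel void copy(__global const float* in, __global float* out) {
    size_t i = get_global_id(0);
    float v = in[i];
    float r;
    __asm__ volatile("v_mov_b32 %0, %1" : "=v"(r) : "v"(v));
    out[i] = r;
}
)CLC";

int main() {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    // Whether the inline asm builds depends entirely on the vendor compiler.
    cl_int err = clBuildProgram(prog, 1, &dev, "", nullptr, nullptr);
    std::printf("build %s\n", err == CL_SUCCESS ? "ok" : "failed");
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
```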
bnieuwenhuizen#2820:surprised to hear this from you given all the hammering on ROCm not having an IR 🙃
🌺 ¿butterflies? 🌸#4118:I'm not saying it's a good option
🌺 ¿butterflies? 🌸#4118:I'm saying that if you want things to actually _work_
🌺 ¿butterflies? 🌸#4118:on those GPUs
🌺 ¿butterflies? 🌸#4118:https://github.com/ggerganov/llama.cpp/pull/1087/files
🌺 ¿butterflies? 🌸#4118:ROCm support pull request for llama.cpp is fun
🌺 ¿butterflies? 🌸#4118:just lol
bnieuwenhuizen#2820:"overall I'm also kinda meh on llama.cpp, like it is very cool, but ultimately mostly limited to inference on this single network"
🌺 ¿butterflies? 🌸#4118:no code changes except that .h file and some header ifdefs
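(Illustrative sketch of the "header ifdefs" approach being described: map the CUDA runtime names onto their HIP equivalents so the same source compiles with hipcc. These particular defines are an assumption about the pattern, not the actual contents of that PR.)
```cpp
// cuda_compat.h (hypothetical name)
#pragma once

#if defined(USE_HIP)
#include <hip/hip_runtime.h>
// Alias the CUDA runtime API onto HIP; the rest of the code keeps using cuda* names.
#define cudaError_t             hipError_t
#define cudaSuccess             hipSuccess
#define cudaMalloc              hipMalloc
#define cudaFree                hipFree
#define cudaMemcpy              hipMemcpy
#define cudaMemcpyHostToDevice  hipMemcpyHostToDevice
#define cudaMemcpyDeviceToHost  hipMemcpyDeviceToHost
#define cudaStream_t            hipStream_t
#define cudaStreamCreate        hipStreamCreate
#define cudaDeviceSynchronize   hipDeviceSynchronize
#else
#include <cuda_runtime.h>
#endif
```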
bnieuwenhuizen#2820:so mostly just a nice demo vehicle
dgb#6466:can they use hipify instead?
🌺 ¿butterflies? 🌸#4118:perf matters 😛
🌺 ¿butterflies? 🌸#4118:it's a nice experimentation vehicle
🌺 ¿butterflies? 🌸#4118:hipify is not worth using
dgb#6466:oh oof
bnieuwenhuizen#2820:yeah definitely been nice to use for experimentation
🌺 ¿butterflies? 🌸#4118:"you want your source tree to stay CUDA first, so committing the hipify results to the repo doesn't make sense"
🌺 ¿butterflies? 🌸#4118:"(and something that nobody, not even ardent AMD fans, do)"
🌺 ¿butterflies? 🌸#4118:even if AMD would love for that to be the case
FCLC#4504:`sed`
🌺 ¿butterflies? 🌸#4118:lol
🌺 ¿butterflies? 🌸#4118:AMD chose perl
🌺 ¿butterflies? 🌸#4118:https://github.com/ROCm-Developer-Tools/HIPIFY/blob/amd-staging/bin/hipify-perl
Dakhil#7655:https://pulsenews.co.kr/view.php?year=2023&no=339956
Cmoney#5173:https://tenor.com/view/oh-really-o-rly-owl-gif-7905924
Cmoney#5173:going to be honest, never used substack because everyone I know uses it for work. Since I am allergic to work, it makes it easier not to use it.
🌺 ¿butterflies? 🌸#4118:@Cheesecake16 https://twitter.com/HPC_Guru/status/1653855490068336641
Jiray#3048:"So, remember that kerfuffle about nVidia's ML optimized drivers?
https://discord.com/channels/787513876027408407/1073732662902132748/1103463866744180756"
Jiray#3048:Turns out AMD has been doing this for a while.