Dataset columns (type and observed range):

| Column | Type | Range / notes |
|---|---|---|
| id | int64 | 28 to 222k |
| title | string | 14 to 255 chars |
| fancy_title | string | 14 to 347 chars |
| slug | string | 3 to 251 chars |
| posts_count | int64 | 1 to 243 |
| reply_count | int64 | 0 to 179 |
| highest_post_number | int64 | 1 to 253 |
| image_url | string | 100 to 118 chars, nullable |
| created_at | string (date) | 2017-01-18 18:36:20 to 2025-07-24 13:06:12 |
| last_posted_at | string (date) | 2017-01-26 20:40:45 to 2025-07-24 13:06:11, nullable |
| bumped | bool | 1 class |
| bumped_at | string (date) | 2017-01-26 20:50:09 to 2025-07-24 13:06:12 |
| archetype | string | 1 value |
| unseen | bool | 1 class |
| pinned | bool | 2 classes |
| unpinned | null | always null |
| visible | bool | 1 class |
| closed | bool | 1 class |
| archived | bool | 1 class |
| bookmarked | null | always null |
| liked | null | always null |
| tags_descriptions | null | always null |
| views | int64 | 3 to 840k |
| like_count | int64 | 0 to 580 |
| has_summary | bool | 2 classes |
| last_poster_username | string | 3 to 20 chars |
| category_id | int64 | 1 to 44 |
| pinned_globally | bool | 1 class |
| featured_link | string | 56 distinct values |
| has_accepted_answer | bool | 2 classes |
| posters | list | 1 to 5 entries |
| featured_link_root_domain | string | 12 distinct values |
| unicode_title | string | 217 distinct values |

The sample rows below are pipe-separated in the column order above; `posters` is a JSON list of objects with `description`, `extras`, `flair_group_id`, `primary_group_id`, and `user_id`.
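Once the rows are parsed into records matching this schema, they are straightforward to query. The following minimal sketch uses two records copied (with most columns omitted for brevity) from the sample rows below; the `topics` list and the `original_poster_id` helper are illustrative only and not part of the dataset itself.

```python
# Minimal sketch: querying records that follow the schema above.
# Only a handful of columns are kept; values are taken from the sample rows.

topics = [
    {
        "id": 218099,
        "title": "Torch.xpu.synchronize() hangs, 2.6.0 on intel core ultra 185h",
        "views": 65,
        "has_accepted_answer": False,
        "posters": [
            {"description": "Original Poster, Most Recent Poster",
             "extras": "latest single", "user_id": 18088},
        ],
    },
    {
        "id": 210001,
        "title": "Quantization fails for custom backend",
        "views": 230,
        "has_accepted_answer": False,
        "posters": [
            {"description": "Original Poster", "extras": None, "user_id": 79107},
            {"description": "Frequent Poster", "extras": None, "user_id": 80928},
            {"description": "Most Recent Poster", "extras": "latest", "user_id": 21770},
        ],
    },
]

def original_poster_id(topic):
    """Return the user_id whose description marks them as the Original Poster."""
    for poster in topic["posters"]:
        if "Original Poster" in poster["description"]:
            return poster["user_id"]
    return None

# Rank topics by view count and show who opened each one.
for t in sorted(topics, key=lambda t: t["views"], reverse=True):
    print(t["id"], t["title"], "OP:", original_poster_id(t))
```

The same pattern extends to the other columns, e.g. filtering on `has_accepted_answer` or `category_id` before sorting.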
218,099 |
Torch.xpu.synchronize() hangs, 2.6.0 on intel core ultra 185h
|
Torch.xpu.synchronize() hangs, 2.6.0 on intel core ultra 185h
|
torch-xpu-synchronize-hangs-2-6-0-on-intel-core-ultra-185h
| 1 | 0 | 1 | null |
2025-03-21T02:46:21.850Z
|
2025-03-21T02:46:21.890Z
| true |
2025-03-21T02:46:21.890Z
|
regular
| false | false | null | true | false | false | null | null | null | 65 | 0 | false |
KFrank
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18088
}
] | null | null |
210,001 |
Quantization fails for custom backend
|
Quantization fails for custom backend
|
quantization-fails-for-custom-backend
| 4 | 2 | 6 | null |
2024-09-24T13:20:08.278Z
|
2025-03-21T00:45:36.057Z
| true |
2025-03-21T00:45:36.057Z
|
regular
| false | false | null | true | false | false | null | null | null | 230 | 2 | false |
jerryzh168
| 17 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 79107
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 80928
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 21770
}
] | null | null |
218,088 |
Multi gpu training on server
|
Multi gpu training on server
|
multi-gpu-training-on-server
| 3 | 1 | 3 | null |
2025-03-20T19:00:14.162Z
|
2025-03-20T23:46:41.477Z
| true |
2025-03-20T23:46:41.477Z
|
regular
| false | false | null | true | false | false | null | null | null | 175 | 0 | false |
Marcelo_Sena
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83390
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
218,096 |
Batch size at inference is influencing accuracy
|
Batch size at inference is influencing accuracy
|
batch-size-at-inference-is-influencing-accuracy
| 1 | 0 | 1 | null |
2025-03-20T21:02:58.000Z
|
2025-03-20T21:02:58.052Z
| true |
2025-03-20T21:02:58.052Z
|
regular
| false | false | null | true | false | false | null | null | null | 32 | 0 | false |
danbull-scanabull
| 5 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83393
}
] | null | null |
218,086 |
Pip3 install torch locked on 2.2.2
|
Pip3 install torch locked on 2.2.2
|
pip3-install-torch-locked-on-2-2-2
| 2 | 0 | 2 | null |
2025-03-20T18:05:42.676Z
|
2025-03-20T20:25:54.096Z
| true |
2025-03-20T20:25:54.096Z
|
regular
| false | false | null | true | false | false | null | null | null | 195 | 0 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83389
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
218,079 |
Get softmax_lse value for sdpa kernel?
|
Get softmax_lse value for sdpa kernel?
|
get-softmax-lse-value-for-sdpa-kernel
| 1 | 0 | 1 | null |
2025-03-20T14:59:54.746Z
|
2025-03-20T14:59:54.784Z
| true |
2025-03-20T14:59:54.784Z
|
regular
| false | false | null | true | false | false | null | null | null | 56 | 0 | false |
barpitf
| 7 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83386
}
] | null | null |
216,479 |
Libtorch multi GPU training
|
Libtorch multi GPU training
|
libtorch-multi-gpu-training
| 2 | 0 | 2 | null |
2025-02-10T15:05:26.160Z
|
2025-03-20T14:22:30.431Z
| true |
2025-03-20T14:22:30.431Z
|
regular
| false | false | null | true | false | false | null | null | null | 95 | 0 | false |
kevin2004
| 11 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 81205
}
] | null | null |
217,983 |
Catch step function
|
Catch step function
|
catch-step-function
| 3 | 0 | 3 | null |
2025-03-18T10:20:16.514Z
|
2025-03-20T13:58:00.639Z
| true |
2025-03-20T13:58:00.639Z
|
regular
| false | false | null | true | false | false | null | null | null | 55 | 1 | false |
zannas
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83330
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18088
}
] | null | null |
218,025 |
Why doesn't the instrumented execution time match the time captured by nsys?
|
Why doesn’t the instrumented execution time match the time captured by nsys?
|
why-doesnt-the-instrumented-execution-time-match-the-time-captured-by-nsys
| 6 | 4 | 6 |
2025-03-19T13:41:48.926Z
|
2025-03-20T13:18:19.867Z
| true |
2025-03-20T13:18:19.867Z
|
regular
| false | false | null | true | false | false | null | null | null | 62 | 1 | false |
ptrblck
| 1 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83361
},
{
"description": "Most Recent Poster, Accepted Answer",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
|
218,072 |
An error regarding the 5070TI not supporting SM_120
|
An error regarding the 5070TI not supporting SM_120
|
an-error-regarding-the-5070ti-not-supporting-sm-120
| 2 | 0 | 2 | null |
2025-03-20T12:44:55.261Z
|
2025-03-20T13:14:57.535Z
| true |
2025-03-20T13:14:57.535Z
|
regular
| false | false | null | true | false | false | null | null | null | 2,592 | 0 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83383
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
218,074 |
Training Machine Learning Model In Browser For Reinforcement Learning
|
Training Machine Learning Model In Browser For Reinforcement Learning
|
training-machine-learning-model-in-browser-for-reinforcement-learning
| 1 | 0 | 1 | null |
2025-03-20T13:12:14.662Z
|
2025-03-20T13:12:14.696Z
| true |
2025-03-20T13:12:14.696Z
|
regular
| false | false | null | true | false | false | null | null | null | 63 | 0 | false |
Sliferslacker
| 6 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83382
}
] | null | null |
218,067 |
CMake error with ROCm: Could NOT find HIP: Found unsuitable version "0.0.0", but required is at least "1.0" (found /opt/rocm)
|
CMake error with ROCm: Could NOT find HIP: Found unsuitable version “0.0.0”, but required is at least “1.0” (found /opt/rocm)
|
cmake-error-with-rocm-could-not-find-hip-found-unsuitable-version-0-0-0-but-required-is-at-least-1-0-found-opt-rocm
| 1 | 0 | 1 | null |
2025-03-20T10:33:31.854Z
|
2025-03-20T10:33:31.902Z
| true |
2025-03-20T10:33:31.902Z
|
regular
| false | false | null | true | false | false | null | null | null | 28 | 0 | false |
chn-lee-yumi
| 14 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83381
}
] | null | null |
218,058 |
Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3, OpType=COALESCED
|
Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3, OpType=COALESCED
|
watchdog-caught-collective-operation-timeout-worknccl-seqnum-3-optype-coalesced
| 1 | 0 | 1 | null |
2025-03-20T08:11:00.468Z
|
2025-03-20T08:11:00.512Z
| true |
2025-03-20T08:13:45.782Z
|
regular
| false | false | null | true | false | false | null | null | null | 310 | 0 | false |
haocong_ma
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83376
}
] | null | null |
218,056 |
torch.cuda.is_available()=False;torch._C._cuda_getDeviceCount() > 0
|
torch.cuda.is_available()=False;torch._C._cuda_getDeviceCount() > 0
|
torch-cuda-is-available-false-torch-c-cuda-getdevicecount-0
| 1 | 0 | 1 | null |
2025-03-20T08:03:12.812Z
|
2025-03-20T08:03:12.854Z
| true |
2025-03-20T08:03:12.854Z
|
regular
| false | false | null | true | false | false | null | null | null | 126 | 0 | false |
Kid
| 41 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83377
}
] | null | null |
218,037 |
How does PyTorch's cross-entropy loss transform logits with a soft probability target vector?
|
How does PyTorch’s cross-entropy loss transform logits with a soft probability target vector?
|
how-does-pytorchs-cross-entropy-loss-transform-logits-with-a-soft-probability-target-vector
| 2 | 0 | 2 | null |
2025-03-19T20:18:01.440Z
|
2025-03-20T00:12:00.990Z
| true |
2025-03-20T00:12:00.990Z
|
regular
| false | false | null | true | false | false | null | null | null | 44 | 1 | false |
KFrank
| 1 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83365
},
{
"description": "Most Recent Poster, Accepted Answer",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18088
}
] | null | null |
218,042 |
Data loading time becomes unstable (jumping to high values) when dataset is large
|
Data loading time becomes unstable (jumping to high values) when dataset is large
|
data-loading-time-becomes-unstable-jumping-to-high-values-when-dataset-is-large
| 1 | 0 | 1 | null |
2025-03-19T22:19:46.883Z
|
2025-03-19T22:19:46.923Z
| true |
2025-03-19T22:19:46.923Z
|
regular
| false | false | null | true | false | false | null | null | null | 23 | 0 | false |
Sihe_Chen
| 37 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 40319
}
] | null | null |
218,039 |
Integrating kFold cross validation
|
Integrating kFold cross validation
|
integrating-kfold-cross-validation
| 1 | 0 | 1 | null |
2025-03-19T21:31:30.408Z
|
2025-03-19T21:31:30.450Z
| true |
2025-03-19T21:31:30.450Z
|
regular
| false | false | null | true | false | false | null | null | null | 28 | 0 | false |
amy2
| 5 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 68222
}
] | null | null |
218,038 |
ResNet 50 how to improve accuracy
|
ResNet 50 how to improve accuracy
|
resnet-50-how-to-improve-accuracy
| 1 | 0 | 1 | null |
2025-03-19T20:44:31.963Z
|
2025-03-19T20:44:32.002Z
| true |
2025-03-19T20:44:32.002Z
|
regular
| false | false | null | true | false | false | null | null | null | 52 | 0 | false |
Nadiia_Yeromina
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83333
}
] | null | null |
218,014 |
When I install torch==2.6.0 with whl/cu126, none of cuda dependencies get installed. I have cu126 in the environment already. Is this expected?
|
When I install torch==2.6.0 with whl/cu126, none of cuda dependencies get installed. I have cu126 in the environment already. Is this expected?
|
when-i-install-torch-2-6-0-with-whl-cu126-none-of-cuda-dependencies-get-installed-i-have-cu126-in-the-environment-already-is-this-expected
| 6 | 3 | 6 |
2025-03-19T07:50:20.096Z
|
2025-03-19T15:44:20.367Z
| true |
2025-03-19T15:44:20.367Z
|
regular
| false | false | null | true | false | false | null | null | null | 2,734 | 2 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83355
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83362
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
|
187,668 |
HuBERT Pre-training for the Second Iteration without Previous Checkpoints?
|
HuBERT Pre-training for the Second Iteration without Previous Checkpoints?
|
hubert-pre-training-for-the-second-iteration-without-previous-checkpoints
| 3 | 0 | 3 | null |
2023-09-04T06:08:32.975Z
|
2025-03-19T15:39:22.997Z
| true |
2025-03-19T15:39:22.997Z
|
regular
| false | false | null | true | false | false | null | null | null | 638 | 0 | false |
zhu00121
| 9 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 4418
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 4080
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83363
}
] | null | null |
218,006 |
Will a tensor still be in pinned memory which is stacked by two pinned memory
|
Will a tensor still be in pinned memory which is stacked by two pinned memory
|
will-a-tensor-still-be-in-pinned-memory-which-is-stacked-by-two-pinned-memory
| 2 | 0 | 2 | null |
2025-03-19T02:21:20.615Z
|
2025-03-19T15:36:03.413Z
| true |
2025-03-19T15:36:03.413Z
|
regular
| false | false | null | true | false | false | null | null | null | 7 | 0 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 29441
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
201,388 |
NCCL failing with A100 GPUs, works fine with V100 GPUs
|
NCCL failing with A100 GPUs, works fine with V100 GPUs
|
nccl-failing-with-a100-gpus-works-fine-with-v100-gpus
| 9 | 4 | 9 | null |
2024-04-22T21:59:43.681Z
|
2025-03-19T14:29:49.772Z
| true |
2025-03-19T14:29:49.772Z
|
regular
| false | false | null | true | false | false | null | null | null | 2,808 | 1 | false |
FSchoen
| 12 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 75209
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 81002
}
] | null | null |
210,087 |
How to store temp variables with register_autograd without returning them as output?
|
How to store temp variables with register_autograd without returning them as output?
|
how-to-store-temp-variables-with-register-autograd-without-returning-them-as-output
| 2 | 0 | 2 | null |
2024-09-25T23:28:03.606Z
|
2025-03-19T14:27:50.754Z
| true |
2025-03-19T14:27:50.754Z
|
regular
| false | false | null | true | false | false | null | null | null | 46 | 1 | false |
FSchoen
| 7 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 58674
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 75209
}
] | null | null |
218,021 |
MaskRCNN: NotImplementedError
|
MaskRCNN: NotImplementedError
|
maskrcnn-notimplementederror
| 1 | 0 | 1 | null |
2025-03-19T10:16:11.915Z
|
2025-03-19T10:16:11.954Z
| true |
2025-03-19T10:40:00.684Z
|
regular
| false | false | null | true | false | false | null | null | null | 45 | 0 | false |
newbie1
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83358
}
] | null | null |
218,022 |
Restrict FCN output to valid convolutions
|
Restrict FCN output to valid convolutions
|
restrict-fcn-output-to-valid-convolutions
| 1 | 0 | 1 | null |
2025-03-19T10:24:29.118Z
|
2025-03-19T10:24:29.158Z
| true |
2025-03-19T10:24:29.158Z
|
regular
| false | false | null | true | false | false | null | null | null | 50 | 0 | false |
cpegel
| 5 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83359
}
] | null | null |
217,901 |
5070Ti+Ubuntu 20.04.6+cuda?
|
5070Ti+Ubuntu 20.04.6+cuda?
|
5070ti-ubuntu-20-04-6-cuda
| 3 | 1 | 3 | null |
2025-03-16T03:22:30.520Z
|
2025-03-19T10:15:03.040Z
| true |
2025-03-19T10:15:03.040Z
|
regular
| false | false | null | true | false | false | null | null | null | 235 | 0 | false |
riva_lei
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83301
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
218,017 |
CNN loss is not decreasing
|
CNN loss is not decreasing
|
cnn-loss-is-not-decreasing
| 1 | 0 | 1 | null |
2025-03-19T09:14:16.252Z
|
2025-03-19T09:14:16.295Z
| true |
2025-03-19T09:14:16.295Z
|
regular
| false | false | null | true | false | false | null | null | null | 52 | 0 | false |
Saqlain_Afroz
| 5 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83357
}
] | null | null |
217,493 |
Shapes of FakeTensors after graph transformations on ATen IR
|
Shapes of FakeTensors after graph transformations on ATen IR
|
shapes-of-faketensors-after-graph-transformations-on-aten-ir
| 7 | 3 | 8 | null |
2025-03-06T00:43:12.101Z
|
2025-03-19T07:44:20.302Z
| true |
2025-03-19T07:44:20.302Z
|
regular
| false | false | null | true | false | false | null | null | null | 87 | 0 | false |
nd21
| 41 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83093
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3718
}
] | null | null |
202,248 |
(When using multiple GPUs) RuntimeError: NCCL Error 1: unhandled cuda error (run with NCCL_DEBUG=INFO for details)
|
(When using multiple GPUs) RuntimeError: NCCL Error 1: unhandled cuda error (run with NCCL_DEBUG=INFO for details)
|
when-using-multiple-gpus-runtimeerror-nccl-error-1-unhandled-cuda-error-run-with-nccl-debug-info-for-details
| 7 | 4 | 7 | null |
2024-05-07T05:14:42.714Z
|
2025-03-19T06:06:37.714Z
| true |
2025-03-19T06:06:37.714Z
|
regular
| false | false | null | true | false | false | null | null | null | 2,177 | 0 | false |
flake_hayleigh
| 8 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18743
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83354
}
] | null | null |
217,873 |
Torch.compile(mode="max-autotune") produces different inference result from eager mode — is this expected?
|
Torch.compile(mode=“max-autotune”) produces different inference result from eager mode — is this expected?
|
torch-compile-mode-max-autotune-produces-different-inference-result-from-eager-mode-is-this-expected
| 7 | 4 | 7 | null |
2025-03-15T09:26:28.208Z
|
2025-03-19T01:53:03.576Z
| true |
2025-03-19T01:53:03.576Z
|
regular
| false | false | null | true | false | false | null | null | null | 124 | 0 | false |
tinywisdom
| 41 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83160
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3718
}
] | null | null |
218,005 |
Simulating quantization to lower bit precision with quant_min/max setting on fused modules
|
Simulating quantization to lower bit precision with quant_min/max setting on fused modules
|
simulating-quantization-to-lower-bit-precision-with-quant-min-max-setting-on-fused-modules
| 1 | 0 | 1 | null |
2025-03-19T01:46:52.890Z
|
2025-03-19T01:46:52.927Z
| true |
2025-03-19T01:46:52.927Z
|
regular
| false | false | null | true | false | false | null | null | null | 38 | 0 | false |
TominoFTW
| 17 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83353
}
] | null | null |
218,003 |
Errors while converting Def-DETR pth to .onnx
|
Errors while converting Def-DETR pth to .onnx
|
errors-while-converting-def-detr-pth-to-onnx
| 1 | 0 | 1 | null |
2025-03-19T01:09:32.914Z
|
2025-03-19T01:09:32.955Z
| true |
2025-03-19T01:09:32.955Z
|
regular
| false | false | null | true | false | false | null | null | null | 61 | 0 | false |
Sarvesh_Shashikumar
| 14 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83352
}
] | null | null |
217,994 |
Build from source failing
|
Build from source failing
|
build-from-source-failing
| 4 | 2 | 4 | null |
2025-03-18T16:16:04.698Z
|
2025-03-19T00:51:27.877Z
| true |
2025-03-19T00:51:27.877Z
|
regular
| false | false | null | true | false | false | null | null | null | 152 | 0 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83348
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
218,001 |
Triton CodeGen for torch linear operator
|
Triton CodeGen for torch linear operator
|
triton-codegen-for-torch-linear-operator
| 1 | 0 | 1 | null |
2025-03-19T00:44:47.837Z
|
2025-03-19T00:44:47.911Z
| true |
2025-03-19T00:44:47.911Z
|
regular
| false | false | null | true | false | false | null | null | null | 28 | 0 | false |
deepak.vij
| 41 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83182
}
] | null | null |
217,999 |
Fuse adds in TorchScript
|
Fuse adds in TorchScript
|
fuse-adds-in-torchscript
| 1 | 0 | 1 | null |
2025-03-18T22:24:10.497Z
|
2025-03-18T22:24:10.538Z
| true |
2025-03-18T22:29:22.920Z
|
regular
| false | false | null | true | false | false | null | null | null | 29 | 0 | false |
kamei
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83351
}
] | null | null |
217,997 |
Multiple Models Performance Degrades
|
Multiple Models Performance Degrades
|
multiple-models-performance-degrades
| 1 | 0 | 1 |
2025-03-18T18:03:18.722Z
|
2025-03-18T18:03:18.772Z
| true |
2025-03-18T18:03:18.772Z
|
regular
| false | false | null | true | false | false | null | null | null | 24 | 0 | false |
goofydoge
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 81841
}
] | null | null |
|
217,962 |
Testing .pte model with all ones give different result on PC and Android
|
Testing .pte model with all ones give different result on PC and Android
|
testing-pte-model-with-all-ones-give-different-result-on-pc-and-android
| 3 | 0 | 3 | null |
2025-03-17T22:17:33.084Z
|
2025-03-18T15:15:16.206Z
| true |
2025-03-18T15:58:12.752Z
|
regular
| false | false | null | true | false | false | null | null | null | 67 | 4 | false |
Sylte
| 42 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83334
}
] | null | null |
216,858 |
How to solve the graph break happen in torch.compile
|
How to solve the graph break happen in torch.compile
|
how-to-solve-the-graph-break-happen-in-torch-compile
| 12 | 10 | 12 | null |
2025-02-19T04:31:39.644Z
|
2025-03-18T15:16:27.719Z
| true |
2025-03-18T15:16:27.719Z
|
regular
| false | false | null | true | false | false | null | null | null | 769 | 5 | false |
RAMESH_BABU
| 41 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 82777
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 41997
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83259
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 80547
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 82873
}
] | null | null |
217,990 |
Use subset of dataset in c++ frontend
|
Use subset of dataset in c++ frontend
|
use-subset-of-dataset-in-c-frontend
| 1 | 0 | 1 | null |
2025-03-18T14:44:45.792Z
|
2025-03-18T14:44:45.832Z
| true |
2025-03-18T14:44:45.832Z
|
regular
| false | false | null | true | false | false | null | null | null | 19 | 0 | false |
Wousta
| 11 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83347
}
] | null | null |
217,199 |
Import torchvision fails on NVIDIA jetson orin nano
|
Import torchvision fails on NVIDIA jetson orin nano
|
import-torchvision-fails-on-nvidia-jetson-orin-nano
| 3 | 1 | 4 | null |
2025-02-26T18:34:09.423Z
|
2025-03-18T12:34:20.272Z
| true |
2025-03-18T12:34:20.272Z
|
regular
| false | false | null | true | false | false | null | null | null | 185 | 0 | false |
cadip92
| 5 | false | null | true |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 35490
},
{
"description": "Frequent Poster, Accepted Answer",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 77908
}
] | null | null |
217,774 |
How to Export `torchaudio.models.decoder.ctc_decoder` to TorchScript for C++ Deployment?
|
How to Export `torchaudio.models.decoder.ctc_decoder` to TorchScript for C++ Deployment?
|
how-to-export-torchaudio-models-decoder-ctc-decoder-to-torchscript-for-c-deployment
| 2 | 0 | 2 | null |
2025-03-13T06:16:05.295Z
|
2025-03-18T10:53:52.127Z
| true |
2025-03-18T10:53:52.127Z
|
regular
| false | false | null | true | false | false | null | null | null | 48 | 0 | false |
mariaalfaroc
| 9 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 56086
}
] | null | null |
202,563 |
3D RCNN Implementation
|
3D RCNN Implementation
|
3d-rcnn-implementation
| 2 | 0 | 2 | null |
2024-05-11T20:56:01.413Z
|
2025-03-18T08:43:49.581Z
| true |
2025-03-18T08:43:49.581Z
|
regular
| false | false | null | true | false | false | null | null | null | 253 | 0 | false |
Sid_1041
| 5 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 68149
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83341
}
] | null | null |
217,979 |
Understanding Optimal T, H, and W for R3D_18 Pretrained on Kinetics-400
|
Understanding Optimal T, H, and W for R3D_18 Pretrained on Kinetics-400
|
understanding-optimal-t-h-and-w-for-r3d-18-pretrained-on-kinetics-400
| 1 | 0 | 1 | null |
2025-03-18T08:42:02.547Z
|
2025-03-18T08:42:02.587Z
| true |
2025-03-18T08:42:02.587Z
|
regular
| false | false | null | true | false | false | null | null | null | 21 | 0 | false |
Sid_1041
| 5 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83341
}
] | null | null |
212,075 |
FSDP2 backward issue
|
FSDP2 backward issue
|
fsdp2-backward-issue
| 3 | 0 | 3 | null |
2024-10-25T04:00:56.026Z
|
2025-03-18T07:33:37.407Z
| true |
2025-03-18T07:33:37.407Z
|
regular
| false | false | null | true | false | false | null | null | null | 368 | 3 | false |
Uday_Singh_Saini
| 12 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 80471
},
{
"description": "Frequent Poster, Accepted Answer",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 78243
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 6783
}
] | null | null |
217,976 |
The selectable configs for irregular shape when using triton.autotune
|
The selectable configs for irregular shape when using triton.autotune
|
the-selectable-configs-for-irregular-shape-when-using-triton-autotune
| 1 | 0 | 1 | null |
2025-03-18T06:32:26.887Z
|
2025-03-18T06:32:26.926Z
| true |
2025-03-18T06:32:26.926Z
|
regular
| false | false | null | true | false | false | null | null | null | 44 | 0 | false |
NiuMa-1234
| 41 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 78370
}
] | null | null |
217,970 |
Transformer Stuck in Local Minima Occasionally
|
Transformer Stuck in Local Minima Occasionally
|
transformer-stuck-in-local-minima-occasionally
| 1 | 0 | 1 | null |
2025-03-18T03:33:44.966Z
|
2025-03-18T03:33:44.998Z
| true |
2025-03-18T03:33:44.998Z
|
regular
| false | false | null | true | false | false | null | null | null | 55 | 0 | false |
martin12
| 8 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83338
}
] | null | null |
217,717 |
Torch.compile CudaGraph creation & downstream systems
|
Torch.compile CudaGraph creation & downstream systems
|
torch-compile-cudagraph-creation-downstream-systems
| 3 | 1 | 3 | null |
2025-03-12T00:37:52.786Z
|
2025-03-18T03:18:28.398Z
| true |
2025-03-18T03:18:28.398Z
|
regular
| false | false | null | true | false | false | null | null | null | 85 | 0 | false |
deepak.vij
| 41 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83182
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3718
}
] | null | null |
217,950 |
Couldn't find the retrieval_recall in torcheval
|
Couldn’t find the retrieval_recall in torcheval
|
couldnt-find-the-retrieval-recall-in-torcheval
| 3 | 1 | 3 |
2025-03-17T12:32:04.660Z
|
2025-03-18T01:21:23.396Z
| true |
2025-03-18T01:21:23.396Z
|
regular
| false | false | null | true | false | false | null | null | null | 34 | 1 | false |
songsong0425
| 1 | false | null | true |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 72736
},
{
"description": "Frequent Poster, Accepted Answer",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
|
217,710 |
Debugging slow TorchDynamo Cache Lookup
|
Debugging slow TorchDynamo Cache Lookup
|
debugging-slow-torchdynamo-cache-lookup
| 2 | 0 | 2 |
2025-03-11T18:10:56.161Z
|
2025-03-18T00:18:51.847Z
| true |
2025-03-18T00:18:51.847Z
|
regular
| false | false | null | true | false | false | null | null | null | 96 | 0 | false |
richard
| 41 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83200
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3718
}
] | null | null |
|
217,964 |
CUDA memory issue in Hessian vector product
|
CUDA memory issue in Hessian vector product
|
cuda-memory-issue-in-hessian-vector-product
| 1 | 0 | 1 | null |
2025-03-17T23:50:00.337Z
|
2025-03-17T23:50:00.373Z
| true |
2025-03-17T23:56:25.366Z
|
regular
| false | false | null | true | false | false | null | null | null | 35 | 0 | false |
CheukHinHoJerry
| 7 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83335
}
] | null | null |
217,953 |
Autograd with mutable custom operator
|
Autograd with mutable custom operator
|
autograd-with-mutable-custom-operator
| 2 | 0 | 2 | null |
2025-03-17T14:11:56.278Z
|
2025-03-17T20:57:57.891Z
| true |
2025-03-17T20:57:57.891Z
|
regular
| false | false | null | true | false | false | null | null | null | 50 | 0 | false |
BeachHut
| 11 | false | null | true |
[
{
"description": "Original Poster, Most Recent Poster, Accepted Answer",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 9258
}
] | null | null |
205,672 |
Build instructions for ROCM
|
Build instructions for ROCM
|
build-instructions-for-rocm
| 3 | 1 | 3 | null |
2024-07-03T05:38:57.228Z
|
2025-03-17T19:59:20.416Z
| true |
2025-03-17T19:59:20.416Z
|
regular
| false | false | null | true | false | false | null | null | null | 182 | 1 | false |
Matthias_Moller
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 50704
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 82792
}
] | null | null |
217,941 |
DDP - sync gradients during optim step instead of backward
|
DDP - sync gradients during optim step instead of backward
|
ddp-sync-gradients-during-optim-step-instead-of-backward
| 2 | 0 | 2 | null |
2025-03-17T07:16:25.292Z
|
2025-03-17T18:11:04.923Z
| true |
2025-03-17T18:11:04.923Z
|
regular
| false | false | null | true | false | false | null | null | null | 41 | 0 | false |
ptrblck
| 12 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 17131
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
102,533 |
The kernel appears to have died. It will restart automatically
|
The kernel appears to have died. It will restart automatically
|
the-kernel-appears-to-have-died-it-will-restart-automatically
| 15 | 10 | 15 | null |
2020-11-12T10:15:39.407Z
|
2025-03-17T14:49:00.834Z
| true |
2025-03-17T14:49:00.834Z
|
regular
| false | false | null | true | false | false | null | null | null | 25,487 | 5 | false |
ptrblck
| 1 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 21332
},
{
"description": "Frequent Poster, Accepted Answer",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3146
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 59671
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 49004
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
143,234 |
How to set OMP_NUM_THREADS for distruted training?
|
How to set OMP_NUM_THREADS for distruted training?
|
how-to-set-omp-num-threads-for-distruted-training
| 7 | 2 | 7 | null |
2022-02-03T23:16:20.726Z
|
2025-03-17T14:10:22.805Z
| true |
2025-03-17T14:10:22.805Z
|
regular
| false | false | null | true | false | false | null | null | null | 17,188 | 17 | false |
MinSnz
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 2282
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 46006
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 65821
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 13736
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 67291
}
] | null | null |
204,999 |
CUDA Memory Profiling: perculiar memory values
|
CUDA Memory Profiling: perculiar memory values
|
cuda-memory-profiling-perculiar-memory-values
| 7 | 4 | 7 | null |
2024-06-20T14:49:37.693Z
|
2025-03-17T10:59:29.140Z
| true |
2025-03-17T10:59:29.140Z
|
regular
| false | false | null | true | false | false | null | null | null | 423 | 4 | false |
Agustin_Barrachina
| 7 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 76810
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 41396
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 82766
}
] | null | null |
103,478 |
pytorch profiler to analyze memory consumption
|
pytorch profiler to analyze memory consumption
|
pytorch-profiler-to-analyze-memory-consumption
| 3 | 1 | 3 |
2020-11-20T04:30:47.213Z
|
2025-03-17T10:46:37.637Z
| true |
2025-03-17T10:46:37.637Z
|
regular
| false | false | null | true | false | false | null | null | null | 444 | 1 | false |
Agustin_Barrachina
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 39088
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 34381
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 82766
}
] | null | null |
|
91,941 |
DataLoader gives:stack expects each tensor to be equal size,due to different image has different objects number
|
DataLoader gives:stack expects each tensor to be equal size,due to different image has different objects number
|
dataloader-gives-stack-expects-each-tensor-to-be-equal-size-due-to-different-image-has-different-objects-number
| 10 | 4 | 10 | null |
2020-08-07T09:43:11.131Z
|
2025-03-17T10:23:46.870Z
| true |
2025-03-17T10:23:46.870Z
|
regular
| false | false | null | true | false | false | null | null | null | 19,248 | 8 | false |
iliasslasri
| 1 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 17807
},
{
"description": "Frequent Poster, Accepted Answer",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 35180
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 35504
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 54306
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 81150
}
] | null | null |
217,928 |
CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 3.69 GiB of which 20.81 MiB is free
|
CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 3.69 GiB of which 20.81 MiB is free
|
cuda-out-of-memory-tried-to-allocate-20-00-mib-gpu-0-has-a-total-capacty-of-3-69-gib-of-which-20-81-mib-is-free
| 2 | 0 | 2 |
2025-03-17T01:34:00.001Z
|
2025-03-17T08:06:42.852Z
| true |
2025-03-17T08:06:42.852Z
|
regular
| false | false | null | true | false | false | null | null | null | 30 | 0 | false |
Soumya_Kundu
| 5 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83309
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 68149
}
] | null | null |
|
217,939 |
OSError when import torch
|
OSError when import torch
|
oserror-when-import-torch
| 1 | 0 | 1 |
2025-03-17T06:25:03.829Z
|
2025-03-17T06:25:03.868Z
| true |
2025-03-17T06:25:03.868Z
|
regular
| false | false | null | true | false | false | null | null | null | 25 | 0 | false |
MIA_D
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83322
}
] | null | null |
|
217,853 |
When use offload_to_cpu=True to fsdp load, the usage of video memory still increased
|
When use offload_to_cpu=True to fsdp load, the usage of video memory still increased
|
when-use-offload-to-cpu-true-to-fsdp-load-the-usage-of-video-memory-still-increased
| 2 | 0 | 2 | null |
2025-03-14T12:46:38.533Z
|
2025-03-17T06:10:00.871Z
| true |
2025-03-17T06:10:00.871Z
|
regular
| false | false | null | true | false | false | null | null | null | 24 | 0 | false |
nobody
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 47196
}
] | null | null |
204,017 |
Trainable parameters become Nan after optimizer.step() at the end of first iteration
|
Trainable parameters become Nan after optimizer.step() at the end of first iteration
|
trainable-parameters-become-nan-after-optimizer-step-at-the-end-of-first-iteration
| 3 | 1 | 3 | null |
2024-06-04T05:51:14.761Z
|
2025-03-17T04:02:45.159Z
| true |
2025-03-17T04:02:45.159Z
|
regular
| false | false | null | true | false | false | null | null | null | 242 | 0 | false |
lyc.comedymaker
| 5 | false | null | true |
[
{
"description": "Original Poster, Accepted Answer",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 76375
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83319
}
] | null | null |
217,932 |
Some questions about running pytorch on an intel "arc graphics xpu"
|
Some questions about running pytorch on an intel “arc graphics xpu”
|
some-questions-about-running-pytorch-on-an-intel-arc-graphics-xpu
| 1 | 0 | 1 | null |
2025-03-17T03:33:19.112Z
|
2025-03-17T03:33:19.156Z
| true |
2025-03-17T03:33:19.156Z
|
regular
| false | false | null | true | false | false | null | null | null | 59 | 0 | false |
KFrank
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18088
}
] | null | null |
215,369 |
Running Two Batches in Parallel Using CUDA Streams Does Not Overlap During Training
|
Running Two Batches in Parallel Using CUDA Streams Does Not Overlap During Training
|
running-two-batches-in-parallel-using-cuda-streams-does-not-overlap-during-training
| 7 | 3 | 7 |
2025-01-14T11:59:53.561Z
|
2025-03-17T03:19:19.107Z
| true |
2025-03-17T03:19:19.107Z
|
regular
| false | false | null | true | false | false | null | null | null | 233 | 0 | false |
wynne_yin
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 77597
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 71261
}
] | null | null |
|
217,919 |
Fitting parameters of custom model with pytorch
|
Fitting parameters of custom model with pytorch
|
fitting-parameters-of-custom-model-with-pytorch
| 2 | 0 | 2 | null |
2025-03-16T19:18:26.757Z
|
2025-03-17T03:08:29.653Z
| true |
2025-03-17T03:08:29.653Z
|
regular
| false | false | null | true | false | false | null | null | null | 21 | 0 | false |
KFrank
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83311
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18088
}
] | null | null |
217,805 |
How to calculate Jacobians for a batch
|
How to calculate Jacobians for a batch
|
how-to-calculate-jacobians-for-a-batch
| 6 | 4 | 6 | null |
2025-03-13T13:07:58.421Z
|
2025-03-16T15:39:32.524Z
| true |
2025-03-16T15:39:32.524Z
|
regular
| false | false | null | true | false | false | null | null | null | 93 | 2 | false |
KFrank
| 7 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 81460
},
{
"description": "Most Recent Poster, Accepted Answer",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18088
}
] | null | null |
217,889 |
Removing clicks in an audio file with torchaudio
|
Removing clicks in an audio file with torchaudio
|
removing-clicks-in-an-audio-file-with-torchaudio
| 2 | 0 | 2 |
2025-03-15T17:24:42.152Z
|
2025-03-16T15:00:51.428Z
| true |
2025-03-16T15:00:51.428Z
|
regular
| false | false | null | true | false | false | null | null | null | 80 | 0 | false |
KFrank
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83293
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18088
}
] | null | null |
|
217,908 |
PyTorch for RTX 5090? When will it be out? Thank you
|
PyTorch for RTX 5090? When will it be out? Thank you
|
pytorch-for-rtx-5090-when-will-it-be-out-thank-you
| 2 | 0 | 2 | null |
2025-03-16T10:10:27.595Z
|
2025-03-16T14:00:28.491Z
| true |
2025-03-16T14:00:28.491Z
|
regular
| false | false | null | true | false | false | null | null | null | 127 | 0 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83305
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
217,915 |
This is about optimizing cuDNN to avoid rebuilding the graph when batch, input dimensions change.
|
This is about optimizing cuDNN to avoid rebuilding the graph when batch, input dimensions change.
|
this-is-about-optimizing-cudnn-to-avoid-rebuilding-the-graph-when-batch-input-dimensions-change
| 1 | 0 | 1 | null |
2025-03-16T13:59:44.870Z
|
2025-03-16T13:59:44.902Z
| true |
2025-03-16T13:59:44.902Z
|
regular
| false | false | null | true | false | false | null | null | null | 24 | 0 | false |
yhyang201
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83303
}
] | null | null |
217,888 |
Iterations per second deacreasing over time
|
Iterations per second deacreasing over time
|
iterations-per-second-deacreasing-over-time
| 2 | 0 | 2 |
2025-03-15T16:58:38.793Z
|
2025-03-16T13:24:06.588Z
| true |
2025-03-16T13:24:06.588Z
|
regular
| false | false | null | true | false | false | null | null | null | 23 | 0 | false |
anantguptadbl
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83294
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 19553
}
] | null | null |
|
217,909 |
Storing intermediates outputs in python list + clone() and memory usage problem
|
Storing intermediates outputs in python list + clone() and memory usage problem
|
storing-intermediates-outputs-in-python-list-clone-and-memory-usage-problem
| 1 | 0 | 1 | null |
2025-03-16T10:38:06.410Z
|
2025-03-16T10:38:06.450Z
| true |
2025-03-16T10:38:06.450Z
|
regular
| false | false | null | true | false | false | null | null | null | 38 | 0 | false |
Nyx
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83291
}
] | null | null |
216,953 |
Pruning by manipulating weight_mask
|
Pruning by manipulating weight_mask
|
pruning-by-manipulating-weight-mask
| 2 | 0 | 2 | null |
2025-02-20T14:21:28.443Z
|
2025-03-15T22:52:11.716Z
| true |
2025-03-15T22:52:11.716Z
|
regular
| false | false | null | true | false | false | null | null | null | 68 | 0 | false |
ndronen
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 520
}
] | null | null |
217,883 |
Simple CNN model takes too much memory when forward() is called
|
Simple CNN model takes too much memory when forward() is called
|
simple-cnn-model-takes-too-much-memory-when-forward-is-called
| 2 | 0 | 2 | null |
2025-03-15T15:07:13.694Z
|
2025-03-15T15:53:48.961Z
| true |
2025-03-15T15:53:48.961Z
|
regular
| false | false | null | true | false | false | null | null | null | 46 | 0 | false |
ptrblck
| 5 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83291
},
{
"description": "Most Recent Poster, Accepted Answer",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
217,884 |
Help Needed: High Inference Time & CPU Usage in VGG19 QAT model vs. Baseline
|
Help Needed: High Inference Time & CPU Usage in VGG19 QAT model vs. Baseline
|
help-needed-high-inference-time-cpu-usage-in-vgg19-qat-model-vs-baseline
| 1 | 0 | 1 | null |
2025-03-15T15:34:25.765Z
|
2025-03-15T15:34:25.801Z
| true |
2025-03-15T15:45:37.338Z
|
regular
| false | false | null | true | false | false | null | null | null | 36 | 0 | false |
Auniik
| 17 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83292
}
] | null | null |
217,871 |
Pytorch Dataset why so slow
|
Pytorch Dataset why so slow
|
pytorch-dataset-why-so-slow
| 2 | 0 | 2 | null |
2025-03-15T07:27:39.507Z
|
2025-03-15T14:23:05.697Z
| true |
2025-03-15T14:23:05.697Z
|
regular
| false | false | null | true | false | false | null | null | null | 64 | 0 | false |
ptrblck
| 37 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83287
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
217,877 |
Initial D_KL loss is high and going down really slow
|
Initial D_KL loss is high and going down really slow
|
initial-d-kl-loss-is-high-and-going-down-really-slow
| 1 | 0 | 1 | null |
2025-03-15T12:04:06.828Z
|
2025-03-15T12:04:06.870Z
| true |
2025-03-15T12:07:55.407Z
|
regular
| false | false | null | true | false | false | null | null | null | 24 | 0 | false |
User_Name
| 8 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83290
}
] | null | null |
217,867 |
Torch::fft::rfft and fftw library disagree
|
Torch::fft::rfft and fftw library disagree
|
torch-rfft-and-fftw-library-disagree
| 1 | 0 | 1 | null |
2025-03-15T03:04:26.249Z
|
2025-03-15T03:04:26.289Z
| true |
2025-03-15T03:04:26.289Z
|
regular
| false | false | null | true | false | false | null | null | null | 23 | 0 | false |
Himalayjor
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 55803
}
] | null |
Torch::fft::rfft and fftw library disagree
|
217,826 |
How to ensure that auto-generated Triton Kernel is executed
|
How to ensure that auto-generated Triton Kernel is executed
|
how-to-ensure-that-auto-generated-triton-kernel-is-executed
| 1 | 0 | 1 | null |
2025-03-13T23:24:40.456Z
|
2025-03-13T23:24:40.519Z
| true |
2025-03-14T22:15:56.529Z
|
regular
| false | false | null | true | false | false | null | null | null | 41 | 0 | false |
deepak.vij
| 41 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83182
}
] | null | null |
217,848 |
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn during Autograd
|
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn during Autograd
|
runtimeerror-element-0-of-tensors-does-not-require-grad-and-does-not-have-a-grad-fn-during-autograd
| 6 | 3 | 6 | null |
2025-03-14T10:06:18.898Z
|
2025-03-14T18:55:10.498Z
| true |
2025-03-14T18:55:10.498Z
|
regular
| false | false | null | true | false | false | null | null | null | 25 | 0 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83272
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
192,829 |
RuntimeError: operator torchvision::nms does not exist
|
RuntimeError: operator torchvision::nms does not exist
|
runtimeerror-operator-torchvision-nms-does-not-exist
| 12 | 10 | 14 | null |
2023-11-29T19:20:53.543Z
|
2025-03-14T16:20:11.395Z
| true |
2025-03-14T16:20:11.395Z
|
regular
| false | false | null | true | false | false | null | null | null | 78,966 | 10 | false |
seyed_hosein_Alhosei
| 5 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 70911
},
{
"description": "Frequent Poster, Accepted Answer",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 82225
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 19813
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83270
}
] | null | null |
217,811 |
What is the interplay between the "nvidia driver" and "cuda"?
|
What is the interplay between the “nvidia driver” and “cuda”?
|
what-is-the-interplay-between-the-nvidia-driver-and-cuda
| 4 | 2 | 4 | null |
2025-03-13T14:37:13.997Z
|
2025-03-14T13:50:00.704Z
| true |
2025-03-14T13:50:00.704Z
|
regular
| false | false | null | true | false | false | null | null | null | 736 | 1 | false |
ptrblck
| 1 | false | null | true |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 18088
},
{
"description": "Most Recent Poster, Accepted Answer",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
217,832 |
NVIDIA RTX 5070 with CUDA 12.8 (sm_120) Window 11 Error
|
NVIDIA RTX 5070 with CUDA 12.8 (sm_120) Window 11 Error
|
nvidia-rtx-5070-with-cuda-12-8-sm-120-window-11-error
| 2 | 0 | 2 | null |
2025-03-14T03:33:33.075Z
|
2025-03-14T12:52:48.912Z
| true |
2025-03-14T12:52:48.912Z
|
regular
| false | false | null | true | false | false | null | null | null | 5,032 | 1 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83263
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
217,849 |
FSDP OOM when forwarding 7B model on 16k context length text
|
FSDP OOM when forwarding 7B model on 16k context length text
|
fsdp-oom-when-forwarding-7b-model-on-16k-context-length-text
| 1 | 0 | 1 | null |
2025-03-14T10:33:51.281Z
|
2025-03-14T10:33:51.348Z
| true |
2025-03-14T10:33:51.348Z
|
regular
| false | false | null | true | false | false | null | null | null | 30 | 0 | false |
zh950713
| 8 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83271
}
] | null | null |
217,843 |
DDU for segmentation
|
DDU for segmentation
|
ddu-for-segmentation
| 1 | 0 | 1 | null |
2025-03-14T08:24:38.392Z
|
2025-03-14T08:24:38.430Z
| true |
2025-03-14T08:24:38.430Z
|
regular
| false | false | null | true | false | false | null | null | null | 31 | 0 | false |
Mohamed_Farag
| 5 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 42293
}
] | null | null |
217,566 |
ROCm + torch + xformers
|
ROCm + torch + xformers
|
rocm-torch-xformers
| 3 | 1 | 3 | null |
2025-03-07T15:48:09.580Z
|
2025-03-14T08:13:12.800Z
| true |
2025-03-14T08:13:12.800Z
|
regular
| false | false | null | true | false | false | null | null | null | 573 | 0 | false |
rrunner77
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 82562
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83261
}
] | null | null |
217,833 |
Full finetune, LoRA and feature extraction take the same amount of memory and time to train
|
Full finetune, LoRA and feature extraction take the same amount of memory and time to train
|
full-finetune-lora-and-feature-extraction-take-the-same-amount-of-memory-and-time-to-train
| 1 | 0 | 1 | null |
2025-03-14T05:09:34.624Z
|
2025-03-14T05:09:34.664Z
| true |
2025-03-14T05:40:12.747Z
|
regular
| false | false | null | true | false | false | null | null | null | 31 | 0 | false |
Vefery
| 8 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 75074
}
] | null | null |
217,592 |
Custom Dataset __getitem__ is receiving a list from DataLoader
|
Custom Dataset __getitem__ is receiving a list from DataLoader
|
custom-dataset-getitem-is-receiving-a-list-from-dataloader
| 4 | 2 | 4 | null |
2025-03-08T12:27:38.663Z
|
2025-03-14T05:32:05.342Z
| true |
2025-03-14T05:32:05.342Z
|
regular
| false | false | null | true | false | false | null | null | null | 52 | 0 | false |
UMAR_MASUD
| 5 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83139
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 69167
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 74591
}
] | null | null |
217,824 |
Torch::jit::load error file_name!=nullptr
|
Torch::jit::load error file_name!=nullptr
|
torch-load-error-file-name-nullptr
| 1 | 0 | 1 | null |
2025-03-13T22:18:25.486Z
|
2025-03-13T22:18:25.523Z
| true |
2025-03-13T22:18:25.523Z
|
regular
| false | false | null | true | false | false | null | null | null | 31 | 0 | false |
Sanjib
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83260
}
] | null |
Torch::jit::load error file_name!=nullptr
|
217,571 |
Extra memory load while using DDP in rank 0, not cleared after validation
|
Extra memory load while using DDP in rank 0, not cleared after validation
|
extra-memory-load-while-using-ddp-in-rank-0-not-cleared-after-validation
| 8 | 6 | 8 | null |
2025-03-07T20:04:39.876Z
|
2025-03-13T18:41:52.313Z
| true |
2025-03-13T18:51:01.501Z
|
regular
| false | false | null | true | false | false | null | null | null | 220 | 0 | false |
Sihe_Chen
| 12 | false | null | true |
[
{
"description": "Original Poster, Most Recent Poster, Accepted Answer",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 40319
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
217,817 |
Exposing named parameters and parameters of a custom module (LibTorch)
|
Exposing named parameters and parameters of a custom module (LibTorch)
|
exposing-named-parameters-and-parameters-of-a-custom-module-libtorch
| 1 | 0 | 1 | null |
2025-03-13T16:33:13.635Z
|
2025-03-13T16:33:13.680Z
| true |
2025-03-13T16:33:13.680Z
|
regular
| false | false | null | true | false | false | null | null | null | 21 | 0 | false |
Standard_Deviation
| 11 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83255
}
] | null | null |
217,813 |
Announcement: Share Your Feedback on PyTorch Docs and Tutorial
|
Announcement: Share Your Feedback on PyTorch Docs and Tutorial
|
announcement-share-your-feedback-on-pytorch-docs-and-tutorial
| 1 | 0 | 1 | null |
2025-03-13T14:44:18.138Z
|
2025-03-13T14:44:18.214Z
| true |
2025-03-13T14:44:18.214Z
|
regular
| false | false | null | true | false | false | null | null | null | 20 | 0 | false |
sekyondaMeta
| 3 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83252
}
] | null | null |
217,809 |
Torch::full two overloads
|
Torch::full two overloads
|
torch-full-two-overloads
| 2 | 0 | 2 | null |
2025-03-13T14:03:27.296Z
|
2025-03-13T14:21:16.532Z
| true |
2025-03-13T14:21:16.532Z
|
regular
| false | false | null | true | false | false | null | null | null | 53 | 0 | false |
Dirk10001
| 11 | false | null | true |
[
{
"description": "Original Poster, Most Recent Poster, Accepted Answer",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 73501
}
] | null | null |
217,775 |
Torch expand outputs are different on CPU and CUDA EP
|
Torch expand outputs are different on CPU and CUDA EP
|
torch-expand-outputs-are-different-on-cpu-and-cuda-ep
| 2 | 0 | 2 | null |
2025-03-13T06:23:38.325Z
|
2025-03-13T12:58:01.000Z
| true |
2025-03-13T12:58:01.000Z
|
regular
| false | false | null | true | false | false | null | null | null | 27 | 0 | false |
ptrblck
| 14 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 70623
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
217,804 |
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 88 but got size 85 for tensor number 1 in the list
|
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 88 but got size 85 for tensor number 1 in the list
|
runtimeerror-sizes-of-tensors-must-match-except-in-dimension-1-expected-size-88-but-got-size-85-for-tensor-number-1-in-the-list
| 1 | 0 | 1 | null |
2025-03-13T12:57:10.946Z
|
2025-03-13T12:57:11.009Z
| true |
2025-03-13T12:57:11.009Z
|
regular
| false | false | null | true | false | false | null | null | null | 21 | 0 | false |
Dmitr
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 82738
}
] | null | null |
217,763 |
Is dynamic quantization in fact doing weight dequant instead of activation quant for `quantize_dynamic()`
|
Is dynamic quantization in fact doing weight dequant instead of activation quant for `quantize_dynamic()`
|
is-dynamic-quantization-in-fact-doing-weight-dequant-instead-of-activation-quant-for-quantize-dynamic
| 2 | 0 | 2 | null |
2025-03-12T21:55:04.470Z
|
2025-03-13T12:56:28.178Z
| true |
2025-03-13T12:56:28.178Z
|
regular
| false | false | null | true | false | false | null | null | null | 98 | 0 | false |
Vasiliy_Kuznetsov
| 17 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83226
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 31938
}
] | null | null |
217,794 |
Stable Diffusion does not work with the 5080 video card
|
Stable Diffusion does not work with the 5080 video card
|
stable-diffusion-does-not-work-with-the-5080-video-card
| 2 | 0 | 2 | null |
2025-03-13T11:42:37.078Z
|
2025-03-13T12:56:00.092Z
| true |
2025-03-13T12:56:00.092Z
|
regular
| false | false | null | true | false | false | null | null | null | 666 | 1 | false |
ptrblck
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83243
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
}
] | null | null |
217,802 |
How to properly implement these 1D convolutions and summations?
|
How to properly implement these 1D convolutions and summations?
|
how-to-properly-implement-these-1d-convolutions-and-summations
| 1 | 0 | 1 | null |
2025-03-13T12:53:43.221Z
|
2025-03-13T12:53:43.257Z
| true |
2025-03-13T12:53:43.257Z
|
regular
| false | false | null | true | false | false | null | null | null | 12 | 0 | false |
Sim_On
| 1 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83244
}
] | null | null |
210,114 |
Defining a ProbalisticActor with two normal distributions
|
Defining a ProbalisticActor with two normal distributions
|
defining-a-probalisticactor-with-two-normal-distributions
| 18 | 4 | 18 | null |
2024-09-26T10:05:23.068Z
|
2025-03-13T11:51:13.663Z
| true |
2025-03-13T11:51:13.663Z
|
regular
| false | false | null | true | false | false | null | null | null | 125 | 0 | false |
Gogh
| 6 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 79178
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 33609
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83246
}
] | null | null |
217,791 |
How to let torch.compile only target GPU
|
How to let torch.compile only target GPU
|
how-to-let-torch-compile-only-target-gpu
| 1 | 0 | 1 | null |
2025-03-13T11:23:34.668Z
|
2025-03-13T11:23:34.736Z
| true |
2025-03-13T11:23:34.736Z
|
regular
| false | false | null | true | false | false | null | null | null | 83 | 0 | false |
woctordho
| 41 | false | null | false |
[
{
"description": "Original Poster, Most Recent Poster",
"extras": "latest single",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83242
}
] | null | null |
197,636 |
Is CUDA 12.0 supported with any pytorch version?
|
Is CUDA 12.0 supported with any pytorch version?
|
is-cuda-12-0-supported-with-any-pytorch-version
| 10 | 8 | 10 | null |
2024-02-24T14:39:05.541Z
|
2025-03-13T10:29:59.601Z
| true |
2025-03-13T10:29:59.601Z
|
regular
| false | false | null | true | false | false | null | null | null | 26,980 | 6 | false |
wren_101
| 1 | false | null | false |
[
{
"description": "Original Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 73488
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 76893
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 44888
},
{
"description": "Frequent Poster",
"extras": null,
"flair_group_id": null,
"primary_group_id": null,
"user_id": 3534
},
{
"description": "Most Recent Poster",
"extras": "latest",
"flair_group_id": null,
"primary_group_id": null,
"user_id": 83241
}
] | null | null |