title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
Measuring Mathematical Problem Solving With the MATH Dataset | 115 | neurips | 40 | 2 | 2023-06-16 16:08:53.975000 | https://github.com/hendrycks/math | 382 | Measuring mathematical problem solving with the math dataset | https://scholar.google.com/scholar?cluster=15840802134856527968&hl=en&as_sdt=0,33 | 10 | 2,021 |
Synthetic Benchmarks for Scientific Research in Explainable Machine Learning | 25 | neurips | 12 | 1 | 2023-06-16 16:08:54.175000 | https://github.com/abacusai/xai-bench | 38 | Synthetic benchmarks for scientific research in explainable machine learning | https://scholar.google.com/scholar?cluster=16562504409000765600&hl=en&as_sdt=0,7 | 7 | 2,021 |
CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation | 305 | neurips | 313 | 23 | 2023-06-16 16:08:54.376000 | https://github.com/microsoft/CodeXGLUE | 1,143 | Codexglue: A machine learning benchmark dataset for code understanding and generation | https://scholar.google.com/scholar?cluster=3348257757676709546&hl=en&as_sdt=0,34 | 34 | 2,021 |
MQBench: Towards Reproducible and Deployable Model Quantization Benchmark | 23 | neurips | 116 | 23 | 2023-06-16 16:08:54.577000 | https://github.com/modeltc/mqbench | 586 | MQBench: Towards reproducible and deployable model quantization benchmark | https://scholar.google.com/scholar?cluster=3991463510006314628&hl=en&as_sdt=0,33 | 15 | 2,021 |
Measuring Coding Challenge Competence With APPS | 101 | neurips | 38 | 5 | 2023-06-16 16:08:54.778000 | https://github.com/hendrycks/apps | 274 | Measuring coding challenge competence with apps | https://scholar.google.com/scholar?cluster=17541608988106931861&hl=en&as_sdt=0,5 | 12 | 2,021 |
ATOM3D: Tasks on Molecules in Three Dimensions | 62 | neurips | 32 | 18 | 2023-06-16 16:08:54.981000 | https://github.com/drorlab/atom3d | 249 | Atom3d: Tasks on molecules in three dimensions | https://scholar.google.com/scholar?cluster=8766868616148993451&hl=en&as_sdt=0,5 | 14 | 2,021 |
WaveFake: A Data Set to Facilitate Audio Deepfake Detection | 32 | neurips | 6 | 0 | 2023-06-16 16:08:55.188000 | https://github.com/rub-syssec/wavefake | 42 | Wavefake: A data set to facilitate audio deepfake detection | https://scholar.google.com/scholar?cluster=6599528507595040003&hl=en&as_sdt=0,39 | 6 | 2,021 |
RAFT: A Real-World Few-Shot Text Classification Benchmark | 23 | neurips | 10 | 1 | 2023-06-16 16:08:55.389000 | https://github.com/oughtinc/raft-baselines | 11 | RAFT: A real-world few-shot text classification benchmark | https://scholar.google.com/scholar?cluster=14991051401140095655&hl=en&as_sdt=0,14 | 2 | 2,021 |
Physion: Evaluating Physical Prediction from Vision in Humans and Machines | 25 | neurips | 2 | 12 | 2023-06-16 16:08:55.589000 | https://github.com/cogtoolslab/physics-benchmarking-neurips2021 | 44 | Physion: Evaluating physical prediction from vision in humans and machines | https://scholar.google.com/scholar?cluster=8733318111076645893&hl=en&as_sdt=0,5 | 9 | 2,021 |
IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning | 17 | neurips | 13 | 0 | 2023-06-16 16:08:55.790000 | https://github.com/lupantech/iconqa | 31 | Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning | https://scholar.google.com/scholar?cluster=6611908787102909279&hl=en&as_sdt=0,5 | 3 | 2,021 |
SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation | 42 | neurips | 4 | 0 | 2023-06-16 16:08:55.992000 | https://github.com/segmentmeifyoucan/road-anomaly-benchmark | 20 | Segmentmeifyoucan: A benchmark for anomaly segmentation | https://scholar.google.com/scholar?cluster=402806083575370360&hl=en&as_sdt=0,21 | 0 | 2,021 |
B-Pref: Benchmarking Preference-Based Reinforcement Learning | 29 | neurips | 17 | 6 | 2023-06-16 16:08:56.192000 | https://github.com/rll-research/b-pref | 76 | B-pref: Benchmarking preference-based reinforcement learning | https://scholar.google.com/scholar?cluster=13266882268362659539&hl=en&as_sdt=0,33 | 0 | 2,021 |
NaturalProofs: Mathematical Theorem Proving in Natural Language | 22 | neurips | 6 | 0 | 2023-06-16 16:08:56.392000 | https://github.com/wellecks/naturalproofs | 90 | Naturalproofs: Mathematical theorem proving in natural language | https://scholar.google.com/scholar?cluster=955828414616536580&hl=en&as_sdt=0,32 | 7 | 2,021 |
OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs | 165 | neurips | 397 | 17 | 2023-06-16 16:08:56.592000 | https://github.com/snap-stanford/ogb | 1,685 | Ogb-lsc: A large-scale challenge for machine learning on graphs | https://scholar.google.com/scholar?cluster=15358624115412194871&hl=en&as_sdt=0,33 | 42 | 2,021 |
An Information Retrieval Approach to Building Datasets for Hate Speech Detection | 12 | neurips | 1 | 0 | 2023-06-16 16:08:56.793000 | https://github.com/mdmustafizurrahman/An-Information-Retrieval-Approach-to-Building-Datasets-for-Hate-Speech-Detection | 6 | An information retrieval approach to building datasets for hate speech detection | https://scholar.google.com/scholar?cluster=8624990227295438686&hl=en&as_sdt=0,33 | 3 | 2,021 |
RedCaps: Web-curated image-text data created by the people, for the people | 54 | neurips | 7 | 0 | 2023-06-16 16:08:56.993000 | https://github.com/redcaps-dataset/redcaps-downloader | 34 | Redcaps: Web-curated image-text data created by the people, for the people | https://scholar.google.com/scholar?cluster=16709143259160494609&hl=en&as_sdt=0,33 | 1 | 2,021 |
ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation | 35 | neurips | 2 | 0 | 2023-06-16 16:08:57.193000 | https://github.com/karazijal/clevrtex-generation | 30 | Clevrtex: A texture-rich benchmark for unsupervised multi-object segmentation | https://scholar.google.com/scholar?cluster=13383231498167855057&hl=en&as_sdt=0,37 | 2 | 2,021 |
A Channel Coding Benchmark for Meta-Learning | 6 | neurips | 1 | 0 | 2023-06-16 16:08:57.393000 | https://github.com/ruihuili/MetaCC | 8 | A channel coding benchmark for meta-learning | https://scholar.google.com/scholar?cluster=1943764158077040305&hl=en&as_sdt=0,18 | 5 | 2,021 |
Chaos as an interpretable benchmark for forecasting and data-driven modelling | 31 | neurips | 26 | 0 | 2023-06-16 16:08:57.594000 | https://github.com/williamgilpin/dysts | 204 | Chaos as an interpretable benchmark for forecasting and data-driven modelling | https://scholar.google.com/scholar?cluster=10113442544337188110&hl=en&as_sdt=0,33 | 7 | 2,021 |
HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML | 15 | neurips | 3 | 1 | 2023-06-16 16:08:57.794000 | https://github.com/releaunifreiburg/HPO-B | 17 | HPO-B: A large-scale reproducible benchmark for black-box HPO based on OpenML | https://scholar.google.com/scholar?cluster=7650782388880150578&hl=en&as_sdt=0,39 | 2 | 2,021 |
Monash Time Series Forecasting Archive | 42 | neurips | 34 | 0 | 2023-06-16 16:08:57.994000 | https://github.com/rakshitha123/TSForecasting | 103 | Monash time series forecasting archive | https://scholar.google.com/scholar?cluster=2787747679550330203&hl=en&as_sdt=0,31 | 6 | 2,021 |
Which priors matter? Benchmarking models for learning latent dynamics | 16 | neurips | 6 | 2 | 2023-06-16 16:08:58.195000 | https://github.com/deepmind/dm_hamiltonian_dynamics_suite | 28 | Which priors matter? Benchmarking models for learning latent dynamics | https://scholar.google.com/scholar?cluster=377030899492556244&hl=en&as_sdt=0,1 | 5 | 2,021 |
Benchmarks for Corruption Invariant Person Re-identification | 11 | neurips | 17 | 2 | 2023-06-16 16:08:58.395000 | https://github.com/MinghuiChen43/CIL-ReID | 78 | Benchmarks for corruption invariant person re-identification | https://scholar.google.com/scholar?cluster=13448668385156906300&hl=en&as_sdt=0,10 | 4 | 2,021 |
ClimART: A Benchmark Dataset for Emulating Atmospheric Radiative Transfer in Weather and Climate Models | 4 | neurips | 4 | 0 | 2023-06-16 16:08:58.595000 | https://github.com/RolnickLab/climart | 32 | ClimART: A benchmark dataset for emulating atmospheric radiative transfer in weather and climate models | https://scholar.google.com/scholar?cluster=15949022047670845408&hl=en&as_sdt=0,41 | 2 | 2,021 |
Variance-Aware Machine Translation Test Sets | 1 | neurips | 0 | 0 | 2023-06-16 16:08:58.796000 | https://github.com/nlp2ct/variance-aware-mt-test-sets | 6 | Variance-aware machine translation test sets | https://scholar.google.com/scholar?cluster=101231479911651461&hl=en&as_sdt=0,10 | 2 | 2,021 |
MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research | 44 | neurips | 44 | 2 | 2023-06-16 16:08:58.996000 | https://github.com/facebookresearch/minihack | 383 | Minihack the planet: A sandbox for open-ended reinforcement learning research | https://scholar.google.com/scholar?cluster=6630578925704373127&hl=en&as_sdt=0,5 | 11 | 2,021 |
NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search | 8 | neurips | 2 | 1 | 2023-06-16 22:56:56.096000 | https://github.com/thumnlab/nas-bench-graph | 12 | NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search | https://scholar.google.com/scholar?cluster=974156453210928124&hl=en&as_sdt=0,10 | 7 | 2,022 |
Fast Bayesian Coresets via Subsampling and Quasi-Newton Refinement | 3 | neurips | 0 | 0 | 2023-06-16 22:56:56.310000 | https://github.com/trevorcampbell/quasi-newton-coresets-experiments | 1 | Fast Bayesian coresets via subsampling and quasi-Newton refinement | https://scholar.google.com/scholar?cluster=12514193164456670939&hl=en&as_sdt=0,31 | 1 | 2,022 |
What You See is What You Classify: Black Box Attributions | 1 | neurips | 3 | 0 | 2023-06-16 22:56:56.520000 | https://github.com/stevenstalder/nn-explainer | 22 | What You See is What You Classify: Black Box Attributions | https://scholar.google.com/scholar?cluster=7817582227897435675&hl=en&as_sdt=0,5 | 1 | 2,022 |
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning | 20 | neurips | 6 | 3 | 2023-06-16 22:56:56.731000 | https://github.com/dongzelian/ssf | 105 | Scaling & shifting your features: A new baseline for efficient model tuning | https://scholar.google.com/scholar?cluster=15457903862760581709&hl=en&as_sdt=0,33 | 2 | 2,022 |
Zero-Shot Video Question Answering via Frozen Bidirectional Language Models | 30 | neurips | 19 | 1 | 2023-06-16 22:56:56.941000 | https://github.com/antoyang/FrozenBiLM | 98 | Zero-shot video question answering via frozen bidirectional language models | https://scholar.google.com/scholar?cluster=14506268695911835029&hl=en&as_sdt=0,44 | 4 | 2,022 |
Using natural language and program abstractions to instill human inductive biases in machines | 5 | neurips | 3 | 0 | 2023-06-16 22:56:57.151000 | https://github.com/sreejank/language_and_programs | 4 | Using natural language and program abstractions to instill human inductive biases in machines | https://scholar.google.com/scholar?cluster=18321817709222277184&hl=en&as_sdt=0,44 | 1 | 2,022 |
Theory and Approximate Solvers for Branched Optimal Transport with Multiple Sources | 0 | neurips | 1 | 1 | 2023-06-16 22:56:57.362000 | https://github.com/hci-unihd/branchedot | 3 | Theory and Approximate Solvers for Branched Optimal Transport with Multiple Sources | https://scholar.google.com/scholar?cluster=2014031354721865805&hl=en&as_sdt=0,21 | 1 | 2,022 |
CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis | 0 | neurips | 1 | 0 | 2023-06-16 22:56:57.573000 | https://github.com/niopeng/CHIMLE | 3 | CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis | https://scholar.google.com/scholar?cluster=6104344160615943312&hl=en&as_sdt=0,5 | 2 | 2,022 |
Diffusion Visual Counterfactual Explanations | 6 | neurips | 5 | 1 | 2023-06-16 22:56:57.784000 | https://github.com/valentyn1boreiko/dvces | 22 | Diffusion visual counterfactual explanations | https://scholar.google.com/scholar?cluster=10867197549616618589&hl=en&as_sdt=0,5 | 2 | 2,022 |
Recurrent Video Restoration Transformer with Guided Deformable Attention | 17 | neurips | 17 | 15 | 2023-06-16 22:56:57.996000 | https://github.com/jingyunliang/rvrt | 216 | Recurrent video restoration transformer with guided deformable attention | https://scholar.google.com/scholar?cluster=11993953591906088344&hl=en&as_sdt=0,1 | 22 | 2,022 |
On-Demand Sampling: Learning Optimally from Multiple Distributions | 5 | neurips | 1 | 0 | 2023-06-16 22:56:58.207000 | https://github.com/ericzhao28/multidistributionlearning | 7 | On-demand sampling: Learning optimally from multiple distributions | https://scholar.google.com/scholar?cluster=89881707711489723&hl=en&as_sdt=0,5 | 3 | 2,022 |
Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays | 9 | neurips | 1 | 0 | 2023-06-16 22:56:58.417000 | https://github.com/konstmish/asynchronous_sgd | 4 | Asynchronous sgd beats minibatch sgd under arbitrary delays | https://scholar.google.com/scholar?cluster=2013363266003001191&hl=en&as_sdt=0,5 | 1 | 2,022 |
Coresets for Relational Data and The Applications | 0 | neurips | 0 | 0 | 2023-06-16 22:56:58.627000 | https://github.com/cjx-zar/coresets-for-relational-data-and-the-applications | 1 | Coresets for Relational Data and The Applications | https://scholar.google.com/scholar?cluster=9554541870090821318&hl=en&as_sdt=0,5 | 1 | 2,022 |
Generating Training Data with Language Models: Towards Zero-Shot Language Understanding | 33 | neurips | 9 | 1 | 2023-06-16 22:56:58.838000 | https://github.com/yumeng5/supergen | 47 | Generating training data with language models: Towards zero-shot language understanding | https://scholar.google.com/scholar?cluster=14481752723663721801&hl=en&as_sdt=0,5 | 2 | 2,022 |
Robust Binary Models by Pruning Randomly-initialized Networks | 1 | neurips | 1 | 0 | 2023-06-16 22:56:59.049000 | https://github.com/IVRL/RobustBinarySubNet | 2 | Robust Binary Models by Pruning Randomly-initialized Networks | https://scholar.google.com/scholar?cluster=4369217517871260894&hl=en&as_sdt=0,22 | 3 | 2,022 |
Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning | 2 | neurips | 0 | 0 | 2023-06-16 22:56:59.260000 | https://github.com/lviano/identifiability_irl | 1 | Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning | https://scholar.google.com/scholar?cluster=14730598469172065139&hl=en&as_sdt=0,5 | 1 | 2,022 |
Efficient Knowledge Distillation from Model Checkpoints | 6 | neurips | 0 | 2 | 2023-06-16 22:56:59.471000 | https://github.com/leaplabthu/checkpointkd | 17 | Efficient Knowledge Distillation from Model Checkpoints | https://scholar.google.com/scholar?cluster=2353993256352314616&hl=en&as_sdt=0,10 | 2 | 2,022 |
ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs | 9 | neurips | 239 | 19 | 2023-06-16 22:56:59.682000 | https://github.com/divelab/DIG | 1,503 | ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs | https://scholar.google.com/scholar?cluster=1138590591357875306&hl=en&as_sdt=0,5 | 33 | 2,022 |
Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret | 1 | neurips | 0 | 0 | 2023-06-16 22:56:59.893000 | https://github.com/jiaweihhuang/tiered-rl-experiments | 1 | Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret | https://scholar.google.com/scholar?cluster=7975992698003675864&hl=en&as_sdt=0,5 | 1 | 2,022 |
BR-SNIS: Bias Reduced Self-Normalized Importance Sampling | 2 | neurips | 0 | 0 | 2023-06-16 22:57:00.103000 | https://github.com/gabrielvc/br_snis | 0 | BR-SNIS: Bias Reduced Self-Normalized Importance Sampling | https://scholar.google.com/scholar?cluster=18224130644100416616&hl=en&as_sdt=0,33 | 2 | 2,022 |
Early Stage Convergence and Global Convergence of Training Mildly Parameterized Neural Networks | 2 | neurips | 0 | 0 | 2023-06-16 22:57:00.315000 | https://github.com/wmz9/early_stage_convergence_neurips2022 | 0 | Early Stage Convergence and Global Convergence of Training Mildly Parameterized Neural Networks | https://scholar.google.com/scholar?cluster=6474562401144643315&hl=en&as_sdt=0,36 | 1 | 2,022 |
On Divergence Measures for Bayesian Pseudocoresets | 2 | neurips | 0 | 0 | 2023-06-16 22:57:00.525000 | https://github.com/balhaekim/bpc-divergences | 6 | On Divergence Measures for Bayesian Pseudocoresets | https://scholar.google.com/scholar?cluster=2002320216778529184&hl=en&as_sdt=0,33 | 1 | 2,022 |
Unsupervised Learning of Equivariant Structure from Sequences | 1 | neurips | 0 | 0 | 2023-06-16 22:57:00.736000 | https://github.com/takerum/meta_sequential_prediction | 13 | Unsupervised Learning of Equivariant Structure from Sequences | https://scholar.google.com/scholar?cluster=304500116743207302&hl=en&as_sdt=0,33 | 3 | 2,022 |
DC-BENCH: Dataset Condensation Benchmark | 13 | neurips | 15 | 4 | 2023-06-16 22:57:00.948000 | https://github.com/justincui03/dc_benchmark | 60 | DC-BENCH: Dataset Condensation Benchmark | https://scholar.google.com/scholar?cluster=16210328737996830947&hl=en&as_sdt=0,31 | 3 | 2,022 |
Mask Matching Transformer for Few-Shot Segmentation | 2 | neurips | 2 | 2 | 2023-06-16 22:57:01.159000 | https://github.com/picsart-ai-research/mask-matching-transformer | 9 | Mask matching transformer for few-shot segmentation | https://scholar.google.com/scholar?cluster=10843608391275474221&hl=en&as_sdt=0,14 | 2 | 2,022 |
Causal Discovery in Linear Latent Variable Models Subject to Measurement Error | 1 | neurips | 0 | 0 | 2023-06-16 22:57:01.369000 | https://github.com/yuqin-yang/sem-me-ur | 3 | Causal Discovery in Linear Latent Variable Models Subject to Measurement Error | https://scholar.google.com/scholar?cluster=2946367080464939107&hl=en&as_sdt=0,5 | 1 | 2,022 |
Sparsity in Continuous-Depth Neural Networks | 1 | neurips | 0 | 0 | 2023-06-16 22:57:01.579000 | https://github.com/theislab/pathreg | 6 | Sparsity in Continuous-Depth Neural Networks | https://scholar.google.com/scholar?cluster=17433656016983930477&hl=en&as_sdt=0,5 | 2 | 2,022 |
Learning Probabilistic Models from Generator Latent Spaces with Hat EBM | 1 | neurips | 0 | 0 | 2023-06-16 22:57:01.790000 | https://github.com/point0bar1/hat-ebm | 6 | Learning Probabilistic Models from Generator Latent Spaces with Hat EBM | https://scholar.google.com/scholar?cluster=9884499776664824848&hl=en&as_sdt=0,5 | 1 | 2,022 |
Learning Best Combination for Efficient N:M Sparsity | 8 | neurips | 1 | 1 | 2023-06-16 22:57:02.001000 | https://github.com/zyxxmu/lbc | 12 | Learning Best Combination for Efficient N: M Sparsity | https://scholar.google.com/scholar?cluster=16372091815388983729&hl=en&as_sdt=0,47 | 1 | 2,022 |
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera | 10 | neurips | 56 | 2 | 2023-06-16 22:57:02.212000 | https://github.com/USTC3DV/NDR-code | 473 | Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera | https://scholar.google.com/scholar?cluster=13429723672791415144&hl=en&as_sdt=0,44 | 14 | 2,022 |
Accelerated Training of Physics-Informed Neural Networks (PINNs) using Meshless Discretizations | 6 | neurips | 4 | 0 | 2023-06-16 22:57:02.423000 | https://github.com/ramanshsharma2806/dt-pinn | 7 | Accelerated Training of Physics-Informed Neural Networks (PINNs) using Meshless Discretizations | https://scholar.google.com/scholar?cluster=8326898373618608697&hl=en&as_sdt=0,28 | 4 | 2,022 |
DOPE: Doubly Optimistic and Pessimistic Exploration for Safe Reinforcement Learning | 7 | neurips | 2 | 0 | 2023-06-16 22:57:02.634000 | https://github.com/archanabura/dope-doublyoptimisticpessimisticexploration | 1 | DOPE: Doubly optimistic and pessimistic exploration for safe reinforcement learning | https://scholar.google.com/scholar?cluster=15050715295292728061&hl=en&as_sdt=0,5 | 1 | 2,022 |
Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate | 1 | neurips | 0 | 0 | 2023-06-16 22:57:02.845000 | https://github.com/kexinjinnn/equitopo | 7 | Communication-Efficient Topologies for Decentralized Learning with Consensus Rate | https://scholar.google.com/scholar?cluster=16384367809793868682&hl=en&as_sdt=0,33 | 1 | 2,022 |
Dataset Distillation via Factorization | 20 | neurips | 6 | 3 | 2023-06-16 22:57:03.055000 | https://github.com/huage001/datasetfactorization | 47 | Dataset distillation via factorization | https://scholar.google.com/scholar?cluster=1635742164576449623&hl=en&as_sdt=0,5 | 1 | 2,022 |
A Large Scale Search Dataset for Unbiased Learning to Rank | 8 | neurips | 8 | 8 | 2023-06-16 22:57:03.266000 | https://github.com/chuxiaokai/baidu_ultr_dataset | 51 | A large scale search dataset for unbiased learning to rank | https://scholar.google.com/scholar?cluster=16787793600985661869&hl=en&as_sdt=0,33 | 5 | 2,022 |
SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation | 58 | neurips | 66 | 20 | 2023-06-16 22:57:03.476000 | https://github.com/visual-attention-network/segnext | 630 | Segnext: Rethinking convolutional attention design for semantic segmentation | https://scholar.google.com/scholar?cluster=761718241536208511&hl=en&as_sdt=0,47 | 6 | 2,022 |
Understanding Hyperdimensional Computing for Parallel Single-Pass Learning | 5 | neurips | 1 | 0 | 2023-06-16 22:57:03.687000 | https://github.com/cornell-relaxml/hyperdimensional-computing | 5 | Understanding hyperdimensional computing for parallel single-pass learning | https://scholar.google.com/scholar?cluster=2441954374351827630&hl=en&as_sdt=0,5 | 0 | 2,022 |
Pre-trained Adversarial Perturbations | 1 | neurips | 1 | 0 | 2023-06-16 22:57:03.898000 | https://github.com/banyuanhao/pap | 15 | Pre-trained Adversarial Perturbations | https://scholar.google.com/scholar?cluster=1036412260609158515&hl=en&as_sdt=0,5 | 1 | 2,022 |
An Empirical Study on Disentanglement of Negative-free Contrastive Learning | 1 | neurips | 0 | 0 | 2023-06-16 22:57:04.108000 | https://github.com/noahcao/disentanglement_lib_med | 6 | An Empirical Study on Disentanglement of Negative-free Contrastive Learning | https://scholar.google.com/scholar?cluster=8166223620648232228&hl=en&as_sdt=0,10 | 0 | 2,022 |
MABSplit: Faster Forest Training Using Multi-Armed Bandits | 1 | neurips | 0 | 79 | 2023-06-16 22:57:04.320000 | https://github.com/thrungroup/fastforest | 4 | MABSplit: Faster Forest Training Using Multi-Armed Bandits | https://scholar.google.com/scholar?cluster=16839682885410953737&hl=en&as_sdt=0,22 | 0 | 2,022 |
Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints | 3 | neurips | 0 | 0 | 2023-06-16 22:57:04.531000 | https://github.com/gallego-posada/constrained_sparsity | 6 | Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints | https://scholar.google.com/scholar?cluster=3017868657771533183&hl=en&as_sdt=0,5 | 2 | 2,022 |
Okapi: Generalising Better by Making Statistical Matches Match | 0 | neurips | 0 | 1 | 2023-06-16 22:57:04.743000 | https://github.com/wearepal/okapi | 5 | Okapi: Generalising Better by Making Statistical Matches Match | https://scholar.google.com/scholar?cluster=14348083558003086680&hl=en&as_sdt=0,5 | 1 | 2,022 |
Revisiting Heterophily For Graph Neural Networks | 24 | neurips | 5 | 0 | 2023-06-16 22:57:04.953000 | https://github.com/SitaoLuan/ACM-GNN | 27 | Revisiting heterophily for graph neural networks | https://scholar.google.com/scholar?cluster=10728534830275344250&hl=en&as_sdt=0,5 | 5 | 2,022 |
Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation | 7 | neurips | 297 | 60 | 2023-06-16 22:57:05.164000 | https://github.com/microsoft/muzic | 3,299 | Museformer: Transformer with Fine-and Coarse-Grained Attention for Music Generation | https://scholar.google.com/scholar?cluster=9919738130893761480&hl=en&as_sdt=0,14 | 63 | 2,022 |
Emergent Communication: Generalization and Overfitting in Lewis Games | 4 | neurips | 0 | 0 | 2023-06-16 22:57:05.375000 | https://github.com/mathieurita/population | 4 | Emergent Communication: Generalization and Overfitting in Lewis Games | https://scholar.google.com/scholar?cluster=9098136952282832762&hl=en&as_sdt=0,5 | 2 | 2,022 |
Efficient and Effective Augmentation Strategy for Adversarial Training | 6 | neurips | 1 | 0 | 2023-06-16 22:57:05.585000 | https://github.com/val-iisc/dajat | 13 | Efficient and effective augmentation strategy for adversarial training | https://scholar.google.com/scholar?cluster=14581218917168092627&hl=en&as_sdt=0,5 | 14 | 2,022 |
Adaptive Data Debiasing through Bounded Exploration | 1 | neurips | 0 | 0 | 2023-06-16 22:57:05.797000 | https://github.com/yifankevin/adaptive_data_debiasing | 0 | Adaptive Data Debiasing through Bounded Exploration | https://scholar.google.com/scholar?cluster=6378226570310143908&hl=en&as_sdt=0,40 | 1 | 2,022 |
When does return-conditioned supervised learning work for offline reinforcement learning? | 13 | neurips | 0 | 0 | 2023-06-16 22:57:06.008000 | https://github.com/davidbrandfonbrener/rcsl-paper | 6 | When does return-conditioned supervised learning work for offline reinforcement learning? | https://scholar.google.com/scholar?cluster=13396358502953618671&hl=en&as_sdt=0,33 | 1 | 2,022 |
PDEBench: An Extensive Benchmark for Scientific Machine Learning | 21 | neurips | 44 | 6 | 2023-06-16 22:57:06.220000 | https://github.com/pdebench/pdebench | 402 | PDEBench: An extensive benchmark for scientific machine learning | https://scholar.google.com/scholar?cluster=15542719739478133736&hl=en&as_sdt=0,47 | 15 | 2,022 |
Learning Robust Dynamics through Variational Sparse Gating | 0 | neurips | 1 | 1 | 2023-06-16 22:57:06.431000 | https://github.com/arnavkj1995/vsg | 19 | Learning Robust Dynamics through Variational Sparse Gating | https://scholar.google.com/scholar?cluster=5582932369755688869&hl=en&as_sdt=0,36 | 2 | 2,022 |
Where to Pay Attention in Sparse Training for Feature Selection? | 4 | neurips | 0 | 1 | 2023-06-16 22:57:06.641000 | https://github.com/ghadasokar/wast | 4 | Where to Pay Attention in Sparse Training for Feature Selection? | https://scholar.google.com/scholar?cluster=1186481368031859899&hl=en&as_sdt=0,44 | 1 | 2,022 |
General Cutting Planes for Bound-Propagation-Based Neural Network Verification | 14 | neurips | 27 | 7 | 2023-06-16 22:57:06.853000 | https://github.com/huanzhang12/alpha-beta-CROWN | 148 | General cutting planes for bound-propagation-based neural network verification | https://scholar.google.com/scholar?cluster=16952567700251161551&hl=en&as_sdt=0,44 | 8 | 2,022 |
Mildly Conservative Q-Learning for Offline Reinforcement Learning | 15 | neurips | 3 | 0 | 2023-06-16 22:57:07.064000 | https://github.com/dmksjfl/mcq | 32 | Mildly conservative Q-learning for offline reinforcement learning | https://scholar.google.com/scholar?cluster=11648694472509786601&hl=en&as_sdt=0,36 | 4 | 2,022 |
Functional Ensemble Distillation | 0 | neurips | 0 | 0 | 2023-06-16 22:57:07.275000 | https://github.com/cobypenso/functional_ensemble_distillation | 3 | Functional Ensemble Distillation | https://scholar.google.com/scholar?cluster=7557864995422109600&hl=en&as_sdt=0,5 | 1 | 2,022 |
Lethal Dose Conjecture on Data Poisoning | 3 | neurips | 0 | 0 | 2023-06-16 22:57:07.486000 | https://github.com/wangwenxiao/FiniteAggregation | 5 | Lethal dose conjecture on data poisoning | https://scholar.google.com/scholar?cluster=10656232532262319468&hl=en&as_sdt=0,5 | 1 | 2,022 |
TempEL: Linking Dynamically Evolving and Newly Emerging Entities | 3 | neurips | 3 | 0 | 2023-06-16 22:57:07.697000 | https://github.com/klimzaporojets/tempel | 3 | TempEL: Linking dynamically evolving and newly emerging entities | https://scholar.google.com/scholar?cluster=3241654383736118484&hl=en&as_sdt=0,5 | 1 | 2,022 |
Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative | 2 | neurips | 2 | 0 | 2023-06-16 22:57:07.908000 | https://github.com/weitianxin/HyperGCL | 28 | Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative | https://scholar.google.com/scholar?cluster=8987357747154997241&hl=en&as_sdt=0,5 | 2 | 2,022 |
Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization | 6 | neurips | 3 | 1 | 2023-06-16 22:57:08.119000 | https://github.com/alstn12088/sym-nco | 13 | Sym-nco: Leveraging symmetricity for neural combinatorial optimization | https://scholar.google.com/scholar?cluster=8234123365488999500&hl=en&as_sdt=0,33 | 1 | 2,022 |
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning | 93 | neurips | 39 | 9 | 2023-06-16 22:57:08.330000 | https://github.com/r-three/t-few | 298 | Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning | https://scholar.google.com/scholar?cluster=242306292951569763&hl=en&as_sdt=0,1 | 6 | 2,022 |
DeepInteraction: 3D Object Detection via Modality Interaction | 17 | neurips | 10 | 13 | 2023-06-16 22:57:08.540000 | https://github.com/fudan-zvg/deepinteraction | 162 | Deepinteraction: 3d object detection via modality interaction | https://scholar.google.com/scholar?cluster=2369292758377733249&hl=en&as_sdt=0,15 | 19 | 2,022 |
Deep Differentiable Logic Gate Networks | 2 | neurips | 20 | 0 | 2023-06-16 22:57:08.752000 | https://github.com/felix-petersen/difflogic | 241 | Deep Differentiable Logic Gate Networks | https://scholar.google.com/scholar?cluster=12936836443171799268&hl=en&as_sdt=0,41 | 12 | 2,022 |
Maximizing and Satisficing in Multi-armed Bandits with Graph Information | 0 | neurips | 0 | 0 | 2023-06-16 22:57:08.962000 | https://github.com/parththaker/Bandits-GRUB | 0 | Maximizing and Satisficing in Multi-armed Bandits with Graph Information | https://scholar.google.com/scholar?cluster=5836306663005448433&hl=en&as_sdt=0,5 | 4 | 2,022 |
GOOD: A Graph Out-of-Distribution Benchmark | 14 | neurips | 15 | 1 | 2023-06-16 22:57:09.173000 | https://github.com/divelab/good | 130 | Good: A graph out-of-distribution benchmark | https://scholar.google.com/scholar?cluster=5688487541372761713&hl=en&as_sdt=0,5 | 4 | 2,022 |
PopArt: Efficient Sparse Regression and Experimental Design for Optimal Sparse Linear Bandits | 1 | neurips | 0 | 0 | 2023-06-16 22:57:09.384000 | https://github.com/jajajang/sparse | 0 | PopArt: Efficient Sparse Regression and Experimental Design for Optimal Sparse Linear Bandits | https://scholar.google.com/scholar?cluster=15122938357126562545&hl=en&as_sdt=0,5 | 1 | 2,022 |
What You See is What You Get: Principled Deep Learning via Distributional Generalization | 1 | neurips | 1 | 0 | 2023-06-16 22:57:09.594000 | https://github.com/yangarbiter/dp-dg | 6 | What You See is What You Get: Principled Deep Learning via Distributional Generalization | https://scholar.google.com/scholar?cluster=2822362220882233132&hl=en&as_sdt=0,24 | 3 | 2,022 |
GAPX: Generalized Autoregressive Paraphrase-Identification X | 0 | neurips | 0 | 0 | 2023-06-16 22:57:09.806000 | https://github.com/yifeizhou02/generalized_paraphrase_identification | 2 | GAPX: Generalized Autoregressive Paraphrase-Identification X | https://scholar.google.com/scholar?cluster=17804560355779547348&hl=en&as_sdt=0,26 | 1 | 2,022 |
Scalable Infomin Learning | 1 | neurips | 1 | 0 | 2023-06-16 22:57:10.016000 | https://github.com/cyz-ai/infomin | 8 | Scalable Infomin Learning | https://scholar.google.com/scholar?cluster=10543006398190520114&hl=en&as_sdt=0,5 | 1 | 2,022 |
Learning to Accelerate Partial Differential Equations via Latent Global Evolution | 6 | neurips | 4 | 0 | 2023-06-16 22:57:10.227000 | https://github.com/snap-stanford/le_pde | 11 | Learning to accelerate partial differential equations via latent global evolution | https://scholar.google.com/scholar?cluster=11413037155228818629&hl=en&as_sdt=0,43 | 41 | 2,022 |
Not too little, not too much: a theoretical analysis of graph (over)smoothing | 13 | neurips | 0 | 0 | 2023-06-16 22:57:10.438000 | https://github.com/nkeriven/graphsmoothing | 2 | Not too little, not too much: a theoretical analysis of graph (over) smoothing | https://scholar.google.com/scholar?cluster=2063487353980385484&hl=en&as_sdt=0,10 | 1 | 2,022 |
Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers | 1 | neurips | 0 | 0 | 2023-06-16 22:57:10.650000 | https://github.com/nerdslab/EIT | 6 | Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers | https://scholar.google.com/scholar?cluster=7588522259705770791&hl=en&as_sdt=0,33 | 1 | 2,022 |
Riemannian Score-Based Generative Modelling | 41 | neurips | 12 | 2 | 2023-06-16 22:57:10.861000 | https://github.com/oxcsml/riemannian-score-sde | 56 | Riemannian score-based generative modeling | https://scholar.google.com/scholar?cluster=11808970878216966405&hl=en&as_sdt=0,5 | 7 | 2,022 |
Open-Ended Reinforcement Learning with Neural Reward Functions | 2 | neurips | 0 | 0 | 2023-06-16 22:57:11.074000 | https://github.com/amujika/open-ended-reinforcement-learning-with-neural-reward-functions | 8 | Open-ended reinforcement learning with neural reward functions | https://scholar.google.com/scholar?cluster=12071061069808672843&hl=en&as_sdt=0,5 | 1 | 2,022 |
Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks | 7 | neurips | 2 | 3 | 2023-06-16 22:57:11.285000 | https://github.com/casia-iva-lab/obj2seq | 72 | Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks | https://scholar.google.com/scholar?cluster=9616302849095650848&hl=en&as_sdt=0,5 | 3 | 2,022 |
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering | 42 | neurips | 47 | 2 | 2023-06-16 22:57:11.495000 | https://github.com/lupantech/ScienceQA | 337 | Learn to explain: Multimodal reasoning via thought chains for science question answering | https://scholar.google.com/scholar?cluster=15090414004847508782&hl=en&as_sdt=0,44 | 7 | 2,022 |
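A minimal sketch of querying the table above, assuming each row has been parsed into a dict keyed by the header's column names. The two sample records are copied verbatim from the table; `best_per_year` is a hypothetical helper, not part of the dataset itself.

```python
# Each table row is one paper/repo record; the column names come from the
# header (title, citations_google_scholar, conference, stars, year, ...).
records = [
    {"title": "Measuring Mathematical Problem Solving With the MATH Dataset",
     "conference": "neurips", "citations_google_scholar": 115,
     "stars": 382, "year": 2021},
    {"title": "SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation",
     "conference": "neurips", "citations_google_scholar": 58,
     "stars": 630, "year": 2022},
]

def best_per_year(rows, key="stars"):
    """Return the single highest-`key` record for each year."""
    best = {}
    for r in rows:
        if r["year"] not in best or r[key] > best[r["year"]][key]:
            best[r["year"]] = r
    return best

top = best_per_year(records)
```

The same pattern extends to any numeric column (e.g. `key="citations_google_scholar"`) once the full table is parsed.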