title (string, 8-155 chars) | citations_google_scholar (int64, 0-28.9k) | conference (string, 5 classes) | forks (int64, 0-46.3k) | issues (int64, 0-12.2k) | lastModified (string, 19-26 chars) | repo_url (string, 26-130 chars) | stars (int64, 0-75.9k) | title_google_scholar (string, 8-155 chars) | url_google_scholar (string, 75-206 chars) | watchers (int64, 0-2.77k) | year (int64, 2.02k-2.02k) |
---|---|---|---|---|---|---|---|---|---|---|---|
Accelerating Reinforcement Learning through GPU Atari Emulation | 18 | neurips | 32 | 15 | 2023-06-16 15:12:10.222000 | https://github.com/NVLABs/cule | 216 | Accelerating reinforcement learning through gpu atari emulation | https://scholar.google.com/scholar?cluster=14852827801833804671&hl=en&as_sdt=0,5 | 20 | 2,020 |
Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs | 43 | neurips | 5 | 2 | 2023-06-16 15:12:10.414000 | https://github.com/dtak/mbrl-smdp-ode | 27 | Model-based reinforcement learning for semi-markov decision processes with neural odes | https://scholar.google.com/scholar?cluster=6882030783154485592&hl=en&as_sdt=0,5 | 3 | 2,020 |
Graph Stochastic Neural Networks for Semi-supervised Learning | 28 | neurips | 6 | 1 | 2023-06-16 15:12:10.607000 | https://github.com/GSNN/GSNN | 17 | Graph stochastic neural networks for semi-supervised learning | https://scholar.google.com/scholar?cluster=12398431409964717174&hl=en&as_sdt=0,26 | 2 | 2,020 |
Graduated Assignment for Joint Multi-Graph Matching and Clustering with Application to Unsupervised Graph Matching Network Learning | 23 | neurips | 113 | 0 | 2023-06-16 15:12:10.799000 | https://github.com/Thinklab-SJTU/ThinkMatch | 714 | Graduated assignment for joint multi-graph matching and clustering with application to unsupervised graph matching network learning | https://scholar.google.com/scholar?cluster=15043532701197211063&hl=en&as_sdt=0,43 | 22 | 2,020 |
Estimating Training Data Influence by Tracing Gradient Descent | 145 | neurips | 14 | 4 | 2023-06-16 15:12:10.992000 | https://github.com/frederick0329/TracIn | 186 | Estimating training data influence by tracing gradient descent | https://scholar.google.com/scholar?cluster=1975203419691170892&hl=en&as_sdt=0,5 | 8 | 2,020 |
Joint Policy Search for Multi-agent Collaboration with Imperfect Information | 11 | neurips | 9 | 0 | 2023-06-16 15:12:11.187000 | https://github.com/facebookresearch/jps | 41 | Joint policy search for multi-agent collaboration with imperfect information | https://scholar.google.com/scholar?cluster=9814706809980127110&hl=en&as_sdt=0,5 | 6 | 2,020 |
Learning Retrospective Knowledge with Reverse Reinforcement Learning | 11 | neurips | 658 | 6 | 2023-06-16 15:12:11.380000 | https://github.com/ShangtongZhang/DeepRL | 2,943 | Learning retrospective knowledge with reverse reinforcement learning | https://scholar.google.com/scholar?cluster=5697894321582614972&hl=en&as_sdt=0,34 | 93 | 2,020 |
Dialog without Dialog Data: Learning Visual Dialog Agents from VQA Data | 8 | neurips | 1 | 2 | 2023-06-16 15:12:11.573000 | https://github.com/mcogswell/dialog_without_dialog | 5 | Dialog without dialog data: Learning visual dialog agents from VQA data | https://scholar.google.com/scholar?cluster=15836872788471162855&hl=en&as_sdt=0,33 | 2 | 2,020 |
The Complete Lasso Tradeoff Diagram | 7 | neurips | 0 | 0 | 2023-06-16 15:12:11.767000 | https://github.com/HuaWang-wharton/CompleteLassoDiagram | 0 | The complete Lasso tradeoff diagram | https://scholar.google.com/scholar?cluster=11441396913332504259&hl=en&as_sdt=0,5 | 0 | 2,020 |
The Primal-Dual method for Learning Augmented Algorithms | 77 | neurips | 0 | 0 | 2023-06-16 15:12:11.959000 | https://github.com/etienne4/PDLA | 3 | The primal-dual method for learning augmented algorithms | https://scholar.google.com/scholar?cluster=17410161354545999384&hl=en&as_sdt=0,5 | 1 | 2,020 |
A Class of Algorithms for General Instrumental Variable Models | 26 | neurips | 1 | 0 | 2023-06-16 15:12:12.156000 | https://github.com/nikikilbertus/general-iv-models | 13 | A class of algorithms for general instrumental variable models | https://scholar.google.com/scholar?cluster=6114438229492187489&hl=en&as_sdt=0,5 | 3 | 2,020 |
Black-Box Ripper: Copying black-box models using generative evolutionary algorithms | 25 | neurips | 3 | 3 | 2023-06-16 15:12:12.354000 | https://github.com/antoniobarbalau/black-box-ripper | 26 | Black-Box Ripper: Copying black-box models using generative evolutionary algorithms | https://scholar.google.com/scholar?cluster=2038937056151338541&hl=en&as_sdt=0,5 | 2 | 2,020 |
Bayesian Optimization of Risk Measures | 34 | neurips | 3 | 0 | 2023-06-16 15:12:12.547000 | https://github.com/saitcakmak/BoRisk | 20 | Bayesian optimization of risk measures | https://scholar.google.com/scholar?cluster=11597649173870382888&hl=en&as_sdt=0,10 | 4 | 2,020 |
TorsionNet: A Reinforcement Learning Approach to Sequential Conformer Search | 24 | neurips | 3 | 0 | 2023-06-16 15:12:12.743000 | https://github.com/tarungog/torsionnet_paper_version | 12 | Torsionnet: A reinforcement learning approach to sequential conformer search | https://scholar.google.com/scholar?cluster=15323026786978211130&hl=en&as_sdt=0,11 | 5 | 2,020 |
GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis | 490 | neurips | 69 | 8 | 2023-06-16 15:12:12.940000 | https://github.com/autonomousvision/graf | 376 | Graf: Generative radiance fields for 3d-aware image synthesis | https://scholar.google.com/scholar?cluster=6074305542312504170&hl=en&as_sdt=0,5 | 19 | 2,020 |
A Simple Language Model for Task-Oriented Dialogue | 341 | neurips | 76 | 24 | 2023-06-16 15:12:13.133000 | https://github.com/salesforce/simpletod | 217 | A simple language model for task-oriented dialogue | https://scholar.google.com/scholar?cluster=13901694758455015611&hl=en&as_sdt=0,43 | 13 | 2,020 |
A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval | 10 | neurips | 0 | 0 | 2023-06-16 15:12:13.325000 | https://github.com/fawuuu/mirror_spr | 0 | A continuous-time mirror descent approach to sparse phase retrieval | https://scholar.google.com/scholar?cluster=17231492085001366085&hl=en&as_sdt=0,5 | 1 | 2,020 |
Confidence sequences for sampling without replacement | 16 | neurips | 0 | 0 | 2023-06-16 15:12:13.536000 | https://github.com/wannabesmith/confseq_wor | 4 | Confidence sequences for sampling without replacement | https://scholar.google.com/scholar?cluster=4371792767519028336&hl=en&as_sdt=0,33 | 2 | 2,020 |
Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge | 75 | neurips | 1 | 1 | 2023-06-16 15:12:13.728000 | https://github.com/alontalmor/TeachYourAI | 44 | Leap-of-thought: Teaching pre-trained models to systematically reason over implicit knowledge | https://scholar.google.com/scholar?cluster=11221279526378822822&hl=en&as_sdt=0,33 | 6 | 2,020 |
Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games | 49 | neurips | 16 | 2 | 2023-06-16 15:12:13.922000 | https://github.com/JBLanier/pipeline-psro | 37 | Pipeline psro: A scalable approach for finding approximate nash equilibria in large games | https://scholar.google.com/scholar?cluster=8078944900964563231&hl=en&as_sdt=0,5 | 4 | 2,020 |
Latent Template Induction with Gumbel-CRFs | 8 | neurips | 8 | 0 | 2023-06-16 15:12:14.115000 | https://github.com/FranxYao/Gumbel-CRF | 53 | Latent template induction with Gumbel-CRFS | https://scholar.google.com/scholar?cluster=11572320243625839339&hl=en&as_sdt=0,45 | 5 | 2,020 |
Factorizable Graph Convolutional Networks | 110 | neurips | 9 | 1 | 2023-06-16 15:12:14.307000 | https://github.com/ihollywhy/FactorGCN.PyTorch | 47 | Factorizable graph convolutional networks | https://scholar.google.com/scholar?cluster=8785212060536911333&hl=en&as_sdt=0,5 | 1 | 2,020 |
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses | 54 | neurips | 8 | 0 | 2023-06-16 15:12:14.500000 | https://github.com/val-iisc/GAMA-GAT | 23 | Guided adversarial attack for evaluating and enhancing adversarial defenses | https://scholar.google.com/scholar?cluster=8789193805515156711&hl=en&as_sdt=0,4 | 13 | 2,020 |
A Study on Encodings for Neural Architecture Search | 62 | neurips | 5 | 0 | 2023-06-16 15:12:14.692000 | https://github.com/naszilla/nas-encodings | 29 | A study on encodings for neural architecture search | https://scholar.google.com/scholar?cluster=10654503174667687184&hl=en&as_sdt=0,5 | 8 | 2,020 |
Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising | 56 | neurips | 14 | 1 | 2023-06-16 15:12:14.885000 | https://github.com/divelab/Noise2Same | 61 | Noise2Same: Optimizing a self-supervised bound for image denoising | https://scholar.google.com/scholar?cluster=11034449771862821776&hl=en&as_sdt=0,47 | 4 | 2,020 |
Early-Learning Regularization Prevents Memorization of Noisy Labels | 304 | neurips | 28 | 5 | 2023-06-16 15:12:15.078000 | https://github.com/shengliu66/ELR | 249 | Early-learning regularization prevents memorization of noisy labels | https://scholar.google.com/scholar?cluster=3805522034549943304&hl=en&as_sdt=0,25 | 8 | 2,020 |
LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-resolution and Beyond | 71 | neurips | 33 | 11 | 2023-06-16 15:12:15.271000 | https://github.com/dvlab-research/Simple-SR | 215 | Lapar: Linearly-assembled pixel-adaptive regression network for single image super-resolution and beyond | https://scholar.google.com/scholar?cluster=5145084170737435928&hl=en&as_sdt=0,6 | 5 | 2,020 |
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | 48 | neurips | 5 | 1 | 2023-06-16 15:12:15.464000 | https://github.com/JingtongSu/sanity-checking-pruning | 39 | Sanity-checking pruning methods: Random tickets can win the jackpot | https://scholar.google.com/scholar?cluster=2172629804299709441&hl=en&as_sdt=0,33 | 2 | 2,020 |
Position-based Scaled Gradient for Model Quantization and Pruning | 24 | neurips | 3 | 1 | 2023-06-16 15:12:15.658000 | https://github.com/Jangho-Kim/PSG-pytorch | 17 | Position-based scaled gradient for model quantization and pruning | https://scholar.google.com/scholar?cluster=1487663288303677561&hl=en&as_sdt=0,5 | 3 | 2,020 |
Graph Information Bottleneck | 105 | neurips | 25 | 1 | 2023-06-16 15:12:15.853000 | https://github.com/snap-stanford/GIB | 104 | Graph information bottleneck | https://scholar.google.com/scholar?cluster=11004655296553092045&hl=en&as_sdt=0,5 | 44 | 2,020 |
RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference | 39 | neurips | 369 | 28 | 2023-06-16 15:12:16.052000 | https://github.com/Microsoft/EdgeML | 1,453 | RNNPool: Efficient non-linear pooling for RAM constrained inference | https://scholar.google.com/scholar?cluster=9340951550254223370&hl=en&as_sdt=0,5 | 87 | 2,020 |
Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation | 22 | neurips | 2 | 1 | 2023-06-16 15:12:16.246000 | https://github.com/Jacobew/AutoPanoptic | 19 | Auto-panoptic: Cooperative multi-component architecture search for panoptic segmentation | https://scholar.google.com/scholar?cluster=11947807024654626869&hl=en&as_sdt=0,6 | 2 | 2,020 |
On Completeness-aware Concept-Based Explanations in Deep Neural Networks | 162 | neurips | 13 | 1 | 2023-06-16 15:12:16.439000 | https://github.com/chihkuanyeh/concept_exp | 42 | On completeness-aware concept-based explanations in deep neural networks | https://scholar.google.com/scholar?cluster=1524554551065921155&hl=en&as_sdt=0,5 | 4 | 2,020 |
Why Normalizing Flows Fail to Detect Out-of-Distribution Data | 147 | neurips | 11 | 9 | 2023-06-16 15:12:16.632000 | https://github.com/PolinaKirichenko/flows_ood | 79 | Why normalizing flows fail to detect out-of-distribution data | https://scholar.google.com/scholar?cluster=2771286037773844242&hl=en&as_sdt=0,47 | 2 | 2,020 |
Unsupervised Translation of Programming Languages | 105 | neurips | 114 | 33 | 2023-06-16 15:12:16.825000 | https://github.com/facebookresearch/CodeGen | 540 | Unsupervised translation of programming languages | https://scholar.google.com/scholar?cluster=1104657131784756679&hl=en&as_sdt=0,21 | 31 | 2,020 |
Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation | 76 | neurips | 12 | 1 | 2023-06-16 15:12:17.018000 | https://github.com/RoyalVane/ASM | 69 | Adversarial style mining for one-shot unsupervised domain adaptation | https://scholar.google.com/scholar?cluster=12682829350097829096&hl=en&as_sdt=0,5 | 5 | 2,020 |
Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder | 120 | neurips | 6 | 2 | 2023-06-16 15:12:17.211000 | https://github.com/XavierXiao/Likelihood-Regret | 36 | Likelihood regret: An out-of-distribution detection score for variational auto-encoder | https://scholar.google.com/scholar?cluster=17961908496712601770&hl=en&as_sdt=0,5 | 4 | 2,020 |
Meta-Learning through Hebbian Plasticity in Random Networks | 61 | neurips | 20 | 0 | 2023-06-16 15:12:17.403000 | https://github.com/enajx/HebbianMetaLearning | 103 | Meta-learning through hebbian plasticity in random networks | https://scholar.google.com/scholar?cluster=14182623640516258528&hl=en&as_sdt=0,47 | 1 | 2,020 |
Statistical and Topological Properties of Sliced Probability Divergences | 44 | neurips | 2 | 0 | 2023-06-16 15:12:17.596000 | https://github.com/kimiandj/sliced_div | 0 | Statistical and topological properties of sliced probability divergences | https://scholar.google.com/scholar?cluster=12747887556426720635&hl=en&as_sdt=0,5 | 1 | 2,020 |
Probabilistic Active Meta-Learning | 28 | neurips | 5 | 0 | 2023-06-16 15:12:17.791000 | https://github.com/jeankaddour/paml | 15 | Probabilistic active meta-learning | https://scholar.google.com/scholar?cluster=10986627198228240905&hl=en&as_sdt=0,5 | 1 | 2,020 |
Linearly Converging Error Compensated SGD | 58 | neurips | 0 | 0 | 2023-06-16 15:12:17.986000 | https://github.com/eduardgorbunov/ef_sigma_k | 1 | Linearly converging error compensated SGD | https://scholar.google.com/scholar?cluster=9254067822190880000&hl=en&as_sdt=0,5 | 1 | 2,020 |
Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction | 8 | neurips | 3 | 2 | 2023-06-16 15:12:18.178000 | https://github.com/facebookresearch/c3dm | 18 | Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction | https://scholar.google.com/scholar?cluster=11150205973301180781&hl=en&as_sdt=0,5 | 11 | 2,020 |
The Cone of Silence: Speech Separation by Localization | 35 | neurips | 20 | 3 | 2023-06-16 15:12:18.371000 | https://github.com/vivjay30/Cone-of-Silence | 127 | The cone of silence: Speech separation by localization | https://scholar.google.com/scholar?cluster=8905558076062704423&hl=en&as_sdt=0,33 | 11 | 2,020 |
High-Dimensional Bayesian Optimization via Nested Riemannian Manifolds | 12 | neurips | 4 | 0 | 2023-06-16 15:12:18.564000 | https://github.com/NoemieJaquier/GaBOtorch | 38 | High-dimensional Bayesian optimization via nested Riemannian manifolds | https://scholar.google.com/scholar?cluster=1646248372513417394&hl=en&as_sdt=0,5 | 2 | 2,020 |
Matrix Completion with Quantified Uncertainty through Low Rank Gaussian Copula | 17 | neurips | 0 | 0 | 2023-06-16 15:12:18.758000 | https://github.com/yuxuanzhao2295/Matrix-Completion-with-Quantified-Uncertainty-through-Low-Rank-Gaussian-Copula | 1 | Matrix completion with quantified uncertainty through low rank gaussian copula | https://scholar.google.com/scholar?cluster=18308777915678894427&hl=en&as_sdt=0,5 | 1 | 2,020 |
Sparse and Continuous Attention Mechanisms | 20 | neurips | 2 | 0 | 2023-06-16 15:12:18.950000 | https://github.com/deep-spin/mcan-vqa-continuous-attention | 20 | Sparse and continuous attention mechanisms | https://scholar.google.com/scholar?cluster=8098274056344502290&hl=en&as_sdt=0,5 | 4 | 2,020 |
Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection | 415 | neurips | 70 | 25 | 2023-06-16 15:12:19.143000 | https://github.com/implus/GFocal | 546 | Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection | https://scholar.google.com/scholar?cluster=16305632232773240100&hl=en&as_sdt=0,18 | 14 | 2,020 |
Learning by Minimizing the Sum of Ranked Range | 12 | neurips | 1 | 0 | 2023-06-16 15:12:19.336000 | https://github.com/discovershu/SoRR | 10 | Learning by minimizing the sum of ranked range | https://scholar.google.com/scholar?cluster=5995735188540741359&hl=en&as_sdt=0,44 | 1 | 2,020 |
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations | 151 | neurips | 16 | 3 | 2023-06-16 15:12:19.570000 | https://github.com/chenhongge/StateAdvDRL | 90 | Robust deep reinforcement learning against adversarial perturbations on state observations | https://scholar.google.com/scholar?cluster=4468368848724952344&hl=en&as_sdt=0,23 | 5 | 2,020 |
Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features | 61 | neurips | 3 | 1 | 2023-06-16 15:12:19.772000 | https://github.com/boschresearch/hierarchical_anomaly_detection | 40 | Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features | https://scholar.google.com/scholar?cluster=16029376900610177245&hl=en&as_sdt=0,39 | 5 | 2,020 |
Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment | 10 | neurips | 0 | 0 | 2023-06-16 15:12:19.966000 | https://github.com/MInner/lrmf | 1 | Log-likelihood ratio minimizing flows: Towards robust and quantifiable neural distribution alignment | https://scholar.google.com/scholar?cluster=13503385515572497823&hl=en&as_sdt=0,34 | 1 | 2,020 |
Implicit Regularization in Deep Learning May Not Be Explainable by Norms | 108 | neurips | 2 | 1 | 2023-06-16 15:12:20.159000 | https://github.com/noamrazin/imp_reg_dl_not_norms | 7 | Implicit regularization in deep learning may not be explainable by norms | https://scholar.google.com/scholar?cluster=15094324317237150803&hl=en&as_sdt=0,44 | 3 | 2,020 |
POMO: Policy Optimization with Multiple Optima for Reinforcement Learning | 97 | neurips | 24 | 0 | 2023-06-16 15:12:20.352000 | https://github.com/yd-kwon/POMO | 96 | Pomo: Policy optimization with multiple optima for reinforcement learning | https://scholar.google.com/scholar?cluster=10640697018374796647&hl=en&as_sdt=0,5 | 2 | 2,020 |
RSKDD-Net: Random Sample-based Keypoint Detector and Descriptor | 21 | neurips | 6 | 0 | 2023-06-16 15:12:20.550000 | https://github.com/ispc-lab/RSKDD-Net | 34 | Rskdd-net: Random sample-based keypoint detector and descriptor | https://scholar.google.com/scholar?cluster=4142945676817836667&hl=en&as_sdt=0,33 | 2 | 2,020 |
ContraGAN: Contrastive Learning for Conditional Image Generation | 105 | neurips | 316 | 30 | 2023-06-16 15:12:20.744000 | https://github.com/POSTECH-CVLab/PyTorch-StudioGAN | 3,190 | Contragan: Contrastive learning for conditional image generation | https://scholar.google.com/scholar?cluster=18317588262394095158&hl=en&as_sdt=0,44 | 52 | 2,020 |
On the distance between two neural networks and the stability of learning | 38 | neurips | 7 | 0 | 2023-06-16 15:12:20.937000 | https://github.com/jxbz/fromage | 116 | On the distance between two neural networks and the stability of learning | https://scholar.google.com/scholar?cluster=16791561363789203322&hl=en&as_sdt=0,5 | 6 | 2,020 |
A Topological Filter for Learning with Label Noise | 58 | neurips | 7 | 2 | 2023-06-16 15:12:21.129000 | https://github.com/pxiangwu/TopoFilter | 21 | A topological filter for learning with label noise | https://scholar.google.com/scholar?cluster=3115391967239595458&hl=en&as_sdt=0,47 | 3 | 2,020 |
Personalized Federated Learning with Moreau Envelopes | 454 | neurips | 81 | 3 | 2023-06-16 15:12:21.321000 | https://github.com/CharlieDinh/pFedMe | 243 | Personalized federated learning with moreau envelopes | https://scholar.google.com/scholar?cluster=17442117675158664178&hl=en&as_sdt=0,5 | 3 | 2,020 |
Task-Agnostic Amortized Inference of Gaussian Process Hyperparameters | 11 | neurips | 3 | 0 | 2023-06-16 15:12:21.546000 | https://github.com/PrincetonLIPS/AHGP | 20 | Task-agnostic amortized inference of gaussian process hyperparameters | https://scholar.google.com/scholar?cluster=12673972723308026781&hl=en&as_sdt=0,5 | 4 | 2,020 |
Energy-based Out-of-distribution Detection | 527 | neurips | 52 | 0 | 2023-06-16 15:12:21.740000 | https://github.com/wetliu/energy_ood | 326 | Energy-based out-of-distribution detection | https://scholar.google.com/scholar?cluster=6749168752375875068&hl=en&as_sdt=0,14 | 8 | 2,020 |
On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them | 52 | neurips | 4 | 0 | 2023-06-16 15:12:21.932000 | https://github.com/liuchen11/AdversaryLossLandscape | 32 | On the loss landscape of adversarial training: Identifying challenges and how to overcome them | https://scholar.google.com/scholar?cluster=5092768704180341925&hl=en&as_sdt=0,50 | 2 | 2,020 |
User-Dependent Neural Sequence Models for Continuous-Time Event Data | 10 | neurips | 5 | 0 | 2023-06-16 15:12:22.126000 | https://github.com/ajboyd2/vae_mpp | 10 | User-dependent neural sequence models for continuous-time event data | https://scholar.google.com/scholar?cluster=18086639497022008062&hl=en&as_sdt=0,36 | 1 | 2,020 |
Active Structure Learning of Causal DAGs via Directed Clique Trees | 23 | neurips | 1 | 0 | 2023-06-16 15:12:22.320000 | https://github.com/csquires/dct-policy | 5 | Active structure learning of causal DAGs via directed clique trees | https://scholar.google.com/scholar?cluster=2190615991629114246&hl=en&as_sdt=0,5 | 3 | 2,020 |
Convergence and Stability of Graph Convolutional Networks on Large Random Graphs | 51 | neurips | 1 | 0 | 2023-06-16 15:12:22.559000 | https://github.com/nkeriven/random-graph-gnn | 12 | Convergence and stability of graph convolutional networks on large random graphs | https://scholar.google.com/scholar?cluster=8332036655143866488&hl=en&as_sdt=0,33 | 1 | 2,020 |
BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization | 408 | neurips | 319 | 64 | 2023-06-16 15:12:22.752000 | https://github.com/pytorch/botorch | 2,663 | BoTorch: A framework for efficient Monte-Carlo Bayesian optimization | https://scholar.google.com/scholar?cluster=1764580662257780594&hl=en&as_sdt=0,5 | 51 | 2,020 |
Reconsidering Generative Objectives For Counterfactual Reasoning | 19 | neurips | 3 | 0 | 2023-06-16 15:12:22.945000 | https://github.com/DannieLu/BV-NICE | 10 | Reconsidering generative objectives for counterfactual reasoning | https://scholar.google.com/scholar?cluster=17354375508713844403&hl=en&as_sdt=0,14 | 3 | 2,020 |
Quantile Propagation for Wasserstein-Approximate Gaussian Processes | 2 | neurips | 1 | 4 | 2023-06-16 15:12:23.138000 | https://github.com/RuiZhang2016/Quantile-Propagation-for-Wasserstein-Approximate-Gaussian-Processes | 0 | Quantile propagation for wasserstein-approximate gaussian processes | https://scholar.google.com/scholar?cluster=12042120379690917891&hl=en&as_sdt=0,36 | 2 | 2,020 |
Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning | 48 | neurips | 6 | 1 | 2023-06-16 15:12:23.368000 | https://github.com/trzhang0116/HRAC | 29 | Generating adjacency-constrained subgoals in hierarchical reinforcement learning | https://scholar.google.com/scholar?cluster=16065617713704817902&hl=en&as_sdt=0,5 | 2 | 2,020 |
High-contrast “gaudy” images improve the training of deep neural network models of visual cortex | 3 | neurips | 1 | 0 | 2023-06-16 15:12:23.592000 | https://github.com/pillowlab/gaudy-images | 5 | High-contrast “gaudy” images improve the training of deep neural network models of visual cortex | https://scholar.google.com/scholar?cluster=3979615738480690604&hl=en&as_sdt=0,11 | 8 | 2,020 |
Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion | 46 | neurips | 7 | 0 | 2023-06-16 15:12:23.785000 | https://github.com/MIRALab-USTC/KGE-DURA | 42 | Duality-induced regularizer for tensor factorization based knowledge graph completion | https://scholar.google.com/scholar?cluster=2035583007156508987&hl=en&as_sdt=0,10 | 3 | 2,020 |
H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks | 9 | neurips | 3 | 0 | 2023-06-16 15:12:23.980000 | https://github.com/IGITUGraz/H-Mem | 9 | H-mem: Harnessing synaptic plasticity with hebbian memory networks | https://scholar.google.com/scholar?cluster=16583441948206163184&hl=en&as_sdt=0,47 | 5 | 2,020 |
Curriculum By Smoothing | 42 | neurips | 4 | 1 | 2023-06-16 15:12:24.174000 | https://github.com/pairlab/CBS | 38 | Curriculum by smoothing | https://scholar.google.com/scholar?cluster=13722465389000493780&hl=en&as_sdt=0,33 | 4 | 2,020 |
Fast Transformers with Clustered Attention | 95 | neurips | 161 | 28 | 2023-06-16 15:12:24.367000 | https://github.com/idiap/fast-transformers | 1,433 | Fast transformers with clustered attention | https://scholar.google.com/scholar?cluster=12028542204791594532&hl=en&as_sdt=0,1 | 27 | 2,020 |
Strongly Incremental Constituency Parsing with Graph Neural Networks | 20 | neurips | 6 | 0 | 2023-06-16 15:12:24.562000 | https://github.com/princeton-vl/attach-juxtapose-parser | 30 | Strongly incremental constituency parsing with graph neural networks | https://scholar.google.com/scholar?cluster=11445099204030608115&hl=en&as_sdt=0,22 | 5 | 2,020 |
Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians | 19 | neurips | 9 | 0 | 2023-06-16 15:12:24.764000 | https://github.com/pomonam/Self-Tuning-Networks | 46 | Delta-stn: Efficient bilevel optimization for neural networks using structured response jacobians | https://scholar.google.com/scholar?cluster=4174355255756713694&hl=en&as_sdt=0,11 | 2 | 2,020 |
Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis | 45 | neurips | 11 | 1 | 2023-06-16 15:12:24.957000 | https://github.com/Khrylx/RFC | 125 | Residual force control for agile human behavior imitation and extended motion synthesis | https://scholar.google.com/scholar?cluster=14507242578909021405&hl=en&as_sdt=0,5 | 9 | 2,020 |
Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings | 99 | neurips | 4 | 0 | 2023-06-16 15:12:25.151000 | https://github.com/chrsmrrs/sparsewl | 18 | Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings | https://scholar.google.com/scholar?cluster=6518483326763541093&hl=en&as_sdt=0,47 | 1 | 2,020 |
Adversarial Crowdsourcing Through Robust Rank-One Matrix Completion | 16 | neurips | 2 | 0 | 2023-06-16 15:12:25.343000 | https://github.com/maqqbu/MMSR | 7 | Adversarial crowdsourcing through robust rank-one matrix completion | https://scholar.google.com/scholar?cluster=13042202879705481913&hl=en&as_sdt=0,5 | 1 | 2,020 |
Learning Semantic-aware Normalization for Generative Adversarial Networks | 8 | neurips | 3 | 1 | 2023-06-16 15:12:25.547000 | https://github.com/researchmm/SariGAN | 50 | Learning semantic-aware normalization for generative adversarial networks | https://scholar.google.com/scholar?cluster=9760501643800019907&hl=en&as_sdt=0,11 | 19 | 2,020 |
Differentiable Causal Discovery from Interventional Data | 88 | neurips | 9 | 0 | 2023-06-16 15:12:25.740000 | https://github.com/slachapelle/dcdi | 52 | Differentiable causal discovery from interventional data | https://scholar.google.com/scholar?cluster=3426161106232828380&hl=en&as_sdt=0,23 | 4 | 2,020 |
Robust Persistence Diagrams using Reproducing Kernels | 3 | neurips | 0 | 0 | 2023-06-16 15:12:25.934000 | https://github.com/sidv23/robust-PDs | 4 | Robust persistence diagrams using reproducing kernels | https://scholar.google.com/scholar?cluster=18368713409545563505&hl=en&as_sdt=0,47 | 3 | 2,020 |
CrossTransformers: spatially-aware few-shot transfer | 216 | neurips | 136 | 44 | 2023-06-16 15:12:26.127000 | https://github.com/google-research/meta-dataset | 698 | Crosstransformers: spatially-aware few-shot transfer | https://scholar.google.com/scholar?cluster=17678351520585842037&hl=en&as_sdt=0,5 | 24 | 2,020 |
SEVIR : A Storm Event Imagery Dataset for Deep Learning Applications in Radar and Satellite Meteorology | 39 | neurips | 19 | 3 | 2023-06-16 15:12:26.321000 | https://github.com/MIT-AI-Accelerator/neurips-2020-sevir | 55 | Sevir: A storm event imagery dataset for deep learning applications in radar and satellite meteorology | https://scholar.google.com/scholar?cluster=8777075661534579096&hl=en&as_sdt=0,5 | 9 | 2,020 |
High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization | 12 | neurips | 3 | 0 | 2023-06-16 15:12:26.633000 | https://github.com/facebookresearch/ContextualBO | 13 | High-dimensional contextual policy search with unknown context rewards using Bayesian optimization | https://scholar.google.com/scholar?cluster=13229202016902486124&hl=en&as_sdt=0,36 | 7 | 2,020 |
Model Fusion via Optimal Transport | 83 | neurips | 15 | 4 | 2023-06-16 15:12:26.837000 | https://github.com/sidak/otfusion | 72 | Model fusion via optimal transport | https://scholar.google.com/scholar?cluster=4296035737617171484&hl=en&as_sdt=0,14 | 4 | 2,020 |
Learning Individually Inferred Communication for Multi-Agent Cooperation | 66 | neurips | 11 | 2 | 2023-06-16 15:12:27.030000 | https://github.com/PKU-AI-Edge/I2C | 31 | Learning individually inferred communication for multi-agent cooperation | https://scholar.google.com/scholar?cluster=1670323167364350618&hl=en&as_sdt=0,5 | 1 | 2,020 |
Set2Graph: Learning Graphs From Sets | 35 | neurips | 6 | 0 | 2023-06-16 15:12:27.224000 | https://github.com/hadarser/SetToGraphPaper | 19 | Set2graph: Learning graphs from sets | https://scholar.google.com/scholar?cluster=3992706616039043484&hl=en&as_sdt=0,47 | 1 | 2,020 |
Graph Random Neural Networks for Semi-Supervised Learning on Graphs | 228 | neurips | 37 | 8 | 2023-06-16 15:12:27.417000 | https://github.com/Grand20/grand | 182 | Graph random neural networks for semi-supervised learning on graphs | https://scholar.google.com/scholar?cluster=2995656499437981589&hl=en&as_sdt=0,11 | 3 | 2,020 |
Gradient Boosted Normalizing Flows | 5 | neurips | 3 | 4 | 2023-06-16 15:12:27.610000 | https://github.com/robert-giaquinto/gradient-boosted-normalizing-flows | 25 | Gradient boosted normalizing flows | https://scholar.google.com/scholar?cluster=952614259564825666&hl=en&as_sdt=0,10 | 3 | 2,020 |
Open Graph Benchmark: Datasets for Machine Learning on Graphs | 1,349 | neurips | 397 | 17 | 2023-06-16 15:12:27.804000 | https://github.com/snap-stanford/ogb | 1,685 | Open graph benchmark: Datasets for machine learning on graphs | https://scholar.google.com/scholar?cluster=4143980941711296523&hl=en&as_sdt=0,44 | 42 | 2,020 |
Texture Interpolation for Probing Visual Perception | 13 | neurips | 0 | 0 | 2023-06-16 15:12:28.002000 | https://github.com/JonathanVacher/texture-interpolation | 4 | Texture interpolation for probing visual perception | https://scholar.google.com/scholar?cluster=7728700650682598427&hl=en&as_sdt=0,5 | 1 | 2,020 |
Hierarchical Neural Architecture Search for Deep Stereo Matching | 216 | neurips | 50 | 13 | 2023-06-16 15:12:28.196000 | https://github.com/XuelianCheng/LEAStereo | 246 | Hierarchical neural architecture search for deep stereo matching | https://scholar.google.com/scholar?cluster=16363724602040348057&hl=en&as_sdt=0,43 | 4 | 2,020 |
Auditing Differentially Private Machine Learning: How Private is Private SGD? | 114 | neurips | 4 | 2 | 2023-06-16 15:12:28.389000 | https://github.com/jagielski/auditing-dpsgd | 26 | Auditing differentially private machine learning: How private is private sgd? | https://scholar.google.com/scholar?cluster=281241057337328648&hl=en&as_sdt=0,33 | 3 | 2,020 |
Measuring Systematic Generalization in Neural Proof Generation with Transformers | 41 | neurips | 0 | 0 | 2023-06-16 15:12:28.584000 | https://github.com/NicolasAG/SGinPG | 8 | Measuring systematic generalization in neural proof generation with transformers | https://scholar.google.com/scholar?cluster=8849018836826676230&hl=en&as_sdt=0,33 | 2 | 2,020 |
Big Self-Supervised Models are Strong Semi-Supervised Learners | 1,567 | neurips | 570 | 69 | 2023-06-16 15:12:28.777000 | https://github.com/google-research/simclr | 3,562 | Big self-supervised models are strong semi-supervised learners | https://scholar.google.com/scholar?cluster=18105628451996555050&hl=en&as_sdt=0,5 | 46 | 2,020 |
Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization | 32 | neurips | 0 | 0 | 2023-06-16 15:12:28.971000 | https://github.com/gpleiss/ciq_experiments | 2 | Fast matrix square roots with applications to Gaussian processes and Bayesian optimization | https://scholar.google.com/scholar?cluster=4177384831232294846&hl=en&as_sdt=0,41 | 1 | 2,020 |
Model Class Reliance for Random Forests | 16 | neurips | 4 | 0 | 2023-06-16 15:12:29.166000 | https://github.com/gavin-s-smith/mcrforest | 4 | Model class reliance for random forests | https://scholar.google.com/scholar?cluster=4402509966168777669&hl=en&as_sdt=0,25 | 2 | 2,020 |
Learning to Adapt to Evolving Domains | 39 | neurips | 7 | 4 | 2023-06-16 15:12:29.390000 | https://github.com/Liuhong99/EAML | 25 | Learning to adapt to evolving domains | https://scholar.google.com/scholar?cluster=16226509627178633585&hl=en&as_sdt=0,44 | 3 | 2,020 |
Synthesizing Tasks for Block-based Programming | 7 | neurips | 2 | 0 | 2023-06-16 15:12:29.584000 | https://github.com/adishs/neurips2020_synthesizing-tasks_code | 1 | Synthesizing tasks for block-based programming | https://scholar.google.com/scholar?cluster=16452730924427259118&hl=en&as_sdt=0,21 | 1 | 2,020 |
Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks | 25 | neurips | 3 | 1 | 2023-06-16 15:12:29.787000 | https://github.com/klightz/Firefly | 27 | Firefly neural architecture descent: a general approach for growing neural networks | https://scholar.google.com/scholar?cluster=13122447831516243168&hl=en&as_sdt=0,15 | 1 | 2,020 |
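A minimal sketch of how the table above could be loaded and queried, assuming the rows are exported to a CSV file with the column names from the schema header; the filename `neurips_2020_repos.csv` and the export step are assumptions for illustration, not part of this dataset card:

```python
import pandas as pd

# Hypothetical CSV export of the table above; the filename is an assumption.
df = pd.read_csv("neurips_2020_repos.csv")

# Columns follow the schema header:
# title, citations_google_scholar, conference, forks, issues, lastModified,
# repo_url, stars, title_google_scholar, url_google_scholar, watchers, year
df["lastModified"] = pd.to_datetime(df["lastModified"])

# Example query: the ten most-starred NeurIPS repositories in this slice.
top_starred = (
    df[df["conference"] == "neurips"]
    .sort_values("stars", ascending=False)
    .head(10)[["title", "repo_url", "stars", "citations_google_scholar"]]
)
print(top_starred.to_string(index=False))
```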