Column schema (name, type, and the value range or length reported by the dataset viewer):

column | type | observed range / length
---|---|---
title | string | length 8–155
citations_google_scholar | int64 | 0–28.9k
conference | string | 5 distinct values
forks | int64 | 0–46.3k
issues | int64 | 0–12.2k
lastModified | string | length 19–26
repo_url | string | length 26–130
stars | int64 | 0–75.9k
title_google_scholar | string | length 8–155
url_google_scholar | string | length 75–206
watchers | int64 | 0–2.77k
year | int64 | 2,021 (constant in the rows shown)

A minimal loading/query sketch follows the data table below.

title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
Representer Point Selection via Local Jacobian Expansion for Post-hoc Classifier Explanation of Deep Neural Networks and Ensemble Models | 8 | neurips | 0 | 1 | 2023-06-16 16:07:53.483000 | https://github.com/echoyi/rps_lje | 2 | Representer point selection via local jacobian expansion for post-hoc classifier explanation of deep neural networks and ensemble models | https://scholar.google.com/scholar?cluster=10184783151152200562&hl=en&as_sdt=0,5 | 3 | 2,021 |
Editing a classifier by rewriting its prediction rules | 33 | neurips | 7 | 0 | 2023-06-16 16:07:53.684000 | https://github.com/madrylab/editingclassifiers | 88 | Editing a classifier by rewriting its prediction rules | https://scholar.google.com/scholar?cluster=10393645433715100130&hl=en&as_sdt=0,5 | 6 | 2,021 |
How Modular should Neural Module Networks Be for Systematic Generalization? | 5 | neurips | 0 | 0 | 2023-06-16 16:07:53.885000 | https://github.com/vanessadamario/understanding_reasoning | 6 | How Modular Should Neural Module Networks Be for Systematic Generalization? | https://scholar.google.com/scholar?cluster=1661765216246697940&hl=en&as_sdt=0,5 | 2 | 2,021 |
The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization | 4 | neurips | 1 | 0 | 2023-06-16 16:07:54.096000 | https://github.com/dlej/adaptive-dropout | 0 | The flip side of the reweighted coin: duality of adaptive dropout and regularization | https://scholar.google.com/scholar?cluster=7949218782652631707&hl=en&as_sdt=0,5 | 1 | 2,021 |
Probabilistic Entity Representation Model for Reasoning over Knowledge Graphs | 21 | neurips | 3 | 3 | 2023-06-16 16:07:54.300000 | https://github.com/akirato/perm-gaussiankg | 9 | Probabilistic entity representation model for reasoning over knowledge graphs | https://scholar.google.com/scholar?cluster=3279393825125769301&hl=en&as_sdt=0,5 | 1 | 2,021 |
Black Box Probabilistic Numerics | 2 | neurips | 0 | 0 | 2023-06-16 16:07:54.502000 | https://github.com/oteym/bbpn | 0 | Black box probabilistic numerics | https://scholar.google.com/scholar?cluster=11244542960585978883&hl=en&as_sdt=0,5 | 1 | 2,021 |
Interpolation can hurt robust generalization even when there is no noise | 8 | neurips | 0 | 0 | 2023-06-16 16:07:54.706000 | https://github.com/michaelaerni/interpolation_robustness | 1 | Interpolation can hurt robust generalization even when there is no noise | https://scholar.google.com/scholar?cluster=15775630453700777923&hl=en&as_sdt=0,5 | 2 | 2,021 |
On the Equivalence between Neural Network and Support Vector Machine | 8 | neurips | 3 | 0 | 2023-06-16 16:07:54.910000 | https://github.com/leslie-ch/equiv-nn-svm | 8 | On the equivalence between neural network and support vector machine | https://scholar.google.com/scholar?cluster=13784067833914528352&hl=en&as_sdt=0,5 | 2 | 2,021 |
Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training | 50 | neurips | 316 | 30 | 2023-06-16 16:07:55.125000 | https://github.com/POSTECH-CVLab/PyTorch-StudioGAN | 3,190 | Rebooting acgan: Auxiliary classifier gans with stable training | https://scholar.google.com/scholar?cluster=15126723779815766107&hl=en&as_sdt=0,10 | 52 | 2,021 |
Robust and Decomposable Average Precision for Image Retrieval | 13 | neurips | 9 | 0 | 2023-06-16 16:07:55.326000 | https://github.com/elias-ramzi/roadmap | 70 | Robust and decomposable average precision for image retrieval | https://scholar.google.com/scholar?cluster=16259594709481566013&hl=en&as_sdt=0,5 | 4 | 2,021 |
Spatio-Temporal Variational Gaussian Processes | 15 | neurips | 1 | 1 | 2023-06-16 16:07:55.528000 | https://github.com/aaltoml/spatio-temporal-gps | 30 | Spatio-temporal variational Gaussian processes | https://scholar.google.com/scholar?cluster=5327408766327785744&hl=en&as_sdt=0,31 | 2 | 2,021 |
Fast Approximate Dynamic Programming for Infinite-Horizon Markov Decision Processes | 2 | neurips | 1 | 0 | 2023-06-16 16:07:55.728000 | https://github.com/AminKolarijani/ConjVI | 0 | Fast Approximate Dynamic Programming for Infinite-Horizon Markov Decision Processes | https://scholar.google.com/scholar?cluster=16725357238288679502&hl=en&as_sdt=0,47 | 1 | 2,021 |
Adaptive Risk Minimization: Learning to Adapt to Domain Shift | 78 | neurips | 24 | 3 | 2023-06-16 16:07:55.959000 | https://github.com/henrikmarklund/arm | 78 | Adaptive risk minimization: Learning to adapt to domain shift | https://scholar.google.com/scholar?cluster=6509702681777063562&hl=en&as_sdt=0,5 | 7 | 2,021 |
Learning State Representations from Random Deep Action-conditional Predictions | 3 | neurips | 0 | 0 | 2023-06-16 16:07:56.163000 | https://github.com/Hwhitetooth/random_gvfs | 3 | Learning state representations from random deep action-conditional predictions | https://scholar.google.com/scholar?cluster=15623109071018458033&hl=en&as_sdt=0,5 | 3 | 2,021 |
Tracking People with 3D Representations | 19 | neurips | 6 | 6 | 2023-06-16 16:07:56.363000 | https://github.com/brjathu/T3DP | 83 | Tracking people with 3D representations | https://scholar.google.com/scholar?cluster=18142751187854037322&hl=en&as_sdt=0,36 | 4 | 2,021 |
Optimal Sketching for Trace Estimation | 8 | neurips | 0 | 0 | 2023-06-16 16:07:56.564000 | https://github.com/11hifish/OptSketchTraceEst | 1 | Optimal sketching for trace estimation | https://scholar.google.com/scholar?cluster=1153169636268932836&hl=en&as_sdt=0,5 | 2 | 2,021 |
Estimating Multi-cause Treatment Effects via Single-cause Perturbation | 8 | neurips | 1 | 0 | 2023-06-16 16:07:56.764000 | https://github.com/zhaozhiqian/single-cause-perturbation-neurips-2021 | 9 | Estimating multi-cause treatment effects via single-cause perturbation | https://scholar.google.com/scholar?cluster=15417661006229778320&hl=en&as_sdt=0,5 | 2 | 2,021 |
MIRACLE: Causally-Aware Imputation via Learning Missing Data Mechanisms | 20 | neurips | 1 | 0 | 2023-06-16 16:07:56.964000 | https://github.com/vanderschaarlab/miracle | 16 | Miracle: Causally-aware imputation via learning missing data mechanisms | https://scholar.google.com/scholar?cluster=637656559224861079&hl=en&as_sdt=0,5 | 1 | 2,021 |
Efficient Training of Visual Transformers with Small Datasets | 86 | neurips | 11 | 1 | 2023-06-16 16:07:57.164000 | https://github.com/yhlleo/VTs-Drloc | 124 | Efficient training of visual transformers with small datasets | https://scholar.google.com/scholar?cluster=17891879498080154736&hl=en&as_sdt=0,5 | 3 | 2,021 |
CoFiNet: Reliable Coarse-to-fine Correspondences for Robust PointCloud Registration | 54 | neurips | 8 | 1 | 2023-06-16 16:07:57.365000 | https://github.com/haoyu94/coarse-to-fine-correspondences | 79 | Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration | https://scholar.google.com/scholar?cluster=11101496447247741194&hl=en&as_sdt=0,5 | 7 | 2,021 |
Partial success in closing the gap between human and machine vision | 87 | neurips | 31 | 3 | 2023-06-16 16:07:57.565000 | https://github.com/bethgelab/model-vs-human | 286 | Partial success in closing the gap between human and machine vision | https://scholar.google.com/scholar?cluster=875131557547078483&hl=en&as_sdt=0,44 | 14 | 2,021 |
LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes | 7 | neurips | 2 | 0 | 2023-06-16 16:07:57.765000 | https://github.com/RAIVNLab/LLC | 10 | Llc: Accurate, multi-purpose learnt low-dimensional binary codes | https://scholar.google.com/scholar?cluster=13039200529155817900&hl=en&as_sdt=0,26 | 7 | 2,021 |
Well-tuned Simple Nets Excel on Tabular Datasets | 65 | neurips | 12 | 2 | 2023-06-16 16:07:57.966000 | https://github.com/releaunifreiburg/WellTunedSimpleNets | 61 | Well-tuned simple nets excel on tabular datasets | https://scholar.google.com/scholar?cluster=3278110535551285021&hl=en&as_sdt=0,5 | 0 | 2,021 |
POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples | 17 | neurips | 0 | 0 | 2023-06-16 16:07:58.166000 | https://github.com/lehduong/poodle | 15 | Poodle: Improving few-shot learning via penalizing out-of-distribution samples | https://scholar.google.com/scholar?cluster=3110608132459166392&hl=en&as_sdt=0,5 | 1 | 2,021 |
Densely connected normalizing flows | 27 | neurips | 9 | 0 | 2023-06-16 16:07:58.369000 | https://github.com/matejgrcic/DenseFlow | 32 | Densely connected normalizing flows | https://scholar.google.com/scholar?cluster=12123857522303227293&hl=en&as_sdt=0,32 | 4 | 2,021 |
Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing | 8 | neurips | 0 | 0 | 2023-06-16 16:07:58.569000 | https://github.com/thecharlieblake/snowflake | 4 | Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing | https://scholar.google.com/scholar?cluster=12712996642787863150&hl=en&as_sdt=0,5 | 1 | 2,021 |
VAST: Value Function Factorization with Variable Agent Sub-Teams | 5 | neurips | 1 | 0 | 2023-06-16 16:07:58.769000 | https://github.com/thomyphan/scalable-marl | 4 | Vast: Value function factorization with variable agent sub-teams | https://scholar.google.com/scholar?cluster=15101436546519629155&hl=en&as_sdt=0,3 | 1 | 2,021 |
Multiwavelet-based Operator Learning for Differential Equations | 58 | neurips | 6 | 1 | 2023-06-16 16:07:58.969000 | https://github.com/gaurav71531/mwt-operator | 40 | Multiwavelet-based operator learning for differential equations | https://scholar.google.com/scholar?cluster=15278573285274207764&hl=en&as_sdt=0,5 | 1 | 2,021 |
Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning | 13 | neurips | 6 | 1 | 2023-06-16 16:07:59.170000 | https://github.com/aakashrkaku/intermdiate_layer_matter_ssl | 39 | Intermediate layers matter in momentum contrastive self supervised learning | https://scholar.google.com/scholar?cluster=17990388829355645344&hl=en&as_sdt=0,36 | 2 | 2,021 |
Learning Nonparametric Volterra Kernels with Gaussian Processes | 2 | neurips | 2 | 0 | 2023-06-16 16:07:59.371000 | https://github.com/magnusross/nvkm | 1 | Learning nonparametric Volterra kernels with Gaussian processes | https://scholar.google.com/scholar?cluster=10898264461292575760&hl=en&as_sdt=0,33 | 3 | 2,021 |
DiBS: Differentiable Bayesian Structure Learning | 28 | neurips | 8 | 1 | 2023-06-16 16:07:59.571000 | https://github.com/larslorch/dibs | 35 | Dibs: Differentiable bayesian structure learning | https://scholar.google.com/scholar?cluster=4035014769080983661&hl=en&as_sdt=0,5 | 3 | 2,021 |
Nonparametric estimation of continuous DPPs with kernel methods | 2 | neurips | 0 | 0 | 2023-06-16 16:07:59.771000 | https://github.com/mrfanuel/learningcontinuousdpps.jl | 0 | Nonparametric estimation of continuous DPPs with kernel methods | https://scholar.google.com/scholar?cluster=4870049229735004027&hl=en&as_sdt=0,5 | 2 | 2,021 |
FINE Samples for Learning with Noisy Labels | 37 | neurips | 11 | 1 | 2023-06-16 16:07:59.972000 | https://github.com/Kthyeon/FINE_official | 28 | Fine samples for learning with noisy labels | https://scholar.google.com/scholar?cluster=5795819026441834181&hl=en&as_sdt=0,1 | 3 | 2,021 |
Residual2Vec: Debiasing graph embedding with random graphs | 8 | neurips | 1 | 1 | 2023-06-16 16:08:00.173000 | https://github.com/skojaku/residual2vec | 5 | Residual2Vec: Debiasing graph embedding with random graphs | https://scholar.google.com/scholar?cluster=741770936150407440&hl=en&as_sdt=0,48 | 3 | 2,021 |
Training Neural Networks with Fixed Sparse Masks | 47 | neurips | 1 | 1 | 2023-06-16 16:08:00.372000 | https://github.com/varunnair18/fish | 44 | Training neural networks with fixed sparse masks | https://scholar.google.com/scholar?cluster=16194905137327399007&hl=en&as_sdt=0,3 | 5 | 2,021 |
Learning to Schedule Heuristics in Branch and Bound | 27 | neurips | 0 | 0 | 2023-06-16 16:08:00.573000 | https://github.com/antoniach/heuristic-scheduling | 2 | Learning to schedule heuristics in branch and bound | https://scholar.google.com/scholar?cluster=5910831186806034579&hl=en&as_sdt=0,5 | 1 | 2,021 |
On Training Implicit Models | 31 | neurips | 0 | 0 | 2023-06-16 16:08:00.773000 | https://github.com/gsunshine/phantom_grad | 3 | On training implicit models | https://scholar.google.com/scholar?cluster=15707261069141178694&hl=en&as_sdt=0,33 | 1 | 2,021 |
MLP-Mixer: An all-MLP Architecture for Vision | 1,181 | neurips | 976 | 108 | 2023-06-16 16:08:00.973000 | https://github.com/google-research/vision_transformer | 7,383 | Mlp-mixer: An all-mlp architecture for vision | https://scholar.google.com/scholar?cluster=10553738615668616847&hl=en&as_sdt=0,10 | 83 | 2,021 |
A Framework to Learn with Interpretation | 19 | neurips | 0 | 0 | 2023-06-16 16:08:01.173000 | https://github.com/jayneelparekh/flint | 4 | A framework to learn with interpretation | https://scholar.google.com/scholar?cluster=4070242673228533811&hl=en&as_sdt=0,44 | 2 | 2,021 |
One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective | 34 | neurips | 10 | 2 | 2023-06-16 16:08:01.376000 | https://github.com/kamwoh/orthohash | 86 | One loss for all: Deep hashing with a single cosine similarity based learning objective | https://scholar.google.com/scholar?cluster=2583147407697394986&hl=en&as_sdt=0,47 | 6 | 2,021 |
Discovering and Achieving Goals via World Models | 45 | neurips | 16 | 0 | 2023-06-16 16:08:01.577000 | https://github.com/orybkin/lexa | 76 | Discovering and achieving goals via world models | https://scholar.google.com/scholar?cluster=5829288564563555127&hl=en&as_sdt=0,33 | 5 | 2,021 |
Understanding and Improving Early Stopping for Learning with Noisy Labels | 68 | neurips | 4 | 0 | 2023-06-16 16:08:01.779000 | https://github.com/tmllab/PES | 21 | Understanding and improving early stopping for learning with noisy labels | https://scholar.google.com/scholar?cluster=15957250689455234622&hl=en&as_sdt=0,5 | 1 | 2,021 |
On the Power of Edge Independent Graph Models | 4 | neurips | 1 | 0 | 2023-06-16 16:08:01.985000 | https://github.com/konsotirop/edge_independent_models | 0 | On the power of edge independent graph models | https://scholar.google.com/scholar?cluster=18323628081237600189&hl=en&as_sdt=0,43 | 1 | 2,021 |
Understanding Adaptive, Multiscale Temporal Integration In Deep Speech Recognition Systems | 2 | neurips | 2 | 0 | 2023-06-16 16:08:02.186000 | https://github.com/naplab/pytci | 5 | Understanding adaptive, multiscale temporal integration in deep speech recognition systems | https://scholar.google.com/scholar?cluster=12420066153878945080&hl=en&as_sdt=0,5 | 3 | 2,021 |
VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer | 16 | neurips | 8 | 1 | 2023-06-16 16:08:02.395000 | https://github.com/zinengtang/VidLanKD | 56 | Vidlankd: Improving language understanding via video-distilled knowledge transfer | https://scholar.google.com/scholar?cluster=7463854148128804617&hl=en&as_sdt=0,5 | 4 | 2,021 |
Detecting Individual Decision-Making Style: Exploring Behavioral Stylometry in Chess | 14 | neurips | 0 | 0 | 2023-06-16 16:08:02.607000 | https://github.com/csslab/behavioral-stylometry | 13 | Detecting individual decision-making style: Exploring behavioral stylometry in chess | https://scholar.google.com/scholar?cluster=5114217380206270337&hl=en&as_sdt=0,10 | 4 | 2,021 |
AutoGEL: An Automated Graph Neural Network with Explicit Link Information | 19 | neurips | 1 | 0 | 2023-06-16 16:08:02.809000 | https://github.com/zwangeo/autogel | 8 | Autogel: An automated graph neural network with explicit link information | https://scholar.google.com/scholar?cluster=17230311752348468985&hl=en&as_sdt=0,5 | 2 | 2,021 |
Recognizing Vector Graphics without Rasterization | 6 | neurips | 12 | 2 | 2023-06-16 16:08:03.009000 | https://github.com/microsoft/YOLaT-VectorGraphicsRecognition | 59 | Recognizing vector graphics without rasterization | https://scholar.google.com/scholar?cluster=15241098815827282500&hl=en&as_sdt=0,5 | 7 | 2,021 |
On Episodes, Prototypical Networks, and Few-Shot Learning | 45 | neurips | 4 | 1 | 2023-06-16 16:08:03.210000 | https://github.com/fiveai/on-episodes-fsl | 26 | On episodes, prototypical networks, and few-shot learning | https://scholar.google.com/scholar?cluster=7793453768259983774&hl=en&as_sdt=0,5 | 7 | 2,021 |
CHIP: CHannel Independence-based Pruning for Compact Neural Networks | 51 | neurips | 5 | 3 | 2023-06-16 16:08:03.411000 | https://github.com/eclipsess/chip_neurips2021 | 18 | Chip: Channel independence-based pruning for compact neural networks | https://scholar.google.com/scholar?cluster=8136547128458704716&hl=en&as_sdt=0,33 | 3 | 2,021 |
Active Offline Policy Selection | 11 | neurips | 2 | 0 | 2023-06-16 16:08:03.611000 | https://github.com/deepmind/active_ops | 29 | Active offline policy selection | https://scholar.google.com/scholar?cluster=11479789843875532495&hl=en&as_sdt=0,5 | 5 | 2,021 |
Information-theoretic generalization bounds for black-box learning algorithms | 19 | neurips | 1 | 0 | 2023-06-16 16:08:03.812000 | https://github.com/hrayrhar/f-cmi | 3 | Information-theoretic generalization bounds for black-box learning algorithms | https://scholar.google.com/scholar?cluster=17028084888610967844&hl=en&as_sdt=0,5 | 5 | 2,021 |
Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation | 6 | neurips | 4 | 0 | 2023-06-16 16:08:04.038000 | https://github.com/mingcv/ytmt-strategy | 39 | Trash or treasure? an interactive dual-stream strategy for single image reflection separation | https://scholar.google.com/scholar?cluster=18395690050017455415&hl=en&as_sdt=0,5 | 2 | 2,021 |
Rot-Pro: Modeling Transitivity by Projection in Knowledge Graph Embedding | 22 | neurips | 1 | 0 | 2023-06-16 16:08:04.239000 | https://github.com/tewiSong/Rot-Pro | 10 | Rot-pro: Modeling transitivity by projection in knowledge graph embedding | https://scholar.google.com/scholar?cluster=11215289012161533976&hl=en&as_sdt=0,33 | 1 | 2,021 |
Modular Gaussian Processes for Transfer Learning | 4 | neurips | 1 | 0 | 2023-06-16 16:08:04.439000 | https://github.com/pmorenoz/modulargp | 13 | Modular Gaussian processes for transfer learning | https://scholar.google.com/scholar?cluster=2796305591602379959&hl=en&as_sdt=0,33 | 2 | 2,021 |
Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering | 68 | neurips | 11 | 8 | 2023-06-16 16:08:04.639000 | https://github.com/YoungJoongUNC/Neural_Human_Performer | 112 | Neural human performer: Learning generalizable radiance fields for human performance rendering | https://scholar.google.com/scholar?cluster=7942977182226378581&hl=en&as_sdt=0,5 | 10 | 2,021 |
Asymptotics of representation learning in finite Bayesian neural networks | 21 | neurips | 1 | 0 | 2023-06-16 16:08:04.839000 | https://github.com/pehlevan-group/finite-width-bayesian | 2 | Asymptotics of representation learning in finite Bayesian neural networks | https://scholar.google.com/scholar?cluster=3625210166021367573&hl=en&as_sdt=0,33 | 2 | 2,021 |
Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? | 20 | neurips | 2 | 1 | 2023-06-16 16:08:05.050000 | https://github.com/dmirlab-group/dsan | 19 | Domain adaptation with invariant representation learning: What transformations to learn? | https://scholar.google.com/scholar?cluster=10398669082966342781&hl=en&as_sdt=0,33 | 3 | 2,021 |
CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation | 91 | neurips | 40 | 3 | 2023-06-16 16:08:05.252000 | https://github.com/ermongroup/csdi | 148 | CSDI: Conditional score-based diffusion models for probabilistic time series imputation | https://scholar.google.com/scholar?cluster=3890787205229522603&hl=en&as_sdt=0,5 | 8 | 2,021 |
Efficient hierarchical Bayesian inference for spatio-temporal regression models in neuroimaging | 7 | neurips | 2 | 0 | 2023-06-16 16:08:05.452000 | https://github.com/alihashemi-ai/dugh-neurips-2021 | 4 | Efficient hierarchical Bayesian inference for spatio-temporal regression models in neuroimaging | https://scholar.google.com/scholar?cluster=6986870933699161127&hl=en&as_sdt=0,11 | 1 | 2,021 |
Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels | 13 | neurips | 0 | 0 | 2023-06-16 16:08:05.653000 | https://github.com/skarp/local-signal-adaptivity | 1 | Local signal adaptivity: Provable feature learning in neural networks beyond kernels | https://scholar.google.com/scholar?cluster=5974588458999600841&hl=en&as_sdt=0,39 | 1 | 2,021 |
Symbolic Regression via Deep Reinforcement Learning Enhanced Genetic Programming Seeding | 12 | neurips | 87 | 8 | 2023-06-16 16:08:05.853000 | https://github.com/brendenpetersen/deep-symbolic-optimization | 374 | Symbolic regression via deep reinforcement learning enhanced genetic programming seeding | https://scholar.google.com/scholar?cluster=17727261586296192952&hl=en&as_sdt=0,5 | 12 | 2,021 |
Choose a Transformer: Fourier or Galerkin | 49 | neurips | 23 | 1 | 2023-06-16 16:08:06.054000 | https://github.com/scaomath/galerkin-transformer | 172 | Choose a transformer: Fourier or galerkin | https://scholar.google.com/scholar?cluster=8571374970772054230&hl=en&as_sdt=0,43 | 6 | 2,021 |
Canonical Capsules: Self-Supervised Capsules in Canonical Pose | 33 | neurips | 21 | 1 | 2023-06-16 16:08:06.254000 | https://github.com/canonical-capsules/canonical-capsules | 168 | Canonical capsules: Self-supervised capsules in canonical pose | https://scholar.google.com/scholar?cluster=8210427563278866334&hl=en&as_sdt=0,5 | 15 | 2,021 |
Dynamics-regulated kinematic policy for egocentric pose estimation | 27 | neurips | 5 | 0 | 2023-06-16 16:08:06.455000 | https://github.com/KlabCMU/kin-poly | 64 | Dynamics-regulated kinematic policy for egocentric pose estimation | https://scholar.google.com/scholar?cluster=3653129200622032279&hl=en&as_sdt=0,33 | 8 | 2,021 |
Not All Low-Pass Filters are Robust in Graph Convolutional Networks | 21 | neurips | 1 | 1 | 2023-06-16 16:08:06.655000 | https://github.com/swiftieh/lfr | 8 | Not all low-pass filters are robust in graph convolutional networks | https://scholar.google.com/scholar?cluster=931846674338665597&hl=en&as_sdt=0,47 | 2 | 2,021 |
Counterfactual Maximum Likelihood Estimation for Training Deep Networks | 3 | neurips | 1 | 1 | 2023-06-16 16:08:06.856000 | https://github.com/WANGXinyiLinda/CMLE | 9 | Counterfactual Maximum Likelihood Estimation for Training Deep Networks | https://scholar.google.com/scholar?cluster=12195718352241039351&hl=en&as_sdt=0,34 | 1 | 2,021 |
Using Random Effects to Account for High-Cardinality Categorical Features and Repeated Measures in Deep Neural Networks | 7 | neurips | 3 | 1 | 2023-06-16 16:08:07.056000 | https://github.com/gsimchoni/lmmnn | 15 | Using random effects to account for high-cardinality categorical features and repeated measures in deep neural networks | https://scholar.google.com/scholar?cluster=6823564351084758904&hl=en&as_sdt=0,3 | 2 | 2,021 |
Learning the optimal Tikhonov regularizer for inverse problems | 10 | neurips | 1 | 0 | 2023-06-16 16:08:07.256000 | https://github.com/LearnTikhonov/Code | 2 | Learning the optimal Tikhonov regularizer for inverse problems | https://scholar.google.com/scholar?cluster=4351597932105828079&hl=en&as_sdt=0,3 | 1 | 2,021 |
NovelD: A Simple yet Effective Exploration Criterion | 27 | neurips | 4 | 1 | 2023-06-16 16:08:07.456000 | https://github.com/tianjunz/NovelD | 32 | Noveld: A simple yet effective exploration criterion | https://scholar.google.com/scholar?cluster=5494596245419796169&hl=en&as_sdt=0,5 | 3 | 2,021 |
Second-Order Neural ODE Optimizer | 8 | neurips | 7 | 0 | 2023-06-16 16:08:07.657000 | https://github.com/ghliu/snopt | 40 | Second-order neural ode optimizer | https://scholar.google.com/scholar?cluster=440731558768338090&hl=en&as_sdt=0,26 | 2 | 2,021 |
Dense Unsupervised Learning for Video Segmentation | 15 | neurips | 21 | 2 | 2023-06-16 16:08:07.858000 | https://github.com/visinf/dense-ulearn-vos | 178 | Dense unsupervised learning for video segmentation | https://scholar.google.com/scholar?cluster=4698820805615701905&hl=en&as_sdt=0,5 | 8 | 2,021 |
Charting and Navigating the Space of Solutions for Recurrent Neural Networks | 8 | neurips | 1 | 0 | 2023-06-16 16:08:08.059000 | https://github.com/eliaturner/space-of-solutions-rnn | 0 | Charting and navigating the space of solutions for recurrent neural networks | https://scholar.google.com/scholar?cluster=1383134726251772649&hl=en&as_sdt=0,5 | 2 | 2,021 |
Reusing Combinatorial Structure: Faster Iterative Projections over Submodular Base Polytopes | 2 | neurips | 0 | 0 | 2023-06-16 16:08:08.260000 | https://github.com/jaimoondra/submodular-polytope-projections | 0 | Reusing combinatorial structure: Faster iterative projections over submodular base polytopes | https://scholar.google.com/scholar?cluster=4313712568936757155&hl=en&as_sdt=0,39 | 2 | 2,021 |
Constrained Optimization to Train Neural Networks on Critical and Under-Represented Classes | 8 | neurips | 0 | 0 | 2023-06-16 16:08:08.461000 | https://github.com/salusanga/alm-dnn | 12 | Constrained optimization to train neural networks on critical and under-represented classes | https://scholar.google.com/scholar?cluster=6071197627058372251&hl=en&as_sdt=0,5 | 1 | 2,021 |
Dealing With Misspecification In Fixed-Confidence Linear Top-m Identification | 6 | neurips | 0 | 0 | 2023-06-16 16:08:08.661000 | https://github.com/clreda/misspecified-top-m | 0 | Dealing with misspecification in fixed-confidence linear top-m identification | https://scholar.google.com/scholar?cluster=17658923978445131586&hl=en&as_sdt=0,5 | 1 | 2,021 |
Set Prediction in the Latent Space | 2 | neurips | 0 | 0 | 2023-06-16 16:08:08.862000 | https://github.com/phizaz/latent-set-prediction | 5 | Set prediction in the latent space | https://scholar.google.com/scholar?cluster=7307560885637402716&hl=en&as_sdt=0,5 | 3 | 2,021 |
SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision | 6 | neurips | 6 | 2 | 2023-06-16 16:08:09.062000 | https://github.com/deepmind/dm_hamiltonian_dynamics_suite | 28 | Symetric: measuring the quality of learnt hamiltonian dynamics inferred from vision | https://scholar.google.com/scholar?cluster=17033719678461609846&hl=en&as_sdt=0,15 | 5 | 2,021 |
Learning with Holographic Reduced Representations | 12 | neurips | 0 | 2 | 2023-06-16 16:08:09.262000 | https://github.com/NeuromorphicComputationResearchProgram/Learning-with-Holographic-Reduced-Representations | 15 | Learning with holographic reduced representations | https://scholar.google.com/scholar?cluster=17605710809418918656&hl=en&as_sdt=0,18 | 2 | 2,021 |
Learning Barrier Certificates: Towards Safe Reinforcement Learning with Zero Training-time Violations | 22 | neurips | 6 | 1 | 2023-06-16 16:08:09.463000 | https://github.com/roosephu/crabs | 12 | Learning barrier certificates: Towards safe reinforcement learning with zero training-time violations | https://scholar.google.com/scholar?cluster=14400533417780078206&hl=en&as_sdt=0,33 | 2 | 2,021 |
On the Second-order Convergence Properties of Random Search Methods | 3 | neurips | 0 | 0 | 2023-06-16 16:08:09.663000 | https://github.com/adamsolomou/second-order-random-search | 0 | On the second-order convergence properties of random search methods | https://scholar.google.com/scholar?cluster=13871613628804983300&hl=en&as_sdt=0,33 | 1 | 2,021 |
A Max-Min Entropy Framework for Reinforcement Learning | 5 | neurips | 0 | 0 | 2023-06-16 16:08:09.865000 | https://github.com/seungyulhan/mme | 3 | A max-min entropy framework for reinforcement learning | https://scholar.google.com/scholar?cluster=7183103060961218750&hl=en&as_sdt=0,50 | 1 | 2,021 |
Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods | 21 | neurips | 4 | 0 | 2023-06-16 16:08:10.066000 | https://github.com/desi-ivanova/idad | 12 | Implicit deep adaptive design: policy-based experimental design without likelihoods | https://scholar.google.com/scholar?cluster=8438101725055656373&hl=en&as_sdt=0,33 | 1 | 2,021 |
Generalization Bounds For Meta-Learning: An Information-Theoretic Analysis | 19 | neurips | 0 | 0 | 2023-06-16 16:08:10.266000 | https://github.com/livreq/meta-sgld | 1 | Generalization bounds for meta-learning: An information-theoretic analysis | https://scholar.google.com/scholar?cluster=15486384152648151886&hl=en&as_sdt=0,5 | 1 | 2,021 |
Identification of the Generalized Condorcet Winner in Multi-dueling Bandits | 2 | neurips | 0 | 0 | 2023-06-16 16:08:10.470000 | https://github.com/bjoernhad/gcwidentification | 1 | Identification of the generalized Condorcet winner in multi-dueling bandits | https://scholar.google.com/scholar?cluster=4702390528340000199&hl=en&as_sdt=0,38 | 1 | 2,021 |
Robust Inverse Reinforcement Learning under Transition Dynamics Mismatch | 12 | neurips | 1 | 0 | 2023-06-16 16:08:10.671000 | https://github.com/lviano/robustmce_irl | 3 | Robust inverse reinforcement learning under transition dynamics mismatch | https://scholar.google.com/scholar?cluster=6158260538019956069&hl=en&as_sdt=0,5 | 1 | 2,021 |
Post-processing for Individual Fairness | 32 | neurips | 1 | 0 | 2023-06-16 16:08:10.871000 | https://github.com/felix-petersen/fairness-post-processing | 5 | Post-processing for individual fairness | https://scholar.google.com/scholar?cluster=4902734240414782212&hl=en&as_sdt=0,33 | 1 | 2,021 |
OpenMatch: Open-Set Semi-supervised Learning with Open-set Consistency Regularization | 26 | neurips | 11 | 6 | 2023-06-16 16:08:11.071000 | https://github.com/VisionLearningGroup/OP_Match | 45 | Openmatch: Open-set semi-supervised learning with open-set consistency regularization | https://scholar.google.com/scholar?cluster=2362582259050725811&hl=en&as_sdt=0,44 | 2 | 2,021 |
End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering | 63 | neurips | 10 | 1 | 2023-06-16 16:08:11.271000 | https://github.com/DevSinghSachan/emdr2 | 96 | End-to-end training of multi-document reader and retriever for open-domain question answering | https://scholar.google.com/scholar?cluster=6640291202097102131&hl=en&as_sdt=0,33 | 14 | 2,021 |
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis | 36 | neurips | 4 | 0 | 2023-06-16 16:08:11.471000 | https://github.com/fel-thomas/Sobol-Attribution-Method | 24 | Look at the variance! efficient black-box explanations with sobol-based sensitivity analysis | https://scholar.google.com/scholar?cluster=18305760760422611286&hl=en&as_sdt=0,33 | 2 | 2,021 |
PatchGame: Learning to Signal Mid-level Patches in Referential Games | 3 | neurips | 2 | 0 | 2023-06-16 16:08:11.672000 | https://github.com/kampta/patchgame | 22 | PatchGame: learning to signal mid-level patches in referential games | https://scholar.google.com/scholar?cluster=15355548784664334020&hl=en&as_sdt=0,5 | 3 | 2,021 |
Implicit Generative Copulas | 9 | neurips | 1 | 0 | 2023-06-16 16:08:11.873000 | https://github.com/timcjanke/igc | 4 | Implicit generative copulas | https://scholar.google.com/scholar?cluster=9521615669512014539&hl=en&as_sdt=0,33 | 1 | 2,021 |
Tensor Normal Training for Deep Learning Models | 10 | neurips | 1 | 0 | 2023-06-16 16:08:12.075000 | https://github.com/renyiryry/tnt_neurips_2021 | 4 | Tensor normal training for deep learning models | https://scholar.google.com/scholar?cluster=3326882924041786200&hl=en&as_sdt=0,47 | 2 | 2,021 |
Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning | 25 | neurips | 1 | 2 | 2023-06-16 16:08:12.279000 | https://github.com/cuis15/FCFL | 14 | Addressing algorithmic disparity and performance inconsistency in federated learning | https://scholar.google.com/scholar?cluster=11506353861688134805&hl=en&as_sdt=0,26 | 1 | 2,021 |
Moiré Attack (MA): A New Potential Risk of Screen Photos | 4 | neurips | 8 | 3 | 2023-06-16 16:08:12.483000 | https://github.com/Dantong88/Moire_Attack | 25 | Moiré attack (ma): A new potential risk of screen photos | https://scholar.google.com/scholar?cluster=5204824031822855869&hl=en&as_sdt=0,50 | 1 | 2,021 |
Lattice partition recovery with dyadic CART | 3 | neurips | 0 | 0 | 2023-06-16 16:08:12.684000 | https://github.com/hernanmp/partition_recovery | 0 | Lattice partition recovery with dyadic CART | https://scholar.google.com/scholar?cluster=194222908834003497&hl=en&as_sdt=0,33 | 2 | 2,021 |
You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection | 132 | neurips | 101 | 12 | 2023-06-16 16:08:12.886000 | https://github.com/hustvl/YOLOS | 716 | You only look at one sequence: Rethinking transformer in vision through object detection | https://scholar.google.com/scholar?cluster=8455459026871994587&hl=en&as_sdt=0,21 | 22 | 2,021 |
Learning to delegate for large-scale vehicle routing | 33 | neurips | 11 | 1 | 2023-06-16 16:08:13.086000 | https://github.com/mit-wu-lab/learning-to-delegate | 59 | Learning to delegate for large-scale vehicle routing | https://scholar.google.com/scholar?cluster=3486762460110339204&hl=en&as_sdt=0,33 | 2 | 2,021 |
Towards Context-Agnostic Learning Using Synthetic Data | 3 | neurips | 0 | 0 | 2023-06-16 16:08:13.286000 | https://github.com/charlesjin/synthetic_data | 0 | Towards Context-Agnostic Learning Using Synthetic Data | https://scholar.google.com/scholar?cluster=5766633238116465358&hl=en&as_sdt=0,38 | 2 | 2,021 |
Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers | 3 | neurips | 0 | 0 | 2023-06-16 16:08:13.486000 | https://github.com/blairbilodeau/neurips-2021 | 0 | Minimax optimal quantile and semi-adversarial regret via root-logarithmic regularizers | https://scholar.google.com/scholar?cluster=6590407016231039594&hl=en&as_sdt=0,33 | 1 | 2,021 |
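
The rows above map directly onto a tabular dataframe with the schema listed at the top of the page. The snippet below is a minimal sketch of loading and querying such an export with pandas; `neurips_2021_repos.csv` is a hypothetical filename standing in for wherever the table is actually exported, and the citation threshold of 50 is an arbitrary example value.

```python
# Minimal sketch: load a CSV export of this table and run a simple query.
# "neurips_2021_repos.csv" is a hypothetical filename; the actual export
# location is not specified on this page.
import pandas as pd

df = pd.read_csv(
    "neurips_2021_repos.csv",
    thousands=",",  # counts such as "1,181" and years like "2,021" use comma separators
)

# Expected columns, matching the schema at the top of the page.
expected = [
    "title", "citations_google_scholar", "conference", "forks", "issues",
    "lastModified", "repo_url", "stars", "title_google_scholar",
    "url_google_scholar", "watchers", "year",
]
assert list(df.columns) == expected, "column layout differs from the schema above"

# Example query: the ten most-starred repositories among papers with at
# least 50 Google Scholar citations.
popular = (
    df[df["citations_google_scholar"] >= 50]
    .sort_values("stars", ascending=False)
    .head(10)[["title", "stars", "forks", "repo_url"]]
)
print(popular.to_string(index=False))
```

The same file could also be loaded with the Hugging Face `datasets` library via `load_dataset("csv", data_files="neurips_2021_repos.csv")` if a dataset object is preferred over a dataframe.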