| title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
|---|---|---|---|---|---|---|---|---|---|---|---|
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks | 95 | icml | 19 | 0 | 2023-06-17 04:14:08.998000 | https://github.com/aks2203/poisoning-benchmark | 127 | Just how toxic is data poisoning? a unified benchmark for backdoor and data poisoning attacks | https://scholar.google.com/scholar?cluster=15855049854905847899&hl=en&as_sdt=0,34 | 6 | 2021 |
Learning Intra-Batch Connections for Deep Metric Learning | 33 | icml | 8 | 2 | 2023-06-17 04:14:09.202000 | https://github.com/dvl-tum/intra_batch | 43 | Learning intra-batch connections for deep metric learning | https://scholar.google.com/scholar?cluster=10851391941882516865&hl=en&as_sdt=0,33 | 3 | 2021 |
Personalized Federated Learning using Hypernetworks | 124 | icml | 24 | 0 | 2023-06-17 04:14:09.405000 | https://github.com/AvivSham/pFedHN | 138 | Personalized federated learning using hypernetworks | https://scholar.google.com/scholar?cluster=9364037892005853502&hl=en&as_sdt=0,5 | 4 | 2021 |
Backdoor Scanning for Deep Neural Networks through K-Arm Optimization | 49 | icml | 4 | 0 | 2023-06-17 04:14:09.607000 | https://github.com/PurduePAML/K-ARM_Backdoor_Optimization | 11 | Backdoor scanning for deep neural networks through k-arm optimization | https://scholar.google.com/scholar?cluster=18424002237979010229&hl=en&as_sdt=0,5 | 9 | 2021 |
State Relevance for Off-Policy Evaluation | 2 | icml | 0 | 0 | 2023-06-17 04:14:09.810000 | https://github.com/dtak/osiris | 1 | State relevance for off-policy evaluation | https://scholar.google.com/scholar?cluster=1184988858503207705&hl=en&as_sdt=0,25 | 2 | 2021 |
Learning Gradient Fields for Molecular Conformation Generation | 96 | icml | 28 | 7 | 2023-06-17 04:14:10.013000 | https://github.com/DeepGraphLearning/ConfGF | 131 | Learning gradient fields for molecular conformation generation | https://scholar.google.com/scholar?cluster=1418815604364379894&hl=en&as_sdt=0,47 | 9 | 2021 |
Deeply-Debiased Off-Policy Interval Estimation | 22 | icml | 3 | 0 | 2023-06-17 04:14:10.216000 | https://github.com/RunzheStat/D2OPE | 9 | Deeply-debiased off-policy interval estimation | https://scholar.google.com/scholar?cluster=16793961424384021624&hl=en&as_sdt=0,33 | 2 | 2021 |
On Characterizing GAN Convergence Through Proximal Duality Gap | 5 | icml | 2 | 1 | 2023-06-17 04:14:10.421000 | https://github.com/proximal-dg/proximal_dg | 9 | On characterizing gan convergence through proximal duality gap | https://scholar.google.com/scholar?cluster=16988175738385537443&hl=en&as_sdt=0,44 | 3 | 2021 |
PopSkipJump: Decision-Based Attack for Probabilistic Classifiers | 1 | icml | 0 | 0 | 2023-06-17 04:14:10.626000 | https://github.com/cjsg/PopSkipJump | 4 | Popskipjump: Decision-based attack for probabilistic classifiers | https://scholar.google.com/scholar?cluster=8512283764080476060&hl=en&as_sdt=0,39 | 1 | 2021 |
Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances | 31 | icml | 1 | 0 | 2023-06-17 04:14:10.856000 | https://github.com/jbrea/symmetrysaddles.jl | 0 | Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances | https://scholar.google.com/scholar?cluster=6069341273217919605&hl=en&as_sdt=0,5 | 2 | 2021 |
Skew Orthogonal Convolutions | 34 | icml | 6 | 1 | 2023-06-17 04:14:11.067000 | https://github.com/singlasahil14/SOC | 12 | Skew orthogonal convolutions | https://scholar.google.com/scholar?cluster=17464482494309423430&hl=en&as_sdt=0,39 | 1 | 2021 |
Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks | 2 | icml | 3 | 0 | 2023-06-17 04:14:11.274000 | https://github.com/srsohn/shortest-path-rl | 11 | Shortest-path constrained reinforcement learning for sparse reward tasks | https://scholar.google.com/scholar?cluster=5761539218622911437&hl=en&as_sdt=0,10 | 5 | 2021 |
Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving | 9 | icml | 0 | 0 | 2023-06-17 04:14:11.476000 | https://github.com/ermongroup/fast_feedforward_computation | 18 | Accelerating feedforward computation via parallel nonlinear equation solving | https://scholar.google.com/scholar?cluster=9587891109353811026&hl=en&as_sdt=0,11 | 7 | 2021 |
PC-MLP: Model-based Reinforcement Learning with Policy Cover Guided Exploration | 13 | icml | 3 | 0 | 2023-06-17 04:14:11.686000 | https://github.com/yudasong/PCMLP | 3 | Pc-mlp: Model-based reinforcement learning with policy cover guided exploration | https://scholar.google.com/scholar?cluster=8561706312159715447&hl=en&as_sdt=0,36 | 2 | 2021 |
Decoupling Representation Learning from Reinforcement Learning | 220 | icml | 326 | 63 | 2023-06-17 04:14:11.889000 | https://github.com/astooke/rlpyt | 2,141 | Decoupling representation learning from reinforcement learning | https://scholar.google.com/scholar?cluster=4351064812627090102&hl=en&as_sdt=0,47 | 53 | 2021 |
Not All Memories are Created Equal: Learning to Forget by Expiring | 24 | icml | 17 | 2 | 2023-06-17 04:14:12.093000 | https://github.com/facebookresearch/transformer-sequential | 134 | Not all memories are created equal: Learning to forget by expiring | https://scholar.google.com/scholar?cluster=18323176449983399592&hl=en&as_sdt=0,11 | 10 | 2021 |
Nondeterminism and Instability in Neural Network Optimization | 17 | icml | 0 | 1 | 2023-06-17 04:14:12.304000 | https://github.com/ceciliaresearch/nondeterminism_instability | 1 | Nondeterminism and instability in neural network optimization | https://scholar.google.com/scholar?cluster=3721428237004074314&hl=en&as_sdt=0,44 | 1 | 2021 |
What Makes for End-to-End Object Detection? | 79 | icml | 74 | 3 | 2023-06-17 04:14:12.506000 | https://github.com/PeizeSun/OneNet | 633 | What makes for end-to-end object detection? | https://scholar.google.com/scholar?cluster=17182921757850029040&hl=en&as_sdt=0,4 | 20 | 2021 |
DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning | 29 | icml | 2 | 0 | 2023-06-17 04:14:12.709000 | https://github.com/j3soon/dfac | 22 | DFAC framework: Factorizing the value function via quantile mixture for multi-agent distributional Q-learning | https://scholar.google.com/scholar?cluster=13269837837943676067&hl=en&as_sdt=0,5 | 3 | 2021 |
Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition | 5 | icml | 0 | 0 | 2023-06-17 04:14:12.911000 | https://github.com/ssydasheng/Harmonic-Kernel-Decomposition | 9 | Scalable variational gaussian processes via harmonic kernel decomposition | https://scholar.google.com/scholar?cluster=5527723102830248655&hl=en&as_sdt=0,44 | 1 | 2021 |
Model-Targeted Poisoning Attacks with Provable Convergence | 24 | icml | 4 | 3 | 2023-06-17 04:14:13.114000 | https://github.com/suyeecav/model-targeted-poisoning | 9 | Model-targeted poisoning attacks with provable convergence | https://scholar.google.com/scholar?cluster=1651990358981165914&hl=en&as_sdt=0,5 | 3 | 2021 |
Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap | 21 | icml | 4 | 1 | 2023-06-17 04:14:13.316000 | https://github.com/gkswamy98/pillbox | 16 | Of moments and matching: A game-theoretic framework for closing the imitation gap | https://scholar.google.com/scholar?cluster=7938694148424637226&hl=en&as_sdt=0,49 | 2 | 2021 |
Parallel tempering on optimized paths | 10 | icml | 2 | 0 | 2023-06-17 04:14:13.520000 | https://github.com/vittrom/PT-pathoptim | 2 | Parallel tempering on optimized paths | https://scholar.google.com/scholar?cluster=14697506612657062549&hl=en&as_sdt=0,5 | 1 | 2021 |
Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training | 19 | icml | 3 | 0 | 2023-06-17 04:14:13.736000 | https://github.com/stanford-futuredata/sinkhorn-label-allocation | 50 | Sinkhorn label allocation: Semi-supervised classification via annealed self-training | https://scholar.google.com/scholar?cluster=13645843302447766832&hl=en&as_sdt=0,44 | 7 | 2021 |
Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts | 10 | icml | 2 | 0 | 2023-06-17 04:14:13.939000 | https://github.com/RAO-EPFL/DR-DA | 3 | Sequential domain adaptation by synthesizing distributionally robust experts | https://scholar.google.com/scholar?cluster=6930689921879255394&hl=en&as_sdt=0,13 | 2 | 2021 |
T-SCI: A Two-Stage Conformal Inference Algorithm with Guaranteed Coverage for Cox-MLP | 4 | icml | 0 | 0 | 2023-06-17 04:14:14.141000 | https://github.com/thutzr/Cox | 1 | T-sci: A two-stage conformal inference algorithm with guaranteed coverage for cox-mlp | https://scholar.google.com/scholar?cluster=1012431253971456969&hl=en&as_sdt=0,39 | 2 | 2021 |
Moreau-Yosida $f$-divergences | 2 | icml | 0 | 0 | 2023-06-17 04:14:14.358000 | https://github.com/renyi-ai/moreau-yosida-f-divergences | 2 | Moreau-Yosida f-divergences | https://scholar.google.com/scholar?cluster=3652869154522690970&hl=en&as_sdt=0,5 | 2 | 2021 |
Training data-efficient image transformers & distillation through attention | 3,348 | icml | 516 | 12 | 2023-06-17 04:14:14.561000 | https://github.com/facebookresearch/deit | 3,450 | Training data-efficient image transformers & distillation through attention | https://scholar.google.com/scholar?cluster=16235705232339507184&hl=en&as_sdt=0,48 | 48 | 2021 |
Conservative Objective Models for Effective Offline Model-Based Optimization | 36 | icml | 7 | 2 | 2023-06-17 04:14:14.764000 | https://github.com/brandontrabucco/design-baselines | 42 | Conservative objective models for effective offline model-based optimization | https://scholar.google.com/scholar?cluster=10951629581873877852&hl=en&as_sdt=0,10 | 4 | 2021 |
On Disentangled Representations Learned from Correlated Data | 71 | icml | 5 | 0 | 2023-06-17 04:14:14.966000 | https://github.com/ftraeuble/disentanglement_lib | 10 | On disentangled representations learned from correlated data | https://scholar.google.com/scholar?cluster=10644866140945749570&hl=en&as_sdt=0,33 | 0 | 2021 |
A New Formalism, Method and Open Issues for Zero-Shot Coordination | 16 | icml | 0 | 0 | 2023-06-17 04:14:15.169000 | https://github.com/johannestreutlein/op-tie-breaking | 4 | A new formalism, method and open issues for zero-shot coordination | https://scholar.google.com/scholar?cluster=7081499741440160815&hl=en&as_sdt=0,10 | 1 | 2021 |
Learning a Universal Template for Few-shot Dataset Generalization | 49 | icml | 136 | 44 | 2023-06-17 04:14:15.371000 | https://github.com/google-research/meta-dataset | 698 | Learning a universal template for few-shot dataset generalization | https://scholar.google.com/scholar?cluster=1180369253723418240&hl=en&as_sdt=0,45 | 24 | 2021 |
Provable Meta-Learning of Linear Representations | 127 | icml | 1 | 0 | 2023-06-17 04:14:15.573000 | https://github.com/nileshtrip/MTL | 2 | Provable meta-learning of linear representations | https://scholar.google.com/scholar?cluster=14454744225976907789&hl=en&as_sdt=0,36 | 2 | 2021 |
LTL2Action: Generalizing LTL Instructions for Multi-Task RL | 43 | icml | 4 | 0 | 2023-06-17 04:14:15.777000 | https://github.com/LTL2Action/LTL2Action | 16 | Ltl2action: Generalizing ltl instructions for multi-task rl | https://scholar.google.com/scholar?cluster=14511888964718858114&hl=en&as_sdt=0,5 | 1 | 2021 |
Online Graph Dictionary Learning | 32 | icml | 6 | 0 | 2023-06-17 04:14:15.981000 | https://github.com/cedricvincentcuaz/GDL | 12 | Online graph dictionary learning | https://scholar.google.com/scholar?cluster=7527452774562329300&hl=en&as_sdt=0,33 | 1 | 2021 |
Efficient Training of Robust Decision Trees Against Adversarial Examples | 18 | icml | 8 | 0 | 2023-06-17 04:14:16.184000 | https://github.com/tudelft-cda-lab/GROOT | 18 | Efficient training of robust decision trees against adversarial examples | https://scholar.google.com/scholar?cluster=9227298780298647203&hl=en&as_sdt=0,33 | 5 | 2021 |
Object Segmentation Without Labels with Large-Scale Generative Models | 28 | icml | 30 | 3 | 2023-06-17 04:14:16.387000 | https://github.com/anvoynov/BigGANsAreWatching | 118 | Object segmentation without labels with large-scale generative models | https://scholar.google.com/scholar?cluster=7466808437204273550&hl=en&as_sdt=0,5 | 7 | 2021 |
Principal Component Hierarchy for Sparse Quadratic Programs | 3 | icml | 0 | 1 | 2023-06-17 04:14:16.591000 | https://github.com/RVreugdenhil/sparseQP | 3 | Principal component hierarchy for sparse quadratic programs | https://scholar.google.com/scholar?cluster=2335943370788592099&hl=en&as_sdt=0,5 | 2 | 2021 |
Safe Reinforcement Learning Using Advantage-Based Intervention | 26 | icml | 5 | 1 | 2023-06-17 04:14:16.794000 | https://github.com/nolanwagener/safe_rl | 18 | Safe reinforcement learning using advantage-based intervention | https://scholar.google.com/scholar?cluster=5048043466827651236&hl=en&as_sdt=0,33 | 1 | 2021 |
Learning and Planning in Average-Reward Markov Decision Processes | 38 | icml | 0 | 0 | 2023-06-17 04:14:16.998000 | https://github.com/abhisheknaik96/average-reward-methods | 12 | Learning and planning in average-reward markov decision processes | https://scholar.google.com/scholar?cluster=750901868273869826&hl=en&as_sdt=0,47 | 1 | 2021 |
Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces | 27 | icml | 5 | 0 | 2023-06-17 04:14:17.202000 | https://github.com/xingchenwan/Casmopolitan | 21 | Think global and act local: Bayesian optimisation over high-dimensional categorical and mixed search spaces | https://scholar.google.com/scholar?cluster=6765216544866118683&hl=en&as_sdt=0,47 | 1 | 2021 |
Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model | 24 | icml | 1 | 2 | 2023-06-17 04:14:17.405000 | https://github.com/zwang84/zsdb3kd | 10 | Zero-shot knowledge distillation from a decision-based black-box model | https://scholar.google.com/scholar?cluster=7908835679548457764&hl=en&as_sdt=0,33 | 3 | 2021 |
Fast Algorithms for Stackelberg Prediction Game with Least Squares Loss | 5 | icml | 2 | 0 | 2023-06-17 04:14:17.607000 | https://github.com/JialiWang12/SPGLS | 2 | Fast algorithms for stackelberg prediction game with least squares loss | https://scholar.google.com/scholar?cluster=2550353303659094230&hl=en&as_sdt=0,5 | 2 | 2021 |
Self-Tuning for Data-Efficient Deep Learning | 35 | icml | 14 | 6 | 2023-06-17 04:14:17.810000 | https://github.com/thuml/Self-Tuning | 108 | Self-tuning for data-efficient deep learning | https://scholar.google.com/scholar?cluster=3161082086338934038&hl=en&as_sdt=0,5 | 4 | 2021 |
AlphaNet: Improved Training of Supernets with Alpha-Divergence | 41 | icml | 13 | 0 | 2023-06-17 04:14:18.013000 | https://github.com/facebookresearch/AlphaNet | 90 | Alphanet: Improved training of supernets with alpha-divergence | https://scholar.google.com/scholar?cluster=16040812221590233106&hl=en&as_sdt=0,5 | 10 | 2021 |
SG-PALM: a Fast Physically Interpretable Tensor Graphical Model | 6 | icml | 0 | 0 | 2023-06-17 04:14:18.217000 | https://github.com/ywa136/sg-palm | 0 | Sg-palm: a fast physically interpretable tensor graphical model | https://scholar.google.com/scholar?cluster=15846965999647833426&hl=en&as_sdt=0,39 | 2 | 2021 |
Robust Inference for High-Dimensional Linear Models via Residual Randomization | 2 | icml | 0 | 0 | 2023-06-17 04:14:18.419000 | https://github.com/atechnicolorskye/rrHDI | 0 | Robust inference for high-dimensional linear models via residual randomization | https://scholar.google.com/scholar?cluster=7848775259409033077&hl=en&as_sdt=0,5 | 4 | 2021 |
Optimal Non-Convex Exact Recovery in Stochastic Block Model via Projected Power Method | 12 | icml | 0 | 0 | 2023-06-17 04:14:18.623000 | https://github.com/peng8wang/ICML2021-PPM-SBM | 1 | Optimal non-convex exact recovery in stochastic block model via projected power method | https://scholar.google.com/scholar?cluster=2598400261123150872&hl=en&as_sdt=0,5 | 1 | 2021 |
The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks | 14 | icml | 0 | 0 | 2023-06-17 04:14:18.826000 | https://github.com/bhwangfy/ICML-2021-Adaptive-Bias | 0 | The implicit bias for adaptive optimization algorithms on homogeneous neural networks | https://scholar.google.com/scholar?cluster=6329455504055217085&hl=en&as_sdt=0,6 | 1 | 2021 |
Directional Bias Amplification | 30 | icml | 1 | 0 | 2023-06-17 04:14:19.029000 | https://github.com/princetonvisualai/directional-bias-amp | 12 | Directional bias amplification | https://scholar.google.com/scholar?cluster=16389460185229956032&hl=en&as_sdt=0,5 | 3 | 2021 |
An exact solver for the Weston-Watkins SVM subproblem | 1 | icml | 1 | 0 | 2023-06-17 04:14:19.233000 | https://github.com/YutongWangUMich/liblinear | 1 | An exact solver for the Weston-Watkins SVM subproblem | https://scholar.google.com/scholar?cluster=3159763216882198120&hl=en&as_sdt=0,33 | 1 | 2021 |
UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data | 68 | icml | 1 | 0 | 2023-06-17 04:14:19.435000 | https://github.com/cywang97/unispeech | 6 | Unispeech: Unified speech representation learning with labeled and unlabeled data | https://scholar.google.com/scholar?cluster=13435266557122878220&hl=en&as_sdt=0,10 | 0 | 2021 |
Guarantees for Tuning the Step Size using a Learning-to-Learn Approach | 14 | icml | 0 | 0 | 2023-06-17 04:14:19.638000 | https://github.com/Kolin96/learning-to-learn | 5 | Guarantees for tuning the step size using a learning-to-learn approach | https://scholar.google.com/scholar?cluster=14011148372183922163&hl=en&as_sdt=0,47 | 1 | 2021 |
Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation | 48 | icml | 9 | 1 | 2023-06-17 04:14:19.840000 | https://github.com/AI-secure/multi-task-learning | 61 | Bridging multi-task learning and meta-learning: Towards efficient training and effective adaptation | https://scholar.google.com/scholar?cluster=5814522177483838670&hl=en&as_sdt=0,5 | 2 | 2021 |
Robust Asymmetric Learning in POMDPs | 18 | icml | 0 | 0 | 2023-06-17 04:14:20.043000 | https://github.com/plai-group/a2d | 6 | Robust asymmetric learning in pomdps | https://scholar.google.com/scholar?cluster=3140825517966878728&hl=en&as_sdt=0,11 | 6 | 2021 |
Thinking Like Transformers | 23 | icml | 18 | 1 | 2023-06-17 04:14:20.250000 | https://github.com/tech-srl/RASP | 204 | Thinking like transformers | https://scholar.google.com/scholar?cluster=18191652199606300845&hl=en&as_sdt=0,33 | 9 | 2021 |
Prediction-Centric Learning of Independent Cascade Dynamics from Partial Observations | 4 | icml | 0 | 4 | 2023-06-17 04:14:20.453000 | https://github.com/mateuszwilinski/dynamic-message-passing | 2 | Prediction-centric learning of independent cascade dynamics from partial observations | https://scholar.google.com/scholar?cluster=10502404999928524540&hl=en&as_sdt=0,5 | 2 | 2021 |
Learning Neural Network Subspaces | 40 | icml | 17 | 1 | 2023-06-17 04:14:20.656000 | https://github.com/apple/learning-subspaces | 124 | Learning neural network subspaces | https://scholar.google.com/scholar?cluster=10251875714480398754&hl=en&as_sdt=0,5 | 10 | 2021 |
Making Paper Reviewing Robust to Bid Manipulation Attacks | 17 | icml | 5 | 0 | 2023-06-17 04:14:20.858000 | https://github.com/facebookresearch/secure-paper-bidding | 9 | Making paper reviewing robust to bid manipulation attacks | https://scholar.google.com/scholar?cluster=3106264104832629742&hl=en&as_sdt=0,5 | 9 | 2021 |
LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning | 25 | icml | 4 | 0 | 2023-06-17 04:14:21.061000 | https://github.com/tonywu95/lime | 16 | Lime: Learning inductive bias for primitives of mathematical reasoning | https://scholar.google.com/scholar?cluster=6631886312737976055&hl=en&as_sdt=0,18 | 2 | 2021 |
ChaCha for Online AutoML | 5 | icml | 379 | 173 | 2023-06-17 04:14:21.263000 | https://github.com/microsoft/FLAML | 2,517 | Chacha for online automl | https://scholar.google.com/scholar?cluster=15774579199663385941&hl=en&as_sdt=0,39 | 45 | 2021 |
Towards Open-World Recommendation: An Inductive Model-based Collaborative Filtering Approach | 20 | icml | 6 | 0 | 2023-06-17 04:14:21.465000 | https://github.com/qitianwu/IDCF | 24 | Towards open-world recommendation: An inductive model-based collaborative filtering approach | https://scholar.google.com/scholar?cluster=13656226067206698249&hl=en&as_sdt=0,5 | 2 | 2021 |
A Bit More Bayesian: Domain-Invariant Learning with Uncertainty | 24 | icml | 5 | 1 | 2023-06-17 04:14:21.668000 | https://github.com/zzzx1224/A-Bit-More-Bayesian | 10 | A bit more bayesian: Domain-invariant learning with uncertainty | https://scholar.google.com/scholar?cluster=8533759072554466832&hl=en&as_sdt=0,5 | 1 | 2021 |
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks | 75 | icml | 12 | 3 | 2023-06-17 04:14:21.872000 | https://github.com/AI-secure/CRFL | 56 | Crfl: Certifiably robust federated learning against backdoor attacks | https://scholar.google.com/scholar?cluster=566297691223350385&hl=en&as_sdt=0,47 | 3 | 2021 |
Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization | 21 | icml | 6 | 0 | 2023-06-17 04:14:22.074000 | https://github.com/zeke-xie/Positive-Negative-Momentum | 25 | Positive-negative momentum: Manipulating stochastic gradient noise to improve generalization | https://scholar.google.com/scholar?cluster=9647717968624963089&hl=en&as_sdt=0,23 | 3 | 2021 |
An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming | 42 | icml | 12 | 0 | 2023-06-17 04:14:22.277000 | https://github.com/MinkaiXu/ConfVAE-ICML21 | 47 | An end-to-end framework for molecular conformation generation via bilevel programming | https://scholar.google.com/scholar?cluster=914718927564575831&hl=en&as_sdt=0,5 | 3 | 2021 |
Self-supervised Graph-level Representation Learning with Local and Global Structure | 96 | icml | 15 | 3 | 2023-06-17 04:14:22.479000 | https://github.com/DeepGraphLearning/GraphLoG | 57 | Self-supervised graph-level representation learning with local and global structure | https://scholar.google.com/scholar?cluster=15360735332012817623&hl=en&as_sdt=0,5 | 7 | 2021 |
Conformal prediction interval for dynamic time-series | 38 | icml | 18 | 0 | 2023-06-17 04:14:22.682000 | https://github.com/hamrel-cxu/EnbPI | 42 | Conformal prediction interval for dynamic time-series | https://scholar.google.com/scholar?cluster=9397887507156986767&hl=en&as_sdt=0,33 | 2 | 2021 |
KNAS: Green Neural Architecture Search | 25 | icml | 15 | 1 | 2023-06-17 04:14:22.885000 | https://github.com/jingjing-nlp/knas | 90 | KNAS: green neural architecture search | https://scholar.google.com/scholar?cluster=636730090425787241&hl=en&as_sdt=0,36 | 2 | 2021 |
Structured Convolutional Kernel Networks for Airline Crew Scheduling | 8 | icml | 3 | 0 | 2023-06-17 04:14:23.087000 | https://github.com/Yaakoubi/Struct-CKN | 5 | Structured convolutional kernel networks for airline crew scheduling | https://scholar.google.com/scholar?cluster=6467944180520163376&hl=en&as_sdt=0,47 | 1 | 2021 |
Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences | 1 | icml | 0 | 0 | 2023-06-17 04:14:23.289000 | https://github.com/i-yamane/mediated_uncoupled_learning | 2 | Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences | https://scholar.google.com/scholar?cluster=17617652020684598053&hl=en&as_sdt=0,21 | 2 | 2021 |
EL-Attention: Memory Efficient Lossless Attention for Generation | 2 | icml | 41 | 11 | 2023-06-17 04:14:23.492000 | https://github.com/microsoft/fastseq | 416 | El-attention: Memory efficient lossless attention for generation | https://scholar.google.com/scholar?cluster=1375858256863771464&hl=en&as_sdt=0,47 | 15 | 2021 |
Link Prediction with Persistent Homology: An Interactive View | 23 | icml | 1 | 1 | 2023-06-17 04:14:23.695000 | https://github.com/pkuyzy/TLC-GNN | 8 | Link prediction with persistent homology: An interactive view | https://scholar.google.com/scholar?cluster=6988958697269886780&hl=en&as_sdt=0,44 | 2 | 2021 |
CATE: Computation-aware Neural Architecture Encoding with Transformers | 12 | icml | 5 | 1 | 2023-06-17 04:14:23.897000 | https://github.com/MSU-MLSys-Lab/CATE | 17 | Cate: Computation-aware neural architecture encoding with transformers | https://scholar.google.com/scholar?cluster=8641165479167437291&hl=en&as_sdt=0,41 | 4 | 2021 |
On Perceptual Lossy Compression: The Cost of Perceptual Reconstruction and An Optimal Training Framework | 12 | icml | 2 | 0 | 2023-06-17 04:14:24.100000 | https://github.com/ZeyuYan/Perceptual-Lossy-Compression | 10 | On perceptual lossy compression: The cost of perceptual reconstruction and an optimal training framework | https://scholar.google.com/scholar?cluster=3982169689811841911&hl=en&as_sdt=0,36 | 1 | 2021 |
Graph Neural Networks Inspired by Classical Iterative Algorithms | 43 | icml | 5 | 0 | 2023-06-17 04:14:24.302000 | https://github.com/FFTYYY/TWIRLS | 35 | Graph neural networks inspired by classical iterative algorithms | https://scholar.google.com/scholar?cluster=7834297008396631458&hl=en&as_sdt=0,5 | 2 | 2021 |
Voice2Series: Reprogramming Acoustic Models for Time Series Classification | 49 | icml | 8 | 3 | 2023-06-17 04:14:24.504000 | https://github.com/huckiyang/Voice2Series-Reprogramming | 54 | Voice2series: Reprogramming acoustic models for time series classification | https://scholar.google.com/scholar?cluster=436573915483653789&hl=en&as_sdt=0,38 | 2 | 2021 |
Rethinking Rotated Object Detection with Gaussian Wasserstein Distance Loss | 166 | icml | 178 | 21 | 2023-06-17 04:14:24.706000 | https://github.com/yangxue0827/RotationDetection | 1,013 | Rethinking rotated object detection with gaussian wasserstein distance loss | https://scholar.google.com/scholar?cluster=9458084216549029781&hl=en&as_sdt=0,33 | 21 | 2021 |
Delving into Deep Imbalanced Regression | 114 | icml | 111 | 3 | 2023-06-17 04:14:24.909000 | https://github.com/YyzHarry/imbalanced-regression | 642 | Delving into deep imbalanced regression | https://scholar.google.com/scholar?cluster=14041915448985010978&hl=en&as_sdt=0,31 | 18 | 2021 |
SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks | 189 | icml | 33 | 9 | 2023-06-17 04:14:25.111000 | https://github.com/ZjjConan/SimAM | 257 | Simam: A simple, parameter-free attention module for convolutional neural networks | https://scholar.google.com/scholar?cluster=6748424654077587327&hl=en&as_sdt=0,47 | 5 | 2021 |
Improving Generalization in Meta-learning via Task Augmentation | 53 | icml | 4 | 0 | 2023-06-17 04:14:25.314000 | https://github.com/huaxiuyao/MetaMix | 25 | Improving generalization in meta-learning via task augmentation | https://scholar.google.com/scholar?cluster=756197262814969387&hl=en&as_sdt=0,47 | 1 | 2021 |
Deep Learning for Functional Data Analysis with Adaptive Basis Layers | 8 | icml | 4 | 0 | 2023-06-17 04:14:25.516000 | https://github.com/jwyyy/AdaFNN | 18 | Deep learning for functional data analysis with adaptive basis layers | https://scholar.google.com/scholar?cluster=17144943362411304273&hl=en&as_sdt=0,14 | 1 | 2021 |
Addressing Catastrophic Forgetting in Few-Shot Problems | 10 | icml | 1 | 0 | 2023-06-17 04:14:25.719000 | https://github.com/pauchingyap/boml | 4 | Addressing catastrophic forgetting in few-shot problems | https://scholar.google.com/scholar?cluster=5331519649661500119&hl=en&as_sdt=0,11 | 1 | 2021 |
Break-It-Fix-It: Unsupervised Learning for Program Repair | 52 | icml | 19 | 5 | 2023-06-17 04:14:25.922000 | https://github.com/michiyasunaga/bifi | 97 | Break-it-fix-it: Unsupervised learning for program repair | https://scholar.google.com/scholar?cluster=4368697690139646578&hl=en&as_sdt=0,25 | 2 | 2021 |
Neighborhood Contrastive Learning Applied to Online Patient Monitoring | 14 | icml | 1 | 0 | 2023-06-17 04:14:26.125000 | https://github.com/ratschlab/ncl | 15 | Neighborhood contrastive learning applied to online patient monitoring | https://scholar.google.com/scholar?cluster=4664115316667000917&hl=en&as_sdt=0,5 | 7 | 2021 |
Continuous-time Model-based Reinforcement Learning | 25 | icml | 9 | 0 | 2023-06-17 04:14:26.328000 | https://github.com/cagatayyildiz/oderl | 29 | Continuous-time model-based reinforcement learning | https://scholar.google.com/scholar?cluster=14746718008006143630&hl=en&as_sdt=0,33 | 3 | 2021 |
Path Planning using Neural A* Search | 48 | icml | 39 | 0 | 2023-06-17 04:14:26.588000 | https://github.com/omron-sinicx/neural-astar | 112 | Path planning using neural a* search | https://scholar.google.com/scholar?cluster=997109174991202847&hl=en&as_sdt=0,34 | 8 | 2021 |
SinIR: Efficient General Image Manipulation with Single Image Reconstruction | 15 | icml | 6 | 2 | 2023-06-17 04:14:26.873000 | https://github.com/YooJiHyeong/SinIR | 49 | Sinir: Efficient general image manipulation with single image reconstruction | https://scholar.google.com/scholar?cluster=10599627975062939893&hl=en&as_sdt=0,5 | 4 | 2021 |
Conditional Temporal Neural Processes with Covariance Loss | 6 | icml | 3 | 0 | 2023-06-17 04:14:27.077000 | https://github.com/boseon-ai/Conditional-Temporal-Neural-Processes-with-Covariance-Loss | 4 | Conditional temporal neural processes with covariance loss | https://scholar.google.com/scholar?cluster=11587001317959077781&hl=en&as_sdt=0,5 | 1 | 2021 |
Adversarial Purification with Score-based Generative Models | 44 | icml | 3 | 2 | 2023-06-17 04:14:27.280000 | https://github.com/jmyoon1/adp | 19 | Adversarial purification with score-based generative models | https://scholar.google.com/scholar?cluster=1510322463041774819&hl=en&as_sdt=0,44 | 1 | 2021 |
Federated Continual Learning with Weighted Inter-client Transfer | 76 | icml | 22 | 0 | 2023-06-17 04:14:27.484000 | https://github.com/wyjeong/FedWeIT | 67 | Federated continual learning with weighted inter-client transfer | https://scholar.google.com/scholar?cluster=6346174361267860505&hl=en&as_sdt=0,21 | 2 | 2021 |
Autoencoding Under Normalization Constraints | 16 | icml | 13 | 0 | 2023-06-17 04:14:27.694000 | https://github.com/swyoon/normalized-autoencoders | 37 | Autoencoding under normalization constraints | https://scholar.google.com/scholar?cluster=1297005004772257313&hl=en&as_sdt=0,47 | 5 | 2021 |
Lower-Bounded Proper Losses for Weakly Supervised Classification | 2 | icml | 0 | 0 | 2023-06-17 04:14:27.897000 | https://github.com/yoshum/lower-bounded-proper-losses | 2 | Lower-Bounded Proper Losses for Weakly Supervised Classification | https://scholar.google.com/scholar?cluster=17541047076253957367&hl=en&as_sdt=0,5 | 1 | 2021 |
Graph Contrastive Learning Automated | 196 | icml | 8 | 4 | 2023-06-17 04:14:28.101000 | https://github.com/Shen-Lab/GraphCL_Automated | 85 | Graph contrastive learning automated | https://scholar.google.com/scholar?cluster=4319391299971749370&hl=en&as_sdt=0,33 | 3 | 2021 |
LogME: Practical Assessment of Pre-trained Models for Transfer Learning | 69 | icml | 15 | 6 | 2023-06-17 04:14:28.303000 | https://github.com/thuml/LogME | 172 | Logme: Practical assessment of pre-trained models for transfer learning | https://scholar.google.com/scholar?cluster=7398435047749789865&hl=en&as_sdt=0,33 | 5 | 2021 |
DAGs with No Curl: An Efficient DAG Structure Learning Approach | 30 | icml | 5 | 1 | 2023-06-17 04:14:28.506000 | https://github.com/fishmoon1234/DAG-NoCurl | 16 | Dags with no curl: An efficient dag structure learning approach | https://scholar.google.com/scholar?cluster=3161455728562313506&hl=en&as_sdt=0,5 | 2 | 2021 |
Large Scale Private Learning via Low-rank Reparametrization | 41 | icml | 17 | 0 | 2023-06-17 04:14:28.710000 | https://github.com/dayu11/Differentially-Private-Deep-Learning | 72 | Large scale private learning via low-rank reparametrization | https://scholar.google.com/scholar?cluster=10646842759761842433&hl=en&as_sdt=0,33 | 2 | 2021 |
Federated Composite Optimization | 38 | icml | 3 | 0 | 2023-06-17 04:14:28.914000 | https://github.com/hongliny/FCO-ICML21 | 9 | Federated composite optimization | https://scholar.google.com/scholar?cluster=10805982907996173478&hl=en&as_sdt=0,34 | 1 | 2021 |
Three Operator Splitting with a Nonconvex Loss Function | 5 | icml | 0 | 0 | 2023-06-17 04:14:29.117000 | https://github.com/alpyurtsever/NonconvexTOS | 1 | Three operator splitting with a nonconvex loss function | https://scholar.google.com/scholar?cluster=14275996016492090770&hl=en&as_sdt=0,14 | 1 | 2021 |
Learning Binary Decision Trees by Argmin Differentiation | 10 | icml | 1 | 2 | 2023-06-17 04:14:29.320000 | https://github.com/vzantedeschi/LatentTrees | 11 | Learning binary decision trees by argmin differentiation | https://scholar.google.com/scholar?cluster=8235159658077202682&hl=en&as_sdt=0,5 | 1 | 2021 |
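The rows above are plain `|`-separated markdown, so they can be queried locally without the viewer's SQL console. A minimal sketch of parsing them into typed records, assuming the rows follow the twelve-column layout shown above (the `parse_rows` helper and the two inline sample rows are illustrative, not part of the dataset):

```python
COLS = ["title", "citations_google_scholar", "conference", "forks", "issues",
        "lastModified", "repo_url", "stars", "title_google_scholar",
        "url_google_scholar", "watchers", "year"]
INT_COLS = ("citations_google_scholar", "forks", "issues", "stars", "watchers", "year")

def parse_rows(markdown: str):
    """Parse '|'-separated markdown table rows into dicts with typed fields."""
    rows = []
    for line in markdown.strip().splitlines():
        fields = [f.strip() for f in line.strip().strip("|").split("|")]
        # Skip the separator row and anything that doesn't have 12 columns.
        if len(fields) != len(COLS) or set(fields[0]) <= {"-"}:
            continue
        row = dict(zip(COLS, fields))
        for k in INT_COLS:
            # The viewer may render counts with thousands separators (e.g. "2,141").
            row[k] = int(row[k].replace(",", ""))
        rows.append(row)
    return rows

# Two sample rows copied from the table above.
sample = """
Thinking Like Transformers | 23 | icml | 18 | 1 | 2023-06-17 04:14:20.250000 | https://github.com/tech-srl/RASP | 204 | Thinking like transformers | https://scholar.google.com/scholar?cluster=18191652199606300845&hl=en&as_sdt=0,33 | 9 | 2021 |
Learning Neural Network Subspaces | 40 | icml | 17 | 1 | 2023-06-17 04:14:20.656000 | https://github.com/apple/learning-subspaces | 124 | Learning neural network subspaces | https://scholar.google.com/scholar?cluster=10251875714480398754&hl=en&as_sdt=0,5 | 10 | 2021 |
"""

rows = parse_rows(sample)
top = max(rows, key=lambda r: r["stars"])
print(top["repo_url"], top["stars"])  # the more-starred repo of the two samples
```

The same `parse_rows` output drops straight into `pandas.DataFrame(rows)` for sorting or filtering by stars, citations, or year.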