---
title: README
emoji: π
colorFrom: indigo
colorTo: gray
sdk: static
pinned: false
---
# Paris Noah's Ark Lab
We are a research laboratory located in Paris, working on the fundamental and practical development of modern artificial intelligence systems.
Paris Noah's Ark Lab consists of three research teams that cover the following topics:
- Time Series and Transfer Learning (Team leader: [Ievgen Redko](https://fr.linkedin.com/in/ievgen-redko-58622360))
- Auto-Data-Science and Reinforcement Learning (Team leader: [Balazs Kegl](https://fr.linkedin.com/in/balazskegl))
- Autonomous Driving
<!-- - (Team leader: [Dzmitry Tsishkou]()) -->
## Projects
### Packages
- [Mantis: Lightweight Calibrated Foundation Model for User-Friendly Time Series Classification](https://github.com/vfeofanov/mantis/tree/main): see also [our paper](https://arxiv.org/pdf/2502.15637).
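
For a quick feel of the package, here is a minimal sketch of a zero-shot feature-extraction workflow in the scikit-learn style. The class names (`Mantis8M`, `MantisTrainer`), the checkpoint id, and the 512-step input length are assumptions based on our reading of the repository README; verify them against the repo before use.

```python
# Hedged sketch: zero-shot Mantis embeddings + a light downstream classifier.
# Class names, checkpoint id, and the 512-step input length are assumptions;
# check https://github.com/vfeofanov/mantis for the current API.
import numpy as np
from sklearn.linear_model import LogisticRegression

from mantis.architecture import Mantis8M  # assumed import path
from mantis.trainer import MantisTrainer  # assumed import path

# Toy data: 100 univariate series, shape (n_samples, n_channels, seq_len).
X = np.random.randn(100, 1, 512).astype(np.float32)
y = np.random.randint(0, 2, size=100)

network = Mantis8M(device="cpu")
network = network.from_pretrained("paris-noah/Mantis-8M")  # assumed checkpoint id
model = MantisTrainer(device="cpu", network=network)

Z = model.transform(X)  # one embedding per series
clf = LogisticRegression(max_iter=1000).fit(Z, y)
```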
### Preprints
- [TAG: A Decentralized Framework for Multi-Agent Hierarchical Reinforcement Learning](https://huggingface.co/papers/2502.15425): distributed multi-agent hierarchical reinforcement learning framework.
- [AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting](https://arxiv.org/abs/2502.10235): lightweight adapters that turn univariate foundation models into probabilistic multivariate forecasters.
- [SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation](https://arxiv.org/abs/2407.11676): a benchmark of shallow and deep domain adaptation methods with realistic validation.
- [Clustering Head: A Visual Case Study of the Training Dynamics in Transformers](https://arxiv.org/abs/2410.24050): visual and theoretical understanding of training dynamics in transformers.
- [Large Language Models as Markov Chains](https://huggingface.co/papers/2410.02724): theoretical insights on their generalization and convergence properties (a toy illustration of the correspondence follows this list).
- [A Systematic Study Comparing Hyperparameter Optimization Engines on Tabular Data](https://balazskegl.medium.com/navigating-the-maze-of-hyperparameter-optimization-insights-from-a-systematic-study-6019675ea96c): insights to navigate the maze of hyperopt techniques.
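
The Markov-chain correspondence above is concrete enough to play with: an autoregressive model over a vocabulary of size V with a context window of K tokens induces a Markov chain on the V^K possible contexts. Below is a self-contained toy illustration; the Dirichlet "model" and all numbers are made up for the sketch and are not from the paper.

```python
import numpy as np

# Toy illustration: a "language model" with vocabulary V = {0, 1} and
# context window K = 2 is a Markov chain on the 4 states 00, 01, 10, 11.
V, K = 2, 2
states = [(a, b) for a in range(V) for b in range(V)]

rng = np.random.default_rng(0)
# next_token_probs[s] plays the role of the model's softmax output given context s.
next_token_probs = {s: rng.dirichlet(np.ones(V)) for s in states}

# Sliding the context window one step gives the induced transition matrix:
# state (a, b) moves to (b, t) with probability p(t | a, b).
P = np.zeros((V**K, V**K))
for i, (a, b) in enumerate(states):
    for t in range(V):
        j = states.index((b, t))
        P[i, j] += next_token_probs[(a, b)][t]

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
print(dict(zip(states, np.round(pi, 3))))
```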
### 2025
- *(ICLR'25)* [Zero-shot Model-based Reinforcement Learning using Large Language Models](https://huggingface.co/papers/2410.11711): disentangled in-context learning for multivariate time series forecasting and model-based RL.
- *(ICASSP'25)* [Easing Optimization Paths: A Circuit Perspective](https://arxiv.org/abs/2501.02362): a mechanistic study of training dynamics in transformers.
- *(Neurocomputing)* [Self-training: A survey](https://www.sciencedirect.com/science/article/pii/S0925231224016758): a survey of pseudo-labeling strategies.
### 2024
- *(NeurIPS'24)* [MANO: Unsupervised Accuracy Estimation Under Distribution Shifts](https://huggingface.co/papers/2405.18979): when logits are enough to estimate generalization of a pre-trained model.
- *(NeurIPS'24, **Spotlight**)* [Analysing Multi-Task Regression via Random Matrix Theory](https://arxiv.org/pdf/2406.10327): insights on a classical approach and its potentiality for time series forecasting.
- *(ICML'24, **Oral**)* [SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting](https://huggingface.co/papers/2402.10198): sharpness-aware minimization and channel-wise attention is all you need.
- *(AISTATS'24)* [Leveraging Ensemble Diversity for Robust Self-Training](https://huggingface.co/papers/2310.14814): confidence estimation method for efficient pseudo-labeling under sample selection bias.
- *(JMLR'24)* [Multi-class Probabilistic Bounds for Majority Vote Classifiers with Partially Labeled Data](https://www.jmlr.org/papers/volume25/23-0121/23-0121.pdf): generalization with unlabeled or pseudo-labeled data.
- *(ICML'24)* [Position: A Call for Embodied AI](https://arxiv.org/abs/2402.03824): position paper on the need for embodied AI research.
- *(RLC'24)* [A Study of the Weighted Multi-step Loss Impact on the Predictive Error and the Return in MBRL](https://openreview.net/pdf?id=K4VjW7evSV): multi-step loss in MBRL does not work as well as expected.
### 2023
- *(ICML '23)* [Meta Optimal Transport](https://arxiv.org/abs/2206.05262)
- *(AAAI '23)* [Unbalanced Co-Optimal Transport](https://arxiv.org/abs/2205.14923)
- *(ICML '23)* [Multi-Agent Best Arm Identification with Private Communications](https://proceedings.mlr.press/v202/rio23a.html)
- *(ICML '23)* [Random Matrix Analysis to Balance between Supervised and Unsupervised Learning under the Low Density Separation Assumption](https://proceedings.mlr.press/v202/feofanov23a.html)
- *(ICML '23)* [PCA-based Multi Task Learning: a Random Matrix Approach](https://proceedings.mlr.press/v202/tiomoko23a/tiomoko23a.pdf)