Continual Quantization-Aware Pre-Training: When to transition from 16-bit to 1.58-bit pre-training for BitNet language models? Paper • 2502.11895 • Published Feb 17, 2025 • 2
What makes a language easy to deep-learn? Deep neural networks and humans similarly benefit from compositional structure Paper • 2302.12239 • Published Feb 23, 2023 • 1
Dynaword: From One-shot to Continuously Developed Datasets Paper • 2508.02271 • Published Aug 2025 • 13
GenCodeSearchNet: A Benchmark Test Suite for Evaluating Generalization in Programming Language Understanding Paper • 2311.09707 • Published Nov 16, 2023
When are 1.58 bits enough? A Bottom-up Exploration of BitNet Quantization Paper • 2411.05882 • Published Nov 8, 2024 • 1
CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model Paper • 1902.06423 • Published Feb 18, 2019
SkillSpan: Hard and Soft Skill Extraction from English Job Postings Paper • 2204.12811 • Published Apr 27, 2022 • 1
Kompetencer: Fine-grained Skill Classification in Danish Job Postings via Distant Supervision and Transfer Learning Paper • 2205.01381 • Published May 3, 2022