stereoplegic's Collections
Dissecting In-Context Learning of Translations in GPTs • arXiv:2310.15987 • 5 upvotes
In-Context Learning Creates Task Vectors • arXiv:2310.15916 • 42 upvotes
ZeroGen: Efficient Zero-shot Learning via Dataset Generation • arXiv:2202.07922 • 1 upvote
Promptor: A Conversational and Autonomous Prompt Generation Agent for Intelligent Text Entry Techniques • arXiv:2310.08101 • 2 upvotes
Knowledge-Driven CoT: Exploring Faithful Reasoning in LLMs for Knowledge-intensive Question Answering • arXiv:2308.13259 • 2 upvotes
EcoAssistant: Using LLM Assistant More Affordably and Accurately • arXiv:2310.03046 • 5 upvotes
SCREWS: A Modular Framework for Reasoning with Revisions • arXiv:2309.13075 • 15 upvotes
MIMIC-IT: Multi-Modal In-Context Instruction Tuning • arXiv:2306.05425 • 11 upvotes
Neural Machine Translation Models Can Learn to be Few-shot Learners • arXiv:2309.08590 • 1 upvote
Ambiguity-Aware In-Context Learning with Large Language Models • arXiv:2309.07900 • 4 upvotes
Are Emergent Abilities in Large Language Models just In-Context Learning? • arXiv:2309.01809 • 3 upvotes
FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning • arXiv:2309.04663 • 5 upvotes
How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations • arXiv:2310.10616 • 1 upvote
In-Context Pretraining: Language Modeling Beyond Document Boundaries • arXiv:2310.10638 • 29 upvotes
Large Language Models Are Also Good Prototypical Commonsense Reasoners • arXiv:2309.13165 • 1 upvote
DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models • arXiv:2310.05074 • 1 upvote
Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis • arXiv:2304.04675 • 1 upvote
The Closeness of In-Context Learning and Weight Shifting for Softmax Regression • arXiv:2304.13276 • 1 upvote
RAVEN: In-Context Learning with Retrieval Augmented Encoder-Decoder Language Models • arXiv:2308.07922 • 17 upvotes
Commonsense Knowledge Transfer for Pre-trained Language Models • arXiv:2306.02388 • 1 upvote
Efficient Prompting via Dynamic In-Context Learning • arXiv:2305.11170 • 1 upvote
Adapting Language Models to Compress Contexts • arXiv:2305.14788 • 1 upvote
Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning • arXiv:2308.12219 • 1 upvote
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning • arXiv:2309.07915 • 4 upvotes
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning • arXiv:2310.13448 • 1 upvote
Query2doc: Query Expansion with Large Language Models • arXiv:2303.07678 • 1 upvote
Query Expansion by Prompting Large Language Models • arXiv:2305.03653 • 1 upvote
Generative Relevance Feedback with Large Language Models • arXiv:2304.13157 • 1 upvote
Context Aware Query Rewriting for Text Rankers using LLM • arXiv:2308.16753 • 1 upvote
Self-supervised Meta-Prompt Learning with Meta-Gradient Regularization for Few-shot Generalization • arXiv:2303.12314 • 1 upvote
Contrastive Learning for Prompt-Based Few-Shot Language Learners • arXiv:2205.01308 • 1 upvote
Learning to Retrieve In-Context Examples for Large Language Models • arXiv:2307.07164 • 21 upvotes
Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning • arXiv:2211.03044 • 1 upvote
ConsPrompt: Easily Exploiting Contrastive Samples for Few-shot Prompt Learning • arXiv:2211.04118 • 1 upvote
Contrastive Demonstration Tuning for Pre-trained Language Models • arXiv:2204.04392 • 1 upvote
Reason for Future, Act for Now: A Principled Framework for Autonomous LLM Agents with Provable Sample Efficiency • arXiv:2309.17382 • 4 upvotes
Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models • arXiv:2305.18507 • 1 upvote
Boosting Language Models Reasoning with Chain-of-Knowledge Prompting • arXiv:2306.06427 • 2 upvotes
Progressive-Hint Prompting Improves Reasoning in Large Language Models • arXiv:2304.09797 • 1 upvote
Small Language Models Improve Giants by Rewriting Their Outputs • arXiv:2305.13514 • 2 upvotes
Introspective Tips: Large Language Model for In-Context Decision Making • arXiv:2305.11598 • 1 upvote
Tab-CoT: Zero-shot Tabular Chain of Thought • arXiv:2305.17812 • 2 upvotes
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks • arXiv:2211.12588 • 3 upvotes
Structured Chain-of-Thought Prompting for Code Generation • arXiv:2305.06599 • 1 upvote
Improving ChatGPT Prompt for Code Generation • arXiv:2305.08360 • 1 upvote
Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting • arXiv:2305.07004 • 1 upvote
Text Data Augmentation in Low-Resource Settings via Fine-Tuning of Large Language Models • arXiv:2310.01119 • 1 upvote
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning • arXiv:2205.05638 • 3 upvotes
Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models • arXiv:2306.03799 • 1 upvote
Prompt Engineering or Fine Tuning: An Empirical Assessment of Large Language Models in Automated Software Engineering Tasks • arXiv:2310.10508 • 1 upvote
Large Language Model-Aware In-Context Learning for Code Generation • arXiv:2310.09748 • 1 upvote
Few-shot training LLMs for project-specific code-summarization • arXiv:2207.04237 • 1 upvote
ThinkSum: Probabilistic reasoning over sets using large language models • arXiv:2210.01293 • 1 upvote
EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning • arXiv:2309.10687 • 1 upvote
ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction • arXiv:2303.05063 • 1 upvote
SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations • arXiv:2305.13235 • 1 upvote
Guiding Generative Language Models for Data Augmentation in Few-Shot Text Classification • arXiv:2111.09064 • 1 upvote
Schema-learning and rebinding as mechanisms of in-context learning and emergence • arXiv:2307.01201 • 2 upvotes
Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners • arXiv:2305.14825 • 1 upvote
The Transient Nature of Emergent In-Context Learning in Transformers • arXiv:2311.08360 • 1 upvote
Explore Spurious Correlations at the Concept Level in Language Models for Text Classification • arXiv:2311.08648 • 2 upvotes
NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework • arXiv:2111.04130 • 1 upvote
Auto-ICL: In-Context Learning without Human Supervision • arXiv:2311.09263 • 2 upvotes
Gated recurrent neural networks discover attention • arXiv:2309.01775 • 7 upvotes
Generative Multimodal Models are In-Context Learners • arXiv:2312.13286 • 34 upvotes
ICE-GRT: Instruction Context Enhancement by Generative Reinforcement based Transformers • arXiv:2401.02072 • 9 upvotes
Chain of Code: Reasoning with a Language Model-Augmented Code Emulator • arXiv:2312.04474 • 30 upvotes
Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression • arXiv:2306.15063 • 1 upvote
AceCoder: Utilizing Existing Code to Enhance Code Generation • arXiv:2303.17780 • 1 upvote
Compositional Exemplars for In-context Learning • arXiv:2302.05698 • 2 upvotes
What Makes Good In-context Demonstrations for Code Intelligence Tasks with LLMs? • arXiv:2304.07575 • 1 upvote
Can language models learn from explanations in context? • arXiv:2204.02329 • 1 upvote
Post Hoc Explanations of Language Models Can Improve Language Models • arXiv:2305.11426 • 1 upvote
Automatic Chain of Thought Prompting in Large Language Models • arXiv:2210.03493 • 2 upvotes
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning • arXiv:2308.00436 • 22 upvotes
Learning Multi-Step Reasoning by Solving Arithmetic Tasks • arXiv:2306.01707 • 1 upvote
Better Zero-Shot Reasoning with Role-Play Prompting • arXiv:2308.07702 • 2 upvotes
Link-Context Learning for Multimodal LLMs • arXiv:2308.07891 • 15 upvotes
Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks • arXiv:2402.04248 • 30 upvotes
Can large language models explore in-context? • arXiv:2403.15371 • 32 upvotes
XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning • arXiv:2406.08973 • 86 upvotes