Papers
arxiv:2410.16392

LLM-based Optimization of Compound AI Systems: A Survey

Published on Oct 21
· Submitted by shenzhi-wang on Oct 23
Abstract

In a compound AI system, components such as an LLM call, a retriever, a code interpreter, or tools are interconnected. The system's behavior is primarily driven by parameters such as instructions or tool definitions. Recent advancements enable end-to-end optimization of these parameters using an LLM. Notably, leveraging an LLM as an optimizer is particularly efficient because it avoids gradient computation and can generate complex code and instructions. This paper presents a survey of the principles and emerging trends in LLM-based optimization of compound AI systems. It covers archetypes of compound AI systems, approaches to LLM-based end-to-end optimization, and insights into future directions and broader impacts. Importantly, this survey uses concepts from program analysis to provide a unified view of how an LLM optimizer is prompted to optimize a compound AI system. The exhaustive list of papers is provided at https://github.com/linyuhongg/LLM-based-Optimization-of-Compound-AI-Systems.
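The gradient-free optimization pattern described in the abstract can be sketched as a simple loop: evaluate the current instruction on a small training set, show the optimizer LLM the instruction and its score in a meta-prompt, and keep any proposed revision that scores higher. This is a minimal illustration, not code from the paper; `task_llm`, `optimizer_llm`, and `score_fn` are placeholder callables standing in for real model endpoints and metrics.

```python
def evaluate(task_llm, instruction, score_fn, train_set):
    """Average score of the task LLM's outputs under a given instruction."""
    scores = [score_fn(task_llm(instruction, x), y) for x, y in train_set]
    return sum(scores) / len(scores)

def optimize_instruction(task_llm, optimizer_llm, score_fn, train_set,
                         init_instruction, n_rounds=4):
    """Hill-climbing loop with an LLM as the optimizer: no gradients,
    just a meta-prompt containing the current instruction and its score."""
    best_instr = init_instruction
    best_score = evaluate(task_llm, best_instr, score_fn, train_set)
    for _ in range(n_rounds):
        meta_prompt = (
            f"Current instruction:\n{best_instr}\n"
            f"Average score on the training set: {best_score:.2f}\n"
            "Propose an improved instruction."
        )
        candidate = optimizer_llm(meta_prompt)  # optimizer LLM returns a new instruction string
        cand_score = evaluate(task_llm, candidate, score_fn, train_set)
        if cand_score > best_score:             # keep only improvements
            best_instr, best_score = candidate, cand_score
    return best_instr, best_score
```

In practice the meta-prompt would also include input/output traces of the system's failures, and the same loop generalizes to optimizing tool implementations (code) rather than instruction strings.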

Community

Paper author · Paper submitter

Designing a compound AI system typically involves manually optimizing various parameters, such as LLM instructions, tool implementations, and reasoning structures.
๐Ÿง Can we replace the human in this process with an LLM?
Optimizing a compound AI system using an LLM refers to using an LLM to generate instructions or code, effectively automating the human role in the optimization process.
🚀 In our survey, we explore this new paradigm to help practitioners understand its potential and applications!



