en_url
stringlengths 47
153
| en_title
stringlengths 12
118
| en_content
stringlengths 16
19.4k
| zh_url
stringlengths 53
159
| zh_title
stringlengths 4
70
| zh_content
stringlengths 16
10.5k
|
---|---|---|---|---|---|
https://developer.nvidia.com/blog/high-performance-remote-io-with-nvidia-kvikio/ | High-Performance Remote IO With NVIDIA KvikIO | Workloads processing large amounts of data, especially those running on the cloud, will often use an object storage service (S3, Google Cloud Storage, Azure Blob Storage, etc.) as the data source. Object storage services can store and serve massive amounts of data, but getting the best performance can require tailoring your workload to how remote object stores behave. This post is for RAPIDS users who want to read or write data to object storage as quickly as possible so that IO doesn’t bottleneck your workload.
Some of your knowledge about how local file systems behave translates to remote object stores, but they are fundamentally different. Probably the biggest difference between the two, at least for data analysis workloads, is that read and write operations on object storage have
higher and more variable latency
. Every storage service has their own set of best practices and performance guidelines (
AWS
,
Azure
). Here, we’ll give some general guidelines that are focused on data analysis workloads.
Location
Placing your compute nodes near the storage service (ideally, in the same cloud region) will give you the fastest and most reliable network between the machines running your workload and the machines serving the data. And, at the end of the day, the transfer will be limited by the speed of light so minimizing the physical distance doesn’t hurt.
File format
“Cloud-native” file formats have been developed to work well with object storage. These file formats typically provide fast, easy access to metadata (which includes both high-level information like the column names or data types, and lower-level information like where in the file specific data subsets are located).
Apache Parquet
,
Zarr
, and
Cloud Optimized GeoTIFF
are some examples of cloud-native file formats for various types of data.
Because object storage services typically support
range requests
, clients (like
cuDF
) can read the metadata and then download
just
the data you actually need. For example, cuDF can read just a few columns out of a Parquet file with many columns, or a Zarr client can read a single chunk out of a large n-dimensional array. These reads are done in just a few HTTP requests, and without needing to download a bunch of extraneous data that just gets filtered out.
File size
Because every read operation requires (at least) one HTTP request, we’d prefer to amortize the overhead from each HTTP request over a reasonably large number of bytes. If you control the data-writing process, you’ll want to ensure that the files are large enough for your downstream processing tasks to get good performance. The optimal value depends on your workload, but somewhere in the dozens to low-hundreds of MBs is common for parquet files (see below for some specific examples).
That said, you’ll need to be careful with how file size interacts with the next tool in our kit: concurrency.
Concurrency
Using concurrency to download multiple blobs (or multiple pieces of a single blob) at the same time is essential to getting good performance out of a remote storage service. Since it’s a
remote
service, your process is going to spend some time (perhaps a lot of time) waiting around doing nothing. This waiting spans the time between when the HTTP request is sent and the response received. During this time, we wait for the network to carry the request, the storage service to process it and send the response, and the network to carry the (possibly large) response. While parts of that request/response cycle scale with the amount of data involved, other parts are just fixed overhead.
Object storage services are designed to handle many concurrent requests. We can combine that with the fact that each request involves some time waiting around doing nothing, to make many concurrent requests to raise our overall throughput. In Python, this would typically be done using a
thread pool
:
pool = concurrent.futures.ThreadPoolExecutor()
futures = pool.map(request_chunk, chunks)
Or with
asyncio
:
tasks = [request_chunk_async(chunk) for chunk in chunks]
await asyncio.gather(*tasks)
We’re able to have a lot of reads waiting around doing nothing
at the same time
, which improves our throughput. Because each thread/task is mostly doing nothing, it’s ok to have more threads/tasks than your machine has cores. Given enough concurrent requests you will eventually saturate your storage service, which has some requests per second and bandwidth targets it tries to meet. But those targets are high; you’ll typically need many machines to saturate the storage service and should achieve very high throughput.
Libraries
Everything above applies to essentially any library doing remote IO from an object storage service. In the RAPIDS context, NVIDIA
KvikIO
is notable because
It automatically chunks large requests into multiple smaller ones and makes those requests concurrently.
It can read efficiently into host or device memory, especially if
GPU Direct Storage
is enabled.
It’s fast.
As mentioned in the
RADIDS 24.12 release announcement
, KvikIO can achieve impressive throughput when reading from S3. Let’s take a look at some benchmarks to see how it does.
Benchmarks
When you read a file, KvikIO splits that read into smaller reads of
kvikio.defaults.task_size
bytes. It makes those read requests in parallel using a thread pool with
kvikio.defaults.num_threads
workers. These can be controlled using the environment variables
KVIKIO_TASK_SIZE
and
KVIKIO_NTHREADS
, or through Python with:
with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size):
...
See
Runtime Settings
for more.
This chart shows the throughput, in megabits per second, of reading a 1 GB blob from
S3
to a
g4dn
EC2 instance in the same region for various sizes of the thread pool (higher is better).
Figure 1. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. This the throughput of
kvikio.RemoteFile.read
for various values of
kvikio.defaults.num
_threads and a task size of 16 MiB. Throughput increases as we add more threads and parallelize the reads, up to a point.
Fewer threads (less than four) achieve lower throughput and take longer to read the file. More threads (64, 128, 256) achieve higher throughput by parallelizing the requests to the storage service, which serves them in parallel. There are diminishing and even negative returns as we hit the limits of the storage service, network, or other bottlenecks in our system.
With remote IO, each thread spends a relatively long time idle waiting for the response, so a higher number of threads (relative to your number of cores) might be appropriate for your workload. We see that the throughput is highest between 64 to 128 threads in this case.
As shown in the next figure, the task size also affects the maximum throughput.
Figure 2. From a benchmark reading a 1 GB file from S3 to a
g4dn.xlarge EC2
instance, which has a published bandwidth of up to 25 Gbps. This shows a heatmap of the throughput of
kvikio.RemoteFile.read
. The horizontal axis shows throughput for various task sizes, while the vertical axis shows various thread counts.
As long as the task size isn’t too small (around or below 4 MiB) or too large (around or above 128 MiB), then we get around 10 Gbps of throughput. With too small of a task size, the overhead of making many HTTP requests reduces throughput. With too large of a task size, we don’t get enough concurrency to maximize throughput.
KvikIO achieves higher throughput on this workload when compared with
boto3
, the AWS SDK for Python, even when boto3 is used in a thread pool to execute requests concurrently.
Figure 3. From a benchmark reading a 1 GB from S3 to a
g4dn.xlarge EC2
instance, which has a published bandwidth of up to 25 Gbps. The KvikIO benchmark used 64 threads and 16 MiB task size. The Boto3 benchmark used a ThreadPool to read many byte 4 MB chunks in parallel, which a parameter search showed to be the fastest chunk size for boto3.
As a slightly more realistic workload, though still just one focused solely on IO, we compare the performance reading a batch of 360 parquet files, each about 128 MB. This was run on an AWS
g4dn.12xlarge
instance
, which has 4
NVIDIA T4
GPUs and 48 vCPUs.
Figure 4. From a benchmark reading a parquet data set from S3 to a
g4dn.12xlarge EC2
instance, which has a published bandwidth of up to 50 Gbps. The dataset had 360 Apache Parquet files of about 128 MB each, for a total of about 46 GB. The Dask cluster had 4 workers. These results use cuDF 25.04 which will include
an optimization
to read parquet footers in parallel.
With KvikIO enabled, the four Dask worker processes are able to collectively achieve almost 20 Gbps of throughput from S3 to this single node.
Conclusion
As RAPIDS accelerates other parts of your workload, IO can become a bottleneck. If you’re using object storage and are tired of waiting around for your data to load, try out some of the recommendations from this post. Let us know how things work with KvikIO on
GitHub
. You can also join over 3,500 members on the RAPIDS Slack community to talk GPU-accelerated data processing. | https://developer.nvidia.com/zh-cn/blog/high-performance-remote-io-with-nvidia-kvikio/ | 借助 NVIDIA KvikIO 实现高性能远程 IO | 处理大量数据的工作负载 (尤其是在云端运行的工作负载) 通常会使用对象存储服务 (S3、Google Cloud Storage、Azure Blob Storage 等) 作为数据源。对象存储服务可以存储和提供海量数据,但要想获得最佳性能,可能需要根据远程对象存储的行为方式调整工作负载。本文适用于希望尽快将数据读或写到对象存储,以便 IO 不会限制工作负载的 RAPIDS 用户。
您对本地文件系统行为方式的一些了解可转换为远程对象存储,但它们本质上是不同的。这两者之间的最大区别 (至少对于数据分析工作负载而言) 可能在于,对象存储上的读取和写入操作具有越来越高的可变延迟。每个存储服务 (AWS、Azure) 都有自己的一套最佳实践和性能指南。在这里,我们将提供一些专注于数据分析工作负载的一般指南。
地址
将计算节点放置在存储服务附近 (理想情况下,应位于同一云区域),可在运行工作负载的计算机和为数据提供服务的计算机之间提供速度最快、最可靠的网络。在一天结束时,传输将受到光速的限制,因此最大限度地减少物理距离不会造成伤害。
文件格式
“云原生”文件格式的开发能够很好地与对象存储配合使用。这些文件格式通常可让用户快速轻松地访问元数据 (元数据包括列名称或数据类型等高级信息,以及文件特定数据子集所在位置等低级信息)。
Apache Parquet
、
Zarr
和
Cloud Optimized GeoTIFF
是适用于各种类型数据的云原生文件格式的一些示例。
由于对象存储服务通常支持范围请求,因此客户端 (如
cuDF
) 可以读取元数据,然后只下载您实际需要的数据。例如,cuDF 只能从包含多列的 Parquet 文件中读取几列数据,或者 Zarr 客户端可以从大型 n 维数组中读取单个 chunk。这些读取只需通过几次 HTTP 请求即可完成,而且无需下载一堆刚刚被过滤掉的不相干数据。
文件大小
由于每个读取操作都需要 (至少) 一个 HTTP 请求,因此我们倾向于在合理数量的字节数上分担每个 HTTP 请求的用度。如果您控制数据写入过程,则需要确保文件足够大,以便下游处理任务获得良好性能。最佳值取决于您的工作负载,但 parquet 文件的大小通常介于数十 MB 到数百 MB 之间 (请参阅下文,了解一些特定示例)。
也就是说,您需要注意文件大小与 Kit 中的下一个工具:并发的交互方式。
并发
使用并发同时下载多个 blobs (或单个 blob 的多个部分) 对于从远程存储服务中获得良好性能至关重要。由于这是一项远程服务,您的流程将花费一些时间 (可能会花费大量时间) 四处等待,不执行任何操作。此等待时间为 HTTP 请求被发送到响应被接收之间的时间。在此期间,我们会等待网络执行请求,等待存储服务处理并发送响应,等待网络执行响应 (可能较大)。虽然该请求/响应周期的一部分会随所涉及的数据量而扩展,但其他部分只是固定的开销。
对象存储服务旨在处理许多并发请求。我们可以将这一点与每个请求都涉及一些时间来等待不执行任何操作的事实相结合,以发出许多并发请求来提高整体吞吐量。在 Python 中,这通常使用线程池完成:
pool = concurrent.futures.ThreadPoolExecutor()
futures = pool.map(request_chunk, chunks)
或使用
异步
:
tasks = [request_chunk_async(chunk) for chunk in chunks]
await asyncio.gather(*tasks)
我们能够让大量读取
同时
不执行任何操作,从而提高吞吐量。由于每个线程/任务通常不执行任何任务,因此拥有比计算机核心数更多的线程/任务也是可以的。如果并发请求数量足够多,您最终会使存储服务饱和,而存储服务试图满足一些每秒请求数和带宽目标数。但这些目标很高;您通常需要多台机器使存储服务饱和,并且应该实现非常高的吞吐量。
库
上述内容基本上适用于从对象存储服务执行远程 IO 的任何库。在 RAPIDS 环境中,
NVIDIA KvikIO
值得注意,因为
它会自动将大型请求分块为多个较小的请求,并并发发出这些请求。
它可以高效读取主机或设备内存,尤其是启用
GPU Direct Storage
时。
速度很快。
正如 RADIDS 24.12 发布公告中提到的那样,从 S3 读取数据时,KvikIO 可以实现惊人的吞吐量。我们来看看一些基准测试,看看效果如何。
基准测试
当您读取文件时,KvikIO 会将读取的文件拆分成较小的
kvikio.defaults.task_size
字节读取。它使用具有
kvikio.defaults.num_threads
工作线程的线程池并行执行这些读取请求。可以使用环境变量
KVIKIO_TASK_SIZE
和
KVIKIO_NTHREADS
控制这些内容,也可以通过 Python 使用:
with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size):
...
详情请参阅
Runtime Settings
。
此图表显示了在同一区域内,针对不同大小的线程池,从
S3
到
g4dn
EC2 实例读取 1 GB Blob 的吞吐量 (以 Mbps 为单位) (越高越好)。
图 1、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。这是
kvikio.RemoteFile.read
的吞吐量,适用于各种值的
kvikio.defaults.num
_threads 和 16 MiB 的任务。随着我们添加更多线程并对读取进行并行化,吞吐量会增加到一定程度。
线程越少 (少于 4 个),吞吐量越低,读取文件的时间越长。更多线程 (64、128、256) 通过将请求并行化到以并行方式提供服务的存储服务,实现更高的吞吐量。当我们遇到系统中存储服务、网络或其他瓶颈的限制时,会出现递减甚至负回报的情况。
借助远程 IO,每个线程都会在相对较长的时间内等待响应,因此对于您的工作负载,可能适合使用更多线程 (相对于核心数量而言)。我们看到,在本例中,吞吐量最高,介于 64 到 128 个线程之间。
如下图所示,任务大小也会影响最大吞吐量。
图 2、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的
g4dn.xlarge EC2 实例
。这显示了
kvikio.RemoteFile.read
吞吐量的热图。水平轴显示各种任务大小的吞吐量,而垂直轴显示各种线程数量。
只要任务大小不是太小(大约或低于 4 MiB)或太大(大约或超过 128 MiB),吞吐量就会达到 10 Gbps 左右。由于任务规模过小,发出许多 HTTP 请求会降低吞吐量。由于任务规模过大,我们无法获得足够的并发能力来最大限度地提高吞吐量。
与
boto3
(适用于 Python 的 AWS SDK) 相比,即使在线程池中使用 boto3 并发执行请求,KvikIO 也能实现更高的吞吐量。
图 3、从从 S3 读取 1 GB 的基准测试,到具有高达 25 Gbps 已发布带宽的
g4dn.xlarge EC2
实例。KvikIO 基准测试使用 64 个线程和 16 MiB 任务大小。Boto3 基准测试使用 ThreadPool 并行读取许多 4 MB 字节的块,而参数搜索表明,对于 Boto3 而言,这是最快的块大小。
对于略为逼真的工作负载 (尽管仍然仅有一个工作负载专注于 IO),我们比较了读取一批 360 个 parquet 文件 (每个文件约 128 MB) 的性能。这在 AWS
g4dn.12xlarge
实例上运行,该实例包含 4 个
NVIDIA T4
GPU 和 48 个 vCPUs。
图 4、从读取 S3 中的 Parquet 数据集的基准测试,到具有高达 50 Gbps 已发布带宽的 g4dn.12xlarge EC2 实例。该数据集包含 360 个 Apache Parquet 文件,每个文件约 128 MB,总计约 46 GB。Dask 集群有 4 个工作者。这些结果使用 cuDF 25.04,其中包括并行读取 Parquet 文件页脚的优化。
启用 KvikIO 后,四个 Dask 工作进程能够共同实现从 S3 到此单个节点的近 20 Gbps 吞吐量。
结束语
随着 RAPIDS 加速工作负载的其他部分,IO 可能会成为瓶颈。如果您使用的是对象存储,并且已经疲于等待数据加载,请尝试本博文中的一些建议。让我们了解如何在 Github 上使用 KvikIO。您还可以与 RAPIDS Slack 社区的 3,500 多名成员一起讨论 GPU 加速的数据处理。 |
https://developer.nvidia.com/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/ | Latest Multimodal Addition to Microsoft Phi SLMs Trained on NVIDIA GPUs | Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size they are not practical for the current resource constraints that many companies have.
The rise of
small language models
(SLMs) bridge quality and cost by creating models with a smaller resource footprint. SLMs are a subset of language models that tend to focus on specific domains and are built with simpler neural architectures. As models grow to mimic how humans perceive the world around them, models must rise to accept multiple forms of multimodal data.
Microsoft announces
the new generation of open SLMs
to the Phi family with two new additions:
Phi-4-mini
Phi-4-multimodal
Phi-4-multimodal is the first multimodal model to join the family that accepts text, audio, and image data inputs.
These models are small enough for on-device deployment. This release builds on top of the December 2024 research-only release of the
Phi-4 14B parameter SLM
and enables commercial use for the two new smaller models.
The new models are available on the
Azure AI Foundry
, Microsoft’s Cloud AI platform for design, customize, and manage AI applications and agents.
You can test out each member of the Phi family through the
NVIDIA API Catalog
, which is the first sandbox environment to support each modality and tool-calling for
Phi-4-multimodal
. Use the preview NIM microservice to integrate the model into your applications today.
Why invest in SLMs?
SLMs enable generative AI capabilities in memory and compute constrained environments. For example, SLMs can be deployed directly on smartphones and several consumer-grade devices. On-device deployment can facilitate privacy and compliance for use cases that must adhere to regulatory requirements.
Other benefits of SLMs include lower latency due to inherently faster inference compared to an LLM of similar quality. SLMs do tend to perform better on specialized tasks correlated to their training data. However, to supplement generalization and adaptability to different tasks, you can use retrieval-augmented generation (RAG) or native-function calling to build performant agentic systems.
Phi-4-multimodal
Phi-4-multimodal is with 5.6B parameters and accepts audio, image, and text reasoning. This enables it to support use cases such as automated speech recognition (ASR), multi-modal summarization, translation, OCR, and visual reasoning. This model was trained on 512 NVIDIA A100-80GB GPUs over 21 days.
Figure 1 shows how you can preview your image data and ask Phi-4-multimodal visual QA in the NVIDIA API Catalog. You can also see how to adjust parameters such as token limits, temperature, and sampling values. You can generate sample code in Python, JavaScript, and Bash to help you integrate the model more easily into your applications.
Figure 1. Visual QA demo in NVIDIA API Catalog
You can also demo tool calling with a set of prebuilt agents. Figure 2 shows a tool that retrieves live weather data.
Figure 2. Tool-calling demo in NVIDIA API Catalog
Phi-4-mini
Phi-4-mini is a text-only, dense, decoder-only Transformer model with 3.8B parameters that is optimized for chat. It includes a long-form context window of 128K tokens. This model was trained on 1024 NVIDIA A100 80GB GPUs over 14 days.
For both models, the training data is intentionally focused on high quality educational data and code which results in a
textbook-like quality
to the models. Text, speech, and vision benchmark data can be found in the model cards.
Advancing community models
NVIDIA is an active contributor to the open-source ecosystem and has released several hundred projects under open-source licenses. NVIDIA is committed to optimizing community software and open models such as Phi which promotes AI transparency and lets users broadly share work in AI safety and resilience.
Using the
NVIDIA NeMo platform
, these open models can be customized on proprietary data to be highly tuned and efficient for diverse AI workflows across any industry.
NVIDIA and Microsoft have a long standing partnership which includes several collaborations driving innovation on GPUs on Azure, integrations and optimizations for PC developers using NVIDIA RTX GPUs, and many more, including research spanning generative AI to healthcare and life sciences.
Get started today
Bring your data and try out Phi-4 on the NVIDIA-accelerated platform at
build.nvidia.com/microsoft
.
On the first multi-modal sandbox for Phi-4-multimodal, you can try out text, image, and audio as well as sample tool calling to see how this model will work for you in production. | https://developer.nvidia.com/zh-cn/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/ | 在 NVIDIA GPU 上训练的 Microsoft Phi SLM 的多模态最新进展 | 大语言模型(LLMs)已渗透到各行各业,并改变了技术潜力。但是,由于规模庞大,它们对于许多公司目前面临的资源限制来说并不切实际。
小语言模型
(SLMs)的兴起通过创建资源占用更小的模型,将质量和成本联系起来。SLMs 是语言模型的一个子集,这些模型倾向于专注于特定领域,并使用更简单的神经架构构建。随着模型的发展模仿人类感知周围环境的方式,模型必须接受多种形式的多模态数据。
Microsoft 宣布在 Phi 系列中
推出新一代开放式 SLM
,并新增两项功能:
Phi-4-mini
Phi-4-multimodal
Phi-4-multimodal 是第一个加入该系列的多模态模型,接受文本、音频和图像数据输入。
这些模型足够小,可以在设备上部署。此版本基于 2024 年 12 月发布的
Phi-4 14B 参数 SLM
的研究版本构建而成,可用于两个新的较小模型的商业用途。
这些新模型可在 Microsoft 的云 AI 平台
Azure AI Foundry
上使用,用于设计、定制和管理 AI 应用和代理。
您可以通过
NVIDIA API Catalog
测试 Phi 系列的每个成员,这是第一个支持
Phi-4 多模态
的每种模式和工具调用的沙盒环境。立即使用预览 NIM 微服务将模型集成到您的应用中。
为何投资 SLM?
SLMs 可在内存和计算受限环境中实现生成式 AI 功能。例如,SLMs 可以直接部署在智能手机和多台消费级设备上。对于必须遵守监管要求的用例,设备端部署可以促进隐私和合规性。
SLM 的其他优势包括降低延迟,因为与质量相似的 LLM 相比,其本身的推理速度更快。SLM 在处理与其训练数据相关的专业任务时往往表现得更好。但是,为了补充对不同任务的泛化和适应性,您可以使用检索增强生成(RAG)或原生函数调用来构建高性能代理系统。
Phi-4-multimodal
Phi-4-multimodal 具有 5.6B 个参数,接受音频、图像和文本推理。这使其能够支持自动语音识别 (ASR)、多模态摘要、翻译、OCR 和视觉推理等用例。该模型在 512 个 NVIDIA A100-80GB GPUs 上进行了为期 21 天的训练。
事实证明,该模型在 ASR 方面表现出色,因为它在
Huggingface OpenASR 排行榜上排名第一
,单词错误率为 6.14%。
词错误率 (WER)
是量化语音识别性能的常用计算方法。WER 计算不正确转录的单词 (替换、插入和删除) 与正确文本相比所占的百分比。
图 1 展示了如何在 NVIDIA API Catalog 中预览图像数据并询问 Phi-4 多模态视觉问答。您还可以了解如何调整参数,例如令牌限制、温度和采样值。您可以使用 Python、JavaScript 和 Bash 生成示例代码,以帮助您更轻松地将模型集成到应用中。
图 1、NVIDIA API Catalog 中的可视化问答演示
您还可以使用一组预构建代理演示工具调用。图 2 显示了用于检索实时天气数据的工具。
图 2、NVIDIA API Catalog 中的工具调用演示
Phi-4-mini
Phi-4-mini 是一个仅文本、密集、仅解码器的 Transformer 模型,具有 3.8B 个参数,并针对聊天进行了优化。它包含一个包含 128K 个令牌的长形式上下文窗口。该模型在 1024 个 NVIDIA A100 80GB GPUs 上进行了为期 14 天的训练。
对于这两个模型,训练数据有意地集中在高质量的教育数据和代码上,从而使模型获得类似于教科书的质量。您可以在模型卡中找到文本、语音和视觉基准测试数据。
推进社区模式
NVIDIA 是开源生态系统的积极贡献者,已根据开源许可发布了数百个项目。NVIDIA 致力于优化社区软件和 open-source licenses 中的项目,如 Phi,它促进了 AI 透明度,并让用户广泛分享在 AI 安全性和弹性方面的工作。
借助 NVIDIA NeMo 平台,这些开放模型可以根据专有数据进行定制,以便针对各行各业的各种 AI 工作流进行高度调整并提高效率。
NVIDIA 和 Microsoft 有着长期的合作伙伴关系,其中包括推动 Azure 上 GPU 创新的多项合作、为使用 NVIDIA RTX GPU 的 PC 开发者提供的集成和优化,等等,包括从生成式 AI 到医疗健康和生命科学的研究。
立即开始使用
请访问
build.nvidia.com/microsoft
,带上您的数据并在 NVIDIA 加速平台上试用 Phi-4。
在 Phi-4 多模态的第一个多模态沙盒中,您可以尝试使用文本、图像、音频以及示例工具调用,以了解此模型在生产环境中的工作原理。 |
https://developer.nvidia.com/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/ | Building a Simple VLM-Based Multimodal Information Retrieval System with NVIDIA NIM | In today’s data-driven world, the ability to retrieve accurate information from even modest amounts of data is vital for developers seeking streamlined, effective solutions for quick deployments, prototyping, or experimentation. One of the key challenges in information retrieval is managing the diverse modalities in unstructured datasets, including text, PDFs, images, tables, audio, video, and so on.
Multimodal AI models address this challenge by simultaneously processing multiple data modalities, generating cohesive and comprehensive output in different forms.
NVIDIA NIM
microservices simplify the secure and reliable deployment of AI
foundation models
for language,
computer vision
, speech, biology, and more.
NIM microservices can be deployed on NVIDIA-accelerated infrastructure anywhere and expose industry-standard APIs for fast integration with applications and popular AI development frameworks, including LangChain and LlamaIndex.
This post helps you get started with building a
vision language model
(VLM) based, multimodal, information retrieval system capable of answering complex queries involving text, images, and tables. We walk you through deploying an application using
LangGraph,
the state-of-the-art
llama-3.2-90b-vision-instruct
VLM, the optimized
mistral-small-24B-instruct
large language model (LLM), and NVIDIA NIM for deployment.
This method of building simple information retrieval systems offers several advantages over traditional ones. The latest VLM NIM microservice enables enhanced contextual understanding by processing lengthy, complex visual documents without sacrificing coherence. The integration of
LangChain’s tool calling
enables the system to create tools, dynamically select and use external tools, and improve the precision of data extraction and interpretation from various sources.
This system is good for enterprise applications because it generates structured outputs, ensuring consistency and reliability in responses. For more information about the implementation steps of this system, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
A simple HTML multimodal retrieval pipeline
The system consists of the following pipelines:
Document ingestion and preprocessing:
Runs a VLM on the images and translates them into text.
Question-answering:
Enables the user to ask questions of the system.
Both pipelines integrate NVIDIA NIM and LangGraph to process and understand text, images, complex visualizations, and tables effectively.
Data ingestion and preprocessing pipeline
This stage parses documents to process text, images, and tables separately. Tables are first converted into images, and images are processed by the NVIDIA-hosted NIM microservice API endpoint for the llama-3.2-90b-vision-instruct VLM to generate descriptive text.
Next, in the document reconstruction step, the descriptive text is merged with the original text of the document, then summarized by an
LLM with long context modeling
capability. In this implementation, llama-3.2-90b-vision-instruct is also used as the LLM, although other LLMs such as mistral-small-24b-instruct can also be deployed.
Finally, the complete text, summaries, images, and their descriptions are stored in a NoSQL database, along with unique document identifiers.
Figure 1. Data ingestion and preprocessing pipeline
LLMs with long context modeling
can process entire documents without fragmentation, enhancing comprehension of the document in a single pass, and capturing relationships and nuances across longer spans of text, leading to more accurate information retrieval.
In contrast, traditional models may handle inputs of up to a few thousand tokens, requiring lengthy documents to be split into smaller chunks to fit within the model’s context window. This chunking process can disrupt coherence and context, making it more difficult to accurately retrieve and rank relevant information.
However, long context modeling presents challenges related to scalability and cost, which must be considered when trading off with higher accuracy.
QA pipeline
All document summaries and their identifiers are compiled into a large prompt. When a query is sent, a
LLM with long context modeling
(mistral-small-24b-instruct in this case) processes the question, evaluates the relevance of each summary to the query, and returns the identifiers of the most relevant documents.
Figure 2. Question-answering pipeline
Next, the most relevant documents are fed into an LLM with long context (mistral-small-24b-instruct). The model generates an answer to the query based on the textual content. If the model identifies that an image may contain pertinent information based on its descriptive text, an additional step is triggered: the original image and the user’s question are sent to the VLM (llama-3.2-90b-vision-instruct), which can provide an answer based on the actual visual content.
Finally, the system combines both textual and visual insights to deliver a comprehensive answer.
Structured outputs ensure that the data returned by the model conforms to a predefined format, making it easier to extract specific information and perform subsequent operations. In contrast, unstructured or variable outputs can introduce ambiguities and difficulties in parsing the model’s responses, hindering automation and integration with other systems.
Generating structured data from models typically requires carefully designed prompts to guide the model into responding in a particular format, such as JSON. However, ensuring consistent adherence to this structure can be challenging due to the models’ natural tendency to generate free-form text.
NVIDIA NIM now natively supports
capabilities for generating structured outputs
. This means that you can rely on built-in functionalities to ensure that the model’s responses are consistently formatted, reducing the need for complex prompt engineering.
Integrating NVIDIA NIM with LangGraph
NVIDIA NIM offers seamless compatibility with popular frameworks and the latest AI models for your applications. The implementation of the pipeline integrates NVIDIA NIM with
LangGraph
, a framework to build agentic applications to determine the control flow, which has been widely adopted by the developer community. To orchestrate the workflow of this pipeline, the graph mainly consists of two nodes:
Assistant node:
Serves as an agent responsible for managing the logic and decision-making process. It interacts with the user’s inputs and invokes the necessary tools.
Tools node:
A collection of tools that perform specific tasks required by the assistant.
Figure 3. Use LangGraph to build an agent for the pipeline
Assistant node
The assistant node is a primary agent that operates according to the workflow outlined in Figure 3. The code of the main agent can be found in the
/NVIDIA/GenerativeAIExamples
GitHub repo.
Here are the agent inputs:
Collection_name
: The set of documents on which to search.
Question
: The user’s question.
document_id
: (Optional) If provided, the agent skips the document ranking phase.
This is the agent process:
Document selection
: If
document_id
is not provided, the agent invokes the
find_best_document_id
tool, which identifies the most relevant document for the user’s question within the specified collection.
Question answering
: With
document_id
, the agent uses the
query_document
tool. This tool attempts to answer the question using the LLM (mistral-small-24b-instruct) based on the text and image descriptions within the document.
Image analysis
(if necessary): If the
query_document
tool indicates that the answer might be in an image (by returning an
image_hash
value), the agent invokes the
query_image
tool. This tool retrieves the actual image and uses a VLM to analyze the image and find the answer.
Tools node
We implemented three key tools for the agent to perform its tasks.
Find_best_document_id
: Identify the most relevant document for the user’s question when
document_id
is not provided. For more information, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
query_document
: Search for an answer within the specified document. If the answer may be in an image, it provides details to query the image. For more information, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
query_image
: Analyze the actual image using a VLM when the answer might be within the image content. For more information, see the
/NVIDIA/GenerativeAIExamples
.
Binding external tools with models
Tool calling
is a feature that enables language models to integrate and interact with external tools or functions based on the prompts that they receive. This mechanism enables a model to decide which tools to use and how to use them to accomplish specific tasks.
Tool binding
empowers models to extend their capabilities dynamically, selecting appropriate tools during execution to provide more accurate, context-aware responses.
Binding external tools is particularly crucial in agentic frameworks, where agents must choose the appropriate tools and provide the necessary arguments to perform tasks effectively. The benefits of binding external tools include the following:
Extended capabilities
: Models can perform complex operations such as calculations, data retrieval, or API calls, which go beyond mere text generation.
Dynamic tool selection
: The model can assess in real time which tools are most suitable for the task, improving efficiency and relevance.
Seamless integration
: NVIDIA NIM supports the integration of external tools, such as LangChain and LangGraph, with open community models such as Llama 3.3. You can adopt these advanced features without making significant changes to your existing systems.
In this implementation, use LangChain’s
@tool decorator
to create three tools, then use the
.bind_tools
method to bind the tools with models.
Defining structured outputs with Pydantic
By defining the output schema with
Pydantic
and guiding an LLM NIM microservice such as mistral-small-24b-instruct through precise prompts, you ensure that the responses are consistent, reliable, and easily consumable by other components within the system. This approach is essential when integrating the LLM into automated workflows and agent-based frameworks such as LangGraph.
Define the structure
The process begins by defining the structure of the output that you expect from the LLM using Pydantic. This guarantees that the data returned by the model is consistent and can be easily parsed for
downstream processing
.
from typing import List, Optional
from pydantic import BaseModel, Field
class Document(BaseModel):
"""
Represents a document with an identifier and its summary.
"""
id: str = Field(..., description="Hash identifier of the document")
summary: str = Field(..., description="The summary of the document as is")
class BestDocuments(BaseModel):
"""
Contains a list of the best documents to answer the question and their summaries.
"""
documents: List[Document] = Field(..., description="List of best documents")
class Answer(BaseModel):
"""
Represents the answer to the user's question.
"""
answer: str = Field(..., description="Answer to the question posed by the user")
Next, instruct the LLM to generate outputs that align with the defined Pydantic structures. This is achieved by incorporating specific instructions within the prompt and using LangChain’s
with_structured_output
method.
Define the prompt
The
prompt_document_expert
contains detailed instructions for the LLM, specifying the expected input format (Markdown with document summaries) and the required output format (JSON matching the
BestDocuments
schema).
from langchain.chat_models import ChatNVIDIA
from langchain.prompts import ChatPromptTemplate
# Initialize the LLM with desired parameters
llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct
", temperature=0, max_tokens=3000)
# Define the prompt template for the document expert
prompt_document_expert = ChatPromptTemplate.from_messages(
[
(
"system",
f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document.
Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on:
{{documents_context}}
""",
),
(
"human",
"Can you tell me what are the most relevant document ids for this question: {question}"
),
("human", "Tip: Make sure to answer in the correct format"),
]
)
Prepare context
The
get_context
function prepares the input data by retrieving document summaries and formatting them appropriately.
def get_context(input_data: dict) -> dict:
collection_name = input_data.get("collection_name")
question = input_data.get("question")
documents_context = get_document_summaries_markdown(collection_name)
# print(context)
return {"documents_context": documents_context,
"collection_name": collection_name,
"question": question}
Bind the structured output
The
llm.with_structured_output(BestDocuments)
method instructs the LLM to produce output conforming to the
BestDocuments
Pydantic model. This method internally handles the parsing and validation of the LLM’s response, ensuring that the output matches the expected structure.
LangChain’s
with_structured_output
method simplifies the process of binding the model to produce structured outputs. It abstracts the complexity of parsing and validating the LLM’s responses, enabling you to focus on defining the desired output structure and the prompt instructions.
Finally, create a chain to process the input and generate the structured output:
chain_document_expert = (
RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict())
)
End-to-end tool in action
To get started with the multimodal retrieval system, clone the
/NVIDIA/GenerativeAIExamples
GitHub repo and follow the Quick Start guide to set up the service. When it’s up and running, open your web browser and navigate to
http://localhost:7860
to access the system through the Gradio user interface.
For example, explore how the system processes queries on the NVIDIA Technical Blog. Ask a question about a bar chart showing the NVIDIA H100 GPU performance from one of the posts. The
Select Question
field is for evaluation purposes, with the
Ground Truth Answer
field value provided by a human.
Figure 4. Agent multi-document evaluation
This system generates an accurate answer based on the bar chart and also displays the relevant image for reference, such as the chart showing RetinaNet achieving 54%. This ensures precise answers while enabling users to visually verify the referenced data.
Figure 5. Agent result with source graph for verification
Video 1. How to Insert HTML Documents into a Multimodal Retriever Collection Using NVIDIA NIM
Video 2. How to Search Text and Images Within a Multimodal Retriever Collection Using NVIDIA NIM
Challenges and solutions
As data volumes increase, so does the complexity of processing and retrieving relevant information. Handling large datasets efficiently is essential to maintaining performance and ensuring user satisfaction. In this information retrieval system, the sheer amount of document summaries can exceed the context window of even long-context models, making it challenging to process all summaries in a single prompt.
Processing large volumes of data also demands considerable computational resources, which can result in higher costs and increased latency. Optimizing resource utilization is crucial to delivering fast and accurate responses while minimizing unnecessary expenses.
Hierarchical document reranking solution
To address scalability challenges, we implemented a hierarchical approach in the initial document reranking phase. Instead of processing all document summaries simultaneously, we divided them into manageable batches that fit within the model’s context window. The process involves multiple stages:
Batch processing
: Summaries are grouped into batches that the model can handle without exceeding the prompt size limitations.
Intermediate reranking
: The model evaluates each batch separately, ranking the documents within each group.
Selection of top candidates
: The most relevant documents from each batch are selected to proceed to the next stage.
Final reranking
: The top candidates from all batches are combined and re-evaluated to identify the most relevant document.
Considering both scalability and cost concerns, this hierarchical approach ensures that all documents are considered without exceeding the model’s capacity. It not only improves scalability, but also boosts efficiency by narrowing down the candidate documents systematically until the most relevant one is identified.
Future prospects with smaller models
Using language models, especially those with long-context capabilities, involves processing a large number of tokens, which can incur significant costs. Each token processed adds to the overall expense, making cost management a critical consideration when deploying these systems at scale.
The concern about cost is indeed valid. However, the landscape of language models is rapidly evolving, with smaller models becoming increasingly capable and efficient. As these advancements continue, these smaller models may offer similar performance at a fraction of the cost.
Conclusion
This post discussed the implementation of a simple multimodal information retrieval pipeline that uses NVIDIA NIM and LangGraph. The pipeline offers several advantages over existing information retrieval methods:
Enhanced comprehension of documents
A multimodal model to extract information from images, tables, and text
Seamless integration of external tools
Generation of consistent and structured output
Using NVIDIA NIM and LangGraph, you can build on this work and customize it to suit specific needs. To get started, you can find source code in the
/NVIDIA/GenerativeAIExamples
GitHub repo.
NVIDIA NIM
also offers access to more models optimized for NVIDIA GPUs. You can explore
NVIDIA NeMo
, a scalable
generative AI
framework designed for researchers and PyTorch developers working on LLMs, multimodal models, and more.
If you are working with a large corpora of enterprise data and are looking to develop enterprise-ready, real-time
multilingual and cross-lingual information retrieval systems
to generate context-aware responses, learn more about
NVIDIA NeMo Retriever
. | https://developer.nvidia.com/zh-cn/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/ | 使用 NVIDIA NIM 构建基于 VLM 的简单多模态信息检索系统 | 在当今数据驱动的世界中,即使是从少量数据中检索准确信息的能力,对于寻求精简、有效的快速部署、原型设计或实验解决方案的开发者来说也至关重要。信息检索领域的主要挑战之一是管理非结构化数据集中的各种模式,包括文本、PDF、图像、表格、音频、视频等。
多模态 AI 模型通过同时处理多个数据模式来应对这一挑战,以不同的形式生成连贯一致的全面输出。
NVIDIA NIM
微服务可简化 AI
基础模型
在语言、
计算机视觉
、语音、生物学等领域的安全可靠部署。
NIM 微服务可随时随地部署在 NVIDIA 加速基础设施上,并提供行业标准 API,以快速集成应用和热门 AI 开发框架 (包括 LangChain 和 LlamaIndex)。
本文将帮助您着手构建基于
视觉语言模型
(VLM)的多模态信息检索系统,该系统能够回答涉及文本、图像和表格的复杂查询。我们将引导您使用
LangGraph
部署应用程序、先进的
llama-3.2-90b-vision-instruct
VLM、经过优化的
mistral-small-24B-instruct
大语言模型(LLM),以及用于部署的 NVIDIA NIM。
与传统方法相比,这种构建简单信息检索系统的方法具有许多优势。最新的 VLM NIM 微服务可在不牺牲一致性的情况下处理冗长而复杂的视觉文档,从而增强上下文理解。通过集成
LangChain 的工具调用
,系统能够创建工具,动态选择和使用外部工具,并提高从各种来源提取和解释数据的精度。
此系统适用于企业应用,因为它生成结构化输出,确保响应的一致性和可靠性。有关此系统的实施步骤的更多信息,请参阅
/NVIDIA/GenerativeAIExamples
Github 仓库。
简单的 HTML 多模态检索工作流
该系统由以下管道组成:
文档提取和预处理:在图像上运行 VLM 并将其转换为文本。
问答:允许用户提出系统问题。
这两个工作流均集成了 NVIDIA NIM 和 LangGraph,可有效处理和理解文本、图像、复杂的可视化效果和表格。
数据提取和预处理 pipeline
此阶段会解析文档,分别处理文本、图像和表格。首先将表格转换为图像,然后由 NVIDIA 托管的 NIM 微服务 API 端点为 llama-3.2-90b-vision-instruct VLM 处理图像,以生成描述性文本。
接下来,在文档重建步骤中,描述性文本将与文档的原始文本合并,然后由具有长上下文建模功能的 LLM 进行汇总。在此实施中,还可将 Llama-3.2-90b-vision-instruct 用作 LLM,不过也可部署其他 LLM(例如 mistral-small-24b-instruct)。
最后,完整的文本、摘要、图像及其说明将存储在 NoSQL 数据库中,以及唯一的文档标识符。 图 1. 数据提取和预处理管道
采用长上下文建模的 LLMs 可以处理整个文档,而不会出现碎片,从而在单个通道中增强对文档的理解,并捕获更长的文本跨度中的关系和细微差别,从而实现更准确的信息检索。
相比之下,传统模型可能会处理多达数千个 tokens 的输入,因此需要将冗长的文档拆分成较小的块,以适应模型的上下文窗口。这种分块过程会破坏一致性和上下文,使准确检索相关信息并对其进行排名变得更加困难。
但是,长上下文建模会带来与可扩展性和成本相关的挑战,在以更高的准确性进行权衡时必须考虑这些挑战。
QA 管道
所有文档摘要及其标识符都会编译成一个大型提示。发送查询时,使用长上下文建模(本例中为 mistral-small-24b-instruct)的 LLM 会处理问题,评估每个摘要与查询的相关性,并返回最相关文档的标识符。 图 2、问题回答管道
接下来,将最相关的文档输入到具有长上下文 (mistral-small-24b-instruct) 的 LLM 中。模型会根据文本内容生成查询答案。如果模型根据描述性文本识别出图像可能包含相关信息,则会触发另一个步骤:将原始图像和用户的问题发送至 VLM (llama-3.2-90b-vision-instruct),VLM 可以根据实际视觉内容提供答案。
最后,该系统将文本和视觉见解相结合,以提供全面的答案。
结构化输出可确保模型返回的数据符合预定义的格式,从而更轻松地提取特定信息并执行后续运算。相比之下,非结构化或可变输出会在解析模型的响应时引入模糊性和困难,从而阻碍自动化以及与其他系统的集成。
从模型生成结构化数据通常需要精心设计的提示,以指导模型以特定格式(例如 JSON)做出响应。但是,由于模型自然倾向于生成自由格式文本,因此确保一致性遵循此结构可能具有挑战性。
NVIDIA NIM 现在原生支持生成结构化输出的功能。这意味着,您可以依靠内置功能来确保模型的响应格式保持一致,从而减少对复杂提示工程的需求。
将 NVIDIA NIM 与 LangChain 集成
NVIDIA NIM 为您的应用提供与热门框架和最新 AI 模型的无缝兼容性。该流程的实施将 NVIDIA NIM 与 LangChain 相集成。LangChain 是一个用于构建代理应用以确定控制流的框架,已被开发者社区广泛采用。为编排此工作流的工作流,该图形主要由两个节点组成:
助理节点:充当负责管理逻辑和决策过程的代理。它与用户的输入进行交互,并调用必要的工具。
工具节点:用于执行助手所需特定任务的工具集合。
图 3、使用 LangGraph 为管道构建代理
助理节点
助手节点是根据图 3 中概述的工作流程运行的主代理。主代理的代码可在
/NVIDIA/GenerativeAIExamples
GitHub repo 中找到。
智能体输入如下:
Collection_name
:要在其上搜索的文档集。
Question
:用户的问题。
document_id
:(可选) 如果提供,代理会跳过文档排名阶段。
这是智能体流程:
文档选择
:如果未提供
document_id
,代理会调用
find_best_document_id
工具,该工具可在指定集合中识别与用户问题最相关的文档。
问题回答:对于
document_id
,代理使用
query_document
工具。此工具会尝试使用 LLM (mistral-small-24b-instruct) 根据文档中的文本和图像描述来回答此问题。
图像分析 (如有必要):如果
query_document
工具表明答案可能在图像中 (通过返回
image_hash
值),代理会调用
query_image
工具。此工具会检索实际图像,并使用 VLM 分析图像并寻找答案。
工具节点
我们为智能体实施了三个关键工具来执行任务。
Find_best_document_id
:在未提供
document_id
时,确定与用户问题最相关的文档。有关更多信息,请参阅
/NVIDIA/GenerativeAIExamples
Github 存储库。
query_document
:在指定文档中搜索答案。如果答案可能在图像中,则会提供查询图像所需的详细信息。有关更多信息,请参阅
/NVIDIA/GenerativeAIExamples
GitHub 存储库。
query_image
:当答案可能在图像内容中时,使用 VLM 分析实际图像。有关更多信息,请参阅/
NVIDIA/GenerativeAIExamples
。
将外部工具与模型绑定
工具调用是一项功能,可让语言模型根据收到的提示集成外部工具或函数并与之交互。此机制使模型能够决定使用哪些工具以及如何使用这些工具来完成特定任务。
工具绑定使模型能够动态扩展其功能,在执行期间选择合适的工具,以提供更准确的上下文感知响应。
绑定外部工具在代理框架中尤为重要,在这种框架中,代理必须选择合适的工具并提供有效执行任务所需的参数。绑定外部工具的优势包括:
扩展功能
:模型可以执行计算、数据检索或 API 调用等复杂操作,而不仅仅是文本生成。
动态工具选择
:模型可以实时评估哪些工具最适合任务,从而提高效率和相关性。
无缝集成:NVIDIA NIM 支持将 LangChain 和 LangGraph 等外部工具与 Llama 3.3 等开放式社区模型集成。您可以采用这些高级功能,而无需对现有系统进行重大更改。
在此实现中,使用 LangChain 的
@tool
装饰器创建三个工具,然后使用
.bind_tools
方法将这些工具与模型绑定。
使用 PyTorch 定义结构化输出
通过使用
Pydantic
定义输出模式,并通过精确的提示引导 LLM NIM 微服务 (例如 mistral-small-24b-instruct) ,您可以确保响应一致、可靠,并且易于被系统中的其他组件使用。当将 LLM 集成到自动化工作流和基于代理的框架 (例如 LangChain) 时,这种方法至关重要。
定义结构
首先,使用 Pydantic 定义 LLM 的预期输出结构。这可确保模型返回的数据保持一致,并可轻松解析以进行下游处理。
from typing import List, Optional
from pydantic import BaseModel, Field
class Document(BaseModel):
"""
Represents a document with an identifier and its summary.
"""
id: str = Field(..., description="Hash identifier of the document")
summary: str = Field(..., description="The summary of the document as is")
class BestDocuments(BaseModel):
"""
Contains a list of the best documents to answer the question and their summaries.
"""
documents: List[Document] = Field(..., description="List of best documents")
class Answer(BaseModel):
"""
Represents the answer to the user's question.
"""
answer: str = Field(..., description="Answer to the question posed by the user")
接下来,指示 LLM 生成与定义的 Pydantic 结构保持一致的输出。这是通过在提示符中加入特定指令并使用 LangChain 的
with_structured_output
方法实现的。
定义提示
prompt_document_expert
包含 LLM 的详细说明,可指定预期的输入格式 (带有文档摘要的 Markdown) 和所需的输出格式 (与
BestDocuments
架构匹配的 JSON)。
from langchain.chat_models import ChatNVIDIA
from langchain.prompts import ChatPromptTemplate
# Initialize the LLM with desired parameters
llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct
", temperature=0, max_tokens=3000)
# Define the prompt template for the document expert
prompt_document_expert = ChatPromptTemplate.from_messages(
[
(
"system",
f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document.
Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on:
{{documents_context}}
""",
),
(
"human",
"Can you tell me what are the most relevant document ids for this question: {question}"
),
("human", "Tip: Make sure to answer in the correct format"),
]
)
准备上下文
get_context
函数通过检索文档摘要并对其进行适当格式化来准备输入数据。
def get_context(input_data: dict) -> dict:
collection_name = input_data.get("collection_name")
question = input_data.get("question")
documents_context = get_document_summaries_markdown(collection_name)
# print(context)
return {"documents_context": documents_context,
"collection_name": collection_name,
"question": question}
绑定结构化输出
llm.with_structured_output(BestDocuments)
方法指示 LLM 生成符合
BestDocuments
Pydantic 模型的输出。此方法在内部处理 LLM 响应的解析和验证,确保输出与预期结构相匹配。
LangChain 的
with_structured_output
方法简化了绑定模型以生成结构化输出的过程。它抽象化了解析和验证 LLM 响应的复杂性,使您能够专注于定义所需的输出结构和提示指令。
最后,创建一个链来处理输入并生成结构化输出:
chain_document_expert = (
RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict())
)
端到端工具的实际应用
要开始使用多模态检索系统,请克隆
/NVIDIA/GenerativeAIExamples
GitHub 存储库,然后按照快速入门指南设置服务。在服务启动并运行时,打开 Web 浏览器并导航至
http://localhost:7860
,通过 Gradio 用户界面访问系统。
例如,在 NVIDIA 技术博客上探索系统如何处理查询。在其中一篇博文中,您可以询问有关显示 NVIDIA H100 GPU 性能的条形图的问题。“
Select Question
” 字段用于评估,真值答案字段值由人类提供。 图 4、Agent 多文档评估
该系统会根据条形图生成准确的答案,并显示相关图像以供参考,例如图表显示 RetinaNet 达到了 54%。这可确保准确的答案,同时使用户能够以直观方式验证引用数据。 图 5、Agent 结果与用于验证的源图形
视频1. 如何使用 NVIDIA NIM 将 HTML 文档插入多模态检索器集合
视频2. 如何使用 NVIDIA NIM 在多模态检索器集合中搜索文本和图像
挑战和解决方案
随着数据量的增加,处理和检索相关信息的复杂性也随之增加。高效处理大型数据集对于保持性能和确保用户满意度至关重要。在此信息检索系统中,文档摘要的数量甚至可能超过长上下文模型的上下文窗口,这使得在单个提示中处理所有摘要具有挑战性。
处理大量数据还需要大量计算资源,这可能会导致成本增加和延迟增加。优化资源利用率对于提供快速准确的响应,同时最大限度地减少不必要的支出至关重要。
分层文档重新排序解决方案
为应对可扩展性挑战,我们在初始文档重新排序阶段实施了分层方法。我们不会同时处理所有文档摘要,而是将其分为可管理的批量,以适应模型的上下文窗口。此过程涉及多个阶段:
批量处理
:将摘要分组为模型可以处理的批量,且不会超过提示大小限制。
中级重新排序
:模型分别评估每个批次,对每个组中的文档进行排序。
选择最优秀的候选文档
:从每个批次中选择最相关的文档,以进入下一阶段。
最终重新排名
:系统会对所有批次中排名靠前的候选文档进行合并和重新评估,以确定相关性最高的文档。
考虑到可扩展性和成本问题,这种分层方法可确保在不超出模型容量的情况下考虑所有文档。它不仅提高了可扩展性,而且还通过系统缩小候选文档的范围来提高效率,直到识别出最相关的文档。
小型模型的未来前景
使用语言模型,尤其是具有长上下文功能的语言模型,涉及处理大量 token,而这可能会产生巨大的成本。处理的每个 token 都会增加总支出,因此在大规模部署这些系统时,成本管理是一个重要考虑因素。
对成本的担心确实是站得住脚的。然而,语言模型的格局正在迅速演变,小型模型的功能和效率也在不断提升。随着这些进步的继续,这些较小的模型可能以远低于成本提供相似的性能。
结束语
本文讨论了如何使用 NVIDIA NIM 和 LangChain 实现简单的多模态信息检索工作流。与现有的信息检索方法相比,Pipeline 具有以下优势:
增强对文档的理解
用于从图像、表格和文本中提取信息的多模态模型
无缝集成外部工具
生成一致的结构化输出
借助 NVIDIA NIM 和 LangGraph,您可以在此基础上进行构建并对其进行定制,以满足特定需求。首先,您可以在
/NVIDIA/GenerativeAIExamples
GitHub repo 中找到源代码。
NVIDIA NIM
还支持访问更多针对 NVIDIA GPU 优化的模型。您可以探索
NVIDIA NeMo
,这是一个可扩展的生成式
AI
框架,专为研究 LLM、多模态模型等的研究人员和 PyTorch 开发者而设计。
如果您正在处理大型企业数据语料库,并希望开发企业就绪的实时多语种和跨语言信息检索系统来生成上下文感知响应,请详细了解
NVIDIA NeMo Retriever
。 |
https://developer.nvidia.com/blog/tag/inference-performance/ | Inference Performance | No content found | https://developer.nvidia.com/zh-cn/blog/tag/inference-performance/ | Inference Performance | No content found |
https://developer.nvidia.com/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/ | Optimizing Qwen2.5-Coder Throughput with NVIDIA TensorRT-LLM Lookahead Decoding | Large language models (LLMs)
that specialize in coding have been steadily adopted into developer workflows. From pair programming to self-improving
AI agents
, these models assist developers with various tasks, including enhancing code, fixing bugs, generating tests, and writing documentation.
To promote the development of open-source LLMs, the Qwen team recently released Qwen2.5-Coder, a family of advanced LLMs for code generation, reasoning, and fixing across popular programming languages. This post explores the benefits of inference optimizations for Qwen2.5-Coder models supported in
NVIDIA TensorRT-LLM
, and the ease of deployment with
NVIDIA NIM
for transformative potential and coding efficiency.
Qwen2.5-Coder models
The Qwen2.5-Coder models have achieved state-of-the-art performance across popular academic benchmarks.
NVIDIA TensorRT-LLM
has optimized three popular models from the Qwen2.5-Coder family—the 1.5B, 7B, and 32B versions—for high throughput and low latency. TensorRT-LLM is a library for fast, efficient LLM inference and includes optimizations such as dynamic
inflight batching
,
KV caching
,
KV cache reuse
, and several speculative decoding techniques, among others.
These optimizations help deliver performance improvements for the Qwen2.5-Coder models on popular programming languages such as Python, C++, Java, Bash, Javascript, TypeScript, and Go, reaching a wider range of developers. This post explores the lookahead decoding optimization and the performance boost it helps achieve. Without any additional training or need for additional draft models, developers can leverage the TensorRT-LLM
high-level API
to speed up Qwen2.5-Coder inference to generate multiline autocode completion.
Lookahead decoding
Lookahead decoding
is a speculative decoding technique that addresses the slow autoregressive nature of LLMs. Each autoregressive decoding step only generates one token at a time, not leveraging the massive parallel processing power of NVIDIA GPUs, leading to low GPU utilization and lower throughput. We’ve previously discussed the throughput boost achievable with
draft target speculative decoding
, and here we discuss the benefits of leveraging TensorRT-LLM lookahead decoding implementation using the Qwen2.5-Coder models as an example.
Unlike the single-token generation in autoregressive decoding, lookahead decoding generates multiple tokens simultaneously, adequately utilizing the parallel processing capabilities of the GPU, leveraging computation (FLOPs) for latency reduction. Moreover, lookahead decoding doesn’t require a separate draft model that’s needed for draft target speculative decoding.
Each decoding step is divided into two parallel branches, the lookahead branch and the verification branch. Using the
Jacobi iteration method
, a classic nonlinear systems solver, the lookhead branch performs parallel decoding for future tokens by generating n-grams. The verification branch selects and verifies the promising n-gram candidates generated by the lookahead branch.
The lookahead algorithm is configured using three key parameters: window size (W), n-gram size (N), and verification set size (G).
Window size (W):
Represents the lookahead window size, which determines how many future tokens the algorithm attempts to predict in each step. Larger window size enables the model to look further, helping generate more tokens in a single pass. This effectively improves throughput performance while utilizing GPU computation FLOPs efficiently.
N-gram size (N):
Represents the size of the n-grams used in the lookahead process. For example, a 5-gram is a contiguous sequence of 5 future tokens. Together with the window size, it creates a fixed-sized, 2D window for the lookahead branch to generate n-grams from the Jacobi iteration trajectory.
Verification set size (G):
Represents the maximum number of speculations or candidate n-grams that the algorithm considers in each step for verification. It balances the trade-off between computation efficiency and exploring more possibilities.
Figure 1. Lookahead decoding workflow with (W, N, G) = (5, 3, 2). Image credit:
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
Lookahead performance greatly depends on the base model, hardware, batch size, sequence length, and the dataset. It is recommended to profile various configurations to find the best (W, N, G) configuration given the setup. Optimal (W, N, G) tuple configuration enables lookahead decoding to deliver improved throughput performance without the need for any additional training, fine-tuning or draft models.
Through our experiments on (W, N, G) configuration values sweep, we achieve 3.6x and 1.6x throughput speedups for Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models, respectively. These speedups are measured in throughput (tokens/second) compared to baseline (no lookahead speculative decoding) on
NVIDIA H100 Tensor Core GPUs
, as shown in Figure 2.
Figure 2. Qwen2.5-Coder models throughput boost on NVIDIA DGX H100 with TensorRT-LLM lookahead decoding
Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H100, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder-32B-Instruct, batch size=1, TensorRT-LLM version 0.15.0.
Similar throughput speedups are achieved on
NVIDIA H200 Tensor Core GPUs
. With their higher memory bandwidth, they also help raise the baseline throughput performance leading to slightly lower speedups as compared to H100 GPUs (Figure 3).
Figure 3. Qwen2.5-Coder models throughput boost on NVIDIA DGX H200 with TensorRT-LLM lookahead decoding
Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H200, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder 32B Instruct, batch size=1, TensorRT-LLM version 0.15.0.
Steps to run lookahead decoding with TensorRT-LLM
To reproduce these performance gains using lookahead speculative decoding within TensorRT-LLM, follow the steps below.
# Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows)
sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools
&& pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com
Then run lookahead decoding in TensorRT-LLM using the high-level API.
# Command for Qwen2.5-Coder-7B-Instruct
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig,
LookaheadDecodingConfig, SamplingParams)
def main():
"""The end user can customize the build configuration with the
build_config class. # Max draft length is based on (W,N,G) values and
calculated as: (W + G -1) * (N-1) + ( N<=1 ? 0: N-2)"""
build_config = BuildConfig(max_batch_size = 128,
max_input_len = 2048,
max_seq_len = 4096,
max_num_tokens = 16384,
max_draft_len = 111)
build_config.plugin_config.reduce_fusion = True
build_config.plugin_config.use_paged_context_fmha = True
build_config.plugin_config.multiple_profiles = True
# The configuration for lookahead decoding
lookahead_config = LookaheadDecodingConfig(max_window_size=8,
max_ngram_size=8,
max_verification_set_size=8)
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4)
llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct",
kv_cache_config=kv_cache_config,
build_config=build_config,
speculative_config=lookahead_config)
prompt = """Write a C++ program to find the nth Fibonacci number using
recursion. Now we define a sequence of numbers in which each number is the
sum of the three preceding ones. The first three numbers are 0, -1, -1.
Write a program to find the nth number."""
sampling_params = SamplingParams(lookahead_config=lookahead_config)
output = llm.generate(prompt, sampling_params=sampling_params)
print(output)
if __name__ == '__main__':
main()
Summary
Lookahead speculative decoding enables throughput boost on LLMs without any additional training, fine-tuning, or draft models. We presented benchmarked performance improvements on Qwen2.5-Coder models. Visit build.nvidia.com to
try the Qwen2.5-Coder models
optimized with NVIDIA TensorRT-LLM for free. Qwen2.5-Coder models optimized with TensorRT-LLM have also been packaged as downloadable
NVIDIA NIM
microservices for ease of deployment.
Acknowledgments
We would like to thank Liwei Ma, Fanrong Li, Nikita Korobov, and Martin Marciniszyn Mehringer for their efforts in supporting this post. | https://developer.nvidia.com/zh-cn/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/ | 使用 NVIDIA TensorRT-LLM 前瞻性解码优化 Qwen2.5-Coder 吞吐量 | 专注于编码的
大语言模型(LLMs)
已稳步应用于开发者工作流程。从配对编程到自我改进的
AI 智能体
,这些模型可帮助开发者完成各种任务,包括增强代码、修复错误、生成测试和编写文档。
为促进开源 LLM 的开发,Qwen 团队最近发布了 Qwen2.5-Coder,这是一系列先进的 LLM,用于跨热门编程语言的代码生成、推理和修复。本文将探讨针对
NVIDIA TensorRT-LLM 支持
的 Qwen2.5-Coder 模型进行推理优化的优势,以及借助 NVIDIA NIM 轻松部署以提升变革潜力和编码效率的好处。
Qwen2.5-Coder 模型
Qwen2.5-Coder 模型在热门的学术基准测试中取得了出色的性能。
NVIDIA TensorRT-LLM
已对 Qwen2.5-Coder 系列的三种热门模型 (1.5B、7B 和 32B 版本) 进行优化,以实现高吞吐量和低延迟。TensorRT-LLM 是一个用于快速、高效 LLM 推理的库,包含动态机上
批处理
、
KV 缓存
、
KV 缓存重复使用
和几种预测性解码技术等优化功能。
这些优化有助于提高 Qwen2.5-Coder 模型在 Python、C++、Java、Bash、Javascript、TypeScript 和 Go 等热门编程语言中的性能,从而使更多开发者受益。本文将探讨 lookahead decoding 优化的前瞻性及其有助于实现的性能提升。开发者无需进行任何额外训练,也无需额外的草图模型,即可利用 TensorRT-LLM 高级 API 加速 Qwen2.5-Coder 推理,以生成多行自动代码完成。
解码前景展望
解码前瞻是一种预测性解码技术,可解决 LLMs 缓慢自回归的问题。每个自回归解码步骤一次仅生成一个 token,无法利用 NVIDIA GPUs 强大的并行处理能力,导致 GPU 利用率低、吞吐量低。我们之前讨论过通过草稿目标预测解码可以实现的吞吐量提升,在这里,我们讨论了以 Qwen2.5-Coder 模型为例,利用 TensorRT-LLM lookahead decoding 实现的优势。
与自回归解码中的单令牌生成不同,前瞻性解码可同时生成多个令牌,充分利用 GPU 的并行处理能力,利用计算(FLOPs)降低延迟。此外,对于草稿目标预测性解码,前瞻性解码不需要使用单独的草稿模型。
每个解码步骤分为两个并行分支,即 lookahead 分支和验证分支。通过使用经典的非线性系统求解器
Jacobi 迭代法
,lookahead 分支通过生成 n-grams 来对未来的 tokens 执行并行解码。验证分支选择并验证由 lookahead 分支生成的有前景的 n-gram 候选项。
前瞻性算法使用三个关键参数进行配置:窗口大小(W),n-gram 大小(N)和验证集大小(G)。
窗口大小 (W):表示前瞻性窗口大小,它决定了算法在每个步骤中尝试预测的未来令牌数量。窗口大小越大,模型的视野越广,一次传递就能生成更多 token。这可有效提高吞吐量性能,同时高效利用 GPU 计算 FLOPs。
N-gram size (N):表示前瞻性流程中使用的 N – gram 的大小。例如,5-gram 是由 5 个未来令牌组成的连续序列。它与窗口大小一起为前瞻性分支创建了一个大小固定的 2D 窗口,以便从 Jacobi 迭代轨迹生成 n-gram。
验证集大小 (G):表示算法在每个验证步骤中考虑的推测或候选 n-gram 的最大数量。它平衡了计算效率与探索更多可能性之间的权衡。
图 1、使用 (W,N,G) = (5,3,2) 展望解码工作流程。图片来源:
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
未来的性能很大程度上取决于基础模型、硬件、批量大小、序列长度和数据集。建议分析各种配置,以找到给定设置的最佳 (W,N,G) 配置。最佳 (W,N,G) 元组配置支持 lookahead 解码前瞻性,无需任何其他训练、fine-tuning 或 draft 模型,即可提供更高的吞吐量性能。
通过对 (W,N,G) 配置值扫描的实验,我们分别为 Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型实现了 3.6 倍和 1.6 倍的吞吐量加速。这些加速是通过
NVIDIA H100 Tensor Core GPUs
上的吞吐量 (tokens/second) 与基线 (无 lookahead speculative decoding) 的比较进行测量的,如 Figure 2 所示。
图 2、借助 TensorRT-LLM 超前解码,Qwen2.5-Coder 模型可提升 NVIDIA DGX H100 上的吞吐量
数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H100,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder-32B-Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。
NVIDIA H200 Tensor Core GPU
也实现了类似的吞吐量加速。凭借更高的显存带宽,它们还有助于提高基准吞吐量性能,从而使速度略低于 H100 GPU (图 3)。
图 3、Qwen2.5-Coder 模型在 NVIDIA DGX H200 上通过 TensorRT-LLM 超前解码实现吞吐量提升
数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H200,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder 32B Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。
使用 TensorRT-LLM 进行解码的前瞻性运行步骤
要在 TensorRT-LLM 中使用预测性解码重现这些性能提升,请执行以下步骤。
# Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows)
sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools
&& pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com
然后,使用高级 API 在 TensorRT-LLM 中运行 lookahead decoding。
# Command for Qwen2.5-Coder-7B-Instruct
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig,
LookaheadDecodingConfig, SamplingParams)
def main():
"""The end user can customize the build configuration with the
build_config class. # Max draft length is based on (W,N,G) values and
calculated as: (W + G -1) * (N-1) + ( N<=1 ? 0: N-2)"""
build_config = BuildConfig(max_batch_size = 128,
max_input_len = 2048,
max_seq_len = 4096,
max_num_tokens = 16384,
max_draft_len = 111)
build_config.plugin_config.reduce_fusion = True
build_config.plugin_config.use_paged_context_fmha = True
build_config.plugin_config.multiple_profiles = True
# The configuration for lookahead decoding
lookahead_config = LookaheadDecodingConfig(max_window_size=8,
max_ngram_size=8,
max_verification_set_size=8)
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4)
llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct",
kv_cache_config=kv_cache_config,
build_config=build_config,
speculative_config=lookahead_config)
prompt = """Write a C++ program to find the nth Fibonacci number using
recursion. Now we define a sequence of numbers in which each number is the
sum of the three preceding ones. The first three numbers are 0, -1, -1.
Write a program to find the nth number."""
sampling_params = SamplingParams(lookahead_config=lookahead_config)
output = llm.generate(prompt, sampling_params=sampling_params)
print(output)
if __name__ == '__main__':
main()
总结
前瞻性预测解码可提高 LLMs 的吞吐量,而无需任何其他训练、微调或草稿模型。我们展示了 Qwen2.5-Coder 模型的基准性能改进。
访问 build.nvidia.com,免费试用通过 NVIDIA TensorRT-LLM 优化的 Qwen2.5-Coder 模型。
为便于部署, 我们还将通过 TensorRT-LLM 优化的 Qwen2.5-Coder 模型打包为可下载的
NVIDIA NIM
微服务。
致谢
在此,
我们要感谢马立伟、李凡融、Nikita Korobov 和 Martin Marciniszyn Mehringer 为支持这篇博文所付出的努力
。 |
https://developer.nvidia.com/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/ | Optimize AI Inference Performance with NVIDIA Full-Stack Solutions | The explosion of AI-driven applications has placed unprecedented demands on both developers, who must balance delivering cutting-edge performance with managing operational complexity and cost, and AI infrastructure.
NVIDIA is empowering developers with full-stack innovations—spanning chips, systems, and software—that redefine what’s possible in
AI inference
, making it faster, more efficient, and more scalable than ever before.
Easily deploy high-throughput, low-latency inference
Six years ago, NVIDIA set out to create an AI inference server specifically designed for developers building high-throughput, latency-critical production applications. At the time, many developers were grappling with custom, framework-specific servers that increased complexity, drove up operational costs, and struggled to meet stringent service-level agreements for latency and throughput.
To address this, NVIDIA developed the
NVIDIA Triton Inference Server
, an open-source platform capable of serving models from any AI framework. By consolidating framework-specific inference servers, Triton streamlined AI inference deployment and increased AI prediction capacity. This approach has made Triton one of the most widely adopted NVIDIA
open-source projects
, now used by
hundreds of leading organizations to deploy production AI models efficiently.
In addition to Triton, NVIDIA offers a broad ecosystem of AI inference solutions. For developers seeking powerful, customizable tools,
NVIDIA TensorRT
provides a high-performance deep learning inference library with APIs that enable fine-grained optimizations.
NVIDIA NIM
microservices provide a flexible framework for deploying AI models across the cloud, data centers, or workstations.
Optimizations for AI inference workloads
Inference is a full-stack problem today, requiring high-performance infrastructure and efficient software to make effective use of that infrastructure. In addition, inference workloads continue to become more challenging, as model sizes continue to grow and latency constraints tighten, all while the number of users leveraging these AI services also continues to increase. And with the introduction of inference time scaling, a new paradigm for scaling model intelligence, more compute is being applied during inference to enhance model performance.
These trends mean that it’s important to continue advancing delivered inference performance, even on the same underlying hardware platform. By combining established methods like model parallelism, mixed-precision training, pruning, quantization, and data preprocessing optimization with cutting-edge advancements in inference technologies, developers can achieve remarkable gains in speed, scalability, and cost-effectiveness.
The TensorRT-LLM library incorporates many state-of-the-art features that accelerate inference performance for
large language models (LLMs)
, which are outlined below.
Prefill and KV cache optimizations
Key-value (KV) cache early reuse
: By reusing system prompts across users, the KV Cache Early Reuse feature accelerates time-to-first-token (TTFT) by up to 5x. Flexible KV block sizing and efficient eviction protocols ensure seamless memory management, enabling faster response times even in multi-user environments.
Chunked prefill
: For smarter deployment, chunked prefill divides the prefill phase into smaller tasks, enhancing GPU utilization and reducing latency. This innovation simplifies deployment and ensures consistent performance, even with fluctuating user demands.
Supercharging multiturn interactions
:
The NVIDIA GH200 Superchip architecture enables efficient KV cache offloading, improving TTFT by up to 2x in multiturn interactions with Llama models while maintaining high throughput.
Decoding optimization
Multiblock attention for long sequences
: Addressing the challenge of long input sequences, TensorRT-LLM multiblock attention maximizes GPU utilization by distributing tasks across streaming multiprocessors (SMs). This technique improves system throughput by more than 3x, enabling support for larger context lengths without additional hardware costs.
Speculative decoding for accelerated throughput
: Leveraging a smaller draft model alongside a larger target model, speculative decoding enables up to a 3.6x improvement in inference throughput. This approach ensures high-speed, high-accuracy generation of model outputs, streamlining workflows for large-scale AI applications.
Speculative decoding with Medusa:
The Medusa speculative decoding algorithm is available as part of TensorRT-LLM optimizations. By predicting multiple subsequent tokens simultaneously, Medusa boosts throughput for Llama 3.1 models by up to 1.9x on the NVIDIA HGX H200 platform. This innovation enables faster responses for applications that rely on LLMs, such as customer support and content creation.
Multi-GPU inference
MultiShot communication protocol
: Traditional Ring AllReduce operations can become a bottleneck in multi-GPU scenarios. TensorRT-LLM MultiShot, powered by
NVSwitch
, reduces communication steps to just two, irrespective of GPU count. This innovation boosts AllReduce speeds by up to 3x, making low-latency inference scalable and efficient.
Pipeline parallelism for high-concurrency efficiency
:
Parallelism techniques require that GPUs be able to transfer data quickly and efficiently, necessitating a robust GPU-to-GPU interconnect fabric for maximum performance. Pipeline parallelism on
NVIDIA H200 Tensor Core
GPUs achieved a 1.5x throughput increase for Llama 3.1 405B and demonstrated their versatility with a 1.2x speedup for Llama 2 70B in MLPerf Inference benchmarks. MLPerf Inference is a suite of industry-standard inference performance benchmarks developed by the MLCommons consortium.
Large NVLink domains:
The NVIDIA GH200 NVL32 system, powered by 32 NVIDIA GH200 Grace Hopper Superchips connected using the NVLink Switch system, and with TensorRT-LLM improvements, delivers up to 3x faster TTFT for Llama models. With up to 127 petaflops of AI compute, this next-generation architecture sets the stage for unprecedented real-time responsiveness in AI applications.
Quantization and lower-precision compute
NVIDIA TensorRT Model Optimizer for precision and performance:
The NVIDIA custom FP8 quantization recipe in the
NVIDIA TensorRT Model Optimizer
delivers up to 1.44x higher throughput without sacrificing accuracy. These optimizations enable more cost-effective deployment by reducing latency and hardware requirements for demanding workloads.
End-to-end full-stack optimization:
NVIDIA TensorRT libraries and FP8 Tensor Core innovations ensure high performance across a wide range of devices, from data center GPUs to edge systems. NVIDIA has optimized the Llama 3.2 collection of models for great performance, demonstrating how full-stack software can adaptively unlock efficiency across diverse AI deployment environments.
With these features, as well as many others within Triton and TensorRT-LLM, developers can now deploy LLMs that are not only faster and more efficient but also capable of handling a wider range of tasks and user demands. This opens new opportunities for businesses to enhance customer service, automate complex processes, and gain deeper insights from their data.
Evaluating inference performance
Delivering world-class inference performance takes a full technology stack—chips, systems, and software—all contributing to boosting throughput, reducing energy consumption per token, and minimizing costs.
MLPerf Inference is one key measure of inference performance is MLPerf Inference. The benchmark measures inference throughput under standardized conditions, with results subject to extensive peer review. The benchmark is regularly updated to reflect new advances in AI, ensuring that organizations can rely on these results to evaluate platform performance.
In the latest round of MLPerf Inference,
NVIDIA Blackwell made its debut
, delivering up to 4x more performance than the NVIDIA H100 Tensor Core GPU on the Llama 2 70B benchmark. This achievement was the result of the many architectural innovations at the heart of the Blackwell GPU, including the second-generation Transformer Engine with FP4 Tensor Cores and ultrafast HBM3e GPU memory that delivers 8 TB/s of memory bandwidth per GPU.
In addition, many aspects of the NVIDIA software stack, including NVIDIA TensorRT-LLM, were re-engineered to make use of new capabilities in Blackwell, such as support for FP4 precision, while continuing to meet the rigorous accuracy target of the benchmark.
The NVIDIA H200 Tensor Core GPU, available now from server makers and cloud service providers, also achieved outstanding results on every benchmark in the data center category. This includes the newly added Mixtral 8x7B mixture-of-experts (MoE) LLM, as well as on the Llama 2 70B LLM and Stable Diffusion XL text-to-image tests. As a result of continued software improvements, the Hopper architecture delivered up to 27% more inference performance compared to the prior round.
NVIDIA Triton Inference Server, running on a system with eight H200 GPUs
achieved virtually identical performance compared to the
NVIDIA bare-metal submission on the Llama 2 70B benchmark
in MLPerf Inference v4.1. This shows that enterprises no longer need to choose between a feature-rich, production-grade AI inference server and peak throughput performance—both can be achieved simultaneously with NVIDIA Triton.
The future of AI inference: Emerging trends and technologies
The landscape of AI inference is rapidly evolving, driven by a series of groundbreaking advancements and emerging technologies. Models continue to get smarter, as increases in compute at data center scale enable pretraining larger models. The introduction of sparse mixture-of-experts model architectures, such as GPT-MoE 1.8T, will also help boost model intelligence while improving compute efficiency. These larger models, whether dense or sparse, will require that GPUs individually become much more capable. NVIDIA Blackwell architecture is set to fuel next-generation generative AI inference.
Each Blackwell GPU features second-generation Transformer Engine and fifth-generationTensor Cores utilizing FP4. Lower-precision data formats help to increase computational throughput and reduce memory requirements. To ensure they can deliver significant performance benefits while maintaining high accuracy, an incredible amount of software craftsmanship is needed.
At the same time, to serve the most demanding models at brisk, real-time rates, many of the most capable GPUs will need to work in concert to generate responses.
The
NVIDIA GB200 NVL72
rack-scale solution creates a 72-GPU NVLink domain that acts as a single massive GPU. For GPT-MoE 1.8T real-time inference, it provides up to a 30x improvement in throughput compared to the prior generation Hopper GPU.
In addition, the emergence of a new scaling law—test-time compute—is providing yet another way to improve response quality and accuracy for even more complex tasks. This new paradigm, first introduced with the OpenAI o1 model, enables models to “reason” by generating many intermediate tokens before outputting the final result. Reasoning models are particularly helpful in domains such as complex mathematics and generating computer code. This new paradigm is set to fuel a new wave of breakthroughs requiring more computational performance during inference time.
The path to artificial general intelligence will rely on continued breakthroughs in data center compute performance. Pretraining, post-training, and test-time scaling all depend on state-of-the-art infrastructure running expertly crafted software. The NVIDIA platform is evolving rapidly, with a brisk one-year innovation rhythm, to enable the ecosystem to continue pushing the frontiers of AI.
Get started
Check out
How to Get Started with AI Inference
, learn more about the
NVIDIA AI Inference
platform, and stay informed about the latest
AI inference performance
updates.
Watch a demo on
how to quickly deploy NVIDIA NIM microservices
or read
A Simple Guide to Deploying Generative AI with NVIDIA NIM
. Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using
NVIDIA NIM
microservices. | https://developer.nvidia.com/zh-cn/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/ | 借助 NVIDIA 全栈解决方案提升 AI 推理性能 | AI 驱动的应用的爆炸式发展对开发者提出了前所未有的要求,他们必须在提供先进的性能与管理运营复杂性和成本以及 AI 基础设施之间取得平衡。
NVIDIA 正在为开发者提供涵盖芯片、系统和软件的全栈创新,重新定义
AI 推理
的可能性,使其比以往更快、更高效、更具可扩展性。
轻松部署高吞吐量、低延迟推理
六年前,NVIDIA 着手打造 AI 推理服务器,专为构建高吞吐量、延迟关键型生产应用的开发者而设计。当时,许多开发者都在努力使用定制的、特定于框架的服务器,这些服务器增加了复杂性,增加了运营成本,并且难以满足严格的服务水平协议(service-level agreements)关于延迟和吞吐量的要求。
为解决这一问题,NVIDIA 开发了
NVIDIA Triton Inference Server
,这是一个开源平台,能够为来自任何 AI 框架的模型提供服务。通过整合特定于框架的推理服务器,Triton 简化了 AI 推理部署,并提高了 AI 预测能力。这种方法使 Triton 成为广泛采用的 NVIDIA 开源项目之一,现已被数百家领先的组织用于高效部署生产级 AI 模型。
除 Triton 外,NVIDIA 还提供广泛的 AI 推理解决方案生态系统。对于寻求功能强大的可定制工具的开发者,
NVIDIA TensorRT
提供了一个高性能深度学习推理库,其 API 可实现细粒度优化。
NVIDIA NIM
微服务提供了一个灵活的框架,用于在云端、数据中心或工作站中部署 AI 模型。
针对 AI 推理工作负载进行优化
推理是当今的全栈问题,需要高性能基础架构和高效软件来有效利用该基础架构。此外,随着模型大小不断增长和延迟限制日益严格,推理工作负载的挑战性也越来越高,同时利用这些 AI 服务的用户数量也在不断增加。随着推理时间扩展(一种扩展模型智能的新范式)的引入,推理过程中应用了更多的计算来增强模型性能。
这些趋势意味着,即使在相同的底层硬件平台上,继续提高交付的推理性能也很重要。通过将模型并行、混合精度训练、剪枝、量化和数据预处理优化等成熟方法与推理技术的前沿进步相结合,开发者可以在速度、可扩展性和成本效益方面实现显著提升。
TensorRT-LLM 库包含许多先进功能,可加速
大语言模型(LLMs)
的推理性能,如下所述。
预填充和 KV 缓存优化
键值 (KV) 缓存提早复用
:通过在不同用户中重复使用系统提示,KV 缓存提早复用功能可将首个令牌 (TTFT) 的时间缩短高达 5 倍。灵活的 KV 块大小和高效的驱逐协议可确保无缝管理内存,即使在多用户环境中也能缩短响应时间。
分块预填充
:为实现更智能的部署,分块预填充可将预填充阶段划分为较小的任务,从而提高 GPU 利用率并降低延迟。这项创新可简化部署,并确保一致的性能,即使在用户需求波动的情况下也是如此。
强效助力多圈交互
:NVIDIA GH200 超级芯片架构可实现高效的 KV 缓存卸载,在与 Llama 模型进行多圈交互时,将 TTFT 性能提升高达 2 倍,同时保持高吞吐量。
解码优化
长序列的
多块注意力
:TensorRT-LLM 多块注意力通过在流多处理器 (SM) 中分配任务,更大限度地提高 GPU 利用率,从而解决长输入序列的挑战。此技术可将系统吞吐量提高 3 倍以上,从而在不增加硬件成本的情况下支持更大的上下文长度。
用于加速吞吐量的推理吞吐量:通过利用较小的草稿模型和较大的目标模型,推理吞吐量可将推理吞吐量提升高达 3.6 倍。这种方法可确保高速、高精度地生成模型输出,简化大规模 AI 应用的工作流。
使用 Medusa 进行推理解码
:Medusa 推理解码算法可作为 TensorRT-LLM 优化的一部分提供。通过同时预测多个后续令牌,Medusa 在 NVIDIA HGX H200 平台上将 Llama 3.1 模型的吞吐量提高了 1.9 倍。这项创新可加快客户支持和内容创建等依赖 LLM 的应用的响应速度。
多 GPU 推理
MultiShot 通信协议
:传统的 Ring AllReduce 操作可能会成为多 GPU 场景中的瓶颈。TensorRT-LLM MultiShot 由
NVSwitch
提供支持,无论 GPU 数量如何,都可以将通信步骤减少到两个。这项创新将 AllReduce 速度提升高达 3 倍,使低延迟推理具有可扩展性并十分高效。
实现高并发效率的工作流并行:并行技术要求 GPU 能够快速高效地传输数据,因此需要强大的 GPU 到 GPU 互连结构来实现出色性能。
NVIDIA H200 Tensor Core
GPU 上的工作流并行将 Llama 3.1 405B 的吞吐量提高了 1.5 倍,并在 MLPerf Inference 基准测试中证明了其通用性,将 Llama 2 70B 的速度提高了 1.2 倍。MLPerf Inference 是一套行业标准推理性能基准测试,由 MLCommons 联盟开发。
大型 NVLink 域
:NVIDIA GH200 NVL32 系统由通过 NVLink Switch 系统连接的 32 个 NVIDIA GH200 Grace Hopper 超级芯片提供支持,并进行了 TensorRT-LLM 改进,可为 Llama 模型提供高达 3 倍的 TTFT 速度。凭借高达 127 Petaflops 的 AI 计算能力,此新一代架构为 AI 应用实现出色的实时响应速度奠定了基础。
量化和低精度计算
用于提高精度和性能的 NVIDIA TensorRT 模型优化器
:NVIDIA TensorRT 模型优化器中的 NVIDIA 定制 FP8 量化方法可在不牺牲准确性的情况下将吞吐量提高 1.44 倍。这些优化可降低高要求工作负载的延迟和硬件需求,从而实现更具成本效益的部署。
端到端全栈优化
:NVIDIA TensorRT 库和 FP8 Tensor Core 创新技术可确保从数据中心 GPU 到边缘系统等各种设备实现高性能。NVIDIA 优化了 Llama 3.2 模型集合,以实现出色性能,展示了全栈软件如何在不同的 AI 部署环境中灵活释放效率。
借助这些功能以及 Triton 和 TensorRT-LLM 中的许多其他功能,开发者现在可以部署更快速、更高效的 LLM,并且能够处理更广泛的任务和用户需求。这为企业增强客户服务、实现复杂流程自动化以及从数据中获得更深入见解带来了新机遇。
评估推理性能
实现出色的推理性能需要完整的技术堆栈(芯片、系统和软件),所有这些都有助于提高吞吐量、降低每个令牌的能耗并更大限度地降低成本。
MLPerf Inference 是衡量推理性能的一个关键指标。该基准测试用于测量标准化条件下的推理吞吐量,并对结果进行广泛的同行评审。基准测试会定期更新,以反映 AI 领域的新进展,确保企业组织可以依靠这些结果来评估平台性能。
在最新一轮 MLPerf Inference 中,
NVIDIA Blackwell 首次亮相
,在 Llama 2 70B 基准测试中,其性能比 NVIDIA H100 Tensor Core GPU 高 4 倍。这一成就得益于 Blackwell GPU 核心的众多架构创新,包括采用 FP4 Tensor Cores 的第二代 Transformer Engine 和可为每个 GPU 提供 8 TB/s 的 HBM3e GPU 内存带宽。
此外,对 NVIDIA 软件堆栈的许多方面 (包括 NVIDIA TensorRT-LLM) 进行了重新设计,以利用 Blackwell 中的新功能 (例如对 FP4 精度的支持),同时继续满足基准测试的严格准确性目标。
服务器制造商和云服务提供商现已推出的 NVIDIA H200 Tensor Core GPU 在数据中心类别的每项基准测试中都取得了出色的成绩。其中包括新增的 Mixtral 8x7B 多专家模型 (MoE) LLM,以及 Llama 2 70B LLM 和 Stable Diffusion XL 文本转图像测试。得益于软件的持续改进,Hopper 架构可提供高达 27% 的推理性能。
与 MLPerf Inference v4.1 中 Llama 2 70B 基准测试中的 NVIDIA 裸机提交相比
,在配备 8 个 H200 GPU 的系统上运行的 NVIDIA Triton Inference Server 实现了几乎相同的性能。这表明企业不再需要在功能丰富的生产级 AI 推理服务器和峰值吞吐量性能之间做出选择,而
NVIDIA Triton 可以同时实现这两种性能。
AI 推理的未来:新兴趋势和技术
在一系列突破性进展和新兴技术的推动下,AI 推理的格局正在迅速发展。随着数据中心规模的计算能力增加,模型将继续变得更加智能。引入稀疏的多专家模型架构 (例如 GPT-MoE 1.8T) 也将有助于提高模型智能,同时提高计算效率。这些更大型的模型,无论是密集模型还是稀疏模型,都需要 GPU 单独变得更加强大。NVIDIA Blackwell 架构将为新一代生成式 AI 推理提供动力支持。
每个 Blackwell GPU 均配备第二代 Transformer Engine 和第五代 Tensor Cores,利用 FP4。低精度数据格式有助于提高计算吞吐量并降低内存需求。为了确保它们能够在保持高精度的同时提供显著的性能优势,我们需要大量的软件技术。
与此同时,为了以快速、实时的速率为要求严苛的模型提供服务,许多功能非常强大的 GPU 需要协同工作以生成响应。
NVIDIA GB200 NVL72
机架级解决方案创建了一个 72-GPU NVLink 域,可充当单个大型 GPU。对于 GPT-MoE 1.8T 实时推理,与上一代 Hopper GPU 相比,其吞吐量提高了 30 倍。
此外,新的扩展定律(测试时计算) 的出现为提高更复杂任务的响应质量和准确性提供了另一种方法。这种新范式首先在 OpenAI o1 模型中引入,使模型能够在输出最终结果之前通过生成许多中间令牌来“推理”。推理模型在复杂数学和生成计算机代码等领域尤为有用。这种新范式将起新一轮突破浪潮,需要在推理期间实现更高的计算性能。
通往人工通用智能的道路将依赖于数据中心计算性能的持续突破。预训练、后训练和测试时扩展都依赖于运行专家精心编写的软件的最先进的基础架构。NVIDIA 平台发展迅速,一年内创新节奏轻快,使生态系统能够继续推动人工智能的前沿发展。
开始使用
查看如何开始使用 AI 推理
,了解更多关于
NVIDIA AI 推理平台
的信息,并随时了解
最新的 AI 推理性能更新
。
观看演示,了解如何快速部署
NVIDIA NIM
微服务,或阅读《使用 NVIDIA NIM 部署生成式 AI 的简单指南》。TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用 NVIDIA NIM 微服务的生产就绪型部署获得。 |
https://developer.nvidia.com/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/ | NVIDIA TensorRT-LLM Now Supports Recurrent Drafting for Optimizing LLM Inference | Recurrent drafting
(referred as ReDrafter) is a novel speculative decoding technique developed and open-sourced by Apple for large language model (LLM) inference now available with
NVIDIA TensorRT-LLM
. ReDrafter helps developers significantly boost LLM workload performance on NVIDIA GPUs.
NVIDIA TensorRT-LLM
is a library for optimizing LLM inference. It provides an easy-to-use Python API to define LLMs and build
NVIDIA TensorRT
engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Optimizations include custom attention kernels, inflight batching, paged KV caching, quantization (FP8, INT4 AWQ, INT8 SmoothQuant), and much more.
Speculative decoding
is a technique that accelerates LLM inference by generating multiple tokens in parallel. It uses smaller “draft” modules to predict future tokens, which are then verified by the main model. This method maintains output quality while significantly reducing response times, especially during low traffic periods, by better utilizing available resources for low-latency inference.
ReDrafter employs recurrent neural network (RNN)-based sampling, referred to as
drafting
, combined with tree-style attention previously used in other techniques like Medusa to predict and verify draft tokens from multiple possible paths for better accuracy and to potentially accept more than one token in each iteration of the decoder. NVIDIA collaborated with Apple to add support for this technique in TensorRT-LLM, making it accessible to the broader developer community.
The integration of ReDrafter into TensorRT-LLM expanded its reach, unlocked new optimization potential, and improved on previous methods such as Medusa. For Medusa, the path acceptance and token sampling happens in the TensorRT-LLM runtime, introducing some overhead inside the engine for processing all possible future paths without knowing the accepted path, most of which are ultimately discarded. To reduce such overhead, ReDrafter requires the token validation and acceptance of the best path before drafting future tokens for the next iteration.
TensorRT-LLM has been updated to incorporate drafting and validation logic inside a single engine, rather than relying on the runtime or separate engines to further minimize overhead. This approach provides TensorRT-LLM kernel selection and scheduling more freedom to optimize the network for maximum performance.
To better illustrate ReDrafter improvements, Figure 1 highlights the key differences between its implementation and that of Medusa in TensorRT-LLM. Most of the components related to speculative decoding have been done in-engine for ReDrafter. This significantly simplifies the runtime changes needed for ReDrafter.
Figure 1. Comparison of Medusa (left) and ReDrafter (right) implementations in NVIDIA TensorRT-LLM
The following sections delve into some of the changes that help enable ReDrafter in TensorRT-LLM.
Inflight-batching compatible engine
Inflight-batching (IFB) is a strategy that significantly improves the throughput by batching context-phase and generation-phase requests. Speculative decoding, coupled with IFB, introduces more complexity to the pipeline as context-phase requests need to be handled differently than generation-phase requests, which require draft token validation. Since ReDrafter moves the validation logic inside the model definition, the engine needs that logic as well during validation. Similar to the attention plugin, the batch is split into two smaller batches: one for context requests and another for generation requests. Each smaller batch then enters its computational workflow, and at the end they are combined back to a single batch for drafting.
Figure 2. ReDrafter’s computational workflow for inflight-batching compatible TensorRT-LLM engine
Note that this approach requires that all operators on either path support empty tensors, which could happen if a batch consists of all context requests or all generation requests. This capability adds flexibility to TensorRT-LLM APIs, enabling the definition of more complicated models in the future.
Implementing in-engine validation and drafting
To validate and draft inside the engine, TensorRT-LLM is updated with support for numerous new operations so that PyTorch code can be easily translated into a definition of the TensorRT-LLM model.
The following PyTorch code excerpt is Apple’s
PyTorch implementation of ReDrafter
. The TensorRT-LLM implementation is almost a straightforward line-by-line mapping of the PyTorch version.
PyTorch
def unpack(
packed_tensor: torch.Tensor,
unpacker: torch.Tensor,
) -> torch.Tensor:
assert len(packed_tensor.shape) == 3
last_dim_size = packed_tensor.shape[2]
batch_size, beam_width, beam_length = unpacker.shape
unpacked_data_indices = unpacker.view(
batch_size, beam_width * beam_length, 1).expand(
-1, -1, last_dim_size
)
unpacked_tensor = torch.gather(
packed_tensor, 1, unpacked_data_indices).reshape(
batch_size, beam_width, beam_length, -1
)
return unpacked_tensor
TensorRT-LLM
def _unpack_beams(
x: Tensor,
indices: Tensor,
num_beams: int,
beam_length: int
) -> Tensor:
assert x.rank() == 3
d0 = shape(x, 0, INT_DTYPE_STR)
dl = shape(x, -1, INT_DTYPE_STR)
indices = view(
indices, [-1, num_beams * beam_length, 1], False)
res_shape = concat([d0, num_beams, beam_length, dl])
res = view(gather_nd(x, indices), res_shape, False)
return res
This, of course, is a very simple example. For a more complex example, see the
beam search implementation
. With the new functionalities added for ReDrafter, it might be possible to improve the Medusa implementation in TensorRT-LLM to further increase its performance.
ReDrafter performance in TensorRT-LLM
As
benchmarked by Apple
, ReDrafter with TensorRT-LLM can provide up to 2.7x throughput improvements on
NVIDIA H100
GPUs with TP8 over the base LLM.
Note that the performance improvement of any speculative decoding technique can be heavily impacted by many factors, including:
GPU utilization:
Speculative decoding is commonly used for low-traffic scenarios, where GPU resources are typically underutilized due to small batch sizes.
Average acceptance rate:
The latency of each decoding step is increased since speculative decoding must perform extra computation, where a significant portion of it is ultimately wasted after validation. As a result, to see any performance benefits from speculative decoding, the average acceptance rate must be high enough to pay for that extra latency. This is affected by the number of beams, their lengths, and the quality of the beam search itself (which is impacted by the training data).
Task:
It is easier to predict future tokens for some tasks (code completion, for example), which leads to a higher acceptance rate, and thus improved performance.
Summary
This collaboration between NVIDIA and Apple, has made TensorRT-LLM more powerful and more flexible, enabling the LLM community to innovate more sophisticated models and easily deploy them with TensorRT-LLM to achieve unparalleled performance on NVIDIA GPUs. These new features open exciting possibilities, and we eagerly anticipate the next generation of advanced models from the community that leverage TensorRT-LLM capabilities, driving further improvements in LLM workloads.
Explore
NVIDIA TensorRT-LLM
to unlock the full potential of your models on NVIDIA GPUs. | https://developer.nvidia.com/zh-cn/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/ | NVIDIA TensorRT-LLM 现支持 Recurrent Drafting,实现 LLM 推理优化 | Recurrent Drafting (简称 ReDrafter) 是苹果公司为大语言模型 (LLM) 推理开发并开源的一种新型推测解码技术,该技术现在可与
NVIDIA TensorRT-LLM
一起使用。ReDrafter 帮助开发者大幅提升了 NVIDIA GPU 上的 LLM 工作负载性能。
NVIDIA TensorRT-LLM
是一个 LLM 推理优化库,提供了一个易于使用的 Python API 来定义 LLM 和构建
NVIDIA TensorRT
引擎,这些引擎具有顶尖的优化功能,可在 GPU 上高效执行推理。优化功能包括自定义 Attention Kernel、Inflight Batching、Paged KV Caching、量化技术 (FP8、INT4 AWQ、INT8 SmoothQuant) 等。
推测解码 (Speculative decoding)
是一种通过并行生成多个 token 来加速 LLM 推理的技术。它使用较小的“draft”模块预测未来的 token,然后由主模型进行验证。该方法通过更好地利用可用资源实现低延迟推理,在保持输出质量的同时大大缩短了响应时间,尤其是在低流量时段。
ReDrafter 运用基于循环神经网络 (RNN) 的采样 (称为
Drafting
) 并结合之前在 Medusa 等其他技术中使用的树状注意力,预测和验证来自多个可能路径的 draft token 以提高准确性,并在解码器的每次迭代中接受一个以上 token。NVIDIA 与苹果公司合作,在 TensorRT-LLM 中添加了对该技术的支持,使更加广泛的开发者社区能够使用该技术。
ReDrafter 与 TensorRT-LLM 的集成扩大了该技术的覆盖范围,解锁了新的优化潜力,并改进了 Medusa 等先前的方法。Medusa 的路径接受和 token 采样发生在 TensorRT-LLM 运行时,需要在接受路径未知的情况下处理所有可能的未来路径,而且其中大部分路径最终都会被丢弃,这就给引擎内部带来了一些开销。为了减少这种开销,ReDrafter 要求在 drafting 下一次迭代的未来 token 之前,先验证 token 并接受最佳路径。
为了进一步减少开销,TensorRT-LLM 更新后在单个引擎中整合了 drafting 和验证逻辑,不再依赖运行时或单独的引擎。这种方法为 TensorRT-LLM 内核选择和调度提供了更大的自由度,通过优化网络实现了性能的最大化。
为了更好地说明 ReDrafter 的改进,图 1 展示了 TensorRT-LLM 中 ReDrafter 实现与 Medusa 实现的主要区别。大多数与推测解码相关的组件都在 ReDrafter 的引擎内完成,这大大简化了 ReDrafter 所需的运行时更改。
图 1. NVIDIA TensorRT-LLM 中 Medusa(左)和 ReDrafter(右)实现的比较
下面将深入探讨有助于在 TensorRT-LLM 中启用 ReDrafter 的一些变化。
兼容 Inflight-batching 批处理的引擎
Inflight-batching (IFB) 是一种通过批量处理上下文阶段和生成阶段请求,来显著提高吞吐量的策略。鉴于上下文阶段请求与生成阶段请求的处理方式不同(生成阶段请求需要 draft token 验证),因此结合 IFB 的推测解码会给管线带来更大的复杂性。ReDrafter 将验证逻辑移至模型定义内部,因此引擎在验证过程中也需要该逻辑。与注意力插件类似,该批处理被分成两个较小的批处理:一个用于上下文请求,另一个用于生成请求。然后,每个较小的批处理进入计算工作流,最后再合并成一个批处理进行 drafting 流程。
图 2. ReDrafter 兼容 TensorRT-LLM 引擎的 Inflight-batching 批处理计算工作流
请注意,这种方法要求任一路径上的所有运算符都支持空张量。如果一个批处理由所有上下文请求或所有生成请求组成,就可能出现空张量。该功能增加了 TensorRT-LLM API 的灵活性,使未来定义更复杂的模型成为可能。
实现引擎内验证和
Drafting
为了在引擎内进行验证和 draft,TensorRT-LLM 更新时加入了对许多新操作的支持,这样 PyTorch 代码就可以轻松地转化成一个 TensorRT-LLM 模型的定义。
以下 PyTorch 代码摘录是苹果公司的
PyTorch 实现的 ReDrafter
。TensorRT-LLM 实现几乎就是 PyTorch 版本的直接逐行映射。
PyTorch
def unpack(
packed_tensor: torch.Tensor,
unpacker: torch.Tensor,
) -> torch.Tensor:
assert len(packed_tensor.shape) == 3
last_dim_size = packed_tensor.shape[2]
batch_size, beam_width, beam_length = unpacker.shape
unpacked_data_indices = unpacker.view(
batch_size, beam_width * beam_length, 1).expand(
-1, -1, last_dim_size
)
unpacked_tensor = torch.gather(
packed_tensor, 1, unpacked_data_indices).reshape(
batch_size, beam_width, beam_length, -1
)
return unpacked_tensor
TensorRT-LLM
def _unpack_beams(
x: Tensor,
indices: Tensor,
num_beams: int,
beam_length: int
) -> Tensor:
assert x.rank() == 3
d0 = shape(x, 0, INT_DTYPE_STR)
dl = shape(x, -1, INT_DTYPE_STR)
indices = view(
indices, [-1, num_beams * beam_length, 1], False)
res_shape = concat([d0, num_beams, beam_length, dl])
res = view(gather_nd(x, indices), res_shape, False)
return res
当然,这只是一个非常简单的例子。如要了解更复杂的示例,请参见
束搜索实现
。借助为 ReDrafter 添加的新功能,就可以改进 TensorRT-LLM 中的 Medusa 实现,从而进一步提高其性能。
ReDrafter 在 TensorRT-LLM 中的性能
根据
苹果公司的基准测试
,在采用 TP8(Tensor Parallelism with 8 GPUs,8 卡 GPU 张量并行) 的
NVIDIA
GPU 上使用 TensorRT-LLM 的 ReDrafter 最多可将吞吐量提高至基础 LLM 的 2.7 倍。
请注意,任何推测解码技术的性能提升幅度都会受到诸多因素的大幅影响,包括:
GPU 利用率:
推测解码通常用于低流量场景,由于批量较小,GPU 资源的利用率通常较低。
平均接受率:
由于推测解码必须执行额外的计算,而其中很大一部分计算最终会在验证后被浪费,因此每个解码步骤的延迟都会增加。所以要想通过推测解码获得任何性能上的优势,平均接受率必须高到足以弥补增加的延迟。这受到束数量、束长度和束搜索本身质量(受训练数据影响)的影响。
任务:
在某些任务(例如代码完成)中预测未来的 token 更容易,使得接受率更高,性能也会因此而提升。
总结
NVIDIA 与苹果公司的合作让 TensorRT-LLM 变得更加强大和灵活,使 LLM 社区能够创造出更加复杂的模型并通过 TensorRT-LLM 轻松部署,从而在 NVIDIA GPU 上实现无与伦比的性能。这些新特性带来了令人兴奋的可能性,我们热切期待着社区使用 TensorRT-LLM 功能开发出新一代先进模型,进一步改进 LLM 工作负载。
探索
NVIDIA TensorRT-LLM
,在 NVIDIA GPU 上充分释放模型潜能。 |
https://developer.nvidia.com/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/ | Boost Llama 3.3 70B Inference Throughput 3x with NVIDIA TensorRT-LLM Speculative Decoding | Meta’s Llama collection of open
large language models (LLMs)
continues to grow with the recent addition of
Llama 3.3 70B
, a text-only instruction-tuned model. Llama 3.3 provides enhanced performance respective to the older Llama 3.1 70B model and can even match the capabilities of the larger, more computationally expensive
Llama 3.1 405B model
on several tasks including math, reasoning, coding, and multilingual support.
NVIDIA TensorRT-LLM
, a powerful inference engine that delivers state-of-the-art performance on the latest LLMs, incorporates many optimizations to deliver outstanding Llama 3.3 70B inference throughput. These include
in-flight batching
,
KV caching
,
custom FP8 quantization
, speculative decoding, and more for fast, cost-efficient LLM serving.
With in-flight batching activated by default as a runtime configuration parameter, TensorRT-LLM supports batching multiple different requests at the same time for higher serving throughput. By interleaving requests in context and generation phases, in-flight batching reduces latency and improves GPU utilization by executing new requests while older requests are still in flight. Finished requests are evicted from the batch, making room for the next set of requests.
Caching the values of the key-value elements of previous tokens saves from expensive recomputation of these tensors in the generation phase for the next set of tokens. Computational savings effectively lead to higher throughput. However, KV cache grows linearly in size with number of batched requests and sequence context lengths, leading to higher memory requirements.
TensorRT-LLM KV caching addresses these challenges through several optimizations, including support for
paged KV cache
,
quantized KV cache
,
circular buffer KV cache
and
KV cache reuse
. Each of these optimizations address the challenging balance between growing memory size and avoiding unnecessary and expensive recomputation.Speculative decoding is a popular technique for faster and cost-effective LLM inference with built-in verification for the quality of output generation. It’s based on the premise that generating multiple sequences of future (draft) tokens is more efficient than processing a single token in autoregressive decoding, an inherently time-consuming process. The target model determines how many of these draft tokens to accept, which is far more efficient than having to generate one token per iteration. TensorRT-LLM supports a growing list of speculative decoding techniques including
draft target
,
Medusa
,
Eagle
, and
lookahead decoding
, among others.
Figure 1. NVIDIA TensorRT optimizations for high-performance deep learning inference
In this post, we show how the
NVIDIA HGX H200 platform
with NVLink and NVSwitch, as well as TensorRT-LLM, achieve great performance when running the latest Llama 3.3 70B model. We describe the step-by-step setup to get speculating decoding working for Llama 3.3 70B with TensorRT-LLM. For more information, including other optimizations, different models, and multi-GPU execution, see the full list of
TensorRT-LLM examples
.
Achieving throughput speedups with draft target speculative decoding
Table 1 and Figure 2 highlight the throughput (output tokens/second) speedups between no draft model (that is, no speculative decoding) versus draft models of various sizes with Llama 3.3 70B target model.
Throughput Performance – Output Tokens/Second
One NVIDIA H200 Tensor Core GPU
Draft | Target
Models
Llama 3.2 1B | Llama 3.3 70B
Llama 3.2 3B | Llama 3.3 70B
Llama 3.1 8B | Llama 3.3 70B
Llama 3.3 70B
(without draft model)
Tokens/sec
181.74
161.53
134.38
51.14
Speedups (with versus without draft models)
3.55x
3.16x
2.63x
N/A
Table 1. Throughput performance using one NVIDIA H200 Tensor Core GPU with TensorRT-LLM internal measurements
Data measured on December 11, 2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, TP1, FP8, batch size=1, TensorRT Model Optimizer version 0.21, TensorRT-LLM version 0.15.0.
Figure 2. Throughput speedups using speculative decoding with Llama 3.3 70B target model
We provide the steps to reproduce these performance gains using draft target speculative decoding within TensorRT-LLM.
# Download the following model checkpoints from Hugging Face and store them
in a directory for easy access through the setup process.
git lfs install
# Download target models
git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct
# Download draft models
git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
After the model checkpoints have been downloaded, install TensorRT-LLM.
# Obtain and start the basic docker image environment (optional).
docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint
/bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04
# Install dependencies, TensorRT-LLM requires Python 3.10
apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin
libopenmpi-dev git git-lfs
# Fetch the library
git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
# Install the latest version (corresponding to the main branch) of TensorRT-LLM.
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com
# Check installation
python3 -c "import tensorrt_llm"
Next, compile the downloaded model checkpoints into draft and target TensorRT engines. These engines are optimized to run inference with best accuracy and highest throughput.
cd examples
# Steps to build target and draft models in FP8 precision on 1 H200
# Create FP8 checkpoints
python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8
--output_dir /ckpt-draft --calib_size 512 --tp_size 1
python3 quantization/quantize.py \
--model_dir=<path to target model repo> \
--output_dir=./ckpt-target-70b \
--dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \
--calib_size 512 --tp_size 1
# Build draft and target engines
# Important flags for the engine build process:
# --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model.
# --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model.
trtllm-build \
--checkpoint_dir ./ckpt-draft \
--output_dir=./draft-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072
trtllm-build \
--checkpoint_dir=./ckpt-target-70b \
--output_dir=./target-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072 \
--low_latency_gemm_plugin fp8 \
--speculative_decoding_mode=draft_tokens_external \
--max_draft_len 10
Finally, run speculative decoding in TensorRT-LLM.
#Run decoding
# Important flags to set during the run process:
#--draft_engine_dir and --engine_dir must be specified for the draft and target engines.
#--draft_target_model_config is corresponding to the configuration of
Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4,
device of draft model is GPU0, device of target model is GPU1, and use
tokens rather than logits to accept.
# Only CPP session (using executor as low-level API) is supported, while
Python session (--use_py_session) is not supported.
# Run with Llama 3.3 70B target model
mpirun -n 1 --allow-run-as-root python3 ./run.py \
--tokenizer_dir <path to draft model repo> \
--draft_engine_dir ./draft-engine \
--engine_dir ./target-engine \
--draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \
--kv_cache_free_gpu_memory_fraction=0.35 \
--max_output_len=1024 \
--kv_cache_enable_block_reuse \
--input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA
3-digit integer contains one of each of the digits 1,3 and 5. What is the
probability that the integer is divisible by
5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
# Following is the LLM-generated output:
Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit
integers that can be formed using the digits 1, 3, and 5.
There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers.
## Step 2: Identify the condition for a number to be divisible by 5.
A number is divisible by 5 if its last digit is either 0 or 5.
## Step 3: Determine the number of arrangements where 5 is the last digit.
Since the digit 5 must be the last digit for the number to be divisible by
5, we fix the last position with 5. The remaining two positions can be
filled with the digits 1 and 3 in 2! = 2 ways.
## Step 4: Calculate the probability that the integer is divisible by 5.
The probability is the number of favorable outcomes (arrangements where 5 is the last digit)
divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5).
## Step 5: Calculate the probability.
Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3.
The final answer is: $\boxed{\frac{1}{3}}$"
To benchmark throughput performance without speculative decoding, follow the steps below:
# Run throughput benchmark for the 70B model without the draft model
trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/
--gpt_attention_plugin float16 --workers 1 --max_batch_size 32
--max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable
--use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8
python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output
token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist
--num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200
--output-stdev 0 > /tmp/synthetic.txt
trtllm-bench --model <path to target model repo> latency --engine_dir
/data/70b-TRT/ --dataset /tmp/synthetic.txt
Summary
NVIDIA collaborates with Meta for the creation, optimization, and acceleration of the world’s leading open models. NVIDIA supports Llama as part of our commitment to grow open community AI models and software to enable users to customize and address their own unique workloads. NVIDIA is involved with several open-source projects through partnering with developers, maintainers, and foundations.
NVIDIA TensorRT-LLM
provides several features for optimizing and efficiently running LLMs of different model architectures. These optimizations lead to significant speedups on the same hardware, enable fewer resources to serve the same workload, reduce energy costs, and improve total cost of ownership. Available through production-ready deployments using
NVIDIA NIM
microservices, these TensorRT optimizations accelerate the deployment of your generative AI applications across NVIDIA-accelerated infrastructure anywhere, including cloud, data center, and workstations. | https://developer.nvidia.com/zh-cn/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/ | 借助 NVIDIA TensorRT-LLM 预测解码,将 Llama 3.3 的推理吞吐量提升 3 倍 | 随着近期新增的 Llama 3.3 70B
(一种纯文本指令调整模型),Meta 的开放
大语言模型 (LLMs)
集合将继续增长。Llama 3.3 相对于较旧的 Llama 3.1 70B 模型提供了增强的性能,甚至可以在数学、推理、编码和多语种支持等多项任务中匹配更大、计算成本更高的 Llama 3.1 405B 模型的功能。
NVIDIA TensorRT-LLM
是一款功能强大的推理引擎,可在最新的 LLM 上提供先进的性能,并整合了许多优化措施,可提供出色的 Llama 3.3 70B 推理吞吐量。其中包括在飞批处理、
KV 缓存
、
自定义 FP8 量化
、推测解码等,可实现快速、经济高效的 LLM 服务。
动态批处理默认作为运行时配置参数激活,TensorRT-LLM 支持同时批处理多个不同的请求,从而提高服务吞吐量。通过在上下文和生成阶段交错处理请求,动态批处理可在旧请求仍在运行时执行新请求,从而减少延迟并提高 GPU 利用率。已完成的请求将从批处理中移除,从而为下一组请求腾出空间。
缓存先前令牌的键值元素的值可避免在为下一组令牌生成阶段对这些张量进行昂贵的重新计算。节省的计算量可有效提高吞吐量。但是,随着批量请求的数量和序列上下文长度的增加,KV 缓存的大小呈线性增长,从而导致内存需求增加。
TensorRT-LLM KV 缓存通过多种优化解决了这些挑战,包括对分页 KV 缓存、
量化 KV 缓存
、
循环缓冲区 KV 缓存
和
KV 缓存重复使用
的支持。每项优化都解决了在增大内存大小与避免不必要且昂贵的重新计算之间实现富有挑战性的平衡这一问题。预测解码是一种热门技术,可通过内置验证来生成输出的质量,从而实现更快且经济高效的 LLM 推理。其前提是,在自回归解码过程中,生成多个未来 (草稿) 令牌序列比处理单个令牌更高效。目标模型决定接受这些草稿令牌的数量,这比每次迭代生成一个令牌的效率要高得多。TensorRT-LLM 支持越来越多的预测性解码技术,包括
草稿目标
、
Medusa
、
Eagle
和前瞻性解码等。
图 1、适用于高性能深度学习推理的 NVIDIA TensorRT 优化
在本文中,我们将展示搭载 NVLink 和 NVSwitch 的
NVIDIA HGX H200 平台以及 TensorRT-LLM
如何在运行最新的 Llama 3.3 70B 模型时实现出色性能。我们介绍了使用 TensorRT-LLM 对 Llama 3.3 70B 进行解码预测的分步设置。有关其他优化、不同模型和多 GPU 执行的更多信息,请参阅
TensorRT-LLM 示例
的完整列表。
通过草稿目标预测解码实现吞吐量加速
表 1 和图 2 突出显示了无草稿模型 (即无预测解码) 与使用 Llama 3.3 70B 目标模型的各种规模的草稿模型之间的吞吐量 (输出令牌/秒) 加速。
吞吐量性能 – 输出 Tokens/秒
One NVIDIA H200 Tensor Core GPU
草稿|目标
模型
Llama 3.2 1B|Llama 3.3 70B
Llama 3.2 3B|Llama 3.3 70B
Llama 3.1 8B|Llama 3.3 70B
Llama 3.3 70B
(无草稿模型)
令牌/秒
191.74
151.53
134.38
51.14
加速 (有与无草稿模型对比)
3.55 倍
3.16 倍
2.63 倍
不适用
表 1、使用一个 NVIDIA H200 Tensor Core GPU 和 TensorRT-LLM 内部测量的吞吐量性能
数据测量于 2024 年 12 月 11 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 总生成 tokens / 总延迟。DGX H200、TP1、FP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21,TensorRT-LLM 版本 0.15.0。
图 2、使用 Llama 3.3 70B 目标模型的预测解码提高吞吐量
我们提供了在 TensorRT-LLM 中使用草稿目标预测解码重现这些性能提升的步骤。
# Download the following model checkpoints from Hugging Face and store them
in a directory for easy access through the setup process.
git lfs install
# Download target models
git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct
# Download draft models
git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
下载模型检查点后,安装 TensorRT-LLM。
# Obtain and start the basic docker image environment (optional).
docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint
/bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04
# Install dependencies, TensorRT-LLM requires Python 3.10
apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin
libopenmpi-dev git git-lfs
# Fetch the library
git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
# Install the latest version (corresponding to the main branch) of TensorRT-LLM.
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com
# Check installation
python3 -c "import tensorrt_llm"
接下来,将下载的模型检查点编译到草稿和目标 TensorRT 引擎中。这些引擎经过优化,能够以最佳的准确性和最高的吞吐量运行推理。
cd examples
# Steps to build target and draft models in FP8 precision on 1 H200
# Create FP8 checkpoints
python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8
--output_dir /ckpt-draft --calib_size 512 --tp_size 1
python3 quantization/quantize.py \
--model_dir=<path to target model repo> \
--output_dir=./ckpt-target-70b \
--dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \
--calib_size 512 --tp_size 1
# Build draft and target engines
# Important flags for the engine build process:
# --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model.
# --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model.
trtllm-build \
--checkpoint_dir ./ckpt-draft \
--output_dir=./draft-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072
trtllm-build \
--checkpoint_dir=./ckpt-target-70b \
--output_dir=./target-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072 \
--low_latency_gemm_plugin fp8 \
--speculative_decoding_mode=draft_tokens_external \
--max_draft_len 10
最后,在 TensorRT-LLM 中运行投机性解码。
#Run decoding
# Important flags to set during the run process:
#--draft_engine_dir and --engine_dir must be specified for the draft and target engines.
#--draft_target_model_config is corresponding to the configuration of
Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4,
device of draft model is GPU0, device of target model is GPU1, and use
tokens rather than logits to accept.
# Only CPP session (using executor as low-level API) is supported, while
Python session (--use_py_session) is not supported.
# Run with Llama 3.3 70B target model
mpirun -n 1 --allow-run-as-root python3 ./run.py \
--tokenizer_dir <path to draft model repo> \
--draft_engine_dir ./draft-engine \
--engine_dir ./target-engine \
--draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \
--kv_cache_free_gpu_memory_fraction=0.35 \
--max_output_len=1024 \
--kv_cache_enable_block_reuse \
--input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA
3-digit integer contains one of each of the digits 1,3 and 5. What is the
probability that the integer is divisible by
5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
# Following is the LLM-generated output:
Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit
integers that can be formed using the digits 1, 3, and 5.
There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers.
## Step 2: Identify the condition for a number to be divisible by 5.
A number is divisible by 5 if its last digit is either 0 or 5.
## Step 3: Determine the number of arrangements where 5 is the last digit.
Since the digit 5 must be the last digit for the number to be divisible by
5, we fix the last position with 5. The remaining two positions can be
filled with the digits 1 and 3 in 2! = 2 ways.
## Step 4: Calculate the probability that the integer is divisible by 5.
The probability is the number of favorable outcomes (arrangements where 5 is the last digit)
divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5).
## Step 5: Calculate the probability.
Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3.
The final answer is: $\boxed{\frac{1}{3}}$"
要在不使用 speculative decoding 的情况下对吞吐量性能进行基准测试,请执行以下步骤:
# Run throughput benchmark for the 70B model without the draft model
trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/
--gpt_attention_plugin float16 --workers 1 --max_batch_size 32
--max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable
--use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8
python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output
token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist
--num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200
--output-stdev 0 > /tmp/synthetic.txt
trtllm-bench --model <path to target model repo> latency --engine_dir
/data/70b-TRT/ --dataset /tmp/synthetic.txt
总结
NVIDIA 与 Meta 合作创建、优化和加速世界领先的开放模型。NVIDIA 支持 Llama,履行我们发展开放社区 AI 模型和软件的承诺,使用户能够自定义和处理自己独特的工作负载。NVIDIA 通过与开发者、维护人员和基金会合作参与了多个开源项目。
NVIDIA TensorRT-LLM
提供多种功能,用于优化和高效运行不同模型架构的 LLM。这些优化可显著加快相同硬件的运行速度,减少资源以处理相同工作负载,降低能源成本,并提高总拥有成本。这些 TensorRT 优化通过使用
NVIDIA NIM
微服务的生产就绪型部署提供,可随时随地在 NVIDIA 加速的基础设施 (包括云、数据中心和工作站) 中加速生成式 AI 应用的部署。 |
https://developer.nvidia.com/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/ | Spotlight: Perplexity AI Serves 400 Million Search Queries a Month Using NVIDIA Inference Stack | The demand for AI-enabled services continues to grow rapidly, placing increasing pressure on IT and infrastructure teams. These teams are tasked with provisioning the necessary hardware and software to meet that demand while simultaneously balancing cost efficiency with optimal user experience. This challenge was faced by the inference team at Perplexity AI, an AI-powered search engine that handles more than 435 million queries each month. Each query represents multiple AI inference requests.
To meet this demand, the Perplexity inference team turned to
NVIDIA H100 Tensor Core GPUs
,
NVIDIA Triton Inference Server
, and
NVIDIA TensorRT-LLM
for cost-effective
large language model (LLM)
deployment. This post details some of the deployment best practices and TCO savings based on their hands-on experience.
Serving multiple AI models simultaneously
To support its extensive user base and serve a wide range of requests—spanning search, summarization, and question answering, among others—the inference team at Perplexity serves over 20 AI models simultaneously. This includes different variations of the popular open source Llama 3.1 models like 8B, 70B, and 405B.
To match each user request with the appropriate model, the company relies on smaller classifier models that help determine user intent. User tasks detected by the classifiers, like text completion, are then routed to specific models deployed on GPU pods. Each pod consists of one or more NVIDIA H100 GPUs and is managed by an NVIDIA Triton Inference Server instance. The pods operate under strict service-level agreements (SLAs) for both cost efficiency and user interactivity.
Figure 1. Perplexity AI deployed LLM production on a massive scale using NVIDIA HGX H100 four-GPU and eight-GPU systems featuring dynamic scalability, model parallelism, and load balancing capabilities
To accommodate the large Perplexity user base and fluctuating traffic throughout the day, the pods are hosted within a Kubernetes cluster. They feature a front-end scheduler built in-house that routes traffic to the appropriate pod based on their load and usage, ensuring that the SLAs are consistently met.
The scheduling algorithm used by the front-end scheduler can affect inter-token latency, particularly in improving the worst percentile of performance (Figure 2). The team at Perplexity constantly looks for new scheduler optimizations, including how to better account for sequence length variations across requests.
Figure 2. Inter-token latency distribution for round-robin, least requests, and power of two random choices load balancing strategies during an internal Perplexity benchmark at a moderate request rate of 16 QPS
Triton Inference Server is a critical component of Perplexity’s deployment architecture. It serves optimized models across various backends, batches incoming user requests, and provides GPU utilization metrics to the scheduler. This supports scaling up or down the number of deployments and GPUs based on the amount of the inference requests.
For a detailed guide on how to deploy NVIDIA Triton with Kubernetes, see
Scaling LLMs with NVIDIA Triton and NVIDIA TensorRT-LLM Using Kubernetes
.
Meeting strict service-level agreements
To define the right SLAs for the company’s diverse use cases, Perplexity’s inference team conducts comprehensive A/B testing, evaluating different configurations and their impact on user experience. Their goal is to maximize GPU utilization while consistently meeting the target SLA for each specific use case. By improving batching while meeting target SLAs, inference serving cost is optimized.
For smaller models, such as embedding models under 1 billion parameters used in real-time retrieval, the focus is on achieving the lowest possible latency. These are typically hidden from the user and are part of a broader workflow. As a result, configurations for these queries typically have low batch sizes. Given the smaller memory footprints of these models, the team runs multiple models concurrently on the NVIDIA H100 GPU to maintain high resource utilization.
For user-facing models such as Llama 8B, 70B, and 405B, which have a greater impact on user experience and deployment costs, the team conducts a deeper performance analysis and evaluates key metrics such as time to first token, tokens per second per user, and cost per million queries.
To optimize performance while controlling costs, Perplexity parallelizes their deployment of these models across multiple GPUs. Due to the strict SLAs, the team opted to increase tensor parallelism to four and eight GPUs, which they found yields lower serving costs for very latency-sensitive requests within a fixed GPU budget. Data or pipeline parallelism was useful for maximizing throughput in less latency-sensitive settings.
The Perplexity team now uses TensorRT-LLM in combination with proprietary LLM runtimes built with optimized CUDA kernels to successfully serve the Llama-based models within their strict SLAs at minimal costs.
Figure 3. Sharding the Llama 8B model using tensor parallelism across four NVIDIA Hopper GPUs reduces Perplexity’s relative cost per million token by up to 3x for latency-sensitive requests
Ultimately, the Perplexity inference team’s decision to host models depends on their ability to serve these models at a lower cost while still meeting their strict SLAs, compared to using third-party LLM provider APIs. For example, the team estimated that they were able to save approximately $1 million annually by serving models that power their Related-Questions feature on cloud-hosted NVIDIA GPUs. The Related-Questions feature offers Perplexity users suggested follow-up questions to facilitate deeper dives after a search query.
Delivering new levels of performance
The inference team at Perplexity adopts a comprehensive, full-stack approach to their road map, consistently optimizing and enhancing every layer of the stack—from applications and use cases to inference serving middleware and hardware accelerators.
In terms of inference serving middleware, the team is actively collaborating with the NVIDIA Triton engineering team to deploy
disaggregating serving
, a groundbreaking technique that separates the prefill and decode inference phases of an LLM workflow onto separate NVIDIA GPUs. This technique significantly boosts overall system throughput while meeting SLAs, translating to lower cost per token. Additionally, this technique gives Perplexity the flexibility to use different NVIDIA GPU products for each inference phase given its specific hardware resource requirements.
The Perplexity team understands that optimizing the software stack can only drive performance improvements to a certain extent. To deliver new levels of performance, hardware innovations are crucial. This is why they are eager to assess the
NVIDIA Blackwell platform
.
NVIDIA Blackwell delivers
significant performance leaps
enabled by numerous technology innovations, including the second-generation Transformer Engine with support for the FP4 data format, fifth-generation NVLink and NVSwitch enabling significantly larger NVLink domains, and more. Collectively these innovations deliver 30x improvement in inference performance for
trillion parameter LLMs
.
Get started
NVIDIA Triton Inference Server
and
NVIDIA TensorRT
are open-source projects available on GitHub. They are also available as Docker containers that can be pulled from
NVIDIA NGC
. They are part of
NVIDIA AI Enterprise
, which offers enterprise-grade security, stability, and support. Enterprises seeking the fastest time to value can use
NVIDIA NIM
, a set of easy-to-use microservices for accelerated inference on a wide range of AI models, including open-source community and NVIDIA AI Foundation models.
To learn more, check out these resources:
From Zero to Millions: Scaling Large Language Model Inference with TensorRT-LLM
Perplexity Enhances Model Performance for AI-Powered Search Engines with NVIDIA NeMo
Accelerating Large Language Model Inference with NVIDIA in the Cloud | https://developer.nvidia.com/zh-cn/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/ | 聚焦:Perplexity AI 利用 NVIDIA 推理栈每月服务 4 亿个搜索请求 | 对 AI 赋能服务的需求持续快速增长,这给 IT 和基础设施团队带来了越来越大的压力。这些团队的任务是配置必要的硬件和软件来满足这一需求,同时平衡成本效益和最佳用户体验。Perplexity AI 的推理团队面临着这一挑战。Perplexity AI 是一款由 AI 驱动的搜索引擎,每月可处理超过 435 million 个查询。每个查询代表多个 AI 推理请求。
为了满足这一需求,Perplexity 推理团队转而使用
NVIDIA H100 Tensor Core GPUs
、
NVIDIA Triton Inference Server
和
NVIDIA TensorRT-LLM
来进行经济高效的
大语言模型(LLM)
部署。本文根据他们的实操经验,详细介绍了一些部署最佳实践和节省的 TCO。
同时为多个 AI 模型提供服务
为了支持其广泛的用户群并满足搜索、总结和问答等方面的各种请求,Perplexity 的推理团队同时为 20 多个 AI 模型提供服务。其中包括热门开源 Llama 3.1 模型的不同变体,如 8B、70B 和 405B。
为了将每个用户请求与适当的模型相匹配,该公司依靠较小的分类器模型来帮助确定用户意图。然后,分类器检测到的用户任务 (例如文本补全) 会被路由到部署在 GPU 节点上的特定模型。每个节点由一个或多个 NVIDIA H100 GPU 组成,并由 NVIDIA Triton Inference Server 实例管理。这些节点遵循严格的服务级别协议 (SLA) 运行,以实现成本效益和用户交互。
图 1. Perplexity AI 使用 NVIDIA HGX H100 4-GPU 和 8-GPU 系统大规模部署 LLM 生产,具有动态可扩展性、模型并行性和负载均衡功能
为了适应庞大的 Perplexity 用户群和全天波动的流量,Pod 托管在 Kubernetes 集群内。它们具有内置的前端调度程序,可根据负载和使用情况将流量路由到适当的 Pod,从而确保始终满足 SLA。
前端调度程序使用的调度算法会影响令牌间延迟,尤其是在改善最差性能百分位方面 (Figure 2)。Perplexity 团队不断寻找新的调度器优化,包括如何更好地考虑不同请求之间的序列长度变化。
图 2、在 16 QPS 的中等请求速率下,在内部 Perplexity 基准测试中,循环、最小请求的令牌间延迟分配,以及两种随机选择负载均衡策略的能力
Triton Inference Server 是 Perplexity 部署架构的关键组件。它跨各种后端提供优化模型、批量处理传入用户请求,并向调度程序提供 GPU 利用率指标。这支持根据推理请求的数量增加或减少部署和 GPU 的数量。
有关如何使用 Kubernetes 部署 NVIDIA Triton 的详细指南,请参阅
使用 Kubernetes 扩展 LLMs with NVIDIA Triton 和 NVIDIA TensorRT-LLM
。
满足严格的服务级别协议
为了为公司的各种用例定义合适的 SLA,Perplexity 的推理团队进行了全面的 A/B 测试,评估不同的配置及其对用户体验的影响。他们的目标是最大限度地提高 GPU 利用率,同时始终满足每个特定用例的目标 SLA。通过在满足目标 SLA 的同时改进 batching,推理服务成本得以优化。
对于较小的模型,例如嵌入实时检索中使用的 10 亿个参数以下的模型,重点在于尽可能降低延迟。这些通常对用户隐藏,并且是更广泛工作流程的一部分。因此,这些查询的配置通常具有较低的批量大小。鉴于这些模型的内存占用空间较小,该团队在 NVIDIA H100 GPU 上同时运行多个模型,以保持高资源利用率。
对于面向用户的模型 (例如 Llama 8B、70B 和 405B),这些模型对用户体验和部署成本的影响更大,因此该团队会进行更深入的性能分析,并评估关键指标,例如时间到第一个 token、每位用户每秒 token 数以及每百万次查询的成本。
为了在控制成本的同时优化性能,Perplexity 跨多个 GPU 并行部署这些模型。由于严格的服务水平协议 (SLA),该团队选择将张量并行性增加到 4 个和 8 个 GPU,他们发现这样可以在固定的 GPU 预算内降低对延迟非常敏感的请求的服务成本。数据或 pipeline 并行有助于在延迟敏感度较低的设置中更大限度地提高吞吐量。
Perplexity 团队现在将 TensorRT-LLM 与使用优化的 CUDA 内核构建的专有 LLM 运行时相结合,以更低的成本在严格的 SLA 中成功为基于 Llama 的模型提供服务。
图 3、在四个 NVIDIA Hopper GPU 上使用张量并行机制对 Llama 8B 模型进行分片,可将 Perplexity 每百万个令牌的相对成本降低高达 3 倍,以处理延迟敏感型请求
最终,Perplexity 推理团队决定托管模型,取决于他们能否以较低的成本为这些模型提供服务,同时仍满足严格的服务水平协议 (SLA),与使用第三方 LLM 提供商 API 相比。例如,该团队估计,与向第三方 LLM API 服务提供商发送相同的请求量相比,通过在云托管的 NVIDIA GPUs 上提供支持其 Related-Questions 功能的模型,他们每年能够节省约 $1 million。“Related-Questions” 功能为 Perplexity 用户提供后续问题建议,以便在搜索查询后进行更深入的研究。
图 4、与使用第三方 LLM API 服务相比,Perplexity 使用其部署架构在云托管的 NVIDIA GPU 上提供 Related-Questions AI 模型,每年可节省约 100 万美元
提供更高水平的性能
Perplexity 的推理团队对其路线图采用全面的全栈方法,不断优化和增强堆栈的每一层 (从应用和用例到为中间件和硬件加速器提供服务的推理)。
在推理服务中间件方面,该团队正在积极与 NVIDIA Triton 工程团队合作,部署
分解服务
,这是一项突破性技术,可将 LLM 工作流的预填充和解码推理阶段分离到单独的 NVIDIA GPU。此技术可显著提高整体系统吞吐量,同时满足 SLA,从而降低每个令牌的成本。此外,鉴于 Perplexity 的特定硬件资源需求,此技术还可让 Perplexity 在每个推理阶段灵活使用不同的 NVIDIA GPU 产品。
Perplexity 团队明白,优化软件堆栈只能在一定程度上推动性能提升。为了提供更高级别的性能,硬件创新至关重要。因此,他们迫切希望评估
NVIDIA Blackwell 平台
。
NVIDIA Blackwell 通过众多技术创新实现了
显著的性能飞跃
,包括支持 FP4 数据格式的第二代 Transformer Engine、支持更大 NVLink 域的第五代 NVLink 和 NVSwitch 等。总体而言,这些创新可将
万亿级参数 LLM
的推理性能提升 30 倍。
开始使用
NVIDIA Triton Inference Server
和
NVIDIA TensorRT
是 GitHub 上的开源项目。它们还可用作 Docker 容器,可从
NVIDIA NGC
中提取。它们是
NVIDIA AI Enterprise
的一部分,可提供企业级安全性、稳定性和支持。寻求快速实现价值的企业可以使用
NVIDIA NIM
,这是一套易于使用的微服务,可加速各种 AI 模型(包括开源社区和 NVIDIA AI Foundation 模型)的推理。
如需了解详情,请查看以下资源:
从 0 扩展到数百万:使用 TensorRT-LLM 扩展大型语言模型推理
Perplexity
借助 NVIDIA NeMo 增强 AI 驱动的搜索引擎的模型性能
借助 NVIDIA 在云端加速大语言模型推理 |
https://developer.nvidia.com/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/ | Llama 3.2 Full-Stack Optimizations Unlock High Performance on NVIDIA GPUs | Meta recently released its
Llama 3.2
series of vision language models (VLMs), which come in 11B parameter and 90B parameter variants. These models are multimodal, supporting both text and image inputs. In addition, Meta has launched text-only
small language model (SLM)
variants of Llama 3.2 with 1B and 3B parameters. NVIDIA has optimized the Llama 3.2 collection of models for great performance and cost-efficient serving across millions of GPUs worldwide – from our most powerful data center and cloud GPUs to local
NVIDIA RTX workstations
and even low-power edge devices with
NVIDIA Jetson
.
Llama 3.2 VLMs support long context lengths of up to 128K text tokens as well as a single image input at a resolution of 1120 x 1120 pixels. To enable low latency responses for great user experiences, while also providing high throughput for cost-efficient serving of these models, the NVIDIA platform is optimized at every layer of the technology stack.
Similarly, the Llama 3.2 SLMs have been optimized to run well on the millions of NVIDIA RTX PCs and workstations worldwide. They have also been quantized to allow for local deployment on edge devices with NVIDIA Jetson. For more information, see
Deploying Accelerated Llama 3.2 from the Edge to the Cloud
.
This post describes the full-stack optimizations that enable high throughput and low latency serving of Llama 3.2 models.
Accelerating Llama 3.2 AI inference throughput
The Llama 3.2 11B and Llama 3.2 90B models include a vision encoder with a text decoder. The encoder is optimized for high-performance inference using the
NVIDIA TensorRT
library and the text decoder is optimized using the
NVIDIA TensorRT-LLM
library.
The visual information from the vision encoder is fused into the Llama text decoder with a cross-attention mechanism that is supported in TensorRT-LLM. This enables the Llama 3.2 VLMs to efficiently generate text by taking into account visual reasoning and understanding in context with the text input.
TensorRT supports the vision encoder in the BF16 data format. TensorRT-LLM supports the text decoder in both FP16 and BF16 formats. The official recipe released by Meta uses a BF16 text decoder. This is also used for our baseline performance measurements. To boost performance further, NVIDIA developed a custom FP8
post-training quantization (PTQ)
recipe, leveraging the fourth-generation FP8 Tensor Cores that are part of the
NVIDIA Hopper architecture
.
This recipe, available through the
TensorRT Model Optimizer library
, enables higher Llama 3.2 throughput and lower latency while delivering the same accuracy across numerous benchmarks including
ScienceQA
,
OCRBench
,
TextVQA
, and
MMMU
. This means that developers can now run the model more cost-effectively.
Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using
NVIDIA NIM
microservices.
Delivering high throughput and low latency
Table 1 shows maximum throughput performance, representing offline use cases, across a range of input and output sequence lengths and single input image with maximum supported resolution of 1120 x 1120 pixels. Using a system based on the NVIDIA HGX H200 platform, we run the Llama 3.2 90B model on eight
NVIDIA H200 Tensor Core GPUs
, each with 141 GB of fast HBM3e memory, connected through NVLink and
NVLink Switch
, providing 900 GB/s of GPU-to-GPU bandwidth between the GPUs.
Maximum Throughput Performance –
Output Tokens/Second
Eight NVIDIA H200 Tensor Core GPUs
Input | Output Sequence Lengths | Image Size
8,000 | 2,000 | 1120×1120
20,000 | 2,000 | 1120×1120
60,000 | 2,000 | 1120×1120
BF16 Encoder with FP8 Decoder
2,646
1,417
480
Table 1. Maximum throughput performance with NVIDIA internal measurements
TensorRT optimized NIM for VLMs version 1.1.0 recipe. NIM server restarted between each ISL/OSL configuration to set an optimal KV cache split. Data measured on 11/14/2024. Output tokens/second is inclusive of time to generate the first token – tok/s =total generated tokens / total latency. DGX H200, normalized to 8 GPUs (by taking the TP profile maximizing throughput per GPU, and multiplying that value by 8 to simulate a replica-parallel setup), batch size tuned for maximum node throughput, TensorRT Model Optimizer version 0.21 (pre-release), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0.
Table 2 shows minimum latency performance using the same input and output sequence lengths and input image size.
Minimum Latency Performance – Output Tokens/Second
Eight NVIDIA H200 Tensor Core GPUs
Input | Output Sequence Lengths | Image Size
8,000 | 2,000 | 1120×1120
20,000 | 2,000 | 1120×1120
60,000 | 2,000 | 1120×1120
BF16 Encoder with FP8 Decoder
64
63
55
Table 2. Minimum latency performance with NVIDIA internal measurements
TensorRT optimized NIM for VLMs version 1.1.0 recipe. Data measured on 11/4/2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, TP8, batch size = 1, TensorRT Model Optimizer version 0.21 (prerelease), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0.
As these results show, NVIDIA H200 GPUs with
TensorRT-optimized
software delivers exceptional performance on Llama 3.2 90B VLM, in both latency-optimized and throughput-optimized scenarios.
Throughput performance of GeForce RTX 4090 with ONNX Runtime on NVIDIA RTX
For Windows deployments, NVIDIA has optimized Llama 3.2 SLMs to work efficiently using the
ONNX Runtime Generative API
, with a
DirectML
backend. Performance measurements are made using the
model checkpoint
available on the NGC catalog. The checkpoint is a quantized version of Llama 3.2 3B Instruct model and is quantized to
AWQ INT4
using
AutoAWQ
and converted to
ONNX
using ONNX Runtime Generative API.
Maximum Throughput Performance – Output Tokens/Second
NVIDIA GeForce RTX 4090 GPUs
Input | Output Sequence Lengths
100 | 100
2,000 | 100
4,000 | 100
Onnx-GenAI Runtime with DirectML, BS=1
253
203
165
Onnx-GenAI Runtime with DirectML, BS=4
615
374
251
Table 3. Maximum throughput performance with NVIDIA internal measurements
ONNX Runtime Generative API with DirectML data measured on 10/07/2024. Output tokens/second is inclusive of time to generate the first token – tok/s =total generated tokens / total latency. GeForce RTX 4090 GPU.
Better performance on Llama 3.2 across platforms
With the NVIDIA accelerated computing platform, you can build models and supercharge your applications with the most performant Llama 3.2 models on any platform—from the data center and cloud to local workstations. Enterprises seeking the fastest time to value can use
NVIDIA NIM
, part of the
NVIDIA AI Enterprise
software platform, which offers
NVIDIA TensorRT
optimized
inference
on Llama 3.2 and other models from NVIDIA and its partner ecosystem.
Acknowledgments
We would like to thank George Yuan, Alex Settle, and Chenjie Luo for their efforts in supporting this post. | https://developer.nvidia.com/zh-cn/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/ | Llama 3.2 全栈优化释放 NVIDIA GPU 的高性能 | Meta 最近发布了
Llama 3.2
系列视觉语言模型(VLM),其中包含 11B 参数和 90B 参数变体。这些模型是多模态模型,支持文本和图像输入。此外,Meta 还推出了 Llama 3.2 的纯文本
小语言模型(SLM)
变体,具有 1B 和 3B 参数。NVIDIA 已对 Llama 3.2 模型集合进行优化,以便在全球数百万个 GPU 上提供出色的性能和经济高效的服务 – 从我们功能强大的数据中心和云 GPU 到本地
NVIDIA RTX 工作站
,甚至是使用
NVIDIA Jetson
的低功耗边缘设备。
Llama 3.2 VLM 支持长达 128K 的文本标记,以及分辨率为 1120 x 1120 像素的单张图像输入。为了实现低延迟响应,提供出色的用户体验,同时提供高吞吐量以经济高效地提供这些模型,NVIDIA 平台在技术堆栈的每一层都进行了优化。
同样,Llama 3.2 SLM 已经过优化,可在全球数百万 NVIDIA RTX PC 和工作站上良好运行。它们还经过量化,允许在边缘设备上使用 NVIDIA Jetson 进行本地部署。有关更多信息,请参阅
将加速的 Llama 3.2 从边缘部署到云端
。
本文将介绍如何通过全栈优化实现 Llama 3.2 模型的高吞吐量和低延迟服务。
加速 Llama 3.2 AI 推理吞吐量
Llama 3.2 11B 和 Llama 3.2 90B 模型包括一个带有文本解码器的视觉编码器。编码器使用
NVIDIA TensorRT 库
针对高性能推理进行了优化,文本解码器使用
NVIDIA TensorRT-LLM 库
进行了优化。
通过 TensorRT-LLM 中支持的交叉注意力机制,视觉编码器的视觉信息融合到 Llama 文本解码器中。这使得 Llama 3.2 VLM 能够高效生成文本,在文本输入的上下文中考虑视觉推理和理解。
TensorRT 支持视觉编码器在 BF16 数据格式下运行。TensorRT-LLM 支持文本解码器在 FP16 和 BF16 格式下运行。Meta 发布的官方 recipe 使用了 BF16 文本解码器,这也用于我们的基准性能测量。为了进一步提高性能,NVIDIA 利用 NVIDIA Hopper 架构中的
第四代 FP8 Tensor Core
开发了自定义的 FP8 后训练量化(PTQ)recipe。
此方法通过
TensorRT Model Optimizer 库
提供,可实现更高的 Llama 3.2 吞吐量和更低的延迟,同时在众多基准测试(包括
ScienceQA
、
OCRBench
、
TextVQA
和
MMMU
)中提供相同的准确性。这意味着开发者现在可以更经济高效地运行模型。
TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用
NVIDIA NIM
微服务的生产就绪型部署获得。
提供高吞吐量和低延迟
表 1 显示了在一系列输入和输出序列长度以及单输入图像 (最大支持分辨率为 1120 x 1120 像素) 中的最大吞吐量性能 (代表离线用例)。我们使用基于 NVIDIA HGX H200 平台的系统,在 8 个
NVIDIA H200 Tensor Core GPU
上运行 Llama 3.2 90B 模型,每个 GPU 均具有 141 GB 的快速 HBM3e 内存,通过
NVLink 和 NVLink Switch
连接,可在 GPU 之间提供 900 GB/s 的 GPU 到 GPU 带宽。
最大吞吐量性能 –
每秒输出令牌数
八个 NVIDIA H200 Tensor Core GPU
输入|输出序列长度|图像大小
8000|2000|1120*1120
20000|2000|1120*1120
60000|2000|1120*1120
带 FP8 解码器的 BF16 编码器
2646
1417
480
表 1. 通过 NVIDIA 内部测量获得的最大吞吐量性能
TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。在每个 ISL/OSL 配置之间重新启动 NIM 服务器,以设置最佳 KV 缓存分割。数据测量时间:11 月 14 日,2024 年。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = total generated tokens / total latency。DGX H200,标准化为 8 个 GPUs (通过采用 TP 配置文件最大限度地提高每个 GPU 的吞吐量,并将该值乘以 8 以模拟副本并行设置),针对最大节点吞吐量调整了批量大小,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0。
表 2 显示了使用相同的输入和输出序列长度以及输入图像大小时的最小延迟性能。
最低延迟性能 – 输出令牌/秒
(
八个 NVIDIA H200 Tensor Core GPUs
)
输入|输出序列长度|图像大小
8000|2000|1120*1120
20000|2000|1120*1120
60000|2000|1120*1120
带 FP8 解码器的 BF16 编码器
64
63
55
表 2、NVIDIA 内部测量的最低延迟性能
TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。数据测量时间:2024 年 11 月 4 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 生成的 tokens 总数/总延迟。DGX H200、TP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0 以上。
这些结果表明,
搭载 TensorRT 优化软件的
NVIDIA H200 GPU 可在 Llama 3.2 90B VLM 上提供出色的性能,适用于延迟优化和吞吐量优化的场景。
采用 ONNX Runtime 的 GeForce RTX 4090 在 NVIDIA RTX 上的吞吐量性能
对于 Windows 部署,NVIDIA 已对 Llama 3.2 SLM 进行优化,以便使用带有
DirectML
后端的
ONNX
Runtime Generative API
高效工作。使用 NGC 目录上提供的
模型检查点
进行性能测量。检查点是 Llama 3.2 3B Instruct 模型的量化版本,使用
AutoAWQ
量化为
AWQ INT4
,并使用 ONNX Runtime Generative API 转换为 ONNX。
最大吞吐量性能 – 输出令牌/秒
NVIDIA GeForce RTX 4090 GPU
输入|输出序列长度
100|100
2000|100
4000|100
采用 DirectML 的 Onnx-GenAI 运行时,BS = 1
253
203
165
支持 DirectML 的 Onnx-GenAI 运行时,BS = 4
615
374
251
表 3、通过 NVIDIA 内部测量获得的最大吞吐量性能
使用在 2024 年 10 月 7 日测量的 DirectML 数据的 ONNX Runtime Generative API。输出 token/s 包含生成第一个 token 的时间 – tok/s = 总生成 token 数 / 总延迟。GeForce RTX 4090 GPU。
跨平台在 Llama 3.2 上获得更好的性能
借助 NVIDIA 加速计算平台,您可以在任何平台 (从数据中心、云到本地工作站) 上构建模型并使用性能超强的 Llama 3.2 模型增强您的应用。
NVIDIA NIM
是
NVIDIA AI Enterprise
软件平台的一部分,可为 Llama 3.2 和 NVIDIA 及其合作伙伴生态系统的其他模型提供经
NVIDIA TensorRT
优化的推理。
致谢
我们要感谢 George Yuan、Alex Settle 和 Chenjie Luo 为支持这篇博文而付出的努力。 |
https://developer.nvidia.com/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/ | Streamlining AI Inference Performance and Deployment with NVIDIA TensorRT-LLM Chunked Prefill | In this blog post, we take a closer look at chunked prefill, a feature of
NVIDIA TensorRT-LLM
that increases GPU utilization and simplifies the deployment experience for developers. This builds on our previous
post
discussing how advanced KV cache optimization features in TensorRT-LLM improve performance up to 5x in use cases that require system prefills.
Challenges with traditional prefill and decode inference approaches
When a user submits a request to a model, it goes through two distinct computational phases: prefill and decode. Each phase uses GPU compute resources differently.
During the prefill phase, the system processes all input tokens to compute the KV cache, which is then used to generate the first token of the output. This phase is computationally demanding and can effectively use a GPU’s vast parallel compute resources.
In the decode phase, the system generates output tokens individually, updating the intermediate states from the prefill phase with each new token. Since the heavy computational work for intermediate state calculations is done in the prefill phase, the decode phase primarily involves processing only the newly generated token. As a result, this phase is less computationally intensive.
TensorRT-LLM supports in-flight batching where multiple requests are handled in parallel, enabling prefill and decode stage processing simultaneously. While more efficient than phased batching, in which decode requests are paused to process prefill requests, this approach can introduce latency as the decode phases are delayed until the prefill requests are completed. The top portion of Figure 1 illustrates this concept.
Figure 1. A simplified representation of the chunked prefill process, showing how it can increase GPU utilization and accelerate query completion
Balancing prefill and decode phases with chunked prefill
With TensorRT-LLM chunked prefill, the tokens are divided into smaller units, or chunks, for faster processing. This prevents the prefill phase from becoming a bottleneck, enables more parallelization with decode phase tokens, and increases GPU utilization. The bottom portion of Figure 1 illustrates this concept.
Additionally, using TensorRT-LLM chunked prefill enables GPU systems to handle longer contexts and higher concurrency levels. Since memory usage depends on the number of tokens processed during each iteration, using prefill chunks decouples memory consumption from the context length of incoming requests. The system can then process larger context lengths without increasing memory demands.
Simplifying TensorRT-LLM engine creation with dynamic chunk sizing
Figure 1 shows how chunk size can impact user interactivity and overall system throughput, requiring a trade-off based on the desired user experience and available GPU resources. Using a large chunk size lowers the number of iterations required to process prefill sequences, reducing time to first token (TTFT).
However, it also increases the time taken to complete the decode phase of ongoing requests, increasing query completion time and reducing output tokens per second (TPS). Finding the right balance can sometimes be time-consuming for developers. TensorRT-LLM deploys dynamic chunk sizing, providing ideal recommendations for the user based on GPU utilization metrics.
An additional advantage of dynamic prefill chunk sizing is the simplification of the TensorRT-LLM engine build process. In the past, developers had to supply a maximum input sequence length from which activation buffers were computed during engine build time. To ensure the activation buffers could handle the most complex user queries, developers were required to set the maximum sequence length expected for their use case. This approach optimized GPU resources for outlier and worst-case scenarios, leading to inefficient memory usage.
With dynamic prefill chunk sizing developers no longer need to specify the maximum input length manually. Instead, activation buffer sizes are automatically determined by the chunk size, which is configured by TensorRT-LLM. This eliminates manual configuration and leads to more efficient memory usage during execution.
Getting started with TensorRT-LLM chunked prefills
Using TensorRT-LLM chunked prefill significantly improves both system performance and utilization. Breaking down the traditionally sequential prefill phase into smaller, more manageable chunks, enables better parallelization, with the decode phase, reducing bottlenecks and accelerating query completion.
The dynamic adjustment of chunk size also simplifies the TensorRT-LLM engine configuration process, eliminating the need for developers to manually set activation buffer sizes, which previously resulted in inefficient memory usage. Dynamic chunk sizing optimizes memory use and ensures that the system adapts seamlessly to varying LLM user demands.
Check out our
GitHub documentation
, to start using TensorRT-LLM chunked prefills. Learn more about
NVIDIA AI Inference solutions
and stay up-to-date with the latest
AI inference performance
updates. | https://developer.nvidia.com/zh-cn/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/ | 借助 NVIDIA TensorRT-LLM 分块预填充提高 AI 推理效率和简化部署 | 在本
博文
中,我们详细介绍了分块预填充,这是
NVIDIA TensorRT-LLM
的一项功能,可提高 GPU 利用率并简化开发者的部署体验。本文基于之前的博文,讨论了 TensorRT-LLM 中的高级 KV 缓存优化功能如何在需要系统预填充的用例中将性能提升高达 5 倍。
传统预填充和解码推理方法面临的挑战
当用户向模型提交请求时,它将经历两个不同的计算阶段:预填充和解码。每个阶段使用 GPU 计算资源的方式各不相同。
在预填充阶段,系统会处理所有输入令牌以计算 KV 缓存,然后使用该缓存生成输出的第一个令牌。此阶段的计算要求很高,可以有效利用 GPU 庞大的并行计算资源。
在解码阶段,系统会单独生成输出 tokens,使用每个新 tokens 更新预填充阶段的中间状态。由于中间状态计算的繁重计算工作是在预填充阶段完成的,因此解码阶段主要涉及仅处理新生成的 tokens。因此,此阶段的计算密集度较低。
TensorRT-LLM 支持动态批处理,允许并行处理多个请求,使得预填充和解码阶段可以同时进行。虽然这种方法比暂停解码请求以处理预填充请求的分阶段批处理更高效,但由于解码阶段延迟到预填充请求完成,因此可能会导致延迟。图 1 的顶部说明了这一概念。
图 1. 分块预填充过程的简化表示,展示了如何提高 GPU 利用率并加速查询完成。
使用分块预填充平衡预填充和解码阶段
借助 TensorRT-LLM 分块预填充,令牌被分成较小的单元或分块,以加快处理速度。这可以防止预填充阶段成为瓶颈,使解码阶段的令牌实现更多的并行化,并提高 GPU 的利用率。图 1 的底部部分说明了这一概念。
此外,使用 TensorRT-LLM 分块预填充使 GPU 系统能够处理更长的上下文和更高的并发级别。由于内存使用量取决于每次迭代期间处理的令牌数量,因此使用预填充块可将内存消耗量与传入请求的上下文长度解耦。然后,系统可以在不增加内存需求的情况下处理更大的上下文长度。
通过动态数据块大小调整简化 TensorRT-LLM 引擎创建
图 1 显示了块大小百分比对用户交互性和整体系统吞吐量的影响,这需要根据所需的用户体验和可用 GPU 资源进行权衡。使用大块大小可以减少处理预填充序列所需的迭代次数,从而缩短第一个令牌(TTFT)的时间。
但是,这也会增加完成正在进行的请求的解码阶段所需的时间,从而增加查询完成时间并降低每秒输出令牌数(TPS)。对于开发者而言,找到合适的平衡有时非常耗时。TensorRT-LLM 部署动态块大小,根据 GPU 利用率指标为用户提供理想建议。
动态预填充块大小调整的另一个优势是简化了 TensorRT-LLM 引擎构建流程。过去,开发者必须提供最大输入序列长度,以便在引擎构建期间计算激活缓冲区。为确保激活缓冲区能够处理最复杂的用户查询,开发者需要为其用例设置预期的最大序列长度。这种方法针对异常和最差情况优化了 GPU 资源,导致内存使用效率低下。
借助动态预填充块大小调整,开发者无需再手动指定最大输入长度。相反,激活缓冲区大小由块大小(由 TensorRT-LLM 配置)自动确定。这消除了手动配置,并提高了执行期间的内存使用效率。
TensorRT-LLM 分块预填充入门
使用 TensorRT-LLM 分块预填充可显著提高系统性能和利用率。将传统的连续预填充阶段分解为更小、更易于管理的块,可通过解码阶段实现更好的并行化,从而减少瓶颈并加速查询完成。
块大小的动态调整还简化了 TensorRT-LLM 引擎配置流程,无需开发者手动设置激活缓冲区大小,而这在之前会导致内存使用效率低下。动态块大小优化内存使用,并确保系统能够无缝适应不同 LLM 用户的需求。
查看
我们的 GitHub 文档
,开始使用 TensorRT-LLM 分块预填充。详细了解
NVIDIA AI 推理解决方案
,并随时了解最新的
AI 推理性能更新
。 |
https://developer.nvidia.com/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | 5x Faster Time to First Token with NVIDIA TensorRT-LLM KV Cache Early Reuse | In our previous
blog post
, we demonstrated how reusing the key-value (KV) cache by offloading it to CPU memory can accelerate time to first token (TTFT) by up to 14x on x86-based NVIDIA H100 Tensor Core GPUs and 28x on the NVIDIA GH200 Superchip. In this post, we shed light on KV cache reuse techniques and best practices that can drive even further TTFT speedups.
Introduction to KV cache
LLM models are rapidly being adopted for many tasks, including question-answering, and code generation. To generate a response, these models begin by converting the user’s prompt into tokens, which are then transformed into dense vectors. Extensive dot-product operations follow to mathematically model the relationships between the tokens and build a contextual understanding of the user input. The computational cost of generating this contextual understanding increases quadratically with the length of the input sequence.
This resource-intensive process generates keys and values, which are cached to avoid recomputation when generating subsequent tokens. Reusing the KV cache reduces the computational load and time needed to generate additional tokens—leading to a faster and more efficient user experience.
When reusing the KV cache, careful attention must be given to how long it remains in memory, which components to evict first when memory is full, and when it can be reused for new incoming prompts. Optimizing these factors can lead to incremental performance improvements in KV cache reuse. NVIDIA TensorRT-LLM offers three key features that specifically address these areas.
Early KV cache reuse
Traditional reuse algorithms require the entire KV cache computation to be completed before any portions of it can be reused with new user prompts. In scenarios such as enterprise chatbots, where system prompts—predefined instructions added to user queries—are essential to direct the LLM’s responses in line with enterprise guidelines, this method can be inefficient.
When a surge of users interacts with the chatbot simultaneously, each user would require a separate computation of the system prompt KV cache. With TensorRT-LLM, we can instead reuse the system prompt as it is being generated in real time, enabling it to be shared across all users during the burst, rather than recalculating it for each user. This can significantly accelerate inference for use cases requiring system prompts by up to 5x.
Figure 1. TensorRT-LLM KV cache reuse can speed up TTFT by up to 5x
Flexible KV cache block sizing
In reuse implementations, only entire cache memory blocks can be allocated for reuse. For example, if the cache memory block size is 64 tokens and KV cache is 80 tokens, only 64 tokens will be stored for reuse, while the remaining 16 tokens will need to be recomputed. However, if the memory block size is reduced to 16 tokens, all 64 tokens can be stored across five memory blocks, eliminating the need for re-computation.
This effect is most pronounced when the input sequences are short. For long input sequences, larger blocks can be more beneficial. As is clear, the more granular the control you have over the KV cache, the better you can optimize it for your specific use case.
TensorRT-LLM provides fine-grained control over KV cache memory blocks, giving developers the ability to chop them into smaller blocks between 64 to 2 tokens. This optimizes the usage of allocated memory, increases reuse rates, and improves TTFT. When running LLAMA70B on NVIDIA H100 Tensor Core GPUs, we can speed up TTFT up to 7% in multi-user environments by reducing KV cache block size from 64 tokens to 8 tokens.
Figure 2. Impact of changing KV cache block size on inference speedup
Efficient KV cache eviction protocols
Partitioning the KV cache into smaller blocks and evicting unused ones can be effective for memory optimization, but it introduces dependency complexities. When a specific block is used to generate a response, and the result is stored as a new block, it can form a tree-like structure of dependencies.
Over time, the counters tracking the usage of the source blocks (the branches) may become stale as the dependent nodes (the leaves) are reused. Evicting the source block then requires the eviction of all dependent blocks, which would require recalculation of the KV cache for new user prompts, increasing TTFT.
To address this challenge, TensorRT-LLM includes intelligent eviction algorithms that can trace the dependent nodes from their source nodes and evict dependent nodes first, even if they have more recent reuse counters. This ensures more efficient memory management while preventing unnecessary evictions of dependent blocks.
Figure 3. A logical representation of KV cache eviction algorithm show how it can reduce the number of evicted blocks, increasing the likelihood of reuse
Getting started with TensorRT-LLM KV cache reuse
Generating KV cache during inference requires a lot of compute and memory resources. Using it efficiently is critical to improving model response, accelerating inference, and increasing system throughput. TensorRT-LLM provides advanced reuse features for developers looking to further optimize TTFT response times for peak performance.
To start using TensorRT-LLM KV cache reuse check out our
GitHub documentation
. | https://developer.nvidia.com/zh-cn/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | NVIDIA TensorRT-LLM KV 缓存早期重用实现首个令牌速度 5 倍提升 | 在上一篇
博客文章
中,我们展示了通过将键值 (KV) 缓存卸载到 CPU 内存来重复使用该缓存如何在基于 x86 的 NVIDIA H100 Tensor Core GPU 上将首次生成令牌 (Time To First Token, TTFT) 的速度提升高达 14 倍,在 NVIDIA GH200 Superchip 上可将其提升高达 28 倍。在本文中,我们介绍了可以进一步推动 TTFT 加速的 KV 缓存重复使用技术和最佳实践。
KV 缓存简介
许多任务(包括问答和代码生成)都迅速采用了 LLM 模型。为了生成响应,这些模型首先将用户的提示符转换为 tokens,然后将其转换为 dense vectors。随后会进行大量的 dot-product operations,以数学方式对 tokens 之间的关系建模,并构建对用户输入的上下文理解。生成这种上下文理解的计算成本会随着输入序列的长度而二次增加。
此资源密集型流程会生成键和值,这些键和值会被缓存,以避免在生成后续 token 时进行重新计算。重复使用 KV 缓存可减少生成额外 token 所需的计算负载和时间,实现更快、更高效的用户体验。
重复使用 KV 缓存时,必须仔细注意其在内存中的剩余时间、在内存已满时首先移除哪些组件,以及何时可重复用于新的传入提示。优化这些因素可能会带来 KV 缓存重复使用的性能增益。NVIDIA TensorRT-LLM 提供了三个关键功能,专门解决了这些问题。
早期 KV 缓存复用
传统的复用算法需要先完成整个 KV 缓存的计算,然后才能根据新用户的提示重复使用其任何部分。在企业聊天机器人等场景中,系统提示(添加到用户查询中的预定义指令)对引导 LLM 根据企业指南做出响应至关重要,这种方法可能效率低下。
当大量用户同时与聊天机器人交互时,每个用户都需要单独计算系统提示 KV 缓存。借助 TensorRT-LLM,我们可以重新使用实时生成的系统提示,使其在突发期间在所有用户之间共享,而不是为每个用户重新计算。这可以显著加快推理速度,最高可达 5 倍。
图 1.TensorRT-LLM KV 缓存重复利用可将 TTFT 速度提升高达 5 倍。
灵活调整 KV 缓存块大小
在重用实现中,只能分配整个缓存内存块以供重用。例如,如果缓存内存块大小为 64 个令牌,KV 缓存为 80 个令牌,则仅存储 64 个令牌以供重用,而其余 16 个令牌则需要重新计算。但是,如果将内存块大小减少到 16 个令牌,则可在五个内存块中存储全部 64 个令牌,从而消除重新计算的需求。
当输入序列较短时,这种效果最为明显。对于长输入序列,更大的块会更有好处。显然,您对 KV 缓存的控制越细致,就越能针对特定用例进行优化。
TensorRT-LLM 可对 KV 缓存内存块进行细粒度控制,使开发者能够将这些块分割成 64 到 2 个令牌之间的较小块。这可优化已分配内存的使用,提高复用率并改善 TTFT。在 NVIDIA H100 Tensor Core GPU 上运行 LLAMA70B 时,我们可以通过将 KV 缓存块的大小从 64 个令牌减少到 8 个令牌,在多用户环境中将 TTFT 速度提升高达 7%。
图 2. 更改 KV 缓存块大小对推理加速的影响
高效的 KV 缓存回收协议
将 KV 缓存分割成较小的块并移除未使用的块对于内存优化可能有效,但这会引入依赖项复杂性。当使用特定块生成响应并将结果存储为新块时,它会形成树状的依赖结构。
随着时间的推移,跟踪源块(分支)使用情况的计数器可能会随着依赖节点(叶子)的重复使用而过时。然后,移除源块需要移除所有依赖块,这将需要重新计算 KV 缓存以用于新用户提示,从而增加 TTFT。
为应对这一挑战,TensorRT-LLM 包含智能驱逐算法,该算法可以追踪源节点中的依赖节点,并首先驱逐依赖节点,即使这些节点具有较新的重复使用计数器。这可以确保更高效的内存管理,同时防止不必要的依赖块驱逐。
图 3.KV 缓存驱逐算法的逻辑表示展示了如何减少被驱逐的块的数量,从而增加重复使用的可能性。
TensorRT-LLM KV 缓存重用入门
在推理过程中生成 KV 缓存需要大量的计算和内存资源。高效使用 KV 缓存对于改善模型响应、加速推理和提高系统吞吐量至关重要。TensorRT-LLM 为希望进一步优化 TTFT 响应时间以实现峰值性能的开发者提供了高级复用功能。
要开始使用 TensorRT-LLM KV 缓存,请参阅
我们的 GitHub 文档
。 |
https://developer.nvidia.com/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/ | 3x Faster AllReduce with NVSwitch and TensorRT-LLM MultiShot | Deploying generative AI workloads in production environments where user numbers can fluctuate from hundreds to hundreds of thousands – and where input sequence lengths differ with each request – poses unique challenges. To achieve low latency inference in these environments, multi-GPU setups are a must – irrespective of the GPU generation or its memory capacity. To enhance inference performance in production-grade setups, we’re excited to introduce TensorRT-LLM Multi-shot, a new multi-GPU communication protocol that leverages the
NVIDIA NVLink Switch
to significantly increase communication speeds by up to 3x. This blog outlines this new feature and how it helps developers and solution architects address the limitations of traditional multi-GPU communication methods.
Challenges with traditional AllReduce algorithms
For low latency inference, multi-GPU is critical, regardless of the memory capacity of a single GPU. However, at low concurrency, the time GPUs spend exchanging data can outweigh the time spent on compute. For optimal performance, an efficient
AllReduce
operation – a collective operation that combines partial results from each participating GPU – is critical.
Traditional approaches use ring-based algorithms, where the partial values are passed around a ring of GPUs. Each GPU contributes its values and passes the result to its neighbor. This process is repeated 2N-2 times where N is the number of GPUs working together, and by the end of the process, every GPU has the same summed value. A second pass over the ring is required to propagate summed values from the last GPU to the rest.
The Ring approach makes efficient use of available GPU-to-GPU bandwidth per communication step, but as the number of GPUs increases, so does the number of steps. This increases latency, as all GPUs need to stay synchronized at every step of the ring. These synchronization latencies add significant latency overhead and can make it difficult to meet more stringent latency targets.
The Ring AllReduce algorithm is described below:
Ring Algorithm: GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1)
2N-2 steps, with full tensor send/recv each step
Latency: 2N-2 communication steps. (N: # of GPUs)
Traffic: (4N-4)/N tensor bytes of send/recvs
Addressing AllReduce communication challenges with TensorRT-LLM MultiShot
TensorRT-LLM MultiShot is a new algorithm that reduces the O(N) latency of Ring AllReduce by up to 3x leveraging multicast in NVSwitch. Multicast is a hardware acceleration feature in NVSwitch which allows a GPU to send data once and have that data sent simultaneously to all other GPUs, minimizing the number of communication steps to two inter-GPU synchronizations while remaining bandwidth efficient. Without NVSwitch, this would take N times the communication bandwidth.
TensorRT-LLM Multishot separates the AllReduce into a ReduceScatter operation followed by an AllGather operation (for more detailed descriptions of collective operations, see this
documentation
).
Each GPU is responsible for accumulating only a portion of the result tensor.
The first step (or “shot”) involves each GPU sending the different slices of the tensor to the respective GPU responsible for accumulating that slice of the tensor.
After accumulating locally, each GPU now has the correct sum accumulators for its unique slice of the output.
In the second step (or “shot”), each GPU broadcasts the result slice to all other GPUs using the NVSwitch multicast capability. This minimizes the per GPU bandwidth required as the NVSwitch itself performs data amplification; each GPU sends 1/N the data and receives the full result tensor in one step.
The entire operation only takes two communication steps, regardless of the number GPUs performing tensor parallel inference.
TensorRT-LLM MultiShot Algorithm: GPU_N sends slices, Compute slice sum, broadcast result in single multicast operation.
Latency: 2 communication steps (regardless of number of GPUs)
Traffic: 2 tensor bytes of send/recv (regardless of number of GPUs)
Why this matters
Since this algorithm requires only two communication steps rather than 2N-2 (where N is the number of GPUs), MultiShot can be nearly 3x faster than Ring AllReduce. The benefits of this algorithm are particularly evident with smaller message sizes and high parallelism – the scenario needed when minimum latency is required for a great user experience.
This can be used to either reduce minimum latency, or increase throughput at a given latency. In scenarios with more aggressive latency thresholds, this can lead to super-linear scaling with the number of GPUs.
Figure 1. With TensorRT-LLM MultiShot, AllReduce latency is reduced by up to 3x.
Achieving optimal inference performance requires careful workload analysis and a deep understanding of performance bottlenecks. By gaining that understanding – both through internal engineering work as well as through close collaboration with external developers and researchers – we can quickly and frequently optimize many aspects of our platform to deliver great performance for users.
As we continue to identify and implement new performance optimizations – some may be extensive, others might be narrower in scope – we will be providing regular updates on these optimizations, providing both technical motivation and quantified benefits. | https://developer.nvidia.com/zh-cn/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/ | NVSwitch 和 TensorRT-LLM MultiShot 共同加速 AllReduce 速度达 3 倍 | 在用户数量可能在数百到数十万之间波动,且输入序列长度随每个请求而变化的生产环境中,部署生成式 AI 工作负载会面临独特的挑战。要在这些环境中实现低延迟推理,无论 GPU 生成方式或显存容量如何,多 GPU 设置都是必需的。为了提高生产级设置中的推理性能,我们很高兴推出 TensorRT-LLM Multi-shot,这是一种新的多 GPU 通信协议,利用
NVIDIA NVLink Switch
可将通信速度大幅提升高达 3 倍。本博客概述了这一新功能,以及它如何帮助开发者和解决方案架构师克服传统多 GPU 通信方法的限制。
传统 AllReduce 算法面临的挑战
对于低延迟推理,无论单个 GPU 的显存容量如何,多 GPU 都至关重要。但是,在低并发情况下,GPU 花在交换数据上的时间可能超过花在计算上的时间。为了获得最佳性能,
高效的 AllReduce 操作
–结合每个参与其中的 GPU 的部分结果的集合操作–至关重要。
传统方法使用基于环的算法,其中部分值在环形的 GPU 之间传递。每个 GPU 都贡献其值并将结果传递给其邻居。该过程重复 2N-2 次,其中 N 是协同工作的 GPU 数量,在该过程结束时,每个 GPU 都具有相同的总和值。需要对环进行第二次传递,以将总和值从最后一个 GPU 传播到其他 GPU。
Ring 方法可在每个通信步骤中高效利用可用的 GPU 到 GPU 带宽,但随着 GPU 数量的增加,步骤数也会增加。这会增加延迟,因为所有 GPU 都需要在 Ring 的每个步骤中保持同步。这些同步延迟会显著增加延迟开销,并可能导致难以满足更严格的延迟目标。
Ring AllReduce 算法描述如下:
环形算法:GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1)
2N-2 步长,每步具有完整的 Tensor send/recv
延迟:2N-2 通信步骤。(N:GPU 的数量)
流量:(4N-4)/N 张量的 send/recv 字节数
使用 TensorRT-LLM MultiShot 应对 AllReduce 通信挑战
TensorRT-LLM MultiShot 是一种新算法,可利用 NVSwitch 中的组播,将 Ring AllReduce 的 O(N) 延迟最多降低 3 倍。组播是 NVSwitch 中的硬件加速功能,允许一个 GPU 发送数据一次,并将该数据同时发送到所有其他 GPU,从而将通信步骤的数量减少到两个 GPU 间的同步,同时保持带宽效率。如果没有 NVSwitch,这将占用 N 倍的通信带宽。
TensorRT-LLM Multishot 将 AllReduce 分离为 ReduceScatter 操作,然后是 AllGather 操作(有关集合操作的更多详细说明,请参阅
此文档
)。
每个 GPU 仅负责累积结果张量的一部分。
第一步(或“shot”)涉及每个 GPU 将张量的不同切片发送到负责累积该张量切片的相应 GPU。
在本地累加后,每个 GPU 现在都有正确的和累加器,用于其独特的输出切片。
在第二步 (或“shot”) 中,每个 GPU 使用 NVSwitch 组播功能将结果切片广播到所有其他 GPU。这可最大限度地减少 NVSwitch 本身执行数据放大所需的每个 GPU 带宽;每个 GPU 一步发送 1/N 数据并接收完整的结果张量。
无论执行张量并行推理的 GPU 数量如何,整个操作仅需两次通信步骤。
TensorRT-LLM MultiShot 算法:GPU_N 发送切片、计算切片和、在单个组播运算中广播结果。
延迟:2 个通信步骤(与 GPU 数量无关)
流量:2 张量字节的 send/recv(与 GPU 数量无关)
为何如此重要
由于此算法只需要两个通信步骤,而不是 2N-2 (其中 N 表示 GPU 数量),因此 MultiShot 的速度几乎是 Ring AllReduce 的 3 倍。这种算法的优势在消息大小较小且并行度高的情况下尤为明显,而这正是需要最低延迟以获得出色的用户体验的场景。
这可用于降低最小延迟,或在给定延迟下提高吞吐量。在具有更激进的延迟阈值的场景中,这可能会导致 GPU 数量的超线性扩展。
图 1. 借助 TensorRT-LLM MultiShot,AllReduce 延迟最多可降低 3 倍。
实现最佳推理性能需要仔细的工作负载分析和对性能瓶颈的深入了解。通过内部工程工作以及与外部开发者和研究人员的密切合作,我们可以快速、频繁地优化平台的许多方面,为用户提供出色的性能。
随着我们继续识别和实施新的性能优化(一些可能是广泛的,另一些可能范围较窄),我们将定期提供有关这些优化的更新,提供技术动机和量化效益。 |
https://developer.nvidia.com/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/ | Accelerating Scientific Literature Reviews with NVIDIA NIM Microservices for LLMs | A well-crafted systematic review is often the initial step for researchers exploring a scientific field. For scientists new to this field, it provides a structured overview of the domain. For experts, it refines their understanding and sparks new ideas. In 2024 alone, 218,650 review articles were indexed in the
Web of Science
database, highlighting the importance of these resources in research.
Completing a systematic review significantly enhances a researcher’s knowledge base and their academic impact. However, traditional review writing requires collecting, reading, and summarizing large volumes of academic articles on a specific topic. Due to the time-consuming nature of this manual exercise, the scope of processed literature is often confined to dozens or a few hundred articles. Interdisciplinary content—frequently outside the researcher’s area of expertise—adds another layer of complexity.
These challenges make it increasingly difficult to create comprehensive, reliable, and impactful systematic reviews.
The advent of large language models (LLMs) offers a groundbreaking solution, enabling the rapid extraction and synthesis of information from extensive literature. Participating in the
Generative AI Codefest Australia
provided a unique opportunity to explore this idea with support from NVIDIA AI experts to leverage
NVIDIA NIM microservices
for accelerating literature reviews. This enabled the rapid testing and fine-tuning of several state-of-the-art LLMs for our literature analysis process.
Testing the potential of LLMs for processing papers
As a research group specializing in physiological ecology within the
ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF)
, we embarked on writing a review of the literature on the global responses of non-vascular plants, such as moss or lichen, to wind.
However, we quickly faced a challenge: many relevant articles on wind-plant interactions failed to explicitly mention these key words in their titles or abstracts, which are typically used as primary filters during literature screening. A comprehensive analysis of the topic required manually reading the full text of each article—a highly time-consuming process.
We decided to explore the potential of using LLMs to extract content specifically related to wind-plant interactions from the articles. To achieve this, we implemented a simple Q&A application based on the
LlaMa 3.1 8B Instruct NIM microservice
(Figure 1). This enabled us to get an initial prototype quickly.
This first prototype, processing the papers sequentially, was extremely useful to craft and optimize the prompts to extract key information from each article.
Figure 1. Initial simple paper processing pipeline using an LLM NIM microservice at its core
To validate the accuracy of extracted information, we initially manually validated the results. When no significant errors were found in the test dataset, we identified opportunities to further enhance the efficiency of key information extraction using LLMs (Figure 2). These include converting the papers from a PDF format to structured JSON; extracting images, tables, and charts; and using parallel processing to speed up the processing of papers.
Figure 2. Enhanced paper processing pipeline incorporating NV-ingest and parallel processing of papers
Enhancing the performance of LLMs for more efficient information extraction
By using NVIDIA NIM microservices for LLMs and
nv-ingest
, we deployed LLMs and a data ingestion pipeline in our local environment with eight NVIDIA A100 80-GB GPUs. We also fine-tuned the models using low-rank adaptation (LoRA) to improve the accuracy of information extraction from the papers.
We compiled a dataset of over 2K scientific articles related to the targeted research domain, sourced from the
Web of Science
and
Scopus
databases. Over a week during Generative AI Codefest, we focused on experimenting with various strategies to optimize the efficiency and accuracy of key information extraction from these articles.
Best-performing model
To determine the best-performing model, we tested a range of instruction-based and general-purpose LLMs from the NVIDIA API Catalog on a set of randomly selected articles. Each model was assessed for its accuracy and comprehensiveness in information extraction.
Ultimately, we determined that Llama-3.1-8B-instruct was the most suitable for our needs.
Processing speed
We developed a Q&A module using streamlit to answer user-defined research-specific questions.
To further improve processing speed, we implemented parallel processing of the prompts sent to the LLM engine and used KV-caching, which significantly accelerated the computation time by a factor of 6x when using 16 threads.
Extraction content types
We used nv-ingest to extract content from the original PDFs, including text, figures, tables, and charts, into structured JSON files. This enabled information extraction beyond text content, enabling a more comprehensive context for answering the questions.
Using JSON files instead of the original PDF files during inference also had a significant effect on lowering the processing time, by an additional factor of 4.25x.
Results
Thanks to these improvements, we significantly reduced the time required to extract information from our database of papers, with a total speedup of 25.25x compared to our initial implementation.
Processing the entirety of our database now takes less than 30 minutes using two A100 80-GB GPUs and 16 threads.
Compared to the traditional approach of manually reading and analyzing an entire article, which typically takes about one hour, this optimized workflow achieved a time savings of over 99% (Figure 3).
Figure 3. Required time for information extraction and categorization
In addition to information extraction, we also investigated automated article classification. By fine-tuning Llama-3.1-8b-Instruct with a LoRA adapter on a sample of papers manually annotated, we successfully automated the classification process, demonstrating its effectiveness in organizing complex datasets of scientific papers.
The results indicated that each article required only 2 seconds for classification, compared to the 300+ seconds required on average for a manual classification by an experienced reader (Figure 3).
Future directions
We’re currently refining our workflow to further accelerate the processing. We’re also improving our user interface to provide easy access to more locally deployed LLMs and enhance accessibility by other researchers (Figure 4).
We plan to implement the
NVIDIA AI Blueprint for multimodal PDF data extraction
to identify the most relevant articles for each research question and interact with those papers.
Beyond technical improvements, we are aiming to organize extracted key information for each question and generate visualizations (such as maps with location of the experiments mentioned in the papers) to accelerate further the writing of the systematic review.
Figure 4. Prototype interface undergoing in-house testing at SAEF
Summary
Our work at the Generative AI Codefest demonstrated the transformative potential of AI in accelerating systematic literature reviews. With NVIDIA NIM, we quickly moved from an idea to a working solution that significantly improves the process of information extraction from scientific papers.
This experience highlights how AI can streamline research workflows, enabling faster and more comprehensive insights. LLMs have the potential to facilitate interdisciplinary research, empowering scientists to explore complex, multi-domain research fields more effectively.
Moving forward, we aim to refine these methods and tools, ensuring that they are accessible and scalable for future research across diverse topics. | https://developer.nvidia.com/zh-cn/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/ | 借助适用于 LLM 的 NVIDIA NIM 微服务加速科学文献审查 | 精心设计的系统审查通常是研究人员探索科学领域的第一步。对于刚刚接触该领域的科学家,它提供了该领域的结构化概述。对于专家而言,它可以细化他们的理解并激发新的想法。在 2024 年,
Web of Science
数据库收录了 218,650 篇评测文章,凸显了这些资源在研究中的重要性。
完成系统审查可显著增强研究人员的知识库及其学术影响。然而,传统的评论文章需要收集、阅读和总结有关特定主题的大量学术文章。由于这项人工练习非常耗时,处理过的文献的范围通常局限于几十篇或几百篇文章。跨学科内容 (通常超出研究人员的专业领域) 又增加了一层复杂性。
这些挑战使得创建全面、可靠且有影响力的系统审查变得越来越困难。
大语言模型 (LLMs) 的出现提供了一种突破性的解决方案,能够从大量文献中快速提取和合成信息。参加
Generative AI Codefest Australia
提供了一个独特的机会,让我们在 NVIDIA AI 专家的支持下探索这一想法,利用
NVIDIA NIM
微服务加速文献审查。这为我们的文献分析流程快速测试和微调了多个先进的 LLMs。
测试 LLMs 在处理论文方面的潜力
作为 ARC 特殊研究计划
(ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF))
中的一个专门研究生理生态学的研究小组,我们着手撰写了一篇关于苔藓或地衣等非血管性植物对风的全球反应的文献回顾。
然而,我们很快就面临了一个挑战:许多关于风力电厂相互作用的相关文章都未能在标题或摘要中明确提及这些关键词,而在文献筛选过程中,这些关键词通常用作主要的过滤器。要对主题进行全面分析,需要手动阅读每篇文章的完整文本,这一过程非常耗时。
我们决定探索使用 LLMs 从文章中提取与风电场交互特别相关的内容的可能性。为此,我们
基于 LlaMa 3.1 8B Instruct NIM 微服务
实施了一个简单的问答应用(图 1)。这使我们能够快速获得初始原型。
第一个原型按顺序处理论文,在制作和优化提示以从每篇文章中提取关键信息方面非常有用。
图 1、以 LLM NIM 微服务为核心的初始简单论文处理流程
为了验证提取信息的准确性,我们首先手动验证结果。当测试数据集中未发现重大错误时,我们确定了使用 LLM 进一步提高关键信息提取效率的机会 (图 2)。其中包括将论文从 PDF 格式转换为结构化 JSON;提取图像、表格和图表;以及使用并行处理来加速论文处理速度。
图 2、增强型论文处理流程,整合 NV-ingest 和论文的并行处理
增强 LLMs 的性能,实现更高效的信息提取
通过将 NVIDIA NIM 微服务用于 LLM 和
nv-ingest
,我们在本地环境中部署了 LLM 和数据提取管道,并配备了 8 个 NVIDIA A100 80-GB GPU。我们还使用低等级适应性(LoRA)对模型进行了微调,以提高从论文中提取信息的准确性。
我们编译了一个数据集,其中包含与目标研究领域相关的 2000 多篇科学文章,这些文章均源自
Web of Science
和
Scopus
数据库。在 Generative AI Codefest 期间,我们在一周内专注于尝试各种策略,以优化从这些文章中提取关键信息的效率和准确性。
性能最佳的模型
为了确定性能最佳的模型,我们在一组随机选择的文章中测试了 NVIDIA API Catalog 中的一系列基于指令的和通用的 LLM。评估了每个模型在信息提取方面的准确性和全面性。
最终,我们确定 Llama-3.1-8B-instruct 最适合我们的需求。
处理速度
我们开发了一个问答模块,使用 Streamlit 回答用户定义的研究特定问题。
为进一步提高处理速度,我们对发送到 LLM 引擎的提示实施了并行处理,并使用了 KV 缓存,这使得使用 16 个线程时的计算时间显著缩短了 6 倍。
提取内容类型
我们使用 nv-ingest 将原始 PDF 中的内容 (包括文本、图形、表格和图表) 提取到结构化 JSON 文件中。这使得信息提取功能超越了文本内容,从而为回答问题提供更全面的环境。
在推理过程中使用 JSON 文件而非原始 PDF 文件也对将处理时间额外缩短 4.25 倍产生了重大影响。
成果
得益于这些改进,我们显著缩短了从论文数据库中提取信息所需的时间,与初始实施相比,总速度提高了 25.25 倍。
现在,使用两个 A100 80-GB GPU 和 16 个线程处理整个数据库只需不到 30 分钟。
与手动阅读和分析整篇文章的传统方法 (通常需要大约 1 小时) 相比,这种经过优化的工作流程可节省 99% 以上的时间 (图 3)。
图 3. 信息提取和分类所需的时间
除了信息提取,我们还研究了自动化文章分类。通过在手动标注的论文样本上使用 LoRA 适配器微调 Llama-3.1-8b-Instruct,我们成功实现了分类过程的自动化,证明了其在组织复杂的科学论文数据集方面的有效性。
结果表明,与经验丰富的读者手动分类平均所需的 300 多秒相比,每篇文章只需 2 秒即可完成分类 (图 3)。
未来方向
我们目前正在改进工作流程,以进一步加速处理。我们还在改进用户界面,以便轻松访问更多本地部署的 LLM,并增强其他研究人员的可访问性(图 4)。
我们计划实施用于多模态 PDF 数据提取的 NVIDIA AI Blueprint,以确定与每个研究问题最相关的文章,并与这些论文进行交互。
除了技术改进,我们的目标是组织提取的每个问题的关键信息,并生成可视化效果 (例如,包含论文中提到的实验位置的地图),以进一步加速系统审查的编写。
图 4、原型接口正在 SAEF 进行内部测试
总结
我们在生成式 AI Codefest 上的工作展示了 AI 在加速系统文献审查方面的变革潜力。借助 NVIDIA NIM,我们迅速从想法转变为可行的解决方案,从而显著改善从科学论文中提取信息的过程。
此体验凸显了 AI 如何简化研究工作流程,从而更快获得更全面的见解。LLMs 具有促进跨学科研究的潜力,可助力科学家更有效地探索复杂的多领域研究领域。
展望未来,我们的目标是改进这些方法和工具,确保其可访问性和可扩展性,以供未来跨各种主题的研究使用。 |
https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/ | Vision Language Model Prompt Engineering Guide for Image and Video Understanding | Vision language models
(VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the
generative AI
landscape by bringing visual understanding to
large language models
(LLMs) through the use of a
vision encoder
. These initial VLMs were limited in their abilities, only able to understand text and single image inputs.
Fast-forward a few years and VLMs are now capable of understanding multi-image and video inputs to perform advanced vision-language tasks such as visual question-answering (VQA), captioning, search, and summarization.
Figure 1. Suite of model customization tools in NVIDIA NeMo
VLM accuracy can be improved for specific use cases by tuning the prompts and model weights. Fine-tuning the model weights can be done with efficient techniques such as PEFT but still requires sufficient data and compute.
However,
prompt engineering
and in context learning can also be used to improve the output quality and is as simple as adjusting the text input to the model at runtime.
This post walks you through the evolution of VLMs and how to effectively prompt a VLM for single image, multi-image, and video understanding use cases. For more information about VLMs and visual AI agents, register for the upcoming
Vision for All: Unlocking Video Analytics with AI Agents
webinar.
Single-image understanding
From a single image, a VLM is capable of identifying, describing, classifying, and reasoning over what is in the image. Some VLMs have advanced capabilities such as providing bounding box coordinates for certain objects, and translating text in the image. This works best for use cases where an image needs to be classified, captioned or described in text.
Single-image capability can also be used to detect basic events occurring in a livestream. This can be accomplished by sampling one frame at a time from a livestream and providing it to the VLM along with a prompt describing what to look for. This works especially well when the types of events to detect are large, clearly visible, and slow-moving.
The VLM can also be prompted to output in a structured format such as JSON so that the response can easily be parsed and sent to a database or a notification service.
For example, we provided Llama-3.2-90b-vision-instruct with the following prompt and image (Figure 2).
Figure 2. Input image to the VLM for question and answering
Prompt:
Is there a fire truck? Is there a fire? Are there firefighters? Output the answer to each question in JSON format.
VLM response:
[ { "question": "Is there a fire truck?", "answer": "True" },
{ "question": "Is there a fire?", "answer": "True" },
{ "question": "Are there firefighters?", "answer": "True" }
]
For each of the questions, the VLM was able to accurately respond and output it in a structured format to be used for downstream tasks.
While this method enables the VLM to operate on video input by sampling one frame at a time, the VLM itself does not understand the full context of the video, limiting it to simple use cases. Sampling one frame at a time does not give it the temporal understanding needed to detect actions or trends that occur over time. This can be improved with multi-image and video VLMs.
Multi-image understanding
Many VLMs can accept multiple images as input allowing the model to compare, contrast, and learn from input images.
VLMs generally have good zero-shot capability across a wide range of use cases. However, when trying to obtain more precise answers from a VLM on domain-specific use cases, VLMs can struggle because they lack the full context required to accurately respond to the user prompts.
To better understand this, we used a VLM to estimate the stock level of products. This is an important retail use case as it could autonomously monitor and alert employees when products are running low in a store to avoid lost revenues from stock-outs.
We provided a single image of a table (Figure 3) that should be fully stocked with several snack products to
Cosmos Nemotron 34B
and asked it to estimate the stock level.
Figure 3. Single-image VLM input for retail stock level detection
Prompt:
Estimate the stock level of the snack table on a scale of 0–100.
VLM response:
The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd.
The model rates it as 80–90% stocked even though it’s quite clear to the human eye that the stock level is low. From the model’s perspective, it does not know what the table looked like beforehand when looking at a single image.
Using the Cosmos Nemotron 34B multi-image capability, we provided another image of the snack table at a 100% stock level (Figure 4), giving the model extra context to produce a more accurate estimate.
Figure 4. In-context learning input to VLM for retail stock level detection
Prompt:
First compare and contrast the stock level of the two images. Then generate an estimate for each image of the stock level on a scale of 0–100%.
VLM response:
The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%.
By providing the additional image as a reference, the VLM now estimates the stock level of the second image at 20–30%, which is significantly more accurate.
Multi-image input can also be used in more complex multimodal RAG pipelines where example images are dynamically added to the prompt based on the user’s input. Multi-image VLMs can also be used for basic video understanding by sampling a small set of frames from a video and providing it to the VLM.
However, many VLMs that accept multi-image input have limited context windows (10–20 frames) and may not have been trained on video data. To truly understand video input, we recommend using a VLM trained on video-specific datasets and a long context window.
Video understanding
The latest VLMs have full video understanding capabilities with large context lengths, frame compression and video training data. This enables VLMs to ingest longer videos and have improved temporal understanding: the ability to understand actions and trends over time.
The first example showed how to use a single frame from a livestream to detect a fire but that same method wouldn’t be able to tell you if the fire is growing or getting smaller because it lacks the temporal context.
A VLM with long context and video understanding processes many frames across time to understand if the fire is growing or not. Here are some examples from the Cosmos Nemotron model families.
Sequential visual understanding
VLMs with sequential understanding relies on processing frames in a sequence, enabling them to capture the progression of actions and events over time. Unlike multi-image models, which process a limited number of frames without explicit temporal connections, sequential video models incorporate temporal dynamics by analyzing frames in an orderly sequence.
This is helpful in situations where you want to capture the causality of events in a video, as it enables the model to track the progression of actions over time and establish cause-effect relationships.
The next few examples are based on the following video where a worker drops a box in a warehouse. We discuss a few prompts and responses.
Figure 5. Input warehouse video to the VLM for scene understanding
Prompt:
What happened in this video?
VLM response:
The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles.
The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment.
The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting.
Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility.
The model gave a detailed response with specifics about the environment such as aisle labels and lighting conditions but failed to correctly identify that the worker dropped a box. Now try to ask this question directly.
Prompt:
Did the worker drop any box?
VLM response:
Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it.
The VLM is now able to deduce that the worker might have dropped the box. This suggests that generic prompts might not capture all the nuances correctly unless you direct the VLM to look for something specific. Modify the prompt a little.
Prompt:
Elaborate on the visual and narrative elements of this warehouse video in detail. Highlight all anomalies or hazards if present.
VLM response:
The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor.
The model is now capturing all the important details of the scene along with important events, such as workers dropping a box. The detailed response is useful if you want to perform Q&A with the video later. VLMs with video understanding are also able to address directional questions, as they can understand actions over time.
Prompt:
From which direction did the worker enter the scene?
VLM response:
The worker entered the scene from the right side.
However, these models often lack precise temporal localization, which can limit their ability to pinpoint when specific events occur.
Prompt:
When did the worker drop the box?
VLM response:
The worker dropped the box in the second image.
Temporal localization
VLMs incorporating
Localization Interpretable Temporal Attention (LITA)
or similar temporal localization techniques elevate video understanding by explicitly learning when and where critical events occur. The models achieve this by introducing temporal attention mechanisms that focus on key segments of a video, aligning visual and textual modalities more effectively over time. This helps answer the “when” questions in the videos.
Prompt:
When did the worker drop the box?
VLM response:
0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground.
The optimal prompt format for a VLM depends on the model’s architecture and the nature of the caption pairs used during training. Different training datasets influence how a VLM interprets the prompt.
Conclusion
This post walked through how VLMs have evolved from supporting only single-image input to being capable of complex temporal reasoning on long video inputs. To get started with VLMs, visit
build.nvidia.com
and try out some of the prompts shown in this post. For technical questions, see the
Visual AI Agent forum
.
VLMs can be used to build various visual AI agents. Explore one of the sample vision AI workflows on the
/NVIDIA/metropolis-nim-workflows
GitHub repo with no GPU required. To build a video analytics AI agent, try the
AI blueprint for video search and summarization
.
For more information about VLMs and visual AI agents, register for the upcoming
Vision for All: Unlocking Video Analytics with AI Agents
webinar.
For more information about LLM prompting, see
An Introduction to Large Language Models: Prompt Engineering and P-Tuning
. | https://developer.nvidia.com/zh-cn/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/ | 图像和视频理解的视觉语言模型提示工程实践指南 | 视觉语言模型 (VLMs) 正在以极快的速度发展。2020 年,首批 VLMs 通过使用视觉编码器将视觉理解引入大语言模型 (LLMs) ,彻底改变了生成式 AI 格局。这些初始 VLMs 能力有限,只能理解文本和单张图像输入。
几年后,VLM 现在能够理解多图像和视频输入,以执行高级视觉语言任务,例如视觉问答 (VQA)、字幕、搜索和摘要。
图 1、NVIDIA NeMo 中的模型自定义工具套件
通过调整提示和模型权重,可以提高特定用例的 VLM 准确性。可以使用 PEFT 等高效技术微调模型权重,但仍需要足够的数据和计算。
但是,
prompt engineering
和上下文学习也可用于提高输出质量,并且与在运行时调整模型的文本输入一样简单。
本文将为您介绍 VLM 的演变,以及如何有效地提示 VLM 以用于了解单张图像、多张图像和视频的用例。如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的
Vision for All:Unlocking Video Analytics with AI Agents 在线研讨会
。
单图像理解
从单张图像来看,VLM 能够识别、描述、分类和推理图像中的内容。一些 VLM 具有高级功能,例如为某些对象提供边界框坐标,以及翻译图像中的文本。这最适合需要对图像进行分类、添加字幕或以文本进行描述的用例。
单图像功能还可用于检测直播中发生的基本事件。这可以通过以下方式实现:从直播中一次采样一帧,并将其提供给 VLM,同时提供描述要查找的内容的提示。当要检测的事件类型较大、清晰可见且移动缓慢时,此方法的效果尤为出色。
系统还可以提示 VLM 以 JSON 等结构化格式输出,以便轻松解析响应并将其发送至数据库或通知服务。
例如,我们为 Llama-3.2-90b-vision-instruct 提供了以下提示和图像(图 2)。
图 2、将图像输入 VLM 以进行问答
提示:是否有消防车?是否有火灾?有消防人员吗?输出每个问题的答案,以 JSON 格式。
VLM 响应:
[ { "question": "Is there a fire truck?", "answer": "True" },
{ "question": "Is there a fire?", "answer": "True" },
{ "question": "Are there firefighters?", "answer": "True" }
]
对于每个问题,VLM 能够准确回答并以结构化格式输出,以便用于下游任务。
虽然此方法允许 VLM 通过一次采样一帧来操作视频输入,但 VLM 本身无法理解视频的完整上下文,将其局限于简单的用例。一次对一帧进行采样无法为其提供检测随时间变化的动作或趋势所需的时间理解。这可以通过多图像和视频 VLM 得到改善。
多图像理解
许多 VLM 可以接受多张图像作为输入,允许模型比较、对比并从输入图像中学习。
VLM 通常在各种用例中都具有良好的零样本击能力。但是,在尝试从 VLM 获取有关特定领域用例的更精确答案时,VLM 可能会遇到困难,因为它们缺乏准确响应用户提示所需的完整上下文。
为了更好地理解这一点,我们使用 VLM 来估计产品的库存水平。这是一个重要的零售用例,因为它可以在商店中的产品运行不足时自动监控和提醒员工,以避免因缺货而损失的收入。
我们向
Cosmos Nemotron 34B
提供了一张表格 (图 3) 的单一图像,该表格应与多种零食产品一起充分库存,并要求其估算库存水平。
图 3. 用于零售库存水平检测的单图像 VLM 输入
提示:按 0 – 100 的比例估算零食表的库存量。
VLM 响应:
The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd.
该模型将其评为 80–90% 的库存,即使肉眼很清楚库存水平很低。从模型的角度来看,它不知道在查看单个图像之前表格的样子。
通过使用 Cosmos Nemotron 34B 多图像功能,我们提供了另一张 100%库存水平的零食表图像 (图 4),为模型提供了额外的背景信息,以便生成更准确的估计。
图 4. 为 VLM 提供情境学习输入,用于零售库存水平检测
提示:首先比较两张图像的库存水平。然后,按 0 – 100% 的比例对库存量的每张图像生成估计值。
VLM 响应:
The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%.
现在,通过提供额外的图像作为参考,VLM 可将第二张图像的库存量估算为 20–30%,准确度明显更高。
多图像输入还可用于更复杂的多模态 RAG 工作流,其中会根据用户的输入将示例图像动态添加到提示中。多图像 VLM 还可以通过从视频中采样一小部分帧并将其提供给 VLM 来用于基本的视频理解。
但是,许多接受多图像输入的 VLM 的上下文窗口有限(10–20 帧),并且可能未基于视频数据进行训练。为了真正理解视频输入,我们建议使用基于视频特定数据集训练的 VLM 和长上下文窗口。
视频理解
最新的 VLM 具有完整的视频理解功能,具有大上下文长度、帧压缩和视频训练数据。这使得 VLM 能够提取更长的视频,并提高时间理解能力:能够理解一段时间内的行动和趋势。
第一个示例展示了如何使用直播中的单帧来检测火灾,但由于缺乏时间背景,相同的方法无法告诉您火灾是在增大还是变小。
具有长上下文和视频理解的 VLM 会随着时间的推移处理多个帧,以了解火势是否在增长。以下是 Cosmos Nemotron 模型系列的一些示例。
顺序视觉理解
具有顺序理解的 VLMs 依赖于按序列处理帧,使其能够捕捉动作和事件随时间推移的进展。与多图像模型相比,它们在处理有限数量的帧时没有显式时间连接,而顺序视频模型则通过按顺序分析帧来整合时间动力学。
这在您想要在视频中捕捉事件因果关系的情况下非常有用,因为它使模型能够跟踪动作在一段时间内的进展情况,并建立因果关系。
接下来的几个示例基于以下视频,展示了一名工人在仓库中丢弃箱子的过程。我们将讨论一些提示和响应。
图 5、将仓库视频输入到 VLM 以进行场景理解
提示:本视频中发生了什么?
VLM 响应:
The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles.
The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment.
The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting.
Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility.
该模型提供了详细的环境细节 (例如通道标签和照明条件),但无法正确识别工人掉落的箱子。现在,尝试直接问这个问题。Prompt:工作人员是否丢弃了任何箱子?
VLM 响应:
Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it.
现在,VLM 可以推断出工作者可能丢弃了箱子。这表明,除非您指示 VLM 查找特定内容,否则通用提示可能无法正确捕捉所有细微差别。稍微修改一下提示。
提示:详细介绍此仓库视频的视觉和叙事元素。突出显示所有异常或危险(如果存在)。
VLM 响应:
The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor.
现在,该模型可以捕捉场景的所有重要细节以及重要事件,例如工人丢箱。如果您想稍后在视频中执行 Q&A,详细的回答非常有用。具有视频理解功能的 VLM 还能够解决定向问题,因为它们可以理解随时间推移的动作。
提示:工作者是从哪个方向进入场景的?
VLM 响应:
The worker entered the scene from the right side.
但是,这些模型通常缺乏精确的时间定位,这可能会限制其确定特定事件何时发生的能力。
提示:工人是什么时候丢弃盒子的?
VLM 响应:
The worker dropped the box in the second image.
时间定位
结合
Localization Interpretable Temporal Attention (LITA)
或类似时间定位技术的 VLM 通过显式学习关键事件发生的时间和地点来提高视频理解能力。这些模型通过引入专注于视频关键片段的时间注意力机制,随着时间的推移更有效地调整视觉和文本模式来实现这一目标。这有助于回答视频中的“何时”问题。
提示:
工人是什么时候丢弃盒子的?
VLM 响应:
0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground.
VLM 的最佳提示格式取决于模型的架构和训练期间使用的描述对的性质。不同的训练数据集会影响 VLM 解释提示的方式。
结束语
本文介绍了 VLM 如何从仅支持单图像输入发展为能够对长视频输入进行复杂的时间推理。要开始使用 VLM,请访问
build.nvidia.com
并尝试本博文中显示的一些提示。有关技术问题,请参阅
Visual AI Agent 论坛
。
VLM 可用于构建各种视觉 AI 智能体。无需 GPU,即可在
/NVIDIA/metropolis-nim-workflows
GitHub 存储库上探索视觉 AI 工作流示例之一。要构建视频分析 AI 智能体,请试用
NVIDIA AI Blueprint
中的视频搜索和摘要蓝图。
如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的 Vision for All:Unlocking Video Analytics with AI Agents 网络研讨会。
有关 LLM 提示的更多信息,请参阅《
An Introduction to Large Language Models: Prompt Engineering and P-Tuning
》。 |
https://developer.nvidia.com/blog/configurable-graph-based-task-solving-with-the-marco-multi-ai-agent-framework-for-chip-design/ | Configurable Graph-Based Task Solving with the Marco Multi-AI Agent Framework for Chip Design | Chip and hardware design presents numerous challenges stemming from its complexity and advancing technologies. These challenges result in longer turn-around time (TAT) for optimizing performance, power, area, and cost (PPAC) during synthesis, verification, physical design, and reliability loops.
Large language models (LLMs) have shown a remarkable capacity to comprehend and generate natural language at a massive scale, leading to many potential applications and benefits across various domains. Successful LLM-based AI agents for hardware design can drastically reduce TAT, leading to faster product cycles, lower costs, improved design reliability, and reduced risk of costly errors.
Figure 1. Marco framework overview
Marco: Configurable Graph-Based Task Solving and Multi-AI Agents Framework
We introduce the proposed
Marco framework
, which encompasses graph-based task solving, agent configurations for sub-tasks, and skill/tool configurations for each AI agent in real time.
Figure 1 showed dynamic and static configurable graph-based task solving, which is flexibly integrated with chip-design knowledge (for example, circuits, timing, and so on).
In the task graph, each node represents a sub-task, and each edge represents the execution or knowledge relationship between nodes. For solving each sub-task, you use
Autogen
to configure single-AI or multi-AI agent with a knowledge database, tools, and memory.
Table 1 summarizes the task graph, agent, and LLM configurations of the Marco framework for various agents.
VerilogCoder
and the MCMM timing analysis agent use a dynamic task graph to complete the specification-to-RTL and extract key takeaways of timing reports, respectively. The timing path debug agent finds the problematic net, wire, and constraints through a static timing debugging task graph (Figure 1).
Agent works
Task category
Marco framework configuration
Task graph
Sub-task agent config.
Customized tools
RTLFixer
Code Syntax Fixing
N/A
Single-AI
RTL Syntax Error RAG Database
Standard Cell Layout Opt.
Optimization
N/A
Single-AI
Cluster Evaluator, Netlist Traverse Tool
MCMM Timing Analysis(Partition/Block-Level)
Summary & Anomaly Identification
Dynamic
Multi-AI
Timing Distribution Calculator, Timing Metric Comparator
DRC Coder
Code Generation
N/A
Multi-Modality & Multi-AI
Foundry Rule Analysis, Layout DRV Analysis, DRC Code Evaluation
Timing Path Debug(Path-Level)
Summary & Anomaly Identification
Static
Hierarchical Multi-AI
Agentic Timing Report Retrieval
VerilogCoder
Code Generation
Dynamic
Multi-AI
TCRG Retrieval Tool, AST-Based Waveform Tracing Tool
Table 1. Task graph, agent configuration, customized tool of the Marco framework for various autonomous agent implementations for hardware design tasks
In Table 1,
RTLFixer
and
Standard Cell Layout Optimization
agents (that is, a single-AI configuration) are supported using the Marco framework. The rest of the multi-AI agents are implemented on the Marco framework.
For
RTLFixer
,
LLM agent for standard cell layout optimization
, and
DRC Coder
, we used single-AI or multi-AI agent configurations with customized tools, memory, and domain knowledge.
Automated hardware description languages code generation
One key area where autonomous agents are making an impact is in the generation of hardware description languages (HDLs), such as Verilog. Due to the growing complexity of VLSI design, writing Verilog and VHDL is time-consuming and prone to bugs, necessitating multiple iterations for debugging functional correctness. Consequently, reducing design costs and designer effort for completing hardware specifications has emerged as a critical need.
LLMs can be used to generate Verilog code from natural language descriptions. However, LLMs often struggle to produce code that is both syntactically and functionally correct.
Syntax correctness
RTLFixer
uses a combination of retrieval-augmented generation (RAG) and ReAct prompting to enable LLMs to iteratively debug and fix syntax errors. RAG incorporates a database of human expert guidance to provide context for error correction. ReAct enables the LLM to reason about the error, plan a fix, and act on the plan.
Functional correctness
VerilogCoder
is a multi-agent system that incorporates a task planner and an abstract syntax tree (AST)-based waveform-tracing tool to generate and debug Verilog code. It employs a task and circuit relation graph (TCRG) to break down a task into manageable sub-tasks and link signal transitions to each step (Figure 2).
Figure 2. Task-driven circuit relation graph retrieval agent reasoning and interacting with the developed TCRG retrieval tool to enrich the task with the relevant circuit and signal descriptions
An AST-based waveform tracing tool assists the LLM agent in identifying and fixing functional errors by back-tracing signal waveforms. VerilogCoder achieves a 94.2% success rate on the VerilogEval-Human v2 benchmark, demonstrating a significant improvement over previous methods.
Video 1. Autonomously Complete Verilog Code with TCRG Planning and AST-Based Waveform Tracing Tools
In Video 1, the demonstration of VerilogCoder autonomously completes functionally correct Verilog code using TCRG planning and AST-based waveform tracing tools.
Automated DRC layout code generation
DRC-Coder
uses multiple autonomous agents with vision capabilities and specialized DRC and Layout DRV analysis tools to generate DRC code. The system interprets design rules from textual descriptions, visual illustrations, and layout representations. The multiple LLM agents include a planner that interprets design rules, and a programmer that translates the rules into executable code.
DRC-Coder incorporates an auto-debugging process, which uses feedback from the code evaluation to refine the generated code.
Video 2. A Demonstration of DRC-Coder in Chip Design
In Video 2, the demonstration of DRC-Coder generates DRC code that achieves perfect F1 scores on hundreds of testing layouts by leveraging a layout analysis tool, an auto-debugging process, and the capabilities of multi-modality and multi-AI agents.
DRC-Coder achieved a perfect F1 score of 1.000 in generating DRC codes for a sub-3nm technology node, outperforming standard prompting techniques. The proposed automated agentic approach significantly reduces the time required for DRC code generation, from weeks to an average of four minutes per design rule.
Standard cell layout optimization
LLM agent for standard cell layout optimization
proposes using the natural language and reasoning ability of an LLM to generate high-quality cluster constraints incrementally to optimize the cell layout PPA and debug the routability with ReAct prompting.
The system uses net information and cell layout analysis to group MOSFET devices into clusters. The AI agent not only achieves up to 19.4% smaller cell area, but also generates 23.5% more LVS and DRC clean cell layouts than the
Transformer-based device clustering approach
on a set of sequential cells in the industrial 2
nm
technology node.
Multi-corner multi-mode timing report debug and analysis
The multi-corner multi-mode (MCMM) timing analysis agent uses dynamic task graphs to complete the specification-to-RTL and extract key takeaways of timing reports, respectively.
The MCMM timing analysis agent achieves an average score of 8.33 out of 10, based on evaluations by experienced engineers on a set of industrial cases, and delivers approximately 60x speedups compared to human engineers (Figure 3).
Figure 3. MCMM timing analysis agent results
The timing path debug agent finds the problematic net, wire, and constraints through the static timing debugging task graph (Figure 1).
In Table 2, the timing path debug agent resolves 86% of path-level debugging tasks, whereas the standard task solving approach fails to resolve any of the tasks.
Multi Report Task Description
Required Analyzed
Sub-Tasks
Standard Task Solving
Timing Path Debug Agent
Find missing clk signals that have no rise/fall information
max, clk
X
V
Identify pairs of nets with high RC mismatch
max, wire
X
V
Detect unusual constraints between victim and its aggressors
max, xtalk, LC
X
V
Identify unusual RC values between victim and its aggressors
max, wire, xtalk, LC
X
V
Find the constraints of slowest stages with highest RC values
max, wire, xtalk, LC
X
V
Compare each timing table for number of stages, point values and timing mismatch
max
X
X
Task M2 and Task M3 for specific stages in list of paths
max, wire, xtalk, LC
X
V
Avg Pass-rate
0%
86%
Table 2. Pass-rate (%) of timing path debug agent with static task graph solving, and a naïve standard task solving without task graph information
X=Failed to solve the task. V=Solved the task successfully.
Conclusion
The proposed Marco framework enables more flexible and domain-specialized methods for real-time hardware design tasks solving. By using task graph and flexible single-AI and multi-AI agent configurations with domain-specific tools and knowledge, we developed various agents for tasks such as cell layout optimization, Verilog syntax error fixing, Verilog and DRC code generation, and timing debugging on problematic blocks, nets, and wires.
The experimental results show impressive performance and efficiency benefits on utilizing collaborative LLM-based agents for chip design.
The future directions for agent research on hardware design include the following:
Training LLMs with high-quality hardware design data
Improving LLM-based agents’ ability for hardware signal and waveform debugging
Incorporating PPA metrics into the design flow
Developing more efficient self-learning techniques and memory systems for LLM agents for solving more complex hardware tasks
For more papers and projects on electronic design automation, see the
NVIDIA Design Automation Research Group
page.
For those interested in the technologies highlighted in the post, here’s a list of relevant papers:
RTLFixer: Automatically Fixing RTL Syntax Errors with Large Language Models
/NVlabs/RTLFixer
GitHub repo
VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree (AST)-based Waveform Tracing Tool
/NVlabs/VerilogCoder
GitHub repo
DRC-Coder: Automated DRC Checker Code Generation using LLM Autonomous Agent
Large Language Model (LLM) for Standard Cell Layout Design Optimization | https://developer.nvidia.com/zh-cn/blog/configurable-graph-based-task-solving-with-the-marco-multi-ai-agent-framework-for-chip-design/ | 基于 Marco 多 AI 智能体框架的可配置图形任务求解在芯片设计中的应用 | 芯片和硬件设计因其复杂性和先进的技术而面临众多挑战。这些挑战导致在合成、验证、物理设计和可靠性循环期间优化性能、功耗、面积和成本(PPAC)的周转时间(TAT)更长。
大语言模型(LLMs)已显示出大规模理解和生成自然语言的出色能力,从而为各种领域带来许多潜在应用和优势。成功地将基于 LLMs 的 AI 智能体用于硬件设计,可以大幅减少 TAT,从而缩短产品周期、降低成本、提高设计可靠性,并降低代价高昂的错误风险。
图 1. Marco 框架概述
Marco:基于图形的可配置任务求解和多 AI 代理框架
我们介绍了拟议的 Marco 框架,其中包括基于图形的任务求解、子任务的智能体配置,以及每个 AI 智能体的实时技能/工具配置。
图 1 展示了基于动态和静态可配置图形的任务求解,可灵活集成芯片设计知识 (例如电路、定时等)。
在任务图中,每个节点代表一个子任务,每个边缘代表节点之间的执行或知识关系。为了解决每个子任务,您可以使用
Autogen
配置具有知识数据库、工具和内存的单个 AI 或多个 AI 代理。
表 1 总结了各种代理的 Marco 框架的任务图形、代理和 LLM 配置。
VerilogCoder
和 MCMM 时序分析代理使用动态任务图形分别完成对 RTL 的规范和提取时序报告的关键要点。定时路径调试代理通过静态定时调试任务图 (图 1) 查找有问题的网络、线和约束。
代理作品
任务类别
Marco 框架配置
任务图
子任务代理配置。
定制工具
RTLFixer
代码语法固定
不适用
单 AI
RTL 语法错误 RAG 数据库
Standard Cell Layout Opt.
优化
不适用
单 AI
Cluster Evaluator, Netlist Traverse Tool
MCMM 定时分析 (Partition/Block-Level)
总结和异常识别
动态
多 AI
Timing Distribution Calculator,Timing Metric Comparator
DRC Coder
代码生成
不适用
多模态和多 AI
Foundry Rule Analysis,Foundry 规则分析,Layout DRV Analysis,布局 DRV 分析,DRC Code Evaluation,DRC 代码评估
Timing Path Debug(Path-Level)
总结和异常识别
静态
分层多 AI
代理定时报告检索
VerilogCoder
代码生成
动态
多 AI
TCRG 检索工具,AST-Based Waveform Tracing Tool
表 1、任务图、代理配置、Marco 框架的定制工具,用于硬件设计任务的各种自主代理实现
在表 1 中,使用 Marco 框架支持
RTLFixer
和标准单元布局优化代理 (即单 AI 配置)。其余的多 AI 代理均在 Marco 框架上实施。
对于 RTLFixer、用于标准单元布局优化的 LLM 代理和 DRC Coder,我们使用了具有定制工具、内存和域知识的单 AI 或多 AI 代理配置。
自动硬件描述语言代码生成
自主智能体产生影响的一个关键领域是硬件描述语言(HDL)的生成,例如 Verilog。由于 VLSI 设计的复杂性与日俱增,编写 Verilog 和 VHDL 非常耗时且容易出现错误,因此需要进行多次迭代来调试功能正确性。因此,降低设计成本和设计师完成硬件规格的工作量已成为一项迫切需求。
LLM 可用于根据自然语言描述生成 Verilog 代码。但是,LLM 通常难以生成语法和功能上都正确的代码。
语法正确性
RTLFixer
结合使用检索增强生成 (RAG) 和 ReAct 提示,使 LLM 能够迭代地调试和修复语法错误。RAG 整合了人类专家指导数据库,为纠错提供上下文。ReAct 启用 LLM 进行错误推理、计划修复并采取行动。
功能正确性
VerilogCoder
是一个多代理系统,包含任务规划器和基于抽象语法树(AST)-based 波形追踪工具,用于生成和调试 Verilog 代码。它使用任务和电路关系图(TCRG)将任务分解为可管理的子任务,并将信号转换链接到每个步骤(图 2)。
图 2、任务驱动的电路关系图检索代理推理,并与开发的 TCRG 检索工具交互,通过相关的电路和信号描述丰富任务
基于 AST 的波形追踪工具通过回溯信号波形来协助 LLM 代理识别和修复功能错误。与之前的方法相比,VerilogCoder 在 VerilogEval-Human v2 基准测试中的成功率为 94.2%。
视频1. 使用 TCRG 规划和基于 AST 的波形追踪工具自动完成 Verilog 代码
在视频 1 中,VerilogCoder 演示使用 TCRG 规划和基于 AST 的波形追踪工具自动完成功能正确的 Verilog 代码。
自动生成 DRC 布局代码
DRC-Coder 使用多个具有视觉功能的自主代理以及专门的 DRC 和布局 DRV 分析工具来生成 DRC 代码。该系统从文本描述、视觉插图和布局表示中解释设计规则。多个 LLM 代理包括一个解释设计规则的规划器和一个将规则转换为可执行代码的程序员。
DRC-Coder 包含自动调试流程,该流程利用代码评估的反馈来优化生成的代码。
视频 2、用于芯片设计的 DRC-Coder 演示
在视频 2 中,DRC-Coder 演示会生成 DRC 代码,通过利用布局分析工具、自动调试流程以及多模态和多 AI 代理的功能,在数百个测试布局中实现完美的 F1 分数。
DRC-Coder 在为低于 3nm 的技术节点生成 DRC 代码时,达到了完美的 F1 评分 (达到 1.000 分),优于标准提示技术。拟议的自动化代理方法可显著缩短 DRC 代码生成所需的时间,即每个设计规则从数周缩短到平均 4 分钟。
标准单元布局优化
用于标准单元布局优化的 LLM 智能体建议使用 LLM 的自然语言和推理能力逐步生成高质量集群约束,以优化单元布局 PPA,并使用 ReAct 提示调试可路由性。
该系统使用网络信息和单元布局分析将 MOSFET 设备分组到集群中。与在工业
2nm
技术节点中的一组顺序单元上采用基于 Transformer 的设备集群方法相比,AI 智能体不仅可将单元面积减少多达 19.4%,而且还可生成多 23.5% 的 LVS 和 DRC 清洁单元布局。
多角度多模式时序报告调试与分析
多角多模 (MCMM) 时序分析代理使用动态任务图形分别完成从规范到 RTL 的规范,并提取时序报告的关键要点。
根据经验丰富的工程师对一组工业案例的评估,MCMM 时序分析代理的平均分为 8.33 分 (满分 10 分),与人类工程师相比,速度提高了约 60 倍 (图 3)。
图 3、MCMM 时序分析代理结果
定时路径调试代理通过静态定时调试任务图 (图 1) 查找有问题的 net、wire 和约束。
在表 2 中,定时路径调试代理可解析 86% 的路径级调试任务,而标准任务求解方法无法解析任何任务。
多报告任务说明
所需的分析结果
子任务
标准任务求解
定时路径调试代理
查找没有上升/下降信息的缺失 clk 信号
max、clk
X
V
识别 nets 对的 RC 不匹配率高
Max、Wire
X
V
检测受害者与侵犯者之间的异常约束
Max、xtalk、LC
X
V
识别牺牲者与侵犯者之间的异常 RC 值
Max、wire、xtalk、LC
X
V
查找具有最高 RC 值的最慢阶段的限制
Max、wire、xtalk、LC
X
V
比较每个 timing table,了解阶段数、点值和 timing mismatch
最大值
X
X
任务 M2 和任务 M3,用于路径列表中的特定阶段
Max、wire、xtalk、LC
X
V
平均通过率
0%
86%
表 2、具有静态任务图求解功能的定时路径调试代理的通过率 (%),以及无任务图信息的朴素标准任务求解
X = 无法解决任务
。V = 成功完成任务。
结束语
拟议的 Marco 框架为实时硬件设计任务求解提供了更灵活、更领域更专业的方法。通过使用任务图形、灵活的单 AI 和多 AI 代理配置以及特定领域的工具和知识,我们为单元布局优化、Verilog 语法错误修复、Verilog 和 DRC 代码生成以及问题块、网络和线缆的定时调试等任务开发了各种代理。
实验结果表明,在芯片设计中使用基于 LLM 的协作式智能体可获得惊人的性能和效率优势。
硬件设计智能体研究的未来方向包括:
使用高质量的硬件设计数据训练 LLMs
提升基于 LLM 的智能体的硬件信号和波形调试能力
将 PPA 指标纳入设计流程
为 LLM 代理开发更高效的自学习技术和内存系统,以解决更复杂的硬件任务
有关电子设计自动化的更多论文和项目,请参阅
NVIDIA Design Automation Research Group
页面。
如果您对本文中重点介绍的技术感兴趣,以下是相关论文列表:
RTLFixer:使用大语言模型自动修复 RTL 语法错误
/NVlabs/RTLFixer
GitHub repo
VerilogCoder:采用基于图形的规划和基于抽象语法树(AST)的波形追踪工具的自主 Verilog 编码代理
/NVlabs/VerilogCoder
GitHub repo
DRC-Coder:使用 LLM 自主代理自动生成 DRC 检查器代码
用于优化标准单元布局设计的大语言模型(LLM) |
https://developer.nvidia.com/blog/defining-llm-red-teaming/ | Defining LLM Red Teaming | There is an activity where people provide inputs to generative AI technologies, such as large language models (LLMs), to see if the outputs can be made to deviate from acceptable standards. This use of LLMs began in 2023 and has rapidly evolved to become a common industry practice and a cornerstone of
trustworthy AI
. How can we standardize and define LLM red teaming?
Researchers affiliated with NVIDIA, the University of Washington, the Center for Human-Compatible AI, and the IT University of Copenhagen conducted a study on red teaming in practice,
Summon a demon and bind it: A grounded theory of LLM red teaming
(published in PLOS One).
Because the goal was to define and understand a relatively novel activity, the study uses a grounded theory approach, taking as evidence interviews from dozens of practitioners in thousands of minutes of video recordings. We spoke with security professionals, policy researchers, and scientists, as well as people who conducted this work non-professionally, such as academic researchers, hobbyists, and even artists, to understand the common themes, motivations, and strategies in attacking LLMs.
What defines LLM red teaming in practice?
LLM red teaming has the following defining characteristics:
It’s limit-seeking:
Red teamers find boundaries and explore limits in system behavior.
It’s never malicious:
People doing red teaming are not interested in doing harm—in fact, quite the opposite.
It’s manual:
Being a creative and playful practice, the parts of red teaming that can be automated are often most useful to give human red teamers insight for their work.
It’s a team effort:
Practitioners find inspiration in each others’ techniques and prompts, and the norm is to respect fellow practitioners’ work.
It’s approached with an alchemist mindset:
We found that red teamers tend to abandon rationalizations about models and their behavior and instead embrace the chaotic and unknown nature of the work.
These characteristics feed into NVIDIA’s definition of LLM red teaming, which we discuss later in this post.
In industry, there’s often a rough division between cybersecurity red teaming and content red teaming:
Cybersecurity red teaming
tends to be scoped to the technology stack leading up to the point of inference output, and technologies used to transmit and render this output.
Content red teaming
, on the other hand, is scoped to the content produced at model inference time.
Why do people red team LLMs?
People who attack LLMs have a broad range of motivations.
Some of these are external. It may be part of their job or a regulatory requirement. Social systems can also play a role, with people discovering LLM vulnerabilities for social media content or to participate in a closed group. Others are intrinsic, as many people do it for fun, out of curiosity, or based on concerns for model behavior.
At NVIDIA, we red team LLMs as part of our Trustworthy AI process to assess risks before release. If the model doesn’t perform up to expectations, we postpone the release until a given model is updated and revised to perform better.
How do people approach this activity?
LLM red teaming consists of using strategies to reach goals when conversationally attacking the target. Each kind of strategy is decomposed into different techniques. A technique might just affect two or three adversarial inputs against the targets, or an input might draw upon multiple techniques.
We identified the following overall types of red team strategies:
Language:
Modulating the surface form of words rather than semantics, such as using an encoding scheme.
Rhetorical:
Relying on argumentation, or manipulation.
Possible worlds:
Trying to shift the context of the interaction.
Fictionalizing:
Shifting the basis of operation to a fictional world or set of conditions.
Stratagems:
Using meta-strategies that affect how one interacts with the LLM at a higher level.
For more information, see
Summon a demon and bind it: A grounded theory of LLM red teaming
, which lists and describes 35 techniques over twelve different strategies.
What can LLM red teaming reveal?
The goal of LLM red teaming isn’t to quantify security. Rather, the focus is on exploration, and finding which phenomena and behaviors a red teamer can get out of the LLM. Put another way, if we get a failure just one time, then the failure is possible.
Another thing that distinguishes red teaming from benchmarks is the focus on novelty.
For both cybersecurity and content-based red teaming, the possible range of attacks is infinite. For cybersecurity, that’s because new attack methods are constantly in development. For content, it’s because the mode of interaction is through text, which can be infinitely rearranged and augmented.
So, repeatability is not interesting when discovering new security weaknesses and vulnerabilities. While it makes sense to test any model for failures using a battery of existing prompts, as a benchmark does, this can never indicate security. It just reveals weaknesses.
It’s possible to patch deployed models to be secure against prompts in public benchmarks, or to use this data during alignment, so getting full marks on a benchmark doesn’t show that a model is secure. However, getting low marks on a security benchmark does still indicate the presence of weaknesses.
In the security context, to test a model rigorously, you should go beyond public knowledge and interact closely with the model, trying to find novel ways to breach a particular LLM.
In this sense, LLM red teaming is a classic instance of an artisanal activity. Red teamers use their human expertise and intuition while interacting with the target. For example, they might sense that a model is close to giving a mitigation message (for example, “As an AI, I cannot….”), and they might respond to this by backing off their current line of requests, or by starting a fresh chat session with a slightly different tack.
Or, a red teamer might sense that a model is close to yielding and so keep pushing and slightly varying their request until they find a way through and get the model to fail in the target way. They add what worked and what didn’t to their conscious expertise and unconscious intuition and then share it with other red teamers.
This makes red teaming a distinctly human activity that complements security benchmarking.
How do people use knowledge that comes from LLM red teaming?
Red teamers are often looking for what they describe as
harms
that might be presented by an LLM. There’s a broad range of definitions of harm.
A red teaming exercise could focus on one of many goals or targets, which could depend on deployment context, user base, data handled, or other factors. Red teamers may also pay attention to the level of complexity required to get a “break.” A harm discovered after a single, one-sentence interaction with an LLM often suggests greater concern than a harm surfaced following complex, multi-turn manipulation.
Sometimes, the goal of red teaming is curiosity, a byproduct of which might be content for the red teamer to share, in their organization or publicly. This both builds the expertise and intuition of the individual and raises the community level of knowledge. It’s common for traditional cybersecurity knowledge to be shared informally on social media, and this applies also for LLM security.
At NVIDIA, human red teaming is a critical part of how we make model-release decisions. We have a skilled LLM red team and a skilled AI red team, who both check our models and infrastructure for vulnerabilities and behaviors that were not caught elsewhere. This helps us in three ways:
It enables us to make informed decisions about whether we will release models
It builds a pool of high-level skill at the frontier of LLM red teaming
It gives us the confidence that we’re making the best effort and getting good results with our AI security.
The results from red teaming go into NVIDIA’s enhanced model documentation format,
Model Card++
.
Some parts of LLM security can be tested automatically. After an exploit has been found in one case, this can be stored and used to test other LLMs, so that we don’t make the same mistake again. We do exactly this in
NVIDIA garak
(Generative AI Red-Teaming and Assessment Kit). Developers ready to test the security of their LLM deployments can run the open-source NVIDIA garak against almost any model and get a report indicating susceptibility to over 120 different categories of vulnerability.
Knowledge about how to break a model can be risky in the wrong hands. When an exploit has been found, the best thing to do is contact the model owner and give them a chance to respond and fix the weakness. This process is called
co-ordinated vulnerability disclosure
, and is also a common practice for LLM vulnerabilities.
NVIDIA’s definition of LLM red teaming
We see LLM red teaming as an instance of AI red teaming. Our definition is developed by the
NVIDIA AI Red Team
and takes inspiration from both this research on LLM red teaming in practice and also the definition used by the Association for Computational Linguistics’ SIG on NLP Security (
SIGSEC
).
Take care to specify the specific subdomain of red teaming, as different audiences often make different assumptions about which form is being referred to.
LLM red teaming
: Systematically testing AI models and systems containing AI models to identify vulnerabilities and behaviors that pose threats or risks to the systems running or using those models.
It can be subdivided into two areas: security red teaming and content-based red teaming.
Security red teaming
Assessing the robustness of the model and the system containing the model to attacks impacting traditional security properties (for example, confidentiality, integrity, and availability), either of the model itself or the system containing the model.
Attacks such as adversarial inputs, model extraction, training data inference and extraction, or prompt injection as used to violate a security property are typically evaluated by a security red team.
These activities typically require teams with a traditional security background to leverage findings and evaluate their impact.
Content-based red teaming
Assessing the model for unwanted behavior under adversarial manipulation, producing outputs that violate some pre-specified behavior contract for the model, either explicit (for example, a model card) or implicit.
These behaviors may include outputs that are offensive, unwanted, or unsafe, including biased or bigoted productions, instructions on unsafe or illegal activities, making promises on behalf of the model owner, or making decisions based on protected characteristics. Common techniques involve various forms of jailbreaking and guardrail evasion.
These activities typically require the support of an ethics team, a legal team, or other similar domain experts to assess the impact of findings.
Improving LLM security and safety
NVIDIA NeMo Guardrails
is a scalable platform for defining, orchestrating, and enforcing AI guardrails for content safety, jailbreak prevention, and more in AI agents and other generative AI applications.
NeMo Guardrails and the
NVIDIA garak
toolkit are now available for developers and enterprises. Enterprises can benefit from high-grade safety and security with
NVIDIA AI Enterprise
.
Meet the experts at GTC
The NVIDIA scientists behind this and other works in AI security will be at GTC 2025. You can hear a panel discussion on
navigating critical challenges in AI governance
, where we discuss practical approaches to building responsible AI systems.
Our cybersecurity AI and security teams will present an exclusive, in-depth session designed to
transform your AI agentic workflows securely
from blueprint to production. And our experts will be available to answer all your questions on
building trustworthy AI systems
.
Acknowledgements
Thanks to Nanna Inie, Jonathan Stray, and Leon Derczynski for their work on the
Summon a demon and bind it: A grounded theory of LLM red teaming
paper published in PLOS One. | https://developer.nvidia.com/zh-cn/blog/defining-llm-red-teaming/ | 定义 LLM 红色团队 | 在一项活动中,人们为生成式 AI 技术 (例如大语言模型 (LLMs)) 提供输入,以确定输出是否会偏离可接受的标准。LLMs 的这种使用始于 2023 年,并已迅速发展成为一种常见的行业实践,也是值得信赖的 AI 的基石。如何标准化和定义 LLM 红色团队?
NVIDIA、华盛顿大学、Center for Human-Compatible AI 和哥本哈根 IT 大学的研究人员对红色团队的实际应用“
Summon a demon and bind it: A grounded theory of LLM red teaming
”(在 PLOS One 中发布) 进行了一项研究。
由于目标是定义和理解相对较新的活动,因此该研究采用了有根据的理论方法,在数千分钟的视频录制中,以数十位从业者的访谈作为证据。我们与安全专业人员、政策研究人员和科学家,以及从事这项非专业工作的人员 (例如学术研究人员、业余爱好者甚至艺术家) 进行了交谈,以了解攻击 LLMs 的常见主题、动机和策略。
什么定义 LLM 红队在实践中?
LLM 红色团队合作具有以下定义特征:
它寻求极限:红色团队成员发现边界并探索系统行为的极限。
这绝非恶意:做红队的人对伤害不感兴趣,事实上,情况恰恰相反。
它是手动的:作为一种创造性和趣味性的练习,red teaming 中可以自动化的部分通常非常有用,可以让人类 red teamers 深入了解他们的工作。
这是一项团队合作精神:从业者从彼此的技巧和提示中寻找灵感,规范是尊重其他从业者的工作。
他们以炼金术士的思维方式进行交流:我们发现,红色团队成员倾向于放弃对模型及其行为的合理化,转而接受工作的混乱和未知本质。
这些特征融入了 NVIDIA 对 LLM 红色团队的定义,我们稍后将在本文中讨论这些定义。
在业内,通常会对网络安全红队和内容红队进行粗略划分:
网络安全红色团队合作的范围往往涵盖直至推理输出点的技术堆栈,以及用于传输和渲染此输出的技术。
另一方面,内容红色团队的作用域为模型推理时生成的内容。
人们为什么组建 LLM 团队?
攻击 LLMs 的人的动机多种多样。
其中一些是外部的。这可能是他们工作的一部分,也可能是监管要求。社交系统也可以发挥作用,人们可以发现社交媒体内容的 LLM 漏洞,或者加入封闭的小组。还有一些则是内在的,因为很多人这样做是为了好玩、出于好奇或出于对模型行为的关注。
在 NVIDIA,我们组建了红队 LLMs 作为我们 Trustworthy AI 流程的一部分,在发布之前评估风险。如果模型的性能未达到预期,我们将延迟发布,直到对给定模型进行更新和修改以提高其性能。
人们如何看待这项活动?
LLM 红色团队合作包括在对话式攻击目标时使用实现目标的策略。每种策略都被分解成不同的技术。一项技术可能仅会影响目标的两个或三个对抗输入,或者一项输入可能会利用多种技术。
我们确定了以下红队策略的整体类型:
语言:
调节词语的表面形式而非语义,例如使用编码方案。
修辞学:
依靠论证或操纵。
可能的世界:
尝试改变交互的上下文。
虚构化 :
将操作基础转移到一个虚构的世界或一组条件中。
Stratagems:
使用影响更高级别的 LLM 交互方式的元策略。
如需了解更多信息,请参阅 Summon a demon and bind it:LLM 红色团队的基础理论,其中列出并描述了 12 种不同策略中的 35 种技术。
LLM 红色团队合作揭示了哪些内容?
LLM 红色团队合作的目标并不是量化安全性。重点在于探索,以及找出红色团队成员可以退出 LLM 的现象和行为。换句话说,如果我们只遇到一次故障,那么失败是可能的。
红色团队与基准测试的另一个区别是对新奇性的关注。
对于网络安全和基于内容的红色团队,攻击的可能范围是无限的。对于网络安全而言,这是因为新的攻击方法正在不断开发中。对于内容,这是因为交互模式是通过文本进行的,可以无限地重新排列和增强。
因此,在发现新的安全漏洞时,可重复性并不重要。虽然使用一系列现有提示测试任何模型的故障是有意义的,就像基准测试一样,这永远无法表明安全性。它只是揭示了缺点。
您可以根据公开基准测试中的提示对部署的模型进行修补以确保安全性,或者在对齐期间使用这些数据,因此在基准测试中获得完整标记并不表明模型是安全的。但是,在安全基准测试中得分较低仍然表示存在缺点。
在安全环境中,为了严格测试模型,您应该超越公开知识,与模型密切互动,尝试寻找突破特定 LLM 的新方法。
从这个意义上说,LLM 红色团队合作是典型的手工活动。红色小队队员在与目标进行互动时,运用他们的人类专业知识和直觉。例如,他们可能会感觉到某个模型即将发出缓解消息 (例如,“作为 AI,我无法……”),他们可能会通过退出当前的请求行,或者以略有不同的策略开始新的聊天会话来响应此消息。
或者,红色团队成员可能会感觉到模型已接近生成,因此继续推送并略微改变其请求,直到找到解决方法,使模型以目标方式失败。他们将有效和无效的部分添加到自己的意识专业知识和直觉中,然后与其他红色团队成员分享。
这使得红队合作成为一种独特的人类活动,是对安全基准测试的补充。
人们如何使用来自 LLM 红队的知识?
Red Team 成员通常会寻找 LLM 可能带来的危害。
伤害
的定义十分广泛。
红色团队训练可以专注于许多目标或指标之一,这可能取决于部署环境、用户基础、处理的数据或其他因素。红色团队成员还可能需要注意“休息”所需的复杂程度。与 LLM 进行单句交互后发现的危害性通常比复杂的多回合操作后出现的危害性更值得关注。
有时,红色团队的目标是好奇心,而好奇心的副产品可能是红色团队成员在其组织中或公开分享的内容。这既可以建立个人的专业知识和直觉,也可以提高社区的知识水平。传统网络安全知识通常在社交媒体上以非正式方式分享,这也适用于 LLM 安全。
在 NVIDIA,人类红色团队是我们做出模型发布决策的关键部分。我们拥有一支技能熟练的 LLM 红色团队和一支技能熟练的 AI 红色团队,他们会检查我们的模型和基础架构,以发现未在其他地方发现的漏洞和行为。这在三个方面为我们提供帮助:
它使我们能够就是否发布模型做出明智的决定
它建立了 LLM 红色团队前沿的高级技能池
它让我们确信,我们正在利用 AI 安全功能尽力而为并取得良好结果。
红色团队的结果将输入 NVIDIA 的增强型模型文档格式
Model Card++
。
可以自动测试 LLM 安全性的某些部分。在一个案例中,当发现漏洞后,这可以被存储并用于测试其他 LLM,这样我们就不会再次出现同样的错误。我们在 NVIDIA NeMo Curator(生成式 AI Red-Teaming 和评估套件)中正是这样做的。准备测试其 LLM 部署安全性的开发者可以针对几乎任何模型运行开源
NVIDIA garak
,并获得报告,表明他们容易受到 120 多个不同类别的漏洞的影响。
掌握有关如何破解模型的知识可能会对不法之徒带来风险。当发现漏洞时,最好的方法是联系模型所有者,让他们有机会做出回应并修复漏洞。此过程称为
协同漏洞披露
,也是 LLM 漏洞的常见做法。
NVIDIA 的 LLM 红队定义
我们将 LLM 红色团队视为 AI 红色团队的一个实例。我们的定义由
NVIDIA AI Red Team
开发,其灵感来自本期关于 LLM 红色团队实践的研究,以及计算语言学协会的 NLP Security SIG(
SIGSEC
)使用的定义。
请注意指定红队的特定子域,因为不同的受众通常会对所提及的形式做出不同的假设。
LLM 红色团队合作:系统测试包含 AI 模型的 AI 模型和系统,以识别会对运行或使用这些模型的系统造成威胁或风险的漏洞和行为。
它可以细分为两个方面:安全红队和基于内容的红队。
安全红色团队
评估模型和包含模型的系统的稳健性,使其能够抵御攻击影响模型本身或包含模型的系统的传统安全属性(例如,机密性、完整性和可用性)。
用于违反安全属性的攻击(例如对抗输入、模型提取、训练数据推理和提取或 prompt injection)通常由安全红色团队进行评估。
这些活动通常需要具有传统安全背景的团队利用调查结果并评估其影响。
基于内容的红色团队
评估模型在对抗操作下的不良行为,生成违反模型某些预先指定的行为契约的输出 (显式 (例如 model card) 或隐式)。
这些行为可能包括冒犯性的、不受欢迎的或不安全的输出,包括有偏见或偏见的生产、关于不安全或不法活动的指令、代表模型所有者作出承诺,或根据受保护的特征做出决定。常见的技术包括各种形式的 jailbreaking 和 guardrail evasion。
这些活动通常需要道德团队、法律团队或其他类似领域专家的支持,以评估调查结果的影响。
提升 LLM 的安全性和安全性
NVIDIA NeMo Guardrails 是一个可扩展的平台,用于定义、编排和执行 AI Guardrails,以在 AI 代理和其他生成式 AI 应用中实现内容安全、越狱预防等。
NeMo Guardrails 和
NVIDIA garak
工具包现已面向开发者和企业推出。借助
NVIDIA AI Enterprise
,企业可以从高级别的安全性和可靠性中受益。
在 GTC 大会上与专家会面
在 GTC 2025 上,NVIDIA 科学家将为这项工作以及 AI 安全领域的其他工作提供支持。您可以听到关于应对 AI 治理中的关键挑战的小组讨论,其中我们讨论了构建负责任的 AI 系统的实用方法。
我们的网络安全 AI 和安全团队将举办一场深入的独家会议,旨在安全地将您的 AI 代理工作流从蓝图转变为生产。我们的专家将为您解答有关构建可信 AI 系统的所有问题。
致谢
感谢 Nanna Inie、Jonathan Stray 和 Leon Derczynski 为“召唤恶魔并将其绑定:LLM 红队合作的基础理论”所做的工作:在 PLOS One 上发表的论文。 |
https://developer.nvidia.com/blog/agentic-autonomy-levels-and-security/ | Agentic Autonomy Levels and Security | Agentic workflows
are the next evolution in AI-powered tools. They enable developers to chain multiple AI models together to perform complex activities, enable AI models to use tools to access additional data or automate user actions, and enable AI models to operate autonomously, analyzing and performing complex tasks with a minimum of human involvement or interaction.
Because of their power, agentic workflows also present an element of risk. The most common model at the core of agentic systems remains some variety of LLM, which remains vulnerable to prompt injection if untrusted data can be introduced into the system.
To help assess and mitigate those vulnerabilities, NVIDIA presents an Agentic Autonomy framework, which we use for the following purposes:
Understand risks associated with increasing complexity of AI workflows
Help model and mitigate the risks posed by such workflows
Introduce how we use it to model threats to agentic systems
Explore how NVIDIA uses this framework for security for AI and more
cybersecurity use cases at NVIDIA GTC
.
Manipulating autonomous systems
In practice, exploitation of AI-powered applications requires two key components:
An adversary must be able to get their data (read: attack) into the system through some mechanism.
There must be a downstream effect that their malicious data can trigger.
When the AI component of the system is an LLM, this is commonly referred to as either direct prompt injection (the adversary and user are the same person) or indirect prompt injection (the adversary and the user could be different people).
However, similar threats exist for other types of AI models used in autonomous AI powered applications. Both flavors of prompt injection are rooted in the lack of separation between the control and data plane in the LLM architecture.
Direct prompt injection can be commonly seen in such examples as “Do Anything Now” (or DAN) prompts, which attempt to subvert safety and content filters trained into the model, and enable it to produce objectionable or unwanted content. These attacks typically only impact the active user’s session, and so typically have limited impact.
Indirect prompt injection, in which a threat actor causes their data to be included in another user’s session and thus takes control of the LLM’s behavior and productions, is more serious. This is often accomplished by targeting data sources used by retrieval augmented generation (RAG) tools, including internet search, such that documents containing malicious instructions are unknowingly included in the current user’s session. When the capability to manipulate another user’s session,– particularly LLM outputs, is combined with the use of tools, a significant number of new potential risks present themselves.
When potentially untrusted data enters an agentic system, any downstream actions taken by that system become potentially adversary-controlled.
Security and complexity in AI autonomy
Even before “agentic” AI became a distinct class of product offerings, the orchestration of AI workloads in sequences was commonplace. Even simple flows, such as an endpoint security product routing a sample to the correct AI powered analysis engine depending on file format, is arguably an example of such a workflow.
Because workflows such as these are deterministic, it’s straightforward to enumerate all possible paths, map out data flows, and isolate untrusted data from potentially risky actions that might be impacted by attacker-controlled data.
As the industry moves to systems that have more internal decision making capabilities, higher degrees of tool use, and more complex interactions between model components, the number of potential data flow paths increases exponentially, making threat modeling more difficult.
The following set of classification, ordering systems from least autonomous to most, distinguishes between different degrees of autonomy afforded by different architectures, and helps assess corresponding risks.
Autonomy level
Description
Example
0 – Inference API
A single user request results in a single inference call to a single model.
An
NVIDIA NIM microservice
serving a single model
1 – Deterministic system
A single user request triggers more than one inference request, optionally to more than one model, in a predetermined order that does not depend on either user input or inference results.
NVIDIA Generative Virtual Screening for Drug Discovery Blueprint
2 – Weakly autonomous system
A single user request triggers more than one inference request. An AI model can determine if or how to call plugins or perform additional inference at fixed predetermined decision points.
Build an Enterprise RAG Pipeline Blueprint
3 – Fully autonomous system
In response to a user request, the AI model can freely decide if, when, or how to call plugins or other AI models, or to revise its own plan freely, including deciding when to return control to the user.
NVIDIA Vulnerability Analysis for Container Security Blueprint
, “BabyAGI”, computer use agents
Table 1. System autonomy level classification
Level 1
Level 1 is a linear chain of calls, where the output of one AI call or tool response is conveyed to the next step in an entirely deterministic manner. The complete flow of data through the system is known in advance.
Figure 1. First level of autonomy
Level 2
In level 2, the output from the AI model may be sent along different paths through the workflow in a data-dependent manner. While every execution path can be traced, it’s not known until the workflow is executed which execution path is used.
Figure 2. Linear chain of AI calls and tool responses
It forms a directed acyclic graph in that there is no path through the workflow that could potentially form a loop or cycle. All possible paths can be finitely enumerated. Most LLM routers fall into this level.
Level 3
For level 3, the number of potential execution paths grows exponentially with the number of execution steps in the workflow. Tools or other AI models may be invoked as the workflow progresses.
Figure 3. Output from the AI model goes along different paths
The presence of a cycle within the directed graph means that an exponentially increasing (and, at least in theory, potentially infinite) number of paths through the graph exist. It’s generally not tractable to enumerate all possible paths ahead of time, nor predict which specific path will be used for a particular input.
Separation of security boundary analysis and tool security
Our early experience with analyzing and securing agentic systems in the field led us to realize that the risk associated with these systems lies mostly in the tools or plugins available to those systems. In the absence of a tool or plugin that can perform sensitive or physical actions, the primary risk posed by manipulation of the AI component is misinformation, regardless of the degree of complexity of the workflow.
This observation drives the primary difference between this framework, which focuses on security boundaries as being distinct from tool security, and other frameworks, such as that proposed by HuggingFace in their smolagents library (
Introduction to Agents
). Some levels align closely. Their
simple processor
is approximately a
deterministic system
and their
multi-step agent
maps closely onto a
fully autonomous system
.
However, this framework focuses on the difficulty of identifying security boundaries given the degree of autonomy expressed by an AI component in describing program flow. HuggingFace’s description of agency, by combining the risks of tool use with the complexity of program flow, provides a quick rubric to judge systemic risks at the level of the system. This framework’s approach of separating the risks of tool calling and tool use from the difficulty of drawing security boundaries supports more detailed threat modeling of complex flows and potential mitigations.
In the proposed framework, separating the following concerns enables you to more concretely assess the risk posed by an agentic workflow:
Security boundaries with respect to the flow of untrusted data through the system
Security properties of specific functions or tools that may be called by the agentic system
Within a simple deterministic system with clear security boundaries, even a sensitive plugin may be a fairly low-risk if you can clearly separate it from untrusted input. In a similar manner, even a fully agentic system may be relatively low-risk as long as no sensitive tools are available to it and the output of the system is appropriately validated before use.
The combination of a more complex workflow with sensitive tools, however, demands further analysis to ensure secure operation.
Threat modeling under this framework
Increasing autonomy levels do not inherently represent increasing risk. Instead, they represent increasing lack of determinism and predictability of the system behavior, which makes it more difficult to assess the impact of untrusted data.
Risk associated with agentic systems is located largely in the tools or plugins that the system has access to which can perform sensitive actions, including the following examples:
Completing a purchase
Sending an email
Physical actions such as moving a robot or setting a thermostat
Broadly speaking, anything that takes an action on behalf of a user that requires their delegated authorization to complete, or anything that results in a physical change in a system should be treated as potentially sensitive, and analyzed for its potential to act upon or be influenced by untrusted data.
Mitigation of this tool risk relies in large part on the ability to block the ability of attackers to inject malicious data into plugins, which becomes significantly more difficult with each increase in autonomy level.
Possible downstream effects are much more dependent on the details of the system and what plugins are being used, but at a minimum, a threat actor who can feed malicious data to the system must be presumed to be able to at least influence, if not control outright, the responses of the system.
If tools or plugins are being used, the potential threat increases significantly. A bad actor who can control the output of an AI model embedded in a system that uses tools might also be able to control which plugins are used and what actions those tools take. This can lead to a range of threats:
Data exfiltration, for example, through Markdown rendering plugins
Remote code execution, for example, through plugins that run LLM-provided code
The risk of physical harm, if the system has control or influence over some sort of physical system
As you climb the autonomy hierarchy, you need a more careful analysis of and management of potentially untrusted data (Figures 4 and 5).
A Level 0 system, with no vectors for untrusted data, is relatively straightforward from this perspective.
A Level 1 system, with a predetermined workflow, always accesses the same data sources, models, and tools in the same sequence, making it straightforward to determine if untrusted data can be introduced into the workflow. The designer can either isolate sensitive functionality from the untrusted data, or design appropriate sanitization strategies for that data.
The number of potential execution paths in a Level 2 system is greater than in Level 1 systems but still enumerable, and specific paths that contain untrusted data can be analyzed in greater detail.
Level 3 autonomous systems pose the largest difficulty, as the number of execution paths typically grows exponentially in the number of self-reflection steps performed by the agent. In this case, these include time-of-use sanitization, time-of-use taint tracing, or other risk mitigations such as requiring manual user approval of potentially hazardous actions.
Taint tracing
Taint tracing
is marking an execution flow as having received untrusted data and then either preventing use of or requiring manual re-authorization for any sensitive tool.
For Level 1 and Level 2 systems, identifying potential sources of untrustworthy data, and evaluating the risks of downstream impacts is relatively straightforward. When untrustworthy data enters the system, every downstream AI model and its associated outputs and every downstream tool and its associated actions and outputs are also untrusted.
Figure 4. Taints from untrusted sources can be bounded and enumerated in Level 1 and 2 systems
For level 3 systems, taint tracing in the general case becomes almost intractable. When untrusted data has been introduced into the workflow, it has the potential to propagate to any other downstream component through the looping structure inherent to level 3 workflows.
Figure 5. Taints from untrusted sources are difficult to bound and enumerate in Level 3 systems
By classifying an agentic application into the correct level, it becomes simpler to identify the overall level of risk posed by the application and corresponding security requirements.
Recommended security controls per autonomy level
Table 2 provides a high-level summary of some suggested mitigating controls that should be placed around agents of various autonomy levels. Not all controls are required for all agentic workloads, depending on the sensitivity of tools provided to the workload.
Autonomy level
Recommended security controls
0 – Inference API
Use standard API security.
1 – Deterministic system
Manually trace dataflows and order workflow correctly to prevent untrusted data from entering sensitive plugins.
2 – Bounded agentic workflow
Enumerate dataflows, identify ones with potentially untrusted data, explore isolation or sanitization options, and consider time-of-use manual approval of sensitive actions.
3 – Fully autonomous system
Implement taint tracing and mandatory sanitization of potentially untrusted data. Consider time-of-use manual approval of sensitive actions.
Table 2. Recommended security controls by level
For Level 0 systems, you can apply simple application and API security. Due to lack of further interaction taken by the system on the basis of AI model output, overall risk here is largely limited to non-AI components.
Level 1 and 2 systems provide a slightly elevated level of risk, but one that is straightforward to evaluate. Because these systems contain no loops, all execution flows through the systems can be exhaustively enumerated and examined for cases when untrusted data may flow into a sensitive tool.
Flows of this type are particularly risky when they pass through an LLM between the source and the sink. In level 1 systems, it’s often sufficient to reorder the workflow. For more complex (level 2) systems, risky flows from untrusted sources to sensitive sinks should be carefully evaluated to see if they can be eliminated, or otherwise if the untrusted data or input to the sensitive tool can be properly sanitized to remove risk.
Finally, if the flows cannot be rearranged or eliminated, or robust sanitization is not possible, add time-of-use manual approval to sensitive tools that may receive untrusted data, thus placing a human in the loop. This enables users of agentic workflows to inspect risky actions before they are performed and deny the workflow the ability to perform adversarially manipulated ones.
Level 3 systems, due to their complexity, typically cannot be meaningfully rearranged to block untrusted sources to sensitive sink flows. In this case, sanitization of untrusted data at time of retrieval should be implemented.
If possible, implementing taint tracing should also be considered. In the event that neither is possible, simply requiring that all potentially sensitive tools require manual approval before executing their function may be used as a last resort.
As with threat modeling, breaking agentic workloads into different complexity levels enables the quick determination of reasonable security controls to avoid the risk of having sensitive tools ingest untrusted data, including LLM outputs that have been manipulated by prompt injection.
Conclusion
As systems climb the autonomy hierarchy, they become more complex and more difficult to predict. This makes threat modeling and risk assessment more difficult, particularly in the presence of a range of data sources and tools of varying trustworthiness and sensitivity.
Identifying the system autonomy level provides a useful framework for assessing the complexity of the system, as well as the level of effort required for threat modeling and necessary security controls and mitigations.
It’s also important to analyze the plugins in the pipeline and classify them depending on their capabilities to provide an accurate risk evaluation based on the autonomy level. | https://developer.nvidia.com/zh-cn/blog/agentic-autonomy-levels-and-security/ | 代理自主级别与安全性 | 智能体工作流是 AI 赋能工具的下一次演进。它们使开发者能够将多个 AI 模型关联在一起以执行复杂的活动,使 AI 模型能够使用工具来访问其他数据或自动执行用户操作,并使 AI 模型能够自主运行,以尽可能减少人类参与或交互的方式分析和执行复杂的任务。
由于其强大的功能,代理工作流也存在风险因素。代理系统的核心最常见的模型仍然是各种 LLM,如果可以将不受信任的数据引入系统,这些模型仍然容易受到 prompt injection 的影响。
为帮助评估和缓解这些漏洞,NVIDIA 提供了一个 Agentic Autonomy 框架,我们将其用于以下用途:
了解与日益复杂的 AI 工作流相关的风险
帮助模拟和减轻此类工作流程带来的风险
介绍如何使用它对 agentic 系统的威胁进行建模
操控自主系统
在实践中,开发 AI 赋能的应用需要两个关键组件:
对手必须能够通过某种机制将其数据 (读取:攻击) 传入系统。
必须存在其恶意数据可能触发的下游影响。
当系统的 AI 组件是 LLM 时,这通常被称为直接提示注入 (对手和用户是同一个人) 或间接提示注入 (对手和用户可以是不同的人)。
然而,对于 AI 驱动的自主应用中使用的其他类型的 AI 模型,也存在类似的威胁。这两种提示注入的根源在于 LLM 架构中控制平面和数据平面之间缺乏分离。
直接提示注入通常出现在“Do Anything Now”(或 DAN)提示等示例中,这些示例试图颠覆训练到模型中的安全和内容过滤器,并使其产生令人反感或不需要的内容。这些攻击通常只影响活跃用户的会话,因此通常影响有限。
间接提示注入 (即威胁行为者将其数据包含在其他用户的会话中,从而控制 LLM 的行为和生产) 更为重要。这通常通过定位检索增强生成 (RAG) 工具 (包括互联网搜索) 使用的数据源来实现,这样就会在不知情的情况下将包含恶意指令的文档包含在当前用户的会话中。当操控其他用户会话 (尤其是 LLM 输出) 的能力与工具的使用相结合时,会出现大量新的潜在风险。
当潜在的不可信任数据进入代理系统时,该系统采取的任何下游操作都可能受到对手的控制。
AI 自主领域的安全性和复杂性
早在“代理式”AI 成为一类独特的产品之前,按序列编排 AI 工作负载就已屡见不鲜。即使是简单的流程 (例如端点安全产品,根据文件格式将样本路由到由 AI 驱动的正确分析引擎) 也可以说是此类工作流的示例。
此类工作流程具有确定性,因此您可以直接列举所有可能的路径,绘制数据流地图,并将不受信任的数据与可能受攻击者控制数据影响的潜在风险操作隔离开来。
随着行业转向拥有更多内部决策能力、更高工具使用率以及模型组件之间更复杂交互的系统,潜在数据流路径的数量呈指数级增加,从而增加了威胁建模的难度。
以下分类将系统从最不自主排序为最自主排序,可区分不同架构提供的不同自主程度,并有助于评估相应的风险。
自主级别
说明
示例
0 – 推理 API
单个用户请求会导致对单个模型进行一次推理调用。
为单个模型提供服务的 NVIDIA NIM 微服务
1 – 确定性系统
单个用户请求会按不依赖于用户输入或推理结果的预定顺序触发多个推理请求 (可以选择触发多个模型)。
NVIDIA
生成式虚拟筛选蓝图
2 – 弱自主系统
单个用户请求会触发多个 inference 请求。AI 模型可以确定是否或如何调用插件,或在固定的预定决策点执行额外 inference。
构建企业 RAG Pipeline Blueprint
3 – 全自主系统
根据用户请求,AI 模型可以自由决定是否、何时或如何调用插件或其他 AI 模型,或者自由修改自己的计划,包括决定何时将控制权交回给用户。
针对容器安全蓝图“BabyAGI”、计算机使用代理的 NVIDIA 漏洞分析
表 1. 系统自主级别分类
1 级
1 级是线性调用链,其中一个 AI 调用或工具响应的输出以完全确定的方式传输到下一步。通过系统的完整数据流是预先知道的。
图 1. 第一级自动驾驶
2 级
在第 2 级中,AI 模型的输出可能会以数据依赖的方式在工作流中沿不同路径发送。虽然可以追踪每个执行路径,但在工作流执行之前,我们不知道使用的是哪种执行路径。
图 2、AI 调用和工具响应的线性链
它形成了有向无环图,因为工作流中没有可能形成循环或循环的路径。所有可能的路径都可以有限枚举。大多数 LLM 路由器都属于这一级别。
3 级
对于级别 3,潜在执行路径的数量会随着工作流中执行步骤的数量呈指数级增长。随着工作流的进展,可能会调用工具或其他 AI 模型。
图 3、AI 模型的输出路径不同
有向图形中存在循环意味着图形中存在呈指数级增加(至少理论上可能是无穷大)的路径。通常无法提前列举所有可能的路径,也无法预测特定输入将使用哪种特定路径。
安全边界分析与工具安全性分离
我们早期在现场分析和保护代理系统的经验使我们意识到,与这些系统相关的风险主要在于这些系统可用的工具或插件。在缺乏可执行敏感或物理操作的工具或插件的情况下,无论工作流程的复杂程度如何,操作 AI 组件所带来的主要风险都是错误信息。
这一观察结果决定了此框架与其他框架 (例如 Hugging Face 在其 smolagents 库
(Introduction to Agents)
中提出的框架) 之间的主要区别,前者侧重于安全边界,而后者不同于工具安全边界。一些层次非常相似。他们的简单处理器近似为
确定性系统
,并且他们的多步骤智能体紧密地映射到完全自主的系统上。
但是,鉴于 AI 组件在描述程序流时表示的自主程度,此框架重点关注识别安全边界的难度。Hugging Face 对机构的描述将工具使用的风险与程序流程的复杂性相结合,提供了在系统层面判断系统风险的快速指南。此框架将工具调用和工具使用的风险与绘制安全边界的难度分开,支持对复杂流程和潜在缓解措施进行更详细的威胁建模。
在建议的框架中,通过分离以下问题,您可以更具体地评估由 agentic workflow 带来的风险:
与通过系统的非可信数据流相关的安全边界
代理系统调用的特定函数或工具的安全属性
在具有明确安全边界的简单确定性系统中,如果您能够将敏感插件与不信任的输入明确分开,即使是敏感插件也可能风险较低。同样,只要没有可用的敏感工具,且系统的输出在使用前经过适当验证,则即使是全代理系统也可能相对风险较低。
然而,将更复杂的工作流程与敏感工具相结合需要进行进一步分析,以确保安全操作。
此框架下的威胁建模
自动驾驶级别的增加本身并不代表风险的增加。相反,它们代表着系统行为越来越缺乏确定性和可预测性,这使得评估不可信任数据的影响变得更加困难。
与代理系统相关的风险主要存在于系统可以访问的工具或插件中,而这些工具或插件可以执行敏感操作,其中包括以下示例:
完成购买
发送电子邮件
物理动作,例如移动机器人或设置 thermostat
从广义上讲,任何代表用户采取行动且需要其授权才能完成的操作,或任何导致系统发生物理变化的操作,都应被视为潜在的敏感数据,并分析其对不受信任数据采取行动或受其影响的可能性。
降低此工具风险在很大程度上依赖于能否阻止攻击者向插件中注入恶意数据,而随着自主级别的提升,这一能力会变得更加困难。
可能的下游影响更多取决于系统的细节和使用的插件,但至少必须假定能够向系统提供恶意数据的威胁行为者至少能够影响(如果不是完全控制)系统的响应。
如果使用工具或插件,潜在威胁会显著增加。控制嵌入到使用工具的系统中的 AI 模型的输出的坏行为者也可能能够控制使用哪些插件以及这些工具执行哪些操作。这可能会导致一系列威胁:
数据泄漏,例如通过 Markdown 渲染插件
远程执行代码,例如,通过运行 LLM 提供的代码的插件
如果系统控制或影响某种物理系统,则会产生物理伤害风险
在提升自主层次结构时,您需要更仔细地分析和管理潜在的不可信任数据(图 4 和图 5)。
从这个角度来看,没有不可信数据向量的 Level 0 系统相对简单。
具有预定工作流的 1 级系统始终可以按相同顺序访问相同的数据源、模型和工具,从而可以直接确定是否可以将不可信任的数据引入工作流。设计师可以将敏感功能与非可信数据隔离开来,或者为这些数据设计合适的清理策略。
2 级系统中的潜在执行路径数量大于 1 级系统,但仍可枚举,并且可以更详细地分析包含非可信数据的特定路径。
3 级自主系统的难度最大,因为执行路径的数量通常随智能体执行的自反射步骤的数量呈指数级增长。在这种情况下,这些措施包括使用时间清洁、使用时间污点追踪或其他风险缓解措施,例如要求用户手动批准潜在的危险操作。
污点追踪
污点追踪
将执行流程标记为接收了不可信任的数据,然后阻止对任何敏感工具的使用或要求对其进行手动重新授权。
对于 1 级和 2 级系统,识别潜在来源的不可信数据,以及评估下游影响的风险相对简单。当不可信数据进入系统时,每个下游 AI 模型及其关联输出、每个下游工具及其关联操作和输出也同样不可信。
图 4、不可信来源的污点可以在 Level 1 和 Level 2 系统中进行限制和列举
对于 level 3 级系统,在一般情况下,污点追踪几乎难以解决。当不受信任的数据引入工作流后,它有可能通过 level 3 级工作流固有的循环结构传播到任何其他下游组件。
图 5、来自不可信来源的污点很难在 Level 3 级系统中绑定和列举
通过将代理应用程序分类为正确的级别,可以更简单地识别应用程序构成的总体风险水平以及相应的安全要求。
每个自主级别的推荐安全控制
表 2 简要概述了一些建议的缓解控制,这些控制应放置在不同自主级别的 agents 周围。并非所有 agents 工作负载都需要所有控件,具体取决于为工作负载提供的工具的灵敏度。
自主级别
推荐的安全控制
0 – 推理 API
使用标准 API 安全性。
1 – 确定性系统
手动追踪数据流并正确订购工作流,以防止不受信任的数据进入敏感插件。
2 – 有界限的代理工作流
列举数据流、识别具有潜在不可信任数据的数据流、探索隔离或清理选项,并考虑使用时间手动批准敏感操作。
3 – 全自主系统
实现污点追踪和强制清理对可能不可信的数据。考虑使用时间手动批准敏感操作。
表 2. Recommended security controls by level
对于 0 级系统,您可以应用简单的应用程序和 API 安全性。由于系统无法根据 AI 模型输出进行进一步交互,因此整体风险主要局限于非 AI 组件。
1 级和 2 级系统的风险水平略高,但评估起来很简单。由于这些系统不包含循环,因此可以详尽无遗地列举和检查流经系统的所有执行流程,以备不时之需。
当此类流通过源和接收端之间的 LLM 时,风险特别大。在 1 级系统中,通常只需重新排列工作流程即可。对于更复杂(2 级)的系统,应仔细评估从不受信任的来源到敏感汇的风险流,以确定是否可以消除这些流,或者是否可以适当清除不受信任的数据或敏感工具的输入,以消除风险。
最后,如果流程无法重新排列或消除,或无法进行可靠的清理,请为可能会接收不受信任数据的敏感工具添加使用时间手动审批,从而让用户进入循环。这使得代理工作流的用户能够在执行风险操作之前检查这些操作,并使工作流无法执行对抗性操作。
由于 Level 3 级系统的复杂性,通常无法进行有意义的重新排列,以将不受信任的源阻塞到敏感的接收流。在这种情况下,应在检索时清除不受信任的数据。
如果可能,还应该考虑实施污点追踪。如果两者都无法实现,则仅要求所有潜在敏感的工具在执行其功能之前需要手动批准即可将其用作最后手段。
与威胁建模一样,将代理工作负载分解为不同的复杂性级别有助于快速确定合理的安全控制,从而避免让敏感工具提取不受信任数据 (包括通过提示注入操作的 LLM 输出) 的风险。
结束语
随着系统对自动驾驶层次结构的要求越来越高,这些层次结构变得越来越复杂,预测难度也越来越大。这使得威胁建模和风险评估变得更加困难,尤其是在存在各种数据源以及具有不同可靠性和灵敏度的工具的情况下。
确定系统自主级别为评估系统复杂性、威胁建模以及必要的安全控制和缓解措施所需的工作量水平提供了有用的框架。
分析管道中的插件并根据其功能对其进行分类也很重要,以便根据自主级别提供准确的风险评估。 |
https://developer.nvidia.com/blog/nvidia-deep-learning-institute-releases-new-generative-ai-teaching-kit/ | NVIDIA Deep Learning Institute Releases New Generative AI Teaching Kit | Generative AI, powered by advanced machine learning models and deep neural networks, is revolutionizing industries by generating novel content and driving innovation in fields like healthcare, finance, and entertainment.
NVIDIA is leading this transformation with its cutting-edge GPU architectures and software ecosystems, such as the
H100 Tensor Core
GPU and CUDA platform, which optimize the development and deployment of generative models.
NVIDIA NIM
enhances the efficiency and scalability of AI inference tasks, enabling rapid deployment and iteration across various computing environments and accelerating advancements in generative AI applications.
The importance of generative AI education
As generative AI models, such as GANs and transformers, become increasingly sophisticated, there is a growing demand for skilled professionals who can develop, refine, and ethically deploy these technologies. A strong educational foundation in generative AI equips students with the practical skills and theoretical knowledge needed to innovate in areas like content creation, drug discovery, and autonomous systems.
College and university education in generative AI is crucial due to the rapidly expanding role of AI in almost every industry. By integrating generative AI into their curriculum, universities prepare the next generation of AI researchers, engineers, and thought leaders to advance the field and address the complex challenges associated with AI-driven innovation.
The new Generative AI Teaching Kit, a collaboration between the NVIDIA Deep Learning Institute (DLI) and Dartmouth College, is set to empower the next generation of professionals with the skills and knowledge needed in this rapidly evolving field.
This comprehensive teaching resource enables educators to provide students access to cutting-edge tools, frameworks, and practical exercises that are crucial for understanding the complexities of Generative AI and large language model development and deployment. By equipping students with a deep understanding of generative AI techniques, the Teaching Kit enables educators to foster future innovation and creativity in AI-driven industries.
As students transition into the workforce, they will be better prepared to tackle global challenges, from improving healthcare and science to advancing sustainable technologies.
Sam Raymond, adjunct assistant professor of engineering at Dartmouth College, was instrumental in developing the content. “Empowering students with skills to understand and potentially develop their own GPU-accelerated Generative AI applications is the primary objective,” said Raymond. “I believe students who go through this course will be at a significant advantage in the job market and help bridge the knowledge gap in industries today.”
Overview of the Generative AI Teaching Kit
All Teaching Kits include lecture slides, hands-on labs, Jupyter notebooks, knowledge checks, and free online self-paced courses that provide certificates of competency for students, all comprehensively packaged up and ready for classroom and curriculum integration.
Figure 1. The Generative AI Teaching Kit includes free access to all DLI online self-paced courses, offering certificates for students
.
The aim of the Generative AI Teaching Kit is to introduce the foundational concepts of natural language processing (NLP) that are essential for understanding LLMs and generative AI more broadly. Key concepts of LLMs are then examined using NVIDIA GPUs, tools, and services, as well as open-source libraries and frameworks. A simple pretraining exercise of a GPT model shows basic training processes in the cloud.
The kit also covers diffusion models to explore the application of generative AI in image and video generation. Multi-modal LLM architectures are then introduced, with a focus on optimizing various LLM architectures during fine-tuning using the NVIDIA NeMo framework. Advancements in inference and the refinement of tools like chatbots are also discussed, using NVIDIA NIM, NeMo Guardrails, TensorRT, and TensorRT-LLM to enhance efficiency and scalability in production environments.
The Generative AI Teaching Kit contains focused modules that combine theory, algorithms, programming, and examples:
Module 1:
Introduction to Generative AI
Module 2:
Word Embeddings, Tokens, and NLP
Module 3:
Large Language Models and the Transformer
Module 4:
LLM Scaling Laws and LLM Families
Module 5:
Multimodal Learning and Its Applications
Module 6:
Diffusion Models in Generative AI
Module 7:
Model Training (Pre-Training, Instruction Following, and PEFT)
Module 8:
LLM Orchestration
Module 9:
Scaling Model Training to Distributed Workloads
Through a collaboration with Google, educators also get access to free Google Colab credits to run the lab notebooks in preparation for teaching the course.
This content is valuable for educators across various fields, especially in computer science and engineering. Its modular design enables instructors to tailor the course to meet the specific needs of their students and create a customized learning experience. Select professors from around the world have already been given early access to first-release modules.
“I’m eager to integrate the Generative AI Teaching Kit in my AI in Materials Engineering class,” said Mohadeseh Taheri-Mousavi, assistant professor in the Materials Science and Engineering department at Carnegie Mellon University. “The comprehensive lecture notes with well-structured coding labs with examples from various fields, and associated online courses with certificates, will provide my students with the cutting-edge resources to deeply understand the broad applications of generative AI techniques in various fields.”
Professor Payam Barnaghi from the Department of Brain Sciences at Imperial College London uses LLMs and generative AI in his research using electronic health records and healthcare data. “NVIDIA Generative AI Teaching Kit content is a wonderful resource for students learning the latest developments in AI and machine learning,” said Barnaghi. “As a result of having early access to the first modules, I plan to use this content as the basis for teaching advanced topics in my machine learning for neuroscience courses.”
Given the fast-paced advancements in generative AI, educators can expect the teaching materials to be updated over time. NVIDIA is dedicated to offering high-end educational resources and welcomes feedback to continuously improve the content.
LLM technical certification discounts for students
The Generative AI Teaching Kit also includes access to a 50% discount on the
NVIDIA-Certified Associate: Generative AI LLMs
certification exam and the
NVIDIA-Certified Associate: Generative AI Multimodal
certification exam for educators to distribute to students. It also includes access to both exams free of charge for educators. Both exams are online and proctored remotely, include 50-60 questions, and have a 60-minute time limit.
Figure 1. NVIDIA-Certified Associate
These technical certifications validate students’ expertise in generative AI, offering a competitive advantage in the job market. They can open doors to exciting career opportunities across AI-driven industries.
The NCA Generative AI LLMs certification is an entry-level credential that validates the foundational concepts for developing, integrating, and maintaining AI-driven applications using generative AI and large language models (LLMs) with NVIDIA solutions.
The NCA Generative AI Multimodal certification is an entry-level credential that validates the foundational skills needed to design, implement, and manage AI systems that synthesize and interpret data across text, image, and audio modalities.
NVIDIA recommends that students take these exams after completing the Generative AI Teaching Kit.
Get started
Educators can access the first release of the Generative AI Teaching Kit and other kits for free by joining the
NVIDIA DLI Teaching Kit Program
.
About the NVIDIA Deep Learning Institute
The
NVIDIA Deep Learning Institute
(DLI) offers resources for diverse learning needs, from learning materials to self-paced and live training to educator programs. Individuals, teams, organizations, educators, and students can now find everything they need to advance their knowledge in AI, accelerated computing, accelerated data science, graphics, simulation, and more. | https://developer.nvidia.com/zh-cn/blog/nvidia-deep-learning-institute-releases-new-generative-ai-teaching-kit/ | NVIDIA 深度学习培训中心发布全新生成式 AI 教学套件 | 生成式 AI 由先进的机器学习模型和深度神经网络提供动力支持,通过生成新内容并驱动医疗、金融和娱乐等领域的创新来变革各行各业。
NVIDIA 正凭借其先进的 GPU 架构和软件生态系统(例如
H100 Tensor Core GPU
和 CUDA 平台),引领这一转型,该平台可优化生成式模型的开发和部署。
NVIDIA NIM
可增强 AI 推理任务的效率和可扩展性,从而实现跨各种计算环境的快速部署和迭代,并加速生成式 AI 应用的发展。
生成式 AI 教育的重要性
随着生成式 AI 模型(例如 GANs 和 Transformers)变得越来越复杂,对能够开发、改进和合乎道德地部署这些技术的熟练专业人员的需求也在不断增长。生成式 AI 方面坚实的教育基础可以让学生掌握在内容创作、药物研发和自主系统等领域进行创新所需的实践技能和理论知识。
由于生成式 AI 在几乎每个行业中的作用都在迅速扩展,因此学院和大学教育在生成式 AI 中至关重要。通过将生成式 AI 集成到其课程中,大学可以培养新一代 AI 研究人员、工程师和思想领袖,以推动该领域的发展,并应对与 AI 驱动的创新相关的复杂挑战。
新的生成式 AI 教学套件,由 NVIDIA 深度学习研究所(DLI)和达特茅斯学院合作开发,旨在为新一代专业人士提供这一快速发展领域所需的技能和知识。
借助这一全面的教学资源,教育工作者能够为学生提供先进的工具、框架和实践练习,这些工具、框架和实践练习对于理解生成式 AI 和大语言模型开发和部署的复杂性至关重要。通过让学生深入了解生成式 AI 技术,该教学套件使教育工作者能够在 AI 驱动的行业中培养未来的创新和创造力。
随着学生向劳动力过渡,他们将更好地准备应对全球挑战,从改善医疗和科学到推进可持续技术。
达特茅斯学院(Dartmouth College)工程系兼职助理教授 Sam Raymond 在编写课程内容时发挥了重要作用。Raymond 说:“让学生掌握理解并开发自己的 GPU 加速生成式 AI 应用程序的技能是我们的主要目标。我相信,学习本课程的学生将在就业市场中占据显著优势,并有助于缩小当今各行各业的知识差距。
生成式 AI 教学套件概述
所有教学套件均包含教学幻灯片、动手实验室、Jupyter notebooks、知识检查和免费的在线自定进度课程,为学生提供能力证书。这些课程均经过全面打包,ready for课堂和课程整合。
图 1. 生成式 AI 教学套件包括免费访问所有 DLI 在线自定进度课程,并为学生提供证书。
生成式 AI 教学套件旨在介绍自然语言处理(NLP)的基本概念,这些概念对于更广泛地理解 LLM 和生成式 AI 至关重要。然后,我们使用 NVIDIA GPUs、工具和服务以及开源库和框架来检查 LLM 的关键概念。一个 GPT 模型的简单预训练练习展示了云端的基本训练流程。
该套件还涵盖扩散模型,以探索生成式 AI 在图像和视频生成中的应用。然后,介绍多模态 LLM 架构,重点介绍如何在使用 NVIDIA NeMo 框架进行微调的过程中优化各种 LLM 架构。此外,还讨论了推理方面的进展以及聊天机器人等工具的优化,这些工具使用 NVIDIA NIM、NeMo Guardrails、TensorRT 和 TensorRT-LLM 来提高生产环境的效率和可扩展性。
生成式 AI 教学套件包含结合理论、算法、编程和示例的重点模块。这第一版包括以下模块:
生成式 AI 简介
生成式 AI 中的扩散模型
LLM 编排
未来版本的工具包将提供更多的模块。
这些内容对各个领域的教育工作者都很有价值,尤其是计算机科学和工程领域的教育工作者。其模块化设计使讲师能够根据学生的具体需求定制课程,并打造定制化的学习体验。来自世界各地的精选教授已经可以抢先体验第一版模块。“我非常希望将 Generative AI 教学套件集成到我的 AI 在材料工程课堂中,”卡内基梅隆大学材料科学与工程系助理教授 Mohadeseh Taheri-Mousavi 说,“内容全面的教学笔记包含结构良好的编码实验室和来自各个领域的示例,以及与证书相关的在线课程,将为我的学生提供前沿资源,帮助他们深入了解 Generative AI 技术在各个领域的广泛应用。”
伦敦帝国理工学院大脑科学系的Professor Payam Barnaghi教授使用电子病历和医疗健康数据在研究中使用LLMs和generative AI。“NVIDIA Generative AI Teaching Kit 内容是学生学习 AI 和 machine learning 最新发展的绝佳资源,”Barnaghi 说:“由于能够提前访问首批模块,我计划以这些内容为基础,在神经科学课程的机器学习中教授高级主题。
鉴于生成式 AI 的快速发展,教育工作者可以预期教学材料会随着时间的推移而更新。NVIDIA 致力于提供高端教育资源,并欢迎反馈,以便不断改进内容。
开始使用
教育工作者可以加入
NVIDIA DLI 教学套件计划
,免费获取第一版 Generative AI 教学套件和其他套件。
关于 NVIDIA 深度学习培训中心
NVIDIA 深度学习研究所
(DLI) 为各种学习需求提供资源,从学习材料到自定进度和实时培训,再到教育工作者计划。个人、团队、组织、教育工作者和学生现在可以找到所需的一切资源,以提升他们在 AI、加速计算、加速数据科学、图形、模拟等方面的知识水平。 |
https://developer.nvidia.com/blog/json-lines-reading-with-pandas-100x-faster-using-nvidia-cudf/ | JSON Lines Reading with pandas 100x Faster Using NVIDIA cuDF | JSON is a widely adopted format for text-based information working interoperably between systems, most commonly in web applications and large language models (LLMs). While the JSON format is human-readable, it is complex to process with data science and data engineering tools.
JSON data often takes the form of newline-delimited
JSON Lines
(also known as NDJSON) to represent multiple records in a dataset. Reading JSON Lines data into a dataframe is a common first step in data processing.
In this post, we compare the performance and functionality of Python APIs for converting JSON Lines data into a dataframe using the following libraries:
pandas
DuckDB
pyarrow
RAPIDS cuDF pandas Accelerator Mode
We demonstrate good scaling performance and high data processing throughput with the JSON reader in cudf.pandas, especially for data with a complex schema. We also review the versatile set of JSON reader options in cuDF that improve compatibility with Apache Spark and empower Python users to handle quote normalization, invalid records, mixed types and other JSON anomalies.
JSON parsing versus JSON reading
When it comes to JSON data processing, it’s important to distinguish between parsing and reading.
JSON parsers
JSON parsers, such as
simdjson
, convert a buffer of character data into a vector of tokens. These tokens represent the logical components of JSON data, including field names, values, array begin/end, and map begin/end. Parsing is a critical first step in extracting information from JSON data, and significant research has been dedicated to reaching high parsing throughput.
To use information from JSON Lines in data processing pipelines, the tokens must often be converted into a Dataframe or columnar format, such as
Apache Arrow
.
JSON readers
JSON readers, such as
pandas.read_json
convert input character data into a Dataframe organized by columns and rows. The reader process begins with a parsing step and then detects record boundaries, manages the top-level columns and nested struct or list child columns, handles missing and null fields, infers data types, and more.
JSON readers convert unstructured character data into a structured Dataframe, making JSON data compatible with downstream applications.
JSON Lines reader benchmarking
JSON Lines is a flexible format for representing data. Here are some important properties of JSON data:
Number of records per file
Number of top level columns
Depth of struct or list nesting for each column
Data types of values
Distribution of string lengths
Fraction of missing keys
For this study, we held the record count fixed at 200K and swept the column count from 2 to 200, exploring a range of complex schemas. The four data types in use are as follows:
list<int>
and
list<str>
with two child elements
struct<int>
and
struct<str>
with a single child element
Table 1 shows the first two columns of the first two records for data types, including
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Data type
Example records
list<int>
{"c0":[848377,848377],"c1":[164802,164802],...\n{"c0":[732888,732888],"c1":[817331,817331],...
list<str>
{"c0":["FJéBCCBJD","FJéBCCBJD"],"c1":["CHJGGGGBé","CHJGGGGBé"],...\n{"c0":["DFéGHFéFD","DFéGHFéFD"],"c1":["FDFJJCJCD","FDFJJCJCD"],...
struct<int>
{"c0":{"c0":361398},"c1":{"c0":772836},...\n{"c0":{"c0":57414},"c1":{"c0":619350},...
struct<str>
{"c0":{"c0":"FBJGGCFGF"},"c1":{"c0":"ïâFFéâJéJ"},...\n{"c0":{"c0":"éJFHDHGGC"},"c1":{"c0":"FDâBBCCBJ"},...
Table 1. Example JSON Lines character data
Table 1 shows the first two columns of the first two records for data types, including
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Performance statistics were collected on the 25.02 branch of
cuDF
and with the following library versions: pandas 2.2.3, duckdb 1.1.3, and pyarrow 17.0.0. The execution hardware used an NVIDIA H100 Tensor Core 80 GB HBM3 GPU and Intel Xeon Platinum 8480CL CPU with 2TiB of RAM. Timing was collected from the third of three repetitions, to avoid initialization overhead and ensure that the input file data was present in the OS page cache.
In addition to the zero code change cudf.pandas, we also collected performance data from
pylibcudf
, a Python API for the
libcudf
CUDA C++ computation core. The runs with pylibcudf used a CUDA async memory resource through RAPIDS Memory Manager (RMM). Throughput values were computed using the JSONL input file size and the reader runtime of the third repetition.
Here are some examples from several Python libraries for invoking the JSON Lines reader:
# pandas and cudf.pandas
import pandas as pd
df = pd.read_json(file_path, lines=True)
# DuckDB
import duckdb
df = duckdb.read_json(file_path, format='newline_delimited')
# pyarrow
import pyarrow.json as paj
table = paj.read_json(file_path)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = plc.io.json.JsonReaderOptions.builder(s).lines(True).build()
df = plc.io.json.read_json(opt)
JSON Lines reader performance
Overall, we found a wide range of performance characteristics for the JSON readers available in Python, with overall runtimes varying from 1.5 seconds to almost 5 minutes.
Table 2 shows the sum of the timing data from seven JSON reader configurations when processing 28 input files with a total file size of 8.2 GB:
Using cudf.pandas for JSON reading shows about 133x speedup over pandas with the default engine and 60x speedup over pandas with the pyarrow engine.
DuckDB and pyarrow show good performance as well, with about 60 seconds total time for DuckDB, and 6.9 seconds for pyarrow with block size tuning.
The fastest time comes from pylibcudf at 1.5 seconds, showing about 4.6x speedup over pyarrow with
block_size
tuning.
Reader label
Benchmark runtime
(sec)
Comment
cudf.pandas
2.1
Using
-m cudf.pandas
from the command line
pylibcudf
1.5
pandas
281
pandas-pa
130
Using the pyarrow engine
DuckDB
62.9
pyarrow
15.2
pyarrow-20MB
6.9
Using a 20 MB
block_size
value
Table 2. Sum of timing data for JSON reading of 28 input files
Table 2 includes the input columns counts 2, 5, 10, 20, 50, 100, and 200, and the data types
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Zooming into the data by data type and column count, we found that JSON reader performance varies over a wide range based on the input data details and the data processing library, from 40 MB/s to 3 GB/s for CPU-based libraries and from 2–6 GB/s for the GPU-based cuDF.
Figure 1 shows the data processing throughput based on input size for 200K rows and 2–200 columns, with input data sizes varying from about 10 MB to 1.5 GB.
Figure 1. JSON Lines reading throughput by number of input columns
In Figure 1, each subplot corresponds to the data type of the input columns. File size annotations align to the x-axis.
For cudf.pandas
read_json
, we observed 2–5 GB/s throughput that increased with larger column count and input data size. We also found that the column data type does not significantly affect throughput. The pylibcudf library shows about 1–2 GB/s higher throughput than cuDF-python, due to lower Python and pandas semantic overhead.
For pandas
read_json
, we measured about 40–50 MB/s throughput for the default UltraJSON engine (labeled as “pandas-uj”). Using the pyarrow engine (
engine="pyarrow"
) provided a boost up to 70–100 MB/s due to faster parsing (pandas-pa). The pandas JSON reader performance appears to be limited by the need to create Python list and dictionary objects for each element in the table.
For DuckDB
read_json
, we found about 0.5–1 GB/s throughput for
list<str>
and
struct<str>
processing with lower values <0.2 GB/s for
list<int>
and
struct<int>
. Data processing throughput remained steady over the range of column counts.
For pyarrow
read_json
, we measured data processing throughputs up to 2–3 GB/s for 5-20 columns, and lower throughput values as column count increased to 50 and above. We found data type to have a smaller impact on reader performance than column count and input data size. For column counts of 200 and a record size of ~5 KB per row, throughput dropped to about 0.6 GB/s.
Raising the pyarrow
block_size
reader option to 20 MB (pyarrow-20MB) led to increased throughput for column counts 100 or more, while also degrading throughput for column counts 50 or fewer.
Overall, DuckDB primarily showed throughput variability due to data types, whereas cuDF and pyarrow primarily showed throughput variability due to column count and input data size. The GPU-based cudf.pandas and pylibcudf showed the highest data processing throughput for complex list and struct schema, especially for input data sizes >50 MB.
JSON Lines reader options
Given the text-based nature of the JSON format, JSON data often includes anomalies that result in invalid JSON records or don’t map well to a dataframe. Some of these JSON anomalies include single-quoted fields, cropped or corrupted records, and mixed struct or list types. When these patterns occur in your data, they can break the JSON reader step in your pipeline.
Here are some examples of these JSON anomalies:
# 'Single quotes'
# field name "a" uses single quotes instead of double quotes
s = '{"a":0}\n{\'a\':0}\n{"a":0}\n'
# ‘Invalid records'
# the second record is invalid
s = '{"a":0}\n{"a"\n{"a":0}\n'
# 'Mixed types'
# column "a" switches between list and map
s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n'
To unlock advanced JSON reader options in cuDF, we recommend incorporating cuDF-Python (
import cudf
) and pylibcudf into your workflow. If single-quoted field names or string values appear in your data, cuDF provides a reader option to normalize single quotes into double quotes. cuDF supports this feature to provide compatibility with the
allowSingleQuotes
option that is enabled by default in Apache Spark.
If invalid records appear in your data, cuDF and DuckDB both provide error recovery options to replace these records with null. When error handling is enabled, if a record generates a parsing error, all of the columns for the corresponding row are marked as null.
If mixed list and struct values are associated with the same field name in your data, cuDF provides a dtype schema override option to coerce the datatype to string. DuckDB uses a similar approach by inferring a
JSON
data type.
For mixed types, the pandas library has perhaps the most faithful approach, using Python list and dictionary objects to represent the input data.
Here is an example in cuDF-Python and pylibcudf that shows the reader options, including a dtype schema override for column name “a”. For more information, see
cudf.read_json
and
pylibcudf.io.json.read_json
.
For pylibcudf, the
JsonReaderOptions
object can be configured either before or after the
build
function.
# cuDF-python
import cudf
df = cudf.read_json(
file_path,
dtype={"a":str},
on_bad_lines='recover',
lines=True,
normalize_single_quotes=True
)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = (
plc.io.json.JsonReaderOptions.builder(s)
.lines(True)
.dtypes([("a",plc.types.DataType(plc.types.TypeId.STRING), [])])
.recovery_mode(plc.io.types.JSONRecoveryMode.RECOVER_WITH_NULL)
.normalize_single_quotes(True)
.build()
)
df = plc.io.json.read_json(opt)
Table 3 summarizes the behavior of several JSON readers with Python APIs for a few common JSON anomalies. Crosses denote that the reader function raised an exception, and checkmarks denote that the library successfully returned a Dataframe. These results may change in future versions of the libraries.
Single quotes
Invalid records
Mixed types
cuDF-Python, pylibcudf
✔️ Normalize to double quotes
✔️ Set to null
✔️ Represent as a string
pandas
❌ Exception
❌ Exception
✔️ Represent as a Python object
pandas (
engine="pyarrow
“
)
❌ Exception
❌ Exception
❌ Exception
DuckDB
❌ Exception
✔️ Set to null
✔️ Represent as a JSON string-like type
pyarrow
❌ Exception
❌ Exception
❌ Exception
Table 3. JSON reader outcomes when reading JSONL files with anomalies including: single quotes, mixed types and invalid records
cuDF supports several additional JSON reader options that are critical for compatibility with Apache Spark conventions, and now are available to Python users as well. Some of these options include:
Validation rules for numbers and strings
Custom record delimiters
Column pruning by the schema provided in dtype
Customization of
NaN
values
For more information, see the libcudf C++ API documentation on
json_reader_options
.
For more information about multi-source reading for efficiently processing many smaller JSON Lines files, or byte-range support for breaking up large JSON Lines files, see
GPU-Accelerated JSON Data Processing with RAPIDS
.
Summary
RAPIDS cuDF provides powerful, flexible, and accelerated tools for working with JSON data in Python.
GPU-accelerated JSON data processing is also available in RAPIDS Accelerator For Apache Spark, starting in the 24.12 release. For information, see
Accelerating JSON Processing on Apache Spark with GPUs
.
For more information, see the following resources:
cuDF documentation
/rapidsai/cudf
GitHub repo
RAPIDS Docker containers
(available for releases and nightly builds)
Accelerate Data Science Workflows with Zero Code Changes
DLI course
Mastering the cudf.pandas Profiler for GPU Acceleration | https://developer.nvidia.com/zh-cn/blog/json-lines-reading-with-pandas-100x-faster-using-nvidia-cudf/ | 使用 NVIDIA cuDF,pandas 读取 JSON 行文件速度提升100倍 | JSON 是一种广泛采用的格式,用于在系统之间 (通常用于 Web 应用和大语言模型 (LLMs)) 以互操作方式运行的基于文本的信息。虽然 JSON 格式是人类可读的,但使用数据科学和数据工程工具进行处理十分复杂。
JSON
数据通常采用换行分隔的 JSON 行
(也称为 NDJSON) 的形式来表示数据集中的多个记录。将 JSON 行数据读入数据帧是数据处理中常见的第一步。
在本文中,我们比较了使用以下库将 JSON 行数据转换为数据帧的 Python API 的性能和功能:
pandas
DuckDB
pyarrow
RAPIDS cuDF pandas
加速器模式
我们使用 cudf.pandas 中的 JSON 读取器展示了良好的扩展性能和高数据处理吞吐量,特别是对于具有复杂模式的数据。我们还审查了 cuDF 中的一组通用 JSON 读取器选项,这些选项可提高与 Apache Spark 的兼容性,并使 Python 用户能够处理引文归一化、无效记录、混合类型和其他 JSON 异常。
JSON 解析与 JSON 读取
当涉及到 JSON 数据处理时,区分解析和读取非常重要。
JSON 解析器
JSON 解析器 (例如
simdjson
) 可将字符数据缓冲区转换为令牌向量。这些令牌代表 JSON 数据的逻辑组件,包括字段名、值、数组开始/结束和映射开始/结束。解析是从 JSON 数据中提取信息的关键第一步,并且我们致力于实现高解析吞吐量。
要在数据处理工作流中使用来自 JSON 行的信息,必须经常将令牌转换为 Dataframe 或列式格式,例如
Apache Arrow
。
JSON 阅读器
JSON 读取器 (例如
pandas.read_json
) 将输入字符数据转换为按列和行排列的 Dataframe。读取器流程从解析步骤开始,然后检测记录边界、管理顶层列和嵌套结构体或列表子列、处理缺失和空字段、推理数据类型等。
JSON 读取器可将非结构化字符数据转换为结构化 Dataframe,从而使 JSON 数据与下游应用兼容。
JSON Lines 读取器基准测试
JSON Lines 是一种灵活的数据表示格式。以下是 JSON 数据的一些重要属性:
每个文件的记录数
顶层列的数量
每列的结构体或列表嵌套深度
值的数据类型
字符串长度分布
缺少密钥的百分比
在这项研究中,我们将记录计数固定为 200K,并将列计数从 2 扫至 200,同时探索了一系列复杂的模式。使用的四种数据类型如下所示:
包含两个子元素的
list<int>
和
list<str>
包含单个子元素的
struct<int>
和
struct<str>
表 1 显示了前两列数据类型记录的前两列,包括
list<int>
、
list<str>
、
struct<int>
和
struct<str>
。
数据类型
记录示例
list<int>
{"c0":[848377,848377],"c1":[164802,164802],...\n{"c0":[732888,732888],"c1":[817331,817331],...
list<str>
{"c0":["FJéBCCBJD","FJéBCCBJD"],"c1":["CHJGGGGBé","CHJGGGGBé"],...\n{"c0":["DFéGHFéFD","DFéGHFéFD"],"c1":["FDFJJCJCD","FDFJJCJCD"],...
struct<int>
{"c0":{"c0":361398},"c1":{"c0":772836},...\n{"c0":{"c0":57414},"c1":{"c0":619350},...
struct<str>
{"c0":{"c0":"FBJGGCFGF"},"c1":{"c0":"ïâFFéâJéJ"},...\n{"c0":{"c0":"éJFHDHGGC"},"c1":{"c0":"FDâBBCCBJ"},...
表 1. JSON 行字符数据示例
表 1 显示了前两列数据类型记录的前两列,包括
list<int>
、
list<str>
、
struct<int>
和
struct<str>
性能统计数据在
cuDF
的 25.02 分支上收集,并包含以下库版本:pandas 2.2.3、duckdb 1.1.3 和 pyarrow 17.0.0。执行硬件使用 NVIDIA H100 Tensor Core 80 GB HBM3 GPU 和 Intel Xeon Platinum 8480CL CPU 以及 2TiB 的 RAM。计时数据从三次重复的第三次中收集,以避免初始化开销,并确保输入文件数据存在于操作系统页面缓存中。
除了零代码更改 cudf.pandas 之外,我们还从
py
libcudf
(用于
libcudf
CUDA C++计算核心的 Python API) 收集了性能数据。
py
libcudf
运行通过 RAPIDS 内存管理器 (RMM) 使用 CUDA 异步内存资源。使用 JSONL 输入文件大小和第三次重复的读取器运行时计算吞吐量值。
以下是来自多个 Python 库的一些调用 JSON 行读取器的示例:
# pandas and cudf.pandas
import pandas as pd
df = pd.read_json(file_path, lines=True)
# DuckDB
import duckdb
df = duckdb.read_json(file_path, format='newline_delimited')
# pyarrow
import pyarrow.json as paj
table = paj.read_json(file_path)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = plc.io.json.JsonReaderOptions.builder(s).lines(True).build()
df = plc.io.json.read_json(opt)
JSON 行读取器性能
总体而言,我们发现 Python 中的 JSON 读取器具有各种性能特征,总体运行时间从 1.5 秒到近 5 分钟不等。
表 2 显示了在处理 28 个输入文件 (总文件大小为 8.2 GB) 时,来自 7 个 JSON 读取器配置的定时数据的总和:
使用 cudf.pandas 进行 JSON 读取显示,与使用默认引擎的 pandas 相比,速度提高了 133 倍,使用 pyarrow 引擎的 pandas 速度提高了 60 倍。
DuckDB 和 pyarrow 也表现出良好的性能,在调整块大小时,DuckDB 的总时间约为 60 秒,而 pyarrow 的总时间为 6.9 秒。
pylibcudf 生成的最快时间为 1.5 秒,与 pyarrow 相比,使用
block_size
调优的速度提高了约 4.6 倍。
阅读器标签
基准运行时
(秒)
评论
cudf.pandas
2.1
在命令行中使用
-m cudf.pandas
pylibcudf
1.5
pandas
271
pandas-pa
130
使用 pyarrow 引擎
DuckDB
62.9
pyarrow
15.2
pyarrow-20MB
6.9
使用 20 MB 的
block_size
值
表 2、用于 JSON 读取 28 个输入文件的定时数据总和
表 2 包括输入列计数 2、5、10、20、50、100 和 200,以及数据类型
list<int>
、
list<str>
、
struct<int>
和
struct<str>
通过按数据类型和列数量放大数据,我们发现 JSON 读取器的性能因输入数据详细信息和数据处理库的不同而差异很大,基于 CPU 的库的性能介于 40 MB/s 到 3 GB/s 之间,而基于 GPU 的 cuDF 的性能介于 2–6 GB/s 之间。
图 1 显示了基于 200K 行、2–200 列输入大小的数据处理吞吐量,输入数据大小在约 10 MB 到 1.5 GB 之间变化。
图 1. JSON Lines 读取吞吐量按输入列数量
在图 1 中,每个子图均对应输入列的数据类型。文件大小标注与 x 轴对齐。
对于
cudf.pandas read_json
,我们观察到,随着列数量和输入数据大小的增加,吞吐量增加了 2–5 GB/秒。我们还发现,列数据类型不会对吞吐量产生重大影响。由于 Python 和 pandas 语义用度较低,pylibcudf 库的吞吐量比 cuDF-python 高约 1–2 GB/秒。
对于
pandas read_json
,我们测量了默认 UltraJSON 引擎 (标记为“pandas-uj”) 的吞吐量约为 40–50 MB/s。由于解析速度更快 (pandas-pa),使用 pyarrow 引擎 (
engine="pyarrow"
) 可将速度提升高达 70–100 MB/s。由于需要为表中的每个元素创建 Python 列表和字典对象,因此 pandas JSON 读取器的性能似乎受到限制。
对于 DuckDB
read_json
,我们发现
list<str>
和
struct<str>
处理的吞吐量约为 0.5–1 GB/s,而
list<int>
和
struct<int>
的较低值 < 0.2 GB/s。数据处理吞吐量在列数量范围内保持稳定。
对于 pyarrow
read_json
,我们测量了 5-20 列的高达 2–3 GB/s 的数据处理吞吐量,以及随着列数量增加到 50 及以上而降低的吞吐量值。我们发现,与列数量和输入数据大小相比,数据类型对读取器性能的影响较小。如果列数量为 200,且每行的记录大小约为 5 KB,吞吐量将下降到约 0.6 GB/s。
将 pyarrow
block_size
reader 选项提升至 20 MB (pyarrow-20MB) 会导致列数量增加 100 或以上的吞吐量增加,同时还会降低 50 或以下列数量的吞吐量。
总体而言,DuckDB 主要因数据类型而显示吞吐量可变性,而 cuDF 和 pyarrow 主要因列数量和输入数据大小而显示吞吐量可变性。基于 GPU 的 cudf.pandas 和 pylibcudf 为复杂列表和结构模式(尤其是输入数据大小 > 50 MB)提供了超高的数据处理吞吐量。
JSON 行读取器选项
鉴于 JSON 格式基于文本的特性,JSON 数据通常包含异常,导致 JSON 记录无效或无法很好地映射到数据帧。其中一些 JSON 异常包括单引号字段、已裁剪或损坏的记录,以及混合结构或列表类型。当数据中出现这些模式时,它们可能会中断工作流中的 JSON 读取器步骤。
以下是这些 JSON 异常的一些示例:
# 'Single quotes'
# field name "a" uses single quotes instead of double quotes
s = '{"a":0}\n{\'a\':0}\n{"a":0}\n'
# ‘Invalid records'
# the second record is invalid
s = '{"a":0}\n{"a"\n{"a":0}\n'
# 'Mixed types'
# column "a" switches between list and map
s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n'
要在 cuDF 中解锁高级 JSON 读取器选项,我们建议您将 cuDF-Python (
import cudf
) 和 pylibcudf 集成到您的工作流中。如果数据中出现单引号字段名称或字符串值,cuDF 会提供读取器选项,用于将单引号归一化为双引号。cuDF 支持此功能,可与 Apache Spark 中默认启用的
allowSingleQuotes
选项兼容。
如果您的数据中出现无效记录,cuDF 和 DuckDB 都会提供错误恢复选项,将这些记录替换为 null。启用错误处理后,如果记录生成解析错误,则相应行的所有列均标记为 null。
如果混合 list 和 struct 值与数据中的相同字段名相关联,cuDF 提供一个 dtype 模式覆盖选项,以将数据类型强制转换为字符串。DuckDB 使用类似的方法来推理
JSON
数据类型。
对于混合类型,pandas 库可能是最可靠的方法,使用 Python 列表和字典对象来表示输入数据。
以下是 cuDF-Python 和 pylibcudf 中的示例,其中显示了读取器选项,包括列名称“a”的 dtype 模式覆盖。如需了解更多信息,请参阅
cudf.read_json
和
pylibcudf.io.json.read_json
。
对于 pylibcudf,可以在
build
函数之前或之后配置
JsonReaderOptions
对象。
# cuDF-python
import cudf
df = cudf.read_json(
file_path,
dtype={"a":str},
on_bad_lines='recover',
lines=True,
normalize_single_quotes=True
)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = (
plc.io.json.JsonReaderOptions.builder(s)
.lines(True)
.dtypes([("a",plc.types.DataType(plc.types.TypeId.STRING), [])])
.recovery_mode(plc.io.types.JSONRecoveryMode.RECOVER_WITH_NULL)
.normalize_single_quotes(True)
.build()
)
df = plc.io.json.read_json(opt)
表 3 总结了使用 Python API 的多个 JSON 读取器针对一些常见 JSON 异常的行为。交叉表示读取器函数引发异常,勾号表示库已成功返回 Dataframe。在未来版本的库中,这些结果可能会发生变化。
单引号
无效记录
混合类型
cuDF-Python、pylibcudf
归一化为双引号
设置为 null
表示为字符串
pandas
*例外
*例外
表示为 Python 对象
pandas (
engine="pyarrow
“
)
*例外
*例外
*例外
DuckDB
*例外
设置为 null
表示为类似 JSON 字符串的类型
pyarrow
*例外
*例外
*例外
表 3、读取异常情况 (包括单引号、混合类型和无效记录) 的 JSONL 文件时的 JSON 读取器结果
cuDF 支持多个额外的 JSON 读取器选项,这些选项对于与 Apache Spark 惯例的兼容性至关重要,现在也可供 Python 用户使用。其中一些选项包括:
数字和字符串的验证规则
自定义记录分隔符
根据 dtype 中提供的模式进行列剪枝
自定义
NaN
值
有关更多信息,请参阅有关
json_reader_options
的 libcudf C++ API 文档。
有关多源读取以高效处理许多较小的 JSON 行文件的更多信息,或有关分解大型 JSON 行文件的字节范围支持的更多信息,请参阅使用
RAPIDS 进行 GPU 加速的 JSON 数据处理
。
总结
RAPIDS cuDF 为在 Python 中处理 JSON 数据提供了功能强大、灵活且加速的工具。
从 24.12 版本开始,您还可以在适用于 Apache Spark 的 RAPIDS Accelerator 中使用 GPU 加速的 JSON 数据处理功能。有关信息,请参阅
使用 GPU 在 Apache Spark 上加速 JSON 处理
。
有关更多信息,请参阅以下资源:
cuDF 文档
/rapidsai/cudf
GitHub 存储库
RAPIDS Docker 容器 (可用于版本和夜间构建)
零代码更改加速数据科学工作流程
DLI 课程
掌握用于 GPU 加速的 cudf.pandas Profiler |
https://developer.nvidia.com/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/ | Understanding the Language of Life’s Biomolecules Across Evolution at a New Scale with Evo 2 | AI has evolved
from an experimental curiosity to a driving force within biological research. The convergence of deep learning algorithms, massive omics datasets, and automated laboratory workflows has allowed scientists to tackle problems once thought intractable—from rapid protein structure prediction to generative drug design, increasing the need for
AI literacy among scientists
. With this momentum, we find ourselves on the cusp of the next paradigm shift: the emergence of powerful AI foundation models purpose-built for biology.
These new models promise to unify disparate data sources—genomic sequences, RNA and proteomic profiles, and, in some cases, scientific literature—into a single, coherent understanding of life at the molecular, cellular, and systems levels. Learning biology’s language and structure opens doors to transformative applications, such as smarter drug discovery, rational enzyme design, and disease mechanism elucidation.
As we set the stage for this next wave of AI-driven breakthroughs, it is clear that these foundation models will not merely accelerate progress; they stand poised to redefine what is possible in biological research.
Test
Evo 2 as an NVIDIA BioNeMo NIM microservice
for free with the
/NVIDIA/bionemo-examples
example notebook.
Explore a complete
reference workflow for protein design
.
Start training Evo 2 on your data today in
BioNeMo Framework
.
Stay up to date with the
latest NVIDIA BioNeMo platform updates
.
A leap forward in sequence modeling and design from molecular to genome-scale
The
first Evo model
from November 2024 represented a groundbreaking milestone in genomic research, introducing a foundation model capable of analyzing and generating biological sequences across DNA, RNA, and proteins.
Published at a time when most models were restricted to single modalities or short contexts, Evo is known for its ability to operate across scales—ranging from molecular to genomic—using a unified approach. Trained on 2.7M prokaryotic and phage genomes, encompassing 300B nucleotide tokens, Evo delivered single-nucleotide resolution across many biological evolution and function tasks.
At the core of Evo’s success is its innovative
StripedHyena
architecture (Figure 1), a hybrid model combining 29 Hyena layers, a new type of deep learning architecture designed to handle long sequences of information without relying on traditional attention mechanisms that are common to Transformer architectures. Instead it uses a combination of convolutional filters and gates.
This design overcame the limitations of traditional Transformer models, enabling Evo to handle long contexts of up to 131,072 tokens efficiently. The result was a model capable of connecting small sequence changes to system-wide and organism-level impacts, bridging the gap between molecular biology and evolutionary genomics.
Figure 1
.
Evo and Evo 2 AI model architecture
Evo’s predictive capabilities set new standards for biological modeling. It achieved competitive performance in several zero-shot tasks, including predicting the fitness effects of mutations on proteins, non-coding RNAs, and regulatory DNA, providing invaluable insights for synthetic biology and precision medicine.
Evo also demonstrated remarkable generative capabilities, designing functional CRISPR-Cas systems and transposons. These outputs were validated experimentally, proving that Evo could predict and design novel biological systems with real-world utility.
Evo represents a notable advancement in
integrating multimodal and multiscale biological understanding into a single model
. Its ability to generate genome-scale sequences and predict gene essentiality across entire genomes marked a leap forward in our capacity to analyze and engineer life.
Evo’s milestones were not just its technical achievements but also its vision. This unified framework combined biology’s vast complexity with cutting-edge AI to accelerate discovery and innovation in life sciences.
Learning the language of life across evolution
Evo 2 is the next generation of this line of research in genomic modeling, building on the success of Evo with expanded data, enhanced architecture, and superior performance.
Evo 2 can provide insights into three essential biomolecules—DNA, RNA, and protein— and all three domains of life: Eukarya, Prokarya, and Archaea. Trained on a dataset of 8.85T nucleotides from 15,032 eukaryotic genomes and 113,379 prokaryotic genomes, this training dataset covers diverse species, enabling unprecedented cross-species generalization and significantly broadening its scope compared to Evo, which focused solely on prokaryotic genomes.
Evo 2 uses a new and improved
StripedHyena 2
architecture, extended up to 40B parameters, enhancing the model’s training efficiency and ability to capture long-range dependencies with context lengths of 1M tokens. StripedHyena 2, thanks to its multihybrid design based on convolutions, trains significantly faster than Transformers and other hybrid models using linear attention or state-space models.
The largest Evo 2 model was trained using 2,048
NVIDIA H100 GPUs
using
NVIDIA DGX Cloud
on AWS. As part of
NVIDIA’s partnership with Arc
, they gained access to this high-performance, fully managed AI platform optimized for large-scale, distributed training with NVIDIA AI software and expertise.
These advances mark a significant increase from Evo’s 7B parameters and a 131,000-token context length, positioning Evo 2 as a leader in multimodal and multiscale biological modeling (Table 1).
Feature
Evo
Evo 2
Genomic Training Data
Bacterial
+ bacteriophage
(300B nucleotides)
All domains of life
+ bacteriophage
(9T nucleotides)
Model Parameters
7B
7B + 40B
Context Length
131,072 tokens
Up to 1,048,576 tokens
Modalities
DNA, RNA, protein
DNA, RNA, protein
Safety
Viruses of Eukaryotes excluded
Viruses of Eukaryotes excluded
Applications
Limited cross-species tasks
Broad cross-species applications
Table 1.
Key features of Evo 2 and Evo
Evo 2’s expanded training data and refined architecture empower it to excel across various biological applications. Its multimodal design integrates DNA, RNA, and protein data, enabling zero-shot performance on tasks like mutation impact prediction and genome annotation. Evo 2 also fundamentally improves Evo by including eukaryotic genomes, enabling deeper insights into human diseases, agriculture, and environmental science.
Evo 2’s predictive capabilities outperform specialized models across diverse tasks:
Variant impact analysis
: Achieves state-of-the-art accuracy in predicting the functional effects of mutations across species zero-shot, including human and non-coding variants.
Gene essentiality
: Identifies essential genes in prokaryotic and eukaryotic genomes, validated against experimental datasets, bridging the gap between molecular and systems biology tasks.
Generative capabilities
: Designs complex biological systems, such as genome-scale prokaryotic and eukaryotic sequences, and the controllable design of chromatin accessibility, demonstrating new capabilities for biological design with real-world applicability.
Using the NVIDIA Evo 2 NIM microservice
The
NVIDIA Evo 2 NIM microservice
is useful for generating a variety of biological sequences, with an API that provides settings to adjust tokenization, sampling, and temperature parameters:
# Define JSON example human L1 retrotransposable element sequence
example = {
# nucleotide sequence to be analyzed
"sequence": "GAATAGGAACAGCTCCGGTCTACAGCTCCCAGCGTGAGCGACGCAGAAGACGGTGATTTCTGCATTTCCATCTGAGGTACCGGGTTCATCTCACTAGGGAGTGCCAGACAGTGGGCGCAGGCCAGTGTGTGTGCGCACCGTGCGCGAGCCGAAGCAGGGCGAGGCATTGCCTCACCTGGGAAGCGCAAGGGGTCAGGGAGTTCCCTTTCCGAGTCAAAGAAAGGGGTGATGGACGCACCTGGAAAATCGGGTCACTCCCACCCGAATATTGCGCTTTTCAGACCGGCTTAAGAAACGGCGCACCACGAGACTATATCCCACACCTGGCTCAGAGGGTCCTACGCCCACGGAATC",
"num_tokens": 102, # number of tokens to generate
"top_k": 4, # only predict top 4 most likely outcomes per token
"top_p": 1.0, # include 100% cumulative prob results in sampling
"temperature": 0.7, # add variability (creativity) to predictions
"
": True, # enable more diverse outputs
"enable_logits": False, # disable raw model output (logits)
}
# Retrieve the API key from the environment
key = os.getenv("NVCF_RUN_KEY")
# Send the example sequence and parameters to the Evo 2 API
r = requests.post(
# Example URL for the Evo 2 model API.
url=os.getenv("URL","https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate")
# Authorization headers to authenticate with the API
headers={"Authorization": f"Bearer {key}"},
# The data payload (sequence and parameters) sent as JSON
json=example,
)
For more information about the API output for various prompts, see the
NVIDIA BioNeMo Framework
documentation.
Evo 2 can also be fine-tuned using the open-source
NVIDIA BioNeMo Framework
, which offers robust tools for adapting pretrained models such as Evo 2 to specialized tasks in BioPharma:
# Prepare raw sequence data for training based on a YAML config file
preprocess_evo2 -c data_preproc_config.yaml
# Trains the Evo 2 model with preprocessed data and parallelism across multiple GPUs
torchrun --nproc-per-node=8 --no-python train_Evo 2 -d data_train_config.yaml --num-nodes=1 --devices=8 --max-steps=100 --val-check-interval=25 --experiment-dir=/workspace/bionemo2/model/checkpoints/example --seq-length=8192 --tensor-parallel-size=4 --pipeline-model-parallel-size=1 --context-parallel-size=2 --sequence-parallel --global-batch-size=8 --micro-batch-size=1 --model-size=7b --fp8 --tflops-callback
# Optional Fine-tuning: Add this argument to start from a pretrained model
# --ckpt-dir=/path/to/pretrained_checkpoint
Evo 2 and the future of AI in biology
AI is poised to rapidly transform biological research, enabling breakthroughs previously thought to be decades away. Evo 2 represents a significant leap forward in this evolution, introducing a genomic foundation model capable of analyzing and generating DNA, RNA, and protein sequences at unprecedented scales.
While Evo excelled in predicting mutation effects and gene expression in prokaryotes, the capabilities of Evo 2 are much broader, with enhanced cross-species generalization, making it a valuable tool for studying eukaryotic biology, human diseases, and evolutionary relationships.
Evo 2’s ability to perform zero-shot tasks, from identifying genes that drive cancer risk to designing complex biomolecular systems, showcases its versatility. Including long-context dependencies enables it to uncover patterns across genomes, providing multi-modal and multi-scale insights that are pivotal for advancements in precision medicine, agriculture, and synthetic biology.
As the field moves forward, models like Evo 2 set the stage for a future where AI deciphers life’s complexity and is also used to design new useful biological systems. These advancements align with broader trends in AI-driven science, where foundational models are tailored to domain-specific challenges, unlocking previously unattainable capabilities. Evo 2’s contributions signal a future where AI becomes an indispensable partner in decoding, designing, and reshaping the living world.
For more information about Evo 2, see the
technical report published by the Arc Institute
. Evo 2 is also available within the NVIDIA BioNeMo platform.
Acknowledgments
We’d like to thank the following contributors to the described research for their notable contributions to the ideation, writing, and figure design for this post:
Garyk Brixi
, genetics Ph.D. student at Stanford
Jerome Ku
, machine learning engineer working with the Arc Institute
Michael Poli
, founding scientist at Liquid AI and computer science Ph.D. student at Stanford
Greg Brockman
, co-founder and president of OpenAI
Eric Nguyen
, bioengineering Ph.D. student at Stanford
Brandon Yang
, co-founder of Cartesia AI and computer science Ph.D. student at Stanford (on leave)
Dave Burke
, chief technology officer at the Arc Institute
Hani Goodarzi
, core investigator at the Arc Institute and associate professor of biophysics and biochemistry at the University of California, San Francisco
Patrick Hsu
, co-founder of the Arc Institute, assistant professor of bioengineering, and Deb Faculty Fellow at the University of California, Berkeley
Brian Hie
, assistant professor of chemical engineering at Stanford University, Dieter Schwarz Foundation Stanford Data Science Faculty Fellow, innovation investigator at the Arc Institute, and leader at the Laboratory of Evolutionary Design at Stanford | https://developer.nvidia.com/zh-cn/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/ | 利用 Evo 2 以全新规模跨进化研究生命生物分子的语言 | AI 已从实验好奇心发展为生物学研究的驱动力。
深度学习算法、海量组学数据集和自动化实验室工作流程的融合使
科学家能够解决一度被认为棘手的问题
(从快速蛋白质结构预测到生成式药物设计),从而增加了科学家对 AI 素养的需求。在这一势头下,我们正处于下一次范式转变的边缘:专为生物学打造的强大 AI 基础模型的出现。
这些新模型有望将不同的数据源 (基因组序列、RNA 和蛋白质组谱,在某些情况下还包括科学文献) 统一为分子、细胞和系统级别上对生命的统一、一致的理解。学习生物学的语言和结构为变革性应用打开了大门,例如更智能的药物发现、合理的酶设计和疾病机制阐明。
在我们为下一波 AI 驱动的突破做好准备之际,这些基础模型显然不仅能够加速进展,还将重新定义生物学研究的可能性。
使用/NVIDIA/bionemo-examples
示例 notebook
免费将 Evo 2 测试为 NVIDIA BioNeMo NIM 微服务
。
探索蛋白质设计的完整参考工作流程
。
立即开始在
BioNeMo 框架
中使用您的数据训练 Evo 2
及时了解 NVIDIA BioNeMo 平台的最新动态。
序列建模和设计从分子规模到基因组规模的飞跃
2024 年 11 月推出的
首个 Evo 模型
是基因组研究领域的一个突破性里程碑,它引入了能够分析和生成跨 DNA、RNA 和蛋白质的生物序列的基础模型。
在发布 Evo 时,大多数模型都被限制在单一模式或简短背景下运行,而众所周知,它能够使用统一的方法跨规模 (从分子到基因组) 操作。Evo 基于 270 万个原核细胞和噬菌体基因组 (包含 300 亿个核酸令牌) 进行训练,在许多生物进化和功能任务中提供了单核酸分辨率。
Evo 成功的核心是其创新的 StripedHyena 架构 (图 1),这是一个结合了 29 个 Hyena 层的混合模型,这是一种新型深度学习架构,旨在处理长序列信息,而无需依赖 Transformer 架构中常见的传统注意机制。相反,它使用卷积过滤器和门的组合。
这种设计克服了传统 Transformer 模型的限制,使 Evo 能够高效处理多达 131,072 个令牌的长上下文。最终,该模型能够将微小的序列变化与系统级和有机体级的影响联系起来,弥合分子生物学与进化基因组学之间的差距。
图 1
.
.
.
Evo 和 Evo 2 AI 模型架构
Evo 的预测功能为生物建模树立了新的标准。它在多项零样本任务中取得了具有竞争力的表现,包括预测突变对蛋白质、非编码 RNAs 和调控 DNA 的适应性影响,为合成生物学和精准医学提供宝贵见解。
Evo 还展示了非凡的生成功能,设计了功能齐全的 CRISPR-Cas 系统和转座子。这些输出经过实验验证,证明了 Evo 可以预测和设计具有真实效用的新型生物系统。
Evo
代表着将多模态和多尺度生物理解集成到单个模型中的显著进步
。它能够生成基因组级序列并预测整个基因组的基因本质,这标志着我们分析和工程生命的能力实现了飞跃。
Evo 的里程碑式发展不仅体现在技术成就上,还体现在愿景上。这一统一框架将生物学的庞大复杂性与尖端 AI 相结合,加速了生命科学领域的发现和创新。
学习进化过程中的生命语言
Evo 2 是基因组建模领域这一系列研究的新一代产品,基于 Evo 在扩展数据、增强架构和卓越性能方面取得的成功而构建。
Evo 2 可以深入了解三种基本生物分子 (DNA、RNA 和蛋白质) 以及生命的所有三个领域:真核生物 (Eukarya)、原核生物 (Prokarya) 和古菌 (Archaea)。此训练数据集基于来自 15,032 个真核基因组和 113,379 个原核基因组的 8.85T 核酸数据集进行训练,涵盖不同物种,与仅关注原核基因组的 Evo 相比,实现了前所未有的跨物种泛化,并显著拓宽了其范围。
Evo 2 使用经过改进的全新
StripedHyena 2
架构,该架构可将参数扩展至 40B 个,从而提高模型的训练效率,以及使用 1M 个令牌的上下文长度捕获远程依赖项的能力。
StripedHyena 2
采用基于卷积的多混合设计,其训练速度明显快于 Transformers 和其他使用线性注意力或状态空间模型的混合模型。
最大的 Evo 2 模型使用 AWS 上的
NVIDIA DGX Cloud
使用 2,048 个
NVIDIA H100 GPUs
进行训练。作为
NVIDIA 与 Arc 合作
的一部分,他们获得了对这个高性能、完全托管的 AI 平台的访问权限,该平台利用 NVIDIA AI 软件和专业知识针对大规模分布式训练进行了优化。
这些进步标志着 Evo 的 7B 参数和 131,000 个令牌的上下文长度有了显著增加,使 Evo 2 成为多模态和多尺度生物建模领域的领导者 (表 1)。
特征
Evo
Evo 2
基因组训练数据
细菌 + bacteriophage(300B 核酸)
所有生命领域 + bacteriophage(9T nucleotides)
模型参数
70 亿
70 亿 400 亿
上下文长度
131072 个令牌
最多 1048576 个令牌
模式
DNA、RNA、蛋白质
DNA、RNA、蛋白质
安全性
不包括真核生物病毒
不包括真核生物病毒
应用
跨物种任务受限
广泛的跨物种应用
表 1、Evo 2 和 Evo 的主要特性
Evo 2 的扩展训练数据和优化架构使其能够在各种生物应用中表现卓越。其多模态设计集成了 DNA、RNA 和蛋白质数据,可在执行突变影响预测和基因组标注等任务时实现零采样性能。Evo 2 还通过纳入真核基因组从根本上改进了 Evo,使人们能够更深入地了解人类疾病、农业和环境科学。
Evo 2 的预测能力优于各种任务的专业模型:
变体影响分析
:在零样本预测物种突变 (包括人类和非编码变体) 的功能性影响方面实现出色的准确性。
基因本质
:识别原核和真核基因组中的基本基因,并通过实验数据集进行验证,弥合分子和系统生物学任务之间的差距。
生成功能
:设计复杂的生物系统(例如基因组级的原核细胞序列和真核细胞序列),以及染色质可访问性的可控设计,展示具有现实世界适用性的生物设计新功能。
使用 NVIDIA Evo 2 NIM 微服务
NVIDIA Evo 2 NIM 微服务
可用于生成各种生物序列,其 API 可提供用于调整标记化、采样和温度参数的设置:
# Define JSON example human L1 retrotransposable element sequence
example = {
# nucleotide sequence to be analyzed
"sequence": "GAATAGGAACAGCTCCGGTCTACAGCTCCCAGCGTGAGCGACGCAGAAGACGGTGATTTCTGCATTTCCATCTGAGGTACCGGGTTCATCTCACTAGGGAGTGCCAGACAGTGGGCGCAGGCCAGTGTGTGTGCGCACCGTGCGCGAGCCGAAGCAGGGCGAGGCATTGCCTCACCTGGGAAGCGCAAGGGGTCAGGGAGTTCCCTTTCCGAGTCAAAGAAAGGGGTGATGGACGCACCTGGAAAATCGGGTCACTCCCACCCGAATATTGCGCTTTTCAGACCGGCTTAAGAAACGGCGCACCACGAGACTATATCCCACACCTGGCTCAGAGGGTCCTACGCCCACGGAATC",
"num_tokens": 102, # number of tokens to generate
"top_k": 4, # only predict top 4 most likely outcomes per token
"top_p": 1.0, # include 100% cumulative prob results in sampling
"temperature": 0.7, # add variability (creativity) to predictions
"
": True, # enable more diverse outputs
"enable_logits": False, # disable raw model output (logits)
}
# Retrieve the API key from the environment
key = os.getenv("NVCF_RUN_KEY")
# Send the example sequence and parameters to the Evo 2 API
r = requests.post(
# Example URL for the Evo 2 model API.
url=os.getenv("URL","https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate")
# Authorization headers to authenticate with the API
headers={"Authorization": f"Bearer {key}"},
# The data payload (sequence and parameters) sent as JSON
json=example,
)
有关各种提示的 API 输出的更多信息,请参阅
NVIDIA BioNeMo 框架文档
。
此外,还可以使用开源
NVIDIA BioNeMo Framework
对 Evo 2 进行微调,该框架提供可靠的工具,可根据 BioPharma 中的专业任务调整预训练模型(例如 Evo 2):
# Prepare raw sequence data for training based on a YAML config file
preprocess_evo2 -c data_preproc_config.yaml
# Trains the Evo 2 model with preprocessed data and parallelism across multiple GPUs
torchrun --nproc-per-node=8 --no-python train_Evo 2 -d data_train_config.yaml --num-nodes=1 --devices=8 --max-steps=100 --val-check-interval=25 --experiment-dir=/workspace/bionemo2/model/checkpoints/example --seq-length=8192 --tensor-parallel-size=4 --pipeline-model-parallel-size=1 --context-parallel-size=2 --sequence-parallel --global-batch-size=8 --micro-batch-size=1 --model-size=7b --fp8 --tflops-callback
# Optional Fine-tuning: Add this argument to start from a pretrained model
# --ckpt-dir=/path/to/pretrained_checkpoint
Evo 2 和生物学领域的 AI 未来
AI 势必会迅速改变生物学研究,实现之前人们认为需要数十年才能实现的突破。Evo 2 代表了这一变革的重大飞跃,它引入了基因组基础模型,能够以超大规模分析和生成 DNA、RNA 和蛋白质序列。
虽然 Evo 在预测原核生物的突变效应和基因表达方面表现优异,但 Evo 2 的功能更为广泛,并增强了跨物种泛化,这使其成为研究真核生物学、人类疾病和进化关系的宝贵工具。
从识别导致癌症风险的基因到设计复杂的生物分子系统,Evo 2 执行零样本击任务的能力充分体现了其通用性。借助长上下文依赖项,AI 能够揭示跨基因组的模式,提供对精准医学、农业和合成生物学的进步至关重要的多模态和多尺度见解。
随着该领域的发展,像 Evo 2 这样的模型为 AI 解读生命复杂性的未来奠定了基础,同时还用于设计新的有用的生物系统。这些进步与 AI 驱动的科学领域的更广泛趋势是一致的,在这些趋势中,基础模型针对特定领域的挑战进行了定制,解锁了以前无法实现的功能。Evo 2 的贡献标志着 AI 将成为解码、设计和重塑生存世界不可或缺的合作伙伴。
有关 Evo 2 的更多信息,请参阅 Arc Institute 发布的技术报告。Evo 2 也可在 NVIDIA BioNeMo 平台中使用。
致谢
在此,
我们要感谢以下参与所述研究的人员,感谢他们为本文的构思、写作和图形设计做出的杰出贡献
:
Garyk Brixi
,斯坦福大学遗传学博士生
与 Arc Institute 合作的机器学习工程师
Jerome Ku
Michael Poli
,Liquid AI 的创始科学家兼斯坦福大学计算机科学博士生
Greg Brockman
,OpenAI 联合创始人兼总裁
Eric Nguyen,
斯坦福大学生物工程博士生
Brandon Yang
,Cartesia AI 联合创始人兼斯坦福大学计算机科学博士生 (休假中)
Dave Burke
,Arc Institute 首席
技术官
Hani Goodarzi
,Arc Institute 核心研究员,加州大学旧金山分校生物物理学和生物化学副教授
Patrick Hsu
,Arc Institute 联合创始人、生物工程助理教授兼加州大学伯克利分校 Deb 教职研究员
Brian Hie
– 斯坦福大学化学工程助理教授、Dieter Schwarz 基金会斯坦福大学数据科学教职人员、Arc Institute 创新研究员、斯坦福大学进化设计实验室负责人 |
https://developer.nvidia.com/blog/using-networkx-jaccard-similarity-and-cugraph-to-predict-your-next-favorite-movie/ | Using NetworkX, Jaccard Similarity, and cuGraph to Predict Your Next Favorite Movie | As the amount of data available to everyone in the world increases, the ability for a consumer to make informed decisions becomes increasingly difficult. Fortunately, large datasets are a beneficial component for recommendation systems, which can make a sometimes overwhelming decision much easier.
Graphs are excellent choices for modeling the relationships inherent in the data that fuel recommendation systems, and NetworkX is a very popular option that many data scientists turn to for graph analytics in Python. NetworkX is easy to learn and use, stocked with a wide breadth of graph algorithms, backed by a large and friendly community, and has copious examples available in notebooks, documents, Stack Overflow, and your favorite LLM.
However, to the disappointment of countless developers that broke into graph analytics with or even because of NetworkX, it famously falls short in performance at the scales used by typical recommendation systems.
This begs the question: Can an effective graph-based recommendation system be written in a few simple lines of Python? More generally, can developers and data scientists have both easy-to-use and high-performance graph analytics?
The answer to both questions is, “Yes.”
Read on to discover how you can create a simple and effective recommendation system in Python using NetworkX, a dataset of 33M movie reviews, the Jaccard Similarity algorithm, and the NVIDIA cuGraph back-end, which provides the >250x speedup necessary for modern large-scale graph data.
The MovieLens dataset
Here’s the most important part of the system: the data. The
MovieLens dataset
is generously made available for public download and is described in more detail in the
README file
. The full set includes about 331K anonymized users reviewing 87K movies, resulting in 34M ratings.
Figure 1. MovieLens data represented as a graph, where the individual ratings easily map to edges between user and movie nodes
Extracting recommendations from the data: bipartite graphs and Jaccard Similarity
The type of graph created from the MovieLens data is a
bipartite graph
because there are only two types of nodes: movies and users. The reviews (edges) can only occur between a user and a movie. This makes it particularly easy to apply the
Jaccard Similarity
algorithm to find similarities between movies.
Jaccard Similarity compares pairs of nodes and computes a similarity coefficient using their relationships in the graph. In this case, movies are related to each other based on how users have chosen to watch and review them.
Figure 2. Jaccard Similarity computes a similarity coefficient using the sizes of the sets of neighbors for the two nodes being compared
Based on the viewing preferences of users, you can see m3 is more similar to m2 than it is to m1, and movies m4 and m1 aren’t similar at all. This system would recommend m2 to someone who likes m3 and wouldn’t recommend m1 to someone who likes m4.
NetworkX makes it easy… for smaller graphs
Not surprisingly, NetworkX supports the type of analysis described earlier, and it’s quite easy to start seeing results in just a few lines of Python. But as you’ll see, performance becomes a limitation for larger-sized graphs—such as those needed for your movie recommendation system—when using NetworkX without the GPU-accelerated cuGraph backend.
I discuss the key pieces of the recommendation system later in this post, but the full source code is available in the
/rapidsai/nx-cugraph
GitHub repo.
Because the Jaccard Similarity algorithm you’re using doesn’t take edge weights into account, it considers all reviews equal. You don’t want movies with low reviews to be recommended, so filter out all reviews under a certain threshold, which has the side effect of making the graph smaller too.
# Create a separate DataFrame containing only "good" reviews (rating >= 3).
good_ratings_df = ratings_df[ratings_df["rating"] >= 3]
good_user_ids = good_ratings_df["userId"].unique()
good_movie_ids = good_ratings_df["movieId"].unique()
If you print the sizes of the data you’re working with, you see that your graph of good reviews is approximately 330K nodes and 28M edges, with an average degree (number of neighbors per node) of 84:
total number of users: 330975
total number of reviews: 33832162
average number of total reviews/user: 102.22
total number of users with good ratings: 329127
total number of good reviews: 27782577
average number of good reviews/user: 84.41
As mentioned earlier, graphs of this size often present a challenge to NetworkX, but GPU acceleration using the cuGraph backend removes the performance limitations often associated with this much data. However, I’ll continue with a CPU-only environment to demonstrate the default performance.
All the following examples were run on a workstation using NetworkX 3.4.2 and a Intel Xeon Platinum 8480CL at 2.0 GHz with 2 TB RAM.
Using a NetworkX graph created from users and good movie reviews, pick a user, find one of their highest rated movies, and use Jaccard Similarity to find other movies like it:
# Pick a user and one of their highly-rated movies
user = good_user_ids[321]
user_reviews = good_user_movie_G[user]
highest_rated_movie = max(
user_reviews,
key=lambda n: user_reviews[n].get("rating", 0)
)
When you look up the node ID in the movie name map, you see that one of this user’s highest rated movies is the animated film,
Mulan
:
highest rated movie for user=289308 is Mulan (1998), id: 1907, rated: {'rating': 5.0}
You can now use Jaccard Similarity to recommend a movie based on the user’s preferences and viewing history:
%%time
# Run Jaccard Similarity
jacc_coeffs = list(nx.jaccard_coefficient(good_user_movie_G, ebunch))
CPU times: user 2min 5s, sys: 15.4 ms, total: 2min 5s
Wall time: 2min 14s
The Jaccard Similarity computation using the default NetworkX implementation ran for over two minutes. Using these results, you can now provide a recommendation.
# Sort by coefficient value, which is the 3rd item in the tuples
jacc_coeffs.sort(key=lambda t: t[2], reverse=True)
# Create a list of recommendations ordered by "best" to "worst" based on the
# Jaccard Similarity coefficients and the movies already seen
movies_seen = list(good_user_movie_G.neighbors(user))
recommendations = [mid for (_, mid, _) in jacc_coeffs
if mid not in movies_seen]
Now you can print the first movie in the sorted list of recommendations:
User ID 289308 might like Tarzan (1999) (movie ID: 2687)
The code is easy and the results look good, but performance holds us back
As you can see, the recommendation seems reasonable; someone who likes
Mulan
seems likely to also enjoy the 1999 Disney animated film
Tarzan
.
However, if the goal was to provide a service or to analyze hundreds or thousands of movies, the two-minute runtime would have you start looking for an alternative to NetworkX. You can see that finding similarities between other movies using this system isn’t any faster:
%%time
# 1196: "Star Wars: Episode V - The Empire Strikes Back (1980)"
print_similar_movies(1196)
movies similar to Star Wars: Episode V - The Empire Strikes Back (1980):
movieId=260, Star Wars: Episode IV - A New Hope (1977)
movieId=1210, Star Wars: Episode VI - Return of the Jedi (1983)
movieId=1198, Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
CPU times: user 13min 47s, sys: 71.8 ms, total: 13min 47s
Wall time: 11min 30s
Here’s another example:
%%time
# 318: "Shawshank Redemption, The (1994)"
print_similar_movies(318)
movies similar to "Shawshank Redemption, The (1994)":
movieId=296, Pulp Fiction (1994)
movieId=593, "Silence of the Lambs, The (1991)"
movieId=356, Forrest Gump (1994)
CPU times: user 28min 28s, sys: 172 ms, total: 28min 28s
Wall time: 16min 49s
The quality of the recommendations returned is impressive given that this system is composed of only a few lines of code. However, the runtime performance makes it virtually unusable. As described earlier, finding recommendations based on
Shawshank Redemption, The (1994)
takes nearly 17 minutes.
NVIDIA cuGraph makes it transformatively faster
The graph algorithm in this workflow is prohibitively expensive, but by using the NVIDIA cuGraph backend and a compatible GPU, you can dramatically improve performance without changing the code.
Jaccard Similarity is supported in nx-cugraph version 25.02 or later. Version 25.02 is available from nightly builds and will be part of future stable releases later this month. Instructions on installing nx-cugraph, as well as other RAPIDS packages, from both nightly and stable channels using conda or pip, are available in the
RAPIDS Installation Guide
.
After being installed, enable nx-cugraph by setting an environment variable:
NX_CUGRAPH_AUTOCONFIG=True
cuGraph uses the GPU to dramatically accelerate the neighbor lookups and set comparisons needed for the Jaccard Similarity computation. As the graph scales and the number of movies and reviews per movie increases, performance remains almost constant.
The best part of the system—the simplicity of the code—does not change, and the results are identical, but performance increases by over 250x for the run that previously took nearly 17 minutes, reducing it to under 4 seconds.
Figure 3. Speedup of cuGraph over NetworkX for Jaccard Similarity computation for various movies
Software: NetworkX 3.4.2, cuGraph/nx-cugraph 25.02
;
CPU: Intel(R) Xeon(R) Platinum 8480CL @ 2.0GHz 2TB RAM
;
GPU: NVIDIA Quadro RTX 8000 48GB RAM
Conclusion
This post covered a simple and effective recommendation system that’s easy to write in Python using NetworkX. Although there are many other approaches you could take—as covered in
What Is a Recommendation System?
—few would match the low effort required to start exploring data that graph analysis with NetworkX offers.
However, productive and meaningful data exploration requires quick turnaround, and NetworkX has traditionally struggled to scale to larger, real-world problem sizes.
The NVIDIA cuGraph backend for NetworkX accelerates the familiar and flexible NetworkX API to also make it performant at scale, generating results in seconds instead of tens of minutes, keeping you focused and productive. You can now continue using NetworkX, the most popular graph analytics library, without concern for scaling issues simply by adding a GPU and the cuGraph backend to your environment.
For more information about accelerated graph analysis using NetworkX and NVIDIA cuGraph, see
RAPIDS: GPU-Accelerated NetworkX Backend
. | https://developer.nvidia.com/zh-cn/blog/using-networkx-jaccard-similarity-and-cugraph-to-predict-your-next-favorite-movie/ | 使用 NetworkX、Jaccard Similarity 和 cuGraph 预测您下一部最喜欢的电影 | 随着全球每个人可用的数据量不断增加,消费者做出明智决策的能力也越来越难。幸运的是,对于推荐系统而言,大型数据集是一个非常有用的组件,有时这会让某些情况下的决策变得更加容易。
对于为推荐系统提供支持的数据中固有的关系建模而言,图形是一个很好的选择,而 NetworkX 是许多数据科学家在 Python 中进行图形分析时非常喜欢的选择。NetworkX 易于学习和使用,拥有各种图形算法,并由庞大而友好的社区提供支持,并且在 Notebook、文档、Stack Overflow 和您喜欢的 LLM 中提供了大量示例。然而,令无数开发人员失望的是,他们使用 NetworkX 或甚至因为 NetworkX 而涉足图形分析,但众所周知,它在典型推荐系统使用的规模上的性能表现不佳。
这就引出了一个问题:能否用 Python 的几行简单代码编写有效的基于图形的推荐系统?更一般地说,开发者和数据科学家能否同时进行易于使用的高性能图形分析?
这两个问题的答案都是“Yes”
请继续阅读,了解如何使用 NetworkX、Jaccard Similarity 算法和 NVIDIA cuGraph 后端 (可将现代大规模图形数据所需的速度提高 250 倍以上),在 Python 中创建简单有效的推荐系统,使用 3300 万条电影评论的数据集。
MovieLens 数据集
我们先从系统中最重要的部分开始:数据。MovieLens 数据集
1
可供公开
下载
,
README 文件
中有更详细的说明。该系列包括大约 331k 匿名用户,他们在观看 87k 部电影,获得了 34M 的评分。
图 1、MovieLens 数据可以表示为一个图形,其中各个评分可轻松映射到用户和电影节点之间的边缘。
从数据中提取建议:二分图和 Jaccard Similarity
我们根据 MovieLens 数据创建的图形类型是二部图,因为只有两种类型的节点:电影节点和用户节点,并且评论(边缘)只能在用户和电影之间发生。这使得应用
Jaccard Similarity
算法来查找电影之间的相似性变得特别容易。Jaccard Similarity 比较节点对,并使用它们在图形中的关系计算相似性系数。在这种情况下,电影根据用户选择的观看和审查方式相互关联。
图 3、Jaccard Similarity 使用所比较的两个节点的近邻集的大小来计算相似性系数。根据用户的观看偏好,我们可以看到 m3 与 m2 更相似,电影 m4 和 m1 完全不相似。此系统会向喜欢 m3 的用户推荐 m2,而不会向喜欢 m4 的用户推荐 m1。
NetworkX 可以轻松处理较小的图形
不足为奇的是,NetworkX 支持我们上述的分析类型,而且只需使用几行 Python 代码即可轻松查看结果。但正如我们将看到的,当使用无 GPU 加速的 cuGraph 后端的 NetworkX 时,大型图形 (例如我们的电影推荐系统所需的图形) 的性能会受到限制。
我们将在下方查看推荐系统的关键部分,但可在
此处
获取完整源代码。
由于我们使用的 Jaccard Similarity 算法未考虑边缘权重,因此会将所有评论视为相同。我们不希望推荐好评度低的电影,因此我们会过滤掉某个值下的所有好评度,这会导致图形也变小。
# Create a separate DataFrame containing only "good" reviews (rating >= 3).
good_ratings_df = ratings_df[ratings_df["rating"] >= 3]
good_user_ids = good_ratings_df["userId"].unique()
good_movie_ids = good_ratings_df["movieId"].unique()
如果我们打印正在处理的数据的大小,我们会看到好评图大约有 330k 个节点,28M 个边缘,平均度数 (每个节点的近邻数) 为 84:
total number of users: 330975
total number of reviews: 33832162
average number of total reviews/user: 102.22
total number of users with good ratings: 329127
total number of good reviews: 27782577
average number of good reviews/user: 84.41
如上所述,这种规模的图形通常会给 NetworkX 带来挑战,但使用 cuGraph 后端的 GPU 加速消除了通常与如此多的数据相关的性能限制。不过,我们将继续使用 CPU 环境来演示默认性能。
注意
以下所有示例均在使用 NetworkX 3.4.2 和 Intel(R) Xeon(R) Platinum [email protected](2TB RAM)的工作站上运行
使用由用户创建的 NetworkX 图形和优秀的电影评论,我们来选择一个用户,找到他们评分最高的电影之一,并使用 Jaccard Similarity 找到类似的其他电影。
# Pick a user and one of their highly-rated movies
user = good_user_ids[321]
user_reviews = good_user_movie_G[user]
highest_rated_movie = max(
user_reviews,
key=lambda n: user_reviews[n].get("rating", 0)
)
当我们在电影名称贴图中查找节点 ID 时,我们会发现该用户评分最高的电影之一是动画电影“Mulan”:
highest rated movie for user=289308 is Mulan (1998), id: 1907, rated: {'rating': 5.0}
我们现在可以使用 Jaccard Similarity 根据用户的偏好和观看历史记录来推荐电影:
%%time
# Run Jaccard Similarity
jacc_coeffs = list(nx.jaccard_coefficient(good_user_movie_G, ebunch))
CPU times: user 2min 5s, sys: 15.4 ms, total: 2min 5s
Wall time: 2min 14s
使用默认 NetworkX 实现的 Jaccard 相似性计算运行了两分钟以上。根据这些结果,我们现在可以提供推荐。
# Sort by coefficient value, which is the 3rd item in the tuples
jacc_coeffs.sort(key=lambda t: t[2], reverse=True)
# Create a list of recommendations ordered by "best" to "worst" based on the
# Jaccard Similarity coefficients and the movies already seen
movies_seen = list(good_user_movie_G.neighbors(user))
recommendations = [mid for (_, mid, _) in jacc_coeffs
if mid not in movies_seen]
现在,我们只需在已排序的推荐列表中打印出第一部电影:
User ID 289308 might like Tarzan (1999) (movie ID: 2687)
代码很简单,结果看起来不错,但性能却拖累了我们
如我们所见,这个推荐似乎是合理的;喜欢“Mulan”的人似乎也喜欢 1999 年的迪士尼动画电影“Tarzan”。
但是,如果我们的目标是提供服务,或分析数百乃至数千部电影,那么两分钟的运行时间就能让我们开始寻找 NetworkX 的替代方案。我们可以看到,使用此系统查找其他电影相似点的速度并没有加快:
%%time
# 1196: "Star Wars: Episode V - The Empire Strikes Back (1980)"
print_similar_movies(1196)
movies similar to Star Wars: Episode V - The Empire Strikes Back (1980):
movieId=260, Star Wars: Episode IV - A New Hope (1977)
movieId=1210, Star Wars: Episode VI - Return of the Jedi (1983)
movieId=1198, Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
CPU times: user 13min 47s, sys: 71.8 ms, total: 13min 47s
Wall time: 11min 30s
%%time
# 318: "Shawshank Redemption, The (1994)"
print_similar_movies(318)
movies similar to "Shawshank Redemption, The (1994)":
movieId=296, Pulp Fiction (1994)
movieId=593, "Silence of the Lambs, The (1991)"
movieId=356, Forrest Gump (1994)
CPU times: user 28min 28s, sys: 172 ms, total: 28min 28s
Wall time: 16min 49s
鉴于此系统仅由几行代码组成,因此所返回的推荐内容的质量令人印象深刻。但是,运行时性能使其几乎无法使用。如上所述,根据“Shawshank Redemption, The (1994)”查找推荐内容大约需要 17 分钟。
NVIDIA cuGraph 使其变革性地加快
上述工作流程中的图形算法成本高昂,但通过使用 NVIDIA cuGraph 后端和兼容的 GPU,我们可以在不更改代码的情况下显著提高性能。
nx-cugraph 版本 25.02 或更高版本支持 Jaccard Similarity。版本 25.02 可在 nightly builds 中使用,并将于本月晚些时候纳入未来的稳定版本中。有关如何使用 conda 或 pip 从 nightly 和 stable 通道安装 nx-cugraph 以及其他 RAPIDS 包的说明,请参阅
RAPIDS Installation Guide
。
安装后,只需设置环境变量即可启用 nx-cugraph:
NX_CUGRAPH_AUTOCONFIG=True
cuGraph 利用 GPU 显著加速近邻查找,并设置 Jaccard 相似性计算所需的比较结果。此外,随着图形规模以及每部电影的电影和评论数量的增加,性能几乎保持不变。
该系统最优秀的部分,即代码的简单性,并没有改变,结果也是一样的,但在过去近 17 分钟的运行中,性能提高了 250 倍以上,缩短到 4 秒以内。
图 4、图表显示了各种电影的 cuGraph over NetworkX for Jaccard Similarity 计算速度
。
软件:NetworkX 3.4.2,cuGraph/nx-cugraph 25.02
CPU:Intel(R) Xeon(R) Platinum [email protected] 2TB RAM
GPU:NVIDIA Quadro RTX 8000 48GB RAM
结束语
这篇博文介绍了一个简单而有效的推荐系统,它可以使用 NetworkX 轻松地用 Python 编写。虽然我们可以采用许多其他方法(如此处所述),但很少有方法能够做到与开始探索 NetworkX 图形分析提供的数据所需的工作量不相上下。然而,高效和有意义的数据探索需要快速的周转,而 NetworkX 传统上一直难以扩展到更大的实际问题规模。
适用于 NetworkX 的 NVIDIA cuGraph 后端可对熟悉且灵活的 NetworkX API 进行加速,还可大规模提升其性能,在几秒钟 (而非数十分钟) 内生成结果,从而让您专注工作并高效工作。现在,用户只需向环境中添加 GPU 和 cuGraph 后端,即可继续使用热门的图形分析库 NetworkX,而无需担心扩展问题。
如需了解有关使用 NetworkX 和 NVIDIA cuGraph 进行加速图形分析的更多信息,请访问
https://rapids.ai/nx-cugraph
。 |
https://developer.nvidia.com/blog/nvidia-open-gpu-datacenter-drivers-for-rhel9-signed-by-red-hat/ | NVIDIA Open GPU Datacenter Drivers for RHEL9 Signed by Red Hat | NVIDIA and Red Hat have partnered to bring continued improvements to the precompiled
NVIDIA Driver introduced in 2020
. Last month, NVIDIA
announced
that the open GPU driver modules will become the default recommended way to enable NVIDIA graphics hardware.
Today, NVIDIA announced that Red Hat is now compiling and signing the
NVIDIA open GPU kernel modules
to further streamline the usage for the Red Hat Enterprise Linux 9 OS and to provide a way forward for supporting
Confidential Computing
with NVIDIA hardware in the datacenter.
The signed open GPU drivers can now be installed and used without any extra key enrollment configuration. As with the current legacy precompiled packages, only the latest RHEL 9 kernel is supported at the time of the driver update.
Starting with RHEL 9.5, NVIDIA now offers a tech preview repository available for trial until April 30, 2025. This repository enables the usage of the open GPU driver and legacy driver (with caveats) using
modularity streams
.
After the tech preview ends, the signed kernel modules will be available in the NVIDIA Compute package repositories with the rest of the CUDA platform: CUDA-X acceleration libraries, CUDA-optimized applications, and frameworks.
For more information about modularity streams and how they work with the precompiled graphics driver, see
Streamlining NVIDIA Driver Deployment on RHEL 8 with Modularity Streams
.
Activating the tech preview
To use the signed NVIDIA open GPU driver modules, make sure that the Red Hat repositories are enabled, including RHEL9 AppStream, RHEL9 BaseOS, and RHEL9 CRB:
$ subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
$ subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
$ subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms
Add the CUDA network repository:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
When the tech preview repository is activated, view the supported modularity streams:
$ dnf module list nvidia-driver
Figure 1. Example output list from the tech preview tool of the NVIDIA modularity streams installed
The tech preview provides the following new modularity streams:
latest-open-gpu
<driver_version>-open-gpu
Install the latest stream to opt into the signed precompiled packages:
$ sudo dnf module install nvidia-driver:latest-open-gpu
Reboot and enjoy!
Figure 2. Example
nvidia-smi
command output showing the NVIDIA CUDA driver version running and system parameters
Important caveat with the tech preview repository
The legacy precompiled driver is also listed in the modularity streams:
nvidia-driver:latest
nvidia-driver:latest-<major_version>
nvidia-driver:<major_version>
These drivers have been signed by NVIDIA and require signing key enrollment. They install just fine without any warnings but upon reboot the NVIDIA graphics driver will not be available:
$ nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Upon further inspection, the problem becomes clear:
# insmod /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko
insmod: ERROR: could not insert module /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko: Key was rejected by service
This happens because secure boot is enabled by default:
$ sudo mokutil --sb-state
SecureBoot enabled
This module has been signed by NVIDIA using a key that is not in the chain of trust:
$ modinfo /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko | grep signer
signer: NVIDIA
For more information about enrolling the NVIDIA signing key with mokutil, see
UEFI Secure Boot
on GitHub.
Installation
Install the tech preview repository on a host with secure boot enabled:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
$ sudo dnf module install nvidia-driver:latest-open-gpu
$ sudo reboot
If you discover any issues with packaging, please report those problems to
github.com/NVIDIA/yum-packaging-precompiled-kmod/issues
.
This repository will be active until March 31, 2025.
Summary
Now with signed packages from NVIDIA and Red Hat, secure boot with NVIDIA on RHEL 9 is even better. You can look forward to more developments in this space.
The driver source code is available in the
/NVIDIA/open-gpu-kernel-modules
GitHub repo.
Packaging templates and instructions are provided on GitHub to enable you to maintain your own precompiled kernel module packages for custom kernels and derivative Linux distros:
/NVIDIA/yum-packaging-precompiled-kmod
/NVIDIA/yum-packaging-nvidia-plugin
To give feedback,
send comments
or
report driver bugs
. If you are not already a member,
join the NVIDIA Developer Program
. | https://developer.nvidia.com/zh-cn/blog/nvidia-open-gpu-datacenter-drivers-for-rhel9-signed-by-red-hat/ | 适用于 RHEL9 的 NVIDIA 开放 GPU 数据中心驱动,由 Red Hat 签署 | NVIDIA 和 Red Hat 合作,对
2020 年推出的预编译 NVIDIA Driver
进行持续改进。上个月,NVIDIA
宣布
,开放的 GPU 驱动模组将成为启用 NVIDIA 图形硬件的默认推荐方式。
今天,NVIDIA 宣布 Red Hat 正在编译并签署
NVIDIA 开放 GPU 内核模块
,以进一步简化 Red Hat Enterprise Linux 9 操作系统的使用,并为在数据中心使用 NVIDIA 硬件支持
机密计算
提供前进方向。
现在无需任何额外的密钥注册配置即可安装和使用经过签名的开放 GPU 驱动。与当前的旧版预编译软件包一样,在驱动更新时仅支持最新的 RHEL 9 内核。
从 RHEL 9.5 开始,NVIDIA 现在提供技术预览库,可在 2025 年 4 月 30 日之前试用。此库允许使用
模块化流使
用开放式 GPU 驱动程序和旧版驱动程序(附带说明)。
技术预览结束后,已签名的内核模块将在 NVIDIA Compute 软件包存储库中与 CUDA 平台的其他部分一起提供:CUDA-X 加速库、CUDA 优化的应用和框架。
有关 模块化流及其如何与预编译图形驱动程序配合使用的更多信息 ,请参阅
Streamlining NVIDIA Driver Deployment on RHEL 8 with Modularity Streams
。
激活技术预览
要使用经过签名的 NVIDIA 开源 GPU 驱动模块,请确保已启用 Red Hat 资源库,包括 RHEL9 AppStream、RHEL9 BaseOS 和 RHEL9 CRB:
$ subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
$ subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
$ subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms
添加 CUDA 网络存储库:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
激活技术预览库后,请查看支持的模块化流:
$ dnf module list nvidia-driver
图 1、已安装的 NVIDIA 模块化流的技术预览工具中的输出列表示例
技术预览提供了以下新的模块化流:
latest-open-gpu
<driver_version>-open-gpu
安装最新的流,选择使用经过签名的预编译软件包:
$ sudo dnf module install nvidia-driver:latest-open-gpu
重启并尽情享受吧!
图 2、显示
正在运行的 NVIDIA CUDA 驱动程序版本和系统参数的 nvidia-smi 命令输出示例
技术预览库的重要注意事项
旧版预编译驱动也列在模块化流中:
nvidia-driver:latest
nvidia-driver:latest-<major_version>
nvidia-driver:<major_version>
这些驱动已由 NVIDIA 签名,需要签名密钥注册。它们可以正常安装,没有任何警告,但在重启后,NVIDIA 图形驱动程序将不可用:
$ nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
经过进一步检查,问题变得明显:
# insmod /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko
insmod: ERROR: could not insert module /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko: Key was rejected by service
发生这种情况的原因是默认情况下启用了 secure boot:
$ sudo mokutil --sb-state
SecureBoot enabled
本模块由 NVIDIA 使用不在信任链中的密钥进行签名:
$ modinfo /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko | grep signer
signer: NVIDIA
有关使用 mokutil 注册 NVIDIA 签名密钥的更多信息,请参阅 GitHub 上的 UEFI Secure Boot。
安装
在已启用安全启动的主机上安装技术预览库:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
$ sudo dnf module install nvidia-driver:latest-open-gpu
$ sudo reboot
如果您发现任何打包问题,请向
github.com/NVIDIA/yum-packaging-precompiled-kmod/issues
报告这些问题。
此库的有效期至 2025 年 3 月 31 日。
总结
现在,借助 NVIDIA 和 Red Hat 的签名软件包,在 RHEL 9 上使用 NVIDIA 进行安全启动会更好。您可以期待此领域的更多开发。
您可在
/NVIDIA/open-gpu-kernel-modules
GitHub 仓库中获取驱动源代码。
GitHub 上提供了打包模板和说明,以便您为自定义内核和衍生 Linux 发行版维护自己的预编译内核模块包:
/NVIDIA/yum-packaging-precompiled-kmod
/NVIDIA/yum-packaging-nvidia-plugin
如要提供反馈,请
发送评论
或
报告驱动错误
。如果您还不是会员,请
加入 NVIDIA Developer Program
。 |
https://developer.nvidia.com/blog/get-started-with-gpu-acceleration-for-data-science/ | Get Started with GPU Acceleration for Data Science | In data science, operational efficiency is key to handling increasingly complex and large datasets. GPU acceleration has become essential for modern workflows, offering significant performance improvements.
RAPIDS
is a suite of open-source libraries and frameworks developed by NVIDIA, designed to accelerate data science pipelines using GPUs with minimal code changes. Providing tools like
cuDF
for data manipulation,
cuML
for machine learning, and
cuGraph
for graph analytics, RAPIDS enables seamless integration with existing Python libraries, making it easier for data scientists to achieve faster and more efficient processing.
This post shares tips for transitioning from CPU data science libraries to GPU-accelerated workflows, especially for experienced data scientists.
Setting up RAPIDS on desktop or cloud infrastructure
Getting started with RAPIDS is straightforward, but it does have several dependencies. The recommended approach is to follow the official
RAPIDS Installation Guide
, which provides detailed instructions for local installations. You have multiple paths to install the framework: through pip install, Docker image, or through an environment such as Conda. To set up RAPIDS in a cloud environment, see the
RAPIDS Cloud Deployment Guide
. Before installing, ensure compatibility by checking your CUDA version and the supported RAPIDS version on the installation page.
cuDF and GPU acceleration for pandas
An advantage of RAPIDS lies in its modular architecture, which empowers users to adopt specific libraries designed for GPU-accelerated workflows. Among these, cuDF stands out as a powerful tool for seamlessly transitioning from traditional
pandas
-based workflows to GPU-optimized data processing, and requires zero code changes.
To get started, make sure to enable the cuDF extension before importing pandas for execution of data import and remainder of the operation on GPU. By loading the RAPIDS extension with
%load_ext cudf.pandas
, you can effortlessly integrate cuDF DataFrame within existing workflows, preserving the familiar syntax and structure of pandas.
Similar to pandas,
cuDF pandas
supports different file formats such as .csv, .json, .pickle, .paraquet, and hence enables GPU-accelerated data manipulation.
The following code is an example of how to enable the
cudf.pandas
extension and concatenate two .csv files:
%load_ext cudf.pandas
import pandas as pd
import cupy as cp
train = pd.read_csv('./Titanic/train.csv')
test = pd.read_csv('./Titanic/test.csv')
concat = pd.concat([train, test], axis = 0)
Loading the
cudf.pandas
extension enables the execution of familiar pandas operations—such as filtering, grouping, and merging—on GPUs without requiring a code change or rewrites. The cuDF accelerator is compatible with the pandas API to ensure a smooth transition from CPU to GPU while delivering substantial computational speedups.
target_rows = 1_000_000
repeats = -(-target_rows // len(train)) # Ceiling division
train_df = pd.concat([train] * repeats, ignore_index=True).head(target_rows)
print(train_df.shape) # (1000000, 2)
repeats = -(-target_rows // len(test)) # Ceiling division
test_df = pd.concat([test] * repeats, ignore_index=True).head(target_rows)
print(test_df.shape) # (1000000, 2)
combine = [train_df, test_df]
(1000000, 12)
(1000000, 11)
filtered_df = train_df[(train_df['Age'] > 30) & (train_df['Fare'] > 50)]
grouped_df = train_df.groupby('Embarked')[['Fare', 'Age']].mean()
additional_info = pd.DataFrame({
'PassengerId': [1, 2, 3],
'VIP_Status': ['No', 'Yes', 'No']
})
merged_df = train_df.merge(additional_info, on='PassengerId',
how='left')
Decoding performance: CPU and GPU runtime metrics in action
In data science, performance optimization is not just about speed, but also understanding how computational resources are utilized. It involves analyzing how operations leverage CPU and GPU architectures, identifying inefficiencies, and implementing strategies to enhance workflow efficiency.
Performance profiling tools like
%cudf.pandas.profile
play a key role by offering a detailed examination of code execution. The following execution result breaks down each function, and distinguishes between tasks processed on the CPU from those accelerated on the GPU:
%%cudf.pandas.profile
train_df[['Pclass', 'Survived']].groupby(['Pclass'],
as_index=False).mean().sort_values(by='Survived', ascending=False)
Pclass Survived
0 1 0.629592
1 2 0.472810
2 3 0.242378
Total time elapsed: 5.131 seconds
5 GPU function calls in 5.020 seconds
0 CPU function calls in 0.000 seconds
Stats
+------------------------+------------+-------------+------------+------------+-------------+------------+
| Function | GPU ncalls | GPU cumtime | GPU percall | CPU ncalls | CPU cumtime | CPU percall |
+------------------------+------------+-------------+------------+------------+-------------+------------+
| DataFrame.__getitem__ | 1 | 5.000 | 5.000 | 0 | 0.000 | 0.000 |
| DataFrame.groupby | 1 | 0.000 | 0.000 | 0 | 0.000 | 0.000 |
| GroupBy.mean | 1 | 0.007 | 0.007 | 0 | 0.000 | 0.000 |
| DataFrame.sort_values | 1 | 0.002 | 0.002 | 0 | 0.000 | 0.000 |
| DataFrame.__repr__ | 1 | 0.011 | 0.011 | 0 | 0.000 | 0.000 |
+------------------------+------------+-------------+------------+------------+-------------+------------+
This granularity helps pinpoint operations that inadvertently revert to CPU execution, a common occurrence due to unsupported cuDF functions, incompatible data types, or suboptimal memory handling. It is crucial to identify these issues because such fallbacks can significantly impact overall performance. To learn more about this loader, see
Mastering the cudf.pandas Profiler for GPU Acceleration
.
Additionally, you can use Python magic commands like
%%time
and
%%timeit
to enable benchmarks of specific code blocks that facilitate direct comparisons of runtime between pandas (CPU) and the cuDF accelerator for pandas (GPU). These tools provide insights into the efficiency gains achieved through GPU acceleration. Benchmarking with
%%time
provides a clear comparison of execution times between CPU and GPU environments, highlighting the efficiency gains achievable through parallel processing.
%%time
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
CPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 56.6 ms, sys: 8.08 ms, total: 64.7 ms
Wall time: 63.3 ms
GPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 6.65 ms, sys: 0 ns, total: 6.65 ms
Wall time: 5.46 ms
The
%%time
example delivers a 10x speedup in execution time, reducing wall time from 63.3 milliseconds (ms) on the CPU to 5.46 ms on the GPU. This highlights the efficiency of GPU acceleration with cuDF pandas for large-scale data operations. Further insights are gained using
%%timeit
, which performs repeated executions to measure consistency and reliability in performance metrics.
%%timeit
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
CPU output:
1.11 s ± 7.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
GPU output:
89.6 ms ± 959 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The
%%timeit
example gives us a 10x performance improvement with GPU acceleration, reducing the runtime from 1.11 seconds per loop on the CPU to 89.6 ms per loop on the GPU. This highlights the efficiency of cuDF pandas for intensive data operations.
Verifying GPU utilization
When working with different data types, it is important to verify whether your system is utilizing the GPU effectively. You can check whether arrays are being processed on the CPU or GPU by using the familiar
type
command to differentiate between NumPy and CuPy arrays.
type(guess_ages)
cupy.ndarray
If the output is
np.array
, the data is being processed on the CPU. If the output is
cupy.ndarray
, the data is being processed on the GPU. This quick check ensures that your workflows are leveraging GPU resources where intended.
Secondly, by simply using the
print
command, you can confirm whether the GPU is being utilized and ensure that a cuDF DataFrame is being processed. The output specifies whether the
fast
path (cuDF) or
slow
path (pandas) is in use. This straightforward check provides an easy way to validate that the GPU is active for accelerating data operations.
print(pd)
<module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))>
Lastly, commands such as
df.info
can be used to inspect the structure of cuDF DataFrame and confirm that computations are GPU-accelerated. This helps verify whether operations are running on the GPU or falling back to the CPU.
train_df.info()
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 1000000 non-null int64
1 Pclass 1000000 non-null int64
2 Sex 1000000 non-null int64
3 Age 1000000 non-null float64
4 SibSp 1000000 non-null int64
5 Parch 1000000 non-null int64
6 Fare 1000000 non-null float64
7 Embarked 997755 non-null object
8 Title 1000000 non-null int64
dtypes: float64(2), int64(6), object(1)
memory usage: 65.9+ MB
Conclusion
RAPIDS, through tools like cuDF pandas, provides a seamless transition from traditional CPU-based data workflows to GPU-accelerated processing, offering significant performance improvements. By leveraging features such as
%%time
,
%%timeit
, and profiling tools like
%%cudf.pandas.profile
, you can measure and optimize runtime efficiency. The ability to inspect GPU utilization through simple commands like
type
,
print(pd)
, and
df.info
ensures that workflows are leveraging GPU resources effectively.
To try the data operations detailed in this post, check out the
accompanying Jupyter Notebook
.
To learn more about GPU-accelerated data science, see
10 Minutes to Data Science: Transitioning Between RAPIDS cuDF and CuPy Libraries
and
RAPIDS cuDF Instantly Accelerates pandas Up to 50x on Google Colab
.
Join us for
GTC 2025
and register for the
Data Science Track
to gain deeper insights. Recommended sessions include:
Accelerating Data Science with RAPIDS and NVIDIA GPUs
Scaling Machine Learning Workflows with RAPIDS
To build expertise with RAPIDS, check out the following hands-on workshops at GTC:
Accelerate Clustering Algorithms to Achieve the Highest Performance
Analyzing and Visualizing Large Data Interactively Using Accelerated Computing | https://developer.nvidia.com/zh-cn/blog/get-started-with-gpu-acceleration-for-data-science/ | 适用于数据科学的 GPU 加速入门 | 在数据科学领域,运营效率是处理日益复杂和大型数据集的关键。GPU 加速已成为现代工作流程的关键,可显著提高性能。
RAPIDS
是由 NVIDIA 开发的一套开源库和框架,旨在使用 GPU 以尽可能减少代码更改来加速数据科学流程。RAPIDS 提供用于数据操作的
cuDF
、用于机器学习的
cuML
和用于图形分析的
cuGraph
等工具,可实现与现有 Python 库的无缝集成,使数据科学家更容易实现更快、更高效的处理。
本文分享了从 CPU 数据科学库过渡到 GPU 加速工作流程的技巧,特别适合经验丰富的数据科学家。
在桌面或云基础架构上设置 RAPIDS
开始使用 RAPIDS 非常简单,但它确实有几个依赖项。推荐的方法是遵循官方的
RAPIDS 安装指南
,该指南提供了详细的本地安装说明。您有多种安装框架的路径:通过 pip install、Docker 镜像,或通过 Conda 等环境。要在云环境中设置 RAPIDS,请参阅
RAPIDS 云部署指南
。安装前,请检查安装页面上的 CUDA 版本和受支持的 RAPIDS 版本,确保兼容性。
适用于 pandas 的 cuDF 和 GPU 加速
RAPIDS 的一个优势在于其模块化架构,使用户能够采用专为 GPU 加速工作流程设计的特定库。其中,cuDF 作为一款功能强大的工具脱颖而出,可从基于
pandas
的传统工作流程无缝过渡到 GPU 优化的数据处理流程,并且无需更改代码。
首先,请确保在导入 pandas 之前启用 cuDF 扩展,以便在 GPU 上执行数据导入和剩余操作。通过使用
%load_ext cudf.pandas
加载 RAPIDS 扩展程序,您可以轻松地将 cuDF DataFrame 集成到现有工作流程中,从而保留熟悉的 pandas 语法和结构。
与 pandas 类似,
cuDF pandas
支持不同的文件格式,例如 .csv、.json、.pickle、.paraquet,因此支持 GPU 加速的数据操作。
以下代码是如何启用
cudf.pandas
扩展名并连接两个 .csv 文件的示例:
%load_ext cudf.pandas
import pandas as pd
import cupy as cp
train = pd.read_csv('./Titanic/train.csv')
test = pd.read_csv('./Titanic/test.csv')
concat = pd.concat([train, test], axis = 0)
通过加载
cudf.pandas
扩展程序,无需更改或重写代码,即可在 GPU 上执行熟悉的 pandas 操作,例如过滤、分组和合并。cuDF 加速器与 pandas API 兼容,可确保从 CPU 到 GPU 的平稳过渡,同时大幅提高计算速度。
target_rows = 1_000_000
repeats = -(-target_rows // len(train)) # Ceiling division
train_df = pd.concat([train] * repeats, ignore_index=True).head(target_rows)
print(train_df.shape) # (1000000, 2)
repeats = -(-target_rows // len(test)) # Ceiling division
test_df = pd.concat([test] * repeats, ignore_index=True).head(target_rows)
print(test_df.shape) # (1000000, 2)
combine = [train_df, test_df]
(1000000, 12)
(1000000, 11)
filtered_df = train_df[(train_df['Age'] > 30) & (train_df['Fare'] > 50)]
grouped_df = train_df.groupby('Embarked')[['Fare', 'Age']].mean()
additional_info = pd.DataFrame({
'PassengerId': [1, 2, 3],
'VIP_Status': ['No', 'Yes', 'No']
})
merged_df = train_df.merge(additional_info, on='PassengerId',
how='left')
解码性能:CPU 和 GPU 运行时指标的实际应用
在数据科学中,性能优化不仅涉及速度,还涉及了解计算资源的利用方式。其中包括分析运营如何利用 CPU 和 GPU 架构、识别效率低下问题,以及实施旨在提高工作流程效率的策略。
%cudf.pandas.profile
等性能分析工具通过详细检查代码执行情况发挥着关键作用。以下执行结果会对每个函数进行分解,并区分在 CPU 上处理的任务与在 GPU 上加速的任务:
%%cudf.pandas.profile
train_df[['Pclass', 'Survived']].groupby(['Pclass'],
as_index=False).mean().sort_values(by='Survived', ascending=False)
Pclass Survived
0 1 0.629592
1 2 0.472810
2 3 0.242378
Total time elapsed: 5.131 seconds
5 GPU function calls in 5.020 seconds
0 CPU function calls in 0.000 seconds
Stats
+------------------------+------------+-------------+------------+------------+-------------+------------+
| Function | GPU ncalls | GPU cumtime | GPU percall | CPU ncalls | CPU cumtime | CPU percall |
+------------------------+------------+-------------+------------+------------+-------------+------------+
| DataFrame.__getitem__ | 1 | 5.000 | 5.000 | 0 | 0.000 | 0.000 |
| DataFrame.groupby | 1 | 0.000 | 0.000 | 0 | 0.000 | 0.000 |
| GroupBy.mean | 1 | 0.007 | 0.007 | 0 | 0.000 | 0.000 |
| DataFrame.sort_values | 1 | 0.002 | 0.002 | 0 | 0.000 | 0.000 |
| DataFrame.__repr__ | 1 | 0.011 | 0.011 | 0 | 0.000 | 0.000 |
+------------------------+------------+-------------+------------+------------+-------------+------------+
这种粒度有助于查明无意中恢复到 CPU 执行的操作,这是由于不受支持的 cuDF 函数、不兼容的数据类型或次优内存处理而常见的情况。识别这些问题至关重要,因为此类回退会严重影响整体性能。如需详细了解此加载程序,请参阅
Mastering cudf.pandas Profiler for GPU Acceleration
。
此外,您可以使用 Python magic 命令,如
%%time
和
%%timeit
,来启用特定代码块的基准测试,以便直接比较 pandas(CPU)和 cuDF 加速器(GPU)之间的运行时。这些工具可让您深入了解通过 GPU 加速实现的效率提升。通过使用
%%time
进行基准测试,可以清楚地比较 CPU 和 GPU 环境之间的执行时间,从而凸显通过并行处理实现的效率提升。
%%time
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
CPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 56.6 ms, sys: 8.08 ms, total: 64.7 ms
Wall time: 63.3 ms
GPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 6.65 ms, sys: 0 ns, total: 6.65 ms
Wall time: 5.46 ms
%%time
示例可将执行时间提高 10 倍,将墙面时间从 CPU 上的 63.3 毫秒 (ms) 缩短到 GPU 上的 5.46 毫秒。这凸显了使用 cuDF pandas 进行 GPU 加速在大规模数据操作中的效率。您可以使用
%%timeit
获得更多见解,它执行重复执行来测量性能指标中的一致性和可靠性。
%%timeit
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
CPU output:
1.11 s ± 7.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
GPU output:
89.6 ms ± 959 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
在 GPU 加速方面,
%%timeit
示例将性能提升了 10 倍,将运行时间从 CPU 上的每循环 1.11 秒缩短到 GPU 上的每循环 89.6 毫秒。这凸显了 cuDF pandas 在密集型数据操作中的效率。
验证 GPU 利用率
在处理不同的数据类型时,请务必验证您的系统是否有效利用了 GPU。您可以使用熟悉的
type
命令来区分 NumPy 和 CuPy 数组,检查数组是在 CPU 还是 GPU 上处理。
type(guess_ages)
cupy.ndarray
如果输出为
np.array
,则数据将在 CPU 上处理。如果输出为
cupy.ndarray
,则数据将在 GPU 上处理。此快速检查可确保您的工作流程按预期利用 GPU 资源。
其次,只需使用
print
命令,即可确认是否正在利用 GPU,并确保正在处理 cuDF DataFrame。输出指定使用的是
fast
路径 (cuDF) 还是
slow
路径 (pandas)。这种简单的检查提供了一种验证 GPU 是否处于活动状态以加速数据操作的简单方法。
print(pd)
<module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))>
最后,可以使用
df.info
等命令检查 cuDF DataFrame 的结构,并确认计算已通过 GPU 加速。这有助于验证操作是在 GPU 上运行,还是回退至 CPU。
train_df.info()
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 1000000 non-null int64
1 Pclass 1000000 non-null int64
2 Sex 1000000 non-null int64
3 Age 1000000 non-null float64
4 SibSp 1000000 non-null int64
5 Parch 1000000 non-null int64
6 Fare 1000000 non-null float64
7 Embarked 997755 non-null object
8 Title 1000000 non-null int64
dtypes: float64(2), int64(6), object(1)
memory usage: 65.9+ MB
结束语
通过 cuDF pandas 等工具,RAPIDS 可实现从基于 CPU 的传统数据工作流到 GPU 加速处理的无缝过渡,从而显著提高性能。通过利用
%%time
、
%%timeit
等功能以及
%%cudf.pandas.profile
等分析工具,您可以测量和优化运行时效率。通过
type
、
print(pd)
和
df.info
等简单命令检查 GPU 利用率,可确保工作流程有效利用 GPU 资源。
要尝试本文中详述的数据操作,请查看
随附的 Jupyter Notebook
。
如需了解有关 GPU 加速的数据科学的更多信息,请参阅“
10 分钟了解数据科学:在 RAPIDS cuDF 和 CuPy 库之间过渡
”以及“
RAPIDS cuDF 即时将 pandas 在 Google Colab 上的运行速度提高 50 倍
”。
加入我们的
GTC 2025
大会,并报名参加
Data Science Track
,获得更深入的见解。推荐的会议包括:
借助 RAPIDS 和 NVIDIA GPUs 加速数据科学
借助 RAPIDS 扩展机器学习工作流程
要积累有关 RAPIDS 的专业知识,请查看 GTC 上的以下实战研讨会:
加速聚类算法以实现超高性能
使用加速计算以交互方式分析和可视化大数据 |
https://developer.nvidia.com/blog/how-to-use-openusd/ | How to Use OpenUSD | Universal Scene Description (OpenUSD)
is an open, extensible framework and ecosystem with APIs for composing, editing, querying, rendering, collaborating, and simulating within 3D virtual worlds.
This post explains how you can start using OpenUSD today with your existing assets and tools and what steps you can take to iteratively up-level your USD workflows.
For an interactive, comprehensive introduction to the essential concepts, vocabulary, and best practices for OpenUSD, explore the
free Learn OpenUSD learning path
from the
NVIDIA Deep Learning Institute
.
Data exchange
OpenUSD does not replace your existing tools or data formats. As the name suggests, Universal Scene Description provides a common way to represent data for many 3D concepts such as meshes and PBR materials. It continues evolving to support additional concepts such as physics and B-reps.
Figure 1. OpenUSD’s growing ecosystem of DCC applications
To begin using OpenUSD with your existing assets, use an exporter or converter to bring your content into USD. This approach ensures swift and effective integration into new USD workflows. By using USD in this way, you can begin to enhance collaboration, performance, and interoperability across different stages of production.
For more information about how to get started, see the
list of applications and tools that support USD interchange
.
As you expand your USD workflows and want to use OpenUSD with proprietary data, you may have a need to develop your own data exchange solutions.
Data exchange pipelines manage the extraction, transformation, and ingestion of content from one data format to another. There are several primary types of data exchange solutions used to integrate existing content into USD workflows:
Importers and exporters
: These are found in digital content creation (DCC) applications and enable you to directly import and export 3D assets to and from USD. Popular software like Rhino and Blender facilitate the importing of 3D assets, while applications like Houdini open 3D assets as native USD files.This makes it straightforward to work with USD within your existing tools.
Converters:
These are standalone programs or scripts designed to convert 3D files from various formats to and from USD. They offer flexibility for batch processing and automation, ensuring that your assets can be easily transformed without the need for manual intervention.
USD file format plug-ins
: This type of USD plug-in enables you to keep your data in its original source format while dynamically reading and translating it to USD on the fly when you open a stage. This means you can work with your assets in their native formats and still benefit from USD’s powerful features, while maintaining data integrity and workflow efficiency.
Conceptual data mapping and extract, transform, load (ETL) are crucial processes for developing data exchange solutions that help ensure the seamless transfer and integration of 3D assets and data.
Conceptual data mapping
involves defining how data from one system or format corresponds to data in another system or format. This is particularly important when working with different DCC applications or file types, as each may have its own unique data structure and representation.
ETL is a more comprehensive process used to move and integrate data from one system to another. It consists of the following main steps:
Extract:
Pull data from the source system or file format. The data could be 3D models, textures, animations, or any other relevant content.
Transform:
Transform extracted data to fit the target runtime requirements. This might include converting units, reformatting data structures, or filtering out irrelevant or private data.
Load:
Load the transformed data into the target runtime. This is sometimes called
ingestion
and ensures that the data is ready for use in USD-based workflows and applications.
Conceptual data mapping and ETL processes are essential for ensuring that content can be effectively and accurately transferred into the USD ecosystem, maintaining its quality and functionality throughout the exchange. The
OpenUSD Exchange SDK
can help accelerate the development of data exchange solutions and help with authoring high-quality OpenUSD data.
After getting your data into OpenUSD, the next step is understanding how to aggregate and structure it for maximum efficiency.
Learn how to
develop data exchange pipelines
with the free Learn OpenUSD self-paced curriculum.
Asset structure principles and content aggregation
Well-structured assets unlock effective content aggregation and asset reusability—this is where OpenUSD truly excels. Proper asset structuring and content aggregation are key methods for efficiently organizing and combining 3D assets and scenes in complex projects.
Developing an asset structure involves breaking down your 3D content into smaller, manageable components that can be easily referenced and reused. This enables you to create a hierarchical and organized structure that simplifies the management and maintenance of your assets. This approach both improves reusability and performance by enabling you to load only the necessary components when needed.
With a well-designed asset structure, you can enhance collaboration as different workstreams can author data independently in separate layers, which can then be composed into a single asset. This non-destructive editing approach ensures that changes made by one team do not overwrite another team’s work.
After you’ve built your smaller components, use content aggregation to combine them into a single cohesive scene. Composition arcs, such as references and payloads, support this process.
Figure 2.
Digital twin of a liquified natural gas carrier
(credit: HD Hyundai and Siemens)
These techniques help make OpenUSD a powerful tool for managing large-scale 3D projects, facilitating efficient collaboration, asset reuse, and pipeline efficiency, making it an indispensable part of any modern 3D production workflow.
Learn
asset structuring principles and content aggregation
with the free Learn OpenUSD self-paced curriculum.
Composition arcs
Composition arcs are the foundational operators of OpenUSD that combine smaller units of scene description into larger units. This enables robust asset structures and content aggregation in OpenUSD workflows.
This feature is one of the superpowers of OpenUSD, setting it apart from other 3D data formats and frameworks.
Figure 3. Composition arcs enable the creation of complex 3D objects such as autonomous mobile robots
Composition arcs are essential tools for non-destructively combining units of USD data. The composition arcs available in OpenUSD include sublayers, references, payloads, variant sets, inherits, specializes, and the most recent addition, relocates.
Start with the following composition arcs to gain an understanding how you can leverage them in your own workflows:
Sublayers:
Enable multiple collaborators or processes to operate on a given USD context. The data authored in a sublayer is added to a USD context in an “include” fashion.
References:
Most suited for content aggregation and can be used to graft one USD context onto the hierarchy of another USD context. By referencing assets, you can maintain a single source of truth for each component, ensuring consistency and reducing redundancy. This is particularly useful in large-scale projects where the same assets are used repeatedly.
Variant sets:
Used to store multiple variations of a component, enhancing adaptability without duplicating data. This feature enables you to create different versions of an asset, such as different materials or configurations, all within a single, efficient structure. Use cases include specifying different textures and materials or configurations of a model, such as describing a car featuring different paint colors or with different rim styles.
Figure 4. Composition of an OpenUSD scene
Learning and implementing composition arcs will provide immediate value to your USD workflows. However, achieving mastery can take years due to their extensive flexibility and diverse applications. To use OpenUSD effectively, we highly recommend investing significant time in experimentation and refining your understanding of composition arc types.
Learn to
create composition arcs
with the free Learn OpenUSD self-paced training.
Get started
OpenUSD offers a comprehensive set of features for efficient 3D asset management and content aggregation, making it an indispensable framework for large-scale projects. By mastering data exchange, asset structuring, and composition arcs, you can significantly improve collaboration, performance, and reusability in your 3D workflows.
Build on your foundational knowledge with courses on the
Learn OpenUSD learning path
.
Start developing with OpenUSD on the
Omniverse Kit SDK
and generative AI models for OpenUSD development through
USD NIM microservices
.
For more resources on OpenUSD, explore the
Alliance for OpenUSD (AOUSD) forum
and visit the
AOUSD
website. | https://developer.nvidia.com/zh-cn/blog/how-to-use-openusd/ | 如何使用 OpenUSD | 通用场景描述(Universal Scene Description)
是开放、可扩展的框架和生态系统,具有用于在 3D 虚拟世界中合成、编辑、查询、渲染、协作和仿真的 API。
本文介绍了如何使用现有资产和工具立即开始使用 OpenUSD,以及可以采取哪些步骤以迭代方式提升 USD 工作流程的水平。
要以交互方式全面介绍 OpenUSD 的基本概念、词汇和最佳实践,请探索
NVIDIA Deep Learning Institute
提供的
免费 Learn OpenUSD 学习路径
。
数据交换
OpenUSD 不会取代您现有的工具或数据格式。顾名思义,Universal Scene Description 提供了一种常用方法来表示网格和 PBR 材质等许多 3D 概念的数据。它不断发展,以支持其他概念,例如 physics 和 B-reps。
图 1、OpenUSD 日益壮大的 DCC 应用生态系统
要开始将 OpenUSD 与您的现有素材结合使用,请使用导出工具或转换器将您的内容导入 USD。这一方法可确保快速有效地集成到新的 USD 工作流程中。通过这种方式使用 USD,您可以开始增强不同生产阶段的协作、性能和互操作性。
有关入门指南的更多信息,请参阅
支持 USD 交换的应用程序和工具列表
。
当您扩展 USD 工作流并希望将 OpenUSD 与专有数据结合使用时,您可能需要开发自己的数据交换解决方案。
数据交换流程管理从一种数据格式到另一种数据格式的内容提取、转换和提取。有几种主要类型的数据交换解决方案用于将现有内容集成到 USD 工作流中:
导入和导出工具:这些工具可在数字内容创建 (DCC) 应用中找到,使您能够直接向 USD 导入和导出 3D 素材。Rhino 和 Blender 等热门软件有助于导入 3D 素材,而 Houdini 等应用则将 3D 素材作为原生 USD 文件打开。这使得您可以在现有工具中轻松使用 USD。
转换器:
这些是独立的程序或脚本,旨在将不同格式的 3D 文件转换为 USD 格式或从 USD 格式转换为 3D 文件。它们为批量处理和自动化提供了灵活性,可确保您的资产可以轻松转换,而无需人工干预。
USD 文件格式插件
:通过这种类型的 USD 插件,您可以保留原始源格式的数据,同时在打开舞台时动态读取数据并将其转换为 USD。这意味着,您可以使用原生格式处理资产,同时仍能从 USD 的强大功能中受益,同时保持数据完整性和工作流程效率。
概念数据映射以及提取、转换、加载(ETL)是开发数据交换解决方案的关键流程,有助于确保无缝传输和集成 3D 资产和数据。
概念数据映射涉及定义一个系统或格式的数据与另一个系统或格式中的数据对应的方式
。在使用不同的 DCC 应用或文件类型时,这一点尤为重要,因为每个应用或文件类型都可能有自己独特的数据结构和表示形式。
ETL 是一个更全面的过程,用于将数据从一个系统迁移到另一个系统并进行集成。它由以下主要步骤组成:
提取:
从源系统或文件格式中提取数据。数据可以是 3D 模型、纹理、动画或任何其他相关内容。
转换
:转换提取的数据以满足目标运行时需求。这可能包括转换单位、重新格式化数据结构,或过滤掉无关紧要或私有数据。
加载:
将转换后的数据加载到目标运行时。
这有时称为 ingestion
,可确保数据可随时用于基于 USD 的工作流和应用。
概念数据映射和 ETL 流程对于确保内容可以有效准确地传输到 USD 生态系统,并在整个交换过程中保持其质量和功能至关重要。
OpenUSD Exchange SDK
有助于加速数据交换解决方案的开发,并有助于创作高质量的 OpenUSD 数据。
在将数据导入 OpenUSD 后,下一步是了解如何聚合和构建数据,以更大限度地提高效率。
了解如何通过免费的 Learn OpenUSD 自定进度课程
开发数据交换工作流
。
资产结构原则和内容聚合
结构良好的资产可实现有效的内容聚合和资产可重用性,这正是 OpenUSD 真正擅长的地方。在复杂的项目中,正确的素材结构和内容聚合是高效组织和合并 3D 素材和场景的关键方法。
开发素材结构需要将 3D 内容分解为更小、可管理的组件,以便于引用和重复使用。这样,您就可以创建一个层次化且井然有序的结构,简化资产的管理和维护。这种方法允许您在需要时仅加载必要的组件,从而提高可复用性和性能。
借助精心设计的资产结构,您可以增强协作,因为不同的工作流可以在不同的层中独立创作数据,然后将数据组合成单个资产。这种非破坏性编辑方法可确保一个团队所做的更改不会覆盖另一个团队的工作。
构建完较小的组件后,请使用内容聚合功能将其合并为一个连贯一致的场景。合成操作符 (例如 references 和 payloads) 支持此过程。
图 2
、
液化天然气运输船的数字孪生
(图片来源:HD Hyundai 和 Siemens)
这些技术有助于使 OpenUSD 成为管理大型 3D 项目的强大工具,促进高效协作、资产重复使用和工作流效率,使其成为任何现代 3D 生产工作流不可或缺的一部分。
通过免费的 Learn OpenUSD 自定进度课程,学习
资产结构化原则和内容聚合
。
合成操作符
合成操作符是 OpenUSD 的基础运算符,可将场景描述的较小单元组合成较大的单元。这可在 OpenUSD 工作流中实现稳健的资产结构和内容聚合。
此功能是 OpenUSD 的超能力之一,使其有别于其他 3D 数据格式和框架。
图 3、合成操作符支持创建复杂的 3D 对象,例如自动移动机器人
合成操作符是以非破坏性方式组合 USD 数据单位的必要工具。OpenUSD 中提供的合成操作符包括子层、引用、有效载荷、变体集、继承、专门化以及最近添加的重新定位。
从以下合成操作符开始,了解如何在自己的工作流程中利用它们:
子层
:支持多个协作者或进程在给定的 USD 环境中运行。子层中创作的数据以“include”方式添加到 USD 上下文中。
引用:最适合内容聚合,可用于将一个 USD 上下文移植到另一个 USD 上下文的层次结构中。通过引用资产,您可以为每个组件维护单一的事实来源,从而确保一致性并减少冗余。这在重复使用相同资产的大型项目中特别有用。
变体集
:用于存储组件的多个变体,在不复制数据的情况下增强适应性。此功能使您能够在单个高效结构中创建素材的不同版本,例如不同的材质或配置。用例包括指定模型的不同纹理、材质或配置,例如描述具有不同涂料颜色或不同轮缘风格的汽车。
图 4、OpenUSD 场景的合成
学习和实施合成操作符将为您的 USD 工作流提供即时价值。然而,由于其广泛的灵活性和多样化的应用,掌握可能需要数年时间。为了有效使用 OpenUSD,我们强烈建议您投入大量时间进行实验,并深入了解合成弧类型。
通过免费的 Learn OpenUSD 自定进度培训,学习
创建 composition arcs
。
开始使用
OpenUSD 为高效的 3D 资产管理和内容聚合提供了一整套功能,使其成为大型项目不可或缺的框架。通过掌握数据交换、资产结构化和合成操作符,您可以显著改善 3D 工作流中的协作、性能和可复用性。
通过学习 Learn OpenUSD 学习路径上的课程,在基础知识的基础上进行构建。
开始在
Omniverse Kit SDK
上使用 OpenUSD 进行开发,并使用生成式 AI 模型通过
USD NIM 微服务
进行 OpenUSD 开发。
有关 OpenUSD 的更多资源,请访问
Alliance for OpenUSD (
AOUSD
) 论坛
并访问 AOUSD 网站。 |
https://developer.nvidia.com/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/ | AI for Climate, Energy, and Ecosystem Resilience at NVIDIA GTC 2025 | From mitigating climate change to improving disaster response and environmental monitoring, AI is reshaping how we tackle critical global challenges. Advancements in fast, high-resolution climate forecasting, real-time monitoring, and digital twins are equipping scientists, policy-makers, and industry leaders with data-driven tools to understand, plan for, and respond to a warming planet.
At
NVIDIA GTC 2025
, running March 17–21, thought leaders, scientists, developers, and innovators will highlight how AI is helping shape a more sustainable and resilient future. The following sessions showcase AI’s role in climate forecasting, disaster mitigation, and conservation efforts, helping communities adapt to an increasingly unpredictable world.
Strengthening extreme weather predictions and disaster response
As global warming intensifies, and extreme weather events become more severe and frequent, communities need faster and more precise natural disaster predictions and response strategies. AI is improving flood, wildfire, and hurricane modeling, enabling earlier warnings and more effective mitigation efforts. Using large-scale weather simulations, satellite data analysis, and real-time predictive insights, AI is helping emergency responders and decision-makers minimize damage, improve disaster resilience, and keep communities safe.
Sessions
Transform Natural Catastrophe Risk Simulations with Advanced Computational Tools
AXA, AWS, and NVIDIA use Earth-2 simulations to model extreme weather events at unprecedented scale and precision. These tools help insurers, urban planners, and governments prepare for disasters by improving risk estimation and resilience planning, reducing the financial and societal impact of natural disasters.
Boosting Earth System Model Outputs Using Exascale Climate Emulators
Researchers at KAUST and Saint Louis University use exascale climate emulators powered by NVIDIA GPUs to accelerate and refine earth system model outputs. Achieving ultra-high spatial resolution (3.5 km), these models enable more accurate storm and climate simulations, improving extreme weather predictions, and helping emergency responders, insurers, and policymakers improve disaster response planning and climate resilience efforts.
Harnessing AI for Advanced Flood Risk Modelling and Mitigation Strategies
JBA Risk Management uses AI-driven weather models, including Spherical Fourier Neural Operators, to simulate storm seasons with greater accuracy. Using NVIDIA NIM, these models improve flood risk assessment, disaster response planning, and infrastructure investment decisions, all while reducing computational costs.
Near-Real-Time Fire Detection Leveraging Edge AI in Space: Transforming Thermal Earth Observation with NVIDIA
Wildfires require rapid response to minimize devastation. OroraTech’s use of NVIDIA Jetson technology onboard CubeSats delivers fire detection within 60 seconds, providing first responders with critical real-time data to deploy resources more effectively.
Autonomous Systems and Remote Sensing for Better Earth Data
Autonomous aircraft equipped with AI are revolutionizing environmental monitoring by collecting high-resolution data at scale. MIT researchers showcase how these low-cost, long-range systems gather critical data for precision agriculture, disaster response, and environmental assessments, providing actionable insights at scale.
Boosting climate forecasting for energy and infrastructure planning
Accurate, long-term climate forecasting is essential for guiding infrastructure investments, optimizing energy grids, and enhancing sustainability planning. AI-powered models make climate projections faster and more precise, guiding solar energy integration, climate-resilient infrastructure development, and sustainability strategies. These high-resolution, data-driven insights help city planners and decision-makers anticipate evolving conditions for a more resilient future.
Sessions
The Role of AI and Accelerated Computing in Understanding and Mitigating Urban Climate Change
Researchers are using AI, digital twins, and accelerated computing to model rising temperatures, air pollution, and extreme weather in cities. This session explores how generative AI and machine learning analyze satellite data, IoT sensors, and social insights to create high-resolution simulations of urban heat islands and pollution patterns. Experts will discuss how these insights help guide climate-resilient infrastructure, energy efficiency, and targeted adaptation strategies while addressing challenges like computational efficiency and equitable access to AI-driven solutions.
Enhancing Photovoltaic Power Predictions with High-Resolution Weather Forecasting from NVIDIA Earth-2
NVIDIA Earth-2 is revolutionizing solar energy forecasting with ultra-precise, AI-driven weather predictions. In collaboration with GCL and Peking University, researchers show how time series models and high-resolution weather data improve photovoltaic power forecasts, enhancing grid stability, and improving renewable energy planning for power providers and policymakers.
Applying AI Weather Models with NVIDIA Earth-2
NVIDIA Earth-2 AI-powered forecasting models generate high-resolution weather predictions at a fraction of the cost and time of traditional numerical models. This training lab explores how AI-driven downscaling techniques improve forecasting accuracy for industries such as energy and agriculture, providing more accurate forecasting and better decision-making across critical sectors.
Advancing AI-driven environmental monitoring and conservation
AI is transforming environmental monitoring, conservation, and ecosystem management with advanced digital twin technology and autonomous systems. From high-resolution coral reef modeling to large-scale ecosystem assessments, these innovations provide scientists and conservationists with critical insights to guide conservation strategies and protect biodiversity.
Session
Exploring Earth’s Oceans: Using Digital Twins to Drive Digital Ocean Collaboration
Oceans regulate climate and support biodiversity, but their complexity makes them challenging to study. MITRE uses NVIDIA Omniverse to create digital twins of ocean systems, enabling real-time simulations and predictive modeling. These tools foster collaboration among scientists, policymakers, and educators to improve marine resource management, drive conservation efforts, and bolster climate resilience.
In-person posters
Photo-Realistic 3D Digital Twin to Enhance Understanding of the Great Barrier Reef
AI-powered 3D digital twins are advancing how researchers model and monitor coral reef ecosystems. Using Reef-NeRF and Reef-3DGS, scientists can create highly detailed reconstructions to track coral health, measure structural changes, and assess the impacts of climate change. These tools provide conservationists and policymakers with critical data to inform reef recovery strategies and improve long-term conservation efforts
Mangrove Simulation Predicts Carbon Sequestration Solutions
Mangrove forests are a key solution to carbon capture and climate mitigation, but effective restoration requires precise monitoring and management. ID Water Co., Ltd. is using AI-powered irrigation automation and GPU-driven carbon sink modeling to improve mangrove reforestation efforts. These models improve survival rates, optimize carbon sequestration, and address verification challenges, making large-scale restoration more feasible and impactful.
Revolutionizing Antarctic Flora Monitoring with AI and Drones
AI-powered drones and hyperspectral imaging are enabling high-precision mapping of Antarctic vegetation. Using NVIDIA GPUs, researchers can detect moss and lichen with over 99% accuracy, providing key insights into climate-driven ecosystem changes while reducing the need for invasive field surveys in this fragile ecosystem.
Join our global community of developers, scientists, business leaders, and innovators at NVIDIA GTC 2025 to discover how AI drives solutions to our most complex challenges.
From NVIDIA CEO
Jensen Huang’s must-see keynote
to over 900 sessions, 300+ exhibits, hands-on technical training, and exclusive networking events, GTC offers a firsthand look at AI’s real-world impact.
The
session catalog
is open—start building your agenda today. | https://developer.nvidia.com/zh-cn/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/ | NVIDIA GTC 2025 上的人工智能促进气候、能源和生态系统复原力 | 从减缓气候变化到改进灾害响应和环境监测,AI 正在重塑我们应对重大全球挑战的方式。快速、高分辨率的气候预报、实时监控和数字孪生技术的进步为科学家、政策制定者和行业领导者提供了数据驱动的工具,帮助他们了解、规划和应对一个变暖的星球。
在 3 月 17 日至 21 日举行的
NVIDIA GTC 2025
大会上,思想领袖、科学家、开发者和创新者将重点介绍 AI 如何帮助塑造更具可持续性和韧性的未来。以下会议展示了 AI 在气候预测、灾难缓解和保护工作中发挥的作用,帮助社区适应日益不可预测的世界。
加强极端天气预测和灾害响应
随着全球变暖加剧,极端天气事件变得更加严重和频繁,社区需要更快、更精确的自然灾害预测和响应策略。AI 正在改进洪水、野火和飓风建模,从而实现更早的警报和更有效的缓解措施。借助大规模天气模拟、卫星数据分析和实时预测性见解,AI 正在帮助应急响应人员和决策者尽可能减少损失、提高抗灾能力,并确保社区安全。
会议
借助高级计算工具转变自然灾害风险模拟 AXA、AWS 和 NVIDIA 使用 Earth-2 模拟以前所未有的规模和精度对极端天气事件进行建模。这些工具通过改进风险估计和恢复能力规划,减少自然灾害的金融和社会影响,帮助保险公司、城市规划人员和政府做好灾害准备。
使用百亿亿级 (Exascale) 气候模拟器提升地球系统模型的输出
KAUST 和圣路易斯大学的研究人员使用由 NVIDIA GPUs 提供支持的百亿亿级 (Exascale) 气候模拟器来加速和优化地球系统模型的输出。这些模型可实现超高的空间分辨率 (3.5 公里),从而能够更准确地模拟风暴和气候,改进极端天气预测,并帮助应急响应人员、保险公司和政策制定者改进灾害应对规划和气候弹性工作。
将 AI 用于高级洪水风险建模和缓解策略
JBA Risk Management 使用 AI 驱动的天气模型 (包括 Spherical Fourier Neural Operators) 更准确地模拟风暴季。借助 NVIDIA NIM,这些模型可改善洪水风险评估、灾害应对规划和基础设施投资决策,同时降低计算成本。
在太空中利用边缘 AI 进行近乎实时的火灾检测:借助 NVIDIA 改变热地球观测方式
野火需要快速响应,以尽可能减少破坏。OroraTech 在 CubeSats 上使用 NVIDIA Jetson 技术,可在 60 秒内完成火灾检测,从而为急救人员提供关键的实时数据,以便更有效地部署资源。
利用自主系统和遥感获取更好的地球数据
配备 AI 的自主飞机正在大规模收集高分辨率数据,从而彻底改变环境监测。麻省理工学院的研究人员展示了这些低成本的远程系统如何为精准农业、灾害响应和环境评估收集关键数据,并大规模提供可行的见解。
提升气候预测能力以加强能源和基础设施规划
准确的长期气候预测对于指导基础设施投资、优化电网和增强可持续发展规划至关重要。AI 驱动的模型能够更快、更精确地进行气候预测,为太阳能集成、气候弹性基础设施开发和可持续发展策略提供指导。这些由数据驱动的高分辨率见解可帮助城市规划人员和决策者预测不断变化的条件,打造更具弹性的未来。
会议
AI 和加速计算在了解和减缓城市气候变化方面的作用
研究人员正在利用 AI、数字孪生和加速计算对城市中的气温升高、空气污染和极端天气进行建模。此会议将探讨生成式 AI 和机器学习如何分析卫星数据、物联网传感器和社会见解,以创建城市热岛和污染模式的高分辨率模拟。专家们将讨论这些见解如何帮助指导适应气候变化的基础设施、能效和有针对性的适应战略,同时应对计算效率和公平获取 AI 驱动的解决方案等挑战。
借助 NVIDIA Earth-2 的高分辨率天气预报增强太阳能发电预测
NVIDIA Earth-2 通过 AI 驱动的超精确天气预测,正在彻底改变太阳能预测。研究人员与 GCL 和北京大学合作,展示了时间序列模型和高分辨率天气数据如何改善太阳能发电预测、增强电网稳定性,以及如何改善电力供应商和政策制定者的可再生能源规划。
将 AI 天气模型与 NVIDIA Earth-2AI 驱动的预测模型结合使用
,生成高分辨率天气预测,所需的成本和时间远低于传统数值模型。此训练实验室将探讨 AI 驱动的降比例技术如何提高能源和农业等行业的预测准确性,从而为关键领域提供更准确的预测和更好的决策。
推进 AI 驱动的环境监测和保护
AI 正在利用先进的数字孪生技术和自主系统,改变环境监测、保护和生态系统管理。从高分辨率珊瑚礁建模到大规模生态系统评估,这些创新为科学家和自然保护主义者提供了重要见解,以指导保护策略和保护生物多样性。
会议
探索地球的海洋:使用数字孪生推动数字海洋协作海洋调节气候并支持生物多样性
,但其复杂性使研究这些海洋具有挑战性。MITRE 使用 NVIDIA Omniverse 创建海洋系统的数字孪生,实现实时模拟和预测建模。这些工具促进了科学家、政策制定者和教育工作者之间的协作,以改善海洋资源管理、推动保护工作,并增强气候恢复能力。
线下海报
逼真的 3D 数字孪生增强对大堡礁的理解 AI 驱动的 3D 数字孪生正在推进研究人员建模和监测珊瑚礁生态系统的方式。借助 Reef-NeRF 和 Reef-3DGS,科学家可以创建高度精细的重建模型,以追踪珊瑚健康状况、测量结构变化并评估气候变化的影响。这些工具为环保人士和政策制定者提供关键数据,以便制定珊瑚礁恢复策略并改进长期保护工作
Mangrove Simulation 预测碳封存解决方案
红树林是碳捕获和气候减缓的关键解决方案,但有效的恢复需要精确的监控和管理。ID Water Co.,Ltd.正在使用由 AI 提供动力支持的喷洒自动化和 GPU 驱动的碳汇建模来改进红树林再造工作。这些模型可提高存活率、优化碳封存并解决验证难题,从而提高大规模修复的可行性和成效。
借助 AI 和无人机革新南极植物监测
AI 赋能的无人机和高光谱成像技术可实现对南极植被的高精度绘图。借助 NVIDIA GPU,研究人员可以以超过 99%的准确率检测和,从而对气候驱动的生态系统变化提供关键见解,同时减少在这个脆弱的生态系统中进行侵入性实地调查的需求。
在 NVIDIA GTC 2025 大会上,加入由开发者、科学家、业务领袖和创新者组成的全球社区,了解 AI 如何为我们面临的复杂挑战提供解决方案。
从 NVIDIA 首席执行官
Jensen Huang 不容错过的主题演讲
,到 900 多场会议、300 多场展览、实操技术培训和独家交流活动,GTC 让您亲身体验 AI 对现实世界的影响。
会议目录
现已开放,请立即开始构建您的议程。 |
https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/ | Automating GPU Kernel Generation with DeepSeek-R1 and Inference Time Scaling | As AI models extend their capabilities to solve more sophisticated challenges, a new
scaling law
known as
test-time scaling
or
inference-time scaling
is emerging. Also known as
AI reasoning
or
long-thinking
, this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then selecting the best one, neural network. This enables AI to strategize and systematically solve complex problems in a similar fashion to how humans dissect complex problems and solve them individually to arrive at a final solution.
In this post, we talk about an experiment done by NVIDIA engineers who used one of the newest open-source models, the DeepSeek-R1 model, together with additional computing power during inference to solve a complex problem. The experiment was to automatically generate GPU attention kernels that were numerically correct and optimized for different flavors of attention without any explicit programming.
The results turned out to be better than the optimized kernels developed by skilled engineers in some cases.
The need for optimized attention kernels and associated challenges
Attention is a key concept that revolutionized the development of the large language model (LLM). It’s a powerful mechanism that enables AI models to focus selectively on the most relevant parts of input when performing tasks. By focusing on important information, the attention operation helps the models make better predictions and find hidden patterns in the data.
The computational complexity of the attention operation grows quadratically in relation to the input sequence length. This motivates the need for developing an optimized lower-level implementation (that is, a GPU kernel) to prevent runtime errors arising from simple implementations (for example, out-of-memory errors) and for computational efficiency purposes.
There are multiple variants of attention (causal, relative positional embeddings, alibi, and so on) and often engineers must use a combination of these variants for a given task.
Multi-modal models (for example, vision transformers) introduce an additional layer of challenges as they require specialized attention mechanisms (Spatial Neighborhood Attention) for maintaining spatio-temporal information often encountered in computer vision, video generation models, and so on.
Figure 1. Neighborhood attention on 2D inputs
Creating an optimized GPU kernel for attention takes a lot of skill and time, even for experienced software engineers.
Recent LLMs like DeepSeek-R1 have shown a lot of promise in code generation tasks, but they still face challenges creating optimized code on the first try. This makes it necessary to use other strategies at inference time to generate optimized code.
The following prompt is sample user input for a relative positional embeddings attention kernel.
Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications.
Use the following function to compute the relative positional encoding:
def relative_positional(score, b, h, q_idx, kv_idx):
return score + (q_idx - kv_idx)
When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use:
qk = qk * qk_scale + rel_pos * 1.44269504
Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations.
LLMs can occasionally produce hallucinated code or mix syntax from different languages or frameworks, causing immediate code errors or inefficiencies. Computing the optimal GPU thread mapping is also non-trivial and a challenging task, often requiring iterative refinement to achieve a correct and efficient kernel.
Inference-time scaling for generating optimized GPU Kernels
To get the best results with optimized attention kernels, NVIDIA engineers created a new workflow that includes a special verifier along with the DeepSeek-R1 model during inference in a closed-loop fashion for a predetermined duration.
Figure 2. Inference-time scaling with DeepSeek-R1 on the NVIDIA Hopper platform
The workflow is first initialized by a manual prompt and the DeepSeek-R1 model generates the GPU code (that is, the kernel) in the first pass. The verifier runs on an NVIDIA H100 GPU. It analyzes the generated kernel and creates new prompts that are provided as input to the DeepSeek-R1 model.
This closed-loop approach makes the code generation process better by guiding it in a different way each time. The team found that by letting this process continue for 15 minutes resulted in an improved attention kernel.
Figure 3. Performance of automatically generated optimized attention kernels with flex attention
This workflow produced numerically correct kernels for 100% of Level-1 problems and 96% of Level-2 problems, as tested by
Stanford’s KernelBench
benchmark.
The Level-1 solving rate in KernelBench refers to the numerical correct metric used to evaluate the ability of LLMs to generate efficient GPU kernels for specific computational tasks. This test is part of a series of challenges to test the latest LLMs’ abilities in GPU programming.
Figure 4 shows how the inference-time budget affects the agent’s solving rate. Allocating more than 10 minutes per problem in the Level-1 category enables the workflow to produce numerical correct code for most of the 100 problems.
Figure 4. Inference-time scaling results in optimized GPU kernels
Optimized GPU kernels on DeepSeek-R1
These results show how you can use the latest DeepSeek-R1 model to give better GPU kernels by using more computing power during inference time. This is still a new research area with early results on a promising approach that automatically generates effective attention kernels.
While we are off to a good start, more work is needed to generate better results consistently for a wider variety of problems. We’re excited about the recent developments in DeepSeek-R1 and its potential.
For more information or to get started, see the
DeepSeek-R1 NIM microservice
, now available on
build.nvidia.com
. | https://developer.nvidia.com/zh-cn/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/ | 使用 DeepSeek-R1 和推理时间缩放实现自动化 GPU 内核生成 | 随着 AI 模型扩展其功能以解决更复杂的挑战,一种称为“
测试时扩展
”或“
推理时扩展
”的新扩展法则正在出现。该技术也称为
AI 推理
或
长时思考
技术,通过在推理过程中分配额外的计算资源来评估多种可能的结果,然后选择最佳结果 (neural network),从而提高模型性能。这使得 AI 能够以类似于人类剖析复杂问题并单独解决这些问题以达成最终解决方案的方式,制定战略并系统化地解决复杂问题。
在本文中,我们将介绍 NVIDIA 工程师完成的一项实验,他们在推理过程中使用最新的开源模型之一 DeepSeek-R1 模型以及额外的计算能力来解决复杂的问题。该实验旨在自动生成 GPU 注意力内核,这些内核在数值上是正确的,并针对不同的注意力类型进行了优化,而无需任何显式编程。
事实证明,在某些情况下,最终结果优于由技术精湛的工程师开发的优化内核。
优化注意力内核的需求和相关挑战
注意力是一个关键概念,彻底改变了大语言模型(LLM)的发展。这是一种功能强大的机制,可让 AI 模型在执行任务时,有选择地专注于与输入内容最相关的部分。通过聚焦于重要信息,注意力运算可帮助模型做出更好的预测,并找到数据中隐藏的模式。
相对于输入序列长度,注意力运算的计算复杂性呈正交增长。这促使我们需要开发经过优化的低级实现 (即 GPU 内核),以防止简单实现产生的运行时错误 (例如内存不足的错误),并提高计算效率。
注意力有多种变体 (因果关系、相对位置嵌入、不在场证明等),工程师通常必须将这些变体的组合用于给定任务。
多模态模型 (例如,vision transformers) 带来了额外的一层挑战,因为它们需要专门的注意力机制 (Spatial Neighborhood Attention) 来维护计算机视觉、视频生成模型等领域中经常遇到的时空信息。
图 1. 邻域注意力在 2D 输入上的应用
创建经优化的 GPU 内核以供关注需要大量技能和时间,即使对于经验丰富的软件工程师而言也是如此。
最近的 LLMs(如 DeepSeek-R1)在代码生成任务方面表现出了很大的前景,但在第一次尝试创建优化代码时,它们仍然面临挑战。这使得有必要在推理时使用其他策略来生成优化的代码。
以下提示是用户输入相对位置嵌入注意力核函数的示例。
Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications.
Use the following function to compute the relative positional encoding:
def relative_positional(score, b, h, q_idx, kv_idx):
return score + (q_idx - kv_idx)
When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use:
qk = qk * qk_scale + rel_pos * 1.44269504
Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations.
LLM 偶尔会产生来自不同语言或框架的幻影代码或混合语法,从而立即导致代码错误或效率低下。计算最佳 GPU 线程映射也并非易事,也是一项具有挑战性的任务,通常需要进行迭代优化才能实现正确高效的内核。
用于生成经过优化的 GPU 内核的推理时间扩展
为了通过优化的注意力内核获得最佳结果,NVIDIA 工程师创建了一个新的工作流程,其中包括一个特殊的验证器以及 DeepSeek-R1 模型,在预先设定的时间内以闭环方式进行推理。
图 2、在 NVIDIA Hopper 平台上使用 DeepSeek-R1 实现推理时间扩展
该工作流程首先通过手动提示进行初始化,然后 DeepSeek-R1 模型会在首次通道中生成 GPU 代码(即核函数)。该验证器在 NVIDIA H100 GPU 上运行。它会分析生成的核函数并创建新的提示,这些提示以 input 的形式提供给 DeepSeek-R1 模型。
这种闭环方法每次都以不同的方式指导代码生成过程,从而改进代码生成过程。该团队发现,让此过程持续 15 分钟可以改进注意力核函数。
图 3、具有 Flex Attention 的自动生成优化注意力内核的性能
此工作流程为 100%的 1 级问题和 96%的 2 级问题生成了数值正确的内核,测试对象为
斯坦福大学的 KernelBench
基准测试。*
KernelBench 中的 1 级求解率是指用于评估 LLM 为特定计算任务生成高效 GPU 内核的能力的数字正确指标。本次测试属于一系列挑战,旨在测试最新 LLM 在 GPU 编程中的能力。
图 4 显示了推理时间预算如何影响智能体的求解率。在 Level-1 类别中为每个问题分配超过 10 分钟的时间,使工作流程能够为 100 个问题中的大多数生成正确的数字代码。
图 4、在优化的 GPU 内核中实现推理 – 时间扩展
DeepSeek-R1 上经过优化的 GPU 内核
这些结果展示了如何使用最新的 DeepSeek-R1 模型,通过在推理期间使用更强大的计算能力来提供更好的 GPU 内核。这仍然是一个新的研究领域,在自动生成有效注意力内核的前景良好的方法方面取得了早期成果。
虽然我们已经有了一个良好的开端,但我们需要做更多的工作,以便为更广泛的问题持续提供更好的结果。我们对 DeepSeek-R1 的最新进展及其潜力感到兴奋。
如需了解更多信息或入门,请参阅
DeepSeek-R1 NIM 微服务
(现已在 build.nvidia.com 上提供)。 |
https://developer.nvidia.com/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/ | AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment | A new study and AI model from researchers at Stanford University is streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research
aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data.
“Multimodal foundation models are a new frontier in medical AI research,” said
Ruijiang LI
, an associate professor of radiation oncology and study senior author. “Recently, vision–language foundation models have been developed for medicine, particularly in the field of pathology. However, existing studies use off-the-shelf foundation models that require paired image–text data for pretraining. Despite extensive efforts that led to the curation of 1M pathology image–text pairs, it’s still insufficient to fully capture the diversity of the entire disease spectrum.”
Oncologists rely on many data sources when considering a patient’s condition and planning optimal treatments. However, integrating and interpreting complex medical data remains difficult for doctors and AI models. The
study
, recently published in
Nature
, highlights how MUSK could help doctors make more accurate and informed decisions while also solving this long-standing challenge in medical AI.
Using deep learning, MUSK processes clinical text data (such as doctor’s notes) and pathology images (like histology slides), to identify patterns that may not be immediately obvious to doctors, leading to better clinical insights.
To do so, it uses a two-step multimodal transformer model. First, it learns from large amounts of unpaired data, pulling features from the text and images that are useful. Then it finetunes its understanding of the data by linking paired image-text data, which helps it recognize different types of cancer, predict biomarkers, and suggest effective treatment options.
The researchers pretrained the AI model on one of the biggest datasets in the field, using 50M pathology images from 11,577 patients with 33 tumor types and 1B pathology-related text data.
According to
Jinxi Xiang
, study lead author and postdoctoral scholar in radiation physics, the pretraining was conducted over 10 days using 64
NVIDIA V100 Tensor Core
GPUs across eight nodes, enabling MUSK to process vast amounts of pathology images and clinical text efficiently. A secondary pretraining phase and ablation studies used
NVIDIA A100 80 gb Tensor Core GPUs
. The researchers also used
NVIDIA RTX A6000 GPUs
for evaluating downstream tasks. The framework was accelerated with
NVIDIA CUDA
and
NVIDIA cuDNN
libraries, for optimized performance.
When tested on 23 pathology benchmarks, MUSK outperformed existing AI models in several key areas. It excelled at matching pathology images with correlating medical text, making it more effective in gathering relevant patient information. It also interpreted pathology-related questions, such as identifying a cancerous area or predicting biomarker presence with 73% accuracy.
Figure 1. An example of the visual question-answering MUSK can perform
It improved detection and classification for cancer subtypes including breast, lung, and colorectal cancer by up to 10%, which could help with early diagnosis and treatment planning. It also detected breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%.
Additionally, MUSK reliably predicted cancer survival outcomes 75% of the time, and which lung and gastro-esophageal cancers would respond to immunotherapy with 77% accuracy. This is a significant improvement over standard clinical biomarkers with an accuracy of only 60-65%.
“One striking finding is that AI models that integrate multi-modal data consistently outperform those based on imaging or text data alone, highlighting the power of a multimodal approach,” Li said. “The true value of MUSK lies in its ability to leverage large-scale unpaired image and text data for pretraining, which is a substantial increase over existing models that require paired data.”
A core strength of the research is that it can adapt across different clinical settings with little training. This could improve efficiency in oncology workflows and help doctors diagnose cancer faster while tailoring treatments for better patient outcomes.
Their future work will focus on validating the model in multi-institution cohorts of patients from diverse populations and for high-stakes applications such as treatment decision-making. The researchers note that prospective validation in clinical trials will be required for regulatory approval.
“We are also working on an extension of the MUSK approach to digital pathology to other types of data such as radiology images and genomic data,” said Li.
The researchers’ work, including installation instructions, model weights, evaluation code, and sample data is available on
GitHub
. | https://developer.nvidia.com/zh-cn/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/ | AI 基础模型增强癌症诊断并实现个性化治疗 | 斯坦福大学研究人员的一项新研究和 AI 模型正在简化癌症诊断、治疗规划和预后预测。这项名为 MUSK (Multimodal transformer with Unified maSKed modeling) 的研究旨在提高精准肿瘤学,根据每位患者独特的医疗数据为其定制治疗计划。
“多模态基础模型是医学 AI 研究的新领域,”放射肿瘤学副教授兼研究高级作者
Ruijiang LI
说。“最近,我们为医学领域开发了视觉语言基础模型,尤其是在病理学领域。但是,现有研究使用的现有基础模型需要配对的图像 – 文本数据进行预训练。尽管我们付出了大量努力,最终打造出 1M 病理图像文本对,但它仍然不足以完全捕捉整个疾病谱系的多样性。”
在考虑患者状况和规划最佳治疗方案时,肿瘤科医生依靠多种数据源。然而,医生和 AI 模型仍然难以集成和解释复杂的医疗数据。该研究最近发表在 Nature 杂志上,重点介绍了 MUSK 如何帮助医生做出更准确、更明智的决定,同时解决医学 AI 领域的长期挑战。
借助深度学习,MUSK 处理临床文本数据(如医生的笔记)和病理学图像(如组织学幻灯片),以识别医生可能无法立即发现的模式,从而获得更好的临床见解。
为此,它使用了两步多模态 transformer 模型。首先,它从大量未配对的数据中学习,从有用的文本和图像中提取特征。然后,它通过关联配对的图像-文本数据来微调对数据的理解,这有助于识别不同类型的癌症、预测生物标志物,并提出有效的治疗方案。
研究人员基于该领域最大的数据集之一预训练了 AI 模型,使用了来自 11,577 名患者的 50M 病理学图像,其中有 33 种肿瘤类型和 1B 病理学相关文本数据。
据辐射物理学研究主要作者兼博士后学者 Jinxi Xiang 称,预训练在 8 个节点上使用 64 个
NVIDIA V100 Tensor Core
GPUs 进行了 10 天以上,使 MUSK 能够高效处理大量病理学图像和临床文本。二级预训练阶段和消融研究使用
NVIDIA A100 80GB Tensor Core GPUs
。研究人员还使用
NVIDIA RTX A6000 GPUs
评估下游任务。该框架通过
NVIDIA CUDA
和
NVIDIA cuDNN
库进行加速,以优化性能。
在 23 项病理学基准测试中,MUSK 在多个关键领域的表现优于现有 AI 模型。它擅长将病理学图像与相关的医学文本进行匹配,从而更有效地收集相关的患者信息。它还能解读与病理学相关的问题,例如识别癌变区域或预测生物标志物的存在,准确率高达 73%。
图 1. 例如,视觉问答 MUSK 可以执行
它将乳腺癌、肺癌和结直肠癌等癌症亚型的检测和分类能力提高了 10%,这有助于早期诊断和治疗规划。它还检测到乳腺癌生物标志物,AUC(用于衡量模型准确性的指标)为 83%。
此外,MUSK 有 75%的时间能够可靠预测癌症生存期结果,以及哪些肺癌和胃食道癌会对免疫治疗做出反应,准确率为 77%。与准确率仅为 60-65%的标准临床生物标志物相比,这是一个显著的改进。
“一个惊人的发现是,集成多模态数据的 AI 模型的性能始终优于仅基于图像或文本数据的 AI 模型,这凸显了多模态方法的强大功能,”Li 说。“MUSK 的真正价值在于它能够利用大规模的未配对图像和文本数据进行预训练,与需要配对数据的现有模型相比,这是一个巨大的提升。”
这项研究的一个核心优势是,它可以在几乎没有训练的情况下适应不同的临床环境。这可以提高肿瘤学工作流程的效率,并帮助医生更快地诊断癌症,同时定制治疗方案以改善患者的治疗效果。
他们未来的工作重点将是在来自不同人群的多机构患者群体中验证该模型,以及用于治疗决策等高风险应用。研究人员指出,临床试验中的前瞻性验证需要获得监管机构的批准。
“我们还致力于将 MUSK 方法扩展到数字病理学,包括放射学图像和基因组数据等其他类型的数据,”Li 说。
研究人员的工作(包括安装说明、模型权重、评估代码和样本数据)
可在 GitHub 上获取。 |
https://developer.nvidia.com/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/ | CUDA Toolkit Now Available for NVIDIA Blackwell | The latest release of the
CUDA Toolkit
, version 12.8, continues to push accelerated computing performance in data sciences, AI, scientific computing, and computer graphics and simulation, using the latest NVIDIA CPUs and GPUs. This post highlights some of the new features and enhancements included with this release:
NVIDIA Blackwell architecture support
CUDA Graphs conditional nodes enhancements
Blackwell CUTLASS kernels for large language models (LLMs)
NVIDIA Nsight Developer Tools updates
Math libraries updates
cudaStreamGetDevice
Compiler updates
Accelerated Python updates
Feature-complete architectures
NVIDIA Blackwell architecture support
CUDA Toolkit 12.8 is the first version of the Toolkit to support the
NVIDIA Blackwell
architecture across the entire suite of Developer Tools including performance tools and profilers, libraries, and compilers. Built with 208 billion transistors—more than 2.5x the number of transistors in
NVIDIA Hopper
GPUs—Blackwell is the largest GPU ever built.
Key Blackwell capabilities supported include:
Second-generation Transformer Engine through custom Tensor Core technology:
Accelerates inference and training for LLMs and mixture-of-experts (MoE) models.
Decompression:
Accelerates performance on data analytics and data science pipelines using the latest compression formats such as LZ4, Snappy, and Deflate.
Network interconnect:
NVLink and NVLink Switches accelerate inter-GPU communications performance for trillion-parameter and multitrillion-parameter AI models.
To learn more about the leading innovations in Blackwell, see the
NVIDIA Blackwell Architecture Technical Brief
.
2x faster CUDA Graphs with runtime kernel selection for lower latency inference
With Blackwell, CUDA Graphs APIs continue to be the most efficient way to launch repeated invocations of sequences of GPU operations. CUDA Toolkit 12.8 introduces more enhancements to CUDA Graphs, including additional
conditional node
types.
In many applications, having dynamic control over the execution of work in CUDA Graphs can increase performance and flexibility of graph launches. For example, an algorithm that involves iterating over a series of operations many times until the result converges below a certain threshold can now run wholly on the GPU without needing CPU control management, reducing overhead by as much as 2x. CUDA Toolkit 12.8 improves APIs for runtime control of conditional graph nodes.
Conditional nodes contain segments of a graph that can execute, or be skipped, based on a condition to evaluate as the graph is running. Such segments can be evaluated once (an IF node), or repeatedly in a loop (a WHILE node). CUDA 12.8 adds support for two new types of conditional graph nodes: IF/ELSE combined nodes and SWITCH nodes.
With the Blackwell architecture, we’ve improved LLM performance to benefit all reasoning models, including DeepSeek-R1. CUDA Graphs enhanced SWITCH and IF/ELSE support delivers 2x more performance for runtime kernel selection versus going back to the CPU for launch decision-making.
Training
: By reducing CPU dependency for kernel selection, training workloads sustain even more GPU Tensor Core throughput, resulting in higher Model FLOPs Utilization (MFU). This improves performance using the same GPU infrastructure, reducing time and cost to train.
Inference
: For next-generation reasoning models that make use of test-time compute, a high token generation rate is critical as each inference request can generate a vast number of tokens per query. CUDA 12.8 new stream API enables fewer calls back to the host CPU, reducing the time between one kernel finishing and the next one starting, increasing token generation rate. This results in more tokens generated in fixed time budget, helping models reason more and increasing intelligence.
To learn more, see
Dynamic Control Flow in CUDA Graphs with Conditional Nodes
.
Blackwell CUTLASS kernels for LLMs
CUTLASS
, since its 2017 debut, has been instrumental for researchers and developers implementing high-performance CUDA kernels on NVIDIA GPUs. By providing developers with comprehensive tools to design custom operations, such as GEMMs and Convolutions, CUTLASS has been critical for the development of hardware-aware algorithms, powering breakthroughs like FlashAttention that helped spark modern AI.
With the release of CUTLASS 3.8—which supports CUDA 12.8—NVIDIA is extending support to the Blackwell architecture, enabling developers to harness next-generation Tensor Cores with support for all new data types. This includes new narrow precision MX formats and the NVIDIA-developed FP4 format, which increase compute throughput. Figure 1 shows CUTLASS can achieve up to 98% relative peak performance for Tensor Core operations.
Figure 1. Blackwell CUTLASS GEMM performance relative to expected peak, delivering up to 98% of Blackwell peak performance
For DeepSeek-V3 and DeepSeek-R1, grouped GEMMs make up a large portion of the MoE compute required during inference. These operations enable different matrix sizes, scaling factors, and fusions to be grouped and parallelized in a single persistent-kernel launch. With CUTLASS, on Blackwell with FP4, Grouped GEMM kernel performance increases by up to 5x over H200 with FP16.
Figure 2. CUTLASS Grouped GEMM performance for MoE inference used in DeepSeek delivers up to 5x more performance on Blackwell compared to Hopper at various precisions
NVIDIA Nsight Developer Tools
NVIDIA
Nsight Compute
2025.1 is the first official release with support for the Blackwell architecture. Updates include visualization of Blackwell Hardware Tensor Memory in the memory chart as well as Tensor Core performance data.
Figure 3. Tensor Memory traffic in the Nsight Compute memory chart
It also comes with several improvements to the increasingly popular range profiling feature. Users can now collect source-level metrics, including Instructions Executed and memory access information, inside profiled ranges. This update also enables Guided Analysis rules evaluation for ranges. This built-in expertise for identifying performance issues is a key component of NVIDIA Nsight Compute. This release reports kernel stack sizes and adds custom tooltips to help users understand their workload performance.
This release of Compute Sanitizer, an automatic correctness checking tool, adds support for Python call stacks to accurately locate kernel correctness issues when kernels are launched through Python applications. Additionally, new Tensor Core MMA guardrails for Blackwell can report errors related to Tensor Core programming. These are enabled by adding the PTXAS flag
-g-tmem-access-check
when compiling programs. Examples of common errors include access to unallocated tensor memory, invalid addresses, and invalid allocator usage.
Math libraries updates
With CUDA Toolkit 12.8, we have several new library enhancements that leverage the new Blackwell architecture and help accelerate applications in AI, data sciences, graphics and simulation, and high-performance scientific computing.
New features
cuBLAS
APIs
were extended to support microscaled 4-bit and 8-bit floating point mixed-precision tensor core accelerated matrix multiplication for compute capability 10.0 (Blackwell) and higher.
Introduced
initial support for CUDA in Graphics (CIG)
on Windows x64 for NVIDIA Ampere GPU architecture and Blackwell GeForce-class GPUs. CIG contexts are now autodetected, and cuBLAS selects kernels that comply with CIG shared memory usage limits.
cuSOLVER
now supports zsytrf/zsytrs, a complex symmetric direct solver without pivoting.
nvJPEG
now provides support for the Tegra architecture.
NPP
now provides support for the DRIVE Thor architecture.
cudaStreamGetDevice
Applications often use CUDA streams to provide ordered access to GPU resources. An instance of a CUDA stream is associated with a fixed CUDA device. In applications that address multiple devices, there are scenarios where getting a handle to the underlying device for a given stream is useful to tailor the application to device characteristics.
Previously, the CUDA API did not provide a mechanism for retrieving the device associated with a CUDA stream; developers had to track this themselves. The addition of the
cudaStreamGetDevice
CUDA API to retrieve the device associated with a CUDA stream can simplify applications.
Compiler updates
New compiler updates include the following:
The CUDA Toolkit 12.8 release introduces support for
GCC 14
as a host-side compiler.
The default high-level optimizer is now based on
LLVM 18
for the Blackwell architecture.
nvdisasm
now supports emitting JSON formatted SASS disassembly.
Accelerated Python updates
The following two beta releases are now available for Python users:
CUDA Python has released an early prototype of a new idiomatic object model called
cuda.core
and moved the CUDA binding to a submodule,
cuda.bindings
. For more information, see the documentation in the
NVIDIA/cuda-python
GitHub repo.
CUDA Core Compute Libraries (CCCL) has released early prototypes of Python for parallel and cooperative algorithms, enabling you to use thread-level parallelism with user-defined types and functions from pure Python code. Learn more about
CCCL
.
Additionally, the CuPy team is releasing a new version with Blackwell patches validated for general availability.
Feature-complete architectures
With the CUDA Toolkit 12.8 release, we now consider the Maxwell, Pascal, and Volta architectures to be feature-complete and support for them will be frozen in an upcoming release.
This means that, in future releases, no new features will be added to the driver to enable new CUDA Toolkit functionality supporting Maxwell, Pascal, and Volta architectures. End users will be able to run existing software stacks and applications on Maxwell, Pascal, and Volta architectures using the supported upcoming LTS driver branch through its lifecycle.
Starting with release 12.8, developers running offline compilers targeting these architectures will output a warning message when using
nvcc
,
nvrtc
, and
nvjitlink
.
In the next major CUDA Toolkit release, offline compilation support for the Maxwell, Pascal, and Volta architectures will be removed from the compilers. The upcoming LTS driver for production application execution and JIT compilation of Maxwell, Pascal, and Volta applications will be supported for the normal 3-year LTS support window.
For more details, read the
CUDA Toolkit 12.8 Release Notes
.
Summary
The CUDA Toolkit 12.8 release provides full feature support for the NVIDIA Blackwell architecture. This release continues to provide enhanced support for the newest NVIDIA GPUs, accelerated libraries, compilers, and Developer Tools, whether you’re developing applications in C++ or Python.
Want more information? Check out the
CUDA documentation
, browse the latest
NVIDIA Deep Learning Institute (DLI)
offerings, and visit the
NGC catalog
. Ask questions and join the conversation in the
CUDA Developer Forums.
Acknowledgments
Thanks to the following NVIDIA contributors: Stephen Jones, Jackson Marusarz, Becca Zandstein, Andy Terrel, Ashraf Eassa, Matt Nicely, and Mridula Prakash. | https://developer.nvidia.com/zh-cn/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/ | CUDA 工具包现已支持 NVIDIA Blackwell 架构 | CUDA 工具包
的最新版本 (版本 12.8) 使用最新的 NVIDIA CPU 和 GPU,持续提升数据科学、AI、科学计算以及计算机图形和模拟领域的加速计算性能。本文重点介绍了此版本包含的一些新功能和增强功能:
NVIDIA Blackwell 架构支持
CUDA 图形处理条件节点增强功能
用于大语言模型(LLMs)的 Blackwell CUTLASS 内核
NVIDIA Nsight 开发者工具更新
数学库更新
cudaStreamGetDevice
编译器更新
加速 Python 更新
功能齐全的架构
NVIDIA Blackwell 架构支持
CUDA 工具包 12.8 是该工具包的第一个版本,在整个开发者工具套件 (包括性能工具和分析器、库和编译器) 中支持
NVIDIA Blackwell
架构。Blackwell 由 208 亿个晶体管构建而成,是
NVIDIA Hopper
GPU 中晶体管数量的 2.5 倍以上,是迄今为止最大的 GPU。
Blackwell 支持的主要功能包括:Key Blackwell
采用自定义 Tensor Core 技术的第二代 Transformer 引擎:加速 LLM 和 mixture-of-experts (MoE) 模型的推理和训练。
解压缩:
使用 LZ4、Snappy 和 Deflate 等最新压缩格式,加速数据分析和数据科学工作流的性能。
网络互连:NVLink 和 NVLink Switches 加速万亿参数和数万亿参数 AI 模型的 GPU 间通信性能。
如需详细了解 NVIDIA Blackwell 的领先创新,请参阅 NVIDIA Blackwell 架构技术概览。
使用运行时核选择将 CUDA Graphs 速度提升 2 倍,从而降低延迟推理
借助 Blackwell,CUDA Graphs APIs 仍然是启动 GPU 操作序列重复调用的最高效方式。CUDA Toolkit 12.8 为 CUDA Graphs 引入了更多增强功能,包括其他
条件节点类型
。
在许多应用程序中,对 CUDA Graphs 中工作的执行进行动态控制可以提高图形启动的性能和灵活性。例如,一种算法需要多次迭代一系列运算,直到结果收到某个值以下,现在这种算法无需进行 CPU 控制管理即可完全在 GPU 上运行,从而将开销降低高达 2 倍。CUDA Toolkit 12.8 改进了用于条件图形节点运行时控制的 API。
条件节点包含图形的片段,这些片段可以在图形运行时根据要评估的条件执行或跳过。此类片段可以评估一次 (IF 节点),也可以在循环中重复评估 (WHILE 节点)。CUDA 12.8 增加了对两种新型条件图形节点的支持:IF/ELSE 组合节点和 SWITCH 节点。
借助 Blackwell 架构,我们改进了 LLM 性能,使包括 DeepSeek-R1 在内的所有推理模型受益。与返回 CPU 进行启动决策相比,CUDA Graphs 增强的 SWITCH 和 IF/ELSE 支持可将运行时内核选择的性能提高 2 倍。
训练:通过减少内核选择对 CPU 的依赖,训练工作负载可维持更多的 GPU Tensor Core 吞吐量,从而提高模型 FLOPS 利用率(MFU)。这提高了使用相同的 GPU 基础架构的性能,减少了训练时间和成本。
推理:对于使用测试时计算的新一代推理模型
,高令牌生成速率至关重要,因为每个推理请求都可以在每个查询中生成大量令牌。CUDA 12.8 新流 API 可减少对主机 CPU 的调用,从而缩短一次内核处理与下一次启动之间的时间,从而提高令牌生成率。这会在固定时间预算内生成更多 token,帮助模型推理更多并提高智能。
如需了解详情,
请参阅使用条件节点的 CUDA 图形中的动态控制流。
适用于 LLMs 的 Blackwell CUTLASS 内核
自 2017 年首次推出以来,
CUTLASS
一直在推动研究人员和开发者在 NVIDIA GPUs 上实施高性能 CUDA 核函数。通过为开发者提供全面的工具来设计自定义操作 (例如 GEMMs 和 Convolutions),CUTLASS 在开发硬件感知算法方面发挥了至关重要的作用,推动了 FlashAttention 等帮助激发现代 AI 的突破。
随着支持 CUDA 12.8 的 CUTLASS 3.8 的发布,NVIDIA 将扩展对 Blackwell 架构的支持,使开发者能够利用新一代 Tensor Core 来支持所有新的数据类型。这包括新的窄精度 MX 格式和 NVIDIA 开发的 FP4 格式,可提高计算吞吐量。图 1 显示,对于 Tensor Core 运算,CUTLASS 可实现高达 98% 的相对峰值性能。
图 1. Blackwell CUTLASS GEMM 性能相对于预期峰值,可提供高达 98% 的 Blackwell 峰值性能
对于 DeepSeek-V3 和 DeepSeek-R1,分组的 GEMM 在推理期间所需的 MoE 计算中占很大比例。这些运算支持在单个持久性核函数启动中对不同的矩阵大小、缩放系数和融合进行分组和并行化。借助 CUTLASS,在 Blackwell 以 FP4,Grouped GEMM 内核性能增加高达 5 倍,相比使用 FP16 的 H200。
图 2、与 Hopper 相比,DeepSeek 中使用的用于 MoE 推理的 CUTLASS 分组 GEMM 性能在 Blackwell 上在各种精度下的性能提升高达 5 倍
NVIDIA Nsight 开发者工具
NVIDIA
Nsight Compute
2025.1 是首个支持 Blackwell 架构的官方版本。更新包括显存图表中 Blackwell 硬件 Tensor 内存的可视化,以及 Tensor Core 性能数据。
图 3、Nsight Compute 内存图中的 Tensor 内存流量
它还对日益流行的范围分析功能进行了多项改进。用户现在可以在已分析的范围内收集源级指标,包括已执行指令和内存访问信息。此更新还启用了针对范围的引导分析规则评估。这种用于识别性能问题的内置专业知识是 NVIDIA Nsight Compute 的关键组件。此版本报告了内核堆栈大小,并添加了自定义工具提示,以帮助用户了解其工作负载性能。
此版本的 Compute Sanitizer 是一款自动正确性检查工具,增加了对 Python 调用堆栈的支持,可在通过 Python 应用启动内核时准确定位内核正确性问题。此外,用于 Blackwell 的新 Tensor Core MMA 护栏可以报告与 Tensor Core 编程相关的错误。在编译程序时,可以通过添加 PTXAS 标志
-g-tmem-access-check
来启用这些功能。常见错误的示例包括访问未分配的 tensor 内存、无效的地址以及使用无效的分配器。
数学库更新
借助 CUDA 工具包 12.8,我们获得了一些新的增强功能库,这些增强功能利用了新的 Blackwell 架构,并有助于加速 AI、数据科学、图形和仿真以及高性能科学计算领域的应用程序。
新功能
cuBLAS
API
经过扩展,支持微缩 4 位和 8 位浮点混合精度张量核心加速矩阵乘法,可实现 10.0(Blackwell)及更高版本的计算能力。
为 Windows x64 上的 NVIDIA Ampere GPU 架构和 Blackwell GeForce 级 GPU 引入了对 CUDA in Graphics (CIG) 的初步支持。现在,系统会自动检测 CIG 上下文,并且 cuBLAS 会选择符合 CIG 共享内存使用限制的内核。
cuSOLVER
现在支持 zsytrf/zsytrs,这是一款无需旋转的复杂对称直接求解器。
nvJPEG
现在支持 Tegra 架构。
NPP
现在为 DRIVE Thor 架构提供支持。
cudaStreamGetDevice
应用程序通常使用 CUDA 流提供对 GPU 资源的有序访问。CUDA 流实例与固定的 CUDA 设备相关联。在用于处理多台设备的应用中,在某些情况下,为给定流获取底层设备的句柄有助于根据设备特性定制应用。
以前,CUDA API 没有提供检索与 CUDA 流关联的设备的机制;开发者必须自行追踪。添加
cudaStreamGetDevice
CUDA API 以检索与 CUDA 流关联的设备,可以简化应用。
编译器更新
新的编译器更新包括以下内容:
CUDA 工具包 12.8 版本引入了对作为主机端编译器的
GCC 14
的支持。
现在,Blackwell 架构的默认高级优化器基于
LLVM 18
nvdisasm
现在支持发射 JSON 格式的 SASS 反汇编。
加速 Python 更新
以下两个测试版现已面向 Python 用户提供:
CUDA Python 已发布名为
cuda.core
的新惯用对象模型的早期原型,并将 CUDA 绑定移至子模块
cuda.bindings
。有关更多信息,请参阅
NVIDIA/cuda-python
GitHub 存储库中的文档。
CUDA 核心计算库 (
CCCL
) 已发布用于并行和协作算法的早期 Python 原型,使您能够使用线程级并行性以及来自纯 Python 代码的用户定义类型和函数。详细了解 CCCL。
此外,CuPy 团队还将发布新版本,其中的 Blackwell 补丁经过验证,现已全面推出。
功能齐全的架构
在 CUDA 工具包 12.8 版本中,我们现在认为 Maxwell、Pascal 和 Volta 架构功能齐全,并且即将发布的版本将冻结对这些架构的支持。
这意味着,在未来的版本中,不会向驱动添加任何新功能来启用支持 Maxwell、Pascal 和 Volta 架构的新 CUDA 工具包功能。最终用户将能够在其生命周期中使用受支持的即将推出的 LTS 驱动分支,在 Maxwell、Pascal 和 Volta 架构上运行现有的软件堆栈和应用。
从版本 12.8 开始,开发者在运行针对这些架构的离线编译器时,将在使用
nvcc
、
nvrtc
和
nvjitlink
时输出警告消息。
在下一个主要 CUDA 工具包版本中,将从编译器中删除对 Maxwell、Pascal 和 Volta 架构的离线编译支持。即将推出的用于生产应用程序执行的 LTS 驱动以及 Maxwell、Pascal 和 Volta 应用程序的 JIT 编译将在正常的 3 年期 LTS 支持窗口期内获得支持。
如需了解更多详情,请参阅
CUDA Toolkit 12.8 版本说明
。
总结
CUDA 工具包 12.8 版本为 NVIDIA Blackwell 架构提供完整的功能支持。无论您是使用 C++ 还是 Python 开发应用程序,此版本都将继续为最新的 NVIDIA GPU、加速库、编译器和开发者工具提供增强支持。
想要了解更多信息?查看
CUDA 文档
,浏览最新的
NVIDIA Deep Learning Institute (DLI) 产品
,并访问
NGC 目录
。在
CUDA Developer Forums
中提出问题并加入对话。
致谢
感谢以下 NVIDIA 贡献者:Stephen Jones、Jackson Marusarz、Becca Zandstein、Andy Terrel、Ashraf Eassa、Matt Nicely 和 Mridula Prakash。 |
https://developer.nvidia.com/blog/recent-posts/ | Recent posts | No content found | https://developer.nvidia.com/zh-cn/blog/recent-posts/ | 最近文章 | No content found |
https://developer.nvidia.com/blog/high-performance-remote-io-with-nvidia-kvikio/ | High-Performance Remote IO With NVIDIA KvikIO | Workloads processing large amounts of data, especially those running on the cloud, will often use an object storage service (S3, Google Cloud Storage, Azure Blob Storage, etc.) as the data source. Object storage services can store and serve massive amounts of data, but getting the best performance can require tailoring your workload to how remote object stores behave. This post is for RAPIDS users who want to read or write data to object storage as quickly as possible so that IO doesn’t bottleneck your workload.
Some of your knowledge about how local file systems behave translates to remote object stores, but they are fundamentally different. Probably the biggest difference between the two, at least for data analysis workloads, is that read and write operations on object storage have
higher and more variable latency
. Every storage service has their own set of best practices and performance guidelines (
AWS
,
Azure
). Here, we’ll give some general guidelines that are focused on data analysis workloads.
Location
Placing your compute nodes near the storage service (ideally, in the same cloud region) will give you the fastest and most reliable network between the machines running your workload and the machines serving the data. And, at the end of the day, the transfer will be limited by the speed of light so minimizing the physical distance doesn’t hurt.
File format
“Cloud-native” file formats have been developed to work well with object storage. These file formats typically provide fast, easy access to metadata (which includes both high-level information like the column names or data types, and lower-level information like where in the file specific data subsets are located).
Apache Parquet
,
Zarr
, and
Cloud Optimized GeoTIFF
are some examples of cloud-native file formats for various types of data.
Because object storage services typically support
range requests
, clients (like
cuDF
) can read the metadata and then download
just
the data you actually need. For example, cuDF can read just a few columns out of a Parquet file with many columns, or a Zarr client can read a single chunk out of a large n-dimensional array. These reads are done in just a few HTTP requests, and without needing to download a bunch of extraneous data that just gets filtered out.
File size
Because every read operation requires (at least) one HTTP request, we’d prefer to amortize the overhead from each HTTP request over a reasonably large number of bytes. If you control the data-writing process, you’ll want to ensure that the files are large enough for your downstream processing tasks to get good performance. The optimal value depends on your workload, but somewhere in the dozens to low-hundreds of MBs is common for parquet files (see below for some specific examples).
That said, you’ll need to be careful with how file size interacts with the next tool in our kit: concurrency.
Concurrency
Using concurrency to download multiple blobs (or multiple pieces of a single blob) at the same time is essential to getting good performance out of a remote storage service. Since it’s a
remote
service, your process is going to spend some time (perhaps a lot of time) waiting around doing nothing. This waiting spans the time between when the HTTP request is sent and the response received. During this time, we wait for the network to carry the request, the storage service to process it and send the response, and the network to carry the (possibly large) response. While parts of that request/response cycle scale with the amount of data involved, other parts are just fixed overhead.
Object storage services are designed to handle many concurrent requests. We can combine that with the fact that each request involves some time waiting around doing nothing, to make many concurrent requests to raise our overall throughput. In Python, this would typically be done using a
thread pool
:
pool = concurrent.futures.ThreadPoolExecutor()
futures = pool.map(request_chunk, chunks)
Or with
asyncio
:
tasks = [request_chunk_async(chunk) for chunk in chunks]
await asyncio.gather(*tasks)
We’re able to have a lot of reads waiting around doing nothing
at the same time
, which improves our throughput. Because each thread/task is mostly doing nothing, it’s ok to have more threads/tasks than your machine has cores. Given enough concurrent requests you will eventually saturate your storage service, which has some requests per second and bandwidth targets it tries to meet. But those targets are high; you’ll typically need many machines to saturate the storage service and should achieve very high throughput.
Libraries
Everything above applies to essentially any library doing remote IO from an object storage service. In the RAPIDS context, NVIDIA
KvikIO
is notable because
It automatically chunks large requests into multiple smaller ones and makes those requests concurrently.
It can read efficiently into host or device memory, especially if
GPU Direct Storage
is enabled.
It’s fast.
As mentioned in the
RADIDS 24.12 release announcement
, KvikIO can achieve impressive throughput when reading from S3. Let’s take a look at some benchmarks to see how it does.
Benchmarks
When you read a file, KvikIO splits that read into smaller reads of
kvikio.defaults.task_size
bytes. It makes those read requests in parallel using a thread pool with
kvikio.defaults.num_threads
workers. These can be controlled using the environment variables
KVIKIO_TASK_SIZE
and
KVIKIO_NTHREADS
, or through Python with:
with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size):
...
See
Runtime Settings
for more.
This chart shows the throughput, in megabits per second, of reading a 1 GB blob from
S3
to a
g4dn
EC2 instance in the same region for various sizes of the thread pool (higher is better).
Figure 1. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. This the throughput of
kvikio.RemoteFile.read
for various values of
kvikio.defaults.num
_threads and a task size of 16 MiB. Throughput increases as we add more threads and parallelize the reads, up to a point.
Fewer threads (less than four) achieve lower throughput and take longer to read the file. More threads (64, 128, 256) achieve higher throughput by parallelizing the requests to the storage service, which serves them in parallel. There are diminishing and even negative returns as we hit the limits of the storage service, network, or other bottlenecks in our system.
With remote IO, each thread spends a relatively long time idle waiting for the response, so a higher number of threads (relative to your number of cores) might be appropriate for your workload. We see that the throughput is highest between 64 to 128 threads in this case.
As shown in the next figure, the task size also affects the maximum throughput.
Figure 2. From a benchmark reading a 1 GB file from S3 to a
g4dn.xlarge EC2
instance, which has a published bandwidth of up to 25 Gbps. This shows a heatmap of the throughput of
kvikio.RemoteFile.read
. The horizontal axis shows throughput for various task sizes, while the vertical axis shows various thread counts.
As long as the task size isn’t too small (around or below 4 MiB) or too large (around or above 128 MiB), then we get around 10 Gbps of throughput. With too small of a task size, the overhead of making many HTTP requests reduces throughput. With too large of a task size, we don’t get enough concurrency to maximize throughput.
KvikIO achieves higher throughput on this workload when compared with
boto3
, the AWS SDK for Python, even when boto3 is used in a thread pool to execute requests concurrently.
Figure 3. From a benchmark reading a 1 GB from S3 to a
g4dn.xlarge EC2
instance, which has a published bandwidth of up to 25 Gbps. The KvikIO benchmark used 64 threads and 16 MiB task size. The Boto3 benchmark used a ThreadPool to read many byte 4 MB chunks in parallel, which a parameter search showed to be the fastest chunk size for boto3.
As a slightly more realistic workload, though still just one focused solely on IO, we compare the performance reading a batch of 360 parquet files, each about 128 MB. This was run on an AWS
g4dn.12xlarge
instance
, which has 4
NVIDIA T4
GPUs and 48 vCPUs.
Figure 4. From a benchmark reading a parquet data set from S3 to a
g4dn.12xlarge EC2
instance, which has a published bandwidth of up to 50 Gbps. The dataset had 360 Apache Parquet files of about 128 MB each, for a total of about 46 GB. The Dask cluster had 4 workers. These results use cuDF 25.04 which will include
an optimization
to read parquet footers in parallel.
With KvikIO enabled, the four Dask worker processes are able to collectively achieve almost 20 Gbps of throughput from S3 to this single node.
Conclusion
As RAPIDS accelerates other parts of your workload, IO can become a bottleneck. If you’re using object storage and are tired of waiting around for your data to load, try out some of the recommendations from this post. Let us know how things work with KvikIO on
GitHub
. You can also join over 3,500 members on the RAPIDS Slack community to talk GPU-accelerated data processing. | https://developer.nvidia.com/zh-cn/blog/high-performance-remote-io-with-nvidia-kvikio/ | 借助 NVIDIA KvikIO 实现高性能远程 IO | 处理大量数据的工作负载 (尤其是在云端运行的工作负载) 通常会使用对象存储服务 (S3、Google Cloud Storage、Azure Blob Storage 等) 作为数据源。对象存储服务可以存储和提供海量数据,但要想获得最佳性能,可能需要根据远程对象存储的行为方式调整工作负载。本文适用于希望尽快将数据读或写到对象存储,以便 IO 不会限制工作负载的 RAPIDS 用户。
您对本地文件系统行为方式的一些了解可转换为远程对象存储,但它们本质上是不同的。这两者之间的最大区别 (至少对于数据分析工作负载而言) 可能在于,对象存储上的读取和写入操作具有越来越高的可变延迟。每个存储服务 (AWS、Azure) 都有自己的一套最佳实践和性能指南。在这里,我们将提供一些专注于数据分析工作负载的一般指南。
地址
将计算节点放置在存储服务附近 (理想情况下,应位于同一云区域),可在运行工作负载的计算机和为数据提供服务的计算机之间提供速度最快、最可靠的网络。在一天结束时,传输将受到光速的限制,因此最大限度地减少物理距离不会造成伤害。
文件格式
“云原生”文件格式的开发能够很好地与对象存储配合使用。这些文件格式通常可让用户快速轻松地访问元数据 (元数据包括列名称或数据类型等高级信息,以及文件特定数据子集所在位置等低级信息)。
Apache Parquet
、
Zarr
和
Cloud Optimized GeoTIFF
是适用于各种类型数据的云原生文件格式的一些示例。
由于对象存储服务通常支持范围请求,因此客户端 (如
cuDF
) 可以读取元数据,然后只下载您实际需要的数据。例如,cuDF 只能从包含多列的 Parquet 文件中读取几列数据,或者 Zarr 客户端可以从大型 n 维数组中读取单个 chunk。这些读取只需通过几次 HTTP 请求即可完成,而且无需下载一堆刚刚被过滤掉的不相干数据。
文件大小
由于每个读取操作都需要 (至少) 一个 HTTP 请求,因此我们倾向于在合理数量的字节数上分担每个 HTTP 请求的用度。如果您控制数据写入过程,则需要确保文件足够大,以便下游处理任务获得良好性能。最佳值取决于您的工作负载,但 parquet 文件的大小通常介于数十 MB 到数百 MB 之间 (请参阅下文,了解一些特定示例)。
也就是说,您需要注意文件大小与 Kit 中的下一个工具:并发的交互方式。
并发
使用并发同时下载多个 blobs (或单个 blob 的多个部分) 对于从远程存储服务中获得良好性能至关重要。由于这是一项远程服务,您的流程将花费一些时间 (可能会花费大量时间) 四处等待,不执行任何操作。此等待时间为 HTTP 请求被发送到响应被接收之间的时间。在此期间,我们会等待网络执行请求,等待存储服务处理并发送响应,等待网络执行响应 (可能较大)。虽然该请求/响应周期的一部分会随所涉及的数据量而扩展,但其他部分只是固定的开销。
对象存储服务旨在处理许多并发请求。我们可以将这一点与每个请求都涉及一些时间来等待不执行任何操作的事实相结合,以发出许多并发请求来提高整体吞吐量。在 Python 中,这通常使用线程池完成:
pool = concurrent.futures.ThreadPoolExecutor()
futures = pool.map(request_chunk, chunks)
或使用
异步
:
tasks = [request_chunk_async(chunk) for chunk in chunks]
await asyncio.gather(*tasks)
我们能够让大量读取
同时
不执行任何操作,从而提高吞吐量。由于每个线程/任务通常不执行任何任务,因此拥有比计算机核心数更多的线程/任务也是可以的。如果并发请求数量足够多,您最终会使存储服务饱和,而存储服务试图满足一些每秒请求数和带宽目标数。但这些目标很高;您通常需要多台机器使存储服务饱和,并且应该实现非常高的吞吐量。
库
上述内容基本上适用于从对象存储服务执行远程 IO 的任何库。在 RAPIDS 环境中,
NVIDIA KvikIO
值得注意,因为
它会自动将大型请求分块为多个较小的请求,并并发发出这些请求。
它可以高效读取主机或设备内存,尤其是启用
GPU Direct Storage
时。
速度很快。
正如 RADIDS 24.12 发布公告中提到的那样,从 S3 读取数据时,KvikIO 可以实现惊人的吞吐量。我们来看看一些基准测试,看看效果如何。
基准测试
当您读取文件时,KvikIO 会将读取的文件拆分成较小的
kvikio.defaults.task_size
字节读取。它使用具有
kvikio.defaults.num_threads
工作线程的线程池并行执行这些读取请求。可以使用环境变量
KVIKIO_TASK_SIZE
和
KVIKIO_NTHREADS
控制这些内容,也可以通过 Python 使用:
with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size):
...
详情请参阅
Runtime Settings
。
此图表显示了在同一区域内,针对不同大小的线程池,从
S3
到
g4dn
EC2 实例读取 1 GB Blob 的吞吐量 (以 Mbps 为单位) (越高越好)。
图 1、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。这是
kvikio.RemoteFile.read
的吞吐量,适用于各种值的
kvikio.defaults.num
_threads 和 16 MiB 的任务。随着我们添加更多线程并对读取进行并行化,吞吐量会增加到一定程度。
线程越少 (少于 4 个),吞吐量越低,读取文件的时间越长。更多线程 (64、128、256) 通过将请求并行化到以并行方式提供服务的存储服务,实现更高的吞吐量。当我们遇到系统中存储服务、网络或其他瓶颈的限制时,会出现递减甚至负回报的情况。
借助远程 IO,每个线程都会在相对较长的时间内等待响应,因此对于您的工作负载,可能适合使用更多线程 (相对于核心数量而言)。我们看到,在本例中,吞吐量最高,介于 64 到 128 个线程之间。
如下图所示,任务大小也会影响最大吞吐量。
图 2、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的
g4dn.xlarge EC2 实例
。这显示了
kvikio.RemoteFile.read
吞吐量的热图。水平轴显示各种任务大小的吞吐量,而垂直轴显示各种线程数量。
只要任务大小不是太小(大约或低于 4 MiB)或太大(大约或超过 128 MiB),吞吐量就会达到 10 Gbps 左右。由于任务规模过小,发出许多 HTTP 请求会降低吞吐量。由于任务规模过大,我们无法获得足够的并发能力来最大限度地提高吞吐量。
与
boto3
(适用于 Python 的 AWS SDK) 相比,即使在线程池中使用 boto3 并发执行请求,KvikIO 也能实现更高的吞吐量。
图 3、从从 S3 读取 1 GB 的基准测试,到具有高达 25 Gbps 已发布带宽的
g4dn.xlarge EC2
实例。KvikIO 基准测试使用 64 个线程和 16 MiB 任务大小。Boto3 基准测试使用 ThreadPool 并行读取许多 4 MB 字节的块,而参数搜索表明,对于 Boto3 而言,这是最快的块大小。
对于略为逼真的工作负载 (尽管仍然仅有一个工作负载专注于 IO),我们比较了读取一批 360 个 parquet 文件 (每个文件约 128 MB) 的性能。这在 AWS
g4dn.12xlarge
实例上运行,该实例包含 4 个
NVIDIA T4
GPU 和 48 个 vCPUs。
图 4、从读取 S3 中的 Parquet 数据集的基准测试,到具有高达 50 Gbps 已发布带宽的 g4dn.12xlarge EC2 实例。该数据集包含 360 个 Apache Parquet 文件,每个文件约 128 MB,总计约 46 GB。Dask 集群有 4 个工作者。这些结果使用 cuDF 25.04,其中包括并行读取 Parquet 文件页脚的优化。
启用 KvikIO 后,四个 Dask 工作进程能够共同实现从 S3 到此单个节点的近 20 Gbps 吞吐量。
结束语
随着 RAPIDS 加速工作负载的其他部分,IO 可能会成为瓶颈。如果您使用的是对象存储,并且已经疲于等待数据加载,请尝试本博文中的一些建议。让我们了解如何在 Github 上使用 KvikIO。您还可以与 RAPIDS Slack 社区的 3,500 多名成员一起讨论 GPU 加速的数据处理。 |
https://developer.nvidia.com/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/ | Latest Multimodal Addition to Microsoft Phi SLMs Trained on NVIDIA GPUs | Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size they are not practical for the current resource constraints that many companies have.
The rise of
small language models
(SLMs) bridge quality and cost by creating models with a smaller resource footprint. SLMs are a subset of language models that tend to focus on specific domains and are built with simpler neural architectures. As models grow to mimic how humans perceive the world around them, models must rise to accept multiple forms of multimodal data.
Microsoft announces
the new generation of open SLMs
to the Phi family with two new additions:
Phi-4-mini
Phi-4-multimodal
Phi-4-multimodal is the first multimodal model to join the family that accepts text, audio, and image data inputs.
These models are small enough for on-device deployment. This release builds on top of the December 2024 research-only release of the
Phi-4 14B parameter SLM
and enables commercial use for the two new smaller models.
The new models are available on the
Azure AI Foundry
, Microsoft’s Cloud AI platform for design, customize, and manage AI applications and agents.
You can test out each member of the Phi family through the
NVIDIA API Catalog
, which is the first sandbox environment to support each modality and tool-calling for
Phi-4-multimodal
. Use the preview NIM microservice to integrate the model into your applications today.
Why invest in SLMs?
SLMs enable generative AI capabilities in memory and compute constrained environments. For example, SLMs can be deployed directly on smartphones and several consumer-grade devices. On-device deployment can facilitate privacy and compliance for use cases that must adhere to regulatory requirements.
Other benefits of SLMs include lower latency due to inherently faster inference compared to an LLM of similar quality. SLMs do tend to perform better on specialized tasks correlated to their training data. However, to supplement generalization and adaptability to different tasks, you can use retrieval-augmented generation (RAG) or native-function calling to build performant agentic systems.
Phi-4-multimodal
Phi-4-multimodal is with 5.6B parameters and accepts audio, image, and text reasoning. This enables it to support use cases such as automated speech recognition (ASR), multi-modal summarization, translation, OCR, and visual reasoning. This model was trained on 512 NVIDIA A100-80GB GPUs over 21 days.
Figure 1 shows how you can preview your image data and ask Phi-4-multimodal visual QA in the NVIDIA API Catalog. You can also see how to adjust parameters such as token limits, temperature, and sampling values. You can generate sample code in Python, JavaScript, and Bash to help you integrate the model more easily into your applications.
Figure 1. Visual QA demo in NVIDIA API Catalog
You can also demo tool calling with a set of prebuilt agents. Figure 2 shows a tool that retrieves live weather data.
Figure 2. Tool-calling demo in NVIDIA API Catalog
Phi-4-mini
Phi-4-mini is a text-only, dense, decoder-only Transformer model with 3.8B parameters that is optimized for chat. It includes a long-form context window of 128K tokens. This model was trained on 1024 NVIDIA A100 80GB GPUs over 14 days.
For both models, the training data is intentionally focused on high quality educational data and code which results in a
textbook-like quality
to the models. Text, speech, and vision benchmark data can be found in the model cards.
Advancing community models
NVIDIA is an active contributor to the open-source ecosystem and has released several hundred projects under open-source licenses. NVIDIA is committed to optimizing community software and open models such as Phi which promotes AI transparency and lets users broadly share work in AI safety and resilience.
Using the
NVIDIA NeMo platform
, these open models can be customized on proprietary data to be highly tuned and efficient for diverse AI workflows across any industry.
NVIDIA and Microsoft have a long standing partnership which includes several collaborations driving innovation on GPUs on Azure, integrations and optimizations for PC developers using NVIDIA RTX GPUs, and many more, including research spanning generative AI to healthcare and life sciences.
Get started today
Bring your data and try out Phi-4 on the NVIDIA-accelerated platform at
build.nvidia.com/microsoft
.
On the first multi-modal sandbox for Phi-4-multimodal, you can try out text, image, and audio as well as sample tool calling to see how this model will work for you in production. | https://developer.nvidia.com/zh-cn/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/ | 在 NVIDIA GPU 上训练的 Microsoft Phi SLM 的多模态最新进展 | 大语言模型(LLMs)已渗透到各行各业,并改变了技术潜力。但是,由于规模庞大,它们对于许多公司目前面临的资源限制来说并不切实际。
小语言模型
(SLMs)的兴起通过创建资源占用更小的模型,将质量和成本联系起来。SLMs 是语言模型的一个子集,这些模型倾向于专注于特定领域,并使用更简单的神经架构构建。随着模型的发展模仿人类感知周围环境的方式,模型必须接受多种形式的多模态数据。
Microsoft 宣布在 Phi 系列中
推出新一代开放式 SLM
,并新增两项功能:
Phi-4-mini
Phi-4-multimodal
Phi-4-multimodal 是第一个加入该系列的多模态模型,接受文本、音频和图像数据输入。
这些模型足够小,可以在设备上部署。此版本基于 2024 年 12 月发布的
Phi-4 14B 参数 SLM
的研究版本构建而成,可用于两个新的较小模型的商业用途。
这些新模型可在 Microsoft 的云 AI 平台
Azure AI Foundry
上使用,用于设计、定制和管理 AI 应用和代理。
您可以通过
NVIDIA API Catalog
测试 Phi 系列的每个成员,这是第一个支持
Phi-4 多模态
的每种模式和工具调用的沙盒环境。立即使用预览 NIM 微服务将模型集成到您的应用中。
为何投资 SLM?
SLMs 可在内存和计算受限环境中实现生成式 AI 功能。例如,SLMs 可以直接部署在智能手机和多台消费级设备上。对于必须遵守监管要求的用例,设备端部署可以促进隐私和合规性。
SLM 的其他优势包括降低延迟,因为与质量相似的 LLM 相比,其本身的推理速度更快。SLM 在处理与其训练数据相关的专业任务时往往表现得更好。但是,为了补充对不同任务的泛化和适应性,您可以使用检索增强生成(RAG)或原生函数调用来构建高性能代理系统。
Phi-4-multimodal
Phi-4-multimodal 具有 5.6B 个参数,接受音频、图像和文本推理。这使其能够支持自动语音识别 (ASR)、多模态摘要、翻译、OCR 和视觉推理等用例。该模型在 512 个 NVIDIA A100-80GB GPUs 上进行了为期 21 天的训练。
事实证明,该模型在 ASR 方面表现出色,因为它在
Huggingface OpenASR 排行榜上排名第一
,单词错误率为 6.14%。
词错误率 (WER)
是量化语音识别性能的常用计算方法。WER 计算不正确转录的单词 (替换、插入和删除) 与正确文本相比所占的百分比。
图 1 展示了如何在 NVIDIA API Catalog 中预览图像数据并询问 Phi-4 多模态视觉问答。您还可以了解如何调整参数,例如令牌限制、温度和采样值。您可以使用 Python、JavaScript 和 Bash 生成示例代码,以帮助您更轻松地将模型集成到应用中。
图 1、NVIDIA API Catalog 中的可视化问答演示
您还可以使用一组预构建代理演示工具调用。图 2 显示了用于检索实时天气数据的工具。
图 2、NVIDIA API Catalog 中的工具调用演示
Phi-4-mini
Phi-4-mini 是一个仅文本、密集、仅解码器的 Transformer 模型,具有 3.8B 个参数,并针对聊天进行了优化。它包含一个包含 128K 个令牌的长形式上下文窗口。该模型在 1024 个 NVIDIA A100 80GB GPUs 上进行了为期 14 天的训练。
对于这两个模型,训练数据有意地集中在高质量的教育数据和代码上,从而使模型获得类似于教科书的质量。您可以在模型卡中找到文本、语音和视觉基准测试数据。
推进社区模式
NVIDIA 是开源生态系统的积极贡献者,已根据开源许可发布了数百个项目。NVIDIA 致力于优化社区软件和 open-source licenses 中的项目,如 Phi,它促进了 AI 透明度,并让用户广泛分享在 AI 安全性和弹性方面的工作。
借助 NVIDIA NeMo 平台,这些开放模型可以根据专有数据进行定制,以便针对各行各业的各种 AI 工作流进行高度调整并提高效率。
NVIDIA 和 Microsoft 有着长期的合作伙伴关系,其中包括推动 Azure 上 GPU 创新的多项合作、为使用 NVIDIA RTX GPU 的 PC 开发者提供的集成和优化,等等,包括从生成式 AI 到医疗健康和生命科学的研究。
立即开始使用
请访问
build.nvidia.com/microsoft
,带上您的数据并在 NVIDIA 加速平台上试用 Phi-4。
在 Phi-4 多模态的第一个多模态沙盒中,您可以尝试使用文本、图像、音频以及示例工具调用,以了解此模型在生产环境中的工作原理。 |
https://developer.nvidia.com/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/ | Building a Simple VLM-Based Multimodal Information Retrieval System with NVIDIA NIM | In today’s data-driven world, the ability to retrieve accurate information from even modest amounts of data is vital for developers seeking streamlined, effective solutions for quick deployments, prototyping, or experimentation. One of the key challenges in information retrieval is managing the diverse modalities in unstructured datasets, including text, PDFs, images, tables, audio, video, and so on.
Multimodal AI models address this challenge by simultaneously processing multiple data modalities, generating cohesive and comprehensive output in different forms.
NVIDIA NIM
microservices simplify the secure and reliable deployment of AI
foundation models
for language,
computer vision
, speech, biology, and more.
NIM microservices can be deployed on NVIDIA-accelerated infrastructure anywhere and expose industry-standard APIs for fast integration with applications and popular AI development frameworks, including LangChain and LlamaIndex.
This post helps you get started with building a
vision language model
(VLM) based, multimodal, information retrieval system capable of answering complex queries involving text, images, and tables. We walk you through deploying an application using
LangGraph,
the state-of-the-art
llama-3.2-90b-vision-instruct
VLM, the optimized
mistral-small-24B-instruct
large language model (LLM), and NVIDIA NIM for deployment.
This method of building simple information retrieval systems offers several advantages over traditional ones. The latest VLM NIM microservice enables enhanced contextual understanding by processing lengthy, complex visual documents without sacrificing coherence. The integration of
LangChain’s tool calling
enables the system to create tools, dynamically select and use external tools, and improve the precision of data extraction and interpretation from various sources.
This system is good for enterprise applications because it generates structured outputs, ensuring consistency and reliability in responses. For more information about the implementation steps of this system, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
A simple HTML multimodal retrieval pipeline
The system consists of the following pipelines:
Document ingestion and preprocessing:
Runs a VLM on the images and translates them into text.
Question-answering:
Enables the user to ask questions of the system.
Both pipelines integrate NVIDIA NIM and LangGraph to process and understand text, images, complex visualizations, and tables effectively.
Data ingestion and preprocessing pipeline
This stage parses documents to process text, images, and tables separately. Tables are first converted into images, and images are processed by the NVIDIA-hosted NIM microservice API endpoint for the llama-3.2-90b-vision-instruct VLM to generate descriptive text.
Next, in the document reconstruction step, the descriptive text is merged with the original text of the document, then summarized by an
LLM with long context modeling
capability. In this implementation, llama-3.2-90b-vision-instruct is also used as the LLM, although other LLMs such as mistral-small-24b-instruct can also be deployed.
Finally, the complete text, summaries, images, and their descriptions are stored in a NoSQL database, along with unique document identifiers.
Figure 1. Data ingestion and preprocessing pipeline
LLMs with long context modeling
can process entire documents without fragmentation, enhancing comprehension of the document in a single pass, and capturing relationships and nuances across longer spans of text, leading to more accurate information retrieval.
In contrast, traditional models may handle inputs of up to a few thousand tokens, requiring lengthy documents to be split into smaller chunks to fit within the model’s context window. This chunking process can disrupt coherence and context, making it more difficult to accurately retrieve and rank relevant information.
However, long context modeling presents challenges related to scalability and cost, which must be considered when trading off with higher accuracy.
QA pipeline
All document summaries and their identifiers are compiled into a large prompt. When a query is sent, a
LLM with long context modeling
(mistral-small-24b-instruct in this case) processes the question, evaluates the relevance of each summary to the query, and returns the identifiers of the most relevant documents.
Figure 2. Question-answering pipeline
Next, the most relevant documents are fed into an LLM with long context (mistral-small-24b-instruct). The model generates an answer to the query based on the textual content. If the model identifies that an image may contain pertinent information based on its descriptive text, an additional step is triggered: the original image and the user’s question are sent to the VLM (llama-3.2-90b-vision-instruct), which can provide an answer based on the actual visual content.
Finally, the system combines both textual and visual insights to deliver a comprehensive answer.
Structured outputs ensure that the data returned by the model conforms to a predefined format, making it easier to extract specific information and perform subsequent operations. In contrast, unstructured or variable outputs can introduce ambiguities and difficulties in parsing the model’s responses, hindering automation and integration with other systems.
Generating structured data from models typically requires carefully designed prompts to guide the model into responding in a particular format, such as JSON. However, ensuring consistent adherence to this structure can be challenging due to the models’ natural tendency to generate free-form text.
NVIDIA NIM now natively supports
capabilities for generating structured outputs
. This means that you can rely on built-in functionalities to ensure that the model’s responses are consistently formatted, reducing the need for complex prompt engineering.
Integrating NVIDIA NIM with LangGraph
NVIDIA NIM offers seamless compatibility with popular frameworks and the latest AI models for your applications. The implementation of the pipeline integrates NVIDIA NIM with
LangGraph
, a framework to build agentic applications to determine the control flow, which has been widely adopted by the developer community. To orchestrate the workflow of this pipeline, the graph mainly consists of two nodes:
Assistant node:
Serves as an agent responsible for managing the logic and decision-making process. It interacts with the user’s inputs and invokes the necessary tools.
Tools node:
A collection of tools that perform specific tasks required by the assistant.
Figure 3. Use LangGraph to build an agent for the pipeline
Assistant node
The assistant node is a primary agent that operates according to the workflow outlined in Figure 3. The code of the main agent can be found in the
/NVIDIA/GenerativeAIExamples
GitHub repo.
Here are the agent inputs:
Collection_name
: The set of documents on which to search.
Question
: The user’s question.
document_id
: (Optional) If provided, the agent skips the document ranking phase.
This is the agent process:
Document selection
: If
document_id
is not provided, the agent invokes the
find_best_document_id
tool, which identifies the most relevant document for the user’s question within the specified collection.
Question answering
: With
document_id
, the agent uses the
query_document
tool. This tool attempts to answer the question using the LLM (mistral-small-24b-instruct) based on the text and image descriptions within the document.
Image analysis
(if necessary): If the
query_document
tool indicates that the answer might be in an image (by returning an
image_hash
value), the agent invokes the
query_image
tool. This tool retrieves the actual image and uses a VLM to analyze the image and find the answer.
Tools node
We implemented three key tools for the agent to perform its tasks.
Find_best_document_id
: Identify the most relevant document for the user’s question when
document_id
is not provided. For more information, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
query_document
: Search for an answer within the specified document. If the answer may be in an image, it provides details to query the image. For more information, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
query_image
: Analyze the actual image using a VLM when the answer might be within the image content. For more information, see the
/NVIDIA/GenerativeAIExamples
.
Binding external tools with models
Tool calling
is a feature that enables language models to integrate and interact with external tools or functions based on the prompts that they receive. This mechanism enables a model to decide which tools to use and how to use them to accomplish specific tasks.
Tool binding
empowers models to extend their capabilities dynamically, selecting appropriate tools during execution to provide more accurate, context-aware responses.
Binding external tools is particularly crucial in agentic frameworks, where agents must choose the appropriate tools and provide the necessary arguments to perform tasks effectively. The benefits of binding external tools include the following:
Extended capabilities
: Models can perform complex operations such as calculations, data retrieval, or API calls, which go beyond mere text generation.
Dynamic tool selection
: The model can assess in real time which tools are most suitable for the task, improving efficiency and relevance.
Seamless integration
: NVIDIA NIM supports the integration of external tools, such as LangChain and LangGraph, with open community models such as Llama 3.3. You can adopt these advanced features without making significant changes to your existing systems.
In this implementation, use LangChain’s
@tool decorator
to create three tools, then use the
.bind_tools
method to bind the tools with models.
Defining structured outputs with Pydantic
By defining the output schema with
Pydantic
and guiding an LLM NIM microservice such as mistral-small-24b-instruct through precise prompts, you ensure that the responses are consistent, reliable, and easily consumable by other components within the system. This approach is essential when integrating the LLM into automated workflows and agent-based frameworks such as LangGraph.
Define the structure
The process begins by defining the structure of the output that you expect from the LLM using Pydantic. This guarantees that the data returned by the model is consistent and can be easily parsed for
downstream processing
.
from typing import List, Optional
from pydantic import BaseModel, Field
class Document(BaseModel):
"""
Represents a document with an identifier and its summary.
"""
id: str = Field(..., description="Hash identifier of the document")
summary: str = Field(..., description="The summary of the document as is")
class BestDocuments(BaseModel):
"""
Contains a list of the best documents to answer the question and their summaries.
"""
documents: List[Document] = Field(..., description="List of best documents")
class Answer(BaseModel):
"""
Represents the answer to the user's question.
"""
answer: str = Field(..., description="Answer to the question posed by the user")
Next, instruct the LLM to generate outputs that align with the defined Pydantic structures. This is achieved by incorporating specific instructions within the prompt and using LangChain’s
with_structured_output
method.
Define the prompt
The
prompt_document_expert
contains detailed instructions for the LLM, specifying the expected input format (Markdown with document summaries) and the required output format (JSON matching the
BestDocuments
schema).
from langchain.chat_models import ChatNVIDIA
from langchain.prompts import ChatPromptTemplate
# Initialize the LLM with desired parameters
llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct
", temperature=0, max_tokens=3000)
# Define the prompt template for the document expert
prompt_document_expert = ChatPromptTemplate.from_messages(
[
(
"system",
f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document.
Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on:
{{documents_context}}
""",
),
(
"human",
"Can you tell me what are the most relevant document ids for this question: {question}"
),
("human", "Tip: Make sure to answer in the correct format"),
]
)
Prepare context
The
get_context
function prepares the input data by retrieving document summaries and formatting them appropriately.
def get_context(input_data: dict) -> dict:
collection_name = input_data.get("collection_name")
question = input_data.get("question")
documents_context = get_document_summaries_markdown(collection_name)
# print(context)
return {"documents_context": documents_context,
"collection_name": collection_name,
"question": question}
Bind the structured output
The
llm.with_structured_output(BestDocuments)
method instructs the LLM to produce output conforming to the
BestDocuments
Pydantic model. This method internally handles the parsing and validation of the LLM’s response, ensuring that the output matches the expected structure.
LangChain’s
with_structured_output
method simplifies the process of binding the model to produce structured outputs. It abstracts the complexity of parsing and validating the LLM’s responses, enabling you to focus on defining the desired output structure and the prompt instructions.
Finally, create a chain to process the input and generate the structured output:
chain_document_expert = (
RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict())
)
End-to-end tool in action
To get started with the multimodal retrieval system, clone the
/NVIDIA/GenerativeAIExamples
GitHub repo and follow the Quick Start guide to set up the service. When it’s up and running, open your web browser and navigate to
http://localhost:7860
to access the system through the Gradio user interface.
For example, explore how the system processes queries on the NVIDIA Technical Blog. Ask a question about a bar chart showing the NVIDIA H100 GPU performance from one of the posts. The
Select Question
field is for evaluation purposes, with the
Ground Truth Answer
field value provided by a human.
Figure 4. Agent multi-document evaluation
This system generates an accurate answer based on the bar chart and also displays the relevant image for reference, such as the chart showing RetinaNet achieving 54%. This ensures precise answers while enabling users to visually verify the referenced data.
Figure 5. Agent result with source graph for verification
Video 1. How to Insert HTML Documents into a Multimodal Retriever Collection Using NVIDIA NIM
Video 2. How to Search Text and Images Within a Multimodal Retriever Collection Using NVIDIA NIM
Challenges and solutions
As data volumes increase, so does the complexity of processing and retrieving relevant information. Handling large datasets efficiently is essential to maintaining performance and ensuring user satisfaction. In this information retrieval system, the sheer amount of document summaries can exceed the context window of even long-context models, making it challenging to process all summaries in a single prompt.
Processing large volumes of data also demands considerable computational resources, which can result in higher costs and increased latency. Optimizing resource utilization is crucial to delivering fast and accurate responses while minimizing unnecessary expenses.
Hierarchical document reranking solution
To address scalability challenges, we implemented a hierarchical approach in the initial document reranking phase. Instead of processing all document summaries simultaneously, we divided them into manageable batches that fit within the model’s context window. The process involves multiple stages:
Batch processing
: Summaries are grouped into batches that the model can handle without exceeding the prompt size limitations.
Intermediate reranking
: The model evaluates each batch separately, ranking the documents within each group.
Selection of top candidates
: The most relevant documents from each batch are selected to proceed to the next stage.
Final reranking
: The top candidates from all batches are combined and re-evaluated to identify the most relevant document.
Considering both scalability and cost concerns, this hierarchical approach ensures that all documents are considered without exceeding the model’s capacity. It not only improves scalability, but also boosts efficiency by narrowing down the candidate documents systematically until the most relevant one is identified.
Future prospects with smaller models
Using language models, especially those with long-context capabilities, involves processing a large number of tokens, which can incur significant costs. Each token processed adds to the overall expense, making cost management a critical consideration when deploying these systems at scale.
The concern about cost is indeed valid. However, the landscape of language models is rapidly evolving, with smaller models becoming increasingly capable and efficient. As these advancements continue, these smaller models may offer similar performance at a fraction of the cost.
Conclusion
This post discussed the implementation of a simple multimodal information retrieval pipeline that uses NVIDIA NIM and LangGraph. The pipeline offers several advantages over existing information retrieval methods:
Enhanced comprehension of documents
A multimodal model to extract information from images, tables, and text
Seamless integration of external tools
Generation of consistent and structured output
Using NVIDIA NIM and LangGraph, you can build on this work and customize it to suit specific needs. To get started, you can find source code in the
/NVIDIA/GenerativeAIExamples
GitHub repo.
NVIDIA NIM
also offers access to more models optimized for NVIDIA GPUs. You can explore
NVIDIA NeMo
, a scalable
generative AI
framework designed for researchers and PyTorch developers working on LLMs, multimodal models, and more.
If you are working with a large corpora of enterprise data and are looking to develop enterprise-ready, real-time
multilingual and cross-lingual information retrieval systems
to generate context-aware responses, learn more about
NVIDIA NeMo Retriever
. | https://developer.nvidia.com/zh-cn/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/ | 使用 NVIDIA NIM 构建基于 VLM 的简单多模态信息检索系统 | 在当今数据驱动的世界中,即使是从少量数据中检索准确信息的能力,对于寻求精简、有效的快速部署、原型设计或实验解决方案的开发者来说也至关重要。信息检索领域的主要挑战之一是管理非结构化数据集中的各种模式,包括文本、PDF、图像、表格、音频、视频等。
多模态 AI 模型通过同时处理多个数据模式来应对这一挑战,以不同的形式生成连贯一致的全面输出。
NVIDIA NIM
微服务可简化 AI
基础模型
在语言、
计算机视觉
、语音、生物学等领域的安全可靠部署。
NIM 微服务可随时随地部署在 NVIDIA 加速基础设施上,并提供行业标准 API,以快速集成应用和热门 AI 开发框架 (包括 LangChain 和 LlamaIndex)。
本文将帮助您着手构建基于
视觉语言模型
(VLM)的多模态信息检索系统,该系统能够回答涉及文本、图像和表格的复杂查询。我们将引导您使用
LangGraph
部署应用程序、先进的
llama-3.2-90b-vision-instruct
VLM、经过优化的
mistral-small-24B-instruct
大语言模型(LLM),以及用于部署的 NVIDIA NIM。
与传统方法相比,这种构建简单信息检索系统的方法具有许多优势。最新的 VLM NIM 微服务可在不牺牲一致性的情况下处理冗长而复杂的视觉文档,从而增强上下文理解。通过集成
LangChain 的工具调用
,系统能够创建工具,动态选择和使用外部工具,并提高从各种来源提取和解释数据的精度。
此系统适用于企业应用,因为它生成结构化输出,确保响应的一致性和可靠性。有关此系统的实施步骤的更多信息,请参阅
/NVIDIA/GenerativeAIExamples
Github 仓库。
简单的 HTML 多模态检索工作流
该系统由以下管道组成:
文档提取和预处理:在图像上运行 VLM 并将其转换为文本。
问答:允许用户提出系统问题。
这两个工作流均集成了 NVIDIA NIM 和 LangGraph,可有效处理和理解文本、图像、复杂的可视化效果和表格。
数据提取和预处理 pipeline
此阶段会解析文档,分别处理文本、图像和表格。首先将表格转换为图像,然后由 NVIDIA 托管的 NIM 微服务 API 端点为 llama-3.2-90b-vision-instruct VLM 处理图像,以生成描述性文本。
接下来,在文档重建步骤中,描述性文本将与文档的原始文本合并,然后由具有长上下文建模功能的 LLM 进行汇总。在此实施中,还可将 Llama-3.2-90b-vision-instruct 用作 LLM,不过也可部署其他 LLM(例如 mistral-small-24b-instruct)。
最后,完整的文本、摘要、图像及其说明将存储在 NoSQL 数据库中,以及唯一的文档标识符。 图 1. 数据提取和预处理管道
采用长上下文建模的 LLMs 可以处理整个文档,而不会出现碎片,从而在单个通道中增强对文档的理解,并捕获更长的文本跨度中的关系和细微差别,从而实现更准确的信息检索。
相比之下,传统模型可能会处理多达数千个 tokens 的输入,因此需要将冗长的文档拆分成较小的块,以适应模型的上下文窗口。这种分块过程会破坏一致性和上下文,使准确检索相关信息并对其进行排名变得更加困难。
但是,长上下文建模会带来与可扩展性和成本相关的挑战,在以更高的准确性进行权衡时必须考虑这些挑战。
QA 管道
所有文档摘要及其标识符都会编译成一个大型提示。发送查询时,使用长上下文建模(本例中为 mistral-small-24b-instruct)的 LLM 会处理问题,评估每个摘要与查询的相关性,并返回最相关文档的标识符。 图 2、问题回答管道
接下来,将最相关的文档输入到具有长上下文 (mistral-small-24b-instruct) 的 LLM 中。模型会根据文本内容生成查询答案。如果模型根据描述性文本识别出图像可能包含相关信息,则会触发另一个步骤:将原始图像和用户的问题发送至 VLM (llama-3.2-90b-vision-instruct),VLM 可以根据实际视觉内容提供答案。
最后,该系统将文本和视觉见解相结合,以提供全面的答案。
结构化输出可确保模型返回的数据符合预定义的格式,从而更轻松地提取特定信息并执行后续运算。相比之下,非结构化或可变输出会在解析模型的响应时引入模糊性和困难,从而阻碍自动化以及与其他系统的集成。
从模型生成结构化数据通常需要精心设计的提示,以指导模型以特定格式(例如 JSON)做出响应。但是,由于模型自然倾向于生成自由格式文本,因此确保一致性遵循此结构可能具有挑战性。
NVIDIA NIM 现在原生支持生成结构化输出的功能。这意味着,您可以依靠内置功能来确保模型的响应格式保持一致,从而减少对复杂提示工程的需求。
将 NVIDIA NIM 与 LangChain 集成
NVIDIA NIM 为您的应用提供与热门框架和最新 AI 模型的无缝兼容性。该流程的实施将 NVIDIA NIM 与 LangChain 相集成。LangChain 是一个用于构建代理应用以确定控制流的框架,已被开发者社区广泛采用。为编排此工作流的工作流,该图形主要由两个节点组成:
助理节点:充当负责管理逻辑和决策过程的代理。它与用户的输入进行交互,并调用必要的工具。
工具节点:用于执行助手所需特定任务的工具集合。
图 3、使用 LangGraph 为管道构建代理
助理节点
助手节点是根据图 3 中概述的工作流程运行的主代理。主代理的代码可在
/NVIDIA/GenerativeAIExamples
GitHub repo 中找到。
智能体输入如下:
Collection_name
:要在其上搜索的文档集。
Question
:用户的问题。
document_id
:(可选) 如果提供,代理会跳过文档排名阶段。
这是智能体流程:
文档选择
:如果未提供
document_id
,代理会调用
find_best_document_id
工具,该工具可在指定集合中识别与用户问题最相关的文档。
问题回答:对于
document_id
,代理使用
query_document
工具。此工具会尝试使用 LLM (mistral-small-24b-instruct) 根据文档中的文本和图像描述来回答此问题。
图像分析 (如有必要):如果
query_document
工具表明答案可能在图像中 (通过返回
image_hash
值),代理会调用
query_image
工具。此工具会检索实际图像,并使用 VLM 分析图像并寻找答案。
工具节点
我们为智能体实施了三个关键工具来执行任务。
Find_best_document_id
:在未提供
document_id
时,确定与用户问题最相关的文档。有关更多信息,请参阅
/NVIDIA/GenerativeAIExamples
Github 存储库。
query_document
:在指定文档中搜索答案。如果答案可能在图像中,则会提供查询图像所需的详细信息。有关更多信息,请参阅
/NVIDIA/GenerativeAIExamples
GitHub 存储库。
query_image
:当答案可能在图像内容中时,使用 VLM 分析实际图像。有关更多信息,请参阅/
NVIDIA/GenerativeAIExamples
。
将外部工具与模型绑定
工具调用是一项功能,可让语言模型根据收到的提示集成外部工具或函数并与之交互。此机制使模型能够决定使用哪些工具以及如何使用这些工具来完成特定任务。
工具绑定使模型能够动态扩展其功能,在执行期间选择合适的工具,以提供更准确的上下文感知响应。
绑定外部工具在代理框架中尤为重要,在这种框架中,代理必须选择合适的工具并提供有效执行任务所需的参数。绑定外部工具的优势包括:
扩展功能
:模型可以执行计算、数据检索或 API 调用等复杂操作,而不仅仅是文本生成。
动态工具选择
:模型可以实时评估哪些工具最适合任务,从而提高效率和相关性。
无缝集成:NVIDIA NIM 支持将 LangChain 和 LangGraph 等外部工具与 Llama 3.3 等开放式社区模型集成。您可以采用这些高级功能,而无需对现有系统进行重大更改。
在此实现中,使用 LangChain 的
@tool
装饰器创建三个工具,然后使用
.bind_tools
方法将这些工具与模型绑定。
使用 PyTorch 定义结构化输出
通过使用
Pydantic
定义输出模式,并通过精确的提示引导 LLM NIM 微服务 (例如 mistral-small-24b-instruct) ,您可以确保响应一致、可靠,并且易于被系统中的其他组件使用。当将 LLM 集成到自动化工作流和基于代理的框架 (例如 LangChain) 时,这种方法至关重要。
定义结构
首先,使用 Pydantic 定义 LLM 的预期输出结构。这可确保模型返回的数据保持一致,并可轻松解析以进行下游处理。
from typing import List, Optional
from pydantic import BaseModel, Field
class Document(BaseModel):
"""
Represents a document with an identifier and its summary.
"""
id: str = Field(..., description="Hash identifier of the document")
summary: str = Field(..., description="The summary of the document as is")
class BestDocuments(BaseModel):
"""
Contains a list of the best documents to answer the question and their summaries.
"""
documents: List[Document] = Field(..., description="List of best documents")
class Answer(BaseModel):
"""
Represents the answer to the user's question.
"""
answer: str = Field(..., description="Answer to the question posed by the user")
接下来,指示 LLM 生成与定义的 Pydantic 结构保持一致的输出。这是通过在提示符中加入特定指令并使用 LangChain 的
with_structured_output
方法实现的。
定义提示
prompt_document_expert
包含 LLM 的详细说明,可指定预期的输入格式 (带有文档摘要的 Markdown) 和所需的输出格式 (与
BestDocuments
架构匹配的 JSON)。
from langchain.chat_models import ChatNVIDIA
from langchain.prompts import ChatPromptTemplate
# Initialize the LLM with desired parameters
llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct
", temperature=0, max_tokens=3000)
# Define the prompt template for the document expert
prompt_document_expert = ChatPromptTemplate.from_messages(
[
(
"system",
f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document.
Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on:
{{documents_context}}
""",
),
(
"human",
"Can you tell me what are the most relevant document ids for this question: {question}"
),
("human", "Tip: Make sure to answer in the correct format"),
]
)
准备上下文
get_context
函数通过检索文档摘要并对其进行适当格式化来准备输入数据。
def get_context(input_data: dict) -> dict:
collection_name = input_data.get("collection_name")
question = input_data.get("question")
documents_context = get_document_summaries_markdown(collection_name)
# print(context)
return {"documents_context": documents_context,
"collection_name": collection_name,
"question": question}
绑定结构化输出
llm.with_structured_output(BestDocuments)
方法指示 LLM 生成符合
BestDocuments
Pydantic 模型的输出。此方法在内部处理 LLM 响应的解析和验证,确保输出与预期结构相匹配。
LangChain 的
with_structured_output
方法简化了绑定模型以生成结构化输出的过程。它抽象化了解析和验证 LLM 响应的复杂性,使您能够专注于定义所需的输出结构和提示指令。
最后,创建一个链来处理输入并生成结构化输出:
chain_document_expert = (
RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict())
)
端到端工具的实际应用
要开始使用多模态检索系统,请克隆
/NVIDIA/GenerativeAIExamples
GitHub 存储库,然后按照快速入门指南设置服务。在服务启动并运行时,打开 Web 浏览器并导航至
http://localhost:7860
,通过 Gradio 用户界面访问系统。
例如,在 NVIDIA 技术博客上探索系统如何处理查询。在其中一篇博文中,您可以询问有关显示 NVIDIA H100 GPU 性能的条形图的问题。“
Select Question
” 字段用于评估,真值答案字段值由人类提供。 图 4、Agent 多文档评估
该系统会根据条形图生成准确的答案,并显示相关图像以供参考,例如图表显示 RetinaNet 达到了 54%。这可确保准确的答案,同时使用户能够以直观方式验证引用数据。 图 5、Agent 结果与用于验证的源图形
视频1. 如何使用 NVIDIA NIM 将 HTML 文档插入多模态检索器集合
视频2. 如何使用 NVIDIA NIM 在多模态检索器集合中搜索文本和图像
挑战和解决方案
随着数据量的增加,处理和检索相关信息的复杂性也随之增加。高效处理大型数据集对于保持性能和确保用户满意度至关重要。在此信息检索系统中,文档摘要的数量甚至可能超过长上下文模型的上下文窗口,这使得在单个提示中处理所有摘要具有挑战性。
处理大量数据还需要大量计算资源,这可能会导致成本增加和延迟增加。优化资源利用率对于提供快速准确的响应,同时最大限度地减少不必要的支出至关重要。
分层文档重新排序解决方案
为应对可扩展性挑战,我们在初始文档重新排序阶段实施了分层方法。我们不会同时处理所有文档摘要,而是将其分为可管理的批量,以适应模型的上下文窗口。此过程涉及多个阶段:
批量处理
:将摘要分组为模型可以处理的批量,且不会超过提示大小限制。
中级重新排序
:模型分别评估每个批次,对每个组中的文档进行排序。
选择最优秀的候选文档
:从每个批次中选择最相关的文档,以进入下一阶段。
最终重新排名
:系统会对所有批次中排名靠前的候选文档进行合并和重新评估,以确定相关性最高的文档。
考虑到可扩展性和成本问题,这种分层方法可确保在不超出模型容量的情况下考虑所有文档。它不仅提高了可扩展性,而且还通过系统缩小候选文档的范围来提高效率,直到识别出最相关的文档。
小型模型的未来前景
使用语言模型,尤其是具有长上下文功能的语言模型,涉及处理大量 token,而这可能会产生巨大的成本。处理的每个 token 都会增加总支出,因此在大规模部署这些系统时,成本管理是一个重要考虑因素。
对成本的担心确实是站得住脚的。然而,语言模型的格局正在迅速演变,小型模型的功能和效率也在不断提升。随着这些进步的继续,这些较小的模型可能以远低于成本提供相似的性能。
结束语
本文讨论了如何使用 NVIDIA NIM 和 LangChain 实现简单的多模态信息检索工作流。与现有的信息检索方法相比,Pipeline 具有以下优势:
增强对文档的理解
用于从图像、表格和文本中提取信息的多模态模型
无缝集成外部工具
生成一致的结构化输出
借助 NVIDIA NIM 和 LangGraph,您可以在此基础上进行构建并对其进行定制,以满足特定需求。首先,您可以在
/NVIDIA/GenerativeAIExamples
GitHub repo 中找到源代码。
NVIDIA NIM
还支持访问更多针对 NVIDIA GPU 优化的模型。您可以探索
NVIDIA NeMo
,这是一个可扩展的生成式
AI
框架,专为研究 LLM、多模态模型等的研究人员和 PyTorch 开发者而设计。
如果您正在处理大型企业数据语料库,并希望开发企业就绪的实时多语种和跨语言信息检索系统来生成上下文感知响应,请详细了解
NVIDIA NeMo Retriever
。 |
https://developer.nvidia.com/blog/tag/inference-performance/ | Inference Performance | No content found | https://developer.nvidia.com/zh-cn/blog/tag/inference-performance/ | Inference Performance | No content found |
https://developer.nvidia.com/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/ | Optimizing Qwen2.5-Coder Throughput with NVIDIA TensorRT-LLM Lookahead Decoding | Large language models (LLMs)
that specialize in coding have been steadily adopted into developer workflows. From pair programming to self-improving
AI agents
, these models assist developers with various tasks, including enhancing code, fixing bugs, generating tests, and writing documentation.
To promote the development of open-source LLMs, the Qwen team recently released Qwen2.5-Coder, a family of advanced LLMs for code generation, reasoning, and fixing across popular programming languages. This post explores the benefits of inference optimizations for Qwen2.5-Coder models supported in
NVIDIA TensorRT-LLM
, and the ease of deployment with
NVIDIA NIM
for transformative potential and coding efficiency.
Qwen2.5-Coder models
The Qwen2.5-Coder models have achieved state-of-the-art performance across popular academic benchmarks.
NVIDIA TensorRT-LLM
has optimized three popular models from the Qwen2.5-Coder family—the 1.5B, 7B, and 32B versions—for high throughput and low latency. TensorRT-LLM is a library for fast, efficient LLM inference and includes optimizations such as dynamic
inflight batching
,
KV caching
,
KV cache reuse
, and several speculative decoding techniques, among others.
These optimizations help deliver performance improvements for the Qwen2.5-Coder models on popular programming languages such as Python, C++, Java, Bash, Javascript, TypeScript, and Go, reaching a wider range of developers. This post explores the lookahead decoding optimization and the performance boost it helps achieve. Without any additional training or need for additional draft models, developers can leverage the TensorRT-LLM
high-level API
to speed up Qwen2.5-Coder inference to generate multiline autocode completion.
Lookahead decoding
Lookahead decoding
is a speculative decoding technique that addresses the slow autoregressive nature of LLMs. Each autoregressive decoding step only generates one token at a time, not leveraging the massive parallel processing power of NVIDIA GPUs, leading to low GPU utilization and lower throughput. We’ve previously discussed the throughput boost achievable with
draft target speculative decoding
, and here we discuss the benefits of leveraging TensorRT-LLM lookahead decoding implementation using the Qwen2.5-Coder models as an example.
Unlike the single-token generation in autoregressive decoding, lookahead decoding generates multiple tokens simultaneously, adequately utilizing the parallel processing capabilities of the GPU, leveraging computation (FLOPs) for latency reduction. Moreover, lookahead decoding doesn’t require a separate draft model that’s needed for draft target speculative decoding.
Each decoding step is divided into two parallel branches, the lookahead branch and the verification branch. Using the
Jacobi iteration method
, a classic nonlinear systems solver, the lookhead branch performs parallel decoding for future tokens by generating n-grams. The verification branch selects and verifies the promising n-gram candidates generated by the lookahead branch.
The lookahead algorithm is configured using three key parameters: window size (W), n-gram size (N), and verification set size (G).
Window size (W):
Represents the lookahead window size, which determines how many future tokens the algorithm attempts to predict in each step. Larger window size enables the model to look further, helping generate more tokens in a single pass. This effectively improves throughput performance while utilizing GPU computation FLOPs efficiently.
N-gram size (N):
Represents the size of the n-grams used in the lookahead process. For example, a 5-gram is a contiguous sequence of 5 future tokens. Together with the window size, it creates a fixed-sized, 2D window for the lookahead branch to generate n-grams from the Jacobi iteration trajectory.
Verification set size (G):
Represents the maximum number of speculations or candidate n-grams that the algorithm considers in each step for verification. It balances the trade-off between computation efficiency and exploring more possibilities.
Figure 1. Lookahead decoding workflow with (W, N, G) = (5, 3, 2). Image credit:
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
Lookahead performance greatly depends on the base model, hardware, batch size, sequence length, and the dataset. It is recommended to profile various configurations to find the best (W, N, G) configuration given the setup. Optimal (W, N, G) tuple configuration enables lookahead decoding to deliver improved throughput performance without the need for any additional training, fine-tuning or draft models.
Through our experiments on (W, N, G) configuration values sweep, we achieve 3.6x and 1.6x throughput speedups for Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models, respectively. These speedups are measured in throughput (tokens/second) compared to baseline (no lookahead speculative decoding) on
NVIDIA H100 Tensor Core GPUs
, as shown in Figure 2.
Figure 2. Qwen2.5-Coder models throughput boost on NVIDIA DGX H100 with TensorRT-LLM lookahead decoding
Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H100, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder-32B-Instruct, batch size=1, TensorRT-LLM version 0.15.0.
Similar throughput speedups are achieved on
NVIDIA H200 Tensor Core GPUs
. With their higher memory bandwidth, they also help raise the baseline throughput performance leading to slightly lower speedups as compared to H100 GPUs (Figure 3).
Figure 3. Qwen2.5-Coder models throughput boost on NVIDIA DGX H200 with TensorRT-LLM lookahead decoding
Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H200, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder 32B Instruct, batch size=1, TensorRT-LLM version 0.15.0.
Steps to run lookahead decoding with TensorRT-LLM
To reproduce these performance gains using lookahead speculative decoding within TensorRT-LLM, follow the steps below.
# Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows)
sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools
&& pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com
Then run lookahead decoding in TensorRT-LLM using the high-level API.
# Command for Qwen2.5-Coder-7B-Instruct
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig,
LookaheadDecodingConfig, SamplingParams)
def main():
"""The end user can customize the build configuration with the
build_config class. # Max draft length is based on (W,N,G) values and
calculated as: (W + G -1) * (N-1) + ( N<=1 ? 0: N-2)"""
build_config = BuildConfig(max_batch_size = 128,
max_input_len = 2048,
max_seq_len = 4096,
max_num_tokens = 16384,
max_draft_len = 111)
build_config.plugin_config.reduce_fusion = True
build_config.plugin_config.use_paged_context_fmha = True
build_config.plugin_config.multiple_profiles = True
# The configuration for lookahead decoding
lookahead_config = LookaheadDecodingConfig(max_window_size=8,
max_ngram_size=8,
max_verification_set_size=8)
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4)
llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct",
kv_cache_config=kv_cache_config,
build_config=build_config,
speculative_config=lookahead_config)
prompt = """Write a C++ program to find the nth Fibonacci number using
recursion. Now we define a sequence of numbers in which each number is the
sum of the three preceding ones. The first three numbers are 0, -1, -1.
Write a program to find the nth number."""
sampling_params = SamplingParams(lookahead_config=lookahead_config)
output = llm.generate(prompt, sampling_params=sampling_params)
print(output)
if __name__ == '__main__':
main()
Summary
Lookahead speculative decoding enables throughput boost on LLMs without any additional training, fine-tuning, or draft models. We presented benchmarked performance improvements on Qwen2.5-Coder models. Visit build.nvidia.com to
try the Qwen2.5-Coder models
optimized with NVIDIA TensorRT-LLM for free. Qwen2.5-Coder models optimized with TensorRT-LLM have also been packaged as downloadable
NVIDIA NIM
microservices for ease of deployment.
Acknowledgments
We would like to thank Liwei Ma, Fanrong Li, Nikita Korobov, and Martin Marciniszyn Mehringer for their efforts in supporting this post. | https://developer.nvidia.com/zh-cn/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/ | 使用 NVIDIA TensorRT-LLM 前瞻性解码优化 Qwen2.5-Coder 吞吐量 | 专注于编码的
大语言模型(LLMs)
已稳步应用于开发者工作流程。从配对编程到自我改进的
AI 智能体
,这些模型可帮助开发者完成各种任务,包括增强代码、修复错误、生成测试和编写文档。
为促进开源 LLM 的开发,Qwen 团队最近发布了 Qwen2.5-Coder,这是一系列先进的 LLM,用于跨热门编程语言的代码生成、推理和修复。本文将探讨针对
NVIDIA TensorRT-LLM 支持
的 Qwen2.5-Coder 模型进行推理优化的优势,以及借助 NVIDIA NIM 轻松部署以提升变革潜力和编码效率的好处。
Qwen2.5-Coder 模型
Qwen2.5-Coder 模型在热门的学术基准测试中取得了出色的性能。
NVIDIA TensorRT-LLM
已对 Qwen2.5-Coder 系列的三种热门模型 (1.5B、7B 和 32B 版本) 进行优化,以实现高吞吐量和低延迟。TensorRT-LLM 是一个用于快速、高效 LLM 推理的库,包含动态机上
批处理
、
KV 缓存
、
KV 缓存重复使用
和几种预测性解码技术等优化功能。
这些优化有助于提高 Qwen2.5-Coder 模型在 Python、C++、Java、Bash、Javascript、TypeScript 和 Go 等热门编程语言中的性能,从而使更多开发者受益。本文将探讨 lookahead decoding 优化的前瞻性及其有助于实现的性能提升。开发者无需进行任何额外训练,也无需额外的草图模型,即可利用 TensorRT-LLM 高级 API 加速 Qwen2.5-Coder 推理,以生成多行自动代码完成。
解码前景展望
解码前瞻是一种预测性解码技术,可解决 LLMs 缓慢自回归的问题。每个自回归解码步骤一次仅生成一个 token,无法利用 NVIDIA GPUs 强大的并行处理能力,导致 GPU 利用率低、吞吐量低。我们之前讨论过通过草稿目标预测解码可以实现的吞吐量提升,在这里,我们讨论了以 Qwen2.5-Coder 模型为例,利用 TensorRT-LLM lookahead decoding 实现的优势。
与自回归解码中的单令牌生成不同,前瞻性解码可同时生成多个令牌,充分利用 GPU 的并行处理能力,利用计算(FLOPs)降低延迟。此外,对于草稿目标预测性解码,前瞻性解码不需要使用单独的草稿模型。
每个解码步骤分为两个并行分支,即 lookahead 分支和验证分支。通过使用经典的非线性系统求解器
Jacobi 迭代法
,lookahead 分支通过生成 n-grams 来对未来的 tokens 执行并行解码。验证分支选择并验证由 lookahead 分支生成的有前景的 n-gram 候选项。
前瞻性算法使用三个关键参数进行配置:窗口大小(W),n-gram 大小(N)和验证集大小(G)。
窗口大小 (W):表示前瞻性窗口大小,它决定了算法在每个步骤中尝试预测的未来令牌数量。窗口大小越大,模型的视野越广,一次传递就能生成更多 token。这可有效提高吞吐量性能,同时高效利用 GPU 计算 FLOPs。
N-gram size (N):表示前瞻性流程中使用的 N – gram 的大小。例如,5-gram 是由 5 个未来令牌组成的连续序列。它与窗口大小一起为前瞻性分支创建了一个大小固定的 2D 窗口,以便从 Jacobi 迭代轨迹生成 n-gram。
验证集大小 (G):表示算法在每个验证步骤中考虑的推测或候选 n-gram 的最大数量。它平衡了计算效率与探索更多可能性之间的权衡。
图 1、使用 (W,N,G) = (5,3,2) 展望解码工作流程。图片来源:
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
未来的性能很大程度上取决于基础模型、硬件、批量大小、序列长度和数据集。建议分析各种配置,以找到给定设置的最佳 (W,N,G) 配置。最佳 (W,N,G) 元组配置支持 lookahead 解码前瞻性,无需任何其他训练、fine-tuning 或 draft 模型,即可提供更高的吞吐量性能。
通过对 (W,N,G) 配置值扫描的实验,我们分别为 Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型实现了 3.6 倍和 1.6 倍的吞吐量加速。这些加速是通过
NVIDIA H100 Tensor Core GPUs
上的吞吐量 (tokens/second) 与基线 (无 lookahead speculative decoding) 的比较进行测量的,如 Figure 2 所示。
图 2、借助 TensorRT-LLM 超前解码,Qwen2.5-Coder 模型可提升 NVIDIA DGX H100 上的吞吐量
数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H100,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder-32B-Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。
NVIDIA H200 Tensor Core GPU
也实现了类似的吞吐量加速。凭借更高的显存带宽,它们还有助于提高基准吞吐量性能,从而使速度略低于 H100 GPU (图 3)。
图 3、Qwen2.5-Coder 模型在 NVIDIA DGX H200 上通过 TensorRT-LLM 超前解码实现吞吐量提升
数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H200,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder 32B Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。
使用 TensorRT-LLM 进行解码的前瞻性运行步骤
要在 TensorRT-LLM 中使用预测性解码重现这些性能提升,请执行以下步骤。
# Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows)
sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools
&& pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com
然后,使用高级 API 在 TensorRT-LLM 中运行 lookahead decoding。
# Command for Qwen2.5-Coder-7B-Instruct
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig,
LookaheadDecodingConfig, SamplingParams)
def main():
"""The end user can customize the build configuration with the
build_config class. # Max draft length is based on (W,N,G) values and
calculated as: (W + G -1) * (N-1) + ( N<=1 ? 0: N-2)"""
build_config = BuildConfig(max_batch_size = 128,
max_input_len = 2048,
max_seq_len = 4096,
max_num_tokens = 16384,
max_draft_len = 111)
build_config.plugin_config.reduce_fusion = True
build_config.plugin_config.use_paged_context_fmha = True
build_config.plugin_config.multiple_profiles = True
# The configuration for lookahead decoding
lookahead_config = LookaheadDecodingConfig(max_window_size=8,
max_ngram_size=8,
max_verification_set_size=8)
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4)
llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct",
kv_cache_config=kv_cache_config,
build_config=build_config,
speculative_config=lookahead_config)
prompt = """Write a C++ program to find the nth Fibonacci number using
recursion. Now we define a sequence of numbers in which each number is the
sum of the three preceding ones. The first three numbers are 0, -1, -1.
Write a program to find the nth number."""
sampling_params = SamplingParams(lookahead_config=lookahead_config)
output = llm.generate(prompt, sampling_params=sampling_params)
print(output)
if __name__ == '__main__':
main()
总结
前瞻性预测解码可提高 LLMs 的吞吐量,而无需任何其他训练、微调或草稿模型。我们展示了 Qwen2.5-Coder 模型的基准性能改进。
访问 build.nvidia.com,免费试用通过 NVIDIA TensorRT-LLM 优化的 Qwen2.5-Coder 模型。
为便于部署, 我们还将通过 TensorRT-LLM 优化的 Qwen2.5-Coder 模型打包为可下载的
NVIDIA NIM
微服务。
致谢
在此,
我们要感谢马立伟、李凡融、Nikita Korobov 和 Martin Marciniszyn Mehringer 为支持这篇博文所付出的努力
。 |
https://developer.nvidia.com/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/ | Optimize AI Inference Performance with NVIDIA Full-Stack Solutions | The explosion of AI-driven applications has placed unprecedented demands on both developers, who must balance delivering cutting-edge performance with managing operational complexity and cost, and AI infrastructure.
NVIDIA is empowering developers with full-stack innovations—spanning chips, systems, and software—that redefine what’s possible in
AI inference
, making it faster, more efficient, and more scalable than ever before.
Easily deploy high-throughput, low-latency inference
Six years ago, NVIDIA set out to create an AI inference server specifically designed for developers building high-throughput, latency-critical production applications. At the time, many developers were grappling with custom, framework-specific servers that increased complexity, drove up operational costs, and struggled to meet stringent service-level agreements for latency and throughput.
To address this, NVIDIA developed the
NVIDIA Triton Inference Server
, an open-source platform capable of serving models from any AI framework. By consolidating framework-specific inference servers, Triton streamlined AI inference deployment and increased AI prediction capacity. This approach has made Triton one of the most widely adopted NVIDIA
open-source projects
, now used by
hundreds of leading organizations to deploy production AI models efficiently.
In addition to Triton, NVIDIA offers a broad ecosystem of AI inference solutions. For developers seeking powerful, customizable tools,
NVIDIA TensorRT
provides a high-performance deep learning inference library with APIs that enable fine-grained optimizations.
NVIDIA NIM
microservices provide a flexible framework for deploying AI models across the cloud, data centers, or workstations.
Optimizations for AI inference workloads
Inference is a full-stack problem today, requiring high-performance infrastructure and efficient software to make effective use of that infrastructure. In addition, inference workloads continue to become more challenging, as model sizes continue to grow and latency constraints tighten, all while the number of users leveraging these AI services also continues to increase. And with the introduction of inference time scaling, a new paradigm for scaling model intelligence, more compute is being applied during inference to enhance model performance.
These trends mean that it’s important to continue advancing delivered inference performance, even on the same underlying hardware platform. By combining established methods like model parallelism, mixed-precision training, pruning, quantization, and data preprocessing optimization with cutting-edge advancements in inference technologies, developers can achieve remarkable gains in speed, scalability, and cost-effectiveness.
The TensorRT-LLM library incorporates many state-of-the-art features that accelerate inference performance for
large language models (LLMs)
, which are outlined below.
Prefill and KV cache optimizations
Key-value (KV) cache early reuse
: By reusing system prompts across users, the KV Cache Early Reuse feature accelerates time-to-first-token (TTFT) by up to 5x. Flexible KV block sizing and efficient eviction protocols ensure seamless memory management, enabling faster response times even in multi-user environments.
Chunked prefill
: For smarter deployment, chunked prefill divides the prefill phase into smaller tasks, enhancing GPU utilization and reducing latency. This innovation simplifies deployment and ensures consistent performance, even with fluctuating user demands.
Supercharging multiturn interactions
:
The NVIDIA GH200 Superchip architecture enables efficient KV cache offloading, improving TTFT by up to 2x in multiturn interactions with Llama models while maintaining high throughput.
Decoding optimization
Multiblock attention for long sequences
: Addressing the challenge of long input sequences, TensorRT-LLM multiblock attention maximizes GPU utilization by distributing tasks across streaming multiprocessors (SMs). This technique improves system throughput by more than 3x, enabling support for larger context lengths without additional hardware costs.
Speculative decoding for accelerated throughput
: Leveraging a smaller draft model alongside a larger target model, speculative decoding enables up to a 3.6x improvement in inference throughput. This approach ensures high-speed, high-accuracy generation of model outputs, streamlining workflows for large-scale AI applications.
Speculative decoding with Medusa:
The Medusa speculative decoding algorithm is available as part of TensorRT-LLM optimizations. By predicting multiple subsequent tokens simultaneously, Medusa boosts throughput for Llama 3.1 models by up to 1.9x on the NVIDIA HGX H200 platform. This innovation enables faster responses for applications that rely on LLMs, such as customer support and content creation.
Multi-GPU inference
MultiShot communication protocol
: Traditional Ring AllReduce operations can become a bottleneck in multi-GPU scenarios. TensorRT-LLM MultiShot, powered by
NVSwitch
, reduces communication steps to just two, irrespective of GPU count. This innovation boosts AllReduce speeds by up to 3x, making low-latency inference scalable and efficient.
Pipeline parallelism for high-concurrency efficiency
:
Parallelism techniques require that GPUs be able to transfer data quickly and efficiently, necessitating a robust GPU-to-GPU interconnect fabric for maximum performance. Pipeline parallelism on
NVIDIA H200 Tensor Core
GPUs achieved a 1.5x throughput increase for Llama 3.1 405B and demonstrated their versatility with a 1.2x speedup for Llama 2 70B in MLPerf Inference benchmarks. MLPerf Inference is a suite of industry-standard inference performance benchmarks developed by the MLCommons consortium.
Large NVLink domains:
The NVIDIA GH200 NVL32 system, powered by 32 NVIDIA GH200 Grace Hopper Superchips connected using the NVLink Switch system, and with TensorRT-LLM improvements, delivers up to 3x faster TTFT for Llama models. With up to 127 petaflops of AI compute, this next-generation architecture sets the stage for unprecedented real-time responsiveness in AI applications.
Quantization and lower-precision compute
NVIDIA TensorRT Model Optimizer for precision and performance:
The NVIDIA custom FP8 quantization recipe in the
NVIDIA TensorRT Model Optimizer
delivers up to 1.44x higher throughput without sacrificing accuracy. These optimizations enable more cost-effective deployment by reducing latency and hardware requirements for demanding workloads.
End-to-end full-stack optimization:
NVIDIA TensorRT libraries and FP8 Tensor Core innovations ensure high performance across a wide range of devices, from data center GPUs to edge systems. NVIDIA has optimized the Llama 3.2 collection of models for great performance, demonstrating how full-stack software can adaptively unlock efficiency across diverse AI deployment environments.
With these features, as well as many others within Triton and TensorRT-LLM, developers can now deploy LLMs that are not only faster and more efficient but also capable of handling a wider range of tasks and user demands. This opens new opportunities for businesses to enhance customer service, automate complex processes, and gain deeper insights from their data.
Evaluating inference performance
Delivering world-class inference performance takes a full technology stack—chips, systems, and software—all contributing to boosting throughput, reducing energy consumption per token, and minimizing costs.
MLPerf Inference is one key measure of inference performance is MLPerf Inference. The benchmark measures inference throughput under standardized conditions, with results subject to extensive peer review. The benchmark is regularly updated to reflect new advances in AI, ensuring that organizations can rely on these results to evaluate platform performance.
In the latest round of MLPerf Inference,
NVIDIA Blackwell made its debut
, delivering up to 4x more performance than the NVIDIA H100 Tensor Core GPU on the Llama 2 70B benchmark. This achievement was the result of the many architectural innovations at the heart of the Blackwell GPU, including the second-generation Transformer Engine with FP4 Tensor Cores and ultrafast HBM3e GPU memory that delivers 8 TB/s of memory bandwidth per GPU.
In addition, many aspects of the NVIDIA software stack, including NVIDIA TensorRT-LLM, were re-engineered to make use of new capabilities in Blackwell, such as support for FP4 precision, while continuing to meet the rigorous accuracy target of the benchmark.
The NVIDIA H200 Tensor Core GPU, available now from server makers and cloud service providers, also achieved outstanding results on every benchmark in the data center category. This includes the newly added Mixtral 8x7B mixture-of-experts (MoE) LLM, as well as on the Llama 2 70B LLM and Stable Diffusion XL text-to-image tests. As a result of continued software improvements, the Hopper architecture delivered up to 27% more inference performance compared to the prior round.
NVIDIA Triton Inference Server, running on a system with eight H200 GPUs
achieved virtually identical performance compared to the
NVIDIA bare-metal submission on the Llama 2 70B benchmark
in MLPerf Inference v4.1. This shows that enterprises no longer need to choose between a feature-rich, production-grade AI inference server and peak throughput performance—both can be achieved simultaneously with NVIDIA Triton.
The future of AI inference: Emerging trends and technologies
The landscape of AI inference is rapidly evolving, driven by a series of groundbreaking advancements and emerging technologies. Models continue to get smarter, as increases in compute at data center scale enable pretraining larger models. The introduction of sparse mixture-of-experts model architectures, such as GPT-MoE 1.8T, will also help boost model intelligence while improving compute efficiency. These larger models, whether dense or sparse, will require that GPUs individually become much more capable. NVIDIA Blackwell architecture is set to fuel next-generation generative AI inference.
Each Blackwell GPU features second-generation Transformer Engine and fifth-generationTensor Cores utilizing FP4. Lower-precision data formats help to increase computational throughput and reduce memory requirements. To ensure they can deliver significant performance benefits while maintaining high accuracy, an incredible amount of software craftsmanship is needed.
At the same time, to serve the most demanding models at brisk, real-time rates, many of the most capable GPUs will need to work in concert to generate responses.
The
NVIDIA GB200 NVL72
rack-scale solution creates a 72-GPU NVLink domain that acts as a single massive GPU. For GPT-MoE 1.8T real-time inference, it provides up to a 30x improvement in throughput compared to the prior generation Hopper GPU.
In addition, the emergence of a new scaling law—test-time compute—is providing yet another way to improve response quality and accuracy for even more complex tasks. This new paradigm, first introduced with the OpenAI o1 model, enables models to “reason” by generating many intermediate tokens before outputting the final result. Reasoning models are particularly helpful in domains such as complex mathematics and generating computer code. This new paradigm is set to fuel a new wave of breakthroughs requiring more computational performance during inference time.
The path to artificial general intelligence will rely on continued breakthroughs in data center compute performance. Pretraining, post-training, and test-time scaling all depend on state-of-the-art infrastructure running expertly crafted software. The NVIDIA platform is evolving rapidly, with a brisk one-year innovation rhythm, to enable the ecosystem to continue pushing the frontiers of AI.
Get started
Check out
How to Get Started with AI Inference
, learn more about the
NVIDIA AI Inference
platform, and stay informed about the latest
AI inference performance
updates.
Watch a demo on
how to quickly deploy NVIDIA NIM microservices
or read
A Simple Guide to Deploying Generative AI with NVIDIA NIM
. Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using
NVIDIA NIM
microservices. | https://developer.nvidia.com/zh-cn/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/ | 借助 NVIDIA 全栈解决方案提升 AI 推理性能 | AI 驱动的应用的爆炸式发展对开发者提出了前所未有的要求,他们必须在提供先进的性能与管理运营复杂性和成本以及 AI 基础设施之间取得平衡。
NVIDIA 正在为开发者提供涵盖芯片、系统和软件的全栈创新,重新定义
AI 推理
的可能性,使其比以往更快、更高效、更具可扩展性。
轻松部署高吞吐量、低延迟推理
六年前,NVIDIA 着手打造 AI 推理服务器,专为构建高吞吐量、延迟关键型生产应用的开发者而设计。当时,许多开发者都在努力使用定制的、特定于框架的服务器,这些服务器增加了复杂性,增加了运营成本,并且难以满足严格的服务水平协议(service-level agreements)关于延迟和吞吐量的要求。
为解决这一问题,NVIDIA 开发了
NVIDIA Triton Inference Server
,这是一个开源平台,能够为来自任何 AI 框架的模型提供服务。通过整合特定于框架的推理服务器,Triton 简化了 AI 推理部署,并提高了 AI 预测能力。这种方法使 Triton 成为广泛采用的 NVIDIA 开源项目之一,现已被数百家领先的组织用于高效部署生产级 AI 模型。
除 Triton 外,NVIDIA 还提供广泛的 AI 推理解决方案生态系统。对于寻求功能强大的可定制工具的开发者,
NVIDIA TensorRT
提供了一个高性能深度学习推理库,其 API 可实现细粒度优化。
NVIDIA NIM
微服务提供了一个灵活的框架,用于在云端、数据中心或工作站中部署 AI 模型。
针对 AI 推理工作负载进行优化
推理是当今的全栈问题,需要高性能基础架构和高效软件来有效利用该基础架构。此外,随着模型大小不断增长和延迟限制日益严格,推理工作负载的挑战性也越来越高,同时利用这些 AI 服务的用户数量也在不断增加。随着推理时间扩展(一种扩展模型智能的新范式)的引入,推理过程中应用了更多的计算来增强模型性能。
这些趋势意味着,即使在相同的底层硬件平台上,继续提高交付的推理性能也很重要。通过将模型并行、混合精度训练、剪枝、量化和数据预处理优化等成熟方法与推理技术的前沿进步相结合,开发者可以在速度、可扩展性和成本效益方面实现显著提升。
TensorRT-LLM 库包含许多先进功能,可加速
大语言模型(LLMs)
的推理性能,如下所述。
预填充和 KV 缓存优化
键值 (KV) 缓存提早复用
:通过在不同用户中重复使用系统提示,KV 缓存提早复用功能可将首个令牌 (TTFT) 的时间缩短高达 5 倍。灵活的 KV 块大小和高效的驱逐协议可确保无缝管理内存,即使在多用户环境中也能缩短响应时间。
分块预填充
:为实现更智能的部署,分块预填充可将预填充阶段划分为较小的任务,从而提高 GPU 利用率并降低延迟。这项创新可简化部署,并确保一致的性能,即使在用户需求波动的情况下也是如此。
强效助力多圈交互
:NVIDIA GH200 超级芯片架构可实现高效的 KV 缓存卸载,在与 Llama 模型进行多圈交互时,将 TTFT 性能提升高达 2 倍,同时保持高吞吐量。
解码优化
长序列的
多块注意力
:TensorRT-LLM 多块注意力通过在流多处理器 (SM) 中分配任务,更大限度地提高 GPU 利用率,从而解决长输入序列的挑战。此技术可将系统吞吐量提高 3 倍以上,从而在不增加硬件成本的情况下支持更大的上下文长度。
用于加速吞吐量的推理吞吐量:通过利用较小的草稿模型和较大的目标模型,推理吞吐量可将推理吞吐量提升高达 3.6 倍。这种方法可确保高速、高精度地生成模型输出,简化大规模 AI 应用的工作流。
使用 Medusa 进行推理解码
:Medusa 推理解码算法可作为 TensorRT-LLM 优化的一部分提供。通过同时预测多个后续令牌,Medusa 在 NVIDIA HGX H200 平台上将 Llama 3.1 模型的吞吐量提高了 1.9 倍。这项创新可加快客户支持和内容创建等依赖 LLM 的应用的响应速度。
多 GPU 推理
MultiShot 通信协议
:传统的 Ring AllReduce 操作可能会成为多 GPU 场景中的瓶颈。TensorRT-LLM MultiShot 由
NVSwitch
提供支持,无论 GPU 数量如何,都可以将通信步骤减少到两个。这项创新将 AllReduce 速度提升高达 3 倍,使低延迟推理具有可扩展性并十分高效。
实现高并发效率的工作流并行:并行技术要求 GPU 能够快速高效地传输数据,因此需要强大的 GPU 到 GPU 互连结构来实现出色性能。
NVIDIA H200 Tensor Core
GPU 上的工作流并行将 Llama 3.1 405B 的吞吐量提高了 1.5 倍,并在 MLPerf Inference 基准测试中证明了其通用性,将 Llama 2 70B 的速度提高了 1.2 倍。MLPerf Inference 是一套行业标准推理性能基准测试,由 MLCommons 联盟开发。
大型 NVLink 域
:NVIDIA GH200 NVL32 系统由通过 NVLink Switch 系统连接的 32 个 NVIDIA GH200 Grace Hopper 超级芯片提供支持,并进行了 TensorRT-LLM 改进,可为 Llama 模型提供高达 3 倍的 TTFT 速度。凭借高达 127 Petaflops 的 AI 计算能力,此新一代架构为 AI 应用实现出色的实时响应速度奠定了基础。
量化和低精度计算
用于提高精度和性能的 NVIDIA TensorRT 模型优化器
:NVIDIA TensorRT 模型优化器中的 NVIDIA 定制 FP8 量化方法可在不牺牲准确性的情况下将吞吐量提高 1.44 倍。这些优化可降低高要求工作负载的延迟和硬件需求,从而实现更具成本效益的部署。
端到端全栈优化
:NVIDIA TensorRT 库和 FP8 Tensor Core 创新技术可确保从数据中心 GPU 到边缘系统等各种设备实现高性能。NVIDIA 优化了 Llama 3.2 模型集合,以实现出色性能,展示了全栈软件如何在不同的 AI 部署环境中灵活释放效率。
借助这些功能以及 Triton 和 TensorRT-LLM 中的许多其他功能,开发者现在可以部署更快速、更高效的 LLM,并且能够处理更广泛的任务和用户需求。这为企业增强客户服务、实现复杂流程自动化以及从数据中获得更深入见解带来了新机遇。
评估推理性能
实现出色的推理性能需要完整的技术堆栈(芯片、系统和软件),所有这些都有助于提高吞吐量、降低每个令牌的能耗并更大限度地降低成本。
MLPerf Inference 是衡量推理性能的一个关键指标。该基准测试用于测量标准化条件下的推理吞吐量,并对结果进行广泛的同行评审。基准测试会定期更新,以反映 AI 领域的新进展,确保企业组织可以依靠这些结果来评估平台性能。
在最新一轮 MLPerf Inference 中,
NVIDIA Blackwell 首次亮相
,在 Llama 2 70B 基准测试中,其性能比 NVIDIA H100 Tensor Core GPU 高 4 倍。这一成就得益于 Blackwell GPU 核心的众多架构创新,包括采用 FP4 Tensor Cores 的第二代 Transformer Engine 和可为每个 GPU 提供 8 TB/s 的 HBM3e GPU 内存带宽。
此外,对 NVIDIA 软件堆栈的许多方面 (包括 NVIDIA TensorRT-LLM) 进行了重新设计,以利用 Blackwell 中的新功能 (例如对 FP4 精度的支持),同时继续满足基准测试的严格准确性目标。
服务器制造商和云服务提供商现已推出的 NVIDIA H200 Tensor Core GPU 在数据中心类别的每项基准测试中都取得了出色的成绩。其中包括新增的 Mixtral 8x7B 多专家模型 (MoE) LLM,以及 Llama 2 70B LLM 和 Stable Diffusion XL 文本转图像测试。得益于软件的持续改进,Hopper 架构可提供高达 27% 的推理性能。
与 MLPerf Inference v4.1 中 Llama 2 70B 基准测试中的 NVIDIA 裸机提交相比
,在配备 8 个 H200 GPU 的系统上运行的 NVIDIA Triton Inference Server 实现了几乎相同的性能。这表明企业不再需要在功能丰富的生产级 AI 推理服务器和峰值吞吐量性能之间做出选择,而
NVIDIA Triton 可以同时实现这两种性能。
AI 推理的未来:新兴趋势和技术
在一系列突破性进展和新兴技术的推动下,AI 推理的格局正在迅速发展。随着数据中心规模的计算能力增加,模型将继续变得更加智能。引入稀疏的多专家模型架构 (例如 GPT-MoE 1.8T) 也将有助于提高模型智能,同时提高计算效率。这些更大型的模型,无论是密集模型还是稀疏模型,都需要 GPU 单独变得更加强大。NVIDIA Blackwell 架构将为新一代生成式 AI 推理提供动力支持。
每个 Blackwell GPU 均配备第二代 Transformer Engine 和第五代 Tensor Cores,利用 FP4。低精度数据格式有助于提高计算吞吐量并降低内存需求。为了确保它们能够在保持高精度的同时提供显著的性能优势,我们需要大量的软件技术。
与此同时,为了以快速、实时的速率为要求严苛的模型提供服务,许多功能非常强大的 GPU 需要协同工作以生成响应。
NVIDIA GB200 NVL72
机架级解决方案创建了一个 72-GPU NVLink 域,可充当单个大型 GPU。对于 GPT-MoE 1.8T 实时推理,与上一代 Hopper GPU 相比,其吞吐量提高了 30 倍。
此外,新的扩展定律(测试时计算) 的出现为提高更复杂任务的响应质量和准确性提供了另一种方法。这种新范式首先在 OpenAI o1 模型中引入,使模型能够在输出最终结果之前通过生成许多中间令牌来“推理”。推理模型在复杂数学和生成计算机代码等领域尤为有用。这种新范式将起新一轮突破浪潮,需要在推理期间实现更高的计算性能。
通往人工通用智能的道路将依赖于数据中心计算性能的持续突破。预训练、后训练和测试时扩展都依赖于运行专家精心编写的软件的最先进的基础架构。NVIDIA 平台发展迅速,一年内创新节奏轻快,使生态系统能够继续推动人工智能的前沿发展。
开始使用
查看如何开始使用 AI 推理
,了解更多关于
NVIDIA AI 推理平台
的信息,并随时了解
最新的 AI 推理性能更新
。
观看演示,了解如何快速部署
NVIDIA NIM
微服务,或阅读《使用 NVIDIA NIM 部署生成式 AI 的简单指南》。TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用 NVIDIA NIM 微服务的生产就绪型部署获得。 |
https://developer.nvidia.com/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/ | NVIDIA TensorRT-LLM Now Supports Recurrent Drafting for Optimizing LLM Inference | Recurrent drafting
(referred as ReDrafter) is a novel speculative decoding technique developed and open-sourced by Apple for large language model (LLM) inference now available with
NVIDIA TensorRT-LLM
. ReDrafter helps developers significantly boost LLM workload performance on NVIDIA GPUs.
NVIDIA TensorRT-LLM
is a library for optimizing LLM inference. It provides an easy-to-use Python API to define LLMs and build
NVIDIA TensorRT
engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Optimizations include custom attention kernels, inflight batching, paged KV caching, quantization (FP8, INT4 AWQ, INT8 SmoothQuant), and much more.
Speculative decoding
is a technique that accelerates LLM inference by generating multiple tokens in parallel. It uses smaller “draft” modules to predict future tokens, which are then verified by the main model. This method maintains output quality while significantly reducing response times, especially during low traffic periods, by better utilizing available resources for low-latency inference.
ReDrafter employs recurrent neural network (RNN)-based sampling, referred to as
drafting
, combined with tree-style attention previously used in other techniques like Medusa to predict and verify draft tokens from multiple possible paths for better accuracy and to potentially accept more than one token in each iteration of the decoder. NVIDIA collaborated with Apple to add support for this technique in TensorRT-LLM, making it accessible to the broader developer community.
The integration of ReDrafter into TensorRT-LLM expanded its reach, unlocked new optimization potential, and improved on previous methods such as Medusa. For Medusa, the path acceptance and token sampling happens in the TensorRT-LLM runtime, introducing some overhead inside the engine for processing all possible future paths without knowing the accepted path, most of which are ultimately discarded. To reduce such overhead, ReDrafter requires the token validation and acceptance of the best path before drafting future tokens for the next iteration.
TensorRT-LLM has been updated to incorporate drafting and validation logic inside a single engine, rather than relying on the runtime or separate engines to further minimize overhead. This approach provides TensorRT-LLM kernel selection and scheduling more freedom to optimize the network for maximum performance.
To better illustrate ReDrafter improvements, Figure 1 highlights the key differences between its implementation and that of Medusa in TensorRT-LLM. Most of the components related to speculative decoding have been done in-engine for ReDrafter. This significantly simplifies the runtime changes needed for ReDrafter.
Figure 1. Comparison of Medusa (left) and ReDrafter (right) implementations in NVIDIA TensorRT-LLM
The following sections delve into some of the changes that help enable ReDrafter in TensorRT-LLM.
Inflight-batching compatible engine
Inflight-batching (IFB) is a strategy that significantly improves the throughput by batching context-phase and generation-phase requests. Speculative decoding, coupled with IFB, introduces more complexity to the pipeline as context-phase requests need to be handled differently than generation-phase requests, which require draft token validation. Since ReDrafter moves the validation logic inside the model definition, the engine needs that logic as well during validation. Similar to the attention plugin, the batch is split into two smaller batches: one for context requests and another for generation requests. Each smaller batch then enters its computational workflow, and at the end they are combined back to a single batch for drafting.
Figure 2. ReDrafter’s computational workflow for inflight-batching compatible TensorRT-LLM engine
Note that this approach requires that all operators on either path support empty tensors, which could happen if a batch consists of all context requests or all generation requests. This capability adds flexibility to TensorRT-LLM APIs, enabling the definition of more complicated models in the future.
Implementing in-engine validation and drafting
To validate and draft inside the engine, TensorRT-LLM is updated with support for numerous new operations so that PyTorch code can be easily translated into a definition of the TensorRT-LLM model.
The following PyTorch code excerpt is Apple’s
PyTorch implementation of ReDrafter
. The TensorRT-LLM implementation is almost a straightforward line-by-line mapping of the PyTorch version.
PyTorch
def unpack(
packed_tensor: torch.Tensor,
unpacker: torch.Tensor,
) -> torch.Tensor:
assert len(packed_tensor.shape) == 3
last_dim_size = packed_tensor.shape[2]
batch_size, beam_width, beam_length = unpacker.shape
unpacked_data_indices = unpacker.view(
batch_size, beam_width * beam_length, 1).expand(
-1, -1, last_dim_size
)
unpacked_tensor = torch.gather(
packed_tensor, 1, unpacked_data_indices).reshape(
batch_size, beam_width, beam_length, -1
)
return unpacked_tensor
TensorRT-LLM
def _unpack_beams(
x: Tensor,
indices: Tensor,
num_beams: int,
beam_length: int
) -> Tensor:
assert x.rank() == 3
d0 = shape(x, 0, INT_DTYPE_STR)
dl = shape(x, -1, INT_DTYPE_STR)
indices = view(
indices, [-1, num_beams * beam_length, 1], False)
res_shape = concat([d0, num_beams, beam_length, dl])
res = view(gather_nd(x, indices), res_shape, False)
return res
This, of course, is a very simple example. For a more complex example, see the
beam search implementation
. With the new functionalities added for ReDrafter, it might be possible to improve the Medusa implementation in TensorRT-LLM to further increase its performance.
ReDrafter performance in TensorRT-LLM
As
benchmarked by Apple
, ReDrafter with TensorRT-LLM can provide up to 2.7x throughput improvements on
NVIDIA H100
GPUs with TP8 over the base LLM.
Note that the performance improvement of any speculative decoding technique can be heavily impacted by many factors, including:
GPU utilization:
Speculative decoding is commonly used for low-traffic scenarios, where GPU resources are typically underutilized due to small batch sizes.
Average acceptance rate:
The latency of each decoding step is increased since speculative decoding must perform extra computation, where a significant portion of it is ultimately wasted after validation. As a result, to see any performance benefits from speculative decoding, the average acceptance rate must be high enough to pay for that extra latency. This is affected by the number of beams, their lengths, and the quality of the beam search itself (which is impacted by the training data).
Task:
It is easier to predict future tokens for some tasks (code completion, for example), which leads to a higher acceptance rate, and thus improved performance.
Summary
This collaboration between NVIDIA and Apple, has made TensorRT-LLM more powerful and more flexible, enabling the LLM community to innovate more sophisticated models and easily deploy them with TensorRT-LLM to achieve unparalleled performance on NVIDIA GPUs. These new features open exciting possibilities, and we eagerly anticipate the next generation of advanced models from the community that leverage TensorRT-LLM capabilities, driving further improvements in LLM workloads.
Explore
NVIDIA TensorRT-LLM
to unlock the full potential of your models on NVIDIA GPUs. | https://developer.nvidia.com/zh-cn/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/ | NVIDIA TensorRT-LLM 现支持 Recurrent Drafting,实现 LLM 推理优化 | Recurrent Drafting (简称 ReDrafter) 是苹果公司为大语言模型 (LLM) 推理开发并开源的一种新型推测解码技术,该技术现在可与
NVIDIA TensorRT-LLM
一起使用。ReDrafter 帮助开发者大幅提升了 NVIDIA GPU 上的 LLM 工作负载性能。
NVIDIA TensorRT-LLM
是一个 LLM 推理优化库,提供了一个易于使用的 Python API 来定义 LLM 和构建
NVIDIA TensorRT
引擎,这些引擎具有顶尖的优化功能,可在 GPU 上高效执行推理。优化功能包括自定义 Attention Kernel、Inflight Batching、Paged KV Caching、量化技术 (FP8、INT4 AWQ、INT8 SmoothQuant) 等。
推测解码 (Speculative decoding)
是一种通过并行生成多个 token 来加速 LLM 推理的技术。它使用较小的“draft”模块预测未来的 token,然后由主模型进行验证。该方法通过更好地利用可用资源实现低延迟推理,在保持输出质量的同时大大缩短了响应时间,尤其是在低流量时段。
ReDrafter 运用基于循环神经网络 (RNN) 的采样 (称为
Drafting
) 并结合之前在 Medusa 等其他技术中使用的树状注意力,预测和验证来自多个可能路径的 draft token 以提高准确性,并在解码器的每次迭代中接受一个以上 token。NVIDIA 与苹果公司合作,在 TensorRT-LLM 中添加了对该技术的支持,使更加广泛的开发者社区能够使用该技术。
ReDrafter 与 TensorRT-LLM 的集成扩大了该技术的覆盖范围,解锁了新的优化潜力,并改进了 Medusa 等先前的方法。Medusa 的路径接受和 token 采样发生在 TensorRT-LLM 运行时,需要在接受路径未知的情况下处理所有可能的未来路径,而且其中大部分路径最终都会被丢弃,这就给引擎内部带来了一些开销。为了减少这种开销,ReDrafter 要求在 drafting 下一次迭代的未来 token 之前,先验证 token 并接受最佳路径。
为了进一步减少开销,TensorRT-LLM 更新后在单个引擎中整合了 drafting 和验证逻辑,不再依赖运行时或单独的引擎。这种方法为 TensorRT-LLM 内核选择和调度提供了更大的自由度,通过优化网络实现了性能的最大化。
为了更好地说明 ReDrafter 的改进,图 1 展示了 TensorRT-LLM 中 ReDrafter 实现与 Medusa 实现的主要区别。大多数与推测解码相关的组件都在 ReDrafter 的引擎内完成,这大大简化了 ReDrafter 所需的运行时更改。
图 1. NVIDIA TensorRT-LLM 中 Medusa(左)和 ReDrafter(右)实现的比较
下面将深入探讨有助于在 TensorRT-LLM 中启用 ReDrafter 的一些变化。
兼容 Inflight-batching 批处理的引擎
Inflight-batching (IFB) 是一种通过批量处理上下文阶段和生成阶段请求,来显著提高吞吐量的策略。鉴于上下文阶段请求与生成阶段请求的处理方式不同(生成阶段请求需要 draft token 验证),因此结合 IFB 的推测解码会给管线带来更大的复杂性。ReDrafter 将验证逻辑移至模型定义内部,因此引擎在验证过程中也需要该逻辑。与注意力插件类似,该批处理被分成两个较小的批处理:一个用于上下文请求,另一个用于生成请求。然后,每个较小的批处理进入计算工作流,最后再合并成一个批处理进行 drafting 流程。
图 2. ReDrafter 兼容 TensorRT-LLM 引擎的 Inflight-batching 批处理计算工作流
请注意,这种方法要求任一路径上的所有运算符都支持空张量。如果一个批处理由所有上下文请求或所有生成请求组成,就可能出现空张量。该功能增加了 TensorRT-LLM API 的灵活性,使未来定义更复杂的模型成为可能。
实现引擎内验证和
Drafting
为了在引擎内进行验证和 draft,TensorRT-LLM 更新时加入了对许多新操作的支持,这样 PyTorch 代码就可以轻松地转化成一个 TensorRT-LLM 模型的定义。
以下 PyTorch 代码摘录是苹果公司的
PyTorch 实现的 ReDrafter
。TensorRT-LLM 实现几乎就是 PyTorch 版本的直接逐行映射。
PyTorch
def unpack(
packed_tensor: torch.Tensor,
unpacker: torch.Tensor,
) -> torch.Tensor:
assert len(packed_tensor.shape) == 3
last_dim_size = packed_tensor.shape[2]
batch_size, beam_width, beam_length = unpacker.shape
unpacked_data_indices = unpacker.view(
batch_size, beam_width * beam_length, 1).expand(
-1, -1, last_dim_size
)
unpacked_tensor = torch.gather(
packed_tensor, 1, unpacked_data_indices).reshape(
batch_size, beam_width, beam_length, -1
)
return unpacked_tensor
TensorRT-LLM
def _unpack_beams(
x: Tensor,
indices: Tensor,
num_beams: int,
beam_length: int
) -> Tensor:
assert x.rank() == 3
d0 = shape(x, 0, INT_DTYPE_STR)
dl = shape(x, -1, INT_DTYPE_STR)
indices = view(
indices, [-1, num_beams * beam_length, 1], False)
res_shape = concat([d0, num_beams, beam_length, dl])
res = view(gather_nd(x, indices), res_shape, False)
return res
当然,这只是一个非常简单的例子。如要了解更复杂的示例,请参见
束搜索实现
。借助为 ReDrafter 添加的新功能,就可以改进 TensorRT-LLM 中的 Medusa 实现,从而进一步提高其性能。
ReDrafter 在 TensorRT-LLM 中的性能
根据
苹果公司的基准测试
,在采用 TP8(Tensor Parallelism with 8 GPUs,8 卡 GPU 张量并行) 的
NVIDIA
GPU 上使用 TensorRT-LLM 的 ReDrafter 最多可将吞吐量提高至基础 LLM 的 2.7 倍。
请注意,任何推测解码技术的性能提升幅度都会受到诸多因素的大幅影响,包括:
GPU 利用率:
推测解码通常用于低流量场景,由于批量较小,GPU 资源的利用率通常较低。
平均接受率:
由于推测解码必须执行额外的计算,而其中很大一部分计算最终会在验证后被浪费,因此每个解码步骤的延迟都会增加。所以要想通过推测解码获得任何性能上的优势,平均接受率必须高到足以弥补增加的延迟。这受到束数量、束长度和束搜索本身质量(受训练数据影响)的影响。
任务:
在某些任务(例如代码完成)中预测未来的 token 更容易,使得接受率更高,性能也会因此而提升。
总结
NVIDIA 与苹果公司的合作让 TensorRT-LLM 变得更加强大和灵活,使 LLM 社区能够创造出更加复杂的模型并通过 TensorRT-LLM 轻松部署,从而在 NVIDIA GPU 上实现无与伦比的性能。这些新特性带来了令人兴奋的可能性,我们热切期待着社区使用 TensorRT-LLM 功能开发出新一代先进模型,进一步改进 LLM 工作负载。
探索
NVIDIA TensorRT-LLM
,在 NVIDIA GPU 上充分释放模型潜能。 |
https://developer.nvidia.com/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/ | Boost Llama 3.3 70B Inference Throughput 3x with NVIDIA TensorRT-LLM Speculative Decoding | Meta’s Llama collection of open
large language models (LLMs)
continues to grow with the recent addition of
Llama 3.3 70B
, a text-only instruction-tuned model. Llama 3.3 provides enhanced performance respective to the older Llama 3.1 70B model and can even match the capabilities of the larger, more computationally expensive
Llama 3.1 405B model
on several tasks including math, reasoning, coding, and multilingual support.
NVIDIA TensorRT-LLM
, a powerful inference engine that delivers state-of-the-art performance on the latest LLMs, incorporates many optimizations to deliver outstanding Llama 3.3 70B inference throughput. These include
in-flight batching
,
KV caching
,
custom FP8 quantization
, speculative decoding, and more for fast, cost-efficient LLM serving.
With in-flight batching activated by default as a runtime configuration parameter, TensorRT-LLM supports batching multiple different requests at the same time for higher serving throughput. By interleaving requests in context and generation phases, in-flight batching reduces latency and improves GPU utilization by executing new requests while older requests are still in flight. Finished requests are evicted from the batch, making room for the next set of requests.
Caching the values of the key-value elements of previous tokens saves from expensive recomputation of these tensors in the generation phase for the next set of tokens. Computational savings effectively lead to higher throughput. However, KV cache grows linearly in size with number of batched requests and sequence context lengths, leading to higher memory requirements.
TensorRT-LLM KV caching addresses these challenges through several optimizations, including support for
paged KV cache
,
quantized KV cache
,
circular buffer KV cache
and
KV cache reuse
. Each of these optimizations address the challenging balance between growing memory size and avoiding unnecessary and expensive recomputation.Speculative decoding is a popular technique for faster and cost-effective LLM inference with built-in verification for the quality of output generation. It’s based on the premise that generating multiple sequences of future (draft) tokens is more efficient than processing a single token in autoregressive decoding, an inherently time-consuming process. The target model determines how many of these draft tokens to accept, which is far more efficient than having to generate one token per iteration. TensorRT-LLM supports a growing list of speculative decoding techniques including
draft target
,
Medusa
,
Eagle
, and
lookahead decoding
, among others.
Figure 1. NVIDIA TensorRT optimizations for high-performance deep learning inference
In this post, we show how the
NVIDIA HGX H200 platform
with NVLink and NVSwitch, as well as TensorRT-LLM, achieve great performance when running the latest Llama 3.3 70B model. We describe the step-by-step setup to get speculating decoding working for Llama 3.3 70B with TensorRT-LLM. For more information, including other optimizations, different models, and multi-GPU execution, see the full list of
TensorRT-LLM examples
.
Achieving throughput speedups with draft target speculative decoding
Table 1 and Figure 2 highlight the throughput (output tokens/second) speedups between no draft model (that is, no speculative decoding) versus draft models of various sizes with Llama 3.3 70B target model.
Throughput Performance – Output Tokens/Second
One NVIDIA H200 Tensor Core GPU
Draft | Target
Models
Llama 3.2 1B | Llama 3.3 70B
Llama 3.2 3B | Llama 3.3 70B
Llama 3.1 8B | Llama 3.3 70B
Llama 3.3 70B
(without draft model)
Tokens/sec
181.74
161.53
134.38
51.14
Speedups (with versus without draft models)
3.55x
3.16x
2.63x
N/A
Table 1. Throughput performance using one NVIDIA H200 Tensor Core GPU with TensorRT-LLM internal measurements
Data measured on December 11, 2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, TP1, FP8, batch size=1, TensorRT Model Optimizer version 0.21, TensorRT-LLM version 0.15.0.
Figure 2. Throughput speedups using speculative decoding with Llama 3.3 70B target model
We provide the steps to reproduce these performance gains using draft target speculative decoding within TensorRT-LLM.
# Download the following model checkpoints from Hugging Face and store them
in a directory for easy access through the setup process.
git lfs install
# Download target models
git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct
# Download draft models
git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
After the model checkpoints have been downloaded, install TensorRT-LLM.
# Obtain and start the basic docker image environment (optional).
docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint
/bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04
# Install dependencies, TensorRT-LLM requires Python 3.10
apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin
libopenmpi-dev git git-lfs
# Fetch the library
git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
# Install the latest version (corresponding to the main branch) of TensorRT-LLM.
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com
# Check installation
python3 -c "import tensorrt_llm"
Next, compile the downloaded model checkpoints into draft and target TensorRT engines. These engines are optimized to run inference with best accuracy and highest throughput.
cd examples
# Steps to build target and draft models in FP8 precision on 1 H200
# Create FP8 checkpoints
python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8
--output_dir /ckpt-draft --calib_size 512 --tp_size 1
python3 quantization/quantize.py \
--model_dir=<path to target model repo> \
--output_dir=./ckpt-target-70b \
--dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \
--calib_size 512 --tp_size 1
# Build draft and target engines
# Important flags for the engine build process:
# --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model.
# --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model.
trtllm-build \
--checkpoint_dir ./ckpt-draft \
--output_dir=./draft-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072
trtllm-build \
--checkpoint_dir=./ckpt-target-70b \
--output_dir=./target-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072 \
--low_latency_gemm_plugin fp8 \
--speculative_decoding_mode=draft_tokens_external \
--max_draft_len 10
Finally, run speculative decoding in TensorRT-LLM.
#Run decoding
# Important flags to set during the run process:
#--draft_engine_dir and --engine_dir must be specified for the draft and target engines.
#--draft_target_model_config is corresponding to the configuration of
Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4,
device of draft model is GPU0, device of target model is GPU1, and use
tokens rather than logits to accept.
# Only CPP session (using executor as low-level API) is supported, while
Python session (--use_py_session) is not supported.
# Run with Llama 3.3 70B target model
mpirun -n 1 --allow-run-as-root python3 ./run.py \
--tokenizer_dir <path to draft model repo> \
--draft_engine_dir ./draft-engine \
--engine_dir ./target-engine \
--draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \
--kv_cache_free_gpu_memory_fraction=0.35 \
--max_output_len=1024 \
--kv_cache_enable_block_reuse \
--input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA
3-digit integer contains one of each of the digits 1,3 and 5. What is the
probability that the integer is divisible by
5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
# Following is the LLM-generated output:
Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit
integers that can be formed using the digits 1, 3, and 5.
There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers.
## Step 2: Identify the condition for a number to be divisible by 5.
A number is divisible by 5 if its last digit is either 0 or 5.
## Step 3: Determine the number of arrangements where 5 is the last digit.
Since the digit 5 must be the last digit for the number to be divisible by
5, we fix the last position with 5. The remaining two positions can be
filled with the digits 1 and 3 in 2! = 2 ways.
## Step 4: Calculate the probability that the integer is divisible by 5.
The probability is the number of favorable outcomes (arrangements where 5 is the last digit)
divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5).
## Step 5: Calculate the probability.
Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3.
The final answer is: $\boxed{\frac{1}{3}}$"
To benchmark throughput performance without speculative decoding, follow the steps below:
# Run throughput benchmark for the 70B model without the draft model
trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/
--gpt_attention_plugin float16 --workers 1 --max_batch_size 32
--max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable
--use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8
python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output
token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist
--num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200
--output-stdev 0 > /tmp/synthetic.txt
trtllm-bench --model <path to target model repo> latency --engine_dir
/data/70b-TRT/ --dataset /tmp/synthetic.txt
Summary
NVIDIA collaborates with Meta for the creation, optimization, and acceleration of the world’s leading open models. NVIDIA supports Llama as part of our commitment to grow open community AI models and software to enable users to customize and address their own unique workloads. NVIDIA is involved with several open-source projects through partnering with developers, maintainers, and foundations.
NVIDIA TensorRT-LLM
provides several features for optimizing and efficiently running LLMs of different model architectures. These optimizations lead to significant speedups on the same hardware, enable fewer resources to serve the same workload, reduce energy costs, and improve total cost of ownership. Available through production-ready deployments using
NVIDIA NIM
microservices, these TensorRT optimizations accelerate the deployment of your generative AI applications across NVIDIA-accelerated infrastructure anywhere, including cloud, data center, and workstations. | https://developer.nvidia.com/zh-cn/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/ | 借助 NVIDIA TensorRT-LLM 预测解码,将 Llama 3.3 的推理吞吐量提升 3 倍 | 随着近期新增的 Llama 3.3 70B
(一种纯文本指令调整模型),Meta 的开放
大语言模型 (LLMs)
集合将继续增长。Llama 3.3 相对于较旧的 Llama 3.1 70B 模型提供了增强的性能,甚至可以在数学、推理、编码和多语种支持等多项任务中匹配更大、计算成本更高的 Llama 3.1 405B 模型的功能。
NVIDIA TensorRT-LLM
是一款功能强大的推理引擎,可在最新的 LLM 上提供先进的性能,并整合了许多优化措施,可提供出色的 Llama 3.3 70B 推理吞吐量。其中包括在飞批处理、
KV 缓存
、
自定义 FP8 量化
、推测解码等,可实现快速、经济高效的 LLM 服务。
动态批处理默认作为运行时配置参数激活,TensorRT-LLM 支持同时批处理多个不同的请求,从而提高服务吞吐量。通过在上下文和生成阶段交错处理请求,动态批处理可在旧请求仍在运行时执行新请求,从而减少延迟并提高 GPU 利用率。已完成的请求将从批处理中移除,从而为下一组请求腾出空间。
缓存先前令牌的键值元素的值可避免在为下一组令牌生成阶段对这些张量进行昂贵的重新计算。节省的计算量可有效提高吞吐量。但是,随着批量请求的数量和序列上下文长度的增加,KV 缓存的大小呈线性增长,从而导致内存需求增加。
TensorRT-LLM KV 缓存通过多种优化解决了这些挑战,包括对分页 KV 缓存、
量化 KV 缓存
、
循环缓冲区 KV 缓存
和
KV 缓存重复使用
的支持。每项优化都解决了在增大内存大小与避免不必要且昂贵的重新计算之间实现富有挑战性的平衡这一问题。预测解码是一种热门技术,可通过内置验证来生成输出的质量,从而实现更快且经济高效的 LLM 推理。其前提是,在自回归解码过程中,生成多个未来 (草稿) 令牌序列比处理单个令牌更高效。目标模型决定接受这些草稿令牌的数量,这比每次迭代生成一个令牌的效率要高得多。TensorRT-LLM 支持越来越多的预测性解码技术,包括
草稿目标
、
Medusa
、
Eagle
和前瞻性解码等。
图 1、适用于高性能深度学习推理的 NVIDIA TensorRT 优化
在本文中,我们将展示搭载 NVLink 和 NVSwitch 的
NVIDIA HGX H200 平台以及 TensorRT-LLM
如何在运行最新的 Llama 3.3 70B 模型时实现出色性能。我们介绍了使用 TensorRT-LLM 对 Llama 3.3 70B 进行解码预测的分步设置。有关其他优化、不同模型和多 GPU 执行的更多信息,请参阅
TensorRT-LLM 示例
的完整列表。
通过草稿目标预测解码实现吞吐量加速
表 1 和图 2 突出显示了无草稿模型 (即无预测解码) 与使用 Llama 3.3 70B 目标模型的各种规模的草稿模型之间的吞吐量 (输出令牌/秒) 加速。
吞吐量性能 – 输出 Tokens/秒
One NVIDIA H200 Tensor Core GPU
草稿|目标
模型
Llama 3.2 1B|Llama 3.3 70B
Llama 3.2 3B|Llama 3.3 70B
Llama 3.1 8B|Llama 3.3 70B
Llama 3.3 70B
(无草稿模型)
令牌/秒
191.74
151.53
134.38
51.14
加速 (有与无草稿模型对比)
3.55 倍
3.16 倍
2.63 倍
不适用
表 1、使用一个 NVIDIA H200 Tensor Core GPU 和 TensorRT-LLM 内部测量的吞吐量性能
数据测量于 2024 年 12 月 11 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 总生成 tokens / 总延迟。DGX H200、TP1、FP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21,TensorRT-LLM 版本 0.15.0。
图 2、使用 Llama 3.3 70B 目标模型的预测解码提高吞吐量
我们提供了在 TensorRT-LLM 中使用草稿目标预测解码重现这些性能提升的步骤。
# Download the following model checkpoints from Hugging Face and store them
in a directory for easy access through the setup process.
git lfs install
# Download target models
git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct
# Download draft models
git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
下载模型检查点后,安装 TensorRT-LLM。
# Obtain and start the basic docker image environment (optional).
docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint
/bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04
# Install dependencies, TensorRT-LLM requires Python 3.10
apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin
libopenmpi-dev git git-lfs
# Fetch the library
git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
# Install the latest version (corresponding to the main branch) of TensorRT-LLM.
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com
# Check installation
python3 -c "import tensorrt_llm"
接下来,将下载的模型检查点编译到草稿和目标 TensorRT 引擎中。这些引擎经过优化,能够以最佳的准确性和最高的吞吐量运行推理。
cd examples
# Steps to build target and draft models in FP8 precision on 1 H200
# Create FP8 checkpoints
python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8
--output_dir /ckpt-draft --calib_size 512 --tp_size 1
python3 quantization/quantize.py \
--model_dir=<path to target model repo> \
--output_dir=./ckpt-target-70b \
--dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \
--calib_size 512 --tp_size 1
# Build draft and target engines
# Important flags for the engine build process:
# --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model.
# --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model.
trtllm-build \
--checkpoint_dir ./ckpt-draft \
--output_dir=./draft-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072
trtllm-build \
--checkpoint_dir=./ckpt-target-70b \
--output_dir=./target-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072 \
--low_latency_gemm_plugin fp8 \
--speculative_decoding_mode=draft_tokens_external \
--max_draft_len 10
最后,在 TensorRT-LLM 中运行投机性解码。
#Run decoding
# Important flags to set during the run process:
#--draft_engine_dir and --engine_dir must be specified for the draft and target engines.
#--draft_target_model_config is corresponding to the configuration of
Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4,
device of draft model is GPU0, device of target model is GPU1, and use
tokens rather than logits to accept.
# Only CPP session (using executor as low-level API) is supported, while
Python session (--use_py_session) is not supported.
# Run with Llama 3.3 70B target model
mpirun -n 1 --allow-run-as-root python3 ./run.py \
--tokenizer_dir <path to draft model repo> \
--draft_engine_dir ./draft-engine \
--engine_dir ./target-engine \
--draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \
--kv_cache_free_gpu_memory_fraction=0.35 \
--max_output_len=1024 \
--kv_cache_enable_block_reuse \
--input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA
3-digit integer contains one of each of the digits 1,3 and 5. What is the
probability that the integer is divisible by
5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
# Following is the LLM-generated output:
Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit
integers that can be formed using the digits 1, 3, and 5.
There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers.
## Step 2: Identify the condition for a number to be divisible by 5.
A number is divisible by 5 if its last digit is either 0 or 5.
## Step 3: Determine the number of arrangements where 5 is the last digit.
Since the digit 5 must be the last digit for the number to be divisible by
5, we fix the last position with 5. The remaining two positions can be
filled with the digits 1 and 3 in 2! = 2 ways.
## Step 4: Calculate the probability that the integer is divisible by 5.
The probability is the number of favorable outcomes (arrangements where 5 is the last digit)
divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5).
## Step 5: Calculate the probability.
Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3.
The final answer is: $\boxed{\frac{1}{3}}$"
要在不使用 speculative decoding 的情况下对吞吐量性能进行基准测试,请执行以下步骤:
# Run throughput benchmark for the 70B model without the draft model
trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/
--gpt_attention_plugin float16 --workers 1 --max_batch_size 32
--max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable
--use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8
python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output
token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist
--num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200
--output-stdev 0 > /tmp/synthetic.txt
trtllm-bench --model <path to target model repo> latency --engine_dir
/data/70b-TRT/ --dataset /tmp/synthetic.txt
总结
NVIDIA 与 Meta 合作创建、优化和加速世界领先的开放模型。NVIDIA 支持 Llama,履行我们发展开放社区 AI 模型和软件的承诺,使用户能够自定义和处理自己独特的工作负载。NVIDIA 通过与开发者、维护人员和基金会合作参与了多个开源项目。
NVIDIA TensorRT-LLM
提供多种功能,用于优化和高效运行不同模型架构的 LLM。这些优化可显著加快相同硬件的运行速度,减少资源以处理相同工作负载,降低能源成本,并提高总拥有成本。这些 TensorRT 优化通过使用
NVIDIA NIM
微服务的生产就绪型部署提供,可随时随地在 NVIDIA 加速的基础设施 (包括云、数据中心和工作站) 中加速生成式 AI 应用的部署。 |
https://developer.nvidia.com/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/ | Spotlight: Perplexity AI Serves 400 Million Search Queries a Month Using NVIDIA Inference Stack | The demand for AI-enabled services continues to grow rapidly, placing increasing pressure on IT and infrastructure teams. These teams are tasked with provisioning the necessary hardware and software to meet that demand while simultaneously balancing cost efficiency with optimal user experience. This challenge was faced by the inference team at Perplexity AI, an AI-powered search engine that handles more than 435 million queries each month. Each query represents multiple AI inference requests.
To meet this demand, the Perplexity inference team turned to
NVIDIA H100 Tensor Core GPUs
,
NVIDIA Triton Inference Server
, and
NVIDIA TensorRT-LLM
for cost-effective
large language model (LLM)
deployment. This post details some of the deployment best practices and TCO savings based on their hands-on experience.
Serving multiple AI models simultaneously
To support its extensive user base and serve a wide range of requests—spanning search, summarization, and question answering, among others—the inference team at Perplexity serves over 20 AI models simultaneously. This includes different variations of the popular open source Llama 3.1 models like 8B, 70B, and 405B.
To match each user request with the appropriate model, the company relies on smaller classifier models that help determine user intent. User tasks detected by the classifiers, like text completion, are then routed to specific models deployed on GPU pods. Each pod consists of one or more NVIDIA H100 GPUs and is managed by an NVIDIA Triton Inference Server instance. The pods operate under strict service-level agreements (SLAs) for both cost efficiency and user interactivity.
Figure 1. Perplexity AI deployed LLM production on a massive scale using NVIDIA HGX H100 four-GPU and eight-GPU systems featuring dynamic scalability, model parallelism, and load balancing capabilities
To accommodate the large Perplexity user base and fluctuating traffic throughout the day, the pods are hosted within a Kubernetes cluster. They feature a front-end scheduler built in-house that routes traffic to the appropriate pod based on their load and usage, ensuring that the SLAs are consistently met.
The scheduling algorithm used by the front-end scheduler can affect inter-token latency, particularly in improving the worst percentile of performance (Figure 2). The team at Perplexity constantly looks for new scheduler optimizations, including how to better account for sequence length variations across requests.
Figure 2. Inter-token latency distribution for round-robin, least requests, and power of two random choices load balancing strategies during an internal Perplexity benchmark at a moderate request rate of 16 QPS
Triton Inference Server is a critical component of Perplexity’s deployment architecture. It serves optimized models across various backends, batches incoming user requests, and provides GPU utilization metrics to the scheduler. This supports scaling up or down the number of deployments and GPUs based on the amount of the inference requests.
For a detailed guide on how to deploy NVIDIA Triton with Kubernetes, see
Scaling LLMs with NVIDIA Triton and NVIDIA TensorRT-LLM Using Kubernetes
.
Meeting strict service-level agreements
To define the right SLAs for the company’s diverse use cases, Perplexity’s inference team conducts comprehensive A/B testing, evaluating different configurations and their impact on user experience. Their goal is to maximize GPU utilization while consistently meeting the target SLA for each specific use case. By improving batching while meeting target SLAs, inference serving cost is optimized.
For smaller models, such as embedding models under 1 billion parameters used in real-time retrieval, the focus is on achieving the lowest possible latency. These are typically hidden from the user and are part of a broader workflow. As a result, configurations for these queries typically have low batch sizes. Given the smaller memory footprints of these models, the team runs multiple models concurrently on the NVIDIA H100 GPU to maintain high resource utilization.
For user-facing models such as Llama 8B, 70B, and 405B, which have a greater impact on user experience and deployment costs, the team conducts a deeper performance analysis and evaluates key metrics such as time to first token, tokens per second per user, and cost per million queries.
To optimize performance while controlling costs, Perplexity parallelizes their deployment of these models across multiple GPUs. Due to the strict SLAs, the team opted to increase tensor parallelism to four and eight GPUs, which they found yields lower serving costs for very latency-sensitive requests within a fixed GPU budget. Data or pipeline parallelism was useful for maximizing throughput in less latency-sensitive settings.
The Perplexity team now uses TensorRT-LLM in combination with proprietary LLM runtimes built with optimized CUDA kernels to successfully serve the Llama-based models within their strict SLAs at minimal costs.
Figure 3. Sharding the Llama 8B model using tensor parallelism across four NVIDIA Hopper GPUs reduces Perplexity’s relative cost per million token by up to 3x for latency-sensitive requests
Ultimately, the Perplexity inference team’s decision to host models depends on their ability to serve these models at a lower cost while still meeting their strict SLAs, compared to using third-party LLM provider APIs. For example, the team estimated that they were able to save approximately $1 million annually by serving models that power their Related-Questions feature on cloud-hosted NVIDIA GPUs. The Related-Questions feature offers Perplexity users suggested follow-up questions to facilitate deeper dives after a search query.
Delivering new levels of performance
The inference team at Perplexity adopts a comprehensive, full-stack approach to their road map, consistently optimizing and enhancing every layer of the stack—from applications and use cases to inference serving middleware and hardware accelerators.
In terms of inference serving middleware, the team is actively collaborating with the NVIDIA Triton engineering team to deploy
disaggregating serving
, a groundbreaking technique that separates the prefill and decode inference phases of an LLM workflow onto separate NVIDIA GPUs. This technique significantly boosts overall system throughput while meeting SLAs, translating to lower cost per token. Additionally, this technique gives Perplexity the flexibility to use different NVIDIA GPU products for each inference phase given its specific hardware resource requirements.
The Perplexity team understands that optimizing the software stack can only drive performance improvements to a certain extent. To deliver new levels of performance, hardware innovations are crucial. This is why they are eager to assess the
NVIDIA Blackwell platform
.
NVIDIA Blackwell delivers
significant performance leaps
enabled by numerous technology innovations, including the second-generation Transformer Engine with support for the FP4 data format, fifth-generation NVLink and NVSwitch enabling significantly larger NVLink domains, and more. Collectively these innovations deliver 30x improvement in inference performance for
trillion parameter LLMs
.
Get started
NVIDIA Triton Inference Server
and
NVIDIA TensorRT
are open-source projects available on GitHub. They are also available as Docker containers that can be pulled from
NVIDIA NGC
. They are part of
NVIDIA AI Enterprise
, which offers enterprise-grade security, stability, and support. Enterprises seeking the fastest time to value can use
NVIDIA NIM
, a set of easy-to-use microservices for accelerated inference on a wide range of AI models, including open-source community and NVIDIA AI Foundation models.
To learn more, check out these resources:
From Zero to Millions: Scaling Large Language Model Inference with TensorRT-LLM
Perplexity Enhances Model Performance for AI-Powered Search Engines with NVIDIA NeMo
Accelerating Large Language Model Inference with NVIDIA in the Cloud | https://developer.nvidia.com/zh-cn/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/ | 聚焦:Perplexity AI 利用 NVIDIA 推理栈每月服务 4 亿个搜索请求 | 对 AI 赋能服务的需求持续快速增长,这给 IT 和基础设施团队带来了越来越大的压力。这些团队的任务是配置必要的硬件和软件来满足这一需求,同时平衡成本效益和最佳用户体验。Perplexity AI 的推理团队面临着这一挑战。Perplexity AI 是一款由 AI 驱动的搜索引擎,每月可处理超过 435 million 个查询。每个查询代表多个 AI 推理请求。
为了满足这一需求,Perplexity 推理团队转而使用
NVIDIA H100 Tensor Core GPUs
、
NVIDIA Triton Inference Server
和
NVIDIA TensorRT-LLM
来进行经济高效的
大语言模型(LLM)
部署。本文根据他们的实操经验,详细介绍了一些部署最佳实践和节省的 TCO。
同时为多个 AI 模型提供服务
为了支持其广泛的用户群并满足搜索、总结和问答等方面的各种请求,Perplexity 的推理团队同时为 20 多个 AI 模型提供服务。其中包括热门开源 Llama 3.1 模型的不同变体,如 8B、70B 和 405B。
为了将每个用户请求与适当的模型相匹配,该公司依靠较小的分类器模型来帮助确定用户意图。然后,分类器检测到的用户任务 (例如文本补全) 会被路由到部署在 GPU 节点上的特定模型。每个节点由一个或多个 NVIDIA H100 GPU 组成,并由 NVIDIA Triton Inference Server 实例管理。这些节点遵循严格的服务级别协议 (SLA) 运行,以实现成本效益和用户交互。
图 1. Perplexity AI 使用 NVIDIA HGX H100 4-GPU 和 8-GPU 系统大规模部署 LLM 生产,具有动态可扩展性、模型并行性和负载均衡功能
为了适应庞大的 Perplexity 用户群和全天波动的流量,Pod 托管在 Kubernetes 集群内。它们具有内置的前端调度程序,可根据负载和使用情况将流量路由到适当的 Pod,从而确保始终满足 SLA。
前端调度程序使用的调度算法会影响令牌间延迟,尤其是在改善最差性能百分位方面 (Figure 2)。Perplexity 团队不断寻找新的调度器优化,包括如何更好地考虑不同请求之间的序列长度变化。
图 2、在 16 QPS 的中等请求速率下,在内部 Perplexity 基准测试中,循环、最小请求的令牌间延迟分配,以及两种随机选择负载均衡策略的能力
Triton Inference Server 是 Perplexity 部署架构的关键组件。它跨各种后端提供优化模型、批量处理传入用户请求,并向调度程序提供 GPU 利用率指标。这支持根据推理请求的数量增加或减少部署和 GPU 的数量。
有关如何使用 Kubernetes 部署 NVIDIA Triton 的详细指南,请参阅
使用 Kubernetes 扩展 LLMs with NVIDIA Triton 和 NVIDIA TensorRT-LLM
。
满足严格的服务级别协议
为了为公司的各种用例定义合适的 SLA,Perplexity 的推理团队进行了全面的 A/B 测试,评估不同的配置及其对用户体验的影响。他们的目标是最大限度地提高 GPU 利用率,同时始终满足每个特定用例的目标 SLA。通过在满足目标 SLA 的同时改进 batching,推理服务成本得以优化。
对于较小的模型,例如嵌入实时检索中使用的 10 亿个参数以下的模型,重点在于尽可能降低延迟。这些通常对用户隐藏,并且是更广泛工作流程的一部分。因此,这些查询的配置通常具有较低的批量大小。鉴于这些模型的内存占用空间较小,该团队在 NVIDIA H100 GPU 上同时运行多个模型,以保持高资源利用率。
对于面向用户的模型 (例如 Llama 8B、70B 和 405B),这些模型对用户体验和部署成本的影响更大,因此该团队会进行更深入的性能分析,并评估关键指标,例如时间到第一个 token、每位用户每秒 token 数以及每百万次查询的成本。
为了在控制成本的同时优化性能,Perplexity 跨多个 GPU 并行部署这些模型。由于严格的服务水平协议 (SLA),该团队选择将张量并行性增加到 4 个和 8 个 GPU,他们发现这样可以在固定的 GPU 预算内降低对延迟非常敏感的请求的服务成本。数据或 pipeline 并行有助于在延迟敏感度较低的设置中更大限度地提高吞吐量。
Perplexity 团队现在将 TensorRT-LLM 与使用优化的 CUDA 内核构建的专有 LLM 运行时相结合,以更低的成本在严格的 SLA 中成功为基于 Llama 的模型提供服务。
图 3、在四个 NVIDIA Hopper GPU 上使用张量并行机制对 Llama 8B 模型进行分片,可将 Perplexity 每百万个令牌的相对成本降低高达 3 倍,以处理延迟敏感型请求
最终,Perplexity 推理团队决定托管模型,取决于他们能否以较低的成本为这些模型提供服务,同时仍满足严格的服务水平协议 (SLA),与使用第三方 LLM 提供商 API 相比。例如,该团队估计,与向第三方 LLM API 服务提供商发送相同的请求量相比,通过在云托管的 NVIDIA GPUs 上提供支持其 Related-Questions 功能的模型,他们每年能够节省约 $1 million。“Related-Questions” 功能为 Perplexity 用户提供后续问题建议,以便在搜索查询后进行更深入的研究。
图 4、与使用第三方 LLM API 服务相比,Perplexity 使用其部署架构在云托管的 NVIDIA GPU 上提供 Related-Questions AI 模型,每年可节省约 100 万美元
提供更高水平的性能
Perplexity 的推理团队对其路线图采用全面的全栈方法,不断优化和增强堆栈的每一层 (从应用和用例到为中间件和硬件加速器提供服务的推理)。
在推理服务中间件方面,该团队正在积极与 NVIDIA Triton 工程团队合作,部署
分解服务
,这是一项突破性技术,可将 LLM 工作流的预填充和解码推理阶段分离到单独的 NVIDIA GPU。此技术可显著提高整体系统吞吐量,同时满足 SLA,从而降低每个令牌的成本。此外,鉴于 Perplexity 的特定硬件资源需求,此技术还可让 Perplexity 在每个推理阶段灵活使用不同的 NVIDIA GPU 产品。
Perplexity 团队明白,优化软件堆栈只能在一定程度上推动性能提升。为了提供更高级别的性能,硬件创新至关重要。因此,他们迫切希望评估
NVIDIA Blackwell 平台
。
NVIDIA Blackwell 通过众多技术创新实现了
显著的性能飞跃
,包括支持 FP4 数据格式的第二代 Transformer Engine、支持更大 NVLink 域的第五代 NVLink 和 NVSwitch 等。总体而言,这些创新可将
万亿级参数 LLM
的推理性能提升 30 倍。
开始使用
NVIDIA Triton Inference Server
和
NVIDIA TensorRT
是 GitHub 上的开源项目。它们还可用作 Docker 容器,可从
NVIDIA NGC
中提取。它们是
NVIDIA AI Enterprise
的一部分,可提供企业级安全性、稳定性和支持。寻求快速实现价值的企业可以使用
NVIDIA NIM
,这是一套易于使用的微服务,可加速各种 AI 模型(包括开源社区和 NVIDIA AI Foundation 模型)的推理。
如需了解详情,请查看以下资源:
从 0 扩展到数百万:使用 TensorRT-LLM 扩展大型语言模型推理
Perplexity
借助 NVIDIA NeMo 增强 AI 驱动的搜索引擎的模型性能
借助 NVIDIA 在云端加速大语言模型推理 |
https://developer.nvidia.com/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/ | Llama 3.2 Full-Stack Optimizations Unlock High Performance on NVIDIA GPUs | Meta recently released its
Llama 3.2
series of vision language models (VLMs), which come in 11B parameter and 90B parameter variants. These models are multimodal, supporting both text and image inputs. In addition, Meta has launched text-only
small language model (SLM)
variants of Llama 3.2 with 1B and 3B parameters. NVIDIA has optimized the Llama 3.2 collection of models for great performance and cost-efficient serving across millions of GPUs worldwide – from our most powerful data center and cloud GPUs to local
NVIDIA RTX workstations
and even low-power edge devices with
NVIDIA Jetson
.
Llama 3.2 VLMs support long context lengths of up to 128K text tokens as well as a single image input at a resolution of 1120 x 1120 pixels. To enable low latency responses for great user experiences, while also providing high throughput for cost-efficient serving of these models, the NVIDIA platform is optimized at every layer of the technology stack.
Similarly, the Llama 3.2 SLMs have been optimized to run well on the millions of NVIDIA RTX PCs and workstations worldwide. They have also been quantized to allow for local deployment on edge devices with NVIDIA Jetson. For more information, see
Deploying Accelerated Llama 3.2 from the Edge to the Cloud
.
This post describes the full-stack optimizations that enable high throughput and low latency serving of Llama 3.2 models.
Accelerating Llama 3.2 AI inference throughput
The Llama 3.2 11B and Llama 3.2 90B models include a vision encoder with a text decoder. The encoder is optimized for high-performance inference using the
NVIDIA TensorRT
library and the text decoder is optimized using the
NVIDIA TensorRT-LLM
library.
The visual information from the vision encoder is fused into the Llama text decoder with a cross-attention mechanism that is supported in TensorRT-LLM. This enables the Llama 3.2 VLMs to efficiently generate text by taking into account visual reasoning and understanding in context with the text input.
TensorRT supports the vision encoder in the BF16 data format. TensorRT-LLM supports the text decoder in both FP16 and BF16 formats. The official recipe released by Meta uses a BF16 text decoder. This is also used for our baseline performance measurements. To boost performance further, NVIDIA developed a custom FP8
post-training quantization (PTQ)
recipe, leveraging the fourth-generation FP8 Tensor Cores that are part of the
NVIDIA Hopper architecture
.
This recipe, available through the
TensorRT Model Optimizer library
, enables higher Llama 3.2 throughput and lower latency while delivering the same accuracy across numerous benchmarks including
ScienceQA
,
OCRBench
,
TextVQA
, and
MMMU
. This means that developers can now run the model more cost-effectively.
Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using
NVIDIA NIM
microservices.
Delivering high throughput and low latency
Table 1 shows maximum throughput performance, representing offline use cases, across a range of input and output sequence lengths and single input image with maximum supported resolution of 1120 x 1120 pixels. Using a system based on the NVIDIA HGX H200 platform, we run the Llama 3.2 90B model on eight
NVIDIA H200 Tensor Core GPUs
, each with 141 GB of fast HBM3e memory, connected through NVLink and
NVLink Switch
, providing 900 GB/s of GPU-to-GPU bandwidth between the GPUs.
Maximum Throughput Performance –
Output Tokens/Second
Eight NVIDIA H200 Tensor Core GPUs
Input | Output Sequence Lengths | Image Size
8,000 | 2,000 | 1120×1120
20,000 | 2,000 | 1120×1120
60,000 | 2,000 | 1120×1120
BF16 Encoder with FP8 Decoder
2,646
1,417
480
Table 1. Maximum throughput performance with NVIDIA internal measurements
TensorRT optimized NIM for VLMs version 1.1.0 recipe. NIM server restarted between each ISL/OSL configuration to set an optimal KV cache split. Data measured on 11/14/2024. Output tokens/second is inclusive of time to generate the first token – tok/s =total generated tokens / total latency. DGX H200, normalized to 8 GPUs (by taking the TP profile maximizing throughput per GPU, and multiplying that value by 8 to simulate a replica-parallel setup), batch size tuned for maximum node throughput, TensorRT Model Optimizer version 0.21 (pre-release), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0.
Table 2 shows minimum latency performance using the same input and output sequence lengths and input image size.
Minimum Latency Performance – Output Tokens/Second
Eight NVIDIA H200 Tensor Core GPUs
Input | Output Sequence Lengths | Image Size
8,000 | 2,000 | 1120×1120
20,000 | 2,000 | 1120×1120
60,000 | 2,000 | 1120×1120
BF16 Encoder with FP8 Decoder
64
63
55
Table 2. Minimum latency performance with NVIDIA internal measurements
TensorRT optimized NIM for VLMs version 1.1.0 recipe. Data measured on 11/4/2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, TP8, batch size = 1, TensorRT Model Optimizer version 0.21 (prerelease), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0.
As these results show, NVIDIA H200 GPUs with
TensorRT-optimized
software delivers exceptional performance on Llama 3.2 90B VLM, in both latency-optimized and throughput-optimized scenarios.
Throughput performance of GeForce RTX 4090 with ONNX Runtime on NVIDIA RTX
For Windows deployments, NVIDIA has optimized Llama 3.2 SLMs to work efficiently using the
ONNX Runtime Generative API
, with a
DirectML
backend. Performance measurements are made using the
model checkpoint
available on the NGC catalog. The checkpoint is a quantized version of Llama 3.2 3B Instruct model and is quantized to
AWQ INT4
using
AutoAWQ
and converted to
ONNX
using ONNX Runtime Generative API.
Maximum Throughput Performance – Output Tokens/Second
NVIDIA GeForce RTX 4090 GPUs
Input | Output Sequence Lengths
100 | 100
2,000 | 100
4,000 | 100
Onnx-GenAI Runtime with DirectML, BS=1
253
203
165
Onnx-GenAI Runtime with DirectML, BS=4
615
374
251
Table 3. Maximum throughput performance with NVIDIA internal measurements
ONNX Runtime Generative API with DirectML data measured on 10/07/2024. Output tokens/second is inclusive of time to generate the first token – tok/s =total generated tokens / total latency. GeForce RTX 4090 GPU.
Better performance on Llama 3.2 across platforms
With the NVIDIA accelerated computing platform, you can build models and supercharge your applications with the most performant Llama 3.2 models on any platform—from the data center and cloud to local workstations. Enterprises seeking the fastest time to value can use
NVIDIA NIM
, part of the
NVIDIA AI Enterprise
software platform, which offers
NVIDIA TensorRT
optimized
inference
on Llama 3.2 and other models from NVIDIA and its partner ecosystem.
Acknowledgments
We would like to thank George Yuan, Alex Settle, and Chenjie Luo for their efforts in supporting this post. | https://developer.nvidia.com/zh-cn/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/ | Llama 3.2 全栈优化释放 NVIDIA GPU 的高性能 | Meta 最近发布了
Llama 3.2
系列视觉语言模型(VLM),其中包含 11B 参数和 90B 参数变体。这些模型是多模态模型,支持文本和图像输入。此外,Meta 还推出了 Llama 3.2 的纯文本
小语言模型(SLM)
变体,具有 1B 和 3B 参数。NVIDIA 已对 Llama 3.2 模型集合进行优化,以便在全球数百万个 GPU 上提供出色的性能和经济高效的服务 – 从我们功能强大的数据中心和云 GPU 到本地
NVIDIA RTX 工作站
,甚至是使用
NVIDIA Jetson
的低功耗边缘设备。
Llama 3.2 VLM 支持长达 128K 的文本标记,以及分辨率为 1120 x 1120 像素的单张图像输入。为了实现低延迟响应,提供出色的用户体验,同时提供高吞吐量以经济高效地提供这些模型,NVIDIA 平台在技术堆栈的每一层都进行了优化。
同样,Llama 3.2 SLM 已经过优化,可在全球数百万 NVIDIA RTX PC 和工作站上良好运行。它们还经过量化,允许在边缘设备上使用 NVIDIA Jetson 进行本地部署。有关更多信息,请参阅
将加速的 Llama 3.2 从边缘部署到云端
。
本文将介绍如何通过全栈优化实现 Llama 3.2 模型的高吞吐量和低延迟服务。
加速 Llama 3.2 AI 推理吞吐量
Llama 3.2 11B 和 Llama 3.2 90B 模型包括一个带有文本解码器的视觉编码器。编码器使用
NVIDIA TensorRT 库
针对高性能推理进行了优化,文本解码器使用
NVIDIA TensorRT-LLM 库
进行了优化。
通过 TensorRT-LLM 中支持的交叉注意力机制,视觉编码器的视觉信息融合到 Llama 文本解码器中。这使得 Llama 3.2 VLM 能够高效生成文本,在文本输入的上下文中考虑视觉推理和理解。
TensorRT 支持视觉编码器在 BF16 数据格式下运行。TensorRT-LLM 支持文本解码器在 FP16 和 BF16 格式下运行。Meta 发布的官方 recipe 使用了 BF16 文本解码器,这也用于我们的基准性能测量。为了进一步提高性能,NVIDIA 利用 NVIDIA Hopper 架构中的
第四代 FP8 Tensor Core
开发了自定义的 FP8 后训练量化(PTQ)recipe。
此方法通过
TensorRT Model Optimizer 库
提供,可实现更高的 Llama 3.2 吞吐量和更低的延迟,同时在众多基准测试(包括
ScienceQA
、
OCRBench
、
TextVQA
和
MMMU
)中提供相同的准确性。这意味着开发者现在可以更经济高效地运行模型。
TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用
NVIDIA NIM
微服务的生产就绪型部署获得。
提供高吞吐量和低延迟
表 1 显示了在一系列输入和输出序列长度以及单输入图像 (最大支持分辨率为 1120 x 1120 像素) 中的最大吞吐量性能 (代表离线用例)。我们使用基于 NVIDIA HGX H200 平台的系统,在 8 个
NVIDIA H200 Tensor Core GPU
上运行 Llama 3.2 90B 模型,每个 GPU 均具有 141 GB 的快速 HBM3e 内存,通过
NVLink 和 NVLink Switch
连接,可在 GPU 之间提供 900 GB/s 的 GPU 到 GPU 带宽。
最大吞吐量性能 –
每秒输出令牌数
八个 NVIDIA H200 Tensor Core GPU
输入|输出序列长度|图像大小
8000|2000|1120*1120
20000|2000|1120*1120
60000|2000|1120*1120
带 FP8 解码器的 BF16 编码器
2646
1417
480
表 1. 通过 NVIDIA 内部测量获得的最大吞吐量性能
TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。在每个 ISL/OSL 配置之间重新启动 NIM 服务器,以设置最佳 KV 缓存分割。数据测量时间:11 月 14 日,2024 年。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = total generated tokens / total latency。DGX H200,标准化为 8 个 GPUs (通过采用 TP 配置文件最大限度地提高每个 GPU 的吞吐量,并将该值乘以 8 以模拟副本并行设置),针对最大节点吞吐量调整了批量大小,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0。
表 2 显示了使用相同的输入和输出序列长度以及输入图像大小时的最小延迟性能。
最低延迟性能 – 输出令牌/秒
(
八个 NVIDIA H200 Tensor Core GPUs
)
输入|输出序列长度|图像大小
8000|2000|1120*1120
20000|2000|1120*1120
60000|2000|1120*1120
带 FP8 解码器的 BF16 编码器
64
63
55
表 2、NVIDIA 内部测量的最低延迟性能
TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。数据测量时间:2024 年 11 月 4 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 生成的 tokens 总数/总延迟。DGX H200、TP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0 以上。
这些结果表明,
搭载 TensorRT 优化软件的
NVIDIA H200 GPU 可在 Llama 3.2 90B VLM 上提供出色的性能,适用于延迟优化和吞吐量优化的场景。
采用 ONNX Runtime 的 GeForce RTX 4090 在 NVIDIA RTX 上的吞吐量性能
对于 Windows 部署,NVIDIA 已对 Llama 3.2 SLM 进行优化,以便使用带有
DirectML
后端的
ONNX
Runtime Generative API
高效工作。使用 NGC 目录上提供的
模型检查点
进行性能测量。检查点是 Llama 3.2 3B Instruct 模型的量化版本,使用
AutoAWQ
量化为
AWQ INT4
,并使用 ONNX Runtime Generative API 转换为 ONNX。
最大吞吐量性能 – 输出令牌/秒
NVIDIA GeForce RTX 4090 GPU
输入|输出序列长度
100|100
2000|100
4000|100
采用 DirectML 的 Onnx-GenAI 运行时,BS = 1
253
203
165
支持 DirectML 的 Onnx-GenAI 运行时,BS = 4
615
374
251
表 3、通过 NVIDIA 内部测量获得的最大吞吐量性能
使用在 2024 年 10 月 7 日测量的 DirectML 数据的 ONNX Runtime Generative API。输出 token/s 包含生成第一个 token 的时间 – tok/s = 总生成 token 数 / 总延迟。GeForce RTX 4090 GPU。
跨平台在 Llama 3.2 上获得更好的性能
借助 NVIDIA 加速计算平台,您可以在任何平台 (从数据中心、云到本地工作站) 上构建模型并使用性能超强的 Llama 3.2 模型增强您的应用。
NVIDIA NIM
是
NVIDIA AI Enterprise
软件平台的一部分,可为 Llama 3.2 和 NVIDIA 及其合作伙伴生态系统的其他模型提供经
NVIDIA TensorRT
优化的推理。
致谢
我们要感谢 George Yuan、Alex Settle 和 Chenjie Luo 为支持这篇博文而付出的努力。 |
https://developer.nvidia.com/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/ | Streamlining AI Inference Performance and Deployment with NVIDIA TensorRT-LLM Chunked Prefill | In this blog post, we take a closer look at chunked prefill, a feature of
NVIDIA TensorRT-LLM
that increases GPU utilization and simplifies the deployment experience for developers. This builds on our previous
post
discussing how advanced KV cache optimization features in TensorRT-LLM improve performance up to 5x in use cases that require system prefills.
Challenges with traditional prefill and decode inference approaches
When a user submits a request to a model, it goes through two distinct computational phases: prefill and decode. Each phase uses GPU compute resources differently.
During the prefill phase, the system processes all input tokens to compute the KV cache, which is then used to generate the first token of the output. This phase is computationally demanding and can effectively use a GPU’s vast parallel compute resources.
In the decode phase, the system generates output tokens individually, updating the intermediate states from the prefill phase with each new token. Since the heavy computational work for intermediate state calculations is done in the prefill phase, the decode phase primarily involves processing only the newly generated token. As a result, this phase is less computationally intensive.
TensorRT-LLM supports in-flight batching where multiple requests are handled in parallel, enabling prefill and decode stage processing simultaneously. While more efficient than phased batching, in which decode requests are paused to process prefill requests, this approach can introduce latency as the decode phases are delayed until the prefill requests are completed. The top portion of Figure 1 illustrates this concept.
Figure 1. A simplified representation of the chunked prefill process, showing how it can increase GPU utilization and accelerate query completion
Balancing prefill and decode phases with chunked prefill
With TensorRT-LLM chunked prefill, the tokens are divided into smaller units, or chunks, for faster processing. This prevents the prefill phase from becoming a bottleneck, enables more parallelization with decode phase tokens, and increases GPU utilization. The bottom portion of Figure 1 illustrates this concept.
Additionally, using TensorRT-LLM chunked prefill enables GPU systems to handle longer contexts and higher concurrency levels. Since memory usage depends on the number of tokens processed during each iteration, using prefill chunks decouples memory consumption from the context length of incoming requests. The system can then process larger context lengths without increasing memory demands.
Simplifying TensorRT-LLM engine creation with dynamic chunk sizing
Figure 1 shows how chunk size can impact user interactivity and overall system throughput, requiring a trade-off based on the desired user experience and available GPU resources. Using a large chunk size lowers the number of iterations required to process prefill sequences, reducing time to first token (TTFT).
However, it also increases the time taken to complete the decode phase of ongoing requests, increasing query completion time and reducing output tokens per second (TPS). Finding the right balance can sometimes be time-consuming for developers. TensorRT-LLM deploys dynamic chunk sizing, providing ideal recommendations for the user based on GPU utilization metrics.
An additional advantage of dynamic prefill chunk sizing is the simplification of the TensorRT-LLM engine build process. In the past, developers had to supply a maximum input sequence length from which activation buffers were computed during engine build time. To ensure the activation buffers could handle the most complex user queries, developers were required to set the maximum sequence length expected for their use case. This approach optimized GPU resources for outlier and worst-case scenarios, leading to inefficient memory usage.
With dynamic prefill chunk sizing developers no longer need to specify the maximum input length manually. Instead, activation buffer sizes are automatically determined by the chunk size, which is configured by TensorRT-LLM. This eliminates manual configuration and leads to more efficient memory usage during execution.
Getting started with TensorRT-LLM chunked prefills
Using TensorRT-LLM chunked prefill significantly improves both system performance and utilization. Breaking down the traditionally sequential prefill phase into smaller, more manageable chunks, enables better parallelization, with the decode phase, reducing bottlenecks and accelerating query completion.
The dynamic adjustment of chunk size also simplifies the TensorRT-LLM engine configuration process, eliminating the need for developers to manually set activation buffer sizes, which previously resulted in inefficient memory usage. Dynamic chunk sizing optimizes memory use and ensures that the system adapts seamlessly to varying LLM user demands.
Check out our
GitHub documentation
, to start using TensorRT-LLM chunked prefills. Learn more about
NVIDIA AI Inference solutions
and stay up-to-date with the latest
AI inference performance
updates. | https://developer.nvidia.com/zh-cn/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/ | 借助 NVIDIA TensorRT-LLM 分块预填充提高 AI 推理效率和简化部署 | 在本
博文
中,我们详细介绍了分块预填充,这是
NVIDIA TensorRT-LLM
的一项功能,可提高 GPU 利用率并简化开发者的部署体验。本文基于之前的博文,讨论了 TensorRT-LLM 中的高级 KV 缓存优化功能如何在需要系统预填充的用例中将性能提升高达 5 倍。
传统预填充和解码推理方法面临的挑战
当用户向模型提交请求时,它将经历两个不同的计算阶段:预填充和解码。每个阶段使用 GPU 计算资源的方式各不相同。
在预填充阶段,系统会处理所有输入令牌以计算 KV 缓存,然后使用该缓存生成输出的第一个令牌。此阶段的计算要求很高,可以有效利用 GPU 庞大的并行计算资源。
在解码阶段,系统会单独生成输出 tokens,使用每个新 tokens 更新预填充阶段的中间状态。由于中间状态计算的繁重计算工作是在预填充阶段完成的,因此解码阶段主要涉及仅处理新生成的 tokens。因此,此阶段的计算密集度较低。
TensorRT-LLM 支持动态批处理,允许并行处理多个请求,使得预填充和解码阶段可以同时进行。虽然这种方法比暂停解码请求以处理预填充请求的分阶段批处理更高效,但由于解码阶段延迟到预填充请求完成,因此可能会导致延迟。图 1 的顶部说明了这一概念。
图 1. 分块预填充过程的简化表示,展示了如何提高 GPU 利用率并加速查询完成。
使用分块预填充平衡预填充和解码阶段
借助 TensorRT-LLM 分块预填充,令牌被分成较小的单元或分块,以加快处理速度。这可以防止预填充阶段成为瓶颈,使解码阶段的令牌实现更多的并行化,并提高 GPU 的利用率。图 1 的底部部分说明了这一概念。
此外,使用 TensorRT-LLM 分块预填充使 GPU 系统能够处理更长的上下文和更高的并发级别。由于内存使用量取决于每次迭代期间处理的令牌数量,因此使用预填充块可将内存消耗量与传入请求的上下文长度解耦。然后,系统可以在不增加内存需求的情况下处理更大的上下文长度。
通过动态数据块大小调整简化 TensorRT-LLM 引擎创建
图 1 显示了块大小百分比对用户交互性和整体系统吞吐量的影响,这需要根据所需的用户体验和可用 GPU 资源进行权衡。使用大块大小可以减少处理预填充序列所需的迭代次数,从而缩短第一个令牌(TTFT)的时间。
但是,这也会增加完成正在进行的请求的解码阶段所需的时间,从而增加查询完成时间并降低每秒输出令牌数(TPS)。对于开发者而言,找到合适的平衡有时非常耗时。TensorRT-LLM 部署动态块大小,根据 GPU 利用率指标为用户提供理想建议。
动态预填充块大小调整的另一个优势是简化了 TensorRT-LLM 引擎构建流程。过去,开发者必须提供最大输入序列长度,以便在引擎构建期间计算激活缓冲区。为确保激活缓冲区能够处理最复杂的用户查询,开发者需要为其用例设置预期的最大序列长度。这种方法针对异常和最差情况优化了 GPU 资源,导致内存使用效率低下。
借助动态预填充块大小调整,开发者无需再手动指定最大输入长度。相反,激活缓冲区大小由块大小(由 TensorRT-LLM 配置)自动确定。这消除了手动配置,并提高了执行期间的内存使用效率。
TensorRT-LLM 分块预填充入门
使用 TensorRT-LLM 分块预填充可显著提高系统性能和利用率。将传统的连续预填充阶段分解为更小、更易于管理的块,可通过解码阶段实现更好的并行化,从而减少瓶颈并加速查询完成。
块大小的动态调整还简化了 TensorRT-LLM 引擎配置流程,无需开发者手动设置激活缓冲区大小,而这在之前会导致内存使用效率低下。动态块大小优化内存使用,并确保系统能够无缝适应不同 LLM 用户的需求。
查看
我们的 GitHub 文档
,开始使用 TensorRT-LLM 分块预填充。详细了解
NVIDIA AI 推理解决方案
,并随时了解最新的
AI 推理性能更新
。 |
https://developer.nvidia.com/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | 5x Faster Time to First Token with NVIDIA TensorRT-LLM KV Cache Early Reuse | In our previous
blog post
, we demonstrated how reusing the key-value (KV) cache by offloading it to CPU memory can accelerate time to first token (TTFT) by up to 14x on x86-based NVIDIA H100 Tensor Core GPUs and 28x on the NVIDIA GH200 Superchip. In this post, we shed light on KV cache reuse techniques and best practices that can drive even further TTFT speedups.
Introduction to KV cache
LLM models are rapidly being adopted for many tasks, including question-answering, and code generation. To generate a response, these models begin by converting the user’s prompt into tokens, which are then transformed into dense vectors. Extensive dot-product operations follow to mathematically model the relationships between the tokens and build a contextual understanding of the user input. The computational cost of generating this contextual understanding increases quadratically with the length of the input sequence.
This resource-intensive process generates keys and values, which are cached to avoid recomputation when generating subsequent tokens. Reusing the KV cache reduces the computational load and time needed to generate additional tokens—leading to a faster and more efficient user experience.
When reusing the KV cache, careful attention must be given to how long it remains in memory, which components to evict first when memory is full, and when it can be reused for new incoming prompts. Optimizing these factors can lead to incremental performance improvements in KV cache reuse. NVIDIA TensorRT-LLM offers three key features that specifically address these areas.
Early KV cache reuse
Traditional reuse algorithms require the entire KV cache computation to be completed before any portions of it can be reused with new user prompts. In scenarios such as enterprise chatbots, where system prompts—predefined instructions added to user queries—are essential to direct the LLM’s responses in line with enterprise guidelines, this method can be inefficient.
When a surge of users interacts with the chatbot simultaneously, each user would require a separate computation of the system prompt KV cache. With TensorRT-LLM, we can instead reuse the system prompt as it is being generated in real time, enabling it to be shared across all users during the burst, rather than recalculating it for each user. This can significantly accelerate inference for use cases requiring system prompts by up to 5x.
Figure 1. TensorRT-LLM KV cache reuse can speed up TTFT by up to 5x
Flexible KV cache block sizing
In reuse implementations, only entire cache memory blocks can be allocated for reuse. For example, if the cache memory block size is 64 tokens and KV cache is 80 tokens, only 64 tokens will be stored for reuse, while the remaining 16 tokens will need to be recomputed. However, if the memory block size is reduced to 16 tokens, all 64 tokens can be stored across five memory blocks, eliminating the need for re-computation.
This effect is most pronounced when the input sequences are short. For long input sequences, larger blocks can be more beneficial. As is clear, the more granular the control you have over the KV cache, the better you can optimize it for your specific use case.
TensorRT-LLM provides fine-grained control over KV cache memory blocks, giving developers the ability to chop them into smaller blocks between 64 to 2 tokens. This optimizes the usage of allocated memory, increases reuse rates, and improves TTFT. When running LLAMA70B on NVIDIA H100 Tensor Core GPUs, we can speed up TTFT up to 7% in multi-user environments by reducing KV cache block size from 64 tokens to 8 tokens.
Figure 2. Impact of changing KV cache block size on inference speedup
Efficient KV cache eviction protocols
Partitioning the KV cache into smaller blocks and evicting unused ones can be effective for memory optimization, but it introduces dependency complexities. When a specific block is used to generate a response, and the result is stored as a new block, it can form a tree-like structure of dependencies.
Over time, the counters tracking the usage of the source blocks (the branches) may become stale as the dependent nodes (the leaves) are reused. Evicting the source block then requires the eviction of all dependent blocks, which would require recalculation of the KV cache for new user prompts, increasing TTFT.
To address this challenge, TensorRT-LLM includes intelligent eviction algorithms that can trace the dependent nodes from their source nodes and evict dependent nodes first, even if they have more recent reuse counters. This ensures more efficient memory management while preventing unnecessary evictions of dependent blocks.
Figure 3. A logical representation of KV cache eviction algorithm show how it can reduce the number of evicted blocks, increasing the likelihood of reuse
Getting started with TensorRT-LLM KV cache reuse
Generating KV cache during inference requires a lot of compute and memory resources. Using it efficiently is critical to improving model response, accelerating inference, and increasing system throughput. TensorRT-LLM provides advanced reuse features for developers looking to further optimize TTFT response times for peak performance.
To start using TensorRT-LLM KV cache reuse check out our
GitHub documentation
. | https://developer.nvidia.com/zh-cn/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | NVIDIA TensorRT-LLM KV 缓存早期重用实现首个令牌速度 5 倍提升 | 在上一篇
博客文章
中,我们展示了通过将键值 (KV) 缓存卸载到 CPU 内存来重复使用该缓存如何在基于 x86 的 NVIDIA H100 Tensor Core GPU 上将首次生成令牌 (Time To First Token, TTFT) 的速度提升高达 14 倍,在 NVIDIA GH200 Superchip 上可将其提升高达 28 倍。在本文中,我们介绍了可以进一步推动 TTFT 加速的 KV 缓存重复使用技术和最佳实践。
KV 缓存简介
许多任务(包括问答和代码生成)都迅速采用了 LLM 模型。为了生成响应,这些模型首先将用户的提示符转换为 tokens,然后将其转换为 dense vectors。随后会进行大量的 dot-product operations,以数学方式对 tokens 之间的关系建模,并构建对用户输入的上下文理解。生成这种上下文理解的计算成本会随着输入序列的长度而二次增加。
此资源密集型流程会生成键和值,这些键和值会被缓存,以避免在生成后续 token 时进行重新计算。重复使用 KV 缓存可减少生成额外 token 所需的计算负载和时间,实现更快、更高效的用户体验。
重复使用 KV 缓存时,必须仔细注意其在内存中的剩余时间、在内存已满时首先移除哪些组件,以及何时可重复用于新的传入提示。优化这些因素可能会带来 KV 缓存重复使用的性能增益。NVIDIA TensorRT-LLM 提供了三个关键功能,专门解决了这些问题。
早期 KV 缓存复用
传统的复用算法需要先完成整个 KV 缓存的计算,然后才能根据新用户的提示重复使用其任何部分。在企业聊天机器人等场景中,系统提示(添加到用户查询中的预定义指令)对引导 LLM 根据企业指南做出响应至关重要,这种方法可能效率低下。
当大量用户同时与聊天机器人交互时,每个用户都需要单独计算系统提示 KV 缓存。借助 TensorRT-LLM,我们可以重新使用实时生成的系统提示,使其在突发期间在所有用户之间共享,而不是为每个用户重新计算。这可以显著加快推理速度,最高可达 5 倍。
图 1.TensorRT-LLM KV 缓存重复利用可将 TTFT 速度提升高达 5 倍。
灵活调整 KV 缓存块大小
在重用实现中,只能分配整个缓存内存块以供重用。例如,如果缓存内存块大小为 64 个令牌,KV 缓存为 80 个令牌,则仅存储 64 个令牌以供重用,而其余 16 个令牌则需要重新计算。但是,如果将内存块大小减少到 16 个令牌,则可在五个内存块中存储全部 64 个令牌,从而消除重新计算的需求。
当输入序列较短时,这种效果最为明显。对于长输入序列,更大的块会更有好处。显然,您对 KV 缓存的控制越细致,就越能针对特定用例进行优化。
TensorRT-LLM 可对 KV 缓存内存块进行细粒度控制,使开发者能够将这些块分割成 64 到 2 个令牌之间的较小块。这可优化已分配内存的使用,提高复用率并改善 TTFT。在 NVIDIA H100 Tensor Core GPU 上运行 LLAMA70B 时,我们可以通过将 KV 缓存块的大小从 64 个令牌减少到 8 个令牌,在多用户环境中将 TTFT 速度提升高达 7%。
图 2. 更改 KV 缓存块大小对推理加速的影响
高效的 KV 缓存回收协议
将 KV 缓存分割成较小的块并移除未使用的块对于内存优化可能有效,但这会引入依赖项复杂性。当使用特定块生成响应并将结果存储为新块时,它会形成树状的依赖结构。
随着时间的推移,跟踪源块(分支)使用情况的计数器可能会随着依赖节点(叶子)的重复使用而过时。然后,移除源块需要移除所有依赖块,这将需要重新计算 KV 缓存以用于新用户提示,从而增加 TTFT。
为应对这一挑战,TensorRT-LLM 包含智能驱逐算法,该算法可以追踪源节点中的依赖节点,并首先驱逐依赖节点,即使这些节点具有较新的重复使用计数器。这可以确保更高效的内存管理,同时防止不必要的依赖块驱逐。
图 3.KV 缓存驱逐算法的逻辑表示展示了如何减少被驱逐的块的数量,从而增加重复使用的可能性。
TensorRT-LLM KV 缓存重用入门
在推理过程中生成 KV 缓存需要大量的计算和内存资源。高效使用 KV 缓存对于改善模型响应、加速推理和提高系统吞吐量至关重要。TensorRT-LLM 为希望进一步优化 TTFT 响应时间以实现峰值性能的开发者提供了高级复用功能。
要开始使用 TensorRT-LLM KV 缓存,请参阅
我们的 GitHub 文档
。 |
https://developer.nvidia.com/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/ | 3x Faster AllReduce with NVSwitch and TensorRT-LLM MultiShot | Deploying generative AI workloads in production environments where user numbers can fluctuate from hundreds to hundreds of thousands – and where input sequence lengths differ with each request – poses unique challenges. To achieve low latency inference in these environments, multi-GPU setups are a must – irrespective of the GPU generation or its memory capacity. To enhance inference performance in production-grade setups, we’re excited to introduce TensorRT-LLM Multi-shot, a new multi-GPU communication protocol that leverages the
NVIDIA NVLink Switch
to significantly increase communication speeds by up to 3x. This blog outlines this new feature and how it helps developers and solution architects address the limitations of traditional multi-GPU communication methods.
Challenges with traditional AllReduce algorithms
For low latency inference, multi-GPU is critical, regardless of the memory capacity of a single GPU. However, at low concurrency, the time GPUs spend exchanging data can outweigh the time spent on compute. For optimal performance, an efficient
AllReduce
operation – a collective operation that combines partial results from each participating GPU – is critical.
Traditional approaches use ring-based algorithms, where the partial values are passed around a ring of GPUs. Each GPU contributes its values and passes the result to its neighbor. This process is repeated 2N-2 times where N is the number of GPUs working together, and by the end of the process, every GPU has the same summed value. A second pass over the ring is required to propagate summed values from the last GPU to the rest.
The Ring approach makes efficient use of available GPU-to-GPU bandwidth per communication step, but as the number of GPUs increases, so does the number of steps. This increases latency, as all GPUs need to stay synchronized at every step of the ring. These synchronization latencies add significant latency overhead and can make it difficult to meet more stringent latency targets.
The Ring AllReduce algorithm is described below:
Ring Algorithm: GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1)
2N-2 steps, with full tensor send/recv each step
Latency: 2N-2 communication steps. (N: # of GPUs)
Traffic: (4N-4)/N tensor bytes of send/recvs
Addressing AllReduce communication challenges with TensorRT-LLM MultiShot
TensorRT-LLM MultiShot is a new algorithm that reduces the O(N) latency of Ring AllReduce by up to 3x leveraging multicast in NVSwitch. Multicast is a hardware acceleration feature in NVSwitch which allows a GPU to send data once and have that data sent simultaneously to all other GPUs, minimizing the number of communication steps to two inter-GPU synchronizations while remaining bandwidth efficient. Without NVSwitch, this would take N times the communication bandwidth.
TensorRT-LLM Multishot separates the AllReduce into a ReduceScatter operation followed by an AllGather operation (for more detailed descriptions of collective operations, see this
documentation
).
Each GPU is responsible for accumulating only a portion of the result tensor.
The first step (or “shot”) involves each GPU sending the different slices of the tensor to the respective GPU responsible for accumulating that slice of the tensor.
After accumulating locally, each GPU now has the correct sum accumulators for its unique slice of the output.
In the second step (or “shot”), each GPU broadcasts the result slice to all other GPUs using the NVSwitch multicast capability. This minimizes the per GPU bandwidth required as the NVSwitch itself performs data amplification; each GPU sends 1/N the data and receives the full result tensor in one step.
The entire operation only takes two communication steps, regardless of the number GPUs performing tensor parallel inference.
TensorRT-LLM MultiShot Algorithm: GPU_N sends slices, Compute slice sum, broadcast result in single multicast operation.
Latency: 2 communication steps (regardless of number of GPUs)
Traffic: 2 tensor bytes of send/recv (regardless of number of GPUs)
Why this matters
Since this algorithm requires only two communication steps rather than 2N-2 (where N is the number of GPUs), MultiShot can be nearly 3x faster than Ring AllReduce. The benefits of this algorithm are particularly evident with smaller message sizes and high parallelism – the scenario needed when minimum latency is required for a great user experience.
This can be used to either reduce minimum latency, or increase throughput at a given latency. In scenarios with more aggressive latency thresholds, this can lead to super-linear scaling with the number of GPUs.
Figure 1. With TensorRT-LLM MultiShot, AllReduce latency is reduced by up to 3x.
Achieving optimal inference performance requires careful workload analysis and a deep understanding of performance bottlenecks. By gaining that understanding – both through internal engineering work as well as through close collaboration with external developers and researchers – we can quickly and frequently optimize many aspects of our platform to deliver great performance for users.
As we continue to identify and implement new performance optimizations – some may be extensive, others might be narrower in scope – we will be providing regular updates on these optimizations, providing both technical motivation and quantified benefits. | https://developer.nvidia.com/zh-cn/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/ | NVSwitch 和 TensorRT-LLM MultiShot 共同加速 AllReduce 速度达 3 倍 | 在用户数量可能在数百到数十万之间波动,且输入序列长度随每个请求而变化的生产环境中,部署生成式 AI 工作负载会面临独特的挑战。要在这些环境中实现低延迟推理,无论 GPU 生成方式或显存容量如何,多 GPU 设置都是必需的。为了提高生产级设置中的推理性能,我们很高兴推出 TensorRT-LLM Multi-shot,这是一种新的多 GPU 通信协议,利用
NVIDIA NVLink Switch
可将通信速度大幅提升高达 3 倍。本博客概述了这一新功能,以及它如何帮助开发者和解决方案架构师克服传统多 GPU 通信方法的限制。
传统 AllReduce 算法面临的挑战
对于低延迟推理,无论单个 GPU 的显存容量如何,多 GPU 都至关重要。但是,在低并发情况下,GPU 花在交换数据上的时间可能超过花在计算上的时间。为了获得最佳性能,
高效的 AllReduce 操作
–结合每个参与其中的 GPU 的部分结果的集合操作–至关重要。
传统方法使用基于环的算法,其中部分值在环形的 GPU 之间传递。每个 GPU 都贡献其值并将结果传递给其邻居。该过程重复 2N-2 次,其中 N 是协同工作的 GPU 数量,在该过程结束时,每个 GPU 都具有相同的总和值。需要对环进行第二次传递,以将总和值从最后一个 GPU 传播到其他 GPU。
Ring 方法可在每个通信步骤中高效利用可用的 GPU 到 GPU 带宽,但随着 GPU 数量的增加,步骤数也会增加。这会增加延迟,因为所有 GPU 都需要在 Ring 的每个步骤中保持同步。这些同步延迟会显著增加延迟开销,并可能导致难以满足更严格的延迟目标。
Ring AllReduce 算法描述如下:
环形算法:GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1)
2N-2 步长,每步具有完整的 Tensor send/recv
延迟:2N-2 通信步骤。(N:GPU 的数量)
流量:(4N-4)/N 张量的 send/recv 字节数
使用 TensorRT-LLM MultiShot 应对 AllReduce 通信挑战
TensorRT-LLM MultiShot 是一种新算法,可利用 NVSwitch 中的组播,将 Ring AllReduce 的 O(N) 延迟最多降低 3 倍。组播是 NVSwitch 中的硬件加速功能,允许一个 GPU 发送数据一次,并将该数据同时发送到所有其他 GPU,从而将通信步骤的数量减少到两个 GPU 间的同步,同时保持带宽效率。如果没有 NVSwitch,这将占用 N 倍的通信带宽。
TensorRT-LLM Multishot 将 AllReduce 分离为 ReduceScatter 操作,然后是 AllGather 操作(有关集合操作的更多详细说明,请参阅
此文档
)。
每个 GPU 仅负责累积结果张量的一部分。
第一步(或“shot”)涉及每个 GPU 将张量的不同切片发送到负责累积该张量切片的相应 GPU。
在本地累加后,每个 GPU 现在都有正确的和累加器,用于其独特的输出切片。
在第二步 (或“shot”) 中,每个 GPU 使用 NVSwitch 组播功能将结果切片广播到所有其他 GPU。这可最大限度地减少 NVSwitch 本身执行数据放大所需的每个 GPU 带宽;每个 GPU 一步发送 1/N 数据并接收完整的结果张量。
无论执行张量并行推理的 GPU 数量如何,整个操作仅需两次通信步骤。
TensorRT-LLM MultiShot 算法:GPU_N 发送切片、计算切片和、在单个组播运算中广播结果。
延迟:2 个通信步骤(与 GPU 数量无关)
流量:2 张量字节的 send/recv(与 GPU 数量无关)
为何如此重要
由于此算法只需要两个通信步骤,而不是 2N-2 (其中 N 表示 GPU 数量),因此 MultiShot 的速度几乎是 Ring AllReduce 的 3 倍。这种算法的优势在消息大小较小且并行度高的情况下尤为明显,而这正是需要最低延迟以获得出色的用户体验的场景。
这可用于降低最小延迟,或在给定延迟下提高吞吐量。在具有更激进的延迟阈值的场景中,这可能会导致 GPU 数量的超线性扩展。
图 1. 借助 TensorRT-LLM MultiShot,AllReduce 延迟最多可降低 3 倍。
实现最佳推理性能需要仔细的工作负载分析和对性能瓶颈的深入了解。通过内部工程工作以及与外部开发者和研究人员的密切合作,我们可以快速、频繁地优化平台的许多方面,为用户提供出色的性能。
随着我们继续识别和实施新的性能优化(一些可能是广泛的,另一些可能范围较窄),我们将定期提供有关这些优化的更新,提供技术动机和量化效益。 |
https://developer.nvidia.com/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/ | Accelerating Scientific Literature Reviews with NVIDIA NIM Microservices for LLMs | A well-crafted systematic review is often the initial step for researchers exploring a scientific field. For scientists new to this field, it provides a structured overview of the domain. For experts, it refines their understanding and sparks new ideas. In 2024 alone, 218,650 review articles were indexed in the
Web of Science
database, highlighting the importance of these resources in research.
Completing a systematic review significantly enhances a researcher’s knowledge base and their academic impact. However, traditional review writing requires collecting, reading, and summarizing large volumes of academic articles on a specific topic. Due to the time-consuming nature of this manual exercise, the scope of processed literature is often confined to dozens or a few hundred articles. Interdisciplinary content—frequently outside the researcher’s area of expertise—adds another layer of complexity.
These challenges make it increasingly difficult to create comprehensive, reliable, and impactful systematic reviews.
The advent of large language models (LLMs) offers a groundbreaking solution, enabling the rapid extraction and synthesis of information from extensive literature. Participating in the
Generative AI Codefest Australia
provided a unique opportunity to explore this idea with support from NVIDIA AI experts to leverage
NVIDIA NIM microservices
for accelerating literature reviews. This enabled the rapid testing and fine-tuning of several state-of-the-art LLMs for our literature analysis process.
Testing the potential of LLMs for processing papers
As a research group specializing in physiological ecology within the
ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF)
, we embarked on writing a review of the literature on the global responses of non-vascular plants, such as moss or lichen, to wind.
However, we quickly faced a challenge: many relevant articles on wind-plant interactions failed to explicitly mention these key words in their titles or abstracts, which are typically used as primary filters during literature screening. A comprehensive analysis of the topic required manually reading the full text of each article—a highly time-consuming process.
We decided to explore the potential of using LLMs to extract content specifically related to wind-plant interactions from the articles. To achieve this, we implemented a simple Q&A application based on the
LlaMa 3.1 8B Instruct NIM microservice
(Figure 1). This enabled us to get an initial prototype quickly.
This first prototype, processing the papers sequentially, was extremely useful to craft and optimize the prompts to extract key information from each article.
Figure 1. Initial simple paper processing pipeline using an LLM NIM microservice at its core
To validate the accuracy of extracted information, we initially manually validated the results. When no significant errors were found in the test dataset, we identified opportunities to further enhance the efficiency of key information extraction using LLMs (Figure 2). These include converting the papers from a PDF format to structured JSON; extracting images, tables, and charts; and using parallel processing to speed up the processing of papers.
Figure 2. Enhanced paper processing pipeline incorporating NV-ingest and parallel processing of papers
Enhancing the performance of LLMs for more efficient information extraction
By using NVIDIA NIM microservices for LLMs and
nv-ingest
, we deployed LLMs and a data ingestion pipeline in our local environment with eight NVIDIA A100 80-GB GPUs. We also fine-tuned the models using low-rank adaptation (LoRA) to improve the accuracy of information extraction from the papers.
We compiled a dataset of over 2K scientific articles related to the targeted research domain, sourced from the
Web of Science
and
Scopus
databases. Over a week during Generative AI Codefest, we focused on experimenting with various strategies to optimize the efficiency and accuracy of key information extraction from these articles.
Best-performing model
To determine the best-performing model, we tested a range of instruction-based and general-purpose LLMs from the NVIDIA API Catalog on a set of randomly selected articles. Each model was assessed for its accuracy and comprehensiveness in information extraction.
Ultimately, we determined that Llama-3.1-8B-instruct was the most suitable for our needs.
Processing speed
We developed a Q&A module using streamlit to answer user-defined research-specific questions.
To further improve processing speed, we implemented parallel processing of the prompts sent to the LLM engine and used KV-caching, which significantly accelerated the computation time by a factor of 6x when using 16 threads.
Extraction content types
We used nv-ingest to extract content from the original PDFs, including text, figures, tables, and charts, into structured JSON files. This enabled information extraction beyond text content, enabling a more comprehensive context for answering the questions.
Using JSON files instead of the original PDF files during inference also had a significant effect on lowering the processing time, by an additional factor of 4.25x.
Results
Thanks to these improvements, we significantly reduced the time required to extract information from our database of papers, with a total speedup of 25.25x compared to our initial implementation.
Processing the entirety of our database now takes less than 30 minutes using two A100 80-GB GPUs and 16 threads.
Compared to the traditional approach of manually reading and analyzing an entire article, which typically takes about one hour, this optimized workflow achieved a time savings of over 99% (Figure 3).
Figure 3. Required time for information extraction and categorization
In addition to information extraction, we also investigated automated article classification. By fine-tuning Llama-3.1-8b-Instruct with a LoRA adapter on a sample of papers manually annotated, we successfully automated the classification process, demonstrating its effectiveness in organizing complex datasets of scientific papers.
The results indicated that each article required only 2 seconds for classification, compared to the 300+ seconds required on average for a manual classification by an experienced reader (Figure 3).
Future directions
We’re currently refining our workflow to further accelerate the processing. We’re also improving our user interface to provide easy access to more locally deployed LLMs and enhance accessibility by other researchers (Figure 4).
We plan to implement the
NVIDIA AI Blueprint for multimodal PDF data extraction
to identify the most relevant articles for each research question and interact with those papers.
Beyond technical improvements, we are aiming to organize extracted key information for each question and generate visualizations (such as maps with location of the experiments mentioned in the papers) to accelerate further the writing of the systematic review.
Figure 4. Prototype interface undergoing in-house testing at SAEF
Summary
Our work at the Generative AI Codefest demonstrated the transformative potential of AI in accelerating systematic literature reviews. With NVIDIA NIM, we quickly moved from an idea to a working solution that significantly improves the process of information extraction from scientific papers.
This experience highlights how AI can streamline research workflows, enabling faster and more comprehensive insights. LLMs have the potential to facilitate interdisciplinary research, empowering scientists to explore complex, multi-domain research fields more effectively.
Moving forward, we aim to refine these methods and tools, ensuring that they are accessible and scalable for future research across diverse topics. | https://developer.nvidia.com/zh-cn/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/ | 借助适用于 LLM 的 NVIDIA NIM 微服务加速科学文献审查 | 精心设计的系统审查通常是研究人员探索科学领域的第一步。对于刚刚接触该领域的科学家,它提供了该领域的结构化概述。对于专家而言,它可以细化他们的理解并激发新的想法。在 2024 年,
Web of Science
数据库收录了 218,650 篇评测文章,凸显了这些资源在研究中的重要性。
完成系统审查可显著增强研究人员的知识库及其学术影响。然而,传统的评论文章需要收集、阅读和总结有关特定主题的大量学术文章。由于这项人工练习非常耗时,处理过的文献的范围通常局限于几十篇或几百篇文章。跨学科内容 (通常超出研究人员的专业领域) 又增加了一层复杂性。
这些挑战使得创建全面、可靠且有影响力的系统审查变得越来越困难。
大语言模型 (LLMs) 的出现提供了一种突破性的解决方案,能够从大量文献中快速提取和合成信息。参加
Generative AI Codefest Australia
提供了一个独特的机会,让我们在 NVIDIA AI 专家的支持下探索这一想法,利用
NVIDIA NIM
微服务加速文献审查。这为我们的文献分析流程快速测试和微调了多个先进的 LLMs。
测试 LLMs 在处理论文方面的潜力
作为 ARC 特殊研究计划
(ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF))
中的一个专门研究生理生态学的研究小组,我们着手撰写了一篇关于苔藓或地衣等非血管性植物对风的全球反应的文献回顾。
然而,我们很快就面临了一个挑战:许多关于风力电厂相互作用的相关文章都未能在标题或摘要中明确提及这些关键词,而在文献筛选过程中,这些关键词通常用作主要的过滤器。要对主题进行全面分析,需要手动阅读每篇文章的完整文本,这一过程非常耗时。
我们决定探索使用 LLMs 从文章中提取与风电场交互特别相关的内容的可能性。为此,我们
基于 LlaMa 3.1 8B Instruct NIM 微服务
实施了一个简单的问答应用(图 1)。这使我们能够快速获得初始原型。
第一个原型按顺序处理论文,在制作和优化提示以从每篇文章中提取关键信息方面非常有用。
图 1、以 LLM NIM 微服务为核心的初始简单论文处理流程
为了验证提取信息的准确性,我们首先手动验证结果。当测试数据集中未发现重大错误时,我们确定了使用 LLM 进一步提高关键信息提取效率的机会 (图 2)。其中包括将论文从 PDF 格式转换为结构化 JSON;提取图像、表格和图表;以及使用并行处理来加速论文处理速度。
图 2、增强型论文处理流程,整合 NV-ingest 和论文的并行处理
增强 LLMs 的性能,实现更高效的信息提取
通过将 NVIDIA NIM 微服务用于 LLM 和
nv-ingest
,我们在本地环境中部署了 LLM 和数据提取管道,并配备了 8 个 NVIDIA A100 80-GB GPU。我们还使用低等级适应性(LoRA)对模型进行了微调,以提高从论文中提取信息的准确性。
我们编译了一个数据集,其中包含与目标研究领域相关的 2000 多篇科学文章,这些文章均源自
Web of Science
和
Scopus
数据库。在 Generative AI Codefest 期间,我们在一周内专注于尝试各种策略,以优化从这些文章中提取关键信息的效率和准确性。
性能最佳的模型
为了确定性能最佳的模型,我们在一组随机选择的文章中测试了 NVIDIA API Catalog 中的一系列基于指令的和通用的 LLM。评估了每个模型在信息提取方面的准确性和全面性。
最终,我们确定 Llama-3.1-8B-instruct 最适合我们的需求。
处理速度
我们开发了一个问答模块,使用 Streamlit 回答用户定义的研究特定问题。
为进一步提高处理速度,我们对发送到 LLM 引擎的提示实施了并行处理,并使用了 KV 缓存,这使得使用 16 个线程时的计算时间显著缩短了 6 倍。
提取内容类型
我们使用 nv-ingest 将原始 PDF 中的内容 (包括文本、图形、表格和图表) 提取到结构化 JSON 文件中。这使得信息提取功能超越了文本内容,从而为回答问题提供更全面的环境。
在推理过程中使用 JSON 文件而非原始 PDF 文件也对将处理时间额外缩短 4.25 倍产生了重大影响。
成果
得益于这些改进,我们显著缩短了从论文数据库中提取信息所需的时间,与初始实施相比,总速度提高了 25.25 倍。
现在,使用两个 A100 80-GB GPU 和 16 个线程处理整个数据库只需不到 30 分钟。
与手动阅读和分析整篇文章的传统方法 (通常需要大约 1 小时) 相比,这种经过优化的工作流程可节省 99% 以上的时间 (图 3)。
图 3. 信息提取和分类所需的时间
除了信息提取,我们还研究了自动化文章分类。通过在手动标注的论文样本上使用 LoRA 适配器微调 Llama-3.1-8b-Instruct,我们成功实现了分类过程的自动化,证明了其在组织复杂的科学论文数据集方面的有效性。
结果表明,与经验丰富的读者手动分类平均所需的 300 多秒相比,每篇文章只需 2 秒即可完成分类 (图 3)。
未来方向
我们目前正在改进工作流程,以进一步加速处理。我们还在改进用户界面,以便轻松访问更多本地部署的 LLM,并增强其他研究人员的可访问性(图 4)。
我们计划实施用于多模态 PDF 数据提取的 NVIDIA AI Blueprint,以确定与每个研究问题最相关的文章,并与这些论文进行交互。
除了技术改进,我们的目标是组织提取的每个问题的关键信息,并生成可视化效果 (例如,包含论文中提到的实验位置的地图),以进一步加速系统审查的编写。
图 4、原型接口正在 SAEF 进行内部测试
总结
我们在生成式 AI Codefest 上的工作展示了 AI 在加速系统文献审查方面的变革潜力。借助 NVIDIA NIM,我们迅速从想法转变为可行的解决方案,从而显著改善从科学论文中提取信息的过程。
此体验凸显了 AI 如何简化研究工作流程,从而更快获得更全面的见解。LLMs 具有促进跨学科研究的潜力,可助力科学家更有效地探索复杂的多领域研究领域。
展望未来,我们的目标是改进这些方法和工具,确保其可访问性和可扩展性,以供未来跨各种主题的研究使用。 |
https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/ | Vision Language Model Prompt Engineering Guide for Image and Video Understanding | Vision language models
(VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the
generative AI
landscape by bringing visual understanding to
large language models
(LLMs) through the use of a
vision encoder
. These initial VLMs were limited in their abilities, only able to understand text and single image inputs.
Fast-forward a few years and VLMs are now capable of understanding multi-image and video inputs to perform advanced vision-language tasks such as visual question-answering (VQA), captioning, search, and summarization.
Figure 1. Suite of model customization tools in NVIDIA NeMo
VLM accuracy can be improved for specific use cases by tuning the prompts and model weights. Fine-tuning the model weights can be done with efficient techniques such as PEFT but still requires sufficient data and compute.
However,
prompt engineering
and in context learning can also be used to improve the output quality and is as simple as adjusting the text input to the model at runtime.
This post walks you through the evolution of VLMs and how to effectively prompt a VLM for single image, multi-image, and video understanding use cases. For more information about VLMs and visual AI agents, register for the upcoming
Vision for All: Unlocking Video Analytics with AI Agents
webinar.
Single-image understanding
From a single image, a VLM is capable of identifying, describing, classifying, and reasoning over what is in the image. Some VLMs have advanced capabilities such as providing bounding box coordinates for certain objects, and translating text in the image. This works best for use cases where an image needs to be classified, captioned or described in text.
Single-image capability can also be used to detect basic events occurring in a livestream. This can be accomplished by sampling one frame at a time from a livestream and providing it to the VLM along with a prompt describing what to look for. This works especially well when the types of events to detect are large, clearly visible, and slow-moving.
The VLM can also be prompted to output in a structured format such as JSON so that the response can easily be parsed and sent to a database or a notification service.
For example, we provided Llama-3.2-90b-vision-instruct with the following prompt and image (Figure 2).
Figure 2. Input image to the VLM for question and answering
Prompt:
Is there a fire truck? Is there a fire? Are there firefighters? Output the answer to each question in JSON format.
VLM response:
[ { "question": "Is there a fire truck?", "answer": "True" },
{ "question": "Is there a fire?", "answer": "True" },
{ "question": "Are there firefighters?", "answer": "True" }
]
For each of the questions, the VLM was able to accurately respond and output it in a structured format to be used for downstream tasks.
While this method enables the VLM to operate on video input by sampling one frame at a time, the VLM itself does not understand the full context of the video, limiting it to simple use cases. Sampling one frame at a time does not give it the temporal understanding needed to detect actions or trends that occur over time. This can be improved with multi-image and video VLMs.
Multi-image understanding
Many VLMs can accept multiple images as input allowing the model to compare, contrast, and learn from input images.
VLMs generally have good zero-shot capability across a wide range of use cases. However, when trying to obtain more precise answers from a VLM on domain-specific use cases, VLMs can struggle because they lack the full context required to accurately respond to the user prompts.
To better understand this, we used a VLM to estimate the stock level of products. This is an important retail use case as it could autonomously monitor and alert employees when products are running low in a store to avoid lost revenues from stock-outs.
We provided a single image of a table (Figure 3) that should be fully stocked with several snack products to
Cosmos Nemotron 34B
and asked it to estimate the stock level.
Figure 3. Single-image VLM input for retail stock level detection
Prompt:
Estimate the stock level of the snack table on a scale of 0–100.
VLM response:
The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd.
The model rates it as 80–90% stocked even though it’s quite clear to the human eye that the stock level is low. From the model’s perspective, it does not know what the table looked like beforehand when looking at a single image.
Using the Cosmos Nemotron 34B multi-image capability, we provided another image of the snack table at a 100% stock level (Figure 4), giving the model extra context to produce a more accurate estimate.
Figure 4. In-context learning input to VLM for retail stock level detection
Prompt:
First compare and contrast the stock level of the two images. Then generate an estimate for each image of the stock level on a scale of 0–100%.
VLM response:
The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%.
By providing the additional image as a reference, the VLM now estimates the stock level of the second image at 20–30%, which is significantly more accurate.
Multi-image input can also be used in more complex multimodal RAG pipelines where example images are dynamically added to the prompt based on the user’s input. Multi-image VLMs can also be used for basic video understanding by sampling a small set of frames from a video and providing it to the VLM.
However, many VLMs that accept multi-image input have limited context windows (10–20 frames) and may not have been trained on video data. To truly understand video input, we recommend using a VLM trained on video-specific datasets and a long context window.
Video understanding
The latest VLMs have full video understanding capabilities with large context lengths, frame compression and video training data. This enables VLMs to ingest longer videos and have improved temporal understanding: the ability to understand actions and trends over time.
The first example showed how to use a single frame from a livestream to detect a fire but that same method wouldn’t be able to tell you if the fire is growing or getting smaller because it lacks the temporal context.
A VLM with long context and video understanding processes many frames across time to understand if the fire is growing or not. Here are some examples from the Cosmos Nemotron model families.
Sequential visual understanding
VLMs with sequential understanding relies on processing frames in a sequence, enabling them to capture the progression of actions and events over time. Unlike multi-image models, which process a limited number of frames without explicit temporal connections, sequential video models incorporate temporal dynamics by analyzing frames in an orderly sequence.
This is helpful in situations where you want to capture the causality of events in a video, as it enables the model to track the progression of actions over time and establish cause-effect relationships.
The next few examples are based on the following video where a worker drops a box in a warehouse. We discuss a few prompts and responses.
Figure 5. Input warehouse video to the VLM for scene understanding
Prompt:
What happened in this video?
VLM response:
The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles.
The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment.
The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting.
Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility.
The model gave a detailed response with specifics about the environment such as aisle labels and lighting conditions but failed to correctly identify that the worker dropped a box. Now try to ask this question directly.
Prompt:
Did the worker drop any box?
VLM response:
Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it.
The VLM is now able to deduce that the worker might have dropped the box. This suggests that generic prompts might not capture all the nuances correctly unless you direct the VLM to look for something specific. Modify the prompt a little.
Prompt:
Elaborate on the visual and narrative elements of this warehouse video in detail. Highlight all anomalies or hazards if present.
VLM response:
The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor.
The model is now capturing all the important details of the scene along with important events, such as workers dropping a box. The detailed response is useful if you want to perform Q&A with the video later. VLMs with video understanding are also able to address directional questions, as they can understand actions over time.
Prompt:
From which direction did the worker enter the scene?
VLM response:
The worker entered the scene from the right side.
However, these models often lack precise temporal localization, which can limit their ability to pinpoint when specific events occur.
Prompt:
When did the worker drop the box?
VLM response:
The worker dropped the box in the second image.
Temporal localization
VLMs incorporating
Localization Interpretable Temporal Attention (LITA)
or similar temporal localization techniques elevate video understanding by explicitly learning when and where critical events occur. The models achieve this by introducing temporal attention mechanisms that focus on key segments of a video, aligning visual and textual modalities more effectively over time. This helps answer the “when” questions in the videos.
Prompt:
When did the worker drop the box?
VLM response:
0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground.
The optimal prompt format for a VLM depends on the model’s architecture and the nature of the caption pairs used during training. Different training datasets influence how a VLM interprets the prompt.
Conclusion
This post walked through how VLMs have evolved from supporting only single-image input to being capable of complex temporal reasoning on long video inputs. To get started with VLMs, visit
build.nvidia.com
and try out some of the prompts shown in this post. For technical questions, see the
Visual AI Agent forum
.
VLMs can be used to build various visual AI agents. Explore one of the sample vision AI workflows on the
/NVIDIA/metropolis-nim-workflows
GitHub repo with no GPU required. To build a video analytics AI agent, try the
AI blueprint for video search and summarization
.
For more information about VLMs and visual AI agents, register for the upcoming
Vision for All: Unlocking Video Analytics with AI Agents
webinar.
For more information about LLM prompting, see
An Introduction to Large Language Models: Prompt Engineering and P-Tuning
. | https://developer.nvidia.com/zh-cn/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/ | 图像和视频理解的视觉语言模型提示工程实践指南 | 视觉语言模型 (VLMs) 正在以极快的速度发展。2020 年,首批 VLMs 通过使用视觉编码器将视觉理解引入大语言模型 (LLMs) ,彻底改变了生成式 AI 格局。这些初始 VLMs 能力有限,只能理解文本和单张图像输入。
几年后,VLM 现在能够理解多图像和视频输入,以执行高级视觉语言任务,例如视觉问答 (VQA)、字幕、搜索和摘要。
图 1、NVIDIA NeMo 中的模型自定义工具套件
通过调整提示和模型权重,可以提高特定用例的 VLM 准确性。可以使用 PEFT 等高效技术微调模型权重,但仍需要足够的数据和计算。
但是,
prompt engineering
和上下文学习也可用于提高输出质量,并且与在运行时调整模型的文本输入一样简单。
本文将为您介绍 VLM 的演变,以及如何有效地提示 VLM 以用于了解单张图像、多张图像和视频的用例。如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的
Vision for All:Unlocking Video Analytics with AI Agents 在线研讨会
。
单图像理解
从单张图像来看,VLM 能够识别、描述、分类和推理图像中的内容。一些 VLM 具有高级功能,例如为某些对象提供边界框坐标,以及翻译图像中的文本。这最适合需要对图像进行分类、添加字幕或以文本进行描述的用例。
单图像功能还可用于检测直播中发生的基本事件。这可以通过以下方式实现:从直播中一次采样一帧,并将其提供给 VLM,同时提供描述要查找的内容的提示。当要检测的事件类型较大、清晰可见且移动缓慢时,此方法的效果尤为出色。
系统还可以提示 VLM 以 JSON 等结构化格式输出,以便轻松解析响应并将其发送至数据库或通知服务。
例如,我们为 Llama-3.2-90b-vision-instruct 提供了以下提示和图像(图 2)。
图 2、将图像输入 VLM 以进行问答
提示:是否有消防车?是否有火灾?有消防人员吗?输出每个问题的答案,以 JSON 格式。
VLM 响应:
[ { "question": "Is there a fire truck?", "answer": "True" },
{ "question": "Is there a fire?", "answer": "True" },
{ "question": "Are there firefighters?", "answer": "True" }
]
对于每个问题,VLM 能够准确回答并以结构化格式输出,以便用于下游任务。
虽然此方法允许 VLM 通过一次采样一帧来操作视频输入,但 VLM 本身无法理解视频的完整上下文,将其局限于简单的用例。一次对一帧进行采样无法为其提供检测随时间变化的动作或趋势所需的时间理解。这可以通过多图像和视频 VLM 得到改善。
多图像理解
许多 VLM 可以接受多张图像作为输入,允许模型比较、对比并从输入图像中学习。
VLM 通常在各种用例中都具有良好的零样本击能力。但是,在尝试从 VLM 获取有关特定领域用例的更精确答案时,VLM 可能会遇到困难,因为它们缺乏准确响应用户提示所需的完整上下文。
为了更好地理解这一点,我们使用 VLM 来估计产品的库存水平。这是一个重要的零售用例,因为它可以在商店中的产品运行不足时自动监控和提醒员工,以避免因缺货而损失的收入。
我们向
Cosmos Nemotron 34B
提供了一张表格 (图 3) 的单一图像,该表格应与多种零食产品一起充分库存,并要求其估算库存水平。
图 3. 用于零售库存水平检测的单图像 VLM 输入
提示:按 0 – 100 的比例估算零食表的库存量。
VLM 响应:
The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd.
该模型将其评为 80–90% 的库存,即使肉眼很清楚库存水平很低。从模型的角度来看,它不知道在查看单个图像之前表格的样子。
通过使用 Cosmos Nemotron 34B 多图像功能,我们提供了另一张 100%库存水平的零食表图像 (图 4),为模型提供了额外的背景信息,以便生成更准确的估计。
图 4. 为 VLM 提供情境学习输入,用于零售库存水平检测
提示:首先比较两张图像的库存水平。然后,按 0 – 100% 的比例对库存量的每张图像生成估计值。
VLM 响应:
The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%.
现在,通过提供额外的图像作为参考,VLM 可将第二张图像的库存量估算为 20–30%,准确度明显更高。
多图像输入还可用于更复杂的多模态 RAG 工作流,其中会根据用户的输入将示例图像动态添加到提示中。多图像 VLM 还可以通过从视频中采样一小部分帧并将其提供给 VLM 来用于基本的视频理解。
但是,许多接受多图像输入的 VLM 的上下文窗口有限(10–20 帧),并且可能未基于视频数据进行训练。为了真正理解视频输入,我们建议使用基于视频特定数据集训练的 VLM 和长上下文窗口。
视频理解
最新的 VLM 具有完整的视频理解功能,具有大上下文长度、帧压缩和视频训练数据。这使得 VLM 能够提取更长的视频,并提高时间理解能力:能够理解一段时间内的行动和趋势。
第一个示例展示了如何使用直播中的单帧来检测火灾,但由于缺乏时间背景,相同的方法无法告诉您火灾是在增大还是变小。
具有长上下文和视频理解的 VLM 会随着时间的推移处理多个帧,以了解火势是否在增长。以下是 Cosmos Nemotron 模型系列的一些示例。
顺序视觉理解
具有顺序理解的 VLMs 依赖于按序列处理帧,使其能够捕捉动作和事件随时间推移的进展。与多图像模型相比,它们在处理有限数量的帧时没有显式时间连接,而顺序视频模型则通过按顺序分析帧来整合时间动力学。
这在您想要在视频中捕捉事件因果关系的情况下非常有用,因为它使模型能够跟踪动作在一段时间内的进展情况,并建立因果关系。
接下来的几个示例基于以下视频,展示了一名工人在仓库中丢弃箱子的过程。我们将讨论一些提示和响应。
图 5、将仓库视频输入到 VLM 以进行场景理解
提示:本视频中发生了什么?
VLM 响应:
The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles.
The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment.
The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting.
Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility.
该模型提供了详细的环境细节 (例如通道标签和照明条件),但无法正确识别工人掉落的箱子。现在,尝试直接问这个问题。Prompt:工作人员是否丢弃了任何箱子?
VLM 响应:
Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it.
现在,VLM 可以推断出工作者可能丢弃了箱子。这表明,除非您指示 VLM 查找特定内容,否则通用提示可能无法正确捕捉所有细微差别。稍微修改一下提示。
提示:详细介绍此仓库视频的视觉和叙事元素。突出显示所有异常或危险(如果存在)。
VLM 响应:
The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor.
现在,该模型可以捕捉场景的所有重要细节以及重要事件,例如工人丢箱。如果您想稍后在视频中执行 Q&A,详细的回答非常有用。具有视频理解功能的 VLM 还能够解决定向问题,因为它们可以理解随时间推移的动作。
提示:工作者是从哪个方向进入场景的?
VLM 响应:
The worker entered the scene from the right side.
但是,这些模型通常缺乏精确的时间定位,这可能会限制其确定特定事件何时发生的能力。
提示:工人是什么时候丢弃盒子的?
VLM 响应:
The worker dropped the box in the second image.
时间定位
结合
Localization Interpretable Temporal Attention (LITA)
或类似时间定位技术的 VLM 通过显式学习关键事件发生的时间和地点来提高视频理解能力。这些模型通过引入专注于视频关键片段的时间注意力机制,随着时间的推移更有效地调整视觉和文本模式来实现这一目标。这有助于回答视频中的“何时”问题。
提示:
工人是什么时候丢弃盒子的?
VLM 响应:
0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground.
VLM 的最佳提示格式取决于模型的架构和训练期间使用的描述对的性质。不同的训练数据集会影响 VLM 解释提示的方式。
结束语
本文介绍了 VLM 如何从仅支持单图像输入发展为能够对长视频输入进行复杂的时间推理。要开始使用 VLM,请访问
build.nvidia.com
并尝试本博文中显示的一些提示。有关技术问题,请参阅
Visual AI Agent 论坛
。
VLM 可用于构建各种视觉 AI 智能体。无需 GPU,即可在
/NVIDIA/metropolis-nim-workflows
GitHub 存储库上探索视觉 AI 工作流示例之一。要构建视频分析 AI 智能体,请试用
NVIDIA AI Blueprint
中的视频搜索和摘要蓝图。
如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的 Vision for All:Unlocking Video Analytics with AI Agents 网络研讨会。
有关 LLM 提示的更多信息,请参阅《
An Introduction to Large Language Models: Prompt Engineering and P-Tuning
》。 |
https://developer.nvidia.com/blog/configurable-graph-based-task-solving-with-the-marco-multi-ai-agent-framework-for-chip-design/ | Configurable Graph-Based Task Solving with the Marco Multi-AI Agent Framework for Chip Design | Chip and hardware design presents numerous challenges stemming from its complexity and advancing technologies. These challenges result in longer turn-around time (TAT) for optimizing performance, power, area, and cost (PPAC) during synthesis, verification, physical design, and reliability loops.
Large language models (LLMs) have shown a remarkable capacity to comprehend and generate natural language at a massive scale, leading to many potential applications and benefits across various domains. Successful LLM-based AI agents for hardware design can drastically reduce TAT, leading to faster product cycles, lower costs, improved design reliability, and reduced risk of costly errors.
Figure 1. Marco framework overview
Marco: Configurable Graph-Based Task Solving and Multi-AI Agents Framework
We introduce the proposed
Marco framework
, which encompasses graph-based task solving, agent configurations for sub-tasks, and skill/tool configurations for each AI agent in real time.
Figure 1 showed dynamic and static configurable graph-based task solving, which is flexibly integrated with chip-design knowledge (for example, circuits, timing, and so on).
In the task graph, each node represents a sub-task, and each edge represents the execution or knowledge relationship between nodes. For solving each sub-task, you use
Autogen
to configure single-AI or multi-AI agent with a knowledge database, tools, and memory.
Table 1 summarizes the task graph, agent, and LLM configurations of the Marco framework for various agents.
VerilogCoder
and the MCMM timing analysis agent use a dynamic task graph to complete the specification-to-RTL and extract key takeaways of timing reports, respectively. The timing path debug agent finds the problematic net, wire, and constraints through a static timing debugging task graph (Figure 1).
Agent works
Task category
Marco framework configuration
Task graph
Sub-task agent config.
Customized tools
RTLFixer
Code Syntax Fixing
N/A
Single-AI
RTL Syntax Error RAG Database
Standard Cell Layout Opt.
Optimization
N/A
Single-AI
Cluster Evaluator, Netlist Traverse Tool
MCMM Timing Analysis(Partition/Block-Level)
Summary & Anomaly Identification
Dynamic
Multi-AI
Timing Distribution Calculator, Timing Metric Comparator
DRC Coder
Code Generation
N/A
Multi-Modality & Multi-AI
Foundry Rule Analysis, Layout DRV Analysis, DRC Code Evaluation
Timing Path Debug(Path-Level)
Summary & Anomaly Identification
Static
Hierarchical Multi-AI
Agentic Timing Report Retrieval
VerilogCoder
Code Generation
Dynamic
Multi-AI
TCRG Retrieval Tool, AST-Based Waveform Tracing Tool
Table 1. Task graph, agent configuration, customized tool of the Marco framework for various autonomous agent implementations for hardware design tasks
In Table 1,
RTLFixer
and
Standard Cell Layout Optimization
agents (that is, a single-AI configuration) are supported using the Marco framework. The rest of the multi-AI agents are implemented on the Marco framework.
For
RTLFixer
,
LLM agent for standard cell layout optimization
, and
DRC Coder
, we used single-AI or multi-AI agent configurations with customized tools, memory, and domain knowledge.
Automated hardware description languages code generation
One key area where autonomous agents are making an impact is in the generation of hardware description languages (HDLs), such as Verilog. Due to the growing complexity of VLSI design, writing Verilog and VHDL is time-consuming and prone to bugs, necessitating multiple iterations for debugging functional correctness. Consequently, reducing design costs and designer effort for completing hardware specifications has emerged as a critical need.
LLMs can be used to generate Verilog code from natural language descriptions. However, LLMs often struggle to produce code that is both syntactically and functionally correct.
Syntax correctness
RTLFixer
uses a combination of retrieval-augmented generation (RAG) and ReAct prompting to enable LLMs to iteratively debug and fix syntax errors. RAG incorporates a database of human expert guidance to provide context for error correction. ReAct enables the LLM to reason about the error, plan a fix, and act on the plan.
Functional correctness
VerilogCoder
is a multi-agent system that incorporates a task planner and an abstract syntax tree (AST)-based waveform-tracing tool to generate and debug Verilog code. It employs a task and circuit relation graph (TCRG) to break down a task into manageable sub-tasks and link signal transitions to each step (Figure 2).
Figure 2. Task-driven circuit relation graph retrieval agent reasoning and interacting with the developed TCRG retrieval tool to enrich the task with the relevant circuit and signal descriptions
An AST-based waveform tracing tool assists the LLM agent in identifying and fixing functional errors by back-tracing signal waveforms. VerilogCoder achieves a 94.2% success rate on the VerilogEval-Human v2 benchmark, demonstrating a significant improvement over previous methods.
Video 1. Autonomously Complete Verilog Code with TCRG Planning and AST-Based Waveform Tracing Tools
In Video 1, the demonstration of VerilogCoder autonomously completes functionally correct Verilog code using TCRG planning and AST-based waveform tracing tools.
Automated DRC layout code generation
DRC-Coder
uses multiple autonomous agents with vision capabilities and specialized DRC and Layout DRV analysis tools to generate DRC code. The system interprets design rules from textual descriptions, visual illustrations, and layout representations. The multiple LLM agents include a planner that interprets design rules, and a programmer that translates the rules into executable code.
DRC-Coder incorporates an auto-debugging process, which uses feedback from the code evaluation to refine the generated code.
Video 2. A Demonstration of DRC-Coder in Chip Design
In Video 2, the demonstration of DRC-Coder generates DRC code that achieves perfect F1 scores on hundreds of testing layouts by leveraging a layout analysis tool, an auto-debugging process, and the capabilities of multi-modality and multi-AI agents.
DRC-Coder achieved a perfect F1 score of 1.000 in generating DRC codes for a sub-3nm technology node, outperforming standard prompting techniques. The proposed automated agentic approach significantly reduces the time required for DRC code generation, from weeks to an average of four minutes per design rule.
Standard cell layout optimization
LLM agent for standard cell layout optimization
proposes using the natural language and reasoning ability of an LLM to generate high-quality cluster constraints incrementally to optimize the cell layout PPA and debug the routability with ReAct prompting.
The system uses net information and cell layout analysis to group MOSFET devices into clusters. The AI agent not only achieves up to 19.4% smaller cell area, but also generates 23.5% more LVS and DRC clean cell layouts than the
Transformer-based device clustering approach
on a set of sequential cells in the industrial 2
nm
technology node.
Multi-corner multi-mode timing report debug and analysis
The multi-corner multi-mode (MCMM) timing analysis agent uses dynamic task graphs to complete the specification-to-RTL and extract key takeaways of timing reports, respectively.
The MCMM timing analysis agent achieves an average score of 8.33 out of 10, based on evaluations by experienced engineers on a set of industrial cases, and delivers approximately 60x speedups compared to human engineers (Figure 3).
Figure 3. MCMM timing analysis agent results
The timing path debug agent finds the problematic net, wire, and constraints through the static timing debugging task graph (Figure 1).
In Table 2, the timing path debug agent resolves 86% of path-level debugging tasks, whereas the standard task solving approach fails to resolve any of the tasks.
Multi Report Task Description
Required Analyzed
Sub-Tasks
Standard Task Solving
Timing Path Debug Agent
Find missing clk signals that have no rise/fall information
max, clk
X
V
Identify pairs of nets with high RC mismatch
max, wire
X
V
Detect unusual constraints between victim and its aggressors
max, xtalk, LC
X
V
Identify unusual RC values between victim and its aggressors
max, wire, xtalk, LC
X
V
Find the constraints of slowest stages with highest RC values
max, wire, xtalk, LC
X
V
Compare each timing table for number of stages, point values and timing mismatch
max
X
X
Task M2 and Task M3 for specific stages in list of paths
max, wire, xtalk, LC
X
V
Avg Pass-rate
0%
86%
Table 2. Pass-rate (%) of timing path debug agent with static task graph solving, and a naïve standard task solving without task graph information
X=Failed to solve the task. V=Solved the task successfully.
Conclusion
The proposed Marco framework enables more flexible and domain-specialized methods for real-time hardware design tasks solving. By using task graph and flexible single-AI and multi-AI agent configurations with domain-specific tools and knowledge, we developed various agents for tasks such as cell layout optimization, Verilog syntax error fixing, Verilog and DRC code generation, and timing debugging on problematic blocks, nets, and wires.
The experimental results show impressive performance and efficiency benefits on utilizing collaborative LLM-based agents for chip design.
The future directions for agent research on hardware design include the following:
Training LLMs with high-quality hardware design data
Improving LLM-based agents’ ability for hardware signal and waveform debugging
Incorporating PPA metrics into the design flow
Developing more efficient self-learning techniques and memory systems for LLM agents for solving more complex hardware tasks
For more papers and projects on electronic design automation, see the
NVIDIA Design Automation Research Group
page.
For those interested in the technologies highlighted in the post, here’s a list of relevant papers:
RTLFixer: Automatically Fixing RTL Syntax Errors with Large Language Models
/NVlabs/RTLFixer
GitHub repo
VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree (AST)-based Waveform Tracing Tool
/NVlabs/VerilogCoder
GitHub repo
DRC-Coder: Automated DRC Checker Code Generation using LLM Autonomous Agent
Large Language Model (LLM) for Standard Cell Layout Design Optimization | https://developer.nvidia.com/zh-cn/blog/configurable-graph-based-task-solving-with-the-marco-multi-ai-agent-framework-for-chip-design/ | 基于 Marco 多 AI 智能体框架的可配置图形任务求解在芯片设计中的应用 | 芯片和硬件设计因其复杂性和先进的技术而面临众多挑战。这些挑战导致在合成、验证、物理设计和可靠性循环期间优化性能、功耗、面积和成本(PPAC)的周转时间(TAT)更长。
大语言模型(LLMs)已显示出大规模理解和生成自然语言的出色能力,从而为各种领域带来许多潜在应用和优势。成功地将基于 LLMs 的 AI 智能体用于硬件设计,可以大幅减少 TAT,从而缩短产品周期、降低成本、提高设计可靠性,并降低代价高昂的错误风险。
图 1. Marco 框架概述
Marco:基于图形的可配置任务求解和多 AI 代理框架
我们介绍了拟议的 Marco 框架,其中包括基于图形的任务求解、子任务的智能体配置,以及每个 AI 智能体的实时技能/工具配置。
图 1 展示了基于动态和静态可配置图形的任务求解,可灵活集成芯片设计知识 (例如电路、定时等)。
在任务图中,每个节点代表一个子任务,每个边缘代表节点之间的执行或知识关系。为了解决每个子任务,您可以使用
Autogen
配置具有知识数据库、工具和内存的单个 AI 或多个 AI 代理。
表 1 总结了各种代理的 Marco 框架的任务图形、代理和 LLM 配置。
VerilogCoder
和 MCMM 时序分析代理使用动态任务图形分别完成对 RTL 的规范和提取时序报告的关键要点。定时路径调试代理通过静态定时调试任务图 (图 1) 查找有问题的网络、线和约束。
代理作品
任务类别
Marco 框架配置
任务图
子任务代理配置。
定制工具
RTLFixer
代码语法固定
不适用
单 AI
RTL 语法错误 RAG 数据库
Standard Cell Layout Opt.
优化
不适用
单 AI
Cluster Evaluator, Netlist Traverse Tool
MCMM 定时分析 (Partition/Block-Level)
总结和异常识别
动态
多 AI
Timing Distribution Calculator,Timing Metric Comparator
DRC Coder
代码生成
不适用
多模态和多 AI
Foundry Rule Analysis,Foundry 规则分析,Layout DRV Analysis,布局 DRV 分析,DRC Code Evaluation,DRC 代码评估
Timing Path Debug(Path-Level)
总结和异常识别
静态
分层多 AI
代理定时报告检索
VerilogCoder
代码生成
动态
多 AI
TCRG 检索工具,AST-Based Waveform Tracing Tool
表 1、任务图、代理配置、Marco 框架的定制工具,用于硬件设计任务的各种自主代理实现
在表 1 中,使用 Marco 框架支持
RTLFixer
和标准单元布局优化代理 (即单 AI 配置)。其余的多 AI 代理均在 Marco 框架上实施。
对于 RTLFixer、用于标准单元布局优化的 LLM 代理和 DRC Coder,我们使用了具有定制工具、内存和域知识的单 AI 或多 AI 代理配置。
自动硬件描述语言代码生成
自主智能体产生影响的一个关键领域是硬件描述语言(HDL)的生成,例如 Verilog。由于 VLSI 设计的复杂性与日俱增,编写 Verilog 和 VHDL 非常耗时且容易出现错误,因此需要进行多次迭代来调试功能正确性。因此,降低设计成本和设计师完成硬件规格的工作量已成为一项迫切需求。
LLM 可用于根据自然语言描述生成 Verilog 代码。但是,LLM 通常难以生成语法和功能上都正确的代码。
语法正确性
RTLFixer
结合使用检索增强生成 (RAG) 和 ReAct 提示,使 LLM 能够迭代地调试和修复语法错误。RAG 整合了人类专家指导数据库,为纠错提供上下文。ReAct 启用 LLM 进行错误推理、计划修复并采取行动。
功能正确性
VerilogCoder
是一个多代理系统,包含任务规划器和基于抽象语法树(AST)-based 波形追踪工具,用于生成和调试 Verilog 代码。它使用任务和电路关系图(TCRG)将任务分解为可管理的子任务,并将信号转换链接到每个步骤(图 2)。
图 2、任务驱动的电路关系图检索代理推理,并与开发的 TCRG 检索工具交互,通过相关的电路和信号描述丰富任务
基于 AST 的波形追踪工具通过回溯信号波形来协助 LLM 代理识别和修复功能错误。与之前的方法相比,VerilogCoder 在 VerilogEval-Human v2 基准测试中的成功率为 94.2%。
视频1. 使用 TCRG 规划和基于 AST 的波形追踪工具自动完成 Verilog 代码
在视频 1 中,VerilogCoder 演示使用 TCRG 规划和基于 AST 的波形追踪工具自动完成功能正确的 Verilog 代码。
自动生成 DRC 布局代码
DRC-Coder 使用多个具有视觉功能的自主代理以及专门的 DRC 和布局 DRV 分析工具来生成 DRC 代码。该系统从文本描述、视觉插图和布局表示中解释设计规则。多个 LLM 代理包括一个解释设计规则的规划器和一个将规则转换为可执行代码的程序员。
DRC-Coder 包含自动调试流程,该流程利用代码评估的反馈来优化生成的代码。
视频 2、用于芯片设计的 DRC-Coder 演示
在视频 2 中,DRC-Coder 演示会生成 DRC 代码,通过利用布局分析工具、自动调试流程以及多模态和多 AI 代理的功能,在数百个测试布局中实现完美的 F1 分数。
DRC-Coder 在为低于 3nm 的技术节点生成 DRC 代码时,达到了完美的 F1 评分 (达到 1.000 分),优于标准提示技术。拟议的自动化代理方法可显著缩短 DRC 代码生成所需的时间,即每个设计规则从数周缩短到平均 4 分钟。
标准单元布局优化
用于标准单元布局优化的 LLM 智能体建议使用 LLM 的自然语言和推理能力逐步生成高质量集群约束,以优化单元布局 PPA,并使用 ReAct 提示调试可路由性。
该系统使用网络信息和单元布局分析将 MOSFET 设备分组到集群中。与在工业
2nm
技术节点中的一组顺序单元上采用基于 Transformer 的设备集群方法相比,AI 智能体不仅可将单元面积减少多达 19.4%,而且还可生成多 23.5% 的 LVS 和 DRC 清洁单元布局。
多角度多模式时序报告调试与分析
多角多模 (MCMM) 时序分析代理使用动态任务图形分别完成从规范到 RTL 的规范,并提取时序报告的关键要点。
根据经验丰富的工程师对一组工业案例的评估,MCMM 时序分析代理的平均分为 8.33 分 (满分 10 分),与人类工程师相比,速度提高了约 60 倍 (图 3)。
图 3、MCMM 时序分析代理结果
定时路径调试代理通过静态定时调试任务图 (图 1) 查找有问题的 net、wire 和约束。
在表 2 中,定时路径调试代理可解析 86% 的路径级调试任务,而标准任务求解方法无法解析任何任务。
多报告任务说明
所需的分析结果
子任务
标准任务求解
定时路径调试代理
查找没有上升/下降信息的缺失 clk 信号
max、clk
X
V
识别 nets 对的 RC 不匹配率高
Max、Wire
X
V
检测受害者与侵犯者之间的异常约束
Max、xtalk、LC
X
V
识别牺牲者与侵犯者之间的异常 RC 值
Max、wire、xtalk、LC
X
V
查找具有最高 RC 值的最慢阶段的限制
Max、wire、xtalk、LC
X
V
比较每个 timing table,了解阶段数、点值和 timing mismatch
最大值
X
X
任务 M2 和任务 M3,用于路径列表中的特定阶段
Max、wire、xtalk、LC
X
V
平均通过率
0%
86%
表 2、具有静态任务图求解功能的定时路径调试代理的通过率 (%),以及无任务图信息的朴素标准任务求解
X = 无法解决任务
。V = 成功完成任务。
结束语
拟议的 Marco 框架为实时硬件设计任务求解提供了更灵活、更领域更专业的方法。通过使用任务图形、灵活的单 AI 和多 AI 代理配置以及特定领域的工具和知识,我们为单元布局优化、Verilog 语法错误修复、Verilog 和 DRC 代码生成以及问题块、网络和线缆的定时调试等任务开发了各种代理。
实验结果表明,在芯片设计中使用基于 LLM 的协作式智能体可获得惊人的性能和效率优势。
硬件设计智能体研究的未来方向包括:
使用高质量的硬件设计数据训练 LLMs
提升基于 LLM 的智能体的硬件信号和波形调试能力
将 PPA 指标纳入设计流程
为 LLM 代理开发更高效的自学习技术和内存系统,以解决更复杂的硬件任务
有关电子设计自动化的更多论文和项目,请参阅
NVIDIA Design Automation Research Group
页面。
如果您对本文中重点介绍的技术感兴趣,以下是相关论文列表:
RTLFixer:使用大语言模型自动修复 RTL 语法错误
/NVlabs/RTLFixer
GitHub repo
VerilogCoder:采用基于图形的规划和基于抽象语法树(AST)的波形追踪工具的自主 Verilog 编码代理
/NVlabs/VerilogCoder
GitHub repo
DRC-Coder:使用 LLM 自主代理自动生成 DRC 检查器代码
用于优化标准单元布局设计的大语言模型(LLM) |
https://developer.nvidia.com/blog/defining-llm-red-teaming/ | Defining LLM Red Teaming | There is an activity where people provide inputs to generative AI technologies, such as large language models (LLMs), to see if the outputs can be made to deviate from acceptable standards. This use of LLMs began in 2023 and has rapidly evolved to become a common industry practice and a cornerstone of
trustworthy AI
. How can we standardize and define LLM red teaming?
Researchers affiliated with NVIDIA, the University of Washington, the Center for Human-Compatible AI, and the IT University of Copenhagen conducted a study on red teaming in practice,
Summon a demon and bind it: A grounded theory of LLM red teaming
(published in PLOS One).
Because the goal was to define and understand a relatively novel activity, the study uses a grounded theory approach, taking as evidence interviews from dozens of practitioners in thousands of minutes of video recordings. We spoke with security professionals, policy researchers, and scientists, as well as people who conducted this work non-professionally, such as academic researchers, hobbyists, and even artists, to understand the common themes, motivations, and strategies in attacking LLMs.
What defines LLM red teaming in practice?
LLM red teaming has the following defining characteristics:
It’s limit-seeking:
Red teamers find boundaries and explore limits in system behavior.
It’s never malicious:
People doing red teaming are not interested in doing harm—in fact, quite the opposite.
It’s manual:
Being a creative and playful practice, the parts of red teaming that can be automated are often most useful to give human red teamers insight for their work.
It’s a team effort:
Practitioners find inspiration in each others’ techniques and prompts, and the norm is to respect fellow practitioners’ work.
It’s approached with an alchemist mindset:
We found that red teamers tend to abandon rationalizations about models and their behavior and instead embrace the chaotic and unknown nature of the work.
These characteristics feed into NVIDIA’s definition of LLM red teaming, which we discuss later in this post.
In industry, there’s often a rough division between cybersecurity red teaming and content red teaming:
Cybersecurity red teaming
tends to be scoped to the technology stack leading up to the point of inference output, and technologies used to transmit and render this output.
Content red teaming
, on the other hand, is scoped to the content produced at model inference time.
Why do people red team LLMs?
People who attack LLMs have a broad range of motivations.
Some of these are external. It may be part of their job or a regulatory requirement. Social systems can also play a role, with people discovering LLM vulnerabilities for social media content or to participate in a closed group. Others are intrinsic, as many people do it for fun, out of curiosity, or based on concerns for model behavior.
At NVIDIA, we red team LLMs as part of our Trustworthy AI process to assess risks before release. If the model doesn’t perform up to expectations, we postpone the release until a given model is updated and revised to perform better.
How do people approach this activity?
LLM red teaming consists of using strategies to reach goals when conversationally attacking the target. Each kind of strategy is decomposed into different techniques. A technique might just affect two or three adversarial inputs against the targets, or an input might draw upon multiple techniques.
We identified the following overall types of red team strategies:
Language:
Modulating the surface form of words rather than semantics, such as using an encoding scheme.
Rhetorical:
Relying on argumentation, or manipulation.
Possible worlds:
Trying to shift the context of the interaction.
Fictionalizing:
Shifting the basis of operation to a fictional world or set of conditions.
Stratagems:
Using meta-strategies that affect how one interacts with the LLM at a higher level.
For more information, see
Summon a demon and bind it: A grounded theory of LLM red teaming
, which lists and describes 35 techniques over twelve different strategies.
What can LLM red teaming reveal?
The goal of LLM red teaming isn’t to quantify security. Rather, the focus is on exploration, and finding which phenomena and behaviors a red teamer can get out of the LLM. Put another way, if we get a failure just one time, then the failure is possible.
Another thing that distinguishes red teaming from benchmarks is the focus on novelty.
For both cybersecurity and content-based red teaming, the possible range of attacks is infinite. For cybersecurity, that’s because new attack methods are constantly in development. For content, it’s because the mode of interaction is through text, which can be infinitely rearranged and augmented.
So, repeatability is not interesting when discovering new security weaknesses and vulnerabilities. While it makes sense to test any model for failures using a battery of existing prompts, as a benchmark does, this can never indicate security. It just reveals weaknesses.
It’s possible to patch deployed models to be secure against prompts in public benchmarks, or to use this data during alignment, so getting full marks on a benchmark doesn’t show that a model is secure. However, getting low marks on a security benchmark does still indicate the presence of weaknesses.
In the security context, to test a model rigorously, you should go beyond public knowledge and interact closely with the model, trying to find novel ways to breach a particular LLM.
In this sense, LLM red teaming is a classic instance of an artisanal activity. Red teamers use their human expertise and intuition while interacting with the target. For example, they might sense that a model is close to giving a mitigation message (for example, “As an AI, I cannot….”), and they might respond to this by backing off their current line of requests, or by starting a fresh chat session with a slightly different tack.
Or, a red teamer might sense that a model is close to yielding and so keep pushing and slightly varying their request until they find a way through and get the model to fail in the target way. They add what worked and what didn’t to their conscious expertise and unconscious intuition and then share it with other red teamers.
This makes red teaming a distinctly human activity that complements security benchmarking.
How do people use knowledge that comes from LLM red teaming?
Red teamers are often looking for what they describe as
harms
that might be presented by an LLM. There’s a broad range of definitions of harm.
A red teaming exercise could focus on one of many goals or targets, which could depend on deployment context, user base, data handled, or other factors. Red teamers may also pay attention to the level of complexity required to get a “break.” A harm discovered after a single, one-sentence interaction with an LLM often suggests greater concern than a harm surfaced following complex, multi-turn manipulation.
Sometimes, the goal of red teaming is curiosity, a byproduct of which might be content for the red teamer to share, in their organization or publicly. This both builds the expertise and intuition of the individual and raises the community level of knowledge. It’s common for traditional cybersecurity knowledge to be shared informally on social media, and this applies also for LLM security.
At NVIDIA, human red teaming is a critical part of how we make model-release decisions. We have a skilled LLM red team and a skilled AI red team, who both check our models and infrastructure for vulnerabilities and behaviors that were not caught elsewhere. This helps us in three ways:
It enables us to make informed decisions about whether we will release models
It builds a pool of high-level skill at the frontier of LLM red teaming
It gives us the confidence that we’re making the best effort and getting good results with our AI security.
The results from red teaming go into NVIDIA’s enhanced model documentation format,
Model Card++
.
Some parts of LLM security can be tested automatically. After an exploit has been found in one case, this can be stored and used to test other LLMs, so that we don’t make the same mistake again. We do exactly this in
NVIDIA garak
(Generative AI Red-Teaming and Assessment Kit). Developers ready to test the security of their LLM deployments can run the open-source NVIDIA garak against almost any model and get a report indicating susceptibility to over 120 different categories of vulnerability.
Knowledge about how to break a model can be risky in the wrong hands. When an exploit has been found, the best thing to do is contact the model owner and give them a chance to respond and fix the weakness. This process is called
co-ordinated vulnerability disclosure
, and is also a common practice for LLM vulnerabilities.
NVIDIA’s definition of LLM red teaming
We see LLM red teaming as an instance of AI red teaming. Our definition is developed by the
NVIDIA AI Red Team
and takes inspiration from both this research on LLM red teaming in practice and also the definition used by the Association for Computational Linguistics’ SIG on NLP Security (
SIGSEC
).
Take care to specify the specific subdomain of red teaming, as different audiences often make different assumptions about which form is being referred to.
LLM red teaming
: Systematically testing AI models and systems containing AI models to identify vulnerabilities and behaviors that pose threats or risks to the systems running or using those models.
It can be subdivided into two areas: security red teaming and content-based red teaming.
Security red teaming
Assessing the robustness of the model and the system containing the model to attacks impacting traditional security properties (for example, confidentiality, integrity, and availability), either of the model itself or the system containing the model.
Attacks such as adversarial inputs, model extraction, training data inference and extraction, or prompt injection as used to violate a security property are typically evaluated by a security red team.
These activities typically require teams with a traditional security background to leverage findings and evaluate their impact.
Content-based red teaming
Assessing the model for unwanted behavior under adversarial manipulation, producing outputs that violate some pre-specified behavior contract for the model, either explicit (for example, a model card) or implicit.
These behaviors may include outputs that are offensive, unwanted, or unsafe, including biased or bigoted productions, instructions on unsafe or illegal activities, making promises on behalf of the model owner, or making decisions based on protected characteristics. Common techniques involve various forms of jailbreaking and guardrail evasion.
These activities typically require the support of an ethics team, a legal team, or other similar domain experts to assess the impact of findings.
Improving LLM security and safety
NVIDIA NeMo Guardrails
is a scalable platform for defining, orchestrating, and enforcing AI guardrails for content safety, jailbreak prevention, and more in AI agents and other generative AI applications.
NeMo Guardrails and the
NVIDIA garak
toolkit are now available for developers and enterprises. Enterprises can benefit from high-grade safety and security with
NVIDIA AI Enterprise
.
Meet the experts at GTC
The NVIDIA scientists behind this and other works in AI security will be at GTC 2025. You can hear a panel discussion on
navigating critical challenges in AI governance
, where we discuss practical approaches to building responsible AI systems.
Our cybersecurity AI and security teams will present an exclusive, in-depth session designed to
transform your AI agentic workflows securely
from blueprint to production. And our experts will be available to answer all your questions on
building trustworthy AI systems
.
Acknowledgements
Thanks to Nanna Inie, Jonathan Stray, and Leon Derczynski for their work on the
Summon a demon and bind it: A grounded theory of LLM red teaming
paper published in PLOS One. | https://developer.nvidia.com/zh-cn/blog/defining-llm-red-teaming/ | 定义 LLM 红色团队 | 在一项活动中,人们为生成式 AI 技术 (例如大语言模型 (LLMs)) 提供输入,以确定输出是否会偏离可接受的标准。LLMs 的这种使用始于 2023 年,并已迅速发展成为一种常见的行业实践,也是值得信赖的 AI 的基石。如何标准化和定义 LLM 红色团队?
NVIDIA、华盛顿大学、Center for Human-Compatible AI 和哥本哈根 IT 大学的研究人员对红色团队的实际应用“
Summon a demon and bind it: A grounded theory of LLM red teaming
”(在 PLOS One 中发布) 进行了一项研究。
由于目标是定义和理解相对较新的活动,因此该研究采用了有根据的理论方法,在数千分钟的视频录制中,以数十位从业者的访谈作为证据。我们与安全专业人员、政策研究人员和科学家,以及从事这项非专业工作的人员 (例如学术研究人员、业余爱好者甚至艺术家) 进行了交谈,以了解攻击 LLMs 的常见主题、动机和策略。
什么定义 LLM 红队在实践中?
LLM 红色团队合作具有以下定义特征:
它寻求极限:红色团队成员发现边界并探索系统行为的极限。
这绝非恶意:做红队的人对伤害不感兴趣,事实上,情况恰恰相反。
它是手动的:作为一种创造性和趣味性的练习,red teaming 中可以自动化的部分通常非常有用,可以让人类 red teamers 深入了解他们的工作。
这是一项团队合作精神:从业者从彼此的技巧和提示中寻找灵感,规范是尊重其他从业者的工作。
他们以炼金术士的思维方式进行交流:我们发现,红色团队成员倾向于放弃对模型及其行为的合理化,转而接受工作的混乱和未知本质。
这些特征融入了 NVIDIA 对 LLM 红色团队的定义,我们稍后将在本文中讨论这些定义。
在业内,通常会对网络安全红队和内容红队进行粗略划分:
网络安全红色团队合作的范围往往涵盖直至推理输出点的技术堆栈,以及用于传输和渲染此输出的技术。
另一方面,内容红色团队的作用域为模型推理时生成的内容。
人们为什么组建 LLM 团队?
攻击 LLMs 的人的动机多种多样。
其中一些是外部的。这可能是他们工作的一部分,也可能是监管要求。社交系统也可以发挥作用,人们可以发现社交媒体内容的 LLM 漏洞,或者加入封闭的小组。还有一些则是内在的,因为很多人这样做是为了好玩、出于好奇或出于对模型行为的关注。
在 NVIDIA,我们组建了红队 LLMs 作为我们 Trustworthy AI 流程的一部分,在发布之前评估风险。如果模型的性能未达到预期,我们将延迟发布,直到对给定模型进行更新和修改以提高其性能。
人们如何看待这项活动?
LLM 红色团队合作包括在对话式攻击目标时使用实现目标的策略。每种策略都被分解成不同的技术。一项技术可能仅会影响目标的两个或三个对抗输入,或者一项输入可能会利用多种技术。
我们确定了以下红队策略的整体类型:
语言:
调节词语的表面形式而非语义,例如使用编码方案。
修辞学:
依靠论证或操纵。
可能的世界:
尝试改变交互的上下文。
虚构化 :
将操作基础转移到一个虚构的世界或一组条件中。
Stratagems:
使用影响更高级别的 LLM 交互方式的元策略。
如需了解更多信息,请参阅 Summon a demon and bind it:LLM 红色团队的基础理论,其中列出并描述了 12 种不同策略中的 35 种技术。
LLM 红色团队合作揭示了哪些内容?
LLM 红色团队合作的目标并不是量化安全性。重点在于探索,以及找出红色团队成员可以退出 LLM 的现象和行为。换句话说,如果我们只遇到一次故障,那么失败是可能的。
红色团队与基准测试的另一个区别是对新奇性的关注。
对于网络安全和基于内容的红色团队,攻击的可能范围是无限的。对于网络安全而言,这是因为新的攻击方法正在不断开发中。对于内容,这是因为交互模式是通过文本进行的,可以无限地重新排列和增强。
因此,在发现新的安全漏洞时,可重复性并不重要。虽然使用一系列现有提示测试任何模型的故障是有意义的,就像基准测试一样,这永远无法表明安全性。它只是揭示了缺点。
您可以根据公开基准测试中的提示对部署的模型进行修补以确保安全性,或者在对齐期间使用这些数据,因此在基准测试中获得完整标记并不表明模型是安全的。但是,在安全基准测试中得分较低仍然表示存在缺点。
在安全环境中,为了严格测试模型,您应该超越公开知识,与模型密切互动,尝试寻找突破特定 LLM 的新方法。
从这个意义上说,LLM 红色团队合作是典型的手工活动。红色小队队员在与目标进行互动时,运用他们的人类专业知识和直觉。例如,他们可能会感觉到某个模型即将发出缓解消息 (例如,“作为 AI,我无法……”),他们可能会通过退出当前的请求行,或者以略有不同的策略开始新的聊天会话来响应此消息。
或者,红色团队成员可能会感觉到模型已接近生成,因此继续推送并略微改变其请求,直到找到解决方法,使模型以目标方式失败。他们将有效和无效的部分添加到自己的意识专业知识和直觉中,然后与其他红色团队成员分享。
这使得红队合作成为一种独特的人类活动,是对安全基准测试的补充。
人们如何使用来自 LLM 红队的知识?
Red Team 成员通常会寻找 LLM 可能带来的危害。
伤害
的定义十分广泛。
红色团队训练可以专注于许多目标或指标之一,这可能取决于部署环境、用户基础、处理的数据或其他因素。红色团队成员还可能需要注意“休息”所需的复杂程度。与 LLM 进行单句交互后发现的危害性通常比复杂的多回合操作后出现的危害性更值得关注。
有时,红色团队的目标是好奇心,而好奇心的副产品可能是红色团队成员在其组织中或公开分享的内容。这既可以建立个人的专业知识和直觉,也可以提高社区的知识水平。传统网络安全知识通常在社交媒体上以非正式方式分享,这也适用于 LLM 安全。
在 NVIDIA,人类红色团队是我们做出模型发布决策的关键部分。我们拥有一支技能熟练的 LLM 红色团队和一支技能熟练的 AI 红色团队,他们会检查我们的模型和基础架构,以发现未在其他地方发现的漏洞和行为。这在三个方面为我们提供帮助:
它使我们能够就是否发布模型做出明智的决定
它建立了 LLM 红色团队前沿的高级技能池
它让我们确信,我们正在利用 AI 安全功能尽力而为并取得良好结果。
红色团队的结果将输入 NVIDIA 的增强型模型文档格式
Model Card++
。
可以自动测试 LLM 安全性的某些部分。在一个案例中,当发现漏洞后,这可以被存储并用于测试其他 LLM,这样我们就不会再次出现同样的错误。我们在 NVIDIA NeMo Curator(生成式 AI Red-Teaming 和评估套件)中正是这样做的。准备测试其 LLM 部署安全性的开发者可以针对几乎任何模型运行开源
NVIDIA garak
,并获得报告,表明他们容易受到 120 多个不同类别的漏洞的影响。
掌握有关如何破解模型的知识可能会对不法之徒带来风险。当发现漏洞时,最好的方法是联系模型所有者,让他们有机会做出回应并修复漏洞。此过程称为
协同漏洞披露
,也是 LLM 漏洞的常见做法。
NVIDIA 的 LLM 红队定义
我们将 LLM 红色团队视为 AI 红色团队的一个实例。我们的定义由
NVIDIA AI Red Team
开发,其灵感来自本期关于 LLM 红色团队实践的研究,以及计算语言学协会的 NLP Security SIG(
SIGSEC
)使用的定义。
请注意指定红队的特定子域,因为不同的受众通常会对所提及的形式做出不同的假设。
LLM 红色团队合作:系统测试包含 AI 模型的 AI 模型和系统,以识别会对运行或使用这些模型的系统造成威胁或风险的漏洞和行为。
它可以细分为两个方面:安全红队和基于内容的红队。
安全红色团队
评估模型和包含模型的系统的稳健性,使其能够抵御攻击影响模型本身或包含模型的系统的传统安全属性(例如,机密性、完整性和可用性)。
用于违反安全属性的攻击(例如对抗输入、模型提取、训练数据推理和提取或 prompt injection)通常由安全红色团队进行评估。
这些活动通常需要具有传统安全背景的团队利用调查结果并评估其影响。
基于内容的红色团队
评估模型在对抗操作下的不良行为,生成违反模型某些预先指定的行为契约的输出 (显式 (例如 model card) 或隐式)。
这些行为可能包括冒犯性的、不受欢迎的或不安全的输出,包括有偏见或偏见的生产、关于不安全或不法活动的指令、代表模型所有者作出承诺,或根据受保护的特征做出决定。常见的技术包括各种形式的 jailbreaking 和 guardrail evasion。
这些活动通常需要道德团队、法律团队或其他类似领域专家的支持,以评估调查结果的影响。
提升 LLM 的安全性和安全性
NVIDIA NeMo Guardrails 是一个可扩展的平台,用于定义、编排和执行 AI Guardrails,以在 AI 代理和其他生成式 AI 应用中实现内容安全、越狱预防等。
NeMo Guardrails 和
NVIDIA garak
工具包现已面向开发者和企业推出。借助
NVIDIA AI Enterprise
,企业可以从高级别的安全性和可靠性中受益。
在 GTC 大会上与专家会面
在 GTC 2025 上,NVIDIA 科学家将为这项工作以及 AI 安全领域的其他工作提供支持。您可以听到关于应对 AI 治理中的关键挑战的小组讨论,其中我们讨论了构建负责任的 AI 系统的实用方法。
我们的网络安全 AI 和安全团队将举办一场深入的独家会议,旨在安全地将您的 AI 代理工作流从蓝图转变为生产。我们的专家将为您解答有关构建可信 AI 系统的所有问题。
致谢
感谢 Nanna Inie、Jonathan Stray 和 Leon Derczynski 为“召唤恶魔并将其绑定:LLM 红队合作的基础理论”所做的工作:在 PLOS One 上发表的论文。 |
https://developer.nvidia.com/blog/agentic-autonomy-levels-and-security/ | Agentic Autonomy Levels and Security | Agentic workflows
are the next evolution in AI-powered tools. They enable developers to chain multiple AI models together to perform complex activities, enable AI models to use tools to access additional data or automate user actions, and enable AI models to operate autonomously, analyzing and performing complex tasks with a minimum of human involvement or interaction.
Because of their power, agentic workflows also present an element of risk. The most common model at the core of agentic systems remains some variety of LLM, which remains vulnerable to prompt injection if untrusted data can be introduced into the system.
To help assess and mitigate those vulnerabilities, NVIDIA presents an Agentic Autonomy framework, which we use for the following purposes:
Understand risks associated with increasing complexity of AI workflows
Help model and mitigate the risks posed by such workflows
Introduce how we use it to model threats to agentic systems
Explore how NVIDIA uses this framework for security for AI and more
cybersecurity use cases at NVIDIA GTC
.
Manipulating autonomous systems
In practice, exploitation of AI-powered applications requires two key components:
An adversary must be able to get their data (read: attack) into the system through some mechanism.
There must be a downstream effect that their malicious data can trigger.
When the AI component of the system is an LLM, this is commonly referred to as either direct prompt injection (the adversary and user are the same person) or indirect prompt injection (the adversary and the user could be different people).
However, similar threats exist for other types of AI models used in autonomous AI powered applications. Both flavors of prompt injection are rooted in the lack of separation between the control and data plane in the LLM architecture.
Direct prompt injection can be commonly seen in such examples as “Do Anything Now” (or DAN) prompts, which attempt to subvert safety and content filters trained into the model, and enable it to produce objectionable or unwanted content. These attacks typically only impact the active user’s session, and so typically have limited impact.
Indirect prompt injection, in which a threat actor causes their data to be included in another user’s session and thus takes control of the LLM’s behavior and productions, is more serious. This is often accomplished by targeting data sources used by retrieval augmented generation (RAG) tools, including internet search, such that documents containing malicious instructions are unknowingly included in the current user’s session. When the capability to manipulate another user’s session,– particularly LLM outputs, is combined with the use of tools, a significant number of new potential risks present themselves.
When potentially untrusted data enters an agentic system, any downstream actions taken by that system become potentially adversary-controlled.
Security and complexity in AI autonomy
Even before “agentic” AI became a distinct class of product offerings, the orchestration of AI workloads in sequences was commonplace. Even simple flows, such as an endpoint security product routing a sample to the correct AI powered analysis engine depending on file format, is arguably an example of such a workflow.
Because workflows such as these are deterministic, it’s straightforward to enumerate all possible paths, map out data flows, and isolate untrusted data from potentially risky actions that might be impacted by attacker-controlled data.
As the industry moves to systems that have more internal decision making capabilities, higher degrees of tool use, and more complex interactions between model components, the number of potential data flow paths increases exponentially, making threat modeling more difficult.
The following set of classification, ordering systems from least autonomous to most, distinguishes between different degrees of autonomy afforded by different architectures, and helps assess corresponding risks.
Autonomy level
Description
Example
0 – Inference API
A single user request results in a single inference call to a single model.
An
NVIDIA NIM microservice
serving a single model
1 – Deterministic system
A single user request triggers more than one inference request, optionally to more than one model, in a predetermined order that does not depend on either user input or inference results.
NVIDIA Generative Virtual Screening for Drug Discovery Blueprint
2 – Weakly autonomous system
A single user request triggers more than one inference request. An AI model can determine if or how to call plugins or perform additional inference at fixed predetermined decision points.
Build an Enterprise RAG Pipeline Blueprint
3 – Fully autonomous system
In response to a user request, the AI model can freely decide if, when, or how to call plugins or other AI models, or to revise its own plan freely, including deciding when to return control to the user.
NVIDIA Vulnerability Analysis for Container Security Blueprint
, “BabyAGI”, computer use agents
Table 1. System autonomy level classification
Level 1
Level 1 is a linear chain of calls, where the output of one AI call or tool response is conveyed to the next step in an entirely deterministic manner. The complete flow of data through the system is known in advance.
Figure 1. First level of autonomy
Level 2
In level 2, the output from the AI model may be sent along different paths through the workflow in a data-dependent manner. While every execution path can be traced, it’s not known until the workflow is executed which execution path is used.
Figure 2. Linear chain of AI calls and tool responses
It forms a directed acyclic graph in that there is no path through the workflow that could potentially form a loop or cycle. All possible paths can be finitely enumerated. Most LLM routers fall into this level.
Level 3
For level 3, the number of potential execution paths grows exponentially with the number of execution steps in the workflow. Tools or other AI models may be invoked as the workflow progresses.
Figure 3. Output from the AI model goes along different paths
The presence of a cycle within the directed graph means that an exponentially increasing (and, at least in theory, potentially infinite) number of paths through the graph exist. It’s generally not tractable to enumerate all possible paths ahead of time, nor predict which specific path will be used for a particular input.
Separation of security boundary analysis and tool security
Our early experience with analyzing and securing agentic systems in the field led us to realize that the risk associated with these systems lies mostly in the tools or plugins available to those systems. In the absence of a tool or plugin that can perform sensitive or physical actions, the primary risk posed by manipulation of the AI component is misinformation, regardless of the degree of complexity of the workflow.
This observation drives the primary difference between this framework, which focuses on security boundaries as being distinct from tool security, and other frameworks, such as that proposed by HuggingFace in their smolagents library (
Introduction to Agents
). Some levels align closely. Their
simple processor
is approximately a
deterministic system
and their
multi-step agent
maps closely onto a
fully autonomous system
.
However, this framework focuses on the difficulty of identifying security boundaries given the degree of autonomy expressed by an AI component in describing program flow. HuggingFace’s description of agency, by combining the risks of tool use with the complexity of program flow, provides a quick rubric to judge systemic risks at the level of the system. This framework’s approach of separating the risks of tool calling and tool use from the difficulty of drawing security boundaries supports more detailed threat modeling of complex flows and potential mitigations.
In the proposed framework, separating the following concerns enables you to more concretely assess the risk posed by an agentic workflow:
Security boundaries with respect to the flow of untrusted data through the system
Security properties of specific functions or tools that may be called by the agentic system
Within a simple deterministic system with clear security boundaries, even a sensitive plugin may be a fairly low-risk if you can clearly separate it from untrusted input. In a similar manner, even a fully agentic system may be relatively low-risk as long as no sensitive tools are available to it and the output of the system is appropriately validated before use.
The combination of a more complex workflow with sensitive tools, however, demands further analysis to ensure secure operation.
Threat modeling under this framework
Increasing autonomy levels do not inherently represent increasing risk. Instead, they represent increasing lack of determinism and predictability of the system behavior, which makes it more difficult to assess the impact of untrusted data.
Risk associated with agentic systems is located largely in the tools or plugins that the system has access to which can perform sensitive actions, including the following examples:
Completing a purchase
Sending an email
Physical actions such as moving a robot or setting a thermostat
Broadly speaking, anything that takes an action on behalf of a user that requires their delegated authorization to complete, or anything that results in a physical change in a system should be treated as potentially sensitive, and analyzed for its potential to act upon or be influenced by untrusted data.
Mitigation of this tool risk relies in large part on the ability to block the ability of attackers to inject malicious data into plugins, which becomes significantly more difficult with each increase in autonomy level.
Possible downstream effects are much more dependent on the details of the system and what plugins are being used, but at a minimum, a threat actor who can feed malicious data to the system must be presumed to be able to at least influence, if not control outright, the responses of the system.
If tools or plugins are being used, the potential threat increases significantly. A bad actor who can control the output of an AI model embedded in a system that uses tools might also be able to control which plugins are used and what actions those tools take. This can lead to a range of threats:
Data exfiltration, for example, through Markdown rendering plugins
Remote code execution, for example, through plugins that run LLM-provided code
The risk of physical harm, if the system has control or influence over some sort of physical system
As you climb the autonomy hierarchy, you need a more careful analysis of and management of potentially untrusted data (Figures 4 and 5).
A Level 0 system, with no vectors for untrusted data, is relatively straightforward from this perspective.
A Level 1 system, with a predetermined workflow, always accesses the same data sources, models, and tools in the same sequence, making it straightforward to determine if untrusted data can be introduced into the workflow. The designer can either isolate sensitive functionality from the untrusted data, or design appropriate sanitization strategies for that data.
The number of potential execution paths in a Level 2 system is greater than in Level 1 systems but still enumerable, and specific paths that contain untrusted data can be analyzed in greater detail.
Level 3 autonomous systems pose the largest difficulty, as the number of execution paths typically grows exponentially in the number of self-reflection steps performed by the agent. In this case, these include time-of-use sanitization, time-of-use taint tracing, or other risk mitigations such as requiring manual user approval of potentially hazardous actions.
Taint tracing
Taint tracing
is marking an execution flow as having received untrusted data and then either preventing use of or requiring manual re-authorization for any sensitive tool.
For Level 1 and Level 2 systems, identifying potential sources of untrustworthy data, and evaluating the risks of downstream impacts is relatively straightforward. When untrustworthy data enters the system, every downstream AI model and its associated outputs and every downstream tool and its associated actions and outputs are also untrusted.
Figure 4. Taints from untrusted sources can be bounded and enumerated in Level 1 and 2 systems
For level 3 systems, taint tracing in the general case becomes almost intractable. When untrusted data has been introduced into the workflow, it has the potential to propagate to any other downstream component through the looping structure inherent to level 3 workflows.
Figure 5. Taints from untrusted sources are difficult to bound and enumerate in Level 3 systems
By classifying an agentic application into the correct level, it becomes simpler to identify the overall level of risk posed by the application and corresponding security requirements.
Recommended security controls per autonomy level
Table 2 provides a high-level summary of some suggested mitigating controls that should be placed around agents of various autonomy levels. Not all controls are required for all agentic workloads, depending on the sensitivity of tools provided to the workload.
Autonomy level
Recommended security controls
0 – Inference API
Use standard API security.
1 – Deterministic system
Manually trace dataflows and order workflow correctly to prevent untrusted data from entering sensitive plugins.
2 – Bounded agentic workflow
Enumerate dataflows, identify ones with potentially untrusted data, explore isolation or sanitization options, and consider time-of-use manual approval of sensitive actions.
3 – Fully autonomous system
Implement taint tracing and mandatory sanitization of potentially untrusted data. Consider time-of-use manual approval of sensitive actions.
Table 2. Recommended security controls by level
For Level 0 systems, you can apply simple application and API security. Due to lack of further interaction taken by the system on the basis of AI model output, overall risk here is largely limited to non-AI components.
Level 1 and 2 systems provide a slightly elevated level of risk, but one that is straightforward to evaluate. Because these systems contain no loops, all execution flows through the systems can be exhaustively enumerated and examined for cases when untrusted data may flow into a sensitive tool.
Flows of this type are particularly risky when they pass through an LLM between the source and the sink. In level 1 systems, it’s often sufficient to reorder the workflow. For more complex (level 2) systems, risky flows from untrusted sources to sensitive sinks should be carefully evaluated to see if they can be eliminated, or otherwise if the untrusted data or input to the sensitive tool can be properly sanitized to remove risk.
Finally, if the flows cannot be rearranged or eliminated, or robust sanitization is not possible, add time-of-use manual approval to sensitive tools that may receive untrusted data, thus placing a human in the loop. This enables users of agentic workflows to inspect risky actions before they are performed and deny the workflow the ability to perform adversarially manipulated ones.
Level 3 systems, due to their complexity, typically cannot be meaningfully rearranged to block untrusted sources to sensitive sink flows. In this case, sanitization of untrusted data at time of retrieval should be implemented.
If possible, implementing taint tracing should also be considered. In the event that neither is possible, simply requiring that all potentially sensitive tools require manual approval before executing their function may be used as a last resort.
As with threat modeling, breaking agentic workloads into different complexity levels enables the quick determination of reasonable security controls to avoid the risk of having sensitive tools ingest untrusted data, including LLM outputs that have been manipulated by prompt injection.
Conclusion
As systems climb the autonomy hierarchy, they become more complex and more difficult to predict. This makes threat modeling and risk assessment more difficult, particularly in the presence of a range of data sources and tools of varying trustworthiness and sensitivity.
Identifying the system autonomy level provides a useful framework for assessing the complexity of the system, as well as the level of effort required for threat modeling and necessary security controls and mitigations.
It’s also important to analyze the plugins in the pipeline and classify them depending on their capabilities to provide an accurate risk evaluation based on the autonomy level. | https://developer.nvidia.com/zh-cn/blog/agentic-autonomy-levels-and-security/ | 代理自主级别与安全性 | 智能体工作流是 AI 赋能工具的下一次演进。它们使开发者能够将多个 AI 模型关联在一起以执行复杂的活动,使 AI 模型能够使用工具来访问其他数据或自动执行用户操作,并使 AI 模型能够自主运行,以尽可能减少人类参与或交互的方式分析和执行复杂的任务。
由于其强大的功能,代理工作流也存在风险因素。代理系统的核心最常见的模型仍然是各种 LLM,如果可以将不受信任的数据引入系统,这些模型仍然容易受到 prompt injection 的影响。
为帮助评估和缓解这些漏洞,NVIDIA 提供了一个 Agentic Autonomy 框架,我们将其用于以下用途:
了解与日益复杂的 AI 工作流相关的风险
帮助模拟和减轻此类工作流程带来的风险
介绍如何使用它对 agentic 系统的威胁进行建模
操控自主系统
在实践中,开发 AI 赋能的应用需要两个关键组件:
对手必须能够通过某种机制将其数据 (读取:攻击) 传入系统。
必须存在其恶意数据可能触发的下游影响。
当系统的 AI 组件是 LLM 时,这通常被称为直接提示注入 (对手和用户是同一个人) 或间接提示注入 (对手和用户可以是不同的人)。
然而,对于 AI 驱动的自主应用中使用的其他类型的 AI 模型,也存在类似的威胁。这两种提示注入的根源在于 LLM 架构中控制平面和数据平面之间缺乏分离。
直接提示注入通常出现在“Do Anything Now”(或 DAN)提示等示例中,这些示例试图颠覆训练到模型中的安全和内容过滤器,并使其产生令人反感或不需要的内容。这些攻击通常只影响活跃用户的会话,因此通常影响有限。
间接提示注入 (即威胁行为者将其数据包含在其他用户的会话中,从而控制 LLM 的行为和生产) 更为重要。这通常通过定位检索增强生成 (RAG) 工具 (包括互联网搜索) 使用的数据源来实现,这样就会在不知情的情况下将包含恶意指令的文档包含在当前用户的会话中。当操控其他用户会话 (尤其是 LLM 输出) 的能力与工具的使用相结合时,会出现大量新的潜在风险。
当潜在的不可信任数据进入代理系统时,该系统采取的任何下游操作都可能受到对手的控制。
AI 自主领域的安全性和复杂性
早在“代理式”AI 成为一类独特的产品之前,按序列编排 AI 工作负载就已屡见不鲜。即使是简单的流程 (例如端点安全产品,根据文件格式将样本路由到由 AI 驱动的正确分析引擎) 也可以说是此类工作流的示例。
此类工作流程具有确定性,因此您可以直接列举所有可能的路径,绘制数据流地图,并将不受信任的数据与可能受攻击者控制数据影响的潜在风险操作隔离开来。
随着行业转向拥有更多内部决策能力、更高工具使用率以及模型组件之间更复杂交互的系统,潜在数据流路径的数量呈指数级增加,从而增加了威胁建模的难度。
以下分类将系统从最不自主排序为最自主排序,可区分不同架构提供的不同自主程度,并有助于评估相应的风险。
自主级别
说明
示例
0 – 推理 API
单个用户请求会导致对单个模型进行一次推理调用。
为单个模型提供服务的 NVIDIA NIM 微服务
1 – 确定性系统
单个用户请求会按不依赖于用户输入或推理结果的预定顺序触发多个推理请求 (可以选择触发多个模型)。
NVIDIA
生成式虚拟筛选蓝图
2 – 弱自主系统
单个用户请求会触发多个 inference 请求。AI 模型可以确定是否或如何调用插件,或在固定的预定决策点执行额外 inference。
构建企业 RAG Pipeline Blueprint
3 – 全自主系统
根据用户请求,AI 模型可以自由决定是否、何时或如何调用插件或其他 AI 模型,或者自由修改自己的计划,包括决定何时将控制权交回给用户。
针对容器安全蓝图“BabyAGI”、计算机使用代理的 NVIDIA 漏洞分析
表 1. 系统自主级别分类
1 级
1 级是线性调用链,其中一个 AI 调用或工具响应的输出以完全确定的方式传输到下一步。通过系统的完整数据流是预先知道的。
图 1. 第一级自动驾驶
2 级
在第 2 级中,AI 模型的输出可能会以数据依赖的方式在工作流中沿不同路径发送。虽然可以追踪每个执行路径,但在工作流执行之前,我们不知道使用的是哪种执行路径。
图 2、AI 调用和工具响应的线性链
它形成了有向无环图,因为工作流中没有可能形成循环或循环的路径。所有可能的路径都可以有限枚举。大多数 LLM 路由器都属于这一级别。
3 级
对于级别 3,潜在执行路径的数量会随着工作流中执行步骤的数量呈指数级增长。随着工作流的进展,可能会调用工具或其他 AI 模型。
图 3、AI 模型的输出路径不同
有向图形中存在循环意味着图形中存在呈指数级增加(至少理论上可能是无穷大)的路径。通常无法提前列举所有可能的路径,也无法预测特定输入将使用哪种特定路径。
安全边界分析与工具安全性分离
我们早期在现场分析和保护代理系统的经验使我们意识到,与这些系统相关的风险主要在于这些系统可用的工具或插件。在缺乏可执行敏感或物理操作的工具或插件的情况下,无论工作流程的复杂程度如何,操作 AI 组件所带来的主要风险都是错误信息。
这一观察结果决定了此框架与其他框架 (例如 Hugging Face 在其 smolagents 库
(Introduction to Agents)
中提出的框架) 之间的主要区别,前者侧重于安全边界,而后者不同于工具安全边界。一些层次非常相似。他们的简单处理器近似为
确定性系统
,并且他们的多步骤智能体紧密地映射到完全自主的系统上。
但是,鉴于 AI 组件在描述程序流时表示的自主程度,此框架重点关注识别安全边界的难度。Hugging Face 对机构的描述将工具使用的风险与程序流程的复杂性相结合,提供了在系统层面判断系统风险的快速指南。此框架将工具调用和工具使用的风险与绘制安全边界的难度分开,支持对复杂流程和潜在缓解措施进行更详细的威胁建模。
在建议的框架中,通过分离以下问题,您可以更具体地评估由 agentic workflow 带来的风险:
与通过系统的非可信数据流相关的安全边界
代理系统调用的特定函数或工具的安全属性
在具有明确安全边界的简单确定性系统中,如果您能够将敏感插件与不信任的输入明确分开,即使是敏感插件也可能风险较低。同样,只要没有可用的敏感工具,且系统的输出在使用前经过适当验证,则即使是全代理系统也可能相对风险较低。
然而,将更复杂的工作流程与敏感工具相结合需要进行进一步分析,以确保安全操作。
此框架下的威胁建模
自动驾驶级别的增加本身并不代表风险的增加。相反,它们代表着系统行为越来越缺乏确定性和可预测性,这使得评估不可信任数据的影响变得更加困难。
与代理系统相关的风险主要存在于系统可以访问的工具或插件中,而这些工具或插件可以执行敏感操作,其中包括以下示例:
完成购买
发送电子邮件
物理动作,例如移动机器人或设置 thermostat
从广义上讲,任何代表用户采取行动且需要其授权才能完成的操作,或任何导致系统发生物理变化的操作,都应被视为潜在的敏感数据,并分析其对不受信任数据采取行动或受其影响的可能性。
降低此工具风险在很大程度上依赖于能否阻止攻击者向插件中注入恶意数据,而随着自主级别的提升,这一能力会变得更加困难。
可能的下游影响更多取决于系统的细节和使用的插件,但至少必须假定能够向系统提供恶意数据的威胁行为者至少能够影响(如果不是完全控制)系统的响应。
如果使用工具或插件,潜在威胁会显著增加。控制嵌入到使用工具的系统中的 AI 模型的输出的坏行为者也可能能够控制使用哪些插件以及这些工具执行哪些操作。这可能会导致一系列威胁:
数据泄漏,例如通过 Markdown 渲染插件
远程执行代码,例如,通过运行 LLM 提供的代码的插件
如果系统控制或影响某种物理系统,则会产生物理伤害风险
在提升自主层次结构时,您需要更仔细地分析和管理潜在的不可信任数据(图 4 和图 5)。
从这个角度来看,没有不可信数据向量的 Level 0 系统相对简单。
具有预定工作流的 1 级系统始终可以按相同顺序访问相同的数据源、模型和工具,从而可以直接确定是否可以将不可信任的数据引入工作流。设计师可以将敏感功能与非可信数据隔离开来,或者为这些数据设计合适的清理策略。
2 级系统中的潜在执行路径数量大于 1 级系统,但仍可枚举,并且可以更详细地分析包含非可信数据的特定路径。
3 级自主系统的难度最大,因为执行路径的数量通常随智能体执行的自反射步骤的数量呈指数级增长。在这种情况下,这些措施包括使用时间清洁、使用时间污点追踪或其他风险缓解措施,例如要求用户手动批准潜在的危险操作。
污点追踪
污点追踪
将执行流程标记为接收了不可信任的数据,然后阻止对任何敏感工具的使用或要求对其进行手动重新授权。
对于 1 级和 2 级系统,识别潜在来源的不可信数据,以及评估下游影响的风险相对简单。当不可信数据进入系统时,每个下游 AI 模型及其关联输出、每个下游工具及其关联操作和输出也同样不可信。
图 4、不可信来源的污点可以在 Level 1 和 Level 2 系统中进行限制和列举
对于 level 3 级系统,在一般情况下,污点追踪几乎难以解决。当不受信任的数据引入工作流后,它有可能通过 level 3 级工作流固有的循环结构传播到任何其他下游组件。
图 5、来自不可信来源的污点很难在 Level 3 级系统中绑定和列举
通过将代理应用程序分类为正确的级别,可以更简单地识别应用程序构成的总体风险水平以及相应的安全要求。
每个自主级别的推荐安全控制
表 2 简要概述了一些建议的缓解控制,这些控制应放置在不同自主级别的 agents 周围。并非所有 agents 工作负载都需要所有控件,具体取决于为工作负载提供的工具的灵敏度。
自主级别
推荐的安全控制
0 – 推理 API
使用标准 API 安全性。
1 – 确定性系统
手动追踪数据流并正确订购工作流,以防止不受信任的数据进入敏感插件。
2 – 有界限的代理工作流
列举数据流、识别具有潜在不可信任数据的数据流、探索隔离或清理选项,并考虑使用时间手动批准敏感操作。
3 – 全自主系统
实现污点追踪和强制清理对可能不可信的数据。考虑使用时间手动批准敏感操作。
表 2. Recommended security controls by level
对于 0 级系统,您可以应用简单的应用程序和 API 安全性。由于系统无法根据 AI 模型输出进行进一步交互,因此整体风险主要局限于非 AI 组件。
1 级和 2 级系统的风险水平略高,但评估起来很简单。由于这些系统不包含循环,因此可以详尽无遗地列举和检查流经系统的所有执行流程,以备不时之需。
当此类流通过源和接收端之间的 LLM 时,风险特别大。在 1 级系统中,通常只需重新排列工作流程即可。对于更复杂(2 级)的系统,应仔细评估从不受信任的来源到敏感汇的风险流,以确定是否可以消除这些流,或者是否可以适当清除不受信任的数据或敏感工具的输入,以消除风险。
最后,如果流程无法重新排列或消除,或无法进行可靠的清理,请为可能会接收不受信任数据的敏感工具添加使用时间手动审批,从而让用户进入循环。这使得代理工作流的用户能够在执行风险操作之前检查这些操作,并使工作流无法执行对抗性操作。
由于 Level 3 级系统的复杂性,通常无法进行有意义的重新排列,以将不受信任的源阻塞到敏感的接收流。在这种情况下,应在检索时清除不受信任的数据。
如果可能,还应该考虑实施污点追踪。如果两者都无法实现,则仅要求所有潜在敏感的工具在执行其功能之前需要手动批准即可将其用作最后手段。
与威胁建模一样,将代理工作负载分解为不同的复杂性级别有助于快速确定合理的安全控制,从而避免让敏感工具提取不受信任数据 (包括通过提示注入操作的 LLM 输出) 的风险。
结束语
随着系统对自动驾驶层次结构的要求越来越高,这些层次结构变得越来越复杂,预测难度也越来越大。这使得威胁建模和风险评估变得更加困难,尤其是在存在各种数据源以及具有不同可靠性和灵敏度的工具的情况下。
确定系统自主级别为评估系统复杂性、威胁建模以及必要的安全控制和缓解措施所需的工作量水平提供了有用的框架。
分析管道中的插件并根据其功能对其进行分类也很重要,以便根据自主级别提供准确的风险评估。 |
https://developer.nvidia.com/blog/nvidia-deep-learning-institute-releases-new-generative-ai-teaching-kit/ | NVIDIA Deep Learning Institute Releases New Generative AI Teaching Kit | Generative AI, powered by advanced machine learning models and deep neural networks, is revolutionizing industries by generating novel content and driving innovation in fields like healthcare, finance, and entertainment.
NVIDIA is leading this transformation with its cutting-edge GPU architectures and software ecosystems, such as the
H100 Tensor Core
GPU and CUDA platform, which optimize the development and deployment of generative models.
NVIDIA NIM
enhances the efficiency and scalability of AI inference tasks, enabling rapid deployment and iteration across various computing environments and accelerating advancements in generative AI applications.
The importance of generative AI education
As generative AI models, such as GANs and transformers, become increasingly sophisticated, there is a growing demand for skilled professionals who can develop, refine, and ethically deploy these technologies. A strong educational foundation in generative AI equips students with the practical skills and theoretical knowledge needed to innovate in areas like content creation, drug discovery, and autonomous systems.
College and university education in generative AI is crucial due to the rapidly expanding role of AI in almost every industry. By integrating generative AI into their curriculum, universities prepare the next generation of AI researchers, engineers, and thought leaders to advance the field and address the complex challenges associated with AI-driven innovation.
The new Generative AI Teaching Kit, a collaboration between the NVIDIA Deep Learning Institute (DLI) and Dartmouth College, is set to empower the next generation of professionals with the skills and knowledge needed in this rapidly evolving field.
This comprehensive teaching resource enables educators to provide students access to cutting-edge tools, frameworks, and practical exercises that are crucial for understanding the complexities of Generative AI and large language model development and deployment. By equipping students with a deep understanding of generative AI techniques, the Teaching Kit enables educators to foster future innovation and creativity in AI-driven industries.
As students transition into the workforce, they will be better prepared to tackle global challenges, from improving healthcare and science to advancing sustainable technologies.
Sam Raymond, adjunct assistant professor of engineering at Dartmouth College, was instrumental in developing the content. “Empowering students with skills to understand and potentially develop their own GPU-accelerated Generative AI applications is the primary objective,” said Raymond. “I believe students who go through this course will be at a significant advantage in the job market and help bridge the knowledge gap in industries today.”
Overview of the Generative AI Teaching Kit
All Teaching Kits include lecture slides, hands-on labs, Jupyter notebooks, knowledge checks, and free online self-paced courses that provide certificates of competency for students, all comprehensively packaged up and ready for classroom and curriculum integration.
Figure 1. The Generative AI Teaching Kit includes free access to all DLI online self-paced courses, offering certificates for students
.
The aim of the Generative AI Teaching Kit is to introduce the foundational concepts of natural language processing (NLP) that are essential for understanding LLMs and generative AI more broadly. Key concepts of LLMs are then examined using NVIDIA GPUs, tools, and services, as well as open-source libraries and frameworks. A simple pretraining exercise of a GPT model shows basic training processes in the cloud.
The kit also covers diffusion models to explore the application of generative AI in image and video generation. Multi-modal LLM architectures are then introduced, with a focus on optimizing various LLM architectures during fine-tuning using the NVIDIA NeMo framework. Advancements in inference and the refinement of tools like chatbots are also discussed, using NVIDIA NIM, NeMo Guardrails, TensorRT, and TensorRT-LLM to enhance efficiency and scalability in production environments.
The Generative AI Teaching Kit contains focused modules that combine theory, algorithms, programming, and examples:
Module 1:
Introduction to Generative AI
Module 2:
Word Embeddings, Tokens, and NLP
Module 3:
Large Language Models and the Transformer
Module 4:
LLM Scaling Laws and LLM Families
Module 5:
Multimodal Learning and Its Applications
Module 6:
Diffusion Models in Generative AI
Module 7:
Model Training (Pre-Training, Instruction Following, and PEFT)
Module 8:
LLM Orchestration
Module 9:
Scaling Model Training to Distributed Workloads
Through a collaboration with Google, educators also get access to free Google Colab credits to run the lab notebooks in preparation for teaching the course.
This content is valuable for educators across various fields, especially in computer science and engineering. Its modular design enables instructors to tailor the course to meet the specific needs of their students and create a customized learning experience. Select professors from around the world have already been given early access to first-release modules.
“I’m eager to integrate the Generative AI Teaching Kit in my AI in Materials Engineering class,” said Mohadeseh Taheri-Mousavi, assistant professor in the Materials Science and Engineering department at Carnegie Mellon University. “The comprehensive lecture notes with well-structured coding labs with examples from various fields, and associated online courses with certificates, will provide my students with the cutting-edge resources to deeply understand the broad applications of generative AI techniques in various fields.”
Professor Payam Barnaghi from the Department of Brain Sciences at Imperial College London uses LLMs and generative AI in his research using electronic health records and healthcare data. “NVIDIA Generative AI Teaching Kit content is a wonderful resource for students learning the latest developments in AI and machine learning,” said Barnaghi. “As a result of having early access to the first modules, I plan to use this content as the basis for teaching advanced topics in my machine learning for neuroscience courses.”
Given the fast-paced advancements in generative AI, educators can expect the teaching materials to be updated over time. NVIDIA is dedicated to offering high-end educational resources and welcomes feedback to continuously improve the content.
LLM technical certification discounts for students
The Generative AI Teaching Kit also includes access to a 50% discount on the
NVIDIA-Certified Associate: Generative AI LLMs
certification exam and the
NVIDIA-Certified Associate: Generative AI Multimodal
certification exam for educators to distribute to students. It also includes access to both exams free of charge for educators. Both exams are online and proctored remotely, include 50-60 questions, and have a 60-minute time limit.
Figure 1. NVIDIA-Certified Associate
These technical certifications validate students’ expertise in generative AI, offering a competitive advantage in the job market. They can open doors to exciting career opportunities across AI-driven industries.
The NCA Generative AI LLMs certification is an entry-level credential that validates the foundational concepts for developing, integrating, and maintaining AI-driven applications using generative AI and large language models (LLMs) with NVIDIA solutions.
The NCA Generative AI Multimodal certification is an entry-level credential that validates the foundational skills needed to design, implement, and manage AI systems that synthesize and interpret data across text, image, and audio modalities.
NVIDIA recommends that students take these exams after completing the Generative AI Teaching Kit.
Get started
Educators can access the first release of the Generative AI Teaching Kit and other kits for free by joining the
NVIDIA DLI Teaching Kit Program
.
About the NVIDIA Deep Learning Institute
The
NVIDIA Deep Learning Institute
(DLI) offers resources for diverse learning needs, from learning materials to self-paced and live training to educator programs. Individuals, teams, organizations, educators, and students can now find everything they need to advance their knowledge in AI, accelerated computing, accelerated data science, graphics, simulation, and more. | https://developer.nvidia.com/zh-cn/blog/nvidia-deep-learning-institute-releases-new-generative-ai-teaching-kit/ | NVIDIA 深度学习培训中心发布全新生成式 AI 教学套件 | 生成式 AI 由先进的机器学习模型和深度神经网络提供动力支持,通过生成新内容并驱动医疗、金融和娱乐等领域的创新来变革各行各业。
NVIDIA 正凭借其先进的 GPU 架构和软件生态系统(例如
H100 Tensor Core GPU
和 CUDA 平台),引领这一转型,该平台可优化生成式模型的开发和部署。
NVIDIA NIM
可增强 AI 推理任务的效率和可扩展性,从而实现跨各种计算环境的快速部署和迭代,并加速生成式 AI 应用的发展。
生成式 AI 教育的重要性
随着生成式 AI 模型(例如 GANs 和 Transformers)变得越来越复杂,对能够开发、改进和合乎道德地部署这些技术的熟练专业人员的需求也在不断增长。生成式 AI 方面坚实的教育基础可以让学生掌握在内容创作、药物研发和自主系统等领域进行创新所需的实践技能和理论知识。
由于生成式 AI 在几乎每个行业中的作用都在迅速扩展,因此学院和大学教育在生成式 AI 中至关重要。通过将生成式 AI 集成到其课程中,大学可以培养新一代 AI 研究人员、工程师和思想领袖,以推动该领域的发展,并应对与 AI 驱动的创新相关的复杂挑战。
新的生成式 AI 教学套件,由 NVIDIA 深度学习研究所(DLI)和达特茅斯学院合作开发,旨在为新一代专业人士提供这一快速发展领域所需的技能和知识。
借助这一全面的教学资源,教育工作者能够为学生提供先进的工具、框架和实践练习,这些工具、框架和实践练习对于理解生成式 AI 和大语言模型开发和部署的复杂性至关重要。通过让学生深入了解生成式 AI 技术,该教学套件使教育工作者能够在 AI 驱动的行业中培养未来的创新和创造力。
随着学生向劳动力过渡,他们将更好地准备应对全球挑战,从改善医疗和科学到推进可持续技术。
达特茅斯学院(Dartmouth College)工程系兼职助理教授 Sam Raymond 在编写课程内容时发挥了重要作用。Raymond 说:“让学生掌握理解并开发自己的 GPU 加速生成式 AI 应用程序的技能是我们的主要目标。我相信,学习本课程的学生将在就业市场中占据显著优势,并有助于缩小当今各行各业的知识差距。
生成式 AI 教学套件概述
所有教学套件均包含教学幻灯片、动手实验室、Jupyter notebooks、知识检查和免费的在线自定进度课程,为学生提供能力证书。这些课程均经过全面打包,ready for课堂和课程整合。
图 1. 生成式 AI 教学套件包括免费访问所有 DLI 在线自定进度课程,并为学生提供证书。
生成式 AI 教学套件旨在介绍自然语言处理(NLP)的基本概念,这些概念对于更广泛地理解 LLM 和生成式 AI 至关重要。然后,我们使用 NVIDIA GPUs、工具和服务以及开源库和框架来检查 LLM 的关键概念。一个 GPT 模型的简单预训练练习展示了云端的基本训练流程。
该套件还涵盖扩散模型,以探索生成式 AI 在图像和视频生成中的应用。然后,介绍多模态 LLM 架构,重点介绍如何在使用 NVIDIA NeMo 框架进行微调的过程中优化各种 LLM 架构。此外,还讨论了推理方面的进展以及聊天机器人等工具的优化,这些工具使用 NVIDIA NIM、NeMo Guardrails、TensorRT 和 TensorRT-LLM 来提高生产环境的效率和可扩展性。
生成式 AI 教学套件包含结合理论、算法、编程和示例的重点模块。这第一版包括以下模块:
生成式 AI 简介
生成式 AI 中的扩散模型
LLM 编排
未来版本的工具包将提供更多的模块。
这些内容对各个领域的教育工作者都很有价值,尤其是计算机科学和工程领域的教育工作者。其模块化设计使讲师能够根据学生的具体需求定制课程,并打造定制化的学习体验。来自世界各地的精选教授已经可以抢先体验第一版模块。“我非常希望将 Generative AI 教学套件集成到我的 AI 在材料工程课堂中,”卡内基梅隆大学材料科学与工程系助理教授 Mohadeseh Taheri-Mousavi 说,“内容全面的教学笔记包含结构良好的编码实验室和来自各个领域的示例,以及与证书相关的在线课程,将为我的学生提供前沿资源,帮助他们深入了解 Generative AI 技术在各个领域的广泛应用。”
伦敦帝国理工学院大脑科学系的Professor Payam Barnaghi教授使用电子病历和医疗健康数据在研究中使用LLMs和generative AI。“NVIDIA Generative AI Teaching Kit 内容是学生学习 AI 和 machine learning 最新发展的绝佳资源,”Barnaghi 说:“由于能够提前访问首批模块,我计划以这些内容为基础,在神经科学课程的机器学习中教授高级主题。
鉴于生成式 AI 的快速发展,教育工作者可以预期教学材料会随着时间的推移而更新。NVIDIA 致力于提供高端教育资源,并欢迎反馈,以便不断改进内容。
开始使用
教育工作者可以加入
NVIDIA DLI 教学套件计划
,免费获取第一版 Generative AI 教学套件和其他套件。
关于 NVIDIA 深度学习培训中心
NVIDIA 深度学习研究所
(DLI) 为各种学习需求提供资源,从学习材料到自定进度和实时培训,再到教育工作者计划。个人、团队、组织、教育工作者和学生现在可以找到所需的一切资源,以提升他们在 AI、加速计算、加速数据科学、图形、模拟等方面的知识水平。 |
https://developer.nvidia.com/blog/json-lines-reading-with-pandas-100x-faster-using-nvidia-cudf/ | JSON Lines Reading with pandas 100x Faster Using NVIDIA cuDF | JSON is a widely adopted format for text-based information working interoperably between systems, most commonly in web applications and large language models (LLMs). While the JSON format is human-readable, it is complex to process with data science and data engineering tools.
JSON data often takes the form of newline-delimited
JSON Lines
(also known as NDJSON) to represent multiple records in a dataset. Reading JSON Lines data into a dataframe is a common first step in data processing.
In this post, we compare the performance and functionality of Python APIs for converting JSON Lines data into a dataframe using the following libraries:
pandas
DuckDB
pyarrow
RAPIDS cuDF pandas Accelerator Mode
We demonstrate good scaling performance and high data processing throughput with the JSON reader in cudf.pandas, especially for data with a complex schema. We also review the versatile set of JSON reader options in cuDF that improve compatibility with Apache Spark and empower Python users to handle quote normalization, invalid records, mixed types and other JSON anomalies.
JSON parsing versus JSON reading
When it comes to JSON data processing, it’s important to distinguish between parsing and reading.
JSON parsers
JSON parsers, such as
simdjson
, convert a buffer of character data into a vector of tokens. These tokens represent the logical components of JSON data, including field names, values, array begin/end, and map begin/end. Parsing is a critical first step in extracting information from JSON data, and significant research has been dedicated to reaching high parsing throughput.
To use information from JSON Lines in data processing pipelines, the tokens must often be converted into a Dataframe or columnar format, such as
Apache Arrow
.
JSON readers
JSON readers, such as
pandas.read_json
convert input character data into a Dataframe organized by columns and rows. The reader process begins with a parsing step and then detects record boundaries, manages the top-level columns and nested struct or list child columns, handles missing and null fields, infers data types, and more.
JSON readers convert unstructured character data into a structured Dataframe, making JSON data compatible with downstream applications.
JSON Lines reader benchmarking
JSON Lines is a flexible format for representing data. Here are some important properties of JSON data:
Number of records per file
Number of top level columns
Depth of struct or list nesting for each column
Data types of values
Distribution of string lengths
Fraction of missing keys
For this study, we held the record count fixed at 200K and swept the column count from 2 to 200, exploring a range of complex schemas. The four data types in use are as follows:
list<int>
and
list<str>
with two child elements
struct<int>
and
struct<str>
with a single child element
Table 1 shows the first two columns of the first two records for data types, including
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Data type
Example records
list<int>
{"c0":[848377,848377],"c1":[164802,164802],...\n{"c0":[732888,732888],"c1":[817331,817331],...
list<str>
{"c0":["FJéBCCBJD","FJéBCCBJD"],"c1":["CHJGGGGBé","CHJGGGGBé"],...\n{"c0":["DFéGHFéFD","DFéGHFéFD"],"c1":["FDFJJCJCD","FDFJJCJCD"],...
struct<int>
{"c0":{"c0":361398},"c1":{"c0":772836},...\n{"c0":{"c0":57414},"c1":{"c0":619350},...
struct<str>
{"c0":{"c0":"FBJGGCFGF"},"c1":{"c0":"ïâFFéâJéJ"},...\n{"c0":{"c0":"éJFHDHGGC"},"c1":{"c0":"FDâBBCCBJ"},...
Table 1. Example JSON Lines character data
Table 1 shows the first two columns of the first two records for data types, including
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Performance statistics were collected on the 25.02 branch of
cuDF
and with the following library versions: pandas 2.2.3, duckdb 1.1.3, and pyarrow 17.0.0. The execution hardware used an NVIDIA H100 Tensor Core 80 GB HBM3 GPU and Intel Xeon Platinum 8480CL CPU with 2TiB of RAM. Timing was collected from the third of three repetitions, to avoid initialization overhead and ensure that the input file data was present in the OS page cache.
In addition to the zero code change cudf.pandas, we also collected performance data from
pylibcudf
, a Python API for the
libcudf
CUDA C++ computation core. The runs with pylibcudf used a CUDA async memory resource through RAPIDS Memory Manager (RMM). Throughput values were computed using the JSONL input file size and the reader runtime of the third repetition.
Here are some examples from several Python libraries for invoking the JSON Lines reader:
# pandas and cudf.pandas
import pandas as pd
df = pd.read_json(file_path, lines=True)
# DuckDB
import duckdb
df = duckdb.read_json(file_path, format='newline_delimited')
# pyarrow
import pyarrow.json as paj
table = paj.read_json(file_path)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = plc.io.json.JsonReaderOptions.builder(s).lines(True).build()
df = plc.io.json.read_json(opt)
JSON Lines reader performance
Overall, we found a wide range of performance characteristics for the JSON readers available in Python, with overall runtimes varying from 1.5 seconds to almost 5 minutes.
Table 2 shows the sum of the timing data from seven JSON reader configurations when processing 28 input files with a total file size of 8.2 GB:
Using cudf.pandas for JSON reading shows about 133x speedup over pandas with the default engine and 60x speedup over pandas with the pyarrow engine.
DuckDB and pyarrow show good performance as well, with about 60 seconds total time for DuckDB, and 6.9 seconds for pyarrow with block size tuning.
The fastest time comes from pylibcudf at 1.5 seconds, showing about 4.6x speedup over pyarrow with
block_size
tuning.
Reader label
Benchmark runtime
(sec)
Comment
cudf.pandas
2.1
Using
-m cudf.pandas
from the command line
pylibcudf
1.5
pandas
281
pandas-pa
130
Using the pyarrow engine
DuckDB
62.9
pyarrow
15.2
pyarrow-20MB
6.9
Using a 20 MB
block_size
value
Table 2. Sum of timing data for JSON reading of 28 input files
Table 2 includes the input columns counts 2, 5, 10, 20, 50, 100, and 200, and the data types
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Zooming into the data by data type and column count, we found that JSON reader performance varies over a wide range based on the input data details and the data processing library, from 40 MB/s to 3 GB/s for CPU-based libraries and from 2–6 GB/s for the GPU-based cuDF.
Figure 1 shows the data processing throughput based on input size for 200K rows and 2–200 columns, with input data sizes varying from about 10 MB to 1.5 GB.
Figure 1. JSON Lines reading throughput by number of input columns
In Figure 1, each subplot corresponds to the data type of the input columns. File size annotations align to the x-axis.
For cudf.pandas
read_json
, we observed 2–5 GB/s throughput that increased with larger column count and input data size. We also found that the column data type does not significantly affect throughput. The pylibcudf library shows about 1–2 GB/s higher throughput than cuDF-python, due to lower Python and pandas semantic overhead.
For pandas
read_json
, we measured about 40–50 MB/s throughput for the default UltraJSON engine (labeled as “pandas-uj”). Using the pyarrow engine (
engine="pyarrow"
) provided a boost up to 70–100 MB/s due to faster parsing (pandas-pa). The pandas JSON reader performance appears to be limited by the need to create Python list and dictionary objects for each element in the table.
For DuckDB
read_json
, we found about 0.5–1 GB/s throughput for
list<str>
and
struct<str>
processing with lower values <0.2 GB/s for
list<int>
and
struct<int>
. Data processing throughput remained steady over the range of column counts.
For pyarrow
read_json
, we measured data processing throughputs up to 2–3 GB/s for 5-20 columns, and lower throughput values as column count increased to 50 and above. We found data type to have a smaller impact on reader performance than column count and input data size. For column counts of 200 and a record size of ~5 KB per row, throughput dropped to about 0.6 GB/s.
Raising the pyarrow
block_size
reader option to 20 MB (pyarrow-20MB) led to increased throughput for column counts 100 or more, while also degrading throughput for column counts 50 or fewer.
Overall, DuckDB primarily showed throughput variability due to data types, whereas cuDF and pyarrow primarily showed throughput variability due to column count and input data size. The GPU-based cudf.pandas and pylibcudf showed the highest data processing throughput for complex list and struct schema, especially for input data sizes >50 MB.
JSON Lines reader options
Given the text-based nature of the JSON format, JSON data often includes anomalies that result in invalid JSON records or don’t map well to a dataframe. Some of these JSON anomalies include single-quoted fields, cropped or corrupted records, and mixed struct or list types. When these patterns occur in your data, they can break the JSON reader step in your pipeline.
Here are some examples of these JSON anomalies:
# 'Single quotes'
# field name "a" uses single quotes instead of double quotes
s = '{"a":0}\n{\'a\':0}\n{"a":0}\n'
# ‘Invalid records'
# the second record is invalid
s = '{"a":0}\n{"a"\n{"a":0}\n'
# 'Mixed types'
# column "a" switches between list and map
s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n'
To unlock advanced JSON reader options in cuDF, we recommend incorporating cuDF-Python (
import cudf
) and pylibcudf into your workflow. If single-quoted field names or string values appear in your data, cuDF provides a reader option to normalize single quotes into double quotes. cuDF supports this feature to provide compatibility with the
allowSingleQuotes
option that is enabled by default in Apache Spark.
If invalid records appear in your data, cuDF and DuckDB both provide error recovery options to replace these records with null. When error handling is enabled, if a record generates a parsing error, all of the columns for the corresponding row are marked as null.
If mixed list and struct values are associated with the same field name in your data, cuDF provides a dtype schema override option to coerce the datatype to string. DuckDB uses a similar approach by inferring a
JSON
data type.
For mixed types, the pandas library has perhaps the most faithful approach, using Python list and dictionary objects to represent the input data.
Here is an example in cuDF-Python and pylibcudf that shows the reader options, including a dtype schema override for column name “a”. For more information, see
cudf.read_json
and
pylibcudf.io.json.read_json
.
For pylibcudf, the
JsonReaderOptions
object can be configured either before or after the
build
function.
# cuDF-python
import cudf
df = cudf.read_json(
file_path,
dtype={"a":str},
on_bad_lines='recover',
lines=True,
normalize_single_quotes=True
)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = (
plc.io.json.JsonReaderOptions.builder(s)
.lines(True)
.dtypes([("a",plc.types.DataType(plc.types.TypeId.STRING), [])])
.recovery_mode(plc.io.types.JSONRecoveryMode.RECOVER_WITH_NULL)
.normalize_single_quotes(True)
.build()
)
df = plc.io.json.read_json(opt)
Table 3 summarizes the behavior of several JSON readers with Python APIs for a few common JSON anomalies. Crosses denote that the reader function raised an exception, and checkmarks denote that the library successfully returned a Dataframe. These results may change in future versions of the libraries.
Single quotes
Invalid records
Mixed types
cuDF-Python, pylibcudf
✔️ Normalize to double quotes
✔️ Set to null
✔️ Represent as a string
pandas
❌ Exception
❌ Exception
✔️ Represent as a Python object
pandas (
engine="pyarrow
“
)
❌ Exception
❌ Exception
❌ Exception
DuckDB
❌ Exception
✔️ Set to null
✔️ Represent as a JSON string-like type
pyarrow
❌ Exception
❌ Exception
❌ Exception
Table 3. JSON reader outcomes when reading JSONL files with anomalies including: single quotes, mixed types and invalid records
cuDF supports several additional JSON reader options that are critical for compatibility with Apache Spark conventions, and now are available to Python users as well. Some of these options include:
Validation rules for numbers and strings
Custom record delimiters
Column pruning by the schema provided in dtype
Customization of
NaN
values
For more information, see the libcudf C++ API documentation on
json_reader_options
.
For more information about multi-source reading for efficiently processing many smaller JSON Lines files, or byte-range support for breaking up large JSON Lines files, see
GPU-Accelerated JSON Data Processing with RAPIDS
.
Summary
RAPIDS cuDF provides powerful, flexible, and accelerated tools for working with JSON data in Python.
GPU-accelerated JSON data processing is also available in RAPIDS Accelerator For Apache Spark, starting in the 24.12 release. For information, see
Accelerating JSON Processing on Apache Spark with GPUs
.
For more information, see the following resources:
cuDF documentation
/rapidsai/cudf
GitHub repo
RAPIDS Docker containers
(available for releases and nightly builds)
Accelerate Data Science Workflows with Zero Code Changes
DLI course
Mastering the cudf.pandas Profiler for GPU Acceleration | https://developer.nvidia.com/zh-cn/blog/json-lines-reading-with-pandas-100x-faster-using-nvidia-cudf/ | 使用 NVIDIA cuDF,pandas 读取 JSON 行文件速度提升100倍 | JSON 是一种广泛采用的格式,用于在系统之间 (通常用于 Web 应用和大语言模型 (LLMs)) 以互操作方式运行的基于文本的信息。虽然 JSON 格式是人类可读的,但使用数据科学和数据工程工具进行处理十分复杂。
JSON
数据通常采用换行分隔的 JSON 行
(也称为 NDJSON) 的形式来表示数据集中的多个记录。将 JSON 行数据读入数据帧是数据处理中常见的第一步。
在本文中,我们比较了使用以下库将 JSON 行数据转换为数据帧的 Python API 的性能和功能:
pandas
DuckDB
pyarrow
RAPIDS cuDF pandas
加速器模式
我们使用 cudf.pandas 中的 JSON 读取器展示了良好的扩展性能和高数据处理吞吐量,特别是对于具有复杂模式的数据。我们还审查了 cuDF 中的一组通用 JSON 读取器选项,这些选项可提高与 Apache Spark 的兼容性,并使 Python 用户能够处理引文归一化、无效记录、混合类型和其他 JSON 异常。
JSON 解析与 JSON 读取
当涉及到 JSON 数据处理时,区分解析和读取非常重要。
JSON 解析器
JSON 解析器 (例如
simdjson
) 可将字符数据缓冲区转换为令牌向量。这些令牌代表 JSON 数据的逻辑组件,包括字段名、值、数组开始/结束和映射开始/结束。解析是从 JSON 数据中提取信息的关键第一步,并且我们致力于实现高解析吞吐量。
要在数据处理工作流中使用来自 JSON 行的信息,必须经常将令牌转换为 Dataframe 或列式格式,例如
Apache Arrow
。
JSON 阅读器
JSON 读取器 (例如
pandas.read_json
) 将输入字符数据转换为按列和行排列的 Dataframe。读取器流程从解析步骤开始,然后检测记录边界、管理顶层列和嵌套结构体或列表子列、处理缺失和空字段、推理数据类型等。
JSON 读取器可将非结构化字符数据转换为结构化 Dataframe,从而使 JSON 数据与下游应用兼容。
JSON Lines 读取器基准测试
JSON Lines 是一种灵活的数据表示格式。以下是 JSON 数据的一些重要属性:
每个文件的记录数
顶层列的数量
每列的结构体或列表嵌套深度
值的数据类型
字符串长度分布
缺少密钥的百分比
在这项研究中,我们将记录计数固定为 200K,并将列计数从 2 扫至 200,同时探索了一系列复杂的模式。使用的四种数据类型如下所示:
包含两个子元素的
list<int>
和
list<str>
包含单个子元素的
struct<int>
和
struct<str>
表 1 显示了前两列数据类型记录的前两列,包括
list<int>
、
list<str>
、
struct<int>
和
struct<str>
。
数据类型
记录示例
list<int>
{"c0":[848377,848377],"c1":[164802,164802],...\n{"c0":[732888,732888],"c1":[817331,817331],...
list<str>
{"c0":["FJéBCCBJD","FJéBCCBJD"],"c1":["CHJGGGGBé","CHJGGGGBé"],...\n{"c0":["DFéGHFéFD","DFéGHFéFD"],"c1":["FDFJJCJCD","FDFJJCJCD"],...
struct<int>
{"c0":{"c0":361398},"c1":{"c0":772836},...\n{"c0":{"c0":57414},"c1":{"c0":619350},...
struct<str>
{"c0":{"c0":"FBJGGCFGF"},"c1":{"c0":"ïâFFéâJéJ"},...\n{"c0":{"c0":"éJFHDHGGC"},"c1":{"c0":"FDâBBCCBJ"},...
表 1. JSON 行字符数据示例
表 1 显示了前两列数据类型记录的前两列,包括
list<int>
、
list<str>
、
struct<int>
和
struct<str>
性能统计数据在
cuDF
的 25.02 分支上收集,并包含以下库版本:pandas 2.2.3、duckdb 1.1.3 和 pyarrow 17.0.0。执行硬件使用 NVIDIA H100 Tensor Core 80 GB HBM3 GPU 和 Intel Xeon Platinum 8480CL CPU 以及 2TiB 的 RAM。计时数据从三次重复的第三次中收集,以避免初始化开销,并确保输入文件数据存在于操作系统页面缓存中。
除了零代码更改 cudf.pandas 之外,我们还从
py
libcudf
(用于
libcudf
CUDA C++计算核心的 Python API) 收集了性能数据。
py
libcudf
运行通过 RAPIDS 内存管理器 (RMM) 使用 CUDA 异步内存资源。使用 JSONL 输入文件大小和第三次重复的读取器运行时计算吞吐量值。
以下是来自多个 Python 库的一些调用 JSON 行读取器的示例:
# pandas and cudf.pandas
import pandas as pd
df = pd.read_json(file_path, lines=True)
# DuckDB
import duckdb
df = duckdb.read_json(file_path, format='newline_delimited')
# pyarrow
import pyarrow.json as paj
table = paj.read_json(file_path)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = plc.io.json.JsonReaderOptions.builder(s).lines(True).build()
df = plc.io.json.read_json(opt)
JSON 行读取器性能
总体而言,我们发现 Python 中的 JSON 读取器具有各种性能特征,总体运行时间从 1.5 秒到近 5 分钟不等。
表 2 显示了在处理 28 个输入文件 (总文件大小为 8.2 GB) 时,来自 7 个 JSON 读取器配置的定时数据的总和:
使用 cudf.pandas 进行 JSON 读取显示,与使用默认引擎的 pandas 相比,速度提高了 133 倍,使用 pyarrow 引擎的 pandas 速度提高了 60 倍。
DuckDB 和 pyarrow 也表现出良好的性能,在调整块大小时,DuckDB 的总时间约为 60 秒,而 pyarrow 的总时间为 6.9 秒。
pylibcudf 生成的最快时间为 1.5 秒,与 pyarrow 相比,使用
block_size
调优的速度提高了约 4.6 倍。
阅读器标签
基准运行时
(秒)
评论
cudf.pandas
2.1
在命令行中使用
-m cudf.pandas
pylibcudf
1.5
pandas
271
pandas-pa
130
使用 pyarrow 引擎
DuckDB
62.9
pyarrow
15.2
pyarrow-20MB
6.9
使用 20 MB 的
block_size
值
表 2、用于 JSON 读取 28 个输入文件的定时数据总和
表 2 包括输入列计数 2、5、10、20、50、100 和 200,以及数据类型
list<int>
、
list<str>
、
struct<int>
和
struct<str>
通过按数据类型和列数量放大数据,我们发现 JSON 读取器的性能因输入数据详细信息和数据处理库的不同而差异很大,基于 CPU 的库的性能介于 40 MB/s 到 3 GB/s 之间,而基于 GPU 的 cuDF 的性能介于 2–6 GB/s 之间。
图 1 显示了基于 200K 行、2–200 列输入大小的数据处理吞吐量,输入数据大小在约 10 MB 到 1.5 GB 之间变化。
图 1. JSON Lines 读取吞吐量按输入列数量
在图 1 中,每个子图均对应输入列的数据类型。文件大小标注与 x 轴对齐。
对于
cudf.pandas read_json
,我们观察到,随着列数量和输入数据大小的增加,吞吐量增加了 2–5 GB/秒。我们还发现,列数据类型不会对吞吐量产生重大影响。由于 Python 和 pandas 语义用度较低,pylibcudf 库的吞吐量比 cuDF-python 高约 1–2 GB/秒。
对于
pandas read_json
,我们测量了默认 UltraJSON 引擎 (标记为“pandas-uj”) 的吞吐量约为 40–50 MB/s。由于解析速度更快 (pandas-pa),使用 pyarrow 引擎 (
engine="pyarrow"
) 可将速度提升高达 70–100 MB/s。由于需要为表中的每个元素创建 Python 列表和字典对象,因此 pandas JSON 读取器的性能似乎受到限制。
对于 DuckDB
read_json
,我们发现
list<str>
和
struct<str>
处理的吞吐量约为 0.5–1 GB/s,而
list<int>
和
struct<int>
的较低值 < 0.2 GB/s。数据处理吞吐量在列数量范围内保持稳定。
对于 pyarrow
read_json
,我们测量了 5-20 列的高达 2–3 GB/s 的数据处理吞吐量,以及随着列数量增加到 50 及以上而降低的吞吐量值。我们发现,与列数量和输入数据大小相比,数据类型对读取器性能的影响较小。如果列数量为 200,且每行的记录大小约为 5 KB,吞吐量将下降到约 0.6 GB/s。
将 pyarrow
block_size
reader 选项提升至 20 MB (pyarrow-20MB) 会导致列数量增加 100 或以上的吞吐量增加,同时还会降低 50 或以下列数量的吞吐量。
总体而言,DuckDB 主要因数据类型而显示吞吐量可变性,而 cuDF 和 pyarrow 主要因列数量和输入数据大小而显示吞吐量可变性。基于 GPU 的 cudf.pandas 和 pylibcudf 为复杂列表和结构模式(尤其是输入数据大小 > 50 MB)提供了超高的数据处理吞吐量。
JSON 行读取器选项
鉴于 JSON 格式基于文本的特性,JSON 数据通常包含异常,导致 JSON 记录无效或无法很好地映射到数据帧。其中一些 JSON 异常包括单引号字段、已裁剪或损坏的记录,以及混合结构或列表类型。当数据中出现这些模式时,它们可能会中断工作流中的 JSON 读取器步骤。
以下是这些 JSON 异常的一些示例:
# 'Single quotes'
# field name "a" uses single quotes instead of double quotes
s = '{"a":0}\n{\'a\':0}\n{"a":0}\n'
# ‘Invalid records'
# the second record is invalid
s = '{"a":0}\n{"a"\n{"a":0}\n'
# 'Mixed types'
# column "a" switches between list and map
s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n'
要在 cuDF 中解锁高级 JSON 读取器选项,我们建议您将 cuDF-Python (
import cudf
) 和 pylibcudf 集成到您的工作流中。如果数据中出现单引号字段名称或字符串值,cuDF 会提供读取器选项,用于将单引号归一化为双引号。cuDF 支持此功能,可与 Apache Spark 中默认启用的
allowSingleQuotes
选项兼容。
如果您的数据中出现无效记录,cuDF 和 DuckDB 都会提供错误恢复选项,将这些记录替换为 null。启用错误处理后,如果记录生成解析错误,则相应行的所有列均标记为 null。
如果混合 list 和 struct 值与数据中的相同字段名相关联,cuDF 提供一个 dtype 模式覆盖选项,以将数据类型强制转换为字符串。DuckDB 使用类似的方法来推理
JSON
数据类型。
对于混合类型,pandas 库可能是最可靠的方法,使用 Python 列表和字典对象来表示输入数据。
以下是 cuDF-Python 和 pylibcudf 中的示例,其中显示了读取器选项,包括列名称“a”的 dtype 模式覆盖。如需了解更多信息,请参阅
cudf.read_json
和
pylibcudf.io.json.read_json
。
对于 pylibcudf,可以在
build
函数之前或之后配置
JsonReaderOptions
对象。
# cuDF-python
import cudf
df = cudf.read_json(
file_path,
dtype={"a":str},
on_bad_lines='recover',
lines=True,
normalize_single_quotes=True
)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = (
plc.io.json.JsonReaderOptions.builder(s)
.lines(True)
.dtypes([("a",plc.types.DataType(plc.types.TypeId.STRING), [])])
.recovery_mode(plc.io.types.JSONRecoveryMode.RECOVER_WITH_NULL)
.normalize_single_quotes(True)
.build()
)
df = plc.io.json.read_json(opt)
表 3 总结了使用 Python API 的多个 JSON 读取器针对一些常见 JSON 异常的行为。交叉表示读取器函数引发异常,勾号表示库已成功返回 Dataframe。在未来版本的库中,这些结果可能会发生变化。
单引号
无效记录
混合类型
cuDF-Python、pylibcudf
归一化为双引号
设置为 null
表示为字符串
pandas
*例外
*例外
表示为 Python 对象
pandas (
engine="pyarrow
“
)
*例外
*例外
*例外
DuckDB
*例外
设置为 null
表示为类似 JSON 字符串的类型
pyarrow
*例外
*例外
*例外
表 3、读取异常情况 (包括单引号、混合类型和无效记录) 的 JSONL 文件时的 JSON 读取器结果
cuDF 支持多个额外的 JSON 读取器选项,这些选项对于与 Apache Spark 惯例的兼容性至关重要,现在也可供 Python 用户使用。其中一些选项包括:
数字和字符串的验证规则
自定义记录分隔符
根据 dtype 中提供的模式进行列剪枝
自定义
NaN
值
有关更多信息,请参阅有关
json_reader_options
的 libcudf C++ API 文档。
有关多源读取以高效处理许多较小的 JSON 行文件的更多信息,或有关分解大型 JSON 行文件的字节范围支持的更多信息,请参阅使用
RAPIDS 进行 GPU 加速的 JSON 数据处理
。
总结
RAPIDS cuDF 为在 Python 中处理 JSON 数据提供了功能强大、灵活且加速的工具。
从 24.12 版本开始,您还可以在适用于 Apache Spark 的 RAPIDS Accelerator 中使用 GPU 加速的 JSON 数据处理功能。有关信息,请参阅
使用 GPU 在 Apache Spark 上加速 JSON 处理
。
有关更多信息,请参阅以下资源:
cuDF 文档
/rapidsai/cudf
GitHub 存储库
RAPIDS Docker 容器 (可用于版本和夜间构建)
零代码更改加速数据科学工作流程
DLI 课程
掌握用于 GPU 加速的 cudf.pandas Profiler |
https://developer.nvidia.com/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/ | Understanding the Language of Life’s Biomolecules Across Evolution at a New Scale with Evo 2 | AI has evolved
from an experimental curiosity to a driving force within biological research. The convergence of deep learning algorithms, massive omics datasets, and automated laboratory workflows has allowed scientists to tackle problems once thought intractable—from rapid protein structure prediction to generative drug design, increasing the need for
AI literacy among scientists
. With this momentum, we find ourselves on the cusp of the next paradigm shift: the emergence of powerful AI foundation models purpose-built for biology.
These new models promise to unify disparate data sources—genomic sequences, RNA and proteomic profiles, and, in some cases, scientific literature—into a single, coherent understanding of life at the molecular, cellular, and systems levels. Learning biology’s language and structure opens doors to transformative applications, such as smarter drug discovery, rational enzyme design, and disease mechanism elucidation.
As we set the stage for this next wave of AI-driven breakthroughs, it is clear that these foundation models will not merely accelerate progress; they stand poised to redefine what is possible in biological research.
Test
Evo 2 as an NVIDIA BioNeMo NIM microservice
for free with the
/NVIDIA/bionemo-examples
example notebook.
Explore a complete
reference workflow for protein design
.
Start training Evo 2 on your data today in
BioNeMo Framework
.
Stay up to date with the
latest NVIDIA BioNeMo platform updates
.
A leap forward in sequence modeling and design from molecular to genome-scale
The
first Evo model
from November 2024 represented a groundbreaking milestone in genomic research, introducing a foundation model capable of analyzing and generating biological sequences across DNA, RNA, and proteins.
Published at a time when most models were restricted to single modalities or short contexts, Evo is known for its ability to operate across scales—ranging from molecular to genomic—using a unified approach. Trained on 2.7M prokaryotic and phage genomes, encompassing 300B nucleotide tokens, Evo delivered single-nucleotide resolution across many biological evolution and function tasks.
At the core of Evo’s success is its innovative
StripedHyena
architecture (Figure 1), a hybrid model combining 29 Hyena layers, a new type of deep learning architecture designed to handle long sequences of information without relying on traditional attention mechanisms that are common to Transformer architectures. Instead it uses a combination of convolutional filters and gates.
This design overcame the limitations of traditional Transformer models, enabling Evo to handle long contexts of up to 131,072 tokens efficiently. The result was a model capable of connecting small sequence changes to system-wide and organism-level impacts, bridging the gap between molecular biology and evolutionary genomics.
Figure 1
.
Evo and Evo 2 AI model architecture
Evo’s predictive capabilities set new standards for biological modeling. It achieved competitive performance in several zero-shot tasks, including predicting the fitness effects of mutations on proteins, non-coding RNAs, and regulatory DNA, providing invaluable insights for synthetic biology and precision medicine.
Evo also demonstrated remarkable generative capabilities, designing functional CRISPR-Cas systems and transposons. These outputs were validated experimentally, proving that Evo could predict and design novel biological systems with real-world utility.
Evo represents a notable advancement in
integrating multimodal and multiscale biological understanding into a single model
. Its ability to generate genome-scale sequences and predict gene essentiality across entire genomes marked a leap forward in our capacity to analyze and engineer life.
Evo’s milestones were not just its technical achievements but also its vision. This unified framework combined biology’s vast complexity with cutting-edge AI to accelerate discovery and innovation in life sciences.
Learning the language of life across evolution
Evo 2 is the next generation of this line of research in genomic modeling, building on the success of Evo with expanded data, enhanced architecture, and superior performance.
Evo 2 can provide insights into three essential biomolecules—DNA, RNA, and protein— and all three domains of life: Eukarya, Prokarya, and Archaea. Trained on a dataset of 8.85T nucleotides from 15,032 eukaryotic genomes and 113,379 prokaryotic genomes, this training dataset covers diverse species, enabling unprecedented cross-species generalization and significantly broadening its scope compared to Evo, which focused solely on prokaryotic genomes.
Evo 2 uses a new and improved
StripedHyena 2
architecture, extended up to 40B parameters, enhancing the model’s training efficiency and ability to capture long-range dependencies with context lengths of 1M tokens. StripedHyena 2, thanks to its multihybrid design based on convolutions, trains significantly faster than Transformers and other hybrid models using linear attention or state-space models.
The largest Evo 2 model was trained using 2,048
NVIDIA H100 GPUs
using
NVIDIA DGX Cloud
on AWS. As part of
NVIDIA’s partnership with Arc
, they gained access to this high-performance, fully managed AI platform optimized for large-scale, distributed training with NVIDIA AI software and expertise.
These advances mark a significant increase from Evo’s 7B parameters and a 131,000-token context length, positioning Evo 2 as a leader in multimodal and multiscale biological modeling (Table 1).
Feature
Evo
Evo 2
Genomic Training Data
Bacterial
+ bacteriophage
(300B nucleotides)
All domains of life
+ bacteriophage
(9T nucleotides)
Model Parameters
7B
7B + 40B
Context Length
131,072 tokens
Up to 1,048,576 tokens
Modalities
DNA, RNA, protein
DNA, RNA, protein
Safety
Viruses of Eukaryotes excluded
Viruses of Eukaryotes excluded
Applications
Limited cross-species tasks
Broad cross-species applications
Table 1.
Key features of Evo 2 and Evo
Evo 2’s expanded training data and refined architecture empower it to excel across various biological applications. Its multimodal design integrates DNA, RNA, and protein data, enabling zero-shot performance on tasks like mutation impact prediction and genome annotation. Evo 2 also fundamentally improves Evo by including eukaryotic genomes, enabling deeper insights into human diseases, agriculture, and environmental science.
Evo 2’s predictive capabilities outperform specialized models across diverse tasks:
Variant impact analysis
: Achieves state-of-the-art accuracy in predicting the functional effects of mutations across species zero-shot, including human and non-coding variants.
Gene essentiality
: Identifies essential genes in prokaryotic and eukaryotic genomes, validated against experimental datasets, bridging the gap between molecular and systems biology tasks.
Generative capabilities
: Designs complex biological systems, such as genome-scale prokaryotic and eukaryotic sequences, and the controllable design of chromatin accessibility, demonstrating new capabilities for biological design with real-world applicability.
Using the NVIDIA Evo 2 NIM microservice
The
NVIDIA Evo 2 NIM microservice
is useful for generating a variety of biological sequences, with an API that provides settings to adjust tokenization, sampling, and temperature parameters:
# Define JSON example human L1 retrotransposable element sequence
example = {
# nucleotide sequence to be analyzed
"sequence": "GAATAGGAACAGCTCCGGTCTACAGCTCCCAGCGTGAGCGACGCAGAAGACGGTGATTTCTGCATTTCCATCTGAGGTACCGGGTTCATCTCACTAGGGAGTGCCAGACAGTGGGCGCAGGCCAGTGTGTGTGCGCACCGTGCGCGAGCCGAAGCAGGGCGAGGCATTGCCTCACCTGGGAAGCGCAAGGGGTCAGGGAGTTCCCTTTCCGAGTCAAAGAAAGGGGTGATGGACGCACCTGGAAAATCGGGTCACTCCCACCCGAATATTGCGCTTTTCAGACCGGCTTAAGAAACGGCGCACCACGAGACTATATCCCACACCTGGCTCAGAGGGTCCTACGCCCACGGAATC",
"num_tokens": 102, # number of tokens to generate
"top_k": 4, # only predict top 4 most likely outcomes per token
"top_p": 1.0, # include 100% cumulative prob results in sampling
"temperature": 0.7, # add variability (creativity) to predictions
"
": True, # enable more diverse outputs
"enable_logits": False, # disable raw model output (logits)
}
# Retrieve the API key from the environment
key = os.getenv("NVCF_RUN_KEY")
# Send the example sequence and parameters to the Evo 2 API
r = requests.post(
# Example URL for the Evo 2 model API.
url=os.getenv("URL","https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate")
# Authorization headers to authenticate with the API
headers={"Authorization": f"Bearer {key}"},
# The data payload (sequence and parameters) sent as JSON
json=example,
)
For more information about the API output for various prompts, see the
NVIDIA BioNeMo Framework
documentation.
Evo 2 can also be fine-tuned using the open-source
NVIDIA BioNeMo Framework
, which offers robust tools for adapting pretrained models such as Evo 2 to specialized tasks in BioPharma:
# Prepare raw sequence data for training based on a YAML config file
preprocess_evo2 -c data_preproc_config.yaml
# Trains the Evo 2 model with preprocessed data and parallelism across multiple GPUs
torchrun --nproc-per-node=8 --no-python train_Evo 2 -d data_train_config.yaml --num-nodes=1 --devices=8 --max-steps=100 --val-check-interval=25 --experiment-dir=/workspace/bionemo2/model/checkpoints/example --seq-length=8192 --tensor-parallel-size=4 --pipeline-model-parallel-size=1 --context-parallel-size=2 --sequence-parallel --global-batch-size=8 --micro-batch-size=1 --model-size=7b --fp8 --tflops-callback
# Optional Fine-tuning: Add this argument to start from a pretrained model
# --ckpt-dir=/path/to/pretrained_checkpoint
Evo 2 and the future of AI in biology
AI is poised to rapidly transform biological research, enabling breakthroughs previously thought to be decades away. Evo 2 represents a significant leap forward in this evolution, introducing a genomic foundation model capable of analyzing and generating DNA, RNA, and protein sequences at unprecedented scales.
While Evo excelled in predicting mutation effects and gene expression in prokaryotes, the capabilities of Evo 2 are much broader, with enhanced cross-species generalization, making it a valuable tool for studying eukaryotic biology, human diseases, and evolutionary relationships.
Evo 2’s ability to perform zero-shot tasks, from identifying genes that drive cancer risk to designing complex biomolecular systems, showcases its versatility. Including long-context dependencies enables it to uncover patterns across genomes, providing multi-modal and multi-scale insights that are pivotal for advancements in precision medicine, agriculture, and synthetic biology.
As the field moves forward, models like Evo 2 set the stage for a future where AI deciphers life’s complexity and is also used to design new useful biological systems. These advancements align with broader trends in AI-driven science, where foundational models are tailored to domain-specific challenges, unlocking previously unattainable capabilities. Evo 2’s contributions signal a future where AI becomes an indispensable partner in decoding, designing, and reshaping the living world.
For more information about Evo 2, see the
technical report published by the Arc Institute
. Evo 2 is also available within the NVIDIA BioNeMo platform.
Acknowledgments
We’d like to thank the following contributors to the described research for their notable contributions to the ideation, writing, and figure design for this post:
Garyk Brixi
, genetics Ph.D. student at Stanford
Jerome Ku
, machine learning engineer working with the Arc Institute
Michael Poli
, founding scientist at Liquid AI and computer science Ph.D. student at Stanford
Greg Brockman
, co-founder and president of OpenAI
Eric Nguyen
, bioengineering Ph.D. student at Stanford
Brandon Yang
, co-founder of Cartesia AI and computer science Ph.D. student at Stanford (on leave)
Dave Burke
, chief technology officer at the Arc Institute
Hani Goodarzi
, core investigator at the Arc Institute and associate professor of biophysics and biochemistry at the University of California, San Francisco
Patrick Hsu
, co-founder of the Arc Institute, assistant professor of bioengineering, and Deb Faculty Fellow at the University of California, Berkeley
Brian Hie
, assistant professor of chemical engineering at Stanford University, Dieter Schwarz Foundation Stanford Data Science Faculty Fellow, innovation investigator at the Arc Institute, and leader at the Laboratory of Evolutionary Design at Stanford | https://developer.nvidia.com/zh-cn/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/ | 利用 Evo 2 以全新规模跨进化研究生命生物分子的语言 | AI 已从实验好奇心发展为生物学研究的驱动力。
深度学习算法、海量组学数据集和自动化实验室工作流程的融合使
科学家能够解决一度被认为棘手的问题
(从快速蛋白质结构预测到生成式药物设计),从而增加了科学家对 AI 素养的需求。在这一势头下,我们正处于下一次范式转变的边缘:专为生物学打造的强大 AI 基础模型的出现。
这些新模型有望将不同的数据源 (基因组序列、RNA 和蛋白质组谱,在某些情况下还包括科学文献) 统一为分子、细胞和系统级别上对生命的统一、一致的理解。学习生物学的语言和结构为变革性应用打开了大门,例如更智能的药物发现、合理的酶设计和疾病机制阐明。
在我们为下一波 AI 驱动的突破做好准备之际,这些基础模型显然不仅能够加速进展,还将重新定义生物学研究的可能性。
使用/NVIDIA/bionemo-examples
示例 notebook
免费将 Evo 2 测试为 NVIDIA BioNeMo NIM 微服务
。
探索蛋白质设计的完整参考工作流程
。
立即开始在
BioNeMo 框架
中使用您的数据训练 Evo 2
及时了解 NVIDIA BioNeMo 平台的最新动态。
序列建模和设计从分子规模到基因组规模的飞跃
2024 年 11 月推出的
首个 Evo 模型
是基因组研究领域的一个突破性里程碑,它引入了能够分析和生成跨 DNA、RNA 和蛋白质的生物序列的基础模型。
在发布 Evo 时,大多数模型都被限制在单一模式或简短背景下运行,而众所周知,它能够使用统一的方法跨规模 (从分子到基因组) 操作。Evo 基于 270 万个原核细胞和噬菌体基因组 (包含 300 亿个核酸令牌) 进行训练,在许多生物进化和功能任务中提供了单核酸分辨率。
Evo 成功的核心是其创新的 StripedHyena 架构 (图 1),这是一个结合了 29 个 Hyena 层的混合模型,这是一种新型深度学习架构,旨在处理长序列信息,而无需依赖 Transformer 架构中常见的传统注意机制。相反,它使用卷积过滤器和门的组合。
这种设计克服了传统 Transformer 模型的限制,使 Evo 能够高效处理多达 131,072 个令牌的长上下文。最终,该模型能够将微小的序列变化与系统级和有机体级的影响联系起来,弥合分子生物学与进化基因组学之间的差距。
图 1
.
.
.
Evo 和 Evo 2 AI 模型架构
Evo 的预测功能为生物建模树立了新的标准。它在多项零样本任务中取得了具有竞争力的表现,包括预测突变对蛋白质、非编码 RNAs 和调控 DNA 的适应性影响,为合成生物学和精准医学提供宝贵见解。
Evo 还展示了非凡的生成功能,设计了功能齐全的 CRISPR-Cas 系统和转座子。这些输出经过实验验证,证明了 Evo 可以预测和设计具有真实效用的新型生物系统。
Evo
代表着将多模态和多尺度生物理解集成到单个模型中的显著进步
。它能够生成基因组级序列并预测整个基因组的基因本质,这标志着我们分析和工程生命的能力实现了飞跃。
Evo 的里程碑式发展不仅体现在技术成就上,还体现在愿景上。这一统一框架将生物学的庞大复杂性与尖端 AI 相结合,加速了生命科学领域的发现和创新。
学习进化过程中的生命语言
Evo 2 是基因组建模领域这一系列研究的新一代产品,基于 Evo 在扩展数据、增强架构和卓越性能方面取得的成功而构建。
Evo 2 可以深入了解三种基本生物分子 (DNA、RNA 和蛋白质) 以及生命的所有三个领域:真核生物 (Eukarya)、原核生物 (Prokarya) 和古菌 (Archaea)。此训练数据集基于来自 15,032 个真核基因组和 113,379 个原核基因组的 8.85T 核酸数据集进行训练,涵盖不同物种,与仅关注原核基因组的 Evo 相比,实现了前所未有的跨物种泛化,并显著拓宽了其范围。
Evo 2 使用经过改进的全新
StripedHyena 2
架构,该架构可将参数扩展至 40B 个,从而提高模型的训练效率,以及使用 1M 个令牌的上下文长度捕获远程依赖项的能力。
StripedHyena 2
采用基于卷积的多混合设计,其训练速度明显快于 Transformers 和其他使用线性注意力或状态空间模型的混合模型。
最大的 Evo 2 模型使用 AWS 上的
NVIDIA DGX Cloud
使用 2,048 个
NVIDIA H100 GPUs
进行训练。作为
NVIDIA 与 Arc 合作
的一部分,他们获得了对这个高性能、完全托管的 AI 平台的访问权限,该平台利用 NVIDIA AI 软件和专业知识针对大规模分布式训练进行了优化。
这些进步标志着 Evo 的 7B 参数和 131,000 个令牌的上下文长度有了显著增加,使 Evo 2 成为多模态和多尺度生物建模领域的领导者 (表 1)。
特征
Evo
Evo 2
基因组训练数据
细菌 + bacteriophage(300B 核酸)
所有生命领域 + bacteriophage(9T nucleotides)
模型参数
70 亿
70 亿 400 亿
上下文长度
131072 个令牌
最多 1048576 个令牌
模式
DNA、RNA、蛋白质
DNA、RNA、蛋白质
安全性
不包括真核生物病毒
不包括真核生物病毒
应用
跨物种任务受限
广泛的跨物种应用
表 1、Evo 2 和 Evo 的主要特性
Evo 2 的扩展训练数据和优化架构使其能够在各种生物应用中表现卓越。其多模态设计集成了 DNA、RNA 和蛋白质数据,可在执行突变影响预测和基因组标注等任务时实现零采样性能。Evo 2 还通过纳入真核基因组从根本上改进了 Evo,使人们能够更深入地了解人类疾病、农业和环境科学。
Evo 2 的预测能力优于各种任务的专业模型:
变体影响分析
:在零样本预测物种突变 (包括人类和非编码变体) 的功能性影响方面实现出色的准确性。
基因本质
:识别原核和真核基因组中的基本基因,并通过实验数据集进行验证,弥合分子和系统生物学任务之间的差距。
生成功能
:设计复杂的生物系统(例如基因组级的原核细胞序列和真核细胞序列),以及染色质可访问性的可控设计,展示具有现实世界适用性的生物设计新功能。
使用 NVIDIA Evo 2 NIM 微服务
NVIDIA Evo 2 NIM 微服务
可用于生成各种生物序列,其 API 可提供用于调整标记化、采样和温度参数的设置:
# Define JSON example human L1 retrotransposable element sequence
example = {
# nucleotide sequence to be analyzed
"sequence": "GAATAGGAACAGCTCCGGTCTACAGCTCCCAGCGTGAGCGACGCAGAAGACGGTGATTTCTGCATTTCCATCTGAGGTACCGGGTTCATCTCACTAGGGAGTGCCAGACAGTGGGCGCAGGCCAGTGTGTGTGCGCACCGTGCGCGAGCCGAAGCAGGGCGAGGCATTGCCTCACCTGGGAAGCGCAAGGGGTCAGGGAGTTCCCTTTCCGAGTCAAAGAAAGGGGTGATGGACGCACCTGGAAAATCGGGTCACTCCCACCCGAATATTGCGCTTTTCAGACCGGCTTAAGAAACGGCGCACCACGAGACTATATCCCACACCTGGCTCAGAGGGTCCTACGCCCACGGAATC",
"num_tokens": 102, # number of tokens to generate
"top_k": 4, # only predict top 4 most likely outcomes per token
"top_p": 1.0, # include 100% cumulative prob results in sampling
"temperature": 0.7, # add variability (creativity) to predictions
"
": True, # enable more diverse outputs
"enable_logits": False, # disable raw model output (logits)
}
# Retrieve the API key from the environment
key = os.getenv("NVCF_RUN_KEY")
# Send the example sequence and parameters to the Evo 2 API
r = requests.post(
# Example URL for the Evo 2 model API.
url=os.getenv("URL","https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate")
# Authorization headers to authenticate with the API
headers={"Authorization": f"Bearer {key}"},
# The data payload (sequence and parameters) sent as JSON
json=example,
)
有关各种提示的 API 输出的更多信息,请参阅
NVIDIA BioNeMo 框架文档
。
此外,还可以使用开源
NVIDIA BioNeMo Framework
对 Evo 2 进行微调,该框架提供可靠的工具,可根据 BioPharma 中的专业任务调整预训练模型(例如 Evo 2):
# Prepare raw sequence data for training based on a YAML config file
preprocess_evo2 -c data_preproc_config.yaml
# Trains the Evo 2 model with preprocessed data and parallelism across multiple GPUs
torchrun --nproc-per-node=8 --no-python train_Evo 2 -d data_train_config.yaml --num-nodes=1 --devices=8 --max-steps=100 --val-check-interval=25 --experiment-dir=/workspace/bionemo2/model/checkpoints/example --seq-length=8192 --tensor-parallel-size=4 --pipeline-model-parallel-size=1 --context-parallel-size=2 --sequence-parallel --global-batch-size=8 --micro-batch-size=1 --model-size=7b --fp8 --tflops-callback
# Optional Fine-tuning: Add this argument to start from a pretrained model
# --ckpt-dir=/path/to/pretrained_checkpoint
Evo 2 和生物学领域的 AI 未来
AI 势必会迅速改变生物学研究,实现之前人们认为需要数十年才能实现的突破。Evo 2 代表了这一变革的重大飞跃,它引入了基因组基础模型,能够以超大规模分析和生成 DNA、RNA 和蛋白质序列。
虽然 Evo 在预测原核生物的突变效应和基因表达方面表现优异,但 Evo 2 的功能更为广泛,并增强了跨物种泛化,这使其成为研究真核生物学、人类疾病和进化关系的宝贵工具。
从识别导致癌症风险的基因到设计复杂的生物分子系统,Evo 2 执行零样本击任务的能力充分体现了其通用性。借助长上下文依赖项,AI 能够揭示跨基因组的模式,提供对精准医学、农业和合成生物学的进步至关重要的多模态和多尺度见解。
随着该领域的发展,像 Evo 2 这样的模型为 AI 解读生命复杂性的未来奠定了基础,同时还用于设计新的有用的生物系统。这些进步与 AI 驱动的科学领域的更广泛趋势是一致的,在这些趋势中,基础模型针对特定领域的挑战进行了定制,解锁了以前无法实现的功能。Evo 2 的贡献标志着 AI 将成为解码、设计和重塑生存世界不可或缺的合作伙伴。
有关 Evo 2 的更多信息,请参阅 Arc Institute 发布的技术报告。Evo 2 也可在 NVIDIA BioNeMo 平台中使用。
致谢
在此,
我们要感谢以下参与所述研究的人员,感谢他们为本文的构思、写作和图形设计做出的杰出贡献
:
Garyk Brixi
,斯坦福大学遗传学博士生
与 Arc Institute 合作的机器学习工程师
Jerome Ku
Michael Poli
,Liquid AI 的创始科学家兼斯坦福大学计算机科学博士生
Greg Brockman
,OpenAI 联合创始人兼总裁
Eric Nguyen,
斯坦福大学生物工程博士生
Brandon Yang
,Cartesia AI 联合创始人兼斯坦福大学计算机科学博士生 (休假中)
Dave Burke
,Arc Institute 首席
技术官
Hani Goodarzi
,Arc Institute 核心研究员,加州大学旧金山分校生物物理学和生物化学副教授
Patrick Hsu
,Arc Institute 联合创始人、生物工程助理教授兼加州大学伯克利分校 Deb 教职研究员
Brian Hie
– 斯坦福大学化学工程助理教授、Dieter Schwarz 基金会斯坦福大学数据科学教职人员、Arc Institute 创新研究员、斯坦福大学进化设计实验室负责人 |
https://developer.nvidia.com/blog/using-networkx-jaccard-similarity-and-cugraph-to-predict-your-next-favorite-movie/ | Using NetworkX, Jaccard Similarity, and cuGraph to Predict Your Next Favorite Movie | As the amount of data available to everyone in the world increases, the ability for a consumer to make informed decisions becomes increasingly difficult. Fortunately, large datasets are a beneficial component for recommendation systems, which can make a sometimes overwhelming decision much easier.
Graphs are excellent choices for modeling the relationships inherent in the data that fuel recommendation systems, and NetworkX is a very popular option that many data scientists turn to for graph analytics in Python. NetworkX is easy to learn and use, stocked with a wide breadth of graph algorithms, backed by a large and friendly community, and has copious examples available in notebooks, documents, Stack Overflow, and your favorite LLM.
However, to the disappointment of countless developers that broke into graph analytics with or even because of NetworkX, it famously falls short in performance at the scales used by typical recommendation systems.
This begs the question: Can an effective graph-based recommendation system be written in a few simple lines of Python? More generally, can developers and data scientists have both easy-to-use and high-performance graph analytics?
The answer to both questions is, “Yes.”
Read on to discover how you can create a simple and effective recommendation system in Python using NetworkX, a dataset of 33M movie reviews, the Jaccard Similarity algorithm, and the NVIDIA cuGraph back-end, which provides the >250x speedup necessary for modern large-scale graph data.
The MovieLens dataset
Here’s the most important part of the system: the data. The
MovieLens dataset
is generously made available for public download and is described in more detail in the
README file
. The full set includes about 331K anonymized users reviewing 87K movies, resulting in 34M ratings.
Figure 1. MovieLens data represented as a graph, where the individual ratings easily map to edges between user and movie nodes
Extracting recommendations from the data: bipartite graphs and Jaccard Similarity
The type of graph created from the MovieLens data is a
bipartite graph
because there are only two types of nodes: movies and users. The reviews (edges) can only occur between a user and a movie. This makes it particularly easy to apply the
Jaccard Similarity
algorithm to find similarities between movies.
Jaccard Similarity compares pairs of nodes and computes a similarity coefficient using their relationships in the graph. In this case, movies are related to each other based on how users have chosen to watch and review them.
Figure 2. Jaccard Similarity computes a similarity coefficient using the sizes of the sets of neighbors for the two nodes being compared
Based on the viewing preferences of users, you can see m3 is more similar to m2 than it is to m1, and movies m4 and m1 aren’t similar at all. This system would recommend m2 to someone who likes m3 and wouldn’t recommend m1 to someone who likes m4.
NetworkX makes it easy… for smaller graphs
Not surprisingly, NetworkX supports the type of analysis described earlier, and it’s quite easy to start seeing results in just a few lines of Python. But as you’ll see, performance becomes a limitation for larger-sized graphs—such as those needed for your movie recommendation system—when using NetworkX without the GPU-accelerated cuGraph backend.
I discuss the key pieces of the recommendation system later in this post, but the full source code is available in the
/rapidsai/nx-cugraph
GitHub repo.
Because the Jaccard Similarity algorithm you’re using doesn’t take edge weights into account, it considers all reviews equal. You don’t want movies with low reviews to be recommended, so filter out all reviews under a certain threshold, which has the side effect of making the graph smaller too.
# Create a separate DataFrame containing only "good" reviews (rating >= 3).
good_ratings_df = ratings_df[ratings_df["rating"] >= 3]
good_user_ids = good_ratings_df["userId"].unique()
good_movie_ids = good_ratings_df["movieId"].unique()
If you print the sizes of the data you’re working with, you see that your graph of good reviews is approximately 330K nodes and 28M edges, with an average degree (number of neighbors per node) of 84:
total number of users: 330975
total number of reviews: 33832162
average number of total reviews/user: 102.22
total number of users with good ratings: 329127
total number of good reviews: 27782577
average number of good reviews/user: 84.41
As mentioned earlier, graphs of this size often present a challenge to NetworkX, but GPU acceleration using the cuGraph backend removes the performance limitations often associated with this much data. However, I’ll continue with a CPU-only environment to demonstrate the default performance.
All the following examples were run on a workstation using NetworkX 3.4.2 and a Intel Xeon Platinum 8480CL at 2.0 GHz with 2 TB RAM.
Using a NetworkX graph created from users and good movie reviews, pick a user, find one of their highest rated movies, and use Jaccard Similarity to find other movies like it:
# Pick a user and one of their highly-rated movies
user = good_user_ids[321]
user_reviews = good_user_movie_G[user]
highest_rated_movie = max(
user_reviews,
key=lambda n: user_reviews[n].get("rating", 0)
)
When you look up the node ID in the movie name map, you see that one of this user’s highest rated movies is the animated film,
Mulan
:
highest rated movie for user=289308 is Mulan (1998), id: 1907, rated: {'rating': 5.0}
You can now use Jaccard Similarity to recommend a movie based on the user’s preferences and viewing history:
%%time
# Run Jaccard Similarity
jacc_coeffs = list(nx.jaccard_coefficient(good_user_movie_G, ebunch))
CPU times: user 2min 5s, sys: 15.4 ms, total: 2min 5s
Wall time: 2min 14s
The Jaccard Similarity computation using the default NetworkX implementation ran for over two minutes. Using these results, you can now provide a recommendation.
# Sort by coefficient value, which is the 3rd item in the tuples
jacc_coeffs.sort(key=lambda t: t[2], reverse=True)
# Create a list of recommendations ordered by "best" to "worst" based on the
# Jaccard Similarity coefficients and the movies already seen
movies_seen = list(good_user_movie_G.neighbors(user))
recommendations = [mid for (_, mid, _) in jacc_coeffs
if mid not in movies_seen]
Now you can print the first movie in the sorted list of recommendations:
User ID 289308 might like Tarzan (1999) (movie ID: 2687)
The code is easy and the results look good, but performance holds us back
As you can see, the recommendation seems reasonable; someone who likes
Mulan
seems likely to also enjoy the 1999 Disney animated film
Tarzan
.
However, if the goal was to provide a service or to analyze hundreds or thousands of movies, the two-minute runtime would have you start looking for an alternative to NetworkX. You can see that finding similarities between other movies using this system isn’t any faster:
%%time
# 1196: "Star Wars: Episode V - The Empire Strikes Back (1980)"
print_similar_movies(1196)
movies similar to Star Wars: Episode V - The Empire Strikes Back (1980):
movieId=260, Star Wars: Episode IV - A New Hope (1977)
movieId=1210, Star Wars: Episode VI - Return of the Jedi (1983)
movieId=1198, Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
CPU times: user 13min 47s, sys: 71.8 ms, total: 13min 47s
Wall time: 11min 30s
Here’s another example:
%%time
# 318: "Shawshank Redemption, The (1994)"
print_similar_movies(318)
movies similar to "Shawshank Redemption, The (1994)":
movieId=296, Pulp Fiction (1994)
movieId=593, "Silence of the Lambs, The (1991)"
movieId=356, Forrest Gump (1994)
CPU times: user 28min 28s, sys: 172 ms, total: 28min 28s
Wall time: 16min 49s
The quality of the recommendations returned is impressive given that this system is composed of only a few lines of code. However, the runtime performance makes it virtually unusable. As described earlier, finding recommendations based on
Shawshank Redemption, The (1994)
takes nearly 17 minutes.
NVIDIA cuGraph makes it transformatively faster
The graph algorithm in this workflow is prohibitively expensive, but by using the NVIDIA cuGraph backend and a compatible GPU, you can dramatically improve performance without changing the code.
Jaccard Similarity is supported in nx-cugraph version 25.02 or later. Version 25.02 is available from nightly builds and will be part of future stable releases later this month. Instructions on installing nx-cugraph, as well as other RAPIDS packages, from both nightly and stable channels using conda or pip, are available in the
RAPIDS Installation Guide
.
After being installed, enable nx-cugraph by setting an environment variable:
NX_CUGRAPH_AUTOCONFIG=True
cuGraph uses the GPU to dramatically accelerate the neighbor lookups and set comparisons needed for the Jaccard Similarity computation. As the graph scales and the number of movies and reviews per movie increases, performance remains almost constant.
The best part of the system—the simplicity of the code—does not change, and the results are identical, but performance increases by over 250x for the run that previously took nearly 17 minutes, reducing it to under 4 seconds.
Figure 3. Speedup of cuGraph over NetworkX for Jaccard Similarity computation for various movies
Software: NetworkX 3.4.2, cuGraph/nx-cugraph 25.02
;
CPU: Intel(R) Xeon(R) Platinum 8480CL @ 2.0GHz 2TB RAM
;
GPU: NVIDIA Quadro RTX 8000 48GB RAM
Conclusion
This post covered a simple and effective recommendation system that’s easy to write in Python using NetworkX. Although there are many other approaches you could take—as covered in
What Is a Recommendation System?
—few would match the low effort required to start exploring data that graph analysis with NetworkX offers.
However, productive and meaningful data exploration requires quick turnaround, and NetworkX has traditionally struggled to scale to larger, real-world problem sizes.
The NVIDIA cuGraph backend for NetworkX accelerates the familiar and flexible NetworkX API to also make it performant at scale, generating results in seconds instead of tens of minutes, keeping you focused and productive. You can now continue using NetworkX, the most popular graph analytics library, without concern for scaling issues simply by adding a GPU and the cuGraph backend to your environment.
For more information about accelerated graph analysis using NetworkX and NVIDIA cuGraph, see
RAPIDS: GPU-Accelerated NetworkX Backend
. | https://developer.nvidia.com/zh-cn/blog/using-networkx-jaccard-similarity-and-cugraph-to-predict-your-next-favorite-movie/ | 使用 NetworkX、Jaccard Similarity 和 cuGraph 预测您下一部最喜欢的电影 | 随着全球每个人可用的数据量不断增加,消费者做出明智决策的能力也越来越难。幸运的是,对于推荐系统而言,大型数据集是一个非常有用的组件,有时这会让某些情况下的决策变得更加容易。
对于为推荐系统提供支持的数据中固有的关系建模而言,图形是一个很好的选择,而 NetworkX 是许多数据科学家在 Python 中进行图形分析时非常喜欢的选择。NetworkX 易于学习和使用,拥有各种图形算法,并由庞大而友好的社区提供支持,并且在 Notebook、文档、Stack Overflow 和您喜欢的 LLM 中提供了大量示例。然而,令无数开发人员失望的是,他们使用 NetworkX 或甚至因为 NetworkX 而涉足图形分析,但众所周知,它在典型推荐系统使用的规模上的性能表现不佳。
这就引出了一个问题:能否用 Python 的几行简单代码编写有效的基于图形的推荐系统?更一般地说,开发者和数据科学家能否同时进行易于使用的高性能图形分析?
这两个问题的答案都是“Yes”
请继续阅读,了解如何使用 NetworkX、Jaccard Similarity 算法和 NVIDIA cuGraph 后端 (可将现代大规模图形数据所需的速度提高 250 倍以上),在 Python 中创建简单有效的推荐系统,使用 3300 万条电影评论的数据集。
MovieLens 数据集
我们先从系统中最重要的部分开始:数据。MovieLens 数据集
1
可供公开
下载
,
README 文件
中有更详细的说明。该系列包括大约 331k 匿名用户,他们在观看 87k 部电影,获得了 34M 的评分。
图 1、MovieLens 数据可以表示为一个图形,其中各个评分可轻松映射到用户和电影节点之间的边缘。
从数据中提取建议:二分图和 Jaccard Similarity
我们根据 MovieLens 数据创建的图形类型是二部图,因为只有两种类型的节点:电影节点和用户节点,并且评论(边缘)只能在用户和电影之间发生。这使得应用
Jaccard Similarity
算法来查找电影之间的相似性变得特别容易。Jaccard Similarity 比较节点对,并使用它们在图形中的关系计算相似性系数。在这种情况下,电影根据用户选择的观看和审查方式相互关联。
图 3、Jaccard Similarity 使用所比较的两个节点的近邻集的大小来计算相似性系数。根据用户的观看偏好,我们可以看到 m3 与 m2 更相似,电影 m4 和 m1 完全不相似。此系统会向喜欢 m3 的用户推荐 m2,而不会向喜欢 m4 的用户推荐 m1。
NetworkX 可以轻松处理较小的图形
不足为奇的是,NetworkX 支持我们上述的分析类型,而且只需使用几行 Python 代码即可轻松查看结果。但正如我们将看到的,当使用无 GPU 加速的 cuGraph 后端的 NetworkX 时,大型图形 (例如我们的电影推荐系统所需的图形) 的性能会受到限制。
我们将在下方查看推荐系统的关键部分,但可在
此处
获取完整源代码。
由于我们使用的 Jaccard Similarity 算法未考虑边缘权重,因此会将所有评论视为相同。我们不希望推荐好评度低的电影,因此我们会过滤掉某个值下的所有好评度,这会导致图形也变小。
# Create a separate DataFrame containing only "good" reviews (rating >= 3).
good_ratings_df = ratings_df[ratings_df["rating"] >= 3]
good_user_ids = good_ratings_df["userId"].unique()
good_movie_ids = good_ratings_df["movieId"].unique()
如果我们打印正在处理的数据的大小,我们会看到好评图大约有 330k 个节点,28M 个边缘,平均度数 (每个节点的近邻数) 为 84:
total number of users: 330975
total number of reviews: 33832162
average number of total reviews/user: 102.22
total number of users with good ratings: 329127
total number of good reviews: 27782577
average number of good reviews/user: 84.41
如上所述,这种规模的图形通常会给 NetworkX 带来挑战,但使用 cuGraph 后端的 GPU 加速消除了通常与如此多的数据相关的性能限制。不过,我们将继续使用 CPU 环境来演示默认性能。
注意
以下所有示例均在使用 NetworkX 3.4.2 和 Intel(R) Xeon(R) Platinum [email protected](2TB RAM)的工作站上运行
使用由用户创建的 NetworkX 图形和优秀的电影评论,我们来选择一个用户,找到他们评分最高的电影之一,并使用 Jaccard Similarity 找到类似的其他电影。
# Pick a user and one of their highly-rated movies
user = good_user_ids[321]
user_reviews = good_user_movie_G[user]
highest_rated_movie = max(
user_reviews,
key=lambda n: user_reviews[n].get("rating", 0)
)
当我们在电影名称贴图中查找节点 ID 时,我们会发现该用户评分最高的电影之一是动画电影“Mulan”:
highest rated movie for user=289308 is Mulan (1998), id: 1907, rated: {'rating': 5.0}
我们现在可以使用 Jaccard Similarity 根据用户的偏好和观看历史记录来推荐电影:
%%time
# Run Jaccard Similarity
jacc_coeffs = list(nx.jaccard_coefficient(good_user_movie_G, ebunch))
CPU times: user 2min 5s, sys: 15.4 ms, total: 2min 5s
Wall time: 2min 14s
使用默认 NetworkX 实现的 Jaccard 相似性计算运行了两分钟以上。根据这些结果,我们现在可以提供推荐。
# Sort by coefficient value, which is the 3rd item in the tuples
jacc_coeffs.sort(key=lambda t: t[2], reverse=True)
# Create a list of recommendations ordered by "best" to "worst" based on the
# Jaccard Similarity coefficients and the movies already seen
movies_seen = list(good_user_movie_G.neighbors(user))
recommendations = [mid for (_, mid, _) in jacc_coeffs
if mid not in movies_seen]
现在,我们只需在已排序的推荐列表中打印出第一部电影:
User ID 289308 might like Tarzan (1999) (movie ID: 2687)
代码很简单,结果看起来不错,但性能却拖累了我们
如我们所见,这个推荐似乎是合理的;喜欢“Mulan”的人似乎也喜欢 1999 年的迪士尼动画电影“Tarzan”。
但是,如果我们的目标是提供服务,或分析数百乃至数千部电影,那么两分钟的运行时间就能让我们开始寻找 NetworkX 的替代方案。我们可以看到,使用此系统查找其他电影相似点的速度并没有加快:
%%time
# 1196: "Star Wars: Episode V - The Empire Strikes Back (1980)"
print_similar_movies(1196)
movies similar to Star Wars: Episode V - The Empire Strikes Back (1980):
movieId=260, Star Wars: Episode IV - A New Hope (1977)
movieId=1210, Star Wars: Episode VI - Return of the Jedi (1983)
movieId=1198, Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
CPU times: user 13min 47s, sys: 71.8 ms, total: 13min 47s
Wall time: 11min 30s
%%time
# 318: "Shawshank Redemption, The (1994)"
print_similar_movies(318)
movies similar to "Shawshank Redemption, The (1994)":
movieId=296, Pulp Fiction (1994)
movieId=593, "Silence of the Lambs, The (1991)"
movieId=356, Forrest Gump (1994)
CPU times: user 28min 28s, sys: 172 ms, total: 28min 28s
Wall time: 16min 49s
鉴于此系统仅由几行代码组成,因此所返回的推荐内容的质量令人印象深刻。但是,运行时性能使其几乎无法使用。如上所述,根据“Shawshank Redemption, The (1994)”查找推荐内容大约需要 17 分钟。
NVIDIA cuGraph 使其变革性地加快
上述工作流程中的图形算法成本高昂,但通过使用 NVIDIA cuGraph 后端和兼容的 GPU,我们可以在不更改代码的情况下显著提高性能。
nx-cugraph 版本 25.02 或更高版本支持 Jaccard Similarity。版本 25.02 可在 nightly builds 中使用,并将于本月晚些时候纳入未来的稳定版本中。有关如何使用 conda 或 pip 从 nightly 和 stable 通道安装 nx-cugraph 以及其他 RAPIDS 包的说明,请参阅
RAPIDS Installation Guide
。
安装后,只需设置环境变量即可启用 nx-cugraph:
NX_CUGRAPH_AUTOCONFIG=True
cuGraph 利用 GPU 显著加速近邻查找,并设置 Jaccard 相似性计算所需的比较结果。此外,随着图形规模以及每部电影的电影和评论数量的增加,性能几乎保持不变。
该系统最优秀的部分,即代码的简单性,并没有改变,结果也是一样的,但在过去近 17 分钟的运行中,性能提高了 250 倍以上,缩短到 4 秒以内。
图 4、图表显示了各种电影的 cuGraph over NetworkX for Jaccard Similarity 计算速度
。
软件:NetworkX 3.4.2,cuGraph/nx-cugraph 25.02
CPU:Intel(R) Xeon(R) Platinum [email protected] 2TB RAM
GPU:NVIDIA Quadro RTX 8000 48GB RAM
结束语
这篇博文介绍了一个简单而有效的推荐系统,它可以使用 NetworkX 轻松地用 Python 编写。虽然我们可以采用许多其他方法(如此处所述),但很少有方法能够做到与开始探索 NetworkX 图形分析提供的数据所需的工作量不相上下。然而,高效和有意义的数据探索需要快速的周转,而 NetworkX 传统上一直难以扩展到更大的实际问题规模。
适用于 NetworkX 的 NVIDIA cuGraph 后端可对熟悉且灵活的 NetworkX API 进行加速,还可大规模提升其性能,在几秒钟 (而非数十分钟) 内生成结果,从而让您专注工作并高效工作。现在,用户只需向环境中添加 GPU 和 cuGraph 后端,即可继续使用热门的图形分析库 NetworkX,而无需担心扩展问题。
如需了解有关使用 NetworkX 和 NVIDIA cuGraph 进行加速图形分析的更多信息,请访问
https://rapids.ai/nx-cugraph
。 |
https://developer.nvidia.com/blog/nvidia-open-gpu-datacenter-drivers-for-rhel9-signed-by-red-hat/ | NVIDIA Open GPU Datacenter Drivers for RHEL9 Signed by Red Hat | NVIDIA and Red Hat have partnered to bring continued improvements to the precompiled
NVIDIA Driver introduced in 2020
. Last month, NVIDIA
announced
that the open GPU driver modules will become the default recommended way to enable NVIDIA graphics hardware.
Today, NVIDIA announced that Red Hat is now compiling and signing the
NVIDIA open GPU kernel modules
to further streamline the usage for the Red Hat Enterprise Linux 9 OS and to provide a way forward for supporting
Confidential Computing
with NVIDIA hardware in the datacenter.
The signed open GPU drivers can now be installed and used without any extra key enrollment configuration. As with the current legacy precompiled packages, only the latest RHEL 9 kernel is supported at the time of the driver update.
Starting with RHEL 9.5, NVIDIA now offers a tech preview repository available for trial until April 30, 2025. This repository enables the usage of the open GPU driver and legacy driver (with caveats) using
modularity streams
.
After the tech preview ends, the signed kernel modules will be available in the NVIDIA Compute package repositories with the rest of the CUDA platform: CUDA-X acceleration libraries, CUDA-optimized applications, and frameworks.
For more information about modularity streams and how they work with the precompiled graphics driver, see
Streamlining NVIDIA Driver Deployment on RHEL 8 with Modularity Streams
.
Activating the tech preview
To use the signed NVIDIA open GPU driver modules, make sure that the Red Hat repositories are enabled, including RHEL9 AppStream, RHEL9 BaseOS, and RHEL9 CRB:
$ subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
$ subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
$ subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms
Add the CUDA network repository:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
When the tech preview repository is activated, view the supported modularity streams:
$ dnf module list nvidia-driver
Figure 1. Example output list from the tech preview tool of the NVIDIA modularity streams installed
The tech preview provides the following new modularity streams:
latest-open-gpu
<driver_version>-open-gpu
Install the latest stream to opt into the signed precompiled packages:
$ sudo dnf module install nvidia-driver:latest-open-gpu
Reboot and enjoy!
Figure 2. Example
nvidia-smi
command output showing the NVIDIA CUDA driver version running and system parameters
Important caveat with the tech preview repository
The legacy precompiled driver is also listed in the modularity streams:
nvidia-driver:latest
nvidia-driver:latest-<major_version>
nvidia-driver:<major_version>
These drivers have been signed by NVIDIA and require signing key enrollment. They install just fine without any warnings but upon reboot the NVIDIA graphics driver will not be available:
$ nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Upon further inspection, the problem becomes clear:
# insmod /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko
insmod: ERROR: could not insert module /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko: Key was rejected by service
This happens because secure boot is enabled by default:
$ sudo mokutil --sb-state
SecureBoot enabled
This module has been signed by NVIDIA using a key that is not in the chain of trust:
$ modinfo /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko | grep signer
signer: NVIDIA
For more information about enrolling the NVIDIA signing key with mokutil, see
UEFI Secure Boot
on GitHub.
Installation
Install the tech preview repository on a host with secure boot enabled:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
$ sudo dnf module install nvidia-driver:latest-open-gpu
$ sudo reboot
If you discover any issues with packaging, please report those problems to
github.com/NVIDIA/yum-packaging-precompiled-kmod/issues
.
This repository will be active until March 31, 2025.
Summary
Now with signed packages from NVIDIA and Red Hat, secure boot with NVIDIA on RHEL 9 is even better. You can look forward to more developments in this space.
The driver source code is available in the
/NVIDIA/open-gpu-kernel-modules
GitHub repo.
Packaging templates and instructions are provided on GitHub to enable you to maintain your own precompiled kernel module packages for custom kernels and derivative Linux distros:
/NVIDIA/yum-packaging-precompiled-kmod
/NVIDIA/yum-packaging-nvidia-plugin
To give feedback,
send comments
or
report driver bugs
. If you are not already a member,
join the NVIDIA Developer Program
. | https://developer.nvidia.com/zh-cn/blog/nvidia-open-gpu-datacenter-drivers-for-rhel9-signed-by-red-hat/ | 适用于 RHEL9 的 NVIDIA 开放 GPU 数据中心驱动,由 Red Hat 签署 | NVIDIA 和 Red Hat 合作,对
2020 年推出的预编译 NVIDIA Driver
进行持续改进。上个月,NVIDIA
宣布
,开放的 GPU 驱动模组将成为启用 NVIDIA 图形硬件的默认推荐方式。
今天,NVIDIA 宣布 Red Hat 正在编译并签署
NVIDIA 开放 GPU 内核模块
,以进一步简化 Red Hat Enterprise Linux 9 操作系统的使用,并为在数据中心使用 NVIDIA 硬件支持
机密计算
提供前进方向。
现在无需任何额外的密钥注册配置即可安装和使用经过签名的开放 GPU 驱动。与当前的旧版预编译软件包一样,在驱动更新时仅支持最新的 RHEL 9 内核。
从 RHEL 9.5 开始,NVIDIA 现在提供技术预览库,可在 2025 年 4 月 30 日之前试用。此库允许使用
模块化流使
用开放式 GPU 驱动程序和旧版驱动程序(附带说明)。
技术预览结束后,已签名的内核模块将在 NVIDIA Compute 软件包存储库中与 CUDA 平台的其他部分一起提供:CUDA-X 加速库、CUDA 优化的应用和框架。
有关 模块化流及其如何与预编译图形驱动程序配合使用的更多信息 ,请参阅
Streamlining NVIDIA Driver Deployment on RHEL 8 with Modularity Streams
。
激活技术预览
要使用经过签名的 NVIDIA 开源 GPU 驱动模块,请确保已启用 Red Hat 资源库,包括 RHEL9 AppStream、RHEL9 BaseOS 和 RHEL9 CRB:
$ subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
$ subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
$ subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms
添加 CUDA 网络存储库:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
激活技术预览库后,请查看支持的模块化流:
$ dnf module list nvidia-driver
图 1、已安装的 NVIDIA 模块化流的技术预览工具中的输出列表示例
技术预览提供了以下新的模块化流:
latest-open-gpu
<driver_version>-open-gpu
安装最新的流,选择使用经过签名的预编译软件包:
$ sudo dnf module install nvidia-driver:latest-open-gpu
重启并尽情享受吧!
图 2、显示
正在运行的 NVIDIA CUDA 驱动程序版本和系统参数的 nvidia-smi 命令输出示例
技术预览库的重要注意事项
旧版预编译驱动也列在模块化流中:
nvidia-driver:latest
nvidia-driver:latest-<major_version>
nvidia-driver:<major_version>
这些驱动已由 NVIDIA 签名,需要签名密钥注册。它们可以正常安装,没有任何警告,但在重启后,NVIDIA 图形驱动程序将不可用:
$ nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
经过进一步检查,问题变得明显:
# insmod /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko
insmod: ERROR: could not insert module /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko: Key was rejected by service
发生这种情况的原因是默认情况下启用了 secure boot:
$ sudo mokutil --sb-state
SecureBoot enabled
本模块由 NVIDIA 使用不在信任链中的密钥进行签名:
$ modinfo /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko | grep signer
signer: NVIDIA
有关使用 mokutil 注册 NVIDIA 签名密钥的更多信息,请参阅 GitHub 上的 UEFI Secure Boot。
安装
在已启用安全启动的主机上安装技术预览库:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
$ sudo dnf module install nvidia-driver:latest-open-gpu
$ sudo reboot
如果您发现任何打包问题,请向
github.com/NVIDIA/yum-packaging-precompiled-kmod/issues
报告这些问题。
此库的有效期至 2025 年 3 月 31 日。
总结
现在,借助 NVIDIA 和 Red Hat 的签名软件包,在 RHEL 9 上使用 NVIDIA 进行安全启动会更好。您可以期待此领域的更多开发。
您可在
/NVIDIA/open-gpu-kernel-modules
GitHub 仓库中获取驱动源代码。
GitHub 上提供了打包模板和说明,以便您为自定义内核和衍生 Linux 发行版维护自己的预编译内核模块包:
/NVIDIA/yum-packaging-precompiled-kmod
/NVIDIA/yum-packaging-nvidia-plugin
如要提供反馈,请
发送评论
或
报告驱动错误
。如果您还不是会员,请
加入 NVIDIA Developer Program
。 |
https://developer.nvidia.com/blog/get-started-with-gpu-acceleration-for-data-science/ | Get Started with GPU Acceleration for Data Science | In data science, operational efficiency is key to handling increasingly complex and large datasets. GPU acceleration has become essential for modern workflows, offering significant performance improvements.
RAPIDS
is a suite of open-source libraries and frameworks developed by NVIDIA, designed to accelerate data science pipelines using GPUs with minimal code changes. Providing tools like
cuDF
for data manipulation,
cuML
for machine learning, and
cuGraph
for graph analytics, RAPIDS enables seamless integration with existing Python libraries, making it easier for data scientists to achieve faster and more efficient processing.
This post shares tips for transitioning from CPU data science libraries to GPU-accelerated workflows, especially for experienced data scientists.
Setting up RAPIDS on desktop or cloud infrastructure
Getting started with RAPIDS is straightforward, but it does have several dependencies. The recommended approach is to follow the official
RAPIDS Installation Guide
, which provides detailed instructions for local installations. You have multiple paths to install the framework: through pip install, Docker image, or through an environment such as Conda. To set up RAPIDS in a cloud environment, see the
RAPIDS Cloud Deployment Guide
. Before installing, ensure compatibility by checking your CUDA version and the supported RAPIDS version on the installation page.
cuDF and GPU acceleration for pandas
An advantage of RAPIDS lies in its modular architecture, which empowers users to adopt specific libraries designed for GPU-accelerated workflows. Among these, cuDF stands out as a powerful tool for seamlessly transitioning from traditional
pandas
-based workflows to GPU-optimized data processing, and requires zero code changes.
To get started, make sure to enable the cuDF extension before importing pandas for execution of data import and remainder of the operation on GPU. By loading the RAPIDS extension with
%load_ext cudf.pandas
, you can effortlessly integrate cuDF DataFrame within existing workflows, preserving the familiar syntax and structure of pandas.
Similar to pandas,
cuDF pandas
supports different file formats such as .csv, .json, .pickle, .paraquet, and hence enables GPU-accelerated data manipulation.
The following code is an example of how to enable the
cudf.pandas
extension and concatenate two .csv files:
%load_ext cudf.pandas
import pandas as pd
import cupy as cp
train = pd.read_csv('./Titanic/train.csv')
test = pd.read_csv('./Titanic/test.csv')
concat = pd.concat([train, test], axis = 0)
Loading the
cudf.pandas
extension enables the execution of familiar pandas operations—such as filtering, grouping, and merging—on GPUs without requiring a code change or rewrites. The cuDF accelerator is compatible with the pandas API to ensure a smooth transition from CPU to GPU while delivering substantial computational speedups.
target_rows = 1_000_000
repeats = -(-target_rows // len(train)) # Ceiling division
train_df = pd.concat([train] * repeats, ignore_index=True).head(target_rows)
print(train_df.shape) # (1000000, 2)
repeats = -(-target_rows // len(test)) # Ceiling division
test_df = pd.concat([test] * repeats, ignore_index=True).head(target_rows)
print(test_df.shape) # (1000000, 2)
combine = [train_df, test_df]
(1000000, 12)
(1000000, 11)
filtered_df = train_df[(train_df['Age'] > 30) & (train_df['Fare'] > 50)]
grouped_df = train_df.groupby('Embarked')[['Fare', 'Age']].mean()
additional_info = pd.DataFrame({
'PassengerId': [1, 2, 3],
'VIP_Status': ['No', 'Yes', 'No']
})
merged_df = train_df.merge(additional_info, on='PassengerId',
how='left')
Decoding performance: CPU and GPU runtime metrics in action
In data science, performance optimization is not just about speed, but also understanding how computational resources are utilized. It involves analyzing how operations leverage CPU and GPU architectures, identifying inefficiencies, and implementing strategies to enhance workflow efficiency.
Performance profiling tools like
%cudf.pandas.profile
play a key role by offering a detailed examination of code execution. The following execution result breaks down each function, and distinguishes between tasks processed on the CPU from those accelerated on the GPU:
%%cudf.pandas.profile
train_df[['Pclass', 'Survived']].groupby(['Pclass'],
as_index=False).mean().sort_values(by='Survived', ascending=False)
Pclass Survived
0 1 0.629592
1 2 0.472810
2 3 0.242378
Total time elapsed: 5.131 seconds
5 GPU function calls in 5.020 seconds
0 CPU function calls in 0.000 seconds
Stats
+------------------------+------------+-------------+------------+------------+-------------+------------+
| Function | GPU ncalls | GPU cumtime | GPU percall | CPU ncalls | CPU cumtime | CPU percall |
+------------------------+------------+-------------+------------+------------+-------------+------------+
| DataFrame.__getitem__ | 1 | 5.000 | 5.000 | 0 | 0.000 | 0.000 |
| DataFrame.groupby | 1 | 0.000 | 0.000 | 0 | 0.000 | 0.000 |
| GroupBy.mean | 1 | 0.007 | 0.007 | 0 | 0.000 | 0.000 |
| DataFrame.sort_values | 1 | 0.002 | 0.002 | 0 | 0.000 | 0.000 |
| DataFrame.__repr__ | 1 | 0.011 | 0.011 | 0 | 0.000 | 0.000 |
+------------------------+------------+-------------+------------+------------+-------------+------------+
This granularity helps pinpoint operations that inadvertently revert to CPU execution, a common occurrence due to unsupported cuDF functions, incompatible data types, or suboptimal memory handling. It is crucial to identify these issues because such fallbacks can significantly impact overall performance. To learn more about this loader, see
Mastering the cudf.pandas Profiler for GPU Acceleration
.
Additionally, you can use Python magic commands like
%%time
and
%%timeit
to enable benchmarks of specific code blocks that facilitate direct comparisons of runtime between pandas (CPU) and the cuDF accelerator for pandas (GPU). These tools provide insights into the efficiency gains achieved through GPU acceleration. Benchmarking with
%%time
provides a clear comparison of execution times between CPU and GPU environments, highlighting the efficiency gains achievable through parallel processing.
%%time
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
CPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 56.6 ms, sys: 8.08 ms, total: 64.7 ms
Wall time: 63.3 ms
GPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 6.65 ms, sys: 0 ns, total: 6.65 ms
Wall time: 5.46 ms
The
%%time
example delivers a 10x speedup in execution time, reducing wall time from 63.3 milliseconds (ms) on the CPU to 5.46 ms on the GPU. This highlights the efficiency of GPU acceleration with cuDF pandas for large-scale data operations. Further insights are gained using
%%timeit
, which performs repeated executions to measure consistency and reliability in performance metrics.
%%timeit
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
CPU output:
1.11 s ± 7.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
GPU output:
89.6 ms ± 959 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The
%%timeit
example gives us a 10x performance improvement with GPU acceleration, reducing the runtime from 1.11 seconds per loop on the CPU to 89.6 ms per loop on the GPU. This highlights the efficiency of cuDF pandas for intensive data operations.
Verifying GPU utilization
When working with different data types, it is important to verify whether your system is utilizing the GPU effectively. You can check whether arrays are being processed on the CPU or GPU by using the familiar
type
command to differentiate between NumPy and CuPy arrays.
type(guess_ages)
cupy.ndarray
If the output is
np.array
, the data is being processed on the CPU. If the output is
cupy.ndarray
, the data is being processed on the GPU. This quick check ensures that your workflows are leveraging GPU resources where intended.
Secondly, by simply using the
print
command, you can confirm whether the GPU is being utilized and ensure that a cuDF DataFrame is being processed. The output specifies whether the
fast
path (cuDF) or
slow
path (pandas) is in use. This straightforward check provides an easy way to validate that the GPU is active for accelerating data operations.
print(pd)
<module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))>
Lastly, commands such as
df.info
can be used to inspect the structure of cuDF DataFrame and confirm that computations are GPU-accelerated. This helps verify whether operations are running on the GPU or falling back to the CPU.
train_df.info()
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 1000000 non-null int64
1 Pclass 1000000 non-null int64
2 Sex 1000000 non-null int64
3 Age 1000000 non-null float64
4 SibSp 1000000 non-null int64
5 Parch 1000000 non-null int64
6 Fare 1000000 non-null float64
7 Embarked 997755 non-null object
8 Title 1000000 non-null int64
dtypes: float64(2), int64(6), object(1)
memory usage: 65.9+ MB
Conclusion
RAPIDS, through tools like cuDF pandas, provides a seamless transition from traditional CPU-based data workflows to GPU-accelerated processing, offering significant performance improvements. By leveraging features such as
%%time
,
%%timeit
, and profiling tools like
%%cudf.pandas.profile
, you can measure and optimize runtime efficiency. The ability to inspect GPU utilization through simple commands like
type
,
print(pd)
, and
df.info
ensures that workflows are leveraging GPU resources effectively.
To try the data operations detailed in this post, check out the
accompanying Jupyter Notebook
.
To learn more about GPU-accelerated data science, see
10 Minutes to Data Science: Transitioning Between RAPIDS cuDF and CuPy Libraries
and
RAPIDS cuDF Instantly Accelerates pandas Up to 50x on Google Colab
.
Join us for
GTC 2025
and register for the
Data Science Track
to gain deeper insights. Recommended sessions include:
Accelerating Data Science with RAPIDS and NVIDIA GPUs
Scaling Machine Learning Workflows with RAPIDS
To build expertise with RAPIDS, check out the following hands-on workshops at GTC:
Accelerate Clustering Algorithms to Achieve the Highest Performance
Analyzing and Visualizing Large Data Interactively Using Accelerated Computing | https://developer.nvidia.com/zh-cn/blog/get-started-with-gpu-acceleration-for-data-science/ | 适用于数据科学的 GPU 加速入门 | 在数据科学领域,运营效率是处理日益复杂和大型数据集的关键。GPU 加速已成为现代工作流程的关键,可显著提高性能。
RAPIDS
是由 NVIDIA 开发的一套开源库和框架,旨在使用 GPU 以尽可能减少代码更改来加速数据科学流程。RAPIDS 提供用于数据操作的
cuDF
、用于机器学习的
cuML
和用于图形分析的
cuGraph
等工具,可实现与现有 Python 库的无缝集成,使数据科学家更容易实现更快、更高效的处理。
本文分享了从 CPU 数据科学库过渡到 GPU 加速工作流程的技巧,特别适合经验丰富的数据科学家。
在桌面或云基础架构上设置 RAPIDS
开始使用 RAPIDS 非常简单,但它确实有几个依赖项。推荐的方法是遵循官方的
RAPIDS 安装指南
,该指南提供了详细的本地安装说明。您有多种安装框架的路径:通过 pip install、Docker 镜像,或通过 Conda 等环境。要在云环境中设置 RAPIDS,请参阅
RAPIDS 云部署指南
。安装前,请检查安装页面上的 CUDA 版本和受支持的 RAPIDS 版本,确保兼容性。
适用于 pandas 的 cuDF 和 GPU 加速
RAPIDS 的一个优势在于其模块化架构,使用户能够采用专为 GPU 加速工作流程设计的特定库。其中,cuDF 作为一款功能强大的工具脱颖而出,可从基于
pandas
的传统工作流程无缝过渡到 GPU 优化的数据处理流程,并且无需更改代码。
首先,请确保在导入 pandas 之前启用 cuDF 扩展,以便在 GPU 上执行数据导入和剩余操作。通过使用
%load_ext cudf.pandas
加载 RAPIDS 扩展程序,您可以轻松地将 cuDF DataFrame 集成到现有工作流程中,从而保留熟悉的 pandas 语法和结构。
与 pandas 类似,
cuDF pandas
支持不同的文件格式,例如 .csv、.json、.pickle、.paraquet,因此支持 GPU 加速的数据操作。
以下代码是如何启用
cudf.pandas
扩展名并连接两个 .csv 文件的示例:
%load_ext cudf.pandas
import pandas as pd
import cupy as cp
train = pd.read_csv('./Titanic/train.csv')
test = pd.read_csv('./Titanic/test.csv')
concat = pd.concat([train, test], axis = 0)
通过加载
cudf.pandas
扩展程序,无需更改或重写代码,即可在 GPU 上执行熟悉的 pandas 操作,例如过滤、分组和合并。cuDF 加速器与 pandas API 兼容,可确保从 CPU 到 GPU 的平稳过渡,同时大幅提高计算速度。
target_rows = 1_000_000
repeats = -(-target_rows // len(train)) # Ceiling division
train_df = pd.concat([train] * repeats, ignore_index=True).head(target_rows)
print(train_df.shape) # (1000000, 2)
repeats = -(-target_rows // len(test)) # Ceiling division
test_df = pd.concat([test] * repeats, ignore_index=True).head(target_rows)
print(test_df.shape) # (1000000, 2)
combine = [train_df, test_df]
(1000000, 12)
(1000000, 11)
filtered_df = train_df[(train_df['Age'] > 30) & (train_df['Fare'] > 50)]
grouped_df = train_df.groupby('Embarked')[['Fare', 'Age']].mean()
additional_info = pd.DataFrame({
'PassengerId': [1, 2, 3],
'VIP_Status': ['No', 'Yes', 'No']
})
merged_df = train_df.merge(additional_info, on='PassengerId',
how='left')
解码性能:CPU 和 GPU 运行时指标的实际应用
在数据科学中,性能优化不仅涉及速度,还涉及了解计算资源的利用方式。其中包括分析运营如何利用 CPU 和 GPU 架构、识别效率低下问题,以及实施旨在提高工作流程效率的策略。
%cudf.pandas.profile
等性能分析工具通过详细检查代码执行情况发挥着关键作用。以下执行结果会对每个函数进行分解,并区分在 CPU 上处理的任务与在 GPU 上加速的任务:
%%cudf.pandas.profile
train_df[['Pclass', 'Survived']].groupby(['Pclass'],
as_index=False).mean().sort_values(by='Survived', ascending=False)
Pclass Survived
0 1 0.629592
1 2 0.472810
2 3 0.242378
Total time elapsed: 5.131 seconds
5 GPU function calls in 5.020 seconds
0 CPU function calls in 0.000 seconds
Stats
+------------------------+------------+-------------+------------+------------+-------------+------------+
| Function | GPU ncalls | GPU cumtime | GPU percall | CPU ncalls | CPU cumtime | CPU percall |
+------------------------+------------+-------------+------------+------------+-------------+------------+
| DataFrame.__getitem__ | 1 | 5.000 | 5.000 | 0 | 0.000 | 0.000 |
| DataFrame.groupby | 1 | 0.000 | 0.000 | 0 | 0.000 | 0.000 |
| GroupBy.mean | 1 | 0.007 | 0.007 | 0 | 0.000 | 0.000 |
| DataFrame.sort_values | 1 | 0.002 | 0.002 | 0 | 0.000 | 0.000 |
| DataFrame.__repr__ | 1 | 0.011 | 0.011 | 0 | 0.000 | 0.000 |
+------------------------+------------+-------------+------------+------------+-------------+------------+
这种粒度有助于查明无意中恢复到 CPU 执行的操作,这是由于不受支持的 cuDF 函数、不兼容的数据类型或次优内存处理而常见的情况。识别这些问题至关重要,因为此类回退会严重影响整体性能。如需详细了解此加载程序,请参阅
Mastering cudf.pandas Profiler for GPU Acceleration
。
此外,您可以使用 Python magic 命令,如
%%time
和
%%timeit
,来启用特定代码块的基准测试,以便直接比较 pandas(CPU)和 cuDF 加速器(GPU)之间的运行时。这些工具可让您深入了解通过 GPU 加速实现的效率提升。通过使用
%%time
进行基准测试,可以清楚地比较 CPU 和 GPU 环境之间的执行时间,从而凸显通过并行处理实现的效率提升。
%%time
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
CPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 56.6 ms, sys: 8.08 ms, total: 64.7 ms
Wall time: 63.3 ms
GPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 6.65 ms, sys: 0 ns, total: 6.65 ms
Wall time: 5.46 ms
%%time
示例可将执行时间提高 10 倍,将墙面时间从 CPU 上的 63.3 毫秒 (ms) 缩短到 GPU 上的 5.46 毫秒。这凸显了使用 cuDF pandas 进行 GPU 加速在大规模数据操作中的效率。您可以使用
%%timeit
获得更多见解,它执行重复执行来测量性能指标中的一致性和可靠性。
%%timeit
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
CPU output:
1.11 s ± 7.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
GPU output:
89.6 ms ± 959 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
在 GPU 加速方面,
%%timeit
示例将性能提升了 10 倍,将运行时间从 CPU 上的每循环 1.11 秒缩短到 GPU 上的每循环 89.6 毫秒。这凸显了 cuDF pandas 在密集型数据操作中的效率。
验证 GPU 利用率
在处理不同的数据类型时,请务必验证您的系统是否有效利用了 GPU。您可以使用熟悉的
type
命令来区分 NumPy 和 CuPy 数组,检查数组是在 CPU 还是 GPU 上处理。
type(guess_ages)
cupy.ndarray
如果输出为
np.array
,则数据将在 CPU 上处理。如果输出为
cupy.ndarray
,则数据将在 GPU 上处理。此快速检查可确保您的工作流程按预期利用 GPU 资源。
其次,只需使用
print
命令,即可确认是否正在利用 GPU,并确保正在处理 cuDF DataFrame。输出指定使用的是
fast
路径 (cuDF) 还是
slow
路径 (pandas)。这种简单的检查提供了一种验证 GPU 是否处于活动状态以加速数据操作的简单方法。
print(pd)
<module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))>
最后,可以使用
df.info
等命令检查 cuDF DataFrame 的结构,并确认计算已通过 GPU 加速。这有助于验证操作是在 GPU 上运行,还是回退至 CPU。
train_df.info()
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 1000000 non-null int64
1 Pclass 1000000 non-null int64
2 Sex 1000000 non-null int64
3 Age 1000000 non-null float64
4 SibSp 1000000 non-null int64
5 Parch 1000000 non-null int64
6 Fare 1000000 non-null float64
7 Embarked 997755 non-null object
8 Title 1000000 non-null int64
dtypes: float64(2), int64(6), object(1)
memory usage: 65.9+ MB
结束语
通过 cuDF pandas 等工具,RAPIDS 可实现从基于 CPU 的传统数据工作流到 GPU 加速处理的无缝过渡,从而显著提高性能。通过利用
%%time
、
%%timeit
等功能以及
%%cudf.pandas.profile
等分析工具,您可以测量和优化运行时效率。通过
type
、
print(pd)
和
df.info
等简单命令检查 GPU 利用率,可确保工作流程有效利用 GPU 资源。
要尝试本文中详述的数据操作,请查看
随附的 Jupyter Notebook
。
如需了解有关 GPU 加速的数据科学的更多信息,请参阅“
10 分钟了解数据科学:在 RAPIDS cuDF 和 CuPy 库之间过渡
”以及“
RAPIDS cuDF 即时将 pandas 在 Google Colab 上的运行速度提高 50 倍
”。
加入我们的
GTC 2025
大会,并报名参加
Data Science Track
,获得更深入的见解。推荐的会议包括:
借助 RAPIDS 和 NVIDIA GPUs 加速数据科学
借助 RAPIDS 扩展机器学习工作流程
要积累有关 RAPIDS 的专业知识,请查看 GTC 上的以下实战研讨会:
加速聚类算法以实现超高性能
使用加速计算以交互方式分析和可视化大数据 |
https://developer.nvidia.com/blog/how-to-use-openusd/ | How to Use OpenUSD | Universal Scene Description (OpenUSD)
is an open, extensible framework and ecosystem with APIs for composing, editing, querying, rendering, collaborating, and simulating within 3D virtual worlds.
This post explains how you can start using OpenUSD today with your existing assets and tools and what steps you can take to iteratively up-level your USD workflows.
For an interactive, comprehensive introduction to the essential concepts, vocabulary, and best practices for OpenUSD, explore the
free Learn OpenUSD learning path
from the
NVIDIA Deep Learning Institute
.
Data exchange
OpenUSD does not replace your existing tools or data formats. As the name suggests, Universal Scene Description provides a common way to represent data for many 3D concepts such as meshes and PBR materials. It continues evolving to support additional concepts such as physics and B-reps.
Figure 1. OpenUSD’s growing ecosystem of DCC applications
To begin using OpenUSD with your existing assets, use an exporter or converter to bring your content into USD. This approach ensures swift and effective integration into new USD workflows. By using USD in this way, you can begin to enhance collaboration, performance, and interoperability across different stages of production.
For more information about how to get started, see the
list of applications and tools that support USD interchange
.
As you expand your USD workflows and want to use OpenUSD with proprietary data, you may have a need to develop your own data exchange solutions.
Data exchange pipelines manage the extraction, transformation, and ingestion of content from one data format to another. There are several primary types of data exchange solutions used to integrate existing content into USD workflows:
Importers and exporters
: These are found in digital content creation (DCC) applications and enable you to directly import and export 3D assets to and from USD. Popular software like Rhino and Blender facilitate the importing of 3D assets, while applications like Houdini open 3D assets as native USD files.This makes it straightforward to work with USD within your existing tools.
Converters:
These are standalone programs or scripts designed to convert 3D files from various formats to and from USD. They offer flexibility for batch processing and automation, ensuring that your assets can be easily transformed without the need for manual intervention.
USD file format plug-ins
: This type of USD plug-in enables you to keep your data in its original source format while dynamically reading and translating it to USD on the fly when you open a stage. This means you can work with your assets in their native formats and still benefit from USD’s powerful features, while maintaining data integrity and workflow efficiency.
Conceptual data mapping and extract, transform, load (ETL) are crucial processes for developing data exchange solutions that help ensure the seamless transfer and integration of 3D assets and data.
Conceptual data mapping
involves defining how data from one system or format corresponds to data in another system or format. This is particularly important when working with different DCC applications or file types, as each may have its own unique data structure and representation.
ETL is a more comprehensive process used to move and integrate data from one system to another. It consists of the following main steps:
Extract:
Pull data from the source system or file format. The data could be 3D models, textures, animations, or any other relevant content.
Transform:
Transform extracted data to fit the target runtime requirements. This might include converting units, reformatting data structures, or filtering out irrelevant or private data.
Load:
Load the transformed data into the target runtime. This is sometimes called
ingestion
and ensures that the data is ready for use in USD-based workflows and applications.
Conceptual data mapping and ETL processes are essential for ensuring that content can be effectively and accurately transferred into the USD ecosystem, maintaining its quality and functionality throughout the exchange. The
OpenUSD Exchange SDK
can help accelerate the development of data exchange solutions and help with authoring high-quality OpenUSD data.
After getting your data into OpenUSD, the next step is understanding how to aggregate and structure it for maximum efficiency.
Learn how to
develop data exchange pipelines
with the free Learn OpenUSD self-paced curriculum.
Asset structure principles and content aggregation
Well-structured assets unlock effective content aggregation and asset reusability—this is where OpenUSD truly excels. Proper asset structuring and content aggregation are key methods for efficiently organizing and combining 3D assets and scenes in complex projects.
Developing an asset structure involves breaking down your 3D content into smaller, manageable components that can be easily referenced and reused. This enables you to create a hierarchical and organized structure that simplifies the management and maintenance of your assets. This approach both improves reusability and performance by enabling you to load only the necessary components when needed.
With a well-designed asset structure, you can enhance collaboration as different workstreams can author data independently in separate layers, which can then be composed into a single asset. This non-destructive editing approach ensures that changes made by one team do not overwrite another team’s work.
After you’ve built your smaller components, use content aggregation to combine them into a single cohesive scene. Composition arcs, such as references and payloads, support this process.
Figure 2.
Digital twin of a liquified natural gas carrier
(credit: HD Hyundai and Siemens)
These techniques help make OpenUSD a powerful tool for managing large-scale 3D projects, facilitating efficient collaboration, asset reuse, and pipeline efficiency, making it an indispensable part of any modern 3D production workflow.
Learn
asset structuring principles and content aggregation
with the free Learn OpenUSD self-paced curriculum.
Composition arcs
Composition arcs are the foundational operators of OpenUSD that combine smaller units of scene description into larger units. This enables robust asset structures and content aggregation in OpenUSD workflows.
This feature is one of the superpowers of OpenUSD, setting it apart from other 3D data formats and frameworks.
Figure 3. Composition arcs enable the creation of complex 3D objects such as autonomous mobile robots
Composition arcs are essential tools for non-destructively combining units of USD data. The composition arcs available in OpenUSD include sublayers, references, payloads, variant sets, inherits, specializes, and the most recent addition, relocates.
Start with the following composition arcs to gain an understanding how you can leverage them in your own workflows:
Sublayers:
Enable multiple collaborators or processes to operate on a given USD context. The data authored in a sublayer is added to a USD context in an “include” fashion.
References:
Most suited for content aggregation and can be used to graft one USD context onto the hierarchy of another USD context. By referencing assets, you can maintain a single source of truth for each component, ensuring consistency and reducing redundancy. This is particularly useful in large-scale projects where the same assets are used repeatedly.
Variant sets:
Used to store multiple variations of a component, enhancing adaptability without duplicating data. This feature enables you to create different versions of an asset, such as different materials or configurations, all within a single, efficient structure. Use cases include specifying different textures and materials or configurations of a model, such as describing a car featuring different paint colors or with different rim styles.
Figure 4. Composition of an OpenUSD scene
Learning and implementing composition arcs will provide immediate value to your USD workflows. However, achieving mastery can take years due to their extensive flexibility and diverse applications. To use OpenUSD effectively, we highly recommend investing significant time in experimentation and refining your understanding of composition arc types.
Learn to
create composition arcs
with the free Learn OpenUSD self-paced training.
Get started
OpenUSD offers a comprehensive set of features for efficient 3D asset management and content aggregation, making it an indispensable framework for large-scale projects. By mastering data exchange, asset structuring, and composition arcs, you can significantly improve collaboration, performance, and reusability in your 3D workflows.
Build on your foundational knowledge with courses on the
Learn OpenUSD learning path
.
Start developing with OpenUSD on the
Omniverse Kit SDK
and generative AI models for OpenUSD development through
USD NIM microservices
.
For more resources on OpenUSD, explore the
Alliance for OpenUSD (AOUSD) forum
and visit the
AOUSD
website. | https://developer.nvidia.com/zh-cn/blog/how-to-use-openusd/ | 如何使用 OpenUSD | 通用场景描述(Universal Scene Description)
是开放、可扩展的框架和生态系统,具有用于在 3D 虚拟世界中合成、编辑、查询、渲染、协作和仿真的 API。
本文介绍了如何使用现有资产和工具立即开始使用 OpenUSD,以及可以采取哪些步骤以迭代方式提升 USD 工作流程的水平。
要以交互方式全面介绍 OpenUSD 的基本概念、词汇和最佳实践,请探索
NVIDIA Deep Learning Institute
提供的
免费 Learn OpenUSD 学习路径
。
数据交换
OpenUSD 不会取代您现有的工具或数据格式。顾名思义,Universal Scene Description 提供了一种常用方法来表示网格和 PBR 材质等许多 3D 概念的数据。它不断发展,以支持其他概念,例如 physics 和 B-reps。
图 1、OpenUSD 日益壮大的 DCC 应用生态系统
要开始将 OpenUSD 与您的现有素材结合使用,请使用导出工具或转换器将您的内容导入 USD。这一方法可确保快速有效地集成到新的 USD 工作流程中。通过这种方式使用 USD,您可以开始增强不同生产阶段的协作、性能和互操作性。
有关入门指南的更多信息,请参阅
支持 USD 交换的应用程序和工具列表
。
当您扩展 USD 工作流并希望将 OpenUSD 与专有数据结合使用时,您可能需要开发自己的数据交换解决方案。
数据交换流程管理从一种数据格式到另一种数据格式的内容提取、转换和提取。有几种主要类型的数据交换解决方案用于将现有内容集成到 USD 工作流中:
导入和导出工具:这些工具可在数字内容创建 (DCC) 应用中找到,使您能够直接向 USD 导入和导出 3D 素材。Rhino 和 Blender 等热门软件有助于导入 3D 素材,而 Houdini 等应用则将 3D 素材作为原生 USD 文件打开。这使得您可以在现有工具中轻松使用 USD。
转换器:
这些是独立的程序或脚本,旨在将不同格式的 3D 文件转换为 USD 格式或从 USD 格式转换为 3D 文件。它们为批量处理和自动化提供了灵活性,可确保您的资产可以轻松转换,而无需人工干预。
USD 文件格式插件
:通过这种类型的 USD 插件,您可以保留原始源格式的数据,同时在打开舞台时动态读取数据并将其转换为 USD。这意味着,您可以使用原生格式处理资产,同时仍能从 USD 的强大功能中受益,同时保持数据完整性和工作流程效率。
概念数据映射以及提取、转换、加载(ETL)是开发数据交换解决方案的关键流程,有助于确保无缝传输和集成 3D 资产和数据。
概念数据映射涉及定义一个系统或格式的数据与另一个系统或格式中的数据对应的方式
。在使用不同的 DCC 应用或文件类型时,这一点尤为重要,因为每个应用或文件类型都可能有自己独特的数据结构和表示形式。
ETL 是一个更全面的过程,用于将数据从一个系统迁移到另一个系统并进行集成。它由以下主要步骤组成:
提取:
从源系统或文件格式中提取数据。数据可以是 3D 模型、纹理、动画或任何其他相关内容。
转换
:转换提取的数据以满足目标运行时需求。这可能包括转换单位、重新格式化数据结构,或过滤掉无关紧要或私有数据。
加载:
将转换后的数据加载到目标运行时。
这有时称为 ingestion
,可确保数据可随时用于基于 USD 的工作流和应用。
概念数据映射和 ETL 流程对于确保内容可以有效准确地传输到 USD 生态系统,并在整个交换过程中保持其质量和功能至关重要。
OpenUSD Exchange SDK
有助于加速数据交换解决方案的开发,并有助于创作高质量的 OpenUSD 数据。
在将数据导入 OpenUSD 后,下一步是了解如何聚合和构建数据,以更大限度地提高效率。
了解如何通过免费的 Learn OpenUSD 自定进度课程
开发数据交换工作流
。
资产结构原则和内容聚合
结构良好的资产可实现有效的内容聚合和资产可重用性,这正是 OpenUSD 真正擅长的地方。在复杂的项目中,正确的素材结构和内容聚合是高效组织和合并 3D 素材和场景的关键方法。
开发素材结构需要将 3D 内容分解为更小、可管理的组件,以便于引用和重复使用。这样,您就可以创建一个层次化且井然有序的结构,简化资产的管理和维护。这种方法允许您在需要时仅加载必要的组件,从而提高可复用性和性能。
借助精心设计的资产结构,您可以增强协作,因为不同的工作流可以在不同的层中独立创作数据,然后将数据组合成单个资产。这种非破坏性编辑方法可确保一个团队所做的更改不会覆盖另一个团队的工作。
构建完较小的组件后,请使用内容聚合功能将其合并为一个连贯一致的场景。合成操作符 (例如 references 和 payloads) 支持此过程。
图 2
、
液化天然气运输船的数字孪生
(图片来源:HD Hyundai 和 Siemens)
这些技术有助于使 OpenUSD 成为管理大型 3D 项目的强大工具,促进高效协作、资产重复使用和工作流效率,使其成为任何现代 3D 生产工作流不可或缺的一部分。
通过免费的 Learn OpenUSD 自定进度课程,学习
资产结构化原则和内容聚合
。
合成操作符
合成操作符是 OpenUSD 的基础运算符,可将场景描述的较小单元组合成较大的单元。这可在 OpenUSD 工作流中实现稳健的资产结构和内容聚合。
此功能是 OpenUSD 的超能力之一,使其有别于其他 3D 数据格式和框架。
图 3、合成操作符支持创建复杂的 3D 对象,例如自动移动机器人
合成操作符是以非破坏性方式组合 USD 数据单位的必要工具。OpenUSD 中提供的合成操作符包括子层、引用、有效载荷、变体集、继承、专门化以及最近添加的重新定位。
从以下合成操作符开始,了解如何在自己的工作流程中利用它们:
子层
:支持多个协作者或进程在给定的 USD 环境中运行。子层中创作的数据以“include”方式添加到 USD 上下文中。
引用:最适合内容聚合,可用于将一个 USD 上下文移植到另一个 USD 上下文的层次结构中。通过引用资产,您可以为每个组件维护单一的事实来源,从而确保一致性并减少冗余。这在重复使用相同资产的大型项目中特别有用。
变体集
:用于存储组件的多个变体,在不复制数据的情况下增强适应性。此功能使您能够在单个高效结构中创建素材的不同版本,例如不同的材质或配置。用例包括指定模型的不同纹理、材质或配置,例如描述具有不同涂料颜色或不同轮缘风格的汽车。
图 4、OpenUSD 场景的合成
学习和实施合成操作符将为您的 USD 工作流提供即时价值。然而,由于其广泛的灵活性和多样化的应用,掌握可能需要数年时间。为了有效使用 OpenUSD,我们强烈建议您投入大量时间进行实验,并深入了解合成弧类型。
通过免费的 Learn OpenUSD 自定进度培训,学习
创建 composition arcs
。
开始使用
OpenUSD 为高效的 3D 资产管理和内容聚合提供了一整套功能,使其成为大型项目不可或缺的框架。通过掌握数据交换、资产结构化和合成操作符,您可以显著改善 3D 工作流中的协作、性能和可复用性。
通过学习 Learn OpenUSD 学习路径上的课程,在基础知识的基础上进行构建。
开始在
Omniverse Kit SDK
上使用 OpenUSD 进行开发,并使用生成式 AI 模型通过
USD NIM 微服务
进行 OpenUSD 开发。
有关 OpenUSD 的更多资源,请访问
Alliance for OpenUSD (
AOUSD
) 论坛
并访问 AOUSD 网站。 |
https://developer.nvidia.com/blog/nvidia-jetpack-6-2-brings-super-mode-to-nvidia-jetson-orin-nano-and-jetson-orin-nx-modules/ | NVIDIA JetPack 6.2 Brings Super Mode to NVIDIA Jetson Orin Nano and Jetson Orin NX Modules | The introduction of the
NVIDIA Jetson Orin Nano Super Developer Kit
sparked a new age of
generative AI
for small edge devices. The new
Super Mode
delivered an unprecedented generative AI performance boost of up to 1.7x on the developer kit, making it the most affordable generative AI supercomputer.
JetPack 6.2 is now available to support Super Mode for
Jetson Orin Nano
and
Jetson Orin NX
production modules, delivering up to 2x higher generative AI model performance. Now you can unlock greater value and lower total cost of ownership for new and existing robotics and edge AI applications.
This post discusses the details of Super Mode, including new power modes, benchmarks for popular generative AI models on the Jetson Orin Nano and Orin NX modules, updates to the documentation, and insights into NPN partners supporting Super Mode.
New reference power modes on the Jetson Orin Nano and Jetson Orin NX series
JetPack 6.2 enables the power boost on the Jetson Orin Nano and Jetson Orin NX series by unlocking higher frequencies on the GPU, DLA memory, and CPU clocks.
MODULE
EXISTING REFERENCE POWER MODES
(available with existing flashing configs)
NEW POWER MODES
(available only with new flashing configs)
NVIDIA Jetson Orin Nano 4GB
7W, 10W
10W, 25W,
MAXN
SUPER
NVIDIA Jetson Orin Nano 8GB
7W, 15W
15W, 25W,
MAXN
SUPER
NVIDIA Jetson Orin NX 8GB
10W, 15W, 20W,MAXN
10W, 15W, 20W, 40W,
MAXN
SUPER
NVIDIA Jetson Orin NX 16GB
10W, 15W, 25W,MAXN
10W, 15W, 25W, 40W,
MAXN
SUPER
Table 1. New reference power modes on the Jetson Orin Nano and Jetson Orin NX modules
Jetson Orin Nano modules now have a 25W mode and a new uncapped MAXN SUPER mode. Similarly, Jetson Orin NX modules can now use a new higher 40W reference power mode as well as an uncapped MAXN SUPER mode.
The MAXN SUPER is an uncapped power mode that enables the highest number of cores and clock frequency for CPU, GPU, DLA, PVA, and SOC engines. If the total module power exceeds the thermal design power (TDP) budget in this mode, the module is throttled to lower frequency, which delivers lower performance while staying within the thermal budget.
We strongly recommend building your own custom power mode to find the right balance between power consumption or thermal stability and performance for your application and needs.
Table 2 compares the detailed specifications of Jetson Orin Nano 4GB and 8GB and Jetson Orin NX 8GB and 16GB in their original and Super Mode.
ORIN NANO 4GB
ORIN NANO 4GB
(SUPER)
ORIN NANO 8GB
ORIN NANO 8GB
(SUPER)
ORIN NX 8GB
ORIN NX 8GB
(SUPER)
ORIN NX 16GB
ORIN NX 16GB
(SUPER)
PEAK AI PERF
INT8
20 TOPS (Sparse)
10 TOPS (Dense)
34 TOPS (Sparse)
17 TOPS (Dense)
40 TOPS (Sparse)
20 TOPS (Dense)
67 TOPS (Sparse)
33 TOPS (Dense)
70 TOPS (Sparse)
35 TOPS (Dense)
117 TOPS (Sparse)
58 TOPS (Dense)
100 TOPS (Sparse)
50 TOPS (Dense)
157 TOPS (Sparse)
78 TOPS (Dense)
NVIDIA AMPERE GPU
512 CUDA Cores
16 Tensor Cores
625 MHz
20/10 INT8 TOPs (S/D)
5 FP16 TFLOPs
512 CUDA Cores
16 Tensor Cores
1020 MHz
34/17 INT8 TOPs (S/D)
8.5 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
625 MHz
40/20 INT8 TOPs (S/D)
10 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
1020 MHz
67/33 INT8 TOPs (S/D)
17 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
765 MHz
50/25 INT8 TOPs (S/D)
13 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
1173 MHz
77/38 INT8 TOPs (S/D)
19 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
918 MHz
60/30 INT8 TOPs (S/D)
15 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
1173 MHz
77/38 INT8 TOPs (S/D)
19 FP16 TFLOPs
CPU
6X A78
1.5 GHz
6X A78
1.7 GHz
6X A78
1.5 GHz
6X A78
1.7 GHz
6X A78
2.0 GHz
6X A78
2.0 GHz
8X A782
.0 GHz
8X A78
2.0 GHz
DLA (S/D)
NA
NA
NA
NA
20/10 INT8 TOPs
40/20 INT8 TOPs
40/20 INT8 TOPs
80/40 INT8 TOPs
DRAM BW
34 GB/s
51 GB/s
68 GB/s
102 GB/s
102 GB/s
102 GB/s
102 GB/s
102 GB/s
REFERENCE MODULE POWER
7W |
10W
7W |
25W
|
MAXN SUPER
7W |
15W
15W
|
25W
|
MAXN SUPER
10W |
15W |
20W | MAXN
10W |
15W |
20W |
40W
|
MAXN SUPER
10W |
15W |
25W | MAXN
10W |
15W |
25W |
40W
|
MAXN SUPER
Table 2. Original specs for Jetson Orin Nano and Jetson Orin NX and the specs in Super Mode
While using the new power modes, ensure that your product’s existing or new thermal design can accommodate the new specifications with the power modes. For more information, see the updated
Thermal Design Guide
.
Updated Power Estimator Tool
The Power Estimator Tool is a powerful tool provided by NVIDIA to create custom power profiles and
nvpmodel
configuration files by modifying system parameters such as the cores, maximum frequency and load levels on the GPU, CPU, DLA, and so on. The tool provides an estimated power consumption with various settings and can be used to create optimal parameters settings to get the desired balance between performance and power consumption.
We have updated the Power Estimator Tool with Super Mode. We strongly recommend that you use the Power Estimator Tool and verify in practice before deploying with high-performance applications.
Boost performance on popular generative AI models
With the introduction of Super Mode with JetPack 6.2, the Jetson Orin Nano and Jetson Orin NX modules deliver up to a 2x inference performance boost. We benchmarked the most popular large language models (LLMs), vision language models (VLMs) and vision transformers (ViTs).
Large language models
The following chart and tables show the Super Mode performance benchmark for popular LLMs such as Llama3.1 8B, Qwen2.5 7B, and Gemma2 2B.
Figure 1. Performance improvements for LLMs using Super Mode
DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.
In the following tables,
LLM generation performance (tokens per second) was measured with INT4 quantization using MLC API.
Table 3 shows the LLM performance gain on Jetson Orin Nano 4GB with JetPack 6.2.
Model
Orin Nano 8GB (original)
Orin Nano 8GB (Super Mode)
Perf Gain (x)
Gemma 2 2B
11.40
18.60
1.64
SmolLM2 1.7B
23.00
35.80
1.56
Table 3. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin Nano 4GB
Table 4 shows the LLM performance gain on Jetson Orin Nano 8GB with JetPack 6.2.
Model
Orin Nano 8GB (original)
Orin Nano 8GB (Super Mode)
Perf Gain (x)
Llama 3.1 8B
14.00
19.10
1.37
Llama 3.2 3B
27.70
43.10
1.55
Qwen 2.5 7B
14.20
21.80
1.53
Gemma 2 2B
21.5
35.0
1.63
Gemma 2 9B
7.20
9.20
1.28
Phi-3.5 3.8B
24.70
38.10
1.54
SmolLM2 1.7B
41.00
64.50
1.57
Table 4. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin Nano 8GB
Table 5 shows the LLM performance gain on Jetson Orin NX 8GB with JetPack 6.2.
Model
Orin NX 8GB (original)
Orin NX 8GB (Super Mode)
Perf Gain (x)
Llama 3.1 8B
15.90
23.10
1.46
Llama 3.2 3B
34.50
46.50
1.35
Qwen 2.5 7B
17.10
23.80
1.39
Gemma 2 2B
26.60
39.30
1.48
Gemma 2 9B
8.80
13.38
1.52
Phi-3.5 3.8B
30.80
41.30
1.34
SmolLM2 1.7B
51.50
69.80
1.35
Table 5. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin NX 8GB
Table 6 shows the LLM performance gain on Jetson Orin NX 16GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
Llama 3.1 8B
20.50
22.80
1.11
Llama 3.2 3B
40.40
45.80
1.13
Qwen 2.5 7B
20.80
23.50
1.13
Gemma 2 2B
31.60
39.00
1.23
Gemma 2 9B
10.56
13.26
1.26
Phi-3.5 3.8B
35.90
40.90
1.14
SmolLM2 1.7B
59.50
68.80
1.16
Table 6. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin NX and Orin Nano modules
Vision language models
The following chart and tables show the Super Mode performance benchmark for popular VLMs such as VILA1.5 8B, LLAVA1.6 7B, and Qwen2 VL 2B.
Figure 2. Performance improvements of VLMs when run using Super Mode
DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.
Table 7 shows the VLM performance gain on Jetson Orin Nano 4GB with JetPack 6.2.
Model
Orin Nano 4GB (original)
Orin Nano 4GB (Super Mode)
Perf Gain (x)
PaliGemma2 3B
7.2
11.2
1.56
Table 7. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin Nano 4GB
Table 8 shows the VLM performance gain on Jetson Orin Nano 8GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
VILA 1.5 3B
0.7
1.1
1.51
VILA 1.5 8B
0.6
0.8
1.45
LLAVA 1.6 7B
0.4
0.6
1.38
Qwen2 VL 2B
2.8
4.4
1.57
InternVL2.5 4B
2.5
5.1
2.04
PaliGemma2 3B
13.7
21.6
1.58
SmolVLM 2B
8.1
12.9
1.59
Table 8. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin Nano 8GB
Table 9 shows the VLM performance gain on Jetson Orin NX 8GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
VILA 1.5 3B
0.8
1
1.25
VILA 1.5 8B
0.7
1.04
1.50
LLAVA 1.6 7B
0.5
1.2
2.54
Qwen2 VL 2B
3.4
4.8
1.41
InternVL2.5 4B
3
4.1
1.37
PaliGemma2 3B
17.1
23.9
1.40
SmolVLM 2B
9.7
14.4
1.48
Table 9. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin NX 16GB
Table 10 shows the VLM performance gain on Jetson Orin NX 16GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
VILA 1.5 3B
1
1.3
1.23
VILA 1.5 8B
0.8
1
1.25
LLAVA 1.6 7B
0.6
0.7
1.07
Qwen2 VL 2B
4
4.8
1.20
InternVL2.5 4B
2.8
4.4
1.57
PaliGemma2 3B
20
23.8
1.19
SmolVLM 2B
11.7
14.3
1.22
Table 10. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin NX and Orin Nano modules
All VILA and LLAVA models were run with INT4 precision using MLC while the rest of the models were run in FP4 precision with Hugging Face Transformers.
Vision transformers
The following chart and tables show the Super Mode performance benchmark for popular ViTs such as CLIP, DINO, and SAM2.
Figure 3. Performance improvements of ViTs when run using Super Mode
DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.
Table 11 shows the ViT performance gain on Jetson Orin Nano 4GB with JetPack 6.2.
Model
Orin Nano 4GB (original)
Orin Nano 4GB (Super Mode)
Perf Gain (x)
clip-vit-base-patch32
126.8
189.5
1.49
clip-vit-base-patch16
63.2
112.4
1.78
DINOv2-base-patch14
49.3
79.3
1.61
SAM2 base
2.5
3.8
1.54
vit-base-patch16-224
62.4
103.3
1.66
Table 11. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin Nano 4GB
Table 12 shows the ViT performance gain on Jetson Orin Nano 8GB with JetPack 6.2.
Model
Orin Nano 8GB (original)
Orin Nano 8GB (Super Mode)
Perf Gain (x)
clip-vit-base-patch32
196
314
1.60
clip-vit-base-patch16
95
161
1.69
DINOv2-base-patch14
75
126
1.68
SAM2 base
4.4
6.3
1.43
Grounding DINO
4.1
6.2
1.52
vit-base-patch16-224
98
158
1.61
vit-base-patch32-224
171
273
1.60
Table 12. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin Nano 8GB
Table 13 shows the ViT performance gain on Jetson Orin NX 8GB with JetPack 6.2.
Model
Orin NX 8GB (original)
Orin NX 8GB (Super Mode)
Perf Gain (x)
clip-vit-base-patch32
234.0
361.1
1.54
clip-vit-base-patch16
101.7
204.3
2.01
DINOv2-base-patch14
81.4
160.3
1.97
SAM2 base
3.9
7.4
1.92
Grounding DINO
4.2
7.4
1.75
vit-base-patch16-224
98.6
192.5
1.95
vit-base-patch32-224
193.1
313.5
1.62
Table 13. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin NX 8GB
Table 14 shows the ViT performance gain on Jetson Orin NX 16GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
clip-vit-base-patch32
323.2
356.7
1.10
clip-vit-base-patch16
163.5
193.6
1.18
DINOv2-base-patch14
127.5
159.8
1.25
SAM2 base
6.2
7.3
1.18
Grounding DINO
6.2
7.2
1.16
vit-base-patch16-224
158.6
190.2
1.20
vit-base-patch32-224
281.2
309.5
1.10
Table 14. Benchmark performance in frames/sec for popular ViTs on Jetson Orin NX 16GB
All ViT models were run with FP16 precision using NVIDIA TensorRT and measurements are in FPS.
Getting started on NVIDIA Jetson Orin Nano and Jetson Orin NX with JetPack 6.2
The NVIDIA Jetson ecosystem provides various ways for you to flash the developer kit and production modules with the JetPack image.
To install JetPack 6.2 on the Jetson Orin Nano Developer Kit or the modules, use one of the following methods:
SD-Card image
from the
JetPack SDK
page.
SDK Manager
for flashing per the
download and install instructions
.
Flashing script
for bootloader, kernel, and root file system, per
Flashing Support
.
New flashing configuration
The new power modes are only available with the new flashing configuration. The default flashing configuration has not changed. To enable the new power modes, you must use the new flashing configuration while flashing.
Here’s the new flashing configuration to be used with flashing:
jetson-orin-nano-devkit-super.conf
After flashing or updating to JetPack 6.2, run the following command to start the newly available Super Mode.
MAXN SUPER mode on Jetson Orin Nano Modules:
sudo nvpmodel -m 2
MAXN SUPER mode on Jetson Orin NX Modules:
sudo nvpmodel -m 0
You can also select the MAXN SUPER and other power modes from the power mode menu at the top-right corner of the page.
Figure 4. Power mode selection menu
s
Jetson AI Lab
The
Jetson AI Lab
is the NVIDIA hub for exploring and experimenting with generative AI technologies optimized for edge devices. It supports developers and provides a collaborative community with nearly 50 tutorials, prebuilt containers, and resources for deploying on-device LLMs, SLMs, VLMs, diffusion policies, and speech models using optimized inferencing infrastructures.
By simplifying access to cutting-edge AI tools, the lab empowers developers of all levels to innovate and deploy generative AI locally, advancing open-source edge AI and robot learning.
Dive into generative AI with ease using these easy-to-follow tutorials for your developer kit powered by JetPack 6.2:
Ollama with Open Web UI
: Introductory LLM application example
NanoOWL
: Interactive demo with cutting-edge ViT model
LeRobot
: Physical AI example with HuggingFace LeRobot
Updated documentation: Datasheets and design guides
With the newer performance boost, the following resources have been updated and can be downloaded from the
Jetson Download Center
:
Jetson Orin NX Module datasheet
Jetson Orin Nano Module datasheet
Jetson Orin NX Series and Jetson Orin Nano Series Design Guide
Thermal Design Guide
Jetson ecosystem partners ready for Super Mode
To support customer deployments, the Jetson ecosystem partners have enhanced their solutions to support this boosted performance.
Category
Jetson ecosystem partners
ISV solutions
DeepEdge
Edge Impulse
RidgeRun
Ultralytics
Hardware system partners
AAEON
Advantech
Aetina
AIMobile
ASUSTek
Axiomtek
Connect Tech
Seeed Studio
Syslogic
Vecow
Yuan High-Tech
Thermal solutions
Advanced Thermal Solutions
Frore Systems
NVIDIA Jetson Orin lifecycle and roadmap
Due to the growing customer demand for Jetson Orin, NVIDIA
recently announced
the extension of the product lifecycle of Jetson Orin through 2032. With this performance boost, the Jetson Orin Nano and Orin NX series are the ideal platforms for both current and future models.
The upcoming
JetPack 5.1.5
will also enable Super Mode for the Jetson Orin NX and Jetson Orin Nano modules. Developers and customers who develop with JetPack 5 will benefit from the performance boost.
Figure 5. JetPack software roadmap
Boost your application performance with JetPack 6.2
JetPack 6.2 is a groundbreaking release. It delivers an astonishing 2x boost in inference performance on existing Jetson modules, without any added cost. This upgrade is a must-have for Jetson developers and customers looking to supercharge their applications. Upgrade to JetPack 6.2 today and unleash the full potential of your Jetson platform.
Stay up to date by subscribing to our
newsletter
, and follow NVIDIA Robotics on
LinkedIn
,
Instagram
,
X
, and
Facebook
. For more information, explore our
documentation
or join the Robotics community on our
developer forums
,
Discord
, and
YouTube
channels. | https://developer.nvidia.com/zh-cn/blog/nvidia-jetpack-6-2-brings-super-mode-to-nvidia-jetson-orin-nano-and-jetson-orin-nx-modules/ | NVIDIA JetPack 6.2 为 NVIDIA Jetson Orin Nano 和 Jetson Orin NX 模块引入超级模式 | NVIDIA Jetson Orin Nano 超级开发者套件
的推出为小型边缘设备开启了
生成式 AI
的新时代。新的
超级模式
在开发者套件上实现了前所未有的生成式 AI 性能提升,最高可达 1.7 倍,使其成为最经济实惠的生成式 AI 超级计算机。
JetPack 6.2 现已支持
Jetson Orin Nano
和
Jetson Orin NX
生产模组的超级模式,可将生成式 AI 模型的性能提升高达 2 倍。现在,您可以为新的和现有的机器人和边缘 AI 应用释放更多价值并降低总体拥有成本。
本文讨论了 Super Mode 的详细信息,包括新的功率模式、Jetson Orin Nano 和 Orin NX 模组上热门生成式 AI 模型的基准测试、文档更新,以及对支持 Super Mode 的 NPN 合作伙伴的见解。
Jetson Orin Nano 和 Jetson Orin NX 系列上的新参考功率模式
JetPack 6.2 通过在 GPU、DLA 内存和 CPU 时钟上解锁更高的频率,提升 Jetson Orin Nano 和 Jetson Orin NX 系列的功耗。
模块
现有参考电源模式
(
适用于现有刷写配置
)
NEW POWER MODES
(仅适用于新的刷写配置)
NVIDIA Jetson Orin Nano 4GB
7 瓦、10 瓦
10 瓦、25 瓦、
最大值
超级 (SUPER)
NVIDIA Jetson Orin Nano 8GB
7 瓦、15 瓦
15 瓦、25 瓦、
最大值
超级 (SUPER)
NVIDIA Jetson Orin NX 8GB
10 瓦、15 瓦、20 瓦、MAXN
10 瓦、15 瓦、20 瓦、40 瓦,
最大值
超级 (SUPER)
NVIDIA Jetson Orin NX 16GB
10 瓦、15 瓦、25 瓦、MAXN
10 瓦、15 瓦、25 瓦、40 瓦,
最大值
超级 (SUPER)
表 1、Jetson Orin Nano 和 Jetson Orin NX 模组上的新参考功率模式
Jetson Orin Nano 模组现提供 25W 模式和全新的无上限 MAXN SUPER 模式。同样,Jetson Orin NX 模组现在可以使用更高的 40W 参考功率模式以及不设上限的 MAXN SUPER 模式。
MAXN SUPER 是不设上限的功率模式,可为 CPU、GPU、DLA、PVA 和 SOC 引擎提供最高数量的核心数和时钟频率。在此模式下,如果模组总功率超过热设计功率(TDP)预算,系统会将模组调低频率,从而在保持热预算的同时降低性能。
我们强烈建议您构建自己的自定义功率模式,以便根据您的应用和需求,在功耗或热稳定性与性能之间找到适当的平衡。
表 2 比较了 Jetson Orin Nano 4GB 和 8GB 以及 Jetson Orin NX 8GB 和 16GB 在原始模式和 Super Mode 下的详细规格。
Orin Nano 4GB*
Orin Nano 4GB
(SUPER)*
Orin Nano 8GB*
Orin Nano 8GB*
(SUPER)*
ORIN NX 8GB*
ORIN NX 8GB*
(SUPER)*
Orin NX 16GB*
Orin NX 16GB*
(SUPER)*
AI 性能峰值
*
INT8
*
20 TOPS (Sparse) 10 TOPS (Dense)
34 TOPS (Sparse)
17 TOPS (Dense)
40 TOPS (Sparse) 20 TOPS (Dense)
67 TOPS (Sparse)
33 TOPS (Dense)
70 TOPS (Sparse) 35 TOPS (Dense)
117 TOPS (Sparse)
58 TOPS (Dense)
100 TOPS (Sparse) 50 TOPS (Dense)
157 TOPS (Sparse)
78 TOPS (Dense)
NVIDIA Ampere GPU
512 个 CUDA Cores 16 个 Tensor Cores 625 MHz 20/10 INT8 TOPs (S/D) 5 FP16 TFLOPs
512 个 CUDA Cores 16 个 Tensor Cores
1020 MHz
34/17 INT8 TOPS (S/D)
8.5 FP16 TFLOPS
1024 个 CUDA Cores 32 个 Tensor Cores 625 MHz 40/20 INT8 TOPs (S/D) 10 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores
1020 MHz
67/33 INT8 TOPs (S/D)
17 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores 765 MHz 50/25 INT8 TOPs (S/D) 13 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores
1173 MHz
77/38 INT8 TOPs (S/D)
19 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores 918 MHz 60/30 INT8 TOPs (S/D) 15 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores
1173 MHz
77/38 INT8 TOPs (S/D)
19 FP16 TFLOPs
CPU
*
6 块 A78+
1.5 GHz*
6 块 A78+
1.7 GHz
*
6 块 A78+
1.5 GHz*
6 块 A78+
1.7 GHz
*
6 块 A78+
2.0 GHz*
6 块 A78+
2.0 GHz*
8 块 A78+2
.0 GHz 以上
8 块 A78+
2.0 GHz*
DLA (S/D)
*
NA*
NA*
NA*
NA*
20/10 INT8 TOPS*
40/20 INT8 TOPS
*
40/20 INT8 TOPS*
80/40 INT8 TOPS
*
DRAM 重制版
*
34 GB/秒*
51 GB/秒
*
68 GB/秒*
102 GB/秒
*
102 GB/秒*
102 GB/秒*
102 GB/秒*
102 GB/秒*
模组功耗
*
7 瓦
10 瓦以上
7 瓦
10 瓦
25 瓦
*
7 瓦
15 瓦以上
7 瓦
15 瓦
25 瓦
*
10 瓦
15 瓦
20 瓦以上
10 瓦
15 瓦
25 瓦
40 瓦
*
10 瓦
15 瓦
25 瓦以上
10 瓦
15 瓦
25 瓦
40 瓦
*
表 2、Jetson Orin Nano 和 Jetson Orin NX 的原始规格以及超级模式下的规格
在使用新的功率模式时,请确保您产品的现有或新的散热设计能够适应新的功率模式规格。有关更多信息,请参阅更新的
Thermal Design Guide
。
更新了功耗估算器工具
功耗估算器工具是 NVIDIA 提供的一款功能强大的工具,可通过修改 GPU、CPU、DLA 上的核心、最大频率和负载级别等系统参数来创建自定义功耗配置文件和
nvpmodel
配置文件。该工具通过各种设置提供估计功耗,并可用于创建最佳参数设置,以在性能和功耗之间实现所需的平衡。
我们更新了 Power Estimator Tool 的超级模式。我们强烈建议您使用 Power Estimator Tool 并在实践中进行验证,然后再通过高性能应用程序进行部署。
提升热门生成式 AI 模型的性能
随着 JetPack 6.2 超级模式的引入,Jetson Orin Nano 和 Jetson Orin NX 模组可将推理性能提升高达 2 倍。我们对热门的大语言模型 (LLMs)、视觉语言模型 (VLMs) 和视觉转换器 (ViTs) 进行了基准测试。
大语言模型
以下图表和表格显示了 Llama3.1 8B、Qwen2.5 7B 和 Gemma2 2B 等热门 LLM 的超模性能基准测试。
图 1、使用 Super Mode 可提升 LLM 的性能
DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。
在下表中,LLM 生成性能(每秒令牌数)是使用 MLC API 通过 INT4 量化测量的。
表 3 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 上的 LLM 性能提升。
模型
Orin Nano 8GB (原始版本)
Orin Nano 8GB (超级模式)
性能增益 (倍)
Gemma 2 2B
11.40
18.60 欧元
1.64
SmolLM2 1.7 B
23.00
35.80
1.56
表 3、对 Jetson Orin Nano 4GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位
表 4 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 上的 LLM 性能提升。
模型
Orin Nano 8GB (原始版本)
Orin Nano 8GB (超级模式)
性能增益 (倍)
Llama 3.1 8B
14.00
19.10
1.37
Llama 3.2 3B
27.70
43.10
1.55
Qwen 2.5 7B
14.20
21.80
1.53
Gemma 2 2B
21.5
35.0
1.63
Gemma 2 9B
7.20
9.20
1.28
Phi-3.5 38 B
24.70
38.10
1.54
SmolLM2 1.7 B
41.00
64.50
1.57
表 4、对 Jetson Orin Nano 8GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位
表 5 显示了搭载 JetPack 6.2 的 Jetson Orin NX 8GB 上的 LLM 性能提升。
模型
Orin NX 8GB (原始版本)
Orin NX 8GB (超级模式)
性能增益 (倍)
Llama 3.1 8B
15.90
23.10
1.46
Llama 3.2 3B
34.50
46.50
1.35
Qwen 2.5 7B
17.10
23.80
1.39
Gemma 2 2B
26.60
39.30
1.48
Gemma 2 9B
8.80
13.38
1.52
Phi-3.5 38 B
30.80
评分:41.30
1.34
SmolLM2 1.7 B
51.50
69.80
1.35
表 5、对 Jetson Orin NX 8GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位
表 6 显示了搭载 JetPack 6.2 的 Jetson Orin NX 16GB 上的 LLM 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
Llama 3.1 8B
20.50
22.80
1.11
Llama 3.2 3B
40.40
45.80
1.13
Qwen 2.5 7B
20.80
23.50
1.13
Gemma 2 2B
31.60 欧元
39.00
1.23
Gemma 2 9B
10.56
13.26
1.26
Phi-3.5 38 B
35.90
40.90
1.14
SmolLM2 1.7 B
59.50
68.80
1.16
表 6、对 Jetson Orin NX 和 Orin Nano 模组上的热门 LLM 进行性能基准测试,以令牌/秒为单位
视觉语言模型
以下图表和表格显示了热门 VLM (例如 VILA1.5 8B、LLAVA1.6 7B 和 Qwen2 VL 2B) 的超模性能基准测试。
图 2、使用 Super Mode 运行时 VLM 的性能提升
DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。
表 7 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 的 VLM 性能提升情况。
模型
Orin Nano 4GB (原始)
Orin Nano 4GB (超级模式)
性能增益 (倍)
PaliGemma2 3B
7.2
11.2
1.56
表 7、对 Jetson Orin Nano 4GB 上的热门 VLM 进行性能基准测试(tokens/sec)
表 8 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 上的 VLM 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
VILA 1.5 3B
0.7
1.1
1.51
VILA 1.5 8B
0.6
0.8
1.45
LLAVA 1.6 7B
0.4
0.6
1.38
Qwen2 VL 2B
2.8
4.4
1.57
InterVL2.5 4B
2.5
5.1
2.04
PaliGemma2 3B
13.7
21.6%
1.58
SmolVLM 2B
8.1
12.9
1.59
表 8、对 Jetson Orin Nano 8GB 上的热门 VLM 进行性能基准测试(tokens/sec)
表 9 显示搭载 JetPack 6.2 的 Jetson Orin NX 8GB 上的 VLM 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
VILA 1.5 3B
0.8
1
1.25
VILA 1.5 8B
0.7
1.04
1.50
LLAVA 1.6 7B
0.5
1.2
2.54
Qwen2 VL 2B
3.4
4.8
1.41
InterVL2.5 4B
3
4.1
1.37
PaliGemma2 3B
17.1
23.9
1.40
SmolVLM 2B
9.7
14.4
1.48
表 9、对 Jetson Orin NX 16GB 上的热门 VLM 进行性能基准测试,以令牌/秒为单位
表 10 显示配备 JetPack 6.2 的 Jetson Orin NX 16GB 上的 VLM 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
VILA 1.5 3B
1
1.3
1.23
VILA 1.5 8B
0.8
1
1.25
LLAVA 1.6 7B
0.6
0.7
1.07
Qwen2 VL 2B
4
4.8
1.20
InterVL2.5 4B
2.8
4.4
1.57
PaliGemma2 3B
20
23.8
1.19
SmolVLM 2B
11.7
14.3
1.22
表 10、对 Jetson Orin NX 和 Orin Nano 模组上的热门 VLM 进行性能基准测试,以令牌/秒为单位
所有 VILA 和 LLAVA 模型均使用 MLC 以 INT4 精度运行,而其余模型则使用 Hugging Face Transformers 以 FP4 精度运行。
视觉 Transformer 模型
以下图表和表格显示了 CLIP、DINO 和 SAM2 等热门 ViTs 的 Super Mode 性能基准测试。
图 3、使用 Super Mode 运行 ViTs 时的性能提升
DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。
表 11 显示搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 的 ViT 性能提升情况。
模型
Orin Nano 4GB (原始)
Orin Nano 4GB (超级模式)
性能增益 (倍)
clip-vit-base-patch32
126.8
189.5
1.49
clip-vit-base-patch16
63.2
112.4
1.78
基于 DINOv2 的补丁 14
49.3%
79.3
1.61
基于 SAM2
2.5
3.8
1.54
Vit-base-patch16-224
62.4%
103.3
1.66
表 11、对 Jetson Orin Nano 4GB 上的热门 ViTs 进行性能基准测试,以令牌/秒为单位
表 12 显示搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 的 ViT 性能提升情况。
模型
Orin Nano 8GB (原始版本)
Orin Nano 8GB (超级模式)
性能增益 (倍)
clip-vit-base-patch32
196
214
1.60
clip-vit-base-patch16
95%
151
1.69
基于 DINOv2 的补丁 14
75
126
1.68
基于 SAM2
4.4
6.3
1.43
支持恐龙
4.1
6.2
1.52
Vit-base-patch16-224
98
158
1.61
Vit-base-patch32-224
172
273
1.60
表 12、对 Jetson Orin Nano 8GB 上的热门 ViTs 进行性能基准测试,以令牌/秒为单位
表 13 显示配备 JetPack 6.2 的 Jetson Orin NX 8GB 上的 ViT 性能提升。
模型
Orin NX 8GB (原始版本)
Orin NX 8GB (超级模式)
性能增益 (倍)
clip-vit-base-patch32
224.0
361.1
1.54
clip-vit-base-patch16
101.7
204.3
2.01
基于 DINOv2 的补丁 14
81.4%
160.3
1.97
基于 SAM2
3.9
7.4
1.92
支持恐龙
4.2
7.4
1.75
Vit-base-patch16-224
98.6%
192.5
1.95
Vit-base-patch32-224
192.1
313.5
1.62
表 13、对 Jetson Orin NX 8GB 上的热门 ViTs 进行性能基准测试,以令牌/秒为单位
表 14 显示配备 JetPack 6.2 的 Jetson Orin NX 16GB 上的 ViT 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
clip-vit-base-patch32
322.2
356.7
1.10
clip-vit-base-patch16
163.5
192.6
1.18
基于 DINOv2 的补丁 14
127.5
159.8
1.25
基于 SAM2
6.2
7.3
1.18
支持恐龙
6.2
7.2
1.16
Vit-base-patch16-224
158.6
190.2
1.20
Vit-base-patch32-224
271.2
209.5
1.10
表 14、对 Jetson Orin NX 16GB 上的热门 ViT 进行性能基准测试 (帧/秒)
所有 ViT 模型均使用 NVIDIA TensorRT 以 FP16 精度运行,且测量以 FPS 进行。
使用 NVIDIA Jetson Orin Nano 和 NVIDIA Jetson Orin NX 的 JetPack 6.2 入门
NVIDIA Jetson 生态系统提供了多种使用 JetPack 镜像刷写开发者套件和生产模组的方式。
要在 Jetson Orin Nano 开发者套件或模组上安装 JetPack 6.2,请使用以下方法之一:
JetPack SDK
页面中的
SD-Card 镜像
。
SDK Manager
,用于
根据下载和安装说明
进行刷写。
根据
刷写支持
,刷写 bootloader、kernel 和 root file system 的脚本。
新的刷写配置
新的电源模式仅适用于新的刷写配置。默认刷写配置未发生变化。要启用新的电源模式,您必须在刷写时使用新的刷写配置。
以下是用于刷写的新刷写配置:
jetson-orin-nano-devkit-super.conf
刷写或更新到 JetPack 6.2 后,运行以下命令以启动新提供的超级模式。
Jetson Orin Nano 模组上的 MAXN SUPER 模式:
sudo nvpmodel -m 2
Jetson Orin NX 模组上的 MAXN SUPER 模式:
sudo nvpmodel -m 0
您还可以从页面右上角的电源模式菜单中选择 MAXN SUPER 和其他电源模式。
图 4. 电源模式选择菜单
Jetson AI 实验室
Jetson AI 实验室是 NVIDIA 探索和试验针对边缘设备优化的生成式 AI 技术的中心。它为开发者提供支持,并为协作社区提供了近 50 个教程、预构建容器和资源,用于使用优化的推理基础架构部署设备上的 LLM、SLM、VLM、diffusion policies 和 speech models。
通过简化对先进 AI 工具的访问,该实验室使各级别的开发者能够在本地创新和部署生成式 AI,从而推进开源边缘 AI 和机器人学习。
通过这些易于学习的教程,为由 JetPack 6.2 提供支持的开发者套件轻松深入了解生成式 AI:
使用 Open Web UI 的 Ollama
:入门级 LLM 应用示例
NanoOWL:
采用先进 ViT 模型的交互式演示
LeRobot
:Hugging Face LeRobot 的物理 AI 示例
更新文档:Datasheets 和设计指南
借助更新的性能提升,以下资源已更新,可从
Jetson Download Center
下载:
Jetson Orin NX 模组数据表
Jetson Orin Nano 模组数据表
Jetson Orin NX 系列和 Jetson Orin Nano 系列设计指南
散热设计指南
Jetson 生态系统合作伙伴为 Super Mode 做好准备
为支持客户部署,Jetson 生态系统合作伙伴已增强其解决方案,以支持这种提升的性能。
类别
Jetson 生态系统合作伙伴
ISV 解决方案
DeepEdge
Edge Impulse
RidgeRun
Ultralytics
硬件系统合作伙伴
AAEON
Advantech
Aetina
AIMobile
ASUSTek
Axiomtek
Connect Tech
Seeed Studio
Syslogic
Vecow
Yuan High-Tech
散热解决方案
Advanced Thermal Solutions
Frore Systems
NVIDIA Jetson Orin 生命周期和路线图
鉴于客户对 Jetson Orin 的需求日益增长,NVIDIA
最近宣布
将 Jetson Orin 的产品生命周期延长至 2032 年。凭借这种性能提升,Jetson Orin Nano 和 Orin NX 系列成为当前和未来模型的理想平台。
即将推出的
JetPack 5.1.5
还将为 Jetson Orin NX 和 Jetson Orin Nano 模组启用超级模式。使用 JetPack 5 进行开发的开发者和客户将从性能提升中受益。
图 5、JetPack 软件路线图
使用 JetPack 6.2 提升应用性能
JetPack 6.2 是一个开创性版本。它可在现有 Jetson 模组上实现惊人的 2 倍推理性能提升,且无需增加任何成本。对于希望为其应用程序提供强效助力的 Jetson 开发者和客户而言,此次升级必不可少。立即升级到 JetPack 6.2,充分发挥 Jetson 平台的全部潜力。 |
https://developer.nvidia.com/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform/ | Advancing Physical AI with NVIDIA Cosmos World Foundation Model Platform | As robotics and autonomous vehicles advance, accelerating development of
physical AI
—which enables
autonomous machines
to perceive, understand, and perform complex actions in the physical world—has become essential. At the center of these systems are
world foundation models (WFMs)
—AI models that simulate physical states through physics-aware videos, enabling machines to make accurate decisions and interact seamlessly with their surroundings.
NVIDIA Cosmos
is a platform that helps developers build custom world models for physical AI systems at scale. It offers
open world foundation models
and tools for every stage of development, from data curation to training to customization.
Video 1. NVIDIA Cosmos: A World Foundation Model Platform for Physical AI
This post explains Cosmos and its key features that accelerate physical AI development.
Accelerating world model development with NVIDIA Cosmos
Building physical AI is challenging, demanding precise simulations and real-world behavior understanding and prediction. A key tool for overcoming these challenges is a world model, which predicts future environmental states based on past observations and current inputs. These models are invaluable for physical AI builders, enabling them to simulate, train, and refine systems in controlled environments.
However, developing effective world models requires vast amounts of data, computational power, and real-world testing, which can introduce significant safety risks, logistical hurdles, and prohibitive costs. To address these challenges, developers often turn to
synthetic data
generated from
3D simulations
to train models. While synthetic data is a powerful tool, creating it is resource-intensive and may fall short of accurately reflecting real-world physics, particularly in complex or edge-case scenarios.
The
end-to-end NVIDIA Cosmos platform
accelerates world model development for physical AI systems. Built on CUDA, Cosmos combines state-of-the-art world foundation models, video tokenizers, and AI-accelerated data processing pipelines.
Developers can accelerate world model development by fine-tuning Cosmos world foundation models or building new ones from the ground up. In addition to Cosmos world foundation models, the platform also includes:
NVIDIA NeMo Curator
for efficient video data curation
Cosmos Tokenizer
for efficient, compact, and high-fidelity video tokenization
Cosmos world foundation models pretrained for robotics and autonomous driving applications
NVIDIA NeMo Framework
for model training and optimization
Figure 1. NVIDIA Cosmos is a world foundation model development platform with generative models, data curator, tokenizer, and framework to accelerate physical AI development
Pretrained world foundation models for physical AI
Cosmos world foundation models are pretrained large generative AI models trained on 9,000 trillion tokens—including 20 million hours of data from
autonomous driving
,
robotics
, synthetic environments, and other related domains. These models create realistic synthetic videos of environments and interactions, providing a scalable foundation for training complex systems, from simulating
humanoid robots
performing advanced actions to developing end-to-end autonomous driving models.
These models use two architectures: autoregressive and diffusion. Both approaches use the transformer architecture for its scalability and effectiveness in handling complex temporal dependencies.
Autoregressive model
Cosmos autoregressive model is designed for video generation, predicting the next token based on input text and past video frames. It uses a transformer decoder architecture, with key modifications for world model development.
3D RoPE (Rotary Position Embeddings) encodes spatial and temporal dimensions separately, ensuring precise video sequence representation.
Cross-attention layers enable text inputs, providing better control over world generation.
QK-normalization enhances training stability.
Pretraining of this model is progressive, starting with predicting up to 17 future frames from a single input frame, then extending to 34 frames, and eventually up to 121 frames (or 50,000 tokens). Text inputs are introduced to combine descriptions with video frames, and the model is fine-tuned with high-quality data for robust performance. This structured approach enables the model to generate videos of varying lengths and complexities, with or without text inputs.
Figure 2. The Cosmos autoregressive model uses a transformer decoder architecture, with key modifications for world model development
Diffusion models
Diffusion models
are popular for generating images, videos, and audio due to their ability to deconstruct training data and reconstruct it based on user input, producing high-quality, realistic outputs.
Diffusion models operate in two phases:
Forward diffusion process:
Training data is progressively corrupted by adding Gaussian noise over multiple steps, effectively transforming it into pure noise.
Reverse diffusion process:
The model learns to reverse this noise step by step, recovering the original data by denoising the corrupted input.
Once trained, diffusion models generate new data by sampling random Gaussian noise and passing it through the learned denoising process. In addition, Cosmos diffusion models also get several key updates tailored for physical AI development.
3D patchification processes video into smaller patches, simplifying spatio-temporal sequence representation.
Hybrid positional embeddings handle spatial and temporal dimensions, supporting videos with varying resolutions and frame rates.
Cross-attention layers incorporate text inputs, enabling better control over video generation based on descriptions.
Adaptive layer normalization with LoRA reduces model size by 36%, maintaining high performance with fewer resources.
Figure 3. Cosmos diffusion model architecture combines advanced video compression, flexible positional encoding, and text integration to deliver high-quality, physics-aware video generation
Model sizes for varied needs
Developers can choose from the following three model sizes to meet performance, quality, and deployment needs.
Nano:
Optimized for real-time, low-latency inference and edge deployment.
Super:
Designed as performant baseline models.
Ultra:
Focused on maximum quality and fidelity, ideal for distilling custom models.
Strengths and limitations
Cosmos world foundation models generate low-resolution, real-world-accurate synthetic video data, essential for training
robotics
and autonomous vehicle systems. While they lack artistic flair, their outputs closely replicate the physical world, making them ideal for precise object permanence and realistic scenarios in physical AI model training.
Guardrails for safe use of Cosmos world foundation models
AI models need guardrails to ensure reliability by mitigating hallucinations, preventing harmful outputs, safeguarding privacy, and aligning with AI standards for safe and controlled deployment. Cosmos ensures the safe use of its world foundation models through a customizable, two-stage guardrail system aligned with NVIDIA’s commitment to trustworthy AI.
Cosmos Guardrails operates in two stages: Pre-guard and Post-guard.
Pre-guard
This stage involves text prompt-based safety measures using two layers:
Keyword Blocking:
A blocklist checker scans prompts for unsafe keywords, using lemmatization to detect variations and blocking non-English terms or spelling errors.
Aegis Guardrail:
The NVIDIA fine-tuned Aegis AI Content Safety model detects and blocks semantically unsafe prompts, including categories like violence, harassment, and profanity. Unsafe prompts halt video generation and return an error message.
Post-guard
The Post-guard stage ensures the safety of generated videos through:
Video Content Safety Classifier:
A multiclass classifier evaluates every video frame for safety. If any frame is flagged as unsafe, the entire video is rejected.
Face Blur Filter:
All human faces in generated videos are blurred using the RetinaFace model to protect privacy and reduce biases based on age, gender, or race.
NVIDIA experts rigorously test with adversarial examples, annotating over 10,000 prompt-video pairs to refine the system and address edge cases.
Evaluating Cosmos world foundation models for 3D consistency and physics alignment
Cosmos benchmarks play a crucial role in assessing the ability of world foundation models to simulate real-world physics accurately and efficiently for physical AI applications. While publicly available benchmarks for video generation focus on fidelity, temporal consistency, and speed of generated videos, Cosmos benchmarks add new dimensions to evaluate generalist models: 3D consistency and physics alignment, ensuring the videos are evaluated based on accuracy required for physical AI systems.
3D consistency
Cosmos models were tested for 3D consistency on static scenes from a curated subset of 500 videos from an
open dataset
. Text prompts describing the videos were generated to avoid motion-related complexities. Comparisons were made against
VideoLDM
, a baseline generative model.
Metrics used
Geometric Consistency:
Assessed through epipolar geometry constraints using metrics like Sampson error and camera pose estimation success rate.
View Synthesis Consistency:
Evaluated through metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). These metrics measure the quality of synthesized views from interpolated camera positions.
Lower Sampson error and higher success rates indicate better 3D alignment. Similarly, higher PSNR and SSIM and lower LPIPS are indicators of a better quality.
Model
Sampson Error ↓
Pose Estimation Success Rate (%) ↑
PSNR ↑
SSIM ↑
LPIPS ↓
VideoLDM
0.841
4.40%
26.23
0.783
0.135
Cosmos 1.0 Diffusion Text2World 7B
0.355
62.60%
33.02
0.939
0.070
Cosmos 1.0 Diffusion Video2World 7B
0.473
68.40%
30.66
0.929
0.085
Cosmos 1.0 Autoregressive 4B
0.433
35.60%
32.56
0.933
0.090
Cosmos 1.0 Autoregressive Video2World 5B
0.392
27.00%
32.18
0.931
0.090
Real videos (reference)
0.431
56.40%
35.38
0.962
0.054
Table 1. Evaluation of 3D consistency for Cosmos world foundation models versus base VideoLDM model
Results
Cosmos world foundation models outperform the baseline in 3D consistency (table 1), with higher geometric alignment and camera pose success rates. Their synthesized views match real-world quality, confirming their effectiveness as world simulators.
Physical alignment
Physics alignment tests how well Cosmos models simulate real-world physics, including motion, gravity, and energy dynamics. Using
NVIDIA PhysX
and
NVIDIA Isaac Sim
, eight controlled scenarios were designed to evaluate properties like gravity, collision, torque, and inertia in virtual environments.
Metrics used
Pixel-Level Metrics:
Peak Signal-to-Noise Ratio (PSNR) measures how closely the pixel values of the model’s output match the reference video. Higher values indicate less noise and better accuracy. Structural Similarity Index Measure (SSIM) assesses the similarity in structure, luminance, and contrast between the generated and ground-truth frames. Higher SSIM values reflect greater visual fidelity.
Feature-Level Metric:
DreamSim measures the similarity between high-level features extracted from both videos. This approach evaluates the semantic consistency of the generated content, focusing on objects and motion rather than individual pixels.
Object-Level Metric:
Intersection-over-Union (IoU) calculates the overlap between the predicted and actual object regions in the video. This is especially useful for tracking specific objects through the simulation to ensure their behavior aligns with physical expectations.
Higher PSNR, SSIM, DreamSim and IoU are indicators of better physical alignment.
Model
Conditioning
PSNR ↑
SSIM ↑
DreamSim ↑
Avg. IoU ↑
Cosmos 1.0 Diffusion Video2World 7B
prompt + 1 frame
17.34
0.54
0.84
0.332
Cosmos 1.0 Diffusion Video2World 7B
prompt + 9 frames
21.06
0.69
0.86
0.592
Cosmos 1.0 Diffusion Video2World 14B
prompt + 1 frame
16.81
0.52
0.84
0.338
Cosmos 1.0 Diffusion Video2World 14B
prompt + 9 frames
20.21
0.64
0.86
0.598
Cosmos 1.0 Autoregressive 4B
1 frame
17.91
0.49
0.83
0.394
Cosmos 1.0 Autoregressive 4B
9 frames
18.13
0.48
0.86
0.481
Cosmos 1.0 Autoregressive Video2World 5B
prompt + 1 frame
17.67
0.48
0.82
0.376
Cosmos 1.0 Autoregressive Video2World 5B
prompt + 9 frames
18.29
0.48
0.86
0.481
Cosmos 1.0 Autoregressive Video2World 12B
1 frame
17.94
0.49
0.83
0.395
Cosmos 1.0 Autoregressive Video2World 12B
9 frames
18.22
0.49
0.87
0.487
Cosmos 1.0 Autoregressive Video2World 13B
prompt + 1 frame
18
0.49
0.83
0.397
Cosmos 1.0 Autoregressive Video2World 13B
prompt + 9 frames
18.26
0.48
0.87
0.482
Table 2. Physics alignment results with metrics calculated over 33 frames, the maximum length supported by the autoregressive variants of the Cosmos world foundation models
Results
Cosmos world foundation models show strong adherence to physical laws (Table 2), particularly with increased conditioning data. Post-training on camera conditioning dataset achieves a twofold increase in pose estimation success rate compared to baseline models. However, challenges like object impermanence (where objects vanish or appear unexpectedly) and implausible behaviors (such as violating gravity) highlight areas for improvement.
Customizing for physical AI applications with Cosmos and NVIDIA Omniverse
Video search and understanding:
Simplifies video tagging and search by understanding spatial and temporal patterns, making training data preparation easier.
Controllable 3D-to-real synthetic data generation:
With
NVIDIA Omniverse
, developers can create 3D scenarios and use Cosmos to generate photorealistic videos that are precisely controlled by 3D scenes for highly tailored synthetic datasets.
Policy model development and evaluation:
World foundation models fine-tuned for action-conditioned video prediction enable scalable, reproducible evaluation of policy models—strategies mapping states to actions—reducing reliance on risky real-world tests or complex simulations for tasks like obstacle navigation or object manipulation.
Foresight for action selection:
Cosmos equips physical AI models with predictive capabilities to assess the outcomes of potential actions.
Multiverse simulation:
Using Cosmos and NVIDIA Omniverse, developers can simulate multiple future outcomes to help AI models evaluate and select the best strategy for achieving its goals, benefiting applications like predictive maintenance and autonomous decision-making.
From generalist to customized specialist models
Cosmos introduces a two-stage approach to world model training.
Generalist models:
Cosmos world foundation models are built as generalists, trained on extensive datasets that encompass diverse real-world physics and environments. These open models are capable of handling a broad range of scenarios, from natural dynamics to robotic interactions, providing a solid foundation for any physical AI task.
Specialist models:
Developers can fine-tune generalist models using smaller, targeted datasets to create specialists tailored for specific applications, such as autonomous driving or humanoid robotics or they can generate customized synthetic scenarios, such as night scenes with emergency vehicles or high-fidelity industrial robotics environments. This fine-tuning process significantly reduces the required data and training time compared to training models from scratch.
Cosmos accelerates training and fine-tuning with efficient video processing pipelines, highly performant tokenizer, and advanced training frameworks, enabling developers to address operational needs and edge cases for advancing physical AI.
Accelerated data processing with NVIDIA NeMo Curator
Training models require curated, high-quality data, which is time and resource-intensive. NVIDIA Cosmos includes a data processing and curation pipeline powered by NVIDIA NeMo Curator and optimized for NVIDIA data center GPUs.
NVIDIA NeMo Curator enables robotics and AV developers to process vast datasets efficiently. For example, 20 million hours of video can be processed in 40 days on NVIDIA Hopper GPUs, or just 14 days on NVIDIA Blackwell GPUs—compared to 3.4 years on unoptimized CPU pipelines.
Key benefits include:
89x faster curation:
Dramatically reduces processing time
Scalability:
Handles 100+ PB of data seamlessly
High throughput:
Advanced filtering, captioning, and embedding ensure quality without sacrificing speed
Figure 4. Cosmos includes NeMo Curator that delivers 89x faster video data processing
High-fidelity compression and reconstruction with Cosmos Tokenizer
After data is curated, it must be tokenized for training. Tokenization breaks down complex data into manageable units, enabling models to process and learn from it more efficiently.
Cosmos tokenizers simplify this process with faster compression and visual reconstruction while preserving quality, reducing costs and complexity. For autoregressive models, the discrete tokenizer compresses data 8x in time and 16×16 in space, processing up to 49 frames at once. For diffusion models, the continuous tokenizer achieves 8x time and 8×8 space compression, handling up to 121 frames.
Fine-tuning with NVIDIA NeMo
Developers can fine-tune Cosmos world foundation models using the
NVIDIA NeMo
Framework. NeMo Framework accelerates model training on GPU-powered systems, whether enhancing an existing model or building a new one, from on-premises data centers to the cloud.
NeMo Framework efficiently loads multimodal data by:
Sharding terabyte size dataset into compressed files to reduce IO overhead.
Deterministically saving and loading datasets to avoid repetition and minimize compute waste.
Reducing network bandwidth when exchanging data using optimized communications.
Get started with NVIDIA Cosmos
Cosmos world foundation models are open and available on
NGC
and
Hugging Face
. Developers can also run Cosmos world foundation models on the
NVIDIA API catalog
. Also available on the
API catalog
are Cosmos tools to enhance text prompts for accuracy, an inbuilt watermarking system that enables easy future identification of AI-generated sequences, and a specialized model to decode video sequences for augmented reality applications. To learn more,
watch the demo
.
NeMo Curator
for accelerated data processing pipelines is available as a managed service and SDK. Developers can now
apply for early access
. Cosmos tokenizers are open neural networks available on
GitHub
and
Hugging Face
.
Get started with NVIDIA Cosmos
. | https://developer.nvidia.com/zh-cn/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform/ | NVIDIA Cosmos World 基础模型平台助力物理 AI 进步 | 随着机器人和
自动驾驶汽车
的发展,加速
物理 AI
的发展变得至关重要,而物理 AI 使自主机器能够感知、理解并在现实世界中执行复杂的操作。这些系统的核心是
世界基础模型 (WFMs)
,即通过物理感知视频模拟物理状态的 AI 模型,使机器能够做出准确决策并与周围环境无缝交互。
NVIDIA Cosmos
平台可帮助开发者大规模为物理 AI 系统构建自定义世界模型。它为从数据管护、训练到定制的每个开发阶段提供开放世界基础模型和工具。
视频 1、NVIDIA Cosmos:适用于物理 AI 的世界基础模型平台
本文将介绍 Cosmos 及其加速物理 AI 开发的主要功能。Cosmos 是一种开源的 Python 库,用于加速物理 AI 开发。
借助 NVIDIA Cosmos 加速世界模型开发
构建物理 AI 极具挑战性,需要精确的模拟以及真实世界的行为理解和预测。克服这些挑战的一个关键工具是世界模型,该模型根据过去的观察结果和当前的输入结果预测未来的环境状态。这些模型对于物理 AI 构建者来说非常重要,使他们能够在受控环境中模拟、训练和优化系统。
然而,开发有效的世界模型需要大量数据、计算能力和真实世界的测试,这可能会带来重大的安全风险、物流障碍和高昂的成本。为了应对这些挑战,开发者通常会使用通过 3D 仿真生成的
合成数据
来训练模型。虽然合成数据是一种功能强大的工具,但创建合成数据需要大量资源,可能无法准确反映现实世界的物理特性,在复杂或边缘情况下尤其如此。
端到端
NVIDIA Cosmos 平台
可加速物理 AI 系统的世界模型开发。Cosmos 基于 CUDA 构建,结合了先进的世界基础模型、视频标记器和 AI 加速的数据处理流程。
开发者可以通过微调 Cosmos 世界基础模型或从头开始构建新模型来加速世界模型的开发。除了 Cosmos 世界基础模型之外,该平台还包括:
用于高效视频数据管护的
NVIDIA NeMo Curator
Cosmos Tokenizer
可实现高效、紧凑和高保真的视频标记化
为机器人和自动驾驶应用预训练的 Cosmos World Foundation 模型
用于模型训练和优化的
NVIDIA NeMo 框架
图 1、NVIDIA Cosmos 是一个世界性的基础模型开发平台,包含生成式模型、数据管护器、标记器和框架,可加速物理 AI 开发
适用于物理 AI 的
预训练世界基础模型
Cosmos 世界基础模型是预训练的大型生成式 AI 模型,使用 9000 万亿个令牌进行训练,其中包括来自
自动驾驶
、
机器人
开发、合成环境和其他相关领域的 2000 万个小时的数据。这些模型可创建有关环境和交互的逼真合成视频,为训练复杂系统 (从模拟执行高级动作的类
人型机器人
到开发端到端自动驾驶模型) 提供可扩展的基础。
这些模型使用两种架构:自回归和扩散。这两种方法都使用 Transformer 架构,因为其可扩展性和有效性可用于处理复杂的时间依赖项。
自回归模型
Cosmos 自回归模型专为视频生成而设计,可根据输入文本和过去的视频帧预测下一个令牌。它使用 Transformer 解码器架构,并针对世界模型开发进行了关键修改。
3D RoPE (Rotary Position Embeddings) 可分别对空间和时间维度进行编码,确保精确的视频序列表示。
交叉注意力层支持文本输入,从而更好地控制世界生成。
QK 标准化可增强训练稳定性。
此模型的预训练是渐进式的,首先从单个输入帧预测多达 17 帧的未来帧,然后扩展到 34 帧,最终达到 121 帧 (或 50,000 个 tokens)。引入文本输入以将描述与视频帧相结合,并使用高质量数据对模型进行微调,以实现强大的性能。这种结构化方法使模型能够生成长度和复杂性各不相同的视频,无论是否输入文本。
图 2、Cosmos 自回归模型使用 Transformer 解码器架构,并对世界模型开发进行了关键修改
扩散模型
Diffusion 模型在生成图像、视频和音频方面很受欢迎,因为它们能够解构训练数据并根据用户输入进行重建,从而生成高质量、逼真的输出。
扩散模型分为两个阶段:
前向扩散过程:
通过在多个步骤中添加高斯噪声,有效地将其转换为纯噪声,训练数据会逐渐损坏。
反向扩散过程:
模型会逐步学习反向噪声,通过对损坏的输入降噪来恢复原始数据,例如使用 Stable Diffusion 模型。
经过训练后,扩散模型通过对随机高斯噪声进行采样并将其传递给学习的降噪过程来生成新数据。此外,Cosmos 扩散模型还获得了一些专为物理 AI 开发打造的关键更新。
3D Patchification 将视频处理为更小的分块,从而简化时空序列表示。
混合位置嵌入可处理空间和时间维度,支持具有不同分辨率和帧率的视频。
交叉注意力层整合文本输入,从而更好地控制基于描述生成的视频。
采用 LoRA 的自适应层归一化可将模型大小减少 36%,从而以更少的资源保持高性能。
图 3、Cosmos 扩散模型架构将先进的视频压缩、灵活的位置编码和文本集成相结合,可生成具有物理感知的高质量视频
满足不同需求的模型大小
开发者可以从以下三种模型大小中进行选择,以满足性能、质量和部署需求。
Nano:
针对实时、低延迟推理和边缘部署进行优化。
Super:
设计为高性能基准模型。
Ultra:专注于最大化质量和保真度,适合提取自定义模型。
优点和局限性
Cosmos 世界基础模型生成低分辨率、真实世界准确的合成视频数据,这对于训练
机器人
和自动驾驶汽车系统至关重要。虽然它们缺乏艺术天赋,但其输出结果密切复制了物理世界,因此非常适合在物理 AI 模型训练中实现精确的物体持久性和逼真的场景。
用于安全使用 Cosmos World 基础模型的护栏
AI 模型需要护栏来确保可靠性,具体方法包括减少幻影、防止有害输出、保护隐私,以及符合 AI 标准以实现安全可控的部署。Cosmos 通过可定制的双阶段护栏系统确保其 World Foundation 模型的安全使用,该系统符合 NVIDIA 对值得信赖的 AI 的承诺。
Cosmos Guardrails 分为两个阶段:Pre-guard 和 Post-guard。
预防护
此阶段涉及基于文本提示的安全措施,使用两层:
关键字屏蔽:屏蔽列表检查器会扫描不安全关键字的提示,使用引言化来检测变异,并阻止非英语术语或拼写错误。
Aegis Guardrail:
经过 NVIDIA 微调的 Aegis AI 内容安全模型可以检测并阻止语义上不安全的提示,包括暴力、骚扰和冒犯等类别。不安全提示停止视频生成并返回错误消息。
护卫队队员
Post-guard Stage 通过以下方式确保生成视频的安全性:
视频内容安全分类器:
多类别分类器评估每个视频帧的安全性。如果任何帧被标记为不安全,则整个视频将被拒绝。
面部模糊滤镜:
使用 RetinaFace 模型对生成的视频中的所有人脸进行模糊处理,以保护隐私并减少基于年龄、性别或种族的偏见。
NVIDIA 专家使用对抗示例进行严格测试,标注超过 10,000 个提示视频对,以优化系统并解决边缘案例。
评估 Cosmos 世界基础模型,以实现 3D 一致性和物理对齐
Cosmos 基准测试在评估世界基础模型为物理 AI 应用准确高效地模拟真实物理的能力方面发挥着至关重要的作用。虽然公开可用的视频生成基准测试侧重于保真度、时间一致性和生成视频的速度,但 Cosmos 基准测试增加了新的维度来评估通用模型:3D 一致性和物理对齐,确保根据物理 AI 系统所需的准确性评估视频。
3D 一致性
Cosmos 模型在
开放数据集
中的 500 个精选视频子集中进行了静态场景的 3D 一致性测试。生成描述视频的文本提示是为了避免与动作相关的复杂性。并与基准生成模型
VideoLDM
进行了比较。
所用指标
几何一致性:使用 Sampson 误差和摄像头姿态估计成功率等指标通过外极性几何约束条件进行评估。
查看合成一致性:
通过峰值信噪比 (Peak Signal-to-Noise Ratio, PSNR)、结构相似性指数 (Structural Similarity Index, SSIM) 和学习感知图像块相似性 (Learned Perceptual Image Patch Similarity, LPIPS) 等指标进行评估。这些指标用于测量从插入的摄像头位置合成视图的质量。
更低的 Sampson 误差和更高的成功率表明 3D 对齐效果更好。同样,较高的 PSNR 和 SSIM 以及较低的 LPIPS 都表示质量有所提高。
模型
Sampson 错误*
姿态估计成功率 (%)*
PSNR*
SSIM
LPIPS
VideoLDM
0.841
4.40%
26.23
0.783
0.135
Cosmos 1.0 Diffusion Text2World 7B
0.355
62.60%
33.02
0.939
0.070
Cosmos 1.0 Diffusion Video2World 7B
0.473
68.40%
30.66
0.929
0.085
Cosmos 1.0 Autoregressive 4B
0.433
35.60%
32.56
0.933
0.090
Cosmos 1.0 Autoregressive Video2World 5B
0.392
27.00%
32.18
0.931
0.090
真实视频 (参考)
0.431
56.40%
35.38
0.962
0.054
表 1、评估 Cosmos 世界基础模型与基础 VideoLDM 模型的 3D 一致性
成果
Cosmos 世界基础模型在 3D 一致性方面的表现优于基准 (表 1),具有更高的几何对齐和摄像头姿态成功率。其合成视图与真实世界的质量相匹配,证实了其作为世界模拟器的有效性。
物理对齐
物理对齐测试 Cosmos 模型在模拟真实物理 (包括运动、重力和能量动力学) 时的效果。借助
NVIDIA PhysX
和
NVIDIA Isaac Sim
,设计出八个受控场景,用于评估虚拟环境中的重力、碰撞、扭矩和惯性等属性。
所用指标
像素级指标
:峰值信噪比(Peak Signal-to-Noise Ratio,PSNR)可测量模型输出的像素值与参考视频的匹配程度。值越高表示噪点越小,准确性越高。结构相似性指数指标(Structural Similarity Index Measure,SSIM)用于评估生成的帧与真实帧之间在结构、亮度和对比度方面的相似性。较高的 SSIM 值反映了更高的视觉保真度。
特征级别指标
:DreamSim 测量从两个视频中提取的高级特征之间的相似性。这种方法评估生成内容的语义一致性,重点关注物体和运动,而不是单个像素。
目标级别指标:
交集并集比 (Intersection-over-Union,IoU) 用于计算视频中预测的目标区域与实际目标区域之间的重叠。这对于在整个模拟过程中追踪特定物体特别有用,可确保其行为符合物理预期。
PSNR、SSIM、DreamSim 和 IoU 越高,表示物理对齐越好。
模型
调节
PSNR*
SSIM
DreamSim™
平均值。IoU*
Cosmos 1.0 Diffusion Video2World 7B
提示符 = 1 帧
17.34
0.54
0.84
0.332
Cosmos 1.0 Diffusion Video2World 7B
提示符 = 9 帧
21.06
0.69
0.86
0.592
Cosmos 1.0 Diffusion Video2World 14B
提示符 = 1 帧
16.81
0.52
0.84
0.338
Cosmos 1.0 Diffusion Video2World 14B
提示符 = 9 帧
20.21
0.64
0.86
0.598
Cosmos 1.0 Autoregressive 4B
1 帧
17.91
0.49
0.83
0.394
Cosmos 1.0 Autoregressive 4B
9 帧
18.13
0.48
0.86
0.481
Cosmos 1.0 Autoregressive Video2World 5B
提示符 = 1 帧
17.67
0.48
0.82
0.376
Cosmos 1.0 Autoregressive Video2World 5B
提示符 = 9 帧
18.29
0.48
0.86
0.481
Cosmos 1.0 Autoregressive Video2World 12B
1 帧
17.94
0.49
0.83
0.395
Cosmos 1.0 Autoregressive Video2World 12B
9 帧
18.22
0.49
0.87
0.487
Cosmos 1.0 Autoregressive Video2World 13B
提示符 = 1 帧
18
0.49
0.83
0.397
Cosmos 1.0 Autoregressive Video2World 13B
提示符 = 9 帧
18.26
0.48
0.87
0.482
表 2、物理对齐结果,计算指标超过 33 帧 (Cosmos 世界基础模型的自回归变体支持的最大长度)
成果
Cosmos 世界基础模型非常遵守物理定律 (表 2),尤其是在条件数据增加的情况下。与基线模型相比,在摄像头调节数据集上进行后训练后,姿态估计的成功率提高了两倍。然而,物体无常 (物体意外消失或出现) 和难以置信的行为 (如违反重力) 等挑战凸显了需要改进的领域。
使用 Cosmos 和 NVIDIA Omniverse 定制物理 AI 应用
视频搜索和理解:
通过了解空间和时间模式来简化视频标记和搜索,从而更轻松地准备训练数据
可控的 3D 到真实合成数据生成:
借助
NVIDIA Omniverse
,开发者可以创建 3D 场景,并使用 Cosmos 生成逼真的视频,这些视频由 3D 场景精确控制,用于高度定制的合成数据集。
策略模型开发和评估:
World foundation models 针对动作条件视频预测进行了微调,可实现策略模型的可扩展、可再现评估 (将状态映射到动作的策略),从而减少对障碍物导航或对象操作等任务进行风险真实测试或复杂模拟的依赖。
行动选择前瞻:
Cosmos 为物理 AI 模型配备预测功能,以评估潜在行动的结果。
Multiverse 模拟:
借助 Cosmos 和 NVIDIA Omniverse,开发者可以模拟多种未来结果,帮助 AI 模型评估和选择实现目标的最佳策略,从而使预测性维护和自主决策等应用受益。
从通用模型到定制的专业模型
Cosmos 为世界模型训练引入了一种分为两个阶段的方法。
多面手模型:Cosmos 世界基础模型以多面手的身份构建,基于涵盖各种真实物理和环境的大量数据集进行训练。这些开放模型能够处理从自然动力学到机器人交互的各种场景,为任何物理 AI 任务提供坚实的基础。
专业模型:开发者可以使用更小的定向数据集微调通用模型,为特定应用(如自动驾驶或人形机器人)打造量身定制的专家模型,也可以生成定制的合成场景,如夜间场景中的紧急车辆或高保真工业机器人环境。与从头开始训练模型相比,这种微调过程可显著减少所需的数据和训练时间。
Cosmos 通过高效的视频处理流程、高性能的 tokenizer 和高级训练框架加速训练和微调,使开发者能够满足运营需求和边缘案例,推动物理 AI 的发展。
借助 NVIDIA NeMo Curator 加速数据处理
训练模型需要精心挑选的高质量数据,而且需要耗费大量时间和资源。NVIDIA Cosmos 包含由 NVIDIA NeMo Curator 提供支持并针对 NVIDIA 数据中心 GPU 进行优化的数据处理和管护流程。
借助 NVIDIA NeMo Curator,机器人和自动驾驶(AV)开发者能够高效处理庞大的数据集。例如,20 million 小时的视频可以在 40 天内在 NVIDIA Hopper GPU 上处理完毕,或仅在 14 天内在 NVIDIA Blackwell GPU 上处理完毕——相比之下,未经优化的 CPU 流水线需要 3.4 年。
主要优势包括:
PyTorch、pandas、LangChain、Megatron、NVIDIA、cuOpt、Stable Diffusion、Llama、Jetson、Google、Python、Hugging Face、Arm、Anaconda、Siemens、DPU、GPU 和 Github 等技术的
89
倍管护速度:显著缩短处理时间
可扩展性:
无缝处理 100 多个 PB 的数据
高吞吐量:高级过滤、字幕和嵌入可在不牺牲速度的情况下确保质量
图 4、Cosmos 包含 NeMo Curator,可将视频数据处理速度提高 89 倍
使用 Cosmos Tokenizer 实现高保真压缩和重建
整理数据后,必须对其进行标记化以进行训练。标记化将复杂的数据分解为可管理的单元,使模型能够更高效地处理数据并从中学习。
Cosmos 标记器可简化此过程,加快压缩和视觉重建速度,同时保持质量,降低成本和复杂性。对于自回归模型,离散分词器可将数据压缩为时间的 8 倍,在空间中压缩为 16 × 16,一次最多处理 49 帧。对于扩散模型,连续分词器可实现 8 倍的时间和 8 × 8 的空间压缩,最多可处理 121 帧。
使用 NVIDIA NeMo 进行微调
开发者可以使用
NVIDIA NeMo
框架微调 Cosmos 世界基础模型。NeMo 框架可以在 GPU 驱动的系统上加速模型训练,无论是增强现有模型还是构建新模型,从本地数据中心到云端。
NeMo 框架通过以下方式高效加载多模态数据:
将 TB 大小的数据集分片到压缩文件中,以减少 IO 开销。
决定性地保存和加载数据集,以避免重复并尽可能减少计算浪费。
在使用优化的通信交换数据时降低网络带宽。
开始使用 NVIDIA Cosmos
Cosmos World Foundation 模型现已开放,可在
NGC
和
Hugging Face
上获取。开发者还可以在
NVIDIA
API Catalog
上运行 Cosmos World Foundation 模型。API Catalog 上还提供了用于增强文本提示以提高准确性的 Cosmos 工具、支持未来轻松识别 AI 生成序列的内置水印系统,以及用于解码视频序列以用于增强现实应用的专用模型。如需了解详情,请观看演示。
NeMo Curator
for accelerated data processing pipelines is available as a managed service and SDK. Developers can now
apply for early access
. Cosmos tokenizers are open neural networks available on
GitHub
and
Hugging Face
.
开始使用 NVIDIA Cosmos
。 |
https://developer.nvidia.com/blog/accelerate-custom-video-foundation-model-pipelines-with-new-nvidia-nemo-framework-capabilities/ | Accelerate Custom Video Foundation Model Pipelines with New NVIDIA NeMo Framework Capabilities | Generative AI
has evolved from text-based models to multimodal models, with a recent expansion into video, opening up new potential uses across various industries. Video models can create new experiences for users or simulate scenarios for training autonomous agents at scale. They are helping revolutionize various industries including robotics, autonomous vehicles, and entertainment.
The development of video
foundation models
presents unique challenges due to the vast and varied nature of video data. This also underscores the necessity of scalable pipelines for curating data and effectively training models that can comprehend temporal and spatial dynamics.
We are announcing brand new video foundation model capabilities in the
NVIDIA NeMo framework
, an end-to-end training framework that enables you to pretrain and fine-tune your own video foundation models. The framework includes a high-throughput data curation, efficient multimodal data loading functionality, scalable model training, and a parallelized in-framework inference.
Video 1. NVIDIA Cosmos is a world foundation model platform that accelerates the development of physical AI applications like humanoid robots and autonomous vehicles.
High-throughput video curation through optimized pipelines
NeMo Curator
improves generative AI model accuracy by efficiently processing and preparing high-quality data, including large video datasets.
Using NeMo Curator’s scalable data pipelines, you can efficiently clip, annotate, and filter 100 PB or more of videos. To remove bottlenecks and optimize performance, NeMo Curator uses the following combination:
NVDEC
: Hardware decoder
NVENC
: Hardware encoder
Ray
: Compute framework for scaling AI applications
The NeMo Curator autobalancing techniques can leverage heterogeneous clusters with multiple GPU types to take advantage of NVENC on L40S GPUs and the performance of H100 and GB200 GPUs.
Figure 1 shows how NeMo Curator can process 20M hours of video data, reducing the processing time from years to days, achieving 89x speed up using 1K GPUs compared to unoptimized pipelines on CPUs for ISO power usage.
Figure 1. NeMo Curator delivers 89x faster video data processing
NeMo Curator provides the following relevant pipelines for video foundation model training and fine-tuning datasets:
Clipping
Sharding
The clipping pipeline starts with decoding and splitting raw videos into short, continuous clips by analyzing frame-to-frame color changes. The stitching stage smooths the clips out by using image embedding similarities to potentially merge adjacent clips together. These clips are then transcoded to the high-quality video encoding (H264), and they are annotated with video embeddings and captions, either existing or synthetically generated by a VLM, to facilitate semantic search capabilities.
Figure 2. Video curation clipping and sharding pipelines
Sharding generates text embeddings for captions to create the final WebDataset used for training. NeMo Curator also uses Ray streaming to build an auto-balancing system and deploy an optimal number of workers for each stage in the pipeline to avoid being bottlenecked by any stage (Figure 3).
Figure 3. Auto-balancing system to match the throughput of the overall pipeline
Efficient multimodal dataloading
Video models can be trained on billions of images and millions of videos, necessitating an efficient data loading strategy to achieve high throughput during training time.
This is accomplished in the NeMo framework through the use of
Megatron-Energon data loader
:
Shard large-scale data:
Uses the WebDataset format to shard a TB-size dataset into compressed files to help reduce I/O overhead during training.
Deterministic save and load:
Enables the dataset to be visited in one pass without repetition when the training job is disrupted, ensuring consistency across different training cluster setups.
Sequence packing:
Packs variable length or resolution images and videos together up to the max sequence length, minimizing compute wastage due to padding while simplifying data loading logic. NeMo uses the special THD attention kernel from the
Transformer engine
to support accelerated training with sequence packing.
Figure 4. Mixed image-video training with sequence packing
Reduce network bandwidth strain:
Each model parallel rank downloads a different subset of data instead of the whole dataset, and then all-gathers the data across ranks to get an identical dataloader.
Figure 5. Reducing network bandwidth strain to improve training throughput
Scaling video foundation model training
Video foundation models can be either autoregressive or diffusion models.
The well-established suite of NeMo tools on
large language models
(LLMs) can be reused for autoregressive models, while support for diffusion transformers such as DiT, MovieGen, and the latest
NVIDIA Cosmos
world foundation models for physical AI have been newly added.
The NeMo tech stack is highly optimized and provides more than 40%
Model FLOPs utilization
(MFU) in the latest benchmark (Table 1).
Model size
Context length
Training config
GPU used (TFLOPS/s)
Throughput (token/s/GPU)
DiT 7B
8k
baseline, no optimization
OOM
DiT 7B
8k
CP=2
457
8,969
DiT 7B
74k
TP=4 SP CP=4
414
2,933
DiT 28B
8k
TP=2 SP PP=2
435
2,392
DiT 28B
74k
TP=8 SP CP=4 PP=4
411
994
Table 1. GPU utilization and throughput benchmark for NVIDIA NeMo framework on diffusion transformers (DiT)
Legend:
CP
=context parallelism;
TP
=tensor parallelism;
SP
=sequence parallelism;
PP
=pipeline parallelism
Overview of the video diffusion pipeline
A video diffusion training pipeline is generally composed of the following major steps:
Tokenize the input image and video
with a causal temporal 3D tokenizer to generate 3D spatio-temporal tokens.
Use a transformer decoder
conditioned by the diffusion noise schedule timestep t and text input.
Timestep conditioning is applied through an Adaptive LayerNormalization (AdaLN) mechanism, with an option to use AdaLN-LoRA, which further improves Model FLOPs Utilization (MFU) during training.
Text conditioning is applied through a cross attention layer in each transformer block.
The NeMo framework enables you to initialize your transformer decoder based on the canonical DiT architecture or the MovieGen Llama architecture, which uses Grouped-Query Attention (GQA).
Compute the diffusion loss
with the parallelized EDM diffusion pipeline using the noise prediction from the diffusion transformer.
NeMo also applies additional Root Mean Square Layer Normalization (RMSNorm) on the queries and keys before attention blocks to stabilize diffusion training. RMSNorm is applied per attention head to remain compatible with tensor parallelism.
Figure 6. NeMo video diffusion training pipeline
Parallelism optimizations for video diffusion models
NeMo and Megatron-Core enable various model parallelism techniques:
Tensor parallel (TP)
Sequence parallel (SP)
Pipeline parallel (PP)
Context parallel (CP)
However, these techniques face unique challenges when applied to video diffusion transformers. Here’s how NeMo solves these challenges to achieve scalable and performant training:
Efficient pipeline parallelism for conditioning
Support for Spatio-Temporal DiT (ST-DiT) architecture
Customized random seeding mechanism
The traditional approach is to communicate conditioning information across pipeline stages, incurring additional communication cost and requiring nontrivial modifications to the pipeline schedule. NeMo solves this problem by computing the conditional embeddings at each pipeline stage. The computation cost with efficient pipeline parallelism for conditioning is much less than the communication cost and improves training throughput.
Figure 7. Trading communication for compute in conditioning pipeline parallelism
The Spatio-Temporal DiT (ST-DiT) architecture introduces additional spatial and temporal self-attention layers to each transformer block, as an alternative to training with full self attention on long video sequences. This approach exposes communication overhead during context parallelism due to smaller compute over short input sequence for these layers. NeMo addresses this by using local attention computation with A2A communication for spatial/temporal attention, while maintaining P2P ring topology for full self-attention. The hybrid approach effectively reduces bandwidth needs for temporal/spatial attention while still benefiting from context parallelism over full self-attention layer (Table 2).
Figure 8. Spatial-temporal DiT transformer block
Layer
Input Seq
Communication primitive
Communication bandwidth
Temporal self-attention
Short seq
Local compute & A2A
(bhw/cp, t, d)
Spatial self-attention
Short seq
Local compute & A2A
(bt/cp, hw, d)
Full attention
Long seq
CP with P2P
(b, h*w*t/cp, d)
Table 2. NeMo communication strategies for each kind of layer
Legend:
b
=batch size;
h*w
=spatial size;
t
=temporal size;
cp
=context parallel size;
d
=hidden size
, with input size being (b, t*h*w, d).
The customized random seeding mechanism goal is to make sure that random seeds are correctly initialized across the following components:
Time step
Gaussian noise
The actual model weights
Table 3 shows NeMo’s initialization strategy.
RNG seed
Data parallel
Context parallel
Pipeline parallel
Tensor parallel
Time step (t)
Diff
Same
Same
Same
Gaussian noise
Diff
Diff
Same
Same
Weight initialization
Same
Same
Diff
Diff
Table 3. Customized random seeding for parallelized diffusion transformers
Legend:
Diff=
Different random seed from other parallel ranks;
Same
=Same random seed as other parallel ranks.
Efficient in-framework inference
The NeMo framework accelerates inference by distributing denoising operations across multiple GPUs through context parallelism. After parallel denoising, the latent tensors are combined to reconstruct the video sequence before decoding with the Cosmos video tokenizer.
Benchmarks show 80–90% scaling efficiency on up to 32 H100 GPUs, with FP8 Multi-Head Attention providing 28% and 48% performance improvements over BF16 on 1 and 32 GPUs respectively.
Figure 9. Parallelized video generation with context parallelism
Figure 10. Inference performance at different GPU counts
Conclusion
In this post, we covered all the features of NVIDIA NeMo framework that will help you pretrain or fine-tune video foundation models in an effective and efficient manner.
NeMo Curator offers high-throughput data curation through clipping and sharding pipelines, and the Megatron Energon library offers efficient multimodal data loading. NeMo Frameworks enables scalable video foundation model training by supporting various model parallelism techniques specially optimized on diffusion and autoregressive models. In addition, it provides efficient in-framework inference by distributing denoising operations across multiple GPUs and incorporating FP8 Multi-Head Attention.
You can curate your video data with
NeMo Curator early access program
,
tokenize
them, pre-train(
diffusion
,
autoregressive
), fine-tune (
diffusion
,
autoregressive
), and perform multi-GPU in-framework inference (
diffusion
,
autoregressive
)with NeMo Framework today.
You can also try the NVIDIA Cosmos world foundation models at
build.nvidia.com
and watch the
CES keynote
from NVIDIA CEO Jensen Huang to learn more about the NVIDIA Cosmos world foundation model platform
.
Acknowledgements
Thanks to the following contributors: Parth Mannan, Xiaowei Ren, Zhuoyao Wang, Carl Wang, Jack Chang, Sahil Jain, Shanmugam Ramasamy, Joseph Jennings, Ekaterina Sirazitdinova, Oleg Sudakov, Linnan Wang, Mingyuan Ma, Bobby Chen, Forrest Lin, Hao Wang, Vasanth Rao Naik Sabavat, Sriharsha Niverty, Rong Ou, Pallab Bhattacharya, David Page, Jacob Huffman, Tommy Huang, Nima Tajbakhsh, and Ashwath Aithal. | https://developer.nvidia.com/zh-cn/blog/accelerate-custom-video-foundation-model-pipelines-with-new-nvidia-nemo-framework-capabilities/ | 通过 NVIDIA NeMo 框架的最新功能全链路加速视频基础模型的训练和微调 | 生成式 AI
已经从基于文本的模型发展到多模态模型,最近还扩展到了视频领域,为各行各业开辟了新的潜在用途。视频模型可以为用户创造新的体验,以及大规模模拟场景来训练自主代理。它们正在帮助包括机器人技术、自动驾驶汽车和娱乐业在内的多个行业实现变革。
由于视频数据的庞大和多样性,视频
基础模型
的开发面临着独特的挑战。这也凸显了可扩展的数据管理流程和训练框架的必要性,以此来理解时间和空间动态的模型。
我们发布了
NVIDIA NeMo 框架
中全新的视频基础模型功能,这是一个端到端的训练框架,用于预训练和微调您的专属视频基础模型。该框架包括高吞吐量的数据管理、高效的多模态数据加载功能、可扩展的模型训练,以及并行化的框架内推理。
高吞吐量视频数据处理链路
NVIDIA NeMo Curator 通过高效处理和准备高质量的数据(包括大型视频数据集),来提高生成式 AI 模型的准确性。
使用 NeMo Curator 的可扩展数据流程,您可以高效地剪辑、标注和过滤 100 PB 或更多的视频数据。为了消除瓶颈并优化性能,NeMo Curator 使用以下组合:
NVDEC:硬件解码器
NVENC:硬件编码器
Ray:用于扩展 AI 应用的计算框架
根据实验测试数据(仅做技术讨论和参考)显示,NeMo Curator 的自动负载平衡技术可以利用具有多种 GPU 类型的异构集群,以充分利用 L40S GPU 上的 NVENC 以及 H100 和 GB200 GPU 的性能。
图1 展示了 NeMo Curator 如何处理 2,000 万小时的视频数据,将处理时间从数年缩短到数天,使用 1,000 个 GPU 相比于 CPU 上未优化的流程,在相同功耗下实现了 89 倍的加速。
图1. NeMo Curator 将视频数据处理速度提升 89 倍 (图片展示的性能数据来自 NVIDIA 的实验测试,仅供技术讨论和参考)
NeMo Curator 为视频基础模型的训练和微调数据集提供以下相关的功能:
剪辑 (Clipping)
分片 (Sharding)
剪辑链路从解码和将原始视频分割为短而连续的片段开始,通过分析帧与帧之间的颜色变化来完成分割。在拼接阶段,利用图像嵌入(embedding)的相似性来平滑衔接片段,将相邻片段合并在一起。随后,这些片段会被转码为高质量的视频编码格式(H264),并通过视频嵌入(embedding)和字幕进行标注,这些字幕可以是现有的,也可以是由视觉语言模型 (VLM) 合成生成的,从而增强语义搜索功能。
图 2. 视频整理中的剪辑与分片链路
分片通过为字幕生成文本嵌入(embedding)来创建用于训练的最终 WebDataset。NeMo Curator 还利用 Ray 流处理技术构建了一个自动平衡系统,并为链路中的每个阶段部署了最佳数量的工作线程,以避免任何阶段成为瓶颈(如图 3 所示)。
图 3. 用于匹配整体链路的吞吐量的自动平衡系统
高效的多模态数据加载
视频模型可以在数十亿张图像和数百万个视频上进行训练,这需要一种高效的数据加载策略,从而在训练期间实现高吞吐量。
在 NeMo 框架中,这通过使用 NVIDIA
Megatron
-Energon 数据加载器
实现:
将大规模数据分片
:采用 WebDataset 格式,将 TB 级数据集分片为压缩文件,以减少训练期间的 I/O 开销。
保存与加载一致性
:在训练任务中断时,确保数据集在一次遍历中不重复访问,从而在不同的训练集群配置中保持一致性。
序列打包
:将长度或分辨率可变的图像和视频打包在一起,直至达到最大序列长度,减少因填充而导致的计算资源浪费,同时简化数据加载逻辑。NeMo 使用
Transformer Engine
中的特殊 THD 注意力算子核(attention kernel),通过结合序列打包来加速训练。
图 4. 通过序列打包加速图像-视频混合训练
减少网络带宽压力:
每个模型并行等级(rank)只下载数据集的一个不同子集,而不是整个数据集,然后所有等级通过全聚合(all-gather)在各卡之间共享数据,从而确保每张卡获得一致的数据加载。
图 5. 减少网络带宽压力以提高训练吞吐量
提高视频基础模型的训练拓展性
视频基础模型可以是自回归模型或扩散模型。
NeMo 框架里面现有的
大语言模型
(LLMs)功能可以复用于上述自回归模型,同时 NeMo 新增了对扩散 Transformer 的支持,如 DiT、MovieGen,以及最新的
NVIDIA Cosmos
世界基础模型(用于物理 AI)。
NeMo 技术栈经过高度优化,在最新基准测试中提供了超过 40% 的
模型浮点运算
利用率
(Model FLOPs Utilization,MFU)
(表 1)。
Model size
Context length
Training config
GPU used (TFLOPS/s)
Throughput (token/s/GPU)
DiT 7B
8k
baseline, no optimization
OOM
DiT 7B
8k
CP=2
457
8,969
DiT 7B
74k
TP=4 SP CP=4
414
2,933
DiT 28B
8k
TP=2 SP PP=2
435
2,392
DiT 28B
74k
TP=8 SP CP=4 PP=4
411
994
表 1. NVIDIA NeMo 框架在扩散 Transformer(DiT)上的 GPU 利用率和吞吐量基准测试
标注:
CP
=context parallelism;
TP
=tensor parallelism;
SP
=sequence parallelism;
PP
=pipeline parallelism
视频扩散模型链路概述
视频扩散训练链路通常由以下几个主要步骤组成:
使用因果时序 3D 分词器对输入的图像和视频进行分词,生成 3D 时空分词。
使用由扩散噪声调度@时间步 t 和文本输入作为条件的 Transformer 解码器。
时间步条件调节通过 Adaptive LayerNormalization(AdaLN)机制进行应用,并提供使用 AdaLN-LoRA 的选项,后者在训练过程中进一步提高模型浮点运算利用率(Model FLOPs Utilization,MFU)。
文本条件调节通过每个 Transformer 块中的交叉注意力层呈现。
NeMo 框架使您能够基于标准的的 DiT 架构或 MovieGen Llama 架构来初始化您的 Transformer 解码器,后者使用分组查询注意力机制(GQA, Grouped-Query Attention)。
使用扩散 Transformer 的噪声预测,通过并行化的 EDM 扩散链路计算扩散损失。
NeMo 还在每个注意力块之前,对 query 和 key 向量使用额外的均方根层归一化(Root Mean Square Layer Normalization,RMSNorm)以稳定扩散训练。RMSNorm 在每个注意力头上应用,以保持与张量并行性的兼容性。
图 6. NeMo 视频扩散模型训练链路
视频扩散模型的并行优化
NeMo 和 NVIDIA Megatron-Core 支持多种模型并行技术:
张量并行 (Tensor Parallel, TP)
序列并行 (Sequence Parallel, SP)
流水线并行 (Pipeline Parallel, PP)
上下文并行 (Context Parallel, CP)
然而,这些技术在应用于视频扩散 Transformer 时面临独特的挑战。以下是 NeMo 如何解决这些挑战,实现可扩展和高性能训练的方式:
用于条件调节的高效流水线并行
支持时空 DiT (ST-DiT) 架构
定制的随机种子机制
传统方法是通过流水线各阶段之间传递条件信息,这会产生额外的通信开销,并且需要对流水线调度进行较大修改。NeMo 通过在每个流水线阶段即时计算条件嵌入(embedding)来解决这个问题。与通信成本相比,条件调节的高效流水线并行计算成本要低得多,从而提高了训练吞吐量。
图 7. 在条件调节流水线并行中,以通信换取计算资源的优化策略
时空 DiT (ST-DiT) 架构在每个 Transformer 块中引入了额外的空间和时间自注意力层,以替代在长视频序列上进行全自注意力训练。因为这些层在输入序列比较短,所以进行计算时计算量较小,以至于暴露出在上下文并行中的通信开销。NeMo 通过使用局部注意力计算和 A2A 通信来处理空间/时间注意力层里的短序列输入,同时保持 P2P 环形拓扑结构用于全自注意力里的长序列输入。该混合方法有效地减少了时间/空间注意力的带宽需求,同时仍然从上下文并行中获益,避免了全自注意力层的计算开销(表 2)。
图 8. 时空 DiT Transformer 架构
Layer
Input Seq
Communication primitive
Communication bandwidth
Temporal self-attention
Short seq
Local compute & A2A
(bhw/cp, t, d)
Spatial self-attention
Short seq
Local compute & A2A
(bt/cp, hw, d)
Full attention
Long seq
CP with P2P
(b, h*w*t/cp, d)
表 2. NeMo 针对每种层的通信策略
标识
:
b
=batch size;
h*w
=spatial size;
t
=temporal size;
cp
=context parallel size;
d
=hidden size
, with input size being (b, t*h*w, d).
定制的随机种子机制的目标是确保以下组件中的随机种子得以正确初始化:
时间步
高斯噪声
实际模型权重
表 3 显示了 NeMo 框架里面对于随机种子的初始化策略。
RNG seed
Data parallel
Context parallel
Pipeline parallel
Tensor parallel
Time step (t)
Diff
Same
Same
Same
Gaussian noise
Diff
Diff
Same
Same
Weight initialization
Same
Same
Diff
Diff
表 3. 针对并行化扩散 Transformer 架构的定制随机种子设置
标识:
Diff=
Different random seed from other parallel ranks;
Same
=Same random seed as other parallel ranks.
框架内高效推理
推理的时候,NeMo 框架通过上下文并行,将去噪操作分布到多个 GPU 上,从而加速推理。在并行去噪后,潜在张量被合并以重建视频序列,然后使用 Cosmos 视频分词器进行解码。
基准测试显示,在最多 32 个 NVIDIA Hopper 架构 GPU 上,NeMo 的扩展效率达到 80-90%。在 1 个和 32 个 GPU 上,FP8 多头注意力相比 BF16 提供了分别 28% 和 48% 的性能提升。
图 9. 使用上下文并行化的视频生成
图 10. 在不同 GPU 数量下的推理性能
结论
在本文中,我们介绍了 NVIDIA NeMo 框架,帮助您以高效的方式进行视频基础模型的预训练或微调。
NeMo Curator 通过剪辑和分片链路提供高吞吐量的数据整理,Megatron Energon 库则提供高效的多模态数据加载。NeMo 框架通过支持多种针对扩散模型和自回归模型特别优化的模型并行技术,实现了可扩展的视频基础模型训练。此外,它通过将去噪操作分布到多个 GPU 上,并结合 FP8 多头注意力,实现了高效的框架内推理。
您可以通过
NeMo Curator 的早期
试用
计划
处理您的视频数据,对其进行
分词
,进行预训练(
扩散
、
自回归
)、微调(
扩散
、
自回归
),并在 NeMo 框架下进行多 GPU 的框架内推理(
扩散
、
自回归
)。
您还可以访问
NVIDIA 官网
尝试 NVIDIA Cosmos 世界基础模型,并观看 NVIDIA CEO 黄仁勋在
CES 上的主题演讲
,了解更多关于 NVIDIA Cosmos 世界基础模型平台的信息。
鸣谢
感谢以下贡献者: Parth Mannan, Xiaowei Ren, Zhuoyao Wang, Carl Wang, Jack Chang, Sahil Jain, Shanmugam Ramasamy, Joseph Jennings, Ekaterina Sirazitdinova, Oleg Sudakov, Linnan Wang, Mingyuan Ma, Bobby Chen, Forrest Lin, Hao Wang, Vasanth Rao Naik Sabavat, Sriharsha Niverty, Rong Ou, Pallab Bhattacharya, David Page, Jacob Huffman, Tommy Huang, Nima Tajbakhsh, and Ashwath Aithal. |
https://developer.nvidia.com/blog/nvidia-cudss-advances-solver-technologies-for-engineering-and-scientific-computing/ | NVIDIA cuDSS Advances Solver Technologies for Engineering and Scientific Computing | NVIDIA cuDSS
is a first-generation sparse direct solver library designed to accelerate engineering and scientific computing. cuDSS is increasingly adopted in data centers and other environments and supports single-GPU, multi-GPU and multi-node (MGMN) configurations.
cuDSS has become a key tool for accelerating computer-aided engineering (CAE) workflows and scientific computations across multiple domains such as structural engineering, fluid dynamics, electromagnetics, circuit simulation, optimization, and AI-assisted engineering problems.
This post highlights some of the key performance and usability features delivered in
cuDSS v0.4.0
and
cuDSS v0.5.0
, as summarized in Table 1. cuDSS v0.4.0 achieves a significant performance boost for factorization and solve steps, while also introducing several new features, including the memory prediction API, automatic hybrid memory selection, and variable batch support. cuDSS v0.5.0 adds
host execution mode
, which is particularly beneficial for smaller matrices, and demonstrated substantial performance improvements using
hybrid memory mode
and host multithreading for analysis phase, an area that is typically challenging to parallelize effectively.
cuDSS v0.4.0 release
cuDSS v0.5.0 release
PIP wheel and Conda support
Factorization and solve performance improvements (up to 10x) for single and multi-GPU when factors have dense parts
Memory prediction API
Automatic normal/hybrid memory mode selection
Variable (non-uniform) batch support (variable N, NNZ, NRHS, LD)
Host execution mode (parts of computations on the host) for smaller matrices
Host multithreading (currently only for the reordering) with user-defined threading backend
New pivoting approach (static pivoting with scaling)
Improved performance and memory requirements for hybrid memory mode
Table 1. cuDSS features in releases v0.4.0 and v0.5.0
Feature highlights
This section focuses on notable usability enhancements and performance improvements.
Memory prediction API
The memory prediction API is important for users who need to know the precise amount of device and host memory required by cuDSS before reaching the most memory-intensive phase (numerical factorization).
It is especially useful in scenarios where device memory may be insufficient—either when solving large linear systems or when the application has a limited memory budget for cuDSS. In either case, it is recommended to enable
hybrid memory mode
before the analysis phase.
Note that if hybrid memory mode is enabled but everything fits within the available device memory (whether based on the user-defined limit or GPU capacity), cuDSS will automatically detect this and switch to the faster default memory mode.
A typical call sequence for solving a linear system with cuDSS is as follows:
Analysis (reordering and symbolic factorization)
Numerical factorization (where the values of the factors are allocated and computed)
Solving
With the introduction of memory prediction, users can now query the amount of device and host memory required for the chosen mode (either default or hybrid memory) after the analysis phase, as well as the minimum memory required for hybrid memory mode. As the sample below demonstrates, the query is a single call of
cudssDataGet
with
CUDSS_DATA_MEMORY_ESTIMATES
that writes an output in a small fixed-size array.
/*
* After cudssExecute(..., CUDSS_PHASE_ANALYSIS, ,,,)
*/
int64_t memory_estimates[16] = {0};
cudssDataGet(cudssHandle, solverData, CUDSS_DATA_MEMORY_ESTIMATES,
&memory_estimates, sizeof(memory_estimates);
/* memory_estimates[0] - permanent device memory
* memory_estimates[1] - peak device memory
* memory_estimates[2] - permanent host memory
* memory_estimates[3] - peak host memory
* memory_estimates[4] - minimum device memory for the hybrid memory mode
* memory_estimates[5] - maximum host memory for the hybrid memory mode
* memory_estimates[6,...,15] - reserved for future use
*/
To see the
full sample code
that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo.
Non-uniform batch API
In scenarios where the application requires solving multiple linear systems, and each system individually is not large enough to fully saturate the GPU, performance can be enhanced through batching. There are two types of batching: uniform and non-uniform. Unlike uniform batches, non-uniform batches do not impose restrictions on the dimensions or sparsity patterns of the matrices.
cuDSS v0.4.0 introduces support for non-uniform batches. The opaque
cudssMatrix_t
objects can represent either a single matrix or a batch of matrices and thus the only part that needs to be changed is how the matrix objects are created and modified.
To create batches of dense or sparse matrices, v0.4.0 introduced new APIs
cudssMatrixCreateBatchDn
or
cudssMatrixCreateBatchCsr
. For modifying the matrix data are the similarly added APIs
cudssMatrixSetBatchValues
and
cudssMatrixSetBatchCsrPointers
as well as
cudssMatrixGetBatchDn
and
cudssMatrixGetBatchCsr
. cuDSS v0.5.0 modifies
cudssMatrixFormat_t
which can now be queried using
cudssMatrixGetFormat
to determine whether
cudssMatrix_t
object is a single matrix or a batch.
Once the batches of matrices are created, they can be passed to the main calls of
cudssExecute
in the exact same way as if they were single matrices. The sample below demonstrates the use of new batch APIs to create batches of dense matrices for the solution and right-hand sides, and a batch of sparse matrices for As.
/*
* For the batch API, scalar arguments like nrows, ncols, etc.
* must be arrays of size batchCount of the specified integer type
*/
cudssMatrix_t b, x;
cudssMatrixCreateBatchDn(&b, batchCount, ncols, nrhs, ldb, batch_b_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
cudssMatrixCreateBatchDn(&x, batchCount, nrows, nrhs, ldx, batch_x_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
cudssMatrix_t A;
cudssMatrixCreateBatchDn(&A, batchCount, nrows, ncols, nnz, batch_csr_offsets, NULL, batch_csr_columns, batch_csr_values, CUDA_R_32I, CUDA_R_64F, mtype, mview, base);
/*
* The rest of the workflow remains the same, incl. calls to cudssExecute() with batch matrices A, b and x
*/
To see the
full sample code
that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo.
Host multithreading API
Although most of the compute- and memory-intensive parts of cuDSS are executed on the GPU, some important tasks are still executed on the host. Prior to v0.5.0, cuDSS did not support multithreading (MT) on the host, and host execution was always single-threaded. The new release introduces support for arbitrary user-defined threading runtimes (such as
pthreads, OpenMP, and thread pools
), offering flexibility similar to how support was introduced for user-defined communication backends in the
MGMN mode
in cuDSS v0.3.0.
Among the tasks executed on the host, reordering (a critical part of the analysis phase) often stands out, as it can take a significant portion of the total execution time (analysis plus factorization plus solve). To address this common bottleneck in direct sparse solvers, cuDSS v0.5.0 introduces both general MT support on the host and a multithreaded version of reordering. Note that this is available only for the
CUDSS_ALG_DEFAULT
reordering algorithm.
As with the MGMN mode, the new MT mode is optional and does not introduce any new dependencies to the user application if not used. Enabling this feature in your application is simple—just set the name of the shim threading layer library using
cudssSetThreadingLayer
and (optionally) specify the maximum number of threads that cuDSS is allowed to use, as shown in the following sample:
/*
* Before cudssExecute(CUDSS_PHASE_ANALYSIS)
* thrLibFileName - filename to the cuDSS threading layer library
* If NULL then export CUDSS_THREADING_LIB = ‘filename’
*/
cudssSetThreadingLayer(cudssHandle, thrLibFileName);
/*
* (optional)Set number of threads to be used by cuDSS
*/
int32_t nthr = ...;
cudssConfigSet(cudssHandle, solverConfig, CUDSS_CONFIG_HOST_NTHREADS,
&nthr, sizeof(nthr);
To see the
full sample code
that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo.
Host execution
While the primary objective of cuDSS is to enable GPU acceleration for sparse direct solver functionality, for tiny and small matrices (which typically don’t have enough parallelism to saturate a GPU) an extensive use of the GPU can bring a non-negligible overhead. This can sometimes even dominate the total runtime.
To make cuDSS a more universal solution, v0.5.0 introduces the
host execute mode,
which enables factorization and solve phases on the host. When enabled, cuDSS will use a heuristic size-based dispatch to determine whether to perform part of the computations (during factorization and solve phases) on the host or on the device.
Additionally, when hybrid execution mode is enabled, users can pass host buffers for the matrix data which saves the needless memory transfers from the host to the device. Host execution mode doesn’t give cuDSS capabilities of a fully-fledged CPU solver, but helps to optionally remove the unwanted memory transfers and improve performance for small matrices.
The following sample demonstrates how to turn on hybrid execution mode.
/*
* Before cudssExecute(CUDSS_PHASE_ANALYSIS)
*/
int hybrid_execute_mode = 1;
cudssConfigSet(solverConfig, CUDSS_CONFIG_HYBRID_EXECUTE_MODE,
&hybrid_execute_mode, sizeof(hybrid_execute_mode);
To see the
full sample code
that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo.
Performance improvements of cuDSS v0.4.0 and v0.5.0
cuDSS v0.4.0 and v0.5.0 introduced significant performance improvements across several types of workloads.
In v0.4.0, the factorization and solve steps are accelerated by detecting when parts of the triangular factors become dense and leveraging more efficient dense BLAS kernels for those parts. The speedup achieved through this optimization depends largely on the symbolic structure of the factors, which in turn is influenced by the original matrix and the reordering permutation.
Figure 1 illustrates the performance improvement of v0.4.0 over v0.3.0, based on a large collection of matrices from the
SuiteSparse Matrix Collection
, analyzed on the NVIDIA H100 GPU.
Figure 1. Performance improvement for the factorization and solve phases of cuDSS v0.4.0 over v0.3.0 for a variety of matrices from SuiteSparse Matrix Collection
As shown in the chart, both the factorization and solve phases saw substantial improvements, with geometric means of 1.74 and 2.0, respectively. Some matrices with relatively sparse triangular factors did not show significant speedups. However, matrices like
Serena
,
conf5_4_8x8_20
and
atmosmodd
(which come from various types of HPC applications) experienced speedups of more than 8x in the factorization phase and more than 6x in the solve phase.
The analysis phase also saw significant speedup, thanks to the multithreaded reordering introduced in cuDSS v0.5.0. Figure 2 compares the performance of the analysis phase between v0.5.0 and v0.4.0, using the same set of matrices from the SuiteSparse Matrix Collection.
The performance improvement arises from the fact that v0.4.0 used a single-threaded reordering implementation, while v0.5.0 leverages multiple CPU threads (cores) on the host. While it’s well-known that state-of-the-art reordering algorithms are notoriously difficult to parallelize efficiently, cuDSS v0.5.0 makes good use of multiple CPU cores, resulting in a solid geometric mean speedup of 1.98, with the maximum improvement reaching 4.82.
Note that the analysis phase includes both the (optionally multithreaded) reordering and symbolic factorization, which is performed on the GPU. Therefore, the actual speedup for the reordering part is likely even higher than what the chart indicates.
Figure 2. Performance improvement (analysis phase only) of cuDSS v0.5.0 over v0.4.0 for a variety of matrices from SuiteSparse Matrix Collection using host multithreading feature released in v0.5.0
cuDSS v0.5.0 further optimizes the performance of the
hybrid memory mode,
which was first introduced in v0.3.0 This feature allows part of the internal arrays used within cuDSS to reside on the host, enabling the solution of systems that don’t fit into the memory of a single GPU. It works particularly well on
NVIDIA Grace
-based systems, thanks to the significantly higher memory bandwidth between the CPU and GPU.
Figure 3 presents the performance speedup for the factorization and solve phases with cuDSS 0.5.0, comparing an NVIDIA Grace Hopper system (Grace CPU plus NVIDIA H100 GPU) against an x86 system (Intel Xeon Platinum 8480CL, 2S) plus NVIDIA H100 GPU, using a set of large matrices.
Figure 3. Performance improvement of cuDSS v0.5.0 with hybrid memory mode for a variety of matrices
As previously mentioned, v0.5.0 introduces the
hybrid execution mode
, which improves performance of cuDSS for small matrices. Figure 4 shows the speedup of the hybrid execution mode against the CPU solver (Intel MKL PARDISO) for the factorization and solve phases.
Figure 4. Performance improvement of cuDSS v0.5.0 with hybrid execution (enabling host execution) against the CPU solver for a variety of matrices
Finally, Figure 5 shows the speedup of the new hybrid execution mode (cuDSS v0.5.0) compared to the default mode (cuDSS v0.4.0) for the factorization and solve phases on a set of small matrices. While the speedup of the factorization phase is significant only for really small matrices, the solve phase delivers speedups for systems with up to 30K equations. This can be explained by the fact that the solve phase has less work compared to the factorization phase and cannot make good use of a GPU for the tested matrices.
Figure 5. Performance improvement of cuDSS v0.5.0 with hybrid execution for a variety of matrices
Summary
NVIDIA cuDSS v0.4.0 and v0.5.0 releases provide several new enhancements that significantly improve performance. Highlights include general speedups in factorization and solving, a hybrid memory and execution mode, host multithreading, and support for non-uniform batch sizes. In addition to our continued investment in performance, we will consistently enhance our APIs to expand functionality, providing users with greater flexibility and fine-grained control.
Ready to get started?
Download NVIDIA cuDSS v0.5.0
.
To learn more, check out the
cuDSS v0.5.0 release notes
and the following previous posts:
NVIDIA cuDSS Library Removes Barriers to Optimizing the US Power Grid
Spotlight: Honeywell Accelerates Industrial Process Simulation with NVIDIA cuDSS
Join the conversation and provide feedback in the
NVIDIA Developer Forum
. | https://developer.nvidia.com/zh-cn/blog/nvidia-cudss-advances-solver-technologies-for-engineering-and-scientific-computing/ | NVIDIA cuDSS 提高工程和科学计算中的求解器技术 | NVIDIA cuDSS
是第一代稀疏直接求解器库,旨在加速工程和科学计算。cuDSS 正越来越多地应用于数据中心和其他环境,并支持单 GPU、多 GPU 和多节点(MGMN)配置。
cuDSS 已成为加速多个领域 (例如结构工程、流体动力学、电磁学、电路模拟、优化和 AI 辅助工程问题) 的计算机辅助工程 (CAE) 工作流程和科学计算的关键工具。
本文重点介绍了
cuDSS v0.4.0
和
cuDSS v0.5.0
中提供的一些关键性能和可用性功能 (如表 1 中总结的内容),cuDSS v0.4.0 可显著提升分解和求解步骤的性能,同时还引入了一些新功能,包括内存预测 API、自动混合内存选择和可变批量支持。cuDSS v0.5.0 增加了主机执行模式,这对较小的矩阵特别有利,并且在分析阶段使用混合内存模式和主机多线程实现了显著的性能提升,而有效并行化通常是一个具有挑战性的领域。
cuDSS v0.4.0 版本
cuDSS v0.5.0 版本
PIP wheel 和 Conda 支持在因子具有密集部分时,对单 GPU 和多 GPU 进行分解并解决性能提升 (高达 10 倍) 内存预测 API 自动选择正常/混合内存模式可变 (非均匀) 批量支持 (变量 N、NNZ、NRHS、LD)
较小矩阵的主机执行模式 (部分主机计算模式) 带有用户定义线程后端的主机多线程 (目前仅用于重新排序) 新的透视方法 (通过扩展进行静态透视) 提高了混合内存模式的性能和内存需求
表 1.版本 v0.4.0 和 v0.5.0 中的 cuDSS 功能
功能亮点
本节重点介绍显著的可用性增强和性能改进。
内存预测 API
对于需要在达到显存密集型阶段(数值分解)之前了解 cuDSS 所需设备和主机显存的精确数量的用户而言,内存预测 API 非常重要。
在设备显存不足的情况下,无论是求解大型线性系统,还是应用程序的 cuDSS 显存预算有限时,此功能尤其有用。在这两种情况下,建议在分析阶段之前启用混合显存模式。
请注意,如果混合显存模式已启用,但一切都适合可用设备显存 (无论是基于用户定义的限制还是 GPU 容量),cuDSS 将自动检测到这一点并切换到更快的默认显存模式。
使用 cuDSS 求解线性系统的典型调用序列如下所示:
分析 (重新排序和符号分解)
数值分解(分配并计算因子值)
求解
引入 memory prediction 后,用户现在可以在分析阶段结束后查询所选模式(默认或 hybrid memory)所需的 device 和 host memory 数量,以及 hybrid memory 模式所需的最小 memory。如下示例所示,查询是使用
CUDSS_DATA_MEMORY_ESTIMATES
对
cudssDataGet
进行的一次调用,可将输出写入固定大小的小型数组中。
/*
* After cudssExecute(..., CUDSS_PHASE_ANALYSIS, ,,,)
*/
int64_t memory_estimates[16] = {0};
cudssDataGet(cudssHandle, solverData, CUDSS_DATA_MEMORY_ESTIMATES,
&memory_estimates, sizeof(memory_estimates);
/* memory_estimates[0] - permanent device memory
* memory_estimates[1] - peak device memory
* memory_estimates[2] - permanent host memory
* memory_estimates[3] - peak host memory
* memory_estimates[4] - minimum device memory for the hybrid memory mode
* memory_estimates[5] - maximum host memory for the hybrid memory mode
* memory_estimates[6,...,15] - reserved for future use
*/
要查看使用此功能的
完整示例代码
,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。
不均匀批处理 API
在应用程序需要求解多个线性系统,且每个系统单独不够大,无法使 GPU 完全饱和的情况下,可以通过批处理来提高性能。有两种类型的批处理:uniform 和 non-uniform。与 uniform 批量不同,non-uniform 批量不会对矩阵的维度或稀疏模式施加限制。
cuDSS v0.4.0 引入了对不均匀批量的支持。不透明的
cudssMatrix_t
对象可以表示单个矩阵,也可以表示批量矩阵,因此唯一需要更改的部分是创建和修改矩阵对象的方式。
为创建密集或稀疏矩阵的批量,v0.4.0 引入了新的 API
cudssMatrixCreateBatchDn
或
cudssMatrixCreateBatchCsr
、添加的类似 API
cudssMatrixSetBatchValues
和
cudssMatrixSetBatchCsrPointers
以及
cudssMatrixGetBatchDn
和
cudssMatrixGetBatchCsr
可用于修改矩阵数据。cuDSS v0.5.0 修改
cudssMatrixFormat_t
,现在可以使用
cudssMatrixGetFormat
查询 tg_10,以确定
cudssMatrix_t
对象是单个矩阵还是批量对象。
创建矩阵批量后,它们可以像单个矩阵一样以完全相同的方式传递给
cudssExecute
的主要调用。以下示例演示了如何使用新的批量 API 为解和右侧创建批量密集矩阵,并为 As 创建批量稀疏矩阵。
/*
* For the batch API, scalar arguments like nrows, ncols, etc.
* must be arrays of size batchCount of the specified integer type
*/
cudssMatrix_t b, x;
cudssMatrixCreateBatchDn(&b, batchCount, ncols, nrhs, ldb, batch_b_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
cudssMatrixCreateBatchDn(&x, batchCount, nrows, nrhs, ldx, batch_x_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
cudssMatrix_t A;
cudssMatrixCreateBatchDn(&A, batchCount, nrows, ncols, nnz, batch_csr_offsets, NULL, batch_csr_columns, batch_csr_values, CUDA_R_32I, CUDA_R_64F, mtype, mview, base);
/*
* The rest of the workflow remains the same, incl. calls to cudssExecute() with batch matrices A, b and x
*/
要查看使用此功能的
完整示例代码
,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。
托管多线程 API
虽然 cuDSS 的大多数计算和内存密集型部分都在 GPU 上执行,但一些重要任务仍然在主机上执行。在 v0.5.0 之前,cuDSS 不支持主机上的多线程 (MT),并且主机执行始终是单线程的。新版本引入了对任意用户定义线程运行时 (
例如 pthreads、OpenMP 和线程池
) 的支持,其灵活性与 cuDSS v0.3.0 中在
MGMN 模式
下引入对用户定义通信后端的支持类似。
在主机上执行的任务中,重新排序 (分析阶段的关键部分) 通常非常突出,因为它可能占用总执行时间 (分析加分解加解) 的很大一部分。为解决直接稀疏求解器中的常见瓶颈,cuDSS v0.5.0 在主机上引入了通用 MT 支持和多线程版本的重新排序。请注意,此功能仅适用于
CUDSS_ALG_DEFAULT
重排序算法。
与 MGMN 模式一样,新的 MT 模式是可选的,如果不使用,不会向用户应用引入任何新的依赖项。在应用中启用此功能非常简单 – 只需使用
cudssSetThreadingLayer
设置 shim 线程层库的名称,并 (可选) 指定允许 cuDSS 使用的最大线程数,如下例所示:
/*
* Before cudssExecute(CUDSS_PHASE_ANALYSIS)
* thrLibFileName - filename to the cuDSS threading layer library
* If NULL then export CUDSS_THREADING_LIB = ‘filename’
*/
cudssSetThreadingLayer(cudssHandle, thrLibFileName);
/*
* (optional)Set number of threads to be used by cuDSS
*/
int32_t nthr = ...;
cudssConfigSet(cudssHandle, solverConfig, CUDSS_CONFIG_HOST_NTHREADS,
&nthr, sizeof(nthr);
要查看使用此功能的
完整示例代码
,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。
主机执行
虽然 cuDSS 的主要目标是为稀疏直接求解器功能实现 GPU 加速,但对于微小的矩阵 (通常没有足够的并行度使 GPU 饱和),广泛使用 GPU 会带来不可忽略的开销。有时,这甚至会主宰总运行时间。
为使 cuDSS 成为更通用的解决方案,v0.5.0 引入了
主机执行模式
,该模式可在主机上启用分解和求解阶段。启用 cuDSS 后,cuDSS 将使用基于大小的启发式分配来确定是否在主机或设备上执行部分计算 (在分解和求解阶段)。
此外,启用混合执行模式后,用户可以为矩阵数据传递主机缓冲区,从而节省从主机到设备的不必要的内存传输。主机执行模式无法提供成熟 CPU 求解器的 cuDSS 功能,但有助于选择性地消除不需要的内存传输,并提高小型矩阵的性能。
以下示例演示了如何开启混合执行模式。
/*
* Before cudssExecute(CUDSS_PHASE_ANALYSIS)
*/
int hybrid_execute_mode = 1;
cudssConfigSet(solverConfig, CUDSS_CONFIG_HYBRID_EXECUTE_MODE,
&hybrid_execute_mode, sizeof(hybrid_execute_mode);
要查看使用此功能的
完整示例代码
,请访问 NVIDIA/CUDALibrarySamples GitHub repo。
cuDSS v0.4.0 和 v0.5.0 的性能提升
cuDSS v0.4.0 和 v0.5.0 为多种类型的工作负载引入了显著的性能提升。
在 v0.4.0 中,通过检测三角形因子的某些部分何时变得密集,并为这些部分利用更高效的密集 BLAS 内核,可加速分解和求解步骤。通过此优化实现的加速在很大程度上取决于因子的符号结构,而符号结构又会受到原始矩阵和重新排序排列的影响。
根据 SuiteSparse 矩阵集合中的大量矩阵集合,并在 NVIDIA H100 GPU 上进行分析,图 1 展示了 v0.4.0 相较于 v0.3.0 的性能提升。
图 1. cuDSS v0.4.0 相对于 v0.3.0 对 SuiteSparse Matrix Collection 中各种矩阵的分解和求解阶段的性能有所提升
如图所示,分解阶段和求解阶段均有显著改进,几何均值分别为 1.74 和 2.0、一些三角形因子相对稀疏的矩阵并未显示出显著的加速。但是,
Serena
、
conf5_4_8x8_20
和
atmosmodd
(来自各种类型的 HPC 应用) 等矩阵在分解阶段的速度提升超过 8 倍,在求解阶段的速度提升超过 6 倍。
得益于 cuDSS v0.5.0 中引入的多线程重排序,分析阶段的速度也得到了显著提升。图 2 使用来自 SuiteSparse 矩阵集合的同一组矩阵,比较了 v0.5.0 和 v0.4.0 分析阶段的性能。
性能提升的原因是,v0.4.0 使用了单线程重排序实现,而 v0.5.0 则利用主机上的多个 CPU 线程 (cores)。众所周知,先进的重排序算法很难高效并行化,而 cuDSS v0.5.0 可以很好地利用多个 CPU cores,从而实现 1.98 的几何平均加速,最大提升可达 4.82。
请注意,分析阶段包括 (可选多线程) 重排序和符号分解,这在 GPU 上执行。因此,重新排序部分的实际加速可能甚至高于图表所示。
图 2、对于使用 v0.5.0 中发布的主机多线程功能的 SuiteSparse 矩阵集合中的各种矩阵,cuDSS v0.5.0 的性能比 v0.4.0 得到提升 (仅限分析阶段)
cuDSS v0.5.0 进一步优化了
混合显存模式
(最初在 v0.3.0 中引入)的性能。此功能允许 cuDSS 中使用的部分内部数组驻留在主机上,从而为不适合单个 GPU 内存的系统提供解决方案。由于 CPU 和 GPU 之间的内存带宽显著增加,它在基于
NVIDIA Grace
的系统上的运行效果尤为出色。
图 3 展示了 cuDSS 0.5.0 下分解和求解阶段的性能提升情况,并使用一组大型矩阵比较了 NVIDIA Grace Hopper 系统 (Grace CPU 加 NVIDIA H100 GPU) 与 x86 系统 (Intel Xeon Platinum 8480CL,2S) 加 NVIDIA H100 GPU 的性能提升情况。
图 3、cuDSS v0.5.0 借助混合显存模式,性能得到提升
如前所述,v0.5.0 引入了
混合执行模式
,可提高 cuDSS 在处理小型矩阵时的性能。图 4 显示了在分解和求解阶段,混合执行模式相对于 CPU 求解器 (Intel MKL PARDISO) 的加速情况。
图 4、cuDSS v0.5.0 通过混合执行 (启用主机执行) 相较于 CPU 求解器在各种矩阵中的性能提升
最后,图 5 显示了与默认模式 (cuDSS v0.4.0) 相比,新的混合执行模式 (cuDSS v0.5.0) 在一组小矩阵上分解和求解阶段的速度提升情况。虽然分解阶段的加速仅适用于非常小的矩阵,但求解阶段可为方程高达 30K 个的系统提供加速。这可以解释为,与分解阶段相比,求解阶段的工作量较少,并且无法充分利用 GPU 来测试矩阵。
图 5、cuDSS v0.5.0 的性能提升,支持各种矩阵的混合执行
总结
NVIDIA cuDSS v0.4.0 和 v0.5.0 版本提供了多项新的增强功能,可显著提高性能。亮点包括分解和求解方面的一般加速、混合内存和执行模式、主机多线程,以及对非均匀批量大小的支持。除了持续投资性能之外,我们还将持续增强 API 以扩展功能,为用户提供更高的灵活性和细粒度控制。
准备好开始使用了吗?下载 NVIDIA cuDSS v0.5.0。
如需了解更多信息,请查看
cuDSS v0.5.0 版本说明
以及之前的以下帖子:
NVIDIA cuDSS 库消除了优化美国电网的障碍
聚焦:Honeywell 借助 NVIDIA cuDSS 加速工业流程模拟
在
NVIDIA 开发者论坛
中加入对话并提供反馈。 |
https://developer.nvidia.com/blog/spotlight-university-of-tokyo-uses-nvidia-grace-hopper-for-groundbreaking-energy-efficient-seismic-research/ | Spotlight: University of Tokyo Uses NVIDIA Grace Hopper for Groundbreaking Energy-Efficient Seismic Research | Supercomputers are the engines of groundbreaking discoveries. From predicting extreme weather to advancing disease research and designing safer, more efficient infrastructures, these machines simulate complex systems that are impractical to test in the real world due to their size, cost, and material requirements.
Since the introduction of the GPU in 1999, NVIDIA has continually pushed the boundaries of
accelerated computing
, an approach that uses specialized hardware to dramatically speed up work by bundling frequently occurring tasks in parallel processing. This efficiency enables supercomputers to handle unprecedented computational challenges while consuming less energy per task.
Today, NVIDIA-powered systems lead the charge in energy-efficient supercomputing, with eight of the top 10 placements on the
Green500
—an industry benchmark for energy-efficient supercomputers. The JEDI system at Germany’s Jülich Supercomputing Center, powered by
NVIDIA Grace Hopper
, exemplifies this progress, achieving a staggering efficiency of 72.7 gigaflops per watt.
Advancing seismic safety with the University of Tokyo
This drive for
energy efficiency
is exemplified in the recent research effort between NVIDIA and the University of Tokyo. Situated in one of the most seismically active regions in the world, Japan requires cutting-edge research to mitigate the impact of earthquakes.
The
Earthquake Research Institute at the University of Tokyo
has used the
NVIDIA Grace Hopper Superchip
to accelerate its seismic simulations,
achieving groundbreaking results
.
The tightly coupled CPU-GPU architecture of the
NVIDIA GH200 Grace Hopper Superchip
enables an improvement in simulation performance of 86x, with 32x greater energy efficiency compared to traditional methods. Advanced memory optimization ensures faster and more complex computations, empowering researchers to model earthquake-resistant infrastructure with unmatched precision. By leveraging data-driven predictions and heterogeneous computing systems, the university is helping transform seismic research into a more efficient endeavor.
Award-winning research: Heterogeneous computing for seismic modeling
These breakthroughs were presented at
WACCPD 2024
, where the University of Tokyo showcased how NVIDIA hardware-software synergy enables solutions to dynamic, large-scale problems. In seismic modeling, the innovative architecture of Grace Hopper accelerates time-evolution equation-based simulations by combining high-performance GPUs for intensive calculations with the memory capacity of CPUs for predictive algorithms.
Beyond seismic research, this approach has the potential to reshape energy efficiency in
high-performance computing (HPC)
.
This groundbreaking work developed a new CPU-GPU heterogeneous computing method for repeatedly solving time-evolution partial differential equation (PDE) problems with guaranteed accuracy. This new method achieves a very short time-to-solution (TTS) with low energy-to-solution by leveraging both the large memory capacity of the CPU and the high computing performance of the GPU. When scaled up to the
Alps supercomputer
, the method was 51.6x faster than using only the CPU and 6.98x faster than the GPU, while achieving an impressive 94.3% efficiency across 1,920 compute nodes.
Figure 1. Performance comparison of the newly-developed PDE solver. Using both CPU and GPU in the NVIDIA GH200 Grace Hopper Superchip realizes maximum performance
The simulations involve stepping forward in time, where the solution (the seismic activity, for example) at each point in time is calculated by solving a system of equations. In turn, at each timestep multiple iterations are required, where the solution converges to a suitably accurate result. The calculations at each iteration are very intense and require the computational power of GPUs.
Therefore, there are two aspects that determine the overall TTS (and corresponding energy to solution): how fast each iteration can be calculated at each timestep, and how many iterations are required at each timestep. It is this combination that enables Grace Hopper architecture to really excel: the Hopper GPU performs each calculation exceedingly quickly, while the large memory capacity of the Grace CPU stores a history of prior results to inform the predictor, allowing the number of iterations to be vastly reduced.
This data-driven method is well suited to the NVIDIA GH200 Grace Hopper Superchip. The number of iterations required to converge depends on how good the initial “guess” of the solution is. The novelty of this work involves data from previous timesteps being used to maximize the accuracy of each initial solution, thus reducing the number of iterations required. This method requires a combination of capabilities:
The large memory capacity of the NVIDIA Grace CPU to hold the required previous-timestep data,
The computational ability of Hopper to solve the equations at each iteration,
The fast interconnect to allow the data-driven results from Grace CPU to optimize the number of solver iterations on the Hopper GPU.
Figure 2. The data-driven method solves two problem sets simultaneously by using the high bandwidth CPU-GPU connection on GH200 to communicate predictions from the CPU to the solver on the GPU
Figure 3. Data-driven predictor running on the Grace CPU reduces iterations from about 150 to 50
The benefits are shown in Figure 1, where it can be seen that the time-to-solution is reduced by 86x, compared to only using the CPU, or 9x compared to only using the GPU. The corresponding energy reductions are 32x (compared to CPU) and 7x (compared to GPU alone).
Enabling energy efficient supercomputing
The University of Tokyo’s seismic research breakthroughs highlight the transformative power of energy-efficient supercomputing. By harnessing the NVIDIA Grace Hopper Superchip, researchers are not only accelerating seismic simulations by unprecedented margins but also drastically reducing energy consumption—a crucial step in
sustainable computing
.
This collaboration demonstrates how cutting-edge technology can address urgent global challenges, like earthquake safety, while setting new benchmarks for performance and energy efficiency. As these innovations continue to scale across industries, they pave the way for a future where high-performance computing drives both scientific progress and environmental responsibility.
At
NVIDIA GTC 2025
, Kohei Fujita, associate professor at the University of Tokyo Earthquake Research Institute, will discuss this breakthrough research in the session,
Implement Accelerated PDE-Based Time-History Simulation by Data-Driven Methods on Strongly Coupled CPU-GPU systems [S72925]
. | https://developer.nvidia.com/zh-cn/blog/spotlight-university-of-tokyo-uses-nvidia-grace-hopper-for-groundbreaking-energy-efficient-seismic-research/ | 聚焦:东京大学使用 NVIDIA Grace Hopper 进行开创性的节能地震研究 | 超级计算机是推动突破性发现的引擎。从预测极端天气到推进疾病研究,以及设计更安全、更高效的基础设施,这些机器可以模拟因规模、成本和材料要求而无法在现实世界中进行测试的复杂系统。
自 1999 年推出 GPU 以来,NVIDIA 一直在不断突破加速
计算的极限
,这种方法使用专业硬件,通过在并行处理中捆绑频繁出现的任务来大幅加速工作。这种效率使超级计算机能够处理前所未有的计算挑战,同时减少每项任务消耗的能源。
如今,由 NVIDIA 提供支持的系统在高效超级计算领域处于领先地位,在 Green500 榜单前 10 名中有 8 名入选,而 Green500 榜单是高效超级计算机的行业基准测试。德国于利希超级计算中心(Jülich Supercomputing Center)的 JEDI 系统由
NVIDIA Grace Hopper
提供动力支持,充分体现了这一进步,实现了 72.7 gigaflops 每瓦的惊人效率。
与东京大学一起推进地震安全性
NVIDIA 和东京大学最近的研究工作就是这种
提高能效的做法的典范
。日本位于世界上地震最活跃的地区之一,需要进行尖端研究来减轻地震的影响。
东京大学地震研究所
使用
NVIDIA Grace Hopper 超级芯片
加速地震模拟,
取得了突破性成果
。
NVIDIA GH200 Grace Hopper 超级芯片采用紧密合的 CPU-GPU 架构
,模拟性能提高了 86 倍,能效比传统方法高 32 倍。高级内存优化可确保更快速、更复杂的计算,使研究人员能够以出色的精度对抗震基础设施进行建模。通过利用数据驱动的预测和异构计算系统,该大学正在帮助将地震研究转变为更高效的工作。
获奖研究:用于地震建模的异构计算
在
WACCPD 2024
大会上,东京大学展示了 NVIDIA 硬件与软件的协同作用如何为动态、大规模问题提供解决方案,展示了这些突破性成果。在地震建模中,Grace Hopper 的创新架构将用于密集型计算的高性能 GPU 与用于预测算法的 CPU 内存容量相结合,从而加速基于时间进化方程的模拟。
除了地震研究之外,这种方法还有可能重塑
高性能计算(HPC)
的能效。
这项突破性的工作开发了一种新的 CPU-GPU 异构计算方法,能够以有保证的准确性反复解决时间演进偏微分方程 (PDE) 问题。这种新方法通过利用 CPU 的大内存容量和 GPU 的高计算性能,实现了极短的解决方案时间 (TTS) 和低能耗的解决方案。当扩展到
Alps 超级计算机
时,该方法的速度比仅使用 CPU 快 51.6 倍,比 GPU 快 6.98 倍,同时在 1,920 个计算节点中实现了 94.3% 的惊人效率。
图 1、新开发的 PDE 求解器的性能比较。在 NVIDIA GH200 Grace Hopper 超级芯片中同时使用 CPU 和 GPU 可实现出色性能
模拟涉及逐步推进时间,即通过求解方程组来计算每个时间点的解 (例如地震活动)。反过来,在每个时间步长中,都需要进行多次迭代,使解收到适当准确的结果。每次迭代的计算都非常密集,需要 GPU 的计算能力。
因此,有两个方面决定整体 TTS (以及相应的解能量):在每个时间步长下每次迭代的计算速度,以及每个时间步长下需要进行多少次迭代。正是这种组合使 Grace Hopper 架构真正表现出色:Hopper GPU 执行每次计算的速度极快,而 Grace CPU 的大内存容量则存储先前结果的历史记录以告知预测器,从而大幅减少迭代次数。
这种数据驱动的方法非常适合 NVIDIA GH200 Grace Hopper 超级芯片。所需的迭代次数取决于对解的初始“猜测”的准确性。这项工作的新颖之处在于,之前时间步长的数据可用于更大限度地提高每个初始解决方案的准确性,从而减少所需的迭代次数。此方法需要结合以下功能:
NVIDIA Grace CPU 的大内存容量可容纳所需的先前时间步长数据,
Hopper 在每次迭代中求解方程的计算能力
快速互连允许 Grace CPU 的数据驱动结果优化 Hopper GPU 上的求解器迭代次数。
图 2、数据驱动方法通过使用 GH200 上的高带宽 CPU-GPU 连接将预测从 CPU 传输到 GPU 上的求解器,同时解决两个问题集
图 3、在 Grace CPU 上运行的数据驱动预测器可将迭代次数从大约 150 次减少到 50 次
图 1 显示了这些优势,与仅使用 CPU 相比,解决时间缩短了 86 倍,与仅使用 GPU 相比,缩短了 9 倍。相应的能耗分别为 32 倍 (与 CPU 相比) 和 7 倍 (仅与 GPU 相比)。
实现高能效超级计算
东京大学的地震研究取得的突破凸显了高效超级计算的变革力量。通过利用 NVIDIA Grace Hopper Superchip,研究人员不仅以前所未有的利润加速地震模拟,而且还大幅降低了能源消耗,这是
可持续计算
的关键一步。
这项合作展示了尖端技术如何应对紧迫的全球性挑战 (例如地震安全性),同时为性能和能效设定新的基准。随着这些创新继续在各行各业中推广,它们为高性能计算推动科学进步和承担环保责任的未来铺平道路。
在
NVIDIA GTC 2025
上,东京大学地震研究所副教授 Kohei Fujita 将在名为“Implement Accelerated PDE-Based Time-History Simulation by Data-Driven Methods on Strongly Coupled CPU-GPU systems”的会议中讨论这项突破性研究[S72925]。 |
https://developer.nvidia.com/blog/spotlight-brli-and-toulouse-inp-develop-ai-based-flood-models-using-nvidia-modulus/ | Spotlight: BRLi and Toulouse INP Develop AI-Based Flood Models Using NVIDIA Modulus | Flooding poses a significant threat to 1.5 billion people, making it the most common cause of major natural disasters. Floods cause up to
$25 billion in global economic damage
every year. Flood forecasting is a critical tool in disaster preparedness and risk mitigation. Numerical methods have long been developed that provide accurate simulations of river basins. With these, engineers such as those at the consulting firm
BRLi
study different rainfall scenarios, and provide valuable assessments of flood risks, mitigation strategies, and disaster relief plans to local stakeholders. Accurate predictions can save lives, protect infrastructure, and reduce economic losses.
Yet these methods use physics-based numerical simulations that are computationally intensive. This approach often takes many hours to simulate a flooding event, even on many parallel processors. It’s not suitable for real-time forecasting of ongoing events. This limitation has long been a bottleneck in providing rapid, actionable flood warnings tailored to a given event, and has prevented the development of real-time forecasting systems.
To address this issue, a joint team at BRLi and
National Polytechnic Institute of Toulouse (Toulouse INP)
, through a collaboration in the
ANITI
research institute, has designed an end-to-end AI approach that fully replaces the physics-based solver. This team includes expertise in both hydraulic modeling and AI methods for physics, enabling this interdisciplinary work.
Hydrology experts from BRLi provided detailed physics models for a given basin to produce a database of floods. The Toulouse INP team then used these to train an end-to-end AI system to emulate the solver leveraging
NVIDIA Modulus
from the
Earth-2
platform. Earth-2 is a digital twin cloud platform to develop AI-augmented forecasting and visualization pipelines for weather and climate applications. The team used the optimized training pipelines and parallel training capabilities along with the built-in deep learning models provided by Modulus.
The resulting solver successfully emulates several hours of flooding ahead in mere seconds on a single GPU, opening the way for real-time forecasting. This is a groundbreaking approach leveraging AI to revolutionize flood forecasting, dramatically reducing computation time.
AI-based flood forecasting system
The joint team focused on the Têt River basin, in the south of France. Significant engineering efforts have led to detailed unstructured meshes of this region that encompass many important topographic and engineering features like bridges, dikes, and water retention basins. Detailed numerical simulations using the open-source
Telemac-Mascaret
code were run with a variety of water input profiles to produce a database of flooding events.
For training, the INP team used NVIDIA Modulus as part of the Earth-2 platform, which provides a suite of tools for enabling AI powered solutions in the climate and weather domain. The team used one of the built-in model architectures in Modulus for fast and effortless training with their custom data. To explore what AI models can enable on weather and climate forecasting, check out the
FourCastNet
and
CorrDiff
NVIDIA NIM microservices in a browser-based experience.
The INP team chose one of the graph neural network (GNN) architectures that has shown impressive performance in describing
atmospheric dynamics
and adapted it to the Têt River basin. This approach enables the model to capture complex spatial relationships and temporal dynamics crucial for accurate flood prediction. The AI model was trained to emulate the intricate hydrological processes that lead to flooding, by imitating the data from the detailed numerical simulations.
Training was done on up to 14
NVIDIA A100 Tensor Core
GPUs in parallel on the
CALMIP
Turpan supercomputer, achieving a near-linear speedup with a scaling efficiency of up to 91%. Using an autoregressive approach, the model can predict flooded areas and flow rates in 30-minute increments up to several hours into the future.
To address the challenge of capturing long-range dependencies without substantially increasing the number of message-passing layers, the team created progressively coarser representations of the original mesh. These coarser meshes were connected to the fine mesh using a KD-Tree, forming a multimesh. This strategy expanded the model’s receptive field, enhancing its ability to simulate large-scale flood dynamics while maintaining computational efficiency.
During training, the model optimization was guided by the mean squared error (MSE) loss function to ensure accurate predictions. For testing, the evaluation metrics included the L1 error to measure the average prediction deviation and the critical success index (CSI), which was computed at a 5 cm threshold. The CSI quantified the model’s ability to correctly detect and classify flooded areas by comparing the intersection of predicted and observed flooded cells to their union.
Surrogate GNN model
The final model can perform a 6-hour prediction in 19 ms on a single NVIDIA A100 80 GB GPU. In comparison, a similar simulation with the physics-based numerical model requires 12 hours of CPU time and at least 25 minutes of time-to-solution on 28 CPUs. What’s more, the method leverages the true mesh developed by the hydraulic engineering team with all its complexity. This is a true breakthrough for real-time flood modeling, for which AI attempts often require simplified configurations and meshes.
This achievement also showcases how NVIDIA Modulus enabled the setup and training of cutting-edge AI architectures directly on engineering meshes. This example can be replicated for many problems across multiple industries, simply by providing simulation data.
Figure 1. The surrogate GNN model can perform a 6-hour flooding prediction in 19 ms on a single NVIDIA A100 80 GB GPU
Conclusion
Building on these exciting results, the team is currently refining the training data and evaluation metrics to ensure that the level of quality produced by the model matches the physics-based approach, including in rare events. Once fully validated, the model will be considered for integration into the engineering toolchains at BRLi.
As a first step, it will be used to produce large ensembles of thousands of runs to obtain uncertainty quantification in basin studies, significantly improving upon the state of the art which relies on only a handful of simulations. Next, working with operational disaster relief services will be crucial to find the optimal data sources to ingest and feed to a real-time forecasting system, and how the data should be sent back to them.
Earth-2 is an open platform and
NVIDIA Modulus
is an
open-source project
to support the growing physics AI community. To learn how NVIDIA Modulus can help your physics AI research projects, explore the
NVIDIA/modulus
GitHub repo. | https://developer.nvidia.com/zh-cn/blog/spotlight-brli-and-toulouse-inp-develop-ai-based-flood-models-using-nvidia-modulus/ | 聚焦:BRLi 和 Toulouse INP 使用 NVIDIA Modulus 开发基于 AI 的洪水模型 | 洪水对 15 亿人构成了重大威胁,使其成为重大自然灾害的最常见原因。洪水每年给全球带来高达 25 亿美元的经济损失。洪水预测是防灾和减轻风险的重要工具。我们早已开发出能够准确模拟河流流域的数值方法。借助这些技术,像咨询公司
BRLi
的工程师可以研究不同的降水情景,并为当地利益相关者提供宝贵的洪水风险评估、缓解策略和灾害救济计划。准确的预测可以挽救生命、保护基础设施并减少经济损失。
然而,这些方法使用基于物理性质的数值模拟,而且计算量非常大。这种方法通常需要数小时才能模拟洪水事件,即使在许多并行处理器上也是如此。它不适合实时预测正在进行的事件。长期以来,这种限制一直是针对特定事件提供快速、可行的洪水警报的瓶颈,并阻碍了实时预报系统的开发。
为解决这一问题,BRLi 和
图卢兹国家理工学院 (Toulouse INP)
的一个联合团队通过与
ANITI 研究院
的合作,设计了一种完全取代基于物理的求解器的端到端 AI 方法。该团队在水力学建模和 AI 方法方面拥有丰富的专业知识,为这项跨学科工作提供了支持。
BRLi 的水文专家为给定的水域提供了详细的物理模型,以生成洪水数据库。然后,Toulouse INP 团队使用这些模型来训练端到端 AI 系统,以模拟从
Earth-2
平台利用
NVIDIA Modulus
的求解器。Earth-2 是一个数字孪生云平台,用于为天气和气候应用开发 AI 增强型预测和可视化流程。该团队使用了经过优化的训练流程和并行训练功能,以及 Modulus 提供的内置深度学习模型。
生成的求解器在单个 GPU 上只需几秒钟即可成功模拟前方数小时的洪水,为实时预测开辟了道路。这是一种利用 AI 革新洪水预测的突破性方法,可显著缩短计算时间。
基于 AI 的洪水预报系统
联合团队重点关注法国南部的 Têt River 流域。经过大量的工程工作,该区域形成了详细的非结构化网格,其中包含许多重要的地形和工程特征,例如桥梁、栈桥和水 Retention Basins。使用开源
Telemac-Mascaret
代码的详细数值模拟使用各种水输入剖面图运行,以生成洪水事件数据库。
在训练方面,INP 团队将 NVIDIA Modulus 用作 Earth-2 平台的一部分,该平台提供了一套工具,用于在气候和天气领域实现 AI 驱动的解决方案。该团队使用 Modulus 中的一个内置模型架构,使用其自定义数据进行快速轻松的训练。要探索 AI 模型可以在天气和气候预报方面实现哪些功能,请在基于浏览器的体验中查看
FourCastNet
和
CorrDiff
NVIDIA NIM 微服务。
INP 团队选择了一种在描述
大气动力
学方面表现出色的图神经网络 (GNN) 架构,并对其进行了调整以适应 Têt River 流域。这种方法使模型能够捕捉复杂的空间关系和时间动态,这对准确预测洪水至关重要。AI 模型经过训练,通过模仿详细的数值模拟中的数据来模拟导致洪水的复杂水文过程。
我们在
CALMIP Turpan
超级计算机上并行训练了多达 14 个
NVIDIA A100 Tensor Core GPU
,实现了近线性加速,扩展效率高达 91%。通过使用自回归方法,该模型可以以 30 分钟为增量预测未来几个小时内的洪水淹没地区和流速。
为了解决在不大幅增加消息传递层数量的情况下捕获远程依赖项的挑战,团队逐步创建了原始网格的更粗表示形式。这些粗网格使用 KD-Tree 连接到精细网格,形成了一个多网格。该策略扩展了模型的感受野,增强了其在保持计算效率的同时模拟大规模洪水动力学的能力。
在训练期间,模型优化由均方误差(MSE)损失函数指导,以确保预测准确。在测试方面,评估指标包括用于测量平均预测偏差的 L1 误差和在 5 厘米阈值下计算的关键成功指数(CSI)。CSI 通过比较预测和观察到的洪水淹没细胞与其并集的交集,量化了模型正确检测和分类洪水淹没地区的能力。
替代 GNN 模型
在单个 NVIDIA A100 80 GB GPU 上,最终模型可以在 19 毫秒内执行 6 小时预测。相比之下,使用基于物理性质的数值模型进行类似模拟需要 12 小时的 CPU 时间,在 28 个 CPU 上至少需要 25 分钟的求解时间。此外,该方法利用了液压工程团队开发的真正的复杂网格。这是实时洪水建模的真正突破,AI 通常需要简化的配置和网格。
这一成就还展示了 NVIDIA Modulus 如何直接在工程网格上设置和训练尖端 AI 架构。只需提供模拟数据,即可将此示例复制用于解决多个行业中的许多问题。
图 1、代理 GNN 模型可以在 19 毫秒内使用单个 NVIDIA A100 80 GB GPU 执行 6 小时洪水预测
结束语
基于这些令人兴奋的结果,该团队目前正在完善训练数据和评估指标,以确保模型生成的质量水平与基于物理的方法相匹配,在罕见事件中也是如此。经过充分验证后,我们将考虑将该模型集成到 BRLi 的工程工具链中。
作为第一步,它将用于生成包含数千次运行的大型系综,以在池研究中获得不确定性量化,从而显著改进仅依赖于少量模拟的先进技术。其次,与实际救灾服务机构合作至关重要,有助于找到最佳数据源,将其提取并输入实时预报系统,以及如何将数据发回这些数据源。
Earth-2 是一个开放平台,而
NVIDIA Modulus
是一个开源项目,旨在支持不断发展的物理 AI 社区。要了解 NVIDIA Modulus 如何为您的物理 AI 研究项目提供帮助,请探索
NVIDIA/modulus
Github 存储库。 |
https://developer.nvidia.com/blog/render-path-traced-hair-in-real-time-with-nvidia-geforce-rtx-50-series-gpus/ | Render Path-Traced Hair in Real Time with NVIDIA GeForce RTX 50 Series GPUs | Hardware support for ray tracing triangle meshes was introduced as part of
NVIDIA RTX
in 2018. But ray tracing for hair and fur has remained a compute-intensive problem that has been difficult to further accelerate. That is, until now.
NVIDIA GeForce 50 Series GPUs
include a major advancement in the acceleration of ray tracing for hair and fur: hardware ray tracing support for the linear swept sphere (LSS) primitive. This new primitive is a big step toward rendering high-quality digital humans in real time. The range of applications for realistic human rendering is ever-increasing and includes AI avatars, film and game characters, graphics research, high-performance scientific computing applications, synthetic data generation, and more.
LSS is available today in
NVIDIA OptiX
and in the
NVAPI SDK
. There are advanced examples of real-time hair using LSS with physically based shading models included in the
RTX Character Rendering
SDK which is part of
NVIDIA RTX Kit
. RTX Kit is a suite of neural rendering technologies to ray trace games with AI, render scenes with immense geometry, and create game characters with photorealistic visuals. RTX Kit was introduced during the
GeForce RTX 50 Series launch
.
What is the LSS primitive?
The LSS primitive is a thick, round 3D line with varying radii. Multiple linear swept spheres can be chained together to build 3D curves, sharing vertices where they overlap, similar to how triangles are used to build 3D surfaces. LSS is shaped like a cylindrical or conical tube with spheres optionally capping either end.
Figure 1. The shapes of a linear swept sphere
If the two endpoints are overlapped so the line has no length, then the line swept sphere becomes just a regular sphere. This can be used for hardware-accelerated particle rendering. To make spheres convenient and memory efficient, this special case of LSS for spheres is being exposed as a separate first-class sphere primitive.
Linear swept spheres aren’t normally meant to be shown in magnified close-up, as in Figure 1. Rather, they are intended to be small onscreen, maybe one pixel wide, give or take. When multiple linear swept spheres are chained together into strands and the camera is pulled back a bit, the sharp angles melt away and these thick lines become thin smooth curves.
Figure 2. Linear swept spheres appear smooth from a distance
LSS indexing
LSS comes with two indexing modes: an explicit list mode, and a semi-implicit mode called successive implicit. In list mode, each segment is specified explicitly using a pair of indices into your vertex buffer. In successive implicit mode, line segments are specified using one explicit index and one implicit index, where the second point is assumed to be the next vertex in the vertex buffer.
Because successive vertices in memory are common and convenient, successive implicit mode is an easy way to save 50% of the memory in the index buffer. Figure 3 shows an example where an explicit index buffer would contain [0,1,1,2,2,3,4,5,5,6]. With successive implicit indexing, the index buffer would be [0,1,2,4,5].
Figure 3. An example of two strands with vertex indices
Rendering strand-based curves using legacy hardware
For someone already using the CUDA-based ray tracing API framework
NVIDIA OptiX
, LSS is already available as the default linear curve type, and works on all GPUs that OptiX supports. The OptiX version of LSS automatically uses a software fallback on GPUs prior to NVIDIA Blackwell GPUs, and the new hardware-accelerated primitive on GeForce RTX 50 Series GPUs, without needing any code changes. If you’re not using OptiX, chances are you’ll need to use an alternative primitive.
When it comes to high performance, tessellated curves are the next best thing to LSS. One option is camera-facing or ray-facing quads. Ray-facing quads have a relatively low memory footprint, though they may need to be oriented during every frame of animation (either hair motion or camera motion).
Disjoint Orthogonal Triangle Strips (DOTS) provide a clever solution for tessellating curves that enables viewing from any angle without having to reorient the triangles every frame to face the camera. This way, if you render static curves, you won’t need to rebuild your bounding volume hierarchy (BVH) when the camera or hair moves. Because triangles have broad support, DOTS can take advantage of existing RTX hardware.
Anyone who wants to use LSS on GeForce RTX 50 Series GPUs can still support ray traced strands on older GPUs using DOTS. Some hair examples using both DOTS and LSS will be available as part of the
RTX Character Rendering SDK
that implements path tracing of realistic human hair and skin.
One advantage of LSS is that the geometry is naturally round, making it easy to shade and easy to avoid self-intersection when casting shadow and reflection rays. Tessellated curves like DOTS will typically need to have the geometric normal overridden by a custom shading normal, as shown in Figure 4. When rounded shading normals are used, then the appearance of DOTS can be similar to or even indistinguishable from the appearance of curves made out of linear swept spheres.
Figure 4. Disjoint Orthogonal Triangle Strips can match the appearance of linear swept spheres when shading normals are used
Benefits of using LSS
Some of the existing approaches to real-time hair include:
Textured cards
Ray-facing quads, tessellated tubes, orthogonal quads (DOTS)
Software higher-order curve primitives (cubic Bezier, for example)
Textured cards have been a good way to achieve a reasonable enough quality for hair in games on a very light budget. However, this kind of setup is extremely limited and does not allow much room for simulating or animating hair. Since a textured card has multiple strands per card, there is no control over individual strands, by design. Strand-based approaches are far more controllable and flexible, and this is one reason why strands are clearly the future of real-time hair rendering as the world’s hardware improves over time.
Figure 5. Rendering animated hair on humans is about 2x faster with LSS compared to DOTS, while also requiring about 5x less VRAM to store the geometry
Since triangles are well supported in hardware these days, tessellating curves into triangles (tubes, quads, DOTS) can be very fast to render. The downside is that tessellating takes time and can use a lot of memory. A quad requires four vertices, though two vertices can be shared with the next successive neighboring quad in the strand. So on average, camera facing quads will cost at least two vertices per strand segment. If tubes or DOTS are used, then the strand will end up with more than two vertices per segment on average.
LSS only needs one vertex per strand segment on average, so it uses much less memory than tessellated curves, while achieving higher rendering speeds. And LSS is of course much faster than software higher-order curve primitives, like the cubic B-spline. Depending on the needs and tolerance for quality, this extra performance may come with a memory tradeoff compared to cubic curves, for example, if more vertices are needed for smoothness.
What else can LSS be used for?
The RTX version of the linear swept sphere was designed mainly with hair and fur in mind, but it can be used for other applications including particles, molecular models, wireframe rendering, font and path rendering, and whatever else you can imagine.
Figure 6. Alternative uses of linear swept spheres include wireframe (left), particle rendering (center), and molecular models (right). Image credits: Bay Raitt, Valve; Stanford Dark Sky Simulations; University of Illinois
Get started with LSS
RT Cores in the NVIDIA GeForce RTX 50 Series GPUs introduce hardware support for a new primitive called linear swept spheres (LSS). LSS is a round 3D line primitive with varying thickness, enabling flexible approximation of various strand types. As a special case of LSS, the GeForce RTX 50 Series GPUs hardware primitive also supports spheres directly (without a swept linear segment), which is useful for applications like particle systems.
Rendering animated hair on humans is about 2x faster with LSS compared to DOTS, while also requiring about 5x less VRAM to store the geometry. This is similar for other common use cases. With LSS on GeForce RTX 50 Series GPUs and DOTS for earlier GPUs, there is now a way to get the highest possible hair ray tracing performance on all RTX GPUs.
When ray tracing in CUDA, LSS is currently available in
OptiX
. For DirectX, API for LSS can be found starting in the R570 version of the
NVAPI SDK
. Vulkan developers will be interested in the
Vulkan LSS extension
.
For examples of physically based hair rendering, check out the
RTX Character Rendering SDK
. To learn more about the latest AI and rendering technologies, check out the
NVIDIA RTX Kit
. Join the conversation about LSS on the
OptiX NVIDIA Developer Forum
. | https://developer.nvidia.com/zh-cn/blog/render-path-traced-hair-in-real-time-with-nvidia-geforce-rtx-50-series-gpus/ | 使用 NVIDIA GeForce RTX 50 系列 GPU 实时渲染路径追踪头发 | 2018 年,
NVIDIA RTX
引入了对光线追踪三角形网格的硬件支持。但是,头发和毛皮的光线追踪仍然是一个计算密集型问题,一直难以进一步加速。那就是,直到现在。
NVIDIA GeForce 50 系列 GPUs
在加速毛发和毛皮光线追踪方面取得了重大进步:硬件光线追踪支持线性扫描球体 (LSS) 基元。这种新基元是向实时渲染高质量数字人迈出的一大步。逼真的人类渲染的应用领域与日俱增,包括 AI 虚拟形象、电影和游戏角色、图形研究、高性能科学计算应用、合成数据生成等。
LSS 现已在
NVIDIA OptiX
和
NVAPI SDK
中提供。
RTX Character Rendering SDK
是
NVIDIA RTX Kit
的一部分,其中包含使用基于物理性质的着色模型的 LSS 的高级实时毛发示例。RTX Kit 是一套神经渲染技术,可利用 AI 对游戏进行光线追踪、渲染具有宏大几何图形的场景,以及使用逼真视觉效果创建游戏角色。RTX Kit 在
GeForce RTX 50 Series
发布时推出。
什么是 LSS 基元?
LSS 基元是一条具有不同半径的粗而圆的 3D 线。可以将多个线性扫描球体链接在一起以构建 3D 曲线,在它们重叠的位置共享顶点,这与使用三角形构建 3D 表面的方式类似。LSS 的形状类似圆柱形或锥形管,球体的两端均可选择覆盖。
图 1. 线性扫描球体的形状
如果两个端点重叠,因此直线没有长度,则直线扫描球体仅会成为常规球体。这可用于硬件加速粒子渲染。为了方便球体和提高内存效率,我们将球体 LSS 的这种特殊情况作为单独的一级球体基元公开。
通常情况下,线性扫描球体并不像图 1 那样以放大镜头显示。相反,它们的屏幕尺寸较小,可能有 1 像素宽,可以取或取。当多个线性扫描球体被链式连接成多条链路,并将摄像头稍微往后拉时,清晰的角度会消失,这些粗壮的线条会变成纤细的平滑曲线。
图 2、线性扫描球体在一定距离下显得光滑
LSS 索引
LSS 提供两种索引模式:显式列表模式和称为连续隐式的半隐式模式。在列表模式下,系统会在顶点缓冲区中使用一对索引显式指定每个片段。在连续隐式模式下,使用一个显式索引和一个隐式索引指定线段,其中第二个点假定为顶点缓冲区中的下一个顶点。
由于内存中的连续顶点十分常见且方便,因此可以通过连续隐式模式在索引缓冲区中轻松节省 50% 的内存。图 3 显示了显式索引缓冲区包含 [0,1,1,2,2,3,4,5,5,6] 的示例。通过连续的隐式索引,索引缓冲区将为 [0,1,2,4,5]。
图 3. 带有顶点索引的两条线的示例
使用传统硬件渲染基于链的曲线
对于已经使用基于 CUDA 的光线追踪 API 框架
NVIDIA OptiX
的用户,LSS 已作为默认线性曲线类型提供,并且适用于 OptiX 支持的所有 GPU。OptiX 版本的 LSS 自动使用 NVIDIA Blackwell GPU 之前的 GPU 上的软件回退,以及 GeForce RTX 50 Series GPU 上的新硬件加速基元,而无需更改任何代码。如果您未使用 OptiX,则可能需要使用替代基元。
说到高性能,细分曲线是 LSS 的下一个最佳选择。其中一个选项是摄像头对焦或光线对焦四边形。尽管在动画的每一帧(毛发运动或相机运动)中,面向光线的四边形的显存占用相对较小,但其显存占用相对较小。
不交正交三角形条 (DOTS) 提供了一个巧妙的曲线细分解决方案,支持从任何角度进行查看,而无需重新调整每帧三角形的方向以面向摄像头。这样,如果渲染静态曲线,则无需在相机或头发移动时重建包围盒层次结构 (BVH)。三角形拥有广泛支持,因此 DOTS 可以利用现有的 RTX 硬件。
如果您想在 GeForce RTX 50 系列 GPUs 上使用 LSS,仍然可以使用 DOTS 在较旧的 GPU 上支持光线追踪链。一些同时使用 DOTS 和 LSS 的毛发示例将作为
RTX Character Rendering SDK
的一部分提供,该 SDK 可实现逼真的人类头发和皮肤的路径追踪。
LSS 的一个优点是几何体自然呈圆形,因此在投射阴影和反射光线时,很容易着色,并且很容易避免自交。像 DOTS 这样的细分曲线通常需要使用自定义着色法线覆盖几何法线,如图 4 所示。当使用圆周着色法线时,DOTS 的外观可能与由线性扫描球体构成的曲线相似,甚至无法区分。
图 4、在使用着色法线时,不交接正交三角形条可以与线性扫描球体的外观相匹配
使用 LSS 的优势
一些现有的实时头发处理方法包括:
纹理卡
面向光线的四边形,镶嵌管,正交四边形 (DOTS)
软件高阶曲线基元 (例如 cubic Bezier)
纹理卡是在预算有限的情况下,在游戏中实现合理的头发质量的好方法。但是,这种设置非常有限,没有足够的空间用于模拟头发或制作头发动画。由于一张纹理卡的每张卡有多个链,因此根据设计,无法控制单个链。基于链的方法更具可控性和灵活性,这也是为什么链显然是实时头发渲染的未来,因为世界的硬件会随着时间的推移而改进。
图 5、与 DOTS 相比,使用 LSS 在人类身上渲染动画毛发的速度大约快 2 倍,同时存储几何图形所需的 VRAM 也减少了大约 5 倍
由于硬件目前已大力支持三角形,因此可以非常快速地将曲线细分为三角形(管、四边形、DOTS)。缺点是细分需要时间,并且会占用大量内存。一个四边形需要四个顶点,不过两个顶点可以与链中的下一个相邻四边形共享。因此,面向四边形的摄像头平均每个链段至少会耗费两个顶点。如果使用管或 DOTS,则链的每个部分最终平均将包含两个以上的顶点。
LSS 的每个链段平均只需要一个顶点,因此与细分曲线相比,它使用的内存更少,同时实现了更高的渲染速度。当然,LSS 比软件高阶曲线基元 (如 cubic B-spline) 的速度要快得多。根据对质量的需求和容差,与 cubic curves 相比,这种额外的性能可能会带有内存权衡,例如,如果需要更多的顶点来实现平滑度。
LSS 还有哪些其他用途?
RTX 版本的线性扫描球体在设计时主要考虑了毛发和毛皮,但它可以用于其他应用,包括粒子、分子模型、线框渲染、字体和路径渲染以及您可以想象的其他任何应用。
图 6、线性扫描球体的其他用途包括线框 (左)、粒子渲染 (中心) 和分子模型 (右)。图片来源:Bay Raitt,Valve;Stanford Dark Sky Simulations;University of Illinois
开始使用 LSS
NVIDIA GeForce RTX 50 系列 GPU 中的 RT Cores 为一种称为线性扫描球体 (LSS) 的新基元引入了硬件支持。LSS 是一种厚度不同的 3D 圆线基元,支持灵活逼近各种链类型。作为 LSS 的一种特殊情况,GeForce RTX 50 系列 GPU 硬件基元还直接支持球体 (不包含扫描线性分割),这对于像粒子系统这样的应用非常有用。
与 DOTS 相比,使用 LSS 在人类身上渲染动画毛发的速度大约快 2 倍,同时存储几何图形所需的 VRAM 也减少了大约 5 倍。这与其他常见用例类似。借助 GeForce RTX 50 系列 GPUs 上的 LSS 和早期 GPUs 上的 DOTS,现在可以在所有 RTX GPUs 上获得尽可能高的毛发光线追踪性能。
在 CUDA 中进行光线追踪时,LSS 目前可在
OptiX
中使用。对于 DirectX,可以从 R570 版本的
NVAPI SDK
开始找到适用于 LSS 的 API。Vulkan 开发者可能会对
Vulkan LSS 扩展
感兴趣。
有关基于物理效果的头发渲染示例,请查看
RTX Character Rendering SDK
。如需详细了解最新的 AI 和渲染技术,请查看
NVIDIA RTX Kit
。在
OptiX NVIDIA 开发者论坛
上加入有关 LSS 的对话。 |
https://developer.nvidia.com/blog/nvidia-video-codec-sdk-13-0-powered-by-nvidia-blackwell/ | NVIDIA Video Codec SDK 13.0 Powered by NVIDIA Blackwell | The release of
NVIDIA Video Codec SDK 13.0
marks a significant upgrade, adding support for the latest-generation NVIDIA Blackwell GPUs. This version brings a wealth of improvements aimed at elevating both video encoding and decoding capabilities. From enhanced compression efficiency to better throughput and encoding quality, SDK 13.0 addresses the ever-evolving demands of the video ecosystem.
Here are some of the key features introduced in this update.
Encode features:
Improved compression efficiency:
Achieve better video quality at lower bitrates.
New YUV 4:2:2 encoding support (H.264 and HEVC):
Enables a broader range of use cases, particularly in professional video production and broadcasting.
Higher bit-depth encoding:
Introduces support for 10-bit encoding in H.264, and new hardware capabilities to encode 8-bit content as 10-bit.
Interlaced encoding (H.264):
Adds interlaced encoding support in H.264.
AV1 lookahead level and UHQ:
Introduces lookahead level and UHQ modes for AV1, for latency-tolerant use cases that require the highest possible video quality.
MV-HEVC support:
Supports two views to improve compression for stereoscopic content. For more information, see
Enabling Stereoscopic and 3D Views Using MV-HEVC in NVIDIA Video Codec SDK 13.0
.
Decode features:
4:2:2 decode support (H.264 and HEVC):
Expands decoding capabilities, which is especially valuable for professional video workflows.
Higher bit-depth H.264 decoding:
Introduces support for 10-bit decoding in H.264.
2x H.264 throughput on NVIDIA Blackwell:
Brings an impressive 2x throughput improvement in H.264 decoding compared to previous-generation GPUs.
H.264 8K support:
Now handles ultra-high-definition video decoding with ease, providing future-proof capabilities for 8K content.
Dynamic decode surface allocation:
Enables applications to minimize GPU memory usage.
Encode quality enhancements in NVIDIA Blackwell
Here’s more information about the key encode features in this update.
Improved compression efficiency
NVIDIA encoder (NVENC) hardware in NVIDIA Blackwell includes many enhancements for improving compression efficiency. These include improvement in motion estimation including enhanced subpixel search and better rate distortion optimization (RDO), among others for HEVC and AV1. These enhancements apply across presets and provide significant quality gain over ADA-generation GPUs.
4:2:2 chroma subsampling
In previous generations, NVENC supported formats including 4:2:0 and 4:4:4. YUV 4:4:4 retains full color, resulting in a larger file size and bandwidth requirements to transfer data. In 4:2:0, the full information is retained in the luminance channel but the chroma channel contains only 25% of the original color content.
To overcome the loss in color, NVIDIA Blackwell introduces 4:2:2 chroma. 4:2:2 retains 50% of the color information compared to 4:4:4 but reduces the required bandwidth to transfer video data.
4:2:2 is popular in the video editing and broadcasting eco-system due to higher color resolution compared to 4:2:0 but lower bandwidth requirements compared to 4:4:4.
As with YUV 4:2:0, semi-planar layouts are supported for 4:2:2 for 8-bit and 10-bit depths. The NVENCODE API introduces two new formats for YUV 4:2:2 semi-planar inputs:
NV_ENC_BUFFER_FORMAT_NV16
, for 8-bit YUV 4:2:2
NV_ENC_BUFFER_FORMAT_P210
, for 10-bit YUV 4:2:2
chromaFormatIdc =2
enables 4:2:2 encoding. As with 4:2:0 and 4:4:4, an application can also pass ARGB input with
chromaFormatIdc=2
to generate a 4:2:2 subsample encoded output. The color space conversion from ARGB to YUV 4:2:2 is done inside the encode driver using the CUDA kernel.
The compression efficiency for 4:2:2 input is similar to 4:2:0 and 4:4:4 formats.
Higher bit-depth encoding enhancement
NVIDIA Video Codec SDK 13.0 introduces 10-bit encoding support in H.264 on NVIDIA Blackwell GPUs. All the chroma subsampling formats—4:2:0, 4:2:2 and 4:4:4—support encoding 10-bit content.
In addition, NVIDIA Blackwell NVENC can encode 8-bit content as 10-bit for H.264 and HEVC, a feature already available for AV1 in ADA.
ADA and earlier GPUs continue to support this feature for HEVC, but unlike NVIDIA Blackwell, the input YUV is upscaled from 8- to 10-bit as a preprocessing step using CUDA.
This feature improves the coding efficiency due to higher precision in the encoding pipeline. This upgrade results in smoother gradations and more accurate color reproduction, ideal for high-quality video production. Many of the input-related calculations in the encoder are done in 10-bit instead of 8-bit. Applications can expect an improvement of around 3% in compression efficiency when using this feature without any significant impact on encoder performance.
Unlike AV1, 10-bit encoding is supported only on select profiles for H.264 and HEVC. Applications should only enable this feature if the decoder supports 10-bit profiles.
H.264 interlaced encoding
NVIDIA Blackwell GPUs bring back the support for encoding interlaced content, with improved flexibility for legacy video workflows for users working with traditional broadcast video formats.
Interlaced encoding
involves splitting a frame into two fields. The first field includes the odd lines of the image while the second field includes the even lines. These fields are transmitted sequentially at a rapid rate creating an illusion of a single frame. Field encoding is supported for YUV 4:2:0 and YUV 4:2:2, 8- and 10-bit content on H.264.
NVENCODE API supports both top field first and bottom field first layouts for interlaced content.
Lookahead level and UHQ
Lookahead level analyzes future frames and enables efficient allocation of bits to different blocks within a frame based on how much a given block is referenced in next frames. The statistics generated during lookahead encoding are used for complexity estimation in rate control.
Video Codec SDK 13.0 adds the support for the lookahead level in AV1 and introduces AV1 UHQ tuning info, which combines lookahead level and temporal filtering to provide the best quality and performance for various HQ latency-tolerant encoding presets. These features provide best-in-class visual quality, perfect for demanding video applications
In the UHQ tuning, the optimal settings for lookahead and temporal filtering are applied in combination rather than individually. As with UHQ HEVC, the number of B-frames is set to five, while using the middle B-frame as the reference. UHQ mode also disables adaptive I- and B-frames and uses a fixed GOP (group of pictures) structure.
NVIDIA Blackwell NVENC supports new encode stats enabling the lookahead algorithm to identify the referencing details across multiple references, resulting in much better quality and performance of the UHQ tuning info for both HEVC and AV1 than ADA.
For AV1, the UHQ tuning info in NVIDIA Blackwell has seven B-frames, instead of five, further enhancing the compression efficiency.
Figure 1 and 2 shows the bitrates saving for ADA HQ, ADA UHQ, NVIDIA Blackwell HQ and NVIDIA Blackwell UHQ for HEVC and AV1, respectively.
Figure 1. HEVC Bit-rate savings for HQ and UHQ tuning info with natural content
Figure 2. AV1 Bit-rate savings for HQ and UHQ tuning info with natural content
Enhanced video decoding capabilities
NVIDIA Blackwell GPUs bring significant advancements to the world of video decoding, particularly in H.264 and HEVC formats. These enhancements offer both feature set and performance improvements, setting new standards in the industry.
H.264 decoding enhancements
With NVIDIA Blackwell, the H.264 decoding capabilities have undergone major improvements. Some of the key features include the following:
4:2:0 10-bit support (exclude MBAFF):
This provides improved color depth, making it ideal for working in color-sensitive fields such as video production or gaming.
4:2:2 8/10-bit support (exclude MBAFF):
With support for both 8-bit and 10-bit chroma sampling, NVIDIA Blackwell ensures high-quality video playback, even in compressed formats, which provides better visual fidelity.
2x performance improvement:
Perhaps the most exciting update is the performance boost. NVIDIA Blackwell offers a 2x performance improvement compared to previous generations, which means smoother video playback and faster decoding even for the most demanding video files.
Resolution support up to 8192×8192:
Whether you’re working with ultra-high-definition video or cutting-edge 3D content, NVIDIA Blackwell has the ability to handle resolutions up to 8192×8192. This means that you can decode videos with greater clarity and detail.
These improvements ensure that NVIDIA Blackwell delivers top-tier video decoding, whether you’re working on high-resolution video projects or handling large-scale video processing tasks.
HEVC decoding for enhanced flexibility and speed
High-Efficiency Video Coding (HEVC) has become the go-to format for efficient video compression, and NVIDIA Blackwell takes it to the next level. The new enhancements include the following:
Support for 4:2:2 8/10-bit and 12-bit decoding:
NVIDIA Blackwell now offers a wider range of decoding options for HEVC, making it easier to process high-quality video with minimal loss of fidelity.
Performance improvements:
Thanks to improvements in the NVDCLK, you see noticeable performance boosts when decoding HEVC content. This translates into smoother playback and more efficient video rendering.
Figure 3. H.264 and HEVC Decode performance (fps)
The NVDECODE API introduces two new formats for 4:2:2 decode output:
cudaVideoSurfaceFormat_NV16=4, /**< Semi-Planar YUV 422 [Y plane followed by interleaved UV plane] */
cudaVideoSurfaceFormat_P216=5 /**< 16 bit Semi-Planar YUV 422[Y plane followed by interleaved UV plane]*/
Dynamic decode surface allocation for GPU memory efficiency
One of the standout features in Video Codec SDK 13.0 is the introduction of dynamic decode surface allocation. By adapting to varying video bitstreams, this capability reduces unnecessary memory consumption, leading to a higher number of decode sessions. This improvement is crucial for optimizing GPU memory usage in a few video-decoding use cases.
In some cases, the bitstream may use fewer reference frames than what the DPB size suggests, wasting valuable video memory. Allocating the max decode surface results in higher memory foot prints.
Comparing the new SDK
Before this release, video applications created the decoder object with a fixed minimum number of surfaces based on the DPB size. This approach, while functional, sometimes allocated memory more than necessary.
CUVIDDECODECREATEINFO stDecodeCreateInfo;
memset(&stDecodeCreateInfo, 0x0, sizeof(CUVIDDECODECREATEINFO ));
. . . // Setup the remaining structure members
stDecodeCreateInfo.ulNumDecodeSurfaces = <dpb_size>// Prior to SDK 13.0, this could not change
rResult = cuvidCreateDecoder(&hDecoder, &stDecodeCreateInfo)
With Video Codec SDK 13.0, you gain the flexibility to allocate extra YUV surfaces only when needed. You can create a decoder object with a smaller initial allocation of YUV surfaces (such as 3 or 4) and use the cuvidReconfigureDecoder API to allocate more surfaces dynamically as needed. This dynamic allocation reduces unnecessary memory consumption and enhances the overall efficiency of the decoding process.
Unlocking new possibilities with Video Codec SDK 13.0
NVIDIA Video Codec SDK 13.0 pushes the boundaries of video encoding and decoding with the latest NVIDIA Blackwell GPUs. Whether you’re creating content for broadcast, editing high-quality video, or working with the latest 8K footage, this update offers the tools you need to elevate your workflows.
With improvements in compression efficiency, support for new color formats, enhanced encoding quality, and more, Video Codec SDK 13.0 is designed to meet the growing demands of modern video applications.
In support of the Video Codec SDK 13.0 launch, NVIDIA partners Blackmagic, Capcut, and Wondershare have already integrated features such as 4:2:2 encode, 4:2:2 decode, AV1 UHQ, and split encoding in their video pipelines. | https://developer.nvidia.com/zh-cn/blog/nvidia-video-codec-sdk-13-0-powered-by-nvidia-blackwell/ | NVIDIA 视频编解码器 SDK 13.0 由 NVIDIA Blackwell 驱动 | NVIDIA Video Codec SDK 13.0
的发布标志着重大升级,增加了对新一代 NVIDIA Blackwell GPU 的支持。此版本带来了大量改进,旨在提升视频编码和解码功能。从增强的压缩效率到更好的吞吐量和编码质量,SDK 13.0 可满足视频生态系统不断变化的需求。
以下是本次更新中引入的一些主要功能。
编码功能:
提高压缩效率:
以更低的比特率实现更好的视频质量。
新的 YUV 4:2:2 编码支持 (H.264 和 HEVC):
支持更广泛的用例,特别是在专业视频制作和广播中。
更高的位深编码:
在 H.264 中引入对 10 位编码的支持,以及将 8 位内容编码为 10 位的新硬件功能。
交错编码(H.264):
在 H.264 中添加交错编码支持。
AV1 前瞻性级别和 UHQ:
为 AV1 引入前瞻性级别和 UHQ 模式,适用于需要尽可能高的视频质量的延迟容忍用例。
MV-HEVC 支持:
支持两个视图,以提高立体内容的压缩率。有关更多信息,请参阅在
NVIDIA Video Codec SDK 13.0 中使用 MV-HEVC 启用立体和 3D 视图
。
解码功能:
4:2:2 解码支持 (H.264 和 HEVC):
扩展解码功能,这对于专业视频工作流程尤其重要。
更高位深的 H.264 解码:
在 H.264 中引入对 10 位解码的支持。
NVIDIA Blackwell 上的 H.264 吞吐量提高了 2 倍:
与上一代 GPU 相比,H.264 解码的吞吐量提高了 2 倍,令人印象深刻。
支持 H.264 8K:
现在,可轻松处理超高清视频解码,为 8K 内容提供面向未来的功能。
动态解码表面分配:
支持应用更大限度地减少 GPU 显存占用。
在 NVIDIA Blackwell 中对质量增强进行编码
以下是本次更新中有关关键编码功能的详细信息。
提高压缩效率
NVIDIA Blackwell 中的 NVIDIA 编码器 (NVENC) 硬件包含许多用于提高压缩效率的增强功能。其中包括改进运动估计,包括增强的子像素搜索和更好的速率失真优化(RDO),以及 HEVC 和 AV1。这些增强功能适用于所有预设,与 ADA 代 GPU 相比,可显著提升质量。
4:2:2 色度二次采样
在前几代产品中,NVENC 支持的格式包括 4:2:0 和 4:4:4。YUV 4:4:4 保留全彩,导致传输数据的文件大小和带宽需求更大。在 4:2:0 模式下,全部信息将保留在亮度通道中,而色度通道仅包含原始颜色内容的 25%。
为克服色差问题,NVIDIA Blackwell 引入了 4:2:2 色度。与 4:4:4 相比,4:2:2 可保留 50%的颜色信息,但降低了传输视频数据所需的带宽。
4:2:2 在视频编辑和广播生态系统中很受欢迎,因为与 4:2:0 相比,4:2:2 的色彩分辨率更高,但与 4:4:4 相比,带宽要求更低。
与 YUV 4:2:0 一样,8 位和 10 位深度的半平面布局支持 4:2:2,NVENCODE API 为 YUV 4:2:2 半平面输入引入了两种新格式:
NV_ENC_BUFFER_FORMAT_NV16
,对于 8 位 YUV 4:2:2
NV_ENC_BUFFER_FORMAT_P210
,对于 10 位 YUV 4:2:2
chromaFormatIdc =2
支持 4:2:2 编码。与 4:2:0 和 4:4:4 一样,应用还可以通过
chromaFormatIdc=2
传递 ARGB 输入,以生成 4:2:2 的子样本编码输出。使用 CUDA 内核在编码驱动中完成从 ARGB 到 YUV 4:2:2 的色彩空间转换。
4:2:2 输入的压缩效率类似于 4:2:0 和 4:4:4 格式。
更高的位深度编码增强
NVIDIA 视频编解码器 SDK 13.0 在 NVIDIA Blackwell GPUs 上的 H.264 中引入 10 位编码支持。所有色度子采样格式—4:2:0、4:2:2 和 4:4:4—支持对 10 位内容进行编码。
此外,NVIDIA Blackwell NVENC 可以将 8 位内容编码为 H.264 和 HEVC 的 10 位内容,这项功能已在 ADA 中的 AV1 中提供。
ADA 和早期 GPU 继续支持 HEVC 的此功能,但与 NVIDIA Blackwell 不同的是,输入 YUV 从 8 位升级到 10 位,作为使用 CUDA 的预处理步骤。
由于编码管线的精度更高,此功能可提高编码效率。此次升级可实现更平滑的渐变和更准确的色彩再现,是高质量视频制作的理想之选。编码器中的许多与输入相关的计算均在 10 位 (而非 8 位) 下完成。应用程序的压缩效率有望提高约 3%,而不会对编码器性能产生任何重大影响。
与 AV1 不同,仅 H.264 和 HEVC 的特定配置文件支持 10 位编码。应用程序应仅在解码器支持 10 位配置文件的情况下启用此功能。
H.264 交错编码
NVIDIA Blackwell GPUs 恢复了对交错内容编码的支持,并为传统广播视频格式的用户提高了传统视频工作流程的灵活性。
交错编码
需要将一帧拆分成两个字段。第一个字段包含图像的奇数行,第二个字段包含偶数行。这些字段以快速的顺序传输,会产生单帧的错觉。H.264 上的 YUV 4:2:0 和 YUV 4:2:2、8- 和 10 位内容支持字段编码。
NVENCODE API 支持用于交错内容的“top field first”(顶部字段优先) 和“bottom field first”(底部字段优先) 布局。
前瞻性和 UHQ
前瞻性分析未来帧,并根据给定块在后续帧中的引用量,将位高效分配到帧中的不同块。前瞻性编码期间生成的统计数据用于速率控制的复杂性估计。
Video Codec SDK 13.0 在 AV1 中增加了对前瞻性级别的支持,并引入了 AV1 UHQ 调优信息,该信息结合了前瞻性级别和时间性过滤,可为各种 HQ 延迟容忍编码预设提供出色的质量和性能。这些功能可提供出色的视觉质量,非常适合要求严苛的视频应用
在 UHQ 调优中,前瞻性和时间过滤的最佳设置是组合应用的,而不是单独应用的。与 UHQ HEVC 一样,B 帧的数量设置为 5,同时使用中间的 B 帧作为参考。UHQ 模式还会禁用自适应 I 帧和 B 帧,并使用固定的 GOP(group of pictures)结构。
NVIDIA Blackwell NVENC 支持新的编码统计信息,使前瞻性算法能够识别多个引用的引用详细信息,从而使 HEVC 和 AV1 的 UHQ 调整信息的质量和性能比 ADA 高得多。
对于 AV1,NVIDIA Blackwell 中的 UHQ 调整信息有 7 个 B 帧,而不是 5 个 B 帧,这进一步提高了压缩效率。
图 1 和 2 分别显示了 ADA HQ、ADA UHQ、NVIDIA Blackwell HQ 和 NVIDIA Blackwell UHQ 在 HEVC 和 AV1 上节省的比特率。
图 1、使用自然内容为 HQ 和 UHQ 调优信息节省 HEVC 比特率
图 2、使用自然内容,为 HQ 和 UHQ 调优信息节省 AV1 码率
增强的视频解码功能
NVIDIA Blackwell GPUs 为视频解码领域带来了巨大进步,尤其是 H.264 和 HEVC 格式的视频解码。这些增强功能同时提供功能集和性能改进,为行业树立了新的标杆。
H.264 解码增强功能
借助 NVIDIA Blackwell,H.264 解码功能得到了重大改进。其中一些主要特性包括:
4:2:0 10 位支持 (MBAFF 除外):这可改善色彩深度,非常适合视频制作或游戏等色彩敏感领域的工作。
4:2:2 8/10 位支持 (MBAFF 除外):NVIDIA Blackwell 支持 8 位和 10 位色度采样,可确保高质量的视频播放 (即使是压缩格式),从而提供更好的视觉保真度。
2 倍性能提升:最令人兴奋的更新或许是性能提升。与前几代产品相比,NVIDIA Blackwell 的性能提升了 2 倍,这意味着视频播放更流畅,解码速度更快,即使是对于要求严苛的视频文件也是如此。
支持分辨率高达 8192 × 8192:
无论您是处理超高清视频还是处理先进的 3D 内容,NVIDIA Blackwell 都能处理高达 8192 × 8192 的分辨率。这意味着您可以更清晰、更精细地解码视频。
无论您是处理高分辨率视频项目,还是处理大规模视频处理任务,这些改进都能确保 NVIDIA Blackwell 提供出色的视频解码功能。
HEVC 解码可提高灵活性和速度
高效视频编码 (HEVC) 已成为高效视频压缩的首选格式,而 NVIDIA Blackwell 将其提升至更高水平。新的增强功能包括:
支持 4:2:2 8/10 位和 12 位解码:
NVIDIA Blackwell 现在为 HEVC 提供了更广泛的解码选项,可更轻松地处理高质量视频,并将保真度损失降至最低。
性能提升
:得益于 NVDCLK 的改进,您可以在解码 HEVC 内容时看到明显的性能提升。这转化为更流畅的播放和更高效的视频渲染。
图 3、H.264 和 HEVC 解码性能 (fps)
NVDECODE API 为 4:2:2 解码输出引入了两种新格式:
cudaVideoSurfaceFormat_NV16=4, /**< Semi-Planar YUV 422 [Y plane followed by interleaved UV plane] */
cudaVideoSurfaceFormat_P216=5 /**< 16 bit Semi-Planar YUV 422[Y plane followed by interleaved UV plane]*/
动态解码表面分配,提高 GPU 显存效率
Video Codec SDK 13.0 的一个突出特性是引入了动态解码表面分配。通过适应不同的视频位流,此功能可减少不必要的内存消耗,从而增加解码会话的数量。这一改进对于在一些视频解码用例中优化 GPU 内存使用至关重要。
在某些情况下,bitstream 使用的参考帧可能少于 DPB 大小,因此会浪费宝贵的视频内存。分配最大解码表面会产生更高的显存占用空间。
比较新的 SDK
在此版本发布之前,视频应用程序根据 DPB 大小创建了具有固定最小表面数量的解码器对象。这种方法虽然实用,但有时分配的内存超过了需要。
CUVIDDECODECREATEINFO stDecodeCreateInfo;
memset(&stDecodeCreateInfo, 0x0, sizeof(CUVIDDECODECREATEINFO ));
. . . // Setup the remaining structure members
stDecodeCreateInfo.ulNumDecodeSurfaces = <dpb_size>// Prior to SDK 13.0, this could not change
rResult = cuvidCreateDecoder(&hDecoder, &stDecodeCreateInfo)
借助 Video Codec SDK 13.0,您可以灵活地仅在需要时分配额外的 YUV 表面。您可以创建具有较小 YUV 表面初始分配 (例如 3 或 4) 的解码器对象,并使用 cuvidReconfigureDecoder API 根据需要动态分配更多表面。这种动态分配可减少不必要的内存消耗,并提高解码过程的整体效率。
借助 Video Codec SDK 13.0 解锁新的可能性
NVIDIA Video Codec SDK 13.0 借助最新的 NVIDIA Blackwell GPU 突破视频编码和解码的极限。无论是创作广播内容、编辑高质量视频,还是处理最新的 8K 视频片段,此更新都提供了改进工作流程所需的工具。
Video Codec SDK 13.0 的压缩效率有所提升,支持新的颜色格式,编码质量有所提升,可满足现代视频应用日益增长的需求。
为支持 Video Codec SDK 13.0 的发布,NVIDIA 合作伙伴 Blackmagic、Capcut 和 Wondershare 已在其视频工作流中集成 4:2:2 编码、4:2:2 解码、AV1 UHQ 和分割编码等功能。 |
https://developer.nvidia.com/blog/enabling-stereoscopic-and-3d-views-using-mv-hevc-in-nvidia-video-codec-sdk-13-0/ | Enabling Stereoscopic and 3D Views Using MV-HEVC in NVIDIA Video Codec SDK 13.0 | NVIDIA announces the implementation of Multi-View High Efficiency Video Coding (MV-HEVC) encoder in the latest NVIDIA Video Codec SDK release, version 13.0. This significant update marks a major leap forward in hardware-accelerated, multi-view video compression. It offers enhanced compression efficiency and quality for stereoscopic and 3D video applications as compared to simulcast encoding.
Understanding MV-HEVC
MV-HEVC is an extension of the High Efficiency Video Coding (HEVC) standard, designed to efficiently compress multiple video views of the same scene captured from different vantage points. It addresses the limitations of traditional video-coding methods such as simulcast encoding, which often resulted in high bitrates for multi-view content and lacked efficient inter-view prediction.
Figure 1. MV-HEVC for 3D content
Key features in NVIDIA Video Codec SDK 13.0 MV-HEVC
The MV-HEVC implementation in the Video Codek SDK 13.0 includes the following features:
Support for two views
8-bit and 10-bit 4:2:0 color sampling
Lookahead functionality
Delta-QP support
Separate bitrate allocation for secondary views
These features provide you with flexible options for encoding multi-view content while maintaining high quality and efficiency.
Technical implementation
The MV-HEVC implementation uses advanced techniques to achieve superior compression:
Inter-view prediction
: Reuses data from different views for maximum compression.
Coding tools
: Employs motion-compensated prediction and disparity estimation.
Scalability
: Implements a base layer for backward compatibility and enhancement layers for improved quality.
Bitrate allocation
: Implements separate allocation for secondary views, allowing application developers to experiment and fine-tune quality.
For more information about implementation, see the
NVIDIA Video Codec SDK documentation
.
Quality gains
The addition of MV-HEVC support has resulted in significant quality improvements, with bitrate savings of up to 27% as compared to simulcast encoding of two views.
Figure 2. Shark 1920×1088 600 frames (Camera 1 and Camera 5) 30fps Preset P7
Figure 3 shows the VMAF bit rate savings for the same clip.
Figure 3. Shark 1920×1088 600 frames (Camera 1 and Camera 5) 30fps Preset P7
Real-world applications
MV-HEVC is particularly useful for the following applications:
3D movies
: Enhancing the viewing experience with efficient bit distribution between left– and right-eye views.
AR/VR
: Creating immersive environments with reduced data requirements.
Spatial video
: Supporting new technologies for capturing and viewing immersive 3D videos using advanced smartphones and specialized headsets.
Auto:
Process and compress multi-camera views for tasks such as objection detection and tracking.
The \Samples\AppEncode\AppEncCuda sample application in the
Video Codec SDK
now supports MV-HEVC encoding, providing you with a powerful tool for implementing multi-view video compression. For more information about implementation details for MV-HEVC support in AppEncCuda, see the
NVIDIA Video Codec SDK documentation
.
Backward compatibility
The NVIDIA implementation ensures backward compatibility, supporting two views with identical GOP structures for efficient random access and seeking. This approach lays the foundation for future expansion, as it can be readily adapted to accommodate more views in more complex multi-view applications. By maintaining consistent GOP structures across both views, we optimize synchronization and facilitate seamless navigation within the video content.
Conclusion
The addition of MV-HEVC support in NVIDIA VIDEO CODEC SDK 13.0 marks a significant milestone in multi-view video coding capabilities. With impressive BD-Rate gains and no performance overhead, this update provides you with a powerful tool for creating high-quality stereoscopic and 3D video experiences.
We encourage you to explore this new feature and integrate MV-HEVC encoding into your applications to take advantage of the improved compression efficiency and quality. | https://developer.nvidia.com/zh-cn/blog/enabling-stereoscopic-and-3d-views-using-mv-hevc-in-nvidia-video-codec-sdk-13-0/ | 在 NVIDIA 视频编解码器 SDK 13.0 中使用 MV-HEVC 启用立体和 3D 视图 | NVIDIA 宣布在最新的 NVIDIA 视频编解码器 SDK 版本 13.0 中实施多视图高效视频编码 (MV-HEVC) 编码器。这一重要更新标志着硬件加速的多视图视频压缩技术实现了重大飞跃。与 simulcast 编码相比,它为立体和 3D 视频应用提供了更高的压缩效率和质量。
了解 MV-HEVC
MV-HEVC 是高效视频编码 (HEVC) 标准的扩展,旨在高效压缩从不同有利位置拍摄的同一场景的多个视频视图。它解决了传统视频编码方法 (如 simulcast 编码) 的局限性,这些方法通常会导致多视图内容的高比特率,并且缺乏高效的视图间预测。
图 1、适用于 3D 内容的 MV-HEVC
NVIDIA 视频编解码器 SDK 13.0 MV-HEVC 的主要特性
Video Codek SDK 13.0 中的 MV-HEVC 实现包括以下功能:
支持两种视图
8 位和 10 位 4:2:0 颜色采样
前瞻性功能
Delta-QP 支持
为辅助视图单独分配比特率
这些功能为您提供灵活的多视图内容编码选项,同时保持高质量和高效率。
技术实施
MV-HEVC 实现使用高级技术实现出色的压缩:
视图间预测:
重复使用来自不同视图的数据,以实现最大压缩。
编码工具:
采用运动补偿预测和差异估计。
可扩展性
:实现用于向后兼容性的基础层,以及用于提高质量的增强层。
比特率分配:
实现对辅助视图的单独分配,允许应用程序开发者实验和微调质量。
有关实施的更多信息,请参阅
NVIDIA Video Codec SDK 文档
。
质量提升
添加的 MV-HEVC 支持显著提高了质量,与两个视图的同步广播编码相比,可节省高达 27% 的比特率。
图 2、Shark 1920 × 1088 600 帧 (Camera 1 和 Camera 5) 30fps 预设 P7
图 3 显示了同一片段节省的 VMAF 比特率。
图 3、Shark 1920 × 1088 600 帧 (Camera 1 和 Camera 5) 30fps 预设 P7
真实应用
MV-HEVC 尤其适用于以下应用:
3D 电影
:通过在左眼和右眼视图之间进行高效位分布来增强观看体验。
AR/VR:创建具有更低数据要求的沉浸式环境。
空间视频:支持使用先进的智能手机和专用头显设备捕捉和观看沉浸式 3D 视频的新技术。
自动:处理和压缩多摄像头视图,以执行物体检测和追踪等任务。
Video Codec SDK
中的 Samples\AppEncode\AppEncCuda 示例应用现在支持 MV-HEVC 编码,为您提供用于实现多视图视频压缩的强大工具。如需详细了解 AppEncCuda 中 MV-HEVC 支持的实现详情,请参阅
NVIDIA Video Codec SDK 文档
。
向后兼容性
NVIDIA 实施可确保向后兼容性,支持具有相同 GOP 结构的两个视图,以实现高效的随机访问和查找。这种方法为未来的扩展奠定了基础,因为它可以轻松进行调整,以在更复杂的多视图应用中适应更多视图。通过在两个视图中保持一致的 GOP 结构,我们优化了同步,并促进了视频内容内的无缝导航。
结束语
NVIDIA VIDEO CODEC SDK 13.0 中新增的 MV-HEVC 支持标志着多视图视频编码功能的重要里程碑。带来了令人惊叹的 BD-Rate 获益且无性能开销,为您提供了一款用于创建高质量立体和 3D 视频体验的强大工具。
我们建议您探索这项新功能,并将 MV-HEVC 编码集成到您的应用中,以利用改进后的压缩效率和质量。 |
https://developer.nvidia.com/blog/new-ai-model-offers-cellular-level-view-of-cancerous-tumor/ | New AI Model Offers Cellular-Level View of Cancerous Tumors | Researchers studying cancer unveiled a new AI model that provides cellular-level mapping and visualizations of cancer cells, which scientists hope can shed light on how—and why—certain inter-cellular relationships triggers cancers to grow.
BioTuring
, a San Diego-based startup,
announced
an AI model that can quickly create detailed visualizations of cancerous tumors—at single-cell resolution. This type of granular data shows a cell’s size, shape, which genes are activated, and, crucially, the relative spatial position of millions of different cells within a tissue sample.
Unlike traditional, far slower biological computation methods, the new model provides real-time, high-resolution insights into tumor dynamics and how cancerous and immune cells interact.
“There are around 30 trillion cells in the human body, and if you look at a large tumor biopsy, it has a few million cells,” said Son Pham, BioTuring’s CEO. “The analogy would be—imagine you’re analyzing satellite imagery with a very high resolution, trying to understand how a city works. What our model does, in the context of biology, is show you each house, what’s inside those houses, who’s talking to whom, and what they’re saying.”
“Similarly, our models let you see which cells are talking to which other cells, which groups are forming and talking amongst themselves, and what kind of relationships they’re forming—which can answer some of the most complex challenges in clinical oncological research.”
BioTuring, a member of the
NVIDIA Inception
program for startups, is pursuing its research in single-cell spatial omics, a subfield of biology which examines biological molecules—like messenger RNA and proteins—in their original spatial context in tissue.
Video 1. Demo of BioTuring’s SpatialX deep learning platform for unified multi-technology spatial data analysis.
To create its high-resolution mapping, or “disease cell atlases”, including of ovarian cancer cells, the team used
NVIDIA H100 Tensor Core GPUs
, and
NVIDIA cuBLAS
and
NVIDIA cuSPARSE
libararies to accelerate matrix operations in optimized analyses like the more traditional machine learning algorithm, the
Weighted Gene Co-expression Network Analysis
and CellChat.
Knowing how cancer cells develop and metastasize within a human body—and specifically, within a microenvironment within an organ—could improve screening methods for early cancer detection. Additionally, researchers can use the model’s cellular insights to better understand tumor heterogeneity—or cancerous tumors in the same patient with cells that materially differ from one another.
The new model’s enhanced visual granularity means researchers and drug developers have a far better chance at discovering molecular markers that can more accurately target cancerous cells.
The model can see, for instance, how a person’s killer T cells—humans’ disease-fighting cells—can change shape to engage cancers. By knowing how a person’s immune system morphs to fight a specific cancer, a drug developer could create synthetic therapies that support the patient’s immune system.
“We’re helping uncover biological discoveries that researchers can use to drive therapeutic strategies,” said Rujuta Narurkar, BioTuring’s COO. “Understanding the tumor’s microenvironment through various stages will help map the trajectory of cancer and potentially reveal the source of cancer itself. This new level of cancer tissue resolution has never been possible before. But new technologies are now bringing it within reach.” | https://developer.nvidia.com/zh-cn/blog/new-ai-model-offers-cellular-level-view-of-cancerous-tumor/ | 新的 AI 模型提供癌症细胞级视图 | 研究癌症的研究人员推出了一种新的 AI 模型,该模型提供了癌细胞的细胞级映射和可视化,科学家希望该模型能够揭示某些细胞间关系如何以及为什么会引发癌症生长。
位于圣地亚哥的初创公司 BioTuring
宣布推出
一种 AI 模型,该模型可以以单细胞分辨率快速创建癌症肿瘤的详细可视化图像。这种类型的颗粒数据显示了细胞的大小、形状、哪些基因被激活,关键是显示了组织样本中数百万个不同细胞的相对空间位置。
与速度慢得多的传统生物计算方法不同,新模型提供了有关肿瘤动力学以及癌细胞和免疫细胞相互作用的实时高分辨率见解。
BioTuring 首席执行官 Son Pham 表示:“人体大约有 30 万亿个细胞,如果进行大规模的肿瘤活检,就会发现有几百万个细胞。“类比是 – 想象一下,您正在以非常高分辨率分析卫星图像,试图了解城市的运作方式。在生物学领域,我们的模型所做的是向您展示每栋房子、这些房子里的东西、谁在和谁说话,以及他们在说什么。”
“同样,我们的模型可让您了解哪些细胞正在与哪些其他细胞对话,哪些细胞群正在形成并相互交流,以及它们正在形成何种关系,从而应对临床肿瘤研究中一些非常复杂的挑战。”
BioTuring 是
NVIDIA Inception
计划的成员,正在开展单细胞空间组学的研究。单细胞空间组学是生物学的一个分支领域,用于在组织中的原始空间环境中检查生物分子 (例如信使 RNA 和蛋白质)。
1
BioTuring SpatialX
为了创建高分辨率映射 (或“疾病细胞图谱”) (包括卵巢癌细胞),该团队使用
NVIDIA H100 Tensor Core GPUs
,以及
NVIDIA cuBLAS
和
NVIDIA cuSPARSE
库来加速优化分析中的矩阵运算,例如更传统的机器学习算法、
Weighted Gene Co-expression Network Analysis
和 CellChat。
了解癌细胞在人体内部(尤其是在器官内的微环境中)的发展和转移情况,可以改进癌症早期检测的筛查方法。此外,研究人员可以利用该模型的细胞见解来更好地了解肿瘤异构性,或了解同一患者的恶性肿瘤,其细胞之间存在重大差异。
新模型增强的视觉粒度意味着研究人员和药物开发者有更大的机会发现能够更准确地向癌细胞的分子标记物。
例如,该模型可以看到一个人的杀手 T 细胞(即人类的抗病细胞)如何改变形状以致癌。通过了解人体的免疫系统如何演变以对抗特定癌症,药物开发者可以开发出支持患者免疫系统的合成疗法。
BioTuring 的首席运营官 Rujuta Narurkar 说:“我们正在帮助发现生物学发现,研究人员可以利用这些发现来推动治疗策略。“通过各个阶段了解肿瘤的微环境有助于绘制出癌症的轨迹,并可能揭示癌症本身的来源。这种更高水平的癌症组织分辨率在以前是不可能实现的。但现在,新技术使其触手可及。” |
https://developer.nvidia.com/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/ | AI for Climate, Energy, and Ecosystem Resilience at NVIDIA GTC 2025 | From mitigating climate change to improving disaster response and environmental monitoring, AI is reshaping how we tackle critical global challenges. Advancements in fast, high-resolution climate forecasting, real-time monitoring, and digital twins are equipping scientists, policy-makers, and industry leaders with data-driven tools to understand, plan for, and respond to a warming planet.
At
NVIDIA GTC 2025
, running March 17–21, thought leaders, scientists, developers, and innovators will highlight how AI is helping shape a more sustainable and resilient future. The following sessions showcase AI’s role in climate forecasting, disaster mitigation, and conservation efforts, helping communities adapt to an increasingly unpredictable world.
Strengthening extreme weather predictions and disaster response
As global warming intensifies, and extreme weather events become more severe and frequent, communities need faster and more precise natural disaster predictions and response strategies. AI is improving flood, wildfire, and hurricane modeling, enabling earlier warnings and more effective mitigation efforts. Using large-scale weather simulations, satellite data analysis, and real-time predictive insights, AI is helping emergency responders and decision-makers minimize damage, improve disaster resilience, and keep communities safe.
Sessions
Transform Natural Catastrophe Risk Simulations with Advanced Computational Tools
AXA, AWS, and NVIDIA use Earth-2 simulations to model extreme weather events at unprecedented scale and precision. These tools help insurers, urban planners, and governments prepare for disasters by improving risk estimation and resilience planning, reducing the financial and societal impact of natural disasters.
Boosting Earth System Model Outputs Using Exascale Climate Emulators
Researchers at KAUST and Saint Louis University use exascale climate emulators powered by NVIDIA GPUs to accelerate and refine earth system model outputs. Achieving ultra-high spatial resolution (3.5 km), these models enable more accurate storm and climate simulations, improving extreme weather predictions, and helping emergency responders, insurers, and policymakers improve disaster response planning and climate resilience efforts.
Harnessing AI for Advanced Flood Risk Modelling and Mitigation Strategies
JBA Risk Management uses AI-driven weather models, including Spherical Fourier Neural Operators, to simulate storm seasons with greater accuracy. Using NVIDIA NIM, these models improve flood risk assessment, disaster response planning, and infrastructure investment decisions, all while reducing computational costs.
Near-Real-Time Fire Detection Leveraging Edge AI in Space: Transforming Thermal Earth Observation with NVIDIA
Wildfires require rapid response to minimize devastation. OroraTech’s use of NVIDIA Jetson technology onboard CubeSats delivers fire detection within 60 seconds, providing first responders with critical real-time data to deploy resources more effectively.
Autonomous Systems and Remote Sensing for Better Earth Data
Autonomous aircraft equipped with AI are revolutionizing environmental monitoring by collecting high-resolution data at scale. MIT researchers showcase how these low-cost, long-range systems gather critical data for precision agriculture, disaster response, and environmental assessments, providing actionable insights at scale.
Boosting climate forecasting for energy and infrastructure planning
Accurate, long-term climate forecasting is essential for guiding infrastructure investments, optimizing energy grids, and enhancing sustainability planning. AI-powered models make climate projections faster and more precise, guiding solar energy integration, climate-resilient infrastructure development, and sustainability strategies. These high-resolution, data-driven insights help city planners and decision-makers anticipate evolving conditions for a more resilient future.
Sessions
The Role of AI and Accelerated Computing in Understanding and Mitigating Urban Climate Change
Researchers are using AI, digital twins, and accelerated computing to model rising temperatures, air pollution, and extreme weather in cities. This session explores how generative AI and machine learning analyze satellite data, IoT sensors, and social insights to create high-resolution simulations of urban heat islands and pollution patterns. Experts will discuss how these insights help guide climate-resilient infrastructure, energy efficiency, and targeted adaptation strategies while addressing challenges like computational efficiency and equitable access to AI-driven solutions.
Enhancing Photovoltaic Power Predictions with High-Resolution Weather Forecasting from NVIDIA Earth-2
NVIDIA Earth-2 is revolutionizing solar energy forecasting with ultra-precise, AI-driven weather predictions. In collaboration with GCL and Peking University, researchers show how time series models and high-resolution weather data improve photovoltaic power forecasts, enhancing grid stability, and improving renewable energy planning for power providers and policymakers.
Applying AI Weather Models with NVIDIA Earth-2
NVIDIA Earth-2 AI-powered forecasting models generate high-resolution weather predictions at a fraction of the cost and time of traditional numerical models. This training lab explores how AI-driven downscaling techniques improve forecasting accuracy for industries such as energy and agriculture, providing more accurate forecasting and better decision-making across critical sectors.
Advancing AI-driven environmental monitoring and conservation
AI is transforming environmental monitoring, conservation, and ecosystem management with advanced digital twin technology and autonomous systems. From high-resolution coral reef modeling to large-scale ecosystem assessments, these innovations provide scientists and conservationists with critical insights to guide conservation strategies and protect biodiversity.
Session
Exploring Earth’s Oceans: Using Digital Twins to Drive Digital Ocean Collaboration
Oceans regulate climate and support biodiversity, but their complexity makes them challenging to study. MITRE uses NVIDIA Omniverse to create digital twins of ocean systems, enabling real-time simulations and predictive modeling. These tools foster collaboration among scientists, policymakers, and educators to improve marine resource management, drive conservation efforts, and bolster climate resilience.
In-person posters
Photo-Realistic 3D Digital Twin to Enhance Understanding of the Great Barrier Reef
AI-powered 3D digital twins are advancing how researchers model and monitor coral reef ecosystems. Using Reef-NeRF and Reef-3DGS, scientists can create highly detailed reconstructions to track coral health, measure structural changes, and assess the impacts of climate change. These tools provide conservationists and policymakers with critical data to inform reef recovery strategies and improve long-term conservation efforts
Mangrove Simulation Predicts Carbon Sequestration Solutions
Mangrove forests are a key solution to carbon capture and climate mitigation, but effective restoration requires precise monitoring and management. ID Water Co., Ltd. is using AI-powered irrigation automation and GPU-driven carbon sink modeling to improve mangrove reforestation efforts. These models improve survival rates, optimize carbon sequestration, and address verification challenges, making large-scale restoration more feasible and impactful.
Revolutionizing Antarctic Flora Monitoring with AI and Drones
AI-powered drones and hyperspectral imaging are enabling high-precision mapping of Antarctic vegetation. Using NVIDIA GPUs, researchers can detect moss and lichen with over 99% accuracy, providing key insights into climate-driven ecosystem changes while reducing the need for invasive field surveys in this fragile ecosystem.
Join our global community of developers, scientists, business leaders, and innovators at NVIDIA GTC 2025 to discover how AI drives solutions to our most complex challenges.
From NVIDIA CEO
Jensen Huang’s must-see keynote
to over 900 sessions, 300+ exhibits, hands-on technical training, and exclusive networking events, GTC offers a firsthand look at AI’s real-world impact.
The
session catalog
is open—start building your agenda today. | https://developer.nvidia.com/zh-cn/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/ | NVIDIA GTC 2025 上的人工智能促进气候、能源和生态系统复原力 | 从减缓气候变化到改进灾害响应和环境监测,AI 正在重塑我们应对重大全球挑战的方式。快速、高分辨率的气候预报、实时监控和数字孪生技术的进步为科学家、政策制定者和行业领导者提供了数据驱动的工具,帮助他们了解、规划和应对一个变暖的星球。
在 3 月 17 日至 21 日举行的
NVIDIA GTC 2025
大会上,思想领袖、科学家、开发者和创新者将重点介绍 AI 如何帮助塑造更具可持续性和韧性的未来。以下会议展示了 AI 在气候预测、灾难缓解和保护工作中发挥的作用,帮助社区适应日益不可预测的世界。
加强极端天气预测和灾害响应
随着全球变暖加剧,极端天气事件变得更加严重和频繁,社区需要更快、更精确的自然灾害预测和响应策略。AI 正在改进洪水、野火和飓风建模,从而实现更早的警报和更有效的缓解措施。借助大规模天气模拟、卫星数据分析和实时预测性见解,AI 正在帮助应急响应人员和决策者尽可能减少损失、提高抗灾能力,并确保社区安全。
会议
借助高级计算工具转变自然灾害风险模拟 AXA、AWS 和 NVIDIA 使用 Earth-2 模拟以前所未有的规模和精度对极端天气事件进行建模。这些工具通过改进风险估计和恢复能力规划,减少自然灾害的金融和社会影响,帮助保险公司、城市规划人员和政府做好灾害准备。
使用百亿亿级 (Exascale) 气候模拟器提升地球系统模型的输出
KAUST 和圣路易斯大学的研究人员使用由 NVIDIA GPUs 提供支持的百亿亿级 (Exascale) 气候模拟器来加速和优化地球系统模型的输出。这些模型可实现超高的空间分辨率 (3.5 公里),从而能够更准确地模拟风暴和气候,改进极端天气预测,并帮助应急响应人员、保险公司和政策制定者改进灾害应对规划和气候弹性工作。
将 AI 用于高级洪水风险建模和缓解策略
JBA Risk Management 使用 AI 驱动的天气模型 (包括 Spherical Fourier Neural Operators) 更准确地模拟风暴季。借助 NVIDIA NIM,这些模型可改善洪水风险评估、灾害应对规划和基础设施投资决策,同时降低计算成本。
在太空中利用边缘 AI 进行近乎实时的火灾检测:借助 NVIDIA 改变热地球观测方式
野火需要快速响应,以尽可能减少破坏。OroraTech 在 CubeSats 上使用 NVIDIA Jetson 技术,可在 60 秒内完成火灾检测,从而为急救人员提供关键的实时数据,以便更有效地部署资源。
利用自主系统和遥感获取更好的地球数据
配备 AI 的自主飞机正在大规模收集高分辨率数据,从而彻底改变环境监测。麻省理工学院的研究人员展示了这些低成本的远程系统如何为精准农业、灾害响应和环境评估收集关键数据,并大规模提供可行的见解。
提升气候预测能力以加强能源和基础设施规划
准确的长期气候预测对于指导基础设施投资、优化电网和增强可持续发展规划至关重要。AI 驱动的模型能够更快、更精确地进行气候预测,为太阳能集成、气候弹性基础设施开发和可持续发展策略提供指导。这些由数据驱动的高分辨率见解可帮助城市规划人员和决策者预测不断变化的条件,打造更具弹性的未来。
会议
AI 和加速计算在了解和减缓城市气候变化方面的作用
研究人员正在利用 AI、数字孪生和加速计算对城市中的气温升高、空气污染和极端天气进行建模。此会议将探讨生成式 AI 和机器学习如何分析卫星数据、物联网传感器和社会见解,以创建城市热岛和污染模式的高分辨率模拟。专家们将讨论这些见解如何帮助指导适应气候变化的基础设施、能效和有针对性的适应战略,同时应对计算效率和公平获取 AI 驱动的解决方案等挑战。
借助 NVIDIA Earth-2 的高分辨率天气预报增强太阳能发电预测
NVIDIA Earth-2 通过 AI 驱动的超精确天气预测,正在彻底改变太阳能预测。研究人员与 GCL 和北京大学合作,展示了时间序列模型和高分辨率天气数据如何改善太阳能发电预测、增强电网稳定性,以及如何改善电力供应商和政策制定者的可再生能源规划。
将 AI 天气模型与 NVIDIA Earth-2AI 驱动的预测模型结合使用
,生成高分辨率天气预测,所需的成本和时间远低于传统数值模型。此训练实验室将探讨 AI 驱动的降比例技术如何提高能源和农业等行业的预测准确性,从而为关键领域提供更准确的预测和更好的决策。
推进 AI 驱动的环境监测和保护
AI 正在利用先进的数字孪生技术和自主系统,改变环境监测、保护和生态系统管理。从高分辨率珊瑚礁建模到大规模生态系统评估,这些创新为科学家和自然保护主义者提供了重要见解,以指导保护策略和保护生物多样性。
会议
探索地球的海洋:使用数字孪生推动数字海洋协作海洋调节气候并支持生物多样性
,但其复杂性使研究这些海洋具有挑战性。MITRE 使用 NVIDIA Omniverse 创建海洋系统的数字孪生,实现实时模拟和预测建模。这些工具促进了科学家、政策制定者和教育工作者之间的协作,以改善海洋资源管理、推动保护工作,并增强气候恢复能力。
线下海报
逼真的 3D 数字孪生增强对大堡礁的理解 AI 驱动的 3D 数字孪生正在推进研究人员建模和监测珊瑚礁生态系统的方式。借助 Reef-NeRF 和 Reef-3DGS,科学家可以创建高度精细的重建模型,以追踪珊瑚健康状况、测量结构变化并评估气候变化的影响。这些工具为环保人士和政策制定者提供关键数据,以便制定珊瑚礁恢复策略并改进长期保护工作
Mangrove Simulation 预测碳封存解决方案
红树林是碳捕获和气候减缓的关键解决方案,但有效的恢复需要精确的监控和管理。ID Water Co.,Ltd.正在使用由 AI 提供动力支持的喷洒自动化和 GPU 驱动的碳汇建模来改进红树林再造工作。这些模型可提高存活率、优化碳封存并解决验证难题,从而提高大规模修复的可行性和成效。
借助 AI 和无人机革新南极植物监测
AI 赋能的无人机和高光谱成像技术可实现对南极植被的高精度绘图。借助 NVIDIA GPU,研究人员可以以超过 99%的准确率检测和,从而对气候驱动的生态系统变化提供关键见解,同时减少在这个脆弱的生态系统中进行侵入性实地调查的需求。
在 NVIDIA GTC 2025 大会上,加入由开发者、科学家、业务领袖和创新者组成的全球社区,了解 AI 如何为我们面临的复杂挑战提供解决方案。
从 NVIDIA 首席执行官
Jensen Huang 不容错过的主题演讲
,到 900 多场会议、300 多场展览、实操技术培训和独家交流活动,GTC 让您亲身体验 AI 对现实世界的影响。
会议目录
现已开放,请立即开始构建您的议程。 |
https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/ | Automating GPU Kernel Generation with DeepSeek-R1 and Inference Time Scaling | As AI models extend their capabilities to solve more sophisticated challenges, a new
scaling law
known as
test-time scaling
or
inference-time scaling
is emerging. Also known as
AI reasoning
or
long-thinking
, this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then selecting the best one, neural network. This enables AI to strategize and systematically solve complex problems in a similar fashion to how humans dissect complex problems and solve them individually to arrive at a final solution.
In this post, we talk about an experiment done by NVIDIA engineers who used one of the newest open-source models, the DeepSeek-R1 model, together with additional computing power during inference to solve a complex problem. The experiment was to automatically generate GPU attention kernels that were numerically correct and optimized for different flavors of attention without any explicit programming.
The results turned out to be better than the optimized kernels developed by skilled engineers in some cases.
The need for optimized attention kernels and associated challenges
Attention is a key concept that revolutionized the development of the large language model (LLM). It’s a powerful mechanism that enables AI models to focus selectively on the most relevant parts of input when performing tasks. By focusing on important information, the attention operation helps the models make better predictions and find hidden patterns in the data.
The computational complexity of the attention operation grows quadratically in relation to the input sequence length. This motivates the need for developing an optimized lower-level implementation (that is, a GPU kernel) to prevent runtime errors arising from simple implementations (for example, out-of-memory errors) and for computational efficiency purposes.
There are multiple variants of attention (causal, relative positional embeddings, alibi, and so on) and often engineers must use a combination of these variants for a given task.
Multi-modal models (for example, vision transformers) introduce an additional layer of challenges as they require specialized attention mechanisms (Spatial Neighborhood Attention) for maintaining spatio-temporal information often encountered in computer vision, video generation models, and so on.
Figure 1. Neighborhood attention on 2D inputs
Creating an optimized GPU kernel for attention takes a lot of skill and time, even for experienced software engineers.
Recent LLMs like DeepSeek-R1 have shown a lot of promise in code generation tasks, but they still face challenges creating optimized code on the first try. This makes it necessary to use other strategies at inference time to generate optimized code.
The following prompt is sample user input for a relative positional embeddings attention kernel.
Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications.
Use the following function to compute the relative positional encoding:
def relative_positional(score, b, h, q_idx, kv_idx):
return score + (q_idx - kv_idx)
When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use:
qk = qk * qk_scale + rel_pos * 1.44269504
Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations.
LLMs can occasionally produce hallucinated code or mix syntax from different languages or frameworks, causing immediate code errors or inefficiencies. Computing the optimal GPU thread mapping is also non-trivial and a challenging task, often requiring iterative refinement to achieve a correct and efficient kernel.
Inference-time scaling for generating optimized GPU Kernels
To get the best results with optimized attention kernels, NVIDIA engineers created a new workflow that includes a special verifier along with the DeepSeek-R1 model during inference in a closed-loop fashion for a predetermined duration.
Figure 2. Inference-time scaling with DeepSeek-R1 on the NVIDIA Hopper platform
The workflow is first initialized by a manual prompt and the DeepSeek-R1 model generates the GPU code (that is, the kernel) in the first pass. The verifier runs on an NVIDIA H100 GPU. It analyzes the generated kernel and creates new prompts that are provided as input to the DeepSeek-R1 model.
This closed-loop approach makes the code generation process better by guiding it in a different way each time. The team found that by letting this process continue for 15 minutes resulted in an improved attention kernel.
Figure 3. Performance of automatically generated optimized attention kernels with flex attention
This workflow produced numerically correct kernels for 100% of Level-1 problems and 96% of Level-2 problems, as tested by
Stanford’s KernelBench
benchmark.
The Level-1 solving rate in KernelBench refers to the numerical correct metric used to evaluate the ability of LLMs to generate efficient GPU kernels for specific computational tasks. This test is part of a series of challenges to test the latest LLMs’ abilities in GPU programming.
Figure 4 shows how the inference-time budget affects the agent’s solving rate. Allocating more than 10 minutes per problem in the Level-1 category enables the workflow to produce numerical correct code for most of the 100 problems.
Figure 4. Inference-time scaling results in optimized GPU kernels
Optimized GPU kernels on DeepSeek-R1
These results show how you can use the latest DeepSeek-R1 model to give better GPU kernels by using more computing power during inference time. This is still a new research area with early results on a promising approach that automatically generates effective attention kernels.
While we are off to a good start, more work is needed to generate better results consistently for a wider variety of problems. We’re excited about the recent developments in DeepSeek-R1 and its potential.
For more information or to get started, see the
DeepSeek-R1 NIM microservice
, now available on
build.nvidia.com
. | https://developer.nvidia.com/zh-cn/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/ | 使用 DeepSeek-R1 和推理时间缩放实现自动化 GPU 内核生成 | 随着 AI 模型扩展其功能以解决更复杂的挑战,一种称为“
测试时扩展
”或“
推理时扩展
”的新扩展法则正在出现。该技术也称为
AI 推理
或
长时思考
技术,通过在推理过程中分配额外的计算资源来评估多种可能的结果,然后选择最佳结果 (neural network),从而提高模型性能。这使得 AI 能够以类似于人类剖析复杂问题并单独解决这些问题以达成最终解决方案的方式,制定战略并系统化地解决复杂问题。
在本文中,我们将介绍 NVIDIA 工程师完成的一项实验,他们在推理过程中使用最新的开源模型之一 DeepSeek-R1 模型以及额外的计算能力来解决复杂的问题。该实验旨在自动生成 GPU 注意力内核,这些内核在数值上是正确的,并针对不同的注意力类型进行了优化,而无需任何显式编程。
事实证明,在某些情况下,最终结果优于由技术精湛的工程师开发的优化内核。
优化注意力内核的需求和相关挑战
注意力是一个关键概念,彻底改变了大语言模型(LLM)的发展。这是一种功能强大的机制,可让 AI 模型在执行任务时,有选择地专注于与输入内容最相关的部分。通过聚焦于重要信息,注意力运算可帮助模型做出更好的预测,并找到数据中隐藏的模式。
相对于输入序列长度,注意力运算的计算复杂性呈正交增长。这促使我们需要开发经过优化的低级实现 (即 GPU 内核),以防止简单实现产生的运行时错误 (例如内存不足的错误),并提高计算效率。
注意力有多种变体 (因果关系、相对位置嵌入、不在场证明等),工程师通常必须将这些变体的组合用于给定任务。
多模态模型 (例如,vision transformers) 带来了额外的一层挑战,因为它们需要专门的注意力机制 (Spatial Neighborhood Attention) 来维护计算机视觉、视频生成模型等领域中经常遇到的时空信息。
图 1. 邻域注意力在 2D 输入上的应用
创建经优化的 GPU 内核以供关注需要大量技能和时间,即使对于经验丰富的软件工程师而言也是如此。
最近的 LLMs(如 DeepSeek-R1)在代码生成任务方面表现出了很大的前景,但在第一次尝试创建优化代码时,它们仍然面临挑战。这使得有必要在推理时使用其他策略来生成优化的代码。
以下提示是用户输入相对位置嵌入注意力核函数的示例。
Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications.
Use the following function to compute the relative positional encoding:
def relative_positional(score, b, h, q_idx, kv_idx):
return score + (q_idx - kv_idx)
When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use:
qk = qk * qk_scale + rel_pos * 1.44269504
Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations.
LLM 偶尔会产生来自不同语言或框架的幻影代码或混合语法,从而立即导致代码错误或效率低下。计算最佳 GPU 线程映射也并非易事,也是一项具有挑战性的任务,通常需要进行迭代优化才能实现正确高效的内核。
用于生成经过优化的 GPU 内核的推理时间扩展
为了通过优化的注意力内核获得最佳结果,NVIDIA 工程师创建了一个新的工作流程,其中包括一个特殊的验证器以及 DeepSeek-R1 模型,在预先设定的时间内以闭环方式进行推理。
图 2、在 NVIDIA Hopper 平台上使用 DeepSeek-R1 实现推理时间扩展
该工作流程首先通过手动提示进行初始化,然后 DeepSeek-R1 模型会在首次通道中生成 GPU 代码(即核函数)。该验证器在 NVIDIA H100 GPU 上运行。它会分析生成的核函数并创建新的提示,这些提示以 input 的形式提供给 DeepSeek-R1 模型。
这种闭环方法每次都以不同的方式指导代码生成过程,从而改进代码生成过程。该团队发现,让此过程持续 15 分钟可以改进注意力核函数。
图 3、具有 Flex Attention 的自动生成优化注意力内核的性能
此工作流程为 100%的 1 级问题和 96%的 2 级问题生成了数值正确的内核,测试对象为
斯坦福大学的 KernelBench
基准测试。*
KernelBench 中的 1 级求解率是指用于评估 LLM 为特定计算任务生成高效 GPU 内核的能力的数字正确指标。本次测试属于一系列挑战,旨在测试最新 LLM 在 GPU 编程中的能力。
图 4 显示了推理时间预算如何影响智能体的求解率。在 Level-1 类别中为每个问题分配超过 10 分钟的时间,使工作流程能够为 100 个问题中的大多数生成正确的数字代码。
图 4、在优化的 GPU 内核中实现推理 – 时间扩展
DeepSeek-R1 上经过优化的 GPU 内核
这些结果展示了如何使用最新的 DeepSeek-R1 模型,通过在推理期间使用更强大的计算能力来提供更好的 GPU 内核。这仍然是一个新的研究领域,在自动生成有效注意力内核的前景良好的方法方面取得了早期成果。
虽然我们已经有了一个良好的开端,但我们需要做更多的工作,以便为更广泛的问题持续提供更好的结果。我们对 DeepSeek-R1 的最新进展及其潜力感到兴奋。
如需了解更多信息或入门,请参阅
DeepSeek-R1 NIM 微服务
(现已在 build.nvidia.com 上提供)。 |
https://developer.nvidia.com/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/ | AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment | A new study and AI model from researchers at Stanford University is streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research
aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data.
“Multimodal foundation models are a new frontier in medical AI research,” said
Ruijiang LI
, an associate professor of radiation oncology and study senior author. “Recently, vision–language foundation models have been developed for medicine, particularly in the field of pathology. However, existing studies use off-the-shelf foundation models that require paired image–text data for pretraining. Despite extensive efforts that led to the curation of 1M pathology image–text pairs, it’s still insufficient to fully capture the diversity of the entire disease spectrum.”
Oncologists rely on many data sources when considering a patient’s condition and planning optimal treatments. However, integrating and interpreting complex medical data remains difficult for doctors and AI models. The
study
, recently published in
Nature
, highlights how MUSK could help doctors make more accurate and informed decisions while also solving this long-standing challenge in medical AI.
Using deep learning, MUSK processes clinical text data (such as doctor’s notes) and pathology images (like histology slides), to identify patterns that may not be immediately obvious to doctors, leading to better clinical insights.
To do so, it uses a two-step multimodal transformer model. First, it learns from large amounts of unpaired data, pulling features from the text and images that are useful. Then it finetunes its understanding of the data by linking paired image-text data, which helps it recognize different types of cancer, predict biomarkers, and suggest effective treatment options.
The researchers pretrained the AI model on one of the biggest datasets in the field, using 50M pathology images from 11,577 patients with 33 tumor types and 1B pathology-related text data.
According to
Jinxi Xiang
, study lead author and postdoctoral scholar in radiation physics, the pretraining was conducted over 10 days using 64
NVIDIA V100 Tensor Core
GPUs across eight nodes, enabling MUSK to process vast amounts of pathology images and clinical text efficiently. A secondary pretraining phase and ablation studies used
NVIDIA A100 80 gb Tensor Core GPUs
. The researchers also used
NVIDIA RTX A6000 GPUs
for evaluating downstream tasks. The framework was accelerated with
NVIDIA CUDA
and
NVIDIA cuDNN
libraries, for optimized performance.
When tested on 23 pathology benchmarks, MUSK outperformed existing AI models in several key areas. It excelled at matching pathology images with correlating medical text, making it more effective in gathering relevant patient information. It also interpreted pathology-related questions, such as identifying a cancerous area or predicting biomarker presence with 73% accuracy.
Figure 1. An example of the visual question-answering MUSK can perform
It improved detection and classification for cancer subtypes including breast, lung, and colorectal cancer by up to 10%, which could help with early diagnosis and treatment planning. It also detected breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%.
Additionally, MUSK reliably predicted cancer survival outcomes 75% of the time, and which lung and gastro-esophageal cancers would respond to immunotherapy with 77% accuracy. This is a significant improvement over standard clinical biomarkers with an accuracy of only 60-65%.
“One striking finding is that AI models that integrate multi-modal data consistently outperform those based on imaging or text data alone, highlighting the power of a multimodal approach,” Li said. “The true value of MUSK lies in its ability to leverage large-scale unpaired image and text data for pretraining, which is a substantial increase over existing models that require paired data.”
A core strength of the research is that it can adapt across different clinical settings with little training. This could improve efficiency in oncology workflows and help doctors diagnose cancer faster while tailoring treatments for better patient outcomes.
Their future work will focus on validating the model in multi-institution cohorts of patients from diverse populations and for high-stakes applications such as treatment decision-making. The researchers note that prospective validation in clinical trials will be required for regulatory approval.
“We are also working on an extension of the MUSK approach to digital pathology to other types of data such as radiology images and genomic data,” said Li.
The researchers’ work, including installation instructions, model weights, evaluation code, and sample data is available on
GitHub
. | https://developer.nvidia.com/zh-cn/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/ | AI 基础模型增强癌症诊断并实现个性化治疗 | 斯坦福大学研究人员的一项新研究和 AI 模型正在简化癌症诊断、治疗规划和预后预测。这项名为 MUSK (Multimodal transformer with Unified maSKed modeling) 的研究旨在提高精准肿瘤学,根据每位患者独特的医疗数据为其定制治疗计划。
“多模态基础模型是医学 AI 研究的新领域,”放射肿瘤学副教授兼研究高级作者
Ruijiang LI
说。“最近,我们为医学领域开发了视觉语言基础模型,尤其是在病理学领域。但是,现有研究使用的现有基础模型需要配对的图像 – 文本数据进行预训练。尽管我们付出了大量努力,最终打造出 1M 病理图像文本对,但它仍然不足以完全捕捉整个疾病谱系的多样性。”
在考虑患者状况和规划最佳治疗方案时,肿瘤科医生依靠多种数据源。然而,医生和 AI 模型仍然难以集成和解释复杂的医疗数据。该研究最近发表在 Nature 杂志上,重点介绍了 MUSK 如何帮助医生做出更准确、更明智的决定,同时解决医学 AI 领域的长期挑战。
借助深度学习,MUSK 处理临床文本数据(如医生的笔记)和病理学图像(如组织学幻灯片),以识别医生可能无法立即发现的模式,从而获得更好的临床见解。
为此,它使用了两步多模态 transformer 模型。首先,它从大量未配对的数据中学习,从有用的文本和图像中提取特征。然后,它通过关联配对的图像-文本数据来微调对数据的理解,这有助于识别不同类型的癌症、预测生物标志物,并提出有效的治疗方案。
研究人员基于该领域最大的数据集之一预训练了 AI 模型,使用了来自 11,577 名患者的 50M 病理学图像,其中有 33 种肿瘤类型和 1B 病理学相关文本数据。
据辐射物理学研究主要作者兼博士后学者 Jinxi Xiang 称,预训练在 8 个节点上使用 64 个
NVIDIA V100 Tensor Core
GPUs 进行了 10 天以上,使 MUSK 能够高效处理大量病理学图像和临床文本。二级预训练阶段和消融研究使用
NVIDIA A100 80GB Tensor Core GPUs
。研究人员还使用
NVIDIA RTX A6000 GPUs
评估下游任务。该框架通过
NVIDIA CUDA
和
NVIDIA cuDNN
库进行加速,以优化性能。
在 23 项病理学基准测试中,MUSK 在多个关键领域的表现优于现有 AI 模型。它擅长将病理学图像与相关的医学文本进行匹配,从而更有效地收集相关的患者信息。它还能解读与病理学相关的问题,例如识别癌变区域或预测生物标志物的存在,准确率高达 73%。
图 1. 例如,视觉问答 MUSK 可以执行
它将乳腺癌、肺癌和结直肠癌等癌症亚型的检测和分类能力提高了 10%,这有助于早期诊断和治疗规划。它还检测到乳腺癌生物标志物,AUC(用于衡量模型准确性的指标)为 83%。
此外,MUSK 有 75%的时间能够可靠预测癌症生存期结果,以及哪些肺癌和胃食道癌会对免疫治疗做出反应,准确率为 77%。与准确率仅为 60-65%的标准临床生物标志物相比,这是一个显著的改进。
“一个惊人的发现是,集成多模态数据的 AI 模型的性能始终优于仅基于图像或文本数据的 AI 模型,这凸显了多模态方法的强大功能,”Li 说。“MUSK 的真正价值在于它能够利用大规模的未配对图像和文本数据进行预训练,与需要配对数据的现有模型相比,这是一个巨大的提升。”
这项研究的一个核心优势是,它可以在几乎没有训练的情况下适应不同的临床环境。这可以提高肿瘤学工作流程的效率,并帮助医生更快地诊断癌症,同时定制治疗方案以改善患者的治疗效果。
他们未来的工作重点将是在来自不同人群的多机构患者群体中验证该模型,以及用于治疗决策等高风险应用。研究人员指出,临床试验中的前瞻性验证需要获得监管机构的批准。
“我们还致力于将 MUSK 方法扩展到数字病理学,包括放射学图像和基因组数据等其他类型的数据,”Li 说。
研究人员的工作(包括安装说明、模型权重、评估代码和样本数据)
可在 GitHub 上获取。 |
https://developer.nvidia.com/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/ | CUDA Toolkit Now Available for NVIDIA Blackwell | The latest release of the
CUDA Toolkit
, version 12.8, continues to push accelerated computing performance in data sciences, AI, scientific computing, and computer graphics and simulation, using the latest NVIDIA CPUs and GPUs. This post highlights some of the new features and enhancements included with this release:
NVIDIA Blackwell architecture support
CUDA Graphs conditional nodes enhancements
Blackwell CUTLASS kernels for large language models (LLMs)
NVIDIA Nsight Developer Tools updates
Math libraries updates
cudaStreamGetDevice
Compiler updates
Accelerated Python updates
Feature-complete architectures
NVIDIA Blackwell architecture support
CUDA Toolkit 12.8 is the first version of the Toolkit to support the
NVIDIA Blackwell
architecture across the entire suite of Developer Tools including performance tools and profilers, libraries, and compilers. Built with 208 billion transistors—more than 2.5x the number of transistors in
NVIDIA Hopper
GPUs—Blackwell is the largest GPU ever built.
Key Blackwell capabilities supported include:
Second-generation Transformer Engine through custom Tensor Core technology:
Accelerates inference and training for LLMs and mixture-of-experts (MoE) models.
Decompression:
Accelerates performance on data analytics and data science pipelines using the latest compression formats such as LZ4, Snappy, and Deflate.
Network interconnect:
NVLink and NVLink Switches accelerate inter-GPU communications performance for trillion-parameter and multitrillion-parameter AI models.
To learn more about the leading innovations in Blackwell, see the
NVIDIA Blackwell Architecture Technical Brief
.
2x faster CUDA Graphs with runtime kernel selection for lower latency inference
With Blackwell, CUDA Graphs APIs continue to be the most efficient way to launch repeated invocations of sequences of GPU operations. CUDA Toolkit 12.8 introduces more enhancements to CUDA Graphs, including additional
conditional node
types.
In many applications, having dynamic control over the execution of work in CUDA Graphs can increase performance and flexibility of graph launches. For example, an algorithm that involves iterating over a series of operations many times until the result converges below a certain threshold can now run wholly on the GPU without needing CPU control management, reducing overhead by as much as 2x. CUDA Toolkit 12.8 improves APIs for runtime control of conditional graph nodes.
Conditional nodes contain segments of a graph that can execute, or be skipped, based on a condition to evaluate as the graph is running. Such segments can be evaluated once (an IF node), or repeatedly in a loop (a WHILE node). CUDA 12.8 adds support for two new types of conditional graph nodes: IF/ELSE combined nodes and SWITCH nodes.
With the Blackwell architecture, we’ve improved LLM performance to benefit all reasoning models, including DeepSeek-R1. CUDA Graphs enhanced SWITCH and IF/ELSE support delivers 2x more performance for runtime kernel selection versus going back to the CPU for launch decision-making.
Training
: By reducing CPU dependency for kernel selection, training workloads sustain even more GPU Tensor Core throughput, resulting in higher Model FLOPs Utilization (MFU). This improves performance using the same GPU infrastructure, reducing time and cost to train.
Inference
: For next-generation reasoning models that make use of test-time compute, a high token generation rate is critical as each inference request can generate a vast number of tokens per query. CUDA 12.8 new stream API enables fewer calls back to the host CPU, reducing the time between one kernel finishing and the next one starting, increasing token generation rate. This results in more tokens generated in fixed time budget, helping models reason more and increasing intelligence.
To learn more, see
Dynamic Control Flow in CUDA Graphs with Conditional Nodes
.
Blackwell CUTLASS kernels for LLMs
CUTLASS
, since its 2017 debut, has been instrumental for researchers and developers implementing high-performance CUDA kernels on NVIDIA GPUs. By providing developers with comprehensive tools to design custom operations, such as GEMMs and Convolutions, CUTLASS has been critical for the development of hardware-aware algorithms, powering breakthroughs like FlashAttention that helped spark modern AI.
With the release of CUTLASS 3.8—which supports CUDA 12.8—NVIDIA is extending support to the Blackwell architecture, enabling developers to harness next-generation Tensor Cores with support for all new data types. This includes new narrow precision MX formats and the NVIDIA-developed FP4 format, which increase compute throughput. Figure 1 shows CUTLASS can achieve up to 98% relative peak performance for Tensor Core operations.
Figure 1. Blackwell CUTLASS GEMM performance relative to expected peak, delivering up to 98% of Blackwell peak performance
For DeepSeek-V3 and DeepSeek-R1, grouped GEMMs make up a large portion of the MoE compute required during inference. These operations enable different matrix sizes, scaling factors, and fusions to be grouped and parallelized in a single persistent-kernel launch. With CUTLASS, on Blackwell with FP4, Grouped GEMM kernel performance increases by up to 5x over H200 with FP16.
Figure 2. CUTLASS Grouped GEMM performance for MoE inference used in DeepSeek delivers up to 5x more performance on Blackwell compared to Hopper at various precisions
NVIDIA Nsight Developer Tools
NVIDIA
Nsight Compute
2025.1 is the first official release with support for the Blackwell architecture. Updates include visualization of Blackwell Hardware Tensor Memory in the memory chart as well as Tensor Core performance data.
Figure 3. Tensor Memory traffic in the Nsight Compute memory chart
It also comes with several improvements to the increasingly popular range profiling feature. Users can now collect source-level metrics, including Instructions Executed and memory access information, inside profiled ranges. This update also enables Guided Analysis rules evaluation for ranges. This built-in expertise for identifying performance issues is a key component of NVIDIA Nsight Compute. This release reports kernel stack sizes and adds custom tooltips to help users understand their workload performance.
This release of Compute Sanitizer, an automatic correctness checking tool, adds support for Python call stacks to accurately locate kernel correctness issues when kernels are launched through Python applications. Additionally, new Tensor Core MMA guardrails for Blackwell can report errors related to Tensor Core programming. These are enabled by adding the PTXAS flag
-g-tmem-access-check
when compiling programs. Examples of common errors include access to unallocated tensor memory, invalid addresses, and invalid allocator usage.
Math libraries updates
With CUDA Toolkit 12.8, we have several new library enhancements that leverage the new Blackwell architecture and help accelerate applications in AI, data sciences, graphics and simulation, and high-performance scientific computing.
New features
cuBLAS
APIs
were extended to support microscaled 4-bit and 8-bit floating point mixed-precision tensor core accelerated matrix multiplication for compute capability 10.0 (Blackwell) and higher.
Introduced
initial support for CUDA in Graphics (CIG)
on Windows x64 for NVIDIA Ampere GPU architecture and Blackwell GeForce-class GPUs. CIG contexts are now autodetected, and cuBLAS selects kernels that comply with CIG shared memory usage limits.
cuSOLVER
now supports zsytrf/zsytrs, a complex symmetric direct solver without pivoting.
nvJPEG
now provides support for the Tegra architecture.
NPP
now provides support for the DRIVE Thor architecture.
cudaStreamGetDevice
Applications often use CUDA streams to provide ordered access to GPU resources. An instance of a CUDA stream is associated with a fixed CUDA device. In applications that address multiple devices, there are scenarios where getting a handle to the underlying device for a given stream is useful to tailor the application to device characteristics.
Previously, the CUDA API did not provide a mechanism for retrieving the device associated with a CUDA stream; developers had to track this themselves. The addition of the
cudaStreamGetDevice
CUDA API to retrieve the device associated with a CUDA stream can simplify applications.
Compiler updates
New compiler updates include the following:
The CUDA Toolkit 12.8 release introduces support for
GCC 14
as a host-side compiler.
The default high-level optimizer is now based on
LLVM 18
for the Blackwell architecture.
nvdisasm
now supports emitting JSON formatted SASS disassembly.
Accelerated Python updates
The following two beta releases are now available for Python users:
CUDA Python has released an early prototype of a new idiomatic object model called
cuda.core
and moved the CUDA binding to a submodule,
cuda.bindings
. For more information, see the documentation in the
NVIDIA/cuda-python
GitHub repo.
CUDA Core Compute Libraries (CCCL) has released early prototypes of Python for parallel and cooperative algorithms, enabling you to use thread-level parallelism with user-defined types and functions from pure Python code. Learn more about
CCCL
.
Additionally, the CuPy team is releasing a new version with Blackwell patches validated for general availability.
Feature-complete architectures
With the CUDA Toolkit 12.8 release, we now consider the Maxwell, Pascal, and Volta architectures to be feature-complete and support for them will be frozen in an upcoming release.
This means that, in future releases, no new features will be added to the driver to enable new CUDA Toolkit functionality supporting Maxwell, Pascal, and Volta architectures. End users will be able to run existing software stacks and applications on Maxwell, Pascal, and Volta architectures using the supported upcoming LTS driver branch through its lifecycle.
Starting with release 12.8, developers running offline compilers targeting these architectures will output a warning message when using
nvcc
,
nvrtc
, and
nvjitlink
.
In the next major CUDA Toolkit release, offline compilation support for the Maxwell, Pascal, and Volta architectures will be removed from the compilers. The upcoming LTS driver for production application execution and JIT compilation of Maxwell, Pascal, and Volta applications will be supported for the normal 3-year LTS support window.
For more details, read the
CUDA Toolkit 12.8 Release Notes
.
Summary
The CUDA Toolkit 12.8 release provides full feature support for the NVIDIA Blackwell architecture. This release continues to provide enhanced support for the newest NVIDIA GPUs, accelerated libraries, compilers, and Developer Tools, whether you’re developing applications in C++ or Python.
Want more information? Check out the
CUDA documentation
, browse the latest
NVIDIA Deep Learning Institute (DLI)
offerings, and visit the
NGC catalog
. Ask questions and join the conversation in the
CUDA Developer Forums.
Acknowledgments
Thanks to the following NVIDIA contributors: Stephen Jones, Jackson Marusarz, Becca Zandstein, Andy Terrel, Ashraf Eassa, Matt Nicely, and Mridula Prakash. | https://developer.nvidia.com/zh-cn/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/ | CUDA 工具包现已支持 NVIDIA Blackwell 架构 | CUDA 工具包
的最新版本 (版本 12.8) 使用最新的 NVIDIA CPU 和 GPU,持续提升数据科学、AI、科学计算以及计算机图形和模拟领域的加速计算性能。本文重点介绍了此版本包含的一些新功能和增强功能:
NVIDIA Blackwell 架构支持
CUDA 图形处理条件节点增强功能
用于大语言模型(LLMs)的 Blackwell CUTLASS 内核
NVIDIA Nsight 开发者工具更新
数学库更新
cudaStreamGetDevice
编译器更新
加速 Python 更新
功能齐全的架构
NVIDIA Blackwell 架构支持
CUDA 工具包 12.8 是该工具包的第一个版本,在整个开发者工具套件 (包括性能工具和分析器、库和编译器) 中支持
NVIDIA Blackwell
架构。Blackwell 由 208 亿个晶体管构建而成,是
NVIDIA Hopper
GPU 中晶体管数量的 2.5 倍以上,是迄今为止最大的 GPU。
Blackwell 支持的主要功能包括:Key Blackwell
采用自定义 Tensor Core 技术的第二代 Transformer 引擎:加速 LLM 和 mixture-of-experts (MoE) 模型的推理和训练。
解压缩:
使用 LZ4、Snappy 和 Deflate 等最新压缩格式,加速数据分析和数据科学工作流的性能。
网络互连:NVLink 和 NVLink Switches 加速万亿参数和数万亿参数 AI 模型的 GPU 间通信性能。
如需详细了解 NVIDIA Blackwell 的领先创新,请参阅 NVIDIA Blackwell 架构技术概览。
使用运行时核选择将 CUDA Graphs 速度提升 2 倍,从而降低延迟推理
借助 Blackwell,CUDA Graphs APIs 仍然是启动 GPU 操作序列重复调用的最高效方式。CUDA Toolkit 12.8 为 CUDA Graphs 引入了更多增强功能,包括其他
条件节点类型
。
在许多应用程序中,对 CUDA Graphs 中工作的执行进行动态控制可以提高图形启动的性能和灵活性。例如,一种算法需要多次迭代一系列运算,直到结果收到某个值以下,现在这种算法无需进行 CPU 控制管理即可完全在 GPU 上运行,从而将开销降低高达 2 倍。CUDA Toolkit 12.8 改进了用于条件图形节点运行时控制的 API。
条件节点包含图形的片段,这些片段可以在图形运行时根据要评估的条件执行或跳过。此类片段可以评估一次 (IF 节点),也可以在循环中重复评估 (WHILE 节点)。CUDA 12.8 增加了对两种新型条件图形节点的支持:IF/ELSE 组合节点和 SWITCH 节点。
借助 Blackwell 架构,我们改进了 LLM 性能,使包括 DeepSeek-R1 在内的所有推理模型受益。与返回 CPU 进行启动决策相比,CUDA Graphs 增强的 SWITCH 和 IF/ELSE 支持可将运行时内核选择的性能提高 2 倍。
训练:通过减少内核选择对 CPU 的依赖,训练工作负载可维持更多的 GPU Tensor Core 吞吐量,从而提高模型 FLOPS 利用率(MFU)。这提高了使用相同的 GPU 基础架构的性能,减少了训练时间和成本。
推理:对于使用测试时计算的新一代推理模型
,高令牌生成速率至关重要,因为每个推理请求都可以在每个查询中生成大量令牌。CUDA 12.8 新流 API 可减少对主机 CPU 的调用,从而缩短一次内核处理与下一次启动之间的时间,从而提高令牌生成率。这会在固定时间预算内生成更多 token,帮助模型推理更多并提高智能。
如需了解详情,
请参阅使用条件节点的 CUDA 图形中的动态控制流。
适用于 LLMs 的 Blackwell CUTLASS 内核
自 2017 年首次推出以来,
CUTLASS
一直在推动研究人员和开发者在 NVIDIA GPUs 上实施高性能 CUDA 核函数。通过为开发者提供全面的工具来设计自定义操作 (例如 GEMMs 和 Convolutions),CUTLASS 在开发硬件感知算法方面发挥了至关重要的作用,推动了 FlashAttention 等帮助激发现代 AI 的突破。
随着支持 CUDA 12.8 的 CUTLASS 3.8 的发布,NVIDIA 将扩展对 Blackwell 架构的支持,使开发者能够利用新一代 Tensor Core 来支持所有新的数据类型。这包括新的窄精度 MX 格式和 NVIDIA 开发的 FP4 格式,可提高计算吞吐量。图 1 显示,对于 Tensor Core 运算,CUTLASS 可实现高达 98% 的相对峰值性能。
图 1. Blackwell CUTLASS GEMM 性能相对于预期峰值,可提供高达 98% 的 Blackwell 峰值性能
对于 DeepSeek-V3 和 DeepSeek-R1,分组的 GEMM 在推理期间所需的 MoE 计算中占很大比例。这些运算支持在单个持久性核函数启动中对不同的矩阵大小、缩放系数和融合进行分组和并行化。借助 CUTLASS,在 Blackwell 以 FP4,Grouped GEMM 内核性能增加高达 5 倍,相比使用 FP16 的 H200。
图 2、与 Hopper 相比,DeepSeek 中使用的用于 MoE 推理的 CUTLASS 分组 GEMM 性能在 Blackwell 上在各种精度下的性能提升高达 5 倍
NVIDIA Nsight 开发者工具
NVIDIA
Nsight Compute
2025.1 是首个支持 Blackwell 架构的官方版本。更新包括显存图表中 Blackwell 硬件 Tensor 内存的可视化,以及 Tensor Core 性能数据。
图 3、Nsight Compute 内存图中的 Tensor 内存流量
它还对日益流行的范围分析功能进行了多项改进。用户现在可以在已分析的范围内收集源级指标,包括已执行指令和内存访问信息。此更新还启用了针对范围的引导分析规则评估。这种用于识别性能问题的内置专业知识是 NVIDIA Nsight Compute 的关键组件。此版本报告了内核堆栈大小,并添加了自定义工具提示,以帮助用户了解其工作负载性能。
此版本的 Compute Sanitizer 是一款自动正确性检查工具,增加了对 Python 调用堆栈的支持,可在通过 Python 应用启动内核时准确定位内核正确性问题。此外,用于 Blackwell 的新 Tensor Core MMA 护栏可以报告与 Tensor Core 编程相关的错误。在编译程序时,可以通过添加 PTXAS 标志
-g-tmem-access-check
来启用这些功能。常见错误的示例包括访问未分配的 tensor 内存、无效的地址以及使用无效的分配器。
数学库更新
借助 CUDA 工具包 12.8,我们获得了一些新的增强功能库,这些增强功能利用了新的 Blackwell 架构,并有助于加速 AI、数据科学、图形和仿真以及高性能科学计算领域的应用程序。
新功能
cuBLAS
API
经过扩展,支持微缩 4 位和 8 位浮点混合精度张量核心加速矩阵乘法,可实现 10.0(Blackwell)及更高版本的计算能力。
为 Windows x64 上的 NVIDIA Ampere GPU 架构和 Blackwell GeForce 级 GPU 引入了对 CUDA in Graphics (CIG) 的初步支持。现在,系统会自动检测 CIG 上下文,并且 cuBLAS 会选择符合 CIG 共享内存使用限制的内核。
cuSOLVER
现在支持 zsytrf/zsytrs,这是一款无需旋转的复杂对称直接求解器。
nvJPEG
现在支持 Tegra 架构。
NPP
现在为 DRIVE Thor 架构提供支持。
cudaStreamGetDevice
应用程序通常使用 CUDA 流提供对 GPU 资源的有序访问。CUDA 流实例与固定的 CUDA 设备相关联。在用于处理多台设备的应用中,在某些情况下,为给定流获取底层设备的句柄有助于根据设备特性定制应用。
以前,CUDA API 没有提供检索与 CUDA 流关联的设备的机制;开发者必须自行追踪。添加
cudaStreamGetDevice
CUDA API 以检索与 CUDA 流关联的设备,可以简化应用。
编译器更新
新的编译器更新包括以下内容:
CUDA 工具包 12.8 版本引入了对作为主机端编译器的
GCC 14
的支持。
现在,Blackwell 架构的默认高级优化器基于
LLVM 18
nvdisasm
现在支持发射 JSON 格式的 SASS 反汇编。
加速 Python 更新
以下两个测试版现已面向 Python 用户提供:
CUDA Python 已发布名为
cuda.core
的新惯用对象模型的早期原型,并将 CUDA 绑定移至子模块
cuda.bindings
。有关更多信息,请参阅
NVIDIA/cuda-python
GitHub 存储库中的文档。
CUDA 核心计算库 (
CCCL
) 已发布用于并行和协作算法的早期 Python 原型,使您能够使用线程级并行性以及来自纯 Python 代码的用户定义类型和函数。详细了解 CCCL。
此外,CuPy 团队还将发布新版本,其中的 Blackwell 补丁经过验证,现已全面推出。
功能齐全的架构
在 CUDA 工具包 12.8 版本中,我们现在认为 Maxwell、Pascal 和 Volta 架构功能齐全,并且即将发布的版本将冻结对这些架构的支持。
这意味着,在未来的版本中,不会向驱动添加任何新功能来启用支持 Maxwell、Pascal 和 Volta 架构的新 CUDA 工具包功能。最终用户将能够在其生命周期中使用受支持的即将推出的 LTS 驱动分支,在 Maxwell、Pascal 和 Volta 架构上运行现有的软件堆栈和应用。
从版本 12.8 开始,开发者在运行针对这些架构的离线编译器时,将在使用
nvcc
、
nvrtc
和
nvjitlink
时输出警告消息。
在下一个主要 CUDA 工具包版本中,将从编译器中删除对 Maxwell、Pascal 和 Volta 架构的离线编译支持。即将推出的用于生产应用程序执行的 LTS 驱动以及 Maxwell、Pascal 和 Volta 应用程序的 JIT 编译将在正常的 3 年期 LTS 支持窗口期内获得支持。
如需了解更多详情,请参阅
CUDA Toolkit 12.8 版本说明
。
总结
CUDA 工具包 12.8 版本为 NVIDIA Blackwell 架构提供完整的功能支持。无论您是使用 C++ 还是 Python 开发应用程序,此版本都将继续为最新的 NVIDIA GPU、加速库、编译器和开发者工具提供增强支持。
想要了解更多信息?查看
CUDA 文档
,浏览最新的
NVIDIA Deep Learning Institute (DLI) 产品
,并访问
NGC 目录
。在
CUDA Developer Forums
中提出问题并加入对话。
致谢
感谢以下 NVIDIA 贡献者:Stephen Jones、Jackson Marusarz、Becca Zandstein、Andy Terrel、Ashraf Eassa、Matt Nicely 和 Mridula Prakash。 |
https://developer.nvidia.com/blog/recent-posts/ | Recent posts | No content found | https://developer.nvidia.com/zh-cn/blog/recent-posts/ | 最近文章 | No content found |
https://developer.nvidia.com/blog/high-performance-remote-io-with-nvidia-kvikio/ | High-Performance Remote IO With NVIDIA KvikIO | Workloads processing large amounts of data, especially those running on the cloud, will often use an object storage service (S3, Google Cloud Storage, Azure Blob Storage, etc.) as the data source. Object storage services can store and serve massive amounts of data, but getting the best performance can require tailoring your workload to how remote object stores behave. This post is for RAPIDS users who want to read or write data to object storage as quickly as possible so that IO doesn’t bottleneck your workload.
Some of your knowledge about how local file systems behave translates to remote object stores, but they are fundamentally different. Probably the biggest difference between the two, at least for data analysis workloads, is that read and write operations on object storage have
higher and more variable latency
. Every storage service has their own set of best practices and performance guidelines (
AWS
,
Azure
). Here, we’ll give some general guidelines that are focused on data analysis workloads.
Location
Placing your compute nodes near the storage service (ideally, in the same cloud region) will give you the fastest and most reliable network between the machines running your workload and the machines serving the data. And, at the end of the day, the transfer will be limited by the speed of light so minimizing the physical distance doesn’t hurt.
File format
“Cloud-native” file formats have been developed to work well with object storage. These file formats typically provide fast, easy access to metadata (which includes both high-level information like the column names or data types, and lower-level information like where in the file specific data subsets are located).
Apache Parquet
,
Zarr
, and
Cloud Optimized GeoTIFF
are some examples of cloud-native file formats for various types of data.
Because object storage services typically support
range requests
, clients (like
cuDF
) can read the metadata and then download
just
the data you actually need. For example, cuDF can read just a few columns out of a Parquet file with many columns, or a Zarr client can read a single chunk out of a large n-dimensional array. These reads are done in just a few HTTP requests, and without needing to download a bunch of extraneous data that just gets filtered out.
File size
Because every read operation requires (at least) one HTTP request, we’d prefer to amortize the overhead from each HTTP request over a reasonably large number of bytes. If you control the data-writing process, you’ll want to ensure that the files are large enough for your downstream processing tasks to get good performance. The optimal value depends on your workload, but somewhere in the dozens to low-hundreds of MBs is common for parquet files (see below for some specific examples).
That said, you’ll need to be careful with how file size interacts with the next tool in our kit: concurrency.
Concurrency
Using concurrency to download multiple blobs (or multiple pieces of a single blob) at the same time is essential to getting good performance out of a remote storage service. Since it’s a
remote
service, your process is going to spend some time (perhaps a lot of time) waiting around doing nothing. This waiting spans the time between when the HTTP request is sent and the response received. During this time, we wait for the network to carry the request, the storage service to process it and send the response, and the network to carry the (possibly large) response. While parts of that request/response cycle scale with the amount of data involved, other parts are just fixed overhead.
Object storage services are designed to handle many concurrent requests. We can combine that with the fact that each request involves some time waiting around doing nothing, to make many concurrent requests to raise our overall throughput. In Python, this would typically be done using a
thread pool
:
pool = concurrent.futures.ThreadPoolExecutor()
futures = pool.map(request_chunk, chunks)
Or with
asyncio
:
tasks = [request_chunk_async(chunk) for chunk in chunks]
await asyncio.gather(*tasks)
We’re able to have a lot of reads waiting around doing nothing
at the same time
, which improves our throughput. Because each thread/task is mostly doing nothing, it’s ok to have more threads/tasks than your machine has cores. Given enough concurrent requests you will eventually saturate your storage service, which has some requests per second and bandwidth targets it tries to meet. But those targets are high; you’ll typically need many machines to saturate the storage service and should achieve very high throughput.
Libraries
Everything above applies to essentially any library doing remote IO from an object storage service. In the RAPIDS context, NVIDIA
KvikIO
is notable because
It automatically chunks large requests into multiple smaller ones and makes those requests concurrently.
It can read efficiently into host or device memory, especially if
GPU Direct Storage
is enabled.
It’s fast.
As mentioned in the
RADIDS 24.12 release announcement
, KvikIO can achieve impressive throughput when reading from S3. Let’s take a look at some benchmarks to see how it does.
Benchmarks
When you read a file, KvikIO splits that read into smaller reads of
kvikio.defaults.task_size
bytes. It makes those read requests in parallel using a thread pool with
kvikio.defaults.num_threads
workers. These can be controlled using the environment variables
KVIKIO_TASK_SIZE
and
KVIKIO_NTHREADS
, or through Python with:
with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size):
...
See
Runtime Settings
for more.
This chart shows the throughput, in megabits per second, of reading a 1 GB blob from
S3
to a
g4dn
EC2 instance in the same region for various sizes of the thread pool (higher is better).
Figure 1. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. This the throughput of
kvikio.RemoteFile.read
for various values of
kvikio.defaults.num
_threads and a task size of 16 MiB. Throughput increases as we add more threads and parallelize the reads, up to a point.
Fewer threads (less than four) achieve lower throughput and take longer to read the file. More threads (64, 128, 256) achieve higher throughput by parallelizing the requests to the storage service, which serves them in parallel. There are diminishing and even negative returns as we hit the limits of the storage service, network, or other bottlenecks in our system.
With remote IO, each thread spends a relatively long time idle waiting for the response, so a higher number of threads (relative to your number of cores) might be appropriate for your workload. We see that the throughput is highest between 64 to 128 threads in this case.
As shown in the next figure, the task size also affects the maximum throughput.
Figure 2. From a benchmark reading a 1 GB file from S3 to a
g4dn.xlarge EC2
instance, which has a published bandwidth of up to 25 Gbps. This shows a heatmap of the throughput of
kvikio.RemoteFile.read
. The horizontal axis shows throughput for various task sizes, while the vertical axis shows various thread counts.
As long as the task size isn’t too small (around or below 4 MiB) or too large (around or above 128 MiB), then we get around 10 Gbps of throughput. With too small of a task size, the overhead of making many HTTP requests reduces throughput. With too large of a task size, we don’t get enough concurrency to maximize throughput.
KvikIO achieves higher throughput on this workload when compared with
boto3
, the AWS SDK for Python, even when boto3 is used in a thread pool to execute requests concurrently.
Figure 3. From a benchmark reading a 1 GB from S3 to a
g4dn.xlarge EC2
instance, which has a published bandwidth of up to 25 Gbps. The KvikIO benchmark used 64 threads and 16 MiB task size. The Boto3 benchmark used a ThreadPool to read many byte 4 MB chunks in parallel, which a parameter search showed to be the fastest chunk size for boto3.
As a slightly more realistic workload, though still just one focused solely on IO, we compare the performance reading a batch of 360 parquet files, each about 128 MB. This was run on an AWS
g4dn.12xlarge
instance
, which has 4
NVIDIA T4
GPUs and 48 vCPUs.
Figure 4. From a benchmark reading a parquet data set from S3 to a
g4dn.12xlarge EC2
instance, which has a published bandwidth of up to 50 Gbps. The dataset had 360 Apache Parquet files of about 128 MB each, for a total of about 46 GB. The Dask cluster had 4 workers. These results use cuDF 25.04 which will include
an optimization
to read parquet footers in parallel.
With KvikIO enabled, the four Dask worker processes are able to collectively achieve almost 20 Gbps of throughput from S3 to this single node.
Conclusion
As RAPIDS accelerates other parts of your workload, IO can become a bottleneck. If you’re using object storage and are tired of waiting around for your data to load, try out some of the recommendations from this post. Let us know how things work with KvikIO on
GitHub
. You can also join over 3,500 members on the RAPIDS Slack community to talk GPU-accelerated data processing. | https://developer.nvidia.com/zh-cn/blog/high-performance-remote-io-with-nvidia-kvikio/ | 借助 NVIDIA KvikIO 实现高性能远程 IO | 处理大量数据的工作负载 (尤其是在云端运行的工作负载) 通常会使用对象存储服务 (S3、Google Cloud Storage、Azure Blob Storage 等) 作为数据源。对象存储服务可以存储和提供海量数据,但要想获得最佳性能,可能需要根据远程对象存储的行为方式调整工作负载。本文适用于希望尽快将数据读或写到对象存储,以便 IO 不会限制工作负载的 RAPIDS 用户。
您对本地文件系统行为方式的一些了解可转换为远程对象存储,但它们本质上是不同的。这两者之间的最大区别 (至少对于数据分析工作负载而言) 可能在于,对象存储上的读取和写入操作具有越来越高的可变延迟。每个存储服务 (AWS、Azure) 都有自己的一套最佳实践和性能指南。在这里,我们将提供一些专注于数据分析工作负载的一般指南。
地址
将计算节点放置在存储服务附近 (理想情况下,应位于同一云区域),可在运行工作负载的计算机和为数据提供服务的计算机之间提供速度最快、最可靠的网络。在一天结束时,传输将受到光速的限制,因此最大限度地减少物理距离不会造成伤害。
文件格式
“云原生”文件格式的开发能够很好地与对象存储配合使用。这些文件格式通常可让用户快速轻松地访问元数据 (元数据包括列名称或数据类型等高级信息,以及文件特定数据子集所在位置等低级信息)。
Apache Parquet
、
Zarr
和
Cloud Optimized GeoTIFF
是适用于各种类型数据的云原生文件格式的一些示例。
由于对象存储服务通常支持范围请求,因此客户端 (如
cuDF
) 可以读取元数据,然后只下载您实际需要的数据。例如,cuDF 只能从包含多列的 Parquet 文件中读取几列数据,或者 Zarr 客户端可以从大型 n 维数组中读取单个 chunk。这些读取只需通过几次 HTTP 请求即可完成,而且无需下载一堆刚刚被过滤掉的不相干数据。
文件大小
由于每个读取操作都需要 (至少) 一个 HTTP 请求,因此我们倾向于在合理数量的字节数上分担每个 HTTP 请求的用度。如果您控制数据写入过程,则需要确保文件足够大,以便下游处理任务获得良好性能。最佳值取决于您的工作负载,但 parquet 文件的大小通常介于数十 MB 到数百 MB 之间 (请参阅下文,了解一些特定示例)。
也就是说,您需要注意文件大小与 Kit 中的下一个工具:并发的交互方式。
并发
使用并发同时下载多个 blobs (或单个 blob 的多个部分) 对于从远程存储服务中获得良好性能至关重要。由于这是一项远程服务,您的流程将花费一些时间 (可能会花费大量时间) 四处等待,不执行任何操作。此等待时间为 HTTP 请求被发送到响应被接收之间的时间。在此期间,我们会等待网络执行请求,等待存储服务处理并发送响应,等待网络执行响应 (可能较大)。虽然该请求/响应周期的一部分会随所涉及的数据量而扩展,但其他部分只是固定的开销。
对象存储服务旨在处理许多并发请求。我们可以将这一点与每个请求都涉及一些时间来等待不执行任何操作的事实相结合,以发出许多并发请求来提高整体吞吐量。在 Python 中,这通常使用线程池完成:
pool = concurrent.futures.ThreadPoolExecutor()
futures = pool.map(request_chunk, chunks)
或使用
异步
:
tasks = [request_chunk_async(chunk) for chunk in chunks]
await asyncio.gather(*tasks)
我们能够让大量读取
同时
不执行任何操作,从而提高吞吐量。由于每个线程/任务通常不执行任何任务,因此拥有比计算机核心数更多的线程/任务也是可以的。如果并发请求数量足够多,您最终会使存储服务饱和,而存储服务试图满足一些每秒请求数和带宽目标数。但这些目标很高;您通常需要多台机器使存储服务饱和,并且应该实现非常高的吞吐量。
库
上述内容基本上适用于从对象存储服务执行远程 IO 的任何库。在 RAPIDS 环境中,
NVIDIA KvikIO
值得注意,因为
它会自动将大型请求分块为多个较小的请求,并并发发出这些请求。
它可以高效读取主机或设备内存,尤其是启用
GPU Direct Storage
时。
速度很快。
正如 RADIDS 24.12 发布公告中提到的那样,从 S3 读取数据时,KvikIO 可以实现惊人的吞吐量。我们来看看一些基准测试,看看效果如何。
基准测试
当您读取文件时,KvikIO 会将读取的文件拆分成较小的
kvikio.defaults.task_size
字节读取。它使用具有
kvikio.defaults.num_threads
工作线程的线程池并行执行这些读取请求。可以使用环境变量
KVIKIO_TASK_SIZE
和
KVIKIO_NTHREADS
控制这些内容,也可以通过 Python 使用:
with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size):
...
详情请参阅
Runtime Settings
。
此图表显示了在同一区域内,针对不同大小的线程池,从
S3
到
g4dn
EC2 实例读取 1 GB Blob 的吞吐量 (以 Mbps 为单位) (越高越好)。
图 1、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。这是
kvikio.RemoteFile.read
的吞吐量,适用于各种值的
kvikio.defaults.num
_threads 和 16 MiB 的任务。随着我们添加更多线程并对读取进行并行化,吞吐量会增加到一定程度。
线程越少 (少于 4 个),吞吐量越低,读取文件的时间越长。更多线程 (64、128、256) 通过将请求并行化到以并行方式提供服务的存储服务,实现更高的吞吐量。当我们遇到系统中存储服务、网络或其他瓶颈的限制时,会出现递减甚至负回报的情况。
借助远程 IO,每个线程都会在相对较长的时间内等待响应,因此对于您的工作负载,可能适合使用更多线程 (相对于核心数量而言)。我们看到,在本例中,吞吐量最高,介于 64 到 128 个线程之间。
如下图所示,任务大小也会影响最大吞吐量。
图 2、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的
g4dn.xlarge EC2 实例
。这显示了
kvikio.RemoteFile.read
吞吐量的热图。水平轴显示各种任务大小的吞吐量,而垂直轴显示各种线程数量。
只要任务大小不是太小(大约或低于 4 MiB)或太大(大约或超过 128 MiB),吞吐量就会达到 10 Gbps 左右。由于任务规模过小,发出许多 HTTP 请求会降低吞吐量。由于任务规模过大,我们无法获得足够的并发能力来最大限度地提高吞吐量。
与
boto3
(适用于 Python 的 AWS SDK) 相比,即使在线程池中使用 boto3 并发执行请求,KvikIO 也能实现更高的吞吐量。
图 3、从从 S3 读取 1 GB 的基准测试,到具有高达 25 Gbps 已发布带宽的
g4dn.xlarge EC2
实例。KvikIO 基准测试使用 64 个线程和 16 MiB 任务大小。Boto3 基准测试使用 ThreadPool 并行读取许多 4 MB 字节的块,而参数搜索表明,对于 Boto3 而言,这是最快的块大小。
对于略为逼真的工作负载 (尽管仍然仅有一个工作负载专注于 IO),我们比较了读取一批 360 个 parquet 文件 (每个文件约 128 MB) 的性能。这在 AWS
g4dn.12xlarge
实例上运行,该实例包含 4 个
NVIDIA T4
GPU 和 48 个 vCPUs。
图 4、从读取 S3 中的 Parquet 数据集的基准测试,到具有高达 50 Gbps 已发布带宽的 g4dn.12xlarge EC2 实例。该数据集包含 360 个 Apache Parquet 文件,每个文件约 128 MB,总计约 46 GB。Dask 集群有 4 个工作者。这些结果使用 cuDF 25.04,其中包括并行读取 Parquet 文件页脚的优化。
启用 KvikIO 后,四个 Dask 工作进程能够共同实现从 S3 到此单个节点的近 20 Gbps 吞吐量。
结束语
随着 RAPIDS 加速工作负载的其他部分,IO 可能会成为瓶颈。如果您使用的是对象存储,并且已经疲于等待数据加载,请尝试本博文中的一些建议。让我们了解如何在 Github 上使用 KvikIO。您还可以与 RAPIDS Slack 社区的 3,500 多名成员一起讨论 GPU 加速的数据处理。 |
https://developer.nvidia.com/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/ | Latest Multimodal Addition to Microsoft Phi SLMs Trained on NVIDIA GPUs | Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size they are not practical for the current resource constraints that many companies have.
The rise of
small language models
(SLMs) bridge quality and cost by creating models with a smaller resource footprint. SLMs are a subset of language models that tend to focus on specific domains and are built with simpler neural architectures. As models grow to mimic how humans perceive the world around them, models must rise to accept multiple forms of multimodal data.
Microsoft announces
the new generation of open SLMs
to the Phi family with two new additions:
Phi-4-mini
Phi-4-multimodal
Phi-4-multimodal is the first multimodal model to join the family that accepts text, audio, and image data inputs.
These models are small enough for on-device deployment. This release builds on top of the December 2024 research-only release of the
Phi-4 14B parameter SLM
and enables commercial use for the two new smaller models.
The new models are available on the
Azure AI Foundry
, Microsoft’s Cloud AI platform for design, customize, and manage AI applications and agents.
You can test out each member of the Phi family through the
NVIDIA API Catalog
, which is the first sandbox environment to support each modality and tool-calling for
Phi-4-multimodal
. Use the preview NIM microservice to integrate the model into your applications today.
Why invest in SLMs?
SLMs enable generative AI capabilities in memory and compute constrained environments. For example, SLMs can be deployed directly on smartphones and several consumer-grade devices. On-device deployment can facilitate privacy and compliance for use cases that must adhere to regulatory requirements.
Other benefits of SLMs include lower latency due to inherently faster inference compared to an LLM of similar quality. SLMs do tend to perform better on specialized tasks correlated to their training data. However, to supplement generalization and adaptability to different tasks, you can use retrieval-augmented generation (RAG) or native-function calling to build performant agentic systems.
Phi-4-multimodal
Phi-4-multimodal is with 5.6B parameters and accepts audio, image, and text reasoning. This enables it to support use cases such as automated speech recognition (ASR), multi-modal summarization, translation, OCR, and visual reasoning. This model was trained on 512 NVIDIA A100-80GB GPUs over 21 days.
Figure 1 shows how you can preview your image data and ask Phi-4-multimodal visual QA in the NVIDIA API Catalog. You can also see how to adjust parameters such as token limits, temperature, and sampling values. You can generate sample code in Python, JavaScript, and Bash to help you integrate the model more easily into your applications.
Figure 1. Visual QA demo in NVIDIA API Catalog
You can also demo tool calling with a set of prebuilt agents. Figure 2 shows a tool that retrieves live weather data.
Figure 2. Tool-calling demo in NVIDIA API Catalog
Phi-4-mini
Phi-4-mini is a text-only, dense, decoder-only Transformer model with 3.8B parameters that is optimized for chat. It includes a long-form context window of 128K tokens. This model was trained on 1024 NVIDIA A100 80GB GPUs over 14 days.
For both models, the training data is intentionally focused on high quality educational data and code which results in a
textbook-like quality
to the models. Text, speech, and vision benchmark data can be found in the model cards.
Advancing community models
NVIDIA is an active contributor to the open-source ecosystem and has released several hundred projects under open-source licenses. NVIDIA is committed to optimizing community software and open models such as Phi which promotes AI transparency and lets users broadly share work in AI safety and resilience.
Using the
NVIDIA NeMo platform
, these open models can be customized on proprietary data to be highly tuned and efficient for diverse AI workflows across any industry.
NVIDIA and Microsoft have a long standing partnership which includes several collaborations driving innovation on GPUs on Azure, integrations and optimizations for PC developers using NVIDIA RTX GPUs, and many more, including research spanning generative AI to healthcare and life sciences.
Get started today
Bring your data and try out Phi-4 on the NVIDIA-accelerated platform at
build.nvidia.com/microsoft
.
On the first multi-modal sandbox for Phi-4-multimodal, you can try out text, image, and audio as well as sample tool calling to see how this model will work for you in production. | https://developer.nvidia.com/zh-cn/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/ | 在 NVIDIA GPU 上训练的 Microsoft Phi SLM 的多模态最新进展 | 大语言模型(LLMs)已渗透到各行各业,并改变了技术潜力。但是,由于规模庞大,它们对于许多公司目前面临的资源限制来说并不切实际。
小语言模型
(SLMs)的兴起通过创建资源占用更小的模型,将质量和成本联系起来。SLMs 是语言模型的一个子集,这些模型倾向于专注于特定领域,并使用更简单的神经架构构建。随着模型的发展模仿人类感知周围环境的方式,模型必须接受多种形式的多模态数据。
Microsoft 宣布在 Phi 系列中
推出新一代开放式 SLM
,并新增两项功能:
Phi-4-mini
Phi-4-multimodal
Phi-4-multimodal 是第一个加入该系列的多模态模型,接受文本、音频和图像数据输入。
这些模型足够小,可以在设备上部署。此版本基于 2024 年 12 月发布的
Phi-4 14B 参数 SLM
的研究版本构建而成,可用于两个新的较小模型的商业用途。
这些新模型可在 Microsoft 的云 AI 平台
Azure AI Foundry
上使用,用于设计、定制和管理 AI 应用和代理。
您可以通过
NVIDIA API Catalog
测试 Phi 系列的每个成员,这是第一个支持
Phi-4 多模态
的每种模式和工具调用的沙盒环境。立即使用预览 NIM 微服务将模型集成到您的应用中。
为何投资 SLM?
SLMs 可在内存和计算受限环境中实现生成式 AI 功能。例如,SLMs 可以直接部署在智能手机和多台消费级设备上。对于必须遵守监管要求的用例,设备端部署可以促进隐私和合规性。
SLM 的其他优势包括降低延迟,因为与质量相似的 LLM 相比,其本身的推理速度更快。SLM 在处理与其训练数据相关的专业任务时往往表现得更好。但是,为了补充对不同任务的泛化和适应性,您可以使用检索增强生成(RAG)或原生函数调用来构建高性能代理系统。
Phi-4-multimodal
Phi-4-multimodal 具有 5.6B 个参数,接受音频、图像和文本推理。这使其能够支持自动语音识别 (ASR)、多模态摘要、翻译、OCR 和视觉推理等用例。该模型在 512 个 NVIDIA A100-80GB GPUs 上进行了为期 21 天的训练。
事实证明,该模型在 ASR 方面表现出色,因为它在
Huggingface OpenASR 排行榜上排名第一
,单词错误率为 6.14%。
词错误率 (WER)
是量化语音识别性能的常用计算方法。WER 计算不正确转录的单词 (替换、插入和删除) 与正确文本相比所占的百分比。
图 1 展示了如何在 NVIDIA API Catalog 中预览图像数据并询问 Phi-4 多模态视觉问答。您还可以了解如何调整参数,例如令牌限制、温度和采样值。您可以使用 Python、JavaScript 和 Bash 生成示例代码,以帮助您更轻松地将模型集成到应用中。
图 1、NVIDIA API Catalog 中的可视化问答演示
您还可以使用一组预构建代理演示工具调用。图 2 显示了用于检索实时天气数据的工具。
图 2、NVIDIA API Catalog 中的工具调用演示
Phi-4-mini
Phi-4-mini 是一个仅文本、密集、仅解码器的 Transformer 模型,具有 3.8B 个参数,并针对聊天进行了优化。它包含一个包含 128K 个令牌的长形式上下文窗口。该模型在 1024 个 NVIDIA A100 80GB GPUs 上进行了为期 14 天的训练。
对于这两个模型,训练数据有意地集中在高质量的教育数据和代码上,从而使模型获得类似于教科书的质量。您可以在模型卡中找到文本、语音和视觉基准测试数据。
推进社区模式
NVIDIA 是开源生态系统的积极贡献者,已根据开源许可发布了数百个项目。NVIDIA 致力于优化社区软件和 open-source licenses 中的项目,如 Phi,它促进了 AI 透明度,并让用户广泛分享在 AI 安全性和弹性方面的工作。
借助 NVIDIA NeMo 平台,这些开放模型可以根据专有数据进行定制,以便针对各行各业的各种 AI 工作流进行高度调整并提高效率。
NVIDIA 和 Microsoft 有着长期的合作伙伴关系,其中包括推动 Azure 上 GPU 创新的多项合作、为使用 NVIDIA RTX GPU 的 PC 开发者提供的集成和优化,等等,包括从生成式 AI 到医疗健康和生命科学的研究。
立即开始使用
请访问
build.nvidia.com/microsoft
,带上您的数据并在 NVIDIA 加速平台上试用 Phi-4。
在 Phi-4 多模态的第一个多模态沙盒中,您可以尝试使用文本、图像、音频以及示例工具调用,以了解此模型在生产环境中的工作原理。 |
https://developer.nvidia.com/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/ | Building a Simple VLM-Based Multimodal Information Retrieval System with NVIDIA NIM | In today’s data-driven world, the ability to retrieve accurate information from even modest amounts of data is vital for developers seeking streamlined, effective solutions for quick deployments, prototyping, or experimentation. One of the key challenges in information retrieval is managing the diverse modalities in unstructured datasets, including text, PDFs, images, tables, audio, video, and so on.
Multimodal AI models address this challenge by simultaneously processing multiple data modalities, generating cohesive and comprehensive output in different forms.
NVIDIA NIM
microservices simplify the secure and reliable deployment of AI
foundation models
for language,
computer vision
, speech, biology, and more.
NIM microservices can be deployed on NVIDIA-accelerated infrastructure anywhere and expose industry-standard APIs for fast integration with applications and popular AI development frameworks, including LangChain and LlamaIndex.
This post helps you get started with building a
vision language model
(VLM) based, multimodal, information retrieval system capable of answering complex queries involving text, images, and tables. We walk you through deploying an application using
LangGraph,
the state-of-the-art
llama-3.2-90b-vision-instruct
VLM, the optimized
mistral-small-24B-instruct
large language model (LLM), and NVIDIA NIM for deployment.
This method of building simple information retrieval systems offers several advantages over traditional ones. The latest VLM NIM microservice enables enhanced contextual understanding by processing lengthy, complex visual documents without sacrificing coherence. The integration of
LangChain’s tool calling
enables the system to create tools, dynamically select and use external tools, and improve the precision of data extraction and interpretation from various sources.
This system is good for enterprise applications because it generates structured outputs, ensuring consistency and reliability in responses. For more information about the implementation steps of this system, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
A simple HTML multimodal retrieval pipeline
The system consists of the following pipelines:
Document ingestion and preprocessing:
Runs a VLM on the images and translates them into text.
Question-answering:
Enables the user to ask questions of the system.
Both pipelines integrate NVIDIA NIM and LangGraph to process and understand text, images, complex visualizations, and tables effectively.
Data ingestion and preprocessing pipeline
This stage parses documents to process text, images, and tables separately. Tables are first converted into images, and images are processed by the NVIDIA-hosted NIM microservice API endpoint for the llama-3.2-90b-vision-instruct VLM to generate descriptive text.
Next, in the document reconstruction step, the descriptive text is merged with the original text of the document, then summarized by an
LLM with long context modeling
capability. In this implementation, llama-3.2-90b-vision-instruct is also used as the LLM, although other LLMs such as mistral-small-24b-instruct can also be deployed.
Finally, the complete text, summaries, images, and their descriptions are stored in a NoSQL database, along with unique document identifiers.
Figure 1. Data ingestion and preprocessing pipeline
LLMs with long context modeling
can process entire documents without fragmentation, enhancing comprehension of the document in a single pass, and capturing relationships and nuances across longer spans of text, leading to more accurate information retrieval.
In contrast, traditional models may handle inputs of up to a few thousand tokens, requiring lengthy documents to be split into smaller chunks to fit within the model’s context window. This chunking process can disrupt coherence and context, making it more difficult to accurately retrieve and rank relevant information.
However, long context modeling presents challenges related to scalability and cost, which must be considered when trading off with higher accuracy.
QA pipeline
All document summaries and their identifiers are compiled into a large prompt. When a query is sent, a
LLM with long context modeling
(mistral-small-24b-instruct in this case) processes the question, evaluates the relevance of each summary to the query, and returns the identifiers of the most relevant documents.
Figure 2. Question-answering pipeline
Next, the most relevant documents are fed into an LLM with long context (mistral-small-24b-instruct). The model generates an answer to the query based on the textual content. If the model identifies that an image may contain pertinent information based on its descriptive text, an additional step is triggered: the original image and the user’s question are sent to the VLM (llama-3.2-90b-vision-instruct), which can provide an answer based on the actual visual content.
Finally, the system combines both textual and visual insights to deliver a comprehensive answer.
Structured outputs ensure that the data returned by the model conforms to a predefined format, making it easier to extract specific information and perform subsequent operations. In contrast, unstructured or variable outputs can introduce ambiguities and difficulties in parsing the model’s responses, hindering automation and integration with other systems.
Generating structured data from models typically requires carefully designed prompts to guide the model into responding in a particular format, such as JSON. However, ensuring consistent adherence to this structure can be challenging due to the models’ natural tendency to generate free-form text.
NVIDIA NIM now natively supports
capabilities for generating structured outputs
. This means that you can rely on built-in functionalities to ensure that the model’s responses are consistently formatted, reducing the need for complex prompt engineering.
Integrating NVIDIA NIM with LangGraph
NVIDIA NIM offers seamless compatibility with popular frameworks and the latest AI models for your applications. The implementation of the pipeline integrates NVIDIA NIM with
LangGraph
, a framework to build agentic applications to determine the control flow, which has been widely adopted by the developer community. To orchestrate the workflow of this pipeline, the graph mainly consists of two nodes:
Assistant node:
Serves as an agent responsible for managing the logic and decision-making process. It interacts with the user’s inputs and invokes the necessary tools.
Tools node:
A collection of tools that perform specific tasks required by the assistant.
Figure 3. Use LangGraph to build an agent for the pipeline
Assistant node
The assistant node is a primary agent that operates according to the workflow outlined in Figure 3. The code of the main agent can be found in the
/NVIDIA/GenerativeAIExamples
GitHub repo.
Here are the agent inputs:
Collection_name
: The set of documents on which to search.
Question
: The user’s question.
document_id
: (Optional) If provided, the agent skips the document ranking phase.
This is the agent process:
Document selection
: If
document_id
is not provided, the agent invokes the
find_best_document_id
tool, which identifies the most relevant document for the user’s question within the specified collection.
Question answering
: With
document_id
, the agent uses the
query_document
tool. This tool attempts to answer the question using the LLM (mistral-small-24b-instruct) based on the text and image descriptions within the document.
Image analysis
(if necessary): If the
query_document
tool indicates that the answer might be in an image (by returning an
image_hash
value), the agent invokes the
query_image
tool. This tool retrieves the actual image and uses a VLM to analyze the image and find the answer.
Tools node
We implemented three key tools for the agent to perform its tasks.
Find_best_document_id
: Identify the most relevant document for the user’s question when
document_id
is not provided. For more information, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
query_document
: Search for an answer within the specified document. If the answer may be in an image, it provides details to query the image. For more information, see the
/NVIDIA/GenerativeAIExamples
GitHub repo.
query_image
: Analyze the actual image using a VLM when the answer might be within the image content. For more information, see the
/NVIDIA/GenerativeAIExamples
.
Binding external tools with models
Tool calling
is a feature that enables language models to integrate and interact with external tools or functions based on the prompts that they receive. This mechanism enables a model to decide which tools to use and how to use them to accomplish specific tasks.
Tool binding
empowers models to extend their capabilities dynamically, selecting appropriate tools during execution to provide more accurate, context-aware responses.
Binding external tools is particularly crucial in agentic frameworks, where agents must choose the appropriate tools and provide the necessary arguments to perform tasks effectively. The benefits of binding external tools include the following:
Extended capabilities
: Models can perform complex operations such as calculations, data retrieval, or API calls, which go beyond mere text generation.
Dynamic tool selection
: The model can assess in real time which tools are most suitable for the task, improving efficiency and relevance.
Seamless integration
: NVIDIA NIM supports the integration of external tools, such as LangChain and LangGraph, with open community models such as Llama 3.3. You can adopt these advanced features without making significant changes to your existing systems.
In this implementation, use LangChain’s
@tool decorator
to create three tools, then use the
.bind_tools
method to bind the tools with models.
Defining structured outputs with Pydantic
By defining the output schema with
Pydantic
and guiding an LLM NIM microservice such as mistral-small-24b-instruct through precise prompts, you ensure that the responses are consistent, reliable, and easily consumable by other components within the system. This approach is essential when integrating the LLM into automated workflows and agent-based frameworks such as LangGraph.
Define the structure
The process begins by defining the structure of the output that you expect from the LLM using Pydantic. This guarantees that the data returned by the model is consistent and can be easily parsed for
downstream processing
.
from typing import List, Optional
from pydantic import BaseModel, Field
class Document(BaseModel):
"""
Represents a document with an identifier and its summary.
"""
id: str = Field(..., description="Hash identifier of the document")
summary: str = Field(..., description="The summary of the document as is")
class BestDocuments(BaseModel):
"""
Contains a list of the best documents to answer the question and their summaries.
"""
documents: List[Document] = Field(..., description="List of best documents")
class Answer(BaseModel):
"""
Represents the answer to the user's question.
"""
answer: str = Field(..., description="Answer to the question posed by the user")
Next, instruct the LLM to generate outputs that align with the defined Pydantic structures. This is achieved by incorporating specific instructions within the prompt and using LangChain’s
with_structured_output
method.
Define the prompt
The
prompt_document_expert
contains detailed instructions for the LLM, specifying the expected input format (Markdown with document summaries) and the required output format (JSON matching the
BestDocuments
schema).
from langchain.chat_models import ChatNVIDIA
from langchain.prompts import ChatPromptTemplate
# Initialize the LLM with desired parameters
llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct
", temperature=0, max_tokens=3000)
# Define the prompt template for the document expert
prompt_document_expert = ChatPromptTemplate.from_messages(
[
(
"system",
f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document.
Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on:
{{documents_context}}
""",
),
(
"human",
"Can you tell me what are the most relevant document ids for this question: {question}"
),
("human", "Tip: Make sure to answer in the correct format"),
]
)
Prepare context
The
get_context
function prepares the input data by retrieving document summaries and formatting them appropriately.
def get_context(input_data: dict) -> dict:
collection_name = input_data.get("collection_name")
question = input_data.get("question")
documents_context = get_document_summaries_markdown(collection_name)
# print(context)
return {"documents_context": documents_context,
"collection_name": collection_name,
"question": question}
Bind the structured output
The
llm.with_structured_output(BestDocuments)
method instructs the LLM to produce output conforming to the
BestDocuments
Pydantic model. This method internally handles the parsing and validation of the LLM’s response, ensuring that the output matches the expected structure.
LangChain’s
with_structured_output
method simplifies the process of binding the model to produce structured outputs. It abstracts the complexity of parsing and validating the LLM’s responses, enabling you to focus on defining the desired output structure and the prompt instructions.
Finally, create a chain to process the input and generate the structured output:
chain_document_expert = (
RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict())
)
End-to-end tool in action
To get started with the multimodal retrieval system, clone the
/NVIDIA/GenerativeAIExamples
GitHub repo and follow the Quick Start guide to set up the service. When it’s up and running, open your web browser and navigate to
http://localhost:7860
to access the system through the Gradio user interface.
For example, explore how the system processes queries on the NVIDIA Technical Blog. Ask a question about a bar chart showing the NVIDIA H100 GPU performance from one of the posts. The
Select Question
field is for evaluation purposes, with the
Ground Truth Answer
field value provided by a human.
Figure 4. Agent multi-document evaluation
This system generates an accurate answer based on the bar chart and also displays the relevant image for reference, such as the chart showing RetinaNet achieving 54%. This ensures precise answers while enabling users to visually verify the referenced data.
Figure 5. Agent result with source graph for verification
Video 1. How to Insert HTML Documents into a Multimodal Retriever Collection Using NVIDIA NIM
Video 2. How to Search Text and Images Within a Multimodal Retriever Collection Using NVIDIA NIM
Challenges and solutions
As data volumes increase, so does the complexity of processing and retrieving relevant information. Handling large datasets efficiently is essential to maintaining performance and ensuring user satisfaction. In this information retrieval system, the sheer amount of document summaries can exceed the context window of even long-context models, making it challenging to process all summaries in a single prompt.
Processing large volumes of data also demands considerable computational resources, which can result in higher costs and increased latency. Optimizing resource utilization is crucial to delivering fast and accurate responses while minimizing unnecessary expenses.
Hierarchical document reranking solution
To address scalability challenges, we implemented a hierarchical approach in the initial document reranking phase. Instead of processing all document summaries simultaneously, we divided them into manageable batches that fit within the model’s context window. The process involves multiple stages:
Batch processing
: Summaries are grouped into batches that the model can handle without exceeding the prompt size limitations.
Intermediate reranking
: The model evaluates each batch separately, ranking the documents within each group.
Selection of top candidates
: The most relevant documents from each batch are selected to proceed to the next stage.
Final reranking
: The top candidates from all batches are combined and re-evaluated to identify the most relevant document.
Considering both scalability and cost concerns, this hierarchical approach ensures that all documents are considered without exceeding the model’s capacity. It not only improves scalability, but also boosts efficiency by narrowing down the candidate documents systematically until the most relevant one is identified.
Future prospects with smaller models
Using language models, especially those with long-context capabilities, involves processing a large number of tokens, which can incur significant costs. Each token processed adds to the overall expense, making cost management a critical consideration when deploying these systems at scale.
The concern about cost is indeed valid. However, the landscape of language models is rapidly evolving, with smaller models becoming increasingly capable and efficient. As these advancements continue, these smaller models may offer similar performance at a fraction of the cost.
Conclusion
This post discussed the implementation of a simple multimodal information retrieval pipeline that uses NVIDIA NIM and LangGraph. The pipeline offers several advantages over existing information retrieval methods:
Enhanced comprehension of documents
A multimodal model to extract information from images, tables, and text
Seamless integration of external tools
Generation of consistent and structured output
Using NVIDIA NIM and LangGraph, you can build on this work and customize it to suit specific needs. To get started, you can find source code in the
/NVIDIA/GenerativeAIExamples
GitHub repo.
NVIDIA NIM
also offers access to more models optimized for NVIDIA GPUs. You can explore
NVIDIA NeMo
, a scalable
generative AI
framework designed for researchers and PyTorch developers working on LLMs, multimodal models, and more.
If you are working with a large corpora of enterprise data and are looking to develop enterprise-ready, real-time
multilingual and cross-lingual information retrieval systems
to generate context-aware responses, learn more about
NVIDIA NeMo Retriever
. | https://developer.nvidia.com/zh-cn/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/ | 使用 NVIDIA NIM 构建基于 VLM 的简单多模态信息检索系统 | 在当今数据驱动的世界中,即使是从少量数据中检索准确信息的能力,对于寻求精简、有效的快速部署、原型设计或实验解决方案的开发者来说也至关重要。信息检索领域的主要挑战之一是管理非结构化数据集中的各种模式,包括文本、PDF、图像、表格、音频、视频等。
多模态 AI 模型通过同时处理多个数据模式来应对这一挑战,以不同的形式生成连贯一致的全面输出。
NVIDIA NIM
微服务可简化 AI
基础模型
在语言、
计算机视觉
、语音、生物学等领域的安全可靠部署。
NIM 微服务可随时随地部署在 NVIDIA 加速基础设施上,并提供行业标准 API,以快速集成应用和热门 AI 开发框架 (包括 LangChain 和 LlamaIndex)。
本文将帮助您着手构建基于
视觉语言模型
(VLM)的多模态信息检索系统,该系统能够回答涉及文本、图像和表格的复杂查询。我们将引导您使用
LangGraph
部署应用程序、先进的
llama-3.2-90b-vision-instruct
VLM、经过优化的
mistral-small-24B-instruct
大语言模型(LLM),以及用于部署的 NVIDIA NIM。
与传统方法相比,这种构建简单信息检索系统的方法具有许多优势。最新的 VLM NIM 微服务可在不牺牲一致性的情况下处理冗长而复杂的视觉文档,从而增强上下文理解。通过集成
LangChain 的工具调用
,系统能够创建工具,动态选择和使用外部工具,并提高从各种来源提取和解释数据的精度。
此系统适用于企业应用,因为它生成结构化输出,确保响应的一致性和可靠性。有关此系统的实施步骤的更多信息,请参阅
/NVIDIA/GenerativeAIExamples
Github 仓库。
简单的 HTML 多模态检索工作流
该系统由以下管道组成:
文档提取和预处理:在图像上运行 VLM 并将其转换为文本。
问答:允许用户提出系统问题。
这两个工作流均集成了 NVIDIA NIM 和 LangGraph,可有效处理和理解文本、图像、复杂的可视化效果和表格。
数据提取和预处理 pipeline
此阶段会解析文档,分别处理文本、图像和表格。首先将表格转换为图像,然后由 NVIDIA 托管的 NIM 微服务 API 端点为 llama-3.2-90b-vision-instruct VLM 处理图像,以生成描述性文本。
接下来,在文档重建步骤中,描述性文本将与文档的原始文本合并,然后由具有长上下文建模功能的 LLM 进行汇总。在此实施中,还可将 Llama-3.2-90b-vision-instruct 用作 LLM,不过也可部署其他 LLM(例如 mistral-small-24b-instruct)。
最后,完整的文本、摘要、图像及其说明将存储在 NoSQL 数据库中,以及唯一的文档标识符。 图 1. 数据提取和预处理管道
采用长上下文建模的 LLMs 可以处理整个文档,而不会出现碎片,从而在单个通道中增强对文档的理解,并捕获更长的文本跨度中的关系和细微差别,从而实现更准确的信息检索。
相比之下,传统模型可能会处理多达数千个 tokens 的输入,因此需要将冗长的文档拆分成较小的块,以适应模型的上下文窗口。这种分块过程会破坏一致性和上下文,使准确检索相关信息并对其进行排名变得更加困难。
但是,长上下文建模会带来与可扩展性和成本相关的挑战,在以更高的准确性进行权衡时必须考虑这些挑战。
QA 管道
所有文档摘要及其标识符都会编译成一个大型提示。发送查询时,使用长上下文建模(本例中为 mistral-small-24b-instruct)的 LLM 会处理问题,评估每个摘要与查询的相关性,并返回最相关文档的标识符。 图 2、问题回答管道
接下来,将最相关的文档输入到具有长上下文 (mistral-small-24b-instruct) 的 LLM 中。模型会根据文本内容生成查询答案。如果模型根据描述性文本识别出图像可能包含相关信息,则会触发另一个步骤:将原始图像和用户的问题发送至 VLM (llama-3.2-90b-vision-instruct),VLM 可以根据实际视觉内容提供答案。
最后,该系统将文本和视觉见解相结合,以提供全面的答案。
结构化输出可确保模型返回的数据符合预定义的格式,从而更轻松地提取特定信息并执行后续运算。相比之下,非结构化或可变输出会在解析模型的响应时引入模糊性和困难,从而阻碍自动化以及与其他系统的集成。
从模型生成结构化数据通常需要精心设计的提示,以指导模型以特定格式(例如 JSON)做出响应。但是,由于模型自然倾向于生成自由格式文本,因此确保一致性遵循此结构可能具有挑战性。
NVIDIA NIM 现在原生支持生成结构化输出的功能。这意味着,您可以依靠内置功能来确保模型的响应格式保持一致,从而减少对复杂提示工程的需求。
将 NVIDIA NIM 与 LangChain 集成
NVIDIA NIM 为您的应用提供与热门框架和最新 AI 模型的无缝兼容性。该流程的实施将 NVIDIA NIM 与 LangChain 相集成。LangChain 是一个用于构建代理应用以确定控制流的框架,已被开发者社区广泛采用。为编排此工作流的工作流,该图形主要由两个节点组成:
助理节点:充当负责管理逻辑和决策过程的代理。它与用户的输入进行交互,并调用必要的工具。
工具节点:用于执行助手所需特定任务的工具集合。
图 3、使用 LangGraph 为管道构建代理
助理节点
助手节点是根据图 3 中概述的工作流程运行的主代理。主代理的代码可在
/NVIDIA/GenerativeAIExamples
GitHub repo 中找到。
智能体输入如下:
Collection_name
:要在其上搜索的文档集。
Question
:用户的问题。
document_id
:(可选) 如果提供,代理会跳过文档排名阶段。
这是智能体流程:
文档选择
:如果未提供
document_id
,代理会调用
find_best_document_id
工具,该工具可在指定集合中识别与用户问题最相关的文档。
问题回答:对于
document_id
,代理使用
query_document
工具。此工具会尝试使用 LLM (mistral-small-24b-instruct) 根据文档中的文本和图像描述来回答此问题。
图像分析 (如有必要):如果
query_document
工具表明答案可能在图像中 (通过返回
image_hash
值),代理会调用
query_image
工具。此工具会检索实际图像,并使用 VLM 分析图像并寻找答案。
工具节点
我们为智能体实施了三个关键工具来执行任务。
Find_best_document_id
:在未提供
document_id
时,确定与用户问题最相关的文档。有关更多信息,请参阅
/NVIDIA/GenerativeAIExamples
Github 存储库。
query_document
:在指定文档中搜索答案。如果答案可能在图像中,则会提供查询图像所需的详细信息。有关更多信息,请参阅
/NVIDIA/GenerativeAIExamples
GitHub 存储库。
query_image
:当答案可能在图像内容中时,使用 VLM 分析实际图像。有关更多信息,请参阅/
NVIDIA/GenerativeAIExamples
。
将外部工具与模型绑定
工具调用是一项功能,可让语言模型根据收到的提示集成外部工具或函数并与之交互。此机制使模型能够决定使用哪些工具以及如何使用这些工具来完成特定任务。
工具绑定使模型能够动态扩展其功能,在执行期间选择合适的工具,以提供更准确的上下文感知响应。
绑定外部工具在代理框架中尤为重要,在这种框架中,代理必须选择合适的工具并提供有效执行任务所需的参数。绑定外部工具的优势包括:
扩展功能
:模型可以执行计算、数据检索或 API 调用等复杂操作,而不仅仅是文本生成。
动态工具选择
:模型可以实时评估哪些工具最适合任务,从而提高效率和相关性。
无缝集成:NVIDIA NIM 支持将 LangChain 和 LangGraph 等外部工具与 Llama 3.3 等开放式社区模型集成。您可以采用这些高级功能,而无需对现有系统进行重大更改。
在此实现中,使用 LangChain 的
@tool
装饰器创建三个工具,然后使用
.bind_tools
方法将这些工具与模型绑定。
使用 PyTorch 定义结构化输出
通过使用
Pydantic
定义输出模式,并通过精确的提示引导 LLM NIM 微服务 (例如 mistral-small-24b-instruct) ,您可以确保响应一致、可靠,并且易于被系统中的其他组件使用。当将 LLM 集成到自动化工作流和基于代理的框架 (例如 LangChain) 时,这种方法至关重要。
定义结构
首先,使用 Pydantic 定义 LLM 的预期输出结构。这可确保模型返回的数据保持一致,并可轻松解析以进行下游处理。
from typing import List, Optional
from pydantic import BaseModel, Field
class Document(BaseModel):
"""
Represents a document with an identifier and its summary.
"""
id: str = Field(..., description="Hash identifier of the document")
summary: str = Field(..., description="The summary of the document as is")
class BestDocuments(BaseModel):
"""
Contains a list of the best documents to answer the question and their summaries.
"""
documents: List[Document] = Field(..., description="List of best documents")
class Answer(BaseModel):
"""
Represents the answer to the user's question.
"""
answer: str = Field(..., description="Answer to the question posed by the user")
接下来,指示 LLM 生成与定义的 Pydantic 结构保持一致的输出。这是通过在提示符中加入特定指令并使用 LangChain 的
with_structured_output
方法实现的。
定义提示
prompt_document_expert
包含 LLM 的详细说明,可指定预期的输入格式 (带有文档摘要的 Markdown) 和所需的输出格式 (与
BestDocuments
架构匹配的 JSON)。
from langchain.chat_models import ChatNVIDIA
from langchain.prompts import ChatPromptTemplate
# Initialize the LLM with desired parameters
llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct
", temperature=0, max_tokens=3000)
# Define the prompt template for the document expert
prompt_document_expert = ChatPromptTemplate.from_messages(
[
(
"system",
f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document.
Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on:
{{documents_context}}
""",
),
(
"human",
"Can you tell me what are the most relevant document ids for this question: {question}"
),
("human", "Tip: Make sure to answer in the correct format"),
]
)
准备上下文
get_context
函数通过检索文档摘要并对其进行适当格式化来准备输入数据。
def get_context(input_data: dict) -> dict:
collection_name = input_data.get("collection_name")
question = input_data.get("question")
documents_context = get_document_summaries_markdown(collection_name)
# print(context)
return {"documents_context": documents_context,
"collection_name": collection_name,
"question": question}
绑定结构化输出
llm.with_structured_output(BestDocuments)
方法指示 LLM 生成符合
BestDocuments
Pydantic 模型的输出。此方法在内部处理 LLM 响应的解析和验证,确保输出与预期结构相匹配。
LangChain 的
with_structured_output
方法简化了绑定模型以生成结构化输出的过程。它抽象化了解析和验证 LLM 响应的复杂性,使您能够专注于定义所需的输出结构和提示指令。
最后,创建一个链来处理输入并生成结构化输出:
chain_document_expert = (
RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict())
)
端到端工具的实际应用
要开始使用多模态检索系统,请克隆
/NVIDIA/GenerativeAIExamples
GitHub 存储库,然后按照快速入门指南设置服务。在服务启动并运行时,打开 Web 浏览器并导航至
http://localhost:7860
,通过 Gradio 用户界面访问系统。
例如,在 NVIDIA 技术博客上探索系统如何处理查询。在其中一篇博文中,您可以询问有关显示 NVIDIA H100 GPU 性能的条形图的问题。“
Select Question
” 字段用于评估,真值答案字段值由人类提供。 图 4、Agent 多文档评估
该系统会根据条形图生成准确的答案,并显示相关图像以供参考,例如图表显示 RetinaNet 达到了 54%。这可确保准确的答案,同时使用户能够以直观方式验证引用数据。 图 5、Agent 结果与用于验证的源图形
视频1. 如何使用 NVIDIA NIM 将 HTML 文档插入多模态检索器集合
视频2. 如何使用 NVIDIA NIM 在多模态检索器集合中搜索文本和图像
挑战和解决方案
随着数据量的增加,处理和检索相关信息的复杂性也随之增加。高效处理大型数据集对于保持性能和确保用户满意度至关重要。在此信息检索系统中,文档摘要的数量甚至可能超过长上下文模型的上下文窗口,这使得在单个提示中处理所有摘要具有挑战性。
处理大量数据还需要大量计算资源,这可能会导致成本增加和延迟增加。优化资源利用率对于提供快速准确的响应,同时最大限度地减少不必要的支出至关重要。
分层文档重新排序解决方案
为应对可扩展性挑战,我们在初始文档重新排序阶段实施了分层方法。我们不会同时处理所有文档摘要,而是将其分为可管理的批量,以适应模型的上下文窗口。此过程涉及多个阶段:
批量处理
:将摘要分组为模型可以处理的批量,且不会超过提示大小限制。
中级重新排序
:模型分别评估每个批次,对每个组中的文档进行排序。
选择最优秀的候选文档
:从每个批次中选择最相关的文档,以进入下一阶段。
最终重新排名
:系统会对所有批次中排名靠前的候选文档进行合并和重新评估,以确定相关性最高的文档。
考虑到可扩展性和成本问题,这种分层方法可确保在不超出模型容量的情况下考虑所有文档。它不仅提高了可扩展性,而且还通过系统缩小候选文档的范围来提高效率,直到识别出最相关的文档。
小型模型的未来前景
使用语言模型,尤其是具有长上下文功能的语言模型,涉及处理大量 token,而这可能会产生巨大的成本。处理的每个 token 都会增加总支出,因此在大规模部署这些系统时,成本管理是一个重要考虑因素。
对成本的担心确实是站得住脚的。然而,语言模型的格局正在迅速演变,小型模型的功能和效率也在不断提升。随着这些进步的继续,这些较小的模型可能以远低于成本提供相似的性能。
结束语
本文讨论了如何使用 NVIDIA NIM 和 LangChain 实现简单的多模态信息检索工作流。与现有的信息检索方法相比,Pipeline 具有以下优势:
增强对文档的理解
用于从图像、表格和文本中提取信息的多模态模型
无缝集成外部工具
生成一致的结构化输出
借助 NVIDIA NIM 和 LangGraph,您可以在此基础上进行构建并对其进行定制,以满足特定需求。首先,您可以在
/NVIDIA/GenerativeAIExamples
GitHub repo 中找到源代码。
NVIDIA NIM
还支持访问更多针对 NVIDIA GPU 优化的模型。您可以探索
NVIDIA NeMo
,这是一个可扩展的生成式
AI
框架,专为研究 LLM、多模态模型等的研究人员和 PyTorch 开发者而设计。
如果您正在处理大型企业数据语料库,并希望开发企业就绪的实时多语种和跨语言信息检索系统来生成上下文感知响应,请详细了解
NVIDIA NeMo Retriever
。 |
https://developer.nvidia.com/blog/tag/inference-performance/ | Inference Performance | No content found | https://developer.nvidia.com/zh-cn/blog/tag/inference-performance/ | Inference Performance | No content found |
https://developer.nvidia.com/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/ | Optimizing Qwen2.5-Coder Throughput with NVIDIA TensorRT-LLM Lookahead Decoding | Large language models (LLMs)
that specialize in coding have been steadily adopted into developer workflows. From pair programming to self-improving
AI agents
, these models assist developers with various tasks, including enhancing code, fixing bugs, generating tests, and writing documentation.
To promote the development of open-source LLMs, the Qwen team recently released Qwen2.5-Coder, a family of advanced LLMs for code generation, reasoning, and fixing across popular programming languages. This post explores the benefits of inference optimizations for Qwen2.5-Coder models supported in
NVIDIA TensorRT-LLM
, and the ease of deployment with
NVIDIA NIM
for transformative potential and coding efficiency.
Qwen2.5-Coder models
The Qwen2.5-Coder models have achieved state-of-the-art performance across popular academic benchmarks.
NVIDIA TensorRT-LLM
has optimized three popular models from the Qwen2.5-Coder family—the 1.5B, 7B, and 32B versions—for high throughput and low latency. TensorRT-LLM is a library for fast, efficient LLM inference and includes optimizations such as dynamic
inflight batching
,
KV caching
,
KV cache reuse
, and several speculative decoding techniques, among others.
These optimizations help deliver performance improvements for the Qwen2.5-Coder models on popular programming languages such as Python, C++, Java, Bash, Javascript, TypeScript, and Go, reaching a wider range of developers. This post explores the lookahead decoding optimization and the performance boost it helps achieve. Without any additional training or need for additional draft models, developers can leverage the TensorRT-LLM
high-level API
to speed up Qwen2.5-Coder inference to generate multiline autocode completion.
Lookahead decoding
Lookahead decoding
is a speculative decoding technique that addresses the slow autoregressive nature of LLMs. Each autoregressive decoding step only generates one token at a time, not leveraging the massive parallel processing power of NVIDIA GPUs, leading to low GPU utilization and lower throughput. We’ve previously discussed the throughput boost achievable with
draft target speculative decoding
, and here we discuss the benefits of leveraging TensorRT-LLM lookahead decoding implementation using the Qwen2.5-Coder models as an example.
Unlike the single-token generation in autoregressive decoding, lookahead decoding generates multiple tokens simultaneously, adequately utilizing the parallel processing capabilities of the GPU, leveraging computation (FLOPs) for latency reduction. Moreover, lookahead decoding doesn’t require a separate draft model that’s needed for draft target speculative decoding.
Each decoding step is divided into two parallel branches, the lookahead branch and the verification branch. Using the
Jacobi iteration method
, a classic nonlinear systems solver, the lookhead branch performs parallel decoding for future tokens by generating n-grams. The verification branch selects and verifies the promising n-gram candidates generated by the lookahead branch.
The lookahead algorithm is configured using three key parameters: window size (W), n-gram size (N), and verification set size (G).
Window size (W):
Represents the lookahead window size, which determines how many future tokens the algorithm attempts to predict in each step. Larger window size enables the model to look further, helping generate more tokens in a single pass. This effectively improves throughput performance while utilizing GPU computation FLOPs efficiently.
N-gram size (N):
Represents the size of the n-grams used in the lookahead process. For example, a 5-gram is a contiguous sequence of 5 future tokens. Together with the window size, it creates a fixed-sized, 2D window for the lookahead branch to generate n-grams from the Jacobi iteration trajectory.
Verification set size (G):
Represents the maximum number of speculations or candidate n-grams that the algorithm considers in each step for verification. It balances the trade-off between computation efficiency and exploring more possibilities.
Figure 1. Lookahead decoding workflow with (W, N, G) = (5, 3, 2). Image credit:
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
Lookahead performance greatly depends on the base model, hardware, batch size, sequence length, and the dataset. It is recommended to profile various configurations to find the best (W, N, G) configuration given the setup. Optimal (W, N, G) tuple configuration enables lookahead decoding to deliver improved throughput performance without the need for any additional training, fine-tuning or draft models.
Through our experiments on (W, N, G) configuration values sweep, we achieve 3.6x and 1.6x throughput speedups for Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models, respectively. These speedups are measured in throughput (tokens/second) compared to baseline (no lookahead speculative decoding) on
NVIDIA H100 Tensor Core GPUs
, as shown in Figure 2.
Figure 2. Qwen2.5-Coder models throughput boost on NVIDIA DGX H100 with TensorRT-LLM lookahead decoding
Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H100, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder-32B-Instruct, batch size=1, TensorRT-LLM version 0.15.0.
Similar throughput speedups are achieved on
NVIDIA H200 Tensor Core GPUs
. With their higher memory bandwidth, they also help raise the baseline throughput performance leading to slightly lower speedups as compared to H100 GPUs (Figure 3).
Figure 3. Qwen2.5-Coder models throughput boost on NVIDIA DGX H200 with TensorRT-LLM lookahead decoding
Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H200, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder 32B Instruct, batch size=1, TensorRT-LLM version 0.15.0.
Steps to run lookahead decoding with TensorRT-LLM
To reproduce these performance gains using lookahead speculative decoding within TensorRT-LLM, follow the steps below.
# Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows)
sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools
&& pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com
Then run lookahead decoding in TensorRT-LLM using the high-level API.
# Command for Qwen2.5-Coder-7B-Instruct
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig,
LookaheadDecodingConfig, SamplingParams)
def main():
"""The end user can customize the build configuration with the
build_config class. # Max draft length is based on (W,N,G) values and
calculated as: (W + G -1) * (N-1) + ( N<=1 ? 0: N-2)"""
build_config = BuildConfig(max_batch_size = 128,
max_input_len = 2048,
max_seq_len = 4096,
max_num_tokens = 16384,
max_draft_len = 111)
build_config.plugin_config.reduce_fusion = True
build_config.plugin_config.use_paged_context_fmha = True
build_config.plugin_config.multiple_profiles = True
# The configuration for lookahead decoding
lookahead_config = LookaheadDecodingConfig(max_window_size=8,
max_ngram_size=8,
max_verification_set_size=8)
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4)
llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct",
kv_cache_config=kv_cache_config,
build_config=build_config,
speculative_config=lookahead_config)
prompt = """Write a C++ program to find the nth Fibonacci number using
recursion. Now we define a sequence of numbers in which each number is the
sum of the three preceding ones. The first three numbers are 0, -1, -1.
Write a program to find the nth number."""
sampling_params = SamplingParams(lookahead_config=lookahead_config)
output = llm.generate(prompt, sampling_params=sampling_params)
print(output)
if __name__ == '__main__':
main()
Summary
Lookahead speculative decoding enables throughput boost on LLMs without any additional training, fine-tuning, or draft models. We presented benchmarked performance improvements on Qwen2.5-Coder models. Visit build.nvidia.com to
try the Qwen2.5-Coder models
optimized with NVIDIA TensorRT-LLM for free. Qwen2.5-Coder models optimized with TensorRT-LLM have also been packaged as downloadable
NVIDIA NIM
microservices for ease of deployment.
Acknowledgments
We would like to thank Liwei Ma, Fanrong Li, Nikita Korobov, and Martin Marciniszyn Mehringer for their efforts in supporting this post. | https://developer.nvidia.com/zh-cn/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/ | 使用 NVIDIA TensorRT-LLM 前瞻性解码优化 Qwen2.5-Coder 吞吐量 | 专注于编码的
大语言模型(LLMs)
已稳步应用于开发者工作流程。从配对编程到自我改进的
AI 智能体
,这些模型可帮助开发者完成各种任务,包括增强代码、修复错误、生成测试和编写文档。
为促进开源 LLM 的开发,Qwen 团队最近发布了 Qwen2.5-Coder,这是一系列先进的 LLM,用于跨热门编程语言的代码生成、推理和修复。本文将探讨针对
NVIDIA TensorRT-LLM 支持
的 Qwen2.5-Coder 模型进行推理优化的优势,以及借助 NVIDIA NIM 轻松部署以提升变革潜力和编码效率的好处。
Qwen2.5-Coder 模型
Qwen2.5-Coder 模型在热门的学术基准测试中取得了出色的性能。
NVIDIA TensorRT-LLM
已对 Qwen2.5-Coder 系列的三种热门模型 (1.5B、7B 和 32B 版本) 进行优化,以实现高吞吐量和低延迟。TensorRT-LLM 是一个用于快速、高效 LLM 推理的库,包含动态机上
批处理
、
KV 缓存
、
KV 缓存重复使用
和几种预测性解码技术等优化功能。
这些优化有助于提高 Qwen2.5-Coder 模型在 Python、C++、Java、Bash、Javascript、TypeScript 和 Go 等热门编程语言中的性能,从而使更多开发者受益。本文将探讨 lookahead decoding 优化的前瞻性及其有助于实现的性能提升。开发者无需进行任何额外训练,也无需额外的草图模型,即可利用 TensorRT-LLM 高级 API 加速 Qwen2.5-Coder 推理,以生成多行自动代码完成。
解码前景展望
解码前瞻是一种预测性解码技术,可解决 LLMs 缓慢自回归的问题。每个自回归解码步骤一次仅生成一个 token,无法利用 NVIDIA GPUs 强大的并行处理能力,导致 GPU 利用率低、吞吐量低。我们之前讨论过通过草稿目标预测解码可以实现的吞吐量提升,在这里,我们讨论了以 Qwen2.5-Coder 模型为例,利用 TensorRT-LLM lookahead decoding 实现的优势。
与自回归解码中的单令牌生成不同,前瞻性解码可同时生成多个令牌,充分利用 GPU 的并行处理能力,利用计算(FLOPs)降低延迟。此外,对于草稿目标预测性解码,前瞻性解码不需要使用单独的草稿模型。
每个解码步骤分为两个并行分支,即 lookahead 分支和验证分支。通过使用经典的非线性系统求解器
Jacobi 迭代法
,lookahead 分支通过生成 n-grams 来对未来的 tokens 执行并行解码。验证分支选择并验证由 lookahead 分支生成的有前景的 n-gram 候选项。
前瞻性算法使用三个关键参数进行配置:窗口大小(W),n-gram 大小(N)和验证集大小(G)。
窗口大小 (W):表示前瞻性窗口大小,它决定了算法在每个步骤中尝试预测的未来令牌数量。窗口大小越大,模型的视野越广,一次传递就能生成更多 token。这可有效提高吞吐量性能,同时高效利用 GPU 计算 FLOPs。
N-gram size (N):表示前瞻性流程中使用的 N – gram 的大小。例如,5-gram 是由 5 个未来令牌组成的连续序列。它与窗口大小一起为前瞻性分支创建了一个大小固定的 2D 窗口,以便从 Jacobi 迭代轨迹生成 n-gram。
验证集大小 (G):表示算法在每个验证步骤中考虑的推测或候选 n-gram 的最大数量。它平衡了计算效率与探索更多可能性之间的权衡。
图 1、使用 (W,N,G) = (5,3,2) 展望解码工作流程。图片来源:
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
未来的性能很大程度上取决于基础模型、硬件、批量大小、序列长度和数据集。建议分析各种配置,以找到给定设置的最佳 (W,N,G) 配置。最佳 (W,N,G) 元组配置支持 lookahead 解码前瞻性,无需任何其他训练、fine-tuning 或 draft 模型,即可提供更高的吞吐量性能。
通过对 (W,N,G) 配置值扫描的实验,我们分别为 Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型实现了 3.6 倍和 1.6 倍的吞吐量加速。这些加速是通过
NVIDIA H100 Tensor Core GPUs
上的吞吐量 (tokens/second) 与基线 (无 lookahead speculative decoding) 的比较进行测量的,如 Figure 2 所示。
图 2、借助 TensorRT-LLM 超前解码,Qwen2.5-Coder 模型可提升 NVIDIA DGX H100 上的吞吐量
数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H100,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder-32B-Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。
NVIDIA H200 Tensor Core GPU
也实现了类似的吞吐量加速。凭借更高的显存带宽,它们还有助于提高基准吞吐量性能,从而使速度略低于 H100 GPU (图 3)。
图 3、Qwen2.5-Coder 模型在 NVIDIA DGX H200 上通过 TensorRT-LLM 超前解码实现吞吐量提升
数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H200,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder 32B Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。
使用 TensorRT-LLM 进行解码的前瞻性运行步骤
要在 TensorRT-LLM 中使用预测性解码重现这些性能提升,请执行以下步骤。
# Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows)
sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools
&& pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com
然后,使用高级 API 在 TensorRT-LLM 中运行 lookahead decoding。
# Command for Qwen2.5-Coder-7B-Instruct
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig,
LookaheadDecodingConfig, SamplingParams)
def main():
"""The end user can customize the build configuration with the
build_config class. # Max draft length is based on (W,N,G) values and
calculated as: (W + G -1) * (N-1) + ( N<=1 ? 0: N-2)"""
build_config = BuildConfig(max_batch_size = 128,
max_input_len = 2048,
max_seq_len = 4096,
max_num_tokens = 16384,
max_draft_len = 111)
build_config.plugin_config.reduce_fusion = True
build_config.plugin_config.use_paged_context_fmha = True
build_config.plugin_config.multiple_profiles = True
# The configuration for lookahead decoding
lookahead_config = LookaheadDecodingConfig(max_window_size=8,
max_ngram_size=8,
max_verification_set_size=8)
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4)
llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct",
kv_cache_config=kv_cache_config,
build_config=build_config,
speculative_config=lookahead_config)
prompt = """Write a C++ program to find the nth Fibonacci number using
recursion. Now we define a sequence of numbers in which each number is the
sum of the three preceding ones. The first three numbers are 0, -1, -1.
Write a program to find the nth number."""
sampling_params = SamplingParams(lookahead_config=lookahead_config)
output = llm.generate(prompt, sampling_params=sampling_params)
print(output)
if __name__ == '__main__':
main()
总结
前瞻性预测解码可提高 LLMs 的吞吐量,而无需任何其他训练、微调或草稿模型。我们展示了 Qwen2.5-Coder 模型的基准性能改进。
访问 build.nvidia.com,免费试用通过 NVIDIA TensorRT-LLM 优化的 Qwen2.5-Coder 模型。
为便于部署, 我们还将通过 TensorRT-LLM 优化的 Qwen2.5-Coder 模型打包为可下载的
NVIDIA NIM
微服务。
致谢
在此,
我们要感谢马立伟、李凡融、Nikita Korobov 和 Martin Marciniszyn Mehringer 为支持这篇博文所付出的努力
。 |
https://developer.nvidia.com/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/ | Optimize AI Inference Performance with NVIDIA Full-Stack Solutions | The explosion of AI-driven applications has placed unprecedented demands on both developers, who must balance delivering cutting-edge performance with managing operational complexity and cost, and AI infrastructure.
NVIDIA is empowering developers with full-stack innovations—spanning chips, systems, and software—that redefine what’s possible in
AI inference
, making it faster, more efficient, and more scalable than ever before.
Easily deploy high-throughput, low-latency inference
Six years ago, NVIDIA set out to create an AI inference server specifically designed for developers building high-throughput, latency-critical production applications. At the time, many developers were grappling with custom, framework-specific servers that increased complexity, drove up operational costs, and struggled to meet stringent service-level agreements for latency and throughput.
To address this, NVIDIA developed the
NVIDIA Triton Inference Server
, an open-source platform capable of serving models from any AI framework. By consolidating framework-specific inference servers, Triton streamlined AI inference deployment and increased AI prediction capacity. This approach has made Triton one of the most widely adopted NVIDIA
open-source projects
, now used by
hundreds of leading organizations to deploy production AI models efficiently.
In addition to Triton, NVIDIA offers a broad ecosystem of AI inference solutions. For developers seeking powerful, customizable tools,
NVIDIA TensorRT
provides a high-performance deep learning inference library with APIs that enable fine-grained optimizations.
NVIDIA NIM
microservices provide a flexible framework for deploying AI models across the cloud, data centers, or workstations.
Optimizations for AI inference workloads
Inference is a full-stack problem today, requiring high-performance infrastructure and efficient software to make effective use of that infrastructure. In addition, inference workloads continue to become more challenging, as model sizes continue to grow and latency constraints tighten, all while the number of users leveraging these AI services also continues to increase. And with the introduction of inference time scaling, a new paradigm for scaling model intelligence, more compute is being applied during inference to enhance model performance.
These trends mean that it’s important to continue advancing delivered inference performance, even on the same underlying hardware platform. By combining established methods like model parallelism, mixed-precision training, pruning, quantization, and data preprocessing optimization with cutting-edge advancements in inference technologies, developers can achieve remarkable gains in speed, scalability, and cost-effectiveness.
The TensorRT-LLM library incorporates many state-of-the-art features that accelerate inference performance for
large language models (LLMs)
, which are outlined below.
Prefill and KV cache optimizations
Key-value (KV) cache early reuse
: By reusing system prompts across users, the KV Cache Early Reuse feature accelerates time-to-first-token (TTFT) by up to 5x. Flexible KV block sizing and efficient eviction protocols ensure seamless memory management, enabling faster response times even in multi-user environments.
Chunked prefill
: For smarter deployment, chunked prefill divides the prefill phase into smaller tasks, enhancing GPU utilization and reducing latency. This innovation simplifies deployment and ensures consistent performance, even with fluctuating user demands.
Supercharging multiturn interactions
:
The NVIDIA GH200 Superchip architecture enables efficient KV cache offloading, improving TTFT by up to 2x in multiturn interactions with Llama models while maintaining high throughput.
Decoding optimization
Multiblock attention for long sequences
: Addressing the challenge of long input sequences, TensorRT-LLM multiblock attention maximizes GPU utilization by distributing tasks across streaming multiprocessors (SMs). This technique improves system throughput by more than 3x, enabling support for larger context lengths without additional hardware costs.
Speculative decoding for accelerated throughput
: Leveraging a smaller draft model alongside a larger target model, speculative decoding enables up to a 3.6x improvement in inference throughput. This approach ensures high-speed, high-accuracy generation of model outputs, streamlining workflows for large-scale AI applications.
Speculative decoding with Medusa:
The Medusa speculative decoding algorithm is available as part of TensorRT-LLM optimizations. By predicting multiple subsequent tokens simultaneously, Medusa boosts throughput for Llama 3.1 models by up to 1.9x on the NVIDIA HGX H200 platform. This innovation enables faster responses for applications that rely on LLMs, such as customer support and content creation.
Multi-GPU inference
MultiShot communication protocol
: Traditional Ring AllReduce operations can become a bottleneck in multi-GPU scenarios. TensorRT-LLM MultiShot, powered by
NVSwitch
, reduces communication steps to just two, irrespective of GPU count. This innovation boosts AllReduce speeds by up to 3x, making low-latency inference scalable and efficient.
Pipeline parallelism for high-concurrency efficiency
:
Parallelism techniques require that GPUs be able to transfer data quickly and efficiently, necessitating a robust GPU-to-GPU interconnect fabric for maximum performance. Pipeline parallelism on
NVIDIA H200 Tensor Core
GPUs achieved a 1.5x throughput increase for Llama 3.1 405B and demonstrated their versatility with a 1.2x speedup for Llama 2 70B in MLPerf Inference benchmarks. MLPerf Inference is a suite of industry-standard inference performance benchmarks developed by the MLCommons consortium.
Large NVLink domains:
The NVIDIA GH200 NVL32 system, powered by 32 NVIDIA GH200 Grace Hopper Superchips connected using the NVLink Switch system, and with TensorRT-LLM improvements, delivers up to 3x faster TTFT for Llama models. With up to 127 petaflops of AI compute, this next-generation architecture sets the stage for unprecedented real-time responsiveness in AI applications.
Quantization and lower-precision compute
NVIDIA TensorRT Model Optimizer for precision and performance:
The NVIDIA custom FP8 quantization recipe in the
NVIDIA TensorRT Model Optimizer
delivers up to 1.44x higher throughput without sacrificing accuracy. These optimizations enable more cost-effective deployment by reducing latency and hardware requirements for demanding workloads.
End-to-end full-stack optimization:
NVIDIA TensorRT libraries and FP8 Tensor Core innovations ensure high performance across a wide range of devices, from data center GPUs to edge systems. NVIDIA has optimized the Llama 3.2 collection of models for great performance, demonstrating how full-stack software can adaptively unlock efficiency across diverse AI deployment environments.
With these features, as well as many others within Triton and TensorRT-LLM, developers can now deploy LLMs that are not only faster and more efficient but also capable of handling a wider range of tasks and user demands. This opens new opportunities for businesses to enhance customer service, automate complex processes, and gain deeper insights from their data.
Evaluating inference performance
Delivering world-class inference performance takes a full technology stack—chips, systems, and software—all contributing to boosting throughput, reducing energy consumption per token, and minimizing costs.
MLPerf Inference is one key measure of inference performance is MLPerf Inference. The benchmark measures inference throughput under standardized conditions, with results subject to extensive peer review. The benchmark is regularly updated to reflect new advances in AI, ensuring that organizations can rely on these results to evaluate platform performance.
In the latest round of MLPerf Inference,
NVIDIA Blackwell made its debut
, delivering up to 4x more performance than the NVIDIA H100 Tensor Core GPU on the Llama 2 70B benchmark. This achievement was the result of the many architectural innovations at the heart of the Blackwell GPU, including the second-generation Transformer Engine with FP4 Tensor Cores and ultrafast HBM3e GPU memory that delivers 8 TB/s of memory bandwidth per GPU.
In addition, many aspects of the NVIDIA software stack, including NVIDIA TensorRT-LLM, were re-engineered to make use of new capabilities in Blackwell, such as support for FP4 precision, while continuing to meet the rigorous accuracy target of the benchmark.
The NVIDIA H200 Tensor Core GPU, available now from server makers and cloud service providers, also achieved outstanding results on every benchmark in the data center category. This includes the newly added Mixtral 8x7B mixture-of-experts (MoE) LLM, as well as on the Llama 2 70B LLM and Stable Diffusion XL text-to-image tests. As a result of continued software improvements, the Hopper architecture delivered up to 27% more inference performance compared to the prior round.
NVIDIA Triton Inference Server, running on a system with eight H200 GPUs
achieved virtually identical performance compared to the
NVIDIA bare-metal submission on the Llama 2 70B benchmark
in MLPerf Inference v4.1. This shows that enterprises no longer need to choose between a feature-rich, production-grade AI inference server and peak throughput performance—both can be achieved simultaneously with NVIDIA Triton.
The future of AI inference: Emerging trends and technologies
The landscape of AI inference is rapidly evolving, driven by a series of groundbreaking advancements and emerging technologies. Models continue to get smarter, as increases in compute at data center scale enable pretraining larger models. The introduction of sparse mixture-of-experts model architectures, such as GPT-MoE 1.8T, will also help boost model intelligence while improving compute efficiency. These larger models, whether dense or sparse, will require that GPUs individually become much more capable. NVIDIA Blackwell architecture is set to fuel next-generation generative AI inference.
Each Blackwell GPU features second-generation Transformer Engine and fifth-generationTensor Cores utilizing FP4. Lower-precision data formats help to increase computational throughput and reduce memory requirements. To ensure they can deliver significant performance benefits while maintaining high accuracy, an incredible amount of software craftsmanship is needed.
At the same time, to serve the most demanding models at brisk, real-time rates, many of the most capable GPUs will need to work in concert to generate responses.
The
NVIDIA GB200 NVL72
rack-scale solution creates a 72-GPU NVLink domain that acts as a single massive GPU. For GPT-MoE 1.8T real-time inference, it provides up to a 30x improvement in throughput compared to the prior generation Hopper GPU.
In addition, the emergence of a new scaling law—test-time compute—is providing yet another way to improve response quality and accuracy for even more complex tasks. This new paradigm, first introduced with the OpenAI o1 model, enables models to “reason” by generating many intermediate tokens before outputting the final result. Reasoning models are particularly helpful in domains such as complex mathematics and generating computer code. This new paradigm is set to fuel a new wave of breakthroughs requiring more computational performance during inference time.
The path to artificial general intelligence will rely on continued breakthroughs in data center compute performance. Pretraining, post-training, and test-time scaling all depend on state-of-the-art infrastructure running expertly crafted software. The NVIDIA platform is evolving rapidly, with a brisk one-year innovation rhythm, to enable the ecosystem to continue pushing the frontiers of AI.
Get started
Check out
How to Get Started with AI Inference
, learn more about the
NVIDIA AI Inference
platform, and stay informed about the latest
AI inference performance
updates.
Watch a demo on
how to quickly deploy NVIDIA NIM microservices
or read
A Simple Guide to Deploying Generative AI with NVIDIA NIM
. Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using
NVIDIA NIM
microservices. | https://developer.nvidia.com/zh-cn/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/ | 借助 NVIDIA 全栈解决方案提升 AI 推理性能 | AI 驱动的应用的爆炸式发展对开发者提出了前所未有的要求,他们必须在提供先进的性能与管理运营复杂性和成本以及 AI 基础设施之间取得平衡。
NVIDIA 正在为开发者提供涵盖芯片、系统和软件的全栈创新,重新定义
AI 推理
的可能性,使其比以往更快、更高效、更具可扩展性。
轻松部署高吞吐量、低延迟推理
六年前,NVIDIA 着手打造 AI 推理服务器,专为构建高吞吐量、延迟关键型生产应用的开发者而设计。当时,许多开发者都在努力使用定制的、特定于框架的服务器,这些服务器增加了复杂性,增加了运营成本,并且难以满足严格的服务水平协议(service-level agreements)关于延迟和吞吐量的要求。
为解决这一问题,NVIDIA 开发了
NVIDIA Triton Inference Server
,这是一个开源平台,能够为来自任何 AI 框架的模型提供服务。通过整合特定于框架的推理服务器,Triton 简化了 AI 推理部署,并提高了 AI 预测能力。这种方法使 Triton 成为广泛采用的 NVIDIA 开源项目之一,现已被数百家领先的组织用于高效部署生产级 AI 模型。
除 Triton 外,NVIDIA 还提供广泛的 AI 推理解决方案生态系统。对于寻求功能强大的可定制工具的开发者,
NVIDIA TensorRT
提供了一个高性能深度学习推理库,其 API 可实现细粒度优化。
NVIDIA NIM
微服务提供了一个灵活的框架,用于在云端、数据中心或工作站中部署 AI 模型。
针对 AI 推理工作负载进行优化
推理是当今的全栈问题,需要高性能基础架构和高效软件来有效利用该基础架构。此外,随着模型大小不断增长和延迟限制日益严格,推理工作负载的挑战性也越来越高,同时利用这些 AI 服务的用户数量也在不断增加。随着推理时间扩展(一种扩展模型智能的新范式)的引入,推理过程中应用了更多的计算来增强模型性能。
这些趋势意味着,即使在相同的底层硬件平台上,继续提高交付的推理性能也很重要。通过将模型并行、混合精度训练、剪枝、量化和数据预处理优化等成熟方法与推理技术的前沿进步相结合,开发者可以在速度、可扩展性和成本效益方面实现显著提升。
TensorRT-LLM 库包含许多先进功能,可加速
大语言模型(LLMs)
的推理性能,如下所述。
预填充和 KV 缓存优化
键值 (KV) 缓存提早复用
:通过在不同用户中重复使用系统提示,KV 缓存提早复用功能可将首个令牌 (TTFT) 的时间缩短高达 5 倍。灵活的 KV 块大小和高效的驱逐协议可确保无缝管理内存,即使在多用户环境中也能缩短响应时间。
分块预填充
:为实现更智能的部署,分块预填充可将预填充阶段划分为较小的任务,从而提高 GPU 利用率并降低延迟。这项创新可简化部署,并确保一致的性能,即使在用户需求波动的情况下也是如此。
强效助力多圈交互
:NVIDIA GH200 超级芯片架构可实现高效的 KV 缓存卸载,在与 Llama 模型进行多圈交互时,将 TTFT 性能提升高达 2 倍,同时保持高吞吐量。
解码优化
长序列的
多块注意力
:TensorRT-LLM 多块注意力通过在流多处理器 (SM) 中分配任务,更大限度地提高 GPU 利用率,从而解决长输入序列的挑战。此技术可将系统吞吐量提高 3 倍以上,从而在不增加硬件成本的情况下支持更大的上下文长度。
用于加速吞吐量的推理吞吐量:通过利用较小的草稿模型和较大的目标模型,推理吞吐量可将推理吞吐量提升高达 3.6 倍。这种方法可确保高速、高精度地生成模型输出,简化大规模 AI 应用的工作流。
使用 Medusa 进行推理解码
:Medusa 推理解码算法可作为 TensorRT-LLM 优化的一部分提供。通过同时预测多个后续令牌,Medusa 在 NVIDIA HGX H200 平台上将 Llama 3.1 模型的吞吐量提高了 1.9 倍。这项创新可加快客户支持和内容创建等依赖 LLM 的应用的响应速度。
多 GPU 推理
MultiShot 通信协议
:传统的 Ring AllReduce 操作可能会成为多 GPU 场景中的瓶颈。TensorRT-LLM MultiShot 由
NVSwitch
提供支持,无论 GPU 数量如何,都可以将通信步骤减少到两个。这项创新将 AllReduce 速度提升高达 3 倍,使低延迟推理具有可扩展性并十分高效。
实现高并发效率的工作流并行:并行技术要求 GPU 能够快速高效地传输数据,因此需要强大的 GPU 到 GPU 互连结构来实现出色性能。
NVIDIA H200 Tensor Core
GPU 上的工作流并行将 Llama 3.1 405B 的吞吐量提高了 1.5 倍,并在 MLPerf Inference 基准测试中证明了其通用性,将 Llama 2 70B 的速度提高了 1.2 倍。MLPerf Inference 是一套行业标准推理性能基准测试,由 MLCommons 联盟开发。
大型 NVLink 域
:NVIDIA GH200 NVL32 系统由通过 NVLink Switch 系统连接的 32 个 NVIDIA GH200 Grace Hopper 超级芯片提供支持,并进行了 TensorRT-LLM 改进,可为 Llama 模型提供高达 3 倍的 TTFT 速度。凭借高达 127 Petaflops 的 AI 计算能力,此新一代架构为 AI 应用实现出色的实时响应速度奠定了基础。
量化和低精度计算
用于提高精度和性能的 NVIDIA TensorRT 模型优化器
:NVIDIA TensorRT 模型优化器中的 NVIDIA 定制 FP8 量化方法可在不牺牲准确性的情况下将吞吐量提高 1.44 倍。这些优化可降低高要求工作负载的延迟和硬件需求,从而实现更具成本效益的部署。
端到端全栈优化
:NVIDIA TensorRT 库和 FP8 Tensor Core 创新技术可确保从数据中心 GPU 到边缘系统等各种设备实现高性能。NVIDIA 优化了 Llama 3.2 模型集合,以实现出色性能,展示了全栈软件如何在不同的 AI 部署环境中灵活释放效率。
借助这些功能以及 Triton 和 TensorRT-LLM 中的许多其他功能,开发者现在可以部署更快速、更高效的 LLM,并且能够处理更广泛的任务和用户需求。这为企业增强客户服务、实现复杂流程自动化以及从数据中获得更深入见解带来了新机遇。
评估推理性能
实现出色的推理性能需要完整的技术堆栈(芯片、系统和软件),所有这些都有助于提高吞吐量、降低每个令牌的能耗并更大限度地降低成本。
MLPerf Inference 是衡量推理性能的一个关键指标。该基准测试用于测量标准化条件下的推理吞吐量,并对结果进行广泛的同行评审。基准测试会定期更新,以反映 AI 领域的新进展,确保企业组织可以依靠这些结果来评估平台性能。
在最新一轮 MLPerf Inference 中,
NVIDIA Blackwell 首次亮相
,在 Llama 2 70B 基准测试中,其性能比 NVIDIA H100 Tensor Core GPU 高 4 倍。这一成就得益于 Blackwell GPU 核心的众多架构创新,包括采用 FP4 Tensor Cores 的第二代 Transformer Engine 和可为每个 GPU 提供 8 TB/s 的 HBM3e GPU 内存带宽。
此外,对 NVIDIA 软件堆栈的许多方面 (包括 NVIDIA TensorRT-LLM) 进行了重新设计,以利用 Blackwell 中的新功能 (例如对 FP4 精度的支持),同时继续满足基准测试的严格准确性目标。
服务器制造商和云服务提供商现已推出的 NVIDIA H200 Tensor Core GPU 在数据中心类别的每项基准测试中都取得了出色的成绩。其中包括新增的 Mixtral 8x7B 多专家模型 (MoE) LLM,以及 Llama 2 70B LLM 和 Stable Diffusion XL 文本转图像测试。得益于软件的持续改进,Hopper 架构可提供高达 27% 的推理性能。
与 MLPerf Inference v4.1 中 Llama 2 70B 基准测试中的 NVIDIA 裸机提交相比
,在配备 8 个 H200 GPU 的系统上运行的 NVIDIA Triton Inference Server 实现了几乎相同的性能。这表明企业不再需要在功能丰富的生产级 AI 推理服务器和峰值吞吐量性能之间做出选择,而
NVIDIA Triton 可以同时实现这两种性能。
AI 推理的未来:新兴趋势和技术
在一系列突破性进展和新兴技术的推动下,AI 推理的格局正在迅速发展。随着数据中心规模的计算能力增加,模型将继续变得更加智能。引入稀疏的多专家模型架构 (例如 GPT-MoE 1.8T) 也将有助于提高模型智能,同时提高计算效率。这些更大型的模型,无论是密集模型还是稀疏模型,都需要 GPU 单独变得更加强大。NVIDIA Blackwell 架构将为新一代生成式 AI 推理提供动力支持。
每个 Blackwell GPU 均配备第二代 Transformer Engine 和第五代 Tensor Cores,利用 FP4。低精度数据格式有助于提高计算吞吐量并降低内存需求。为了确保它们能够在保持高精度的同时提供显著的性能优势,我们需要大量的软件技术。
与此同时,为了以快速、实时的速率为要求严苛的模型提供服务,许多功能非常强大的 GPU 需要协同工作以生成响应。
NVIDIA GB200 NVL72
机架级解决方案创建了一个 72-GPU NVLink 域,可充当单个大型 GPU。对于 GPT-MoE 1.8T 实时推理,与上一代 Hopper GPU 相比,其吞吐量提高了 30 倍。
此外,新的扩展定律(测试时计算) 的出现为提高更复杂任务的响应质量和准确性提供了另一种方法。这种新范式首先在 OpenAI o1 模型中引入,使模型能够在输出最终结果之前通过生成许多中间令牌来“推理”。推理模型在复杂数学和生成计算机代码等领域尤为有用。这种新范式将起新一轮突破浪潮,需要在推理期间实现更高的计算性能。
通往人工通用智能的道路将依赖于数据中心计算性能的持续突破。预训练、后训练和测试时扩展都依赖于运行专家精心编写的软件的最先进的基础架构。NVIDIA 平台发展迅速,一年内创新节奏轻快,使生态系统能够继续推动人工智能的前沿发展。
开始使用
查看如何开始使用 AI 推理
,了解更多关于
NVIDIA AI 推理平台
的信息,并随时了解
最新的 AI 推理性能更新
。
观看演示,了解如何快速部署
NVIDIA NIM
微服务,或阅读《使用 NVIDIA NIM 部署生成式 AI 的简单指南》。TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用 NVIDIA NIM 微服务的生产就绪型部署获得。 |
https://developer.nvidia.com/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/ | NVIDIA TensorRT-LLM Now Supports Recurrent Drafting for Optimizing LLM Inference | Recurrent drafting
(referred as ReDrafter) is a novel speculative decoding technique developed and open-sourced by Apple for large language model (LLM) inference now available with
NVIDIA TensorRT-LLM
. ReDrafter helps developers significantly boost LLM workload performance on NVIDIA GPUs.
NVIDIA TensorRT-LLM
is a library for optimizing LLM inference. It provides an easy-to-use Python API to define LLMs and build
NVIDIA TensorRT
engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Optimizations include custom attention kernels, inflight batching, paged KV caching, quantization (FP8, INT4 AWQ, INT8 SmoothQuant), and much more.
Speculative decoding
is a technique that accelerates LLM inference by generating multiple tokens in parallel. It uses smaller “draft” modules to predict future tokens, which are then verified by the main model. This method maintains output quality while significantly reducing response times, especially during low traffic periods, by better utilizing available resources for low-latency inference.
ReDrafter employs recurrent neural network (RNN)-based sampling, referred to as
drafting
, combined with tree-style attention previously used in other techniques like Medusa to predict and verify draft tokens from multiple possible paths for better accuracy and to potentially accept more than one token in each iteration of the decoder. NVIDIA collaborated with Apple to add support for this technique in TensorRT-LLM, making it accessible to the broader developer community.
The integration of ReDrafter into TensorRT-LLM expanded its reach, unlocked new optimization potential, and improved on previous methods such as Medusa. For Medusa, the path acceptance and token sampling happens in the TensorRT-LLM runtime, introducing some overhead inside the engine for processing all possible future paths without knowing the accepted path, most of which are ultimately discarded. To reduce such overhead, ReDrafter requires the token validation and acceptance of the best path before drafting future tokens for the next iteration.
TensorRT-LLM has been updated to incorporate drafting and validation logic inside a single engine, rather than relying on the runtime or separate engines to further minimize overhead. This approach provides TensorRT-LLM kernel selection and scheduling more freedom to optimize the network for maximum performance.
To better illustrate ReDrafter improvements, Figure 1 highlights the key differences between its implementation and that of Medusa in TensorRT-LLM. Most of the components related to speculative decoding have been done in-engine for ReDrafter. This significantly simplifies the runtime changes needed for ReDrafter.
Figure 1. Comparison of Medusa (left) and ReDrafter (right) implementations in NVIDIA TensorRT-LLM
The following sections delve into some of the changes that help enable ReDrafter in TensorRT-LLM.
Inflight-batching compatible engine
Inflight-batching (IFB) is a strategy that significantly improves the throughput by batching context-phase and generation-phase requests. Speculative decoding, coupled with IFB, introduces more complexity to the pipeline as context-phase requests need to be handled differently than generation-phase requests, which require draft token validation. Since ReDrafter moves the validation logic inside the model definition, the engine needs that logic as well during validation. Similar to the attention plugin, the batch is split into two smaller batches: one for context requests and another for generation requests. Each smaller batch then enters its computational workflow, and at the end they are combined back to a single batch for drafting.
Figure 2. ReDrafter’s computational workflow for inflight-batching compatible TensorRT-LLM engine
Note that this approach requires that all operators on either path support empty tensors, which could happen if a batch consists of all context requests or all generation requests. This capability adds flexibility to TensorRT-LLM APIs, enabling the definition of more complicated models in the future.
Implementing in-engine validation and drafting
To validate and draft inside the engine, TensorRT-LLM is updated with support for numerous new operations so that PyTorch code can be easily translated into a definition of the TensorRT-LLM model.
The following PyTorch code excerpt is Apple’s
PyTorch implementation of ReDrafter
. The TensorRT-LLM implementation is almost a straightforward line-by-line mapping of the PyTorch version.
PyTorch
def unpack(
packed_tensor: torch.Tensor,
unpacker: torch.Tensor,
) -> torch.Tensor:
assert len(packed_tensor.shape) == 3
last_dim_size = packed_tensor.shape[2]
batch_size, beam_width, beam_length = unpacker.shape
unpacked_data_indices = unpacker.view(
batch_size, beam_width * beam_length, 1).expand(
-1, -1, last_dim_size
)
unpacked_tensor = torch.gather(
packed_tensor, 1, unpacked_data_indices).reshape(
batch_size, beam_width, beam_length, -1
)
return unpacked_tensor
TensorRT-LLM
def _unpack_beams(
x: Tensor,
indices: Tensor,
num_beams: int,
beam_length: int
) -> Tensor:
assert x.rank() == 3
d0 = shape(x, 0, INT_DTYPE_STR)
dl = shape(x, -1, INT_DTYPE_STR)
indices = view(
indices, [-1, num_beams * beam_length, 1], False)
res_shape = concat([d0, num_beams, beam_length, dl])
res = view(gather_nd(x, indices), res_shape, False)
return res
This, of course, is a very simple example. For a more complex example, see the
beam search implementation
. With the new functionalities added for ReDrafter, it might be possible to improve the Medusa implementation in TensorRT-LLM to further increase its performance.
ReDrafter performance in TensorRT-LLM
As
benchmarked by Apple
, ReDrafter with TensorRT-LLM can provide up to 2.7x throughput improvements on
NVIDIA H100
GPUs with TP8 over the base LLM.
Note that the performance improvement of any speculative decoding technique can be heavily impacted by many factors, including:
GPU utilization:
Speculative decoding is commonly used for low-traffic scenarios, where GPU resources are typically underutilized due to small batch sizes.
Average acceptance rate:
The latency of each decoding step is increased since speculative decoding must perform extra computation, where a significant portion of it is ultimately wasted after validation. As a result, to see any performance benefits from speculative decoding, the average acceptance rate must be high enough to pay for that extra latency. This is affected by the number of beams, their lengths, and the quality of the beam search itself (which is impacted by the training data).
Task:
It is easier to predict future tokens for some tasks (code completion, for example), which leads to a higher acceptance rate, and thus improved performance.
Summary
This collaboration between NVIDIA and Apple, has made TensorRT-LLM more powerful and more flexible, enabling the LLM community to innovate more sophisticated models and easily deploy them with TensorRT-LLM to achieve unparalleled performance on NVIDIA GPUs. These new features open exciting possibilities, and we eagerly anticipate the next generation of advanced models from the community that leverage TensorRT-LLM capabilities, driving further improvements in LLM workloads.
Explore
NVIDIA TensorRT-LLM
to unlock the full potential of your models on NVIDIA GPUs. | https://developer.nvidia.com/zh-cn/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/ | NVIDIA TensorRT-LLM 现支持 Recurrent Drafting,实现 LLM 推理优化 | Recurrent Drafting (简称 ReDrafter) 是苹果公司为大语言模型 (LLM) 推理开发并开源的一种新型推测解码技术,该技术现在可与
NVIDIA TensorRT-LLM
一起使用。ReDrafter 帮助开发者大幅提升了 NVIDIA GPU 上的 LLM 工作负载性能。
NVIDIA TensorRT-LLM
是一个 LLM 推理优化库,提供了一个易于使用的 Python API 来定义 LLM 和构建
NVIDIA TensorRT
引擎,这些引擎具有顶尖的优化功能,可在 GPU 上高效执行推理。优化功能包括自定义 Attention Kernel、Inflight Batching、Paged KV Caching、量化技术 (FP8、INT4 AWQ、INT8 SmoothQuant) 等。
推测解码 (Speculative decoding)
是一种通过并行生成多个 token 来加速 LLM 推理的技术。它使用较小的“draft”模块预测未来的 token,然后由主模型进行验证。该方法通过更好地利用可用资源实现低延迟推理,在保持输出质量的同时大大缩短了响应时间,尤其是在低流量时段。
ReDrafter 运用基于循环神经网络 (RNN) 的采样 (称为
Drafting
) 并结合之前在 Medusa 等其他技术中使用的树状注意力,预测和验证来自多个可能路径的 draft token 以提高准确性,并在解码器的每次迭代中接受一个以上 token。NVIDIA 与苹果公司合作,在 TensorRT-LLM 中添加了对该技术的支持,使更加广泛的开发者社区能够使用该技术。
ReDrafter 与 TensorRT-LLM 的集成扩大了该技术的覆盖范围,解锁了新的优化潜力,并改进了 Medusa 等先前的方法。Medusa 的路径接受和 token 采样发生在 TensorRT-LLM 运行时,需要在接受路径未知的情况下处理所有可能的未来路径,而且其中大部分路径最终都会被丢弃,这就给引擎内部带来了一些开销。为了减少这种开销,ReDrafter 要求在 drafting 下一次迭代的未来 token 之前,先验证 token 并接受最佳路径。
为了进一步减少开销,TensorRT-LLM 更新后在单个引擎中整合了 drafting 和验证逻辑,不再依赖运行时或单独的引擎。这种方法为 TensorRT-LLM 内核选择和调度提供了更大的自由度,通过优化网络实现了性能的最大化。
为了更好地说明 ReDrafter 的改进,图 1 展示了 TensorRT-LLM 中 ReDrafter 实现与 Medusa 实现的主要区别。大多数与推测解码相关的组件都在 ReDrafter 的引擎内完成,这大大简化了 ReDrafter 所需的运行时更改。
图 1. NVIDIA TensorRT-LLM 中 Medusa(左)和 ReDrafter(右)实现的比较
下面将深入探讨有助于在 TensorRT-LLM 中启用 ReDrafter 的一些变化。
兼容 Inflight-batching 批处理的引擎
Inflight-batching (IFB) 是一种通过批量处理上下文阶段和生成阶段请求,来显著提高吞吐量的策略。鉴于上下文阶段请求与生成阶段请求的处理方式不同(生成阶段请求需要 draft token 验证),因此结合 IFB 的推测解码会给管线带来更大的复杂性。ReDrafter 将验证逻辑移至模型定义内部,因此引擎在验证过程中也需要该逻辑。与注意力插件类似,该批处理被分成两个较小的批处理:一个用于上下文请求,另一个用于生成请求。然后,每个较小的批处理进入计算工作流,最后再合并成一个批处理进行 drafting 流程。
图 2. ReDrafter 兼容 TensorRT-LLM 引擎的 Inflight-batching 批处理计算工作流
请注意,这种方法要求任一路径上的所有运算符都支持空张量。如果一个批处理由所有上下文请求或所有生成请求组成,就可能出现空张量。该功能增加了 TensorRT-LLM API 的灵活性,使未来定义更复杂的模型成为可能。
实现引擎内验证和
Drafting
为了在引擎内进行验证和 draft,TensorRT-LLM 更新时加入了对许多新操作的支持,这样 PyTorch 代码就可以轻松地转化成一个 TensorRT-LLM 模型的定义。
以下 PyTorch 代码摘录是苹果公司的
PyTorch 实现的 ReDrafter
。TensorRT-LLM 实现几乎就是 PyTorch 版本的直接逐行映射。
PyTorch
def unpack(
packed_tensor: torch.Tensor,
unpacker: torch.Tensor,
) -> torch.Tensor:
assert len(packed_tensor.shape) == 3
last_dim_size = packed_tensor.shape[2]
batch_size, beam_width, beam_length = unpacker.shape
unpacked_data_indices = unpacker.view(
batch_size, beam_width * beam_length, 1).expand(
-1, -1, last_dim_size
)
unpacked_tensor = torch.gather(
packed_tensor, 1, unpacked_data_indices).reshape(
batch_size, beam_width, beam_length, -1
)
return unpacked_tensor
TensorRT-LLM
def _unpack_beams(
x: Tensor,
indices: Tensor,
num_beams: int,
beam_length: int
) -> Tensor:
assert x.rank() == 3
d0 = shape(x, 0, INT_DTYPE_STR)
dl = shape(x, -1, INT_DTYPE_STR)
indices = view(
indices, [-1, num_beams * beam_length, 1], False)
res_shape = concat([d0, num_beams, beam_length, dl])
res = view(gather_nd(x, indices), res_shape, False)
return res
当然,这只是一个非常简单的例子。如要了解更复杂的示例,请参见
束搜索实现
。借助为 ReDrafter 添加的新功能,就可以改进 TensorRT-LLM 中的 Medusa 实现,从而进一步提高其性能。
ReDrafter 在 TensorRT-LLM 中的性能
根据
苹果公司的基准测试
,在采用 TP8(Tensor Parallelism with 8 GPUs,8 卡 GPU 张量并行) 的
NVIDIA
GPU 上使用 TensorRT-LLM 的 ReDrafter 最多可将吞吐量提高至基础 LLM 的 2.7 倍。
请注意,任何推测解码技术的性能提升幅度都会受到诸多因素的大幅影响,包括:
GPU 利用率:
推测解码通常用于低流量场景,由于批量较小,GPU 资源的利用率通常较低。
平均接受率:
由于推测解码必须执行额外的计算,而其中很大一部分计算最终会在验证后被浪费,因此每个解码步骤的延迟都会增加。所以要想通过推测解码获得任何性能上的优势,平均接受率必须高到足以弥补增加的延迟。这受到束数量、束长度和束搜索本身质量(受训练数据影响)的影响。
任务:
在某些任务(例如代码完成)中预测未来的 token 更容易,使得接受率更高,性能也会因此而提升。
总结
NVIDIA 与苹果公司的合作让 TensorRT-LLM 变得更加强大和灵活,使 LLM 社区能够创造出更加复杂的模型并通过 TensorRT-LLM 轻松部署,从而在 NVIDIA GPU 上实现无与伦比的性能。这些新特性带来了令人兴奋的可能性,我们热切期待着社区使用 TensorRT-LLM 功能开发出新一代先进模型,进一步改进 LLM 工作负载。
探索
NVIDIA TensorRT-LLM
,在 NVIDIA GPU 上充分释放模型潜能。 |
https://developer.nvidia.com/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/ | Boost Llama 3.3 70B Inference Throughput 3x with NVIDIA TensorRT-LLM Speculative Decoding | Meta’s Llama collection of open
large language models (LLMs)
continues to grow with the recent addition of
Llama 3.3 70B
, a text-only instruction-tuned model. Llama 3.3 provides enhanced performance respective to the older Llama 3.1 70B model and can even match the capabilities of the larger, more computationally expensive
Llama 3.1 405B model
on several tasks including math, reasoning, coding, and multilingual support.
NVIDIA TensorRT-LLM
, a powerful inference engine that delivers state-of-the-art performance on the latest LLMs, incorporates many optimizations to deliver outstanding Llama 3.3 70B inference throughput. These include
in-flight batching
,
KV caching
,
custom FP8 quantization
, speculative decoding, and more for fast, cost-efficient LLM serving.
With in-flight batching activated by default as a runtime configuration parameter, TensorRT-LLM supports batching multiple different requests at the same time for higher serving throughput. By interleaving requests in context and generation phases, in-flight batching reduces latency and improves GPU utilization by executing new requests while older requests are still in flight. Finished requests are evicted from the batch, making room for the next set of requests.
Caching the values of the key-value elements of previous tokens saves from expensive recomputation of these tensors in the generation phase for the next set of tokens. Computational savings effectively lead to higher throughput. However, KV cache grows linearly in size with number of batched requests and sequence context lengths, leading to higher memory requirements.
TensorRT-LLM KV caching addresses these challenges through several optimizations, including support for
paged KV cache
,
quantized KV cache
,
circular buffer KV cache
and
KV cache reuse
. Each of these optimizations address the challenging balance between growing memory size and avoiding unnecessary and expensive recomputation.Speculative decoding is a popular technique for faster and cost-effective LLM inference with built-in verification for the quality of output generation. It’s based on the premise that generating multiple sequences of future (draft) tokens is more efficient than processing a single token in autoregressive decoding, an inherently time-consuming process. The target model determines how many of these draft tokens to accept, which is far more efficient than having to generate one token per iteration. TensorRT-LLM supports a growing list of speculative decoding techniques including
draft target
,
Medusa
,
Eagle
, and
lookahead decoding
, among others.
Figure 1. NVIDIA TensorRT optimizations for high-performance deep learning inference
In this post, we show how the
NVIDIA HGX H200 platform
with NVLink and NVSwitch, as well as TensorRT-LLM, achieve great performance when running the latest Llama 3.3 70B model. We describe the step-by-step setup to get speculating decoding working for Llama 3.3 70B with TensorRT-LLM. For more information, including other optimizations, different models, and multi-GPU execution, see the full list of
TensorRT-LLM examples
.
Achieving throughput speedups with draft target speculative decoding
Table 1 and Figure 2 highlight the throughput (output tokens/second) speedups between no draft model (that is, no speculative decoding) versus draft models of various sizes with Llama 3.3 70B target model.
Throughput Performance – Output Tokens/Second
One NVIDIA H200 Tensor Core GPU
Draft | Target
Models
Llama 3.2 1B | Llama 3.3 70B
Llama 3.2 3B | Llama 3.3 70B
Llama 3.1 8B | Llama 3.3 70B
Llama 3.3 70B
(without draft model)
Tokens/sec
181.74
161.53
134.38
51.14
Speedups (with versus without draft models)
3.55x
3.16x
2.63x
N/A
Table 1. Throughput performance using one NVIDIA H200 Tensor Core GPU with TensorRT-LLM internal measurements
Data measured on December 11, 2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, TP1, FP8, batch size=1, TensorRT Model Optimizer version 0.21, TensorRT-LLM version 0.15.0.
Figure 2. Throughput speedups using speculative decoding with Llama 3.3 70B target model
We provide the steps to reproduce these performance gains using draft target speculative decoding within TensorRT-LLM.
# Download the following model checkpoints from Hugging Face and store them
in a directory for easy access through the setup process.
git lfs install
# Download target models
git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct
# Download draft models
git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
After the model checkpoints have been downloaded, install TensorRT-LLM.
# Obtain and start the basic docker image environment (optional).
docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint
/bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04
# Install dependencies, TensorRT-LLM requires Python 3.10
apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin
libopenmpi-dev git git-lfs
# Fetch the library
git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
# Install the latest version (corresponding to the main branch) of TensorRT-LLM.
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com
# Check installation
python3 -c "import tensorrt_llm"
Next, compile the downloaded model checkpoints into draft and target TensorRT engines. These engines are optimized to run inference with best accuracy and highest throughput.
cd examples
# Steps to build target and draft models in FP8 precision on 1 H200
# Create FP8 checkpoints
python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8
--output_dir /ckpt-draft --calib_size 512 --tp_size 1
python3 quantization/quantize.py \
--model_dir=<path to target model repo> \
--output_dir=./ckpt-target-70b \
--dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \
--calib_size 512 --tp_size 1
# Build draft and target engines
# Important flags for the engine build process:
# --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model.
# --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model.
trtllm-build \
--checkpoint_dir ./ckpt-draft \
--output_dir=./draft-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072
trtllm-build \
--checkpoint_dir=./ckpt-target-70b \
--output_dir=./target-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072 \
--low_latency_gemm_plugin fp8 \
--speculative_decoding_mode=draft_tokens_external \
--max_draft_len 10
Finally, run speculative decoding in TensorRT-LLM.
#Run decoding
# Important flags to set during the run process:
#--draft_engine_dir and --engine_dir must be specified for the draft and target engines.
#--draft_target_model_config is corresponding to the configuration of
Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4,
device of draft model is GPU0, device of target model is GPU1, and use
tokens rather than logits to accept.
# Only CPP session (using executor as low-level API) is supported, while
Python session (--use_py_session) is not supported.
# Run with Llama 3.3 70B target model
mpirun -n 1 --allow-run-as-root python3 ./run.py \
--tokenizer_dir <path to draft model repo> \
--draft_engine_dir ./draft-engine \
--engine_dir ./target-engine \
--draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \
--kv_cache_free_gpu_memory_fraction=0.35 \
--max_output_len=1024 \
--kv_cache_enable_block_reuse \
--input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA
3-digit integer contains one of each of the digits 1,3 and 5. What is the
probability that the integer is divisible by
5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
# Following is the LLM-generated output:
Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit
integers that can be formed using the digits 1, 3, and 5.
There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers.
## Step 2: Identify the condition for a number to be divisible by 5.
A number is divisible by 5 if its last digit is either 0 or 5.
## Step 3: Determine the number of arrangements where 5 is the last digit.
Since the digit 5 must be the last digit for the number to be divisible by
5, we fix the last position with 5. The remaining two positions can be
filled with the digits 1 and 3 in 2! = 2 ways.
## Step 4: Calculate the probability that the integer is divisible by 5.
The probability is the number of favorable outcomes (arrangements where 5 is the last digit)
divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5).
## Step 5: Calculate the probability.
Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3.
The final answer is: $\boxed{\frac{1}{3}}$"
To benchmark throughput performance without speculative decoding, follow the steps below:
# Run throughput benchmark for the 70B model without the draft model
trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/
--gpt_attention_plugin float16 --workers 1 --max_batch_size 32
--max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable
--use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8
python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output
token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist
--num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200
--output-stdev 0 > /tmp/synthetic.txt
trtllm-bench --model <path to target model repo> latency --engine_dir
/data/70b-TRT/ --dataset /tmp/synthetic.txt
Summary
NVIDIA collaborates with Meta for the creation, optimization, and acceleration of the world’s leading open models. NVIDIA supports Llama as part of our commitment to grow open community AI models and software to enable users to customize and address their own unique workloads. NVIDIA is involved with several open-source projects through partnering with developers, maintainers, and foundations.
NVIDIA TensorRT-LLM
provides several features for optimizing and efficiently running LLMs of different model architectures. These optimizations lead to significant speedups on the same hardware, enable fewer resources to serve the same workload, reduce energy costs, and improve total cost of ownership. Available through production-ready deployments using
NVIDIA NIM
microservices, these TensorRT optimizations accelerate the deployment of your generative AI applications across NVIDIA-accelerated infrastructure anywhere, including cloud, data center, and workstations. | https://developer.nvidia.com/zh-cn/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/ | 借助 NVIDIA TensorRT-LLM 预测解码,将 Llama 3.3 的推理吞吐量提升 3 倍 | 随着近期新增的 Llama 3.3 70B
(一种纯文本指令调整模型),Meta 的开放
大语言模型 (LLMs)
集合将继续增长。Llama 3.3 相对于较旧的 Llama 3.1 70B 模型提供了增强的性能,甚至可以在数学、推理、编码和多语种支持等多项任务中匹配更大、计算成本更高的 Llama 3.1 405B 模型的功能。
NVIDIA TensorRT-LLM
是一款功能强大的推理引擎,可在最新的 LLM 上提供先进的性能,并整合了许多优化措施,可提供出色的 Llama 3.3 70B 推理吞吐量。其中包括在飞批处理、
KV 缓存
、
自定义 FP8 量化
、推测解码等,可实现快速、经济高效的 LLM 服务。
动态批处理默认作为运行时配置参数激活,TensorRT-LLM 支持同时批处理多个不同的请求,从而提高服务吞吐量。通过在上下文和生成阶段交错处理请求,动态批处理可在旧请求仍在运行时执行新请求,从而减少延迟并提高 GPU 利用率。已完成的请求将从批处理中移除,从而为下一组请求腾出空间。
缓存先前令牌的键值元素的值可避免在为下一组令牌生成阶段对这些张量进行昂贵的重新计算。节省的计算量可有效提高吞吐量。但是,随着批量请求的数量和序列上下文长度的增加,KV 缓存的大小呈线性增长,从而导致内存需求增加。
TensorRT-LLM KV 缓存通过多种优化解决了这些挑战,包括对分页 KV 缓存、
量化 KV 缓存
、
循环缓冲区 KV 缓存
和
KV 缓存重复使用
的支持。每项优化都解决了在增大内存大小与避免不必要且昂贵的重新计算之间实现富有挑战性的平衡这一问题。预测解码是一种热门技术,可通过内置验证来生成输出的质量,从而实现更快且经济高效的 LLM 推理。其前提是,在自回归解码过程中,生成多个未来 (草稿) 令牌序列比处理单个令牌更高效。目标模型决定接受这些草稿令牌的数量,这比每次迭代生成一个令牌的效率要高得多。TensorRT-LLM 支持越来越多的预测性解码技术,包括
草稿目标
、
Medusa
、
Eagle
和前瞻性解码等。
图 1、适用于高性能深度学习推理的 NVIDIA TensorRT 优化
在本文中,我们将展示搭载 NVLink 和 NVSwitch 的
NVIDIA HGX H200 平台以及 TensorRT-LLM
如何在运行最新的 Llama 3.3 70B 模型时实现出色性能。我们介绍了使用 TensorRT-LLM 对 Llama 3.3 70B 进行解码预测的分步设置。有关其他优化、不同模型和多 GPU 执行的更多信息,请参阅
TensorRT-LLM 示例
的完整列表。
通过草稿目标预测解码实现吞吐量加速
表 1 和图 2 突出显示了无草稿模型 (即无预测解码) 与使用 Llama 3.3 70B 目标模型的各种规模的草稿模型之间的吞吐量 (输出令牌/秒) 加速。
吞吐量性能 – 输出 Tokens/秒
One NVIDIA H200 Tensor Core GPU
草稿|目标
模型
Llama 3.2 1B|Llama 3.3 70B
Llama 3.2 3B|Llama 3.3 70B
Llama 3.1 8B|Llama 3.3 70B
Llama 3.3 70B
(无草稿模型)
令牌/秒
191.74
151.53
134.38
51.14
加速 (有与无草稿模型对比)
3.55 倍
3.16 倍
2.63 倍
不适用
表 1、使用一个 NVIDIA H200 Tensor Core GPU 和 TensorRT-LLM 内部测量的吞吐量性能
数据测量于 2024 年 12 月 11 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 总生成 tokens / 总延迟。DGX H200、TP1、FP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21,TensorRT-LLM 版本 0.15.0。
图 2、使用 Llama 3.3 70B 目标模型的预测解码提高吞吐量
我们提供了在 TensorRT-LLM 中使用草稿目标预测解码重现这些性能提升的步骤。
# Download the following model checkpoints from Hugging Face and store them
in a directory for easy access through the setup process.
git lfs install
# Download target models
git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct
# Download draft models
git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
下载模型检查点后,安装 TensorRT-LLM。
# Obtain and start the basic docker image environment (optional).
docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint
/bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04
# Install dependencies, TensorRT-LLM requires Python 3.10
apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin
libopenmpi-dev git git-lfs
# Fetch the library
git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
# Install the latest version (corresponding to the main branch) of TensorRT-LLM.
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com
# Check installation
python3 -c "import tensorrt_llm"
接下来,将下载的模型检查点编译到草稿和目标 TensorRT 引擎中。这些引擎经过优化,能够以最佳的准确性和最高的吞吐量运行推理。
cd examples
# Steps to build target and draft models in FP8 precision on 1 H200
# Create FP8 checkpoints
python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8
--output_dir /ckpt-draft --calib_size 512 --tp_size 1
python3 quantization/quantize.py \
--model_dir=<path to target model repo> \
--output_dir=./ckpt-target-70b \
--dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \
--calib_size 512 --tp_size 1
# Build draft and target engines
# Important flags for the engine build process:
# --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model.
# --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model.
trtllm-build \
--checkpoint_dir ./ckpt-draft \
--output_dir=./draft-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072
trtllm-build \
--checkpoint_dir=./ckpt-target-70b \
--output_dir=./target-engine \
--gpt_attention_plugin float16 \
--workers 1 \
--gemm_plugin=fp8 \
--use_paged_context_fmha=enable \
--multiple_profiles enable \
--max_batch_size=32 \
--max_seq_len=131072 \
--low_latency_gemm_plugin fp8 \
--speculative_decoding_mode=draft_tokens_external \
--max_draft_len 10
最后,在 TensorRT-LLM 中运行投机性解码。
#Run decoding
# Important flags to set during the run process:
#--draft_engine_dir and --engine_dir must be specified for the draft and target engines.
#--draft_target_model_config is corresponding to the configuration of
Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4,
device of draft model is GPU0, device of target model is GPU1, and use
tokens rather than logits to accept.
# Only CPP session (using executor as low-level API) is supported, while
Python session (--use_py_session) is not supported.
# Run with Llama 3.3 70B target model
mpirun -n 1 --allow-run-as-root python3 ./run.py \
--tokenizer_dir <path to draft model repo> \
--draft_engine_dir ./draft-engine \
--engine_dir ./target-engine \
--draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \
--kv_cache_free_gpu_memory_fraction=0.35 \
--max_output_len=1024 \
--kv_cache_enable_block_reuse \
--input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA
3-digit integer contains one of each of the digits 1,3 and 5. What is the
probability that the integer is divisible by
5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
# Following is the LLM-generated output:
Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit
integers that can be formed using the digits 1, 3, and 5.
There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers.
## Step 2: Identify the condition for a number to be divisible by 5.
A number is divisible by 5 if its last digit is either 0 or 5.
## Step 3: Determine the number of arrangements where 5 is the last digit.
Since the digit 5 must be the last digit for the number to be divisible by
5, we fix the last position with 5. The remaining two positions can be
filled with the digits 1 and 3 in 2! = 2 ways.
## Step 4: Calculate the probability that the integer is divisible by 5.
The probability is the number of favorable outcomes (arrangements where 5 is the last digit)
divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5).
## Step 5: Calculate the probability.
Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3.
The final answer is: $\boxed{\frac{1}{3}}$"
要在不使用 speculative decoding 的情况下对吞吐量性能进行基准测试,请执行以下步骤:
# Run throughput benchmark for the 70B model without the draft model
trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/
--gpt_attention_plugin float16 --workers 1 --max_batch_size 32
--max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable
--use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8
python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output
token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist
--num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200
--output-stdev 0 > /tmp/synthetic.txt
trtllm-bench --model <path to target model repo> latency --engine_dir
/data/70b-TRT/ --dataset /tmp/synthetic.txt
总结
NVIDIA 与 Meta 合作创建、优化和加速世界领先的开放模型。NVIDIA 支持 Llama,履行我们发展开放社区 AI 模型和软件的承诺,使用户能够自定义和处理自己独特的工作负载。NVIDIA 通过与开发者、维护人员和基金会合作参与了多个开源项目。
NVIDIA TensorRT-LLM
提供多种功能,用于优化和高效运行不同模型架构的 LLM。这些优化可显著加快相同硬件的运行速度,减少资源以处理相同工作负载,降低能源成本,并提高总拥有成本。这些 TensorRT 优化通过使用
NVIDIA NIM
微服务的生产就绪型部署提供,可随时随地在 NVIDIA 加速的基础设施 (包括云、数据中心和工作站) 中加速生成式 AI 应用的部署。 |
https://developer.nvidia.com/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/ | Spotlight: Perplexity AI Serves 400 Million Search Queries a Month Using NVIDIA Inference Stack | The demand for AI-enabled services continues to grow rapidly, placing increasing pressure on IT and infrastructure teams. These teams are tasked with provisioning the necessary hardware and software to meet that demand while simultaneously balancing cost efficiency with optimal user experience. This challenge was faced by the inference team at Perplexity AI, an AI-powered search engine that handles more than 435 million queries each month. Each query represents multiple AI inference requests.
To meet this demand, the Perplexity inference team turned to
NVIDIA H100 Tensor Core GPUs
,
NVIDIA Triton Inference Server
, and
NVIDIA TensorRT-LLM
for cost-effective
large language model (LLM)
deployment. This post details some of the deployment best practices and TCO savings based on their hands-on experience.
Serving multiple AI models simultaneously
To support its extensive user base and serve a wide range of requests—spanning search, summarization, and question answering, among others—the inference team at Perplexity serves over 20 AI models simultaneously. This includes different variations of the popular open source Llama 3.1 models like 8B, 70B, and 405B.
To match each user request with the appropriate model, the company relies on smaller classifier models that help determine user intent. User tasks detected by the classifiers, like text completion, are then routed to specific models deployed on GPU pods. Each pod consists of one or more NVIDIA H100 GPUs and is managed by an NVIDIA Triton Inference Server instance. The pods operate under strict service-level agreements (SLAs) for both cost efficiency and user interactivity.
Figure 1. Perplexity AI deployed LLM production on a massive scale using NVIDIA HGX H100 four-GPU and eight-GPU systems featuring dynamic scalability, model parallelism, and load balancing capabilities
To accommodate the large Perplexity user base and fluctuating traffic throughout the day, the pods are hosted within a Kubernetes cluster. They feature a front-end scheduler built in-house that routes traffic to the appropriate pod based on their load and usage, ensuring that the SLAs are consistently met.
The scheduling algorithm used by the front-end scheduler can affect inter-token latency, particularly in improving the worst percentile of performance (Figure 2). The team at Perplexity constantly looks for new scheduler optimizations, including how to better account for sequence length variations across requests.
Figure 2. Inter-token latency distribution for round-robin, least requests, and power of two random choices load balancing strategies during an internal Perplexity benchmark at a moderate request rate of 16 QPS
Triton Inference Server is a critical component of Perplexity’s deployment architecture. It serves optimized models across various backends, batches incoming user requests, and provides GPU utilization metrics to the scheduler. This supports scaling up or down the number of deployments and GPUs based on the amount of the inference requests.
For a detailed guide on how to deploy NVIDIA Triton with Kubernetes, see
Scaling LLMs with NVIDIA Triton and NVIDIA TensorRT-LLM Using Kubernetes
.
Meeting strict service-level agreements
To define the right SLAs for the company’s diverse use cases, Perplexity’s inference team conducts comprehensive A/B testing, evaluating different configurations and their impact on user experience. Their goal is to maximize GPU utilization while consistently meeting the target SLA for each specific use case. By improving batching while meeting target SLAs, inference serving cost is optimized.
For smaller models, such as embedding models under 1 billion parameters used in real-time retrieval, the focus is on achieving the lowest possible latency. These are typically hidden from the user and are part of a broader workflow. As a result, configurations for these queries typically have low batch sizes. Given the smaller memory footprints of these models, the team runs multiple models concurrently on the NVIDIA H100 GPU to maintain high resource utilization.
For user-facing models such as Llama 8B, 70B, and 405B, which have a greater impact on user experience and deployment costs, the team conducts a deeper performance analysis and evaluates key metrics such as time to first token, tokens per second per user, and cost per million queries.
To optimize performance while controlling costs, Perplexity parallelizes their deployment of these models across multiple GPUs. Due to the strict SLAs, the team opted to increase tensor parallelism to four and eight GPUs, which they found yields lower serving costs for very latency-sensitive requests within a fixed GPU budget. Data or pipeline parallelism was useful for maximizing throughput in less latency-sensitive settings.
The Perplexity team now uses TensorRT-LLM in combination with proprietary LLM runtimes built with optimized CUDA kernels to successfully serve the Llama-based models within their strict SLAs at minimal costs.
Figure 3. Sharding the Llama 8B model using tensor parallelism across four NVIDIA Hopper GPUs reduces Perplexity’s relative cost per million token by up to 3x for latency-sensitive requests
Ultimately, the Perplexity inference team’s decision to host models depends on their ability to serve these models at a lower cost while still meeting their strict SLAs, compared to using third-party LLM provider APIs. For example, the team estimated that they were able to save approximately $1 million annually by serving models that power their Related-Questions feature on cloud-hosted NVIDIA GPUs. The Related-Questions feature offers Perplexity users suggested follow-up questions to facilitate deeper dives after a search query.
Delivering new levels of performance
The inference team at Perplexity adopts a comprehensive, full-stack approach to their road map, consistently optimizing and enhancing every layer of the stack—from applications and use cases to inference serving middleware and hardware accelerators.
In terms of inference serving middleware, the team is actively collaborating with the NVIDIA Triton engineering team to deploy
disaggregating serving
, a groundbreaking technique that separates the prefill and decode inference phases of an LLM workflow onto separate NVIDIA GPUs. This technique significantly boosts overall system throughput while meeting SLAs, translating to lower cost per token. Additionally, this technique gives Perplexity the flexibility to use different NVIDIA GPU products for each inference phase given its specific hardware resource requirements.
The Perplexity team understands that optimizing the software stack can only drive performance improvements to a certain extent. To deliver new levels of performance, hardware innovations are crucial. This is why they are eager to assess the
NVIDIA Blackwell platform
.
NVIDIA Blackwell delivers
significant performance leaps
enabled by numerous technology innovations, including the second-generation Transformer Engine with support for the FP4 data format, fifth-generation NVLink and NVSwitch enabling significantly larger NVLink domains, and more. Collectively these innovations deliver 30x improvement in inference performance for
trillion parameter LLMs
.
Get started
NVIDIA Triton Inference Server
and
NVIDIA TensorRT
are open-source projects available on GitHub. They are also available as Docker containers that can be pulled from
NVIDIA NGC
. They are part of
NVIDIA AI Enterprise
, which offers enterprise-grade security, stability, and support. Enterprises seeking the fastest time to value can use
NVIDIA NIM
, a set of easy-to-use microservices for accelerated inference on a wide range of AI models, including open-source community and NVIDIA AI Foundation models.
To learn more, check out these resources:
From Zero to Millions: Scaling Large Language Model Inference with TensorRT-LLM
Perplexity Enhances Model Performance for AI-Powered Search Engines with NVIDIA NeMo
Accelerating Large Language Model Inference with NVIDIA in the Cloud | https://developer.nvidia.com/zh-cn/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/ | 聚焦:Perplexity AI 利用 NVIDIA 推理栈每月服务 4 亿个搜索请求 | 对 AI 赋能服务的需求持续快速增长,这给 IT 和基础设施团队带来了越来越大的压力。这些团队的任务是配置必要的硬件和软件来满足这一需求,同时平衡成本效益和最佳用户体验。Perplexity AI 的推理团队面临着这一挑战。Perplexity AI 是一款由 AI 驱动的搜索引擎,每月可处理超过 435 million 个查询。每个查询代表多个 AI 推理请求。
为了满足这一需求,Perplexity 推理团队转而使用
NVIDIA H100 Tensor Core GPUs
、
NVIDIA Triton Inference Server
和
NVIDIA TensorRT-LLM
来进行经济高效的
大语言模型(LLM)
部署。本文根据他们的实操经验,详细介绍了一些部署最佳实践和节省的 TCO。
同时为多个 AI 模型提供服务
为了支持其广泛的用户群并满足搜索、总结和问答等方面的各种请求,Perplexity 的推理团队同时为 20 多个 AI 模型提供服务。其中包括热门开源 Llama 3.1 模型的不同变体,如 8B、70B 和 405B。
为了将每个用户请求与适当的模型相匹配,该公司依靠较小的分类器模型来帮助确定用户意图。然后,分类器检测到的用户任务 (例如文本补全) 会被路由到部署在 GPU 节点上的特定模型。每个节点由一个或多个 NVIDIA H100 GPU 组成,并由 NVIDIA Triton Inference Server 实例管理。这些节点遵循严格的服务级别协议 (SLA) 运行,以实现成本效益和用户交互。
图 1. Perplexity AI 使用 NVIDIA HGX H100 4-GPU 和 8-GPU 系统大规模部署 LLM 生产,具有动态可扩展性、模型并行性和负载均衡功能
为了适应庞大的 Perplexity 用户群和全天波动的流量,Pod 托管在 Kubernetes 集群内。它们具有内置的前端调度程序,可根据负载和使用情况将流量路由到适当的 Pod,从而确保始终满足 SLA。
前端调度程序使用的调度算法会影响令牌间延迟,尤其是在改善最差性能百分位方面 (Figure 2)。Perplexity 团队不断寻找新的调度器优化,包括如何更好地考虑不同请求之间的序列长度变化。
图 2、在 16 QPS 的中等请求速率下,在内部 Perplexity 基准测试中,循环、最小请求的令牌间延迟分配,以及两种随机选择负载均衡策略的能力
Triton Inference Server 是 Perplexity 部署架构的关键组件。它跨各种后端提供优化模型、批量处理传入用户请求,并向调度程序提供 GPU 利用率指标。这支持根据推理请求的数量增加或减少部署和 GPU 的数量。
有关如何使用 Kubernetes 部署 NVIDIA Triton 的详细指南,请参阅
使用 Kubernetes 扩展 LLMs with NVIDIA Triton 和 NVIDIA TensorRT-LLM
。
满足严格的服务级别协议
为了为公司的各种用例定义合适的 SLA,Perplexity 的推理团队进行了全面的 A/B 测试,评估不同的配置及其对用户体验的影响。他们的目标是最大限度地提高 GPU 利用率,同时始终满足每个特定用例的目标 SLA。通过在满足目标 SLA 的同时改进 batching,推理服务成本得以优化。
对于较小的模型,例如嵌入实时检索中使用的 10 亿个参数以下的模型,重点在于尽可能降低延迟。这些通常对用户隐藏,并且是更广泛工作流程的一部分。因此,这些查询的配置通常具有较低的批量大小。鉴于这些模型的内存占用空间较小,该团队在 NVIDIA H100 GPU 上同时运行多个模型,以保持高资源利用率。
对于面向用户的模型 (例如 Llama 8B、70B 和 405B),这些模型对用户体验和部署成本的影响更大,因此该团队会进行更深入的性能分析,并评估关键指标,例如时间到第一个 token、每位用户每秒 token 数以及每百万次查询的成本。
为了在控制成本的同时优化性能,Perplexity 跨多个 GPU 并行部署这些模型。由于严格的服务水平协议 (SLA),该团队选择将张量并行性增加到 4 个和 8 个 GPU,他们发现这样可以在固定的 GPU 预算内降低对延迟非常敏感的请求的服务成本。数据或 pipeline 并行有助于在延迟敏感度较低的设置中更大限度地提高吞吐量。
Perplexity 团队现在将 TensorRT-LLM 与使用优化的 CUDA 内核构建的专有 LLM 运行时相结合,以更低的成本在严格的 SLA 中成功为基于 Llama 的模型提供服务。
图 3、在四个 NVIDIA Hopper GPU 上使用张量并行机制对 Llama 8B 模型进行分片,可将 Perplexity 每百万个令牌的相对成本降低高达 3 倍,以处理延迟敏感型请求
最终,Perplexity 推理团队决定托管模型,取决于他们能否以较低的成本为这些模型提供服务,同时仍满足严格的服务水平协议 (SLA),与使用第三方 LLM 提供商 API 相比。例如,该团队估计,与向第三方 LLM API 服务提供商发送相同的请求量相比,通过在云托管的 NVIDIA GPUs 上提供支持其 Related-Questions 功能的模型,他们每年能够节省约 $1 million。“Related-Questions” 功能为 Perplexity 用户提供后续问题建议,以便在搜索查询后进行更深入的研究。
图 4、与使用第三方 LLM API 服务相比,Perplexity 使用其部署架构在云托管的 NVIDIA GPU 上提供 Related-Questions AI 模型,每年可节省约 100 万美元
提供更高水平的性能
Perplexity 的推理团队对其路线图采用全面的全栈方法,不断优化和增强堆栈的每一层 (从应用和用例到为中间件和硬件加速器提供服务的推理)。
在推理服务中间件方面,该团队正在积极与 NVIDIA Triton 工程团队合作,部署
分解服务
,这是一项突破性技术,可将 LLM 工作流的预填充和解码推理阶段分离到单独的 NVIDIA GPU。此技术可显著提高整体系统吞吐量,同时满足 SLA,从而降低每个令牌的成本。此外,鉴于 Perplexity 的特定硬件资源需求,此技术还可让 Perplexity 在每个推理阶段灵活使用不同的 NVIDIA GPU 产品。
Perplexity 团队明白,优化软件堆栈只能在一定程度上推动性能提升。为了提供更高级别的性能,硬件创新至关重要。因此,他们迫切希望评估
NVIDIA Blackwell 平台
。
NVIDIA Blackwell 通过众多技术创新实现了
显著的性能飞跃
,包括支持 FP4 数据格式的第二代 Transformer Engine、支持更大 NVLink 域的第五代 NVLink 和 NVSwitch 等。总体而言,这些创新可将
万亿级参数 LLM
的推理性能提升 30 倍。
开始使用
NVIDIA Triton Inference Server
和
NVIDIA TensorRT
是 GitHub 上的开源项目。它们还可用作 Docker 容器,可从
NVIDIA NGC
中提取。它们是
NVIDIA AI Enterprise
的一部分,可提供企业级安全性、稳定性和支持。寻求快速实现价值的企业可以使用
NVIDIA NIM
,这是一套易于使用的微服务,可加速各种 AI 模型(包括开源社区和 NVIDIA AI Foundation 模型)的推理。
如需了解详情,请查看以下资源:
从 0 扩展到数百万:使用 TensorRT-LLM 扩展大型语言模型推理
Perplexity
借助 NVIDIA NeMo 增强 AI 驱动的搜索引擎的模型性能
借助 NVIDIA 在云端加速大语言模型推理 |
https://developer.nvidia.com/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/ | Llama 3.2 Full-Stack Optimizations Unlock High Performance on NVIDIA GPUs | Meta recently released its
Llama 3.2
series of vision language models (VLMs), which come in 11B parameter and 90B parameter variants. These models are multimodal, supporting both text and image inputs. In addition, Meta has launched text-only
small language model (SLM)
variants of Llama 3.2 with 1B and 3B parameters. NVIDIA has optimized the Llama 3.2 collection of models for great performance and cost-efficient serving across millions of GPUs worldwide – from our most powerful data center and cloud GPUs to local
NVIDIA RTX workstations
and even low-power edge devices with
NVIDIA Jetson
.
Llama 3.2 VLMs support long context lengths of up to 128K text tokens as well as a single image input at a resolution of 1120 x 1120 pixels. To enable low latency responses for great user experiences, while also providing high throughput for cost-efficient serving of these models, the NVIDIA platform is optimized at every layer of the technology stack.
Similarly, the Llama 3.2 SLMs have been optimized to run well on the millions of NVIDIA RTX PCs and workstations worldwide. They have also been quantized to allow for local deployment on edge devices with NVIDIA Jetson. For more information, see
Deploying Accelerated Llama 3.2 from the Edge to the Cloud
.
This post describes the full-stack optimizations that enable high throughput and low latency serving of Llama 3.2 models.
Accelerating Llama 3.2 AI inference throughput
The Llama 3.2 11B and Llama 3.2 90B models include a vision encoder with a text decoder. The encoder is optimized for high-performance inference using the
NVIDIA TensorRT
library and the text decoder is optimized using the
NVIDIA TensorRT-LLM
library.
The visual information from the vision encoder is fused into the Llama text decoder with a cross-attention mechanism that is supported in TensorRT-LLM. This enables the Llama 3.2 VLMs to efficiently generate text by taking into account visual reasoning and understanding in context with the text input.
TensorRT supports the vision encoder in the BF16 data format. TensorRT-LLM supports the text decoder in both FP16 and BF16 formats. The official recipe released by Meta uses a BF16 text decoder. This is also used for our baseline performance measurements. To boost performance further, NVIDIA developed a custom FP8
post-training quantization (PTQ)
recipe, leveraging the fourth-generation FP8 Tensor Cores that are part of the
NVIDIA Hopper architecture
.
This recipe, available through the
TensorRT Model Optimizer library
, enables higher Llama 3.2 throughput and lower latency while delivering the same accuracy across numerous benchmarks including
ScienceQA
,
OCRBench
,
TextVQA
, and
MMMU
. This means that developers can now run the model more cost-effectively.
Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using
NVIDIA NIM
microservices.
Delivering high throughput and low latency
Table 1 shows maximum throughput performance, representing offline use cases, across a range of input and output sequence lengths and single input image with maximum supported resolution of 1120 x 1120 pixels. Using a system based on the NVIDIA HGX H200 platform, we run the Llama 3.2 90B model on eight
NVIDIA H200 Tensor Core GPUs
, each with 141 GB of fast HBM3e memory, connected through NVLink and
NVLink Switch
, providing 900 GB/s of GPU-to-GPU bandwidth between the GPUs.
Maximum Throughput Performance –
Output Tokens/Second
Eight NVIDIA H200 Tensor Core GPUs
Input | Output Sequence Lengths | Image Size
8,000 | 2,000 | 1120×1120
20,000 | 2,000 | 1120×1120
60,000 | 2,000 | 1120×1120
BF16 Encoder with FP8 Decoder
2,646
1,417
480
Table 1. Maximum throughput performance with NVIDIA internal measurements
TensorRT optimized NIM for VLMs version 1.1.0 recipe. NIM server restarted between each ISL/OSL configuration to set an optimal KV cache split. Data measured on 11/14/2024. Output tokens/second is inclusive of time to generate the first token – tok/s =total generated tokens / total latency. DGX H200, normalized to 8 GPUs (by taking the TP profile maximizing throughput per GPU, and multiplying that value by 8 to simulate a replica-parallel setup), batch size tuned for maximum node throughput, TensorRT Model Optimizer version 0.21 (pre-release), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0.
Table 2 shows minimum latency performance using the same input and output sequence lengths and input image size.
Minimum Latency Performance – Output Tokens/Second
Eight NVIDIA H200 Tensor Core GPUs
Input | Output Sequence Lengths | Image Size
8,000 | 2,000 | 1120×1120
20,000 | 2,000 | 1120×1120
60,000 | 2,000 | 1120×1120
BF16 Encoder with FP8 Decoder
64
63
55
Table 2. Minimum latency performance with NVIDIA internal measurements
TensorRT optimized NIM for VLMs version 1.1.0 recipe. Data measured on 11/4/2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, TP8, batch size = 1, TensorRT Model Optimizer version 0.21 (prerelease), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0.
As these results show, NVIDIA H200 GPUs with
TensorRT-optimized
software delivers exceptional performance on Llama 3.2 90B VLM, in both latency-optimized and throughput-optimized scenarios.
Throughput performance of GeForce RTX 4090 with ONNX Runtime on NVIDIA RTX
For Windows deployments, NVIDIA has optimized Llama 3.2 SLMs to work efficiently using the
ONNX Runtime Generative API
, with a
DirectML
backend. Performance measurements are made using the
model checkpoint
available on the NGC catalog. The checkpoint is a quantized version of Llama 3.2 3B Instruct model and is quantized to
AWQ INT4
using
AutoAWQ
and converted to
ONNX
using ONNX Runtime Generative API.
Maximum Throughput Performance – Output Tokens/Second
NVIDIA GeForce RTX 4090 GPUs
Input | Output Sequence Lengths
100 | 100
2,000 | 100
4,000 | 100
Onnx-GenAI Runtime with DirectML, BS=1
253
203
165
Onnx-GenAI Runtime with DirectML, BS=4
615
374
251
Table 3. Maximum throughput performance with NVIDIA internal measurements
ONNX Runtime Generative API with DirectML data measured on 10/07/2024. Output tokens/second is inclusive of time to generate the first token – tok/s =total generated tokens / total latency. GeForce RTX 4090 GPU.
Better performance on Llama 3.2 across platforms
With the NVIDIA accelerated computing platform, you can build models and supercharge your applications with the most performant Llama 3.2 models on any platform—from the data center and cloud to local workstations. Enterprises seeking the fastest time to value can use
NVIDIA NIM
, part of the
NVIDIA AI Enterprise
software platform, which offers
NVIDIA TensorRT
optimized
inference
on Llama 3.2 and other models from NVIDIA and its partner ecosystem.
Acknowledgments
We would like to thank George Yuan, Alex Settle, and Chenjie Luo for their efforts in supporting this post. | https://developer.nvidia.com/zh-cn/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/ | Llama 3.2 全栈优化释放 NVIDIA GPU 的高性能 | Meta 最近发布了
Llama 3.2
系列视觉语言模型(VLM),其中包含 11B 参数和 90B 参数变体。这些模型是多模态模型,支持文本和图像输入。此外,Meta 还推出了 Llama 3.2 的纯文本
小语言模型(SLM)
变体,具有 1B 和 3B 参数。NVIDIA 已对 Llama 3.2 模型集合进行优化,以便在全球数百万个 GPU 上提供出色的性能和经济高效的服务 – 从我们功能强大的数据中心和云 GPU 到本地
NVIDIA RTX 工作站
,甚至是使用
NVIDIA Jetson
的低功耗边缘设备。
Llama 3.2 VLM 支持长达 128K 的文本标记,以及分辨率为 1120 x 1120 像素的单张图像输入。为了实现低延迟响应,提供出色的用户体验,同时提供高吞吐量以经济高效地提供这些模型,NVIDIA 平台在技术堆栈的每一层都进行了优化。
同样,Llama 3.2 SLM 已经过优化,可在全球数百万 NVIDIA RTX PC 和工作站上良好运行。它们还经过量化,允许在边缘设备上使用 NVIDIA Jetson 进行本地部署。有关更多信息,请参阅
将加速的 Llama 3.2 从边缘部署到云端
。
本文将介绍如何通过全栈优化实现 Llama 3.2 模型的高吞吐量和低延迟服务。
加速 Llama 3.2 AI 推理吞吐量
Llama 3.2 11B 和 Llama 3.2 90B 模型包括一个带有文本解码器的视觉编码器。编码器使用
NVIDIA TensorRT 库
针对高性能推理进行了优化,文本解码器使用
NVIDIA TensorRT-LLM 库
进行了优化。
通过 TensorRT-LLM 中支持的交叉注意力机制,视觉编码器的视觉信息融合到 Llama 文本解码器中。这使得 Llama 3.2 VLM 能够高效生成文本,在文本输入的上下文中考虑视觉推理和理解。
TensorRT 支持视觉编码器在 BF16 数据格式下运行。TensorRT-LLM 支持文本解码器在 FP16 和 BF16 格式下运行。Meta 发布的官方 recipe 使用了 BF16 文本解码器,这也用于我们的基准性能测量。为了进一步提高性能,NVIDIA 利用 NVIDIA Hopper 架构中的
第四代 FP8 Tensor Core
开发了自定义的 FP8 后训练量化(PTQ)recipe。
此方法通过
TensorRT Model Optimizer 库
提供,可实现更高的 Llama 3.2 吞吐量和更低的延迟,同时在众多基准测试(包括
ScienceQA
、
OCRBench
、
TextVQA
和
MMMU
)中提供相同的准确性。这意味着开发者现在可以更经济高效地运行模型。
TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用
NVIDIA NIM
微服务的生产就绪型部署获得。
提供高吞吐量和低延迟
表 1 显示了在一系列输入和输出序列长度以及单输入图像 (最大支持分辨率为 1120 x 1120 像素) 中的最大吞吐量性能 (代表离线用例)。我们使用基于 NVIDIA HGX H200 平台的系统,在 8 个
NVIDIA H200 Tensor Core GPU
上运行 Llama 3.2 90B 模型,每个 GPU 均具有 141 GB 的快速 HBM3e 内存,通过
NVLink 和 NVLink Switch
连接,可在 GPU 之间提供 900 GB/s 的 GPU 到 GPU 带宽。
最大吞吐量性能 –
每秒输出令牌数
八个 NVIDIA H200 Tensor Core GPU
输入|输出序列长度|图像大小
8000|2000|1120*1120
20000|2000|1120*1120
60000|2000|1120*1120
带 FP8 解码器的 BF16 编码器
2646
1417
480
表 1. 通过 NVIDIA 内部测量获得的最大吞吐量性能
TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。在每个 ISL/OSL 配置之间重新启动 NIM 服务器,以设置最佳 KV 缓存分割。数据测量时间:11 月 14 日,2024 年。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = total generated tokens / total latency。DGX H200,标准化为 8 个 GPUs (通过采用 TP 配置文件最大限度地提高每个 GPU 的吞吐量,并将该值乘以 8 以模拟副本并行设置),针对最大节点吞吐量调整了批量大小,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0。
表 2 显示了使用相同的输入和输出序列长度以及输入图像大小时的最小延迟性能。
最低延迟性能 – 输出令牌/秒
(
八个 NVIDIA H200 Tensor Core GPUs
)
输入|输出序列长度|图像大小
8000|2000|1120*1120
20000|2000|1120*1120
60000|2000|1120*1120
带 FP8 解码器的 BF16 编码器
64
63
55
表 2、NVIDIA 内部测量的最低延迟性能
TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。数据测量时间:2024 年 11 月 4 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 生成的 tokens 总数/总延迟。DGX H200、TP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0 以上。
这些结果表明,
搭载 TensorRT 优化软件的
NVIDIA H200 GPU 可在 Llama 3.2 90B VLM 上提供出色的性能,适用于延迟优化和吞吐量优化的场景。
采用 ONNX Runtime 的 GeForce RTX 4090 在 NVIDIA RTX 上的吞吐量性能
对于 Windows 部署,NVIDIA 已对 Llama 3.2 SLM 进行优化,以便使用带有
DirectML
后端的
ONNX
Runtime Generative API
高效工作。使用 NGC 目录上提供的
模型检查点
进行性能测量。检查点是 Llama 3.2 3B Instruct 模型的量化版本,使用
AutoAWQ
量化为
AWQ INT4
,并使用 ONNX Runtime Generative API 转换为 ONNX。
最大吞吐量性能 – 输出令牌/秒
NVIDIA GeForce RTX 4090 GPU
输入|输出序列长度
100|100
2000|100
4000|100
采用 DirectML 的 Onnx-GenAI 运行时,BS = 1
253
203
165
支持 DirectML 的 Onnx-GenAI 运行时,BS = 4
615
374
251
表 3、通过 NVIDIA 内部测量获得的最大吞吐量性能
使用在 2024 年 10 月 7 日测量的 DirectML 数据的 ONNX Runtime Generative API。输出 token/s 包含生成第一个 token 的时间 – tok/s = 总生成 token 数 / 总延迟。GeForce RTX 4090 GPU。
跨平台在 Llama 3.2 上获得更好的性能
借助 NVIDIA 加速计算平台,您可以在任何平台 (从数据中心、云到本地工作站) 上构建模型并使用性能超强的 Llama 3.2 模型增强您的应用。
NVIDIA NIM
是
NVIDIA AI Enterprise
软件平台的一部分,可为 Llama 3.2 和 NVIDIA 及其合作伙伴生态系统的其他模型提供经
NVIDIA TensorRT
优化的推理。
致谢
我们要感谢 George Yuan、Alex Settle 和 Chenjie Luo 为支持这篇博文而付出的努力。 |
https://developer.nvidia.com/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/ | Streamlining AI Inference Performance and Deployment with NVIDIA TensorRT-LLM Chunked Prefill | In this blog post, we take a closer look at chunked prefill, a feature of
NVIDIA TensorRT-LLM
that increases GPU utilization and simplifies the deployment experience for developers. This builds on our previous
post
discussing how advanced KV cache optimization features in TensorRT-LLM improve performance up to 5x in use cases that require system prefills.
Challenges with traditional prefill and decode inference approaches
When a user submits a request to a model, it goes through two distinct computational phases: prefill and decode. Each phase uses GPU compute resources differently.
During the prefill phase, the system processes all input tokens to compute the KV cache, which is then used to generate the first token of the output. This phase is computationally demanding and can effectively use a GPU’s vast parallel compute resources.
In the decode phase, the system generates output tokens individually, updating the intermediate states from the prefill phase with each new token. Since the heavy computational work for intermediate state calculations is done in the prefill phase, the decode phase primarily involves processing only the newly generated token. As a result, this phase is less computationally intensive.
TensorRT-LLM supports in-flight batching where multiple requests are handled in parallel, enabling prefill and decode stage processing simultaneously. While more efficient than phased batching, in which decode requests are paused to process prefill requests, this approach can introduce latency as the decode phases are delayed until the prefill requests are completed. The top portion of Figure 1 illustrates this concept.
Figure 1. A simplified representation of the chunked prefill process, showing how it can increase GPU utilization and accelerate query completion
Balancing prefill and decode phases with chunked prefill
With TensorRT-LLM chunked prefill, the tokens are divided into smaller units, or chunks, for faster processing. This prevents the prefill phase from becoming a bottleneck, enables more parallelization with decode phase tokens, and increases GPU utilization. The bottom portion of Figure 1 illustrates this concept.
Additionally, using TensorRT-LLM chunked prefill enables GPU systems to handle longer contexts and higher concurrency levels. Since memory usage depends on the number of tokens processed during each iteration, using prefill chunks decouples memory consumption from the context length of incoming requests. The system can then process larger context lengths without increasing memory demands.
Simplifying TensorRT-LLM engine creation with dynamic chunk sizing
Figure 1 shows how chunk size can impact user interactivity and overall system throughput, requiring a trade-off based on the desired user experience and available GPU resources. Using a large chunk size lowers the number of iterations required to process prefill sequences, reducing time to first token (TTFT).
However, it also increases the time taken to complete the decode phase of ongoing requests, increasing query completion time and reducing output tokens per second (TPS). Finding the right balance can sometimes be time-consuming for developers. TensorRT-LLM deploys dynamic chunk sizing, providing ideal recommendations for the user based on GPU utilization metrics.
An additional advantage of dynamic prefill chunk sizing is the simplification of the TensorRT-LLM engine build process. In the past, developers had to supply a maximum input sequence length from which activation buffers were computed during engine build time. To ensure the activation buffers could handle the most complex user queries, developers were required to set the maximum sequence length expected for their use case. This approach optimized GPU resources for outlier and worst-case scenarios, leading to inefficient memory usage.
With dynamic prefill chunk sizing developers no longer need to specify the maximum input length manually. Instead, activation buffer sizes are automatically determined by the chunk size, which is configured by TensorRT-LLM. This eliminates manual configuration and leads to more efficient memory usage during execution.
Getting started with TensorRT-LLM chunked prefills
Using TensorRT-LLM chunked prefill significantly improves both system performance and utilization. Breaking down the traditionally sequential prefill phase into smaller, more manageable chunks, enables better parallelization, with the decode phase, reducing bottlenecks and accelerating query completion.
The dynamic adjustment of chunk size also simplifies the TensorRT-LLM engine configuration process, eliminating the need for developers to manually set activation buffer sizes, which previously resulted in inefficient memory usage. Dynamic chunk sizing optimizes memory use and ensures that the system adapts seamlessly to varying LLM user demands.
Check out our
GitHub documentation
, to start using TensorRT-LLM chunked prefills. Learn more about
NVIDIA AI Inference solutions
and stay up-to-date with the latest
AI inference performance
updates. | https://developer.nvidia.com/zh-cn/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/ | 借助 NVIDIA TensorRT-LLM 分块预填充提高 AI 推理效率和简化部署 | 在本
博文
中,我们详细介绍了分块预填充,这是
NVIDIA TensorRT-LLM
的一项功能,可提高 GPU 利用率并简化开发者的部署体验。本文基于之前的博文,讨论了 TensorRT-LLM 中的高级 KV 缓存优化功能如何在需要系统预填充的用例中将性能提升高达 5 倍。
传统预填充和解码推理方法面临的挑战
当用户向模型提交请求时,它将经历两个不同的计算阶段:预填充和解码。每个阶段使用 GPU 计算资源的方式各不相同。
在预填充阶段,系统会处理所有输入令牌以计算 KV 缓存,然后使用该缓存生成输出的第一个令牌。此阶段的计算要求很高,可以有效利用 GPU 庞大的并行计算资源。
在解码阶段,系统会单独生成输出 tokens,使用每个新 tokens 更新预填充阶段的中间状态。由于中间状态计算的繁重计算工作是在预填充阶段完成的,因此解码阶段主要涉及仅处理新生成的 tokens。因此,此阶段的计算密集度较低。
TensorRT-LLM 支持动态批处理,允许并行处理多个请求,使得预填充和解码阶段可以同时进行。虽然这种方法比暂停解码请求以处理预填充请求的分阶段批处理更高效,但由于解码阶段延迟到预填充请求完成,因此可能会导致延迟。图 1 的顶部说明了这一概念。
图 1. 分块预填充过程的简化表示,展示了如何提高 GPU 利用率并加速查询完成。
使用分块预填充平衡预填充和解码阶段
借助 TensorRT-LLM 分块预填充,令牌被分成较小的单元或分块,以加快处理速度。这可以防止预填充阶段成为瓶颈,使解码阶段的令牌实现更多的并行化,并提高 GPU 的利用率。图 1 的底部部分说明了这一概念。
此外,使用 TensorRT-LLM 分块预填充使 GPU 系统能够处理更长的上下文和更高的并发级别。由于内存使用量取决于每次迭代期间处理的令牌数量,因此使用预填充块可将内存消耗量与传入请求的上下文长度解耦。然后,系统可以在不增加内存需求的情况下处理更大的上下文长度。
通过动态数据块大小调整简化 TensorRT-LLM 引擎创建
图 1 显示了块大小百分比对用户交互性和整体系统吞吐量的影响,这需要根据所需的用户体验和可用 GPU 资源进行权衡。使用大块大小可以减少处理预填充序列所需的迭代次数,从而缩短第一个令牌(TTFT)的时间。
但是,这也会增加完成正在进行的请求的解码阶段所需的时间,从而增加查询完成时间并降低每秒输出令牌数(TPS)。对于开发者而言,找到合适的平衡有时非常耗时。TensorRT-LLM 部署动态块大小,根据 GPU 利用率指标为用户提供理想建议。
动态预填充块大小调整的另一个优势是简化了 TensorRT-LLM 引擎构建流程。过去,开发者必须提供最大输入序列长度,以便在引擎构建期间计算激活缓冲区。为确保激活缓冲区能够处理最复杂的用户查询,开发者需要为其用例设置预期的最大序列长度。这种方法针对异常和最差情况优化了 GPU 资源,导致内存使用效率低下。
借助动态预填充块大小调整,开发者无需再手动指定最大输入长度。相反,激活缓冲区大小由块大小(由 TensorRT-LLM 配置)自动确定。这消除了手动配置,并提高了执行期间的内存使用效率。
TensorRT-LLM 分块预填充入门
使用 TensorRT-LLM 分块预填充可显著提高系统性能和利用率。将传统的连续预填充阶段分解为更小、更易于管理的块,可通过解码阶段实现更好的并行化,从而减少瓶颈并加速查询完成。
块大小的动态调整还简化了 TensorRT-LLM 引擎配置流程,无需开发者手动设置激活缓冲区大小,而这在之前会导致内存使用效率低下。动态块大小优化内存使用,并确保系统能够无缝适应不同 LLM 用户的需求。
查看
我们的 GitHub 文档
,开始使用 TensorRT-LLM 分块预填充。详细了解
NVIDIA AI 推理解决方案
,并随时了解最新的
AI 推理性能更新
。 |
https://developer.nvidia.com/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | 5x Faster Time to First Token with NVIDIA TensorRT-LLM KV Cache Early Reuse | In our previous
blog post
, we demonstrated how reusing the key-value (KV) cache by offloading it to CPU memory can accelerate time to first token (TTFT) by up to 14x on x86-based NVIDIA H100 Tensor Core GPUs and 28x on the NVIDIA GH200 Superchip. In this post, we shed light on KV cache reuse techniques and best practices that can drive even further TTFT speedups.
Introduction to KV cache
LLM models are rapidly being adopted for many tasks, including question-answering, and code generation. To generate a response, these models begin by converting the user’s prompt into tokens, which are then transformed into dense vectors. Extensive dot-product operations follow to mathematically model the relationships between the tokens and build a contextual understanding of the user input. The computational cost of generating this contextual understanding increases quadratically with the length of the input sequence.
This resource-intensive process generates keys and values, which are cached to avoid recomputation when generating subsequent tokens. Reusing the KV cache reduces the computational load and time needed to generate additional tokens—leading to a faster and more efficient user experience.
When reusing the KV cache, careful attention must be given to how long it remains in memory, which components to evict first when memory is full, and when it can be reused for new incoming prompts. Optimizing these factors can lead to incremental performance improvements in KV cache reuse. NVIDIA TensorRT-LLM offers three key features that specifically address these areas.
Early KV cache reuse
Traditional reuse algorithms require the entire KV cache computation to be completed before any portions of it can be reused with new user prompts. In scenarios such as enterprise chatbots, where system prompts—predefined instructions added to user queries—are essential to direct the LLM’s responses in line with enterprise guidelines, this method can be inefficient.
When a surge of users interacts with the chatbot simultaneously, each user would require a separate computation of the system prompt KV cache. With TensorRT-LLM, we can instead reuse the system prompt as it is being generated in real time, enabling it to be shared across all users during the burst, rather than recalculating it for each user. This can significantly accelerate inference for use cases requiring system prompts by up to 5x.
Figure 1. TensorRT-LLM KV cache reuse can speed up TTFT by up to 5x
Flexible KV cache block sizing
In reuse implementations, only entire cache memory blocks can be allocated for reuse. For example, if the cache memory block size is 64 tokens and KV cache is 80 tokens, only 64 tokens will be stored for reuse, while the remaining 16 tokens will need to be recomputed. However, if the memory block size is reduced to 16 tokens, all 64 tokens can be stored across five memory blocks, eliminating the need for re-computation.
This effect is most pronounced when the input sequences are short. For long input sequences, larger blocks can be more beneficial. As is clear, the more granular the control you have over the KV cache, the better you can optimize it for your specific use case.
TensorRT-LLM provides fine-grained control over KV cache memory blocks, giving developers the ability to chop them into smaller blocks between 64 to 2 tokens. This optimizes the usage of allocated memory, increases reuse rates, and improves TTFT. When running LLAMA70B on NVIDIA H100 Tensor Core GPUs, we can speed up TTFT up to 7% in multi-user environments by reducing KV cache block size from 64 tokens to 8 tokens.
Figure 2. Impact of changing KV cache block size on inference speedup
Efficient KV cache eviction protocols
Partitioning the KV cache into smaller blocks and evicting unused ones can be effective for memory optimization, but it introduces dependency complexities. When a specific block is used to generate a response, and the result is stored as a new block, it can form a tree-like structure of dependencies.
Over time, the counters tracking the usage of the source blocks (the branches) may become stale as the dependent nodes (the leaves) are reused. Evicting the source block then requires the eviction of all dependent blocks, which would require recalculation of the KV cache for new user prompts, increasing TTFT.
To address this challenge, TensorRT-LLM includes intelligent eviction algorithms that can trace the dependent nodes from their source nodes and evict dependent nodes first, even if they have more recent reuse counters. This ensures more efficient memory management while preventing unnecessary evictions of dependent blocks.
Figure 3. A logical representation of KV cache eviction algorithm show how it can reduce the number of evicted blocks, increasing the likelihood of reuse
Getting started with TensorRT-LLM KV cache reuse
Generating KV cache during inference requires a lot of compute and memory resources. Using it efficiently is critical to improving model response, accelerating inference, and increasing system throughput. TensorRT-LLM provides advanced reuse features for developers looking to further optimize TTFT response times for peak performance.
To start using TensorRT-LLM KV cache reuse check out our
GitHub documentation
. | https://developer.nvidia.com/zh-cn/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/ | NVIDIA TensorRT-LLM KV 缓存早期重用实现首个令牌速度 5 倍提升 | 在上一篇
博客文章
中,我们展示了通过将键值 (KV) 缓存卸载到 CPU 内存来重复使用该缓存如何在基于 x86 的 NVIDIA H100 Tensor Core GPU 上将首次生成令牌 (Time To First Token, TTFT) 的速度提升高达 14 倍,在 NVIDIA GH200 Superchip 上可将其提升高达 28 倍。在本文中,我们介绍了可以进一步推动 TTFT 加速的 KV 缓存重复使用技术和最佳实践。
KV 缓存简介
许多任务(包括问答和代码生成)都迅速采用了 LLM 模型。为了生成响应,这些模型首先将用户的提示符转换为 tokens,然后将其转换为 dense vectors。随后会进行大量的 dot-product operations,以数学方式对 tokens 之间的关系建模,并构建对用户输入的上下文理解。生成这种上下文理解的计算成本会随着输入序列的长度而二次增加。
此资源密集型流程会生成键和值,这些键和值会被缓存,以避免在生成后续 token 时进行重新计算。重复使用 KV 缓存可减少生成额外 token 所需的计算负载和时间,实现更快、更高效的用户体验。
重复使用 KV 缓存时,必须仔细注意其在内存中的剩余时间、在内存已满时首先移除哪些组件,以及何时可重复用于新的传入提示。优化这些因素可能会带来 KV 缓存重复使用的性能增益。NVIDIA TensorRT-LLM 提供了三个关键功能,专门解决了这些问题。
早期 KV 缓存复用
传统的复用算法需要先完成整个 KV 缓存的计算,然后才能根据新用户的提示重复使用其任何部分。在企业聊天机器人等场景中,系统提示(添加到用户查询中的预定义指令)对引导 LLM 根据企业指南做出响应至关重要,这种方法可能效率低下。
当大量用户同时与聊天机器人交互时,每个用户都需要单独计算系统提示 KV 缓存。借助 TensorRT-LLM,我们可以重新使用实时生成的系统提示,使其在突发期间在所有用户之间共享,而不是为每个用户重新计算。这可以显著加快推理速度,最高可达 5 倍。
图 1.TensorRT-LLM KV 缓存重复利用可将 TTFT 速度提升高达 5 倍。
灵活调整 KV 缓存块大小
在重用实现中,只能分配整个缓存内存块以供重用。例如,如果缓存内存块大小为 64 个令牌,KV 缓存为 80 个令牌,则仅存储 64 个令牌以供重用,而其余 16 个令牌则需要重新计算。但是,如果将内存块大小减少到 16 个令牌,则可在五个内存块中存储全部 64 个令牌,从而消除重新计算的需求。
当输入序列较短时,这种效果最为明显。对于长输入序列,更大的块会更有好处。显然,您对 KV 缓存的控制越细致,就越能针对特定用例进行优化。
TensorRT-LLM 可对 KV 缓存内存块进行细粒度控制,使开发者能够将这些块分割成 64 到 2 个令牌之间的较小块。这可优化已分配内存的使用,提高复用率并改善 TTFT。在 NVIDIA H100 Tensor Core GPU 上运行 LLAMA70B 时,我们可以通过将 KV 缓存块的大小从 64 个令牌减少到 8 个令牌,在多用户环境中将 TTFT 速度提升高达 7%。
图 2. 更改 KV 缓存块大小对推理加速的影响
高效的 KV 缓存回收协议
将 KV 缓存分割成较小的块并移除未使用的块对于内存优化可能有效,但这会引入依赖项复杂性。当使用特定块生成响应并将结果存储为新块时,它会形成树状的依赖结构。
随着时间的推移,跟踪源块(分支)使用情况的计数器可能会随着依赖节点(叶子)的重复使用而过时。然后,移除源块需要移除所有依赖块,这将需要重新计算 KV 缓存以用于新用户提示,从而增加 TTFT。
为应对这一挑战,TensorRT-LLM 包含智能驱逐算法,该算法可以追踪源节点中的依赖节点,并首先驱逐依赖节点,即使这些节点具有较新的重复使用计数器。这可以确保更高效的内存管理,同时防止不必要的依赖块驱逐。
图 3.KV 缓存驱逐算法的逻辑表示展示了如何减少被驱逐的块的数量,从而增加重复使用的可能性。
TensorRT-LLM KV 缓存重用入门
在推理过程中生成 KV 缓存需要大量的计算和内存资源。高效使用 KV 缓存对于改善模型响应、加速推理和提高系统吞吐量至关重要。TensorRT-LLM 为希望进一步优化 TTFT 响应时间以实现峰值性能的开发者提供了高级复用功能。
要开始使用 TensorRT-LLM KV 缓存,请参阅
我们的 GitHub 文档
。 |
https://developer.nvidia.com/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/ | 3x Faster AllReduce with NVSwitch and TensorRT-LLM MultiShot | Deploying generative AI workloads in production environments where user numbers can fluctuate from hundreds to hundreds of thousands – and where input sequence lengths differ with each request – poses unique challenges. To achieve low latency inference in these environments, multi-GPU setups are a must – irrespective of the GPU generation or its memory capacity. To enhance inference performance in production-grade setups, we’re excited to introduce TensorRT-LLM Multi-shot, a new multi-GPU communication protocol that leverages the
NVIDIA NVLink Switch
to significantly increase communication speeds by up to 3x. This blog outlines this new feature and how it helps developers and solution architects address the limitations of traditional multi-GPU communication methods.
Challenges with traditional AllReduce algorithms
For low latency inference, multi-GPU is critical, regardless of the memory capacity of a single GPU. However, at low concurrency, the time GPUs spend exchanging data can outweigh the time spent on compute. For optimal performance, an efficient
AllReduce
operation – a collective operation that combines partial results from each participating GPU – is critical.
Traditional approaches use ring-based algorithms, where the partial values are passed around a ring of GPUs. Each GPU contributes its values and passes the result to its neighbor. This process is repeated 2N-2 times where N is the number of GPUs working together, and by the end of the process, every GPU has the same summed value. A second pass over the ring is required to propagate summed values from the last GPU to the rest.
The Ring approach makes efficient use of available GPU-to-GPU bandwidth per communication step, but as the number of GPUs increases, so does the number of steps. This increases latency, as all GPUs need to stay synchronized at every step of the ring. These synchronization latencies add significant latency overhead and can make it difficult to meet more stringent latency targets.
The Ring AllReduce algorithm is described below:
Ring Algorithm: GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1)
2N-2 steps, with full tensor send/recv each step
Latency: 2N-2 communication steps. (N: # of GPUs)
Traffic: (4N-4)/N tensor bytes of send/recvs
Addressing AllReduce communication challenges with TensorRT-LLM MultiShot
TensorRT-LLM MultiShot is a new algorithm that reduces the O(N) latency of Ring AllReduce by up to 3x leveraging multicast in NVSwitch. Multicast is a hardware acceleration feature in NVSwitch which allows a GPU to send data once and have that data sent simultaneously to all other GPUs, minimizing the number of communication steps to two inter-GPU synchronizations while remaining bandwidth efficient. Without NVSwitch, this would take N times the communication bandwidth.
TensorRT-LLM Multishot separates the AllReduce into a ReduceScatter operation followed by an AllGather operation (for more detailed descriptions of collective operations, see this
documentation
).
Each GPU is responsible for accumulating only a portion of the result tensor.
The first step (or “shot”) involves each GPU sending the different slices of the tensor to the respective GPU responsible for accumulating that slice of the tensor.
After accumulating locally, each GPU now has the correct sum accumulators for its unique slice of the output.
In the second step (or “shot”), each GPU broadcasts the result slice to all other GPUs using the NVSwitch multicast capability. This minimizes the per GPU bandwidth required as the NVSwitch itself performs data amplification; each GPU sends 1/N the data and receives the full result tensor in one step.
The entire operation only takes two communication steps, regardless of the number GPUs performing tensor parallel inference.
TensorRT-LLM MultiShot Algorithm: GPU_N sends slices, Compute slice sum, broadcast result in single multicast operation.
Latency: 2 communication steps (regardless of number of GPUs)
Traffic: 2 tensor bytes of send/recv (regardless of number of GPUs)
Why this matters
Since this algorithm requires only two communication steps rather than 2N-2 (where N is the number of GPUs), MultiShot can be nearly 3x faster than Ring AllReduce. The benefits of this algorithm are particularly evident with smaller message sizes and high parallelism – the scenario needed when minimum latency is required for a great user experience.
This can be used to either reduce minimum latency, or increase throughput at a given latency. In scenarios with more aggressive latency thresholds, this can lead to super-linear scaling with the number of GPUs.
Figure 1. With TensorRT-LLM MultiShot, AllReduce latency is reduced by up to 3x.
Achieving optimal inference performance requires careful workload analysis and a deep understanding of performance bottlenecks. By gaining that understanding – both through internal engineering work as well as through close collaboration with external developers and researchers – we can quickly and frequently optimize many aspects of our platform to deliver great performance for users.
As we continue to identify and implement new performance optimizations – some may be extensive, others might be narrower in scope – we will be providing regular updates on these optimizations, providing both technical motivation and quantified benefits. | https://developer.nvidia.com/zh-cn/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/ | NVSwitch 和 TensorRT-LLM MultiShot 共同加速 AllReduce 速度达 3 倍 | 在用户数量可能在数百到数十万之间波动,且输入序列长度随每个请求而变化的生产环境中,部署生成式 AI 工作负载会面临独特的挑战。要在这些环境中实现低延迟推理,无论 GPU 生成方式或显存容量如何,多 GPU 设置都是必需的。为了提高生产级设置中的推理性能,我们很高兴推出 TensorRT-LLM Multi-shot,这是一种新的多 GPU 通信协议,利用
NVIDIA NVLink Switch
可将通信速度大幅提升高达 3 倍。本博客概述了这一新功能,以及它如何帮助开发者和解决方案架构师克服传统多 GPU 通信方法的限制。
传统 AllReduce 算法面临的挑战
对于低延迟推理,无论单个 GPU 的显存容量如何,多 GPU 都至关重要。但是,在低并发情况下,GPU 花在交换数据上的时间可能超过花在计算上的时间。为了获得最佳性能,
高效的 AllReduce 操作
–结合每个参与其中的 GPU 的部分结果的集合操作–至关重要。
传统方法使用基于环的算法,其中部分值在环形的 GPU 之间传递。每个 GPU 都贡献其值并将结果传递给其邻居。该过程重复 2N-2 次,其中 N 是协同工作的 GPU 数量,在该过程结束时,每个 GPU 都具有相同的总和值。需要对环进行第二次传递,以将总和值从最后一个 GPU 传播到其他 GPU。
Ring 方法可在每个通信步骤中高效利用可用的 GPU 到 GPU 带宽,但随着 GPU 数量的增加,步骤数也会增加。这会增加延迟,因为所有 GPU 都需要在 Ring 的每个步骤中保持同步。这些同步延迟会显著增加延迟开销,并可能导致难以满足更严格的延迟目标。
Ring AllReduce 算法描述如下:
环形算法:GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1)
2N-2 步长,每步具有完整的 Tensor send/recv
延迟:2N-2 通信步骤。(N:GPU 的数量)
流量:(4N-4)/N 张量的 send/recv 字节数
使用 TensorRT-LLM MultiShot 应对 AllReduce 通信挑战
TensorRT-LLM MultiShot 是一种新算法,可利用 NVSwitch 中的组播,将 Ring AllReduce 的 O(N) 延迟最多降低 3 倍。组播是 NVSwitch 中的硬件加速功能,允许一个 GPU 发送数据一次,并将该数据同时发送到所有其他 GPU,从而将通信步骤的数量减少到两个 GPU 间的同步,同时保持带宽效率。如果没有 NVSwitch,这将占用 N 倍的通信带宽。
TensorRT-LLM Multishot 将 AllReduce 分离为 ReduceScatter 操作,然后是 AllGather 操作(有关集合操作的更多详细说明,请参阅
此文档
)。
每个 GPU 仅负责累积结果张量的一部分。
第一步(或“shot”)涉及每个 GPU 将张量的不同切片发送到负责累积该张量切片的相应 GPU。
在本地累加后,每个 GPU 现在都有正确的和累加器,用于其独特的输出切片。
在第二步 (或“shot”) 中,每个 GPU 使用 NVSwitch 组播功能将结果切片广播到所有其他 GPU。这可最大限度地减少 NVSwitch 本身执行数据放大所需的每个 GPU 带宽;每个 GPU 一步发送 1/N 数据并接收完整的结果张量。
无论执行张量并行推理的 GPU 数量如何,整个操作仅需两次通信步骤。
TensorRT-LLM MultiShot 算法:GPU_N 发送切片、计算切片和、在单个组播运算中广播结果。
延迟:2 个通信步骤(与 GPU 数量无关)
流量:2 张量字节的 send/recv(与 GPU 数量无关)
为何如此重要
由于此算法只需要两个通信步骤,而不是 2N-2 (其中 N 表示 GPU 数量),因此 MultiShot 的速度几乎是 Ring AllReduce 的 3 倍。这种算法的优势在消息大小较小且并行度高的情况下尤为明显,而这正是需要最低延迟以获得出色的用户体验的场景。
这可用于降低最小延迟,或在给定延迟下提高吞吐量。在具有更激进的延迟阈值的场景中,这可能会导致 GPU 数量的超线性扩展。
图 1. 借助 TensorRT-LLM MultiShot,AllReduce 延迟最多可降低 3 倍。
实现最佳推理性能需要仔细的工作负载分析和对性能瓶颈的深入了解。通过内部工程工作以及与外部开发者和研究人员的密切合作,我们可以快速、频繁地优化平台的许多方面,为用户提供出色的性能。
随着我们继续识别和实施新的性能优化(一些可能是广泛的,另一些可能范围较窄),我们将定期提供有关这些优化的更新,提供技术动机和量化效益。 |
https://developer.nvidia.com/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/ | Accelerating Scientific Literature Reviews with NVIDIA NIM Microservices for LLMs | A well-crafted systematic review is often the initial step for researchers exploring a scientific field. For scientists new to this field, it provides a structured overview of the domain. For experts, it refines their understanding and sparks new ideas. In 2024 alone, 218,650 review articles were indexed in the
Web of Science
database, highlighting the importance of these resources in research.
Completing a systematic review significantly enhances a researcher’s knowledge base and their academic impact. However, traditional review writing requires collecting, reading, and summarizing large volumes of academic articles on a specific topic. Due to the time-consuming nature of this manual exercise, the scope of processed literature is often confined to dozens or a few hundred articles. Interdisciplinary content—frequently outside the researcher’s area of expertise—adds another layer of complexity.
These challenges make it increasingly difficult to create comprehensive, reliable, and impactful systematic reviews.
The advent of large language models (LLMs) offers a groundbreaking solution, enabling the rapid extraction and synthesis of information from extensive literature. Participating in the
Generative AI Codefest Australia
provided a unique opportunity to explore this idea with support from NVIDIA AI experts to leverage
NVIDIA NIM microservices
for accelerating literature reviews. This enabled the rapid testing and fine-tuning of several state-of-the-art LLMs for our literature analysis process.
Testing the potential of LLMs for processing papers
As a research group specializing in physiological ecology within the
ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF)
, we embarked on writing a review of the literature on the global responses of non-vascular plants, such as moss or lichen, to wind.
However, we quickly faced a challenge: many relevant articles on wind-plant interactions failed to explicitly mention these key words in their titles or abstracts, which are typically used as primary filters during literature screening. A comprehensive analysis of the topic required manually reading the full text of each article—a highly time-consuming process.
We decided to explore the potential of using LLMs to extract content specifically related to wind-plant interactions from the articles. To achieve this, we implemented a simple Q&A application based on the
LlaMa 3.1 8B Instruct NIM microservice
(Figure 1). This enabled us to get an initial prototype quickly.
This first prototype, processing the papers sequentially, was extremely useful to craft and optimize the prompts to extract key information from each article.
Figure 1. Initial simple paper processing pipeline using an LLM NIM microservice at its core
To validate the accuracy of extracted information, we initially manually validated the results. When no significant errors were found in the test dataset, we identified opportunities to further enhance the efficiency of key information extraction using LLMs (Figure 2). These include converting the papers from a PDF format to structured JSON; extracting images, tables, and charts; and using parallel processing to speed up the processing of papers.
Figure 2. Enhanced paper processing pipeline incorporating NV-ingest and parallel processing of papers
Enhancing the performance of LLMs for more efficient information extraction
By using NVIDIA NIM microservices for LLMs and
nv-ingest
, we deployed LLMs and a data ingestion pipeline in our local environment with eight NVIDIA A100 80-GB GPUs. We also fine-tuned the models using low-rank adaptation (LoRA) to improve the accuracy of information extraction from the papers.
We compiled a dataset of over 2K scientific articles related to the targeted research domain, sourced from the
Web of Science
and
Scopus
databases. Over a week during Generative AI Codefest, we focused on experimenting with various strategies to optimize the efficiency and accuracy of key information extraction from these articles.
Best-performing model
To determine the best-performing model, we tested a range of instruction-based and general-purpose LLMs from the NVIDIA API Catalog on a set of randomly selected articles. Each model was assessed for its accuracy and comprehensiveness in information extraction.
Ultimately, we determined that Llama-3.1-8B-instruct was the most suitable for our needs.
Processing speed
We developed a Q&A module using streamlit to answer user-defined research-specific questions.
To further improve processing speed, we implemented parallel processing of the prompts sent to the LLM engine and used KV-caching, which significantly accelerated the computation time by a factor of 6x when using 16 threads.
Extraction content types
We used nv-ingest to extract content from the original PDFs, including text, figures, tables, and charts, into structured JSON files. This enabled information extraction beyond text content, enabling a more comprehensive context for answering the questions.
Using JSON files instead of the original PDF files during inference also had a significant effect on lowering the processing time, by an additional factor of 4.25x.
Results
Thanks to these improvements, we significantly reduced the time required to extract information from our database of papers, with a total speedup of 25.25x compared to our initial implementation.
Processing the entirety of our database now takes less than 30 minutes using two A100 80-GB GPUs and 16 threads.
Compared to the traditional approach of manually reading and analyzing an entire article, which typically takes about one hour, this optimized workflow achieved a time savings of over 99% (Figure 3).
Figure 3. Required time for information extraction and categorization
In addition to information extraction, we also investigated automated article classification. By fine-tuning Llama-3.1-8b-Instruct with a LoRA adapter on a sample of papers manually annotated, we successfully automated the classification process, demonstrating its effectiveness in organizing complex datasets of scientific papers.
The results indicated that each article required only 2 seconds for classification, compared to the 300+ seconds required on average for a manual classification by an experienced reader (Figure 3).
Future directions
We’re currently refining our workflow to further accelerate the processing. We’re also improving our user interface to provide easy access to more locally deployed LLMs and enhance accessibility by other researchers (Figure 4).
We plan to implement the
NVIDIA AI Blueprint for multimodal PDF data extraction
to identify the most relevant articles for each research question and interact with those papers.
Beyond technical improvements, we are aiming to organize extracted key information for each question and generate visualizations (such as maps with location of the experiments mentioned in the papers) to accelerate further the writing of the systematic review.
Figure 4. Prototype interface undergoing in-house testing at SAEF
Summary
Our work at the Generative AI Codefest demonstrated the transformative potential of AI in accelerating systematic literature reviews. With NVIDIA NIM, we quickly moved from an idea to a working solution that significantly improves the process of information extraction from scientific papers.
This experience highlights how AI can streamline research workflows, enabling faster and more comprehensive insights. LLMs have the potential to facilitate interdisciplinary research, empowering scientists to explore complex, multi-domain research fields more effectively.
Moving forward, we aim to refine these methods and tools, ensuring that they are accessible and scalable for future research across diverse topics. | https://developer.nvidia.com/zh-cn/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/ | 借助适用于 LLM 的 NVIDIA NIM 微服务加速科学文献审查 | 精心设计的系统审查通常是研究人员探索科学领域的第一步。对于刚刚接触该领域的科学家,它提供了该领域的结构化概述。对于专家而言,它可以细化他们的理解并激发新的想法。在 2024 年,
Web of Science
数据库收录了 218,650 篇评测文章,凸显了这些资源在研究中的重要性。
完成系统审查可显著增强研究人员的知识库及其学术影响。然而,传统的评论文章需要收集、阅读和总结有关特定主题的大量学术文章。由于这项人工练习非常耗时,处理过的文献的范围通常局限于几十篇或几百篇文章。跨学科内容 (通常超出研究人员的专业领域) 又增加了一层复杂性。
这些挑战使得创建全面、可靠且有影响力的系统审查变得越来越困难。
大语言模型 (LLMs) 的出现提供了一种突破性的解决方案,能够从大量文献中快速提取和合成信息。参加
Generative AI Codefest Australia
提供了一个独特的机会,让我们在 NVIDIA AI 专家的支持下探索这一想法,利用
NVIDIA NIM
微服务加速文献审查。这为我们的文献分析流程快速测试和微调了多个先进的 LLMs。
测试 LLMs 在处理论文方面的潜力
作为 ARC 特殊研究计划
(ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF))
中的一个专门研究生理生态学的研究小组,我们着手撰写了一篇关于苔藓或地衣等非血管性植物对风的全球反应的文献回顾。
然而,我们很快就面临了一个挑战:许多关于风力电厂相互作用的相关文章都未能在标题或摘要中明确提及这些关键词,而在文献筛选过程中,这些关键词通常用作主要的过滤器。要对主题进行全面分析,需要手动阅读每篇文章的完整文本,这一过程非常耗时。
我们决定探索使用 LLMs 从文章中提取与风电场交互特别相关的内容的可能性。为此,我们
基于 LlaMa 3.1 8B Instruct NIM 微服务
实施了一个简单的问答应用(图 1)。这使我们能够快速获得初始原型。
第一个原型按顺序处理论文,在制作和优化提示以从每篇文章中提取关键信息方面非常有用。
图 1、以 LLM NIM 微服务为核心的初始简单论文处理流程
为了验证提取信息的准确性,我们首先手动验证结果。当测试数据集中未发现重大错误时,我们确定了使用 LLM 进一步提高关键信息提取效率的机会 (图 2)。其中包括将论文从 PDF 格式转换为结构化 JSON;提取图像、表格和图表;以及使用并行处理来加速论文处理速度。
图 2、增强型论文处理流程,整合 NV-ingest 和论文的并行处理
增强 LLMs 的性能,实现更高效的信息提取
通过将 NVIDIA NIM 微服务用于 LLM 和
nv-ingest
,我们在本地环境中部署了 LLM 和数据提取管道,并配备了 8 个 NVIDIA A100 80-GB GPU。我们还使用低等级适应性(LoRA)对模型进行了微调,以提高从论文中提取信息的准确性。
我们编译了一个数据集,其中包含与目标研究领域相关的 2000 多篇科学文章,这些文章均源自
Web of Science
和
Scopus
数据库。在 Generative AI Codefest 期间,我们在一周内专注于尝试各种策略,以优化从这些文章中提取关键信息的效率和准确性。
性能最佳的模型
为了确定性能最佳的模型,我们在一组随机选择的文章中测试了 NVIDIA API Catalog 中的一系列基于指令的和通用的 LLM。评估了每个模型在信息提取方面的准确性和全面性。
最终,我们确定 Llama-3.1-8B-instruct 最适合我们的需求。
处理速度
我们开发了一个问答模块,使用 Streamlit 回答用户定义的研究特定问题。
为进一步提高处理速度,我们对发送到 LLM 引擎的提示实施了并行处理,并使用了 KV 缓存,这使得使用 16 个线程时的计算时间显著缩短了 6 倍。
提取内容类型
我们使用 nv-ingest 将原始 PDF 中的内容 (包括文本、图形、表格和图表) 提取到结构化 JSON 文件中。这使得信息提取功能超越了文本内容,从而为回答问题提供更全面的环境。
在推理过程中使用 JSON 文件而非原始 PDF 文件也对将处理时间额外缩短 4.25 倍产生了重大影响。
成果
得益于这些改进,我们显著缩短了从论文数据库中提取信息所需的时间,与初始实施相比,总速度提高了 25.25 倍。
现在,使用两个 A100 80-GB GPU 和 16 个线程处理整个数据库只需不到 30 分钟。
与手动阅读和分析整篇文章的传统方法 (通常需要大约 1 小时) 相比,这种经过优化的工作流程可节省 99% 以上的时间 (图 3)。
图 3. 信息提取和分类所需的时间
除了信息提取,我们还研究了自动化文章分类。通过在手动标注的论文样本上使用 LoRA 适配器微调 Llama-3.1-8b-Instruct,我们成功实现了分类过程的自动化,证明了其在组织复杂的科学论文数据集方面的有效性。
结果表明,与经验丰富的读者手动分类平均所需的 300 多秒相比,每篇文章只需 2 秒即可完成分类 (图 3)。
未来方向
我们目前正在改进工作流程,以进一步加速处理。我们还在改进用户界面,以便轻松访问更多本地部署的 LLM,并增强其他研究人员的可访问性(图 4)。
我们计划实施用于多模态 PDF 数据提取的 NVIDIA AI Blueprint,以确定与每个研究问题最相关的文章,并与这些论文进行交互。
除了技术改进,我们的目标是组织提取的每个问题的关键信息,并生成可视化效果 (例如,包含论文中提到的实验位置的地图),以进一步加速系统审查的编写。
图 4、原型接口正在 SAEF 进行内部测试
总结
我们在生成式 AI Codefest 上的工作展示了 AI 在加速系统文献审查方面的变革潜力。借助 NVIDIA NIM,我们迅速从想法转变为可行的解决方案,从而显著改善从科学论文中提取信息的过程。
此体验凸显了 AI 如何简化研究工作流程,从而更快获得更全面的见解。LLMs 具有促进跨学科研究的潜力,可助力科学家更有效地探索复杂的多领域研究领域。
展望未来,我们的目标是改进这些方法和工具,确保其可访问性和可扩展性,以供未来跨各种主题的研究使用。 |
https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/ | Vision Language Model Prompt Engineering Guide for Image and Video Understanding | Vision language models
(VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the
generative AI
landscape by bringing visual understanding to
large language models
(LLMs) through the use of a
vision encoder
. These initial VLMs were limited in their abilities, only able to understand text and single image inputs.
Fast-forward a few years and VLMs are now capable of understanding multi-image and video inputs to perform advanced vision-language tasks such as visual question-answering (VQA), captioning, search, and summarization.
Figure 1. Suite of model customization tools in NVIDIA NeMo
VLM accuracy can be improved for specific use cases by tuning the prompts and model weights. Fine-tuning the model weights can be done with efficient techniques such as PEFT but still requires sufficient data and compute.
However,
prompt engineering
and in context learning can also be used to improve the output quality and is as simple as adjusting the text input to the model at runtime.
This post walks you through the evolution of VLMs and how to effectively prompt a VLM for single image, multi-image, and video understanding use cases. For more information about VLMs and visual AI agents, register for the upcoming
Vision for All: Unlocking Video Analytics with AI Agents
webinar.
Single-image understanding
From a single image, a VLM is capable of identifying, describing, classifying, and reasoning over what is in the image. Some VLMs have advanced capabilities such as providing bounding box coordinates for certain objects, and translating text in the image. This works best for use cases where an image needs to be classified, captioned or described in text.
Single-image capability can also be used to detect basic events occurring in a livestream. This can be accomplished by sampling one frame at a time from a livestream and providing it to the VLM along with a prompt describing what to look for. This works especially well when the types of events to detect are large, clearly visible, and slow-moving.
The VLM can also be prompted to output in a structured format such as JSON so that the response can easily be parsed and sent to a database or a notification service.
For example, we provided Llama-3.2-90b-vision-instruct with the following prompt and image (Figure 2).
Figure 2. Input image to the VLM for question and answering
Prompt:
Is there a fire truck? Is there a fire? Are there firefighters? Output the answer to each question in JSON format.
VLM response:
[ { "question": "Is there a fire truck?", "answer": "True" },
{ "question": "Is there a fire?", "answer": "True" },
{ "question": "Are there firefighters?", "answer": "True" }
]
For each of the questions, the VLM was able to accurately respond and output it in a structured format to be used for downstream tasks.
While this method enables the VLM to operate on video input by sampling one frame at a time, the VLM itself does not understand the full context of the video, limiting it to simple use cases. Sampling one frame at a time does not give it the temporal understanding needed to detect actions or trends that occur over time. This can be improved with multi-image and video VLMs.
Multi-image understanding
Many VLMs can accept multiple images as input allowing the model to compare, contrast, and learn from input images.
VLMs generally have good zero-shot capability across a wide range of use cases. However, when trying to obtain more precise answers from a VLM on domain-specific use cases, VLMs can struggle because they lack the full context required to accurately respond to the user prompts.
To better understand this, we used a VLM to estimate the stock level of products. This is an important retail use case as it could autonomously monitor and alert employees when products are running low in a store to avoid lost revenues from stock-outs.
We provided a single image of a table (Figure 3) that should be fully stocked with several snack products to
Cosmos Nemotron 34B
and asked it to estimate the stock level.
Figure 3. Single-image VLM input for retail stock level detection
Prompt:
Estimate the stock level of the snack table on a scale of 0–100.
VLM response:
The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd.
The model rates it as 80–90% stocked even though it’s quite clear to the human eye that the stock level is low. From the model’s perspective, it does not know what the table looked like beforehand when looking at a single image.
Using the Cosmos Nemotron 34B multi-image capability, we provided another image of the snack table at a 100% stock level (Figure 4), giving the model extra context to produce a more accurate estimate.
Figure 4. In-context learning input to VLM for retail stock level detection
Prompt:
First compare and contrast the stock level of the two images. Then generate an estimate for each image of the stock level on a scale of 0–100%.
VLM response:
The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%.
By providing the additional image as a reference, the VLM now estimates the stock level of the second image at 20–30%, which is significantly more accurate.
Multi-image input can also be used in more complex multimodal RAG pipelines where example images are dynamically added to the prompt based on the user’s input. Multi-image VLMs can also be used for basic video understanding by sampling a small set of frames from a video and providing it to the VLM.
However, many VLMs that accept multi-image input have limited context windows (10–20 frames) and may not have been trained on video data. To truly understand video input, we recommend using a VLM trained on video-specific datasets and a long context window.
Video understanding
The latest VLMs have full video understanding capabilities with large context lengths, frame compression and video training data. This enables VLMs to ingest longer videos and have improved temporal understanding: the ability to understand actions and trends over time.
The first example showed how to use a single frame from a livestream to detect a fire but that same method wouldn’t be able to tell you if the fire is growing or getting smaller because it lacks the temporal context.
A VLM with long context and video understanding processes many frames across time to understand if the fire is growing or not. Here are some examples from the Cosmos Nemotron model families.
Sequential visual understanding
VLMs with sequential understanding relies on processing frames in a sequence, enabling them to capture the progression of actions and events over time. Unlike multi-image models, which process a limited number of frames without explicit temporal connections, sequential video models incorporate temporal dynamics by analyzing frames in an orderly sequence.
This is helpful in situations where you want to capture the causality of events in a video, as it enables the model to track the progression of actions over time and establish cause-effect relationships.
The next few examples are based on the following video where a worker drops a box in a warehouse. We discuss a few prompts and responses.
Figure 5. Input warehouse video to the VLM for scene understanding
Prompt:
What happened in this video?
VLM response:
The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles.
The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment.
The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting.
Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility.
The model gave a detailed response with specifics about the environment such as aisle labels and lighting conditions but failed to correctly identify that the worker dropped a box. Now try to ask this question directly.
Prompt:
Did the worker drop any box?
VLM response:
Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it.
The VLM is now able to deduce that the worker might have dropped the box. This suggests that generic prompts might not capture all the nuances correctly unless you direct the VLM to look for something specific. Modify the prompt a little.
Prompt:
Elaborate on the visual and narrative elements of this warehouse video in detail. Highlight all anomalies or hazards if present.
VLM response:
The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor.
The model is now capturing all the important details of the scene along with important events, such as workers dropping a box. The detailed response is useful if you want to perform Q&A with the video later. VLMs with video understanding are also able to address directional questions, as they can understand actions over time.
Prompt:
From which direction did the worker enter the scene?
VLM response:
The worker entered the scene from the right side.
However, these models often lack precise temporal localization, which can limit their ability to pinpoint when specific events occur.
Prompt:
When did the worker drop the box?
VLM response:
The worker dropped the box in the second image.
Temporal localization
VLMs incorporating
Localization Interpretable Temporal Attention (LITA)
or similar temporal localization techniques elevate video understanding by explicitly learning when and where critical events occur. The models achieve this by introducing temporal attention mechanisms that focus on key segments of a video, aligning visual and textual modalities more effectively over time. This helps answer the “when” questions in the videos.
Prompt:
When did the worker drop the box?
VLM response:
0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground.
The optimal prompt format for a VLM depends on the model’s architecture and the nature of the caption pairs used during training. Different training datasets influence how a VLM interprets the prompt.
Conclusion
This post walked through how VLMs have evolved from supporting only single-image input to being capable of complex temporal reasoning on long video inputs. To get started with VLMs, visit
build.nvidia.com
and try out some of the prompts shown in this post. For technical questions, see the
Visual AI Agent forum
.
VLMs can be used to build various visual AI agents. Explore one of the sample vision AI workflows on the
/NVIDIA/metropolis-nim-workflows
GitHub repo with no GPU required. To build a video analytics AI agent, try the
AI blueprint for video search and summarization
.
For more information about VLMs and visual AI agents, register for the upcoming
Vision for All: Unlocking Video Analytics with AI Agents
webinar.
For more information about LLM prompting, see
An Introduction to Large Language Models: Prompt Engineering and P-Tuning
. | https://developer.nvidia.com/zh-cn/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/ | 图像和视频理解的视觉语言模型提示工程实践指南 | 视觉语言模型 (VLMs) 正在以极快的速度发展。2020 年,首批 VLMs 通过使用视觉编码器将视觉理解引入大语言模型 (LLMs) ,彻底改变了生成式 AI 格局。这些初始 VLMs 能力有限,只能理解文本和单张图像输入。
几年后,VLM 现在能够理解多图像和视频输入,以执行高级视觉语言任务,例如视觉问答 (VQA)、字幕、搜索和摘要。
图 1、NVIDIA NeMo 中的模型自定义工具套件
通过调整提示和模型权重,可以提高特定用例的 VLM 准确性。可以使用 PEFT 等高效技术微调模型权重,但仍需要足够的数据和计算。
但是,
prompt engineering
和上下文学习也可用于提高输出质量,并且与在运行时调整模型的文本输入一样简单。
本文将为您介绍 VLM 的演变,以及如何有效地提示 VLM 以用于了解单张图像、多张图像和视频的用例。如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的
Vision for All:Unlocking Video Analytics with AI Agents 在线研讨会
。
单图像理解
从单张图像来看,VLM 能够识别、描述、分类和推理图像中的内容。一些 VLM 具有高级功能,例如为某些对象提供边界框坐标,以及翻译图像中的文本。这最适合需要对图像进行分类、添加字幕或以文本进行描述的用例。
单图像功能还可用于检测直播中发生的基本事件。这可以通过以下方式实现:从直播中一次采样一帧,并将其提供给 VLM,同时提供描述要查找的内容的提示。当要检测的事件类型较大、清晰可见且移动缓慢时,此方法的效果尤为出色。
系统还可以提示 VLM 以 JSON 等结构化格式输出,以便轻松解析响应并将其发送至数据库或通知服务。
例如,我们为 Llama-3.2-90b-vision-instruct 提供了以下提示和图像(图 2)。
图 2、将图像输入 VLM 以进行问答
提示:是否有消防车?是否有火灾?有消防人员吗?输出每个问题的答案,以 JSON 格式。
VLM 响应:
[ { "question": "Is there a fire truck?", "answer": "True" },
{ "question": "Is there a fire?", "answer": "True" },
{ "question": "Are there firefighters?", "answer": "True" }
]
对于每个问题,VLM 能够准确回答并以结构化格式输出,以便用于下游任务。
虽然此方法允许 VLM 通过一次采样一帧来操作视频输入,但 VLM 本身无法理解视频的完整上下文,将其局限于简单的用例。一次对一帧进行采样无法为其提供检测随时间变化的动作或趋势所需的时间理解。这可以通过多图像和视频 VLM 得到改善。
多图像理解
许多 VLM 可以接受多张图像作为输入,允许模型比较、对比并从输入图像中学习。
VLM 通常在各种用例中都具有良好的零样本击能力。但是,在尝试从 VLM 获取有关特定领域用例的更精确答案时,VLM 可能会遇到困难,因为它们缺乏准确响应用户提示所需的完整上下文。
为了更好地理解这一点,我们使用 VLM 来估计产品的库存水平。这是一个重要的零售用例,因为它可以在商店中的产品运行不足时自动监控和提醒员工,以避免因缺货而损失的收入。
我们向
Cosmos Nemotron 34B
提供了一张表格 (图 3) 的单一图像,该表格应与多种零食产品一起充分库存,并要求其估算库存水平。
图 3. 用于零售库存水平检测的单图像 VLM 输入
提示:按 0 – 100 的比例估算零食表的库存量。
VLM 响应:
The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd.
该模型将其评为 80–90% 的库存,即使肉眼很清楚库存水平很低。从模型的角度来看,它不知道在查看单个图像之前表格的样子。
通过使用 Cosmos Nemotron 34B 多图像功能,我们提供了另一张 100%库存水平的零食表图像 (图 4),为模型提供了额外的背景信息,以便生成更准确的估计。
图 4. 为 VLM 提供情境学习输入,用于零售库存水平检测
提示:首先比较两张图像的库存水平。然后,按 0 – 100% 的比例对库存量的每张图像生成估计值。
VLM 响应:
The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%.
现在,通过提供额外的图像作为参考,VLM 可将第二张图像的库存量估算为 20–30%,准确度明显更高。
多图像输入还可用于更复杂的多模态 RAG 工作流,其中会根据用户的输入将示例图像动态添加到提示中。多图像 VLM 还可以通过从视频中采样一小部分帧并将其提供给 VLM 来用于基本的视频理解。
但是,许多接受多图像输入的 VLM 的上下文窗口有限(10–20 帧),并且可能未基于视频数据进行训练。为了真正理解视频输入,我们建议使用基于视频特定数据集训练的 VLM 和长上下文窗口。
视频理解
最新的 VLM 具有完整的视频理解功能,具有大上下文长度、帧压缩和视频训练数据。这使得 VLM 能够提取更长的视频,并提高时间理解能力:能够理解一段时间内的行动和趋势。
第一个示例展示了如何使用直播中的单帧来检测火灾,但由于缺乏时间背景,相同的方法无法告诉您火灾是在增大还是变小。
具有长上下文和视频理解的 VLM 会随着时间的推移处理多个帧,以了解火势是否在增长。以下是 Cosmos Nemotron 模型系列的一些示例。
顺序视觉理解
具有顺序理解的 VLMs 依赖于按序列处理帧,使其能够捕捉动作和事件随时间推移的进展。与多图像模型相比,它们在处理有限数量的帧时没有显式时间连接,而顺序视频模型则通过按顺序分析帧来整合时间动力学。
这在您想要在视频中捕捉事件因果关系的情况下非常有用,因为它使模型能够跟踪动作在一段时间内的进展情况,并建立因果关系。
接下来的几个示例基于以下视频,展示了一名工人在仓库中丢弃箱子的过程。我们将讨论一些提示和响应。
图 5、将仓库视频输入到 VLM 以进行场景理解
提示:本视频中发生了什么?
VLM 响应:
The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles.
The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment.
The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting.
Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility.
该模型提供了详细的环境细节 (例如通道标签和照明条件),但无法正确识别工人掉落的箱子。现在,尝试直接问这个问题。Prompt:工作人员是否丢弃了任何箱子?
VLM 响应:
Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it.
现在,VLM 可以推断出工作者可能丢弃了箱子。这表明,除非您指示 VLM 查找特定内容,否则通用提示可能无法正确捕捉所有细微差别。稍微修改一下提示。
提示:详细介绍此仓库视频的视觉和叙事元素。突出显示所有异常或危险(如果存在)。
VLM 响应:
The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor.
现在,该模型可以捕捉场景的所有重要细节以及重要事件,例如工人丢箱。如果您想稍后在视频中执行 Q&A,详细的回答非常有用。具有视频理解功能的 VLM 还能够解决定向问题,因为它们可以理解随时间推移的动作。
提示:工作者是从哪个方向进入场景的?
VLM 响应:
The worker entered the scene from the right side.
但是,这些模型通常缺乏精确的时间定位,这可能会限制其确定特定事件何时发生的能力。
提示:工人是什么时候丢弃盒子的?
VLM 响应:
The worker dropped the box in the second image.
时间定位
结合
Localization Interpretable Temporal Attention (LITA)
或类似时间定位技术的 VLM 通过显式学习关键事件发生的时间和地点来提高视频理解能力。这些模型通过引入专注于视频关键片段的时间注意力机制,随着时间的推移更有效地调整视觉和文本模式来实现这一目标。这有助于回答视频中的“何时”问题。
提示:
工人是什么时候丢弃盒子的?
VLM 响应:
0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground.
VLM 的最佳提示格式取决于模型的架构和训练期间使用的描述对的性质。不同的训练数据集会影响 VLM 解释提示的方式。
结束语
本文介绍了 VLM 如何从仅支持单图像输入发展为能够对长视频输入进行复杂的时间推理。要开始使用 VLM,请访问
build.nvidia.com
并尝试本博文中显示的一些提示。有关技术问题,请参阅
Visual AI Agent 论坛
。
VLM 可用于构建各种视觉 AI 智能体。无需 GPU,即可在
/NVIDIA/metropolis-nim-workflows
GitHub 存储库上探索视觉 AI 工作流示例之一。要构建视频分析 AI 智能体,请试用
NVIDIA AI Blueprint
中的视频搜索和摘要蓝图。
如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的 Vision for All:Unlocking Video Analytics with AI Agents 网络研讨会。
有关 LLM 提示的更多信息,请参阅《
An Introduction to Large Language Models: Prompt Engineering and P-Tuning
》。 |
https://developer.nvidia.com/blog/configurable-graph-based-task-solving-with-the-marco-multi-ai-agent-framework-for-chip-design/ | Configurable Graph-Based Task Solving with the Marco Multi-AI Agent Framework for Chip Design | Chip and hardware design presents numerous challenges stemming from its complexity and advancing technologies. These challenges result in longer turn-around time (TAT) for optimizing performance, power, area, and cost (PPAC) during synthesis, verification, physical design, and reliability loops.
Large language models (LLMs) have shown a remarkable capacity to comprehend and generate natural language at a massive scale, leading to many potential applications and benefits across various domains. Successful LLM-based AI agents for hardware design can drastically reduce TAT, leading to faster product cycles, lower costs, improved design reliability, and reduced risk of costly errors.
Figure 1. Marco framework overview
Marco: Configurable Graph-Based Task Solving and Multi-AI Agents Framework
We introduce the proposed
Marco framework
, which encompasses graph-based task solving, agent configurations for sub-tasks, and skill/tool configurations for each AI agent in real time.
Figure 1 showed dynamic and static configurable graph-based task solving, which is flexibly integrated with chip-design knowledge (for example, circuits, timing, and so on).
In the task graph, each node represents a sub-task, and each edge represents the execution or knowledge relationship between nodes. For solving each sub-task, you use
Autogen
to configure single-AI or multi-AI agent with a knowledge database, tools, and memory.
Table 1 summarizes the task graph, agent, and LLM configurations of the Marco framework for various agents.
VerilogCoder
and the MCMM timing analysis agent use a dynamic task graph to complete the specification-to-RTL and extract key takeaways of timing reports, respectively. The timing path debug agent finds the problematic net, wire, and constraints through a static timing debugging task graph (Figure 1).
Agent works
Task category
Marco framework configuration
Task graph
Sub-task agent config.
Customized tools
RTLFixer
Code Syntax Fixing
N/A
Single-AI
RTL Syntax Error RAG Database
Standard Cell Layout Opt.
Optimization
N/A
Single-AI
Cluster Evaluator, Netlist Traverse Tool
MCMM Timing Analysis(Partition/Block-Level)
Summary & Anomaly Identification
Dynamic
Multi-AI
Timing Distribution Calculator, Timing Metric Comparator
DRC Coder
Code Generation
N/A
Multi-Modality & Multi-AI
Foundry Rule Analysis, Layout DRV Analysis, DRC Code Evaluation
Timing Path Debug(Path-Level)
Summary & Anomaly Identification
Static
Hierarchical Multi-AI
Agentic Timing Report Retrieval
VerilogCoder
Code Generation
Dynamic
Multi-AI
TCRG Retrieval Tool, AST-Based Waveform Tracing Tool
Table 1. Task graph, agent configuration, customized tool of the Marco framework for various autonomous agent implementations for hardware design tasks
In Table 1,
RTLFixer
and
Standard Cell Layout Optimization
agents (that is, a single-AI configuration) are supported using the Marco framework. The rest of the multi-AI agents are implemented on the Marco framework.
For
RTLFixer
,
LLM agent for standard cell layout optimization
, and
DRC Coder
, we used single-AI or multi-AI agent configurations with customized tools, memory, and domain knowledge.
Automated hardware description languages code generation
One key area where autonomous agents are making an impact is in the generation of hardware description languages (HDLs), such as Verilog. Due to the growing complexity of VLSI design, writing Verilog and VHDL is time-consuming and prone to bugs, necessitating multiple iterations for debugging functional correctness. Consequently, reducing design costs and designer effort for completing hardware specifications has emerged as a critical need.
LLMs can be used to generate Verilog code from natural language descriptions. However, LLMs often struggle to produce code that is both syntactically and functionally correct.
Syntax correctness
RTLFixer
uses a combination of retrieval-augmented generation (RAG) and ReAct prompting to enable LLMs to iteratively debug and fix syntax errors. RAG incorporates a database of human expert guidance to provide context for error correction. ReAct enables the LLM to reason about the error, plan a fix, and act on the plan.
Functional correctness
VerilogCoder
is a multi-agent system that incorporates a task planner and an abstract syntax tree (AST)-based waveform-tracing tool to generate and debug Verilog code. It employs a task and circuit relation graph (TCRG) to break down a task into manageable sub-tasks and link signal transitions to each step (Figure 2).
Figure 2. Task-driven circuit relation graph retrieval agent reasoning and interacting with the developed TCRG retrieval tool to enrich the task with the relevant circuit and signal descriptions
An AST-based waveform tracing tool assists the LLM agent in identifying and fixing functional errors by back-tracing signal waveforms. VerilogCoder achieves a 94.2% success rate on the VerilogEval-Human v2 benchmark, demonstrating a significant improvement over previous methods.
Video 1. Autonomously Complete Verilog Code with TCRG Planning and AST-Based Waveform Tracing Tools
In Video 1, the demonstration of VerilogCoder autonomously completes functionally correct Verilog code using TCRG planning and AST-based waveform tracing tools.
Automated DRC layout code generation
DRC-Coder
uses multiple autonomous agents with vision capabilities and specialized DRC and Layout DRV analysis tools to generate DRC code. The system interprets design rules from textual descriptions, visual illustrations, and layout representations. The multiple LLM agents include a planner that interprets design rules, and a programmer that translates the rules into executable code.
DRC-Coder incorporates an auto-debugging process, which uses feedback from the code evaluation to refine the generated code.
Video 2. A Demonstration of DRC-Coder in Chip Design
In Video 2, the demonstration of DRC-Coder generates DRC code that achieves perfect F1 scores on hundreds of testing layouts by leveraging a layout analysis tool, an auto-debugging process, and the capabilities of multi-modality and multi-AI agents.
DRC-Coder achieved a perfect F1 score of 1.000 in generating DRC codes for a sub-3nm technology node, outperforming standard prompting techniques. The proposed automated agentic approach significantly reduces the time required for DRC code generation, from weeks to an average of four minutes per design rule.
Standard cell layout optimization
LLM agent for standard cell layout optimization
proposes using the natural language and reasoning ability of an LLM to generate high-quality cluster constraints incrementally to optimize the cell layout PPA and debug the routability with ReAct prompting.
The system uses net information and cell layout analysis to group MOSFET devices into clusters. The AI agent not only achieves up to 19.4% smaller cell area, but also generates 23.5% more LVS and DRC clean cell layouts than the
Transformer-based device clustering approach
on a set of sequential cells in the industrial 2
nm
technology node.
Multi-corner multi-mode timing report debug and analysis
The multi-corner multi-mode (MCMM) timing analysis agent uses dynamic task graphs to complete the specification-to-RTL and extract key takeaways of timing reports, respectively.
The MCMM timing analysis agent achieves an average score of 8.33 out of 10, based on evaluations by experienced engineers on a set of industrial cases, and delivers approximately 60x speedups compared to human engineers (Figure 3).
Figure 3. MCMM timing analysis agent results
The timing path debug agent finds the problematic net, wire, and constraints through the static timing debugging task graph (Figure 1).
In Table 2, the timing path debug agent resolves 86% of path-level debugging tasks, whereas the standard task solving approach fails to resolve any of the tasks.
Multi Report Task Description
Required Analyzed
Sub-Tasks
Standard Task Solving
Timing Path Debug Agent
Find missing clk signals that have no rise/fall information
max, clk
X
V
Identify pairs of nets with high RC mismatch
max, wire
X
V
Detect unusual constraints between victim and its aggressors
max, xtalk, LC
X
V
Identify unusual RC values between victim and its aggressors
max, wire, xtalk, LC
X
V
Find the constraints of slowest stages with highest RC values
max, wire, xtalk, LC
X
V
Compare each timing table for number of stages, point values and timing mismatch
max
X
X
Task M2 and Task M3 for specific stages in list of paths
max, wire, xtalk, LC
X
V
Avg Pass-rate
0%
86%
Table 2. Pass-rate (%) of timing path debug agent with static task graph solving, and a naïve standard task solving without task graph information
X=Failed to solve the task. V=Solved the task successfully.
Conclusion
The proposed Marco framework enables more flexible and domain-specialized methods for real-time hardware design tasks solving. By using task graph and flexible single-AI and multi-AI agent configurations with domain-specific tools and knowledge, we developed various agents for tasks such as cell layout optimization, Verilog syntax error fixing, Verilog and DRC code generation, and timing debugging on problematic blocks, nets, and wires.
The experimental results show impressive performance and efficiency benefits on utilizing collaborative LLM-based agents for chip design.
The future directions for agent research on hardware design include the following:
Training LLMs with high-quality hardware design data
Improving LLM-based agents’ ability for hardware signal and waveform debugging
Incorporating PPA metrics into the design flow
Developing more efficient self-learning techniques and memory systems for LLM agents for solving more complex hardware tasks
For more papers and projects on electronic design automation, see the
NVIDIA Design Automation Research Group
page.
For those interested in the technologies highlighted in the post, here’s a list of relevant papers:
RTLFixer: Automatically Fixing RTL Syntax Errors with Large Language Models
/NVlabs/RTLFixer
GitHub repo
VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree (AST)-based Waveform Tracing Tool
/NVlabs/VerilogCoder
GitHub repo
DRC-Coder: Automated DRC Checker Code Generation using LLM Autonomous Agent
Large Language Model (LLM) for Standard Cell Layout Design Optimization | https://developer.nvidia.com/zh-cn/blog/configurable-graph-based-task-solving-with-the-marco-multi-ai-agent-framework-for-chip-design/ | 基于 Marco 多 AI 智能体框架的可配置图形任务求解在芯片设计中的应用 | 芯片和硬件设计因其复杂性和先进的技术而面临众多挑战。这些挑战导致在合成、验证、物理设计和可靠性循环期间优化性能、功耗、面积和成本(PPAC)的周转时间(TAT)更长。
大语言模型(LLMs)已显示出大规模理解和生成自然语言的出色能力,从而为各种领域带来许多潜在应用和优势。成功地将基于 LLMs 的 AI 智能体用于硬件设计,可以大幅减少 TAT,从而缩短产品周期、降低成本、提高设计可靠性,并降低代价高昂的错误风险。
图 1. Marco 框架概述
Marco:基于图形的可配置任务求解和多 AI 代理框架
我们介绍了拟议的 Marco 框架,其中包括基于图形的任务求解、子任务的智能体配置,以及每个 AI 智能体的实时技能/工具配置。
图 1 展示了基于动态和静态可配置图形的任务求解,可灵活集成芯片设计知识 (例如电路、定时等)。
在任务图中,每个节点代表一个子任务,每个边缘代表节点之间的执行或知识关系。为了解决每个子任务,您可以使用
Autogen
配置具有知识数据库、工具和内存的单个 AI 或多个 AI 代理。
表 1 总结了各种代理的 Marco 框架的任务图形、代理和 LLM 配置。
VerilogCoder
和 MCMM 时序分析代理使用动态任务图形分别完成对 RTL 的规范和提取时序报告的关键要点。定时路径调试代理通过静态定时调试任务图 (图 1) 查找有问题的网络、线和约束。
代理作品
任务类别
Marco 框架配置
任务图
子任务代理配置。
定制工具
RTLFixer
代码语法固定
不适用
单 AI
RTL 语法错误 RAG 数据库
Standard Cell Layout Opt.
优化
不适用
单 AI
Cluster Evaluator, Netlist Traverse Tool
MCMM 定时分析 (Partition/Block-Level)
总结和异常识别
动态
多 AI
Timing Distribution Calculator,Timing Metric Comparator
DRC Coder
代码生成
不适用
多模态和多 AI
Foundry Rule Analysis,Foundry 规则分析,Layout DRV Analysis,布局 DRV 分析,DRC Code Evaluation,DRC 代码评估
Timing Path Debug(Path-Level)
总结和异常识别
静态
分层多 AI
代理定时报告检索
VerilogCoder
代码生成
动态
多 AI
TCRG 检索工具,AST-Based Waveform Tracing Tool
表 1、任务图、代理配置、Marco 框架的定制工具,用于硬件设计任务的各种自主代理实现
在表 1 中,使用 Marco 框架支持
RTLFixer
和标准单元布局优化代理 (即单 AI 配置)。其余的多 AI 代理均在 Marco 框架上实施。
对于 RTLFixer、用于标准单元布局优化的 LLM 代理和 DRC Coder,我们使用了具有定制工具、内存和域知识的单 AI 或多 AI 代理配置。
自动硬件描述语言代码生成
自主智能体产生影响的一个关键领域是硬件描述语言(HDL)的生成,例如 Verilog。由于 VLSI 设计的复杂性与日俱增,编写 Verilog 和 VHDL 非常耗时且容易出现错误,因此需要进行多次迭代来调试功能正确性。因此,降低设计成本和设计师完成硬件规格的工作量已成为一项迫切需求。
LLM 可用于根据自然语言描述生成 Verilog 代码。但是,LLM 通常难以生成语法和功能上都正确的代码。
语法正确性
RTLFixer
结合使用检索增强生成 (RAG) 和 ReAct 提示,使 LLM 能够迭代地调试和修复语法错误。RAG 整合了人类专家指导数据库,为纠错提供上下文。ReAct 启用 LLM 进行错误推理、计划修复并采取行动。
功能正确性
VerilogCoder
是一个多代理系统,包含任务规划器和基于抽象语法树(AST)-based 波形追踪工具,用于生成和调试 Verilog 代码。它使用任务和电路关系图(TCRG)将任务分解为可管理的子任务,并将信号转换链接到每个步骤(图 2)。
图 2、任务驱动的电路关系图检索代理推理,并与开发的 TCRG 检索工具交互,通过相关的电路和信号描述丰富任务
基于 AST 的波形追踪工具通过回溯信号波形来协助 LLM 代理识别和修复功能错误。与之前的方法相比,VerilogCoder 在 VerilogEval-Human v2 基准测试中的成功率为 94.2%。
视频1. 使用 TCRG 规划和基于 AST 的波形追踪工具自动完成 Verilog 代码
在视频 1 中,VerilogCoder 演示使用 TCRG 规划和基于 AST 的波形追踪工具自动完成功能正确的 Verilog 代码。
自动生成 DRC 布局代码
DRC-Coder 使用多个具有视觉功能的自主代理以及专门的 DRC 和布局 DRV 分析工具来生成 DRC 代码。该系统从文本描述、视觉插图和布局表示中解释设计规则。多个 LLM 代理包括一个解释设计规则的规划器和一个将规则转换为可执行代码的程序员。
DRC-Coder 包含自动调试流程,该流程利用代码评估的反馈来优化生成的代码。
视频 2、用于芯片设计的 DRC-Coder 演示
在视频 2 中,DRC-Coder 演示会生成 DRC 代码,通过利用布局分析工具、自动调试流程以及多模态和多 AI 代理的功能,在数百个测试布局中实现完美的 F1 分数。
DRC-Coder 在为低于 3nm 的技术节点生成 DRC 代码时,达到了完美的 F1 评分 (达到 1.000 分),优于标准提示技术。拟议的自动化代理方法可显著缩短 DRC 代码生成所需的时间,即每个设计规则从数周缩短到平均 4 分钟。
标准单元布局优化
用于标准单元布局优化的 LLM 智能体建议使用 LLM 的自然语言和推理能力逐步生成高质量集群约束,以优化单元布局 PPA,并使用 ReAct 提示调试可路由性。
该系统使用网络信息和单元布局分析将 MOSFET 设备分组到集群中。与在工业
2nm
技术节点中的一组顺序单元上采用基于 Transformer 的设备集群方法相比,AI 智能体不仅可将单元面积减少多达 19.4%,而且还可生成多 23.5% 的 LVS 和 DRC 清洁单元布局。
多角度多模式时序报告调试与分析
多角多模 (MCMM) 时序分析代理使用动态任务图形分别完成从规范到 RTL 的规范,并提取时序报告的关键要点。
根据经验丰富的工程师对一组工业案例的评估,MCMM 时序分析代理的平均分为 8.33 分 (满分 10 分),与人类工程师相比,速度提高了约 60 倍 (图 3)。
图 3、MCMM 时序分析代理结果
定时路径调试代理通过静态定时调试任务图 (图 1) 查找有问题的 net、wire 和约束。
在表 2 中,定时路径调试代理可解析 86% 的路径级调试任务,而标准任务求解方法无法解析任何任务。
多报告任务说明
所需的分析结果
子任务
标准任务求解
定时路径调试代理
查找没有上升/下降信息的缺失 clk 信号
max、clk
X
V
识别 nets 对的 RC 不匹配率高
Max、Wire
X
V
检测受害者与侵犯者之间的异常约束
Max、xtalk、LC
X
V
识别牺牲者与侵犯者之间的异常 RC 值
Max、wire、xtalk、LC
X
V
查找具有最高 RC 值的最慢阶段的限制
Max、wire、xtalk、LC
X
V
比较每个 timing table,了解阶段数、点值和 timing mismatch
最大值
X
X
任务 M2 和任务 M3,用于路径列表中的特定阶段
Max、wire、xtalk、LC
X
V
平均通过率
0%
86%
表 2、具有静态任务图求解功能的定时路径调试代理的通过率 (%),以及无任务图信息的朴素标准任务求解
X = 无法解决任务
。V = 成功完成任务。
结束语
拟议的 Marco 框架为实时硬件设计任务求解提供了更灵活、更领域更专业的方法。通过使用任务图形、灵活的单 AI 和多 AI 代理配置以及特定领域的工具和知识,我们为单元布局优化、Verilog 语法错误修复、Verilog 和 DRC 代码生成以及问题块、网络和线缆的定时调试等任务开发了各种代理。
实验结果表明,在芯片设计中使用基于 LLM 的协作式智能体可获得惊人的性能和效率优势。
硬件设计智能体研究的未来方向包括:
使用高质量的硬件设计数据训练 LLMs
提升基于 LLM 的智能体的硬件信号和波形调试能力
将 PPA 指标纳入设计流程
为 LLM 代理开发更高效的自学习技术和内存系统,以解决更复杂的硬件任务
有关电子设计自动化的更多论文和项目,请参阅
NVIDIA Design Automation Research Group
页面。
如果您对本文中重点介绍的技术感兴趣,以下是相关论文列表:
RTLFixer:使用大语言模型自动修复 RTL 语法错误
/NVlabs/RTLFixer
GitHub repo
VerilogCoder:采用基于图形的规划和基于抽象语法树(AST)的波形追踪工具的自主 Verilog 编码代理
/NVlabs/VerilogCoder
GitHub repo
DRC-Coder:使用 LLM 自主代理自动生成 DRC 检查器代码
用于优化标准单元布局设计的大语言模型(LLM) |
https://developer.nvidia.com/blog/defining-llm-red-teaming/ | Defining LLM Red Teaming | There is an activity where people provide inputs to generative AI technologies, such as large language models (LLMs), to see if the outputs can be made to deviate from acceptable standards. This use of LLMs began in 2023 and has rapidly evolved to become a common industry practice and a cornerstone of
trustworthy AI
. How can we standardize and define LLM red teaming?
Researchers affiliated with NVIDIA, the University of Washington, the Center for Human-Compatible AI, and the IT University of Copenhagen conducted a study on red teaming in practice,
Summon a demon and bind it: A grounded theory of LLM red teaming
(published in PLOS One).
Because the goal was to define and understand a relatively novel activity, the study uses a grounded theory approach, taking as evidence interviews from dozens of practitioners in thousands of minutes of video recordings. We spoke with security professionals, policy researchers, and scientists, as well as people who conducted this work non-professionally, such as academic researchers, hobbyists, and even artists, to understand the common themes, motivations, and strategies in attacking LLMs.
What defines LLM red teaming in practice?
LLM red teaming has the following defining characteristics:
It’s limit-seeking:
Red teamers find boundaries and explore limits in system behavior.
It’s never malicious:
People doing red teaming are not interested in doing harm—in fact, quite the opposite.
It’s manual:
Being a creative and playful practice, the parts of red teaming that can be automated are often most useful to give human red teamers insight for their work.
It’s a team effort:
Practitioners find inspiration in each others’ techniques and prompts, and the norm is to respect fellow practitioners’ work.
It’s approached with an alchemist mindset:
We found that red teamers tend to abandon rationalizations about models and their behavior and instead embrace the chaotic and unknown nature of the work.
These characteristics feed into NVIDIA’s definition of LLM red teaming, which we discuss later in this post.
In industry, there’s often a rough division between cybersecurity red teaming and content red teaming:
Cybersecurity red teaming
tends to be scoped to the technology stack leading up to the point of inference output, and technologies used to transmit and render this output.
Content red teaming
, on the other hand, is scoped to the content produced at model inference time.
Why do people red team LLMs?
People who attack LLMs have a broad range of motivations.
Some of these are external. It may be part of their job or a regulatory requirement. Social systems can also play a role, with people discovering LLM vulnerabilities for social media content or to participate in a closed group. Others are intrinsic, as many people do it for fun, out of curiosity, or based on concerns for model behavior.
At NVIDIA, we red team LLMs as part of our Trustworthy AI process to assess risks before release. If the model doesn’t perform up to expectations, we postpone the release until a given model is updated and revised to perform better.
How do people approach this activity?
LLM red teaming consists of using strategies to reach goals when conversationally attacking the target. Each kind of strategy is decomposed into different techniques. A technique might just affect two or three adversarial inputs against the targets, or an input might draw upon multiple techniques.
We identified the following overall types of red team strategies:
Language:
Modulating the surface form of words rather than semantics, such as using an encoding scheme.
Rhetorical:
Relying on argumentation, or manipulation.
Possible worlds:
Trying to shift the context of the interaction.
Fictionalizing:
Shifting the basis of operation to a fictional world or set of conditions.
Stratagems:
Using meta-strategies that affect how one interacts with the LLM at a higher level.
For more information, see
Summon a demon and bind it: A grounded theory of LLM red teaming
, which lists and describes 35 techniques over twelve different strategies.
What can LLM red teaming reveal?
The goal of LLM red teaming isn’t to quantify security. Rather, the focus is on exploration, and finding which phenomena and behaviors a red teamer can get out of the LLM. Put another way, if we get a failure just one time, then the failure is possible.
Another thing that distinguishes red teaming from benchmarks is the focus on novelty.
For both cybersecurity and content-based red teaming, the possible range of attacks is infinite. For cybersecurity, that’s because new attack methods are constantly in development. For content, it’s because the mode of interaction is through text, which can be infinitely rearranged and augmented.
So, repeatability is not interesting when discovering new security weaknesses and vulnerabilities. While it makes sense to test any model for failures using a battery of existing prompts, as a benchmark does, this can never indicate security. It just reveals weaknesses.
It’s possible to patch deployed models to be secure against prompts in public benchmarks, or to use this data during alignment, so getting full marks on a benchmark doesn’t show that a model is secure. However, getting low marks on a security benchmark does still indicate the presence of weaknesses.
In the security context, to test a model rigorously, you should go beyond public knowledge and interact closely with the model, trying to find novel ways to breach a particular LLM.
In this sense, LLM red teaming is a classic instance of an artisanal activity. Red teamers use their human expertise and intuition while interacting with the target. For example, they might sense that a model is close to giving a mitigation message (for example, “As an AI, I cannot….”), and they might respond to this by backing off their current line of requests, or by starting a fresh chat session with a slightly different tack.
Or, a red teamer might sense that a model is close to yielding and so keep pushing and slightly varying their request until they find a way through and get the model to fail in the target way. They add what worked and what didn’t to their conscious expertise and unconscious intuition and then share it with other red teamers.
This makes red teaming a distinctly human activity that complements security benchmarking.
How do people use knowledge that comes from LLM red teaming?
Red teamers are often looking for what they describe as
harms
that might be presented by an LLM. There’s a broad range of definitions of harm.
A red teaming exercise could focus on one of many goals or targets, which could depend on deployment context, user base, data handled, or other factors. Red teamers may also pay attention to the level of complexity required to get a “break.” A harm discovered after a single, one-sentence interaction with an LLM often suggests greater concern than a harm surfaced following complex, multi-turn manipulation.
Sometimes, the goal of red teaming is curiosity, a byproduct of which might be content for the red teamer to share, in their organization or publicly. This both builds the expertise and intuition of the individual and raises the community level of knowledge. It’s common for traditional cybersecurity knowledge to be shared informally on social media, and this applies also for LLM security.
At NVIDIA, human red teaming is a critical part of how we make model-release decisions. We have a skilled LLM red team and a skilled AI red team, who both check our models and infrastructure for vulnerabilities and behaviors that were not caught elsewhere. This helps us in three ways:
It enables us to make informed decisions about whether we will release models
It builds a pool of high-level skill at the frontier of LLM red teaming
It gives us the confidence that we’re making the best effort and getting good results with our AI security.
The results from red teaming go into NVIDIA’s enhanced model documentation format,
Model Card++
.
Some parts of LLM security can be tested automatically. After an exploit has been found in one case, this can be stored and used to test other LLMs, so that we don’t make the same mistake again. We do exactly this in
NVIDIA garak
(Generative AI Red-Teaming and Assessment Kit). Developers ready to test the security of their LLM deployments can run the open-source NVIDIA garak against almost any model and get a report indicating susceptibility to over 120 different categories of vulnerability.
Knowledge about how to break a model can be risky in the wrong hands. When an exploit has been found, the best thing to do is contact the model owner and give them a chance to respond and fix the weakness. This process is called
co-ordinated vulnerability disclosure
, and is also a common practice for LLM vulnerabilities.
NVIDIA’s definition of LLM red teaming
We see LLM red teaming as an instance of AI red teaming. Our definition is developed by the
NVIDIA AI Red Team
and takes inspiration from both this research on LLM red teaming in practice and also the definition used by the Association for Computational Linguistics’ SIG on NLP Security (
SIGSEC
).
Take care to specify the specific subdomain of red teaming, as different audiences often make different assumptions about which form is being referred to.
LLM red teaming
: Systematically testing AI models and systems containing AI models to identify vulnerabilities and behaviors that pose threats or risks to the systems running or using those models.
It can be subdivided into two areas: security red teaming and content-based red teaming.
Security red teaming
Assessing the robustness of the model and the system containing the model to attacks impacting traditional security properties (for example, confidentiality, integrity, and availability), either of the model itself or the system containing the model.
Attacks such as adversarial inputs, model extraction, training data inference and extraction, or prompt injection as used to violate a security property are typically evaluated by a security red team.
These activities typically require teams with a traditional security background to leverage findings and evaluate their impact.
Content-based red teaming
Assessing the model for unwanted behavior under adversarial manipulation, producing outputs that violate some pre-specified behavior contract for the model, either explicit (for example, a model card) or implicit.
These behaviors may include outputs that are offensive, unwanted, or unsafe, including biased or bigoted productions, instructions on unsafe or illegal activities, making promises on behalf of the model owner, or making decisions based on protected characteristics. Common techniques involve various forms of jailbreaking and guardrail evasion.
These activities typically require the support of an ethics team, a legal team, or other similar domain experts to assess the impact of findings.
Improving LLM security and safety
NVIDIA NeMo Guardrails
is a scalable platform for defining, orchestrating, and enforcing AI guardrails for content safety, jailbreak prevention, and more in AI agents and other generative AI applications.
NeMo Guardrails and the
NVIDIA garak
toolkit are now available for developers and enterprises. Enterprises can benefit from high-grade safety and security with
NVIDIA AI Enterprise
.
Meet the experts at GTC
The NVIDIA scientists behind this and other works in AI security will be at GTC 2025. You can hear a panel discussion on
navigating critical challenges in AI governance
, where we discuss practical approaches to building responsible AI systems.
Our cybersecurity AI and security teams will present an exclusive, in-depth session designed to
transform your AI agentic workflows securely
from blueprint to production. And our experts will be available to answer all your questions on
building trustworthy AI systems
.
Acknowledgements
Thanks to Nanna Inie, Jonathan Stray, and Leon Derczynski for their work on the
Summon a demon and bind it: A grounded theory of LLM red teaming
paper published in PLOS One. | https://developer.nvidia.com/zh-cn/blog/defining-llm-red-teaming/ | 定义 LLM 红色团队 | 在一项活动中,人们为生成式 AI 技术 (例如大语言模型 (LLMs)) 提供输入,以确定输出是否会偏离可接受的标准。LLMs 的这种使用始于 2023 年,并已迅速发展成为一种常见的行业实践,也是值得信赖的 AI 的基石。如何标准化和定义 LLM 红色团队?
NVIDIA、华盛顿大学、Center for Human-Compatible AI 和哥本哈根 IT 大学的研究人员对红色团队的实际应用“
Summon a demon and bind it: A grounded theory of LLM red teaming
”(在 PLOS One 中发布) 进行了一项研究。
由于目标是定义和理解相对较新的活动,因此该研究采用了有根据的理论方法,在数千分钟的视频录制中,以数十位从业者的访谈作为证据。我们与安全专业人员、政策研究人员和科学家,以及从事这项非专业工作的人员 (例如学术研究人员、业余爱好者甚至艺术家) 进行了交谈,以了解攻击 LLMs 的常见主题、动机和策略。
什么定义 LLM 红队在实践中?
LLM 红色团队合作具有以下定义特征:
它寻求极限:红色团队成员发现边界并探索系统行为的极限。
这绝非恶意:做红队的人对伤害不感兴趣,事实上,情况恰恰相反。
它是手动的:作为一种创造性和趣味性的练习,red teaming 中可以自动化的部分通常非常有用,可以让人类 red teamers 深入了解他们的工作。
这是一项团队合作精神:从业者从彼此的技巧和提示中寻找灵感,规范是尊重其他从业者的工作。
他们以炼金术士的思维方式进行交流:我们发现,红色团队成员倾向于放弃对模型及其行为的合理化,转而接受工作的混乱和未知本质。
这些特征融入了 NVIDIA 对 LLM 红色团队的定义,我们稍后将在本文中讨论这些定义。
在业内,通常会对网络安全红队和内容红队进行粗略划分:
网络安全红色团队合作的范围往往涵盖直至推理输出点的技术堆栈,以及用于传输和渲染此输出的技术。
另一方面,内容红色团队的作用域为模型推理时生成的内容。
人们为什么组建 LLM 团队?
攻击 LLMs 的人的动机多种多样。
其中一些是外部的。这可能是他们工作的一部分,也可能是监管要求。社交系统也可以发挥作用,人们可以发现社交媒体内容的 LLM 漏洞,或者加入封闭的小组。还有一些则是内在的,因为很多人这样做是为了好玩、出于好奇或出于对模型行为的关注。
在 NVIDIA,我们组建了红队 LLMs 作为我们 Trustworthy AI 流程的一部分,在发布之前评估风险。如果模型的性能未达到预期,我们将延迟发布,直到对给定模型进行更新和修改以提高其性能。
人们如何看待这项活动?
LLM 红色团队合作包括在对话式攻击目标时使用实现目标的策略。每种策略都被分解成不同的技术。一项技术可能仅会影响目标的两个或三个对抗输入,或者一项输入可能会利用多种技术。
我们确定了以下红队策略的整体类型:
语言:
调节词语的表面形式而非语义,例如使用编码方案。
修辞学:
依靠论证或操纵。
可能的世界:
尝试改变交互的上下文。
虚构化 :
将操作基础转移到一个虚构的世界或一组条件中。
Stratagems:
使用影响更高级别的 LLM 交互方式的元策略。
如需了解更多信息,请参阅 Summon a demon and bind it:LLM 红色团队的基础理论,其中列出并描述了 12 种不同策略中的 35 种技术。
LLM 红色团队合作揭示了哪些内容?
LLM 红色团队合作的目标并不是量化安全性。重点在于探索,以及找出红色团队成员可以退出 LLM 的现象和行为。换句话说,如果我们只遇到一次故障,那么失败是可能的。
红色团队与基准测试的另一个区别是对新奇性的关注。
对于网络安全和基于内容的红色团队,攻击的可能范围是无限的。对于网络安全而言,这是因为新的攻击方法正在不断开发中。对于内容,这是因为交互模式是通过文本进行的,可以无限地重新排列和增强。
因此,在发现新的安全漏洞时,可重复性并不重要。虽然使用一系列现有提示测试任何模型的故障是有意义的,就像基准测试一样,这永远无法表明安全性。它只是揭示了缺点。
您可以根据公开基准测试中的提示对部署的模型进行修补以确保安全性,或者在对齐期间使用这些数据,因此在基准测试中获得完整标记并不表明模型是安全的。但是,在安全基准测试中得分较低仍然表示存在缺点。
在安全环境中,为了严格测试模型,您应该超越公开知识,与模型密切互动,尝试寻找突破特定 LLM 的新方法。
从这个意义上说,LLM 红色团队合作是典型的手工活动。红色小队队员在与目标进行互动时,运用他们的人类专业知识和直觉。例如,他们可能会感觉到某个模型即将发出缓解消息 (例如,“作为 AI,我无法……”),他们可能会通过退出当前的请求行,或者以略有不同的策略开始新的聊天会话来响应此消息。
或者,红色团队成员可能会感觉到模型已接近生成,因此继续推送并略微改变其请求,直到找到解决方法,使模型以目标方式失败。他们将有效和无效的部分添加到自己的意识专业知识和直觉中,然后与其他红色团队成员分享。
这使得红队合作成为一种独特的人类活动,是对安全基准测试的补充。
人们如何使用来自 LLM 红队的知识?
Red Team 成员通常会寻找 LLM 可能带来的危害。
伤害
的定义十分广泛。
红色团队训练可以专注于许多目标或指标之一,这可能取决于部署环境、用户基础、处理的数据或其他因素。红色团队成员还可能需要注意“休息”所需的复杂程度。与 LLM 进行单句交互后发现的危害性通常比复杂的多回合操作后出现的危害性更值得关注。
有时,红色团队的目标是好奇心,而好奇心的副产品可能是红色团队成员在其组织中或公开分享的内容。这既可以建立个人的专业知识和直觉,也可以提高社区的知识水平。传统网络安全知识通常在社交媒体上以非正式方式分享,这也适用于 LLM 安全。
在 NVIDIA,人类红色团队是我们做出模型发布决策的关键部分。我们拥有一支技能熟练的 LLM 红色团队和一支技能熟练的 AI 红色团队,他们会检查我们的模型和基础架构,以发现未在其他地方发现的漏洞和行为。这在三个方面为我们提供帮助:
它使我们能够就是否发布模型做出明智的决定
它建立了 LLM 红色团队前沿的高级技能池
它让我们确信,我们正在利用 AI 安全功能尽力而为并取得良好结果。
红色团队的结果将输入 NVIDIA 的增强型模型文档格式
Model Card++
。
可以自动测试 LLM 安全性的某些部分。在一个案例中,当发现漏洞后,这可以被存储并用于测试其他 LLM,这样我们就不会再次出现同样的错误。我们在 NVIDIA NeMo Curator(生成式 AI Red-Teaming 和评估套件)中正是这样做的。准备测试其 LLM 部署安全性的开发者可以针对几乎任何模型运行开源
NVIDIA garak
,并获得报告,表明他们容易受到 120 多个不同类别的漏洞的影响。
掌握有关如何破解模型的知识可能会对不法之徒带来风险。当发现漏洞时,最好的方法是联系模型所有者,让他们有机会做出回应并修复漏洞。此过程称为
协同漏洞披露
,也是 LLM 漏洞的常见做法。
NVIDIA 的 LLM 红队定义
我们将 LLM 红色团队视为 AI 红色团队的一个实例。我们的定义由
NVIDIA AI Red Team
开发,其灵感来自本期关于 LLM 红色团队实践的研究,以及计算语言学协会的 NLP Security SIG(
SIGSEC
)使用的定义。
请注意指定红队的特定子域,因为不同的受众通常会对所提及的形式做出不同的假设。
LLM 红色团队合作:系统测试包含 AI 模型的 AI 模型和系统,以识别会对运行或使用这些模型的系统造成威胁或风险的漏洞和行为。
它可以细分为两个方面:安全红队和基于内容的红队。
安全红色团队
评估模型和包含模型的系统的稳健性,使其能够抵御攻击影响模型本身或包含模型的系统的传统安全属性(例如,机密性、完整性和可用性)。
用于违反安全属性的攻击(例如对抗输入、模型提取、训练数据推理和提取或 prompt injection)通常由安全红色团队进行评估。
这些活动通常需要具有传统安全背景的团队利用调查结果并评估其影响。
基于内容的红色团队
评估模型在对抗操作下的不良行为,生成违反模型某些预先指定的行为契约的输出 (显式 (例如 model card) 或隐式)。
这些行为可能包括冒犯性的、不受欢迎的或不安全的输出,包括有偏见或偏见的生产、关于不安全或不法活动的指令、代表模型所有者作出承诺,或根据受保护的特征做出决定。常见的技术包括各种形式的 jailbreaking 和 guardrail evasion。
这些活动通常需要道德团队、法律团队或其他类似领域专家的支持,以评估调查结果的影响。
提升 LLM 的安全性和安全性
NVIDIA NeMo Guardrails 是一个可扩展的平台,用于定义、编排和执行 AI Guardrails,以在 AI 代理和其他生成式 AI 应用中实现内容安全、越狱预防等。
NeMo Guardrails 和
NVIDIA garak
工具包现已面向开发者和企业推出。借助
NVIDIA AI Enterprise
,企业可以从高级别的安全性和可靠性中受益。
在 GTC 大会上与专家会面
在 GTC 2025 上,NVIDIA 科学家将为这项工作以及 AI 安全领域的其他工作提供支持。您可以听到关于应对 AI 治理中的关键挑战的小组讨论,其中我们讨论了构建负责任的 AI 系统的实用方法。
我们的网络安全 AI 和安全团队将举办一场深入的独家会议,旨在安全地将您的 AI 代理工作流从蓝图转变为生产。我们的专家将为您解答有关构建可信 AI 系统的所有问题。
致谢
感谢 Nanna Inie、Jonathan Stray 和 Leon Derczynski 为“召唤恶魔并将其绑定:LLM 红队合作的基础理论”所做的工作:在 PLOS One 上发表的论文。 |
https://developer.nvidia.com/blog/agentic-autonomy-levels-and-security/ | Agentic Autonomy Levels and Security | Agentic workflows
are the next evolution in AI-powered tools. They enable developers to chain multiple AI models together to perform complex activities, enable AI models to use tools to access additional data or automate user actions, and enable AI models to operate autonomously, analyzing and performing complex tasks with a minimum of human involvement or interaction.
Because of their power, agentic workflows also present an element of risk. The most common model at the core of agentic systems remains some variety of LLM, which remains vulnerable to prompt injection if untrusted data can be introduced into the system.
To help assess and mitigate those vulnerabilities, NVIDIA presents an Agentic Autonomy framework, which we use for the following purposes:
Understand risks associated with increasing complexity of AI workflows
Help model and mitigate the risks posed by such workflows
Introduce how we use it to model threats to agentic systems
Explore how NVIDIA uses this framework for security for AI and more
cybersecurity use cases at NVIDIA GTC
.
Manipulating autonomous systems
In practice, exploitation of AI-powered applications requires two key components:
An adversary must be able to get their data (read: attack) into the system through some mechanism.
There must be a downstream effect that their malicious data can trigger.
When the AI component of the system is an LLM, this is commonly referred to as either direct prompt injection (the adversary and user are the same person) or indirect prompt injection (the adversary and the user could be different people).
However, similar threats exist for other types of AI models used in autonomous AI powered applications. Both flavors of prompt injection are rooted in the lack of separation between the control and data plane in the LLM architecture.
Direct prompt injection can be commonly seen in such examples as “Do Anything Now” (or DAN) prompts, which attempt to subvert safety and content filters trained into the model, and enable it to produce objectionable or unwanted content. These attacks typically only impact the active user’s session, and so typically have limited impact.
Indirect prompt injection, in which a threat actor causes their data to be included in another user’s session and thus takes control of the LLM’s behavior and productions, is more serious. This is often accomplished by targeting data sources used by retrieval augmented generation (RAG) tools, including internet search, such that documents containing malicious instructions are unknowingly included in the current user’s session. When the capability to manipulate another user’s session,– particularly LLM outputs, is combined with the use of tools, a significant number of new potential risks present themselves.
When potentially untrusted data enters an agentic system, any downstream actions taken by that system become potentially adversary-controlled.
Security and complexity in AI autonomy
Even before “agentic” AI became a distinct class of product offerings, the orchestration of AI workloads in sequences was commonplace. Even simple flows, such as an endpoint security product routing a sample to the correct AI powered analysis engine depending on file format, is arguably an example of such a workflow.
Because workflows such as these are deterministic, it’s straightforward to enumerate all possible paths, map out data flows, and isolate untrusted data from potentially risky actions that might be impacted by attacker-controlled data.
As the industry moves to systems that have more internal decision making capabilities, higher degrees of tool use, and more complex interactions between model components, the number of potential data flow paths increases exponentially, making threat modeling more difficult.
The following set of classification, ordering systems from least autonomous to most, distinguishes between different degrees of autonomy afforded by different architectures, and helps assess corresponding risks.
Autonomy level
Description
Example
0 – Inference API
A single user request results in a single inference call to a single model.
An
NVIDIA NIM microservice
serving a single model
1 – Deterministic system
A single user request triggers more than one inference request, optionally to more than one model, in a predetermined order that does not depend on either user input or inference results.
NVIDIA Generative Virtual Screening for Drug Discovery Blueprint
2 – Weakly autonomous system
A single user request triggers more than one inference request. An AI model can determine if or how to call plugins or perform additional inference at fixed predetermined decision points.
Build an Enterprise RAG Pipeline Blueprint
3 – Fully autonomous system
In response to a user request, the AI model can freely decide if, when, or how to call plugins or other AI models, or to revise its own plan freely, including deciding when to return control to the user.
NVIDIA Vulnerability Analysis for Container Security Blueprint
, “BabyAGI”, computer use agents
Table 1. System autonomy level classification
Level 1
Level 1 is a linear chain of calls, where the output of one AI call or tool response is conveyed to the next step in an entirely deterministic manner. The complete flow of data through the system is known in advance.
Figure 1. First level of autonomy
Level 2
In level 2, the output from the AI model may be sent along different paths through the workflow in a data-dependent manner. While every execution path can be traced, it’s not known until the workflow is executed which execution path is used.
Figure 2. Linear chain of AI calls and tool responses
It forms a directed acyclic graph in that there is no path through the workflow that could potentially form a loop or cycle. All possible paths can be finitely enumerated. Most LLM routers fall into this level.
Level 3
For level 3, the number of potential execution paths grows exponentially with the number of execution steps in the workflow. Tools or other AI models may be invoked as the workflow progresses.
Figure 3. Output from the AI model goes along different paths
The presence of a cycle within the directed graph means that an exponentially increasing (and, at least in theory, potentially infinite) number of paths through the graph exist. It’s generally not tractable to enumerate all possible paths ahead of time, nor predict which specific path will be used for a particular input.
Separation of security boundary analysis and tool security
Our early experience with analyzing and securing agentic systems in the field led us to realize that the risk associated with these systems lies mostly in the tools or plugins available to those systems. In the absence of a tool or plugin that can perform sensitive or physical actions, the primary risk posed by manipulation of the AI component is misinformation, regardless of the degree of complexity of the workflow.
This observation drives the primary difference between this framework, which focuses on security boundaries as being distinct from tool security, and other frameworks, such as that proposed by HuggingFace in their smolagents library (
Introduction to Agents
). Some levels align closely. Their
simple processor
is approximately a
deterministic system
and their
multi-step agent
maps closely onto a
fully autonomous system
.
However, this framework focuses on the difficulty of identifying security boundaries given the degree of autonomy expressed by an AI component in describing program flow. HuggingFace’s description of agency, by combining the risks of tool use with the complexity of program flow, provides a quick rubric to judge systemic risks at the level of the system. This framework’s approach of separating the risks of tool calling and tool use from the difficulty of drawing security boundaries supports more detailed threat modeling of complex flows and potential mitigations.
In the proposed framework, separating the following concerns enables you to more concretely assess the risk posed by an agentic workflow:
Security boundaries with respect to the flow of untrusted data through the system
Security properties of specific functions or tools that may be called by the agentic system
Within a simple deterministic system with clear security boundaries, even a sensitive plugin may be a fairly low-risk if you can clearly separate it from untrusted input. In a similar manner, even a fully agentic system may be relatively low-risk as long as no sensitive tools are available to it and the output of the system is appropriately validated before use.
The combination of a more complex workflow with sensitive tools, however, demands further analysis to ensure secure operation.
Threat modeling under this framework
Increasing autonomy levels do not inherently represent increasing risk. Instead, they represent increasing lack of determinism and predictability of the system behavior, which makes it more difficult to assess the impact of untrusted data.
Risk associated with agentic systems is located largely in the tools or plugins that the system has access to which can perform sensitive actions, including the following examples:
Completing a purchase
Sending an email
Physical actions such as moving a robot or setting a thermostat
Broadly speaking, anything that takes an action on behalf of a user that requires their delegated authorization to complete, or anything that results in a physical change in a system should be treated as potentially sensitive, and analyzed for its potential to act upon or be influenced by untrusted data.
Mitigation of this tool risk relies in large part on the ability to block the ability of attackers to inject malicious data into plugins, which becomes significantly more difficult with each increase in autonomy level.
Possible downstream effects are much more dependent on the details of the system and what plugins are being used, but at a minimum, a threat actor who can feed malicious data to the system must be presumed to be able to at least influence, if not control outright, the responses of the system.
If tools or plugins are being used, the potential threat increases significantly. A bad actor who can control the output of an AI model embedded in a system that uses tools might also be able to control which plugins are used and what actions those tools take. This can lead to a range of threats:
Data exfiltration, for example, through Markdown rendering plugins
Remote code execution, for example, through plugins that run LLM-provided code
The risk of physical harm, if the system has control or influence over some sort of physical system
As you climb the autonomy hierarchy, you need a more careful analysis of and management of potentially untrusted data (Figures 4 and 5).
A Level 0 system, with no vectors for untrusted data, is relatively straightforward from this perspective.
A Level 1 system, with a predetermined workflow, always accesses the same data sources, models, and tools in the same sequence, making it straightforward to determine if untrusted data can be introduced into the workflow. The designer can either isolate sensitive functionality from the untrusted data, or design appropriate sanitization strategies for that data.
The number of potential execution paths in a Level 2 system is greater than in Level 1 systems but still enumerable, and specific paths that contain untrusted data can be analyzed in greater detail.
Level 3 autonomous systems pose the largest difficulty, as the number of execution paths typically grows exponentially in the number of self-reflection steps performed by the agent. In this case, these include time-of-use sanitization, time-of-use taint tracing, or other risk mitigations such as requiring manual user approval of potentially hazardous actions.
Taint tracing
Taint tracing
is marking an execution flow as having received untrusted data and then either preventing use of or requiring manual re-authorization for any sensitive tool.
For Level 1 and Level 2 systems, identifying potential sources of untrustworthy data, and evaluating the risks of downstream impacts is relatively straightforward. When untrustworthy data enters the system, every downstream AI model and its associated outputs and every downstream tool and its associated actions and outputs are also untrusted.
Figure 4. Taints from untrusted sources can be bounded and enumerated in Level 1 and 2 systems
For level 3 systems, taint tracing in the general case becomes almost intractable. When untrusted data has been introduced into the workflow, it has the potential to propagate to any other downstream component through the looping structure inherent to level 3 workflows.
Figure 5. Taints from untrusted sources are difficult to bound and enumerate in Level 3 systems
By classifying an agentic application into the correct level, it becomes simpler to identify the overall level of risk posed by the application and corresponding security requirements.
Recommended security controls per autonomy level
Table 2 provides a high-level summary of some suggested mitigating controls that should be placed around agents of various autonomy levels. Not all controls are required for all agentic workloads, depending on the sensitivity of tools provided to the workload.
Autonomy level
Recommended security controls
0 – Inference API
Use standard API security.
1 – Deterministic system
Manually trace dataflows and order workflow correctly to prevent untrusted data from entering sensitive plugins.
2 – Bounded agentic workflow
Enumerate dataflows, identify ones with potentially untrusted data, explore isolation or sanitization options, and consider time-of-use manual approval of sensitive actions.
3 – Fully autonomous system
Implement taint tracing and mandatory sanitization of potentially untrusted data. Consider time-of-use manual approval of sensitive actions.
Table 2. Recommended security controls by level
For Level 0 systems, you can apply simple application and API security. Due to lack of further interaction taken by the system on the basis of AI model output, overall risk here is largely limited to non-AI components.
Level 1 and 2 systems provide a slightly elevated level of risk, but one that is straightforward to evaluate. Because these systems contain no loops, all execution flows through the systems can be exhaustively enumerated and examined for cases when untrusted data may flow into a sensitive tool.
Flows of this type are particularly risky when they pass through an LLM between the source and the sink. In level 1 systems, it’s often sufficient to reorder the workflow. For more complex (level 2) systems, risky flows from untrusted sources to sensitive sinks should be carefully evaluated to see if they can be eliminated, or otherwise if the untrusted data or input to the sensitive tool can be properly sanitized to remove risk.
Finally, if the flows cannot be rearranged or eliminated, or robust sanitization is not possible, add time-of-use manual approval to sensitive tools that may receive untrusted data, thus placing a human in the loop. This enables users of agentic workflows to inspect risky actions before they are performed and deny the workflow the ability to perform adversarially manipulated ones.
Level 3 systems, due to their complexity, typically cannot be meaningfully rearranged to block untrusted sources to sensitive sink flows. In this case, sanitization of untrusted data at time of retrieval should be implemented.
If possible, implementing taint tracing should also be considered. In the event that neither is possible, simply requiring that all potentially sensitive tools require manual approval before executing their function may be used as a last resort.
As with threat modeling, breaking agentic workloads into different complexity levels enables the quick determination of reasonable security controls to avoid the risk of having sensitive tools ingest untrusted data, including LLM outputs that have been manipulated by prompt injection.
Conclusion
As systems climb the autonomy hierarchy, they become more complex and more difficult to predict. This makes threat modeling and risk assessment more difficult, particularly in the presence of a range of data sources and tools of varying trustworthiness and sensitivity.
Identifying the system autonomy level provides a useful framework for assessing the complexity of the system, as well as the level of effort required for threat modeling and necessary security controls and mitigations.
It’s also important to analyze the plugins in the pipeline and classify them depending on their capabilities to provide an accurate risk evaluation based on the autonomy level. | https://developer.nvidia.com/zh-cn/blog/agentic-autonomy-levels-and-security/ | 代理自主级别与安全性 | 智能体工作流是 AI 赋能工具的下一次演进。它们使开发者能够将多个 AI 模型关联在一起以执行复杂的活动,使 AI 模型能够使用工具来访问其他数据或自动执行用户操作,并使 AI 模型能够自主运行,以尽可能减少人类参与或交互的方式分析和执行复杂的任务。
由于其强大的功能,代理工作流也存在风险因素。代理系统的核心最常见的模型仍然是各种 LLM,如果可以将不受信任的数据引入系统,这些模型仍然容易受到 prompt injection 的影响。
为帮助评估和缓解这些漏洞,NVIDIA 提供了一个 Agentic Autonomy 框架,我们将其用于以下用途:
了解与日益复杂的 AI 工作流相关的风险
帮助模拟和减轻此类工作流程带来的风险
介绍如何使用它对 agentic 系统的威胁进行建模
操控自主系统
在实践中,开发 AI 赋能的应用需要两个关键组件:
对手必须能够通过某种机制将其数据 (读取:攻击) 传入系统。
必须存在其恶意数据可能触发的下游影响。
当系统的 AI 组件是 LLM 时,这通常被称为直接提示注入 (对手和用户是同一个人) 或间接提示注入 (对手和用户可以是不同的人)。
然而,对于 AI 驱动的自主应用中使用的其他类型的 AI 模型,也存在类似的威胁。这两种提示注入的根源在于 LLM 架构中控制平面和数据平面之间缺乏分离。
直接提示注入通常出现在“Do Anything Now”(或 DAN)提示等示例中,这些示例试图颠覆训练到模型中的安全和内容过滤器,并使其产生令人反感或不需要的内容。这些攻击通常只影响活跃用户的会话,因此通常影响有限。
间接提示注入 (即威胁行为者将其数据包含在其他用户的会话中,从而控制 LLM 的行为和生产) 更为重要。这通常通过定位检索增强生成 (RAG) 工具 (包括互联网搜索) 使用的数据源来实现,这样就会在不知情的情况下将包含恶意指令的文档包含在当前用户的会话中。当操控其他用户会话 (尤其是 LLM 输出) 的能力与工具的使用相结合时,会出现大量新的潜在风险。
当潜在的不可信任数据进入代理系统时,该系统采取的任何下游操作都可能受到对手的控制。
AI 自主领域的安全性和复杂性
早在“代理式”AI 成为一类独特的产品之前,按序列编排 AI 工作负载就已屡见不鲜。即使是简单的流程 (例如端点安全产品,根据文件格式将样本路由到由 AI 驱动的正确分析引擎) 也可以说是此类工作流的示例。
此类工作流程具有确定性,因此您可以直接列举所有可能的路径,绘制数据流地图,并将不受信任的数据与可能受攻击者控制数据影响的潜在风险操作隔离开来。
随着行业转向拥有更多内部决策能力、更高工具使用率以及模型组件之间更复杂交互的系统,潜在数据流路径的数量呈指数级增加,从而增加了威胁建模的难度。
以下分类将系统从最不自主排序为最自主排序,可区分不同架构提供的不同自主程度,并有助于评估相应的风险。
自主级别
说明
示例
0 – 推理 API
单个用户请求会导致对单个模型进行一次推理调用。
为单个模型提供服务的 NVIDIA NIM 微服务
1 – 确定性系统
单个用户请求会按不依赖于用户输入或推理结果的预定顺序触发多个推理请求 (可以选择触发多个模型)。
NVIDIA
生成式虚拟筛选蓝图
2 – 弱自主系统
单个用户请求会触发多个 inference 请求。AI 模型可以确定是否或如何调用插件,或在固定的预定决策点执行额外 inference。
构建企业 RAG Pipeline Blueprint
3 – 全自主系统
根据用户请求,AI 模型可以自由决定是否、何时或如何调用插件或其他 AI 模型,或者自由修改自己的计划,包括决定何时将控制权交回给用户。
针对容器安全蓝图“BabyAGI”、计算机使用代理的 NVIDIA 漏洞分析
表 1. 系统自主级别分类
1 级
1 级是线性调用链,其中一个 AI 调用或工具响应的输出以完全确定的方式传输到下一步。通过系统的完整数据流是预先知道的。
图 1. 第一级自动驾驶
2 级
在第 2 级中,AI 模型的输出可能会以数据依赖的方式在工作流中沿不同路径发送。虽然可以追踪每个执行路径,但在工作流执行之前,我们不知道使用的是哪种执行路径。
图 2、AI 调用和工具响应的线性链
它形成了有向无环图,因为工作流中没有可能形成循环或循环的路径。所有可能的路径都可以有限枚举。大多数 LLM 路由器都属于这一级别。
3 级
对于级别 3,潜在执行路径的数量会随着工作流中执行步骤的数量呈指数级增长。随着工作流的进展,可能会调用工具或其他 AI 模型。
图 3、AI 模型的输出路径不同
有向图形中存在循环意味着图形中存在呈指数级增加(至少理论上可能是无穷大)的路径。通常无法提前列举所有可能的路径,也无法预测特定输入将使用哪种特定路径。
安全边界分析与工具安全性分离
我们早期在现场分析和保护代理系统的经验使我们意识到,与这些系统相关的风险主要在于这些系统可用的工具或插件。在缺乏可执行敏感或物理操作的工具或插件的情况下,无论工作流程的复杂程度如何,操作 AI 组件所带来的主要风险都是错误信息。
这一观察结果决定了此框架与其他框架 (例如 Hugging Face 在其 smolagents 库
(Introduction to Agents)
中提出的框架) 之间的主要区别,前者侧重于安全边界,而后者不同于工具安全边界。一些层次非常相似。他们的简单处理器近似为
确定性系统
,并且他们的多步骤智能体紧密地映射到完全自主的系统上。
但是,鉴于 AI 组件在描述程序流时表示的自主程度,此框架重点关注识别安全边界的难度。Hugging Face 对机构的描述将工具使用的风险与程序流程的复杂性相结合,提供了在系统层面判断系统风险的快速指南。此框架将工具调用和工具使用的风险与绘制安全边界的难度分开,支持对复杂流程和潜在缓解措施进行更详细的威胁建模。
在建议的框架中,通过分离以下问题,您可以更具体地评估由 agentic workflow 带来的风险:
与通过系统的非可信数据流相关的安全边界
代理系统调用的特定函数或工具的安全属性
在具有明确安全边界的简单确定性系统中,如果您能够将敏感插件与不信任的输入明确分开,即使是敏感插件也可能风险较低。同样,只要没有可用的敏感工具,且系统的输出在使用前经过适当验证,则即使是全代理系统也可能相对风险较低。
然而,将更复杂的工作流程与敏感工具相结合需要进行进一步分析,以确保安全操作。
此框架下的威胁建模
自动驾驶级别的增加本身并不代表风险的增加。相反,它们代表着系统行为越来越缺乏确定性和可预测性,这使得评估不可信任数据的影响变得更加困难。
与代理系统相关的风险主要存在于系统可以访问的工具或插件中,而这些工具或插件可以执行敏感操作,其中包括以下示例:
完成购买
发送电子邮件
物理动作,例如移动机器人或设置 thermostat
从广义上讲,任何代表用户采取行动且需要其授权才能完成的操作,或任何导致系统发生物理变化的操作,都应被视为潜在的敏感数据,并分析其对不受信任数据采取行动或受其影响的可能性。
降低此工具风险在很大程度上依赖于能否阻止攻击者向插件中注入恶意数据,而随着自主级别的提升,这一能力会变得更加困难。
可能的下游影响更多取决于系统的细节和使用的插件,但至少必须假定能够向系统提供恶意数据的威胁行为者至少能够影响(如果不是完全控制)系统的响应。
如果使用工具或插件,潜在威胁会显著增加。控制嵌入到使用工具的系统中的 AI 模型的输出的坏行为者也可能能够控制使用哪些插件以及这些工具执行哪些操作。这可能会导致一系列威胁:
数据泄漏,例如通过 Markdown 渲染插件
远程执行代码,例如,通过运行 LLM 提供的代码的插件
如果系统控制或影响某种物理系统,则会产生物理伤害风险
在提升自主层次结构时,您需要更仔细地分析和管理潜在的不可信任数据(图 4 和图 5)。
从这个角度来看,没有不可信数据向量的 Level 0 系统相对简单。
具有预定工作流的 1 级系统始终可以按相同顺序访问相同的数据源、模型和工具,从而可以直接确定是否可以将不可信任的数据引入工作流。设计师可以将敏感功能与非可信数据隔离开来,或者为这些数据设计合适的清理策略。
2 级系统中的潜在执行路径数量大于 1 级系统,但仍可枚举,并且可以更详细地分析包含非可信数据的特定路径。
3 级自主系统的难度最大,因为执行路径的数量通常随智能体执行的自反射步骤的数量呈指数级增长。在这种情况下,这些措施包括使用时间清洁、使用时间污点追踪或其他风险缓解措施,例如要求用户手动批准潜在的危险操作。
污点追踪
污点追踪
将执行流程标记为接收了不可信任的数据,然后阻止对任何敏感工具的使用或要求对其进行手动重新授权。
对于 1 级和 2 级系统,识别潜在来源的不可信数据,以及评估下游影响的风险相对简单。当不可信数据进入系统时,每个下游 AI 模型及其关联输出、每个下游工具及其关联操作和输出也同样不可信。
图 4、不可信来源的污点可以在 Level 1 和 Level 2 系统中进行限制和列举
对于 level 3 级系统,在一般情况下,污点追踪几乎难以解决。当不受信任的数据引入工作流后,它有可能通过 level 3 级工作流固有的循环结构传播到任何其他下游组件。
图 5、来自不可信来源的污点很难在 Level 3 级系统中绑定和列举
通过将代理应用程序分类为正确的级别,可以更简单地识别应用程序构成的总体风险水平以及相应的安全要求。
每个自主级别的推荐安全控制
表 2 简要概述了一些建议的缓解控制,这些控制应放置在不同自主级别的 agents 周围。并非所有 agents 工作负载都需要所有控件,具体取决于为工作负载提供的工具的灵敏度。
自主级别
推荐的安全控制
0 – 推理 API
使用标准 API 安全性。
1 – 确定性系统
手动追踪数据流并正确订购工作流,以防止不受信任的数据进入敏感插件。
2 – 有界限的代理工作流
列举数据流、识别具有潜在不可信任数据的数据流、探索隔离或清理选项,并考虑使用时间手动批准敏感操作。
3 – 全自主系统
实现污点追踪和强制清理对可能不可信的数据。考虑使用时间手动批准敏感操作。
表 2. Recommended security controls by level
对于 0 级系统,您可以应用简单的应用程序和 API 安全性。由于系统无法根据 AI 模型输出进行进一步交互,因此整体风险主要局限于非 AI 组件。
1 级和 2 级系统的风险水平略高,但评估起来很简单。由于这些系统不包含循环,因此可以详尽无遗地列举和检查流经系统的所有执行流程,以备不时之需。
当此类流通过源和接收端之间的 LLM 时,风险特别大。在 1 级系统中,通常只需重新排列工作流程即可。对于更复杂(2 级)的系统,应仔细评估从不受信任的来源到敏感汇的风险流,以确定是否可以消除这些流,或者是否可以适当清除不受信任的数据或敏感工具的输入,以消除风险。
最后,如果流程无法重新排列或消除,或无法进行可靠的清理,请为可能会接收不受信任数据的敏感工具添加使用时间手动审批,从而让用户进入循环。这使得代理工作流的用户能够在执行风险操作之前检查这些操作,并使工作流无法执行对抗性操作。
由于 Level 3 级系统的复杂性,通常无法进行有意义的重新排列,以将不受信任的源阻塞到敏感的接收流。在这种情况下,应在检索时清除不受信任的数据。
如果可能,还应该考虑实施污点追踪。如果两者都无法实现,则仅要求所有潜在敏感的工具在执行其功能之前需要手动批准即可将其用作最后手段。
与威胁建模一样,将代理工作负载分解为不同的复杂性级别有助于快速确定合理的安全控制,从而避免让敏感工具提取不受信任数据 (包括通过提示注入操作的 LLM 输出) 的风险。
结束语
随着系统对自动驾驶层次结构的要求越来越高,这些层次结构变得越来越复杂,预测难度也越来越大。这使得威胁建模和风险评估变得更加困难,尤其是在存在各种数据源以及具有不同可靠性和灵敏度的工具的情况下。
确定系统自主级别为评估系统复杂性、威胁建模以及必要的安全控制和缓解措施所需的工作量水平提供了有用的框架。
分析管道中的插件并根据其功能对其进行分类也很重要,以便根据自主级别提供准确的风险评估。 |
https://developer.nvidia.com/blog/nvidia-deep-learning-institute-releases-new-generative-ai-teaching-kit/ | NVIDIA Deep Learning Institute Releases New Generative AI Teaching Kit | Generative AI, powered by advanced machine learning models and deep neural networks, is revolutionizing industries by generating novel content and driving innovation in fields like healthcare, finance, and entertainment.
NVIDIA is leading this transformation with its cutting-edge GPU architectures and software ecosystems, such as the
H100 Tensor Core
GPU and CUDA platform, which optimize the development and deployment of generative models.
NVIDIA NIM
enhances the efficiency and scalability of AI inference tasks, enabling rapid deployment and iteration across various computing environments and accelerating advancements in generative AI applications.
The importance of generative AI education
As generative AI models, such as GANs and transformers, become increasingly sophisticated, there is a growing demand for skilled professionals who can develop, refine, and ethically deploy these technologies. A strong educational foundation in generative AI equips students with the practical skills and theoretical knowledge needed to innovate in areas like content creation, drug discovery, and autonomous systems.
College and university education in generative AI is crucial due to the rapidly expanding role of AI in almost every industry. By integrating generative AI into their curriculum, universities prepare the next generation of AI researchers, engineers, and thought leaders to advance the field and address the complex challenges associated with AI-driven innovation.
The new Generative AI Teaching Kit, a collaboration between the NVIDIA Deep Learning Institute (DLI) and Dartmouth College, is set to empower the next generation of professionals with the skills and knowledge needed in this rapidly evolving field.
This comprehensive teaching resource enables educators to provide students access to cutting-edge tools, frameworks, and practical exercises that are crucial for understanding the complexities of Generative AI and large language model development and deployment. By equipping students with a deep understanding of generative AI techniques, the Teaching Kit enables educators to foster future innovation and creativity in AI-driven industries.
As students transition into the workforce, they will be better prepared to tackle global challenges, from improving healthcare and science to advancing sustainable technologies.
Sam Raymond, adjunct assistant professor of engineering at Dartmouth College, was instrumental in developing the content. “Empowering students with skills to understand and potentially develop their own GPU-accelerated Generative AI applications is the primary objective,” said Raymond. “I believe students who go through this course will be at a significant advantage in the job market and help bridge the knowledge gap in industries today.”
Overview of the Generative AI Teaching Kit
All Teaching Kits include lecture slides, hands-on labs, Jupyter notebooks, knowledge checks, and free online self-paced courses that provide certificates of competency for students, all comprehensively packaged up and ready for classroom and curriculum integration.
Figure 1. The Generative AI Teaching Kit includes free access to all DLI online self-paced courses, offering certificates for students
.
The aim of the Generative AI Teaching Kit is to introduce the foundational concepts of natural language processing (NLP) that are essential for understanding LLMs and generative AI more broadly. Key concepts of LLMs are then examined using NVIDIA GPUs, tools, and services, as well as open-source libraries and frameworks. A simple pretraining exercise of a GPT model shows basic training processes in the cloud.
The kit also covers diffusion models to explore the application of generative AI in image and video generation. Multi-modal LLM architectures are then introduced, with a focus on optimizing various LLM architectures during fine-tuning using the NVIDIA NeMo framework. Advancements in inference and the refinement of tools like chatbots are also discussed, using NVIDIA NIM, NeMo Guardrails, TensorRT, and TensorRT-LLM to enhance efficiency and scalability in production environments.
The Generative AI Teaching Kit contains focused modules that combine theory, algorithms, programming, and examples:
Module 1:
Introduction to Generative AI
Module 2:
Word Embeddings, Tokens, and NLP
Module 3:
Large Language Models and the Transformer
Module 4:
LLM Scaling Laws and LLM Families
Module 5:
Multimodal Learning and Its Applications
Module 6:
Diffusion Models in Generative AI
Module 7:
Model Training (Pre-Training, Instruction Following, and PEFT)
Module 8:
LLM Orchestration
Module 9:
Scaling Model Training to Distributed Workloads
Through a collaboration with Google, educators also get access to free Google Colab credits to run the lab notebooks in preparation for teaching the course.
This content is valuable for educators across various fields, especially in computer science and engineering. Its modular design enables instructors to tailor the course to meet the specific needs of their students and create a customized learning experience. Select professors from around the world have already been given early access to first-release modules.
“I’m eager to integrate the Generative AI Teaching Kit in my AI in Materials Engineering class,” said Mohadeseh Taheri-Mousavi, assistant professor in the Materials Science and Engineering department at Carnegie Mellon University. “The comprehensive lecture notes with well-structured coding labs with examples from various fields, and associated online courses with certificates, will provide my students with the cutting-edge resources to deeply understand the broad applications of generative AI techniques in various fields.”
Professor Payam Barnaghi from the Department of Brain Sciences at Imperial College London uses LLMs and generative AI in his research using electronic health records and healthcare data. “NVIDIA Generative AI Teaching Kit content is a wonderful resource for students learning the latest developments in AI and machine learning,” said Barnaghi. “As a result of having early access to the first modules, I plan to use this content as the basis for teaching advanced topics in my machine learning for neuroscience courses.”
Given the fast-paced advancements in generative AI, educators can expect the teaching materials to be updated over time. NVIDIA is dedicated to offering high-end educational resources and welcomes feedback to continuously improve the content.
LLM technical certification discounts for students
The Generative AI Teaching Kit also includes access to a 50% discount on the
NVIDIA-Certified Associate: Generative AI LLMs
certification exam and the
NVIDIA-Certified Associate: Generative AI Multimodal
certification exam for educators to distribute to students. It also includes access to both exams free of charge for educators. Both exams are online and proctored remotely, include 50-60 questions, and have a 60-minute time limit.
Figure 1. NVIDIA-Certified Associate
These technical certifications validate students’ expertise in generative AI, offering a competitive advantage in the job market. They can open doors to exciting career opportunities across AI-driven industries.
The NCA Generative AI LLMs certification is an entry-level credential that validates the foundational concepts for developing, integrating, and maintaining AI-driven applications using generative AI and large language models (LLMs) with NVIDIA solutions.
The NCA Generative AI Multimodal certification is an entry-level credential that validates the foundational skills needed to design, implement, and manage AI systems that synthesize and interpret data across text, image, and audio modalities.
NVIDIA recommends that students take these exams after completing the Generative AI Teaching Kit.
Get started
Educators can access the first release of the Generative AI Teaching Kit and other kits for free by joining the
NVIDIA DLI Teaching Kit Program
.
About the NVIDIA Deep Learning Institute
The
NVIDIA Deep Learning Institute
(DLI) offers resources for diverse learning needs, from learning materials to self-paced and live training to educator programs. Individuals, teams, organizations, educators, and students can now find everything they need to advance their knowledge in AI, accelerated computing, accelerated data science, graphics, simulation, and more. | https://developer.nvidia.com/zh-cn/blog/nvidia-deep-learning-institute-releases-new-generative-ai-teaching-kit/ | NVIDIA 深度学习培训中心发布全新生成式 AI 教学套件 | 生成式 AI 由先进的机器学习模型和深度神经网络提供动力支持,通过生成新内容并驱动医疗、金融和娱乐等领域的创新来变革各行各业。
NVIDIA 正凭借其先进的 GPU 架构和软件生态系统(例如
H100 Tensor Core GPU
和 CUDA 平台),引领这一转型,该平台可优化生成式模型的开发和部署。
NVIDIA NIM
可增强 AI 推理任务的效率和可扩展性,从而实现跨各种计算环境的快速部署和迭代,并加速生成式 AI 应用的发展。
生成式 AI 教育的重要性
随着生成式 AI 模型(例如 GANs 和 Transformers)变得越来越复杂,对能够开发、改进和合乎道德地部署这些技术的熟练专业人员的需求也在不断增长。生成式 AI 方面坚实的教育基础可以让学生掌握在内容创作、药物研发和自主系统等领域进行创新所需的实践技能和理论知识。
由于生成式 AI 在几乎每个行业中的作用都在迅速扩展,因此学院和大学教育在生成式 AI 中至关重要。通过将生成式 AI 集成到其课程中,大学可以培养新一代 AI 研究人员、工程师和思想领袖,以推动该领域的发展,并应对与 AI 驱动的创新相关的复杂挑战。
新的生成式 AI 教学套件,由 NVIDIA 深度学习研究所(DLI)和达特茅斯学院合作开发,旨在为新一代专业人士提供这一快速发展领域所需的技能和知识。
借助这一全面的教学资源,教育工作者能够为学生提供先进的工具、框架和实践练习,这些工具、框架和实践练习对于理解生成式 AI 和大语言模型开发和部署的复杂性至关重要。通过让学生深入了解生成式 AI 技术,该教学套件使教育工作者能够在 AI 驱动的行业中培养未来的创新和创造力。
随着学生向劳动力过渡,他们将更好地准备应对全球挑战,从改善医疗和科学到推进可持续技术。
达特茅斯学院(Dartmouth College)工程系兼职助理教授 Sam Raymond 在编写课程内容时发挥了重要作用。Raymond 说:“让学生掌握理解并开发自己的 GPU 加速生成式 AI 应用程序的技能是我们的主要目标。我相信,学习本课程的学生将在就业市场中占据显著优势,并有助于缩小当今各行各业的知识差距。
生成式 AI 教学套件概述
所有教学套件均包含教学幻灯片、动手实验室、Jupyter notebooks、知识检查和免费的在线自定进度课程,为学生提供能力证书。这些课程均经过全面打包,ready for课堂和课程整合。
图 1. 生成式 AI 教学套件包括免费访问所有 DLI 在线自定进度课程,并为学生提供证书。
生成式 AI 教学套件旨在介绍自然语言处理(NLP)的基本概念,这些概念对于更广泛地理解 LLM 和生成式 AI 至关重要。然后,我们使用 NVIDIA GPUs、工具和服务以及开源库和框架来检查 LLM 的关键概念。一个 GPT 模型的简单预训练练习展示了云端的基本训练流程。
该套件还涵盖扩散模型,以探索生成式 AI 在图像和视频生成中的应用。然后,介绍多模态 LLM 架构,重点介绍如何在使用 NVIDIA NeMo 框架进行微调的过程中优化各种 LLM 架构。此外,还讨论了推理方面的进展以及聊天机器人等工具的优化,这些工具使用 NVIDIA NIM、NeMo Guardrails、TensorRT 和 TensorRT-LLM 来提高生产环境的效率和可扩展性。
生成式 AI 教学套件包含结合理论、算法、编程和示例的重点模块。这第一版包括以下模块:
生成式 AI 简介
生成式 AI 中的扩散模型
LLM 编排
未来版本的工具包将提供更多的模块。
这些内容对各个领域的教育工作者都很有价值,尤其是计算机科学和工程领域的教育工作者。其模块化设计使讲师能够根据学生的具体需求定制课程,并打造定制化的学习体验。来自世界各地的精选教授已经可以抢先体验第一版模块。“我非常希望将 Generative AI 教学套件集成到我的 AI 在材料工程课堂中,”卡内基梅隆大学材料科学与工程系助理教授 Mohadeseh Taheri-Mousavi 说,“内容全面的教学笔记包含结构良好的编码实验室和来自各个领域的示例,以及与证书相关的在线课程,将为我的学生提供前沿资源,帮助他们深入了解 Generative AI 技术在各个领域的广泛应用。”
伦敦帝国理工学院大脑科学系的Professor Payam Barnaghi教授使用电子病历和医疗健康数据在研究中使用LLMs和generative AI。“NVIDIA Generative AI Teaching Kit 内容是学生学习 AI 和 machine learning 最新发展的绝佳资源,”Barnaghi 说:“由于能够提前访问首批模块,我计划以这些内容为基础,在神经科学课程的机器学习中教授高级主题。
鉴于生成式 AI 的快速发展,教育工作者可以预期教学材料会随着时间的推移而更新。NVIDIA 致力于提供高端教育资源,并欢迎反馈,以便不断改进内容。
开始使用
教育工作者可以加入
NVIDIA DLI 教学套件计划
,免费获取第一版 Generative AI 教学套件和其他套件。
关于 NVIDIA 深度学习培训中心
NVIDIA 深度学习研究所
(DLI) 为各种学习需求提供资源,从学习材料到自定进度和实时培训,再到教育工作者计划。个人、团队、组织、教育工作者和学生现在可以找到所需的一切资源,以提升他们在 AI、加速计算、加速数据科学、图形、模拟等方面的知识水平。 |
https://developer.nvidia.com/blog/json-lines-reading-with-pandas-100x-faster-using-nvidia-cudf/ | JSON Lines Reading with pandas 100x Faster Using NVIDIA cuDF | JSON is a widely adopted format for text-based information working interoperably between systems, most commonly in web applications and large language models (LLMs). While the JSON format is human-readable, it is complex to process with data science and data engineering tools.
JSON data often takes the form of newline-delimited
JSON Lines
(also known as NDJSON) to represent multiple records in a dataset. Reading JSON Lines data into a dataframe is a common first step in data processing.
In this post, we compare the performance and functionality of Python APIs for converting JSON Lines data into a dataframe using the following libraries:
pandas
DuckDB
pyarrow
RAPIDS cuDF pandas Accelerator Mode
We demonstrate good scaling performance and high data processing throughput with the JSON reader in cudf.pandas, especially for data with a complex schema. We also review the versatile set of JSON reader options in cuDF that improve compatibility with Apache Spark and empower Python users to handle quote normalization, invalid records, mixed types and other JSON anomalies.
JSON parsing versus JSON reading
When it comes to JSON data processing, it’s important to distinguish between parsing and reading.
JSON parsers
JSON parsers, such as
simdjson
, convert a buffer of character data into a vector of tokens. These tokens represent the logical components of JSON data, including field names, values, array begin/end, and map begin/end. Parsing is a critical first step in extracting information from JSON data, and significant research has been dedicated to reaching high parsing throughput.
To use information from JSON Lines in data processing pipelines, the tokens must often be converted into a Dataframe or columnar format, such as
Apache Arrow
.
JSON readers
JSON readers, such as
pandas.read_json
convert input character data into a Dataframe organized by columns and rows. The reader process begins with a parsing step and then detects record boundaries, manages the top-level columns and nested struct or list child columns, handles missing and null fields, infers data types, and more.
JSON readers convert unstructured character data into a structured Dataframe, making JSON data compatible with downstream applications.
JSON Lines reader benchmarking
JSON Lines is a flexible format for representing data. Here are some important properties of JSON data:
Number of records per file
Number of top level columns
Depth of struct or list nesting for each column
Data types of values
Distribution of string lengths
Fraction of missing keys
For this study, we held the record count fixed at 200K and swept the column count from 2 to 200, exploring a range of complex schemas. The four data types in use are as follows:
list<int>
and
list<str>
with two child elements
struct<int>
and
struct<str>
with a single child element
Table 1 shows the first two columns of the first two records for data types, including
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Data type
Example records
list<int>
{"c0":[848377,848377],"c1":[164802,164802],...\n{"c0":[732888,732888],"c1":[817331,817331],...
list<str>
{"c0":["FJéBCCBJD","FJéBCCBJD"],"c1":["CHJGGGGBé","CHJGGGGBé"],...\n{"c0":["DFéGHFéFD","DFéGHFéFD"],"c1":["FDFJJCJCD","FDFJJCJCD"],...
struct<int>
{"c0":{"c0":361398},"c1":{"c0":772836},...\n{"c0":{"c0":57414},"c1":{"c0":619350},...
struct<str>
{"c0":{"c0":"FBJGGCFGF"},"c1":{"c0":"ïâFFéâJéJ"},...\n{"c0":{"c0":"éJFHDHGGC"},"c1":{"c0":"FDâBBCCBJ"},...
Table 1. Example JSON Lines character data
Table 1 shows the first two columns of the first two records for data types, including
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Performance statistics were collected on the 25.02 branch of
cuDF
and with the following library versions: pandas 2.2.3, duckdb 1.1.3, and pyarrow 17.0.0. The execution hardware used an NVIDIA H100 Tensor Core 80 GB HBM3 GPU and Intel Xeon Platinum 8480CL CPU with 2TiB of RAM. Timing was collected from the third of three repetitions, to avoid initialization overhead and ensure that the input file data was present in the OS page cache.
In addition to the zero code change cudf.pandas, we also collected performance data from
pylibcudf
, a Python API for the
libcudf
CUDA C++ computation core. The runs with pylibcudf used a CUDA async memory resource through RAPIDS Memory Manager (RMM). Throughput values were computed using the JSONL input file size and the reader runtime of the third repetition.
Here are some examples from several Python libraries for invoking the JSON Lines reader:
# pandas and cudf.pandas
import pandas as pd
df = pd.read_json(file_path, lines=True)
# DuckDB
import duckdb
df = duckdb.read_json(file_path, format='newline_delimited')
# pyarrow
import pyarrow.json as paj
table = paj.read_json(file_path)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = plc.io.json.JsonReaderOptions.builder(s).lines(True).build()
df = plc.io.json.read_json(opt)
JSON Lines reader performance
Overall, we found a wide range of performance characteristics for the JSON readers available in Python, with overall runtimes varying from 1.5 seconds to almost 5 minutes.
Table 2 shows the sum of the timing data from seven JSON reader configurations when processing 28 input files with a total file size of 8.2 GB:
Using cudf.pandas for JSON reading shows about 133x speedup over pandas with the default engine and 60x speedup over pandas with the pyarrow engine.
DuckDB and pyarrow show good performance as well, with about 60 seconds total time for DuckDB, and 6.9 seconds for pyarrow with block size tuning.
The fastest time comes from pylibcudf at 1.5 seconds, showing about 4.6x speedup over pyarrow with
block_size
tuning.
Reader label
Benchmark runtime
(sec)
Comment
cudf.pandas
2.1
Using
-m cudf.pandas
from the command line
pylibcudf
1.5
pandas
281
pandas-pa
130
Using the pyarrow engine
DuckDB
62.9
pyarrow
15.2
pyarrow-20MB
6.9
Using a 20 MB
block_size
value
Table 2. Sum of timing data for JSON reading of 28 input files
Table 2 includes the input columns counts 2, 5, 10, 20, 50, 100, and 200, and the data types
list<int>
,
list<str>
,
struct<int>
, and
struct<str>
.
Zooming into the data by data type and column count, we found that JSON reader performance varies over a wide range based on the input data details and the data processing library, from 40 MB/s to 3 GB/s for CPU-based libraries and from 2–6 GB/s for the GPU-based cuDF.
Figure 1 shows the data processing throughput based on input size for 200K rows and 2–200 columns, with input data sizes varying from about 10 MB to 1.5 GB.
Figure 1. JSON Lines reading throughput by number of input columns
In Figure 1, each subplot corresponds to the data type of the input columns. File size annotations align to the x-axis.
For cudf.pandas
read_json
, we observed 2–5 GB/s throughput that increased with larger column count and input data size. We also found that the column data type does not significantly affect throughput. The pylibcudf library shows about 1–2 GB/s higher throughput than cuDF-python, due to lower Python and pandas semantic overhead.
For pandas
read_json
, we measured about 40–50 MB/s throughput for the default UltraJSON engine (labeled as “pandas-uj”). Using the pyarrow engine (
engine="pyarrow"
) provided a boost up to 70–100 MB/s due to faster parsing (pandas-pa). The pandas JSON reader performance appears to be limited by the need to create Python list and dictionary objects for each element in the table.
For DuckDB
read_json
, we found about 0.5–1 GB/s throughput for
list<str>
and
struct<str>
processing with lower values <0.2 GB/s for
list<int>
and
struct<int>
. Data processing throughput remained steady over the range of column counts.
For pyarrow
read_json
, we measured data processing throughputs up to 2–3 GB/s for 5-20 columns, and lower throughput values as column count increased to 50 and above. We found data type to have a smaller impact on reader performance than column count and input data size. For column counts of 200 and a record size of ~5 KB per row, throughput dropped to about 0.6 GB/s.
Raising the pyarrow
block_size
reader option to 20 MB (pyarrow-20MB) led to increased throughput for column counts 100 or more, while also degrading throughput for column counts 50 or fewer.
Overall, DuckDB primarily showed throughput variability due to data types, whereas cuDF and pyarrow primarily showed throughput variability due to column count and input data size. The GPU-based cudf.pandas and pylibcudf showed the highest data processing throughput for complex list and struct schema, especially for input data sizes >50 MB.
JSON Lines reader options
Given the text-based nature of the JSON format, JSON data often includes anomalies that result in invalid JSON records or don’t map well to a dataframe. Some of these JSON anomalies include single-quoted fields, cropped or corrupted records, and mixed struct or list types. When these patterns occur in your data, they can break the JSON reader step in your pipeline.
Here are some examples of these JSON anomalies:
# 'Single quotes'
# field name "a" uses single quotes instead of double quotes
s = '{"a":0}\n{\'a\':0}\n{"a":0}\n'
# ‘Invalid records'
# the second record is invalid
s = '{"a":0}\n{"a"\n{"a":0}\n'
# 'Mixed types'
# column "a" switches between list and map
s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n'
To unlock advanced JSON reader options in cuDF, we recommend incorporating cuDF-Python (
import cudf
) and pylibcudf into your workflow. If single-quoted field names or string values appear in your data, cuDF provides a reader option to normalize single quotes into double quotes. cuDF supports this feature to provide compatibility with the
allowSingleQuotes
option that is enabled by default in Apache Spark.
If invalid records appear in your data, cuDF and DuckDB both provide error recovery options to replace these records with null. When error handling is enabled, if a record generates a parsing error, all of the columns for the corresponding row are marked as null.
If mixed list and struct values are associated with the same field name in your data, cuDF provides a dtype schema override option to coerce the datatype to string. DuckDB uses a similar approach by inferring a
JSON
data type.
For mixed types, the pandas library has perhaps the most faithful approach, using Python list and dictionary objects to represent the input data.
Here is an example in cuDF-Python and pylibcudf that shows the reader options, including a dtype schema override for column name “a”. For more information, see
cudf.read_json
and
pylibcudf.io.json.read_json
.
For pylibcudf, the
JsonReaderOptions
object can be configured either before or after the
build
function.
# cuDF-python
import cudf
df = cudf.read_json(
file_path,
dtype={"a":str},
on_bad_lines='recover',
lines=True,
normalize_single_quotes=True
)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = (
plc.io.json.JsonReaderOptions.builder(s)
.lines(True)
.dtypes([("a",plc.types.DataType(plc.types.TypeId.STRING), [])])
.recovery_mode(plc.io.types.JSONRecoveryMode.RECOVER_WITH_NULL)
.normalize_single_quotes(True)
.build()
)
df = plc.io.json.read_json(opt)
Table 3 summarizes the behavior of several JSON readers with Python APIs for a few common JSON anomalies. Crosses denote that the reader function raised an exception, and checkmarks denote that the library successfully returned a Dataframe. These results may change in future versions of the libraries.
Single quotes
Invalid records
Mixed types
cuDF-Python, pylibcudf
✔️ Normalize to double quotes
✔️ Set to null
✔️ Represent as a string
pandas
❌ Exception
❌ Exception
✔️ Represent as a Python object
pandas (
engine="pyarrow
“
)
❌ Exception
❌ Exception
❌ Exception
DuckDB
❌ Exception
✔️ Set to null
✔️ Represent as a JSON string-like type
pyarrow
❌ Exception
❌ Exception
❌ Exception
Table 3. JSON reader outcomes when reading JSONL files with anomalies including: single quotes, mixed types and invalid records
cuDF supports several additional JSON reader options that are critical for compatibility with Apache Spark conventions, and now are available to Python users as well. Some of these options include:
Validation rules for numbers and strings
Custom record delimiters
Column pruning by the schema provided in dtype
Customization of
NaN
values
For more information, see the libcudf C++ API documentation on
json_reader_options
.
For more information about multi-source reading for efficiently processing many smaller JSON Lines files, or byte-range support for breaking up large JSON Lines files, see
GPU-Accelerated JSON Data Processing with RAPIDS
.
Summary
RAPIDS cuDF provides powerful, flexible, and accelerated tools for working with JSON data in Python.
GPU-accelerated JSON data processing is also available in RAPIDS Accelerator For Apache Spark, starting in the 24.12 release. For information, see
Accelerating JSON Processing on Apache Spark with GPUs
.
For more information, see the following resources:
cuDF documentation
/rapidsai/cudf
GitHub repo
RAPIDS Docker containers
(available for releases and nightly builds)
Accelerate Data Science Workflows with Zero Code Changes
DLI course
Mastering the cudf.pandas Profiler for GPU Acceleration | https://developer.nvidia.com/zh-cn/blog/json-lines-reading-with-pandas-100x-faster-using-nvidia-cudf/ | 使用 NVIDIA cuDF,pandas 读取 JSON 行文件速度提升100倍 | JSON 是一种广泛采用的格式,用于在系统之间 (通常用于 Web 应用和大语言模型 (LLMs)) 以互操作方式运行的基于文本的信息。虽然 JSON 格式是人类可读的,但使用数据科学和数据工程工具进行处理十分复杂。
JSON
数据通常采用换行分隔的 JSON 行
(也称为 NDJSON) 的形式来表示数据集中的多个记录。将 JSON 行数据读入数据帧是数据处理中常见的第一步。
在本文中,我们比较了使用以下库将 JSON 行数据转换为数据帧的 Python API 的性能和功能:
pandas
DuckDB
pyarrow
RAPIDS cuDF pandas
加速器模式
我们使用 cudf.pandas 中的 JSON 读取器展示了良好的扩展性能和高数据处理吞吐量,特别是对于具有复杂模式的数据。我们还审查了 cuDF 中的一组通用 JSON 读取器选项,这些选项可提高与 Apache Spark 的兼容性,并使 Python 用户能够处理引文归一化、无效记录、混合类型和其他 JSON 异常。
JSON 解析与 JSON 读取
当涉及到 JSON 数据处理时,区分解析和读取非常重要。
JSON 解析器
JSON 解析器 (例如
simdjson
) 可将字符数据缓冲区转换为令牌向量。这些令牌代表 JSON 数据的逻辑组件,包括字段名、值、数组开始/结束和映射开始/结束。解析是从 JSON 数据中提取信息的关键第一步,并且我们致力于实现高解析吞吐量。
要在数据处理工作流中使用来自 JSON 行的信息,必须经常将令牌转换为 Dataframe 或列式格式,例如
Apache Arrow
。
JSON 阅读器
JSON 读取器 (例如
pandas.read_json
) 将输入字符数据转换为按列和行排列的 Dataframe。读取器流程从解析步骤开始,然后检测记录边界、管理顶层列和嵌套结构体或列表子列、处理缺失和空字段、推理数据类型等。
JSON 读取器可将非结构化字符数据转换为结构化 Dataframe,从而使 JSON 数据与下游应用兼容。
JSON Lines 读取器基准测试
JSON Lines 是一种灵活的数据表示格式。以下是 JSON 数据的一些重要属性:
每个文件的记录数
顶层列的数量
每列的结构体或列表嵌套深度
值的数据类型
字符串长度分布
缺少密钥的百分比
在这项研究中,我们将记录计数固定为 200K,并将列计数从 2 扫至 200,同时探索了一系列复杂的模式。使用的四种数据类型如下所示:
包含两个子元素的
list<int>
和
list<str>
包含单个子元素的
struct<int>
和
struct<str>
表 1 显示了前两列数据类型记录的前两列,包括
list<int>
、
list<str>
、
struct<int>
和
struct<str>
。
数据类型
记录示例
list<int>
{"c0":[848377,848377],"c1":[164802,164802],...\n{"c0":[732888,732888],"c1":[817331,817331],...
list<str>
{"c0":["FJéBCCBJD","FJéBCCBJD"],"c1":["CHJGGGGBé","CHJGGGGBé"],...\n{"c0":["DFéGHFéFD","DFéGHFéFD"],"c1":["FDFJJCJCD","FDFJJCJCD"],...
struct<int>
{"c0":{"c0":361398},"c1":{"c0":772836},...\n{"c0":{"c0":57414},"c1":{"c0":619350},...
struct<str>
{"c0":{"c0":"FBJGGCFGF"},"c1":{"c0":"ïâFFéâJéJ"},...\n{"c0":{"c0":"éJFHDHGGC"},"c1":{"c0":"FDâBBCCBJ"},...
表 1. JSON 行字符数据示例
表 1 显示了前两列数据类型记录的前两列,包括
list<int>
、
list<str>
、
struct<int>
和
struct<str>
性能统计数据在
cuDF
的 25.02 分支上收集,并包含以下库版本:pandas 2.2.3、duckdb 1.1.3 和 pyarrow 17.0.0。执行硬件使用 NVIDIA H100 Tensor Core 80 GB HBM3 GPU 和 Intel Xeon Platinum 8480CL CPU 以及 2TiB 的 RAM。计时数据从三次重复的第三次中收集,以避免初始化开销,并确保输入文件数据存在于操作系统页面缓存中。
除了零代码更改 cudf.pandas 之外,我们还从
py
libcudf
(用于
libcudf
CUDA C++计算核心的 Python API) 收集了性能数据。
py
libcudf
运行通过 RAPIDS 内存管理器 (RMM) 使用 CUDA 异步内存资源。使用 JSONL 输入文件大小和第三次重复的读取器运行时计算吞吐量值。
以下是来自多个 Python 库的一些调用 JSON 行读取器的示例:
# pandas and cudf.pandas
import pandas as pd
df = pd.read_json(file_path, lines=True)
# DuckDB
import duckdb
df = duckdb.read_json(file_path, format='newline_delimited')
# pyarrow
import pyarrow.json as paj
table = paj.read_json(file_path)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = plc.io.json.JsonReaderOptions.builder(s).lines(True).build()
df = plc.io.json.read_json(opt)
JSON 行读取器性能
总体而言,我们发现 Python 中的 JSON 读取器具有各种性能特征,总体运行时间从 1.5 秒到近 5 分钟不等。
表 2 显示了在处理 28 个输入文件 (总文件大小为 8.2 GB) 时,来自 7 个 JSON 读取器配置的定时数据的总和:
使用 cudf.pandas 进行 JSON 读取显示,与使用默认引擎的 pandas 相比,速度提高了 133 倍,使用 pyarrow 引擎的 pandas 速度提高了 60 倍。
DuckDB 和 pyarrow 也表现出良好的性能,在调整块大小时,DuckDB 的总时间约为 60 秒,而 pyarrow 的总时间为 6.9 秒。
pylibcudf 生成的最快时间为 1.5 秒,与 pyarrow 相比,使用
block_size
调优的速度提高了约 4.6 倍。
阅读器标签
基准运行时
(秒)
评论
cudf.pandas
2.1
在命令行中使用
-m cudf.pandas
pylibcudf
1.5
pandas
271
pandas-pa
130
使用 pyarrow 引擎
DuckDB
62.9
pyarrow
15.2
pyarrow-20MB
6.9
使用 20 MB 的
block_size
值
表 2、用于 JSON 读取 28 个输入文件的定时数据总和
表 2 包括输入列计数 2、5、10、20、50、100 和 200,以及数据类型
list<int>
、
list<str>
、
struct<int>
和
struct<str>
通过按数据类型和列数量放大数据,我们发现 JSON 读取器的性能因输入数据详细信息和数据处理库的不同而差异很大,基于 CPU 的库的性能介于 40 MB/s 到 3 GB/s 之间,而基于 GPU 的 cuDF 的性能介于 2–6 GB/s 之间。
图 1 显示了基于 200K 行、2–200 列输入大小的数据处理吞吐量,输入数据大小在约 10 MB 到 1.5 GB 之间变化。
图 1. JSON Lines 读取吞吐量按输入列数量
在图 1 中,每个子图均对应输入列的数据类型。文件大小标注与 x 轴对齐。
对于
cudf.pandas read_json
,我们观察到,随着列数量和输入数据大小的增加,吞吐量增加了 2–5 GB/秒。我们还发现,列数据类型不会对吞吐量产生重大影响。由于 Python 和 pandas 语义用度较低,pylibcudf 库的吞吐量比 cuDF-python 高约 1–2 GB/秒。
对于
pandas read_json
,我们测量了默认 UltraJSON 引擎 (标记为“pandas-uj”) 的吞吐量约为 40–50 MB/s。由于解析速度更快 (pandas-pa),使用 pyarrow 引擎 (
engine="pyarrow"
) 可将速度提升高达 70–100 MB/s。由于需要为表中的每个元素创建 Python 列表和字典对象,因此 pandas JSON 读取器的性能似乎受到限制。
对于 DuckDB
read_json
,我们发现
list<str>
和
struct<str>
处理的吞吐量约为 0.5–1 GB/s,而
list<int>
和
struct<int>
的较低值 < 0.2 GB/s。数据处理吞吐量在列数量范围内保持稳定。
对于 pyarrow
read_json
,我们测量了 5-20 列的高达 2–3 GB/s 的数据处理吞吐量,以及随着列数量增加到 50 及以上而降低的吞吐量值。我们发现,与列数量和输入数据大小相比,数据类型对读取器性能的影响较小。如果列数量为 200,且每行的记录大小约为 5 KB,吞吐量将下降到约 0.6 GB/s。
将 pyarrow
block_size
reader 选项提升至 20 MB (pyarrow-20MB) 会导致列数量增加 100 或以上的吞吐量增加,同时还会降低 50 或以下列数量的吞吐量。
总体而言,DuckDB 主要因数据类型而显示吞吐量可变性,而 cuDF 和 pyarrow 主要因列数量和输入数据大小而显示吞吐量可变性。基于 GPU 的 cudf.pandas 和 pylibcudf 为复杂列表和结构模式(尤其是输入数据大小 > 50 MB)提供了超高的数据处理吞吐量。
JSON 行读取器选项
鉴于 JSON 格式基于文本的特性,JSON 数据通常包含异常,导致 JSON 记录无效或无法很好地映射到数据帧。其中一些 JSON 异常包括单引号字段、已裁剪或损坏的记录,以及混合结构或列表类型。当数据中出现这些模式时,它们可能会中断工作流中的 JSON 读取器步骤。
以下是这些 JSON 异常的一些示例:
# 'Single quotes'
# field name "a" uses single quotes instead of double quotes
s = '{"a":0}\n{\'a\':0}\n{"a":0}\n'
# ‘Invalid records'
# the second record is invalid
s = '{"a":0}\n{"a"\n{"a":0}\n'
# 'Mixed types'
# column "a" switches between list and map
s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n'
要在 cuDF 中解锁高级 JSON 读取器选项,我们建议您将 cuDF-Python (
import cudf
) 和 pylibcudf 集成到您的工作流中。如果数据中出现单引号字段名称或字符串值,cuDF 会提供读取器选项,用于将单引号归一化为双引号。cuDF 支持此功能,可与 Apache Spark 中默认启用的
allowSingleQuotes
选项兼容。
如果您的数据中出现无效记录,cuDF 和 DuckDB 都会提供错误恢复选项,将这些记录替换为 null。启用错误处理后,如果记录生成解析错误,则相应行的所有列均标记为 null。
如果混合 list 和 struct 值与数据中的相同字段名相关联,cuDF 提供一个 dtype 模式覆盖选项,以将数据类型强制转换为字符串。DuckDB 使用类似的方法来推理
JSON
数据类型。
对于混合类型,pandas 库可能是最可靠的方法,使用 Python 列表和字典对象来表示输入数据。
以下是 cuDF-Python 和 pylibcudf 中的示例,其中显示了读取器选项,包括列名称“a”的 dtype 模式覆盖。如需了解更多信息,请参阅
cudf.read_json
和
pylibcudf.io.json.read_json
。
对于 pylibcudf,可以在
build
函数之前或之后配置
JsonReaderOptions
对象。
# cuDF-python
import cudf
df = cudf.read_json(
file_path,
dtype={"a":str},
on_bad_lines='recover',
lines=True,
normalize_single_quotes=True
)
# pylibcudf
import pylibcudf as plc
s = plc.io.types.SourceInfo([file_path])
opt = (
plc.io.json.JsonReaderOptions.builder(s)
.lines(True)
.dtypes([("a",plc.types.DataType(plc.types.TypeId.STRING), [])])
.recovery_mode(plc.io.types.JSONRecoveryMode.RECOVER_WITH_NULL)
.normalize_single_quotes(True)
.build()
)
df = plc.io.json.read_json(opt)
表 3 总结了使用 Python API 的多个 JSON 读取器针对一些常见 JSON 异常的行为。交叉表示读取器函数引发异常,勾号表示库已成功返回 Dataframe。在未来版本的库中,这些结果可能会发生变化。
单引号
无效记录
混合类型
cuDF-Python、pylibcudf
归一化为双引号
设置为 null
表示为字符串
pandas
*例外
*例外
表示为 Python 对象
pandas (
engine="pyarrow
“
)
*例外
*例外
*例外
DuckDB
*例外
设置为 null
表示为类似 JSON 字符串的类型
pyarrow
*例外
*例外
*例外
表 3、读取异常情况 (包括单引号、混合类型和无效记录) 的 JSONL 文件时的 JSON 读取器结果
cuDF 支持多个额外的 JSON 读取器选项,这些选项对于与 Apache Spark 惯例的兼容性至关重要,现在也可供 Python 用户使用。其中一些选项包括:
数字和字符串的验证规则
自定义记录分隔符
根据 dtype 中提供的模式进行列剪枝
自定义
NaN
值
有关更多信息,请参阅有关
json_reader_options
的 libcudf C++ API 文档。
有关多源读取以高效处理许多较小的 JSON 行文件的更多信息,或有关分解大型 JSON 行文件的字节范围支持的更多信息,请参阅使用
RAPIDS 进行 GPU 加速的 JSON 数据处理
。
总结
RAPIDS cuDF 为在 Python 中处理 JSON 数据提供了功能强大、灵活且加速的工具。
从 24.12 版本开始,您还可以在适用于 Apache Spark 的 RAPIDS Accelerator 中使用 GPU 加速的 JSON 数据处理功能。有关信息,请参阅
使用 GPU 在 Apache Spark 上加速 JSON 处理
。
有关更多信息,请参阅以下资源:
cuDF 文档
/rapidsai/cudf
GitHub 存储库
RAPIDS Docker 容器 (可用于版本和夜间构建)
零代码更改加速数据科学工作流程
DLI 课程
掌握用于 GPU 加速的 cudf.pandas Profiler |
https://developer.nvidia.com/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/ | Understanding the Language of Life’s Biomolecules Across Evolution at a New Scale with Evo 2 | AI has evolved
from an experimental curiosity to a driving force within biological research. The convergence of deep learning algorithms, massive omics datasets, and automated laboratory workflows has allowed scientists to tackle problems once thought intractable—from rapid protein structure prediction to generative drug design, increasing the need for
AI literacy among scientists
. With this momentum, we find ourselves on the cusp of the next paradigm shift: the emergence of powerful AI foundation models purpose-built for biology.
These new models promise to unify disparate data sources—genomic sequences, RNA and proteomic profiles, and, in some cases, scientific literature—into a single, coherent understanding of life at the molecular, cellular, and systems levels. Learning biology’s language and structure opens doors to transformative applications, such as smarter drug discovery, rational enzyme design, and disease mechanism elucidation.
As we set the stage for this next wave of AI-driven breakthroughs, it is clear that these foundation models will not merely accelerate progress; they stand poised to redefine what is possible in biological research.
Test
Evo 2 as an NVIDIA BioNeMo NIM microservice
for free with the
/NVIDIA/bionemo-examples
example notebook.
Explore a complete
reference workflow for protein design
.
Start training Evo 2 on your data today in
BioNeMo Framework
.
Stay up to date with the
latest NVIDIA BioNeMo platform updates
.
A leap forward in sequence modeling and design from molecular to genome-scale
The
first Evo model
from November 2024 represented a groundbreaking milestone in genomic research, introducing a foundation model capable of analyzing and generating biological sequences across DNA, RNA, and proteins.
Published at a time when most models were restricted to single modalities or short contexts, Evo is known for its ability to operate across scales—ranging from molecular to genomic—using a unified approach. Trained on 2.7M prokaryotic and phage genomes, encompassing 300B nucleotide tokens, Evo delivered single-nucleotide resolution across many biological evolution and function tasks.
At the core of Evo’s success is its innovative
StripedHyena
architecture (Figure 1), a hybrid model combining 29 Hyena layers, a new type of deep learning architecture designed to handle long sequences of information without relying on traditional attention mechanisms that are common to Transformer architectures. Instead it uses a combination of convolutional filters and gates.
This design overcame the limitations of traditional Transformer models, enabling Evo to handle long contexts of up to 131,072 tokens efficiently. The result was a model capable of connecting small sequence changes to system-wide and organism-level impacts, bridging the gap between molecular biology and evolutionary genomics.
Figure 1
.
Evo and Evo 2 AI model architecture
Evo’s predictive capabilities set new standards for biological modeling. It achieved competitive performance in several zero-shot tasks, including predicting the fitness effects of mutations on proteins, non-coding RNAs, and regulatory DNA, providing invaluable insights for synthetic biology and precision medicine.
Evo also demonstrated remarkable generative capabilities, designing functional CRISPR-Cas systems and transposons. These outputs were validated experimentally, proving that Evo could predict and design novel biological systems with real-world utility.
Evo represents a notable advancement in
integrating multimodal and multiscale biological understanding into a single model
. Its ability to generate genome-scale sequences and predict gene essentiality across entire genomes marked a leap forward in our capacity to analyze and engineer life.
Evo’s milestones were not just its technical achievements but also its vision. This unified framework combined biology’s vast complexity with cutting-edge AI to accelerate discovery and innovation in life sciences.
Learning the language of life across evolution
Evo 2 is the next generation of this line of research in genomic modeling, building on the success of Evo with expanded data, enhanced architecture, and superior performance.
Evo 2 can provide insights into three essential biomolecules—DNA, RNA, and protein— and all three domains of life: Eukarya, Prokarya, and Archaea. Trained on a dataset of 8.85T nucleotides from 15,032 eukaryotic genomes and 113,379 prokaryotic genomes, this training dataset covers diverse species, enabling unprecedented cross-species generalization and significantly broadening its scope compared to Evo, which focused solely on prokaryotic genomes.
Evo 2 uses a new and improved
StripedHyena 2
architecture, extended up to 40B parameters, enhancing the model’s training efficiency and ability to capture long-range dependencies with context lengths of 1M tokens. StripedHyena 2, thanks to its multihybrid design based on convolutions, trains significantly faster than Transformers and other hybrid models using linear attention or state-space models.
The largest Evo 2 model was trained using 2,048
NVIDIA H100 GPUs
using
NVIDIA DGX Cloud
on AWS. As part of
NVIDIA’s partnership with Arc
, they gained access to this high-performance, fully managed AI platform optimized for large-scale, distributed training with NVIDIA AI software and expertise.
These advances mark a significant increase from Evo’s 7B parameters and a 131,000-token context length, positioning Evo 2 as a leader in multimodal and multiscale biological modeling (Table 1).
Feature
Evo
Evo 2
Genomic Training Data
Bacterial
+ bacteriophage
(300B nucleotides)
All domains of life
+ bacteriophage
(9T nucleotides)
Model Parameters
7B
7B + 40B
Context Length
131,072 tokens
Up to 1,048,576 tokens
Modalities
DNA, RNA, protein
DNA, RNA, protein
Safety
Viruses of Eukaryotes excluded
Viruses of Eukaryotes excluded
Applications
Limited cross-species tasks
Broad cross-species applications
Table 1.
Key features of Evo 2 and Evo
Evo 2’s expanded training data and refined architecture empower it to excel across various biological applications. Its multimodal design integrates DNA, RNA, and protein data, enabling zero-shot performance on tasks like mutation impact prediction and genome annotation. Evo 2 also fundamentally improves Evo by including eukaryotic genomes, enabling deeper insights into human diseases, agriculture, and environmental science.
Evo 2’s predictive capabilities outperform specialized models across diverse tasks:
Variant impact analysis
: Achieves state-of-the-art accuracy in predicting the functional effects of mutations across species zero-shot, including human and non-coding variants.
Gene essentiality
: Identifies essential genes in prokaryotic and eukaryotic genomes, validated against experimental datasets, bridging the gap between molecular and systems biology tasks.
Generative capabilities
: Designs complex biological systems, such as genome-scale prokaryotic and eukaryotic sequences, and the controllable design of chromatin accessibility, demonstrating new capabilities for biological design with real-world applicability.
Using the NVIDIA Evo 2 NIM microservice
The
NVIDIA Evo 2 NIM microservice
is useful for generating a variety of biological sequences, with an API that provides settings to adjust tokenization, sampling, and temperature parameters:
# Define JSON example human L1 retrotransposable element sequence
example = {
# nucleotide sequence to be analyzed
"sequence": "GAATAGGAACAGCTCCGGTCTACAGCTCCCAGCGTGAGCGACGCAGAAGACGGTGATTTCTGCATTTCCATCTGAGGTACCGGGTTCATCTCACTAGGGAGTGCCAGACAGTGGGCGCAGGCCAGTGTGTGTGCGCACCGTGCGCGAGCCGAAGCAGGGCGAGGCATTGCCTCACCTGGGAAGCGCAAGGGGTCAGGGAGTTCCCTTTCCGAGTCAAAGAAAGGGGTGATGGACGCACCTGGAAAATCGGGTCACTCCCACCCGAATATTGCGCTTTTCAGACCGGCTTAAGAAACGGCGCACCACGAGACTATATCCCACACCTGGCTCAGAGGGTCCTACGCCCACGGAATC",
"num_tokens": 102, # number of tokens to generate
"top_k": 4, # only predict top 4 most likely outcomes per token
"top_p": 1.0, # include 100% cumulative prob results in sampling
"temperature": 0.7, # add variability (creativity) to predictions
"
": True, # enable more diverse outputs
"enable_logits": False, # disable raw model output (logits)
}
# Retrieve the API key from the environment
key = os.getenv("NVCF_RUN_KEY")
# Send the example sequence and parameters to the Evo 2 API
r = requests.post(
# Example URL for the Evo 2 model API.
url=os.getenv("URL","https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate")
# Authorization headers to authenticate with the API
headers={"Authorization": f"Bearer {key}"},
# The data payload (sequence and parameters) sent as JSON
json=example,
)
For more information about the API output for various prompts, see the
NVIDIA BioNeMo Framework
documentation.
Evo 2 can also be fine-tuned using the open-source
NVIDIA BioNeMo Framework
, which offers robust tools for adapting pretrained models such as Evo 2 to specialized tasks in BioPharma:
# Prepare raw sequence data for training based on a YAML config file
preprocess_evo2 -c data_preproc_config.yaml
# Trains the Evo 2 model with preprocessed data and parallelism across multiple GPUs
torchrun --nproc-per-node=8 --no-python train_Evo 2 -d data_train_config.yaml --num-nodes=1 --devices=8 --max-steps=100 --val-check-interval=25 --experiment-dir=/workspace/bionemo2/model/checkpoints/example --seq-length=8192 --tensor-parallel-size=4 --pipeline-model-parallel-size=1 --context-parallel-size=2 --sequence-parallel --global-batch-size=8 --micro-batch-size=1 --model-size=7b --fp8 --tflops-callback
# Optional Fine-tuning: Add this argument to start from a pretrained model
# --ckpt-dir=/path/to/pretrained_checkpoint
Evo 2 and the future of AI in biology
AI is poised to rapidly transform biological research, enabling breakthroughs previously thought to be decades away. Evo 2 represents a significant leap forward in this evolution, introducing a genomic foundation model capable of analyzing and generating DNA, RNA, and protein sequences at unprecedented scales.
While Evo excelled in predicting mutation effects and gene expression in prokaryotes, the capabilities of Evo 2 are much broader, with enhanced cross-species generalization, making it a valuable tool for studying eukaryotic biology, human diseases, and evolutionary relationships.
Evo 2’s ability to perform zero-shot tasks, from identifying genes that drive cancer risk to designing complex biomolecular systems, showcases its versatility. Including long-context dependencies enables it to uncover patterns across genomes, providing multi-modal and multi-scale insights that are pivotal for advancements in precision medicine, agriculture, and synthetic biology.
As the field moves forward, models like Evo 2 set the stage for a future where AI deciphers life’s complexity and is also used to design new useful biological systems. These advancements align with broader trends in AI-driven science, where foundational models are tailored to domain-specific challenges, unlocking previously unattainable capabilities. Evo 2’s contributions signal a future where AI becomes an indispensable partner in decoding, designing, and reshaping the living world.
For more information about Evo 2, see the
technical report published by the Arc Institute
. Evo 2 is also available within the NVIDIA BioNeMo platform.
Acknowledgments
We’d like to thank the following contributors to the described research for their notable contributions to the ideation, writing, and figure design for this post:
Garyk Brixi
, genetics Ph.D. student at Stanford
Jerome Ku
, machine learning engineer working with the Arc Institute
Michael Poli
, founding scientist at Liquid AI and computer science Ph.D. student at Stanford
Greg Brockman
, co-founder and president of OpenAI
Eric Nguyen
, bioengineering Ph.D. student at Stanford
Brandon Yang
, co-founder of Cartesia AI and computer science Ph.D. student at Stanford (on leave)
Dave Burke
, chief technology officer at the Arc Institute
Hani Goodarzi
, core investigator at the Arc Institute and associate professor of biophysics and biochemistry at the University of California, San Francisco
Patrick Hsu
, co-founder of the Arc Institute, assistant professor of bioengineering, and Deb Faculty Fellow at the University of California, Berkeley
Brian Hie
, assistant professor of chemical engineering at Stanford University, Dieter Schwarz Foundation Stanford Data Science Faculty Fellow, innovation investigator at the Arc Institute, and leader at the Laboratory of Evolutionary Design at Stanford | https://developer.nvidia.com/zh-cn/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/ | 利用 Evo 2 以全新规模跨进化研究生命生物分子的语言 | AI 已从实验好奇心发展为生物学研究的驱动力。
深度学习算法、海量组学数据集和自动化实验室工作流程的融合使
科学家能够解决一度被认为棘手的问题
(从快速蛋白质结构预测到生成式药物设计),从而增加了科学家对 AI 素养的需求。在这一势头下,我们正处于下一次范式转变的边缘:专为生物学打造的强大 AI 基础模型的出现。
这些新模型有望将不同的数据源 (基因组序列、RNA 和蛋白质组谱,在某些情况下还包括科学文献) 统一为分子、细胞和系统级别上对生命的统一、一致的理解。学习生物学的语言和结构为变革性应用打开了大门,例如更智能的药物发现、合理的酶设计和疾病机制阐明。
在我们为下一波 AI 驱动的突破做好准备之际,这些基础模型显然不仅能够加速进展,还将重新定义生物学研究的可能性。
使用/NVIDIA/bionemo-examples
示例 notebook
免费将 Evo 2 测试为 NVIDIA BioNeMo NIM 微服务
。
探索蛋白质设计的完整参考工作流程
。
立即开始在
BioNeMo 框架
中使用您的数据训练 Evo 2
及时了解 NVIDIA BioNeMo 平台的最新动态。
序列建模和设计从分子规模到基因组规模的飞跃
2024 年 11 月推出的
首个 Evo 模型
是基因组研究领域的一个突破性里程碑,它引入了能够分析和生成跨 DNA、RNA 和蛋白质的生物序列的基础模型。
在发布 Evo 时,大多数模型都被限制在单一模式或简短背景下运行,而众所周知,它能够使用统一的方法跨规模 (从分子到基因组) 操作。Evo 基于 270 万个原核细胞和噬菌体基因组 (包含 300 亿个核酸令牌) 进行训练,在许多生物进化和功能任务中提供了单核酸分辨率。
Evo 成功的核心是其创新的 StripedHyena 架构 (图 1),这是一个结合了 29 个 Hyena 层的混合模型,这是一种新型深度学习架构,旨在处理长序列信息,而无需依赖 Transformer 架构中常见的传统注意机制。相反,它使用卷积过滤器和门的组合。
这种设计克服了传统 Transformer 模型的限制,使 Evo 能够高效处理多达 131,072 个令牌的长上下文。最终,该模型能够将微小的序列变化与系统级和有机体级的影响联系起来,弥合分子生物学与进化基因组学之间的差距。
图 1
.
.
.
Evo 和 Evo 2 AI 模型架构
Evo 的预测功能为生物建模树立了新的标准。它在多项零样本任务中取得了具有竞争力的表现,包括预测突变对蛋白质、非编码 RNAs 和调控 DNA 的适应性影响,为合成生物学和精准医学提供宝贵见解。
Evo 还展示了非凡的生成功能,设计了功能齐全的 CRISPR-Cas 系统和转座子。这些输出经过实验验证,证明了 Evo 可以预测和设计具有真实效用的新型生物系统。
Evo
代表着将多模态和多尺度生物理解集成到单个模型中的显著进步
。它能够生成基因组级序列并预测整个基因组的基因本质,这标志着我们分析和工程生命的能力实现了飞跃。
Evo 的里程碑式发展不仅体现在技术成就上,还体现在愿景上。这一统一框架将生物学的庞大复杂性与尖端 AI 相结合,加速了生命科学领域的发现和创新。
学习进化过程中的生命语言
Evo 2 是基因组建模领域这一系列研究的新一代产品,基于 Evo 在扩展数据、增强架构和卓越性能方面取得的成功而构建。
Evo 2 可以深入了解三种基本生物分子 (DNA、RNA 和蛋白质) 以及生命的所有三个领域:真核生物 (Eukarya)、原核生物 (Prokarya) 和古菌 (Archaea)。此训练数据集基于来自 15,032 个真核基因组和 113,379 个原核基因组的 8.85T 核酸数据集进行训练,涵盖不同物种,与仅关注原核基因组的 Evo 相比,实现了前所未有的跨物种泛化,并显著拓宽了其范围。
Evo 2 使用经过改进的全新
StripedHyena 2
架构,该架构可将参数扩展至 40B 个,从而提高模型的训练效率,以及使用 1M 个令牌的上下文长度捕获远程依赖项的能力。
StripedHyena 2
采用基于卷积的多混合设计,其训练速度明显快于 Transformers 和其他使用线性注意力或状态空间模型的混合模型。
最大的 Evo 2 模型使用 AWS 上的
NVIDIA DGX Cloud
使用 2,048 个
NVIDIA H100 GPUs
进行训练。作为
NVIDIA 与 Arc 合作
的一部分,他们获得了对这个高性能、完全托管的 AI 平台的访问权限,该平台利用 NVIDIA AI 软件和专业知识针对大规模分布式训练进行了优化。
这些进步标志着 Evo 的 7B 参数和 131,000 个令牌的上下文长度有了显著增加,使 Evo 2 成为多模态和多尺度生物建模领域的领导者 (表 1)。
特征
Evo
Evo 2
基因组训练数据
细菌 + bacteriophage(300B 核酸)
所有生命领域 + bacteriophage(9T nucleotides)
模型参数
70 亿
70 亿 400 亿
上下文长度
131072 个令牌
最多 1048576 个令牌
模式
DNA、RNA、蛋白质
DNA、RNA、蛋白质
安全性
不包括真核生物病毒
不包括真核生物病毒
应用
跨物种任务受限
广泛的跨物种应用
表 1、Evo 2 和 Evo 的主要特性
Evo 2 的扩展训练数据和优化架构使其能够在各种生物应用中表现卓越。其多模态设计集成了 DNA、RNA 和蛋白质数据,可在执行突变影响预测和基因组标注等任务时实现零采样性能。Evo 2 还通过纳入真核基因组从根本上改进了 Evo,使人们能够更深入地了解人类疾病、农业和环境科学。
Evo 2 的预测能力优于各种任务的专业模型:
变体影响分析
:在零样本预测物种突变 (包括人类和非编码变体) 的功能性影响方面实现出色的准确性。
基因本质
:识别原核和真核基因组中的基本基因,并通过实验数据集进行验证,弥合分子和系统生物学任务之间的差距。
生成功能
:设计复杂的生物系统(例如基因组级的原核细胞序列和真核细胞序列),以及染色质可访问性的可控设计,展示具有现实世界适用性的生物设计新功能。
使用 NVIDIA Evo 2 NIM 微服务
NVIDIA Evo 2 NIM 微服务
可用于生成各种生物序列,其 API 可提供用于调整标记化、采样和温度参数的设置:
# Define JSON example human L1 retrotransposable element sequence
example = {
# nucleotide sequence to be analyzed
"sequence": "GAATAGGAACAGCTCCGGTCTACAGCTCCCAGCGTGAGCGACGCAGAAGACGGTGATTTCTGCATTTCCATCTGAGGTACCGGGTTCATCTCACTAGGGAGTGCCAGACAGTGGGCGCAGGCCAGTGTGTGTGCGCACCGTGCGCGAGCCGAAGCAGGGCGAGGCATTGCCTCACCTGGGAAGCGCAAGGGGTCAGGGAGTTCCCTTTCCGAGTCAAAGAAAGGGGTGATGGACGCACCTGGAAAATCGGGTCACTCCCACCCGAATATTGCGCTTTTCAGACCGGCTTAAGAAACGGCGCACCACGAGACTATATCCCACACCTGGCTCAGAGGGTCCTACGCCCACGGAATC",
"num_tokens": 102, # number of tokens to generate
"top_k": 4, # only predict top 4 most likely outcomes per token
"top_p": 1.0, # include 100% cumulative prob results in sampling
"temperature": 0.7, # add variability (creativity) to predictions
"
": True, # enable more diverse outputs
"enable_logits": False, # disable raw model output (logits)
}
# Retrieve the API key from the environment
key = os.getenv("NVCF_RUN_KEY")
# Send the example sequence and parameters to the Evo 2 API
r = requests.post(
# Example URL for the Evo 2 model API.
url=os.getenv("URL","https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate")
# Authorization headers to authenticate with the API
headers={"Authorization": f"Bearer {key}"},
# The data payload (sequence and parameters) sent as JSON
json=example,
)
有关各种提示的 API 输出的更多信息,请参阅
NVIDIA BioNeMo 框架文档
。
此外,还可以使用开源
NVIDIA BioNeMo Framework
对 Evo 2 进行微调,该框架提供可靠的工具,可根据 BioPharma 中的专业任务调整预训练模型(例如 Evo 2):
# Prepare raw sequence data for training based on a YAML config file
preprocess_evo2 -c data_preproc_config.yaml
# Trains the Evo 2 model with preprocessed data and parallelism across multiple GPUs
torchrun --nproc-per-node=8 --no-python train_Evo 2 -d data_train_config.yaml --num-nodes=1 --devices=8 --max-steps=100 --val-check-interval=25 --experiment-dir=/workspace/bionemo2/model/checkpoints/example --seq-length=8192 --tensor-parallel-size=4 --pipeline-model-parallel-size=1 --context-parallel-size=2 --sequence-parallel --global-batch-size=8 --micro-batch-size=1 --model-size=7b --fp8 --tflops-callback
# Optional Fine-tuning: Add this argument to start from a pretrained model
# --ckpt-dir=/path/to/pretrained_checkpoint
Evo 2 和生物学领域的 AI 未来
AI 势必会迅速改变生物学研究,实现之前人们认为需要数十年才能实现的突破。Evo 2 代表了这一变革的重大飞跃,它引入了基因组基础模型,能够以超大规模分析和生成 DNA、RNA 和蛋白质序列。
虽然 Evo 在预测原核生物的突变效应和基因表达方面表现优异,但 Evo 2 的功能更为广泛,并增强了跨物种泛化,这使其成为研究真核生物学、人类疾病和进化关系的宝贵工具。
从识别导致癌症风险的基因到设计复杂的生物分子系统,Evo 2 执行零样本击任务的能力充分体现了其通用性。借助长上下文依赖项,AI 能够揭示跨基因组的模式,提供对精准医学、农业和合成生物学的进步至关重要的多模态和多尺度见解。
随着该领域的发展,像 Evo 2 这样的模型为 AI 解读生命复杂性的未来奠定了基础,同时还用于设计新的有用的生物系统。这些进步与 AI 驱动的科学领域的更广泛趋势是一致的,在这些趋势中,基础模型针对特定领域的挑战进行了定制,解锁了以前无法实现的功能。Evo 2 的贡献标志着 AI 将成为解码、设计和重塑生存世界不可或缺的合作伙伴。
有关 Evo 2 的更多信息,请参阅 Arc Institute 发布的技术报告。Evo 2 也可在 NVIDIA BioNeMo 平台中使用。
致谢
在此,
我们要感谢以下参与所述研究的人员,感谢他们为本文的构思、写作和图形设计做出的杰出贡献
:
Garyk Brixi
,斯坦福大学遗传学博士生
与 Arc Institute 合作的机器学习工程师
Jerome Ku
Michael Poli
,Liquid AI 的创始科学家兼斯坦福大学计算机科学博士生
Greg Brockman
,OpenAI 联合创始人兼总裁
Eric Nguyen,
斯坦福大学生物工程博士生
Brandon Yang
,Cartesia AI 联合创始人兼斯坦福大学计算机科学博士生 (休假中)
Dave Burke
,Arc Institute 首席
技术官
Hani Goodarzi
,Arc Institute 核心研究员,加州大学旧金山分校生物物理学和生物化学副教授
Patrick Hsu
,Arc Institute 联合创始人、生物工程助理教授兼加州大学伯克利分校 Deb 教职研究员
Brian Hie
– 斯坦福大学化学工程助理教授、Dieter Schwarz 基金会斯坦福大学数据科学教职人员、Arc Institute 创新研究员、斯坦福大学进化设计实验室负责人 |
https://developer.nvidia.com/blog/using-networkx-jaccard-similarity-and-cugraph-to-predict-your-next-favorite-movie/ | Using NetworkX, Jaccard Similarity, and cuGraph to Predict Your Next Favorite Movie | As the amount of data available to everyone in the world increases, the ability for a consumer to make informed decisions becomes increasingly difficult. Fortunately, large datasets are a beneficial component for recommendation systems, which can make a sometimes overwhelming decision much easier.
Graphs are excellent choices for modeling the relationships inherent in the data that fuel recommendation systems, and NetworkX is a very popular option that many data scientists turn to for graph analytics in Python. NetworkX is easy to learn and use, stocked with a wide breadth of graph algorithms, backed by a large and friendly community, and has copious examples available in notebooks, documents, Stack Overflow, and your favorite LLM.
However, to the disappointment of countless developers that broke into graph analytics with or even because of NetworkX, it famously falls short in performance at the scales used by typical recommendation systems.
This begs the question: Can an effective graph-based recommendation system be written in a few simple lines of Python? More generally, can developers and data scientists have both easy-to-use and high-performance graph analytics?
The answer to both questions is, “Yes.”
Read on to discover how you can create a simple and effective recommendation system in Python using NetworkX, a dataset of 33M movie reviews, the Jaccard Similarity algorithm, and the NVIDIA cuGraph back-end, which provides the >250x speedup necessary for modern large-scale graph data.
The MovieLens dataset
Here’s the most important part of the system: the data. The
MovieLens dataset
is generously made available for public download and is described in more detail in the
README file
. The full set includes about 331K anonymized users reviewing 87K movies, resulting in 34M ratings.
Figure 1. MovieLens data represented as a graph, where the individual ratings easily map to edges between user and movie nodes
Extracting recommendations from the data: bipartite graphs and Jaccard Similarity
The type of graph created from the MovieLens data is a
bipartite graph
because there are only two types of nodes: movies and users. The reviews (edges) can only occur between a user and a movie. This makes it particularly easy to apply the
Jaccard Similarity
algorithm to find similarities between movies.
Jaccard Similarity compares pairs of nodes and computes a similarity coefficient using their relationships in the graph. In this case, movies are related to each other based on how users have chosen to watch and review them.
Figure 2. Jaccard Similarity computes a similarity coefficient using the sizes of the sets of neighbors for the two nodes being compared
Based on the viewing preferences of users, you can see m3 is more similar to m2 than it is to m1, and movies m4 and m1 aren’t similar at all. This system would recommend m2 to someone who likes m3 and wouldn’t recommend m1 to someone who likes m4.
NetworkX makes it easy… for smaller graphs
Not surprisingly, NetworkX supports the type of analysis described earlier, and it’s quite easy to start seeing results in just a few lines of Python. But as you’ll see, performance becomes a limitation for larger-sized graphs—such as those needed for your movie recommendation system—when using NetworkX without the GPU-accelerated cuGraph backend.
I discuss the key pieces of the recommendation system later in this post, but the full source code is available in the
/rapidsai/nx-cugraph
GitHub repo.
Because the Jaccard Similarity algorithm you’re using doesn’t take edge weights into account, it considers all reviews equal. You don’t want movies with low reviews to be recommended, so filter out all reviews under a certain threshold, which has the side effect of making the graph smaller too.
# Create a separate DataFrame containing only "good" reviews (rating >= 3).
good_ratings_df = ratings_df[ratings_df["rating"] >= 3]
good_user_ids = good_ratings_df["userId"].unique()
good_movie_ids = good_ratings_df["movieId"].unique()
If you print the sizes of the data you’re working with, you see that your graph of good reviews is approximately 330K nodes and 28M edges, with an average degree (number of neighbors per node) of 84:
total number of users: 330975
total number of reviews: 33832162
average number of total reviews/user: 102.22
total number of users with good ratings: 329127
total number of good reviews: 27782577
average number of good reviews/user: 84.41
As mentioned earlier, graphs of this size often present a challenge to NetworkX, but GPU acceleration using the cuGraph backend removes the performance limitations often associated with this much data. However, I’ll continue with a CPU-only environment to demonstrate the default performance.
All the following examples were run on a workstation using NetworkX 3.4.2 and a Intel Xeon Platinum 8480CL at 2.0 GHz with 2 TB RAM.
Using a NetworkX graph created from users and good movie reviews, pick a user, find one of their highest rated movies, and use Jaccard Similarity to find other movies like it:
# Pick a user and one of their highly-rated movies
user = good_user_ids[321]
user_reviews = good_user_movie_G[user]
highest_rated_movie = max(
user_reviews,
key=lambda n: user_reviews[n].get("rating", 0)
)
When you look up the node ID in the movie name map, you see that one of this user’s highest rated movies is the animated film,
Mulan
:
highest rated movie for user=289308 is Mulan (1998), id: 1907, rated: {'rating': 5.0}
You can now use Jaccard Similarity to recommend a movie based on the user’s preferences and viewing history:
%%time
# Run Jaccard Similarity
jacc_coeffs = list(nx.jaccard_coefficient(good_user_movie_G, ebunch))
CPU times: user 2min 5s, sys: 15.4 ms, total: 2min 5s
Wall time: 2min 14s
The Jaccard Similarity computation using the default NetworkX implementation ran for over two minutes. Using these results, you can now provide a recommendation.
# Sort by coefficient value, which is the 3rd item in the tuples
jacc_coeffs.sort(key=lambda t: t[2], reverse=True)
# Create a list of recommendations ordered by "best" to "worst" based on the
# Jaccard Similarity coefficients and the movies already seen
movies_seen = list(good_user_movie_G.neighbors(user))
recommendations = [mid for (_, mid, _) in jacc_coeffs
if mid not in movies_seen]
Now you can print the first movie in the sorted list of recommendations:
User ID 289308 might like Tarzan (1999) (movie ID: 2687)
The code is easy and the results look good, but performance holds us back
As you can see, the recommendation seems reasonable; someone who likes
Mulan
seems likely to also enjoy the 1999 Disney animated film
Tarzan
.
However, if the goal was to provide a service or to analyze hundreds or thousands of movies, the two-minute runtime would have you start looking for an alternative to NetworkX. You can see that finding similarities between other movies using this system isn’t any faster:
%%time
# 1196: "Star Wars: Episode V - The Empire Strikes Back (1980)"
print_similar_movies(1196)
movies similar to Star Wars: Episode V - The Empire Strikes Back (1980):
movieId=260, Star Wars: Episode IV - A New Hope (1977)
movieId=1210, Star Wars: Episode VI - Return of the Jedi (1983)
movieId=1198, Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
CPU times: user 13min 47s, sys: 71.8 ms, total: 13min 47s
Wall time: 11min 30s
Here’s another example:
%%time
# 318: "Shawshank Redemption, The (1994)"
print_similar_movies(318)
movies similar to "Shawshank Redemption, The (1994)":
movieId=296, Pulp Fiction (1994)
movieId=593, "Silence of the Lambs, The (1991)"
movieId=356, Forrest Gump (1994)
CPU times: user 28min 28s, sys: 172 ms, total: 28min 28s
Wall time: 16min 49s
The quality of the recommendations returned is impressive given that this system is composed of only a few lines of code. However, the runtime performance makes it virtually unusable. As described earlier, finding recommendations based on
Shawshank Redemption, The (1994)
takes nearly 17 minutes.
NVIDIA cuGraph makes it transformatively faster
The graph algorithm in this workflow is prohibitively expensive, but by using the NVIDIA cuGraph backend and a compatible GPU, you can dramatically improve performance without changing the code.
Jaccard Similarity is supported in nx-cugraph version 25.02 or later. Version 25.02 is available from nightly builds and will be part of future stable releases later this month. Instructions on installing nx-cugraph, as well as other RAPIDS packages, from both nightly and stable channels using conda or pip, are available in the
RAPIDS Installation Guide
.
After being installed, enable nx-cugraph by setting an environment variable:
NX_CUGRAPH_AUTOCONFIG=True
cuGraph uses the GPU to dramatically accelerate the neighbor lookups and set comparisons needed for the Jaccard Similarity computation. As the graph scales and the number of movies and reviews per movie increases, performance remains almost constant.
The best part of the system—the simplicity of the code—does not change, and the results are identical, but performance increases by over 250x for the run that previously took nearly 17 minutes, reducing it to under 4 seconds.
Figure 3. Speedup of cuGraph over NetworkX for Jaccard Similarity computation for various movies
Software: NetworkX 3.4.2, cuGraph/nx-cugraph 25.02
;
CPU: Intel(R) Xeon(R) Platinum 8480CL @ 2.0GHz 2TB RAM
;
GPU: NVIDIA Quadro RTX 8000 48GB RAM
Conclusion
This post covered a simple and effective recommendation system that’s easy to write in Python using NetworkX. Although there are many other approaches you could take—as covered in
What Is a Recommendation System?
—few would match the low effort required to start exploring data that graph analysis with NetworkX offers.
However, productive and meaningful data exploration requires quick turnaround, and NetworkX has traditionally struggled to scale to larger, real-world problem sizes.
The NVIDIA cuGraph backend for NetworkX accelerates the familiar and flexible NetworkX API to also make it performant at scale, generating results in seconds instead of tens of minutes, keeping you focused and productive. You can now continue using NetworkX, the most popular graph analytics library, without concern for scaling issues simply by adding a GPU and the cuGraph backend to your environment.
For more information about accelerated graph analysis using NetworkX and NVIDIA cuGraph, see
RAPIDS: GPU-Accelerated NetworkX Backend
. | https://developer.nvidia.com/zh-cn/blog/using-networkx-jaccard-similarity-and-cugraph-to-predict-your-next-favorite-movie/ | 使用 NetworkX、Jaccard Similarity 和 cuGraph 预测您下一部最喜欢的电影 | 随着全球每个人可用的数据量不断增加,消费者做出明智决策的能力也越来越难。幸运的是,对于推荐系统而言,大型数据集是一个非常有用的组件,有时这会让某些情况下的决策变得更加容易。
对于为推荐系统提供支持的数据中固有的关系建模而言,图形是一个很好的选择,而 NetworkX 是许多数据科学家在 Python 中进行图形分析时非常喜欢的选择。NetworkX 易于学习和使用,拥有各种图形算法,并由庞大而友好的社区提供支持,并且在 Notebook、文档、Stack Overflow 和您喜欢的 LLM 中提供了大量示例。然而,令无数开发人员失望的是,他们使用 NetworkX 或甚至因为 NetworkX 而涉足图形分析,但众所周知,它在典型推荐系统使用的规模上的性能表现不佳。
这就引出了一个问题:能否用 Python 的几行简单代码编写有效的基于图形的推荐系统?更一般地说,开发者和数据科学家能否同时进行易于使用的高性能图形分析?
这两个问题的答案都是“Yes”
请继续阅读,了解如何使用 NetworkX、Jaccard Similarity 算法和 NVIDIA cuGraph 后端 (可将现代大规模图形数据所需的速度提高 250 倍以上),在 Python 中创建简单有效的推荐系统,使用 3300 万条电影评论的数据集。
MovieLens 数据集
我们先从系统中最重要的部分开始:数据。MovieLens 数据集
1
可供公开
下载
,
README 文件
中有更详细的说明。该系列包括大约 331k 匿名用户,他们在观看 87k 部电影,获得了 34M 的评分。
图 1、MovieLens 数据可以表示为一个图形,其中各个评分可轻松映射到用户和电影节点之间的边缘。
从数据中提取建议:二分图和 Jaccard Similarity
我们根据 MovieLens 数据创建的图形类型是二部图,因为只有两种类型的节点:电影节点和用户节点,并且评论(边缘)只能在用户和电影之间发生。这使得应用
Jaccard Similarity
算法来查找电影之间的相似性变得特别容易。Jaccard Similarity 比较节点对,并使用它们在图形中的关系计算相似性系数。在这种情况下,电影根据用户选择的观看和审查方式相互关联。
图 3、Jaccard Similarity 使用所比较的两个节点的近邻集的大小来计算相似性系数。根据用户的观看偏好,我们可以看到 m3 与 m2 更相似,电影 m4 和 m1 完全不相似。此系统会向喜欢 m3 的用户推荐 m2,而不会向喜欢 m4 的用户推荐 m1。
NetworkX 可以轻松处理较小的图形
不足为奇的是,NetworkX 支持我们上述的分析类型,而且只需使用几行 Python 代码即可轻松查看结果。但正如我们将看到的,当使用无 GPU 加速的 cuGraph 后端的 NetworkX 时,大型图形 (例如我们的电影推荐系统所需的图形) 的性能会受到限制。
我们将在下方查看推荐系统的关键部分,但可在
此处
获取完整源代码。
由于我们使用的 Jaccard Similarity 算法未考虑边缘权重,因此会将所有评论视为相同。我们不希望推荐好评度低的电影,因此我们会过滤掉某个值下的所有好评度,这会导致图形也变小。
# Create a separate DataFrame containing only "good" reviews (rating >= 3).
good_ratings_df = ratings_df[ratings_df["rating"] >= 3]
good_user_ids = good_ratings_df["userId"].unique()
good_movie_ids = good_ratings_df["movieId"].unique()
如果我们打印正在处理的数据的大小,我们会看到好评图大约有 330k 个节点,28M 个边缘,平均度数 (每个节点的近邻数) 为 84:
total number of users: 330975
total number of reviews: 33832162
average number of total reviews/user: 102.22
total number of users with good ratings: 329127
total number of good reviews: 27782577
average number of good reviews/user: 84.41
如上所述,这种规模的图形通常会给 NetworkX 带来挑战,但使用 cuGraph 后端的 GPU 加速消除了通常与如此多的数据相关的性能限制。不过,我们将继续使用 CPU 环境来演示默认性能。
注意
以下所有示例均在使用 NetworkX 3.4.2 和 Intel(R) Xeon(R) Platinum [email protected](2TB RAM)的工作站上运行
使用由用户创建的 NetworkX 图形和优秀的电影评论,我们来选择一个用户,找到他们评分最高的电影之一,并使用 Jaccard Similarity 找到类似的其他电影。
# Pick a user and one of their highly-rated movies
user = good_user_ids[321]
user_reviews = good_user_movie_G[user]
highest_rated_movie = max(
user_reviews,
key=lambda n: user_reviews[n].get("rating", 0)
)
当我们在电影名称贴图中查找节点 ID 时,我们会发现该用户评分最高的电影之一是动画电影“Mulan”:
highest rated movie for user=289308 is Mulan (1998), id: 1907, rated: {'rating': 5.0}
我们现在可以使用 Jaccard Similarity 根据用户的偏好和观看历史记录来推荐电影:
%%time
# Run Jaccard Similarity
jacc_coeffs = list(nx.jaccard_coefficient(good_user_movie_G, ebunch))
CPU times: user 2min 5s, sys: 15.4 ms, total: 2min 5s
Wall time: 2min 14s
使用默认 NetworkX 实现的 Jaccard 相似性计算运行了两分钟以上。根据这些结果,我们现在可以提供推荐。
# Sort by coefficient value, which is the 3rd item in the tuples
jacc_coeffs.sort(key=lambda t: t[2], reverse=True)
# Create a list of recommendations ordered by "best" to "worst" based on the
# Jaccard Similarity coefficients and the movies already seen
movies_seen = list(good_user_movie_G.neighbors(user))
recommendations = [mid for (_, mid, _) in jacc_coeffs
if mid not in movies_seen]
现在,我们只需在已排序的推荐列表中打印出第一部电影:
User ID 289308 might like Tarzan (1999) (movie ID: 2687)
代码很简单,结果看起来不错,但性能却拖累了我们
如我们所见,这个推荐似乎是合理的;喜欢“Mulan”的人似乎也喜欢 1999 年的迪士尼动画电影“Tarzan”。
但是,如果我们的目标是提供服务,或分析数百乃至数千部电影,那么两分钟的运行时间就能让我们开始寻找 NetworkX 的替代方案。我们可以看到,使用此系统查找其他电影相似点的速度并没有加快:
%%time
# 1196: "Star Wars: Episode V - The Empire Strikes Back (1980)"
print_similar_movies(1196)
movies similar to Star Wars: Episode V - The Empire Strikes Back (1980):
movieId=260, Star Wars: Episode IV - A New Hope (1977)
movieId=1210, Star Wars: Episode VI - Return of the Jedi (1983)
movieId=1198, Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
CPU times: user 13min 47s, sys: 71.8 ms, total: 13min 47s
Wall time: 11min 30s
%%time
# 318: "Shawshank Redemption, The (1994)"
print_similar_movies(318)
movies similar to "Shawshank Redemption, The (1994)":
movieId=296, Pulp Fiction (1994)
movieId=593, "Silence of the Lambs, The (1991)"
movieId=356, Forrest Gump (1994)
CPU times: user 28min 28s, sys: 172 ms, total: 28min 28s
Wall time: 16min 49s
鉴于此系统仅由几行代码组成,因此所返回的推荐内容的质量令人印象深刻。但是,运行时性能使其几乎无法使用。如上所述,根据“Shawshank Redemption, The (1994)”查找推荐内容大约需要 17 分钟。
NVIDIA cuGraph 使其变革性地加快
上述工作流程中的图形算法成本高昂,但通过使用 NVIDIA cuGraph 后端和兼容的 GPU,我们可以在不更改代码的情况下显著提高性能。
nx-cugraph 版本 25.02 或更高版本支持 Jaccard Similarity。版本 25.02 可在 nightly builds 中使用,并将于本月晚些时候纳入未来的稳定版本中。有关如何使用 conda 或 pip 从 nightly 和 stable 通道安装 nx-cugraph 以及其他 RAPIDS 包的说明,请参阅
RAPIDS Installation Guide
。
安装后,只需设置环境变量即可启用 nx-cugraph:
NX_CUGRAPH_AUTOCONFIG=True
cuGraph 利用 GPU 显著加速近邻查找,并设置 Jaccard 相似性计算所需的比较结果。此外,随着图形规模以及每部电影的电影和评论数量的增加,性能几乎保持不变。
该系统最优秀的部分,即代码的简单性,并没有改变,结果也是一样的,但在过去近 17 分钟的运行中,性能提高了 250 倍以上,缩短到 4 秒以内。
图 4、图表显示了各种电影的 cuGraph over NetworkX for Jaccard Similarity 计算速度
。
软件:NetworkX 3.4.2,cuGraph/nx-cugraph 25.02
CPU:Intel(R) Xeon(R) Platinum [email protected] 2TB RAM
GPU:NVIDIA Quadro RTX 8000 48GB RAM
结束语
这篇博文介绍了一个简单而有效的推荐系统,它可以使用 NetworkX 轻松地用 Python 编写。虽然我们可以采用许多其他方法(如此处所述),但很少有方法能够做到与开始探索 NetworkX 图形分析提供的数据所需的工作量不相上下。然而,高效和有意义的数据探索需要快速的周转,而 NetworkX 传统上一直难以扩展到更大的实际问题规模。
适用于 NetworkX 的 NVIDIA cuGraph 后端可对熟悉且灵活的 NetworkX API 进行加速,还可大规模提升其性能,在几秒钟 (而非数十分钟) 内生成结果,从而让您专注工作并高效工作。现在,用户只需向环境中添加 GPU 和 cuGraph 后端,即可继续使用热门的图形分析库 NetworkX,而无需担心扩展问题。
如需了解有关使用 NetworkX 和 NVIDIA cuGraph 进行加速图形分析的更多信息,请访问
https://rapids.ai/nx-cugraph
。 |
https://developer.nvidia.com/blog/nvidia-open-gpu-datacenter-drivers-for-rhel9-signed-by-red-hat/ | NVIDIA Open GPU Datacenter Drivers for RHEL9 Signed by Red Hat | NVIDIA and Red Hat have partnered to bring continued improvements to the precompiled
NVIDIA Driver introduced in 2020
. Last month, NVIDIA
announced
that the open GPU driver modules will become the default recommended way to enable NVIDIA graphics hardware.
Today, NVIDIA announced that Red Hat is now compiling and signing the
NVIDIA open GPU kernel modules
to further streamline the usage for the Red Hat Enterprise Linux 9 OS and to provide a way forward for supporting
Confidential Computing
with NVIDIA hardware in the datacenter.
The signed open GPU drivers can now be installed and used without any extra key enrollment configuration. As with the current legacy precompiled packages, only the latest RHEL 9 kernel is supported at the time of the driver update.
Starting with RHEL 9.5, NVIDIA now offers a tech preview repository available for trial until April 30, 2025. This repository enables the usage of the open GPU driver and legacy driver (with caveats) using
modularity streams
.
After the tech preview ends, the signed kernel modules will be available in the NVIDIA Compute package repositories with the rest of the CUDA platform: CUDA-X acceleration libraries, CUDA-optimized applications, and frameworks.
For more information about modularity streams and how they work with the precompiled graphics driver, see
Streamlining NVIDIA Driver Deployment on RHEL 8 with Modularity Streams
.
Activating the tech preview
To use the signed NVIDIA open GPU driver modules, make sure that the Red Hat repositories are enabled, including RHEL9 AppStream, RHEL9 BaseOS, and RHEL9 CRB:
$ subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
$ subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
$ subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms
Add the CUDA network repository:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
When the tech preview repository is activated, view the supported modularity streams:
$ dnf module list nvidia-driver
Figure 1. Example output list from the tech preview tool of the NVIDIA modularity streams installed
The tech preview provides the following new modularity streams:
latest-open-gpu
<driver_version>-open-gpu
Install the latest stream to opt into the signed precompiled packages:
$ sudo dnf module install nvidia-driver:latest-open-gpu
Reboot and enjoy!
Figure 2. Example
nvidia-smi
command output showing the NVIDIA CUDA driver version running and system parameters
Important caveat with the tech preview repository
The legacy precompiled driver is also listed in the modularity streams:
nvidia-driver:latest
nvidia-driver:latest-<major_version>
nvidia-driver:<major_version>
These drivers have been signed by NVIDIA and require signing key enrollment. They install just fine without any warnings but upon reboot the NVIDIA graphics driver will not be available:
$ nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Upon further inspection, the problem becomes clear:
# insmod /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko
insmod: ERROR: could not insert module /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko: Key was rejected by service
This happens because secure boot is enabled by default:
$ sudo mokutil --sb-state
SecureBoot enabled
This module has been signed by NVIDIA using a key that is not in the chain of trust:
$ modinfo /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko | grep signer
signer: NVIDIA
For more information about enrolling the NVIDIA signing key with mokutil, see
UEFI Secure Boot
on GitHub.
Installation
Install the tech preview repository on a host with secure boot enabled:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
$ sudo dnf module install nvidia-driver:latest-open-gpu
$ sudo reboot
If you discover any issues with packaging, please report those problems to
github.com/NVIDIA/yum-packaging-precompiled-kmod/issues
.
This repository will be active until March 31, 2025.
Summary
Now with signed packages from NVIDIA and Red Hat, secure boot with NVIDIA on RHEL 9 is even better. You can look forward to more developments in this space.
The driver source code is available in the
/NVIDIA/open-gpu-kernel-modules
GitHub repo.
Packaging templates and instructions are provided on GitHub to enable you to maintain your own precompiled kernel module packages for custom kernels and derivative Linux distros:
/NVIDIA/yum-packaging-precompiled-kmod
/NVIDIA/yum-packaging-nvidia-plugin
To give feedback,
send comments
or
report driver bugs
. If you are not already a member,
join the NVIDIA Developer Program
. | https://developer.nvidia.com/zh-cn/blog/nvidia-open-gpu-datacenter-drivers-for-rhel9-signed-by-red-hat/ | 适用于 RHEL9 的 NVIDIA 开放 GPU 数据中心驱动,由 Red Hat 签署 | NVIDIA 和 Red Hat 合作,对
2020 年推出的预编译 NVIDIA Driver
进行持续改进。上个月,NVIDIA
宣布
,开放的 GPU 驱动模组将成为启用 NVIDIA 图形硬件的默认推荐方式。
今天,NVIDIA 宣布 Red Hat 正在编译并签署
NVIDIA 开放 GPU 内核模块
,以进一步简化 Red Hat Enterprise Linux 9 操作系统的使用,并为在数据中心使用 NVIDIA 硬件支持
机密计算
提供前进方向。
现在无需任何额外的密钥注册配置即可安装和使用经过签名的开放 GPU 驱动。与当前的旧版预编译软件包一样,在驱动更新时仅支持最新的 RHEL 9 内核。
从 RHEL 9.5 开始,NVIDIA 现在提供技术预览库,可在 2025 年 4 月 30 日之前试用。此库允许使用
模块化流使
用开放式 GPU 驱动程序和旧版驱动程序(附带说明)。
技术预览结束后,已签名的内核模块将在 NVIDIA Compute 软件包存储库中与 CUDA 平台的其他部分一起提供:CUDA-X 加速库、CUDA 优化的应用和框架。
有关 模块化流及其如何与预编译图形驱动程序配合使用的更多信息 ,请参阅
Streamlining NVIDIA Driver Deployment on RHEL 8 with Modularity Streams
。
激活技术预览
要使用经过签名的 NVIDIA 开源 GPU 驱动模块,请确保已启用 Red Hat 资源库,包括 RHEL9 AppStream、RHEL9 BaseOS 和 RHEL9 CRB:
$ subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
$ subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
$ subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms
添加 CUDA 网络存储库:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
激活技术预览库后,请查看支持的模块化流:
$ dnf module list nvidia-driver
图 1、已安装的 NVIDIA 模块化流的技术预览工具中的输出列表示例
技术预览提供了以下新的模块化流:
latest-open-gpu
<driver_version>-open-gpu
安装最新的流,选择使用经过签名的预编译软件包:
$ sudo dnf module install nvidia-driver:latest-open-gpu
重启并尽情享受吧!
图 2、显示
正在运行的 NVIDIA CUDA 驱动程序版本和系统参数的 nvidia-smi 命令输出示例
技术预览库的重要注意事项
旧版预编译驱动也列在模块化流中:
nvidia-driver:latest
nvidia-driver:latest-<major_version>
nvidia-driver:<major_version>
这些驱动已由 NVIDIA 签名,需要签名密钥注册。它们可以正常安装,没有任何警告,但在重启后,NVIDIA 图形驱动程序将不可用:
$ nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
经过进一步检查,问题变得明显:
# insmod /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko
insmod: ERROR: could not insert module /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko: Key was rejected by service
发生这种情况的原因是默认情况下启用了 secure boot:
$ sudo mokutil --sb-state
SecureBoot enabled
本模块由 NVIDIA 使用不在信任链中的密钥进行签名:
$ modinfo /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko | grep signer
signer: NVIDIA
有关使用 mokutil 注册 NVIDIA 签名密钥的更多信息,请参阅 GitHub 上的 UEFI Secure Boot。
安装
在已启用安全启动的主机上安装技术预览库:
$ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/
$ sudo dnf module install nvidia-driver:latest-open-gpu
$ sudo reboot
如果您发现任何打包问题,请向
github.com/NVIDIA/yum-packaging-precompiled-kmod/issues
报告这些问题。
此库的有效期至 2025 年 3 月 31 日。
总结
现在,借助 NVIDIA 和 Red Hat 的签名软件包,在 RHEL 9 上使用 NVIDIA 进行安全启动会更好。您可以期待此领域的更多开发。
您可在
/NVIDIA/open-gpu-kernel-modules
GitHub 仓库中获取驱动源代码。
GitHub 上提供了打包模板和说明,以便您为自定义内核和衍生 Linux 发行版维护自己的预编译内核模块包:
/NVIDIA/yum-packaging-precompiled-kmod
/NVIDIA/yum-packaging-nvidia-plugin
如要提供反馈,请
发送评论
或
报告驱动错误
。如果您还不是会员,请
加入 NVIDIA Developer Program
。 |
https://developer.nvidia.com/blog/get-started-with-gpu-acceleration-for-data-science/ | Get Started with GPU Acceleration for Data Science | In data science, operational efficiency is key to handling increasingly complex and large datasets. GPU acceleration has become essential for modern workflows, offering significant performance improvements.
RAPIDS
is a suite of open-source libraries and frameworks developed by NVIDIA, designed to accelerate data science pipelines using GPUs with minimal code changes. Providing tools like
cuDF
for data manipulation,
cuML
for machine learning, and
cuGraph
for graph analytics, RAPIDS enables seamless integration with existing Python libraries, making it easier for data scientists to achieve faster and more efficient processing.
This post shares tips for transitioning from CPU data science libraries to GPU-accelerated workflows, especially for experienced data scientists.
Setting up RAPIDS on desktop or cloud infrastructure
Getting started with RAPIDS is straightforward, but it does have several dependencies. The recommended approach is to follow the official
RAPIDS Installation Guide
, which provides detailed instructions for local installations. You have multiple paths to install the framework: through pip install, Docker image, or through an environment such as Conda. To set up RAPIDS in a cloud environment, see the
RAPIDS Cloud Deployment Guide
. Before installing, ensure compatibility by checking your CUDA version and the supported RAPIDS version on the installation page.
cuDF and GPU acceleration for pandas
An advantage of RAPIDS lies in its modular architecture, which empowers users to adopt specific libraries designed for GPU-accelerated workflows. Among these, cuDF stands out as a powerful tool for seamlessly transitioning from traditional
pandas
-based workflows to GPU-optimized data processing, and requires zero code changes.
To get started, make sure to enable the cuDF extension before importing pandas for execution of data import and remainder of the operation on GPU. By loading the RAPIDS extension with
%load_ext cudf.pandas
, you can effortlessly integrate cuDF DataFrame within existing workflows, preserving the familiar syntax and structure of pandas.
Similar to pandas,
cuDF pandas
supports different file formats such as .csv, .json, .pickle, .paraquet, and hence enables GPU-accelerated data manipulation.
The following code is an example of how to enable the
cudf.pandas
extension and concatenate two .csv files:
%load_ext cudf.pandas
import pandas as pd
import cupy as cp
train = pd.read_csv('./Titanic/train.csv')
test = pd.read_csv('./Titanic/test.csv')
concat = pd.concat([train, test], axis = 0)
Loading the
cudf.pandas
extension enables the execution of familiar pandas operations—such as filtering, grouping, and merging—on GPUs without requiring a code change or rewrites. The cuDF accelerator is compatible with the pandas API to ensure a smooth transition from CPU to GPU while delivering substantial computational speedups.
target_rows = 1_000_000
repeats = -(-target_rows // len(train)) # Ceiling division
train_df = pd.concat([train] * repeats, ignore_index=True).head(target_rows)
print(train_df.shape) # (1000000, 2)
repeats = -(-target_rows // len(test)) # Ceiling division
test_df = pd.concat([test] * repeats, ignore_index=True).head(target_rows)
print(test_df.shape) # (1000000, 2)
combine = [train_df, test_df]
(1000000, 12)
(1000000, 11)
filtered_df = train_df[(train_df['Age'] > 30) & (train_df['Fare'] > 50)]
grouped_df = train_df.groupby('Embarked')[['Fare', 'Age']].mean()
additional_info = pd.DataFrame({
'PassengerId': [1, 2, 3],
'VIP_Status': ['No', 'Yes', 'No']
})
merged_df = train_df.merge(additional_info, on='PassengerId',
how='left')
Decoding performance: CPU and GPU runtime metrics in action
In data science, performance optimization is not just about speed, but also understanding how computational resources are utilized. It involves analyzing how operations leverage CPU and GPU architectures, identifying inefficiencies, and implementing strategies to enhance workflow efficiency.
Performance profiling tools like
%cudf.pandas.profile
play a key role by offering a detailed examination of code execution. The following execution result breaks down each function, and distinguishes between tasks processed on the CPU from those accelerated on the GPU:
%%cudf.pandas.profile
train_df[['Pclass', 'Survived']].groupby(['Pclass'],
as_index=False).mean().sort_values(by='Survived', ascending=False)
Pclass Survived
0 1 0.629592
1 2 0.472810
2 3 0.242378
Total time elapsed: 5.131 seconds
5 GPU function calls in 5.020 seconds
0 CPU function calls in 0.000 seconds
Stats
+------------------------+------------+-------------+------------+------------+-------------+------------+
| Function | GPU ncalls | GPU cumtime | GPU percall | CPU ncalls | CPU cumtime | CPU percall |
+------------------------+------------+-------------+------------+------------+-------------+------------+
| DataFrame.__getitem__ | 1 | 5.000 | 5.000 | 0 | 0.000 | 0.000 |
| DataFrame.groupby | 1 | 0.000 | 0.000 | 0 | 0.000 | 0.000 |
| GroupBy.mean | 1 | 0.007 | 0.007 | 0 | 0.000 | 0.000 |
| DataFrame.sort_values | 1 | 0.002 | 0.002 | 0 | 0.000 | 0.000 |
| DataFrame.__repr__ | 1 | 0.011 | 0.011 | 0 | 0.000 | 0.000 |
+------------------------+------------+-------------+------------+------------+-------------+------------+
This granularity helps pinpoint operations that inadvertently revert to CPU execution, a common occurrence due to unsupported cuDF functions, incompatible data types, or suboptimal memory handling. It is crucial to identify these issues because such fallbacks can significantly impact overall performance. To learn more about this loader, see
Mastering the cudf.pandas Profiler for GPU Acceleration
.
Additionally, you can use Python magic commands like
%%time
and
%%timeit
to enable benchmarks of specific code blocks that facilitate direct comparisons of runtime between pandas (CPU) and the cuDF accelerator for pandas (GPU). These tools provide insights into the efficiency gains achieved through GPU acceleration. Benchmarking with
%%time
provides a clear comparison of execution times between CPU and GPU environments, highlighting the efficiency gains achievable through parallel processing.
%%time
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
CPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 56.6 ms, sys: 8.08 ms, total: 64.7 ms
Wall time: 63.3 ms
GPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 6.65 ms, sys: 0 ns, total: 6.65 ms
Wall time: 5.46 ms
The
%%time
example delivers a 10x speedup in execution time, reducing wall time from 63.3 milliseconds (ms) on the CPU to 5.46 ms on the GPU. This highlights the efficiency of GPU acceleration with cuDF pandas for large-scale data operations. Further insights are gained using
%%timeit
, which performs repeated executions to measure consistency and reliability in performance metrics.
%%timeit
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
CPU output:
1.11 s ± 7.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
GPU output:
89.6 ms ± 959 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The
%%timeit
example gives us a 10x performance improvement with GPU acceleration, reducing the runtime from 1.11 seconds per loop on the CPU to 89.6 ms per loop on the GPU. This highlights the efficiency of cuDF pandas for intensive data operations.
Verifying GPU utilization
When working with different data types, it is important to verify whether your system is utilizing the GPU effectively. You can check whether arrays are being processed on the CPU or GPU by using the familiar
type
command to differentiate between NumPy and CuPy arrays.
type(guess_ages)
cupy.ndarray
If the output is
np.array
, the data is being processed on the CPU. If the output is
cupy.ndarray
, the data is being processed on the GPU. This quick check ensures that your workflows are leveraging GPU resources where intended.
Secondly, by simply using the
print
command, you can confirm whether the GPU is being utilized and ensure that a cuDF DataFrame is being processed. The output specifies whether the
fast
path (cuDF) or
slow
path (pandas) is in use. This straightforward check provides an easy way to validate that the GPU is active for accelerating data operations.
print(pd)
<module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))>
Lastly, commands such as
df.info
can be used to inspect the structure of cuDF DataFrame and confirm that computations are GPU-accelerated. This helps verify whether operations are running on the GPU or falling back to the CPU.
train_df.info()
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 1000000 non-null int64
1 Pclass 1000000 non-null int64
2 Sex 1000000 non-null int64
3 Age 1000000 non-null float64
4 SibSp 1000000 non-null int64
5 Parch 1000000 non-null int64
6 Fare 1000000 non-null float64
7 Embarked 997755 non-null object
8 Title 1000000 non-null int64
dtypes: float64(2), int64(6), object(1)
memory usage: 65.9+ MB
Conclusion
RAPIDS, through tools like cuDF pandas, provides a seamless transition from traditional CPU-based data workflows to GPU-accelerated processing, offering significant performance improvements. By leveraging features such as
%%time
,
%%timeit
, and profiling tools like
%%cudf.pandas.profile
, you can measure and optimize runtime efficiency. The ability to inspect GPU utilization through simple commands like
type
,
print(pd)
, and
df.info
ensures that workflows are leveraging GPU resources effectively.
To try the data operations detailed in this post, check out the
accompanying Jupyter Notebook
.
To learn more about GPU-accelerated data science, see
10 Minutes to Data Science: Transitioning Between RAPIDS cuDF and CuPy Libraries
and
RAPIDS cuDF Instantly Accelerates pandas Up to 50x on Google Colab
.
Join us for
GTC 2025
and register for the
Data Science Track
to gain deeper insights. Recommended sessions include:
Accelerating Data Science with RAPIDS and NVIDIA GPUs
Scaling Machine Learning Workflows with RAPIDS
To build expertise with RAPIDS, check out the following hands-on workshops at GTC:
Accelerate Clustering Algorithms to Achieve the Highest Performance
Analyzing and Visualizing Large Data Interactively Using Accelerated Computing | https://developer.nvidia.com/zh-cn/blog/get-started-with-gpu-acceleration-for-data-science/ | 适用于数据科学的 GPU 加速入门 | 在数据科学领域,运营效率是处理日益复杂和大型数据集的关键。GPU 加速已成为现代工作流程的关键,可显著提高性能。
RAPIDS
是由 NVIDIA 开发的一套开源库和框架,旨在使用 GPU 以尽可能减少代码更改来加速数据科学流程。RAPIDS 提供用于数据操作的
cuDF
、用于机器学习的
cuML
和用于图形分析的
cuGraph
等工具,可实现与现有 Python 库的无缝集成,使数据科学家更容易实现更快、更高效的处理。
本文分享了从 CPU 数据科学库过渡到 GPU 加速工作流程的技巧,特别适合经验丰富的数据科学家。
在桌面或云基础架构上设置 RAPIDS
开始使用 RAPIDS 非常简单,但它确实有几个依赖项。推荐的方法是遵循官方的
RAPIDS 安装指南
,该指南提供了详细的本地安装说明。您有多种安装框架的路径:通过 pip install、Docker 镜像,或通过 Conda 等环境。要在云环境中设置 RAPIDS,请参阅
RAPIDS 云部署指南
。安装前,请检查安装页面上的 CUDA 版本和受支持的 RAPIDS 版本,确保兼容性。
适用于 pandas 的 cuDF 和 GPU 加速
RAPIDS 的一个优势在于其模块化架构,使用户能够采用专为 GPU 加速工作流程设计的特定库。其中,cuDF 作为一款功能强大的工具脱颖而出,可从基于
pandas
的传统工作流程无缝过渡到 GPU 优化的数据处理流程,并且无需更改代码。
首先,请确保在导入 pandas 之前启用 cuDF 扩展,以便在 GPU 上执行数据导入和剩余操作。通过使用
%load_ext cudf.pandas
加载 RAPIDS 扩展程序,您可以轻松地将 cuDF DataFrame 集成到现有工作流程中,从而保留熟悉的 pandas 语法和结构。
与 pandas 类似,
cuDF pandas
支持不同的文件格式,例如 .csv、.json、.pickle、.paraquet,因此支持 GPU 加速的数据操作。
以下代码是如何启用
cudf.pandas
扩展名并连接两个 .csv 文件的示例:
%load_ext cudf.pandas
import pandas as pd
import cupy as cp
train = pd.read_csv('./Titanic/train.csv')
test = pd.read_csv('./Titanic/test.csv')
concat = pd.concat([train, test], axis = 0)
通过加载
cudf.pandas
扩展程序,无需更改或重写代码,即可在 GPU 上执行熟悉的 pandas 操作,例如过滤、分组和合并。cuDF 加速器与 pandas API 兼容,可确保从 CPU 到 GPU 的平稳过渡,同时大幅提高计算速度。
target_rows = 1_000_000
repeats = -(-target_rows // len(train)) # Ceiling division
train_df = pd.concat([train] * repeats, ignore_index=True).head(target_rows)
print(train_df.shape) # (1000000, 2)
repeats = -(-target_rows // len(test)) # Ceiling division
test_df = pd.concat([test] * repeats, ignore_index=True).head(target_rows)
print(test_df.shape) # (1000000, 2)
combine = [train_df, test_df]
(1000000, 12)
(1000000, 11)
filtered_df = train_df[(train_df['Age'] > 30) & (train_df['Fare'] > 50)]
grouped_df = train_df.groupby('Embarked')[['Fare', 'Age']].mean()
additional_info = pd.DataFrame({
'PassengerId': [1, 2, 3],
'VIP_Status': ['No', 'Yes', 'No']
})
merged_df = train_df.merge(additional_info, on='PassengerId',
how='left')
解码性能:CPU 和 GPU 运行时指标的实际应用
在数据科学中,性能优化不仅涉及速度,还涉及了解计算资源的利用方式。其中包括分析运营如何利用 CPU 和 GPU 架构、识别效率低下问题,以及实施旨在提高工作流程效率的策略。
%cudf.pandas.profile
等性能分析工具通过详细检查代码执行情况发挥着关键作用。以下执行结果会对每个函数进行分解,并区分在 CPU 上处理的任务与在 GPU 上加速的任务:
%%cudf.pandas.profile
train_df[['Pclass', 'Survived']].groupby(['Pclass'],
as_index=False).mean().sort_values(by='Survived', ascending=False)
Pclass Survived
0 1 0.629592
1 2 0.472810
2 3 0.242378
Total time elapsed: 5.131 seconds
5 GPU function calls in 5.020 seconds
0 CPU function calls in 0.000 seconds
Stats
+------------------------+------------+-------------+------------+------------+-------------+------------+
| Function | GPU ncalls | GPU cumtime | GPU percall | CPU ncalls | CPU cumtime | CPU percall |
+------------------------+------------+-------------+------------+------------+-------------+------------+
| DataFrame.__getitem__ | 1 | 5.000 | 5.000 | 0 | 0.000 | 0.000 |
| DataFrame.groupby | 1 | 0.000 | 0.000 | 0 | 0.000 | 0.000 |
| GroupBy.mean | 1 | 0.007 | 0.007 | 0 | 0.000 | 0.000 |
| DataFrame.sort_values | 1 | 0.002 | 0.002 | 0 | 0.000 | 0.000 |
| DataFrame.__repr__ | 1 | 0.011 | 0.011 | 0 | 0.000 | 0.000 |
+------------------------+------------+-------------+------------+------------+-------------+------------+
这种粒度有助于查明无意中恢复到 CPU 执行的操作,这是由于不受支持的 cuDF 函数、不兼容的数据类型或次优内存处理而常见的情况。识别这些问题至关重要,因为此类回退会严重影响整体性能。如需详细了解此加载程序,请参阅
Mastering cudf.pandas Profiler for GPU Acceleration
。
此外,您可以使用 Python magic 命令,如
%%time
和
%%timeit
,来启用特定代码块的基准测试,以便直接比较 pandas(CPU)和 cuDF 加速器(GPU)之间的运行时。这些工具可让您深入了解通过 GPU 加速实现的效率提升。通过使用
%%time
进行基准测试,可以清楚地比较 CPU 和 GPU 环境之间的执行时间,从而凸显通过并行处理实现的效率提升。
%%time
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
CPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 56.6 ms, sys: 8.08 ms, total: 64.7 ms
Wall time: 63.3 ms
GPU output:
Before (999702, 12) (999856, 11) (999702, 12) (999856, 11)
After (999702, 10) (999856, 9) (999702, 10) (999856, 9)
CPU times: user 6.65 ms, sys: 0 ns, total: 6.65 ms
Wall time: 5.46 ms
%%time
示例可将执行时间提高 10 倍,将墙面时间从 CPU 上的 63.3 毫秒 (ms) 缩短到 GPU 上的 5.46 毫秒。这凸显了使用 cuDF pandas 进行 GPU 加速在大规模数据操作中的效率。您可以使用
%%timeit
获得更多见解,它执行重复执行来测量性能指标中的一致性和可靠性。
%%timeit
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
CPU output:
1.11 s ± 7.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
GPU output:
89.6 ms ± 959 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
在 GPU 加速方面,
%%timeit
示例将性能提升了 10 倍,将运行时间从 CPU 上的每循环 1.11 秒缩短到 GPU 上的每循环 89.6 毫秒。这凸显了 cuDF pandas 在密集型数据操作中的效率。
验证 GPU 利用率
在处理不同的数据类型时,请务必验证您的系统是否有效利用了 GPU。您可以使用熟悉的
type
命令来区分 NumPy 和 CuPy 数组,检查数组是在 CPU 还是 GPU 上处理。
type(guess_ages)
cupy.ndarray
如果输出为
np.array
,则数据将在 CPU 上处理。如果输出为
cupy.ndarray
,则数据将在 GPU 上处理。此快速检查可确保您的工作流程按预期利用 GPU 资源。
其次,只需使用
print
命令,即可确认是否正在利用 GPU,并确保正在处理 cuDF DataFrame。输出指定使用的是
fast
路径 (cuDF) 还是
slow
路径 (pandas)。这种简单的检查提供了一种验证 GPU 是否处于活动状态以加速数据操作的简单方法。
print(pd)
<module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))>
最后,可以使用
df.info
等命令检查 cuDF DataFrame 的结构,并确认计算已通过 GPU 加速。这有助于验证操作是在 GPU 上运行,还是回退至 CPU。
train_df.info()
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 1000000 non-null int64
1 Pclass 1000000 non-null int64
2 Sex 1000000 non-null int64
3 Age 1000000 non-null float64
4 SibSp 1000000 non-null int64
5 Parch 1000000 non-null int64
6 Fare 1000000 non-null float64
7 Embarked 997755 non-null object
8 Title 1000000 non-null int64
dtypes: float64(2), int64(6), object(1)
memory usage: 65.9+ MB
结束语
通过 cuDF pandas 等工具,RAPIDS 可实现从基于 CPU 的传统数据工作流到 GPU 加速处理的无缝过渡,从而显著提高性能。通过利用
%%time
、
%%timeit
等功能以及
%%cudf.pandas.profile
等分析工具,您可以测量和优化运行时效率。通过
type
、
print(pd)
和
df.info
等简单命令检查 GPU 利用率,可确保工作流程有效利用 GPU 资源。
要尝试本文中详述的数据操作,请查看
随附的 Jupyter Notebook
。
如需了解有关 GPU 加速的数据科学的更多信息,请参阅“
10 分钟了解数据科学:在 RAPIDS cuDF 和 CuPy 库之间过渡
”以及“
RAPIDS cuDF 即时将 pandas 在 Google Colab 上的运行速度提高 50 倍
”。
加入我们的
GTC 2025
大会,并报名参加
Data Science Track
,获得更深入的见解。推荐的会议包括:
借助 RAPIDS 和 NVIDIA GPUs 加速数据科学
借助 RAPIDS 扩展机器学习工作流程
要积累有关 RAPIDS 的专业知识,请查看 GTC 上的以下实战研讨会:
加速聚类算法以实现超高性能
使用加速计算以交互方式分析和可视化大数据 |
https://developer.nvidia.com/blog/how-to-use-openusd/ | How to Use OpenUSD | Universal Scene Description (OpenUSD)
is an open, extensible framework and ecosystem with APIs for composing, editing, querying, rendering, collaborating, and simulating within 3D virtual worlds.
This post explains how you can start using OpenUSD today with your existing assets and tools and what steps you can take to iteratively up-level your USD workflows.
For an interactive, comprehensive introduction to the essential concepts, vocabulary, and best practices for OpenUSD, explore the
free Learn OpenUSD learning path
from the
NVIDIA Deep Learning Institute
.
Data exchange
OpenUSD does not replace your existing tools or data formats. As the name suggests, Universal Scene Description provides a common way to represent data for many 3D concepts such as meshes and PBR materials. It continues evolving to support additional concepts such as physics and B-reps.
Figure 1. OpenUSD’s growing ecosystem of DCC applications
To begin using OpenUSD with your existing assets, use an exporter or converter to bring your content into USD. This approach ensures swift and effective integration into new USD workflows. By using USD in this way, you can begin to enhance collaboration, performance, and interoperability across different stages of production.
For more information about how to get started, see the
list of applications and tools that support USD interchange
.
As you expand your USD workflows and want to use OpenUSD with proprietary data, you may have a need to develop your own data exchange solutions.
Data exchange pipelines manage the extraction, transformation, and ingestion of content from one data format to another. There are several primary types of data exchange solutions used to integrate existing content into USD workflows:
Importers and exporters
: These are found in digital content creation (DCC) applications and enable you to directly import and export 3D assets to and from USD. Popular software like Rhino and Blender facilitate the importing of 3D assets, while applications like Houdini open 3D assets as native USD files.This makes it straightforward to work with USD within your existing tools.
Converters:
These are standalone programs or scripts designed to convert 3D files from various formats to and from USD. They offer flexibility for batch processing and automation, ensuring that your assets can be easily transformed without the need for manual intervention.
USD file format plug-ins
: This type of USD plug-in enables you to keep your data in its original source format while dynamically reading and translating it to USD on the fly when you open a stage. This means you can work with your assets in their native formats and still benefit from USD’s powerful features, while maintaining data integrity and workflow efficiency.
Conceptual data mapping and extract, transform, load (ETL) are crucial processes for developing data exchange solutions that help ensure the seamless transfer and integration of 3D assets and data.
Conceptual data mapping
involves defining how data from one system or format corresponds to data in another system or format. This is particularly important when working with different DCC applications or file types, as each may have its own unique data structure and representation.
ETL is a more comprehensive process used to move and integrate data from one system to another. It consists of the following main steps:
Extract:
Pull data from the source system or file format. The data could be 3D models, textures, animations, or any other relevant content.
Transform:
Transform extracted data to fit the target runtime requirements. This might include converting units, reformatting data structures, or filtering out irrelevant or private data.
Load:
Load the transformed data into the target runtime. This is sometimes called
ingestion
and ensures that the data is ready for use in USD-based workflows and applications.
Conceptual data mapping and ETL processes are essential for ensuring that content can be effectively and accurately transferred into the USD ecosystem, maintaining its quality and functionality throughout the exchange. The
OpenUSD Exchange SDK
can help accelerate the development of data exchange solutions and help with authoring high-quality OpenUSD data.
After getting your data into OpenUSD, the next step is understanding how to aggregate and structure it for maximum efficiency.
Learn how to
develop data exchange pipelines
with the free Learn OpenUSD self-paced curriculum.
Asset structure principles and content aggregation
Well-structured assets unlock effective content aggregation and asset reusability—this is where OpenUSD truly excels. Proper asset structuring and content aggregation are key methods for efficiently organizing and combining 3D assets and scenes in complex projects.
Developing an asset structure involves breaking down your 3D content into smaller, manageable components that can be easily referenced and reused. This enables you to create a hierarchical and organized structure that simplifies the management and maintenance of your assets. This approach both improves reusability and performance by enabling you to load only the necessary components when needed.
With a well-designed asset structure, you can enhance collaboration as different workstreams can author data independently in separate layers, which can then be composed into a single asset. This non-destructive editing approach ensures that changes made by one team do not overwrite another team’s work.
After you’ve built your smaller components, use content aggregation to combine them into a single cohesive scene. Composition arcs, such as references and payloads, support this process.
Figure 2.
Digital twin of a liquified natural gas carrier
(credit: HD Hyundai and Siemens)
These techniques help make OpenUSD a powerful tool for managing large-scale 3D projects, facilitating efficient collaboration, asset reuse, and pipeline efficiency, making it an indispensable part of any modern 3D production workflow.
Learn
asset structuring principles and content aggregation
with the free Learn OpenUSD self-paced curriculum.
Composition arcs
Composition arcs are the foundational operators of OpenUSD that combine smaller units of scene description into larger units. This enables robust asset structures and content aggregation in OpenUSD workflows.
This feature is one of the superpowers of OpenUSD, setting it apart from other 3D data formats and frameworks.
Figure 3. Composition arcs enable the creation of complex 3D objects such as autonomous mobile robots
Composition arcs are essential tools for non-destructively combining units of USD data. The composition arcs available in OpenUSD include sublayers, references, payloads, variant sets, inherits, specializes, and the most recent addition, relocates.
Start with the following composition arcs to gain an understanding how you can leverage them in your own workflows:
Sublayers:
Enable multiple collaborators or processes to operate on a given USD context. The data authored in a sublayer is added to a USD context in an “include” fashion.
References:
Most suited for content aggregation and can be used to graft one USD context onto the hierarchy of another USD context. By referencing assets, you can maintain a single source of truth for each component, ensuring consistency and reducing redundancy. This is particularly useful in large-scale projects where the same assets are used repeatedly.
Variant sets:
Used to store multiple variations of a component, enhancing adaptability without duplicating data. This feature enables you to create different versions of an asset, such as different materials or configurations, all within a single, efficient structure. Use cases include specifying different textures and materials or configurations of a model, such as describing a car featuring different paint colors or with different rim styles.
Figure 4. Composition of an OpenUSD scene
Learning and implementing composition arcs will provide immediate value to your USD workflows. However, achieving mastery can take years due to their extensive flexibility and diverse applications. To use OpenUSD effectively, we highly recommend investing significant time in experimentation and refining your understanding of composition arc types.
Learn to
create composition arcs
with the free Learn OpenUSD self-paced training.
Get started
OpenUSD offers a comprehensive set of features for efficient 3D asset management and content aggregation, making it an indispensable framework for large-scale projects. By mastering data exchange, asset structuring, and composition arcs, you can significantly improve collaboration, performance, and reusability in your 3D workflows.
Build on your foundational knowledge with courses on the
Learn OpenUSD learning path
.
Start developing with OpenUSD on the
Omniverse Kit SDK
and generative AI models for OpenUSD development through
USD NIM microservices
.
For more resources on OpenUSD, explore the
Alliance for OpenUSD (AOUSD) forum
and visit the
AOUSD
website. | https://developer.nvidia.com/zh-cn/blog/how-to-use-openusd/ | 如何使用 OpenUSD | 通用场景描述(Universal Scene Description)
是开放、可扩展的框架和生态系统,具有用于在 3D 虚拟世界中合成、编辑、查询、渲染、协作和仿真的 API。
本文介绍了如何使用现有资产和工具立即开始使用 OpenUSD,以及可以采取哪些步骤以迭代方式提升 USD 工作流程的水平。
要以交互方式全面介绍 OpenUSD 的基本概念、词汇和最佳实践,请探索
NVIDIA Deep Learning Institute
提供的
免费 Learn OpenUSD 学习路径
。
数据交换
OpenUSD 不会取代您现有的工具或数据格式。顾名思义,Universal Scene Description 提供了一种常用方法来表示网格和 PBR 材质等许多 3D 概念的数据。它不断发展,以支持其他概念,例如 physics 和 B-reps。
图 1、OpenUSD 日益壮大的 DCC 应用生态系统
要开始将 OpenUSD 与您的现有素材结合使用,请使用导出工具或转换器将您的内容导入 USD。这一方法可确保快速有效地集成到新的 USD 工作流程中。通过这种方式使用 USD,您可以开始增强不同生产阶段的协作、性能和互操作性。
有关入门指南的更多信息,请参阅
支持 USD 交换的应用程序和工具列表
。
当您扩展 USD 工作流并希望将 OpenUSD 与专有数据结合使用时,您可能需要开发自己的数据交换解决方案。
数据交换流程管理从一种数据格式到另一种数据格式的内容提取、转换和提取。有几种主要类型的数据交换解决方案用于将现有内容集成到 USD 工作流中:
导入和导出工具:这些工具可在数字内容创建 (DCC) 应用中找到,使您能够直接向 USD 导入和导出 3D 素材。Rhino 和 Blender 等热门软件有助于导入 3D 素材,而 Houdini 等应用则将 3D 素材作为原生 USD 文件打开。这使得您可以在现有工具中轻松使用 USD。
转换器:
这些是独立的程序或脚本,旨在将不同格式的 3D 文件转换为 USD 格式或从 USD 格式转换为 3D 文件。它们为批量处理和自动化提供了灵活性,可确保您的资产可以轻松转换,而无需人工干预。
USD 文件格式插件
:通过这种类型的 USD 插件,您可以保留原始源格式的数据,同时在打开舞台时动态读取数据并将其转换为 USD。这意味着,您可以使用原生格式处理资产,同时仍能从 USD 的强大功能中受益,同时保持数据完整性和工作流程效率。
概念数据映射以及提取、转换、加载(ETL)是开发数据交换解决方案的关键流程,有助于确保无缝传输和集成 3D 资产和数据。
概念数据映射涉及定义一个系统或格式的数据与另一个系统或格式中的数据对应的方式
。在使用不同的 DCC 应用或文件类型时,这一点尤为重要,因为每个应用或文件类型都可能有自己独特的数据结构和表示形式。
ETL 是一个更全面的过程,用于将数据从一个系统迁移到另一个系统并进行集成。它由以下主要步骤组成:
提取:
从源系统或文件格式中提取数据。数据可以是 3D 模型、纹理、动画或任何其他相关内容。
转换
:转换提取的数据以满足目标运行时需求。这可能包括转换单位、重新格式化数据结构,或过滤掉无关紧要或私有数据。
加载:
将转换后的数据加载到目标运行时。
这有时称为 ingestion
,可确保数据可随时用于基于 USD 的工作流和应用。
概念数据映射和 ETL 流程对于确保内容可以有效准确地传输到 USD 生态系统,并在整个交换过程中保持其质量和功能至关重要。
OpenUSD Exchange SDK
有助于加速数据交换解决方案的开发,并有助于创作高质量的 OpenUSD 数据。
在将数据导入 OpenUSD 后,下一步是了解如何聚合和构建数据,以更大限度地提高效率。
了解如何通过免费的 Learn OpenUSD 自定进度课程
开发数据交换工作流
。
资产结构原则和内容聚合
结构良好的资产可实现有效的内容聚合和资产可重用性,这正是 OpenUSD 真正擅长的地方。在复杂的项目中,正确的素材结构和内容聚合是高效组织和合并 3D 素材和场景的关键方法。
开发素材结构需要将 3D 内容分解为更小、可管理的组件,以便于引用和重复使用。这样,您就可以创建一个层次化且井然有序的结构,简化资产的管理和维护。这种方法允许您在需要时仅加载必要的组件,从而提高可复用性和性能。
借助精心设计的资产结构,您可以增强协作,因为不同的工作流可以在不同的层中独立创作数据,然后将数据组合成单个资产。这种非破坏性编辑方法可确保一个团队所做的更改不会覆盖另一个团队的工作。
构建完较小的组件后,请使用内容聚合功能将其合并为一个连贯一致的场景。合成操作符 (例如 references 和 payloads) 支持此过程。
图 2
、
液化天然气运输船的数字孪生
(图片来源:HD Hyundai 和 Siemens)
这些技术有助于使 OpenUSD 成为管理大型 3D 项目的强大工具,促进高效协作、资产重复使用和工作流效率,使其成为任何现代 3D 生产工作流不可或缺的一部分。
通过免费的 Learn OpenUSD 自定进度课程,学习
资产结构化原则和内容聚合
。
合成操作符
合成操作符是 OpenUSD 的基础运算符,可将场景描述的较小单元组合成较大的单元。这可在 OpenUSD 工作流中实现稳健的资产结构和内容聚合。
此功能是 OpenUSD 的超能力之一,使其有别于其他 3D 数据格式和框架。
图 3、合成操作符支持创建复杂的 3D 对象,例如自动移动机器人
合成操作符是以非破坏性方式组合 USD 数据单位的必要工具。OpenUSD 中提供的合成操作符包括子层、引用、有效载荷、变体集、继承、专门化以及最近添加的重新定位。
从以下合成操作符开始,了解如何在自己的工作流程中利用它们:
子层
:支持多个协作者或进程在给定的 USD 环境中运行。子层中创作的数据以“include”方式添加到 USD 上下文中。
引用:最适合内容聚合,可用于将一个 USD 上下文移植到另一个 USD 上下文的层次结构中。通过引用资产,您可以为每个组件维护单一的事实来源,从而确保一致性并减少冗余。这在重复使用相同资产的大型项目中特别有用。
变体集
:用于存储组件的多个变体,在不复制数据的情况下增强适应性。此功能使您能够在单个高效结构中创建素材的不同版本,例如不同的材质或配置。用例包括指定模型的不同纹理、材质或配置,例如描述具有不同涂料颜色或不同轮缘风格的汽车。
图 4、OpenUSD 场景的合成
学习和实施合成操作符将为您的 USD 工作流提供即时价值。然而,由于其广泛的灵活性和多样化的应用,掌握可能需要数年时间。为了有效使用 OpenUSD,我们强烈建议您投入大量时间进行实验,并深入了解合成弧类型。
通过免费的 Learn OpenUSD 自定进度培训,学习
创建 composition arcs
。
开始使用
OpenUSD 为高效的 3D 资产管理和内容聚合提供了一整套功能,使其成为大型项目不可或缺的框架。通过掌握数据交换、资产结构化和合成操作符,您可以显著改善 3D 工作流中的协作、性能和可复用性。
通过学习 Learn OpenUSD 学习路径上的课程,在基础知识的基础上进行构建。
开始在
Omniverse Kit SDK
上使用 OpenUSD 进行开发,并使用生成式 AI 模型通过
USD NIM 微服务
进行 OpenUSD 开发。
有关 OpenUSD 的更多资源,请访问
Alliance for OpenUSD (
AOUSD
) 论坛
并访问 AOUSD 网站。 |
https://developer.nvidia.com/blog/nvidia-jetpack-6-2-brings-super-mode-to-nvidia-jetson-orin-nano-and-jetson-orin-nx-modules/ | NVIDIA JetPack 6.2 Brings Super Mode to NVIDIA Jetson Orin Nano and Jetson Orin NX Modules | The introduction of the
NVIDIA Jetson Orin Nano Super Developer Kit
sparked a new age of
generative AI
for small edge devices. The new
Super Mode
delivered an unprecedented generative AI performance boost of up to 1.7x on the developer kit, making it the most affordable generative AI supercomputer.
JetPack 6.2 is now available to support Super Mode for
Jetson Orin Nano
and
Jetson Orin NX
production modules, delivering up to 2x higher generative AI model performance. Now you can unlock greater value and lower total cost of ownership for new and existing robotics and edge AI applications.
This post discusses the details of Super Mode, including new power modes, benchmarks for popular generative AI models on the Jetson Orin Nano and Orin NX modules, updates to the documentation, and insights into NPN partners supporting Super Mode.
New reference power modes on the Jetson Orin Nano and Jetson Orin NX series
JetPack 6.2 enables the power boost on the Jetson Orin Nano and Jetson Orin NX series by unlocking higher frequencies on the GPU, DLA memory, and CPU clocks.
MODULE
EXISTING REFERENCE POWER MODES
(available with existing flashing configs)
NEW POWER MODES
(available only with new flashing configs)
NVIDIA Jetson Orin Nano 4GB
7W, 10W
10W, 25W,
MAXN
SUPER
NVIDIA Jetson Orin Nano 8GB
7W, 15W
15W, 25W,
MAXN
SUPER
NVIDIA Jetson Orin NX 8GB
10W, 15W, 20W,MAXN
10W, 15W, 20W, 40W,
MAXN
SUPER
NVIDIA Jetson Orin NX 16GB
10W, 15W, 25W,MAXN
10W, 15W, 25W, 40W,
MAXN
SUPER
Table 1. New reference power modes on the Jetson Orin Nano and Jetson Orin NX modules
Jetson Orin Nano modules now have a 25W mode and a new uncapped MAXN SUPER mode. Similarly, Jetson Orin NX modules can now use a new higher 40W reference power mode as well as an uncapped MAXN SUPER mode.
The MAXN SUPER is an uncapped power mode that enables the highest number of cores and clock frequency for CPU, GPU, DLA, PVA, and SOC engines. If the total module power exceeds the thermal design power (TDP) budget in this mode, the module is throttled to lower frequency, which delivers lower performance while staying within the thermal budget.
We strongly recommend building your own custom power mode to find the right balance between power consumption or thermal stability and performance for your application and needs.
Table 2 compares the detailed specifications of Jetson Orin Nano 4GB and 8GB and Jetson Orin NX 8GB and 16GB in their original and Super Mode.
ORIN NANO 4GB
ORIN NANO 4GB
(SUPER)
ORIN NANO 8GB
ORIN NANO 8GB
(SUPER)
ORIN NX 8GB
ORIN NX 8GB
(SUPER)
ORIN NX 16GB
ORIN NX 16GB
(SUPER)
PEAK AI PERF
INT8
20 TOPS (Sparse)
10 TOPS (Dense)
34 TOPS (Sparse)
17 TOPS (Dense)
40 TOPS (Sparse)
20 TOPS (Dense)
67 TOPS (Sparse)
33 TOPS (Dense)
70 TOPS (Sparse)
35 TOPS (Dense)
117 TOPS (Sparse)
58 TOPS (Dense)
100 TOPS (Sparse)
50 TOPS (Dense)
157 TOPS (Sparse)
78 TOPS (Dense)
NVIDIA AMPERE GPU
512 CUDA Cores
16 Tensor Cores
625 MHz
20/10 INT8 TOPs (S/D)
5 FP16 TFLOPs
512 CUDA Cores
16 Tensor Cores
1020 MHz
34/17 INT8 TOPs (S/D)
8.5 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
625 MHz
40/20 INT8 TOPs (S/D)
10 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
1020 MHz
67/33 INT8 TOPs (S/D)
17 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
765 MHz
50/25 INT8 TOPs (S/D)
13 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
1173 MHz
77/38 INT8 TOPs (S/D)
19 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
918 MHz
60/30 INT8 TOPs (S/D)
15 FP16 TFLOPs
1024 CUDA Cores
32 Tensor Cores
1173 MHz
77/38 INT8 TOPs (S/D)
19 FP16 TFLOPs
CPU
6X A78
1.5 GHz
6X A78
1.7 GHz
6X A78
1.5 GHz
6X A78
1.7 GHz
6X A78
2.0 GHz
6X A78
2.0 GHz
8X A782
.0 GHz
8X A78
2.0 GHz
DLA (S/D)
NA
NA
NA
NA
20/10 INT8 TOPs
40/20 INT8 TOPs
40/20 INT8 TOPs
80/40 INT8 TOPs
DRAM BW
34 GB/s
51 GB/s
68 GB/s
102 GB/s
102 GB/s
102 GB/s
102 GB/s
102 GB/s
REFERENCE MODULE POWER
7W |
10W
7W |
25W
|
MAXN SUPER
7W |
15W
15W
|
25W
|
MAXN SUPER
10W |
15W |
20W | MAXN
10W |
15W |
20W |
40W
|
MAXN SUPER
10W |
15W |
25W | MAXN
10W |
15W |
25W |
40W
|
MAXN SUPER
Table 2. Original specs for Jetson Orin Nano and Jetson Orin NX and the specs in Super Mode
While using the new power modes, ensure that your product’s existing or new thermal design can accommodate the new specifications with the power modes. For more information, see the updated
Thermal Design Guide
.
Updated Power Estimator Tool
The Power Estimator Tool is a powerful tool provided by NVIDIA to create custom power profiles and
nvpmodel
configuration files by modifying system parameters such as the cores, maximum frequency and load levels on the GPU, CPU, DLA, and so on. The tool provides an estimated power consumption with various settings and can be used to create optimal parameters settings to get the desired balance between performance and power consumption.
We have updated the Power Estimator Tool with Super Mode. We strongly recommend that you use the Power Estimator Tool and verify in practice before deploying with high-performance applications.
Boost performance on popular generative AI models
With the introduction of Super Mode with JetPack 6.2, the Jetson Orin Nano and Jetson Orin NX modules deliver up to a 2x inference performance boost. We benchmarked the most popular large language models (LLMs), vision language models (VLMs) and vision transformers (ViTs).
Large language models
The following chart and tables show the Super Mode performance benchmark for popular LLMs such as Llama3.1 8B, Qwen2.5 7B, and Gemma2 2B.
Figure 1. Performance improvements for LLMs using Super Mode
DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.
In the following tables,
LLM generation performance (tokens per second) was measured with INT4 quantization using MLC API.
Table 3 shows the LLM performance gain on Jetson Orin Nano 4GB with JetPack 6.2.
Model
Orin Nano 8GB (original)
Orin Nano 8GB (Super Mode)
Perf Gain (x)
Gemma 2 2B
11.40
18.60
1.64
SmolLM2 1.7B
23.00
35.80
1.56
Table 3. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin Nano 4GB
Table 4 shows the LLM performance gain on Jetson Orin Nano 8GB with JetPack 6.2.
Model
Orin Nano 8GB (original)
Orin Nano 8GB (Super Mode)
Perf Gain (x)
Llama 3.1 8B
14.00
19.10
1.37
Llama 3.2 3B
27.70
43.10
1.55
Qwen 2.5 7B
14.20
21.80
1.53
Gemma 2 2B
21.5
35.0
1.63
Gemma 2 9B
7.20
9.20
1.28
Phi-3.5 3.8B
24.70
38.10
1.54
SmolLM2 1.7B
41.00
64.50
1.57
Table 4. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin Nano 8GB
Table 5 shows the LLM performance gain on Jetson Orin NX 8GB with JetPack 6.2.
Model
Orin NX 8GB (original)
Orin NX 8GB (Super Mode)
Perf Gain (x)
Llama 3.1 8B
15.90
23.10
1.46
Llama 3.2 3B
34.50
46.50
1.35
Qwen 2.5 7B
17.10
23.80
1.39
Gemma 2 2B
26.60
39.30
1.48
Gemma 2 9B
8.80
13.38
1.52
Phi-3.5 3.8B
30.80
41.30
1.34
SmolLM2 1.7B
51.50
69.80
1.35
Table 5. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin NX 8GB
Table 6 shows the LLM performance gain on Jetson Orin NX 16GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
Llama 3.1 8B
20.50
22.80
1.11
Llama 3.2 3B
40.40
45.80
1.13
Qwen 2.5 7B
20.80
23.50
1.13
Gemma 2 2B
31.60
39.00
1.23
Gemma 2 9B
10.56
13.26
1.26
Phi-3.5 3.8B
35.90
40.90
1.14
SmolLM2 1.7B
59.50
68.80
1.16
Table 6. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin NX and Orin Nano modules
Vision language models
The following chart and tables show the Super Mode performance benchmark for popular VLMs such as VILA1.5 8B, LLAVA1.6 7B, and Qwen2 VL 2B.
Figure 2. Performance improvements of VLMs when run using Super Mode
DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.
Table 7 shows the VLM performance gain on Jetson Orin Nano 4GB with JetPack 6.2.
Model
Orin Nano 4GB (original)
Orin Nano 4GB (Super Mode)
Perf Gain (x)
PaliGemma2 3B
7.2
11.2
1.56
Table 7. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin Nano 4GB
Table 8 shows the VLM performance gain on Jetson Orin Nano 8GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
VILA 1.5 3B
0.7
1.1
1.51
VILA 1.5 8B
0.6
0.8
1.45
LLAVA 1.6 7B
0.4
0.6
1.38
Qwen2 VL 2B
2.8
4.4
1.57
InternVL2.5 4B
2.5
5.1
2.04
PaliGemma2 3B
13.7
21.6
1.58
SmolVLM 2B
8.1
12.9
1.59
Table 8. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin Nano 8GB
Table 9 shows the VLM performance gain on Jetson Orin NX 8GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
VILA 1.5 3B
0.8
1
1.25
VILA 1.5 8B
0.7
1.04
1.50
LLAVA 1.6 7B
0.5
1.2
2.54
Qwen2 VL 2B
3.4
4.8
1.41
InternVL2.5 4B
3
4.1
1.37
PaliGemma2 3B
17.1
23.9
1.40
SmolVLM 2B
9.7
14.4
1.48
Table 9. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin NX 16GB
Table 10 shows the VLM performance gain on Jetson Orin NX 16GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
VILA 1.5 3B
1
1.3
1.23
VILA 1.5 8B
0.8
1
1.25
LLAVA 1.6 7B
0.6
0.7
1.07
Qwen2 VL 2B
4
4.8
1.20
InternVL2.5 4B
2.8
4.4
1.57
PaliGemma2 3B
20
23.8
1.19
SmolVLM 2B
11.7
14.3
1.22
Table 10. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin NX and Orin Nano modules
All VILA and LLAVA models were run with INT4 precision using MLC while the rest of the models were run in FP4 precision with Hugging Face Transformers.
Vision transformers
The following chart and tables show the Super Mode performance benchmark for popular ViTs such as CLIP, DINO, and SAM2.
Figure 3. Performance improvements of ViTs when run using Super Mode
DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior.
Table 11 shows the ViT performance gain on Jetson Orin Nano 4GB with JetPack 6.2.
Model
Orin Nano 4GB (original)
Orin Nano 4GB (Super Mode)
Perf Gain (x)
clip-vit-base-patch32
126.8
189.5
1.49
clip-vit-base-patch16
63.2
112.4
1.78
DINOv2-base-patch14
49.3
79.3
1.61
SAM2 base
2.5
3.8
1.54
vit-base-patch16-224
62.4
103.3
1.66
Table 11. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin Nano 4GB
Table 12 shows the ViT performance gain on Jetson Orin Nano 8GB with JetPack 6.2.
Model
Orin Nano 8GB (original)
Orin Nano 8GB (Super Mode)
Perf Gain (x)
clip-vit-base-patch32
196
314
1.60
clip-vit-base-patch16
95
161
1.69
DINOv2-base-patch14
75
126
1.68
SAM2 base
4.4
6.3
1.43
Grounding DINO
4.1
6.2
1.52
vit-base-patch16-224
98
158
1.61
vit-base-patch32-224
171
273
1.60
Table 12. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin Nano 8GB
Table 13 shows the ViT performance gain on Jetson Orin NX 8GB with JetPack 6.2.
Model
Orin NX 8GB (original)
Orin NX 8GB (Super Mode)
Perf Gain (x)
clip-vit-base-patch32
234.0
361.1
1.54
clip-vit-base-patch16
101.7
204.3
2.01
DINOv2-base-patch14
81.4
160.3
1.97
SAM2 base
3.9
7.4
1.92
Grounding DINO
4.2
7.4
1.75
vit-base-patch16-224
98.6
192.5
1.95
vit-base-patch32-224
193.1
313.5
1.62
Table 13. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin NX 8GB
Table 14 shows the ViT performance gain on Jetson Orin NX 16GB with JetPack 6.2.
Model
Orin NX 16GB (original)
Orin NX 16GB (Super Mode)
Perf Gain (x)
clip-vit-base-patch32
323.2
356.7
1.10
clip-vit-base-patch16
163.5
193.6
1.18
DINOv2-base-patch14
127.5
159.8
1.25
SAM2 base
6.2
7.3
1.18
Grounding DINO
6.2
7.2
1.16
vit-base-patch16-224
158.6
190.2
1.20
vit-base-patch32-224
281.2
309.5
1.10
Table 14. Benchmark performance in frames/sec for popular ViTs on Jetson Orin NX 16GB
All ViT models were run with FP16 precision using NVIDIA TensorRT and measurements are in FPS.
Getting started on NVIDIA Jetson Orin Nano and Jetson Orin NX with JetPack 6.2
The NVIDIA Jetson ecosystem provides various ways for you to flash the developer kit and production modules with the JetPack image.
To install JetPack 6.2 on the Jetson Orin Nano Developer Kit or the modules, use one of the following methods:
SD-Card image
from the
JetPack SDK
page.
SDK Manager
for flashing per the
download and install instructions
.
Flashing script
for bootloader, kernel, and root file system, per
Flashing Support
.
New flashing configuration
The new power modes are only available with the new flashing configuration. The default flashing configuration has not changed. To enable the new power modes, you must use the new flashing configuration while flashing.
Here’s the new flashing configuration to be used with flashing:
jetson-orin-nano-devkit-super.conf
After flashing or updating to JetPack 6.2, run the following command to start the newly available Super Mode.
MAXN SUPER mode on Jetson Orin Nano Modules:
sudo nvpmodel -m 2
MAXN SUPER mode on Jetson Orin NX Modules:
sudo nvpmodel -m 0
You can also select the MAXN SUPER and other power modes from the power mode menu at the top-right corner of the page.
Figure 4. Power mode selection menu
s
Jetson AI Lab
The
Jetson AI Lab
is the NVIDIA hub for exploring and experimenting with generative AI technologies optimized for edge devices. It supports developers and provides a collaborative community with nearly 50 tutorials, prebuilt containers, and resources for deploying on-device LLMs, SLMs, VLMs, diffusion policies, and speech models using optimized inferencing infrastructures.
By simplifying access to cutting-edge AI tools, the lab empowers developers of all levels to innovate and deploy generative AI locally, advancing open-source edge AI and robot learning.
Dive into generative AI with ease using these easy-to-follow tutorials for your developer kit powered by JetPack 6.2:
Ollama with Open Web UI
: Introductory LLM application example
NanoOWL
: Interactive demo with cutting-edge ViT model
LeRobot
: Physical AI example with HuggingFace LeRobot
Updated documentation: Datasheets and design guides
With the newer performance boost, the following resources have been updated and can be downloaded from the
Jetson Download Center
:
Jetson Orin NX Module datasheet
Jetson Orin Nano Module datasheet
Jetson Orin NX Series and Jetson Orin Nano Series Design Guide
Thermal Design Guide
Jetson ecosystem partners ready for Super Mode
To support customer deployments, the Jetson ecosystem partners have enhanced their solutions to support this boosted performance.
Category
Jetson ecosystem partners
ISV solutions
DeepEdge
Edge Impulse
RidgeRun
Ultralytics
Hardware system partners
AAEON
Advantech
Aetina
AIMobile
ASUSTek
Axiomtek
Connect Tech
Seeed Studio
Syslogic
Vecow
Yuan High-Tech
Thermal solutions
Advanced Thermal Solutions
Frore Systems
NVIDIA Jetson Orin lifecycle and roadmap
Due to the growing customer demand for Jetson Orin, NVIDIA
recently announced
the extension of the product lifecycle of Jetson Orin through 2032. With this performance boost, the Jetson Orin Nano and Orin NX series are the ideal platforms for both current and future models.
The upcoming
JetPack 5.1.5
will also enable Super Mode for the Jetson Orin NX and Jetson Orin Nano modules. Developers and customers who develop with JetPack 5 will benefit from the performance boost.
Figure 5. JetPack software roadmap
Boost your application performance with JetPack 6.2
JetPack 6.2 is a groundbreaking release. It delivers an astonishing 2x boost in inference performance on existing Jetson modules, without any added cost. This upgrade is a must-have for Jetson developers and customers looking to supercharge their applications. Upgrade to JetPack 6.2 today and unleash the full potential of your Jetson platform.
Stay up to date by subscribing to our
newsletter
, and follow NVIDIA Robotics on
LinkedIn
,
Instagram
,
X
, and
Facebook
. For more information, explore our
documentation
or join the Robotics community on our
developer forums
,
Discord
, and
YouTube
channels. | https://developer.nvidia.com/zh-cn/blog/nvidia-jetpack-6-2-brings-super-mode-to-nvidia-jetson-orin-nano-and-jetson-orin-nx-modules/ | NVIDIA JetPack 6.2 为 NVIDIA Jetson Orin Nano 和 Jetson Orin NX 模块引入超级模式 | NVIDIA Jetson Orin Nano 超级开发者套件
的推出为小型边缘设备开启了
生成式 AI
的新时代。新的
超级模式
在开发者套件上实现了前所未有的生成式 AI 性能提升,最高可达 1.7 倍,使其成为最经济实惠的生成式 AI 超级计算机。
JetPack 6.2 现已支持
Jetson Orin Nano
和
Jetson Orin NX
生产模组的超级模式,可将生成式 AI 模型的性能提升高达 2 倍。现在,您可以为新的和现有的机器人和边缘 AI 应用释放更多价值并降低总体拥有成本。
本文讨论了 Super Mode 的详细信息,包括新的功率模式、Jetson Orin Nano 和 Orin NX 模组上热门生成式 AI 模型的基准测试、文档更新,以及对支持 Super Mode 的 NPN 合作伙伴的见解。
Jetson Orin Nano 和 Jetson Orin NX 系列上的新参考功率模式
JetPack 6.2 通过在 GPU、DLA 内存和 CPU 时钟上解锁更高的频率,提升 Jetson Orin Nano 和 Jetson Orin NX 系列的功耗。
模块
现有参考电源模式
(
适用于现有刷写配置
)
NEW POWER MODES
(仅适用于新的刷写配置)
NVIDIA Jetson Orin Nano 4GB
7 瓦、10 瓦
10 瓦、25 瓦、
最大值
超级 (SUPER)
NVIDIA Jetson Orin Nano 8GB
7 瓦、15 瓦
15 瓦、25 瓦、
最大值
超级 (SUPER)
NVIDIA Jetson Orin NX 8GB
10 瓦、15 瓦、20 瓦、MAXN
10 瓦、15 瓦、20 瓦、40 瓦,
最大值
超级 (SUPER)
NVIDIA Jetson Orin NX 16GB
10 瓦、15 瓦、25 瓦、MAXN
10 瓦、15 瓦、25 瓦、40 瓦,
最大值
超级 (SUPER)
表 1、Jetson Orin Nano 和 Jetson Orin NX 模组上的新参考功率模式
Jetson Orin Nano 模组现提供 25W 模式和全新的无上限 MAXN SUPER 模式。同样,Jetson Orin NX 模组现在可以使用更高的 40W 参考功率模式以及不设上限的 MAXN SUPER 模式。
MAXN SUPER 是不设上限的功率模式,可为 CPU、GPU、DLA、PVA 和 SOC 引擎提供最高数量的核心数和时钟频率。在此模式下,如果模组总功率超过热设计功率(TDP)预算,系统会将模组调低频率,从而在保持热预算的同时降低性能。
我们强烈建议您构建自己的自定义功率模式,以便根据您的应用和需求,在功耗或热稳定性与性能之间找到适当的平衡。
表 2 比较了 Jetson Orin Nano 4GB 和 8GB 以及 Jetson Orin NX 8GB 和 16GB 在原始模式和 Super Mode 下的详细规格。
Orin Nano 4GB*
Orin Nano 4GB
(SUPER)*
Orin Nano 8GB*
Orin Nano 8GB*
(SUPER)*
ORIN NX 8GB*
ORIN NX 8GB*
(SUPER)*
Orin NX 16GB*
Orin NX 16GB*
(SUPER)*
AI 性能峰值
*
INT8
*
20 TOPS (Sparse) 10 TOPS (Dense)
34 TOPS (Sparse)
17 TOPS (Dense)
40 TOPS (Sparse) 20 TOPS (Dense)
67 TOPS (Sparse)
33 TOPS (Dense)
70 TOPS (Sparse) 35 TOPS (Dense)
117 TOPS (Sparse)
58 TOPS (Dense)
100 TOPS (Sparse) 50 TOPS (Dense)
157 TOPS (Sparse)
78 TOPS (Dense)
NVIDIA Ampere GPU
512 个 CUDA Cores 16 个 Tensor Cores 625 MHz 20/10 INT8 TOPs (S/D) 5 FP16 TFLOPs
512 个 CUDA Cores 16 个 Tensor Cores
1020 MHz
34/17 INT8 TOPS (S/D)
8.5 FP16 TFLOPS
1024 个 CUDA Cores 32 个 Tensor Cores 625 MHz 40/20 INT8 TOPs (S/D) 10 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores
1020 MHz
67/33 INT8 TOPs (S/D)
17 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores 765 MHz 50/25 INT8 TOPs (S/D) 13 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores
1173 MHz
77/38 INT8 TOPs (S/D)
19 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores 918 MHz 60/30 INT8 TOPs (S/D) 15 FP16 TFLOPs
1024 个 CUDA Cores 32 个 Tensor Cores
1173 MHz
77/38 INT8 TOPs (S/D)
19 FP16 TFLOPs
CPU
*
6 块 A78+
1.5 GHz*
6 块 A78+
1.7 GHz
*
6 块 A78+
1.5 GHz*
6 块 A78+
1.7 GHz
*
6 块 A78+
2.0 GHz*
6 块 A78+
2.0 GHz*
8 块 A78+2
.0 GHz 以上
8 块 A78+
2.0 GHz*
DLA (S/D)
*
NA*
NA*
NA*
NA*
20/10 INT8 TOPS*
40/20 INT8 TOPS
*
40/20 INT8 TOPS*
80/40 INT8 TOPS
*
DRAM 重制版
*
34 GB/秒*
51 GB/秒
*
68 GB/秒*
102 GB/秒
*
102 GB/秒*
102 GB/秒*
102 GB/秒*
102 GB/秒*
模组功耗
*
7 瓦
10 瓦以上
7 瓦
10 瓦
25 瓦
*
7 瓦
15 瓦以上
7 瓦
15 瓦
25 瓦
*
10 瓦
15 瓦
20 瓦以上
10 瓦
15 瓦
25 瓦
40 瓦
*
10 瓦
15 瓦
25 瓦以上
10 瓦
15 瓦
25 瓦
40 瓦
*
表 2、Jetson Orin Nano 和 Jetson Orin NX 的原始规格以及超级模式下的规格
在使用新的功率模式时,请确保您产品的现有或新的散热设计能够适应新的功率模式规格。有关更多信息,请参阅更新的
Thermal Design Guide
。
更新了功耗估算器工具
功耗估算器工具是 NVIDIA 提供的一款功能强大的工具,可通过修改 GPU、CPU、DLA 上的核心、最大频率和负载级别等系统参数来创建自定义功耗配置文件和
nvpmodel
配置文件。该工具通过各种设置提供估计功耗,并可用于创建最佳参数设置,以在性能和功耗之间实现所需的平衡。
我们更新了 Power Estimator Tool 的超级模式。我们强烈建议您使用 Power Estimator Tool 并在实践中进行验证,然后再通过高性能应用程序进行部署。
提升热门生成式 AI 模型的性能
随着 JetPack 6.2 超级模式的引入,Jetson Orin Nano 和 Jetson Orin NX 模组可将推理性能提升高达 2 倍。我们对热门的大语言模型 (LLMs)、视觉语言模型 (VLMs) 和视觉转换器 (ViTs) 进行了基准测试。
大语言模型
以下图表和表格显示了 Llama3.1 8B、Qwen2.5 7B 和 Gemma2 2B 等热门 LLM 的超模性能基准测试。
图 1、使用 Super Mode 可提升 LLM 的性能
DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。
在下表中,LLM 生成性能(每秒令牌数)是使用 MLC API 通过 INT4 量化测量的。
表 3 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 上的 LLM 性能提升。
模型
Orin Nano 8GB (原始版本)
Orin Nano 8GB (超级模式)
性能增益 (倍)
Gemma 2 2B
11.40
18.60 欧元
1.64
SmolLM2 1.7 B
23.00
35.80
1.56
表 3、对 Jetson Orin Nano 4GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位
表 4 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 上的 LLM 性能提升。
模型
Orin Nano 8GB (原始版本)
Orin Nano 8GB (超级模式)
性能增益 (倍)
Llama 3.1 8B
14.00
19.10
1.37
Llama 3.2 3B
27.70
43.10
1.55
Qwen 2.5 7B
14.20
21.80
1.53
Gemma 2 2B
21.5
35.0
1.63
Gemma 2 9B
7.20
9.20
1.28
Phi-3.5 38 B
24.70
38.10
1.54
SmolLM2 1.7 B
41.00
64.50
1.57
表 4、对 Jetson Orin Nano 8GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位
表 5 显示了搭载 JetPack 6.2 的 Jetson Orin NX 8GB 上的 LLM 性能提升。
模型
Orin NX 8GB (原始版本)
Orin NX 8GB (超级模式)
性能增益 (倍)
Llama 3.1 8B
15.90
23.10
1.46
Llama 3.2 3B
34.50
46.50
1.35
Qwen 2.5 7B
17.10
23.80
1.39
Gemma 2 2B
26.60
39.30
1.48
Gemma 2 9B
8.80
13.38
1.52
Phi-3.5 38 B
30.80
评分:41.30
1.34
SmolLM2 1.7 B
51.50
69.80
1.35
表 5、对 Jetson Orin NX 8GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位
表 6 显示了搭载 JetPack 6.2 的 Jetson Orin NX 16GB 上的 LLM 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
Llama 3.1 8B
20.50
22.80
1.11
Llama 3.2 3B
40.40
45.80
1.13
Qwen 2.5 7B
20.80
23.50
1.13
Gemma 2 2B
31.60 欧元
39.00
1.23
Gemma 2 9B
10.56
13.26
1.26
Phi-3.5 38 B
35.90
40.90
1.14
SmolLM2 1.7 B
59.50
68.80
1.16
表 6、对 Jetson Orin NX 和 Orin Nano 模组上的热门 LLM 进行性能基准测试,以令牌/秒为单位
视觉语言模型
以下图表和表格显示了热门 VLM (例如 VILA1.5 8B、LLAVA1.6 7B 和 Qwen2 VL 2B) 的超模性能基准测试。
图 2、使用 Super Mode 运行时 VLM 的性能提升
DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。
表 7 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 的 VLM 性能提升情况。
模型
Orin Nano 4GB (原始)
Orin Nano 4GB (超级模式)
性能增益 (倍)
PaliGemma2 3B
7.2
11.2
1.56
表 7、对 Jetson Orin Nano 4GB 上的热门 VLM 进行性能基准测试(tokens/sec)
表 8 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 上的 VLM 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
VILA 1.5 3B
0.7
1.1
1.51
VILA 1.5 8B
0.6
0.8
1.45
LLAVA 1.6 7B
0.4
0.6
1.38
Qwen2 VL 2B
2.8
4.4
1.57
InterVL2.5 4B
2.5
5.1
2.04
PaliGemma2 3B
13.7
21.6%
1.58
SmolVLM 2B
8.1
12.9
1.59
表 8、对 Jetson Orin Nano 8GB 上的热门 VLM 进行性能基准测试(tokens/sec)
表 9 显示搭载 JetPack 6.2 的 Jetson Orin NX 8GB 上的 VLM 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
VILA 1.5 3B
0.8
1
1.25
VILA 1.5 8B
0.7
1.04
1.50
LLAVA 1.6 7B
0.5
1.2
2.54
Qwen2 VL 2B
3.4
4.8
1.41
InterVL2.5 4B
3
4.1
1.37
PaliGemma2 3B
17.1
23.9
1.40
SmolVLM 2B
9.7
14.4
1.48
表 9、对 Jetson Orin NX 16GB 上的热门 VLM 进行性能基准测试,以令牌/秒为单位
表 10 显示配备 JetPack 6.2 的 Jetson Orin NX 16GB 上的 VLM 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
VILA 1.5 3B
1
1.3
1.23
VILA 1.5 8B
0.8
1
1.25
LLAVA 1.6 7B
0.6
0.7
1.07
Qwen2 VL 2B
4
4.8
1.20
InterVL2.5 4B
2.8
4.4
1.57
PaliGemma2 3B
20
23.8
1.19
SmolVLM 2B
11.7
14.3
1.22
表 10、对 Jetson Orin NX 和 Orin Nano 模组上的热门 VLM 进行性能基准测试,以令牌/秒为单位
所有 VILA 和 LLAVA 模型均使用 MLC 以 INT4 精度运行,而其余模型则使用 Hugging Face Transformers 以 FP4 精度运行。
视觉 Transformer 模型
以下图表和表格显示了 CLIP、DINO 和 SAM2 等热门 ViTs 的 Super Mode 性能基准测试。
图 3、使用 Super Mode 运行 ViTs 时的性能提升
DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。
表 11 显示搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 的 ViT 性能提升情况。
模型
Orin Nano 4GB (原始)
Orin Nano 4GB (超级模式)
性能增益 (倍)
clip-vit-base-patch32
126.8
189.5
1.49
clip-vit-base-patch16
63.2
112.4
1.78
基于 DINOv2 的补丁 14
49.3%
79.3
1.61
基于 SAM2
2.5
3.8
1.54
Vit-base-patch16-224
62.4%
103.3
1.66
表 11、对 Jetson Orin Nano 4GB 上的热门 ViTs 进行性能基准测试,以令牌/秒为单位
表 12 显示搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 的 ViT 性能提升情况。
模型
Orin Nano 8GB (原始版本)
Orin Nano 8GB (超级模式)
性能增益 (倍)
clip-vit-base-patch32
196
214
1.60
clip-vit-base-patch16
95%
151
1.69
基于 DINOv2 的补丁 14
75
126
1.68
基于 SAM2
4.4
6.3
1.43
支持恐龙
4.1
6.2
1.52
Vit-base-patch16-224
98
158
1.61
Vit-base-patch32-224
172
273
1.60
表 12、对 Jetson Orin Nano 8GB 上的热门 ViTs 进行性能基准测试,以令牌/秒为单位
表 13 显示配备 JetPack 6.2 的 Jetson Orin NX 8GB 上的 ViT 性能提升。
模型
Orin NX 8GB (原始版本)
Orin NX 8GB (超级模式)
性能增益 (倍)
clip-vit-base-patch32
224.0
361.1
1.54
clip-vit-base-patch16
101.7
204.3
2.01
基于 DINOv2 的补丁 14
81.4%
160.3
1.97
基于 SAM2
3.9
7.4
1.92
支持恐龙
4.2
7.4
1.75
Vit-base-patch16-224
98.6%
192.5
1.95
Vit-base-patch32-224
192.1
313.5
1.62
表 13、对 Jetson Orin NX 8GB 上的热门 ViTs 进行性能基准测试,以令牌/秒为单位
表 14 显示配备 JetPack 6.2 的 Jetson Orin NX 16GB 上的 ViT 性能提升。
模型
Orin NX 16GB (原始版本)
Orin NX 16GB (超级模式)
性能增益 (倍)
clip-vit-base-patch32
322.2
356.7
1.10
clip-vit-base-patch16
163.5
192.6
1.18
基于 DINOv2 的补丁 14
127.5
159.8
1.25
基于 SAM2
6.2
7.3
1.18
支持恐龙
6.2
7.2
1.16
Vit-base-patch16-224
158.6
190.2
1.20
Vit-base-patch32-224
271.2
209.5
1.10
表 14、对 Jetson Orin NX 16GB 上的热门 ViT 进行性能基准测试 (帧/秒)
所有 ViT 模型均使用 NVIDIA TensorRT 以 FP16 精度运行,且测量以 FPS 进行。
使用 NVIDIA Jetson Orin Nano 和 NVIDIA Jetson Orin NX 的 JetPack 6.2 入门
NVIDIA Jetson 生态系统提供了多种使用 JetPack 镜像刷写开发者套件和生产模组的方式。
要在 Jetson Orin Nano 开发者套件或模组上安装 JetPack 6.2,请使用以下方法之一:
JetPack SDK
页面中的
SD-Card 镜像
。
SDK Manager
,用于
根据下载和安装说明
进行刷写。
根据
刷写支持
,刷写 bootloader、kernel 和 root file system 的脚本。
新的刷写配置
新的电源模式仅适用于新的刷写配置。默认刷写配置未发生变化。要启用新的电源模式,您必须在刷写时使用新的刷写配置。
以下是用于刷写的新刷写配置:
jetson-orin-nano-devkit-super.conf
刷写或更新到 JetPack 6.2 后,运行以下命令以启动新提供的超级模式。
Jetson Orin Nano 模组上的 MAXN SUPER 模式:
sudo nvpmodel -m 2
Jetson Orin NX 模组上的 MAXN SUPER 模式:
sudo nvpmodel -m 0
您还可以从页面右上角的电源模式菜单中选择 MAXN SUPER 和其他电源模式。
图 4. 电源模式选择菜单
Jetson AI 实验室
Jetson AI 实验室是 NVIDIA 探索和试验针对边缘设备优化的生成式 AI 技术的中心。它为开发者提供支持,并为协作社区提供了近 50 个教程、预构建容器和资源,用于使用优化的推理基础架构部署设备上的 LLM、SLM、VLM、diffusion policies 和 speech models。
通过简化对先进 AI 工具的访问,该实验室使各级别的开发者能够在本地创新和部署生成式 AI,从而推进开源边缘 AI 和机器人学习。
通过这些易于学习的教程,为由 JetPack 6.2 提供支持的开发者套件轻松深入了解生成式 AI:
使用 Open Web UI 的 Ollama
:入门级 LLM 应用示例
NanoOWL:
采用先进 ViT 模型的交互式演示
LeRobot
:Hugging Face LeRobot 的物理 AI 示例
更新文档:Datasheets 和设计指南
借助更新的性能提升,以下资源已更新,可从
Jetson Download Center
下载:
Jetson Orin NX 模组数据表
Jetson Orin Nano 模组数据表
Jetson Orin NX 系列和 Jetson Orin Nano 系列设计指南
散热设计指南
Jetson 生态系统合作伙伴为 Super Mode 做好准备
为支持客户部署,Jetson 生态系统合作伙伴已增强其解决方案,以支持这种提升的性能。
类别
Jetson 生态系统合作伙伴
ISV 解决方案
DeepEdge
Edge Impulse
RidgeRun
Ultralytics
硬件系统合作伙伴
AAEON
Advantech
Aetina
AIMobile
ASUSTek
Axiomtek
Connect Tech
Seeed Studio
Syslogic
Vecow
Yuan High-Tech
散热解决方案
Advanced Thermal Solutions
Frore Systems
NVIDIA Jetson Orin 生命周期和路线图
鉴于客户对 Jetson Orin 的需求日益增长,NVIDIA
最近宣布
将 Jetson Orin 的产品生命周期延长至 2032 年。凭借这种性能提升,Jetson Orin Nano 和 Orin NX 系列成为当前和未来模型的理想平台。
即将推出的
JetPack 5.1.5
还将为 Jetson Orin NX 和 Jetson Orin Nano 模组启用超级模式。使用 JetPack 5 进行开发的开发者和客户将从性能提升中受益。
图 5、JetPack 软件路线图
使用 JetPack 6.2 提升应用性能
JetPack 6.2 是一个开创性版本。它可在现有 Jetson 模组上实现惊人的 2 倍推理性能提升,且无需增加任何成本。对于希望为其应用程序提供强效助力的 Jetson 开发者和客户而言,此次升级必不可少。立即升级到 JetPack 6.2,充分发挥 Jetson 平台的全部潜力。 |
https://developer.nvidia.com/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform/ | Advancing Physical AI with NVIDIA Cosmos World Foundation Model Platform | As robotics and autonomous vehicles advance, accelerating development of
physical AI
—which enables
autonomous machines
to perceive, understand, and perform complex actions in the physical world—has become essential. At the center of these systems are
world foundation models (WFMs)
—AI models that simulate physical states through physics-aware videos, enabling machines to make accurate decisions and interact seamlessly with their surroundings.
NVIDIA Cosmos
is a platform that helps developers build custom world models for physical AI systems at scale. It offers
open world foundation models
and tools for every stage of development, from data curation to training to customization.
Video 1. NVIDIA Cosmos: A World Foundation Model Platform for Physical AI
This post explains Cosmos and its key features that accelerate physical AI development.
Accelerating world model development with NVIDIA Cosmos
Building physical AI is challenging, demanding precise simulations and real-world behavior understanding and prediction. A key tool for overcoming these challenges is a world model, which predicts future environmental states based on past observations and current inputs. These models are invaluable for physical AI builders, enabling them to simulate, train, and refine systems in controlled environments.
However, developing effective world models requires vast amounts of data, computational power, and real-world testing, which can introduce significant safety risks, logistical hurdles, and prohibitive costs. To address these challenges, developers often turn to
synthetic data
generated from
3D simulations
to train models. While synthetic data is a powerful tool, creating it is resource-intensive and may fall short of accurately reflecting real-world physics, particularly in complex or edge-case scenarios.
The
end-to-end NVIDIA Cosmos platform
accelerates world model development for physical AI systems. Built on CUDA, Cosmos combines state-of-the-art world foundation models, video tokenizers, and AI-accelerated data processing pipelines.
Developers can accelerate world model development by fine-tuning Cosmos world foundation models or building new ones from the ground up. In addition to Cosmos world foundation models, the platform also includes:
NVIDIA NeMo Curator
for efficient video data curation
Cosmos Tokenizer
for efficient, compact, and high-fidelity video tokenization
Cosmos world foundation models pretrained for robotics and autonomous driving applications
NVIDIA NeMo Framework
for model training and optimization
Figure 1. NVIDIA Cosmos is a world foundation model development platform with generative models, data curator, tokenizer, and framework to accelerate physical AI development
Pretrained world foundation models for physical AI
Cosmos world foundation models are pretrained large generative AI models trained on 9,000 trillion tokens—including 20 million hours of data from
autonomous driving
,
robotics
, synthetic environments, and other related domains. These models create realistic synthetic videos of environments and interactions, providing a scalable foundation for training complex systems, from simulating
humanoid robots
performing advanced actions to developing end-to-end autonomous driving models.
These models use two architectures: autoregressive and diffusion. Both approaches use the transformer architecture for its scalability and effectiveness in handling complex temporal dependencies.
Autoregressive model
Cosmos autoregressive model is designed for video generation, predicting the next token based on input text and past video frames. It uses a transformer decoder architecture, with key modifications for world model development.
3D RoPE (Rotary Position Embeddings) encodes spatial and temporal dimensions separately, ensuring precise video sequence representation.
Cross-attention layers enable text inputs, providing better control over world generation.
QK-normalization enhances training stability.
Pretraining of this model is progressive, starting with predicting up to 17 future frames from a single input frame, then extending to 34 frames, and eventually up to 121 frames (or 50,000 tokens). Text inputs are introduced to combine descriptions with video frames, and the model is fine-tuned with high-quality data for robust performance. This structured approach enables the model to generate videos of varying lengths and complexities, with or without text inputs.
Figure 2. The Cosmos autoregressive model uses a transformer decoder architecture, with key modifications for world model development
Diffusion models
Diffusion models
are popular for generating images, videos, and audio due to their ability to deconstruct training data and reconstruct it based on user input, producing high-quality, realistic outputs.
Diffusion models operate in two phases:
Forward diffusion process:
Training data is progressively corrupted by adding Gaussian noise over multiple steps, effectively transforming it into pure noise.
Reverse diffusion process:
The model learns to reverse this noise step by step, recovering the original data by denoising the corrupted input.
Once trained, diffusion models generate new data by sampling random Gaussian noise and passing it through the learned denoising process. In addition, Cosmos diffusion models also get several key updates tailored for physical AI development.
3D patchification processes video into smaller patches, simplifying spatio-temporal sequence representation.
Hybrid positional embeddings handle spatial and temporal dimensions, supporting videos with varying resolutions and frame rates.
Cross-attention layers incorporate text inputs, enabling better control over video generation based on descriptions.
Adaptive layer normalization with LoRA reduces model size by 36%, maintaining high performance with fewer resources.
Figure 3. Cosmos diffusion model architecture combines advanced video compression, flexible positional encoding, and text integration to deliver high-quality, physics-aware video generation
Model sizes for varied needs
Developers can choose from the following three model sizes to meet performance, quality, and deployment needs.
Nano:
Optimized for real-time, low-latency inference and edge deployment.
Super:
Designed as performant baseline models.
Ultra:
Focused on maximum quality and fidelity, ideal for distilling custom models.
Strengths and limitations
Cosmos world foundation models generate low-resolution, real-world-accurate synthetic video data, essential for training
robotics
and autonomous vehicle systems. While they lack artistic flair, their outputs closely replicate the physical world, making them ideal for precise object permanence and realistic scenarios in physical AI model training.
Guardrails for safe use of Cosmos world foundation models
AI models need guardrails to ensure reliability by mitigating hallucinations, preventing harmful outputs, safeguarding privacy, and aligning with AI standards for safe and controlled deployment. Cosmos ensures the safe use of its world foundation models through a customizable, two-stage guardrail system aligned with NVIDIA’s commitment to trustworthy AI.
Cosmos Guardrails operates in two stages: Pre-guard and Post-guard.
Pre-guard
This stage involves text prompt-based safety measures using two layers:
Keyword Blocking:
A blocklist checker scans prompts for unsafe keywords, using lemmatization to detect variations and blocking non-English terms or spelling errors.
Aegis Guardrail:
The NVIDIA fine-tuned Aegis AI Content Safety model detects and blocks semantically unsafe prompts, including categories like violence, harassment, and profanity. Unsafe prompts halt video generation and return an error message.
Post-guard
The Post-guard stage ensures the safety of generated videos through:
Video Content Safety Classifier:
A multiclass classifier evaluates every video frame for safety. If any frame is flagged as unsafe, the entire video is rejected.
Face Blur Filter:
All human faces in generated videos are blurred using the RetinaFace model to protect privacy and reduce biases based on age, gender, or race.
NVIDIA experts rigorously test with adversarial examples, annotating over 10,000 prompt-video pairs to refine the system and address edge cases.
Evaluating Cosmos world foundation models for 3D consistency and physics alignment
Cosmos benchmarks play a crucial role in assessing the ability of world foundation models to simulate real-world physics accurately and efficiently for physical AI applications. While publicly available benchmarks for video generation focus on fidelity, temporal consistency, and speed of generated videos, Cosmos benchmarks add new dimensions to evaluate generalist models: 3D consistency and physics alignment, ensuring the videos are evaluated based on accuracy required for physical AI systems.
3D consistency
Cosmos models were tested for 3D consistency on static scenes from a curated subset of 500 videos from an
open dataset
. Text prompts describing the videos were generated to avoid motion-related complexities. Comparisons were made against
VideoLDM
, a baseline generative model.
Metrics used
Geometric Consistency:
Assessed through epipolar geometry constraints using metrics like Sampson error and camera pose estimation success rate.
View Synthesis Consistency:
Evaluated through metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). These metrics measure the quality of synthesized views from interpolated camera positions.
Lower Sampson error and higher success rates indicate better 3D alignment. Similarly, higher PSNR and SSIM and lower LPIPS are indicators of a better quality.
Model
Sampson Error ↓
Pose Estimation Success Rate (%) ↑
PSNR ↑
SSIM ↑
LPIPS ↓
VideoLDM
0.841
4.40%
26.23
0.783
0.135
Cosmos 1.0 Diffusion Text2World 7B
0.355
62.60%
33.02
0.939
0.070
Cosmos 1.0 Diffusion Video2World 7B
0.473
68.40%
30.66
0.929
0.085
Cosmos 1.0 Autoregressive 4B
0.433
35.60%
32.56
0.933
0.090
Cosmos 1.0 Autoregressive Video2World 5B
0.392
27.00%
32.18
0.931
0.090
Real videos (reference)
0.431
56.40%
35.38
0.962
0.054
Table 1. Evaluation of 3D consistency for Cosmos world foundation models versus base VideoLDM model
Results
Cosmos world foundation models outperform the baseline in 3D consistency (table 1), with higher geometric alignment and camera pose success rates. Their synthesized views match real-world quality, confirming their effectiveness as world simulators.
Physical alignment
Physics alignment tests how well Cosmos models simulate real-world physics, including motion, gravity, and energy dynamics. Using
NVIDIA PhysX
and
NVIDIA Isaac Sim
, eight controlled scenarios were designed to evaluate properties like gravity, collision, torque, and inertia in virtual environments.
Metrics used
Pixel-Level Metrics:
Peak Signal-to-Noise Ratio (PSNR) measures how closely the pixel values of the model’s output match the reference video. Higher values indicate less noise and better accuracy. Structural Similarity Index Measure (SSIM) assesses the similarity in structure, luminance, and contrast between the generated and ground-truth frames. Higher SSIM values reflect greater visual fidelity.
Feature-Level Metric:
DreamSim measures the similarity between high-level features extracted from both videos. This approach evaluates the semantic consistency of the generated content, focusing on objects and motion rather than individual pixels.
Object-Level Metric:
Intersection-over-Union (IoU) calculates the overlap between the predicted and actual object regions in the video. This is especially useful for tracking specific objects through the simulation to ensure their behavior aligns with physical expectations.
Higher PSNR, SSIM, DreamSim and IoU are indicators of better physical alignment.
Model
Conditioning
PSNR ↑
SSIM ↑
DreamSim ↑
Avg. IoU ↑
Cosmos 1.0 Diffusion Video2World 7B
prompt + 1 frame
17.34
0.54
0.84
0.332
Cosmos 1.0 Diffusion Video2World 7B
prompt + 9 frames
21.06
0.69
0.86
0.592
Cosmos 1.0 Diffusion Video2World 14B
prompt + 1 frame
16.81
0.52
0.84
0.338
Cosmos 1.0 Diffusion Video2World 14B
prompt + 9 frames
20.21
0.64
0.86
0.598
Cosmos 1.0 Autoregressive 4B
1 frame
17.91
0.49
0.83
0.394
Cosmos 1.0 Autoregressive 4B
9 frames
18.13
0.48
0.86
0.481
Cosmos 1.0 Autoregressive Video2World 5B
prompt + 1 frame
17.67
0.48
0.82
0.376
Cosmos 1.0 Autoregressive Video2World 5B
prompt + 9 frames
18.29
0.48
0.86
0.481
Cosmos 1.0 Autoregressive Video2World 12B
1 frame
17.94
0.49
0.83
0.395
Cosmos 1.0 Autoregressive Video2World 12B
9 frames
18.22
0.49
0.87
0.487
Cosmos 1.0 Autoregressive Video2World 13B
prompt + 1 frame
18
0.49
0.83
0.397
Cosmos 1.0 Autoregressive Video2World 13B
prompt + 9 frames
18.26
0.48
0.87
0.482
Table 2. Physics alignment results with metrics calculated over 33 frames, the maximum length supported by the autoregressive variants of the Cosmos world foundation models
Results
Cosmos world foundation models show strong adherence to physical laws (Table 2), particularly with increased conditioning data. Post-training on camera conditioning dataset achieves a twofold increase in pose estimation success rate compared to baseline models. However, challenges like object impermanence (where objects vanish or appear unexpectedly) and implausible behaviors (such as violating gravity) highlight areas for improvement.
Customizing for physical AI applications with Cosmos and NVIDIA Omniverse
Video search and understanding:
Simplifies video tagging and search by understanding spatial and temporal patterns, making training data preparation easier.
Controllable 3D-to-real synthetic data generation:
With
NVIDIA Omniverse
, developers can create 3D scenarios and use Cosmos to generate photorealistic videos that are precisely controlled by 3D scenes for highly tailored synthetic datasets.
Policy model development and evaluation:
World foundation models fine-tuned for action-conditioned video prediction enable scalable, reproducible evaluation of policy models—strategies mapping states to actions—reducing reliance on risky real-world tests or complex simulations for tasks like obstacle navigation or object manipulation.
Foresight for action selection:
Cosmos equips physical AI models with predictive capabilities to assess the outcomes of potential actions.
Multiverse simulation:
Using Cosmos and NVIDIA Omniverse, developers can simulate multiple future outcomes to help AI models evaluate and select the best strategy for achieving its goals, benefiting applications like predictive maintenance and autonomous decision-making.
From generalist to customized specialist models
Cosmos introduces a two-stage approach to world model training.
Generalist models:
Cosmos world foundation models are built as generalists, trained on extensive datasets that encompass diverse real-world physics and environments. These open models are capable of handling a broad range of scenarios, from natural dynamics to robotic interactions, providing a solid foundation for any physical AI task.
Specialist models:
Developers can fine-tune generalist models using smaller, targeted datasets to create specialists tailored for specific applications, such as autonomous driving or humanoid robotics or they can generate customized synthetic scenarios, such as night scenes with emergency vehicles or high-fidelity industrial robotics environments. This fine-tuning process significantly reduces the required data and training time compared to training models from scratch.
Cosmos accelerates training and fine-tuning with efficient video processing pipelines, highly performant tokenizer, and advanced training frameworks, enabling developers to address operational needs and edge cases for advancing physical AI.
Accelerated data processing with NVIDIA NeMo Curator
Training models require curated, high-quality data, which is time and resource-intensive. NVIDIA Cosmos includes a data processing and curation pipeline powered by NVIDIA NeMo Curator and optimized for NVIDIA data center GPUs.
NVIDIA NeMo Curator enables robotics and AV developers to process vast datasets efficiently. For example, 20 million hours of video can be processed in 40 days on NVIDIA Hopper GPUs, or just 14 days on NVIDIA Blackwell GPUs—compared to 3.4 years on unoptimized CPU pipelines.
Key benefits include:
89x faster curation:
Dramatically reduces processing time
Scalability:
Handles 100+ PB of data seamlessly
High throughput:
Advanced filtering, captioning, and embedding ensure quality without sacrificing speed
Figure 4. Cosmos includes NeMo Curator that delivers 89x faster video data processing
High-fidelity compression and reconstruction with Cosmos Tokenizer
After data is curated, it must be tokenized for training. Tokenization breaks down complex data into manageable units, enabling models to process and learn from it more efficiently.
Cosmos tokenizers simplify this process with faster compression and visual reconstruction while preserving quality, reducing costs and complexity. For autoregressive models, the discrete tokenizer compresses data 8x in time and 16×16 in space, processing up to 49 frames at once. For diffusion models, the continuous tokenizer achieves 8x time and 8×8 space compression, handling up to 121 frames.
Fine-tuning with NVIDIA NeMo
Developers can fine-tune Cosmos world foundation models using the
NVIDIA NeMo
Framework. NeMo Framework accelerates model training on GPU-powered systems, whether enhancing an existing model or building a new one, from on-premises data centers to the cloud.
NeMo Framework efficiently loads multimodal data by:
Sharding terabyte size dataset into compressed files to reduce IO overhead.
Deterministically saving and loading datasets to avoid repetition and minimize compute waste.
Reducing network bandwidth when exchanging data using optimized communications.
Get started with NVIDIA Cosmos
Cosmos world foundation models are open and available on
NGC
and
Hugging Face
. Developers can also run Cosmos world foundation models on the
NVIDIA API catalog
. Also available on the
API catalog
are Cosmos tools to enhance text prompts for accuracy, an inbuilt watermarking system that enables easy future identification of AI-generated sequences, and a specialized model to decode video sequences for augmented reality applications. To learn more,
watch the demo
.
NeMo Curator
for accelerated data processing pipelines is available as a managed service and SDK. Developers can now
apply for early access
. Cosmos tokenizers are open neural networks available on
GitHub
and
Hugging Face
.
Get started with NVIDIA Cosmos
. | https://developer.nvidia.com/zh-cn/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform/ | NVIDIA Cosmos World 基础模型平台助力物理 AI 进步 | 随着机器人和
自动驾驶汽车
的发展,加速
物理 AI
的发展变得至关重要,而物理 AI 使自主机器能够感知、理解并在现实世界中执行复杂的操作。这些系统的核心是
世界基础模型 (WFMs)
,即通过物理感知视频模拟物理状态的 AI 模型,使机器能够做出准确决策并与周围环境无缝交互。
NVIDIA Cosmos
平台可帮助开发者大规模为物理 AI 系统构建自定义世界模型。它为从数据管护、训练到定制的每个开发阶段提供开放世界基础模型和工具。
视频 1、NVIDIA Cosmos:适用于物理 AI 的世界基础模型平台
本文将介绍 Cosmos 及其加速物理 AI 开发的主要功能。Cosmos 是一种开源的 Python 库,用于加速物理 AI 开发。
借助 NVIDIA Cosmos 加速世界模型开发
构建物理 AI 极具挑战性,需要精确的模拟以及真实世界的行为理解和预测。克服这些挑战的一个关键工具是世界模型,该模型根据过去的观察结果和当前的输入结果预测未来的环境状态。这些模型对于物理 AI 构建者来说非常重要,使他们能够在受控环境中模拟、训练和优化系统。
然而,开发有效的世界模型需要大量数据、计算能力和真实世界的测试,这可能会带来重大的安全风险、物流障碍和高昂的成本。为了应对这些挑战,开发者通常会使用通过 3D 仿真生成的
合成数据
来训练模型。虽然合成数据是一种功能强大的工具,但创建合成数据需要大量资源,可能无法准确反映现实世界的物理特性,在复杂或边缘情况下尤其如此。
端到端
NVIDIA Cosmos 平台
可加速物理 AI 系统的世界模型开发。Cosmos 基于 CUDA 构建,结合了先进的世界基础模型、视频标记器和 AI 加速的数据处理流程。
开发者可以通过微调 Cosmos 世界基础模型或从头开始构建新模型来加速世界模型的开发。除了 Cosmos 世界基础模型之外,该平台还包括:
用于高效视频数据管护的
NVIDIA NeMo Curator
Cosmos Tokenizer
可实现高效、紧凑和高保真的视频标记化
为机器人和自动驾驶应用预训练的 Cosmos World Foundation 模型
用于模型训练和优化的
NVIDIA NeMo 框架
图 1、NVIDIA Cosmos 是一个世界性的基础模型开发平台,包含生成式模型、数据管护器、标记器和框架,可加速物理 AI 开发
适用于物理 AI 的
预训练世界基础模型
Cosmos 世界基础模型是预训练的大型生成式 AI 模型,使用 9000 万亿个令牌进行训练,其中包括来自
自动驾驶
、
机器人
开发、合成环境和其他相关领域的 2000 万个小时的数据。这些模型可创建有关环境和交互的逼真合成视频,为训练复杂系统 (从模拟执行高级动作的类
人型机器人
到开发端到端自动驾驶模型) 提供可扩展的基础。
这些模型使用两种架构:自回归和扩散。这两种方法都使用 Transformer 架构,因为其可扩展性和有效性可用于处理复杂的时间依赖项。
自回归模型
Cosmos 自回归模型专为视频生成而设计,可根据输入文本和过去的视频帧预测下一个令牌。它使用 Transformer 解码器架构,并针对世界模型开发进行了关键修改。
3D RoPE (Rotary Position Embeddings) 可分别对空间和时间维度进行编码,确保精确的视频序列表示。
交叉注意力层支持文本输入,从而更好地控制世界生成。
QK 标准化可增强训练稳定性。
此模型的预训练是渐进式的,首先从单个输入帧预测多达 17 帧的未来帧,然后扩展到 34 帧,最终达到 121 帧 (或 50,000 个 tokens)。引入文本输入以将描述与视频帧相结合,并使用高质量数据对模型进行微调,以实现强大的性能。这种结构化方法使模型能够生成长度和复杂性各不相同的视频,无论是否输入文本。
图 2、Cosmos 自回归模型使用 Transformer 解码器架构,并对世界模型开发进行了关键修改
扩散模型
Diffusion 模型在生成图像、视频和音频方面很受欢迎,因为它们能够解构训练数据并根据用户输入进行重建,从而生成高质量、逼真的输出。
扩散模型分为两个阶段:
前向扩散过程:
通过在多个步骤中添加高斯噪声,有效地将其转换为纯噪声,训练数据会逐渐损坏。
反向扩散过程:
模型会逐步学习反向噪声,通过对损坏的输入降噪来恢复原始数据,例如使用 Stable Diffusion 模型。
经过训练后,扩散模型通过对随机高斯噪声进行采样并将其传递给学习的降噪过程来生成新数据。此外,Cosmos 扩散模型还获得了一些专为物理 AI 开发打造的关键更新。
3D Patchification 将视频处理为更小的分块,从而简化时空序列表示。
混合位置嵌入可处理空间和时间维度,支持具有不同分辨率和帧率的视频。
交叉注意力层整合文本输入,从而更好地控制基于描述生成的视频。
采用 LoRA 的自适应层归一化可将模型大小减少 36%,从而以更少的资源保持高性能。
图 3、Cosmos 扩散模型架构将先进的视频压缩、灵活的位置编码和文本集成相结合,可生成具有物理感知的高质量视频
满足不同需求的模型大小
开发者可以从以下三种模型大小中进行选择,以满足性能、质量和部署需求。
Nano:
针对实时、低延迟推理和边缘部署进行优化。
Super:
设计为高性能基准模型。
Ultra:专注于最大化质量和保真度,适合提取自定义模型。
优点和局限性
Cosmos 世界基础模型生成低分辨率、真实世界准确的合成视频数据,这对于训练
机器人
和自动驾驶汽车系统至关重要。虽然它们缺乏艺术天赋,但其输出结果密切复制了物理世界,因此非常适合在物理 AI 模型训练中实现精确的物体持久性和逼真的场景。
用于安全使用 Cosmos World 基础模型的护栏
AI 模型需要护栏来确保可靠性,具体方法包括减少幻影、防止有害输出、保护隐私,以及符合 AI 标准以实现安全可控的部署。Cosmos 通过可定制的双阶段护栏系统确保其 World Foundation 模型的安全使用,该系统符合 NVIDIA 对值得信赖的 AI 的承诺。
Cosmos Guardrails 分为两个阶段:Pre-guard 和 Post-guard。
预防护
此阶段涉及基于文本提示的安全措施,使用两层:
关键字屏蔽:屏蔽列表检查器会扫描不安全关键字的提示,使用引言化来检测变异,并阻止非英语术语或拼写错误。
Aegis Guardrail:
经过 NVIDIA 微调的 Aegis AI 内容安全模型可以检测并阻止语义上不安全的提示,包括暴力、骚扰和冒犯等类别。不安全提示停止视频生成并返回错误消息。
护卫队队员
Post-guard Stage 通过以下方式确保生成视频的安全性:
视频内容安全分类器:
多类别分类器评估每个视频帧的安全性。如果任何帧被标记为不安全,则整个视频将被拒绝。
面部模糊滤镜:
使用 RetinaFace 模型对生成的视频中的所有人脸进行模糊处理,以保护隐私并减少基于年龄、性别或种族的偏见。
NVIDIA 专家使用对抗示例进行严格测试,标注超过 10,000 个提示视频对,以优化系统并解决边缘案例。
评估 Cosmos 世界基础模型,以实现 3D 一致性和物理对齐
Cosmos 基准测试在评估世界基础模型为物理 AI 应用准确高效地模拟真实物理的能力方面发挥着至关重要的作用。虽然公开可用的视频生成基准测试侧重于保真度、时间一致性和生成视频的速度,但 Cosmos 基准测试增加了新的维度来评估通用模型:3D 一致性和物理对齐,确保根据物理 AI 系统所需的准确性评估视频。
3D 一致性
Cosmos 模型在
开放数据集
中的 500 个精选视频子集中进行了静态场景的 3D 一致性测试。生成描述视频的文本提示是为了避免与动作相关的复杂性。并与基准生成模型
VideoLDM
进行了比较。
所用指标
几何一致性:使用 Sampson 误差和摄像头姿态估计成功率等指标通过外极性几何约束条件进行评估。
查看合成一致性:
通过峰值信噪比 (Peak Signal-to-Noise Ratio, PSNR)、结构相似性指数 (Structural Similarity Index, SSIM) 和学习感知图像块相似性 (Learned Perceptual Image Patch Similarity, LPIPS) 等指标进行评估。这些指标用于测量从插入的摄像头位置合成视图的质量。
更低的 Sampson 误差和更高的成功率表明 3D 对齐效果更好。同样,较高的 PSNR 和 SSIM 以及较低的 LPIPS 都表示质量有所提高。
模型
Sampson 错误*
姿态估计成功率 (%)*
PSNR*
SSIM
LPIPS
VideoLDM
0.841
4.40%
26.23
0.783
0.135
Cosmos 1.0 Diffusion Text2World 7B
0.355
62.60%
33.02
0.939
0.070
Cosmos 1.0 Diffusion Video2World 7B
0.473
68.40%
30.66
0.929
0.085
Cosmos 1.0 Autoregressive 4B
0.433
35.60%
32.56
0.933
0.090
Cosmos 1.0 Autoregressive Video2World 5B
0.392
27.00%
32.18
0.931
0.090
真实视频 (参考)
0.431
56.40%
35.38
0.962
0.054
表 1、评估 Cosmos 世界基础模型与基础 VideoLDM 模型的 3D 一致性
成果
Cosmos 世界基础模型在 3D 一致性方面的表现优于基准 (表 1),具有更高的几何对齐和摄像头姿态成功率。其合成视图与真实世界的质量相匹配,证实了其作为世界模拟器的有效性。
物理对齐
物理对齐测试 Cosmos 模型在模拟真实物理 (包括运动、重力和能量动力学) 时的效果。借助
NVIDIA PhysX
和
NVIDIA Isaac Sim
,设计出八个受控场景,用于评估虚拟环境中的重力、碰撞、扭矩和惯性等属性。
所用指标
像素级指标
:峰值信噪比(Peak Signal-to-Noise Ratio,PSNR)可测量模型输出的像素值与参考视频的匹配程度。值越高表示噪点越小,准确性越高。结构相似性指数指标(Structural Similarity Index Measure,SSIM)用于评估生成的帧与真实帧之间在结构、亮度和对比度方面的相似性。较高的 SSIM 值反映了更高的视觉保真度。
特征级别指标
:DreamSim 测量从两个视频中提取的高级特征之间的相似性。这种方法评估生成内容的语义一致性,重点关注物体和运动,而不是单个像素。
目标级别指标:
交集并集比 (Intersection-over-Union,IoU) 用于计算视频中预测的目标区域与实际目标区域之间的重叠。这对于在整个模拟过程中追踪特定物体特别有用,可确保其行为符合物理预期。
PSNR、SSIM、DreamSim 和 IoU 越高,表示物理对齐越好。
模型
调节
PSNR*
SSIM
DreamSim™
平均值。IoU*
Cosmos 1.0 Diffusion Video2World 7B
提示符 = 1 帧
17.34
0.54
0.84
0.332
Cosmos 1.0 Diffusion Video2World 7B
提示符 = 9 帧
21.06
0.69
0.86
0.592
Cosmos 1.0 Diffusion Video2World 14B
提示符 = 1 帧
16.81
0.52
0.84
0.338
Cosmos 1.0 Diffusion Video2World 14B
提示符 = 9 帧
20.21
0.64
0.86
0.598
Cosmos 1.0 Autoregressive 4B
1 帧
17.91
0.49
0.83
0.394
Cosmos 1.0 Autoregressive 4B
9 帧
18.13
0.48
0.86
0.481
Cosmos 1.0 Autoregressive Video2World 5B
提示符 = 1 帧
17.67
0.48
0.82
0.376
Cosmos 1.0 Autoregressive Video2World 5B
提示符 = 9 帧
18.29
0.48
0.86
0.481
Cosmos 1.0 Autoregressive Video2World 12B
1 帧
17.94
0.49
0.83
0.395
Cosmos 1.0 Autoregressive Video2World 12B
9 帧
18.22
0.49
0.87
0.487
Cosmos 1.0 Autoregressive Video2World 13B
提示符 = 1 帧
18
0.49
0.83
0.397
Cosmos 1.0 Autoregressive Video2World 13B
提示符 = 9 帧
18.26
0.48
0.87
0.482
表 2、物理对齐结果,计算指标超过 33 帧 (Cosmos 世界基础模型的自回归变体支持的最大长度)
成果
Cosmos 世界基础模型非常遵守物理定律 (表 2),尤其是在条件数据增加的情况下。与基线模型相比,在摄像头调节数据集上进行后训练后,姿态估计的成功率提高了两倍。然而,物体无常 (物体意外消失或出现) 和难以置信的行为 (如违反重力) 等挑战凸显了需要改进的领域。
使用 Cosmos 和 NVIDIA Omniverse 定制物理 AI 应用
视频搜索和理解:
通过了解空间和时间模式来简化视频标记和搜索,从而更轻松地准备训练数据
可控的 3D 到真实合成数据生成:
借助
NVIDIA Omniverse
,开发者可以创建 3D 场景,并使用 Cosmos 生成逼真的视频,这些视频由 3D 场景精确控制,用于高度定制的合成数据集。
策略模型开发和评估:
World foundation models 针对动作条件视频预测进行了微调,可实现策略模型的可扩展、可再现评估 (将状态映射到动作的策略),从而减少对障碍物导航或对象操作等任务进行风险真实测试或复杂模拟的依赖。
行动选择前瞻:
Cosmos 为物理 AI 模型配备预测功能,以评估潜在行动的结果。
Multiverse 模拟:
借助 Cosmos 和 NVIDIA Omniverse,开发者可以模拟多种未来结果,帮助 AI 模型评估和选择实现目标的最佳策略,从而使预测性维护和自主决策等应用受益。
从通用模型到定制的专业模型
Cosmos 为世界模型训练引入了一种分为两个阶段的方法。
多面手模型:Cosmos 世界基础模型以多面手的身份构建,基于涵盖各种真实物理和环境的大量数据集进行训练。这些开放模型能够处理从自然动力学到机器人交互的各种场景,为任何物理 AI 任务提供坚实的基础。
专业模型:开发者可以使用更小的定向数据集微调通用模型,为特定应用(如自动驾驶或人形机器人)打造量身定制的专家模型,也可以生成定制的合成场景,如夜间场景中的紧急车辆或高保真工业机器人环境。与从头开始训练模型相比,这种微调过程可显著减少所需的数据和训练时间。
Cosmos 通过高效的视频处理流程、高性能的 tokenizer 和高级训练框架加速训练和微调,使开发者能够满足运营需求和边缘案例,推动物理 AI 的发展。
借助 NVIDIA NeMo Curator 加速数据处理
训练模型需要精心挑选的高质量数据,而且需要耗费大量时间和资源。NVIDIA Cosmos 包含由 NVIDIA NeMo Curator 提供支持并针对 NVIDIA 数据中心 GPU 进行优化的数据处理和管护流程。
借助 NVIDIA NeMo Curator,机器人和自动驾驶(AV)开发者能够高效处理庞大的数据集。例如,20 million 小时的视频可以在 40 天内在 NVIDIA Hopper GPU 上处理完毕,或仅在 14 天内在 NVIDIA Blackwell GPU 上处理完毕——相比之下,未经优化的 CPU 流水线需要 3.4 年。
主要优势包括:
PyTorch、pandas、LangChain、Megatron、NVIDIA、cuOpt、Stable Diffusion、Llama、Jetson、Google、Python、Hugging Face、Arm、Anaconda、Siemens、DPU、GPU 和 Github 等技术的
89
倍管护速度:显著缩短处理时间
可扩展性:
无缝处理 100 多个 PB 的数据
高吞吐量:高级过滤、字幕和嵌入可在不牺牲速度的情况下确保质量
图 4、Cosmos 包含 NeMo Curator,可将视频数据处理速度提高 89 倍
使用 Cosmos Tokenizer 实现高保真压缩和重建
整理数据后,必须对其进行标记化以进行训练。标记化将复杂的数据分解为可管理的单元,使模型能够更高效地处理数据并从中学习。
Cosmos 标记器可简化此过程,加快压缩和视觉重建速度,同时保持质量,降低成本和复杂性。对于自回归模型,离散分词器可将数据压缩为时间的 8 倍,在空间中压缩为 16 × 16,一次最多处理 49 帧。对于扩散模型,连续分词器可实现 8 倍的时间和 8 × 8 的空间压缩,最多可处理 121 帧。
使用 NVIDIA NeMo 进行微调
开发者可以使用
NVIDIA NeMo
框架微调 Cosmos 世界基础模型。NeMo 框架可以在 GPU 驱动的系统上加速模型训练,无论是增强现有模型还是构建新模型,从本地数据中心到云端。
NeMo 框架通过以下方式高效加载多模态数据:
将 TB 大小的数据集分片到压缩文件中,以减少 IO 开销。
决定性地保存和加载数据集,以避免重复并尽可能减少计算浪费。
在使用优化的通信交换数据时降低网络带宽。
开始使用 NVIDIA Cosmos
Cosmos World Foundation 模型现已开放,可在
NGC
和
Hugging Face
上获取。开发者还可以在
NVIDIA
API Catalog
上运行 Cosmos World Foundation 模型。API Catalog 上还提供了用于增强文本提示以提高准确性的 Cosmos 工具、支持未来轻松识别 AI 生成序列的内置水印系统,以及用于解码视频序列以用于增强现实应用的专用模型。如需了解详情,请观看演示。
NeMo Curator
for accelerated data processing pipelines is available as a managed service and SDK. Developers can now
apply for early access
. Cosmos tokenizers are open neural networks available on
GitHub
and
Hugging Face
.
开始使用 NVIDIA Cosmos
。 |
https://developer.nvidia.com/blog/accelerate-custom-video-foundation-model-pipelines-with-new-nvidia-nemo-framework-capabilities/ | Accelerate Custom Video Foundation Model Pipelines with New NVIDIA NeMo Framework Capabilities | Generative AI
has evolved from text-based models to multimodal models, with a recent expansion into video, opening up new potential uses across various industries. Video models can create new experiences for users or simulate scenarios for training autonomous agents at scale. They are helping revolutionize various industries including robotics, autonomous vehicles, and entertainment.
The development of video
foundation models
presents unique challenges due to the vast and varied nature of video data. This also underscores the necessity of scalable pipelines for curating data and effectively training models that can comprehend temporal and spatial dynamics.
We are announcing brand new video foundation model capabilities in the
NVIDIA NeMo framework
, an end-to-end training framework that enables you to pretrain and fine-tune your own video foundation models. The framework includes a high-throughput data curation, efficient multimodal data loading functionality, scalable model training, and a parallelized in-framework inference.
Video 1. NVIDIA Cosmos is a world foundation model platform that accelerates the development of physical AI applications like humanoid robots and autonomous vehicles.
High-throughput video curation through optimized pipelines
NeMo Curator
improves generative AI model accuracy by efficiently processing and preparing high-quality data, including large video datasets.
Using NeMo Curator’s scalable data pipelines, you can efficiently clip, annotate, and filter 100 PB or more of videos. To remove bottlenecks and optimize performance, NeMo Curator uses the following combination:
NVDEC
: Hardware decoder
NVENC
: Hardware encoder
Ray
: Compute framework for scaling AI applications
The NeMo Curator autobalancing techniques can leverage heterogeneous clusters with multiple GPU types to take advantage of NVENC on L40S GPUs and the performance of H100 and GB200 GPUs.
Figure 1 shows how NeMo Curator can process 20M hours of video data, reducing the processing time from years to days, achieving 89x speed up using 1K GPUs compared to unoptimized pipelines on CPUs for ISO power usage.
Figure 1. NeMo Curator delivers 89x faster video data processing
NeMo Curator provides the following relevant pipelines for video foundation model training and fine-tuning datasets:
Clipping
Sharding
The clipping pipeline starts with decoding and splitting raw videos into short, continuous clips by analyzing frame-to-frame color changes. The stitching stage smooths the clips out by using image embedding similarities to potentially merge adjacent clips together. These clips are then transcoded to the high-quality video encoding (H264), and they are annotated with video embeddings and captions, either existing or synthetically generated by a VLM, to facilitate semantic search capabilities.
Figure 2. Video curation clipping and sharding pipelines
Sharding generates text embeddings for captions to create the final WebDataset used for training. NeMo Curator also uses Ray streaming to build an auto-balancing system and deploy an optimal number of workers for each stage in the pipeline to avoid being bottlenecked by any stage (Figure 3).
Figure 3. Auto-balancing system to match the throughput of the overall pipeline
Efficient multimodal dataloading
Video models can be trained on billions of images and millions of videos, necessitating an efficient data loading strategy to achieve high throughput during training time.
This is accomplished in the NeMo framework through the use of
Megatron-Energon data loader
:
Shard large-scale data:
Uses the WebDataset format to shard a TB-size dataset into compressed files to help reduce I/O overhead during training.
Deterministic save and load:
Enables the dataset to be visited in one pass without repetition when the training job is disrupted, ensuring consistency across different training cluster setups.
Sequence packing:
Packs variable length or resolution images and videos together up to the max sequence length, minimizing compute wastage due to padding while simplifying data loading logic. NeMo uses the special THD attention kernel from the
Transformer engine
to support accelerated training with sequence packing.
Figure 4. Mixed image-video training with sequence packing
Reduce network bandwidth strain:
Each model parallel rank downloads a different subset of data instead of the whole dataset, and then all-gathers the data across ranks to get an identical dataloader.
Figure 5. Reducing network bandwidth strain to improve training throughput
Scaling video foundation model training
Video foundation models can be either autoregressive or diffusion models.
The well-established suite of NeMo tools on
large language models
(LLMs) can be reused for autoregressive models, while support for diffusion transformers such as DiT, MovieGen, and the latest
NVIDIA Cosmos
world foundation models for physical AI have been newly added.
The NeMo tech stack is highly optimized and provides more than 40%
Model FLOPs utilization
(MFU) in the latest benchmark (Table 1).
Model size
Context length
Training config
GPU used (TFLOPS/s)
Throughput (token/s/GPU)
DiT 7B
8k
baseline, no optimization
OOM
DiT 7B
8k
CP=2
457
8,969
DiT 7B
74k
TP=4 SP CP=4
414
2,933
DiT 28B
8k
TP=2 SP PP=2
435
2,392
DiT 28B
74k
TP=8 SP CP=4 PP=4
411
994
Table 1. GPU utilization and throughput benchmark for NVIDIA NeMo framework on diffusion transformers (DiT)
Legend:
CP
=context parallelism;
TP
=tensor parallelism;
SP
=sequence parallelism;
PP
=pipeline parallelism
Overview of the video diffusion pipeline
A video diffusion training pipeline is generally composed of the following major steps:
Tokenize the input image and video
with a causal temporal 3D tokenizer to generate 3D spatio-temporal tokens.
Use a transformer decoder
conditioned by the diffusion noise schedule timestep t and text input.
Timestep conditioning is applied through an Adaptive LayerNormalization (AdaLN) mechanism, with an option to use AdaLN-LoRA, which further improves Model FLOPs Utilization (MFU) during training.
Text conditioning is applied through a cross attention layer in each transformer block.
The NeMo framework enables you to initialize your transformer decoder based on the canonical DiT architecture or the MovieGen Llama architecture, which uses Grouped-Query Attention (GQA).
Compute the diffusion loss
with the parallelized EDM diffusion pipeline using the noise prediction from the diffusion transformer.
NeMo also applies additional Root Mean Square Layer Normalization (RMSNorm) on the queries and keys before attention blocks to stabilize diffusion training. RMSNorm is applied per attention head to remain compatible with tensor parallelism.
Figure 6. NeMo video diffusion training pipeline
Parallelism optimizations for video diffusion models
NeMo and Megatron-Core enable various model parallelism techniques:
Tensor parallel (TP)
Sequence parallel (SP)
Pipeline parallel (PP)
Context parallel (CP)
However, these techniques face unique challenges when applied to video diffusion transformers. Here’s how NeMo solves these challenges to achieve scalable and performant training:
Efficient pipeline parallelism for conditioning
Support for Spatio-Temporal DiT (ST-DiT) architecture
Customized random seeding mechanism
The traditional approach is to communicate conditioning information across pipeline stages, incurring additional communication cost and requiring nontrivial modifications to the pipeline schedule. NeMo solves this problem by computing the conditional embeddings at each pipeline stage. The computation cost with efficient pipeline parallelism for conditioning is much less than the communication cost and improves training throughput.
Figure 7. Trading communication for compute in conditioning pipeline parallelism
The Spatio-Temporal DiT (ST-DiT) architecture introduces additional spatial and temporal self-attention layers to each transformer block, as an alternative to training with full self attention on long video sequences. This approach exposes communication overhead during context parallelism due to smaller compute over short input sequence for these layers. NeMo addresses this by using local attention computation with A2A communication for spatial/temporal attention, while maintaining P2P ring topology for full self-attention. The hybrid approach effectively reduces bandwidth needs for temporal/spatial attention while still benefiting from context parallelism over full self-attention layer (Table 2).
Figure 8. Spatial-temporal DiT transformer block
Layer
Input Seq
Communication primitive
Communication bandwidth
Temporal self-attention
Short seq
Local compute & A2A
(bhw/cp, t, d)
Spatial self-attention
Short seq
Local compute & A2A
(bt/cp, hw, d)
Full attention
Long seq
CP with P2P
(b, h*w*t/cp, d)
Table 2. NeMo communication strategies for each kind of layer
Legend:
b
=batch size;
h*w
=spatial size;
t
=temporal size;
cp
=context parallel size;
d
=hidden size
, with input size being (b, t*h*w, d).
The customized random seeding mechanism goal is to make sure that random seeds are correctly initialized across the following components:
Time step
Gaussian noise
The actual model weights
Table 3 shows NeMo’s initialization strategy.
RNG seed
Data parallel
Context parallel
Pipeline parallel
Tensor parallel
Time step (t)
Diff
Same
Same
Same
Gaussian noise
Diff
Diff
Same
Same
Weight initialization
Same
Same
Diff
Diff
Table 3. Customized random seeding for parallelized diffusion transformers
Legend:
Diff=
Different random seed from other parallel ranks;
Same
=Same random seed as other parallel ranks.
Efficient in-framework inference
The NeMo framework accelerates inference by distributing denoising operations across multiple GPUs through context parallelism. After parallel denoising, the latent tensors are combined to reconstruct the video sequence before decoding with the Cosmos video tokenizer.
Benchmarks show 80–90% scaling efficiency on up to 32 H100 GPUs, with FP8 Multi-Head Attention providing 28% and 48% performance improvements over BF16 on 1 and 32 GPUs respectively.
Figure 9. Parallelized video generation with context parallelism
Figure 10. Inference performance at different GPU counts
Conclusion
In this post, we covered all the features of NVIDIA NeMo framework that will help you pretrain or fine-tune video foundation models in an effective and efficient manner.
NeMo Curator offers high-throughput data curation through clipping and sharding pipelines, and the Megatron Energon library offers efficient multimodal data loading. NeMo Frameworks enables scalable video foundation model training by supporting various model parallelism techniques specially optimized on diffusion and autoregressive models. In addition, it provides efficient in-framework inference by distributing denoising operations across multiple GPUs and incorporating FP8 Multi-Head Attention.
You can curate your video data with
NeMo Curator early access program
,
tokenize
them, pre-train(
diffusion
,
autoregressive
), fine-tune (
diffusion
,
autoregressive
), and perform multi-GPU in-framework inference (
diffusion
,
autoregressive
)with NeMo Framework today.
You can also try the NVIDIA Cosmos world foundation models at
build.nvidia.com
and watch the
CES keynote
from NVIDIA CEO Jensen Huang to learn more about the NVIDIA Cosmos world foundation model platform
.
Acknowledgements
Thanks to the following contributors: Parth Mannan, Xiaowei Ren, Zhuoyao Wang, Carl Wang, Jack Chang, Sahil Jain, Shanmugam Ramasamy, Joseph Jennings, Ekaterina Sirazitdinova, Oleg Sudakov, Linnan Wang, Mingyuan Ma, Bobby Chen, Forrest Lin, Hao Wang, Vasanth Rao Naik Sabavat, Sriharsha Niverty, Rong Ou, Pallab Bhattacharya, David Page, Jacob Huffman, Tommy Huang, Nima Tajbakhsh, and Ashwath Aithal. | https://developer.nvidia.com/zh-cn/blog/accelerate-custom-video-foundation-model-pipelines-with-new-nvidia-nemo-framework-capabilities/ | 通过 NVIDIA NeMo 框架的最新功能全链路加速视频基础模型的训练和微调 | 生成式 AI
已经从基于文本的模型发展到多模态模型,最近还扩展到了视频领域,为各行各业开辟了新的潜在用途。视频模型可以为用户创造新的体验,以及大规模模拟场景来训练自主代理。它们正在帮助包括机器人技术、自动驾驶汽车和娱乐业在内的多个行业实现变革。
由于视频数据的庞大和多样性,视频
基础模型
的开发面临着独特的挑战。这也凸显了可扩展的数据管理流程和训练框架的必要性,以此来理解时间和空间动态的模型。
我们发布了
NVIDIA NeMo 框架
中全新的视频基础模型功能,这是一个端到端的训练框架,用于预训练和微调您的专属视频基础模型。该框架包括高吞吐量的数据管理、高效的多模态数据加载功能、可扩展的模型训练,以及并行化的框架内推理。
高吞吐量视频数据处理链路
NVIDIA NeMo Curator 通过高效处理和准备高质量的数据(包括大型视频数据集),来提高生成式 AI 模型的准确性。
使用 NeMo Curator 的可扩展数据流程,您可以高效地剪辑、标注和过滤 100 PB 或更多的视频数据。为了消除瓶颈并优化性能,NeMo Curator 使用以下组合:
NVDEC:硬件解码器
NVENC:硬件编码器
Ray:用于扩展 AI 应用的计算框架
根据实验测试数据(仅做技术讨论和参考)显示,NeMo Curator 的自动负载平衡技术可以利用具有多种 GPU 类型的异构集群,以充分利用 L40S GPU 上的 NVENC 以及 H100 和 GB200 GPU 的性能。
图1 展示了 NeMo Curator 如何处理 2,000 万小时的视频数据,将处理时间从数年缩短到数天,使用 1,000 个 GPU 相比于 CPU 上未优化的流程,在相同功耗下实现了 89 倍的加速。
图1. NeMo Curator 将视频数据处理速度提升 89 倍 (图片展示的性能数据来自 NVIDIA 的实验测试,仅供技术讨论和参考)
NeMo Curator 为视频基础模型的训练和微调数据集提供以下相关的功能:
剪辑 (Clipping)
分片 (Sharding)
剪辑链路从解码和将原始视频分割为短而连续的片段开始,通过分析帧与帧之间的颜色变化来完成分割。在拼接阶段,利用图像嵌入(embedding)的相似性来平滑衔接片段,将相邻片段合并在一起。随后,这些片段会被转码为高质量的视频编码格式(H264),并通过视频嵌入(embedding)和字幕进行标注,这些字幕可以是现有的,也可以是由视觉语言模型 (VLM) 合成生成的,从而增强语义搜索功能。
图 2. 视频整理中的剪辑与分片链路
分片通过为字幕生成文本嵌入(embedding)来创建用于训练的最终 WebDataset。NeMo Curator 还利用 Ray 流处理技术构建了一个自动平衡系统,并为链路中的每个阶段部署了最佳数量的工作线程,以避免任何阶段成为瓶颈(如图 3 所示)。
图 3. 用于匹配整体链路的吞吐量的自动平衡系统
高效的多模态数据加载
视频模型可以在数十亿张图像和数百万个视频上进行训练,这需要一种高效的数据加载策略,从而在训练期间实现高吞吐量。
在 NeMo 框架中,这通过使用 NVIDIA
Megatron
-Energon 数据加载器
实现:
将大规模数据分片
:采用 WebDataset 格式,将 TB 级数据集分片为压缩文件,以减少训练期间的 I/O 开销。
保存与加载一致性
:在训练任务中断时,确保数据集在一次遍历中不重复访问,从而在不同的训练集群配置中保持一致性。
序列打包
:将长度或分辨率可变的图像和视频打包在一起,直至达到最大序列长度,减少因填充而导致的计算资源浪费,同时简化数据加载逻辑。NeMo 使用
Transformer Engine
中的特殊 THD 注意力算子核(attention kernel),通过结合序列打包来加速训练。
图 4. 通过序列打包加速图像-视频混合训练
减少网络带宽压力:
每个模型并行等级(rank)只下载数据集的一个不同子集,而不是整个数据集,然后所有等级通过全聚合(all-gather)在各卡之间共享数据,从而确保每张卡获得一致的数据加载。
图 5. 减少网络带宽压力以提高训练吞吐量
提高视频基础模型的训练拓展性
视频基础模型可以是自回归模型或扩散模型。
NeMo 框架里面现有的
大语言模型
(LLMs)功能可以复用于上述自回归模型,同时 NeMo 新增了对扩散 Transformer 的支持,如 DiT、MovieGen,以及最新的
NVIDIA Cosmos
世界基础模型(用于物理 AI)。
NeMo 技术栈经过高度优化,在最新基准测试中提供了超过 40% 的
模型浮点运算
利用率
(Model FLOPs Utilization,MFU)
(表 1)。
Model size
Context length
Training config
GPU used (TFLOPS/s)
Throughput (token/s/GPU)
DiT 7B
8k
baseline, no optimization
OOM
DiT 7B
8k
CP=2
457
8,969
DiT 7B
74k
TP=4 SP CP=4
414
2,933
DiT 28B
8k
TP=2 SP PP=2
435
2,392
DiT 28B
74k
TP=8 SP CP=4 PP=4
411
994
表 1. NVIDIA NeMo 框架在扩散 Transformer(DiT)上的 GPU 利用率和吞吐量基准测试
标注:
CP
=context parallelism;
TP
=tensor parallelism;
SP
=sequence parallelism;
PP
=pipeline parallelism
视频扩散模型链路概述
视频扩散训练链路通常由以下几个主要步骤组成:
使用因果时序 3D 分词器对输入的图像和视频进行分词,生成 3D 时空分词。
使用由扩散噪声调度@时间步 t 和文本输入作为条件的 Transformer 解码器。
时间步条件调节通过 Adaptive LayerNormalization(AdaLN)机制进行应用,并提供使用 AdaLN-LoRA 的选项,后者在训练过程中进一步提高模型浮点运算利用率(Model FLOPs Utilization,MFU)。
文本条件调节通过每个 Transformer 块中的交叉注意力层呈现。
NeMo 框架使您能够基于标准的的 DiT 架构或 MovieGen Llama 架构来初始化您的 Transformer 解码器,后者使用分组查询注意力机制(GQA, Grouped-Query Attention)。
使用扩散 Transformer 的噪声预测,通过并行化的 EDM 扩散链路计算扩散损失。
NeMo 还在每个注意力块之前,对 query 和 key 向量使用额外的均方根层归一化(Root Mean Square Layer Normalization,RMSNorm)以稳定扩散训练。RMSNorm 在每个注意力头上应用,以保持与张量并行性的兼容性。
图 6. NeMo 视频扩散模型训练链路
视频扩散模型的并行优化
NeMo 和 NVIDIA Megatron-Core 支持多种模型并行技术:
张量并行 (Tensor Parallel, TP)
序列并行 (Sequence Parallel, SP)
流水线并行 (Pipeline Parallel, PP)
上下文并行 (Context Parallel, CP)
然而,这些技术在应用于视频扩散 Transformer 时面临独特的挑战。以下是 NeMo 如何解决这些挑战,实现可扩展和高性能训练的方式:
用于条件调节的高效流水线并行
支持时空 DiT (ST-DiT) 架构
定制的随机种子机制
传统方法是通过流水线各阶段之间传递条件信息,这会产生额外的通信开销,并且需要对流水线调度进行较大修改。NeMo 通过在每个流水线阶段即时计算条件嵌入(embedding)来解决这个问题。与通信成本相比,条件调节的高效流水线并行计算成本要低得多,从而提高了训练吞吐量。
图 7. 在条件调节流水线并行中,以通信换取计算资源的优化策略
时空 DiT (ST-DiT) 架构在每个 Transformer 块中引入了额外的空间和时间自注意力层,以替代在长视频序列上进行全自注意力训练。因为这些层在输入序列比较短,所以进行计算时计算量较小,以至于暴露出在上下文并行中的通信开销。NeMo 通过使用局部注意力计算和 A2A 通信来处理空间/时间注意力层里的短序列输入,同时保持 P2P 环形拓扑结构用于全自注意力里的长序列输入。该混合方法有效地减少了时间/空间注意力的带宽需求,同时仍然从上下文并行中获益,避免了全自注意力层的计算开销(表 2)。
图 8. 时空 DiT Transformer 架构
Layer
Input Seq
Communication primitive
Communication bandwidth
Temporal self-attention
Short seq
Local compute & A2A
(bhw/cp, t, d)
Spatial self-attention
Short seq
Local compute & A2A
(bt/cp, hw, d)
Full attention
Long seq
CP with P2P
(b, h*w*t/cp, d)
表 2. NeMo 针对每种层的通信策略
标识
:
b
=batch size;
h*w
=spatial size;
t
=temporal size;
cp
=context parallel size;
d
=hidden size
, with input size being (b, t*h*w, d).
定制的随机种子机制的目标是确保以下组件中的随机种子得以正确初始化:
时间步
高斯噪声
实际模型权重
表 3 显示了 NeMo 框架里面对于随机种子的初始化策略。
RNG seed
Data parallel
Context parallel
Pipeline parallel
Tensor parallel
Time step (t)
Diff
Same
Same
Same
Gaussian noise
Diff
Diff
Same
Same
Weight initialization
Same
Same
Diff
Diff
表 3. 针对并行化扩散 Transformer 架构的定制随机种子设置
标识:
Diff=
Different random seed from other parallel ranks;
Same
=Same random seed as other parallel ranks.
框架内高效推理
推理的时候,NeMo 框架通过上下文并行,将去噪操作分布到多个 GPU 上,从而加速推理。在并行去噪后,潜在张量被合并以重建视频序列,然后使用 Cosmos 视频分词器进行解码。
基准测试显示,在最多 32 个 NVIDIA Hopper 架构 GPU 上,NeMo 的扩展效率达到 80-90%。在 1 个和 32 个 GPU 上,FP8 多头注意力相比 BF16 提供了分别 28% 和 48% 的性能提升。
图 9. 使用上下文并行化的视频生成
图 10. 在不同 GPU 数量下的推理性能
结论
在本文中,我们介绍了 NVIDIA NeMo 框架,帮助您以高效的方式进行视频基础模型的预训练或微调。
NeMo Curator 通过剪辑和分片链路提供高吞吐量的数据整理,Megatron Energon 库则提供高效的多模态数据加载。NeMo 框架通过支持多种针对扩散模型和自回归模型特别优化的模型并行技术,实现了可扩展的视频基础模型训练。此外,它通过将去噪操作分布到多个 GPU 上,并结合 FP8 多头注意力,实现了高效的框架内推理。
您可以通过
NeMo Curator 的早期
试用
计划
处理您的视频数据,对其进行
分词
,进行预训练(
扩散
、
自回归
)、微调(
扩散
、
自回归
),并在 NeMo 框架下进行多 GPU 的框架内推理(
扩散
、
自回归
)。
您还可以访问
NVIDIA 官网
尝试 NVIDIA Cosmos 世界基础模型,并观看 NVIDIA CEO 黄仁勋在
CES 上的主题演讲
,了解更多关于 NVIDIA Cosmos 世界基础模型平台的信息。
鸣谢
感谢以下贡献者: Parth Mannan, Xiaowei Ren, Zhuoyao Wang, Carl Wang, Jack Chang, Sahil Jain, Shanmugam Ramasamy, Joseph Jennings, Ekaterina Sirazitdinova, Oleg Sudakov, Linnan Wang, Mingyuan Ma, Bobby Chen, Forrest Lin, Hao Wang, Vasanth Rao Naik Sabavat, Sriharsha Niverty, Rong Ou, Pallab Bhattacharya, David Page, Jacob Huffman, Tommy Huang, Nima Tajbakhsh, and Ashwath Aithal. |
https://developer.nvidia.com/blog/nvidia-cudss-advances-solver-technologies-for-engineering-and-scientific-computing/ | NVIDIA cuDSS Advances Solver Technologies for Engineering and Scientific Computing | NVIDIA cuDSS
is a first-generation sparse direct solver library designed to accelerate engineering and scientific computing. cuDSS is increasingly adopted in data centers and other environments and supports single-GPU, multi-GPU and multi-node (MGMN) configurations.
cuDSS has become a key tool for accelerating computer-aided engineering (CAE) workflows and scientific computations across multiple domains such as structural engineering, fluid dynamics, electromagnetics, circuit simulation, optimization, and AI-assisted engineering problems.
This post highlights some of the key performance and usability features delivered in
cuDSS v0.4.0
and
cuDSS v0.5.0
, as summarized in Table 1. cuDSS v0.4.0 achieves a significant performance boost for factorization and solve steps, while also introducing several new features, including the memory prediction API, automatic hybrid memory selection, and variable batch support. cuDSS v0.5.0 adds
host execution mode
, which is particularly beneficial for smaller matrices, and demonstrated substantial performance improvements using
hybrid memory mode
and host multithreading for analysis phase, an area that is typically challenging to parallelize effectively.
cuDSS v0.4.0 release
cuDSS v0.5.0 release
PIP wheel and Conda support
Factorization and solve performance improvements (up to 10x) for single and multi-GPU when factors have dense parts
Memory prediction API
Automatic normal/hybrid memory mode selection
Variable (non-uniform) batch support (variable N, NNZ, NRHS, LD)
Host execution mode (parts of computations on the host) for smaller matrices
Host multithreading (currently only for the reordering) with user-defined threading backend
New pivoting approach (static pivoting with scaling)
Improved performance and memory requirements for hybrid memory mode
Table 1. cuDSS features in releases v0.4.0 and v0.5.0
Feature highlights
This section focuses on notable usability enhancements and performance improvements.
Memory prediction API
The memory prediction API is important for users who need to know the precise amount of device and host memory required by cuDSS before reaching the most memory-intensive phase (numerical factorization).
It is especially useful in scenarios where device memory may be insufficient—either when solving large linear systems or when the application has a limited memory budget for cuDSS. In either case, it is recommended to enable
hybrid memory mode
before the analysis phase.
Note that if hybrid memory mode is enabled but everything fits within the available device memory (whether based on the user-defined limit or GPU capacity), cuDSS will automatically detect this and switch to the faster default memory mode.
A typical call sequence for solving a linear system with cuDSS is as follows:
Analysis (reordering and symbolic factorization)
Numerical factorization (where the values of the factors are allocated and computed)
Solving
With the introduction of memory prediction, users can now query the amount of device and host memory required for the chosen mode (either default or hybrid memory) after the analysis phase, as well as the minimum memory required for hybrid memory mode. As the sample below demonstrates, the query is a single call of
cudssDataGet
with
CUDSS_DATA_MEMORY_ESTIMATES
that writes an output in a small fixed-size array.
/*
* After cudssExecute(..., CUDSS_PHASE_ANALYSIS, ,,,)
*/
int64_t memory_estimates[16] = {0};
cudssDataGet(cudssHandle, solverData, CUDSS_DATA_MEMORY_ESTIMATES,
&memory_estimates, sizeof(memory_estimates);
/* memory_estimates[0] - permanent device memory
* memory_estimates[1] - peak device memory
* memory_estimates[2] - permanent host memory
* memory_estimates[3] - peak host memory
* memory_estimates[4] - minimum device memory for the hybrid memory mode
* memory_estimates[5] - maximum host memory for the hybrid memory mode
* memory_estimates[6,...,15] - reserved for future use
*/
To see the
full sample code
that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo.
Non-uniform batch API
In scenarios where the application requires solving multiple linear systems, and each system individually is not large enough to fully saturate the GPU, performance can be enhanced through batching. There are two types of batching: uniform and non-uniform. Unlike uniform batches, non-uniform batches do not impose restrictions on the dimensions or sparsity patterns of the matrices.
cuDSS v0.4.0 introduces support for non-uniform batches. The opaque
cudssMatrix_t
objects can represent either a single matrix or a batch of matrices and thus the only part that needs to be changed is how the matrix objects are created and modified.
To create batches of dense or sparse matrices, v0.4.0 introduced new APIs
cudssMatrixCreateBatchDn
or
cudssMatrixCreateBatchCsr
. For modifying the matrix data are the similarly added APIs
cudssMatrixSetBatchValues
and
cudssMatrixSetBatchCsrPointers
as well as
cudssMatrixGetBatchDn
and
cudssMatrixGetBatchCsr
. cuDSS v0.5.0 modifies
cudssMatrixFormat_t
which can now be queried using
cudssMatrixGetFormat
to determine whether
cudssMatrix_t
object is a single matrix or a batch.
Once the batches of matrices are created, they can be passed to the main calls of
cudssExecute
in the exact same way as if they were single matrices. The sample below demonstrates the use of new batch APIs to create batches of dense matrices for the solution and right-hand sides, and a batch of sparse matrices for As.
/*
* For the batch API, scalar arguments like nrows, ncols, etc.
* must be arrays of size batchCount of the specified integer type
*/
cudssMatrix_t b, x;
cudssMatrixCreateBatchDn(&b, batchCount, ncols, nrhs, ldb, batch_b_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
cudssMatrixCreateBatchDn(&x, batchCount, nrows, nrhs, ldx, batch_x_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
cudssMatrix_t A;
cudssMatrixCreateBatchDn(&A, batchCount, nrows, ncols, nnz, batch_csr_offsets, NULL, batch_csr_columns, batch_csr_values, CUDA_R_32I, CUDA_R_64F, mtype, mview, base);
/*
* The rest of the workflow remains the same, incl. calls to cudssExecute() with batch matrices A, b and x
*/
To see the
full sample code
that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo.
Host multithreading API
Although most of the compute- and memory-intensive parts of cuDSS are executed on the GPU, some important tasks are still executed on the host. Prior to v0.5.0, cuDSS did not support multithreading (MT) on the host, and host execution was always single-threaded. The new release introduces support for arbitrary user-defined threading runtimes (such as
pthreads, OpenMP, and thread pools
), offering flexibility similar to how support was introduced for user-defined communication backends in the
MGMN mode
in cuDSS v0.3.0.
Among the tasks executed on the host, reordering (a critical part of the analysis phase) often stands out, as it can take a significant portion of the total execution time (analysis plus factorization plus solve). To address this common bottleneck in direct sparse solvers, cuDSS v0.5.0 introduces both general MT support on the host and a multithreaded version of reordering. Note that this is available only for the
CUDSS_ALG_DEFAULT
reordering algorithm.
As with the MGMN mode, the new MT mode is optional and does not introduce any new dependencies to the user application if not used. Enabling this feature in your application is simple—just set the name of the shim threading layer library using
cudssSetThreadingLayer
and (optionally) specify the maximum number of threads that cuDSS is allowed to use, as shown in the following sample:
/*
* Before cudssExecute(CUDSS_PHASE_ANALYSIS)
* thrLibFileName - filename to the cuDSS threading layer library
* If NULL then export CUDSS_THREADING_LIB = ‘filename’
*/
cudssSetThreadingLayer(cudssHandle, thrLibFileName);
/*
* (optional)Set number of threads to be used by cuDSS
*/
int32_t nthr = ...;
cudssConfigSet(cudssHandle, solverConfig, CUDSS_CONFIG_HOST_NTHREADS,
&nthr, sizeof(nthr);
To see the
full sample code
that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo.
Host execution
While the primary objective of cuDSS is to enable GPU acceleration for sparse direct solver functionality, for tiny and small matrices (which typically don’t have enough parallelism to saturate a GPU) an extensive use of the GPU can bring a non-negligible overhead. This can sometimes even dominate the total runtime.
To make cuDSS a more universal solution, v0.5.0 introduces the
host execute mode,
which enables factorization and solve phases on the host. When enabled, cuDSS will use a heuristic size-based dispatch to determine whether to perform part of the computations (during factorization and solve phases) on the host or on the device.
Additionally, when hybrid execution mode is enabled, users can pass host buffers for the matrix data which saves the needless memory transfers from the host to the device. Host execution mode doesn’t give cuDSS capabilities of a fully-fledged CPU solver, but helps to optionally remove the unwanted memory transfers and improve performance for small matrices.
The following sample demonstrates how to turn on hybrid execution mode.
/*
* Before cudssExecute(CUDSS_PHASE_ANALYSIS)
*/
int hybrid_execute_mode = 1;
cudssConfigSet(solverConfig, CUDSS_CONFIG_HYBRID_EXECUTE_MODE,
&hybrid_execute_mode, sizeof(hybrid_execute_mode);
To see the
full sample code
that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo.
Performance improvements of cuDSS v0.4.0 and v0.5.0
cuDSS v0.4.0 and v0.5.0 introduced significant performance improvements across several types of workloads.
In v0.4.0, the factorization and solve steps are accelerated by detecting when parts of the triangular factors become dense and leveraging more efficient dense BLAS kernels for those parts. The speedup achieved through this optimization depends largely on the symbolic structure of the factors, which in turn is influenced by the original matrix and the reordering permutation.
Figure 1 illustrates the performance improvement of v0.4.0 over v0.3.0, based on a large collection of matrices from the
SuiteSparse Matrix Collection
, analyzed on the NVIDIA H100 GPU.
Figure 1. Performance improvement for the factorization and solve phases of cuDSS v0.4.0 over v0.3.0 for a variety of matrices from SuiteSparse Matrix Collection
As shown in the chart, both the factorization and solve phases saw substantial improvements, with geometric means of 1.74 and 2.0, respectively. Some matrices with relatively sparse triangular factors did not show significant speedups. However, matrices like
Serena
,
conf5_4_8x8_20
and
atmosmodd
(which come from various types of HPC applications) experienced speedups of more than 8x in the factorization phase and more than 6x in the solve phase.
The analysis phase also saw significant speedup, thanks to the multithreaded reordering introduced in cuDSS v0.5.0. Figure 2 compares the performance of the analysis phase between v0.5.0 and v0.4.0, using the same set of matrices from the SuiteSparse Matrix Collection.
The performance improvement arises from the fact that v0.4.0 used a single-threaded reordering implementation, while v0.5.0 leverages multiple CPU threads (cores) on the host. While it’s well-known that state-of-the-art reordering algorithms are notoriously difficult to parallelize efficiently, cuDSS v0.5.0 makes good use of multiple CPU cores, resulting in a solid geometric mean speedup of 1.98, with the maximum improvement reaching 4.82.
Note that the analysis phase includes both the (optionally multithreaded) reordering and symbolic factorization, which is performed on the GPU. Therefore, the actual speedup for the reordering part is likely even higher than what the chart indicates.
Figure 2. Performance improvement (analysis phase only) of cuDSS v0.5.0 over v0.4.0 for a variety of matrices from SuiteSparse Matrix Collection using host multithreading feature released in v0.5.0
cuDSS v0.5.0 further optimizes the performance of the
hybrid memory mode,
which was first introduced in v0.3.0 This feature allows part of the internal arrays used within cuDSS to reside on the host, enabling the solution of systems that don’t fit into the memory of a single GPU. It works particularly well on
NVIDIA Grace
-based systems, thanks to the significantly higher memory bandwidth between the CPU and GPU.
Figure 3 presents the performance speedup for the factorization and solve phases with cuDSS 0.5.0, comparing an NVIDIA Grace Hopper system (Grace CPU plus NVIDIA H100 GPU) against an x86 system (Intel Xeon Platinum 8480CL, 2S) plus NVIDIA H100 GPU, using a set of large matrices.
Figure 3. Performance improvement of cuDSS v0.5.0 with hybrid memory mode for a variety of matrices
As previously mentioned, v0.5.0 introduces the
hybrid execution mode
, which improves performance of cuDSS for small matrices. Figure 4 shows the speedup of the hybrid execution mode against the CPU solver (Intel MKL PARDISO) for the factorization and solve phases.
Figure 4. Performance improvement of cuDSS v0.5.0 with hybrid execution (enabling host execution) against the CPU solver for a variety of matrices
Finally, Figure 5 shows the speedup of the new hybrid execution mode (cuDSS v0.5.0) compared to the default mode (cuDSS v0.4.0) for the factorization and solve phases on a set of small matrices. While the speedup of the factorization phase is significant only for really small matrices, the solve phase delivers speedups for systems with up to 30K equations. This can be explained by the fact that the solve phase has less work compared to the factorization phase and cannot make good use of a GPU for the tested matrices.
Figure 5. Performance improvement of cuDSS v0.5.0 with hybrid execution for a variety of matrices
Summary
NVIDIA cuDSS v0.4.0 and v0.5.0 releases provide several new enhancements that significantly improve performance. Highlights include general speedups in factorization and solving, a hybrid memory and execution mode, host multithreading, and support for non-uniform batch sizes. In addition to our continued investment in performance, we will consistently enhance our APIs to expand functionality, providing users with greater flexibility and fine-grained control.
Ready to get started?
Download NVIDIA cuDSS v0.5.0
.
To learn more, check out the
cuDSS v0.5.0 release notes
and the following previous posts:
NVIDIA cuDSS Library Removes Barriers to Optimizing the US Power Grid
Spotlight: Honeywell Accelerates Industrial Process Simulation with NVIDIA cuDSS
Join the conversation and provide feedback in the
NVIDIA Developer Forum
. | https://developer.nvidia.com/zh-cn/blog/nvidia-cudss-advances-solver-technologies-for-engineering-and-scientific-computing/ | NVIDIA cuDSS 提高工程和科学计算中的求解器技术 | NVIDIA cuDSS
是第一代稀疏直接求解器库,旨在加速工程和科学计算。cuDSS 正越来越多地应用于数据中心和其他环境,并支持单 GPU、多 GPU 和多节点(MGMN)配置。
cuDSS 已成为加速多个领域 (例如结构工程、流体动力学、电磁学、电路模拟、优化和 AI 辅助工程问题) 的计算机辅助工程 (CAE) 工作流程和科学计算的关键工具。
本文重点介绍了
cuDSS v0.4.0
和
cuDSS v0.5.0
中提供的一些关键性能和可用性功能 (如表 1 中总结的内容),cuDSS v0.4.0 可显著提升分解和求解步骤的性能,同时还引入了一些新功能,包括内存预测 API、自动混合内存选择和可变批量支持。cuDSS v0.5.0 增加了主机执行模式,这对较小的矩阵特别有利,并且在分析阶段使用混合内存模式和主机多线程实现了显著的性能提升,而有效并行化通常是一个具有挑战性的领域。
cuDSS v0.4.0 版本
cuDSS v0.5.0 版本
PIP wheel 和 Conda 支持在因子具有密集部分时,对单 GPU 和多 GPU 进行分解并解决性能提升 (高达 10 倍) 内存预测 API 自动选择正常/混合内存模式可变 (非均匀) 批量支持 (变量 N、NNZ、NRHS、LD)
较小矩阵的主机执行模式 (部分主机计算模式) 带有用户定义线程后端的主机多线程 (目前仅用于重新排序) 新的透视方法 (通过扩展进行静态透视) 提高了混合内存模式的性能和内存需求
表 1.版本 v0.4.0 和 v0.5.0 中的 cuDSS 功能
功能亮点
本节重点介绍显著的可用性增强和性能改进。
内存预测 API
对于需要在达到显存密集型阶段(数值分解)之前了解 cuDSS 所需设备和主机显存的精确数量的用户而言,内存预测 API 非常重要。
在设备显存不足的情况下,无论是求解大型线性系统,还是应用程序的 cuDSS 显存预算有限时,此功能尤其有用。在这两种情况下,建议在分析阶段之前启用混合显存模式。
请注意,如果混合显存模式已启用,但一切都适合可用设备显存 (无论是基于用户定义的限制还是 GPU 容量),cuDSS 将自动检测到这一点并切换到更快的默认显存模式。
使用 cuDSS 求解线性系统的典型调用序列如下所示:
分析 (重新排序和符号分解)
数值分解(分配并计算因子值)
求解
引入 memory prediction 后,用户现在可以在分析阶段结束后查询所选模式(默认或 hybrid memory)所需的 device 和 host memory 数量,以及 hybrid memory 模式所需的最小 memory。如下示例所示,查询是使用
CUDSS_DATA_MEMORY_ESTIMATES
对
cudssDataGet
进行的一次调用,可将输出写入固定大小的小型数组中。
/*
* After cudssExecute(..., CUDSS_PHASE_ANALYSIS, ,,,)
*/
int64_t memory_estimates[16] = {0};
cudssDataGet(cudssHandle, solverData, CUDSS_DATA_MEMORY_ESTIMATES,
&memory_estimates, sizeof(memory_estimates);
/* memory_estimates[0] - permanent device memory
* memory_estimates[1] - peak device memory
* memory_estimates[2] - permanent host memory
* memory_estimates[3] - peak host memory
* memory_estimates[4] - minimum device memory for the hybrid memory mode
* memory_estimates[5] - maximum host memory for the hybrid memory mode
* memory_estimates[6,...,15] - reserved for future use
*/
要查看使用此功能的
完整示例代码
,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。
不均匀批处理 API
在应用程序需要求解多个线性系统,且每个系统单独不够大,无法使 GPU 完全饱和的情况下,可以通过批处理来提高性能。有两种类型的批处理:uniform 和 non-uniform。与 uniform 批量不同,non-uniform 批量不会对矩阵的维度或稀疏模式施加限制。
cuDSS v0.4.0 引入了对不均匀批量的支持。不透明的
cudssMatrix_t
对象可以表示单个矩阵,也可以表示批量矩阵,因此唯一需要更改的部分是创建和修改矩阵对象的方式。
为创建密集或稀疏矩阵的批量,v0.4.0 引入了新的 API
cudssMatrixCreateBatchDn
或
cudssMatrixCreateBatchCsr
、添加的类似 API
cudssMatrixSetBatchValues
和
cudssMatrixSetBatchCsrPointers
以及
cudssMatrixGetBatchDn
和
cudssMatrixGetBatchCsr
可用于修改矩阵数据。cuDSS v0.5.0 修改
cudssMatrixFormat_t
,现在可以使用
cudssMatrixGetFormat
查询 tg_10,以确定
cudssMatrix_t
对象是单个矩阵还是批量对象。
创建矩阵批量后,它们可以像单个矩阵一样以完全相同的方式传递给
cudssExecute
的主要调用。以下示例演示了如何使用新的批量 API 为解和右侧创建批量密集矩阵,并为 As 创建批量稀疏矩阵。
/*
* For the batch API, scalar arguments like nrows, ncols, etc.
* must be arrays of size batchCount of the specified integer type
*/
cudssMatrix_t b, x;
cudssMatrixCreateBatchDn(&b, batchCount, ncols, nrhs, ldb, batch_b_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
cudssMatrixCreateBatchDn(&x, batchCount, nrows, nrhs, ldx, batch_x_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
cudssMatrix_t A;
cudssMatrixCreateBatchDn(&A, batchCount, nrows, ncols, nnz, batch_csr_offsets, NULL, batch_csr_columns, batch_csr_values, CUDA_R_32I, CUDA_R_64F, mtype, mview, base);
/*
* The rest of the workflow remains the same, incl. calls to cudssExecute() with batch matrices A, b and x
*/
要查看使用此功能的
完整示例代码
,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。
托管多线程 API
虽然 cuDSS 的大多数计算和内存密集型部分都在 GPU 上执行,但一些重要任务仍然在主机上执行。在 v0.5.0 之前,cuDSS 不支持主机上的多线程 (MT),并且主机执行始终是单线程的。新版本引入了对任意用户定义线程运行时 (
例如 pthreads、OpenMP 和线程池
) 的支持,其灵活性与 cuDSS v0.3.0 中在
MGMN 模式
下引入对用户定义通信后端的支持类似。
在主机上执行的任务中,重新排序 (分析阶段的关键部分) 通常非常突出,因为它可能占用总执行时间 (分析加分解加解) 的很大一部分。为解决直接稀疏求解器中的常见瓶颈,cuDSS v0.5.0 在主机上引入了通用 MT 支持和多线程版本的重新排序。请注意,此功能仅适用于
CUDSS_ALG_DEFAULT
重排序算法。
与 MGMN 模式一样,新的 MT 模式是可选的,如果不使用,不会向用户应用引入任何新的依赖项。在应用中启用此功能非常简单 – 只需使用
cudssSetThreadingLayer
设置 shim 线程层库的名称,并 (可选) 指定允许 cuDSS 使用的最大线程数,如下例所示:
/*
* Before cudssExecute(CUDSS_PHASE_ANALYSIS)
* thrLibFileName - filename to the cuDSS threading layer library
* If NULL then export CUDSS_THREADING_LIB = ‘filename’
*/
cudssSetThreadingLayer(cudssHandle, thrLibFileName);
/*
* (optional)Set number of threads to be used by cuDSS
*/
int32_t nthr = ...;
cudssConfigSet(cudssHandle, solverConfig, CUDSS_CONFIG_HOST_NTHREADS,
&nthr, sizeof(nthr);
要查看使用此功能的
完整示例代码
,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。
主机执行
虽然 cuDSS 的主要目标是为稀疏直接求解器功能实现 GPU 加速,但对于微小的矩阵 (通常没有足够的并行度使 GPU 饱和),广泛使用 GPU 会带来不可忽略的开销。有时,这甚至会主宰总运行时间。
为使 cuDSS 成为更通用的解决方案,v0.5.0 引入了
主机执行模式
,该模式可在主机上启用分解和求解阶段。启用 cuDSS 后,cuDSS 将使用基于大小的启发式分配来确定是否在主机或设备上执行部分计算 (在分解和求解阶段)。
此外,启用混合执行模式后,用户可以为矩阵数据传递主机缓冲区,从而节省从主机到设备的不必要的内存传输。主机执行模式无法提供成熟 CPU 求解器的 cuDSS 功能,但有助于选择性地消除不需要的内存传输,并提高小型矩阵的性能。
以下示例演示了如何开启混合执行模式。
/*
* Before cudssExecute(CUDSS_PHASE_ANALYSIS)
*/
int hybrid_execute_mode = 1;
cudssConfigSet(solverConfig, CUDSS_CONFIG_HYBRID_EXECUTE_MODE,
&hybrid_execute_mode, sizeof(hybrid_execute_mode);
要查看使用此功能的
完整示例代码
,请访问 NVIDIA/CUDALibrarySamples GitHub repo。
cuDSS v0.4.0 和 v0.5.0 的性能提升
cuDSS v0.4.0 和 v0.5.0 为多种类型的工作负载引入了显著的性能提升。
在 v0.4.0 中,通过检测三角形因子的某些部分何时变得密集,并为这些部分利用更高效的密集 BLAS 内核,可加速分解和求解步骤。通过此优化实现的加速在很大程度上取决于因子的符号结构,而符号结构又会受到原始矩阵和重新排序排列的影响。
根据 SuiteSparse 矩阵集合中的大量矩阵集合,并在 NVIDIA H100 GPU 上进行分析,图 1 展示了 v0.4.0 相较于 v0.3.0 的性能提升。
图 1. cuDSS v0.4.0 相对于 v0.3.0 对 SuiteSparse Matrix Collection 中各种矩阵的分解和求解阶段的性能有所提升
如图所示,分解阶段和求解阶段均有显著改进,几何均值分别为 1.74 和 2.0、一些三角形因子相对稀疏的矩阵并未显示出显著的加速。但是,
Serena
、
conf5_4_8x8_20
和
atmosmodd
(来自各种类型的 HPC 应用) 等矩阵在分解阶段的速度提升超过 8 倍,在求解阶段的速度提升超过 6 倍。
得益于 cuDSS v0.5.0 中引入的多线程重排序,分析阶段的速度也得到了显著提升。图 2 使用来自 SuiteSparse 矩阵集合的同一组矩阵,比较了 v0.5.0 和 v0.4.0 分析阶段的性能。
性能提升的原因是,v0.4.0 使用了单线程重排序实现,而 v0.5.0 则利用主机上的多个 CPU 线程 (cores)。众所周知,先进的重排序算法很难高效并行化,而 cuDSS v0.5.0 可以很好地利用多个 CPU cores,从而实现 1.98 的几何平均加速,最大提升可达 4.82。
请注意,分析阶段包括 (可选多线程) 重排序和符号分解,这在 GPU 上执行。因此,重新排序部分的实际加速可能甚至高于图表所示。
图 2、对于使用 v0.5.0 中发布的主机多线程功能的 SuiteSparse 矩阵集合中的各种矩阵,cuDSS v0.5.0 的性能比 v0.4.0 得到提升 (仅限分析阶段)
cuDSS v0.5.0 进一步优化了
混合显存模式
(最初在 v0.3.0 中引入)的性能。此功能允许 cuDSS 中使用的部分内部数组驻留在主机上,从而为不适合单个 GPU 内存的系统提供解决方案。由于 CPU 和 GPU 之间的内存带宽显著增加,它在基于
NVIDIA Grace
的系统上的运行效果尤为出色。
图 3 展示了 cuDSS 0.5.0 下分解和求解阶段的性能提升情况,并使用一组大型矩阵比较了 NVIDIA Grace Hopper 系统 (Grace CPU 加 NVIDIA H100 GPU) 与 x86 系统 (Intel Xeon Platinum 8480CL,2S) 加 NVIDIA H100 GPU 的性能提升情况。
图 3、cuDSS v0.5.0 借助混合显存模式,性能得到提升
如前所述,v0.5.0 引入了
混合执行模式
,可提高 cuDSS 在处理小型矩阵时的性能。图 4 显示了在分解和求解阶段,混合执行模式相对于 CPU 求解器 (Intel MKL PARDISO) 的加速情况。
图 4、cuDSS v0.5.0 通过混合执行 (启用主机执行) 相较于 CPU 求解器在各种矩阵中的性能提升
最后,图 5 显示了与默认模式 (cuDSS v0.4.0) 相比,新的混合执行模式 (cuDSS v0.5.0) 在一组小矩阵上分解和求解阶段的速度提升情况。虽然分解阶段的加速仅适用于非常小的矩阵,但求解阶段可为方程高达 30K 个的系统提供加速。这可以解释为,与分解阶段相比,求解阶段的工作量较少,并且无法充分利用 GPU 来测试矩阵。
图 5、cuDSS v0.5.0 的性能提升,支持各种矩阵的混合执行
总结
NVIDIA cuDSS v0.4.0 和 v0.5.0 版本提供了多项新的增强功能,可显著提高性能。亮点包括分解和求解方面的一般加速、混合内存和执行模式、主机多线程,以及对非均匀批量大小的支持。除了持续投资性能之外,我们还将持续增强 API 以扩展功能,为用户提供更高的灵活性和细粒度控制。
准备好开始使用了吗?下载 NVIDIA cuDSS v0.5.0。
如需了解更多信息,请查看
cuDSS v0.5.0 版本说明
以及之前的以下帖子:
NVIDIA cuDSS 库消除了优化美国电网的障碍
聚焦:Honeywell 借助 NVIDIA cuDSS 加速工业流程模拟
在
NVIDIA 开发者论坛
中加入对话并提供反馈。 |
https://developer.nvidia.com/blog/spotlight-university-of-tokyo-uses-nvidia-grace-hopper-for-groundbreaking-energy-efficient-seismic-research/ | Spotlight: University of Tokyo Uses NVIDIA Grace Hopper for Groundbreaking Energy-Efficient Seismic Research | Supercomputers are the engines of groundbreaking discoveries. From predicting extreme weather to advancing disease research and designing safer, more efficient infrastructures, these machines simulate complex systems that are impractical to test in the real world due to their size, cost, and material requirements.
Since the introduction of the GPU in 1999, NVIDIA has continually pushed the boundaries of
accelerated computing
, an approach that uses specialized hardware to dramatically speed up work by bundling frequently occurring tasks in parallel processing. This efficiency enables supercomputers to handle unprecedented computational challenges while consuming less energy per task.
Today, NVIDIA-powered systems lead the charge in energy-efficient supercomputing, with eight of the top 10 placements on the
Green500
—an industry benchmark for energy-efficient supercomputers. The JEDI system at Germany’s Jülich Supercomputing Center, powered by
NVIDIA Grace Hopper
, exemplifies this progress, achieving a staggering efficiency of 72.7 gigaflops per watt.
Advancing seismic safety with the University of Tokyo
This drive for
energy efficiency
is exemplified in the recent research effort between NVIDIA and the University of Tokyo. Situated in one of the most seismically active regions in the world, Japan requires cutting-edge research to mitigate the impact of earthquakes.
The
Earthquake Research Institute at the University of Tokyo
has used the
NVIDIA Grace Hopper Superchip
to accelerate its seismic simulations,
achieving groundbreaking results
.
The tightly coupled CPU-GPU architecture of the
NVIDIA GH200 Grace Hopper Superchip
enables an improvement in simulation performance of 86x, with 32x greater energy efficiency compared to traditional methods. Advanced memory optimization ensures faster and more complex computations, empowering researchers to model earthquake-resistant infrastructure with unmatched precision. By leveraging data-driven predictions and heterogeneous computing systems, the university is helping transform seismic research into a more efficient endeavor.
Award-winning research: Heterogeneous computing for seismic modeling
These breakthroughs were presented at
WACCPD 2024
, where the University of Tokyo showcased how NVIDIA hardware-software synergy enables solutions to dynamic, large-scale problems. In seismic modeling, the innovative architecture of Grace Hopper accelerates time-evolution equation-based simulations by combining high-performance GPUs for intensive calculations with the memory capacity of CPUs for predictive algorithms.
Beyond seismic research, this approach has the potential to reshape energy efficiency in
high-performance computing (HPC)
.
This groundbreaking work developed a new CPU-GPU heterogeneous computing method for repeatedly solving time-evolution partial differential equation (PDE) problems with guaranteed accuracy. This new method achieves a very short time-to-solution (TTS) with low energy-to-solution by leveraging both the large memory capacity of the CPU and the high computing performance of the GPU. When scaled up to the
Alps supercomputer
, the method was 51.6x faster than using only the CPU and 6.98x faster than the GPU, while achieving an impressive 94.3% efficiency across 1,920 compute nodes.
Figure 1. Performance comparison of the newly-developed PDE solver. Using both CPU and GPU in the NVIDIA GH200 Grace Hopper Superchip realizes maximum performance
The simulations involve stepping forward in time, where the solution (the seismic activity, for example) at each point in time is calculated by solving a system of equations. In turn, at each timestep multiple iterations are required, where the solution converges to a suitably accurate result. The calculations at each iteration are very intense and require the computational power of GPUs.
Therefore, there are two aspects that determine the overall TTS (and corresponding energy to solution): how fast each iteration can be calculated at each timestep, and how many iterations are required at each timestep. It is this combination that enables Grace Hopper architecture to really excel: the Hopper GPU performs each calculation exceedingly quickly, while the large memory capacity of the Grace CPU stores a history of prior results to inform the predictor, allowing the number of iterations to be vastly reduced.
This data-driven method is well suited to the NVIDIA GH200 Grace Hopper Superchip. The number of iterations required to converge depends on how good the initial “guess” of the solution is. The novelty of this work involves data from previous timesteps being used to maximize the accuracy of each initial solution, thus reducing the number of iterations required. This method requires a combination of capabilities:
The large memory capacity of the NVIDIA Grace CPU to hold the required previous-timestep data,
The computational ability of Hopper to solve the equations at each iteration,
The fast interconnect to allow the data-driven results from Grace CPU to optimize the number of solver iterations on the Hopper GPU.
Figure 2. The data-driven method solves two problem sets simultaneously by using the high bandwidth CPU-GPU connection on GH200 to communicate predictions from the CPU to the solver on the GPU
Figure 3. Data-driven predictor running on the Grace CPU reduces iterations from about 150 to 50
The benefits are shown in Figure 1, where it can be seen that the time-to-solution is reduced by 86x, compared to only using the CPU, or 9x compared to only using the GPU. The corresponding energy reductions are 32x (compared to CPU) and 7x (compared to GPU alone).
Enabling energy efficient supercomputing
The University of Tokyo’s seismic research breakthroughs highlight the transformative power of energy-efficient supercomputing. By harnessing the NVIDIA Grace Hopper Superchip, researchers are not only accelerating seismic simulations by unprecedented margins but also drastically reducing energy consumption—a crucial step in
sustainable computing
.
This collaboration demonstrates how cutting-edge technology can address urgent global challenges, like earthquake safety, while setting new benchmarks for performance and energy efficiency. As these innovations continue to scale across industries, they pave the way for a future where high-performance computing drives both scientific progress and environmental responsibility.
At
NVIDIA GTC 2025
, Kohei Fujita, associate professor at the University of Tokyo Earthquake Research Institute, will discuss this breakthrough research in the session,
Implement Accelerated PDE-Based Time-History Simulation by Data-Driven Methods on Strongly Coupled CPU-GPU systems [S72925]
. | https://developer.nvidia.com/zh-cn/blog/spotlight-university-of-tokyo-uses-nvidia-grace-hopper-for-groundbreaking-energy-efficient-seismic-research/ | 聚焦:东京大学使用 NVIDIA Grace Hopper 进行开创性的节能地震研究 | 超级计算机是推动突破性发现的引擎。从预测极端天气到推进疾病研究,以及设计更安全、更高效的基础设施,这些机器可以模拟因规模、成本和材料要求而无法在现实世界中进行测试的复杂系统。
自 1999 年推出 GPU 以来,NVIDIA 一直在不断突破加速
计算的极限
,这种方法使用专业硬件,通过在并行处理中捆绑频繁出现的任务来大幅加速工作。这种效率使超级计算机能够处理前所未有的计算挑战,同时减少每项任务消耗的能源。
如今,由 NVIDIA 提供支持的系统在高效超级计算领域处于领先地位,在 Green500 榜单前 10 名中有 8 名入选,而 Green500 榜单是高效超级计算机的行业基准测试。德国于利希超级计算中心(Jülich Supercomputing Center)的 JEDI 系统由
NVIDIA Grace Hopper
提供动力支持,充分体现了这一进步,实现了 72.7 gigaflops 每瓦的惊人效率。
与东京大学一起推进地震安全性
NVIDIA 和东京大学最近的研究工作就是这种
提高能效的做法的典范
。日本位于世界上地震最活跃的地区之一,需要进行尖端研究来减轻地震的影响。
东京大学地震研究所
使用
NVIDIA Grace Hopper 超级芯片
加速地震模拟,
取得了突破性成果
。
NVIDIA GH200 Grace Hopper 超级芯片采用紧密合的 CPU-GPU 架构
,模拟性能提高了 86 倍,能效比传统方法高 32 倍。高级内存优化可确保更快速、更复杂的计算,使研究人员能够以出色的精度对抗震基础设施进行建模。通过利用数据驱动的预测和异构计算系统,该大学正在帮助将地震研究转变为更高效的工作。
获奖研究:用于地震建模的异构计算
在
WACCPD 2024
大会上,东京大学展示了 NVIDIA 硬件与软件的协同作用如何为动态、大规模问题提供解决方案,展示了这些突破性成果。在地震建模中,Grace Hopper 的创新架构将用于密集型计算的高性能 GPU 与用于预测算法的 CPU 内存容量相结合,从而加速基于时间进化方程的模拟。
除了地震研究之外,这种方法还有可能重塑
高性能计算(HPC)
的能效。
这项突破性的工作开发了一种新的 CPU-GPU 异构计算方法,能够以有保证的准确性反复解决时间演进偏微分方程 (PDE) 问题。这种新方法通过利用 CPU 的大内存容量和 GPU 的高计算性能,实现了极短的解决方案时间 (TTS) 和低能耗的解决方案。当扩展到
Alps 超级计算机
时,该方法的速度比仅使用 CPU 快 51.6 倍,比 GPU 快 6.98 倍,同时在 1,920 个计算节点中实现了 94.3% 的惊人效率。
图 1、新开发的 PDE 求解器的性能比较。在 NVIDIA GH200 Grace Hopper 超级芯片中同时使用 CPU 和 GPU 可实现出色性能
模拟涉及逐步推进时间,即通过求解方程组来计算每个时间点的解 (例如地震活动)。反过来,在每个时间步长中,都需要进行多次迭代,使解收到适当准确的结果。每次迭代的计算都非常密集,需要 GPU 的计算能力。
因此,有两个方面决定整体 TTS (以及相应的解能量):在每个时间步长下每次迭代的计算速度,以及每个时间步长下需要进行多少次迭代。正是这种组合使 Grace Hopper 架构真正表现出色:Hopper GPU 执行每次计算的速度极快,而 Grace CPU 的大内存容量则存储先前结果的历史记录以告知预测器,从而大幅减少迭代次数。
这种数据驱动的方法非常适合 NVIDIA GH200 Grace Hopper 超级芯片。所需的迭代次数取决于对解的初始“猜测”的准确性。这项工作的新颖之处在于,之前时间步长的数据可用于更大限度地提高每个初始解决方案的准确性,从而减少所需的迭代次数。此方法需要结合以下功能:
NVIDIA Grace CPU 的大内存容量可容纳所需的先前时间步长数据,
Hopper 在每次迭代中求解方程的计算能力
快速互连允许 Grace CPU 的数据驱动结果优化 Hopper GPU 上的求解器迭代次数。
图 2、数据驱动方法通过使用 GH200 上的高带宽 CPU-GPU 连接将预测从 CPU 传输到 GPU 上的求解器,同时解决两个问题集
图 3、在 Grace CPU 上运行的数据驱动预测器可将迭代次数从大约 150 次减少到 50 次
图 1 显示了这些优势,与仅使用 CPU 相比,解决时间缩短了 86 倍,与仅使用 GPU 相比,缩短了 9 倍。相应的能耗分别为 32 倍 (与 CPU 相比) 和 7 倍 (仅与 GPU 相比)。
实现高能效超级计算
东京大学的地震研究取得的突破凸显了高效超级计算的变革力量。通过利用 NVIDIA Grace Hopper Superchip,研究人员不仅以前所未有的利润加速地震模拟,而且还大幅降低了能源消耗,这是
可持续计算
的关键一步。
这项合作展示了尖端技术如何应对紧迫的全球性挑战 (例如地震安全性),同时为性能和能效设定新的基准。随着这些创新继续在各行各业中推广,它们为高性能计算推动科学进步和承担环保责任的未来铺平道路。
在
NVIDIA GTC 2025
上,东京大学地震研究所副教授 Kohei Fujita 将在名为“Implement Accelerated PDE-Based Time-History Simulation by Data-Driven Methods on Strongly Coupled CPU-GPU systems”的会议中讨论这项突破性研究[S72925]。 |
Subsets and Splits