Dataset Preview
The full dataset viewer is not available (click to read why). Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 7 new columns ({'_fingerprint', '_format_columns', '_split', '_format_type', '_format_kwargs', '_data_files', '_output_all_columns'}) and 6 missing columns ({'zh_url', 'en_content', 'en_url', 'en_title', 'zh_title', 'zh_content'}). This happened while the json dataset builder was generating data using hf://datasets/huckiyang/zh-tw-en-us-nv-blog-v1/state.json (at revision db6edcb04400013a6f9377fe845b1a2bf6995bb7). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
Traceback:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
  _data_files: list<item: struct<filename: string>>
    child 0, item: struct<filename: string>
      child 0, filename: string
  _fingerprint: string
  _format_columns: null
  _format_kwargs: struct<>
  _format_type: null
  _output_all_columns: bool
  _split: null
to
  {'en_url': Value(dtype='string', id=None), 'en_title': Value(dtype='string', id=None), 'en_content': Value(dtype='string', id=None), 'zh_url': Value(dtype='string', id=None), 'zh_title': Value(dtype='string', id=None), 'zh_content': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1438, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1050, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 7 new columns ({'_fingerprint', '_format_columns', '_split', '_format_type', '_format_kwargs', '_data_files', '_output_all_columns'}) and 6 missing columns ({'zh_url', 'en_content', 'en_url', 'en_title', 'zh_title', 'zh_content'}).
Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
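For anyone who wants to work around the error locally, a minimal sketch is to point the json builder at the parallel-corpus data file only and skip cached Arrow metadata such as state.json, whose columns do not match the schema. The data-file path below is hypothetical and should be replaced with the repo's actual file name.

```python
from datasets import load_dataset

# Load only the parallel-corpus JSON file; skip cache metadata such as state.json,
# whose columns (_fingerprint, _data_files, ...) do not match the expected schema.
# NOTE: "data/train.jsonl" is a hypothetical path; substitute the repo's real data file.
ds = load_dataset(
    "json",
    data_files="hf://datasets/huckiyang/zh-tw-en-us-nv-blog-v1/data/train.jsonl",
    split="train",
)
print(ds.column_names)  # expected: en_url, en_title, en_content, zh_url, zh_title, zh_content
```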
| en_url (string) | en_title (string) | en_content (string) | zh_url (string) | zh_title (string) | zh_content (string) |
|---|---|---|---|---|---|
https://blogs.nvidia.com/blog/ai-scaling-laws/ | How Scaling Laws Drive Smarter, More Powerful AI | Just as there are widely understood empirical laws of nature — for example, what goes up must come down, or every action has an equal and opposite reaction — the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws — pretraining scaling, post-training scaling and test-time scaling, also called long thinking — reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling — applying more compute at inference time to improve accuracy — has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
What Is Pretraining Scaling?
Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements — data, model size, compute — is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute — creating the need for powerful accelerated computing resources to run those larger training workloads.
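As a rough numerical illustration of this relationship, the sketch below evaluates a Chinchilla-style parametric fit L(N, D) = E + A/N^alpha + B/D^beta, where N is parameter count and D is training tokens. The coefficients are illustrative placeholders in the spirit of published scaling-law fits, not figures taken from the paper referenced above.

```python
def pretraining_loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style parametric fit L(N, D) = E + A / N**alpha + B / D**beta.
    The coefficients below are illustrative, not values from a specific paper."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Larger models trained on more tokens drive the predicted loss down smoothly.
for n, d in [(1e9, 2e10), (7e9, 1.4e11), (70e9, 1.4e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss ~ {pretraining_loss(n, d):.3f}")
```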
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques — all demanding significant compute.
And the relevance of the pretraining scaling law continues — as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.
What Is Post-Training Scaling?
Pretraining a large foundation model isn’t for everyone — it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model’s specificity and relevance for an organization’s desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation — or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model’s performance can further improve — in computational efficiency, accuracy or domain specificity — using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization’s internal datasets, or with pairs of sample model inputs and outputs.
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
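A minimal sketch of the soft-label objective commonly used in offline distillation (not a specific NVIDIA recipe): the student is trained to match the temperature-softened output distribution of a frozen teacher.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label loss for offline distillation: the student mimics the
    temperature-softened output distribution of a frozen teacher model."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence scaled by T^2, the standard practice so gradient magnitudes stay comparable.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2
```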
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment — for example, a chatbot LLM that is positively reinforced by “thumbs up” reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It’s often used to improve an AI’s outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
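A minimal sketch of the selection step, assuming generate and reward_model are caller-supplied helpers (any sampler and any scalar scorer), not a particular API:

```python
def best_of_n(prompt, generate, reward_model, n=8):
    """Best-of-n sampling: draw n candidate completions and keep the one the
    reward model scores highest. Model weights are never updated."""
    candidates = [generate(prompt) for _ in range(n)]
    scores = [reward_model(prompt, candidate) for candidate in candidates]
    return candidates[scores.index(max(scores))]
```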
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model’s responses.
To support post-training, developers can use synthetic data to augment or complement their fine-tuning dataset. Supplementing real-world datasets with AI-generated data can help models improve their ability to handle edge cases that are underrepresented or missing in the original training data.
Post-training scaling refines pretrained models using techniques like fine-tuning, pruning and distillation to enhance efficiency and task relevance.
What Is Test-Time Scaling?
LLMs generate quick responses to input prompts. While this process is well suited for getting the right answers to simple questions, it may not work as well when a user poses complex queries. Answering complex questions — an essential capability for agentic AI workloads — requires the LLM to reason through the question before coming up with an answer.
It’s similar to the way most humans think — when asked to add two plus two, they provide an instant answer, without needing to talk through the fundamentals of addition or integers. But if asked on the spot to develop a business plan that could grow a company’s profits by 10%, a person will likely reason through various options and provide a multistep answer.
Test-time scaling, also known as long thinking, takes place during inference. Instead of traditional AI models that rapidly generate a one-shot answer to a user prompt, models using this technique allocate extra computational effort during inference, allowing them to reason through multiple potential responses before arriving at the best answer.
On tasks like generating complex, customized code for developers, this AI reasoning process can take multiple minutes, or even hours — and can easily require over 100x compute for challenging queries compared to a single inference pass on a traditional LLM, which would be highly unlikely to produce a correct answer in response to a complex problem on the first try.
This test-time compute capability enables AI models to explore different solutions to a problem and break down complex requests into multiple steps — in many cases, showing their work to the user as they reason. Studies have found that test-time scaling results in higher-quality responses when AI models are given open-ended prompts that require several reasoning and planning steps.
The test-time compute methodology has many approaches, including:
Chain-of-thought prompting: Breaking down complex problems into a series of simpler steps.
Sampling with majority voting: Generating multiple responses to the same prompt, then selecting the most frequently recurring answer as the final output (see the sketch after this list).
Search: Exploring and evaluating multiple paths present in a tree-like structure of responses.
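A minimal sketch of the majority-voting (self-consistency) idea referenced above, assuming generate and extract_answer are caller-supplied helpers rather than a specific API:

```python
from collections import Counter

def majority_vote(prompt, generate, extract_answer, n=16):
    """Sample n independent reasoning chains for the same prompt and return the
    final answer that occurs most often across the samples."""
    answers = [extract_answer(generate(prompt)) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```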
Post-training methods like best-of-n sampling can also be used for long thinking during inference to optimize responses in alignment with human preferences or other objectives.
Test-time scaling enhances inference by allocating extra compute to improve AI reasoning, enabling models to tackle complex, multi-step problems effectively.
How Test-Time Scaling Enables AI Reasoning
The rise of test-time compute unlocks the ability for AI to offer well-reasoned, helpful and more accurate responses to complex, open-ended user queries. These capabilities will be critical for the detailed, multistep reasoning tasks expected of autonomous agentic AI and physical AI applications. Across industries, they could boost efficiency and productivity by providing users with highly capable assistants to accelerate their work.
In healthcare, models could use test-time scaling to analyze vast amounts of data and infer how a disease will progress, as well as predict potential complications that could stem from new treatments based on the chemical structure of a drug molecule. Or, a model could comb through a database of clinical trials to suggest options that match an individual’s disease profile, sharing its reasoning process about the pros and cons of different studies.
In retail and supply chain logistics, long thinking can help with the complex decision-making required to address near-term operational challenges and long-term strategic goals. Reasoning techniques can help businesses reduce risk and address scalability challenges by predicting and evaluating multiple scenarios simultaneously — which could enable more accurate demand forecasting, streamlined supply chain travel routes, and sourcing decisions that align with an organization’s sustainability initiatives.
And for global enterprises, this technique could be applied to draft detailed business plans, generate complex code to debug software, or optimize travel routes for delivery trucks, warehouse robots and robotaxis.
AI reasoning models are rapidly evolving. OpenAI o1-mini and o3-mini, DeepSeek R1, and Google DeepMind’s Gemini 2.0 Flash Thinking were all introduced in the last few weeks, and additional new models are expected to follow soon.
Models like these require considerably more compute to reason during inference and generate correct answers to complex questions — which means that enterprises need to scale their accelerated computing resources to deliver the next generation of AI reasoning tools that can support complex problem-solving, coding and multistep planning.
Learn about the benefits of NVIDIA AI for accelerated inference.
Categories: Explainer | Generative AI
Tags: Artificial Intelligence |
Inference | https://blogs.nvidia.com.tw/blog/ai-scaling-laws/ | 擴展定律如何推動更有智慧又更強大的 AI 發展 | 就像是人們普遍理解的自然經驗定律一樣,例如有上必有下,或者每個動作都有相等和相反的反應,人工智慧(AI)領域長期以來都是由單一想法所定義:更多的運算、更多的訓練資料和更多的參數,就可以產生更好的 AI 模型。
然而,AI 發展至今,需要三個不同的定律來描述不同方式利用運算資源如何影響模型效能。這些 AI 擴展定律合在一起,包含預訓練擴展(pretraining scaling)、訓練後擴展(post-training scaling),以及又稱為長思考(long thinking)的測試階段擴展(test-time scaling),反映出 AI 領域如何在各種日益複雜的 AI 用例中運用額外的運算技術演進發展。
近期興起的測試階段擴展,也就是在推論階段應用更多運算來提高準確度,已經實現 AI 推理模型這類新式的大型語言模型(
LLM
),以執行多次推論來處理複雜的問題,同時描述解決任務所需的步驟。測試階段擴展需要用到大量運算資源來支援 AI 推理,這將進一步推動對加速運算的需求。
什麼是預訓練擴展?
預訓練擴展是 AI 發展的原始定律。它證明透過增加訓練資料集大小、模型參數數量和運算資源,開發人員可以期望模型智慧和準確度會出現可預期的改善。
資料、模型大小、運算這三個要素中的每一個都息息相關。根據
本篇研究論文所概述
的預訓練擴展定律,當大型模型獲得更多資料時,模型的整體效能就會提高。為了實現這個目標,開發人員必須擴大運算規模,這就需要強大的加速運算資源來運行那些較大的訓練工作負載。
這種預訓練擴展原則使得大型模型達到突破性的能力。它還激發了模型架構的重大創新,包括有著數十億個和上兆個參數的
transformer 模型
、
混合專家
模型和新式分散式訓練技術的興起,而這一切都需要大量的運算。
而預訓練擴展定律的相關性仍在不斷發展,隨著人類持續產生越來越多的多模態資料,這些文字、影像、音訊、影片和感測器資訊的寶藏庫將會被用來訓練未來強大的 AI 模型。
預訓練擴展是 AI 發展的基本原則,它將模型、資料集和運算的大小與 AI 的效益連結起來。如上圖所示的混合專家模型,是訓練 AI 時常用的模型架構
什麼是訓練後擴展?
預先訓練大型
基礎模型
並非人人適用,這需要大量投資、熟練的專家和資料集。然而,一旦組織預先訓練好並發布模型,就能讓其他人使用其預先訓練的模型當成基礎,以配合自己的應用,從而降低採用 AI 的門檻。
這種訓練後的流程會推動企業及更廣泛的開發人員社群對加速運算的額外累積需求。受歡迎的開源模型可能有著上百個或上千個在多個領域裡訓練出的衍生模型。
針對各種用例開發衍生模型的生態系,可能需要比預先訓練原始基礎模型多出約 30 倍的運算時間。
訓練後技術可以進一步提升模型的特異性,以及與組織所需用例的相關性。預訓練擴展就像是將 AI 模型送去學校學習基本技能,而訓練後擴展則是增強模型適用於其預期工作的技能。比如一個大型語言模型可以經過訓練後擴展來處理情感分析或翻譯等任務,或是理解醫療保健或法律等特定領域的術語。
訓練後擴展定律假設使用微調、剪枝、量化、蒸餾、強化學習和合成資料增強等技術,可以進一步改善預訓練模型在運算效率、準確性或領域特異性方面的效能。
微調
(fine-tuning)使用額外的訓練資料,針對特定領域和應用量身打造 AI 模型。這可以使用組織的內部資料集,或是成對的樣本模型輸入和輸出內容來完成。
蒸餾
(distillation)需要使用一對 AI 模型:一個大型複雜的教師模型和一個輕量級的學生模型。在離線蒸餾這個最常見的蒸餾技術中,學生模型學習模仿預先訓練的教師模型的輸出。
強化學習
(reinforcement learning,RL)是一種機器學習技術,它使用獎勵模型來訓練代理做出符合特定用例的決定。代理的目標是在與環境互動的過程中,隨著時間的推移做出累積獎勵最大化的決策,例如聊天機器人大型語言模型會受到使用者做出「按讚」反應的正向強化。這種技術稱為基於人類回饋的強化學習(RLHF)。另一種較新的技術是基於 AI 回饋強化學習(RLAIF),它使用 AI 模型的回饋來引導學習過程,簡化訓練後的工作。
最佳解搜尋採樣
(Best-of-n sampling)會從語言模型產生多個輸出,並根據獎勵模型選擇獎勵分數最高的一個。它通常用來提高 AI 的輸出,而不需要修改模型參數,提供一種使用強化學習進行微調的替代方法。
搜尋方法
會在選擇最終輸出之前探索一系列潛在的決策路徑。這種訓練後擴展技術可以反覆改善模型的反應。
為了支援訓練後擴展,開發人員可以使用
合成資料
來增強或補充微調資料集。使用 AI 產生的資料來補充現實世界的資料集,有助於模型改善處理原始訓練資料中代表性不足或遺漏的邊緣案例的能力。
訓練後擴展使用微調、修剪和蒸餾等技術來完善預訓練模型,以提高效率和任務相關性
什麼是測試階段擴展?
大型語言模型會對輸入提示做出快速回應。這個過程非常適合用來獲得簡單問題的正確答案,但當使用者提出複雜的詢問,這個流程可能就沒那麼好使用。要回答複雜的問題,大型語言模型必須先對問題進行推理,才能給出答案,而回答複雜的問題是
代理型 AI
工作負載的基本能力。
這跟大多數人的思考方式類似,在被問到二加二的答案時,他們會馬上脫口而出,而不需要講解加法或整數的基本原理。可是萬一當場被要求制定一個可以讓公司利潤成長 10% 的商業計畫時,人們可能會透過各種選項進行推理,並且提供一個多步驟的答案。
測試階段擴展也稱為長思考,發生在推論過程中。傳統的 AI 模型會快速針對使用者的提示產生一次性答案,而使用這項技術的模型則會在推論過程中分配額外的運算工作,讓模型在得出最佳答案前先推理出多個可能的回應。
在為開發人員生成複雜的客製化程式碼等工作上,這個 AI 推理過程可能需要幾分鐘,甚至幾小時的時間,而且相較於傳統大型語言模型的單次推論,高難度的查詢可能需要超過 100 倍的運算量,因為傳統大型語言模型不太可能在第一次嘗試時,就能對複雜的問題產生正確的答案。
這種測試階段運算能力可以讓 AI 模型探索問題的不同解決方案,並將複雜的要求拆解成多個步驟,在許多情況下,在推理過程中向使用者展示其工作。研究發現,當給予 AI 模型需要多個推理與規劃步驟的開放式提示時,測試階段擴展可以獲得更高品質的回應。
測試階段運算方法有多種方法,包括:
思維鏈(chain-of-thought)提示:把複雜的問題分解成一系列更簡單的步驟。
多數決抽樣:針對同一個提示產生多個回應,然後選擇最常出現的答案作為最終輸出。
搜尋:探索與評估回覆樹狀結構裡的多個路徑。
類似最佳解搜尋採樣的訓練後擴展方法也可用於推論過程中的長思考,以最佳化符合人類喜好或其他目標的回應。
測試階段擴展技術透過分配額外的運算來增強 AI推理能力,使得模型能夠有效解決複雜的多步驟問題
測試階段擴展如何進行 AI 推理
測試階段運算技術的興起,讓 AI 有能力對使用者所提出複雜、開放式的查詢項目,提供有理有據、有幫助且更加準確的回應。這些能力對於自主
代理型 AI
及
實體 AI
應用所期待的詳細、多重推理任務來說至關重要。它們可以為各產業的使用者提供能力強大的助理來加速工作,從而提高效率和生產力。
在醫療保健領域,模型可以使用測試階段擴展技術來分析大量資料,推斷疾病的發展情況,以及根據藥物分子的化學結構,預測新療法可能產生的潛在併發症。或者,它可以梳理臨床試驗資料庫,建議符合個人病況的方案,分享其對不同研究利弊的推理過程。
在零售和供應鏈物流領域,長思考有助於解決近期營運挑戰和長期策略目標所需的複雜決策。推理技術可以同時預測與評估多種情境,協助企業降低風險,並因應在擴充方面的難題。這可以實現更精準的需求預測、簡化供應鏈行程路線,以及做出符合組織永續發展計畫的採購決策。
對於全球企業而言,這項技術可應用於草擬詳細的商業計畫、產生複雜的程式碼以對軟體進行除錯,或是最佳化貨車、倉儲機器人和無人駕駛計程車的行駛路線。
AI 推理模型發展迅速。OpenAI o1-mini 和 o3-mini、
DeepSeek R1
以及 Google DeepMind 的 Gemini 2.0 Flash Thinking 都是在過去幾週推出,預計不久後還會有更多新的模型問世。
這些模型在推理過程中需要使用大量運算,才能對複雜問題進行推理與產生正確答案,這表示企業需要擴充加速運算資源,以提供能夠解決複雜問題、編寫程式碼和規劃多步驟的下一代AI推理工具。
了解
NVIDIA AI
在加速推論
方面的優勢。
Categories: 生成式人工智慧 | 解釋達人
Tags: Artificial Intelligence | Inference |
https://blogs.nvidia.com/blog/category/generative-ai/ | Generative AI | - Archives Page 1 | NVIDIA Blog
Generative AI
Most Popular
Massive Foundation Model for Biomolecular Sciences Now Available via NVIDIA BioNeMo
Scientists everywhere can now access Evo 2, a powerful new foundation model that understands the genetic code for…
Telcos Dial Up AI: NVIDIA Survey Unveils Industry’s AI Trends
The telecom industry’s efforts to drive efficiencies with AI are beginning to show fruit. An increasing focus on deploying AI into radio access networks (RANs) was among the key findings…
How Scaling Laws Drive Smarter, More Powerful AI
Just as there are widely understood empirical laws of nature — for example, what goes up must come down, or every action has an equal and opposite reaction — the…
Safety First: Leading Partners Adopt NVIDIA Cybersecurity AI to Safeguard Critical Infrastructure
The rapid evolution of generative AI has created countless opportunities for innovation across industry and research. As is often the case with state-of-the-art technology, this evolution has also shifted the…
What Are Foundation Models?
Editor’s note: This article, originally published on March 13, 2023, has been updated. The mics were live and tape was rolling in the studio where the Miles Davis Quintet was…
NVIDIA CEO Awarded for Advancing Precision Medicine With Accelerated Computing, AI
NVIDIA’s contributions to accelerating medical imaging, genomics, computational chemistry and AI-powered robotics were honored Friday at the Precision Medicine World Conference in Santa Clara, California, where NVIDIA founder and CEO…
Technovation Empowers Girls in AI, Making AI Education More Inclusive and Engaging
Tara Chklovski has spent much of her career inspiring young women to take on some of the world’s biggest challenges using technology. The founder and CEO of education nonprofit Technovation…
Building More Builders: Gooey.AI Makes AI More Accessible Across Communities
When non-technical users can create and deploy reliable AI workflows, organizations can do more to serve their clientele Platforms for developing no- and low-code solutions are bridging the gap between…
How GeForce RTX 50 Series GPUs Are Built to Supercharge Generative AI on PCs
NVIDIA’s GeForce RTX 5090 and 5080 GPUs — which are based on the groundbreaking NVIDIA Blackwell architecture —offer up to 8x faster frame rates with NVIDIA DLSS 4 technology, lower…
All NVIDIA News
All Systems Go: NVIDIA Engineer Takes NIMble Approach to Innovation
Physicists Tap James Webb Space Telescope to Track New Asteroids and City-Killer Rock
GeForce NOW Welcomes Warner Bros. Games to the Cloud With ‘Batman: Arkham’ Series
Technovation Empowers Girls in AI, Making AI Education More Inclusive and Engaging
AI-Designed Proteins Take on Deadly Snake Venom
| https://blogs.nvidia.com.tw/blog/category/generative-ai/ | 生成式人工智慧 | 生成式人工智慧 彙整 - NVIDIA 台灣官方部落格
生成式人工智慧
Most Popular
擴展定律如何推動更有智慧又更強大的 AI 發展
就像是人們普遍理解的自然經驗定律一樣…
使用 Transformer 產生合成資料:企業資料挑戰的解決方案
GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲服務給你歡樂無比的遊戲節慶時刻
揭開 NVIDIA DOCA 的神祕面紗
安全至上:領先合作夥伴採用 NVIDIA 網路安全 AI 保護關鍵基礎設施
生成式人工智慧(AI)的快速發展,為產業與研究領域的創新帶來…
AI 帶來亮眼報酬:調查結果揭示金融業最新技術趨勢
金融服務業在使用人工智慧(AI)方面正邁入一個重要的里程碑,…
NVIDIA 發表為代理型 AI 應用提供安全防護的 NIM 微服務
AI 代理為全球數十億名知識工作者提供可完成各種任務的「知識…
NVIDIA 攜手產業領導業者推動基因組學、藥物探索與醫療保健發展
NVIDIA 今日宣布建立新的合作關係,經由加速藥物探索、加…
CES 2025:NVIDIA 執行長表示 AI 正以「驚人的速度」進步
NVIDIA 創辦人暨執行長黃仁勳以長達 90 分鐘的主題演…
NVIDIA 開放 Cosmos 世界基礎模型給實體 AI 開發者社群使用
加速開發實體人工智慧(AI) 的 NVIDIA Cosmos…
NVIDIA 推出能夠分析影片內容的 AI 代理藍圖
人工智慧(AI) 的下一個重要時刻就在我們眼前。 目前全球部…
NVIDIA 與合作夥伴推出代理型 AI 藍圖, 協助每個企業自動執行工作
用於建立代理型 AI 應用程式的全新 NVIDIA AI B…
NVIDIA 宣布推出 Nemotron 模型系列,以推動代理型 AI 的發展
人工智慧(AI)將進入代理式 AI 的新時代,專業代理組成的…
All NVIDIA News
NVIDIA 宣布推出 Isaac GR00T 藍圖以加速開發人型機器人
NVIDIA以 Cosmos 世界基礎模型增強適用於自動駕駛的三台電腦解決方案
NVIDIA 發表「Mega」Omniverse Blueprint,打造工業機器人機群數位孿生
NVIDIA 啟用 DRIVE AI 系統檢測實驗室,創下業界全新安全里程碑
建造更聰明的自主機器:NVIDIA 宣布 Omniverse Sensor RTX 推出搶先體驗活動
|
https://blogs.nvidia.com/blog/cybersecurity-ai-critical-infrastructure/ | Safety First: Leading Partners Adopt NVIDIA Cybersecurity AI to Safeguard Critical Infrastructure | The rapid evolution of generative AI has created countless opportunities for innovation across industry and research. As is often the case with state-of-the-art technology, this evolution has also shifted the landscape of cybersecurity threats, creating new security requirements. Critical infrastructure cybersecurity is advancing to thwart the next wave of emerging threats in the AI era.
Leading operational technology (OT) providers today showcased at the S4 conference for industrial control systems (ICS) and OT cybersecurity how they’re adopting the NVIDIA cybersecurity AI platform to deliver real-time threat detection and critical infrastructure protection.
Armis, Check Point, CrowdStrike, Deloitte and World Wide Technology (WWT) are integrating the platform to help customers bolster critical infrastructure, such as energy, utilities and manufacturing facilities, against cyber threats.
Critical infrastructure operates in highly complex environments, where the convergence of IT and OT, often accelerated by digital transformation, creates a perfect storm of vulnerabilities. Traditional cybersecurity measures are no longer sufficient to address these emerging threats.
By harnessing NVIDIA’s cybersecurity AI platform, these partners can provide exceptional visibility into critical infrastructure environments, achieving robust and adaptive security while delivering operational continuity.
The platform integrates NVIDIA’s accelerated computing and AI, featuring NVIDIA BlueField-3 DPUs, NVIDIA DOCA and the NVIDIA Morpheus AI cybersecurity framework, part of the NVIDIA AI Enterprise software platform. This combination enables real-time threat detection, empowering cybersecurity professionals to respond swiftly at the edge and across networks.
Unlike conventional solutions that depend on intrusive methods or software agents, BlueField-3 DPUs function as a virtual security overlay. They inspect network traffic and safeguard host integrity without disrupting operations. Acting as embedded sensors within each server, they stream telemetry data to NVIDIA Morpheus, enabling detailed monitoring of host activities, network traffic and application behaviors — seamlessly and without operational impact.
Driving Cybersecurity Innovation Across Industries
Integrating Armis Centrix, Armis’ AI-powered cyber exposure management platform, with NVIDIA cybersecurity AI helps secure critical infrastructure like energy, manufacturing, healthcare and transportation.
“OT environments are increasingly targeted by sophisticated cyber threats, requiring robust solutions that ensure both security and operational continuity,” said Nadir Izrael, chief technology officer and cofounder of Armis. “Combining Armis’ unmatched platform for OT security and cyber exposure management with NVIDIA BlueField-3 DPUs enables organizations to comprehensively protect cyber-physical systems without disrupting operations.”
CrowdStrike is helping secure critical infrastructure such as ICS and OT by deploying its CrowdStrike Falcon security agent on BlueField-3 DPUs to boost real-time AI-powered threat detection and response.
“OT environments are under increasing threat, demanding AI-powered security that adapts in real time,” said Raj Rajamani, head of products at CrowdStrike. “By integrating NVIDIA BlueField-3 DPUs with the CrowdStrike Falcon platform, we’re extending industry-leading protection to critical infrastructure without disrupting operations — delivering unified protection at the edge and helping organizations stay ahead of modern threats.”
Deloitte is driving customers’ digital transformation, enabled by NVIDIA’s cybersecurity AI platform, to help meet the demands of breakthrough technologies that require real-time, granular visibility into data center networks to defend against increasingly sophisticated threats.
“Protecting OT and ICS systems is becoming increasingly challenging as organizations embrace digital transformation and interconnected technologies,” said Dmitry Dudorov, an AI security leader at Deloitte U.K. “Harnessing NVIDIA’s cybersecurity AI platform can enable organizations to determine threat detection, enhance resilience and safeguard their infrastructure to accelerate their efforts.”
A Safer Future, Powered by AI
NVIDIA’s cybersecurity AI platform, combined with the expertise of ecosystem partners, offers a powerful and scalable solution to protect critical infrastructure environments against evolving threats. Bringing NVIDIA AI and accelerated computing to the forefront of OT security can help organizations protect what matters most — now and in the future.
Learn more by attending the NVIDIA GTC global AI conference, running March 17-21, where Armis, Check Point and CrowdStrike cybersecurity leaders will host sessions about their collaborations with NVIDIA.
Categories: Generative AI | Networking | Software
Tags: Artificial Intelligence | Cybersecurity | NVIDIA AI Enterprise |
NVIDIA BlueField | https://blogs.nvidia.com.tw/blog/cybersecurity-ai-critical-infrastructure/ | 安全至上:領先合作夥伴採用 NVIDIA 網路安全 AI 保護關鍵基礎設施 | 生成式人工智慧(AI)的快速發展,為產業與研究領域的創新帶來無數機會。正如最先進的技術常見的情況,這種演進同樣改變了網路安全威脅的格局,產生出全新的安全需求。關鍵基礎設施的網路安全正在不斷進步,以嚇阻 AI 時代的下一波新興威脅。
領先的營運技術(OT)供應商今日在專注於工業控制系統(ICS)與 OT 網路安全的S4大會上,展示他們如何採用 NVIDIA 網路安全 AI 平台來提供即時偵測威脅與關鍵基礎設施保護。
Armis、Check Point、CrowdStrike、德勤(Deloitte)與 World Wide Technology(WWT)正在整合該平台,以協助客戶強化能源、公用事業和製造設施等關鍵基礎設施對抗網路威脅。
關鍵基礎設施在高度複雜的環境中運作,常常因數位轉型而加速整合 IT 與 OT,產生出資安漏洞的完美風暴。傳統的網路安全措施已經不足以應對這些新興威脅。
利用
NVIDIA 的網路安全 AI 平台
,這些合作夥伴能夠為關鍵基礎設施環境提供極佳的可視性,並在維持設施持續運作的同時,實現強大且適應性高的安全功能。
該平台整合了 NVIDIA 的加速運算與 AI,採用
NVIDIA BlueField-3 DPU
、
NVIDIA DOCA
及作為
NVIDIA AI Enterprise
一部分的
NVIDIA Morpheus AI 網路安全框架
。這個組合能夠實現即時偵測威脅,讓網路安全專業人員能夠在邊緣和整個網路迅速回應。
與傳統依賴侵入性方法或軟體代理的解決方案不同,BlueField-3 DPU 有著虛擬安全覆蓋的功能,可以在不中斷運作的情況下檢查網路流量與保護主機完整性。作為嵌入每一台伺服器裡的感測器,它們將遙測資料傳輸至 NVIDIA Morpheus,以流暢且不影響運作的方式,實現主機活動、網路流量和應用程式行為的詳細監控。
推動各產業的網路安全創新
整合 Armis Centrix 的 AI 驅動 Armis 網路暴露管理平台搭配 NVIDIA 網路安全 AI,協助確保能源、製造、醫療保健與運輸等關鍵基礎設施的安全。
Armis 技術長暨共同創辦人 Nadir Izrael 表示:「OT 環境日益成為複雜網路威脅的目標,需要強大的解決方案來確保安全性與營運的連續性。將 Armis 無與倫比的 OT 安全與網路暴露管理平台與 NVIDIA BlueField-3 DPU 相結合,可以讓企業在不中斷營運的情況下,全面保護虛實整合系統。」
CrowdStrike 透過在 BlueField-3 DPU 上部署 CrowdStrike Falcon 安全代理程式,以提升即時 AI 驅動的威脅偵測與回應能力,幫助保護 ICS 與 OT 等關鍵基礎設施的安全。
CrowdStrike 產品負責人 Raj Rajamani 表示:「OT 環境面臨越來越多威脅,需要可即時適應各種情況以 AI 驅動的安全。透過將 NVIDIA BlueField-3 DPUs 與 CrowdStrike Falcon 平台整合,我們在不中斷營運的情況下,將領先業界的防護功能擴展至關鍵基礎設施,在邊緣提供統一的防護,協助企業在現代威脅下保持領先。」
德勤使用 NVIDIA 網路安全 AI 平台推動客戶的數位轉型,以協助滿足突破性技術的需求,這些技術需要為資料中心網路提供即時且精細的可視性,以抵禦日益複雜的威脅。
德勤英國分公司 AI 安全主管 Dmitry Dudorov 表示:「隨著企業擁抱數位轉型與互聯技術,保護 OT 與 ICS 系統的難度與日俱增。利用 NVIDIA 的網路安全 AI 平台,可讓組織確定威脅偵測、增強復原能力,並保障基礎設施的安全,以加快執行各項工作。」
AI 助力開創更安全的未來
NVIDIA 的網路安全 AI 平台結合生態系合作夥伴的專業知識,提供強大且可擴充的解決方案,保護關鍵基礎設施環境免受不斷演進的威脅。將 NVIDIA AI 與加速運算帶入 OT 安全的最前線,可協助組織保護現在和未來最重要的事物。
歡迎參加 3 月 17 至 21 日舉辦的
NVIDIA GTC
全球 AI 大會了解更多資訊,屆時 Armis、Check Point 與 CrowdStrike 等網路安全領導廠商將主持多場
會議
,介紹他們與 NVIDIA 的合作項目。
Categories: 互聯網路 | 生成式人工智慧 | 軟體
Tags: Artificial Intelligence | cybersecurity | NVIDIA AI Enterprise | NVIDIA BlueField |
https://blogs.nvidia.com/blog/ai-in-financial-services-survey-2025/ | AI Pays Off: Survey Reveals Financial Industry’s Latest Technological Trends | The financial services industry is reaching an important milestone with AI, as organizations move beyond testing and experimentation to successful AI implementation, driving business results.
NVIDIA’s fifth annual State of AI in Financial Services report shows how financial institutions have consolidated their AI efforts to focus on core applications, signaling a significant increase in AI capability and proficiency.
AI Helps Drive Revenue and Save Costs
Companies investing in AI are seeing tangible benefits, including increased revenue and cost savings.
Nearly 70% of respondents report that AI has driven a revenue increase of 5% or more, with a dramatic rise in those seeing a 10-20% revenue boost. In addition, more than 60% of respondents say AI has helped reduce annual costs by 5% or more. Nearly a quarter of respondents are planning to use AI to create new business opportunities and revenue streams.
The top generative AI use cases in terms of return on investment (ROI) are trading and portfolio optimization, which account for 25% of responses, followed by customer experience and engagement at 21%. These figures highlight the practical, measurable benefits of AI as it transforms key business areas and drives financial gains.
Overcoming Barriers to AI Success
Half of management respondents said they’ve deployed their first generative AI service or application, with an additional 28% planning to do so within the next six months. A 50% decline in the number of respondents reporting a lack of AI budget suggests increasing dedication to AI development and resource allocation.
The challenges associated with early AI exploration are also diminishing. The survey revealed fewer companies reporting data issues and privacy concerns, as well as reduced concern over insufficient data for model training. These improvements reflect growing expertise and better data management practices within the industry.
As financial services firms allocate budget and grow more savvy at data management, they can better position themselves to harness AI for enhanced operational efficiency, security and innovation across business functions.
Generative AI Powers More Use Cases
After data analytics, generative AI has emerged as the second-most-used AI workload in the financial services industry. The applications of the technology have expanded significantly, from enhancing customer experience to optimizing trading and portfolio management.
Notably, the use of generative AI for customer experience, particularly via chatbots and virtual assistants, has more than doubled, rising from 25% to 60%. This surge is driven by the increasing availability, cost efficiency and scalability of generative AI technologies for powering more sophisticated and accurate digital assistants that can enhance customer interactions.
More than half of the financial professionals surveyed are now using generative AI to enhance the speed and accuracy of critical tasks like document processing and report generation.
Financial institutions are also poised to benefit from agentic AI — systems that harness vast amounts of data from various sources and use sophisticated reasoning to autonomously solve complex, multistep problems. Banks and asset managers can use agentic AI systems to enhance risk management, automate compliance processes, optimize investment strategies and personalize customer services.
Advanced AI Drives Innovation
Recognizing the transformative potential of AI, companies are taking proactive steps to build AI factories — specially built accelerated computing platforms equipped with full-stack AI software — through cloud providers or on premises. This strategic focus on implementing high-value AI use cases is crucial to enhancing customer service, boosting revenue and reducing costs.
By tapping into advanced infrastructure and software, companies can streamline the development and deployment of AI models and position themselves to harness the power of agentic AI.
With industry leaders predicting at least 2x ROI on AI investments, financial institutions remain highly motivated to implement their highest-value AI use cases to drive efficiency and innovation.
Download the full report to learn more about how financial services companies are using accelerated computing and AI to transform services and business operations.
Categories: Generative AI
Tags: Artificial Intelligence |
Financial Services | https://blogs.nvidia.com.tw/blog/ai-in-financial-services-survey-2025/ | AI 帶來亮眼報酬:調查結果揭示金融業最新技術趨勢 | 金融服務業在使用人工智慧(AI)方面正邁入一個重要的里程碑,各大組織開始邁出測試與實驗的範疇,成功使用 AI 推動業務成果。
NVIDIA 的第五份
《金融服務業 AI 現況(State of AI in Financial Services)》年度調查報告
顯示,金融機構已經整合自身在 AI 方面的各項作為,以專注在核心應用項目上,這標誌著 AI 能力與熟練程度大幅提升。
AI 有助於增加營收與節省成本
投資於 AI 的公司正在看到實質效益,包括增加營收和節省成本等。
近七成的受訪者表示,AI 已經帶來 5% 或以上的營收成長,其中營收成長幅度達 10% 至 20% 的受訪者比例更是大幅增加。此外,超過六成的受訪者表示 AI 已協助減少 5% 或以上的年度成本。近四分之一的受訪者正計劃使用 AI 創造新的商機和收入來源。
交易與投資組合最佳化是投資報酬率(ROI)最高的
生成式 AI
使用案例,佔回應數量的 25%,其次是客戶體驗與參與度,佔 21%。這些數字突顯 AI在改變關鍵業務領域和推動財務收益時,所帶來可衡量的實際效益。
克服 AI 成功的關卡
半數管理層的受訪者表示,他們已經部署了第一個生成式 AI 服務或應用,另有 28% 的受訪者計劃在未來六個月內部署。回覆缺乏 AI 預算的受訪者人數減少了五成,這顯示對於 AI 開發與資源分配的投入程度日益增加。
與早期探索 AI 相關的挑戰同樣在減少。調查顯示,回答有資料問題和隱私疑慮的公司數量減少,對於模型訓練資料不足的疑慮也降低。這些改善反映出業界的專業知識與資料管理實務正在不斷增加。
隨著金融服務公司分配預算並更加擅長管理資料,他們可以更好地利用 AI 來提高跨業務單位的營運效率、安全性和進行創新。
生成式 AI 驅動更多使用案例
繼資料分析之後,生成式 AI 已經成為金融服務業裡第二大宗的 AI 工作負載。這項技術的應用範圍已大幅擴展,從提升客戶體驗到最佳化交易和投資組合管理。
值得注意的是,生成式 AI 在客戶體驗方面的應用,特別是透過聊天機器人和虛擬助理,數量增加了一倍以上,從 25% 上升到 60%。這樣大幅成長的趨勢是基於生成式 AI 技術的可用性、成本效率和可擴展性不斷提高,能夠驅動更複雜、更精準的數位助理,從而提升客戶互動情況。
半數以上受訪的金融專業人員現正使用生成式 AI 技術,以提高處理文件和產生報告等重要工作的速度和準確性。
金融機構也準備好從
代理型 AI
中受惠,代理型 AI 系統是指利用各種來源的大量資料,並使用複雜的推理流程自主解決複雜的多步驟問題。銀行和資產管理公司可以使用代理型 AI 系統來加強管理風險、自動化合規流程、最佳化投資策略,還有提供個人化的客戶服務。
先進的 AI 推動創新
在意識到 AI 的轉型潛力後,企業正積極採取措施,透過與雲端服務供應商合作或是在地端建立 AI 工廠,這些 AI 工廠是專門打造的加速運算平台,配備全端的 AI 軟體。企業在策略上特別鎖定實施高價值的 AI 使用案例,這對於提升客戶服務、增加收入與降低成本來說至關重要。
企業利用先進的基礎設施和軟體,可以簡化 AI 模型的開發和部署,並在善加發揮代理型 AI 力量方面站穩腳步。
由於業界領導業者預測 AI 投資的投資報酬率至少為兩倍,因此金融機構仍有很大動力去實現其最高價值的 AI 使用案例,以推動效率和創新。
下載完整報告
,進一步瞭解金融服務公司如何利用加速運算和 AI 來改變服務和業務運作。
Categories: 生成式人工智慧
Tags: Artificial Intelligence | Financial Services |
https://blogs.nvidia.com/blog/nemo-guardrails-nim-microservices/ | NVIDIA Releases NIM Microservices to Safeguard Applications for Agentic AI | AI agents are poised to transform productivity for the world’s billion knowledge workers with “knowledge robots” that can accomplish a variety of tasks. To develop AI agents, enterprises need to address critical concerns like trust, safety, security and compliance.
New NVIDIA NIM microservices for AI guardrails — part of the NVIDIA NeMo Guardrails collection of software tools — are portable, optimized inference microservices that help companies improve the safety, precision and scalability of their generative AI applications.
Central to the orchestration of the microservices is NeMo Guardrails, part of the NVIDIA NeMo platform for curating, customizing and guardrailing AI. NeMo Guardrails helps developers integrate and manage AI guardrails in large language model (LLM) applications. Industry leaders Amdocs, Cerence AI and Lowe’s are among those using NeMo Guardrails to safeguard AI applications.
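A minimal usage sketch with the open-source nemoguardrails package: the configuration directory and its contents are assumptions here, and the specific rails (for example, content-safety or topic-control checks backed by the new NIM microservices) are declared in that configuration rather than in this code.

```python
from nemoguardrails import LLMRails, RailsConfig

# Load a rails configuration (config.yml plus optional Colang flows) from disk.
# "./guardrails_config" is a hypothetical directory prepared separately.
config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Every generation now passes through the configured input and output rails.
response = rails.generate(messages=[
    {"role": "user", "content": "How do I reset my account password?"}
])
print(response["content"])
```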
Developers can use the NIM microservices to build more secure, trustworthy AI agents that provide safe, appropriate responses within context-specific guidelines and are bolstered against jailbreak attempts. Deployed in customer service across industries like automotive, finance, healthcare, manufacturing and retail, the agents can boost customer satisfaction and trust.
One of the new microservices, built for moderating content safety, was trained using the Aegis Content Safety Dataset — one of the highest-quality, human-annotated data sources in its category. Curated and owned by NVIDIA, the dataset is publicly available on Hugging Face and includes over 35,000 human-annotated data samples flagged for AI safety and jailbreak attempts to bypass system restrictions.
NVIDIA NeMo Guardrails Keeps AI Agents on Track
AI is rapidly boosting productivity for a broad range of business processes. In customer service, it’s helping resolve customer issues up to 40% faster. However, scaling AI for customer service and other AI agents requires secure models that prevent harmful or inappropriate outputs and ensure the AI application behaves within defined parameters.
NVIDIA has introduced three new NIM microservices for NeMo Guardrails that help AI agents operate at scale while maintaining controlled behavior:
Content safety NIM microservice that safeguards AI against generating biased or harmful outputs, ensuring responses align with ethical standards.
Topic control NIM microservice that keeps conversations focused on approved topics, avoiding digression or inappropriate content.
Jailbreak detection NIM microservice that adds protection against jailbreak attempts, helping maintain AI integrity in adversarial scenarios.
By applying multiple lightweight, specialized models as guardrails, developers can cover gaps that may occur when only more general global policies and protections exist — as a one-size-fits-all approach doesn’t properly secure and control complex agentic AI workflows.
Small language models, like those in the NeMo Guardrails collection, offer lower latency and are designed to run efficiently, even in resource-constrained or distributed environments. This makes them ideal for scaling AI applications in industries such as healthcare, automotive and manufacturing, in locations like hospitals or warehouses.
Industry Leaders and Partners Safeguard AI With NeMo Guardrails
NeMo Guardrails, available to the open-source community, helps developers orchestrate multiple AI software policies — called rails — to enhance LLM application security and control. It works with NVIDIA NIM microservices to offer a robust framework for building AI systems that can be deployed at scale without compromising on safety or performance.
Amdocs, a leading global provider of software and services to communications and media companies, is harnessing NeMo Guardrails to enhance AI-driven customer interactions by delivering safer, more accurate and contextually appropriate responses.
“Technologies like NeMo Guardrails are essential for safeguarding generative AI applications, helping make sure they operate securely and ethically,” said Anthony Goonetilleke, group president of technology and head of strategy at Amdocs. “By integrating NVIDIA NeMo Guardrails into our amAIz platform, we are enhancing the platform’s ‘Trusted AI’ capabilities to deliver agentic experiences that are safe, reliable and scalable. This empowers service providers to deploy AI solutions safely and with confidence, setting new standards for AI innovation and operational excellence.”
Cerence AI, a company specializing in AI solutions for the automotive industry, is using NVIDIA NeMo Guardrails to help ensure its in-car assistants deliver contextually appropriate, safe interactions powered by its CaLLM family of large and small language models.
“Cerence AI relies on high-performing, secure solutions from NVIDIA to power our in-car assistant technologies,” said Nils Schanz, executive vice president of product and technology at Cerence AI. “Using NeMo Guardrails helps us deliver trusted, context-aware solutions to our automaker customers and provide sensible, mindful and hallucination-free responses. In addition, NeMo Guardrails is customizable for our automaker customers and helps us filter harmful or unpleasant requests, securing our CaLLM family of language models from unintended or inappropriate content delivery to end users.”
Lowe’s, a leading home improvement retailer, is leveraging generative AI to build on the deep expertise of its store associates. By providing enhanced access to comprehensive product knowledge, these tools empower associates to answer customer questions, helping them find the right products to complete their projects and setting a new standard for retail innovation and customer satisfaction.
“We’re always looking for ways to help associates go above and beyond for our customers,” said Chandhu Nair, senior vice president of data, AI and innovation at Lowe’s. “With our recent deployments of NVIDIA NeMo Guardrails, we ensure AI-generated responses are safe, secure and reliable, enforcing conversational boundaries to deliver only relevant and appropriate content.”
To further accelerate AI safeguards adoption in AI application development and deployment in retail, NVIDIA recently announced at the NRF show that its NVIDIA AI Blueprint for retail shopping assistants incorporates NeMo Guardrails microservices for creating more reliable and controlled customer interactions during digital shopping experiences.
Consulting leaders Taskus, Tech Mahindra and Wipro are also integrating NeMo Guardrails into their solutions to provide their enterprise clients safer, more reliable and controlled generative AI applications.
NeMo Guardrails is open and extensible, offering integration with a robust ecosystem of leading AI safety model and guardrail providers, as well as AI observability and development tools. It supports integration with ActiveFence’s ActiveScore, which filters harmful or inappropriate content in conversational AI applications, and provides visibility, analytics and monitoring.
Hive, which provides its AI-generated content detection models for images, video and audio content as NIM microservices, can be easily integrated and orchestrated in AI applications using NeMo Guardrails.
The Fiddler AI Observability platform easily integrates with NeMo Guardrails to enhance AI guardrail monitoring capabilities. And Weights & Biases, an end-to-end AI developer platform, is expanding the capabilities of W&B Weave by adding integrations with NeMo Guardrails microservices. This enhancement builds on Weights & Biases’ existing portfolio of NIM integrations for optimized AI inferencing in production.
NeMo Guardrails Offers Open-Source Tools for AI Safety Testing
Developers ready to test the effectiveness of applying safeguard models and other rails can use NVIDIA Garak — an open-source toolkit for LLM and application vulnerability scanning developed by the NVIDIA Research team.
With Garak, developers can identify vulnerabilities in systems using LLMs by assessing them for issues such as data leaks, prompt injections, code hallucination and jailbreak scenarios. By generating test cases involving inappropriate or incorrect outputs, Garak helps developers detect and address potential weaknesses in AI models to enhance their robustness and safety.
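As an illustration only, a scan might be launched roughly as follows; the flag names and probe family are assumptions based on garak's documented command-line interface and should be verified against its current help output.

```python
import subprocess

# Hypothetical invocation: probe an OpenAI-compatible model for prompt-injection issues.
# Flag names and the probe family follow garak's documented CLI and may differ by version.
subprocess.run([
    "python", "-m", "garak",
    "--model_type", "openai",
    "--model_name", "gpt-4o-mini",
    "--probes", "promptinject",
], check=True)
```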
Availability
NVIDIA NeMo Guardrails microservices, as well as NeMo Guardrails for rail orchestration and the NVIDIA Garak toolkit, are now available for developers and enterprises. Developers can get started building AI safeguards into AI agents for customer service using NeMo Guardrails with this tutorial.
See notice regarding software product information.
Categories: Generative AI
Tags: Artificial Intelligence | Cybersecurity | NVIDIA Blueprints | NVIDIA NeMo |
NVIDIA NIM | https://blogs.nvidia.com.tw/blog/nemo-guardrails-nim-microservices/ | NVIDIA 發表為代理型 AI 應用提供安全防護的 NIM 微服務 | AI 代理為全球數十億名知識工作者提供可完成各種任務的「知識機器人」,改變他們的生產力。而為了開發 AI 代理,企業需要解決信任、安全、保全及法遵等關鍵問題。
作為
NVIDIA NeMo Guardrails
軟體工具集的一部分,全新用於人工智慧(AI)防護工作的
NVIDIA NIM
微服務是一款可攜式且經過最佳化的推論微服務,可以協助企業提高其生成式 AI 應用的安全性、精確性與可擴充性。
NeMo Guardrails 是這些微服務的協調核心,也是用於彙整、客製化和為 AI 提供保護的
NVIDIA NeMo
平台一部分。NeMo Guardrails 可協助開發人員在大型語言模型(LLM)應用中整合與管理 AI 防護工作。Amdocs、Cerence AI 和 Lowe’s 等業界領導廠商均使用 NeMo Guardrails 來保護 AI 應用。
開發人員可以使用 NIM 微服務來建立更安全、更值得信賴的 AI 代理,在特定情境的指引下提供安全且適當的回應,並且加強防禦嘗試越獄的行為。這些代理可以部署在汽車、金融、醫療保健、製造和零售等產業的客戶服務中,以提升客戶滿意度和信任度。
其中一個新的微服務是為了控制內容安全而建立,使用 Aegis 內容安全資料集(Aegis Content Safety Dataset)進行訓練,該資料集是同類型中品質最高、經人工註解的資料來源之一。Aegis 內容安全資料集由 NVIDIA 編輯和擁有,並在 Hugging Face 上
公開提供
,其中包括超過 35,000 個經人工註解的資料樣本,標示為 AI 安全和試圖繞過系統限制的越獄行為。
NVIDIA NeMo Guardrails 讓 AI 代理保持正常運作
AI 正在快速提升各種業務流程的工作效率。在客戶服務方面,AI 協助解決客戶問題的速度
加快了 40%
。然而,為客戶服務及其他 AI 代理擴大 AI 規模需要輔以安全的模型,以避免輸出有害或不當內容,並且確保 AI 應用的按照訂定的參數運作。
NVIDIA 為 NeMo Guardrails 推出三款全新的 NIM 微服務,可協助 AI 代理大規模運作,同時確保行為受到控制:
內容安全 NIM 微服務可避免 AI 產生偏見或有害的輸出內容,確保回應內容符合道德標準。
主題控制 NIM 微服務使得對話專注於經核准的主題上,避免離題或出現不當內容。
越獄偵測 NIM 微服務可增加對越獄嘗試的防護,協助在對抗性情境中維持 AI 的完整性。
透過應用多種輕量、專用的模型作為防護措施,開發人員可以補足只有適用於一般情況的全面性政策與保護措施時可能出現的缺口,因為一體適用的做法無法妥善保護與控制複雜的
AI 代理
工作流程。
小型的語言模型,如 NeMo Guardrails 系列中的模型,可提供較低的延遲,即使在資源有限或分散式環境中也能高效率地執行。這使得它們成為醫療保健、汽車和製造業等產業在醫院或倉庫等地點擴大 AI 應用範圍的理想選擇。
業界領導廠商與合作夥伴利用 NeMo Guardrails 保護 AI
開放給開源社群使用的 NeMo Guardrails,可協助開發人員協調多種稱為 rails的 AI 軟體原則,以增強大型語言模型應用的安全性與控制能力。它可以與 NVIDIA NIM 微服務搭配使用,提供建置 AI 系統的強大框架,並在不影響安全性或效能的情況下進行大規模部署。
Amdocs 是全球領先的通訊與媒體公司軟體及服務供應商,該公司正使用 NeMo Guardrails 提供更安全、準確且符合情境的回應內容,以強化 AI 驅動的客戶互動。
Amdocs 科技事業群總裁暨策略部門主管 Anthony Goonetilleke 表示:「像 NeMo Guardrails 這樣的技術對於保護生成式 AI 應用的安全來說是非常重要的,能夠確保它們能安全且符合道德標準地進行運作。透過將 NVIDIA NeMo Guardrails 整合至 amAIz 平台,我們強化了平台的『可信任 AI』功能,以提供安全、可靠且具擴充能力的代理體驗。這讓服務供應商能夠安全放心地部署 AI 解決方案,為 AI 創新和卓越營運樹立新標準。」
專為汽車產業提供 AI 解決方案的 Cerence AI 正在使用 NVIDIA NeMo Guardrails 來協助確保其車載助理能夠在該公司 CaLLM 系列大小語言模型的支援下,提供符合情境的安全互動。
Cerence AI 產品與技術部門執行副總裁 Nils Schanz 表示:「Cerence AI 仰賴 NVIDIA 的高效能、安全解決方案來支援我們的車載助理技術。使用 NeMo Guardrails 能夠幫助我們為汽車製造商客戶提供可信賴的情境感知解決方案,並且提供合理、貼心且無幻覺的回應。NeMo Guardrails 還能配合汽車製造商客戶的需求進行客製化,同時協助我們過濾有害或令人不愉快的請求,確保我們的 CaLLM 語言模型系列不會向終端使用者傳送非預期或不當的內容。」
領先的家居裝修零售商 Lowe’s 正在使用生成式 AI 來培養店員具備深厚的專業知識。這些工具能夠讓店員取得更全面的產品知識,協助他們回答客戶的問題,並幫助找到完成裝修案所需的合適產品,同時為零售創新和客戶滿意度立下新標準。
Lowe’s 資料、AI 與創新部門資深副總裁 Chandhu Nair 表示:「我們一直在尋找方法幫助員工為客戶提供超乎期望的服務。透過最近部署的 NVIDIA NeMo Guardrails,我們可以確保 AI 產生出安全、穩妥且可靠的回應,為對話內容設下邊界,只提供相關且適當的內容。」
為了進一步加快在零售業 AI 應用開發和部署的過程中採用 AI 防護措施,NVIDIA 最近在 NRF 大會上宣布,其
適用於零售購物助理的 NVIDIA AI Blueprint
整合 NeMo Guardrails 微服務,以在數位購物體驗中創造更可靠、控制程度更高的客戶互動。
顧問業領導廠商 Taskus、Tech Mahindra 與 Wipro 也將 NeMo Guardrails 與該公司的解決方案進行整合,為企業客戶提供更安全、可靠且可控的生成式 AI 應用。
NeMo Guardrails 具有開放性和可擴展性,可與領先的 AI 安全模型和防護解決方案供應商,以及 AI 可觀察性和開發工具組成的強大的生態系進行整合。它支援與
ActiveFence 的 ActiveScore
整合,可以過濾對話式 AI 應用中的有害或不當內容,並且提供可視性、分析與監控等功能。
Hive 以 NIM 微服務的方式提供該公司針對圖片、影片和聲音內容的
AI 生成內容偵測模型
,可輕鬆整合至使用 NeMo Guardrails 的 AI 應用中並進行協調。
Fiddler AI Observability 平台能輕鬆與 NeMo Guardrails 整合,強化 AI 防護功能的監控能力。而端對端的 AI 開發者平台 Weights & Biases,則是透過加入與 NeMo Guardrails 微服務的整合,來擴充 W&B Weave 的功能。這項增強功能建立在 Weights & Biases 現有的 NIM 整合產品組合上,能夠在生產環境裡最佳化 AI 推論結果。
NeMo Guardrails 提供 AI 安全測試開源工具
準備測試應用安全防護模型和其他 rails 效果的開發人員,能夠使用 NVIDIA Research 團隊開發用於掃描大型語言模型及應用程式漏洞的開源工具包
NVIDIA Garak
。
透過使用 Garak,開發人員可以評估使用大型語言模型的系統是否存在資料外洩、提示注入、程式碼幻覺和越獄情境等問題,從而
找出系統中的漏洞
。Garak 可以藉由產生涉及不適當或不正確輸出內容的測試案例,協助開發人員偵測及解決 AI 模型中的潛在漏洞,以提升其穩健性與安全性。
上市時程
NVIDIA NeMo Guardrails 微服務,以及用於協調 rail 的
NeMo Guardrails
和
NVIDIA Garak
工具包,現已提供給開發人員和企業使用。開發人員可以利用
此教學內容
開始使用 NeMo Guardrails,為用於客戶服務的 AI 代理建置 AI 防護措施。
軟體產品資訊請參見
公告
。
Categories: 生成式人工智慧
Tags: Artificial Intelligence | cybersecurity | NVIDIA Blueprints | NVIDIA NeMo | NVIDIA NIM |
https://blogs.nvidia.com/blog/cosmos-world-foundation-models/ | NVIDIA Makes Cosmos World Foundation Models Openly Available to Physical AI Developer Community | Editor’s note: This post was updated on Friday, Jan. 10, with Best of CES Awards results.
NVIDIA Cosmos, a platform for accelerating physical AI development, introduces a family of world foundation models — neural networks that can predict and generate physics-aware videos of the future state of a virtual environment — to help developers build next-generation robots and autonomous vehicles (AVs).
World foundation models, or WFMs, are as fundamental as large language models. They use input data, including text, image, video and movement, to generate and simulate virtual worlds in a way that accurately models the spatial relationships of objects in the scene and their physical interactions.
Announced at CES, NVIDIA is making available the first wave of Cosmos WFMs for physics-based simulation and synthetic data generation — plus state-of-the-art tokenizers, guardrails, an accelerated data processing and curation pipeline, and a framework for model customization and optimization.
Cosmos won Best AI and Best Overall accolades from the Best of CES Awards by the CNET Group, awards partner for the Consumer Technology Association, which produces CES.
Researchers and developers, regardless of their company size, can freely use the Cosmos models under NVIDIA’s permissive open model license that allows commercial usage. Enterprises building AI agents can also use new open NVIDIA Llama Nemotron and Cosmos Nemotron models, unveiled at CES.
The openness of Cosmos’ state-of-the-art models unblocks physical AI developers building robotics and AV technology and enables enterprises of all sizes to more quickly bring their physical AI applications to market. Developers can use Cosmos models directly to generate physics-based synthetic data, or they can harness the NVIDIA NeMo framework to fine-tune the models with their own videos for specific physical AI setups.
Physical AI leaders — including robotics companies 1X, Agility Robotics and XPENG, and AV developers Uber and Waabi — are already working with Cosmos to accelerate and enhance model development.
Developers can preview the first Cosmos autoregressive and diffusion models on the NVIDIA API catalog, and download the family of models and fine-tuning framework from the NVIDIA NGC catalog and Hugging Face.
World Foundational Models for Physical AI
Cosmos world foundation models are a suite of open diffusion and autoregressive transformer models for physics-aware video generation. The models have been trained on 9,000 trillion tokens from 20 million hours of real-world human interactions, environment, industrial, robotics and driving data.
The models come in three categories: Nano, for models optimized for real-time, low-latency inference and edge deployment; Super, for highly performant baseline models; and Ultra, for maximum quality and fidelity, best used for distilling custom models.
When paired with NVIDIA Omniverse 3D outputs, the diffusion models generate controllable, high-quality synthetic video data to bootstrap training of robotic and AV perception models. The autoregressive models predict what should come next in a sequence of video frames based on input frames and text. This enables real-time next-token prediction, giving physical AI models the foresight to predict their next best action.
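A conceptual sketch of that autoregressive rollout, with model standing in for any callable that maps a token history to scores over the video-token vocabulary; this is purely illustrative and not the Cosmos API.

```python
def rollout(model, context_tokens, steps):
    """Greedy autoregressive rollout over video tokens: repeatedly predict the
    next token given everything generated so far. `model` is any callable that
    returns a list of per-token scores; not an actual Cosmos interface."""
    tokens = list(context_tokens)
    for _ in range(steps):
        scores = model(tokens)  # scores over the token vocabulary
        next_token = max(range(len(scores)), key=scores.__getitem__)  # greedy argmax
        tokens.append(next_token)
    return tokens
```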
Developers can use Cosmos’ open models for text-to-world and video-to-world generation. Versions of the diffusion and autoregressive models, with between 4 and 14 billion parameters each, are available now on the NGC catalog and Hugging Face.
Also available are a 12-billion-parameter upsampling model for refining text prompts, a 7-billion-parameter video decoder optimized for augmented reality, and guardrail models to ensure responsible, safe use.
To demonstrate opportunities for customization, NVIDIA is also releasing fine-tuned model samples for vertical applications, such as generating multisensor views for AVs.
Advancing Robotics, Autonomous Vehicle Applications
Cosmos world foundation models can enable synthetic data generation to augment training datasets, simulation to test and debug physical AI models before they’re deployed in the real world, and reinforcement learning in virtual environments to accelerate AI agent learning.
Developers can generate massive amounts of controllable, physics-based synthetic data by conditioning Cosmos with composed 3D scenes from NVIDIA Omniverse.
Waabi, a company pioneering generative AI for the physical world, starting with autonomous vehicles, is evaluating the use of Cosmos for the search and curation of data for AV software development and simulation. This will further accelerate the company’s industry-leading approach to safety, which is based on Waabi World, a generative AI simulator that can create any situation a vehicle might encounter with the same level of realism as if it happened in the real world.
In robotics, WFMs can generate synthetic virtual environments or worlds to provide a less expensive, more efficient and controlled space for robot learning. Embodied AI startup Hillbot is boosting its data pipeline by using Cosmos to generate terabytes of high-fidelity 3D environments. This AI-generated data will help the company refine its robotic training and operations, enabling faster, more efficient robotic skilling and improved performance for industrial and domestic tasks.
In both industries, developers can use NVIDIA Omniverse and Cosmos as a multiverse simulation engine, allowing a physical AI policy model to simulate every possible future path it could take to execute a particular task — which in turn helps the model select the best of these paths.
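The snippet below is a toy illustration of that rollout-and-select loop: several candidate action plans are imagined forward, each imagined future is scored, and the best plan's first action is executed. The world model and scoring function are simple numeric stand-ins, not the actual Cosmos or policy APIs.

```python
# Toy "multiverse" planning loop: imagine many futures, keep the best one.
import numpy as np

rng = np.random.default_rng(0)

def imagine_rollout(state: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Stand-in world model: in practice this is a learned model such as a Cosmos WFM."""
    return state + np.cumsum(actions, axis=0)

def score_rollout(trajectory: np.ndarray, goal: np.ndarray) -> float:
    """Stand-in task reward: negative distance of the final imagined state to the goal."""
    return -float(np.linalg.norm(trajectory[-1] - goal))

state, goal = np.zeros(2), np.array([3.0, 4.0])
candidates = rng.normal(size=(16, 10, 2))  # 16 candidate 10-step action plans
scores = [score_rollout(imagine_rollout(state, plan), goal) for plan in candidates]
best_plan = candidates[int(np.argmax(scores))]
print("executing first action of best plan:", best_plan[0])
```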
Data curation and the training of Cosmos models relied on thousands of NVIDIA GPUs through
NVIDIA DGX Cloud
, a high-performance, fully managed AI platform that provides accelerated computing clusters in every leading cloud.
Developers adopting Cosmos can use DGX Cloud for an easy way to deploy Cosmos models, with further support available through the
NVIDIA AI Enterprise
software platform.
Customize and Deploy With NVIDIA Cosmos
In addition to foundation models, the
Cosmos platform
includes a data processing and curation pipeline powered by
NVIDIA NeMo Curator
and optimized for NVIDIA data center GPUs.
Robotics and AV developers collect millions or billions of hours of real-world recorded video, resulting in petabytes of data. Cosmos enables developers to process 20 million hours of data in just 40 days on
NVIDIA Hopper GPUs
, or as little as 14 days on
NVIDIA Blackwell GPUs
. With an unoptimized pipeline running on a CPU system of equivalent power consumption, processing the same amount of data would take more than three years.
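A quick back-of-the-envelope calculation makes those figures concrete, using three years as a lower bound for the CPU pipeline:

```python
# Daily throughput and speedups implied by the quoted curation figures.
HOURS_OF_VIDEO = 20_000_000

for platform, days in [("Hopper GPUs", 40), ("Blackwell GPUs", 14), ("CPU pipeline", 3 * 365)]:
    print(f"{platform:>14}: {days:>5} days -> ~{HOURS_OF_VIDEO / days:,.0f} hours of video per day")

print("Blackwell vs. Hopper speedup:", round(40 / 14, 1))         # ~2.9x
print("Hopper vs. CPU speedup (>3 years):", round(3 * 365 / 40))  # >27x
```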
The platform also features a suite of powerful video and image tokenizers that can convert videos into tokens at different video compression ratios for training various
transformer models
.
The Cosmos tokenizers deliver 8x more total compression than state-of-the-art methods and 12x faster processing speed, which offers superior quality and reduced computational costs in both training and
inference
. Developers can access these tokenizers, available under NVIDIA’s open model license, via
Hugging Face
and
GitHub
.
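To give a feel for what a tokenizer's compression ratio means for sequence length, the toy calculation below counts tokens for one second of 1080p video at two hypothetical compression settings. The spatial and temporal factors are illustrative parameters, not the published Cosmos tokenizer configurations; see the GitHub repository for the actual variants.

```python
# Illustrative token counts for a one-second 1080p clip at two compression settings.
def token_count(frames: int, height: int, width: int,
                temporal_factor: int, spatial_factor: int) -> int:
    """Tokens produced when a clip is downsampled by the given temporal/spatial factors."""
    return (frames // temporal_factor) * (height // spatial_factor) * (width // spatial_factor)

baseline = token_count(24, 1080, 1920, temporal_factor=4, spatial_factor=8)
aggressive = token_count(24, 1080, 1920, temporal_factor=8, spatial_factor=16)
print(f"baseline:   {baseline:,} tokens")
print(f"aggressive: {aggressive:,} tokens ({baseline / aggressive:.0f}x fewer)")
```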
Developers using Cosmos can also harness model training and fine-tuning capabilities offered by
NeMo framework
, a GPU-accelerated framework that enables high-throughput AI training.
Developing Safe, Responsible AI Models
Now available to developers under the NVIDIA Open Model License Agreement, Cosmos was developed in line with NVIDIA’s
trustworthy AI
principles, which include nondiscrimination, privacy, safety, security and transparency.
The Cosmos platform includes Cosmos Guardrails, a dedicated suite of models that, among other capabilities, mitigates harmful text and image inputs during preprocessing and screens generated videos during postprocessing for safety. Developers can further enhance these guardrails for their custom applications.
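The sketch below shows where such checks sit in a generation pipeline, with a toy keyword filter and frame screen standing in for the dedicated Cosmos Guardrails models.

```python
# Pre-/post-processing guardrail pattern (toy stand-ins, not the shipped guardrail models).
from typing import Callable, List

BLOCKED_TERMS = ["example-blocked-term"]  # placeholder policy list

def preprocess_prompt(prompt: str) -> str:
    """Reject harmful text input before it reaches the generation model."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        raise ValueError("Prompt rejected by input guardrail")
    return prompt

def postprocess_video(frames: List[bytes], frame_is_safe: Callable[[bytes], bool]) -> List[bytes]:
    """Screen generated frames; reject the clip if any frame fails the safety check."""
    if not all(frame_is_safe(frame) for frame in frames):
        raise ValueError("Generated video rejected by output guardrail")
    return frames

# Usage: wrap any generation call between the two checks, e.g.
# frames = generate(preprocess_prompt(user_prompt)); postprocess_video(frames, classifier)
```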
Cosmos models on the
NVIDIA API catalog
also feature an inbuilt watermarking system that enables identification of AI-generated sequences.
NVIDIA Cosmos was developed by
NVIDIA Research
. Read the research paper, “
Cosmos World Foundation Model Platform for Physical AI
,” for more details on model development and benchmarks. Model cards providing additional information are available on
Hugging Face
.
Learn more about world foundation models in an
AI Podcast episode
that features Ming-Yu Liu, vice president of research at NVIDIA.
Get started
with NVIDIA
Cosmos
and join
NVIDIA at CES
. Watch the
Cosmos demo
and Huang’s keynote below:
See
notice
regarding software product information.
Categories:
Driving
|
Generative AI
|
Robotics
|
Software
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
DGX Cloud
|
Jetson
|
NVIDIA DRIVE
|
NVIDIA NeMo
|
NVIDIA Research
|
Omniverse
|
Physical AI
|
Robotics
|
Simulation and Design
|
Synthetic Data Generation
|
Transportation | https://blogs.nvidia.com.tw/blog/cosmos-world-foundation-models/ | NVIDIA 開放 Cosmos 世界基礎模型給實體 AI 開發者社群使用 | 加速開發
實體人工智慧(AI)
的
NVIDIA Cosmos
平台推出一系列
世界基礎模型
,這是可以預測和產生虛擬環境未來狀態的物理感知影片神經網路,以協助開發人員打造下一代機器人和自動駕駛車。
世界基礎模型(WFM)與大型語言模型一樣都是最基本的模型。它們使用文字、圖像、影片和動作這些輸入資料來產生和模擬虛擬世界,以精準模擬場景中物體的空間關係及其實體互動的情況。
NVIDIA
今日在 CES 大會上宣布
推出第一波 Cosmos WFM,用於基於物理的模擬及產生合成資料,以及最先進的標記器(tokenizer)、護欄、加速資料處理與整理管道,以及模型客製化與最佳化框架。
無論公司規模大小,研究人員與開發人員都可以在 NVIDIA 允許商業用途的寬鬆開放模型授權下自由使用 Cosmos 模型。建立 AI 代理的企業也可以使用 NVIDIA 在 CES 大會上發表的全新開放式
NVIDIA Llama Nemotron 和 Cosmos Nemotron 模型
。
Cosmos 最先進模型的開放性,排除建立機器人與自動駕駛車技術的
實體 AI
開發人員所面臨的障礙,讓各種規模的企業都能更快速地將其實體 AI 應用推向市場。開發人員可以直接使用 Cosmos 模型來產生基於物理的合成資料,也可以利用
NVIDIA NeMo 架構
,針對特定的實體 AI 設定,使用自己的影片來微調模型。
機器人公司 1X、Agility Robotics 與小鵬汽車,以及自動駕駛車開發商 Uber 及 Waabi 等實體 AI 領導廠商,都已經使用 Cosmos 加速和加強模型開發作業。
開發人員可以在
NVIDIA API 目錄
預覽第一批 Cosmos
自我回歸
和
擴散
模型,以及從
NVIDIA NGC 目錄
和
Hugging Face
下載一系列模型和微調框架。
實體
AI
的世界基礎模型
Cosmos 世界基礎模型是一套開放式擴散和自我回歸 transformer 模型,用於產生物理感知影片內容。使用 2,000 萬個小時現實世界人類互動、環境、工業、機器人和駕駛資料的 9,000 兆個詞元來訓練這些模型。
此模型有三個類別:Nano 適用於針對即時、
低延遲推論
與邊緣部署進行最佳化的模型;Super 適用於高效能基準模型;Ultra 適用於最高品質與真實度,最適合用於提取客製化模型。
搭配
NVIDIA Omniverse
3D 輸出內容使用時,擴散模型會產生可控制的高品質合成影片資料,以開始訓練機器人與自動駕駛車感知模型。自我回歸模型會根據輸入畫面和文字預測影片畫面序列中的下一個畫面。這樣就能即時預測下一個詞元,讓實體 AI 模型能夠預測它的下一個最佳動作。
開發人員可以使用 Cosmos 的開放模型來產生文字到世界和影片到世界的內容。擴散模型與自我回歸模型的版本各擁有 40 億到 140 億個參數,現在在 NGC 目錄與
Hugging Face
開放使用。
還有 120 億個參數的上採樣模型,用於細化文字提示;70 億個參數的影片解碼器,針對擴增實境進行最佳化;以及護欄以確保安全、負責任的使用 AI。
NVIDIA 也推出針對垂直應用的微調模型樣本,例如為自動駕駛車生成多感測器視角,以展示客製化的機會。
推動機器人及自動駕駛車技術的應用
Cosmos 世界基礎模型能夠
產生合成資料
以增強訓練資料集、先行模擬以在真實世界部署前對實體 AI 模型進行測試與除錯,以及在虛擬環境中進行強化學習以加速
AI 代理學習
。
開發人員可以將 NVIDIA Omniverse 合成的 3D 場景作為 Cosmos 的條件輸入(conditioning),產生大量可控制、基於物理的合成資料。
從自駕車開始為實體世界開創生成式 AI 的 Waabi,正在評估使用 Cosmos 搜尋和整理影片資料,用於開發和模擬自動駕駛車軟體。這將進一步加速公司以業界領先的方式推動安全性的發展。該公司利用 Waabi World 這個生成式 AI 模擬器創建任何車輛可能遇到的情境,並以與真實世界相同的真實感呈現。
開發機器人的 WFM 可以產生合成的虛擬環境或世界,為機器人學習提供成本更低、更有效率且可控制的空間。體現 AI 新創公司 Hillbot 使用 Cosmos 來產生 TB 等級真實感十足的 3D 環境,以增強其資料管道。這些由 AI 產生的資料將有助於該公司完善其機器人訓練與操作,讓機器人更快、更有效率地學習各項技能,以及提高執行工業與家庭任務的表現。
這兩個產業的開發人員都可以使用 NVIDIA Omniverse 與 Cosmos 做為多重宇宙模擬引擎,讓實體 AI 策略模型模擬未來執行特定任務時可能採取的每個路徑,這反過來又能幫助模型從這些路徑中選擇最佳路徑。
Cosmos 模型整理資料和訓練必須依賴
NVIDIA DGX Cloud
平台上的數千個 NVIDIA GPU,而 NVIDIA DGX Cloud 是一個高效能、完全託管的 AI 平台,可在各大雲端環境提供加速運算叢集。
採用 Cosmos 的開發人員可以使用 DGX Cloud 輕鬆部署 Cosmos 模型,並且透過
NVIDIA AI Enterprise
軟體平台提供更多支援。
使用
NVIDIA Cosmos
進行客製化與部署
除了基礎模型之外,
Cosmos
平台
還有由
NVIDIA NeMo Curator
支援的資料處理與整理管道,並且針對 NVIDIA 資料中心 GPU 進行最佳化。
機器人與自動駕車開發人員收集數百萬或數十億小時的真實世界影片畫面,產生出 PB 等級的大量資料。Cosmos 讓使用
NVIDIA Hopper GPU
的開發人員,只要 40 天就能處理完 2,000 萬個小時的資料,而使用
NVIDIA Blackwell GPU
的話更只要 14 天。如果使用在 CPU 系統上執行、功耗相當的未最佳化管道作業,處理相同數量的資料則需要三年以上的時間。
此平台還擁有一套功能強大的影片和圖像標記器,可以用不同的影片壓縮比將影片轉換為標記,用於訓練各種
transformer 模型
。
Cosmos 標記器的總壓縮率比最先進的方法高出 8 倍,處理速度高出 12 倍,在訓練和
推論
方面都能提供優異品質與降低運算成本。開發人員可以在
Hugging Face
及
GitHub
取得這些以 NVIDIA 開放模型授權提供的標記器。
使用 Cosmos 的開發人員也能利用
NeMo 框架
提供的模型訓練與微調功能,NeMo 框架是一個 GPU 加速框架,能夠以高處理量的方式來訓練 AI。
開發安全、負責任的
AI
模型
Cosmos現已根據 NVIDIA 開放模型授權協議提供給開發人員使用。Cosmos在開發的過程中遵照 NVIDIA
值得信賴的 AI
原則,包括公平性、隱私性、安全、保障與公開透明度。
Cosmos 平台包含一套專用的 Cosmos Guardrails 模型,它除了其他功能,還能在預先處理過程中減緩有害的文字與圖像輸入,並且在後製處理過程中篩選所產生的影片內容以確保安全性。開發人員可針對自訂應用進一步強化這些防護措施。
NVIDIA API 目錄
上的 Cosmos 模型另有內建浮水印系統,能夠發現 AI 產生的連續畫面。
NVIDIA Cosmos 由
NVIDIA Research
開發。請閱讀研究論文《
Cosmos World Foundation Model Platform for Physical AI
》,以瞭解更多關於模型開發與基準測試的詳細資訊。在
Hugging Face
有提供其他資訊的模型卡。
在 1 月 7 日播出的
AI Podcast
節目中,NVIDIA 研究部門副總裁 Ming-Yu Liu 將介紹更多關於世界基礎模型的資訊。
開始使用
NVIDIA Cosmos 並參加
NVIDIA 在 CES 大會的各項活動
。
請見有關軟體產品資訊的
通知
。
Categories:
生成式人工智慧
|
自主機器
|
自動駕駛
|
軟體
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
DGX Cloud
|
Jetson
|
NVIDIA DRIVE
|
NVIDIA NeMo
|
NVIDIA Research
|
Omniverse
|
Robotics
|
Simulation and Design
|
Synthetic Data Generation
|
Transportation |
https://blogs.nvidia.com/blog/isaac-gr00t-blueprint-humanoid-robotics/ | NVIDIA Announces Isaac GR00T Blueprint to Accelerate Humanoid Robotics Development | Over the next two decades, the market for humanoid robots is expected to reach $38 billion. To address this significant demand, particularly in industrial and manufacturing sectors, NVIDIA is releasing a collection of robot foundation models, data pipelines and simulation frameworks to accelerate next-generation
humanoid robot
development efforts.
Announced by NVIDIA founder and CEO Jensen Huang today at the
CES
trade show, the
NVIDIA Isaac GR00T
Blueprint for synthetic motion generation helps developers generate exponentially large synthetic motion data to train their humanoids using imitation learning.
Imitation learning — a subset of
robot learning
— enables
humanoids
to acquire new skills by observing and mimicking expert human demonstrations. Collecting these extensive, high-quality datasets in the real world is tedious, time-consuming and often prohibitively expensive. Implementing the
Isaac GR00T blueprint
for synthetic motion generation allows developers to easily generate exponentially large synthetic datasets from just a small number of human demonstrations.
Starting with the GR00T-Teleop workflow, users can tap into the Apple Vision Pro to capture human actions in a
digital twin
. These human actions are mimicked by a robot in simulation and recorded for use as ground truth.
The GR00T-Mimic workflow then multiplies the captured human demonstration into a larger synthetic motion dataset. Finally, the GR00T-Gen workflow, built on the
NVIDIA Omniverse
and
NVIDIA Cosmos
platforms, exponentially expands this dataset through domain randomization and 3D upscaling.
The dataset can then be used as an input to the robot policy, which teaches robots how to move and interact with their environment effectively and safely in
NVIDIA Isaac Lab
, an open-source and modular framework for robot learning.
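A toy example of that multiplication step is shown below, with random jitter standing in for the scene-level domain randomization and 3D upscaling the real GR00T-Mimic and GR00T-Gen workflows apply.

```python
# Expand a handful of recorded demonstrations into a much larger synthetic set.
import numpy as np

rng = np.random.default_rng(42)

def augment(demo: np.ndarray, copies: int, noise_scale: float = 0.01) -> np.ndarray:
    """Create `copies` jittered variants of one (T, dof) demonstration trajectory."""
    noise = rng.normal(scale=noise_scale, size=(copies, *demo.shape))
    return demo[None, ...] + noise

human_demos = [rng.uniform(-1, 1, size=(50, 7)) for _ in range(5)]  # 5 demos, 7-DoF arm
synthetic = np.concatenate([augment(demo, copies=1000) for demo in human_demos])
print(synthetic.shape)  # (5000, 50, 7): 5 demonstrations expanded 1,000x
```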
World Foundation Models Narrow the Sim-to-Real Gap
NVIDIA also
announced Cosmos
at CES, a platform featuring a family of open, pretrained world foundation models purpose-built for generating physics-aware videos and world states for
physical AI
development. It includes autoregressive and diffusion models in a variety of sizes and input data formats. The models were trained on 18 quadrillion tokens, including 2 million hours of autonomous driving, robotics, drone footage and
synthetic data
.
In addition to helping generate large datasets, Cosmos can reduce the simulation-to-real gap by upscaling images from 3D to real. Combining Omniverse — a developer platform of application programming interfaces and microservices for building 3D applications and services — with Cosmos is critical, because it helps minimize potential hallucinations commonly associated with world models by providing crucial safeguards through its highly controllable, physically accurate simulations.
An Expanding Ecosystem
Collectively,
NVIDIA Isaac GR00T
,
Omniverse
and
Cosmos
are helping physical AI and humanoid innovation take a giant leap forward. Major robotics companies have started adopting and demonstrated results with Isaac GR00T, including Boston Dynamics and Figure.
Humanoid software, hardware and robot manufacturers can
apply for early access
to NVIDIA’s humanoid robot developer program.
Watch the
CES opening keynote
from NVIDIA founder and CEO Jensen Huang, and stay up to date by subscribing to the
newsletter
and following NVIDIA Robotics on
LinkedIn
,
Instagram
,
X
and
Facebook
.
See
notice
regarding software product information.
Categories:
Robotics
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
Digital Twin
|
Isaac
|
Omniverse
|
Robotics
|
Synthetic Data Generation | https://blogs.nvidia.com.tw/blog/isaac-gr00t-blueprint-humanoid-robotics/ | NVIDIA 宣布推出 Isaac GR00T 藍圖以加速開發人型機器人 | 人型機器人的市場規模在未來二十年內,有望達到 380 億美元之譜。為滿足如此龐大的需求,尤其是來自工業和製造業的需求,NVIDIA 發表了一系列機器人基礎模型、資料管道和模擬框架,以加速開發下一代
人型機器人
。
NVIDIA 創辦人暨執行長黃仁勳今日在
CES
大會宣布,用於產生合成動作的
NVIDIA Isaac GR00T
藍圖(blueprint)可以協助開發人員產生出極為大量的合成動作資料,以利用模仿學習的方式訓練人型機器人。
模仿學習是
機器人學習
裡的一個子集合,可以讓
人型機器人
用觀察和模仿專家真人示範的方式來學習新技能。想要收集這些廣泛又高品質的現實世界資料集,非常無聊且要花費許多時間,成本往往更高得令人卻步。使用適用於產生合成動作的
Isaac GR00T 藍圖
,開發人員只要少數的真人示範,就能輕鬆產生出龐大的大型合成資料集。
使用者從使用 GR00T-Teleop 工作流程開始,利用 Apple Vision Pro 在
數位孿生
模型裡捕捉真人的動作。模擬環境裡的機器人會模仿這些動作,並且記錄下來作為基本事實資料。
GR00T-Mimic 工作流程會將擷取到的真人示範內容擴增為更大的合成動作資料集。最後,建構在
NVIDIA Omniverse
及
NVIDIA Cosmos
平台上的 GR00T-Gen 工作流程,會透過域隨機化與 3D 畫質提升技術,以倍數成長的方式擴充這個資料集。
隨後可以將這個資料集當成機器人策略的輸入項目,在
NVIDIA Isaac Lab
這個開源模組化的機器人學習框架中教導機器人如何有效安全地移動,且與周遭環境進行互動。
世界基礎模型縮小模擬與真實的差距
NVIDIA 也在 CES 上
宣布推出 Cosmos
,在這個平台上提供一系列預先訓練好的開放式世界基礎模型,專門用於產生物理感知影片內容與世界狀態,以協助開發
實體 AI
。它包括各種大小和輸入資料格式的自回歸和擴散模型。使用 18 千兆個詞元來訓練這些模型,這些詞元包括 200 萬小時的自動駕駛、機器人、無人機影片和
合成資料
。
Cosmos 平台除了有助於產生大型資料集,還能使用圖像畫質提升技術,將 3D 圖像變得更真實,以縮小模擬與真實之間的差距。把 Omniverse(用於開發 3D 應用程式和服務的應用程式介面和微服務開發平台)搭配 Cosmos 使用非常重要,因為 Cosmos 提供一個具有高度可控性、精準基於物理的模擬環境,能夠有效確保將世界模型常見可能造成幻覺的情況降至最低。
不斷成長茁壯的生態系
NVIDIA Isaac GR00T
、
Omniverse
及
Cosmos
這三個平台加起來協助實體 AI 及人型機器人的創新發展向前邁進一大步。各大機器人開發業者已經開始採用 Isaac GR00T 與展示其成果,包括 Boston Dynamics 和 Figure。
人型機器人軟體、硬體與機器人製造商可以
申請搶先體驗
NVIDIA 的人型機器人開發者計畫。
歡迎觀看 NVIDIA 創辦人暨執行長黃仁勳精彩的
CES 大會開幕主題演講
,並且訂閱
電子報
,也別忘了在
LinkedIn
、
Instagram
、
X
及
Facebook
追蹤 NVIDIA Robotics,隨時掌握最新資訊。
請見有關軟體產品資訊的
通知
。
Categories:
自主機器
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
Digital Twin
|
Isaac
|
Omniverse
|
Robotics
|
Synthetic Data Generation |
https://blogs.nvidia.com/blog/omniverse-sensor-rtx-autonomous-machines/ | Building Smarter Autonomous Machines: NVIDIA Announces Early Access for Omniverse Sensor RTX | Generative AI and
foundation models
let autonomous machines generalize beyond the operational design domains on which they’ve been trained. Using new AI techniques such as
tokenization
and
large language and diffusion models
, developers and researchers can now address longstanding hurdles to autonomy.
These larger models require massive amounts of diverse data for training, fine-tuning and validation. But collecting such data — including from rare edge cases and potentially hazardous scenarios, like a pedestrian crossing in front of an autonomous vehicle (AV) at night or a human entering a welding robot work cell — can be incredibly difficult and resource-intensive.
To help developers fill this gap,
NVIDIA Omniverse Cloud Sensor RTX APIs
enable physically accurate
sensor simulation
for generating datasets at scale. The application programming interfaces (APIs) are designed to support sensors commonly used for autonomy — including cameras, radar and lidar — and can integrate seamlessly into existing workflows to accelerate the development of autonomous vehicles and robots of every kind.
Omniverse Sensor RTX APIs are now available to select developers in
early access
. Organizations such as Accenture, Foretellix, MITRE and Mcity are integrating these APIs via domain-specific blueprints to provide end customers with the tools they need to deploy the next generation of industrial manufacturing robots and self-driving cars.
Powering Industrial AI With Omniverse Blueprints
In complex environments like factories and warehouses, robots must be orchestrated to safely and efficiently work alongside machinery and human workers. All those moving parts present a massive challenge when designing, testing or validating operations while avoiding disruptions.
Mega
is an Omniverse Blueprint that offers enterprises a reference architecture of NVIDIA accelerated computing, AI,
NVIDIA Isaac
and
NVIDIA Omniverse
technologies. Enterprises can use it to develop
digital twins
and test AI-powered robot brains that drive robots, cameras, equipment and more to handle enormous complexity and scale.
Integrating Omniverse Sensor RTX, the blueprint lets robotics developers simultaneously render sensor data from any type of intelligent machine in a factory for high-fidelity, large-scale sensor simulation.
With the ability to test operations and workflows in simulation, manufacturers can save considerable time and investment, and improve efficiency in entirely new ways.
International supply chain solutions company KION Group and Accenture are using the Mega blueprint to build Omniverse digital twins that serve as virtual training and testing environments for industrial AI’s robot brains, tapping into data from smart cameras, forklifts, robotic equipment and digital humans.
The robot brains perceive the simulated environment with physically accurate sensor data rendered by the Omniverse Sensor RTX APIs. They use this data to plan and act, with each action precisely tracked with Mega, alongside the state and position of all the assets in the
digital twin
. With these capabilities, developers can continuously build and test new layouts before they’re implemented in the physical world.
Driving AV Development and Validation
Autonomous vehicles have been under development for over a decade, but barriers in acquiring the right training and validation data and slow iteration cycles have hindered large-scale deployment.
To address this need for sensor data, companies are harnessing the
NVIDIA Omniverse Blueprint for AV simulation
, a reference workflow that enables physically accurate sensor simulation. The workflow uses Omniverse Sensor RTX APIs to render the camera, radar and lidar data necessary for AV development and validation.
AV toolchain provider Foretellix has integrated the blueprint into its
Foretify AV development toolchain
to transform object-level simulation into physically accurate sensor simulation.
The Foretify toolchain can generate any number of testing scenarios simultaneously. By adding sensor simulation capabilities to these scenarios, Foretify can now enable developers to evaluate the completeness of their AV development, as well as train and test at the levels of fidelity and scale needed to achieve large-scale and safe deployment. In addition, Foretellix will use the newly announced
NVIDIA Cosmos platform
to generate an even greater diversity of scenarios for verification and validation.
Nuro, an autonomous driving technology provider with one of the largest level 4 deployments in the U.S., is using the Foretify toolchain to train, test and validate its self-driving vehicles before deployment.
In addition, research organization MITRE is collaborating with the University of Michigan’s Mcity testing facility to build a digital AV validation framework for regulatory use, including a digital twin of Mcity’s 32-acre proving ground for autonomous vehicles. The project uses the AV simulation blueprint to render physically accurate sensor data at scale in the virtual environment, boosting training effectiveness.
The future of robotics and autonomy is coming into sharp focus, thanks to the power of high-fidelity sensor simulation. Learn more about these solutions at CES by visiting Accenture at Ballroom F at the Venetian and Foretellix booth 4016 in the West Hall of Las Vegas Convention Center.
Learn more about the latest in automotive and generative AI technologies by joining
NVIDIA at CES
.
See
notice
regarding software product information.
Categories:
Robotics
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
Digital Twin
|
Industrial and Manufacturing
|
Isaac
|
NVIDIA Blueprints
|
Omniverse
|
Robotics
|
Simulation and Design
|
Transportation | https://blogs.nvidia.com.tw/blog/omniverse-sensor-rtx-autonomous-machines/ | 建造更聰明的自主機器:NVIDIA 宣布 Omniverse Sensor RTX 推出搶先體驗活動 | 生成式人工智慧(AI)和
基礎模型
讓自主機器能夠超越它們所接受訓練的操作設計領域。開發人員和研究人員使用
標記化
(tokenization)及
大型語言和擴散模型
等嶄新 AI 技術,現在可以解決一直以來在自主領域方面的各項障礙。
需要使用大量相異的資料來訓練、微調與驗證這些大型模型。不過收集這些資料(包括從罕見的邊緣情況和潛在危險情境中收集資料,例如行人在夜間橫越自動駕駛車前方,或是人類進入焊接機器人工作單元)可能非常困難,又得耗費不少資源。
為了協助開發人員填補這個缺口,
NVIDIA Omniverse Cloud Sensor RTX API
提供了物理精確的感測器模擬,用於大規模生成資料集。這些應用程式介面(API)用於支援常用於自主機器上的感測器,包括攝影機、雷達與光達,且能完美與現有的工作流程進行整合,以加快開發各種自動駕駛車輛與機器人。
現已開放部分開發人員
搶先體驗
Omniverse Sensor RTX API。埃森哲(Accenture)、Foretellix、MITRE 和 Mcity等企業正透過特定領域藍圖整合這些 API,為終端客戶提供部署下一代工業製造機器人和自動駕駛車所需的工具。
使用
Omniverse Blueprints
為工業
AI
提供動力
在工廠和倉庫等複雜環境中,機器人必須被精心協調,才能安全高效率地與機器和人類工作者並肩作業。在設計、測試或驗證操作,又要避免中斷作業時,所有這些移動部件都會帶來巨大的挑戰。
Mega
是一個 Omniverse Blueprint ,可為企業提供 NVIDIA 加速運算、AI、
NVIDIA Isaac
及
NVIDIA Omniverse
技術的參考架構。企業可以用它開發
數位孿生
模型,測試由 AI 驅動的機器人大腦,而這些大腦驅動著機器人、攝影機、設備等項目,以處理極為複雜又大量的作業。
這個整合了 Omniverse Sensor RTX 的藍圖可以讓機器人開發人員同時渲染工廠內任何類型智慧機器的感測器資料,實現高保真、大規模的感測器模擬。
隨著能夠在模擬環境裡測試操作和工作流程,製造商可以省下大量時間和投資,以全新方式提高作業效率。
國際供應鏈解決方案公司凱傲集團(KION Group)與埃森哲利用來自智慧攝影機、堆高機、機器人設備和數位人類的資料,使用 Mega 藍圖建立 Omniverse 數位孿生,作為工業AI機器人大腦的虛擬訓練和測試環境。
機器人大腦透過 Omniverse Sensor RTX API 渲染的物理精確感測器資料來感知模擬環境。機器人使用這些資料來計劃和採取行動,並透過 Mega 精準追蹤每一個動作,以及
數位孿生
中所有資產的狀態和位置。借助這些功能,開發人員可以在真正部署至實體環境裡之前,不斷建立和測試新配置。
推動開發與驗證自動駕駛車
自動駕駛車輛已開發超過十多年,但在取得正確的訓練與驗證資料方面所遇到的阻礙,還有緩慢的迭代週期,都阻礙了大規模部署。
為了滿足對感測器資料的這種需求,各家公司利用
NVIDIA Omniverse Blueprint for AV simulation
,這是一個實現物理精確感測器模擬的參考工作流程。這個工作流程使用 Omniverse Sensor RTX API 來渲染出開發與驗證自動駕駛汽車所需的攝影機、雷達與光達資料。
自動駕駛汽車工具鏈供應商 Foretellix 已經把這個藍圖納入該公司的
Foretify 自動駕駛車開發工作鏈
,將物件級模擬轉換為物理精準感測器模擬。
Foretify 工具鏈可以同時產生任意數量的測試情境。Foretify 在這些情境中加入感測器模擬功能,開發人員便能評估自己在開發自動駕駛車方面的完整性,並以實現大規模安全部署所需的保真度和規模進行訓練和測試。Foretellix 還將使用最新發表的
NVIDIA Cosmos 平台
,產生更多樣化的情境進行確認與驗證。
自動駕駛技術提供商 Nuro 是美國規模最大的 level 4 部署業者之一,使用 Foretify 工具鏈在部署前對其自動駕駛車輛進行訓練、測試和驗證。
再者,研究機構 MITRE 與密西根大學的 Mcity 測試設施合作,建立供主管機關使用的數位自動駕駛車驗證框架,包括 Mcity 32 英畝自動駕駛車試驗場的數位孿生模型。這項合作案使用 自動駕駛車 模擬藍圖,在虛擬環境中大規模渲染出物理精確的感測器資料,以提升訓練成效。
得益於高保真感測器模擬技術,機器人與自動化的未來正逐漸成為人們關注的焦點。如需更深入瞭解 CES 大會上這些解決方案的資訊,請造訪埃森哲位於拉斯維加斯威尼斯人F展廳的攤位,以及 Foretellix 位於拉斯維加斯展覽中心西館 4016 號的展位。
欲了解最新的汽車與生成式 AI 技術,參加
NVIDIA 在 CES 大會的各項活動
。
請見有關軟體產品資訊的
通知
。
Categories:
自主機器
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
Digital Twin
|
Industrial and Manufacturing
|
Isaac
|
NVIDIA Blueprints
|
Omniverse
|
Robotics
|
Simulation and Design
|
Transportation |
https://blogs.nvidia.com/blog/physical-ai-robotics-isaac-sim-aws/ | NVIDIA Advances Physical AI With Accelerated Robotics Simulation on AWS | Field AI is building robot brains that enable robots to autonomously manage a wide range of industrial processes. Vention creates pretrained skills to ease development of robotic tasks. And Cobot offers Proxie, an AI-powered cobot designed to handle material movement and adapt to dynamic environments, working seamlessly alongside humans.
These leading robotics startups are all making advances using
NVIDIA Isaac Sim
on Amazon Web Services. Isaac Sim is a reference application built on
NVIDIA Omniverse
for developers to simulate and test AI-driven robots in physically based virtual environments.
NVIDIA announced at AWS re:Invent today that Isaac Sim now runs on Amazon Elastic Cloud Computing (EC2) G6e instances accelerated by
NVIDIA L40S GPUs
. And with
NVIDIA OSMO
, a cloud-native orchestration platform, developers can easily manage their complex robotics workflows across their AWS computing infrastructure.
This combination of NVIDIA-accelerated hardware and software — available on the cloud — allows teams of any size to scale their physical AI workflows.
Physical AI
describes AI models that can understand and interact with the physical world. It embodies the next wave of
autonomous machines and robots
, such as self-driving cars, industrial manipulators, mobile robots, humanoids and even robot-run infrastructure like factories and warehouses.
With physical AI, developers are embracing a
three computer solution
for training, simulation and inference to make breakthroughs.
Yet physical AI for robotics systems requires robust training datasets to achieve precision inference in deployment. Developing such datasets, however, and testing them in real situations can be impractical and costly.
Simulation offers an answer, as it can significantly accelerate the training, testing and deployment of AI-driven robots.
Harnessing L40S GPUs in the Cloud to Scale Robotics Simulation and Training
Simulation is used to verify, validate and optimize robot designs as well as the systems and their algorithms before deployment. Simulation can also optimize facility and system designs before construction or remodeling starts for maximum efficiencies, reducing costly manufacturing change orders.
Amazon EC2 G6e instances accelerated by NVIDIA L40S GPUs provide a 2x performance gain over the prior architecture, while allowing the flexibility to scale as scene and simulation complexity grows. The instances are used to train many computer vision models that power AI-driven robots. This means the same instances can be extended for various tasks, from data generation to simulation to model training.
Using
NVIDIA OSMO
in the cloud allows teams to orchestrate and scale complex robotics development workflows across distributed computing resources, whether on premises or in the AWS cloud.
Isaac Sim provides access to the latest
robotics simulation capabilities
and the cloud, fostering collaboration. One of the critical workflows is generating synthetic data for perception model training.
Using a
reference workflow
that combines
NVIDIA Omniverse Replicator
, a framework for building custom synthetic data generation (SDG) pipelines and a core extension of Isaac Sim, with
NVIDIA NIM microservices
, developers can build generative AI-enabled SDG pipelines.
These include the USD Code NIM microservice for generating Python USD code and answering OpenUSD queries, and the USD Search NIM microservice for exploring OpenUSD assets using natural language or image inputs. The Edify 360 HDRi NIM microservice generates 360-degree environment maps, while the Edify 3D NIM microservice creates ready-to-edit 3D assets from text or image prompts. This eases the synthetic data generation process by reducing many tedious and manual steps, from asset creation to image augmentation, using the power of generative AI.
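For orientation, a hedged sketch of what a Replicator-based SDG script can look like follows. It only runs inside Omniverse or Isaac Sim where the omni.replicator.core extension is available; API names track recent Replicator releases but exact signatures vary by version, so treat this as an outline rather than a drop-in script.

```python
# Outline of a Replicator synthetic-data loop: randomize a scene and write annotations.
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(0, 0, 5))
    render_product = rep.create.render_product(camera, resolution=(1024, 1024))

    # A simple semantically labeled asset whose pose is re-randomized every captured frame.
    part = rep.create.cube(semantics=[("class", "part")])

    with rep.trigger.on_frame(num_frames=100):
        with part:
            rep.modify.pose(
                position=rep.distribution.uniform((-2, -2, 0), (2, 2, 0)),
                rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
            )

    # Write RGB images and 2D bounding boxes for perception-model training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_sdg", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])

rep.orchestrator.run()
```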
Rendered.ai’s
synthetic data engineering platform integrated with Omniverse Replicator enables companies to generate synthetic data for computer vision models used in industries from security and intelligence to manufacturing and agriculture.
SoftServe
, an IT consulting and digital services provider, uses Isaac Sim to generate synthetic data and validate robots used in vertical farming with Pfeifer & Langen, a leading European food producer.
Tata Consultancy Services
is building custom synthetic data generation pipelines to power its Mobility AI suite to address automotive and autonomous use cases by simulating real-world scenarios. Its applications include defect detection, end-of-line quality inspection and hazard avoidance.
Learning to Be Robots in Simulation
While Isaac Sim enables developers to test and validate robots in physically accurate simulation,
Isaac Lab
, an open-source robot learning framework built on Isaac Sim, provides a virtual playground for building robot policies that can run on
AWS Batch
.
Because these simulations are repeatable, developers can easily troubleshoot and reduce the number of cycles required for validation and testing.
Several robotics developers are embracing
NVIDIA Isaac
on AWS to develop physical AI, such as:
Aescape’s robots
are able to provide precision-tailored massages by accurately modeling and tuning onboard sensors in Isaac Sim.
Cobot
has used Isaac Sim with its AI-powered cobot, Proxie, to optimize logistics in warehouses, hospitals, manufacturing sites, and more.
Cohesive Robotics
has integrated Isaac Sim into its software framework called Argus OS for developing and deploying robotic workcells used in high-mix manufacturing environments.
Field AI, a builder of robot foundation models, uses Isaac Sim and Isaac Lab to evaluate the performance of its models in complex, unstructured environments across industries such as construction, manufacturing, oil and gas, mining and more.
Standard Bots
is simulating and validating the performance of its R01 robot used in manufacturing and machining setup.
Swiss Mile
is using Isaac Sim and Isaac Lab for robot learning so that wheeled quadruped robots can perform tasks autonomously with new levels of efficiency in factories and warehouses.
Vention
, which offers a full-stack cloud-based automation platform, is harnessing Isaac Sim for developing and testing new capabilities for robot cells used by small to medium-size manufacturers.
Learn more about Isaac Sim 4.2, now available on Amazon EC2 G6e instances powered by NVIDIA L40S GPUs on
AWS Marketplace
.
Categories:
Robotics
Tags:
NVIDIA Isaac Sim
|
Omniverse Enterprise
|
Physical AI | https://blogs.nvidia.com.tw/blog/physical-ai-robotics-isaac-sim-aws/ | NVIDIA 運用 AWS 上的加速機器人模擬技術推進實體 AI | Field AI 正在建構機器人大腦,讓機器人得以自主管理各項工業流程。Vention 創造預先訓練好的技能,以簡化機器人任務的開發。而 Cobot 則提供一個由人工智慧(AI)驅動的協作機器人 Proxie,可處理材料移動並適應動態環境,與人類一起無縫合作。
這些領先的機器人新創公司皆使用 Amazon Web Services(AWS)上的
NVIDIA Isaac Sim
來取得進展。Isaac Sim 是建置在
NVIDIA Omniverse
上的參考應用程式,供開發人員在以物理原則為基礎的虛擬環境中模擬與測試 AI 驅動的機器人。
NVIDIA 今日在 AWS re:Invent 大會中宣布,現在可以在由 NVIDIA L40S GPU 加速的 Amazon Elastic Cloud Computing(EC2)G6e 執行個體上執行 Isaac Sim。開發人員還能透過雲端原生的協調平台
NVIDIA OSMO
,在 AWS 運算基礎架構中輕鬆管理複雜的機器人工作流程。
可在雲端使用的 NVIDIA 加速硬體與軟體組合,可讓任何規模的團隊擴充其實體 AI 工作流程。
實體 AI
描述了能夠理解實體世界並與其進行互動的 AI 模型。它體現了
自主機器和機器人
的下一波發展浪潮,如自駕車、工業機械手、移動機器人、人形機器人,甚至是機器人管理的基礎設施,如工廠和倉庫。
有了實體 AI,開發人員正採用
三電腦解決方案(three computer solution)
進行訓練、模擬和推論,以求突破。
然而,機器人系統的實體 AI 需要強大的訓練資料集,才能在部署環境裡取得精確的推論結果。不過想要開發這樣的資料集,並在實際環境裡進行測試,既不實際且成本又高。
模擬提供了答案,因為這項技術可以顯著加快 AI 驅動機器人的訓練、測試和部署。
在雲端運用
L40S GPU
來擴大模擬與訓練機器人的規模
模擬可在部署前用於確認、驗證和最佳化機器人設計,以及相關系統及其演算法。模擬還能在施工或改造開始前最佳化設施和系統設計,以達到最高效率,避免在製造過程中因變更訂單而產生的高昂成本。
由 NVIDIA L40S GPU 加速的 Amazon EC2 G6e 執行個體,提供比先前架構高出兩倍的效能提升,同時還能隨著場景及模擬複雜度增加而擴充的彈性。這些執行個體用於訓練許多為 AI驅動機器人提供動力的電腦視覺模型。這意味著相同的執行個體可以擴充來執行各種任務,從資料生成到模擬,再到模型訓練。
在雲端使用
NVIDIA OSMO
,可以讓團隊對分散各處的運算資源,無論是在本地或 AWS 雲端,都能協調與擴充複雜的機器人開發工作流程。
Isaac Sim 讓使用者可以獲得最新的
機器人模擬功能
及雲端資源,以促進合作。其中一個關鍵的工作流程是產生訓練感知模型所需的合成資料。
開發人員使用結合
NVIDIA
Omniverse
Replicator
與
NVIDIA NIM 微服務
的
參考工作流程
,便能建立支援生成式 AI 的 SDG 管道。NVIDIA Omniverse Replicator 是用於建立自訂合成資料生成(SDG)管道的框架,以及 Isaac Sim 的核心擴充功能。
其中包括用於產生 Python USD 程式碼和回答 OpenUSD 查詢的 USD Code NIM 微服務,以及用於使用自然語言或圖像輸入探索 OpenUSD 資產的 USD Search NIM 微服務。Edify 360 HDRi NIM 微服務可產生 360 度環境地圖,而 Edify 3D NIM 微服務則可根據文字或影像提示,建立可立即編輯的 3D 資產。如此一來便能利用生成式 AI 的力量,減少從建立資產到增強影像等許多繁瑣的手動步驟,簡化生成合成資料的流程。
Rendered.ai
的合成資料工程平台與 Omniverse Replicator 整合後,可讓企業為安全、情報、製造到農業等產業所使用的電腦視覺模型產生合成資料。
IT 諮詢與數位服務供應商
SoftServe
使用 Isaac Sim 來產生合成資料,且與歐洲領先的食品生產商 Pfeifer & Langen 合作驗證垂直農業中使用的機器人。
塔塔顧問服務(Tata Consultancy Services)建立客製化的合成資料生成管道,驅動其 Mobility AI 套件,藉由模擬真實世界的情境來解決汽車與自動化使用個案。其應用包括瑕疵偵測、生產線末端品質檢查及避免危險情況。
在模擬環境中學習成為機器人
Isaac Sim 可讓開發人員在精準符合物理原則的模擬環境中測試及驗證機器人,而建立在 Isaac Sim 上的開源機器人學習框架
Isaac Lab
則為建立可在 AWS Batch 上執行的機器人政策提供虛擬空間。
由於這些模擬是可以重複的,因此開發人員可以輕鬆排除故障,減少驗證和測試所需的週期。
多家機器人開發業者在 AWS 上採用
NVIDIA Isaac
來開發實體 AI。
Aescape
的機器人能夠透過 Isaac Sim 中對機器人身上的感應器進行準確的建模及調整,提供精準且量身打造的按摩服務。
Cobot 已將 Isaac Sim 與其 AI 驅動的協作機器人 Proxie 搭配使用,以最佳化倉庫、醫院、製造場所等地的物流作業。
Cohesive Robotics 已將 Isaac Sim 整合至其名為 Argus OS™ 的軟體框架,用於開發和部署在高混合製造環境裡使用的機器人工作單元。
機器人基礎模型建造商的 Field AI 使用 Isaac Sim 和 Isaac Lab 來評估其模型在複雜、非結構性環境下的效能表現,這些環境涵蓋建築、製造、石油和天然氣、採礦等產業。
Standard Bots
正在模擬和驗證其用於製造和加工設置的 R01 機器人效能。
Swiss Mile
正在使用 Isaac Sim 和 Isaac Lab 進行機器人學習,使輪型四足機器人能夠在工廠和倉庫裡,以更高的效率自主執行各項任務。
Vention
提供基於雲端的全端自動化平台,正在使用 Isaac Sim 開發和測試中小型製造商使用的機器人單元新功能。
了解更多關於 Isaac Sim 4.2 的資訊,Isaac Sim 4.2 現已在
AWS Marketplace
上由 NVIDIA L40S GPU 驅動的 Amazon EC2 G6e 執行個體上提供。
Categories:
自主機器
Tags:
NVIDIA Isaac Sim
|
Omniverse Enterprise |
https://blogs.nvidia.com/blog/category/gaming/ | Gaming | - Archives Page 1 | NVIDIA Blog
Gaming
Most Popular
GeForce NOW Welcomes Warner Bros. Games to the Cloud With ‘Batman: Arkham’ Series
It’s a match made in heaven — GeForce NOW and Warner Bros. Games are collaborating to bring the…
Read Article
Most Popular
Massive Foundation Model for Biomolecular Sciences Now Available via NVIDIA BioNeMo
Telcos Dial Up AI: NVIDIA Survey Unveils Industry’s AI Trends
Physicists Tap James Webb Space Telescope to Track New Asteroids and City-Killer Rock
Medieval Mayhem Arrives With ‘Kingdom Come: Deliverance II’ on GeForce NOW
GeForce NOW celebrates its fifth anniversary this February with a lineup of five major releases. The month kicks off with Kingdom Come: Deliverance II. Prepare for a journey back in…
Read Article
GeForce NOW Celebrates Five Years of Cloud Gaming With AAA Blockbusters
GeForce NOW turns five this February. Five incredible years of high-performance gaming have been made possible thanks to the members who’ve joined the cloud gaming platform on its remarkable journey….
Read Article
‘Baldur’s Gate 3’ Mod Support Launches in the Cloud
GeForce NOW is expanding mod support for hit game Baldur’s Gate 3 in collaboration with Larian Studios and mod.io for Ultimate and Performance members. This expanded mod support arrives alongside…
Read Article
Fantastic Four-ce Awakens: Season One of ‘Marvel Rivals’ Joins GeForce NOW
Time to suit up, members. The multiverse is about to get a whole lot cloudier as GeForce NOW opens a portal to the first season of hit game Marvel Rivals…
Read Article
GeForce NOW at CES: Bring PC RTX Gaming Everywhere With the Power of GeForce NOW
This GFN Thursday recaps the latest cloud announcements from the CES trade show, including GeForce RTX gaming expansion across popular devices such as Steam Deck, Apple Vision Pro spatial computers,…
Read Article
CES 2025: AI Advancing at ‘Incredible Pace,’ NVIDIA CEO Says
NVIDIA founder and CEO Jensen Huang kicked off CES 2025 with a 90-minute keynote that included new products to advance gaming, autonomous vehicles, robotics and agentic AI. AI is advancing…
Read Article
PC Gaming in the Cloud Goes Everywhere With New Devices and AAA Games on GeForce NOW
GeForce NOW turns any device into a GeForce RTX gaming PC, and is bringing cloud gaming and AAA titles to more devices and regions. Announced today at the CES trade…
Read Article
GeForce NOW Rings in the New Year With 14 New Games
GeForce NOW is kicking off 2025 by delivering 14 games to the cloud this month, with two available to stream this week so members can get started on their New…
Read Article
All NVIDIA News
Massive Foundation Model for Biomolecular Sciences Now Available via NVIDIA BioNeMo
Telcos Dial Up AI: NVIDIA Survey Unveils Industry’s AI Trends
All Systems Go: NVIDIA Engineer Takes NIMble Approach to Innovation
Physicists Tap James Webb Space Telescope to Track New Asteroids and City-Killer Rock
How Scaling Laws Drive Smarter, More Powerful AI
| https://blogs.nvidia.com.tw/blog/category/gaming/ | 遊戲 | 遊戲 彙整 - NVIDIA 台灣官方部落格
遊戲
Most Popular
CES 2025:NVIDIA 執行長表示 AI 正以「驚人的速度」進步
NVIDIA 創辦人暨執行長黃仁勳以…
閱讀文章
Most Popular
使用 Transformer 產生合成資料:企業資料挑戰的解決方案
GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲服務給你歡樂無比的遊戲節慶時刻
揭開 NVIDIA DOCA 的神祕面紗
NVIDIA 榮獲 COMPUTEX Best Choice Award 大獎
擁有十幾年在台北國際電腦展(COMPUTEX)年度 Best…
閱讀文章
即將推出的 ACE:解碼 AI 技術,運用逼真的數位人類提升遊戲體驗
編者按:此篇文章屬於「解碼 AI 」系列,該系列文章會以簡單…
閱讀文章
解碼 AI:揭開驅動 AI 硬體、軟體和工具的神秘面紗
隨著 NVIDIA 在 2018 年推出 RTX 技術,以及…
閱讀文章
雲端三大重磅消息:全新的Activision Blizzard 遊戲、單日通行證、G-SYNC 技術即將登陸 GeForce NOW
NVIDIA宣布將為其 GeForce NOW 雲端遊戲服務…
閱讀文章
重生、重製與重新混合:《傳送門:序曲 RTX 版》讓傳奇遊戲 mod 重獲新生!
在大熱門的非官方《傳送門》遊戲前傳重製版《傳送門:序曲 RT…
閱讀文章
由台灣大哥大支援的 GeForce NOW 在 1 月將有 19 款遊戲於雲端上線
為了迎接全新的一年,台灣大哥大支援的 GeForce NOW…
閱讀文章
暢玩遊戲:NVIDIA GeForce NOW 將龐大的遊戲庫串流到車上
自駕車和電動車讓個人交通變得更安全、更永續,也更具娛樂性。 …
閱讀文章
NVIDIA GeForce NOW 將在汽車上以串流方式提供大量 AAA 級遊戲
NVIDIA (輝達) 今日宣布在車輛上也能享受到高效能的 …
閱讀文章
愉快的冬季佳節由GeForce NOW串流熱門遊戲開始吧
雖然這個12月外面的天氣可能不會很穩定,但每週 GeForc…
閱讀文章
All NVIDIA News
擴展定律如何推動更有智慧又更強大的 AI 發展
安全至上:領先合作夥伴採用 NVIDIA 網路安全 AI 保護關鍵基礎設施
AI 帶來亮眼報酬:調查結果揭示金融業最新技術趨勢
NVIDIA 發表為代理型 AI 應用提供安全防護的 NIM 微服務
NVIDIA 攜手產業領導業者推動基因組學、藥物探索與醫療保健發展
|
https://blogs.nvidia.com/blog/geforce-now-thursday-june-16/ | Get Your Wish: Genshin Impact Coming to GeForce NOW | Greetings, Traveler.
Prepare for adventure.
Genshin Impact
, the popular open-world action role-playing game, is leaving limited beta and launching for all
GeForce NOW
members next week.
Gamers can get their game on today with the six total games joining the
GeForce NOW library
.
As
announced
last week,
Warhammer 40,000: Darktide
is coming to the cloud at launch — with GeForce technology. This September, members will be able to leap thousands of years into the future to the time of the Space Marines, streaming on GeForce NOW with NVIDIA DLSS and more.
Plus, the 2.0.41 GeForce NOW app update brings a highly requested feature: in-stream copy-and-paste support from the clipboard while streaming from the PC and Mac apps — so there’s no need to enter a long, complex password for the digital store. Get to your games even faster with this new capability.
GeForce NOW is also giving mobile gamers more options by bringing the perks of RTX 3080 memberships and PC gaming at 120 frames per second to all devices with support for 120Hz phones. The capability is rolling out in the coming weeks.
Take a Trip to Teyvat
After the success of a limited beta and receiving great feedback from members,
Genshin Impact
is coming next week to everyone streaming on GeForce NOW.
Embark on a journey as a traveler from another world, stranded in the fantastic land of Teyvat. Search for your missing sibling in a vast continent made up of seven nations. Master the art of elemental combat and build a dream team of over 40 uniquely skilled playable characters – like the newest additions of Yelan and Kuki Shinobu – each with their own rich stories, personalities and combat styles.
Experience the immersive campaign, dive deep into rich quests alongside iconic characters and complete daily challenges. Charge head-on into battles solo or invite friends to join the adventures. The world is constantly expanding, so bring it wherever you go across devices, streaming soon to underpowered PCs,
Macs
and Chromebooks on GeForce NOW.
RTX 3080 members
can level up their gaming for the best experience by streaming in
4K resolution
and 60 frames per second on the PC and Mac apps.
Let the Gaming Commence
All of the action this GFN Thursday kicks off with six new games arriving on the cloud. Members can also gear up for
Rainbow Six Siege
Year 7 Season 2.
Get ready for a new Operator, Team Deathmatch map and more in “Rainbow Six Siege” Year 7 Season 2.
Members can look for the following streaming this week:
Chivalry 2
(New release on
Steam
)
Starship Troopers – Terran Command
(New release on
Steam
and
Epic Games Store
)
Builder Simulator
(
Steam
)
Supraland
(Free on
Epic Games Store
)
The Legend of Heroes: Trails of Cold Steel II
(
Steam
)
POSTAL: Brain Damaged
(
Steam
)
Finally, members still have a chance to stream the
PC Building Simulator 2
open beta before it ends on Monday, June 20. Experience deeper simulation, an upgraded career mode and powerful new customization features to bring your ultimate PC to life.
To start your weekend gaming adventures, we’ve got a question. Let us know your thoughts on
Twitter
or in the comments below.
What are there more of in video games? 🤔
NPCs or Quests?
— 🌩️ NVIDIA GeForce NOW (@NVIDIAGFN)
June 15, 2022
Categories:
Gaming
Tags:
Cloud Gaming
|
GeForce NOW | https://blogs.nvidia.com.tw/blog/geforce-now-thursday-june-16/ | 願望成真:《原神 (Genshin Impact) 》即將於 GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲服務推出 | 旅人你好,
準備踏上冒險之旅吧。熱門開放世界動作角色扮演遊戲
《原神》
即將結束限量公測版,並將於下週推出,供所有
GeForce NOW
會員遊玩。
還有六款遊戲現已加入
GeForce NOW 遊戲庫
,供玩家即刻暢玩。
正如上週
公告
,
《戰鎚
40K
:黑潮
(Warhammer 40,000: Darktide)
》
即將於雲端推出,由 GeForce 技術支援。今年九月,會員將能橫跨數千年後的未來,進入太空海軍陸戰隊時代,遊戲將可於 GeForce NOW 上串流。
前往提瓦特
《原神》
限時公測版大獲成功,得到會員的極佳回饋,並將於下週開始在 GeForce NOW 上開放串流,供所有玩家遊玩。
化身來自另一世界的旅人踏上冒險之途,流連於提瓦特的奇幻土地。在由七個國家組成的寬廣大陸尋找失蹤手足。掌握元素戰鬥的藝術,打造一支夢幻團隊,40 多位角色均具備獨一無二的技能,例如最新加入的夜蘭 (Yelan) 和久岐忍 (Kuki Shinobu),他們各自都有豐富的故事、個性和戰鬥風格。
在《
Chasm
》的
2.7
版「荒夢藏虞淵
(Hidden Dreams in the Depths)
」更新中,探索故事深處的奧秘。
體驗身歷其境的戰役、與經典角色一同深入探索豐富任務並完成每日挑戰。衝鋒陷陣單打獨鬥,或邀請好友加入冒險。世界正在持續擴張,所以無論身處何處都能跨裝置使用,快速在低效能的 PC、
Mac
和 Chromebook 上透過 GeForce NOW串流遊玩。
遊戲開始
本週 GFN 以六款於雲端推出的新遊戲揭開序幕。會員也可以準備迎接
《虹彩六號:圍攻行動
(Rainbow Six Siege)
》
第 7 年第 2 季。
準備好迎接《虹彩六號:圍攻行動 (Rainbow Six Siege) 》第 7 年第 2 季新加入的戰鬥員、團隊殊死戰 (Team Deathmatch) 地圖等更多內容。
會員可於本週稍後期待以下遊戲開放串流:
《騎士精神
2 (Chivalry 2)
》
(於
Steam
全新發佈)
《星艦戰將:人類總動員
(Starship Troopers – Terran Command)
》
(於
Steam
與
Epic Games Store
全新發佈)
《
Builder Simulator
》
(
Steam
)
《
Supraland
》
(
Epic Games Store
開放免費遊玩)
《英雄傳說閃之軌跡
II (The Legend of Heroes: Trails of Cold Steel II)
》
(
Steam
)
《喋血街頭:腦損
(POSTAL: Brain Damaged
) 》(
Steam
)
最後,會員仍有機會在 6 月 20 日星期一結束前,串流遊玩
《
PC Builder Simulator 2
》
公測版。體驗更深入的模擬效果、經過升級的生涯模式和強大的全新自訂功能,讓你的終極 PC 栩栩如生。
Categories:
遊戲
Tags:
cloud gaming
|
GeForce Now |
https://blogs.nvidia.com/blog/geforce-now-thursday-april-7/ | Try This Out: GFN Thursday Delivers Instant-Play Game Demos on GeForce NOW | GeForce NOW
is about bringing new experiences to gamers.
This GFN Thursday introduces game demos to GeForce NOW. Members can now try out some of the hit games streaming on the service before purchasing the full PC version — including some finalists from the 2021 Epic MegaJam.
Plus, look for six games ready to stream from the
GeForce NOW library
starting today.
In addition, the 2.0.39 app update is rolling out for PC and Mac with a few fixes to improve the experience.
Dive In to Cloud Gaming With Demos
GeForce NOW supports new ways to play and is now offering free game demos to help gamers discover titles to play on the cloud — easy to find in the “Instant Play Free Demos” row.
Gamers can stream these demos before purchasing the full PC versions from popular stores like Steam, Epic Games Store, Ubisoft Connect, GoG and more. The demos are hosted on GeForce NOW, allowing members to check them out instantly — just click to play!
The first wave of demos, with more to come, includes:
Chorus
,
Ghostrunner
,
Inscryption, Diplomacy Is Not an Option
and
The RiftBreaker Prologue.
Members can even get a taste of the full GeForce NOW experience with fantastic
Priority and RTX 3080 membership
features like RTX in
Ghostrunner
and DLSS in
Chorus
.
On top of these great titles, demos of some finalists from the 2021
Epic MegaJam
will be brought straight from Unreal Engine to the cloud.
Zoom and nyoom to help BotiBoi gather as many files as possible and upload them to the server before the inevitable system crash in
Boti Boi
by the Purple Team. Assist a user by keeping files organized for fast access while seeking beeBots in
Microwasp Seekers
by Partly Atomic.
Keep an eye out for updates on demos coming to the cloud on GFN Thursdays and in the
GeForce NOW app
.
Get Your Game On
Play as a small fox on a big adventure in TUNIC, now streaming through both Steam and Epic Games Store.
Ready to jump into a weekend full of gaming?
GFN Thursday always comes with a new batch of games joining the GeForce NOW library. Check out these six titles ready to stream this week:
Die After Sunset
(
Steam
)
ELDERBORN
(
Steam
)
Northgard
(
Epic Games Store
)
Offworld Trading Company
(
Steam
)
Spirit Of The Island
(
Steam
)
TUNIC
(
Epic Games Store
)
Finally,
last week
GFN Thursday announced that
Star Control: Origins
would be coming to the cloud later in April. The game is already available to stream on GeForce NOW.
With all these great games available to try out, we’ve got a question for you this week. Let us know on
Twitter
or in the comments below.
Best game demo of all time. Go.
— 🌩️ NVIDIA GeForce NOW (@NVIDIAGFN)
April 5, 2022
Categories:
Gaming
Tags:
Cloud Gaming
|
GeForce NOW | https://blogs.nvidia.com.tw/blog/geforce-now-thursday-april-7/ | 快來試試:本週 GFN 在 GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲服務帶來可立即暢玩的遊戲 DEMO | GeForce NOW
聯盟
Taiwan Mobile
雲端遊戲服務
即將為玩家帶來全新體驗。
本週 GFN 推出 GeForce NOW 的遊戲試玩。會員現在可以在購買完整 PC 版之前,先試玩在服務上串流的熱門遊戲,包括 2021 年在 Epic MegaJam 晉級決賽的作品。
此外,敬請期待從今天起在
GeForce NOW 遊戲庫
串流的六款遊戲。
此外,PC 和 Mac 版也推出 2.0.39 應用程式更新,並完成部分修正以提升遊戲體驗。
透過試玩深度探索雲端遊戲體驗
GeForce NOW支援全新的遊戲方式,而且現在提供免費的遊戲試玩,協助玩家探索雲端上遊戲體驗,只要到「立即遊玩 遊戲試玩版」列中,就能輕鬆找到。
玩家從 Steam、Epic Games Store、Ubisoft Connect、GoG 等熱門商店購買完整 PC 版之前,可以先串流體驗這些試玩版。試玩版會託管於 GeForce NOW,讓會員可以立即查看,只要按一下即可暢玩!
在推出更多試玩之前,第一波遊戲包含:
《齊唱
(Chorus)
》
、
《幽影行者
(Ghostrunner)
》
、
《賭命牌卡
(Inscryption)
》、《外交不是一個選擇
(Diplomacy Is Not An Option)
》
和
《時空裂隙開拓者:序章
(The RiftBreaker: Prologue)
》。
除了這些精彩遊戲之外,在 2021 年
Epic MegaJam
中決賽入選作品的試玩也將直接從 Unreal Engine 串流至雲端。
暢玩 Purple Team 的
《
Boti Boi
》
,幫助 BotiBoi 盡可能收集檔案,並在遇到無法避免的系統當機前,將檔案上傳至伺服器。暢玩 Partly Atomic 的
《
Microwasp Seekers
》
,協助使用者在尋找 beeBots 時,能夠將檔案整理得井然有序,以便快速找到檔案。
請在本週 GFN 部落格和
GeForce NOW 應用程式
中,持續關注即將串流至雲端的遊戲試玩最新消息。
開始遊戲
準備好享受充滿遊戲樂趣的週末了嗎?
本週 GFN 會持續將新遊戲加入 GeForce NOW 遊戲庫。看看這六款即將於本週開始串流的遊戲:
《日落後死去
(Die After Sunset)
》
(
Steam
)
《
ELDERBORN
》
(
Steam
)
《
Northgard
》
(
Epic Games Store
)
《全球貿易壟斷公司
(Offworld Trading Company)
》
(
Steam
)
《
Spirit Of The Island
》
(
Steam
)
《
TUNIC
》
(
Epic Games Store
)
最後,
在上週的
本週 GFN 宣布了
《激戰
M
星雲:起源
(Star Control: Origins)
》
將於 4 月底在雲端上推出。這款遊戲已經可以在 GeForce NOW 上串流了。
Categories:
遊戲
Tags:
cloud gaming
|
GeForce Now |
https://blogs.nvidia.com/blog/geforce-now-fortnite-closed-beta/ | GFN Thursday: ‘Fortnite’ Comes to iOS Safari and Android Through NVIDIA GeForce NOW via Closed Beta | Starting next week,
Fortnite
on GeForce NOW will launch in a limited-time closed beta for mobile, all streamed through the Safari web browser on iOS and the
GeForce NOW Android app
.
The
beta is open for registration
for all GeForce NOW members, and will help test our server capacity, graphics delivery and new touch controls performance. Members will be admitted to the beta in batches over the coming weeks.
‘Fortnite’ Streaming Gameplay Comes to Mobile Through iOS Safari and Android With Touch Inputs
Alongside the amazing team at Epic Games, we’ve been working to enable a touch-friendly version of
Fortnite
for mobile delivered through the cloud. While PC games in the GeForce NOW library are best experienced on mobile with a gamepad, the introduction of touch controls built by the GeForce NOW team offers more options for players, starting with
Fortnite
.
Beginning today, GeForce NOW members can sign up for a chance to join the
Fortnite
limited-time closed beta
for mobile devices. Not an existing member? No worries. Register for a
GeForce NOW membership
and sign up to become eligible for the closed beta once the experience starts rolling out next week. Upgrade to a
Priority or RTX 3080 membership
to receive priority access to gaming servers. A paid GeForce NOW membership is not required to participate.
You could say the world is a little upside down in Fortnite Chapter 3.
For tips on gameplay mechanics or a refresher on playing
Fortnite
with touch controls, check out
Fortnite’s
Getting Started
page.
More Touch Games
And we’re just getting started. Cloud-to-mobile gaming is a great opportunity for publishers to get their games into more gamers’ hands with touch-friendly versions of their games. PC games and game engines that support Windows touch events, like Unreal Engine 4, can easily enable mobile touch support on GeForce NOW.
We’re working with additional publishers to add more touch-enabled games to GeForce NOW. And look forward to more publishers streaming full PC versions of their games to mobile devices with built-in touch support — reaching millions through the Android app and iOS Safari devices.
GFN Thursday Releases
Take on a four-player, first-person shooter set aboard a starship stranded at the edge of explored space in
The Anacrusis
.
GFN Thursday always means more games. Members can find these and more streaming on the cloud this week:
The Anacrusis
(New release on
Steam
and
Epic Games Store
, Jan. 13)
Supraland Six Inches Under
(New release on
Steam
, Jan. 14)
Galactic Civilizations 3
(Free on
Epic Games Store
, Jan. 13 – 20)
Ready or Not
(
Steam
)
We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.
What are you planning to play this weekend? Let us know on
Twitter
or in the comments below.
Categories:
Gaming
Tags:
Cloud Gaming
|
GeForce NOW | https://blogs.nvidia.com.tw/blog/geforce-now-fortnite-closed-beta/ | 本週 GFN:封測版《要塞英雄 (Fortnite) 》透過 GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲服務於 iOS Safari 與 Android 推出 | GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲服務上的《要塞英雄 (Fortnite) 》從下週起,將推出支援行動裝置的限時封測版,可完全透過 iOS 的 Safari 網頁瀏覽器和
GeForce NOW Android 應用程式串流暢玩
。
所有 GeForce NOW 會員皆可
註冊測試版
,並協助我們測試伺服器容量、畫面呈現效果,以及全新觸控功能的效能。會員將在未來幾週分批加入封測版。
《要塞英雄》推出支援
iOS Safari
與
Android
的行動裝置版串流遊戲體驗,並提供觸控輸入功能
我們與 Epic Games 的優秀團隊攜手合作,打造支援觸控的行動裝置版《要塞英雄》,並透過雲端提供遊戲。雖然 GeForce NOW 內的 PC 遊戲,最好使用遊戲控制器搭配行動裝置,以獲得最佳體驗,但 GeForce NOW 團隊推出的觸控功能,為玩家提供更多選擇,而這項體驗就從《要塞英雄》開始。
從今天起,GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲平台的會員能夠註冊,取得加入
《要塞英雄》
限時封測
行動裝置版的機會。還不是會員? 別擔心。立即加入 並註冊,待封測版體驗於下週推出後,您即符合參與資格。即使您不具付費的 GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲平台會員身分,也可參與。
如需遊戲機制的訣竅,或複習如何使用觸控功能暢玩《要塞英雄》,請參閱《要塞英雄》的
新手入門
頁面。
遊戲界的
3
月狂潮
又是充滿精彩遊戲的一個月,我們本週將推出八款遊戲供您串流暢玩,接下來在整個 3 月將陸續推出共 21 款遊戲。
《
ELEX II
》
(將於
Steam
新發行)
《遠方:湧變暗潮
(FAR: Changing Tides)
》
(將於
Steam
新發行)
《影武者
3 (Shadow Warrior 3)
》
(將於
Steam
新發行)
《
AWAY: The Survival Series
》
(
Epic Games Store
)
《
Labyrinthine Dreams
》
(
Steam
)
《太陽帝國:宇宙指揮官
–
起義
(Sins of a Solar Empire: Rebellion)
》
(
Steam
)
《
TROUBLESHOOTER: Abandoned Children
》
(
Steam
)
《
The Vanishing of Ethan Carter
》
(
Epic Games Store
)
同樣會在 3 月隆重登場的遊戲:
《
Buccaneers!
》
(3 月 7 日於
Steam
新發行)
《
Ironsmith Medieval Simulator
》
(3 月 9 日於
Steam
新發行)
《
Distant Worlds 2
》
(3 月 10 日於
Steam
新發行)
《怪獸超級越野賽
5 (Monster Energy Supercross – The Official Videogame 5)
》
(3 月 17 日於
Steam
新發行)
《工人物語
(The Settlers)
》
(3 月 17 日於
Ubisoft Connect
新發行)
《西伯利亞:以前世界
(Syberia: The World Before)
》
(3 月 18 日於
Steam
與
Epic Games Store
新發行)
《
Lumote: The Mastermote Chronicles
》
(3 月 24 日於
Steam
新發行)
《
Turbo Sloths
》
(3 月 30 日於
Steam
新發行)
《
Blood West
》
(
Steam
)
《模擬巴士駕駛員
(Bus Driver Simulator)
》
(
Steam
)
《
Conan Chop Chop
》
(
Steam
)
《
Dread Hunger
》
(
Steam
)
《惡棍英雄
(Fury Unleashed)
》
(
Steam
)
《釀造物語
(Hundred Days – Winemaking Simulator)
》
(
Steam
)
《英雄傳說閃之軌跡
II (The Legend of Heroes: Trails of Cold Steel II)
》
(
Steam
)
《瑪莎已死
(Martha is Dead)
》
(
Steam
與
Epic Games Store
)
《
Power to the People
》
(
Steam
)
《
Project Zomboid
》
(
Steam
)
《
Rugby 22
》
(
Steam
)
補充
2
月發行的遊戲
除了我們 2 月發佈的 30 款遊戲外,還有其他幾款遊戲也加入 GeForce NOW 遊戲庫。以下是上個月額外新增的幾款遊戲:
《外交不是一個選擇
(Diplomacy Is Not An Option)
》
(
Epic Games Store
)
《
Not Tonight 2
》
(
Steam
與
Epic Games Store
)
《模型建造者
(Model Builder)
》
(
Steam
)
我們之前也宣佈
《天外天
Epic 版 (Two Worlds Epic Edition) 》
將於 GeForce NOW 推出,不過目前此遊戲將不再於本服務上架。
Categories:
遊戲
Tags:
cloud gaming
|
GeForce Now |
https://blogs.nvidia.com/blog/ces-2025-jensen-huang/ | CES 2025: AI Advancing at ‘Incredible Pace,’ NVIDIA CEO Says | NVIDIA founder and CEO Jensen Huang kicked off CES 2025 with a 90-minute keynote that included new products to advance gaming, autonomous vehicles, robotics and agentic AI.
AI is advancing at an ‘incredible pace,’ Huang told an audience of over 6,000 at CES 2025 in Las Vegas.
“It started with perception AI — understanding images, words and sounds. Then generative AI — creating text, images and sound,” Huang said. Now, we’re entering the era of “physical AI, AI that can proceed, reason, plan and act.”
NVIDIA GPUs and platforms are at the heart of this transformation, Huang explained, enabling breakthroughs across industries, including gaming, robotics and autonomous vehicles (AVs).
Key Announcements
Huang’s keynote showcased how NVIDIA’s latest innovations are enabling this new era of AI, with several groundbreaking announcements, including:
The
just-announced NVIDIA Cosmos platform
advances physical AI with new models and video data processing pipelines for robots, autonomous vehicles and vision AI.
New
NVIDIA Blackwell-based GeForce RTX 50 Series GPUs
offer stunning visual realism and unprecedented performance boosts.
AI foundation models introduced at CES for RTX PCs
feature NVIDIA NIM microservices and AI Blueprints for crafting digital humans, podcasts, images and videos.
The
new NVIDIA Project DIGITS
brings the power of NVIDIA Grace Blackwell to developer desktops in a compact package.
NVIDIA is partnering with Toyota
for safe next-gen vehicle development using the NVIDIA DRIVE AGX in-vehicle computer running NVIDIA DriveOS.
Huang started off his talk by reflecting on NVIDIA’s three-decade journey. In 1999, NVIDIA invented the programmable GPU. Since then, modern AI has fundamentally changed how computing works, he said. “Every single layer of the technology stack has been transformed, an incredible transformation, in just 12 years.”
Revolutionizing Graphics With GeForce RTX 50 Series
“GeForce enabled AI to reach the masses, and now AI is coming home to GeForce,” Huang said.
With that, he introduced
the NVIDIA GeForce RTX 5090 GPU
, the most powerful GeForce RTX GPU so far, with 92 billion transistors and delivering 3,352 trillion AI operations per second (TOPS).
“Here it is — our brand-new GeForce RTX 50 series, Blackwell architecture,” Huang said, holding the blacked-out GPU aloft and noting how it’s able to harness advanced AI to enable breakthrough graphics. “The GPU is just a beast.”
“Even the mechanical design is a miracle,” Huang said, noting that the graphics card has two cooling fans.
More variations in the GPU series are coming. The GeForce RTX 5090 and GeForce RTX 5080 desktop GPUs are scheduled to be available Jan. 30. The GeForce RTX 5070 Ti and the GeForce RTX 5070 desktops are slated to be available starting in February. Laptop GPUs are expected in March.
DLSS 4 introduces
Multi Frame Generation, working in unison with the complete suite of DLSS technologies to boost performance by up to 8x.
NVIDIA also unveiled NVIDIA Reflex 2
, which can reduce PC latency by up to 75%.
The latest generation of DLSS can generate three additional frames for every frame we calculate, Huang explained. “As a result, we’re able to render at incredibly high performance, because AI does a lot less computation.”
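The arithmetic behind that claim is straightforward: one rendered frame plus three generated frames quadruples the presented frame rate, and rendering internally at a lower resolution multiplies the gain further toward the quoted 8x. The upscaling factor below is an illustrative assumption.

```python
# Frame-generation math: rendered frames vs. frames actually presented to the display.
rendered_fps = 30              # frames rendered per second (example value)
generated_per_rendered = 3     # Multi Frame Generation adds up to 3 frames per rendered frame
upscaling_speedup = 2.0        # illustrative gain from rendering at a lower internal resolution

presented_fps = rendered_fps * (1 + generated_per_rendered)
print(f"{rendered_fps} rendered fps -> {presented_fps} presented fps (4x)")
print(f"combined effective speedup: ~{(1 + generated_per_rendered) * upscaling_speedup:.0f}x")
```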
RTX Neural Shaders
use small neural networks to improve textures, materials and lighting in real-time gameplay. RTX Neural Faces and RTX Hair advance real-time face and hair rendering, using generative AI to animate the most realistic digital characters ever. RTX Mega Geometry increases the number of ray-traced triangles by up to 100x, providing more detail.
Advancing Physical AI With Cosmos
In addition to advancements in graphics, Huang introduced the
NVIDIA Cosmos
world foundation model platform, describing it as a game-changer for robotics and industrial AI.
The next frontier of AI is physical AI, Huang explained. He likened this moment to the transformative impact of large language models on generative AI.
“The ChatGPT moment for general robotics is just around the corner,” he explained.
World foundation models, like large language models, are essential for advancing robots and AVs, but many developers lack the resources or expertise to train these models from scratch, Huang explained.
Cosmos integrates generative models, tokenizers, and a video processing pipeline to power physical AI systems like AVs and robots.
Cosmos equips AI models with advanced simulation capabilities, enabling them to predict and evaluate multiple future scenarios to select the best course of action.
Cosmos models process text, image and video prompts to create detailed virtual environments tailored for robotics and AV simulations.
Leading robotics and automotive companies, including
1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier,
Galbot
,
Hillbot
,
IntBot
,
Neura Robotics
, Skild AI, Virtual Incision, Waabi and XPENG, along with ridesharing giant Uber, are among the first to adopt Cosmos.
Cosmos is available under an open model license on GitHub.
Empowering Developers With AI Foundation Models
Beyond robotics and autonomous vehicles, NVIDIA is empowering developers and creators with AI foundation models.
Huang introduced AI foundation models for RTX PCs
that supercharge digital humans, content creation, productivity and development.
“These AI models run in every single cloud because NVIDIA GPUs are now available in every single cloud,” Huang said. “It’s available in every single OEM, so you could literally take these models, integrate them into your software packages, create AI agents and deploy them wherever the customers want to run the software.”
Accelerated by GeForce RTX 50 Series GPUs
These models — offered as
NVIDIA NIM
microservices — are accelerated by the new
GeForce RTX 50 Series GPUs
.
The GPUs are designed to run these models efficiently, with support for FP4 computing that boosts AI inference performance by up to 2x while reducing memory usage compared to previous-generation hardware.
Huang explained the potential of new tools for creators: “We’re creating a whole bunch of blueprints that our ecosystem could take advantage of. All of this is completely open source, so you could take it and modify the blueprints.”
Top PC manufacturers and system builders are launching NIM-ready RTX AI PCs with GeForce RTX 50 Series GPUs. “AI PCs are coming to a home near you,” Huang said.
While these tools bring AI capabilities to personal computing, NVIDIA is also advancing AI-driven solutions in the automotive industry, where safety and intelligence are paramount.
Innovations in Autonomous Vehicles
Huang announced the
NVIDIA DRIVE Hyperion AV platform
, built on the new NVIDIA AGX Thor system-on-a-chip (SoC), designed for generative AI models and delivering advanced functional safety and autonomous driving capabilities.
“The autonomous vehicle revolution is here,” Huang said. “Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car.”
DRIVE Hyperion, the first end-to-end AV platform, combines advanced SoCs, sensors, and safety systems into a comprehensive suite, already adopted by automotive leaders such as Mercedes-Benz, JLR and Volvo Cars.
Huang highlighted the critical role of synthetic data in advancing autonomous vehicles. Real-world data is limited, so synthetic data is essential for training the autonomous vehicle data factory, he explained.
NVIDIA Omniverse AI Models and Cosmos Build Detailed Driving Scenarios
Using NVIDIA Omniverse AI models and Cosmos, this approach creates highly detailed driving scenarios that significantly expand and improve training datasets for autonomous vehicles.
Using Omniverse and Cosmos, NVIDIA’s AI data factory can scale “hundreds of drives into billions of effective miles,” Huang said, dramatically increasing the datasets needed for safe and advanced autonomous driving.
“We are going to have mountains of training data for autonomous vehicles,” he added.
Toyota, the world’s largest automaker, will build its next-generation vehicles on the NVIDIA DRIVE AGX Orin
, running the safety-certified NVIDIA DriveOS operating system, Huang said.
“Just as computer graphics was revolutionized at such an incredible pace, you’re going to see the pace of AV development increasing tremendously over the next several years,” Huang said. These vehicles will offer functionally safe, advanced driving assistance capabilities.
Agentic AI and Digital Manufacturing
NVIDIA and its partners have launched AI
Blueprints for agentic AI
, including PDF-to-podcast for efficient research, and video search and summarization for analyzing large quantities of video and images — enabling developers to build, test and run AI agents anywhere.
AI Blueprints enable developers to create custom agents for automating enterprise workflows. This new offering integrates NVIDIA AI Enterprise software, including NIM microservices and NeMo, with leading platforms like CrewAI, Daily, LangChain, LlamaIndex and Weights & Biases.
Huang also unveiled Llama Nemotron, a family of open models designed to help developers build and deploy generative AI agents for enterprise applications.
Developers can use NVIDIA NIM microservices to build AI agents for tasks like customer support, fraud detection and supply chain optimization.
Available as NVIDIA NIM microservices, the models can supercharge AI agents on any accelerated system.
NVIDIA NIM microservices streamline video content management, boosting efficiency and audience engagement in the media industry.
Moving beyond digital applications, NVIDIA’s innovations are paving the way for AI to revolutionize the physical world with robotics.
“All of the enabling technologies that I’ve been talking about are going to make it possible for us in the next several years to see very rapid breakthroughs, surprising breakthroughs, in general robotics.”
NVIDIA Isaac GR00T Blueprint for Synthetic Motion Generation
In manufacturing, the
NVIDIA Isaac GR00T Blueprint
for synthetic motion generation will help developers generate exponentially large synthetic motion data to train their humanoids using imitation learning.
Huang emphasized the importance of training robots efficiently, using NVIDIA Omniverse to generate millions of synthetic motions for humanoid training.
The Mega blueprint powers large-scale simulations of robot fleets, enabling companies like Accenture and KION to revolutionize warehouse automation.
These AI tools set the stage for NVIDIA’s latest innovation: a personal AI supercomputer called Project DIGITS.
NVIDIA Unveils Project DIGITS
Putting NVIDIA Grace Blackwell on every desk and at every AI developer’s fingertips, Huang unveiled
NVIDIA Project DIGITS
.
“I have one more thing that I want to show you,” Huang said. “None of this would be possible if not for this incredible project that we started about a decade ago. Inside the company, it was called Project DIGITS — deep learning GPU intelligence training system.”
Huang highlighted the legacy of NVIDIA’s AI supercomputing journey, telling the story of how in 2016 he delivered the first NVIDIA DGX system to OpenAI. “And obviously, it revolutionized artificial intelligence computing.”
The new Project DIGITS takes this mission further. “Every software engineer, every engineer, every creative artist — everybody who uses computers today as a tool — will need an AI supercomputer,” Huang said.
Huang revealed that Project DIGITS, powered by the GB10 Grace Blackwell Superchip, represents NVIDIA’s smallest yet most powerful AI supercomputer. “This is NVIDIA’s latest AI supercomputer,” Huang said, showcasing the device. “It runs the entire NVIDIA AI stack — all of NVIDIA software runs on this. DGX Cloud runs on this.”
Project DIGITS, NVIDIA’s smallest and most powerful AI supercomputer, will launch in May.
A Year of Breakthroughs
“It’s been an incredible year,” Huang said as he wrapped up the keynote. Huang highlighted NVIDIA’s major achievements: Blackwell systems, physical AI foundation models, and breakthroughs in agentic AI and robotics.
“I want to thank all of you for your partnership,” Huang said.
See
notice
regarding software product information.
Categories:
Corporate
|
Gaming
|
Generative AI
|
Software
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
GeForce
|
NVIDIA NIM
|
NVIDIA RTX
|
Physical AI
|
Robotics
|
Transportation | https://blogs.nvidia.com.tw/blog/ces-2025-jensen-huang/ | CES 2025:NVIDIA 執行長表示 AI 正以「驚人的速度」進步 | NVIDIA 創辦人暨執行長黃仁勳以長達 90 分鐘的主題演講揭開 2025 年 CES 大會的序幕,在這場精彩的演講中提到了包括推動遊戲、自駕車、機器人及代理型 AI 發展的嶄新產品。
他在拉斯維加斯的 Michelob Ultra 體育館對著超過六千名座無虛席的觀眾們說,AI「以驚人的速度進步。」
「我們從理解影像、文字和聲音的感知 AI 開始。接著是創造文字、影像和聲音的生成式 AI。」黃仁勳說。現在,我們正進入「實體 AI」時代,也就是能夠進行、推理、計畫與行動的 AI。
黃仁勳解釋說 NVIDIA 的 GPU 及平台是促進這項轉變的核心,帶動包括遊戲、機器人和自駕車在內各行各業突破性的進展。
黃仁勳在主題演講中展示了 NVIDIA 最新的創新技術如何開啟 AI 的新時代,並且發表了多項突破性的內容,包括:
剛剛發表的 NVIDIA Cosmos 平台
可為機器人、自駕車和視覺 AI 領域帶來全新模型和影片資料處理管道,推動實體 AI 的發展。
全新
NVIDIA Blackwell 架構 GeForce RTX 50 系列 GPU
能夠創作出驚人逼真度的視覺影像效果,又能將運算效能提升到前所未有的程度。
在 CES 大會推出適用於 RTX PC 的 AI 基礎模型
,具有 NVIDIA NIM 微服務與 AI Blueprints,可用於製作數位人類、podcast、圖片與影片。
全新 NVIDIA Project DIGITS
將 NVIDIA Grace Blackwell 的強大功能帶到開發人員的桌面上,而它小巧的身影幾乎可以放進口袋裡。
NVIDIA 攜手豐田(Toyota)汽車
使用運行 NVIDIA DriveOS 的 NVIDIA DRIVE AGX 車載電腦,合作開發安全的新世代車輛。
黃仁勳在這場演講一開始,先是回顧 NVIDIA 三十年來的發展歷程。1999 年,NVIDIA 發明了可編程 GPU。黃仁勳說從那時開始,現代 AI 從根本上改變了運算的運作方式。「在短短 12 年間,技術堆疊的每一層都發生了翻天覆地的變化,這是令人難以置信的轉變。」
GeForce RTX 50
系列帶來繪圖技術革命
「GeForce 讓 AI 得以普及到大眾的手中,現在 AI 也回歸到GeForce。」黃仁勳說。
他以此為引向嘉賓們介紹
NVIDIA GeForce RTX 5090 GPU
,這是迄今為止最強大的 GeForce RTX GPU,擁有 920 億個電晶體,每秒可進行 3,352 兆次 AI 運算(TOPS)。
「這就是我們全新的 GeForce RTX 50 系列,Blackwell 架構。」黃仁勳高舉著一塊黑色的 GPU,指出它如何能夠利用先進的 AI 來創造出突破性的繪圖技術。「這顆 GPU 簡直就是一隻野獸。」
「即使是它的機械設計也是奇蹟。」黃仁勳指出顯示卡上有兩個冷卻風扇。
這個 GPU 系列的更多產品即將現身。GeForce RTX 5090 和 GeForce RTX 5080 桌上型 GPU 預定於 1 月 30 日上市。GeForce RTX 5070 Ti 和 GeForce RTX 5070 桌上型 GPU 預計於二月開始上市。筆記型電腦 GPU 預計將於三月上市。
DLSS 4 引入
多畫格生成(Multi Frame Generation)技術,搭配整套 DLSS 技術可以將效能提升八倍。
NVIDIA 還發表了 NVIDIA Reflex 2
,可以將 PC 延遲時間降低 75%。
黃仁勳解釋說最新一代的 DLSS 技術可以為我們計算出的每一個畫格另外產生三個畫格。「由於 AI 要運算的量少了很多,這樣我們就能夠得到超高的渲染效能。」
RTX Neural Shaders
使用小型神經網路即時改善遊戲裡的紋理、材質與照明。RTX Neural Faces 和 RTX Hair 能夠即時渲染臉部和毛髮,使用生成式 AI 製作史上最有真實感的數位角色動畫。RTX Mega Geometry 可以將光線追蹤三角形的數量增加 100 倍,製作出更精細的畫面。
利用
Cosmos
推動實體
AI
的發展
除了繪圖技術方面的進展之外,黃仁勳還介紹
NVIDIA Cosmos
世界基礎模型平台,指稱其為改變機器人與工業 AI 領域發展遊戲規則的一項技術。
黃仁勳說實體 AI 是 AI 的下一個發展領域。他將這個時刻比喻為大型語言模型對於生成式 AI 所帶來的變革性影響。
他說:「通用機器人的 ChatGPT 時刻就要到來。」
黃仁勳說與大型語言模型一樣,世界基礎模型是推動開發機器人與自駕車的根本,不過並非所有開發人員都有專業知識與資源來訓練自己的模型。
Cosmos 整合了生成模型、標記器和影片處理管道,協助開發自駕車和機器人等實體 AI 系統。
開發 Cosmos 的目的在於將前瞻性與多元宇宙模擬的力量帶入 AI 模型上,讓模型能夠模擬各種可能的未來與選擇最佳行動。
黃仁勳解釋道 Cosmos 模型可以接收文字、圖像或影片提示,並且以影片方式產生虛擬世界狀態。「Cosmos優先處理自駕車和機器人的獨特需求,例如真實世界環境、照明和物體恆存性。」
包括 1X、思靈機器人(Agile Robots)、Agility、Figure AI、Foretellix、Fourier、
Galbot
、
Hillbot
、
IntBot
、
Neura Robotics
、Skild AI、Virtual Incision、Waabi 和小鵬汽車(XPENG)在內的
各大機器人和汽車公司
,以及乘車服務巨擘 Uber,皆為首批採用 Cosmos 的公司。
此外,現代汽車集團(Hyundai Motor Group)也採用 NVIDIA AI 與 Omniverse
,以打造更安全、更聰明的車輛,擁有更強大的製造能力及部署最先進的機器人技術。
可以在 GitHub 上取得採用開放授權形態的 Cosmos。
推出
AI
基礎模型強化開發人員的能力
除了機器人與自駕車,NVIDIA 還推出 AI 基礎模型強化開發人員與創作者的能力。
黃仁勳在演講中介紹了適用於 RTX PC 的 AI 基礎模型
,用於支援開發數位人類、內容創作、提高生產力及輔助各項開發作業。
黃仁勳表示:「現在可以在每一個雲端環境裡使用 NVIDIA GPU,各位便能在每一個雲端環境裡運行這些 AI 模型。每一家 OEM 都可以拿到這些模型,所以各位都能用到,把它們整合到你們的軟體套件裡,建立 AI 代理,然後部署到任何客戶想要執行軟體的地方。」
這些以
NVIDIA NIM
微服務形式提供的模型,由全新的
GeForce RTX 50 系列 GPU
加速。
這些 GPU 有快速執行這些模型的能力,加上支援 FP4 運算,將 AI 推論能力提高兩倍,與前一代硬體相比,能夠用更小的記憶體佔用空間在本機端運行生成式 AI 模型。
黃仁勳解釋創作者可以怎麼利用這些新工具:「我們正在創作一大堆藍圖,讓我們的生態系統可以善加利用。這一切都是完全開源的形態,各位可以自行取用和修改藍圖。」
頂級 PC 製造商和系統建置商將推出搭載 GeForce RTX 50 系列 GPU 的 NIM-ready RTX AI PC。「AI PC 即將進入各位附近的家中。」黃仁勳說。
在這些工具為 PC 帶來 AI 功能的同時,NVIDIA 也在首重安全與智慧的汽車業推動開發 AI 驅動的解決方案。
自駕車技術創新
黃仁勳發表採用全新 NVIDIA AGX Thor 系統單晶片(SoC)所開發出、專為生成式 AI 模型設計的
NVIDIA DRIVE Hyperion AV 平台
,可提供先進的功能安全與自動駕駛功能。
黃仁勳表示:「自駕車革命已經來臨。就像所有機器人一樣,打造自駕車需要用到三台電腦:用於訓練 AI 模型的 NVIDIA DGX,用於測試駕駛和產生合成資料的 Omniverse,而 DRIVE AGX 則是車內的超級電腦。」
DRIVE Hyperion 是第一個端對端的自駕車平台,整合了適用於下一代汽車的先進 SoC、感測器和安全系統,還有感測器套件與主動安全和 level 2 駕駛堆疊, Mercedes-Benz、捷豹路虎和 Volvo Cars 等引領發展行車安全功能的業者已經採用這個平台。
黃仁勳強調合成資料在推動開發自駕車方面所扮演的重要角色。他解釋說,從現實世界只能得到有限的資料,必須使用合成資料來訓練自駕車輛資料工廠。
在 NVIDIA Omniverse AI 模型和 Cosmos 的驅動下,這種方法可以「產生合成駕駛情境,以成倍方式強化訓練資料。」
黃仁勳表示使用 Omniverse 和 Cosmos,NVIDIA 的 AI 資料工廠可以將「數百次的駕駛擴展為數十億英哩的有效里程」,大幅增加發展安全先進自動駕駛技術所需的資料集。
「我們將為自駕車提供大量訓練資料。」他補充道。
黃仁勳表示
全球最大的汽車製造商豐田汽車將使用 NVIDIA DRIVE AGX Orin 開發下一代汽車
,並且運行通過安全認證的 NVIDIA DriveOS 作業系統。
黃仁勳說:「正如電腦繪圖技術以飛快速度掀起革命,各位在未來幾年內將會看到自駕車的發展速度大幅提升」。這些車輛將提供功能安全又先進的駕駛輔助功能。
代理型
AI
與數位製造
NVIDIA 及其合作夥伴推出
適用於代理式 AI 的 AI Blueprints
,包括用於提高研究效率的 PDF-to-podcast,以及用於分析大量影片與圖像的影片搜尋與摘要 – 這些藍圖都讓開發人員能夠隨時隨地建立、測試和運行 AI 代理。
開發人員可以使用 AI Blueprints 部署客製化代理,自動執行企業裡的工作流程。這一類全新的合作夥伴藍圖整合了 NVIDIA AI Enterprise 軟體,包括 NVIDIA NIM 微服務和 NVIDIA NeMo,以及 CrewAI、Daily、LangChain、LlamaIndex 和 Weights & Biases 等領先供應商的平台。
黃仁勳還宣布推出全新的
Llama Nemotron
。
開發人員可以使用 NVIDIA NIM 微服務建立 AI 代理,以執行客戶支援、詐欺偵測及供應鏈最佳化等工作。
以 NVIDIA NIM 微服務的形式提供這些模型,可以在任何加速系統上增強 AI 代理的效能。
NVIDIA NIM 微服務可以協助媒體業簡化影片內容管理,提升工作效率及觀眾參與度。
除了數位應用,NVIDIA 的創新技術也為 AI 透過機器人來徹底改變實體世界一事打下基礎。
「我一直在講的的這些技術,都會讓我們在未來幾年裡,在通用機器人領域看到非常快速又令人驚訝的突破。」
適用於產生合成動作的
NVIDIA Isaac GR00T Blueprint
將幫助開發人員產生海量合成動作資料,利用模仿學習來訓練製造業所使用的人形機器人。
黃仁勳強調高效率訓練機器人的重要性,利用 NVIDIA 的 Omniverse 平台產生數百萬個合成動作來訓練人形機器人。
Mega 藍圖能夠進行大規模模擬機器人機群,埃森哲(Accenture)及凱傲(KION)等倉儲自動化領導業者已經採用這項藍圖。
這些 AI 工具為 NVIDIA 的最新創新技術奠定基礎:名為 Project DIGITS 的個人 AI 超級電腦。
NVIDIA
推出
Project DIGITS
黃仁勳發表
NVIDIA Project DIGITS
,將 NVIDIA Grace Blackwell 放在每個人的桌上,讓每個 AI 開發人員輕鬆就能獲得 AI 的強大運算能力。
「我還有一個東西要給你們看。如果我們不是大概十年前就展開這麼厲害的專案,這一切都不可能發生。我們在 NVIDIA 裡把它取了 Project DIGITS 這個名字 – 深度學習 GPU 智慧訓練系統。」黃仁勳說。
黃仁勳強調 NVIDIA 在 AI 超級運算之路上的建樹,講述他在 2016 年將第一套 NVIDIA DGX 系統交給 OpenAI 使用的故事。「顯然,它徹底改變了 AI 運算。」
新的 Project DIGITS 又更進一步推動這個使命。「每個軟體工程師、每個工程師、每個創意藝術家 – 每個今天把電腦當成工具來使用的人 – 都會需要一台 AI 超級電腦。」黃仁勳說。
黃仁勳說 Project DIGITS 搭載 GB10 Grace Blackwell 超級晶片,是 NVIDIA 體積最小、功能卻最強大的 AI 超級電腦。「這是 NVIDIA 最新的 AI 超級電腦。」黃仁勳在展示它的時候這麼表示。「它可以運行整個 NVIDIA AI 堆疊 – 所有 NVIDIA 軟體都可以在上面運行。DGX Cloud 便是在這個上面運行。」
外型小巧卻功能強大的 Project DIGITS 預計將於五月上市。
突破的一年
「這是令人難以置信的一年。」黃仁勳在結束這場主題演講時表示。黃仁勳強調 NVIDIA 的主要成就: Blackwell 系統、實體 AI 基礎模型,以及在代理型 AI 和機器人方面的突破。
「我要感謝大家的合作。」黃仁勳說。
Categories:
企業端
|
生成式人工智慧
|
軟體
|
遊戲 |
https://blogs.nvidia.com/blog/category/enterprise/ | Data Center | - Archives Page 1 | NVIDIA Blog
Data Center
Most Popular
Massive Foundation Model for Biomolecular Sciences Now Available via NVIDIA BioNeMo
Scientists everywhere can now access Evo 2, a powerful new foundation model that understands the genetic code for…
Read Article
Most Popular
Massive Foundation Model for Biomolecular Sciences Now Available via NVIDIA BioNeMo
Telcos Dial Up AI: NVIDIA Survey Unveils Industry’s AI Trends
Physicists Tap James Webb Space Telescope to Track New Asteroids and City-Killer Rock
Safety First: Leading Partners Adopt NVIDIA Cybersecurity AI to Safeguard Critical Infrastructure
The rapid evolution of generative AI has created countless opportunities for innovation across industry and research. As is often the case with state-of-the-art technology, this evolution has also shifted the…
Read Article
What Are Foundation Models?
Editor’s note: This article, originally published on March 13, 2023, has been updated. The mics were live and tape was rolling in the studio where the Miles Davis Quintet was…
Read Article
AI-Designed Proteins Take on Deadly Snake Venom
AI-driven medicine could deliver life-saving snakebite treatments to the world’s most vulnerable….
Read Article
NVIDIA Blackwell Now Generally Available in the Cloud
AI reasoning models and agents are set to transform industries, but delivering their full potential at scale requires massive compute and optimized software. The “reasoning” process involves multiple models, generating…
Read Article
What Is Retrieval-Augmented Generation, aka RAG?
Editor’s note: This article, originally published on Nov. 15, 2023, has been updated. To understand the latest advancements in generative AI, imagine a courtroom. Judges hear and decide cases based…
Read Article
Amphitrite Rides AI Wave to Boost Maritime Shipping, Ocean Cleanup With Real-Time Weather Prediction and Simulation
Named after Greek mythology’s goddess of the sea, France-based startup Amphitrite is fusing satellite data and AI to simulate and predict oceanic currents and weather. It’s work that’s making waves…
Read Article
AI Maps Titan’s Methane Clouds in Record Time
NVIDIA GPUs powered deep learning to decode years of Cassini data in seconds—helping researchers pioneer a smarter way to explore alien worlds….
Read Article
Fast, Low-Cost Inference Offers Key to Profitable AI
Businesses across every industry are rolling out AI services this year. For Microsoft, Oracle, Perplexity, Snap and hundreds of other leading companies, using the NVIDIA AI inference platform — a…
Read Article
Load More Articles
All NVIDIA News
Massive Foundation Model for Biomolecular Sciences Now Available via NVIDIA BioNeMo
Telcos Dial Up AI: NVIDIA Survey Unveils Industry’s AI Trends
All Systems Go: NVIDIA Engineer Takes NIMble Approach to Innovation
Physicists Tap James Webb Space Telescope to Track New Asteroids and City-Killer Rock
GeForce NOW Welcomes Warner Bros. Games to the Cloud With ‘Batman: Arkham’ Series
| https://blogs.nvidia.com.tw/blog/category/enterprise/ | 企業端 | 企業端 彙整 - NVIDIA 台灣官方部落格
企業端
Most Popular
CES 2025:NVIDIA 執行長表示 AI 正以「驚人的速度」進步
NVIDIA 創辦人暨執行長黃仁勳以…
閱讀文章
Most Popular
使用 Transformer 產生合成資料:企業資料挑戰的解決方案
GeForce NOW 聯盟 Taiwan Mobile 雲端遊戲服務給你歡樂無比的遊戲節慶時刻
揭開 NVIDIA DOCA 的神祕面紗
NVIDIA 發表「Mega」Omniverse Blueprint,打造工業機器人機群數位孿生
據資訊科技研究顧問公司 Gartner 指出,2024 年全…
閱讀文章
鴻海科技集團在美國、墨西哥和台灣設立新工廠,擴大 Blackwell 測試和生產
為了滿足目前已全面投產的 Blackwell 的需求,全球最…
閱讀文章
更快的預測:NVIDIA 推出 Earth-2 NIM 微服務, 可將更高解析度模擬的速度提高 500 倍
NVIDIA 今日於 SC24 發表了兩項全新的 NVIDI…
閱讀文章
NVIDIA 與業界軟體領導者宣布 Omniverse 即時物理數位孿生
NVIDIA 今日宣布推出 NVIDIA Omniverse…
閱讀文章
數位孿生 (digital twin) 是什麼?
走進汽車組裝廠,看到工作人員將螺帽鎖緊至螺栓,聽到氣動工具的…
閱讀文章
NVIDIA 執行長黃仁勳在日本 AI 高峰會上表示:「每個產業、每家公司、每個國家都必須推動一場新的產業革命。」
下一波的科技革命已經到來,而日本將成為其中重要的一部分。 在…
閱讀文章
日本市場創新者利用 NVIDIA AI 與 Omniverse 將實體 AI 應用於各產業
豐田汽車(Toyota)工廠裡的機器人搬運著重金屬材料。安川…
閱讀文章
NVIDIA 與軟銀加速推動日本成為全球 AI 強國
軟銀利用 NVIDIA Blackwell 架構打造全日本最…
閱讀文章
日本雲端服務領導業者建構 NVIDIA AI 基礎設施為 AI 時代進行產業轉型
NVIDIA 今日宣布日本雲端服務領導業者軟銀(SoftBa…
閱讀文章
更多文章
All NVIDIA News
擴展定律如何推動更有智慧又更強大的 AI 發展
安全至上:領先合作夥伴採用 NVIDIA 網路安全 AI 保護關鍵基礎設施
AI 帶來亮眼報酬:調查結果揭示金融業最新技術趨勢
NVIDIA 發表為代理型 AI 應用提供安全防護的 NIM 微服務
NVIDIA 攜手產業領導業者推動基因組學、藥物探索與醫療保健發展
|
https://blogs.nvidia.com/blog/author/brian-caulfield/ | Brian Caulfield | Brian Caulfield Author Page | NVIDIA Blog
Brian Caulfield
Brian Caulfield edits NVIDIA's corporate blog. Previously, he was a journalist with Forbes, Red Herring and Business 2.0. He has also written for Wired magazine.
AI-Designed Proteins Take on Deadly Snake Venom
AI-driven medicine could deliver life-saving snakebite treatments to the world’s most vulnerable….
Read Article
When the Earth Talks, AI Listens
Scientists repurpose speech recognition AI to decode seismic activity, uncovering patterns that could one day help predict earthquakes….
Read Article
AI Maps Titan’s Methane Clouds in Record Time
NVIDIA GPUs powered deep learning to decode years of Cassini data in seconds—helping researchers pioneer a smarter way to explore alien worlds….
Read Article
CES 2025: AI Advancing at ‘Incredible Pace,’ NVIDIA CEO Says
NVIDIA founder and CEO Jensen Huang kicked off CES 2025 with a 90-minute keynote that included new products to advance gaming, autonomous vehicles, robotics and agentic AI. AI is advancing…
Read Article
Tech Leader, AI Visionary, Endlessly Curious Jensen Huang to Keynote CES 2025
On Jan. 6 at 6:30 p.m. PT, NVIDIA founder and CEO Jensen Huang — with his trademark leather jacket and an unwavering vision — will step onto the CES 2025…
Read Article
AI Pioneers Win Nobel Prizes for Physics and Chemistry
Artificial intelligence, once the realm of science fiction, claimed its place at the pinnacle of scientific achievement Monday in Sweden. In a historic ceremony at Stockholm’s iconic Konserthuset, John Hopfield…
Read Article
AI Will Drive Scientific Breakthroughs, NVIDIA CEO Says at SC24
NVIDIA kicked off SC24 in Atlanta with a wave of AI and supercomputing tools set to revolutionize industries like biopharma and climate science. The announcements, delivered by NVIDIA founder and…
Read Article
‘Every Industry, Every Company, Every Country Must Produce a New Industrial Revolution,’ NVIDIA CEO Says
The next technology revolution is here, and Japan is poised to be a major part of it. At NVIDIA’s AI Summit Japan on Wednesday, NVIDIA founder and CEO Jensen Huang…
Read Article
‘India Should Manufacture Its Own AI,’ Declares NVIDIA CEO
Artificial intelligence will be the driving force behind India’s digital transformation, fueling innovation, economic growth, and global leadership, NVIDIA founder and CEO Jensen Huang said Thursday at NVIDIA’s AI Summit…
Read Article
Load More Articles
Most Popular
Massive Foundation Model for Biomolecular Sciences Now Available via NVIDIA BioNeMo
Telcos Dial Up AI: NVIDIA Survey Unveils Industry’s AI Trends
Physicists Tap James Webb Space Telescope to Track New Asteroids and City-Killer Rock
GeForce NOW Welcomes Warner Bros. Games to the Cloud With ‘Batman: Arkham’ Series
How Scaling Laws Drive Smarter, More Powerful AI
| https://blogs.nvidia.com.tw/blog/author/brian-caulfield/ | Brian Caulfield | Brian Caulfield, 作者 NVIDIA 台灣官方部落格
Brian Caulfield
Brian Caulfield edits NVIDIA's corporate blog. Previously, he was a journalist with Forbes, Red Herring and Business 2.0. He has also written for Wired magazine.
CES 2025:NVIDIA 執行長表示 AI 正以「驚人的速度」進步
NVIDIA 創辦人暨執行長黃仁勳以長達 90 分鐘的主題演…
閱讀文章
NVIDIA 執行長黃仁勳在日本 AI 高峰會上表示:「每個產業、每家公司、每個國家都必須推動一場新的產業革命。」
下一波的科技革命已經到來,而日本將成為其中重要的一部分。 在…
閱讀文章
聯想為企業帶來更智慧的 AI,NVIDIA 執行長:「我們希望實現超人類的生產力」
為加速推動企業人工智慧(AI)創新,NVIDIA 創辦人暨執…
閱讀文章
鴻海科技集團將採用 NVIDIA Blackwell 架構打造台灣最快的 AI 超級電腦
NVIDIA 與鴻海科技集團將攜手建造台灣規模最大的超級電腦…
閱讀文章
NVIDIA AI 高峰會聚焦前所未見的能源效率和 AI 驅動的創新
NVIDIA 企業平台副總裁暨總經理 Bob Pette 週…
閱讀文章
Meta 執行長 Mark Zuckerberg 告訴 NVIDIA 執行長黃仁勳,創作者將擁有個人化的 AI 助理
在 2024 年 SIGGRAPH 大會上,NVIDIA 創…
閱讀文章
NVIDIA 執行長表示:「我們為生成式人工智慧時代打造了一款處理器」
生成式人工智慧 (AI) 有望徹底改變它所觸及的每一個產業 …
閱讀文章
NVIDIA 執行長表示將把人工智慧帶入各產業
ChatGPT 才只是開始而已。 隨著如今運算技術出現他所說…
閱讀文章
模範嬰兒車:智慧嬰兒車在 CES 2023 大獲成功
當過新手爸媽的人都知道,養兒育女充滿挑戰,不僅有各種擔憂,還…
閱讀文章
Toy Jensen 獻唱《Jingle Bells》 美妙鈴聲為聖誕節揭開序幕
李以樂和李欣庭這兩位才華橫溢的歌手經常在網路上直播唱歌,有次…
閱讀文章
更多文章
|
https://blogs.nvidia.com/blog/three-computer-cosmos-ces/ | NVIDIA Enhances Three Computer Solution for Autonomous Mobility With Cosmos World Foundation Models | Autonomous vehicle (AV) development is made possible by three distinct computers:
NVIDIA DGX
systems for training the AI-based stack in the data center,
NVIDIA Omniverse
running on
NVIDIA OVX
systems for simulation and synthetic data generation, and the
NVIDIA AGX
in-vehicle computer to process real-time sensor data for safety.
Together, these purpose-built, full-stack systems enable continuous development cycles, speeding improvements in performance and safety.
At the CES trade show, NVIDIA today announced a new part of the equation:
NVIDIA Cosmos
, a platform comprising state-of-the-art generative world foundation models (WFMs), advanced tokenizers, guardrails and an accelerated video processing pipeline built to advance the development of physical AI systems such as AVs and robots.
With Cosmos added to the three-computer solution, developers gain a data flywheel that can turn thousands of human-driven miles into billions of virtually driven miles — amplifying training data quality.
“The AV data factory flywheel consists of fleet data collection, accurate 4D reconstruction and AI to generate scenes and traffic variations for training and closed-loop evaluation,” said Sanja Fidler, vice president of AI research at NVIDIA. “Using the NVIDIA Omniverse platform, as well as Cosmos and supporting AI models, developers can generate synthetic driving scenarios to amplify training data by orders of magnitude.”
“Developing physical AI models has traditionally been resource-intensive and costly for developers, requiring acquisition of real-world datasets and filtering, curating and preparing data for training,” said Norm Marks, vice president of automotive at NVIDIA. “Cosmos accelerates this process with generative AI, enabling smarter, faster and more precise AI model development for autonomous vehicles and robotics.”
Transportation leaders are using Cosmos to build physical AI for AVs, including:
Waabi
, a company pioneering generative AI for the physical world, will use Cosmos for the search and curation of video data for AV software development and simulation.
Wayve
, which is developing AI foundation models for autonomous driving, is evaluating Cosmos as a tool to search for edge and corner case driving scenarios used for safety and validation.
AV toolchain provider
Foretellix
will use Cosmos, alongside
NVIDIA Omniverse Sensor RTX APIs
, to evaluate and generate high-fidelity testing scenarios and training data at scale.
In addition, ridesharing giant
Uber
is partnering with NVIDIA to accelerate autonomous mobility. Rich driving datasets from Uber, combined with the features of the Cosmos platform and
NVIDIA DGX Cloud
, will help AV partners build stronger AI models even more efficiently.
Availability
Cosmos WFMs are now available under
an open model license
on
Hugging Face
and the
NVIDIA NGC catalog
. Cosmos models will soon be available as fully optimized
NVIDIA NIM
microservices.
Get started
with Cosmos and join
NVIDIA at CES
.
See
notice
regarding software product information.
Categories:
Driving
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
NVIDIA DGX
|
Omniverse
|
Transportation | https://blogs.nvidia.com.tw/blog/three-computer-cosmos-ces/ | NVIDIA以 Cosmos 世界基礎模型增強適用於自動駕駛的三台電腦解決方案 | 自動駕駛的發展以三台不同的電腦實現:
NVIDIA DGX
系統用於在資料中心訓練以人工智慧(AI)為基礎的堆疊,在
NVIDIA OVX
系統上運行的
NVIDIA
Omniverse
用於模擬與產生合成資料,而
NVIDIA AGX
車載電腦則用於即時處理感測器產生出的資料以確保安全。
這些專門建置的全堆疊系統共同推動持續性的開發進程,加快提高效能與安全性。
NVIDIA 今日在 CES 大會宣布此方程式又加入一個新成員:NVIDIA Cosmos。 這個平台包含最先進的生成世界基礎模型(WFM)、先進的標記器、護欄和加速影片處理管道,專為推動開發自駕車輛與機器人等實體 AI 系統而打造。
將 Cosmos 加入三台電腦的解決方案,開發人員獲得一個資料飛輪,可以將人類駕駛所累積出的數千哩的里程轉換為數十億哩的虛擬駕駛里程,提高訓練資料的品質。
NVIDIA AI 研究部門副總裁 Sanja Fidler 表示:「自動駕駛資料工廠的飛輪包括收集車隊資料、精準的 4D 重構與 AI,以產生場景與各種交通路況,用於訓練與閉環評估。開發人員使用 NVIDIA Omniverse 平台以及 Cosmos 和支援的 AI 模型,可以產生合成的行車場景,將訓練資料放大數倍。」
NVIDIA車用產品副總裁 Norm Marks 表示:「開發人員在開發實體 AI 模型的過程向來是資源密集且成本高昂的工作,需要取得真實世界的資料集,並且篩選、整理和準備訓練資料。Cosmos利用生成式 AI 加快這個過程,更聰明、快速且精確開發用於自動駕駛和機器人的 AI 模型。」
交通運輸領域領導業者使用
Cosmos
為自動駕駛建立實體 AI,包括:
Waabi
為實體世界開創生成式 AI,使用 Cosmos 搜尋和整理影片資料,用於開發和模擬自動駕駛軟體。
Wayve
開發適用於自動駕駛的 AI 基礎模型,正在評估 Cosmos,將其作為搜尋用於安全和驗證之邊緣和極端駕駛情況的工具。
自駕車工具鏈供應商
Foretellix
使用 Cosmos 與
NVIDIA Omniverse Sensor RTX API
,以評估和產生大量高擬真度的測試場景及訓練資料。
此外,乘車服務巨擘
Uber
也將與 NVIDIA 合作,加速推動開發自動駕駛移動技術。Uber 提供豐富的駕駛資料集,加上 Cosmos 平台與
NVIDIA DGX Cloud
,將協助自駕車合作夥伴更有效率地建立更強大的 AI 模型。
上市時間
Cosmos WFM現已在
Hugging Face
及
NVIDIA NGC 目錄
上以
開放模型授權
的方式提供。Cosmos模型即將以完全最佳化
NVIDIA NIM
微服務的形式提供。
開始使用
Cosmos、觀看示範,並且參加
NVIDIA 在 CES 大會的活動
。
請見有關軟體產品資訊的
通知
。
Categories:
自動駕駛
Tags:
Artificial Intelligence
|
CES 2025
|
Cosmos
|
NVIDIA DGX
|
Omniverse
|
Transportation |
End of preview.
- Data Loading
- zh-tw (Taiwan Traditional Chinese)
from datasets import load_from_disk, load_dataset

# Option 1: Load a copy previously saved with `save_to_disk` (Hugging Face's native Arrow format).
# This assumes "nvidia_blog_dataset" is a local directory containing a DatasetDict with a "train" split.
dataset = load_from_disk("nvidia_blog_dataset")
print(dataset)                 # Show the splits, column names and row counts
print(dataset['train'][0])     # Print the first record of the train split

# Option 2: Load the records from a JSON Lines export.
# The generic "json" builder places all records in a single "train" split by default.
dataset_json = load_dataset("json", data_files="nvidia_blog_dataset.jsonl")
print(dataset_json)
print(dataset_json['train'][0])
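Once loaded, it is worth confirming what the records actually contain before building on them. Below is a minimal exploration sketch, assuming only the `dataset_json` object produced by Option 2 above; it makes no assumptions about the column names and instead discovers them at runtime.

# Minimal exploration sketch: assumes `dataset_json` was created via Option 2 above.
train = dataset_json["train"]        # the "json" builder places all records in a "train" split
print(train.column_names)            # discover the available columns (URL, title and content fields)
print(train.num_rows)                # number of records in this export

# Peek at the first few records, truncating long text fields for readability
for record in train.select(range(min(3, train.num_rows))):
    for key, value in record.items():
        text = str(value)
        print(f"{key}: {text[:80]}{'...' if len(text) > 80 else ''}")

# If a tabular view is easier to scan, the split can also be converted to a pandas DataFrame
df = train.to_pandas()
print(df.shape)     # (rows, columns)
print(df.dtypes)    # column names and inferred types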
- Downloads last month: 17