---
title: README
emoji: 🐢
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
<div class="grid lg:grid-cols-3 gap-x-4 gap-y-7">
<p class="lg:col-span-3">
Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
</p>
<a
href="https://huggingface.co/blog/intel"
class="block overflow-hidden group"
>
<div
class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
>
<img
alt=""
src="https://cdn-media.huggingface.co/marketing/intel-page/Intel-Hugging-Face-alt-version2-org-page.png"
class="w-40"
/>
</div>
<div class="underline">Learn more about the Hugging Face collaboration with Intel AI</div>
</a>
<a
href="https://github.com/huggingface/optimum"
class="block overflow-hidden group"
>
<div
class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
>
<img
alt=""
src="/blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png"
class="w-40"
/>
</div>
<div class="underline">Quantize Transformers with Intel® Neural Compressor and Optimum</div>
</a>
<a href="https://huggingface.co/blog/generative-ai-models-on-intel-cpu" class="block overflow-hidden group">
<div
class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
>
<img
alt=""
src="/blog/assets/143_q8chat/thumbnail.png"
class="w-40"
/>
</div>
<div class="underline">Quantizing a 7B LLM on an Intel CPU</div>
</a>
<div class="lg:col-span-3">
<p class="mb-2">
Intel optimizes widely adopted and innovative AI software
tools, frameworks, and libraries for Intel® architecture. Whether
you are computing locally or deploying AI applications at massive
scale, your organization can achieve peak performance with AI
software optimized for Intel® Xeon® Scalable platforms.
</p>
<p class="mb-2">
Intel’s engineering collaboration with Hugging Face delivers state-of-the-art hardware and software acceleration for training, fine-tuning, and inference with Transformers.
</p>
<h3>Useful Resources:</h3>
<ul>
<li class="ml-6"><a href="https://huggingface.co/hardware/intel" class="underline" data-ga-category="intel-org" data-ga-action="clicked partner page" data-ga-label="partner page">Intel AI + Hugging Face partner page</a></li>
<li class="ml-6"><a href="https://github.com/IntelAI" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel ai github" data-ga-label="intel ai github">Intel AI GitHub</a></li>
<li class="ml-6"><a href="https://www.intel.com/content/www/us/en/developer/partner/hugging-face.html" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel partner page" data-ga-label="intel partner page">Developer Resources from Intel and Hugging Face</a></li>
</ul>
</div>
<div class="lg:col-span-3">
<h1>Get Started</h1>
<h3>1. Intel Acceleration Libraries</h3>
<p class="mb-2">
To get started with Intel hardware and software optimizations, download and install the Optimum Intel
and Intel® Extension for Transformers libraries. The following documentation explains how to install and use them:
</p>
<ul>
<li class="ml-6"><a href="https://github.com/huggingface/optimum-intel#readme" class="underline" data-ga-category="intel-org" data-ga-action="clicked optimum intel" data-ga-label="optimum intel">🤗 Optimum Intel library</a></li>
<li class="ml-6"><a href="https://github.com/intel/intel-extension-for-transformers#readme" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel extension for transformers" data-ga-label="intel extension for transformers">Intel® Extension for Transformers</a></li>
</ul>
<p class="mb-2">
The Optimum Intel library primarily provides hardware acceleration, while the Intel® Extension
for Transformers focuses more on software acceleration. Install both to achieve the best
performance and productivity gains for transfer learning and fine-tuning with Hugging Face.
</p>
<h3>2. Find Your Model</h3>
<p class="mb-2">
Next, find your desired model (and dataset) using the search box at the top left of Hugging Face’s website.
Add “Intel” to your query to narrow the results to models pretrained by Intel.
</p>
<img
alt=""
src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-model_search.png"
style="margin:auto;transform:scale(0.8);"
/>
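Model discovery can also be scripted with the <code>huggingface_hub</code> client library, a minimal sketch assuming network access to the Hub:

```python
from huggingface_hub import HfApi

# List a few models published under the Intel organization on the Hub,
# sorted by download count (requires network access).
api = HfApi()
models = api.list_models(author="Intel", sort="downloads", limit=5)
for model in models:
    print(model.id)
```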
<h3>3. Read Through the Demo, Dataset, and Quick-Start Commands</h3>
<p class="mb-2">
On the model’s page (called a “Model Card”) you will find a description, usage information, an embedded
inference demo, and a link to the associated dataset. In the upper right of the page, click “Use in Transformers”
for code hints on importing the model into your own workspace with an established Hugging Face pipeline and tokenizer.
</p>
<img
alt=""
src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-use_transformers.png"
style="margin:auto;transform:scale(0.8);"
/>
<img
alt=""
src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-quickstart.png"
style="margin:auto;transform:scale(0.8);"
/>
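The “Use in Transformers” snippet typically boils down to a single <code>pipeline</code> call. A sketch, using <code>Intel/dynamic_tinybert</code> (one of Intel’s published question-answering models) purely for illustration; the first run downloads the model weights from the Hub:

```python
from transformers import pipeline

# Load an Intel-pretrained model from the Hub (downloads weights on first use).
qa = pipeline("question-answering", model="Intel/dynamic_tinybert")

result = qa(
    question="What does Optimum Intel accelerate?",
    context="Optimum Intel accelerates training and inference of Transformers models on Intel hardware.",
)
print(result["answer"])
```

Any model ID found in step 2 can be swapped in the same way, as long as the pipeline task matches the model’s task.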
</div>
</div> | |