---
title: README
emoji: 🐢
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
<div class="grid lg:grid-cols-3 gap-x-4 gap-y-7">
<p class="lg:col-span-3">
Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
</p>
<a
href="https://huggingface.co/blog/intel"
class="block overflow-hidden group"
>
<div
class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
>
<img
alt=""
src="https://cdn-media.huggingface.co/marketing/intel-page/Intel-Hugging-Face-alt-version2-org-page.png"
class="w-40"
/>
</div>
<div class="underline">Learn more about the Hugging Face collaboration with Intel AI</div>
</a>
<a
href="https://github.com/huggingface/optimum"
class="block overflow-hidden group"
>
<div
class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
>
<img
alt=""
src="/blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png"
class="w-40"
/>
</div>
<div class="underline">Quantize Transformers with Intel® Neural Compressor and Optimum</div>
</a>
<a href="https://huggingface.co/blog/generative-ai-models-on-intel-cpu" class="block overflow-hidden group">
<div
class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
>
<img
alt=""
src="/blog/assets/143_q8chat/thumbnail.png"
class="w-40"
/>
</div>
<div class="underline">Quantizing a 7B LLM on an Intel CPU</div>
</a>
<div class="lg:col-span-3">
<p class="mb-2">
Intel optimizes widely adopted and innovative AI software
tools, frameworks, and libraries for Intel® architecture. Whether
you are computing locally or deploying AI applications on a massive
scale, your organization can achieve peak performance with AI
software optimized for Intel® Xeon® Scalable platforms.
</p>
<p class="mb-2">
Intel’s engineering collaboration with Hugging Face offers state-of-the-art hardware and software acceleration to train, fine-tune, and predict with Transformers.
</p>
<h3>Useful Resources:</h3>
<ul>
<li class="ml-6"><a href="https://huggingface.co/hardware/intel" class="underline" data-ga-category="intel-org" data-ga-action="clicked partner page" data-ga-label="partner page">Intel AI + Hugging Face partner page</a></li>
<li class="ml-6"><a href="https://github.com/IntelAI" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel ai github" data-ga-label="intel ai github">Intel AI GitHub</a></li>
<li class="ml-6"><a href="https://www.intel.com/content/www/us/en/developer/partner/hugging-face.html" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel partner page" data-ga-label="intel partner page">Developer Resources from Intel and Hugging Face</a></li>
</ul>
<p> </p>
</div>
<div class="lg:col-span-3">
<h1>Get Started</h1>
<h3>1. Intel Acceleration Libraries</h3>
<p class="mb-2">
To get started with Intel hardware and software optimizations, download and install the Optimum Intel
and Intel® Extension for Transformers libraries. Follow these documents to learn how to install and use these libraries:
</p>
<ul>
<li class="ml-6"><a href="https://github.com/huggingface/optimum-intel#readme" class="underline" data-ga-category="intel-org" data-ga-action="clicked optimum intel" data-ga-label="optimum intel">🤗 Optimum Intel library</a></li>
<li class="ml-6"><a href="https://github.com/intel/intel-extension-for-transformers#readme" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel extension for transformers" data-ga-label="intel extension for transformers">Intel® Extension for Transformers</a></li>
</ul>
<p class="mb-2">
The Optimum Intel library primarily provides hardware acceleration, while the Intel® Extension
for Transformers focuses more on software acceleration. Install both to achieve ideal
performance and productivity gains in transfer learning and fine-tuning with Hugging Face.
</p>
<h3>2. Find Your Model</h3>
<p class="mb-2">
Next, find your desired model (and dataset) using the search box at the top left of the Hugging Face website.
Include “intel” in your query to narrow the results to models pretrained by Intel.
</p>
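<p class="mb-2">
The same search can also be done programmatically; a minimal sketch using the huggingface_hub client library (assumes it is installed and you have network access):
</p>

```python
# List a few models published under the Intel organization on the Hub.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(author="Intel", limit=5):
    print(model.id)  # repo ids of the form "Intel/<model-name>"
```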
<img
alt=""
src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-model_search.png"
style="margin:auto;transform:scale(0.8);"
/>
<h3>3. Read Through the Demo, Dataset, and Quick-Start Commands</h3>
<p class="mb-2">
On the model’s page (called a “Model Card”) you will find a description, usage information, an embedded
inference demo, and the associated dataset. In the upper right of the page, click “Use in Transformers”
for code hints on how to import the model into your own workspace with an established Hugging Face pipeline and tokenizer.
</p>
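<p class="mb-2">
Those code hints generally follow the standard Transformers pipeline pattern; a minimal sketch (the model id below is a placeholder; substitute the id from your chosen Model Card):
</p>

```python
# Sketch of the quick-start pattern surfaced by "Use in Transformers".
# Requires `pip install transformers` plus a backend such as PyTorch.
from transformers import AutoModel, AutoTokenizer, pipeline

model_id = "Intel/your-chosen-model"  # placeholder, not a real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
pipe = pipeline("feature-extraction", model=model, tokenizer=tokenizer)
```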
<img
alt=""
src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-use_transformers.png"
style="margin:auto;transform:scale(0.8);"
/>
<img
alt=""
src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-quickstart.png"
style="margin:auto;transform:scale(0.8);"
/>
</div>
</div>