
Onboarding: Your First Steps ⛵

Time to Onboard

Now that you have all the details, let’s get started! We’re going to do four things:

  1. Create your Hugging Face Account if it’s not already done
  2. Sign up to Discord and introduce yourself (don’t be shy 🤗)
  3. Follow the Hugging Face Agents Course on the Hub
  4. Spread the word about the course

Step 1: Create Your Hugging Face Account

(If you haven’t already) create a Hugging Face account here.

Step 2: Join Our Discord Community

👉🏻 Join our Discord server here.

When you join, remember to introduce yourself in #introduce-yourself.

We have multiple AI Agent-related channels:

  • agents-course-announcements: for the latest course information.
  • 🎓-agents-course-general: for general discussions and chitchat.
  • agents-course-questions: to ask questions and help your classmates.
  • agents-course-showcase: to show your best agents.

In addition you can check:

  • smolagents: for discussion and support with the library.

If this is your first time using Discord, we wrote a Discord 101 with best practices; check the next section.

Step 3: Follow the Hugging Face Agents Course Organization

Stay up to date with the latest course materials, updates, and announcements by following the Hugging Face Agents Course Organization.

👉 Go here and click on follow.


Step 4: Spread the word about the course

Help us make this course more visible! There are two ways you can help us:

  1. Show your support by starring ⭐ the course’s repository.
  2. Share your learning journey: let others know you’re taking this course! We’ve prepared an illustration you can use in your social media posts.

You can download the image by clicking 👉 here

Step 5: Running Models Locally with Ollama (in case you run into credit limits)

  1. Install Ollama

    Follow the official instructions here.

  2. Pull a model locally

    ollama pull qwen2:7b  # Check out the Ollama website for more models

  3. Start Ollama in the background (in one terminal)

    ollama serve

If you run into the error “listen tcp 127.0.0.1:11434: bind: address already in use”, run sudo lsof -i :11434 to identify the process ID (PID) currently using the port. If that process is ollama, the installation script has most likely already started the Ollama service for you, and you can skip starting it manually. The sketch below shows one way to check that the server is reachable.
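Before wiring up smolagents, you can quickly confirm that the local server is answering. This is a minimal sketch that assumes a default Ollama install: port 11434 and the /api/tags endpoint, which lists the models you have pulled.

    import urllib.request
    import urllib.error

    # Check that a local Ollama server is answering on its default port.
    # /api/tags lists the models pulled so far (assumes a standard install).
    try:
        with urllib.request.urlopen("http://127.0.0.1:11434/api/tags", timeout=5) as resp:
            print("Ollama is up. Local models:", resp.read().decode())
    except urllib.error.URLError:
        print("Ollama is not reachable; start it with `ollama serve`.")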

  4. Use LiteLLMModel Instead of HfApiModel

    To use the LiteLLMModel class from smolagents, install the litellm extra with pip:

    pip install smolagents[litellm]

    Then create the model, pointing it at the local Ollama server:

    from smolagents import LiteLLMModel

    model = LiteLLMModel(
        model_id="ollama_chat/qwen2:7b",  # Or try other Ollama-supported models
        api_base="http://127.0.0.1:11434",  # Default Ollama local server
        num_ctx=8192,  # Context window size in tokens
    )
  5. Why does this work?

  • Ollama serves models locally using an OpenAI-compatible API at http://localhost:11434.
  • LiteLLMModel is built to communicate with any model that supports the OpenAI chat/completions API format.
  • This means you can simply swap out HfApiModel for LiteLLMModel, with no other code changes required. It’s a seamless, plug-and-play solution (see the sketch below).
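To see the swap end to end, here is a minimal sketch that wires the LiteLLMModel above into a smolagents CodeAgent; the bare-bones agent setup and the prompt are illustrative examples, not part of the course code.

    from smolagents import CodeAgent, LiteLLMModel

    # Same local model configuration as above (served by Ollama).
    model = LiteLLMModel(
        model_id="ollama_chat/qwen2:7b",
        api_base="http://127.0.0.1:11434",
        num_ctx=8192,
    )

    # A bare-bones agent with no extra tools, just to verify the local setup.
    agent = CodeAgent(tools=[], model=model)
    print(agent.run("Give me one sentence about AI agents."))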

Congratulations! 🎉 You’ve completed the onboarding process! You’re now ready to start learning about AI Agents. Have fun!

Keep learning, stay awesome 🤗
