NV-ELLM

Efficient Large Language Model (LLM) Customization Training

Enterprises handle language-related tasks daily, such as text classification, content generation, sentiment analysis, and customer chat support. Large language models can automate these tasks, enabling enterprises to enhance operations, reduce costs, and boost productivity. In this course, you'll go beyond using out-of-the-box pretrained LLMs and learn a variety of techniques to efficiently customize pretrained LLMs for your specific use cases, without the computationally intensive and expensive process of pretraining your own model or fine-tuning all of a model's internal weights. Using the open-source NVIDIA NeMo™ framework, you'll learn prompt engineering and various parameter-efficient fine-tuning methods to customize LLM behavior for your organization.
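For a taste of what prompt engineering involves, the sketch below assembles a few-shot prompt for sentiment classification in plain Python. It is illustrative only and not taken from the course materials: build_prompt and the example reviews are hypothetical, and the finished prompt would be sent to whatever pretrained LLM you query.

    # Minimal few-shot prompt-engineering sketch (plain Python, no libraries).
    # Labeled examples are prepended so the model can infer the task in-context.

    FEW_SHOT_EXAMPLES = [
        ("The delivery was late and the box was damaged.", "negative"),
        ("Setup took two minutes and everything just worked.", "positive"),
    ]

    def build_prompt(review: str) -> str:
        """Assemble an instruction, labeled examples, and the new review."""
        lines = ["Classify the sentiment of each customer review as positive or negative.", ""]
        for text, label in FEW_SHOT_EXAMPLES:
            lines += [f"Review: {text}", f"Sentiment: {label}", ""]
        lines += [f"Review: {review}", "Sentiment:"]  # the model completes this line
        return "\n".join(lines)

    print(build_prompt("Support never answered my emails."))
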
Course Details

Duration

1 day

Prerequisites

  • Professional experience with the Python programming language
  • Familiarity with fundamental deep learning topics like model architecture, training, and inference
  • Familiarity with a modern Python-based deep learning framework (PyTorch preferred)
  • Familiarity working with out-of-the-box pretrained LLMs

Skills Gained

  • Use prompt engineering to improve the performance of pretrained LLMs
  • Apply various fine-tuning techniques with limited data to accomplish tasks specific to your use cases (a minimal sketch follows this list)
  • Use a single pretrained model to perform multiple custom tasks
  • Leverage the NeMo framework to customize models like GPT, Llama 2, and Falcon with ease
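
Parameter-efficient fine-tuning keeps the pretrained weights frozen and trains only a small set of new parameters. As a rough, framework-agnostic illustration of that idea, the sketch below wraps a linear layer with a trainable low-rank (LoRA-style) update in plain PyTorch; it is a toy, not the NeMo API used in class, and the class name LoRALinear and the hyperparameters are hypothetical.

    # LoRA-style PEFT sketch in plain PyTorch (illustrative only; the course
    # uses the NeMo framework, whose API differs). The frozen base weight W
    # is augmented with a trainable low-rank update scaled by alpha / r.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)  # freeze pretrained weights
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Base projection plus the low-rank correction (x A^T) B^T.
            return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

    layer = LoRALinear(nn.Linear(1024, 1024))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable params: {trainable} of {total}")  # 16384 of 1065984
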
Course Outline

  • Introduction
  • Engineering Effective Prompts
  • Customized Prompt Learning
  • Parameter-Efficient Fine-Tuning (PEFT) and Supervised Fine-Tuning (SFT)
  • Assessment and Q&A