Duration: Five Days

Overview

In this Generative AI course, you will learn the fundamentals of deep learning and how to train generative AI models, starting with a review of core Python concepts if needed. You will also learn about the Anaconda computing environment, importing and manipulating data with Pandas, and exploratory data analysis with Pandas and Seaborn.

Objectives

  • Understand the basics of machine learning (ML) and deep learning, including the different types of ML models, classification and regression, and neural networks.
  • Construct deep learning models for prediction, including preprocessing tabular datasets for deep learning workflows, data validation strategies, architecture modifications for managing overfitting, and regularization strategies.
  • Apply trustworthy AI frameworks for this deep learning prediction context.
  • Learn the fundamentals of generative AI, including generating new content versus analyzing existing content, example use cases, and the ethics of generative AI.
  • Implement sequential generation with recurrent neural networks (RNNs) and variational autoencoders (VAEs).
  • Build a generative adversarial network (GAN).
  • Understand transformer architectures, including the problems with recurrent architectures, attention-based architectures, positional encoding, and the Transformer model.
  • Gain an overview of current popular large language models (LLMs) and related generative AI services, such as ChatGPT, DALL-E 2, and Bing AI.
  • Learn about medium-sized LLMs that can be run in your environment, such as Stanford Alpaca and Facebook Llama.
  • Explore transfer learning with your data in these contexts.

Prerequisites

Learners should have prior experience developing deep learning models, including architectures such as feed-forward artificial neural networks, recurrent neural networks, and convolutional neural networks.

Outline for Fundamentals of Deep Learning and Development of Generative AI Models Training

  • Review of Core Python Concepts (if needed, depending on tool context)
    • Anaconda Computing Environment
    • Importing and manipulating Data with Pandas
    • Exploratory Data Analysis with Pandas and Seaborn
    • NumPy ndarrays versus Pandas Dataframes
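As a taste of the Python review topics above, a minimal sketch of Pandas-based exploration and the ndarray/DataFrame distinction (the dataset and column names are invented for illustration):

```python
import pandas as pd
import numpy as np

# Hypothetical toy dataset; columns and values are illustrative only
df = pd.DataFrame({
    "age": [25, 32, 47, 51, 38],
    "income": [48_000, 61_000, 83_000, 92_000, 70_000],
    "purchased": [0, 0, 1, 1, 0],
})

# Quick exploratory summaries
print(df.describe())                  # per-column count, mean, std, quartiles
print(df["purchased"].value_counts())

# NumPy ndarray versus Pandas DataFrame: .to_numpy() drops the labels,
# leaving a bare homogeneous array
arr = df.to_numpy()
print(type(arr), arr.shape)
```

From here, a typical next step in exploratory data analysis is plotting distributions and pairwise relationships with Seaborn.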
  • Overview of Machine Learning / Deep Learning
    • Developing predictive models with ML
    • How Deep Learning techniques have extended ML 
    • Use cases and models for ML and Deep Learning 
  • Hands on Introduction to Artificial Neural Networks (ANNs) and Deep Learning 
    • Components of Neural Network Architecture
    • Evaluate Neural Network Fit on a Known Function
    • Define and Monitor Convergence of a Neural Network
    • Evaluating Models
    • Scoring New Datasets with a Model
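The "fit a known function" exercise above can be sketched end to end in plain NumPy: a one-hidden-layer network trained by gradient descent on y = sin(x), with the loss monitored for convergence. The architecture, sizes, and learning rate here are illustrative choices, not the course's exact lab:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known target function: y = sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# One hidden layer of 16 tanh units, linear output
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(4000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)            # the quantity we monitor for convergence

    # Backward pass (chain rule, written out by hand)
    g_pred = 2 * err / len(X)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)            # tanh'(z) = 1 - tanh(z)^2
    g_W1 = X.T @ g_z
    g_b1 = g_z.sum(axis=0)

    # Gradient-descent update
    for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        p -= lr * g

print(f"final MSE: {loss:.4f}")
```

In the course labs the same ideas are expressed through a deep learning framework rather than hand-written gradients.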
  • Hands on Deep Learning Model Construction for Prediction 
    • Preprocessing Tabular Datasets for Deep Learning Workflows
    • Data Validation Strategies
    • Architecture Modifications for Managing Overfitting
    • Regularization Strategies
    • Deep Learning Classification Model example
    • Deep Learning Regression Model example 
    • Trustworthy AI Frameworks for this DL prediction context
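Two of the regularization ideas listed above, dropout and L2 weight decay, reduce to a few lines of NumPy (the shapes, dropout rate, and penalty strength are arbitrary for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
h = rng.normal(size=(4, 8))          # hidden activations: batch of 4, 8 units

# Inverted dropout: zero each unit with probability p, rescale survivors so
# the expected activation matches between training and inference
p = 0.5
mask = (rng.random(h.shape) >= p) / (1.0 - p)   # entries are 0.0 or 2.0
h_train = h * mask                   # training-time activations
h_eval = h                           # inference: no mask, no rescaling needed

# L2 (weight-decay) penalty added to the data loss before backprop
W = rng.normal(size=(8, 1))
lam = 1e-3                           # regularization strength (illustrative)
l2_penalty = lam * np.sum(W ** 2)
print(h_train.shape, float(l2_penalty))
```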
  • Generative AI fundamentals:
    • Generating new content versus analyzing existing content
    • Example use cases: text, music, artwork, code generation
    • Ethics of generative AI
  • Sequential Generation with RNN
    • Recurrent neural networks overview
    • Preparing text data
    • Setting up training samples and outputs
    • Model training with batching
    • Generating text from a trained model
    • Pros and cons of sequential generation
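The generation step above, sample a character and feed it back in, can be sketched as follows. A random-logits function stands in for the trained RNN here, so the output is gibberish by design:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = list("abcdefgh ")            # tiny illustrative character vocabulary

def sample_next(logits, temperature=1.0):
    """Draw the next character index from a softmax over model logits."""
    z = logits / temperature
    z = z - z.max()                  # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(probs), p=probs)

def fake_rnn_step(_context):
    """Stand-in for a trained RNN: just returns random logits."""
    return rng.normal(size=len(vocab))

context, out = "a", []
for _ in range(20):
    idx = sample_next(fake_rnn_step(context))
    out.append(vocab[idx])
    context = vocab[idx]             # feed the sample back in: sequential generation
print("".join(out))
```

Lowering the temperature sharpens the softmax toward the highest-scoring character; raising it makes sampling more uniform.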
  • Variational Autoencoders
    • What is an autoencoder?
    • Building a simple autoencoder from a fully connected layer
    • Sparse autoencoders
    • Deep convolutional autoencoders
    • Applications of autoencoders to image denoising
    • Sequential autoencoder
    • Variational autoencoders
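A sketch of the bottleneck idea behind the autoencoders above: for a purely linear autoencoder, the optimum coincides with PCA, so the encoder and decoder can be written in closed form via SVD rather than trained (the nonlinear autoencoders in this module are built differently):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy rank-8 data: 100 samples of 64 features that really live in 8 dimensions
X = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 64))
Xc = X - X.mean(0)                    # center the data

# The top-8 right singular vectors give the optimal linear encoder/decoder
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W_enc = Vt[:8].T                      # 64 -> 8 bottleneck (encode)
W_dec = Vt[:8]                        # 8 -> 64 (decode)

code = Xc @ W_enc                     # compressed representation
recon = code @ W_dec                  # reconstruction from the bottleneck
print(np.max(np.abs(recon - Xc)))    # near zero: rank-8 data survives an 8-dim bottleneck
```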
  • Generative Adversarial Networks
    • Model stacking
    • Adversarial examples
    • Generative and discriminative networks
    • Building a generative adversarial network 
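The adversarial objective above, in miniature: a scalar "generator" and a logistic "discriminator" are toy stand-ins, but the two loss computations are the standard discriminator loss and the non-saturating generator loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy pieces: a scalar "generator" maps noise to samples; a logistic
# "discriminator" scores samples as real (1) or fake (0)
w_g = 0.5                                   # generator parameter (illustrative)
w_d, b_d = 1.0, 0.0                         # discriminator parameters

real = rng.normal(3.0, 1.0, size=64)        # real data ~ N(3, 1)
noise = rng.normal(size=64)
fake = w_g * noise                          # generator output

d_real = sigmoid(w_d * real + b_d)          # discriminator on real samples
d_fake = sigmoid(w_d * fake + b_d)          # discriminator on fakes

# Discriminator wants real -> 1 and fake -> 0; the generator wants fake -> 1
d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
g_loss = -np.mean(np.log(d_fake))           # non-saturating generator loss
print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
```

Training alternates gradient steps on these two losses, which is the stacking of generator and discriminator covered in the module.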
  • Transformer Architectures
    • The problems with recurrent architectures
    • Attention-based architectures
    • Positional encoding
    • The Transformer: attention is all you need
    • Time series classification using transformers
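The sinusoidal positional encoding used by the Transformer is compact enough to sketch directly; each position gets a unique pattern of sines and cosines at geometrically spaced frequencies:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding as in the Transformer."""
    pos = np.arange(seq_len)[:, None]              # (seq_len, 1)
    i = np.arange(d_model)[None, :]                # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    # Even dimensions get sin, odd dimensions get cos
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

pe = positional_encoding(50, 16)
print(pe.shape)
```

These values are simply added to the token embeddings, giving the otherwise order-blind attention layers access to position.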
  • Overview of current popular large language models (LLMs) and related generative AI services:
    • ChatGPT
    • DALL-E 2
    • Bing AI
  • Medium-sized LLMs that can run in your own environment:
    • Stanford Alpaca
    • Facebook Llama
    • Transfer learning with your own data in these contexts 
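Transfer learning in miniature: freeze a pretrained backbone and train only a new head on your own data. Everything below is an illustrative toy, with a fixed random projection standing in for the pretrained layers, not the course's LLM lab:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed random projection stands in for the frozen, pretrained backbone
W_frozen = rng.normal(size=(10, 32))
def backbone(x):                         # never updated during fine-tuning
    return np.tanh(x @ W_frozen)

# Toy binary task to adapt to: the label depends on the first input feature
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(float)

# Only the new classification head is trained
w_head = np.zeros(32)
b_head = 0.0
feats = backbone(X)                      # frozen features, computed once
for _ in range(500):
    p = sigmoid(feats @ w_head + b_head)
    g = p - y                            # gradient of binary cross-entropy
    w_head -= 0.1 * (feats.T @ g) / len(X)
    b_head -= 0.1 * g.mean()

acc = np.mean((sigmoid(feats @ w_head + b_head) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

The same pattern, reuse frozen representations and fit only a small task-specific layer, is what makes adapting medium-sized LLMs to your own data tractable.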

 

Schedule

  • 01/15/2024 - 01/19/2024, 10:00 AM - 06:00 PM, Eastern Standard Time, Online Virtual Class, USD $2,995.00
  • 02/19/2024 - 02/23/2024, 10:00 AM - 06:00 PM, Eastern Standard Time, Online Virtual Class, USD $2,995.00
  • 03/25/2024 - 03/29/2024, 10:00 AM - 06:00 PM, Eastern Standard Time, Online Virtual Class, USD $2,995.00