Learn how to combine parameter-efficient fine-tuning (PEFT) with reinforcement learning from human feedback (RLHF) for lightweight detoxification of an LLM on a single GPU.
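The core idea behind one common PEFT method, LoRA, can be sketched in plain NumPy: instead of updating the full weight matrix, train a small low-rank pair of factors and add their scaled product to the frozen weights. Dimensions and names below are illustrative, not from any specific library.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8          # full dimensions vs. low rank (illustrative sizes)
alpha = 16                     # LoRA scaling hyperparameter

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # B starts at zero, so W is unchanged at init

def adapted_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but the second
    # full-size matrix is never materialized during training.
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

x = rng.standard_normal((2, k))
# With B = 0, the adapter is a no-op: output matches the frozen model.
assert np.allclose(adapted_forward(x), x @ W.T)

# Trainable parameters: r * (d + k) instead of d * k for full fine-tuning.
print(r * (d + k), "trainable vs.", d * k, "full")
```

Only `A` and `B` receive gradients, which is what makes single-GPU fine-tuning of a large model feasible.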
Build an end-to-end pipeline to fine-tune and deploy a generative large language model using Amazon SageMaker, including supervised fine-tuning (SFT), PEFT, and RLHF workflows.
Deep dive into Ray for distributed, high-performance ML workloads, with tips for optimizing on AWS and a demo using Ray AI Runtime (AIR) to train BERT models on Amazon EKS.
Build a complete AI/ML pipeline for NLP with SageMaker: data ingestion, feature engineering, model training, hyperparameter tuning, and deployment.
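As a toy illustration of the hyperparameter-tuning step in such a pipeline (not SageMaker's API; the data and grid here are invented), a grid search evaluates each candidate configuration on held-out data and keeps the best:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression data standing in for engineered features.
X = rng.standard_normal((200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.standard_normal(200) * 0.1

X_train, X_val = X[:150], X[150:]
y_train, y_val = y[:150], y[150:]

def fit_ridge(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Grid search over the regularization strength, scored on validation MSE.
best_lam, best_mse = None, float("inf")
for lam in [0.001, 0.01, 0.1, 1.0, 10.0]:
    w = fit_ridge(X_train, y_train, lam)
    mse = np.mean((X_val @ w - y_val) ** 2)
    if mse < best_mse:
        best_lam, best_mse = lam, mse

print(f"best lambda={best_lam}, val MSE={best_mse:.4f}")
```

Managed tuners apply the same evaluate-and-compare loop at scale, typically replacing the exhaustive grid with Bayesian search over many parallel training jobs.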
Demystifying quantum computing and quantum machine learning. Learn about quantum algorithms and how hybrid quantum-classical approaches solve today's problems.
Get started with Kubeflow Pipelines on AWS and integrate SageMaker features for data labeling, distributed training, and scalable model deployment.
Discover how SageMaker Autopilot automatically inspects data, picks algorithms, trains models, and provides complete visibility into the model creation process.
Demonstrate human-in-the-loop workflows using Amazon A2I, where AI handles routine predictions and humans focus on complex edge cases.
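The routing logic at the heart of a human-in-the-loop workflow is simple: predictions above a confidence threshold pass through automatically, and the rest are queued for human review. This is a generic sketch, not the A2I API; the threshold and record structure are illustrative.

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative; tuned per use case in practice

def route_prediction(prediction: dict) -> str:
    """Return 'auto' for high-confidence predictions, 'human' otherwise."""
    if prediction["confidence"] >= CONFIDENCE_THRESHOLD:
        return "auto"
    return "human"  # enqueue for a human review task

predictions = [
    {"label": "invoice", "confidence": 0.97},
    {"label": "receipt", "confidence": 0.62},   # ambiguous edge case
    {"label": "invoice", "confidence": 0.91},
]

queues = {"auto": [], "human": []}
for p in predictions:
    queues[route_prediction(p)].append(p)

print(len(queues["auto"]), "automated;", len(queues["human"]), "sent to humans")
# → 2 automated; 1 sent to humans
```

In a managed setup, the "human" branch would create a review task whose result flows back to correct the prediction and, optionally, retrain the model.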
Control and optimize your SageMaker costs across notebooks, training jobs, hyperparameter tuning, batch predictions, and GPU usage.
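A first step in controlling costs is knowing what each workload costs. A back-of-the-envelope model can be sketched as below; the hourly rates are hypothetical placeholders, not current AWS pricing.

```python
# Hypothetical hourly rates -- look up real prices for your region/instance type.
HOURLY_RATES = {
    "ml.t3.medium": 0.05,    # notebook instance (placeholder figure)
    "ml.p3.2xlarge": 3.825,  # GPU training (placeholder figure)
    "ml.m5.large": 0.115,    # batch transform (placeholder figure)
}

def job_cost(instance_type: str, hours: float, count: int = 1) -> float:
    """Estimated cost of running `count` instances for `hours` each."""
    return HOURLY_RATES[instance_type] * hours * count

# e.g. a 20-trial hyperparameter tuning job at 0.5 GPU-hours per trial:
tuning = job_cost("ml.p3.2xlarge", hours=0.5, count=20)
# ...and a notebook left running for a month of working hours:
notebook = job_cost("ml.t3.medium", hours=160)

print(f"tuning: ${tuning:.2f}, notebook: ${notebook:.2f}")
```

Even this crude model makes trade-offs visible, such as whether Spot capacity or fewer tuning trials yields the bigger saving.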
Deploy models to production and monitor performance degradation in real-time using SageMaker Model Monitor and automated alerting.
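The core of drift monitoring is comparing live-traffic statistics against a training-time baseline. A minimal sketch of that kind of check follows; the threshold and sample values are invented for illustration, and real monitors use richer per-feature statistics.

```python
import statistics

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Absolute shift in mean, scaled by the baseline standard deviation."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    return abs(statistics.mean(live) - base_mean) / base_std

DRIFT_THRESHOLD = 3.0  # illustrative: alert beyond ~3 baseline std devs

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # captured at training time
live_ok = [10.1, 10.4, 9.9]                     # recent traffic, in range
live_shifted = [15.0, 16.2, 14.8]               # distribution has moved

assert drift_score(baseline, live_ok) < DRIFT_THRESHOLD
assert drift_score(baseline, live_shifted) > DRIFT_THRESHOLD  # would fire an alert
print("drift checks behave as expected")
```

Running this comparison on a schedule and wiring the alert branch to a notification service is essentially what a managed monitor automates.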
Analyze documents and derive insights from text data sources using Amazon Comprehend for NLP and Amazon Kendra for enterprise search.
Query data across your data warehouse, data lake, and operational databases using Amazon Redshift's lake house architecture.