Keywords - AWS Cloud, AutoML, MLOps, A/B testing, End-to-End Model Deployment, NLP, BERT, Statistical Biases
Pre-requisites - Deep Learning & AWS Cloud Technical Essentials specialization, Machine Learning, Python, Statistics
Certificates - certificate1, certificate2, certificate3, certificate4
This specialization focused on -
Preparing data, detecting statistical biases, performing feature engineering at scale to train models, and evaluating and fine-tuning models with AutoML.
Building, deploying, monitoring, and operationalizing end-to-end machine learning pipelines.
Storing and managing ML features using a feature store. Debugging, profiling, tuning, and evaluating models while tracking data lineage and model artifacts.
Building data labeling and human-in-the-loop pipelines to improve model performance with human intelligence.
The repository contains the following notebooks completed as part of the specialization -
PART-1
Registering and visualizing dataset
Detecting Data Bias with Amazon SageMaker Clarify
Training ML models using Amazon SageMaker Autopilot
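SageMaker Clarify reports pre-training bias metrics such as class imbalance (CI) and difference in positive proportions in labels (DPL). As a rough illustration of what those metrics measure, here is a minimal plain-Python sketch that computes them directly from lists, without the Clarify API (the function names and the tiny dataset are illustrative, not from the notebooks):

```python
from collections import Counter

def class_imbalance(facet_values, advantaged):
    """Class imbalance CI = (n_a - n_d) / (n_a + n_d), where n_a counts
    rows in the advantaged facet group and n_d all other rows."""
    counts = Counter(facet_values)
    n_a = counts[advantaged]
    n_d = len(facet_values) - n_a
    return (n_a - n_d) / (n_a + n_d)

def dpl(facet_values, labels, advantaged, positive_label):
    """Difference in positive proportions in labels: DPL = q_a - q_d,
    the gap in positive-label rates between the two facet groups."""
    pos_a = sum(1 for f, y in zip(facet_values, labels)
                if f == advantaged and y == positive_label)
    pos_d = sum(1 for f, y in zip(facet_values, labels)
                if f != advantaged and y == positive_label)
    n_a = sum(1 for f in facet_values if f == advantaged)
    n_d = len(facet_values) - n_a
    return pos_a / n_a - pos_d / n_d

facets = ["A", "A", "A", "B"]
labels = [1, 1, 0, 0]
print(class_imbalance(facets, "A"))  # (3 - 1) / (3 + 1) = 0.5
print(dpl(facets, labels, "A", 1))   # 2/3 - 0/1 ≈ 0.667
```

Values near zero indicate a balanced facet; large positive or negative values flag imbalance that Clarify would surface in its bias report.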
PART-2
Training a text classifier using Amazon SageMaker BlazingText built-in algorithm
Feature transformation with Amazon SageMaker processing job and Feature Store
Building a SageMaker Pipeline to train and deploy BERT-Based text classifier
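BlazingText in supervised mode reads plain-text training files where each line starts with a `__label__<label>` prefix followed by the tokenized text. A small sketch of that preprocessing step (the lowercase/split tokenizer here is a simple stand-in, not the tokenizer used in the course notebooks):

```python
def to_blazingtext(records):
    """Convert (label, text) pairs into BlazingText supervised format:
    one example per line, '__label__<label>' followed by the tokens."""
    lines = []
    for label, text in records:
        tokens = text.lower().split()  # naive tokenizer (assumption)
        lines.append(f"__label__{label} " + " ".join(tokens))
    return "\n".join(lines)

reviews = [(1, "Great product"), (0, "Did not work")]
print(to_blazingtext(reviews))
# __label__1 great product
# __label__0 did not work
```

The resulting file is uploaded to S3 and passed to the BlazingText estimator as its training channel.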
PART-3
Model Optimization using automatic model tuning
A/B testing, traffic shifting and autoscaling
Data labeling and human-in-the-loop pipelines with Amazon Augmented AI
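A/B testing on a SageMaker endpoint works by assigning traffic weights to production variants, then gradually shifting the weights toward the winner. A minimal simulation of that weighted routing in plain Python (no AWS calls; the variant names and 90/10 split are illustrative):

```python
import random

def route(variants, weights, rng):
    """Pick a variant with probability proportional to its weight,
    mimicking how an endpoint splits traffic across production variants."""
    return rng.choices(variants, weights=weights, k=1)[0]

rng = random.Random(42)
variants = ["variant-a", "variant-b"]
counts = {v: 0 for v in variants}
for _ in range(10_000):
    counts[route(variants, [0.9, 0.1], rng)] += 1
# with a 90/10 split, roughly 9,000 requests land on variant-a
print(counts)
```

Traffic shifting is then just updating the weight list over time (e.g. 90/10 → 50/50 → 0/100) while monitoring each variant's metrics, which is what the endpoint's variant weights control in the real deployment.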