The stream starts at 11:11. Thanks for tuning in!
In this livestream, Cami will be walking through how to use AWS SageMaker to connect your web application to a PyTorch model. You can follow along with the blog post here: developers.facebook.com/blog/post/2020/08/03/connecting-web-app-pytorch-model-using-amazon-sagemaker
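For anyone skimming before the stream, here is a minimal sketch of what the deployment step looks like with the SageMaker Python SDK. The S3 path, IAM role ARN, entry-point script, and instance type are placeholders, not values from the stream.

# Minimal sketch (assumptions, not the exact code from the stream): deploy a
# trained PyTorch model to a SageMaker real-time endpoint with the SageMaker
# Python SDK.
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    model_data="s3://your-bucket/model/model.tar.gz",     # placeholder: packaged model artifact
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder: SageMaker execution role
    entry_point="inference.py",    # placeholder inference handler (model_fn, predict_fn, ...)
    framework_version="1.6.0",
    py_version="py3",
)

# Create a real-time HTTPS endpoint that the web app's backend can call.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

print(predictor.endpoint_name)  # used later when wiring up Lambda / the REST API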
------- Social Accounts -------
Cami's Twitter: twitter.com/cwillycs
P.S. We're hiring! facebook.com/careers/jobs/149461039538182/
Facebook Open Source Twitter: twitter.com/fbOpenSource
Facebook Open Source Facebook Page: facebook.com/fbOpenSource/
Facebook Open Source website: opensource.facebook.com/
------- Related Videos -------
- [Livestream] Connecting your web app to a PyTorch Model using AWS SageMaker: Deploy your ML model
- [Livestream] Connecting your web app to a PyTorch Model using AWS SageMaker: Build web client P.1
- [Livestream] Connecting your web app to a PyTorch Model using AWS SageMaker: Build REST API
- End To End Machine Learning Project Implementation Using AWS Sagemaker
- [Livestream] Connecting your web app to a PyTorch Model using AWS SageMaker: Connect model to Lambda
- [Livestream] Connecting your web app to a PyTorch Model using AWS SageMaker: Build web client P.2
- Deploy ML model in 10 minutes. Explained
- Build train and deploy model in sagemaker | sagemaker tutorial | sagemaker pipeline
- AWS re:Invent 2020: Deploying PyTorch models for inference using TorchServe
- Deploying your ML Model with TorchServe
- How to host and scale Pytorch Models on AWS
- how to deploy pytorch model to production
- AWS Tutorials: Deploy Machine Learning Model API on AWS EC2 (Permanent Running)
- Deploy ML models with FastAPI, Docker, and Heroku | Tutorial
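The later sessions in the series build a REST API and connect the model to Lambda. As a rough companion to those parts, here is a minimal sketch of a Lambda handler that forwards a web request to the deployed endpoint; the endpoint name, environment variable, and payload shape are assumptions, not the stream's exact code.

# Minimal sketch (assumptions): a Lambda handler behind API Gateway that
# forwards the web client's JSON request to the SageMaker endpoint.
import os
import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = os.environ.get("ENDPOINT_NAME", "pytorch-web-app-endpoint")  # placeholder

def lambda_handler(event, context):
    # With an API Gateway proxy integration, the request body arrives as a JSON string.
    body = event.get("body") or "{}"

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=body,
    )

    # Return the endpoint's prediction to the web client unchanged.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": response["Body"].read().decode("utf-8"),
    }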