
Implementing MLOps in the Enterprise

A Production-First Approach

Paperback, English, 2023, 1st edition, 9781098136581

Summary

With demand for scaling, real-time access, and other capabilities, businesses need to consider building operational machine learning pipelines. This practical guide helps your company bring data science to life for different real-world MLOps scenarios. Senior data scientists, MLOps engineers, and machine learning engineers will learn how to tackle challenges that prevent many businesses from moving ML models to production.

Authors Yaron Haviv and Noah Gift take a production-first approach. Rather than beginning with the ML model, you'll learn how to design a continuous operational pipeline, while making sure that various components and practices can map into it. By automating as many components as possible, and making the process fast and repeatable, your pipeline can scale to match your organization's needs.

You'll learn how to provide rapid business value while meeting dynamic MLOps requirements.
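To make the production-first idea above more concrete, here is a minimal, framework-agnostic sketch (an illustration, not code from the book): each stage of the flow is a small function, and a single run_pipeline call chains them so the whole flow stays automated and repeatable. All step and function names are assumptions for illustration.

# Illustrative, framework-agnostic sketch only; step names are assumptions,
# not code from the book.
def ingest(state):
    state["data"] = [0.2, 0.5, 0.9, 0.4]                 # stand-in for data collection
    return state

def prepare(state):
    state["features"] = [x * 10 for x in state["data"]]  # stand-in for feature engineering
    return state

def train(state):
    state["model"] = sum(state["features"]) / len(state["features"])  # toy "model"
    return state

def deploy(state):
    print("deploying model:", state["model"])            # stand-in for a serving step
    return state

def run_pipeline(steps):
    """Run each step in order so the flow stays automated and repeatable."""
    state = {}
    for step in steps:
        state = step(state)
    return state

run_pipeline([ingest, prepare, train, deploy])

In a real project, each stand-in step would be replaced by the corresponding component the book covers, such as a feature store for data preparation and a serving framework for deployment.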

This book will help you:
- Learn the MLOps process, including its technological and business value
- Build and structure effective MLOps pipelines
- Efficiently scale MLOps across your organization
- Explore common MLOps use cases
- Build MLOps pipelines for hybrid deployments, real-time predictions, and composite AI
- Learn how to prepare for and adapt to the future of MLOps
- Effectively use pre-trained models from providers such as Hugging Face and OpenAI to complement your MLOps strategy (see the sketch below)
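As a small illustration of the last bullet (this sketch is not from the book, and the classify_texts wrapper is an assumption), a pre-trained Hugging Face model can be loaded through the transformers pipeline API and wrapped in a simple serving function:

# Minimal sketch: serve an off-the-shelf Hugging Face model.
# Requires: pip install transformers torch
# The classify_texts wrapper is an illustrative assumption, not the book's code.
from transformers import pipeline

# Download and cache a pre-trained sentiment-analysis model.
sentiment = pipeline("sentiment-analysis")

def classify_texts(texts):
    """Return a label and score for each input string."""
    return sentiment(list(texts))

if __name__ == "__main__":
    print(classify_texts(["The rollout went smoothly.", "The model keeps timing out."]))

A wrapper like this is the kind of component that can then be versioned, monitored, and deployed through the MLOps pipelines the book describes.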

Specifications

ISBN13: 9781098136581
Keywords: machine learning
Language: English
Binding: paperback
Number of pages: 350
Publisher: O'Reilly
Edition: 1
Publication date: 31-10-2023
Main category: IT management / ICT

About Noah Gift

Noah Gift is a lecturer and consultant at the UC Davis Graduate School of Management in the MSBA program. Professionally, Noah has approximately 20 years' experience programming in Python and is a member of the Python Software Foundation. He has worked for a variety of companies in roles including CTO, general manager, consulting CTO, and cloud architect. Currently, he consults with start-ups and other companies on machine learning and cloud architecture, and provides CTO-level consulting via Noah Gift Consulting. He has published close to 100 technical publications, including two books, on subjects ranging from cloud machine learning to DevOps. He is also a certified AWS Solutions Architect. Noah has an MBA from the University of California, Davis; an MS in computer information systems from California State University, Los Angeles; and a BS in nutritional science from Cal Poly, San Luis Obispo. You can find out more about Noah by following him on GitHub (https://github.com/noahgift/), visiting http://noahgift.com, or connecting with him on LinkedIn (https://www.linkedin.com/in/noahgift/).

Table of Contents

Preface
Who This Book Is For
Navigating This Book
Conventions Used in This Book
Using Code Examples
O’Reilly Online Learning
How to Contact Us
Acknowledgments
Yaron
Noah

1. MLOps: What Is It and Why Do We Need It?
What Is MLOps?
MLOps in the Enterprise
Understanding ROI in Enterprise Solutions
Understanding Risk and Uncertainty in the Enterprise
MLOps Versus DevOps
What Isn’t MLOps?
Mainstream Definitions of MLOps
What Is ML Engineering?
MLOps and Business Incentives
MLOps in the Cloud
Key Cloud Development Environments
The Key Players in Cloud Computing
MLOps On-Premises
MLOps in Hybrid Environments
Enterprise MLOps Strategy
Conclusion
Critical Thinking Discussion Questions
Exercises

2. The Stages of MLOps
Getting Started
Choose Your Algorithm
Design Your Pipelines
Data Collection and Preparation
Data Storage and Ingestion
Data Exploration and Preparation
Data Labeling
Feature Stores
Model Development and Training
Writing and Maintaining Production ML Code
Tracking and Comparing Experiment Results
Distributed Training and Hyperparameter Optimization
Building and Testing Models for Production
Deployment (and Online ML Services)
From Model Endpoints to Application Pipelines
Online Data Preparation
Continuous Model and Data Monitoring
Monitoring Data and Concept Drift
Monitoring Model Performance and Accuracy
The Strategy of Pretrained Models
Building an End-to-End Hugging Face Application
Flow Automation (CI/CD for ML)
Conclusion
Critical Thinking Discussion Questions
Exercises

3. Getting Started with Your First MLOps Project
Identifying the Business Use Case and Goals
Finding the AI Use Case
Defining Goals and Evaluating the ROI
How to Build a Successful ML Project
Approving and Prototyping the Project
Scaling and Productizing Projects
Project Structure and Lifecycle
ML Project Example from A to Z
Exploratory Data Analysis
Data and Model Pipeline Development
Application Pipeline Development
Scaling and Productizing the Project
CI/CD and Continuous Operations
Conclusion
Critical Thinking Discussion Questions
Exercises

4. Working with Data and Feature Stores
Data Versioning and Lineage
How It Works
Common ML Data Versioning Tools
Data Preparation and Analysis at Scale
Structured and Unstructured Data Transformations
Distributed Data Processing Architectures
Interactive Data Processing
Batch Data Processing
Stream Processing
Stream Processing Frameworks
Feature Stores
Feature Store Architecture and Usage
Ingestion and Transformation Service
Feature Storage
Feature Retrieval (for Training and Serving)
Feature Stores Solutions and Usage Example
Using Feast Feature Store
Using MLRun Feature Store
Conclusion
Critical Thinking Discussion Questions
Exercises

5. Developing Models for Production
AutoML
Running, Tracking, and Comparing ML Jobs
Experiment Tracking
Saving Essential Metadata with the Model Artifacts
Comparing ML Jobs: An Example with MLflow
Hyperparameter Tuning
Auto-Logging
MLOps Automation: AutoMLOps
Example: Running and Tracking ML Jobs Using Azure Databricks
Handling Training at Scale
Building and Running Multi-Stage Workflows
Managing Computation Resources Efficiently
Conclusion
Critical Thinking Discussion Questions
Exercises

6. Deployment of Models and AI Applications
Model Registry and Management
Solution Examples
SageMaker Example
MLflow Example
MLRun Example
Model Serving
Amazon SageMaker
Seldon Core
MLRun Serving
Advanced Serving and Application Pipelines
Implementing Scalable Application Pipelines
Model Routing and Ensembles
Model Optimization and ONNX
Data and Model Monitoring
Integrated Model Monitoring Solutions
Standalone Model Monitoring Solutions
Model Retraining
When to Retrain Your Models
Strategies for Data Retraining
Model Retraining in the MLOps Pipeline
Deployment Strategies
Measuring the Business Impact
Conclusion
Critical Thinking Discussion Questions
Exercises

7. Building a Production Grade MLOps Project from A to Z
Exploratory Data Analysis
Interactive Data Preparation
Preparing the Credit Transaction Dataset
Preparing the User Events (Activities) Dataset
Extracting Labels and Training a Model
Data Ingestion and Preparation Using a Feature Store
Building the Credit Transactions Data Pipeline (Feature Set)
Building the User Events Data Pipeline (FeatureSet)
Building the Target Labels Data Pipeline (FeatureSet)
Ingesting Data into the Feature Store
Model Training and Validation Pipeline
Creating and Evaluating a Feature Vector
Building and Running an Automated Training and Validation Pipeline
Real-Time Application Pipeline
Defining a Custom Model Serving Class
Building an Application Pipeline with Enrichment and Ensemble
Testing the Application Pipeline Locally
Deploying and Testing the Real-Time Application Pipeline
Model Monitoring
CI/CD and Continuous Operations
Conclusion
Critical Thinking Discussion Questions
Exercises

8. Building Scalable Deep Learning and Large Language Model Projects
Distributed Deep Learning
Horovod
Ray
Data Gathering, Labeling, and Monitoring in DL
Data Labeling Pitfalls to Avoid
Data Labeling Best Practices
Data Labeling Solutions
Using Foundation Models as Labelers
Monitoring DL Models with Unstructured Data
Build Versus Buy Deep Learning Models
Foundation Models, Generative AI, LLMs
Risks and Challenges with Generative AI
MLOps Pipelines for Efficiently Using and Customizing LLMs
Application Example: Fine-Tuning an LLM Model
Conclusion
Critical Thinking Discussion Questions
Exercises

9. Solutions for Advanced Data Types
ML Problem Framing with Time Series
Navigating Time Series Analysis with AWS
Diving into Time Series with DeepAR+
Time Series with the GCP BigQuery and SQL
Build Versus Buy for MLOps NLP Problems
Build Versus Buy: The Hugging Face Approach
Exploring Natural Language Processing with AWS
Exploring NLP with OpenAI
Video Analysis, Image Classification, and Generative AI
Image Classification Techniques with CreateML
Composite AI
Getting Started with Serverless for Composite AI
Use Cases of Composite AI with Serverless
Conclusion
Critical Thinking Discussion Questions
Exercises

10. Implementing MLOps Using Rust
The Case for Rust for MLOps
Leveling Up with Rust, GitHub Copilot, and Codespaces
In the Beginning Was the Command Line
Getting Started with Rust for MLOps
Using PyTorch and Hugging Face with Rust
Using Rust to Build Tools for MLOps
Building Containerized Rust Command-Line Tools
GPU PyTorch Workflows
Using TensorFlow Rust
Doing k-means Clustering with Rust
Final Notes on Rust
Ruff Linter
rust-new-project-template
Conclusion
Critical Thinking Discussion Questions
Exercises

A. Job Interview Questions
B. Enterprise MLOps Interviews

Index
About the Authors
