
Learning Ray

Flexible Distributed Python for Machine Learning

Paperback, English, 2023, ISBN 9781098117221


Get started with Ray, the open source distributed computing framework that simplifies the process of scaling compute-intensive Python workloads. With this practical book, Python programmers, data engineers, and data scientists will learn how to leverage Ray locally and spin up compute clusters. You'll be able to use Ray to structure and run machine learning programs at scale.

Authors Max Pumperla, Edward Oakes, and Richard Liaw show you how to build machine learning applications with Ray. You'll understand how Ray fits into the current landscape of machine learning tools and discover how Ray continues to integrate ever more tightly with these tools. Distributed computation is hard, but by using Ray you'll find it easy to get started.

- Learn how to build your first distributed applications with Ray Core
- Conduct hyperparameter optimization with Ray Tune
- Use the Ray RLlib library for reinforcement learning
- Manage distributed training with the Ray Train library
- Use Ray to perform data processing with Ray Datasets
- Learn how to work with Ray Clusters and serve models with Ray Serve
- Build end-to-end machine learning applications with Ray AIR


Number of pages: 250
Main category: IT management / ICT




Who Should Read This Book
Goals of This Book
Navigating This Book
How to Use the Code Examples
Conventions Used in This Book
Using Code Examples
O’Reilly Online Learning
How to Contact Us

1. An Overview of Ray
What Is Ray?
What Led to Ray?
Ray’s Design Principles
Three Layers: Core, Libraries, and Ecosystem
A Distributed Computing Framework
A Suite of Data Science Libraries
Ray AIR and the Data Science Workflow
Data Processing with Ray Datasets
Model Training
Hyperparameter Tuning
Model Serving
A Growing Ecosystem

2. Getting Started with Ray Core
An Introduction to Ray Core
A First Example Using the Ray API
An Overview of the Ray Core API
Understanding Ray System Components
Scheduling and Executing Work on a Node
The Head Node
Distributed Scheduling and Execution
A Simple MapReduce Example with Ray
Mapping and Shuffling Document Data
Reducing Word Counts

3. Building Your First Distributed Application
Introducing Reinforcement Learning
Setting Up a Simple Maze Problem
Building a Simulation
Training a Reinforcement Learning Model
Building a Distributed Ray App
Recapping RL Terminology

4. Reinforcement Learning with Ray RLlib
An Overview of RLlib
Getting Started with RLlib
Building a Gym Environment
Running the RLlib CLI
Using the RLlib Python API
Configuring RLlib Experiments
Resource Configuration
Rollout Worker Configuration
Environment Configuration
Working with RLlib Environments
An Overview of RLlib Environments
Working with Multiple Agents
Working with Policy Servers and Clients
Advanced Concepts
Building an Advanced Environment
Applying Curriculum Learning
Working with Offline Data
Other Advanced Topics

5. Hyperparameter Optimization with Ray Tune
Tuning Hyperparameters
Building a Random Search Example with Ray
Why Is HPO Hard?
An Introduction to Tune
How Does Tune Work?
Configuring and Running Tune
Machine Learning with Tune
Using RLlib with Tune
Tuning Keras Models

6. Data Processing with Ray
Ray Datasets
Ray Datasets Basics
Computing Over Ray Datasets
Dataset Pipelines
Example: Training Copies of a Classifier in Parallel
External Library Integrations
Building an ML Pipeline

7. Distributed Training with Ray Train
The Basics of Distributed Model Training
Introduction to Ray Train by Example
Predicting Big Tips in NYC Taxi Rides
Loading, Preprocessing, and Featurization
Defining a Deep Learning Model
Distributed Training with Ray Train
Distributed Batch Inference
More on Trainers in Ray Train
Migrating to Ray Train with Minimal Code Changes
Scaling Out Trainers
Preprocessing with Ray Train
Integrating Trainers with Ray Tune
Using Callbacks to Monitor Training

8. Online Inference with Ray Serve
Key Characteristics of Online Inference
ML Models Are Compute Intensive
ML Models Aren’t Useful in Isolation
An Introduction to Ray Serve
Architectural Overview
Defining a Basic HTTP Endpoint
Scaling and Resource Allocation
Request Batching
Multimodel Inference Graphs
End-to-End Example: Building an NLP-Powered API
Fetching Content and Preprocessing
NLP Models
HTTP Handling and Driver Logic
Putting It All Together

9. Ray Clusters
Manually Creating a Ray Cluster
Deployment on Kubernetes
Setting Up Your First KubeRay Cluster
Interacting with the KubeRay Cluster
Exposing KubeRay
Configuring KubeRay
Configuring Logging for KubeRay
Using the Ray Cluster Launcher
Configuring Your Ray Cluster
Using the Cluster Launcher CLI
Interacting with a Ray Cluster
Working with Cloud Clusters
Using Other Cloud Providers

10. Getting Started with the Ray AI Runtime
Why Use AIR?
Key AIR Concepts by Example
Ray Datasets and Preprocessors
Tuners and Checkpoints
Batch Predictors
Workloads That Are Suited for AIR
AIR Workload Execution
AIR Memory Management
AIR Failure Model
Autoscaling AIR Workloads

11. Ray’s Ecosystem and Beyond
A Growing Ecosystem
Data Loading and Processing
Model Training
Model Serving
Building Custom Integrations
An Overview of Ray’s Integrations
Ray and Other Systems
Distributed Python Frameworks
Ray AIR and the Broader ML Ecosystem
How to Integrate AIR into Your ML Platform
Where to Go from Here?

About the Authors
