
Deep Learning from Scratch

Building with Python from First Principles

Paperback | English | 2019 | 9781492041412

Summary

With the resurgence of neural networks in the 2010s, deep learning has become essential for machine learning practitioners and even many software engineers. This book provides a comprehensive introduction for data scientists and software engineers with machine learning experience. You’ll start with deep learning basics and move quickly to the details of important advanced architectures, implementing everything from scratch along the way.

Author Seth Weidman shows you how neural networks work using a first principles approach. You’ll learn how to apply multilayer neural networks, convolutional neural networks, and recurrent neural networks from the ground up. With a thorough understanding of how neural networks work mathematically, computationally, and conceptually, you’ll be set up for success on all future deep learning projects.
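As a flavor of the first-principles style described above — and of the derivatives-and-chain-rule material the opening chapter covers — here is a small illustrative sketch, not taken from the book, that checks the chain rule for a nested function numerically:

```python
import numpy as np

def deriv(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# A nested function f(g(x)) built from simple pieces
g = np.tanh
f = np.square          # f(u) = u**2

def composed(x):
    return f(g(x))

x = 0.5
# Chain rule: (f o g)'(x) = f'(g(x)) * g'(x)
chain_rule = deriv(f, g(x)) * deriv(g, x)
direct = deriv(composed, x)
# The two estimates agree to numerical precision
```

The book develops exactly this kind of correspondence between math, diagrams, and code before building it up into full neural networks.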

This book provides:
- Extremely clear and thorough mental models—accompanied by working code examples and mathematical explanations—for understanding neural networks
- Methods for implementing multilayer neural networks from scratch, using an easy-to-understand object-oriented framework
- Working implementations and clear-cut explanations of convolutional and recurrent neural networks
- Implementation of these neural network concepts using the popular PyTorch framework
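To illustrate the kind of object-oriented building blocks the table of contents describes (Operation, Layer, and NeuralNetwork classes), here is a minimal sketch of a dense layer with a forward and backward pass. This is an assumption-laden illustration, not the book's actual code; the class and attribute names here are hypothetical.

```python
import numpy as np

class Dense:
    """A fully connected layer: output = input @ W + b."""
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Parameter gradients via the chain rule
        self.grad_W = self.x.T @ grad_out
        self.grad_b = grad_out.sum(axis=0)
        return grad_out @ self.W.T      # gradient with respect to the input
```

In the book's framework, layers like this are composed into a network object, and a trainer/optimizer pair drives the forward and backward passes over batches of data.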

Specifications

ISBN13: 9781492041412
Language: English
Binding: paperback
Number of pages: 250
Publisher: O'Reilly
Edition: 1
Publication date: 7 October 2019
Main category: IT management / ICT


About Seth Weidman

Seth Weidman is a data scientist who lives in San Francisco. He has been fascinated by deep learning since he began studying it in late 2016, and he writes and speaks about it whenever he can. Professionally, he has applied a variety of machine learning models in industry, taught data science to individuals and companies, and works on modeling and Python projects on the side. Full time, he teaches data science to companies via the Corporate Training team at Metis. He strives to find the simplicity on the other side of complexity.


Table of Contents

Preface
Understanding Neural Networks Requires Multiple Mental Models
Chapter Outlines
Conventions Used in This Book
Using Code Examples
O’Reilly Online Learning
How to Contact Us
Acknowledgments

1. Foundations
Functions
Math
Diagrams
Code
Derivatives
Math
Diagrams
Code
Nested Functions
Diagram
Math
Code
Another Diagram
The Chain Rule
Math
Code
A Slightly Longer Example
Math
Diagram
Code
Functions with Multiple Inputs
Math
Diagram
Code
Derivatives of Functions with Multiple Inputs
Diagram
Math
Code
Functions with Multiple Vector Inputs
Math
Creating New Features from Existing Features
Math
Diagram
Code
Derivatives of Functions with Multiple Vector Inputs
Diagram
Math
Code
Vector Functions and Their Derivatives: One Step Further
Diagram
Math
Code
Vector Functions and Their Derivatives: The Backward Pass
Computational Graph with Two 2D Matrix Inputs
Math
Diagram
Code
The Fun Part: The Backward Pass
Diagram
Math
Code
Conclusion

2. Fundamentals
Supervised Learning Overview
Supervised Learning Models
Linear Regression
Linear Regression: A Diagram
Linear Regression: A More Helpful Diagram (and the Math)
Adding in the Intercept
Linear Regression: The Code
Training the Model
Calculating the Gradients: A Diagram
Calculating the Gradients: The Math (and Some Code)
Calculating the Gradients: The (Full) Code
Using These Gradients to Train the Model
Assessing Our Model: Training Set Versus Testing Set
Assessing Our Model: The Code
Analyzing the Most Important Feature
Neural Networks from Scratch
Step 1: A Bunch of Linear Regressions
Step 2: A Nonlinear Function
Step 3: Another Linear Regression
Diagrams
Code
Neural Networks: The Backward Pass
Training and Assessing Our First Neural Network
Two Reasons Why This Is Happening
Conclusion

3. Deep Learning from Scratch
Deep Learning Definition: A First Pass
The Building Blocks of Neural Networks: Operations
Diagram
Code
The Building Blocks of Neural Networks: Layers
Diagrams
Building Blocks on Building Blocks
The Layer Blueprint
The Dense Layer
The NeuralNetwork Class, and Maybe Others
Diagram
Code
Loss Class
Deep Learning from Scratch
Implementing Batch Training
NeuralNetwork: Code
Trainer and Optimizer
Optimizer
Trainer
Putting Everything Together
Our First Deep Learning Model (from Scratch)
Conclusion and Next Steps

4. Extensions
Some Intuition About Neural Networks
The Softmax Cross Entropy Loss Function
Component #1: The Softmax Function
Component #2: The Cross Entropy Loss
A Note on Activation Functions
Experiments
Data Preprocessing
Model
Experiment: Softmax Cross Entropy Loss
Momentum
Intuition for Momentum
Implementing Momentum in the Optimizer Class
Experiment: Stochastic Gradient Descent with Momentum
Learning Rate Decay
Types of Learning Rate Decay
Experiments: Learning Rate Decay
Weight Initialization
Math and Code
Experiments: Weight Initialization
Dropout
Definition
Implementation
Experiments: Dropout
Conclusion

5. Convolutional Neural Networks
Neural Networks and Representation Learning
A Different Architecture for Image Data
The Convolution Operation
The Multichannel Convolution Operation
Convolutional Layers
Implementation Implications
The Differences Between Convolutional and Fully Connected Layers
Making Predictions with Convolutional Layers: The Flatten Layer
Pooling Layers
Implementing the Multichannel Convolution Operation
The Forward Pass
Convolutions: The Backward Pass
Batches, 2D Convolutions, and Multiple Channels
2D Convolutions
The Last Element: Adding “Channels”
Using This Operation to Train a CNN
The Flatten Operation
The Full Conv2D Layer
Experiments
Conclusion

6. Recurrent Neural Networks
The Key Limitation: Handling Branching
Automatic Differentiation
Coding Up Gradient Accumulation
Motivation for Recurrent Neural Networks
Introduction to Recurrent Neural Networks
The First Class for RNNs: RNNLayer
The Second Class for RNNs: RNNNode
Putting These Two Classes Together
The Backward Pass
RNNs: The Code
The RNNLayer Class
The Essential Elements of RNNNodes
“Vanilla” RNNNodes
Limitations of “Vanilla” RNNNodes
One Solution: GRUNodes
LSTMNodes
Data Representation for a Character-Level RNN-Based Language Model
Other Language Modeling Tasks
Combining RNNLayer Variants
Putting This All Together
Conclusion

7. PyTorch
PyTorch Tensors
Deep Learning with PyTorch
PyTorch Elements: Model, Layer, Optimizer, and Loss
Implementing Neural Network Building Blocks Using PyTorch: DenseLayer
Example: Boston Housing Prices Model in PyTorch
PyTorch Elements: Optimizer and Loss
PyTorch Elements: Trainer
Tricks to Optimize Learning in PyTorch
Convolutional Neural Networks in PyTorch
DataLoader and Transforms
LSTMs in PyTorch
Postscript: Unsupervised Learning via Autoencoders
Representation Learning
An Approach for Situations with No Labels Whatsoever
Implementing an Autoencoder in PyTorch
A Stronger Test for Unsupervised Learning, and a Solution
Conclusion
A. Deep Dives
Matrix Chain Rule
Gradient of the Loss with Respect to the Bias Terms
Convolutions via Matrix Multiplication

Index
