Book Description
Companies such as Google, Microsoft, and Facebook are actively building out their in-house deep learning teams. For the rest of us, however, deep learning remains a complex and hard-to-master subject. If you are familiar with Python and have a background in calculus, along with a basic understanding of machine learning, this book will help you get started with deep learning.
* Examine the foundations of machine learning and neural networks
* Learn how to train feed-forward neural networks
* Use TensorFlow to implement your first neural network (see the sketch after this list)
* Manage the problems that arise as networks grow deeper
* Build neural networks that analyze complex images
* Perform effective dimensionality reduction using autoencoders
* Dive into sequence analysis, from sequence models to language understanding
* Learn the fundamentals of reinforcement learning
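To give a concrete taste of the "first neural network" item above, here is a minimal sketch of a softmax-regression model in the TensorFlow 1.x style (placeholders, sessions) that Chapter 3 of the book works in. This is not code from the book; the MNIST-shaped inputs (784 features, 10 classes) and the 0.5 learning rate are illustrative assumptions.

```python
import tensorflow as tf  # TensorFlow 1.x API assumed

# Inputs: flattened 28x28 images and one-hot digit labels.
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])

# Parameters of a single softmax layer.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

logits = tf.matmul(x, W) + b
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # One training step per minibatch (batch_xs, batch_ys):
    # sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
```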
About the Authors
Nikhil Buduma is the cofounder and chief scientist of Remedy, a San Francisco-based company building a new data-driven system for healthcare. At the age of 16 he managed a drug discovery laboratory at San Jose State University, developing novel, low-cost screening methodologies for resource-constrained communities. By 19 he was a two-time gold medalist at the International Biology Olympiad. He then went on to MIT, where he focused on developing large-scale data systems to impact healthcare delivery, mental health, and medical research. At MIT he cofounded Lean On Me, a national nonprofit that runs an anonymous text hotline to provide effective one-on-one support on college campuses and uses data to improve mental and physical wellness. Today, Nikhil invests in hard technology and data companies through his venture fund, Q Venture Partners, and manages a data analytics team for the Milwaukee Brewers baseball club.
Contributing author Nick Locascio is a deep learning consultant, writer, and researcher. Nick earned his bachelor's and Master of Engineering degrees in Regina Barzilay's lab at MIT, specializing in NLP and computer vision research. His projects have ranged from training neural networks to write natural-language prose to collaborating with the MGH Radiology Department on applying deep learning to assistive diagnosis of mammograms. His work has been featured in MIT News and on CNBC. In his spare time, Nick consults privately on deep learning for Fortune 500 companies. He also cofounded the landmark MIT course 6.S191: Intro to Deep Learning, which has taught more than 300 students, with an audience including postdocs and professors.
Table of Contents
Preface
1. The Neural Network
Building Intelligent Machines
The Limits of Traditional Computer Programs
The Mechanics of Machine Learning
The Neuron
Expressing Linear Perceptrons as Neurons
Feed-Forward Neural Networks
Linear Neurons and Their Limitations
Sigmoid, Tanh, and ReLU Neurons
Softmax Output Layers
Looking Forward
2. Training Feed-Forward Neural Networks
The Fast-Food Problem
Gradient Descent
The Delta Rule and Learning Rates
Gradient Descent with Sigmoidal Neurons
The Backpropagation Algorithm
Stochastic and Minibatch Gradient Descent
Test Sets, Validation Sets, and Overfitting
Preventing Overfitting in Deep Neural Networks
Summary
3. Implementing Neural Networks in TensorFlow
What Is TensorFlow?
How Does TensorFlow Compare to Alternatives?
Installing TensorFlow
Creating and Manipulating TensorFlow Variables
TensorFlow Operations
Placeholder Tensors
Sessions in TensorFlow
Navigating Variable Scopes and Sharing Variables
Managing Models over the CPU and GPU
Specifying the Logistic Regression Model in TensorFlow
Logging and Training the Logistic Regression Model
Leveraging TensorBoard to Visualize Computation Graphs and Learning
Building a Multilayer Model for MNIST in TensorFlow
Summary
4. Beyond Gradient Descent
The Challenges with Gradient Descent
Local Minima in the Error Surfaces of Deep Networks
Model Identifiability
How Pesky Are Spurious Local Minima in Deep Networks?
Flat Regions in the Error Surface
When the Gradient Points in the Wrong Direction
Momentum-Based Optimization
A Brief View of Second-Order Methods
Learning Rate Adaptation
AdaGrad: Accumulating Historical Gradients
RMSProp: Exponentially Weighted Moving Average of Gradients
Adam: Combining Momentum and RMSProp
The Philosophy Behind Optimizer Selection
Summary
5. Convolutional Neural Networks
Neurons in Human Vision
The Shortcomings of Feature Selection
Vanilla Deep Neural Networks Don't Scale
Filters and Feature Maps
Full Description of the Convolutional Layer
Max Pooling
Full Architectural Description of Convolution Networks
Closing the Loop on MNIST with Convolutional Networks
Image Preprocessing Pipelines Enable More Robust Models
Accelerating Training with Batch Normalization
Building a Convolutional Network for CIFAR-10
Visualizing Learning in Convolutional Networks
Leveraging Convolutional Filters to Replicate Artistic Styles
Learning Convolutional Filters for Other Problem Domains
Summary
6. Embedding and Representation Learning
Learning Lower-Dimensional Representations
Principal Component Analysis
Motivating the Autoencoder Architecture
Implementing an Autoencoder in TensorFlow
Denoising to Force Robust Representations
Sparsity in Autoencoders
When Context Is More Informative than the Input Vector
The Word2Vec Framework
Implementing the Skip-Gram Architecture
Summary
7. Models for Sequence Analysis
Analyzing Variable-Length Inputs
Tackling seq2seq with Neural N-Grams
Implementing a Part-of-Speech Tagger
Dependency Parsing and SyntaxNet
Beam Search and Global Normalization
A Case for Stateful Deep Learning Models
Recurrent Neural Networks
The Challenges with Vanishing Gradients
Long Short-Term Memory (LSTM) Units
TensorFlow Primitives for RNN Models
Implementing a Sentiment Analysis Model
Solving seq2seq Tasks with Recurrent Neural Networks
Augmenting Recurrent Networks with Attention
Dissecting a Neural Translation Network
Summary
8. Memory Augmented Neural Networks
Neural Turing Machines
Attention-Based Memory Access
NTM Memory Addressing Mechanisms
Differentiable Neural Computers
Interference-Free Writing in DNCs
DNC Memory Reuse
Temporal Linking of DNC Writes
Understanding the DNC Read Head
The DNC Controller Network
Visualizing the DNC in Action
Implementing the DNC in TensorFlow
Teaching a DNC to Read and Comprehend
Summary
9. Deep Reinforcement Learning
Deep Reinforcement Learning Masters Atari Games
What Is Reinforcement Learning?
Markov Decision Processes (MDP)
Policy
Future Return
Discounted Future Return
Explore Versus Exploit
Policy Versus Value Learning
Policy Learning via Policy Gradients
Pole-Cart with Policy Gradients
OpenAI Gym
Creating an Agent
Building the Model and Optimizer
Sampling Actions
Keeping Track of History
Policy Gradient Main Function
PGAgent Performance on Pole-Cart
Q-Learning and Deep Q-Networks
The Bellman Equation
Issues with Value Iteration
Approximating the Q-Function
Deep Q-Network (DQN)
Training DQN
Learning Stability
Target Q-Network
Experience Replay
From Q-Function to Policy
DQN and the Markov Assumption
DQN's Solution to the Markov Assumption
Playing Breakout with DQN
Building Our Architecture
Stacking Frames
Setting Up Training Operations
Updating Our Target Q-Network
Implementing Experience Replay
DQN Main Loop
DQNAgent Results on Breakout
Improving and Moving Beyond DQN
Deep Recurrent Q-Networks (DRQN)
Asynchronous Advantage Actor-Critic Agent (A3C)
UNsupervised REinforcement and Auxiliary Learning (UNREAL)
Summary
Index