Book Description
Machine learning has become an integral part of many commercial applications and research projects, and large companies with extensive research teams are investing in the field. If you use Python, even as a beginner, Introduction to Machine Learning with Python (English reprint edition) will teach you how to build your own machine learning solutions. With the wealth of data available today, machine learning applications are limited only by your imagination.
You will learn all the steps required to create successful machine learning applications with Python and the scikit-learn library. Authors Andreas Müller and Sarah Guido focus on the practical aspects of using machine learning algorithms rather than the mathematics behind them. Familiarity with the NumPy and matplotlib libraries will help you get even more out of this book.
With this book, you will learn:
- Fundamental concepts and applications of machine learning
- The advantages and shortcomings of widely used machine learning algorithms
- How to represent data processed by machine learning, including which data aspects to focus on
- Best practices for model evaluation and parameter tuning
- The concept of pipelines for chaining models and encapsulating your workflow
- Methods for working with text data, including text-specific processing techniques
- Suggestions for improving your machine learning and data science skills
About the Authors
Andreas Müller received his PhD in machine learning from the University of Bonn. After working as a machine learning researcher on computer vision applications, he joined the Center for Data Science at New York University. He is also a core contributor and maintainer of scikit-learn. Sarah Guido is a data scientist who has worked closely with many startups, most recently as lead data scientist at Bitly. Sarah holds a master's degree in information science from the University of Michigan and has spoken at several academic conferences.
Table of Contents
Preface
1. Introduction
Why Machine Learning?
Problems Machine Learning Can Solve
Knowing Your Task and Knowing Your Data
Why Python?
scikit-learn
Installing scikit-learn
Essential Libraries and Tools
Jupyter Notebook
NumPy
SciPy
matplotlib
pandas
mglearn
Python 2 Versus Python 3
Versions Used in this Book
A First Application: Classifying Iris Species
Meet the Data
Measuring Success: Training and Testing Data
First Things First: Look at Your Data
Building Your First Model: k-Nearest Neighbors
Making Predictions
Evaluating the Model
Summary and Outlook
2. Supervised Learning
Classification and Regression
Generalization, Overfitting, and Underfitting
Relation of Model Complexity to Dataset Size
Supervised Machine Learning Algorithms
Some Sample Datasets
k-Nearest Neighbors
Linear Models
Naive Bayes Classifiers
Decision Trees
Ensembles of Decision Trees
Kernelized Support Vector Machines
Neural Networks (Deep Learning)
Uncertainty Estimates from Classifiers
The Decision Function
Predicting Probabilities
Uncertainty in Multiclass Classification
Summary and Outlook
3. Unsupervised Learning and Preprocessing
Types of Unsupervised Learning
Challenges in Unsupervised Learning
Preprocessing and Scaling
Different Kinds of Preprocessing
Applying Data Transformations
Scaling Training and Test Data the Same Way
The Effect of Preprocessing on Supervised Learning
Dimensionality Reduction, Feature Extraction, and Manifold Learning
Principal Component Analysis (PCA)
Non-Negative Matrix Factorization (NMF)
Manifold Learning with t-SNE
Clustering
k-Means Clustering
Agglomerative Clustering
DBSCAN
Comparing and Evaluating Clustering Algorithms
Summary of Clustering Methods
Summary and Outlook
4. Representing Data and Engineering Features
Categorical Variables
One-Hot-Encoding (Dummy Variables)
Numbers Can Encode Categoricals
Binning, Discretization, Linear Models, and Trees
Interactions and Polynomials
Univariate Nonlinear Transformations
Automatic Feature Selection
Univariate Statistics
Model-Based Feature Selection
Iterative Feature Selection
Utilizing Expert Knowledge
Summary and Outlook
5. Model Evaluation and Improvement
Cross-Validation
Cross-Validation in scikit-learn
Benefits of Cross-Validation
Stratified k-Fold Cross-Validation and Other Strategies
Grid Search
Simple Grid Search
The Danger of Overfitting the Parameters and the Validation Set
Grid Search with Cross-Validation
Evaluation Metrics and Scoring
Keep the End Goal in Mind
Metrics for Binary Classification
Metrics for Multiclass Classification
Regression Metrics
Using Evaluation Metrics in Model Selection
Summary and Outlook
6. Algorithm Chains and Pipelines
Parameter Selection with Preprocessing
Building Pipelines
Using Pipelines in Grid Searches
The General Pipeline Interface
Convenient Pipeline Creation with make_pipeline
Accessing Step Attributes
Accessing Attributes in a Grid-Searched Pipeline
Grid-Searching Preprocessing Steps and Model Parameters
Grid-Searching Which Model To Use
Summary and Outlook
7. Working with Text Data
Types of Data Represented as Strings
Example Application: Sentiment Analysis of Movie Reviews
Representing Text Data as a Bag of Words
Applying Bag-of-Words to a Toy Dataset
Bag-of-Words for Movie Reviews
Stopwords
Rescaling the Data with tf-idf
Investigating Model Coefficients
Bag-of-Words with More Than One Word (n-Grams)
Advanced Tokenization, Stemming, and Lemmatization
Topic Modeling and Document Clustering
Latent Dirichlet Allocation
Summary and Outlook
8. Wrapping Up
Approaching a Machine Learning Problem
Humans in the Loop
From Prototype to Production
Testing Production Systems
Building Your Own Estimator
Where to Go from Here
Theory
Other Machine Learning Frameworks and Packages
Ranking, Recommender Systems, and Other Kinds of Learning
Probabilistic Modeling, Inference, and Probabilistic Programming
Neural Networks
Scaling to Larger Datasets
Honing Your Skills
Conclusion
Index
Excerpt
From Introduction to Machine Learning with Python (English reprint edition):
Another very useful clustering algorithm is DBSCAN (which stands for "density-based spatial clustering of applications with noise"). The main benefits of DBSCAN are that it does not require the user to set the number of clusters a priori, it can capture clusters of complex shapes, and it can identify points that are not part of any cluster. DBSCAN is somewhat slower than agglomerative clustering and k-means, but still scales to relatively large datasets.
DBSCAN works by identifying points that are in "crowded" regions of the feature space, where many data points are close together. These regions are referred to as dense regions in feature space. The idea behind DBSCAN is that clusters form dense regions of data, separated by regions that are relatively empty.
Points that are within a dense region are called core samples (or core points), and they are defined as follows. There are two parameters in DBSCAN: min_samples and eps.
If there are at least min_samples many data points within a distance of eps to a given data point, that data point is classified as a core sample. Core samples that are closer to each other than the distance eps are put into the same cluster by DBSCAN.
……
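The behavior described in the excerpt can be sketched with scikit-learn's DBSCAN estimator. This is a minimal illustration, not the book's exact listing; the two-moons dataset and the parameter values (eps=0.5, min_samples=5, which are also scikit-learn's defaults) are chosen here for demonstration:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons
from sklearn.preprocessing import StandardScaler

# Two interleaving half-moon shapes: clusters with a complex shape
# that k-means cannot capture.
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

# Rescale features to zero mean and unit variance so a single eps
# value is meaningful across both feature axes.
X_scaled = StandardScaler().fit_transform(X)

# eps: neighborhood radius; min_samples: number of points required
# within eps for a point to count as a core sample.
dbscan = DBSCAN(eps=0.5, min_samples=5)
labels = dbscan.fit_predict(X_scaled)

# Points assigned to no cluster (noise) receive the label -1.
print("clusters found:", len(set(labels) - {-1}))
print("noise points:", int(np.sum(labels == -1)))
```

Note that the number of clusters is not passed in anywhere; DBSCAN infers it from the density structure of the data, and any points that fail the min_samples/eps criterion are reported as noise rather than forced into a cluster.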