gradient boosting classifier sklearn

gradient boosting

AdaBoost (Freund & Schapire, 1997) builds an ensemble by training a sequence of weak learners y(x) on reweighted data: each training sample carries a weight w, the weights of samples misclassified by the current learner are increased, and the next learner is fit to the reweighted sample, so successive learners concentrate on the hard cases.
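The reweighting scheme above can be sketched with scikit-learn's `AdaBoostClassifier` (a minimal sketch; the dataset and parameter values are illustrative, not from the original text):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)

# The default base learner is a depth-1 decision tree (a "stump");
# AdaBoost reweights the samples between stages so later stumps
# focus on the examples earlier stumps got wrong.
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print(round(clf.score(X, y), 2))
```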

scikit-learn gradient boosted decision trees (gbdt) tuning notes - pinard

In scikit-learn, GradientBoostingClassifier implements GBDT for classification and GradientBoostingRegressor implements GBDT for regression. The two classes share most of their parameters, although some options differ, such as the choices available for the loss function `loss`. Like AdaBoost, GBDT is a boosting method, but its weak learners are CART regression trees.
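A minimal sketch contrasting the two classes (toy datasets chosen for illustration); it also shows that the classifier's internal weak learners are regression trees, as stated above:

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

Xc, yc = make_classification(n_samples=200, random_state=0)
Xr, yr = make_regression(n_samples=200, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(Xc, yc)  # classification
reg = GradientBoostingRegressor(random_state=0).fit(Xr, yr)   # regression

# Even for classification, each boosting stage is a CART regression tree.
print(type(clf.estimators_[0, 0]).__name__)  # DecisionTreeRegressor
```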

2) learning_rate: the shrinkage (step size) $\nu$ of each weak learner. The model is updated as $f_{k}(x) = f_{k-1}(x) + \nu h_k(x)$, with $0 < \nu \leq 1$ and $\nu = 1$ corresponding to no shrinkage. A smaller $\nu$ means more weak-learner iterations are needed to reach the same training fit, so n_estimators and learning_rate must be tuned together; generally one starts tuning from a small $\nu$.
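The learning_rate / n_estimators trade-off can be sketched as follows (a hedged illustration on the Friedman #1 benchmark; the usual outcome is that a smaller $\nu$ generalizes better at a fixed number of stages):

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

# Same number of stages, different shrinkage nu.
mse = {}
for nu in (1.0, 0.1):
    est = GradientBoostingRegressor(
        n_estimators=100, learning_rate=nu, max_depth=1, random_state=0
    ).fit(X_train, y_train)
    mse[nu] = mean_squared_error(y_test, est.predict(X_test))
    print(nu, round(mse[nu], 2))
```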

3) subsample: the subsampling ratio, in (0, 1]. Note that this differs from the subsampling in random forests: random forests sample with replacement, while here sampling is without replacement. A value of 1 means every tree uses all samples, i.e. no subsampling. A value below 1 means each tree in the GBDT is fit on only a fraction of the samples, which reduces variance (guards against overfitting) but increases bias, so it should not be set too low. Values in [0.5, 0.8] are recommended; the default is 1.0, i.e. no subsampling.
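A short sketch of stochastic gradient boosting via `subsample` (the dataset is illustrative). With subsample < 1, scikit-learn also records the out-of-bag improvement per stage:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=400, random_state=0)

clf = GradientBoostingClassifier(
    subsample=0.8,   # recommended range [0.5, 0.8]; 1.0 disables subsampling
    random_state=0,
).fit(X, y)

# Each of the (default) 100 stages was fit on a random 80% draw,
# so the held-out loss improvement is available per stage.
print(clf.oob_improvement_.shape)  # one value per boosting stage
```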


using gradient boosting in scikit-learn

>>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.metrics import mean_squared_error
>>> X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
>>> X_train, X_test = X[:200], X[200:]
>>> y_train, y_test = y[:200], y[200:]
>>> est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
...     max_depth=1, random_state=0, loss='ls').fit(X_train, y_train)
>>> mean_squared_error(y_test, est.predict(X_test))
5.00...

(In scikit-learn >= 1.2 the squared-error loss is spelled loss='squared_error' rather than 'ls'.)

Parameters:

n_estimators : int (default=100)
    The number of boosting stages to perform.
loss : {'ls', 'lad', 'huber', 'quantile'}, optional (default='ls')
    The loss function to be optimized.
learning_rate : float, optional (default=0.1)
    Shrinks the contribution of each tree; in stochastic gradient boosting (SGB) there is a trade-off between learning_rate and n_estimators.
max_depth : integer, optional (default=3)
    Maximum depth of the individual regression trees; max_depth=1 yields decision stumps.
warm_start : bool, default: False
    When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble.

Attributes:

train_score_ : array, shape = [n_estimators]
    The i-th value is the training-set deviance of the model at iteration i.
feature_importances_ : array, shape = [n_features]
    The importance of each feature.
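The fitted attributes listed above can be inspected on the same Friedman #1 setup (a minimal sketch reusing the example's data and hyperparameters):

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
est = GradientBoostingRegressor(
    n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0
).fit(X[:200], y[:200])

print(est.train_score_.shape)          # one entry per boosting stage
print(est.feature_importances_.shape)  # one entry per input feature
```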

scikit-learn - gradientboostingclassifier | scikit-learn tutorial

Gradient Boosting for classification. GradientBoostingClassifier builds an additive ensemble from a base model: in each successive iteration (or stage), a regression tree is fit to the residuals (the errors of the previous stage), correcting the ensemble step by step.
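The stagewise error correction described above can be observed with `staged_predict` (a sketch on a toy dataset; accuracy values depend on the random data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predictions after each boosting stage: training accuracy should
# improve as stages (trees fit to the residual errors) are added.
stage_preds = list(clf.staged_predict(X))
first = (stage_preds[0] == y).mean()
last = (stage_preds[-1] == y).mean()
print(round(first, 2), round(last, 2))
```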