Introduction to Regression in Scikit-learn
Regression is a fundamental technique in machine learning used to predict continuous values. Scikit-learn, a powerful Python library, offers a wide range of regression models that are easy to implement and customize. In this blog post, we'll explore various regression techniques and how to apply them using Scikit-learn.
Linear Regression: The Building Block
Let's start with the simplest form of regression: linear regression. This model assumes a linear relationship between the input features and the target variable.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import numpy as np

# Generate sample data
X = np.random.rand(100, 1)
y = 2 * X + 1 + np.random.randn(100, 1) * 0.1

# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and train the model
model = LinearRegression()
model.fit(X_train, y_train)

# Make predictions
y_pred = model.predict(X_test)

# Evaluate the model
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse}")
This example demonstrates how to create, train, and evaluate a simple linear regression model using Scikit-learn.
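Because the sample data was generated as y = 2x + 1 plus noise, we can sanity-check the fit by inspecting the learned parameters, which should land close to the true slope and intercept:

# The fitted parameters should roughly recover the true values (slope 2, intercept 1)
print(f"Coefficient: {model.coef_[0][0]:.3f}")
print(f"Intercept: {model.intercept_[0]:.3f}")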
Polynomial Regression: Capturing Non-linear Relationships
When the relationship between variables is not linear, polynomial regression can be a powerful tool. Scikit-learn makes it easy to generate polynomial features and combine them with a linear model:
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# Create polynomial features
degree = 3
polyreg = make_pipeline(PolynomialFeatures(degree), LinearRegression())

# Fit the model
polyreg.fit(X_train, y_train)

# Make predictions
y_pred_poly = polyreg.predict(X_test)

# Evaluate the model
mse_poly = mean_squared_error(y_test, y_pred_poly)
print(f"Mean Squared Error (Polynomial): {mse_poly}")
This code snippet shows how to create a polynomial regression model of degree 3 using a pipeline in Scikit-learn.
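The right degree is rarely obvious in advance: too low underfits, too high overfits. One quick way to compare candidates is a small loop over degrees, reusing the train/test split from above (the degrees listed here are just illustrative):

# Compare test MSE across a few candidate polynomial degrees
for d in [1, 2, 3, 5, 10]:
    pipe = make_pipeline(PolynomialFeatures(d), LinearRegression())
    pipe.fit(X_train, y_train)
    print(f"Degree {d}: MSE = {mean_squared_error(y_test, pipe.predict(X_test)):.4f}")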
Regularized Regression: Preventing Overfitting
Regularization techniques help prevent overfitting by adding a penalty term to the loss function. Scikit-learn provides several regularized regression models:
Ridge Regression (L2 Regularization)
from sklearn.linear_model import Ridge

# alpha controls the strength of the L2 penalty
ridge = Ridge(alpha=1.0)
ridge.fit(X_train, y_train)
y_pred_ridge = ridge.predict(X_test)
Lasso Regression (L1 Regularization)
from sklearn.linear_model import Lasso

# The L1 penalty can shrink some coefficients exactly to zero
lasso = Lasso(alpha=1.0)
lasso.fit(X_train, y_train)
y_pred_lasso = lasso.predict(X_test)
Elastic Net (Combination of L1 and L2)
from sklearn.linear_model import ElasticNet

# l1_ratio balances the two penalties: 0 is pure Ridge, 1 is pure Lasso
elastic = ElasticNet(alpha=1.0, l1_ratio=0.5)
elastic.fit(X_train, y_train)
y_pred_elastic = elastic.predict(X_test)
These penalties all combat overfitting; the L1-based models (Lasso and Elastic Net) additionally perform feature selection by driving uninformative coefficients to zero, which is especially valuable with high-dimensional data.
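To see the feature-selection effect concretely, here is a minimal sketch on synthetic wide data; the 10-feature setup and alpha values are purely illustrative. Lasso drives the coefficients of uninformative features to exactly zero, while Ridge merely shrinks them:

# Illustrative data: 10 features, only the first two carry signal
X_wide = np.random.rand(100, 10)
y_wide = 3 * X_wide[:, 0] - 2 * X_wide[:, 1] + np.random.randn(100) * 0.1

print("Ridge:", np.round(Ridge(alpha=1.0).fit(X_wide, y_wide).coef_, 2))
print("Lasso:", np.round(Lasso(alpha=0.1).fit(X_wide, y_wide).coef_, 2))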
Model Evaluation and Selection
Scikit-learn provides various tools for evaluating and selecting the best regression model:
Cross-Validation
from sklearn.model_selection import cross_val_score

# scoring='neg_mean_squared_error' returns negated MSE (higher is better), so flip the sign
scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error')
print(f"Cross-validation scores: {-scores}")
print(f"Average MSE: {-scores.mean()}")
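Note that when a model includes preprocessing, as in the polynomial pipeline above, you should cross-validate the entire pipeline rather than pre-transformed data, so that every fitted step sees only its training folds. For example:

# Cross-validate the full polynomial pipeline; each fold refits every step
poly_scores = cross_val_score(polyreg, X, y, cv=5, scoring='neg_mean_squared_error')
print(f"Average polynomial MSE: {-poly_scores.mean():.4f}")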
Grid Search for Hyperparameter Tuning
from sklearn.model_selection import GridSearchCV

param_grid = {'alpha': [0.1, 1.0, 10.0]}
grid_search = GridSearchCV(Ridge(), param_grid, cv=5)
grid_search.fit(X_train, y_train)
print(f"Best parameters: {grid_search.best_params_}")
print(f"Best score: {grid_search.best_score_}")
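One detail worth knowing: with no scoring argument, GridSearchCV falls back to the estimator's default scorer, which for regressors is R², so best_score_ above is an R² value rather than an MSE. By default the search also refits the winning configuration on the full training set, exposed as best_estimator_:

# Evaluate the refit best model on the held-out test set
best_ridge = grid_search.best_estimator_
print(f"Test MSE with best alpha: {mean_squared_error(y_test, best_ridge.predict(X_test)):.4f}")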
Advanced Regression Techniques
Scikit-learn also offers more advanced regression techniques:
Support Vector Regression (SVR)
from sklearn.svm import SVR

# SVR expects a 1-D target, so flatten the column vector with .ravel()
svr = SVR(kernel='rbf', C=1.0, epsilon=0.1)
svr.fit(X_train, y_train.ravel())
y_pred_svr = svr.predict(X_test)
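SVR with an RBF kernel is sensitive to feature scales, so in practice it is usually combined with a scaler in a pipeline; here is a minimal sketch using StandardScaler (one reasonable choice among several):

from sklearn.preprocessing import StandardScaler

# The pipeline learns the scaling on the training data and reapplies it at predict time
svr_scaled = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=1.0, epsilon=0.1))
svr_scaled.fit(X_train, y_train.ravel())
y_pred_svr_scaled = svr_scaled.predict(X_test)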
Random Forest Regression
from sklearn.ensemble import RandomForestRegressor

rf_reg = RandomForestRegressor(n_estimators=100, random_state=42)
rf_reg.fit(X_train, y_train.ravel())
y_pred_rf = rf_reg.predict(X_test)
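A side benefit of random forests is a built-in estimate of how much each feature contributes to the model, exposed as feature_importances_. With our single-feature data the result is trivially 1.0, but on wider datasets it offers a handy first look at which inputs matter:

# Importance scores are non-negative and sum to 1 across features
print(f"Feature importances: {rf_reg.feature_importances_}")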
These advanced techniques can capture complex, non-linear relationships in the data and often outperform simpler models, though at the cost of interpretability and longer training times.
Conclusion
Scikit-learn provides a rich set of regression models and tools for data scientists and machine learning practitioners. By understanding these different techniques and how to implement them, you can tackle a wide range of regression problems efficiently and effectively.
Remember to always start with simple models and gradually increase complexity as needed. Proper model evaluation and selection are crucial for building reliable and accurate regression models.