Fenil Sonani

Overfitting and Regularization in Machine Learning: Complete Guide with Python Implementation



Introduction

Overfitting is one of the most pervasive challenges in machine learning: a model performs exceptionally well on its training data but fails to generalize to unseen data. This happens when the model learns not just the underlying patterns but also the noise and idiosyncrasies of the training set, resulting in poor real-world performance.

Regularization techniques provide systematic ways to combat overfitting by constraining model capacity, penalizing large or numerous parameters, or introducing controlled randomness during training. This guide covers the theoretical foundations, practical implementations, and advanced strategies for detecting, understanding, and preventing overfitting with a range of regularization methods.

From fundamental L1 and L2 regularization to modern techniques like dropout and early stopping, we'll implement complete frameworks for building robust, generalizable machine learning models that perform well beyond the training environment.

Understanding Overfitting

Mathematical Foundation

Overfitting occurs when a model's capacity exceeds what's needed to capture the true underlying relationship between inputs and outputs. Mathematically, we can express this through the decomposition of generalization error:

Generalization Error = Training Error + Generalization Gap

where the generalization gap, the difference between error on unseen data and error on the training set, typically grows as model complexity increases beyond the optimal point.
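
Before building a full analysis framework, a minimal sketch (using scikit-learn and NumPy, with an arbitrary sine target chosen purely for illustration) shows the gap directly: a deliberately over-parameterized polynomial drives the training error toward zero while the error on fresh data from the same distribution stays much larger.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
true_fn = lambda x: np.sin(2 * np.pi * x).ravel()
x_train = rng.uniform(0, 1, 30).reshape(-1, 1)
x_test = rng.uniform(0, 1, 1000).reshape(-1, 1)
y_train = true_fn(x_train) + rng.normal(0, 0.2, 30)
y_test = true_fn(x_test) + rng.normal(0, 0.2, 1000)

# Degree-15 polynomial: far more capacity than the underlying sine needs
model = make_pipeline(PolynomialFeatures(degree=15), StandardScaler(), LinearRegression())
model.fit(x_train, y_train)

train_mse = mean_squared_error(y_train, model.predict(x_train))
test_mse = mean_squared_error(y_test, model.predict(x_test))
print(f"Training MSE:       {train_mse:.4f}")
print(f"Test MSE:           {test_mse:.4f}")
print(f"Generalization gap: {test_mse - train_mse:.4f}")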

Let's implement a comprehensive framework for understanding and detecting overfitting:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split, validation_curve, learning_curve
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.pipeline import Pipeline
import pandas as pd
from typing import List, Tuple, Dict, Optional, Callable, Union
import warnings
warnings.filterwarnings('ignore')

# Set style for better visualizations
plt.style.use('seaborn-v0_8')
sns.set_palette("husl")

class OverfittingAnalyzer:
    """
    Comprehensive analyzer for detecting and understanding overfitting in machine learning models
    """
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
        np.random.seed(random_state)
        self.results_history: List[Dict] = []
        
    def generate_synthetic_data(self, 
                              n_samples: int = 500,
                              n_features: int = 1,
                              noise_level: float = 0.1,
                              true_function: Optional[Callable] = None) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
        """
        Generate synthetic dataset for overfitting analysis
        
        Args:
            n_samples: Number of samples to generate
            n_features: Number of input features
            noise_level: Standard deviation of noise
            true_function: Custom function to generate target values
            
        Returns:
            Tuple of (X, y_true, y_noisy)
        """
        if n_features == 1:
            X = np.linspace(0, 1, n_samples).reshape(-1, 1)
        else:
            X = np.random.uniform(0, 1, size=(n_samples, n_features))
        
        if true_function is None:
            # Default true function: polynomial with sine component
            if n_features == 1:
                y_true = (2 * X.flatten() - 1) ** 3 + 0.5 * np.sin(8 * np.pi * X.flatten())
            else:
                # For multivariate case, use a combination of features
                y_true = np.sum(X ** 2, axis=1) + 0.5 * np.sin(4 * np.pi * np.sum(X, axis=1))
        else:
            y_true = true_function(X)
        
        # Add noise
        noise = np.random.normal(0, noise_level, size=y_true.shape)
        y_noisy = y_true + noise
        
        return X, y_true, y_noisy
    
    def detect_overfitting(self, 
                          model: any,
                          X_train: np.ndarray,
                          y_train: np.ndarray,
                          X_val: np.ndarray,
                          y_val: np.ndarray,
                          complexity_param: str,
                          complexity_range: List[Union[int, float]],
                          scoring: str = 'mse') -> Dict[str, any]:
        """
        Detect overfitting by analyzing validation curves
        
        Args:
            model: Scikit-learn model or pipeline
            X_train, y_train: Training data
            X_val, y_val: Validation data
            complexity_param: Parameter name controlling model complexity
            complexity_range: Range of complexity values to test
            scoring: Scoring metric ('mse' or 'r2')
            
        Returns:
            Dictionary with overfitting analysis results
        """
        train_scores = []
        val_scores = []
        
        for complexity in complexity_range:
            # Set complexity parameter
            if hasattr(model, 'set_params'):
                model.set_params(**{complexity_param: complexity})
            else:
                setattr(model, complexity_param, complexity)
            
            # Fit model
            model.fit(X_train, y_train)
            
            # Make predictions
            train_pred = model.predict(X_train)
            val_pred = model.predict(X_val)
            
            # Calculate scores
            if scoring == 'mse':
                train_score = mean_squared_error(y_train, train_pred)
                val_score = mean_squared_error(y_val, val_pred)
            elif scoring == 'r2':
                train_score = r2_score(y_train, train_pred)
                val_score = r2_score(y_val, val_pred)
            else:
                raise ValueError(f"Unsupported scoring: {scoring}")
            
            train_scores.append(train_score)
            val_scores.append(val_score)
        
        # Detect overfitting point
        overfitting_detected, overfitting_point = self._find_overfitting_point(
            complexity_range, train_scores, val_scores, scoring
        )
        
        # Calculate overfitting severity
        overfitting_severity = self._calculate_overfitting_severity(
            train_scores, val_scores, scoring
        )
        
        results = {
            'complexity_range': complexity_range,
            'train_scores': train_scores,
            'val_scores': val_scores,
            'overfitting_detected': overfitting_detected,
            'overfitting_point': overfitting_point,
            'overfitting_severity': overfitting_severity,
            'optimal_complexity': self._find_optimal_complexity(complexity_range, val_scores, scoring),
            'scoring_metric': scoring
        }
        
        self.results_history.append(results)
        return results
    
    def _find_overfitting_point(self, 
                               complexity_range: List,
                               train_scores: List[float],
                               val_scores: List[float],
                               scoring: str) -> Tuple[bool, Optional[float]]:
        """Find the point where overfitting begins"""
        if scoring == 'mse':
            # For MSE, flag overfitting at the first complexity where the
            # validation error rises by more than 1% from the previous value
            for i in range(1, len(val_scores)):
                if val_scores[i] > val_scores[i-1] * 1.01:  # 1% tolerance
                    return True, complexity_range[i]
            return False, None
        else:  # r2
            # For R2, overfitting occurs when validation R2 starts decreasing
            for i in range(1, len(val_scores)):
                if val_scores[i] < val_scores[i-1] * 0.99:  # 1% tolerance
                    return True, complexity_range[i]
            return False, None
    
    def _calculate_overfitting_severity(self, 
                                      train_scores: List[float],
                                      val_scores: List[float],
                                      scoring: str) -> float:
        """Calculate overfitting severity score"""
        if scoring == 'mse':
            # Higher validation error relative to training indicates more overfitting
            min_train = min(train_scores)
            min_val = min(val_scores)
            return (min_val - min_train) / min_train if min_train > 0 else 0
        else:  # r2
            # Lower validation R2 relative to training indicates more overfitting
            max_train = max(train_scores)
            max_val = max(val_scores)
            return (max_train - max_val) / max_train if max_train > 0 else 0
    
    def _find_optimal_complexity(self, 
                               complexity_range: List,
                               val_scores: List[float],
                               scoring: str) -> float:
        """Find optimal complexity based on validation performance"""
        if scoring == 'mse':
            optimal_idx = np.argmin(val_scores)
        else:  # r2
            optimal_idx = np.argmax(val_scores)
        return complexity_range[optimal_idx]
    
    def plot_overfitting_analysis(self, results: Dict[str, any]) -> None:
        """
        Plot comprehensive overfitting analysis
        
        Args:
            results: Results from detect_overfitting method
        """
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 6))
        
        complexity_range = results['complexity_range']
        train_scores = results['train_scores']
        val_scores = results['val_scores']
        scoring = results['scoring_metric']
        
        # Plot 1: Validation curves
        ax1.plot(complexity_range, train_scores, 'o-', linewidth=3, markersize=8,
                label='Training Score', color='blue', alpha=0.8)
        ax1.plot(complexity_range, val_scores, 's-', linewidth=3, markersize=8,
                label='Validation Score', color='red', alpha=0.8)
        
        # Mark overfitting point
        if results['overfitting_detected']:
            overfitting_point = results['overfitting_point']
            ax1.axvline(x=overfitting_point, color='orange', linestyle='--', 
                       linewidth=2, alpha=0.7, label=f'Overfitting Point ({overfitting_point})')
        
        # Mark optimal complexity
        optimal_complexity = results['optimal_complexity']
        ax1.axvline(x=optimal_complexity, color='green', linestyle='--', 
                   linewidth=2, alpha=0.7, label=f'Optimal Complexity ({optimal_complexity})')
        
        ax1.set_xlabel('Model Complexity', fontsize=12)
        ax1.set_ylabel(f'Score ({scoring.upper()})', fontsize=12)
        ax1.set_title('Overfitting Detection: Validation Curves', fontsize=14, fontweight='bold')
        ax1.legend(fontsize=11)
        ax1.grid(True, alpha=0.3)
        
        if scoring == 'mse':
            ax1.set_yscale('log')
        
        # Plot 2: Overfitting gap
        if scoring == 'mse':
            gap = np.array(val_scores) - np.array(train_scores)
            gap_label = 'Validation - Training MSE'
        else:
            gap = np.array(train_scores) - np.array(val_scores)
            gap_label = 'Training - Validation R²'
        
        ax2.plot(complexity_range, gap, 'o-', linewidth=3, markersize=8,
                color='purple', alpha=0.8)
        ax2.axhline(y=0, color='black', linestyle='-', alpha=0.3)
        ax2.set_xlabel('Model Complexity', fontsize=12)
        ax2.set_ylabel(gap_label, fontsize=12)
        ax2.set_title('Overfitting Gap Analysis', fontsize=14, fontweight='bold')
        ax2.grid(True, alpha=0.3)
        
        # Add severity annotation
        severity = results['overfitting_severity']
        severity_text = f'Overfitting Severity: {severity:.3f}'
        if severity > 0.5:
            severity_color = 'red'
            severity_desc = '(High)'
        elif severity > 0.2:
            severity_color = 'orange'
            severity_desc = '(Moderate)'
        else:
            severity_color = 'green'
            severity_desc = '(Low)'
        
        ax2.text(0.05, 0.95, f'{severity_text}\n{severity_desc}', 
                transform=ax2.transAxes, fontsize=11, fontweight='bold',
                color=severity_color, verticalalignment='top',
                bbox=dict(boxstyle='round', facecolor='white', alpha=0.8))
        
        plt.tight_layout()
        plt.show()
        
        # Print summary
        print(f"\nOverfitting Analysis Summary:")
        print(f"Overfitting Detected: {'Yes' if results['overfitting_detected'] else 'No'}")
        if results['overfitting_detected']:
            print(f"Overfitting Point: {results['overfitting_point']}")
        print(f"Optimal Complexity: {results['optimal_complexity']}")
        print(f"Overfitting Severity: {results['overfitting_severity']:.3f}")

# Demonstrate overfitting detection
print("Generating synthetic dataset for overfitting analysis...")
analyzer = OverfittingAnalyzer()

# Generate data
X, y_true, y_noisy = analyzer.generate_synthetic_data(n_samples=200, noise_level=0.15)

# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y_noisy, test_size=0.3, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.3, random_state=42)

# Analyze polynomial regression overfitting
print("\nAnalyzing polynomial regression overfitting...")

# Create polynomial pipeline
poly_pipeline = Pipeline([
    ('poly', PolynomialFeatures(include_bias=False)),
    ('scaler', StandardScaler()),
    ('regressor', LinearRegression())
])

# Analyze overfitting across different polynomial degrees
overfitting_results = analyzer.detect_overfitting(
    model=poly_pipeline,
    X_train=X_train,
    y_train=y_train,
    X_val=X_val,
    y_val=y_val,
    complexity_param='poly__degree',
    complexity_range=list(range(1, 16)),
    scoring='mse'
)

# Plot results
analyzer.plot_overfitting_analysis(overfitting_results)

Regularization Techniques

1. L1 Regularization (Lasso)

L1 regularization adds a penalty proportional to the sum of absolute values of parameters:

Loss_regularized = Loss_original + λ * Σ|w_i|
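
Because the absolute-value penalty is not differentiable at zero, its optimum pushes many coefficients exactly to zero, which is why Lasso doubles as a feature selector. Here is a minimal sketch of this sparsity effect on a toy quadratic target with redundant polynomial features (the specific alpha value is an arbitrary illustrative choice):

import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (100, 1))
y = 3 * x.ravel() ** 2 - 2 * x.ravel() + rng.normal(0, 0.1, 100)

# Degree-10 expansion: only the first two powers carry real signal
X_poly = StandardScaler().fit_transform(
    PolynomialFeatures(degree=10, include_bias=False).fit_transform(x)
)

ols = LinearRegression().fit(X_poly, y)
lasso = Lasso(alpha=0.05, max_iter=50_000).fit(X_poly, y)

print("Non-zero OLS coefficients:  ", np.sum(np.abs(ols.coef_) > 1e-6))
print("Non-zero Lasso coefficients:", np.sum(np.abs(lasso.coef_) > 1e-6))

The analyzer below explores the same effect systematically across a range of α values.
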
class L1RegularizationAnalyzer:
    """
    Comprehensive analyzer for L1 (Lasso) regularization
    """
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
        
    def analyze_l1_regularization(self, 
                                X_train: np.ndarray,
                                y_train: np.ndarray,
                                X_val: np.ndarray,
                                y_val: np.ndarray,
                                alpha_range: List[float],
                                max_degree: int = 10) -> Dict[str, any]:
        """
        Analyze L1 regularization effects across different alpha values
        
        Args:
            X_train, y_train: Training data
            X_val, y_val: Validation data
            alpha_range: Range of L1 regularization strengths
            max_degree: Maximum polynomial degree for feature expansion
            
        Returns:
            Dictionary with L1 regularization analysis results
        """
        results = {
            'alpha_range': alpha_range,
            'train_mse': [],
            'val_mse': [],
            'num_features_selected': [],
            'coefficients': [],
            'feature_importance_evolution': []
        }
        
        # Create polynomial features
        poly_features = PolynomialFeatures(degree=max_degree, include_bias=False)
        X_train_poly = poly_features.fit_transform(X_train)
        X_val_poly = poly_features.transform(X_val)
        
        # Standardize features
        scaler = StandardScaler()
        X_train_scaled = scaler.fit_transform(X_train_poly)
        X_val_scaled = scaler.transform(X_val_poly)
        
        feature_names = poly_features.get_feature_names_out(['x'])
        
        for alpha in alpha_range:
            # Fit Lasso regression
            lasso = Lasso(alpha=alpha, random_state=self.random_state, max_iter=2000)
            lasso.fit(X_train_scaled, y_train)
            
            # Make predictions
            train_pred = lasso.predict(X_train_scaled)
            val_pred = lasso.predict(X_val_scaled)
            
            # Calculate metrics
            train_mse = mean_squared_error(y_train, train_pred)
            val_mse = mean_squared_error(y_val, val_pred)
            
            # Count non-zero coefficients (selected features)
            num_selected = np.sum(np.abs(lasso.coef_) > 1e-6)
            
            # Store results
            results['train_mse'].append(train_mse)
            results['val_mse'].append(val_mse)
            results['num_features_selected'].append(num_selected)
            results['coefficients'].append(lasso.coef_.copy())
            
            # Feature importance (absolute coefficient values)
            feature_importance = np.abs(lasso.coef_)
            results['feature_importance_evolution'].append(feature_importance)
        
        results['feature_names'] = feature_names
        return results
    
    def plot_l1_analysis(self, results: Dict[str, any]) -> None:
        """Plot comprehensive L1 regularization analysis"""
        fig, axes = plt.subplots(2, 2, figsize=(16, 12))
        
        alpha_range = results['alpha_range']
        
        # Plot 1: Training and validation MSE
        axes[0, 0].semilogx(alpha_range, results['train_mse'], 'o-', 
                           linewidth=3, markersize=8, label='Training MSE', color='blue')
        axes[0, 0].semilogx(alpha_range, results['val_mse'], 's-', 
                           linewidth=3, markersize=8, label='Validation MSE', color='red')
        
        # Find optimal alpha
        optimal_idx = np.argmin(results['val_mse'])
        optimal_alpha = alpha_range[optimal_idx]
        axes[0, 0].axvline(x=optimal_alpha, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7, label=f'Optimal α={optimal_alpha:.4f}')
        
        axes[0, 0].set_xlabel('L1 Regularization Strength (α)', fontsize=12)
        axes[0, 0].set_ylabel('Mean Squared Error', fontsize=12)
        axes[0, 0].set_title('L1 Regularization: Model Performance', fontsize=14, fontweight='bold')
        axes[0, 0].legend(fontsize=11)
        axes[0, 0].grid(True, alpha=0.3)
        axes[0, 0].set_yscale('log')
        
        # Plot 2: Feature selection
        axes[0, 1].semilogx(alpha_range, results['num_features_selected'], 'o-', 
                           linewidth=3, markersize=8, color='purple')
        axes[0, 1].axvline(x=optimal_alpha, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7)
        axes[0, 1].set_xlabel('L1 Regularization Strength (α)', fontsize=12)
        axes[0, 1].set_ylabel('Number of Selected Features', fontsize=12)
        axes[0, 1].set_title('L1 Regularization: Feature Selection', fontsize=14, fontweight='bold')
        axes[0, 1].grid(True, alpha=0.3)
        
        # Plot 3: Coefficient paths
        coefficients = np.array(results['coefficients'])
        feature_names = results['feature_names']
        
        # Plot only top features to avoid clutter
        max_coef_per_feature = np.max(np.abs(coefficients), axis=0)
        top_features_idx = np.argsort(max_coef_per_feature)[-10:]  # Top 10 features
        
        for i, feature_idx in enumerate(top_features_idx):
            axes[1, 0].semilogx(alpha_range, coefficients[:, feature_idx], 
                               linewidth=2, alpha=0.8, 
                               label=feature_names[feature_idx] if i < 5 else "")
        
        axes[1, 0].axvline(x=optimal_alpha, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7)
        axes[1, 0].set_xlabel('L1 Regularization Strength (α)', fontsize=12)
        axes[1, 0].set_ylabel('Coefficient Value', fontsize=12)
        axes[1, 0].set_title('L1 Regularization: Coefficient Paths', fontsize=14, fontweight='bold')
        axes[1, 0].legend(fontsize=9, loc='best')
        axes[1, 0].grid(True, alpha=0.3)
        
        # Plot 4: Feature importance heatmap
        importance_matrix = np.array(results['feature_importance_evolution']).T
        
        # Select top features for visualization
        mean_importance = np.mean(importance_matrix, axis=1)
        top_features_mask = mean_importance > np.percentile(mean_importance, 70)
        
        im = axes[1, 1].imshow(importance_matrix[top_features_mask], 
                              aspect='auto', cmap='viridis', interpolation='nearest')
        axes[1, 1].set_xlabel('Alpha Index', fontsize=12)
        axes[1, 1].set_ylabel('Feature Index', fontsize=12)
        axes[1, 1].set_title('Feature Importance Evolution', fontsize=14, fontweight='bold')
        
        # Add colorbar
        plt.colorbar(im, ax=axes[1, 1], shrink=0.8)
        
        plt.tight_layout()
        plt.show()
        
        # Print analysis summary
        print(f"\nL1 Regularization Analysis Summary:")
        print(f"Optimal α: {optimal_alpha:.6f}")
        print(f"Features selected at optimal α: {results['num_features_selected'][optimal_idx]}")
        print(f"Training MSE at optimal α: {results['train_mse'][optimal_idx]:.6f}")
        print(f"Validation MSE at optimal α: {results['val_mse'][optimal_idx]:.6f}")
        
        # Show most important features at optimal alpha
        optimal_coeffs = results['coefficients'][optimal_idx]
        important_features = np.where(np.abs(optimal_coeffs) > 1e-6)[0]
        print(f"\nSelected features at optimal α:")
        for i, feature_idx in enumerate(important_features[:10]):  # Show top 10
            coeff_value = optimal_coeffs[feature_idx]
            feature_name = feature_names[feature_idx]
            print(f"  {feature_name}: {coeff_value:.4f}")

# Demonstrate L1 regularization
print("\nAnalyzing L1 (Lasso) regularization...")
l1_analyzer = L1RegularizationAnalyzer()

# Define alpha range (regularization strengths)
alpha_range = np.logspace(-4, 1, 50)  # From 0.0001 to 10

l1_results = l1_analyzer.analyze_l1_regularization(
    X_train=X_train,
    y_train=y_train,
    X_val=X_val,
    y_val=y_val,
    alpha_range=alpha_range,
    max_degree=12
)

l1_analyzer.plot_l1_analysis(l1_results)

2. L2 Regularization (Ridge)

L2 regularization adds a penalty proportional to the sum of squared parameters:

Loss_regularized = Loss_original + λ * Σw_i²
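
For a linear model the L2-penalized least-squares problem has a closed-form solution, w = (XᵀX + λI)⁻¹Xᵀy, which also explains why Ridge stabilizes ill-conditioned problems: adding λ to the diagonal lifts the small eigenvalues of XᵀX. A quick sketch verifying this against scikit-learn's Ridge (with fit_intercept=False so both solve exactly the same objective):

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 200)
alpha = 1.0

# Closed form: w = (X^T X + alpha * I)^(-1) X^T y
w_closed = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
w_ridge = Ridge(alpha=alpha, fit_intercept=False).fit(X, y).coef_

print(np.allclose(w_closed, w_ridge))  # expected: True

Unlike L1, the quadratic penalty shrinks coefficients smoothly toward zero without setting them exactly to zero, so Ridge keeps every feature but with damped weights.
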
class L2RegularizationAnalyzer:
    """
    Comprehensive analyzer for L2 (Ridge) regularization
    """
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
        
    def analyze_l2_regularization(self, 
                                X_train: np.ndarray,
                                y_train: np.ndarray,
                                X_val: np.ndarray,
                                y_val: np.ndarray,
                                alpha_range: List[float],
                                max_degree: int = 10) -> Dict[str, any]:
        """
        Analyze L2 regularization effects across different alpha values
        
        Args:
            X_train, y_train: Training data
            X_val, y_val: Validation data
            alpha_range: Range of L2 regularization strengths
            max_degree: Maximum polynomial degree for feature expansion
            
        Returns:
            Dictionary with L2 regularization analysis results
        """
        results = {
            'alpha_range': alpha_range,
            'train_mse': [],
            'val_mse': [],
            'coefficient_norms': [],
            'coefficients': [],
            'condition_numbers': []
        }
        
        # Create polynomial features
        poly_features = PolynomialFeatures(degree=max_degree, include_bias=False)
        X_train_poly = poly_features.fit_transform(X_train)
        X_val_poly = poly_features.transform(X_val)
        
        # Standardize features
        scaler = StandardScaler()
        X_train_scaled = scaler.fit_transform(X_train_poly)
        X_val_scaled = scaler.transform(X_val_poly)
        
        feature_names = poly_features.get_feature_names_out(['x'])
        
        for alpha in alpha_range:
            # Fit Ridge regression
            ridge = Ridge(alpha=alpha, random_state=self.random_state)
            ridge.fit(X_train_scaled, y_train)
            
            # Make predictions
            train_pred = ridge.predict(X_train_scaled)
            val_pred = ridge.predict(X_val_scaled)
            
            # Calculate metrics
            train_mse = mean_squared_error(y_train, train_pred)
            val_mse = mean_squared_error(y_val, val_pred)
            
            # Calculate coefficient L2 norm
            coeff_norm = np.linalg.norm(ridge.coef_)
            
            # Calculate condition number (measure of multicollinearity)
            try:
                # Add regularization to the normal equations
                XTX_reg = X_train_scaled.T @ X_train_scaled + alpha * np.eye(X_train_scaled.shape[1])
                condition_number = np.linalg.cond(XTX_reg)
            except np.linalg.LinAlgError:
                condition_number = np.inf
            
            # Store results
            results['train_mse'].append(train_mse)
            results['val_mse'].append(val_mse)
            results['coefficient_norms'].append(coeff_norm)
            results['coefficients'].append(ridge.coef_.copy())
            results['condition_numbers'].append(condition_number)
        
        results['feature_names'] = feature_names
        return results
    
    def plot_l2_analysis(self, results: Dict[str, any]) -> None:
        """Plot comprehensive L2 regularization analysis"""
        fig, axes = plt.subplots(2, 2, figsize=(16, 12))
        
        alpha_range = results['alpha_range']
        
        # Plot 1: Training and validation MSE
        axes[0, 0].semilogx(alpha_range, results['train_mse'], 'o-', 
                           linewidth=3, markersize=8, label='Training MSE', color='blue')
        axes[0, 0].semilogx(alpha_range, results['val_mse'], 's-', 
                           linewidth=3, markersize=8, label='Validation MSE', color='red')
        
        # Find optimal alpha
        optimal_idx = np.argmin(results['val_mse'])
        optimal_alpha = alpha_range[optimal_idx]
        axes[0, 0].axvline(x=optimal_alpha, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7, label=f'Optimal α={optimal_alpha:.4f}')
        
        axes[0, 0].set_xlabel('L2 Regularization Strength (α)', fontsize=12)
        axes[0, 0].set_ylabel('Mean Squared Error', fontsize=12)
        axes[0, 0].set_title('L2 Regularization: Model Performance', fontsize=14, fontweight='bold')
        axes[0, 0].legend(fontsize=11)
        axes[0, 0].grid(True, alpha=0.3)
        axes[0, 0].set_yscale('log')
        
        # Plot 2: Coefficient norm shrinkage
        axes[0, 1].semilogx(alpha_range, results['coefficient_norms'], 'o-', 
                           linewidth=3, markersize=8, color='purple')
        axes[0, 1].axvline(x=optimal_alpha, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7)
        axes[0, 1].set_xlabel('L2 Regularization Strength (α)', fontsize=12)
        axes[0, 1].set_ylabel('L2 Norm of Coefficients', fontsize=12)
        axes[0, 1].set_title('L2 Regularization: Coefficient Shrinkage', fontsize=14, fontweight='bold')
        axes[0, 1].grid(True, alpha=0.3)
        axes[0, 1].set_yscale('log')
        
        # Plot 3: Coefficient evolution (ridge paths)
        coefficients = np.array(results['coefficients'])
        feature_names = results['feature_names']
        
        # Plot top features to avoid clutter
        final_coeff_magnitude = np.abs(coefficients[0])  # Coefficients at smallest alpha
        top_features_idx = np.argsort(final_coeff_magnitude)[-10:]  # Top 10 features
        
        for i, feature_idx in enumerate(top_features_idx):
            axes[1, 0].semilogx(alpha_range, coefficients[:, feature_idx], 
                               linewidth=2, alpha=0.8,
                               label=feature_names[feature_idx] if i < 5 else "")
        
        axes[1, 0].axvline(x=optimal_alpha, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7)
        axes[1, 0].set_xlabel('L2 Regularization Strength (α)', fontsize=12)
        axes[1, 0].set_ylabel('Coefficient Value', fontsize=12)
        axes[1, 0].set_title('L2 Regularization: Ridge Paths', fontsize=14, fontweight='bold')
        axes[1, 0].legend(fontsize=9, loc='best')
        axes[1, 0].grid(True, alpha=0.3)
        
        # Plot 4: Condition number improvement
        condition_numbers = results['condition_numbers']
        finite_conditions = [cn for cn in condition_numbers if np.isfinite(cn)]
        finite_alphas = [alpha_range[i] for i, cn in enumerate(condition_numbers) if np.isfinite(cn)]
        
        if finite_conditions:
            axes[1, 1].loglog(finite_alphas, finite_conditions, 'o-', 
                             linewidth=3, markersize=8, color='orange')
            axes[1, 1].axvline(x=optimal_alpha, color='green', linestyle='--', 
                              linewidth=2, alpha=0.7)
            axes[1, 1].set_xlabel('L2 Regularization Strength (α)', fontsize=12)
            axes[1, 1].set_ylabel('Condition Number', fontsize=12)
            axes[1, 1].set_title('L2 Regularization: Numerical Stability', fontsize=14, fontweight='bold')
            axes[1, 1].grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
        
        # Print analysis summary
        print(f"\nL2 Regularization Analysis Summary:")
        print(f"Optimal α: {optimal_alpha:.6f}")
        print(f"Training MSE at optimal α: {results['train_mse'][optimal_idx]:.6f}")
        print(f"Validation MSE at optimal α: {results['val_mse'][optimal_idx]:.6f}")
        print(f"Coefficient L2 norm at optimal α: {results['coefficient_norms'][optimal_idx]:.6f}")
        if np.isfinite(results['condition_numbers'][optimal_idx]):
            print(f"Condition number at optimal α: {results['condition_numbers'][optimal_idx]:.2e}")

# Demonstrate L2 regularization
print("\nAnalyzing L2 (Ridge) regularization...")
l2_analyzer = L2RegularizationAnalyzer()

l2_results = l2_analyzer.analyze_l2_regularization(
    X_train=X_train,
    y_train=y_train,
    X_val=X_val,
    y_val=y_val,
    alpha_range=alpha_range,
    max_degree=12
)

l2_analyzer.plot_l2_analysis(l2_results)

3. Elastic Net Regularization

Elastic Net combines L1 and L2 regularization:

Loss_regularized = Loss_original + λ₁ * Σ|w_i| + λ₂ * Σw_i²
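
Note that scikit-learn's ElasticNet uses a single strength alpha and a mixing weight l1_ratio rather than two separate λ values: the penalty is alpha * l1_ratio * Σ|w_i| + 0.5 * alpha * (1 - l1_ratio) * Σw_i² (applied to a squared-error term scaled by 1/(2n)), so l1_ratio = 1 recovers Lasso and l1_ratio → 0 approaches Ridge-style shrinkage. A small sketch of how the mixing ratio affects sparsity, on arbitrary toy data chosen only for illustration:

import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 100)

# l1_ratio=1.0 is exactly the Lasso penalty
lasso = Lasso(alpha=0.1, max_iter=50_000).fit(X, y)
enet_pure_l1 = ElasticNet(alpha=0.1, l1_ratio=1.0, max_iter=50_000).fit(X, y)
print("Matches Lasso:", np.allclose(lasso.coef_, enet_pure_l1.coef_))

# Lower l1_ratio values tend to keep more (shrunken) coefficients
for r in (0.1, 0.5, 0.9):
    enet = ElasticNet(alpha=0.1, l1_ratio=r, max_iter=50_000).fit(X, y)
    print(f"l1_ratio={r}: non-zero coefficients = {np.sum(np.abs(enet.coef_) > 1e-6)}")
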
class ElasticNetAnalyzer:
    """
    Comprehensive analyzer for Elastic Net regularization
    """
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
        
    def analyze_elastic_net(self, 
                          X_train: np.ndarray,
                          y_train: np.ndarray,
                          X_val: np.ndarray,
                          y_val: np.ndarray,
                          alpha_range: List[float],
                          l1_ratio_range: List[float],
                          max_degree: int = 10) -> Dict[str, any]:
        """
        Analyze Elastic Net regularization across different alpha and l1_ratio values
        
        Args:
            X_train, y_train: Training data
            X_val, y_val: Validation data
            alpha_range: Range of total regularization strengths
            l1_ratio_range: Range of L1 vs L2 mixing ratios
            max_degree: Maximum polynomial degree for feature expansion
            
        Returns:
            Dictionary with Elastic Net analysis results
        """
        # Create polynomial features
        poly_features = PolynomialFeatures(degree=max_degree, include_bias=False)
        X_train_poly = poly_features.fit_transform(X_train)
        X_val_poly = poly_features.transform(X_val)
        
        # Standardize features
        scaler = StandardScaler()
        X_train_scaled = scaler.fit_transform(X_train_poly)
        X_val_scaled = scaler.transform(X_val_poly)
        
        # Grid search over alpha and l1_ratio
        results_grid = np.zeros((len(alpha_range), len(l1_ratio_range)))
        feature_selection_grid = np.zeros((len(alpha_range), len(l1_ratio_range)))
        best_score = float('inf')
        best_params = {}
        best_model = None
        
        for i, alpha in enumerate(alpha_range):
            for j, l1_ratio in enumerate(l1_ratio_range):
                # Fit Elastic Net
                elastic_net = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, 
                                       random_state=self.random_state, max_iter=2000)
                elastic_net.fit(X_train_scaled, y_train)
                
                # Predict and evaluate
                val_pred = elastic_net.predict(X_val_scaled)
                val_mse = mean_squared_error(y_val, val_pred)
                
                # Count selected features
                num_selected = np.sum(np.abs(elastic_net.coef_) > 1e-6)
                
                results_grid[i, j] = val_mse
                feature_selection_grid[i, j] = num_selected
                
                # Track best model
                if val_mse < best_score:
                    best_score = val_mse
                    best_params = {'alpha': alpha, 'l1_ratio': l1_ratio}
                    best_model = elastic_net
        
        results = {
            'alpha_range': alpha_range,
            'l1_ratio_range': l1_ratio_range,
            'validation_mse_grid': results_grid,
            'feature_selection_grid': feature_selection_grid,
            'best_params': best_params,
            'best_score': best_score,
            'best_model': best_model,
            'feature_names': poly_features.get_feature_names_out(['x'])
        }
        
        return results
    
    def plot_elastic_net_analysis(self, results: Dict[str, any]) -> None:
        """Plot comprehensive Elastic Net analysis"""
        fig, axes = plt.subplots(2, 2, figsize=(16, 12))
        
        alpha_range = results['alpha_range']
        l1_ratio_range = results['l1_ratio_range']
        
        # Plot 1: Validation MSE heatmap
        im1 = axes[0, 0].imshow(results['validation_mse_grid'], 
                               aspect='auto', cmap='viridis', interpolation='nearest',
                               extent=[l1_ratio_range[0], l1_ratio_range[-1], 
                                      np.log10(alpha_range[-1]), np.log10(alpha_range[0])])
        axes[0, 0].set_xlabel('L1 Ratio', fontsize=12)
        axes[0, 0].set_ylabel('log₁₀(α)', fontsize=12)
        axes[0, 0].set_title('Elastic Net: Validation MSE', fontsize=14, fontweight='bold')
        plt.colorbar(im1, ax=axes[0, 0], shrink=0.8)
        
        # Mark best parameters
        best_alpha = results['best_params']['alpha']
        best_l1_ratio = results['best_params']['l1_ratio']
        axes[0, 0].plot(best_l1_ratio, np.log10(best_alpha), 'r*', markersize=15, 
                       label=f'Best: α={best_alpha:.4f}, L1={best_l1_ratio:.2f}')
        axes[0, 0].legend(fontsize=10)
        
        # Plot 2: Feature selection heatmap
        im2 = axes[0, 1].imshow(results['feature_selection_grid'], 
                               aspect='auto', cmap='plasma', interpolation='nearest',
                               extent=[l1_ratio_range[0], l1_ratio_range[-1], 
                                      np.log10(alpha_range[-1]), np.log10(alpha_range[0])])
        axes[0, 1].set_xlabel('L1 Ratio', fontsize=12)
        axes[0, 1].set_ylabel('log₁₀(α)', fontsize=12)
        axes[0, 1].set_title('Elastic Net: Feature Selection', fontsize=14, fontweight='bold')
        plt.colorbar(im2, ax=axes[0, 1], shrink=0.8)
        axes[0, 1].plot(best_l1_ratio, np.log10(best_alpha), 'r*', markersize=15)
        
        # Plot 3: Cross-section at best alpha
        best_alpha_idx = np.argmin(np.abs(np.array(alpha_range) - best_alpha))
        mse_cross_section = results['validation_mse_grid'][best_alpha_idx, :]
        features_cross_section = results['feature_selection_grid'][best_alpha_idx, :]
        
        ax3_twin = axes[1, 0].twinx()
        line1 = axes[1, 0].plot(l1_ratio_range, mse_cross_section, 'b-o', 
                               linewidth=3, markersize=8, label='Validation MSE')
        line2 = ax3_twin.plot(l1_ratio_range, features_cross_section, 'r-s', 
                             linewidth=3, markersize=8, label='Features Selected')
        
        axes[1, 0].axvline(x=best_l1_ratio, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7)
        axes[1, 0].set_xlabel('L1 Ratio', fontsize=12)
        axes[1, 0].set_ylabel('Validation MSE', fontsize=12, color='blue')
        ax3_twin.set_ylabel('Number of Features', fontsize=12, color='red')
        axes[1, 0].set_title(f'Cross-section at α={best_alpha:.4f}', fontsize=14, fontweight='bold')
        
        # Combine legends
        lines = line1 + line2
        labels = [l.get_label() for l in lines]
        axes[1, 0].legend(lines, labels, loc='best', fontsize=11)
        axes[1, 0].grid(True, alpha=0.3)
        
        # Plot 4: Best model coefficients
        best_model = results['best_model']
        feature_names = results['feature_names']
        coefficients = best_model.coef_
        
        # Show only non-zero coefficients
        non_zero_idx = np.where(np.abs(coefficients) > 1e-6)[0]
        if len(non_zero_idx) > 0:
            selected_coeffs = coefficients[non_zero_idx]
            selected_names = [feature_names[i] for i in non_zero_idx]
            
            # Sort by absolute value
            sort_idx = np.argsort(np.abs(selected_coeffs))[::-1]
            selected_coeffs = selected_coeffs[sort_idx]
            selected_names = [selected_names[i] for i in sort_idx]
            
            # Plot top 15 features
            n_show = min(15, len(selected_coeffs))
            y_pos = np.arange(n_show)
            
            bars = axes[1, 1].barh(y_pos, selected_coeffs[:n_show], 
                                  color=['red' if c < 0 else 'blue' for c in selected_coeffs[:n_show]],
                                  alpha=0.7)
            axes[1, 1].set_yticks(y_pos)
            axes[1, 1].set_yticklabels(selected_names[:n_show], fontsize=10)
            axes[1, 1].set_xlabel('Coefficient Value', fontsize=12)
            axes[1, 1].set_title('Selected Features (Best Model)', fontsize=14, fontweight='bold')
            axes[1, 1].grid(True, alpha=0.3, axis='x')
        
        plt.tight_layout()
        plt.show()
        
        # Print analysis summary
        print(f"\nElastic Net Analysis Summary:")
        print(f"Best parameters: α={best_alpha:.6f}, L1 ratio={best_l1_ratio:.3f}")
        print(f"Best validation MSE: {results['best_score']:.6f}")
        print(f"Features selected: {np.sum(np.abs(best_model.coef_) > 1e-6)}")
        
        # Interpretation of L1 ratio
        if best_l1_ratio > 0.8:
            interpretation = "Mostly L1 regularization (feature selection dominant)"
        elif best_l1_ratio < 0.2:
            interpretation = "Mostly L2 regularization (coefficient shrinkage dominant)"
        else:
            interpretation = "Balanced L1/L2 regularization"
        print(f"Regularization interpretation: {interpretation}")

# Demonstrate Elastic Net regularization
print("\nAnalyzing Elastic Net regularization...")
elastic_analyzer = ElasticNetAnalyzer()

# Define parameter ranges
alpha_range_en = np.logspace(-4, 0, 20)  # From 0.0001 to 1
l1_ratio_range = np.linspace(0.1, 0.9, 9)  # L1 ratio from 0.1 to 0.9

elastic_results = elastic_analyzer.analyze_elastic_net(
    X_train=X_train,
    y_train=y_train,
    X_val=X_val,
    y_val=y_val,
    alpha_range=alpha_range_en,
    l1_ratio_range=l1_ratio_range,
    max_degree=10
)

elastic_analyzer.plot_elastic_net_analysis(elastic_results)

Advanced Regularization Techniques

1. Early Stopping
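
Early stopping treats the number of training epochs as the complexity knob: training halts once the validation loss has failed to improve for a fixed number of consecutive epochs (the patience), ideally rolling the weights back to the best validation epoch. A self-contained toy sketch of the idea, using plain gradient descent on an over-parameterized linear model (the data, learning rate, and patience are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))          # 30 features, only 3 carry signal
w_true = np.zeros(30)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + rng.normal(0, 0.5, 60)
X_tr, X_va, y_tr, y_va = X[:40], X[40:], y[:40], y[40:]

w = np.zeros(30)
best_w, best_val, patience, wait = w.copy(), np.inf, 20, 0
for epoch in range(5000):
    grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)   # gradient of the training MSE
    w -= 0.01 * grad
    val_mse = np.mean((X_va @ w - y_va) ** 2)
    if val_mse < best_val:                          # still improving: remember these weights
        best_val, best_w, wait = val_mse, w.copy(), 0
    else:
        wait += 1
        if wait >= patience:                        # no improvement for `patience` epochs
            break

w = best_w                                          # roll back to the best validation epoch
print(f"Stopped after {epoch + 1} epochs, best validation MSE = {best_val:.3f}")

The analyzer below applies the same logic to a small neural network and compares several patience values.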

class EarlyStoppingAnalyzer:
    """
    Analyzer for early stopping regularization technique
    """
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
        
    def analyze_early_stopping(self, 
                             X_train: np.ndarray,
                             y_train: np.ndarray,
                             X_val: np.ndarray,
                             y_val: np.ndarray,
                             max_epochs: int = 1000,
                             patience_values: List[int] = [5, 10, 20, 50]) -> Dict[str, any]:
        """
        Analyze early stopping with different patience values
        
        Args:
            X_train, y_train: Training data
            X_val, y_val: Validation data
            max_epochs: Maximum number of training epochs
            patience_values: List of patience values to test
            
        Returns:
            Dictionary with early stopping analysis results
        """
        results = {
            'patience_values': patience_values,
            'training_curves': {},
            'stopping_epochs': {},
            'final_performances': {}
        }
        
        for patience in patience_values:
            print(f"Training with patience = {patience}")
            
            # Build the network; early stopping is handled manually below so that
            # the externally supplied validation set decides when to stop
            # (MLPRegressor's built-in early_stopping is incompatible with partial_fit)
            model = MLPRegressor(
                hidden_layer_sizes=(100, 50),
                max_iter=max_epochs,
                early_stopping=False,
                random_state=self.random_state,
                learning_rate_init=0.01
            )
            
            # Manual training loop so the provided validation set controls stopping
            train_losses = []
            val_losses = []
            
            # Initialize the network with a first pass over the training data
            model.partial_fit(X_train, y_train)
            
            best_val_loss = float('inf')
            patience_counter = 0
            epoch = 0
            
            while epoch < max_epochs and patience_counter < patience:
                # Train for one more epoch
                model.partial_fit(X_train, y_train)
                
                # Calculate losses on the training and validation sets
                train_pred = model.predict(X_train)
                val_pred = model.predict(X_val)
                
                train_loss = mean_squared_error(y_train, train_pred)
                val_loss = mean_squared_error(y_val, val_pred)
                
                train_losses.append(train_loss)
                val_losses.append(val_loss)
                
                # Early stopping logic
                if val_loss < best_val_loss:
                    best_val_loss = val_loss
                    patience_counter = 0
                else:
                    patience_counter += 1
                
                epoch += 1
            
            results['training_curves'][patience] = {
                'train_losses': train_losses,
                'val_losses': val_losses,
                'epochs': list(range(len(train_losses)))
            }
            results['stopping_epochs'][patience] = epoch
            results['final_performances'][patience] = {
                'train_loss': train_losses[-1],
                'val_loss': val_losses[-1]
            }
        
        return results
    
    def plot_early_stopping_analysis(self, results: Dict[str, any]) -> None:
        """Plot early stopping analysis"""
        patience_values = results['patience_values']
        n_patience = len(patience_values)
        
        fig, axes = plt.subplots(2, 2, figsize=(16, 12))
        colors = plt.cm.viridis(np.linspace(0, 1, n_patience))
        
        # Plot 1: Training curves for different patience values
        for i, patience in enumerate(patience_values):
            curve_data = results['training_curves'][patience]
            epochs = curve_data['epochs']
            train_losses = curve_data['train_losses']
            val_losses = curve_data['val_losses']
            
            axes[0, 0].plot(epochs, train_losses, '--', color=colors[i], alpha=0.7,
                           label=f'Train (patience={patience})')
            axes[0, 0].plot(epochs, val_losses, '-', color=colors[i], linewidth=2,
                           label=f'Val (patience={patience})')
        
        axes[0, 0].set_xlabel('Epoch', fontsize=12)
        axes[0, 0].set_ylabel('Loss', fontsize=12)
        axes[0, 0].set_title('Early Stopping: Training Curves', fontsize=14, fontweight='bold')
        axes[0, 0].legend(fontsize=9, bbox_to_anchor=(1.05, 1), loc='upper left')
        axes[0, 0].grid(True, alpha=0.3)
        axes[0, 0].set_yscale('log')
        
        # Plot 2: Stopping epochs vs patience
        stopping_epochs = [results['stopping_epochs'][p] for p in patience_values]
        axes[0, 1].plot(patience_values, stopping_epochs, 'o-', linewidth=3, markersize=8,
                       color='red')
        axes[0, 1].set_xlabel('Patience', fontsize=12)
        axes[0, 1].set_ylabel('Stopping Epoch', fontsize=12)
        axes[0, 1].set_title('Stopping Epoch vs Patience', fontsize=14, fontweight='bold')
        axes[0, 1].grid(True, alpha=0.3)
        
        # Plot 3: Final performance comparison
        final_train_losses = [results['final_performances'][p]['train_loss'] for p in patience_values]
        final_val_losses = [results['final_performances'][p]['val_loss'] for p in patience_values]
        
        x_pos = np.arange(len(patience_values))
        width = 0.35
        
        bars1 = axes[1, 0].bar(x_pos - width/2, final_train_losses, width, 
                              label='Training Loss', alpha=0.8, color='blue')
        bars2 = axes[1, 0].bar(x_pos + width/2, final_val_losses, width, 
                              label='Validation Loss', alpha=0.8, color='red')
        
        axes[1, 0].set_xlabel('Patience Value', fontsize=12)
        axes[1, 0].set_ylabel('Final Loss', fontsize=12)
        axes[1, 0].set_title('Final Performance vs Patience', fontsize=14, fontweight='bold')
        axes[1, 0].set_xticks(x_pos)
        axes[1, 0].set_xticklabels(patience_values)
        axes[1, 0].legend(fontsize=11)
        axes[1, 0].grid(True, alpha=0.3)
        axes[1, 0].set_yscale('log')
        
        # Plot 4: Overfitting analysis
        generalization_gaps = [final_val_losses[i] - final_train_losses[i] 
                             for i in range(len(patience_values))]
        
        axes[1, 1].plot(patience_values, generalization_gaps, 'o-', linewidth=3, markersize=8,
                       color='purple')
        axes[1, 1].axhline(y=0, color='black', linestyle='--', alpha=0.5)
        axes[1, 1].set_xlabel('Patience', fontsize=12)
        axes[1, 1].set_ylabel('Generalization Gap (Val - Train)', fontsize=12)
        axes[1, 1].set_title('Overfitting vs Patience', fontsize=14, fontweight='bold')
        axes[1, 1].grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
        
        # Find optimal patience
        optimal_idx = np.argmin(final_val_losses)
        optimal_patience = patience_values[optimal_idx]
        
        print(f"\nEarly Stopping Analysis Summary:")
        print(f"Optimal patience: {optimal_patience}")
        print(f"Stopping epoch at optimal patience: {results['stopping_epochs'][optimal_patience]}")
        print(f"Final validation loss: {final_val_losses[optimal_idx]:.6f}")
        print(f"Generalization gap: {generalization_gaps[optimal_idx]:.6f}")

# Demonstrate early stopping (Note: This is computationally intensive)
print("\nAnalyzing early stopping regularization...")
early_stopping_analyzer = EarlyStoppingAnalyzer()

# Use a subset of data for faster computation
n_subset = min(500, len(X_train))
indices = np.random.choice(len(X_train), n_subset, replace=False)
X_train_subset = X_train[indices]
y_train_subset = y_train[indices]

early_stopping_results = early_stopping_analyzer.analyze_early_stopping(
    X_train=X_train_subset,
    y_train=y_train_subset,
    X_val=X_val,
    y_val=y_val,
    max_epochs=200,
    patience_values=[5, 10, 20, 30]
)

early_stopping_analyzer.plot_early_stopping_analysis(early_stopping_results)

2. Dropout Regularization
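
Dropout randomly zeroes each hidden unit with probability p on every training step, which discourages units from co-adapting and acts like training an ensemble of thinned sub-networks. PyTorch's nn.Dropout implements inverted dropout: the surviving activations are scaled by 1/(1 - p) during training, and the layer becomes a no-op in evaluation mode. A quick check of that behavior (assuming PyTorch is installed):

import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()
print(drop(x))   # roughly half the entries are zeroed, the rest scaled to 2.0 = 1 / (1 - 0.5)
drop.eval()
print(drop(x))   # identical to x: dropout is inactive at inference time

The analyzer below, which also requires PyTorch, compares several dropout rates on the same architecture.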

import torch
import torch.nn as nn
import torch.optim as optim

class DropoutAnalyzer:
    """
    Analyzer for dropout regularization (requires PyTorch)
    """
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
        torch.manual_seed(random_state)
        
    def create_neural_network(self, input_size: int, dropout_rate: float = 0.0) -> nn.Module:
        """Create a neural network with specified dropout rate"""
        return nn.Sequential(
            nn.Linear(input_size, 128),
            nn.ReLU(),
            nn.Dropout(dropout_rate),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Dropout(dropout_rate),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Dropout(dropout_rate),
            nn.Linear(32, 1)
        )
    
    def analyze_dropout(self, 
                       X_train: np.ndarray,
                       y_train: np.ndarray,
                       X_val: np.ndarray,
                       y_val: np.ndarray,
                       dropout_rates: List[float],
                       epochs: int = 100) -> Dict[str, any]:
        """
        Analyze dropout regularization with different dropout rates
        
        Args:
            X_train, y_train: Training data
            X_val, y_val: Validation data
            dropout_rates: List of dropout rates to test
            epochs: Number of training epochs
            
        Returns:
            Dictionary with dropout analysis results
        """
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        
        # Convert data to tensors
        X_train_tensor = torch.FloatTensor(X_train).to(device)
        y_train_tensor = torch.FloatTensor(y_train.reshape(-1, 1)).to(device)
        X_val_tensor = torch.FloatTensor(X_val).to(device)
        y_val_tensor = torch.FloatTensor(y_val.reshape(-1, 1)).to(device)
        
        results = {
            'dropout_rates': dropout_rates,
            'training_curves': {},
            'final_performances': {}
        }
        
        for dropout_rate in dropout_rates:
            print(f"Training with dropout rate = {dropout_rate}")
            
            # Create model
            model = self.create_neural_network(X_train.shape[1], dropout_rate).to(device)
            criterion = nn.MSELoss()
            optimizer = optim.Adam(model.parameters(), lr=0.001)
            
            train_losses = []
            val_losses = []
            
            for epoch in range(epochs):
                # Training
                model.train()
                optimizer.zero_grad()
                train_pred = model(X_train_tensor)
                train_loss = criterion(train_pred, y_train_tensor)
                train_loss.backward()
                optimizer.step()
                
                # Validation
                model.eval()
                with torch.no_grad():
                    val_pred = model(X_val_tensor)
                    val_loss = criterion(val_pred, y_val_tensor)
                
                train_losses.append(train_loss.item())
                val_losses.append(val_loss.item())
                
                if epoch % 20 == 0:
                    print(f"  Epoch {epoch}: Train Loss = {train_loss.item():.6f}, "
                          f"Val Loss = {val_loss.item():.6f}")
            
            results['training_curves'][dropout_rate] = {
                'train_losses': train_losses,
                'val_losses': val_losses,
                'epochs': list(range(epochs))
            }
            results['final_performances'][dropout_rate] = {
                'train_loss': train_losses[-1],
                'val_loss': val_losses[-1]
            }
        
        return results
    
    def plot_dropout_analysis(self, results: Dict[str, any]) -> None:
        """Plot dropout regularization analysis"""
        dropout_rates = results['dropout_rates']
        n_rates = len(dropout_rates)
        
        fig, axes = plt.subplots(2, 2, figsize=(16, 12))
        colors = plt.cm.viridis(np.linspace(0, 1, n_rates))
        
        # Plot 1: Training curves
        for i, dropout_rate in enumerate(dropout_rates):
            curve_data = results['training_curves'][dropout_rate]
            epochs = curve_data['epochs']
            train_losses = curve_data['train_losses']
            val_losses = curve_data['val_losses']
            
            axes[0, 0].plot(epochs, train_losses, '--', color=colors[i], alpha=0.7,
                           label=f'Train (dropout={dropout_rate})')
            axes[0, 0].plot(epochs, val_losses, '-', color=colors[i], linewidth=2,
                           label=f'Val (dropout={dropout_rate})')
        
        axes[0, 0].set_xlabel('Epoch', fontsize=12)
        axes[0, 0].set_ylabel('Loss', fontsize=12)
        axes[0, 0].set_title('Dropout: Training Curves', fontsize=14, fontweight='bold')
        axes[0, 0].legend(fontsize=9, bbox_to_anchor=(1.05, 1), loc='upper left')
        axes[0, 0].grid(True, alpha=0.3)
        axes[0, 0].set_yscale('log')
        
        # Plot 2: Final performance vs dropout rate
        final_train_losses = [results['final_performances'][dr]['train_loss'] for dr in dropout_rates]
        final_val_losses = [results['final_performances'][dr]['val_loss'] for dr in dropout_rates]
        
        axes[0, 1].plot(dropout_rates, final_train_losses, 'o-', linewidth=3, markersize=8,
                       label='Training Loss', color='blue')
        axes[0, 1].plot(dropout_rates, final_val_losses, 's-', linewidth=3, markersize=8,
                       label='Validation Loss', color='red')
        
        # Find optimal dropout rate
        optimal_idx = np.argmin(final_val_losses)
        optimal_dropout = dropout_rates[optimal_idx]
        axes[0, 1].axvline(x=optimal_dropout, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7, label=f'Optimal ({optimal_dropout})')
        
        axes[0, 1].set_xlabel('Dropout Rate', fontsize=12)
        axes[0, 1].set_ylabel('Final Loss', fontsize=12)
        axes[0, 1].set_title('Final Performance vs Dropout Rate', fontsize=14, fontweight='bold')
        axes[0, 1].legend(fontsize=11)
        axes[0, 1].grid(True, alpha=0.3)
        axes[0, 1].set_yscale('log')
        
        # Plot 3: Generalization gap
        generalization_gaps = [final_val_losses[i] - final_train_losses[i] 
                             for i in range(len(dropout_rates))]
        
        axes[1, 0].plot(dropout_rates, generalization_gaps, 'o-', linewidth=3, markersize=8,
                       color='purple')
        axes[1, 0].axhline(y=0, color='black', linestyle='--', alpha=0.5)
        axes[1, 0].axvline(x=optimal_dropout, color='green', linestyle='--', 
                          linewidth=2, alpha=0.7)
        axes[1, 0].set_xlabel('Dropout Rate', fontsize=12)
        axes[1, 0].set_ylabel('Generalization Gap (Val - Train)', fontsize=12)
        axes[1, 0].set_title('Overfitting vs Dropout Rate', fontsize=14, fontweight='bold')
        axes[1, 0].grid(True, alpha=0.3)
        
        # Plot 4: Performance improvement
        baseline_val_loss = final_val_losses[0]  # Assuming first is no dropout
        improvements = [(baseline_val_loss - val_loss) / baseline_val_loss * 100 
                       for val_loss in final_val_losses]
        
        bars = axes[1, 1].bar(range(len(dropout_rates)), improvements, 
                             color=['red' if imp < 0 else 'green' for imp in improvements],
                             alpha=0.7)
        axes[1, 1].set_xlabel('Dropout Rate', fontsize=12)
        axes[1, 1].set_ylabel('Performance Improvement (%)', fontsize=12)
        axes[1, 1].set_title('Performance Improvement vs Baseline', fontsize=14, fontweight='bold')
        axes[1, 1].set_xticks(range(len(dropout_rates)))
        axes[1, 1].set_xticklabels([f'{dr:.1f}' for dr in dropout_rates])
        axes[1, 1].grid(True, alpha=0.3, axis='y')
        axes[1, 1].axhline(y=0, color='black', linestyle='-', alpha=0.5)
        
        plt.tight_layout()
        plt.show()
        
        print(f"\nDropout Analysis Summary:")
        print(f"Optimal dropout rate: {optimal_dropout}")
        print(f"Final validation loss: {final_val_losses[optimal_idx]:.6f}")
        print(f"Generalization gap: {generalization_gaps[optimal_idx]:.6f}")
        print(f"Performance improvement: {improvements[optimal_idx]:.2f}%")

# Demonstrate dropout regularization (dropout training runs on CPU as well, so no GPU is required)
print("\nAnalyzing dropout regularization...")
dropout_analyzer = DropoutAnalyzer()

# Expand features for the neural network
poly_features_nn = PolynomialFeatures(degree=3, include_bias=False)
X_train_nn = poly_features_nn.fit_transform(X_train)
X_val_nn = poly_features_nn.transform(X_val)

# Standardize the expanded features
scaler_nn = StandardScaler()
X_train_nn = scaler_nn.fit_transform(X_train_nn)
X_val_nn = scaler_nn.transform(X_val_nn)

dropout_results = dropout_analyzer.analyze_dropout(
    X_train=X_train_nn,
    y_train=y_train,
    X_val=X_val_nn,
    y_val=y_val,
    dropout_rates=[0.0, 0.1, 0.2, 0.3, 0.5],  # 0.0 first so it serves as the no-dropout baseline
    epochs=100
)

dropout_analyzer.plot_dropout_analysis(dropout_results)

Comprehensive Regularization Comparison

class RegularizationComparison:
    """
    Comprehensive comparison of different regularization techniques
    """
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
        self.results = {}
        
    def compare_regularization_methods(self, 
                                     X_train: np.ndarray,
                                     y_train: np.ndarray,
                                     X_val: np.ndarray,
                                     y_val: np.ndarray,
                                     X_test: np.ndarray,
                                     y_test: np.ndarray) -> Dict[str, any]:
        """
        Compare different regularization methods
        
        Args:
            X_train, y_train: Training data
            X_val, y_val: Validation data  
            X_test, y_test: Test data
            
        Returns:
            Dictionary with comparison results
        """
        # Create polynomial features
        poly_features = PolynomialFeatures(degree=10, include_bias=False)
        X_train_poly = poly_features.fit_transform(X_train)
        X_val_poly = poly_features.transform(X_val)
        X_test_poly = poly_features.transform(X_test)
        
        # Standardize features
        scaler = StandardScaler()
        X_train_scaled = scaler.fit_transform(X_train_poly)
        X_val_scaled = scaler.transform(X_val_poly)
        X_test_scaled = scaler.transform(X_test_poly)
        
        methods = {
            'No Regularization': LinearRegression(),
            'Ridge (L2)': Ridge(alpha=0.1),
            'Lasso (L1)': Lasso(alpha=0.01, max_iter=2000),
            'Elastic Net': ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=2000),
            'Random Forest': RandomForestRegressor(n_estimators=100, max_depth=10, 
                                                 random_state=self.random_state)
        }
        
        results = {
            'methods': list(methods.keys()),
            'train_mse': [],
            'val_mse': [],
            'test_mse': [],
            'train_r2': [],
            'val_r2': [],
            'test_r2': [],
            'num_features': [],
            'models': {}
        }
        
        for method_name, model in methods.items():
            print(f"Training {method_name}...")
            
            # Fit model
            if method_name == 'Random Forest':
                # Random Forest works better with original features
                model.fit(X_train, y_train)
                train_pred = model.predict(X_train)
                val_pred = model.predict(X_val)
                test_pred = model.predict(X_test)
            else:
                model.fit(X_train_scaled, y_train)
                train_pred = model.predict(X_train_scaled)
                val_pred = model.predict(X_val_scaled)
                test_pred = model.predict(X_test_scaled)
            
            # Calculate metrics
            train_mse = mean_squared_error(y_train, train_pred)
            val_mse = mean_squared_error(y_val, val_pred)
            test_mse = mean_squared_error(y_test, test_pred)
            
            train_r2 = r2_score(y_train, train_pred)
            val_r2 = r2_score(y_val, val_pred)
            test_r2 = r2_score(y_test, test_pred)
            
            # Count effective features
            if hasattr(model, 'coef_'):
                num_features = np.sum(np.abs(model.coef_) > 1e-6)
            elif hasattr(model, 'feature_importances_'):
                num_features = np.sum(model.feature_importances_ > 1e-6)
            else:
                num_features = X_train_scaled.shape[1]
            
            # Store results
            results['train_mse'].append(train_mse)
            results['val_mse'].append(val_mse)
            results['test_mse'].append(test_mse)
            results['train_r2'].append(train_r2)
            results['val_r2'].append(val_r2)
            results['test_r2'].append(test_r2)
            results['num_features'].append(num_features)
            results['models'][method_name] = model
        
        self.results = results
        return results
    
    def plot_comparison(self, results: Dict[str, any]) -> None:
        """Plot comprehensive comparison of regularization methods"""
        fig, axes = plt.subplots(2, 3, figsize=(20, 12))
        
        methods = results['methods']
        n_methods = len(methods)
        x_pos = np.arange(n_methods)
        
        # Plot 1: MSE comparison
        train_mse = results['train_mse']
        val_mse = results['val_mse']
        test_mse = results['test_mse']
        
        width = 0.25
        axes[0, 0].bar(x_pos - width, train_mse, width, label='Train', alpha=0.8, color='blue')
        axes[0, 0].bar(x_pos, val_mse, width, label='Validation', alpha=0.8, color='red')
        axes[0, 0].bar(x_pos + width, test_mse, width, label='Test', alpha=0.8, color='green')
        
        axes[0, 0].set_xlabel('Regularization Method', fontsize=12)
        axes[0, 0].set_ylabel('Mean Squared Error', fontsize=12)
        axes[0, 0].set_title('MSE Comparison', fontsize=14, fontweight='bold')
        axes[0, 0].set_xticks(x_pos)
        axes[0, 0].set_xticklabels(methods, rotation=45, ha='right')
        axes[0, 0].legend(fontsize=11)
        axes[0, 0].grid(True, alpha=0.3)
        axes[0, 0].set_yscale('log')
        
        # Plot 2: R² comparison
        train_r2 = results['train_r2']
        val_r2 = results['val_r2']
        test_r2 = results['test_r2']
        
        axes[0, 1].bar(x_pos - width, train_r2, width, label='Train', alpha=0.8, color='blue')
        axes[0, 1].bar(x_pos, val_r2, width, label='Validation', alpha=0.8, color='red')
        axes[0, 1].bar(x_pos + width, test_r2, width, label='Test', alpha=0.8, color='green')
        
        axes[0, 1].set_xlabel('Regularization Method', fontsize=12)
        axes[0, 1].set_ylabel('R² Score', fontsize=12)
        axes[0, 1].set_title('R² Comparison', fontsize=14, fontweight='bold')
        axes[0, 1].set_xticks(x_pos)
        axes[0, 1].set_xticklabels(methods, rotation=45, ha='right')
        axes[0, 1].legend(fontsize=11)
        axes[0, 1].grid(True, alpha=0.3)
        
        # Plot 3: Feature selection
        num_features = results['num_features']
        bars = axes[0, 2].bar(x_pos, num_features, alpha=0.8, color='purple')
        axes[0, 2].set_xlabel('Regularization Method', fontsize=12)
        axes[0, 2].set_ylabel('Number of Features Used', fontsize=12)
        axes[0, 2].set_title('Feature Selection', fontsize=14, fontweight='bold')
        axes[0, 2].set_xticks(x_pos)
        axes[0, 2].set_xticklabels(methods, rotation=45, ha='right')
        axes[0, 2].grid(True, alpha=0.3)
        
        # Add value labels on bars
        for bar, value in zip(bars, num_features):
            axes[0, 2].text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.5, 
                           f'{value}', ha='center', va='bottom', fontsize=10)
        
        # Plot 4: Generalization gap (Validation - Training MSE)
        generalization_gaps = [val_mse[i] - train_mse[i] for i in range(n_methods)]
        bars = axes[1, 0].bar(x_pos, generalization_gaps, 
                             color=['red' if gap > 0 else 'green' for gap in generalization_gaps],
                             alpha=0.8)
        axes[1, 0].axhline(y=0, color='black', linestyle='--', alpha=0.5)
        axes[1, 0].set_xlabel('Regularization Method', fontsize=12)
        axes[1, 0].set_ylabel('Generalization Gap (Val - Train MSE)', fontsize=12)
        axes[1, 0].set_title('Overfitting Analysis', fontsize=14, fontweight='bold')
        axes[1, 0].set_xticks(x_pos)
        axes[1, 0].set_xticklabels(methods, rotation=45, ha='right')
        axes[1, 0].grid(True, alpha=0.3)
        
        # Plot 5: Test performance ranking
        test_performance_ranking = np.argsort(test_mse)  # Lower MSE is better
        ranked_methods = [methods[i] for i in test_performance_ranking]
        ranked_test_mse = [test_mse[i] for i in test_performance_ranking]
        
        bars = axes[1, 1].barh(range(n_methods), ranked_test_mse, alpha=0.8, 
                              color=plt.cm.RdYlGn_r(np.linspace(0.3, 0.9, n_methods)))
        axes[1, 1].set_xlabel('Test MSE', fontsize=12)
        axes[1, 1].set_ylabel('Method (Ranked)', fontsize=12)
        axes[1, 1].set_title('Test Performance Ranking', fontsize=14, fontweight='bold')
        axes[1, 1].set_yticks(range(n_methods))
        axes[1, 1].set_yticklabels(ranked_methods)
        axes[1, 1].grid(True, alpha=0.3, axis='x')
        axes[1, 1].set_xscale('log')
        
        # Add rank labels
        for i, (bar, mse) in enumerate(zip(bars, ranked_test_mse)):
            axes[1, 1].text(bar.get_width() * 1.1, bar.get_y() + bar.get_height()/2, 
                           f'#{i+1} ({mse:.4f})', ha='left', va='center', fontsize=10)
        
        # Plot 6: Bias-Variance tradeoff visualization
        # Proxy for bias: each model's test MSE minus the best observed test MSE
        best_test_mse = min(test_mse)
        bias_proxy = [(mse - best_test_mse) for mse in test_mse]
        variance_proxy = generalization_gaps
        
        axes[1, 2].scatter(bias_proxy, variance_proxy, s=200, alpha=0.7, c=range(n_methods), 
                          cmap='viridis')
        
        for i, method in enumerate(methods):
            axes[1, 2].annotate(method, (bias_proxy[i], variance_proxy[i]), 
                               xytext=(5, 5), textcoords='offset points', fontsize=9)
        
        axes[1, 2].set_xlabel('Bias Proxy (Test MSE - Best Test MSE)', fontsize=12)
        axes[1, 2].set_ylabel('Variance Proxy (Generalization Gap)', fontsize=12)
        axes[1, 2].set_title('Bias-Variance Tradeoff', fontsize=14, fontweight='bold')
        axes[1, 2].grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
        
        # Print detailed comparison
        print("\n" + "="*100)
        print("COMPREHENSIVE REGULARIZATION COMPARISON")
        print("="*100)
        
        print(f"{'Method':<15} {'Train MSE':<12} {'Val MSE':<12} {'Test MSE':<12} "
              f"{'Test R²':<10} {'Features':<10} {'Gen Gap':<10}")
        print("-"*100)
        
        for i, method in enumerate(methods):
            print(f"{method:<15} {train_mse[i]:<12.6f} {val_mse[i]:<12.6f} "
                  f"{test_mse[i]:<12.6f} {test_r2[i]:<10.3f} {num_features[i]:<10} "
                  f"{generalization_gaps[i]:<10.6f}")
        
        # Recommendations
        best_test_idx = np.argmin(test_mse)
        best_generalization_idx = np.argmin([abs(gap) for gap in generalization_gaps])
        
        print(f"\nRECOMMENDATIONS:")
        print(f"Best Test Performance: {methods[best_test_idx]} (MSE: {test_mse[best_test_idx]:.6f})")
        print(f"Best Generalization: {methods[best_generalization_idx]} (Gap: {generalization_gaps[best_generalization_idx]:.6f})")
        

# Run comprehensive comparison
print("\nRunning comprehensive regularization comparison...")
comparison = RegularizationComparison()

comparison_results = comparison.compare_regularization_methods(
    X_train=X_train,
    y_train=y_train,
    X_val=X_val,
    y_val=y_val,
    X_test=X_test,
    y_test=y_test
)

comparison.plot_comparison(comparison_results)

Performance Metrics and Best Practices

Performance Summary Table

| Regularization Method | Overfitting Prevention | Feature Selection | Computational Cost | Best Use Case |
| --- | --- | --- | --- | --- |
| L1 (Lasso) | ✅ High | ✅ Excellent | Low | Sparse features, interpretability |
| L2 (Ridge) | ✅ High | ❌ None | Low | Dense features, numerical stability |
| Elastic Net | ✅ High | ✅ Good | Low | Mixed feature types |
| Early Stopping | ✅ Excellent | ❌ None | Medium | Neural networks, iterative algorithms |
| Dropout | ✅ Excellent | ❌ None | Medium | Deep neural networks |
| Random Forest | ✅ Good | ✅ Good | High | Non-linear relationships |
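
The table above is a heuristic; the right method for a given problem is ultimately an empirical question. As a rough illustration of data-driven selection, the sketch below compares scikit-learn's cross-validated estimators (RidgeCV, LassoCV, ElasticNetCV) on a synthetic dataset. The make_regression settings, the alpha grid, and the l1_ratio candidates are illustrative assumptions, not recommendations.

# Minimal sketch: cross-validated selection among L2, L1, and Elastic Net penalties.
# Dataset, alpha grid, and l1_ratio candidates are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=400, n_features=50, n_informative=10,
                       noise=10.0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

alphas = np.logspace(-3, 3, 25)
candidates = {
    'Ridge (L2)': make_pipeline(StandardScaler(), RidgeCV(alphas=alphas)),
    'Lasso (L1)': make_pipeline(StandardScaler(),
                                LassoCV(alphas=alphas, cv=5, max_iter=10000)),
    'Elastic Net': make_pipeline(StandardScaler(),
                                 ElasticNetCV(alphas=alphas, l1_ratio=[0.1, 0.5, 0.9],
                                              cv=5, max_iter=10000)),
}

for name, pipeline in candidates.items():
    pipeline.fit(X_tr, y_tr)                      # inner CV picks alpha (and l1_ratio)
    test_mse = mean_squared_error(y_te, pipeline.predict(X_te))
    print(f"{name:<12} held-out MSE: {test_mse:.3f}")

The same pattern extends to estimators without a dedicated *CV class by wrapping them in GridSearchCV.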

Conclusion

Overfitting remains one of the most critical challenges in machine learning, but with proper understanding and application of regularization techniques, it can be effectively managed. This comprehensive analysis has covered:

  1. Overfitting Detection: Systematic approaches to identify when models are overfitting
  2. L1 Regularization: Feature selection through sparsity-inducing penalties
  3. L2 Regularization: Coefficient shrinkage for numerical stability
  4. Elastic Net: Balanced approach combining L1 and L2 benefits (the penalty forms are recapped just after this list)
  5. Early Stopping: Preventing overfitting through optimal training duration
  6. Dropout: Regularization through stochastic neuron deactivation
  7. Comprehensive Comparison: Data-driven selection of optimal regularization strategies
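
For reference, items 2-4 above correspond to the familiar penalized least-squares objectives. Writing w for the coefficient vector, alpha >= 0 for the penalty strength, and rho in [0, 1] for the L1/L2 mix (exact normalization constants, such as scikit-learn's 1/(2n) factor on the data term, vary between implementations):

Ridge (L2):    minimize ||y - Xw||^2 + alpha * ||w||_2^2
Lasso (L1):    minimize ||y - Xw||^2 + alpha * ||w||_1
Elastic Net:   minimize ||y - Xw||^2 + alpha * (rho * ||w||_1 + (1 - rho) * ||w||_2^2)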

Key insights from our analysis:

  • L1 regularization excels when feature selection is important and interpretability is required
  • L2 regularization provides excellent numerical stability and works well with correlated features
  • Elastic Net offers the best of both worlds for most practical applications
  • Early stopping is essential for iterative algorithms and neural networks
  • Dropout is particularly effective for deep learning applications (see the short PyTorch sketch below)
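
To make the last two points concrete, here is a minimal, self-contained PyTorch sketch that combines dropout with patience-based early stopping. It is independent of the DropoutAnalyzer class used earlier; the layer sizes, the 0.3 dropout rate, the patience of 10 epochs, and the synthetic data are illustrative assumptions.

# Minimal sketch: dropout + patience-based early stopping (full-batch training for brevity).
# Layer sizes, dropout rate, patience, and synthetic data are illustrative assumptions.
import copy

import torch
import torch.nn as nn

torch.manual_seed(42)

# Synthetic regression data: the target depends only on the first 3 of 20 features
x_tr = torch.randn(1000, 20)
y_tr = x_tr[:, :3].sum(dim=1, keepdim=True) + 0.1 * torch.randn(1000, 1)
x_va = torch.randn(200, 20)
y_va = x_va[:, :3].sum(dim=1, keepdim=True) + 0.1 * torch.randn(200, 1)

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Dropout(p=0.3),                      # zeroes 30% of activations during training only
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

best_val, best_state, patience, wait = float('inf'), None, 10, 0
for epoch in range(500):
    model.train()                           # enables dropout
    optimizer.zero_grad()
    loss_fn(model(x_tr), y_tr).backward()
    optimizer.step()

    model.eval()                            # disables dropout for evaluation
    with torch.no_grad():
        val_loss = loss_fn(model(x_va), y_va).item()

    if val_loss < best_val:                 # track the best validation checkpoint
        best_val, best_state, wait = val_loss, copy.deepcopy(model.state_dict()), 0
    else:
        wait += 1
        if wait >= patience:                # stop once validation stops improving
            print(f"Stopping early at epoch {epoch} (best val loss {best_val:.4f})")
            break

model.load_state_dict(best_state)           # restore the best weights seen during training

In practice the same loop is usually wrapped around mini-batch iteration with a DataLoader rather than full-batch updates.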

The choice of regularization technique should be based on the specific characteristics of your dataset, model architecture, and performance requirements. Regularly validating and comparing candidate approaches on held-out data helps ensure that the chosen model generalizes well beyond the training set.

References

  1. Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: data mining, inference, and prediction. Springer Science & Business Media.

  2. Bishop, C. M. (2006). Pattern recognition and machine learning. Springer.

  3. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, 58(1), 267-288.

  4. Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, 67(2), 301-320.

  5. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958.


This comprehensive guide provides both theoretical foundations and practical implementations for managing overfitting through regularization. For more advanced topics in machine learning optimization and model selection, explore my other articles on cross-validation techniques and hyperparameter tuning.

Connect with me on LinkedIn or X to discuss regularization strategies and machine learning best practices!
