Neural Networks Fundamentals: From Perceptron to Deep Learning with Python


Introduction

Neural networks form the foundation of modern artificial intelligence, powering everything from image recognition to natural language processing. This comprehensive guide covers neural network fundamentals, from the basic perceptron to multi-layer networks, with practical Python implementations you can build from scratch.

Understanding neural networks deeply will help you make better architectural decisions, debug training issues, and optimize performance in your deep learning projects.

The Biological Inspiration

Neural networks are inspired by biological neurons. A biological neuron:

  • Receives signals through dendrites
  • Processes signals in the cell body
  • Sends output through the axon when a threshold is reached

Artificial neurons follow a similar pattern: Input → Processing → Output
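
As a minimal sketch of this pattern, here is a single artificial neuron with a step activation (variable names are illustrative and not part of the implementations that follow):

import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> int:
    """Weighted sum of inputs, then a threshold: fire (1) or stay silent (0)."""
    z = np.dot(inputs, weights) + bias  # "processing": combine incoming signals
    return 1 if z >= 0 else 0           # "output": fire once the threshold is reached

# Example: two input signals with hand-picked weights
print(artificial_neuron(np.array([0.5, -1.2]), np.array([0.8, 0.3]), bias=0.1))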

Building Blocks: The Perceptron

Single Perceptron Implementation

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_classification, make_circles
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from typing import Tuple, List, Dict, Optional
import warnings
warnings.filterwarnings('ignore')

plt.style.use('seaborn-v0_8')

class Perceptron:
    """Single perceptron implementation from scratch"""
    
    def __init__(self, learning_rate: float = 0.01, max_epochs: int = 1000):
        self.learning_rate = learning_rate
        self.max_epochs = max_epochs
        self.weights: Optional[np.ndarray] = None
        self.bias: float = 0.0
        self.training_history: List[Dict] = []
    
    def _activation(self, z: np.ndarray) -> np.ndarray:
        """Step activation function"""
        return np.where(z >= 0, 1, 0)
    
    def fit(self, X: np.ndarray, y: np.ndarray) -> 'Perceptron':
        """Train the perceptron"""
        n_samples, n_features = X.shape
        
        # Initialize weights and bias
        self.weights = np.random.normal(0, 0.01, n_features)
        self.bias = 0.0
        
        # Training loop
        for epoch in range(self.max_epochs):
            errors = 0
            epoch_loss = 0
            
            for i in range(n_samples):
                # Forward pass
                linear_output = np.dot(X[i], self.weights) + self.bias
                prediction = self._activation(linear_output)
                
                # Calculate error
                error = y[i] - prediction
                epoch_loss += error ** 2
                
                # Update weights and bias if there's an error
                if error != 0:
                    self.weights += self.learning_rate * error * X[i]
                    self.bias += self.learning_rate * error
                    errors += 1
            
            # Store training history
            accuracy = 1 - (errors / n_samples)
            self.training_history.append({
                'epoch': epoch,
                'errors': errors,
                'accuracy': accuracy,
                'loss': epoch_loss / n_samples
            })
            
            # Early stopping if no errors
            if errors == 0:
                print(f"Converged at epoch {epoch}")
                break
        
        return self
    
    def predict(self, X: np.ndarray) -> np.ndarray:
        """Make predictions"""
        linear_output = np.dot(X, self.weights) + self.bias
        return self._activation(linear_output)
    
    def plot_training_history(self):
        """Plot training progress"""
        if not self.training_history:
            print("No training history available")
            return
        
        epochs = [h['epoch'] for h in self.training_history]
        accuracies = [h['accuracy'] for h in self.training_history]
        losses = [h['loss'] for h in self.training_history]
        
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
        
        # Accuracy plot
        ax1.plot(epochs, accuracies, 'b-', linewidth=2, marker='o', markersize=4)
        ax1.set_xlabel('Epoch', fontsize=12)
        ax1.set_ylabel('Accuracy', fontsize=12)
        ax1.set_title('Training Accuracy', fontweight='bold')
        ax1.grid(True, alpha=0.3)
        ax1.set_ylim(0, 1.05)
        
        # Loss plot
        ax2.plot(epochs, losses, 'r-', linewidth=2, marker='s', markersize=4)
        ax2.set_xlabel('Epoch', fontsize=12)
        ax2.set_ylabel('Loss', fontsize=12)
        ax2.set_title('Training Loss', fontweight='bold')
        ax2.grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()

# Generate linearly separable data
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0, 
                          n_informative=2, n_clusters_per_class=1, random_state=42)

# Train perceptron
perceptron = Perceptron(learning_rate=0.1, max_epochs=100)
perceptron.fit(X, y)

# Plot results
perceptron.plot_training_history()

print(f"Final weights: {perceptron.weights}")
print(f"Final bias: {perceptron.bias}")
print(f"Training accuracy: {perceptron.training_history[-1]['accuracy']:.3f}")

Visualizing Decision Boundary

def plot_decision_boundary(model, X: np.ndarray, y: np.ndarray, title: str = "Decision Boundary"):
    """Plot decision boundary for 2D data"""
    
    plt.figure(figsize=(10, 8))
    
    # Create mesh
    h = 0.02
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    
    # Make predictions on mesh
    mesh_points = np.c_[xx.ravel(), yy.ravel()]
    Z = model.predict(mesh_points)
    Z = Z.reshape(xx.shape)
    
    # Plot decision boundary
    plt.contourf(xx, yy, Z, alpha=0.3, cmap=plt.cm.RdYlBu)
    
    # Plot data points
    scatter = plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdYlBu, edgecolors='black')
    plt.colorbar(scatter)
    plt.xlabel('Feature 1', fontsize=12)
    plt.ylabel('Feature 2', fontsize=12)
    plt.title(title, fontweight='bold', fontsize=14)
    plt.grid(True, alpha=0.3)
    plt.show()

# Visualize perceptron decision boundary
plot_decision_boundary(perceptron, X, y, "Perceptron Decision Boundary")

Multi-Layer Perceptron (MLP)

Building an MLP from Scratch

class MultiLayerPerceptron:
    """Multi-layer perceptron implementation from scratch"""
    
    def __init__(self, layer_sizes: List[int], learning_rate: float = 0.01, 
                 max_epochs: int = 1000, activation: str = 'sigmoid'):
        self.layer_sizes = layer_sizes
        self.learning_rate = learning_rate
        self.max_epochs = max_epochs
        self.activation = activation
        self.weights: List[np.ndarray] = []
        self.biases: List[np.ndarray] = []
        self.training_history: List[Dict] = []
        
        # Initialize weights and biases
        self._initialize_parameters()
    
    def _initialize_parameters(self):
        """Initialize weights using Xavier initialization"""
        for i in range(len(self.layer_sizes) - 1):
            # Xavier initialization
            fan_in = self.layer_sizes[i]
            fan_out = self.layer_sizes[i + 1]
            limit = np.sqrt(6 / (fan_in + fan_out))
            
            weight = np.random.uniform(-limit, limit, (fan_in, fan_out))
            bias = np.zeros((1, fan_out))
            
            self.weights.append(weight)
            self.biases.append(bias)
    
    def _activation_function(self, z: np.ndarray) -> np.ndarray:
        """Apply activation function"""
        if self.activation == 'sigmoid':
            return 1 / (1 + np.exp(-np.clip(z, -500, 500)))  # Clip to prevent overflow
        elif self.activation == 'tanh':
            return np.tanh(z)
        elif self.activation == 'relu':
            return np.maximum(0, z)
        else:
            raise ValueError(f"Unsupported activation: {self.activation}")
    
    def _activation_derivative(self, z: np.ndarray) -> np.ndarray:
        """Compute activation function derivative"""
        if self.activation == 'sigmoid':
            s = self._activation_function(z)
            return s * (1 - s)
        elif self.activation == 'tanh':
            return 1 - np.tanh(z) ** 2
        elif self.activation == 'relu':
            return (z > 0).astype(float)
        else:
            raise ValueError(f"Unsupported activation: {self.activation}")
    
    def _forward_pass(self, X: np.ndarray) -> Tuple[List[np.ndarray], List[np.ndarray]]:
        """Forward propagation"""
        activations = [X]
        z_values = []
        
        for i in range(len(self.weights)):
            z = np.dot(activations[-1], self.weights[i]) + self.biases[i]
            z_values.append(z)
            
            if i == len(self.weights) - 1:  # Output layer
                # Always use sigmoid on the output layer so predictions are valid
                # probabilities for the binary cross-entropy loss, regardless of
                # the hidden-layer activation
                a = 1 / (1 + np.exp(-np.clip(z, -500, 500)))
            else:  # Hidden layers
                a = self._activation_function(z)
            
            activations.append(a)
        
        return activations, z_values
    
    def _backward_pass(self, X: np.ndarray, y: np.ndarray, 
                      activations: List[np.ndarray], z_values: List[np.ndarray]) -> Tuple[List[np.ndarray], List[np.ndarray]]:
        """Backward propagation"""
        m = X.shape[0]
        weight_gradients = []
        bias_gradients = []
        
        # Output layer error: with a sigmoid output and binary cross-entropy,
        # the gradient w.r.t. the output pre-activation simplifies to (prediction - target)
        delta = activations[-1] - y.reshape(-1, 1)
        
        # Backpropagate errors
        for i in range(len(self.weights) - 1, -1, -1):
            # Compute gradients
            dW = np.dot(activations[i].T, delta) / m
            db = np.sum(delta, axis=0, keepdims=True) / m
            
            weight_gradients.insert(0, dW)
            bias_gradients.insert(0, db)
            
            # Compute error for previous layer
            if i > 0:
                delta = np.dot(delta, self.weights[i].T) * self._activation_derivative(z_values[i-1])
        
        return weight_gradients, bias_gradients
    
    def fit(self, X: np.ndarray, y: np.ndarray) -> 'MultiLayerPerceptron':
        """Train the neural network"""
        
        for epoch in range(self.max_epochs):
            # Forward pass
            activations, z_values = self._forward_pass(X)
            
            # Compute loss (binary cross-entropy); reshape y to a column so it
            # broadcasts correctly against the (n_samples, 1) predictions
            predictions = activations[-1]
            y_col = y.reshape(-1, 1)
            loss = -np.mean(y_col * np.log(predictions + 1e-15) + 
                            (1 - y_col) * np.log(1 - predictions + 1e-15))
            
            # Backward pass
            weight_grads, bias_grads = self._backward_pass(X, y, activations, z_values)
            
            # Update parameters
            for i in range(len(self.weights)):
                self.weights[i] -= self.learning_rate * weight_grads[i]
                self.biases[i] -= self.learning_rate * bias_grads[i]
            
            # Calculate accuracy
            predicted_classes = (predictions > 0.5).astype(int).flatten()
            accuracy = np.mean(predicted_classes == y)
            
            # Store history
            self.training_history.append({
                'epoch': epoch,
                'loss': loss,
                'accuracy': accuracy
            })
            
            # Print progress
            if (epoch + 1) % 100 == 0:
                print(f"Epoch {epoch + 1}: Loss = {loss:.4f}, Accuracy = {accuracy:.4f}")
        
        return self
    
    def predict(self, X: np.ndarray) -> np.ndarray:
        """Make predictions"""
        activations, _ = self._forward_pass(X)
        return (activations[-1] > 0.5).astype(int).flatten()
    
    def predict_proba(self, X: np.ndarray) -> np.ndarray:
        """Predict probabilities"""
        activations, _ = self._forward_pass(X)
        return activations[-1].flatten()
    
    def plot_training_history(self):
        """Plot training progress"""
        if not self.training_history:
            return
        
        epochs = [h['epoch'] for h in self.training_history]
        losses = [h['loss'] for h in self.training_history]
        accuracies = [h['accuracy'] for h in self.training_history]
        
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
        
        # Loss plot
        ax1.plot(epochs, losses, 'r-', linewidth=2)
        ax1.set_xlabel('Epoch', fontsize=12)
        ax1.set_ylabel('Loss', fontsize=12)
        ax1.set_title('Training Loss', fontweight='bold')
        ax1.grid(True, alpha=0.3)
        
        # Accuracy plot
        ax2.plot(epochs, accuracies, 'b-', linewidth=2)
        ax2.set_xlabel('Epoch', fontsize=12)
        ax2.set_ylabel('Accuracy', fontsize=12)
        ax2.set_title('Training Accuracy', fontweight='bold')
        ax2.grid(True, alpha=0.3)
        ax2.set_ylim(0, 1.05)
        
        plt.tight_layout()
        plt.show()

# Generate non-linearly separable data
X_nonlinear, y_nonlinear = make_circles(n_samples=300, noise=0.1, factor=0.3, random_state=42)

# Standardize data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_nonlinear)

# Create and train MLP
mlp = MultiLayerPerceptron(
    layer_sizes=[2, 4, 4, 1],  # 2 inputs, two hidden layers with 4 neurons each, 1 output
    learning_rate=0.1,
    max_epochs=1000,
    activation='sigmoid'
)

mlp.fit(X_scaled, y_nonlinear)
mlp.plot_training_history()

# Visualize results
plot_decision_boundary(mlp, X_scaled, y_nonlinear, "MLP Decision Boundary (Non-linear Data)")

Activation Functions Deep Dive

class ActivationFunctions:
    """Comprehensive activation functions analysis"""
    
    @staticmethod
    def sigmoid(x: np.ndarray) -> np.ndarray:
        return 1 / (1 + np.exp(-np.clip(x, -500, 500)))
    
    @staticmethod
    def tanh(x: np.ndarray) -> np.ndarray:
        return np.tanh(x)
    
    @staticmethod
    def relu(x: np.ndarray) -> np.ndarray:
        return np.maximum(0, x)
    
    @staticmethod
    def leaky_relu(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
        return np.where(x > 0, x, alpha * x)
    
    @staticmethod
    def elu(x: np.ndarray, alpha: float = 1.0) -> np.ndarray:
        return np.where(x > 0, x, alpha * (np.exp(x) - 1))
    
    @staticmethod
    def swish(x: np.ndarray) -> np.ndarray:
        return x * ActivationFunctions.sigmoid(x)
    
    def plot_activation_functions(self):
        """Plot various activation functions"""
        x = np.linspace(-5, 5, 1000)
        
        functions = {
            'Sigmoid': self.sigmoid(x),
            'Tanh': self.tanh(x),
            'ReLU': self.relu(x),
            'Leaky ReLU': self.leaky_relu(x),
            'ELU': self.elu(x),
            'Swish': self.swish(x)
        }
        
        fig, axes = plt.subplots(2, 3, figsize=(15, 10))
        axes = axes.flatten()
        
        for i, (name, y) in enumerate(functions.items()):
            axes[i].plot(x, y, linewidth=3, color=f'C{i}')
            axes[i].set_title(name, fontweight='bold', fontsize=12)
            axes[i].grid(True, alpha=0.3)
            axes[i].set_xlabel('Input', fontsize=10)
            axes[i].set_ylabel('Output', fontsize=10)
            
            # Add zero lines
            axes[i].axhline(y=0, color='k', linestyle='-', alpha=0.3)
            axes[i].axvline(x=0, color='k', linestyle='-', alpha=0.3)
        
        plt.tight_layout()
        plt.show()
    
    def compare_activation_effects(self, X: np.ndarray, y: np.ndarray):
        """Compare different activation functions on the same dataset"""
        
        activations = ['sigmoid', 'tanh', 'relu']
        results = {}
        
        for activation in activations:
            print(f"Training with {activation} activation...")
            
            mlp = MultiLayerPerceptron(
                layer_sizes=[2, 8, 1],
                learning_rate=0.01,
                max_epochs=500,
                activation=activation
            )
            
            mlp.fit(X, y)
            
            # Final performance
            final_accuracy = mlp.training_history[-1]['accuracy']
            final_loss = mlp.training_history[-1]['loss']
            
            results[activation] = {
                'model': mlp,
                'final_accuracy': final_accuracy,
                'final_loss': final_loss,
                'history': mlp.training_history
            }
            
            print(f"  Final accuracy: {final_accuracy:.4f}")
        
        # Plot comparison
        fig, axes = plt.subplots(1, 2, figsize=(12, 5))
        
        # Training curves
        for activation, data in results.items():
            history = data['history']
            epochs = [h['epoch'] for h in history]
            accuracies = [h['accuracy'] for h in history]
            
            axes[0].plot(epochs, accuracies, linewidth=2, label=activation.capitalize())
        
        axes[0].set_xlabel('Epoch', fontsize=12)
        axes[0].set_ylabel('Accuracy', fontsize=12)
        axes[0].set_title('Training Accuracy Comparison', fontweight='bold')
        axes[0].legend()
        axes[0].grid(True, alpha=0.3)
        
        # Final performance comparison
        activations_list = list(results.keys())
        accuracies = [results[act]['final_accuracy'] for act in activations_list]
        losses = [results[act]['final_loss'] for act in activations_list]
        
        bars1 = axes[1].bar(activations_list, accuracies, alpha=0.7, color=['blue', 'orange', 'green'])
        axes[1].set_ylabel('Final Accuracy', fontsize=12)
        axes[1].set_title('Final Accuracy Comparison', fontweight='bold')
        axes[1].grid(True, alpha=0.3)
        
        # Add value labels
        for bar, acc in zip(bars1, accuracies):
            axes[1].text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.01, 
                        f'{acc:.3f}', ha='center', va='bottom', fontweight='bold')
        
        plt.tight_layout()
        plt.show()
        
        # Decision boundaries (plot_decision_boundary creates and shows its own figure)
        for activation, data in results.items():
            plot_decision_boundary(data['model'], X, y, f"{activation.capitalize()} Decision Boundary")
        
        return results

# Analyze activation functions
activation_analyzer = ActivationFunctions()
activation_analyzer.plot_activation_functions()

print("\nComparing activation functions on non-linear data...")
activation_results = activation_analyzer.compare_activation_effects(X_scaled, y_nonlinear)

Modern Neural Networks with Libraries

# Using TensorFlow/Keras for comparison
try:
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers
    
    class ModernNeuralNetwork:
        """Modern neural network using TensorFlow/Keras"""
        
        def __init__(self, input_dim: int, hidden_layers: List[int], 
                     activation: str = 'relu', learning_rate: float = 0.001):
            self.model = self._build_model(input_dim, hidden_layers, activation)
            self.model.compile(
                optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                loss='binary_crossentropy',
                metrics=['accuracy']
            )
            self.history = None
        
        def _build_model(self, input_dim: int, hidden_layers: List[int], activation: str):
            """Build the neural network architecture"""
            model = keras.Sequential()
            model.add(keras.Input(shape=(input_dim,)))
            
            for units in hidden_layers:
                model.add(layers.Dense(units, activation=activation))
            
            # Output layer
            model.add(layers.Dense(1, activation='sigmoid'))
            
            return model
        
        def fit(self, X: np.ndarray, y: np.ndarray, epochs: int = 100, batch_size: int = 32):
            """Train the model"""
            self.history = self.model.fit(
                X, y, epochs=epochs, batch_size=batch_size, 
                validation_split=0.2, verbose=0
            )
            return self
        
        def predict(self, X: np.ndarray) -> np.ndarray:
            """Make predictions"""
            return (self.model.predict(X) > 0.5).astype(int).flatten()
        
        def plot_training_history(self):
            """Plot training history"""
            if self.history is None:
                return
            
            fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
            
            # Loss
            ax1.plot(self.history.history['loss'], label='Training Loss')
            ax1.plot(self.history.history['val_loss'], label='Validation Loss')
            ax1.set_xlabel('Epoch', fontsize=12)
            ax1.set_ylabel('Loss', fontsize=12)
            ax1.set_title('Training and Validation Loss', fontweight='bold')
            ax1.legend()
            ax1.grid(True, alpha=0.3)
            
            # Accuracy
            ax2.plot(self.history.history['accuracy'], label='Training Accuracy')
            ax2.plot(self.history.history['val_accuracy'], label='Validation Accuracy')
            ax2.set_xlabel('Epoch', fontsize=12)
            ax2.set_ylabel('Accuracy', fontsize=12)
            ax2.set_title('Training and Validation Accuracy', fontweight='bold')
            ax2.legend()
            ax2.grid(True, alpha=0.3)
            
            plt.tight_layout()
            plt.show()
    
    # Compare our implementation with TensorFlow
    print("\n=== Comparing Custom vs TensorFlow Implementation ===")
    
    # Our custom implementation
    print("Training custom MLP...")
    custom_mlp = MultiLayerPerceptron([2, 8, 8, 1], learning_rate=0.01, max_epochs=200)
    custom_mlp.fit(X_scaled, y_nonlinear)
    custom_accuracy = custom_mlp.training_history[-1]['accuracy']
    
    # TensorFlow implementation
    print("Training TensorFlow model...")
    tf_model = ModernNeuralNetwork(2, [8, 8], activation='sigmoid', learning_rate=0.01)
    tf_model.fit(X_scaled, y_nonlinear, epochs=200)
    tf_predictions = tf_model.predict(X_scaled)
    tf_accuracy = np.mean(tf_predictions == y_nonlinear)
    
    print(f"\nResults Comparison:")
    print(f"Custom MLP Accuracy: {custom_accuracy:.4f}")
    print(f"TensorFlow Accuracy: {tf_accuracy:.4f}")
    
    # Plot both training histories
    tf_model.plot_training_history()
    
except ImportError:
    print("TensorFlow not available. Skipping modern implementation comparison.")

Neural Network Best Practices

class NeuralNetworkBestPractices:
    """Best practices for neural network training"""
    
    @staticmethod
    def demonstrate_weight_initialization_impact():
        """Show impact of different weight initialization strategies"""
        
        init_strategies = {
            'Random Small': lambda shape: np.random.normal(0, 0.01, shape),
            'Random Large': lambda shape: np.random.normal(0, 1.0, shape),
            'Xavier': lambda shape: np.random.uniform(
                -np.sqrt(6/(shape[0]+shape[1])), 
                np.sqrt(6/(shape[0]+shape[1])), 
                shape
            ),
            'He': lambda shape: np.random.normal(0, np.sqrt(2/shape[0]), shape)
        }
        
        results = {}
        
        for name, init_func in init_strategies.items():
            print(f"Testing {name} initialization...")
            
            # Create MLP with custom initialization
            mlp = MultiLayerPerceptron([2, 8, 8, 1], learning_rate=0.01, max_epochs=200)
            
            # Override default initialization
            for i in range(len(mlp.weights)):
                mlp.weights[i] = init_func(mlp.weights[i].shape)
            
            mlp.fit(X_scaled, y_nonlinear)
            
            results[name] = {
                'final_accuracy': mlp.training_history[-1]['accuracy'],
                'final_loss': mlp.training_history[-1]['loss'],
                'convergence_epoch': len(mlp.training_history)
            }
        
        # Plot comparison
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
        
        strategies = list(results.keys())
        accuracies = [results[s]['final_accuracy'] for s in strategies]
        losses = [results[s]['final_loss'] for s in strategies]
        
        # Accuracy comparison
        bars1 = ax1.bar(strategies, accuracies, alpha=0.7, color=['blue', 'red', 'green', 'orange'])
        ax1.set_ylabel('Final Accuracy', fontsize=12)
        ax1.set_title('Impact of Weight Initialization', fontweight='bold')
        ax1.tick_params(axis='x', rotation=45)
        ax1.grid(True, alpha=0.3)
        
        # Loss comparison
        bars2 = ax2.bar(strategies, losses, alpha=0.7, color=['blue', 'red', 'green', 'orange'])
        ax2.set_ylabel('Final Loss', fontsize=12)
        ax2.set_title('Loss by Initialization Strategy', fontweight='bold')
        ax2.tick_params(axis='x', rotation=45)
        ax2.grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
        
        # Print results
        print("\nInitialization Comparison Results:")
        print("-" * 50)
        for strategy, data in results.items():
            print(f"{strategy:15}: Accuracy={data['final_accuracy']:.4f}, Loss={data['final_loss']:.4f}")
    
    @staticmethod
    def learning_rate_analysis():
        """Analyze impact of different learning rates"""
        
        learning_rates = [0.001, 0.01, 0.1, 0.5, 1.0]
        results = {}
        
        for lr in learning_rates:
            print(f"Testing learning rate: {lr}")
            
            mlp = MultiLayerPerceptron([2, 8, 1], learning_rate=lr, max_epochs=200)
            mlp.fit(X_scaled, y_nonlinear)
            
            results[lr] = {
                'history': mlp.training_history,
                'final_accuracy': mlp.training_history[-1]['accuracy']
            }
        
        # Plot learning curves
        plt.figure(figsize=(12, 8))
        
        for lr, data in results.items():
            history = data['history']
            epochs = [h['epoch'] for h in history]
            losses = [h['loss'] for h in history]
            
            plt.plot(epochs, losses, linewidth=2, label=f'LR = {lr}')
        
        plt.xlabel('Epoch', fontsize=12)
        plt.ylabel('Loss', fontsize=12)
        plt.title('Learning Rate Impact on Training', fontweight='bold', fontsize=14)
        plt.legend()
        plt.grid(True, alpha=0.3)
        plt.yscale('log')
        plt.show()
        
        # Print results
        print("\nLearning Rate Analysis:")
        print("-" * 30)
        for lr in sorted(results.keys()):
            acc = results[lr]['final_accuracy']
            print(f"LR {lr:4}: Final Accuracy = {acc:.4f}")

# Demonstrate best practices
print("\n=== Neural Network Best Practices ===")

practices = NeuralNetworkBestPractices()

print("\n1. Weight Initialization Impact:")
practices.demonstrate_weight_initialization_impact()

print("\n2. Learning Rate Analysis:")
practices.learning_rate_analysis()

Key Concepts Summary

Neural Network Components

Component            | Purpose                    | Key Considerations
Weights              | Store learned patterns     | Proper initialization crucial
Biases               | Shift activation threshold | Usually initialized to zero
Activation Functions | Introduce non-linearity    | Choose based on problem type
Loss Function        | Measure prediction error   | Binary cross-entropy for classification
Optimizer            | Update parameters          | Learning rate is critical
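
The short sketch below ties these components together in a single hand-rolled gradient-descent step (a standalone illustration, not part of the classes above; all values are made up):

import numpy as np

# One illustrative training step for a single sigmoid neuron:
# weights/bias are the parameters, sigmoid the activation, binary
# cross-entropy the loss, and plain gradient descent the optimizer.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))          # 4 samples, 2 features
y = np.array([0, 1, 1, 0]).reshape(-1, 1)

w = rng.normal(0, 0.1, size=(2, 1))  # weights
b = np.zeros((1, 1))                 # bias
lr = 0.1                             # learning rate (optimizer setting)

z = X @ w + b                        # linear part
a = 1 / (1 + np.exp(-z))             # sigmoid activation
loss = -np.mean(y * np.log(a + 1e-15) + (1 - y) * np.log(1 - a + 1e-15))

grad_z = (a - y) / len(X)            # gradient of BCE w.r.t. z for a sigmoid output
w -= lr * (X.T @ grad_z)             # parameter update
b -= lr * grad_z.sum(axis=0, keepdims=True)
print(f"loss after forward pass: {loss:.4f}")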

Common Issues and Solutions

  1. Vanishing Gradients: Use ReLU, proper initialization, batch normalization
  2. Exploding Gradients: Gradient clipping (see the sketch after this list), lower learning rate
  3. Overfitting: Regularization, dropout, early stopping
  4. Slow Convergence: Better initialization, learning rate scheduling
  5. Poor Generalization: More data, regularization, cross-validation
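
As a concrete example of items 2 and 3, the snippet below sketches gradient clipping and a simple patience-based early-stopping check that could be bolted onto the from-scratch MLP above (clip_value, patience, and min_delta are illustrative choices, not part of the class):

import numpy as np

def clip_gradients(grads, clip_value: float = 1.0):
    """Clip each gradient array element-wise to [-clip_value, clip_value]."""
    return [np.clip(g, -clip_value, clip_value) for g in grads]

def should_stop_early(loss_history, patience: int = 20, min_delta: float = 1e-4) -> bool:
    """Stop if the loss has not improved by at least min_delta for `patience` epochs."""
    if len(loss_history) <= patience:
        return False
    best_recent = min(loss_history[-patience:])
    best_before = min(loss_history[:-patience])
    return best_before - best_recent < min_delta

# Usage sketch inside a training loop:
# weight_grads = clip_gradients(weight_grads, clip_value=1.0)
# if should_stop_early(losses): break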

Performance Guidelines

  • Hidden Layers: Start with 1-2, add more if needed
  • Neurons per Layer: Start with 2-10x input features
  • Learning Rate: 0.001-0.01 is usually a good starting point
  • Activation: ReLU for hidden layers, sigmoid/softmax for output (combined in the Keras sketch after this list)
  • Initialization: Use Xavier/He initialization
  • Batch Size: 32-128 for most problems
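
Putting these rules of thumb together, a reasonable starting point in Keras might look like the sketch below (layer sizes and hyperparameters simply follow the guidelines above and are not tuned values):

# Requires TensorFlow; a starting-point sketch that follows the guidelines above.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_baseline_model(n_features: int) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        layers.Dense(4 * n_features, activation='relu',
                     kernel_initializer='he_normal'),   # He init pairs well with ReLU
        layers.Dense(1, activation='sigmoid'),          # sigmoid output for binary labels
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model

# model = build_baseline_model(n_features=2)
# model.fit(X_scaled, y_nonlinear, epochs=100, batch_size=32, validation_split=0.2)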

Conclusion

Neural networks are powerful function approximators that can learn complex patterns. Key takeaways:

  • Start simple with a single hidden layer
  • Proper initialization is crucial for training success
  • Choose appropriate activation functions for your problem
  • Learning rate tuning often has the biggest impact
  • Understand the fundamentals before using frameworks
  • Monitor training carefully to detect issues early

Building neural networks from scratch helps you understand the fundamentals, but use established frameworks like TensorFlow or PyTorch for production work.



Connect with me on LinkedIn or X to discuss neural networks and deep learning!
