
Clustering Algorithms in Machine Learning: Complete Guide with Python


AI-Generated Content Notice

Some code examples and technical explanations in this article were generated with AI assistance. The content has been reviewed for accuracy, but please test any code snippets in your development environment before using them.



Introduction

Clustering is a fundamental unsupervised learning technique that groups similar data points together without labeled examples. It's essential for exploratory data analysis, customer segmentation, image processing, and discovering hidden patterns in data.

This comprehensive guide covers the most important clustering algorithms with practical Python implementations and real-world applications.

Understanding Clustering

What is Clustering?

Clustering partitions data into groups (clusters) where:

  • Points within clusters are similar to each other
  • Points in different clusters are dissimilar
  • No target labels are provided (unsupervised)

Types of Clustering

  1. Partitional: Divide data into non-overlapping clusters (K-means)
  2. Hierarchical: Create tree-like cluster structures (Agglomerative)
  3. Density-based: Find clusters based on density (DBSCAN)
  4. Model-based: Assume data follows certain distributions (GMM)
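
Each of these families maps to an estimator in scikit-learn. A minimal sketch of instantiating one representative per family (the parameter values are illustrative, not tuned):

from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.mixture import GaussianMixture

# One representative estimator per clustering family (illustrative parameters)
clusterers = {
    'partitional': KMeans(n_clusters=3, n_init=10, random_state=42),
    'hierarchical': AgglomerativeClustering(n_clusters=3, linkage='ward'),
    'density-based': DBSCAN(eps=0.5, min_samples=5),
    'model-based': GaussianMixture(n_components=3, random_state=42),
}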

K-Means Clustering

Algorithm Implementation

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_blobs, make_circles, make_moons
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import adjusted_rand_score, silhouette_score, calinski_harabasz_score
from typing import Dict, List, Tuple, Optional
import warnings
warnings.filterwarnings('ignore')

plt.style.use('seaborn-v0_8')

class KMeansAnalyzer:
    """Comprehensive K-means clustering analyzer"""
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
    
    def kmeans_from_scratch(self, X: np.ndarray, k: int, max_iters: int = 100) -> Tuple[np.ndarray, np.ndarray, List[Dict]]:
        """K-means implementation from scratch"""
        
        n_samples, n_features = X.shape
        
        # Initialize centroids by picking k distinct points at random (seeded for reproducibility)
        rng = np.random.default_rng(self.random_state)
        centroids = X[rng.choice(n_samples, k, replace=False)]
        
        history = []
        
        for iteration in range(max_iters):
            # Assign points to closest centroid
            distances = np.sqrt(((X - centroids[:, np.newaxis])**2).sum(axis=2))
            labels = np.argmin(distances, axis=0)
            
            # Update centroids
            new_centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])
            
            # Store history
            history.append({
                'iteration': iteration,
                'centroids': centroids.copy(),
                'labels': labels.copy(),
                'inertia': np.sum([np.sum((X[labels == i] - centroids[i])**2) for i in range(k)])
            })
            
            # Check convergence
            if np.allclose(centroids, new_centroids):
                print(f"Converged after {iteration + 1} iterations")
                break
                
            centroids = new_centroids
        
        return labels, centroids, history
    
    def elbow_method_analysis(self, X: np.ndarray, max_k: int = 10) -> Dict:
        """Perform elbow method to find optimal k"""
        
        k_range = range(1, max_k + 1)
        inertias = []
        silhouette_scores = []
        
        for k in k_range:
            if k == 1:
                # For k=1, inertia is total variance
                inertia = np.sum((X - X.mean(axis=0))**2)
                inertias.append(inertia)
                silhouette_scores.append(0)  # Undefined for k=1
            else:
                kmeans = KMeans(n_clusters=k, random_state=self.random_state, n_init=10)
                labels = kmeans.fit_predict(X)
                inertias.append(kmeans.inertia_)
                silhouette_scores.append(silhouette_score(X, labels))
        
        # Estimate the elbow as the point of maximum curvature (largest second difference of inertia)
        differences = np.diff(inertias)
        second_differences = np.diff(differences)
        elbow_point = np.argmax(second_differences) + 2  # +2: double diff shifts the index and k starts at 1
        
        return {
            'k_range': list(k_range),
            'inertias': inertias,
            'silhouette_scores': silhouette_scores,
            'elbow_point': elbow_point
        }
    
    def plot_elbow_analysis(self, results: Dict):
        """Plot elbow method results"""
        
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
        
        # Elbow curve
        ax1.plot(results['k_range'], results['inertias'], 'bo-', linewidth=2, markersize=8)
        ax1.axvline(x=results['elbow_point'], color='red', linestyle='--', 
                   label=f'Elbow at k={results["elbow_point"]}')
        ax1.set_xlabel('Number of Clusters (k)', fontsize=12)
        ax1.set_ylabel('Inertia (WCSS)', fontsize=12)
        ax1.set_title('Elbow Method for Optimal k', fontweight='bold')
        ax1.legend()
        ax1.grid(True, alpha=0.3)
        
        # Silhouette score
        ax2.plot(results['k_range'][1:], results['silhouette_scores'][1:], 'go-', 
                linewidth=2, markersize=8)
        best_k_silhouette = np.argmax(results['silhouette_scores'][1:]) + 2
        ax2.axvline(x=best_k_silhouette, color='red', linestyle='--', 
                   label=f'Best silhouette at k={best_k_silhouette}')
        ax2.set_xlabel('Number of Clusters (k)', fontsize=12)
        ax2.set_ylabel('Silhouette Score', fontsize=12)
        ax2.set_title('Silhouette Analysis', fontweight='bold')
        ax2.legend()
        ax2.grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
    
    def visualize_clustering_process(self, X: np.ndarray, k: int):
        """Visualize K-means clustering process"""
        
        if X.shape[1] != 2:
            print("Visualization only available for 2D data")
            return
        
        labels, centroids, history = self.kmeans_from_scratch(X, k, max_iters=20)
        
        # Plot key iterations
        iterations_to_plot = [0, len(history)//3, 2*len(history)//3, -1]
        fig, axes = plt.subplots(2, 2, figsize=(15, 12))
        axes = axes.flatten()
        
        colors = plt.cm.Set3(np.linspace(0, 1, k))
        
        for idx, iter_idx in enumerate(iterations_to_plot):
            ax = axes[idx]
            
            if iter_idx == -1:
                iter_idx = len(history) - 1
            
            step = history[iter_idx]
            current_labels = step['labels']
            current_centroids = step['centroids']
            
            # Plot data points
            for cluster_id in range(k):
                cluster_points = X[current_labels == cluster_id]
                if len(cluster_points) > 0:
                    ax.scatter(cluster_points[:, 0], cluster_points[:, 1], 
                             c=[colors[cluster_id]], alpha=0.6, s=50)
            
            # Plot centroids
            ax.scatter(current_centroids[:, 0], current_centroids[:, 1], 
                      c='red', marker='x', s=200, linewidths=3)
            
            ax.set_title(f'Iteration {step["iteration"] + 1}', fontweight='bold')
            ax.grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()

# Generate sample datasets
datasets = {
    'blobs': make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=42),
    'circles': make_circles(n_samples=300, noise=0.1, factor=0.3, random_state=42),
    'moons': make_moons(n_samples=300, noise=0.1, random_state=42)
}

# Analyze K-means on different datasets
kmeans_analyzer = KMeansAnalyzer()

for name, (X, y_true) in datasets.items():
    print(f"\n{'='*50}")
    print(f"K-means Analysis on {name.upper()} Dataset")
    print(f"{'='*50}")
    
    # Standardize data
    scaler = StandardScaler()
    X_scaled = scaler.fit_transform(X)
    
    # Elbow method
    elbow_results = kmeans_analyzer.elbow_method_analysis(X_scaled, max_k=8)
    kmeans_analyzer.plot_elbow_analysis(elbow_results)
    
    # Visualize clustering process (use optimal k from elbow)
    optimal_k = elbow_results['elbow_point']
    print(f"Optimal k (elbow method): {optimal_k}")
    kmeans_analyzer.visualize_clustering_process(X_scaled, optimal_k)

Hierarchical Clustering

Agglomerative Clustering Implementation

from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist, squareform

class HierarchicalClusteringAnalyzer:
    """Hierarchical clustering analyzer"""
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
    
    def agglomerative_analysis(self, X: np.ndarray, linkage_methods: List[str] = None) -> Dict:
        """Analyze different linkage methods"""
        
        if linkage_methods is None:
            linkage_methods = ['ward', 'complete', 'average', 'single']
        
        results = {}
        
        for method in linkage_methods:
            print(f"Analyzing {method} linkage...")
            
            # Test different number of clusters
            cluster_range = range(2, 8)
            silhouette_scores = []
            
            for n_clusters in cluster_range:
                agg = AgglomerativeClustering(n_clusters=n_clusters, linkage=method)
                labels = agg.fit_predict(X)
                score = silhouette_score(X, labels)
                silhouette_scores.append(score)
            
            results[method] = {
                'cluster_range': list(cluster_range),
                'silhouette_scores': silhouette_scores,
                'best_n_clusters': cluster_range[np.argmax(silhouette_scores)]
            }
            
            print(f"  Best n_clusters: {results[method]['best_n_clusters']}")
        
        return results
    
    def plot_dendrograms(self, X: np.ndarray, linkage_methods: List[str] = None):
        """Plot dendrograms for different linkage methods"""
        
        if linkage_methods is None:
            linkage_methods = ['ward', 'complete', 'average', 'single']
        
        fig, axes = plt.subplots(2, 2, figsize=(15, 12))
        axes = axes.flatten()
        
        for idx, method in enumerate(linkage_methods):
            # Compute linkage matrix
            if method == 'ward':
                Z = linkage(X, method=method)
            else:
                Z = linkage(pdist(X, metric='euclidean'), method=method)
            
            # Plot dendrogram
            dendrogram(Z, ax=axes[idx], truncate_mode='level', p=5, 
                      show_leaf_counts=True, leaf_font_size=10)
            axes[idx].set_title(f'{method.capitalize()} Linkage Dendrogram', fontweight='bold')
            axes[idx].set_xlabel('Sample Index or Cluster Size')
            axes[idx].set_ylabel('Distance')
        
        plt.tight_layout()
        plt.show()
    
    def plot_hierarchical_comparison(self, results: Dict):
        """Plot comparison of hierarchical clustering methods"""
        
        methods = list(results.keys())
        
        fig, axes = plt.subplots(1, 2, figsize=(15, 6))
        
        # Silhouette score comparison
        for method, data in results.items():
            axes[0].plot(data['cluster_range'], data['silhouette_scores'], 
                        'o-', linewidth=2, markersize=8, label=method.capitalize())
        
        axes[0].set_xlabel('Number of Clusters', fontsize=12)
        axes[0].set_ylabel('Silhouette Score', fontsize=12)
        axes[0].set_title('Hierarchical Clustering: Silhouette Analysis', fontweight='bold')
        axes[0].legend()
        axes[0].grid(True, alpha=0.3)
        
        # Best performance by method
        best_clusters = [results[method]['best_n_clusters'] for method in methods]
        best_scores = [max(results[method]['silhouette_scores']) for method in methods]
        
        bars = axes[1].bar(methods, best_scores, alpha=0.7, color=['blue', 'orange', 'green', 'red'])
        axes[1].set_ylabel('Best Silhouette Score', fontsize=12)
        axes[1].set_title('Best Performance by Linkage Method', fontweight='bold')
        axes[1].grid(True, alpha=0.3)
        
        # Add value labels
        for bar, score, n_clusters in zip(bars, best_scores, best_clusters):
            axes[1].text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.01, 
                        f'{score:.3f}\n(k={n_clusters})', ha='center', va='bottom', 
                        fontweight='bold', fontsize=10)
        
        plt.tight_layout()
        plt.show()

# Analyze hierarchical clustering
hier_analyzer = HierarchicalClusteringAnalyzer()

# Use blobs dataset for hierarchical analysis
X_blobs, y_blobs = datasets['blobs']
X_blobs_scaled = StandardScaler().fit_transform(X_blobs)

print("\nHierarchical Clustering Analysis:")
hier_results = hier_analyzer.agglomerative_analysis(X_blobs_scaled)
hier_analyzer.plot_dendrograms(X_blobs_scaled)
hier_analyzer.plot_hierarchical_comparison(hier_results)

DBSCAN (Density-Based Clustering)

DBSCAN Implementation and Analysis

from sklearn.neighbors import NearestNeighbors

class DBSCANAnalyzer:
    """DBSCAN clustering analyzer"""
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
    
    def find_optimal_eps(self, X: np.ndarray, min_samples: int = 5) -> float:
        """Find optimal eps parameter using k-distance plot"""
        
        # Calculate k-nearest neighbors
        neighbors = NearestNeighbors(n_neighbors=min_samples)
        neighbors_fit = neighbors.fit(X)
        distances, _ = neighbors_fit.kneighbors(X)
        
        # Sort distances to kth nearest neighbor
        distances = np.sort(distances[:, min_samples-1], axis=0)
        
        # Plot k-distance graph
        plt.figure(figsize=(10, 6))
        plt.plot(distances, linewidth=2)
        plt.xlabel('Data Points sorted by distance', fontsize=12)
        plt.ylabel(f'{min_samples}-NN Distance', fontsize=12)
        plt.title('K-Distance Plot for Optimal Eps', fontweight='bold')
        plt.grid(True, alpha=0.3)
        
        # Find elbow point (simplified method)
        if len(distances) > 10:
            second_derivative = np.diff(distances, n=2)
            elbow_point = np.argmax(second_derivative) + 2
            optimal_eps = distances[elbow_point]
            
            plt.axhline(y=optimal_eps, color='red', linestyle='--', 
                       label=f'Suggested eps: {optimal_eps:.3f}')
            plt.legend()
        else:
            optimal_eps = np.median(distances)
        
        plt.show()
        return optimal_eps
    
    def dbscan_parameter_analysis(self, X: np.ndarray) -> Dict:
        """Analyze DBSCAN with different parameters"""
        
        eps_values = np.linspace(0.1, 2.0, 10)
        min_samples_values = [3, 5, 10]
        
        results = {}
        
        for min_samples in min_samples_values:
            eps_results = []
            
            for eps in eps_values:
                dbscan = DBSCAN(eps=eps, min_samples=min_samples)
                labels = dbscan.fit_predict(X)
                
                # Calculate metrics
                n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
                n_noise = list(labels).count(-1)
                
                if n_clusters > 1:
                    try:
                        silhouette = silhouette_score(X, labels)
                    except ValueError:
                        silhouette = -1
                else:
                    silhouette = -1
                
                eps_results.append({
                    'eps': eps,
                    'n_clusters': n_clusters,
                    'n_noise': n_noise,
                    'silhouette_score': silhouette,
                    'labels': labels
                })
            
            results[min_samples] = eps_results
        
        return results
    
    def plot_dbscan_analysis(self, X: np.ndarray, results: Dict):
        """Plot DBSCAN parameter analysis"""
        
        min_samples_values = list(results.keys())
        
        fig, axes = plt.subplots(2, 2, figsize=(15, 12))
        
        # Plot 1: Number of clusters vs eps
        for min_samples in min_samples_values:
            data = results[min_samples]
            eps_vals = [item['eps'] for item in data]
            n_clusters = [item['n_clusters'] for item in data]
            
            axes[0, 0].plot(eps_vals, n_clusters, 'o-', linewidth=2, 
                           markersize=6, label=f'min_samples={min_samples}')
        
        axes[0, 0].set_xlabel('Eps', fontsize=12)
        axes[0, 0].set_ylabel('Number of Clusters', fontsize=12)
        axes[0, 0].set_title('Clusters vs Eps Parameter', fontweight='bold')
        axes[0, 0].legend()
        axes[0, 0].grid(True, alpha=0.3)
        
        # Plot 2: Noise points vs eps
        for min_samples in min_samples_values:
            data = results[min_samples]
            eps_vals = [item['eps'] for item in data]
            n_noise = [item['n_noise'] for item in data]
            
            axes[0, 1].plot(eps_vals, n_noise, 'o-', linewidth=2, 
                           markersize=6, label=f'min_samples={min_samples}')
        
        axes[0, 1].set_xlabel('Eps', fontsize=12)
        axes[0, 1].set_ylabel('Number of Noise Points', fontsize=12)
        axes[0, 1].set_title('Noise Points vs Eps Parameter', fontweight='bold')
        axes[0, 1].legend()
        axes[0, 1].grid(True, alpha=0.3)
        
        # Plot 3: Silhouette score vs eps
        for min_samples in min_samples_values:
            data = results[min_samples]
            eps_vals = [item['eps'] for item in data]
            silhouette_scores = [item['silhouette_score'] if item['silhouette_score'] > -1 
                               else np.nan for item in data]
            
            axes[1, 0].plot(eps_vals, silhouette_scores, 'o-', linewidth=2, 
                           markersize=6, label=f'min_samples={min_samples}')
        
        axes[1, 0].set_xlabel('Eps', fontsize=12)
        axes[1, 0].set_ylabel('Silhouette Score', fontsize=12)
        axes[1, 0].set_title('Silhouette Score vs Eps Parameter', fontweight='bold')
        axes[1, 0].legend()
        axes[1, 0].grid(True, alpha=0.3)
        
        # Plot 4: Best clustering result
        best_score = -2
        best_params = None
        best_labels = None
        
        for min_samples in min_samples_values:
            for item in results[min_samples]:
                if item['silhouette_score'] > best_score:
                    best_score = item['silhouette_score']
                    best_params = (item['eps'], min_samples)
                    best_labels = item['labels']
        
        if best_labels is not None and X.shape[1] == 2:
            unique_labels = set(best_labels)
            colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels)))
            
            for k, col in zip(unique_labels, colors):
                if k == -1:
                    col = 'black'
                    marker = 'x'
                    alpha = 0.5
                else:
                    marker = 'o'
                    alpha = 0.8
                
                class_member_mask = (best_labels == k)
                xy = X[class_member_mask]
                axes[1, 1].scatter(xy[:, 0], xy[:, 1], c=[col], marker=marker, 
                                  alpha=alpha, s=50)
            
            axes[1, 1].set_title(f'Best DBSCAN Result\neps={best_params[0]:.2f}, '
                                f'min_samples={best_params[1]}\nSilhouette: {best_score:.3f}', 
                                fontweight='bold')
            axes[1, 1].grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
        
        return best_params

# Analyze DBSCAN on circles dataset
dbscan_analyzer = DBSCANAnalyzer()
X_circles, _ = datasets['circles']
X_circles_scaled = StandardScaler().fit_transform(X_circles)

print("DBSCAN Analysis on Circles Dataset:")
optimal_eps = dbscan_analyzer.find_optimal_eps(X_circles_scaled, min_samples=5)
dbscan_results = dbscan_analyzer.dbscan_parameter_analysis(X_circles_scaled)
best_params = dbscan_analyzer.plot_dbscan_analysis(X_circles_scaled, dbscan_results)

print(f"Best parameters: eps={best_params[0]:.3f}, min_samples={best_params[1]}")

Gaussian Mixture Models (GMM)

GMM Implementation and Analysis

class GMMAnalyzer:
    """Gaussian Mixture Model analyzer"""
    
    def __init__(self, random_state: int = 42):
        self.random_state = random_state
    
    def gmm_model_selection(self, X: np.ndarray, max_components: int = 10) -> Dict:
        """Select optimal number of components using information criteria"""
        
        n_components_range = range(1, max_components + 1)
        
        aic_scores = []
        bic_scores = []
        log_likelihoods = []
        
        for n_components in n_components_range:
            gmm = GaussianMixture(n_components=n_components, random_state=self.random_state)
            gmm.fit(X)
            
            aic_scores.append(gmm.aic(X))
            bic_scores.append(gmm.bic(X))
            log_likelihoods.append(gmm.score(X))  # average log-likelihood per sample
        
        # Find optimal number of components
        optimal_aic = n_components_range[np.argmin(aic_scores)]
        optimal_bic = n_components_range[np.argmin(bic_scores)]
        
        return {
            'n_components_range': list(n_components_range),
            'aic_scores': aic_scores,
            'bic_scores': bic_scores,
            'log_likelihoods': log_likelihoods,
            'optimal_aic': optimal_aic,
            'optimal_bic': optimal_bic
        }
    
    def plot_gmm_analysis(self, results: Dict):
        """Plot GMM analysis results"""
        
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
        
        # Information criteria
        n_comp_range = results['n_components_range']
        ax1.plot(n_comp_range, results['aic_scores'], 'b-o', linewidth=2, 
                markersize=8, label='AIC')
        ax1.plot(n_comp_range, results['bic_scores'], 'r-s', linewidth=2, 
                markersize=8, label='BIC')
        
        # Mark optimal points
        ax1.axvline(x=results['optimal_aic'], color='blue', linestyle='--', alpha=0.7)
        ax1.axvline(x=results['optimal_bic'], color='red', linestyle='--', alpha=0.7)
        
        ax1.set_xlabel('Number of Components', fontsize=12)
        ax1.set_ylabel('Information Criterion', fontsize=12)
        ax1.set_title('GMM Model Selection', fontweight='bold')
        ax1.legend()
        ax1.grid(True, alpha=0.3)
        
        # Log likelihood
        ax2.plot(n_comp_range, results['log_likelihoods'], 'g-^', linewidth=2, markersize=8)
        ax2.set_xlabel('Number of Components', fontsize=12)
        ax2.set_ylabel('Log Likelihood', fontsize=12)
        ax2.set_title('Log Likelihood vs Components', fontweight='bold')
        ax2.grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()

# Analyze GMM on blobs dataset
gmm_analyzer = GMMAnalyzer()
X_blobs, _ = datasets['blobs']
X_blobs_scaled = StandardScaler().fit_transform(X_blobs)

print("Gaussian Mixture Model Analysis:")
gmm_results = gmm_analyzer.gmm_model_selection(X_blobs_scaled, max_components=8)
gmm_analyzer.plot_gmm_analysis(gmm_results)

print(f"Optimal components (AIC): {gmm_results['optimal_aic']}")
print(f"Optimal components (BIC): {gmm_results['optimal_bic']}")
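
Because a GMM is a probabilistic model, it also provides soft assignments alongside hard labels. A minimal sketch, reusing X_blobs_scaled and the BIC-optimal component count found above:

# Refit the BIC-optimal model and inspect soft (posterior) assignments
best_gmm = GaussianMixture(n_components=gmm_results['optimal_bic'], random_state=42)
best_gmm.fit(X_blobs_scaled)

hard_labels = best_gmm.predict(X_blobs_scaled)        # most likely component per point
soft_probs = best_gmm.predict_proba(X_blobs_scaled)   # posterior probability for each component

print("Component probabilities for the first point:", np.round(soft_probs[0], 3))
print("Hard label for the first point:", hard_labels[0])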

Clustering Evaluation Metrics
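
The most widely used internal metric here is the silhouette score: for each point i, s(i) = (b(i) - a(i)) / max(a(i), b(i)), where a(i) is the mean distance from i to the other points in its own cluster and b(i) is the mean distance from i to the points in the nearest other cluster. Averaging s(i) over all points gives a score between -1 and +1, with higher values indicating tighter, better-separated clusters. The evaluator below combines such internal metrics with external ones (like the adjusted Rand index) that can be computed when ground-truth labels are available.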

class ClusteringEvaluator:
    """Comprehensive clustering evaluation"""
    
    def __init__(self):
        pass
    
    def evaluate_clustering(self, X: np.ndarray, labels_pred: np.ndarray, 
                          labels_true: np.ndarray = None) -> Dict:
        """Evaluate clustering performance"""
        
        metrics = {}
        
        # Internal metrics (don't require true labels)
        if len(set(labels_pred)) > 1:
            metrics['silhouette_score'] = silhouette_score(X, labels_pred)
            metrics['calinski_harabasz_score'] = calinski_harabasz_score(X, labels_pred)
            
            # Davies-Bouldin score (lower is better)
            try:
                from sklearn.metrics import davies_bouldin_score
                metrics['davies_bouldin_score'] = davies_bouldin_score(X, labels_pred)
            except ImportError:
                pass
        
        # External metrics (require true labels)
        if labels_true is not None:
            metrics['adjusted_rand_score'] = adjusted_rand_score(labels_true, labels_pred)
            
            try:
                from sklearn.metrics import normalized_mutual_info_score, homogeneity_score, completeness_score
                metrics['normalized_mutual_info'] = normalized_mutual_info_score(labels_true, labels_pred)
                metrics['homogeneity_score'] = homogeneity_score(labels_true, labels_pred)
                metrics['completeness_score'] = completeness_score(labels_true, labels_pred)
            except ImportError:
                pass
        
        # Basic statistics
        metrics['n_clusters'] = len(set(labels_pred)) - (1 if -1 in labels_pred else 0)
        metrics['n_noise'] = list(labels_pred).count(-1) if -1 in labels_pred else 0
        
        return metrics
    
    def compare_algorithms(self, X: np.ndarray, y_true: np.ndarray = None) -> Dict:
        """Compare different clustering algorithms"""
        
        # Standardize data
        X_scaled = StandardScaler().fit_transform(X)
        
        # Define algorithms
        algorithms = {
            'K-Means': KMeans(n_clusters=4, random_state=42),
            'Agglomerative': AgglomerativeClustering(n_clusters=4),
            'DBSCAN': DBSCAN(eps=0.5, min_samples=5),
            'GMM': GaussianMixture(n_components=4, random_state=42)
        }
        
        results = {}
        
        for name, algorithm in algorithms.items():
            print(f"Evaluating {name}...")
            
            # Fit and predict (GaussianMixture also exposes fit_predict in recent scikit-learn)
            if hasattr(algorithm, 'fit_predict'):
                labels = algorithm.fit_predict(X_scaled)
            else:  # fallback for estimators that only implement fit + predict
                labels = algorithm.fit(X_scaled).predict(X_scaled)
            
            # Evaluate
            metrics = self.evaluate_clustering(X_scaled, labels, y_true)
            results[name] = {
                'labels': labels,
                'metrics': metrics,
                'algorithm': algorithm
            }
        
        return results
    
    def plot_algorithm_comparison(self, X: np.ndarray, results: Dict):
        """Plot comparison of clustering algorithms"""
        
        if X.shape[1] != 2:
            print("Visualization only available for 2D data")
            return
        
        fig, axes = plt.subplots(2, 2, figsize=(15, 12))
        axes = axes.flatten()
        
        for idx, (name, data) in enumerate(results.items()):
            labels = data['labels']
            metrics = data['metrics']
            
            # Plot clustering result
            if -1 in labels:
                # Handle noise points for DBSCAN
                unique_labels = set(labels)
                colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels)))
                
                for k, col in zip(unique_labels, colors):
                    if k == -1:
                        col = 'black'
                        marker = 'x'
                        alpha = 0.5
                    else:
                        marker = 'o'
                        alpha = 0.8
                    
                    class_member_mask = (labels == k)
                    xy = X[class_member_mask]
                    axes[idx].scatter(xy[:, 0], xy[:, 1], c=[col], marker=marker, 
                                    alpha=alpha, s=50)
            else:
                axes[idx].scatter(X[:, 0], X[:, 1], c=labels, 
                                cmap='viridis', alpha=0.6, s=50)
            
            # Title with metrics
            title = f"{name}\n"
            if 'silhouette_score' in metrics:
                title += f"Silhouette: {metrics['silhouette_score']:.3f}\n"
            if 'adjusted_rand_score' in metrics:
                title += f"ARI: {metrics['adjusted_rand_score']:.3f}"
            
            axes[idx].set_title(title, fontweight='bold', fontsize=10)
            axes[idx].grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
        
        # Print detailed comparison
        print("\nDetailed Metrics Comparison:")
        print("-" * 80)
        
        metric_names = ['silhouette_score', 'adjusted_rand_score', 'n_clusters', 'n_noise']
        
        for metric in metric_names:
            print(f"\n{metric.replace('_', ' ').title()}:")
            for name, data in results.items():
                if metric in data['metrics']:
                    value = data['metrics'][metric]
                    print(f"  {name:15}: {value:.4f}" if isinstance(value, float) else f"  {name:15}: {value}")

# Comprehensive algorithm comparison
evaluator = ClusteringEvaluator()

for name, (X, y_true) in datasets.items():
    print(f"\n{'='*60}")
    print(f"Algorithm Comparison on {name.upper()} Dataset")
    print(f"{'='*60}")
    
    comparison_results = evaluator.compare_algorithms(X, y_true)
    evaluator.plot_algorithm_comparison(X, comparison_results)

Best Practices and Guidelines

Choosing the Right Algorithm

Algorithm    | Best For                            | Advantages                               | Disadvantages
K-Means      | Spherical, similar-sized clusters   | Fast, simple, scalable                   | Requires k, assumes spherical clusters
Hierarchical | Unknown number of clusters          | No need to specify k, creates hierarchy  | Computationally expensive O(n³)
DBSCAN       | Irregular shapes, noise handling    | Finds arbitrary shapes, handles noise    | Sensitive to parameters, struggles with varying densities
GMM          | Overlapping clusters, probabilistic | Soft clustering, handles overlap         | Assumes Gaussian distributions

Parameter Selection Guidelines

  1. K-Means: Use elbow method or silhouette analysis for k
  2. DBSCAN: Use k-distance plot for eps, start with min_samples = 2 × dimensions
  3. Hierarchical: Use dendrogram to choose cut height (see the sketch after this list)
  4. GMM: Use AIC/BIC for number of components
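
To make guideline 3 concrete, here is a minimal sketch that cuts a Ward dendrogram at a chosen height using SciPy's fcluster (the cut height of 5.0 is an assumed value that you would normally read off the dendrogram for your own data):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative data; substitute your own feature matrix
X_demo = np.random.default_rng(0).normal(size=(200, 2))

# Build the linkage matrix, then cut the tree at a fixed merge distance
Z = linkage(X_demo, method='ward')
cut_height = 5.0  # assumed value, chosen by inspecting the dendrogram
flat_labels = fcluster(Z, t=cut_height, criterion='distance')

print(f"Clusters at height {cut_height}: {len(set(flat_labels))}")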

Evaluation Strategy

  1. Internal metrics when true labels unknown (Silhouette, Calinski-Harabasz)
  2. External metrics when true labels available (ARI, NMI)
  3. Visual inspection for 2D/3D data
  4. Domain knowledge validation

Performance Summary

Typical performance characteristics:

  • K-Means: O(nkt) - very fast, scales well (see the MiniBatchKMeans sketch below for very large datasets)
  • Hierarchical: O(n³) time and O(n²) memory - slow for large datasets
  • DBSCAN: O(n log n) with spatial indexing, O(n²) in the worst case
  • GMM: O(nkt) per EM iteration (higher with full covariance matrices) - similar to K-Means but slower to converge
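
When even standard K-Means becomes slow, scikit-learn's MiniBatchKMeans trades a small amount of clustering quality for a large speedup by updating centroids from small random batches. A minimal sketch (the dataset size and batch size are illustrative):

from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

# Illustrative large dataset; adjust n_samples to your setting
X_large, _ = make_blobs(n_samples=100_000, centers=8, random_state=42)

mbk = MiniBatchKMeans(n_clusters=8, batch_size=1024, n_init=10, random_state=42)
labels_large = mbk.fit_predict(X_large)

print("Inertia:", round(mbk.inertia_, 1))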

Conclusion

Clustering is a powerful unsupervised learning technique with wide applications. Key takeaways:

  • No universal best algorithm - choice depends on data characteristics
  • Parameter tuning is crucial for optimal results
  • Combine multiple evaluation metrics for robust assessment
  • Visualize results when possible to validate clustering quality
  • Consider computational constraints for large datasets

Understanding these algorithms and their trade-offs enables you to choose the right approach for your specific clustering challenges.


Connect with me on LinkedIn or X to discuss clustering algorithms and machine learning!
