
Comparing Evolutionary Algorithms: Standard DE vs Deep-DE

📋 Project Overview

This project implements and compares two variants of Differential Evolution (DE) algorithms for optimizing a complex, modified 3D Rastrigin function. The comparison focuses on evaluating the performance difference between a classical differential evolution approach and an enhanced variant that integrates deep learning techniques.

🎯 Algorithms Under Comparison

  1. Standard DE: A classical differential evolution algorithm implementing standard mutation, crossover, and selection strategies
  2. Simplified Deep-DE: An enhanced variant that integrates a surrogate neural network model to guide the mutation process, leveraging an experience buffer to learn from successful mutations

🔬 Problem Statement

The optimization challenge involves a modified 3D Rastrigin function with several complexity-enhancing modifications:

  • Base Function: 3D Rastrigin function with search space bounds [-5.12, 5.12] for each dimension
  • Asymmetry: Fixed 3D shift vector [-1.23, 2.41, 0.85] to break symmetry
  • Distortion: Additional quadratic and sub-quadratic (power 1.5) penalty terms
  • Rotation: Cross-variable interaction terms introducing rotational effects

The mathematical formulation includes:

f(x,y,z) = base_rastrigin + asymmetry_distortion + rotation_terms
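
For concreteness, here is a minimal NumPy sketch of a function with this structure. The shift vector comes from the description above, but the distortion and rotation coefficients (a, b, c) are illustrative assumptions, not the values used in problem.py:

import numpy as np

SHIFT = np.array([-1.23, 2.41, 0.85])  # fixed 3D shift vector from the problem statement

def modified_rastrigin(p, a=0.1, b=0.05, c=0.2):
    """Modified 3D Rastrigin; a, b, c are illustrative coefficients (assumptions)."""
    x = np.asarray(p, dtype=float) - SHIFT                      # asymmetry via the fixed shift
    base = 30.0 + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x))   # classic 3D Rastrigin (10 * n = 30)
    distortion = a * np.sum(x**2) + b * np.sum(np.abs(x)**1.5)  # quadratic + power-1.5 penalties
    rotation = c * (x[0]*x[1] + x[1]*x[2] + x[0]*x[2])          # cross-variable interaction terms
    return base + distortion + rotation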

πŸ—οΈ Methodology

Performance Evaluation Metrics

The algorithms are compared using four key performance indicators:

  1. Solution Quality: Best fitness value achieved and corresponding solution coordinates
  2. Convergence Speed: Number of generations required to reach threshold proximity to optimum
  3. Improvement Rate: Rate of fitness improvement across successive generations
  4. Stability: Consistency measured by variance in fitness during final generations
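
All four indicators can be computed from a per-generation best-fitness history. A minimal sketch, where the optimum value, threshold, and tail-window size are assumed values rather than those in metrics.py:

import numpy as np

def compute_metrics(history, f_opt=0.0, threshold=0.1, tail=10):
    """history: best fitness per generation (minimization problem)."""
    h = np.asarray(history, dtype=float)
    best_fitness = h.min()                                   # 1. solution quality
    near = np.where(h - f_opt <= threshold)[0]               # generations within threshold of the optimum
    convergence_gen = int(near[0]) if near.size else None    # 2. convergence speed
    improvement_rate = (h[0] - h[-1]) / (len(h) - 1)         # 3. mean improvement per generation
    stability = h[-tail:].var()                              # 4. variance over the final generations
    return best_fitness, convergence_gen, improvement_rate, stability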

Algorithm Details

Standard DE

  • Random population initialization within the problem bounds
  • Classical DE mutation: mutant = a + F × (b - c)
  • Binomial crossover with rate CR
  • Greedy selection based on fitness improvement (see the sketch below)
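
A minimal NumPy sketch of this loop (the classic DE/rand/1/bin scheme); parameter defaults are illustrative and need not match classic_de.py:

import numpy as np

def standard_de(f, bounds, pop_size=100, gens=50, F=0.8, CR=0.9, seed=None):
    """DE/rand/1/bin. f: objective to minimize, bounds: (low, high) arrays."""
    rng = np.random.default_rng(seed)
    low, high = (np.asarray(b, dtype=float) for b in bounds)
    dim = low.size
    pop = rng.uniform(low, high, size=(pop_size, dim))    # random initialization in bounds
    fit = np.array([f(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            candidates = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(candidates, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)  # mutation + boundary clamping
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True               # guarantee at least one exchanged component
            trial = np.where(cross, mutant, pop[i])       # binomial crossover
            f_trial = f(trial)
            if f_trial < fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(fit.argmin())
    return pop[best], fit[best]

With the modified_rastrigin sketch above, a call such as standard_de(modified_rastrigin, ([-5.12] * 3, [5.12] * 3)) would reproduce the baseline setup described in this README.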

Simplified Deep-DE

  • Surrogate Model: Feedforward neural network predicting beneficial mutations
  • Experience Buffer: Repository storing successful mutations for model training
  • Guided Mutation: Neural network-generated mutations once sufficient training data exists
  • Adaptive Learning: Periodic model updates using collected experiences (schematic sketch below)
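
Schematically, the guided mutation step could look like the following; surrogate, buffer, and the warm-up threshold are assumed names standing in for the actual objects in main.py:

import numpy as np
import torch

def guided_mutation(surrogate, buffer, a, b, c, F=0.8, warmup=32):
    """Surrogate-guided mutation with a classic-DE fallback (assumed names)."""
    if len(buffer) >= warmup:                        # enough successful experiences collected
        parents = np.concatenate([a, b, c])          # 9D input: the concatenated parent triplet
        with torch.no_grad():
            delta = surrogate(torch.as_tensor(parents, dtype=torch.float32)).numpy()
        return a + delta                             # network-predicted 3D mutation vector
    return a + F * (b - c)                           # classical DE mutation as fallback

After greedy selection, mutations that improved fitness are stored in the experience buffer together with their parent triplets, and the surrogate is periodically retrained on random batches drawn from it.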

🚀 Installation & Usage

Prerequisites

  • Python 3.7+
  • PyTorch
  • NumPy
  • Matplotlib
  • imageio

Setup Instructions

  1. Create and activate a virtual environment:

python -m venv venv

# On Windows:
venv\Scripts\activate

# On Unix or macOS:
source venv/bin/activate

  2. Install dependencies:

pip install -r requirements.txt

  3. Run the comparison:

python main.py

Output

The program will:

  • Execute both algorithms with identical parameters
  • Generate performance metrics and comparisons
  • Create visualizations in the visualization/ directory
  • Display comprehensive results in the terminal

📊 Results & Visualizations

Performance Summary

Based on experimental runs with population size 100 and 50 generations:

Metric              Standard DE      Deep-DE          Improvement
Best Fitness        4.334628         4.235910         2.28%
Convergence Speed   35 generations   19 generations   45.71%
Solution Quality    Good             Superior         ✓
Stability           Stable           Stable           ≈

Convergence Analysis

(Figure: convergence comparison, see visualization/convergence.png)

The convergence plot demonstrates that Deep-DE not only achieves better final fitness but also converges significantly faster than Standard DE.

Population Evolution

The algorithms show distinct population evolution patterns:

Standard DE Evolution

(Animation: Standard DE population evolution, see visualization/standard_de.gif)

Deep-DE Evolution

(Animation: Deep-DE population evolution, see visualization/deep_de.gif)

Note: Darker points indicate better fitness values (closer to the global minimum)

3D Function Landscape

(Figure: 3D function comparison, see visualization/3d_function_comparison.png)

The 3D visualization shows both algorithms' final solutions plotted on the complex Rastrigin landscape, demonstrating their effectiveness in navigating the multimodal search space.

Final Population Distribution

(Figure: final population distribution, see visualization/final_populations.png)

The final population comparison illustrates how both algorithms converge, with Deep-DE showing slightly tighter clustering around the optimum.

2D Function Slices

(Figure: 2D function slices, see visualization/2d_function_slices.png)

Cross-sectional views of the function landscape help visualize the complexity of the optimization problem and solution locations.

📁 Project Structure

├── main.py              # Main execution script and Deep-DE implementation
├── classic_de.py        # Standard Differential Evolution implementation
├── problem.py           # Modified 3D Rastrigin function definition
├── metrics.py           # Performance metrics calculation utilities
├── visuals.py           # Visualization generation functions
├── requirements.txt     # Python dependencies
├── README.md            # This documentation
└── visualization/       # Generated plots and animations
    ├── convergence.png
    ├── final_populations.png
    ├── 3d_function_comparison.png
    ├── 2d_function_slices.png
    ├── algorithm_comparison.png
    ├── distance_to_optimum.png
    ├── standard_de.gif
    ├── deep_de.gif
    └── *.png frame files

🔍 Key Implementation Details

Surrogate Neural Network

  • Architecture: 3-layer feedforward network (128 → 128 → 64 → 3)
  • Input: Concatenated parent triplets (9D input → 3D mutation vector)
  • Training: Adam optimizer with a cosine annealing scheduler
  • Loss Function: Mean Squared Error on successful mutations (a PyTorch sketch follows)
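
A PyTorch sketch consistent with that description; the layer sizes come from the text above, while the activations and learning-rate settings are assumptions:

import torch
import torch.nn as nn

class SurrogateNet(nn.Module):
    """Maps a concatenated parent triplet (9D) to a 3D mutation vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(9, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, x):
        return self.net(x)

model = SurrogateNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)                    # lr is an assumption
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)  # cosine annealing
loss_fn = nn.MSELoss()  # trained only on mutations that improved fitness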

Experience Buffer

  • Capacity: 1000 experiences (configurable)
  • Storage: Parent triplets, resulting mutations, and fitness improvements
  • Sampling: Random batch selection for neural network training
  • Usage: Surrogate guidance begins once 32+ successful experiences have been collected (a minimal sketch follows)
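
A minimal buffer matching this description; the exact tuple layout is an assumption:

import random
from collections import deque

class ExperienceBuffer:
    """Stores (parent_triplet, mutation, fitness_improvement) tuples."""
    def __init__(self, capacity=1000):
        self.data = deque(maxlen=capacity)            # oldest experiences evicted automatically

    def add(self, parents, mutation, improvement):
        self.data.append((parents, mutation, improvement))

    def sample(self, batch_size=32):
        return random.sample(list(self.data), min(batch_size, len(self.data)))

    def __len__(self):
        return len(self.data)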

🎯 Key Findings

  1. Deep-DE demonstrates superior performance, with 2.28% better solution quality
  2. Convergence acceleration of 45.71%: the optimum region is reached in nearly half the generations
  3. Stability is maintained while achieving faster convergence
  4. Neural network guidance effectively learns problem-specific mutation strategies

🔬 Technical Notes

  • Reproducibility: Fixed random seeds (torch: 42, numpy: 33) ensure consistent results
  • Boundary Handling: Clamping ensures solutions remain within valid bounds
  • Crossover Strategy: Binomial crossover with guaranteed minimum one-point exchange
  • Visualization: Real-time 3D population tracking with rotating viewpoints
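
For reference, the fixed seeds mentioned above correspond to:

import numpy as np
import torch

torch.manual_seed(42)   # PyTorch seed from the Technical Notes
np.random.seed(33)      # NumPy seed from the Technical Notes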

📈 Future Enhancements

Potential improvements to explore:

  • Multi-objective optimization variants
  • Adaptive population sizing
  • Advanced surrogate architectures (attention mechanisms)
  • Hybrid local search integration
  • Dynamic parameter adaptation

Note: This implementation serves as a research tool for comparing evolutionary algorithms enhanced with machine learning techniques. Results may vary with different problem instances and parameter settings.
