Course: Digital Communication Systems
Student: Panagiota Grosdouli
Language: Python 3.x (NumPy, Matplotlib)
This project investigates the use of a neural network (MLP) for symbol decision in BPSK and QPSK modulation schemes under Additive White Gaussian Noise (AWGN).
The goal is to train a small MLP to learn the optimal decision rule and compare its Bit Error Rate (BER) to that of the classical maximum-likelihood detector.
For an AWGN channel:
y = s + n
n ~ N(0, σ²)
Relation between signal-to-noise ratio and noise variance:
γb = Eb/N0 (linear scale)
σ² = 1 / (2 * γb)
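A minimal sketch of this relation (assuming unit-energy symbols; the function name `noise_std` is illustrative), converting an Eb/N0 value given in dB into a noise standard deviation:

```python
import numpy as np

def noise_std(ebn0_db):
    """Noise standard deviation for unit-energy symbols: sigma^2 = 1 / (2 * gamma_b)."""
    gamma_b = 10 ** (ebn0_db / 10.0)   # Eb/N0 converted from dB to linear scale
    return np.sqrt(1.0 / (2.0 * gamma_b))

# Example: AWGN samples at Eb/N0 = 6 dB
sigma = noise_std(6.0)
n = sigma * np.random.randn(1000)      # n ~ N(0, sigma^2)
```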
Theoretical BER for BPSK:
Pb = Q(sqrt(2 * Eb/N0))
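The theoretical curve can be evaluated through the complementary error function, using Q(x) = 0.5 · erfc(x / √2); a small sketch (helper names are illustrative):

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

def bpsk_ber_theory(ebn0_db):
    """Pb = Q(sqrt(2 * Eb/N0)) for coherent BPSK over AWGN."""
    gamma_b = 10 ** (ebn0_db / 10.0)
    return q_func(sqrt(2.0 * gamma_b))

# Example: theoretical BER from 0 to 12 dB in 2 dB steps
for db in range(0, 13, 2):
    print(f"{db:2d} dB -> Pb = {bpsk_ber_theory(db):.3e}")
```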
- Generate random bits.
- Map them to BPSK or QPSK symbols.
- Add AWGN noise at the selected Eb/N0 values (a signal-generation sketch follows this list).
- Architecture: [Input] → ReLU (16–24 neurons) → Sigmoid (output)
- Loss function: binary cross-entropy
- Training SNR = 6 dB
- Evaluate generalization for 0–12 dB.
- Compute the BER for both modulations.
- Compare the MLP against the optimal detector (a training and evaluation sketch follows this list).
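A minimal sketch of the signal-generation steps above (unit symbol energy assumed; function and variable names are illustrative and not necessarily those used in ml_symbol_decision.py):

```python
import numpy as np

rng = np.random.default_rng(0)

def bpsk_symbols(bits):
    """Map bits {0, 1} -> symbols {-1, +1}."""
    return 2.0 * bits - 1.0

def qpsk_symbols(bits):
    """Map bit pairs to Gray-coded QPSK symbols on the unit circle (Es = 1)."""
    b = bits.reshape(-1, 2)
    i = 2.0 * b[:, 0] - 1.0
    q = 2.0 * b[:, 1] - 1.0
    return (i + 1j * q) / np.sqrt(2.0)

def add_awgn(symbols, ebn0_db, bits_per_symbol):
    """Add white Gaussian noise for a given Eb/N0 (Es = 1 assumed)."""
    ebn0 = 10 ** (ebn0_db / 10.0)
    n0 = 1.0 / (bits_per_symbol * ebn0)          # N0 = Es / (k * Eb/N0)
    if np.iscomplexobj(symbols):                  # complex noise: N0/2 per dimension
        noise = np.sqrt(n0 / 2) * (rng.standard_normal(symbols.shape)
                                   + 1j * rng.standard_normal(symbols.shape))
    else:                                         # real noise: sigma^2 = N0/2 = 1/(2*gamma_b)
        noise = np.sqrt(n0 / 2) * rng.standard_normal(symbols.shape)
    return symbols + noise

bits = rng.integers(0, 2, 10000)
rx_bpsk = add_awgn(bpsk_symbols(bits), ebn0_db=6.0, bits_per_symbol=1)
rx_qpsk = add_awgn(qpsk_symbols(bits), ebn0_db=6.0, bits_per_symbol=2)
```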
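A compact NumPy-only sketch of training and evaluating the MLP for the BPSK case (one hidden ReLU layer, sigmoid output, binary cross-entropy, trained at 6 dB). The hidden-layer size, learning rate, and epoch count are illustrative choices rather than the project's exact settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Training data: noisy BPSK samples at Eb/N0 = 6 dB ---
ebn0 = 10 ** (6.0 / 10.0)
sigma = np.sqrt(1.0 / (2.0 * ebn0))
bits = rng.integers(0, 2, 50000)
x = (2.0 * bits - 1.0 + sigma * rng.standard_normal(bits.size)).reshape(-1, 1)
y = bits.reshape(-1, 1).astype(float)

# --- MLP: 1 input -> 16 ReLU units -> 1 sigmoid output, binary cross-entropy loss ---
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.5

for epoch in range(1000):
    h = np.maximum(0.0, x @ W1 + b1)            # hidden ReLU activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output = P(bit = 1)
    d_out = (p - y) / x.shape[0]                # gradient of mean BCE at the pre-sigmoid output
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (h > 0)              # backprop through ReLU
    dW1 = x.T @ d_h
    db1 = d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# --- Evaluate BER over 0-12 dB against the maximum-likelihood (sign) detector ---
for db in range(0, 13, 2):
    s = np.sqrt(1.0 / (2.0 * 10 ** (db / 10.0)))
    test_bits = rng.integers(0, 2, 200000)
    rx = (2.0 * test_bits - 1.0 + s * rng.standard_normal(test_bits.size)).reshape(-1, 1)
    h = np.maximum(0.0, rx @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    ber_mlp = np.mean((p.ravel() > 0.5) != test_bits)
    ber_opt = np.mean((rx.ravel() > 0) != test_bits)
    print(f"{db:2d} dB  MLP {ber_mlp:.3e}  optimal {ber_opt:.3e}")
```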
To run the project:
python ml_symbol_decision.py
This automatically generates:
- ml_vs_opt_ber.csv
- ml_vs_opt_ber.png
- qpsk_mlp_decisions_6dB.png
- The MLP effectively learns the optimal decision boundary.
- In AWGN, its performance closely matches the maximum-likelihood detector.
- At low SNR, small deviations occur due to limited training data.
- Introduce Rayleigh fading (a channel sketch follows this list):
y = h * s + n
h ~ CN(0, 1)
- Train with pilot estimates (ĥ) as an extra input.
- Explore CNN/LSTM architectures for sequence detection.
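A sketch of the proposed Rayleigh extension (flat fading and unit symbol energy assumed; names such as `rayleigh_channel` are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def rayleigh_channel(symbols, ebn0_db, bits_per_symbol):
    """y = h * s + n with h ~ CN(0, 1) and complex AWGN n (Es = 1 assumed)."""
    n_sym = symbols.size
    h = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2.0)
    ebn0 = 10 ** (ebn0_db / 10.0)
    n0 = 1.0 / (bits_per_symbol * ebn0)
    n = np.sqrt(n0 / 2) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
    return h * symbols + n, h

# Example: faded QPSK symbols; (y, ĥ) pairs could feed the MLP as extra inputs
bits = rng.integers(0, 2, 2000)
b = bits.reshape(-1, 2)
s = ((2.0 * b[:, 0] - 1.0) + 1j * (2.0 * b[:, 1] - 1.0)) / np.sqrt(2.0)
y, h = rayleigh_channel(s, ebn0_db=6.0, bits_per_symbol=2)
```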
Neural networks offer a modern, data-driven approach to symbol detection,
capable of adapting to complex, non-ideal channel conditions.

