
Table 1 Comparison of different architecture configurations and MSE accuracy

From: Multi-headed deep learning-based estimator for correlated-SIRV Pareto type II distributed clutter

| Model | Layers/blocks | Number of neurons/filters/units | Activation function | Optimizer | Training accuracy (MSE) | Validation accuracy (MSE) |
| --- | --- | --- | --- | --- | --- | --- |
| Stacked LSTM | 4 LSTM layers + 2 dense layers | LSTM: 128, 64, 64, 32; Dense: 64, 2 | Sigmoid, ReLU | SGD | 1.11 | 1.24 |
| | | | | RMSprop | 1.08 | 1.19 |
| | | | | Adam | 1.05 | 1.14 |
| BLSTM | BLSTM + 2 dense layers | BLSTM: 128 units; Dense: 100, 2 | Sigmoid, Tanh, ReLU | SGD | 1.33 | 1.87 |
| | | | | RMSprop | 1.28 | 1.85 |
| | | | | Adam | 1.26 | 1.85 |
| CNN-LSTM | 2 CNN layers + 2 LSTM layers + 2 dense layers | Conv1D: 64, 64; LSTM: 32, 32; Dense: 64, 2 | Sigmoid, Tanh, ReLU | SGD | 0.98 | 1.12 |
| | | | | RMSprop | 0.86 | 0.89 |
| | | | | Adam | 0.86 | 0.87 |
| Multi-head | LSTM-SAE + CNN + BLSTM + CNN-LSTM + LSTM + Dense | See Fig. 4 | ReLU | SGD | 0.51 | 0.71 |
| | | | | RMSprop | 0.39 | 0.59 |
| | | | | Adam | 0.31 | 0.34 |
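
For reference, below is a minimal Keras sketch of the stacked-LSTM baseline row of Table 1. The layer widths (LSTM: 128, 64, 64, 32; Dense: 64, 2) and the Adam/MSE training setup are taken from the table; the input shape, the placement of the sigmoid and ReLU activations, and the `return_sequences` settings are illustrative assumptions not specified in the table.

```python
# Hypothetical sketch of the stacked-LSTM baseline from Table 1.
# Layer widths and Adam/MSE follow the table; input shape, activation
# placement, and return_sequences flags are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_stacked_lstm(timesteps=64, features=1):
    model = models.Sequential([
        layers.Input(shape=(timesteps, features)),               # assumed input shape
        layers.LSTM(128, activation="sigmoid", return_sequences=True),
        layers.LSTM(64, activation="sigmoid", return_sequences=True),
        layers.LSTM(64, activation="sigmoid", return_sequences=True),
        layers.LSTM(32, activation="sigmoid"),                    # last LSTM outputs a vector
        layers.Dense(64, activation="relu"),
        layers.Dense(2),                                          # two estimated clutter parameters
    ])
    # Adam gave the lowest training/validation MSE for this model in Table 1.
    model.compile(optimizer="adam", loss="mse")
    return model
```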