From: DeConFuse: a deep convolutional transform-based unsupervised fusion framework
Method | Architecture description | Other parameters |
---|---|---|
DeConFuse | \( 5 \times \left \{\begin {array}{l} \textbf {layer1}: \textbf {1D Conv}\,(1,4,5,1,2)^{1}\\ \textbf {Maxpool}\,(2,2)^{2}\\ \textbf {SELU}\\ \textbf {layer2}: \textbf {1D Conv}\,(5,8,3,1,1)^{1} \end {array}\right. \) **layer3**: Fully connected | Learning rate = 0.001, λ = 0.01, μ = 0.0001, Optimizer: Adam with parameters (β1, β2) = (0.9, 0.999), weight_decay = 5e-5, epsilon = 1e-8 |
ConvTimeNet | \( 5 \times \left \{\begin {array}{l} \textbf {layer1}: \textbf {1D Convolution}\,(1,32,9,1,4)^{1}\\ \textbf {Batch normalization} + \textbf {SELU}\\ \textbf {layer2}: \textbf {1D Convolution}\,(32,32,3,1,1)^{1}\\ \textbf {Batch normalization} + \textbf {SELU} + \textbf {SC}^{3}\\ \textbf {layer3}: \textbf {1D Convolution}\,(32,64,9,1,4)^{1}\\ \textbf {Batch normalization} + \textbf {SELU}\\ \textbf {layer4}: \textbf {1D Convolution}\,(64,64,3,1,1)^{1}\\ \textbf {Batch normalization} + \textbf {SELU} + \textbf {SC}^{3}\\ \textbf {layer5}: \textbf {Global Average Pooling} \end {array}\right. \) **layer6**: Fully connected. For trading, added **layer7**: Softmax | For forecasting: Learning rate = 0.001; for trading: Learning rate = 0.0001; Optimizer: Adam with parameters (β1, β2) = (0.9, 0.999), weight_decay = 1e-4, epsilon = 1e-8 |
TimeNet | \( 5 \times \left \{\begin {array}{l} \textbf {layer1}: \textbf {LSTM unit}\,(1,12,2,True)^{4}\\ \textbf {layer2}: \textbf {Global Average Pooling} \end {array}\right. \) **layer3**: Fully connected. For trading, added **layer4**: Softmax | For forecasting: Learning rate = 0.001; for trading: Learning rate = 0.0005; Optimizer: Adam with parameters (β1, β2) = (0.9, 0.999), weight_decay = 5e-5, epsilon = 1e-8 |
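As a concrete reading of the DeConFuse row, the sketch below implements one of the 5 per-channel branches in PyTorch, taking the superscript-1 convention as (in_channels, out_channels, kernel_size, stride, padding) and Maxpool (2, 2) as (kernel_size, stride); the footnotes themselves are not part of this excerpt, so that reading is an assumption. Note the table lists layer2's input as 5 channels while layer1 outputs 4; for the shapes to compose, this sketch uses 4.

```python
import torch
import torch.nn as nn

class DeConFuseBranch(nn.Module):
    """One of the 5 per-channel DeConFuse branches (layers 1-2 of the table).

    Conv arguments follow the assumed (in, out, kernel, stride, padding)
    convention. layer2's input is taken as 4 channels (layer1's output),
    not the 5 printed in the table, so the shapes compose.
    """
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv1d(1, 4, kernel_size=5, stride=1, padding=2)
        self.pool = nn.MaxPool1d(kernel_size=2, stride=2)
        self.act = nn.SELU()
        self.layer2 = nn.Conv1d(4, 8, kernel_size=3, stride=1, padding=1)

    def forward(self, x):                       # x: (batch, 1, T)
        x = self.act(self.pool(self.layer1(x)))  # (batch, 4, T // 2)
        return self.layer2(x)                    # (batch, 8, T // 2)

branch = DeConFuseBranch()
out = branch(torch.randn(2, 1, 64))             # shape (2, 8, 32)

# Optimizer settings from the "Other parameters" column:
optimizer = torch.optim.Adam(branch.parameters(), lr=0.001,
                             betas=(0.9, 0.999), weight_decay=5e-5, eps=1e-8)
```

The 5 branch outputs would then be flattened and fused by the fully connected layer3; λ and μ are the regularization weights of the transform-learning objective and are not optimizer arguments.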
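The ConvTimeNet row can be sketched the same way. Here SC (superscript 3, whose footnote is not in this excerpt) is read as a residual skip connection adding a block's input to its output, which the matching channel counts of layers 2 and 4 make possible; that reading is an assumption.

```python
import torch
import torch.nn as nn

class ConvTimeNetBranch(nn.Module):
    """One per-channel ConvTimeNet branch (layers 1-5 of the table).

    'SC' is assumed to be a skip connection around layers 2 and 4;
    stride 1 with "same" padding keeps the sequence length fixed, so
    the additions are shape-compatible.
    """
    def __init__(self):
        super().__init__()
        self.conv1, self.bn1 = nn.Conv1d(1, 32, 9, 1, 4), nn.BatchNorm1d(32)
        self.conv2, self.bn2 = nn.Conv1d(32, 32, 3, 1, 1), nn.BatchNorm1d(32)
        self.conv3, self.bn3 = nn.Conv1d(32, 64, 9, 1, 4), nn.BatchNorm1d(64)
        self.conv4, self.bn4 = nn.Conv1d(64, 64, 3, 1, 1), nn.BatchNorm1d(64)
        self.act = nn.SELU()

    def forward(self, x):                             # x: (batch, 1, T)
        h = self.act(self.bn1(self.conv1(x)))          # layer1
        h = self.act(self.bn2(self.conv2(h))) + h      # layer2 + SC
        g = self.act(self.bn3(self.conv3(h)))          # layer3
        g = self.act(self.bn4(self.conv4(g))) + g      # layer4 + SC
        return g.mean(dim=-1)                          # layer5: GAP -> (batch, 64)

out = ConvTimeNetBranch()(torch.randn(2, 1, 64))       # shape (2, 64)
```

For the trading task, the fully connected head would be followed by a Softmax over classes, with the lower learning rate (0.0001) from the table.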
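For the TimeNet row, the LSTM tuple (1, 12, 2, True) with superscript 4 is read here as (input_size, hidden_size, num_layers, bidirectional); since footnote 4 is not part of this excerpt, that interpretation is an assumption. A minimal PyTorch sketch under that assumption:

```python
import torch
import torch.nn as nn

class TimeNetBranch(nn.Module):
    """One per-channel TimeNet branch (layers 1-2 of the table).

    (1, 12, 2, True) is assumed to mean a 2-layer bidirectional LSTM
    with input size 1 and hidden size 12, so each time step emits a
    2 * 12 = 24-dimensional feature.
    """
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=12, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, x):             # x: (batch, T, 1)
        out, _ = self.lstm(x)         # (batch, T, 24)
        return out.mean(dim=1)        # layer2: GAP over time -> (batch, 24)

out = TimeNetBranch()(torch.randn(2, 32, 1))  # shape (2, 24)
```

As with ConvTimeNet, the fully connected layer3 (and, for trading, the Softmax layer4) would sit on top of the pooled features.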