The objective is to conduct an empirical study of the training and performance of transformer models under different loss functions, using Yahoo Finance data, PyTorch, and various machine learning modules.
- Assess the effectiveness of Mean Squared Error (MSE) and Mean Absolute Error (MAE) as loss functions in transformer models.
- Evaluate the impact of cross-entropy loss on transformers for time-series prediction.
- Compare and contrast results from Long Short-Term Memory (LSTM) and Transformer models.
- Establish a robust baseline model as a basis for FinRL’s reinforcement learning models.
PyTorch, Python
- Yun Zhe Chen (Project Lead) [email protected]
- David C [email protected]
- Wenjie Chen [email protected]
- Andy Zhu [email protected]
- Derrick L [email protected]
- Hongwei L [email protected]
- Gather resources
- Review published research
- Review relevant machine learning topics and become familiar with PyTorch
- Learn ARMA, regression, LSTM, and Transformer models for practice
- Collect and preprocess data from Yahoo Finance
- Compose a time series from the collected data (see the data-loading sketch after this list)
- Apply standard LSTM model training using PyTorch
- Implement and test the LSTM model using Mean Squared Error and Mean Absolute Error as loss functions (see the LSTM training sketch after this list)
- Report current progress for discussion with the professor
- Apply standard transformer model training using PyTorch
- Implement and test the transformer model by employing Mean Squared Error and Mean Absolute Error as loss functions
- Implement cross-entropy loss and test it with the transformer model (see the classification sketch after this list)
- Analyze results
- Compare and contrast results from the LSTM and transformer models
- Prepare poster for presentation
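The data-collection and windowing steps could look roughly like the following. This is a minimal sketch, assuming the yfinance package and a single ticker; the symbol, date range, and 30-day look-back window are illustrative placeholders rather than the project's actual configuration.

```python
import numpy as np
import torch
import yfinance as yf

# Download daily closing prices for one ticker (placeholder symbol and dates).
prices = yf.download("AAPL", start="2015-01-01", end="2023-12-31")["Close"]

# Flatten to a 1-D float array and normalize.
values = np.asarray(prices, dtype=np.float32).reshape(-1)
values = (values - values.mean()) / values.std()

# Build sliding windows: each sample is `window` past closes,
# the target is the next close.
window = 30  # assumed look-back length
X = np.stack([values[i : i + window] for i in range(len(values) - window)])
y = values[window:]

X = torch.from_numpy(X).unsqueeze(-1)  # shape: (samples, window, 1)
y = torch.from_numpy(y)                # shape: (samples,)
```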
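For the LSTM baseline and the MSE/MAE comparison, a training loop might be sketched as below, reusing the windowed tensors `X` and `y` from the data step. Layer sizes, learning rate, and epoch count are assumptions for illustration, not the project's final hyperparameters; swapping `nn.MSELoss()` for `nn.L1Loss()` switches the loss from MSE to MAE.

```python
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    """Simple LSTM that predicts the next normalized close from a window."""
    def __init__(self, input_size=1, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, seq_len, hidden_size)
        return self.head(out[:, -1])   # predict from the last time step

model = LSTMRegressor()
criterion = nn.MSELoss()               # swap in nn.L1Loss() for the MAE run
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):                # placeholder epoch count
    optimizer.zero_grad()
    pred = model(X).squeeze(-1)
    loss = criterion(pred, y)
    loss.backward()
    optimizer.step()
```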
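Cross-entropy needs a discrete target, so one plausible reading of the cross-entropy task is to recast the next-step prediction as an up/down classification; the sketch below shows that framing with a small Transformer encoder. The class definition, model sizes, and training settings are assumptions, not the project's confirmed setup. For the MSE/MAE transformer experiments, the same encoder with a single-output head and `nn.MSELoss()` or `nn.L1Loss()` would apply.

```python
import torch
import torch.nn as nn

class TransformerClassifier(nn.Module):
    """Transformer encoder that classifies the next move as down (0) or up (1)."""
    def __init__(self, d_model=32, nhead=4, num_layers=2, num_classes=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):               # x: (batch, seq_len, 1)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])      # logits from the last position

# Assumed label definition: 1 if the next close exceeds the last close
# in the window, else 0.
labels = (y > X[:, -1, 0]).long()

model = TransformerClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):                 # placeholder epoch count
    optimizer.zero_grad()
    logits = model(X)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```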
Project Link https://github.com/blitzionic/FinRL---Stock-Prediction
Resources: