A neural network is a model composed of many parameters. Although neural networks have achieved huge success, interpreting and explaining a trained neural network remains challenging. The major focus of our research is to interpret and explain neural networks and to apply these interpretations to improve training and solve real-world problems.
Sensitivity Analysis
A trained neural network builds an input-output mapping through its many parameters. We attempt to explain this implicit mapping using sensitivity analysis. Sensitivity is usually defined as the degree to which the output changes in response to an input or parameter change. High sensitivity indicates that the output reacts strongly to input or parameter perturbations. Sensitivity analysis thus provides a tool for adjusting and interpreting neural networks.
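To make this definition concrete, the sketch below estimates sensitivity empirically by sampling small input perturbations and averaging the resulting output deviation. It is an illustrative Monte Carlo approximation, not the analytical Madaline sensitivity derived in our papers; the model `f`, the perturbation scale `sigma`, and the toy weights are assumptions for demonstration.

```python
import numpy as np

def estimate_sensitivity(f, x, sigma=0.01, n_samples=1000, rng=None):
    """Estimate the sensitivity of model f at input x as the expected
    output deviation under small Gaussian input perturbations."""
    rng = np.random.default_rng(rng)
    y = f(x)
    deviations = [abs(f(x + rng.normal(scale=sigma, size=x.shape)) - y)
                  for _ in range(n_samples)]
    return np.mean(deviations)  # high value: output is sensitive at x

# Toy example: a single linear threshold unit (loosely mimicking an Adaline).
w = np.array([0.8, -0.5])
f = lambda x: np.sign(w @ x)
print(estimate_sensitivity(f, np.array([1.0, 1.0]), sigma=0.1))
```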
Citation
- Xiaoqin Zeng, Jing Shao, Yingfeng Wang, and Shuiming Zhong, “A Sensitivity-based Approach for Pruning Architecture of Madalines”, Neural Computing and Applications, vol. 18, no. 8, pp. 957-965, 2009.
- Yingfeng Wang, Xiaoqin Zeng, Daniel S. Yeung, and Zhihang Peng, “Computation of Madalines Sensitivity to Input and Weight Perturbations,” Neural Computation, vol. 18, no. 11, pp. 2854-2877, 2006.
- Xiaoqin Zeng, Yingfeng Wang, and Kang Zhang, “Computation of Adalines Sensitivity to Weight Perturbation,” IEEE Transactions on Neural Networks, vol. 17, no. 2, pp. 515-519, 2006.
- Yingfeng Wang and Xiaoqin Zeng, “Using a Sensitivity Measure to Improve Training Accuracy and Convergence for Madalines,” Proceedings of International Joint Conference on Neural Networks (IJCNN), pp. 1750-1756, Jul. 2006.
- Yingfeng Wang, Xiaoqin Zeng, and Daniel S. Yeung, “Sensitivity Analysis of Madalines to Weight Perturbation,” Lecture Notes in Artificial Intelligence, vol. 3930, pp. 822-831, 2006.
- Yingfeng Wang, Xiaoqin Zeng, and Daniel S. Yeung, “Analysis of Sensitivity Behavior of Madalines,” Proceedings of IEEE International Conference on Machine Learning and Cybernetics (ICMLC), pp. 4731-4737, Aug. 2005.
- Yingfeng Wang, Xiaoqin Zeng, and Lixin Han, “Sensitivity of Madalines to Input and Weight Perturbations,” Proceedings of IEEE International Conference on Machine Learning and Cybernetics (ICMLC), pp. 1349-1354, Nov. 2003.
Graph Neural Networks
We propose a general training strategy that improves graph autoencoder training by injecting noise into the training input data. The strategy is compatible with almost all existing graph autoencoders.
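The sketch below illustrates the general idea under the assumption of Gaussian feature noise: perturb the input node features at each training step while reconstructing the clean target. The `train_step`, `encoder`, and `decoder` names are hypothetical placeholders for whichever graph autoencoder is being trained.

```python
import numpy as np

def noisy_training_inputs(features, noise_level=0.1, rng=None):
    """Return a noise-injected copy of the node feature matrix."""
    rng = np.random.default_rng(rng)
    return features + rng.normal(scale=noise_level, size=features.shape)

# Per epoch: feed noise-injected features to the autoencoder while keeping
# the clean graph as the reconstruction target (hypothetical training loop).
# for epoch in range(num_epochs):
#     x_noisy = noisy_training_inputs(x, noise_level=0.1)
#     loss = train_step(encoder, decoder, x_noisy, adjacency_target)
```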
Citation
- Yingfeng Wang*, Biyun Xu, Myungjae Kwak, and Xiaoqin Zeng, “A Noise Injection Strategy for Graph Autoencoder Training,” Neural Computing and Applications, vol. 33, no. 10, pp. 4807-4814, 2021.
- Yingfeng Wang, Biyun Xu, Myungjae Kwak, and Xiaoqin Zeng, “A Simple Training Strategy for Graph Autoencoder,” Proceedings of the International Conference on Machine Learning and Computing (ICMLC), pp. 341-345, Feb. 2020.
Uncertainty Quantification
Machine learning models may predict or classify well on many cases but poorly on others. We therefore need to measure the uncertainty of individual predictions or classifications, something that overall accuracy cannot capture. However, most conventional machine learning models do not allow us to measure such uncertainties directly. Here, we present a strategy for uncertainty quantification.
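As one generic illustration (not necessarily the exact method of the cited paper), per-prediction uncertainty can be obtained from the disagreement of an ensemble of bootstrap-trained models: the spread of their predicted probabilities serves as an uncertainty score for each individual case. The dataset and model choices below are assumptions for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Train an ensemble on bootstrap resamples of the training data.
rng = np.random.default_rng(0)
models = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(RandomForestClassifier(n_estimators=50).fit(X[idx], y[idx]))

# Per-case uncertainty: the standard deviation of the ensemble's predicted
# positive-class probabilities, with the mean as the point prediction.
probs = np.stack([m.predict_proba(X[:5])[:, 1] for m in models])
print("prediction:", probs.mean(axis=0))
print("uncertainty:", probs.std(axis=0))
```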
Citation
- Meng Hsiu Tsai, Nicole Marie Ely, and Yingfeng Wang*, “Uncertainty Estimation for Twitter Inference,” Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), pp. 1437-1440, 2021.