NEUROCOMPUTING, vol. 623, 2025 (SCI-Expanded, Scopus)
In the era of big data, the exponential increase in complex-valued data generation demands overwhelming resources in terms of storage capacity, device-to-device communication, data processing, and power consumption. Hence, in this study, we raise the question of whether all training data are beneficial to the training of fully complex-valued neural networks (FCVNNs). To this end, exploiting the online censoring (OC) strategy, we introduce novel cost-effective learning algorithms for training FCVNNs: the OC-based fully complex nonlinear gradient descent (OC-FCNGD) for the FCVNN and the OC-based augmented FCNGD (OC-AFCNGD) for the augmented FCVNN (AFCVNN). These algorithms are derived using Wirtinger calculus, which simplifies their derivations, allows them to be expressed in more compact forms, and thus removes the Schwarz symmetry restriction on their complex activation functions. The proposed OC-FCNGD and OC-AFCNGD algorithms considerably reduce the training times of the FCVNN and AFCVNN by censoring less informative training data, that is, by avoiding unnecessary weight updates. In this way, the OC-FCNGD and OC-AFCNGD also achieve better testing results than their standard counterparts. Importantly, the proposed algorithms are not limited to the training of shallow architectures such as the FCVNN and AFCVNN; they are also extended and applied to the fully complex-valued deep neural network (FCVDNN) and the augmented FCVDNN (AFCVDNN) in this paper, maintaining the efficiency and performance advantages arising from the OC strategy even in deep neural network models. Furthermore, we rigorously analyze the deterministic convergence of the OC-FCNGD and OC-AFCNGD, including both weak and strong convergence, by means of the unified mean value theorem. Finally, comprehensive experiments validate the theoretical and practical advantages of the proposed algorithms.
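To make the censoring idea concrete, the following is a minimal sketch of an OC-style update for a single fully complex-valued neuron trained by complex nonlinear gradient descent. It is not the paper's exact OC-FCNGD algorithm: the censoring rule (skipping samples whose instantaneous error magnitude falls below a threshold tau), the complex tanh activation, the step size, and the synthetic data are all illustrative assumptions. The weight update is written via the Wirtinger derivative of the squared error with respect to the conjugate weights.

import numpy as np

rng = np.random.default_rng(0)

def act(z):
    # Fully complex activation: tanh extended to the complex plane (analytic).
    return np.tanh(z)

def act_deriv(z):
    # Complex derivative of tanh: f'(z) = 1 - tanh(z)**2.
    return 1.0 - np.tanh(z) ** 2

# Synthetic complex-valued data from a fixed "teacher" weight vector (illustrative only).
n_samples, n_features = 2000, 4
X = 0.25 * (rng.standard_normal((n_samples, n_features))
            + 1j * rng.standard_normal((n_samples, n_features)))
w_true = 0.5 * (rng.standard_normal(n_features) + 1j * rng.standard_normal(n_features))
d = act(X @ w_true)

w = np.zeros(n_features, dtype=complex)
lr, tau = 0.1, 0.02          # learning rate and censoring threshold (assumed values)
updates = 0

for x, target in zip(X, d):
    s = x @ w                # pre-activation
    e = target - act(s)      # instantaneous complex error
    if np.abs(e) < tau:
        continue             # censor: sample deemed uninformative, no weight update
    # Wirtinger gradient of |e|^2 w.r.t. conj(w) is -e * conj(f'(s)) * conj(x),
    # so the steepest-descent update adds lr * e * conj(f'(s)) * conj(x).
    w += lr * e * np.conj(act_deriv(s)) * np.conj(x)
    updates += 1

mse = np.mean(np.abs(d - act(X @ w)) ** 2)
print(f"updated on {updates}/{n_samples} samples, final MSE {mse:.4f}")

The sketch only illustrates the mechanism the abstract describes: as the model improves, more samples fall below the error threshold and are censored, so weight updates (and their cost) are skipped for the less informative data.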