In many practical applications, acquiring large quantities of labelled data is difficult and expensive, whereas unlabelled data is readily available. Conventional supervised learning methods frequently underperform when labelled data is scarce or the classes are imbalanced. This study introduces a hybrid semi-supervised learning (SSL) architecture that integrates Neurochaos Learning (NL) with a threshold-based Self-Training (ST) method to overcome this constraint. The NL architecture converts input features into chaos-based firing-rate representations that capture nonlinear relationships in the data, while ST progressively enlarges the labelled set using high-confidence pseudo-labelled samples. The model's performance is assessed on ten benchmark datasets with five machine learning classifiers, treating 85% of the training data as unlabelled and only 15% as labelled. The proposed Self-Training Neurochaos Learning (NL+ST) architecture consistently attains higher performance gains than standalone ST models, especially on small, nonlinear and imbalanced datasets such as Wine (162.42%), Iris (121.34%) and Glass Identification (95.46%). The results indicate that combining chaos-based feature extraction with SSL improves generalisation, resilience, and classification accuracy in low-data contexts.
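The threshold-based self-training loop described above can be sketched as follows. This is a minimal illustration, not the paper's NL+ST pipeline: a plain logistic regression stands in for the Neurochaos feature extractor and classifier, the 0.9 confidence threshold and the synthetic dataset are assumptions, and only the paper's 15%/85% labelled/unlabelled split is taken from the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Mirror the paper's split: 15% labelled, 85% treated as unlabelled.
n_lab = int(0.15 * len(X))
idx = rng.permutation(len(X))
X_lab, y_lab = X[idx[:n_lab]], y[idx[:n_lab]]
X_unlab = X[idx[n_lab:]]

threshold = 0.9  # confidence cut-off for pseudo-labels (assumed value)
for _ in range(5):  # a few self-training rounds
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    if len(X_unlab) == 0:
        break
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= threshold
    if not confident.any():
        break
    # Move high-confidence pseudo-labelled samples into the labelled pool.
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]

print(f"labelled pool grew from {n_lab} to {len(X_lab)} samples")
```

Each round retrains on the enlarged labelled pool, so later iterations can confidently label points the initial 15% model could not; the loop stops once no remaining unlabelled sample clears the threshold.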
| Primary Language | English |
|---|---|
| Subjects | Applied Mathematics (Other) |
| Journal Section | Research Article |
| Authors | |
| Submission Date | January 6, 2026 |
| Acceptance Date | March 13, 2026 |
| Publication Date | March 28, 2026 |
| DOI | https://doi.org/10.51537/chaos.1857261 |
| IZ | https://izlik.org/JA73JD34RZ |
| Published in Issue | Year 2026 Volume: 8 Issue: 1 |
Chaos Theory and Applications in Applied Sciences and Engineering: An interdisciplinary journal of nonlinear science
Articles published in CHTA are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License