
ARTICLE TYPE : RESEARCH ARTICLE

Published on: 17 Apr 2026, Volume 2
Journal Title: WebLog Journal of Computer Science and Technology | WebLog J Comp Sci Technol | WJCST
Source URL: https://weblogoa.com/articles/wjcst.2026.d1702
Permanent Identifier (DOI): https://doi.org/10.5281/zenodo.19689259

Tri-Domain Multimodal Fusion: A Triple-Branch BiLSTM-CNN Framework for Robust Satellite Telemetry Anomaly Detection

Nayna Potdukhe 1
Astha Thakur 1 *
Anushree Fadnavis 1
Shrushti Samarth 1
Bhumika Thakre 1
1Department of Artificial Intelligence and Machine Learning, Government Polytechnic, Nagpur, India

Abstract

Satellite systems continuously produce vast quantities of telemetry data that reflect the operational condition of onboard subsystems [1, 5]. Identifying unusual trends in this data is essential for ensuring satellite reliability and preventing system failures. Conventional anomaly detection techniques often struggle with telemetry signals because of their complex temporal and multidimensional character. To address this problem, this study proposes a multimodal deep learning architecture for anomaly prediction in satellite telemetry data. The proposed method begins with data preprocessing, which includes data cleaning, normalization, and feature engineering using lag-based features and rolling statistical metrics. Class imbalance is then addressed with the Synthetic Minority Oversampling Technique (SMOTE), and sequential telemetry patterns are constructed with a sliding-window method. The system uses a multimodal architecture that integrates three complementary models: a Bidirectional Long Short-Term Memory (BiLSTM) network with an attention mechanism for learning temporal patterns, a CNN for extracting spatial representations from telemetry-derived images, and a spectrogram-based CNN for capturing the signal's frequency-domain features [11, 13].
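The preprocessing steps described above (lag-based features, rolling statistics, and sliding windows) can be sketched as follows. The column name, lag orders, and window sizes are illustrative assumptions, not the paper's actual configuration; the SMOTE step (available in the imbalanced-learn package) is indicated only by a comment.

```python
import numpy as np
import pandas as pd

def engineer_features(df, col="sensor", lags=(1, 2, 3), roll=5):
    """Add lag-based features and rolling statistical metrics (illustrative)."""
    out = df.copy()
    for k in lags:
        out[f"{col}_lag{k}"] = out[col].shift(k)
    out[f"{col}_rollmean"] = out[col].rolling(roll).mean()
    out[f"{col}_rollstd"] = out[col].rolling(roll).std()
    # Drop the initial rows made NaN by shifting/rolling
    return out.dropna().reset_index(drop=True)

def sliding_windows(values, window=32, step=1):
    """Cut the feature matrix into fixed-length sequences for the temporal branch."""
    n = (len(values) - window) // step + 1
    return np.stack([values[i * step : i * step + window] for i in range(n)])

# Toy telemetry signal standing in for one normalized sensor channel
rng = np.random.default_rng(0)
df = pd.DataFrame({"sensor": rng.normal(size=200)})

feats = engineer_features(df)           # 6 columns: raw + 3 lags + 2 rolling stats
X = sliding_windows(feats.to_numpy())   # shape: (n_windows, 32, 6)
# In the full pipeline, SMOTE would be applied here to rebalance
# anomaly vs. normal windows before training.
print(X.shape)
```

The sliding-window tensor has the (samples, timesteps, features) layout that recurrent models such as a BiLSTM expect.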

A feature fusion layer integrates the features extracted by these models, followed by fully connected neural layers that use a sigmoid activation function to classify anomalies. Detection efficacy is assessed with performance indicators including accuracy, precision, recall, F1-score, and AUC. Experimental findings show that the proposed multimodal method effectively combines temporal, spatial, and spectral information from telemetry data, which enhances anomaly detection. This framework offers a flexible way to automatically monitor satellite health and issue timely alerts when anomalies occur.
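As a rough sketch of the fusion stage, the branch embeddings can be concatenated and passed through a dense sigmoid head. The embedding dimensions and the single dense layer below are illustrative assumptions made for this sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative per-branch embeddings for a batch of 4 telemetry windows
temporal = rng.normal(size=(4, 64))   # BiLSTM + attention branch
spatial = rng.normal(size=(4, 32))    # image-based CNN branch
spectral = rng.normal(size=(4, 32))   # spectrogram CNN branch

# Feature fusion: concatenate along the feature axis -> (4, 128)
fused = np.concatenate([temporal, spatial, spectral], axis=1)

# Fully connected head with a sigmoid output giving anomaly scores in (0, 1)
W = rng.normal(size=(128, 1)) * 0.1
b = np.zeros(1)
scores = sigmoid(fused @ W + b)
print(scores.shape)
```

In practice the head would be trained end-to-end with the three branches using a binary cross-entropy loss, with a threshold on the sigmoid score deciding the anomaly label.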

Keywords: Satellite Telemetry; Anomaly Detection; Multimodal Deep Learning; BiLSTM; Self-Attention Mechanism; Convolutional Neural Networks (CNN); Spectrogram Analysis; Sensor Fusion; Spacecraft Health Monitoring; SMOTE; Time-Series Analysis; F1-Score Optimization

Citation

Potdukhe N, Thakur A, Fadnavis A, Samarth S, Thakre B. Tri-Domain Multimodal Fusion: A Triple-Branch BiLSTM-CNN Framework for Robust Satellite Telemetry Anomaly Detection. WebLog J Comp Sci Technol. wjcst.2026.d1702. https://doi.org/10.5281/zenodo.19689259