M 6 Anomaly Detection in Networks
Anomaly Detection in Networks: Use of RNNs
1. Sequence Modeling
RNNs learn patterns across time — for example, they can learn what a "normal" sequence
looks like and identify when something deviates.
2. Forecasting-Based Anomaly Detection
• Train an RNN (or LSTM/GRU) to predict the next value(s) in a time series.
• If the prediction error at a time point exceeds a chosen threshold, mark that point as an anomaly.
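The thresholding step above can be sketched as follows. This is a minimal NumPy illustration of the detection rule only: the "forecasts" are fabricated stand-ins for the predictions a trained RNN/LSTM would produce, and the threshold is the common mean + 3·std heuristic fitted on errors from normal data.

```python
import numpy as np

# Errors the model makes on normal (training) data define the threshold.
# These "predictions" are hypothetical stand-ins for RNN/LSTM forecasts.
train_actual = np.array([1.00, 1.05, 0.95, 1.02, 0.98])
train_pred   = np.array([1.00, 1.00, 1.00, 1.00, 1.00])
train_errors = np.abs(train_actual - train_pred)
threshold = train_errors.mean() + 3 * train_errors.std()

# At test time, flag any point whose prediction error exceeds the threshold.
test_actual = np.array([1.01, 0.99, 5.00, 1.03])
test_pred   = np.array([1.00, 1.00, 1.00, 1.00])
anomalies = np.abs(test_actual - test_pred) > threshold
```

The key design choice is that the threshold is estimated only from errors on normal data, so a single large anomaly cannot inflate the threshold and hide itself.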
3. Autoencoder-Based Detection
• Use an RNN autoencoder: input a sequence, and the RNN encodes and then decodes it.
• The reconstruction error is used as an anomaly score.
• High error → sequence is anomalous.
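A minimal runnable sketch of reconstruction-error scoring. To stay self-contained it uses a linear autoencoder (a PCA projection fitted on normal sequences) as a stand-in for a trained RNN autoencoder; the scoring and thresholding logic is the same either way. All data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# "Normal" sequences: smooth ramps of length 8 with small noise (one per row).
t = np.linspace(0.0, 1.0, 8)
normal = t + 0.01 * rng.standard_normal((200, 8))

# Linear autoencoder fitted on normal data (stand-in for a trained RNN
# autoencoder): project onto the top principal components, then reconstruct.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # 2-dimensional "latent code"

def reconstruct(x):
    code = (x - mean) @ components.T
    return mean + code @ components

def score(x):
    # Mean squared reconstruction error = anomaly score.
    return ((x - reconstruct(x)) ** 2).mean()

normal_scores = np.array([score(s) for s in normal])
threshold = normal_scores.mean() + 3 * normal_scores.std()

anomalous = t.copy()
anomalous[4] += 1.0  # a sudden spike mid-sequence
```

Because the model only learned to compress normal sequences, the spiked sequence cannot be reconstructed well, so `score(anomalous)` lands far above the threshold.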
Tools and Libraries
• TensorFlow/Keras or PyTorch for modelling
• scikit-learn for preprocessing
• Prophet or ARIMA as classical baselines for comparison
• Time-series anomaly detection (TSAD) libraries such as Merlion or River for broader toolkits
• Anomaly: a data point or sequence that deviates significantly from the model's expectation or learned distribution.
• Two main RNN-based approaches: prediction-based detection and autoencoder-based detection.
Recurrent Neural Networks (RNNs) for Temporal Anomaly Detection
• What is Temporal Anomaly Detection?
• Temporal anomaly detection refers to identifying unexpected or abnormal patterns over time in sequential or time-dependent data.
• Examples:
• Unusual spikes in network traffic logs
• Irregular heartbeat patterns in ECG data
• Unexpected machinery vibration patterns
• These anomalies may not be obvious in isolated data points, but become clear only in the context of temporal patterns.
• Recurrent Neural Networks (RNNs) are designed to process sequences. They learn temporal dependencies by maintaining a hidden state that is updated at every time step based on:
• the current input
• the previous hidden state
• This allows them to remember what happened in the past and detect deviations from normal sequences, making them well suited to temporal anomaly detection.
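The hidden-state update described above is, for a vanilla RNN, h_t = tanh(W_xh·x_t + W_hh·h_{t-1} + b). A minimal NumPy sketch (the dimensions and random weights are purely illustrative; in practice the weights are learned by backpropagation through time):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One vanilla-RNN update: h_t = tanh(W_xh @ x_t + W_hh @ h_prev + b_h)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

rng = np.random.default_rng(1)
input_dim, hidden_dim = 3, 4
W_xh = rng.standard_normal((hidden_dim, input_dim))  # input-to-hidden weights
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) # hidden-to-hidden weights
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)  # initial hidden state
for x_t in rng.standard_normal((5, input_dim)):  # a 5-step input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)        # h carries past context forward
```

Because each h depends on the previous h, the final state summarises the whole sequence; this is the memory that lets an RNN notice when a sequence deviates from the patterns it was trained on.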
Prediction-Based Anomaly Detection
RNN Autoencoder-Based Anomaly Detection