How Much Training Data is Enough? A Case Study for HTTP Anomaly-Based Intrusion Detection.

IEEE Access (2020)

Abstract
Most anomaly-based intrusion detectors rely on models learned from training datasets whose quality is crucial to their performance. Although the properties of suitable datasets have been formulated, the influence of dataset size on detector performance has received scarce attention so far. In this work, we investigate the optimal size of a training dataset. This size should be large enough for the training data to be representative of normal behavior; beyond that point, collecting more data may be an unnecessary waste of time and computational resources, not to mention an increased risk of overtraining. In this spirit, we provide a method to determine when the amount of data collected in the production environment is representative of normal behavior, in the context of a detector of HTTP URI attacks based on 1-grams. Our approach is founded on a set of indicators related to the statistical properties of the data. These indicators are computed periodically during data collection, producing time series that stabilize when more training data is not expected to translate into better system performance, which indicates that data collection can be stopped. We present a case study with real-life datasets collected at the University of Seville (Spain) and a public dataset from the University of Saskatchewan. Applying our method to these datasets showed that more than 42% of one trace and almost 20% of another were collected unnecessarily, showing that the proposed method can be an efficient approach for collecting training data in the production environment.
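The abstract only sketches the stopping criterion at a high level. The following Python snippet is a minimal illustration of that idea, not the paper's actual method: it assumes character-level (1-gram) entropy as the single statistical indicator and a relative-range test over a sliding window as the stabilization rule; the names and parameters (char_entropy, batch_size, window, tol) are hypothetical choices for illustration.

```python
import math
from collections import Counter

def char_entropy(counts):
    """Shannon entropy (bits) of a character (1-gram) frequency distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def collect_until_stable(uri_stream, batch_size=1000, window=5, tol=0.01):
    """Consume URIs in batches and track an indicator time series.

    Collection stops once the last `window` indicator values vary by less
    than `tol` in relative range, i.e. the series has stabilized and more
    data is not expected to improve the learned model of normal behavior.
    """
    counts = Counter()   # cumulative 1-gram counts over all collected URIs
    history = []         # indicator time series, one point per batch
    collected = []
    batch = []
    for uri in uri_stream:
        batch.append(uri)
        if len(batch) < batch_size:
            continue
        collected.extend(batch)
        for u in batch:
            counts.update(u)          # update character-level counts
        batch = []
        history.append(char_entropy(counts))
        if len(history) >= window:
            recent = history[-window:]
            spread = (max(recent) - min(recent)) / (max(recent) or 1.0)
            if spread < tol:          # series stabilized: stop collecting
                break
    return collected, history
```

In practice one would combine several such indicators (as the paper proposes a set of them) and only stop when all of their time series have flattened; the single-entropy version above is kept deliberately small.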
Keywords
Training, Detectors, Training data, Intrusion detection, Production, Data collection, Machine learning, Anomaly-based intrusion detection, dataset assessment, training