Optimal Hypothesis Testing Based on Information Theory

CoRR (2023)

Abstract
A major problem in the current theory of hypothesis testing is that there is no unified indicator for evaluating the goodness of different test methods, since the cost or utility function usually depends on the specific application scenario; as a result, no single test method can be identified as optimal. In this paper, the problem of optimal hypothesis testing is investigated from the standpoint of information theory. We propose an information-theoretic framework for hypothesis testing consisting of five parts: (i) test information (TI) is introduced to evaluate a hypothesis-testing problem; it depends only on the a posteriori probability distribution of the hypotheses and is independent of any specific test method; (ii) precision, measured in bits, is introduced to evaluate the validity of a specific test method; (iii) the sampling a posteriori (SAP) probability test is presented, which selects a hypothesis at random according to the a posteriori probability distribution of the hypotheses; (iv) the probability of test failure is defined to quantify the probability that a wrong decision is made; (v) a test theorem is proved, stating that every precision lower than the TI is achievable. Specifically, for every precision lower than the TI, there exists a test method whose probability of test failure tends to zero; conversely, there is no test method whose precision exceeds the TI. Numerical simulations demonstrate that the SAP test is asymptotically optimal. In addition, the results show that the precision of the SAP test and of existing test methods, such as the maximum a posteriori probability, expected a posteriori probability, and median a posteriori probability tests, does not exceed the TI.
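Since the abstract defines the SAP test only as drawing a hypothesis at random from the a posteriori distribution, the following minimal Python sketch illustrates that selection rule alongside a MAP decision; the posterior values, the entropy-style posterior summary, and all helper names are illustrative assumptions, not the authors' implementation or their exact definition of TI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example posterior over three hypotheses (illustrative values only).
posterior = np.array([0.6, 0.3, 0.1])

def sap_test(posterior, rng):
    """Sampling a posteriori (SAP) test: pick a hypothesis at random
    with probability equal to its a posteriori probability."""
    return int(rng.choice(len(posterior), p=posterior))

def map_test(posterior):
    """Maximum a posteriori (MAP) test: pick the most probable hypothesis."""
    return int(np.argmax(posterior))

def posterior_entropy_bits(posterior):
    """Shannon entropy of the posterior in bits; shown only as a
    posterior-dependent, method-independent quantity in the spirit of TI
    (the paper's precise definition of TI may differ)."""
    p = posterior[posterior > 0]
    return float(-np.sum(p * np.log2(p)))

# Monte Carlo estimate of how often the SAP decision misses the true hypothesis,
# assuming the true hypothesis is itself distributed according to the posterior.
trials = 100_000
truth = rng.choice(len(posterior), size=trials, p=posterior)
decisions = np.array([sap_test(posterior, rng) for _ in range(trials)])
failure_rate = float(np.mean(decisions != truth))

print("MAP decision:", map_test(posterior))
print("Posterior entropy (bits):", round(posterior_entropy_bits(posterior), 3))
print("Estimated SAP failure rate:", round(failure_rate, 3))
```

In this toy setup the SAP decision errs with probability 1 - sum_i p_i^2 while the MAP decision errs with probability 1 - max_i p_i; the paper's result concerns achievable precision relative to TI rather than these raw error rates.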
Keywords
optimal hypothesis testing, information theory