Opportunistic Information-Bottleneck for Goal-oriented Feature Extraction and Communication
IEEE Open Journal of the Communications Society (2024)
Abstract
The Information Bottleneck (IB) method is an information theoretical
framework to design a parsimonious and tunable feature-extraction mechanism,
such that the extracted features are maximally relevant to a specific learning
or inference task. Despite its theoretical value, the IB is based on a
functional optimization problem that admits a closed-form solution only in
specific cases (e.g., Gaussian distributions), making it difficult to apply in
most applications, where it is necessary to resort to complex, approximate
variational implementations. To overcome this limitation, we
propose an approach to adapt the closed-form solution of the Gaussian IB to a
general task. Whatever inference task is to be performed by a (possibly
deep) neural network, the key idea is to opportunistically design a regression
sub-task, embedded in the original problem, where we can safely assume a
(joint) multivariate normality between the sub-task's inputs and outputs. In
this way we can exploit a fixed and pre-trained neural network to process the
input data, using a tunable number of features, to trade data-size and
complexity for accuracy. This approach is particularly useful every time a
device needs to transmit data (or features) to a server that has to fulfil an
inference task, as it provides a principled way to extract the most relevant
features for the task to be executed, while looking for the best trade-off
between the size of the feature vector to be transmitted, inference accuracy,
and complexity. Extensive simulation results testify to the effectiveness of the
proposed method and encourage further investigation of this research line.
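The closed-form Gaussian IB solution the abstract builds on can be sketched as follows. This is an illustrative reconstruction of the standard result for jointly Gaussian variables (projection of the input onto the left eigenvectors of Σ_{x|y}Σ_x⁻¹, an eigenvector being retained when its eigenvalue λᵢ satisfies β > 1/(1−λᵢ)); it is not the authors' implementation, and all function and variable names are hypothetical.

```python
import numpy as np

def gaussian_ib_features(x, y, beta):
    """Closed-form Gaussian IB projection (hypothetical sketch).

    Keeps the left eigenvectors of Sigma_{x|y} Sigma_x^{-1} whose
    eigenvalues lambda_i satisfy lambda_i < 1 - 1/beta.
    x: (n, dx) input samples, y: (n, dy) target samples, beta > 1.
    """
    dx = x.shape[1]
    # Joint sample covariance, then extract the blocks.
    S = np.cov(np.hstack([x, y]), rowvar=False)
    Sx, Sy, Sxy = S[:dx, :dx], S[dx:, dx:], S[:dx, dx:]
    # Conditional covariance: Sigma_{x|y} = Sx - Sxy Sy^{-1} Syx.
    Sx_given_y = Sx - Sxy @ np.linalg.solve(Sy, Sxy.T)
    # Left eigenvectors of M = Sigma_{x|y} Sigma_x^{-1} are the
    # (right) eigenvectors of M^T.
    M = Sx_given_y @ np.linalg.inv(Sx)
    lam, V = np.linalg.eig(M.T)
    lam, V = lam.real, V.real          # eigenvalues are real, in [0, 1]
    order = np.argsort(lam)
    lam, V = lam[order], V[:, order]
    # Larger beta raises the threshold, so more features are retained.
    keep = lam < 1.0 - 1.0 / beta
    A = V[:, keep].T                   # one row per extracted feature
    return x @ A.T, A
```

The number of retained features grows monotonically with β, which is the tunable size/accuracy trade-off described above: a small β transmits only the few directions most relevant to predicting y, while a large β approaches a full-rank representation of x.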
Keywords
Goal-Oriented communications, information bottleneck, edge intelligence