Mappings, dimensionality and reversing out of deep neural networks

Zhixiang Cui, Peter Grindrod

IMA Journal of Applied Mathematics (2023)

Abstract

We consider a large cloud of vectors formed at each layer of a standard neural network, corresponding to a large number of separate inputs presented independently to the classifier. Although the embedding dimension (the total possible degrees of freedom) reduces as we pass through successive layers, from input to output, the actual dimensionality of the point clouds that the layers contain does not necessarily reduce. We argue that this phenomenon may result in a vulnerability to (universal) adversarial attacks, which are small, specific perturbations. This analysis requires us to estimate the intrinsic dimension of point clouds (with values between 20 and 200) within embedding spaces of dimension 1000 up to 800,000, which needs some care. If the cloud dimension actually increases from one layer to the next, it implies there is some 'volume filling' over-folding, and thus there exist possible small directional perturbations in the latter space that are equivalent to shifting large distances within the former space, inviting the possibility of universal and imperceptible attacks.
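The analysis hinges on estimating the intrinsic dimension of activation point clouds that sit in far higher-dimensional embedding spaces. Below is a minimal sketch of one standard estimator for this task, the TwoNN estimator of Facco et al. (2017); the abstract does not name the estimator used in the paper, so the choice of TwoNN, the layer sizes and the synthetic point cloud here are illustrative assumptions, not the authors' method.

# A minimal sketch of the TwoNN intrinsic-dimension estimator
# (Facco et al., 2017): one standard way to estimate the dimension
# of a point cloud inside a much larger embedding space. The cloud
# below is synthetic and illustrative, not data from the paper.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_dimension(points: np.ndarray) -> float:
    """Estimate the intrinsic dimension of an (n_points, embed_dim) cloud.

    Uses the ratio mu = r2 / r1 of each point's second- and first-
    nearest-neighbour distances; under the TwoNN model mu is Pareto
    distributed with shape equal to the intrinsic dimension, so the
    maximum-likelihood estimate is n / sum(log mu).
    """
    # n_neighbors=3 because each query point is its own nearest neighbour.
    nbrs = NearestNeighbors(n_neighbors=3).fit(points)
    dists, _ = nbrs.kneighbors(points)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / r1
    mu = mu[mu > 1.0]  # drop degenerate ratios from duplicate points
    return len(mu) / np.sum(np.log(mu))

# Illustrative use: a cloud that is intrinsically ~20-dimensional,
# linearly embedded in a 1000-dimensional space, echoing the scales
# quoted in the abstract (cloud dimension 20-200, embedding dimension
# 1000 and up).
rng = np.random.default_rng(0)
latent = rng.normal(size=(5000, 20))               # true dimension 20
embedding = latent @ rng.normal(size=(20, 1000))   # embed into R^1000
print(f"estimated intrinsic dimension: {twonn_dimension(embedding):.1f}")

Applied layer by layer to the activation clouds of a trained classifier, an estimator of this kind is what would reveal the dimension increases, and hence the 'volume filling' over-folding, that the abstract argues invite universal adversarial perturbations.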
Keywords
deep neural networks, neural networks, dimensionality, mappings