Gradient-enhanced deep Gaussian processes for multifidelity modelling
CoRR (2024)
Abstract
Multifidelity models integrate data from multiple sources to produce a single
approximator for the underlying process. Dense low-fidelity samples are used to
reduce interpolation error, while sparse high-fidelity samples are used to
compensate for bias or noise in the low-fidelity samples. Deep Gaussian
processes (GPs) are attractive for multifidelity modelling as they are
non-parametric, robust to overfitting, perform well for small datasets, and,
critically, can capture nonlinear and input-dependent relationships between
data of different fidelities. Many datasets naturally contain gradient data,
especially when they are generated by computational models that are compatible
with automatic differentiation or have adjoint solutions. Principally, this
work extends deep GPs to incorporate gradient data. We demonstrate this method
on an analytical test problem and a realistic partial differential equation
problem, where we predict the aerodynamic coefficients of a hypersonic flight
vehicle over a range of flight conditions and geometries. In both examples, the
gradient-enhanced deep GP outperforms a gradient-enhanced linear GP model and
their non-gradient-enhanced counterparts.
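The core ingredient the abstract describes, incorporating gradient data into a GP, can be illustrated with a minimal single-fidelity sketch: because differentiation is linear, a GP over a function induces a joint GP over the function and its derivative, so gradient observations enter through analytic derivative blocks of the kernel. The sketch below is an illustrative 1-D gradient-enhanced GP with a squared-exponential kernel, not the paper's deep multifidelity model; the function names, toy data, and fixed lengthscale `ell` are assumptions for demonstration.

```python
import numpy as np

def rbf_blocks(x1, x2, ell=1.0):
    """Squared-exponential kernel and its derivative blocks for 1-D inputs.

    Returns cov(f, f), cov(f, f'), cov(f', f), cov(f', f') between the
    point sets x1 and x2, derived by differentiating k(x, x')."""
    d = x1[:, None] - x2[None, :]
    k = np.exp(-0.5 * d**2 / ell**2)
    k_fg = (d / ell**2) * k                    # d k / d x2
    k_gf = (-d / ell**2) * k                   # d k / d x1
    k_gg = (1.0 / ell**2 - d**2 / ell**4) * k  # d^2 k / (d x1 d x2)
    return k, k_fg, k_gf, k_gg

def fit_gradient_gp(X, y, dy, ell=1.0, jitter=1e-8):
    """Condition jointly on values y and gradients dy; return a mean predictor."""
    k, k_fg, k_gf, k_gg = rbf_blocks(X, X, ell)
    K = np.block([[k, k_fg], [k_gf, k_gg]]) + jitter * np.eye(2 * len(X))
    alpha = np.linalg.solve(K, np.concatenate([y, dy]))

    def predict(Xs):
        ks, ks_g, _, _ = rbf_blocks(Xs, X, ell)
        return np.hstack([ks, ks_g]) @ alpha

    return predict

# Toy example: observe f(x) = sin(x) and its exact gradient cos(x)
# at three points, then predict between them.
X = np.array([0.0, 1.5, 3.0])
predict = fit_gradient_gp(X, np.sin(X), np.cos(X), ell=1.0)
mean = predict(np.array([0.75]))[0]  # close to sin(0.75)
```

With only three samples, the gradient observations pin down the local slope at each point, which is why gradient enhancement helps most in the small-data regime the abstract targets; the deep-GP extension in the paper replaces this single fixed-kernel layer with a composition of GP layers linking the fidelities.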