I am Yifan Jiang (蒋亦凡), a final-year student in the Mathematics of Random Systems CDT. Before coming to Oxford, I completed my undergraduate degree in Mathematics at Fudan University. I have a broad interest in stochastic analysis and its applications in finance and machine learning.
It is my honor to be supervised by Professor Jan Obłój and Professor Gui-Qiang Chen. My DPhil research focuses on causal transport–type distributionally robust optimization in a dynamic context.
A new preprint on the sensitivity of causal DRO is now available. We introduce a pathwise Malliavin derivative and extend its adjoint operator, the Skorokhod integral, to a class of regular martingale integrators.
Feb 1, 2024
A new preprint on the duality of causal DRO is now available. Any comments are very welcome!
Oct 2, 2023
Our paper was recently accepted for NeurIPS 2023 🎉🎉🎉
Deep neural networks are known to be vulnerable to adversarial attacks (AA). For an image recognition task, this means that a small perturbation of the original can result in the image being misclassified. The design of such attacks, as well as methods of adversarial training against them, is the subject of intense research. We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions leveraging recent insights from DRO sensitivity analysis. We consider a set of distributional threat models. Unlike the traditional pointwise attacks, which assume a uniform bound on the perturbation of each input data point, distributional threat models allow attackers to perturb inputs in a non-uniform way. We link these more general attacks with questions of out-of-sample performance and Knightian uncertainty. To evaluate the distributional robustness of neural networks, we propose a first-order AA algorithm and its multistep version. Our attack algorithms include the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) as special cases. Furthermore, we provide a new asymptotic estimate of the adversarial accuracy against distributional threat models. The bound is fast to compute and first-order accurate, offering new insights even for the pointwise AA. It also naturally yields out-of-sample performance guarantees. We conduct numerical experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets using DNNs from RobustBench to illustrate our theoretical results. Our code is available here.
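The pointwise attacks recovered as special cases of our algorithms can be sketched as follows. This is a minimal NumPy illustration of FGSM and PGD on a toy quadratic loss, not the distributional attack of the paper; the loss and parameters are chosen purely for illustration.

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: one step of size eps along the sign
    of the loss gradient."""
    return x + eps * np.sign(grad)

def pgd(x0, loss_grad, eps, step, n_iter=10):
    """Projected Gradient Descent: iterated signed gradient steps,
    projected back onto the l_inf ball of radius eps around x0."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x + step * np.sign(loss_grad(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # l_inf projection
    return x

# Toy setting: the "loss" 0.5 * ||x||^2 is maximized by moving away from 0.
x0 = np.array([0.1, -0.2, 0.05])
loss_grad = lambda x: x            # gradient of 0.5 * ||x||^2
x_adv = pgd(x0, loss_grad, eps=0.03, step=0.01)
```

Here PGD saturates the `eps`-budget coordinate-wise; the distributional threat models in the paper relax precisely this uniform per-input constraint.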
Empirical Approximation to Invariant Measures for McKean–Vlasov Processes: Mean-field Interaction vs Self-interaction
This paper proves that, under a monotonicity condition, the invariant probability measure of a McKean–Vlasov process can be approximated by weighted empirical measures of several processes, including the McKean–Vlasov process itself. These processes are described by distribution-dependent or empirical-measure-dependent stochastic differential equations constructed from the equation for the McKean–Vlasov process. Convergence of the empirical measures is characterized by upper bound estimates on their Wasserstein distances to the invariant measure. Numerical simulations of the mean-field Ornstein–Uhlenbeck process are implemented to demonstrate the theoretical results.
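The mean-field Ornstein–Uhlenbeck simulation can be sketched as below. This is an assumed minimal setup, not the paper's scheme: an Euler–Maruyama particle system for dX_t = −(X_t + α·E[X_t]) dt + σ dW_t, with the interaction replaced by the empirical mean, and the invariant measure approximated by a time-averaged empirical measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_field_ou(n_particles=500, n_steps=4000, dt=0.01, alpha=0.5, sigma=1.0):
    """Euler--Maruyama for the interacting particle system approximating
        dX_t = -(X_t + alpha * E[X_t]) dt + sigma dW_t.
    Samples from the second half of the trajectory are pooled as a
    time-averaged empirical approximation of the invariant measure."""
    x = rng.standard_normal(n_particles)
    samples = []
    for k in range(n_steps):
        m = x.mean()  # mean-field term via the empirical mean
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
        x = x - (x + alpha * m) * dt + noise
        if k >= n_steps // 2:
            samples.append(x.copy())
    return np.concatenate(samples)

s = mean_field_ou()
```

For this model the interaction vanishes at the zero-mean equilibrium, so the invariant law is Gaussian with mean 0 and variance σ²/2, which the pooled samples should reproduce up to discretization and Monte Carlo error.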
Convergence of the Deep BSDE Method for FBSDEs with Non-Lipschitz Coefficients
Yifan Jiang and Jinfeng Li
Probability, Uncertainty and Quantitative Risk, Dec 2021
This paper is dedicated to solving high-dimensional coupled FBSDEs with non-Lipschitz diffusion coefficients numerically. Under mild conditions, we provide an a posteriori estimate of the numerical solution that holds for any time duration. This a posteriori estimate validates the convergence of the recently proposed Deep BSDE method. In addition, we develop a numerical scheme based on the Deep BSDE method and present numerical examples from financial markets to demonstrate its performance.
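The structure of the Deep BSDE training loop can be sketched on a toy problem. This is an assumed illustration, far simpler than the non-Lipschitz FBSDEs of the paper: the driver is zero, g(x) = x, X_t = x0 + W_t (so Y_0 = x0 exactly), and the neural networks are replaced by one trainable constant z[k] per time step, trained by plain gradient descent on the terminal mismatch.

```python
import numpy as np

rng = np.random.default_rng(1)

def deep_bsde_toy(x0=1.0, T=1.0, n_steps=20, n_paths=2000, n_iters=200, lr=0.5):
    """Deep-BSDE-style training loop for the toy BSDE
        dY_t = Z_t dW_t,  Y_T = g(X_T),  X_t = x0 + W_t,  g(x) = x,
    whose exact solution is Y_0 = x0 and Z_t = 1. The per-step networks
    of the Deep BSDE method are replaced by trainable constants z[k]."""
    dt = T / n_steps
    y0 = 0.0
    z = np.zeros(n_steps)
    for _ in range(n_iters):
        dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
        x_T = x0 + dW.sum(axis=1)      # forward process at maturity
        y_T = y0 + dW @ z              # controlled backward process at maturity
        res = y_T - x_T                # terminal mismatch Y_T - g(X_T)
        # gradient descent on the Monte Carlo loss E[(Y_T - g(X_T))^2]
        y0 -= lr * 2 * res.mean()
        z -= lr * 2 * (dW * res[:, None]).mean(axis=0)
    return y0, z

y0, z = deep_bsde_toy()
```

Training drives y0 towards the true initial value x0 = 1 and each z[k] towards the true control Z ≡ 1; in the actual Deep BSDE method each z[k] is a neural network taking the current state as input.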