
# Flow Matching and Diffusion Models

## Introduction

Object: data types such as images, videos, and proteins can be represented as vectors, i.e. $z \in \mathbb{R}^d$.

Generation: sampling from the data distribution, $z \sim p_{\text{data}}$.

Dataset: a finite set of samples from the data distribution, $z_1, \dots, z_N \sim p_{\text{data}}$.

Conditional generation: sampling from a conditional distribution, $z \sim p_{\text{data}}(\cdot \mid y)$.

Goal: train a generative model that transforms samples from an initial distribution $p_{\text{init}}$ into samples from the data distribution $p_{\text{data}}$.

## Flow and Diffusion Models

Simulating ordinary differential equations (ODEs) and stochastic differential equations (SDEs) can transform the initial distribution into the data distribution; the two cases correspond to flow models and diffusion models, respectively.

### Flow Models

A flow model is described by an ODE:

$$
X_0 \sim p_{\text{init}} \quad \triangleright \text{random init} \\
\frac{\mathrm{d}}{\mathrm{d}t}X_t = u_t^\theta(X_t) \quad \triangleright \text{ODE} \\
\text{Goal: } X_1 \sim p_{\text{data}} \Leftrightarrow \psi_1^\theta(X_0) \sim p_{\text{data}}
$$

where the vector field $u_t^\theta: \mathbb{R}^d \times [0,1] \rightarrow \mathbb{R}^d$ is a neural network with parameters $\theta$. $\psi_t^\theta$ denotes the flow induced by $u_t^\theta$, i.e. the collection of solution trajectories of the ODE.

Using the Euler method, we can simulate the ODE to compute the flow and thereby sample from the flow model, as sketched below.

(Algorithm: sampling from a flow model with the Euler method.)
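A minimal sketch of this sampler, assuming a trained vector-field network `u_theta(x, t)` (the name and interface are illustrative):

```python
def sample_flow_euler(u_theta, x0, n_steps=100):
    """Sample from a flow model by Euler-integrating dX_t = u_theta(X_t, t) dt.

    u_theta: callable (x, t) -> tensor shaped like x (assumed trained)
    x0: samples from p_init, shape (batch, d)
    """
    h = 1.0 / n_steps                    # step size
    x = x0
    for i in range(n_steps):
        t = i * h
        x = x + h * u_theta(x, t)        # Euler update
    return x                             # approximately distributed as p_data
```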

### Diffusion Models

A diffusion model is described by an SDE (because of the added randomness, SDEs are not written in time-derivative form):

$$
\mathrm{d}X_t = u_t^\theta(X_t)\,\mathrm{d}t + \sigma_t\,\mathrm{d}W_t \quad \triangleright \text{SDE} \\
X_0 \sim p_{\text{init}} \quad \triangleright \text{random initialization} \\
\text{Goal: } X_1 \sim p_{\text{data}}
$$

where $\sigma_t \geq 0$ is the diffusion coefficient and $W_t$ is a Brownian motion (a stochastic process).

Diffusion models are thus an extension of flow models: setting $\sigma_t = 0$ recovers a flow model.

Likewise, the following algorithm samples from a diffusion model.

(Algorithm: sampling from a diffusion model via Euler–Maruyama simulation.)
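A matching sketch for the SDE case using the Euler–Maruyama scheme, under the same assumptions, with `sigma` a callable giving the diffusion coefficient:

```python
import torch

def sample_diffusion_em(u_theta, sigma, x0, n_steps=100):
    """Sample from a diffusion model with Euler-Maruyama:
    X_{t+h} = X_t + h * u_theta(X_t, t) + sigma(t) * sqrt(h) * N(0, I).
    """
    h = 1.0 / n_steps
    x = x0
    for i in range(n_steps):
        t = i * h
        x = x + h * u_theta(x, t) + sigma(t) * (h ** 0.5) * torch.randn_like(x)
    return x
```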

## Training Target and Training Loss

For flow models and diffusion models,

$$
\begin{align*}
X_0 \sim p_{\text{init}}, \quad \mathrm{d}X_t &= u_t^\theta(X_t)\,\mathrm{d}t & \text{(flow model)} \\
X_0 \sim p_{\text{init}}, \quad \mathrm{d}X_t &= u_t^\theta(X_t)\,\mathrm{d}t + \sigma_t\,\mathrm{d}W_t & \text{(diffusion model)}
\end{align*}
$$

training proceeds by minimizing the loss

$$
\mathcal{L}(\theta) = \left\| u_t^\theta(x) - \underbrace{u_t^{\text{target}}(x)}_{\text{training target}} \right\|^2
$$

where $u_t^\theta$ is the network and $u_t^{\text{target}}(x)$ is the target vector field, which transports the initial distribution to the data distribution. To compute $\mathcal{L}(\theta)$ (directly or indirectly), we first need to construct $u_t^{\text{target}}(x)$.

### Probability Path

A probability path is a gradual interpolation from the initial distribution to the data distribution. It comes in two forms, the conditional probability path $p_t(\cdot \mid z)$ and the marginal probability path $p_t(\cdot)$, where:

$$
p_0(\cdot \mid z) = p_{\text{init}}, \quad p_1(\cdot \mid z) = \delta_z \quad \text{for all } z \in \mathbb{R}^d
$$

The marginal path $p_t(\cdot)$ is obtained from the conditional path via

$$
\begin{align*}
&z \sim p_{\text{data}},\ x \sim p_t(\cdot \mid z) \implies x \sim p_t &\triangleright \text{sampling from marginal path} \\
&p_t(x) = \int p_t(x \mid z)\, p_{\text{data}}(z)\,\mathrm{d}z &\triangleright \text{density of marginal path} \\
&p_0 = p_{\text{init}} \quad \text{and} \quad p_1 = p_{\text{data}} &\triangleright \text{noise-data interpolation}
\end{align*}
$$

### Training Target for Flow Models

For $z \in \mathbb{R}^d$ with $z \sim p_{\text{data}}$, let $u_t^{\text{target}}(\cdot \mid z)$ denote the conditional vector field corresponding to the conditional probability path $p_t(\cdot \mid z)$, i.e.

$$
X_0 \sim p_{\text{init}}, \quad \frac{\mathrm{d}}{\mathrm{d}t}X_t = u_t^{\text{target}}(X_t \mid z) \quad \Rightarrow \quad X_t \sim p_t(\cdot \mid z) \quad (0 \leq t \leq 1)
$$

The marginal vector field $u_t^{\text{target}}(x)$ can then be defined as

$$
u_t^{\text{target}}(x) = \int u_t^{\text{target}}(x \mid z)\, \frac{p_t(x \mid z)\, p_{\text{data}}(z)}{p_t(x)}\,\mathrm{d}z
$$

and it satisfies:

$$
X_0 \sim p_{\text{init}}, \quad \frac{\mathrm{d}}{\mathrm{d}t}X_t = u_t^{\text{target}}(X_t) \quad \Rightarrow \quad X_t \sim p_t \quad (0 \leq t \leq 1)
$$

where in particular $X_1 \sim p_{\text{data}}$.

This can be proved with the continuity equation.

**Continuity Equation.**

For a vector field $u_t^{\text{target}}$ with $X_0 \sim p_{\text{init}}$, we have $X_t \sim p_t$ for all $0 \leq t \leq 1$ if and only if

$$
\partial_t p_t(x) = -\mathrm{div}(p_t u_t^{\text{target}})(x) \quad \text{for all } x \in \mathbb{R}^d,\ 0 \leq t \leq 1
$$

where $\partial_t p_t(x) = \frac{\mathrm{d}}{\mathrm{d}t} p_t(x)$ and $\mathrm{div}(v_t)(x) = \sum_{i=1}^d \frac{\partial}{\partial x_i} v_t(x)$.

### Training Target for Diffusion Models

Similarly, for diffusion models one can construct a drift from $u_t^{\text{target}}$ such that $X_t \sim p_t$ for $0 \leq t \leq 1$:

$$
\begin{align*}
&X_0 \sim p_{\text{init}}, \quad \mathrm{d}X_t = \left[ u_t^{\text{target}}(X_t) + \frac{\sigma_t^2}{2} \nabla \log p_t(X_t) \right] \mathrm{d}t + \sigma_t\,\mathrm{d}W_t \\
&\Rightarrow X_t \sim p_t \quad (0 \leq t \leq 1)
\end{align*}
$$

This statement still holds when $p_t(x), u_t^{\text{target}}(x)$ are replaced by $p_t(x \mid z), u_t^{\text{target}}(x \mid z)$.

Here $\nabla \log p_t(x)$ is called the marginal score function and $\nabla \log p_t(x \mid z)$ the conditional score function; they are related by

$$
\nabla \log p_t(x) = \frac{\nabla p_t(x)}{p_t(x)} = \frac{\nabla \int p_t(x \mid z)\, p_{\text{data}}(z)\,\mathrm{d}z}{p_t(x)} = \frac{\int \nabla p_t(x \mid z)\, p_{\text{data}}(z)\,\mathrm{d}z}{p_t(x)} = \int \nabla \log p_t(x \mid z)\, \frac{p_t(x \mid z)\, p_{\text{data}}(z)}{p_t(x)}\,\mathrm{d}z
$$

This can be proved with the Fokker-Planck equation.

**Fokker-Planck Equation.**

For the SDE $X_0 \sim p_{\text{init}},\ \mathrm{d}X_t = u_t(X_t)\,\mathrm{d}t + \sigma_t\,\mathrm{d}W_t$, we have $X_t \sim p_t$ if and only if

$$
\partial_t p_t(x) = -\mathrm{div}(p_t u_t)(x) + \frac{\sigma_t^2}{2} \Delta p_t(x) \quad \text{for all } x \in \mathbb{R}^d,\ 0 \leq t \leq 1
$$

where $\Delta w_t(x) = \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2} w_t(x) = \mathrm{div}(\nabla w_t)(x)$.

**Remark (Langevin dynamics).**

When $p_t = p$ for all $t$, i.e. the probability path is static, the SDE becomes

$$
\mathrm{d}X_t = \frac{\sigma_t^2}{2} \nabla \log p(X_t)\,\mathrm{d}t + \sigma_t\,\mathrm{d}W_t
$$

in which case $X_0 \sim p \ \Rightarrow\ X_t \sim p$ for all $t \geq 0$; this is Langevin dynamics.

### Gaussian Probability Path

Let the noise schedules $\alpha_t, \beta_t$ be monotone, continuously differentiable functions with $\alpha_0 = \beta_1 = 0$ and $\alpha_1 = \beta_0 = 1$, and define the Gaussian conditional probability path as

$$
p_t(\cdot \mid z) = \mathcal{N}(\alpha_t z, \beta_t^2 I_d)
$$

which satisfies

$$
p_0(\cdot \mid z) = \mathcal{N}(\alpha_0 z, \beta_0^2 I_d) = \mathcal{N}(0, I_d), \quad p_1(\cdot \mid z) = \mathcal{N}(\alpha_1 z, \beta_1^2 I_d) = \delta_z
$$

Sampling from the corresponding marginal path is then straightforward:

$$
z \sim p_{\text{data}},\ \epsilon \sim p_{\text{init}} = \mathcal{N}(0, I_d) \quad \Rightarrow \quad x = \alpha_t z + \beta_t \epsilon \sim p_t
$$
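As a sketch, this sampling step is one line per term, with `alpha` and `beta` passed as callables (an illustrative interface):

```python
import torch

def sample_gaussian_path(z, t, alpha, beta):
    """Draw x ~ p_t via x = alpha_t * z + beta_t * eps with eps ~ N(0, I_d).

    z: data samples, shape (batch, d); t: scalar in [0, 1];
    alpha, beta: noise schedules, e.g. alpha = lambda t: t, beta = lambda t: 1 - t.
    """
    eps = torch.randn_like(z)
    return alpha(t) * z + beta(t) * eps
```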

The conditional Gaussian vector field induced by this path can be computed in closed form:

$$
u_t^{\text{target}}(x \mid z) = \left( \dot{\alpha}_t - \frac{\dot{\beta}_t}{\beta_t} \alpha_t \right) z + \frac{\dot{\beta}_t}{\beta_t} x
$$

where $\dot{\alpha}_t = \partial_t \alpha_t$ and $\dot{\beta}_t = \partial_t \beta_t$.

Likewise, the conditional score function is

$$
\nabla \log p_t(x \mid z) = -\frac{x - \alpha_t z}{\beta_t^2}
$$

## Flow Matching

For flow models, the flow matching loss is defined as

$$
\begin{align*}
\mathcal{L}_{\text{FM}}(\theta) &= \mathbb{E}_{t \sim \text{Unif},\, x \sim p_t}\left[\|u_t^\theta(x) - u_t^{\text{target}}(x)\|^2\right] \\
&= \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, x \sim p_t(\cdot \mid z)}\left[\|u_t^\theta(x) - u_t^{\text{target}}(x)\|^2\right]
\end{align*}
$$

where the second equality uses the marginal-path sampling identity

$$
z \sim p_{\text{data}},\ x \sim p_t(\cdot \mid z) \implies x \sim p_t
$$

The conditional flow matching loss is defined as

$$
\mathcal{L}_{\text{CFM}}(\theta) = \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, x \sim p_t(\cdot \mid z)}\left[\|u_t^\theta(x) - u_t^{\text{target}}(x \mid z)\|^2\right]
$$

where $u_t^{\text{target}}(x \mid z)$ can be constructed by hand (e.g. via the Gaussian probability path).

One can show that

$$
\mathcal{L}_{\text{FM}}(\theta) = \mathcal{L}_{\text{CFM}}(\theta) + C
$$

$$
\nabla_\theta \mathcal{L}_{\text{FM}}(\theta) = \nabla_\theta \mathcal{L}_{\text{CFM}}(\theta)
$$

Therefore optimizing $\mathcal{L}_{\text{CFM}}$ also optimizes $\mathcal{L}_{\text{FM}}$, and $\mathcal{L}_{\text{CFM}}$ only requires constructing a probability path. This yields a training algorithm for flow models; the whole procedure is called flow matching.

**Flow Matching for Gaussian Conditional Probability Paths.**

For the Gaussian probability path, we have

$$
\epsilon \sim \mathcal{N}(0, I_d) \quad \Rightarrow \quad x_t = \alpha_t z + \beta_t \epsilon \sim \mathcal{N}(\alpha_t z, \beta_t^2 I_d) = p_t(\cdot \mid z)
$$

$$
u_t^{\text{target}}(x \mid z) = \left(\dot{\alpha}_t - \frac{\dot{\beta}_t}{\beta_t}\alpha_t\right) z + \frac{\dot{\beta}_t}{\beta_t} x
$$

$$
\begin{align*}
\mathcal{L}_{\text{CFM}}(\theta) &= \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, x \sim \mathcal{N}(\alpha_t z, \beta_t^2 I_d)}\left[\left\|u_t^\theta(x) - \left(\dot{\alpha}_t - \frac{\dot{\beta}_t}{\beta_t}\alpha_t\right) z - \frac{\dot{\beta}_t}{\beta_t} x\right\|^2\right] \\
&\stackrel{(i)}{=} \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, \epsilon \sim \mathcal{N}(0, I_d)}\left[\left\|u_t^\theta(\alpha_t z + \beta_t \epsilon) - (\dot{\alpha}_t z + \dot{\beta}_t \epsilon)\right\|^2\right]
\end{align*}
$$

where $(i)$ substitutes $x = \alpha_t z + \beta_t \epsilon$. In particular, for $\alpha_t = t$ and $\beta_t = 1 - t$,

$$
p_t(\cdot \mid z) = \mathcal{N}(tz, (1-t)^2 I_d)
$$

$$
\mathcal{L}_{\text{CFM}}(\theta) = \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, \epsilon \sim \mathcal{N}(0, I_d)}\left[\left\|u_t^\theta(tz + (1-t)\epsilon) - (z - \epsilon)\right\|^2\right]
$$

This is called the (Gaussian) CondOT probability path. The training procedure is sketched below.

(Algorithm: flow matching training with the CondOT path.)
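A minimal sketch of this loss for the CondOT path, assuming flat data vectors `z` of shape `(batch, d)` and a network `u_theta(x, t)` (illustrative names):

```python
import torch

def cfm_condot_loss(u_theta, z):
    """Conditional flow matching loss for alpha_t = t, beta_t = 1 - t:
    E || u_theta(t z + (1 - t) eps, t) - (z - eps) ||^2
    """
    t = torch.rand(z.shape[0], 1)          # t ~ Unif[0, 1], broadcast over d
    eps = torch.randn_like(z)              # eps ~ N(0, I_d)
    x_t = t * z + (1.0 - t) * eps          # x ~ p_t(.|z)
    target = z - eps                       # conditional target field
    return ((u_theta(x_t, t) - target) ** 2).sum(dim=-1).mean()
```

Minimizing this with any stochastic optimizer (e.g. `torch.optim.Adam`) over batches of data `z` is the whole flow matching training loop.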

## Score Matching

For diffusion models, $u_t^{\text{target}}$ is hard to obtain directly, so a score network $s_t^\theta : \mathbb{R}^d \times [0, 1] \to \mathbb{R}^d$ is used to fit the score function. As before, there are a score matching loss and a conditional score matching loss:

$$
\begin{align*}
\mathcal{L}_{\text{SM}}(\theta) &= \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, x \sim p_t(\cdot \mid z)}\left[\|s_t^\theta(x) - \nabla \log p_t(x)\|^2\right] &\triangleright \text{ score matching loss} \\
\mathcal{L}_{\text{CSM}}(\theta) &= \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, x \sim p_t(\cdot \mid z)}\left[\|s_t^\theta(x) - \nabla \log p_t(x \mid z)\|^2\right] &\triangleright \text{ conditional score matching loss}
\end{align*}
$$

Again, although $\nabla \log p_t(x)$ is unknown, $\nabla \log p_t(x \mid z)$ can be constructed by hand, and

$$
\begin{align*}
&\mathcal{L}_{\text{SM}}(\theta) = \mathcal{L}_{\text{CSM}}(\theta) + C \\
&\implies \nabla_\theta \mathcal{L}_{\text{SM}}(\theta) = \nabla_\theta \mathcal{L}_{\text{CSM}}(\theta)
\end{align*}
$$

Hence it suffices to optimize $\mathcal{L}_{\text{CSM}}(\theta)$. Sampling then proceeds as

$$
X_0 \sim p_{\text{init}}, \quad \mathrm{d}X_t = \left[ u_t^\theta(X_t) + \frac{\sigma_t^2}{2} s_t^\theta(X_t) \right] \mathrm{d}t + \sigma_t\,\mathrm{d}W_t \implies X_1 \sim p_{\text{data}}
$$

Although in theory any $\sigma_t \geq 0$ yields valid samples, numerical error from inexact SDE simulation and training error mean that in practice there is an optimal $\sigma_t$. Note also that simulating this SDE requires learning $u_t^\theta$ as well; in practice a single network with two outputs can model both $u_t^\theta$ and $s_t^\theta$, and for specific probability paths the two can be converted into one another.

**Denoising Diffusion Models: Score Matching for Gaussian Probability Paths.**

For Gaussian probability paths, we have

$$
\nabla \log p_t(x \mid z) = -\frac{x - \alpha_t z}{\beta_t^2}
$$

$$
\begin{align*}
\mathcal{L}_{\text{CSM}}(\theta) &= \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, x \sim p_t(\cdot \mid z)}\left[\left\|s_t^\theta(x) + \frac{x - \alpha_t z}{\beta_t^2}\right\|^2\right] \\
&= \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, \epsilon \sim \mathcal{N}(0, I_d)}\left[\left\|s_t^\theta(\alpha_t z + \beta_t \epsilon) + \frac{\epsilon}{\beta_t}\right\|^2\right] \\
&= \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, \epsilon \sim \mathcal{N}(0, I_d)}\left[\frac{1}{\beta_t^2}\left\|\beta_t s_t^\theta(\alpha_t z + \beta_t \epsilon) + \epsilon\right\|^2\right]
\end{align*}
$$

Since the factor $\frac{1}{\beta_t^2}$ makes the loss blow up as $\beta_t \to 0$, this constant is usually dropped, and $s_t^\theta$ is reparameterized by a noise-prediction network $\epsilon_t^\theta$, yielding the DDPM loss:

$$
-\beta_t s_t^\theta(x) = \epsilon_t^\theta(x) \quad \Rightarrow \quad \mathcal{L}_{\text{DDPM}}(\theta) = \mathbb{E}_{t \sim \text{Unif},\, z \sim p_{\text{data}},\, \epsilon \sim \mathcal{N}(0, I_d)}\left[\left\|\epsilon_t^\theta(\alpha_t z + \beta_t \epsilon) - \epsilon\right\|^2\right]
$$

The training procedure is sketched below.

(Algorithm: DDPM training via noise prediction.)
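A corresponding sketch of the DDPM objective, assuming a noise-prediction network `eps_theta(x, t)` and schedules `alpha`, `beta` passed as callables over tensors (illustrative interface):

```python
import torch

def ddpm_loss(eps_theta, z, alpha, beta):
    """DDPM loss: E || eps_theta(alpha_t z + beta_t eps, t) - eps ||^2."""
    t = torch.rand(z.shape[0], 1)
    eps = torch.randn_like(z)
    x_t = alpha(t) * z + beta(t) * eps     # x ~ p_t(.|z) for the Gaussian path
    return ((eps_theta(x_t, t) - eps) ** 2).sum(dim=-1).mean()
```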

In addition, for Gaussian probability paths the vector field and the score can be converted into one another:

$$
u_t^{\text{target}}(x \mid z) = \left( \beta_t^2 \frac{\dot{\alpha}_t}{\alpha_t} - \dot{\beta}_t \beta_t \right) \nabla \log p_t(x \mid z) + \frac{\dot{\alpha}_t}{\alpha_t} x \\
u_t^{\text{target}}(x) = \left( \beta_t^2 \frac{\dot{\alpha}_t}{\alpha_t} - \dot{\beta}_t \beta_t \right) \nabla \log p_t(x) + \frac{\dot{\alpha}_t}{\alpha_t} x
$$

**Proof.**

$$
u_t^{\text{target}}(x \mid z) = \left( \dot{\alpha}_t - \frac{\dot{\beta}_t}{\beta_t} \alpha_t \right) z + \frac{\dot{\beta}_t}{\beta_t} x \stackrel{(i)}{=} \left( \beta_t^2 \frac{\dot{\alpha}_t}{\alpha_t} - \dot{\beta}_t \beta_t \right) \left( \frac{\alpha_t z - x}{\beta_t^2} \right) + \frac{\dot{\alpha}_t}{\alpha_t} x = \left( \beta_t^2 \frac{\dot{\alpha}_t}{\alpha_t} - \dot{\beta}_t \beta_t \right) \nabla \log p_t(x \mid z) + \frac{\dot{\alpha}_t}{\alpha_t} x
$$

$$
\begin{align*}
u_t^{\text{target}}(x) &= \int u_t^{\text{target}}(x \mid z)\, \frac{p_t(x \mid z)\, p_{\text{data}}(z)}{p_t(x)}\,\mathrm{d}z \\
&= \int \left[ \left( \beta_t^2 \frac{\dot{\alpha}_t}{\alpha_t} - \dot{\beta}_t \beta_t \right) \nabla \log p_t(x \mid z) + \frac{\dot{\alpha}_t}{\alpha_t} x \right] \frac{p_t(x \mid z)\, p_{\text{data}}(z)}{p_t(x)}\,\mathrm{d}z \\
&\stackrel{(i)}{=} \left( \beta_t^2 \frac{\dot{\alpha}_t}{\alpha_t} - \dot{\beta}_t \beta_t \right) \nabla \log p_t(x) + \frac{\dot{\alpha}_t}{\alpha_t} x
\end{align*}
$$

The networks $u_t^\theta$ and $s_t^\theta$ can be converted likewise:

$$
u_t^\theta(x) = \left( \beta_t^2 \frac{\dot{\alpha}_t}{\alpha_t} - \dot{\beta}_t \beta_t \right) s_t^\theta(x) + \frac{\dot{\alpha}_t}{\alpha_t} x
$$

$$
s_t^\theta(x) = \frac{\alpha_t u_t^\theta(x) - \dot{\alpha}_t x}{\beta_t^2 \dot{\alpha}_t - \alpha_t \dot{\beta}_t \beta_t}
$$

Therefore, for Gaussian probability paths it suffices to train either $u_t^\theta$ or $s_t^\theta$, and either flow matching or score matching can be used; a sketch of the conversion follows.
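A direct transcription of the conversion formulas above (arguments may be tensors or floats; names are illustrative):

```python
def field_from_score(s, x, alpha_t, beta_t, dalpha_t, dbeta_t):
    """u_t(x) = (beta_t^2 * dalpha_t/alpha_t - dbeta_t * beta_t) * s_t(x)
               + (dalpha_t / alpha_t) * x"""
    return (beta_t ** 2 * dalpha_t / alpha_t - dbeta_t * beta_t) * s \
        + (dalpha_t / alpha_t) * x

def score_from_field(u, x, alpha_t, beta_t, dalpha_t, dbeta_t):
    """Inverse of the above:
    s_t(x) = (alpha_t * u_t(x) - dalpha_t * x)
             / (beta_t^2 * dalpha_t - alpha_t * dbeta_t * beta_t)"""
    return (alpha_t * u - dalpha_t * x) \
        / (beta_t ** 2 * dalpha_t - alpha_t * dbeta_t * beta_t)
```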

Finally, for a trained $s_t^\theta$, sampling from the SDE proceeds as

$$
X_0 \sim p_{\text{init}}, \quad \mathrm{d}X_t = \left[ \left( \beta_t^2 \frac{\dot{\alpha}_t}{\alpha_t} - \dot{\beta}_t \beta_t + \frac{\sigma_t^2}{2} \right) s_t^\theta(X_t) + \frac{\dot{\alpha}_t}{\alpha_t} X_t \right] \mathrm{d}t + \sigma_t\,\mathrm{d}W_t \implies X_1 \sim p_{\text{data}}
$$

## Summary

Overall, flow matching is simpler than score matching and more extensible: it can transform an arbitrary initial distribution $p_{\text{init}}$ into an arbitrary data distribution $p_{\text{data}}$, whereas denoising diffusion models only apply to Gaussian initial distributions and Gaussian probability paths. Flow matching is closely related to stochastic interpolants.

## Conditional (Guided) Generation

Generating an object conditioned on some additional information is called conditional generation; to distinguish it from conditional vector fields, it is usually called guided generation.

Mathematically, for $y \in \mathcal{Y}$ we sample from $p_{\text{data}}(x \mid y)$, so the model contains a guided vector field $u_t^\theta(\cdot \mid y)$ with the following components:

$$
\begin{align*}
\text{Neural network: } & u_t^\theta : \mathbb{R}^d \times \mathcal{Y} \times [0, 1] \to \mathbb{R}^d, \quad (x, y, t) \mapsto u_t^\theta(x \mid y) \\
\text{Fixed: } & \sigma_t : [0, 1] \to [0, \infty), \quad t \mapsto \sigma_t
\end{align*}
$$

For a given $y \in \mathbb{R}^{d_y}$, the sampling procedure is

$$
\begin{align*}
\text{Initialization:} \quad & X_0 \sim p_{\text{init}} &\triangleright \text{ initialize with a simple distribution} \\
\text{Simulation:} \quad & \mathrm{d}X_t = u_t^\theta(X_t \mid y)\,\mathrm{d}t + \sigma_t\,\mathrm{d}W_t &\triangleright \text{ simulate the SDE from } t = 0 \text{ to } t = 1 \\
\text{Goal:} \quad & X_1 \sim p_{\text{data}}(\cdot \mid y) &\triangleright\ X_1 \text{ distributed like } p_{\text{data}}(\cdot \mid y)
\end{align*}
$$

With $\sigma_t = 0$, this is a guided flow model.

### Guided Models

The training loss for guided flow models (the guided conditional flow matching objective) follows directly:

$$
\mathcal{L}_{\text{CFM}}^{\text{guided}}(\theta) = \mathbb{E}_{(z,y) \sim p_{\text{data}}(z,y),\, t \sim \text{Unif}(0,1),\, x \sim p_t(\cdot \mid z)}\left[\left\|u_t^\theta(x \mid y) - u_t^{\text{target}}(x \mid z)\right\|^2\right]
$$

Likewise, guided diffusion models have the guided conditional score matching objective

$$
\begin{align*}
\mathcal{L}_{\text{CSM}}^{\text{guided}}(\theta) &= \mathbb{E}_{\square}\left[\|s_t^\theta(x \mid y) - \nabla \log p_t(x \mid z)\|^2\right] \\
\square &= (z, y) \sim p_{\text{data}}(z, y),\ t \sim \text{Unif}(0,1),\ x \sim p_t(\cdot \mid z)
\end{align*}
$$

Although in theory this suffices to generate samples for a label $y$, in practice the generated samples do not fit $y$ very well, and there is no way to control how strongly generation adheres to the label. One remedy is to artificially amplify the effect of $y$; the leading technique here is classifier-free guidance.

### Classifier-Free Guidance

For flow models, take Gaussian probability paths as an example:

$$
u_t^{\text{target}}(x \mid y) = a_t x + b_t \nabla \log p_t(x \mid y)
$$

where

$$
(a_t, b_t) = \left( \frac{\dot{\alpha}_t}{\alpha_t}, \frac{\dot{\alpha}_t \beta_t^2 - \dot{\beta}_t \beta_t \alpha_t}{\alpha_t} \right)
$$

$$
\nabla \log p_t(x \mid y) = \nabla \log \left( \frac{p_t(x)\, p_t(y \mid x)}{p_t(y)} \right) = \nabla \log p_t(x) + \nabla \log p_t(y \mid x)
$$

$$
u_t^{\text{target}}(x \mid y) = a_t x + b_t \left(\nabla \log p_t(x) + \nabla \log p_t(y \mid x)\right) = u_t^{\text{target}}(x) + b_t \nabla \log p_t(y \mid x)
$$

The guided vector field is thus the sum of the unguided vector field and a guided score term, so a natural idea is to weight the guided score:

$$
\tilde{u}_t(x \mid y) = u_t^{\text{target}}(x) + w b_t \nabla \log p_t(y \mid x)
$$

Here the guided score can be viewed as a classifier of noisy inputs, and early work (classifier guidance) indeed took this approach; expanding the guided score further, however, gives:

$$
\begin{align*}
\tilde{u}_t(x \mid y) &= u_t^{\text{target}}(x) + w b_t \nabla \log p_t(y \mid x) \\
&= u_t^{\text{target}}(x) + w b_t \left(\nabla \log p_t(x \mid y) - \nabla \log p_t(x)\right) \\
&= u_t^{\text{target}}(x) - \left(w a_t x + w b_t \nabla \log p_t(x)\right) + \left(w a_t x + w b_t \nabla \log p_t(x \mid y)\right) \\
&= (1 - w)\, u_t^{\text{target}}(x) + w\, u_t^{\text{target}}(x \mid y).
\end{align*}
$$

So $\tilde{u}_t(x \mid y)$ is a weighted combination of the unguided and guided vector fields. Moreover, by introducing an empty label $y = \varnothing$ that occurs with probability given by a hand-chosen hyperparameter $\eta$, one can use $u_t^{\text{target}}(x \mid \varnothing)$ in place of $u_t^{\text{target}}(x)$. Concretely, the training objective is

$$
\begin{align*}
\mathcal{L}_{\text{CFM}}^{\text{CFG}}(\theta) &= \mathbb{E}_{\square}\left[\|u_t^\theta(x \mid y) - u_t^{\text{target}}(x \mid z)\|^2\right] \\
\square &= (z, y) \sim p_{\text{data}}(z, y),\ t \sim \text{Unif}(0,1),\ x \sim p_t(\cdot \mid z),\ \text{replace } y = \varnothing \text{ with prob. } \eta
\end{align*}
$$
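A sketch of this objective with label dropout, shown for the CondOT path and assuming integer class labels with a reserved `null_label` id (both illustrative):

```python
import torch

def cfg_cfm_loss(u_theta, z, y, null_label, eta=0.1):
    """Guided CFM loss with CFG label dropout.

    u_theta: callable (x, y, t); y: integer labels, shape (batch,);
    eta: probability of replacing y with the null label.
    """
    t = torch.rand(z.shape[0], 1)
    eps = torch.randn_like(z)
    x_t = t * z + (1.0 - t) * eps
    drop = torch.rand(z.shape[0]) < eta                       # labels to null out
    y = torch.where(drop, torch.full_like(y, null_label), y)
    return ((u_theta(x_t, y, t) - (z - eps)) ** 2).sum(dim=-1).mean()
```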

For diffusion models, $\tilde{s}_t(x \mid y)$ can be rewritten in the same way:

$$
\begin{align*}
\tilde{s}_t(x \mid y) &= \nabla \log p_t(x) + w \nabla \log p_t(y \mid x) \\
&= \nabla \log p_t(x) + w \left(\nabla \log p_t(x \mid y) - \nabla \log p_t(x)\right) \\
&= (1 - w) \nabla \log p_t(x) + w \nabla \log p_t(x \mid y) \\
&= (1 - w) \nabla \log p_t(x \mid \varnothing) + w \nabla \log p_t(x \mid y)
\end{align*}
$$

with the training objective

$$
\begin{align*}
\mathcal{L}_{\text{CSM}}^{\text{CFG}}(\theta) &= \mathbb{E}_{\square}\left[\|s_t^\theta(x \mid (1 - \xi)y + \xi\varnothing) - \nabla \log p_t(x \mid z)\|^2\right] \\
\square &= (z, y) \sim p_{\text{data}}(z, y),\ t \sim \text{Unif}(0,1),\ x \sim p_t(\cdot \mid z),\ \text{replace } y = \varnothing \text{ with prob. } \eta
\end{align*}
$$

In training, we can also optimize $s_t^\theta(x \mid y)$ and $u_t^\theta(x \mid y)$ jointly; correspondingly,

$$
\begin{align*}
\tilde{s}_t^\theta(x \mid y) &= (1 - w)\, s_t^\theta(x \mid \varnothing) + w\, s_t^\theta(x \mid y), \\
\tilde{u}_t^\theta(x \mid y) &= (1 - w)\, u_t^\theta(x \mid \varnothing) + w\, u_t^\theta(x \mid y).
\end{align*}
$$
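At inference these combinations are one line each; a sketch with a guidance scale `w` (`w = 1` recovers the plain guided model, `w > 1` strengthens guidance; `y_null` is a batch of null labels, all names illustrative):

```python
def cfg_field(u_theta, x, y, y_null, t, w=3.0):
    """Classifier-free-guided field: (1 - w) * u(x|null) + w * u(x|y)."""
    return (1.0 - w) * u_theta(x, y_null, t) + w * u_theta(x, y, t)
```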

At sampling time,

$$
\mathrm{d}X_t = \left[ \tilde{u}_t^\theta(X_t \mid y) + \frac{\sigma_t^2}{2} s_t^\theta(X_t \mid y) \right] \mathrm{d}t + \sigma_t\,\mathrm{d}W_t
$$

## Network Architectures

The network design varies with the complexity of the data being modeled, but it must always implement

$$
\text{Neural network: } u_t^\theta : \mathbb{R}^d \times \mathcal{Y} \times [0, 1] \to \mathbb{R}^d, \quad (x, y, t) \mapsto u_t^\theta(x \mid y)
$$

### U-Nets

(Figure: U-Net architecture.)

### Diffusion Transformers

(Figure: Diffusion Transformer architecture.)

## References

[1] Peter Holderrieth and Ezra Erives. An Introduction to Flow Matching and Diffusion Models. arXiv:2506.02070, 2025. https://arxiv.org/abs/2506.02070
