Introduction
The advent of large-scale diffusion models conditioned on text embeddings has allowed for creative control over the generative process. A recent and powerful technique is that of prompt scheduling, i.e., instead of passing a fixed prompt to the diffusion model, the prompt can be changed depending on the timestep. This concept was initially proposed by Doggettx in this reddit post, and the code changes to the stable diffusion repository can be seen here.
*Figure: Examples of the prompt scheduling technique proposed by Doggettx.*
More generally, we can view this as having the conditional information (in this case, text embeddings) scheduled w.r.t. time. Formally, assume we have a U-Net $\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t, \mathbf{z}(t))$ trained on the noise-prediction task conditioned on a time-scheduled text embedding $\mathbf{z}(t)$. The sampling procedure amounts to solving the probability flow ODE

$$\frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} = f(t)\,\mathbf{x}_t + \frac{g^2(t)}{2\sigma_t}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t, \mathbf{z}(t))$$

from time $t = T$ to time $t = 0$, where $f(t)$ and $g(t)$ define the drift and diffusion coefficients of a Variance Preserving (VP) type SDE $\mathrm{d}\mathbf{x}_t = f(t)\,\mathbf{x}_t\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}_t$.
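To make this concrete, here is a minimal sketch of prompt scheduling inside a plain Euler solver for this ODE. Everything here is a hypothetical stand-in: `unet` for the trained noise-prediction network, and `f`, `g`, and `sigma` for the VP coefficients and noise scale; the point is only where `z_schedule` enters the loop.

```python
def sample_with_schedule(unet, z_schedule, x_T, f, g, sigma, ts):
    """Euler solve of the probability flow ODE from t = T down to t = 0,
    swapping in a time-dependent text embedding z(t) at every step.

    unet       -- hypothetical noise-prediction network eps_theta(x, t, z)
    z_schedule -- callable t -> text embedding z(t)
    f, g       -- callables for the VP-SDE drift/diffusion coefficients
    sigma      -- callable t -> marginal noise scale sigma_t
    ts         -- decreasing sequence of timesteps, ts[0] = T, ts[-1] = 0
    """
    x = x_T
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        z = z_schedule(t_cur)                      # scheduled conditioning
        eps = unet(x, t_cur, z)                    # predicted noise
        # PF-ODE drift: f(t) x + g(t)^2 / (2 sigma_t) * eps_theta
        dx_dt = f(t_cur) * x + g(t_cur) ** 2 / (2 * sigma(t_cur)) * eps
        x = x + (t_next - t_cur) * dx_dt           # Euler step (dt < 0)
    return x

# A Doggettx-style schedule: embedding z_a for the first 40% of the
# trajectory (large t), then z_b for the remainder. Piecewise constant
# in time, hence cadlag.
def make_schedule(z_a, z_b, T, frac=0.4):
    return lambda t: z_a if t > (1 - frac) * T else z_b
```

Doggettx's original trick corresponds to a piecewise-constant schedule like `make_schedule`, which swaps embeddings partway through sampling.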
Training-free guidance
A closely related area of active research has been the development of techniques which search for the optimal generation parameters.
More specifically, they attempt to solve the following optimization problem:

$$\mathbf{z}^* = \operatorname*{arg\,min}_{\mathbf{z}}\; \mathcal{L}(\mathbf{x}_0),$$

where $\mathcal{L}$ is a real-valued loss function on the output $\mathbf{x}_0$.
Several recent works this year explore solving the continuous adjoint equations to find the gradients $\nabla_{\mathbf{z}}\mathcal{L}(\mathbf{x}_0)$. These gradients can then be used in combination with gradient descent algorithms to solve the optimization problem. However, what if $\mathbf{z}$ is scheduled and not constant w.r.t. time?
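Before answering that, it is worth grounding the constant-$\mathbf{z}$ workflow these works build on. Below is a minimal, self-contained sketch (a toy linear vector field standing in for a diffusion model, with made-up names throughout) that uses `torchdiffeq`'s `odeint_adjoint` to obtain $\nabla_{\mathbf{z}}\mathcal{L}$ by solving the continuous adjoint equations, then descends on $\mathbf{z}$:

```python
import torch
from torchdiffeq import odeint_adjoint as odeint  # pip install torchdiffeq

class ConditionalFlow(torch.nn.Module):
    """Toy stand-in for dx/dt = f_theta(t, x, z) with a *constant* z."""
    def __init__(self, d):
        super().__init__()
        self.z = torch.nn.Parameter(torch.randn(d))  # conditioning to optimize
        self.lin = torch.nn.Linear(2 * d, d)         # stand-in for the network

    def forward(self, t, x):
        return self.lin(torch.cat([x, self.z]))

torch.manual_seed(0)
flow, x0, target = ConditionalFlow(d=4), torch.randn(4), torch.ones(4)
opt = torch.optim.Adam([flow.z], lr=1e-2)
for _ in range(200):
    # odeint_adjoint backpropagates through the solve by integrating the
    # continuous adjoint equations backward in time (O(1) memory).
    x1 = odeint(flow, x0, torch.tensor([0.0, 1.0]))[-1]
    loss = ((x1 - target) ** 2).sum()  # real-valued loss L(x(1))
    opt.zero_grad(); loss.backward(); opt.step()
```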
**Problem statement.** Given a time-scheduled conditional $\mathbf{z}(t)$ and a loss $\mathcal{L}$, find:

$$\frac{\partial \mathcal{L}}{\partial \mathbf{z}(t)}.$$
In an earlier blog post we showed how to find $\nabla_{\mathbf{z}}\mathcal{L}(\mathbf{x}_0)$ by solving the continuous adjoint equations. How do the continuous adjoint equations change when we replace $\mathbf{z}$ with the time-scheduled $\mathbf{z}(t)$ in the sampling equation? What we will now show is that
we can simply replace $\mathbf{z}$ with $\mathbf{z}(t)$ in the continuous adjoint equations.
While this result is intuitive, it does require some technical details to show.
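Concretely, for the probability flow ODE from the introduction, the claim amounts to the following adjoint equation for the conditioning, sketched here in that notation (assuming the VP parameterization above, and writing $\mathbf{a}_{\mathbf{x}}(t) = \partial\mathcal{L}/\partial\mathbf{x}_t$ for the usual adjoint state):

$$\frac{\mathrm{d}\mathbf{a}_{\mathbf{z}}}{\mathrm{d}t}(t) = -\mathbf{a}_{\mathbf{x}}(t)^\top\, \frac{g^2(t)}{2\sigma_t}\, \frac{\partial \boldsymbol{\epsilon}_\theta}{\partial \mathbf{z}}\bigl(\mathbf{x}_t, t, \mathbf{z}(t)\bigr),$$

i.e., the only change from the constant-$\mathbf{z}$ case is the substitution $\mathbf{z} \to \mathbf{z}(t)$ inside the U-Net input.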
Gradients of time-scheduled conditional variables
It is well known that diffusion models are just a special type of neural differential equation, either a neural ODE or a neural SDE. As such, we will show that this result holds more generally for neural ODEs.
**Theorem (Continuous adjoint equations for time-scheduled conditional variables).** Suppose there exists a function $\mathbf{z} : [0, T] \to \mathbb{R}^{d_z}$ which can be defined as a càdlàg piecewise function where $\mathbf{z}$ is continuous on each partition of $[0, T]$ given by $\{[t_j, t_{j+1})\}_{j=0}^{N-1}$ and whose right derivatives exist for all $t \in [0, T)$. Let $f_\theta : [0, T] \times \mathbb{R}^{d_x} \times \mathbb{R}^{d_z} \to \mathbb{R}^{d_x}$ be continuous in $t$, uniformly Lipschitz in $\mathbf{x}$, and continuously differentiable in $\mathbf{x}$ and $\mathbf{z}$. Let $\mathbf{x} : [0, T] \to \mathbb{R}^{d_x}$ be the unique solution to the ODE $\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}(t) = f_\theta(t, \mathbf{x}(t), \mathbf{z}(t))$ with initial condition $\mathbf{x}(0) = \mathbf{x}_0$, and let $\mathcal{L}$ be a scalar loss on $\mathbf{x}(T)$ with adjoint state $\mathbf{a}_{\mathbf{x}}(t) = \partial\mathcal{L}/\partial\mathbf{x}(t)$. Then $\mathbf{a}_{\mathbf{z}}(t) = \partial\mathcal{L}/\partial\mathbf{z}(t)$ and there exists a unique solution to the following initial value problem:

$$\mathbf{a}_{\mathbf{z}}(T) = \mathbf{0}, \qquad \frac{\mathrm{d}\mathbf{a}_{\mathbf{z}}}{\mathrm{d}t}(t) = -\mathbf{a}_{\mathbf{x}}(t)^\top \frac{\partial f_\theta}{\partial \mathbf{z}}(t, \mathbf{x}(t), \mathbf{z}(t)).$$
Why càdlàg?
In practice $\mathbf{z}$ is often a discrete set $\{\mathbf{z}_j\}_{j=0}^{N-1}$ where $N$ corresponds to the number of discretization steps the numerical ODE solver takes. While the proof is easier for a continuously differentiable function, we opt for this construction for the sake of generality. We choose a càdlàg piecewise function, a relatively mild assumption, to ensure that we can define the augmented state on each continuous interval of the piecewise function in terms of the right derivative.
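As a small illustration, here is what such a càdlàg piecewise-constant schedule looks like in code (a hypothetical helper, not from any library):

```python
import bisect

def make_cadlag(ts, zs):
    """Piecewise-constant schedule z(t) = zs[j] on [ts[j], ts[j+1]).

    Right-continuous by construction: the value at a breakpoint is the
    value of the interval that *starts* there, so right limits agree
    with function values (cadlag) and right derivatives exist (zero).
    """
    def z(t):
        j = bisect.bisect_right(ts, t) - 1
        return zs[max(0, min(j, len(zs) - 1))]
    return z

z = make_cadlag(ts=[0.0, 0.25, 0.5, 0.75], zs=["A", "A", "B", "B"])
assert z(0.5) == "B"          # value at the jump comes from the right
assert z(0.5 - 1e-9) == "A"   # left limit differs: a genuine jump
```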
In the remainder of this blog post we will provide the proof of this result. Our proof technique is an extension of the one used by Patrick Kidger (Appendix C.3.1) to prove the existence of the solution to the continuous adjoint equations for neural ODEs.
*Proof.* Recall that $\mathbf{z}(t)$ is a piecewise function of time with partition of the time domain $\{[t_j, t_{j+1})\}_{j=0}^{N-1}$. Without loss of generality we consider some time interval $[t_j, t_{j+1})$ for some $j < N$. Consider the augmented state defined on the interval $[t_j, t_{j+1})$:

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\mathbf{x} \\ \mathbf{z}\end{bmatrix}(t) = f_{\mathrm{aug}}(t, \mathbf{x}(t), \mathbf{z}(t)) := \begin{bmatrix}f_\theta(t, \mathbf{x}(t), \mathbf{z}(t)) \\ \partial_t^+ \mathbf{z}(t)\end{bmatrix},$$

where $\partial_t^+ \mathbf{z}(t)$ denotes the right derivative of $\mathbf{z}$ at time $t$. Let $\mathbf{a}_{\mathrm{aug}}$ denote the augmented adjoint state as

$$\mathbf{a}_{\mathrm{aug}}(t) := \begin{bmatrix}\mathbf{a}_{\mathbf{x}} \\ \mathbf{a}_{\mathbf{z}}\end{bmatrix}(t).$$

Then the Jacobian of $f_{\mathrm{aug}}$ is defined as

$$\frac{\partial f_{\mathrm{aug}}}{\partial [\mathbf{x}; \mathbf{z}]}(t) = \begin{bmatrix}\frac{\partial f_\theta}{\partial \mathbf{x}} & \frac{\partial f_\theta}{\partial \mathbf{z}} \\ \mathbf{0} & \mathbf{0}\end{bmatrix}.$$

As the state $\mathbf{z}$ evolves with $\partial_t^+ \mathbf{z}(t)$ on the interval in the forward direction, the derivative of this component of the augmented vector field w.r.t. $[\mathbf{x}; \mathbf{z}]$ is clearly $\mathbf{0}$ as it only depends on time. Remark: as the bottom row of the Jacobian of $f_{\mathrm{aug}}$ is all $\mathbf{0}$s and $f_\theta$ is continuous in $t$, we can consider the evolution of $\mathbf{a}_{\mathrm{aug}}$ over the whole interval $[0, T]$ rather than just a partition of it. The evolution of the augmented adjoint state on $[0, T]$ is then given as

$$\frac{\mathrm{d}\mathbf{a}_{\mathrm{aug}}}{\mathrm{d}t}(t) = -\begin{bmatrix}\mathbf{a}_{\mathbf{x}}(t)^\top & \mathbf{a}_{\mathbf{z}}(t)^\top\end{bmatrix}\begin{bmatrix}\frac{\partial f_\theta}{\partial \mathbf{x}} & \frac{\partial f_\theta}{\partial \mathbf{z}} \\ \mathbf{0} & \mathbf{0}\end{bmatrix} = -\begin{bmatrix}\mathbf{a}_{\mathbf{x}}(t)^\top \frac{\partial f_\theta}{\partial \mathbf{x}} & \mathbf{a}_{\mathbf{x}}(t)^\top \frac{\partial f_\theta}{\partial \mathbf{z}}\end{bmatrix}.$$

Therefore, $\mathbf{a}_{\mathbf{z}}$ is a solution to the initial value problem:

$$\mathbf{a}_{\mathbf{z}}(T) = \mathbf{0}, \qquad \frac{\mathrm{d}\mathbf{a}_{\mathbf{z}}}{\mathrm{d}t}(t) = -\mathbf{a}_{\mathbf{x}}(t)^\top \frac{\partial f_\theta}{\partial \mathbf{z}}(t, \mathbf{x}(t), \mathbf{z}(t)).$$
Next we show that there exists a unique solution to the initial value problem. Now as $\mathbf{x}$ is continuous and $f_\theta$ is continuously differentiable in $\mathbf{x}$ and $\mathbf{z}$, it follows that $t \mapsto \frac{\partial f_{\mathrm{aug}}}{\partial [\mathbf{x}; \mathbf{z}]}(t)$ is a continuous function on the compact set $[t_j, t_{j+1}]$. As such it is bounded by some $C > 0$. Likewise, for each $t$ the map $\mathbf{a} \mapsto -\mathbf{a}^\top \frac{\partial f_{\mathrm{aug}}}{\partial [\mathbf{x}; \mathbf{z}]}(t)$ is Lipschitz in $\mathbf{a}$ with Lipschitz constant $C$ and this constant is independent of $t$. Therefore, by the Picard–Lindelöf theorem the solution exists and is unique.
□
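As a sanity check of the result, the following sketch integrates a toy linear neural ODE forward with Euler, runs the adjoint equations backward with the scheduled $\mathbf{z}$ substituted in pointwise, and compares the per-interval gradients against PyTorch autograd (the vector field and schedule are made up for the test):

```python
import torch

# Toy linear vector field: f_theta(t, x, z) = A x + z(t), with z
# piecewise constant on two intervals of [0, T].
torch.manual_seed(0)
d, N, T = 3, 50, 1.0
dt = T / N
A = torch.randn(d, d) * 0.3
zs = torch.randn(2, d, requires_grad=True)  # z(t) = zs[0] on [0, T/2), zs[1] on [T/2, T)

def z_of(step):  # cadlag, piecewise-constant schedule indexed by solver step
    return zs[0] if step < N // 2 else zs[1]

# Forward Euler solve, keeping the trajectory for the adjoint pass.
x = torch.zeros(d)
traj = [x]
for i in range(N):
    x = x + dt * (A @ x + z_of(i))
    traj.append(x)
loss = (x ** 2).sum()
loss.backward()                              # autograd reference gradients

# Backward adjoint pass: da_x/dt = -a_x^T df/dx, da_z/dt = -a_x^T df/dz.
a_x = 2 * traj[-1].detach()                  # dL/dx(T)
grad_z = torch.zeros(2, d)
for i in reversed(range(N)):
    grad_z[0 if i < N // 2 else 1] += dt * a_x  # df/dz = I here
    a_x = a_x + dt * (A.T @ a_x)                # df/dx = A

print(torch.allclose(grad_z, zs.grad, atol=1e-5))  # expect: True
```

Because forward Euler's discrete adjoint coincides with backpropagation through the unrolled solve, the two gradients agree to floating-point precision.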
If you found this useful and would like to cite this post in an academic context, please cite it as: Blasingame, Zander W. (Dec 2024). Gradients for Time Scheduled Conditional Variables in Neural Differential Equations. https://zblasingame.github.io.
or as a BibTeX entry:
@article{blasingame2024gradients-for-time-scheduled-conditional-variables-in-neural-differential-equations,
title = {Gradients for Time Scheduled Conditional Variables in Neural Differential Equations},
author = {Blasingame, Zander W.},
year = {2024},
month = {Dec},
url = {https://zblasingame.github.io/blog/2024/cadlag-conditional/}
}