\documentclass[twoside]{article}
\usepackage{amsfonts} % used for R in Real numbers
\pagestyle{myheadings}
\markboth{ A Lyapunov function for pullback attractors } { Peter E. Kloeden }
\begin{document}
\setcounter{page}{91}
\title{\vspace{-1in}\parbox{\linewidth}{\footnotesize\noindent Nonlinear Differential Equations, \newline Electron. J. Diff. Eqns., Conf. 05, 2000, pp. 91--102\newline http://ejde.math.swt.edu or http://ejde.math.unt.edu \newline ftp ejde.math.swt.edu or ejde.math.unt.edu (login: ftp)} \vspace{\bigskipamount} \\
%
A Lyapunov function for pullback attractors of nonautonomous differential equations
%
\thanks{ {\em Mathematics Subject Classifications:} 34D20, 34D45. \hfil\break\indent {\em Key words:} Lyapunov function, nonautonomous system, pullback attractor. \hfil\break\indent \copyright 2000 Southwest Texas State University. \hfil\break\indent Published October 25, 2000. \hfil\break\indent Partly supported by the DFG Forschungsschwerpunkt ``Ergodentheorie, Analysis und \hfil\break\indent effiziente Simulation dynamischer Systeme''.} }
\date{}
\author{ Peter E. Kloeden \\[12pt] {\em Dedicated to Alan Lazer} \\ {\em on his 60th birthday }}
\maketitle
\begin{abstract}
The construction of a Lyapunov function characterizing the pullback attractor of a cocycle dynamical system is presented. This system is the state space component of a skew-product flow generated by a nonautonomous differential equation that is driven by an autonomous dynamical system on a metric space.
\end{abstract}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}

\section{Introduction}
Lyapunov functions are an effective practical as well as theoretical tool for the investigation of stability properties of dynamical systems, see e.g. \cite{KMS,KL,KS,S,Y}. Converse results ensuring the existence of a Lyapunov function that characterizes a particular type of stability property, such as the uniform asymptotic stability of a global attractor of an autonomous dynamical system, have been particularly useful in numerical dynamics \cite{KL,KS}. Such a result is presented here for the pullback attractor of a cocycle dynamical system generated by a nonautonomous differential equation.

The idea of pullback attraction, which has been used for a long time in other contexts (see e.g. \cite{MAK}), provides a means of constructing limiting objects in nonautonomous systems that exist in actual time rather than asymptotically in the future \cite{A,CKS,CF,FS,KS}. The cocycle formalism and pullback attractors are briefly recalled in the next two sections, the reader being referred to the literature (e.g. \cite{A,CKS,CF,FS,KS}) for more details, examples and motivation. The main result is then formulated in Section 4 and the proof is presented in Section 6, following the proof of a lemma on the existence of a pullback absorbing neighbourhood system in Section 5. In order to focus on the idea behind the construction of the Lyapunov function rather than on technical details, the differential equations considered here are assumed to be globally defined and to satisfy a global Lipschitz condition. The paper is concluded with a comment on the properties of the Lyapunov function and an example in Section 7.

The following notation and definitions will be used. $H^{*}(A,B)$ denotes the Hausdorff separation or semi--metric between nonempty compact subsets $A$ and $B$ of ${\mathbb R}^d$, and is defined by
$$
H^{*}(A,B) := \max_{a\in A} \mathop{\rm dist}(a,B)
$$
where $\mathop{\rm dist}(a,B):=\min_{b\in B} \|a-b\|$.
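Note that $H^{*}$ is not symmetric in its arguments. For example, for the compact subsets $A = \{0\}$ and $B = [0,1]$ of ${\mathbb R}$,
$$
H^{*}(A,B) = \mathop{\rm dist}(0,[0,1]) = 0, \qquad H^{*}(B,A) = \max_{b \in [0,1]} |b| = 1;
$$
in general $H^{*}(A,B) = 0$ if and only if $A \subseteq B$.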
For a nonempty compact subset $A$ of ${\mathbb R}^d$ and $r>0$, the open and closed balls about $A$ of radius $r$ are defined, respectively, by
$$
B(A; r) := \{x \in {\mathbb R}^d : \ \mathop{\rm dist}(x,A) < r\}, \quad B[A; r] := \{x \in {\mathbb R}^d : \ \mathop{\rm dist}(x,A) \leq r\}.
$$

\section{The cocycle formalism}
Consider a parametrized differential equation
$$
\dot{x} = f(p,x)
$$
on ${\mathbb R}^d$, where $p$ is a parameter that is allowed to vary with time in a certain way. In particular, let $P$ be a topological space and consider a group $\Theta = \{\theta_t\}_{t\in {\mathbb R}}$ of mappings $\theta_t : P \to P$ for each $t \in {\mathbb R}$ such that $(t,p) \mapsto \theta_t p$ is continuous. The autonomous dynamical system $\Theta$ on $P$ acts as a driving mechanism that generates the time variation in the parameter $p$ in the parametrized differential equation above to form a nonautonomous differential equation
\begin{equation}\label{PDE}
\dot{x} = f(\theta_t p,x)
\end{equation}
on ${\mathbb R}^d$ for each $p \in P$. It will be assumed amongst other things (see later) that $f:P\times{\mathbb R}^d \to {\mathbb R}^d$ is continuous, that $f(p,\cdot)$ is globally Lipschitz continuous on ${\mathbb R}^d$ for each $p \in P$ and that the global forwards existence and uniqueness of solutions of (\ref{PDE}) hold (e.g., due to an additional dissipativity structural assumption). The solution mapping $\Phi :{\mathbb R}^{+}\times P\times {\mathbb R}^d \to {\mathbb R}^d$ of (\ref{PDE}), for which
\begin{equation} \label{cprop1}
\frac{d}{dt} \Phi(t,p,x_0) = f\left(\theta_t p, \Phi(t,p,x_0) \right), \quad x_0 \in {\mathbb R}^d, p \in P, t \in {\mathbb R}^{+},
\end{equation}
with the {\em initial condition property\/}
\begin{equation}\label{cprop2}
\Phi(0,p,x_0) = x_0, \quad x_0 \in {\mathbb R}^d, p \in P,
\end{equation}
then satisfies the {\em cocycle property\/}
\begin{equation}\label{cprop3}
\Phi(s+t,p,x_0) = \Phi(s,\theta_t p,\Phi(t,p,x_0) ), \quad x_0 \in {\mathbb R}^d,\ p \in P, \ s, t \in {\mathbb R}^{+}.
\end{equation}
That is, $\Phi$ is a {\em cocycle mapping\/} on ${\mathbb R}^d$ with respect to the autonomous dynamical system $\Theta$ on $P$. In fact, the product mapping $(\Phi,\Theta)$ then forms an autonomous semi--dynamical system, or skew--product flow, on the product space ${\mathbb R}^d \times P$. Note that the $t$ variable in $\Phi$ is now the time that has elapsed since starting rather than absolute time. Although solutions of initial value problems may also be (at least partially) extendable backwards in time, the interest in this paper is in what happens forwards in time since starting, as is typical in investigations of systems with some kind of dissipative behaviour.

A simple example is a conventionally written nonautonomous differential equation
\begin{equation}\label{NDE}
\dot{x} = g(t,x), \quad t \in {\mathbb R}, x \in {\mathbb R}^d,
\end{equation}
with $p = t_0$, the initial time instant, and shift mappings $\theta_t t_0:= t_0 +t$ on $P = {\mathbb R}$. Thus $f(\theta_t p,x) := g(t_0+t,x)$ here, and the solution mapping is $\Phi(t,t_0,x_0) := x(t+t_0;t_0,x_0)$, expressed in terms of the solution of the corresponding initial value problem as it is usually written. A less trivial example of the above skew--product formalism is given by Sell's investigations of almost periodic differential equations \cite{S}, in which $P$ is a compact metric space of admissible vector field functions and $\theta_t$ is a temporal shift operator acting on these vector field functions.
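For the simple example (\ref{NDE}), the cocycle property (\ref{cprop3}) is just a restatement of the uniqueness of solutions of initial value problems: for $s$, $t \in {\mathbb R}^{+}$,
$$
\Phi(s+t,t_0,x_0) = x(s+t+t_0;t_0,x_0) = x\left(s+t+t_0;t+t_0,x(t+t_0;t_0,x_0)\right) = \Phi(s,\theta_t t_0,\Phi(t,t_0,x_0)),
$$
since both sides describe the same solution of (\ref{NDE}) evaluated at the time $s+t+t_0$.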
Random dynamical systems \cite{A,CF,FS} also provide examples with a measure space rather than a topological space as the parameter space.

\section{Pullback attractors}
The most obvious way to formulate asymptotic behaviour for a nonautonomous dynamical system is to consider the limit set of the forwards trajectory $\{\Phi(t,p,x_0)\}_{t\geq 0}$ as $t \to \infty$ for each fixed initial value $(p,x_0)$. The resulting (omega) limit set $\omega^{+}(p,x_0)$ now depends on both the starting parameter $p$ and the starting state $x_0$. In general, the limit sets $\omega^{+}(p,x_0)$ are not invariant under $\Phi$ in the sense that $\Phi(t,p, \omega^{+}(p,x_0))= \omega^{+}(p,x_0)$ for all $t\in {\mathbb R}^{+}$. In fact, it is too restrictive to define invariance like this in terms of just a single set. Instead, it is more useful to say that a family $\widehat{A} = \left\{A_{p}; \, p \in P \right\}$ of nonempty compact subsets of ${\mathbb R}^d$ is {\em invariant under} $\Phi$, or $\Phi$-{\em invariant,} if
$$
\Phi(t,p, A_{p}) = A_{\theta_t p}, \quad p \in P, t \in {\mathbb R}^{+}.
$$
For example, with $P = {\mathbb R}$, the family of singleton sets defined by $A_{t_0} =\{\bar{\phi}(t_0)\}$, where $\bar{\phi}$ is a solution of the nonautonomous differential equation (\ref{NDE}) that exists for all $t \in {\mathbb R}$, is $\Phi$-invariant.

The natural generalization of convergence seems to be the {\em forwards running convergence\/} defined by
$$
H^{*}(\Phi(t,p,x_0),A_{\theta_t p}) \to 0 \quad \mbox{as} \ \ t \to \infty.
$$
However, this does not ensure convergence to a specific component set $A_{p}$ for a fixed $p$. For that one needs to start ``progressively earlier'' at $\theta_{-t} p$ in order to ``finish'' at $p$. (Think of $P = {\mathbb R}$ with $p$ being the final time $t_0$ and $\theta_{-t} t_0= t_0-t$ the new starting time). This leads to the concept of {\em pullback convergence\/} defined by
$$
H^{*}(\Phi(t,\theta_{-t} p,x_0),A_{p}) \to 0 \quad \mbox{as} \ \ t \to \infty.
$$
The invariant family $\widehat{A}$ is then called a {\em pullback attractor.\/}

The concepts of forwards and pullback attraction are independent of each other. For example, consider the scalar differential equations $\dot{x} = g(t,x) = \pm 2t x$ with the parameter set $P= {\mathbb R}$ and the shift mappings as above. In both cases the invariant families have component sets $A_{t_0} = \{0\}$ for all $t_0 \in {\mathbb R}$. The zero solution is forwards attracting only for the ``$-$'' system and pullback attracting only for the ``$+$'' system. (See the example at the end of the paper for some additional details).

The purpose of this paper is to construct a Lyapunov function that characterizes such pullback attraction and pullback attractors. This will be done in terms of a more general definition of a pullback attractor that encompasses local attraction as well as parametrically dependent regions of pullback attraction.
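Before turning to this more general definition, it is worth making the independence of the two concepts in the scalar example above explicit. For the ``$-$'' system the cocycle mapping is $\Phi(t,t_0,x_0) = x_0 e^{t_0^2-(t+t_0)^2}$, so for $x_0 \neq 0$ and each fixed $t_0 \in {\mathbb R}$
$$
\left|\Phi(t,t_0,x_0)\right| = |x_0|\, e^{-2t_0t-t^2} \to 0, \qquad \left|\Phi(t,\theta_{-t}t_0,x_0)\right| = |x_0|\, e^{t^2-2t_0t} \to \infty \quad \mbox{as} \ \ t \to \infty,
$$
that is, the zero solution is forwards but not pullback attracting; for the ``$+$'' system the roles of the two limits are reversed (see Section 7).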
A $\Phi$-invariant family of compact subsets $\widehat{A} = \{A_{p} ; \, p \in P\}$ will be called a {\em pullback attractor with respect to a basin of attraction system} ${\cal D}_{att}$ if it satisfies the pullback attraction property
\begin{equation}\label{pb}
\lim_{t \to \infty} H^*\left(\Phi(t,\theta_{-t} p,D_{\theta_{-t} p}),A_{p}\right) = 0
\end{equation}
for all $p \in P$ and all $\widehat{D} = \left\{D_{p};\ p \in P \right\}$ belonging to a basin of attraction system ${\cal D}_{att}$, that is, a collection of families of nonempty sets $\widehat{D} = \left\{D_{p} ;\ p \in P\right\}$, where $D_{p}$ is bounded in ${\mathbb R}^d$ for each $p \in P$, with the property that $\widehat{D}^{(1)}= \left\{D^{(1)}_{p}; p \in P \right\} \in {\cal D}_{att}$ if $\widehat{D}^{(2)} =\left\{D^{(2)}_{p}\ ;\ p \in P\right\} \in {\cal D}_{att}$ and $D^{(1)}_{p} \subseteq D^{(2)}_{p}$ for all $p \in P$.

Note that the mapping $t \mapsto A_{\theta_t p}$ is continuous for each fixed $p \in P$ due to the continuity of $\Phi$ in $t$ and the $\Phi$-invariance of $\widehat{A}$. (However, the mapping $p \mapsto A_{p}$ is usually only upper semi-continuous, see \cite{CKS}). Obviously $\widehat{A} \in {\cal D}_{att}$. In fact, $A_p \subset \mathop{\rm int}{\cal D}_{att}(p)$, where ${\cal D}_{att}(p) :=\bigcup_{\widehat{D} = \{D_{p} ;\ p \in P\} \in {\cal D}_{att}} D_p$, for each $p \in P$.

\section{Lyapunov Functions for Pullback Attractors}
The main result is to establish the existence of a Lyapunov function that characterizes pullback attraction and pullback attractors.
\begin{theorem}\label{th1}
Let $\widehat{A}$ be a pullback attractor with a basin of attraction system ${\cal D}_{att}$ for the cocycle dynamical system $(\Phi,\Theta)$ generated by the differential equation (\ref{PDE}), where
\begin{itemize}
\item $(p,x) \mapsto f(p,x)$ is continuous in $(p,x) \in P\times {\mathbb R}^d$;
\item $x \mapsto f(p,x)$ is globally Lipschitz continuous on ${\mathbb R}^d$ with Lipschitz constant $L(p)$ for each $p \in P$;
\item $p \mapsto L(p)$ is continuous;
\item $(t,p) \mapsto \theta_t p$ is continuous.
\end{itemize}
Then there exists a function $V : \bigcup_{p\in P} \left(\{p\}\times {\cal D}_{att}(p)\right) \to {\mathbb R}^{+}$ such that
\vspace*{2mm}

\noindent {\bf Property 1 (upper bound): } For all $p \in P$ and $x_0 \in {\cal D}_{att}(p)$
\begin{equation}\label{ub}
V(p,x_0) \leq \mathop{\rm dist}(x_0, A_{p}) ;
\end{equation}
\vspace*{2mm}

\noindent {\bf Property 2 (lower bound): } For each $p \in P$ there exists a function $a(p, \cdot) : {\mathbb R}^{+} \to {\mathbb R}^{+}$ with $a(p,0) = 0$ and $a(p,r) > 0$ for $r > 0$, which is increasing in $r$, such that
\begin{equation}\label{lb}
a(p,\mathop{\rm dist}(x_0, A_{p})) \leq V(p,x_0)
\end{equation}
for all $x_0 \in {\cal D}_{att}(p)$;
\vspace*{2mm}

\noindent {\bf Property 3 (Lipschitz condition): } For all $p \in P$ and $x_0$, $y_0 \in {\cal D}_{att}(p)$
\begin{equation}\label{lip}
\left|V(p,x_0)- V(p,y_0)\right| \leq \|x_0-y_0\| ;
\end{equation}
\vspace*{3 mm}

\noindent {\bf Property 4 (pullback convergence): } For all $p \in P$ and any $\widehat{D} \in {\cal D}_{att}$
\begin{equation}\label{pbc}
\limsup_{t \to \infty} \sup_{z \in D_{\theta_{-t} p}} V(p,\Phi(t,\theta_{-t} p,z)) = 0.
\end{equation} \vspace*{3 mm} \noindent In addition,\\ \noindent {\bf Property 5 (forwards convergence): } There exists $\widehat{N} \in {\cal D}_{att}$ consisting of nonempty compact sets $N_{p}$ which are $\Phi$-positively invariant in the sense that $\Phi(t,p,N_{p}) \subseteq N_{\theta_t p}$ for all $t \geq 0$, $p\in P$, and satisfy $A_{p} \subset {\rm int} N_{p}$ for each $p\in P$ such that \begin{equation}\label{for} V(\theta_t p,\Phi(t,p,x_0)) \leq e^{-t} V(p,x_0) \end{equation} for all $x_0 \in N_{p}$ and $t \geq 0$. \end{theorem} The proof will be given in Section 6, but first it will be shown in the next section that the assumed pullback attractor has a pullback absorbing neighbourhood system. \section{Pullback Absorbing Neighbourhood Systems} A family $\widehat{B} = \left\{B_{p}\ ;\ p\in P \right\} \in {\cal D}_{att}$ of nonempty compact subsets $B_{p}$ of ${\mathbb R}^d$ with nonempty interior is called a {\em pullback absorbing neighbourhood system\/} for a $\Phi$-pullback attractor $\widehat{A}$ if it is $\Phi$-positively invariant and if it {\em pullback absorbs\/} all $\widehat{D} \in {\cal D}_{att}$, that is for each $\widehat{D} \in {\cal D}_{att}$ and $p \in P$ there exists a $T(\widehat{D},p) \in {\mathbb R}^{+}$ such that $$ \Phi(t,\theta_{-t} p,D_{\theta_{-t} p}) \subset \mbox{\rm int} B_{p} \quad \mbox{for all} \quad t \geq T(\widehat{D},p). $$ Obviously, $\widehat{A} \subset \widehat{B} \in {\cal D}_{att}$. Moreover, by positive invariance and the cocycle property $$ \Phi(s+t,\theta_{-s-t}p,B_{\theta_{-s-t}p}) \subset \Phi(t,\theta_{-t}p, B_{\theta_{-t}p}) $$ for all $s$, $t \geq 0$ and $p \in P$, from which it follows that \begin{equation}\label{pba} A_{p} = \bigcap_{t \geq 0} \Phi(t,\theta_{-t}p,B_{\theta_{-t}p}) \quad \mbox{for all} \quad p \in P. \end{equation} The following lemma shows that there always exists such a pullback absorbing neighbourhood system for any given cocycle attractor. This will be required for the construction of the Lyapunov function for the proof of Theorem \ref{th1}. \begin{lemma}\label{lem} If $\widehat{A}$ is a cocycle attractor with a basin of attraction system ${\cal D}_{att}$ for a cocycle dynamical system $(\Phi,\Theta)$ for which $(t,p,x) \mapsto \Phi(t, \theta_{-t} p,x)$ is continuous, then there exists a pullback absorbing neighbourhood system $\widehat{B} \subset {\cal D}_{att}$ of $\widehat{A}$ with respect to $\Phi$. \end{lemma} \noindent {\bf Proof: } Since $A_p \subset {\rm int}{\cal D}_{att}(p)$, there is a $\delta_{p} \in (0,1]$ such that $B[A_{p}; 2\delta_{p}] \subset {\rm int}{\cal D}_{att}(p)$ for each $p \in P$. Define $$ B_{p} := \overline{\bigcup_{t\geq 0} \Phi(t,\theta_{-t}p,B[A_{\theta_{-t}p};\delta_{\theta_{-t}p}])}. $$ Obviously, $A_{p} \subset B[A_{p};\delta_{p}] \subset B_{p} \subset {\cal D}_{att}(p)$ for each $p \in P$. 
By the cocycle property
\begin{eqnarray*}
\Phi(t,p,B_{p}) & \subseteq & \overline{\bigcup_{s\geq 0} \Phi(t,p, \Phi(s,\theta_{-s}p,B[A_{\theta_{-s}p}; \delta_{\theta_{-s}p}]))} \\
& = & \overline{\bigcup_{s\geq 0} \Phi(s+t,\theta_{-s}p,B[A_{\theta_{-s}p}; \delta_{\theta_{-s}p}])} \\
& = & \overline{\bigcup_{r\geq t} \Phi(r,\theta_{-r+t}p,B[A_{\theta_{-r+t}p}; \delta_{\theta_{-r+t}p}])} \\
& \subseteq & \overline{\bigcup_{r\geq 0} \Phi(r,\theta_{-r}\theta_{t}p,B[A_{\theta_{-r}\theta_{t}p}; \delta_{\theta_{-r} \theta_{t}p}])} \ = \ B_{\theta_t p}
\end{eqnarray*}
for all $t \geq 0$, so $\Phi(t,p,B_{p}) \subseteq B_{\theta_t p}$, that is, $\widehat{B} = \left\{B_{p}\ ;\ p\in P \right\}$ is $\Phi$-positively invariant.

Now by pullback convergence, there exists a $T = T(p,\delta_{p}) \in {\mathbb R}^{+}$ such that
$$
\Phi(t,\theta_{-t}p,B[A_{\theta_{-t}p};\delta_{\theta_{-t}p}]) \subseteq B[A_{p};\delta_{p}] \subset B_{p}
$$
for all $t \geq T$. Hence
\begin{eqnarray*}
B_{p} & = & \overline{\bigcup_{t\geq 0} \Phi(t,\theta_{-t}p,B[A_{\theta_{-t}p}; \delta_{\theta_{-t}p}])} \\
& \subseteq & B[A_{p}; \delta_{p}] \cup \overline{\bigcup_{t\in [0,T]}\Phi(t,\theta_{-t}p,B[A_{\theta_{-t}p}; \delta_{\theta_{-t}p}])} \\
& = & \overline{\bigcup_{t\in [0,T]}\Phi(t,\theta_{-t}p,B[A_{\theta_{-t}p}; \delta_{\theta_{-t}p}])} \\
& \subseteq & \overline{\bigcup_{t\in [0,T]}\Phi(t,\theta_{-t}p,B[A_{\theta_{-t}p};1])} \\
& \subseteq & \overline{\bigcup_{t\in [0,T]}\Phi(t,\theta_{-t}p,B^{*})} =: U_{p,T}.
\end{eqnarray*}
Here $B^{*} := \overline{\bigcup_{t\in [0,T]}B[A_{\theta_{-t}p};1]}$ is compact by the continuity of the mapping $t \mapsto A_{\theta_{-t}p}$ and the compactness of the sets $B[A_{\theta_{-t}p};1]$. The compactness of the set $U_{p,T}$ then follows by the continuity of the set-valued mapping $t \mapsto \Phi(t,\theta_{-t}p,B^{*})$. Hence $B_{p}$ is compact for each $p \in P$.

To see that $\widehat{B}$ so constructed is pullback absorbing from ${\cal D}_{att}$, let $\widehat{D} \in {\cal D}_{att}$ and fix $p \in P$. Since $\widehat{A}$ is pullback attracting, there exists a $T(\widehat{D},\delta_{p},p) \in {\mathbb R}^{+}$ such that
$$
H^{*}\left(\Phi(t,\theta_{-t}p,D_{\theta_{-t}p}), A_{p}\right) < \delta_{p},
$$
that is, $\Phi(t,\theta_{-t}p,D_{\theta_{-t}p}) \subset B(A_{p};\delta_{p})$, for all $t \geq T(\widehat{D},\delta_{p},p)$. But $B(A_{p};\delta_{p}) \subset \mathop{\rm int} B_{p}$, so $\Phi(t,\theta_{-t}p,D_{\theta_{-t}p}) \subset {\rm int} B_{p}$ for all $t \geq T(\widehat{D},\delta_{p}, p)$. Hence $\widehat{B}$ is pullback absorbing as required. \hfill $\Box$

\section{Proof of Theorem \protect{\ref{th1}}}
A Lyapunov function $V$ that characterizes a pullback attractor $\widehat{A}$ and satisfies properties 1--5 of Theorem \ref{th1} will be constructed by adapting the construction used in \cite{K} for nonautonomous difference equations to the ordinary differential equation setting under consideration. Define $V(p,x_0)$ for all $p \in P$ and $x_0 \in {\cal D}_{att}(p)$ by
$$
V(p,x_0) := \sup_{t\geq 0} e^{-T_{p,t}} \mathop{\rm dist}\left(x_0, \Phi(t,\theta_{-t}p,B_{\theta_{-t}p})\right),
$$
where
$$
T_{p,t} = t + \int_{0}^{t} L(\theta_{-s}p) \, ds \quad \mbox{with} \quad T_{p,0} = 0.
$$
The integral here exists due to the continuity assumptions.
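For orientation, if the Lipschitz constant does not depend on the parameter, say $L(p) \equiv L \geq 0$, then
$$
T_{p,t} = t + \int_0^t L \, ds = (1+L)\,t,
$$
so the weight $e^{-T_{p,t}}$ in the definition of $V$ is simply the exponential $e^{-(1+L)t}$.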
Note that $T_{p,t} \geq t$ and that
$$
T_{\theta_{t} p,s+t} = T_{p,s} + t + \int_0^t L(\theta_r p)\, dr
$$
for all $s$, $t \geq 0$ and $p \in P$, the latter holding because
\begin{eqnarray*}
T_{\theta_{t} p,s+t} & = & s+t + \int_0^{s+t} L(\theta_{-r}\theta_t p)\, dr \\
& = & s+ \int_t^{s+t} L(\theta_{-r+t} p)\, dr +t + \int_0^{t} L(\theta_{-r+t} p)\, dr \\
& = & s+ \underbrace{\int_0^{s} L(\theta_{-u}p)\, du}_{u=r-t} + t - \underbrace{\int_{t}^{0} L(\theta_{v} p)\, dv}_{v=t-r} \\
& = & T_{p,s} + t + \int_{0}^{t} L(\theta_{v} p)\, dv.
\end{eqnarray*}

\subsection{Proof of property 1}
Since $e^{-T_{p,t}} \leq 1$ for all $t \geq 0$ and since $\mathop{\rm dist}\left(x_0, \Phi(t,\theta_{-t}p,B_{\theta_{-t}p})\right)$ is monotonically increasing from $0 \leq \mathop{\rm dist}\left(x_0, \Phi(0,p,B_{p})\right) = \mathop{\rm dist}\left(x_0, B_{p}\right)$ at $t = 0$ to $\mathop{\rm dist}\left(x_0,A_{p}\right)$ as $t \to \infty$, it follows that
$$
V(p,x_0) = \sup_{t\geq 0} e^{-T_{p,t}} \mathop{\rm dist}\left(x_0, \Phi(t,\theta_{-t}p,B_{\theta_{-t}p})\right) \leq 1 \cdot {\rm dist}\left(x_0,A_{p}\right) .
$$

\subsection{Proof of property 2}
By Property 1, $V(p,x_0) = 0$ for $x_0 \in A_{p}$. Assume instead that $x_0 \in {\cal D}_{att}(p)\setminus A_{p}$. Now the supremum in
$$
V(p,x_0) = \sup_{t\geq 0} e^{-T_{p,t}} \mathop{\rm dist}\left(x_0, \Phi(t,\theta_{-t}p,B_{\theta_{-t}p})\right)
$$
involves the product of an exponentially decreasing quantity bounded below by zero and a bounded increasing function, since the $\Phi(t,\theta_{-t}p,B_{\theta_{-t}p})$ are a nested family of compact sets decreasing to $A_{p}$ with increasing $t$. Hence there exists a $T^{*} = T^{*}(p,x_0) \in {\mathbb R}^{+}$ such that
$$
\frac{1}{2}\mathop{\rm dist}(x_0, A_{p})\leq \mathop{\rm dist}\left(x_0, \Phi(t,\theta_{-t}p, B_{\theta_{-t}p})\right)
$$
for all $t \geq T^{*}$, but not for $t < T^{*}$. Thus, from above,
\begin{eqnarray*}
V(p,x_0) & \geq & e^{-T_{p,T^{*}}} \mathop{\rm dist}\left(x_0, \Phi(T^{*},\theta_{-T^{*}}p,B_{\theta_{-T^{*}}p})\right) \\
& \geq & \frac{1}{2} e^{-T_{p,T^{*}}} \mathop{\rm dist}\left(x_0, A_{p}\right).
\end{eqnarray*}
Define
$$
\hat{T}(p,r) := \sup \{ T^{*}(p,x_0) : x_0 \in {\cal D}_{att}(p), \ \ \mathop{\rm dist}\left(x_0, A_{p}\right)=r \}.
$$
Then $\hat{T}(p,r) < \infty$. To see this, note that by the triangle inequality
$$
\mathop{\rm dist}(x_0,A_{p}) \leq {\rm dist}(x_0,\Phi(t,\theta_{-t}p,B_{\theta_{-t}p})) + H^{*}(\Phi(t,\theta_{-t}p,B_{\theta_{-t}p}),A_{p}).
$$
Also, by pullback convergence, there exists a finite $T(p,r/2)$ such that
$$
H^{*}(\Phi(t,\theta_{-t}p,B_{\theta_{-t}p}),A_{p}) < \frac{1}{2} r
$$
for all $t \geq T(p,r/2)$. Hence
$$
r \leq \mathop{\rm dist}(x_0,\Phi(t,\theta_{-t}p,B_{\theta_{-t}p})) + \frac{1}{2} r
$$
for $\mathop{\rm dist}(x_0,A_{p}) = r$ and $t \geq T(p,r/2)$, that is,
$$
\frac{1}{2} r \leq \mathop{\rm dist}(x_0,\Phi(t,\theta_{-t}p,B_{\theta_{-t}p})).
$$
Thus, $\hat{T}(p,r) \leq T(p,r/2) < \infty$. In addition, $\hat{T}(p,r)$ is obviously nondecreasing in $r$ as $r \to 0$. Finally, define
\begin{equation}\label{lbf}
a(p,r) := \frac{1}{2}r \ e^{-T_{p,\hat{T}(p,r)}},
\end{equation}
which satisfies the stated properties.
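Indeed, for $x_0 \in {\cal D}_{att}(p)\setminus A_{p}$ with $r = \mathop{\rm dist}(x_0,A_{p})$ one has $T^{*}(p,x_0) \leq \hat{T}(p,r)$ and $t \mapsto T_{p,t}$ is nondecreasing, so
$$
a(p,\mathop{\rm dist}(x_0,A_{p})) = \frac{1}{2}\, \mathop{\rm dist}(x_0,A_{p})\, e^{-T_{p,\hat{T}(p,r)}} \leq \frac{1}{2}\, e^{-T_{p,T^{*}(p,x_0)}} \mathop{\rm dist}(x_0,A_{p}) \leq V(p,x_0),
$$
which is the lower bound (\ref{lb}).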
\subsection{Proof of property 3}
From the definition
\begin{eqnarray*}
\lefteqn{ \left|V(p,x_0) - V(p,y_0)\right| }\\
& = & \big|\sup_{t\geq 0} e^{-T_{p,t}} \mathop{\rm dist}\left(x_0, \Phi(t,\theta_{-t}p,B_{\theta_{-t}p})\right) \\
&&\quad- \sup_{t\geq 0} e^{-T_{p,t}} \mathop{\rm dist}\left(y_0, \Phi(t,\theta_{-t}p,B_{\theta_{-t}p})\right) \big| \\
& \leq & \sup_{t\geq 0} e^{-T_{p,t}} \left|\mathop{\rm dist}\left(x_0, \Phi(t,\theta_{-t}p, B_{\theta_{-t}p})\right) - \mathop{\rm dist}\left(y_0, \Phi(t,\theta_{-t}p, B_{\theta_{-t}p})\right) \right| \\
& \leq & \sup_{t\geq 0} e^{-T_{p,t}} \|x_0-y_0\| \leq \|x_0-y_0\|.
\end{eqnarray*}

\subsection{Proof of property 4}
Assume the opposite. Then there exists an $\varepsilon_0 > 0$, a sequence $t_j \to \infty$ in ${\mathbb R}^{+}$ and points $x_j \in \Phi(t_j,\theta_{-t_j}p, D_{\theta_{-t_j}p})$ such that $V(p,x_j) \geq \varepsilon_0$ for all $j \in {\mathbb N}$. Since $\widehat{D} \in {\cal D}_{att}$ and $\widehat{B}$ is pullback absorbing, there exists a $T = T(\widehat{D},p) \in {\mathbb R}^{+}$ such that
$$
\Phi(t_j,\theta_{-t_j}p,D_{\theta_{-t_j}p}) \subset B_{p}, \quad \mbox{for all} \quad t_j \geq T.
$$
Hence $x_j \in B_{p}$ for all $j$ such that $t_j \geq T$. Then, since $B_{p}$ is a compact set, there exists a convergent subsequence $x_{j'} \to x^{*} \in B_{p}$. But
$$
x_{j'} \in \overline{\bigcup_{t \geq t_{j'}} \Phi(t,\theta_{-t}p,D_{\theta_{-t}p}) }
$$
and
$$
\bigcap_{t_{j'}} \overline{\bigcup_{t \geq t_{j'}} \Phi(t,\theta_{-t}p, D_{\theta_{-t}p}) } \subseteq A_{p}
$$
by (\ref{pba}) and the definition (and existence) of a pullback absorbing system. Hence $x^{*} \in A_{p}$ and $V(p,x^{*}) = 0$ must hold. But $V$ is Lipschitz continuous in its second variable by property 3, so
$$
\varepsilon_0 \leq V(p,x_{j'}) = \left|V(p,x_{j'})-V(p,x^{*})\right| \leq \|x_{j'}- x^{*}\|,
$$
which contradicts the convergence $x_{j'} \to x^{*}$. Hence property 4 must hold.

\subsection{Proof of property 5}
Take $N_{p} \equiv B_{p}$ for each $p \in P$. Thus $\widehat{N} = \left\{N_{p}\ ; p\in P \right\}$ is positively invariant. It remains to establish the exponential decay inequality (\ref{for}). Note that the cocycle mapping $\Phi$, considered as the solution mapping of the nonautonomous differential equation (\ref{PDE}), satisfies the Lipschitz condition
$$
\left\|\Phi(t,p,x_0) - \Phi(t,p,y_0) \right\| \leq e^{\int_0^t L(\theta_s p)\, ds} \|x_0-y_0\|
$$
for all $x_0$, $y_0 \in {\mathbb R}^d$ (a consequence of Gronwall's inequality), from which it follows that
$$
\mathop{\rm dist}( \Phi(t,p,x_0), \Phi(t,p,C_{p})) \leq e^{\int_0^t L(\theta_s p)\, ds} \mathop{\rm dist}(x_0, C_{p})
$$
for any nonempty compact subset $C_{p}$ of ${\mathbb R}^d$. Now $\Phi(t,p,x_0) \in N_{\theta_t p}$ when $x_0 \in N_{p}$.
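Note also that, for $x_0 \in N_{p} = B_{p}$ and $0 \leq u \leq t$, the cocycle property and the positive invariance of $\widehat{B}$ give
$$
\Phi(t,p,x_0) = \Phi\left(u,\theta_{t-u}p,\Phi(t-u,p,x_0)\right) \in \Phi\left(u,\theta_{t-u}p,B_{\theta_{t-u}p}\right) = \Phi\left(u,\theta_{-u}\theta_{t}p,B_{\theta_{-u}\theta_{t}p}\right),
$$
so the terms with elapsed time $u \in [0,t]$ contribute nothing to the supremum defining $V(\theta_t p,\Phi(t,p,x_0))$.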
Re-indexing and then using the cocycle property and the above Lipschitz condition thus gives
\begin{eqnarray*}
V(\theta_t p,\Phi(t,p,x_0)) & = & \sup_{s\geq 0} e^{-T_{\theta_t p,s+t}} \mathop{\rm dist}( \Phi(t,p,x_0), \Phi(s+t,\theta_{-s} p,B_{\theta_{-s} p})) \\
& = & \sup_{s\geq 0} e^{-T_{\theta_t p,s+t}} \mathop{\rm dist}( \Phi(t,p,x_0), \Phi(t,p, \Phi(s,\theta_{-s} p ,B_{\theta_{-s} p}))) \\
& \leq & \sup_{s\geq 0} e^{-T_{\theta_{t} p,s+t}} e^{\int_0^t L(\theta_r p)\, dr} \mathop{\rm dist}( x_0, \Phi(s,\theta_{-s} p,B_{\theta_{-s} p} )).
\end{eqnarray*}
However, $T_{\theta_{t} p,s+t} = T_{p,s}+t + \int_0^t L(\theta_r p)\, dr$, so
\begin{eqnarray*}
\lefteqn{ V(\theta_t p,\Phi(t,p,x_0))}\\
& \leq & \sup_{s\geq 0} e^{-T_{p,s}-t-\int_0^t L(\theta_r p)\, dr + \int_0^t L(\theta_r p)\, dr} \mathop{\rm dist}( x_0, \Phi(s,\theta_{-s} p,B_{\theta_{-s}p} )) \\
& = & \sup_{s\geq 0}e^{-T_{p,s}-t} \mathop{\rm dist}( x_0, \Phi(s,\theta_{-s}p,B_{\theta_{-s}p} )) \\
& = & e^{-t} \sup_{s\geq 0} e^{-T_{p,s}} \mathop{\rm dist}( x_0, \Phi(s,\theta_{-s} p,B_{\theta_{-s} p} )) = e^{-t} V(p,x_0),
\end{eqnarray*}
which is the desired inequality.
\vspace*{2mm}

This completes the proof of Theorem \ref{th1}. \hfill $\Box$

\section{Example}
The forwards convergence inequality (\ref{for}) of the pullback Lyapunov function does not imply the usual forwards Lyapunov stability or asymptotic stability. Although the inequality
$$
a(\theta_{t} p,\mathop{\rm dist}(\Phi(t,p,x_0),A_{\theta_{t} p})) \leq e^{-t} V(p,x_0)
$$
then holds, $\mathop{\rm dist}(\Phi(t,p,x_0),A_{\theta_{t} p})$ need not become small as $t \to \infty$. The reason for this is that, without additional assumptions on the dynamical behaviour, it is possible that
$$
\inf_{t\geq 0} a(\theta_{t} p,r) = 0
$$
for some $r > 0$ and $p \in P$.

In fact, this is what happens with the differential equation
$$
\dot{x} = 2tx
$$
with the solution $x(t;t_0,x_0) = x_0 e^{t^2-t_0^2}$, where $t \geq t_0$, and the cocycle mapping
$$
\Phi(t,t_0,x_0) = x_0 e^{(t+t_0)^2-t_0^2}, \quad t \geq 0.
$$
Here the parameter $p = t_0 \in P = {\mathbb R}$ and $\theta_t t_0 = t+t_0$. The pullback attractor here has components $A_{t_0} = \{0\}$ for each $t_0 \in {\mathbb R}$ and the pullback attraction is global, i.e. there is no restriction on the bounded subsets that are considered in the basin of attraction system. A Lyapunov function satisfying the properties of Theorem \ref{th1} is given by
$$
V(t_0,x_0) = |x_0| e^{-t_0-t_0^2-\frac{1}{4}}.
$$
Property 1, property 2 with $a(t_0,r) = r e^{-|t_0|-t_0^2-\frac{1}{4}}$, and the Lipschitz property 3 are immediate, while the pullback convergence property 4 follows from
\begin{eqnarray*}
V(t_0,\Phi(t,t_0-t,x_0)) & = & \left| x_0e^{(t+t_0-t)^2-(t_0-t)^2} \right| e^{-t_0-t_0^2-\frac{1}{4}} \\
& = & e^{-(t_0-t)^2 - t_0 -\frac{1}{4}}|x_0| \to 0 \quad \mbox{as} \quad t \to \infty.
\end{eqnarray*}
In addition, $V$ satisfies inequality (\ref{for}), since
\begin{eqnarray*}
V(t_0+t,\Phi(t,t_0,x_0)) & = & \left|x_0 e^{(t+t_0)^2-t_0^2}\right| e^{-(t_0+t)-(t_0+t)^2-\frac{1}{4}} \\
& = & e^{-t} V(t_0,x_0) \to 0 \quad \mbox{as} \quad t \to \infty.
\end{eqnarray*}
However, the zero solution is obviously not forwards Lyapunov stable.

\begin{thebibliography}{99}
{\frenchspacing
\bibitem{A} L. Arnold, {\em Random Dynamical Systems.\/} Springer--Verlag, Heidelberg, 1998.
\bibitem{CKS} D.N. Cheban, P.E. Kloeden and B. Schmalfu\ss, Pullback attractors in dissipative nonautonomous differential equations under discretization. {\em DANSE}--Preprint, FU Berlin, 1998.
\bibitem{CF} H. Crauel and F.
Flandoli, Attractors for random dynamical systems, {\em Probab. Theory Relat. Fields,\/} {\bf 100} (1994), 365--393.
\bibitem{FS} F. Flandoli and B. Schmalfu\ss, Random attractors for the 3D stochastic Navier--Stokes equation with multiplicative white noise, {\em Stochastics and Stochastics Reports,\/} {\bf 59} (1996), 21--45.
\bibitem{KMS} J. Kato, A.A. Martynyuk and A.A. Shestakov, {\em Stability of Motion of Nonautonomous Systems (Method of Limiting Equations)}, Gordon and Breach Publishers, Luxembourg, 1996.
\bibitem{K} P.E. Kloeden, Lyapunov functions for cocycle attractors in nonautonomous difference equations, {\em Izvestiya Akad Nauk Republ. Moldavia Matematika\/} {\bf 26} (1998), 32--42.
\bibitem{KL} P.E. Kloeden and J. Lorenz, Stable attracting sets in dynamical systems and in their one-step discretizations, {\em SIAM J. Numer. Analysis\/} {\bf 23} (1986), 986--995.
\bibitem{KS} P.E. Kloeden and B. Schmalfu{\ss}, Lyapunov functions and attractors under variable time--step discretization, {\em Discrete \& Conts. Dynamical Systems\/} {\bf 2} (1996), 163--172.
\bibitem{MAK} M.A. Krasnosel'skii, {\em The Operator of Translation along Trajectories of Differential Equations\/}, Translations of Mathematical Monographs, Volume 19. American Math. Soc., Providence, R.I., 1968.
\bibitem{S} G.R. Sell, {\em Lectures on Topological Dynamics and Differential Equations.\/} Van Nostrand--Reinhold, London, 1971.
\bibitem{Y} T. Yoshizawa, {\em Stability Theory by Lyapunov's Second Method.\/} Mathematical Soc. Japan, Tokyo, 1966.
}\end{thebibliography}

\medskip
\noindent{\sc Peter E.~Kloeden}\\
FB Mathematik, Johann Wolfgang Goethe Universit\"at\\
D-60054 Frankfurt am Main, Germany\\
email: kloeden@math.uni-frankfurt.de

\end{document}