\documentclass[twoside]{article} \pagestyle{myheadings} \markboth{ A numerical method for solving heat equations}{ Zhilin Li \& Yun-Qiu Shen } \begin{document} \setcounter{page}{100} \title{\vspace{-1in}\parbox{\linewidth}{\footnotesize\noindent Fourth Mississippi State Conference on Differential Equations and \newline Computational Simulations, Electronic Journal of Differential Equations, \newline Conference 03, 1999, pp 100-108. \newline http://ejde.math.swt.edu or http://ejde.math.unt.edu \newline ftp ejde.math.swt.edu or ejde.math.unt.edu (login: ftp)} \vspace{\bigskipamount} \\ % A numerical method for solving heat equations involving interfaces \thanks{ {\em Mathematics Subject Classifications:} 65M06, 65M12, 65M15. \hfil\break\indent {\em Key words:} Finite difference, Second order accuracy, Interface, Local coordinates. \hfil\break\indent \copyright 2000 Southwest Texas State University and University of North Texas. \hfil\break\indent Published July 10, 2000.} } \date{} \author{Zhilin Li \& Yun-Qiu Shen} \maketitle \begin{abstract} In 1993, Li and Mayo \cite{L2} gave a finite-difference method with second order accuracy for solving heat equations involving interfaces, with constant coefficients and discontinuous source terms. In this paper, we extend their result by presenting a finite-difference method which allows the coefficient to take different values in the two sub-regions separated by the interface. This feature is useful in physical applications, and the method is also second order accurate. \end{abstract} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \section{Introduction} Consider the heat equation \begin{equation} u_t = (\beta u_x )_x + (\beta u_y)_y + f(x,y,t)\,, \label{1.1} \end{equation} where $t$ is in $[a, \infty )$ and $(x,y)$ in an open region $\Omega\subset R^2$ which is divided into two sub-regions $\Omega^+$, $\Omega^-$ by an irregular interface $\Gamma$.
Let $\Omega = \Omega^+ \cup \Gamma \cup \Omega^-$ be an open rectangular region in $R^2$, and let $$ \beta (x,y) = \cases{\beta^+, & if $(x,y) \in \Omega^+$,\cr \beta^-, & if $(x,y) \in \Omega^-,$} %\label{1.2} $$ where $\beta^+$ and $\beta^- $ are positive constants. Assume that $f$ is continuous in each sub-region but may have jump discontinuities across $\Gamma$. The solution $u(x,y,t)$ and its normal derivative $u_n(x,y,t)$ across the curve $\Gamma$ have prescribed jumps \begin{eqnarray} &[u]\equiv u^+(x(s),y(s), t) - u^-(x(s),y(s), t) = \omega (s, t),& \label{1.3} \\ &[u_n]\equiv u_n^+(x(s),y(s), t) - u_n^-(x(s),y(s), t) = g(s,t),& \label{1.4} \end{eqnarray} where $s$ is a parameter of $\Gamma$, and the superscripts $+$ and $-$ denote the limiting values of a function from the $\Omega^+$ side and the $\Omega^-$ side of $\Gamma$, respectively. Throughout this paper, we use $[G]=G^+-G^-$ to denote the difference of the limiting values of a function $G$ from the two sides of the interface, as in (\ref{1.3}), (\ref{1.4}). We also assume that the initial condition at $t=a$ and the boundary condition on $\partial \Omega$ are known. In 1993, Li and Mayo \cite{L2} gave a finite-difference method with second order accuracy for solving (\ref{1.1}), assuming that $f$ is discontinuous but $\beta$ is constant. In this paper, we present a finite-difference method for solving (\ref{1.1}) which allows $\beta$ to take different values in the two sub-regions separated by the interface. This feature is useful in physical applications. We organize this paper as follows: In Section 2, we establish local coordinate systems around the interface. In Section 3, we give the correction terms for the finite-difference method. In Section 4, we show that our method is second order accurate. Finally, in Section 5, we give some numerical examples, in which the actual solutions are known, to confirm the theoretical result.
\section{Local Coordinate Systems} We first set up local coordinate systems along the interface $\Gamma$, as in \cite{L1,L2}. Fixing a point $(x_0, y_0)$ on the interface as the origin, we take the normal direction, which makes an angle $\theta$ with the $x$-axis, as the $\xi$-direction. Rotating the $\xi$-direction by ninety degrees counter-clockwise, we obtain the $\eta$-direction. Locally, we can express the curve $\Gamma$ as a function of the independent variable $\eta$, \begin{equation} \xi = \chi ( \eta). \label{2.1} \end{equation} We express (\ref{1.3}), (\ref{1.4}) locally using the $\eta$ coordinate: \begin{eqnarray} [u]& \equiv& u^+ - u^- = \omega ( \eta, t), \label{2.2} \\ \relax [u_n] &\equiv& u_n^+ - u_n^- = g( \eta,t), \label{2.3} \end{eqnarray} where $\omega$ and $g$ are known in advance. From (\ref{2.1}) and (\ref{2.3}), we have $$ g = [u_n] = { { -[u_{\eta}] \chi_{\eta} + [u_{\xi}] } \over { \sqrt{ 1 + \chi_{\eta} ^2 } }}. $$ Differentiating with respect to $\eta$ and noting that $\chi_{\eta}(0)=0$, we have \begin{eqnarray} [u_{\eta} ] &=& \omega_{\eta}, \label{2.5}\\ \relax [u_{\xi}] &=& g, \label{2.6}\\ \relax [u_{\eta \eta} ] &=& -g \chi_{ \eta \eta} + \omega_{ \eta \eta}, \label{2.7}\\ \relax [u_{\xi \eta}] &=& \omega_{\eta} \chi_{\eta \eta} +g_{\eta}.\label{2.8} \end{eqnarray} Rewriting (\ref{1.1}) in the local coordinates \begin{eqnarray} \xi &=& (x-x_0)\cos\theta+(y-y_0)\sin\theta, \label{2.9} \\ \eta &=& -(x-x_0)\sin\theta+(y-y_0)\cos\theta, \label{2.10} \end{eqnarray} and using $$ u_{xx}+u_{yy}=u_{\xi \xi}+u_{\eta \eta}, $$ we obtain \begin{equation} u_t = \beta u_{\xi \xi} + \beta u_{\eta \eta} + \widetilde f (\xi, \eta, t), \label{2.12} \end{equation} where $\widetilde f (\xi, \eta, t)=f(x(\xi,\eta),y(\xi, \eta),t)$.
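The change of variables (\ref{2.9})-(\ref{2.10}) is a plain rotation of the frame. As an illustration (ours, not part of the original method; the function name \verb|to_local| is a hypothetical choice), the following Python sketch maps a point into the local frame and checks that a point a distance $d$ along the normal maps to $(\xi, \eta) = (d, 0)$:

```python
import math

def to_local(x, y, x0, y0, theta):
    """Map (x, y) into the local (xi, eta) frame of (2.9)-(2.10):
    origin at the interface point (x0, y0), xi-axis along the normal,
    which makes an angle theta with the x-axis."""
    xi = (x - x0) * math.cos(theta) + (y - y0) * math.sin(theta)
    eta = -(x - x0) * math.sin(theta) + (y - y0) * math.cos(theta)
    return xi, eta

# Sanity check on the circle x^2 + y^2 = 1/4 used in Section 5: at the
# interface point (x0, y0) below, the outward normal makes the polar
# angle theta = 0.7 with the x-axis, so stepping a distance d along the
# normal should give (xi, eta) = (d, 0).
theta = 0.7
x0, y0 = 0.5 * math.cos(theta), 0.5 * math.sin(theta)
d = 0.01
xi, eta = to_local(x0 + d * math.cos(theta), y0 + d * math.sin(theta),
                   x0, y0, theta)
# xi equals d and eta equals 0, up to rounding
```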
(\ref{2.12}) implies \begin{equation} [u_{\xi \xi}] = [u_{ \xi \xi } ]_1 + \left[{{u_t} \over {\beta}} \right], \label{2.13} \end{equation} where \begin{equation} [u_{ \xi \xi } ]_1 = g \chi_{ \eta \eta} - \omega_{\eta \eta} - \left[ { {\widetilde f} \over {\beta} } \right].\label{2.14} \end{equation} In (\ref{2.5})-(\ref{2.8}) and (\ref{2.13})-(\ref{2.14}), all functions on the right-hand sides are known except for $[ {u_t}/ {\beta}]$ in (\ref{2.13}), which will be treated in the next section. \section{Correction Terms} We first discretize in both the $x$- and $y$-directions with a mesh of size $h$: \begin{eqnarray} u_{xx} &\approx& \delta_x u_{i,j} \equiv {1 \over {h^2}} (u_{i-1,j}-2u_{i,j}+u_{i+1,j}), \label{3.1}\\ u_{yy} &\approx& \delta_y u_{i,j} \equiv {1 \over {h^2}} (u_{i,j-1}-2u_{i,j}+u_{i,j+1}). \label{3.2} \end{eqnarray} We group all the grid points in $\Omega$ into two sets. The set $S_{{\rm reg}}$ consists of the regular points: each such point in one sub-region has no neighboring grid point in the other sub-region. The set $S_{{\rm irr}}$ consists of the irregular points: each such point in one sub-region has at least one neighboring grid point in the other sub-region. For a regular grid point, the local truncation error of (\ref{3.1}), (\ref{3.2}) from $ \beta u_{xx} + \beta u_{yy} + f$ is $O(h^2)$. For an irregular grid point, we need to add some correction terms to (\ref{3.1}), (\ref{3.2}) so that the local truncation error is $O(h)$; then the global error of the solution for solving the heat equation is $O(h^2)$ after a suitable discretization in time $t$. First, we relate the jumps with respect to $x$ and $y$ to the jumps with respect to $\xi$ and $\eta$ by (\ref{2.9}) and (\ref{2.10}).
\begin{eqnarray} [u_x] &=& [u_{\xi}]\cos \theta - [u_{\eta}] \sin \theta, \nonumber\\ \relax [u_y] &=& [u_{\xi}] \sin \theta + [u_{\eta}] \cos \theta, \label{3.3} \\ \relax [u_{xx}] &=& [u_{xx}]_1 + \left[ {{u_t} \over {\beta}}\right] \cos^2 \theta, \nonumber \end{eqnarray} where \begin{eqnarray} [u_{xx}]_1 &=& [u_{\xi \xi} ]_1 \cos^2 \theta - 2[u_{\xi \eta}] \cos \theta \sin \theta + [u_{\eta \eta}] \sin^2 \theta, \nonumber\\ \relax [u_{yy}]& =& [u_{yy}]_1 + \left[ {{u_t} \over {\beta}} \right] \sin^2 \theta, \label{3.6} \end{eqnarray} with $$[u_{yy}]_1 = [u_{\xi \xi} ]_1 \sin^2 \theta + 2[u_{\xi \eta}] \cos \theta \sin \theta + [u_{\eta \eta}] \cos^2 \theta.$$ By (\ref{2.5})-(\ref{2.8}) and (\ref{2.13})-(\ref{2.14}), all the terms on the right-hand sides of (\ref{3.3})-(\ref{3.6}) are known except for $[{u_t}/ \beta]$. \bigskip Now consider an irregular grid point in the $x$-direction; there are four cases: \begin{description} \item{(a)} $(x_i,y_j) \in \Omega^-$, $(x_{i+1}, y_j) \in \Omega^+$; \item{(b)} $(x_i,y_j) \in \Omega^+$, $(x_{i-1}, y_j) \in \Omega^-$; \item{(c)} $(x_i,y_j)\in \Omega^+$, $(x_{i+1}, y_j) \in \Omega^-$; \item{(d)} $(x_i,y_j) \in \Omega^-$, $(x_{i-1}, y_j) \in \Omega^+$. \end{description} For case (a), let $(x^*,y_j)$ be the intersection of $\Gamma$ with the line segment connecting $(x_i,y_j)$ and $(x_{i+1},y_j)$.
Using Taylor's series around $x^*$, we have \begin{eqnarray*} \lefteqn{ {1 \over {h^2}} ( u(x_{i-1},y_j)-2u(x_i,y_j)+u(x_{i+1},y_j) ) }\\ &=& {1 \over {h^2}} \left\{ [u] + [u_x](x_{i+1} - x^*) + {{[u_{xx}]} \over {2!}} (x_{i+1}-x^*)^2 \right\} + u^-_{xx} + O(h), \end{eqnarray*} which implies \begin{equation} u_{xx} = \delta_x u_{i,j} + C_x u_{i,j} - { { [{{u_t} \over \beta}] (x_{i+1}-x^*)^2 \cos^2 \theta} \over {2h^2} } +O(h), \label{3.7} \end{equation} where $$ C_x u_{i,j}= - {1 \over {h^2}} \left\{ [u] + [u_x](x_{i+1} - x^*) + {{[u_{xx}]_1} \over {2!}} (x_{i+1}-x^*)^2 \right\}.$$ Similarly, for case (b), we have \begin{equation} u_{xx} = \delta_x u_{i,j} + C_x u_{i,j} + { { [ {{u_t} \over \beta}] (x_{i-1}-x^*)^2 \cos^2 \theta} \over {2h^2} } +O(h), \label{3.8} \end{equation} where $$ C_x u_{i,j}= {1 \over {h^2}} \left\{ [u] + [u_x](x_{i-1} - x^*) + {{[u_{xx}]_1} \over {2!}} (x_{i-1}-x^*)^2 \right\} . $$ For case (c), we have \begin{equation} u_{xx} = \delta_x u_{i,j} + C_x u_{i,j} + { { [{{u_t} \over \beta}] (x_{i+1}-x^*)^2 \cos^2 \theta} \over {2h^2} } +O(h), \label{3.9} \end{equation} where $$ C_x u_{i,j}= {1 \over {h^2}} \left\{ [u] + [u_x](x_{i+1} - x^*) + {{[u_{xx}]_1} \over {2!}} (x_{i+1}-x^*)^2 \right\}.$$ For case (d), we have \begin{equation} u_{xx} = \delta_x u_{i,j} + C_x u_{i,j} - { { [{{u_t} \over \beta}] (x_{i-1}-x^*)^2 \cos^2 \theta} \over {2h^2} } +O(h), \label{3.10} \end{equation} where $$ C_x u_{i,j}= - {1 \over {h^2}} \left\{ [u] + [u_x](x_{i-1} - x^*) + {{[u_{xx}]_1} \over {2!}} (x_{i-1}-x^*)^2 \right\}.$$ Analogously, in the $y$-direction, we also have four cases, with $\sin^2 \theta$ in place of $\cos^2 \theta$. We add correction terms such that the local truncation error is $O(h)$.
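To make the case (a) formulas concrete, here is a small Python sketch (our illustration; the helper names are hypothetical). For a steady $u$ with continuous $\beta$, the $[u_t/\beta]$ term in (\ref{3.7}) drops out and $[u_{xx}]_1$ reduces to $[u_{xx}]$; for a piecewise-quadratic $u$, the corrected difference $\delta_x u_{i,j} + C_x u_{i,j}$ then recovers $u_{xx}$ exactly:

```python
def delta_x(um1, u0, up1, h):
    """Standard central difference (3.1) along one grid line."""
    return (um1 - 2.0 * u0 + up1) / h**2

def correction_x_case_a(h, x_star, x_ip1, jump_u, jump_ux, jump_uxx1):
    """C_x u_{i,j} of (3.7), case (a): (x_i, y_j) in Omega^-,
    (x_{i+1}, y_j) in Omega^+, interface crossing at x = x_star;
    jump_* are [u], [u_x], [u_xx]_1 evaluated at the crossing."""
    d = x_ip1 - x_star
    return -(jump_u + jump_ux * d + 0.5 * jump_uxx1 * d * d) / h**2

# Piecewise-quadratic test: u^-(x) = x^2 and
# u^+(x) = x^2 + a + b (x - xs) + (c/2)(x - xs)^2,
# so [u] = a, [u_x] = b, [u_xx] = c at the crossing xs,
# while u_xx = 2 on both sides.
a, b, c = 1.0, -2.0, 3.0
xs, h = 0.52, 0.1
xm1, xi, xp1 = 0.4, 0.5, 0.6        # x_i in Omega^-, x_{i+1} in Omega^+
uminus = lambda x: x * x
uplus = lambda x: x * x + a + b * (x - xs) + 0.5 * c * (x - xs) ** 2

approx = (delta_x(uminus(xm1), uminus(xi), uplus(xp1), h)
          + correction_x_case_a(h, xs, xp1, a, b, c))
# approx recovers u_xx = 2 up to rounding for this quadratic example
```

The same bookkeeping, with the signs of (\ref{3.8})-(\ref{3.10}), covers the other three cases.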
Now using these correction terms in both the $x$- and $y$-directions, we obtain a system of ordinary differential equations \begin{eqnarray} (u_{i,j})_t &=& \beta (\delta_x u_{i,j} + \delta_y u_{i,j} ) + f_{i,j} \nonumber \\ && + \beta \sum_{(x_{i_0},y^*) \in S_{{\rm irr}}} \left\{ C_xu_{i,j}+ \left[ {{u_t} \over {\beta} } \right] {{\tau_{x_{i_0}} (x_{i_0}-x^*)^2 \cos^2 \theta } \over {2h^2} } \right\} \label{3.11} \\ &&+ \beta \sum_{(x^*,y_{j_0}) \in S_{{\rm irr}}} \left\{ C_yu_{i,j}+ \left[ {{u_t} \over {\beta}} \right] {{\tau_{y_{j_0}} (y_{j_0}-y^*)^2 \sin^2 \theta } \over {2h^2} } \right\} +O(h), \nonumber \end{eqnarray} where $i_0 =i-1$ or $i+1$, $j_0=j-1$ or $j+1$, and $\tau_{x_{i_0}}$, $\tau_{y_{j_0}}=1$ or $-1$ according to (\ref{3.7})-(\ref{3.10}). We have \begin{equation} \left[ {{u_t} \over \beta } \right] = u^-_t \left[ { 1 \over \beta } \right] + { {[u_t]} \over {\beta^+} } = u^+_t \left[ { 1 \over \beta } \right] + { {[u_t]} \over {\beta^-} }. \label{3.12} \end{equation} Since $[u_t] = \omega_t$ by (\ref{2.2}), and the one-sided limits of $u_t$ at the interface differ from $(u_{i,j})_t$ by $O(h)$, (\ref{3.12}) implies \begin{equation} \left[ {{u_t} \over \beta } \right] = (u_{i,j})_t \left[ {1 \over \beta} \right] + { {\omega_t} \over {\widetilde \beta}} +O(h), \label{3.14} \end{equation} where $\widetilde \beta = \beta^-$ if $\beta = \beta^+$, $\widetilde \beta = \beta^+$ if $\beta = \beta^-$, and $\omega$ is defined in (\ref{2.2}).
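The two expressions above for $[u_t/\beta]$ are elementary rewritings of $u_t^+/\beta^+ - u_t^-/\beta^-$; a quick numerical check with arbitrary sample values (an illustration only) confirms the algebra:

```python
# Check [u_t/beta] = u_t^- [1/beta] + [u_t]/beta^+
#                  = u_t^+ [1/beta] + [u_t]/beta^-
bp, bm = 5.0, 1.0          # beta^+, beta^- (arbitrary positive samples)
utp, utm = 2.3, -0.7       # u_t^+, u_t^- (arbitrary samples)

lhs = utp / bp - utm / bm  # [u_t/beta]
jb = 1.0 / bp - 1.0 / bm   # [1/beta]
jut = utp - utm            # [u_t]
rhs1 = utm * jb + jut / bp
rhs2 = utp * jb + jut / bm
# lhs, rhs1, rhs2 agree up to rounding
```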
Using (\ref{3.14}), we have the following system of ordinary differential equations $$ (u_{i,j})_t = F(u_{i,j-1},u_{i-1,j},u_{i,j},u_{i+1,j},u_{i,j+1}) $$ where the right-hand side $F$ is equal to \begin{equation} { {\beta ( \delta_x u_{i,j} + \delta_y u_{i,j} + \sum_{(x_{i_0},y^*) \in S_{{\rm irr}}} \widetilde C_xu_{i,j} + \sum_{(x^*,y_{j_0}) \in S_{{\rm irr}}} \widetilde C_yu_{i,j}) + f_{i,j} } \over { 1- \sum_{(x_{i_0},y^*) \in S_{{\rm irr}}} \beta [ { 1 \over {\beta} } ] {{\tau_{x_{i_0}} (x_{i_0}-x^*)^2 \cos^2 \theta } \over {2h^2} } - \sum_{(x^*,y_{j_0}) \in S_{{\rm irr}}} \beta [ { 1 \over {\beta}} ] {{\tau_{y_{j_0}} (y_{j_0}-y^*)^2 \sin^2 \theta } \over {2h^2} } } }, \label{3.15} \end{equation} where $$\widetilde C_x u_{ij} = C_x u_{ij} + {{ \omega_t} \over {\widetilde \beta}} {{\tau_{x_{i_0}} (x_{i_0}-x^*)^2 \cos^2 \theta } \over {2h^2} }$$ and $$\widetilde C_y u_{ij} = C_y u_{ij} + {{ \omega_t} \over {\widetilde \beta}} {{\tau_{y_{j_0}} (y_{j_0}-y^*)^2 \sin^2 \theta } \over {2h^2} }. $$ \vskip 0.2cm At a regular grid point, the local truncation error of the right-hand side of (\ref{3.15}) from $ \beta u_{xx} + \beta u_{yy} + f $ is $O(h^2)$. In the next section, we will show that at an irregular grid point, the local truncation error is $O(h)$. Finally, we discretize time $t$ by choosing $\Delta t = h$ and use the Crank--Nicolson method: \begin{eqnarray} u_{i,j,k+1} &=& u_{i,j,k} + \Delta t ( 0.5 F(u_{i,j-1,k},u_{i-1,j,k},u_{i,j,k},u_{i+1,j,k},u_{i,j+1,k}) \label{3.16}\\ &&+ 0.5 F(u_{i,j-1,k+1},u_{i-1,j,k+1},u_{i,j,k+1}, u_{i+1,j,k+1},u_{i,j+1,k+1})), \nonumber \end{eqnarray} so the local truncation error for the discretization in $t$ is $O((\Delta t)^2)$, \cite{G1,M1}. To solve for $u_{i,j,k+1}$ from (\ref{3.16}), we use S.O.R. iteration with a certain parameter $\lambda$.
Set $$ u^{(0)}_{i,j,k+1} = u_{i,j,k},$$ and for $n=1,2,\dots$, set \begin{eqnarray} u^{(n+1)}_{i,j,k+1}&=& (1- \lambda ) u^{(n)}_{i,j,k+1}+ \lambda ( u_{i,j,k} \nonumber \\ && +0.5F(u_{i,j-1,k},u_{i-1,j,k},u_{i,j,k},u_{i+1,j,k},u_{i,j+1,k}) \label{3.18} \\ && + 0.5F(u^{(n+1)}_{i,j-1,k+1},u^{(n+1)}_{i-1,j,k+1}, u^{(n)}_{i,j,k+1},u^{(n)}_{i+1,j,k+1},u^{(n)}_{i,j+1,k+1}) )\,.\nonumber \end{eqnarray} \section{Accuracy Analysis} We first show that the denominator of the right-hand side of (\ref{3.15}) is bounded below and above by positive constants. Let \begin{eqnarray} D_{i,j} &\equiv& 1- \sum_{(x_{i_0},y^*) \in S_{{\rm irr}}} \beta \left[ { 1 \over {\beta} } \right] {{\tau_{x_{i_0}} (x_{i_0}-x^*)^2 \cos^2 \theta } \over {2h^2} } \label{4.1} \\ &&- \sum_{(x^*,y_{j_0}) \in S_{{\rm irr}}} \beta \left[ { 1 \over {\beta}} \right] {{\tau_{y_{j_0}} (y_{j_0}-y^*)^2 \sin^2 \theta } \over {2h^2} } \nonumber \end{eqnarray} and \begin{eqnarray} E_{i,j} &\equiv& 1+ \sum_{(x_{i_0},y^*) \in S_{{\rm irr}}} \left| \beta \left[ { 1 \over {\beta} } \right] {{\tau_{x_{i_0}} (x_{i_0}-x^*)^2 \cos^2 \theta } \over {2h^2} }\right| \label{4.2} \\ &&+ \sum_{(x^*,y_{j_0}) \in S_{{\rm irr}}} \left| \beta \left[ { 1 \over {\beta}} \right] {{\tau_{y_{j_0}} (y_{j_0}-y^*)^2 \sin^2 \theta } \over {2h^2} } \right|. \nonumber \end{eqnarray} \begin{lemma} %Lemma 4.1 Let $D_{i,j}$ and $E_{i,j}$ be as defined above. Then \begin{equation} D_{i,j} \ge { {\min(\beta^+, \beta^-)} \over {\max(\beta^+, \beta^-) } } \quad {\rm and} \quad E_{i,j} \le 1+\max(\beta^+, \beta^-) \left| \left[ {1 \over \beta} \right] \right|, \label{4.3} \end{equation} where $i_0 =i-1$ or $i+1$, $j_0=j-1$ or $j+1$, and $\tau_{x_{i_0}}$, $\tau_{y_{j_0}}=1$ or $-1$ according to (\ref{3.7})-(\ref{3.10}). \end{lemma} \paragraph{Proof:} First we prove the lower bound. Consider the first summation; in the $x$-direction, there are four cases. \begin{description} \item{(a)} $ (x_i,y_j) \in \Omega^-$, $(x_{i+1}, y_j) \in \Omega^+ $.
Then $x_{i_0}=x_{i+1}$, $\tau_{x_{i_0}} = -1$, $ \beta = \beta^-$, and $ \beta \left[ {1 \over \beta} \right] \tau_{x_{i_0}} = 1-{ {\beta^-} \over {\beta^+} }$ which is positive iff $\beta^- < \beta^+$. \item{(b)} $(x_i,y_j) \in \Omega^+$, $(x_{i-1}, y_j) \in \Omega^-$. Then $x_{i_0}=x_{i-1}$, $\tau_{x_{i_0}}= 1$, $\beta = \beta^+$, and $ \beta \left[{1 \over \beta} \right] \tau_{x_{i_0}} = 1-{ {\beta^+} \over {\beta^-} }$ which is positive iff $\beta^+ < \beta^-$. \item{(c)} $(x_i,y_j) \in \Omega^+$, $(x_{i+1}, y_j) \in \Omega^-$. Then $x_{i_0}=x_{i+1}$, $\tau_{x_{i_0}} = 1$, $\beta = \beta^+$, and $ \beta \left[ {1 \over \beta} \right] \tau_{x_{i_0}} = 1-{ {\beta^+} \over {\beta^-} }$ which is positive iff $\beta^+ < \beta^-$. \item{(d)} $ (x_i,y_j) \in \Omega^-$, $(x_{i-1}, y_j) \in \Omega^+ $. Then $x_{i_0}=x_{i-1}$, $\tau_{x_{i_0}} = -1$, $ \beta = \beta^-$, and $ \beta \left[ {1 \over \beta} \right] \tau_{x_{i_0}} = 1-{ {\beta^-} \over {\beta^+} }$ which is positive iff $\beta^- < \beta^+$. \end{description} \noindent In all cases, a term in the first summation is positive iff $\beta = \min \{ \beta^+, \beta^- \}$. Similarly, a term in the second summation is positive iff $\beta = \min \{ \beta^+, \beta^- \}$. Only the positive terms can reduce $D_{i,j}$ below 1. Since $(x_{i_0}-x^*)^2 \le h^2$ and $\cos^2 \theta \le 1$, each positive term in the first summation is at most $0.5 \big( 1- \min(\beta^+,\beta^-)/\max(\beta^+,\beta^-) \big)$, and likewise for the second summation. Hence $$ D_{i,j} \ge 1 - 0.5 \Big( 1- { {\min(\beta^+,\beta^-)} \over {\max(\beta^+,\beta^-)} } \Big) -0.5 \Big( 1- { {\min(\beta^+,\beta^-)} \over {\max(\beta^+,\beta^-)} } \Big) ={ {\min(\beta^+,\beta^-)} \over {\max(\beta^+,\beta^-)} }.$$ The upper bound follows directly from (\ref{4.2}): $$E_{i,j} \le 1 + 0.5 \beta \left| \left[ { 1 \over \beta} \right] \right| + 0.5 \beta \left| \left[ { 1 \over \beta} \right] \right| \le 1+ \max ( \beta^+, \beta^- ) \left| \left[ { 1 \over \beta} \right] \right| .
$$ The lower bound (\ref{4.3}) is useful for the stability of the numerical scheme and the upper bound (\ref{4.3}) will be used in the proof of the following theorem. \begin{theorem} % Theorem 4.2. At irregular grid points $(x_i,y_j)$ in $\Omega$, \begin{equation} F(u_{i,j-1},u_{i-1,j},u_{i,j},u_{i+1,j},u_{i,j+1}) - ( \beta u_{xx} + \beta u_{yy} + f )(x_i,y_j) = O(h)\,. \label{4.5} \end{equation} \end{theorem} \paragraph{Proof:} From (\ref{3.11}), (\ref{3.14}), (\ref{3.15}), and (\ref{4.3}), we have \begin{eqnarray*} \lefteqn{F(u_{i,j-1},u_{i-1,j},u_{i,j},u_{i+1,j},u_{i,j+1}) = (u_{i,j})_t }\\ &=& \beta \Big( \delta_x u_{i,j} + \delta_y u_{i,j} + \sum_{(x_{i_0},y^*) \in S_{{\rm irr}}} \widetilde C_xu_{i,j} + \sum_{(x^*,y_{j_0}) \in S_{{\rm irr}}} \widetilde C_yu_{i,j}\Big) \\ && + f_{i,j} + (u_{i,j})_t \Big\{ \sum_{(x_{i_0},y^*) \in S_{{\rm irr}}} \beta \left[ { 1 \over {\beta} } \right] {{\tau_{x_{i_0}} (x_{i_0}-x^*)^2 \cos^2 \theta } \over {2h^2} } \\ && + \sum_{(x^*,y_{j_0}) \in S_{{\rm irr}}} \beta \left[ { 1 \over {\beta}} \right] {{\tau_{y_{j_0}} (y_{j_0}-y^*)^2 \sin^2 \theta } \over {2h^2} } \Big\} \\ &=& \beta ( \delta_x u_{i,j} + \delta_y u_{i,j} ) + f_{i,j} \\ && + \beta \sum_{(x_{i_0},y^*) \in S_{{\rm irr}}} \Big\{ C_xu_{i,j}+ \left[ {{u_t} \over {\beta} } \right] {{\tau_{x_{i_0}} (x_{i_0}-x^*)^2 \cos^2 \theta } \over {2h^2} } \Big\} \\ && + \beta \sum_{(x^*,y_{j_0}) \in S_{{\rm irr}}} \Big\{ C_yu_{i,j}+ \left[ {{u_t} \over {\beta}} \right] {{\tau_{y_{j_0}} (y_{j_0}-y^*)^2 \sin^2 \theta } \over {2h^2} } \Big\} +O(h) \\ &=& ( \beta u_{xx} + \beta u_{yy} + f )(x_i,y_j) + O(h), \end{eqnarray*} which proves (\ref{4.5}).\bigskip At a regular grid point, the local truncation error is $O(h^2)$. At an irregular grid point, the local truncation error is $O(h)$ by Theorem 4.2. The local truncation error from the discretization of time is $O((\Delta t)^2)=O(h^2)$. All these imply that the numerical solution has global error $O(h^2)$. 
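As a numerical sanity check of the Lemma (ours, not part of the paper), one can sample the four cases at random: each summand of (\ref{4.1}) has the form $g r / 2$ with $g = \beta [1/\beta] \tau \in \{1-\beta^-/\beta^+,\; 1-\beta^+/\beta^-\}$ and $0 \le r \le 1$, and the sampled $D_{i,j}$ indeed never falls below $\min(\beta^+,\beta^-)/\max(\beta^+,\beta^-)$:

```python
import math
import random

def sampled_min_D(bp, bm, trials=5000, seed=0):
    """Monte-Carlo sanity check of the Lemma's lower bound (our
    illustration): each summand of (4.1) equals g * r / 2 with
    g = beta*[1/beta]*tau in {1 - bm/bp, 1 - bp/bm} (the four cases)
    and r = (dist/h)^2 * cos^2(theta) (or sin^2(theta)) in [0, 1].
    Returns the smallest D_{i,j} seen over random samples."""
    rng = random.Random(seed)
    gs = [1.0 - bm / bp, 1.0 - bp / bm]
    worst = float("inf")
    for _ in range(trials):
        tx = rng.uniform(0.0, 2.0 * math.pi)
        ty = rng.uniform(0.0, 2.0 * math.pi)
        rx = rng.uniform(0.0, 1.0) * math.cos(tx) ** 2
        ry = rng.uniform(0.0, 1.0) * math.sin(ty) ** 2
        d = 1.0 - rng.choice(gs) * rx / 2.0 - rng.choice(gs) * ry / 2.0
        worst = min(worst, d)
    return worst
```

With $\beta^+ = \beta^-$ both values of $g$ vanish and $D_{i,j} \equiv 1$, consistent with the classical constant-coefficient scheme.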
\section{Numerical Examples} We choose examples in which the exact solutions are known, so that numerical errors can be computed to confirm the theoretical result of our method. We choose $$\displaylines{ \Omega = (-1,1) \times (-1,1), \quad t \in [1, \infty ),\cr \Omega^+ = \{ (x,y)\in \Omega \; | \; x^2+y^2 > 1/4 \}, \cr \Omega^- = \{ (x,y)\in \Omega \; | \; x^2+y^2 < 1/4 \},\cr \Gamma = \{ (x,y)\in \Omega \; | \; x^2+y^2 = 1/4 \} . }$$ The exact solution for $f=0$ is \begin{equation} u(x,y,t)={1 \over t} e^{-(x^2+y^2)/(4 \beta t)}, \label{5.6} \end{equation} where $\beta$ takes the value $\beta^+$ in $\Omega^+$ and $\beta^-$ in $\Omega^-$; the jump functions $\omega$ and $g$ in (\ref{1.3}), (\ref{1.4}) are computed from (\ref{5.6}). We give the initial condition at $t=1$, and the Dirichlet boundary condition where $x$ or $y$ equals $1$ or $-1$. We choose $\lambda = 1.75$ in (\ref{3.18}). For different pairs $(\beta^+, \beta^-)$, with $t$ running from 1.0 to 1.5, we obtain the data shown in Table 1. \begin{table} \label{tbl1} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $h$ & $\beta^+$ & $\beta^-$& error & ratio \\ \hline 0.100 & 1000 & 1 & 2.227782D-04 & -- \\ 0.050 & 1000 & 1 & 5.391984D-05 & 4.13 \\ 0.025 & 1000 & 1 & 1.307318D-05 & 4.12 \\ \hline 0.100 & 1 & 1000 & 2.392997D-05 & -- \\ 0.050 & 1 & 1000 & 6.703319D-06 & 3.57 \\ 0.025 & 1 & 1000 & 1.766744D-06 & 3.79 \\ \hline 0.100 & 5 & 1 & 2.629529D-04 & -- \\ 0.050 & 5 & 1 & 6.351060D-05 & 4.14 \\ 0.025 & 5 & 1 & 1.550294D-05 & 4.10 \\ \hline 0.100 & 1 & 5 & 5.461059D-05 & -- \\ 0.050 & 1 & 5 & 1.309861D-05 & 4.17 \\ 0.025 & 1 & 5 & 3.263757D-06 & 4.01 \\ \hline \end{tabular} \end{center} \caption{Accuracy of computations} \end{table} The error is computed using the infinity norm. The table shows that when $h$ is halved, the error becomes approximately a quarter of its previous value, which confirms that the numerical solution has second order accuracy. \paragraph{Acknowledgements.} The second author wants to thank Professor H. T. Banks for his advice and encouragement during his visit to the Center for Research in Scientific Computation.
The second author also wants to thank Professor X.-B. Lin for his hospitality and suggestions. The first author was partially supported by NSF grant DMS-96-26703, a North Carolina State University FR\&PD grant, and the ARO under grant number 39676-MA. \begin{thebibliography}{0} \bibitem{G1} N. Gershenfeld, The Nature of Mathematical Modeling, Cambridge Univ. Press, Cambridge, 1999. \bibitem{L1} R. J. LeVeque and Z. Li, The immersed interface method for elliptic equations with discontinuous coefficients and singular sources, {\it SIAM J. Numer. Anal.} 31 (1994), 1019--1044. \bibitem{L2} Z. Li and A. Mayo, ADI method for heat equations with discontinuities along an arbitrary interface, in Proc. Symp. Appl. Math. Vol. 48 (W. Gautschi, ed.), AMS, 311--315, 1993. \bibitem{M1} K. W. Morton and D. F. Mayers, Numerical Solution of Partial Differential Equations, Cambridge Univ. Press, Cambridge, 1994. \end{thebibliography} \noindent{\sc Zhilin Li } \\ Center for Research in Scientific Computation \\ and Department of Mathematics \\ North Carolina State University \\ Raleigh, NC 27695-8205, USA \medskip \noindent{\sc Yun-Qiu Shen}\\ Department of Mathematics, Western Washington University \\ Bellingham, WA 98225-9063, USA \\ e-mail: yqshen@cc.wwu.edu \end{document}