\documentclass[twoside]{article}
\usepackage{amsfonts, amsmath, amsthm, amssymb}
\pagestyle{myheadings}
\markboth{\hfil Noncommutative operational calculus \hfil}%
{\hfil Henry E. Heatherly \& Jason P. Huffman \hfil}
\begin{document}
\setcounter{page}{11}
\title{\vspace{-1in}\parbox{\linewidth}{\footnotesize\noindent
{\sc 15th Annual Conference of Applied Mathematics, Univ. of Central Oklahoma}, \newline
Electronic Journal of Differential Equations, Conference~02, 1999, pp. 11--18. \newline
ISSN: 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu \newline
ftp ejde.math.swt.edu (login: ftp)}
\vspace{\bigskipamount} \\
Noncommutative operational calculus
\thanks{ {\em 1991 Mathematics Subject Classifications:} 44A40, 45D05, 34A12, 16S60.
\hfil\break\indent
{\em Key words and phrases:} convolution, Mikusi\'{n}ski, Volterra integral equations,
\hfil\break\indent
operational calculus, linear operators.
\hfil\break\indent
\copyright 1999 Southwest Texas State University and University of North Texas.
\hfil\break\indent
Published November 24, 1999.} }
\date{}
\author{Henry E. Heatherly \& Jason P. Huffman}
\maketitle

\begin{abstract}
Oliver Heaviside's operational calculus was placed on a rigorous mathematical basis by Jan Mikusi\'{n}ski, who constructed an algebraic setting for the operational methods. In this paper, we generalize Mikusi\'{n}ski's methods to solve linear ordinary differential equations in which the unknown is a matrix- or linear operator-valued function. Because these functions can be zero-divisors and do not necessarily commute, Mikusi\'{n}ski's one-dimensional calculus cannot be used. The noncommutative operational calculus developed here, however, is used to solve a wide class of such equations. In addition, we provide new proofs of existence and uniqueness theorems for certain matrix- and operator-valued Volterra integral and integro-differential equations. Several examples are given which demonstrate these new methods.
\end{abstract}

\newtheorem{example}{Example}[section]
\newtheorem{proposition}[example]{Proposition}
\newtheorem{lemma}[example]{Lemma}

\section{Introduction}

Let $\mathfrak{M}$ be the linear space of all continuous, complex-valued functions defined on $[0, \infty)$. Taken with the Duhamel convolution operation, $(f*g)(t)=\int_0^t f(t-\tau) g(\tau)\,d\tau$, $\mathfrak{M}$ is a commutative, associative algebra over $\mathbb{C}$. (We use $\mathbb{C}$ for the field of complex numbers and $\mathbb{N}$ for the set of natural numbers.) To put Heaviside's operational calculus on a rigorous mathematical basis, Mikusi\'{n}ski considered the quotient field of $\mathfrak{M}$, consisting of all the (equivalence classes of) fractions $\frac{f}{g}$, where $f,g \in \mathfrak{M}$ and $g \neq 0$, denoted here by $Q(\mathfrak{M})$. These fractions, which can be thought of as generalized functions, are called Mikusi\'{n}ski operators and are the basis for the operational calculus. In the field $Q(\mathfrak{M})$ there exist an integral operator, namely the Heaviside unit function $H$, given by $H(t)=1$ for all $t$, and a differential operator, $s=\frac{\delta}{H}$, where $\delta$ is the unity in $Q(\mathfrak{M})$. Thus, for a function $f \in \mathfrak{M}$, $H*f=\int_0^t f(\tau)\,d\tau$, and if $f$ is continuously differentiable, then $s*f=f'+f(0)$. (Note that $s*a$ is well-defined for all $a \in Q(\mathfrak{M})$, but the resulting product need not be a continuous function.)
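To illustrate how operators in $Q(\mathfrak{M})$ are identified with continuous functions, consider the exponential function, a standard identification in the one-dimensional calculus \cite{mik}. For $f(t)=e^{\lambda t}$ with $\lambda \in \mathbb{C}$, the formula $s*f=f'+f(0)$ gives $s*f=\lambda f+\delta$, so that
\[
(s-\lambda )*f=\delta , \quad \mbox{i.e.,} \quad f=\frac{\delta }{s-\lambda }\,.
\]
Identities of this kind are used below to pass between rational expressions in $s$ and continuous functions of $t$.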
Using the derivative formula $s*f=f'+f(0)$, Mikusi\'{n}ski developed algebraic expressions for the $n$-th derivatives of a function, which allowed the transformation of certain differential equations into algebraic equations. In this paper, we expand on Mikusi\'{n}ski's methods to solve linear ordinary differential equations in which the coefficients and the unknown function are matrix- or operator-valued. We use $M_n[X]$ to denote the $n$-by-$n$ matrices over a set $X$. Consider a matrix-valued function $F:[0, \infty) \rightarrow M_n[\mathbb{C}]$, continuous in each entry. We may identify $F$ with a matrix of complex-valued functions, so that $F=[f_{ij}]$, where $f_{ij} \in \mathfrak{M}$ for all $i,j$, and then $F(t)=[f_{ij}(t)]$ for all $t$. Thus, we consider the linear space of all such functions (for a fixed $n$), denoted $M_n[\mathfrak{M}]$, and define the convolution of two matrix-valued functions as follows:
\[
(F*G)(t)=\int_0^t F(t-\tau)G(\tau)\,d\tau=[f_{ij}][g_{ij}],
\]
where $F=[f_{ij}]$, $G=[g_{ij}]$, and the right-hand side denotes matrix multiplication with juxtaposition in each entry taken as the Duhamel convolution. Thus, $M_n[\mathfrak{M}]$ is an associative $\mathbb{C}$-algebra. Two difficulties arise in $M_n[\mathfrak{M}]$ which are not present in $\mathfrak{M}$: the functions in $M_n[\mathfrak{M}]$ do not necessarily commute with each other, and there exist nonzero \textit{zero-divisors}, i.e., nonzero elements whose product is zero. However, we are able to overcome these difficulties and develop a noncommutative operational calculus which generalizes Mikusi\'{n}ski's methods and solves a broad class of equations.

\section{A Matrix Operational Calculus}

It is easy to see that the algebra $M_n[\mathfrak{M}]$ embeds as a sub-algebra of the $n$-by-$n$ matrices over the Mikusi\'{n}ski operators, $M_n[Q(\mathfrak{M})]$. Because of the two difficulties mentioned above, there is no field of fractions for the algebra $M_n[Q(\mathfrak{M})]$. However, a well-behaved subset of $M_n[Q(\mathfrak{M})]$ can be used to construct a ring of fractions in a similar way. Let $\Delta_n$ be the set of all matrices of operators whose entries along the main diagonal are all nonzero and whose other entries are zero. We use the notation $\operatorname{Diag}(a_1, a_2, \dots , a_n)$ for such a matrix. Let $R=M_n[Q(\mathfrak{M})]$. Because $\Delta_n$ forms a \textit{denominator set} for $R$, we may form, in the standard way, the ring of quotients $R \Delta_n^{-1}$, which consists of all fractions $\frac{a}{b}$ with $a \in R$ and $b \in \Delta_n$. (For more on general rings of quotients, see \cite[pp.~50--61]{ste}.) In this quotient ring $R \Delta_n^{-1}$ there exist an integral operator for matrix-valued functions, $H_n=\operatorname{Diag}(H,H,\dots,H)$, and a differential operator $S_n=\operatorname{Diag}(s,s,\dots,s)$. (Recall that $s=\frac{\delta}{H}$.) Thus, for $F \in M_n[\mathfrak{M}]$, $H_n*F=\int_0^t F(\tau)\,d\tau$, and if $F$ is continuously differentiable, then
\[
S_n*F=F'+F(0).
\]
Using this last equation repeatedly, we develop algebraic expressions for derivatives of any order of a function in $M_n[\mathfrak{M}]$ (assuming these derivatives exist). This enables us to solve differential equations in an algebraic setting, as the following examples illustrate. (Note: for a scalar $\alpha \in \mathbb{C}$ we write $\alpha$ for the scalar multiple $\alpha \cdot \delta_n$ of the unity element $\delta_n \in R \Delta_n^{-1}$, and we likewise identify a constant matrix with its image in $R \Delta_n^{-1}$.)
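For instance, if $F \in M_n[\mathfrak{M}]$ is twice continuously differentiable, then applying the formula $S_n*F=F'+F(0)$ to $F'$ and substituting $F'=S_n*F-F(0)$ gives
\[
S_n^2*F=F''+S_nF(0)+F'(0), \quad \mbox{so that} \quad F''=S_n^2*F-S_nF(0)-F'(0),
\]
which is the substitution used in the examples that follow.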
\begin{example} \label{mat}
Let $\alpha =\left[ \begin{array}{ll} \alpha _1 & \alpha _2 \\ \alpha _3 & \alpha _4 \end{array} \right]$ and $\beta =\left[ \begin{array}{ll} \beta _1 & \beta _2 \\ \beta _3 & \beta _4 \end{array} \right]$, and $A=\left[ \begin{array}{ll} a_1 & 0 \\ 0 & a_2 \end{array} \right]$, where $\alpha _i,\beta _i,a_i\in \mathbb{C}$. We solve the initial value problem $X''+AX=0$; $X(0)=\alpha$, $X'(0)=\beta$, where $X\in M_2[\mathfrak{M}]$.
\end{example}

\noindent {\bf Solution:} Using the operational calculus of this section, we make the substitution $X''=S_2^2X-S_2X(0)-X'(0)=S_2^2X-S_2\alpha -\beta$. We rewrite the equation as $(S_2^2+A)X=S_2\alpha +\beta$, or $X=(S_2^2+A)^{-1}(S_2\alpha +\beta )$. Now,
\begin{eqnarray*}
X &=&\left[ \begin{array}{ll} \frac{\delta }{H^2}+a_1 & 0 \\ 0 & \frac{\delta }{H^2}+a_2 \end{array} \right] ^{-1}\left[ \begin{array}{ll} \frac{\alpha _1}H+\beta _1 & \frac{\alpha _2}H+\beta _2 \\ \frac{\alpha _3}H+\beta _3 & \frac{\alpha _4}H+\beta _4 \end{array} \right] \\
&=&\left[ \begin{array}{ll} \frac{H^2}{\delta +a_1H^2} & 0 \\ 0 & \frac{H^2}{\delta +a_2H^2} \end{array} \right] \left[ \begin{array}{ll} \frac{\alpha _1}H+\beta _1 & \frac{\alpha _2}H+\beta _2 \\ \frac{\alpha _3}H+\beta _3 & \frac{\alpha _4}H+\beta _4 \end{array} \right] \\
&=&\left[ \begin{array}{ll} \frac{s\alpha _1+\beta _1}{s^2+a_1} & \frac{s\alpha _2+\beta _2}{s^2+a_1} \\ \frac{s\alpha _3+\beta _3}{s^2+a_2} & \frac{s\alpha _4+\beta _4}{s^2+a_2} \end{array} \right] \\
&=&\left[ \begin{array}{ll} \alpha _1\cos \sqrt{a_1}\,t+\frac{\beta _1}{\sqrt{a_1}}\sin \sqrt{a_1}\,t & \alpha _2\cos \sqrt{a_1}\,t+\frac{\beta _2}{\sqrt{a_1}}\sin \sqrt{a_1}\,t \\ \alpha _3\cos \sqrt{a_2}\,t+\frac{\beta _3}{\sqrt{a_2}}\sin \sqrt{a_2}\,t & \alpha _4\cos \sqrt{a_2}\,t+\frac{\beta _4}{\sqrt{a_2}}\sin \sqrt{a_2}\,t \end{array} \right] .
\end{eqnarray*}
This last step is obtained by identifying the rational expressions in $s$ with continuous functions of $t$ (assuming here that $a_1,a_2\neq 0$; if $a_i=0$, the entries of the $i$-th row reduce to $\alpha _j+\beta _jt$). This process is similar to that used by Mikusi\'{n}ski in the one-dimensional operational calculus; for more on this, see \cite[pp.~30--40]{mik}. \hfill$\square$ \medskip
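The identifications used in this last step can be derived directly from the formula $s*f=f'+f(0)$. For $a\neq 0$, put $c(t)=\cos \sqrt{a}\,t$ and $g(t)=\frac 1{\sqrt{a}}\sin \sqrt{a}\,t$. Then $s*g=g'+g(0)=c$ and $s*c=c'+c(0)=-a\,g+\delta$, whence
\[
(s^2+a)*g=\delta , \qquad \frac{\delta }{s^2+a}=g, \qquad \frac s{s^2+a}=s*g=c\,.
\]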
Generalized from the two-dimensional case to the $n$-dimensional case, this matrix operational calculus is well-suited to solving linear, matrix-valued ordinary differential equations whose coefficients are diagonal matrices over $\mathbb{C}$. To expand this method further, we broaden the class of coefficient matrices. Recall that a matrix $A$ is said to be {\it diagonalizable} (by a similarity transformation) if there exists an invertible matrix $P$ such that $P^{-1}AP$ is a diagonal matrix.

\begin{example} \label{diag}
Let $\alpha =\left[ \begin{array}{ll} \alpha _1 & \alpha _2 \\ \alpha _3 & \alpha _4 \end{array} \right]$ and $\beta =\left[ \begin{array}{ll} \beta _1 & \beta _2 \\ \beta _3 & \beta _4 \end{array} \right]$, and $A=\left[ \begin{array}{ll} a_1 & a_2 \\ a_3 & a_4 \end{array} \right]$, where $\alpha _i,\beta _i,a_i\in \mathbb{C}$ and $A$ is diagonalizable. We solve the initial value problem $X''+AX=F$; $X(0)=\alpha$, $X'(0)=\beta$, where $X,F\in M_2[\mathfrak{M}]$.
\end{example}

\noindent {\bf Solution:} There is an invertible matrix $P$ such that $P^{-1}AP=D$, where $D$ is a diagonal matrix over $\mathbb{C}$. Then, letting $Y=P^{-1}XP$, we rewrite the equation as follows:
\[
(PYP^{-1})''+A(PYP^{-1})=F.
\]
Since $P$ is a constant matrix, the derivatives pass inside, and substituting $A=PDP^{-1}$ gives $PY''P^{-1}+PDYP^{-1}=F$. Multiplying on the left by $P^{-1}$ and on the right by $P$, we get $Y''+DY=P^{-1}FP$. We solve this initial value problem (with the modified initial conditions $Y(0)=P^{-1}\alpha P$ and $Y'(0)=P^{-1}\beta P$) by the method of Example \ref{mat}. The unique solution is then $X=PYP^{-1}$, where $Y$ is the solution to the latter initial value problem. \hfill $\square$ \medskip

Again, generalizing this method to $n$ dimensions, the matrix operational calculus is suitable for linear matrix-valued O.D.E.'s in which the coefficients are diagonalizable matrices. The Mikusi\'{n}ski one-dimensional operational calculus is not particularly well-suited to handle linear O.D.E.'s with variable coefficients. One reason for this ``defect'' is that the Duhamel convolution operation does not interact well with pointwise multiplication of functions. Because the matrix operational calculus presented here is a direct generalization of Mikusi\'{n}ski's methods, a similar limitation occurs.

\section{Volterra Integral and Integro-Differential Equations}

In this section, we give new proofs of existence and uniqueness theorems for matrix-valued linear Volterra integral and integro-differential equations. These existence and uniqueness theorems are known, e.g., see \cite[p.~42]{gri}. However, the proofs provided here yield the results immediately in the algebraic setting, and we avoid using any iterative methods.

Some brief comments on background algebraic ideas are helpful at this point. Let $A$ be a linear associative algebra over $\mathbb{C}$. An element $y \in A$ is said to be \textit{quasi-regular} if there exists $\hat{y} \in A$ such that $y+\hat{y}+y\hat{y}=0$. The element $\hat{y}$ is uniquely determined by $y$ and is called the \textit{quasi-inverse} of $y$. If every element in $A$ is quasi-regular, we write $\mathfrak{J}(A)=A$ and call $A$ a \textit{Jacobson radical algebra}. It is well known that if $\mathfrak{J}(A)=A$, then $\mathfrak{J}(M_n[A])=M_n[\mathfrak{J}(A)]$, \cite[p.~140]{sza}. Highly pertinent to the development herein is that $\mathfrak{M}$ is a Jacobson radical algebra (for a proof, see \cite[p.~195]{huf}). Consequently, $\mathfrak{J}(M_n[ \mathfrak{M} ])= M_n[ \mathfrak{M} ]$.

\begin{proposition} \label{volt}
Let $K,F\in M_n[\mathfrak{M}]$. Then the matrix-valued integral equation
\[
X+\int_0^tK(t-\tau )X(\tau )\,d\tau =F
\]
has a unique, continuous solution $X \in M_n[\mathfrak{M}]$.
\end{proposition}

\begin{proof}
Since $M_n[ \mathfrak{M} ]$ is a Jacobson radical algebra, there is a unique $\hat{K} \in M_n[ \mathfrak{M}]$ such that $K+\hat{K}= -K*\hat{K}$. Then a routine calculation shows that $X=F+\hat{K}*F$ satisfies $X+K*X=F$: indeed, $X+K*X=F+(\hat{K}+K+K*\hat{K})*F=F$. (Since $M_n[\mathfrak{M}]$ is noncommutative, the order of the factors matters here.) The uniqueness of $\hat{K}$ guarantees the uniqueness of this solution.
\end{proof}

It is worth noting that in the above proof $\hat{K}$ plays the role of the \textit{resolvent} function in the theory of integral equations, \cite[p.~44]{gri}.
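As a one-dimensional illustration of the quasi-inverse as resolvent, take the kernel $K=H$, so that the integral equation is $x+\int_0^t x(\tau )\,d\tau =f$. From $H+\hat{H}+H*\hat{H}=0$ and $H=\frac{\delta }{s}$ we obtain
\[
\hat{H}=\frac{-H}{\delta +H}=\frac{-\delta }{s+1}\,,
\]
which is the operator of the continuous function $-e^{-t}$. The solution is therefore $x(t)=f(t)-\int_0^t e^{-(t-\tau )}f(\tau )\,d\tau$.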
\begin{proposition}
Let $K,F\in M_n[\mathfrak{M}]$ and let $A\in M_n[\mathbb{C}]$. Then the matrix-valued integro-differential equation
\begin{eqnarray*}
&X'+AX+\int_0^tK(t-\tau )X(\tau )\,d\tau =F& \\
&X(0) =X_0&
\end{eqnarray*}
has a unique, continuous solution $X \in M_n[ \mathfrak{M} ]$.
\end{proposition}

\begin{proof}
Using $X'=S_n*X-X(0)$, and working in the ring of quotients $R\Delta_n^{-1}$ of Section 2, we can rewrite the integro-differential equation as
\[
S_n*X-X_0+AX+K*X=F.
\]
Multiplying both sides of the equation by $H_n$, we have
\[
X-H_nX_0+H_n*AX+H_n*K*X=H_n*F.
\]
Since $H_n$ is in the center of $M_n[\mathfrak{M}]$, we have $H_nX_0=X_0H_n$ and $H_n*AX=AH_n*X$. Hence, the equation becomes $X+AH_n*X+H_n*K*X=H_n*F+X_0H_n$, and then
\[
X+[AH_n+H_n*K]*X=[H_n*F+X_0H_n].
\]
By Proposition \ref{volt} this last equation has a unique solution in $M_n[ \mathfrak{M} ]$, which is given by
\[
X=P+\hat{Q}*P,
\]
where $P=[H_n*F+X_0H_n]$ and $Q=[AH_n+H_n*K]$.
\end{proof}

\section{Equations with Operator-Valued Functions}

In this section we consider equations whose coefficients and unknowns can be bounded linear operators on a separable Hilbert space. We will make use of the well-known fact that if $\Omega$ is a separable Hilbert space with an orthonormal basis $\{e_k\}_{k=1}^\infty$, then a linear operator $A$, defined everywhere on $\Omega$, is bounded if and only if there exists a (unique) representation of $A$ as an infinite matrix $[\alpha _{ij}]_{i,j=1}^\infty $ with respect to the basis $\{e_k\}$. (For a proof of this, see \cite[p.~49]{akh}.) Thus, for any such bounded linear operator, we have the representation
\[
A=\left[ \begin{array}{lll}
a_{11} & a_{12} & \cdots \\
a_{21} & a_{22} & \\
\vdots & & \ddots
\end{array} \right] =[a_{ij}].
\]
Since $\sum_{i=1}^\infty \left| a_{ik}\right| ^2<\infty$ for each $k\in \mathbb{N}$, matrix multiplication is well defined (i.e., the pertinent series all converge). With matrix multiplication, pointwise addition, and multiplication by a complex scalar, the collection of all such bounded linear operators, here denoted $B_\infty$, is a $\mathbb{C}$-algebra.

To develop an operational calculus, we again identify the elements of our space with equivalent elements of a more amenable space (as with the matrix-valued functions in Section 2). Here, we identify a bounded linear operator $A=[\alpha _{ij}]$ with the infinite matrix $[f_{ij}]$ of constant one-dimensional functions, $f_{ij} \in \mathfrak{M}$ with $f_{ij}(t)=\alpha _{ij}$ for all $t$. The collection of all (countably) infinite matrices of functions from $\mathfrak{M}$, which properly contains the image of $B_\infty$ under this identification, will be denoted $M_\infty [\mathfrak{M}]$. Next, we embed the set $M_\infty [\mathfrak{M}]$ into the set of all infinite matrices of Mikusi\'{n}ski operators, i.e., $M_\infty [Q(\mathfrak{M})]$. It is clear that, with entrywise addition, $M_\infty [\mathfrak{M}]$ embeds into $M_\infty [Q( \mathfrak{M})]$ as an abelian group. A fundamental difficulty here is that, unlike $B_\infty$, the sets $M_\infty [\mathfrak{M}]$ and $M_\infty [Q( \mathfrak{M})]$ are not $\mathbb{C}$-algebras under matrix multiplication. This is because each entry in a product matrix $C=AB$ is now an infinite sum of continuous functions or Mikusi\'{n}ski operators. The convergence of these sums is necessary for a well-defined multiplication, and it is not difficult to find examples for which such a sum of functions or operators does not converge, \cite[p.~372]{mik}.
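For a simple instance of this failure, let $A$ and $B$ be the infinite matrices each of whose entries is the Heaviside function $H$. Since $(H*H)(t)=t$, each entry of the would-be product $AB$ is the series
\[
\sum_{k=1}^\infty (H*H)(t)=\sum_{k=1}^\infty t ,
\]
which diverges for every $t>0$.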
We circumvent this difficulty by considering $M_\infty [Q( \mathfrak{M})]$ not as an algebra over the complex numbers, but rather as a module over a well-behaved subset of $M_\infty [Q( \mathfrak{M})]$. In this case, the ring of scalars will be the subset of diagonal matrices of operators,
\[
\Delta_\infty =\left\{\operatorname{Diag}\left(\tfrac{f_1}{g_1}, \tfrac{f_2}{g_2}, \tfrac{f_3}{g_3},\dots\right) : f_{i},g_{i}\in \mathfrak{M},\ f_{i}\neq 0,\ g_{i}\neq 0\right\}\subset M_\infty [Q( \mathfrak{M})].
\]
It is clear that matrix multiplication is well defined in $\Delta_\infty$, for if $A=\operatorname{Diag}(a_1,a_2,a_3,\dots)$ and $B=\operatorname{Diag}(b_1,b_2,b_3,\dots)$ are in $\Delta_\infty$, then $AB=\operatorname{Diag}( a_1b_1, a_2b_2, a_3b_3,\dots)$. Thus $(\Delta_\infty ,+,\cdot )$ is a commutative ring. Observe that the mapping $\Delta_\infty \times M_\infty [Q( \mathfrak{M})] \rightarrow M_\infty [Q( \mathfrak{M})]$ defined via
\[
\operatorname{Diag}(\alpha_1, \alpha_2, \alpha_3,\dots)\cdot[f_{ij}]=\left[ \begin{array}{lll}
\alpha _{1}f_{11} & \alpha _{1}f_{12} & \cdots \\
\alpha _{2}f_{21} & \alpha _{2}f_{22} & \\
\vdots & & \ddots
\end{array} \right]
\]
is a well-defined scalar multiplication, making $M_\infty [Q( \mathfrak{M})]$ a (left) $\Delta_\infty$-module. Note that infinite-dimensional integral and differential operators exist in $M_\infty [Q( \mathfrak{M})]$; they are denoted $H_\infty=\operatorname{Diag}(H,H,H,\dots)$ and $S_\infty=\operatorname{Diag}(s,s,s,\dots)$, respectively. Next, we give an example.

\begin{example}
Let $A$ and $B$ be bounded linear operators on a separable Hilbert space $\Omega$ such that $A\in \Delta_\infty$. Then, with respect to an orthonormal basis $\{e_k\}$, we have the matrix representations $A=\operatorname{Diag}(a_1, a_2, a_3,\dots)$, $B=[b_{ij}]$. We solve the initial value problem $X''+AX=B$; $X(0)=\alpha$, $X'(0)=\beta$, where the unknown $X(t)$ is a bounded linear operator on $\Omega$ for each $t$, and $\alpha$ and $\beta$ are (infinite) coefficient matrices.
\end{example}

\noindent {\bf Solution:} Using the operational calculus, we make the substitution $X''=S_\infty ^2X-S_\infty X(0)-X'(0)=S_\infty ^2X-S_\infty \alpha -\beta$. Then we rewrite the equation as $(S_\infty ^2+A)X=S_\infty \alpha +\beta +B$, or $X=(S_\infty ^2+A)^{-1}(S_\infty \alpha +\beta +B)$. We easily calculate $(S_\infty ^2+A)^{-1}$ in $M_\infty [Q( \mathfrak{M})]$ via
\begin{eqnarray*}
(S_\infty ^2+A)^{-1}&=&\operatorname{Diag}(s^2+a_1, s^2+a_2, s^2+a_3,\dots)^{-1}\\
&=& \operatorname{Diag}\left( \frac \delta {s^2+a_1}, \frac \delta {s^2+a_2}, \frac \delta {s^2+a_3},\dots \right).
\end{eqnarray*}
Noting that $B$, regarded as an element of $M_\infty [\mathfrak{M}]$, has the constant functions $b_{ij}H$ as entries, this gives the explicit solution
\[
X=(S_\infty ^2+A)^{-1}(S_\infty \alpha +\beta +B)=\left[ \begin{array}{lll}
\frac{s\alpha _{11}+\beta _{11}+b_{11}H}{s^2+a_{1}} & \frac{s\alpha _{12}+\beta _{12}+b_{12}H}{s^2+a_{1}} & \cdots \\
\frac{s\alpha _{21}+\beta _{21}+b_{21}H}{s^2+a_{2}} & \frac{s\alpha _{22}+\beta _{22}+b_{22}H}{s^2+a_{2}} & \\
\vdots & & \ddots
\end{array} \right] .
\]
Finally, in a manner similar to the finite case, we identify this solution back in $M_\infty [\mathfrak{M}]$, where the entries are all continuous functions. \hfill$\square$ \medskip
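Carrying out this identification explicitly, and assuming each $a_i \neq 0$, the $(i,j)$ entry of the solution is the continuous function
\[
\alpha _{ij}\cos \sqrt{a_i}\,t+\frac{\beta _{ij}}{\sqrt{a_i}}\sin \sqrt{a_i}\,t+\frac{b_{ij}}{a_i}\left( 1-\cos \sqrt{a_i}\,t\right) ,
\]
where the last term comes from $\frac H{s^2+a_i}=\frac \delta {s(s^2+a_i)}$. If $a_i=0$, the corresponding entry is instead $\alpha _{ij}+\beta _{ij}t+\frac 12 b_{ij}t^2$.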
We solve similarly for linear ordinary differential equations in which the coefficient matrices are elements of $\Delta_\infty$. Next, following the general methods of Section 3, we give existence and uniqueness theorems for linear operator-valued Volterra integral and integro-differential equations.

\begin{proposition} \label{voltop}
Let $\Omega$ be a separable Hilbert space with orthonormal basis $\{e_k\}$. If $A$ and $B$ are bounded linear operators defined everywhere on $\Omega$ such that $A \in \Delta_\infty$, then the Volterra integral equation
\[
X+\int_0^tA(t-\tau )X(\tau )\,d\tau =B
\]
has a unique, bounded solution.
\end{proposition}

\begin{proof}
Observe that $X=B+\hat{A}*B$, where $\hat{A}$ denotes the quasi-inverse (i.e., the resolvent) of $A$, is a solution of the integral equation, since
\[
X+A*X=B+(\hat{A}+A+A*\hat{A})*B=B.
\]
We demonstrate that this operator $\hat{A}$ exists in $M_\infty [Q( \mathfrak{M})]$. Consider $A=\operatorname{Diag}(a_1,a_2,a_3,\dots) \in \Delta_\infty$. A routine calculation shows that $\operatorname{Diag}( \hat{a}_1, \hat{a}_2, \hat{a}_3,\dots)$, i.e., the infinite diagonal matrix whose entries are the quasi-inverses of the corresponding entries of $A$, is the quasi-inverse of $A$; each $\hat{a}_i$ exists because each diagonal entry of $A$, regarded as a constant function, lies in the Jacobson radical algebra $\mathfrak{M}$. Thus we have $\hat{A}= \operatorname{Diag}(\hat{a}_1, \hat{a}_2, \hat{a}_3,\dots) \in M_\infty [Q( \mathfrak{M})]$. Hence, $X=B+\hat{A}*B\in M_\infty [Q( \mathfrak{M})]$. That the solution $X$ is unique follows from the uniqueness of quasi-inverses.
\end{proof}

\begin{proposition}
Let $\Omega$ be a separable Hilbert space with orthonormal basis $\{e_k\}$. If $A,B$, and $C$ are bounded linear operators defined everywhere on $\Omega$ such that $A \in \Delta_\infty$, then the Volterra integro-differential equation
\begin{eqnarray*}
&X'+BX+\int_0^tA(t-\tau )X(\tau )\,d\tau =C &\\
&X(0) = X_0&
\end{eqnarray*}
has a unique, bounded solution.
\end{proposition}

\begin{proof}
Using the operational calculus, we rewrite the equation as $S_\infty X-X_0+BX+A*X=C$. Multiplying both sides by the infinite-dimensional integral operator $H_\infty$ yields $X-H_\infty X_0+H_\infty *BX+H_\infty *A*X=H_\infty *C$. Since $A$ and $H_\infty$ are in $\Delta_\infty$, we have $H_\infty X_0=X_0H_\infty$ and $H_\infty *BX=BH_\infty *X$. Thus, we rewrite the equation as $X+[BH_\infty +H_\infty *A]*X=[H_\infty *C+X_0H_\infty ]$. Therefore, by Proposition \ref{voltop}, there is a unique, bounded solution to the integro-differential equation. Letting $F=[H_\infty *C+X_0H_\infty ]$ and $K= [BH_\infty+H_\infty *A]$, this solution is given by
\[
X=F+\hat{K}*F.
\]
\end{proof}

\begin{thebibliography}{9}
\bibitem[Akh]{akh} Akhiezer, N.I., and I.M. Glazman, \textit{Theory of Linear Operators in Hilbert Space}, Frederick Ungar, (New York, 1961).
\bibitem[Gri]{gri} Gripenberg, G., S.O. Londen, and O. Staffans, \textit{Volterra Integral and Functional Equations}, Cambridge University Press, (New York, 1990).
\bibitem[Huf]{huf} Huffman, J. P., and H. E. Heatherly, ``An Algebraic Approach to the Existence and Uniqueness of Solutions to Volterra Linear Integral and Integro-Differential Equations,'' \textit{Proc. 14th Annual Conference of Applied Mathematics}, David P. Stapleton and David S. Bridge, Eds., University of Central Oklahoma, (Edmond, OK, 1998), 193--199.
\bibitem[Mik]{mik} Mikusi\'{n}ski, J., \textit{Operational Calculus}, Pergamon Press, (New York, 1959).
\bibitem[Ste]{ste} Stenstr\"{o}m, B., \textit{Rings of Quotients}, Springer-Verlag, (New York, 1975).
\bibitem[Sza]{sza} Sz\'{a}sz, F.A., \textit{Radicals of Rings}, John Wiley, (New York, 1981).
\end{thebibliography}

\bigskip
\noindent{\sc Henry E. Heatherly} \\
Department of Mathematics \\
University of Louisiana, Lafayette \\
Lafayette, LA 70504, USA \\
e-mail: heh5820@usl.edu

\medskip
\noindent{\sc Jason P. Huffman} \\
Department of Mathematical, Computing, and Information Sciences \\
Jacksonville State University \\
Jacksonville, AL 36265, USA \\
e-mail: jhuffman@jsucc.jsu.edu

\end{document}