\documentclass[12pt]{article}
%Sets size of page and margins
\oddsidemargin 10mm \evensidemargin 10mm \topmargin 0pt
\headheight 0pt \headsep 0pt \textheight 23.5cm \textwidth 15cm

\title{The computer algebra package {\sc Crack}}
\author{Thomas Wolf \\ School of Mathematical Sciences \\
Queen Mary and Westfield College \\ University of London \\
London E1 4NS \\ T.Wolf@maths.qmw.ac.uk \\ \\
Andreas Brand \\ Fakult\"{a}t f\"{u}r Mathematik und Informatik \\
Friedrich Schiller Universit\"{a}t Jena \\ 07740 Jena \\ Germany \\
maa@hpux.rz.uni-jena.de }

\begin{document}
\maketitle
\tableofcontents

\section{The purpose of {\sc Crack}}
The package {\sc Crack} attempts the solution of an overdetermined system of
ordinary or partial differential equations (ODEs/PDEs) with at most polynomial
nonlinearities. Under `normal circumstances' the number of DEs which describe
physical processes matches the number of unknown functions which are involved.
Moreover, none of those equations can be solved or integrated, and
integrability conditions yield only identities. Although applying the package
{\sc Crack} to such problems directly will usually not be of much help, it is
possible to investigate difficult DE-systems indirectly by studying analytic
properties which would be useful for their solution. In this way overdetermined
PDE-systems result. Applications of {\sc Crack} include a program {\sc Conlaw}
for the computation of conservation laws of DEs, a program {\sc LiePDE} for the
computation of infinitesimal symmetries of DEs and a program {\sc ApplySym} for
the computation of symmetry and similarity variables from infinitesimal
symmetries.

\section{Technical details}
\subsection{System requirements}
The required system is {\sc Reduce}, version 3.6, strictly speaking the PSL
version of {\sc Reduce} as distributed by the Konrad Zuse Institut / Berlin.
Work on compatibility issues aims to provide a CSL {\sc Reduce} compatible
version of {\sc Crack} in the near future (by the end of 1998). Memory
requirements depend crucially on the application. The {\tt crack.rlg} file is
produced by running {\tt crack.tst} in a 4MB session of {\sc Reduce} 3.6 under
{\sc Linux}. On the other hand, it is not difficult to formulate problems that
consume any amount of memory.

\subsection{Installation}
In a running {\sc Reduce} session either do \\
\verb+ in "crack.red"$ + \\
or, in order to speed up computation, compile it by entering \\
\verb+ on comp$ + \\
before the above command, or generate a fast-loading compiled file once with \\
\verb+ faslout "crack"$ + \\
\verb+ in "crack.red"$ + \\
\verb+ faslend$ + \\
and load that file to run {\sc Crack} with \\
\verb+ load crack$ +

\subsection{Updates / web demos}
%{\sc Crack} can be run from a web demo
A web demo at the address
\verb+http://cathode.maths.qmw.ac.uk/demo.html+
that was created by Francis Wright and Arrigo Triulzi allows problems of
restricted size to be run. The latest version is available from \\
\verb+ftp://ftp.maths.qmw.ac.uk/pub/tw/crack/+.
Publications related to {\sc Crack} can be found under \\
\verb+http://www.maths.qmw.ac.uk/~tw/public2.html#2+.

\subsection{The files}
The following files are provided with {\sc Crack}:
\begin{itemize}
\item {\tt crack.red} contains read-in statements for a number of files
      {\tt cr*.red}.
\item {\tt crack.tst} contains test-examples.
\item {\tt crack.rlg} contains the output of {\tt crack.tst}.
\item {\tt crack.tex} is this manual.
\end{itemize}

\subsection{The call}
{\sc Crack} is called by
\begin{tabbing}
{\tt crack}(\=\{{\it equ}$_1$, {\it equ}$_2$, \ldots , {\it equ}$_m$\}, \\
\>\{{\it ineq}$_1$, {\it ineq}$_2$, \ldots , {\it ineq}$_n$\}, \\
\>\{{\it fun}$_1$, {\it fun}$_2$, \ldots , {\it fun}$_p$\}, \\
\>\{{\it var}$_1$, {\it var}$_2$, \ldots , {\it var}$_q$\});
\end{tabbing}
$m,n,p,q$ are arbitrary.
\begin{itemize}
\item The {\it equ}$_i$ are identically vanishing partial differential
expressions, i.e.\ they represent equations $0 = {\it equ}_i$, which are to be
solved for the functions ${\it fun}_j$ as far as possible, thereby drawing only
necessary conclusions and not restricting the general solution.
\item The {\it ineq}$_i$ are algebraic or differential expressions which must
not vanish identically for any solution to be determined, i.e.\ only those
solutions are computed for which none of the expressions {\it ineq}$_i$
vanishes identically in all independent variables.
\item The dependence of the (scalar) functions ${\it fun}_j$ on independent
variables must be defined beforehand with {\tt DEPEND} rather than by declaring
these functions as operators. Their arguments may themselves only be
identifiers representing variables, not expressions. Other unknown functions
not among the ${\it fun}_j$ must also not be represented as operators but only
through {\tt DEPEND}.
\item The functions ${\it fun}_j$ and their derivatives may only occur
polynomially.
\item The ${\it var}_k$ are further independent variables, which are not
already arguments of any of the ${\it fun}_j$. If there are none then the
fourth argument is the empty list \{\}, although it does no harm to include
arguments of functions ${\it fun}_j$.
\item The dependence of the ${\it equ}_i$ on the independent variables and on
constants and functions other than ${\it fun}_j$ is arbitrary.
\item {\sc Crack} can be run in automatic batch mode (by default) or
interactively with the switch {\tt OFF BATCH\_MODE}.
\end{itemize}

\subsection{The result}
The result is a list of solutions
\[ \{{\it sol}_1, \ldots \} \]
where each solution is a list of 4 lists:
\begin{tabbing}
\{\=\{${\it con}_1, \; {\it con}_2, \ldots , \; {\it con}_q$\}, \\
\>\{${\it fun}_a={\it ex}_a, \;\; {\it fun}_b={\it ex}_b, \ldots , \;\;
{\it fun}_p={\it ex}_p$\},\= \\
\>\{${\it fun}_c, \;\; {\it fun}_d, \ldots , \;\; {\it fun}_r$\}, \> \\
\>\{{\it ineq}$_1$, {\it ineq}$_2$, \ldots , {\it ineq}$_s$\} \> \}.
\end{tabbing}
For example, in the case of a linear system as input, there is at most one
solution ${\it sol}_1$. If {\sc Crack} finds a contradiction, e.g.\ $0=1$, then
no solution exists and it returns the empty list \{\}. If {\sc Crack} can
factorize a non-linear equation algebraically, then the factors are set to zero
individually and the different sub-cases are studied through {\sc Crack}
calling itself recursively. If during such a recursive call a contradiction
results, then this sub-case has no solution, but other sub-cases may still have
solutions. The empty list is also returned if no solution exists which
satisfies the inequalities {\it ineq}$_i \neq 0.$

The expressions ${\it con}_i$ (if there are any) are the remaining necessary
and sufficient conditions for the functions ${\it fun}_c,\ldots,{\it fun}_r$ in
the third list. Those functions can be original functions from the equations to
be solved (i.e.\ from the third argument of the call of {\sc Crack}) or new
functions or constants which arose from integrations.
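As a small illustration of the call and of the shape of the result, consider
the following sketch (the names of the new functions of integration and the
exact output format may differ):
\begin{verbatim}
depend f,x,y;                  % f is an unknown function of x and y
crack({df(f,x,y)},{},{f},{});  % solve the single equation 0 = df(f,x,y)
\end{verbatim}
One expects a single solution in which the first list of remaining conditions
is empty, the second list expresses $f$ through two new functions of
integration, e.g.\ $f = c_1(x) + c_2(y)$, the third list contains these new
functions $c_1, c_2$ as the remaining free functions, and the fourth list of
inequalities is empty.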
The dependence of new functions on variables is declared with {\tt DEPEND}; to
visualize this dependence, the algebraic mode function
${\tt FARGS({\it fun}_i)}$ can be used. If there are no ${\it con}_i$ then all
equations are solved and the functions in the third list are unconstrained. The
elements of the fourth list are the expressions which have been assumed to be
unequal to zero in the derivation of this solution. The second list contains
equations ${\it fun}_i={\it ex}_i$ where each ${\it fun}_i$ is an original
function and ${\it ex}_i$ is the computed expression for ${\it fun}_i$.

\subsection{Interactive mode, flags, parameters and the list of procedures}
Under normal circumstances one will try to have problems solved automatically
by {\sc Crack}. An alternative is to input {\tt OFF BATCH\_MODE;} before
calling {\sc Crack} and to solve problems interactively. In interactive mode it
is possible to
\begin{itemize}
\item inspect data, like equations and their properties, unknown functions,
      variables, identities, and statistics,
\item save, change, add or drop equations,
\item inspect and change flags and parameters which govern individual modules
      as well as their interplay,
\item specify how to proceed, like doing
  \begin{itemize}
  \item one automatic step,
  \item one specific step,
  \item a number of automatic steps,
  \item a specific step as often as possible.
  \end{itemize}
\end{itemize}
To get interactive help one enters `h' or `?'.

Flags and parameters are stored as symbolic fluid variables, which means that
they can be accessed with {\tt lisp( ... )}, like {\tt lisp( print\_:=5 );}
before calling {\sc Crack}. {\tt print\_}, for example, is a measure of the
maximal length of expressions still to be printed on the screen (the number of
factors in terms). A complete list of flags and parameters is given at the
beginning of the file {\tt crinit.red}.

One more parameter shall be mentioned: the list of modules/procedures called
{\tt proc\_list\_}. In interactive mode this list can be looked at with
{\tt `p'} or be changed with {\tt `cp'}. This list defines in which order the
different modules/procedures are tried whenever {\sc Crack} has to decide what
to do next. Exceptions to this rule are possible. Some procedures, say $P_1$,
require that after their execution another specific procedure, say $P_2$, is
executed, independent of whether $P_2$ would be next according to
{\tt proc\_list\_}. This is managed by $P_1$ writing the procedure $P_2$ into a
hot-list after its completion. This list is dealt with in the {\tt `to\_do'}
step which always comes first in {\tt proc\_list\_}.

A way to keep the convenience of running {\sc Crack} automatically and still be
able to break the fixed rhythm prescribed by {\tt proc\_list\_} is to include
the entry {\tt stop\_batch} in {\tt proc\_list\_} and to start {\sc Crack} in
automatic batch mode. Execution then continues until none of the procedures
which come before {\tt stop\_batch} is applicable any more; {\tt stop\_batch}
is executed next, which stops automatic execution and switches to interactive
mode. This allows one either to continue the computation interactively or to
change {\tt proc\_list\_} with {\tt `cp'} and continue in automatic mode.

The default value of {\tt proc\_list\_} does not include all possible modules
because not all of them are suitable for every kind of overdetermined system to
be solved. The complete list is shown in interactive mode under {\tt `cp'}. A
few basic modules are described in the following section.
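Before turning to those modules, here is a small sketch of the flag mechanism
and of the interactive switch described above (the equation and the value of
{\tt print\_} are chosen only for illustration):
\begin{verbatim}
load crack$
lisp( print_:=5 );   % bound on the size of expressions shown on the screen
off batch_mode;      % work on the problem interactively
depend f,x,y;
crack({df(f,x)+df(f,y)},{},{f},{});
\end{verbatim}
In the interactive session, {\tt h} or {\tt ?} lists the available commands,
{\tt p} displays {\tt proc\_list\_} and {\tt cp} changes it.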
The efficiency of {\sc Crack} in automatic mode depends very much on the
content of {\tt proc\_list\_} and on the sequence of its elements. Optimizing
{\tt proc\_list\_} for a given task needs experience which cannot be formalized
in a few simple rules and will therefore not be explained in more detail here.
The following remarks are only guidelines.
\begin{description}
\item[{\tt to\_do :}] hot list of steps to be taken next, should always come
      first,
\item[{\tt subst\_level\_? :}] substitutions of functions by expressions, the
      variants differing in the maximal allowed size and other properties of
      the substituted expressions,
\item[{\tt separation :}] what is described as direct separation in the next
      section,
\item[{\tt gen\_separation :}] what is described as indirect separation in the
      next section, only to be used for linear problems,
\item[{\tt quick\_integration :}] integration of very specific short equations,
\item[{\tt full\_integration :}] integration of equations which have the chance
      to lead to a substitution,
\item[{\tt integration :}] any integration,
\item[{\tt factorization :}] splitting the computation into the investigation
      of different subcases resulting from the algebraic factorization of an
      equation, only useful for non-linear problems,
\item[{\tt undetlinode :}] parametric solution of a single underdetermined
      linear ODE (with non-constant coefficients), only applicable to linear
      problems,
\item[{\tt length\_reduction\_1 :}] length reduction by algebraic combination,
      only for linear problems,
\item[{\tt length\_reduction\_2 :}] length reduction by differential reduction,
\item[{\tt decoupling :}] steps towards the computation of a differential
      Gr\"{o}bner Basis,
\item[{\tt add\_differentiated\_pdes :}] only useful for non-linear
      differential equations with the leading derivative occurring
      non-linearly,
\item[{\tt add\_diff\_star\_pdes :}] for the treatment of non-linear indirectly
      separable equations,
\item[{\tt multintfac :}] to find integrating factors for a system of equations
      (very slow),
\item[{\tt alg\_solve\_deriv :}] to be used for equations quadratic in the
      leading derivative,
\item[{\tt alg\_solve\_system :}] to be used if a (sub-)system of equations is
      to be solved algebraically for a set of functions or their derivatives,
\item[{\tt subst\_derivative :}] substitution of a derivative of a function
      everywhere by a new function if such a derivative exists,
\item[{\tt undo\_subst\_derivative :}] undo the above substitution.
\end{description}

\section{Contents of the {\sc Crack} package}
The package {\sc Crack} contains a number of modules. The basic ones are for
computing a pseudo differential Gr\"{o}bner Basis (using integrability
conditions in a systematic way), integrating exact PDEs, separating PDEs,
solving DEs containing functions of only a subset of all variables and solving
standard ODEs (of Bernoulli or Euler type, linear, homogeneous and separable
ODEs). These facilities are described briefly below, together with examples.
The test file {\tt crack.tst} demonstrates these and other facilities.

\subsection{Pseudo Differential Gr\"{o}bner Basis}
This module (called `decoupling' in {\tt proc\_list\_}) reduces derivatives in
equations by using other equations, and it applies integrability conditions to
formulate additional equations which are subsequently reduced, and so on.
%How this could work is demonstrated in the following example.
%The integrability condition for the system
%\[ \begin{array}{cccl}
%f = f(x,y), \; \; & f,_{x} & = & 1 \\
%                  & f,_{y} & = & (f-x-1/y)x - 1/y^2
%\end{array} \]
%provides an algebraic condition for the function $f$
%which turns out not only to be necessary but also sufficient to solve both
%equations:
%\begin{eqnarray*}
% 0 = f,_{xy} - f,_{yx} & = & - xf,_x - f + 2x + 1/y \\
%                       & = & - f + x + 1/y \; \; \; \; \; \;
% \mbox{(with $f,_x$ from above)}
%\end{eqnarray*}
%\[ \rightarrow f = x + 1/y. \]
A general algorithm for bringing a system of PDEs into a standard form in which
all integrability conditions are satisfied by applying a finite number of
additions, multiplications and differentiations is based on the general theory
of involutive systems \cite{Riq,Th,Ja}. Essential to this theory is a total
ordering of partial derivatives which allows a {\em Leading Derivative} (LD) to
be assigned to each PDE according to a chosen ordering of functions and
derivatives. Examples of possible orderings are
\begin{itemize}
\item lex.\ order of functions $>$ lex.\ order of variables,
\item lex.\ order of functions $>$ total differential order $>$ lex.\ order of
      variables,
\item total order $>$ lex.\ order of functions $>$ lex.\ order of variables
\end{itemize}
or mixtures of them obtained by giving weights to individual functions and
variables. Above, the `$>$' indicate ``before'' in the priority of criteria.
The principle is then to
\begin{itemize}
\item take two equations at a time and differentiate them as often as necessary
      to get equal LDs,
\item regard these two equations as algebraic equations in the common LD and
      calculate the remainder w.r.t.\ the LD, i.e.\ generate an equation
      without the LD by the Euclidean algorithm, and
\item add this equation to the system.
\end{itemize}
Usually pairs of equations are taken first such that only one of them must be
differentiated. If in such a generation step one of the two equations is not
differentiated, then this is called a simplification step and that equation is
replaced by the new equation. The algorithm ends if each combination of two
equations yields only equations which simplify to an identity modulo the other
equations. A more detailed description is given e.g.\ in \cite{Alex,Reid1}.
Other programs implementing this algorithm are described e.g.\ in
\cite{FS,Alex,Fush,Reid1} and \cite{Mans}.

In the interactive mode of {\sc Crack} it is possible to change the
lexicographical ordering of variables and of functions, to choose between
`total differential order' ordering and lexicographical ordering of variables,
and to choose whether or not the lexicographical ordering of functions should
have a higher priority than the ordering of the variables in a derivative. An
example of the computation of a differential Gr\"{o}bner Basis is given in the
test file {\tt crack.tst}.

\subsection{Integrating exact PDEs}
The technical term `exact' is adapted for PDEs from exterior calculus and is a
small abuse of language, but it is useful for characterizing the kind of PDEs
under consideration. The purpose of the integration module in {\sc Crack} is to
decide whether a given differential expression $D$, which involves unknown
functions $f^i(x^j),\; 1\leq i\leq m,$ of independent variables
$x^j,\; 1\leq j\leq n,$ is a total derivative of another expression $I$ w.r.t.\
some variable $x^k, 1\leq k\leq n$
\[ D(x^i,\; f^j,\; f^j,_p,\; f^j,_{pq}, \ldots) =
   \frac{d I(x^i,\; f^j,\; f^j,_p,\; f^j,_{pq}, \ldots)}{d x^k}.
\]
The index $k$ is reserved in the following for the integration variable $x^k.$
With an appropriate function of integration $c^r$, which depends on all
variables except $x^k$, it is no loss of generality to replace $0 = D$ by
$0 = I + c^r$ in a system of equations. Of course there always exists a
function $I$ with a total derivative equal to $D$, but the question is whether
for \underline{arbitrary} $f^i$ the integral $I$ is functionally dependent only
on the $f^i$ and their derivatives, and \underline{not on integrals of
$f^i.$} \\
\underline{Preconditions:} \\
$D$ is a polynomial in the $f^i$ and their derivatives. The number of functions
and variables is free. For deciding the existence of $I$ only, the explicit
occurrence of the variables $x^i$ is arbitrary. In order to actually calculate
$I$ explicitly, $D$ must have the property that each of its terms either
contains an unknown function of $x^k$ or is formally integrable w.r.t.\ $x^k.$
That means that if $I$ exists, then only a special explicit occurrence of $x^k$
can prevent the calculation of $I$, and furthermore only in those terms which
do not contain any unknown function of $x^k.$ If such terms occur in $D$ and
$I$ exists, then $I$ can still be expressed as a polynomial in the
$f^i, f^i,_j, \ldots$ and terms containing indefinite integrals with integrands
explicit in $x^k.$ \\
\underline{Algorithm:} \\
Successive partial integration of the term with the highest $x^k$-derivative of
any $f^i.$ In this way the differential order w.r.t.\ $x^k$ is reduced step by
step. This procedure is always applicable because the steps involve only
differentiations and the polynomial integration
$(\int h^n\frac{\partial h}{\partial x}dx = h^{n+1}/(n+1))$ where $h$ is a
partial derivative of some function $f^i.$ For a more detailed description see
\cite{WoInt}.\\
\underline{Stop:} \\
The iteration stops if no term with any $x^k$-derivative of any $f^i$ is left.
If in the remaining un-integrated terms any $f^i(x^k)$ itself occurs, then $I$
is not expressible with $f^i$ and its derivatives only. In case no $f^i(x^k)$
occurs, any remaining terms can contain $x^k$ only explicitly. Whether they can
be integrated depends on their formal integrability. For their integration the
{\sc Reduce} integrator is applied. \\
\underline{Speed up:} \\
The partial integration as described above preserves derivatives with respect
to other variables. For example, the three terms
$f,_x, \; f f,_{xxx}, \; f,_{xxy}$ cannot combine into the same terms in the
integral, because if one ignores $x$-derivatives then $f$, $f^2$ and $f,_y$ are
three completely different expressions from the point of view of
$x$-integration. This allows the following drastic speed up for large
expressions. It is possible to partition the complete sum of terms into partial
sums such that each of the partial sums has to be integrable on its own. This
is managed by generating a label for each term and collecting terms with equal
labels into partial sums. The label is produced by dropping all $x$-derivatives
from all functions to be computed and dropping all factors which are not powers
of derivatives of functions to be computed. The partitioning into partial sums
has two effects. Firstly, if the integration of one partial sum fails then the
remaining sums do not have to be tried for integration. Secondly, doing partial
integration for each term means doing many subtractions. It is much faster to
subtract terms from small sums than from large sums.
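As a small illustration of this labelling, take the three terms mentioned
above; dropping all $x$-derivatives (and any factors which are not powers of
derivatives of the unknown functions) gives the labels
\[ f,_x \;\rightarrow\; f, \qquad f f,_{xxx} \;\rightarrow\; f^2, \qquad
   f,_{xxy} \;\rightarrow\; f,_y , \]
so these three terms are collected into three separate partial sums, each of
which has to be integrable w.r.t.\ $x$ on its own.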
\underline{Example:} \\
We apply the above algorithm to
\begin{equation}
D := 2f,_yg' + 2f,_{xy}g + gg'^3 + xg'^4 + 3xgg'^2g'' = 0 \label{D}
\end{equation}
with $f = f(x,y), \, g = g(x), \, '\equiv d/dx.$ Starting with terms containing
$g$ and at first with the highest derivative $g,_{xx},$ the steps are
\[ \begin{array}{rcccl}
\int 3xgg,_x^2g,_{xx} dx & = & \int d(xgg,_x^3) & - &
\int \left( \partial_x(xg) g,_x^3\right) dx \\ \\
 & = & xgg,_x^3 & - & \int g,_x^3(g + xg,_x) dx,
\end{array} \]
\[ I := I + xgg,_x^3 \]
\[ D := D - g,_x^3(g + xg,_x) - 3xgg,_x^2g,_{xx} \]
The new terms $- g,_x^3(g + xg,_x)$ are of lower order than $g,_{xx}$ and so in
the expression $D$ the maximal order of $x$-derivatives of $g$ is lowered. The
conditions for $D$ to be exact are the following.
\begin{itemize}
\item The leading derivative must occur linearly before each partial
      integration step.
\item After the partial integration of the terms with first order
      $x$-derivatives of $f$ the remaining $D$ must not contain $f$ or other
      derivatives of $f$, because such terms cannot be integrated w.r.t.\ $x$
      without specifying $f$.
\end{itemize}
The result of $x$- and $y$-integration in the above example is (remember
$g=g(x)$)
\begin{equation}
0 = 2fg + xygg,_x^3 + c_1(x) + c_2(y) \; \; (=I). \nonumber
\end{equation}
{\sc Crack} can now eliminate $f$ and substitute for it in all other
equations. \\
\underline{Generalization:} \\
If, after applying the above basic algorithm, terms are left which contain
functions of $x^k$ but each of these functions depends only on a subset of all
$x^i, 1\leq i\leq n,$ then a generalized version of the above algorithm can
still provide a formal expression for the integral $I$ (see \cite{WoInt}). The
price consists of additional differential conditions, but they are equations in
fewer variables than occur in the integrated equation. For example, integrating
\begin{equation}
\tilde{D} = D + g^2(y^2 + x\sin y + x^2e^y) \label{Dnew}
\end{equation}
by introducing as few new functions and additional conditions as possible gives
the integral $\tilde{I}$
\begin{eqnarray*}
\tilde{I} & = & 2fg + xygg,_{x}^{3} + c_1(x) + c_2(y) \\
 & & + \frac{1}{3}y^3c_3'' - \cos y(xc_3'' - c_3)
     + e^y(x^2c_3'' - 2xc_3' + 2c_3)
\end{eqnarray*}
with $c_3 = c_3(x), \, '\equiv d/dx$ and the single additional condition
$g^2 = c_3'''.$ The integration of the new terms of (\ref{Dnew}) is achieved by
partial integration again, for example
\begin{eqnarray*}
\int g^2x^2 dx & = & x^2\int g^2 dx - \int (2x\!\int g^2 dx) dx \\
 & = & x^2\int g^2 dx - 2x\int\!\!\int g^2 dx + 2 \int\!\!\int\!\!\int g^2 dx \\
 & = & x^2c_3'' - 2xc_3' + 2c_3.
\end{eqnarray*}
\underline{Characterization:} \\
This algorithm is a decision algorithm which does not involve any heuristics.
After integration the new equation is still a polynomial in the $f^i$ and in
the new constants or functions of integration. Therefore the algorithms for
bringing the system into standard form can still be applied to the PDE-system
after the equation $D = 0$ is replaced by $I = 0.$ The complexity of algorithms
for bringing a PDE-system into a standard form depends nonlinearly on the order
of these equations because of the nonlinear increase of the number of different
leading derivatives and thereby of the number of equations generated
intermediately by such an algorithm. It therefore generally pays off to
integrate equations during such a standard form algorithm.
If an $f^i,$ which depends on all variables, can be eliminated after an
integration, then depending on its length it is in general helpful to
substitute $f^i$ in the other equations and so to reduce the number of
equations and functions by one. This is especially profitable if the replaced
expression is short and contains only functions of fewer variables than
$f^i.$ \\
\underline{Test:} \\
The corresponding test input is
\begin{verbatim}
depend f,x,y;
depend g,x;
crack({2*df(f,y)*df(g,x)+2*df(f,x,y)*g+g*df(g,x)**3
       +x*df(g,x)**4+3*x*g*df(g,x)**2*df(g,x,2)
       +g**2*(y**2+x*sin y+x**2*e**y)},
      {},{f,g},{});
\end{verbatim}
The meaning of the {\sc Reduce} command {\tt depend} is to declare that $f$
depends in an unknown way on $x$ and $y$. For more details on the algorithm see
\cite{WoInt}.

\subsection{Direct separation of PDEs}
As a result of repeated integrations the functions in the remaining equations
have fewer and fewer variables. It may therefore happen that after a
substitution an equation results in which at least one variable occurs only
explicitly and not as an argument of an unknown function. Consequently all
coefficients of linearly independent expressions in this variable can be set to
zero individually. \\
{\em Example:} \\
$f = f(x,y), \;\; g = g(x), \;\; x,y,z$ are independent variables. The equation
is
\begin{equation}
0 = f,_y + z(f^2+g,_x) + z^2(g,_x+yg^2) \label{sep}
\end{equation}
$x$-separation? $\rightarrow$ no \\
$y$-separation? $\rightarrow$ no \\
$z$-separation? $\rightarrow$ yes:
$0 \,=\, f,_y \,=\, f^2+g,_x \,=\, g,_x+yg^2$ \\
$y$-separation? $\rightarrow$ yes: $0 = g,_x = g^2\;\;$ (from the third
equation of the $z$-separation)

If $z^2$ had been replaced in (\ref{sep}) by a third function $h(z)$ then
direct separation would not have been possible. The situation changes if $h$ is
a parametric function which is assumed to be independently given and which
should not be calculated, i.e.\ $f$ and $g$ should be calculated for an
arbitrary given $h(z)$. Then the same separation could have been done with an
extra treatment of the special case $h,_{zz} = 0,$ i.e.\ $h$ linear in $z$.
This different treatment of unknown functions makes it necessary to specify
explicitly the functions to be calculated as the third argument to {\sc Crack}.
The input in this case would be
\begin{verbatim}
depend f,x,y;
depend g,x;
depend h,z;
crack({df(f,y)+z*f**2+(z+h)*df(g,x)+h*y*g**2},{},{f,g},{z});
\end{verbatim}
The fourth parameter for {\sc Crack} is necessary to make clear that, in
addition to the variables of $f$ and $g$, $z$ is also an independent variable.
If the flag {\tt independence\_} is not {\tt nil}, then {\sc Crack} will stop
if the linear independence of the explicit expressions in the separation
variable (in the example $1,z,z^2$) is not clear and will ask interactively
whether separation should be done or not.

\subsection{Indirect separation of PDEs}
For the above direct separation a precondition is that at least one variable
occurs only explicitly or as an argument of parametric functions. The situation
where each variable is an argument of at least one function but no function
contains all independent variables of an equation needs a more elaborate
treatment. The steps are as follows.
\begin{itemize}
\item A variable $x_a$ is chosen which occurs in as few functions as possible.
This variable will be separated directly later on, which requires that all
unknown functions $f_i$ containing $x_a$ be eliminated.
Therefore, as long as $F:=\{f_i\}$ is not empty, do the following:
\begin{itemize}
\item Choose the function $f_i(y_p)$ in $F$ with the smallest number of
      variables $y_p$ and with $z_{ij}$ as those variables on which $f_i$ does
      not depend.
\item Identify all different products $P_{ik}$ of powers of $f_i$-derivatives
      and of $f_i$ in the equation. Determine the $z_{ij}$-dependent factors
      $C_{ik}$ of the coefficients of $P_{ik}$ and store them in a list.
\item For each $C_{il}$ ($i$ fixed, $l=1,\ldots$) choose a $z_{ij}$ and:
  \begin{itemize}
  \item divide the equation and all following elements $C_{im}$ with $m>l$ of
        this list by $C_{il}$, such that these elements are still the actual
        coefficients in the equation after the division,
  \item differentiate the equation and the $C_{im}, m>l$ w.r.t.\ $z_{ij}$.
  \end{itemize}
\end{itemize}
\item The resulting equation no longer contains any unknown function of $x_a$
and can be separated directly w.r.t.\ $x_a$ in case $x_a$ still occurs
explicitly. In both cases the equation(s) are free of $x_a$ afterwards, and
inverting the sequence of integrations and multiplications for all those
equations (in the case of direct separability) will also result in equation(s)
free of $x_a.$ More exactly, the steps are
\begin{itemize}
\item multiplication of the equation(s) and the $C_{im}$ with $m<l$ by the
      elements of the $C_{ik}$-lists in exactly the inverse order,
\item integration of these exact PDEs and the $C_{im}$ w.r.t.\ $z_{ij}$.
\end{itemize}
\item The equations originating that way are used to evaluate those functions
which do not depend on $x_a$ and which survived the above differentiations.
Substituting these functions in the original equation may enable direct
separation w.r.t.\ variables on which the $f_i$ do not depend.
\item The whole procedure is repeated for another variable $x_b$ if the
original DE could not be separated directly and still has the property that it
contains only functions of a subset of all variables in the equation.
\end{itemize}
The additional bookkeeping of the coefficients $C_{ik}$ and their updating by
division, differentiation, integration and multiplication is done so that they
can be used as integrating factors for the backward integration. The following
example makes this clearer. The equation is
\begin{equation}
0 = f(x) g(y) - \frac{1}{2}xf'(x) - g'(y) - (1+x^2)y. \label{isep}
\end{equation}
The steps are (equal levels of indentation in the example correspond to those
in the algorithm given above)
\begin{itemize}
\item $x_1:=x, \, F=\{f\}$
  \begin{itemize}
  \item Identify $f_1:=f, \; \; \; \; \; y_1:=x, \; \; \; \; \; z_{11}:=y$
  \item and $P_1=\{f',f\}, \; \; \; \; \; C_1=\{1,g\}$
    \begin{itemize}
    \item Divide $C_{12}$ and (\ref{isep}) by $C_{11}=1$ and differentiate
          w.r.t.\ $z_{11}=y:$
          \begin{eqnarray}
          0 & = & fg' - g'' - (1+x^2) \label{isep2} \\
          C_{12} & = & g' \nonumber
          \end{eqnarray}
    \item Divide (\ref{isep2}) by $C_{12}=g'$ and differentiate w.r.t.
$z_{11}=y:$
          \[ 0 = - (g''/g')' - (1+x^2)(1/g')' \]
    \end{itemize}
  \end{itemize}
\item Direct separation w.r.t.\ $x$ and integration:
\[\begin{array}{rclclcl}
x^2: 0 & = & (1/g')'   & \Rightarrow & c_1g' = 1   & \Rightarrow &
g = y/c_1 + c_2 \\
x^0: 0 & = & (g''/g')' & \Rightarrow & c_3g' = g'' & \Rightarrow &
c_3 = 0
\end{array} \]
\item Substitution of $g$ in the original DE
\[0 = (y/c_1+c_2)f - \frac{1}{2}xf' - 1/c_1 - (1+x^2)y \]
provides a form which allows the standard methods of {\sc Crack} to succeed by
direct separation w.r.t.\ $y$
\[\begin{array}{rclcl}
y^1: 0 & = & f/c_1 - 1 - x^2 & \Rightarrow & f' = 2c_1x \\
y^0: 0 & = & c_2f - \frac{1}{2}xf' - 1/c_1 & \Rightarrow &
0 = c_2c_1(1+x^2) - c_1x^2 - 1/c_1
\end{array}\]
and direct separation w.r.t.\ $x$:
\begin{eqnarray*}
x^0: 0 & = & c_2c_1 - c_1 \\
x^2: 0 & = & c_2c_1 - 1/c_1 \\
 & \Rightarrow & 0 = c_1 - 1/c_1 \\
 & \Rightarrow & c_1 = \pm 1 \Rightarrow c_2 = 1.
\end{eqnarray*}
\end{itemize}
We get the two solutions $f = 1 + x^2, g = 1 + y$ and
$f = - 1 - x^2, g = 1 - y.$ The corresponding input to {\sc Crack} would be
\begin{verbatim}
depend f,x;
depend g,y;
crack({f*g-x*df(f,x)/2-df(g,y)-(1+x**2)*y},{},{f,g},{});
\end{verbatim}

\subsection{Solving standard ODEs}
For solving standard ODEs the package {\sc ODESolve} by Malcolm MacCallum and
Francis Wright \cite{Mal} is applied. This package is distributed with
{\sc Reduce} and can be used independently of {\sc Crack}. The syntax of
{\sc ODESolve} is quite similar to that of {\sc Crack}: \\
\verb+depend+ {\it function}, {\it variable}; \\
\verb+odesolve(+ODE, {\it function}, {\it variable}); \\
In its present form (1998) it solves standard first order ODEs (Bernoulli and
Euler type, with separable variables, $\ldots$) and linear higher order ODEs
with constant coefficients. An improved version is currently under preparation
by Francis Wright. The applicability of {\sc ODESolve} is increased by a
{\sc Crack} subroutine which recognizes PDEs in which there is only one unknown
function of all variables and all occurring derivatives of this function are
derivatives w.r.t.\ one variable of only one partial derivative. For example
the PDE for $f(x,y)$
\[ 0 = f,_{xxy} + f,_{xxyy} \]
can be viewed as a first order ODE in $y$ for $f,_{xxy}.$

\section{Acknowledgement}
Francis Wright contributed a module that provides simplifications of
expressions involving symbolic derivatives and integrals. Also, {\sc Crack}
makes extensive use of the {\sc Reduce} program {\sc ODESolve} written by
Malcolm MacCallum and Francis Wright. Arrigo Triulzi provided a module to
support the use of different total orderings of derivatives in doing pseudo
differential Gr\"{o}bner basis computations. Work on this package has been
supported by the Konrad Zuse Institut/Berlin through a fellowship for T.W.
Winfried Neun and Herbert Melenk are thanked for many discussions and constant
support. Anthony Hearn provided free copies of {\sc Reduce} to us as a
{\sc Reduce} developers group, which is also gratefully acknowledged.

\begin{thebibliography}{99}
\bibitem{Riq} C. Riquier, Les syst\`{e}mes d'\'{e}quations aux d\'{e}riv\'{e}es
partielles, Gauthier--Villars, Paris (1910).
\bibitem{Th} J. Thomas, Differential Systems, AMS, Colloquium publications,
v. 21, N.Y. (1937).
\bibitem{Ja} M. Janet, Le\c{c}ons sur les syst\`{e}mes d'\'{e}quations aux
d\'{e}riv\'{e}es, Gauthier--Villars, Paris (1929).
\bibitem{Topu} V.L. Topunov, Reducing Systems of Linear Differential Equations
to a Passive Form, Acta Appl. Math. 16 (1989) 191--206.
\bibitem{Alex} A.V. Bocharov and M.L. Bronstein, Efficiently Implementing Two
Methods of the Geometrical Theory of Differential Equations: An Experience in
Algorithm and Software Design, Acta Appl. Math. 16 (1989) 143--166.
\bibitem{Reid1} G.J. Reid, A triangularization algorithm which determines the
Lie symmetry algebra of any system of PDEs, J. Phys. A: Math. Gen. 23 (1990)
L853--L859.
\bibitem{FS} F. Schwarz, Automatically Determining Symmetries of Partial
Differential Equations, Computing 34 (1985) 91--106.
\bibitem{Fush} W.I. Fushchich and V.V. Kornyak, Computer Algebra Application
for Determining Lie and Lie--B\"{a}cklund Symmetries of Differential Equations,
J. Symb. Comp. 7 (1989) 611--619.
\bibitem{Mans} E.L. Mansfield, The differential algebra package diffgrob2,
Mapletech 3 (1996) 33--37.
\bibitem{Ka} E. Kamke, Differentialgleichungen, L\"{o}sungsmethoden und
L\"{o}sungen, Band 1, Gew\"{o}hnliche Differentialgleichungen, Chelsea
Publishing Company, New York (1959).
\bibitem{Wo} T. Wolf, An Analytic Algorithm for Decoupling and Integrating
systems of Nonlinear Partial Differential Equations, J. Comp. Phys. 60, no. 3
(1985) 437--446; and Zur analytischen Untersuchung und exakten L\"{o}sung von
Differentialgleichungen mit Computeralgebrasystemen, Dissertation B, Jena
(1989).
\bibitem{WoInt} T. Wolf, The Symbolic Integration of Exact PDEs, preprint
(1991).
\bibitem{WM} M.A.H. MacCallum, F.J. Wright, Algebraic Computing with REDUCE,
Clarendon Press, Oxford (1991).
\bibitem{Mal} M.A.H. MacCallum, An Ordinary Differential Equation Solver for
REDUCE, Proc. ISSAC '88, Springer Lect. Notes in Comp. Sci. 358, 196--205.
\bibitem{Step} H. Stephani, Differential equations: Their solution using
symmetries, Cambridge University Press (1989).
\bibitem{LIEPDE} T. Wolf, An efficiency improved program {\sc LiePDE} for
determining Lie symmetries of PDEs, Proceedings of the workshop on Modern group
theory methods in Acireale (Sicily), Nov.\ 1992.
\bibitem{Karp} V.I. Karpman, Phys. Lett. A 136 (1989) 216.
\bibitem{Cham} B. Champagne, W. Hereman and P. Winternitz, The computer
calculation of Lie point symmetries of large systems of differential equations,
Comp. Phys. Comm. 66 (1991) 319--340.
\end{thebibliography}
\end{document}