Next: Bibliography Up: Applications of Computer Algebra Previous: Curves with prescribed singularities

Symbolic-numerical polynomial solving

Algebraic geometry is concerned with investigating the structure of the set of solutions of finitely many polynomial equations. Solving such a system is considered to be part of numerical analysis rather than of algebraic geometry. However, knowing something about the structure of the solution set can actually help in finding the solutions.

Given polynomials $f_1, \dots, f_k \in K[x_1, \dots, x_n]$, $K$ being $\mathbf{R}$ or $\mathbf{C}$, numerical solving means determining, up to a given precision, the coordinates $(p_1, \dots, p_n)$ of all (respectively some, respectively one) points of the variety

\begin{displaymath}V = \bigl\{p = (p_1, \dots, p_n) \in K^n \,\big\vert\:
f_1(p) = \ldots = f_k(p) = 0\bigr\}.\end{displaymath}

Algebraically, we are interested in describing the structure of $V$: in computing its dimension, the number of solutions (if finite), the decomposition into irreducible varieties (e.g., by primary decomposition), the radical of the ideal $I$ generated by $f_1, \dots, f_k$, or the normalisation of the ring $K[x_1, \dots, x_n]/I$. Since $V$, the set of solutions of $f_1 = \ldots = f_k = 0$, depends only on $I$, and indeed only on the radical of $I$, all the above-mentioned methods can be used to prepare the given polynomial system for better numerical solving.
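Even the simplest univariate shadow of the fact that $V$ depends only on the radical of $I$ is useful numerically: replacing a polynomial by its squarefree part removes multiple roots, which are exactly the ill-conditioned ones, without changing the zero set. A minimal sketch in Python/SymPy (an illustration of the idea only, not a SINGULAR routine):

```python
from sympy import symbols, sqf_part, expand

x = symbols('x')

# (x - 1)^3 (x + 2): the triple root at 1 is numerically ill-conditioned
f = expand((x - 1)**3 * (x + 2))

# the squarefree part has the same zero set {1, -2},
# but now with only simple roots
g = sqf_part(f)
print(g)   # x**2 + x - 2, i.e. (x - 1)(x + 2)
```

For systems in several variables the analogous preprocessing step is the radical computation mentioned above, which is far more expensive but serves the same purpose.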

Hence, algebraic geometry and computer algebra may be used for symbolic preprocessing, enabling easier numerical postprocessing. In particular, the following algorithms and methods may be applied (all of which are implemented in SINGULAR).

Pure numerical solving has the advantages of being fast, of being flexible in accuracy (through the use of iterative methods), and of being applicable not only to polynomial systems. Indeed, the big success of numerical methods during the past years seems to show that symbolic methods are of little use in solving systems coming from real-life problems. However, due to rounding errors, numerical methods are inherently uncertain, often unstable in an unpredictable way, sometimes fail to find all solutions, and have problems with overdetermined systems. Moreover, they can hardly treat underdetermined systems (sometimes curves, at most surfaces, as solution sets) and certainly get into trouble near singularities.

On the other hand, symbolic methods are in principle exact and stable. However, they have a high complexity and are therefore slow; in practice, they are applicable only to small systems (this holds, in particular, for radical computation, primary decomposition and normalisation). Nevertheless, they are applicable to polynomial systems of any dimension, and for zero-dimensional systems they can predict precisely the number of complex solutions (counted with multiplicities). Moreover, as is well known, symbolic preprocessing of a system of polynomials (even of ordinary and partial differential equations) may not only lead to better conditions for the system to be solved numerically but can help to find all solutions or even make numerical solving possible in the first place (see below).
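The prediction of the number of complex solutions rests on the fact that, for a zero-dimensional ideal $I$, the number of solutions counted with multiplicities equals the $\mathbf{C}$-dimension of $K[x_1,\dots,x_n]/I$, which in turn equals the number of standard monomials (monomials outside the leading ideal of a Gröbner basis). A rough Python/SymPy sketch of this count, applied to the example system of the SINGULAR session below (the helper code is mine, not SINGULAR's):

```python
from itertools import product
from sympy import symbols, groebner

x, y, z = symbols('x y z')
F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]
G = groebner(F, x, y, z, order='lex')

# exponent vectors of the leading monomials of the reduced Groebner basis
leads = [p.monoms(order='lex')[0] for p in G.polys]

# each variable occurs as a pure power among the leading monomials
# (this happens iff the system is zero-dimensional), so a finite box
# contains all standard monomials
n = 3
bounds = [min(m[i] for m in leads
              if all(m[j] == 0 for j in range(n) if j != i))
          for i in range(n)]

def divisible(e, m):
    """True if the monomial with exponents m divides the one with e."""
    return all(mi <= ei for mi, ei in zip(m, e))

standard = [e for e in product(*(range(b) for b in bounds))
            if not any(divisible(e, m) for m in leads)]
print(len(standard))   # prints 8: dim C[x,y,z]/I = #solutions with mult.
```

Here the count is $8$, in accordance with Bézout's theorem ($2 \cdot 2 \cdot 2$), although the system has only five distinct solutions.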

There is continuous progress in applying symbolic methods to numerical solving; cf. the various articles in the ISSAC Proceedings, the survey article by Möller [Mo], the textbook by Cox, Little and O'Shea [CLO2] or the recent paper by Verschelde [Ve]. Besides Gröbner bases, many other methods have been used. Recently, resultant methods have been re-popularised, in particular in connection with numerical solving (cf. [CE], [WEM]), partly due to the new sparse resultants of Gelfand, Kapranov and Zelevinsky [GKZ]. I should also mention the work of Stetter (cf. [Ste1,Ste2]), which is not discussed in this paper.

In the following I shall describe roughly how Gröbner bases and resultants can be applied to prepare for numerical solving of zero-dimensional systems. Moreover, I shall present experimental material comparing the performance of the two methods, which seems to be the first practical comparison of resultants and Gröbner bases in connection with numerical solving under equal conditions. The motivation for this came from a collaboration with electrical engineers, aiming at the symbolic analysis and sizing of micro-electronic analog circuits.

Let $f_1, \dots, f_k \in K[x_1, \dots, x_n]$, and assume that the system $f_1 = \ldots = f_k = 0$ has only finitely many complex solutions. The problem is to find all solutions up to a given precision. We present two methods: the first computes a lexicographical Gröbner basis and then splits it into triangular sets; the second computes the sparse $u$-resultant and the determinants of the partly evaluated resultant matrix. Both methods end up with the problem of solving univariate polynomial equations, for which we use Laguerre's method.
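Both pipelines thus reduce to univariate root finding. Laguerre's method is a natural choice here because it converges cubically near simple roots and, for polynomials, from almost any starting point. A self-contained sketch (my own illustration, not SINGULAR's implementation):

```python
import numpy as np

def laguerre_root(coeffs, x0=0.0 + 0.0j, tol=1e-12, maxit=100):
    """One root of the polynomial with coefficients `coeffs`
    (highest degree first), found by Laguerre iteration from x0."""
    n = len(coeffs) - 1                       # degree
    p = np.poly1d(coeffs)
    dp, ddp = p.deriv(), p.deriv(2)
    x = complex(x0)
    for _ in range(maxit):
        px = p(x)
        if abs(px) < tol:
            return x
        G = dp(x) / px
        H = G * G - ddp(x) / px
        sq = np.sqrt(complex((n - 1) * (n * H - G * G)))
        d1, d2 = G + sq, G - sq
        denom = d1 if abs(d1) >= abs(d2) else d2   # larger denominator
        x -= n / denom
    return x

def all_roots(coeffs):
    """All complex roots via repeated Laguerre steps and deflation."""
    coeffs = np.array(coeffs, dtype=complex)
    roots = []
    while len(coeffs) > 1:
        r = laguerre_root(coeffs)
        roots.append(r)
        coeffs = np.polydiv(coeffs, [1.0, -r])[0]  # deflate found root
    return roots
```

For instance, `all_roots([1, -3, 2])` returns the two roots $1$ and $2$ of $x^2 - 3x + 2$. In a production solver, each deflated root would be polished against the original polynomial to limit the error accumulated by deflation.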

Solving polynomial systems using Gröbner bases and triangular sets:

Input: Zero-dimensional system $f_1, \dots, f_k \in K[x_1, \dots, x_n]$, $k \ge n$.
Output: Complex roots of $f_1 = \ldots = f_k = 0$ in $\mathbf{C}^n$.
There are several variations on how to compute triangular sets. The $V(T_i)$ need not be disjoint (but can be made disjoint); the $T_i$ need not define maximal ideals (but this can be achieved); we may use the factorising Gröbner basis algorithm, etc. Some of these variants have been implemented in SINGULAR by D. Hillebrand, a former student of H.M. Möller (cf. [Hi]).

SINGULAR example (the output has been changed to save space):

> ring s = 0,(x,y,z),lp;
> ideal i = x2+y+z-1,x+y2+z-1,x+y+z2-1;
> option(redSB);  //option for computing a reduced Groebner basis
> ideal j = groebner(i); j;
j[1]=z6-4z4+4z3-z2, j[2]=2yz2+z4-z2, j[3]=y2-y-z2+z, j[4]=x+y+z2-1
> LIB "triang.lib";
> triangMH(j);    //triangular system with Moeller's method
>                 //(fast, but not necessarily disjoint) 
[1]:                    [2]:    
   _[1]=z2                 _[1]=z4-4z2+4z-1
   _[2]=y2-y+z             _[2]=2y+z2-1
   _[3]=x+y-1              _[3]=2x+z2-1
> triangMH(j,2);  //triangular system (with Moeller's method,  
>                 //improved by Hillebrand) and factorisation
[1]:            [2]:          [3]:              [4]:
   _[1]=z          _[1]=z        _[1]=z2+2z-1      _[1]=z-1
   _[2]=y          _[2]=y-1      _[2]=y-z          _[2]=y
   _[3]=x-1        _[3]=x        _[3]=x-z          _[3]=x
We can now solve the system easily by recursively finding roots of univariate polynomials; the SINGULAR commands are:

> LIB "solve.lib";
> triang_solve(triangMH(j,2),30);   //accuracy of 30 digits
or by applying triangLf_solve(i); directly to $i$.
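The recursion behind triang_solve is simple back-substitution: solve the last polynomial of a triangular set (univariate in the last variable), plug each root into the next polynomial, and recurse. A minimal Python sketch of this idea (using numpy's root finder in place of Laguerre's method; the encoding of the triangular set as coefficient callbacks is my own):

```python
import numpy as np

def solve_triangular(system, partial=()):
    """Solve a triangular system by recursion: system[i](partial) returns
    the coefficient list (highest degree first) of a univariate polynomial
    in the next unknown, given the values `partial` already found."""
    if not system:
        return [partial]
    head, tail = system[0], system[1:]
    solutions = []
    for root in np.roots(head(partial)):
        solutions.extend(solve_triangular(tail, partial + (root,)))
    return solutions

# third triangular component from the SINGULAR session above:
# z^2 + 2z - 1,  y - z,  x - z   (unknowns solved in the order z, y, x)
component = [
    lambda p: [1, 2, -1],        # z^2 + 2z - 1
    lambda p: [1, -p[0]],        # y - z
    lambda p: [1, -p[0]],        # x - z
]
points = solve_triangular(component)
# two points with x = y = z = -1 +- sqrt(2)
```

The full solution set of the example is the union of the points obtained from all four triangular components.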

Resultant methods have recently become popular again, due to the new sparse resultants invented by Gelfand, Kapranov and Zelevinsky [GKZ]. Indeed, they beat the classical Macaulay resultants by far, as is shown, for example, in [We]. The following algorithm, which uses sparse resultants for polynomial solving, is due to Canny and Emiris [CE]; it has been implemented in SINGULAR by M. Wenk [We].

For computing resultants, we have to start with a zero-dimensional polynomial system with as many equations as variables and compute the $u$-resultant (so named by van der Waerden), where the $u_i$ are new variables which have to be specialised later. The construction of the sparse resultant matrix uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes. Specialisations of the $u$-coordinates are then used to reduce the problem to the univariate case: the determinants of the specialised $u$-resultant matrices are univariate polynomials, whose roots are determined by Laguerre's algorithm.

The main advantage of the sparse resultant over the Macaulay resultant is that the size of the resultant matrix depends on the Newton polytopes and not just on the degrees of the input polynomials, and is hence much smaller. Note that by the resultant method we can only determine roots in $(\mathbf{C}\setminus\{0\})^n$.
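The mechanics of "eliminate, specialise, take determinants" can be illustrated with the classical Sylvester resultant (not the sparse resultant of [CE]): treat one variable as a parameter, evaluate the determinant of the resultant matrix at sample points, and interpolate to recover a univariate polynomial whose roots are coordinates of the solutions. A Python sketch under these simplifying assumptions:

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of two univariate polynomials, given as
    coefficient lists (highest degree first); det(S) = Res(f, g)."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n), dtype=complex)
    for i in range(n):                 # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                 # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

def resultant_in_y(f_of_y, g_of_y, deg):
    """Res_x(f, g) as a polynomial in y (coefficients, highest first),
    recovered by evaluating the partly evaluated resultant matrix at
    deg + 1 sample points and interpolating."""
    ys = np.exp(2j * np.pi * np.arange(deg + 1) / (deg + 1))
    vals = [np.linalg.det(sylvester(f_of_y(y), g_of_y(y))) for y in ys]
    return np.linalg.solve(np.vander(ys), np.array(vals))

# f = x^2 + y^2 - 1 (circle), g = x - y (line); eliminating x gives
# Res_x(f, g) = 2y^2 - 1, whose roots +-1/sqrt(2) are the y-coordinates
res = resultant_in_y(lambda y: [1, 0, y * y - 1],
                     lambda y: [1, -y], 2)
y_coords = np.roots(res)
```

The sparse $u$-resultant of [CE] replaces the Sylvester matrix by a much smaller matrix built from the mixed subdivision, but the final steps, determinant evaluation and univariate root finding, are the same.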

Here is a more detailed description of the algorithm (for details see [CE] and [We]):

Solving polynomial systems using resultants (after Gelfand, Kapranov, Zelevinsky (1994) and Canny, Emiris (1997)):

Input: Zero-dimensional system $f_1, \dots, f_n \in \mathbf{C}[x_1, \dots, x_n]$.
Output: Complex roots of $f_1 = \ldots = f_n = 0$ in $(\mathbf{C}\setminus\{0\})^n$.
Most of the time is spent in the second-to-last and, in particular, in the last step, that is, in the determinant computations and the numerical solving of the univariate equations.

As an example, we show the computation of the complex zero-set of the ideals $I_1,\dots,I_5$ (with a precision of 30 digits, see below), which are representative of more than 60 examples. On average, our resultant solver could manage the same examples as Mathematica and MAPLE (but MAPLE found fewer roots). The problems for the resultant solver occurred either because the matrices became too big or because of numerical problems in the subsequent Laguerre solver (no convergence). Sometimes not all solutions were found; sometimes the system returned too many solutions, because multiple solutions were interpreted as distinct ones.

Our experiments do not confirm the claim made in [WEM] that resultant methods are best suited for polynomial system solving, at least not for bigger examples. On the contrary, Gröbner bases and triangular sets showed the best performance, identified the correct number of simple roots most precisely, could treat many more examples and had the least numerical difficulties. The most expensive part was usually the computation of a lexicographical Gröbner basis (computed via FGLM); the triangular set computation was less expensive, and the numerical solving depended very much on the example.

Of course, the Gröbner basis routines in SINGULAR are highly tuned, and the resultant computations can certainly be improved. The examples show that resultant methods are a good alternative for small examples. But for many variables even the sparse resultant matrices become huge, and we have to compute several determinants of these matrices; this is the main bottleneck of the resultant method. Nevertheless, more research remains to be done.

Interpretation of the table: vars: number of variables; mult: number of complex solutions counted with multiplicity; #roots: number of distinct roots (without multiplicity); time: total time; degree of res.: number of (not necessarily simple) roots in $(\mathbf{C}\setminus\{0\})^n$; matrix size: number of rows of the (square) resultant matrix.

Commands used:

                              Triang. systems            Resultant method
No  vars  mult  #roots     roots       time        degree        time        matrix
                           found                   of res.                    size
 1    3    53     32         32       24 sec         31    $\sim 40$ sec   $\sim 160$
 2    4    16     16         16        2 sec         16     $\sim 5$ sec    $\sim 70$
 3    5    70     70         70        5 sec         70    $\sim 50$ sec   $\sim 880$
 4    6   156    156        156       22 sec        156     $> 5000$ sec  $\sim 5460$
 5   10    27     27         27       66 sec         30     $> 5000$ sec $\sim 10000$

       Mathematica 4.0            MAPLE V.5               REDUCE 3.7
No     roots      time        roots      time         roots      time
       found                  found                   found
 1       37     100 sec         -    $> 5000$ sec       -    $> 5000$ sec
 2       16       9 sec         1         1 sec        16      $22$ sec

$I_1 = \big\langle x^3z+6x^2-2xz^3+y^4+yz,\, 5x^2y^2+y^3z+3yz^3+z^3,\,
-x^2z-2xyz^2+4y^4+2y^2z\big\rangle\subset \mathbf{C}[x,y,z]$,

(cf. [WEM]) $I_2 = \big\langle 5x^2+6x+3y+6z+2w+3,\,
x-11z^2+3z+7w-9,\, x+3y+2z-3w^2+13\big\rangle\subset \mathbf{C}[x,y,z,w]$,

(cyclic 5) $I_3 = \big\langle a+b+c+d+e,\, ab+ae+bc+cd+de,\,
abc+abe+ade+bcd+cde,\, abcd+abce+abde+acde+bcde,\, abcde-1\big\rangle\subset
\mathbf{C}[a,\dots,e]$,

(Arnborg 6) $I_4 = \big\langle a+b+c+d+e+f,\, ab+af+bc+cd+de+ef,\,
abc+abf+aef+bcd+cde+def,\, abc...
...+abcef+abdef+acdef+bcdef,\, abcdef-1\big\rangle\subset
\mathbf{C}[a,\dots,f]$,

(POSSO, Methan6\_1) $I_5 = \big\langle-10ai-320000a+64bh+10hj+11hk,\\
160000a-32bh-5bi-5bk-5000b,\, -ci+...
...0hj-11hk-10jk-10k^2-4200k \big\rangle \subset \dots$

The computations, with a precision of 30 digits, were performed on a Pentium Pro 200 with 128 MB. MAPLE and REDUCE could not solve examples 3-5 within our time limit of 5000 seconds; Mathematica stopped with an error.

MAPLE also offers the possibility of preprocessing a set of ideal generators with a view to solving, using the command gsolve;. But within our time limit of $5000$ seconds, this, too, led to a result only for example 2. However, in this case MAPLE is able to compute all roots, by successively applying fsolve(eqn,var,complex); and substituting.

Christoph Lossen