Next: Bibliography Up: Applications of Computer Algebra Previous: Curves with prescribed singularities

# Symbolic-numerical polynomial solving

Algebraic geometry is concerned with investigating the structure of the set of solutions of finitely many polynomial equations. Solving such a system is considered to be part of numerical analysis rather than of algebraic geometry. However, knowing something about the structure of the solution set can actually help in finding the solutions.

Given polynomials f_1, …, f_k ∈ K[x_1, …, x_n], K being R or C, numerical solving means determining the coordinates, up to a given precision, of all (respectively some, respectively one) points of the variety

V(f_1, …, f_k) = { x ∈ K^n | f_1(x) = … = f_k(x) = 0 }.

Algebraically, we are interested in describing the structure of V = V(f_1, …, f_k): in computing its dimension, the number of solutions (if finite), the decomposition into irreducible varieties (e.g., by primary decomposition), the radical of the ideal I = ⟨f_1, …, f_k⟩, or the normalisation of the ring K[x_1, …, x_n]/I. Since the set of solutions depends only on I, indeed only on the radical of I, all the above-mentioned methods can be used to prepare the given polynomial system for better numerical solving.

Hence, algebraic geometry and computer algebra may be used as symbolic preprocessing for easy numerical postprocessing. In particular, the following algorithms and methods may be applied (all being implemented in SINGULAR).

• Find other generators of I, or of ideals with the same solution set, for example triangular sets (based on Gröbner basis computations, see below), which allow better numerical solving: the numerical algorithms become more stable, we can solve overdetermined systems, and we can find all solutions.

• Create other ideals having the same solution set, for example the radical (for obtaining only simple zeros) or a primary decomposition (for splitting the system into several smaller ones).

• Compute a parametrisation of the solution set V, which is possible only for rational V; sometimes it is achieved via the normalisation of V.

• Reduce higher-dimensional solving to 0-dimensional solving by applying a Noether normalisation.
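To illustrate the point about radicals: already in one variable, a numerical root-finder behaves much better on a simple zero than on a multiple one. The following numpy sketch (Python rather than SINGULAR, purely illustrative) contrasts a triple zero with the simple zero of a radical generator:

```python
import numpy as np

# (x - 1)^3 = x^3 - 3x^2 + 3x - 1 has the triple zero 1; a numerical
# root-finder returns three nearly-equal values, each perturbed by a
# relatively large error (roughly eps^(1/3) for a triple root).
multiple = np.roots([1, -3, 3, -1])

# A generator of the radical of ((x - 1)^3) is x - 1: one simple, exact zero.
simple = np.roots([1, -1])
```

The multiple zero is recovered only to a few digits, while the simple zero of the radical generator is found to machine precision.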
Pure numerical solving has the advantages of being fast, flexible in accuracy (by using iterative methods), and applicable not only to polynomial systems. Indeed, the great success of numerical methods during the past years seems to show that symbolic methods are of little use in solving systems coming from real-life problems. However, due to rounding errors, numerical methods are uncertain in principle, often unstable in an unpredictable way, sometimes do not find all solutions, and have problems with overdetermined systems. Moreover, they can hardly treat underdetermined systems (sometimes curves, at most surfaces, as solution sets) and certainly get into trouble near singularities.

On the other hand, symbolic methods are exact and stable in principle. However, they have high complexity, are therefore slow, and in practice are applicable only to small systems (this is the case, in particular, for radical computation, primary decomposition and normalisation). Nevertheless, they are applicable to polynomial systems of any dimension, and for zero-dimensional systems they can predict precisely the number of complex solutions (counted with multiplicities). Moreover, as is well known, symbolic preprocessing of a system of polynomials (even of ordinary and partial differential equations) may not only lead to better conditions for the system to be solved numerically, but can help to find all solutions or even make numerical solving possible in the first place (see below).

There is continuous progress in applying symbolic methods to numerical solving, cf. the various articles in the ISSAC Proceedings, the survey article by Möller [Mo], the textbook by Cox, Little and O'Shea [CLO2] or the recent paper by Verschelde [Ve]. Besides Gröbner bases, many other methods have been used. Recently, resultant methods have been re-popularised, in particular in connection with numerical solving (cf. [CE], [WEM]), partly due to the new sparse resultants of Gelfand, Kapranov and Zelevinsky [GKZ]. I should also mention the work of Stetter (cf. [Ste1,Ste2]), which is not discussed in this paper.

In the following I shall describe roughly how Gröbner bases and resultants can be applied to prepare for numerical solving of zero-dimensional systems. Moreover, I shall present experimental material comparing the performance of the two methods, which seems to be the first practical comparison of resultants and Gröbner bases in connection with numerical solving under equal conditions. The motivation for doing this came from a collaboration with electrical engineers, aiming at the symbolic analysis and sizing of micro-electronic analog circuits.

Let f_1, …, f_k ∈ Q[x_1, …, x_n], and assume that the system f_1 = … = f_k = 0 has only finitely many complex solutions. The problem is to find all solutions up to a given precision. We present two methods: the first computes a lexicographical Gröbner basis and then splits it into triangular sets; the second computes the sparse u-resultant and the determinants of the partly evaluated resultant matrix. Both methods end up with the problem of solving univariate polynomial equations, for which we use Laguerre's method.
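Laguerre's method, used here for the final univariate equations, can be sketched in a few lines. The following Python/numpy version is a generic textbook variant, not SINGULAR's implementation; the starting point and iteration limits are ad hoc:

```python
import numpy as np

def laguerre(coeffs, x=0.0 + 0.0j, tol=1e-12, maxit=200):
    """Find one root of the polynomial with coefficients `coeffs`
    (highest degree first) by Laguerre's method.  Complex arithmetic
    throughout, so complex roots are found as well."""
    p = np.poly1d(coeffs)
    dp, ddp = p.deriv(), p.deriv(2)
    n = p.order
    for _ in range(maxit):
        px = p(x)
        if abs(px) < tol:                  # close enough to a root
            break
        G = dp(x) / px
        H = G * G - ddp(x) / px
        sq = np.sqrt((n - 1) * (n * H - G * G))
        # choose the sign giving the larger denominator (smaller step)
        denom = G + sq if abs(G + sq) >= abs(G - sq) else G - sq
        if denom == 0:                     # degenerate step; give up
            break
        x -= n / denom
    return x
```

For a polynomial of degree n the iteration converges cubically near a simple root; to find all roots one deflates the polynomial after each root found.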

Solving polynomial systems using Gröbner bases and triangular sets:

Input
Zero-dimensional system f_1 = … = f_k = 0, f_i ∈ Q[x_1, …, x_n].
Output
Complex roots of f_1 = … = f_k = 0 in C^n.
• Compute a reduced lexicographical Gröbner basis of the ideal I = ⟨f_1, …, f_k⟩ with respect to the ordering x_1 > x_2 > … > x_n.

• Compute a triangular system: a triangular basis is a reduced lexicographical Gröbner basis G = {g_1, …, g_n} (as many polynomials as variables) with g_i ∈ Q[x_i, …, x_n] and with the leading term of g_i a power of x_i. A triangular system for I consists of triangular bases G_1, …, G_s such that V(I) = V(G_1) ∪ … ∪ V(G_s). Triangular systems can be computed effectively, basically by two different methods, one due to Lazard [La], [D5], the other due to Möller [Mo]. Choose either of these methods to compute a triangular system for I.

• Use a numerical solver (e.g. Laguerre's method) to find all zeros of each triangular basis, one variable at a time. The union of these zero-sets is the desired solution set.
There are several variations on how to compute triangular sets. The zero-sets of the triangular bases need not be disjoint (but can be made disjoint). The triangular bases need not define maximal ideals (but this can be achieved); we may use the factorising Gröbner basis algorithm, etc. Some of these variants have been implemented in SINGULAR by D. Hillebrand, a former student of H.M. Möller (cf. [Hi]).
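Numerically, a triangular basis is solved by plain back-substitution: find the roots of the univariate polynomial, plug each root into the next polynomial, and continue. A Python/numpy sketch (the list-of-callables encoding of a triangular set is ad hoc, chosen only for this illustration), applied to the triangular set z^2+2z-1, y-z, x-z appearing in the SINGULAR session below:

```python
import numpy as np

def solve_triangular(tri):
    """Back-substitution through one triangular basis.  `tri` is a list of
    callables: tri[k](partial) returns the coefficient list (highest degree
    first) of the k-th polynomial, given the tuple `partial` of roots of
    the earlier ones.  (Ad-hoc representation, for illustration only.)"""
    partials = [()]
    for poly in tri:
        partials = [p + (r,) for p in partials for r in np.roots(poly(p))]
    return partials

# The triangular set z^2 + 2z - 1, y - z, x - z: solve for z first,
# then substitute into the polynomials for y and x.
tri = [lambda p: [1, 2, -1],       # z^2 + 2z - 1
       lambda p: [1, -p[0]],       # y - z
       lambda p: [1, -p[1]]]       # x - z
solutions = solve_triangular(tri)  # two points (z, y, x) with x = y = z
```

Each triangular basis of a triangular system is solved this way, and the union of the solution sets is taken at the end.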

SINGULAR example (the output has been changed to save space):

> ring s = 0,(x,y,z),lp;
> ideal i = x2+y+z-1,x+y2+z-1,x+y+z2-1;
> option(redSB);  //option for computing a reduced Groebner basis
> ideal j = groebner(i); j;
j[1]=z6-4z4+4z3-z2, j[2]=2yz2+z4-z2, j[3]=y2-y-z2+z, j[4]=x+y+z2-1
> LIB "triang.lib";
> triangMH(j);    //triangular system with Moeller's method
>                 //(fast, but not necessarily disjoint)
[1]:                    [2]:
_[1]=z2                 _[1]=z4-4z2+4z-1
_[2]=y2-y+z             _[2]=2y+z2-1
_[3]=x+y-1              _[3]=2x+z2-1
> triangMH(j,2);  //triangular system (with Moeller's method,
>                 //improved by Hillebrand) and factorisation
[1]:            [2]:          [3]:              [4]:
_[1]=z          _[1]=z        _[1]=z2+2z-1      _[1]=z-1
_[2]=y          _[2]=y-1      _[2]=y-z          _[2]=y
_[3]=x-1        _[3]=x        _[3]=x-z          _[3]=x

We can now solve the system easily by recursively finding roots of univariate polynomials; the SINGULAR commands are:

> LIB "solve.lib";
> triang_solve(triangMH(j,2),30);   //accuracy of 30 digits

or by applying triangLf_solve(i); directly to the ideal i.

Resultant methods have recently become popular again, due to the new sparse resultants invented by Gelfand, Kapranov and Zelevinsky [GKZ]. Indeed, they beat the classical Macaulay resultants by far, as is shown, for example, in [We]. The following algorithm using sparse resultants for polynomial solving is due to Canny and Emiris [CE]; it has been implemented in SINGULAR by M. Wenk [We].

For computing resultants, we have to start with a zero-dimensional polynomial system f_1 = … = f_n = 0 with as many equations as variables, and have to compute the u-resultant (so named by van der Waerden) of f_0, f_1, …, f_n, where f_0 = u_0 + u_1 x_1 + … + u_n x_n and u_0, …, u_n are new variables which have to be specialised later. The construction of the sparse resultant matrix uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes. Specialisations of the u-coordinates are then used to reduce the problem to the univariate case. The determinants of the specialised u-resultant matrices are univariate polynomials, whose roots are determined by Laguerre's algorithm.

The main advantage of the sparse resultant over the Macaulay resultant is that the size of the resultant matrices depends on the Newton polytopes and not just on the degrees of the input polynomials, and hence is much smaller. Note that by the resultant method we can only determine roots in the torus (C^*)^n, that is, roots with all coordinates non-zero.
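The Newton polytope of a polynomial is the convex hull of its exponent vectors, and the sparse resultant construction works with the Minkowski sum of these polytopes; this is why only the monomials actually occurring, not the total degrees, determine the matrix size. A toy Python sketch of the Minkowski sum of two exponent-vector supports (convex hulls omitted for brevity):

```python
# Minkowski sum of two supports (sets of integer exponent vectors).
# The Newton polytope is the convex hull of such a support; here we
# only sum the supports themselves, omitting the hull computation.
def minkowski_sum(A, B):
    return {tuple(x + y for x, y in zip(a, b)) for a in A for b in B}

# Supports of x^2 + y + 1 and x*y + 1 as sets of exponent vectors:
A = {(2, 0), (0, 1), (0, 0)}
B = {(1, 1), (0, 0)}
Q = minkowski_sum(A, B)
```

For sparse polynomials these sums stay far smaller than the full simplex of all monomials up to the total degree, which is what the Macaulay construction would use.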

Here is a more detailed description of the algorithm (for details see [CE] and [We]):

Solving polynomial systems using resultants (after Gelfand, Kapranov, Zelevinsky (1994) and Canny, Emiris (1997)):

Input
Zero-dimensional system f_1 = … = f_n = 0, f_i ∈ Q[x_1, …, x_n], with as many equations as variables.

Output
Complex roots of f_1 = … = f_n = 0 in C^n.
• Add f_0 = u_0 + u_1 x_1 + … + u_n x_n and compute the Newton polytopes Q_i of f_i, i = 0, …, n.

• Compute, using linear programming, a mixed polyhedral subdivision of the Minkowski sum Q = Q_0 + Q_1 + … + Q_n. Translate Q by a small vector in R^n such that lattice points become interior points.

• Construct from this subdivision the square resultant matrix M, whose entries are either numbers or one of the variables u_0, …, u_n.

• Set u = (t, 0, …, 0, -1, 0, …, 0), with -1 at the i-th place. The set of all roots of the univariate polynomial det M(t), for i = 1, …, n, contains the i-th components of all complex solutions of the system (in an unordered manner).

• For each of these values, identify the components which were computed in the previous step, i.e. match the coordinate values belonging to the same point. This is done by substituting random numbers for u_1, …, u_n and t for u_0 in M, computing the determinant, and solving this for t.
Most of the time is spent in the second-to-last and, in particular, in the last step.
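One simple way to obtain the univariate polynomial det M(t) in these last steps is to evaluate the determinant numerically at enough sample points and interpolate, then hand the coefficients to the univariate solver. A Python/numpy sketch with a toy 2×2 matrix (the `det_poly` helper and the sample matrix are hypothetical, not part of the actual implementation):

```python
import numpy as np

def det_poly(M, deg):
    """Interpolate det(M(t)) as a univariate polynomial of degree <= deg
    by evaluating the determinant at deg + 1 sample points."""
    ts = np.arange(deg + 1, dtype=float)
    dets = [np.linalg.det(M(t)) for t in ts]
    return np.polyfit(ts, dets, deg)       # coefficients, highest degree first

# Toy "specialised resultant matrix": det M(t) = t^2 - 1.
M = lambda t: np.array([[t, 1.0], [1.0, t]])
coeffs = det_poly(M, 2)
roots = np.roots(coeffs)                   # the candidate coordinate values
```

In the real algorithm the matrices are much larger and a determinant has to be computed for every coordinate direction, which is where most of the time goes.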

As an example, we show the computation of the complex zero-sets of the ideals below (with a precision of 30 digits), which are representative of more than 60 examples. On average, our resultant solver could manage the same examples as Mathematica and MAPLE (but MAPLE found fewer roots). The problems for the resultant solver occurred either because the matrices became too big or because of numerical problems in the subsequent Laguerre solver (no convergence). Sometimes not all solutions were found; sometimes the system returned too many solutions because multiple solutions were interpreted as distinct ones.

Our experiments do not confirm the claim made in [WEM] that resultant methods are best suited to polynomial system solving, at least not for bigger examples. On the contrary, Gröbner bases and triangular sets showed the best performance, identified the correct number of simple roots most precisely, could treat many more examples and had the fewest numerical difficulties. The most expensive part was usually the computation of a lexicographical Gröbner basis (computed via FGLM); the triangular set computation was less expensive, and the numerical solving depended very much on the example.

Of course, the Gröbner basis routines in SINGULAR are highly tuned, and the resultant computations can certainly be improved. The examples show that resultant methods are a good alternative for small examples. But for many variables even the sparse resultant matrices become huge, and we have to compute several determinants of these matrices. This is the main bottleneck of the resultant method. Nevertheless, more research remains to be done.

Interpretation of the tables: vars: number of variables; mult: number of complex solutions counted with multiplicity; # roots: number of distinct roots, without multiplicity; time: total time; degree of res.: number of (not necessarily simple) roots in C of the specialised resultant; matrix size: number of rows of the (square) resultant matrix.

Commands used:

• triang_solve(I); and ures_solve(I); (SINGULAR),
• NSolve(eqns,vars,30); (Mathematica),
• evalf(solve(eqns,vars),30); (MAPLE),
• GROESOLVE(eqns,vars);, respectively solve(I,var); with options on complex; on rounded; precision 30; (REDUCE).
SINGULAR 1-3-7:

| No | vars | mult | # roots | roots found (triang.) | time (triang.) | degree of res. | time (res.) | matrix size |
|---:|-----:|-----:|--------:|----------------------:|---------------:|---------------:|------------:|------------:|
| 1  | 3    | 53   | 32      | 32                    | 24 sec         | -              | 31 sec      | -           |
| 2  | 4    | 16   | 16      | 16                    | 2 sec          | -              | 16 sec      | -           |
| 3  | 5    | 70   | 70      | 70                    | 5 sec          | -              | 70 sec      | -           |
| 4  | 6    | 156  | 156     | 156                   | 22 sec         | -              | 156 sec     | -           |
| 5  | 10   | 27   | 27      | 27                    | 66 sec         | -              | 30 sec      | -           |

Mathematica 4.0, MAPLE V.5, REDUCE 3.7:

| No | roots found (Mathematica) | time (Mathematica) | roots found (MAPLE) | time (MAPLE) | roots found (REDUCE) | time (REDUCE) |
|---:|--------------------------:|-------------------:|--------------------:|-------------:|---------------------:|--------------:|
| 1  | 37                        | 100 sec            | -                   | -            | -                    | -             |
| 2  | 16                        | 9 sec              | 1                   | 1 sec        | -                    | 16 sec        |

The examples: no. 2 is taken from [WEM], no. 3 is cyclic 5, no. 4 is Arnborg 6, and no. 5 is the example Methan from the POSSO collection.

The computations, with a precision of 30 digits, were performed on a Pentium Pro 200 with 128 MB. MAPLE and REDUCE could not solve examples 3-5 within our time limit of 5000 seconds; Mathematica stopped with an error.

MAPLE also offers the possibility to preprocess a set of ideal generators with a view to solving, using the command gsolve;. But within our time limit of 5000 seconds, this also led only to a result for example 2. However, in this case MAPLE is able to compute all roots, by successively applying fsolve(eqn,var,complex); and substituting.

Christoph Lossen
2001-03-21