Over the real or complex numbers, numerical solving means determining, up to a given precision, the coordinates of all (respectively some, respectively one) of the points of the variety.
Hence, algebraic geometry and computer algebra may be used as symbolic preprocessing for easy numerical postprocessing. In particular, the following algorithms and methods may be applied (all being implemented in SINGULAR).
On the other hand, symbolic methods are in principle exact and stable. However, they have a high complexity, are therefore slow, and in practice are applicable only to small systems (this is the case, in particular, for radical computation, primary decomposition and normalisation). Nevertheless, they are applicable to any polynomial system of any dimension, and for zero-dimensional systems they can predict precisely the number of complex solutions (counted with multiplicities). Moreover, as is well known, symbolic preprocessing of a system of polynomials (even of ordinary and partial differential equations) may not only lead to better conditions for the system to be solved numerically, but can help to find all solutions or even make numerical solving possible in the first place (see below).
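The solution count mentioned above can be made concrete: for a zero-dimensional ideal, the number of complex solutions counted with multiplicity equals the number of standard monomials, i.e. monomials divisible by no leading term of a Gröbner basis. A minimal Python sketch (our own illustration, not part of SINGULAR), using the leading terms x, y2, yz2, z6 of the lexicographical Gröbner basis computed in the SINGULAR example below:

```python
from itertools import product

# Leading exponents of the lex Groebner basis of the example ideal
# (x2+y+z-1, x+y2+z-1, x+y+z2-1): leading terms x, y^2, y*z^2, z^6.
leads = [(1, 0, 0), (0, 2, 0), (0, 1, 2), (0, 0, 6)]

def divisible(m, lead):
    # monomial m is divisible by lead iff every exponent is at least as large
    return all(mi >= li for mi, li in zip(m, lead))

# The pure-power leading terms x, y^2, z^6 bound the exponents of any
# standard monomial, so a finite search suffices.
standard = [m for m in product(range(2), range(3), range(7))
            if not any(divisible(m, l) for l in leads)]
print(len(standard))
```

The count is 8, which agrees with the Bézout number 2*2*2 of the example system.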
There is continuous progress in applying symbolic methods to numerical solving, cf. the various articles in the ISSAC Proceedings, the survey article by Möller [Mo], the textbook by Cox, Little and O'Shea [CLO2] or the recent paper by Verschelde [Ve]. Besides Gröbner bases, many other methods have been used. Recently, resultant methods have been re-popularised, in particular in connection with numerical solving (cf. [CE], [WEM]), partly due to the new sparse resultants of Gelfand, Kapranov and Zelevinsky [GKZ]. I should also mention the work of Stetter (cf. [Ste1, Ste2]), which is not discussed in this paper.
In the following I shall describe roughly how Gröbner bases and resultants can be applied to prepare for numerical solving of zero-dimensional systems. Moreover, I shall present experimental material comparing the performance of the two methods, which seems to be the first practical comparison of resultants and Gröbner bases in connection with numerical solving under equal conditions. The motivation for doing this came from a collaboration with electrical engineers, aiming at the symbolic analysis and sizing of microelectronic analog circuits.
Consider a system of polynomial equations, and assume that it has only finitely many complex solutions. The problem is to find all solutions up to a given precision. We present two methods: the first computes a lexicographical Gröbner basis and then splits it into triangular sets; the second computes the sparse u-resultant and the determinants of the partly evaluated resultant matrix. Both methods end up with the problem of solving univariate polynomial equations, for which we use Laguerre's method.
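Both pipelines end with univariate root finding. The following is a minimal Python sketch of Laguerre's method for one root (function name, defaults and starting point are our own choices, not part of SINGULAR's implementation):

```python
import cmath

def laguerre(coeffs, x0=0.0, tol=1e-12, maxit=100):
    """Find one (complex) root of the polynomial with the given
    coefficients (highest degree first) by Laguerre's iteration."""
    n = len(coeffs) - 1            # degree of the polynomial
    x = complex(x0)
    for _ in range(maxit):
        # Evaluate p, p' and p'' at x simultaneously by Horner's scheme.
        p = pp = ppp = 0.0
        for c in coeffs:
            ppp = ppp * x + pp     # accumulates p''/2
            pp = pp * x + p
            p = p * x + c
        ppp *= 2.0
        if abs(p) < tol:
            return x
        G = pp / p
        H = G * G - ppp / p
        sq = cmath.sqrt((n - 1) * (n * H - G * G))
        # choose the sign giving the larger denominator (more stable)
        d = max(G + sq, G - sq, key=abs)
        x -= n / d
    return x
```

For example, `laguerre([1, -6, 11, -6])` returns one of the roots 1, 2, 3 of x^3 - 6x^2 + 11x - 6, and `laguerre([1, 0, 1])` a root of x^2 + 1; in a full solver the polynomial is deflated and the iteration repeated for the remaining roots.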
Solving polynomial systems using Gröbner bases and triangular sets:
SINGULAR example (the output has been changed to save space):
> ring s = 0,(x,y,z),lp;
> ideal i = x2+y+z-1, x+y2+z-1, x+y+z2-1;
> option(redSB);      //option for computing a reduced Groebner basis
> ideal j = groebner(i); j;
j[1]=z6-4z4+4z3-z2
j[2]=2yz2+z4-z2
j[3]=y2-y-z2+z
j[4]=x+y+z2-1
> LIB "triang.lib";
> triangMH(j);        //triangular system with Moeller's method
                      //(fast, but not necessarily disjoint)
[1]:
   _[1]=z2
   _[2]=y2-y+z
   _[3]=x+y-1
[2]:
   _[1]=z4-4z2+4z-1
   _[2]=2y+z2-1
   _[3]=2x+z2-1
> triangMH(j,2);      //triangular system (with Moeller's method,
                      //improved by Hillebrand) and factorisation
[1]:
   _[1]=z
   _[2]=y
   _[3]=x-1
[2]:
   _[1]=z
   _[2]=y-1
   _[3]=x
[3]:
   _[1]=z2+2z-1
   _[2]=y-z
   _[3]=x-z
[4]:
   _[1]=z-1
   _[2]=y
   _[3]=x

We can now solve the system easily by recursively finding roots of univariate polynomials; the SINGULAR commands are:
> LIB "solve.lib";
> triang_solve(triangMH(j,2),30);   //accuracy of 30 digits

or by applying

triangLf_solve(i);

directly to the ideal i.
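The recursive solving step can also be sketched outside SINGULAR. The following Python fragment (our own illustration, using numpy.roots in place of Laguerre's method) solves the second triangular set computed above, z4-4z2+4z-1, 2y+z2-1, 2x+z2-1, by finding the roots of the univariate polynomial in z and substituting back:

```python
import numpy as np

def solve_triangular(tri):
    """Solve a triangular system.  tri[k] maps a partial solution
    (first k coordinates already fixed) to the coefficient list,
    highest degree first, of a univariate polynomial in the next
    variable."""
    partial = [()]
    for poly in tri:
        nxt = []
        for sol in partial:
            for r in np.roots(poly(sol)):
                nxt.append(sol + (r,))   # extend by each root
        partial = nxt
    return partial

# Triangular set in the order z, y, x; s[0] is the value found for z.
tri = [
    lambda s: [1, 0, -4, 4, -1],   # z^4 - 4z^2 + 4z - 1
    lambda s: [2, s[0]**2 - 1],    # 2y + z^2 - 1
    lambda s: [2, s[0]**2 - 1],    # 2x + z^2 - 1
]
sols = solve_triangular(tri)       # four solutions (z, y, x)
```

Since z = 1 is a double root of the first polynomial, two of the four numerical solutions coincide, matching the multiplicity structure seen in the factorised triangular sets.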
Resultant methods have recently become popular again, due to the new sparse resultants invented by Gelfand, Kapranov and Zelevinsky [GKZ]. Indeed, they outperform the classical Macaulay resultants by far, as is shown, for example, in [We]. The following algorithm for using sparse resultants for polynomial solving is due to Canny and Emiris [CE]; it has been implemented in SINGULAR by M. Wenk [We].
For computing resultants, we have to start with a zero-dimensional polynomial system with as many equations as variables, and have to compute the u-resultant (so named by van der Waerden), where the u-coordinates are new variables which have to be specialised later. The construction of the sparse resultant matrix uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes. Specialisations of the u-coordinates are then used to reduce the problem to the univariate case. The determinants of the specialised u-resultant matrices are univariate polynomials, whose roots are determined by Laguerre's algorithm.
The main advantage of sparse resultants over Macaulay resultants is that the size of the resultant matrices depends on the Newton polytopes and not just on the degrees of the input polynomials, and is therefore much smaller. Note that the resultant method can only determine roots with all coordinates non-zero.
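The specialisation idea can be illustrated on the smallest possible case, using the classical Sylvester resultant rather than the sparse resultant matrices of the actual implementation. For f = x^2 + y^2 - 1 and g = x - y (an example of our own choosing), treating y as a parameter, the resultant with respect to x is a univariate polynomial in y whose roots are the y-coordinates of the common zeros; we recover it by evaluating determinants of specialised matrices and interpolating:

```python
import numpy as np

def sylvester_det(y):
    """Determinant of the Sylvester matrix of f = x^2 + y^2 - 1 and
    g = x - y with respect to x, with y specialised to a number."""
    f = [1.0, 0.0, y * y - 1.0]     # coefficients of f in x
    g = [1.0, -y]                   # coefficients of g in x
    S = np.array([[f[0], f[1], f[2]],
                  [g[0], g[1], 0.0],
                  [0.0,  g[0], g[1]]])
    return np.linalg.det(S)

# res_x(f, g) has degree at most 2 in y: recover its coefficients from
# three specialised determinants by interpolation, then solve in y.
ys = np.array([-1.0, 0.0, 1.0])
coeffs = np.polyfit(ys, [sylvester_det(t) for t in ys], 2)
roots_y = np.roots(coeffs)          # y-coordinates of the common zeros
```

Here the interpolated resultant is 2y^2 - 1, whose roots are the y-coordinates of the two intersection points of the circle and the line. The sparse resultant method follows the same evaluate-and-interpolate pattern, but with much larger matrices built from the mixed subdivision.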
Here is a more detailed description of the algorithm (for details see [CE] and [We]):
Solving polynomial systems using resultants (after Gelfand, Kapranov, Zelevinsky (1994) and Canny, Emiris (1997)):
As an example, we show the computation of the complex zero-set of the ideals (with a precision of 30 digits, see below), which are representative of more than 60 examples. On average, our resultant solver could manage the same examples as Mathematica and MAPLE (though MAPLE found fewer roots). The problems for the resultant solver occurred either because the matrices became too large or because of numerical problems in the subsequent Laguerre solver (no convergence). Sometimes not all solutions were found; sometimes the system returned too many solutions because multiple solutions were interpreted as distinct ones.
Our experiments do not confirm the claim made in [WEM] that resultant methods are best suited to polynomial system solving, at least not for bigger examples. On the contrary, Gröbner bases and triangular sets showed the best performance, identified the correct number of simple roots most precisely, could treat many more examples and had the fewest numerical difficulties. The most expensive part was usually the computation of a lexicographical Gröbner basis (computed via FGLM); the triangular set computation was less expensive, and the cost of the numerical solving depended very much on the example.
Of course, the Gröbner bases in SINGULAR are highly tuned, and the resultant computations can certainly be improved. The examples show that resultant methods are a good alternative for small examples. But for many variables even the sparse resultant matrices become huge, and several determinants of these matrices have to be computed; this is the main bottleneck of the resultant method. Nevertheless, more research remains to be done.
Interpretation of the table: vars: number of variables; mult: number of complex solutions counted with multiplicity; # roots: number of distinct roots (without multiplicity); time: total time; degree of res.: degree of the resultant, i.e. the number of (not necessarily simple) roots; matrix size: number of rows of the (square) resultant matrix.
[Table: timing comparison of the triangular-set and resultant methods in SINGULAR with Mathematica 4.0, MAPLE V.5 and REDUCE 3.7, with the columns described above.]
The computations, with a precision of 30 digits, were performed on a Pentium Pro 200 with 128 MB. MAPLE and REDUCE could not solve examples 3-5 within our time limit of 5000 seconds; Mathematica stopped with an error.
MAPLE also offers the option of preprocessing a set of ideal generators with a view to solving, using the command gsolve;. But within our time limit of 5000 seconds, this too led to a result only for example 2. However, in this case MAPLE is able to compute all roots by successively applying fsolve(eqn,var,complex); and substituting.