MOCAA M3 workshop in computational algebra

Talk Abstracts

Triangular Decompositions for Solving Parametric Polynomial Systems

Changbo Chen · University of Western Ontario


We extend to parametric constructible sets and parametric semi-algebraic sets the notion of comprehensive triangular decomposition, which was originally limited to algebraic varieties. We propose algorithms for computing all these decompositions. This establishes a unified framework for solving parametric polynomial systems by means of triangular decompositions. In addition, we apply such tools to analyze the stability of several biochemical networks which are modelled by dynamical systems.

Computations with Ore Polynomial Matrices

Howard Cheng · University of Lethbridge


We give an overview of some of the computational problems involving Ore polynomial matrices. In particular, we examine the algorithms for rank, left nullspace, row-reduced form and Popov form (and the associated unimodular transformation matrices). In many cases, straightforward elimination-based approaches are impractical because of intermediate expression swell in the coefficients. We examine techniques to control coefficient growth for polynomial matrices and extend these techniques to the case of Ore polynomial matrices. This results in fraction-free and modular algorithms for these problems.
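
The Ore case itself is beyond a short example, but the fraction-free idea the talk builds on can be seen in Bareiss elimination over the integers: every division below is exact, so entries stay integral and are bounded by minors of the input, which controls intermediate growth. This sketch (my own illustration, not from the talk) computes a determinant fraction-free:

```python
def bareiss_det(M):
    """Determinant of an integer matrix by Bareiss' fraction-free
    elimination: Gaussian elimination where each 2x2 cross-product is
    divided (exactly) by the previous pivot, keeping entries integral."""
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:  # find a nonzero pivot below and swap it in
            piv = next((i for i in range(k + 1, n) if A[i][k] != 0), None)
            if piv is None:
                return 0
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # exact integer division: prev divides the cross-product
                A[i][j] = (A[k][k] * A[i][j] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]
```

Each intermediate entry is itself a minor of the original matrix, which is the sense in which coefficient growth is "controlled" compared to naive fraction arithmetic.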

Multiplication modulo triangular sets.

Muhammad Chowdhury · University of Western Ontario


Multiplication modulo a multivariate triangular set is a difficult problem. In this talk, we extend a construction of Li, Moreno Maza and Schost. Given a triangular set T, we build another triangular set V, where Vi = e Ti + (1 - e) Ui and Ui has known roots. In general, the required precision in e will be d1 d2 ... dn. We describe some families of special cases for which lower bounds are available.
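
To make the setting concrete, here is a small sympy sketch (my own toy example, not from the talk) of the basic operation being optimized: multiplying two polynomials modulo a monic triangular set by plain remaindering, reducing in the top variable first.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# a toy *monic* triangular set T = (T1(x1), T2(x1, x2)); monic means
# ordinary polynomial remaindering suffices for the reduction
T1 = x1**2 - 2
T2 = x2**2 - x1

def mul_mod_T(a, b):
    """Multiply a and b modulo (T1, T2): expand the product, then
    reduce in x2 first, then in x1."""
    p = sp.expand(a * b)
    p = sp.rem(p, T2, x2)
    return sp.rem(sp.expand(p), T1, x1)
```

For instance, (x2 + x1)(x2 - x1) reduces to x1 - 2, since x2^2 ≡ x1 and x1^2 ≡ 2 modulo T.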

Advanced Maple Programming

Juergen Gerhard · Maplesoft


This tutorial presentation will discuss some of the more advanced programming features in Maple, such as overloading, modules, how to construct Maple packages, as well as guidelines for efficient programming in Maple.

Using compiled code from within Maple

Juergen Gerhard · Maplesoft


This tutorial presentation will focus on discussing Maple's External Calling interface, which allows users to call externally compiled code written in C, Fortran, or Java from within Maple. The Maple Compiler, which enables on-the-fly compilation of Maple code, will be discussed as well.

Detecting perfect powers of lacunary polynomials.

Mark Giesbrecht · University of Waterloo


When computing with polynomials having only a few non-zero coefficients, we want algorithms whose speed is proportional only to their compact size - the number and length of non-zero coefficients, and the logarithm of their degree - and not proportional to their possibly huge degree. Many problems associated with lacunary polynomials have been proven computationally intractable, such as determining squarefreeness and computing GCDs. It is therefore heartening that we can now provide an algorithm for determining if a lacunary polynomial is a perfect power, which runs in time polynomial in its compact size. To accomplish this we employ a randomized evaluation scheme and prove its effectiveness with a character sum argument. The algorithm is shown to be fast both in theory and in practice.

This is joint work with Daniel Roche.
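
The key point of the randomized evaluation scheme is that a lacunary polynomial can be evaluated fast: each term costs only a modular exponentiation, so the time depends on the logarithm of the degree. As a simplified illustration (my own sketch, restricted to squares; the talk handles general r-th powers and proves the failure probability via character sums), one can test whether f(a) mod p is a quadratic residue for random a:

```python
import random

def probably_square(terms, trials=20, p=1_000_003):
    """Monte Carlo test of whether a lacunary integer polynomial is a
    perfect square: if f = g^2 then f(a) is a square mod p for every a,
    so one quadratic nonresidue value certifies 'not a square'.
    terms: list of (coefficient, exponent) pairs; p is an odd prime."""
    for _ in range(trials):
        a = random.randrange(1, p)
        # fast evaluation: cost per term is log(degree), not degree
        v = sum(c * pow(a, e, p) for c, e in terms) % p
        if v != 0 and pow(v, (p - 1) // 2, p) != 1:  # Euler's criterion
            return False
    return True
```

For a true square such as (x^50 + 3x + 1)^2, every evaluation passes; for a non-square the test fails quickly with high probability.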

A bound that reduces differential Nullstellensatz to the algebraic one.

Oleg Golubitsky · University of Western Ontario


A system of algebraic differential equations can be considered as a polynomial system if we replace each (partial) derivative present in it by an algebraic indeterminate. If the differential system is consistent, so is the corresponding algebraic one. The converse is not true: for example, the differential system {u'=u, u''=1} has no solutions, while its algebraic counterpart {x1=x0, x2=1} does. However, if a sufficient number of derivatives of the original differential polynomials is added to the differential system (this does not affect the solution set), the converse will also hold. In this way, the problem of checking consistency of a differential algebraic system can be reduced to a purely algebraic problem (the Nullstellensatz), for which efficient algorithms and tight bounds are known. We will give an upper bound on the required number of derivatives.

This is joint work with M. Kondratieva, A. Ovchinnikov, and A. Szanto.
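
The {u'=u, u''=1} example is linear, so the reduction can be checked with plain linear solving. A small sympy sketch (variable names mine): the algebraic counterpart is consistent, but after adding two rounds of prolongations it becomes inconsistent, reflecting the inconsistency of the differential system.

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')  # stand-ins for u, u', u'', u'''

# algebraic counterpart of {u' = u, u'' = 1}: consistent
alg = sp.linsolve([x1 - x0, x2 - 1], [x0, x1, x2])

# prolongations: differentiating u' = u gives u'' = u' (x2 = x1) and
# u''' = u'' (x3 = x2); differentiating u'' = 1 gives u''' = 0 (x3 = 0)
prolonged = sp.linsolve([x1 - x0, x2 - 1, x2 - x1, x3 - x2, x3],
                        [x0, x1, x2, x3])
```

The first system has solutions while the second is empty, so two derivatives suffice here; the talk's bound quantifies how many are needed in general.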

A new solution to the polynomial GCD normalization problem

Mahdi Javadi · Simon Fraser University


When computing a greatest common divisor of two polynomials, the sparse modular gcd algorithm of Richard Zippel cannot be applied directly if the leading coefficient of the gcd in the main variable is a sum of two or more terms. This is because we are not able to scale the univariate images of the gcd (in the main variable) consistently. This is called the "normalization problem".

In 2005 de Kleine, Monagan and Wittkopf presented a solution to the normalization problem. Their idea is to scale each univariate image with an unknown scaling factor; as a result, their algorithm might need more univariate images of the gcd in order to construct a consistent system of linear equations.

In this talk we will present a new solution to the normalization problem which does not require introducing any new unknown variable. In this method we will take advantage of the fact that we know the form of the leading coefficient, i.e. we know which terms are present in the leading coefficient. Using this information, we will be able to compute the leading coefficient efficiently by solving a linear system of equations and hence scale each univariate image with the correct scaling multiple. Another advantage of this method is that we can further parallelize the algorithm after computing the leading coefficient.
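
A small sympy sketch of the issue (my own toy example; it assumes the leading coefficient is already known, whereas computing it from its known support is the talk's contribution): univariate gcd images come back monic, so they are inconsistently scaled, but multiplying each by the evaluated leading coefficient restores consistency.

```python
import sympy as sp

x, y = sp.symbols('x y')

g = (y + 3)*x + y            # target gcd; its lc in x, y + 3, has two terms
a = sp.expand(g * (x + 2))
b = sp.expand(g * (x - 5))

images = {}
for beta in (1, 2):
    # the univariate gcd image is only determined up to a constant;
    # normalize it to monic to model what a modular algorithm sees
    img = sp.monic(sp.gcd(a.subs(y, beta), b.subs(y, beta)), x)
    # knowing the leading coefficient lets us rescale each image correctly
    images[beta] = sp.expand((beta + 3) * img)
```

Once every image is scaled by the true leading coefficient, the coefficients of g (linear in y here) can be recovered by interpolation, and the images can even be processed in parallel.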

Recent Progress with Order Bases

George Labahn · University of Waterloo


Order bases provide a mechanism for describing all the solutions of certain rational approximation problems (as a module). They are used for finding inversion formulae for structured matrices, for guessing recurrence formulae, for finding matrix normal forms and for various Padé approximation problems. In this talk we will describe some recent developments in the use of these bases, along with fast methods for finding them.

Computing with Constructible Sets

Liyun Li · University of Western Ontario


Constructible sets appear naturally when solving systems of equations, in particular in the presence of parameters. Our Maple implementation of comprehensive triangular decompositions has led us to dedicate a module, ConstructibleSetTools, to computing with constructible sets. The main operations are Difference, MakePairwiseDisjoint (MPD) and Symmetrically Make Pairwise Disjoint (SMPD). Drawing on the traditional Euclidean algorithm for computing GCDs and the factor refinement method of Bach, Driscoll and Shallit, we introduce efficient algorithms for Difference, MPD and SMPD that exploit the triangular structure of their representations. We also present applications to the verification of polynomial solvers and to invariant theory.

LinearAlgebra: some missing documentation

David Linder · Maplesoft


The LinearAlgebra package, introduced in Maple 6, is a typical piece of Maple. Most, but not all, of the desired functionality is present and documented. But an important set of tasks or usage cases is not as well documented as it might be. Partly this reflects Maple's general strategy of documenting routines rather than tasks, and partly it reflects the breadth of the discipline. Some important or recurring common tasks will be illustrated. If time permits, additional examples illustrating efficiency will be discussed.

A Symbolic-Numeric Approach to Computing Inertia of Products of Matrices of Rational Numbers

John May · Maplesoft


Consider a rational matrix, particularly one whose entries have large numerators and denominators but which is presented as a product of very sparse matrices with relatively small entries. We will discuss symbolic-numeric hybrid strategies that effectively exploit the product structure to compute the inertia (the number of positive, negative, and zero eigenvalues) of such a matrix in the nonsingular case. The method works by computing the numerical LU decomposition of the product via a succession of floating point QR decompositions of the small, sparse factors in the product representation.

This is joint work with B. David Saunders, and David H. Wood in the Department of Computer Science at the University of Delaware.
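
The talk's hybrid method avoids forming the large product explicitly. As a baseline, the definition of inertia, and the congruence invariance (Sylvester's law of inertia) that makes it a natural target, can be illustrated numerically with toy data (my own, not from the talk):

```python
import numpy as np

def inertia(A, tol=1e-9):
    """Naive inertia of a symmetric matrix: counts of positive,
    negative, and (numerically) zero eigenvalues."""
    w = np.linalg.eigvalsh(A)
    return (int((w > tol).sum()), int((w < -tol).sum()),
            int((np.abs(w) <= tol).sum()))

# A = B^T D B is congruent to D (B nonsingular), so by Sylvester's law
# of inertia it has the same inertia as D, namely (2, 1, 0)
B = np.array([[1., 2., 0.], [0., 1., 3.], [1., 0., 1.]])
D = np.diag([3., -2., 5.])
A = B.T @ D @ B
```

The naive route (form the product, compute all eigenvalues) is exactly what becomes infeasible when the product has huge rational entries, which motivates working with the small sparse factors instead.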

Sparse polynomial arithmetic part I: High Performance

Roman Pearce · Simon Fraser University


How should we multiply and divide sparse polynomials? In his 1984 paper, David Stoutemyer concluded that the best overall approach was recursive dense. That is, a polynomial in n variables is stored as a univariate polynomial, with coefficients that are recursive polynomials in the remaining n-1 variables.

In this talk we revisit the question and examine algorithms and data structures for the distributed representation, where polynomials are stored as (coefficient, monomial) pairs. We show how commonly used algorithms and data structures perform poorly with respect to cache, and we demonstrate a high performance implementation based on a "heap of pointers" approach used by the ALTRAN system in 1974.
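
A minimal Python sketch of the heap idea (illustrative only; the authors' implementation is low-level and cache-conscious): each term of a spawns a stream of its products with the terms of b, and a heap merges the streams in monomial order, so only one heap entry per stream is live and like monomials are combined as they emerge.

```python
import heapq

def sparse_mul(a, b):
    """Multiply sparse univariate polynomials given as lists of
    (exponent, coefficient) pairs sorted by decreasing exponent."""
    if not a or not b:
        return []
    # one stream per term of a; the heap holds each stream's next exponent
    heap = [(-(a[i][0] + b[0][0]), i, 0) for i in range(len(a))]
    heapq.heapify(heap)
    result = []
    while heap:
        negexp, i, j = heapq.heappop(heap)
        exp, c = -negexp, a[i][1] * b[j][1]
        if result and result[-1][0] == exp:
            result[-1] = (exp, result[-1][1] + c)  # combine like monomials
        else:
            result.append((exp, c))
        if j + 1 < len(b):  # advance stream i to its next product term
            heapq.heappush(heap, (-(a[i][0] + b[j + 1][0]), i, j + 1))
    return [(e, c) for e, c in result if c != 0]
```

With #a streams in a heap, merging costs O(#a #b log #a) monomial comparisons, and the output is produced directly in sorted order with no intermediate blow-up.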

Our main contribution is a new algorithm for pseudo-division. We use the heap of pointers to delay all coefficient arithmetic and eliminate extraneous scaling of terms. We also dynamically switch from using a heap that is the size of the quotient to a heap based on the divisor, to ensure optimal complexity when the quotient is larger than the divisor. The resulting algorithm divides with the same complexity as multiplying the quotient and the divisor and merging their product with the dividend.

We have implemented polynomial multiplication, exact division, and division with remainder (using pseudo-division). We will present benchmarks comparing our software with Maple, Magma, Singular, Trip and Pari.

Sparse polynomial arithmetic part II: A proposal to dramatically speed up polynomials in Maple.

Michael B. Monagan · Simon Fraser University


In the previous talk, Roman Pearce described our new algorithms that multiply and divide polynomials in a distributed representation. Our algorithms use an auxiliary structure called a "chained pointer heap". Our benchmarks show that on large polynomial multiplications and divisions, our code is consistently 100+ times faster than Maple. Moreover, we are consistently 10+ times faster than Singular and Magma. Can this code be integrated into Maple in such a way that the Maple user will also see the factor of 100+ times improvement on large problems, and a good improvement on small and medium sized problems too?

In the talk I will show our first integration attempt, which was to reprogram the Maple library routines `expand/bigprod` and `expand/bigdiv` (these are invoked by expand(f*g) and divide(f,g) on moderately large inputs f and g) to use our new codes. I will show good improvements to factor and gcdex, and also some not so good improvements. It turns out that even though the conversions to our data structures and back to Maple are coded in C, there is a high overhead when converting to Maple's "SUM of PRODs" data structure, which kills much of the gain.

We will propose a simple change to Maple's default representation for polynomials so that we can realize the full benefit of our new algorithms for multiplication and division. Our data representation will be much more compact. And since our algorithms do not generate garbage, Maple will be able to multiply and divide much larger problems, much faster. The change will be consistent with Maple's op(...) and nops(...) model, so that no Maple library codes need be modified.

We will show that in addition to improving the speed of polynomial multiplication and division, there are many other core Maple kernel routines that will be hugely improved, e.g., indets, has, and sign. Even better, we can have our cake and eat it too: polynomials will be AUTOMATICALLY SORTED. No more 1-x+y^10-y. Always y^10-x-y+1.

Of course it is but a proposal. There is much work to be done. We will need the help of the MapleSoft kernel programmers. Nevertheless, we feel that this proposal represents the biggest improvement that we have ever contributed to Maple.

This is joint work with Roman Pearce at Simon Fraser University.

Scheduling parallel algorithms in computer algebra.

Marc Moreno Maza · University of Western Ontario


Many programming environments for parallel computing are available today to computer algebraists. An important criterion for their choice should be scheduling. Indeed, symbolic computations tend to parallelize into highly irregular tasks.

When static scheduling is not strictly necessary, symbolic computations are often good candidates for automatic scheduling, due to the divide-and-conquer structure of many algorithms. This talk aims at illustrating this latter claim using Cilk and its implementation of the work stealing principle.

The Modpn library: Bringing Fast Polynomial Arithmetic into Maple

Marc Moreno Maza · University of Western Ontario


One of the main successes of the Computer Algebra community in the last 30 years is the discovery of algorithms, called modular methods, that allow one to keep the swell of intermediate expressions under control. Without these methods, many applications of Computer Algebra would not be possible, and the impact of Computer Algebra on scientific computing would be severely limited. Amongst the computer algebra systems which emerged in the 70s and 80s, Maple and its developers have played an essential role in this area.

Another major advance in symbolic computation is the development of implementation techniques for asymptotically fast polynomial arithmetic. Computer algebra systems and libraries initiated in the 90s, such as Magma and NTL, have been key actors in this effort.

In this extended abstract, we present Modpn, a Maple library dedicated to fast arithmetic for multivariate polynomials over finite fields. The main objective of Modpn is to provide highly efficient core routines for supporting the implementation of modular methods.

We illustrate the impact of fast polynomial arithmetic on a simple modular method by comparing its implementations in Maple with classical arithmetic and with the Modpn library. Then, we discuss the design of Modpn, its strengths and necessary future improvements.

This is joint work with Xin Li, R. Rasheed, and É. Schost.

When does (T) equal Sat(T)?

Wei Pan · University of Western Ontario


Given a regular chain T, the polynomials in T may not be a system of generators of the saturated ideal Sat(T). A natural idea is to test whether (T)=Sat(T) holds. We generalize the notion of primitivity for a polynomial to a regular chain. We establish a necessary and sufficient condition, together with a Groebner basis free algorithm, for testing this equality.

The LinBox project for linear algebra computations

Daniel Roche · University of Waterloo


LinBox is a C++ template library for performing exact linear algebra computations. The backbone of this project has been support for so-called "black box" methods, which stem from Wiedemann's landmark 1986 paper and support fast linear system solving, rank computation, nullspace sampling, and the like for matrices which are sparse or structured (e.g. Toeplitz).

A more recent advance has been the use of block projections to further enhance the efficiency of black-box methods. Essentially as a result of these developments, LinBox now has full support for dense matrix linear algebra as well, made efficient by the use of highly-tuned BLAS implementations.

As of the most recent release, version 1.5, LinBox stands as a powerful and robust tool for all sorts of linear algebra computations over a number of domains, including finite fields of various sizes, multi-precision integers and rational numbers, and polynomials. Various matrix representations and file formats are also supported, in keeping with LinBox's goal of full genericity without any sacrifice of performance.

LinBox is also considered "middleware", lying between basic underlying libraries and higher-level computer algebra systems. Some of the underlying libraries used by LinBox are GMP, NTL, BLAS, Lapack, and Givaro. At the same time, LinBox has been used as a backend to web computation servers and the GAP homology package, and interfaces have been written for the computer algebra systems Maple and SAGE.

In a tutorial-style presentation, we will highlight the key features and capabilities of LinBox, with some "hands-on" examples of building and using the library. We will also examine some of the challenges faced and the developments planned for the upcoming version 2.0 release.

Triangularizing integer matrices

Arne Storjohann · University of Waterloo


Most methods for triangularizing matrices over the ring of integers (computing the Hermite form) use a direct elimination scheme, usually a variation of Gaussian elimination with division replaced by the extended Euclidean algorithm. Growth in the size of numbers can be controlled with modular techniques, or by carefully organising the computation, but for large matrices the cost of direct elimination methods is still prohibitive. In this talk I show how the Hermite form can be recovered, for most input matrices, from a few random linear system solutions.
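
For intuition, here is a small pure-Python sketch (my own, not the talk's algorithm) of the classical direct scheme the talk improves on: elimination where each pair of rows is combined by a unimodular 2x2 transform built from the extended Euclidean algorithm.

```python
def extended_gcd(a, b):
    """Return (g, s, t) with s*a + t*b == g = gcd(a, b) >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, s, t = extended_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def hermite_form(M):
    """Row-style Hermite normal form of an integer matrix (list of
    lists) by unimodular row operations: division in Gaussian
    elimination is replaced by the extended Euclidean algorithm."""
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            if A[i][c] == 0:
                continue
            g, s, t = extended_gcd(A[r][c], A[i][c])
            u, v = A[r][c] // g, A[i][c] // g
            # [[s, t], [-v, u]] has determinant s*u + t*v = 1: unimodular
            A[r], A[i] = ([s * x + t * y for x, y in zip(A[r], A[i])],
                          [-v * x + u * y for x, y in zip(A[r], A[i])])
        if A[r][c] < 0:          # make the pivot positive
            A[r] = [-x for x in A[r]]
        for i in range(r):       # reduce entries above the pivot
            q = A[i][c] // A[r][c]
            A[i] = [x - q * y for x, y in zip(A[i], A[r])]
        r += 1
    return A
```

Even in this toy form one can see where the size growth comes from: the extended-gcd multipliers s and t feed back into whole rows at every step, which is exactly what makes direct elimination costly on large matrices.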

A package for delayed polynomial arithmetic (demo)

Paul Vrbik · Simon Fraser University


I will begin with a discussion of what delayed computations are and how they may help speed up common computations in computer algebra. Our algorithms for polynomial multiplication and division compute the terms of the product (quotient(s) and remainder(s)) sequentially. They use a heap to reduce the number of monomial comparisons and delay coefficient arithmetic until the last possible moment. After presenting our algorithms, I will demonstrate how to use our Maple library for doing delayed polynomial arithmetic.
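
A rough Python analogue of the delayed style (illustrative only; the actual library is in Maple): a generator produces product terms on demand, largest exponent first, so a consumer that needs only the leading terms never pays for the rest of the product.

```python
import heapq

def lazy_mul(a, b):
    """Lazily yield the terms of a*b, largest exponent first.
    a, b: lists of (exponent, coefficient) in decreasing exponent order."""
    if not a or not b:
        return
    heap = [(-(a[i][0] + b[0][0]), i, 0) for i in range(len(a))]
    heapq.heapify(heap)
    while heap:
        exp = -heap[0][0]
        coeff = 0
        # coefficient arithmetic is delayed until all equal-exponent
        # products surface at the top of the heap together
        while heap and -heap[0][0] == exp:
            _, i, j = heapq.heappop(heap)
            coeff += a[i][1] * b[j][1]
            if j + 1 < len(b):
                heapq.heappush(heap, (-(a[i][0] + b[j + 1][0]), i, j + 1))
        if coeff != 0:
            yield (exp, coeff)
```

For example, next(lazy_mul(a, b)) computes only the leading term of the product; this on-demand behavior is what lets delayed division interleave producing quotient terms with consuming product terms.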

Functional Decomposition of Symbolic Polynomials

Stephen M. Watt · University of Western Ontario


Earlier work has presented algorithms to factor and compute GCDs of symbolic Laurent polynomials, that is, multivariate polynomials whose exponents are themselves integer-valued polynomials. This talk shows how to extend the notion of univariate polynomial decomposition to symbolic polynomials and presents an algorithm to compute these decompositions. For example, the symbolic polynomial f(x) = 2x^(n^2+n) - 4x^(n^2) + 2x^(n^2-n) + 1 can be decomposed as f = g ∘ h, where g(x) = 2x^2 + 1 and h(x) = x^(n^2/2 + n/2) - x^(n^2/2 - n/2).
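
The decomposition can be sanity-checked for integer values of n (note that n^2 + n and n^2 - n are always even, so h has integer exponents at every integer n); a quick pure-Python check:

```python
# the example decomposition f = g ∘ h, specialized at integer n
def f(x, n):
    return 2*x**(n*n + n) - 4*x**(n*n) + 2*x**(n*n - n) + 1

def g(y):
    return 2*y*y + 1

def h(x, n):
    # n^2 +/- n is even, so these exponents are integers
    return x**((n*n + n) // 2) - x**((n*n - n) // 2)

# verify g(h(x)) == f(x) for several specializations
for n in (2, 3, 4):
    for x in (2, 3, 5):
        assert g(h(x, n)) == f(x, n)
```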

Evaluation properties of invariant polynomials.

Jie Wu · UWO and Graduate University of the Chinese Academy of Science


A polynomial invariant under the action of a finite group can be rewritten using generators of the invariant ring. We investigate the complexity aspects of this rewriting process; we show that evaluation techniques enable one to reach a polynomial cost.

Challenges and Progress in Geometric Methods of Differential Systems

Wenyuan Wu · University of Western Ontario


We outline challenges and progress in numerical jet geometry. Several new ideas are described in this talk, yielding witness point versions of fundamental operations in jet geometry.

The first new idea is to replace differentiation (prolongation) of equations by geometric lifting of witness Jet points. Unlike other approaches our geometric lifting technique can characterize projections without constructing an explicit algebraic equation representation.

The second new idea is to compute singular components of differential systems by embedding the given system in a larger space. Then the rank degeneracy conditions of a certain extended form of the classical geometric symbol of the differential system ultimately yield witness Jet points characterizing singular solutions of the system. Simple examples are given to illustrate these ideas.

On the Representation of Constructible Sets

Yuzhen Xie · University of Waterloo


The occurrence of redundant components is a natural phenomenon when computing with constructible sets. Two levels of redundancy can exist. A first one appears in representing a single constructible set with regular systems. This problem arises when computing the complement of a constructible set, the union, the intersection or the difference of two constructible sets. A second level of redundancy can occur in a family of constructible sets. We present different algorithms for computing an irredundant representation of a constructible set or a family thereof. We provide a complexity analysis and report on an experimental comparison.

Efficient Order Basis Computation

Wei Zhou · University of Waterloo


The order basis (also known as sigma-basis or minimal approximant basis), originally developed by Beckermann and Labahn in 1994 for rational approximation and interpolation problems, has recently been applied to many other important problems in polynomial matrix computation. These include column reduction, matrix inverse, determinant, and null space basis computation. Storjohann, in 2006, provided an efficient way to compute the part of an order basis that is within a given degree bound. We extend Storjohann's result by providing a way to compute a complete order basis with a similar computational cost.