Improved algorithms, better implementations, and faster computers have made many previously time-consuming computer algebra computations routine and have extended the range of what is practical to compute. However, there remain many computations that still require excessive computing time, and while existing computer algebra systems have had some practical success in applications, their widespread use in computational science and engineering remains limited. Part of this is due to the inherent difficulty of exact computation; nevertheless, in many cases the performance of an implementation could be dramatically improved through optimized implementations, parallel computation, and the use of special-purpose hardware.

While the computer algebra community has begun to incorporate high-performance computing techniques, tuning algorithms to perform well on modern computer architectures and adapting algorithms and systems to parallel computers can be a difficult and time-consuming process; much work remains in these directions. Moreover, the irregular structure and higher-level data types of computer algebra algorithms pose many challenges for high-performance implementation, and the complexity of computer algebra systems makes the incorporation of parallel computation challenging.

This session is devoted to exploring the application of high-performance computing to computer algebra algorithms, applications, and systems, and the research and implementation challenges this poses. Topics of interest include:

- Practical implementation and performance analysis of computer algebra algorithms
- Implementation and optimization techniques for computer algebra algorithms
- Parallel implementations of computer algebra algorithms and parallel computer algebra systems
- The application and extension of optimizing compiler techniques to the implementation of computer algebra algorithms
- Adapting the implementation of computer algebra algorithms to improve performance by better utilizing the underlying hardware
- Techniques for automating the optimization and platform adaptation of computer algebra algorithms
- Cache complexity and cache-oblivious computer algebra algorithms
- Hardware acceleration technologies (multi-cores, GPUs, FPGAs)
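As a concrete (hypothetical) illustration of the kind of kernel several of these topics target, the sketch below multiplies dense univariate polynomials over Z/pZ using the schoolbook O(n²) method; it is not code from the session, and high-performance implementations would replace it with FFT-based multiplication, cache-aware blocking, or hardware-accelerated variants as discussed above.

```python
def poly_mul_mod(a, b, p):
    """Multiply polynomials a and b mod p.

    a and b are coefficient lists, lowest degree first; the schoolbook
    double loop accumulates each product into the matching coefficient.
    """
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return c

# (1 + x) * (2 + x) = 2 + 3x + x^2 over Z/5Z
print(poly_mul_mod([1, 1], [2, 1], 5))  # [2, 3, 1]
```

Even this tiny kernel exposes the performance questions the session raises: coefficient representation, memory traffic per coefficient, and how the double loop maps onto caches and parallel hardware.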

http://www.easychair.org/conferences/?conf=hpcaaca2009

**Accepted contributions:**

- **A computer algebraist meets a computer centre director** (James Davenport)
- **Towards an efficient implementation for the resolution of structured linear system** (Benoit Lacelle and Eric Schost)
- **Parallel Disk-Based Computation and Computational Group Theory** (Eric Robinson, Gene Cooperman, Daniel Kunkle and Jürgen Müller)
- **Multitasking Polynomial Homotopy Continuation in PHCpack** (Jan Verschelde)
- **Memory Efficiency in Polynomial Multiplication** (Daniel Roche)
- **Fast multiplication and its variants in Newton iteration** (Ling Ding and Eric Schost)
- **Balanced Dense Polynomial Multiplication on Multi-cores** (Yuzhen Xie and Marc Moreno Maza)
- **SPIRAL-Generated Modular FFTs** (Jeremy Johnson and Lingchuan Meng)
- **A Note on the Performance of Sparse Matrix-vector Multiplication with Column Reordering** (Sardar Haque and Shahadat Hossain)
- **Implementing Modular Methods in Maple with the Modpn library** (Wei Pan, Xin Li and Marc Moreno Maza)
- **Code Generation and Autotuning in Computer Algebra** (Jeremy Johnson, Werner Krandick, David Richardson and Anatole Ruslanov)
- **Linear Algebra Modulo Tiny Primes** (Jean-Guillaume Dumas, David Saunders and Bryan Youse)