Keynotes
Keynotes ParCo2013
Speakers will be announced towards the end of 2012, as soon as they are confirmed.
Keynotes ParCo2011
Andy Adamatzky, Dept. of Computer Science, UWE, Bristol, UK
Title: Physarum Machines
Abstract:
A Physarum machine is a programmable amorphous biological computer experimentally implemented in a plasmodium of Physarum polycephalum.
Physarum machines are programmed by configurations of repelling and attracting gradients, and localized reflectors. Physarum is a biological prototype of all storage modification machines and modern computer architectures.
This talk will show how a plasmodium of Physarum polycephalum can solve geometrical and graph-theoretic problems, implement logical computations, and perform intelligent actuation.
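As a rough, hypothetical illustration of the kind of graph-theoretic computation attributed to the plasmodium, the sketch below simulates a commonly used Physarum-inspired flow-feedback model: tubes that carry more flux between a food source and a sink are reinforced, the others decay, and the shortest route survives. The network, node indices, and parameters are invented for illustration; this is a conventional numerical analogue, not the biological experiments described in the talk.

    import numpy as np

    def physarum_shortest_path(n_nodes, edges, source, sink, steps=100, dt=0.5):
        """edges: list of (i, j, length). Returns final tube conductivities;
        edges on the shortest source-sink route keep high conductivity,
        the rest decay towards zero."""
        lengths = np.array([l for _, _, l in edges], dtype=float)
        D = np.ones(len(edges))                  # tube conductivities
        for _ in range(steps):
            # Kirchhoff / graph-Laplacian system for the node pressures.
            A = np.zeros((n_nodes, n_nodes))
            for k, (i, j, _) in enumerate(edges):
                g = D[k] / lengths[k]
                A[i, i] += g; A[j, j] += g
                A[i, j] -= g; A[j, i] -= g
            b = np.zeros(n_nodes)
            b[source], b[sink] = 1.0, -1.0       # unit flow in at source, out at sink
            keep = [v for v in range(n_nodes) if v != sink]   # ground the sink (pressure 0)
            p = np.zeros(n_nodes)
            p[keep] = np.linalg.solve(A[np.ix_(keep, keep)], b[keep])
            # Flux through each tube; reinforce used tubes, let unused ones decay.
            Q = np.array([D[k] / lengths[k] * (p[i] - p[j])
                          for k, (i, j, _) in enumerate(edges)])
            D += dt * (np.abs(Q) - D)
        return D

    # Toy network: two routes from node 0 to node 3; the shorter one survives.
    edges = [(0, 1, 1.0), (1, 3, 1.0),   # short route, total length 2
             (0, 2, 2.0), (2, 3, 2.0)]   # long route, total length 4
    print(physarum_shortest_path(4, edges, source=0, sink=3))

On this toy network the conductivities of the two short edges converge towards 1 while those of the long route decay towards 0, mirroring how the plasmodium concentrates its protoplasmic tubes along the shortest path.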
Jack Dennis, Professor of Computer Science, MIT, USA
Title: The Fresh Breeze Project
Abstract:
The development of multicore chips is producing a sea change in the world of computer system architecture. One result is that the conventional program execution model (PXM), which has been the mainstay of commercial software development for more than thirty years, has been rendered obsolete. The Fresh Breeze PXM is proposed as a new basis for the design of massively parallel computer systems that can achieve high performance with sound support for program development, including composability of parallel programs. The presentation will review the history of program execution models and how their evolution has led to the Fresh Breeze project.
Bernhard Fabianek, European Commission, Brussels
Title: The Future of High Performance Computing in Europe
Abstract:
This talk will highlight the pan-European relevance that high-performance computing has gained. It will outline the setup of an HPC ecosystem and the challenges associated with it. Furthermore, a strategic agenda for high-performance computing will be presented, including the underlying data and the envisaged actions.
Bill Gropp, Paul and Cynthia Saylor Professor of Computer Science, University of Illinois at Urbana-Champaign, USA
Title: Performance Modeling as the Key to Extreme Scale Performance
Abstract:
Parallel computing is primarily about achieving greater performance than is possible without using parallelism. Especially at the high end, where systems cost tens to hundreds of millions of dollars, making the best use of these valuable and scarce systems is important. Yet few application teams really understand how well their codes are performing with respect to the achievable performance of the system.
The Blue Waters system, currently being installed at the University of Illinois, will offer sustained performance in excess of 1 PetaFLOPS for many applications. However, achieving this level of performance requires careful attention to many details, as this system has many features that must be used to get the best performance. To address this problem, the Blue Waters project is exploring the use of performance models that provide enough information to guide the development and tuning of applications, ranging from improving the performance of small loops to identifying the need for new algorithms.
Using Blue Waters as an example of an extreme-scale system, this talk will describe some of the challenges faced by applications at this scale, the role that performance modeling can play in preparing applications for extreme scale, and some ways in which performance modeling has guided performance enhancements for those applications.
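As a minimal sketch of what such a performance model can look like (a roofline-style bound, not the actual Blue Waters models, and with made-up machine numbers), the following Python snippet estimates the attainable performance of a kernel from its arithmetic intensity. Even this simple bound is often enough to tell whether further loop tuning can help or whether a new, higher-intensity algorithm is needed.

    def attainable_gflops(peak_gflops, mem_bandwidth_gbs, arithmetic_intensity):
        """Roofline-style bound: a kernel can exceed neither the peak
        floating-point rate nor the rate at which memory can feed it.
        arithmetic_intensity is in FLOPs per byte moved to/from memory."""
        return min(peak_gflops, mem_bandwidth_gbs * arithmetic_intensity)

    # Hypothetical node: 300 GFLOP/s peak, 50 GB/s memory bandwidth.
    # At 0.25 FLOP/byte (stencil-like) the kernel is memory-bound, so
    # polishing the loop body cannot help; a more cache-friendly
    # algorithm with higher intensity is what the model calls for.
    for intensity in (0.25, 2.0, 10.0):
        bound = attainable_gflops(300.0, 50.0, intensity)
        print(f"intensity {intensity:5.2f} FLOP/B -> bound {bound:6.1f} GFLOP/s")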
Thomas Lippert, Forschungszentrum Juelich GmbH, Juelich, Germany
Title: Europe's Supercomputing Research Infrastructure PRACE
Abstract:
During the last three years, a consortium of 20 European countries prepared the legal and technical prerequisites for the establishment of a leadership-class supercomputing infrastructure in Europe. The consortium, named "Partnership for Advanced Computing in Europe" (PRACE), carried out a preparatory phase project supported by the European Commission. The statutes of the research infrastructure association, a Belgian "association sans but lucratif", were signed in April 2010, and its inauguration took place in June 2010. Four members have committed to provide compute cycles worth EUR 100 million each in the five-year period until 2015. Six sites in the four hosting countries will, in succession, install machines of the highest performance class (Tier-0) for leading-edge capability computing, providing a diversity of architectures beyond Petaflop/s towards Exaflop/s.
Access to the infrastructure is provided on the basis of scientific quality through a pan-European peer-review scheme under the guidance of the Scientific Steering Committee (SSC) of PRACE. The SSC is an autonomous group of leading European peers from a variety of fields in computational science and engineering. Proposals can be submitted in the form of projects or as programmes by communities. The provision of computer time through PRACE started in August 2010 on the supercomputer JUGENE at Research Centre Juelich.
Presently PRACE is further developing its infrastructure in the first implementation project, soon to be followed by the second, both again funded by the European Commission. As an important step forward, PRACE's Tier-0 supercomputing infrastructure will be complemented by national centres (Tier-1) of the PRACE partners. In the tradition of DEISA, the Tier-1 centres will provide limited access to national systems for European groups, granted through national peer review, under the coordination and governance of PRACE. Furthermore, PRACE aims at establishing an industrial user-vendor platform with the goal of creating a European Technology Platform for HPC.
Ignacio Martin Llorente, OpenNebula Project Director, DSA-Research.org, Universidad Complutense de Madrid, Spain
Title: Challenges in Hybrid and Federated Cloud Computing
Abstract:
Federated and hybrid clouds will play a significant role in IT strategies and e-infrastructures in the coming years. The keynote describes the different hybrid cloud computing scenarios, ranging from the combination of a local private infrastructure with commercial cloud providers that offer no real support for federation, to clouds built on the data centers of a single organization, where the sites are completely dedicated to supporting all aspects of federation. The level of federation is determined by how much information is disclosed and how much control over the resources is provided across sites.
The talk presents the existing challenges for interoperability in federated and hybrid cloud computing scenarios, and ends with examples of multi-cloud environments running OpenNebula, such as the hybrid cloud computing approach in the StratusLab project.