Call for Participation: Abstracts & Workshop Proposals - International Symposium on Grids and Clouds (ISGC) 2013
The International Symposium on Grids and Clouds (ISGC) 2013, hosted by Academia Sinica Grid Computing Centre (ASGC), will be held at Academia Sinica in Taipei, Taiwan, from 17 to 22 March 2013, with co-located events and workshops. The Program Committee cordially invites your participation by taking part in the Call for Abstracts and/or Call for Workshops.
The theme of ISGC 2013 is Collaborative Simulation, Modelling and Data Analytics in Grids and Clouds.
ISGC 2013 will bring together, from the Asia-Pacific region and around the world, researchers who are developing applications to produce these large-scale data sets and the data analytics tools to extract knowledge from the generated data, as well as the e-infrastructure providers that integrate distributed computing, storage and network resources to support these multidisciplinary research collaborations. The meeting will feature workshops, tutorials, keynotes and technical sessions to further support the development of a global e-infrastructure for collaborative Simulation, Modelling and Data Analytics.
Call for Abstracts - Please submit abstracts before 16 November 2012 through Indico at http://indico3.twgrid.org/indico/conferenceDisplay.py?confId=357
Topics of Interest
Applications and results from the Virtual Research Communities and Industry:
1. Physics (including HEP) & Engineering Applications
Submissions should report on experience with physics and engineering applications that exploit grid and cloud computing services, applications that are planned or under development, or application tools and methodologies.
Topics of interest include:
- End-user data analysis;
- Management of distributed data;
- Applications level monitoring;
- Performance analysis and system tuning;
- Workload scheduling;
- Management of an experimental collaboration as a virtual organization;
- Comparison between grid and other distributed computing paradigms as enablers of physics data handling and analysis;
- Expectations for the evolution of computing models drawn from recent experience handling extremely large and geographically diverse datasets.
2. Biomedicine & Life Sciences Applications
During the last decade, Biomedicine and Life Sciences have dramatically changed thanks to the use of High Performance Computing and highly Distributed Computing Infrastructures such as grids and clouds. Submissions should concentrate on practical applications in the fields of Biomedicine and Life Sciences, such as:
- Cloud-based use of biomedical data
- Medical imaging
- Drug discovery
- Public health applications / infrastructures
- High throughput biological data processing/analysis
- Integration of semantically diverse data sets and applications
- Combining grid with distributed data and services
- Biomedical data management issues
- Applications for non-technical end users
3. Earth & Environmental Science & Biodiversity
Today, it is well understood that precise, long-term observations are essential to quantify the patterns and trends of ongoing environmental changes, and that continuously evolving models are needed to integrate our fundamental knowledge of processes with the geospatial and temporal information delivered by various monitoring activities. This makes it critically important that the environmental sciences community put a strong emphasis on analysing best practices and adopting common solutions for the management of heterogeneous data and data flows.
Natural and environmental sciences are placing an increasing emphasis on understanding the Earth as a single, highly complex, coupled system of living and non-living components. It is well accepted, for example, that feedbacks involving oceanic and atmospheric processes can have major consequences for the long-term development of the climate system, which in turn affects biodiversity and natural hazards and can control the development of the cryosphere and lithosphere. Natural disaster mitigation is one of the most critical regional issues in Asia.
Despite the diversity of environmental sciences, many projects share the same significant challenges. These include the collection of data from multiple distributed sensors (potentially in very remote locations), the management of large low-level data sets, the requirement for metadata fully specifying how, when and where the data were collected, and the post-processing of those low-level data into higher-level data products which need to be presented to scientific users in a concise and intuitive form.
This session will address, in particular, how these challenges are being met with the aid of the e-Science paradigm.
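As a concrete illustration of the metadata requirement mentioned above - records fully specifying how, when and where data were collected - a minimal sketch of a single sensor observation might look as follows. All field names and values here are illustrative only; real projects would follow a community standard such as ISO 19115 or SensorML.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SensorObservation:
    """Minimal metadata for one low-level environmental data point.

    Field names are illustrative, not taken from any specific project;
    community standards such as ISO 19115 or SensorML define the real schemas.
    """
    instrument_id: str   # how: which sensor produced the reading
    timestamp_utc: str   # when: ISO 8601 timestamp in UTC
    latitude: float      # where: sensor location
    longitude: float
    variable: str        # what is measured, e.g. "air_temperature"
    value: float
    unit: str

# A hypothetical reading from a remote meteorological station
obs = SensorObservation(
    instrument_id="met-station-042",
    timestamp_utc=datetime(2012, 11, 7, 10, 21, tzinfo=timezone.utc).isoformat(),
    latitude=25.04, longitude=121.61,
    variable="air_temperature", value=18.3, unit="degC",
)
record = asdict(obs)  # plain dict, ready for serialization into a catalogue
```

Low-level records of this shape are what the post-processing pipelines described above aggregate into higher-level data products.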
4. Humanities & Social Sciences Applications
Researchers working in the social sciences and the humanities have started to explore the use of advanced computing infrastructures such as grids to address the grand challenges of their disciplines. For example, social scientists working on issues such as globalization, international migration, uneven development and deprivation are interested in linking complementary datasets and models at local, national, regional and global scales.
Similarly, in the humanities, researchers from a wide range of disciplines are interested in managing, linking and analyzing distributed datasets and corpora. There has been a significant increase in the digital material available to researchers, through digitization programmes but also because more and more data is now "born digital".
As more and more applications demonstrate the successful application of e-Research approaches and technologies in the humanities and social sciences, the question arises whether common models of usage exist that could be underpinned by a generic e-Infrastructure. The session will focus on experience gained in developing e-Research approaches and tools that go beyond single-application demonstrators. Their wider applicability may be based on a set of common concerns, common approaches or reusable tools and services. We also specifically invite contributions concerned with teaching e-Research approaches at undergraduate and postgraduate levels, as well as other initiatives to "bridge the chasm" between early adopters and the majority of researchers.
Activity to enable the provisioning of a Resource Infrastructure
5. Infrastructure & Operations Management
This session will cover the current state of the art and recent advances in managing the internal operation of large-scale research infrastructures and the interactions between them. The scope of this track includes advances in high-performance networking (including the IPv4 to IPv6 transition), monitoring tools and metrics, service management (ITIL and SLAs), security, improving service and site reliability, interoperability between infrastructures, user and operational support procedures, and other topics relevant to providing a trustworthy, scalable and federated environment for general grid and cloud operations.
6. Middleware & Interoperability
Middleware technology is an essential cornerstone of modern federated Grid and Cloud infrastructures. Its robustness, scalability and reliability are of major importance in supporting academic and business infrastructure users in gaining new scientific insights or increasing their revenues. Until recently, middleware technologies were developed from the specific requirements of particular communities and use cases. Today, middleware technologies must converge by employing open standards to enable interoperability among technologies and infrastructures, or to re-use components from other technologies - convergence, collaboration and innovation are and must be key elements of this endeavor. Submissions should therefore highlight their contribution to the convergence, collaboration and innovation of interoperable middleware technologies for federated IT infrastructures. Topics of interest include but are not limited to:
- One-step-ahead interoperable middleware solutions (Grid-to-Grid, Grid-to-Cloud and vice versa, Cloud-to-Cloud), including application use cases, the open standards employed and implementation highlights
- Examples for convergence of middleware technologies, e.g. replacement of components by external, standardized and interoperable components from other middleware distributions
7. Infrastructure Clouds & Virtualisation
This track will focus on the use of Infrastructure-as-a-Service (IaaS) cloud computing and virtualization technologies in large-scale distributed computing environments in science and technology. The organizers solicit papers describing underlying virtualization and "cloud" technology, scientific applications and case studies related to using such technology in large-scale infrastructure, as well as solutions overcoming challenges and leveraging opportunities in this setting. Of particular interest are results exploring the usability of virtualization and infrastructure clouds from the perspective of scientific applications, the performance, reliability and fault tolerance of the solutions used, and data management issues. Papers dealing with cost, price and cloud markets, with security and privacy, as well as with portability and standards, are also most welcome.
8. Business Models & Sustainability
Whenever a business is established, it employs a particular business model that describes the architecture of the value (economic, social, etc.) creation, delivery, and capture mechanisms employed by the business enterprise. Business models are used to describe and classify businesses (especially in an entrepreneurial setting), but they are also used by managers inside companies to explore possibilities for future development. Business models are also referred to in some instances within the context of accounting for purposes of public reporting.
Sustainability is the capacity to endure; it interfaces with economics through the social and ecological consequences of economic activity. Among the many ways of living more sustainably, one can cite the use of science to develop new technologies (green technologies, renewable energy, or new and affordable cost-effective practices) to make adjustments that conserve resources.
These two concepts apply to the e-infrastructure world and the purpose of this session will be to report about existing or foreseen initiatives aiming at guaranteeing the long-term sustainability of e-Infrastructures by means of business models.
Technologies that provide access to, and exploitation of, different site resources and infrastructures.
9. Data Management
Data management encompasses the organization, distribution, storage, access, and validation of digital assets. Data management requirements can be characterized by the stages of the data life cycle, ranging from shared project collections, to formally published libraries, to preserved reference collections. Papers are sought that demonstrate the management of data through the multiple phases of the scientific data life cycle, from creation to re-use. Of particular importance are demonstrations of systems that validate assertions about collection properties, including integrity, chain of custody, and provenance.
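A minimal sketch of the kind of integrity validation referred to above: a manifest of checksums is recorded when a collection is ingested and re-verified later to assert that its contents are unchanged. The file layout and helper names below are our own, not taken from any particular data-management system.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(collection_dir: Path) -> dict:
    """Record a checksum for every file in the collection at ingest time."""
    return {str(p.relative_to(collection_dir)): sha256_of(p)
            for p in sorted(collection_dir.rglob("*")) if p.is_file()}

def verify_manifest(collection_dir: Path, manifest: dict) -> list:
    """Re-check the collection; return the names of files whose contents changed."""
    return [name for name, digest in manifest.items()
            if sha256_of(collection_dir / name) != digest]
```

Production systems layer chain-of-custody and provenance records on top of such checks, but the core integrity assertion reduces to a comparison like this one.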
10. Managing Distributed Computing Systems
This track will highlight the latest research achievements in interoperability between commercial clouds, conventional grids, desktop grids and volunteer computing. The topics will cover new technologies in the related software frameworks, recent application developments, as well as infrastructure operation and user support techniques at all levels: campus, institutional, and very large scale cyberscience computing.
Special focus will be on the following areas:
- Interoperability with, and integration into, other e-infrastructures;
- Virtualization techniques;
- Data management;
- Energy efficiency and Green computing aspects;
- Quality of service;
- Novel uses of volunteer computing and Desktop Grids;
- Best practices and (social) impacts.
11. High Performance & Technical Computing (HPTC)
With the growing availability of computing resources such as public grids (e.g., EGI and OSG) and public/private clouds (e.g., Amazon EC2), it has become possible to develop and deploy applications that exploit as many computing resources as possible. However, it is quite challenging to effectively access, aggregate and manage all available resources, which are usually controlled by different resource providers. This session solicits recent research and development achievements and best practices in exploiting the wide variety of computing resources available. HPTC resources include dedicated High Performance Computing (HPC), High Throughput Computing (HTC), GPUs and many-core systems.
The topics of interest include, but are not limited to, the following:
- Experiences, use cases and best practices on the development and operation of large-scale HPTC applications;
- Delivery of and access to HPTC resources through grid and cloud computing (as a Service) models;
- Integration and interoperability to support coordinated federated use of different HPTC e-infrastructures;
- Use of virtualization techniques to support portability across different HPTC systems;
- Robustness and reliability of HPTC applications and systems over a long-time scale.
12. Big Data Analytics
Characterized by the commercial sector in terms of Volume, Variety and Velocity (the "3 Vs"), Big Data cannot easily be handled by popular relational databases. The deluge of data is forcing a new generation of scientific processes and discovery mechanisms. Big Data in Big Science is not only large in scale but also globally distributed, as data may come from different sources, formats, workflows, and at different speeds (including real time). It is transforming all industries, driving innovation in infrastructure, compute and storage hardware, data analytics and algorithms, and a wide range of software applications and services. This track invites innovative research, applications and technology on big data. Submissions should address conceptual modelling as well as techniques related to big data analytics.
Leslie Versweyveld 2012-11-07T10:21:01Z
A new book on Desktop Grid Computing, published by CRC Press, contains 15 chapters over 388 pages. The book appears in the "Chapman & Hall/CRC Numerical Analysis and Scientific Computing Series". It was published on 25 June 2012 with ISBN 9781439862148. The editors are Christophe Cérin, Université Paris XIII, Villetaneuse, France, and Gilles Fedak, INRIA, University of Lyon, France. The book illustrates how Desktop Grid computing is used in various fields, such as bioinformatics and medical imaging; presents state-of-the-art methods, models, and technologies; and examines the design of middleware and architecture. You can get the book from your favorite bookshop.
Summary: Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance.
The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical issues, offers details about implementation and experiments, and includes references to further reading and notes.
One of the first books to give a thorough and up-to-date presentation of this topic, this resource describes various approaches and models as well as recent trends that underline the evolution of desktop grids. It balances the theory of designing desktop grid middleware and architecture with applications and real-world deployment on large-scale platforms.
We are glad to announce that the new version 3.5.2 of gUSE has been released. From this version, gUSE/WS-PGRADE supports workflow export/import to/from the SHIWA Workflow Repository, where you can share workflows of different workflow systems.
Kitti Varga 2012-10-17T11:11:00Z
The SHIWA project officially ended today, but our work will be continued in the ER-flow project.
Latest Success Story is available about how SHIWA and its subcontractor Correlation Systems worked together.
See more at the Success Stories page.
Kitti Varga 2012-10-03T14:30:45Z
Gabor Terstyanszky, who coordinates the ER-flow project and leads the WP3 - SA1 (SHIWA Simulation Platform) work package of the SHIWA project, introduces the aims of the ER-flow project: to promote workflow sharing and to investigate the interoperability of scientific data in workflow sharing using the SHIWA services.
Kitti Varga 2012-09-27T14:25:51Z
We are happy to announce that the ER-flow project has started today.
Kitti Varga 2012-09-27T14:24:29Z
CloudBroker GmbH and the SCIentific gateway Based User Support (SCI-BUS) EU FP7 project are happy to announce a new release of the CloudBroker Platform that allows scientific calculations to be performed on users' own infrastructure-as-a-service (IaaS) clouds. Several already available public cloud providers, as well as newly implemented private cloud solutions, can be utilized. Furthermore, under the current pricing scheme, the surcharges for using the public platform are zero if the corresponding resources and associated software are not offered for a fee. CloudBroker and SCI-BUS hope that these new capabilities will spark cloud usage in scientific computing in general, and in science gateways in particular.
The CloudBroker Platform is a software-as-a-service (SaaS) application store that makes it easy to offer and use compute-intensive applications on different cloud infrastructures. It is based on a pay-per-use model and can be accessed via a web browser, through REST and Java application programming interfaces (APIs), or from the command line. The platform also represents one of the base technologies in the SCI-BUS science gateway project, where it provides the connection to public and private cloud infrastructures.
The goal of SCI-BUS is to make it easier to build, operate and use science gateways, that is, web portals for different research communities built on top of distributed computing infrastructures (DCIs). For this purpose, general-purpose technology is provided and gateways for a number of different communities are supported, including systems biology, computational chemistry, astrophysics, heliophysics, seismology, medicine, rendering, electronic document handling, business process optimization, small and medium enterprises (SMEs) and software testing.
Within the SCI-BUS project, the following additional features were incorporated into the CloudBroker Platform and are now made available to all platform users:
- Adapters to OpenStack and Eucalyptus private clouds in addition to the existing Amazon Web Services and IBM SmartCloud Enterprise public cloud adapters
- Ability to register and offer one's own IaaS cloud resources using accounts on any of the above clouds
- Deployment of application software on one's own cloud resources without a special checking procedure
- Separate treatment of cloud compute and storage resources
- Interface to IBM SmartCloud Enterprise object storage based on Nirvanix
- Tagging of jobs, for example to differentiate from where a job originates
Further related features, such as an interface to OpenNebula private clouds, are currently in preparation. Since in the current pricing scheme the surcharges for using the public CloudBroker Platform are calculated percentage-wise, there are no extra platform usage charges if the prices for one's own resources and software are set to zero.
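The pricing statement above reduces to simple arithmetic: with a percentage-wise surcharge, the platform fee is the surcharge rate times the provider's own price, so a price of zero yields a fee of zero. The 10% rate in this sketch is a made-up placeholder, not CloudBroker's actual rate.

```python
def platform_surcharge(resource_price: float, surcharge_rate: float = 0.10) -> float:
    """Percentage-wise platform fee on top of the provider's own price.

    The 10% default rate is a hypothetical placeholder for illustration only.
    """
    return resource_price * surcharge_rate

# A provider charging 0.50 per CPU-hour pays a proportional fee ...
fee_paid = platform_surcharge(0.50)   # 10% of 0.50
# ... while resources and software offered for free incur no platform charge.
fee_free = platform_surcharge(0.0)    # zero, whatever the rate
```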
Dr. Wibke Sudholt, CTO of CloudBroker and work package leader in SCI-BUS, comments: “The new release opens the CloudBroker Platform to users who would like to utilize their own cloud accounts or infrastructures in addition to what CloudBroker provides. Offering these resources and the software deployed on them through the platform then presents a seamless and efficient way towards SaaS services for all corresponding providers.” Prof. Dr. Peter Kacsuk, head of the Laboratory of Parallel and Distributed Systems (LPDS) at the Computer and Automation Research Institute, Hungarian Academy of Sciences (MTA SZTAKI) and SCI-BUS project coordinator, adds: “With the possibility to use the CloudBroker Platform for free for own cloud infrastructures and to register any software there, users now have the full freedom to deploy and run whatever they need for their scientific workflows in the cloud. Using the CloudBroker interface in our WS-PGRADE / gUSE gateway framework, both public and private clouds are readily available for science gateway providers and users.”
The CloudBroker Platform incorporates the described additions from release number 1.1 onwards. Its public installation, where in the current pricing scheme everybody can register free of charge after a user check, is available under
CloudBroker offers commercial hosted and in-house versions of the platform as well. The new features were also demonstrated at the SCI-BUS booth and in the Science Gateway session at the EGI Technical Forum in Prague on September 20, 2012.
About
CloudBroker GmbH: CloudBroker (http://www.cloudbroker.com ) is a spin-off company of the ETH Zurich located in Zurich, Switzerland. It offers scientific and technical applications as a service in the cloud, for usage in fields such as biology, chemistry, health and engineering. Its flagship product, the CloudBroker Platform, provides on-demand web access to application software on top of compute and storage resources in the cloud.
SCI-BUS: SCI-BUS (http://www.sci-bus.eu ) is a European project supported by the FP7 Capacities Programme under contract no. RI-283481. It aims at developing gateway technology and community gateways to provide researchers seamless access to major computing, data and networking infrastructures and services, with a focus on scientific workflows. SCI-BUS is a collaboration of 15 consortium members and six subcontractors, supporting a number of different gateways in various disciplines.
Contacts
Prof. Dr. Peter Kacsuk
Laboratory of Parallel and Distributed Systems
Victor Hugo u. 18-22
Phone: +36 1 329 7864
Instituto de Biocomputación y Física de Sistemas Complejos
C/ Mariano Esquillor s/n
Phone: +34 976 76 1000 ext. 5413
Kitti Varga 2012-09-27T11:29:57Z
The SHIWA and IGE projects are happy to announce the successful integration of Globus Toolkit 5 into the SHIWA Simulation Platform through its DCI Bridge component.
The two projects, IGE and SHIWA, extended their collaborative efforts by signing a Memorandum of Understanding (MoU). The SHIWA project's aim is to leverage existing workflow-based solutions and enable cross-workflow and inter-workflow exploitation of Distributed Computing Infrastructures (DCIs), helping the collaboration of user communities that work in similar research fields but use different workflow systems.
The adaptation of the DCI Bridge Connector of the SHIWA software to GT5 was divided into two phases. In the first phase, IGE analyzed the potential for migrating the GT2 and GT4 adaptors of the DCI Bridge to GT5, based on the technical documentation and source code of the respective adaptors, and then developed the GT5 adaptor (for file transfer and job submission) based on the existing ones. In the second phase, the SHIWA developers integrated the GT5 plugin into the DCI Bridge; this will be followed by a testing phase using GT5 resources of the IGE Test bed.
The SSP (SHIWA Simulation Platform, http://ssp.shiwa-workflow.eu/) is a product of the SHIWA project and offers users production-level services supporting workflow interoperability. As part of the platform, the SHIWA Repository facilitates publishing and sharing workflows, while the platform itself enables their actual enactment and execution on most of the DCIs (e.g. GT5) available in Europe. Workflows from different systems can be nested to compose new applications using existing workflow components. The portal is based on WS-PGRADE/gUSE technologies.
Globus Toolkit/GT5: The open source Globus Toolkit is a fundamental enabling technology for the Grid, letting people share computing power, databases, and other tools securely online across corporate, institutional, and geographic boundaries without sacrificing local autonomy. The toolkit includes software services and libraries for resource monitoring, discovery, and management, plus security and file management. GT5 is the latest Globus Toolkit release. Most components of GT5 are incremental updates (numerous bug fixes and new features) over their GT4 counterparts (e.g. GridFTP, RLS, MyProxy, GSI-OpenSSH). Some components of GT4 are not included in GT5 (e.g. GT4 Java Core, WS-GRAM4, RFT) and are to be replaced by new software under development (e.g. Crux, Globus.org Service).
SHIWA: SHIWA (SHaring Interoperable Workflows for large-scale scientific simulations on Available DCIs, http://www.shiwa-workflow.eu/) is a European project supported by the FP7 Capacities Programme under contract no. RI-261585 and led by the LPDS (Laboratory of Parallel and Distributed Systems) of MTA SZTAKI (Computer and Automation Research Institute, Hungarian Academy of Sciences). SHIWA is a collaboration of 8 consortium members and 5 subcontractors; it focuses on the interoperability of many different European workflow systems and also provides access to greater computational power through its interoperability solution between different Distributed Computing Infrastructures.
IGE: The Initiative for Globus in Europe (http://www.ige-project.eu) is a project providing European researchers with the tools to share computing power, databases, instruments, and other online tools. IGE is a member of the Globus Alliance, the collaboration that produces the Globus Toolkit, an open source set of software components that address the most challenging problems in distributed resource sharing.