ECSS Symposium

ECSS staff share technical solutions to scientific computing challenges monthly in this open forum.

The ECSS Symposium allows the more than 70 ECSS staff members to exchange information each month about successful techniques used to address challenging science problems. Tutorials on new technologies may be featured. Two 30-minute, technically focused talks are presented each month, each with a brief question-and-answer period. This series is open to everyone.

Symposium coordinates

Day and Time: Third Tuesdays @ 1 pm Eastern / 12 pm Central / 10 am Pacific
Add this event to your calendar.

Webinar (PC, Mac, Linux, iOS, Android): Launch Zoom webinar

iPhone one-tap (US Toll): +14086380968,350667546#
(or): +16465588656,350667546#

Telephone (US Toll): Dial: +1 408 638 0968
(or) +1 646 558 8656
International numbers available: Zoom International Dial-in Numbers
Meeting ID: 350 667 546

See the Events page for details of upcoming presentations. Upcoming events are also posted to the Training category of XSEDE News.

Due to the large number of attendees, only the presenters and host broadcast audio. Attendees may submit chat questions to the presenters through a moderator.

Video library

Videos and slides from past presentations are available (see below). Presentations from prior years that are not listed below may be available in the archive.

Key Points
  • Monthly technical exchange
  • Presented by the ECSS community
  • Open to everyone
  • Tutorials and talks with Q&A

January 21, 2014

Pushing the Integration Envelope of Cyberinfrastructure to Realize the CyberGIS Vision
Presenter: Shaowen Wang, (NCSA)

Presentation Slides

CyberGIS, geographic information science and systems (GIS) based on advanced cyberinfrastructure, has emerged during the past several years as a vibrant interdisciplinary field. It has played essential roles in enabling computing- and data-intensive research and education across a broad swath of academic disciplines with significant societal impact. However, fulfilling such roles is increasingly dependent on the ability to simultaneously process and visualize complex and very large geospatial data sets and conduct associated analyses and simulations, which often require tight integration of collaboration, computing, data, and visualization capabilities. This presentation addresses this requirement as a set of challenges and opportunities for advancing cyberinfrastructure and related sciences while discussing the state of the art of CyberGIS.


December 17, 2013

Presenter(s):
Nate Coraor, (Penn State)
Philip Blood, (Pittsburgh Supercomputing Center)
Rich LeDuc, (National Center for Genome Analysis Support)
Yu Ma, (Indiana University)
Ravi Madduri, (Argonne National Laboratory)

Presentation Slides

We will present a symposium describing current and planned efforts to enable more scientists to easily, transparently, and reproducibly analyze and share large-scale next-generation sequencing data with the Galaxy framework. 

Topics and speakers are as follows:

  • James Taylor (Galaxy Team): The future of Galaxy.
  • Philip Blood (Pittsburgh Supercomputing Center): Integrating Galaxy Main with XSEDE and establishing an XSEDE Galaxy Gateway.
  • Rich LeDuc and Yu Ma (National Center for Genome Analysis Support (NCGAS), Indiana U): Utilizing Galaxy at NCGAS with integrated InCommon authentication.
  • Ravi Madduri (Argonne National Laboratory): Experiences in building a next-generation sequencing analysis service using Galaxy, Globus Online, and Amazon Web Services.


October 15, 2013

Research Data Management-as-a-Service with Globus Online
Presenter: Rachana Ananthakrishnan, (University of Chicago)

Presentation Slides

As science becomes more computation- and data-intensive, there is an increasing need for researchers to move and share data across institutional boundaries. Managing massive volumes of data throughout their lifecycle is rapidly becoming an inhibitor to research progress, due in part to the complex and costly IT infrastructure required – infrastructure that is typically out of reach for the hundreds of thousands of small and medium labs that conduct the bulk of scientific research.

Globus Online is a powerful system that aims to provide easy-to-use services and tools for research data management – as simple as the cloud-hosted Netflix for streaming movies, or Gmail for e-mail – and make advanced IT capabilities available to any researcher with access to a Web browser. Globus Online provides software-as-a-service (SaaS) for research data management, including data movement, storage, sharing, and publication. We will describe how researchers can deal with data management challenges in a simple and robust manner.

Globus Online makes large-scale data transfer and synchronization easy by providing a reliable, secure, and highly monitored environment with powerful and intuitive interfaces. Globus also provides federated identity and group management capabilities for integrating Globus services into campus systems, research portals, and scientific workflows. New functionality includes data sharing, simplifying collaborations within labs or around the world. Tools built specifically for IT administrators on campuses and computing facilities give additional features, controls, and visibility into users' needs and usage patterns.

We will present use cases that illustrate how Globus Online is used by campuses (e.g., the University of Michigan), supercomputing centers (e.g., Blue Waters, NERSC), and national cyberinfrastructure providers (e.g., XSEDE) to facilitate secure, high-performance data movement among local computers and HPC resources.
We will also outline the simple steps required to create a Globus Online endpoint and make the service available to all facility users without specialized hardware, software or IT expertise. There will be a live demonstration of how to use Globus Online.


September 17, 2013

Introducing the XSEDE Workflow Community Applications Team
Presenter: Marlon Pierce, (IU)

Presentation Slides

Many computational problems have sophisticated execution patterns, or scientific workflows, that build on top of the basic scheduling and queuing structures offered by XSEDE service providers. Examples include techniques for large-scale parameter space exploration and the coupling of several independently developed applications into new, composite computational experiments. Such executions may occur across multiple resources as well, using the optimal resource for each application in the dependency chain. Managing these complex executions is only part of the problem: scientists must be able to capture the metadata associated with a particular set of runs and share their workflows within their teams and with collaborators. Many academic groups have devoted significant research efforts to building workflow software that addresses one or more of these problems. The goal of the newly constituted XSEDE Workflow Community Applications Team is to assist scientists in applying workflow software to their research on XSEDE. The long-term goal of the XSEDE workflow team is to assist XSEDE in providing a well-documented, reliable environment for the execution of scientific workflows in partnership with workflow software development teams.
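The dependency-chain execution described above can be sketched with a minimal workflow driver. This is a hypothetical illustration, not any of the workflow systems the team supports: `run_workflow`, the task names, and the callables are all invented for the example.

```python
from graphlib import TopologicalSorter

def run_workflow(tasks, deps):
    """Run tasks in dependency order.

    tasks maps a task name to a callable taking the dict of results
    produced so far; deps maps a task name to the set of names it
    depends on.  TopologicalSorter guarantees every prerequisite
    runs before its dependents."""
    results = {}
    for name in TopologicalSorter(deps).static_order():
        results[name] = tasks[name](results)
    return results

if __name__ == "__main__":
    # A three-step chain: fetch -> analyze -> plot.
    deps = {"fetch": set(), "analyze": {"fetch"}, "plot": {"analyze"}}
    tasks = {
        "fetch": lambda r: [1, 2, 3],                   # stand-in for staging data
        "analyze": lambda r: sum(r["fetch"]),           # stand-in for a compute job
        "plot": lambda r: "plotted %d" % r["analyze"],  # stand-in for post-processing
    }
    print(run_workflow(tasks, deps)["plot"])
```

A real workflow system layers metadata capture, remote job submission, and fault tolerance on top of exactly this ordering constraint.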

Supporting non-traditional users as they leverage XSEDE into their research
Presenter: Roberto O. Gomez, (PSC)

Presentation Slides

Our role as service providers of HPC has in the past required us to offer user support to, for lack of a better word, ‘traditional' HPC users, who are usually well versed in parallel computing and are familiar with supercomputing centers and the mechanics of using large resources. We tend to think that this is the normal case, irrespective of their research discipline. Our support efforts are typically dominated by porting users' codes (which are already parallel) to new architectures, tuning these codes to improve their performance, and installing software to satisfy specific user requirements. Of course, we do offer training for new users, and we deal individually with ‘green' members of the research teams with which we interact. But by and large, our support role might be more accurately described as ‘software support' rather than ‘user support'. As we try to open access to HPC resources to a wider audience and bring in users from research areas not normally associated with HPC, however, this paradigm we had grown comfortable with flips on its head. Non-traditional HPC users bring with them a different set of requirements that put more emphasis on the ‘user' side of the support process. We will illustrate this with some examples of recent NIP and ECSS activities at PSC.



August 20, 2013

Engineering Breakthroughs at NCSA (XSEDE, Blue Waters, Industry)
Presenter: Seid Koric, (NCSA)

Presentation Slides

Examples of some of my recent academic and industrial HPC work and collaborations are provided. Application examples range from manufacturing and multiphysics material-processing simulations to massively parallel linear solvers. The parallel scalability of engineering codes on iForge (an exclusive industrial HPC resource at NCSA) and Blue Waters (the NSF-funded sustained-petascale system) is included.

Identification of Mechanism Based Inhibitors of Oncogene Pathways Using High-Performance Docking
Presenter: Bhanu Rekepalli, (NICS)
Principal Investigator: Yuri Peterson (MUSC)

Presentation Slides

Modern high-throughput drug discovery is a complex and costly endeavor. Since small-molecule chemical space currently numbers in the millions of compounds and is growing rapidly, performing in vitro and cell-based screening is complex and cost-prohibitive even for small subsets of molecules without significant investment in expertise and infrastructure. Thus, computational approaches are increasingly incorporated in drug discovery. Due to limited computational resources, most computational drug discovery efforts are limited to either using rigid conformer libraries, to bypass the need for more computationally intensive flexible docking, or performing flexible docking on small subsets (tens of thousands). High-performance computing resources such as Kraken have given biomedical researchers access to unprecedented amounts of computing power, opening the possibility of exploring in hours immense chemical space that would previously have taken years on local clusters. Our working hypothesis is that High-Performance Docking (HP-D) allows probing of vast amounts of chemical space that is impractical by any other means. To test this hypothesis we have instrumented the docking program DOCK6 and compared performance between a small academic parallel cluster and Kraken. We show a dramatic and scalable increase in performance that allows the exploration of vast amounts of chemical space to identify compounds that dock to proteins in an ovarian cancer pathway.
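The screening pattern behind HP-D, many independent docking tasks with no communication between them, can be sketched as follows. `dock_score` is a hypothetical stand-in for invoking DOCK6 on one ligand; only the driver pattern is meant to be representative.

```python
from multiprocessing import Pool

def dock_score(ligand):
    """Hypothetical stand-in for docking one ligand: a real HP-D run
    would invoke DOCK6 here.  We hash the name into a pseudo-score so
    the driver is runnable on its own (lower is better)."""
    return ligand, sum(ord(c) for c in ligand) % 100

def screen(library, workers=4):
    """Embarrassingly parallel virtual screen: one ligand per task,
    no communication between tasks, results ranked at the end."""
    if workers == 1:
        results = list(map(dock_score, library))   # serial fallback for debugging
    else:
        with Pool(workers) as pool:
            results = pool.map(dock_score, library)
    return sorted(results, key=lambda r: r[1])

if __name__ == "__main__":
    hits = screen(["lig%04d" % i for i in range(100)])
    print(hits[:3])   # best-ranked candidates
```

Because each ligand is scored independently, throughput scales with worker count until the task launch overhead or shared filesystem becomes the bottleneck, which is the regime the abstract describes on Kraken.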


June 18, 2013

Atomistic Characterization of Stable and Metastable Alumina Surfaces
Presenter: Sudhakar Pamidighantam, (NCSA)
Principal Investigator: Douglas Spearot, Co-PI: Shawn Coleman (University of Arkansas)

Presentation Slides

This presentation describes work in progress for the extended collaborative support requested to assist the PIs with (1) improving the parallel scalability of the virtual diffraction algorithm implemented as a "compute" in LAMMPS, (2) creating a workflow that automates the transfer of atomistic simulation results from TACC Stampede to SDSC Gordon in order to perform the virtual diffraction analysis, and (3) visualization. Before the virtual diffraction algorithm is made publicly available and incorporated into the main distribution of LAMMPS, the performance of the code must scale to larger atomic models. Currently the algorithm is memory-bound, likely due to the simple MPI parallelization used. Strategies for and implementation of scaling methods in the compute will be described. Visualization using the VisIt system with various configuration protocols is being implemented. Integration into the GridChem science gateway and user interactions will be discussed. A plan for the workflow implementation will be presented.

 


April 16, 2013

Data Analysis on Massive Online Game Logs
Presenter: Dora Cai (NCSA)
Principal Investigator: Marshall Scott Poole (UIUC)

Presentation Slides

This presentation will describe work with Professor Marshall Scott Poole entitled "Some Assembly Required: Using High Performance Computing to Understand the Emergence of Teams and Ecosystems of Teams". Massively Multiplayer Online Games (MMOGs) provide unique opportunities to investigate large social networks. This project is a multidisciplinary Social Sciences project dedicated to the study of communication-related behaviors using data from MMOGs. A twenty-person team of scholars from four universities is engaged in the study. The project has performed systematic studies in many research areas, such as social network analysis, gamer behavior studies, and virtual world simulation. The Gordon supercomputer has provided great support for this project. This talk will provide an overview of the project, describe my involvement as an ECSS consultant, and present recent progress on developing a tool to visualize the social networks in MMOGs.

Development of Novel Quantum Chemical Molecular Dynamics for Materials Science Modeling
Presenter: Jacek Jakowski (NICS)
Co-Principal Investigators: Jacek Jakowski (NICS), Sophya Garashchuk (U. of South Carolina), Steve Stuart (Clemson University), Predrag Krstic (University of Tennessee & ORNL), Stephan Irle (Nagoya University)

I will present my work on the project titled "Modeling of nano-scale carbon and metalized carbon materials for the 'EPSCoR Desktop to TeraGrid EcoSystems project'". This project contains several subprojects that focus on the development and application of various molecular dynamics approaches to materials science problems. I will particularly discuss the development and parallelization of Bohmian dynamics for modeling quantum nuclear effects of selected nuclei. Implementation and scaling on Kraken at NICS will be presented, along with illustrative science problems.


March 19, 2013

Visualization of Volcanic Eruption Simulations (CFDLib)
Presenter: Amit Chourasia (SDSC)
Principal Investigator: Darcy Ogden (SIO, UCSD)

Presentation Slides

Eruptive conduits feeding volcanic jets and plumes are connected to the atmosphere through volcanic vents that, depending on their size and 3D shape, can alter the dynamics and structure of these eruptions. The host rock comprising the vent, in turn, can collapse, fracture, and erode in response to the eruptive flow field. This project uses visualization to illustrate and analyze results from fully coupled numerical simulations of high-speed, multiphase volcanic mixtures erupting through erodible, visco-plastic host rocks. This work explores the influence of different host rock rheologies and eruptive conditions on the development of simulated volcanic jets. The visualizations show the dependence of lithic segregation in the plume on eruption pressure.

Using Hybrid MPI+OpenMP Approach to Improve the Scalability of a Phase-Field-Crystal Code
Presenter: Reuben Budiardja (NICS)
Principal Investigator: Katsuyo Thornton (University of Michigan)

Presentation Slides

The Phase-Field-Crystal (PFC) model is a recent development in the computational modeling of nanostructured materials that addresses the challenge of understanding complex processes in nanostructure growth and self-assembly. A PFC-based code requires good scalability and time-to-solution to perform calculations with sufficient resolution on the dynamics of metals. In this talk we will describe the work in improving the scalability of a PFC code. At the heart of the code is the solution of multiple indefinite Helmholtz equations. We will discuss the hybrid OpenMP + MPI approach to improving the time-to-solution by exploiting the different parallelisms that exist in the code.
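The two-level parallelism described above can be sketched in miniature. This is an illustration of the pattern only, not the PFC code: Python processes stand in for MPI ranks, vectorized NumPy array work stands in for OpenMP-threaded loops, and the periodic one-dimensional Helmholtz solve is a toy problem.

```python
import numpy as np
from multiprocessing import Pool

N = 64  # grid points on [0, 2*pi)

def solve_helmholtz(k2):
    """Spectral solve of (d2/dx2 + k2) u = f on a periodic 1-D grid.

    The whole-array FFT and divide are the 'inner' parallelism,
    standing in for the OpenMP-threaded loops of the real code."""
    x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    f = np.sin(3.0 * x)                    # sample right-hand side
    q = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
    u_hat = np.fft.fft(f) / (k2 - q ** 2)  # operator is diagonal in Fourier space
    return np.fft.ifft(u_hat).real

if __name__ == "__main__":
    # 'Outer' parallelism: the independent Helmholtz solves are
    # distributed across processes, standing in for MPI ranks.
    with Pool(4) as pool:
        solutions = pool.map(solve_helmholtz, [0.5, 1.5, 2.5, 3.5])
```

For f = sin(3x) the exact solution is sin(3x)/(k2 - 9), which makes the toy solve easy to verify; in the hybrid code the same two levels appear as MPI ranks owning subdomains and OpenMP threads sharing each rank's loops.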


February 19, 2013

I/O Analysis for the Community Multiscale Air Quality (CMAQ) Simulation
Presenter: Kwai Wong (University of Tennessee)
Principal Investigator: Joshua Fu (University of Tennessee)

Presentation Slides

The Community Multiscale Air Quality (CMAQ) Model is commonly used by many researchers to simulate ozone, particulate matter (PM), toxics, visibility, and acidic and nutrient pollutants throughout the troposphere. The scale of the model ranges from urban (a few kilometers) to regional (hundreds of kilometers) to inter-continental (thousands of kilometers) transport. Depending on the time and length scales of a simulation, the amount of I/O will affect the overall performance of the simulation. In this presentation, we will examine the steps and results of the I/O procedures used in the code.
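The kind of measurement behind such an I/O analysis can be sketched as a timed write/read round trip; `time_io` and the synthetic payload are assumptions for illustration, not CMAQ's actual I/O routines.

```python
import os
import tempfile
import time

def time_io(nbytes):
    """Time one write+read round trip of nbytes through a scratch file,
    returning (elapsed_seconds, data_intact)."""
    data = b"x" * nbytes
    fd, path = tempfile.mkstemp()
    t0 = time.perf_counter()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        with open(path, "rb") as f:
            back = f.read()
    finally:
        os.remove(path)
    return time.perf_counter() - t0, back == data

if __name__ == "__main__":
    # Sweep output sizes to see how cost grows with I/O volume.
    for nbytes in (2 ** 10, 2 ** 20, 2 ** 24):
        elapsed, ok = time_io(nbytes)
        print("%8d bytes: %.4f s (intact=%s)" % (nbytes, elapsed, ok))
```

Instrumenting each I/O step this way, rather than timing the whole run, is what lets an analysis attribute performance loss to specific read and write phases as the simulation's time and length scales grow.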

Improving the performance and efficiency of an inverse stiffness mapping problem
Presenter: Carlos Rosales (TACC)
Principal Investigator: Lorraine Olson (Rose-Hulman Institute of Technology)

Presentation Slides

In this talk we will discuss the improvements to a legacy Fortran code used by the PI to investigate early-stage breast cancer detection using an inverse stiffness mapping approach. The talk will describe the basic methodology, the original naive attempts at improving its performance, and the computational trick used to substitute a more effective MUMPS and BLAS combination for the original solver.
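Much of the payoff from swapping in a MUMPS/BLAS combination comes from factoring the matrix once and reusing the factorization across many right-hand sides. A minimal dense sketch of that idea, with `solve_many` and the dense `np.linalg.solve` call as illustrative stand-ins for the actual sparse solver:

```python
import numpy as np

def solve_many(A, rhs_list):
    """Solve A x = b for many right-hand sides with one factorization.

    Stacking the vectors as columns lets LAPACK factor A once and
    back-substitute for every column, instead of refactoring A for
    each solve as a naive loop over np.linalg.solve(A, b) would."""
    B = np.column_stack(rhs_list)
    X = np.linalg.solve(A, B)   # one LU factorization, many back-substitutions
    return [X[:, j] for j in range(X.shape[1])]

if __name__ == "__main__":
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    xs = solve_many(A, [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

In an inverse problem the same stiffness matrix is solved against many right-hand sides per iteration, so amortizing the factorization this way dominates the cost savings.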

