ECSS Symposium

ECSS staff share technical solutions to scientific computing challenges monthly in this open forum.

The ECSS Symposium gives the more than 70 ECSS staff members a monthly opportunity to exchange information about successful techniques used to address challenging science problems. Tutorials on new technologies may be featured. Two 30-minute, technically focused talks are presented each month, each including a brief question-and-answer period. This series is open to everyone.

Symposium coordinates

Day and Time: Third Tuesdays @ 1 pm Eastern / 12 pm Central / 10 am Pacific

Webinar (PC, Mac, Linux, iOS, Android): Launch Zoom webinar

iPhone one-tap (US Toll): +16468769923,,114343187# (or) +16699006833,,114343187#

Telephone (US Toll): for higher quality, dial a number based on your current location:

US: +1 646 876 9923 (or) +1 669 900 6833 (or) +1 408 638 0968

Meeting ID: 114 343 187

Upcoming events are also posted to the Training category of XSEDE News.

Due to the large number of attendees, only the presenters and host broadcast audio. Attendees may submit chat questions to the presenters through a moderator.

Key Points
Monthly technical exchange
Presented by the ECSS community
Open to everyone
Tutorials and talks with Q & A

Previous years' ECSS seminars may be accessed through these links:

2017

2016

2015

2014

August 16, 2016

Re-presenting Large Image Collections for Data Mining and Analysis

Presenter(s): Paul Rodriguez (SDSC)
Principal Investigator(s): Elizabeth Wuerffel (Valparaiso University), Alison Langmead (University of Pittsburgh)

Presentation Slides

I will discuss two NIP/ECSS projects that both involve primarily image analysis in the context of the digital humanities (Image Analysis of Rural Photography, PI Wuerffel; Decomposing Bodies [aka Image Analysis of Bertillon Prison Cards], PI Langmead). Both of them are superficially about taking old B&W photograph collections and 'digitizing' them. In a general sense, the goal is to re-represent the data so that the digital humanist can perform particular socio/historical/artistic/cultural analyses. In a more practical sense, the goal is to extract features from the images and metadata and provide infrastructure support for analysis. Part of our challenge is to line up these two goals.

I will also discuss the technical and programmatic aspects, mostly for my own pieces of the projects. Although the processes, project logistics, and infrastructure are very similar between projects, the actual feature-extraction code and data products have little overlap, owing to the nature of the image data themselves. Feature extraction for both projects primarily involves assembling techniques and tools from open-source packages; the trickier aspects require coming up with good strategies for applying the techniques, evaluating how well they work on this data, and exploring possible methods that might be useful to the user.
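As an illustration of that assembly-of-open-source-tools approach, here is a minimal sketch of extracting simple global features from a scanned B&W photograph with OpenCV; the file name and the particular features are hypothetical, not the projects' actual pipeline.

```python
# Illustrative only: a minimal feature-extraction pass over a B&W photograph
# using OpenCV. The file name and feature choices are hypothetical; the
# actual projects used their own assemblies of open-source tools.
import cv2
import numpy as np

img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)  # load as 8-bit grayscale

# Global tonal statistics: a 16-bin intensity histogram, normalized.
hist = cv2.calcHist([img], [0], None, [16], [0, 256]).flatten()
hist /= hist.sum()

# Edge density as a crude proxy for visual complexity.
edges = cv2.Canny(img, 100, 200)
edge_density = np.count_nonzero(edges) / edges.size

features = np.concatenate([hist, [edge_density]])
print(features)
```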

Experiences running Dynamic Traffic Assignment Simulations at scale using HPC Infrastructure

Presenter(s): Amit Gupta (TACC)
Principal Investigator(s): Natalia Ruiz Juri (UT)

Presentation Slides

Dynamic Traffic Assignment (DTA) simulations are an important analysis tool for transportation researchers attempting to model complex interactions between travelers and transportation infrastructure. These simulation frameworks are complex to develop, maintain, and extend. VISTA is a widely used transportation simulation framework providing Dynamic Traffic Assignment. I discuss our experiences in scaling VISTA on the Stampede system as an exemplar of how HPC infrastructure and tools can augment analysis workflows in transportation research by significantly speeding up simulation experiments. I also discuss some challenges and tradeoffs in enabling DTA frameworks for use in HPC environments, as well as directions for continuing and future work under ECSS support.


June 21, 2016

SeedMe platform: Enabling scriptable data sharing

Presenter(s): Amit Chourasia (SDSC)

Abstract: Most scientific computation and analyses create important transient data and preliminary results. Quick and effective access to and assessment of this data is necessary for efficient use of researchers' time and computation resources, but the process is complicated when a large collaborating team is geographically dispersed and/or some team members do not have direct access to the computation resource and output data. Current methods for sharing and assessing transient data and preliminary results are cumbersome, labor intensive, and largely unsupported by useful tools and procedures. Each research team is forced to create its own ad hoc procedures to push results from system to system, and user to user, to guide the next step in their research.

In this talk we introduce the SeedMe platform, which provides a web-based cyberinfrastructure to enable easy sharing and streaming of transient data and preliminary results directly from computing resources to a variety of platforms, from mobile devices to workstations. The SeedMe platform is open to all researchers and provides web-browser-based as well as scriptable tools for easy integration with ad hoc computation workflows. The talk will also briefly discuss applications and use cases that may be relevant for ECSS and Science Gateway projects.
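To give a feel for what "scriptable sharing" means in practice, here is a hedged sketch of pushing a preliminary plot from a compute job to a web collection with a single HTTP call; the endpoint, field names, and API-key scheme are illustrative placeholders, not SeedMe's actual REST interface.

```python
# Hypothetical sketch of scriptable sharing in the SeedMe style: push a
# preliminary plot from a compute job to a web service with one HTTP call.
# The endpoint, field names, and credential scheme are placeholders.
import requests

API_URL = "https://example.org/api/collection/upload"  # placeholder URL
API_KEY = "your-api-key"                               # placeholder credential

with open("step_0420.png", "rb") as f:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        data={"collection": "turbulence-run-42", "note": "vorticity, step 420"},
        files={"file": f},
    )
resp.raise_for_status()
print(resp.json())
```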

Biography: Amit Chourasia is a Sr. Visualization Scientist at the San Diego Supercomputer Center (SDSC), UC San Diego. He leads the Visualization group, where his work focuses on the research, development, and application of software tools and techniques for visualization. A key area of his work is developing methods to represent data in a visual form that is clear, succinct, and accurate (a challenging yet very exciting endeavor). Data sharing is also at the forefront of his interests; to this end he has developed a web-based infrastructure, the SeedMe project, to address this important and at times critical gap in the scientific process.


May 17, 2016

Turning on Performance in LAMMPS Molecular Dynamics

Presenter(s): Kent Milfeld (TACC)
Principal Investigator(s): Peter Koenig (Procter & Gamble)

Presentation Slides

LAMMPS is a large-package atomistic and molecular dynamics simulator. Through the Industrial Challenge Program, TACC supported Peter Koenig (PI) in using LAMMPS on the Stampede system. The objective of the program was to create atomistic and particle simulations that could be used to determine micellar properties, confirming and replacing experiments that develop rheology models for mixing, filling, and product performance predictions. The presentation will focus on the support work: LAMMPS optimizations, which included a few code changes; a description of adding new classes (styles) for modifying interactions; and other efforts supporting efficient use of the Stampede system.
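For readers unfamiliar with driving LAMMPS programmatically, the following minimal sketch runs a stock Lennard-Jones melt through the LAMMPS Python wrapper; it assumes a LAMMPS build with the Python module enabled and does not reproduce the micelle models or custom styles from the project.

```python
# A minimal sketch of driving LAMMPS through its Python wrapper (requires a
# LAMMPS build with the Python module enabled). The Lennard-Jones melt below
# is a stock toy system, not the micelle models from the project.
from lammps import lammps

lmp = lammps()
for cmd in [
    "units lj",
    "atom_style atomic",
    "lattice fcc 0.8442",
    "region box block 0 10 0 10 0 10",
    "create_box 1 box",
    "create_atoms 1 box",
    "mass 1 1.0",
    "pair_style lj/cut 2.5",
    "pair_coeff 1 1 1.0 1.0 2.5",
    "velocity all create 1.44 87287",
    "fix 1 all nve",
]:
    lmp.command(cmd)   # feed the input script line by line
lmp.command("run 100")
```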

Curation en masse: Exploration of the Quality of Video Collections

Presenter(s): Anne Bowen (TACC)
Principal Investigator(s): Alan Bovik (UT)

Presentation Slides

TACC provided support for Alan Bovik (Laboratory for Image and Video Engineering at UT Austin) to assess the use of automatic quality-assessment algorithms at a large scale for museum digital video collections. This project involved developing a visual analysis tool and workflow for massive video quality assessment on TACC systems using the BRISQUE algorithm. The presentation will give an overview of the quality-assessment workflow and specifically focus on the challenges we encountered in using BRISQUE (and non-referential quality assessment algorithms in general) on museum collections. These challenges prompted the development of the visual analysis tool to assist with interpretation of the results.
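As a rough sketch of what per-frame, no-reference scoring looks like, the snippet below scores sampled frames of a clip with OpenCV's contrib "quality" implementation of BRISQUE. The video path, sampling stride, and use of OpenCV's pretrained model and range files are assumptions; this is not the project's actual TACC workflow.

```python
# A hedged sketch of per-frame no-reference quality scoring with BRISQUE,
# using OpenCV's contrib "quality" module (opencv-contrib-python). The model
# and range files are the pretrained ones shipped with OpenCV's samples;
# paths and the sampling stride are illustrative.
import cv2

MODEL = "brisque_model_live.yml"   # pretrained model file (assumed present)
RANGE = "brisque_range_live.yml"   # matching range file (assumed present)

cap = cv2.VideoCapture("archive_clip.mp4")
scores, idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 30 == 0:  # score one frame in 30 to keep the pass cheap
        score = cv2.quality.QualityBRISQUE_compute(frame, MODEL, RANGE)[0]
        scores.append((idx, score))  # lower BRISQUE score = better quality
    idx += 1
cap.release()
print(scores)
```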


April 19, 2016

How to Tune and Extract Higher Performance with MVAPICH2 Libraries

Presenter(s): Dhabaleswar K. (DK) Panda (Ohio State)

Presentation Slides

The Ohio State University MVAPICH2 libraries support the latest MPI 3.1 standard and deliver high performance, scalability, and fault tolerance for high-end computing systems using InfiniBand, Omni-Path, 10-40 GigE/iWARP, and RoCE (v1 and v2) networking technologies. The MVAPICH2-GDR library uses novel designs to exploit the cutting-edge GPUDirect technology to provide high performance for MPI applications on systems with NVIDIA GPUs. These libraries have multiple features, parameters, and knobs to optimize performance on modern systems; however, many users are not fully aware of all these features and the associated optimization and tuning techniques. This talk aims to address these concerns and provide a set of concrete guidelines to XSEDE users to boost the performance of their applications. We will start with an overview of the MVAPICH2 libraries and their features and optimized designs. Next, we will provide an in-depth overview of the runtime optimizations and tuning flexibility. We will demonstrate how you can tune and optimize these libraries to fit the needs of your application on a given system. Using a set of 'Best Practice' examples, we will highlight the impact of tuning and optimizations on a set of common XSEDE applications including Amber, LULESH, HOOMD-blue, and MILC.
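As a hedged illustration of how such runtime knobs are exercised, the small mpi4py ping-pong below can be run under different MVAPICH2 environment settings to compare point-to-point timings. MV2_ENABLE_AFFINITY and MV2_CPU_BINDING_POLICY are documented MVAPICH2 environment variables, but the message size and repetition count here are arbitrary choices for the sketch.

```python
# A small mpi4py ping-pong timer, usable to check how MVAPICH2 runtime knobs
# affect point-to-point latency. Run with exactly 2 ranks, e.g.:
#   MV2_ENABLE_AFFINITY=1 MV2_CPU_BINDING_POLICY=scatter \
#       mpirun -np 2 python pingpong.py
# (Consult the MVAPICH2 user guide for the full set of variables.)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1 << 20, dtype=np.uint8)  # 1 MiB message
reps = 100

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1); comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0); comm.Send(buf, dest=0)
t1 = MPI.Wtime()
if rank == 0:
    print(f"avg round trip: {(t1 - t0) / reps * 1e6:.1f} us")
```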

Biography: DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at the Ohio State University. He has published over 350 papers in the area of high-end computing and networking. The MVAPICH2 (High Performance MPI and PGAS over InfiniBand, iWARP and RoCE) libraries, designed and developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,550 organizations worldwide (in 79 countries). More than 360,000 downloads of this software have taken place from the project's site. This software is empowering several InfiniBand clusters (including the 10th, 13th, and 25th ranked ones) in the TOP500 list. The RDMA packages for Apache Spark, Apache Hadoop, and Memcached, together with the OSU HiBD benchmarks from his group (http://hibd.cse.ohio-state.edu), are also publicly available. These libraries are currently being used by more than 160 organizations in 22 countries, and more than 15,900 downloads have taken place. He is an IEEE Fellow. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda


March 15, 2016

XDMoD (XD Metrics Service)

Presenter(s): Thomas Furlani (University at Buffalo, SUNY)

Presentation Slides

The University at Buffalo's XDMoD tool provides for the comprehensive management of HPC systems, including the ability to provide performance data for all jobs running on a cluster. Using the XDMoD Job Viewer, system support personnel can readily identify poorly performing jobs, with the end goal of working with the end user to improve performance.

In this presentation, we will begin with a brief PowerPoint overview of XDMoD with an emphasis on the Job Viewer tab. This will be followed by a live demo that uses the Job Viewer within XDMoD to analyze various XSEDE jobs. The demo will be interactive, allowing ECSS staff to help guide it.

Link to the recorded presentation (may require download of proprietary software to view the video).

The XDMoD team is interested in collecting feedback on usability. David LaVergne is conducting one-on-one interviews with users regarding their use of the current interface (mainly the Usage and Metrics Explorer tabs, as well as the new Job Viewer tab). He would like to know generally what information user-support staff find most (and least) useful, and how well the current interface (and a proposed redesign) supports that. Please contact him if interested; a small gift is involved!


February 16, 2016

fMRI image registration with AFNI's 3dQwarp

Presenter(s): Junqi Yin (NICS)
Principal Investigator(s): Frank Skidmore (University of Alabama at Birmingham)

The Analysis of Functional NeuroImages (AFNI) software package is widely used in the community for brain MR image analysis. For many types of analysis workflows, one important step is to register a subject's image to a pre-defined template so that different subjects can be compared within a normalized coordinate system. This is especially challenging if the subject has brain atrophy due to a neurological condition such as Parkinson's disease. The 3dQwarp code in AFNI is a non-linear image registration procedure that overcomes the drawbacks of a linear affine transformation. However, the existing OpenMP instrumentation in 3dQwarp is not efficient for small-patch optimization, and the lack of convergence criteria in the iterative algorithm also hurts accuracy. Based on profiling and benchmarking, we have been working on optimizing its OpenMP structure and improving warped-image fidelity, which can be used for voxel-to-voxel downstream analysis.

ECSS-er Junqi Yin (NICS) will share observations from his work with PI Frank Skidmore (University of Alabama at Birmingham) on Blacklight and Greenfield to optimize AFNI (Analysis of Functional NeuroImages), a widely used neuroimaging package.
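To make the convergence point concrete, here is an illustrative (non-AFNI) sketch of an iterative optimization loop with an explicit relative-improvement stopping rule, the kind of criterion the abstract notes 3dQwarp's iteration lacked; the toy cost function stands in for the patch-matching objective.

```python
# Illustrative only: an explicit convergence test for an iterative optimizer.
# Nothing below is AFNI code; the quadratic cost is a stand-in objective.
import numpy as np

def num_grad(f, x, h=1e-5):
    """Forward-difference gradient of f at x."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        xp = x.copy(); xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def optimize(cost, x0, step=0.1, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    prev = cost(x)
    for it in range(max_iter):
        x = x - step * num_grad(cost, x)
        cur = cost(x)
        # stop when relative improvement falls below tol
        if abs(prev - cur) <= tol * max(1.0, abs(prev)):
            return x, it
        prev = cur
    return x, max_iter  # hit the iteration cap without converging

# toy quadratic "mismatch" standing in for the registration objective
x, iters = optimize(lambda x: np.sum((x - 3.0) ** 2), np.zeros(4))
print(x, iters)
```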

Boosting molecular dynamics with advanced hardware and algorithms

Presenter(s): Lei Huang (TACC)
Principal Investigator(s): Dr. Doraiswami Ramkrishna (Purdue)

Presentation Slides

There are several open-source packages available for general-purpose molecular dynamics (MD) simulations. However, researchers still need to develop their own MD engines under special circumstances. Dr. Doraiswami Ramkrishna's group at Purdue developed a package for umbrella sampling and molecular dynamics for polymorph prediction. By leveraging the power of the Intel Xeon Phi and adopting several advanced algorithms in molecular dynamics, we achieved a ~9x speedup and performance superior to LAMMPS.

Lei Huang (TACC) will tell us about his work with PI Doraiswami Ramkrishna (Purdue) to port several advanced molecular dynamics algorithms to the Xeon Phi on Stampede, achieving a factor-of-nine speedup.
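For context on the sampling method itself, below is a hedged sketch of the harmonic bias at the heart of umbrella sampling: each window restrains the reaction coordinate near a chosen center. The spring constant and window centers are illustrative, and this is not the group's engine.

```python
# A hedged sketch of the umbrella-sampling idea: a harmonic bias restrains
# the reaction coordinate q near a window center q0. Values are illustrative.
import numpy as np

def bias_energy_force(q, q0, k=50.0):
    """Harmonic umbrella bias V = 0.5*k*(q - q0)^2 and its force -dV/dq."""
    dq = q - q0
    return 0.5 * k * dq * dq, -k * dq

# sweep windows along the coordinate; each window gets its own biased run
for q0 in np.linspace(0.0, 1.0, 5):
    v, f = bias_energy_force(q=0.3, q0=q0)
    print(f"window q0={q0:.2f}: V_bias={v:.3f}, F_bias={f:.3f}")
```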


January 19, 2016

Performance Enhancements to PlascomCM

Presenter(s): Lucas A. Wilson (TACC)
Principal Investigator(s): Daniel Bodony (UIUC)

Presentation Slides

PlascomCM is a Fortran90 application used to investigate the behavior of compressible, viscous gases, usually in the contexts of aerospace or mechanical engineering and with a focus on turbulence and generated sound. Recent examples include predicting and controlling the noise produced by high-speed turbulent jets, such as those found on commercial and military aircraft, and a Mach 2.25 turbulent boundary layer grazing a flexible panel, with application to the multi-physics design of future hypersonic vehicles. The discretization of the governing non-linear partial differential equations uses an overset-mesh, multiblock approach with locally structured meshes, for which spatial derivatives are approximated with fixed-width stencil computations based on finite-difference-like considerations. This talk will highlight ECSS work done over the last three years to improve the performance of PlascomCM, with the end goal of efficiently using the Intel Xeon Phi coprocessors on Stampede. Code modifications that have improved caching and enabled vectorization will be highlighted, and further modifications currently being considered to improve performance on the Xeon Phi will also be discussed.
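As a language-neutral illustration of the stencil pattern described above, the NumPy sketch below applies a 5-point central-difference first derivative over a 1-D grid; PlascomCM itself is Fortran90, and its stencils, coefficients, and boundary closures differ.

```python
# Illustrative NumPy sketch of a fixed-width stencil computation: a 5-point
# central-difference first derivative on a 1-D grid. The vectorized slicing
# mirrors the regular access pattern a compiler must exploit on Xeon Phi.
import numpy as np

def d1_5pt(u, dx):
    """Interior 5-point central difference du/dx (boundary closures omitted)."""
    d = np.empty_like(u)
    d[2:-2] = (u[:-4] - 8*u[1:-3] + 8*u[3:-1] - u[4:]) / (12*dx)
    d[:2] = d[-2:] = np.nan  # real codes use one-sided boundary stencils
    return d

# check against an analytic derivative: d/dx sin(x) = cos(x)
x = np.linspace(0, 2*np.pi, 101)
print(np.nanmax(np.abs(d1_5pt(np.sin(x), x[1] - x[0]) - np.cos(x))))
```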

Apache Airavata and XSEDE Science Gateways

Presenter(s): Suresh Marru (IU)
Principal Investigator(s): Mark Shephard (Rensselaer Polytechnic Institute), Cameron Smith (Rensselaer Polytechnic Institute)

Presentation Slides

The symposium talk will walk through projects that initially started as optimization and code-porting ECSS efforts and were later extended to include gateway support. The resulting codes are made available to the community at large through these gateway interfaces. Examples will include PI Arne Pearlstein's flow-induced-vibration simulation gateway and PIs Mark Shephard and Cameron Smith's PHASTA gateway. The talk will also discuss the use of a multi-tenanted science gateway framework based on Apache Airavata as a starting point, and achieving short-term operational sustainability through externally funded NSF projects. Lastly, we will discuss the reuse of ECSS-contributed extensions across projects.


December 15, 2015

Bridges: Connecting Researchers, Data, and HPC

Presenter(s): Nick Nystrom (PSC)
Principal Investigator(s): Nick Nystrom (PSC)

Presentation Slides

Bridges is a new kind of supercomputer being built at the Pittsburgh Supercomputing Center (PSC) to empower new research communities, bring desktop convenience to supercomputing, expand campus access, and help researchers facing challenges in Big Data to work more intuitively. Funded by a $9.65M NSF award, Bridges consists of tiered, large-shared-memory resources with nodes having 12TB, 3TB, and 128GB of RAM; dedicated nodes for database, web, and data transfer; high-performance shared and distributed data storage; the Spark/Hadoop ecosystem; and powerful new CPUs and GPUs. Bridges is the first production deployment of Intel's new Omni-Path Architecture (OPA) fabric, which interconnects its nodes and storage. Bridges emphasizes usability, flexibility, and interactivity. Widely used languages and frameworks such as Java, Python, R, MATLAB, Hadoop, and Spark benefit transparently from large memory and the high-performance OPA fabric. Virtualization enables hosting web services, NoSQL databases, and application-specific environments, and enhances reproducibility. Bridges, allocated through XSEDE, is available at no charge to the open research community. Bridges is also available to industry through PSC's corporate programs.

Design of Experiments and Big Data Analytics for Energy Efficient Buildings

Presenter(s): Pragnesh Patel (NICS)
Principal Investigator(s): Joshua New (ORNL)

Presentation Slides

A central challenge in the domain of energy efficiency is being able to realistically model a specific class of building, scale those classes up to the entire United States building stock across ASHRAE climate zones, and then project how specific retrofits or retrofit packages would maximize return on investment for subsidies through federal, state, local, and utility tax incentives, rebates, and loan programs. Nearly all projections regarding energy savings, for any of the plethora of technologies required to address the need for US energy security, rely upon accurate models as the central primitive by which to integrate the national impact with meaningful measures of uncertainty, error, variance, and risk. This challenge is compounded by the fact that buildings, unlike cars or planes, are manufactured in the field at the time of construction based on one-off designs, with a median lifespan of 73 years. Due to variance of building materials, construction, and equipment (and the necessary flux of these over time), a given building is unlikely to closely resemble the prototypical building class. Therefore, each building needs to be modeled individually and precisely to achieve optimal retrofit and construction practices. We have developed a design of experiments for calibrating building energy models that minimizes the number of simulations required while maximizing the statistical resolution of the analysis results. Initial statistical analysis of parametric ensembles using techniques such as multiple analysis of variance (MANOVA) and a software infrastructure tying together several machine learning packages (MLSuite) have recently pushed the cutting edge of building energy analysis from about 10 inputs and 12-24 outputs to 156 inputs and 96 outputs. The science-enabling software infrastructure improved as part of this project includes R code for design of experiments and analysis (instantiated quickly on every parallel node/core) and the integration of the EnergyPlus code for large-scale simulation runs with the OpenDIEL workflow system, along with pre- and post-processing data analysis codes.
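As a hedged illustration of the design-of-experiments step (the project's actual designs were built in R), the Python sketch below draws a space-filling Latin hypercube over a 156-dimensional input space so that far fewer simulations are needed than a full grid would require; the run count and physical ranges are made up for the sketch.

```python
# A hedged Python analogue of a design-of-experiments step: sample a
# space-filling Latin hypercube over building-model inputs. The 156 inputs
# match the figure cited above; the run count and ranges are illustrative.
from scipy.stats import qmc

n_inputs, n_runs = 156, 512           # 512 runs is an assumed budget
sampler = qmc.LatinHypercube(d=n_inputs, seed=0)
unit = sampler.random(n=n_runs)       # points in [0, 1]^d, one per simulation

# rescale the first two inputs to made-up physical ranges (e.g. insulation
# R-value, infiltration rate); the rest stay on [0, 1]
lo = [10.0, 0.1] + [0.0] * (n_inputs - 2)
hi = [60.0, 1.5] + [1.0] * (n_inputs - 2)
design = qmc.scale(unit, lo, hi)      # rows = simulation runs to launch
print(design.shape)
```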


October 20, 2015

SoyKB pipeline on XSEDE - an overview

Presenter(s): Mats Rynge (USC/ISI)
Principal Investigator(s): Dong Xu (University of Missouri, Columbia)

Presentation Slides

The Soybean Knowledge Base project (http://soykb.org/) is resequencing more than 1,000 soybean germplasm lines using Illumina paired-end sequencing for multiple projects, selected for major traits including oil, protein, soybean cyst nematode (SCN) resistance, abiotic stress resistance (drought, heat, and salt), and root system architecture. In this talk we discuss how SoyKB uses XSEDE for the sequencing pipeline and how ECSS helped create the Pegasus workflow for the pipeline. We will also discuss our current effort of transitioning from TACC Stampede to TACC Wrangler.
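For a flavor of what a Pegasus workflow looks like in code, here is a minimal sketch of one alignment step using the Pegasus 5.x Python API; the transformation name, file names, and arguments are placeholders, and the real SoyKB workflow chains many more steps across the germplasm lines.

```python
# A minimal sketch of expressing one pipeline step with the Pegasus Python
# workflow API (Pegasus 5.x style). Transformation and file names are
# placeholders, not the actual SoyKB pipeline definition.
from Pegasus.api import Workflow, Job, File

reads1 = File("line0001_R1.fastq.gz")   # paired-end inputs (placeholders)
reads2 = File("line0001_R2.fastq.gz")
bam = File("line0001.bam")

align = (
    Job("bwa_mem")                      # assumes a "bwa_mem" transformation
    .add_args(reads1, reads2, "-o", bam)
    .add_inputs(reads1, reads2)
    .add_outputs(bam)
)

wf = Workflow("soykb-sketch")
wf.add_jobs(align)
wf.write("workflow.yml")                # hand off to pegasus-plan
```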


September 15, 2015

Asteroseismic Modeling Portal

Presenter(s): Haiying Xu (NCAR)
Principal Investigator(s): Travis Metcalfe (Space Science Institute)

Presentation Slides

The Asteroseismic Modeling Portal (AMP) is a community facility that allows astronomers to derive the fundamental properties of sun-like stars from observations of their natural vibrations. The underlying science code uses a parallel genetic algorithm to match the observations with standard theoretical models of stars. In the first five years of the project, AMP was applied to more than 100 stars observed by NASA's Kepler mission, yielding a uniform set of stellar properties that have been used to study the structure and evolution of stars and their planetary systems. Through the AMP gateway, more than 100 users around the world can easily submit jobs, retrieve results, and even analyze the performance of the source codes. Over eight years of operation, AMP users have submitted 30,424 jobs and consumed 18,795,892 SUs. XSEDE/ECSS objectives include updating the OS and related software of the servers, and optimizing the parallel performance of the AMP 2.0 science code, work carried out by TACC staff.
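To illustrate the genetic-algorithm idea behind AMP, here is a toy selection-and-mutation loop that evolves parameter vectors toward stand-in "observed" frequencies; the forward model, population size, and all GA settings are illustrative, not AMP's science code.

```python
# Illustrative sketch of the genetic-algorithm idea: evolve a population of
# stellar-model parameters toward observed oscillation frequencies. The
# forward model is a placeholder, not a stellar-evolution code, and this
# simplified loop uses selection plus mutation only (no crossover).
import numpy as np

rng = np.random.default_rng(0)
obs = np.array([1.2, 3.4, 2.1])                 # stand-in observed frequencies

def model(theta):
    """Placeholder forward model: parameters -> predicted frequencies."""
    return theta * np.array([1.0, 2.0, 1.5])

def fitness(theta):
    return -np.sum((model(theta) - obs) ** 2)   # higher is better

pop = rng.uniform(0, 3, size=(64, 3))           # initial random population
for gen in range(200):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-16:]]     # keep the fittest quarter
    children = parents[rng.integers(0, 16, 64)] # clone parents...
    children += rng.normal(0, 0.05, children.shape)  # ...and mutate
    pop = children
best = pop[np.argmax([fitness(t) for t in pop])]
print(best)   # should approach [1.2, 1.7, 1.4]
```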

