ECSS Symposium

ECSS staff share technical solutions to scientific computing challenges monthly in this open forum.

The ECSS Symposium allows the over 70 ECSS staff members to exchange information each month about successful techniques used to address challenging science problems. Tutorials on new technologies may be featured. Two 30-minute, technically focused talks are presented each month, each including a brief question-and-answer period. This series is open to everyone.

Symposium coordinates

Day and Time: Third Tuesdays @ 1 pm Eastern / 12 pm Central / 10 am Pacific

Webinar (PC, Mac, Linux, iOS, Android): Launch Zoom webinar

iPhone one-tap (US Toll): +14086380968,350667546#
(or): +16465588656,350667546#

Telephone (US Toll): Dial: +1 408 638 0968
(or) +1 646 558 8656
International numbers available: Zoom International Dial-in Numbers
Meeting ID: 350 667 546

See the Events page for details of upcoming presentations. Upcoming events are also posted to the Training category of XSEDE News.

Due to the large number of attendees, only the presenters and host broadcast audio. Attendees may submit chat questions to the presenters through a moderator.

Video library

Videos and slides from past presentations are available (see below). Presentations from prior years that are not listed below may be available in the archive.

Key Points
Monthly technical exchange
Presented by the ECSS community
Open to everyone
Tutorials and talks with Q & A

September 19, 2017

COSMIC2 - A Science Gateway for Cryo-Electron Microscopy with Globus for Terabyte-sized Dataset

Presenter(s): Mona Wong-Barnum (SDSC)
Principal Investigator(s): Andres Leschziner (UCSD), Michael Cianfrocco (University of Michigan)

Presentation Slides

Structural biology is in the midst of a revolution. Instrumentation and software improvements have allowed for the full realization of cryo-electron microscopy (cryo-EM) as a tool capable of determining atomic structures of protein and macromolecular samples. These advances open the door to solving new structures that were previously unattainable, which will soon make cryo-EM a ubiquitous tool for structural biology worldwide, serving both academic and commercial purposes. However, despite its power, new users to cryo-EM face significant obstacles. One major barrier is the handling of large datasets (10+ terabytes), where new cryo-EM users must learn how to interface with the Linux command line while also managing and submitting jobs to high performance computing resources. To address this barrier, we are developing the COSMIC2 Science Gateway as an easy-to-use, web-based science gateway to simplify cryo-EM data analysis using a standardized workflow. Specifically, we have adapted the successful and mature Cyberinfrastructure for Phylogenetic Research (CIPRES) Workbench [8] and integrated Globus Auth [6] and Globus Transfer [7] to enable federated user identity management and large dataset transfers to Extreme Science and Engineering Discovery Environment's (XSEDE) [1] high performance computing (HPC) systems. With the support of XSEDE's Extended Collaborative Support Services (ECSS) [16] and the Science Gateway Community Institute's (SGCI) Extended Developer Support (EDS), this gateway will lower the barrier to high performance computing tools and facilitate the growth of cryo-EM to become a routine tool for structural biology. This talk was previously given at PEARC'17.
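
As a rough illustration of the Globus integration described above, the sketch below uses the Globus Python SDK (globus_sdk) to authenticate a user with Globus Auth and submit an asynchronous Transfer task. The client ID, endpoint UUIDs, and paths are placeholders, and this is not the COSMIC2 gateway's actual code (the gateway builds on the CIPRES Workbench rather than a standalone script).

import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"          # placeholder
LAB_ENDPOINT = "UUID-OF-LAB-COLLECTION"          # placeholder
HPC_ENDPOINT = "UUID-OF-HPC-SCRATCH-COLLECTION"  # placeholder

# Globus Auth: native-app OAuth2 flow for federated login.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Please log in at:", auth_client.oauth2_get_authorize_url())
auth_code = input("Paste the authorization code here: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Globus Transfer: submit an asynchronous, checksummed transfer of a large dataset.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))
tdata = globus_sdk.TransferData(tc, LAB_ENDPOINT, HPC_ENDPOINT,
                                label="cryo-EM micrographs", sync_level="checksum")
tdata.add_item("/data/micrographs/", "/scratch/user/micrographs/", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted transfer, task id:", task["task_id"])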

First steps in optimising Cosmos++: A C++ MPI code for simulating black holes

Presenter(s): Damon McDougall (ICES)
Principal Investigator(s): Patrick C. Fragile (College of Charleston)

Presentation Slides

The goal of this ECSS project is to have Cosmos++ run effectively on Stampede2. Stampede2, at present, is made up entirely of Intel Xeon Phi nodes. These are low clock-frequency but high core-count nodes, and there are challenges associated with running efficiently on this hardware. Although the project's end goal is to hybridise a pure MPI code, this talk will focus on some of the initial steps we have taken to improve serial performance and how these steps relate to C++ software design. Prior knowledge of compiled languages and custom types would be beneficial but isn't required.


August 15, 2017

HTC with a Sprinkle of HPC: Finding Gravitational Waves with LIGO

Presenter(s): Lars Koesterke (TACC)
Principal Investigator(s): Duncan Brown (Syracuse University), Josh Willis (Abilene Christian University)

Presentation Slides

XSEDE is supporting the LIGO project to detect signatures of gravitational waves in a stream of data generated by (currently) two observatories in the U.S., located in Washington State and Louisiana. I will report on an ECSS project tasked with improving the performance of one of the largest (most resource-demanding) pipelines, called PyCBC (Python Compact Binary Coalescence). The software evolved from a slow and performance-unaware state to a high-performing pipeline capable of utilizing Xeon, Xeon Phi, and Nvidia GPU architectures alike. Achieving high performance required only a few sprinkles of HPC (High Performance Computing) on top of an HTC (High Throughput Computing) pipeline. While the HPC pieces relevant for this particular project are all well known to ECSS staff, it may be surprising what was missing in the considerations of the software developers. Hence this is more a story of how to educate users than a story of new and groundbreaking HPC concepts. Nevertheless, I am confident that my fellow ECSS staffers will find this project interesting and enlightening.

Enabling multi-event 3D simulations for earthquake hazard assessment

Presenter(s): Yifeng Cui (SDSC)
Principal Investigator(s): Morgan Moschetti (USGS)

Presentation Slides

Researchers from USGS use Stampede to perform a series of computationally intensive simulations for improved understanding of earthquake hazards. Hercules, a finite element solver developed at CMU, is used for the calculations; it combines meshing, partitioning, and solving functions in a single, self-contained code. Meshing employs a highly efficient octree-based algorithm that scales well. The simulation results are used to investigate the effects of complex geologic structure and topography on seismic wave propagation and ground-shaking hazards, and to evaluate model uncertainties in U.S. seismic hazard models. This talk will provide an overview of the current status of the seismic hazard analysis research, and introduce the code's performance and the optimizations involved in supporting multi-event simulations for this study through the ECSS project.


June 20, 2017

Visualization of simulated white dwarf collisions as a primary channel for type Ia supernovae

Presenter(s): David Bock (NCSA)
Principal Investigator(s): Doron Kushnir (Princeton)

Presentation Slides

Type Ia supernovae are an important and significant class of supernovae. While it is known that these events result from thermonuclear explosions of white dwarfs, there is currently no satisfactory scenario to achieve such explosions. Direct collisions of white dwarfs are simulated to study the possibility that the resulting explosions are the main source of type Ia supernovae. An adaptive mesh refinement grid captures the varying levels of detail, and a custom volume renderer is used to visualize density, temperature, and the resulting nickel production during the collision.

Humanities Computing With XSEDE: The Role of ECSS in Past, Present, and Future (upcoming) Projects

Presenter(s): Alan Craig (NCSA)

Presentation Slides

This symposium will address the role of ECSS in humanities-related projects carried out in XSEDE. Humanities-related disciplines are typically underrepresented in the XSEDE ecosystem. I will draw on my experiences and attempt to answer questions such as: "Where do these projects come from?", "What kinds of things are humanities scholars doing with XSEDE?", "What are some hurdles that need to be overcome for successful projects?", "How do the ECSS collaborations work?", and "How do we know if a project is successful?" in the context of several example projects.


May 16, 2017

Disclosure Scriptability

Presenter(s): Kwai Wong (UTK)
Principal Investigator(s): Matthew DeAngelis (Georgia State University)

Presentation Slides

Motivated by the increasing tendency for computing power to assist, or even replace, human effort in the acquisition and analysis of financial information and in the execution of trading strategies, this project examines the "scriptability" of firm disclosures, or the relative ease with which a computer program can transform the large amounts of unstructured data contained in various firm disclosures into usable information. The objective of this ECSS project is to provide support to manage and run a set of computer codes examining the scriptability of a large volume of documents. The performance and the workflow procedure of the computations will be presented.

Visual exploration and analysis of time series earthquake data

Presenter(s): Amit Chourasia (SDSC)
Principal Investigator(s): Keith Richards-Dinger (UC Riverside), James Dieterich (UC Riverside), Yifeng Cui (SDSC)

Presentation Slides

Earthquake hazard estimation requires systematic investigation of past records as well as the fundamental processes that cause earthquakes. Robust risk estimation requires detailed long-term records of earthquakes at all scales (magnitude, space, time), which are not available. Hence a synthetic method based on first principles could generate such records and bridge this critical gap of missing data. RSQSim is such a simulator: it generates seismic event catalogs spanning several thousand years at various scales. This synthetic catalog contains rich detail about the events and their corresponding properties.
Exploring this data is of vital importance to validate the simulator as well as to identify features of interest such as quake time histories, and to conduct analyses such as the mean recurrence interval of events on each fault section. This work describes and demonstrates a prototype web-based visual tool that enables scientists and students to explore this rich dataset. It also discusses refining and streamlining data management and analysis so that they are less error prone and more scalable.
This work was performed in collaboration with Keith Richards-Dinger, James Dieterich and Yifeng Cui and supported by ECSS.
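
To make one of the analyses above concrete, here is a minimal pandas sketch that computes the mean recurrence interval of events on each fault section. The file name and column names are hypothetical placeholders, not the actual RSQSim catalog format.

import pandas as pd

# Hypothetical catalog layout: one row per event with columns
# event_id, time_yr, section_id, magnitude.
catalog = pd.read_csv("synthetic_catalog.csv")

def mean_recurrence_by_section(df, min_magnitude=6.0):
    """Mean gap (years) between successive events of at least min_magnitude on each fault section."""
    events = df[df["magnitude"] >= min_magnitude].sort_values("time_yr").copy()
    # Time difference between consecutive events within each fault section.
    events["gap_yr"] = events.groupby("section_id")["time_yr"].diff()
    return events.groupby("section_id")["gap_yr"].mean().rename("mean_recurrence_yr")

print(mean_recurrence_by_section(catalog).head())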


April 18, 2017

Securing Access to Science Gateways with CILogon and Role-based Access Control

Presenter(s): Marcus Christie (IU)

Presentation Slides

CILogon is a service that allows users to securely access cyberinfrastructure resources by authenticating with their home institutions. Users benefit by not needing to learn a new username and password, and science gateway administrators benefit by not needing to securely manage user passwords.
Apache Airavata is a software framework for building science gateways. Apache Airavata provides abstractions for describing compute and storage resources and the applications that can run on them. Through a web interface, users can launch and monitor applications running on a local cluster, the commercial cloud, or national cyberinfrastructure.
The Apache Airavata project recently integrated support for CILogon into its web portal. This talk will discuss that integration along with the role-based access control authorization system developed for Airavata. Together, CILogon and role-based access control significantly ease the burden on users and administrators of securing access to science gateways.
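
For readers unfamiliar with the idea, the sketch below is a generic, minimal illustration of role-based access control for a gateway operation. It is not Apache Airavata's actual authorization API; the roles, operations, and user record are made up for illustration.

from dataclasses import dataclass, field

# Map roles to the gateway operations they may perform (illustrative roles only).
ROLE_PERMISSIONS = {
    "gateway-admin": {"launch_experiment", "cancel_experiment", "manage_users"},
    "researcher":    {"launch_experiment", "cancel_experiment"},
    "read-only":     set(),
}

@dataclass
class User:
    username: str                      # e.g. an identity asserted via CILogon login
    roles: set = field(default_factory=set)

def authorize(user: User, operation: str) -> bool:
    """Return True if any of the user's roles grants the requested operation."""
    return any(operation in ROLE_PERMISSIONS.get(role, set()) for role in user.roles)

alice = User("alice@university.edu", {"researcher"})
if authorize(alice, "launch_experiment"):
    print("launching experiment for", alice.username)
else:
    raise PermissionError("user lacks the required role")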

Statistical Analysis for Partially-Observed Markov Processes with Marked Point Process Observation

Presenter(s): Mitchel Horton (NICS), Junqi Yin (NICS)
Principal Investigator(s): Professor Yong Zeng (University of Missouri at Kansas City)

Presentation Slides

Volatility is influential in investment, monetary policy making, risk management and security valuation, and is regarded as one of the most important financial market indicators. Recently, a general partially-observed framework of Markov processes with Marked Point Process (MPP) observations has been proposed for streaming financial ultra-high frequency (UHF) data.
For this project, particle Markov Chain Monte Carlo (PMCMC) is applied to parameter estimation for two models: Geometric Brownian Motion (GBM) and Heston Stochastic Volatility (HSV). Both models operate under 1/8 and 1/100 tick mark rules.
This method combines particle filtering with Markov Chain Monte Carlo (MCMC) to achieve sequential parameter learning in a Bayesian way. MCMC is used to propose new values for model parameters; particle filtering is used to detect values of marginal likelihood in the state-space model based on the proposed parameters.
The CUDA codes to compute the Bayes factors for model comparison and selection between GBM and HSV are done and in the simulation testing stage. With the time remaining for this project, new features will be added to HSV: another, even more highly parallelizable particle filtering (namely, sequential Monte Carlo) method will be used to solve the same filtering equations, which are stochastic PDEs.
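
To make the particle-filtering ingredient of PMCMC concrete, here is a minimal bootstrap particle filter that estimates the log marginal likelihood of a GBM log-price observed with Gaussian noise. It is only a sketch: it is not the project's CUDA implementation, and it ignores the marked point process / tick-rule observation model described above.

import numpy as np

def gbm_log_likelihood(log_prices, mu, sigma, obs_noise, dt, n_particles=1000, seed=0):
    """Particle-filter estimate of the log marginal likelihood of the observations."""
    rng = np.random.default_rng(seed)
    x = np.full(n_particles, log_prices[0])   # latent log-price particles
    log_lik = 0.0
    for y in log_prices[1:]:
        # Propagate with the exact GBM transition for the log-price.
        x = x + (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
        # Gaussian observation density (log-weights), stabilized with log-sum-exp.
        log_w = -0.5 * ((y - x) / obs_noise) ** 2 - np.log(obs_noise * np.sqrt(2.0 * np.pi))
        m = log_w.max()
        log_lik += m + np.log(np.mean(np.exp(log_w - m)))
        # Multinomial resampling to avoid weight degeneracy.
        w = np.exp(log_w - m)
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
    return log_lik

# Inside PMCMC, a Metropolis-Hastings step proposes new (mu, sigma) values and
# accepts or rejects them using this likelihood estimate; Bayes factors built
# from such estimates are what allow comparing GBM against HSV.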


March 21, 2017

The Paleoscape Project for Studies of Modern Human Origins

Presenter(s): David O'Neal (PSC)
Principal Investigator(s): Curtis Marean (Arizona State)

Presentation Slides

There is widespread consensus in human origins research (paleoanthropology) that the modern human lineage evolved in Africa and all modern humans are descended from that population. The archaeological record for the behavior of this crucial phase is richest in the southern African sub-region and particularly rich in the Cape. It has been hypothesized that the Cape, due to its uniquely rich coastal and terrestrial food resources, may have been the refuge region for the progenitor lineage of all modern humans during harsh global glacial phases.
During this phase of human origins, the economy was based entirely on hunting and gathering, and hunter-gatherer adaptations are tied to the way that climate and environment shape the food and technological resource base. For this reason human origins research recognizes the evolutionary significance of paleoclimate and paleoenvironment, and has a long tradition of engaging with climate and environmental scientists in an effort to understand if and how bio-behavioral evolution in the hominin line responded to climate change.
This XSEDE ECSS project implements the following workflow:
1) run a South African regional climate model to hindcast the climate parameters needed to project vegetation and other resources into the past, 2) run vegetation projections from these climate projections, and 3) run multiple agent-based simulations of the foragers on these ancient paleoscapes. This unique endeavor is made possible by an unprecedented collaboration of scientists from several countries and many disciplines.


February 21, 2017

Julia on HPC Platform

Presenter(s): Dong Ju Choi (UCSD)
Principal Investigator(s): Christopher Rackauckas (University of California, Irvine)

Julia is a high-level programming language that offers both MATLAB-like ease of use and parallel computing performance. We began to learn the language to assist Chris Rackauckas of the University of California, Irvine with the development of a Julia differential equation package on the SDSC Comet system.
This presentation will introduce basic Julia usage on HPC platforms, focusing on Comet and the work done during this support effort.

Low Reynolds Number Hydrodynamics for Micro-robotic Applications

Presenter(s): Anirban Jana (PSC)
Principal Investigator(s): Metin Sitti (CMU)

Presentation Slides

In this talk, I will present my ECSS project with Prof. Metin Sitti from CMU, in which I helped develop an efficient simulation capability for microrobots in liquid environments at low Reynolds numbers (in the Stokes flow regime). The simulations are based on the boundary element method for Stokes flows. The ECSS project activities included selection of a suitable software stack, workflow development from preprocessing to simulation to analysis, and parallelization of an existing open-source Stokes flow BEM code. This reduced simulation time from days to minutes, opening up the possibility of comprehensive design space explorations, more highly resolved simulations, or simulations of much more complex systems, such as swarms of microrobots, in the future.


January 17, 2017

Toward Flood Inundation Mapping at the Continental Scale: the Cyberinfrastructure Approach

Presenter(s): Yan Liu (CyberGIS Center and NCSA, UIUC)
Principal Investigator(s): David Maidment (UT Austin), David Tarboton (Utah State University)

Presentation Slides

Project 1: National Flood Interoperability Experiment (NFIE). As high-resolution national terrain and water data become increasingly available, hydrology researchers have the opportunity to conduct hydrological studies directly at continental scale for the conterminous U.S. (CONUS). To enable continental hydrology research, integrated cyberinfrastructure plays a critical role in coupling interdisciplinary data, software, and multiple types of computational platforms and making them available for the research community to perform methodological experiments on big data and complex computations that they could not handle in a lab environment. One leading effort in this direction is the NSF National Flood Interoperability Experiment (NFIE). NFIE aims to combine high-resolution terrain (10m and finer) and water (National Hydrography Dataset) data with NOAA National Water Model (NWM) forecasts for real-time national inundation mapping and forecasting.

Project 2: Terrain analysis using digital elevation models (TauDEM) Two ECSS projects help achieve NFIE goals on XSEDE resources at NCSA (on ROGER supercomputer, one of the 3rd-tier resources) and TACC (on Stampede). The NFIE ECSS project develops computational solutions to data integration and processing, methodology development, workflow construction, and experiment computation. The TauDEM ECSS project accelerates a core software piece in NFIE workflow, i.e., TauDEM, to conduct high-performance and scalable hydrological information analysis on large geospatial raster data. In this talk, I will introduce the data and computational challenges in these two projects and review the solutions developed to achieve major milestones. Results and their impact in the research community will be presented. I will share our computational experience on the highly-coupled HPC and cloud computing platform on ROGER and discuss the advantages of hybrid supercomputing architecture in building end-to-end solutions for research groups and science gateways.


December 20, 2016

Comet Virtual Cluster 'User' Experience

Presenter(s): Trevor Cooper (SDSC), Fugang Wang (IU)

Presentation Slides

Comet is an XSEDE HPC resource hosted and operated at SDSC and supported by systems staff at SDSC and user support staff at IU.

This demonstration explores the Virtual Cluster (VC) capability of Comet, a unique feature that provides projects with the ability to fully define their own software environment with a set of dynamically allocated virtual machines.

We will begin the demonstration with an overview of the design and architecture of the virtual cluster capability and how it compares to other virtualized and cloud services. We will also discuss the high performance of the virtualized clusters, which combine the full AVX2 feature set of the Haswell processors with InfiniBand HCAs using SR-IOV for MPI.

We will then follow with a demonstration of how to build, configure, and manage virtual clusters using the Cloudmesh client, a tool for easily interfacing with multiple clouds from the command line and a command shell, following a guide originally created for the XSEDE[16] hands-on tutorial. Time permitting, we will demonstrate multiple pre-configured virtual clusters and provide examples of possible administrative workflows available in this unique environment.

Finally we will provide references for further information on obtaining an allocation for Comet virtual clusters.

The Science Gateways Community Institute

Presenter(s): Nancy Wilkins-Diehr (SDSC)

Presentation Slides

Science gateways are a fundamental part of today's research landscape. Beginning in 2013, more users accessed XSEDE resources via gateways than they did from the command line. However, despite the presence of gateways for many years, development of these environments is often done with an ad hoc process, limiting success, resource efficiency, and long-term impact. Developers are often unaware that others have solved similar challenges before, and they do not know where to turn for advice or expertise. Without knowledge of what's possible, projects waste money and time implementing the most basic functions rather than the value-added features for their unique audience. Critically, many gateway efforts also fail. Some fail early by not understanding how to build communities of users; others fail later by not developing plans for sustainability.

The Science Gateways Community Institute (SGCI, http://www.sciencegateways.org) is one of the first implementation-phase software institutes to be awarded through NSF's Software Infrastructure for Sustained Innovation (SI2) program. SGCI has been designed as a service organization to address challenges by offering services to and building community among the research communities developing gateways. An application to be an XSEDE level two service provider is planned. The Institute's five-component design is the result of several years of studies, including many focus groups and a 5,000-person survey of the research community. This talk will describe SGCI's offerings and how they might benefit your work.


October 18, 2016

Towards Large-scale Genomics, Transcriptomics, and Metagenomics for All

Presenter(s): Philip Blood (PSC)
Principal Investigator(s): Noushin Ghaffari (Texas A&M), Ping Ma (U. Georgia), James Taylor (Johns Hopkins)

Presentation Slides

Although increasing numbers of researchers in genomics and related disciplines are utilizing advanced cyberinfrastructure for their work, these still represent a relatively small fraction of the biologists who could benefit from access to the latest genomics tools backed by large-scale computing resources. Rapid advances in these fields have caused an explosion of tools and algorithms that present a dizzying array of constantly changing options. Hence, even for scientists who are adept at using advanced computing infrastructure, it is challenging to determine the optimal mix of tools and employ these effectively to analyze large genomic data sets. In this talk I will highlight several XSEDE ECSS projects aimed at tackling aspects of these problems, both through formal ECSS collaborations and the "Novel and Innovative Projects" (NIP) arm of ECSS. These projects include the development of a pipeline for high-quality transcriptome analysis based on well-characterized RNA Sequencing Quality Control (SEQC) datasets, making memory-hungry sequence assembly tools available through the Galaxy XSEDE Gateway (usegalaxy.org), enabling large-scale analysis of human microbiome data, and facilitating the Critical Assessment of Metagenome Interpretation (CAMI: http://www.cami-challenge.org/).

Petascale DNS Using the Fast Poisson Solver PSH3D

Presenter(s): Darren Adams (NCSA)
Principal Investigator(s): Antonio Ferrante (U Wash)

Presentation Slides

Direct numerical simulation (DNS) of high Reynolds number (Re = O(10^5)) turbulent flows requires computational meshes of O(10^12) grid points. Thus, DNS requires the use of petascale supercomputers. DNS often requires the solution of a Helmholtz (or Poisson) equation for pressure, which constitutes the bottleneck of the solver. We have developed and implemented a parallel solver of the Helmholtz equation in 3D called petascale Helmholtz 3D (PSH3D). The numerical method underlying PSH3D combines a parallel 2D Fast Fourier transform (P2DFFT) and a parallel linear solver (PLS). Our numerical results show that PSH3D scales up to at least 262,144 cores. PSH3D has a peak performance 6× faster than 3D FFT-based methods (e.g., P3DFFT) when used with the partial-global optimization. We have verified that the use of PSH3D with the partial-global optimization in our DNS solver does not reduce the accuracy of the numerical solution when tested for the Taylor-Green vortex flow.
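
For context, the standard idea behind FFT-accelerated Helmholtz/Poisson solvers of this kind, stated generically here (PSH3D's exact discretization and data decomposition may differ), is that a 2D FFT in the two periodic directions decouples the 3D problem into independent 1D problems in the remaining direction:

\[
  \nabla^2 p \;-\; \lambda\, p \;=\; f
  \quad\xrightarrow{\ \text{2D FFT in } x,\,y\ }\quad
  \frac{d^2 \hat{p}_{k_x,k_y}(z)}{dz^2}
  \;-\;\bigl(k_x^2 + k_y^2 + \lambda\bigr)\,\hat{p}_{k_x,k_y}(z)
  \;=\; \hat{f}_{k_x,k_y}(z).
\]

After discretization in z, each (k_x, k_y) mode yields a small (typically tridiagonal) linear system handled by the parallel linear solver, and an inverse 2D FFT recovers p; the Poisson case is \lambda = 0.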


September 20, 2016

Integrating Scientific Tools and Web Portals

Presenter(s): Kevin (Feng) Chen (TACC)
Principal Investigator(s): Carol X. Song (Purdue), Ritu Arora (TACC)

Presentation Slides

Abstract: Diagrid is powered by the HUBzero® software developed at Purdue University. It is specifically designed to help a scientific community share resources and work with one another. The Diagrid Science as a Service platform allows for easy web-based access to software applications used by thousands of researchers around the world. In today's ECSS Symposium, Dr. Kevin Chen will discuss the development of scientific tools leveraging the Diagrid web portal and XSEDE HPC resources.

System-level Checkpoint-Restart with DMTCP

Presenter(s): Jerome Vienne (TACC)
Principal Investigator(s): Gene Cooperman (Northeastern University)

Presentation Slides

DMTCP (Distributed MultiThreaded CheckPointing) is a software package used to checkpoint and restart applications. The primary purpose of checkpointing in HPC is achieving fault tolerance: if a computation fails, whether because of a hardware failure or a temporary software failure, the user restarts the computation from a previous checkpoint. This presentation highlights work on an ECSS project with the team that develops DMTCP. The initial purpose of the ECSS project was to provide support to extend the scalability of DMTCP, but it ended up being more than that. During the presentation, I will introduce DMTCP and explain how it can be used to checkpoint-restart and debug a batch session, checkpoint OpenSHMEM implementations, and handle large-scale experiments running on InfiniBand clusters. All of these brought different challenges that were solved during this ECSS project. This collaboration led to papers presented at XSEDE'16, OpenSHMEM 2016, and IEEE ICPADS 2016.


August 16, 2016

Re-presenting Large Image Collections for Data Mining and Analysis

Presenter(s): Paul Rodriguez (SDSC)
Principal Investigator(s): Elizabeth Wuerffel (Valparaiso University), Alison Langmead (University of Pittsburgh)

Presentation Slides

I will discuss two NIP/ECSS projects that both primarily involve image analysis in the context of digital humanities (Image Analysis of Rural Photography, PI Wuerffel; Decomposing Bodies [aka Image Analysis of Bertillon Prison Cards], PI Langmead). Both of them are superficially about taking old B&W photograph collections and 'digitizing' them. In a general sense, the goal is to re-represent the data so that the digital humanist can perform particular socio/historical/artistic/cultural analyses. In a more practical sense, the goal is to extract features from the images and metadata and provide infrastructure support for analysis. Part of our challenge is to line up these two goals.

I will also discuss the technical and programmatic aspects, mostly for my own pieces of the projects. Although the processes, project logistics, and infrastructure are very similar between projects, the actual feature extraction code and data products have little overlap, which is due to the nature of the image data themselves. Feature extraction for both projects primarily involves assembling techniques and tools from open-source packages, where the trickier aspects require coming up with good strategies for applying those techniques, evaluating how well they work on this data, and exploring possible methods that might be useful to the user.
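
As a rough illustration of the kind of open-source feature extraction described above (the chosen features and file layout are hypothetical, not the code used for either project), a short scikit-image sketch might look like this:

import glob
from skimage import io, color, feature, exposure, img_as_float

def extract_features(path):
    img = io.imread(path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img_as_float(img)
    hist, _ = exposure.histogram(gray, nbins=16)     # coarse tonal distribution
    edges = feature.canny(gray, sigma=2.0)           # crude measure of structure/detail
    return {
        "path": path,
        "mean_intensity": float(gray.mean()),
        "contrast": float(gray.std()),
        "edge_density": float(edges.mean()),
        "tonal_histogram": (hist / hist.sum()).tolist(),
    }

# Collect features for every scanned photograph in a (hypothetical) directory.
rows = [extract_features(p) for p in sorted(glob.glob("scans/*.tif"))]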

Experiences running Dynamic Traffic Assignment Simulations at scale using HPC Infrastructure

Presenter(s): Amit Gupta (TACC)
Principal Investigator(s): Natalia Ruiz Juri (UT)

Presentation Slides

Dynamic Traffic Assignment (DTA) simulations are an important analysis tool for transportation researchers attempting to model complex interactions between travelers and transportation infrastructure. These simulation frameworks are complex to develop, maintain, and extend. VISTA is a widely used transportation simulation framework providing Dynamic Traffic Assignment. I discuss our experiences in scaling VISTA on the Stampede system as an exemplar of how HPC infrastructure and tools can augment analysis workflows in transportation research by significantly speeding up simulation experiments. I also discuss some challenges and tradeoffs in enabling DTA frameworks for use in HPC environments, as well as directions for continuing/future work under ECSS support.


June 21, 2016

SeedMe platform: Enabling scriptable data sharing

Presenter(s): Amit Chourasia (SDSC)

Abstract: Most scientific computation and analyses create important transient data and preliminary results. Quick and effective access to and assessment of this data is necessary for efficient use of researchers' time and computation resources, but this process is complicated when a large collaborating team is geographically dispersed and/or some team members do not have direct access to the computation resource and output data. Current methods for sharing and assessing transient data and preliminary results are cumbersome, labor intensive, and largely unsupported by useful tools and procedures. Each research team is forced to create its own ad hoc procedures to push results from system to system, and user to user, to guide the next step in their research.

In this talk we introduce the SeedMe platform, which provides a web-based cyberinfrastructure to enable easy sharing and streaming of transient data and preliminary results directly from computing resources to a variety of platforms, from mobile devices to workstations. The SeedMe platform is open to all researchers and provides web-browser-based as well as scriptable tools for easy integration with ad hoc computation workflows. The talk will also briefly discuss applications and use cases that may be relevant for ECSS and Science Gateway projects.

Biography: Amit Chourasia is a Sr. Visualization Scientist at the San Diego Supercomputer Center (SDSC), UC San Diego. He leads the Visualization group, where his work is focused on leading the research, development, and application of software tools and techniques for visualization. A key area of his work is developing methods to represent data in a visual form that is clear, succinct, and accurate (a challenging yet very exciting endeavor). Data sharing is also at the forefront of his interests; to this end he has developed a web-based infrastructure, the SeedMe project, to address this important and at times critical gap in the scientific process.


May 17, 2016

Turning on Performance in LAMMPS Molecular Dynamics

Presenter(s): Kent Milfeld (TACC)
Principal Investigator(s): Peter Koenig (Procter & Gamble)

Presentation Slides

LAMMPS is a large-package atomistic and molecular dynamics simulator. Through the Industrial Challenge Program, TACC supported Peter Koenig (PI) in using LAMMPS on the Stampede system. The objective of the program was to create atomistic and particle simulations that could be used to determine micellar properties to confirm and replace experiments that develop rheology models for mixing, filling, and product performance predictions. The presentation will focus on the support work: LAMMPS optimizations, which included a few code changes, a description of adding new classes (styles) for modifying interactions, and other efforts supporting efficient use of the Stampede system.
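
For readers unfamiliar with how LAMMPS runs are specified, the sketch below drives the standard Lennard-Jones "melt" example through the LAMMPS Python interface and shows where a pair style (the kind of interaction "style" mentioned above) is selected. It assumes LAMMPS was built with its Python module and shared library, and it is unrelated to the micelle systems and the new optimized styles discussed in the talk.

from lammps import lammps

lmp = lammps(cmdargs=["-log", "none"])
for cmd in [
    "units lj",
    "atom_style atomic",
    "lattice fcc 0.8442",
    "region box block 0 10 0 10 0 10",
    "create_box 1 box",
    "create_atoms 1 box",
    "mass 1 1.0",
    "velocity all create 1.44 87287",
    "pair_style lj/cut 2.5",        # the interaction 'style'; new styles are added as C++ classes
    "pair_coeff 1 1 1.0 1.0 2.5",
    "neighbor 0.3 bin",
    "fix 1 all nve",
]:
    lmp.command(cmd)
lmp.command("run 100")
print("atoms simulated:", lmp.get_natoms())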

Curation en masse: Exploration of the Quality of Video Collections

Presenter(s): Anne Bowen (TACC)
Principal Investigator(s): Alan Bovik (UT)

Presentation Slides

TACC provided support for Alan Bovik (Laboratory for Image and Video Engineering at UT Austin) to assess the use of automatic quality assessment algorithms at large scale for museum digital video collections. This project involved developing a visual analysis tool and workflow for massive video quality assessment on TACC systems using the BRISQUE algorithm. The presentation will give an overview of the quality assessment workflow and specifically focus on the challenges we encountered with using BRISQUE (and non-referential quality assessment algorithms in general) on museum collections. These challenges prompted the development of the visual analysis tool to assist with interpretation of the results.


April 19, 2016

How to Tune and Extract Higher Performance with MVAPICH2 Libraries

Presenter(s): Dhabaleswar K.(DK) Panda (Ohio State)

Presentation Slides

The Ohio State University MVAPICH2 libraries support the latest MPI 3.1 standard and deliver high performance, scalability, and fault tolerance for high-end computing systems using InfiniBand, Omni-Path, 10-40 GigE/iWARP, and RoCE (v1 and v2) networking technologies. The MVAPICH2-GDR library uses novel designs to exploit the cutting-edge GPUDirect technology to provide high performance for MPI applications on systems with NVIDIA GPUs. These libraries have multiple features, parameters, and knobs to optimize performance on modern systems. However, many users are not fully aware of all these features and optimization and tuning techniques. This talk aims to address these concerns and provide a set of concrete guidelines to XSEDE users to boost the performance of their applications. We will start with an overview of the MVAPICH2 libraries and their features and optimized designs. Next, we will provide an in-depth overview of the runtime optimizations and tuning flexibility. We will demonstrate how you can tune and optimize these libraries to fit the needs of your application on a given system. Using a set of 'Best Practice' examples, we will highlight the impact of tuning and optimizations on a set of common XSEDE applications including Amber, LULESH, HOOMD-blue, and MILC.

Bio: DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at the Ohio State University. He has published over 350 papers in the area of high-end computing and networking. The MVAPICH2 (High Performance MPI and PGAS over InfiniBand, iWARP and RoCE) libraries, designed and developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,550 organizations worldwide (in 79 countries). More than 360,000 downloads of this software have taken place from the project's site. This software is empowering several InfiniBand clusters (including the 10th, 13th and 25th ranked ones) in the TOP500 list. The RDMA packages for Apache Spark, Apache Hadoop and Memcached together with OSU HiBD benchmarks from his group (http://hibd.cse.ohio-state.edu) are also publicly available. These libraries are currently being used by more than 160 organizations in 22 countries. More than 15,900 downloads of these libraries have taken place. He is an IEEE Fellow. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda