ECSS Symposium

ECSS staff share technical solutions to scientific computing challenges monthly in this open forum.

The ECSS Symposium allows the more than 70 ECSS staff members to exchange information each month about successful techniques used to address challenging science problems. Tutorials on new technologies may be featured. Two 30-minute, technically focused talks are presented each month, each followed by a brief question-and-answer period. This series is open to everyone.

Symposium coordinates

Day and Time: Third Tuesdays @ 1 pm Eastern / 12 pm Central / 10 am Pacific

Webinar (PC, Mac, Linux, iOS, Android): Launch Zoom webinar

iPhone one-tap (US Toll): +16468769923,,114343187# (or) +16699006833,,114343187#

Telephone (US Toll): For higher quality, dial a number based on your current location:

US: +1 646 876 9923 (or) +1 669 900 6833 (or) +1 408 638 0968

Meeting ID: 114 343 187

Upcoming events are also posted to the Training category of XSEDE News.

Due to the large number of attendees, only the presenters and host broadcast audio. Attendees may submit chat questions to the presenters through a moderator.

Key Points
Monthly technical exchange
Presented by the ECSS community
Open to everyone
Tutorials and talks with Q & A

Previous years' ECSS seminars may be accessed through these links:

2017

2016

2015

2014

October 16, 2018

PolyRun - Polymer Microstructure Exploration HPC Gateway

Presenter(s): Amit Chourasia (SDSC) Christopher Thompson (Purdue)

Polymers are long-chain macromolecules with physical properties that make them appealing for a wide range of uses in structural support, organic electronics, and biomedical applications. The microscopic structure adopted by polymers plays a key role in determining their suitability for advanced applications. Computational simulation tools provide a convenient and powerful way to guide experiments toward creating desirable structures. In this talk we will discuss the ECSS activity supporting development of the PolyRun Gateway, which allows seasoned and non-HPC users alike to easily perform complex computations and use simulations as an aid in designing experiments toward desired materials.

Efficient construction of limit order books for financial markets

Presenter(s): Robert Sinkovits (SDSC)

A limit order book (LOB) is a record of unexecuted orders to buy or sell a stock at a specified price. The LOB can be used as a starting point for deeper analysis of markets, leading to a better understanding of the impact of trading behaviors, suggestions for regulations to make markets more effective, or identification of manipulative practices such as quote stuffing. Construction of full-resolution LOBs is computationally demanding and, as a consequence, approximations are often employed. Unfortunately, this limits the utility of the LOBs in the era of high-frequency trading. In this collaboration with Mao Ye (U. Illinois), we describe how we first optimized the performance of existing full-resolution LOB construction software to achieve a 100x reduction in run time, and then refactored the software to ultimately improve time to solution by 1000-3000x.
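As a rough illustration of the underlying data structure (a minimal sketch with a hypothetical message format, not the actual software described in the talk), a book can be maintained as per-order records plus aggregate resting size at each price level:

```python
# Minimal limit-order-book sketch (hypothetical message format).
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "B" (buy) or "S" (sell)
    price: int     # price in integer ticks to avoid floating-point issues
    size: int

class LimitOrderBook:
    def __init__(self):
        self.orders = {}                      # order_id -> Order
        self.depth = {"B": defaultdict(int),  # price -> total resting size
                      "S": defaultdict(int)}

    def add(self, order_id, side, price, size):
        self.orders[order_id] = Order(side, price, size)
        self.depth[side][price] += size

    def reduce(self, order_id, size):
        # Covers both cancels and executions: remove shares from a resting order.
        o = self.orders[order_id]
        self.depth[o.side][o.price] -= size
        o.size -= size
        if o.size <= 0:
            del self.orders[order_id]

    def best_bid(self):
        # Linear scan for clarity; production code keeps sorted price levels.
        bids = [p for p, s in self.depth["B"].items() if s > 0]
        return max(bids) if bids else None

    def best_ask(self):
        asks = [p for p, s in self.depth["S"].items() if s > 0]
        return min(asks) if asks else None
```

At full resolution such a book is replayed over enormous message streams, which is why per-message update cost dominates time to solution.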


September 18, 2018

The XSEDE Monthly HPC Workshops

Presenter(s): John Urbanic (PSC)

Presentation Slides

I will talk about the XSEDE Monthly Workshop Series, which uses the Wide Area Classroom (WAC). It has reached more than 10,500 actual sitting-in-the-classroom students over the past five years, with growth continuing. I will discuss the HPC topics at the core of the series and the benefits of the WAC approach, as well as audience satisfaction and demographics and the latest improvements and developments, with the intention that many of these techniques will be of use to other XSEDE outreach, training, and education efforts.

GISandbox: A Science Gateway for Geospatial Computing

Presenter(s): Davide Del Vento (NCAR)

Presentation Slides

Science gateways provide easy access to domain-specific tools and data. The field of Geographic Information Science and Systems (GIS) uses myriad tools and datasets, which raises challenges in designing a science gateway to meet users' diverse research and teaching needs. GISandbox is a new science gateway designed to meet the needs of researchers and educators leveraging geospatial computing. The GISandbox is built on Jupyter Notebooks to create an easy, open, and flexible platform for geospatial computing. Jupyter Notebooks is a widely used interactive computing environment running in the browser that integrates live code, narrative, equations and images. We extend the Jupyter Notebook platform to enable users to run interactive notebooks on the cloud resource Jetstream or computationally-intensive notebooks on the Bridges supercomputer located at the Pittsburgh Supercomputing Center. A novel Job Management platform allows the user to easily submit a Jupyter Notebook for batch execution on Bridges (and eventually Comet), monitor the SLURM job, and retrieve output files. GISandbox Virtual Machines are created in Jetstream's Atmosphere interface and then deployed and configured using a series of Ansible scripts. When properly used, Ansible scripts allow to create an easily reproducible and scalable system. In this talk we will highlight use cases of GISandbox, give a bird's view on how we have met their requirements in our implementation and discuss future plans including how it could be applied in other domains.
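The general pattern behind such batch notebook execution can be sketched as follows (a minimal, hypothetical example using standard Slurm and nbconvert commands; the gateway's actual Job Management platform is more elaborate, and the partition and file names here are made up):

```python
# Sketch: wrap headless notebook execution in a Slurm batch script.
import subprocess, textwrap

def submit_notebook(notebook, partition="RM", hours=2):
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #SBATCH --partition={partition}
        #SBATCH --time={hours}:00:00
        #SBATCH --nodes=1
        # Execute the notebook headlessly, writing results back in place.
        jupyter nbconvert --to notebook --execute --inplace {notebook}
        """)
    with open("run_notebook.sbatch", "w") as f:
        f.write(script)
    # sbatch prints "Submitted batch job <id>"; capture the id so the
    # job can later be monitored with squeue/sacct and outputs retrieved.
    out = subprocess.run(["sbatch", "run_notebook.sbatch"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip().split()[-1]
```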


August 21, 2018

OpenTopography: A gateway to high resolution topography data and services

Presenter(s): Choonhan Youn (SDSC)

Presentation Slides

Over the past decade, there has been dramatic growth in the acquisition of publicly funded high-resolution topographic and bathymetric data for scientific, environmental, engineering and planning purposes. Because of the richness of these data sets, they are often extremely valuable beyond the application that drove their acquisition and thus are of interest to a large and varied user community. However, because of the large volumes of data produced by high-resolution mapping technologies such as lidar, it is often difficult to distribute these datasets. Furthermore, the data can be technically challenging to work with, requiring software and computing resources not readily available to many users. Some of these complex algorithms require high performance computing resources to run efficiently, especially in an on-demand processing and analysis environment. With steady growth in the number of users and in the complexity and resource intensity of the algorithms that generate derived products from these invaluable datasets, HPC resources are increasingly necessary to meet demand. By utilizing the Comet XSEDE resource, OpenTopography aims to democratize access to and processing of these high-resolution topographic data.
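As a toy illustration of the kind of on-demand derived product involved (not OpenTopography's actual pipeline, which uses dedicated algorithms and point-cloud formats), gridding lidar ground returns into a simple digital elevation model can be done by binned averaging:

```python
# Toy DEM generation: average point elevations into square grid cells.
import numpy as np

def grid_dem(x, y, z, cell=1.0):
    """Average elevations z of points at (x, y) into cells of size `cell`."""
    col = ((x - x.min()) / cell).astype(int)
    row = ((y - y.min()) / cell).astype(int)
    shape = (row.max() + 1, col.max() + 1)
    total = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(total, (row, col), z)   # sum elevations per cell
    np.add.at(count, (row, col), 1)   # count points per cell
    with np.errstate(invalid="ignore"):
        # Cells with no returns become NaN (holes to be filled later).
        return np.where(count > 0, total / count, np.nan)
```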

Development of multiple scattering theory method: the recent progress and applications

Presenter(s): Yang Wang (PSC)

Presentation Slides

Multiple scattering theory is an ab initio electronic structure calculation method in the framework of density functional theory. It differs from other ab initio methods in that it is an all-electron method and is not based on a variational approach. Its easy access to the Green function makes it a unique tool for the study of random alloys and electronic transport. In this presentation, I will give a brief overview of multiple scattering theory and discuss the recent ECSS projects relevant to the development and applications of the multiple scattering theory method.
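To make the "easy access to the Green function" concrete: in Green-function methods, ground-state quantities follow directly from the Green function rather than from explicit eigenstates. Schematically (omitting spin and broadening factors), the charge density is obtained as

```latex
% Charge density from the Green function (schematic):
n(\mathbf{r}) = -\frac{1}{\pi}\,\mathrm{Im}\int^{E_F} G(\mathbf{r},\mathbf{r};E)\,dE
```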


June 19, 2018

An Innovative Tool for IO Workload Management on Supercomputers

Presenter(s): Si Liu (TACC)

Presentation Slides

Modern supercomputer applications have been driving high demand for capable storage resources in addition to fast computing resources. However, these storage systems, especially parallel shared filesystems, have become the Achilles' heel of powerful supercomputers. A single user's improper IO workload can easily result in global filesystem performance degradation and even unresponsiveness. In this project, we developed an innovative IO workload management system that optimally controls the IO workload from the users' side. This system automatically detects and restricts improper IO workloads from supercomputer users to protect parallel shared filesystems.
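As a purely illustrative sketch of the detection side (not the tool described in the talk), Linux exposes per-process I/O counters that a user-side monitor could poll to flag runaway writers; the threshold below is hypothetical:

```python
# Poll /proc/<pid>/io and flag processes whose write rate exceeds a limit.
import time

THRESHOLD_BYTES_PER_SEC = 500 * 1024 * 1024  # hypothetical limit

def write_bytes(pid):
    try:
        with open(f"/proc/{pid}/io") as f:
            for line in f:
                if line.startswith("write_bytes:"):
                    return int(line.split()[1])
    except (FileNotFoundError, PermissionError):
        return None

def monitor(pids, interval=5.0):
    last = {pid: write_bytes(pid) for pid in pids}
    while True:
        time.sleep(interval)
        for pid in pids:
            now = write_bytes(pid)
            if now is None or last[pid] is None:
                continue
            rate = (now - last[pid]) / interval
            if rate > THRESHOLD_BYTES_PER_SEC:
                # A real tool would throttle or warn here.
                print(f"PID {pid}: {rate/1e6:.0f} MB/s exceeds limit")
            last[pid] = now
```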

The Brain Image Library

Presenter(s): Derek Simmel (PSC)

Presentation Slides

The Brain Image Library (BIL) is a national public resource enabling researchers to deposit, analyze, mine, share and interact with large brain image datasets. As part of a comprehensive U.S. NIH BRAIN cyberinfrastructure initiative, BIL encompasses the deposition of datasets, the integration of datasets into a searchable web-accessible system, the redistribution of datasets, and a High Performance Computing enclave to allow researchers to process datasets in-place and share restricted and pre-release datasets. BIL serves a geographically distributed user base including large confocal imaging centers that are generating petabytes of confocal imaging datasets per year. For these users, the library serves as an archive facility for whole brain volumetric datasets from mammals, and a facility to provide researchers with a practical way to analyze, mine, share or interact with large image datasets. The Brain Image Library is operated as a partnership between the Biomedical Applications Group at the Pittsburgh Supercomputing Center, the Center for Biological Imaging at the University of Pittsburgh and the Molecular Biosensor and Imaging Center at Carnegie Mellon University.
In this talk, I will briefly review the characteristics of the data that the Brain Image Library will store, and the infrastructure we are building at PSC to ingest and manage the data for access.


May 15, 2018

Computational fluid-structure interaction of biological systems

Presenter(s): Hang Liu (TACC)
Principal Investigator(s): Haoxiang Luo (Vanderbilt University)

Presentation Slides

I will briefly discuss what we have done to optimize the VICAR3D codes developed by the PI's group through this ECSS project. This includes the standard procedures we usually follow in this kind of effort: profiling code performance characteristics, sorting out performance glitches, reorganizing the data domain decomposition, making the code more efficient in parallel, and examining performance portability when moving the code across architectures, from Sandy Bridge and Knights Corner on Stampede1 to Knights Landing on Stampede2. I would also like to share some interesting collisions and pleasant collaborations with the PI during the project, and the lessons we learned.
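As a toy illustration of the data domain decomposition work mentioned above (VICAR3D itself is a far more involved production code; this only shows the generic halo-exchange pattern, with a made-up grid size):

```python
# 1-D domain decomposition with ghost-cell exchange using mpi4py.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1000                      # global grid points (hypothetical)
local_n = N // size           # assume N divisible by size for brevity
u = np.zeros(local_n + 2)     # one ghost cell on each side
u[1:-1] = rank                # fill interior with dummy data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange halos: send interior edge cells, receive into ghost cells.
comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[:1], source=left)
```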

A historical big data analysis to disclose the social construction of juvenile delinquency

Presenter(s): Sandeep Puthanveetil (NCSA)
Principal Investigator(s): Yu Zhang (The State University of New York at Brockport)

Presentation Slides

Social construction is a theoretical position that social reality is created through human definition and interaction. As one type of social reality, juvenile delinquency is perceived as a social problem, deeply contextualized and socially constructed in American society. The social construction of juvenile delinquency in the U.S. started far earlier than the first juvenile court in 1899. Scholars have tried traditional historical analysis to explore the timeline of the social construction of juvenile delinquency, but it is inefficient to examine hundreds of years of documents using traditional paper-and-pencil methods. This project aims to study the social construction of juvenile delinquency in the United States using data analysis of scanned historic newspaper collections. It combines image and linguistic analyses and big data tools to analyze hundreds of years of scanned newspaper images and show a clear development of the social construction of juvenile delinquency in American society. Currently, the startup phase analyzes data from an archive of newspapers (1853-1921) from the Library of Congress Chronicling America website (http://chroniclingamerica.loc.gov/newspapers/). Sandeep will provide a very brief overview of the project; discuss the image analysis tools being designed and developed as part of this project, specifically with regard to segmentation of newspaper articles and OCR; report their current progress; and preview some of the upcoming tasks in the text analysis and visualization stages of the project.
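As a hypothetical illustration of the segmentation-plus-OCR step (the project's actual tools may differ; the file name and size threshold are made up), one common approach merges text lines into blocks morphologically and then OCRs each block:

```python
# Segment a scanned newspaper page into text blocks, then OCR each block.
import cv2
import pytesseract

page = cv2.imread("newspaper_page.png", cv2.IMREAD_GRAYSCALE)
# Binarize, then dilate so nearby lines of text merge into column blocks.
_, binary = cv2.threshold(page, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 25))
blocks = cv2.dilate(binary, kernel, iterations=3)

# OpenCV 4 return signature: (contours, hierarchy).
contours, _ = cv2.findContours(blocks, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h < 10000:          # skip noise and tiny fragments
        continue
    text = pytesseract.image_to_string(page[y:y + h, x:x + w])
    print(f"--- block at ({x},{y}) ---\n{text}")
```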


April 17, 2018

Clusters in the Cloud - Programmable, Elastic Cyberinfrastructure

Presenter(s): Eric Coulter (IU)
Principal Investigator(s): Sudhakar Pamidighantam (IU) Amit Majumdar (SDSC) Borries Demeler (UTHSC)

Presentation Slides

Eric will discuss the process of building a customized virtual cluster using OpenStack, Ansible, and SLURM; the benefits of elastic resources for gateway groups; and how this approach can be applied to extend the compute resources available to traditional hardware systems. Eric has worked with PI Sudhakar Pamidighantam (SEAGrid science gateway), PI Amit Majumdar (Neuroscience Gateway), and PI Borries Demeler (UltraScan science gateway) to enable production-ready virtual clusters on Jetstream.
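The elastic pattern can be sketched roughly as follows (a hypothetical example: the cloud, image, flavor, and network names are made up, and a real setup would drive node configuration through Ansible rather than this bare loop):

```python
# Grow an OpenStack-backed cluster when Slurm reports pending jobs.
import subprocess
import openstack

def pending_jobs():
    out = subprocess.run(["squeue", "--states=PD", "--noheader"],
                         capture_output=True, text=True, check=True)
    return len(out.stdout.splitlines())

def grow_cluster(n):
    conn = openstack.connect(cloud="jetstream")  # credentials from clouds.yaml
    for i in range(n):
        conn.create_server(name=f"compute-extra-{i}",
                           image="centos7-compute",
                           flavor="m1.medium",
                           network="cluster-net",
                           wait=True)
        # Configuration management (e.g., Ansible) would now join the
        # new node to the Slurm cluster.

if pending_jobs() > 0:
    grow_cluster(1)
```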

Software as a Service Gateways

Presenter(s): Eroma Abeysinghe (IU)
Principal Investigator(s): Alison Marsden (Stanford) Charles Danko (Cornell)

Presentation Slides

Research groups producing open source scientific software often face the daunting tasks of helping their user communities with build instructions for a wide variety of hardware platforms, assisting in optimizing the applications, and developing detailed documentation for using the software. Such software communities can ease this support burden by developing custom science gateways for their specialized software. In this talk, Eroma Abeysinghe will discuss two such efforts in developing, deploying, and operating science gateways: one for the finite-element blood flow solver SimVascular and one for the Detection of Regulatory Elements (dReg), in collaboration with PIs Alison Marsden and Charles Danko, respectively. Eroma will discuss her experiences in developing these gateways based on the open community science gateway framework Apache Airavata, and the PIs' early success in community engagement with research and education.


March 20, 2018

ECSS Symposium featuring PI Panel

Presenter(s): Michael Cianfrocco (University of Michigan) Cameron Smith (Rensselaer Polytechnic Institute) Jian Tao (Texas A&M University) Sever Tipei (University of Illinois)

Curious about XSEDE's Extended Collaborative Support Services (ECSS)? Join us at our ECSS Symposium webinar on March 20 to hear from a panel of PIs about their experiences working with ECSS! They'll share what it was like requesting ECSS support, what the collaboration was like throughout the course of the project, and how ECSS support helped them achieve results.

Presenter(s):

Michael Cianfrocco is a Research Assistant Professor at the University of Michigan's Life Sciences Institute. Michael's ECSS project, "Analysis of Cryo-EM data on Comet and Gordon," began with a postdoctoral position with Andres Leschziner's lab at UCSD. Michael has been working with Mona Wong (SDSC) through both ECSS and the Science Gateways Community Institute to develop a gateway that would offer the cryoEM science community a web-based tool to simplify the analysis of data using a standardized workflow running on XSEDE's supercomputers. This gateway will lower the barrier to high performance computing tools and contribute to the fast-growing field of structural biology.

Cameron Smith is a Computational Scientist at the Scientific Computation Research Center at Rensselaer Polytechnic Institute. Cameron's project, "Adaptive Finite-element Simulations of Complex Industrial Flow Problems" focuses on scaling and performance analysis of adaptive in-memory workflows using PHASTA CFD, EnGPar load balancing, and PUMI unstructured mesh services on Stampede2's Knights Landing processors. The workflows are executed through the PHASTA science gateway. Cameron worked with ECSS staff Lars Koersterke and Lei Huang (both at TACC) on this project.

Jian Tao is a Research Scientist in the Strategic Initiatives Group at Texas A&M Engineering Experiment Station and High Performance Research Computing at Texas A&M University. Jian's work, "Deploying Containerized Coastal Model on XSEDE Resources," first began while he was at Louisiana State University. The goal is to develop and deploy enhancements into the SIMULOCEAN science gateway, integrating new Docker features of Bridges and Globus capabilities for authentication, file transfer and sharing. The PI worked with Mona Wong and Andrea Zonca (SDSC) and Stuart Martin from the Globus team.

Sever Tipei is a Professor of Composition-Theory in the School of Music at University of Illinois' College of Fine and Applied Arts. His project, "DISSCO, a Digital Instrument for Sound Synthesis and Composition," involves optimization and parallelization of the multi-threaded code DISSCO (developed jointly at the UIUC Computer Music Project and at Argonne National Laboratory). DISSCO combines the field of computer-assisted composition with that of sound design in a seamless process. Sever has worked with ECSS staff Paul Rodriguez and Bob Sinkovits (both at SDSC) on this project.


February 20, 2018

Deep Learning: An Increasingly Common HPC Task

Presenter(s): Paola Buitrago (PSC) Joel Welling (PSC)

Presentation Slides: Joel Welling

Presentation Slides: Paola Buitrago

Deep learning is a highly compute- and data-intensive category of tasks with wide applicability in science as well as industry. Join Paola Buitrago and Joel Welling from the Pittsburgh Supercomputing Center in two talks that will provide an overview of the current deep learning landscape and examples of the deep learning environments available to XSEDE users. Paola will provide a brief history of the field and an update on its technical performance, with examples from domains as diverse as vision and theorem proving. Joel will follow with a description of the PSC's support for two major deep learning packages, TensorFlow and Caffe.
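As a small, generic example of the kind of workload these environments support (illustrative only, not drawn from the talk), a minimal TensorFlow training run looks like this:

```python
# Minimal dense classifier on MNIST with tf.keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```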


January 16, 2018

An Introduction to Jetstream

Presenter(s): Virginia Trueheart (TACC)

Presentation Slides

Jetstream is an interactive computing resource designed to make High Performance Computing accessible to users who are not part of traditional HPC fields. This tutorial aims to introduce Jetstream's capabilities to this expanded user base. It will demonstrate how to access the system, make use of the various Virtual Machines available, and use publicly available images to assist with research. It will also cover how to create, modify, and save personal images that can be customized to individual workflows and saved long term for reference in publications.

Visualizations of Simulated Supercell Storm Data

Presenter(s): Greg Foss (TACC)
Principal Investigator(s): Amy McGovern (University of Oklahoma)

Presentation Slides

XSEDE ECSS project: High Performance Computing Resources in Support of Spatiotemporal Relational Data Mining for Anticipation of Severe Weather.
Amy McGovern and her collaborator Corey Potvin from the National Severe Storms Laboratory are developing and applying novel spatiotemporal data mining techniques to supercell thunderstorm simulations, with the goal of identifying tornado precursors. The overall goal of the project is to improve tornado warning lead time and accuracy by integrating with the "Warn on Forecast" project, a National Oceanic and Atmospheric Administration research program tasked with increasing tornado, severe thunderstorm, and flash flood warning lead times.
XSEDE ECSS staff were enlisted to see what could be found using 3D visualization techniques and an interactive user interface. The resulting images and animations will assist in defining storm features and serve as input to the data mining, ensuring that automatically extracted objects match visually identified ones. This talk will feature graphics from a selection of three (5.7 TB) datasets, with visualization samples identifying various supercell thunderstorm features.


December 19, 2017

Jupyter Notebooks deployments at scale for Gateways and Workshops

Presenter(s): Andrea Zonca (SDSC)

Presentation Slides

Andrea Zonca (SDSC) will give an overview of deployment options for Jupyter Notebooks at scale on XSEDE resources. They are all based on deploying JupyterHub on Jetstream and then either spawning Notebooks on a traditional HPC system or setting up a distributed, scalable system on Jetstream instances via either Docker Swarm or Kubernetes.
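The HPC-spawning variant can be sketched with the community batchspawner package (a minimal jupyterhub_config.py fragment; the partition and resource values here are hypothetical):

```python
# jupyterhub_config.py: spawn each user's notebook server as a Slurm job.
c = get_config()  # noqa: F821 - provided by JupyterHub at load time

c.JupyterHub.spawner_class = "batchspawner.SlurmSpawner"
c.SlurmSpawner.req_partition = "compute"   # hypothetical partition name
c.SlurmSpawner.req_nprocs = "4"            # cores per notebook server
c.SlurmSpawner.req_runtime = "02:00:00"    # wall-clock limit
c.SlurmSpawner.req_memory = "8G"
```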

Deployment and benchmarking of RDMA Hadoop, Spark, and HBase on SDSC Comet

Presenter(s): Mahidhar Tatineni (SDSC)

Presentation Slides

Data-intensive computing middleware (such as Hadoop and Spark) can potentially benefit greatly from hardware already designed for high performance and scalability, with advanced processor technology, large memory per core, and high performance storage/filesystems. Mahidhar Tatineni (SDSC) will give an overview of the deployment and performance of Remote Direct Memory Access (RDMA) Hadoop, Spark, and HBase middleware on the XSEDE Comet HPC resource. These packages have been developed by Dr. D.K. Panda's Network-Based Computing (NBC) Laboratory at the Ohio State University. The talk will cover details of the integration with the HPC scheduling framework, the design and components of the packages, and the performance benefits of the design. Applications tested include the Kira toolkit (astronomy image processing), latent Dirichlet allocation (LDA) for topic modeling, and BigDL (distributed deep learning library).
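As a toy illustration of one of the tested applications, an LDA topic-modeling job in Spark looks roughly like this (a tiny made-up corpus; the actual benchmarks ran at far larger scale on RDMA-enabled Spark):

```python
# Minimal PySpark LDA pipeline: tokenize, vectorize, fit topics.
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, CountVectorizer
from pyspark.ml.clustering import LDA

spark = SparkSession.builder.appName("lda-demo").getOrCreate()
docs = spark.createDataFrame(
    [("galaxies stars telescopes images",),
     ("markets orders trading prices",)], ["text"])

tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
cv = CountVectorizer(inputCol="words", outputCol="features").fit(tokens)
vectors = cv.transform(tokens)

model = LDA(k=2, maxIter=10).fit(vectors)
model.describeTopics().show()
spark.stop()
```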

