ECSS Symposium

ECSS staff share technical solutions to scientific computing challenges monthly in this open forum.

The ECSS Symposium allows the more than 70 ECSS staff members to exchange information each month about successful techniques used to address challenging science problems. Tutorials on new technologies may also be featured. Two 30-minute, technically focused talks are presented each month, each followed by a brief question-and-answer period. This series is open to everyone.

Symposium coordinates

Day and Time: Third Tuesdays @ 1 pm Eastern / 12 pm Central / 10 am Pacific
Add this event to your calendar.

Webinar (PC, Mac, Linux, iOS, Android): Launch Zoom webinar

iPhone one-tap (US Toll): +16468769923,,114343187# (or) +16699006833,,114343187#

Telephone (US Toll): Dial (for higher quality, dial a number based on your current location):

US: +1 646 876 9923 (or) +1 669 900 6833 (or) +1 408 638 0968

Meeting ID: 114 343 187

See the Events page for details of upcoming presentations. Upcoming events are also posted to the Training category of XSEDE News.

Due to the large number of attendees, only the presenters and host broadcast audio. Attendees may submit chat questions to the presenters through a moderator.

Key Points
Monthly technical exchange
Presented by the ECSS community
Open to everyone
Tutorials and talks with Q & A

Previous years' ECSS seminars may be accessed through these links:

2017

2016

2015

2014

March 18, 2014

Science Gateway Support and Software Spinoffs
Presenters: Marie Ma (Yu Ma) and Lahiru Gunathilake (Indiana University)

Presentation Slides

Science gateways enable broad communities of scientists to use XSEDE resources through Web browsers and similar user interfaces. XSEDE's Extended Collaborative Support Services (ECSS) has staff available to work with science gateway developers to help them integrate their gateways with XSEDE. Frequently, a solution to one gateway's problems can be reused by other gateways. In this two-part presentation, we describe a range of gateway support activities and some reusable software nuggets that we have derived from them. Implementing XSEDE-compatible Web-based authentication for gateway users is a common problem, especially given the wide range of programming languages and frameworks used to build gateways. We summarize support activities for the General Automated Atomic Model Parameterization (GAAMP) computational chemistry gateway, the NCGAS Galaxy-based bioinformatics gateway, the ParamChem computational chemistry gateway, and the UltraScan biophysics gateway, and describe three common requirements: the need to perform XSEDE-compatible Web authentication, the need to manage job executions securely, and the need to monitor jobs through the gateway. This has led our group to develop small, open source gateway code nuggets that can be easily reused in other projects. As open source software, these are open for anyone to use but also, just as importantly, open for code contributions. We conclude with information on how to obtain, use, and contribute to the software.


February 18, 2014

Postponed to a later date (TBD)

Perspectives on Data Sharing: Two data-centric gateways at NCAR
Presenters: Don Middleton, Eric Nienhouse, Nathan Wilhelmi (NCAR)

The U.S. government and the NSF have substantially elevated the importance of scientific data management, sharing, and openness as a national priority. In addition to providing access to computational resources for scientific communities, science gateways can also be an ideal place for communities to share data using common infrastructure that's been tuned to their specific needs. In this presentation, we will briefly review some of the salient NSF policy shifts regarding data, touch on related emerging trends including Big Data and EarthCube, demonstrate two data-centric Science Gateways (climate modeling and Arctic science), and finish up by providing an overview of our architecture and software engineering process.

Bio of Speakers:

Don Middleton leads the Visualization and Enabling Technologies (VETS) program in NCAR's Computational and Information Systems Laboratory (CISL). This program includes the development and delivery of data collections and cyberinfrastructure to a broad, national and global community. The project portfolio includes the NCAR Command Language (NCL) and the PyNGL/PyNIO toolkit, the Community Data Portal (CDP), the NSF-sponsored Advanced Cooperative Arctic Data and Information Service (ACADIS), the Earth System Grid (ESG) data system, NSF's XSEDE project, the DOE-sponsored Parvis effort, the UCSD-led Chronopolis digital preservation project, and the multi-agency sponsored National Multimodel Ensemble (NMME) project. Middleton is active in NSF's EarthCube activity and also contributes to an expert team on federated data management systems for the World Meteorological Organization Information System (UN/WMO-WIS).

Eric Nienhouse is a software engineer and Agile Scrum Product Owner for the Science Gateway Framework (SGF) software, which supports the ESG-NCAR Science Gateway and the ACADIS Arctic science data management system. Eric is passionate about building products that enable the scientific user community to focus on its science. As product owner, Eric identifies and prioritizes project requirements to ensure the SGF software and services meet the needs of stakeholders.

Nathan Wilhelmi is a software engineer and the Scrum Master for the Science Gateway Framework (SGF) software, which supports the ESG-NCAR Science Gateway and the ACADIS Arctic science data management system. As the Scrum Master, Nathan is responsible for facilitating and improving the Scrum process, ensuring the improvement of code quality, and researching and adopting new technologies.


January 21, 2014

Pushing the Integration Envelope of Cyberinfrastructure to Realize the CyberGIS Vision
Presenter: Shaowen Wang, (NCSA)

Presentation Slides

CyberGIS, geographic information science and systems (GIS) based on advanced cyberinfrastructure, has emerged during the past several years as a vibrant interdisciplinary field. It has played essential roles in enabling computing- and data-intensive research and education across a broad swath of academic disciplines with significant societal impact. However, fulfilling such roles is increasingly dependent on the ability to simultaneously process and visualize complex and very large geospatial data sets and conduct associated analyses and simulations, which often require tight integration of collaboration, computing, data, and visualization capabilities. This presentation addresses this requirement as a set of challenges and opportunities for advancing cyberinfrastructure and related sciences while discussing the state of the art of CyberGIS.


December 17, 2013

Presenter(s):
Nate Coraor, (Penn State)
Philip Blood, (Pittsburgh Supercomputing Center)
Rich LeDuc, (National Center for Genome Analysis Support)
Yu Ma, (Indiana University)
Ravi Madduri, (Argonne National Laboratory)

Presentation Slides

We will describe current and planned efforts to enable more scientists to easily, transparently, and reproducibly analyze and share large-scale next-generation sequencing data with the Galaxy framework.

Topics and speakers are as follows:

  • James Taylor (Galaxy Team): The future of Galaxy.
  • Philip Blood (Pittsburgh Supercomputing Center): Integrating Galaxy Main with XSEDE and establishing an XSEDE Galaxy Gateway.
  • Rich LeDuc and Yu Ma (National Center for Genome Analysis Support (NCGAS), Indiana U): Utilizing Galaxy at NCGAS with integrated InCommon authentication.
  • Ravi Madduri (Argonne National Laboratory): Experiences in building a next-generation sequencing analysis service using Galaxy, Globus Online, and Amazon Web Services.


October 15, 2013

Research Data Management-as-a-Service with Globus Online
Presenter: Rachana Ananthakrishnan, (University of Chicago)

Presentation Slides

As science becomes more computation- and data-intensive, there is an increasing need for researchers to move and share data across institutional boundaries. Managing massive volumes of data throughout their lifecycle is rapidly becoming an inhibitor to research progress, due in part to the complex and costly IT infrastructure required – infrastructure that is typically out of reach for the hundreds of thousands of small and medium labs that conduct the bulk of scientific research. Globus Online is a powerful system that aims to provide easy-to-use services and tools for research data management – as simple as the cloud-hosted Netflix for streaming movies, or Gmail for e-mail – and to make advanced IT capabilities available to any researcher with access to a Web browser. Globus Online provides software-as-a-service (SaaS) for research data management, including data movement, storage, sharing, and publication. We will describe how researchers can deal with data management challenges in a simple and robust manner. Globus Online makes large-scale data transfer and synchronization easy by providing a reliable, secure, and highly monitored environment with powerful and intuitive interfaces. Globus also provides federated identity and group management capabilities for integrating Globus services into campus systems, research portals, and scientific workflows. New functionality includes data sharing, simplifying collaborations within labs or around the world. Tools built specifically for IT administrators at campuses and computing facilities give additional features, controls, and visibility into users' needs and usage patterns. We will present use cases that illustrate how Globus Online is used by campuses (e.g., the University of Michigan), supercomputing centers (e.g., Blue Waters, NERSC), and national cyberinfrastructure providers (e.g., XSEDE) to facilitate secure, high-performance data movement among local computers and HPC resources. We will also outline the simple steps required to create a Globus Online endpoint and make the service available to all facility users without specialized hardware, software, or IT expertise. There will be a live demonstration of how to use Globus Online.


September 17, 2013

Introducing the XSEDE Workflow Community Applications Team
Presenter: Marlon Pierce, (IU)

Presentation Slides

Many computational problems have sophisticated execution patterns, or scientific workflows, that build on top of the basic scheduling and queuing structures offered by XSEDE service providers. Examples include techniques for large-scale parameter space exploration and the coupling of several independently developed applications into new, composite computational experiments. Such executions may occur across multiple resources as well, using the optimal resource for each application in the dependency chain. Managing these complex executions is only part of the problem: scientists must be able to capture the metadata associated with a particular set of runs and share their workflows within their teams and with collaborators. Many academic groups have devoted significant research efforts to building workflow software that addresses one or more of these problems. The goal of the newly constituted XSEDE Workflow Community Applications Team is to assist scientists in applying workflow software to their research on XSEDE. The long-term goal of the XSEDE workflow team is to assist XSEDE in providing a well-documented, reliable environment for the execution of scientific workflows in partnership with workflow software development teams.
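The dependency chains described above can be modeled as a directed acyclic graph of tasks. As a minimal illustration of the idea (the task names here are hypothetical, not taken from the team's actual software), Python's standard library can derive a valid execution order from such a graph:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical composite experiment: each task maps to the tasks it depends on.
workflow = {
    "preprocess": [],
    "simulate": ["preprocess"],
    "analyze": ["simulate"],
    "visualize": ["analyze"],
}

# static_order() yields the tasks so that every task appears after its dependencies.
order = list(TopologicalSorter(workflow).static_order())
print(order)  # a valid execution order for the dependency chain
```

A real workflow engine adds resource selection, fault tolerance, and metadata capture on top of this ordering, but the scheduling constraint it must respect is exactly this topological one.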

Supporting non-traditional users as they leverage XSEDE into their research
Presenter: Roberto O. Gomez, (PSC)

Presentation Slides

Our role as service providers of HPC has in the past required us to offer user support to, for lack of a better word, ‘traditional' HPC users, who are usually well versed in parallel computing and are familiar with supercomputing centers and the mechanics of using large resources. We tend to think that this is the normal case, irrespective of their research discipline. Our support efforts are typically dominated by porting users' codes (which are already parallel) to new architectures, tuning these codes to improve their performance, and installing software to satisfy specific user requirements. Of course, we do offer training for new users, and we deal individually with ‘green' members of the research teams with which we interact. But by and large, our support role might be more accurately described as ‘software support' rather than ‘user support'. As we try to open access to HPC resources to a wider audience and bring in users from research areas not normally associated with HPC, however, the paradigm we had grown comfortable with is flipped on its head. Non-traditional HPC users bring with them a host of different requirements that put more emphasis on the ‘user' side of the support process. We will illustrate this with some examples of recent NIP and ECSS activities at PSC.



August 20, 2013

Engineering Breakthroughs at NCSA (XSEDE, Blue Waters, Industry)
Presenter: Seid Koric, (NCSA)

Presentation Slides

Examples of some of my recent academic and industrial HPC work and collaborations are presented. Application examples range from manufacturing and multiphysics materials-processing simulations to massively parallel linear solvers. The parallel scalability of engineering codes on iForge (NCSA's exclusive industrial HPC resource) and Blue Waters (the NSF-funded sustained petascale system) is also discussed.

Identification of Mechanism Based Inhibitors of Oncogene Pathways Using High-Performance Docking
Presenter: Bhanu Rekepalli, (NICS)
Principal Investigator: Yuri Peterson (MUSC)

Presentation Slides

Modern high-throughput drug discovery is a complex and costly endeavor. Since small-molecule chemical space currently numbers in the millions of compounds and is growing rapidly, performing in vitro and cell-based screening is complex and cost-prohibitive even for small subsets of molecules without significant investment in expertise and infrastructure. Thus, computational approaches are increasingly incorporated into drug discovery. Due to limited computational resources, most computational drug discovery efforts either use rigid conformer libraries to bypass the need for more computationally intensive flexible docking, or perform flexible docking on small subsets (tens of thousands of compounds). High-performance computing resources such as Kraken give biomedical researchers access to unprecedented amounts of computing power, opening the possibility of exploring in hours an immense chemical space that would previously have taken years on local clusters. Our working hypothesis is that High-Performance Docking (HP-D) allows probing of vast amounts of chemical space that is impractical by any other means. To test this hypothesis we have instrumented the docking program DOCK6 and compared performance between a small academic parallel cluster and Kraken. We show a dramatic and scalable increase in performance that will allow the exploration of vast amounts of chemical space to identify compounds that dock to proteins in ovarian cancer pathways.
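The scalability described above rests on each docking run being independent of the others, so the compound library can simply be partitioned across workers. A minimal sketch of that partitioning idea (library size, chunk count, and compound names are illustrative, not taken from the actual DOCK6 instrumentation):

```python
def partition(library, n_workers):
    """Divide a list of compound IDs into n_workers near-equal chunks."""
    chunks = [[] for _ in range(n_workers)]
    for i, compound in enumerate(library):
        chunks[i % n_workers].append(compound)  # round-robin assignment
    return chunks

# A hypothetical million-compound library spread over 4096 workers:
library = [f"compound_{i:06d}" for i in range(1_000_000)]
chunks = partition(library, 4096)
print(len(chunks), min(len(c) for c in chunks), max(len(c) for c in chunks))
```

Because no chunk depends on any other, doubling the worker count roughly halves the wall-clock time, which is the scaling behavior the abstract reports.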


June 18, 2013

Atomistic Characterization of Stable and Metastable Alumina Surfaces
Presenter: Sudhakar Pamidighantam, (NCSA)
Principal Investigator: Douglas Spearot, Co-PI: Shawn Coleman (University of Arkansas)

Presentation Slides

This presentation describes work in progress for the extended collaborative support requested to assist the PIs with (1) improving the parallel scalability of the virtual diffraction algorithm implemented as a "compute" in LAMMPS, (2) creating a workflow that automates the transfer of atomistic simulation results from TACC Stampede to SDSC Gordon in order to perform the virtual diffraction analysis, and (3) visualization. Before the virtual diffraction algorithm is made publicly available and incorporated into the main distribution of LAMMPS, the performance of the code must scale to larger atomic models. Currently the algorithm is memory-bound, likely due to the simple MPI parallelization used. Strategies for and implementation of scaling methods in the compute will be described. Visualization using the VisIt system with various configuration protocols is being implemented. Integration into the GridChem science gateway and user interactions will be discussed. A plan for the workflow implementation will be presented.
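Memory-bound behavior of the kind noted above typically arises when every MPI rank stores the full working set. One standard remedy, sketched here in plain Python with hypothetical sizes (this is a generic illustration, not the actual LAMMPS implementation), is a block partition that gives each rank only its own share of the reciprocal-space points:

```python
def local_range(n_points, rank, n_ranks):
    """Return the [start, stop) slice of points owned by `rank`,
    handing any remainder out one point at a time to the lowest ranks."""
    base, rem = divmod(n_points, n_ranks)
    start = rank * base + min(rank, rem)
    stop = start + base + (1 if rank < rem else 0)
    return start, stop

# With 10 points over 4 ranks, the ranks own 3, 3, 2, and 2 points,
# together covering all 10 with no overlap.
ranges = [local_range(10, r, 4) for r in range(4)]
print(ranges)
```

With such a decomposition, per-rank memory shrinks as ranks are added instead of staying constant, which is usually what turns a memory-bound compute into a scalable one.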

April 16, 2013

Data Analysis on Massive Online Game Logs
Presenter: Dora Cai (NCSA)
Principal Investigator: Marshall Scott Poole (UIUC)

Presentation Slides

This presentation describes work with Professor Marshall Scott Poole entitled "Some Assembly Required: Using High Performance Computing to Understand the Emergence of Teams and Ecosystems of Teams". Massively Multiplayer Online Games (MMOGs) provide unique opportunities to investigate large social networks. This project is a multidisciplinary Social Sciences project dedicated to the study of communication-related behaviors using data from MMOGs. A twenty-person team of scholars from four universities is engaged in the study. The project has performed systematic studies in many research areas, such as social network analysis, gamer behavior studies, and virtual world simulation. The Gordon supercomputer has provided great support for this project. This talk will provide an overview of the project, describe my involvement as an ECSS consultant, and present recent progress on developing a tool to visualize the social networks in MMOGs.

Development of Novel Quantum Chemical Molecular Dynamics for Materials Science Modeling
Presenter: Jacek Jakowski (NICS)
Co-Principal Investigators: Jacek Jakowski (NICS), Sophya Garashchuk (U. of South Carolina), Steve Stuart (Clemson University), Predrag Krstic (University of Tennessee & ORNL), Stephan Irle (Nagoya University)

I will present my work on the project titled "Modeling of nano-scale carbon and metalized carbon materials" for the EPSCoR Desktop to TeraGrid EcoSystems project. This project contains several subprojects that focus on the development and application of various molecular dynamics approaches to materials science problems. I will particularly discuss the development and parallelization of Bohmian dynamics for modeling quantum nuclear effects of selected nuclei. Implementation and scaling on Kraken at NICS will be presented, along with illustrative science problems.

