ECSS staff share technical solutions to scientific computing challenges monthly in this open forum.
The ECSS Symposium allows the over 70 ECSS staff members to exchange information monthly about successful techniques used to address challenging science problems. Tutorials on new technologies may be featured. Two 30-minute, technically focused talks are presented each month, each including a brief question-and-answer period. This series is open to everyone.
Day and Time: Third Tuesdays @ 1 pm Eastern / 12 pm Central / 10 am Pacific
Add this event to your calendar.
Note: The symposium is not held in July or November due to conflicts with the PEARC and SC conferences.
Webinar (PC, Mac, Linux, iOS, Android): Launch Zoom webinar
iPhone one-tap (US Toll): +16468769923,,114343187# (or) +16699006833,,114343187#
Telephone (US Toll): For higher audio quality, dial a number based on your current location:
US: +1 646 876 9923 (or) +1 669 900 6833 (or) +1 408 638 0968
Meeting ID: 114 343 187
Upcoming events are also posted to the Training category of XSEDE News.
Due to the large number of attendees, only the presenters and host broadcast audio. Attendees may submit chat questions to the presenters through a moderator.
Previous years' ECSS seminars may be accessed through these links:
January 19, 2021
Introduction to Jetstream2 - Accelerating Science and Engineering on Demand
Presenter(s): Jeremy Fischer (Indiana University)
This talk will give an overview of Jetstream and the award of Jetstream2. We'll discuss successes, failures, and some things we learned along the way. We'll discuss use cases and try to provide plenty of time for questions about the system at the end of the session.
Exosphere, User-Friendly Interface for Research Clouds
Presenter(s): Chris Martin (University of Arizona), Julian Pistorius (University of Arizona)
Exosphere is a client interface for managing computing workloads on OpenStack cloud infrastructure. It is a user-friendly alternative to Horizon, the default OpenStack graphical interface. Exosphere can be used with most research cloud infrastructure, requiring near-zero custom integration work. The Exosphere team aims to bring advanced features of research clouds within reach of non-advanced users, such as elastic workload scaling, GPU-accelerated streaming desktops, and secure, reproducible sharing of data science workbench environments. Link to Slides
October 20, 2020
Presenter(s): Sergiu Sanielevici (PSC)
Neocortex will be a highly innovative resource at PSC that will accelerate AI-powered scientific discovery by vastly shortening the time required for deep learning training, foster greater integration of artificial deep learning with scientific workflows, and provide revolutionary new hardware for the development of more efficient algorithms for artificial intelligence and graph analytics.
Presenter(s): Shawn Brown (PSC)
Bridges-2, PSC's newest supercomputer, will provide transformative capability for rapidly evolving, computation-intensive and data-intensive research, creating opportunities for collaboration and convergent research. It will support both traditional and non-traditional research communities and applications. Bridges-2 will integrate new technologies for converged, scalable HPC, machine learning and data; prioritize researcher productivity and ease of use; and provide an extensible architecture for interoperation with complementary data-intensive projects, campus resources, and clouds.
September 15, 2020
High Resolution Spatial Temporal Analysis of Whole-Head 306-Channel Magnetoencephalography & 66-Channel Electroencephalography Brain Imaging in Humans During Sleep
Presenter(s): David Shannahoff-Khalsa (UCSD), Mona Wong (SDSC), Jeff Sale (SDSC)
In chronobiology, the circadian rhythm is known as the 24-hr sleep-wake cycle. The ultradian rhythm has a shorter cycle with approximately a 1-3 hour periodicity, with considerable variability. This project's goal is to follow up on our earlier EEG work during sleep, and that of others, that has identified a rhythm of how the two cerebral hemispheres alternate in dominance with coupling to the ultradian rhythm of the rapid eye movement (REM) and non-rapid eye movement (NREM) sleep cycle. Here we are also comparing whole-head and regional variations in cerebral dominance to gain better insight into this novel rhythm during sleep. This rhythm of alternating cerebral hemispheric dominance also manifests during the waking state, and it is apparently coupled to every major bodily system and now presents as a novel rhythm regulated by the central and autonomic nervous systems via the hypothalamus.

With the support of XSEDE ECSS, this project has processed 306-channel magnetoencephalography that includes 3 signal types (1 magnetometer, 2 opposing gradiometers) and 66-channel EEG recordings from 4 normal healthy sleep subjects. We are analyzing the data to compare the 4 signal types filtered into 6 frequency bands, over the whole head and 6 discrete regions of the head, to see how they vary with the REM and NREM sleep stages. Our analysis includes a relatively new algorithm called Fast Orthogonal Search that is well suited for analyzing the periodicity in nonlinear dynamical systems. Our analysis also includes unique methods in visualization for observing how these patterns of left-minus-right hemisphere power manifest during sleep stages.
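Fast Orthogonal Search, mentioned in the abstract above, greedily builds a signal model from candidate basis functions, at each step keeping the term that removes the most energy from the residual. The sketch below is an illustrative simplification (uniform sampling, integer candidate frequencies in cycles per record, sin/cos pairs fitted by closed-form projection), not the project's actual pipeline or parameterization:

```python
import math

def fit_pair(residual, freq, n):
    # Project the residual onto a sin/cos pair at this frequency.
    # Integer frequencies over a full record are orthogonal on a uniform
    # grid, so the least-squares amplitudes reduce to simple projections.
    s = [math.sin(2 * math.pi * freq * t / n) for t in range(n)]
    c = [math.cos(2 * math.pi * freq * t / n) for t in range(n)]
    a = 2.0 / n * sum(r * x for r, x in zip(residual, s))
    b = 2.0 / n * sum(r * x for r, x in zip(residual, c))
    model = [a * x + b * y for x, y in zip(s, c)]
    gain = sum(m * m for m in model)  # energy this term removes
    return gain, model

def fos(signal, candidate_freqs, n_terms=2):
    """Greedy frequency selection in the spirit of Fast Orthogonal Search:
    repeatedly pick the candidate that best explains the current residual,
    subtract its fitted contribution, and continue."""
    residual = list(signal)
    n = len(signal)
    chosen = []
    for _ in range(n_terms):
        best = max(candidate_freqs, key=lambda f: fit_pair(residual, f, n)[0])
        _, model = fit_pair(residual, best, n)
        residual = [r - m for r, m in zip(residual, model)]
        chosen.append(best)
    return chosen
```

On a synthetic signal containing 5- and 12-cycle components, `fos` recovers the dominant frequency first, which is the property that makes the method useful for quantifying periodicity in noisy physiological data.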
August 18, 2020
RDA Recommendations and Outputs
Presenter(s): Anthony Juehne (RDA Foundation)
The RDA was launched in 2013 to fill the identified need for a neutral, collaborative space gathering the diverse data communities and, through informed consensus, building the social and technical bridges to enable open data sharing. Since its founding, the RDA principles - Open, Consensus, Balance, Harmonization, Community-driven, Non-profit, and Technology-Neutral - have resonated across research communities. RDA membership currently includes over 11,000 participants representing 144 countries from all populated continents, collaborating in 97 working or interest groups. The RDA focuses on building outcomes that accelerate open data interoperability, sharing, and use. This happens through the development and deployment of two primary output categories: i) technical infrastructure (e.g., tools, models, registries); and ii) social infrastructure (e.g., common standards, best practices, policies). This presentation will discuss an approach to implementing RDA-developed outputs and recommendations across multiple areas of organizational operation, including human development and education, data laws and policies, research practices, data and metadata formats and standards, data sharing workflows, and infrastructure management for enhanced interoperability.
FAIR Data and SEAGrid Gateway a Research Data Alliance Adoption Project
Presenter(s): Rob Quick (Indiana University)
The Science and Engineering Grid (SEAGrid) Gateway has been an active resource for the computational community since 2016. During this time the utility of persistent identifiers for research data products has become prevalent in research communities, as defined in the FAIR principles for open data. At the beginning of 2020 the Research Data Alliance funded an adoption project to integrate RDA outputs and recommendations focused on PID issuance to the data and software components that make up a science workflow within the SEAGrid environment. This presentation will summarize the project and describe the gateway and data infrastructure components required for integration, along with the details of the integration process. The work done in this adoption project can inform future gateway projects that adopt the technical components of FAIR that rely on a persistent-identifier resolution infrastructure.
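The PID issuance described above follows a common mint-and-resolve pattern: a gateway registers each workflow artifact under a persistent identifier, and consumers later resolve that identifier to a landing page. The sketch below is a minimal in-memory stand-in for that pattern; real deployments use a service such as Handle or DataCite, and the prefix, field names, and class here are hypothetical, not the SEAGrid implementation:

```python
import hashlib
import uuid

class PIDRegistry:
    """Toy persistent-identifier registry: mints identifiers for data
    products and resolves them back to a landing URL. Illustrative only."""

    def __init__(self, prefix="20.500.hypothetical"):
        self.prefix = prefix      # hypothetical namespace prefix
        self.records = {}

    def mint(self, landing_url, metadata):
        # Assign an opaque identifier under the prefix and record a
        # checksum of the metadata so later changes are detectable.
        pid = f"{self.prefix}/{uuid.uuid4().hex[:8]}"
        checksum = hashlib.sha256(repr(sorted(metadata.items())).encode()).hexdigest()
        self.records[pid] = {"url": landing_url,
                             "metadata": metadata,
                             "checksum": checksum}
        return pid

    def resolve(self, pid):
        # Resolution maps the identifier back to its landing page.
        return self.records[pid]["url"]
```

The value of the pattern is that the identifier, not the URL, is what gets cited: the registry can repoint a PID when data moves, keeping citations stable.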
June 16, 2020
Scalable Research Automation using Globus
Presenter(s): Rachana Ananthakrishnan (Globus)
REST APIs exposed by the Globus service, combined with high-speed networks and Science DMZs, create a data management platform that can be leveraged to increase efficiency in research workflows. In many cases, current ad hoc or human-centered processes fall short of addressing the needs of researchers as their work becomes more data intensive. As data volumes grow, the overhead introduced by such non-scalable processes hampers core research activities, sometimes to the point where research takes a back seat to wrangling with IT infrastructure. However, technologies exist for reducing this burden and reengineering processes such that they can easily cope with growing data velocity and volume. One such technology is the Globus platform-as-a-service, which facilitates access to advanced data management capabilities and enables integration of these capabilities into existing and new scientific workflows to automate repetitive tasks: data replication, ingest from instruments, backup, archival, data distribution, etc. We will present real-world examples that illustrate how Globus can be used to perform data management tasks at scale, with no or minimal effort on the part of the researcher. Examples include the Advanced Photon Source's data sharing system, used to distribute data from light source experiments. We will describe how the Globus platform provides intuitive access to authentication, authorization, sharing, transfer, and synchronization capabilities that can be included in simple scripts or integrated into more full-featured applications.
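The automation the talk describes typically reduces to a submit-then-poll loop against the service's REST API. The sketch below shows that loop in isolation; the `StubClient` stands in for a real transfer service so the example runs anywhere, whereas actual Globus automation would use the Globus SDK (e.g., its `TransferClient`) with proper authentication, which this sketch deliberately omits:

```python
import time

def run_transfer(client, source, dest, poll_seconds=0, max_polls=10):
    """Submit a transfer task, then poll until it reaches a terminal
    state. Any client exposing submit()/status() fits this loop."""
    task_id = client.submit(source, dest)
    for _ in range(max_polls):
        status = client.status(task_id)
        if status in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(poll_seconds)
    return "TIMEOUT"

class StubClient:
    """Demonstration stand-in: every task reports ACTIVE once,
    then SUCCEEDED on the second status check."""

    def __init__(self):
        self.checks = {}

    def submit(self, source, dest):
        task_id = f"{source}->{dest}"
        self.checks[task_id] = 0
        return task_id

    def status(self, task_id):
        self.checks[task_id] += 1
        return "SUCCEEDED" if self.checks[task_id] >= 2 else "ACTIVE"
```

Because the loop only depends on the submit/status interface, the same script-level automation covers replication, instrument ingest, and archival by varying what is submitted, which is the scaling argument the talk makes.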
Building Source-to-Source Tools for High-Performance Computing
Presenter(s): Chunhua "Leo" Liao (LLNL)
Computational scientists face numerous challenges when trying to exploit powerful and complex high-performance computing (HPC) platforms. These challenges arise in multiple aspects including productivity, performance, correctness and so on. In this talk, I will introduce a source-to-source approach to addressing HPC challenges. Our work is based on a unique compiler framework named ROSE. Developed at Lawrence Livermore National Laboratory, ROSE encapsulates advanced compiler analysis and optimization technologies into easy-to-use library APIs so developers can quickly build customized program analysis and transformation tools for C/C++/Fortran and OpenMP programs. Several example tools will be introduced, including the AST inliner, outliner, and a variable move tool. I will also briefly mention ongoing work related to benchmarks, composable tools, and training for compiler/tool developers. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-810981).