Student Engagement Projects


Project: GPU/Xeon Phi Accelerator Optimization of Mantle Convection/Geodynamo Simulation Code

Description: The Computational Infrastructure for Geodynamics (CIG) develops, supports, and disseminates community-accessible software for the geodynamics research community. As part of this work, CIG develops codes that support advanced computational technologies such as GPGPU or coprocessor-accelerated computing. CIG is currently developing codes to investigate the long-term dynamics of mantle convection and the core dynamo. ASPECT (Advanced Solver for Problems in Earth's ConvecTion, http://aspect.dealii.org/) is a C++ code that simulates problems in thermal convection, targeting mantle plume formation and subduction zone modeling. Calypso (http://www.geodynamics.org/cig/software/calypso/) is a Fortran code that simulates magnetohydrodynamic dynamos in a rotating spherical shell to study the evolution of the Earth's core. These codes are fully functional, scale to thousands of processors through MPI and/or OpenMP, and are used in productive scientific work, though neither currently supports GPGPU or Xeon Phi accelerator-based computation. The goal of this project is to improve one or both of the codes so that they can use the NVIDIA K20 GPUs and Xeon Phi accelerators on the XSEDE Stampede system at TACC. A student who works on this project must have strong skills in either C/C++ or Fortran, as well as experience in using GPUs or the Xeon Phi accelerator for scientific computing. Ideal candidates will also have experience with numerical methods. Students will work primarily with the CIG lead developer, as well as other CIG developers involved in either code.

Required Skills:

1) High-level C/C++ programming ability
2) High-level Fortran programming ability
3) Experience with GPGPU and/or Xeon Phi accelerator-based computing
4) Familiarity with advanced numerical methods (finite difference, finite element, etc.)

Category:

Science (Computational Science Applications)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. The Computational Infrastructure for Geodynamics (CIG) develops, supports, and disseminates community-accessible software for the geodynamics research community. CIG software supports a variety of geodynamic research from mantle and core dynamics, to crustal and earthquake dynamics, to magma migration and seismology. CIG is a community-governed organization committed to developing and maintaining the geodynamics community through participation across this research spectrum. CIG provides: reusable, well-documented geodynamics software that keeps pace with developments in computational technology; software building blocks for geodynamics from which state-of-the-art modeling codes can be effectively assembled; strategic partnerships with the larger world of computational science and geoinformatics to ensure best practices in developing community-specific toolkits for scientific computation in the solid-Earth sciences; and specialized training, workshops, and other community activities for both the geodynamics and larger Earth science communities. For more information, please see http://geodynamics.org/

Education Level:

Fifth Year
Masters Candidate
PhD Candidate

You will be working on a team. You will be working anywhere.

Supervisor: Eric Heien

College, University, or Research Institution: University of California, Davis



Project: Video Camera Dead-Pixel Monitoring for NASA High Definition Earth Viewing Mission

Description: Design and implement an efficient parallel program to detect and monitor bad pixels in HD Earth-viewing video frames live-streamed from the NASA ISS. The basic steps include video frame sampling and comparison to capture the pixel degradation of the four space cameras over the mission period (~3 years). The results can be used to assess space/satellite camera robustness and radiation resistance for camera selection in future space missions. The project can be further extended to advanced topics in bad-pixel correction, image artifact/noise removal, intelligent computer vision, image understanding, event/object tracking, remote sensing lag-time reduction, four-camera schedule control, big data handling, archiving, and compression.
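
To make the frame-comparison step concrete, here is a minimal sketch (not the project's actual pipeline; the Frame layout, names, and tolerance threshold are all invented) that flags pixels whose values never vary across a set of sampled frames:

    // Hypothetical sketch of the frame-comparison idea; Frame layout,
    // names, and the tolerance threshold are invented for illustration.
    #include <cstdint>
    #include <vector>

    struct Frame {
        int width = 0, height = 0;
        std::vector<uint8_t> pixels; // row-major 8-bit grayscale
    };

    // Flag pixels whose value varies by at most `tolerance` across every
    // sampled frame: a healthy pixel should change as the scene changes.
    std::vector<bool> findStuckPixels(const std::vector<Frame>& frames,
                                      int tolerance) {
        const int n = frames.front().width * frames.front().height;
        std::vector<uint8_t> lo = frames.front().pixels;
        std::vector<uint8_t> hi = lo;
        for (const Frame& f : frames)
            for (int i = 0; i < n; ++i) {
                if (f.pixels[i] < lo[i]) lo[i] = f.pixels[i];
                if (f.pixels[i] > hi[i]) hi[i] = f.pixels[i];
            }
        std::vector<bool> stuck(n);
        for (int i = 0; i < n; ++i)
            stuck[i] = (hi[i] - lo[i]) <= tolerance; // never varies: suspect
        return stuck;
    }

Each pixel is tested independently, so the scan parallelizes naturally (for example, one thread, MPI rank, or image tile per worker), which is where the parallel programming aspect of the project comes in.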

Required Skills:

1) Parallel programming
2) Image processing
3) Algorithm design
4) Creative problem solving
5) Passion for space and Earth viewing
6) Machine intelligence (a plus)

Category:

Data (Data Mining, Data Analytics)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. The successful candidate will be passionate about space and Earth viewing. Contact Dr. Liwen Shih (shih@uhcl.edu).

Education Level:

Junior/Senior
Fifth Year
Masters Candidate
PhD Candidate

You will be working on a team. You will be working anywhere.

Supervisor: Liwen Shih

College, University, or Research Institution: U of Houston - Clear Lake

 


Project: High Performance Simulated Spectral Imaging

Description: The central intent of this project is to develop a scientific simulation technology that requires the use of high performance computing resources to help guide, inform, and verify novel experimental electron microscopy research. Recent electron microscopy instrumentation advances have enabled the coupling of high spatial resolution Z-contrast imaging with high energy resolution spectroscopic analysis. This combination allows for simultaneous measurement of atomic and electronic structure in the form of a so-called spectral image. By correlating atomic and electronic structure in defect-containing systems such as grain boundaries and interfaces, the underlying origin of bulk material properties that are strongly influenced by the presence of the defects may be discovered. In recent years, the Computational Physics Group at the University of Missouri – Kansas City has developed a computational analog of the experimental technique described above. The theoretical spectral image is generated through a series of independent calculations that target each individual atom in a given model. Presently, this calculation is not parallelized, but because the per-atom calculations are independent and load balancing is relatively straightforward, a very large speedup is potentially attainable. In this project, the participating student will learn the elementary parallel programming needed to adapt the existing serial workflow script to a high performance computing environment. Additionally, the student will have the opportunity to learn advanced visualization techniques for displaying the computed multi-dimensional spectral imaging data.
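
Because the per-atom calculations are independent, a simple round-robin MPI distribution is one plausible starting point. A minimal sketch, with entirely hypothetical names (run_atom_calculation and num_atoms stand in for the existing serial step and model size):

    // Minimal round-robin MPI distribution of independent per-atom jobs.
    // `run_atom_calculation` is a hypothetical stand-in for the serial step.
    #include <mpi.h>
    #include <cstdio>

    void run_atom_calculation(int atom_id) {
        // ... invoke the existing serial calculation for this atom ...
        std::printf("atom %d done\n", atom_id);
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int num_atoms = 1000; // hypothetical model size
        // Each rank takes every size-th atom; jobs are independent, so no
        // communication is needed until results are gathered at the end.
        for (int atom = rank; atom < num_atoms; atom += size)
            run_atom_calculation(atom);

        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }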

Required Skills:

1) Scripting language familiarity (Perl / Python)
2) Elementary MPI familiarity (may learn on the job)
3) Linux/UNIX familiarity

Category:

Science (Computational Science Applications)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. This project is directed at undergraduate students. The student will be working with me as the lead project advisor as well as a graduate student that is connected with this project.

Education Level:

Freshman/Sophomore
Junior/Senior

You will be working on a team.
You will be working at
University of Missouri - Kansas City, Kansas City, MO.

Supervisor: Paul Rulis

College, University, or Research Institution: Univeristy of Missouri - Kansas City

 


Project: Performance optimization of multiscale QM/MM software for complex chemical systems

Description: Multiscale QM/MM (mixed quantum and molecular mechanical) computational methods are indispensable for advancing understanding and solving problems in the chemical sciences, ranging from drug design to artificial photosynthesis. This has been recognized with the award of the 2013 Nobel Prize in Chemistry for "the development of multiscale models of complex chemical systems". Two of the major codes that are used by thousands of research groups in academia and industry for QM/MM simulations, and which are deployed on XSEDE resources, are CHARMM and AMBER. Highly optimized versions of the classical (MM) parts of these codes are available, for example the recently developed CPU-based domain-decomposition code for MM simulations in CHARMM and the GPU port of the MM code PMEMD, which is part of AMBER. The QM codes, however, have been largely neglected during these extensive software optimization efforts and are now the bottleneck impeding scientific progress. We are involved in development efforts that aim to eliminate this bottleneck, among others in collaboration with researchers at the National Renewable Energy Laboratory who use both codes for projects relating to enzymatic biofuels production. In this project, which will be supervised by Dr. Andreas Goetz at the San Diego Supercomputer Center, the student will take a major role in this software optimization effort. The student will profile the QM codes for typical simulation workloads on a range of XSEDE target architectures, identify computational bottlenecks, and develop strategies to improve both single-thread performance and parallel scalability, targeting both modern Intel Xeon hardware and GPU accelerators. Unit and system tests will be developed for the optimized software components, and benchmarks will establish the achieved performance improvements on the available hardware platforms.

Required Skills:

1) Fortran or C/C++ programming experience required
2) Parallel programming experience (MPI, OpenMP) would be a benefit
3) GPU programming experience (libraries, CUDA) would be a benefit
4) Familiarity with computational chemistry would be beneficial

Category:

Science (Computational Science Applications)
Technology (Systems and Operations, Architecture, Software Development)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. This project will involve working onsite at the San Diego Supercomputer Center.

Education Level:

Junior/Senior
Fifth Year
Masters Candidate
PhD Candidate

You will be working independently.
You will be working at
San Diego Supercomputer Center, La Jolla, CA.

Supervisor: Andreas Goetz

College, University, or Research Institution: San Diego Supercomputer Center

 


Project: HPC Application Optimization and Performance Analysis

Description: Applications must now express parallelism to fully exploit hardware. Single-core optimizations (cache, memory, and vectorization), multicore parallelization, and distributed-memory parallelization must all be leveraged for maximum scalability. OpenFOAM (Open source Field Operation And Manipulation) is an open source software package that supports pre-processing, solving, and post-processing of continuum mechanics problems covering both solid and fluid mechanics. OpenFOAM is a large code base, with 97.8% of its 1.2 million lines of code written in C++. The student is not expected to work directly on the OpenFOAM source code. Rather, underlying OpenFOAM algorithms have been re-implemented in the literature for increased performance. Based on the student's interest, these refinements for cache optimization, vectorization, or MPI communication optimization will be implemented and tested on candidate HPC systems to determine their applicability to OpenFOAM. Alternatively, a student in a domain science like physics or chemistry may develop an optimization project around a code related to their own work.
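
To give a flavor of the kind of single-core refinement involved, here is a generic illustration (not OpenFOAM code): storing fields as structure-of-arrays so that an update loop runs at unit stride and is straightforward for the compiler to vectorize.

    // Generic illustration, not OpenFOAM code: a structure-of-arrays field
    // layout keeps each field contiguous, so the update below runs at unit
    // stride and vectorizes readily.
    #include <cstddef>
    #include <vector>

    struct FieldsSoA {
        std::vector<double> p, u, v, w; // one contiguous array per field
    };

    void relaxPressure(FieldsSoA& f, double alpha,
                       const std::vector<double>& pNew) {
        const std::size_t n = f.p.size();
        for (std::size_t i = 0; i < n; ++i)  // contiguous, vectorizable
            f.p[i] = (1.0 - alpha) * f.p[i] + alpha * pNew[i];
    }

An array-of-structs layout (one struct per cell holding p, u, v, w) would force strided loads in the same loop; measuring such differences on candidate HPC systems is exactly the kind of experiment this project involves.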

Required Skills:

1) C/C++ programming experience
2) Linux experience
3) Completion of a computer architecture course
4) Linear algebra

Category:

Science (Computational Science Applications)
Technology (Systems and Operations, Architecture, Software Development)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. Candidates must be proficient in Linux and C programming. Experience with parallel programming (MPI, OpenMP, Cilk) is a plus.

Education Level:

Junior/Senior
Fifth Year
Masters Candidate

You will be working independently. You will be working anywhere.

Supervisor: David Hudak

College, University, or Research Institution: Ohio Supercomputer Center

 


Project: The CIPRES Science Gateway: A Public Resource for Large Tree Inference

 

Description: The CIPRES Science Gateway is a highly successful web-based resource that provides about 3,000 biologists per year with access to community phylogenetics codes run on XSEDE HPC resources. The work supports approximately 300 publications per year and is used by approximately 70 instructors for curriculum delivery. We are seeking a highly motivated, skilled, team-oriented individual to contribute to the development of new features that will further increase the impact of the CIPRES Gateway as a mechanism for accessing XSEDE resources. The exact details of the project will be arranged with the individual involved to take advantage of their specific skills and interests. Some possible opportunities for the project include: 1) install, benchmark, test, and oversee/support the release of new phylogenetics codes for the community; 2) develop/evolve a management interface tool that allows CIPRES management to visually analyze usage patterns and probe issues that arise when jobs are run; 3) develop a tool to parse uploaded data sets, check their format, and use the information in the data set to configure runs of specific codes; 4) develop and test new user-requested features for the CIPRES browser interface. The project requires a strong background in Computer Science, with experience in parallel HPC code development and/or very strong Java programming skills. Experience with Biology concepts/vocabulary is a plus. The CIPRES project runs in a team environment, which requires both attention to detail and willingness to work creatively within the constraints of a stable but rapidly evolving software package.

Required Skills:

1) Required: Strong background in Computer Science
2) Required: Experience with parallel HPC code development and/or significant Java experience
3) Preferred: Familiarity with team coding practices/versioning tools
4) Preferred: Knowledge of molecular biology terminology/concepts

Category:

Science (Computational Science Applications)
Technology (Systems and Operations, Architecture, Software Development)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference.

Education Level:

Junior/Senior
Fifth Year
Masters Candidate

You will be working on a team. You will be working anywhere.

Supervisor: Mark Miller

College, University, or Research Institution: SDSC

 


Project: Framework for Filesystems Performance Testing

Description: Create a test framework that can be populated and extended with I/O benchmarks. The framework will manage the execution of benchmarks against SLASH2, a PSC-developed filesystem. Results will be cataloged in a database for reference over time to track the effects of software and hardware changes on the filesystem.
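
One plausible shape for such a framework, sketched below with an invented table layout, benchmark command, and paths: run a benchmark, parse one throughput figure from its output, and record it with a timestamp so runs can be compared over time (POSIX popen plus the SQLite C API).

    // Illustrative harness: run one I/O benchmark and log its result to
    // SQLite. The table layout, benchmark binary, its output format, and
    // the mount path are all stand-ins for whatever the framework registers.
    #include <sqlite3.h>
    #include <cstdio>
    #include <string>

    int main() {
        sqlite3* db = nullptr;
        sqlite3_open("slash2_bench.db", &db);
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS results ("
            "  ts DATETIME DEFAULT CURRENT_TIMESTAMP,"
            "  benchmark TEXT, mb_per_sec REAL);",
            nullptr, nullptr, nullptr);

        // popen is POSIX; assume the benchmark prints one MB/s number.
        FILE* bench = popen("./seq_write_bench /slash2/testdir", "r");
        double mbps = 0.0;
        if (bench && std::fscanf(bench, "%lf", &mbps) == 1) {
            std::string sql =
                "INSERT INTO results (benchmark, mb_per_sec) VALUES "
                "('seq_write', " + std::to_string(mbps) + ");";
            sqlite3_exec(db, sql.c_str(), nullptr, nullptr, nullptr);
        }
        if (bench) pclose(bench);
        sqlite3_close(db);
        return 0;
    }

Cataloging each run against a timestamp is what lets the framework chart regressions or improvements after software and hardware changes.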

Required Skills:

1) Unix
2) Unix filesystems
3) C programming
4) Programming in a scripting language such as python
5) Databases (e.g., MySQL)

Category:

Technology (Systems and Operations, Architecture, Software Development)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. The student will participate in an ongoing project with members of the facilities staff and developers of the filesystem software.

Education Level:

Freshman/Sophomore
Junior/Senior
Fifth Year

You will be working on a team. You will be working anywhere.

Supervisor: J Ray Scott

College, University, or Research Institution: Pittsburgh Supercomputing Center


Project: Development and Evaluation of Concurrent Search Trees on Xeon Phi

Description: With the growing prevalence of multi-core and many-core machines, concurrent data structures are becoming increasingly important. In a concurrent data structure, multiple processes may need to operate on the data structure at the same time. Contention between different processes must be managed in such a way that all operations complete correctly and leave the data structure in a valid state. The abstract data type "dictionary" is one of the most important structures in Computer Science. Dozens of different data structures have been proposed for implementing dictionaries, including hash tables, skip lists, and balanced/unbalanced binary search trees (BSTs). Recent research has produced encouraging results involving concurrent BSTs, especially lock-based and lock-free versions of BSTs. Nevertheless, most current "dictionary" implementations target general-purpose CPUs rather than accelerators. Modern supercomputers are getting a speed boost from special-purpose helper processors called accelerators, and this technology is becoming more and more popular. The Intel Xeon Phi is a new accelerator provided by Intel and is used by many supercomputers, including Stampede at the Texas Advanced Computing Center (TACC) and Tianhe-2, currently the most powerful supercomputer in the world. With this new architectural shift in mind and using TACC facilities, the student will help develop concurrent search trees and evaluate their performance on these new coprocessors/accelerators.
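
As a small taste of the topic, here is a minimal lock-free, insert-only BST sketch: an atomic compare-and-swap installs each new leaf, and a failed CAS simply means another thread won the race, so the traversal continues from the winner's node. Deletion and balancing, the genuinely hard parts, are deliberately omitted.

    // Minimal lock-free, insert-only binary search tree sketch.
    // Deletion and rebalancing (the hard research problems) are omitted.
    #include <atomic>

    struct Node {
        int key;
        std::atomic<Node*> left{nullptr}, right{nullptr};
        explicit Node(int k) : key(k) {}
    };

    // Returns false if the key is already present. Safe for concurrent
    // inserters: compare_exchange installs the leaf only if the child
    // slot is still empty; on failure we descend into the winner's node.
    bool insert(std::atomic<Node*>& root, int key) {
        Node* fresh = new Node(key);
        std::atomic<Node*>* slot = &root;
        while (true) {
            Node* cur = slot->load();
            if (cur == nullptr) {
                Node* expected = nullptr;
                if (slot->compare_exchange_strong(expected, fresh))
                    return true;    // we installed the new leaf
                cur = expected;     // lost the race; continue from winner
            }
            if (key == cur->key) { delete fresh; return false; }
            slot = (key < cur->key) ? &cur->left : &cur->right;
        }
    }

The evaluation question on the Xeon Phi is how such CAS-heavy structures scale across its dozens of cores and hundreds of hardware threads.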

Required Skills:

1) C/Fortran programming
2) Linux/Unix
3) Algorithm design
4) Parallel programming
5) Knowledge of the Xeon Phi

Category:

Science (Computational Science Applications)
Technology (Systems and Operations, Architecture, Software Development)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference.

Education Level:

Junior/Senior
Fifth Year
Masters Candidate
PhD Candidate

You will be working independently. You will be working anywhere.

Supervisor: Jerome Vienne

College, University, or Research Institution: Texas Advanced Computing Center (TACC)

 


Project: Performance analysis and optimization of scientific applications for Intel Xeon Phi clusters

 

Description: The intern will work with Purdue's Research Computing staff to analyze and optimize scientific applications developed and maintained by Purdue faculty and researchers. In particular, the student will concentrate on computational fluid dynamics and molecular dynamics codes currently used in the Physics, Aeronautics & Astronautics Engineering, and Biomedical Engineering departments. This internship will allow the student to gain experience in performance engineering and to work on optimizing applications to take advantage of the Intel Xeon Phi coprocessors currently available on Purdue's Conte cluster.

Required Skills:

1) Strong programming skills using C/C++ or Fortran
2) Familiarity with Linux required
3) Parallel programming experience (e.g. MPI, OpenMP, OpenCL, pthreads)
4) Experience with scientific software, particularly in high performance computing
5) Familiarity with performance analysis and optimization tools
6) Ability to work independently to meet project requirements and deadlines

Category:

Technology (Systems and Operations, Architecture, Software Development)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference.

Education Level:

Junior/Senior
Fifth Year
Masters Candidate

You will be working independently. You will be working anywhere.

Supervisor: Verónica Vergara

College, University, or Research Institution: Purdue University

 


Project: Security Analytics and Modeling

Description: The University of Arkansas at Pine Bluff (UAPB) Computer Science Cybersecurity Research Lab (CRL) currently has a funded research project from the Department of Defense (DoD), "Automatic Intrusion Detection and Response for Cyberinfrastructure-Oriented Environments (AIDR-COE)" (Contract# 59083CSREP), in collaboration with North Carolina Agricultural and Technical State University (NCAT). The research examines the problem of automatic intrusion detection in a cyberinfrastructure-oriented environment. The project employs techniques from bioinformatics and from the branch of artificial intelligence known as plan recognition to identify classes or types of intrusions based on alerts generated by Snort, the most commonly used open-source intrusion detection system (IDS). These techniques model the temporal and behavioral patterns of intruders in order to construct visual representations of current intrusions and to predict future attacks so that automatic responses can be deployed [15]. The project constructs tangible representations of intrusions using Boyd's Observe-Orient-Decide-Act rational reconstruction (OODA-RR) model, attack trees, and attack graphs [16]. The Department of Homeland Security's Protected Repository for the Defense of Infrastructure Against Cyber Threats (PREDICT) project provides unique, current, security-relevant data in the form of a repository containing intrusion events/signatures and the sources and destinations of intrusions, which are used to construct plan recognition maps of intrusions. Multistage attacks are computationally expensive to detect within large-scale systems such as power plants or cloud computing environments, since such attacks are characterized by an intruder's ability to progressively go unnoticed by systems such as Snort; consequently, multiple infrastructure components such as virtual machines and power systems can be compromised before a critical asset is attacked. This form of cyber analytics is computationally expensive because of the amount of data involved in the analytic process. Traditionally, cyber analytic tools operate in an offline, post-mortem mode, and the volume of data simply overtakes the analytic process. Processing, visualizing, and interacting with data sets containing billions of records per day introduces specific analysis challenges; beyond the sheer size, rendering times and occlusion complicate visual analysis. Parallelizing Security Analytics and Modeling: the project currently utilizes local HPC resources, including the Apollo cluster at UAPB, as well as resources provided by XSEDE. The student will help parallelize existing code in C++ and Java and modify Snort rules. This work will enhance the intrusion detection and analysis code for analytic workflows optimized for high-volume data streams. In addition, the parallelization will enable visualizations to be produced in hours rather than the roughly 24 hours now required.
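
As a generic illustration of the kind of parallelization involved (not the AIDR-COE code; the file name and record format are invented), the sketch below splits a Snort alert dump across threads and tallies alerts per signature before merging:

    // Generic illustration, not the AIDR-COE code: split a Snort alert dump
    // across threads and tally alerts per signature. The file name and the
    // "signature = first whitespace-delimited token" format are invented.
    #include <algorithm>
    #include <fstream>
    #include <map>
    #include <string>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<std::string> lines;
        std::ifstream in("alerts.log"); // hypothetical alert dump
        for (std::string l; std::getline(in, l); ) lines.push_back(l);

        const unsigned nt = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::map<std::string, long>> partial(nt);
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < nt; ++t)
            pool.emplace_back([&, t] {
                for (std::size_t i = t; i < lines.size(); i += nt) {
                    std::string sig = lines[i].substr(0, lines[i].find(' '));
                    ++partial[t][sig]; // each thread owns its own tally map
                }
            });
        for (auto& th : pool) th.join();

        std::map<std::string, long> total; // merge per-thread tallies
        for (const auto& m : partial)
            for (const auto& [sig, n] : m) total[sig] += n;
        return 0;
    }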

Required Skills:

1) C++
2) Java
3) Knowledge of intrusion detection engines such as Snort
4) MPI

Category:

Science (Computational Science Applications)
Data (Data Mining, Data Analytics)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference.

Education Level:

Junior/Senior

You will be working on a team.
You will be working at
University of Arkansas at Pine Bluff, Pine Bluff, AR.

Supervisor: Jessie J Walker

College, University, or Research Institution: University of Arkansas at Pine Bluff

 


Project: Diving into Heterogeneous Systems with Beacon

Description: The student will work with computational scientists at NICS on the Beacon cluster, which employs GPGPUs and Intel Xeon Phi coprocessors. The student will take a piece of their own code and port it to several architectures, thus learning about several aspects of HPC (power, hardware, coding practice) in a way that helps them truly have ownership of the concepts.

Required Skills:

1) Some programming experience (any language)
2) Desire to learn other languages and architectures
3) Ability to communicate effectively
4) Passion for their scientific application

Category:

Science (Computational Science Applications)
Technology (Systems and Operations, Architecture, Software Development)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. You will be supplied with a machine and access to all relevant systems at NICS.

Education Level:

Freshman/Sophomore
Junior/Senior
Fifth Year
Masters Candidate

You will be working on a team. You will be working anywhere.

Supervisor: Vincent Betro

College, University, or Research Institution: University of Tennessee

 


Project: Parallel Image Skeletonization Design & Implementation

Description: The scope of the project is to develop a parallel implementation of our popular Efficient 3D Binary Image Skeletonization code. Image skeletonization promises to be a powerful complexity-cutting tool for a wide range of applications, including compact shape description, pattern recognition, robot vision, animation, petrography pore-space fluid flow analysis, modeling/analysis of nerve/bone/lung/circulation/protein-meshwork, and image/data compression for telemedicine. An efficient distance-based procedure to generate the skeleton of large, complex 3D images such as CT and MRI data was designed using a 3D Voxel Coding (3DVC) algorithm based on the discrete Euclidean distance transform. Instead of the actual distance, each interior voxel (3D pixel) in the 3D image object is labeled with an integer code according to its relative distance from the object border, for computational efficiency. All center voxels, which are furthest from the object border as indicated by their largest integer codes, are then collected and thinned to form center skeleton clusters. To preserve the topology of the 3D image object, a sequential cluster-labeling heuristic is currently applied to order the skeleton clusters and to recursively connect the nearest skeleton clusters, gradually reducing the total number of disjoint clusters to generate one final connected skeleton for each 3D object. The algorithm provides a straightforward computation that is robust and not sensitive to noise or object boundary complexity. The extension to a vector/parallel version of 3DVC is expected to further enhance skeletonization speed, benefiting big image/data reduction in a wide range of applications.
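
The integer-coding step can be pictured as a multi-source breadth-first sweep inward from the object border. A simplified sketch (6-connectivity; the flat array layout and function names are hypothetical):

    // Simplified sketch of the voxel-coding step: label each object voxel
    // with its integer (6-connected) distance from the border via a
    // multi-source breadth-first sweep. Array layout is hypothetical.
    #include <array>
    #include <queue>
    #include <vector>

    // volume: 1 = object voxel, 0 = background, indexed as x + nx*(y + ny*z).
    std::vector<int> distanceCode(const std::vector<int>& volume,
                                  int nx, int ny, int nz) {
        auto at = [&](int x, int y, int z) { return x + nx * (y + ny * z); };
        const int dx[6] = {1, -1, 0, 0, 0, 0}, dy[6] = {0, 0, 1, -1, 0, 0},
                  dz[6] = {0, 0, 0, 0, 1, -1};
        std::vector<int> code(volume.size(), -1);
        std::queue<std::array<int, 3>> frontier;

        // Seed: object voxels touching background (or a volume face) get 1.
        for (int z = 0; z < nz; ++z)
          for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x) {
                if (!volume[at(x, y, z)]) continue;
                bool border = false;
                for (int d = 0; d < 6 && !border; ++d) {
                    int X = x + dx[d], Y = y + dy[d], Z = z + dz[d];
                    border = (X < 0 || X >= nx || Y < 0 || Y >= ny ||
                              Z < 0 || Z >= nz || volume[at(X, Y, Z)] == 0);
                }
                if (border) { code[at(x, y, z)] = 1; frontier.push({x, y, z}); }
            }
        // Sweep inward: every unlabeled object neighbor gets parent code + 1.
        while (!frontier.empty()) {
            auto [x, y, z] = frontier.front();
            frontier.pop();
            for (int d = 0; d < 6; ++d) {
                int X = x + dx[d], Y = y + dy[d], Z = z + dz[d];
                if (X < 0 || X >= nx || Y < 0 || Y >= ny || Z < 0 || Z >= nz)
                    continue;
                if (volume[at(X, Y, Z)] && code[at(X, Y, Z)] < 0) {
                    code[at(X, Y, Z)] = code[at(x, y, z)] + 1;
                    frontier.push({X, Y, Z});
                }
            }
        }
        return code; // largest codes mark center voxels (skeleton candidates)
    }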

Required Skills:

1) Passion for research
2) Parallel programming on any platform
3) Image processing experience (a plus)
4) Algorithm design
5) Problem solving
6) Performance optimization

Category:

Science (Computational Science Applications)
Data (Data Mining, Data Analytics)
Image/Data Reduction

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. Our IEEE CSB2005 paper describing the sequential algorithm: http://www.lifesciencessociety.org/CSB2005/PDF2/130_shihl_skeletonization.pdf

Education Level:

Junior/Senior
Fifth Year
Masters Candidate
PhD Candidate

You will be working independently. You will be working anywhere.

Supervisor: Liwen Shih

College, University, or Research Institution: U of Houston - Clear Lake

 


Project: Developer in XSEDE's Software Development and Integration group

 

Description: The XSEDE Software Development and Integration (SD&I) group provides an engineering process that enables software/service integration into XSEDE's research digital infrastructure. The student will help develop tools and Web interfaces to further automate the SD&I engineering process.

Required Skills:

1) PHP, Java/JSP/Javascript, Python, or Perl
2) HTML/CSS
3) Relational database/SQL
4) JIRA
5) SVN or other source control system
6) RPM packaging

Category:

Technology (Systems and Operations, Architecture, Software Development)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference.

Education Level:

Freshman/Sophomore
Junior/Senior
Fifth Year
Masters Candidate

You will be working independently. You will be working anywhere.

Supervisor: Shava Smallen

College, University, or Research Institution: San Diego Supercomputer Center

 


Project: Visual analytics for traffic demand analysis

Description: The student will work on a research project analyzing dynamic traffic modeling through simulations. The simulation process involves graph analysis, which can be computationally expensive and can produce large data sets for analysis when working with large metropolitan areas. The current serial analytic process could be improved with efficient parallel graph analysis methods that scale over large data sets. The student will focus on presenting and analyzing the simulation results for model comparison and on facilitating users' decision-making processes. The expected work includes implementing clustering and pattern recognition algorithms over simulation results. The student will also contribute to building an interactive information visualization application to present those results to users.
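
As one concrete possibility for the clustering step, here is a bare-bones k-means sketch over 2-D feature vectors (illustrative only, with a deliberately naive initialization, and written in C++ for brevity even though the project itself is Java-based):

    // Illustrative k-means over 2-D feature vectors extracted from
    // simulation results; the features and method are hypothetical.
    #include <array>
    #include <vector>

    using Point = std::array<double, 2>;

    static double dist2(const Point& a, const Point& b) {
        const double dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }

    std::vector<int> kmeans(const std::vector<Point>& pts, int k, int iters) {
        std::vector<Point> centers(pts.begin(), pts.begin() + k); // naive seed
        std::vector<int> label(pts.size(), 0);
        for (int it = 0; it < iters; ++it) {
            for (std::size_t i = 0; i < pts.size(); ++i) { // assignment step
                int best = 0;
                for (int c = 1; c < k; ++c)
                    if (dist2(pts[i], centers[c]) <
                        dist2(pts[i], centers[best]))
                        best = c;
                label[i] = best;
            }
            std::vector<Point> sum(k, {0.0, 0.0}); // update step
            std::vector<int> cnt(k, 0);
            for (std::size_t i = 0; i < pts.size(); ++i) {
                sum[label[i]][0] += pts[i][0];
                sum[label[i]][1] += pts[i][1];
                ++cnt[label[i]];
            }
            for (int c = 0; c < k; ++c)
                if (cnt[c]) centers[c] = {sum[c][0] / cnt[c],
                                          sum[c][1] / cnt[c]};
        }
        return label;
    }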

Required Skills:

1) Excellent programming skills in Java.
2) Knowledge of data mining and information visualization methods preferred.

Category:

Data (Data Mining, Data Analytics)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference.

Education Level:

Masters Candidate

You will be working independently.
You will be working at
Texas Advanced Computing Center, The University of Texas at Austin, Austin, TX.

Supervisor: Weijia Xu

College, University, or Research Institution: Texas Advanced Computing Center, The University of Texas at Austin

 


Project: A high-level framework for parallelizing legacy applications

Description: There is a pattern, or a set of steps, involved in the process of adapting legacy applications to take advantage of modern computing platforms. As an example, consider the process of adapting applications so that they can be executed in parts on an Intel® Xeon Phi coprocessor and in parts on the host (an Intel® Sandy Bridge processor). All programs that run partly on the host and partly on coprocessors should include the header file named "offload.h". The prototypes of functions and variables that are meant to be offloaded (or made available) on the coprocessor should carry additional declaration-specifiers like "__declspec(target(mic))" or "__attribute__((target(mic)))". The regions in the code that should be offloaded to the coprocessor should be annotated with an offload directive or pragma and should include appropriate specifiers like the target architecture and offload parameters. Such standard steps should be abstracted from the programmer and made repeatable across applications from diverse domains with the minimum possible involvement of the programmer. In doing so, the problem of many programmers implementing and optimizing similar functionality multiple times, within an application and across multiple applications, can be overcome. With this motivation, the main goal of this project is to develop abstractions for transforming legacy programs for accelerators and coprocessors in a user-guided manner. These abstractions (1) will exist in the form of optimized code templates, (2) will be made available to the wider community as a plug-in for the Eclipse Integrated Development Environment, and (3) will be woven into legacy applications through a source-to-source compiler. The intern will learn about the source-to-source compiler called ROSE and help in using it to achieve some of the goals of the project. Through this project, the intern will gain valuable skills in developing abstractions, and hence high-level tools, that can be of immense help to domain experts.
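
In practice, the hand-written boilerplate described above looks roughly like the following (Intel compiler offload syntax as named in the text; the function, data, and sizes are hypothetical). It is exactly this kind of repetitive annotation that the planned code templates would generate:

    // Sketch of the hand-written offload boilerplate that the project's
    // templates would generate (requires the Intel compiler's offload
    // support; the function, data, and sizes are hypothetical).
    #include <offload.h> // header for programs mixing host/coprocessor code

    // Make the function available on the coprocessor as well as the host.
    __attribute__((target(mic)))
    void scale(double* data, int n, double factor) {
        for (int i = 0; i < n; ++i) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        double* data = new double[n];
        for (int i = 0; i < n; ++i) data[i] = i;

        // Offload to coprocessor 0, copying `data` in before the region
        // runs and back out afterwards.
        #pragma offload target(mic:0) inout(data : length(n))
        scale(data, n, 2.0);

        delete[] data;
        return 0;
    }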

Required Skills:

1) C/C++/Fortran
2) Java
3) Linux/Unix
4) Compilers
5) Parallel Programming

Category:

Science (Computational Science Applications)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference.

Education Level:

Fifth Year
Masters Candidate PhD Candidate

You will be working independently. You will be working anywhere.

Supervisor: Ritu Arora

College, University, or Research Institution: Texas Advanced Computing Center, The University of Texas at Austin

 


Project: Quantum transport simulation with inelastic exciton-phonon and exciton-plasmon scattering effects in graphene and carbon nanodevices

 

Description: The reliable electrical and magnetic manipulation of information and energy in nanostructures at the 1- to 100-nm scale is crucial for the development of novel nanoelectronic devices. Such manipulation must be highly precise, which requires understanding phenomena such as Schottky barriers at metallic contacts, dissipation, material defects, and contact with mesoscopic substrates. Carbon-based nanostructures are receiving increased attention as materials ideally suited for inexpensive, next-generation nanoelectronics. This attention is based in part on the belief that, due to their highly ordered crystalline structures, these materials will transport electrons ballistically, i.e., without scattering and therefore with resistance independent of transport length. Electronic correlation and exchange effects on quantum transport, including exciton-phonon and exciton-plasmon coupling, can be described effectively in a tight-binding transport description. It is the intent of this investigation to develop multi-scale analysis tools that (1) rigorously quantify these interactions and (2) yield open-source tools, available to the research community, that allow general geometries to be analyzed for inelastic effects in charge transport. Transport observables such as transmissivities, integrated currents, and magnetoresistance may be calculated via a non-equilibrium Green's function method for realistic system sizes of 10,000 atoms and more.
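
For background, the standard Landauer-type NEGF expressions at the heart of such a calculation are (general textbook forms, not a specification of this group's particular implementation):

    G^r(E)       = [(E + i0^+) I - H - \Sigma_L(E) - \Sigma_R(E)]^{-1}
    \Gamma_{L,R} = i [\Sigma_{L,R} - \Sigma_{L,R}^\dagger]
    T(E)         = \mathrm{Tr}[\Gamma_L \, G^r \, \Gamma_R \, (G^r)^\dagger]
    I            = (2e/h) \int T(E) [f_L(E) - f_R(E)] \, dE

Here H is the tight-binding device Hamiltonian, \Sigma_{L,R} are the contact self-energies (inelastic exciton-phonon or exciton-plasmon scattering enters through additional self-energy terms), and f_{L,R} are the contact Fermi functions. Inverting the Green's function matrix at many energies for systems of 10,000+ atoms dominates the computational cost, which is what motivates the use of HPC resources.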

Required Skills:

1) C and/or C++ programming required
2) MPI preferred
3) Physics (quantum theory) preferred
4) Eclipse software useful but not necessary
5) GPU programming useful but not necessary
6) Parallel profiling and optimization tools useful but not necessary

Category:

Science (Computational Science Applications)

Additional Information: Travel support will be provided for you to attend an orientation meeting at the beginning of the project, and to participate in the XSEDE14 conference. The project will be conducted either with the faculty member (M. Jack) at the home institution, remotely on NSF XSEDE resources (TACC Stampede), or, given additional support from the Department of Energy, at Oak Ridge National Laboratory on the Titan or NICS Beacon clusters, working as a student/faculty team.

Education Level:

Junior/Senior
Fifth Year
Masters Candidate
PhD Candidate

You will be working independently. You will be working anywhere.

Supervisor: Mark A. Jack

College, University, or Research Institution: Florida A&M University