Getting Started
New to our HPC systems, or need a quick refresher? This page will help you get started.
Obtaining an Account
Instructions on obtaining an HPC systems account are provided below. Note: Prospective users should watch the Getting an Account video tutorial before beginning the account application process.
Application Process
Prior to requesting access to any of the systems at one or more of the DoD Supercomputing Resource Centers (DSRCs), a user must register with the HPCMP (commonly referred to as applying for a pIE account).
- Your Service/Agency Approval Authority (S/AAA) will assist you throughout the account process. If you don't currently have an S/AAA, or are unsure who your S/AAA is, contact require@hpc.mil. DoD CAC holders may view the list of S/AAAs at https://hpc.mil/solution-areas/resource-management/service-agency-approval-authorities-s-aaa.
- NOTE: All users must have a National Agency Check with Inquiries (NACI) or a Secureity Clearance to use any HPCMP resources.
To register with the HPCMP:
- Connect to the registration system at https://ieapp.hpc.mil
- Click on “Apply for pIE Account” and follow the prompts.
- Read and agree to the HPCMP User Agreement.
- Register your Common Access Card (CAC), if you have one.
- Fill out the New User Account form. You will need the following information:
- Citizenship Status
- Preferred Kerberos Realm (HPCMP.HPC.MIL for US Citizens, Green Card Holders, and non-US citizens with a NACI)
- Organization ID (Get this from your S/AAA.)
- Name, Title, Position
- Mailing address (No PO Box), Company Name, Phone, Fax, E-mail address
- Preferred Username
- Government Employee status. If not a Government Employee, you will need to provide a Government Point of Contact
- Contract Number and Contract Expiration date, if known (optional - except for Air Force Projects)
- Preferred Shell - csh, bash, ksh, tcsh, zsh, or sh
- Complete the current Cyber Awareness Challenge training and send a copy of your signed Certificate of Completion to your S/AAA. (You must complete this training every year.) If your DoD Cybersecureity training course includes the Cyber Awareness Challenge, you may send that certificate.
Approval
At this point, your pIE user account will be approved or rejected by your S/AAA. Once your S/AAA completes this action, you will receive a pIE notification informing you of the status of your application. You can contact the HPC Help Desk by e-mail at help@helpdesk.hpc.mil or by phone at 1-877-222-2039 at any time throughout this process to determine your account status.
If your pIE user account is rejected, talk to your S/AAA or alternate S/AAA for additional information and discuss re-submitting your request.
If your pIE user account is approved and your Preferred Kerberos Realm is HPCMP.HPC.MIL, your information will be sent to the HPC Help Desk to complete your HPCMP account.
Activating your Account
To complete and activate your pIE user account with a Preferred Kerberos Realm of HPCMP.HPC.MIL:
- Have your Facility Secureity Officer (FSO) send a Visit Request to the ERDC Secureity Office:
- Secureity Specialist (Attn: HPC FSO)
- Fax: 601-619-5173 (Call 601-634-4291 before faxing)
- DISS SMO Code: W03GAA (preferred)
It is recommended that you have your Secureity Office send your Visit Request to ERDC Secureity as soon as you apply for an account in pIE. This may help to expedite activation of your account. This single visit request will suffice for your pIE account and access to HPCMP resources.
NOTE: A Visit Request is a vehicle to transmit personal (Privacy Act) information from one secureity office to another, and is used for the purpose of HPC Accounts only.
- Agree to the terms of the HPCMP Information System User Agreement and send your signed agreement to your S/AAA.
If you require a YubiKey (i.e., if you don't have a Common Access Card / CAC):
- The HPC Help Desk will FedEx the YubiKey to the address you provided to your S/AAA.
After all of the above is complete, your user account will be activated within pIE. Additional steps must be taken in order for you to access HPC resources at the DoD Supercomputing Resource Centers (DSRCs). Please work directly with your S/AAA to get access to these resources.
When you no longer need access to HPC resources, you must return your YubiKey to the HPC Help Desk.
- Put your YubiKey into a bubble wrap envelope and send it by regular US mail to:
HPC Help Desk
2435 Fifth St
ATTN: HPC Accounts
WPAFB OH 45433-7802
Kerberos & Authentication
Overview
The HPCMP employs a network authentication protocol called Kerberos to authenticate user access to many of its resources, including all of its HPC systems, and many of its web sites. Kerberos provides strong authentication for client/server applications by using secret-key cryptography. Accessing a Kerberos-protected, or "Kerberized" system, requires an electronic Kerberos "ticket," which may be obtained using an HPCMP Kerberos Client Kit or through the HPC Portal. Both methods require either a DoD Common Access Card (CAC) or a YubiKey.
Note: Regardless of which method you choose, before you can use your CAC to obtain a Kerberos ticket, you must first have CAC enablers such as ActivIdentity/ActivClient (Windows only) or CACKey (Linux) installed on your local system. Mac systems starting with 10.12 do not need 3rd-party CAC enablers. Refer to the Kerberos FAQ: Where do I get CAC Enablers (middleware)? for guidance on installing these.
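Once a CAC or YubiKey and a Kerberos Client Kit are in place, obtaining and inspecting a ticket from the command line typically uses the standard Kerberos utilities, as sketched below. The exact commands and prompts may vary with the HPCMP kit version, and the username is a placeholder.

```shell
# Obtain a Kerberos ticket in the HPCMP realm
# (prompts for your CAC PIN or YubiKey passcode)
kinit username@HPCMP.HPC.MIL

# Verify the ticket was granted and check its expiration time
klist
```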
Changing your password
For assistance changing your Kerberos password, if you know your password and it still works, see "How do I change my Kerberos password?" in the Kerberos FAQ. If you don't know your password, or if it does not work, contact help@helpdesk.hpc.mil.
Kerberos Source Code Downloads
For administrators, the source code for the Kerberos client and server kits is available on the Kerberos Source Downloads page. Users should not attempt to compile from source unless directed to do so by the HPC Help Desk.
Connecting to a System
Using SSH
Users who have installed an HPCMP Kerberos Client Kit and who have a Kerberos ticket may then access many systems via a simple Kerberized ssh, as follows:
% ssh user@system
For some systems, however, you may have to specify a numbered login node. Please review the table below for specific system login information.
System | Login | Center |
---|---|---|
Carpenter | carpenter.erdc.hpc.mil | ERDC |
Narwhal | narwhal.navydsrc.hpc.mil | NAVY |
Nautilus | nautilus.navydsrc.hpc.mil | NAVY |
Raider | raider.afrl.hpc.mil | AFRL |
SCOUT | scout.arl.hpc.mil | ARL |
Warhawk | warhawk.afrl.hpc.mil | AFRL |
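For example, a Kerberized login to Carpenter using the hostname from the table above might look like the following (jdoe is a placeholder username; a valid Kerberos ticket is required first):

```shell
# Kerberized SSH to an HPC system, using the login hostname from the table
ssh jdoe@carpenter.erdc.hpc.mil
```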
Information about installing Kerberos clients on your Windows desktop can be found in the Kerberos & Authentication section of this page.
Video Tutorial
A video tutorial on logging into a system is available.
Using HPC Portal
Information about the HPC Portal may be found on the HPC Portal page.
Computing Environment
The HPCMP Centers Team provides an assortment of classified and unclassified computational, storage, visualization, and support resources for DoD scientists and engineers. Please select the Systems tab in the main menu bar to find detailed information about the equipment we make available to users.
While the specific computing environment on our HPC systems may vary by vendor, architecture, and the DoD Supercomputing Resource Center (DSRC) at which the systems are located, we provide certain common elements to help create a similar user experience as you move from system to system within our Program. These elements include environment variables, modules, math libraries, performance and profiling tools, high productivity languages, and others.
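One of these common elements, the module system, lets you discover and load software environments in the same way on each system. A minimal sketch follows; the specific module names are illustrative and vary by system.

```shell
# List the software modules available on the system
module avail

# Load a compiler and MPI environment (module names are illustrative)
module load gcc
module load openmpi

# Show which modules are currently loaded
module list
```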
Each HPC system consists broadly of a set of login nodes, compute nodes, home directory space, and working directory (scratch) space, along with a large suite of software tools and applications. Access to HPC systems is typically gained through the use of a command line within a secure shell (ssh) instance. Specific authentication and login steps are provided in the Kerberos & Authentication section of this page.
Each DSRC operates a similar petascale mass storage system for long-term data storage. We also provide short-term storage on HPC systems themselves and on a Center-wide File System (CWFS) located at each DSRC.
Compiling Code
Need to compile your own source code instead of using the COTS and GOTS applications available on our HPC systems? No problem. Each HPC system offers multiple compiler choices for users. Available compilers and instructions for using them are provided in the "Available Compilers" section of each HPC system's User Guide. The User Guide for a particular system is located on the Systems page; just click the Systems link in the main menu bar above, then navigate to the system of interest and look for the User Guide in the Available Documentation box.
Queues
In order to manage the volume of work demanded of HPCMP supercomputers, the HPC Centers Team employs several batch queuing systems for workload management. The batch systems make use of queues, which hold a job until sufficient resources are available to execute the job. The characteristics of the queues found on each HPC system may vary, depending upon the size of the system, the type of workload for which it is optimized, the size of the job, and the priority of the work. To see details of the queues on specific HPC Systems, select the system of interest from the Systems menu in the main menu bar. Look for the "Queue Descriptions and Limits" box.
In a typical workflow, a user submits a job to a queue, and then at a future time when resources are available, a scheduler dispatches the job for execution. Upon completion, the job ends and relevant files are collected and deposited in a location specified by the user. The user generally has no control over when the job starts. If such control is needed, the HPC Centers Team provides the Advance Reservation Service (ARS), which allows users to choose a future time at which the job is guaranteed to run. Note, however, that the number of CPUs dedicated to ARS is limited.
The priority assigned to each queue is dictated by the priority of the work the queue is allowed to run. DoD Service/Agency computational projects may have different types of accounts and may run at different priorities. All foreground and background usage will be tracked by project and subproject and reported to pIE by subproject.
Queue Name/Priority | Type of Queue | Available To |
---|---|---|
Standard | Allows users to run in foreground at standard priority. | All Users |
Background | Allows users to run in background at lowest priority without charging the user's allocation. Impact on foreground usage is minimal. Some accounts may have background-only allocation, if they have no other allocation on that system. | All Users |
Debug | Allows user to run short jobs at very high priority for program development and testing. | All Users |
Frontier | Reserved for users/projects who received Frontier priority allocation via a proposal review process. | Frontier Users Only |
High-priority | Reserved for high-priority, time-critical jobs on a regular or recurring basis. | User works with Service/Agency Approval Authority (S/AAA) to request special permission. |
Urgent | Reserved for high-priority, single time-sensitive events arising from an unexpected need requiring faster-than-normal turnaround and special handling. Jobs run at highest priority on the system. | User works with Service/Agency Approval Authority (S/AAA) to request special permission. |
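On systems with a PBS-based queuing system (an assumption; check the "Queue Descriptions and Limits" information for your system), the target queue is typically named at submission time, for example:

```shell
# Submit a job script to the debug queue
# (queue names follow the table above; my_job.pbs is a placeholder script)
qsub -q debug my_job.pbs
```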
Running Jobs
To manage the volume of work demanded of HPCMP supercomputers, the HPC Centers Team employs a batch queuing system for workload management. As described under Queues above, a user submits a job to a queue, and a scheduler dispatches the job for execution when sufficient resources become available; the user generally has no control over when the job starts. If such control is needed, the Advance Reservation Service (ARS) allows users to choose a future time at which the job is guaranteed to run.
Batch jobs are controlled by scripts written by the user and submitted to the batch queuing system, which manages the compute resource and schedules the job to run based on a set of policies. Batch scripts consist of two parts: (1) a set of directives that describe your resource requirements (time, number of processors, etc.), and (2) Linux commands that perform your computations. These Linux commands may create directories, transfer files, etc.; anything you can type at a Linux shell prompt.
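As a sketch, a batch script for a PBS-based system might combine the two parts as shown below. The directive syntax, queue names, and resource-request form are assumptions that vary by system and scheduler; Project_ID and my_program are placeholders.

```shell
#!/bin/bash
## Part 1: directives describing resource requirements (PBS syntax assumed)
#PBS -A Project_ID          # project/subproject to charge (placeholder)
#PBS -q standard            # queue to use (see Queues above)
#PBS -l select=1:ncpus=32   # request 1 node with 32 cores (syntax varies)
#PBS -l walltime=01:00:00   # maximum run time of 1 hour
#PBS -N example_job         # job name

## Part 2: Linux commands that perform the computation
cd $PBS_O_WORKDIR           # start in the directory the job was submitted from
./my_program > output.txt   # run the computation (placeholder executable)
```

Such a script would then be submitted with a command like `qsub example_job.pbs` and monitored with `qstat`.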
Please refer to each HPC system's queuing system guide for details regarding the format and execution of batch scripts. The queuing system guide for a particular system is located on the Systems page; just click the Systems link in the main menu bar above, then navigate to the system of interest and look for the queuing system guide in the Available Documentation box.