- How do I get an account on Longleaf (or Dogwood)?
- What are the details of the new filesystem location(s)?
- Why aren't `/netscr` and `/lustre` present on Longleaf?
- What is the queue structure on Longleaf?
- How do I transfer data between Research Computing clusters?
- How do I transfer data onto a Research Computing cluster?
Visit the Onyen Services page at https://improv.itsapps.unc.edu/#ServiceSubscriptionPlace:, click the Subscribe to Services button, and select Longleaf (or Dogwood).
For more information on Longleaf, see: https://its.unc.edu/research-computing/longleaf-cluster/.
For more information on Dogwood, see: https://its.unc.edu/research-computing/dogwood-cluster/.
The `/lustre` filesystem is available only via the Infiniband fabric that Killdevil used. Longleaf and Dogwood nodes do not connect to that fabric, so `/lustre` is not present on them.
Net-scratch (`/netscr`) is not present on Longleaf and Dogwood for performance reasons:
- Running jobs on the research cluster nodes against `/netscr` would add a workload that `/netscr` cannot sustain, severely degrading performance for everyone.
- The `/pine` filesystem is purpose-built for I/O and balanced for our research clusters; it may take some effort to move your files to a filesystem present on the clusters, but your results will be vastly better.
- The quotas on `/pine` are higher, so you have more space to work with.
The queue systems are managed through SLURM partitions, which vary by research cluster.
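Once logged in to a cluster, the standard SLURM commands below show the partition structure on that particular cluster. This is a sketch using stock SLURM options; the commands are echoed rather than executed so their form is visible even off-cluster, and the partition names you see will differ by cluster.

```shell
# Sketch: standard SLURM commands for inspecting the queue structure.
# Run these on a cluster login node; here they are echoed, not executed.
PARTITIONS_CMD='sinfo -o "%P %l %D"'   # partition name, time limit, node count
JOBS_CMD='squeue -u $USER'             # your queued and running jobs
echo "$PARTITIONS_CMD"
echo "$JOBS_CMD"
```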
- To transfer data off of the retired Killdevil cluster, see Killdevil Retirement.
- To copy files to or from mass storage, either submit a `cp` command as a SLURM job or use Globus.
- To copy a very large file, or thousands of small files, use Globus.
- To copy a medium-sized file, do not connect to longleaf.unc.edu (or dogwood.unc.edu); instead, connect to one of our data mover nodes and use the `cp` command. There are four data mover nodes: `rc-dm1.its.unc.edu`, `rc-dm2.its.unc.edu`, `rc-dm3.its.unc.edu`, and `rc-dm4.its.unc.edu`. Connecting to the general host address `rc-dm.its.unc.edu` will connect you to the least busy of the four, which generally results in the best performance.
- To copy small files to or from anywhere other than mass storage, use the `cp` command from a login node.
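For the mass-storage case above, the SLURM route means wrapping the `cp` in a batch script and submitting it with `sbatch`. The sketch below writes such a script; the job name, time limit, and both paths are placeholders, not verified locations on our systems.

```shell
#!/bin/sh
# Sketch: generate a minimal SLURM batch script that runs a cp command.
# All paths and resource values below are illustrative placeholders.
cat > copy_job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=copy_to_ms
#SBATCH --ntasks=1
#SBATCH --time=02:00:00
# Copy results from scratch to mass storage (placeholder paths):
cp -r /pine/scr/m/y/myonyen/results ~/ms/results
EOF
# On a cluster login node you would then submit it:
#   sbatch copy_job.sbatch
echo "wrote copy_job.sbatch"
```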
For transfers from your desktop or home computer, or another computer external to Research Computing, to one of the Research Computing clusters, there are several methods:
- Globus Online: https://help.unc.edu/help/globus-connect-file-transfer/. To get started, see the Getting Started page at https://help.unc.edu/help/getting-started-with-globus-connect/.
- In addition to Globus Online, a number of SFTP (Secure File Transfer Protocol) tools are available for medium or small file transfers on both Mac and Windows platforms. (Large file transfers should be done with Globus.)
- SSH Secure Shell (with SFTP) is available for Windows from UNC Software Acquisition Shareware: http://software.sites.unc.edu/shareware/#s.
- CyberDuck (https://cyberduck.io/) is available for both Mac and Windows platforms.
- CoreFTP (http://coreftp.com/) is another possibility for Windows platforms.
- FileZilla (https://filezilla-project.org/) is also an option for Mac, Windows, and Linux platforms.
For the SFTP tools, although it is possible to connect directly to the cluster login nodes, they are a shared resource, so it is preferred that you use one of our specialized data mover nodes.
There are four data mover nodes: `rc-dm1.its.unc.edu`, `rc-dm2.its.unc.edu`, `rc-dm3.its.unc.edu`, and `rc-dm4.its.unc.edu`.
Connecting with the host address `rc-dm.its.unc.edu` will connect you to the least busy of the four, which generally results in the best performance.
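As a sketch of the command-line route through the data mover pool, assuming standard `scp`: the Onyen and the destination path below are placeholders you must replace (the `/pine/scr/...` layout is an assumption, not a verified path). The command is built and echoed; uncomment the last line to actually run it from your own machine.

```shell
#!/bin/sh
# Sketch: build an scp command targeting the least-busy data mover node.
# ONYEN and DEST are placeholders; DEST is an assumed /pine scratch path.
ONYEN="myonyen"
DEST="/pine/scr/m/y/myonyen"
CMD="scp -r ./results ${ONYEN}@rc-dm.its.unc.edu:${DEST}/"
echo "$CMD"
# Uncomment to actually run the transfer:
# $CMD
```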