
Acceptable Use Guidelines


The Palmetto Cluster is a shared resource used by a diverse set of faculty, staff, students, and other researchers. Each user is expected to be a good steward of the cluster by promoting and practicing efficient and responsible use of the resources. An important part of being a good steward is understanding the resources available by attending an onboarding session and reviewing the relevant cluster documentation.


When users are found to be violating these community guidelines, we will contact them and/or their account sponsor. Multiple violations may result in the suspension of their Palmetto account.

The guiding principle behind these guidelines is that inappropriate use reduces the availability and performance of the Palmetto Cluster. This negatively impacts your fellow researchers and is also a violation of section 3.7.3 of the Acceptable Use of Information Technology Resources Policy.

Acceptable Use

All users of RCD resources must abide by Clemson University IT Policies and Standards, including but not limited to the Acceptable Use of Information Technology Resources Policy.

In addition, the use of the Palmetto Cluster is limited to research and education use. Non-academic personal and commercial use is prohibited.

Login Nodes

When connecting to the cluster through SSH, all users land on a small set of login nodes. These nodes are for job preparation and submission, file editing, and monitoring of jobs. Using the login nodes for pre-/post-processing, compilation, or other resource-intensive tasks is prohibited because it degrades performance for everyone; such processes will be terminated without notice. Use compute nodes for these tasks instead. Compute nodes can be accessed interactively through interactive jobs.
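As an illustration, here is one way an interactive session on a compute node might be requested, assuming the cluster uses the Slurm scheduler; the resource values below are placeholders, not a recommendation, so consult the cluster documentation for appropriate values:

```shell
# Request an interactive allocation on a compute node instead of
# running work on a login node. All values are illustrative
# placeholders -- adjust to what your job actually needs.
salloc --ntasks=1 --cpus-per-task=4 --mem=8G --time=02:00:00
```

`salloc` blocks until the allocation is granted and then gives you a shell tied to the allocated resources; type `exit` when you are done so the resources are returned to the cluster.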

Job Efficiency

The resources within Palmetto are finite. This is especially true of our newer and more powerful resources (e.g., GPUs). It is critical that users do not request resources that will go unused; doing so hurts all users by increasing wait times across the cluster. You must abide by the following guidelines:

  1. Do not request resources your job cannot use. For example, do not request GPUs if your job can only make use of CPUs, and do not request many cores if your job will only use a single thread. It is important that users of the cluster understand and monitor their workload in order to request only the resources that are needed. The RCD team can help guide you through this process during office hours.

  2. Scale up your job incrementally and monitor it to ensure it is using the additional resources. The first time you run an application, start with modest resource requests (rather than requesting a large number of GPUs, cores, or nodes) and monitor your job. Only increase your resource requests once you have verified you are making effective use of the resources.

  3. Ensure interactive jobs are not left idle. If you run an interactive job (either via the terminal or via Open OnDemand), you should not let it sit idle. If your job is complete, or you need to take a break, please end your job to release the resources back to the cluster for other users.

  4. Reserving shared resources is prohibited. Do not begin a job for the purposes of reserving resources for future use. If you do need priority access to begin jobs, consider purchasing nodes. Owner queues have preemption rights on their nodes.

Jobs that do not abide by these guidelines may be terminated. If this happens to your job, please adjust the resource request before resubmitting. Feel free to reach out to us for help with this process.
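One way to check whether a finished job actually used what it requested, assuming a Slurm-based scheduler with the common `seff` and `sacct` utilities available (the job ID below is a placeholder):

```shell
# Compare requested vs. consumed resources for a completed job.
# 1234567 is a placeholder job ID -- substitute your own.
seff 1234567
sacct -j 1234567 --format=JobID,ReqCPUS,TotalCPU,ReqMem,MaxRSS,Elapsed
```

If the reported CPU or memory efficiency is low, reduce the corresponding request before the next run.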


Scratch Storage

The scratch file systems must be used in the way they are intended: as temporary storage space. Data in scratch is not backed up, and it is periodically purged. Long-term storage is not allowed, and attempting to circumvent the purging process is prohibited. When system administrators discover instances of circumvention, files will be purged without notice.

Instead, keep long-term files in your home directory, in project space on Indigo, or outside the cluster. If you have any concerns, please reach out to us.
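To see which of your scratch files are likely purge candidates, a simple `find` by modification time works. The `/scratch/$USER` path and the 30-day window below are assumptions for illustration; check the documented purge policy for the actual path and retention period:

```shell
# List scratch files not modified in the last 30 days -- likely purge
# candidates. Path and age threshold are placeholders for this
# cluster's actual purge policy.
SCRATCH_DIR="${SCRATCH_DIR:-/scratch/$USER}"
find "$SCRATCH_DIR" -type f -mtime +30 -print
```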