Research IT hosted services
Please see our Cluster Usage documentation for general instructions on using our clusters.
We maintain documentation specific to some of the clusters and other systems we host here.
TCIN
Research IT host some systems for the Trinity College Institute of Neuroscience (TCIN).
Getting an account
First of all, make sure you have applied for your Research IT account: apply here.
MRI/EEG Booking System
In order to gain access to the MRI/EEG booking system:
Log in to the TCIN Booking System with your Research IT account:
https://tcin-bookingsystem.tchpc.tcd.ie/Web/
Please note the TCIN Booking System is currently only available from certain parts of the College network, the TCIN desktop network and the College VPN.
Then send an email to Sojo Joseph, the MRI Radiographer, requesting access rights to make bookings.
Request access to the MRI Archives
In order to gain access to the MRI Archived data please send an email to neuro@tchpc.tcd.ie with the following details:
- Your Research IT username (see above for applying if you don't have one).
- The name and email address of the PI in whose group you work.
- A short description of the project you intend to undertake and its expected duration.
- Which scanner data you require access to. If you do not know which one, please ask the TCIN radiographer before making the request:
  - Philips 3T
  - Bruker 7T
  - Siemens DICOM (the default)
  - Siemens RAW
Access the MRI Archives
To transfer files from the MRI Archive please use the SSH protocol (you will need an SSH file transfer client such as WinSCP) to connect to the host mri-archives.tchpc.tcd.ie.
Please see our file transfer instructions for more details on how to do so.
| Scanner | Mount point on mri-archives.tchpc.tcd.ie |
| --- | --- |
| Philips 3T | /mnt/tcin-philips/philips/ |
| Bruker 7T | /mnt/tcin-bruker/bruker/ |
| Siemens DICOM | /mnt/siemens-dicom/ |
| Siemens RAW | /mnt/siemens-raw/ |
The mri-archives.tchpc.tcd.ie host is only accessible from the College network, not the internet.
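As an illustrative sketch of a command-line transfer from a Linux or MacOS terminal (username and the study path below are placeholders, replace them with your own details):
# Fetch a directory of Siemens DICOM data from the archive to your local machine
scp -r username@mri-archives.tchpc.tcd.ie:/mnt/siemens-dicom/path/to/study ./study-copy/
# Or browse and download interactively with sftp
sftp username@mri-archives.tchpc.tcd.ie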
TCIN HPC Cluster
Service note: As of November 2022 the previous TCIN 8 node, 32 core mini cluster has been retired (it had been accessible via tcin-n01.cluster).
TCIN have access to a small dedicated HPC cluster with the following characteristics:
- 1 head node with ~12TB of shared storage
- 2 compute nodes, each with 32 CPU cores (2 x 16-core Intel Xeon Silver 4314 CPUs @ 2.40GHz), ~250GB of RAM and ~2TB of local scratch disk; each compute node mounts the shared storage from the head node
- 10GbE networking to be commissioned in the future.
- Ubuntu 20.04 LTS Operating System
To access it you must have a Research IT account; please apply for one if you don't have one.
To request access to the cluster please email ops@tchpc.tcd.ie.
To log in please connect to neuro01.tchpc.tcd.ie using the usual SSH instructions.
It is accessible from the College network, including the VPN. To connect to it from the internet please first log in to the College VPN or relay through rsync.tchpc.tcd.ie as per our instructions.
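For example (a sketch only; replace username with your own Research IT username, and note that OpenSSH's ProxyJump option is just one way to relay):
# From the College network or VPN:
ssh username@neuro01.tchpc.tcd.ie
# From the internet, relaying through rsync.tchpc.tcd.ie:
ssh -J username@rsync.tchpc.tcd.ie username@neuro01.tchpc.tcd.ie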
File systems:
- /home is a standalone file system, ~12TB in size, that is shared from the head node to all compute nodes. It is separate from the existing file system the Kelvin and TCIN clusters have used; you will have to copy data onto the neuro cluster.
- Each node has a local /tmp scratch disk that is only accessible on that node while your jobs are allocated to it. It is not shared with other nodes, and you will not be able to access data on those disks once your allocated jobs have finished.
- The siemens-dicom, siemens-raw, tcin-bruker and tcin-philips MRI archive shares are available on the head node in the /mnt directory, e.g. /mnt/siemens-dicom. They are not accessible from the compute nodes. If you want to process data from one of those shares you will have to first copy it from the share to /home on the head node, process the data in the compute queue, then copy any results back.
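As a sketch of that staging step on the head node (the study path is a placeholder for your own data):
# Stage a copy of the archive data into your home directory
mkdir -p ~/study-data
cp -r /mnt/siemens-dicom/path/to/study ~/study-data/
You can then process ~/study-data via the compute queue and copy any results you need to keep back afterwards.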
Software is installed with our usual modules system.
You can view the available software with module av, and an example of loading a module is: module load apps afni. You may need to load the apps modules list first, with the module load apps command, to see all available packages.
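For example, a typical session might look like this (afni is just one example package; load whichever modules your workflow needs):
module load apps    # make the full apps module tree visible
module av           # list all available packages
module load afni    # load an individual package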
Running jobs must be done via the Slurm scheduler.
Batch job example:
#!/bin/bash
#SBATCH -n 12          # request 12 CPU cores
#SBATCH --mem=96GB     # request 96GB of memory
# load the software modules the job needs
module load apps afni ANTs eigen freesurfer fsl
echo "Starting"
# run a compiled executable
./exe.x
# run a MATLAB script non-interactively
matlab -nojvm -nodesktop -nosplash < simple.m
Interactive allocation equivalent: salloc -n 12 --mem=96GB
Please see our udocker instructions for the recommended way to run containers on our clusters.
See the HPC clusters usage documentation for further instructions.
Using VNC on the neuro HPC cluster
Notes:
- VNC is only available on the head node, it is not possible to get VNC to any of the compute nodes.
- VNC is only available over SSH forwarding, it is not possible to connect a VNC viewer client on your computer to neuro01 for security reasons.
- These notes assume you are using Linux or MacOS.
Prerequisites, needed on neuro01 before VNC will work:
- Determine your user id (you will need to reference this later):
id -u
- Set a VNC password:
tigervncpasswd
Steps to start a VNC session:
Log into neuro with X and SSH forwarding on:
> ssh -X -L 5901:localhost:ID username@neuro01.tchpc.tcd.ie
- Replace ID with the output of the id -u command above.
- Replace username with your user name.
Start a desktop session you can connect to:
> tigervncserver -xstartup /usr/bin/mate-session
Note the Use xtigervncviewer ... line in the output of the tigervncserver command above; it gives you the command to use to connect to the VNC session.
Example command to connect to the session:
> xtigervncviewer -SecurityTypes VncAuth -passwd ~/.vnc/passwd :1
Note: the :1 value may not be the correct one; adjust it to the number provided in the output of the tigervncserver command, or use tigervncserver -list to list the running sessions.
Once you are finished stop your VNC session(s):
- Disconnect from your running session.
- Use tigervncserver -list to list the IDs of your running session(s).
- Use tigervncserver -kill :1 to kill the session, replacing :1 with the correct value from the previous step.
neuro cluster caveats as of November 2022
- The cluster is still under development; it may be unstable (e.g. jobs may fail or not be accepted) or inaccessible (e.g. for necessary reboots) at times.
- This is the first HPC cluster Research IT manage that uses the Ubuntu operating system. Until now all other clusters have used Scientific Linux. This has increased the lead-in time for this cluster and may make support more difficult. Research IT reserve the right to re-install the cluster with Scientific Linux at any time.
- No data on the cluster is backed up.
- The scheduler is configured to share jobs on the same node, i.e. multiple jobs can run on the same node simultaneously. This may lead to contention issues where jobs interfere with each other, please let us know if you have issues with that.
- Matlab is not working on the compute nodes; this is being worked on.
- Interactive logins are not currently working on the neuro-n02 compute node; this is to be resolved.
Tinney
Documentation on the Tinney cluster can be found here.
Plant Eco Model Storage Server for Botany Climate Group
To get access to the Plant Eco Model Storage Server you will require a Research IT account. Apply for an account with Research IT if you don't have one. We will confirm your access rights with the local contact when assigning your account permissions.
Please see our general instructions for how to transfer files.
Host name(s) to log in to:
- hprc-guest-114-232.tchpc.tcd.ie
- plant-eco-model.tchpc.tcd.ie
- pem.tchpc.tcd.ie
(plant-eco-model and pem are DNS pointers to hprc-guest-114-232; all three host names will take you to the same server.)
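For example (a sketch only; replace username and the paths with your own), from a Linux or MacOS terminal:
# Copy a local directory of model data to the storage server
scp -r ./model-data username@pem.tchpc.tcd.ie:
# Or transfer files interactively with sftp
sftp username@pem.tchpc.tcd.ie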
astro01
astro01 is a 64-core server with 128GB of RAM hosted by Research IT for the Astrophysics group in the School of Physics (local contact: Prof. Luca Matra). This server was co-funded by the School of Physics and by the IRC through grant award IRCLA/2022/3788.
Access
To get access to astro01 you will require a Research IT account. Apply for an account with Research IT if you don't have one. We will confirm your access rights with the local contact when assigning your account permissions to astro01.
Logging in with SSH
The endpoint to connect to is astro01.tchpc.tcd.ie. Please see our logging in instructions for more.
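For example (replace username with your own Research IT username):
ssh username@astro01.tchpc.tcd.ie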
Logging in with VNC
(Will only work from the College network including the VPN).
Prerequisite: log in to astro01 via SSH and run the vncpasswd command to set a VNC password if you have not set one. This step only needs to be done once, but VNC won't work without it. Your VNC password is separate from your Research IT login password.
Prerequisite: a dedicated TCP port number on astro01 is required for each user who wishes to use VNC. In the instructions below TCP port 5902 is used.
If someone is already using TCP port 5902 on astro01 there will be a port conflict. (The command netstat -l | grep localhost:59 | grep -v tcp6 | awk '{print $4}' will display which ports in that range are in use on astro01.)
Pick a number in the 59xx range (an unprivileged range) and replace 5902 below with it.
(1) From your desktop/laptop use SSH to forward local port 5902 to port 5901 on astro01.
E.g. from a Linux or MacOS client, open a terminal and use the following command, replacing username with your user name:
ssh -L 5902:localhost:5901 username@astro01.tchpc.tcd.ie
From a Windows client:
- Make sure VNC and PuTTY are installed.
- Enter astro01.tchpc.tcd.ie for your session on port 22.
- Click on Connection - SSH - Tunnels.
- In the box labeled “Source Port”, type 5902.
- In the destination box, type: localhost:5901
- Source and further instructions. (Remember to update the hostnames etc. if referencing that source.)
(2) From an SSH session on astro01 start a VNC session with the command: vncserver -localhost
Note: check the VNC server is running with vncserver -list; if it is not running and you are using a conda environment or similar, deactivate the conda environment with conda deactivate.
(3) From your desktop/laptop point your VNC client/viewer to localhost:5902
E.g. for Linux: vncviewer localhost:5902
From MacOS go to the Finder application and press the cmd and k keys simultaneously. Enter vnc://localhost:5902 as the Server Address and click Connect.
Enter your VNC password and click Connect. Source and more information (ensure to update the hostnames etc. if using that page).
Note: if you get an error like "The software on the remote computer appears to be incompatible with this version of Screen Sharing" when using MacOS you can use the modified command vncserver -localhost -SecurityTypes=None, but note that this means you will not be prompted for a VNC password and it may be possible for another user on astro01 to connect to your VNC session. This is not recommended.
(4) To finish your VNC session kill it with: vncserver -kill :1 (assuming :1 is the session number; use vncserver -list to check).
Error: "XDG_CURRENT_DESKTOP=GNOME environment variable"
If you receive an error like the following:
Could not make bus activated clients aware of XDG_CURRENT_DESKTOP=GNOME environment variable: Could not connect: Connection refused
It may be because conda or another tool that modifies your environment is preventing VNC from launching. Please try to run VNC after running the command conda deactivate (assuming you are using conda) and see if VNC will then work for you.
Resetting your VNC password
If you lost your VNC password you can reset it with the following steps. This applies only to your VNC password on the system you have set it on and not on any other systems or passwords.
- Delete the existing VNC password: rm ~/.vnc/passwd (Note: do not do this unless you want to reset your password.)
- Run the vncpasswd command again to set a new password.
File systems & quotas
There are two main file systems:
- /home is for users' home directories. A per-user 500GB file system quota is applied to /home. If a larger quota is required please talk to the local contact in the Astrophysics group to liaise with Research IT. /home is backed up to tape. The backup schedule is a full backup every quarter and weekly incremental backups. There is a limit to the amount of data that can be backed up, so backup usage of astro01 will be monitored and may have to be changed if it is taking too much backup space.
- /scratch is a ~1.8TB RAID0 volume striped across two SSDs. That should make it a fast file system, but /scratch has no data redundancy; if one of the disks fails all data will be lost. Do not store data here that you cannot afford to lose. Data in /scratch will never be backed up. /scratch is a shared file system, so users can modify and delete files owned by others. No file system quotas are applied to /scratch.
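For a quick look at overall space and your own usage, standard Linux commands can be used, for example:
df -h /home /scratch    # free and used space on each file system
du -sh ~                # total size of your home directory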
Checking /home quota usage
Use the qrep tool to see each user's quota and usage in /home on astro01.
Using software
Centrally installed software will be controlled with the modules system. Please see our instructions for full information. Here are some brief tips:
To get a list of available software from the modules system use the module av command.
To make a software module available use module load module-name, e.g. module load gcc-12.1.0-gcc-4.8.5-envg3aa.
The anaconda and conda python package management systems are installed.
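For example, a minimal conda workflow might look like the following (the environment name and packages are placeholders only):
conda create -n myproject python=3.11 numpy astropy   # create a new environment
conda activate myproject                              # activate it
conda deactivate                                      # deactivate when finished (see the VNC note above)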
Jupyter Notebooks
Installation
(Should only be needed once)
> python3 -m pip install --user --upgrade pip
> python3 -m pip install --user jupyterlab
Note: this will also install ipython.
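Note that pip --user installs go into ~/.local/bin. If the jupyter command is not found afterwards, something like the following may help (this assumes a bash-style shell):
export PATH="$HOME/.local/bin:$PATH"   # make user-installed commands visible
jupyter --version                      # confirm the installation is on your PATH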
Access a Notebook from VNC
See the "Logging in with VNC" instructions above. Launch a terminal and run the command:
jupyter notebook
Access a Notebook via SSH
(1) Run jupyter on astro01:
jupyter notebook --ip 0.0.0.0 --port=8888
(2) Port forward, from your own computer run:
ssh -N -L 8080:astro01.tchpc.tcd.ie:8888 USERNAME@astro01.tchpc.tcd.ie
Note: ensure to replace USERNAME with your username.
(3) Point your browser to http://127.0.0.1:8080/
Quit jupyter notebooks
- On astro01: CTRL C in the terminal running it.
- Close your browser.
- Quit the port forwarding SSH session; CTRL C in that terminal or closing it should do.
Running jobs
The Slurm resource manager is installed on astro01; please use it as much as possible to run computationally intensive jobs, as it will share the machine more equally and prevent resource contention, e.g. multiple users using the same processors at the same time.
Please see our Slurm instructions for full information. Here are some quick pointers.
To run a python script called numbers.py on 4 CPU cores (the -n flag):
srun -n 4 python numbers.py
To request 1 core for 6 hours and 15 minutes in an interactive allocation (where you run the work yourself), use this command:
salloc -n 1 -t 06:15:00
Change the -n (number of processors) and -t (time) flags as needed.
Here is an example batch submission script that will request 12 cores for 1 day, 12 hours and 45 minutes. Again, change the -n (number of processors) and -t (time) flags as needed. Batch jobs are ones where the scheduler runs the work for you.
#!/bin/bash
#SBATCH -n 12
#SBATCH -t 1-12:45:00
# load any necessary modules
module load gcc-12.1.0-gcc-4.8.5-envg3aa
echo "Starting job at:"
date
echo "On machine:"
hostname -f
# run your executable
./exe
Assuming you have called the file run.sh you can submit it to the queue with this command:
sbatch run.sh
To check what is running in the queue you can use the squeue command.
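For example, to list only your own jobs:
squeue -u $USER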
Acknowledgements policy
As per our acknowledgements policy please use the following acknowledgement line:
All calculations were performed on the astro01 system maintained by the Trinity Centre for High Performance Computing (Research IT). This system is co-funded by the School of Physics and by the Irish Research Council grant award IRCLA/2022/3788.
Lanczos
Documentation on the Lanczos cluster can be found here.