Software

At Abacus we maintain a short list of standard software available to our users. A few software packages are only available to some research groups.

Users are not limited to using the software installed by us. You are welcome to install your own software either in your home directory or in your project's /work/project/ folder.
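As a sketch of what a user-level installation might look like (the package name and install prefix below are only examples, and it is assumed that the loaded python module provides pip):

module load python/2.7.11
# Install a Python package into your home directory (~/.local)
pip install --user numpy

# Build and install autotools-style software under your home directory
./configure --prefix="$HOME/.local"
make
make install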

You are also welcome to contact us; in many cases we will help you with the installation. If the software is freely available, we will generally add it to our software "modules". For more information, see our dedicated page on modules.

Many software modules are available in multiple versions. The default version is indicated below. To get consistent results, you should always specify the version you want when using module load in your sbatch scripts.
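For example (using the gromacs module purely for illustration), prefer the explicitly versioned form in your sbatch scripts:

# Reproducible: always resolves to the same version
module load gromacs/5.1.2

# Not recommended in scripts: resolves to whatever the current default is,
# which may change after a software update
module load gromacs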

Applications

Amber

Further information:

Versions available:

  • amber/14-2015.04
  • amber/14-2015.05
  • amber/16-2016.05 (default)
  • amber/16-2017.02
  • amber/16-2017.04

AMBER is a collection of molecular dynamics simulation programs.

Amber is ONLY available to SDU users.

To get the currently default version of the amber module, use

testuser@fe1:~$ module load amber/16-2016.05

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/amber/amber-14-2015.05.sh. The contents of this file are shown below.

#! /bin/bash
#
#SBATCH --account test00_gpu      # account
#SBATCH --nodes 1                 # number of nodes
#SBATCH --ntasks-per-node 2       # number of MPI tasks per node
#SBATCH --time 2:00:00            # max time (HH:MM:SS)
#
# Name of the job
#SBATCH --job-name test-1-node
#
# Send email
# Your email address from deic-adm.sdu.dk is used
# Valid types are BEGIN, END, FAIL, REQUEUE, and ALL
#SBATCH --mail-type=ALL
#
# Write stdout/stderr output, %j is replaced with the job number
# use same path name to write everything to one file
# The files are by default placed in the directory from which you call sbatch
#SBATCH --output slurm-%j.txt
#SBATCH --error  slurm-%j.txt

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"

# Load relevant modules
module purge
module add amber/14-2015.05

# Copy all input files to local scratch on all nodes
for f in *.inp *.prmtop *.inpcrd ; do
    sbcast "$f" "$LOCALSCRATCH/$f"
done

cd "$LOCALSCRATCH"

if [ "${CUDA_VISIBLE_DEVICES:-NoDevFiles}" != NoDevFiles ]; then
    # We have access to at least one GPU
    cmd=pmemd.cuda.MPI
else
    # no GPUs available
    cmd=pmemd.MPI
fi

export INPF="$LOCALSCRATCH/input"
export OUPF="$LOCALSCRATCH/output"
srun "$cmd" -O -i em.inp -o "$SLURM_SUBMIT_DIR/em.out" -r em.rst \
     -p test.prmtop -c test.inpcrd -ref test.inpcrd

echo Done.

Comsol

Further information:

Versions available:

  • comsol/5.1
  • comsol/5.2 (default)
  • comsol/5.2a
  • comsol/5.3

COMSOL Multiphysics is a finite element analysis, solver, and simulation (FEA) software package for various physics and engineering applications, especially coupled phenomena (multiphysics).

COMSOL is currently only available to a very small set of users. Contact us to hear about your options if you want to use COMSOL.

By default, COMSOL creates a lot of small files in your $HOME/.comsol folder. These files may fill up your home directory, after which COMSOL and other programs are unable to run. The easiest way to fix this is to use /tmp for all COMSOL-related temporary files (with the side effect that COMSOL settings are not saved between consecutive COMSOL runs).

rm -rf ~/.comsol
ln -s /tmp ~/.comsol

To get the currently default version of the comsol module, use

testuser@fe1:~$ module load comsol/5.2

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/comsol/comsol-5.1.sh. The contents of this file are shown below.

#! /bin/bash
#
#SBATCH --nodes 1                 # number of nodes
#SBATCH --ntasks-per-node 1       # number of MPI tasks per node
#SBATCH --time 2:00:00            # max time (HH:MM:SS)

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"

# Load relevant modules
module purge
module add comsol/5.1

IN_MPH=GSPmetasurface2D.mph
OUT_MPH=out.mph

comsol -clustersimple batch -inputfile "$IN_MPH" -outputfile "$OUT_MPH"

echo Done.

Gaussian09

Further information:

Versions available:

  • gaussian09/D.01 (default)

Gaussian 09 provides state-of-the-art capabilities for electronic structure modeling.

Gaussian sbatch job scripts can be generated and submitted using the command subg09 test.com. Use subg09 -p test.com to see the generated script without submitting it.

This module is only available to some SDU users. Contact support@deic.sdu.dk for more information.

To get the currently default version of the gaussian09 module, use

testuser@fe1:~$ module load gaussian09/D.01

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/gaussian09/gaussian09-D.01.sh. The contents of this file are shown below.

#! /bin/bash
#
# Gaussian job script
#
#
#SBATCH --nodes 1
#SBATCH --job-name test
#SBATCH --time 10:00:00

# Setup environment
module purge
module add gaussian09/D.01

# Run Gaussian
g09 < test.com >& test.log

# Copy chk file back to workdir
test -r "$GAUSS_SCRDIR/test.chk" && cp -u "$GAUSS_SCRDIR/test.chk" .

Gaussview

Further information:

Versions available:

  • gaussview/5.0.8 (default)

GaussView is a GUI for Gaussian 09.

This module is only available to some SDU users. Contact support@deic.sdu.dk for more information.

To get the currently default version of the gaussview module, use

testuser@fe1:~$ module load gaussview/5.0.8

Gromacs

Further information:

Versions available:

  • gromacs/4.6.7
  • gromacs/5.0.4-openmpi
  • gromacs/5.0.4
  • gromacs/5.0.5
  • gromacs/5.0.6
  • gromacs/5.1
  • gromacs/5.1.2 (default)
  • gromacs/5.1.4
  • gromacs/2016.2
  • gromacs/2016.3

GROMACS is a collection of molecular dynamics simulation programs.

To get the currently default version of the gromacs module, use

testuser@fe1:~$ module load gromacs/5.1.2

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/gromacs/gromacs-5.1.2.sh. The contents of this file are shown below.

#! /bin/bash
#SBATCH --account sdutest00_gpu
#SBATCH --nodes 8
#SBATCH --time 24:00:00
#SBATCH --mail-type=ALL
#
# MPI ranks per node:
# * GPU nodes.....: one rank per GPU card, i.e., 2
# * Slim/fat nodes: one rank per CPU core, i.e., 24
#SBATCH --ntasks-per-node 2

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"

module purge
module add gromacs/5.1.2

if [ "${CUDA_VISIBLE_DEVICES:-NoDevFiles}" != NoDevFiles ]; then
    cmd="gmx_gpu_mpi mdrun"
else
    cmd="gmx_mpi mdrun"
fi

# Cores per MPI rank
OMP=$(( 24 / $SLURM_NTASKS_PER_NODE ))

# prod is the name of the input file
srun $cmd -pin on -ntomp $OMP -notunepme -deffnm prod -cpi prod.cpt -append

MATLAB

Further information:

Versions available:

  • matlab/R2015a
  • matlab/R2015b
  • matlab/R2016a (default)
  • matlab/R2017a

MATLAB (MATrix LABoratory) is a numerical computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, and implementation of algorithms.

Note that the MATLAB file, e.g., test.m, must include exit as the last line to ensure that MATLAB exits correctly.
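For illustration, a minimal matlab_code.m (hypothetical contents) matching the sbatch script further below could be created like this; note the exit at the end:

cat > matlab_code.m <<'EOF'
% Minimal MATLAB example
A = magic(4);
disp(trace(A))
exit   % required so MATLAB terminates and the job can finish
EOF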

MATLAB is currently available for most of our academic users. For further information, see the web page on our MATLAB Hosting Provider Agreement.

To use MATLAB together with a MATLAB GUI running on your own computer/laptop, see our MATLAB documentation page. That page also contains further information relevant to any MATLAB user on Abacus 2.0.

To get the currently default version of the matlab module, use

testuser@fe1:~$ module load matlab/R2016a

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/matlab/matlab-R2016a.sh. The contents of this file are shown below.

#! /bin/bash
#
#SBATCH --nodes 1                 # number of nodes
#SBATCH --time 2:00:00            # max time (HH:MM:SS)

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"

# Load relevant modules
module purge
module add matlab/R2016a

# Run the MATLAB code available in matlab_code.m
# (note the missing .m)
matlab -nodisplay -r matlab_code

echo Done.

Namd

Further information:

Versions available:

  • namd/2.10 (default)

NAMD is a scalable parallel molecular dynamics package.

To get the currently default version of the namd module, use

testuser@fe1:~$ module load namd/2.10

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/namd/namd-2.10.sh. The contents of this file are shown below.

#!/bin/bash
#
#SBATCH --account         sysops_gpu
#SBATCH --time            00:10:00
#SBATCH --nodes           4
#SBATCH --ntasks-per-node 1
#SBATCH --mail-type       FAIL

# Also see
# http://www.ks.uiuc.edu/Research/namd/wiki/index.cgi?NamdOnSLURM

# Specify input file at submission using
#    sbatch namd-test.sh /path/to/input.namd
# Default value is apoa1/apoa1.namd
INPUT=${1-apoa1/apoa1.namd}

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"
echo

module purge
module add namd

#
# Find version of namd command to use
#
cmd=namd2

# Should we use the MPI version?
if [ "$SLURM_NNODES" -gt 1 ]; then
    cmd="$cmd-mpi"
fi

# Should we use the CUDA version
if [ "${CUDA_VISIBLE_DEVICES:-NoDevFiles}" != NoDevFiles ]; then
    cmd="$cmd-cuda"
fi

if [ "$SLURM_NNODES" -gt 1 ]; then
    # Worker threads per MPI rank
    WT=$(( 24 / $SLURM_NTASKS_PER_NODE - 1 ))
    echo srun "$cmd" ++ppn "$WT" "$INPUT"
    srun      "$cmd" ++ppn "$WT" "$INPUT"
else
    # running on a single node
    charmrun ++local +p12 "$(which "$cmd")" "$INPUT"
fi

Netlogo

Further information:

Versions available:

  • netlogo/5.2.0 (default)

NetLogo is a multi-agent programmable modeling environment.

To get the currently default version of the netlogo module, use

testuser@fe1:~$ module load netlogo/5.2.0

Photoscan

Further information:

Versions available:

  • photoscan/1.1.6
  • photoscan/1.2.4 (default)

Agisoft PhotoScan is a stand-alone software product that performs photogrammetric processing of digital images and generates 3D spatial data to be used in GIS applications, cultural heritage documentation, and visual effects production as well as for indirect measurements of objects of various scales.

This module is only available to some users. Contact support@deic.sdu.dk for further information.

To get the currently default version of the photoscan module, use

testuser@fe1:~$ module load photoscan/1.2.4

R

Further information:

Versions available:

  • R/3.2.2
  • R/3.2.5 (default)
  • R/3.3.1
  • R/3.3.2

R is a programming language and software environment for statistical computing and graphics. The R language is widely used among statisticians and data miners for developing statistical software and data analysis.

To get the currently default version of the R module, use

testuser@fe1:~$ module load R/3.2.5

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/R/R-3.2.2.sh. The contents of this file are shown below.

#! /bin/bash
#SBATCH --account test00_gpu
#SBATCH --nodes 1
#SBATCH --time 1:00:00

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"

module purge
module add R/3.2.2

R --vanilla < Fibonacci.R

Stata

Further information:

Versions available:

  • stata/14.0 (default)

Stata is a general-purpose statistical software package created by StataCorp.

This module is currently only available to SDU users. Contact support@deic.sdu.dk for more information.

To get the currently default version of the stata module, use

testuser@fe1:~$ module load stata/14.0

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/stata/stata-14.0.sh. The contents of this file are shown below.

#! /bin/bash
#
#SBATCH --nodes 1                 # number of nodes
#SBATCH --ntasks-per-node 1       # number of MPI tasks per node
#SBATCH --time 2:00:00            # max time (HH:MM:SS)

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"

# Load relevant modules
module purge
module add stata/14.0

# stata output is put in example.log
stata -b example.do

echo Done.

Vmd

Further information:

Versions available:

  • vmd/1.9.3 (default)

VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.

To get the currently default version of the vmd module, use

testuser@fe1:~$ module load vmd/1.9.3

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/vmd/vmd-1.9.3.sh. The contents of this file are shown below.

#! /bin/bash
#
# VMD job script
#
#SBATCH --nodes 1
#SBATCH --job-name test
#SBATCH --time 1:00:00

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"
echo

# Setup environment
module purge
module add vmd/1.9.3

# Run VMD
vmd -eofexit < test.tcl

# If the TCL script takes arguments, use instead:
#   vmd -e test.tcl -args <arg1> <arg2> ...
# and be sure to place an exit statement at the end
# of the script.

echo End time: "$(date)"

Compilers

Gcc

Further information:

Versions available:

  • gcc/4.7.4
  • gcc/4.8-c7 (default)
  • gcc/5.4.0
  • gcc/6.2.0
  • gcc/6.3.0

The module gcc/4.8-c7 "loads" the default gcc compiler found on our CentOS 7 system. Other versions of the gcc compiler are also available.

To get the currently default version of the gcc module, use

testuser@fe1:~$ module load gcc/4.8-c7

Intel

Further information:

Versions available:

  • intel/2015.02
  • intel/2015.04
  • intel/2016.05 (default)
  • intel/2016.11

This module loads the Intel compiler suite, including e.g. Intel MKL but excluding Intel MPI.

Intel MPI may be loaded separately using the module intelmpi.

To get the currently default version of the intel module, use

testuser@fe1:~$ module load intel/2016.05
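As a sketch (blas_demo.c is a placeholder source file), compiling a C program with the Intel compiler and linking Intel MKL via the compiler's -mkl convenience flag might look like:

module purge
module load intel/2016.05

# icc is the Intel C compiler; -mkl links the MKL libraries
icc -O2 -mkl -o blas_demo blas_demo.c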

MPI implementations

IntelMPI

Further information:

Versions available:

  • intelmpi/2015.02
  • intelmpi/2015.04
  • intelmpi/2016.05
  • intelmpi/2016.11 (default)

This module loads the Intel MPI compiler path and environment variables.

This module can only be loaded if you first load one of the following modules:

  • gcc/4.7.4
  • gcc/4.8-c7
  • gcc/5.4.0
  • gcc/6.2.0
  • gcc/6.3.0
  • intel/2016.11

To get the currently default version of the intelmpi module, use

testuser@fe1:~$ module load intel/2016.11 intelmpi/2016.11

You can replace intel/2016.11 with other versions of the gcc or intel modules. Note that not all combinations of versions of intelmpi and these modules may be available.
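A minimal sketch, assuming a placeholder source file hello_mpi.c: after loading the modules, the Intel MPI compiler wrappers (mpiicc for C, mpiifort for Fortran) can be used to build MPI programs.

module purge
module load intel/2016.11 intelmpi/2016.11

# Compile an MPI program with the Intel MPI wrapper around icc
mpiicc -O2 -o hello_mpi hello_mpi.c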

MVAPICH2

Further information:

Versions available:

  • mvapich2/2.1
  • mvapich2/2.2 (default)

An MPI implementation based on MPICH with added InfiniBand support.

This module can only be loaded if you first load one of the following modules:

  • gcc/4.8-c7
  • gcc/5.4.0
  • gcc/6.2.0
  • gcc/6.3.0
  • intel/2016.11

To get the currently default version of the mvapich2 module, use

testuser@fe1:~$ module load intel/2016.11 mvapich2/2.2

You can replace intel/2016.11 with other versions of the gcc or intel modules. Note that not all combinations of versions of mvapich2 and these modules may be available.

OpenMPI

Further information:

Versions available:

  • openmpi/1.8.4
  • openmpi/1.8.5
  • openmpi/1.8.7
  • openmpi/1.8.8
  • openmpi/1.10.2
  • openmpi/1.10.4
  • openmpi/2.0.1
  • openmpi/2.0.2 (default)
  • openmpi-i8/1.8.5
  • openmpi-i8/1.8.7
  • openmpi-i8/1.8.8
  • openmpi-i8/1.10.2
  • openmpi-i8/1.10.4
  • openmpi-i8/2.0.1
  • openmpi-i8/2.0.2

This module loads the OpenMPI compiler path and environment variables. OpenMPI is available in two variants: openmpi and openmpi-i8. openmpi-i8 is compiled to use 64-bit default integers in Fortran (the standard default integer size is 32 bits).

You should use the openmpi module unless you are sure that you need the openmpi-i8 version.
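If you do use openmpi-i8, your own Fortran code should normally also be built with 64-bit default integers. A sketch assuming gfortran and a placeholder source file solver.f90 (for Intel Fortran the corresponding flag is -i8):

module purge
module load gcc/6.3.0 openmpi-i8/2.0.2

# -fdefault-integer-8 makes INTEGER 64-bit, matching the i8 MPI build
mpifort -O2 -fdefault-integer-8 -o solver solver.f90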

This module can only be loaded if you first load one of the following modules:

  • gcc/4.8-c7
  • gcc/5.4.0
  • gcc/6.3.0
  • intel/2016.11

To get the currently default version of the openmpi module, use

testuser@fe1:~$ module load intel/2016.11 openmpi/2.0.2

You can replace intel/2016.11 with other versions of the gcc or intel modules. Note that not all combinations of versions of openmpi and these modules may be available.
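As a minimal sketch (hello_mpi.c is a placeholder source file), an sbatch job using OpenMPI could compile and run an MPI program like this:

#! /bin/bash
#SBATCH --nodes 2
#SBATCH --ntasks-per-node 24
#SBATCH --time 0:30:00

module purge
module load gcc/6.3.0 openmpi/2.0.2

# mpicc is the OpenMPI compiler wrapper; srun starts one MPI rank per task
mpicc -O2 -o hello_mpi hello_mpi.c
srun ./hello_mpi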

Python modules, etc

Note that python-intel contains many extra Intel-optimised Python packages such as scipy and numpy.

Python

Further information:

Versions available:

  • python/2.7.9
  • python/2.7.10
  • python/2.7.11 (default)
  • python/2.7.12
  • python/2.7.13
  • python/3.4.3
  • python/3.5.1
  • python/3.5.2
  • python/3.6.0

The Python programming language.

To get the currently default version of the python module, use

testuser@fe1:~$ module load python/2.7.11

Python-Intel

Further information:

Versions available:

  • python-intel/2.7.10-184913
  • python-intel/2.7.11 (default)
  • python-intel/2.7.12
  • python-intel/2.7.12.35
  • python-intel/3.5.0-185146
  • python-intel/3.5.1
  • python-intel/3.5.2
  • python-intel/3.5.2.35

Intel-optimised version of Python.

This is the Intel Distribution for Python, an Intel MKL performance-optimised Python distribution ideal for data analysis and scientific computing. It includes several widely used Python packages such as numpy, scipy, pandas, and matplotlib.

https://software.intel.com/en-us/python-distribution

To get the currently default version of the python-intel module, use

testuser@fe1:~$ module load python-intel/2.7.11
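As a quick sanity check (a sketch, not required), you can verify that the bundled packages are picked up after loading the module:

module purge
module load python-intel/2.7.11

# numpy from the Intel distribution should now be on the path
python -c "import numpy; print(numpy.__version__)"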

IPython

Further information:

Versions available:

  • ipython/3.1.0 (default)
  • ipython/3.2.1

IPython is a command shell for interactive computing in multiple programming languages, originally developed for Python. It offers enhanced introspection, rich media, additional shell syntax, tab completion, and rich history.

This module can only be loaded if you first load one of the following modules:

  • python/2.7.9
  • python/3.4.3

To get the currently default version of the ipython module, use

testuser@fe1:~$ module load python/3.4.3 ipython/3.1.0

You can replace python/3.4.3 with other versions of the python module. Note that not all combinations of versions of ipython and python may be available.

PP

Further information:

Versions available:

  • pp/1.6.4 (default)

Parallel Python is a Python module that provides a mechanism for parallel execution of Python code on SMP machines and clusters.

This module can only be loaded if you first load one of the following modules:

  • python-intel/2.7.10-184913
  • python-intel/2.7.11
  • python-intel/2.7.12
  • python-intel/2.7.12.35
  • python-intel/3.5.0-185146
  • python-intel/3.5.1
  • python-intel/3.5.2
  • python-intel/3.5.2.35
  • python/2.7.10
  • python/2.7.11
  • python/2.7.12
  • python/2.7.13
  • python/3.4.3
  • python/3.5.1
  • python/3.6.0

To get the currently default version of the pp module, use

testuser@fe1:~$ module load python/3.6.0 pp/1.6.4

You can replace python/3.6.0 with other versions of the python or python-intel modules. Note that not all combinations of versions of pp and these modules may be available.

An example sbatch script can be found on the Abacus frontend node at the location /opt/sys/documentation/sbatch-scripts/pp/pp-1.6.4.sh. The contents of this file are shown below.

#! /bin/bash
#
#SBATCH --nodes 2                 # number of nodes
#SBATCH --time 2:00:00            # max time (HH:MM:SS)
#

echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"
echo

echo Writing hostnames
scontrol show hostnames $SLURM_NODELIST > /tmp/nodelist

echo Enable modules
module purge
module add python pp

echo Starting servers
srun ppserver.py -p 2048 &
sleep 1 # sleep a bit to ensure that the servers have started

echo Starting Python program
python test.py

echo
echo Killing ppservers - expect this to fail with an srun error
srun pkill pserver
sleep 1

echo Stop time: "$(date)"
echo Done.

Other libraries/utilities

CUDA

Further information:

Versions available:

  • cuda/7.0
  • cuda/7.5 (default)
  • cuda/8.0.44

CUDA is a parallel computing platform and application programming interface (API) model created by NVIDIA. Version 7.5 is installed on all compute nodes. Using this module you can load a specific version of CUDA if necessary. Note that GPU cards are only available on the GPU nodes.

To get the currently default version of the cuda module, use

testuser@fe1:~$ module load cuda/7.5
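For example (saxpy.cu is a placeholder source file), you can compile CUDA code with nvcc after loading the module; the resulting binary must be run on a GPU node, e.g. from an sbatch job using a *_gpu account as in the Amber and GROMACS examples above.

module purge
module load cuda/7.5

# nvcc compiles CUDA C/C++ sources
nvcc -O2 -o saxpy saxpy.cu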

Cmake

Further information:

Versions available:

  • cmake/3.2.2
  • cmake/3.5.2 (default)
  • cmake/3.7.0
  • cmake/3.7.2

CMake is a cross-platform, open-source build system.

To get the currently default version of the cmake module, use

testuser@fe1:~$ module load cmake/3.5.2
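A minimal out-of-source build sketch, assuming a project with a CMakeLists.txt in the current directory and installation into your home directory:

module purge
module load cmake/3.5.2 gcc/6.3.0

# Configure, build, and install in a separate build directory
mkdir -p build && cd build
cmake -DCMAKE_INSTALL_PREFIX="$HOME/.local" ..
make -j 4
make install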

FFTW

Further information:

Versions available:

  • fftw/3.3.4
  • fftw/3.3.5
  • fftw/3.3.6 (default)

FFTW - Fastest Fourier Transform in the West

This module can only be loaded if you first load one of the following modules:

  • gcc/4.8-c7 intelmpi/2016.11
  • gcc/4.8-c7 mvapich2/2.2
  • gcc/4.8-c7 openmpi-i8/1.10.4
  • gcc/4.8-c7 openmpi-i8/2.0.2
  • gcc/4.8-c7 openmpi/1.10.4
  • gcc/4.8-c7 openmpi/2.0.2
  • gcc/5.4.0 intelmpi/2016.11
  • gcc/5.4.0 mvapich2/2.2
  • gcc/5.4.0 openmpi-i8/1.10.4
  • gcc/5.4.0 openmpi-i8/2.0.2
  • gcc/5.4.0 openmpi/1.10.4
  • gcc/5.4.0 openmpi/2.0.2
  • gcc/6.3.0 intelmpi/2016.11
  • gcc/6.3.0 mvapich2/2.2
  • gcc/6.3.0 openmpi-i8/1.10.4
  • gcc/6.3.0 openmpi-i8/2.0.2
  • gcc/6.3.0 openmpi/1.10.4
  • gcc/6.3.0 openmpi/2.0.2
  • intel/2016.11 intelmpi/2016.11
  • intel/2016.11 mvapich2/2.2
  • intel/2016.11 openmpi-i8/1.10.4
  • intel/2016.11 openmpi-i8/2.0.2
  • intel/2016.11 openmpi/1.10.4
  • intel/2016.11 openmpi/2.0.2

To get the currently default version of the fftw module, use

testuser@fe1:~$ module load intel/2016.11 openmpi/2.0.2 fftw/3.3.6

You can replace intel/2016.11 openmpi/2.0.2 with other versions of the gcc, intel, intelmpi, mvapich2, openmpi, or openmpi-i8 modules. Note that not all combinations of versions of fftw and these modules may be available.
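As a sketch (fft_demo.c is a placeholder source file; the module is assumed to put the FFTW headers and libraries on the compiler search paths), linking against FFTW could look like:

module purge
module load intel/2016.11 openmpi/2.0.2 fftw/3.3.6

# Serial FFTW links with -lfftw3; the MPI interface additionally needs -lfftw3_mpi
mpicc -O2 -o fft_demo fft_demo.c -lfftw3_mpi -lfftw3 -lm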

OpenBabel

Further information:

Versions available:

  • openbabel/2.3.2 (default)
  • openbabel/2.4.1

Open Babel: The Open Source Chemistry Toolbox

To get the currently default version of the openbabel module, use

testuser@fe1:~$ module load openbabel/2.3.2

Java-Oracle

Further information:

Versions available:

  • java-oracle/1.7.0-45
  • java-oracle/1.8.0-51
  • java-oracle/1.8.0-66
  • java-oracle/1.8.0-91 (default)
  • java-oracle/1.8.0-112
  • java-oracle/1.8.0-121

Java SE Development Kit

Java Platform, Standard Edition or Java SE is a widely used platform for development and deployment of portable applications for desktop and server environments. Java SE uses the object-oriented Java programming language.

This module contains the Java SE development kit from Oracle.

To get the currently default version of the java-oracle module, use

testuser@fe1:~$ module load java-oracle/1.8.0-91

TurboVNC

Further information:

Versions available:

  • turbovnc/2.0 (default)

TurboVNC can be used to speed up X/GUI connections running over ssh, making GUI programs running on our compute nodes actually usable. This is particularly useful for, e.g., MATLAB, which is otherwise too slow to use via the GUI.

For this to work, you must install a corresponding VNC client running on your own computer, e.g., TurboVNC.

The TurboVNC package also includes Mesa libraries (software 3D rendering), which in some cases are required to make this run smoothly.

Note that you are only allowed to start a VNC server on the compute nodes.

To start a VNC job, run the command sinteractive-vnc, e.g., sinteractive-vnc --time 0-4 --account sdutest_slim.

To get the currently default version of the turbovnc module, use

testuser@fe1:~$ module load turbovnc/2.0