LisaCluster

NEW USERS: Plan on at least a few days (more for the outside clusters) to get access to the clusters and learn how to use them.

See ClusterComparaison for a comparison of the clusters we use.

LISA members have access to 3 clusters:

  • the lisa/brams cluster
  • the Compute Canada clusters (we use the RQCHP, CLUMEQ and SHARCNET clusters)
    • You must acknowledge your use of them in your papers.
  • and the old DIRO cluster, which we no longer use as it is too old

You can use the jobdispatch script to launch jobs on all of them. It was named dbidispatch in the past, so if you see information about dbidispatch, it probably also applies to jobdispatch. It supports 6 back-end systems (the right one is selected automatically on the lisa/brams cluster):

  • condor (lisa/brams cluster),
  • cluster (diro cluster),
  • bqtools (mammouth at RQCHP),
  • sge (the old colosse job scheduler at CLUMEQ),
  • local (for SMP machines, ...),
  • and ssh, which is in development (currently on standby as there are other priorities; if this is a priority for you, contact me at nouiz@nouiz.org).

After the jobs are launched, you must use the respective back-end tools (for condor, cluster, bqtools and sge only) to see what happens to your jobs, or to suspend/kill them.

The brams cluster is owned by brams, but uses the same management tools as the lisa cluster, hence the name lisa/brams. The access rights differ between the two.

How to access the outside clusters

REMINDER:
  • YOU MUST ACKNOWLEDGE the group that provided the computing resources you used in your papers!
    • This is important to help them get funding to continue providing us with hardware and support!
    • This is a requirement of their user license!

This is a multistep process.

  • Register with Compute Canada and put jvb-000-01 (the CCRI of Yoshua Bengio) as your sponsor.
  • Yoshua needs to accept you as a member of his group.
  • Request an account with each individual consortium under Compute Canada that you want access to: RQCHP, CLUMEQ and SHARCNET.
  • Yoshua needs to accept all of those requests.
  • Learn how to use them, then use them.
  • Note: for SHARCNET, you must attend a 1-hour live presentation before you get full access. That presentation is given every other week. There may now be a recorded version, but I'm not sure.

How to use the lisa/brams cluster

  • Basic information
    • All files from the lisa account are copied locally to /opt/lisa. Sourcing the file .local.bashrc or .local.cshrc provides a configuration that allows you to use the cluster. Basically, put this at the top of your .bashrc:
      if [ -e "/opt/lisa/os/.local.bashrc" ]; then
          source /opt/lisa/os/.local.bashrc
      else
          source /data/lisa/data/local_export/.local.bashrc
      fi
    • At DIRO, we use the Kerberos security system. This can cause some trouble that is automatically fixed in most cases. The remaining cases are when the submit machine is shut down, or when the job is not finished after 30 days (measured from the time of submission). If you care about those cases, all files (executable, data, config, libs,...) must be under /data/lisa/exp/$USER (for user experiments) or /data/lisa/data (for datasets only).
    • You MUST put your datasets on the data file server under /data/lisa/data
    • You MUST put your experiment files (executables, scripts, ...) on the experiment file server under /data/lisa/exp/$USER
    • You MUST submit your experiments from the /data/lisa/exp/$USER directory
    • There are 3 types of nodes in condor (a single computer can be one or more of these):
        • submit nodes (preferably use the computer zappa8; see the list of hosts): the nodes on which you run the jobdispatch command
        • compute nodes (most desktops and servers): the nodes that execute the commands
        • central manager (monty): the condor back-end manager
      • for simplicity, always use the same submit node, as each submit node has its own job queue.
    • The best method to submit jobs is jobdispatch (automatically available if you did the source command described above); see the minimal session sketch after this list
    • By default, jobs execute on a node with the same architecture (32 vs 64 bits; only 64 bits now) and the same OS (only FC9 as of this writing) as the submit node, so you should specify if you want something else.
    • Job numbers have the format "CLUSTER_ID.JOBS_ID". CLUSTER_ID is the number of a group of jobs (i.e. jobs sent at the same time) and JOBS_ID is the job number inside that group, e.g. 1234.0 is the first job of group 1234. Job numbers are unique for each submit node.
    • [deprecated, as there aren't CPU desktops on the cluster anymore] If you don't use the --server option of jobdispatch, your jobs can be suspended/unsuspended and killed/restarted. It is your responsibility to make sure your jobs can be restarted for any reason!
  • basic commands of condor
    • condor_status; shows the status of all the execution nodes
    • condor_q; shows the status of unfinished jobs submitted on THIS submit node (by anyone)
    • condor_q -better-analyze; shows why jobs don't run
      • scheduling is not always done immediately after submit time; if it shows the job can run, please wait up to 2-3 minutes
    • condor_q -g; shows the status of unfinished jobs submitted on ALL submit nodes
    • condor_history; shows finished jobs on this submit node
    • condor_rm 4.0; removes job number 4.0 on this submit node
    • condor_rm -all; removes all of this user's jobs on this submit node
    • condor_vacate; force-kills all condor jobs executing on your desktop. IF YOU NEED IT, TELL ME; normally condor should do it automatically for you!
    • condor_submit submit_file; talks directly to condor, if you don't want to use jobdispatch
  • jobdispatch basics (to submit jobs)
    • jobdispatch --help
    • jobdispatch --condor echo t t t ; will execute 'echo t t t' on a compute node
    • supports the cross-product syntax of apdispatch: jobdispatch --condor echo "{{1,2,3}} {{a,b}}" executes echo 1 a; echo 2 a; echo 3 a; echo 1 b; echo 2 b; echo 3 b
    • jobdispatch is a wrapper around the DBI objects: it generates the DBI object and executes it
    • jobdispatch --test; creates the DBI object in test mode and doesn't execute it. The DBI object is saved in the file launchdbi.py, which is useful to see the options used and the jobs that would be launched
    • jobdispatch --testdbi; executes jobdispatch in normal mode, but executes the DBI object in test mode
    • Your environment variables
      • By default, your environment variables are not forwarded to the execution node, but the script in $CONDOR_LOCAL_SOURCE will be executed. Only bash and csh scripts have been tested to work; csh scripts must end in .cshrc! Also, if needed, $CONDOR_LOCAL_SOURCE will be copied automatically for you. Default: CONDOR_LOCAL_SOURCE=/home/fringant2/lisa/local_export/.local.{ba,c}shrc
      • jobdispatch --condor --getenv; forwards your environment variables to the execution node. BAD, as the HOSTNAME will be wrong; with plearn it will use the wrong GOTO lib, ...
    • Works with many systems: condor, cluster, bqtools (mammouth), locally and eventually ssh (see below)
    • Puts the log files in LOGS/by_cluster_dir/:
      • submit_file.condor -> the condor file that is given to condor_submit.
      • launch.sh or launch.csh -> the script that will launch your jobs
      • condor.X.err, condor.X.out -> stderr and stdout of each job
      • condor.log -> log of when your jobs are started/suspended/preempted/restarted/...
    • [deprecated, as there aren't CPU desktops on the cluster anymore] jobdispatch --server; will not execute on desktops, so your job should not be interrupted by other jobs or desktop users
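
A minimal end-to-end sketch of the above, run on a submit node (my_exp is a hypothetical directory name; the exact log path may differ):
      cd /data/lisa/exp/$USER/my_exp          # submit from the experiment file server
      jobdispatch --condor echo "{{1,2,3}}"   # submits 3 jobs: echo 1, echo 2, echo 3
      condor_q                                # status of unfinished jobs on this submit node
      condor_q -better-analyze                # if they don't start, see why
      cat LOGS/*/condor.0.out                 # stdout of the first job, once finished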

  • jobdispatch advanced
    • jobdispatch --condor --machine=mona01.iro.umontreal.ca; run only on the given machine
    • jobdispatch --condor --machines=mona0*; pattern on the machine names to use
      • seems to be buggy; you can use 'jobdispatch --machine=maggie3{1,2,3}.iro.umontreal.ca' to do the same
    • jobdispatch --server; will not execute on desktops, so your job should not be interrupted by other jobs or desktop users
    • jobdispatch --condor --prefserver; prefers dedicated compute machines, but accepts desktops if none are available (default)
    • jobdispatch --nice; executes your jobs only when there is compute power available that is not used by higher-priority jobs
    • The environment variable JOBDISPATCH_DEFAULT_OPTION sets default options for jobdispatch; suggested: JOBDISPATCH_DEFAULT_OPTION="--condor --server"
      • For compatibility with the old dbidispatch, if JOBDISPATCH_DEFAULT_OPTION is not set, we try DBIDISPATCH_DEFAULT_OPTION
    • jobdispatch --condor --mem=N; asks for a computer with more than N MB of memory
    • jobdispatch --condor --os=[FC7,FC9]; asks for a computer with operating system FC7 or FC9
    • jobdispatch --condor [--32,--64,--3264]; asks for 32-bit, 64-bit or both types of computers (not useful now, as all computers are 64 bits)
    • jobdispatch --condor --rank=EXPRESSION; passes a condor expression that tells which type of machine you prefer
    • jobdispatch --condor --raw=EXPRESSION; forwards EXPRESSION into the condor submit file, e.g. jobdispatch --condor --raw=+IOJob=True
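
These options can be combined. A hedged example that asks for a dedicated 64-bit FC9 machine with at least 2000 MB of memory (my_script.py is a placeholder):
      jobdispatch --condor --server --64 --os=FC9 --mem=2000 python my_script.py "{{0.01,0.001}}"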

  • jobdispatch interactive jobs
    • Add the --interactive parameter to your jobdispatch command. You need to specify a program to run, but it will be ignored. Example:
    • jobdispatch --interactive --env=THEANO_FLAGS=floatX=float32,device=gpu echo
    • I had a few problems with kerberos not being started every time; I don't know why. To test this, try to cd into your home directory.
    • If you get a permission error, execute "kinit" followed by "pkboost&", then "source ~/.bashrc" in the new shell to fix this and load your unloaded config; see the sketch below.
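
A sketch of that recovery sequence inside the interactive job's shell (assuming the permission error shows up when you cd to your home directory):
      kinit              # get a fresh kerberos ticket
      pkboost &          # the helper mentioned above, restarted in the background
      source ~/.bashrc   # reload your configuration in this shell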

  • condor advanced
    • advanced job prioritization
      • Job priority: a user can tell which of his own jobs he prefers. This doesn't give him more importance than other users. Use the command condor_prio to change the priority of your jobs.
      • User priority: this decides the proportion of resources a user will get. There is a static value that can be set by the admin (0.5 by default) and a dynamic value that changes with each user's use of compute resources. The more you use condor, the lower your priority will be. This uses a half-life technique with a half-life of 1 day. Use condor_userprio to see the priorities of users; the lower, the better.
      • There is a condor nice_user option for jobs (--nice for jobdispatch). If set, the job will only execute if no other jobs want the machine.
      • Users are not starved even if they have low priority (except maybe nice jobs?)
    • condor_hold cluster_id[.jobs_id]; puts jobs in the hold state. If they are running, they are killed without saving.
    • condor_release cluster_id[.jobs_id]; puts jobs that are in the hold state back in the queue of jobs to run.
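
A short sketch of those priority commands (the job id 1234.0 and the priority value are placeholders):
      condor_userprio            # effective priority of all users (lower is better)
      condor_prio -p 10 1234.0   # raise job 1234.0 relative to your other jobs
      condor_hold 1234.0         # hold the job (kills it if it is running)
      condor_release 1234.0      # put the held job back in the run queue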

  • jobdispatch on mammouth
    • jobdispatch --bqtools=max_nb_parallel_jobs [--help|-h] [--test] [--duree=HH:MM:SS] [--micro[=nb_batch]] [--long] {--file=FILEPATH | <command-template>}
    • Jobs have a maximum duration of 5 days. In the past the option '--long' set the maximum duration to 50 days, but mammouth doesn't allow it anymore. You can use jobdispatch --duree=HH:MM:SS to set the duration to something else; see the example below.
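
A hedged example that runs up to 10 jobs in parallel with a 12-hour wall time (the command template is a placeholder):
      jobdispatch --bqtools=10 --duree=12:00:00 python my_script.py "{{1,2,3}}"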

  • jobdispatch on local host
    • jobdispatch can launch jobs in parallel on the local host
    • jobdispatch --local=nb_parallel_jobs [--help|-h] [--test] {--file=FILEPATH | <command-template>}
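
For instance, to run the same three echo jobs at most two at a time on the local machine:
      jobdispatch --local=2 echo "{{1,2,3}}"   # runs echo 1, echo 2, echo 3, two at a time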

Examples

  • condor_status; jobdispatch --condor echo "{{1,2,3}}"; condor_status; condor_q
  • jobdispatch --condor --machine=zzz echo "{{1,2,3}}"; condor_q -better-analyze
  • jobdispatch --file=PATH_TO_A_FILE; this will also submit the same 3 jobs as above if the file has this content:
echo 1
echo 2
echo 3
  • "condor_q" to see the status of your submitted jobs on condor
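
Putting the file-based submission together (commands.txt is a hypothetical file name):
      printf 'echo 1\necho 2\necho 3\n' > commands.txt   # one command per line
      jobdispatch --file=commands.txt                    # submit them
      condor_q                                           # watch the 3 jobs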

Condor with GPU

See this Google spreadsheet for the list of computers with GPUs. Some can be used for development over ssh; most MUST be used through condor. It tells how you must use each computer.

There are a few computers on condor with GPUs. You can use them with Theano. By default, condor won't send jobs to these computers.

  • To see the status of computer with GPU on condor, you can run: condor_status -const GPU
  • To see the GPU type on those hosts, you can run: condor_status -const GPU -format "%s" Machine -format " %s\n" GPU_TYPE
  • To send a job to a host with a GPU, use the jobdispatch option --gpu
  • To send a job to a host with a specific GPU, use the jobdispatch option --gpu together with "--raw=GPU_TYPE=*YOUR_TYPE*"; see the example below
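
A hedged example of a GPU submission restricted to one GPU type (*GTX480* is a hypothetical type pattern; take the real type from the condor_status output above):
      condor_status -const GPU -format "%s" Machine -format " %s\n" GPU_TYPE   # list GPU hosts and their type
      jobdispatch --gpu --env=THEANO_FLAGS=device=gpu,floatX=float32 "--raw=GPU_TYPE=*GTX480*" python theano/misc/check_gpu_blas.py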

To use the GPU, make sure the THEANO_FLAGS variable includes "device=gpu" (without a gpu device number). This will let the driver select a card automatically like on angel.

In column "State" of the output, "Owner" means that the slot is available for a GPU job, but not for another kind of job.

You still need to configure Theano as you did before. This can be done in two ways:

1. setting THEANO_FLAGS with jobdispatch:

  • jobdispatch --gpu --env=THEANO_FLAGS=device=gpu,floatX=float32 python theano/misc/check_gpu_blas.py
2. setting THEANORC with jobdispatch:
  • jobdispatch --gpu --env=THEANORC=/opt/lisa/os/.local.theanorc:/u/bastienf/.theanorc.gpu python theano/misc/check_gpu_blas.py
  • This version avoids passing all the parameters on each submission; a configuration file is used instead.

In all cases, be careful of the modifications that may be applied to these variables in your startup files (like .bashrc and .bash_profile): these files will be executed (FIXME: or maybe only if the CONDOR_LOCAL_SOURCE requires it?) afterwards, so if they contain "unset THEANO_FLAGS", for instance, then whatever you passed with --env= will be ignored.
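
One hedged way to avoid that problem is to have your startup file only set a default when the variable is not already defined, for example:
      # In .bashrc: keep any THEANO_FLAGS passed via --env=, and only set a default otherwise
      if [ -z "$THEANO_FLAGS" ]; then
          export THEANO_FLAGS=floatX=float32
      fi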

How it works: on the hosts with GPUs under condor, the GPU card driving the monitor is put into a mode that does not execute CUDA, and the other GPU cards are put into a mode that allows only one process to bind to each card (exclusive mode). When you set the Theano flag device=gpu, Theano lets the driver select the first card it can use. It can only use GPUs reserved for computation, and only one job can run on each of them, so all is good as long as condor is alone on that machine.
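
Those compute modes are presumably set by the administrators with something like nvidia-smi; this is a sketch only, not the exact cluster configuration:
      nvidia-smi -i 0 -c PROHIBITED          # GPU 0 (driving the monitor): CUDA not allowed
      nvidia-smi -i 1 -c EXCLUSIVE_PROCESS   # GPU 1: only one process may bind to it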

How does condor interact with ssh? 1) If there are 2 condor jobs, you will not be able to open the GPU device (because of the exclusive mode). 2) If there are 0 or 1 condor jobs, your job will start, and anyone using condor to reserve the GPU will find their jobs unable to get the GPU device.

Don't forget that you can combine this with interactive jobs (grep this page for "interactive").

So PLEASE DO NOT SSH INTO COMPUTERS WITH GPUS ON CONDOR AND LAUNCH GPU JOBS, because you will screw up the condor booking system.

Config

  • local user condor:gcondor 169:169
  • backfill job
  • Maintenance = (ClockMin > 255 && ClockMin < 315 && $(ConsoleBusy) == False) # Maintenance is when nightly scripts run on CS machines, raising the load. We don't want to evict a job due to load during these times.
  • config tutorial
  • also, you can do this: export _CONDOR_TOOL_DEBUG=D_ALL (bash) or setenv _CONDOR_TOOL_DEBUG D_ALL (tcsh)
  • condor_q -debug

Trouble

  • no kerberos over nfs (solved automatically in jobdispatch at lisa)
  • Static execution slot
    • dynamic slots are on the road map of May 2007

Check-point technology

  • condor_compile
    • static compile
    • doesn't support pthreads; NSPR needs them. Can compile with user threads, but the build failed
    • BLAS: needs a generic version that runs on a subset of machines, so not a completely optimized version
      • We can't use both versions and check at each call of gemm: the trouble is that if we save inside gemm and then move to an old computer without the new instructions, it will crash.
  • blcr
    • dynamic compile
    • supports pthreads
    • what if the version of BLAS changes?
  • plearn auto-save in HyperLearner
    • needs modifications to plearn
    • more portable
    • loses more data, as we won't save at the exact moment of preemption

kerberos

  • condor_compile
    • forwards I/O to the submit node.
    • If the data is not directly on the submit node, this doubles the network traffic and increases the latency
  • condor file transfer
    • does not support directories for input or output
      • could add a wrapper in DBI that compresses/tars directories
    • If the data is not directly on the submit node, this doubles the network traffic and increases the latency
    • all transfers are done before and after the job, which adds latency at job start and job end
    • all data access is local
  • no kerberos for one partition on NFS
    • less secure
    • if the admins find a security bug, it will be removed
  • modify condor to forward the TGT of the submit node to the execution node when the submit node reclaims the matched node

PLearn auto_save

There are some known bugs that I don't know how to resolve.
  • !!!TEST it before relying on it. Not all classes handle saving and loading correctly!
    • For the test, use your full dataset, but with small iterations and/or few iterations
    • Then execute it without auto_save
    • save a copy of those results
    • re-execute it with auto_save and auto_save_test set to 1
    • diff -r expdir1 expdir2
    • There can be normal differences in special cases, so investigate all diffs manually.
    • if necessary, correct the classes that don't save and load correctly
  • This can save one HyperLearner
  • to use
    • set HyperOptimize::auto_save to true
    • HyperOptimize::auto_save_test will make it exit after every call to auto_save
    • In ALL PTesters, set enforce_clean_expdir to false
    • HyperOptimize::auto_save_diff_time is the minimum amount of time (in seconds) before the first save point and between two save points. Default: 3 h.
  • Trouble
    • If you make modifications to your script and then reload it, the reloaded version won't have your modifications in or under the HyperLearner, unless you also make the modifications in the saved HyperLearner.
    • It will first load your whole script, then reload the old version. If building your components in or under the HyperLearner is slow, it will be done two or three times (once for the script, once for the reload, and maybe once more, as there are two copies of the learner you want to train: the current one and the best one!)

-- FredericBastien - 12 May 2009
