Installing/emulating SLURM on an Ubuntu 16.04 desktop: slurmd fails to start.

Date: 2022-03-12 23:39:48

Edit

What I am really looking for is a way to emulate SLURM, something interactive and reasonably user-friendly that I can install.



Original post

I want to test drive some minimal examples with SLURM, and I am trying to install it all on a local machine with Ubuntu 16.04. I am following the most recent slurm install guide I could find, and I got as far as "start slurmd with sudo /etc/init.d/slurmd start".


[....] Starting slurmd (via systemctl): slurmd.serviceJob for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
 failed!

I do not know how to interpret the systemctl log:


● slurmd.service - Slurm node daemon
   Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2017-10-26 22:49:27 EDT; 12s ago
  Process: 5951 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=1/FAILURE)

Oct 26 22:49:27 Haggunenon systemd[1]: Starting Slurm node daemon...
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Control process exited, code=exited status=1
Oct 26 22:49:27 Haggunenon systemd[1]: Failed to start Slurm node daemon.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Unit entered failed state.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Failed with result 'exit-code'.
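
(The status output above only reports exit status 1; the underlying reason is usually visible when slurmd is run in the foreground, or in the log file named by SlurmdLogFile in slurm.conf. The commands below are a sketch of where to look, assuming the default Debian/Ubuntu paths used later in this post:)

# Run slurmd in the foreground with verbose output; it prints the fatal error directly
sudo slurmd -Dvvv

# Or check the journal for the unit and the slurmd log file
journalctl -xe -u slurmd
sudo tail -n 50 /var/log/slurm-llnl/slurmd.log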

lsb_release -a gives the following. (Yes, I know, KDE Neon is not exactly Ubuntu, strictly speaking.)


No LSB modules are available.
Distributor ID: neon
Description:    KDE neon User Edition 5.11
Release:        16.04
Codename:       xenial

Unlike what the guide says, I used my own user name, wlandau, and I made sure to chown /var/lib/slurm-llnl and /var/run/slurm-llnl to that user. Here is my /etc/slurm-llnl/slurm.conf.


# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=linux0
#ControlAddr=
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
CacheGroups=0
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/usr/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerRootFilter=1
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerPlugin=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=linux[1-32] CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP
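
(Before starting the daemons, a couple of sanity checks catch the most common problems with a config like this; the commands below are a sketch, using the paths from the file above:)

# The short hostname must match ControlMachine and NodeName in slurm.conf
hostname -s

# Print the hardware configuration slurmd actually detects on this node
slurmd -C

# The spool, state, run and log directories must exist and be owned by SlurmUser
ls -ld /var/lib/slurm-llnl/slurmd /var/lib/slurm-llnl/slurmctld /var/run/slurm-llnl /var/log/slurm-llnl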

Follow-up

After rewriting my slurm.conf with the help of @damienfrancois, slurmd starts now. But unfortunately, sinfo hangs when I call it, and I get the same error message as before.


$ sudo /etc/init.d/slurmctld stop
[ ok ] Stopping slurmctld (via systemctl): slurmctld.service.
$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
$ sinfo
slurm_load_partitions: Unable to contact slurm controller (connect failure)
$ slurmd -Dvvv
slurmd: fatal: Frontend not configured correctly in slurm.conf.  See man slurm.conf look for frontendname.
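
(When sinfo reports "Unable to contact slurm controller", it is worth confirming that slurmctld is actually still running and then reading its log; a sketch, using the log path from the slurm.conf above:)

# Ask whether the primary controller is up or down
scontrol ping

# If it is down, the reason is usually in the slurmctld log
sudo tail -n 50 /var/log/slurm-llnl/slurmctld.log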

Then I tried restarting the daemons, and slurmd failed to start all over again.


$ sudo /etc/init.d/slurmctld start
[....] Starting slurmd (via systemctl): slurmd.serviceJob for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
 failed!

1 Answer

#1



The value of ControlMachine has to match the output of hostname -s on the machine on which slurmctld starts. The same holds for NodeName; it has to match the output of hostname -s on the machine on which slurmd starts. As you only have one machine, and it appears to be called Haggunenon, the relevant lines in slurm.conf should be:


ControlMachine=Haggunenon
[...]
NodeName=Haggunenon CPUs=1 State=UNKNOWN
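
One way to apply that change (a sketch only; the sed patterns assume the stock config lines shown in the question) is to substitute the short hostname into the existing entries:

HOST=$(hostname -s)
sudo sed -i "s/^ControlMachine=.*/ControlMachine=${HOST}/" /etc/slurm-llnl/slurm.conf
sudo sed -i "s/^NodeName=.*/NodeName=${HOST} CPUs=1 State=UNKNOWN/" /etc/slurm-llnl/slurm.conf
sudo sed -i "s/^PartitionName=.*/PartitionName=debug Nodes=${HOST} Default=YES MaxTime=INFINITE State=UP/" /etc/slurm-llnl/slurm.conf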

If you want to start several slurmd daemons to emulate a larger cluster, you will need to start slurmd with the -N option (but that requires that Slurm be built with the --enable-multiple-slurmd configure option).
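
Roughly, that setup looks like the sketch below (untested here; the node names, ports and build steps are illustrative, and NodeHostname/Port are the standard slurm.conf options for mapping several logical nodes onto one physical host):

# Build Slurm with support for several slurmd daemons on one host
./configure --enable-multiple-slurmd
make && sudo make install

# In slurm.conf, give each logical node the real hostname and its own port, e.g.:
#   NodeName=linux1 NodeHostname=Haggunenon Port=17001 CPUs=1 State=UNKNOWN
#   NodeName=linux2 NodeHostname=Haggunenon Port=17002 CPUs=1 State=UNKNOWN

# Then start one slurmd per logical node, selecting it with -N
sudo slurmd -N linux1
sudo slurmd -N linux2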



UPDATE. Here is a walkthrough. I set up a VM with Vagrant and VirtualBox (vagrant init ubuntu/xenial64 ; vagrant up), and then, after vagrant ssh, I ran the following:

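For reference, the host-side VM setup mentioned above amounts to the commands below; everything after them was run inside the VM:

vagrant init ubuntu/xenial64
vagrant up
vagrant ssh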

ubuntu@ubuntu-xenial:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:    16.04
Codename:   xenial
ubuntu@ubuntu-xenial:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
[...]
Get:35 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [3,060 B]
Fetched 23.6 MB in 4s (4,783 kB/s)
Reading package lists... Done
ubuntu@ubuntu-xenial:~$ sudo apt-get install munge libmunge2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  libmunge2 munge
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 102 kB of archives.
After this operation, 351 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 libmunge2 amd64 0.5.11-3ubuntu0.1 [18.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 munge amd64 0.5.11-3ubuntu0.1 [83.9 kB]
Fetched 102 kB in 0s (290 kB/s)
Selecting previously unselected package libmunge2.
(Reading database ... 57914 files and directories currently installed.)
Preparing to unpack .../libmunge2_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking libmunge2 (0.5.11-3ubuntu0.1) ...
Selecting previously unselected package munge.
Preparing to unpack .../munge_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking munge (0.5.11-3ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up libmunge2 (0.5.11-3ubuntu0.1) ...
Setting up munge (0.5.11-3ubuntu0.1) ...
Generating a pseudo-random key using /dev/urandom completed.
Please refer to /usr/share/doc/munge/README.Debian for instructions to generate more secure key.
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo apt-get install slurm-wlm slurm-wlm-basic-plugins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  fontconfig fontconfig-config fonts-dejavu-core freeipmi-common libcairo2 libdatrie1 libdbi1 libfontconfig1 libfreeipmi16 libgraphite2-3 
[...]
  python-minimal python2.7 python2.7-minimal slurm-client slurm-wlm slurm-wlm-basic-plugins slurmctld slurmd
0 upgraded, 43 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.8 MB of archives.
After this operation, 87.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 fonts-dejavu-core all 2.35-1 [1,039 kB]
[...]
Get:43 http://archive.ubuntu.com/ubuntu xenial/universe amd64 slurm-wlm amd64 15.08.7-1build1 [6,482 B]
Fetched 20.8 MB in 3s (5,274 kB/s)
Extracting templates from packages: 100%
Selecting previously unselected package fonts-dejavu-core.
(Reading database ... 57952 files and directories currently installed.)
[...]
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo vim /etc/slurm-llnl/slurm.conf
ubuntu@ubuntu-xenial:~$ grep -v \# /etc/slurm-llnl/slurm.conf
ControlMachine=ubuntu-xenial
AuthType=auth/munge
CacheGroups=0
CryptoType=crypto/munge
MpiDefault=none
ProctrackType=proctrack/pgid
ReturnToService=1
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=ubuntu
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
FastSchedule=1
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=cluster
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
NodeName=ubuntu-xenial CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=ubuntu-xenial Default=YES MaxTime=INFINITE State=UP
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/log/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/lib/slurm-llnl/slurmctld
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/run/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmd start
[ ok ] Starting slurmd (via systemctl): slurmd.service.

And in the end, it gives me the expected result:


ubuntu@ubuntu-xenial:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   idle ubuntu-xenial
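
(With the node reported idle, a minimal job makes a reasonable smoke test; the script name and options below are only an illustration, not part of the original session:)

# Run a single command through the scheduler
srun -N1 hostname

# Or submit a tiny batch script and watch it complete
cat > test.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --output=test.out
hostname
EOF
sbatch test.sh
squeue
cat test.out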

If following the exact steps here does not help, try running


sudo slurmctld -Dvvv


and

sudo slurmd -Dvvv


The messages should be explicit enough.

