Maintenance Manual: A Collection of Common Errors
4. ceph-deploy install node02 error
5. health HEALTH_WARN: too few PGs per OSD (16 < min 30)
7. Lock problem: could not get lock /var/lib/dpkg/lock. is another process using it?
8. SSH login failure: resolving "Host key verification failed"
9. Failed to add a monitor: GenericError: Failed to add monitor to host
10. ImportError: No module named rados / locale.Error: unsupported locale setting
1. Partitioning a disk (e.g. the new partition will be /dev/sdb1)
I. Partition the disk
1. Inspect the disks (e.g. sudo fdisk -l)
2. Create the partition: sudo fdisk /dev/sdb
II. Format the new partition
1. Format it as ext4: sudo mkfs.ext4 /dev/sdb1
III. Mount it, here under /home2
1. Manual mount
sudo mount -t ext4 /dev/sdb1 /home2
2. Automatic mount at boot
sudo vi /etc/fstab
Add the following line:
/dev/sdb1 /home2 ext4 defaults 0 0
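One hedged refinement of the fstab entry above (an addition, not from the original steps): device names like /dev/sdb1 can change across reboots, so mounting by UUID is more robust. A sketch, with the UUID left as a placeholder:

```shell
# /etc/fstab -- mount by UUID instead of /dev/sdb1 (device names can
# change when disks are added or reordered).
# Read the real UUID first with:  sudo blkid /dev/sdb1
# UUID=<uuid-from-blkid>  /home2  ext4  defaults  0  0
# Verify the entry without rebooting:  sudo mount -a
```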
2. Rebooting the server (use when a normal reboot fails)
A watchdog monitors whether the system is still feeding it input and reboots the machine automatically when it detects an anomaly; we can borrow that behavior here.
First load watchdog support. Hardware watchdogs depend on the motherboard; if a software-emulated one is enough, run:
1. [root@localhost ~]# modprobe softdog
to load the software watchdog module. Then run:
2. [root@localhost ~]# cat /dev/watchdog
This command exits immediately with an error, and the system log may show (it is fine if it does not):
softdog: Unexpected close, not stopping watchdog!
This means the watchdog device was closed unexpectedly rather than stopped cleanly; after roughly 60 seconds the Linux system reboots itself. The softdog timeout defaults to 60 seconds and can be changed via the module's soft_margin parameter (e.g. modprobe softdog soft_margin=60).
3. Ignoring file '50unattended-upgrades.ucf-dist' in directory '/etc/apt/apt.conf.d/' as it has an invalid filename extension
cephCluster@node02:~$ sudo apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu xenial InRelease
Get:2 http://mirrors.aliyun.com/ubuntu xenial-backports InRelease [102 kB]
Get:3 http://mirrors.aliyun.com/ubuntu xenial-proposed InRelease [253 kB]
Get:4 http://mirrors.aliyun.com/ubuntu xenial-security InRelease [102 kB]
Get:5 http://mirrors.aliyun.com/ubuntu xenial-updates InRelease [102 kB]
Get:6 http://mirrors.aliyun.com/ubuntu xenial-backports/main Sources [3,312 B]
Get:7 http://mirrors.aliyun.com/ubuntu xenial-backports/main amd64 Packages [4,680 B]
Get:8 http://mirrors.aliyun.com/ubuntu xenial-backports/main i386 Packages [4,688 B]
Get:9 http://mirrors.aliyun.com/ubuntu xenial-backports/main amd64 DEP-11 Metadata [3,328 B]
Get:10 http://mirrors.aliyun.com/ubuntu xenial-backports/universe amd64 DEP-11 Metadata [4,672 B]
Get:11 http://mirrors.aliyun.com/ubuntu xenial-proposed/main Sources [91.7 kB]
Get:12 http://mirrors.aliyun.com/ubuntu xenial-proposed/main amd64 Packages [113 kB]
Get:13 http://mirrors.aliyun.com/ubuntu xenial-proposed/main i386 Packages [107 kB]
Get:14 http://mirrors.aliyun.com/ubuntu xenial-proposed/main Translation-en [43.7 kB]
Get:15 http://mirrors.aliyun.com/ubuntu xenial-proposed/main amd64 DEP-11 Metadata [76.3 kB]
Get:16 http://mirrors.aliyun.com/ubuntu xenial-proposed/multiverse amd64 DEP-11 Metadata [3,932 B]
Get:17 http://mirrors.aliyun.com/ubuntu xenial-proposed/multiverse DEP-11 64x64 Icons [6,734 B]
Get:18 http://mirrors.aliyun.com/ubuntu xenial-proposed/universe amd64 Packages [55.6 kB]
Get:19 http://mirrors.aliyun.com/ubuntu xenial-proposed/universe i386 Packages [52.4 kB]
Get:20 http://mirrors.aliyun.com/ubuntu xenial-proposed/universe Translation-en [20.7 kB]
Get:21 http://mirrors.aliyun.com/ubuntu xenial-proposed/universe amd64 DEP-11 Metadata [12.9 kB]
Get:22 http://mirrors.aliyun.com/ubuntu xenial-proposed/universe DEP-11 64x64 Icons [19.8 kB]
Get:23 http://mirrors.aliyun.com/ubuntu xenial-security/main amd64 DEP-11 Metadata [54.6 kB]
Get:24 http://mirrors.aliyun.com/ubuntu xenial-security/main DEP-11 64x64 Icons [45.7 kB]
Get:25 http://mirrors.aliyun.com/ubuntu xenial-security/universe amd64 DEP-11 Metadata [35.7 kB]
Get:26 http://mirrors.aliyun.com/ubuntu xenial-security/universe DEP-11 64x64 Icons [52.2 kB]
Get:27 http://mirrors.aliyun.com/ubuntu xenial-updates/main amd64 Packages [572 kB]
Get:28 http://mirrors.aliyun.com/ubuntu xenial-updates/main i386 Packages [553 kB]
Get:29 http://mirrors.aliyun.com/ubuntu xenial-updates/main amd64 DEP-11 Metadata [298 kB]
Get:30 http://mirrors.aliyun.com/ubuntu xenial-updates/main DEP-11 64x64 Icons [201 kB]
Get:31 http://mirrors.aliyun.com/ubuntu xenial-updates/multiverse amd64 DEP-11 Metadata [2,516 B]
Get:32 http://mirrors.aliyun.com/ubuntu xenial-updates/universe amd64 Packages [493 kB]
Get:33 http://mirrors.aliyun.com/ubuntu xenial-updates/universe i386 Packages [474 kB]
Get:34 http://mirrors.aliyun.com/ubuntu xenial-updates/universe amd64 DEP-11 Metadata [163 kB]
Get:35 http://mirrors.aliyun.com/ubuntu xenial-updates/universe DEP-11 64x64 Icons [208 kB]
Fetched 4,337 kB in 16s (266 kB/s)
Reading package lists... Done
N: Ignoring file '50unattended-upgrades.ucf-dist' in directory '/etc/apt/apt.conf.d/' as it has an invalid filename extension
Solution:
sudo rm /etc/apt/apt.conf.d/50unattended-upgrades.ucf-dist
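The same class of leftovers shows up repeatedly: .ucf-dist, .dpkg-old, and .dpkg-dist files are backups left behind by upgrades, and apt ignores them (hence the N: notice). A small sketch to list such files before deleting anything; the suffix list is an assumption to review, and the helper only prints, never removes:

```shell
# List upgrade-leftover files in an apt.conf.d-style directory that apt
# will ignore and warn about. Review the output, then rm them yourself.
list_ignored_apt_conf() {
    find "$1" -maxdepth 1 -type f \
        \( -name '*.ucf-dist' -o -name '*.dpkg-old' -o -name '*.dpkg-dist' \) -print
}
# e.g.:  list_ignored_apt_conf /etc/apt/apt.conf.d
```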
4. ceph-deploy install node02 error
cephCluster@admin-node:~/my-cluster$ ceph-deploy install node02
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephCluster/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.37): /usr/bin/ceph-deploy install node02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f36cf0d6d88>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f36cf52c398>
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['node02']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts node02
[ceph_deploy.install][DEBUG ] Detecting platform for host node02 ...
[node02][DEBUG ] connection detected need for sudo
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Ubuntu 16.04 xenial
[node02][INFO ] installing Ceph on node02
[node02][INFO ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[node02][DEBUG ] Reading package lists...
[node02][DEBUG ] Building dependency tree...
[node02][DEBUG ] Reading state information...
[node02][DEBUG ] ca-certificates is already the newest version (20160104ubuntu1).
[node02][DEBUG ] apt-transport-https is already the newest version (1.2.20).
[node02][DEBUG ] The following packages were automatically installed and are no longer required:
[node02][DEBUG ]   (long list of auto-removable packages omitted)
[node02][DEBUG ] Use 'sudo apt autoremove' to remove them.
[node02][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.
[node02][DEBUG ] 5 not fully installed or removed.
[node02][DEBUG ] After this operation, 0 B of additional disk space will be used.
[node02][DEBUG ] Setting up linux-image-extra-4.4.0-85-generic (4.4.0-85.108) ...
[node02][DEBUG ] run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.4.0-85-generic /boot/vmlinuz-4.4.0-85-generic
[node02][DEBUG ] run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.4.0-85-generic /boot/vmlinuz-4.4.0-85-generic
[node02][DEBUG ] update-initramfs: Generating /boot/initrd.img-4.4.0-85-generic
[node02][DEBUG ] W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast
[node02][DEBUG ]
[node02][DEBUG ] gzip: stdout: No space left on device
[node02][DEBUG ] E: mkinitramfs failure cpio 141 gzip 1
[node02][DEBUG ] update-initramfs: failed for /boot/initrd.img-4.4.0-85-generic with 1.
[node02][DEBUG ] run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
[node02][DEBUG ] dpkg: error processing package linux-image-extra-4.4.0-85-generic (--configure):
[node02][DEBUG ]  subprocess installed post-installation script returned error exit status 1
[node02][DEBUG ] dpkg: dependency problems prevent configuration of linux-image-generic:
[node02][DEBUG ]  linux-image-generic depends on linux-image-extra-4.4.0-85-generic; however:
[node02][WARNIN] No apport report written because the error message indicates it's a follow-up error from a previous failure.
[node02][DEBUG ]   Package linux-image-extra-4.4.0-85-generic is not configured yet.
[node02][WARNIN] No apport report written because the error message indicates it's a follow-up error from a previous failure.
[node02][DEBUG ]
[node02][WARNIN] No apport report written because MaxReports is reached already
[node02][DEBUG ] dpkg: error processing package linux-image-generic (--configure):
[node02][WARNIN] No apport report written because MaxReports is reached already
[node02][DEBUG ]  dependency problems - leaving unconfigured
[node02][DEBUG ] dpkg: dependency problems prevent configuration of linux-generic:
[node02][DEBUG ]  linux-generic depends on linux-image-generic (= 4.4.0.85.91); however:
[node02][DEBUG ]   Package linux-image-generic is not configured yet.
[node02][DEBUG ]
[node02][DEBUG ] dpkg: error processing package linux-generic (--configure):
[node02][DEBUG ]  dependency problems - leaving unconfigured
[node02][DEBUG ] dpkg: dependency problems prevent configuration of linux-generic-lts-vivid:
[node02][DEBUG ]  linux-generic-lts-vivid depends on linux-generic; however:
[node02][DEBUG ]   Package linux-generic is not configured yet.
[node02][DEBUG ]
[node02][DEBUG ] dpkg: error processing package linux-generic-lts-vivid (--configure):
[node02][DEBUG ]  dependency problems - leaving unconfigured
[node02][DEBUG ] dpkg: dependency problems prevent configuration of linux-image-generic-lts-vivid:
[node02][DEBUG ]  linux-image-generic-lts-vivid depends on linux-image-generic; however:
[node02][DEBUG ]   Package linux-image-generic is not configured yet.
[node02][DEBUG ]
[node02][DEBUG ] dpkg: error processing package linux-image-generic-lts-vivid (--configure):
[node02][DEBUG ]  dependency problems - leaving unconfigured
[node02][DEBUG ] Errors were encountered while processing:
[node02][DEBUG ]  linux-image-extra-4.4.0-85-generic
[node02][DEBUG ]  linux-image-generic
[node02][DEBUG ]  linux-generic
[node02][DEBUG ]  linux-generic-lts-vivid
[node02][DEBUG ]  linux-image-generic-lts-vivid
[node02][WARNIN] E: Sub-process /usr/bin/dpkg returned an error code (1)
[node02][ERROR ] RuntimeError: command returned non-zero exit status: 100
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
Solution:
Delete vmlinuz-4.4.0-85-generic and initrd.img-4.4.0-85-generic from /boot/ to free space (the underlying error above is "gzip: stdout: No space left on device" while generating the initramfs), then re-run the installation. The kernel scripts now complete:
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.4.0-85-generic /boot/vmlinuz-4.4.0-85-generic
[node02][DEBUG ] run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.4.0-85-generic /boot/vmlinuz-4.4.0-85-generic
[node02][DEBUG ] update-initramfs: Generating /boot/initrd.img-4.4.0-85-generic
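As a gentler alternative to deleting kernel files from /boot by hand, the package manager can remove them so dpkg's bookkeeping stays consistent. A sketch (the purge commands are deliberately left commented out; review the list before removing anything):

```shell
# List installed kernel image packages OTHER than the running kernel --
# these are the candidates whose /boot files can be purged to free space.
list_old_kernel_pkgs() {
    current=$(uname -r)
    dpkg -l 'linux-image-*' 2>/dev/null | awk '/^ii/ {print $2}' | grep -v "$current" || true
}
list_old_kernel_pkgs
# then, e.g.:
#   sudo apt-get purge linux-image-extra-4.4.0-85-generic
#   sudo update-initramfs -u
```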
5. health HEALTH_WARN: too few PGs per OSD (16 < min 30)
$ sudo ceph -s
cluster 257faba1-f259-4164-a0f9-1726bd70b05a
health HEALTH_WARN
too few PGs per OSD (16 < min 30)
monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
election epoch 2, quorum 0 bdc217
osdmap e50: 8 osds: 8 up, 8 in
flags sortbitwise
pgmap v119: 64 pgs, 1 pools, 0 bytes data, 0 objects
715 MB used, 27550 GB / 29025 GB avail
64 active+clean
Solution:
Check the PG count of the rbd pool:
$ sudo ceph osd pool get rbd pg_num
pg_num: 64
pg_num is 64. With a 2-replica configuration and 8 OSDs, each OSD holds 64 × 2 / 8 = 16 PGs on average, which is below the configured minimum of 30, hence the warning above.
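The arithmetic behind the warning, as a quick check:

```shell
# PGs per OSD = pg_num * replica_count / osd_count
pg_num=64; replicas=2; osds=8
pgs_per_osd=$(( pg_num * replicas / osds ))
echo "PGs per OSD: $pgs_per_osd"   # 16, below the min of 30 -> HEALTH_WARN
```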
Fix: raise the pg_num of the default rbd pool:
$ sudo ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
$ sudo ceph -s
cluster 257faba1-f259-4164-a0f9-1726bd70b05a
health HEALTH_WARN
64 pgs stuck inactive
64 pgs stuck unclean
pool rbd pg_num 128 > pgp_num 64
monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
election epoch 2, quorum 0 bdc217
osdmap e52: 8 osds: 8 up, 8 in
flags sortbitwise
pgmap v121: 128 pgs, 1 pools, 0 bytes data, 0 objects
715 MB used, 27550 GB / 29025 GB avail
64 active+clean
64 creating
pgp_num needs to be raised together with pg_num; both default to the same value of 64, so both are set to 128 here.
{Guidance from the official documentation:
Less than 5 OSDs: set pg_num to 128
Between 5 and 10 OSDs: set pg_num to 512
Between 10 and 50 OSDs: set pg_num to 1024
If you have more than 50 OSDs, you need to understand the tradeoffs and calculate the pg_num value yourself
For calculating the pg_num value yourself, the pgcalc tool can help
}
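The quoted rule of thumb can be sketched as a tiny helper, with the thresholds exactly as stated above (for anything larger, use pgcalc):

```shell
# Suggest a pg_num from the OSD count, per the rule-of-thumb table above.
suggest_pg_num() {
    if   [ "$1" -lt 5 ];  then echo 128
    elif [ "$1" -le 10 ]; then echo 512
    elif [ "$1" -le 50 ]; then echo 1024
    else echo "calculate with pgcalc"
    fi
}
suggest_pg_num 8   # -> 512
```

Note that for the 8-OSD cluster in this section the table would suggest 512; the 128 chosen below is simply enough to clear the "min 30" warning.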
$ sudo ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
Finally, check the cluster status again; it now reports OK and the error is resolved:
$ sudo ceph -s
cluster 257faba1-f259-4164-a0f9-1726bd70b05a
health HEALTH_OK
monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
election epoch 2, quorum 0 bdc217
osdmap e54: 8 osds: 8 up, 8 in
flags sortbitwise
pgmap v125: 128 pgs, 1 pools, 0 bytes data, 0 objects
718 MB used, 27550 GB / 29025 GB avail
128 active+clean
6. Fixing a permissions problem
[root@lab8106 ceph]# mount /dev/sde1 /mnt
[root@lab8106 ceph]# ll /mnt/
total 32
-rw-r--r-- 1 root root 193 Dec 26 13:11 activate.monmap
-rw-r--r-- 1 ceph ceph  37 Dec 26 13:11 ceph_fsid
drwxr-xr-x 3 ceph ceph  37 Dec 26 13:11 current
-rw-r--r-- 1 ceph ceph  37 Dec 26 13:11 fsid
lrwxrwxrwx 1 ceph ceph   9 Dec 26 13:11 journal -> /dev/sdf1
-rw-r--r-- 1 ceph ceph  37 Dec 26 13:11 journal_uuid
-rw-r--r-- 1 ceph ceph  21 Dec 26 13:11 magic
-rw-r--r-- 1 ceph ceph   4 Dec 26 13:11 store_version
-rw-r--r-- 1 ceph ceph  53 Dec 26 13:11 superblock
-rw-r--r-- 1 ceph ceph   2 Dec 26 13:11 whoami
[root@lab8106 ceph]# ll /dev/sdf1
brw-rw---- 1 root disk 8, 81 Dec 26 13:03 /dev/sdf1
The journal on sdf1 was created with the wrong permissions, so hand the device over to the ceph user:
[root@lab8106 ceph]# chown ceph:ceph /dev/sdf1
[root@lab8106 ceph]# ceph-deploy osd activate lab8106:/dev/sde1:/dev/sdf1
The activation now succeeds.
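One caveat worth adding (an assumption on top of the fix above, not from the original text): chown on a device node does not survive a reboot, since /dev is recreated at boot. A persistent variant is a udev rule; the file name and the device match are assumptions to adapt to your system:

```shell
# Persistent ownership for the journal device via udev (assumed path and
# device name -- adjust both). E.g. in
# /etc/udev/rules.d/89-ceph-journal.rules:
# KERNEL=="sdf1", OWNER="ceph", GROUP="ceph", MODE="0660"
```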
7. Lock problem: could not get lock /var/lib/dpkg/lock. is another process using it?
Solution:
Remove the stale lock files:
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock
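One caution worth adding (an assumption on top of the steps above, not from the original text): if the lock is held by a live apt/dpkg process, deleting it can corrupt the package database. A quick probe with flock from util-linux tells the two cases apart:

```shell
# Exit 0 iff the given lock file can be taken, i.e. no other process
# currently holds it and the stale file is safe to remove.
lock_is_free() {
    flock -n "$1" true 2>/dev/null
}
if lock_is_free /var/lib/dpkg/lock; then
    echo "lock is free: the file is stale and safe to remove"
else
    echo "lock is held (or unreadable): wait for the other process instead"
fi
```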
8. SSH login failure: resolving "Host key verification failed"
1. Reference articles
http://www.zhanghaijun.com/post/866/
http://www.anheng.com.cn/news/30032.html
When I tried to ssh to the slave node, the following error appeared:
1. hadoop@xuwei-laptop:~$ ssh slave
2. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
3. @ WARNING: POSSIBLE DNS SPOOFING DETECTED! @
4. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
5. The RSA host key for slave has changed,
6. and the key for the corresponding IP address 192.168.0.42
7. is unchanged. This could either mean that
8. DNS SPOOFING is happening or the IP address for the host
9. and its host key have changed at the same time.
10. Offending key for IP in /home/hadoop/.ssh/known_hosts:9
11. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
12. @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
13. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
14. IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
15. Someone could be eavesdropping on you right now (man-in-the-middle attack)!
16. It is also possible that the RSA host key has just been changed.
17. The fingerprint for the RSA key sent by the remote host is
18. e1:e0:28:39:6d:fb:d6:45:72:71:c1:ec:3d:ef:78:19.
19. Please contact your system administrator.
20. Add correct host key in /home/hadoop/.ssh/known_hosts to get rid of this message.
21. Offending key in /home/hadoop/.ssh/known_hosts:5
22. RSA host key for slave has changed and you have requested strict checking.
23. Host key verification failed.
The error occurred because I had changed the slave node: the first time I ssh'd to slave its IP was 192.168.0.10, but by the second time slave's IP had become 192.168.0.50. Running ssh slave then produces the error above.
3. Explanation and solution
Anyone who uses OpenSSH knows that ssh records the public key of every machine you visit in ~/.ssh/known_hosts. On the next visit to the same machine, OpenSSH compares the stored key; if the key differs, OpenSSH warns you, protecting you from DNS-hijacking-style attacks. So all we need to do is delete the slave node's public key from known_hosts and then ssh slave again. We use the command
1. sudo gedit known_hosts
to open known_hosts, but we find no "slave" anywhere in the file; its contents are as follows:
1. |1|Xv9OoqvMzLO8ZB6RBgo5huXiJsM=|zwBphczddm/ogCsQfJJb8pO8CNo= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3fvljAowPKD9Lx4GkF75FQfj1edEPLTsY9TQv7S5mDS6RB8YvgP9SQxnqt0Dr+IpiDpRg6y7iv6Qm6WC4dOd4jJCPfbI4FUbGTkwLL4qeKo0+ZHZUS2ByeMd+PbqM0iIubKBsNBebA5c+RvqOCneYHOkrTKtwJsq2NnwhgFBz0odeFF7G7tBq6huK7KqikXZauEk7B4gnbtSiD2pG1XZzEUeXq8qEFLjWFPKBRYr8/AL/RZjktJRj98mCRtXCB9tef3DhFkHnXODfC/LzMX3vkQP2ahP4kbNmtXM8nkK2YFx0emAL07h66j89k9ByXzuN0mGw2QKcjFkDWNVkwk6CQ==
2. |1|peUXkOx+hnb6XQZZGGwMhOXAj04=|mqdXqRTi/MiqARL+dylDygmDgpk= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3fvljAowPKD9Lx4GkF75FQfj1edEPLTsY9TQv7S5mDS6RB8YvgP9SQxnqt0Dr+IpiDpRg6y7iv6Qm6WC4dOd4jJCPfbI4FUbGTkwLL4qeKo0+ZHZUS2ByeMd+PbqM0iIubKBsNBebA5c+RvqOCneYHOkrTKtwJsq2NnwhgFBz0odeFF7G7tBq6huK7KqikXZauEk7B4gnbtSiD2pG1XZzEUeXq8qEFLjWFPKBRYr8/AL/RZjktJRj98mCRtXCB9tef3DhFkHnXODfC/LzMX3vkQP2ahP4kbNmtXM8nkK2YFx0emAL07h66j89k9ByXzuN0mGw2QKcjFkDWNVkwk6CQ==
3. |1|2Agq45UHkRDXi73GGuTp7+ONWyQ=|wfAq9PffuqQch0E1tsFJDGlAPQk= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3fvljAowPKD9Lx4GkF75FQfj1edEPLTsY9TQv7S5mDS6RB8YvgP9SQxnqt0Dr+IpiDpRg6y7iv6Qm6WC4dOd4jJCPfbI4FUbGTkwLL4qeKo0+ZHZUS2ByeMd+PbqM0iIubKBsNBebA5c+RvqOCneYHOkrTKtwJsq2NnwhgFBz0odeFF7G7tBq6huK7KqikXZauEk7B4gnbtSiD2pG1XZzEUeXq8qEFLjWFPKBRYr8/AL/RZjktJRj98mCRtXCB9tef3DhFkHnXODfC/LzMX3vkQP2ahP4kbNmtXM8nkK2YFx0emAL07h66j89k9ByXzuN0mGw2QKcjFkDWNVkwk6CQ==
4. |1|GT/tN5xUgbRZhRt31sCAnpWPtH4=|mkxeWxXDrk9XSLo2DtIwvD/J9w4= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3fvljAowPKD9Lx4GkF75FQfj1edEPLTsY9TQv7S5mDS6RB8YvgP9SQxnqt0Dr+IpiDpRg6y7iv6Qm6WC4dOd4jJCPfbI4FUbGTkwLL4qeKo0+ZHZUS2ByeMd+PbqM0iIubKBsNBebA5c+RvqOCneYHOkrTKtwJsq2NnwhgFBz0odeFF7G7tBq6huK7KqikXZauEk7B4gnbtSiD2pG1XZzEUeXq8qEFLjWFPKBRYr8/AL/RZjktJRj98mCRtXCB9tef3DhFkHnXODfC/LzMX3vkQP2ahP4kbNmtXM8nkK2YFx0emAL07h66j89k9ByXzuN0mGw2QKcjFkDWNVkwk6CQ==
5. |1|hMbmluXaSJKOv4bZydZ75Ye3OUc=|rcfbiV7hrXoaDt02BrVb9UxJSqI= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0d8quvKVdc0b620eAY46ucB87dK/Q1EqEsPOdUltfEpr7/r9hCG+yJqjM6l3roOzkkc9Fi/iZEx0pvIgDdtD+n5YEQrQu81/mj1cWXmkN9xuXvqv9BZxOTeETRF5g1cL0yr4T91CmvXIMewUzv1fE1pWOzZvMKj8SqMOn7PpTjQhpDoS8SkTuNO81k41DkyrDe3DIRL0yC6aUGTF3YOTAe4DbpF8jMHD3+wDm4JT//ULRNKSwRQPOb57XWHy9GLHm89oOm2e8wrjrz84nCRJ5hgJdsjaPt3qJEEGeb8OAMj2fevyu+e1KDF3KExSE0jGBegEWpJilxaD8AV4+1CHsw==
6. |1|5BwBb0f3G3TO2JJ0ATWbGrFOfjA=|Mr+5xsBD6SDMpg9ITPpDT3r6ULc= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0d8quvKVdc0b620eAY46ucB87dK/Q1EqEsPOdUltfEpr7/r9hCG+yJqjM6l3roOzkkc9Fi/iZEx0pvIgDdtD+n5YEQrQu81/mj1cWXmkN9xuXvqv9BZxOTeETRF5g1cL0yr4T91CmvXIMewUzv1fE1pWOzZvMKj8SqMOn7PpTjQhpDoS8SkTuNO81k41DkyrDe3DIRL0yC6aUGTF3YOTAe4DbpF8jMHD3+wDm4JT//ULRNKSwRQPOb57XWHy9GLHm89oOm2e8wrjrz84nCRJ5hgJdsjaPt3qJEEGeb8OAMj2fevyu+e1KDF3KExSE0jGBegEWpJilxaD8AV4+1CHsw==
7. |1|UVN6ra08UPwZpm4ZmW6YjAC2Zvg=|dDwgm6Ep/OdeicdFkqXJS46gTmo= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3fvljAowPKD9Lx4GkF75FQfj1edEPLTsY9TQv7S5mDS6RB8YvgP9SQxnqt0Dr+IpiDpRg6y7iv6Qm6WC4dOd4jJCPfbI4FUbGTkwLL4qeKo0+ZHZUS2ByeMd+PbqM0iIubKBsNBebA5c+RvqOCneYHOkrTKtwJsq2NnwhgFBz0odeFF7G7tBq6huK7KqikXZauEk7B4gnbtSiD2pG1XZzEUeXq8qEFLjWFPKBRYr8/AL/RZjktJRj98mCRtXCB9tef3DhFkHnXODfC/LzMX3vkQP2ahP4kbNmtXM8nkK2YFx0emAL07h66j89k9ByXzuN0mGw2QKcjFkDWNVkwk6CQ==
8. |1|QLl8/P9ESKa1gVjgt9CVMT0a1Rw=|HopDtnlmB0JoXC5Y0kAAKMja1EA= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3fvljAowPKD9Lx4GkF75FQfj1edEPLTsY9TQv7S5mDS6RB8YvgP9SQxnqt0Dr+IpiDpRg6y7iv6Qm6WC4dOd4jJCPfbI4FUbGTkwLL4qeKo0+ZHZUS2ByeMd+PbqM0iIubKBsNBebA5c+RvqOCneYHOkrTKtwJsq2NnwhgFBz0odeFF7G7tBq6huK7KqikXZauEk7B4gnbtSiD2pG1XZzEUeXq8qEFLjWFPKBRYr8/AL/RZjktJRj98mCRtXCB9tef3DhFkHnXODfC/LzMX3vkQP2ahP4kbNmtXM8nkK2YFx0emAL07h66j89k9ByXzuN0mGw2QKcjFkDWNVkwk6CQ==
9. |1|QZa400OJbWjQNrKTQdqlNvJkyEs=|yVD3EAylkfJaW43kRSUIFcJla10= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3fvljAowPKD9Lx4GkF75FQfj1edEPLTsY9TQv7S5mDS6RB8YvgP9SQxnqt0Dr+IpiDpRg6y7iv6Qm6WC4dOd4jJCPfbI4FUbGTkwLL4qeKo0+ZHZUS2ByeMd+PbqM0iIubKBsNBebA5c+RvqOCneYHOkrTKtwJsq2NnwhgFBz0odeFF7G7tBq6huK7KqikXZauEk7B4gnbtSiD2pG1XZzEUeXq8qEFLjWFPKBRYr8/AL/RZjktJRj98mCRtXCB9tef3DhFkHnXODfC/LzMX3vkQP2ahP4kbNmtXM8nkK2YFx0emAL07h66j89k9ByXzuN0mGw2QKcjFkDWNVkwk6CQ==
10. |1|yp3zvqWQ6ChoV+KJiRhaNUHVaaA=|Zh9+UnUaW8W1XTCXh8oaOjnsCGM= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAs6IHrmuZwTilgYsOjwiOfQP1u1c3Aj6MH2EOaypGYxccDddcQu6QoKhVXQZWMZFt7W6MeZUa/QKqNVmcpho4NubyCpxfkBIybPzqbqQif8EmYrCCsbaU41hQppISrXNdlcn/S7TKM9T6sbQV1/moYScjQ4kEO+MchVmuIY5cm8kz5p8jxklSF2xFftB+kz7RmMJN3+GcGHOcgACngtxqFnqXfNF8RV1wv2lP5wLui7cv7V+pogExckzqiNfJMPnt8SCMvODHVMRnlJC5yOtpkDKH29X056KeYtK40KhMCQL9UMHfRPQgvQL0qArQ65RLevIckZ8YehOG9aCXbcWzBw==
11. |1|QLuhhDXvYoefLiZ7+fw5A9ErlV4=|bWOSl44257+rbK1Fn4zwMY8GE3c= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAs6IHrmuZwTilgYsOjwiOfQP1u1c3Aj6MH2EOaypGYxccDddcQu6QoKhVXQZWMZFt7W6MeZUa/QKqNVmcpho4NubyCpxfkBIybPzqbqQif8EmYrCCsbaU41hQppISrXNdlcn/S7TKM9T6sbQV1/moYScjQ4kEO+MchVmuIY5cm8kz5p8jxklSF2xFftB+kz7RmMJN3+GcGHOcgACngtxqFnqXfNF8RV1wv2lP5wLui7cv7V+pogExckzqiNfJMPnt8SCMvODHVMRnlJC5yOtpkDKH29X056KeYtK40KhMCQL9UMHfRPQgvQL0qArQ65RLevIckZ8YehOG9aCXbcWzBw==
OpenSSH introduced the HashKnownHosts feature in 4.0p1: the names or IP addresses of the machines you have visited are stored in known_hosts in hashed form, so an intruder cannot directly learn which machines you have been to. The feature is off by default and must be enabled manually by adding "HashKnownHosts yes" to ssh_config, but Ubuntu enables it by default.
Sometimes, however, a machine's ssh host key changes legitimately. In that case OpenSSH still warns and refuses the connection. Previously, once we had confirmed the key change was benign, we could open known_hosts in a text editor and delete the relevant entry. Now that every host name and IP address is hashed, it is hard to tell which line belongs to which machine. We could delete the whole known_hosts file, but we would lose the keys of every other machine along with it. For this reason ssh-keygen gained three options for managing hashed known_hosts files. "ssh-keygen -F hostname" finds the entry for a given host; use the following command to find the key for slave:
1. ssh-keygen -F slave
Running the command produces the following output:
1. # Host slave found: line 5 type RSA
2. |1|hMbmluXaSJKOv4bZydZ75Ye3OUc=|rcfbiV7hrXoaDt02BrVb9UxJSqI= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0d8quvKVdc0b620eAY46ucB87dK/Q1EqEsPOdUltfEpr7/r9hCG+yJqjM6l3roOzkkc9Fi/iZEx0pvIgDdtD+n5YEQrQu81/mj1cWXmkN9xuXvqv9BZxOTeETRF5g1cL0yr4T91CmvXIMewUzv1fE1pWOzZvMKj8SqMOn7PpTjQhpDoS8SkTuNO81k41DkyrDe3DIRL0yC6aUGTF3YOTAe4DbpF8jMHD3+wDm4JT//ULRNKSwRQPOb57XWHy9GLHm89oOm2e8wrjrz84nCRJ5hgJdsjaPt3qJEEGeb8OAMj2fevyu+e1KDF3KExSE0jGBegEWpJilxaD8AV4+1CHsw==
This output shows slave's public key and the line it sits on. Open known_hosts, find the corresponding key, and delete it.
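Deleting the hashed line by hand works, but ssh-keygen can also do it for you: ssh-keygen -R removes every key recorded for a host, handles hashed entries, and keeps a backup as known_hosts.old. A sketch:

```shell
# Remove all keys recorded for host $1 from the known_hosts file $2.
remove_host_key() {
    ssh-keygen -R "$1" -f "$2"
}
# e.g., for the case above:
#   remove_host_key slave /home/hadoop/.ssh/known_hosts
```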
9. Failed to add a monitor: GenericError: Failed to add monitor to host
ubuntu@dlp:~/ceph$
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.32): /usr/bin/ceph-deploy --overwrite-conf mon add node02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : add
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f245771bf38>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['node02']
[ceph_deploy.cli][INFO ] func : <function mon at 0x7f24576f9ed8>
[ceph_deploy.cli][INFO ] address : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][INFO ] ensuring configuration of new mon host: node02
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node02
[node02][DEBUG ] connection detected need for sudo
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host node02
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 192.168.0.119
[ceph_deploy.mon][DEBUG ] detecting platform for host node02 ...
[node02][DEBUG ] connection detected need for sudo
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Ubuntu 16.04 xenial
[node02][DEBUG ] determining if provided host has same hostname in remote
[node02][DEBUG ] get remote short hostname
[node02][DEBUG ] adding mon to node02
[node02][DEBUG ] get remote short hostname
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node02][DEBUG ] create the mon path if it does not exist
[node02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node02/done
[node02][DEBUG ] create a done file to avoid re-doing the mon deployment
[node02][DEBUG ] create the init path if it does not exist
[node02][INFO ] Running command: sudo ceph-mon -i node02 --pid-file /var/run/ceph/mon.node02.pid --public-addr 192.168.0.119
[node02][WARNIN] 2017-06-22 20:44:52.227186 7f96947cd700 -1 mon.node02@-1(probing) e5 not in monmap and have been in a quorum before; must have been removed
[node02][WARNIN] 2017-06-22 20:44:52.227692 7f96947cd700 -1 mon.node02@-1(probing) e5 commit suicide!
[node02][WARNIN] 2017-06-22 20:44:52.227949 7f96947cd700 -1 failed to initialize
[node02][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.mon][ERROR ] Failed to execute command: ceph-mon -i node02 --pid-file /var/run/ceph/mon.node02.pid --public-addr 192.168.0.119
[ceph_deploy][ERROR ] GenericError: Failed to add monitor to host: node02
Solution:
The cause was that /var/lib/ceph/mon on the node (node04 in our case) had wrong permissions.
On the affected node:
# sudo su
# cd /var/lib/ceph
Remove the mon directory and recreate it with the correct owner:
# rm -r mon
# mkdir mon
# chown ceph:ceph mon
After this the monitor can be deployed.
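The steps above can be sketched as one script. A scratch directory stands in for /var/lib/ceph here so the sketch is safe to run unprivileged; on the real node BASE would be /var/lib/ceph and the chown line would be uncommented and run as root:

```shell
BASE=$(mktemp -d)            # stands in for /var/lib/ceph on the real node
mkdir -p "$BASE/mon"         # simulate the stale mon directory
rm -rf "$BASE/mon"           # mon is a directory, so plain rm is not enough
mkdir "$BASE/mon"
# chown ceph:ceph "$BASE/mon"   # on the real node: give the ceph user ownership
ls -ld "$BASE/mon"
```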
10. ImportError: No module named rados / locale.Error: unsupported locale setting
cephCluster@node01:~/ceph-dashfolder/ceph-dashlog$ cat cephdash.log
Traceback (most recent call last):
  File "ceph-dash.py", line 4, in <module>
    from app import app
  File "/home/cephCluster/ceph-dashfolder/ceph-dash/app/__init__.py", line 11, in <module>
    from app.dashboard.views import DashboardResource
  File "/home/cephCluster/ceph-dashfolder/ceph-dash/app/dashboard/views.py", line 13, in <module>
    from rados import Rados
ImportError: No module named rados
cephCluster@node01:~/ceph-dashfolder/ceph-dashlog$ pip install rados
Traceback (most recent call last):
  File "/usr/bin/pip", line 11, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 215, in main
    locale.setlocale(locale.LC_ALL, '')
  File "/usr/lib/python2.7/locale.py", line 581, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting
Solution:
Run the following commands:
export LC_ALL="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
sudo dpkg-reconfigure locales
This clears the locale error. Note that the rados module itself is not installable with pip: on Ubuntu it is shipped in the python-rados package (sudo apt-get install python-rados).
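pip's crash above comes from the `locale.setlocale(locale.LC_ALL, '')` call it makes at startup; any locale actually generated on the box satisfies it. A quick sketch to verify the fix took, running the same call pip makes. C.UTF-8 is used here only because it exists on any recent Ubuntu without running locale-gen; the en_US.UTF-8 values above work the same way once generated:

```shell
# Point LC_ALL/LC_CTYPE at a locale known to exist on the machine
export LC_ALL=C.UTF-8 LC_CTYPE=C.UTF-8
# Same call pip makes at startup; raises locale.Error if still broken
python3 -c 'import locale; locale.setlocale(locale.LC_ALL, ""); print("locale OK")'
```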
11. Repairing an OSD that is down
a) Activate the down OSD:
run:
$ ceph-deploy osd activate node01:/disk8Tosd5/osd5
$ ssh node01
$ sudo systemctl enable ceph.target
Check OSD status:
$ ceph osd tree
Check cluster health:
$ ceph -s   or   $ ceph health detail
The relevant output looks like this:
cephCluster@admin-node:~/my-cluster$ ceph-deploy osd activate node01:/disk8Tosd5/osd5
[ceph_deploy.conf][DEBUG] found configuration file at: /home/cephCluster/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.37): /usr/bin/ceph-deploy osd activate node01:/disk8Tosd5/osd5
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username :None
[ceph_deploy.cli][INFO ] verbose :False
[ceph_deploy.cli][INFO ] overwrite_conf :False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet :False
[ceph_deploy.cli][INFO ] cd_conf :<ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9d2f4e6d40>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :<function osd at 0x7f9d2fd66c08>
[ceph_deploy.cli][INFO ] ceph_conf :None
[ceph_deploy.cli][INFO ] default_release :False
[ceph_deploy.cli][INFO ] disk : [('node01','/disk8Tosd5/osd5', None)]
[ceph_deploy.osd][DEBUG] Activating cluster ceph disks node01:/disk8Tosd5/osd5:
[node01][DEBUG] connection detected need for sudo
[node01][DEBUG] connected to host: node01
[node01][DEBUG] detect platform information from remote host
[node01][DEBUG] detect machine type
[node01][DEBUG] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.osd][DEBUG] activating host node01 disk /disk8Tosd5/osd5
[ceph_deploy.osd][DEBUG] will use init type: systemd
[node01][DEBUG] find the location of an executable
[node01][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /disk8Tosd5/osd5
[node01][WARNIN]main_activate: path = /disk8Tosd5/osd5
[node01][WARNIN]activate: Cluster uuid is c27cd1ea-f727-48f5-89f0-42f17629e7a6
[node01][WARNIN]command: Running command: /usr/bin/ceph-osd --cluster=ceph--show-config-value=fsid
[node01][WARNIN]activate: Cluster name is ceph
[node01][WARNIN]activate: OSD uuid is 8a89f1ff-42f6-4448-ba5b-98bac790847d
[node01][WARNIN]activate: OSD id is 3
[node01][WARNIN]activate: Marking with init system systemd
[node01][WARNIN]command: Running command: /bin/chown -R ceph:ceph /disk8Tosd5/osd5/systemd
[node01][WARNIN]activate: ceph osd.3 data dir is ready at /disk8Tosd5/osd5
[node01][WARNIN]start_daemon: Starting ceph osd.3...
[node01][WARNIN]command_check_call: Running command: /bin/systemctl disable ceph-osd@3
[node01][WARNIN]command_check_call: Running command: /bin/systemctl enable --runtime ceph-osd@3
[node01][WARNIN] Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service to /lib/systemd/system/ceph-osd@.service.
[node01][WARNIN]command_check_call: Running command: /bin/systemctl start ceph-osd@3
[node01][INFO ] checking OSD status...
[node01][DEBUG] find the location of an executable
[node01][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[node01][WARNIN]there is 1 OSD down
[node01][WARNIN]there is 1 OSD out
[node01][INFO ] Running command: sudo systemctl enable ceph.target
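The "there is 1 OSD down" warnings come from ceph-deploy parsing `ceph osd stat --format=json`, and the same check can be scripted for monitoring. The sketch below feeds a canned sample (field names as in the jewel-era output; the epoch and counts are made up) instead of a live cluster:

```shell
# On a live cluster: STAT=$(sudo ceph --cluster=ceph osd stat --format=json)
STAT='{"epoch": 120, "num_osds": 4, "num_up_osds": 3, "num_in_osds": 3}'
echo "$STAT" | python3 -c '
import json, sys
s = json.load(sys.stdin)
# down = known OSDs not "up"; out = known OSDs not "in" the CRUSH map
print("%d OSD(s) down, %d OSD(s) out"
      % (s["num_osds"] - s["num_up_osds"], s["num_osds"] - s["num_in_osds"]))'
```

With the sample above this prints `1 OSD(s) down, 1 OSD(s) out`, matching the warnings in the log.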
12. mongodb
1) Installing mongodb:
sudo apt-get install mongodb-server
Configuration paths after installation:
/etc/mongodb.conf
dbpath=/var/lib/mongodb
logpath=/var/log/mongodb/mongodb.log
2) Starting and stopping mongodb
Stop:
Method 1: sudo service mongodb stop
Method 2: $ mongo
>use admin
>db.shutdownServer()
Start:
Method 1: sudo service mongodb start
Method 2: sudo mongod -f /etc/mongodb.conf
Method 3: sudo numactl --interleave=all mongod --dbpath=/var/lib/mongodb --fork --logpath=/var/log/mongodb/mongodb.log --auth --httpinterface --rest
Method 4: sudo numactl --interleave=all mongod -f /etc/mongodb.conf
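/etc/mongodb.conf shown above is plain key=value lines, so values such as dbpath can be pulled out in a one-liner, e.g. to feed the --dbpath form of the start command. A scratch copy of the two lines above is used here instead of the real /etc/mongodb.conf:

```shell
CONF=$(mktemp)   # stands in for /etc/mongodb.conf
printf 'dbpath=/var/lib/mongodb\nlogpath=/var/log/mongodb/mongodb.log\n' > "$CONF"
# Split on "=" and print the value for the dbpath key
awk -F= '$1 == "dbpath" {print $2}' "$CONF"
```

This prints `/var/lib/mongodb`.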
Reference links
1. How to view all running processes in Linux
http://os.51cto.com/art/201101/244090.htm
Configuration:
http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/
https://github.com/ceph/ceph-cookbook/issues/187
2. Disk repair:
http://linux.51yip.com/search/fsck.ext4
3. Firewall configuration
http://dataunion.org/27230.html
4. NTP time synchronization / NTP servers
http://www.voidcn.com/blog/skdkjxy/article/p-2649162.html